Commit 33675ede authored by Thomas Loughlin

Using AI in Support

parent b0e30025
---
title: Using AI in Support
description: Practical guide for Support Engineers on using AI tools day-to-day, including tool selection by scenario, prompt engineering and model selection
---

## Overview

This page provides practical, scenario-driven guidance for using AI tools for daily Support Engineering work. It brings together tool capabilities, data classification rules, prompt engineering techniques and model selection advice into a single reference.

Before using this page, ensure you have read:

1. [AI and Support Work](_index.md) for the decision framework on when to use AI
1. [AI tool selection](ai-tool-selection.md) for detailed tool capabilities and approval status
1. [AI usage recommendations](ai-usage-recommendations.md) for general prompting strategies

## Tools by scenario

Different tools are suited to different tasks. The table below maps common Support Engineering scenarios to the recommended tool, based on data classification, access capabilities and strengths.

| Scenario | Recommended tool | Why |
|----------|-----------------|-----|
| Summarising a long Zendesk ticket for handover | [Glean](/handbook/support/ai/ai-tool-selection/#glean) | Reads ticket context directly from the Zendesk sidebar without copy-pasting |
| Finding similar past tickets, issues or MRs | [Glean](/handbook/support/ai/ai-tool-selection/#glean) or [GitLab Duo Agentic Chat](/handbook/support/ai/ai-tool-selection/#gitlab-duo-agentic-chat) | Glean searches across Zendesk, Handbook and Slack; Agentic Chat searches GitLab issues and MRs directly |
| Diagnosing a CI/CD failure using customer logs | [GitLab Duo Chat](/handbook/support/ai/ai-tool-selection/#gitlab-duo-chat) | Approved for Red data; has GitLab product knowledge and can process logs safely |
| Multi-step investigation across issues, MRs and code | [GitLab Duo Agentic Chat](/handbook/support/ai/ai-tool-selection/#gitlab-duo-agentic-chat) | Can autonomously traverse the GitLab ecosystem and synthesise findings from multiple sources |
| Drafting or refining an internal document or workflow | [Anthropic Claude](/handbook/support/ai/ai-tool-selection/#anthropic-claude-web) | Strong at drafting, summarisation and reasoning with long context windows |
| Polishing wording in a Google Doc or Gmail | [Gemini (Workspace)](/handbook/support/ai/ai-tool-selection/#gemini-chat-workspace) | Integrated directly into Google Workspace where the content already lives |
| Catching up on a Slack channel after PTO | [Slack AI](/handbook/support/ai/ai-tool-selection/#slack-ai) or [Glean](/handbook/support/ai/ai-tool-selection/#glean) | Slack AI summarises in-place; Glean can provide a broader cross-platform summary |
| Generating a draft KB article from a resolved ticket | [GitLab Duo Agentic Chat](/handbook/support/ai/ai-tool-selection/#gitlab-duo-agentic-chat) | Can access the ticket context (Red data approved) and generate structured output following a template |
| Brainstorming troubleshooting approaches | [Anthropic Claude](/handbook/support/ai/ai-tool-selection/#anthropic-claude-web) or [GitLab Duo Chat](/handbook/support/ai/ai-tool-selection/#gitlab-duo-chat) | Claude for general reasoning; Duo Chat when the problem is GitLab-specific |
| Prototyping prompt ideas for Duo custom rules | [Anthropic Claude](/handbook/support/ai/ai-tool-selection/#anthropic-claude-web) | Good for iterative drafting and refinement before deploying prompts elsewhere |

## Data classification: quick rules

Choosing the right tool starts with understanding the data you are working with. The full classification standard is at [Data Classification Standard](../../security/policies_and_standards/data-classification-standard/).

| Data type | Classification | Approved tools |
|-----------|---------------|----------------|
| Customer PII, credentials, logs with sensitive data, ticket attachments | [RED](/handbook/security/policies_and_standards/data-classification-standard/#red) | GitLab Duo (Chat, Agentic Chat, Agent Platform) only |
| Internal documentation, handbook content, anonymised examples, workflow drafts | [ORANGE](/handbook/security/policies_and_standards/data-classification-standard/#orange) | All approved tools (Glean, Claude, Gemini, Slack AI, NotebookLM, Zoom AI, etc.) |

> When in doubt, treat data as Red. Sanitise customer-identifying information before using any Orange-classified tool. See the [working on tickets guidance on LLM use](../workflows/working-on-tickets.md#can-i-use-output-from-an-llm-in-ticket-replies) for verification requirements.
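
Sanitisation can be partially scripted before a manual review. The sketch below is purely illustrative: the patterns, placeholder labels and function name are assumptions for this example, not an official GitLab sanitisation tool, and regex-based redaction will miss context-specific PII, so always review the output by hand.

```python
import re

def sanitise(text: str) -> str:
    """Redact common customer-identifying patterns before pasting a log
    excerpt into an Orange-approved tool. Illustrative only: review the
    result manually, since regexes miss context-specific PII."""
    patterns = [
        (r"[\w.+-]+@[\w-]+\.[\w.-]+", "<EMAIL>"),           # email addresses
        (r"\b(?:\d{1,3}\.){3}\d{1,3}\b", "<IP>"),           # IPv4 addresses
        (r"(?i)(token|password|secret)[=:]\s*\S+", r"\1=<REDACTED>"),  # credentials
    ]
    for pattern, replacement in patterns:
        text = re.sub(pattern, replacement, text)
    return text

print(sanitise("user jane@example.com from 10.1.2.3 failed: token=glpat-abc123"))
```

Running the function on the sample line above replaces the email address, IP address and token value with placeholders while leaving the rest of the message intact.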

## Prompt engineering for Support Engineers

Effective prompting reduces wasted iterations and produces more actionable output. The principles below apply across all AI tools.

### Core principles

1. **State the goal clearly** - specify the artefact you need (a diagnosis, a KB draft, a reply skeleton or a checklist) rather than asking open-ended questions.
1. **Provide context** - include the GitLab version, environment type (SaaS or self-managed), install method, error messages and what has already been tried.
1. **Constrain the output** - ask for a specific format such as three to five bullets, a step-by-step checklist or two options with pros and cons.
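
The three principles above can be combined into a single prompt. The ticket details below are invented for illustration:

```plaintext
Goal: draft a step-by-step troubleshooting checklist I can adapt for a ticket reply.
Context: self-managed GitLab (Omnibus install), CI jobs stuck in "pending";
the runner is registered and `gitlab-runner verify` succeeds. Restarting
the runner did not help.
Output: a numbered checklist of 3-5 diagnostic steps, each with the command
to run and what a healthy result looks like.
```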