Using Devtron Intelligence
Devtron Intelligence is an AI assistant that helps you troubleshoot issues faster by analyzing your Kubernetes workloads. It offers smart and easy-to-understand suggestions using a large language model (LLM) of your choice.
Check out the Results section to see where Devtron gives you AI-powered explanations for troubleshooting.
Users must have permission to:
Deploy Helm Apps (with environment access)
Edit the ConfigMaps of 'default-cluster'
Restart the pods
Devtron Intelligence supports all major large language models (LLMs), e.g., OpenAI, Gemini, AWS Bedrock, Anthropic, and many more.
You can generate an API key for an LLM of your choice. Here, we will generate an API key from OpenAI.
Go to strings.is and encode your API key in base64. This base64 encoded key will be used while creating a secret in the next step.
Go to Devtron's Resource Browser → (Select Cluster) → Create Resource
Paste the following YAML and replace the key with your base64-encoded OpenAI key. Also, enter the namespace where the AI Agent chart will be installed:
apiVersion: v1
kind: Secret
metadata:
  name: ai-secret
  namespace: <your-env-namespace> # Namespace where the AI Agent chart will be installed
type: Opaque
data:
  ## OpenAiKey: <base64-encoded-openai-key>        # For OpenAI
  ## GoogleKey: <base64-encoded-google-key>        # For Gemini
  ## azureOpenAiKey: <base64-encoded-azure-key>    # For Azure OpenAI
  ## awsAccessKeyId: <base64-encoded-aws-access-key>  # For AWS Bedrock
  ## awsSecretAccessKey: <base64-encoded-aws-secret>  # For AWS Bedrock
  ## AnthropicKey: <base64-encoded-anthropic-key>  # For Anthropic
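As an alternative to pasting YAML into the Resource Browser, the same Secret can be generated programmatically; a sketch that emits the manifest as JSON, which `kubectl apply` also accepts (the namespace and key value are placeholders):

```python
import base64
import json

namespace = "<your-env-namespace>"  # namespace where the AI Agent chart will be installed
api_key = "sk-test"                 # placeholder -- use your real provider key

secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "ai-secret", "namespace": namespace},
    "type": "Opaque",
    # Include only the key matching your provider, e.g. OpenAiKey for OpenAI.
    "data": {"OpenAiKey": base64.b64encode(api_key.encode()).decode()},
}

print(json.dumps(secret, indent=2))
```

Save the output to a file (e.g. `ai-secret.json`) and apply it with `kubectl apply -f ai-secret.json`.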
Deploy the chart in the cluster whose workloads you wish to troubleshoot. You may install the chart in multiple clusters (one agent per cluster).
Go to Devtron's Chart Store.
Search for the ai-agent chart and click on it.
Click the Configure & Deploy button.
In the left-hand pane:
App Name: Give your app a name, e.g. ai-agent-app
Project: Select your project
Deploy to environment: Choose the target environment (it should be associated with the same namespace used while creating the secret in Step 3)
Chart Version: Select the latest chart version.
Chart Values: Choose the default one for the latest version.
In the values.yaml file editor, add the appropriate additionalEnvVars block based on your LLM provider. Use the tabs below to find the configuration snippet for some well-known LLM providers.
additionalEnvVars:
  - name: MODEL
    value: gpt-4o-mini ## Examples: gpt-4o, gpt-4, gpt-3.5-turbo
  - name: OPENAI_API_KEY
    valueFrom:
      secretKeyRef:
        key: OpenAiKey ## Key of the secret created in Step 3
        name: ai-secret ## Name of the secret created in Step 3
  - name: CLUSTER_NAME
    value: document-nonprod ## Name of the target cluster (optional)
Click the Deploy Chart button.
In the App Details page of the deployed chart, expand Networking and click on Service.
Locate the service entry with the URL in the format <service-name>.<namespace>:<port>. Note the values of serviceName, namespace, and port for the next step.
In a new tab, go to Resource Browser → (Select Cluster) → Config & Storage → ConfigMap
Edit the ConfigMaps:
devtron-cm
Ensure the entry below is present in the ConfigMap (add it if it doesn't exist). Here you can define the target cluster and the endpoint where your Devtron AI service is deployed:
CLUSTER_CHAT_CONFIG: '{"<targetClusterID>": {"serviceName": "", "namespace": "", "port": ""}}'
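The value of CLUSTER_CHAT_CONFIG is a JSON string keyed by the target cluster's ID, pointing at the ai-agent service details noted in the previous step. A sketch of how a filled-in value can be built and checked (the cluster ID and service details are placeholders):

```python
import json

# Placeholders -- use your target cluster's ID and the serviceName,
# namespace, and port noted from the deployed ai-agent chart.
config = {
    "2": {  # target cluster ID, as a string
        "serviceName": "ai-agent-app-service",
        "namespace": "ai-agent-ns",
        "port": "80",
    }
}

# This single-line JSON string is what goes into devtron-cm:
print(json.dumps(config, separators=(",", ":")))
```

Running a hand-written value through `json.loads` before pasting it into the ConfigMap is a quick way to catch quoting mistakes.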
dashboard-cm
To enable the AI integration via a feature flag, ensure the entry below is present in the ConfigMap (add it if it doesn't exist).
FEATURE_AI_INTEGRATION_ENABLE: "true"
Go to Resource Browser → (Select Cluster) → Workloads → Deployment
Click the checkbox next to the following Deployment workloads and restart them using the ⟳ button:
devtron
dashboard
Perform a hard refresh of the browser to clear the cache:
Mac: Hold down Cmd and Shift, then press R
Windows/Linux: Hold down Ctrl, then press F5
Devtron supports the Explain option on the following screens (only for specific scenarios where troubleshooting is possible through AI):
Path: Resource Browser → (Select Cluster) → Workloads → Pod
Path: Resource Browser → (Select Cluster) → Workloads → Pod → Pod Last Restart Snapshot
Path: Resource Browser → (Select Cluster) → Events
Path: Application → App Details → Application Status Drawer
Path: Application → App Details → K8s Resources (tab) → Workloads