
v0.7

Introduction

Devtron is a tool integration platform for Kubernetes.

Devtron deeply integrates with products across the lifecycle of microservices, i.e., CI/CD, security, cost, debugging, and observability, via an intuitive web interface. Devtron helps you to deploy, observe, manage, and debug existing Helm apps in all your clusters.

Devtron's Key Features:

No Code Software Delivery Workflow for Kubernetes

  • Workflows that understand the domain of Kubernetes, testing, CD, and SecOps so that you don't have to write scripts

  • Reusable and composable components so that workflows are easy to construct and reason through

Multi-cloud Deployment

  • Deploy to multiple Kubernetes clusters on multiple cloud/on-prem from one Devtron setup

  • Works for all cloud providers and on-premise Kubernetes clusters

Easy DevSecOps Integration

  • Multi-level security policy at global, cluster, environment, and application-level for efficient hierarchical policy management

  • Behavior-driven security policy

  • Define policies and exceptions for Kubernetes resources

  • Define policies for events for faster resolution

Application Debugging Dashboard

  • One place for all historical Kubernetes events

  • Access all manifests securely, with features such as secret obfuscation

  • Application metrics for CPU, RAM, HTTP status code, and latency with a comparison between new and old

  • Advanced logging with grep and JSON search

  • Intelligent correlation between events and logs for faster triangulation of issues

  • Auto issue identification

Enterprise-Grade Security and Compliances

  • Fine-grained access control; control who can edit the configuration and who can deploy.

  • Audit log to know who did what and when

  • History of all CI and CD events

  • Kubernetes events impacting application

  • Relevant cloud events and their impact on applications

  • Advanced workflow policies such as blackout windows and branch-environment relationships to secure build and deployment pipelines

Implements GitOps

  • GitOps exposed through API and UI so that you don't have to interact with git CLI

  • GitOps backed by Postgres for easy analysis

  • Enforce finer access control than Git

Operational Insights

  • Deployment metrics to measure the success of the agile process. It captures MTTR, change failure rate, deployment frequency, and deployment size out of the box.

  • Audit log to understand the failure causes

  • Monitor changes across deployments and reverts easily

Compatibility Notes

  • Application metrics only work for K8s version 1.16+

Contributing Guidelines

Community

Get updates on Devtron's development and chat with the project maintainers, contributors, and community members.

Vulnerability Reporting

We, at Devtron, take security and our users' trust very seriously. If you believe you have found a security issue in Devtron, please responsibly disclose it by contacting us at security@devtron.ai.

Getting Started

This section includes information about the minimum requirements you need to install and use Devtron.

Devtron is installed over a Kubernetes cluster. Once you create a Kubernetes cluster, Devtron can be installed standalone or along with CI/CD integration:

In this section, we will cover the basic details on how you can quickly get started with Devtron. First, let's look at the prerequisites you must meet before installing Devtron.

Prerequisites

Create a Kubernetes Cluster

You can create a cluster using one of the following cloud providers as per your requirements:

  • AWS EKS

  • Google Kubernetes Engine (GKE)

  • Azure Kubernetes Service (AKS)

  • k3s - Lightweight Kubernetes

Install Helm

Recommended Resources

The minimum resource requirements for installing Helm Dashboard by Devtron and Devtron with CI/CD, based on the number of applications you want to manage on Devtron, are provided below:

  • For configuring small resources (to manage not more than 5 apps on Devtron):

| Integration | CPU | Memory |
|---|---|---|
| Devtron with CI/CD | 2 | 6 GB |
| Helm Dashboard by Devtron | 1 | 1 GB |

  • For configuring medium/larger resources (to manage more than 5 apps on Devtron):

| Integration | CPU | Memory |
|---|---|---|
| Devtron with CI/CD | 6 | 13 GB |
| Helm Dashboard by Devtron | 2 | 3 GB |

Note:

  • Please make sure that the recommended resources are available on your Kubernetes cluster before you proceed with Devtron installation.

  • It is NOT recommended to use burstable CPU VMs (T series in AWS, B series in Azure, and E2/N1 in GCP) for Devtron installation, to ensure consistent performance.

Installation of Devtron

You can install Devtron standalone (Helm Dashboard by Devtron) or along with CI/CD integration. Or, you can upgrade Devtron to the latest version.

Choose one of the options as per your requirements:

| Installation Options | Description |
|---|---|
| Devtron with CI/CD | Devtron installation with the CI/CD integration is used to perform CI/CD, security scanning, GitOps, debugging, and observability. |

Upgrade Devtron to latest version

You can upgrade Devtron in one of the following ways:

Install Devtron

Devtron is installed over a Kubernetes cluster. Once you create a Kubernetes cluster, Devtron can be installed standalone or along with CI/CD integration.

Try Devtron Enterprise for Free

Choose one of the options as per your requirements:

Loading...

Install Devtron with CI/CD and GitOps (Argo CD)

In this section, we describe the steps in detail on how you can install Devtron with CI/CD by enabling GitOps during the installation.

Try Devtron Enterprise for Free

Before you begin

Install Devtron with CI/CD along with GitOps (Argo CD)

Run the following command to install the latest version of Devtron with CI/CD along with GitOps (Argo CD) module:

Install Multi-Architecture Nodes (ARM and AMD)

To install Devtron on clusters with the multi-architecture nodes (ARM and AMD), append the Devtron installation command with --set installer.arch=multi-arch.
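For reference, a minimal sketch of what the full command looks like with the flag appended (same chart, namespace, and module flags used elsewhere in this section):

helm repo add devtron https://helm.devtron.ai

helm repo update devtron

helm install devtron devtron/devtron-operator \
--create-namespace --namespace devtroncd \
--set installer.modules={cicd} \
--set argo-cd.enabled=true \
--set installer.arch=multi-arch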

Note:

Configure Blob Storage during Installation

Configuring Blob Storage in your Devtron environment allows you to store build logs and cache. If you do not configure Blob Storage:

  • You will not be able to access the build and deployment logs after an hour.

  • Build time for a commit hash will be longer because the cache will not be available.

  • Artifact reports cannot be generated in pre/post build and deployment stages.

Choose one of the options to configure blob storage:

Run the following command to install Devtron along with MinIO for storing logs and cache.

Note: Unlike global cloud providers such as AWS S3 Bucket, Azure Blob Storage, and Google Cloud Storage, MinIO can also be hosted locally.

Run the following command to install Devtron along with AWS S3 buckets for storing build logs and cache:

  • Install using S3 IAM policy.

Note: Please ensure that the S3 permission policy is added to the IAM role attached to the nodes of the cluster if you are using the below command.

  • Install using access-key and secret-key for AWS S3 authentication:

  • Install using S3 compatible storages:

Run the following command to install Devtron along with Azure Blob Storage for storing build logs and cache:

Run the following command to install Devtron along with Google Cloud Storage for storing build logs and cache:

Check Status of Devtron Installation

Note: The installation takes about 15 to 20 minutes to spin up all of the Devtron microservices one by one.

Run the following command to check the status of the installation:

The command executes with one of the following output messages, indicating the status of the installation:

Check the installer logs

Run the following command to check the installer logs:

Devtron dashboard

Run the following command to get the Devtron dashboard URL:

You will get an output similar to the example shown below:

Use the hostname aaff16e9760594a92afa0140dbfd99f7-305259315.us-east-1.elb.amazonaws.com (Loadbalancer URL) to access the Devtron dashboard.

Note: If you do not get a hostname or receive a message that says "service doesn't exist," it means Devtron is still installing. Please wait until the installation is completed.

Note: You can also create a CNAME entry corresponding to your domain/subdomain that points to the Loadbalancer URL to access the dashboard at a custom domain.

Devtron Admin credentials

When you install Devtron for the first time, it creates a default admin user and password (with unrestricted access to Devtron). You can use those credentials to log in as an administrator.

After the initial login, we recommend you set up any SSO service like Google, GitHub, etc., and then add other users (including yourself). Subsequently, all the users can use the same SSO (let's say, GitHub) to log in to Devtron's dashboard.

The section below will help you understand the process of getting the administrator credentials.

For Devtron version v0.6.0 and higher

Username: admin Password: Run the following command to get the admin password:

For Devtron version less than v0.6.0

Username: admin Password: Run the following command to get the admin password:

Install Devtron without Integrations

Try Devtron Enterprise — Free for 14 Days

Before you begin

Add Helm Repo

helm repo add devtron https://helm.devtron.ai

Update Helm Repo

helm repo update devtron

Install Helm Dashboard by Devtron

Run the following command to install Helm Dashboard by Devtron:

helm install devtron devtron/devtron-operator \
--create-namespace --namespace devtroncd

Install Multi-Architecture Nodes (ARM and AMD)

To install Devtron on clusters with the multi-architecture nodes (ARM and AMD), append the Devtron installation command with --set installer.arch=multi-arch.

Devtron Dashboard

Run the following command to get the dashboard URL:

kubectl get svc -n devtroncd devtron-service -o jsonpath='{.status.loadBalancer.ingress}'

You will get a result similar to the one shown below:

[test2@server ~]$ kubectl get svc -n devtroncd devtron-service -o jsonpath='{.status.loadBalancer.ingress}'
[map[hostname:aaff16e9760594a92afa0140dbfd99f7-305259315.us-east-1.elb.amazonaws.com]]

The hostname aaff16e9760594a92afa0140dbfd99f7-305259315.us-east-1.elb.amazonaws.com as mentioned above is the Loadbalancer URL where you can access the Devtron dashboard.

You can also create a CNAME entry corresponding to your domain/subdomain that points to this Loadbalancer URL to access it at a custom domain.

| Host | Type | Points to |
|---|---|---|
| devtron.yourdomain.com | CNAME | aaff16e9760594a92afa0140dbfd99f7-305259315.us-east-1.elb.amazonaws.com |

Devtron Admin credentials

When you install Devtron for the first time, it creates a default admin user and password (with unrestricted access to Devtron). You can use those credentials to log in as an administrator.

After the initial login, we recommend you set up any SSO service like Google, GitHub, etc., and then add other users (including yourself). Subsequently, all the users can use the same SSO (let's say, GitHub) to log in to Devtron's dashboard.

The section below will help you understand the process of getting the administrator credentials.

For Devtron version v0.6.0 and higher

Username: admin Password: Run the following command to get the admin password:

kubectl -n devtroncd get secret devtron-secret \
-o jsonpath='{.data.ADMIN_PASSWORD}' | base64 -d
For Devtron version less than v0.6.0

Username: admin Password: Run the following command to get the admin password:

kubectl -n devtroncd get secret devtron-secret \
-o jsonpath='{.data.ACD_PASSWORD}' | base64 -d

Upgrade

Install Devtron on Minikube, Microk8s, K3s, Kind, Cloud VMs

You can install and try Devtron on a high-end machine or a Cloud VM. If you install it on a laptop/PC, it may start to respond slowly.

Try Devtron Enterprise for Free

Prerequisites

  1. 2 vCPUs

  2. 4GB+ of free memory

  3. 20GB+ free disk space

Before you get started, finish the following actions:


Tutorial


For Minikube, MicroK8s, Kind, K3s

To install Devtron on Minikube/MicroK8s/Kind cluster, run the following command:

To install Devtron on K3s cluster, run the following command:

Access Devtron Dashboard

To access the dashboard on Minikube cluster, run the following command:

This will directly open the dashboard URL in your browser.

To access the dashboard on MicroK8s/Kind/K3s cluster, run the following command to port-forward the devtron service to port 8000:

Get Admin Credentials

When you install Devtron for the first time, it creates a default admin user and password (with unrestricted access to Devtron). You can use those credentials to log in as an administrator.

After the initial login, we recommend you set up any SSO service like Google, GitHub, etc., and then add other users (including yourself). Subsequently, all the users can use the same SSO (let's say, GitHub) to log in to Devtron's dashboard.

The section below will help you understand the process of getting the administrator credentials.

For Devtron version v0.6.0 and higher

Username: admin Password: Run the following command to get the admin password:

For Devtron version less than v0.6.0

Username: admin Password: Run the following command to get the admin password:


For Cloud VM (AWS EC2, Azure VM, GCP VM)

It is recommended to use a Cloud VM with 2+ vCPUs, 4 GB+ free memory, 20 GB+ storage, a compute-optimized VM type, and an Ubuntu-flavored OS.

Create MicroK8s Cluster

Install Devtron

Get devtron-service Port Number

Make sure that the port used by the devtron-service remains open in the VM's security group or network security group.
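A hypothetical example for an AWS EC2 VM using the AWS CLI is shown below; the security group ID and source CIDR are placeholders, and other clouds have equivalent commands:

# Look up the NodePort used by devtron-service.
NODE_PORT=$(kubectl get svc -n devtroncd devtron-service -o jsonpath='{.spec.ports[0].nodePort}')

# Placeholder security group ID; restrict the CIDR to the range that needs dashboard access.
aws ec2 authorize-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--protocol tcp \
--port "$NODE_PORT" \
--cidr 203.0.113.0/24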



Devtron uses a modified version of Argo Rollout.

Check out our contributing guidelines. Directions for opening issues, coding standards, and notes on our development processes are all included.

Join the Discord Community

Follow @DevtronL on Twitter

Raise feature requests, suggest enhancements, and report bugs at GitHub issues

Read the Devtron blog

Devtron with CI/CD: Devtron installation with the CI/CD integration is used to perform CI/CD, security scanning, GitOps, debugging, and observability.

Helm Dashboard by Devtron: The Helm Dashboard by Devtron, which is a standalone installation, includes functionalities to deploy, observe, manage, and debug existing Helm applications in multiple clusters. You can also install integrations from the Devtron Stack Manager.

Create a Kubernetes cluster

You can create any Kubernetes cluster (preferably K8s version 1.16 or higher) for installing Devtron.

Create a cluster using AWS EKS. Note: You can also refer to our customized documentation for installing Devtron with CI/CD on AWS EKS here.

Create a cluster using Google Kubernetes Engine (GKE).

Create a cluster using Azure Kubernetes Service (AKS).

Create a cluster using k3s - Lightweight Kubernetes. Note: You can also refer to our customized documentation for installing Helm Dashboard by Devtron on Minikube, Microk8s, K3s, Kind here.

Make sure to install Helm.

Refer to the Recommended Resources section for more information.

The Helm Dashboard by Devtron, which is a standalone installation, includes functionalities to deploy, observe, manage, and debug existing Helm applications in multiple clusters. You can also install integrations from the Devtron Stack Manager.

With this option, you can install Devtron with CI/CD by enabling GitOps during the installation. You can also install other integrations from the Devtron Stack Manager.

Note: If you have questions, please let us know on our Discord channel.

Explore all capabilities of Devtron with its Enterprise version trial (read more).

| Installation Options | Description | When to choose |
|---|---|---|
| Devtron with CI/CD | Devtron installation with the CI/CD integration is used to perform CI/CD, security scanning, GitOps, debugging, and observability. | Use this option to install Devtron with Build and Deploy CI/CD integration. |
| Helm Dashboard by Devtron | The Helm Dashboard by Devtron, which is a standalone installation, includes functionalities to deploy, observe, manage, and debug existing Helm applications in multiple clusters. You can also install integrations from the Devtron Stack Manager. | Use this option if you are managing the applications via Helm and you want to use Devtron to deploy, observe, manage, and debug the Helm applications. |
| Devtron with CI/CD along with GitOps (Argo CD) | With this option, you can install Devtron with CI/CD by enabling GitOps during the installation. You can also install other integrations from the Devtron Stack Manager. | Use this option to install Devtron with CI/CD by enabling GitOps, which is the most scalable method in terms of version control, collaboration, compliance, and infrastructure automation. |

Note: If you have questions, please let us know on our Discord channel.

Explore all capabilities of Devtron with its Enterprise version trial (read more).

Install Helm if you have not installed it.

Note: If you want to configure Blob Storage during the installation, refer to the Configure Blob Storage during Installation section.

If you want to install Devtron for production deployments, please refer to our recommended overrides for Devtron Installation.

Refer to the AWS specific parameters on the Storage for Logs and Cache page.

Refer to the Azure specific parameters on the Storage for Logs and Cache page.

Refer to the Google Cloud specific parameters on the Storage for Logs and Cache page.


If you want to uninstall Devtron or clean up the Devtron Helm installer, refer to our uninstall Devtron documentation.

Related to installation, please also refer to the FAQ section.

Note: If you have questions, please let us know on our Discord channel.

In this section, we describe how you can install Helm Dashboard by Devtron without any integrations. Integrations can be added later using the Devtron Stack Manager.

If you want to install Devtron on Minikube, Microk8s, K3s, or Kind, refer to this section.

Get full access to all Enterprise features with a 14-day free trial — no interruptions, no limitations. Want to know how it works? Learn more about getting a trial license.

Install Helm if you have not installed it.

Note: This installation command will not install the CI/CD integration. For CI/CD, refer to the install Devtron with CI/CD section.

Note: If you want to uninstall Devtron or clean up the Devtron Helm installer, refer to our uninstall Devtron documentation.

To use the CI/CD capabilities with Devtron, you can install Devtron with CI/CD or Devtron with CI/CD along with GitOps (Argo CD).

Explore all capabilities of Devtron with its Enterprise version trial (read more).

Create a cluster using Minikube, MicroK8s, Kind, or K3s.

Install Helm3.

Install kubectl.

After port-forwarding, you can access the dashboard at this URL: http://127.0.0.1:8000

If you want to uninstall Devtron or clean up the Devtron Helm installer, refer to the uninstall Devtron documentation.

If you have questions, please let us know on our Discord channel.

helm repo add devtron https://helm.devtron.ai

helm repo update devtron

helm install devtron devtron/devtron-operator \
--create-namespace --namespace devtroncd \
--set installer.modules={cicd} \
--set argo-cd.enabled=true
helm repo add devtron https://helm.devtron.ai 

helm repo update devtron

helm install devtron devtron/devtron-operator \
--create-namespace --namespace devtroncd \
--set installer.modules={cicd} \
--set minio.enabled=true \
--set argo-cd.enabled=true
helm repo add devtron https://helm.devtron.ai

helm repo update devtron

helm install devtron devtron/devtron-operator \
--create-namespace --namespace devtroncd \
--set installer.modules={cicd} \
--set configs.BLOB_STORAGE_PROVIDER=S3 \
--set configs.DEFAULT_CACHE_BUCKET=demo-s3-bucket \
--set configs.DEFAULT_CACHE_BUCKET_REGION=us-east-1 \
--set configs.DEFAULT_BUILD_LOGS_BUCKET=demo-s3-bucket \
--set configs.DEFAULT_CD_LOGS_BUCKET_REGION=us-east-1 \
--set argo-cd.enabled=true
helm repo add devtron https://helm.devtron.ai

helm repo update devtron

helm install devtron devtron/devtron-operator \
--create-namespace --namespace devtroncd \
--set installer.modules={cicd} \
--set configs.BLOB_STORAGE_PROVIDER=S3 \
--set configs.DEFAULT_CACHE_BUCKET=demo-s3-bucket \
--set configs.DEFAULT_CACHE_BUCKET_REGION=us-east-1 \
--set configs.DEFAULT_BUILD_LOGS_BUCKET=demo-s3-bucket \
--set configs.DEFAULT_CD_LOGS_BUCKET_REGION=us-east-1 \
--set secrets.BLOB_STORAGE_S3_ACCESS_KEY=<access-key> \
--set secrets.BLOB_STORAGE_S3_SECRET_KEY=<secret-key> \
--set argo-cd.enabled=true
helm repo add devtron https://helm.devtron.ai

helm repo update devtron

helm install devtron devtron/devtron-operator \
--create-namespace --namespace devtroncd \
--set installer.modules={cicd} \
--set configs.BLOB_STORAGE_PROVIDER=S3 \
--set configs.DEFAULT_CACHE_BUCKET=demo-s3-bucket \
--set configs.DEFAULT_CACHE_BUCKET_REGION=us-east-1 \
--set configs.DEFAULT_BUILD_LOGS_BUCKET=demo-s3-bucket \
--set configs.DEFAULT_CD_LOGS_BUCKET_REGION=us-east-1 \
--set secrets.BLOB_STORAGE_S3_ACCESS_KEY=<access-key> \
--set secrets.BLOB_STORAGE_S3_SECRET_KEY=<secret-key> \
--set configs.BLOB_STORAGE_S3_ENDPOINT=<endpoint> \
--set argo-cd.enabled=true
helm repo add devtron https://helm.devtron.ai

helm repo update devtron

helm install devtron devtron/devtron-operator \
--create-namespace --namespace devtroncd \
--set installer.modules={cicd} \
--set secrets.AZURE_ACCOUNT_KEY=xxxxxxxxxx \
--set configs.BLOB_STORAGE_PROVIDER=AZURE \
--set configs.AZURE_ACCOUNT_NAME=test-account \
--set configs.AZURE_BLOB_CONTAINER_CI_LOG=ci-log-container \
--set configs.AZURE_BLOB_CONTAINER_CI_CACHE=ci-cache-container \
--set argo-cd.enabled=true
helm repo add devtron https://helm.devtron.ai

helm repo update devtron

helm install devtron devtron/devtron-operator \
--create-namespace --namespace devtroncd \
--set installer.modules={cicd} \
--set configs.BLOB_STORAGE_PROVIDER=GCP \
--set secrets.BLOB_STORAGE_GCP_CREDENTIALS_JSON=eyJ0eXBlIjogInNlcnZpY2VfYWNjb3VudCIsInByb2plY3RfaWQiOiAiPHlvdXItcHJvamVjdC1pZD4iLCJwcml2YXRlX2tleV9pZCI6ICI8eW91ci1wcml2YXRlLWtleS1pZD4iLCJwcml2YXRlX2tleSI6ICI8eW91ci1wcml2YXRlLWtleT4iLCJjbGllbnRfZW1haWwiOiAiPHlvdXItY2xpZW50LWVtYWlsPiIsImNsaWVudF9pZCI6ICI8eW91ci1jbGllbnQtaWQ+IiwiYXV0aF91cmkiOiAiaHR0cHM6Ly9hY2NvdW50cy5nb29nbGUuY29tL28vb2F1dGgyL2F1dGgiLCJ0b2tlbl91cmkiOiAiaHR0cHM6Ly9vYXV0aDIuZ29vZ2xlYXBpcy5jb20vdG9rZW4iLCJhdXRoX3Byb3ZpZGVyX3g1MDlfY2VydF91cmwiOiAiaHR0cHM6Ly93d3cuZ29vZ2xlYXBpcy5jb20vb2F1dGgyL3YxL2NlcnRzIiwiY2xpZW50X3g1MDlfY2VydF91cmwiOiAiPHlvdXItY2xpZW50LWNlcnQtdXJsPiJ9Cg== \
--set configs.DEFAULT_CACHE_BUCKET=cache-bucket \
--set configs.DEFAULT_BUILD_LOGS_BUCKET=log-bucket \
--set argo-cd.enabled=true
kubectl -n devtroncd get installers installer-devtron \
-o jsonpath='{.status.sync.status}'

| Status | Description |
|---|---|
| Downloaded | The installer has downloaded all the manifests, and the installation is in progress. |
| Applied | The installer has successfully applied all the manifests, and the installation is completed. |

kubectl logs -f -l app=inception -n devtroncd
kubectl get svc -n devtroncd devtron-service \
-o jsonpath='{.status.loadBalancer.ingress}'
[map[hostname:aaff16e9760594a92afa0140dbfd99f7-305259315.us-east-1.elb.amazonaws.com]]

| Host | Type | Points to |
|---|---|---|
| devtron.yourdomain.com | CNAME | aaff16e9760594a92afa0140dbfd99f7-305259315.us-east-1.elb.amazonaws.com |

kubectl -n devtroncd get secret devtron-secret \
-o jsonpath='{.data.ADMIN_PASSWORD}' | base64 -d
kubectl -n devtroncd get secret devtron-secret \
-o jsonpath='{.data.ACD_PASSWORD}' | base64 -d
helm repo add devtron https://helm.devtron.ai

helm repo update devtron

helm install devtron devtron/devtron-operator \
--create-namespace --namespace devtroncd \
--set components.devtron.service.type=NodePort --set installer.arch=multi-arch
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

helm repo add devtron https://helm.devtron.ai

helm repo update devtron

helm install devtron devtron/devtron-operator \
--create-namespace --namespace devtroncd \
--set components.devtron.service.type=NodePort
minikube service devtron-service --namespace devtroncd
kubectl -n devtroncd port-forward service/devtron-service 8000:80
kubectl -n devtroncd get secret devtron-secret \
-o jsonpath='{.data.ADMIN_PASSWORD}' | base64 -d
kubectl -n devtroncd get secret devtron-secret \
-o jsonpath='{.data.ACD_PASSWORD}' | base64 -d
sudo snap install microk8s --classic 
sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER ~/.kube
newgrp microk8s
microk8s enable dns storage helm3
echo "alias kubectl='microk8s kubectl '" >> .bashrc
echo "alias helm='microk8s helm3 '" >> .bashrc
source .bashrc
helm repo add devtron https://helm.devtron.ai

helm repo update devtron

helm install devtron devtron/devtron-operator \
--create-namespace --namespace devtroncd \
--set components.devtron.service.type=NodePort 
kubectl get svc -n devtroncd devtron-service -o jsonpath='{.spec.ports[0].nodePort}'

Demo on Popular Cloud Providers

Here we have demonstrated the installation of Devtron on popular cloud providers. The videos are easy to follow and provide step-by-step instructions.

Installing on EKS Cluster


Installing on AKS Cluster


Installing on GKE Cluster

Install Devtron Enterprise Trial

Introduction

With the Enterprise version of Devtron, you can access the premium features beyond the open-source version. For your advanced and challenging use cases, you get comprehensive enterprise features including but not limited to:

  1. Release orchestration

  2. Resource monitoring

  3. Advanced filtering

  4. Fine-grained access control

  5. Security scans

  6. Policies related to approval, deployment, plugins, tags, infra...and many more.

Already using Devtron's Open Source version?

This guide is intended for fresh installation of Devtron Enterprise. If you're currently using the open-source (OSS) version of Devtron, we do not recommend converting your existing setup to the Enterprise edition.


Install Devtron Enterprise

Note

Please ensure that the cluster kubeconfig is properly configured and available on your system.

1. Add Devtron Helm Repository

2. Choose an Installation Option

  • To install Devtron with all core enterprise features except ArgoCD:

  • To include ArgoCD integration, add --set devtron.argo-cd.enabled=true

To install only the Devtron Dashboard (without CI/CD, ArgoCD, Security, Notification, or Monitoring):

3. Obtain the Dashboard URL

Run the following command to get the Dashboard URL:

You can access your Devtron Dashboard using the LoadBalancer URL displayed in the output.

Accessing the Dashboard locally (MicroK8s/Kind/K3s)

To obtain the Dashboard URL when MicroK8s/Kind/K3s is running locally, run the following command to port-forward the devtron service to port 8000:

After port-forwarding, the Dashboard URL will be: http://127.0.0.1:8000

Accessing the Dashboard via NodePort

To obtain the Dashboard URL on MicroK8s/Kind/K3s using NodePort, run the following command to retrieve the port number assigned to the service:

The Dashboard URL will be: http://<HOST_IP>:<NODEPORT>/dashboard

Accessing the Dashboard locally from a remote VM (Port Forwarding via Kubeconfig)

To obtain the Dashboard URL if Devtron is installed on a remote VM (e.g., AWS EC2, Azure VM, GCP Compute Engine) using MicroK8s, Kind, or K3s, run the following commands:

The Dashboard URL will be http://127.0.0.1:8000 on your local machine.

To access the dashboard on Minikube cluster, run the following command:

This will directly open the dashboard URL in your browser.

Accessing the Dashboard via NodePort

To obtain the dashboard URL on Cloud VMs using NodePort, run the following command to retrieve the port number assigned to the service:

The Dashboard URL will be: http://<HOST_IP>:<NODEPORT>/dashboard

Accessing the Dashboard locally from a remote VM (Port Forwarding via Kubeconfig)

To obtain the Dashboard URL if Devtron is installed on a remote VM (e.g., AWS EC2, Azure VM, GCP Compute Engine) using MicroK8s, Kind, or K3s, run the following commands:

The Dashboard URL will be http://127.0.0.1:8000 on your local machine.

License Activation

Generate License Key

  1. You will see an installation fingerprint that uniquely identifies your installation. Copy the fingerprint and click the Get License link.

What if my installation is airgapped and has no Internet access?

  2. Log in to the License Dashboard using SSO with a valid work email. Personal email addresses are not allowed.

  3. Based on your work email address, the system will try to auto-populate the details in the form. If it doesn't, you can enter or modify the details manually.

  4. Paste the fingerprint you copied earlier and click Get License Key.

  5. Your license will be generated. Copy the license key.

Note

The license key you generate will be valid only for your enterprise installation. It is uniquely mapped to your installation fingerprint.

Facing Issues?


Log in to Devtron

  1. After successful license activation, you will see the Devtron login page.

  2. Initially, log in with the administrator credentials. By default, the username is admin. Run the following command to get the admin password:

Note

When you install Devtron for the first time, it creates a default admin user and password (with unrestricted access to Devtron). You can use it to log in as an administrator.


Additional Actions

Check License Details

In Devtron, click the Help menu (top-right corner) → About Devtron to know the following:

  • License details (Key and Expiry)

  • Installation fingerprint

  • Enterprise version

Update License

If you have a new license key, you can update the license key directly within Devtron, from the About Devtron page.

Renew License

If your trial license has expired and you wish to renew it, email us at enterprise@devtron.ai or reach out to your Devtron representative.


Troubleshoot Issues

Backup for Disaster Recovery

Regular backups for Devtron PostgreSQL and ArgoCD are crucial components of a disaster recovery plan, as they protect against potential data loss due to unforeseen circumstances. This documentation provides instructions on how to take backups of Devtron and store them either on AWS S3 or Azure containers.

  1. Go to the Devtron chart store and search for the devtron-backups chart.

  2. Select devtron-backups and click Configure & Deploy.

  3. Now follow either of the options described below according to your cloud provider.

AWS S3 Backup

To store Devtron backups on AWS S3, please follow these steps:

  1. Create an S3 bucket to store the Devtron backup. You can configure the bucket to delete all objects older than 15/30 days (see the sketch after these steps).

  2. Create a user with sufficient permissions to push to the S3 bucket created in step 1.

  3. Obtain the access key and secret access key for the created user.

  4. Configure the devtron-backups chart for AWS S3 by selecting the appropriate options.

  5. Deploy the chart, and the Devtron backup will be automatically uploaded to the AWS S3 bucket at the scheduled intervals.
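As a hypothetical illustration of step 1 using the AWS CLI (bucket name, region, and retention period are placeholders):

# Create the backup bucket.
aws s3api create-bucket --bucket my-devtron-backups --region us-east-1

# Expire backup objects older than 30 days.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-old-backups",
      "Status": "Enabled",
      "Filter": {},
      "Expiration": { "Days": 30 }
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
--bucket my-devtron-backups \
--lifecycle-configuration file://lifecycle.json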

Azure Containers Backup

To store Devtron backups on Azure Containers, please follow these steps:

  1. Create a storage account in Azure (see the sketch after these steps).

  2. Within the storage account, create two containers for the Devtron backup.

  3. Navigate to the Security + Networking > Access keys section in Azure and copy the Access Key.

  4. Configure the devtron-backups chart for Azure Containers by providing the Access Key.

  5. Before deploying the backup chart, ensure that AWS.enabled is set to false. This will ensure that the Devtron backup is automatically uploaded to the configured Azure containers at the scheduled intervals.
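As a hypothetical illustration of steps 1-3 using the Azure CLI (resource group, storage account, and container names are placeholders):

# Create a storage account for the backups.
az storage account create --name devtronbackups --resource-group my-rg --location eastus --sku Standard_LRS

# Create two containers for the Devtron backup.
az storage container create --name devtron-pg-backup --account-name devtronbackups
az storage container create --name devtron-argocd-backup --account-name devtronbackups

# Copy the first access key of the storage account.
az storage account keys list --account-name devtronbackups --resource-group my-rg --query "[0].value" -o tsv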

By following these steps, you can ensure that your Devtron data is securely backed up and protected against any potential data loss, enabling you to recover quickly in case of emergencies.

Installation Configurations

Configure Secrets

For Helm installation, this section refers to the secrets section of values.yaml.

Configure the following properties:

Configure ConfigMaps

For Helm installation, this section refers to the configs section of values.yaml.

Configure the following properties:

Configure Resources

Devtron provides ways to control how much memory or CPU can be allocated to each Devtron microservice. You can adjust the resources allocated to these microservices based on your requirements. The resource configurations are available in the following sizes:

Small: To configure the small resources (e.g., to manage fewer than 10 apps on Devtron), append the Devtron installation command with -f https://raw.githubusercontent.com/devtron-labs/devtron/main/charts/devtron/resources-small.yaml.
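For reference, a sketch of a fresh installation with the small resource profile, using the same chart invocation shown elsewhere on this page:

helm install devtron devtron/devtron-operator \
--create-namespace --namespace devtroncd \
--set installer.modules={cicd} \
-f https://raw.githubusercontent.com/devtron-labs/devtron/main/charts/devtron/resources-small.yaml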

Configure Overrides

For Helm installation, this section refers to the customOverrides section of values.yaml. In this section, you can override values of devtron-cm that you want to keep persistent. For example:
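A minimal sketch of such an override, assuming customOverrides accepts devtron-cm key/value pairs; CD_NODE_LABEL_SELECTOR is taken from the ConfigMaps reference below purely as an illustration:

# Write the persistent devtron-cm override to a values file.
cat > overrides.yaml <<'EOF'
customOverrides:
  CD_NODE_LABEL_SELECTOR: kubernetes.io/os=linux
EOF

helm upgrade devtron devtron/devtron-operator \
--namespace devtroncd --reuse-values \
-f overrides.yaml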

You can configure the following properties:

Storage for Logs and Cache

AWS SPECIFIC

While installing Devtron and using an AWS S3 bucket for storing the logs and caches, the below parameters are to be used in the ConfigMap.

NOTE: To use the S3 bucket, it is important to add the S3 permission policy to the IAM role attached to the nodes of the cluster. A sketch of such a policy follows below.
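The exact policy is not reproduced in this guide; the following is a hypothetical minimal policy attached with the AWS CLI, where the role name is a placeholder and the bucket matches the demo-s3-bucket example used on this page:

cat > devtron-s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::demo-s3-bucket"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::demo-s3-bucket/*"]
    }
  ]
}
EOF

# <node-instance-role> is a placeholder for the IAM role attached to your cluster nodes.
aws iam put-role-policy \
--role-name <node-instance-role> \
--policy-name devtron-blob-storage \
--policy-document file://devtron-s3-policy.json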

The below parameters are to be used in the Secrets:

AZURE SPECIFIC

While installing Devtron using Azure Blob Storage for storing logs and caches, the below parameters will be used in the ConfigMap.

GOOGLE CLOUD STORAGE SPECIFIC

While installing Devtron using Google Cloud Storage for storing logs and caches, the below parameters will be used in the ConfigMap.

To convert string to base64 use the following command:

Note:

  1. Ensure that the cluster has read and write access to the S3 buckets/Azure Blob storage container mentioned in DEFAULT_CACHE_BUCKET, DEFAULT_BUILD_LOGS_BUCKET or AZURE_BLOB_CONTAINER_CI_LOG, or AZURE_BLOB_CONTAINER_CI_CACHE.

  2. Ensure that the cluster has read access to AWS secrets backends (SSM & secrets manager).


We can use the --set flag to override the default values when installing with Helm. For example, to update POSTGRESQL_PASSWORD and BLOB_STORAGE_PROVIDER, use the install command as:

Configuration of Blob Storage

Blob Storage allows users to store large amounts of unstructured data. Unstructured data is data that does not adhere to a particular data model or definition, such as text or binary data. Configuring blob storage in your Devtron environment allows you to store build logs and cache.

If you do not configure Blob Storage:

  • You will not be able to access the build and deployment logs after an hour.

  • Build time for a commit hash will be longer because the cache will not be available.

  • Artifact reports cannot be generated in pre/post build and deployment stages.

You can configure Blob Storage with one of the following Blob Storage providers given below:

Note: You can also use the respective following command to switch to another Blob Storage provider. For example, if you are using MinIO Storage and want to switch to Azure Blob Storage, use the command provided on the Azure Blob Storage tab.

Use the following command to configure MinIO for storing logs and cache.

Note: Unlike global cloud providers such as AWS S3 Bucket, Azure Blob Storage, and Google Cloud Storage, MinIO can also be hosted locally.

  • Configure using S3 IAM policy:

NOTE: Please ensure that the S3 permission policy is added to the IAM role attached to the nodes of the cluster if you are using the below command.

  • Configure using access-key and secret-key for AWS S3 authentication:

  • Configure using S3 compatible storages:

Use the following command to configure S3-compatible storage (e.g., Longhorn) for storing build logs and cache.


Configuring NodeSelectors and Tolerations

Adding Custom Configurations

When installing Devtron, you can specify nodeSelectors and tolerations to fine-tune your deployment. These configurations can be added using either additional --set flags or a separate values.yaml file.

Global vs. Component-level Configurations

  • Global Configurations: When specified at the global level, these settings apply to all Devtron microservices, except for ArgoCD.

  • Component-Level Configurations: You can also apply these settings to specific components individually.

  • Priority: If a configuration is specified at both the global and component levels, the component-level setting takes precedence for that particular component.

Using --set Flags

You can use the --set flag to specify individual values directly in the Helm command.

  1. nodeSelector

To set a nodeSelector:

This example sets the nodeSelector to schedule pods on a node with the hostname "node1".

  2. Tolerations

To set tolerations:

This example adds a toleration so that pods can be scheduled on nodes with the taint "example-key".

Using values.yaml

In the values.yaml file of the devtron chart, set the values of the following fields:


Set StorageClass for Devtron Microservices

You can specify a StorageClass to be used by Devtron microservices' Persistent Volume Claims (PVCs) if a default StorageClass is not already configured in your cluster.

Checking for a Default StorageClass

To check if your cluster has a default StorageClass, run:

This command lists all available StorageClasses in your cluster. The default StorageClass (if any) can be identified by the (default) label next to its name.

Setting a Default StorageClass

If no StorageClass is set as default, you can set one using the following command:

Or, if you do not want to change the default StorageClass or prefer to use a different StorageClass for Devtron microservices, specify it during installation using the --set flag:


Secrets

ConfigMaps

Dashboard Configurations

Override Configurations

To modify a particular object, Devtron looks in the devtroncd namespace for the corresponding ConfigMap, as mentioned in the mapping below:

The apiVersion, kind, and metadata.name in the multiline string are used to match the object that needs to be modified. In this particular case, it will look for apiVersion: extensions/v1beta1, kind: Ingress, and metadata.name: devtron-ingress, and will apply the changes mentioned inside update:. As per the example, inside metadata: it will add the annotation owner: app1, and inside spec.rules.http.host it will add http://change-me.
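An illustrative sketch of such an override ConfigMap, based on the description above (the actual reference file is devtron-ingress-override.yaml; names and values here are examples only):

# Sketch only: data.override holds a multiline string whose apiVersion/kind/metadata.name
# select the object, and whose update: block carries the fields to change.
cat > devtron-ingress-override.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: devtron-ingress-override-cm
  namespace: devtroncd
data:
  override: |-
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: devtron-ingress
    update:
      metadata:
        annotations:
          owner: app1
      spec:
        rules:
          - http:
              host: http://change-me
EOF

kubectl apply -f devtron-ingress-override.yaml -n devtroncd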

Once you have made these changes on your local system, apply them to the Kubernetes cluster on which Devtron is installed using the below command:

Run the following command to make these changes take effect:

The changes will be propagated to Devtron within 20-30 minutes.

Recommended Resources for Production use

The overall resources required for the recommended production overrides are:

The production overrides can be applied either before or after Devtron installation, in the respective namespace.

Pre-Devtron Installation

If you want to install a new Devtron instance for production-ready deployments, this is the best option for you.

Create the namespace and apply the overrides files as stated above:

After files are applied, you are ready to install your Devtron instance with production-ready resources.

Post-Devtron Installation

If you have an existing Devtron instance and want to migrate it for production-ready deployments, this is the right option for you.

In the existing namespace, apply the production overrides as shown above.

FAQs

4. What's the purpose of 'Login as administrator' option on the login page?

When you install Devtron for the first time, it creates a default admin user and password (with unrestricted access to Devtron). You can use those credentials to log in as an administrator. After the initial login, we recommend you set up any SSO service like Google, GitHub, etc., and then add other users (including yourself). Subsequently, all the users can use the same SSO (let's say, GitHub) to log in to Devtron's dashboard.

Configurations

You can configure Devtron by using configuration files. Configuration files are user-friendly YAML files. Using them allows you to quickly roll back a configuration change if necessary. It also aids cluster re-creation and restoration.

There are two ways you can perform configurations while setting up Devtron dashboard:


Enjoy an uninterrupted 14-day free trial and explore all the features of Devtron Enterprise to their full potential.

Instead, we suggest you perform a fresh installation of Devtron Enterprise for the best experience.

Upon successfully obtaining the dashboard URL and accessing the dashboard, you will see a License Activation screen. If you already have a license key, paste it and click Activate. If not, you can generate a fresh license key.

In case your installation is not connected to the Internet, clicking the Get License link will display a QR code that you can scan with an Internet-enabled device to obtain a license.

Go back to your License Activation page (from step 1). Paste your license key and click Activate.

Visit the Troubleshoot section to identify the issue or connect with Devtron Support.

After the initial login, we recommend you set up a Single Sign-On (SSO) service like Google, GitHub, etc., and then add other users (including yourself). Subsequently, all the users can use the same SSO (let's say, GitHub) to log in to the Dashboard.


The following tables contain parameters and their details for Secrets and ConfigMaps that are configured during the installation of Devtron. If the installation is done using Helm, the values can be tweaked in the values.yaml file.

Use the following command to configure an AWS S3 bucket for storing build logs and cache. Refer to the AWS specific parameters on the Storage for Logs and Cache page.

Use the following command to configure Azure Blob Storage for storing build logs and cache. Refer to the Azure specific parameters on the Storage for Logs and Cache page.

Use the following command to configure Google Cloud Storage for storing build logs and cache. Refer to the Google Cloud specific parameters on the Storage for Logs and Cache page.

Alternatively, you can specify the StorageClass by modifying the corresponding line in values.yaml.


In certain cases, you may want to override the default configurations provided by Devtron. For example, for Deployments or StatefulSets you may want to change the memory or CPU requests or limits, or add node affinity or taint tolerations. For Ingress, you may want to add annotations or a host. Samples are available inside the manifests/updates directory.


Let's take an example to understand how to override specific values. Say you want to override annotations and host in the ingress, i.e., you want to change devtronIngress: copy the file devtron-ingress-override.yaml. This file contains a ConfigMap to modify devtronIngress as mentioned above. Please note the structure of this ConfigMap: data should have the key override with a multiline string as a value.

In case you want to change multiple objects, e.g., in argocd you want to change the config of argocd-dex-server as well as argocd-redis, then follow the example in devtron-argocd-override.yaml.

To use Devtron for production deployments, use our recommended production overrides located in manifests/updates/production. This configuration should be enough for handling up to 200 microservices.

1. How will I know when the installation is finished?

Run the following command to check the status of the installation:

The above command will print Applied once the installation process is complete. The installation process could take up to 30 minutes.

2. How do I track the progress of the installation?

Run the following command to check the logs of the Pod:

3. How can I restart the installation if the Devtron installer logs contain an error?

First, run the below command to clean up components installed by the Devtron installer:

Next, install Devtron again.

Still facing issues? Please reach out to us on Discord.

You can also set up ingress while setting up the Devtron dashboard. Refer to the ingress setup documentation for details.
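As a hypothetical sketch, assuming the configs.* pattern used elsewhere on this page applies to the ENABLE_INGRESS parameter listed in the ConfigMaps reference:

# Sketch only: enable ingress via the ENABLE_INGRESS ConfigMap parameter.
helm upgrade devtron devtron/devtron-operator \
--namespace devtroncd --reuse-values \
--set configs.ENABLE_INGRESS=true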

helm repo add devtron https://helm.devtron.ai
helm repo update devtron
helm install devtron devtron/devtron-enterprise --create-namespace --namespace devtroncd 
helm install devtron devtron/devtron-enterprise --create-namespace --namespace devtroncd --set devtron.argo-cd.enabled=true
helm install devtron devtron/devtron-enterprise --create-namespace --namespace devtroncd \
--set devtron.installer.modules={} --set devtron.security.enabled=false  \
--set devtron.notifier.enabled=false  --set devtron.security.trivy.enabled=false --set devtron.monitoring.grafana.enabled=false
kubectl get svc -n devtroncd devtron-service -o jsonpath='{.status.loadBalancer.ingress}'
kubectl -n devtroncd port-forward service/devtron-service 8000:80
kubectl get svc -n devtroncd devtron-service -o jsonpath='{.spec.ports[0].nodePort}'
scp user@cloud-vm-ip:/path/to/kubeconfig ~/.kube/config 
# Export the kubeconfig file from the remote VM to your local system.

kubectl config use-context <context-name>
# Set the correct context.

kubectl -n devtroncd port-forward service/devtron-service 8000:80
# This command will forward traffic from the service running on the 
# remote VM's MicroK8s, Kind, or K3s cluster to your local system’s port.
minikube service devtron-service --namespace devtroncd
kubectl get svc -n devtroncd devtron-service -o jsonpath='{.spec.ports[0].nodePort}'
scp user@cloud-vm-ip:/path/to/kubeconfig ~/.kube/config 
# Export the kubeconfig file from the remote VM to your local system.

kubectl config use-context <context-name>
# Set the correct context.

kubectl -n devtroncd port-forward service/devtron-service 8000:80
# This command will forward traffic from the service running on the 
# remote VM's MicroK8s, Kind, or K3s cluster to your local system’s port.
kubectl -n devtroncd get secret devtron-secret \
-o jsonpath='{.data.ADMIN_PASSWORD}' | base64 -d

| Parameter | Description | Default |
|---|---|---|
| POSTGRESQL_PASSWORD | Using this parameter, the auto-generated password for Postgres can be edited as per requirement (used by Devtron to store the app information). | NA |
| WEBHOOK_TOKEN | If you want to continue using Jenkins for CI, provide this token for authentication of requests. It should be base64 encoded. | NA |
| CI_NODE_LABEL_SELECTOR | Labels for a particular nodegroup which you want to use for running CIs | NA |
| CI_NODE_TAINTS_KEY | Key for toleration if the nodegroup chosen for CIs has some taints | NA |
| CI_NODE_TAINTS_VALUE | Value for toleration if the nodegroup chosen for CIs has some taints | NA |

| Parameter | Description |
|---|---|
| DEFAULT_CACHE_BUCKET | AWS bucket to store docker cache; it should be created beforehand (required) |
| DEFAULT_BUILD_LOGS_BUCKET | AWS bucket to store build logs; it should be created beforehand (required) |
| DEFAULT_CACHE_BUCKET_REGION | AWS region of S3 bucket to store cache (required) |
| DEFAULT_CD_LOGS_BUCKET_REGION | AWS region of S3 bucket to store CD logs (required) |
| BLOB_STORAGE_S3_ENDPOINT | S3 compatible bucket endpoint |
| BLOB_STORAGE_S3_ACCESS_KEY | AWS access key to access S3 bucket. Required if installing using AWS credentials. |
| BLOB_STORAGE_S3_SECRET_KEY | AWS secret key to access S3 bucket. Required if installing using AWS credentials. |
| AZURE_ACCOUNT_NAME | Account name for Azure Blob Storage |
| AZURE_BLOB_CONTAINER_CI_LOG | Azure Blob Storage container for storing ci-logs after running the CI pipeline |
| AZURE_BLOB_CONTAINER_CI_CACHE | Azure Blob Storage container for storing ci-cache after running the CI pipeline |
| BLOB_STORAGE_GCP_CREDENTIALS_JSON | Base64-encoded GCP credentials JSON for accessing Google Cloud Storage |
| DEFAULT_CACHE_BUCKET | Google Cloud Storage bucket for storing ci-cache after running the CI pipeline |
| DEFAULT_LOGS_BUCKET | Google Cloud Storage bucket for storing ci-logs after running the CI pipeline |

echo -n "string" | base64
helm install devtron devtron/devtron-operator --create-namespace --namespace devtroncd \
--set secrets.POSTGRESQL_PASSWORD=change-me \
--set configs.BLOB_STORAGE_PROVIDER=S3
helm repo update

helm upgrade devtron devtron/devtron-operator --namespace devtroncd \
--reuse-values \
--set installer.modules={cicd} \
--set minio.enabled=true
helm repo update

helm upgrade devtron devtron/devtron-operator --namespace devtroncd \
--reuse-values \
--set installer.modules={cicd} \
--set configs.BLOB_STORAGE_PROVIDER=S3 \
--set configs.DEFAULT_CACHE_BUCKET=demo-s3-bucket \
--set configs.DEFAULT_CACHE_BUCKET_REGION=us-east-1 \
--set configs.DEFAULT_BUILD_LOGS_BUCKET=demo-s3-bucket \
--set configs.DEFAULT_CD_LOGS_BUCKET_REGION=us-east-1
helm repo update

helm upgrade devtron devtron/devtron-operator --namespace devtroncd \
--reuse-values \
--set installer.modules={cicd} \
--set configs.BLOB_STORAGE_PROVIDER=S3 \
--set configs.DEFAULT_CACHE_BUCKET=demo-s3-bucket \
--set configs.DEFAULT_CACHE_BUCKET_REGION=us-east-1 \
--set configs.DEFAULT_BUILD_LOGS_BUCKET=demo-s3-bucket \
--set configs.DEFAULT_CD_LOGS_BUCKET_REGION=us-east-1 \
--set secrets.BLOB_STORAGE_S3_ACCESS_KEY=<access-key> \
--set secrets.BLOB_STORAGE_S3_SECRET_KEY=<secret-key>
helm repo update

helm upgrade devtron devtron/devtron-operator --namespace devtroncd \
--reuse-values \
--set installer.modules={cicd} \
--set configs.BLOB_STORAGE_PROVIDER=S3 \
--set configs.DEFAULT_CACHE_BUCKET=demo-s3-bucket \
--set configs.DEFAULT_CACHE_BUCKET_REGION=us-east-1 \
--set configs.DEFAULT_BUILD_LOGS_BUCKET=demo-s3-bucket \
--set configs.DEFAULT_CD_LOGS_BUCKET_REGION=us-east-1 \
--set secrets.BLOB_STORAGE_S3_ACCESS_KEY=<access-key> \
--set secrets.BLOB_STORAGE_S3_SECRET_KEY=<secret-key> \
--set configs.BLOB_STORAGE_S3_ENDPOINT=<endpoint>
helm repo update
helm upgrade devtron devtron/devtron-operator --namespace devtroncd \
--reuse-values \
--set installer.modules={cicd} \
--set secrets.AZURE_ACCOUNT_KEY=xxxxxxxxxx \
--set configs.BLOB_STORAGE_PROVIDER=AZURE \
--set configs.AZURE_ACCOUNT_NAME=test-account \
--set configs.AZURE_BLOB_CONTAINER_CI_LOG=ci-log-container \
--set configs.AZURE_BLOB_CONTAINER_CI_CACHE=ci-cache-container
helm repo update

helm upgrade devtron devtron/devtron-operator --namespace devtroncd \
--reuse-values \
--set installer.modules={cicd} \
--set configs.BLOB_STORAGE_PROVIDER=GCP \
--set secrets.BLOB_STORAGE_GCP_CREDENTIALS_JSON=eyJ0eXBlIjogInNlcnZpY2VfYWNjb3VudCIsInByb2plY3RfaWQiOiAiPHlvdXItcHJvamVjdC1pZD4iLCJwcml2YXRlX2tleV9pZCI6ICI8eW91ci1wcml2YXRlLWtleS1pZD4iLCJwcml2YXRlX2tleSI6ICI8eW91ci1wcml2YXRlLWtleT4iLCJjbGllbnRfZW1haWwiOiAiPHlvdXItY2xpZW50LWVtYWlsPiIsImNsaWVudF9pZCI6ICI8eW91ci1jbGllbnQtaWQ+IiwiYXV0aF91cmkiOiAiaHR0cHM6Ly9hY2NvdW50cy5nb29nbGUuY29tL28vb2F1dGgyL2F1dGgiLCJ0b2tlbl91cmkiOiAiaHR0cHM6Ly9vYXV0aDIuZ29vZ2xlYXBpcy5jb20vdG9rZW4iLCJhdXRoX3Byb3ZpZGVyX3g1MDlfY2VydF91cmwiOiAiaHR0cHM6Ly93d3cuZ29vZ2xlYXBpcy5jb20vb2F1dGgyL3YxL2NlcnRzIiwiY2xpZW50X3g1MDlfY2VydF91cmwiOiAiPHlvdXItY2xpZW50LWNlcnQtdXJsPiJ9Cg== \
--set configs.DEFAULT_CACHE_BUCKET=cache-bucket \
--set configs.DEFAULT_BUILD_LOGS_BUCKET=log-bucket
helm repo update

helm upgrade devtron devtron/devtron-operator --namespace devtroncd \
--reuse-values \
--set configs.BLOB_STORAGE_PROVIDER=S3 \
--set configs.DEFAULT_CACHE_BUCKET=demo-s3-bucket \
--set configs.DEFAULT_CACHE_BUCKET_REGION=us-east-1 \
--set configs.DEFAULT_BUILD_LOGS_BUCKET=demo-s3-bucket \
--set configs.DEFAULT_CD_LOGS_BUCKET_REGION=us-east-1 \
--set secrets.BLOB_STORAGE_S3_ACCESS_KEY=<access-key> \
--set secrets.BLOB_STORAGE_S3_SECRET_KEY=<secret-key> \
--set configs.BLOB_STORAGE_S3_ENDPOINT=<endpoint>
helm install devtron devtron/devtron-operator \
    --create-namespace --namespace devtroncd \
    --set global.nodeSelector."kubernetes\.io/hostname"=node1
helm install devtron devtron/devtron-operator \
    --create-namespace --namespace devtroncd \
    --set global.tolerations[0].key=example-key \
    --set global.tolerations[0].operator=Exists \
    --set global.tolerations[0].effect=NoSchedule \
    --set global.tolerations[0].value=value1
global:
  nodeSelector:
    kubernetes.io/hostname: node1  # For nodeSelector
  tolerations:
    - key: example-key  # For tolerations
      operator: Exists
      value: "value1"
      effect: NoSchedule
kubectl get sc 
kubectl patch storageclass <storageclassname> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
helm install devtron devtron/devtron-operator \
    --create-namespace --namespace devtroncd \
    --set global.storageClass="<storageclassname>" # set your preferred StorageClass

| Parameter | Description | Default | Necessity |
|---|---|---|---|
| ACD_PASSWORD | ArgoCD password for the CD workflow | Auto-Generated | Optional |
| AZURE_ACCOUNT_KEY | Account key to access Azure objects such as BLOB_CONTAINER_CI_LOG or CI_CACHE | "" | Mandatory (if using Azure) |
| GRAFANA_PASSWORD | Password for Grafana to display graphs | Auto-Generated | Optional |
| POSTGRESQL_PASSWORD | Password for your PostgreSQL database that will be used to access the database | Auto-Generated | Optional |

| Parameter | Description | Default | Necessity |
|---|---|---|---|
| AZURE_ACCOUNT_NAME | Azure account name which you will use | "" | Mandatory (if using Azure) |
| AZURE_BLOB_CONTAINER_CI_LOG | Name of container created for storing CI_LOG | ci-log-container | Optional |
| AZURE_BLOB_CONTAINER_CI_CACHE | Name of container created for storing CI_CACHE | ci-cache-container | Optional |
| BLOB_STORAGE_PROVIDER | Cloud provider name which you will use | MINIO | Mandatory (if using any cloud other than MINIO), MINIO/AZURE/S3 |
| DEFAULT_BUILD_LOGS_BUCKET | S3 bucket name used for storing build logs | devtron-ci-log | Mandatory (if using AWS) |
| DEFAULT_CD_LOGS_BUCKET_REGION | Region of S3 bucket where CD logs are being stored | us-east-1 | Mandatory (if using AWS) |
| DEFAULT_CACHE_BUCKET | S3 bucket name used for storing cache (do not include s3://) | devtron-ci-cache | Mandatory (if using AWS) |
| DEFAULT_CACHE_BUCKET_REGION | S3 bucket region where cache is being stored | us-east-1 | Mandatory (if using AWS) |
| EXTERNAL_SECRET_AMAZON_REGION | Region where the cluster is set up for Devtron installation | "" | Mandatory (if using AWS) |
| ENABLE_INGRESS | To enable Ingress (True/False) | False | Optional |
| INGRESS_ANNOTATIONS | Annotations for ingress | "" | Optional |
| PROMETHEUS_URL | Existing Prometheus URL if it is installed | "" | Optional |
| CI_NODE_LABEL_SELECTOR | Label of CI worker node | "" | Optional |
| CI_NODE_TAINTS_KEY | Taint key name of CI worker node | "" | Optional |
| CI_NODE_TAINTS_VALUE | Value of taint key of CI node | "" | Optional |
| CI_DEFAULT_ADDRESS_POOL_BASE_CIDR | CIDR ranges used to allocate subnets in each IP address pool for CI | "" | Optional |
| CI_DEFAULT_ADDRESS_POOL_SIZE | The subnet size to allocate from the base pool for CI | "" | Optional |
| CD_NODE_LABEL_SELECTOR | Label of CD node | kubernetes.io/os=linux | Optional |
| CD_NODE_TAINTS_KEY | Taint key name of CD node | dedicated | Optional |
| CD_NODE_TAINTS_VALUE | Value of taint key of CD node | ci | Optional |
| CD_LIMIT_CI_CPU | CPU limit for pre and post CD pod | 0.5 | Optional |
| CD_LIMIT_CI_MEM | Memory limit for pre and post CD pod | 3G | Optional |
| CD_REQ_CI_CPU | CPU request for CI pod | 0.5 | Optional |
| CD_REQ_CI_MEM | Memory request for CI pod | 1G | Optional |
| CD_DEFAULT_ADDRESS_POOL_BASE_CIDR | CIDR ranges used to allocate subnets in each IP address pool for CD | "" | Optional |
| CD_DEFAULT_ADDRESS_POOL_SIZE | The subnet size to allocate from the base pool for CD | "" | Optional |
| GITOPS_REPO_PREFIX | Prefix for GitOps repository | devtron | Optional |

RECOMMEND_SECURITY_SCANNING=false
FORCE_SECURITY_SCANNING=false
HIDE_DISCORD=false

| Parameter | Description |
|---|---|
| RECOMMEND_SECURITY_SCANNING | If true, security scanning is enabled by default for a new build pipeline. Users can, however, turn it off in new or existing pipelines. |
| FORCE_SECURITY_SCANNING | If set to true, security scanning is forcefully enabled by default for a new build pipeline. Users cannot turn it off for new or existing build pipelines. Old pipelines that have security scanning disabled will remain unchanged, and image scanning should be enabled manually for them. |
| HIDE_DISCORD | Hides the Discord chatbot from the dashboard. |

Storage for Logs and Cache

| Component | Override ConfigMap | Purpose |
| --- | --- | --- |
| argocd | argocd-override-cm | GitOps |
| clair | clair-override-cm | Container vulnerability DB |
| clair | clair-config-override-cm | Clair configuration |
| dashboard | dashboard-override-cm | UI for Devtron |
| gitSensor | git-sensor-override-cm | Microservice for Git interaction |
| guard | guard-override-cm | Validating webhook to block images with security violations |
| postgresql | postgresql-override-cm | DB store of Devtron |
| imageScanner | image-scanner-override-cm | Image scanner for vulnerabilities |
| kubewatch | kubewatch-override-cm | Watches changes in CI and CD running in different clusters |
| lens | lens-override-cm | Deployment metrics analysis |
| natsOperator | nats-operator-override-cm | Operator for NATS |
| natsServer | nats-server-override-cm | NATS server |
| natsStreaming | nats-streaming-override-cm | NATS streaming server |
| notifier | notifier-override-cm | Sends notifications related to CI and CD |
| devtron | devtron-override-cm | Core engine of Devtron |
| devtronIngress | devtron-ingress-override-cm | Ingress configuration to expose Devtron |
| workflow | workflow-override-cm | Component to run CI workloads |
| externalSecret | external-secret-override-cm | Manages secrets through external stores like Vault/AWS Secrets Manager |
| grafana | grafana-override-cm | Grafana config for dashboards |
| rollout | rollout-override-cm | Manages blue-green and canary deployments |
| minio | minio-override-cm | Default store for CI logs and image cache |
| minioStorage | minio-storage-override-cm | DB config for MinIO |

To apply an override, save your changes in the corresponding override file, apply it to the devtroncd namespace, and then trigger a re-sync of the installer:

kubectl apply -f file-name -n devtroncd
kubectl patch -n devtroncd installer installer-devtron --type='json' -p='[{"op": "add", "path": "/spec/reSync", "value": true }]'

| Resource | Value |
| --- | --- |
| cpu | 6 |
| memory | 13GB |

kubectl create ns devtroncd
kubectl apply -f prod-configs -n devtroncd
kubectl -n devtroncd get installers installer-devtron -o jsonpath='{.status.sync.status}'
pod=$(kubectl -n devtroncd get po -l app=inception -o jsonpath='{.items[0].metadata.name}')&& kubectl -n devtroncd logs -f $pod
cd devtron-installation-script/
kubectl delete -n devtroncd -f yamls/
kubectl -n devtroncd patch installer installer-devtron --type json -p '[{"op": "remove", "path": "/status"}]'

| Issue | Shown On | Resolution |
| --- | --- | --- |
| Someone from your organization has already availed a license | License Dashboard | Reach out to enterprise@devtron.ai for another trial |
| The license key is incorrect or partial | License Activation Page | Go to the License Dashboard and recheck the license |
| The license key has become invalid for your installation fingerprint | License Activation Page | Generate a new license from the License Dashboard |
| The fingerprint is incorrect or partial | License Dashboard | Go to the License Activation Page and verify the fingerprint |
| You have exhausted the free trial | License Activation Page or License Dashboard | Reach out to enterprise@devtron.ai for renewal |
| You cannot generate more than 1 license key for 1 fingerprint | License Dashboard | Contact Support |

| Parameter | Description | Default |
| --- | --- | --- |
| BASE_URL_SCHEME | Either of HTTP or HTTPS (required) | HTTP |
| BASE_URL | URL without scheme and trailing slash; this is the domain pointing to the cluster on which the Devtron platform is being installed. For example, if you have pointed the domain devtron.example.com to the cluster and the ingress controller is listening on port 32080, the URL will be devtron.example.com:32080 (required) | change-me |
| DEX_CONFIG | | NA |
| EXTERNAL_SECRET_AMAZON_REGION | AWS region for the secret manager to pick (required) | NA |
| PROMETHEUS_URL | URL of Prometheus where all cluster data is stored; if this is wrong, you will not be able to see application metrics like CPU, RAM, HTTP status code, latency, and throughput (required) | NA |

Host URL

Host URL is the domain address at which your Devtron dashboard can be reached.

Add Host URL

To add host URL, go to the Host URL section of Global Configurations.

On the Host URL page:

  • Enter the host URL in the Host URL field.

  • Or, you can select auto-detect from your browser.

  • Next, click Update.

Clusters & Environments

Introduction

Devtron allows you to connect and manage your existing Kubernetes clusters by adding them to its platform. Once a cluster is added, you can create different environments within it, making it possible to deploy your applications.

Go to Global Configurations → Clusters & Environments → Add Cluster (button)

You can add any of the following cluster types:


Add Kubernetes Cluster

Who Can Perform This Action?

Users need to have super-admin permission to add a Kubernetes cluster to Devtron.

On the Add Cluster screen, select Add Kubernetes Cluster.

You can choose to add your Kubernetes cluster using either of the following methods:

Add Cluster Using Server URL & Bearer Token

Note

  1. To add a Kubernetes cluster on Devtron using Server URL and Bearer Token, provide the following information:

Field
Description

Name

Enter the name of your cluster.

Server URL

Bearer Token

Paste the bearer token of your cluster

  2. Complete the remaining steps (optional):

Tip

Add Cluster Using Kubeconfig

In case you prefer to add clusters using kubeconfig, follow these steps:

  1. Copy and paste your kubeconfig file into the editor. Alternatively, you may browse and select the file as well.

  2. Click the Get Cluster button. This action will display the cluster details alongside the kubeconfig.

  3. If your kubeconfig file lists multiple clusters, they will be displayed in the window. Use the checkboxes to select the desired cluster(s) and click Save.

  4. Click the saved cluster, and complete the remaining steps (optional):

Note

Ensure that the kubeconfig file has admin permissions. It is crucial for Devtron to have the necessary administrative privileges; otherwise, it may encounter failures or disruptions during deployments and other operations. Admin permission is essential to ensure the smooth functioning of Devtron and to prevent any potential issues that may arise due to insufficient privileges.

When adding a new cluster to Devtron, you must choose how Devtron will connect to it. There are three connection options available:

Direct Connection

Clusters with a directly accessible API server endpoint—either publicly or via private peering—can be added as Direct Connection clusters.

  • Devtron connects directly without an intermediary.

  • Recommended when the cluster is publicly accessible or has a direct network route from Devtron.

Connect via Proxy

For security reasons, some Kubernetes clusters are deployed behind a proxy. In this setup, Devtron routes all communication through the specified proxy URL.

  • Use this option when network restrictions require traffic to go through a proxy server.

  • Requires specifying a Proxy URL (e.g., http://proxy.example.org:3128).

Connect via SSH Tunnel

When a direct connection isn't possible, Devtron can connect to the Kubernetes cluster through an SSH tunnel, ensuring secure and encrypted communication.

  • Requires:

    • SSH Server URL (e.g., http://proxy.example.org).

    • Username for authentication.

    • Authentication Method:

      • Password

      • SSH Private Key

      • Both Password & SSH Private Key

Use Secure TLS Connection

For a secure cluster connection, you can opt for TLS connection, where you need to provide Certificate Authority Data, a TLS Key, and a TLS Certificate.

Field
Description

Certificate Authority (CA) Data

TLS Key

The private key associated with the client certificate for authentication.

TLS Certificate

The client certificate used to authenticate with the Kubernetes API server.

Configure Prometheus (Enable Application Metrics)

If you want to see application metrics against the applications deployed in the cluster, Prometheus must be deployed in the cluster. Prometheus is a powerful tool to provide graphical insight into your application behavior.

Provide the information in the following fields:

Field
Description

Prometheus endpoint

Provide the URL of your Prometheus

Authentication Type

Prometheus supports two authentication types:

  • Basic: If you select the Basic authentication type, then you must provide the Username and Password of Prometheus for authentication.

  • Anonymous: If you select the Anonymous authentication type, then you do not need to provide the Username and Password. Note: The fields Username and Password will not be available by default.

TLS Key & TLS Certificate

These fields are optional and can be used when you use a customized URL.

Click Save Cluster to save your cluster on Devtron.


Who Can Perform This Action?

Users need to have super-admin permission to add an isolated/airgapped cluster to Devtron.

For air-gapped Kubernetes clusters with restricted inbound and outbound traffic, Devtron enables seamless management using isolated clusters. While these are not actual clusters with API endpoints, they provide a convenient way to deploy applications in such environments.

  1. On the Add Cluster screen, select Add Kubernetes Cluster.

  2. Add a cluster name (e.g. banking-airgapped-cluster) and click Save Cluster.

You have successfully configured an isolated cluster.

Note

  • Download and install manually in a fully air-gapped setup.


Add Environment to a Cluster

Who Can Perform This Action?

Users need to have super-admin permission to add an environment to a cluster.

  1. Fill the following details within the Add Environment modal window.

Field
Description

Environment Name

Enter a name for your environment.

Enter Namespace

Enter a namespace corresponding to your environment. Note: If this namespace does not exist in your cluster, Devtron will create it. If it already exists, Devtron will map the environment to it.

Environment Type

Select your environment type:

  • Production

  • Non-production

Note: Devtron shows deployment metrics (DORA metrics) for environments tagged as Production only.

  2. Click Save. Your new environment will be visible in your cluster as shown below.


Edit Environment

Who Can Perform This Action?

Users need to have super-admin permission to edit an environment in a cluster.

You can also make edits to an existing environment if need be by clicking the edit icon.

| Feature | Editable? |
| --- | --- |
| Production/Non-Production Option | ✅ Yes |
| Description | ✅ Yes |
| Labels for Namespace | ✅ Yes |
| Environment Name | ❌ No |
| Namespace Name | ❌ No |

Click Update to save your changes.


Delete Environment

Who Can Perform This Action?

Users need to have super-admin permission to delete an environment from a cluster.

If an environment is no longer needed, you can delete it by following these steps:

  1. Click the delete icon for the environment you wish to remove.

Important

  2. A confirmation dialog will appear. Click Confirm to permanently delete the environment.


Extras

Get Cluster Credentials

Prerequisite

Note

You can get the Server URL and Bearer Token by running the following command depending on the cluster provider:

If you are using EKS, AKS, GKE, Kops, Digital Ocean managed Kubernetes, run the following command to generate the server URL and bearer token:

curl -O https://raw.githubusercontent.com/devtron-labs/utilities/main/kubeconfig-exporter/kubernetes_export_sa.sh && bash kubernetes_export_sa.sh cd-user  devtroncd

If you are using a microk8s cluster, run the following command to generate the server URL and bearer token:

curl -O https://raw.githubusercontent.com/devtron-labs/utilities/main/kubeconfig-exporter/kubernetes_export_sa.sh && sed -i 's/kubectl/microk8s kubectl/g' \
kubernetes_export_sa.sh && bash kubernetes_export_sa.sh cd-user \
devtroncd

Benefits of Self-hosted URL

  • Disaster Recovery:

    • You cannot edit the server URL of a cloud-specific provider. If you're using an EKS URL (e.g. *****.eu-west-1.elb.amazonaws.com), it will be a tedious task to add a new cluster and migrate all the services one by one.

    • But with a self-hosted URL (e.g. clear.example.com), you can simply point the DNS record to the new cluster's server URL, update the cluster token, and sync all the deployments.

  • Easy Cluster Migrations:

    • In the case of managed Kubernetes clusters (such as EKS, AKS, and GKE), the server URL is specific to the cloud provider, so migrating your cluster from one provider to another results in wasted time and effort.

    • On the other hand, migration for a self-hosted URL is easy, as the URL belongs to a single hosted domain independent of the cloud provider.

Ingress Setup

Introduction

Refer the section relevant to you:


Enabling Ingress during Devtron Installation

Using set flag

You can use the --set flag to specify the desired Ingress settings.

Below are five configurations you can choose from, depending on your requirements:

Only Basic Configuration

To enable Ingress and set basic parameters, use the following command:

helm install devtron devtron/devtron-operator -n devtroncd \
  --set components.devtron.ingress.enabled=true \
  --set components.devtron.ingress.className=nginx \
  --set components.devtron.ingress.host=devtron.example.com

Configuration Including Labels

To add labels to the Ingress resource, use the following command:

helm install devtron devtron/devtron-operator -n devtroncd \
  --set components.devtron.ingress.enabled=true \
  --set components.devtron.ingress.className=nginx \
  --set components.devtron.ingress.host=devtron.example.com \
  --set components.devtron.ingress.labels.env=production

Configuration Including Annotations

To add annotations to the Ingress resource, use the following command:

helm install devtron devtron/devtron-operator -n devtroncd \
  --set components.devtron.ingress.enabled=true \
  --set components.devtron.ingress.className=nginx \
  --set components.devtron.ingress.host=devtron.example.com \
  --set components.devtron.ingress.annotations."kubernetes\.io/ingress\.class"=nginx \
  --set components.devtron.ingress.annotations."nginx\.ingress\.kubernetes\.io\/app-root"="/dashboard"

Configuration Including TLS Settings

To configure TLS settings, including secretName and hosts, use the following command:

helm install devtron devtron/devtron-operator -n devtroncd \
  --set components.devtron.ingress.enabled=true \
  --set components.devtron.ingress.className=nginx \
  --set components.devtron.ingress.host=devtron.example.com \
  --set components.devtron.ingress.tls[0].secretName=devtron-tls \
  --set components.devtron.ingress.tls[0].hosts[0]=devtron.example.com

Comprehensive Configuration

To include all the above settings in a single command, use:

helm install devtron devtron/devtron-operator -n devtroncd \
  --set components.devtron.ingress.enabled=true \
  --set components.devtron.ingress.className=nginx \
  --set components.devtron.ingress.host=devtron.example.com \
  --set components.devtron.ingress.annotations."kubernetes\.io/ingress\.class"=nginx \
  --set components.devtron.ingress.annotations."nginx\.ingress\.kubernetes\.io\/app-root"="/dashboard" \
  --set components.devtron.ingress.labels.env=production \
  --set components.devtron.ingress.pathType=ImplementationSpecific \
  --set components.devtron.ingress.tls[0].secretName=devtron-tls \
  --set components.devtron.ingress.tls[0].hosts[0]=devtron.example.com

Using ingress-values.yaml

Create an ingress-values.yaml file. You may refer to the format below for an advanced ingress configuration, which includes labels, annotations, TLS secrets, and more.

components:
  devtron:
    ingress:
      enabled: true
      className: nginx
      labels: {}
        # env: production
      annotations: {}
        # nginx.ingress.kubernetes.io/app-root: /dashboard
      pathType: ImplementationSpecific
      host: devtron.example.com
      tls: []
    #    - secretName: devtron-info-tls
    #      hosts:
    #        - devtron.example.com

Once you have the ingress-values.yaml file ready, run the following command:

helm upgrade devtron devtron/devtron-operator -n devtroncd --reuse-values -f ingress-values.yaml

Configuring Ingress after Devtron Installation

After Devtron is installed, Devtron is accessible through devtron-service. If you wish to access Devtron through ingress, you'll need to modify this service to use a ClusterIP instead of a LoadBalancer.

You can do this using the kubectl patch command:

kubectl patch -n devtroncd svc devtron-service -p '{"spec": {"ports": [{"port": 80,"targetPort": "devtron","protocol": "TCP","name": "devtron"}],"type": "ClusterIP","selector": {"app": "devtron"}}}'
After patching the service, create an Ingress resource for Devtron. For Kubernetes v1.19 and above, use the networking.k8s.io/v1 API version:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations: 
    nginx.ingress.kubernetes.io/app-root: /dashboard
  labels:
    app: devtron
    release: devtron
  name: devtron-ingress
  namespace: devtroncd
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - backend:
          service:
            name: devtron-service
            port:
              number: 80
        path: /orchestrator
        pathType: ImplementationSpecific 
      - backend:
          service:
            name: devtron-service
            port:
              number: 80
        path: /dashboard
        pathType: ImplementationSpecific
      - backend:
          service:
            name: devtron-service
            port:
              number: 80
        path: /grafana
        pathType: ImplementationSpecific  
For Kubernetes versions older than v1.19, use the legacy extensions/v1beta1 API version instead:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations: 
    nginx.ingress.kubernetes.io/app-root: /dashboard
  labels:
    app: devtron
    release: devtron
  name: devtron-ingress
  namespace: devtroncd
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: devtron-service
          servicePort: 80
        path: /orchestrator
      - backend:
          serviceName: devtron-service
          servicePort: 80
        path: /dashboard
        pathType: ImplementationSpecific  

Optionally, you also can access Devtron through a specific host by running the following YAML file:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations: 
    nginx.ingress.kubernetes.io/app-root: /dashboard
  labels:
    app: devtron
    release: devtron
  name: devtron-ingress
  namespace: devtroncd
spec:
  ingressClassName: nginx
  rules:
    - host: devtron.example.com
      http:
        paths:
          - backend:
              service:
                name: devtron-service
                port:
                  number: 80
            path: /orchestrator
            pathType: ImplementationSpecific
          - backend:
              service:
                name: devtron-service
                port:
                  number: 80
            path: /dashboard
            pathType: ImplementationSpecific
          - backend:
              service:
                name: devtron-service
                port:
                  number: 80
            path: /grafana
            pathType: ImplementationSpecific

Enable HTTPS For Devtron

Once the Ingress setup for Devtron is done and you want to run Devtron over HTTPS, you need to add different annotations for different ingress controllers and load balancers.

1. Nginx Ingress Controller

For the NGINX ingress controller, add the following annotations under service.annotations of the controller's service to run Devtron over HTTPS.

(i) Amazon Web Services (AWS)

If you are using AWS cloud, add the following annotations under service.annotations under nginx ingress controller.

  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<acm-arn-here>"

(ii) Digital Ocean

If you are using Digital Ocean cloud, add the following annotations under service.annotations under nginx ingress controller.

annotations:
  service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
  service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
  service.beta.kubernetes.io/do-loadbalancer-certificate-id: "<your-certificate-id>"
  service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"

2. AWS Application Load Balancer (AWS ALB)

For the AWS Application Load Balancer, add the following annotations under ingress.annotations to run Devtron over HTTPS.

  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: "<acm-arn-here>"

3. Azure Application Gateway

For the Azure Application Gateway, the following annotations need to be added under ingress.annotations to run Devtron over HTTPS.

 annotations:
  kubernetes.io/ingress.class: "azure/application-gateway"
  appgw.ingress.kubernetes.io/backend-protocol: "http"
  appgw.ingress.kubernetes.io/ssl-redirect: "true"
  appgw.ingress.kubernetes.io/appgw-ssl-certificate: "<name-of-appgw-installed-certificate>"

For an Ingress resource to be observed by AGIC (Application Gateway Ingress Controller), it must be annotated with kubernetes.io/ingress.class: azure/application-gateway. Only then will AGIC work with the Ingress resource in question.

Note: Make sure NOT to use port 80 with HTTPS and port 443 with HTTP on the Pods.

Git Accounts

Git Accounts allow you to connect your code source with Devtron. You will be able to use these git accounts to build the code using the CI pipeline.

Add Git Account

To add git account, go to the Git accounts section of Global Configurations. Click Add git account.

Provide the information in the following fields to add your git account:

Field
Description

Name

Git host

It is the git provider on which the corresponding application git repository is hosted. Note: By default, Bitbucket and GitHub are available in the drop-down list. You can add as many as you want by clicking [+ Add Git Host].

URL

Authentication Type

Devtron supports three types of authentications:

  • User auth: If you select User auth as the authentication type, then you must provide the Username and Password or Auth token for the authentication of your version control account.

  • Anonymous: If you select Anonymous as an authentication type, then you do not need to provide the Username and Password. Note: If authentication type is set as Anonymous, only public git repository will be accessible.

  • SSH Key: If you choose SSH Key as an authentication type, then you must provide the Private SSH Key corresponding to the public key added in your version control account.

Update Git Account

To update the git account:

  1. Click the git account which you want to update.

  2. Update the required changes.

  3. Click Update to save the changes.

Updates can only be made within one authentication type or one protocol type, i.e., HTTPS (Anonymous or User Auth) or SSH. You can update from Anonymous to User Auth and vice versa, but not from Anonymous or User Auth to SSH and vice versa.

Note:

GitOps

Introduction

In Devtron, you can either use Helm or GitOps (Argo CD) to deploy your applications and charts. GitOps is a branch of DevOps that focuses on using Git repositories to manage infrastructure and application code deployments.

If you use the GitOps approach, Devtron will store Kubernetes configuration files and the desired state of your applications in Git repositories.


Steps to Configure GitOps

Who Can Perform This Action?

Users need to have super-admin permission to configure GitOps.

  1. Go to Global Configurations → GitOps

The Git provider you select for configuring GitOps might impact the following sections:

  1. In the Directory Management in Git section, you get the following options:

    • Use default git repository structure:

      This option lets Devtron automatically create a GitOps repository within your organization. The repository name will match your application name, and it cannot be changed. Since Devtron needs admin access to create the repository, ensure the Git credentials you provided in Step 3 have administrator rights.

    • Allow changing git repository for application:

  2. Click Save/Update. A green tick will appear on the active Git provider.

Feature Flag

Alternatively, you may use the feature flag FEATURE_USER_DEFINED_GITOPS_REPO_ENABLE to enable or disable custom GitOps repo.

For disabling: FEATURE_USER_DEFINED_GITOPS_REPO_ENABLE: "false"
For enabling: FEATURE_USER_DEFINED_GITOPS_REPO_ENABLE: "true"

How to Use Feature Flag

  1. Select the cluster where Devtron is running, i.e., default_cluster.

  2. Go to the Config & Storage dropdown on the left.

  3. Click ConfigMap.

  4. Use the namespace filter (located on the right-hand side) to select the devtroncd namespace. This will show only the ConfigMaps related to Devtron and filter out the rest.

  5. Find the ConfigMap meant for the dashboard of your Devtron instance, i.e., dashboard-cm (with an optional suffix).

  6. Click Edit Live Manifest.

  7. Add the feature flag (with the intended boolean value) within the data dictionary, as shown in the sketch after these steps.

  8. Click Apply Changes.
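
For reference, here is a minimal sketch of what the edited manifest could look like. The ConfigMap name (dashboard-cm) comes from step 5 above; any existing keys will vary per installation, and only the feature flag line is the addition:

apiVersion: v1
kind: ConfigMap
metadata:
  name: dashboard-cm
  namespace: devtroncd
data:
  # existing keys remain untouched; add the feature flag below
  FEATURE_USER_DEFINED_GITOPS_REPO_ENABLE: "true"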


Supported Git Providers

Below are the Git providers supported in Devtron for storing configuration files.

GitHub

Prerequisite

  1. A GitHub account

Fill the following mandatory fields:

Field
Description

Git Host

Shows the URL of GitHub, e.g., https://github.com/

GitHub Organisation Name

GitHub Username

Provide the username of your GitHub account

Personal Access Token

GitLab

Prerequisite

  1. A GitLab account

Fill the following mandatory fields:

Field
Description

Git Host

Shows the URL of GitLab, e.g., https://gitlab.com/

GitLab Group ID

GitLab Username

Provide the username of your GitLab account

Personal Access Token

Azure

Prerequisite

Fill the following mandatory fields:

Field
Description

Azure DevOps Organisation Url*

Azure DevOps Project Name

Azure DevOps Username*

Provide the username of your Azure DevOps account

Azure DevOps Access Token*

Bitbucket

Here, you get 2 options:

Bitbucket Cloud

Prerequisite

  1. A Bitbucket account

Fill the following mandatory fields:

Field
Description

Bitbucket Host

Shows the URL of Bitbucket Cloud, e.g., https://bitbucket.org/

Bitbucket Workspace ID

Bitbucket Project Key

Bitbucket Username*

Provide the username of your Bitbucket account

Personal Access Token

Bitbucket Data Center

Prerequisite

A Bitbucket Data Center account

Fill the following mandatory fields:

Field
Description

Bitbucket Host

Enter the URL address of your Bitbucket Data Center, e.g., https://bitbucket.mycompany.com

Bitbucket Project Key

Bitbucket Username*

Provide the username of your Bitbucket Data Center account

Password

Provide the password to authenticate your Bitbucket Data Center account


Miscellaneous

Creating Organization in GitHub

We do NOT recommend using a GitHub organization that contains your source code.

  1. Create a new account on GitHub (if you do not have one).

  2. On the upper-right corner of your GitHub page, click your profile photo, then click Settings.

  3. On the Access section, click Organizations.

  4. On the Organizations section, click New organization.

  5. On the Set up your organization page,

    • Enter the organization account name, contact email.

    • Select the option your organization belongs to.

    • Verify your account and click Next.

    • Your GitHub organization name will be created.

  6. Go to your profile and click Your organizations to view all the organizations you created.

Additional References

Creating Group in GitLab

  1. Create a new account on GitLab (if you do not have one).

  2. You can create a group by going to the Groups tab on the GitLab dashboard and clicking New group.

  3. Select Create group.

  4. Enter the group name (required), add an optional description if needed, and click Create group.

  5. Your group will be created and your group name will be assigned with a new Group ID (e.g. 61512475).

Creating Project in Azure DevOps

  1. Go to Azure DevOps and navigate to Projects.

  2. Select your organization and click New project.

  3. On the Create new project page,

    • Enter the project name and description of the project.

    • Select the visibility option (private or public), initial source control type, and work item process.

    • Click Create.

    • Azure DevOps displays the project welcome page with the project name.

Additional References

Creating Workspace in Bitbucket

  1. Create a new individual account on Bitbucket (if you do not have one).

  2. Select your profile and settings avatar on the upper-right corner of the top navigation bar.

  3. Select All workspaces from the dropdown menu.

  4. Select Create workspace in the upper-right corner of the Workspaces page.

  5. On the Create a Workspace page:

  • Enter a Workspace name.

  • Enter a Workspace ID. Your ID cannot have any spaces or special characters, but numbers and capital letters are fine. This ID becomes part of the URL for the workspace and anywhere else where there is a label that identifies the team (APIs, permission groups, OAuth, etc.).

  • Click Create.

  6. Your Workspace name and Workspace ID will be created.

Additional References

Container/OCI Registry

You can configure a container registry using any registry provider of your choice. It allows you to build, deploy, and manage your container images or charts with easy-to-use UI.

Add Container Registry

  1. From the left sidebar, go to Global Configurations → Container/OCI Registry.

  2. Click Add Registry.

  3. Choose the Registry type:

    • Private Registry: Choose this if your images or artifacts are hosted or should be hosted on a private registry restricted to authenticated users of that registry. Selecting this option requires you to enter your registry credentials (username and password/token).

    • Public Registry: Unlike private registry, this doesn't require your registry credentials. Only the registry URL and repository name(s) would suffice.

  4. Assuming your registry type is private, here are few of the common fields you can expect:

    Fields
    Description

    Name

    Registry URL

    Provide the URL of your registry in case it doesn't come prefilled (do not include oci://, http://, or https:// in the URL)

    Authentication Type

    Push container images

    Push helm packages

    Tick this checkbox if you wish to push helm charts to your registry

    Use as chart repository

    Tick this checkbox if you want Devtron to pull helm charts from your registry and display them on its chart store. Also, you will have to provide a list of repositories (present within your registry) for Devtron to successfully pull the helm charts.

    Set as default registry

    Tick this checkbox to set your registry as the default registry hub for your images or artifacts

  5. Click Save.

Supported Registry Providers

ECR

Amazon ECR is an AWS-managed container image registry service. The ECR provides resource-based permissions to the private repositories using AWS Identity and Access Management (IAM). ECR allows both Key-based and Role-based authentications.

Provide the following additional information apart from the common fields:

Fields
Description

Registry URL

Example of URL format: xxxxxxxxxxxx.dkr.ecr.<region>.amazonaws.com where xxxxxxxxxxxx is your 12-digit AWS account ID

Authentication Type

Select one of the authentication types:

  • EC2 IAM Role: Authenticate with the worker node IAM role; attach the ECR policy (AmazonEC2ContainerRegistryFullAccess) to the worker node IAM role of your Kubernetes cluster.

  • User Auth: Key-based authentication using the following credentials:

    • Access key ID: Your AWS access key

    • Secret access key: Your AWS secret access key ID

Docker

Provide the following additional information apart from the common fields:

Fields
Description

Username

Provide the username of the Docker Hub account you used for creating your registry.

Password/Token

Azure

Provide the following additional information apart from the common fields:

Fields
Description

Registry URL/Login Server

Example of URL format: xxx.azurecr.io

Username/Registry Name

Provide the username of your Azure container registry

Password

Provide the password of your Azure container registry

Artifact Registry (GCP)

Remove all the white spaces from JSON key and wrap it in a single quote before pasting it in Service Account JSON File field

Provide the following additional information apart from the common fields:

Fields
Description

Registry URL

Example of URL format: region-docker.pkg.dev

Service Account JSON File

Paste the content of the service account JSON file

Google Container Registry (GCR)

Remove all the white spaces from JSON key and wrap it in single quote before pasting it in Service Account JSON File field

Quay

Provide the following additional information apart from the common fields:

Fields
Description

Username

Provide the username of your Quay account

Token

Provide the password of your Quay account

Other

Provide below information if you select the registry type as Other.

Fields
Description

Registry URL

Enter the URL of your private registry

Username

Provide the username of your account where you have created your registry

Password/Token

Provide the password or token corresponding to the username of your registry

Advanced Registry URL Connection Options

  • Allow Only Secure Connection: Tick this option for the registry to allow only secure connections

  • Allow Secure Connection With CA Certificate: Tick this option for the registry to allow secure connection by providing a private CA certificate (ca.crt)

  • Allow Insecure Connection: Tick this option to allow insecure communication with the registry (e.g., when the SSL certificate has expired)

You can use any registry that can be authenticated using docker login -u <username> -p <password> <registry-url>. However, these registries might provide a more secure way of authentication, which we will support later.

Registry Credential Access

Super-admin users can decide if they want to auto-inject registry credentials or use a secret to pull an image for deployment to environments on specific clusters.

  1. To manage the access of registry credentials, click Manage.

There are two options to manage the access of registry credentials:

Fields
Description

Do not inject credentials to clusters

Select the clusters for which you do not want to inject credentials

Auto-inject credentials to clusters

Select the clusters for which you want to inject credentials

  2. You can choose one of the two options for defining credentials:

Use Registry Credentials

If you select Use Registry Credentials, the clusters will be auto-injected with the registry credentials of your registry type. For example, if you select Docker as the Registry Type, the clusters will be auto-injected with the username and password/token that you use on the Docker Hub account.

Click Save.

Specify Image Pull Secret

You can create a Secret by providing credentials on the command line.

Create this Secret and name it regcred (let's say):

kubectl create -n <namespace> secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>

where,

  • namespace is your sub-cluster, e.g., devtron-demo

  • your-registry-server is your Private Docker Registry FQDN. Use https://index.docker.io/v1/ for Docker Hub.

  • your-name is your Docker username

  • your-pword is your Docker password

  • your-email is your Docker email

You have successfully set your Docker credentials in the cluster as a Secret called regcred.
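
For reference, workloads in that namespace can then pull images from the private registry through the standard Kubernetes imagePullSecrets field. A minimal sketch (image and names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo
  namespace: devtron-demo
spec:
  containers:
    - name: app
      image: registry.example.com/my-app:1.0.0   # image hosted in your private registry
  imagePullSecrets:
    - name: regcred   # the Secret created above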

Typing secrets on the command line may store them in your shell history unprotected, and those secrets might also be visible to other users on your PC during the time when kubectl is running.

Enter the Secret name in the field and click Save.

SSO Login Services

Once Devtron is installed, it has a built-in admin user with super-admin privileges and unrestricted access to all Devtron resources. We recommend using a user with super-admin privileges for initial and global configurations only, and then switching to local users or configuring SSO integration.

To add/edit SSO configuration, go to the SSO Login Services section of Global Configurations.

Supported SSO Providers

Below are the SSO providers which are available in Devtron. Select one of the SSO providers (e.g., GitHub) to configure SSO:

Dex implements connectors that target a specific identity provider for each connector configuration. You must have created an account with the corresponding identity provider and registered an app to obtain the client key and secret.

Refer the following documents for more detail.

  • https://dexidp.io/docs/connectors/

  • https://dexidp.io/docs/connectors/google/

1. Create new SSO Configuration

  • Go to the Global Configurations → SSO Login Services and click any SSO Provider of your choice.

  • In the URL field, enter the valid Devtron application URL where it is hosted.

  • For providing redirectURI or callbackURI registered with the SSO provider, you can either select Configuration or Sample Script.

  • Provide the client ID and client Secret of your SSO provider (e.g. If you select Google as SSO provider, then you must enter $GOOGLE_CLIENT_ID and $GOOGLE_CLIENT_SECRET in the client ID and client Secret respectively.)

  • Select Save to create and activate SSO Login Service.

Note:

  • Only a single SSO login configuration can be active at a time. Whenever you create or update an SSO configuration, it will be activated and used by Devtron, and previous configurations will be deleted.

  • Except for the domain substring, the URL and redirectURI remain the same.

2. Update SSO Configuration

You can change the SSO configuration anytime by updating the configuration and clicking Update. Note: In case of a configuration change, all users will be logged out of Devtron and will have to log in again.

3. Configuration Payload

  • type : Any platform name such as Google, GitLab, GitHub, etc.

  • name : Identity provider platform name

  • config : Connector details for the chosen platform. Platforms may not have the same structure, but common configurations are clientID, clientSecret, and redirectURI (see the sketch after this list).

  • hostedDomains : Domains authorized for SSO login.
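
As an illustration, a payload for the Google provider could look like the sketch below. The placeholders and the exact placement of hostedDomains may differ per provider, so treat this as an assumption rather than the exact Devtron template:

type: google
name: Google
config:
  clientID: <client ID from your SSO provider>
  clientSecret: <client secret from your SSO provider>
  redirectURI: <redirect URI shown in SSO Login Services>
  hostedDomains:
    - example.com   # optional; restricts sign-in to this domain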


Next Steps

Chart Repositories

Note: After the successful installation of Devtron, click Refetch Charts to sync and download all the default charts listed on the dashboard.

Add Chart Repository

To add chart repository, go to the Chart Repositories section of Global Configurations. Click Add repository.

Provide below information in the following fields:

Fields
Description

Name

Provide a name for your chart repository. This name is added as a prefix to the chart names listed in the helm chart section of the application.

URL

This is the URL of your chart repository. E.g. https://charts.bitnami.com/bitnami

Update Chart Repository

You can also update your saved chart repository settings.

  1. Click the chart repository which you want to update.

  2. Make the required changes and click Update to save your changes.

Note:

  • You can perform a dry run to validate the below chart repo configurations by clicking Validate.

Google

Introduction

Integrating Google as your Single Sign-On (SSO) provider enables users to authenticate with their Google accounts, ensuring secure and streamlined access to Devtron. This document walks you through setting up Google SSO in Devtron, ensuring users can log in smoothly.

Prerequisites

To configure Google SSO in Devtron, you will need:

  • Super Admin permissions

Get the Redirect URI from Devtron

Before configuring Google as an SSO provider,

  • You need to retrieve the Redirect URI from Devtron, which will be required in Google Cloud while setting up OAuth credentials.

    • Log in to Devtron.

    • Navigate to Global Configurations → SSO Login Services.

    • Select Google as the authentication provider.

    • Enter the Host URL in the URL field. (This is essential to generate the correct Redirect URI.)

    • Copy the Redirect URI displayed in this section. You will need to enter this in Google Cloud.

Configure OAuth in Google Cloud Console

The next step is to configure OAuth credentials in Google Cloud Console. This involves creating a Google OAuth Client ID and Client Secret, which will be used in Devtron for authentication.

To set up OAuth, follow these steps:

  • Navigate to APIs & Services → OAuth Consent Screen and configure the required details as shown on the screen.

  • In APIs & Services → Credentials, create a new OAuth Client ID:

    • Select 'Web application' as the application type.

    • Paste the Redirect URI (copied from Devtron) under Authorized Redirect URIs.

  • Click Create to generate the Client ID and Client Secret.

Google SSO Requires a Valid Domain with HTTPS

Examples of valid URIs:

✅ https://devtron.example.com/api/dex/callback

✅ https://auth.yourcompany.com/callback

Examples of invalid URIs:

❌ http://localhost:8080/callback

❌ http://192.168.1.10/callback

You can see that a new client ID has been created in APIs & Services → Credentials, under the OAuth 2.0 Client IDs section. To obtain the Client ID and Client Secret, click the name (devtron-sso in our case) of the OAuth 2.0 Client ID.

Copy the Client ID and Client Secret, as they will be required in Devtron’s SSO configuration.

Configure Google SSO in Devtron

The next step is to configure Devtron to use these credentials for authentication. For this, navigate back to Global Configurations → SSO Login Services, where you will find a configuration template.

Configuration

In the configuration,

  • Enter the OAuth Credentials:

    • Paste the Client ID obtained from Google Cloud in the clientID field.

    • Paste the Client Secret obtained from Google Cloud in the clientSecret field.

  • Configure Hosted Domains (Optional):

    • If you want to restrict authentication to specific domains (e.g., only users from company.com can log in), add these under hostedDomains in Devtron.

    • If you want to allow all users with any valid Google account, remove the entire hostedDomains section from the configuration.

  • Enter the Redirect URI:

    • Copy the Redirect URI displayed in Devtron and paste the value in the redirectURI field.

  • Click Update to save the configuration. Once saved, Google SSO is successfully configured.

Although Google SSO is now set up, users will not be able to sign in unless they are explicitly added to Devtron with the necessary permissions.

Important: Enable User Access After SSO Setup

To ensure users can log in:

  • Go to Global Configurations → Authorization → User Permissions.

  • Click Add User.

  • Enter their email (matching their Google account).

  • Assign the required role.

  • Click Save to complete the setup.

Once saved, Devtron will use Google OAuth for authentication, allowing users to log in using their Google accounts.

Reference

GitHub

Sample Configuration


Values You Would Require at SSO Provider

Devtron provides a sample configuration out of the box. There are some values that you need to either get from your SSO provider or give to your SSO provider.

Values to Fetch

  • clientID

  • clientSecret

Values to Provide

  • redirectURI (provided in SSO Login Services by Devtron)


Reference

Deployment Charts

Introduction

Devtron Apps leverage helm charts to carry out deployment of your images and configuration. Devtron includes predefined Helm charts (e.g., Deployment, Rollout, StatefulSet) that cover the majority of use cases.

For any use case not addressed by the default Helm charts, you can upload your own Helm chart and use it as a deployment chart in Devtron.

Tutorial


Preparing a Deployment Chart

1. Create a Helm Chart

You can use the following command to create a Helm chart:

helm create my-custom-chart
Then, update the following fields in the chart's Chart.yaml file:

Field
Description

Name

Name of the Helm chart (Required).

Version

This is the chart version. Update this value for each new version of the chart (Required).

Description

Give a description to your chart (Optional).

Example of Chart.yaml
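
A minimal Chart.yaml along these lines would satisfy the fields above (values are illustrative):

apiVersion: v2
name: my-custom-chart
version: 0.1.0
description: A custom deployment chart for Devtron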

2. Create an Image Descriptor Template File

The Image Descriptor Template file is a GO template that produces a valid JSON file upon processing. It allows Devtron to dynamically inject values from the CD pipeline into your Helm chart during deployment. Therefore, details like image repository, tag, and environment are automatically populated at the placeholders specified in .image_descriptor_template.json.

  • In the root directory of your chart, create a file named .image_descriptor_template.json using the following command:

    touch .image_descriptor_template.json
  • Ensure the above file is created in the directory where the main Chart.yaml exists (as shown below):

  • Paste the following content in .image_descriptor_template.json file:

    {
        "server": {
            "deployment": {
                "image_tag": "{{.Tag}}",
                "image": "{{.Name}}"
            }
        },
        "pipelineName": "{{.PipelineName}}",
        "releaseVersion": "{{.ReleaseVersion}}",
        "deploymentType": "{{.DeploymentType}}",
        "app": "{{.App}}",
        "env": "{{.Env}}",
        "appMetrics": {{.AppMetrics}}
    }

    You can customize this template to include only the values your deployment needs. For instance, if you only require the image repository and tag, your template would look like:

    {
        "image": {
            "repository": "{{.Name}}",
            "tag": "{{.Tag}}"
        }
    }
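
In that case, the chart's values.yaml would be expected to expose matching keys that your templates read (a minimal sketch, assuming the templates reference .Values.image.repository and .Values.image.tag):

image:
  repository: ""
  tag: ""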

Got a JSON Error?

If your code editor highlights a syntax error (property or EOF error) in the above JSON, ignore it.

3. Add app-values.yaml

The app-values.yaml file is simply a subset of your values.yaml file. Therefore, you can insert specific entries from values.yaml that you wish to display.

However, if you upload the chart without an app-values.yaml or with an empty one, your deployment template will appear blank (as shown below) or null.

4. Add release-values.yaml

  • You can use autoPromotionSeconds to decide how long to keep old pods running once the latest pods of new release are available.

In the root directory of your chart, create a file named release-values.yaml with the following command:

touch release-values.yaml

Use the following content in the release-values.yaml file (edit it as per your requirement):

server:
 deployment:
   image_tag: IMAGE_TAG
   image: IMAGE_REPO
   enabled: false
dbMigrationConfig:
  enabled: false

pauseForSecondsBeforeSwitchActive: 0
waitForSecondsBeforeScalingDown: 0
autoPromotionSeconds: 30

#used for deployment algo selection
orchestrator.deploymant.algo: 1 

5. Package the chart in a tgz format

The Helm chart to be uploaded must be packaged as a versioned archive file in the format: <helm-chart-name>-x.x.x.tgz. Both <helm-chart-name> and x.x.x will be automatically fetched from the name and version fields present in the Chart.yaml file, respectively.

Note

Ensure you navigate out of the Helm chart folder before packaging it in a '.tgz' format

Run the following command to package the chart:

helm package my-custom-chart

The above command will generate a <helm-chart-name>-x.x.x.tgz file.


Uploading a Deployment Chart

Who Can Perform This Action?

Only super admin users can upload a deployment chart. A super admin can upload multiple versions of a chart.

Steps

  • Go to Global Configurations → Deployment Charts.

  • Click Upload Chart.

  • Click Select .tgz file and upload your packaged deployment chart (in .tgz format).

The system initiates the validation of your uploaded chart. You may also click Cancel upload if you wish to abort the process.

Validation Checks

In the uploading process, your file will be validated against the following criteria:

  • Supported archive template should be in *.tgz format.

  • Chart.yaml must include the name and the version number.

  • .image_descriptor_template.json file should be present.

The following are interpretations of the validation checks performed:

Validation Status
Description
User Action

Success

Enter a description for the chart and select Save or Cancel upload

Unsupported template

Upload another chart or Cancel upload

New version detected

Enter a Description and select Save to continue uploading, or Cancel upload

Already exists

  • Edit the version and re-upload the same chart using Upload another chart.

  • Upload a new chart with a new name using Upload another chart

  • Cancel upload


Viewing Deployment Charts

Who Can Perform This Action?

Only super-admins can view deployment charts.

To view the list of available deployment charts, go to Global Configurations → Deployment Charts page.

  • You can search a chart by its name, version, or description.


Using Deployment Chart in Application

Note


Who Can Perform This Action?

Only super-admins can edit the GUI schema of deployment charts.

Reference

You can edit the GUI schema of the following deployment charts:

  1. Default charts provided by Devtron (Deployment, Job & CronJob, Rollout Deployment, and StatefulSet)

  2. Custom charts uploaded by you

Tutorial

Steps

In this example, we will edit the Deployment chart type provided by Devtron.

  1. Click the edit button next to the chart as shown below.

  2. You may start editing the schema by excluding existing fields/objects or including more of them. Click the Refer YAML button to view all the supported fields.

  3. While editing the schema, you may use the Preview GUI option for a real-time preview of your changes.

  4. Click Save Changes.

Next, if you go to App Configuration → Base Configurations → Deployment Template, you will be able to see the deployment template fields (in GUI) as per your customized schema.

OIDC

Sample Configuration


Values You Would Require at SSO Provider

Devtron provides a sample configuration out of the box. There are some values that you need to either get from your SSO provider or give to your SSO provider.

Values to Fetch

  • clientID

  • clientSecret

Values to Provide

  • redirectURI (provided in SSO Login Services by Devtron)


Reference

Keycloak

Prerequisites


Steps on Keycloak Admin Console

Creating a Client

Here, we will add Devtron as a client for using Keycloak SSO.

  1. In the Admin Console, go to Clients and click Create client.

  2. Within General Settings:

    • Enter devtron in the Client ID field. We will use this ID while configuring SSO later in Devtron.

    • Enter Devtron in the Name field.

  3. Within Capability config, turn on Client Authentication.

  4. Within Login settings, enter https://<DEVTRON_BASE_URL>/orchestrator/api/dex/callback in the following fields.

    • Valid redirect URIs

    • Valid post logout redirect URIs

    • Web origins

  5. Click Save.

Getting Client Secret

Here, we will obtain the secret we need while configuring SSO in Devtron.

  1. Go to the Credentials tab of the client you created.

  2. Use the copy button next to the Client Secret field and paste it somewhere for future reference.

Creating Users

Here, we will create a user that can log in to Devtron via SSO. We will assign a username and password that the user can enter while logging in to Devtron via Keycloak SSO.

  1. In the Admin Console, go to Users and click Add user.

  2. Give a username (e.g., usertest) in the Username field and enter the user's email address (e.g., usertest@example.com) in the Email field.

  3. Click Create. Your user creation will be successful.

  4. Go to the Credentials tab of the user you created.

  5. Click Set password.

  6. Enter the password and confirm it.

  7. Click Save.

Retrieving Issuer URL

Here, we will obtain the Issuer URL we need while configuring SSO in Devtron.

  1. In the Admin Console, go to Realm settings.

  2. In the General tab, scroll down to the Endpoints field, and click the OpenID Endpoint Configuration link.

  3. This will open a new page, copy the value of the key named issuer, and paste it somewhere for future reference.


Steps on Devtron

Configuring OIDC SSO

Who Can Perform This Action?

Users need to have super-admin permission to configure SSO.

  1. Go to Global Configurations → SSO Login Services → OIDC.

  2. Below the URL field, use the Click to use option to populate the exact URL if the displayed one is incorrect.

  3. In the Configuration editor, fill in the OIDC connector details using the values gathered from Keycloak (a sketch of the typical fields follows this list):

  4. Click Save or Update to activate Keycloak SSO login.
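
A minimal sketch of the OIDC connector configuration for Keycloak, assuming the client and issuer values gathered in the earlier steps (field names follow Dex's generic OIDC connector):

type: oidc
name: Keycloak
config:
  issuer: <issuer URL copied from Realm settings>
  clientID: devtron
  clientSecret: <client secret copied from the Credentials tab>
  redirectURI: https://<DEVTRON_BASE_URL>/orchestrator/api/dex/callback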

Adding Users

Who Can Perform This Action?

Users need to have super-admin permission to add users.

Here, we will add the user we created in the Keycloak Admin Console. If this step is skipped, the user might not be able to log in to Devtron via Keycloak.

  1. Go to Global Configurations → Authorization → User Permissions.

  2. Click + Add Users.

  3. In the Email addresses field, enter the email address of the user you created in Keycloak.

  4. Click Save.

Note

Kindly get in touch with us if you encounter any issues while logging out of Keycloak on Devtron as it might be buggy.

GitLab

Sample Configuration


Values You Would Require at SSO Provider

Devtron provides a sample configuration out of the box. There are some values that you need to either get from your SSO provider or give to your SSO provider.

Values to Fetch

  • clientID

  • clientSecret

Values to Provide

  • redirectURI (provided in SSO Login Services by Devtron)


Reference

Microsoft

Sample Configuration


Values You Would Require at SSO Provider

Devtron provides a sample configuration out of the box. There are some values that you need to either get from your SSO provider or give to your SSO provider.

Values to Fetch

  • clientID

  • tenantID (required only if you want to use Azure AD for auto-assigning permissions)

  • clientSecret

Values to Provide

  • redirectURI (provided in SSO Login Services by Devtron)


Reference


Make sure to add tenantID in the SSO configuration field without fail.

SSO login requires exact matching between Devtron permission group names and AD groups. Any discrepancies or missing groups will prevent successful login.

If your AD permissions aren't reflecting in Devtron, a quick sign-out and sign-in can resolve the issue.

LDAP

Sample Configuration


Values to fetch from LDAP

Devtron provides a sample configuration out of the box. Here are some values you need to fetch from your LDAP (the sketch after this list shows where they typically go).

  • bindDN

  • bindPW

  • baseDN
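
A minimal sketch of where these values typically sit in an LDAP connector configuration (the structure follows Dex's LDAP connector; host and DNs are illustrative):

type: ldap
name: LDAP
config:
  host: ldap.example.com:636
  bindDN: cn=serviceaccount,dc=example,dc=org
  bindPW: <bind password>
  userSearch:
    baseDN: ou=people,dc=example,dc=org
    username: uid
    idAttr: uid
    emailAttr: mail
    nameAttr: cn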


Reference


SSO login requires exact matching between Devtron permission group names and LDAP user groups. Any discrepancies or missing groups will prevent successful login.

If you're missing some permissions that you know you should have, try logging out and signing back in to Devtron. This will refresh your permissions based on your latest LDAP user group.

OpenShift

Sample Configuration


Values You Would Require at SSO Provider

Devtron provides a sample configuration out of the box. There are some values that you need to either get from your SSO provider or give to your SSO provider.

Values to Fetch

  • clientID

  • clientSecret

Values to Provide

  • redirectURI (already provided in SSO Login Services by Devtron)


Reference

Approval Policy

When it comes to critical environments (let's say, production), you as a superadmin might want to introduce an approval flow for application deployment or changes made to the configuration files. Enforcing such restrictions will prevent unwanted deployments and direct modifications to sensitive configurations.

The Approval Policy feature in Devtron lets you introduce an approval mechanism whenever your users perform the following actions:

  • Deploying an Application to an Environment

  • Changes in Deployment Template

  • Changes in ConfigMap

  • Changes in Secret


Create an Approval Policy

Who Can Perform This Action?

Users need to have super-admin permissions to create an approval policy.

  1. Go to Global Configurations → Approval Policy.

  2. Click + Create Profile.

  3. Give a name to the policy, e.g., banking-prod-approval, and add a description (optional) preferably explaining what it does.

  4. Additionally, you can decide who can grant approval from the following 3 options:

    • Option 1: Choose Any Approver if you want to allow any user with Image Approver permissions and/or Configuration Approver permissions to approve a 'Deployment' request or a 'Configuration Change' respectively. Choose the number of approvals your users must get to proceed with their changes. The permissible limit ranges from one approval (minimum) to six approvals (maximum).

    • Option 2: Choose Specific Approver → User Group → Add Criteria to choose one or more user groups who can provide the requisite number of approvals. The permissible limit is 1 to 6 approvals for each user group you add. From the selected group(s), only the users having Image Approver and/or Configuration Approver permissions can approve.

    • Option 3: Choose Specific Approver → Specific Users (dropdown) to cherry-pick the names of the user(s) who can provide an approval. Here, there is no upper limit to the approvals (unlike the above options), so the user must obtain approvals from all the specific members you add to the policy.

How do approvals of User Groups work?

If a user belongs to multiple groups (see Option 2 above), their approval is considered and counted for each group. For example, if you mandate 2 approvals: 1 from DevOps group and 1 from Compliance group; an approval from a common user (belonging to both groups) will count as 2 approvals.

However, once a group's required approvals are met, extra approvals won’t count. For example, if a request needs 2 Security and 3 QA approvals and already has 2 Security and 2 QA approvals, an approval from a user in both teams will count only for QA. The user appears in both lists but doesn’t add to Security’s count.

Can super-admins approve the requests?

Yes, apart from the users having approver access, super-admins can also approve the requests (provided the requests are not their own).

What happens if a specific user mentioned in the policy gets deleted from Devtron or has their permissions revoked?

Even if the user mentioned in the policy no longer exists, the approval conditions will remain unchanged. Therefore, to prevent unfulfilled approval conditions because of an absent user, it's best to remove that specific user from the policy.

  5. Click Save Changes.


Apply an Approval Policy

Who Can Perform This Action?

Users need to have super-admin permissions to apply an approval policy.

  1. After you create an approval policy, you can apply it. Click Apply Profile on the same screen.

  2. From the Select profiles to apply dropdown, choose the policy you wish to apply. You also have the option to select more than one policy (if they exist) using the checkbox.

  3. Choose the scope from the dropdown given next to Use selected policy for approval of. Here you can decide whether your policy is for:

    • Approval of Deployment - Select 'Deployments' from the dropdown.

    • Approval of Configuration Change - Select 'Configuration change' from the dropdown. You can further select: Deployment template, ConfigMaps, Secrets. Select the ones to which your policy should apply so that any change to your chosen configurations will require an approval.

  4. Under Apply to, you get the following options to choose from:

    • Specific Criteria - Select this option to apply your policy to specific environment(s) of specific applications.

      Example: In case of Deployment

      Example: In case of Configuration Change

    • By match criteria - Select this option to use a combination of filters to create criteria. Your policy will only apply to target pipelines/configurations fulfilling your criteria (including existing and future ones). (Optional) You may also write a note for your other team members to understand the intent and context of your policy.

      Example: In case of Deployment

      Example: In case of Configuration Change

    • Global - Select this option to apply your chosen policies to every deployment pipeline or configurations (existing and future) of all applications in all clusters.

      Example: In case of Deployment

      Example: In case of Configuration Change

  5. Click Save Changes.


Apply Multiple Policies

Who Can Perform This Action?

Users need to have super-admin permissions to apply more policies to a scope.

Apply More Policies to a Scope

  1. Go to Applied Profiles tab.

  2. Use the filters to find the applied profile and scope (e.g., Global, Cluster, Application).

  3. Click the context menu.

  4. Click Manage policy.

  5. Use the Select profiles to apply dropdown and tick the policy/policies you wish to apply.

  6. Click Save Changes.

Apply More Policies in Bulk

  1. Use the checkboxes to select the relevant scopes (e.g., Global, Cluster, Application).

  2. Click the Manage Profiles button on the floating widget.

  3. Click Add.

  4. Use the Select profile to apply dropdown and tick the policy/policies you wish to apply in bulk.

  5. Review the changes if needed, and click Save Changes.

How do multiple policies work if applied together?

If you apply multiple policies together, the user has to meet the approval conditions of all the applied policies. Example 1: if 'Policy A' demands 3 approvals specifically from John, Jane, and Jessy; and if 'Policy B' requires 1 approval from 'Product User Group', the user will have to get 4 approvals. Example 2: if 'Policy A' demands 3 approvals specifically from John, Jane, and Jessy; and if 'Policy B' requires 2 approvals from anyone, the user will still have to get 3 approvals from John, Jane, and Jessy. In short, the stricter conditions from the policies are enforced first and they have to be fulfilled.


Remove Applied Policies

Who Can Perform This Action?

Users need to have super-admin permissions to remove an applied approval policy.

If you have already applied policies and wish to remove some of them from a scope, follow the steps below. The approval conditions of the removed policy will no longer apply to the given scope, and the conditions of other policies (if applied to the same scope) will remain.

Remove Policies Applied to a Scope

  1. Go to Applied Profiles tab.

  2. Use the filters to find the applied profile and scope (e.g., Global, Cluster, Application).

  3. Click the context menu.

  4. Click Manage policy.

  5. In the Select profiles to apply dropdown, click 'x' next to the policy/policies you wish to remove.

  6. Click Save Changes.

Remove Applied Policies in Bulk

  1. Use the checkboxes to select the relevant scopes (e.g., Global, Cluster, Application).

  2. Click the Manage Profiles button on the widget.

  3. Click Remove.

  4. In the Remove Approval Policy dropdown, click 'x' next to the policy/policies you wish to remove.

  5. Review the changes if needed, and click Save Changes.

Note

At least one policy must remain applied to a scope, so you cannot remove all the policies from a scope. Use the delete procedure described below instead.

Delete Applied Policies

Who Can Perform This Action?

Users need to have super-admin permissions to delete an applied policy.

  1. Go to Applied Profiles tab.

  2. Use the filters to find the applied profile(s).

  3. Click the Delete option in the context menu or use the checkboxes to select multiple scopes for deletion.


Delete an Approval Policy

Who Can Perform This Action?

Users need to have super-admin permissions to delete an approval policy.

If you no longer require a given approval policy, you may delete it. This action automatically removes the rules it enforced earlier for both deployments and configuration changes.

  1. Go to Profiles tab.

  2. Click the delete icon next to the profile you wish to delete.


Results

Approving Deployment Request

Assume you created a policy (shown below) that blocks the deployment of a banking application to an environment unless there are two approvals. No user can trigger the deployment unless the images are approved.

  1. The user first requests approval of the intended image. Only those with the necessary permissions will show up in the approver list. Moreover, the user can also opt to notify all users apart from the approvers.

  2. Only those with Image Approver permissions can then approve the request.

  3. The user can then proceed with deploying the approved image.

Approving Configuration Change Request

Assume you created a policy (shown below) that prevents direct changes to the configuration files (Deployment Template, ConfigMaps, Secrets) of a banking application unless there is one approval.

  1. The user first requests approval for pushing a configuration change in Deployment Template/ConfigMap/Secret.

  2. Only those with Configuration Approver permissions can then approve the request.

Deployment Window

Unplanned or last-minute deployments of applications can affect an organization's services. The business impact is especially severe if such disruptions occur during peak hours or critical periods (say, a festive season, or a no-deployment-on-Fridays policy).

Therefore, Devtron comes with a feature called 'Deployment Window' that allows you to define specific timeframes during which application deployments are either blocked or allowed in specific environments. Actions that can potentially impact an existing deployment are also restricted during these periods; these include hibernation, restarting workloads, deletion of workloads, deployment, rollback, and deletion of the CD pipeline (see the Result section below).

However, exempted users can still perform the above actions even during blocked periods.

Types of Deployment Window

Difference between a Blackout Window and Maintenance Window

Technically, both of them are different methods of restricting deployments to an environment. For example, specifying either a blackout window of [8:00 AM to 10:00 PM] or a maintenance window of [10:00 PM to 8:00 AM] essentially does the same job. You can define either of them depending on your use case.


Configuring Deployment Window

Who Can Perform This Action?

Users need to have super-admin permission to configure deployment window.

Go to Global Configurations → Deployment Window.

This involves two parts:

Creating Deployment Window

This involves the process of creating a blackout window or a maintenance window.

  1. In the Windows tab, click + Add Window.

  2. Choose a deployment window type, i.e., maintenance window or blackout window.

  3. In your deployment window, make sure to choose the correct time zone (by default it is determined from the browser you use).

Why Time Zone?

Let's say you are a super-admin located in New Delhi (GMT +5:30) and you wish to restrict midnight deployments according to the Californian timings (GMT -07:00) for your team in the US. Therefore, it's crucial to choose the correct time zone (i.e., GMT -07:00) and then add the duration (see next steps).

This ensures that deployments occur at the intended local time, helping avoid disruptions and facilitating coordinated operations across different regions.

  4. Click + Add duration.

  5. The following options are available for you to enforce the deployment window:

    • Once: Use this to make your deployment window active between two specific dates and times, e.g., 20 Jun 2024, 08:00 PM ➝ 26 Jun 2024, 05:00 PM

    • Daily: Use this to make your deployment window active everyday between specific timings, e.g., daily between 12:00 AM ➝ 06:00 AM

    • Weekly: On selected days at specific timings, e.g., Wed and Sun • 02:00 AM ➝ 05:30 AM

    • Weekly Range: Between days of the week, e.g., Mon (02:00 AM) to Fri (05:30 AM)

    • Monthly: On or between days of the month, e.g., Day 1 (10:30 PM) to Day 2 (06:30 AM)

You can also add Start Date and End Date to your recurring deployment window.

Let's say you wish to enforce a blackout window every weekend to prevent unwanted deployments by your team. If you select a weekly range (e.g., Saturday 12:00 AM to Monday 12:00 AM) and apply the deployment window without specifying dates, the weekend restrictions will persist indefinitely.

However, by specifying a start date and an end date (as shown below), your deployment window will have a defined validity period. This ensures that the deployment window restrictions are temporary and do not extend beyond the intended timeframe.

After clicking Done, you can use the + Add duration button to add more than one duration (for e.g., one monthly and one weekly) in a given deployment window.

  6. You can also determine the users who can take actions (say, deployment) even when restrictions are in place. These can be super-admins, specific users, both, or none.

  7. Enter a display message to show to users whose deployments get blocked, e.g., Try deploying on Monday - Weekend deployment is not a best practice - Contact your Admin. This will help the user understand the restriction better.

  8. Click Save Changes.

If required, you can edit a deployment window to modify it as shown below.

You may delete a deployment window if it's not needed anymore. If the deployment window was applied to any deployment pipeline (application + environment), the restrictions would no longer exist.

Applying Window to Deployment Pipelines

This involves the process of applying the deployment window you created above to your deployment pipeline(s).

  1. Go to the Apply To tab and click the No windows dropdown next to the [application + environment] to which you wish to apply deployment window(s).

  2. Select the deployment windows from the dropdown and click Save Changes.

Bulk Apply

  1. If you wish to apply deployment windows to multiple applications and environments at once, use the checkbox.

    We recommend using the available filters (Application, Environment, Deployment Window) to simplify the process of application + environment selection.

  2. On the floating widget, click Manage Windows.

  3. Click the Add Deployment Windows dropdown to choose the deployment window(s).

  4. Use the Review Changes option to confirm the impacted environment(s) and click Save.

You can remove deployment window(s) applied to one or more deployment pipelines as shown below.


Checking Deployment Window

Who Can Perform This Action?

Users with view only permission or above for an application can view all deployment windows configured for its deployment pipelines.

Overview Page

The Deployment Window section on the Overview page shows the blackout and maintenance windows configured for each environment of the application. If a deployment window doesn't exist for an environment, the message No deployment windows are configured is displayed next to it.

You may click the dropdown icon to view the details which include:

  • Type of deployment window (Blackout/Maintenance)

  • Name and description

  • Frequency of window (once, weekly, monthly, yearly)

  • Duration

App Details Page

Unlike the Overview page which shows deployment windows for all environments, the App Details page does not show all deployment windows configured for the environment. It shows:

  • Active deployment windows

  • Upcoming deployment windows

For example, if the super-admin has configured 4 deployment windows (say 2 Blackout and 2 Maintenance), you will see 4 cards stacked upon each other. However, no cards will be shown if deployment windows aren't configured. You may click on the windows card stack to view the details of active and upcoming deployment windows.

The default time period for showing upcoming deployment windows is 90 days. You can configure this individually for blackout and maintenance windows, via ConfigMap, in the Orchestrator microservice as shown below:
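For example, the two keys below (shown with the 90-day default) control how far ahead upcoming windows are fetched. The ConfigMap name and namespace in this sketch are assumptions for illustration; use the ConfigMap that your orchestrator deployment actually reads.

apiVersion: v1
kind: ConfigMap
metadata:
  # Assumed name and namespace; adjust to your installation
  name: devtron-custom-cm
  namespace: devtroncd
data:
  DEPLOYMENT_WINDOW_FETCH_DAYS_BLACKOUT: "90"
  DEPLOYMENT_WINDOW_FETCH_DAYS_MAINTENANCE: "90"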


Result

The functions below are blocked during an ongoing blackout window or outside a maintenance window.

Only the exempted users specified in the deployment window configuration can perform these actions.

Hibernation

Hibernating an application scales it down and makes it non-functional. To prevent this during a deployment block, hibernation of the application is blocked.

Restart Workloads

Although Kubernetes handles the restart process smoothly, there is a possibility of interruptions or downtime. To avoid this, restarting workloads (say Pod, Deployment, ReplicaSet) of an application is blocked when deployment is restricted.

Deletion of Workloads

Similar to restarting workloads, deletion of workloads might disrupt the desired state and behavior of the application, hence it is barred during a deployment block.

Deployment

Go to the Build & Deploy tab. The CD pipelines with restricted deployment will carry a DO NOT DEPLOY label.

Despite that, if a user selects an eligible image and proceeds to deploy, it will show Deployment is blocked along with a list of exempted users who are allowed to deploy.

Deployments remain blocked not just for manual triggers but also when the trigger mode is automatic. In such cases, if a new CI image is built, the user has to deploy it manually once the deployment block is lifted.

The Deployment History tab will also log whether a given deployment was initiated during a blackout window or outside a maintenance window.

Rollback

Rolling back to an older version, by using a previously deployed image, is barred during a deployment block.

Deletion of CD Pipeline

Go to App Configuration → Workflow.

In Devtron, deleting a CD pipeline affects the current state of the deployed application. It might also impact future deployments, and you will lose information about past deployments, i.e., the Deployment History.

If you attempt to delete any CD pipeline with restricted deployment, it will show Pipeline deletion is blocked.


Impact on Application Groups

Let's say you have 10 applications in your application group, and a blackout window is ongoing for 3 of them. In such a case, if you deploy your application group, those 3 applications will not get deployed. Therefore, you might experience a partial success along with an option to retry the failed deployments.

The same holds true for other bulk actions like hibernate, unhibernate, and restart workloads.

API Tokens

API tokens are access tokens used for authentication. Instead of a username and password, an API token can be used for programmatic access to the Devtron APIs. Devtron lets you generate API tokens with the desired access. Only super-admin users can generate API tokens and view the generated tokens.

Generate API Token

To generate API tokens, go to Global Configurations -> Authorization -> API tokens and click Generate New Token.

  • Enter a name for the token.

  • Add Description.

  • Select an expiration date for the token (7 days, 30 days, 60 days, 90 days, custom and no expiration).

  • To select a custom expiration date, select Custom from the drop-down list. In the adjacent field, you can select your custom expiration date for the API token.

  • You can assign permission to the token either with:

    • Super admin permission: To generate a token with super admin permission, select Super admin permission.

    • Specific permissions: Selecting Specific permissions option allows you to generate a token with a specific role for:

      • Devtron Apps

      • Helm Apps

      • Kubernetes Resources

      • Chart Groups

  • Click Generate Token.

A pop-up window will appear on the screen from where you can copy the API token.

Use API Token

Once a Devtron API token has been generated, you can use it to call Devtron APIs from any API testing tool such as JMeter, Postman, or Citrus. Postman is used here as an example.

Open Postman. Enter the request URL with POST method and under HEADERS, enter the API token as shown in the image below.

In the Body section, provide the API payload as shown below and click Send.

As soon as you click Send, the create-application API will be triggered and a new Devtron app will be created as specified in the payload.
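If you prefer the command line over Postman, the same call can be made with curl. In the sketch below, the host, API path, and payload fields are illustrative assumptions; substitute the actual route and body from your Devtron API reference. The API token is passed in a request header (named token here).

# Placeholders: replace the host, path, and payload with real values
DEVTRON_URL="https://devtron.example.com"
API_TOKEN="<generated-api-token>"

curl -X POST "$DEVTRON_URL/orchestrator/<api-path>" \
  -H "Content-Type: application/json" \
  -H "token: $API_TOKEN" \
  -d '{"appName": "my-sample-app"}'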

Update API Token

To set a new expiration date or to change the permissions assigned to the token, you need to update the API token in Devtron. To update the API token, click the token name or the edit icon.

To set a new expiration date, you can regenerate the API token. Any scripts or applications using this token must be updated. To regenerate a token, click Regenerate token.

A pop-up window will appear on the screen from where you can select a new expiration date.

Select a new expiration date and click Regenerate token.

This will generate a new token with a new expiration date.

To update API token permissions, give the permissions as you want to and click Update Token.

Delete API Token

To delete an API token, click the delete icon. Any applications or scripts using this token will no longer be able to access the Devtron API.

Permission Groups

Using Permission groups, you can assign a user to a particular group, and the user inherits all the permissions granted to that group.

The advantage of Permission groups is that you can define a set of privileges, such as create, edit, or delete, for a given set of resources and share them among the users within the group.

Add Group

Go to Global Configurations → Authorization → Permissions groups → Add group.

Enter the Group Name and Description.

Devtron Apps Permissions

In the Devtron Apps option, you can grant a group permissions to manage custom apps created using Devtron.

Provide the information in the following fields:

You can add multiple rows for Devtron Apps permission.

Once you have finished assigning the appropriate permissions for the groups, Click Save.

Helm Apps Permissions

In the Helm Apps option, you can grant a group permissions to manage Helm apps deployed from Devtron or outside Devtron.

Provide the information in the following fields:

You can add multiple rows for Helm Apps permission.

Once you have finished assigning the appropriate permissions for the groups, Click Save.

Jobs

In Jobs option, you can provide access to a group to manage permission for jobs created using Devtron.

Provide the information in the following fields:

You can add multiple rows for Jobs permission.

Once you have finished assigning the appropriate permissions for the groups, Click Save.

Kubernetes Resources Permissions

Only super-admin users can see the Kubernetes Resources tab and grant other users permission to access the Resource Browser.

To provide Kubernetes resource permission, click Add permission.

On the Kubernetes resource permission, provide the information in the following fields:

You can add multiple rows for Kubernetes resource permission.

Once you have finished assigning the appropriate permissions for the groups, Click Save.

Chart Group Permissions

In Chart group permission option, you can manage the access of groups for Chart Groups in your project.

You can only give users the ability to create or edit, not both.

Click Save once you have configured all the required permissions for the groups.

Edit Permissions Groups

You can edit the permission groups by clicking the downward arrow.

Edit the permission group.

Once you are done editing the permission group, click Save.

If you want to delete a particular permission group, click Delete.

Notifications

Introduction

With the Manage Notification feature, you can manage the notifications for your build and deployment pipelines. You can receive the notifications on Slack or via e-mail.

Go to Global Configurations -> Notifications.

Notification Configurations:

Click Configurations to add notification configuration in one of the following options:

Manage SES Configurations

You can manage the SES configuration to receive e-mails by entering the valid credentials. Make sure your e-mail is verified by SES.

Click Add and configure SES.

Click Save to save your SES configuration or e-mail ID.

Manage SMTP Configurations

You can manage the SMTP configuration to receive e-mails by entering the valid credentials. Make sure your e-mail is verified by SMTP.

Click Add and configure SMTP.

Click Save to save your SMTP configuration or e-mail ID.

Manage Slack Configurations

You can manage the Slack configurations to receive notifications on your preferred Slack channel.

Click Add to add a new Slack channel.

Click Save and your Slack channel will be added.

Manage Notifications

Click Add New to add a new notification.

Manage Slack Notifications

Send To

Select Pipelines

  • To fetch the pipelines of an application, project, or environment:

    • Choose a filter type (environment, project, or application).

    • You will see a list of pipelines corresponding to your selected filter type; you can select any number of pipelines. For each pipeline, there are three types of events: Trigger, Success, and Failure. Select the checkboxes for the events on which you want to receive notifications.

Click Save when you are done with your Slack notification configuration.

Manage SES Notifications

Send To

  • Click the Send To box and select the e-mail address/addresses to which you want to send e-mail notifications. Make sure the e-mail IDs are SES verified.

Select Pipelines

  • To fetch the pipelines of an application, project, or environment:

    • Choose a filter type (environment, project, or application).

    • You will see a list of pipelines corresponding to your selected filter type; you can select any number of pipelines. For each pipeline, there are three types of events: Trigger, Success, and Failure. Select the checkboxes for the events on which you want to receive notifications.

Click Save once you have configured the SES notification.

Manage SMTP Notifications

Send To

  • Click the Send To box and select the e-mail address/addresses to which you want to send e-mail notifications. Make sure the e-mail IDs are SMTP verified.

Select Pipelines

  • To fetch the pipelines of an application, project, or environment:

    • Choose a filter type (environment, project, or application).

    • You will see a list of pipelines corresponding to your selected filter type; you can select any number of pipelines. For each pipeline, there are three types of events: Trigger, Success, and Failure. Select the checkboxes for the events on which you want to receive notifications.

Click Save once you have configured the SMTP notification.


dex config if you want to integrate login with SSO (optional) for more information check

Figure 1: Adding a Cluster

- If you have access to the cluster, use this option.

- For airgapped-related use-cases, use this option.

Figure 2: Choosing Cluster Type
Figure 3: Selecting 'Add Kubernetes Cluster'

Figure 4: Choosing a Method

Refer to learn the process of getting Server URL and bearer token.

Enter the Server URL of your cluster (with https) Note: We recommend using a instead of cloud hosted URL.

Figure 5: Enter Cluster Credentials

If you have a kubeconfig file ready, you may skip the above process and refer instead.

Figure 6: Choosing Kubeconfig Option
Figure 7: Get Cluster List from Kubeconfig
Figure 8: Clicking Save

Choose Method of Connection

Figure 9: Choosing Direct Connection

Via Proxy

Limitation: Deployments via are not recommended for clusters connected via proxy.

Figure 10: Choosing 'Via Proxy'

Via SSH Tunnel

Limitation: Deployments via are not recommended for clusters connected via SSH Tunnel.

Figure 11: Choosing 'Via SSH Tunnel'

If your cluster is managed (e.g., , , ), you might need to download these certificates from your cloud provider’s dashboard or API.

The CA certificate (see: ) used to verify the Kubernetes API server’s identity.

Figure 12: Using Secure TLS Connection

Enable application metrics to configure Prometheus as shown below. In case it is not available, make sure to install the Monitoring (Grafana) integration from to configure Prometheus.

Figure 13: Enabling Application Metrics

Add Isolated Cluster

Figure 14: Selecting Isolated Cluster
Figure 15: Saving Isolated Cluster
Figure 16: New Isolated Cluster

When you deploy to an isolated environment, Devtron automatically packages application manifests and images into a . You can then either:

Push it to an (provided pushing of helm package is enabled), allowing manifests to be pulled manually or automatically via Devtron on air-gapped cluster (if pull access to the OCI registry is available).

Whether it is a or , a newly created cluster initially has no environments, so click Add Environment.

Figure 17: Adding an Environment
Figure 18: Saving an Environment

Add/Edit labels to namespace - You can attach labels to your specified namespace in the Kubernetes cluster. Using labels will help you filter and identify resources via CLI or other Kubernetes tools. to know more about labels.

Figure 19: Adding Labels to Namespace
Figure 20: Newly Created Environment in the Cluster
Figure 21: Editing Environment in the Cluster
Figure 22: Updating Environment in the Cluster
Figure 23: Deleting Environment

Environment deletion is not allowed if any application has a CD pipeline corresponding to the environment. In such a case, go to and delete the deployment pipeline first, and then return to delete the environment. This action is irreversible, so make sure no critical applications or resources depend on the environment before deleting.

Figure 24: Confirming Environment Deletion

must be installed on the bastion.

We recommend using a self-hosted URL instead of a cloud-hosted URL. Refer the benefits of a .

Figure 25: Generating Cluster Credentials

If you wish to use as a means to access the Devtron services available in your cluster, you can configure it either during the installation or after the installation of Devtron.

If you have successfully configured Ingress, refer .

If you are installing Devtron, you can enable Ingress either via or by using to specify the desired Ingress settings.

As an alternative to the method, you can enable Ingress using ingress-values.yaml instead.

Next, create ingress to access Devtron by applying the devtron-ingress.yaml file. The file is also available on this . You can access Devtron from any host after applying this yaml.

For k8s versions < 1.19, :

Provide a name to your Git provider. Note: This name will be available on the App Configuration > drop-down list.

Provide the Git host URL. As an example: for GitHub, for GitLab etc.

You can enable or disable a git account. Enabled git accounts will be available on the App Configuration > .

Figure 1: Global Configuration - GitOps

Select any one of the to configure GitOps.

Figure 2: Selecting a Provider

Fill all the mandatory fields. Refer to know more about the respective fields.

Figure 3: Entering Git Credentials

Select this option if you wish to use your own GitOps repo. This is ideal if there are any confidentiality/security concerns that prevent you from giving us admin access. Therefore, the onus is on you to create a GitOps repo with your Git provider, and then on Devtron. Make sure the Git credentials you provided in Step 3 have at least read/write access. Choosing this option will unlock a page under the tab.

Figure 4: Need for User-defined Git Repo
Using Feature Flag

Go to .

A GitHub organization. If you don't have one, refer .

Enter the GitHub organization name. If you do not have one, refer .

Provide your personal access token (PAT). It is used as an alternate password to authenticate your GitHub account. If you do not have one, create a GitHub PAT . Access Required: repo - Full control of private repositories (able to access commit status, deployment status, and public repositories). admin:org - Full control of organizations and teams (Read and Write access). May not be required if you are using user-defined git repo. delete_repo - Grants delete repo access on private repositories.

A GitLab group. If you don't have one, refer .

Enter the GitLab group ID. If you do not have one, refer .

Provide your personal access token (PAT). It is used as an alternate password to authenticate your GitLab account. If you do not have one, create a GitLab PAT . Access Required: api - Grants complete read/write access to the scoped project API. write_repository - Allows read/write access (pull, push) to the repository.

An organization on Azure DevOps. If you don't have one, refer .

A project in your Azure DevOps organization. Refer .

Enter the Org URL of Azure DevOps. Format should be https://dev.azure.com/<org-name>, where <org-name> represents the organization name, e.g.,

Enter the Azure DevOps project name. If you do not have one, refer .

Provide your Azure DevOps access token. It is used as an alternate password to authenticate your Azure DevOps account. If you do not have one, create a Azure DevOps access token . Access Required: code - Grants the ability to read source code and metadata about commits, change sets, branches, and other version control artifacts. .

- Select this if you wish to store GitOps configuration in a web-based Git repository hosting service offered by Bitbucket.

- Select this if you wish to store GitOps configuration in a git repository hosted on a self-managed Bitbucket Data Center (on-prem).

A workspace in your Bitbucket account. Refer .

Figure 5: Entering Details of Bitbucket Cloud

Enter the Bitbucket workspace ID. If you do not have one, refer

Enter the Bitbucket project key. If you do not have one, refer . Note: If the project is not provided, the repository is automatically assigned to the oldest project in the workspace.

Provide your personal access token (PAT). It is used as an alternate password to authenticate your Bitbucket Cloud account. If you do not have one, create a Bitbucket Cloud PAT . Access Required: repo - Full control of repositories (Read, Write, Admin, Delete) access.

Figure 6: Entering Details of Bitbucket Data Center

Enter the Bitbucket project key. Refer .

Pick a for your organization. You have the option to select create free organization also.

For more information about the plans available for your team, see . You can also refer official doc page for more detail.

You can also refer official page for more details.

You can also refer for more details.

While are typically used for storing built by the CI Pipeline, an OCI registry can store container images as well as other artifacts such as . In other words, all container registries are OCI registries, but not all OCI registries are container registries.

Figure 1: Container/OCI Registry
Figure 2: Add a Registry

Choose a provider from the Registry provider dropdown. View the .

Provide a name to your registry, this name will appear in the Container Registry drop-down list available within the section of your application

The credential input fields may differ depending on the registry provider, check

Tick this checkbox if you wish to use the repository to push container images. This comes selected by default and you may untick it if you don't intend to push container images after a CI build. If you wish to to use the same repository to pull container images too, read .

Before you begin, create an and attach the ECR policy according to the authentication type.

User Auth: It is a key-based authentication, attach the ECR policy (AmazonEC2ContainerRegistryFullAccess) to the .

Provide the password/ corresponding to your docker hub account. It is recommended to use Token for security purpose.

For Azure, the service principal authentication method can be used to authenticate with username and password. Visit this to get the username and password for this registry.

JSON key file authentication method can be used to authenticate with username and service account JSON file. Visit this to get the username and service account JSON file for this registry.

JSON key file authentication method can be used to authenticate with username and service account JSON file. Please follow to get the username and service account JSON file for this registry.

You can create a Pod that uses a to pull an image from a private container registry. You can use any private container registry of your choice, for e.g., .

Figure 3: Using Registry Credentials
Figure 4: Using Image Pull Secret

Only users with privileges can create SSO configuration. Devtron uses for authenticating a user against the identity provider.

Make sure that you have a .

id : Identity provider platform which is a unique ID in string. (Refer to

After configuring an SSO for authentication, you need to in Devtron, else your users won't be able to log in via SSO.

In case you have enabled auto-assign permissions in or , relevant must also exist in Devtron for a successful login.

You can add more chart repositories to Devtron. Once added, they will be available in the All Charts section of the .

Only a can configure SSO. If you are setting up SSO for the first time, use instead.

A Google Cloud account to create and manage OAuth credentials. If you don’t have one, you must create it at the .

Ensure that the is correctly configured in Devtron. This is crucial because the Redirect URI is generated based on the Host URL.

Figure 1: Get the Redirect URI

Access and create a new project or select an existing one.

Google does not support IP addresses as valid redirect URIs. You must use a valid domain name () accessible over HTTPS.

Figure 2a: Creating OAuth Client
Figure 2b: Client ID Created
Figure 2c: Get the Client ID and Client Secret

For a detailed step-by-step guide, refer to Google’s official documentation: .

Figure 3: Configuring SSO in Devtron
Figure 4a: Configuring User Permissions
Figure 4b: Adding User with required permissions

For detailed steps on managing user permissions, refer to the .


Figure 1: Deployment Charts

This video contains a quick walkthrough of the steps mentioned in the section of this page and the subsequent uploading of the deployment chart on Devtron.

Note: Chart.yaml is a metadata file that gets created when you create a . The following table consists the fields that are relevant to you in Chart.yaml.

to view a sample 'Chart.yaml' file.

Figure 2: Filepath of Image Descriptor Template

In the root directory of your chart, Devtron expects an app-values.yaml file. It uses this file to determine the values to be displayed on the as shown below.

Figure 3: Chart Values
Figure 4: Blank Chart Values

The release-values.yaml file contains essential values needed for deployment that aren’t covered by . For example:

Some dynamic values (such as IMAGE_TAG and IMAGE_REPO from the ) are populated here because they are needed for deployment.

Figure 5: Global Configurations - Deployment Charts
Figure 6: Upload Chart Button
Figure 7: Uploading .tgz File
Figure 8: Cancelling Upload

The files uploaded are validated ()

The archive file do not match the ()

You are uploading a newer version of an existing chart ()

There already exists a chart with the same version ()

Figure 9: Viewing Deployment Charts

You can add new by clicking Upload Chart.

Once you successfully upload a deployment chart, you can start using it as a deployment template for your application. Refer to know more.

Figure 10: Using Deployment Charts

The deployment strategy for a deployment chart is fetched from the chart template and cannot be configured in the .

Editing GUI Schema of Deployment Charts

This section is an extension of feature. Refer the document to know more about the significance of having a custom GUI schema for your deployment templates.

Figure 11: Edit GUI Schema Button

A GUI schema is available for you to edit in case of Devtron charts. In case of custom charts, you may have to define a GUI schema yourself. To know how to create such GUI schema, refer .

Figure 12: Editable Schema
Figure 13: Refer YAML Button
Figure 14: Preview GUI Button
Figure 15: Save Changes

Install and on your server or cloud environment.

Create a new for your application.

Figure 1: Creating Client on Keycloak
Figure 2: Client ID and Name
Figure 3: Enabling Client Authentication Toggle

to know where to find DEVTRON_BASE_URL.

Figure 4: Entering Callback/Redirect URIs
Figure 5: Obtaining Client Secret
Figure 6: Creating User Data
Figure 7: Adding User Password
Figure 8: OpenID Endpoint Configuration Link
Figure 9: Locating Issuer URL

Here, we will set up an OIDC SSO and enter the values we obtained in the .

Figure 10: Choosing OIDC SSO
Figure 11: Populating Correct Orchestrator URL

In the issuer field, paste the URL you got while .

In the clientID field, paste the ID you entered while .

In the clientSecret field, paste the secret you got under .

In the redirectURI field, make sure to enter the same redirect URI you gave in step 4 of .

Figure 12: Sample Keycloak SSO Config
Figure 13: Adding Users to Devtron
Figure 14: Entering User Data and Permissions

Assign necessary permissions to this new user. Refer to know more.

Now, you may log out and test the Keycloak OIDC login method using the . Clicking the Login with Oidc button will land you on Keycloak's login page.

Figure 15a: Login using OIDC method
Figure 15b: Keycloak's Login Page

Auto-assign Permissions

Since Microsoft supports , this feature further simplifies the onboarding process of organizations having a large headcount of users. It also eliminates repetitive permission assignment by automatically mapping your Azure AD groups to Devtron's during single sign-on (SSO) login.

Enabling Permission Auto-assignment

If you've defined groups in your Active Directory, you can create corresponding permission groups in Devtron with the same names. When members of those Active Directory groups first log in to Devtron, they'll automatically inherit the permissions from their Devtron permission group. This means you can't manually adjust or add mapped to a permission group.

Once you save the configuration with this feature enabled, existing user permissions will be cleared and the future permissions will be managed through linked to Azure Active Directory (Microsoft Entra ID) groups.

Auto-assign Permissions

Since LDAP supports creation of User Groups, this feature simplifies the onboarding process of organizations having a large headcount of users. It also eliminates repetitive permission assignment by automatically mapping your LDAP User groups to Devtron's during single sign-on (SSO) login.

Enabling Permission Auto-assignment

If you've created user groups in LDAP, you can create corresponding permission groups in Devtron with the same names. When members of those user groups first log in to Devtron, they'll automatically inherit the permissions from their Devtron permission group. This means you can't manually adjust or add mapped to a permission group.

Once you save the configuration with this auto-assign feature enabled, existing user permissions will be cleared and the future permissions will be managed through linked to LDAP user groups.

Introduction


As shown in step 2 of , you can choose multiple policies and apply them to a scope (e.g., Global, Cluster, Application, Environment, Base Configuration). However, if you have already applied and now you wish to apply more policies to the same scope, you may do so by following either of the below steps:

At least one policy must remain applied to a scope, so you cannot remove all the policies from a scope. You may use the instead.

If you have already applied policies to a scope (e.g., Global, Cluster, Application) and wish to delete all of them from that given scope, follow the steps below. Note: This will not you originally created. Moreover, deployment pipelines may still continue inheriting profiles from higher scopes (e.g., Global, Cluster, Application).

If is configured in Devtron, the approver gets notified via email. This enables the approver to take an action directly from the mail, such as View Request and Approve Request.

If is configured in Devtron, the approver gets notified via email. Therefore, the approver can take an action directly from the mail as shown below.

Introduction

Name
Blackout Window
Maintenance Window

Give an appropriate name to your deployment window (e.g., weekend restrictions) and write a brief description that explains what the deployment window does. Refer to view the pages where the window name and description will appear.

Option
When To Use

The Deployment window section shows the blackout and maintenance windows configured for each of the application.

Similar to , deletion of workloads might disrupt the desired state and behavior of the application, hence it is barred during a deployment block.

Just like application, are also subjected to deployment windows.

The section for Specific permissions contains a drop-down list of all existing groups for which a user has an access. This is an optional field and more than one groups can be selected for a user.

You can either grant permission to a user group or specific permissions to manage access for:

The Devtron Apps option will be available only if you install .

Dropdown
Description
Dropdown
Description
Dropdown
Description

In Kubernetes Resources option, you can provide permission to view, inspect, manage, and delete resources in your clusters from page in Devtron. You can also create resources from the Kubernetes Resource Browser page.

Dropdown
Description

The Chart group permission option will be available only if you install .

Action
Permissions

Key
Description
Key
Description
Key
Description

When you click on the Send to box, a drop-down will appear, select your slack channel name if you have already configured Slack Channel. If you have not yet configured the Slack Channel,

If you have not yet configured SES, .

If you have not yet configured SMTP, .


  • Definition: A blackout window is a time period during which deployments are not allowed; a maintenance window is the only time period during which deployments are allowed.

  • Use: Use a blackout window to block deployments when systems are already stable and running critical business during peak hours; use a maintenance window to allow deployments preferably during non-business hours so as to minimize any negative impact on end-users.

  • In case of overlap: A blackout window gets higher priority over a maintenance window.

  • Start Date: Use this to enforce restrictions of a deployment window only after a specific date.

  • End Date: Use this to stop restrictions of a deployment window beyond a specific date.

  • Both Start Date and End Date: Use these to confine your deployment window restrictions between two dates.

DEPLOYMENT_WINDOW_FETCH_DAYS_BLACKOUT: "90"
DEPLOYMENT_WINDOW_FETCH_DAYS_MAINTENANCE: "90"

  • View: Enable View to view chart groups only.

  • Create: Enable Create if you want the users to create, view, edit, or delete the chart groups.

  • Edit:

    • Deny: Select the Deny option from the drop-down list to restrict the users from editing the chart groups.

    • Specific chart groups: Select the Specific Charts Groups option from the drop-down list and then select the chart group for which you want to allow users to edit.


  • Configuration Name: Provide a name to the SES configuration.

  • Access Key ID: Valid AWS Access Key ID.

  • Secret Access Key: Valid AWS Secret Access Key.

  • AWS Region: Select the AWS Region from the drop-down menu.

  • E-mail: Enter the SES verified e-mail ID on which you wish to receive e-mail notifications.

  • Configuration Name: Provide a name to the SMTP configuration.

  • SMTP Host: Host of the SMTP server.

  • SMTP Port: Port of the SMTP server.

  • SMTP Username: Username of the SMTP server.

  • SMTP Password: Password of the SMTP server.

  • E-mail: Enter the SMTP verified e-mail ID on which you wish to receive e-mail notifications.


  • Project: Select a project from the drop-down list to which you want to give the group permission. You can select only one project at a time. Note: If you want to select more than one project, click Add row.

  • Environment: Select the specific environment or all environments from the drop-down list. Note: If you select the All environments option, a user gets access to all the current environments, including any new environment which gets associated with the application later.

  • Application: Select the specific applications or all applications from the drop-down list corresponding to your selected Environments. Note: If you select the All applications option, a user gets access to all the current applications, including any new application which gets associated with the project later.

  • Role: View only, Build and Deploy, Admin, or Manager.

  • Project: Select a project from the drop-down list to which you want to give the group permission. You can select only one project at a time. Note: If you want to select more than one project, click Add row.

  • Environment or cluster/namespace: Select the specific environment or all existing environments in the default cluster from the drop-down list. Note: If you select the all existing + future environments in default cluster option, a user gets access to all the current environments, including any new environment which gets associated with the application later.

  • Application: Select the specific application or all applications from the drop-down list corresponding to your selected Environments. Note: If the All applications option is selected, a user gets access to all the current applications, including any new application which gets associated with the project later.

  • Role: View only, View & Edit, or Admin.

  • Project: Select a project from the drop-down list to which you want to give the group permission. You can select only one project at a time. Note: If you want to select more than one project, click Add row.

  • Job Name: Select the specific job name or all jobs from the drop-down list. Note: If you select the All Jobs option, the user gets access to all the current jobs, including any new job which gets associated with the project later.

  • Workflow: Select the specific workflow or all workflows from the drop-down list. Note: If you select the All Workflows option, the user gets access to all the current workflows, including any new workflow which gets associated with the project later.

  • Environment: Select the specific environment or all environments from the drop-down list. Note: If you select the All environments option, the user gets access to all the current environments, including any new environment which gets associated with the project later.

  • Role: View only, Run job, or Admin.

  • Cluster: Select a cluster from the drop-down list to which you want to give the user permission. You can select only one cluster at a time. Note: To add another cluster, click Add another.

  • Namespace: Select the namespace from the drop-down list.

  • API Group: Select the specific API group or All API groups from the drop-down list corresponding to the K8s resource.

  • Kind: Select the kind or All kind from the drop-down list corresponding to the K8s resource.

  • Resource name: Select the resource name or All resources from the drop-down list to which you want to give the user permission.

  • Role: View or Admin.

  • Slack Channel: Name of the Slack channel on which you wish to receive notifications.

  • Webhook URL: Incoming webhook URL of the Slack channel to which notifications will be posted.

  • Project: Select the project name to control user access.

Scoped Variables

Introduction

In any piece of software or code, variables are used for holding data such as numbers or strings. Variables are created by declaring them, which involves specifying the variable's name and type, followed by assigning it a value.

Devtron offers super-admins the capability to define scoped variables (key-value pairs). It means, while the key remains the same, its value may change depending on the following context:

  • Global: Variable value will be universally the same throughout Devtron.

  • Cluster: Variable value will apply to a specific cluster.

  • Application: Variable value will apply to a specific application.

  • Env: Variable value will apply to a specific environment.

  • ApplicationEnv: Variable value will apply to a specific application deployed on a specific environment.

Advantages of using scoped variables

  • Reduces repetition: The configuration management team can centrally maintain static data.

  • Simplifies bulk edits: All the places that use a variable get updated when you change the value of the variable.

  • Keeps data secure: You can decide the exposure of a variable's value to prevent misuse or leakage of sensitive data.


How to Define a Scoped Variable

On Devtron, a super-admin can download a YAML template. It will contain a schema for defining the variables.

Download the Template

  1. From the left sidebar, go to Global Configurations → Scoped Variables

  2. Click Download template.

  3. Open the downloaded template using any code editor (say VS Code).

Enter the Values

The YAML file contains key-value pairs that follow the below schema:

  • apiVersion (string): The API version of the resource (comes pre-filled)

  • kind (string): The kind of resource, i.e., Variable (comes pre-filled)

  • spec (object): The complete specification object containing all the variables

  • spec.name (string): Unique name of the variable, e.g., DB_URL

  • spec.shortDescription (string): A short description of the variable (up to 120 characters)

  • spec.notes (string): Additional details about the variable (will not be shown on UI)

  • spec.isSensitive (boolean): Whether the variable value is confidential (will not be shown on UI if true)

  • spec.values (array): The complete values object containing all the variable values as per context

The spec.values array further contains the following elements:

  • category (string): The context, e.g., Global, Cluster, Application, Env, ApplicationEnv

  • value (string): The value of the variable

  • selectors (object): A set of selectors that restrict the scope of the variable

  • selectors.attributeSelectors (object): A map of attribute selectors to values

  • selectors.attributeSelectors.<selector_key> (string): The key of the attribute selector, e.g., ApplicationName, EnvName, ClusterName

  • selectors.attributeSelectors.<selector_value> (string): The value of the attribute selector

Here's a truncated template containing the specification of two variables for your understanding:

apiVersion: devtron.ai/v1beta1
kind: Variable
spec:

# First example of a variable
  - name: DB_URL
    shortDescription: My application's customers are stored
    notes: The DB is a MySQL DB running version 7.0. The DB contains confidential
      information.
    isSensitive: true
    values:
      - category: Global
        value: mysql.example.com

# Second example of a variable
  - name: DB_Name
    shortDescription: My database name to recognize the DB
    notes: NA
    isSensitive: false
    values:
      - category: Global
        value: Devtron
      - category: ApplicationEnv
        value: app1-p
        selectors:
          attributeSelectors:
            ApplicationName: MyFirstApplication
            EnvName: prod

Upload the Template

  1. Once you save the YAML file, go back to the screen where you downloaded the template.

  2. Use the file uploader utility to upload your YAML file.

  3. The content of the file will be uploaded for you to review and edit. Click Review Changes.

  4. You may check the changes between the last saved file and the current one before clicking Save.


How to Edit an Existing Scoped Variable

Only a super-admin can edit existing scoped variables.

Option 1: Directly edit using the UI

Option 2: Reupload the updated YAML file

Reuploading the YAML file replaces the previous file, so any variable that existed in the previous file but not in the latest one will be lost.


How to Use a Scoped Variable

Once a variable is defined, it can be used by your authorized users on Devtron. A scoped variable widget would appear only on the screens that support its usage.

  • Workflow Editor → Edit build pipeline → Pre-build stage (tab)

  • Workflow Editor → Edit build pipeline → Post-build stage (tab)

  • Workflow Editor → Edit deployment pipeline → Pre-Deployment stage (tab)

  • Workflow Editor → Edit deployment pipeline → Post-Deployment stage (tab)

  • Deployment Template

  • ConfigMaps

  • Secrets

Upon clicking on the widget, a list of variables will be visible.

Use the copy button to copy a relevant variable of your choice.

It would appear in the following format upon pasting it within an input field: @{{variable-name}}
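
For instance, here is a minimal sketch of how the DB_URL variable from the earlier template could be referenced inside a deployment template. The EnvVariables field follows Devtron's reference deployment chart and the DATABASE_URL name is a made-up assumption; only the @{{...}} placeholder format comes from the documentation above:

# Illustrative snippet only: EnvVariables and DATABASE_URL are assumptions.
# The @{{DB_URL}} placeholder is resolved by Devtron with the value that
# matches the current scope (application, environment, cluster, or global).
EnvVariables:
  - name: DATABASE_URL
    value: "@{{DB_URL}}"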


Order of Precedence

When multiple values are associated with a scoped variable, the precedence order is as follows, with the highest priority at the top:

  1. Environment + App

  2. App

  3. Environment

  4. Cluster

  5. Global

Example

  1. Environment + App: This is the most specific scope, and it will take precedence over all other scopes. For example, the value of DB name variable for the app1 application in the prod environment would be app1-p, even though there is a global DB name variable set to Devtron. If a variable value for this scope is not defined, the App scope will be checked.

  2. App: This is the next most specific scope, and it will take precedence over the Environment, Cluster, and Global scopes. For example, the value of DB name variable for the app1 application would be project-tahiti, even though the value of DB name exists in lower scopes. If a variable value for this scope is not defined, the Environment scope will be checked.

  3. Environment: This is the next most specific scope, and it will take precedence over the Cluster and Global scopes. For example, the value of DB name variable in the prod environment would be devtron-prod, even though the value of DB name exists in lower scopes. If a variable value for this scope is not defined, the Cluster scope will be checked.

  4. Cluster: This is the next most specific scope, and it will take precedence over the Global scope. For example, the value of DB name variable in the gcp-gke cluster would be Devtron-gcp, even though there is a global DB name variable set to Devtron. If a variable value for this scope is not defined, the Global scope will be checked.

  5. Global: This is the least specific scope, and it will only be used if no variable values are found in other higher scopes. The value of DB name variable would be Devtron.
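
Putting the example together, a hypothetical spec.values array for the DB name variable covering all five scopes could look like the snippet below. The application, environment, and cluster names are taken from the example above and are purely illustrative:

# Hypothetical values array for a DB_Name variable illustrating the precedence order
values:
  - category: ApplicationEnv        # Environment + App (highest priority)
    value: app1-p
    selectors:
      attributeSelectors:
        ApplicationName: app1
        EnvName: prod
  - category: Application           # App
    value: project-tahiti
    selectors:
      attributeSelectors:
        ApplicationName: app1
  - category: Env                   # Environment
    value: devtron-prod
    selectors:
      attributeSelectors:
        EnvName: prod
  - category: Cluster               # Cluster
    value: Devtron-gcp
    selectors:
      attributeSelectors:
        ClusterName: gcp-gke
  - category: Global                # Global (lowest priority)
    value: Devtron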


List of Predefined Variables

There are some system variables that exist by default in Devtron that you can readily use if needed:

  • DEVTRON_NAMESPACE: Provides the name of the namespace

  • DEVTRON_CLUSTER_NAME: Provides the name of the cluster configured on Devtron

  • DEVTRON_ENV_NAME: Provides the name of the environment

  • DEVTRON_IMAGE_TAG: Provides the image tag associated with the container image

  • DEVTRON_IMAGE: Provides full image path of the container image, e.g., gcr.io/k8s-minikube/kicbase:v0.0.39

  • DEVTRON_APP_NAME: Provides the name of the application on Devtron

Currently, these variables do not appear in the scoped variable widget, but you may use them.

Pull Image Digest

Introduction

Devtron offers the option to pull container images using digest. Though pull image digest can be enabled by an application admin for a given CD pipeline, Devtron also allows super-admins to enable it at the environment level.

This helps in better governance and reduces repetitive effort if you wish to manage pull image digest for multiple applications across environments. As a super-admin, you can decide whether to enable pull image digest for all environments or only for specific environments.
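
To illustrate what pulling by digest means, the sketch below shows how the container image reference in the rendered manifest changes once the digest is used instead of the tag (the digest value here is made up):

# With pull image digest disabled, the image is pulled by its mutable tag:
# image: gcr.io/k8s-minikube/kicbase:v0.0.39
# With pull image digest enabled, the same image is pinned to its immutable digest:
image: gcr.io/k8s-minikube/kicbase@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef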

Who Can Perform This Action?

Users need to have super-admin permission to enable pull image digest at environment level.


Steps to Enable Pull Image Digest

From the left sidebar, go to Global Configurations → Pull Image Digest.

For all Environments

This is for enabling pull image digest for deployment to all environments.

  1. Enable the toggle button next to Pull image digest for all existing & future environments.

  2. Click Save Changes.

For Specific Environments

This is for enabling pull image digest for specific environments. Only those applications deploying to the selected environment(s) will have pull image digest enabled in their CD pipelines.

  1. Use the checkbox to choose one or more environments present within the list of clusters you have on Devtron.

  2. Click Save Changes.

External Links

External Links allow you to connect third-party applications to your Devtron dashboard for seamless monitoring, debugging, logging, and analysis of your applications. You can select from pre-defined third-party applications such as Grafana and link them to your application for quick access.

Configured external links will be available on the App details page. You can also integrate Document or Folder using External Links.

Some of the third-party applications which are pre-defined on Devtron Dashboard are:

  • Grafana

  • Kibana

  • Newrelic

  • Coralogix

  • Datadog

  • Loki

  • Cloudwatch

  • Swagger

  • Jira etc.

Use Case for Monitoring Tool

To monitor/debug an application using a specific Monitoring Tool (such as Grafana, Kibana, etc.), you may need to navigate to the tool's page, then to the respective app/resource page.

External Links can take you directly to the tool's page, which includes the context of the application, environment, pod, and container.

Prerequisites

Before you begin, configure an application in the Devtron dashboard.

  • Super admin access

  • Monitoring tool URL

Add an External Link

  1. On the Devtron dashboard, go to Global Configurations from the left navigation pane.

  2. Select External links.

  3. Select Add Link.

  4. On the Add Link page, select the external link (e.g. Grafana) which you want to link to your application from Webpage.

The following fields are provided on the Add Link page:

Field
Description

Link name

Provide name of the link.

Description

Description of the link name.

Show link in

  • All apps in specific clusters: Select this option to select the cluster.

  • Specific applications: Select this option to select the application.

Clusters

Choose the clusters for which you want to configure the selected external link.

  • Select one or more than one cluster to enable the link on the specified clusters.

  • Select All Clusters to enable the link on all the clusters.

Applications

Choose the applications for which you want to configure the selected external link.

  • Select one or more applications to enable the link on the specified applications.

  • Select All applications to enable the link on all the applications. Note: If you enable `App admins can edit`, then you can view the selected links on the App-Details page.

URL Template

The configured URL Template is used by apps deployed on the selected clusters/applications. By combining one or more of the env variables, a URL with the structure shown below can be created: http://www.domain.com/{namespace}/{appName}/details/{appId}/env/{envId}/details/{podName} If you include the variables {podName} and {containerName} in the URL template, then the configured links (e.g. Grafana) will be visible only on the pod level and container level respectively. The env variables:

  • {appName}

  • {appId}

  • {envId}

  • {namespace}

Note: The env variables will be dynamically replaced by the values that you used to configure the link.
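
As an illustration, a Grafana link using these placeholders might be configured with values like the ones below. This is shown as a YAML sketch purely for readability; in Devtron these values are entered in the UI fields described above, and the Grafana domain, dashboard path, and query parameters are assumptions:

# Hypothetical external link values; replace the domain and dashboard path with your own
linkName: Grafana - App Dashboard
urlTemplate: "https://grafana.example.com/d/app-overview?var-namespace={namespace}&var-app={appName}&var-pod={podName}"

Because {podName} is part of this template, such a link would be visible only at the pod level.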

Note: To add multiple links, select + Add another at the top-left corner.

Click Save.

Access an external link

Note: If you enable App admins can edit on the External Links page, then only non-super admin users can view the selected links on the App Details page.

Manage External links

On the External Links page, the configured external links can be filtered/searched, as well as edited/deleted.

Select Global Configurations > External links.

  • Filter and search the links based on the link's name or a user-defined name.

  • Edit a link by selecting the edit icon next to an external link.

  • Delete an external link by selecting the delete icon next to a link. The bookmarked link will be removed from the clusters for which it was configured.

Catalog Framework

Ideally, all resources such as microservices, clusters, jobs, pods, etc. should contain detailed information so that their users know what each of those resources does, how to use them, and their technical specs. Access to such data makes it easier for engineers to quickly discover and understand the relevant resources.

Currently, Devtron supports catalog framework for the following resource types (a.k.a. resource kind):

There are two parts involved in the creation of a desirable resource catalog:


Defining a Schema

Who Can Perform This Action?

Only a super-admin can create/edit a schema.

  1. Go to Global Configurations → Catalog Framework.

  2. Choose a resource type, for which you wish to define a schema, for e.g., Devtron applications.

  3. You can edit the schema name and description.

  4. There is a sample schema available for you to create your own customized schema. Using this schema, you can decide the input types that render within the form, e.g., a dropdown of enum values, a boolean toggle button, a text field, a label, and many more (see the illustrative sketch below).

  5. After defining your schema, click Review Changes.

  6. You get a side-by-side comparison (diff) highlighting the changes you made.

  7. Click Save.

Similarly, you can define schemas for other resource types.
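
For illustration, the sample schema hints at JSON-Schema-style definitions, so a schema fragment that renders a text field, a dropdown of enum values, and a boolean toggle could look roughly like the sketch below. All property names and titles are assumptions (shown in YAML for readability); adapt them to the sample schema provided in the UI:

# Hypothetical catalog schema fragment; property names are illustrative only
type: object
properties:
  serviceOwner:
    type: string
    title: Service Owner             # renders as a text field
  criticality:
    type: string
    title: Criticality Tier
    enum: ["P0", "P1", "P2"]         # renders as a dropdown of enum values
  onCallEnabled:
    type: boolean
    title: On-call Rotation Enabled  # renders as a boolean toggle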

Note: If you edit a field (of an existing schema) for which users have already filled the data, that data will be erased. You will receive a prompt (as shown below) to confirm whether you want to proceed with the changes.


Filling the Schema-Generated Form

Once a catalog schema exists for a resource type, its corresponding form would be available in the overview section of that resource type.

  1. Since we defined a schema for Devtron applications in the above example, go to the Overview tab of your application (any Devtron application). Click the Edit button within the About section.

  2. The schema created for Devtron applications would render into an empty form as shown below.

  3. Fill in as many details as possible, as the application owner, to the best of your knowledge, and click Save.

  4. Your saved data would be visible in a GUI format (and also in JSON format) as shown below.

This catalog data would be visible to all the users who have access to the application, but its data can be edited only by the resource owners (in this case, application admin/managers).

Tags Policy

Managing resources in Kubernetes often requires categorizing and grouping resources for better visibility and analysis. A common use case is cost allocation. By analyzing Kubernetes labels, teams can identify which departments consume the most resources. However, this is only possible if the relevant tags are propagated as Kubernetes labels.

The Tags Policy feature in Devtron allows you to enforce a tag that must be provided before application creation or before deployment to an environment. For example, you can create a tag named team; if it is propagated as a label to Kubernetes resources, you can easily audit team-wise usage and resource consumption by that label. Additionally, you can enforce deployment-specific rules for your applications when required tags are missing.
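
To illustrate, if a tag team: platform is propagated, it lands on the deployed Kubernetes resources as a label, roughly as in the sketch below (the workload kind and name are illustrative):

# Simplified sketch of a propagated tag appearing as a Kubernetes label
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app          # illustrative workload name
  labels:
    team: platform      # tag propagated by Devtron as a label

Resources labeled this way can then be grouped for auditing or cost allocation, e.g., by filtering on the team=platform label.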

Adding Tag Policy

Who Can Perform This Action?

Users need to have super-admin permission to create tag policy.

  1. Go to Global Configurations → Tags Policy.

  2. Click + Add Tag.

  3. Suggested tags/Mandatory tags - You can either suggest tags or make them mandatory when creating applications:

    • If you want the person creating an application to compulsorily provide a tag, use Mandatory tags. Mandatory tags have two consequences:

      • Blocks application creation if the required tag is not provided.

      • Blocks deployment if restriction is enforced.

  4. Select Project(s) - To mandate a tag, you must provide a project where this policy should apply. All applications in the project will require the mandatory tag to be entered. Whereas, suggested tags are shown as suggestions globally (i.e., for all apps).

  5. Tag Key - Enter the key from the key-value pair (tag), e.g., Business Unit, Team, Owner.

  6. Value Choices - Here, you can create a list of values for the key-value pair (tag). A tag value can be a free text or you can restrict it to a set of values for the user to choose from.

You may enable Allow Custom Input to give the user a choice to enter their own value if it is unavailable in the list. Or you may skip creating the list of choices altogether so that your user can enter their own value.

  1. Description - Write a brief description explaining the significance of the tag.

  2. Allow/Block Deployments - Mandatory tags additionally let you define what should happen if users do not configure them in the intended projects:

    • Allow deployments - Use this option if you want to allow the user to deploy an existing application where mandatory tags are not configured yet.

    • Block deployment stages of prod environments - Use this option if you want to prevent the user from deploying an existing application to production environments if mandatory tags are not configured.

    • Block deployment stages of non-prod environments - Use this option if you want to prevent the user from deploying an existing application to non-production environments if mandatory tags are not configured.

    • Block deployment stages of all environments - This will prevent the user from deploying an existing application to all environments if mandatory tags are not configured.

Changing Propagation in Suggested Tags vs. Mandatory Tags

In suggested tags: When you enable/disable tag propagation, users can still disable/enable it during app creation, so they control whether the tags propagate to the associated Kubernetes resources.

In mandatory tags: When you enable/disable tag propagation, users do not get the option to change the propagation setting.

  1. (Optional) Click the + option to create more suggested tags or more mandatory tags in one go.

  2. Click Save to create the tag(s).


Editing a Tag

Who Can Perform This Action?

Users need to have super-admin permission to edit tags.

You can edit an existing tag key to do the following:

  • Modify the tag key

  • Add/remove value choices

  • Tweak the description

  • Change deployment restrictions

  • Add or remove projects

  • Convert Tags from Suggested to Mandatory (or vice versa)

  • Enable/Disable the propagation of tags

Once done, click Update to apply the changes.

Editing in Bulk

You may use the checkboxes to add/remove projects from multiple tags at once as shown below.


Deleting a Tag

Who Can Perform This Action?

Users need to have super-admin permission to delete tags.

If you delete a 'Suggested Tag', it will no longer show up as a suggestion to your users while adding tags. If it's a 'Mandatory Tag', the deployment rules (if any, associated with that tag) will no longer be enforced.

However, this action will not delete the applied tag from existing applications.

If you wish to delete multiple tags, you may use the checkboxes to select the tags and delete them from the floating widget as shown below.


Results

Appearance of Mandatory Tags

  • The mandatory tag is available for users to configure after they select the project in the app creation page. It is marked by a red asterisk.

  • For an existing application, users can configure it from the Overview page of the application.

  • In a project where mandatory tags are enabled, if the user does not provide values for the mandatory tags, the user cannot create an app in that project.

Appearance of Suggested Tags

Users can see a dropdown list of your suggested tags while creating a new app or on the Overview page of an existing application.

Impact on Deployment Pipelines

If an existing application belongs to a project where mandatory tags are enforced along with deployment restrictions, and the user does not provide values for the mandatory tags, the user cannot deploy that app to the intended environment.

The same is true for auto-triggered deployment pipelines. A new image available after the build stage will not auto-trigger the deployment pipeline due to the missing mandatory tags.

Impact on Application Group

Similarly, if deployment restrictions apply due to missing mandatory tags, users cannot deploy apps to the intended environment from the Application Group either.

Impact on Release

If a user attempts to deploy a release that contains applications with missing mandatory tags, the deployment will be blocked if restrictions apply.

Plugin Policy


Tutorial


Creating a Plugin Policy

Who Can Perform This Action?

Users need to have super-admin permission to create a plugin policy.

  1. Go to Global Configurations → Plugin Policy.

  2. Click + Create Profile.

  3. Give a name to the profile, e.g., check-jira, and add a description (optional) preferably explaining what it does.

  4. Choose whether the profile should apply to the Build pipeline or the Deployment pipeline.

Note

A single policy cannot apply to both build and deployment pipelines simultaneously. You can create separate policies instead.

  5. Under Mandatory Plugin(s), click Add Plugin.

  6. A list of plugins will appear for you to choose from. Select one or more plugins to make them mandatory for the pipeline you selected in step 4.

Tip

There is a search box for you to quickly find the plugins. Moreover, since plugins are classified by tags, you can use the tag filter to find your intended plugins.

  7. Click Done.

  8. Use the dropdown menu to choose the stage (pre or post) at which you wish to enforce your chosen plugin(s). You can select mixed stages too.

    Here, Pre means before and Post means after.

  9. Decide the action that the system should take in case your policy is not followed by the intended pipelines in your application workflow.

    • Allow respective triggers with warning - This will allow the non-compliant pipeline to run. However, it will display an 'Action required' warning at the intended stage (pre/post) of the pipeline in the application's workflow.

    • Block respective triggers immediately - This will not allow the non-compliant pipeline to run (whether manual execution or automated) effective immediately, unless the user configures the mandatory plugins.

    • Block respective triggers from date/time - This will allow the non-compliant pipeline to run only till a given date and time. After that, it will block the non-compliant pipeline unless the user configures the mandatory plugins.

  10. Click Save Changes.


Applying a Plugin Policy

Who Can Perform This Action?

Users need to have super-admin permission to apply a plugin policy.

  1. After you create a policy, you can apply it. Click Apply Profile on the same screen.

  2. From the Select profiles to apply dropdown, choose the policy you wish to apply. You also have the option to select more than one policy (if they exist) using the checkbox.

  3. Under Apply selected policies to all pipelines:

    • Global - Select this option to apply your chosen policies to all application workflows across all clusters.

    • By Match Criteria - Select this option to use a combination of filters to decide the target pipelines fulfilling your criteria. Your policy will only apply to such target pipelines.

  4. (Skip this if you chose Global in the previous step) Upon choosing the By Match Criteria option, the following match criteria are available for you:

    • Project

    • Application

    • Cluster

    • Environment

    • Branch fixed

    • Branch regex

    Once you select the criteria, choose the value displayed to you in the dropdown list as shown below.

  5. (Optional) You may also write a note for your other team members to understand the intent and context of your policy.

  6. Click Save Changes.

Once you apply the plugin policy, you can view the pipelines that are not adhering to your policy as shown below. Clicking on the non-compliant pipeline will take you directly to the application workflow prompting you to take action.


Results

Since we created a policy that blocks the trigger of non-compliant deployment pipelines, no user can trigger the deployment unless the mandatory plugins are configured as shown below.

Image Promotion Policy

An ideal deployment workflow may consist of multiple stages (e.g., SIT, UAT, Prod environment).

Therefore, Devtron offers a feature called 'Image Promotion Policy' that allows you to directly promote an image to the target environment, bypassing the intermediate stages in your workflow including:


Creating an Image Promotion Policy

Who Can Perform This Action?

Users need to have super-admin permission to create an image promotion policy.

You can create a policy using our APIs or through Devtron CLI. To get the latest version of the devtctl binary, please contact your enterprise POC or reach out to us directly for further assistance.

Here is the CLI approach:

Syntax:

Arguments:

  • --name (required): The name of the image promotion policy.

  • --description (optional): A brief description of the policy, preferably explaining what it does.

  • --failCondition (optional): Images that match this condition will NOT be eligible for promotion to the target environment.

  • --approverCount (optional): The number of approvals required to promote an image (0-6). Defaults to 0 (no approvals).

  • --allowRequestFromApprove (optional): (Boolean) If true, user who raised the image promotion request can approve it. Defaults to false.

  • --allowImageBuilderFromApprove (optional): (Boolean) If true, user who triggered the build can approve the image promotion request. Defaults to false.

  • --allowApproverFromDeploy (optional): (Boolean) If true, user who approved the image promotion request can deploy that image. Defaults to false.

  • --applyPath (optional): Specify the path to the YAML file that contains the list of applications and environments to which the policy should be applicable.

If an image matches both pass and fail conditions, the priority of the fail condition will be higher. Therefore, such image will NOT be eligible for promotion to the target environment.

If you don't define both pass and fail conditions, all images will be eligible for promotion.


Applying an Image Promotion Policy

Who Can Perform This Action?

Users need to have super-admin permission to apply an image promotion policy.

You can apply a policy using our APIs or through Devtron CLI. Here is the CLI approach:

  • Create a YAML file and give it a name (say applyPolicy.yaml). Within the file, define the applications and environments to which the image promotion policy should apply, as shown below.

  • Apply the policy using the following CLI command:

Result

Promoting Image to Target Environment

Who Can Perform This Action?

Users with build & deploy permission or above (for the application and target environment) can promote an image if the image promotion policy is enabled.

Here, you can promote images to the target environment(s).

  1. Go to the Build & Deploy tab of your application.

  2. Click the Promote button next to the workflow in which you wish to promote the image. Please note, the button will appear only if image promotion is allowed for any environment used in that workflow.

  3. In the Select Image tab, you will see a list of images. Use the Show Images from dropdown to filter the list and choose the image you wish to promote. This can either be an image from the CI pipeline or one that has successfully passed all stages (e.g., pre, post, if any) of that particular environment.

  4. Use the SELECT button on the image, and click Promote to...

  5. Select one or more target environments using the checkbox.

  6. Click Promote Image.

The image's promotion to the target environment now depends on the approval settings in the image promotion policy. If the super-admin has enforced an approval process, the image requires the necessary number of approvals before promotion. On the other hand, if the super-admin has not enforced approval, the image will be automatically promoted since there is no request phase involved.

  1. If approval(s) are required for image promotion, you may check the status of your request in the Approval Pending tab.

Approving Image Promotion Request

Who Can Perform This Action?

  1. Go to the Build & Deploy tab of your application.

  2. Click the Promote button next to the workflow.

  3. Go to the Approval Pending tab to see the list of images requiring approval. By default, it shows a list of all images whose promotion request is pending with you.

All the images will show the source from which it is being promoted, i.e., CI stage or intermediate stage (environment).

  1. Click Approve for... to choose the target environments to which it can be promoted.

  2. Click Approve.

You can also use the Show requests dropdown to filter the image promotion requests for a specific target environment.

If there are pending promotion requests, you can approve them as shown below:

Deploying a Promoted Image

Who Can Perform This Action?

Users with build & deploy permission or above for the application and environment can deploy the promoted image.

In the Build & Deploy tab of your application, click Select Image for the CD pipeline, and choose your promoted image for deployment.

You can check the deployment of promoted images in the Deployment History of your application. It will also indicate the pipeline from which the image was promoted and deployed to the target environment.

Devtron Upgrade

Devtron can be upgraded in one of the following ways:

Upgrade Devtron using Helm

Versions Upgrade

Upgrade Devtron from the UI

Lock Deployment Configuration

The deployment template might contain certain configurations intended for the DevOps team (e.g., ingress) that are not meant for developers to modify. Therefore, Devtron allows super-admins to restrict such fields from modification or deletion.

This stands true for deployment templates in:

  • Base configuration

  • Environment-level configuration

How is this different from the 'Protect Configuration' feature?

The 'protect configuration' feature verifies edits by introducing an approval flow for any changes made to the configuration files, i.e., Deployment template, ConfigMaps, and Secrets. The 'lock deployment configuration' feature goes one step further: it prevents any edits to specific keys by non-super-admins, applies only to deployment templates, and is configured at the global level.


Locking Deployment Keys

Who Can Perform This Action?

Users need to have super-admin permission to lock deployment keys.

  1. Go to Global Configurations → Lock Deployment Config. Click Configure Lock.

  2. (Optional) Click Refer Values.YAML to check which keys you wish to lock.

  3. Enter the keys you wish to lock inside the editor on the left-hand side, e.g., autoscaling.MaxReplicas (see the sketch after these steps). You can use JSONpath expressions to lock specific keys, lists, or objects.

  4. Click Save.

  5. A confirmation dialog box will appear. Read it and click Confirm.
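
As an illustration of step 3, the locked keys might end up looking like the list below. This is a hypothetical set: autoscaling.MaxReplicas comes from the example above, the other keys are assumptions, and the exact editor format may differ:

# Hypothetical list of locked keys; adjust to your deployment template's structure
- autoscaling.MaxReplicas
- autoscaling.MinReplicas
- ingress.enabled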


Result

  • User can hide/unhide the locked keys as shown below.

  • Let's assume the user edits one of the locked keys...

    ...and saves the changes.

  • A modal window will appear on the right highlighting the non-eligible edits.

  • Let's assume the user edits a key that is not locked or adds a new key.

  • The modal window will highlight the eligible edits. However, it will not let the user save those eligible edits unless the user clicks the checkbox: Save changes which are eligible for update.

Who Can Perform This Action?

Only a super-admin, manager, or application admin can edit the configuration values.

  • Once the user clicks the Update button, the permissible changes will reflect in the deployment template.

The same result can be seen if the user tries to edit environment-specific deployment templates.

Update Devtron from Devtron UI

Devtron can be updated from the Devtron Stack Manager > About Devtron section.

  • Select Update to Devtron

The update process may show one of the following statuses, with details available for tracking, troubleshooting, and additional information:

  • Initializing: The update is being initialized.

  • Updating: Devtron is being updated to the latest version.

  • Failed

  • Unknown: Status is unknown at the moment and will be updated shortly.

  • Request timed out

Updating Devtron also updates the installed integrations.

0.3.x-0.4.x

If you want to check the current version of Devtron you are using, please use the following command.

Follow the below mentioned steps to upgrade the Devtron version using Helm

1. Check the devtron release name

2. Set release name in the variable

3. Annotate and Label all the Devtron resources

4. Fetch the latest Devtron helm chart

5. Upgrade Devtron

5.1 Upgrade Devtron to latest version

OR

5.2 Upgrade Devtron to a custom version. You can find the latest releases from Devtron on Github https://github.com/devtron-labs/devtron/releases

0.5.x-0.6.x

If you want to check the current version of Devtron you are using, please use the following command.

Follow the below mentioned steps to upgrade the Devtron version using Helm

1. Check the devtron release name

2. Set release name in the variable

3. Run the following script to upgrade

Please ignore any errors you encounter while running the upgrade script

0.4.x-0.5.x

If you want to check the current version of Devtron you are using, please use the following command.

Follow the below mentioned steps to upgrade the Devtron version using Helm

1. Apply Prerequisites Patch Job

If you are using rawYaml in your deployment template, this update can introduce breaking changes. We recommend you update the chart version of your app to v4.13.0 to make the rawYaml section compatible with the new Argo CD version v2.4.0.

Or

We have released an argocd-v2.4.0 patch job to fix the compatibility issues. Apply this job in your cluster, wait for it to complete, and only then upgrade to Devtron v0.5.x.

2. Check the devtron release name

3. Set release name in the variable

4. Fetch the latest Devtron helm chart

5. Upgrade Devtron

5.1 Upgrade Devtron to latest version

OR

5.2 Upgrade Devtron to a custom version

You can find the latest releases from Devtron on Github https://github.com/devtron-labs/devtron/releases

0.4.x-0.4.x

If you want to check the current version of Devtron you are using, please use the following command.

Follow the below mentioned steps to upgrade the Devtron version using Helm

1. Check the devtron release name

2. Set release name in the variable

3. Fetch the latest Devtron helm chart

4. Upgrade Devtron

4.1 Upgrade Devtron to latest version

OR

4.2 Upgrade Devtron to a custom version. You can find the latest releases from Devtron on Github https://github.com/devtron-labs/devtron/releases




Figure 1: Downloading the Template
Figure 2: Uploading the Template
Figure 3: Reviewing the YAML file
Figure 4: Saving the file

Click the Variable List tab to view the variables. Check the section to know more.

Figure 5: List of Variables
Figure 6: Editing from UI
Figure 7: Reuploading New File

Currently, the widget is shown only on the following screens in :

Figure 8: Unexpanded Widget
Figure 9: Expanded Widget
Figure 10: Copying a Variable
Figure 11: Pasting a Variable


Figure 12: Variable key in Red, Variable value in Green


Figure 1: Enabling for all Env
Figure 2: Saving Changes
Figure 3: Selecting Environments

Once you enable pull image digest for a given environment in Global Configurations, users won't be able to modify the . The toggle button would appear disabled for that environment as shown below.

Figure 4: Non-editable Option

Note: External links can only be added/managed by a super admin, but non-super admin users can on the App Configuration page.

{podName}: If used, the link will only be visible at the pod level on the page.

{containerName}: If used, the link will only be visible at the container level on the page.

The users (admin and others) can access the configured external link on the page.

Introduction

To achieve this, Devtron supports a feature known as Catalog Framework. Using this, you as a can decide the data you expect from the managers of different resource types. In other words, you can create a custom that would ultimately render a form for the resource owners to fill. Once the form is filled, a GUI output will appear as shown below.

Sample Catalog Data for an App

Figure 1: Choosing a Schema
Figure 2a: Using Sample Schema
Figure 2b: Expected Future Output
Figure 3: Change Diff
Figure 4: Indication of Existing Data
Figure 5: Unfilled Details
Figure 6: Rendered Empty Form
Figure 7: Filled Form
Figure 8: App Catalog Data

Introduction

Figure 1: How Tags Policy Works
Figure 2: Tags Policy
Figure 3: 'Add Tag' Button

If you just want to offer tag suggestions, use Suggested tags. These will appear as when adding tags to applications globally, and users can optionally use them if needed.

Figure 4: Creating Suggested or Mandatory Tag
Figure 5: Selecting One or More Projects
Figure 6: Entering Tag Key
Figure 7: Creating List of Choices
Figure 8: Adding Description for the Tag


Figure 9: Deciding Deployment Restrictions

Propagate Tag - By default, tags assigned to applications in Devtron are not automatically propagated to Kubernetes resources as labels. For more information on how labels function in Kubernetes, refer to the .

Figure 10a: Propagating Tags
Figure 10b: Enabling/Disabling Propagation
Figure 10c: How Tag Propagation Works
Figure 11: Adding More Tag
Figure 12: Editing a Tag
Figure 13: Adding/Removing projects in Bulk
Figure 14: Deleting a Tag
Figure 15: Deleting Multiple Tags
Figure 16: Mandatory Tag - App Creation Page
Figure 17: Mandatory Tag - Overview Page
Figure 18: App creation not allowed
Figure 19: Suggested Tags in Dropdown


Figure 20: Deployment Restriction


Figure 21: Deployment Restriction in Application Group


Figure 22: Deployment Restriction in Release (SDH)

Introduction

Your should follow certain standards and precautions to ensure reliability and a smooth release. For example, mandating load testing for production deployments might help you identify performance bottlenecks early rather than face possible outages, unhappy users, or revenue loss.

The Plugin Policy feature in Devtron lets you enforce the presence of specific at various stages in your application's build and deployment pipelines, such as , , , or . Therefore, if the required plugins do not exist in the specified stage(s), you can decide the action (whether to allow or block the pipeline trigger).

On the other hand, if you selected Build pipeline in step 4 of , the build trigger would get blocked as shown below.

Introduction

If you have built such a , your CI image will sequentially traverse and deploy to each environment until it reaches the target environment. However, if there's a critical issue you wish to address urgently (through a hotfix) on production, navigating the standard workflow might feel slow and cumbersome.

and of the intermediate stages

All of the intermediate stages

--passCondition (optional): Specify a condition using . Images that match this condition will be eligible for promotion to the target environment.

Here, applicationEnvironments is a dictionary that contains the application names (app1, app2) and the corresponding environment names (env-demo/env-staging) where the policy will apply. In the applyToPolicyName key, enter the value of the name argument you used earlier while .

In case you have configured , an email notification will be sent to the approvers.

Only the users having role (for the application and environment) or superadmin permissions will be able to approve the image promotion request.

If a user has approved the promotion request for an image, they may or may not be able to deploy depending upon the .

However, a promoted image does not automatically qualify as a deployable image. It must fulfill all configured requirements (, , etc.) of the target environment for it to be deployed.

Introduction


While super-admins can directly edit the locked keys, let's look at a scenario where a user (non-super-admin) tries to edit the same in an base deployment template.

If you select 'Basic' mode instead of 'Advanced (YAML)', all the keys meant for basic mode will be displayed in the GUI even if some are locked. While users can modify these keys, they cannot save the changes made to the locked keys.

However, if it's a , the user will require the approval of a as shown below.

devtctl create imagePromotionPolicy \
    --name="example-policy" \
    --description="This is a sample policy that promotes an image to production environment" \
    --passCondition="true" \
    --failCondition="false" \
    --approverCount=0 \
    --allowRequestFromApprove=false \
    --allowImageBuilderFromApprove=false \
    --allowApproverFromDeploy=false \
    --applyPath="path/to/applyPolicy.yaml"
applyPolicy.yaml
apiVersion: v1
kind: artifactPromotionPolicy
spec:
  payload:
    applicationEnvironments:
      - appName: "app1"
        envName: "env-demo"
      - appName: "app1"
        envName: "env-staging"
      - appName: "app2"
        envName: "env-demo"
    applyToPolicyNames:
      - "example-policy"
devtctl apply policy -p="path/to/applyPolicy.yaml"
Upgrade Devtron using Helm
kubectl -n devtroncd get installers installer-devtron -o jsonpath='{.status.sync.data}' | grep "^LTAG=" | cut -d"=" -f2-
helm list --namespace devtroncd
RELEASE_NAME=devtron
kubectl -n devtroncd label all --all "app.kubernetes.io/managed-by=Helm"
kubectl -n devtroncd annotate all --all "meta.helm.sh/release-name=$RELEASE_NAME" "meta.helm.sh/release-namespace=devtroncd"
kubectl -n devtroncd label secret --all "app.kubernetes.io/managed-by=Helm"
kubectl -n devtroncd annotate secret --all "meta.helm.sh/release-name=$RELEASE_NAME" "meta.helm.sh/release-namespace=devtroncd"
kubectl -n devtroncd label cm --all "app.kubernetes.io/managed-by=Helm"
kubectl -n devtroncd annotate cm --all "meta.helm.sh/release-name=$RELEASE_NAME" "meta.helm.sh/release-namespace=devtroncd"
kubectl -n devtroncd label sa --all "app.kubernetes.io/managed-by=Helm"
kubectl -n devtroncd annotate sa --all "meta.helm.sh/release-name=$RELEASE_NAME" "meta.helm.sh/release-namespace=devtroncd"
kubectl label clusterrole devtron "app.kubernetes.io/managed-by=Helm"
kubectl annotate clusterrole devtron "meta.helm.sh/release-name=$RELEASE_NAME" "meta.helm.sh/release-namespace=devtroncd"
kubectl label clusterrolebinding devtron "app.kubernetes.io/managed-by=Helm"
kubectl annotate clusterrolebinding devtron "meta.helm.sh/release-name=$RELEASE_NAME" "meta.helm.sh/release-namespace=devtroncd"
kubectl -n devtroncd label role --all "app.kubernetes.io/managed-by=Helm"
kubectl -n devtroncd annotate role --all "meta.helm.sh/release-name=$RELEASE_NAME" "meta.helm.sh/release-namespace=devtroncd"
kubectl -n devtroncd label rolebinding --all "app.kubernetes.io/managed-by=Helm"
kubectl -n devtroncd annotate rolebinding --all "meta.helm.sh/release-name=$RELEASE_NAME" "meta.helm.sh/release-namespace=devtroncd"
helm repo update
helm upgrade devtron devtron/devtron-operator --namespace devtroncd \
-f https://raw.githubusercontent.com/devtron-labs/devtron/main/charts/devtron/devtron-bom.yaml \
--set installer.modules={cicd} --reuse-values
DEVTRON_TARGET_VERSION=v0.4.x

helm upgrade devtron devtron/devtron-operator --namespace devtroncd \
-f https://raw.githubusercontent.com/devtron-labs/devtron/$DEVTRON_TARGET_VERSION/charts/devtron/devtron-bom.yaml \
--set installer.modules={cicd} --reuse-values
kubectl -n devtroncd get installers installer-devtron -o jsonpath='{.status.sync.data}' | grep "^LTAG=" | cut -d"=" -f2-
helm list --namespace devtroncd
export RELEASE_NAME=devtron
wget https://raw.githubusercontent.com/devtron-labs/utilities/main/scripts/shell/upgrade-devtron-v6.sh
sh upgrade-devtron-v6.sh
kubectl -n devtroncd get installers installer-devtron -o jsonpath='{.status.sync.data}' | grep "^LTAG=" | cut -d"=" -f2-
kubectl apply -f https://raw.githubusercontent.com/devtron-labs/utilities/main/scripts/jobs/argocd-2.4.0-prerequisites-patch-job.yaml
helm list --namespace devtroncd
RELEASE_NAME=devtron
helm repo update
helm upgrade devtron devtron/devtron-operator --namespace devtroncd \
-f https://raw.githubusercontent.com/devtron-labs/devtron/main/charts/devtron/devtron-bom.yaml \
--set installer.modules={cicd} --reuse-values
DEVTRON_TARGET_VERSION=v0.5.x

helm upgrade devtron devtron/devtron-operator --namespace devtroncd \
-f https://raw.githubusercontent.com/devtron-labs/devtron/$DEVTRON_TARGET_VERSION/charts/devtron/devtron-bom.yaml \
--set installer.modules={cicd} --reuse-values
kubectl -n devtroncd get installers installer-devtron -o jsonpath='{.status.sync.data}' | grep "^LTAG=" | cut -d"=" -f2-
helm list --namespace devtroncd
RELEASE_NAME=devtron
helm repo update
helm upgrade devtron devtron/devtron-operator --namespace devtroncd \
-f https://raw.githubusercontent.com/devtron-labs/devtron/main/charts/devtron/devtron-bom.yaml \
--set installer.modules={cicd} --reuse-values
DEVTRON_TARGET_VERSION=v0.4.x

helm upgrade devtron devtron/devtron-operator --namespace devtroncd \
-f https://raw.githubusercontent.com/devtron-labs/devtron/$DEVTRON_TARGET_VERSION/charts/devtron/devtron-bom.yaml \
--set installer.modules={cicd} --reuse-values

0.3.x-0.3.x

If you want to check the current version of Devtron you are using, please use the following command.

kubectl -n devtroncd get installers installer-devtron -o jsonpath='{.status.sync.data}' | grep "^LTAG=" | cut -d"=" -f2-

Follow the below mentioned steps to upgrade the Devtron version using Helm

  1. Fetch the latest Devtron helm chart

helm repo update
  1. Input the target Devtron version that you want to upgrade to. You can find the latest releases from Devtron on Github https://github.com/devtron-labs/devtron/releases

DEVTRON_TARGET_VERSION=v0.3.x
  1. Upgrade Devtron

helm upgrade devtron devtron/devtron-operator --namespace devtroncd --set installer.release=$DEVTRON_TARGET_VERSION

Follow the below mentioned steps to upgrade the Devtron version using Kubectl

  1. Input the target Devtron version that you want to upgrade to. You can find the latest releases from Devtron on Github https://github.com/devtron-labs/devtron/releases

DEVTRON_TARGET_VERSION=v0.3.x
  1. Patch Devtron Installer

kubectl patch -n devtroncd installer installer-devtron --type='json' -p='[{"op": "add", "path": "/spec/reSync", "value": true },{"op": "replace", "path": "/spec/url", "value": "https://raw.githubusercontent.com/devtron-labs/devtron/'$DEVTRON_TARGET_VERSION'/manifests/installation-script"}]'

0.2.x-0.3.x

Follow the required steps to update the Devtron version

STEP 1

Delete the respective resources, i.e., nats-operator, nats-streaming, and nats-server, using the following commands.

kubectl delete -f https://raw.githubusercontent.com/devtron-labs/devtron/v0.2.37/manifests/yamls/nats-operator.yaml
kubectl -n devtroncd delete -f https://raw.githubusercontent.com/devtron-labs/devtron/v0.2.37/manifests/yamls/nats-streaming.yaml
kubectl -n devtroncd delete -f https://raw.githubusercontent.com/devtron-labs/devtron/v0.2.37/manifests/yamls/nats-server.yaml

STEP 2

Verify the deletion of resources using the following commands.

kubectl -n devtroncd get pods 
kubectl -n devtroncd get serviceaccount
kubectl -n devtroncd get clusterrole

STEP 3

Set reSync: true in the installer object. This will initiate an upgrade of the entire Devtron stack. You can use the following command to do this.

kubectl patch -n devtroncd installer installer-devtron --type='json' -p='[{"op": "add", "path": "/spec/reSync", "value": true }]'


Create a New Application

  • On the Devtron dashboard, select Applications.

  • On the upper-right corner of the screen, click Create.

  • Select Custom app from the drop-down list.

A new application can be created from one of the following options:

  • Custom App

Create Custom App

To create a new application from the custom app, select Custom app.

  • In the Create application window, enter an App Name and select a Project.

  • Select either:

    • Create from scratch to create an application from scratch, or

    • Clone existing application to clone an existing application.

Tags

Tags are key-value pairs. You can add one or multiple tags in your application.

Propagate Tags: When tags are propagated, they are applied as labels to Kubernetes resources. Kubernetes offers integrated support for using these labels to query objects and perform bulk operations, e.g., consolidated billing using labels. You can use these tags to filter/identify resources via CLI or in other Kubernetes tools.

  • Click + Add tag to add a new tag.

  • Click Save.

Applications

Introduction

The Applications page helps you create and manage your microservices, and it mainly consists of the following:

Application Listing

Create Button

You can use this to:

Other Options

There are additional options available for you:

  • Search and filters to make it easier for you to find applications.

  • Export CSV to download the data of Devtron apps (not supported for Helm apps and Argo CD apps).

  • Sync button to refresh the app listing.


View External Helm App Listing

Want to Manage your Existing Helm Release using Devtron?

Who Can Perform This Action?

Users with view only permission or above for an application can view helm app listing.

External Helm apps are Helm applications deployed outside of Devtron.

  1. Use the Cluster selection dropdown to choose the external cluster(s). You will see your external Helm apps under the Helm Apps tab.


View External ArgoCD App Listing

Want to Manage your Existing Argo CD Apps using Devtron?

Who Can Perform This Action?

Users need super-admin permission to view/enable/disable the ArgoCD listing.

Preface

In Argo CD, a user manages one dashboard per Argo CD instance. With multiple Argo CD instances, managing several dashboards becomes cumbersome.

With Devtron, you get an entire Argo CD app listing in one place. This listing includes:

  • Other Argo CD apps present in your cluster

Advantages

Devtron also bridges the gap for ArgoCD users by providing additional features as follows:

  • Single-pane View: All Argo CD apps will show details such as their app status, environment, cluster, and namespace together in one dashboard.

  • Feature-rich Options: Clicking an Argo CD app will give you access to its logs, terminal, events, manifest, available resource kinds, pod restart log, and many more.

Additional References

Prerequisite

The cluster in which Argo CD apps exist should be added in Global Configurations → Clusters and Environments

Feature Flag

ENABLE_EXTERNAL_ARGO_CD: "true"

Enabling ArgoCD App Listing

  1. Go to the Resource Browser of Devtron.

  2. Select the cluster (in which your Argo CD app exists).

  3. Type ConfigMap in the 'Jump to Kind' field.

  4. Search for dashboard-cm using the available search bar and click it.

  5. Click Edit Live Manifest.

  6. Set the feature flag ENABLE_EXTERNAL_ARGO_CD to "true" (see the reference snippet after these steps).

  7. Click Apply Changes.

  8. Go back to the 'Jump to Kind' field and type Pod.

  9. Search for dashboard pod and use the kebab menu (3 vertical dots) to delete the pod.

  10. Go to Applications and refresh the page. A new tab named ArgoCD Apps will be visible.

  11. Select the cluster(s) from the dropdown to view the Argo CD apps available in the chosen cluster(s).
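
For reference, after step 6 the relevant portion of the dashboard-cm ConfigMap would look roughly like this. The devtroncd namespace is an assumption and other keys are omitted:

# Sketch of the edited ConfigMap; only the feature flag key is shown
apiVersion: v1
kind: ConfigMap
metadata:
  name: dashboard-cm
  namespace: devtroncd              # assumed Devtron namespace
data:
  ENABLE_EXTERNAL_ARGO_CD: "true"   # enables the ArgoCD Apps listing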


View External FluxCD App Listing

Who Can Perform This Action?

Users need super-admin permission to view/enable/disable the FluxCD listing.

Preface

Prerequisite

The cluster in which Flux CD apps exist should be added in Global Configurations → Clusters and Environments

Feature Flag

FEATURE_EXTERNAL_FLUX_CD_ENABLE: "true"

Enabling FluxCD App Listing

Tip

After successfully executing all the steps, a new tab named FluxCD Apps will be visible. Select the cluster(s) from the dropdown to view the Flux CD apps available in the chosen cluster(s).

Click any Flux CD app to view its details as shown below.

App Configuration

Please configure Global Configurations before moving ahead with App Configuration

Parts of Documentation

Base Deployment Template

A deployment configuration is a manifest of the application that defines its runtime behavior. You can select one of the default deployment charts or a custom deployment chart created by a super-admin.

To configure a deployment chart for your application, do the following steps:

  • Go to Applications and create a new application.

  • Go to App Configuration page and configure your application.

  • On the Base Deployment Template page, select the drop-down under Chart type.


Selecting a Chart Type

Who Can Perform This Action?

Note

From Devtron Charts

You can select a default deployment chart from the following options:

From Deployment Charts

You can select an available custom chart as shown below. You can also view the description of the custom charts in the list.


Selecting a Chart Version

Who Can Perform This Action?

Once you select a chart type, choose a chart version using which you wish to deploy the application.

Devtron uses helm charts for deployments and it maintains multiple chart versions based on the features it supports.

One can see available chart versions in the drop-down. You can select any chart version as per your requirements. By default, the latest version of the helm chart is selected.

Every chart version has its own YAML file that provides specifications for your application. To make it easy to use, we have created templates for the YAML file and have added some variables inside the YAML. You can provide or change the values of these variables as per your requirement.


Configuring the Chart

Who Can Perform This Action?

Using Basic GUI

If you are not an advanced user, you may use the Basic (GUI) section to configure your chosen chart.

By default, the following fields are available for you to modify in the Basic (GUI) section:

Fields
Description

Arguments

Enable the Arguments to pass one or more argument values. By default, it is in the disabled state.

Command

Enable the Command to pass one or more command values. By default, it is in the disabled state.

HTTP Request Routes

Enable the HTTP Request Routes to define the Host and Path. By default, it is in the disabled state.

  • Host: Domain name of the server.

  • Path: Path of the specific component in the host that the HTTP request wants to access.

You can define multiple paths as required by clicking Add path.

Resources

Here, you can tweak the requests and limits of the CPU resource and RAM resource as per the application.

Autoscaling

Define the autoscaling parameters to automatically scale your application's deployment based on resource utilization.

  • Maximum Replicas: The maximum number of replicas your application can scale up to.

  • Minimum Replicas: The minimum number of replicas your application should run at any time.

  • Target CPU Utilization Percentage: The average CPU utilization across all pods that will trigger scaling.

  • Target Memory Utilization Percentage: The average memory utilization across all pods that will trigger scaling.

Environment Variables (Key/Value)

Define key/value by clicking Add variable.

  • Key: Define the key of the environment.

  • Value: Define the value of the environment.

You can define multiple env variables by clicking Add EnvironmentVariables.

Container Port

The internal port on which the container listens for HTTP requests. Specify the container port and optionally the service port that maps to it.

Service

Configure the service that exposes your application to the network.

  • Type: Specify the type of service (e.g., ClusterIP, NodePort, LoadBalancer).

  • Annotations: Add custom annotations to the service for additional configuration.

Readiness Probe

Define the readiness probe to determine when a container is ready to start accepting traffic.

  • Path: The HTTP path that the readiness probe will access.

  • Port: The port on which the readiness probe will access the application.

Liveness Probe

Define the liveness probe to check if the container is still running and to restart it if it is not.

  • Path: The HTTP path that the liveness probe will access.

  • Port: The port on which the liveness probe will access the application.

Tolerations

Define tolerations to allow the pods to be scheduled on nodes with matching taints.

  • Key: The key of the taint to tolerate.

  • Operator: The relationship between the key and the value (e.g., Exists, Equal).

  • Value: The value of the taint to match.

  • Effect: The effect of the taint to tolerate (e.g., NoSchedule, NoExecute).

ServiceAccount

Specify the service account for the deployment to use, allowing it to access Kubernetes API resources.

  • Create: Toggle to create a new service account.

  • Name: The name of the service account to use.

Click Save Changes. If you want to perform additional configurations, click the Switch to Advanced button or the Advanced (YAML) button for modifications.

Note

  • If you change any values in the 'Basic (GUI)', then the corresponding values will change in 'Advanced (YAML)' too.

  • Users who are not super-admins will land on the 'Basic (GUI)' section when they visit the Base Deployment Template page, whereas super-admins will land on the 'Advanced (YAML)' section. This is just the default behavior; they can still navigate to the other section if needed.

Who Can Perform This Action?

Superadmin can define and apply custom deployment schema.

This is useful in scenarios where:

  • You frequently edit certain fields in Advanced (YAML), which you expect to remain easily accessible in Basic (GUI) section.

  • You don't require some fields in Basic (GUI) section.

  • You need the autonomy to keep the Basic (GUI) unique for applications/clusters/environments/charts, or display the same Basic (GUI) everywhere.

There are two ways you can customize the Basic GUI, use any one of the following:

  1. Using APIs (explained below)

You can pass a custom JSON (deployment schema) of your choice through the following API. Use the POST method if you are creating the schema for the first time. (A hedged curl sketch follows the priority table below.)

PUT {{DEVTRON_BASEURL}}/orchestrator/deployment/template/schema
Sample API Request Body
{
  "name": "schema-1",
  "type": "JSON",
  "schema": "{\"type\":\"object\",\"properties\":{\"args\":{\"type\":\"object\",\"title\":\"Arguments\",\"properties\":{\"value\":{\"type\":\"array\",\"items\":{\"type\":\"string\"},\"title\":\"Value\"},\"enabled\":{\"type\":\"boolean\",\"title\":\"Enabled\"}}},\"command\":{\"type\":\"object\",\"title\":\"Command\",\"properties\":{\"value\":{\"type\":\"array\",\"items\":{\"type\":\"string\"},\"title\":\"Value\"},\"enabled\":{\"type\":\"boolean\",\"title\":\"Enabled\"}}},\"resources\":{\"type\":\"object\",\"title\":\"Resources(CPU&RAM)\",\"properties\":{\"limits\":{\"type\":\"object\",\"required\":[\"cpu\",\"memory\"],\"properties\":{\"cpu\":{\"type\":\"string\"},\"memory\":{\"type\":\"string\"}}},\"requests\":{\"type\":\"object\",\"properties\":{\"cpu\":{\"type\":\"string\"},\"memory\":{\"type\":\"string\"}}}}},\"autoscaling\":{\"type\":\"object\",\"title\":\"Autoscaling\",\"properties\":{\"MaxReplicas\":{\"type\":[\"integer\",\"string\"],\"title\":\"MaximumReplicas\",\"pattern\":\"^[a-zA-Z0-9-+\\\\/*%_\\\\\\\\s]+$\"},\"MinReplicas\":{\"type\":[\"integer\",\"string\"],\"title\":\"MinimumReplicas\",\"pattern\":\"^[a-zA-Z0-9-+\\\\/*%_\\\\\\\\s]+$\"},\"TargetCPUUtilizationPercentage\":{\"type\":[\"integer\",\"string\"],\"title\":\"TargetCPUUtilizationPercentage\",\"pattern\":\"^[a-zA-Z0-9-+\\\\/*%_\\\\\\\\s]+$\"},\"TargetMemoryUtilizationPercentage\":{\"type\":[\"integer\",\"string\"],\"title\":\"TargetMemoryUtilizationPercentage\",\"pattern\":\"^[a-zA-Z0-9-+\\\\/*%_\\\\\\\\s]+$\"}}},\"EnvVariables\":{\"type\":\"array\",\"items\":{\"type\":\"object\",\"properties\":{\"key\":{\"type\":\"string\"},\"value\":{\"type\":\"string\"}}},\"title\":\"EnvironmentVariables\"},\"ContainerPort\":{\"type\":\"array\",\"items\":{\"type\":\"object\",\"properties\":{\"port\":{\"type\":\"integer\"}}},\"title\":\"ContainerPort\"}}}",
  "selectors": [
    {
      "attributeSelector": {
        "category": "APP",
        "appNames": ["my-demo-app"]
      }
    },
    {
      "attributeSelector": {
        "category": "ENV",
        "envNames": ["env1", "env2", "env3"]
      }
    },
    {
      "attributeSelector": {
        "category": "CLUSTER",
        "clusterNames": ["cluster1", "cluster2", "cluster3"]
      }
    },
    {
      "attributeSelector": {
        "category": "CHART_REF",
        "chartVersions": [
          {
            "type": "Deployment",
            "version": "1.0.0"
          }
        ]
      }
    },
    {
      "attributeSelector": {
        "category": "APP_ENV",
        "appEnvNames": [
          {
            "appName": "my-demo-app",
            "envName": "devtron"
          }
        ]
      }
    }
  ]
}
  1. In the name field, give a name to your schema, e.g., schema-1

  2. Enter the type as JSON.

  3. The schema field is for entering your custom deployment schema. Perform the following steps:

    • Copy the final JSON and stringify it using any free online tool.

    • Paste the stringified JSON in the schema field of the API request body.

    • Send the API request. If your schema already exists, use the PUT method instead of POST in the API call.

  4. The attributeSelector object helps you choose the scope at which your custom deployment schema will take effect.

    Category scopes in order of priority:

    • 1 (High): APP_ENV - Specific to an application and its environment

    • 2: APP - Applies at the application level if no specific environment is defined

    • 3: ENV - Applies to a specific deployment environment

    • 4: CHART_REF - Applies to all applications using a specific chart type and version

    • 5: CLUSTER - Applies across all applications and environments within a specific cluster

    • 6: GLOBAL - Applies universally if no other more specific schema is defined
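
For illustration, a hedged sketch of sending the request with curl. The base URL placeholder comes from the endpoint above; the token header and the custom-schema.json file name are assumptions, so adjust them to your Devtron setup and authentication mechanism:

# Create the schema for the first time; use PUT on subsequent updates
curl -X POST "{{DEVTRON_BASEURL}}/orchestrator/deployment/template/schema" \
  -H "Content-Type: application/json" \
  -H "token: <API_TOKEN>" \
  -d @custom-schema.json   # file containing the request body shown above, with the stringified schema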

Using Advanced (YAML)

If you are an advanced user wishing to perform additional configurations, you may switch to Advanced (YAML) for modifications.

Refer to the respective templates to view the YAML details.


Application Metrics

Depending on the chart type and version you select, you may be able to view application metrics for your application. These include:

  • Status codes 2xx, 3xx, 5xx

  • Throughput

  • Latency ...and many more

Enable Show application metrics toggle to view the application metrics on the App Details page.

IMPORTANT: Enabling application metrics adds a sidecar container alongside your main container, which may require some additional configuration adjustments. We recommend load testing after enabling it in a non-production environment before enabling it in a production environment.

Select Save & Next to save your configurations.

GitOps Configuration

Introduction

The application-level GitOps configuration offers the flexibility to add a custom Git repo (as opposed to Devtron auto-creating a repo for your application).


Adding Custom Git Repo for GitOps

Who Can Perform This Action?

For Devtron Apps

  1. Go to Applications → Devtron Apps (tab) → (choose your app) → App Configuration (tab) → GitOps Configuration.

  2. Assuming a GitOps repo was not added to your application earlier, you get 2 options:

    • Auto-create repository - Select this option if you wish to proceed with the default behavior. It will automatically create a repository named after your application (with a prefix), saving you the trouble of creating one manually.

    • Commit manifests to a desired repository - Select this option if you wish to add a custom repo that is already created with your Git provider. Enter its link in the Git Repo URL field.

GitOps repositories, whether auto-created by Devtron or added manually, are immutable. This means they cannot be modified after creation. The same is true if you have an existing CD pipeline that uses/used GitOps for deployment.

  3. Click Save.

For Helm Apps

  1. Click Configure & Deploy.

  2. After you enter the App Name, Project, and Environment, an option to choose the deployment approach (i.e., Helm or GitOps) appears. Select GitOps.

    • Auto-create repository

    • Commit manifests to a desired repository

  3. Enter your custom Git Repo URL, and click Save.

Next, you may proceed to deploy the chart.

Once you deploy a helm app with GitOps, you cannot change its GitOps repo.


Rollout Deployment

The Rollout Deployment chart deploys an advanced version of deployment that supports Blue/Green and Canary deployments. It requires a rollout controller running inside the cluster to function.

You can define application behavior by providing information in the following sections:

Key
Descriptions

Chart version

Basic (GUI)

Advanced (YAML)

Show application metrics


Advanced (YAML)

Container Ports

This defines the ports on which application services will be exposed to other services.

ContainerPort:
  - envoyPort: 8799
    envoyTimeout: 15s
    idleTimeout:
    name: app
    port: 8080
    servicePort: 80
    supportStreaming: true
    useHTTP2: true
Key
Description

envoyPort

Envoy port for the container.

envoyTimeout

Envoy timeout for the container. Envoy supports a wide range of timeouts that may need to be configured depending on the deployment. By default, the envoy timeout is 15s.

idleTimeout

The duration of time a connection can remain idle before it is terminated.

name

Name of the port.

port

Port for the container.

servicePort

Port of the corresponding Kubernetes service.

supportStreaming

Used for high-performance protocols like gRPC where the timeout needs to be disabled.

useHTTP2

Indicates that the Envoy container can accept HTTP2 requests.

EnvVariables

EnvVariables: []

EnvVariables provide run-time information to containers and allow you to customize how the application behaves on the system.

Here you can pass a list of environment variables; every record is an object that contains the name of the variable along with its value. Use this to set environment variables for the containers that run in the Pod.

Example of EnvVariables

IMPORTANT: The Docker image should support whichever environment variables you want to set.

EnvVariables: 
  - name: HOSTNAME
    value: www.xyz.com
  - name: DB_NAME
    value: mydb
  - name: USER_NAME
    value: xyz

However, ConfigMaps and Secrets are the preferred way to inject environment variables. You can create these in the App Configuration section.

ConfigMap

It is centralized storage, specific to a Kubernetes namespace, where key-value pairs are stored in plain text.

Secret

It is centralized storage, specific to a Kubernetes namespace, where key-value pairs are stored in Base64-encoded form.

IMPORTANT: All key-value pairs of the Secret and ConfigMap will be reflected in your application.

Liveness Probe

If this check fails, Kubernetes restarts the pod. The probe should return an error code in case of a non-recoverable error.

LivenessProbe:
  Path: ""
  port: 8080
  initialDelaySeconds: 20
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
  failureThreshold: 3
  command:
    - python
    - /etc/app/healthcheck.py
  httpHeaders:
    - name: Custom-Header
      value: abc
  scheme: ""
  tcp: true
Key
Description

Path

It defines the path at which the liveness check needs to be performed.

initialDelaySeconds

It defines the time to wait before a given container is checked for liveness.

periodSeconds

It defines how often (in seconds) to perform the liveness probe.

successThreshold

It defines the number of successes required before a given container is said to fulfil the liveness probe.

timeoutSeconds

The maximum time (in seconds) for the probe to complete.

failureThreshold

The number of consecutive failures required to consider the probe as failed.

command

The mentioned command is executed to perform the livenessProbe. If the command returns a non-zero value, it's equivalent to a failed probe.

httpHeaders

Custom headers to set in the request. HTTP allows repeated headers. You can override the default headers by defining .httpHeaders for the probe.

scheme

Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to HTTP.

tcp

The kubelet will attempt to open a socket to your container on the specified port. If it can establish a connection, the container is considered healthy.

MaxUnavailable

  MaxUnavailable: 0

The maximum number of pods that can be unavailable during the update process. The value of "MaxUnavailable: " can be an absolute number or percentage of the replicas count. The default value of "MaxUnavailable: " is 25%.

MaxSurge

MaxSurge: 1

The maximum number of pods that can be created over the desired number of pods. For "MaxSurge: " also, the value can be an absolute number or percentage of the replicas count. The default value of "MaxSurge: " is 25%.

Min Ready Seconds

MinReadySeconds: 60

This specifies the minimum number of seconds for which a newly created Pod should be ready without any of its containers crashing, for it to be considered available. This defaults to 0 (the Pod will be considered available as soon as it is ready).

Readiness Probe

If this check fails, Kubernetes stops sending traffic to the application. The probe should return an error code for errors that can be recovered from once traffic is stopped.

ReadinessProbe:
  Path: ""
  port: 8080
  initialDelaySeconds: 20
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
  failureThreshold: 3
  command:
    - python
    - /etc/app/healthcheck.py
  httpHeaders:
    - name: Custom-Header
      value: abc
  scheme: ""
  tcp: true
Key
Description

Path

It defines the path at which the readiness check needs to be performed.

initialDelaySeconds

It defines the time to wait before a given container is checked for readiness.

periodSeconds

It defines how often (in seconds) to perform the readiness probe.

successThreshold

It defines the number of successes required before a given container is said to fulfill the readiness probe.

timeoutSeconds

The maximum time (in seconds) for the probe to complete.

failureThreshold

The number of consecutive failures required to consider the probe as failed.

command

The mentioned command is executed to perform the readinessProbe. If the command returns a non-zero value, it's equivalent to a failed probe.

httpHeaders

Custom headers to set in the request. HTTP allows repeated headers. You can override the default headers by defining .httpHeaders for the probe.

scheme

Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to HTTP.

tcp

The kubelet will attempt to open a socket to your container on the specified port. If it can establish a connection, the container is considered healthy.

Startup Probe

Startup Probe in Kubernetes is a type of probe used to determine when the application within a container has started. It is specifically designed for applications that have a longer startup time.

StartupProbe:
  Path: ""
  port: 8080
  initialDelaySeconds: 20
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
  failureThreshold: 3
  httpHeaders:
    - name: Custom-Header
      value: abc
  command:
    - python
    - /etc/app/healthcheck.py
  tcp: false
Key
Description

Path

It defines the path at which the startup check needs to be performed.

initialDelaySeconds

It defines the time to wait before a given container is checked for startup.

periodSeconds

It defines how often (in seconds) to perform the startup probe.

successThreshold

The number of consecutive successful probe results required to mark the container as ready.

timeoutSeconds

The maximum time (in seconds) for the probe to complete.

failureThreshold

The number of consecutive failures required to consider the probe as failed.

command

The mentioned command is executed to perform the startup probe. If the command returns a non-zero value, it's equivalent to a failed probe.

httpHeaders

Custom headers to set in the request. HTTP allows repeated headers. You can override the default headers by defining .httpHeaders for the probe.

tcp

The kubelet will attempt to open a socket to your container on the specified port. If it can establish a connection, the container is considered healthy.

Autoscaling

This is connected to HPA and controls scaling up and down in response to request load.

autoscaling:
  enabled: false
  MinReplicas: 1
  MaxReplicas: 2
  TargetCPUUtilizationPercentage: 90
  TargetMemoryUtilizationPercentage: 80
  extraMetrics: []
Key
Description

enabled

Set to true to enable autoscaling; otherwise, set to false.

MinReplicas

Minimum number of replicas allowed for scaling.

MaxReplicas

Maximum number of replicas allowed for scaling.

TargetCPUUtilizationPercentage

The target CPU utilization that is expected for a container.

TargetMemoryUtilizationPercentage

The target memory utilization that is expected for a container.

extraMetrics

Used to give external metrics for autoscaling.

Fullname Override

fullnameOverride: app-name

fullnameOverride replaces the release fullname created by default by devtron, which is used to construct Kubernetes object names. By default, devtron uses {app-name}-{environment-name} as release fullname.

Image

image:
  pullPolicy: IfNotPresent

Image is used to access images in Kubernetes. pullPolicy defines when the image should be pulled; with IfNotPresent, the image is pulled only when it is not already present on the node. It can also be set to "Always".

serviceAccount

serviceAccount:
  create: false
  name: ""
  annotations: {}
Key
Description

create

Determines whether to create a ServiceAccount for pods. If set to true, a ServiceAccount will be created.

name

Specifies the name of the ServiceAccount to use.

annotations

Specify annotations for the ServiceAccount.

imagePullSecrets

imagePullSecrets contains the docker credentials that are used for accessing a registry.

imagePullSecrets:
  - regcred

HostAliases

The hostAliases field is used in a Pod specification to associate additional hostnames with the Pod's IP address. This can be helpful in scenarios where you need to resolve specific hostnames to the Pod's IP within the Pod itself.

  hostAliases:
  - ip: "192.168.1.10"
    hostnames:
    - "hostname1.example.com"
    - "hostname2.example.com"
  - ip: "192.168.1.11"
    hostnames:
    - "hostname3.example.com"

Ingress

This allows public access to the URL. Please ensure you are using the right nginx annotation for the nginx class. The default value is nginx.

ingress:
  enabled: false
  # For K8s 1.19 and above use ingressClassName instead of annotation kubernetes.io/ingress.class:
  className: nginx
  annotations: {}
  hosts:
      - host: example1.com
        pathType: "ImplementationSpecific"
        paths:
            - /example
      - host: example2.com
        pathType: "ImplementationSpecific"
        paths:
            - /example2
            - /example2/healthz
  tls: []

Legacy deployment-template ingress format

ingress:
  enabled: false
  # For K8s 1.19 and above use ingressClassName instead of annotation kubernetes.io/ingress.class:
  ingressClassName: nginx-internal
  annotations: {}
  path: ""
  host: ""
  tls: []
Key
Description

enabled

Enable or disable ingress

annotations

To configure some options depending on the Ingress controller

host

Host name

pathType

Path in an Ingress is required to have a corresponding path type. Supported path types are ImplementationSpecific, Exact and Prefix.

path

Path name

tls

It contains security details

Ingress Internal

This allows private access to the URL. Please ensure you are using the right nginx annotation for the nginx class; its default value is nginx.

ingressInternal:
  enabled: false
  # For K8s 1.19 and above use ingressClassName instead of annotation kubernetes.io/ingress.class:
  ingressClassName: nginx-internal
  annotations: {}
  hosts:
      - host: example1.com
        pathType: "ImplementationSpecific"
        paths:
            - /example
      - host: example2.com
        pathType: "ImplementationSpecific"
        paths:
            - /example2
            - /example2/healthz
  tls: []
Key
Description

enabled

Enable or disable ingress

annotations

To configure some options depending on the Ingress controller

host

Host name

pathType

Path in an Ingress is required to have a corresponding path type. Supported path types are ImplementationSpecific, Exact and Prefix.

path

Path name

tls

It contains security details

Init Containers

initContainers:
  - reuseContainerImage: true
    securityContext:
      runAsUser: 1000
      runAsGroup: 3000
      fsGroup: 2000
    volumeMounts:
      - mountPath: /etc/ls-oms
        name: ls-oms-cm-vol
    command:
      - flyway
      - -configFiles=/etc/ls-oms/flyway.conf
      - migrate

  - name: nginx
    image: nginx:1.14.2
    securityContext:
      privileged: true
    ports:
    - containerPort: 80
    command: ["/usr/local/bin/nginx"]
    args: ["-g", "daemon off;"]

Specialized containers that run before app containers in a Pod. Init containers can contain utilities or setup scripts not present in an app image. One can use base image inside initContainer by setting the reuseContainerImage flag to true.

Pause For Seconds Before Switch Active

pauseForSecondsBeforeSwitchActive: 30

Waits for the given period of time before switching the active container.

Resources

These define minimum and maximum RAM and CPU available to the application.

resources:
  limits:
    cpu: "1"
    memory: "200Mi"
  requests:
    cpu: "0.10"
    memory: "100Mi"

Resources are required to set CPU and memory usage.

Limits

Limits make sure a container never goes above a certain value. The container is only allowed to go up to the limit, and then it is restricted.

Requests

Requests are what the container is guaranteed to get.

Service

This defines the annotations and the type of service; optionally, you can also define a name.

  service:
    type: ClusterIP
    annotations: {}
Key
Description

type

Select the type of service, default ClusterIP

annotations

Annotations are widely used to attach metadata and configs in Kubernetes.

name

Optional field to assign name to service

loadBalancerSourceRanges

If the service type is LoadBalancer, provide a list of whitelisted IP CIDR ranges that will be allowed to use the Load Balancer (see the sketch below).

Note - If loadBalancerSourceRanges is not set, Kubernetes allows traffic from 0.0.0.0/0 to the LoadBalancer / Node Security Group(s).
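
A minimal sketch of restricting a LoadBalancer service to a whitelisted range, assuming the chart version you use supports the loadBalancerSourceRanges key shown above; the CIDR below is purely illustrative:

service:
  type: LoadBalancer
  annotations: {}
  loadBalancerSourceRanges:
    - 203.0.113.0/24   # only clients in this CIDR can reach the Load Balancer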

Volumes

volumes:
  - name: log-volume
    emptyDir: {}
  - name: logpv
    persistentVolumeClaim:
      claimName: logpvc

It is required when some values need to be read from or written to an external disk.

Volume Mounts

volumeMounts:
  - mountPath: /var/log/nginx/
    name: log-volume 
  - mountPath: /mnt/logs
    name: logpvc
    subPath: employee  

It is used to provide mounts to the volume.

Affinity and anti-affinity

Spec:
  Affinity:
    Key:
    Values:

Spec is used to define the desired state of the given container. A filled-in sketch follows the Key and Values descriptions below.

Node affinity allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels of the node.

Inter-pod affinity allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels of pods already running on those nodes.

Key

Key part of the label used for node selection; this should be the same as the label key on the node. Please confirm with your DevOps team.

Values

Value part of the label used for node selection; this should be the same as the label value on the node. Please confirm with your DevOps team.
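
For illustration, a minimal filled-in sketch of the structure above, assuming your nodes carry a label such as team=devops. Both the key and the value are placeholders, and whether Values accepts a single value or a list may depend on the chart version:

Spec:
  Affinity:
    Key: team        # node label key; placeholder, use a label that actually exists on your nodes
    Values: devops   # node label value; placeholder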

Tolerations

tolerations:
 - key: "key"
   operator: "Equal"
   value: "value"
   effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"

Taints are the opposite: they allow a node to repel a set of pods.

A pod can be scheduled on a tainted node only if it has a matching toleration for that taint.

Taints and tolerations are a mechanism that works together to ensure that pods are not placed on inappropriate nodes. Taints are added to nodes, while tolerations are defined in the pod specification. When you taint a node, it will repel all pods except those that have a toleration for that taint. A node can have one or many taints associated with it.
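
For context, a sketch of a node carrying a taint that the toleration above would match. The node name is illustrative, and in practice taints are normally added with kubectl taint rather than by editing the Node object directly:

apiVersion: v1
kind: Node
metadata:
  name: node-1              # illustrative node name
spec:
  taints:
    - key: "key"            # matches the toleration key shown above
      value: "value"
      effect: "NoSchedule"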

Arguments

args:
  enabled: false
  value: []

This is used to pass arguments to the command.

Command

command:
  enabled: false
  value: []
  workingDir: {}

It contains the commands to run inside the container.

Key
Description

enabled

To enable or disable the command.

value

It contains the commands.

workingDir

It is used to specify the working directory where commands will be executed.

Containers

The Containers section can be used to run sidecar containers along with your main container within the same pod. Containers running within the same pod can share volumes and IP address and can address each other at localhost. You can use the base image inside a container by setting the reuseContainerImage flag to true.

    containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        command: ["/usr/local/bin/nginx"]
        args: ["-g", "daemon off;"]
      - reuseContainerImage: true
        securityContext:
          runAsUser: 1000
          runAsGroup: 3000
          fsGroup: 2000
        volumeMounts:
        - mountPath: /etc/ls-oms
          name: ls-oms-cm-vol
        command:
          - flyway
          - -configFiles=/etc/ls-oms/flyway.conf
          - migrate

Prometheus

  prometheus:
    release: monitoring

Prometheus is a Kubernetes monitoring tool. The release key specifies the Prometheus release to be used for monitoring, which is monitoring in the given case.

rawYaml

rawYaml: 
  - apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      selector:
        app: MyApp
      ports:
        - protocol: TCP
          port: 80
          targetPort: 9376
      type: ClusterIP

Accepts an array of Kubernetes objects. You can specify any Kubernetes YAML here, and it will be applied when your app gets deployed.

Grace Period

GracePeriod: 30

Kubernetes waits for the specified time called the termination grace period before terminating the pods. By default, this is 30 seconds. If your pod usually takes longer than 30 seconds to shut down gracefully, make sure you increase the GracePeriod.

A Graceful termination in practice means that your application needs to handle the SIGTERM message and begin shutting down when it receives it. This means saving all data that needs to be saved, closing down network connections, finishing any work that is left, and other similar tasks.

There are many reasons why Kubernetes might terminate a perfectly healthy container. If you update your deployment with a rolling update, Kubernetes slowly terminates old pods while spinning up new ones. If you drain a node, Kubernetes terminates all pods on that node. If a node runs out of resources, Kubernetes terminates pods to free those resources. It’s important that your application handle termination gracefully so that there is minimal impact on the end user and the time-to-recovery is as fast as possible.

Server

server:
  deployment:
    image_tag: 1-95a53
    image: ""

It is used for providing server configurations.

Deployment

It gives the details for deployment.

Key
Description

image_tag

It is the image tag

image

It is the URL of the image

Service Monitor

servicemonitor:
      enabled: true
      path: /abc
      scheme: 'http'
      interval: 30s
      scrapeTimeout: 20s
      metricRelabelings:
        - sourceLabels: [namespace]
          regex: '(.*)'
          replacement: myapp
          targetLabel: target_namespace

It gives the set of targets to be monitored.

Db Migration Config

dbMigrationConfig:
  enabled: false

It is used to configure database migration.

Istio

These Istio configurations collectively provide a comprehensive set of tools for controlling access, authenticating requests, enforcing security policies, and configuring traffic behavior within a microservices architecture. The specific settings you choose would depend on your security and traffic management requirements.

istio:
  enable: true

  gateway:
    enabled: true
    labels:
      app: my-gateway
    annotations:
      description: "Istio Gateway for external traffic"
    host: "example.com"
    tls:
      enabled: true
      secretName: my-tls-secret

  virtualService:
    enabled: true
    labels:
      app: my-service
    annotations:
      description: "Istio VirtualService for routing"
    gateways:
      - my-gateway
    hosts:
      - "example.com"
    http:
      - match:
          - uri:
              prefix: /v1
        route:
          - destination:
              host: my-service-v1
              subset: version-1
      - match:
          - uri:
              prefix: /v2
        route:
          - destination:
              host: my-service-v2
              subset: version-2

  destinationRule:
    enabled: true
    labels:
      app: my-service
    annotations:
      description: "Istio DestinationRule for traffic policies"
    subsets:
      - name: version-1
        labels:
          version: "v1"
      - name: version-2
        labels:
          version: "v2"
    trafficPolicy:
      connectionPool:
        tcp:
          maxConnections: 100
      outlierDetection:
        consecutiveErrors: 5
        interval: 30s
        baseEjectionTime: 60s

  peerAuthentication:
    enabled: true
    labels:
      app: my-service
    annotations:
      description: "Istio PeerAuthentication for mutual TLS"
    selector:
      matchLabels:
        version: "v1"
    mtls:
      mode: STRICT
    portLevelMtls:
      8080:
        mode: DISABLE

  requestAuthentication:
    enabled: true
    labels:
      app: my-service
    annotations:
      description: "Istio RequestAuthentication for JWT validation"
    selector:
      matchLabels:
        version: "v1"
    jwtRules:
      - issuer: "issuer-1"
        jwksUri: "https://issuer-1/.well-known/jwks.json"

  authorizationPolicy:
    enabled: true
    labels:
      app: my-service
    annotations:
      description: "Istio AuthorizationPolicy for access control"
    action: ALLOW
    provider:
      name: jwt
      kind: Authorization
    rules:
      - from:
          - source:
              requestPrincipals: ["*"]
        to:
          - operation:
              methods: ["GET"]
Key
Description

istio

Istio enablement. When istio.enable is set to true, Istio is enabled for the specified configurations.

authorizationPolicy

It allows you to define access control policies for service-to-service communication.

action

Determines whether to ALLOW or DENY the request based on the defined rules.

provider

Authorization providers are external systems or mechanisms used to make access control decisions.

rules

List of rules defining the authorization policy. Each rule can specify conditions and requirements for allowing or denying access.

destinationRule

It allows for the fine-tuning of traffic policies and load balancing for specific services. You can define subsets of a service and apply different traffic policies to each subset.

subsets

Specifies subsets within the service for routing and load balancing.

trafficPolicy

Policies related to connection pool size, outlier detection, and load balancing.

gateway

Allowing external traffic to enter the service mesh through the specified configurations.

host

The external domain through which traffic will be routed into the service mesh.

tls

Traffic to and from the gateway should be encrypted using TLS.

secretName

Specifies the name of the Kubernetes secret that contains the TLS certificate and private key. The TLS certificate is used for securing the communication between clients and the Istio gateway.

peerAuthentication

It allows you to enforce mutual TLS and control the authentication between services.

mtls

Mutual TLS. Mutual TLS is a security protocol that requires both the client and the server to authenticate each other using digital certificates for secure communication.

mode

Mutual TLS mode, specifying how mutual TLS should be applied. Modes include STRICT, PERMISSIVE, and DISABLE.

portLevelMtls

Configures port-specific mTLS settings. Allows for fine-grained control over the application of mutual TLS on specific ports.

selector

Configuration for selecting workloads to apply PeerAuthentication.

requestAuthentication

Defines rules for authenticating incoming requests.

jwtRules

Rules for validating JWTs (JSON Web Tokens). It defines how incoming JWTs should be validated for authentication purposes.

selector

Specifies the conditions under which the RequestAuthentication rules should be applied.

virtualService

Enables the definition of rules for how traffic should be routed to different services within the service mesh.

gateways

Specifies the gateways to which the rules defined in the VirtualService apply.

hosts

List of hosts (domains) to which this VirtualService is applied.

http

Configuration for HTTP routes within the VirtualService. It defines routing rules based on HTTP attributes such as URI prefixes, headers, timeouts, and retry policies.

Application Metrics

Application metrics can be enabled to see your application's metrics: CPU usage, memory usage, status, throughput, and latency.

Deployment Metrics

It gives the real-time metrics of the deployed applications.

Key
Description

Deployment Frequency

It shows how often this app is deployed to production

Change Failure Rate

It shows how often the respective pipeline fails.

Mean Lead Time

It shows the average time taken to deliver a change to production.

Mean Time to Recovery

It shows the average time taken to fix a failed pipeline.

Addon features in Deployment Template Chart version 3.9.0

Service Account

serviceAccountName: orchestrator

A service account provides an identity for the processes that run in a Pod.

When you access the cluster, you are authenticated by the API server as a particular User Account. Processes in containers inside pods can also contact the API server; when they do, they are authenticated as a particular Service Account.

When you create a pod, if you do not create a service account, it is automatically assigned the default service account in the namespace.

Pod Disruption Budget

You can create a PodDisruptionBudget for each application. A PDB limits the number of pods of a replicated application that are down simultaneously from voluntary disruptions. For example, an application may want to ensure the number of running replicas is never brought below a certain number.

podDisruptionBudget: 
     minAvailable: 1

or

podDisruptionBudget: 
     maxUnavailable: 50%

You can specify either maxUnavailable or minAvailable in a PodDisruptionBudget and it can be expressed as integers or as a percentage.

Key
Description

minAvailable

Evictions are allowed as long as they leave behind 1 or more healthy pods of the total number of desired replicas.

maxUnavailable

Evictions are allowed as long as at most the specified number (or percentage) of replicas is unavailable among the total number of desired replicas.

Application metrics Envoy Configurations

envoyproxy:
  image: envoyproxy/envoy:v1.14.1
  configMapName: ""
  resources:
    limits:
      cpu: "50m"
      memory: "50Mi"
    requests:
      cpu: "50m"
      memory: "50Mi"

Envoy is attached as a sidecar to the application container to collect metrics like 4XX, 5XX, Throughput and latency. You can now configure the envoy settings such as idleTimeout, resources etc.

Prometheus Rule

prometheusRule:
  enabled: true
  additionalLabels: {}
  namespace: ""
  rules:
    - alert: TooMany500s
      expr: 100 * ( sum( nginx_ingress_controller_requests{status=~"5.+"} ) / sum(nginx_ingress_controller_requests) ) > 5
      for: 1m
      labels:
        severity: critical
      annotations:
        description: Too many 5XXs
        summary: More than 5% of the all requests did return 5XX, this require your attention

Alerting rules allow you to define alert conditions based on Prometheus expressions and to send notifications about firing alerts to an external service.

In this case, Prometheus will check that the alert continues to be active during each evaluation for 1 minute before firing the alert. Elements that are active, but not firing yet, are in the pending state.

Pod Labels

Labels are key/value pairs that are attached to pods. Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system. Labels can be used to organize and to select subsets of objects.

podLabels:
  severity: critical

Pod Annotations

Pod Annotations are widely used to attach metadata and configs in Kubernetes.

podAnnotations:
  fluentbit.io/exclude: "true"

Custom Metrics in HPA

autoscaling:
  enabled: true
  MinReplicas: 1
  MaxReplicas: 2
  TargetCPUUtilizationPercentage: 90
  TargetMemoryUtilizationPercentage: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Percent
        value: 100
        periodSeconds: 15
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
      - type: Percent
        value: 100
        periodSeconds: 15
      - type: Pods
        value: 4
        periodSeconds: 15
      selectPolicy: Max

HPA, by default, is configured to work with CPU and memory metrics. These metrics are useful for internal cluster sizing, but you might want to configure a wider set of metrics like service latency, I/O load, etc. The custom metrics in HPA can help you achieve this.

Wait For Seconds Before Scaling Down

waitForSecondsBeforeScalingDown: 30

Waits for the given period of time before scaling down the container.

4. Show Application Metrics

If you want to see application metrics like different HTTP status codes, application throughput, latency, and response time, enable Application metrics from below the deployment template Save button. After enabling it, you should be able to see all metrics on the App Details page. By default, it remains disabled.

Helm Chart Json Schema Table

Helm Chart json schema is used to validate the deployment template values.

Chart Version
Link

reference-chart_3-12-0

reference-chart_3-11-0

reference-chart_3-10-0

reference-chart_3-9-0

Other Validations in Json Schema

The values of CPU and memory in limits must be greater than or equal to those in requests, respectively. Similarly, in the case of envoyproxy, the values of limits must be greater than or equal to those of requests, as mentioned below.

resources.limits.cpu >= resources.requests.cpu
resources.limits.memory >= resources.requests.memory
envoyproxy.resources.limits.cpu >= envoyproxy.resources.requests.cpu
envoyproxy.resources.limits.memory >= envoyproxy.resources.requests.memory

Addon features in Deployment Template Chart version 4.11.0

KEDA Autoscaling

KEDA Helm repo : https://kedacore.github.io/charts

Example for autoscaling with KEDA using Prometheus metrics is given below:

kedaAutoscaling:
  enabled: true
  minReplicaCount: 1
  maxReplicaCount: 2
  idleReplicaCount: 0
  pollingInterval: 30
  advanced:
    restoreToOriginalReplicaCount: true
    horizontalPodAutoscalerConfig:
      behavior:
        scaleDown:
          stabilizationWindowSeconds: 300
          policies:
          - type: Percent
            value: 100
            periodSeconds: 15
  triggers: 
    - type: prometheus
      metadata:
        serverAddress:  http://<prometheus-host>:9090
        metricName: http_request_total
        query: envoy_cluster_upstream_rq{appId="300", cluster_name="300-0", container="envoy",}
        threshold: "50"
  triggerAuthentication:
    enabled: false
    name:
    spec: {}
  authenticationRef: {}

Example for autoscaling with KEDA based on Kafka is given below:

kedaAutoscaling:
  enabled: true
  minReplicaCount: 1
  maxReplicaCount: 2
  idleReplicaCount: 0
  pollingInterval: 30
  advanced: {}
  triggers: 
    - type: kafka
      metadata:
        bootstrapServers: b-2.kafka-msk-dev.example.c2.kafka.ap-southeast-1.amazonaws.com:9092,b-3.kafka-msk-dev.example.c2.kafka.ap-southeast-1.amazonaws.com:9092,b-1.kafka-msk-dev.example.c2.kafka.ap-southeast-1.amazonaws.com:9092
        topic: Orders-Service-ESP.info
        lagThreshold: "100"
        consumerGroup: oders-remove-delivered-packages
        allowIdleConsumers: "true"
  triggerAuthentication:
    enabled: true
    name: keda-trigger-auth-kafka-credential
    spec:
      secretTargetRef:
        - parameter: sasl
          name: keda-kafka-secrets
          key: sasl
        - parameter: username
          name: keda-kafka-secrets
          key: username
  authenticationRef: 
    name: keda-trigger-auth-kafka-credential

NetworkPolicy

Kubernetes NetworkPolicies control pod communication by defining rules for incoming and outgoing traffic.

networkPolicy:
  enabled: false
  annotations: {}
  labels: {}
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
        - namespaceSelector:
            matchLabels:
              project: myproject
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
Key
Description

enabled

Enable or disable NetworkPolicy.

annotations

Additional metadata or information associated with the NetworkPolicy.

labels

Labels to apply to the NetworkPolicy.

podSelector

Each NetworkPolicy includes a podSelector which selects the grouping of pods to which the policy applies. The example policy selects pods with the label "role=db". An empty podSelector selects all pods in the namespace.

policyTypes

Each NetworkPolicy includes a policyTypes list which may include either Ingress, Egress, or both.

Ingress

Controls incoming traffic to pods.

Egress

Controls outgoing traffic from pods.

Winter-Soldier

Winter Soldier can be used to:

  • clean up (delete) Kubernetes resources

  • scale workload pods down to 0

Given below are the template values you can provide in winter-soldier:

winterSoldier:
  enabled: false
  apiVersion: pincher.devtron.ai/v1alpha1
  action: sleep
  timeRangesWithZone:
    timeZone: "Asia/Kolkata"
    timeRanges: []
  targetReplicas: []
  fieldSelector: []
Key
values
Description

enabled

false,true

decide the enabling factor

apiVersion

pincher.devtron.ai/v1beta1, pincher.devtron.ai/v1alpha1

specific api version

action

sleep,delete, scale

This specifies the action that needs to be performed.

timeRangesWithZone:timeZone

eg:- "Asia/Kolkata","US/Pacific"

timeRangesWithZone:timeRanges

array of [ timeFrom, timeTo, weekdayFrom, weekdayTo]

It is used to define the time period/range during which the specified action should be performed. You can have multiple timeRanges.

targetReplicas

[n] : n - number of replicas to scale.

This is a mandatory field when the action is scale. The default value is [].

fieldSelector

- AfterTime(AddTime( ParseTime({{metadata.creationTimestamp}}, '2006-01-02T15:04:05Z'), '5m'), Now())

This takes a list of expressions used to select the resources on which the specified action will be performed.

Here is an example:

winterSoldier:
  apiVersion: pincher.devtron.ai/v1alpha1 
  enabled: true
  annotations: {}
  labels: {}
  timeRangesWithZone:
    timeZone: "Asia/Kolkata"
    timeRanges: 
      - timeFrom: 00:00
        timeTo: 23:59:59
        weekdayFrom: Sat
        weekdayTo: Sun
      - timeFrom: 00:00
        timeTo: 08:00
        weekdayFrom: Mon
        weekdayTo: Fri
      - timeFrom: 20:00
        timeTo: 23:59:59
        weekdayFrom: Mon
        weekdayTo: Fri
  action: scale
  targetReplicas: [1,1,1]
  fieldSelector: 
    - AfterTime(AddTime( ParseTime({{metadata.creationTimestamp}}, '2006-01-02T15:04:05Z'), '10h'), Now())

The above settings will take action on Sat and Sun from 00:00 to 23:59:59, and on Mon-Fri from 00:00 to 08:00 and 20:00 to 23:59:59. If action: sleep, the workload hibernates at timeFrom and unhibernates at timeTo. If action: delete, it deletes workloads at timeFrom and timeTo. Here the action is scale, so it scales the number of resource replicas to targetReplicas: [1,1,1]. Each element of the targetReplicas array maps to the corresponding element of the timeRangesWithZone/timeRanges array, so make sure both arrays have equal length; otherwise, the changes cannot be observed.

The above example will select the application objects which were created more than 10 hours ago, across all namespaces excluding the application's own namespace. Winter Soldier exposes the following functions to handle time, CPU, and memory.

  • ParseTime - This function can be used to parse time. For example, to parse creationTimestamp, use ParseTime({{metadata.creationTimestamp}}, '2006-01-02T15:04:05Z').

  • AddTime - This can be used to add a duration to a time. For example, AddTime(ParseTime({{metadata.creationTimestamp}}, '2006-01-02T15:04:05Z'), '-10h') subtracts 10h from the time. Use d for days, h for hours, m for minutes, and s for seconds. Use a negative number to get an earlier time.

  • Now - This can be used to get the current time.

  • CpuToNumber and MemoryToNumber - These can be used to compare CPU and memory values. For example, any({{spec.containers.#.resources.requests}}, { MemoryToNumber(.memory) < MemoryToNumber('60Mi')}) will check whether any resource request is less than 60Mi.

Security Context

A security context defines privilege and access control settings for a Pod or Container.

To add a security context for main container:

containerSecurityContext:
  allowPrivilegeEscalation: false

To add a security context on pod level:

podSecurityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000

Topology Spread Constraints

You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. This can help to achieve high availability as well as efficient resource utilization.

topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone
    whenUnsatisfiable: DoNotSchedule
    autoLabelSelector: true
    customLabelSelector: {}

CI Pipeline

Creating CI Pipeline

A CI Workflow can be created in one of the following ways:

  • Build and Deploy from Source Code

  • Linked Build Pipeline

  • Deploy Image from External Service

  • Create a Job

Each method has different use-cases that can be tailored according to the needs of the organization.

1. Build and Deploy from Source Code

Build and Deploy from Source Code workflow allows you to build the container image from a source code repository.

  1. From the Applications menu, select your application.

  2. On the App Configuration page, select Workflow Editor.

  3. Select + New Workflow.

  4. Select Build and Deploy from Source Code.

  5. Enter the following fields on the Create build pipeline window:

Field Name
Required/Optional
Description

Source type

Required

Branch Name

Required

Branch that triggers the CI build

Advanced Options

Optional

Create Pre-Build, Build, and Post-Build tasks

Advanced Options

The Advanced CI Pipeline includes the following stages:

  • Pre-build stage: The tasks in this stage are executed before the image is built.

  • Build stage: In this stage, the build is triggered from the source code that you provide.

  • Post-build stage: The tasks in this stage will be triggered once the build is complete.

Build Stage

Go to the Build stage tab.

Field Name
Required/Optional
Description

TRIGGER BUILD PIPELINE

Required

The build execution may be set to:

  • Automatically (default): Build is triggered automatically as the Git source code changes.

  • Manually: Build is triggered manually.

DOCKER LAYER CACHING

Optional

Pipeline Name

Required

A name for the pipeline

Source type

Required

Branch Name

Required

Branch that triggers the CI build

Docker build arguments

Optional

Override docker build configurations for this pipeline.

  • Key: Field name

  • Value: Field value

Prerequisite

You can disable caching if:

  • It’s not relevant to your workflow

  • It consumes unnecessary storage

  • The pipeline doesn’t perform an actual Docker build

Which cache gets impacted?

If a PVC with cache is attached, it will not be impacted by disabling cache. Only the remote cache is disabled.

There are 3 places from where you can control the cache behavior:

1. Orchestrator ConfigMap (Global Settings)

Super-admins can define the cache settings in orchestrator-cm globally for all applications and jobs using the following flags:

DEFAULT_CACHE_FOR_CI_BUILD # for main application build stage 
DEFAULT_CACHE_FOR_CI_JOB # for CI jobs
DEFAULT_CACHE_FOR_JOB # for general jobs
DEFAULT_CACHE_FOR_CD_PRE # for pre-deployment stage 
DEFAULT_CACHE_FOR_CD_POST # for post-deployment stage
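
For instance, a super-admin could adjust these flags by editing the ConfigMap directly (a minimal sketch; devtroncd is assumed to be the namespace of your Devtron installation, and setting a flag to "false" disables that cache globally):

kubectl edit configmap orchestrator-cm -n devtroncd

# then set, for example:
#   DEFAULT_CACHE_FOR_CI_BUILD: "false"
#   DEFAULT_CACHE_FOR_CI_JOB: "false"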

2. Editing Pipeline

Go to Workflow Editor → Edit Build Pipeline (Build Stage) → Docker Layer Caching (toggle) → Use remote cache (checkbox)

By default, your build pipeline will inherit the Global Settings. However, you can use the toggle button to override it and decide the caching behavior using the Use remote cache checkbox. In other words, cache behavior defined in pipeline configuration will have higher priority than the global one.

3. During Trigger

Go to Build & Deploy (tab) → Select Material → Ignore Cache (checkbox)

You have the option to ignore cache while triggering a build (regardless of the cache settings defined at the pipeline or global level).

Note

If the caching flags in Global Settings are set to false, ignoring cache becomes the default behavior even if you don't select the 'Ignore Cache' checkbox during trigger.

Source type

Branch Fixed

This allows you to trigger a CI build whenever there is a code change on the specified branch.

Enter the Branch Name of your code repository.

Branch Regex

Branch Regex allows users to easily switch between branches matching the configured regex before triggering the build pipeline. In the case of Branch Fixed, users cannot change the branch name in the CI pipeline unless they have Admin access for the app. So, if users with Build and Deploy access should be allowed to switch the branch name before triggering the CI pipeline, a user with Admin access should select Branch Regex as the source type.

For example, if the user sets the Branch Regex as feature-*, then users can trigger the build from branches such as feature-1450, feature-hot-fix, etc.

Pull Request

This allows you to trigger the CI build when a pull request is created in your repository.

Prerequisites

To trigger the build from specific PRs, you can filter the PRs based on the following keys:

Filter key
Description

Author

Author of the PR

Source branch name

Branch from which the Pull Request is generated

Target branch name

Branch to which the Pull request will be merged

Title

Title of the Pull Request

State

State of the PR. Default is "open" and cannot be changed

Select the appropriate filter and pass the matching condition as a regular expression (regex).

Select Create Pipeline.

Tag Creation

This allows you to trigger the CI build whenever a new tag is created.

Prerequisites

To trigger the build from specific tags, you can filter the tags based on the author and/or the tag name.

Filter key
Description

Author

The one who created the tag

Tag name

Name of the tag for which the webhook will be triggered

Select the appropriate filter and pass the matching condition as a regular expression (regex).

Select Create Pipeline.

Scan for Vulnerabilities

Prerequisite

Install any one of the following integrations from Devtron Stack Manager:

  • Trivy

Custom Image Tag Pattern

This feature helps you apply custom tags (e.g., v1.0.0) to readily distinguish container images within your repository.

  1. Enable the toggle button as shown below.

  2. You can write an alphanumeric pattern for your image tag, e.g., test-v1.0.{x}. Here, 'x' is a mandatory variable whose value will incrementally increase with every build. You can also define the value of 'x' for the next build trigger in case you want to change it.

Ensure your custom tag does not start or end with a period (.) or comma (,).

  3. Click Update Pipeline.

  4. Now, go to the Build & Deploy tab of your application, and click Select Material in the CI pipeline.

  5. Choose the git commit you wish to use for building the container image. Click Start Build.

  6. The build will initiate, and once it is successful, the image tag will be reflected on all relevant screens:

    • Build History

    • Docker Registry

    • CD Pipeline (Image Selection)

Build will fail if the resulting image tag has already been built in the past. This means if there is an existing image with tag test-v1.0.0, you cannot build another image having the same tag test-v1.0.0 in the same CI pipeline. This error might occur when you reset the value of the variable x or when you disable/enable the toggle button for Custom image tag pattern.
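
For instance, with the pattern test-v1.0.{x} and x starting at 0, successive successful builds would produce tags like the following (illustrative):

test-v1.0.0   # first build after enabling the pattern
test-v1.0.1   # next build
test-v1.0.2   # and so on; x increments with every successful trigger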

2. Linked Build Pipeline

If the same code is shared across multiple applications, a Linked Build Pipeline can be used so that only one image is built and used across those applications; since there is only one build, it is not advisable to create multiple CI pipelines for it.

  1. From the Applications menu, select your application.

  2. On the App Configuration page, select Workflow Editor.

  3. Select + New Workflow.

  4. Select Linked Build Pipeline.

  5. On the Create linked build pipeline screen:

    • Search for the application in which the source CI pipeline is present.

    • Select the source CI pipeline from the application that you selected above.

    • Enter a new name for the linked CI pipeline.

  6. Click Create Linked CI Pipeline.

Thereafter, the source CI pipeline will indicate the number of Linked CI pipelines. Upon clicking it, it will display the child information as shown below. It reveals the applications and environments where Linked CI is used for deployment.

After creating a linked CI pipeline, you can create a CD pipeline.

Linked CI pipelines can't trigger builds. They rely on the source CI pipeline to build images. Trigger a build in the source CI pipeline to see the images available for deployment in the linked CI pipeline's CD stage.

3. Deploy Image from External Service

For the CI pipeline, you can receive container images from an external service via the webhook API.

You can use Devtron for deployments on Kubernetes while using an external CI tool such as Jenkins or CircleCI. The external CI feature can be used when the CI tool is hosted outside the Devtron platform. However, by using an external CI, you will not be able to use some of the Devtron features, such as image scanning and security policies, configuring pre/post CI stages, etc.

  • To configure Git Repository, you can add any Git repository account (e.g., dummy account) and click Next.

  • To configure the Container Registry and Container Repository, you can leave the fields blank or simply add any test repository and click Save & Next.

  • On the Workflow Editor page, click New Workflow and select Deploy image from external service.

  • On the Deploy image from external source page, provide the information in the following fields:

Fields
Description

Deploy to environment

When do you want to deploy

You can deploy either in one of the following ways:

  • Automatic: If you select automatic, your application will be deployed automatically every time a new image is received.

  • Manual: In case of manual, you have to select the image and deploy manually.

Deployment Strategy

Configure the deployment preferences for this pipeline.

  • Click Create Pipeline. A new CI pipeline will be created for the external source. To get the webhook URL and JSON sample payload to be used in external CI pipeline, click Show webhook details.

  • On the Webhook Details page, you have to authenticate via API token to allow requests from an external service (e.g. Jenkins or CircleCI).

  • For authentication, only users with super-admin permissions can select or generate an API token:

    • Select an API token with the required permissions if you already have one, or use Auto-generate token to generate a new one. Make sure to enter the token name in the Token name field.

  • To allow requests from the external source, you can request the API by using:

    • Webhook URL

    • cURL Request

Webhook URL

HTTP Method: POST

API Endpoint: https://{domain-name}/orchestrator/webhook/ext-ci/{pipeline-id}

JSON Payload:

{
    "dockerImage": "445808685819.dkr.ecr.us-east-2.amazonaws.com/orch:23907713-2"
}

You can also select metadata to send to Devtron; the sample JSON will be generated accordingly. You can add the payload script to your CI tool, such as Jenkins, so that Devtron receives the built image every time the CI pipeline is triggered, or you can call the Webhook URL directly to send the image to Devtron.

Sample cURL Request

curl --location --request POST \
'https://{domain-name}/orchestrator/webhook/ext-ci/{pipeline-id}' \
--header 'Content-Type: application/json' \
--header 'token: {token}' \
--data-raw '{
    "dockerImage": "445808685819.dkr.ecr.us-east-2.amazonaws.com/orch:23907713-2"
}'

Response Codes

Code
Description

200

Success (returns the app detail page URL)

400

Bad request

401

Unauthorized

Integrate with External Sources - Jenkins or CircleCI

  • On the Jenkins dashboard, select the Jenkins job which you want to integrate with the Devtron dashboard.

  • Go to the Configuration > Build Steps, click Add build step, and then click Execute Shell.

  • Enter the cURL request command.

  • Make sure to enter the API token and dockerImage in your cURL command and click Save.

Now, you can access the images on the Devtron dashboard and deploy them manually. If you select the Automatic deployment option, your application will be deployed automatically every time a new image is received.

Similarly, you can also integrate with an external source such as CircleCI by:

  • Select the job on the CircleCI dashboard and click Configuration File.

  • On the respective job, enter the cURL command and update the API token and dockerImage in your cURL command.
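
A minimal sketch of what such a CircleCI configuration step could look like (the job name, the DEVTRON_API_TOKEN and IMAGE_NAME environment variables, and the base image are illustrative assumptions, while CIRCLE_SHA1 is a standard CircleCI variable):

# .circleci/config.yml (illustrative fragment)
jobs:
  notify-devtron:
    docker:
      - image: cimg/base:stable
    steps:
      - run:
          name: Send built image to Devtron webhook
          command: |
            curl --location --request POST \
              "https://{domain-name}/orchestrator/webhook/ext-ci/{pipeline-id}" \
              --header "Content-Type: application/json" \
              --header "token: ${DEVTRON_API_TOKEN}" \
              --data-raw "{\"dockerImage\": \"${IMAGE_NAME}:${CIRCLE_SHA1}\"}"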


Updating CI Pipeline

You can update the configurations of an existing CI Pipeline except for the pipeline's name. To update a pipeline, select your CI pipeline. In the Edit build pipeline window, edit the required stages and select Update Pipeline.


Deleting CI Pipeline

You can only delete a CI pipeline if there is no CD pipeline created in your workflow.

To delete a CI pipeline, go to App Configurations > Workflow Editor and select Delete Pipeline.


Extras

Configuring Webhook

For GitHub

  1. Go to the Settings page of your repository and select Webhooks.

  2. Select Add webhook.

  3. In the Payload URL field, enter the Webhook URL that you get on selecting the source type as "Pull Request" or "Tag Creation" in the Devtron dashboard.

  4. Change the Content-type to application/json.

  5. In the Secret field, enter the secret from the Devtron dashboard when you select the source type as "Pull Request" or "Tag Creation".

  6. Under Which events would you like to trigger this webhook?, select Let me select individual events to trigger the webhook to build the CI Pipeline.

  7. Select Branch or tag creation and Pull Requests.

  8. Select Add webhook.

For Bitbucket Cloud

  1. Go to the Repository settings page of your Bitbucket repository.

  2. Select Webhooks and then select Add webhook.

  3. Enter a Title for the webhook.

  4. In the URL field, enter the Webhook URL that you get on selecting the source type as "Pull Request" or "Tag Creation" in the Devtron dashboard.

  5. Select the event triggers for which you want to trigger the webhook.

  6. Select Save to save your configurations.

CD Pipeline

After your CI pipeline is ready, you can start building your CD pipeline. Devtron enables you to design your CD pipeline in a way that fully automates your deployments. Images from CI stage can be deployed to one or more environments through dedicated CD pipelines.

Creating CD Pipeline

Click the '+' sign on CI Pipeline to attach a CD Pipeline to it.

A basic Create deployment pipeline window will pop up.

Here, you get two tabs:


New Deployment

The New Deployment tab displays the following sections:

Deploy to Environment

This section expects four inputs from you:

Setting
Description
Options

Environment

Select the environment where you want to deploy your application

(List of available environments)

Namespace

Automatically populated based on the selected environment

Not Applicable

Trigger

When to execute the deployment pipeline

Automatic: Deployment triggers automatically when a new image completes the previous stage (build pipeline or another deployment pipeline). Manual: Deployment is not initiated automatically; you can trigger deployment with a desired image.

Deployment Approach

How to deploy the application

Deployment Strategy

Now, the window will have 3 distinct tabs, and you will see the following additions:

You can create or edit a deployment strategy in Advanced Options. Remember, only the default strategy will be used for deployment, so use the SET DEFAULT button to mark your preferred strategy as default after creating it.

Pre-Deployment Stage

If your deployment requires prior actions like DB migration, code quality check (QC), etc., you can use the Pre-deployment stage to configure such tasks.

  1. Tasks

Here you can add one or more tasks. The tasks can be re-arranged using drag-and-drop and they will be executed sequentially.

  2. Trigger Pre-Deployment Stage

  3. ConfigMaps & Secrets

Prerequisites

If you want to use some configuration files and secrets in pre-deployment stages or post-deployment stages, then you can use the ConfigMaps & Secrets options. You will get them as a drop-down in the pre-deployment stage.

  4. Execute tasks in application environment

These Pre-deployment CD / Post-deployment CD pods can be created in your deployment cluster or the Devtron build cluster. If your scripts/tasks have some dependency on the deployment environment, you may run these pods in the deployment cluster. Thus, your scripts (if any) can interact with cluster services that may not be publicly exposed.

Some tasks require extra permissions for the node where Devtron is installed. However, if the node already has the necessary permissions for deploying applications, there is no need to assign them again. Instead, you can enable the Execute tasks in application environment option for the pre-CD or post-CD steps. By default, this option is disabled.

To enable the Execute tasks in application environment option, follow these steps:

  • Go to the chart store and search for the devtron-in-clustercd chart.

  • Configure the chart according to your requirements and deploy it in the target cluster.

  • After the deployment, edit the devtron-cm configmap and add the following key-value pair:

    ORCH_HOST: <host_url>/orchestrator/webhook/msg/nats
    
    Example:
    
    ORCH_HOST: http://xyz.devtron.com/orchestrator/webhook/msg/nats
    

    ORCH_HOST value should be the same as the CD_EXTERNAL_LISTENER_URL value that is passed in values.yaml.

  • Delete the Devtron pod using the following command:

    kubectl delete pod -l app=devtron -n devtroncd

  • Navigate to the chart store again and search for the "migration-incluster-cd" chart.

  • Edit the cluster-name and secret name values within the chart. The cluster name refers to the name used when adding the cluster in the global configuration and for which you are going to enable the Execute tasks in application environment option.

  • Deploy the chart in any environment within the Devtron cluster. Now you should be able to enable the Execute tasks in application environment option for an environment of the target cluster.

Deployment Stage

Pipeline Name

Pipeline name will be auto-generated; however, you are free to modify the name as per your requirement.

Custom Image Tag Pattern

  1. Enable the toggle button as shown below.

  2. Click the edit icon.

  3. You can write an alphanumeric pattern for your image tag, e.g., prod-v1.0.{x}. Here, 'x' is a mandatory variable whose value will incrementally increase with every pre or post deployment trigger (that option is also available to you). You can also define the value of 'x' for the next trigger in case you want to change it.

Ensure your custom tag does not start or end with a period (.) or comma (,).

  4. Click Update Pipeline.

Pull Container Image with Image Digest

To eliminate the possibility of pulling an unintended image, Devtron offers the option to pull container images using the digest along with the image tag.

An image digest is a unique and immutable SHA-256 string returned by the container registry when you push an image. So the image referenced by the digest will never change.
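
For illustration, the same image referenced by tag and by digest would look like the following (the digest value is a placeholder):

# by tag (mutable reference)
445808685819.dkr.ecr.us-east-2.amazonaws.com/orch:23907713-2

# by digest (immutable reference)
445808685819.dkr.ecr.us-east-2.amazonaws.com/orch@sha256:<64-character-hex-digest>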

Who Can Perform This Action?

Post-Deployment Stage

If you need to run any actions, e.g., closure of a Jira ticket, load testing, or performance testing, you can configure such actions in the post-deployment stages.

Post-deployment stages are similar to pre-deployment stages. The difference is, pre-deployment executes before the deployment, while post-deployment occurs after.


Migrate to Devtron

When can I see this option?

Who Can Perform This Action?

Only super-admins can migrate existing Helm releases or Argo CD applications to Devtron.

If you already use external Helm or Argo CD for deployment and wish to try out Devtron, this feature helps you onboard and manage your external applications using Devtron’s CI/CD capabilities, offering the following benefits:

  • No hassle of manually migrating your existing applications

  • No need to set up a parallel Argo CD instance

  • No risk of losing your existing configurations

  • Use build pipeline in your workflow

  • Execute pre-deployment and post-deployment tasks

  • Scan your apps for vulnerabilities

  • Hibernate or restart your app

Migrate Helm Release

Prerequisites

  1. Click Helm Release in 'Select type of application to migrate'.

  2. Select the external cluster containing your Helm releases, and select the Helm release you wish to migrate.

  3. Select the trigger (Automatic/Manual) and click Create Pipeline.

Limitations

  • Apps deployed using Helm + manual kubectl, plain kubectl, or kustomize + helm are not supported.

  • By default, Devtron detects and uses app-values.yaml as the values file. If your Helm app contains multiple values files, you must consolidate them into a single app-values.yaml.

  • Once an app is onboarded to Devtron, the user should only use Devtron to manage that application and not make manual changes on that onboarded Helm release. This is because Devtron might not monitor or reconcile the manual changes you make outside Devtron.

Migrate Argo CD Application

Prerequisites

  • It must have a single Git source and a single values file. By default, Devtron expects app-values.yaml so make sure it is committed to Git.

  • The external Argo CD should have auto-sync enabled or an alternative syncing mechanism, as Devtron does not perform manual syncs.

  1. Click Argo CD Application in 'Select type of application to migrate'.

  2. Select the external cluster containing your Argo apps, and select the Argo CD application you wish to migrate.

  3. Select the trigger (Automatic/Manual) and click Create Pipeline.

Limitations

  • The Git source type should be branch HEAD.

  • The target deployment cluster’s endpoint in Devtron must be the same as the one configured in Argo CD.

  • Once onboarded to Devtron, users should manage the application only through Devtron and avoid making changes directly in Git or Argo CD. This is because Devtron might not monitor or reconcile the manual changes you make outside Devtron.

Note


Updating CD Pipeline

You can update the deployment stages and the deployment strategy of the CD Pipeline whenever you require it. However, you cannot change the name of a CD Pipeline or its Deployment Environment. If you want a new CD pipeline for the same environment, first delete the previous CD pipeline.

To update a CD Pipeline, go to the App Configurations section, click on Workflow Editor, and then click on the CD Pipeline you want to update.

Make changes as needed and click on Update Pipeline to update this CD Pipeline.


Deleting CD Pipeline

If you no longer require the CD Pipeline, you can also delete the Pipeline.

To delete a CD Pipeline, go to the App Configurations and then click on the Workflow editor. Now click on the pipeline you wish to delete. A pop-up with the CD details will appear. Verify the name and the details to ensure that you are not accidentally deleting the wrong CD pipeline, and then click Delete Pipeline to delete it.

Deleting a CD pipeline also deletes all the K8s resources associated with it and will disrupt the deployed microservice. Before deleting a CD pipeline, please ensure that the associated resources are not being used in any production workload.


Extras

Deployment Strategies

A deployment strategy is a method of updating, downgrading, or creating new versions of an application. The options you see under deployment strategy depend on the selected chart type (see fig 2). Below are some deployment configuration-based strategies.

Blue-Green Strategy

Blue-green deployments involve running two versions of an application at the same time and moving traffic from the in-production version (the green version) to the newer version (the blue version).

blueGreen:
  autoPromotionSeconds: 30
  scaleDownDelaySeconds: 30
  previewReplicaCount: 1
  autoPromotionEnabled: false
Key
Description

autoPromotionSeconds

It will make the rollout automatically promote the new ReplicaSet to active Service after this time has passed

scaleDownDelaySeconds

It is used to delay scaling down the old ReplicaSet after the active Service is switched to the new ReplicaSet

previewReplicaCount

It will indicate the number of replicas that the new version of an application should run

autoPromotionEnabled

It will make the rollout automatically promote the new ReplicaSet to the active service

Rolling Strategy

A rolling deployment slowly replaces instances of the previous version of an application with instances of the new version of the application. Rolling deployment typically waits for new pods to become ready via a readiness check before scaling down the old components. If a significant issue occurs, the rolling deployment can be aborted.

rolling:
  maxSurge: "25%"
  maxUnavailable: 1
Key
Description

maxSurge

No. of replicas allowed above the scheduled quantity

maxUnavailable

Maximum number of pods allowed to be unavailable

Canary Strategy

Canary deployments are a pattern for rolling out releases to a subset of users or servers. The idea is to first deploy the change to a small subset of servers, test it, and then roll the change out to the rest of the servers. The canary deployment serves as an early warning indicator with less impact on downtime: if the canary deployment fails, the rest of the servers aren't impacted.

canary:
  maxSurge: "25%"
  maxUnavailable: 1
  steps:
    - setWeight: 25
    - pause:
        duration: 15 # 15 seconds
    - setWeight: 50
    - pause:
        duration: 15 # 15 seconds
    - setWeight: 75
    - pause:
        duration: 15 # 15 seconds
Key
Description

maxSurge

It defines the maximum number of replicas the rollout can create to move to the correct ratio set by the last setWeight

maxUnavailable

The maximum number of pods that can be unavailable during the update

setWeight

It is the required percent of pods to move to the next step

duration

It is used to set the duration to wait to move to the next step

Recreate Strategy

The recreate strategy is a basic deployment pattern that consists of shutting down version 'A' and then deploying version 'B' after version 'A' is turned off.

A recreate deployment incurs downtime because, for a brief period, no instances of your application are running. However, your old code and new code do not run at the same time. It terminates the old version and releases the new one.

recreate:

Unlike other strategies mentioned above, 'Recreate' strategy doesn't contain keys for you to configure.

Creating Sequential Pipelines

Devtron supports attaching multiple deployment pipelines to a single build pipeline in its workflow editor. This feature lets you deploy an image first to a staging environment, run tests, and then deploy the same image to production.

Please follow the steps mentioned below to create sequential pipelines:

  1. After creating CI/build pipeline, create a CD pipeline by clicking on the + sign on CI pipeline and configure the CD pipeline as per your requirements.

  2. To add another CD pipeline sequentially after the previous one, click the + sign on the last CD pipeline again.

  3. Similarly, you can add multiple CD pipelines by clicking the + sign of the last CD pipeline, each deploying in a different environment.

Tip

StatefulSets

The StatefulSet chart in Devtron allows you to deploy and manage stateful applications. StatefulSet is a Kubernetes resource that provides guarantees about the ordering and uniqueness of Pods during deployment and scaling.

It supports only the ONDELETE and ROLLINGUPDATE deployment strategies.

You can select the StatefulSet chart when you want to use only basic use cases, which include the following:

  • Managing Stateful Applications: StatefulSets are ideal for managing stateful applications, such as databases or distributed systems, that require stable network identities and persistent storage for each Pod.

  • Ordered Pod Management: StatefulSets ensure ordered and predictable management of Pods by providing each Pod with a unique and stable hostname based on a defined naming convention and ordinal index.

  • Updating and Scaling Stateful Applications: StatefulSets support updating and scaling stateful applications by creating new versions of the StatefulSet and performing rolling updates or scaling operations in a controlled manner, ensuring minimal disruption to the application.

  • Persistent Storage: StatefulSets have built-in mechanisms for handling persistent volumes, allowing each Pod to have its own unique volume claim and storage. This ensures data persistence even when Pods are rescheduled or restarted.

  • Maintaining Pod Identity: StatefulSets guarantee consistent identity for each Pod throughout its lifecycle. This stability is maintained even if the Pods are rescheduled, allowing applications to rely on stable network identities.

  • Rollback Capability: StatefulSets provide the ability to rollback to a previous version in case the current state of the application is unstable or encounters issues, ensuring a known working state for the application.

  • Status Monitoring: StatefulSets offer status information that can be used to monitor the deployment, including the current version, number of replicas, and the readiness of each Pod. This helps in tracking the health and progress of the StatefulSet deployment.

  • Resource Cleanup: StatefulSets allow for easy cleanup of older versions by deleting StatefulSets and their associated Pods and persistent volumes that are no longer needed, ensuring efficient resource utilization.

1. Yaml File

Container Ports

This defines ports on which application services will be exposed to other services

ContainerPort:
  - envoyPort: 8799
    idleTimeout:
    name: app
    port: 8080
    servicePort: 80
    nodePort: 32056
    supportStreaming: true
    useHTTP2: true
Key
Description

envoyPort

envoy port for the container.

idleTimeout

the duration of time that a connection is idle before the connection is terminated.

name

name of the port.

port

port for the container.

servicePort

port of the corresponding kubernetes service.

nodePort

nodeport of the corresponding kubernetes service.

supportStreaming

Used for high performance protocols like grpc where timeout needs to be disabled.

useHTTP2

Envoy container can accept HTTP2 requests.

EnvVariables

EnvVariables: []
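
A minimal sketch of how static environment variables can be listed here (the variable name and value are illustrative):

EnvVariables:
  - name: LOG_LEVEL
    value: "info"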

EnvVariablesFromSecretKeys

EnvVariablesFromSecretKeys: 
  - name: ENV_NAME
    secretName: SECRET_NAME
    keyName: SECRET_KEY

It is used to define an environment variable whose value comes from a Secret: name is the environment variable's name, secretName is the name of the Secret, and keyName is the key within that Secret whose value is used.

EnvVariablesFromConfigMapKeys

EnvVariablesFromConfigMapKeys: 
  - name: ENV_NAME
    configMapName: CONFIG_MAP_NAME
    keyName: CONFIG_MAP_KEY

It is used to define an environment variable whose value comes from a ConfigMap: name is the environment variable's name, configMapName is the name of the ConfigMap, and keyName is the key within that ConfigMap whose value is used.

These fields are used to set environment variables for the containers that run in the Pod.

StatefulSetConfig

These are all the configuration settings for the StatefulSet.

statefulSetConfig:
  labels:
    app: my-statefulset
    environment: production
  annotations:
    example.com/version: "1.0"
  serviceName: "my-statefulset-service"
  podManagementPolicy: "Parallel"
  revisionHistoryLimit: 5
  mountPath: "/data"
  volumeClaimTemplates:
    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        labels:
          app: my-statefulset
      spec:
        accessModes:
          - ReadWriteOnce
        dataSource:
          kind: Snapshot
          apiGroup: snapshot.storage.k8s.io
          name: my-snapshot
        resources:
          requests:
            storage: 5Gi
          limits:
            storage: 10Gi
        storageClassName: my-storage-class
        selector:
          matchLabels:
            app: my-statefulset
        volumeMode: Filesystem
        volumeName: my-pv
    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: pvc-logs
        labels:
          app: myapp
      spec:
        accessModes:
          - ReadWriteMany
        dataSourceRef:
          kind: Secret
          apiGroup: v1
          name: my-secret
        resources:
          requests:
            storage: 5Gi
        storageClassName: my-storage-class
        selector:
          matchExpressions:
            - {key: environment, operator: In, values: [production]}
        volumeMode: Block
        volumeName: my-pv

Mandatory fields in statefulSetConfig are:

statefulSetConfig:
  mountPath: /tmp
  volumeClaimTemplates:
  - spec:
      accessModes: 
        - ReadWriteOnce
      resources: 
        requests:
            storage: 2Gi

Here is an explanation of each field in the statefulSetConfig:

Key
Description

labels

Set of key-value pairs used to identify the StatefulSet.

annotations

A map of key-value pairs that are attached to the stateful set as metadata.

serviceName

The name of the Kubernetes Service that the StatefulSet should create.

podManagementPolicy

A policy that determines how Pods are created and deleted by the StatefulSet. In this case, the policy is set to "Parallel", which means that all Pods are created at once.

revisionHistoryLimit

The maximum number of old revisions that should be retained in the StatefulSet's revision history.

updateStrategy

The update strategy used by the StatefulSet when rolling out changes.

mountPath

The path where the volume should be mounted in the container.

volumeClaimTemplates: An array of volume claim templates that are used to create persistent volumes for the StatefulSet. Each volume claim template specifies the storage class, access mode, storage size, and other details of the persistent volume.

Key
Description

apiVersion

The API version of the PVC .

kind

The type of object that the PVC is.

metadata

Metadata that is attached to the resource being created.

labels

A set of key-value pairs used to label the object for identification and selection.

spec

The specification of the object, which defines its desired state and behavior.

accessModes

A list of access modes for the PersistentVolumeClaim, such as "ReadWriteOnce" or "ReadWriteMany".

dataSource

A data source used to populate the PersistentVolumeClaim, such as a Snapshot or a StorageClass.

kind

specifies the kind of the snapshot, in this case Snapshot.

apiGroup

specifies the API group of the snapshot API, in this case snapshot.storage.k8s.io.

name

specifies the name of the snapshot, in this case my-snapshot.

dataSourceRef

A reference to a data source used to create the persistent volume. In this case, it's a secret.

updateStrategy

The update strategy used by the StatefulSet when rolling out changes.

resources

The resource requests and limits for the PersistentVolumeClaim, which define the minimum and maximum amount of storage it can use.

requests

The amount of storage requested by the PersistentVolumeClaim.

limits

The maximum amount of storage that the PersistentVolumeClaim can use.

storageClassName

The name of the storage class to use for the persistent volume.

selector

The selector used to match a persistent volume to a persistent volume claim.

matchLabels

a map of key-value pairs to match the labels of the corresponding PersistentVolume.

matchExpressions

A set of requirements that the selected object must meet to be considered a match.

key

The key of the label or annotation to match.

operator

The operator used to compare the key-value pairs (in this case, "In" specifies a set membership test).

values

A list of values that the selected object's label or annotation must match.

volumeMode

The mode of the volume, either "Filesystem" or "Block".

volumeName

The name of the PersistentVolume that is created for the PersistentVolumeClaim.

Liveness Probe

If this check fails, Kubernetes restarts the pod. The probe should return an error code in case of a non-recoverable error.

LivenessProbe:
  Path: ""
  port: 8080
  initialDelaySeconds: 20
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
  failureThreshold: 3
  httpHeaders:
    - name: Custom-Header
      value: abc
  scheme: ""
  tcp: true
Key
Description

Path

It defines the path where the liveness needs to be checked.

initialDelaySeconds

It defines the time to wait before a given container is checked for liveness.

periodSeconds

It defines how often (in seconds) a given container is checked for liveness.

successThreshold

It defines the number of successes required before a given container is said to fulfil the liveness probe.

timeoutSeconds

It defines the timeout (in seconds) for the probe.

failureThreshold

It defines the maximum number of failures that are acceptable before a given container is not considered as live.

httpHeaders

Custom headers to set in the request. HTTP allows repeated headers. You can override the default headers by defining .httpHeaders for the probe.

scheme

Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to HTTP.

tcp

The kubelet will attempt to open a socket to your container on the specified port. If it can establish a connection, the container is considered healthy.

MaxUnavailable

  MaxUnavailable: 0

The maximum number of pods that can be unavailable during the update process. The value of "MaxUnavailable: " can be an absolute number or percentage of the replicas count. The default value of "MaxUnavailable: " is 25%.

MaxSurge

MaxSurge: 1

The maximum number of pods that can be created over the desired number of pods. For "MaxSurge: " also, the value can be an absolute number or percentage of the replicas count. The default value of "MaxSurge: " is 25%.

Min Ready Seconds

MinReadySeconds: 60

This specifies the minimum number of seconds for which a newly created Pod should be ready without any of its containers crashing, for it to be considered available. This defaults to 0 (the Pod will be considered available as soon as it is ready).

Readiness Probe

If this check fails, Kubernetes stops sending traffic to the application. The probe should return an error code in case of errors that can be recovered from if traffic is stopped.

ReadinessProbe:
  Path: ""
  port: 8080
  initialDelaySeconds: 20
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
  failureThreshold: 3
  httpHeaders:
    - name: Custom-Header
      value: abc
  scheme: ""
  tcp: true
Key
Description

Path

It defines the path where the readiness needs to be checked.

initialDelaySeconds

It defines the time to wait before a given container is checked for readiness.

periodSeconds

It defines how often (in seconds) a given container is checked for readiness.

successThreshold

It defines the number of successes required before a given container is said to fulfill the readiness probe.

timeoutSeconds

It defines the timeout (in seconds) for the probe.

failureThreshold

It defines the maximum number of failures that are acceptable before a given container is not considered as ready.

httpHeaders

Custom headers to set in the request. HTTP allows repeated headers. You can override the default headers by defining .httpHeaders for the probe.

scheme

Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to HTTP.

tcp

The kubelet will attempt to open a socket to your container on the specified port. If it can establish a connection, the container is considered healthy.

Ambassador Mappings

You can create ambassador mappings to access your applications from outside the cluster. At its core a Mapping resource maps a resource to a service.

ambassadorMapping:
  ambassadorId: "prod-emissary"
  cors: {}
  enabled: true
  hostname: devtron.example.com
  labels: {}
  prefix: /
  retryPolicy: {}
  rewrite: ""
  tls:
    context: "devtron-tls-context"
    create: false
    hosts: []
    secretName: ""
Key
Description

enabled

Set true to enable ambassador mapping else set false.

ambassadorId

used to specify id for specific ambassador mappings controller.

cors

used to specify cors policy to access host for this mapping.

weight

used to specify weight for canary ambassador mappings.

hostname

used to specify hostname for ambassador mapping.

prefix

used to specify path for ambassador mapping.

labels

used to provide custom labels for ambassador mapping.

retryPolicy

used to specify retry policy for ambassador mapping.

corsPolicy

Provide cors headers on flagger resource.

rewrite

used to specify whether to redirect the path of this mapping and where.

tls

used to create or define ambassador TLSContext resource.

extraSpec

used to provide extra spec values which not present in deployment template for ambassador resource.

Autoscaling

This is connected to HPA and controls scaling up and down in response to request load.

autoscaling:
  enabled: false
  MinReplicas: 1
  MaxReplicas: 2
  TargetCPUUtilizationPercentage: 90
  TargetMemoryUtilizationPercentage: 80
  extraMetrics: []
Key
Description

enabled

Set true to enable autoscaling else set false.

MinReplicas

Minimum number of replicas allowed for scaling.

MaxReplicas

Maximum number of replicas allowed for scaling.

TargetCPUUtilizationPercentage

The target CPU utilization that is expected for a container.

TargetMemoryUtilizationPercentage

The target memory utilization that is expected for a container.

extraMetrics

Used to give external metrics for autoscaling.

Fullname Override

fullnameOverride: app-name

fullnameOverride replaces the release fullname created by default by devtron, which is used to construct Kubernetes object names. By default, devtron uses {app-name}-{environment-name} as release fullname.

Image

image:
  pullPolicy: IfNotPresent

Image is used to access images in Kubernetes. pullPolicy defines when the image should be pulled; with IfNotPresent, the image is pulled only when it is not already present on the node. It can also be set to "Always".

imagePullSecrets

imagePullSecrets contains the docker credentials that are used for accessing a registry.

imagePullSecrets:
  - regcred
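
The secret referenced here (regcred) must already exist in the application's namespace. A minimal sketch of creating such a secret with kubectl, where the server, username, password, and namespace are placeholders:

kubectl create secret docker-registry regcred \
  --docker-server=<registry-server> \
  --docker-username=<username> \
  --docker-password=<password> \
  --namespace=<application-namespace>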

Ingress

This allows public access to the URL. Please ensure you are using the right nginx annotation for the nginx class; its default value is nginx.

ingress:
  enabled: false
  # For K8s 1.19 and above use ingressClassName instead of annotation kubernetes.io/ingress.class:
  className: nginx
  annotations: {}
  hosts:
      - host: example1.com
        paths:
            - /example
      - host: example2.com
        paths:
            - /example2
            - /example2/healthz
  tls: []

Legacy deployment-template ingress format

ingress:
  enabled: false
  # For K8s 1.19 and above use ingressClassName instead of annotation kubernetes.io/ingress.class:
  ingressClassName: nginx-internal
  annotations: {}
  path: ""
  host: ""
  tls: []
Key
Description

enabled

Enable or disable ingress

annotations

To configure some options depending on the Ingress controller

path

Path name

host

Host name

tls

It contains security details

Ingress Internal

This allows private access to the URL. Please ensure you are using the right nginx annotation for the nginx class; its default value is nginx.

ingressInternal:
  enabled: false
  # For K8s 1.19 and above use ingressClassName instead of annotation kubernetes.io/ingress.class:
  ingressClassName: nginx-internal
  annotations: {}
  hosts:
      - host: example1.com
        paths:
            - /example
      - host: example2.com
        paths:
            - /example2
            - /example2/healthz
  tls: []
Key
Description

enabled

Enable or disable ingress

annotations

To configure some options depending on the Ingress controller

path

Path name

host

Host name

tls

It contains security details

Init Containers

initContainers: 
  - reuseContainerImage: true
    securityContext:
      runAsUser: 1000
      runAsGroup: 3000
      fsGroup: 2000
    volumeMounts:
      - mountPath: /etc/ls-oms
        name: ls-oms-cm-vol
    command:
      - flyway
      - -configFiles=/etc/ls-oms/flyway.conf
      - migrate

  - name: nginx
    image: nginx:1.14.2
    securityContext:
      privileged: true
    ports:
    - containerPort: 80
    command: ["/usr/local/bin/nginx"]
    args: ["-g", "daemon off;"]

Specialized containers that run before app containers in a Pod. Init containers can contain utilities or setup scripts not present in an app image. One can use base image inside initContainer by setting the reuseContainerImage flag to true.

Istio

Istio is a service mesh which simplifies observability, traffic management, security, and much more with its virtual services and gateways.

istio:
  enable: true
  gateway:
    annotations: {}
    enabled: false
    host: example.com
    labels: {}
    tls:
      enabled: false
      secretName: example-tls-secret
  virtualService:
    annotations: {}
    enabled: false
    gateways: []
    hosts: []
    http:
      - corsPolicy:
          allowCredentials: false
          allowHeaders:
            - x-some-header
          allowMethods:
            - GET
          allowOrigin:
            - example.com
          maxAge: 24h
        headers:
          request:
            add:
              x-some-header: value
        match:
          - uri:
              prefix: /v1
          - uri:
              prefix: /v2
        retries:
          attempts: 2
          perTryTimeout: 3s
        rewriteUri: /
        route:
          - destination:
              host: service1
              port: 80
        timeout: 12s
      - route:
          - destination:
              host: service2
    labels: {}
Key
Description

istio

Istio enablement. When istio.enable is set to true, Istio is enabled for the specified configurations.

gateway

Allowing external traffic to enter the service mesh through the specified configurations.

host

The external domain through which traffic will be routed into the service mesh.

tls

Traffic to and from the gateway should be encrypted using TLS.

secretName

Specifies the name of the Kubernetes secret that contains the TLS certificate and private key. The TLS certificate is used for securing the communication between clients and the Istio gateway.

virtualService

Enables the definition of rules for how traffic should be routed to different services within the service mesh.

gateways

Specifies the gateways to which the rules defined in the VirtualService apply.

hosts

List of hosts (domains) to which this VirtualService is applied.

http

Configuration for HTTP routes within the VirtualService. It defines routing rules based on HTTP attributes such as URI prefixes, headers, timeouts, and retry policies.

corsPolicy

Cross-Origin Resource Sharing (CORS) policy configuration.

headers

Additional headers to be added to the HTTP request.

match

Conditions that need to be satisfied for this route to be used.

uri

This specifies a match condition based on the URI of the incoming request.

prefix

It specifies that the URI should have the specified prefix.

retries

Retry configuration for failed requests.

attempts

It specifies the number of retry attempts for failed requests.

perTryTimeout

sets the timeout for each individual retry attempt.

rewriteUri

Rewrites the URI of the incoming request.

route

List of destination rules for routing traffic.

Pause For Seconds Before Switch Active

pauseForSecondsBeforeSwitchActive: 30

The period of time to wait before switching the new container to active.

Resources

These define minimum and maximum RAM and CPU available to the application.

resources:
  limits:
    cpu: "1"
    memory: "200Mi"
  requests:
    cpu: "0.10"
    memory: "100Mi"

Resources are required to set CPU and memory usage.

Limits

Limits make sure a container never goes above a certain value. The container is only allowed to go up to the limit, and then it is restricted.

Requests

Requests are what the container is guaranteed to get.

Service

This defines annotations and the type of service; optionally, a name can also be defined.

  service:
    type: ClusterIP
    annotations: {}

Volumes

volumes:
  - name: log-volume
    emptyDir: {}
  - name: logpv
    persistentVolumeClaim:
      claimName: logpvc

It is required when some values need to be read from or written to an external disk.

Volume Mounts

volumeMounts:
  - mountPath: /var/log/nginx/
    name: log-volume 
  - mountPath: /mnt/logs
    name: logpvc
    subPath: employee  

It is used to provide mounts to the volume.

Affinity and anti-affinity

Spec:
  Affinity:
    Key:
    Values:

Spec is used to define the desired state of the given container.

Node Affinity allows you to constrain which nodes your pod is eligible to schedule on, based on labels of the node.

Inter-pod affinity allows you to constrain which nodes your pod is eligible to be scheduled on, based on the labels of pods already running on those nodes.

Key

Key part of the label used for node selection; this should be the same as the label on the node. Please confirm with your DevOps team.

Values

Value part of the label used for node selection; this should be the same as the label on the node. Please confirm with your DevOps team.

Tolerations

tolerations:
 - key: "key"
   operator: "Equal"
   value: "value"
   effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"

Tolerations allow pods to be scheduled onto nodes with matching taints; taints are the opposite, as they allow a node to repel a set of pods.

A pod can be scheduled onto a tainted node (and tolerate the taint) only if it has a matching toleration.

Taints and tolerations are a mechanism which work together that allows you to ensure that pods are not placed on inappropriate nodes. Taints are added to nodes, while tolerations are defined in the pod specification. When you taint a node, it will repel all the pods except those that have a toleration for that taint. A node can have one or many taints associated with it.

Arguments

args:
  enabled: false
  value: []

This is used to pass arguments to the command.

Command

command:
  enabled: false
  value: []

It contains the commands for the server.

Key
Description

enabled

To enable or disable the command.

value

It contains the commands.

Containers

The Containers section can be used to run sidecar containers along with your main container within the same pod. Containers running within the same pod can share volumes and IP address, and can address each other via localhost. You can use the base image inside a container by setting the reuseContainerImage flag to true.

    containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        command: ["/usr/local/bin/nginx"]
        args: ["-g", "daemon off;"]
      - reuseContainerImage: true
        securityContext:
          runAsUser: 1000
          runAsGroup: 3000
          fsGroup: 2000
        volumeMounts:
        - mountPath: /etc/ls-oms
          name: ls-oms-cm-vol
        command:
          - flyway
          - -configFiles=/etc/ls-oms/flyway.conf
          - migrate

Prometheus

  prometheus:
    release: monitoring

Prometheus is a Kubernetes monitoring tool. The release key specifies the Prometheus release that should monitor this application (monitoring in the given case).

rawYaml

rawYaml: 
  - apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      selector:
        app: MyApp
      ports:
        - protocol: TCP
          port: 80
          targetPort: 9376
      type: ClusterIP

Accepts an array of Kubernetes objects. You can specify any kubernetes yaml here and it will be applied when your app gets deployed.

Grace Period

GracePeriod: 30

Kubernetes waits for the specified time called the termination grace period before terminating the pods. By default, this is 30 seconds. If your pod usually takes longer than 30 seconds to shut down gracefully, make sure you increase the GracePeriod.

A Graceful termination in practice means that your application needs to handle the SIGTERM message and begin shutting down when it receives it. This means saving all data that needs to be saved, closing down network connections, finishing any work that is left, and other similar tasks.

There are many reasons why Kubernetes might terminate a perfectly healthy container. If you update your deployment with a rolling update, Kubernetes slowly terminates old pods while spinning up new ones. If you drain a node, Kubernetes terminates all pods on that node. If a node runs out of resources, Kubernetes terminates pods to free those resources. It’s important that your application handle termination gracefully so that there is minimal impact on the end user and the time-to-recovery is as fast as possible.

Server

server:
  deployment:
    image_tag: 1-95a53
    image: ""

It is used for providing server configurations.

Deployment

It gives the details for deployment.

Key
Description

image_tag

It is the image tag

image

It is the URL of the image

Service Monitor

servicemonitor:
      enabled: true
      path: /abc
      scheme: 'http'
      interval: 30s
      scrapeTimeout: 20s
      metricRelabelings:
        - sourceLabels: [namespace]
          regex: '(.*)'
          replacement: myapp
          targetLabel: target_namespace

It gives the set of targets to be monitored.

Db Migration Config

dbMigrationConfig:
  enabled: false

It is used to configure database migration.

KEDA Autoscaling

An example of autoscaling with KEDA using Prometheus metrics is given below:

kedaAutoscaling:
  enabled: true
  minReplicaCount: 1
  maxReplicaCount: 2
  idleReplicaCount: 0
  pollingInterval: 30
  advanced:
    restoreToOriginalReplicaCount: true
    horizontalPodAutoscalerConfig:
      behavior:
        scaleDown:
          stabilizationWindowSeconds: 300
          policies:
          - type: Percent
            value: 100
            periodSeconds: 15
  triggers: 
    - type: prometheus
      metadata:
        serverAddress:  http://<prometheus-host>:9090
        metricName: http_request_total
        query: envoy_cluster_upstream_rq{appId="300", cluster_name="300-0", container="envoy",}
        threshold: "50"
  triggerAuthentication:
    enabled: false
    name:
    spec: {}
  authenticationRef: {}

An example of autoscaling with KEDA based on Kafka is given below:

kedaAutoscaling:
  enabled: true
  minReplicaCount: 1
  maxReplicaCount: 2
  idleReplicaCount: 0
  pollingInterval: 30
  advanced: {}
  triggers: 
    - type: kafka
      metadata:
        bootstrapServers: b-2.kafka-msk-dev.example.c2.kafka.ap-southeast-1.amazonaws.com:9092,b-3.kafka-msk-dev.example.c2.kafka.ap-southeast-1.amazonaws.com:9092,b-1.kafka-msk-dev.example.c2.kafka.ap-southeast-1.amazonaws.com:9092
        topic: Orders-Service-ESP.info
        lagThreshold: "100"
        consumerGroup: oders-remove-delivered-packages
        allowIdleConsumers: "true"
  triggerAuthentication:
    enabled: true
    name: keda-trigger-auth-kafka-credential
    spec:
      secretTargetRef:
        - parameter: sasl
          name: keda-kafka-secrets
          key: sasl
        - parameter: username
          name: keda-kafka-secrets
          key: username
  authenticationRef: 
    name: keda-trigger-auth-kafka-credential

Winter-Soldier

Winter Soldier can be used to:

  • clean up (delete) Kubernetes resources

  • reduce workload pods to 0

Given below are the template values you can provide in winter-soldier:

winterSoilder:
  enable: false
  apiVersion: pincher.devtron.ai/v1alpha1
  action: sleep
  timeRangesWithZone:
    timeZone: "Asia/Kolkata"
    timeRanges: []
  targetReplicas: []
  fieldSelector: []

Here,

Key
values
Description

enable

false,true

Decides whether Winter Soldier is enabled.

apiVersion

pincher.devtron.ai/v1beta1, pincher.devtron.ai/v1alpha1

specific api version

action

sleep,delete, scale

This specifies the action that needs to be performed.

timeRangesWithZone:timeZone

eg:- "Asia/Kolkata","US/Pacific"

timeRangesWithZone:timeRanges

array of [ timeFrom, timeTo, weekdayFrom, weekdayTo]

It is used to define the time period/range during which the specified action should be performed. You can define multiple timeRanges.

targetReplicas

[n] : n - number of replicas to scale.

This is a mandatory field when the action is scale. The default value is [].

fieldSelector

- AfterTime(AddTime( ParseTime({{metadata.creationTimestamp}}, '2006-01-02T15:04:05Z'), '5m'), Now())

It takes a list of expressions used to select the resources on which the specified action is performed.

Here is an example:

winterSoilder:
  apiVersion: pincher.devtron.ai/v1alpha1 
  enable: true
  annotations: {}
  labels: {}
  timeRangesWithZone:
    timeZone: "Asia/Kolkata"
    timeRanges: 
      - timeFrom: 00:00
        timeTo: 23:59:59
        weekdayFrom: Sat
        weekdayTo: Sun
      - timeFrom: 00:00
        timeTo: 08:00
        weekdayFrom: Mon
        weekdayTo: Fri
      - timeFrom: 20:00
        timeTo: 23:59:59
        weekdayFrom: Mon
        weekdayTo: Fri
  action: scale
  targetReplicas: [1,1,1]
  fieldSelector: 
    - AfterTime(AddTime( ParseTime({{metadata.creationTimestamp}}, '2006-01-02T15:04:05Z'), '10h'), Now())

The above settings will take action on Sat and Sun from 00:00 to 23:59:59, and on Mon-Fri from 00:00 to 08:00 and from 20:00 to 23:59:59. If action: sleep, the workloads are hibernated at timeFrom and unhibernated at timeTo. If action: delete, the workloads are deleted at timeFrom and timeTo. Here the action is scale, so the number of resource replicas is scaled to targetReplicas: [1,1,1]. Each element of the targetReplicas array maps to the corresponding element of timeRangesWithZone/timeRanges, so make sure both arrays have the same length, otherwise the changes cannot be observed.

The above example will select the application objects which were created 10 hours ago across all namespaces, excluding the application's namespace. Winter Soldier exposes the following functions to handle time, CPU, and memory.

  • ParseTime - This function can be used to parse time. For example, to parse creationTimestamp, use ParseTime({{metadata.creationTimestamp}}, '2006-01-02T15:04:05Z').

  • AddTime - This can be used to add a duration to a time. For example, AddTime(ParseTime({{metadata.creationTimestamp}}, '2006-01-02T15:04:05Z'), '-10h') will subtract 10h from the time. Use d for days, h for hours, m for minutes, and s for seconds. Use a negative duration to get an earlier time.

  • Now - This can be used to get the current time.

  • CpuToNumber / MemoryToNumber - These can be used to compare CPU and memory values. For example, any({{spec.containers.#.resources.requests}}, { MemoryToNumber(.memory) < MemoryToNumber('60Mi')}) will check if any container's resources.requests.memory is less than 60Mi.

Security Context

A security context defines privilege and access control settings for a Pod or Container.

To add a security context for the main container:

containerSecurityContext:
  allowPrivilegeEscalation: false

To add a security context on pod level:

podSecurityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000

Topology Spread Constraints

You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. This can help to achieve high availability as well as efficient resource utilization.

topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone
    whenUnsatisfiable: DoNotSchedule
    autoLabelSelector: true
    customLabelSelector: {}

Deployment Metrics

It gives real-time metrics of the deployed applications.

Key
Description

Deployment Frequency

It shows how often this app is deployed to production

Change Failure Rate

It shows how often the respective pipeline fails.

Mean Lead Time

It shows the average time taken to deliver a change to production.

Mean Time to Recovery

It shows the average time taken to fix a failed pipeline.

2. Show application metrics

If you want to see application metrics like HTTP status code metrics, application throughput, latency, and response time, enable Application metrics from below the Save button of the deployment template. After enabling it, you should be able to see all the metrics on the App Details page. By default, it remains disabled.

Helm Chart Json Schema

Other Validations in Json Schema

The values of CPU and Memory in limits must be greater than or equal to their respective values in requests. Similarly, in the case of envoyproxy, the values of limits must be greater than or equal to the values of requests, as mentioned below.

resources.limits.cpu >= resources.requests.cpu
resources.limits.memory >= resources.requests.memory
envoyproxy.resources.limits.cpu >= envoyproxy.resources.requests.cpu
envoyproxy.resources.limits.memory >= envoyproxy.resources.requests.memory
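
For example, a resources block that satisfies these checks could look like this:

resources:
  requests:
    cpu: "100m"
    memory: "128Mi"
  limits:
    cpu: "200m"      # >= requests.cpu
    memory: "256Mi"  # >= requests.memory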

Secrets

Secrets and ConfigMaps are both used to store environment variables, but there is one major difference between them: a ConfigMap stores key-values in plain text format while a Secret stores them in base64 encoded form. Devtron hides the data of secrets from normal users; it is only visible to users having edit permission.

Secret objects let you store and manage sensitive information, such as passwords, authentication tokens, and ssh keys. Embedding this information in secrets is safer and more flexible than putting it verbatim in a Pod definition or in a container image.

Configure Secret

Click Add Secret to add a new secret.

Key
Description

Name

Provide a name to your Secret

Data Type

Provide the Data Type of your secret. Refer to the Data Types section below for the available options.

Data Volume

Specify if there is a need to add a volume that is accessible to the Containers running in a pod.

Use secrets as Environment Variable

Select this option if you want to inject Environment Variables in your pods using Secrets.

Use secrets as Data Volume

Select this option if you want to configure a Data Volume that is accessible to Containers running in a pod. Ensure that you provide a Volume mount path for the same.

Key-Value

Provide a key and the corresponding value of the provided key.

Volume Mount Path

Specify the volume mount folder path in Volume Mount Path, a path where the data volume needs to be mounted. This volume will be accessible to the containers running in a pod.

Sub Path

To mount multiple files at the same location, you need to check the Sub Path bool field; it will use the file name (key) as the sub path. The Sub Path feature is not applicable to external secrets except AWS Secret Manager, AWS System Manager and HashiCorp Vault; in these cases the Name (secret key) is picked up automatically as the sub path.

File Permission

File permission is provided at the secret level, not for each key of the secret. It takes the 3-digit standard permission for the file.

Click Save Secret to save the secret.

You can see the Secret is added.

Update Secrets

You can update your secrets anytime later, but you cannot change the name of a secret. If you want to change the name, you have to create a new secret.

To update secrets, click the secret you wish to update.

Click Update Secret to update your secret.

Delete Secret

You can delete your secret. Click your secret and click the delete sign to delete your secret.

Data Types

There are five Data types that you can use to save your secret.

  • Kubernetes Secret: The secret that you create using Devtron.

  • Kubernetes External Secret: The secret data of your application is fetched by Devtron externally. Then the Kubernetes External Secret is converted to Kubernetes Secret.

  • AWS Secret Manager: The secret data of your application is fetched from AWS Secret Manager and then converted to Kubernetes Secret from AWS Secret.

  • AWS System Manager: The secret data for your application is fetched from AWS System Secret Manager and all the secrets stored in AWS System Manager are converted to Kubernetes Secret.

  • HashiCorp Vault: The secret data for your application is fetched from HashiCorp Vault and the secrets stored in HashiCorp Vault are converted to Kubernetes Secret.

Note: The conversion of secrets from various data types to Kubernetes Secrets is done within Devtron and irrespective of the data type, after conversion, the Pods access secrets normally.

Mount Existing Kubernetes Secrets

Use this option to mount an existing Kubernetes Secret in your application pods. The Secret will not be created by the system, so please ensure that it already exists within the namespace; otherwise the deployment will fail.

Kubernetes External Secret (Deprecated)

A secret that is already created and stored in the environment and used by Devtron externally is referred to here as a Kubernetes External Secret. For this option, Devtron will not create any secret by itself, but the existing secret can be used within the pods. Before adding a secret from Kubernetes External Secret, please make sure that a secret with the same name is present in the environment. To add a secret from Kubernetes External Secret, follow the steps mentioned below:

  1. Navigate to Secrets of the application.

  2. Click Add Secret to add a new secret.

  3. Select Kubernetes External Secret from dropdown of Data type.

  4. Provide a name to your secret. Devtron will search secret in the environment with the same name that you mention here.

AWS Secret Manager

Before adding any external secrets on Devtron, kubernetes-external-secrets must be installed on the target cluster. Kubernetes External Secrets allows you to use external secret management systems (e.g., AWS Secrets Manager, Hashicorp Vault, etc) to securely add secrets in Kubernetes.

Installing kubernetes-external-secrets Using Chart

To install the chart with the release named my-release:

$ helm install my-release external-secrets/kubernetes-external-secrets

To install the chart with AWS IAM Roles for Service Accounts:

$ helm install my-release external-secrets/kubernetes-external-secrets --set securityContext.fsGroup=65534 --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"='arn:aws:iam::111111111111:role/ROLENAME'

Adding Secrets From AWS Secret Manager

To add secrets from AWS secret manager, navigate to Secrets of the application and follow the steps mentioned below :

  1. Click Add Secret to add a new secret.

  2. Select AWS Secret Manager from dropdown of Data type.

  3. Provide a name to your secret.

  4. Select how you want to use the secret. You may leave it selected as environment variable and also you may leave Role ARN empty.

  5. In Data section, you will have to provide data in key-value format.

All the required fields to pass your data to fetch secrets on Devtron are described below:

Key
Description

key

Secret key in backend

name

Name for this key in the generated secret

property

Property to extract if secret in backend is a JSON object

isBinary

Set this to true if the item being configured is a binary file; otherwise set it to false.

Adding Secrets in AWS Secret Manager

To add secrets in AWS secret manager, do the following steps :

  1. Go to AWS secret manager console.

  2. Click Store a new secret.

  3. Add and save your secret.

AWS Secrets Manager

To add secrets from AWS Secrets Manager, we need to create a generic Kubernetes secret for AWS authentication.

Create a Kubernetes secret in the namespace in which the application is to be deployed using base64 encoded AWS access-key and secret-access-key. You can use a Devtron generic chart for it.

Note: You don't have to create the Kubernetes secret every time you create external secret for the respective namespace.
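
A minimal sketch of such a generic secret is given below; the secret name and key names are only assumptions, so use whichever names you will later reference in accessKeyIDSecretRef and secretAccessKeySecretRef below:

apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: aws-auth-secret
  namespace: <namespace>
data:
  access-key: <base64-encoded-aws-access-key>
  secret-access-key: <base64-encoded-aws-secret-access-key>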

After creating the generic secret, navigate to Secrets section of the application and follow the steps mentioned below :

1. Click Add Secret to add a new secret

2. Select AWS Secret Manager under External Secret Operator (ESO) from the dropdown of Data type

3. Configure the secret

Key
Description

region

AWS region in which secret is created

accessKeyIDSecretRef.name

Name of secret created that would be used for authentication

accessKeyIDSecretRef.key

In generic secret created for AWS authentication, variable name in which base64 encoded AWS access-key is stored

secretAccessKeySecretRef.name

Name of secret created that would be used for authentication

secretAccessKeySecretRef.key

In generic secret created for AWS authentication, variable name in which base64 encoded secret-access-key is stored

secretKey

Key name to store secret

key

AWS Secrets Manager secret name

property

AWS Secrets Manager secret key

4. Save the secret
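
For illustration, the fields described in the table above could be filled in with values like these (all names are placeholders; enter them in the form/YAML that the UI presents):

region: us-east-1
accessKeyIDSecretRef.name: aws-auth-secret
accessKeyIDSecretRef.key: access-key
secretAccessKeySecretRef.name: aws-auth-secret
secretAccessKeySecretRef.key: secret-access-key
secretKey: db-password            # key name in the generated Kubernetes secret
key: my-aws-secret-name           # secret name in AWS Secrets Manager
property: password                # key inside the AWS secret, if it is a JSON object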

ESO AWS secrets Manager Setup with Devtron using ClusterSecretsStore

ClusterSecretStore provides a secure and centralized storage solution for managing and accessing sensitive information, such as passwords, API keys, certificates, and other credentials, within a cluster or application environment.

Requirement: Devtron deployment template chart version should be 4.17 and above.

To set up ESO AWS Secrets Manager with Devtron using ClusterSecretsStore, follow the steps mentioned below:

1. Create a secret for AWS authentication

Create a Kubernetes secret in any namespace using base64 encoded AWS access-key and secret-access-key. You can use the devtron generic chart for this.

2. Create a ClusterSecretStore

Create a ClusterSecretStore using the secret created for AWS authentication in step 1.
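
A minimal sketch of such a ClusterSecretStore is given below, assuming a recent version of the External Secrets Operator and an authentication secret named aws-auth-secret with the keys shown (adjust names, keys, region, and namespace to your setup):

apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: aws-secretsmanager
spec:
  provider:
    aws:
      service: SecretsManager
      region: <aws-region>
      auth:
        secretRef:
          accessKeyIDSecretRef:
            name: aws-auth-secret
            key: access-key
            namespace: <namespace>
          secretAccessKeySecretRef:
            name: aws-auth-secret
            key: secret-access-key
            namespace: <namespace>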

3. Create a secret in the application using ESO AWS Secrets Manager

Go to the application where you want to create an external secret. Navigate to secrets section under application configuration and create a secret using ESO AWS Secrets Manager.

External Secret Operator (ESO)

Prerequisites: Chart version should be > 4.14.0

External Secrets Operator is a Kubernetes operator that integrates external secret management systems like AWS Secrets Manager, HashiCorp Vault, Google Secrets Manager, Azure Key Vault and many more. The operator reads information from external APIs and automatically injects the values into a Kubernetes Secret.

Install External Secret Operator

Before creating any external secrets on Devtron, External Secret Operator must be installed on the target cluster. External Secret Operator allows you to use external secret management systems (e.g., AWS Secrets Manager, Hashicorp Vault, Azure Secrets Manager, Google Secrets Manager etc.) to securely inject secrets in Kubernetes.

You can install External Secrets Operator using charts store:

  1. Go to charts store.

  2. Search chart with name external-secrets.

  3. Deploy the chart.
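
Alternatively, if you prefer the CLI over the chart store, the operator can be installed with Helm using the chart repository mentioned later in this document (the release and namespace names below are just examples):

$ helm repo add external-secrets https://charts.external-secrets.io
$ helm install external-secrets external-secrets/external-secrets -n external-secrets --create-namespace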

ConfigMaps

The ConfigMap API resource holds key-value pairs of the configuration data that can be consumed by pods or used to store configuration data for system components such as controllers. ConfigMap is similar to Secrets, but designed to more conveniently support working with strings that do not contain sensitive information.

Click on Add ConfigMap to add a config map to your application.

Configure the ConfigMap

You can configure a configmap in two ways-

(a) Using data type Kubernetes ConfigMap

(b) Using data type Kubernetes External ConfigMap

Key
Description

Data Type (Kubernetes ConfigMap)

Select your preferred data type for Kubernetes ConfigMap or Kubernetes External ConfigMap

Name

Provide a name to this ConfigMap.

Use configmap as Environment Variable

Select this option if you want to inject Environment Variables in pods using ConfigMap.

Use configmap as Data Volume

Select this option, if you want to configure a Data Volume that is accessible to Containers running in a pod and provide a Volume mount path.

Key-Value

Provide the actual key-value configuration data here. Key and corresponding value to the provided key.

(A) Using Kubernetes ConfigMap

1. Data Type

Select the Data Type as Kubernetes ConfigMap, if you wish to use the ConfigMap created by Devtron.

2. Name

Provide a name to your configmap.

3. Use ConfigMap as

Here we are providing two options; you can select either of them as per your requirement:

- use Environment Variable as part of your ConfigMap, or add a Data Volume to your container using the ConfigMap.

  • Environment Variable

Select this option if you want to add Environment Variables as a part of configMap. You can provide Environment Variables in key-value pairs, which can be seen and accessed inside a pod.

  • Data Volume

Select this option if you want to add a Data Volume to your container using the Config Map.

Key-value pairs that you provide here, are provided as a file to the mount path. Your application will read this file and collect the required data as configured.

4. Data

In the Data section, you provide your configmap in key-value pairs. You can provide one or more than one environment variable.

You can provide variables in two ways-

  • YAML (raw data)

  • GUI (more user friendly)

Once you have provided the config, You can click on any option-YAML or GUI to view the key and Value parameters of the ConfigMap.

Kubernetes ConfigMap using Environment Variable:

If you select Environment Variable in 3rd option, then you can provide your environment variables in key-value pairs in the Data section using YAML or GUI.

Data in YAML (please Check below screenshot)
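
For example, the key-value data entered in the YAML view might look like this (keys and values are hypothetical):

LOG_LEVEL: info
DB_HOST: postgres.internal
DB_PORT: "5432"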

Now, Click on Save ConfigMap to save your configmap configuration.

Kubernetes ConfigMap using Data Volume

Volume Mount Path

Provide the Volume Mount folder path in Volume Mount Path, a path where the data volume needs to be mounted, which will be accessible to the Containers running in a pod.

You can add Configuration data as in YAML or GUI format as explained above.

You can click on YAML or GUI to view the key and Value parameters of the ConfigMap that you have created.

You can click on Save ConfigMap to save the configMap.

Sub Path

To mount multiple files at the same location, you need to check the Sub Path bool field; it will use the file name (key) as the sub path. The Sub Path feature is not applicable in case of an external configmap.

File Permission

File permission is provided at the configmap level, not for each key of the configmap. It takes the 3-digit standard permission for the file.

(B) Kubernetes External ConfigMap

You can select Kubernetes External ConfigMap in the data type field if you have created a ConfigMap using the kubectl command.

By default, the data type is set to Kubernetes ConfigMap.

Kubernetes External ConfigMap is created using the kubectl create configmap command. If you are using a Kubernetes External ConfigMap, make sure you give the ConfigMap the same name as the one you created with the kubectl create configmap <configmap-name> <data source> command; otherwise, it might result in an error during the build.

You have to ensure that the External ConfigMap exists and is available to the pod.

The config map is created.

Update ConfigMap

You can update your configmap anytime later, but you cannot change the name of your configmap. If you want to change the name of the configmap, you have to create a new configmap. To update a configmap, click on the configmap you have created and make changes as required.

Click on Update Configmap to update your configmap.

Delete ConfigMap

You can delete your configmap. Click on your configmap and click on the delete sign to delete your configmap.

Google Secrets Manager

To add secrets from Google Secrets Manager, follow the steps mentioned below :

1. Go to Google cloud console and create a Service Account.

2. Assign roles to the service account.

3. Add and create a new key.

  4. Create a Kubernetes secret in the namespace in which the application is to be deployed using base64 encoded service account key.

You can use devtron generic chart for this.
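
A minimal sketch of such a generic secret is shown below (the secret name and key name are assumptions; the value is the base64-encoded service account key JSON):

apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: gcpsm-secret
  namespace: <namespace>
data:
  secret-access-credentials: <base64-encoded-service-account-key>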

  5. After creating the generic secret, navigate to Secrets section of the application and click Add Secret to add a new secret.

  6. Select Google Secrets Manager under External Secret Operator (ESO) from the dropdown of Data type.

  7. Configure secret:

Key
Description

secretAccessKeySecretRef.name

Name of secret created that would be used for authentication.

secretAccessKeySecretRef.key

In generic secret created for GCP authentication, variable name in which base64 encoded service account key is stored.

ProjectID

GCP Project ID where secret is created.

secretKey

Key name to store secret.

key

GCP Secrets Manager secret name.

  8. Save secret.

HashiCorp Vault

To incorporate secrets from HashiCorp Vault, you need to create a generic Kubernetes secret that will be used for vault authentication. This involves creating a Kubernetes secret in the specific namespace where your application will be deployed. The secret should store the base64-encoded password or token obtained from vault. To simplify the process, you can utilize the Devtron generic chart. An example yaml is given below:

apiVersion: v1
kind: Secret
type: Opaque
data:
   token: <vault-password>
metadata:
   name: vault-token
   namespace: <namespace>

Note: Please note that you don't need to create the Kubernetes secret every time you create an External Secret for the corresponding namespace.

Once you have created the generic secret, follow these steps in the application's Secrets section:

1. Create a new secret

To add a new secret to the application, go to the App Configuration section of the application. Then, navigate to the left pane and select the Secrets option and click the Add Secret button.

2. Select HashiCorp Vault as the External Secret Operator

After clicking the Add Secret button, select HashiCorp Vault from the dropdown menu for the Data type option. Provide a name for the secret you are creating, and then proceed to configure the external secret as described in the next step.

3. Configure the secret

To configure the external secret that will be fetched from HashiCorp Vault for your application, you will need to provide specific details using the following key-value pairs:

Key
Description

vault.server

Server is the connection address for the Vault server, e.g. "https://vault.example.com:8200"

vault.path

Specify the path where the secret is stored in Vault

tokenSecretRef.name

Enter the name of the secret that will be used for authentication

tokenSecretRef.key

Specify the key name within the secret that contains the token

secretKey

Provide a name for the secret in Kubernetes

key

Enter the name of the secret in Vault

property

Specify the key within the Vault secret

4. Save the secret

After configuring the external secret from HashiCorp Vault, proceed to save the secret by clicking the Save button.

By following the steps mentioned above and configuring these values correctly, you can seamlessly fetch and utilize external secrets from HashiCorp Vault within your application environment by deploying the application.

Override Build Configuration

Within the same application, you can override the container registry, container image and target platform during the build pipeline. This means the images built for the non-production environment can be pushed to the non-production registry, and the images for the production environment can be pushed to the production registry.

To override a container registry, container image or target platform:

  • Go to Applications and select your application from the Devtron Apps tabs.

  • On the App Configuration tab, select Workflow Editor.

  • Open the build pipeline of your application.

  • Click Allow Override to:

    • Select the new container registry from the drop-down list.

  • Click Update Pipeline.

Pre-Build/Post-Build Stages

The CI pipeline includes Pre and Post-build steps to validate and introduce checkpoints in the build process. The pre/post plugins allow you to execute some standard tasks, such as Code analysis, Load testing, Security scanning etc. You can build custom pre-build/post-build tasks or select one of the standard preset plugins provided by Devtron.

A preset plugin is an API resource which you can add within the CI build environment. By integrating a preset plugin in your application, your development cycle can keep track of bugs, code duplication, code complexity, load testing, security scanning, and more, so you can analyze your code easily.

Devtron CI pipeline includes the following build stages:

  • Pre-Build Stage: The tasks in this stage run before the image is built.

  • Build Stage: In this stage, the build is triggered from the source code that you provide, producing a container image.

  • Post-Build Stage: The tasks in this stage are triggered once the build is complete.

Before you begin

Configuring Pre/Post-build Tasks

Each Pre/Post-build stage is executed as a series of events called tasks and includes custom scripts. You can create one or more tasks that are dependent on one another for execution. In other words, the output variable of one task can be used as an input for the next task to build a CI runner. The tasks will run following the execution order.

The tasks can be re-arranged by drag-and-drop; however, the order of passing the variables must be followed.

You can create a task either by selecting one of the available preset plugins or by creating a custom script.

Creating Pre/Post-build Tasks

Let's take Codacy as an example and configure it in the Pre-Build stage of the CI pipeline for finding bugs, detecting dependency vulnerabilities, and enforcing code standards.

  • Go to the Applications and select your application from the Devtron Apps tabs.

  • Go to the App Configuration tab, click Workflow Editor.

  • Select the build pipeline for configuring the pre/post-build tasks.

  • On the Edit build pipeline, in the Pre-Build Stage, click + Add task.

  • Select Codacy from PRESET PLUGINS.

  • Enter a relevant name or codacy in the Task name field. It is a mandatory field.

  • Enter a descriptive message for the task in the Description field. It is an optional field. Note: The description is available by default.

  • In the Input Variables, provide the information in the following fields:

  • In Trigger/Skip Condition, set the trigger conditions to execute a task or Set skip conditions. As an example: CodacyEndpoint equal to https://app.codacy.com. Note: You can set more than one condition.

  • In Pass/Failure Condition set the conditions to execute pass or fail of your build. As an example: Pass if number of issues equal to zero. Note: You can set more than one condition.

  • Click Update Pipeline.

  • Go to the Build & Deploy, click the build pipeline and start your build.

  • Click Details on the build pipeline and you can view the details on the Logs.

Execute custom script

  1. On the Edit build pipeline screen, select the Pre-build stage.

  2. Select + Add task.

  3. Select Execute custom script.

Custom script - Shell

  • Select the Task type as Shell.

Consider an example that creates a Shell task to stop the build if the database name is not "mysql". The script takes 2 input variables, one is a global variable (DOCKER_IMAGE), and the other is a custom variable (DB_NAME) with a value "mysql". The task triggers only if the database name matches "mysql". If the trigger condition fails, this Pre-build task will be skipped and the build process will start. The variable DB_NAME is declared as an output variable that will be available as an input variable for the next task. The task fails if DB_NAME is not equal to "mysql".
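
A minimal sketch of what such a Shell task could look like is given below; the script body is only illustrative, and the trigger and pass/failure conditions described above are configured on the task itself rather than inside the script:

#!/bin/sh
# Input variables (DOCKER_IMAGE, DB_NAME) are available as environment variables inside the task.
echo "Docker image: $DOCKER_IMAGE"
echo "Database name: $DB_NAME"

# DB_NAME is declared as an output variable on the task, so its value can be
# consumed as an input variable by the next task.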

  • Select Update Pipeline.

Here is a screenshot with the failure message from the task:

Custom script - Container image

  • Select the Task type as Container image.

This example creates a Pre-build task from a container image. The output variable from the previous task is available as an input variable.

  • Select Update Pipeline.

Preset Plugins

What's next

Environment Overrides

You will see all your environments associated with an application under the Environment Overrides section.

You can customize your Deployment template, ConfigMap, Secrets in Environment Overrides section to add separate customizations for different environments such as dev, test, integration, prod, etc.


Deployment template - Functionality

If you want to deploy an application in a non-production environment and then in a production environment once testing is done in the non-production environment, you do not need to create a new application for the production environment. Your existing pipeline (non-production env) will work for both environments with a little customization in your deployment template under Environment Overrides.

Example customization:

In a Non-production environment, you may have specified 100m CPU resources in the deployment template but in the Production environment, you may want to have 500m CPU resources as the traffic on Pods will be higher than traffic on non-production env.
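
For instance, the override for the production environment could raise just the CPU values, while everything else continues to come from the base deployment template:

resources:
  requests:
    cpu: "500m"
  limits:
    cpu: "500m"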

Configuring the Deployment template inside Environment Overrides for a specific environment will not affect the other environments because Environment Overrides will configure deployment templates on environment basis. And at the time of deployment, it will always pick the overridden deployment template if any.

If there are no overrides specified for an environment in the Environment Overrides section, the deployment template will be the one you specified in the deployment template section of the app creation.

(Note: This example is meant only for a representational purpose. You can choose to add any customizations you want in your deployment templates in the Environment Overrides tab)

Any changes in the configuration will not be added to the base template; instead, a copy of the template is made and you can customize it for each particular environment. This overridden template will then be used only for the specified environment.

This saves you the trouble of manually creating deployment files separately for each environment. Instead, all you have to do is change the required variables in the deployment template.


How to add Environment Overrides

Who Can Perform This Action?

Go to App Configuration → Environment Overrides. For each environment you can override the following configurations:

Deployment Template

Basic (GUI)

However, you have the flexibility to use different values at the environment level by overriding the base configurations as shown below.

Advanced (YAML)

Similarly, if you are an advanced user intending to tweak more values in deployment template, you may go to Advanced (YAML) section and edit them.

Delete Override will discard the current overrides and the base deployment configuration will be applicable again to the environment.

ConfigMaps & Secrets

The same goes for ConfigMap and Secrets. You can also create an environment-specific configmap and Secrets inside the Environment override section.

To update a ConfigMap, follow the steps below:

  1. In your environment, click ConfigMaps.

  2. Click the ConfigMap you wish to update.

  3. Click Allow Override.

  4. Edit your ConfigMap.

  5. Click Save Changes.

Similarly, you can update Secrets too as shown below.

Build and Deploy

Each time you push a change to your application through GitHub, your application goes through a process to be built and deployed.

There are two main steps for building and deploying applications:

Deleting Application

Delete the Application, when you are sure you no longer need it.

Clicking Delete Application will not delete your application if it still has workflows.

If your application contains workflows in the Workflow Editor, then when you click Delete Application, you will see the following prompt.

Click on View Workflows to view and delete your workflows in the application.

To delete the workflows in your application, you must first delete all the pipelines (CD Pipeline, CI Pipeline or Linked CI Pipeline or External CI Pipeline if there are any).

After you have deleted all the pipelines in the workflow, you can delete that particular workflow.

Similarly, delete all the workflows in the application.

Now, Click on Delete Application to delete the application.

Triggering CI

To trigger the CI pipeline, first you need to select a Git commit. To select a Git commit, click the Select Material button present on the CI pipeline.

Once clicked, a list will appear showing various commits made in the repository, it includes details such as the author name, commit date, time, etc. Choose the desired commit for which you want to trigger the pipeline, and then click Start Build to initiate the CI pipeline.

CI Pipelines with automatic trigger enabled are triggered immediately when a new commit is made to the git branch. If the trigger for a build pipeline is set to manual, it will not be automatically triggered and requires a manual trigger.


CI builds can be time-consuming for large repositories, especially for enterprises. However, Devtron's partial cloning feature significantly increases cloning speed, reducing the time it takes to clone your source code and leading to faster build times.

Advantages

  • Smaller image sizes

  • Reduced resource usage and costs

  • Faster software releases

  • Improved productivity

Get in touch with us if you are looking for a way to improve the efficiency of your software development process.

The Refresh icon updates the Git Commits section in the CI Pipeline by fetching the latest commits from the repository. Clicking on the refresh icon ensures that you have the most recent commit available.


Who Can Perform This Action?

If you wish to pass runtime parameters for a build job, you can provide key-value pairs before triggering the build. Thereafter, you can access those passed values by referencing the corresponding keys in the environment variable dictionary.

Steps

  1. Go to the Parameters tab available on the screen where you select the commit.

  2. Click + Add parameter.

  3. Enter your key-value pair as shown below.

    Similarly, you may add more than one key-value pair by using the + Add Parameter button.

  4. Click Start Build.
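
For example, if you added a parameter with the hypothetical key RELEASE_CHANNEL, it could be read inside your job's script like any other environment variable:

# RELEASE_CHANNEL is the hypothetical key added on the Parameters tab
echo "Release channel: $RELEASE_CHANNEL"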


Fetching Logs and Reports

Click the CI Pipeline or navigate to the Build History to get the CI pipeline details such as build logs, source code details, artifacts, and vulnerability scan reports.

To access the logs of the CI Pipeline, simply click Logs.

To view specific details of the Git commit you've selected for the build, click on Source. This will provide you with information like the commit ID, author, and commit message associated with that particular commit.

By selecting the Artifacts option, you can download reports related to the tasks performed in the Pre-CI and Post-CI stages. This will allow you to access and retrieve the generated reports, if any, related to these stages. Additionally, you have the option to add tags or comments to the image directly from this section.

Triggering CD

  1. Go to the Build & Deploy tab of your application and click Select Image in the CD pipeline.

  2. Select an image to deploy and then click Deploy to trigger the CD pipeline.

However, if an image is already deployed, you can identify it by the tag Active on <Environment name>.

If no approved images are available or the current image is already deployed, you won't see any images for deployment when clicking Select Image.

Requesting for Image Approval

To request an image approval, follow these steps:

  1. Navigate to the Build & Deploy page, and click the Approval for deployment icon.

  2. Click the Request Approval button present on the image for which you want to request an approval and click Submit Request.

    The users you selected will receive an approval request via email. Any user with 'Image approver' permission, along with access to the given application and given environment, would be able to approve the image.

Extras

  • In case you wish to cancel the image approval request, you can do so from the Approval pending tab as shown in the below image.

  • If you've received an approval but no longer want the image to be deployable, you can let the approval expire.

Accepting Image Approval Request

By default, super-admin users are considered as the default approvers. Users who build the image and/or request for its approval, cannot self-approve it even if they have super-admin privileges.

To approve an image approval request, follow these steps:

  1. Go to the Build & Deploy page and click the Approval for deployment button.

  2. Switch to the Approval pending tab. Here, you will get a list of images that are awaiting approval.

  3. Click Approve followed by Approve Request button.

Deploying Approved Image

To deploy an approved image, follow these steps:

  1. Navigate to the Build & Deploy tab and click Select Image.

  2. You will find all the approved images listed under the Approved images section. From the list, you can select the desired image and deploy it to your environment.

  3. You can view the status of current deployment in the App Details tab.

The status initially appears as Progressing for approximately 1-2 minutes, and then gradually transitions to Healthy state based on the deployment strategy.

Here, our CD pipeline trigger was successful and the deployment is in Healthy state.

Rollback Deployment

Deployments can be rolled back manually. After a deployment is completed, you can manually rollback to a previously deployed image by retaining the same configuration or changing the configuration.

As an example, You have deployed four different releases as follows:

If you want to roll back from V3 image to V2 image, then you have the following options:

  1. Select Rollback in your deployed pipeline.

  2. On the Rollback page, select a configuration to deploy from the list:

  3. Once you select the previously deployed image and the configuration, review the difference between Last Deployed Configuration and the selected configuration.

  4. Click Deploy.

The selected previously deployed image will be deployed.

Note:

  • There will be no difference in the configuration if you select Last deployed config from the list.

  • When you select Config deployed with selected image and if the configuration is missing in the selected previously deployed image, it will show as Config Not Available. In such cases, you can select either Last saved config or Last deployed config.

Lock Deployment Configuration
Figure 1: App-level GitOps Config
Figure 2: Repo Creation
Figure 3: Saved GitOps Config
Figure 4: Incomplete GitOps Config
Figure 5: Choosing a Helm Chart
Figure 6: Configure & Deploy Button
Figure 7a: Deployment Approach
Figure 7b: Selecting GitOps Method
Figure 8: Adding a Repo
Figure 9: Saved GitOps Config for Helm App

Select the Chart Version using which you want to deploy the application. Refer section for more detail.

You can perform a basic deployment configuration for your application in the Basic (GUI) section instead of configuring the YAML file. Refer section for more detail.

If you want to do additional configurations, then click Advanced (YAML) for modifications. Refer section for more detail.

You can enable Show application metrics to see your application's metrics-CPU Service Monitor usage, Memory Usage, Status, Throughput and Latency. Refer for more detail.

Super-admins can lock keys in rollout deployment template to prevent non-super-admins from modifying those locked keys. Refer to know more.

regcred is the secret that contains the docker credentials that are used for accessing a registry. Devtron will not create this secret automatically, you'll have to create this secret using dt-secrets helm chart in the App store or create one using kubectl. You can follow this documentation Pull an Image from a Private Registry .

Once all the Deployment template configurations are done, click on Save to save your deployment configuration. Now you are ready to create a workflow to do CI/CD.

Prerequisite: KEDA controller should be installed in the cluster. To install KEDA controller using Helm, navigate to chart store and search for keda chart and deploy it. You can follow this for deploying a Helm chart on Devtron.

is a Kubernetes-based Event Driven Autoscaler. With KEDA, you can drive the scaling of any container in Kubernetes based on the number of events needing to be processed. KEDA can be installed into any Kubernetes cluster and can work alongside standard Kubernetes components like the Horizontal Pod Autoscaler(HPA).

NOTE: After deploying this we can create the Hibernator object and provide the custom configuration by which workloads going to delete, sleep and many more. for more information check

It use to specify the timeZone used. (It uses standard format. please refer )

For Devtron version older than v0.4.0, please refer the page.

Sync with Environment

Source type to trigger the CI. Available options: | | |

The Pre-build and Post-build stages allow you to create Pre/Post-Build CI tasks as explained .

Build stage

Use this to from your build pipeline

Select the source type to build the CI pipeline: | | |

Docker Layer Caching

if you wish to store cache.

If you are rebuilding the same Docker image frequently, an effective cache strategy can cut down build time. Docker images are built layer by layer and Docker’s allows unchanged layers to be reused across pipeline runs.

Cache behavior at Global-level
Cache behavior at Pipeline-level
Cache behavior at Trigger

for either GitHub or Bitbucket.

The Pull Request source type feature only works for the host GitHub or Bitbucket Cloud for now. To request support for a different Git host, please create a GitHub issue .

Devtron uses regexp library, view . You can test your custom regex from .

for either GitHub or Bitbucket.

The total timeout for the execution of the CI pipeline is by default set as 3600 seconds. This default timeout is configurable according to the use case (refer ).

To perform the security scan after the container image is built, enable the Scan for vulnerabilities toggle in the build stage. Refer to know more.

Linked CI with Child Information

Create a or an application.

On the Base Deployment Template page, select the Chart type from the drop-down list and configure as per your and click Save & Next.

Environment: Provide the name of the .

Namespace: Provide the .

You can either use Select API Token if you have generated an under Global Configurations.

If you choose or as the , you must first configure the Webhook for GitHub/Bitbucket as a prerequisite step.

Figure 1a: Adding CD Pipeline
Figure 1b: Creating CD Pipeline

- Use this option to create new Helm/GitOps deployment.

- Use this option if you wish to migrate your existing Helm Release/Argo CD Apps to Devtron.

Helm or GitOps Refer

Devtron supports multiple deployment strategies depending on the .

Figure 2: Strategies Supported by Chart Type

Refer to know more about each strategy in depth.

The next section is and it comes with additional capabilities. This option is available at the bottom of the Create deployment pipeline window. However, if you don't need them, you may proceed with a basic CD pipeline and click Create Pipeline.

Figure 3: Advanced Options

Figure 4: Advanced Options (Expanded View)
Figure 5: Pre-deployment Stage

Refer the trigger types from .

Make sure you have added and in App Configuration.

Make sure your cluster has installed.

Figure 6: 'devtron-in-clustercd' Chart
Figure 7: Configuration
Figure 8: 'migration-incluster-cd' chart
Figure 9: Configuration

This will be utilized only when an existing container image is copied to another repository using the . The image will be copied with the tag generated by the Image Tag Pattern you defined.

Figure 10: Enabling Custom Image Tag Pattern
Figure 11: Edit Icon
Figure 12: Defining Tag Pattern

To know how and where this image tag would appear, refer

Although Devtron ensures that remain unique, the same cannot be said if images are pushed with the same tag to the same container registry from outside Devtron.

Figure 13: Pull with Image Digest
Figure 14: Tag@Digest

Users need to have Admin permission or above (along with access to the environment and application) to enable this option. However, this option will be non-editable in case the super-admin has enabled .

You can use in post deployments as well. The option to execute tasks in application environment is available too.

Figure 15: Post-deployment Stage

This option will be available only during the in your workflow. Existing CD pipelines will not have this option.

View config diff, deployment history, and all the capabilities that come with Devtron Apps. Check the .

Add your external cluster (containing your Helm Apps) in .

Your Helm release must use the same chart type as your application. If needed, you can upload or select the appropriate chart in Global Configurations → Deployment Charts, then save the chart type at of your application.

You can not only , but also manage their deployments using Devtron's CI/CD.

Figure 16: Choosing External Cluster and Helm Release from Dropdown

The target cluster, its namespace, and environment would be visible. If the environment is not available, click Add Environment. This will open a new tab. Once you have , return and click the refresh button.

Figure 17: Adding Environment to Target
Figure 18: Creating CD Pipeline for Helm Release

Once the pipeline is created, you may go to to trigger the pipelines. Your Helm release would be deployed using Devtron.

This feature comes with certain mentioned limitations and expectations. If your use case doesn't fit and goes beyond, feel free to .

You can not only , but also manage their deployments using Devtron's CI/CD.

Your app should be an Argo Helm app ().

GitOps credentials required to commit in the Git repo should be configured in .

The cluster containing your external Argo applications should be added to Devtron. Refer .

The target deployment cluster, its namespace, and its should be added to Devtron.

Your Argo CD app must use the same chart type as your application. If needed, you can upload or select the appropriate chart in Global Configurations → Deployment Charts. Then save the chart type at of your application.

Figure 19: Choosing External Cluster and Argo App from Dropdown

The target cluster, its namespace, and environment would be visible. If the environment is not available, click Add Environment. This will open a new tab. Once you have , return and click the refresh button.

Figure 20: Adding Environment to Target
Figure 21: Creating CD Pipeline for Argo CD App

Once the pipeline is created, you may go to to trigger the pipelines. Your Argo CD app would be deployed using Devtron.

This feature comes with certain mentioned limitations and expectations. If your use case doesn't fit and goes beyond, feel free to .

If you have configured for your external Argo apps in Devtron, and later install the GitOps (ArgoCD) module from to deploy your Devtron apps/Helm apps via GitOps, you must once again save your GitOps and Cluster configurations after installation. This might prevent potential errors and ensure your GitOps deployments are functional.

Figure 22: Updating CD Pipeline

Does your app have different requirements for different environments? Read

Figure 23: Adding Multiple CD Pipelines

If you have multiple applications that already have an existing pipeline (for a given environment) in their workflow, you may clone the same pipeline and its configurations for new environments instead of recreating them in each application. Refer to know more.

Super-admins can lock keys in StatefulSet deployment template to prevent non-super-admins from modifying those locked keys. Refer to know more.

regcred is the secret that contains the docker credentials that are used for accessing a registry. Devtron will not create this secret automatically, you'll have to create this secret using dt-secrets helm chart in the App store or create one using kubectl. You can follow this documentation Pull an Image from a Private Registry .

is a Kubernetes-based Event Driven Autoscaler. With KEDA, you can drive the scaling of any container in Kubernetes based on the number of events needing to be processed. KEDA can be installed into any Kubernetes cluster and can work alongside standard Kubernetes components like the Horizontal Pod Autoscaler(HPA).

NOTE: After deploying this we can create the Hibernator object and provide the custom configuration by which workloads going to delete, sleep and many more. for more information check

It use to specify the timeZone used. (It uses standard format. please refer )

Once all the Deployment template configurations are done, click on Save to save your deployment configuration. Now you are ready to create a workflow to do CI/CD.

Helm Chart is used to validate the deployment template values.

Provide the Data Type of your secret. To know about different Data Types available click on

External secrets chart

If you don't find any chart with this name i.e external-secrets, add chart repository using repository url https://charts.external-secrets.io. Please follow this for adding chart repository.

Or, with different options.

Or, set a from the drop-down list or enter a new target platform.

The overridden container registry/container image location/target platform will be reflected on the page. You can also see the number of build pipelines for which the container registry/container image location/target platform is overridden.

Make sure you have before you start configuring Pre-Build or Post-Build tasks.

Stage
Task
Variable
Format
Description

The task type of the custom script may be a or a .

Field name
Required/Optional
Field description
Field name
Required/Optional
Field description

Go to section to know more about the available plugins

Trigger the

Users need to have the required permission or above (along with access to the environment and applications) to perform environment override.

Users who are not super-admins will land on section when they visit the Deployment Template page; whereas super-admins will land on section. This is just a default behavior, they can still navigate to the other section if needed.

If you have a set up at application level, the environment(s) you define for your application will also inherit those values.

Refer to know more about each field within Basic (GUI) section.

Want to customize the fields displayed on Basic (GUI)?

Refer to know the process of adding, removing, and customizing the Basic (GUI) section.

to know more about each key-value pair within the Advanced (YAML) section.

If you want to configure your ConfigMap and secrets at the application level then you can provide them in and , but if you want to have environment-specific ConfigMap and secrets then provide them under the Environment override Section. At the time of deployment, it will pick both of them and provide them inside your cluster.

You can also rollback the deployment. Refer for detail.

Partial Cloning Feature

The Ignore Cache option ignores the previous build cache and creates a fresh build. If selected, the build will take a longer time than usual. Refer the documentation to read more about controlling cache behavior in Devtron.

Passing Build Parameters

Users need to have or above (along with access to the environment and application) to pass build parameters.

In case you trigger builds in bulk, you can consider passing build parameters in .

To check for any vulnerabilities in the build image, click on Security. Please note that vulnerabilities will only be visible if you have enabled the Scan for vulnerabilities option in the advanced options of the CI pipeline before building the image. For more information about this feature, please refer to this .

After the is complete, you can trigger the CD pipeline.

Manual Approval for Deployment

When for the deployment pipeline configured in the workflow, you are expected to request for an image approval before each deployment. Alternatively, you can deploy images that have already been approved once.

Users need to have or above (along with access to the environment and application) to request for an image approval.

In case you have configured , you can directly choose the approver(s) from the list of approvers as shown below.

Users with Approver permission (for the specific application and environment) can also approve a deployment. This permission can be granted to users from present in .

In case or was configured in Devtron, and the user chose the approvers while raising an image approval request, the approvers would receive an email notification as shown below:

Users need to have or above (along with access to the respective environment and application) to select and deploy an approved image.

In case the super-admin has set the minimum number of approval to more than 1 (in ), you must wait for all approvals before deploying the image. In other words, partially approved image will not be eligible for deployment.

To further diagnose the deployments,

Image
Configuration
Release
Configuration Option
Image
Configuration
Configurations
Description

CodacyEndpoint

String

API endpoint for Codacy

GitProvider

String

Git provider for the scanning

CodacyApiToken

String

API token for Codacy. If it is provided, it will be used, otherwise it will be picked from Global secret (CODACY_API_TOKEN)

Organisation

String

Your Organization for Codacy

RepoName

String

Your Repository name

Branch

String

Your branch name

Task name

Required

A relevant name for the task

Description

Optional

A descriptive message for the task

Task type

Optional

Shell: Custom shell script goes here

Input variables

Optional

  • Variable name: Alphanumeric chars and (_) only

  • Source or input value: The variable's value can be global, output from the previous task, or a custom value. Accepted data types include: STRING | BOOL | NUMBER | DATE

  • Description: Relevant message to describe the variable.

Trigger/Skip condition

Optional

A conditional statement to execute or skip the task

Script

Required

Custom script for the Pre/Post-build tasks

Output directory path

Optional

Output variables

Optional

Environment variables that are passed as input variables for the next task.

  • Pass/Failure Condition (Optional): Conditional statements to determine the success/failure of the task. A failed condition stops the execution of the next task and/or build process

Task name

Required

A relevant name for the task

Description

Optional

A descriptive message for the task

Task type

Optional

Container image

Input variables

Optional

  • Variable name: Alphanumeric chars and (_) only

  • Source or input value: The variable's value can be global, output from the previous task, or a custom value Accepted data types include: STRING | BOOL | NUMBER | DATE

  • Description: Relevant message to describe the variable

Trigger/Skip condition

Optional

A conditional statement to execute or skip the task

Container image

Required

Select an image from the drop-down list or enter a custom value in the format <image>:<tag>

Mount custom code

Optional

Enable to mount the custom code in the container. Enter the script in the box below.

  • Mount above code at (required): Path where the code should be mounted

Command

Optional

The command to be executed inside the container

Args

Optional

The arguments to be passed to the command mentioned in the previous field

Port mapping

Optional

The port number on which the container listens. The port number exposes the container to outside services.

Mount code to container

Optional

Mounts the source code inside the container. Default is "No". If set to "Yes", enter the path.

Mount directory from host

Optional

Mount any directory from the host into the container. This can be used to mount code or even output directories.

Output directory path

Optional

Directory path for the script output files such as logs, errors, etc.

V1

C1

R1

V2

C2

R2

V3

C2

R3

V3

C3

R4

V3

C4 (saved but not deployed)

-

Config deployed with selected image

V2

C2

Last deployed config

V2

C3

Last saved config

V2

C4


Last saved config

Deploy the image with the latest saved configuration.

Last deployed config

Deploy the image with the last deployed configuration.

Config deployed with selected image

Deploy the configuration which was deployed with the selected image.

Applying Labels to Images

Introduction

For example:

  • You can label an image as non-prod to indicate that it is meant for 'Dev' or 'QA' environments, but not for production.

  • Add hotfix image only label to indicate a one-time patch on production.

  • Add comments like This image is buggy and shouldn't be used for deployment to caution other users against deploying an unwanted image.

Labels and comments are supported only for images in workflows that have at least one production deployment pipeline. In Devtron, you can go to Global Configurations → Clusters & Environments to identify a production environment by checking the 'Prod' label.


Adding Labels & Comments

Who Can Perform This Action?

You can add labels and comments from the following pages:

From Build & Deploy

From Build History

From Deployment History (only after deployment)

From App Details (only after deployment)


Deleting Labels & Comments

Soft-Delete Labels

Who Can Perform This Action?

This action marks the label as invalid but doesn't delete it. Therefore, you can recover it later, but you cannot reuse it for another image (unless it's in a different application).

  1. Click the edit option.

  2. Use the (-) icon to strike off the label. This icon is available on the left-side of a label.

  3. Click Save.

Hard-Delete Labels

Who Can Perform This Action?

Users need to have super-admin permission to perform hard deletion of labels.

This action deletes the label permanently and makes it available for reuse in the same or another image of the given application.

  1. Click the edit option.

  2. Use the (x) icon to permanently remove the label. This icon is available on the right-side of a label.

  3. Click Save.

Removing Comments

Who Can Perform This Action?

If you wish to permanently remove a comment, do the following:

  1. Click the edit option.

  2. Empty the content of an existing comment.

  3. Click Save.


Extra Use Case

This will be helpful in scenarios (say release package) where you wish to deploy multiple applications at once, and you have already labelled the intended images of the respective applications.

Application Metrics

Application Metrics are the indicators used to evaluate the performance and efficiency of your application. They can be enabled in the Devtron platform so that you can see your application's metrics.

Types of Metrics available in the Devtron platform:

  1. CPU usage: Overall CPU utilization per pod and aggregated.

  2. Memory Usage: Overall memory utilization per pod and aggregated.

  3. Throughput: Number of requests processed per minute.

  4. Latency: Delay between request and response, measured in percentiles.

Setup Application Metrics

Note

  1. Install Grafana Dashboard: Install the Grafana Dashboard integration from the Devtron Stack Manager (see the note below).

  2. Install Prometheus:

    Go to the Chart Store and search for prometheus. Use the Prometheus community's kube-prometheus-stack chart to deploy Prometheus.

    After selecting the chart, configure these values as needed before deployment.

    kube-state-metrics:
      metricLabelsAllowlist:
        - pods=[*]

    Search for the above parameters, and update them as shown (or customize as needed).

  3. Enable upgradeJob parameter to install CRDs:

    Since Helm does not automatically apply CRDs, you need to enable the upgradeJob parameter in the Helm chart to ensure CRDs are applied before deploying Prometheus.

    • In the Prometheus Helm chart settings, locate the upgradeJob parameter and set it to true if it is false.

      After enabling the parameter, click Deploy Chart.

  4. Setup Prometheus Endpoint:

    Once Prometheus is installed, go to its App Details and navigate to Networking → Service in the K8s resources. Expand the Prometheus server service to see the endpoints.

    Copy the URL of the kube-prometheus service as shown in the image below.

    To set Prometheus as a data source in Grafana, navigate to Global Configurations → Clusters & Environments, select your cluster, and edit its settings.

    Now to set up the Prometheus endpoint:

    • Enable the See metrics for applications in this cluster option, as shown in the image below.

    • Paste the copied URL into the Prometheus endpoint field, ensuring it includes http://

    • Click Update Cluster to save the changes.

    After adding the endpoint, application metrics will be visible in the Devtron dashboard for all the Devtron apps in the cluster (it may take a few minutes). This includes CPU usage and Memory usage. A quick CLI check for the endpoint is shown after these steps.

  5. Enable Application Metrics:

    To enable Throughput and Latency metrics in Devtron, follow these steps:

    • Open your Devtron app.

    • Go to Configurations → Base Configurations → Deployment Template.

    • Enable Application Metrics in the Deployment Template as shown below and save the changes.

    Now, you can track all your application metrics by navigating to Applications and going to the App Details page of your Devtron App as shown below.
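As a quick, illustrative way to locate and sanity-check the Prometheus endpoint from step 4 (the service and namespace names below are placeholders and vary with your chart configuration):

```bash
# List services across all namespaces to find the Prometheus server service
kubectl get svc -A | grep -i prometheus

# The endpoint pasted into Devtron generally follows this form (placeholder names):
# http://<prometheus-server-service>.<namespace>.svc.cluster.local:9090

# Optionally verify that Prometheus responds on that URL from inside the cluster
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://<prometheus-server-service>.<namespace>.svc.cluster.local:9090/-/healthy
```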

Note

Uninstall Devtron

To uninstall Devtron, run the following commands:

helm uninstall devtron --namespace devtroncd

kubectl delete -n devtroncd -f https://raw.githubusercontent.com/devtron-labs/charts/main/charts/devtron/crds/crd-devtron.yaml

kubectl delete -n argo -f https://raw.githubusercontent.com/devtron-labs/devtron/main/manifests/yamls/workflow.yaml

kubectl delete ns devtroncd devtron-cd devtron-ci devtron-demo argo

These commands remove Devtron along with all the namespaces related to Devtron (devtroncd, devtron-cd, devtron-ci, etc.).

Custom script - Shell
Pre-Build task failure
Custom script - Container image
Figure 1: App Configuration → Environment Overrides
Figure 2: Overriding Deployment Template - GUI Method
Figure 3: Overriding Deployment Template - YAML Method
Figure 2: Updating ConfigMap
Figure 3: Updating Secret
Figure 1: 'Select Image' Button
Figure 2: Selecting an Image for Deployment
Figure 3: Currently Deployed Image
Figure 3: No Approved Image
Figure 4: Approval Button
Figure 5: Requesting Approval
Figure 6: Choosing Approvers
Figure 7: Cancelling Request
Figure 8: Expiring an Approval
Figure 9: Email Notification to the Approver
Figure 10: Approval Button
Figure 11: List of Pending Approvals
Figure 12: Approving a Request
Figure 13: Approval Count
Figure 14: Select Image Button
Figure 15: List of Approved Images
Figure 16: 'App Details' Screen
Global Configurations

Create a task using one of the preset plugins integrated in Devtron (e.g., K6 Load testing, Sonarqube, Dependency track for Python/NodeJs/Maven and Gradle, Semgrep, Codacy).

Or, create a task from Execute Custom Script, which you can customize with your own script using either a Shell script or a container image.

Deploy the image with the last deployed configuration. In the example above, this is configuration C3.

Deploy the configuration which was deployed with the selected image. In the example above, this is configuration C2.

Typically in a CI pipeline, you build container images, and the number of images gradually increases over a period of time. Devtron's image labels and comments feature helps you to mark and recall specific images from the repository by allowing you to add special instructions or notes to them.

Figure 1: Labels and Comments

Such labels and comments are visible only within Devtron and do not propagate to your container registry (say, Docker Hub), unlike custom image tags. You may use them to simplify the management and selection of container images for deployment.

Users need to have Build & deploy permission or above (along with access to the environment and application) to add labels and comments.


You can add multiple labels to an image, but each label can be used only once per image, per application. You may, however, use the same label on an image of another application. Refer to Deleting Labels & Comments if you commit a mistake while adding labels.

Figure 2: Adding Labels and Comments - 'Build & Deploy' Page
Figure 3: Adding Labels and Comments - 'Build History' Page
Figure 4: Adding Labels and Comments - 'Deployment History' Page
Figure 5: Adding Labels and Comments - 'App Details' Page

Users need to have Build & deploy permission or above (along with access to the environment and application) to perform soft deletion of labels.

Figure 6: Soft Deletion of a Label
Figure 7: Hard Deletion of a Label

Users need to have Build & deploy permission or above (along with access to the environment and application) to remove comments.

Figure 8: Removing a Comment

If you use Application Groups to deploy in bulk, image labels (if added) will be available as filters for you to quickly locate the container image.

Figure 9: Application Groups - Filter by Image Label

Application metrics can only be enabled if your application is deployed using Devtron Deployment Charts and not Custom Deployment Charts.

To use the Grafana dashboard, you need to first install the Grafana Dashboard integration from the Devtron Stack Manager.

Figure 1: Chart Store
Figure 2: Prometheus Chart
Figure 3: upgradeJob Parameter
Figure 4: Prometheus Service
Figure 5: Clusters and Environments
Figure 6: Prometheus Endpoint
Figure 7: CPU Usage & Memory Usage
Figure 8: Enable Application Metrics
Figure 9: Application Metrics

If your environment is overridden, you need to enable Application Metrics in the environment override deployment template instead of the base deployment template.

Note: If you have questions, please let us know on our Discord channel.


Devtron Kubernetes Client

Overview

The Kubernetes client by Devtron is a very lightweight dashboard that can be installed on arm64/amd64-based architectures. It comes with features such as the Kubernetes Resource Browser and Cluster Management that provide control and observability for resources across clouds and clusters.

Devtron Kubernetes Client is an intuitive Kubernetes dashboard and command-line utility installed outside a Kubernetes cluster. The client can be installed on a desktop running any operating system and interacts with all your Kubernetes clusters and workloads through an API server. It is a binary, packaged in a bash script, that you can download and install using the following set of commands.

By installing Devtron Kubernetes Client, you can access the Kubernetes Resource Browser and the Cluster Management feature.

Here are a few advantages of using Devtron Kubernetes Client:

  • Managing Kubernetes Resources at scale: Clusters vary based on business and architectural needs. Organizations tend to build smaller clusters for more decentralization. This practice leads to the creation of multiple clusters and more nodes. Managing them via a CLI requires multiple configuration files, making it difficult to perform resource operations. With the Devtron Kubernetes Client, you can easily gain more visibility into K8s resources.

  • Unifying information in one place: When information is scattered across clusters and you have to type commands with arguments to fetch the desired output, the process becomes slow and error-prone. Without a single point of configuration, the configurations in different config files diverge, making them even more challenging to restore and track. The Devtron Kubernetes Client unifies all the information and tools into one interface to perform various contextual tasks.

  • Accessibility during an outage for troubleshooting: As the Devtron Kubernetes Client runs outside a cluster, you can exercise basic control over failed resources when there is a cluster-level outage. The Client helps to gather essential logs and data to pinpoint the root cause of the issue and reduce the time to restore service.

  • Avoiding Kubeconfig version mismatch errors: With the Devtron Kubernetes Client, you are relieved from maintaining kubeconfig versions for the respective clusters (v1.16 to v1.26, i.e., the current version) as the Devtron Kubernetes Client performs kubeconfig version control itself. Instead of managing multiple kubectl versions manually, it eliminates the chances of errors occurring due to a mismatch in configuration.

Install Devtron Kubernetes Client

  • Download the bash script using the below URL (an example download command is shown after these steps): https://cdn.devtron.ai/k8s-client/devtron-install.bash

  • To automatically download the executable file and to open the dashboard in the respective browser, run the following command:

   sh devtron-install.bash start  

Note: Make sure you place devtron-install.bash in your current directory before you execute the command.

  • Devtron Kubernetes Client opens in your browser automatically.

Note: You do not need to have a super admin permission to add a cluster if you install Devtron Kubernetes Client. You can add more than one cluster.
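For the download step above, one common way to fetch the script is with curl (any download method works; the URL is the one listed above):

```bash
# Download the installer script into the current directory
curl -LO https://cdn.devtron.ai/k8s-client/devtron-install.bash

# Then start the dashboard as described above
sh devtron-install.bash start
```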

Kubernetes Resource Browser

Kubernetes Resource Browser provides a graphical user interface for interacting and managing all your Kubernetes (k8s) resources across clusters. It also helps you to deploy and manage Kubernetes resources and allows pod operations such as:

  • View real-time logs

  • Check manifest and edit live manifests of k8s resources

  • Exec into the pod via a terminal

  • View Events

  • Delete a resource

With Kubernetes Resource browser, you can also perform the following:

  • Check the real-time health status

  • Search for any workloads

  • Manage multiple clusters and change cluster contexts

  • Deploy multiple K8s manifests through Create UI option.

  • Perform resource grouping at the cluster level.

Note: You do not need to have a super admin permission to access Kubernetes Resource Browser if you install Devtron Kubernetes Client.

Cluster Management

With the Devtron Kubernetes Client, you can manage all your clusters running on-premises or on a cloud. It is a cluster and cloud agnostic platform where you can add as many clusters as you want, be it a lightweight cluster such as k3s/ microk8s or cloud managed clusters like Amazon EKS.

It enables you to observe and monitor the cluster health and real-time node conditions. The Cluster management feature provides a summary of nodes with all available labels, annotations, taints, and other parameters such as resource usage. In addition to that, it helps you to perform node operations such as:

  • Debug a node

  • Cordon a node

  • Drain a node

  • Taint a node

  • Edit a node config

  • Delete a node

Some Peripheral Commands

  • If you close the browser by mistake, you can reopen the dashboard by executing the following command. It opens the dashboard on a port in the available web browser and preserves the Kubernetes client's state.

sh devtron-install.bash open 
  • To stop the dashboard, you can execute the following command:

sh devtron-install.bash stop
  • To update the Devtron Kubernetes Client, use the following command. It stops the running dashboard, downloads the latest executable file, and opens it in the browser.

sh devtron-install.bash upgrade

Try Devtron Enterprise for Free

You must add your cluster to make it visible in the Kubernetes Resource Browser and Clusters section. To add a cluster, go to Global Configurations and click Add Cluster. Refer to the documentation on how to add a cluster.

After your cluster is added via Global Configurations, go to the Kubernetes Resource Browser page and select your cluster. Refer to the Resource Browser documentation for details on its operations.

With its rich features and intuitive interface, you can easily manage and debug clusters through cluster terminal access and use any CLI debugging tools like busybox, kubectl, netshoot, or any custom CLI tools like k9s.

After your cluster is added via Global Configurations, go to the Clusters page and search for or select your cluster. Refer to the Clusters documentation for details on its operations.

Explore all capabilities of Devtron with its Enterprise version trial.


Install Devtron on Airgapped Environment

Introduction

In certain scenarios, you may need to deploy Devtron to a Kubernetes cluster that isn’t connected to the internet. Such air-gapped environments are used for various reasons, particularly in industries with strict regulatory requirements like healthcare, banking, and finance. This is because air-gapped environments aren't exposed to the public internet; therefore, they create a controlled and secure space for handling sensitive data and operations.

Try Devtron Enterprise for Free

Prerequisites

  1. Install podman or docker on the VM from where you're executing the installation commands.

  2. Get the latest image file

    curl -LO https://raw.githubusercontent.com/devtron-labs/devtron/refs/heads/main/devtron-images.txt.source
  3. Set the values of TARGET_REGISTRY, TARGET_REGISTRY_USERNAME, and TARGET_REGISTRY_TOKEN. This registry should be accessible from the VM where you are running the cloning script and the K8s cluster where you’re installing Devtron.

Note

If you are using Docker, the TARGET_REGISTRY should be in the format docker.io/<USERNAME>


Docker Instructions

Platform Selection

For Linux/amd64

```bash
export PLATFORM="linux/amd64"
```

For Linux/arm64

```bash
export PLATFORM="linux/arm64"
```
  1. Set the environment variables

    # Set the source registry URL
    export SOURCE_REGISTRY="quay.io/devtron"
    
    # Set the target registry URL, username, and token/password
    export TARGET_REGISTRY=""
    export TARGET_REGISTRY_USERNAME=""
    export TARGET_REGISTRY_TOKEN=""
    
    # Set the source and target image file names with default values if not already set
    SOURCE_IMAGES_LIST="${SOURCE_IMAGES_LIST:=devtron-images.txt.source}"
    TARGET_IMAGES_LIST="${TARGET_IMAGES_LIST:=devtron-images.txt.target}"
  2. Log in to the target Docker registry

    docker login -u $TARGET_REGISTRY_USERNAME -p $TARGET_REGISTRY_TOKEN $TARGET_REGISTRY
  3. Clone the images

    while IFS= read -r source_image; do
      # Check if the source image belongs to the quay.io/devtron registry
      if [[ "$source_image" == quay.io/devtron/* ]]; then
        # Replace the source registry with the target registry in the image name
        target_image="${source_image/quay.io\/devtron/$TARGET_REGISTRY}"
    
      # Check if the source image belongs to the quay.io/argoproj registry
      elif [[ "$source_image" == quay.io/argoproj/* ]]; then
        # Replace the source registry with the target registry in the image name
        target_image="${source_image/quay.io\/argoproj/$TARGET_REGISTRY}"
    
      # Check if the source image belongs to the public.ecr.aws/docker/library registry
      elif [[ "$source_image" == public.ecr.aws/docker/library/* ]]; then
        # Replace the source registry with the target registry in the image name
        target_image="${source_image/public.ecr.aws\/docker\/library/$TARGET_REGISTRY}"
      fi
    
      # Pull the image from the source registry
      docker pull --platform $PLATFORM $source_image
    
      # Tag the image with the new target registry name
      docker tag $source_image $target_image
    
      # Push the image to the target registry
      docker push $target_image
    
      # Output the updated image name
      echo "Updated image: $target_image"
    
      # Append the new image name to the target image file
      echo "$target_image" >> "$TARGET_IMAGES_LIST"
    
    done < "$SOURCE_IMAGES_LIST"

Podman Instructions

For Multi-arch

  1. Set the environment variables

    export SOURCE_REGISTRY="quay.io/devtron"
    export SOURCE_REGISTRY_TOKEN=#Enter token provided by Devtron team
    export TARGET_REGISTRY=#Enter target registry url 
    export TARGET_REGISTRY_USERNAME=#Enter target registry username 
    export TARGET_REGISTRY_TOKEN=#Enter target registry token/password
  2. Log in to the target Podman registry

    podman login -u $TARGET_REGISTRY_USERNAME -p $TARGET_REGISTRY_TOKEN $TARGET_REGISTRY
  3. Clone the images

    SOURCE_REGISTRY="quay.io/devtron"
    TARGET_REGISTRY=${TARGET_REGISTRY}
    SOURCE_IMAGES_FILE_NAME="${SOURCE_IMAGES_FILE_NAME:=devtron-images.txt.source}"
    TARGET_IMAGES_FILE_NAME="${TARGET_IMAGES_FILE_NAME:=devtron-images.txt.target}"
    
    cp $SOURCE_IMAGES_FILE_NAME $TARGET_IMAGES_FILE_NAME
    while read source_image; do
      if [[ "$source_image" == *"workflow-controller:"* || "$source_image" == *"argoexec:"* || "$source_image" == *"argocd:"* ]]
      then
        SOURCE_REGISTRY="quay.io/argoproj"
        sed -i "s|${SOURCE_REGISTRY}|${TARGET_REGISTRY}|g" $TARGET_IMAGES_FILE_NAME
      elif [[ "$source_image" == *"redis:"* ]]
      then
        SOURCE_REGISTRY="public.ecr.aws/docker/library"
        sed -i "s|${SOURCE_REGISTRY}|${TARGET_REGISTRY}|g" $TARGET_IMAGES_FILE_NAME
      else
        SOURCE_REGISTRY="quay.io/devtron"
        sed -i "s|${SOURCE_REGISTRY}|${TARGET_REGISTRY}|g" $TARGET_IMAGES_FILE_NAME
      fi
    done <$SOURCE_IMAGES_FILE_NAME
    echo "Target Images file finalized"
    
    while read -r -u 3 source_image && read -r -u 4 target_image ; do
      echo "Pushing $source_image $target_image"
      podman manifest create $source_image
      podman manifest add $source_image $source_image --all
      podman manifest push $source_image $target_image --all
    done 3<"$SOURCE_IMAGES_FILE_NAME" 4<"$TARGET_IMAGES_FILE_NAME"

Devtron Installation

Before starting, ensure you have created an image pull secret for your registry if authentication is required.

  1. Create the namespace (if not already created)

    kubectl create ns devtroncd
  2. Create the Docker registry secret

    kubectl create secret docker-registry devtron-imagepull \
      --namespace devtroncd \
      --docker-server=$TARGET_REGISTRY \
      --docker-username=$TARGET_REGISTRY_USERNAME \
      --docker-password=$TARGET_REGISTRY_TOKEN

    If you are installing Devtron with the CI/CD module or using Argo CD, create the secret in the following namespaces as well; otherwise, you can skip this step:

    kubectl create secret docker-registry devtron-imagepull \
      --namespace devtron-cd \
      --docker-server=$TARGET_REGISTRY \
      --docker-username=$TARGET_REGISTRY_USERNAME \
      --docker-password=$TARGET_REGISTRY_TOKEN
    kubectl create secret docker-registry devtron-imagepull \
      --namespace devtron-ci \
      --docker-server=$TARGET_REGISTRY \
      --docker-username=$TARGET_REGISTRY_USERNAME \
      --docker-password=$TARGET_REGISTRY_TOKEN
    kubectl create secret docker-registry devtron-imagepull \
      --namespace argo \
      --docker-server=$TARGET_REGISTRY \
      --docker-username=$TARGET_REGISTRY_USERNAME \
      --docker-password=$TARGET_REGISTRY_TOKEN

Get the latest Devtron Helm Chart

helm pull devtron-operator --repo http://helm.devtron.ai

This downloads the tar file of the devtron-operator chart. Make sure to replace <devtron-chart-file> in the installation commands below with this file name.
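For reference, the pulled chart lands in your current directory as a .tgz file; the exact version in the file name will vary, so the name below is only illustrative:

```bash
# Locate the downloaded chart archive
ls devtron-operator-*.tgz
# Example output: devtron-operator-<version>.tgz
# Use this file name in place of <devtron-chart-file> in the commands below
```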

Install Devtron without any Integration

Use the below command to install Devtron without any Integrations

  1. Without imagePullSecrets:

    helm install devtron <devtron-chart-file> -n devtroncd --set global.containerRegistry="$TARGET_REGISTRY" --set-string components.devtron.customOverrides.IS_AIR_GAP_ENVIRONMENT=true
  2. With imagePullSecrets:

    helm install devtron <devtron-chart-file> -n devtroncd --set global.containerRegistry="$TARGET_REGISTRY" --set global.imagePullSecrets[0].name=devtron-imagepull --set-string components.devtron.customOverrides.IS_AIR_GAP_ENVIRONMENT=true

Installing Devtron with CI/CD Mode

Use the below command to install Devtron with only the CI/CD module

  1. Without imagePullSecrets:

    helm install devtron <devtron-chart-file> -n devtroncd --set installer.modules={cicd} --set global.containerRegistry="$TARGET_REGISTRY" --set-string components.devtron.customOverrides.IS_AIR_GAP_ENVIRONMENT=true
  2. With imagePullSecrets:

    helm install devtron <devtron-chart-file> -n devtroncd --set installer.modules={cicd} --set global.containerRegistry="$TARGET_REGISTRY" --set global.imagePullSecrets[0].name=devtron-imagepull --set-string components.devtron.customOverrides.IS_AIR_GAP_ENVIRONMENT=true

Install Devtron with CICD Mode including Argocd

Use the below command to install Devtron with the CI/CD module and Argo CD

  1. Without imagePullSecrets:

    helm install devtron <devtron-chart-file> --create-namespace -n devtroncd --set installer.modules={cicd} --set argo-cd.enabled=true --set global.containerRegistry="$TARGET_REGISTRY" --set argo-cd.global.image.repository="${TARGET_REGISTRY}/argocd" --set argo-cd.redis.image.repository="${TARGET_REGISTRY}/redis" --set-string components.devtron.customOverrides.IS_AIR_GAP_ENVIRONMENT=true
  2. With imagePullSecrets:

    helm install devtron <devtron-chart-file> --create-namespace -n devtroncd --set installer.modules={cicd} --set argo-cd.enabled=true --set global.containerRegistry="$TARGET_REGISTRY" --set argo-cd.global.image.repository="${TARGET_REGISTRY}/argocd" --set argo-cd.redis.image.repository="${TARGET_REGISTRY}/redis" --set global.imagePullSecrets[0].name=devtron-imagepull --set-string components.devtron.customOverrides.IS_AIR_GAP_ENVIRONMENT=true

Next Steps

Projects

Projects are the logical grouping of your applications so that you can manage and control the access level of users.

Add Project:

  1. To add a project name, go to the Projects section of Global Configurations.

  2. Click Add Project.

  3. Provide a project name in the field and click Save.

Global Configurations

Global Configurations allow you to define common configuration in one place and reuse it across multiple applications, instead of copy/pasting it into each one.

Before you start creating an application, we recommend providing basic information in the different sections of Global Configurations available in Devtron.

Authorization

The Authorization section describes how to authenticate and authorize access to resources, as well as how to manage role-based access levels in Devtron.

Access can be granted to a user via:

Okta

Prerequisites

Tutorial

Steps on Okta Admin Console

Once your Okta org is set up, create an app integration on Okta to get a Client ID and Client Secret.

  1. In the Admin Console, go to Applications → Applications.

  2. Click Create App Integration.

  3. Select OIDC - OpenID Connect as the Sign-in method.

  4. Select Web as the application type and click Next.

  5. On the App Integration page:

    • Give a name to your application.

    • Select the Interaction Code and Refresh Token checkbox.

    • Now go to Devtron's Global Configurations → SSO Login Services → OIDC.

    • Copy the redirect URI given in the helper text (might look like: https://xxx.xxx.xxx/xxx/callback).

    • Return to the Okta screen, and remove the prefilled value in Sign-in redirect URIs.

    • Paste the copied URI in Sign-in redirect URIs.

    • Click Save.

  6. On the General tab:

    • Note the Client ID value.

    • Click the Edit option.

    • In Client Authentication, choose Client Secret.

    • Click Save.

    • Click Generate new secret.

    • Note the Client Secret value.

Steps on Devtron

  1. Go to the Global Configurations → SSO Login Services → OIDC.

  2. In the URL field, enter the Devtron application URL (a valid https link) where it is hosted.

  3. Under Configuration tab, locate the config object, and provide the clientID and clientSecret of the app integration you created on Okta.

  4. Provide issuer value as https://${yourOktaDomain}. Replace ${yourOktaDomain} with your domain on Okta as shown in the video.

  5. For providing redirectURI or callbackURI registered with the SSO provider, you can either select Configuration or Sample Script. Note that the redirect URI is already given in the helper text (as seen in the previous section).

  6. Click Save to create and activate Okta SSO login.

Now your users will be able to log in to Devtron using the Okta authentication method. Note that existing signed-in users will be logged out and they have to log in again using their OIDC account.

Sample Configuration
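A minimal sketch of what the OIDC config object may look like (all values are placeholders; use the clientID and clientSecret from your Okta app integration, the issuer from step 4, and the redirect URI shown in the helper text):

    config:
      issuer: https://${yourOktaDomain}
      clientID: <client-id-from-okta>
      clientSecret: <client-secret-from-okta>
      redirectURI: https://xxx.xxx.xxx/xxx/callback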

User Permissions

Introduction

Here you can manage who can access your Devtron instance and what actions they can perform. Use this section to add team members, assign them roles, and control their access by granting fine-grained permissions. Moreover, you can also download all user data in a CSV format.


Add Users

Mandatory Action

This is a mandatory step after configuring SSO in Devtron; otherwise, your users won't be able to log in to Devtron via SSO.

Who Can Perform This Action?

Only managers and super-admins can add users.

  1. Go to Global Configurations → Authorization → User Permissions.

  2. Click Add Users.

  3. In the Email addresses field, type the email address of the user you wish to add. You may add more than one email address.

  4. (Optional) From the Assign user groups dropdown, you may assign one or more user groups to the user. This helps in identifying the group/team to which the user belongs (e.g., Security Team, Frontend Team, Department Leads) especially when adding larger teams.

  5. There are two types of permissions in Devtron (click the links below to learn more):

  6. Click Save. You have successfully added your user(s).


Grant Super Admin Permission

Who Can Perform This Action?

Only existing super-admins can assign super-admin permissions to another user.

Before assigning this permission, please note the following:

  • Selecting this option will grant the user full access to all the resources.

  • Since super-admin permission is the highest level of access you can grant, we recommend you give it only to limited users.


Grant Specific Permissions

Who Can Perform This Action?

Only managers and super-admins can assign specific permissions to a user.

Upon selecting this option, you get two additional sections:

Section

Description

Permission Groups

Direct Permissions

This option allows you to grant your user the access to:

What happens when a user has direct permissions as well as permissions inherited from a group?

If you assign a permission group as well as direct permissions, the user will have the combined permissions of both.

For example:

  • A user is granted ‘Build & Deploy’ access to three apps via direct permissions.

  • The same user is part of a group that has ‘View only’ access to five apps (including those three apps).

  • Now, the user will have both ‘Build & Deploy’ and ‘View only’ permissions for those three apps, and just ‘View only’ for the other two.

Devtron Apps permissions

Note

Here you can grant your user the permissions for Devtron apps.

Field
Description

Project

Select a project from the dropdown list to grant the user access. You can select only one project at a time. Note: If you want to select more than one project, then click Add Permission.

Environment

Select a specific environment or all environments from the dropdown list. Note: If you select All environments, the user will have access to all the current environments including any new environment which gets associated with the application later.

Application

Select a specific application or all applications from the dropdown list corresponding to your selected environments. Note: If you select All applications, the user will have access to all current and future applications associated with the project. Moreover, a user with access to all applications can also create new applications.

Role

Available Roles:

  • View only

  • Build and Deploy

  • Admin

  • Manager

Status

Roles available for Devtron Apps

There are seven role-based access levels for Devtron Apps:

  1. View only: These users can view the applications and environments they have access to, but cannot view sensitive data like secrets used in applications or charts.

  2. Build and Deploy: In addition to View only access, these users can build and deploy images of applications to permitted environments.

  3. Admin: These users can create, edit, deploy, and delete permitted applications in selected projects.

  4. Manager: These users have the same permissions as Admin but can also grant or revoke user access for applications and environments they manage.

  5. Image approver: These users can approve image deployment requests.

  6. Configuration approver: These users can approve configuration change requests.

  7. Artifact promoter: These users can approve requests to promote artifacts across workflow stages.

However, super-admin users have unrestricted access to all Devtron resources. They can create, modify, delete, and manage any resource, including user access, Git repositories, container registries, clusters, and environments.

| Role | View | Create | Edit | Delete | Build & Deploy | Approve Images | Approve Config Change | Approve Artifacts | Manage User Access |
|------|------|--------|------|--------|----------------|----------------|-----------------------|-------------------|--------------------|
| View | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Build and Deploy | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ |
| Admin | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ |
| Manager | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ |
| Image Approver | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ |
| Configuration Approver | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ |
| Artifact Promoter | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Super Admin | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |

Helm Apps permissions

Here you can grant your user the permissions for Helm apps deployed from Devtron or outside Devtron.

Field
Description

Project

Select a project from the dropdown list to grant the user access. You can select only one project at a time. Note: If you want to select more than one project, then click Add Permission.

Environment or Cluster/Namespace

Select a specific environment from the dropdown list. Note: If you select All existing + future environments in cluster, then the user will get access to all the current environments including any new environment which gets associated with the application later.

Application

Select a specific helm application or all helm apps from the dropdown list corresponding to your selected environments. Note: If All applications is selected, the user will have access to all current and future applications associated with the project.

Permission

Available Permissions:

  • View only

  • View & Edit

  • Admin

Status

Roles available for Helm Apps

There are three role-based access levels for Helm Apps:

  1. View only: Users with this role can only view Helm applications and their configurations but cannot make any modifications.

  2. View & Edit: These users can modify the configurations of permitted Helm applications and deploy them.

  3. Admin: Users with this role have full access to Helm applications, including the ability to create, manage, and delete applications.

| Role | View | Create | Deploy | Edit | Delete |
|------|------|--------|--------|------|--------|
| View only | ✅ | ❌ | ❌ | ❌ | ❌ |
| View & Edit | ✅ | ❌ | ✅ | ✅ | ❌ |
| Admin | ✅ | ✅ | ✅ | ✅ | ✅ |
| Super Admin | ✅ | ✅ | ✅ | ✅ | ✅ |

Jobs permissions

Here you can grant your user the permissions to access the jobs created in Devtron.

Field
Description

Project

Select a project from the dropdown list to grant the user access. You can select only one project at a time. Note: If you want to select more than one project, then click Add Permission.

Job Name

Select a specific job or choose All jobs to grant access to all available jobs within the project.

Workflow

Select a specific workflow or All workflows to grant access to the workflows containing the job pipelines.

Environment

Select a specific environment or All environments to grant access to the environments associated with the job(s).

Role

Available Roles:

  • View only

  • Run job

  • Admin

Status

Roles available for Jobs

There are three role-based access levels for Jobs:

  1. View only: Users can view the job workflows and logs but cannot trigger or modify jobs.

  2. Run Job: These users can trigger jobs but cannot make modifications to workflows.

  3. Admin: Users with this role have full control over jobs, including creating, modifying, and deleting workflows.

| Role | View | Create | Run | Edit | Delete |
|------|------|--------|-----|------|--------|
| View only | ✅ | ❌ | ❌ | ❌ | ❌ |
| Run job | ✅ | ❌ | ✅ | ❌ | ❌ |
| Admin | ✅ | ✅ | ✅ | ✅ | ✅ |
| Super Admin | ✅ | ✅ | ✅ | ✅ | ✅ |

Kubernetes Resources permissions

Note

The 'Kubernetes Resources' tab will be available only if you have super-admin permissions.

To grant Kubernetes resource permission, click Add permission.

Field
Description

Cluster

Select a cluster from the dropdown list to which you want to give permission to the user. You can select only one cluster at a time. Note: To add another cluster, click Add another.

Namespace

Select a namespace from the dropdown list.

API Group

Select a specific API group or All API groups from the dropdown list corresponding to the Kubernetes resource.

Kind

Select a kind or All kind from the dropdown list corresponding to the Kubernetes resource.

Resource name

Select a resource name or All resources from the dropdown list to which you want to give permission to the user.

Role

Available Roles:

  • View

  • Admin

Status

Roles available for Kubernetes Resources

There are two role-based access levels for Kubernetes Resources:

  1. View: Users with this role can inspect Kubernetes resources but cannot make changes.

  2. Admin: Users can create, modify, and delete Kubernetes resources within their assigned namespaces and clusters.

| Role | View | Create | Edit | Delete |
|------|------|--------|------|--------|
| View | ✅ | ❌ | ❌ | ❌ |
| Admin | ✅ | ✅ | ✅ | ✅ |
| Super Admin | ✅ | ✅ | ✅ | ✅ |

Chart Groups permissions

Note

Here you can grant your user the permissions for accessing Chart Groups. Note that you can only give users the permission to either create chart groups or edit them, but not both.

Action
Permissions

View

Click the View checkbox if you want the user(s) to view only the chart groups.

Create

Click the Create checkbox if you want the user(s) to create, view, or delete the chart groups.

Edit

  • Deny: Select Deny from the dropdown list to restrict the users from editing the chart groups.

  • Specific Chart Groups: Select the Specific Charts Groups option from the dropdown list and then select the chart group for which you want to allow users to edit.

Roles available for Chart Groups

  1. View: Users can view chart groups but cannot create or edit them.

  2. Create: Users can create new chart groups and modify existing ones.

  3. Edit: Users can modify chart groups but cannot create new ones.

| Role | View | Create | Deploy | Edit | Delete |
|------|------|--------|--------|------|--------|
| View | ✅ | ❌ | ❌ | ❌ | ❌ |
| Create | ✅ | ✅ | ❌ | ✅ | ✅ |
| Edit | ✅ | ❌ | ❌ | None/Specific Groups | ❌ |
| Super Admin | ✅ | ✅ | ✅ | ✅ | ✅ |


Who Can Perform This Action?

  • Super-admins can activate or deactivate users.

  • Managers can activate or deactivate users only if the user has the same or fewer permissions than the manager.

When working with multiple collaborators in Devtron, you may need to deactivate users who no longer require access and reactivate them when needed. This applies to users of Devtron Apps, Helm Apps, Jobs, and Kubernetes Resources.

You can manage a user's active status at three levels:

At User level

  • Active/Activate - Use this option to activate a deactivated user while retaining their previous roles and permissions.

  • Inactive/Inactivate - Use this option to deactivate an existing active user and save the changes. If the user has an ongoing session, they will be logged out permanently on their next action or refresh.

  • Keep active until - Use this TTL-based option to keep a user active only till a specified date and time, after which the user is automatically deactivated. The user will not be able to log in to Devtron.

At Permission Group level

  • Active/Activate - Use this option to allow permissions from the group to take effect for the user.

  • Keep active until - Use this TTL-based option to grant group permissions to the user until a set date, after which permission group will become inactive for the user.

At Direct Permissions level

  • Active/Activate - Use this option to grant the project/resource access to the user.

  • Keep active until - Use this TTL-based option to grant the project/resource access to the user only till a specified date and time, beyond which the user will no longer have access to the project/resource.


Edit User Permissions

Who Can Perform This Action?

  • Super-admins can edit user permissions.

  • Managers can edit user permissions only if the user has the same or fewer permissions than the manager.

Note

You can edit the user permissions by clicking the edit icon. Click Save after editing the permissions.


Export User Data to CSV

You may download the user data of current users and deleted users in a CSV format. Broadly, your exported CSV will include:

  • User's Email address

  • User ID & Status (Active/Inactive/Deleted)

  • Last Login Time

  • Detailed Permissions

  • Role

  • Timestamps for User Addition, Updation, and Deletion


Delete Users

Who Can Perform This Action?

  • Super-admins can delete users.

  • Managers can delete users only if the user has the same or fewer permissions than the manager.

If you want to delete a user, click Delete.

This will remove the user from the system along with all the permissions granted earlier. The user will no longer be able to log in to Devtron unless added again.

Install Devtron with CI/CD

In this section, we describe the steps in detail on how you can install Devtron with CI/CD integration.

Try Devtron Enterprise for Free


Prerequisites

Run the following command to install AWS EBS CSI driver using Helm:

helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver

helm repo update

helm upgrade --install aws-ebs-csi-driver \
--namespace kube-system aws-ebs-csi-driver/aws-ebs-csi-driver

Command

Run the following command to install the latest version of Devtron along with the CI/CD module:

helm repo add devtron https://helm.devtron.ai 

helm repo update devtron

helm install devtron devtron/devtron-operator \
--create-namespace --namespace devtroncd \
--set installer.modules={cicd}

Install Multi-Architecture Nodes (ARM and AMD)

To install Devtron on clusters with the multi-architecture nodes (ARM and AMD), append the Devtron installation command with --set installer.arch=multi-arch.
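For example, the CI/CD installation command shown above becomes the following once the multi-architecture flag is appended:

```bash
helm install devtron devtron/devtron-operator \
--create-namespace --namespace devtroncd \
--set installer.modules={cicd} \
--set installer.arch=multi-arch
```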


Configure Blob Storage during Installation

Configuring Blob Storage in your Devtron environment allows you to store build logs and cache. If you do not configure Blob Storage, then:

  • You will not be able to access the build and deployment logs after an hour.

  • Build time for a commit hash will increase as the cache will not be available.

  • Artifact reports cannot be generated in the pre/post build and deployment stages.

Choose one of the options to configure blob storage:

Run the following command to install Devtron along with MinIO for storing logs and cache.

helm repo add devtron https://helm.devtron.ai 

helm repo update devtron

helm install devtron devtron/devtron-operator \
--create-namespace --namespace devtroncd \
--set installer.modules={cicd} \
--set minio.enabled=true

Note: Unlike global cloud providers such as AWS S3 Bucket, Azure Blob Storage and Google Cloud Storage, MinIO can be hosted locally also.

Run the following command to install Devtron along with AWS S3 buckets for storing build logs and cache:

  • Install using S3 IAM policy.

Note: Please ensure that the S3 permission policy is attached to the IAM role associated with the nodes of the cluster if you are using the command below.

helm repo add devtron https://helm.devtron.ai

helm repo update devtron

helm install devtron devtron/devtron-operator \
--create-namespace --namespace devtroncd \
--set installer.modules={cicd} \
--set configs.BLOB_STORAGE_PROVIDER=S3 \
--set configs.DEFAULT_CACHE_BUCKET=demo-s3-bucket \
--set configs.DEFAULT_CACHE_BUCKET_REGION=us-east-1 \
--set configs.DEFAULT_BUILD_LOGS_BUCKET=demo-s3-bucket \
--set configs.DEFAULT_CD_LOGS_BUCKET_REGION=us-east-1
  • Install using access-key and secret-key for AWS S3 authentication:

helm repo add devtron https://helm.devtron.ai

helm repo update devtron

helm install devtron devtron/devtron-operator \
--create-namespace --namespace devtroncd \
--set installer.modules={cicd} \
--set configs.BLOB_STORAGE_PROVIDER=S3 \
--set configs.DEFAULT_CACHE_BUCKET=demo-s3-bucket \
--set configs.DEFAULT_CACHE_BUCKET_REGION=us-east-1 \
--set configs.DEFAULT_BUILD_LOGS_BUCKET=demo-s3-bucket \
--set configs.DEFAULT_CD_LOGS_BUCKET_REGION=us-east-1 \
--set secrets.BLOB_STORAGE_S3_ACCESS_KEY=<access-key> \
--set secrets.BLOB_STORAGE_S3_SECRET_KEY=<secret-key>
  • Install using S3 compatible storages:

helm repo add devtron https://helm.devtron.ai

helm repo update devtron

helm install devtron devtron/devtron-operator \
--create-namespace --namespace devtroncd \
--set installer.modules={cicd} \
--set configs.BLOB_STORAGE_PROVIDER=S3 \
--set configs.DEFAULT_CACHE_BUCKET=demo-s3-bucket \
--set configs.DEFAULT_CACHE_BUCKET_REGION=us-east-1 \
--set configs.DEFAULT_BUILD_LOGS_BUCKET=demo-s3-bucket \
--set configs.DEFAULT_CD_LOGS_BUCKET_REGION=us-east-1 \
--set secrets.BLOB_STORAGE_S3_ACCESS_KEY=<access-key> \
--set secrets.BLOB_STORAGE_S3_SECRET_KEY=<secret-key> \
--set configs.BLOB_STORAGE_S3_ENDPOINT=<endpoint>

Run the following command to install Devtron along with Azure Blob Storage for storing build logs and cache:

helm repo add devtron https://helm.devtron.ai

helm repo update devtron

helm install devtron devtron/devtron-operator \
--create-namespace --namespace devtroncd \
--set installer.modules={cicd} \
--set secrets.AZURE_ACCOUNT_KEY=xxxxxxxxxx \
--set configs.BLOB_STORAGE_PROVIDER=AZURE \
--set configs.AZURE_ACCOUNT_NAME=test-account \
--set configs.AZURE_BLOB_CONTAINER_CI_LOG=ci-log-container \
--set configs.AZURE_BLOB_CONTAINER_CI_CACHE=ci-cache-container

Run the following command to install Devtron along with Google Cloud Storage for storing build logs and cache:

helm repo add devtron https://helm.devtron.ai

helm repo update devtron

helm install devtron devtron/devtron-operator \
--create-namespace --namespace devtroncd \
--set installer.modules={cicd} \
--set configs.BLOB_STORAGE_PROVIDER=GCP \
--set secrets.BLOB_STORAGE_GCP_CREDENTIALS_JSON=eyJ0eXBlIjogInNlcnZpY2VfYWNjb3VudCIsInByb2plY3RfaWQiOiAiPHlvdXItcHJvamVjdC1pZD4iLCJwcml2YXRlX2tleV9pZCI6ICI8eW91ci1wcml2YXRlLWtleS1pZD4iLCJwcml2YXRlX2tleSI6ICI8eW91ci1wcml2YXRlLWtleT4iLCJjbGllbnRfZW1haWwiOiAiPHlvdXItY2xpZW50LWVtYWlsPiIsImNsaWVudF9pZCI6ICI8eW91ci1jbGllbnQtaWQ+IiwiYXV0aF91cmkiOiAiaHR0cHM6Ly9hY2NvdW50cy5nb29nbGUuY29tL28vb2F1dGgyL2F1dGgiLCJ0b2tlbl91cmkiOiAiaHR0cHM6Ly9vYXV0aDIuZ29vZ2xlYXBpcy5jb20vdG9rZW4iLCJhdXRoX3Byb3ZpZGVyX3g1MDlfY2VydF91cmwiOiAiaHR0cHM6Ly93d3cuZ29vZ2xlYXBpcy5jb20vb2F1dGgyL3YxL2NlcnRzIiwiY2xpZW50X3g1MDlfY2VydF91cmwiOiAiPHlvdXItY2xpZW50LWNlcnQtdXJsPiJ9Cg== \
--set configs.DEFAULT_CACHE_BUCKET=cache-bucket \
--set configs.DEFAULT_BUILD_LOGS_BUCKET=log-bucket
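The BLOB_STORAGE_GCP_CREDENTIALS_JSON value above is simply a base64-encoded GCP service account key. One way to generate it, assuming your key is saved as gcp-credentials.json (on macOS, omit the -w 0 flag):

```bash
# Encode the service account JSON as a single base64 line
base64 -w 0 gcp-credentials.json
```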

Check Status of Devtron Installation

The installation takes about 15 to 20 minutes to spin up all of the Devtron microservices one by one.

Run the following command to check the status of the installation:

kubectl -n devtroncd get installers installer-devtron \
-o jsonpath='{.status.sync.status}'

The command executes with one of the following output messages, indicating the status of the installation:

| Status | Description |
|--------|-------------|
| Downloaded | The installer has downloaded all the manifests, and the installation is in progress. |
| Applied | The installer has successfully applied all the manifests, and the installation is completed. |


Check the Installer Logs

Run the following command to check the installer logs:

kubectl logs -f -l app=inception -n devtroncd

Devtron Dashboard

Run the following command to get the Devtron dashboard URL:

kubectl get svc -n devtroncd devtron-service \
-o jsonpath='{.status.loadBalancer.ingress}'

You will get an output similar to the example shown below:

[map[hostname:aaff16e9760594a92afa0140dbfd99f7-305259315.us-east-1.elb.amazonaws.com]]

Use the hostname aaff16e9760594a92afa0140dbfd99f7-305259315.us-east-1.elb.amazonaws.com (Loadbalancer URL) to access the Devtron dashboard.

If you do not get a hostname or receive a message that says "service doesn't exist," it means Devtron is still installing. Please wait until the installation is completed.

You can also create a CNAME entry for your domain/subdomain pointing to the Loadbalancer URL to access Devtron at a customized domain.

| Host | Type | Points to |
|------|------|-----------|
| devtron.yourdomain.com | CNAME | aaff16e9760594a92afa0140dbfd99f7-305259315.us-east-1.elb.amazonaws.com |


Devtron Admin Credentials

When you install Devtron for the first time, it creates a default admin user and password (with unrestricted access to Devtron). You can use these credentials to log in as an administrator.

After the initial login, we recommend you set up any SSO service like Google, GitHub, etc., and then add other users (including yourself). Subsequently, all the users can use the same SSO (let's say, GitHub) to log in to Devtron's dashboard.

The sections below will help you understand the process of getting the administrator password.

For Devtron version v0.6.0 and higher

Username: admin Password: Run the following command to get the admin password:

kubectl -n devtroncd get secret devtron-secret \
-o jsonpath='{.data.ADMIN_PASSWORD}' | base64 -d
For Devtron version less than v0.6.0

Username: admin Password: Run the following command to get the admin password:

kubectl -n devtroncd get secret devtron-secret \
-o jsonpath='{.data.ACD_PASSWORD}' | base64 -d

Filter Condition

Using filter conditions, you can control the progression of events. Here are a few general examples:

  • Images containing the label "test" should not be eligible for deployment in the production environment

  • Only images having tag versions greater than v0.7.4 should be eligible for deployment

  • Images hosted on Docker Hub should be eligible but not the rest


Steps to Create a Filter

Prerequisites

You must have application(s) with CI-CD workflow(s) configured

  1. From the left sidebar, go to Global Configurations → Filter Condition.

  2. Add a filter condition.

  3. In the Define Filter condition section, you get the following fields:

    • Filter For: Choose the pipeline upon which the filter should apply. Currently, you can use filter conditions for CD pipelines only. Support for CI pipelines is underway.

    • Filter Name: Give a name to the filter.

    • Description: (Optional) Add a description to the filter, preferably explaining what it does.

    • Filter Condition: You can specify either a pass condition, fail condition, or both the conditions:

      • Pass Condition: Events that satisfy the pass condition are eligible to trigger your CD pipeline.

      • Fail Condition: Events that satisfy the fail condition are not eligible to trigger your CD pipeline.

    • Use CEL Expression: You can use Common Expression Language (CEL) to define the conditions. Currently, you can create conditions with the help of following variables:

      • containerImage: Package that contains all the necessary files and instructions to run an application in a container, e.g., gcr.io/k8s-minikube/kicbase:v0.0.39. It returns a string value in the following format: <registry>/<repository>:<tag>

      • containerRepository: Storage location for container images, e.g., kicbase

      • containerImageTag: Versioning of image to indicate its release, e.g., v0.0.39

      • imageLabels: The label(s) you assign to an image in the CD pipeline, e.g., ["PROD","Stage"]. It returns an array of strings.

  4. Click Next.

  5. In the Apply to section, you get the following fields:

    • Application: Choose one or more applications to which your filter condition must apply.

    • Environment: Choose one or more environments to which your filter condition must apply.

Since an application can have more than one environment, the filter conditions apply only to the environment you chose in the Apply to section. If you create a filter condition without choosing an application or environment, it will not apply to any of your pipelines.

  6. Click Save. You have successfully created a filter.

If you create filters using CEL expressions that result in a conflict (i.e., the same image both passes and fails), the fail condition takes higher precedence.


Examples

Pass Condition

Scenario 1

Consider a scenario where you wish to make an image eligible for deployment only if its tag version is greater than v0.0.7

The CEL Expression should be containerImageTag > "v0.0.7"

Go to the Build & Deploy tab. The filter condition was created specifically for test environment, therefore the filter condition would be evaluated only at the relevant CD pipeline, i.e., test

Click Select Image for the test CD pipeline. The first tab Eligible images shows the list and count of images that have satisfied the pass condition since their tag versions were greater than v0.0.7. Hence, they are marked eligible for deployment.

The second tab Latest images shows the latest builds (up to 10 images) irrespective of whether they have satisfied the filter condition(s) or not. The ones that have not satisfied the filter conditions get marked as Excluded. In other words, they are not eligible for deployment.

Clicking the filter icon at the top-left shows the filter condition(s) applied to the test CD pipeline.

Scenario 2

Consider another scenario where you wish to make images eligible for deployment only if the application's git branch starts with the word hotfix and also if its repo URL matches your specified condition.

CEL Expression:

gitCommitDetails.filter(gitCommitDetail, gitCommitDetail.startsWith('https://github.com/devtron-labs')).map(repo, gitCommitDetails[repo].branch).exists_one(branch, branch.startsWith('hotfix-'))

where, https://github.com/devtron-labs is a portion of the repo URL and hotfix- is for finding the branch name (say hotfix-sept-2024)

Alternatively, if you have a fixed branch (say hotfix-123), you may write the following expression:

'hotfix-123' in gitCommitDetails.filter(gitCommitDetail, gitCommitDetail.startsWith('https://github.com/devtron-labs')).map(repo, gitCommitDetails[repo].branch)

Walkthrough Video:

Fail Condition

Consider a scenario where you wish to exclude an image from deployment if its tag starts with the word trial or ends with the word testing

The CEL Expression should be containerImageTag.startsWith("trial") || containerImageTag.endsWith("testing")

Go to the Build & Deploy tab. The filter condition was created specifically for devtron-demo environment, therefore the filter condition would be evaluated only at the relevant CD pipeline, i.e., devtron-demo

Click Select Image for the devtron-demo CD pipeline. The first tab Eligible images shows the list and count of images that have not met the fail condition. Hence, they are marked eligible for deployment.

The second tab Latest images shows the latest builds (up to 10 images) irrespective of whether they have satisfied the filter condition(s) or not. The ones that have satisfied the filter conditions get marked as Excluded. In other words, they are not eligible for deployment.

Clicking the filter icon at the top-left shows the filter condition(s) applied to the devtron-demo CD pipeline.

0.6.x-0.7.x

To check the current version of your Devtron setup, use the following command

kubectl -n devtroncd get installers installer-devtron -o jsonpath='{.status.sync.data}' | grep "^LTAG=" | cut -d"=" -f2-

Proceed with the following steps only if the version is 0.6.x


Prerequisites

  1. Set the release name

export RELEASE_NAME=devtron
  2. Label and annotate the service accounts in the devtron-ci namespace

kubectl -n devtron-ci label sa --all "app.kubernetes.io/managed-by=Helm" --overwrite
kubectl -n devtron-ci annotate sa --all "meta.helm.sh/release-name=$RELEASE_NAME" "meta.helm.sh/release-namespace=devtroncd" --overwrite
  3. Now, label and annotate the service accounts in the devtron-cd namespace

kubectl -n devtron-cd label sa --all "app.kubernetes.io/managed-by=Helm" --overwrite
kubectl -n devtron-cd annotate sa --all "meta.helm.sh/release-name=$RELEASE_NAME" "meta.helm.sh/release-namespace=devtroncd" --overwrite

Upgrade Commands

  1. Update the Helm repository

helm repo update
  2. Run the upgrade command for Devtron

helm upgrade devtron devtron/devtron-operator -n devtroncd --reuse-values -f https://raw.githubusercontent.com/devtron-labs/devtron/main/charts/devtron/devtron-bom.yaml

Expected Command Output

Build Infra

Introduction

Different applications have different build requirements; therefore, applying a common infra configuration to all applications is not optimal. Since resources incur heavy costs, it's wise to allocate them efficiently (not more, not less).

With the 'Build Infra' feature, Devtron makes it possible for you to tweak the resources as per the needs of your applications. The build (ci-runner) pod will be scheduled on an available node (considering applied taints and tolerations) in the cluster on which 'Devtron' is installed.

Who Can Perform This Action?

Users need to have super-admin permission to configure Build Infra.


Steps to Configure Build Infra

From the left sidebar, go to Global Configurations → Build Infra.

Default Profile

This contains the default infra configuration applicable to all the applications, be it large or small.

You may click it to modify the following:

Furthermore, CPU and Memory have 2 fields each:

  • Request - Use this field to specify the minimum guaranteed amount of CPU/Memory resources your application needs for its CI build. In our example, we required 1500m or 1.5 cores CPU along with 6 GB of RAM.

  • Limit - Use this field to set the maximum amount of CPU/Memory resources the build process can use, even if there is a lot available in the cluster.
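As an illustration only (the limit numbers below are assumptions, not defaults), these values surface on the build (ci-runner) pod as standard Kubernetes resource requests and limits, roughly like this:

resources:
  requests:
    cpu: 1500m      # minimum guaranteed CPU (1.5 cores) for the CI build
    memory: 6Gi     # minimum guaranteed memory
  limits:
    cpu: 3000m      # hard ceiling on CPU, even if the cluster has spare capacity
    memory: 10Gi    # hard ceiling on memory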

Instead of the default profile, you can create custom profiles with different infra configurations, for example, one profile for Python apps, another for large apps, another for small apps, and so on.

  1. Click Create Profile.

  2. Give a name to the profile along with a brief description, and select the configurations to specify the values.

  3. Click Save. Your custom profile will appear under the list of custom profiles as shown below.

Attaching Profile

  1. Go to the Applications tab.

  2. Choose an application and click the dropdown below it.

  3. Choose the profile you wish to apply from the dropdown.

  4. Click Change to apply the profile to your application.

Tip: If you missed creating a profile but selected your application(s), you can use the 'Create Profile' button. This will quickly open a new tab for creating a profile. Once done, you can return and click the refresh icon as shown below.

Performing Bulk Action

If you wish to apply a profile to multiple applications at once, you can do that too.

Simply use the checkboxes to select the applications. You can do this even if there are many applications spanning multiple pages. You will see a draggable floating widget as shown below.

Select the profile you wish to apply from the dropdown and confirm the changes.

Once you apply a profile, it will show the count of applications attached to it.

Editing or Deleting Profile

You can edit or delete a custom profile using the respective icons as shown below.

Need More Options?


Extras

CPU Units

CPU resources are measured in millicore. 1000m or 1000 millicore is equal to 1 core. If a node has 4 cores, the node's CPU capacity would be represented as 4000m.

Memory Units

Memory is measured in bytes. You can enter memory with suffixes (E, P, T, G, M, K, and Ei, Pi, Ti, Gi, Mi, Ki).

Symbol   Prefix   Value (Bytes)
m        -        0.001 byte
byte     -        1 byte
k        Kilo     1,000 bytes
Ki       Kibi     1,024 bytes
M        Mega     1,000,000 bytes
Mi       Mebi     1,048,576 bytes
G        Giga     1,000,000,000 bytes
Gi       Gibi     1,073,741,824 bytes
T        Tera     1,000,000,000,000 bytes
Ti       Tebi     1,099,511,627,776 bytes
P        Peta     1,000,000,000,000,000 bytes
Pi       Pebi     1,125,899,906,842,624 bytes
E        Exa      1,000,000,000,000,000,000 bytes
Ei       Exbi     1,152,921,504,606,846,976 bytes
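For example, a value of 6G equals 6,000,000,000 bytes, whereas 6Gi equals 6 × 1,073,741,824 = 6,442,450,944 bytes, roughly 7% more.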

Timeout Units

You can specify timeouts in the following units, beyond which the build process would be marked as failed:

  • seconds

  • minutes

  • hours

Explore all capabilities of Devtron with its Enterprise version trial.

After installation, refer for further steps, including obtaining the dashboard URL and the admin password.

Refer for more detail.

A verified account on Okta. Okta activates your account only if email verification is successful.

Here's a reference guide to set up your Okta org and application:

OIDC stands for OpenID Connect.

Add a key insecureSkipEmailVerified: true. Note that this key is only required for Okta SSO. For other types of OIDC SSO, refer to the OIDC supported configurations.

Figure 1: User Permissions - Example
Figure 2: User Permissions in Global Configurations
Figure 3: 'Add Users' Button
Figure 4: Adding Email Addresses of Users
Figure 5: Assigning User Group(s)

Super-Admin permission for granting full access.

Specific permissions for granting cherry-picked access.

Figure 6: Granting Specific or Superadmin Access
Figure 7: Granting Superadmin Access

You can revoke a user's super-admin access at any time and restrict it to specific permissions.

Figure 8: Granting Specific Access

(Recommended) Use the dropdown to assign the user to a permission group. Your user will automatically inherit all the permissions to the projects/resources defined for that group. You may select more than one permission group too. Once you select a permission group, assigning direct permissions can be skipped (unless you wish to grant additional permissions). You may also make users Active/Inactive at the permission group level. We recommend using permission groups over direct permissions for easier management of user access.

The 'Devtron Apps' tab will be available only if the CI/CD module is installed.

Figure 9: Granting Devtron Apps Permissions

to learn more about the role you wish to assign the user.

Read:

Configuration approver: These users can approve configuration change requests for Deployment Templates, ConfigMaps, and Secrets. However, users cannot self-approve their own proposed changes, even if they have this role or Super Admin access.

Artifact promoter: These users have the authority to approve the promotion of artifacts directly to the target CD pipeline.

Figure 10: Granting Helm Apps Permissions

to learn more about the permission you wish to assign the user.

Read:

Figure 11: Granting Jobs Permissions

to learn more about the role you wish to assign the user.

Read:

Here you can provide permission to view, inspect, manage, and delete resources in your clusters from Devtron's Resource Browser.

Figure 12a: Adding Permissions for Kubernetes Resources
Figure 12b: Granting Permissions for Kubernetes Resources

to learn more about the role you wish to assign the user.

Read:

The 'Chart Groups' tab will be available only if the CI/CD module is installed.

Figure 13: Granting Chart Group Permissions

Making Users Active/Inactive

Figure 14: Active/Inactive Options

Figure 15: Active/Inactive User
Figure 16: Active/Inactive User from Permission Group

Inactive/Inactivate - Use this option to prevent permissions from the group from taking effect for the user. However, they can still log in/log out of Devtron if they are active at the user level.

Figure 17: Active/Inactive User for Project Access

Inactive/Inactivate - Use this option to revoke the project/resource access from the user. Note: The user will still be able to log in/log out of Devtron if they are active at the user level.

Direct user permissions cannot be edited if you're using LDAP/Microsoft for SSO with 'auto-assign permission' enabled. Permissions can only be managed via permission groups in such a scenario.

Figure 18: Editing User Permissions
Figure 19: Exporting User Data
Figure 20: Deleting a User

Explore all capabilities of Devtron with its Enterprise version trial.

Install Helm, if you have not installed it already.

If you are using EKS version 1.23 or above, you must also install the aws-ebs-csi-driver.

If you want to configure Blob Storage during the installation, refer to the 'Configure Blob Storage during installation' section.

If you want to install Devtron for production deployments, please refer to our recommended overrides for Devtron installation.

Refer to the AWS-specific parameters on the Storage for Logs and Cache page.

Refer to the Azure-specific parameters on the Storage for Logs and Cache page.

Refer to the Google Cloud-specific parameters on the Storage for Logs and Cache page.

If you want to uninstall Devtron or clean up the Devtron Helm installer, refer to our uninstall Devtron guide.

Related to installation, please also refer to the FAQ section.

If you have any questions, please let us know on our Discord channel.

Introduction

The workflows you create in Devtron for managing the CI-CD of your application can be made flexible or restrictive with the help of CD filter conditions. For example, not all events (such as image builds) generated during the CI stage require progression to the CD stage. Therefore, instead of creating multiple workflows to cater to complex requirements, Devtron provides you the option of defining filters to tailor your workflow to your specific needs.

Only images derived from the master branch should be eligible for production deployment (see example)

Figure 1: Creating Our First Filter
Figure 2: 'Define Filter Condition' section

Click View filter criteria to check the supported criteria. You get a copy button and a description of each criterion upon hovering. Moreover, you can go to CEL expression to learn more about the rules and supported syntax. Check the examples below to know more.

Figure 3: List of Supported Values
Figure 4: Selecting Application(s)
Figure 5: Selecting Environment(s) from Cluster(s)
Figure 6: Success Toast

Here's a sample pipeline we will be using for our explanation of the pass condition and fail condition.

Figure 7: Sample Pipeline
Figure 8: CEL Expression for Pass Condition
Figure 9: Build & Deploy tab
Figure 10: List of Eligible Images
Figure 11: List of Latest Images
Figure 12a: Filter Icon
Figure 12b: Conditions Applied
Figure 13: CEL Expression for Fail Condition
Figure 14: Build & Deploy tab
Figure 15: List of Eligible Images
Figure 16: List of Latest Images
Figure 17a: Filter Menu Icon
Figure 17b: Conditions Applied
Command Output

The CI process involves activities that require infra resources such as CPU, memory (RAM), and many more. The amount of resources required depends on the complexity of the application. In other words, large applications require more resources compared to small applications.

Figure 1: Global Configurations - Build Infra

You will see the Default Profile and a list of Custom Profiles (if they exist). Setting up profiles makes it easier for you to manage the build infra configurations, ensuring their reusability in the long term.

Figure 2: Default Profile

CPU - Processor cores allocated to the build process. See CPU Units.

Memory - RAM allocated to the build process. See Memory Units.

Build Timeout - Max. time limit allocated to the build process. See Timeout Units.

Figure 3: Editing Default Profile

Creating Custom Profile

Figure 4: Creating Custom Profile
Figure 5a: Empty Profile
Figure 5b: Filled Profile
Figure 6: Listed Profile

Once you create a profile, attach it to the intended applications, or else the default profile will remain applied.

Figure 7: Applications Tab
Figure 8: Profile Dropdown
Figure 9: Selecting a Profile
Figure 10: Confirming Profile Change
Figure 11: Quick Profile Creation
Figure 12: Floating Widget
Figure 13: Selecting a Profile
Figure 14: Count of Applications
Figure 15: Edit and Delete Icons

If you delete a profile attached to one or more applications, the default profile will apply from the next build.

Figure 16: Confirm Profile Deletion

If you need extra control on the build infra configuration apart from CPU, memory, and build timeout, feel free to open a GitHub issue for us to help you.


Deploy a Sample Application

Hurray! Your Devtron stack is completely set up. Let's get started by deploying a simple application on it.

Find out the steps here

This is a sample Nodejs application which we are going to deploy using Devtron. For a detailed step-wise procedure, please have a look at the link below -

Clone an Existing Application

Click on Create New and then select Custom app to create a new application.

As soon as you click on Custom app, you will get a popup window where you have to enter the app name and project for the application. There are two radio buttons on the popup window: one for Blank app and another for Clone an existing app. For cloning an existing application, select the second one. After this, one more drop-down will appear on the window from which you can select the application that you want to clone. You will have to type a minimum of three characters to see the matching results in the drop-down. After typing the matching characters, select the application that you want to clone. You can also add additional information about the application (e.g., created by, created on) using tags (only key:value allowed).

Key
Description

App Name

Name of the new app you want to create

Project

Project name

Select an app to clone

Select the application that you want to clone

Tags

Additional information about the application

Now click on Clone App to clone the selected application.

New application with a duplicate template is created.

Upgrade to 1.5.0

This document outlines the step-by-step process to be followed before upgrading Devtron to version 1.5.0.

Overview of the Upgrade Process

The upgrade process consists of three sequential Kubernetes jobs:

  1. devtron-pre-upgrade: Prepares the environment for the upgrade.

  2. devtron-upgrade-init: Scales down Devtron and takes the backup.

  3. devtron-upgrade: Performs the restoration of data and scales up Devtron.

After the completion of the above jobs, you may proceed to upgrade Devtron using the UI or command line.


Prerequisites

  • You must have administrative access to the cluster where Devtron is running, along with kubectl configured.

  • PVC creation must not be blocked by any policy. If it is, exclude the devtroncd namespace from it.


Steps

1. Apply the 'pre-upgrade' job

The devtron-pre-upgrade job creates the necessary resources and prepares for the database backup.

# Apply the devtron-pre-upgrade job
kubectl apply -f https://raw.githubusercontent.com/devtron-labs/utilities/refs/heads/main/scripts/postgres-upgrade/devtron-pre-upgrade.yaml

This job will:

  1. Create a ConfigMap named devtron-postgres-upgrade in the devtroncd namespace.

  2. Determine the StorageClass and size of the existing PostgreSQL PVC.

  3. Create a new PVC named devtron-db-upgrade-pvc with additional storage (+5Gi); see the illustrative sketch after this list.

  4. Automatically apply the upgrade-init job.
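For orientation, the PVC created in step 3 would look roughly like the sketch below. This is illustrative only: the job derives the actual StorageClass and size from your existing PostgreSQL PVC (the gp2 class and 25Gi size shown here are assumptions).

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: devtron-db-upgrade-pvc
  namespace: devtroncd
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2      # assumed; copied from the existing PostgreSQL PVC
  resources:
    requests:
      storage: 25Gi          # assumed; existing PVC size (e.g., 20Gi) + 5Gi headroom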

To monitor the progress of this job:

kubectl logs -f job/devtron-pre-upgrade -n devtroncd

Wait for this job to complete successfully before proceeding.

2. Monitor the 'upgrade-init' job

The devtron-upgrade-init job is automatically triggered by the devtron-pre-upgrade job:

  1. It scales down all Devtron components to ensure database consistency.

  2. Terminates active database connections.

  3. Starts the Postgres migration process.

To monitor the progress of this job:

kubectl logs -f job/devtron-upgrade-init -n devtroncd

Ensure this job completes successfully before proceeding to the next step.

3. Apply the 'upgrade' job

Once the backup is confirmed, apply the final upgrade job:

kubectl apply -f https://raw.githubusercontent.com/devtron-labs/utilities/refs/heads/main/scripts/postgres-upgrade/devtron-upgrade.yaml

This job will:

  1. Verify if the devtron-upgrade-init job was successful.

  2. Extract any nodeSelectors or tolerations from the existing PostgreSQL StatefulSet.

  3. Remove PostgreSQL 11 components.

  4. Install PostgreSQL 14 with the same configuration.

  5. Migrate the data.

  6. Scale up all Devtron components.

To monitor the progress of this job:

kubectl logs -f job/devtron-upgrade -n devtroncd

Verify the Upgrade

After the upgrade job completes, verify the PostgreSQL migration:

# Check if all pods are running
kubectl get pods -n devtroncd

# Verify PostgreSQL version (should now be 14)
kubectl get configmap devtron-postgres-upgrade -n devtroncd -o jsonpath="{.data.POSTGRES_MIGRATED}"

The value of POSTGRES_MIGRATED should be "14" if the migration was successful.


Potential Issues and Troubleshooting

Job Failure

  1. If the devtron-upgrade-init or the devtron-upgrade job fails, check the logs of the job and the ConfigMap for error messages:

kubectl get configmap devtron-postgres-upgrade -n devtroncd -o yaml

Look for any entries with "ERROR" in the keys.

  2. To reapply the devtron-upgrade-init job, delete the PVC named devtron-db-upgrade-pvc, recreate it with the same configurations, and then reapply the devtron-upgrade-init job.

  3. If the devtron-upgrade-init job is in a pending state, check for the PVC named devtron-db-upgrade-pvc and ensure that it has been created successfully.


Next Steps

Upgrade Commands

  1. Update the Helm repository

helm repo update
  2. Run the upgrade command for Devtron

helm upgrade devtron devtron/devtron-operator -n devtroncd --reuse-values -f https://raw.githubusercontent.com/devtron-labs/devtron/main/charts/devtron/devtron-bom.yaml

Git Repository

Introduction

Devtron also supports multiple Git repositories (be it from one Git account or multiple Git accounts) in a single deployment.

Therefore, this doc is divided into 2 sections, read the one that caters to your application:


Single Repo Application

Follow the below steps if the source code of your application is hosted on a single Git repository.

In your application, go to App Configuration → Git Repository. You will get the following fields and options:

  1. (Checkboxes)

Git Account

If the authentication type of your Git account is anonymous, only public Git repositories in that account will be accessible. Whereas, adding a user auth or SSH key will make both public and private repositories accessible.

Git Repo URL

In this field, you have to provide your code repository’s URL, for e.g., https://github.com/devtron-labs/django-repo.

You can find this URL by clicking on the Code button available on your repository page as shown below:

  • Copy the HTTPS/SSH portion of the URL too

Exclude specific file/folder in this repo

Devtron allows you to create either an exclusion rule, an inclusion rule, or a combination of both. In case of multiple files or folders, you can list them in new lines.

To exclude a path, use ! as the prefix, e.g. !path/to/file To include a path, don't use any prefix, e.g. path/to/file

Examples

Sample Values
Description

!README.md

Exclusion of a single file in root folder: Commits containing changes made only in README.md file will not be shown

!README.md !index.js

Exclusion of multiple files in root folder: Commits containing changes made only in README.md or/and index.js files will not be shown

README.md

Inclusion of a single file in root folder: Commits containing changes made only in README.md file will be shown. Rest all will be excluded.

!src/extensions/printer/code2.py

Exclusion of a single file in a folder tree: Commits containing changes made specifically to code2.py file will not be shown

!src/*

Exclusion of a single folder and all its files: Commits containing changes made specifically to files within src folder will not be shown

!README.md index.js

Exclusion and inclusion of files: Commits containing changes made only in README.md will not be shown, but commits made in index.js file will be shown. All other commits apart from the aforementioned files will be excluded.

!README.md README.md

Exclusion and inclusion of conflicting files: If conflicting paths are defined in the rule, the one defined later will be considered. In this case, commits containing changes made only in README.md will be shown.

You may use the Learn how link (as shown below) to understand the syntax of defining an exclusion or inclusion rule.

Since file paths can be long, Devtron supports regex too for writing the paths. To understand it better, you may click the How to use link as shown below.

How to view excluded commits?

As we saw earlier in fig. 4 and 5, commits containing the changes of only README.md file were not displayed, since the file was in the exclusion list.

However, Devtron gives you the option to view the excluded commits too. There's a döner menu at the top-right (beside the Search by commit hash search bar).

The EXCLUDED label (in red) indicates that the commits contain changes made only to the excluded file, and hence they are unavailable for build.

Set clone directory

After clicking the checkbox, a field titled clone directory path appears. It is the directory where your code will be cloned for the repository you specified in the previous step.

This field is optional for a single Git repository application and you can leave the path as default. Devtron assigns a directory by itself when the field is left blank. The default value of this field is ./

Pull submodules recursively


Multi Repo Application

Repeat the process for every new git repository you add. The clone directory path is used by Devtron to assign a directory to each of your Git repositories. Devtron will clone your code at those locations and those paths can be referenced in the Docker file to create a Docker image of the application.

Whenever a change is pushed to any of the configured repositories, CI will be triggered and a new Docker image will be built (based on the latest commits of the configured repositories). Next, the image will be pushed to the container registry you configured in Devtron.

Why do you need Multi-Git support?

Let’s look at this with an example:

Due to security reasons, you want to keep sensitive configurations like third-party API keys in separate access-restricted git repositories, and the source code in a Git repository that every developer has access to. To deploy this application, code from both the repositories are required. A Multi-Git support helps you achieve it.

Other examples where you might need Multi-Git support:

  • To make code modularized, where front-end and back-end code are in different repos

  • Common library extracted out in a different repo so that other projects can use it

Build Configuration

In this section, we will provide information on the Build Configuration.

Build configuration is used to create and push docker images in the container registry of your application. You will provide all the docker related information to build and push docker images on the Build Configuration page.

For build configuration, you must provide information in the sections as given below:

Store Container Image

The following fields are provided on the Store Container Image section:

Field
Description

Container Registry

Container Repository

Enter the name of your container repository, preferably in the format username/repo-name. The repository that you specify here will store a collection of related docker images. Whenever an image is added here, it will be stored with a new tag version.

If you are using docker hub account, you need to enter the repository name along with your username. For example - If my username is kartik579 and repo name is devtron-trial, then enter kartik579/devtron-trial instead of only devtron-trial.

Build the Container Image

In order to deploy the application, we must build the container images to configure a fully operational container environment.

You can choose one of the following options to build your container image:

  • I have a Dockerfile

  • Create Dockerfile

  • Build without Dockerfile

Build Docker Image when you have a Dockerfile

A Dockerfile is a text document that contains all the commands which you can call on the command line to build an image.

Field
Description

Select repository containing Dockerfile

Dockerfile Path (Relative)

Enter the relative file path where your Dockerfile is located in the Git repository. Ensure that the Dockerfile is available at this path. This is a mandatory field.

Build Docker Image by creating Dockerfile

With the option Create Dockerfile, you can create a Dockerfile from the available templates. You can edit any selected Dockerfile template as per your build configuration requirements.

Field
Description

Language

Select the programming language (e.g., Java, Go, Python, Node, etc.) for which you want to create a Dockerfile from the drop-down list, as per compatibility with your system. Note: We will be adding other programming languages in future releases.

Framework

Select the framework (e.g., Maven, Gradle, etc.) of the selected programming language. Note: We will be adding other frameworks in future releases.

Build Docker Image without Dockerfile

With the option Build without Dockerfile, you can use Buildpacks to automatically build the image for your preferred language and framework.

Field
Description

Select repository containing code

Project Path (Relative)

In case of monorepo, specify the path of the project from your Git repository.

Language

Select the programming language (e.g., Java, Go, Python, Node, Ruby, PHP, etc.) for which you want to build your container image from the drop-down list, as per compatibility with your system. Note: We will be adding other programming languages in future releases.

Version

Select a language version from the drop-down list. If you do not find the version you need, you can update the language version in Build Env Arguments. You can also select Autodetect if you want the Builder to detect the version by itself or use its default version.

Select a builder

A builder is an image that contains a set of buildpacks which provide your app's dependencies, a stack, and the OS layer for your app image. Select a buildpack provider from the following options:

Build Env Arguments

You can add Key/Value pair by clicking Add argument.

Field
Description

Key

Value

Define the value for the specified key. E.g. Version no.

Advanced Options

Set Target Platform for the build

Using this option, you can build images for a specific or multiple architectures and operating systems (target platforms). You can select the target platform from the drop-down list or can type to select a customized target platform.

Before selecting a customized target platform, please ensure that the architecture and the operating system are supported by the registry type you are using, otherwise the build will fail. Devtron uses BuildX to build images for multiple target platforms, which requires higher CI worker resources. To allocate more resources, you can increase the values of the following parameters in the devtron-cm ConfigMap in the devtroncd namespace.

  • LIMIT_CI_CPU

  • REQ_CI_CPU

  • REQ_CI_MEM

  • LIMIT_CI_MEM

To edit the devtron-cm configmap in devtroncd namespace:

kubectl edit configmap devtron-cm -n devtroncd 
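Once the ConfigMap opens, the relevant keys look roughly as follows. The values shown are purely illustrative; tune them to your cluster's capacity.

data:
  REQ_CI_CPU: "1.0"        # CPU requested by the CI worker (illustrative)
  LIMIT_CI_CPU: "2.0"      # CPU limit for the CI worker (illustrative)
  REQ_CI_MEM: "3Gi"        # memory requested by the CI worker (illustrative)
  LIMIT_CI_MEM: "6Gi"      # memory limit for the CI worker (illustrative)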

If the target platform is not set, Devtron will build the image for the architecture and operating system of the K8s node on which CI is running.

The Target Platform feature might not work in minikube & microk8s clusters as of now.

Docker Build Arguments

It is a collapsed view including the following parameters:

  • Key

  • Value

Click Save Configuration.

Deployment

This chart creates a deployment that runs multiple replicas of your application and automatically replaces any instances that fail or become unresponsive. It does not support Blue/Green and Canary deployments. This is the default deployment chart. You can select Deployment chart when you want to use only basic use cases which contain the following:

  • Create a Deployment to rollout a ReplicaSet. The ReplicaSet creates Pods in the background. Check the status of the rollout to see if it succeeds or not.

  • Declare the new state of the Pods. A new ReplicaSet is created and the Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the revision of the Deployment.

  • Rollback to an earlier Deployment revision if the current state of the Deployment is not stable. Each rollback updates the revision of the Deployment.

  • Scale up the Deployment to facilitate more load.

  • Use the status of the Deployment as an indicator that a rollout has stuck.

  • Clean up older ReplicaSets that you do not need anymore.

You can define application behavior by providing information in the following sections:

Key
Descriptions

Chart version

Basic (GUI)

Advanced (YAML)

Show application metrics


Advanced (YAML)

Container Ports

This defines ports on which application services will be exposed to other services

ContainerPort:
  - envoyPort: 8799
    idleTimeout:
    name: app
    port: 8080
    servicePort: 80
    nodePort: 32056
    supportStreaming: true
    useHTTP2: true
Key
Description

envoyPort

envoy port for the container

idleTimeout

the duration of time that a connection is idle before the connection is terminated

name

name of the port

port

port for the container

servicePort

port of the corresponding kubernetes service

nodePort

nodeport of the corresponding kubernetes service

supportStreaming

Used for high performance protocols like grpc where timeout needs to be disabled

useHTTP2

Envoy container can accept HTTP2 requests

EnvVariables

EnvVariables: []

To set environment variables for the containers that run in the Pod.
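A populated example (the variable names and values are illustrative) might look like:

EnvVariables:
  - name: APP_ENV       # illustrative variable name
    value: production
  - name: LOG_LEVEL
    value: info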

EnvVariablesFromFieldPath

EnvVariablesFromFieldPath:
- name: ENV_NAME
  fieldPath: status.podIP (example)

To set environment variables for the containers and fetching their values from pod-level fields.

Liveness Probe

If this check fails, Kubernetes restarts the pod. This should return an error code in case of a non-recoverable error.

LivenessProbe:
  Path: ""
  port: 8080
  initialDelaySeconds: 20
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
  failureThreshold: 3
  httpHeaders:
    - name: Custom-Header
      value: abc
  scheme: ""
  tcp: true
Key
Description

Path

It defines the path where the liveness needs to be checked

initialDelaySeconds

It defines the time to wait before a given container is checked for liveness

periodSeconds

It defines how often (in seconds) the liveness check is performed

successThreshold

It defines the number of successes required before a given container is said to fulfill the liveness probe

timeoutSeconds

It defines the time for checking timeout

failureThreshold

It defines the maximum number of failures that are acceptable before a given container is not considered as live

httpHeaders

Custom headers to set in the request. HTTP allows repeated headers, you can override the default headers by defining .httpHeaders for the probe.

scheme

Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to HTTP.

tcp

The kubelet will attempt to open a socket to your container on the specified port. If it can establish a connection, the container is considered healthy.

MaxUnavailable

  MaxUnavailable: 0

The maximum number of pods that can be unavailable during the update process. The value of "MaxUnavailable: " can be an absolute number or percentage of the replicas count. The default value of "MaxUnavailable: " is 25%.

MaxSurge

MaxSurge: 1

The maximum number of pods that can be created over the desired number of pods. For "MaxSurge: " also, the value can be an absolute number or percentage of the replicas count. The default value of "MaxSurge: " is 25%.
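Both keys accept either an absolute number or a percentage; for instance (illustrative values):

MaxUnavailable: 25%   # at most a quarter of the replicas may be unavailable during the rollout
MaxSurge: 2           # at most 2 extra pods may be created above the desired count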

Min Ready Seconds

MinReadySeconds: 60

This specifies the minimum number of seconds for which a newly created Pod should be ready without any of its containers crashing, for it to be considered available. This defaults to 0 (the Pod will be considered available as soon as it is ready).

Readiness Probe

If this check fails, Kubernetes stops sending traffic to the application. This should return an error code in case of errors that can be recovered from if traffic is stopped.

ReadinessProbe:
  Path: ""
  port: 8080
  initialDelaySeconds: 20
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
  failureThreshold: 3
  httpHeaders:
    - name: Custom-Header
      value: abc
  scheme: ""
  tcp: true
Key
Description

Path

It defines the path where the readiness needs to be checked

initialDelaySeconds

It defines the time to wait before a given container is checked for readiness

periodSeconds

It defines how often (in seconds) the readiness check is performed

successThreshold

It defines the number of successes required before a given container is said to fulfill the readiness probe

timeoutSeconds

It defines the time for checking timeout

failureThreshold

It defines the maximum number of failures that are acceptable before a given container is not considered as ready

httpHeaders

Custom headers to set in the request. HTTP allows repeated headers, you can override the default headers by defining .httpHeaders for the probe.

scheme

Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to HTTP.

tcp

The kubelet will attempt to open a socket to your container on the specified port. If it can establish a connection, the container is considered healthy.

Pod Disruption Budget

You can create a PodDisruptionBudget for each application. A PDB limits the number of pods of a replicated application that are down simultaneously from voluntary disruptions. For example, an application would like to ensure the number of replicas running is never brought below a certain number.

podDisruptionBudget: 
     minAvailable: 1

or

podDisruptionBudget: 
     maxUnavailable: 50%

You can specify either maxUnavailable or minAvailable in a PodDisruptionBudget and it can be expressed as integers or as a percentage.

Key
Description

minAvailable

Evictions are allowed as long as they leave behind 1 or more healthy pods of the total number of desired replicas.

maxUnavailable

Evictions are allowed as long as there are at most the specified number of unavailable replicas among the total number of desired replicas.

Ambassador Mappings

You can create ambassador mappings to access your applications from outside the cluster. At its core a Mapping resource maps a resource to a service.

ambassadorMapping:
  ambassadorId: "prod-emissary"
  cors: {}
  enabled: true
  hostname: devtron.example.com
  labels: {}
  prefix: /
  retryPolicy: {}
  rewrite: ""
  tls:
    context: "devtron-tls-context"
    create: false
    hosts: []
    secretName: ""
Key
Description

enabled

Set true to enable ambassador mapping else set false

ambassadorId

used to specify id for specific ambassador mappings controller

cors

used to specify cors policy to access host for this mapping

weight

used to specify weight for canary ambassador mappings

hostname

used to specify hostname for ambassador mapping

prefix

used to specify path for ambassador mapping

labels

used to provide custom labels for ambassador mapping

retryPolicy

used to specify retry policy for ambassador mapping

corsPolicy

Provide cors headers on flagger resource

rewrite

used to specify whether to redirect the path of this mapping and where

tls

used to create or define ambassador TLSContext resource

extraSpec

used to provide extra spec values which not present in deployment template for ambassador resource

Autoscaling

This is connected to HPA and controls scaling up and down in response to request load.

autoscaling:
  enabled: false
  MinReplicas: 1
  MaxReplicas: 2
  TargetCPUUtilizationPercentage: 90
  TargetMemoryUtilizationPercentage: 80
  extraMetrics: []
Key
Description

enabled

Set true to enable autoscaling else set false

MinReplicas

Minimum number of replicas allowed for scaling

MaxReplicas

Maximum number of replicas allowed for scaling

TargetCPUUtilizationPercentage

The target CPU utilization that is expected for a container

TargetMemoryUtilizationPercentage

The target memory utilization that is expected for a container

extraMetrics

Used to give external metrics for autoscaling

Flagger

You can use flagger for canary releases with deployment objects. It supports flexible traffic routing with istio service mesh as well.

flaggerCanary:
  addOtherGateways: []
  addOtherHosts: []
  analysis:
    interval: 15s
    maxWeight: 50
    stepWeight: 5
    threshold: 5
  annotations: {}
  appProtocol: http
  corsPolicy:
    allowCredentials: false
    allowHeaders:
      - x-some-header
    allowMethods:
      - GET
    allowOrigin:
      - example.com
    maxAge: 24h
  createIstioGateway:
    annotations: {}
    enabled: false
    host: example.com
    labels: {}
    tls:
      enabled: false
      secretName: example-tls-secret
  enabled: false
  gatewayRefs: null
  headers:
    request:
      add:
        x-some-header: value
  labels: {}
  loadtest:
    enabled: true
    url: http://flagger-loadtester.istio-system/
  match:
    - uri:
        prefix: /
  port: 8080
  portDiscovery: true
  retries: null
  rewriteUri: /
  targetPort: 8080
  thresholds:
    latency: 500
    successRate: 90
  timeout: null
Key
Description

enabled

Set true to enable canary releases using flagger else set false

addOtherGateways

To provide multiple istio gateways for flagger

addOtherHosts

Add multiple hosts for istio service mesh with flagger

analysis

Define how the canary release should progress and at what interval

annotations

Annotation to add on flagger resource

labels

Labels to add on flagger resource

appProtocol

Protocol to use for canary

corsPolicy

Provide cors headers on flagger resource

createIstioGateway

Set to true if you want to create istio gateway as well with flagger

headers

Add headers if any

loadtest

Enable load testing for your canary release

Fullname Override

fullnameOverride: app-name

fullnameOverride replaces the release fullname created by default by devtron, which is used to construct Kubernetes object names. By default, devtron uses {app-name}-{environment-name} as release fullname.

Image

image:
  pullPolicy: IfNotPresent

Image is used to access images in Kubernetes. pullPolicy defines when the image should be pulled; here the image is pulled only when it is not already present, and it can also be set to "Always".

imagePullSecrets

imagePullSecrets contains the docker credentials that are used for accessing a registry.

imagePullSecrets:
  - regcred

serviceAccount

serviceAccount:
  create: false
  name: ""
  annotations: {}
Key
Description

create

Determines whether to create a ServiceAccount for pods or not. If set to true, a ServiceAccount will be created.

name

Specifies the name of the ServiceAccount to use.

annotations

Specify annotations for the ServiceAccount.

HostAliases

The hostAliases field is used in a Pod specification to associate additional hostnames with the Pod's IP address. This can be helpful in scenarios where you need to resolve specific hostnames to the Pod's IP within the Pod itself.

  hostAliases:
  - ip: "192.168.1.10"
    hostnames:
    - "hostname1.example.com"
    - "hostname2.example.com"
  - ip: "192.168.1.11"
    hostnames:
    - "hostname3.example.com"

Ingress

This allows public access to the URL. Please ensure you are using the right nginx annotation for the nginx class; its default value is nginx.

ingress:
  enabled: false
  # For K8s 1.19 and above use ingressClassName instead of annotation kubernetes.io/ingress.class:
  className: nginx
  annotations: {}
  hosts:
      - host: example1.com
        paths:
            - /example
      - host: example2.com
        paths:
            - /example2
            - /example2/healthz
  tls: []
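The tls list follows the standard Kubernetes Ingress TLS format. A populated sketch (the secret name and host are illustrative) could look like:

tls:
  - secretName: example1-tls     # Kubernetes secret containing the certificate and key
    hosts:
      - example1.com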

Legacy deployment-template ingress format

ingress:
  enabled: false
  # For K8s 1.19 and above use ingressClassName instead of annotation kubernetes.io/ingress.class:
  ingressClassName: nginx-internal
  annotations: {}
  path: ""
  host: ""
  tls: []
Key
Description

enabled

Enable or disable ingress

annotations

To configure some options depending on the Ingress controller

path

Path name

host

Host name

tls

It contains security details

Ingress Internal

This allows private access to the URL. Please ensure you are using the right nginx annotation for the nginx class; its default value is nginx.

ingressInternal:
  enabled: false
  # For K8s 1.19 and above use ingressClassName instead of annotation kubernetes.io/ingress.class:
  ingressClassName: nginx-internal
  annotations: {}
  hosts:
      - host: example1.com
        paths:
            - /example
      - host: example2.com
        paths:
            - /example2
            - /example2/healthz
  tls: []
Key
Description

enabled

Enable or disable ingress

annotations

To configure some options depending on the Ingress controller

path

Path name

host

Host name

tls

It contains security details

Init Containers

initContainers: 
  - reuseContainerImage: true
    securityContext:
      runAsUser: 1000
      runAsGroup: 3000
      fsGroup: 2000
    volumeMounts:
     - mountPath: /etc/ls-oms
       name: ls-oms-cm-vol
    command:
      - flyway
      - -configFiles=/etc/ls-oms/flyway.conf
      - migrate

  - name: nginx
    image: nginx:1.14.2
    securityContext:
      privileged: true
    ports:
    - containerPort: 80
    command: ["/usr/local/bin/nginx"]
    args: ["-g", "daemon off;"]

Specialized containers that run before app containers in a Pod. Init containers can contain utilities or setup scripts not present in an app image. One can use base image inside initContainer by setting the reuseContainerImage flag to true.

Pause For Seconds Before Switch Active

pauseForSecondsBeforeSwitchActive: 30

To wait for the given period of time before switching the container to active.

Resources

These define minimum and maximum RAM and CPU available to the application.

resources:
  limits:
    cpu: "1"
    memory: "200Mi"
  requests:
    cpu: "0.10"
    memory: "100Mi"

Resources are required to set CPU and memory usage.

Limits

Limits make sure a container never goes above a certain value. The container is only allowed to go up to the limit, and then it is restricted.

Requests

Requests are what the container is guaranteed to get.

Service

This defines annotations and the type of service; optionally, a name can also be defined.

  service:
    type: ClusterIP
    annotations: {}
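For instance, a populated service block (the custom name and the cloud-specific annotation below are illustrative, and the exact key used for a custom name may vary) could look like:

service:
  type: LoadBalancer
  name: my-app-svc                 # optional custom service name (illustrative)
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb   # example cloud-specific annotation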

Volumes

volumes:
  - name: log-volume
    emptyDir: {}
  - name: logpv
    persistentVolumeClaim:
      claimName: logpvc

It is required when some values need to be read from or written to an external disk.

Volume Mounts

volumeMounts:
  - mountPath: /var/log/nginx/
    name: log-volume 
  - mountPath: /mnt/logs
    name: logpvc
    subPath: employee  

It is used to provide mounts to the volume.

Affinity and anti-affinity

Spec:
  Affinity:
    Key:
    Values:

Spec is used to define the desired state of the given container.

Node Affinity allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels of the node.

Inter-pod affinity allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels of pods already running on those nodes.

Key

Key part of the label for node selection; this should be the same as that on the node. Please confirm with your DevOps team.

Values

Value part of the label for node selection; this should be the same as that on the node. Please confirm with your DevOps team.

Tolerations

tolerations:
 - key: "key"
   operator: "Equal"
   value: "value"
   effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"

Taints are the opposite: they allow a node to repel a set of pods.

A pod can be scheduled on a tainted node only if it has a toleration matching that taint.

Taints and tolerations are a mechanism that work together to ensure that pods are not placed on inappropriate nodes. Taints are added to nodes, while tolerations are defined in the pod specification. When you taint a node, it will repel all the pods except those that have a toleration for that taint. A node can have one or many taints associated with it.

Arguments

args:
  enabled: false
  value: []

This is used to give arguments to command.

Command

command:
  enabled: false
  value: []

It contains the commands for the server.

Key
Description

enabled

To enable or disable the command

value

It contains the commands
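A populated sketch combining the command and args sections (the values are illustrative) might look like:

command:
  enabled: true
  value: ["/bin/sh", "-c"]               # illustrative entrypoint override
args:
  enabled: true
  value: ["echo starting && exec my-app"] # illustrative argument passed to the command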

Containers

The Containers section can be used to run sidecar containers along with your main container within the same pod. Containers running within the same pod can share volumes and IP address, and can address each other at localhost. You can use the base image inside a container by setting the reuseContainerImage flag to true.

    containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        command: ["/usr/local/bin/nginx"]
        args: ["-g", "daemon off;"]
      - reuseContainerImage: true
        securityContext:
          runAsUser: 1000
          runAsGroup: 3000
          fsGroup: 2000
        volumeMounts:
        - mountPath: /etc/ls-oms
          name: ls-oms-cm-vol
        command:
          - flyway
          - -configFiles=/etc/ls-oms/flyway.conf
          - migrate

Container Lifecycle Hooks

Container lifecycle hooks are mechanisms that allow users to define custom actions to be performed at specific stages of a container's lifecycle i.e. PostStart or PreStop.

containerSpec:
  lifecycle:
    enabled: false
    postStart:
      httpGet:
        host: example.com
        path: /example
        port: 90
    preStop:
      exec:
        command:
          - sleep
          - "10"
Key
Description

containerSpec

containerSpec to define container lifecycle hooks configuration

lifecycle

Lifecycle hooks for the container

enabled

Set true to enable lifecycle hooks for the container else set false

postStart

The postStart hook is executed immediately after a container is created

httpGet

Sends an HTTP GET request to a specific endpoint on the container

host

Specifies the host (example.com) to which the HTTP GET request will be sent

path

Specifies the path (/example) of the endpoint to which the HTTP GET request will be sent

port

Specifies the port (90) on the host where the HTTP GET request will be sent

preStop

The preStop hook is executed just before the container is stopped

exec

Executes a specific command, such as pre-stop.sh, inside the cgroups and namespaces of the container

command

The command to be executed is sleep 10, which tells the container to sleep for 10 seconds before it is stopped

Prometheus

  prometheus:
    release: monitoring

Prometheus is a Kubernetes monitoring tool. The release key specifies the Prometheus release (monitoring in the given case) that will monitor this application.

rawYaml

rawYaml: 
  - apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      selector:
        app: MyApp
      ports:
        - protocol: TCP
          port: 80
          targetPort: 9376
      type: ClusterIP

Accepts an array of Kubernetes objects. You can specify any kubernetes yaml here and it will be applied when your app gets deployed.

Grace Period

GracePeriod: 30

Kubernetes waits for the specified time called the termination grace period before terminating the pods. By default, this is 30 seconds. If your pod usually takes longer than 30 seconds to shut down gracefully, make sure you increase the GracePeriod.

A Graceful termination in practice means that your application needs to handle the SIGTERM message and begin shutting down when it receives it. This means saving all data that needs to be saved, closing down network connections, finishing any work that is left, and other similar tasks.

There are many reasons why Kubernetes might terminate a perfectly healthy container. If you update your deployment with a rolling update, Kubernetes slowly terminates old pods while spinning up new ones. If you drain a node, Kubernetes terminates all pods on that node. If a node runs out of resources, Kubernetes terminates pods to free those resources. It’s important that your application handle termination gracefully so that there is minimal impact on the end user and the time-to-recovery is as fast as possible.

Server

server:
  deployment:
    image_tag: 1-95a53
    image: ""

It is used for providing server configurations.

Deployment

It gives the details for deployment.

Key
Description

image_tag

It is the image tag

image

It is the URL of the image

Service Monitor

servicemonitor:
      enabled: true
      path: /abc
      scheme: 'http'
      interval: 30s
      scrapeTimeout: 20s
      metricRelabelings:
        - sourceLabels: [namespace]
          regex: '(.*)'
          replacement: myapp
          targetLabel: target_namespace

It gives the set of targets to be monitored.

Db Migration Config

dbMigrationConfig:
  enabled: false

It is used to configure database migration.

Istio

These Istio configurations collectively provide a comprehensive set of tools for controlling access, authenticating requests, enforcing security policies, and configuring traffic behavior within a microservices architecture. The specific settings you choose would depend on your security and traffic management requirements.

istio:
  enable: true

  gateway:
    enabled: true
    labels:
      app: my-gateway
    annotations:
      description: "Istio Gateway for external traffic"
    host: "example.com"
    tls:
      enabled: true
      secretName: my-tls-secret

  virtualService:
    enabled: true
    labels:
      app: my-service
    annotations:
      description: "Istio VirtualService for routing"
    gateways:
      - my-gateway
    hosts:
      - "example.com"
    http:
      - match:
          - uri:
              prefix: /v1
        route:
          - destination:
              host: my-service-v1
              subset: version-1
      - match:
          - uri:
              prefix: /v2
        route:
          - destination:
              host: my-service-v2
              subset: version-2

  destinationRule:
    enabled: true
    labels:
      app: my-service
    annotations:
      description: "Istio DestinationRule for traffic policies"
    subsets:
      - name: version-1
        labels:
          version: "v1"
      - name: version-2
        labels:
          version: "v2"
    trafficPolicy:
      connectionPool:
        tcp:
          maxConnections: 100
      outlierDetection:
        consecutiveErrors: 5
        interval: 30s
        baseEjectionTime: 60s

  peerAuthentication:
    enabled: true
    labels:
      app: my-service
    annotations:
      description: "Istio PeerAuthentication for mutual TLS"
    selector:
      matchLabels:
        version: "v1"
    mtls:
      mode: STRICT
    portLevelMtls:
      8080:
        mode: DISABLE

  requestAuthentication:
    enabled: true
    labels:
      app: my-service
    annotations:
      description: "Istio RequestAuthentication for JWT validation"
    selector:
      matchLabels:
        version: "v1"
    jwtRules:
      - issuer: "issuer-1"
        jwksUri: "https://issuer-1/.well-known/jwks.json"

  authorizationPolicy:
    enabled: true
    labels:
      app: my-service
    annotations:
      description: "Istio AuthorizationPolicy for access control"
    action: ALLOW
    provider:
      name: jwt
      kind: Authorization
    rules:
      - from:
          - source:
              requestPrincipals: ["*"]
        to:
          - operation:
              methods: ["GET"]
Key
Description

istio

Istio enablement. When istio.enable is set to true, Istio would be enabled for the specified configurations

authorizationPolicy

It allows you to define access control policies for service-to-service communication.

action

Determines whether to ALLOW or DENY the request based on the defined rules.

provider

Authorization providers are external systems or mechanisms used to make access control decisions.

rules

List of rules defining the authorization policy. Each rule can specify conditions and requirements for allowing or denying access.

destinationRule

It allows for the fine-tuning of traffic policies and load balancing for specific services. You can define subsets of a service and apply different traffic policies to each subset.

subsets

Specifies subsets within the service for routing and load balancing.

trafficPolicy

Policies related to connection pool size, outlier detection, and load balancing.

gateway

Allowing external traffic to enter the service mesh through the specified configurations.

host

The external domain through which traffic will be routed into the service mesh.

tls

Traffic to and from the gateway should be encrypted using TLS.

secretName

Specifies the name of the Kubernetes secret that contains the TLS certificate and private key. The TLS certificate is used for securing the communication between clients and the Istio gateway.

peerAuthentication

It allows you to enforce mutual TLS and control the authentication between services.

mtls

Mutual TLS. Mutual TLS is a security protocol that requires both client and server, to authenticate each other using digital certificates for secure communication.

mode

Mutual TLS mode, specifying how mutual TLS should be applied. Modes include STRICT, PERMISSIVE, and DISABLE.

portLevelMtls

Configures port-specific mTLS settings. Allows for fine-grained control over the application of mutual TLS on specific ports.

selector

Configuration for selecting workloads to apply PeerAuthentication.

requestAuthentication

Defines rules for authenticating incoming requests.

jwtRules

Rules for validating JWTs (JSON Web Tokens). It defines how incoming JWTs should be validated for authentication purposes.

selector

Specifies the conditions under which the RequestAuthentication rules should be applied.

virtualService

Enables the definition of rules for how traffic should be routed to different services within the service mesh.

gateways

Specifies the gateways to which the rules defined in the VirtualService apply.

hosts

List of hosts (domains) to which this VirtualService is applied.

http

Configuration for HTTP routes within the VirtualService. It defines routing rules based on HTTP attributes such as URI prefixes, headers, timeouts, and retry policies.

KEDA Autoscaling

An example of autoscaling with KEDA using Prometheus metrics is given below:

kedaAutoscaling:
  enabled: true
  minReplicaCount: 1
  maxReplicaCount: 2
  idleReplicaCount: 0
  pollingInterval: 30
  advanced:
    restoreToOriginalReplicaCount: true
    horizontalPodAutoscalerConfig:
      behavior:
        scaleDown:
          stabilizationWindowSeconds: 300
          policies:
          - type: Percent
            value: 100
            periodSeconds: 15
  triggers: 
    - type: prometheus
      metadata:
        serverAddress:  http://<prometheus-host>:9090
        metricName: http_request_total
        query: envoy_cluster_upstream_rq{appId="300", cluster_name="300-0", container="envoy",}
        threshold: "50"
  triggerAuthentication:
    enabled: false
    name:
    spec: {}
  authenticationRef: {}

An example of autoscaling with KEDA based on Kafka is given below:

kedaAutoscaling:
  enabled: true
  minReplicaCount: 1
  maxReplicaCount: 2
  idleReplicaCount: 0
  pollingInterval: 30
  advanced: {}
  triggers: 
    - type: kafka
      metadata:
        bootstrapServers: b-2.kafka-msk-dev.example.c2.kafka.ap-southeast-1.amazonaws.com:9092,b-3.kafka-msk-dev.example.c2.kafka.ap-southeast-1.amazonaws.com:9092,b-1.kafka-msk-dev.example.c2.kafka.ap-southeast-1.amazonaws.com:9092
        topic: Orders-Service-ESP.info
        lagThreshold: "100"
        consumerGroup: oders-remove-delivered-packages
        allowIdleConsumers: "true"
  triggerAuthentication:
    enabled: true
    name: keda-trigger-auth-kafka-credential
    spec:
      secretTargetRef:
        - parameter: sasl
          name: keda-kafka-secrets
          key: sasl
        - parameter: username
          name: keda-kafka-secrets
          key: username
  authenticationRef: 
    name: keda-trigger-auth-kafka-credential

NetworkPolicy

Kubernetes NetworkPolicies control pod communication by defining rules for incoming and outgoing traffic.

networkPolicy:
  enabled: false
  annotations: {}
  labels: {}
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
        - namespaceSelector:
            matchLabels:
              project: myproject
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
Key
Description

enabled

Enable or disable NetworkPolicy.

annotations

Additional metadata or information associated with the NetworkPolicy.

labels

Labels to apply to the NetworkPolicy.

podSelector

Each NetworkPolicy includes a podSelector which selects the grouping of pods to which the policy applies. The example policy selects pods with the label "role=db". An empty podSelector selects all pods in the namespace.

policyTypes

Each NetworkPolicy includes a policyTypes list which may include either Ingress, Egress, or both.

Ingress

Controls incoming traffic to pods.

Egress

Controls outgoing traffic from pods.

Winter-Soldier

Winter Soldier can be used to:

  • clean up (delete) Kubernetes resources

  • reduce workload pods to 0

Given below are the template values you can provide in winter-soldier:

winterSoldier:
  enabled: false
  apiVersion: pincher.devtron.ai/v1alpha1
  action: sleep
  timeRangesWithZone:
    timeZone: "Asia/Kolkata"
    timeRanges: []
  targetReplicas: []
  fieldSelector: []
Key
values
Description

enabled

false,true

Decides whether winter-soldier is enabled.

apiVersion

pincher.devtron.ai/v1beta1, pincher.devtron.ai/v1alpha1

Specifies the API version to use.

action

sleep, delete, scale

Specifies the action that needs to be performed.

timeRangesWithZone:timeZone

eg:- "Asia/Kolkata","US/Pacific"

timeRangesWithZone:timeRanges

array of [ timeFrom, timeTo, weekdayFrom, weekdayTo]

It is used to define the time period/range during which the specified action should be performed. You can have multiple timeRanges.

targetReplicas

[n] : n - number of replicas to scale.

This is a mandatory field when the action is scale. Default value is [].

fieldSelector

- AfterTime(AddTime( ParseTime({{metadata.creationTimestamp}}, '2006-01-02T15:04:05Z'), '5m'), Now())

It takes a list of expressions used to select the resources on which the specified action is performed.

Here is an example:

winterSoldier:
  apiVersion: pincher.devtron.ai/v1alpha1 
  enabled: true
  annotations: {}
  labels: {}
  timeRangesWithZone:
    timeZone: "Asia/Kolkata"
    timeRanges: 
      - timeFrom: 00:00
        timeTo: 23:59:59
        weekdayFrom: Sat
        weekdayTo: Sun
      - timeFrom: 00:00
        timeTo: 08:00
        weekdayFrom: Mon
        weekdayTo: Fri
      - timeFrom: 20:00
        timeTo: 23:59:59
        weekdayFrom: Mon
        weekdayTo: Fri
  action: scale
  targetReplicas: [1,1,1]
  fieldSelector: 
    - AfterTime(AddTime( ParseTime({{metadata.creationTimestamp}}, '2006-01-02T15:04:05Z'), '10h'), Now())

The above settings will take action on Sat and Sun from 00:00 to 23:59:59, and on Mon-Fri from 00:00 to 08:00 and from 20:00 to 23:59:59. If action: sleep, it hibernates the workloads at timeFrom and un-hibernates them at timeTo. If action: delete, it deletes the workloads at timeFrom and timeTo. Here the action is scale, so it scales the number of resource replicas to targetReplicas: [1,1,1]. Each element of the targetReplicas array is mapped to the corresponding element of the timeRangesWithZone/timeRanges array, so make sure the lengths of both arrays are equal; otherwise the changes will not be observed.

The above example will select the application objects which have been created 10 hours ago across all namespaces, excluding the application's namespace. Winter soldier exposes the following functions to handle time, CPU, and memory.

  • ParseTime - This function can be used to parse time. For eg to parse creationTimestamp use ParseTime({{metadata.creationTimestamp}}, '2006-01-02T15:04:05Z')

  • AddTime - This can be used to add time. For example, AddTime(ParseTime({{metadata.creationTimestamp}}, '2006-01-02T15:04:05Z'), '-10h') will add -10h to the time (i.e., 10 hours earlier). Use d for days, h for hours, m for minutes and s for seconds. Use a negative number to get an earlier time.

  • Now - This can be used to get current time.

  • CpuToNumber / MemoryToNumber - These can be used to compare CPU and memory values. For example, any({{spec.containers.#.resources.requests}}, { MemoryToNumber(.memory) < MemoryToNumber('60Mi')}) will check if any container's memory request is less than 60Mi. See the sketch below for how these functions fit into fieldSelector.
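
As an illustration, the functions above can be combined inside fieldSelector. The sketch below simply reuses the expressions already shown in this section:

winterSoldier:
  enabled: true
  apiVersion: pincher.devtron.ai/v1alpha1
  action: sleep
  fieldSelector:
    # objects created at least 5 minutes ago (same pattern as the 10h example above)
    - AfterTime(AddTime(ParseTime({{metadata.creationTimestamp}}, '2006-01-02T15:04:05Z'), '5m'), Now())
    # objects where any container's memory request is below 60Mi
    - any({{spec.containers.#.resources.requests}}, { MemoryToNumber(.memory) < MemoryToNumber('60Mi')})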

Security Context

A security context defines privilege and access control settings for a Pod or Container.

To add a security context for main container:

containerSecurityContext:
  allowPrivilegeEscalation: false

To add a security context on pod level:

podSecurityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000

Topology Spread Constraints

You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. This can help to achieve high availability as well as efficient resource utilization.

topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone
    whenUnsatisfiable: DoNotSchedule
    autoLabelSelector: true
    customLabelSelector: {}

Deployment Metrics

It gives real-time metrics of the deployed applications.

Key
Description

Deployment Frequency

It shows how often this app is deployed to production

Change Failure Rate

It shows how often the respective pipeline fails

Mean Lead Time

It shows the average time taken to deliver a change to production

Mean Time to Recovery

It shows the average time taken to fix a failed pipeline


4. Show Application Metrics

If you want to see application metrics such as HTTP status codes, application throughput, latency, and response time, enable Application Metrics using the toggle below the Save button of the deployment template. After enabling it, you will be able to see all the metrics on the App Details page. By default, it remains disabled.

Helm Chart Json Schema

Other Validations in Json Schema

The values of CPU and Memory in limits must be greater than or equal to the corresponding values in requests. Similarly, in the case of envoyproxy, the values in limits must be greater than or equal to those in requests, as shown below.

resources.limits.cpu >= resources.requests.cpu
resources.limits.memory >= resources.requests.memory
envoyproxy.resources.limits.cpu >= envoyproxy.resources.requests.cpu
envoyproxy.resources.limits.memory >= envoyproxy.resources.requests.memory
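
For instance, the following values (illustrative numbers only, not recommendations) satisfy these validations:

resources:
  requests:
    cpu: "0.05"
    memory: 50Mi
  limits:
    cpu: "0.10"      # >= requests.cpu
    memory: 100Mi    # >= requests.memory
envoyproxy:
  resources:
    requests:
      cpu: 50m
      memory: 50Mi
    limits:
      cpu: 50m
      memory: 50Mi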

Workflow Editor

Workflow is a logical sequence of different stages used for continuous integration and continuous deployment of an application.

Click on New Build Pipeline to create a new workflow

On clicking New Build Pipeline, three options appear as mentioned below:

  • Continuous Integration: Choose this option if you want Devtron to build the image of source code.

  • Linked CI Pipeline: Choose this option if you want to use an image created by an existing CI pipeline in Devtron.

  • Incoming Webhook: Choose this if you want to build your image outside Devtron, it will receive a docker image from an external source via the incoming webhook.

Then, create CI/CD Pipelines for your application.

Job and Cronjob

This chart deploys Job & CronJob. A Job is a controller object that represents a finite task and CronJob is used to schedule the creation of Jobs.

1. Job

A Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the task (i.e., the Job) is complete. Deleting a Job will clean up the Pods it created. Suspending a Job will delete its active Pods until the Job is resumed again.

Example:

2. CronJob

A CronJob creates jobs on a repeating schedule. One Cronjob object is like one line of a crontab (cron table) file. It runs a job periodically on a given schedule, written in Cron format. CronJobs are meant for performing regular scheduled actions such as backups, report generation, and so on. Each task must be configured to recur indefinitely (as an example: once a day / week / month). You can schedule the time within that interval when the job should start.

Example:


When cloning an application with GitOps configuration, the configuration itself is not copied. To set up the configuration for your new application, refer guide.

Ensure that you have and that at least one backup has been pushed successfully. to know more about the backups chart.

Once the database migration is complete, you can proceed with upgrading the Devtron application through the UI as mentioned in the final message of the upgrade job. Alternatively, you may use the mentioned below.

During the CI process, the application source code is pulled from your git repository.

Figure 1: Adding Git Repository

This is a dropdown that shows the list of Git accounts added to your organization on Devtron. If you haven't done already, we recommend you to first (especially when the repository is private).

Figure 2: Selecting Git Account
Figure 3: Getting Repo URL

Make sure you've added your in the repo

Not all repository changes are worth triggering a new . If you enable this checkbox, you can define the file(s) or folder(s) whose commits you wish to use in the CI build.

Figure 4: Sample Exclusion Rule

In other words, if a given commit contains changes only in file(s) present in your exclusion rule, the commit won't show up while selecting the , which means it will not be eligible for build. However, if a given commit contains changes in other files too (along with the excluded file), the commit won't be excluded and it will definitely show up in the list of commits.

Figure 5: Excludes commits made to README.md
Figure 6: 'Learn how' Button
Figure 7: Regex Support
Figure 8a: Döner Menu Icon
Figure 8b: Show Excluded Commits
Figure 8c: Commits Unavailable for Build
Figure 8: Clone Directory Option

This checkbox is optional and is used for pulling present in a repo. The submodules will be pulled recursively, and the auth method used for the parent repo will be used for submodules too.

As discussed earlier, Devtron also supports multiple git repositories in a single application. To add multiple repositories, click Add Git Repository and repeat all the steps as mentioned in . However, ensure that the clone directory paths are unique for each repo.

Even if you add multiple repositories, only one image will be created based on the Dockerfile as shown in the

Only one docker image can be created for multi-git repository applications as explained in the section.

Select the container registry from the drop-down list or you can click Add Container Registry. This registry will be used to .

Select the Git checkout path of your repository. This repository is the same which you defined on the section.

Select your code repository. This repository is the same which you defined on the section.

Heroku: It compiles your deployed code and creates a slug, which is a compressed and pre-packaged copy of your app and also the runtime which is optimized for distribution to the dyno (Linux containers) manager. .

GCR: GCR builder is a general purpose builder that creates container images designed to run on most platforms (e.g. Kubernetes / Anthos, Knative / Cloud Run, Container OS, etc.). It auto-detects the language of your source code, and can also build functions compatible with the Google Cloud Function Framework. .

Paketo: Paketo buildpacks provide production-ready buildpacks for the most popular languages and frameworks to easily build your apps. Based on your application needs, you can select from Full, Base and Tiny. .

Define the key parameter as per your selected language and builder. E.g., By default GOOGLE_RUNTIME_VERSION for GCR buildpack. Note: If you want to define env arguments for PHP and Ruby languages after selecting Heroku builder, please make sure to refer respective and documentation for runtime information.

Note This fields are optional. If required, it can be overridden at .

Select target platform from drop-down
Select custom target platform

These fields will contain the key parameter and the value for the specified key for your . This field is Optional. If required, this can be overridden at .

Select the Chart Version using which you want to deploy the application. Refer section for more detail.

You can perform a basic deployment configuration for your application in the Basic (GUI) section instead of configuring the YAML file. Refer section for more detail.

If you want to do additional configurations, then click Advanced (YAML) for modifications. Refer section for more detail.

You can enable Show application metrics to see your application's metrics-CPU Service Monitor usage, Memory Usage, Status, Throughput and Latency. Refer for more detail.

Super-admins can lock keys in deployment template to prevent non-super-admins from modifying those locked keys. Refer to know more.

regcred is the secret that contains the docker credentials that are used for accessing a registry. Devtron will not create this secret automatically, you'll have to create this secret using dt-secrets helm chart in the App store or create one using kubectl. You can follow this documentation Pull an Image from a Private Registry .
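
For reference, such a secret is commonly created with kubectl as shown below; the registry server and credentials are placeholders you must replace:

# creates a docker-registry type secret named regcred in the current namespace
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>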

is a Kubernetes-based Event Driven Autoscaler. With KEDA, you can drive the scaling of any container in Kubernetes based on the number of events needing to be processed. KEDA can be installed into any Kubernetes cluster and can work alongside standard Kubernetes components like the Horizontal Pod Autoscaler(HPA).

NOTE: After deploying this we can create the Hibernator object and provide the custom configuration by which workloads going to delete, sleep and many more. for more information check

It is used to specify the timeZone to be used. (It uses the standard timezone format.)

Once all the Deployment template configurations are done, click on Save to save your deployment configuration. Now you are ready to create to do CI/CD.

Helm Chart json schema is used to validate the deployment template values.

To know how to create the CI pipeline for your application, click on:

To know how to create the CD pipeline for your application, click on:

Key
Description
Key
Description

Super-admins can lock keys in Job & CronJob deployment template to prevent non-super-admins from modifying those locked keys. Refer to know more.

kind: Job
jobConfigs:
    activeDeadlineSeconds: 120
    backoffLimit: 6
    completions: 1
    parallelism: 1
    suspend: false
    ttlSecondsAfterFinished: 100

activeDeadlineSeconds

Another way to terminate a Job is by setting an active deadline. Do this by setting the activeDeadlineSeconds field of the Job to a number of seconds. The activeDeadlineSeconds applies to the duration of the job, no matter how many Pods are created. Once a Job reaches activeDeadlineSeconds, all of its running Pods are terminated and the Job status will become type: Failed with reason: DeadlineExceeded.

backoffLimit

There are situations where you want to fail a Job after some amount of retries due to a logical error in configuration etc. To do so, set backoffLimit to specify the number of retries before considering a Job as failed. The back-off limit is set by default to 6. Failed Pods associated with the Job are recreated by the Job controller with an exponential back-off delay (10s, 20s, 40s ...) capped at six minutes. The back-off count is reset when a Job's Pod is deleted or successful without any other Pods for the Job failing around that time.

completions

Jobs with fixed completion count - that is, jobs that have non-null completions - can have a completion mode that is specified in completionMode.

parallelism

The requested parallelism can be set to any non-negative value. If it is unspecified, it defaults to 1. If it is specified as 0, then the Job is effectively paused until it is increased.

suspend

The suspend field is also optional. If it is set to true, all subsequent executions are suspended. This setting does not apply to already started executions. Defaults to false.

ttlSecondsAfterFinished

The TTL controller only supports Jobs for now. A cluster operator can use this feature to clean up finished Jobs (either Complete or Failed) automatically by specifying the ttlSecondsAfterFinished field of a Job, as in this example. The TTL controller will assume that a resource is eligible to be cleaned up TTL seconds after the resource has finished, in other words, when the TTL has expired. When the TTL controller cleans up a resource, it will delete it cascadingly, that is to say it will delete its dependent objects together with it. Note that when the resource is deleted, its lifecycle guarantees, such as finalizers, will be honored.

kind

As with all other Kubernetes config, a Job and a CronJob need apiVersion and kind. The kind field specifies which object (Job or CronJob) should be deployed; it is optional and, by default, it is set to Job.

kind: CronJob
cronjobConfigs:
    concurrencyPolicy: Allow
    failedJobsHistoryLimit: 1
    restartPolicy: OnFailure
    schedule: 32 8 * * *
    startingDeadlineSeconds: 100
    successfulJobsHistoryLimit: 3
    suspend: false

concurrencyPolicy

A CronJob is counted as missed if it has failed to be created at its scheduled time. For example, if concurrencyPolicy is set to Forbid and a CronJob was attempted to be scheduled when a previous schedule was still running, then it would count as missed. Acceptable values: Allow / Forbid.

failedJobsHistoryLimit

The failedJobsHistoryLimit fields are optional. These fields specify how many completed and failed jobs should be kept. By default, they are set to 3 and 1 respectively. Setting a limit to 0 corresponds to keeping none of the corresponding kind of jobs after they finish.

restartPolicy

The spec of a Pod has a restartPolicy field with possible values Always, OnFailure, and Never. The default value is Always. The restartPolicy applies to all containers in the Pod. restartPolicy only refers to restarts of the containers by the kubelet on the same node. After containers in a Pod exit, the kubelet restarts them with an exponential back-off delay (10s, 20s, 40s, ...), capped at five minutes. Once a container has executed for 10 minutes without any problems, the kubelet resets the restart backoff timer for that container. Acceptable values: Always / OnFailure / Never.

schedule

To generate Cronjob schedule expressions, you can also use web tools like https://crontab.guru/.

startingDeadlineSeconds

If startingDeadlineSeconds is set to a large value or left unset (the default) and if concurrencyPolicy is set to Allow, the jobs will always run at least once.

successfulJobsHistoryLimit

The successfulJobsHistoryLimit fields are optional. These fields specify how many completed and failed jobs should be kept. By default, they are set to 3 and 1 respectively. Setting a limit to 0 corresponds to keeping none of the corresponding kind of jobs after they finish.

suspend

The suspend field is also optional. If it is set to true, all subsequent executions are suspended. This setting does not apply to already started executions. Defaults to false.

kind

As with all other Kubernetes config, a Job and a CronJob need apiVersion and kind. The kind field specifies which object (Job or CronJob) should be deployed; it is optional and, by default, it is set to CronJob.


App Details

Access an External Link

  1. Select Applications from the left navigation pane.

  2. After selecting a configured application, select the App Details tab.

Note: If you enable App admins can edit on the External Links page, then only non-super admin users can view the selected links on the App-Details page.

As shown in the screenshot, the external links appear on the App-Details level:

  1. You can hover around an external link (e.g. Grafana) to view the description.

Manage External Links

On the App Configuration page, select External Links from the navigation pane. You can see the configured external links which can be searched, edited or deleted.

You can also Add Link to add a new external link.

Ingress Host URL

You can view the Ingress Host URL and the Load Balancer URL in the URLs section of the App Details page. You can also copy the Ingress Host URL from the URLs section instead of searching for it in the Manifest.

  1. Select Applications from the left navigation pane.

  2. After selecting your configured application, select the App Details.

  3. Click URLs.

  4. You can view or copy the URL of the Ingress Host.

Note:

  • The Ingress Host URL will point to the load balancer of your application.

  • You can also view the Service name with the load balancer detail.

The users can access the configured external links on the App Details page.

The link opens in a new tab with the context you specified as env variables in the Add an external link section.


Using Ephemeral Containers

Introduction

An ephemeral container is a special type of container that runs temporarily in an existing Pod to accomplish user-initiated actions such as troubleshooting. It is especially useful when kubectl exec is insufficient because a container has crashed or a container image doesn't include debugging utilities.

For instance, ephemeral containers help you execute a curl request from within pods that typically lack this utility.

Ephemeral containers are turned on by default in Kubernetes v1.23 and later


How to Launch an Ephemeral Container

Wherever you can access pod resources in Devtron, you can launch an ephemeral container as shown below.

From Devtron (App Details)

  1. In the left sidebar, go to Applications.

  2. Search and click your application from the list of Devtron Apps.

  3. Go to the App Details tab.

  4. Under the K8s Resources tab, select Pod inside Workloads.

  5. Locate the pod you wish to debug, hover over it, and click Terminal.

  6. Click Launch Ephemeral Container as shown below.

  7. You get 2 tabs:

    1. Basic - It provides the bare minimum configurations required to launch an ephemeral container.

    It contains 3 mandatory fields:

    • Container name prefix - Type a prefix to give to your ephemeral container, for e.g., debug. Your container name would look like debug-jndvs.

    • Image - Choose an image to run from the dropdown. Ephemeral containers need an image to run and provide the capability to debug, such as curl. You can use a custom image too.

    • Target Container name - Since a pod can have one or more containers, choose a target container you wish to debug, from the dropdown.

Devtron ignores the `command` field while launching an ephemeral container.

  8. Click Launch Container.

From Devtron (Resource Browser)

From Devtron's Cluster Terminal

(This is not a recommended method. This option is available only if you are an admin.)

You can launch an ephemeral container from the Kubernetes CLI. For this, you need access to the cluster terminal on Devtron.
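
A typical invocation looks like the one below; the pod name, debug image, and target container are placeholders:

# launches an interactive ephemeral container targeting an existing container in the pod
kubectl debug -it <pod-name> --image=busybox:1.36 --target=<container-name>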


Removing an Ephemeral Container

You can remove an ephemeral container using either App Details or Resource Browser (from the same screen you used to create the ephemeral container).

You cannot use App Details or Resource Browser to remove an ephemeral container created using Kubernetes CLI

Debugging Deployment And Monitoring

If the deployment of your application is not successful, then debugging needs to be done to check the cause of the error.

This can be done through the App Details section, which you can access in the following way:

Applications->AppName->App Details

Here, you can see the status of the app as Healthy. If there are errors with the deployment, the status would not be in a Healthy state.

Events

Events of the application are accessible from the bottom left corner.

The Events section displays the events that took place during the deployment of an app. These events are available only for 15 minutes after the deployment of the application.

Logs

Logs contain the logs of the deployed Pods and Containers, which you can use for debugging.

Manifest

The Manifest shows critical information such as the container image, restartCount, state, phase, podIP, startTime, etc., along with the status of the deployed pods.

Deleting Pods

You might run into a situation where you need to delete Pods. You may need to bounce or restart a pod.

Deleting a Pod is a simple task; it can be done by clicking Delete Pod.

Suppose you want to set up a new environment: you can delete a pod, and thereafter a new pod will be created automatically depending upon the replica count.

Application Objects

You can view Application Objects in this section of App Details, such as:

Monitoring

You can monitor the application in the App Details section.

Metrics like CPU Usage, Memory Usage, Throughput and Latency can be viewed here.

Figure 1: Opening a Terminal
Figure 2: Launching an Ephemeral Container
Figure 3: Basic View

Advanced - It is particularly useful for advanced users who wish to use labels or annotations since it provides additional key-value options. Refer Ephemeral Container Spec to view the supported options.

Figure 4: Advanced View

Click here to know more.

Figure 5: Removing Ephemeral Container from App Details
Key
Description
Key
Description

Workloads

ReplicaSet (ensures how many replicas of a pod should be running), Status of Pod (status of the Pod)

Networking

Service (an abstraction which defines a logical set of Pods), Endpoints (names of the endpoints that implement a Service), Ingress (API object that manages external access to the services in a cluster)

Config & Storage

ConfigMap (API object used to store non-confidential data in key-value pairs)

Custom Resource

Rollout (new Pods will be scheduled on Nodes with available resources), ServiceMonitor (specifies how groups of services should be monitored)

CPU Usage

Percentage of CPU's cycles used by the app.

Memory Usage

Amount of memory used by the app.

Throughput

Performance of the app.

Latency

Delay caused while transmitting the data.