Make sure Global Configuration > GitOps is configured before moving ahead.
A deployment configuration is a manifest for the application. It defines the runtime behavior of the application.
Devtron includes deployment templates for both default charts and custom charts created by a super admin.
To configure a deployment chart for your application:

1. Go to Applications and create a new application.
2. Go to the App Configuration page and configure your application.
3. On the Deployment Template page, select the drop-down under Chart type.

You can select a chart in one of the following ways:

* Deployment (Recommended)
* Knative
Custom charts are added by a super admin from the custom charts section.
Users can select the available custom charts from the drop-down list.
A custom chart can be uploaded by a super admin.
Enable the Show application metrics toggle to view the application metrics on the App Details page.
IMPORTANT: Enabling application metrics adds a sidecar container alongside your main container, which may require some additional configuration adjustments. We recommend running a load test after enabling it in a non-production environment before enabling it in production.
Select Save to save your configurations.
This chart creates a deployment that runs multiple replicas of your application and automatically replaces any instances that fail or become unresponsive. It does not support Blue/Green and Canary deployments.
This is the default deployment chart. Select the Deployment chart when you only need the following basic use cases:
Create a Deployment to rollout a ReplicaSet. The ReplicaSet creates Pods in the background. Check the status of the rollout to see if it succeeds or not.
Declare the new state of the Pods. A new ReplicaSet is created and the Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the revision of the Deployment.
Rollback to an earlier Deployment revision if the current state of the Deployment is not stable. Each rollback updates the revision of the Deployment.
Scale up the Deployment to facilitate more load.
Use the status of the Deployment as an indicator that a rollout has stuck.
Clean up older ReplicaSets that you do not need anymore.
You can define application behavior by providing information in the following sections:

| Key | Descriptions |
| --- | --- |
| Chart version | Select the chart version using which you want to deploy the application. Refer to the Chart Version section for more detail. |
| Basic Configuration | Select the basic deployment configuration for your application in the Basic GUI section instead of configuring the YAML file. Refer to the Basic Configuration section for more detail. |
| Advanced (YAML) | If you want to do additional configurations, click Advanced (YAML) for modifications. Refer to the Advanced (YAML) section for more detail. |
| Show application metrics | Enable Show application metrics to see your application's metrics: CPU usage, memory usage, status, throughput, and latency. Refer to Application Metrics for more detail. |
Deployment configuration is the manifest for the application; it defines the runtime behavior of the application. You can define application behavior by providing information in three sections:

* Chart Version
* Yaml file
* Show application metrics
Devtron uses Helm charts for deployments, and multiple chart versions are available, each supporting a different set of features. You can see the available chart versions in the drop-down and select any chart version as per your requirements. By default, the latest version of the Helm chart is selected.
Every chart version has its own YAML file. Helm charts are used to provide specifications for your application. To make it easy to use, we have created templates for the YAML file and have added some variables inside the YAML. You can provide or change the values of these variables as per your requirement.
If you want to see Application Metrics (For example Status codes 2xx, 3xx, 5xx; throughput, and latency) for your application, then you need to select the latest chart version.
Application Metrics is not supported for chart versions older than 3.7.
This defines the ports on which the application services will be exposed to other services.
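A minimal sketch of how this list typically looks in the deployment template values, assuming the chart's ContainerPort list; port numbers and the name are illustrative:

```yaml
ContainerPort:
  - name: app                 # name of the port
    port: 8080                # port the container listens on
    servicePort: 80           # port of the corresponding Kubernetes service
    envoyPort: 8799           # port used by the Envoy sidecar when metrics are enabled
    supportStreaming: false   # set true for streaming protocols such as gRPC where timeouts must be disabled
    useHTTP2: false           # set true so the Envoy container accepts HTTP2 requests
```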
EnvVariables provide runtime information to containers and allow you to customize how the application works and behaves on the system. Here you can pass a list of environment variables; every record is an object that contains the name of the variable along with its value.
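For example, the list might look like this (variable names and values are illustrative):

```yaml
EnvVariables:
  - name: FLASK_ENV        # name of the environment variable
    value: production      # value passed to the container at runtime
  - name: LOG_LEVEL
    value: info
```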
Use this to set environment variables for the containers that run in the Pod. IMP: the Docker image should support the environment variables you want to set. ConfigMap and Secret are the preferred way to inject environment variables, and you can create them in the App Configuration section.
A ConfigMap is centralized storage, specific to a Kubernetes namespace, where key-value pairs are stored in plain text. A Secret is centralized storage, specific to a Kubernetes namespace, where key-value pairs can be stored in plain text as well as in Base64-encoded form.
IMP: all key-values of the Secret and ConfigMap will be reflected in your application.
If this check fails, Kubernetes restarts the pod. The probe should return an error code in case of a non-recoverable error.
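A minimal sketch, assuming an HTTP health endpoint at /health; key casing (LivenessProbe vs livenessProbe) can vary across chart versions, and the keys follow the livenessProbe table later in this section. The ReadinessProbe block described below follows the same shape:

```yaml
LivenessProbe:
  Path: /health              # path checked for liveness
  port: 8080                 # container port used for the probe
  initialDelaySeconds: 20    # wait before the first check
  periodSeconds: 10          # how often to check
  timeoutSeconds: 5          # probe timeout
  successThreshold: 1        # successes required to be considered live
  failureThreshold: 3        # failures tolerated before the pod is restarted
```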
The maximum number of pods that can be unavailable during the update process. The value of "MaxUnavailable: " can be an absolute number or percentage of the replicas count. The default value of "MaxUnavailable: " is 25%.
The maximum number of pods that can be created over the desired number of pods. For "MaxSurge: " also, the value can be an absolute number or percentage of the replicas count. The default value of "MaxSurge: " is 25%.
This specifies the minimum number of seconds for which a newly created Pod should be ready without any of its containers crashing, for it to be considered available. This defaults to 0 (the Pod will be considered available as soon as it is ready).
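Taken together, these rolling-update settings typically appear in the values like this (the numbers are illustrative):

```yaml
MaxUnavailable: 0     # never take an old pod down before a replacement is ready
MaxSurge: 1           # allow one extra pod above the desired replica count during rollout
MinReadySeconds: 60   # a new pod must stay ready this long before it counts as available
```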
If this check fails, Kubernetes stops sending traffic to the application. The probe should return an error code for errors that can be recovered from once traffic is stopped.
This is connected to HPA and controls scaling up and down in response to request load.
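A sketch of the autoscaling block, assuming the key names listed in the autoscaling table later in this section; replica counts and thresholds are illustrative:

```yaml
autoscaling:
  enabled: true
  MinReplicas: 1                         # lower bound for the HPA
  MaxReplicas: 4                         # upper bound for the HPA
  TargetCPUUtilizationPercentage: 90     # scale out when average CPU crosses this
  TargetMemoryUtilizationPercentage: 80  # scale out when average memory crosses this
```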
fullnameOverride replaces the release fullname created by default by devtron, which is used to construct Kubernetes object names. By default, devtron uses {app-name}-{environment-name} as the release fullname.
image is used to access images in Kubernetes, and pullPolicy defines when the image is pulled. Here the image is pulled only when it is not already present; it can also be set to "Always".
imagePullSecrets contains the Docker credentials that are used for accessing a registry. regcred is the secret that contains these Docker credentials. Devtron will not create this secret automatically; you have to create it using the dt-secrets helm chart in the App store or create one using kubectl. You can follow this documentation: Pull an Image from a Private Registry https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ .
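For example (assuming a pre-existing secret named regcred, as described above):

```yaml
image:
  pullPolicy: IfNotPresent   # pull only when the image is not present locally; can be set to Always
imagePullSecrets:
  - regcred                  # secret holding the registry credentials; must already exist
```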
This allows public access to the URL. Please ensure you are using the right nginx annotation for the nginx class; its default value is nginx.
Legacy deployment-template ingress format
This allows private access to the URL. Please ensure you are using the right nginx annotation for the nginx class; its default value is nginx.
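A hedged sketch of a public ingress block; the internal (private) ingress follows the same shape, exact nesting can vary across chart versions, and the host and path values here are illustrative:

```yaml
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx   # use the class annotation matching your controller
  hosts:
    - host: app.example.com              # illustrative host
      pathType: ImplementationSpecific
      paths:
        - /app
  tls: []                                # add TLS secrets here if HTTPS is required
```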
Specialized containers that run before app containers in a Pod. Init containers can contain utilities or setup scripts not present in an app image. You can use the base image inside an initContainer by setting the reuseContainerImage flag to true.
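A sketch of the initContainers list; the container names, images, and commands are illustrative, while reuseContainerImage is the flag described above:

```yaml
initContainers:
  # plain init container with its own image
  - name: wait-for-db
    image: busybox:1.36
    command: ["sh", "-c", "until nc -z my-db 5432; do sleep 2; done"]
  # reuse the application's base image for the init step
  - reuseContainerImage: true
    command: ["sh", "-c", "./migrate.sh"]
```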
Used to wait for a given period of time before switching the container to active.
These define minimum and maximum RAM and CPU available to the application.
Resources are required to set CPU and memory usage.
Limits make sure a container never goes above a certain value. The container is only allowed to go up to the limit, and then it is restricted.
Requests are what the container is guaranteed to get.
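For example (values are illustrative):

```yaml
resources:
  limits:
    cpu: "0.1"        # the container may never exceed these
    memory: 100Mi
  requests:
    cpu: "0.05"       # the container is guaranteed at least these
    memory: 50Mi
```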
This defines annotations and the type of service; optionally, a name can also be defined.
Note - If loadBalancerSourceRanges is not set, Kubernetes allows traffic from 0.0.0.0/0 to the LoadBalancer / Node Security Group(s).
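A sketch of the service block; the CIDR below is illustrative:

```yaml
service:
  type: LoadBalancer
  annotations: {}                 # e.g. cloud-provider specific settings
  loadBalancerSourceRanges:       # restrict which CIDRs may reach the load balancer
    - 203.0.113.0/24
```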
It is required when some values need to be read from or written to an external disk.
It is used to provide mounts to the volume.
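A sketch of a volume and its mount, assuming the chart passes these through as standard Kubernetes fields; the names and mount path are illustrative:

```yaml
volumes:
  - name: log-volume
    emptyDir: {}          # could also be a PVC, configMap, or secret volume
volumeMounts:
  - name: log-volume
    mountPath: /var/log   # where the volume appears inside the container
```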
Spec is used to define the desired state of the given container.
Node Affinity allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels of the node.
Inter-pod affinity allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels on pods already running on those nodes.
Key part of the label for node selection; this should be the same as the label key on the node. Please confirm with the DevOps team.
Value part of the label for node selection; this should be the same as the label value on the node. Please confirm with the DevOps team.
Taints are the opposite: they allow a node to repel a set of pods.
A pod can be scheduled onto a node with a given taint only if the pod has a matching toleration for that taint.
Taints and tolerations are a mechanism which work together that allows you to ensure that pods are not placed on inappropriate nodes. Taints are added to nodes, while tolerations are defined in the pod specification. When you taint a node, it will repel all the pods except those that have a toleration for that taint. A node can have one or many taints associated with it.
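A sketch of a toleration matching a node taint; the key, value, and effect are illustrative and must match the taint applied to the node:

```yaml
tolerations:
  - key: dedicated        # must match the taint key on the node
    operator: Equal
    value: app-team       # must match the taint value
    effect: NoSchedule    # the taint effect this pod tolerates
```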
This is used to give arguments to the command.
It contains the commands to run inside the container.
The containers section can be used to run side-car containers along with your main container within the same pod. Containers running within the same pod can share volumes and IP address and can address each other at localhost.
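A sketch combining these three blocks; the enabled/value keys for command follow the command table later in this section, while the args shape, the side-car name, and the image are assumptions for illustration:

```yaml
args:
  enabled: true
  value: ["--config", "/etc/app/config.yaml"]   # arguments appended to the command
command:
  enabled: true
  value: ["./entrypoint.sh"]                    # command run inside the container
containers:                                     # side-car running alongside the main container
  - name: log-shipper
    image: fluent/fluent-bit:2.1
    volumeMounts:
      - name: log-volume
        mountPath: /var/log
```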
Prometheus is a Kubernetes monitoring tool; here you specify the Prometheus release to be used (named monitoring in the given case).
Accepts an array of Kubernetes objects. You can specify any kubernetes yaml here and it will be applied when your app gets deployed.
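A sketch, assuming the chart exposes this list under a rawYaml key; the Service object below is purely illustrative, and any valid Kubernetes YAML can be placed here:

```yaml
rawYaml:
  - apiVersion: v1
    kind: Service
    metadata:
      name: my-extra-service
    spec:
      type: ClusterIP
      selector:
        app: my-app
      ports:
        - port: 80
          targetPort: 8080
```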
Kubernetes waits for the specified time, called the termination grace period, before terminating the pods. By default, this is 30 seconds. If your pod usually takes longer than 30 seconds to shut down gracefully, make sure you increase the GracePeriod.
A Graceful termination in practice means that your application needs to handle the SIGTERM message and begin shutting down when it receives it. This means saving all data that needs to be saved, closing down network connections, finishing any work that is left, and other similar tasks.
There are many reasons why Kubernetes might terminate a perfectly healthy container. If you update your deployment with a rolling update, Kubernetes slowly terminates old pods while spinning up new ones. If you drain a node, Kubernetes terminates all pods on that node. If a node runs out of resources, Kubernetes terminates pods to free those resources. It’s important that your application handle termination gracefully so that there is minimal impact on the end user and the time-to-recovery is as fast as possible.
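For example, to give the application a full minute to shut down:

```yaml
GracePeriod: 60   # seconds Kubernetes waits after SIGTERM before force-killing the pod
```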
It is used for providing server configurations.
It gives the details for deployment.
It gives the set of targets to be monitored.
It is used to configure database migration.
Application metrics can be enabled to see your application's metrics: CPU usage, memory usage, status, throughput, and latency. It gives the real-time metrics of the deployed applications.
A service account provides an identity for the processes that run in a Pod.
When you access the cluster, you are authenticated by the API server as a particular User Account. Processes in containers inside a pod can also contact the API server; when they do, they are authenticated as a particular Service Account.
When you create a pod, if you do not create a service account, it is automatically assigned the default service account in the namespace.
You can create a PodDisruptionBudget for each application. A PDB limits the number of pods of a replicated application that are down simultaneously from voluntary disruptions. For example, an application may want to ensure the number of replicas running never drops below a certain number.
You can specify maxUnavailable and minAvailable in a PodDisruptionBudget.
With minAvailable of 1, evictions are allowed as long as they leave behind 1 or more healthy pods of the total number of desired replicas.
With maxUnavailable of 1, evictions are allowed as long as at most 1 replica among the total number of desired replicas is unavailable.
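A sketch, assuming the chart exposes a podDisruptionBudget block; use only one of the two fields:

```yaml
podDisruptionBudget:
  minAvailable: 1        # keep at least one healthy pod during voluntary disruptions
  # maxUnavailable: 1    # alternatively, cap how many pods may be down at once
```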
Envoy is attached as a sidecar to the application container to collect metrics like 4XX, 5XX, Throughput and latency. You can now configure the envoy settings such as idleTimeout, resources etc.
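A sketch of the Envoy sidecar settings mentioned above; the exact key nesting can differ by chart version, and the timeout and resource values are illustrative:

```yaml
envoyproxy:
  idleTimeout: 1800s       # how long an idle connection is kept open
  resources:
    limits:
      cpu: 50m
      memory: 50Mi
    requests:
      cpu: 50m
      memory: 50Mi
```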
Alerting rules allow you to define alert conditions based on Prometheus expressions and to send notifications about firing alerts to an external service.
In this case, Prometheus will check that the alert continues to be active during each evaluation for 1 minute before firing the alert. Elements that are active, but not firing yet, are in the pending state.
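A standard Prometheus alerting rule illustrating the 1-minute pending window described above; the metric, expression, and threshold are illustrative:

```yaml
groups:
  - name: example-alerts
    rules:
      - alert: HighErrorRate
        expr: sum(rate(http_requests_total{status=~"5.."}[5m])) > 0.05   # illustrative expression
        for: 1m                      # must stay active for 1 minute before firing
        labels:
          severity: critical
        annotations:
          summary: High 5xx error rate
```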
Labels are key/value pairs that are attached to pods. Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system. Labels can be used to organize and to select subsets of objects.
Pod Annotations are widely used to attach metadata and configs in Kubernetes.
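For example, assuming the chart exposes podLabels and podAnnotations keys; the label and annotation values are illustrative:

```yaml
podLabels:
  team: payments                  # identifying label, usable in selectors
podAnnotations:
  prometheus.io/scrape: "true"    # metadata consumed by other tooling
```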
HPA, by default, is configured to work with CPU and memory metrics. These metrics are useful for internal cluster sizing, but you might want to configure a wider set of metrics like service latency, I/O load, etc. Custom metrics in HPA help you achieve this.
Wait for a given period of time before scaling down the container.
If you want to see application metrics such as HTTP status code counts, application throughput, latency, and response time, enable Application metrics from below the deployment template Save button. After enabling it, you should be able to see all metrics on the App Details page. By default it remains disabled.
Once all the deployment template configurations are done, click Save to save your deployment configuration. Now you are ready to create a workflow to do CI/CD.
The Helm chart JSON schema is used to validate the deployment template values. The values of CPU and memory in limits must be greater than or equal to the corresponding values in requests. Similarly, in the case of envoyproxy, the values of limits must be greater than or equal to requests.
KEDA is a Kubernetes-based Event Driven Autoscaler. With KEDA, you can drive the scaling of any container in Kubernetes based on the number of events needing to be processed. KEDA can be installed into any Kubernetes cluster and can work alongside standard Kubernetes components like the Horizontal Pod Autoscaler(HPA).
An example of autoscaling with KEDA using Prometheus metrics is given below:
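A hedged sketch; the kedaAutoscaling key name, Prometheus address, query, and threshold are assumptions for illustration:

```yaml
kedaAutoscaling:
  enabled: true
  minReplicaCount: 1
  maxReplicaCount: 4
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090        # illustrative Prometheus address
        metricName: http_requests_total
        query: sum(rate(http_requests_total{app="my-app"}[2m])) # illustrative query
        threshold: "50"                                          # scale out above this value
```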
An example of autoscaling with KEDA based on Kafka is given below:
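A hedged sketch using KEDA's Kafka scaler; the broker address, topic, consumer group, and lag threshold are illustrative:

```yaml
kedaAutoscaling:
  enabled: true
  minReplicaCount: 1
  maxReplicaCount: 8
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: kafka.kafka-ns.svc:9092   # illustrative broker address
        topic: orders
        consumerGroup: orders-consumer
        lagThreshold: "50"                          # scale out when consumer lag exceeds this
```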
A security context defines privilege and access control settings for a Pod or Container.
To add a security context for the main container:
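A sketch, assuming the chart exposes a containerSecurityContext key; the user ID and flags are illustrative:

```yaml
containerSecurityContext:
  runAsNonRoot: true
  runAsUser: 1000
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
```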
To add a security context on pod level:
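A sketch, assuming the chart exposes a podSecurityContext key; the IDs are illustrative:

```yaml
podSecurityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000      # group ownership applied to mounted volumes
```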
You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. This can help to achieve high availability as well as efficient resource utilization.
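A sketch using the standard Kubernetes fields for a zone-level spread; the selector label is illustrative and the chart may fill it in automatically:

```yaml
topologySpreadConstraints:
  - maxSkew: 1                          # allowed difference in pod count between domains
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule    # or ScheduleAnyway for a soft constraint
    labelSelector:
      matchLabels:
        app: my-app                     # illustrative selector
```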
A Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the task (i.e., the Job) is complete. Deleting a Job will clean up the Pods it created. Suspending a Job will delete its active Pods until the Job is resumed again.
A CronJob creates Jobs on a repeating schedule. One CronJob object is like one line of a crontab (cron table) file: it runs a Job periodically on a given schedule, written in Cron format. CronJobs are meant for performing regular scheduled actions such as backups, report generation, and so on. Each of those tasks should be configured to recur indefinitely (for example: once a day / week / month); you can define the point in time within that interval when the job should start. The configuration keys for CronJob and Job are listed in the tables at the end of this section.
| Key | Descriptions |
| --- | --- |
| Chart Version | Select the chart version using which you want to deploy the application. |
| envoyPort | Envoy port for the container. |
| envoyTimeout | Envoy timeout for the container. Envoy supports a wide range of timeouts that may need to be configured depending on the deployment. By default the Envoy timeout is 15s. |
| idleTimeout | The duration of time that a connection is idle before the connection is terminated. |
| name | Name of the port. |
| port | Port for the container. |
| servicePort | Port of the corresponding Kubernetes service. |
| supportStreaming | Used for high-performance protocols like gRPC where the timeout needs to be disabled. |
| useHTTP2 | Whether the Envoy container can accept HTTP2 requests. |
livenessProbe:

| Key | Description |
| --- | --- |
| Path | Defines the path where the liveness needs to be checked. |
| initialDelaySeconds | Defines the time to wait before a given container is checked for liveness. |
| periodSeconds | Defines how often a given container is checked for liveness. |
| successThreshold | Defines the number of successes required before a given container is said to fulfil the liveness probe. |
| timeoutSeconds | Defines the timeout for the check. |
| failureThreshold | Defines the maximum number of failures that are acceptable before a given container is no longer considered live. |
| command | The mentioned command is executed to perform the livenessProbe. If the command returns a non-zero value, it is equivalent to a failed probe. |
| httpHeaders | Custom headers to set in the request. HTTP allows repeated headers; you can override the default headers by defining .httpHeaders for the probe. |
| scheme | Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to HTTP. |
| tcp | The kubelet will attempt to open a socket to your container on the specified port. If it can establish a connection, the container is considered healthy. |
readinessProbe:

| Key | Description |
| --- | --- |
| Path | Defines the path where the readiness needs to be checked. |
| initialDelaySeconds | Defines the time to wait before a given container is checked for readiness. |
| periodSeconds | Defines how often a given container is checked for readiness. |
| successThreshold | Defines the number of successes required before a given container is said to fulfil the readiness probe. |
| timeoutSeconds | Defines the timeout for the check. |
| failureThreshold | Defines the maximum number of failures that are acceptable before a given container is no longer considered ready. |
| command | The mentioned command is executed to perform the readinessProbe. If the command returns a non-zero value, it is equivalent to a failed probe. |
| httpHeaders | Custom headers to set in the request. HTTP allows repeated headers; you can override the default headers by defining .httpHeaders for the probe. |
| scheme | Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to HTTP. |
| tcp | The kubelet will attempt to open a socket to your container on the specified port. If it can establish a connection, the container is considered healthy. |
autoscaling:

| Key | Description |
| --- | --- |
| enabled | Set true to enable autoscaling, else set false. |
| MinReplicas | Minimum number of replicas allowed for scaling. |
| MaxReplicas | Maximum number of replicas allowed for scaling. |
| TargetCPUUtilizationPercentage | The target CPU utilization that is expected for a container. |
| TargetMemoryUtilizationPercentage | The target memory utilization that is expected for a container. |
| extraMetrics | Used to give external metrics for autoscaling. |
ingress:

| Key | Description |
| --- | --- |
| enabled | Enable or disable ingress. |
| annotations | To configure some options depending on the Ingress controller. |
| host | Host name. |
| pathType | Path in an Ingress is required to have a corresponding path type. Supported path types are ImplementationSpecific, Exact, and Prefix. |
| path | Path name. |
| tls | It contains security details. |
ingress (legacy deployment-template format):

| Key | Description |
| --- | --- |
| enabled | Enable or disable ingress. |
| annotations | To configure some options depending on the Ingress controller. |
| host | Host name. |
| pathType | Path in an Ingress is required to have a corresponding path type. Supported path types are ImplementationSpecific, Exact, and Prefix. |
| path | Path name. |
| tls | It contains security details. |
service:

| Key | Description |
| --- | --- |
| type | Select the type of service; defaults to ClusterIP. |
| annotations | Annotations are widely used to attach metadata and configs in Kubernetes. |
| name | Optional field to assign a name to the service. |
| loadBalancerSourceRanges | If the service type is LoadBalancer, provide a list of whitelisted IP CIDRs that will be allowed to use the Load Balancer. |
command:

| Key | Description |
| --- | --- |
| enabled | To enable or disable the command. |
| value | It contains the commands. |
| workingDir | It is used to specify the working directory where commands will be executed. |
image:

| Key | Description |
| --- | --- |
| image_tag | It is the image tag. |
| image | It is the URL of the image. |
Deployment metrics:

| Key | Description |
| --- | --- |
| Deployment Frequency | It shows how often this app is deployed to production. |
| Change Failure Rate | It shows how often the respective pipeline fails. |
| Mean Lead Time | It shows the average time taken to deliver a change to production. |
| Mean Time to Recovery | It shows the average time taken to fix a failed pipeline. |
Available reference chart versions:

* reference-chart_3-12-0
* reference-chart_3-11-0
* reference-chart_3-10-0
* reference-chart_3-9-0
CronJob configuration:

| Key | Description |
| --- | --- |
| concurrencyPolicy | A CronJob is counted as missed if it has failed to be created at its scheduled time. For example, if concurrencyPolicy is set to Forbid and a CronJob was attempted to be scheduled when there was a previous schedule still running, then it would count as missed. |
| failedJobsHistoryLimit | The failedJobsHistoryLimit field is optional. Together with successfulJobsHistoryLimit, it specifies how many completed and failed jobs should be kept. By default, they are set to 3 and 1 respectively. Setting a limit to 0 corresponds to keeping none of the corresponding kind of jobs after they finish. |
| restartPolicy | The spec of a Pod has a restartPolicy field with possible values Always, OnFailure, and Never. The default value is Always. The restartPolicy applies to all containers in the Pod and only refers to restarts of the containers by the kubelet on the same node. After containers in a Pod exit, the kubelet restarts them with an exponential back-off delay (10s, 20s, 40s, ...), capped at five minutes. Once a container has executed for 10 minutes without any problems, the kubelet resets the restart backoff timer for that container. |
| schedule | To generate CronJob schedule expressions, you can also use web tools like https://crontab.guru/. |
| startingDeadlineSeconds | If startingDeadlineSeconds is set to a large value or left unset (the default) and if concurrencyPolicy is set to Allow, the jobs will always run at least once. |
| successfulJobsHistoryLimit | The successfulJobsHistoryLimit field is optional. Together with failedJobsHistoryLimit, it specifies how many completed and failed jobs should be kept. By default, they are set to 3 and 1 respectively. Setting a limit to 0 corresponds to keeping none of the corresponding kind of jobs after they finish. |
| suspend | The suspend field is also optional. If it is set to true, all subsequent executions are suspended. This setting does not apply to already started executions. Defaults to false. |
| kind | As with all other Kubernetes config, a Job and a CronJob need apiVersion and kind fields. The kind field specifies which workload (CronJob or Job) should be deployed; for this chart it defaults to cronjob. |
Job configuration:

| Key | Description |
| --- | --- |
| activeDeadlineSeconds | Another way to terminate a Job is by setting an active deadline. Do this by setting the activeDeadlineSeconds field of the Job to a number of seconds. The activeDeadlineSeconds applies to the duration of the job, no matter how many Pods are created. Once a Job reaches activeDeadlineSeconds, all of its running Pods are terminated and the Job status becomes type: Failed with reason: DeadlineExceeded. |
| backoffLimit | There are situations where you want to fail a Job after some amount of retries due to a logical error in configuration, etc. To do so, set backoffLimit to specify the number of retries before considering a Job as failed. The back-off limit is set by default to 6. Failed Pods associated with the Job are recreated by the Job controller with an exponential back-off delay (10s, 20s, 40s, ...) capped at six minutes. The back-off count is reset when a Job's Pod is deleted or successful without any other Pods for the Job failing around that time. |
| completionMode | Jobs with a fixed completion count - that is, jobs that have non-null completions - can have a completion mode that is specified in completionMode. |
| parallelism | The requested parallelism can be set to any non-negative value. If it is unspecified, it defaults to 1. If it is specified as 0, then the Job is effectively paused until it is increased. |
| suspend | The suspend field is also optional. If it is set to true, all subsequent executions are suspended. This setting does not apply to already started executions. Defaults to false. |
| ttlSecondsAfterFinished | The TTL controller only supports Jobs for now. A cluster operator can use this feature to clean up finished Jobs (either Complete or Failed) automatically by specifying the ttlSecondsAfterFinished field of a Job. The TTL controller will assume that a resource is eligible to be cleaned up TTL seconds after the resource has finished, in other words, when the TTL has expired. When the TTL controller cleans up a resource, it will delete it cascadingly, that is to say it will delete its dependent objects together with it. Note that when the resource is deleted, its lifecycle guarantees, such as finalizers, will be honored. |
| kind | As with all other Kubernetes config, a Job and a CronJob need apiVersion and kind fields. The kind field specifies which workload (CronJob or Job) should be deployed; for this chart it defaults to job. |