Pod Scenarios
This scenario disrupts the pods matching the label, excluded label or pod name in the specified namespace on a Kubernetes/OpenShift cluster.
Why pod scenarios are important:
Modern applications demand high availability, low downtime, and resilient infrastructure. Kubernetes provides building blocks like Deployments, ReplicaSets, and Services to support fault tolerance, but understanding how these interact during disruptions is critical for ensuring reliability. Pod disruption scenarios test this reliability under various conditions, validating that the application and infrastructure respond as expected.
Use cases of pod scenarios
1. Deleting a single pod
- **Use Case:** Simulates unplanned deletion of a single pod
- **Why It's Important:** Validates whether the ReplicaSet or Deployment automatically creates a replacement.
- **Customer Impact:** Ensures continuous service even if a pod unexpectedly crashes.
- **Recovery Timing:** Typically less than 10 seconds for stateless apps (seen in Krkn telemetry output).
- **HA Indicator:** Pod is automatically rescheduled and becomes Ready without manual intervention.
```bash
kubectl delete pod <pod-name> -n <namespace>
kubectl get pods -n <namespace> -w   # watch for new pods
```
2. Deleting multiple pods simultaneously
- **Use Case:** Simulates a larger failure event, such as a node crash or AZ outage.
- **Why It's Important:** Tests whether the system has enough resources and policies to recover gracefully.
- **Customer Impact:** If all pods of a service fail, user experience is directly impacted.
- **HA Indicator:** Application can continue functioning from other replicas across zones/nodes.
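As a rough manual equivalent (the namespace and the `app=backend` label are placeholders, not tied to any scenario on this page), you can delete every pod behind a service at once and watch the ReplicaSet recreate them:
```bash
# Delete all pods carrying the label in one shot
kubectl delete pods -n <namespace> -l app=backend
# Watch the ReplicaSet/Deployment bring replacements back up
kubectl get pods -n <namespace> -l app=backend -w
```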
3. Pod Eviction (Soft Disruption)
- **Use Case:** Triggered by Kubernetes itself during node upgrades or scaling down.
- **Why It's Important:** Ensures graceful termination and restart elsewhere without user impact.
- **Customer Impact:** Should be zero if readiness/liveness probes and PDBs are correctly configured.
- **HA Indicator:** Rolling disruption does not take down the whole application.
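A manual way to reproduce this kind of soft disruption (the node name is a placeholder) is to cordon and drain a node; the eviction API honors PodDisruptionBudgets during the drain:
```bash
# Mark the node unschedulable, then evict its pods gracefully (PDBs are respected)
kubectl cordon <node-name>
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
# Make the node schedulable again once the test is done
kubectl uncordon <node-name>
```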
How to know if it is highly available
- Multiple Replicas Exist: Confirmed by checking kubectl get deploy -n <namespace> and seeing more than one replica.
- Pods Distributed Across Nodes/Availability Zones: Using topologySpreadConstraints or observing pod distribution in kubectl get pods -o wide. See Health Checks for real-time visibility into the impact of chaos scenarios on application availability and performance.
- Service Uptime Remains Unaffected: During the chaos test, verify app availability (synthetic probes, Prometheus alerts, etc.).
- Recovery Is Automatic: No manual intervention needed to restore service.
- Krkn Telemetry Indicators: End-of-run data includes recovery times, pod reschedule latency, and service downtime, which are vital metrics for assessing HA.
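For reference, a minimal sketch of a deployment that satisfies these indicators (all names, labels, images, and counts below are illustrative and not taken from any scenario on this page) could look like:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend                 # illustrative name
spec:
  replicas: 3                   # more than one replica
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone   # spread pods across availability zones
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: backend
      containers:
        - name: backend
          image: example.com/backend:latest          # illustrative image
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: backend-pdb
spec:
  minAvailable: 2               # keep at least two pods up during voluntary disruptions
  selector:
    matchLabels:
      app: backend
```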
Excluding Pods from Disruption
Use exclude_label to mark the pods that should stay safe while the rest of the pods in a namespace are subjected to chaos. Common use cases include:
- Disrupt the backend pods while leaving the highly available database replicas untouched.
- Inject faults into the application layer without stopping the infrastructure/monitoring pods.
- Run a rolling disruption experiment while keeping control-plane or system-critical components unaffected.
Format:
exclude_label: "key=value"
Mechanism:
- Pods are selected based on namespace_pattern + label_selector or name_pattern.
- Before deletion, pods that match exclude_label are removed from the list.
- The remaining pods are subjected to chaos.
Example: Protect the Leader While Killing Other etcd Replicas
```yaml
- id: kill_pods
  config:
    namespace_pattern: ^openshift-etcd$
    label_selector: k8s-app=etcd
    exclude_label: role=etcd-leader
    krkn_pod_recovery_time: 120
    kill: 1
```
Example: Disrupt Backend, Skip Monitoring
```yaml
- id: kill_pods
  config:
    namespace_pattern: ^production$
    label_selector: app=backend
    exclude_label: component=monitoring
    krkn_pod_recovery_time: 120
    kill: 2
```
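Before running such a scenario, you can approximate the resulting target set with a plain label query (this reuses the production/backend/monitoring labels from the example above and is only a sanity check, not part of the scenario itself):
```bash
# Pods matching the target label but not the excluded label
kubectl get pods -n production -l 'app=backend,component!=monitoring'
```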
Recovery Time Metrics in Krkn Telemetry
Krkn tracks three key recovery time metrics for each affected pod:
pod_rescheduling_time - The time (in seconds) that the Kubernetes cluster took to reschedule the pod after it was killed. This measures the cluster’s scheduling efficiency and includes the time from pod deletion until the replacement pod is scheduled on a node.
pod_readiness_time - The time (in seconds) the pod took to become ready after being scheduled. This measures application startup time, including container image pulls, initialization, and readiness probe success.
total_recovery_time - The total amount of time (in seconds) from pod deletion until the replacement pod became fully ready and available to serve traffic. This is the sum of rescheduling time and readiness time.
These metrics appear in the telemetry output under PodsStatus.recovered for successfully recovered pods. Pods that fail to recover within the timeout period appear under PodsStatus.unrecovered without timing data.
Example telemetry output:
```json
{
  "recovered": [
    {
      "pod_name": "backend-7d8f9c-xyz",
      "namespace": "production",
      "pod_rescheduling_time": 2.3,
      "pod_readiness_time": 5.7,
      "total_recovery_time": 8.0
    }
  ],
  "unrecovered": []
}
```
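If you save this telemetry output to a file, a quick way to pull out the recovery numbers (the file name is illustrative and jq is assumed to be installed) is:
```bash
# Print the name and total recovery time of every recovered pod
jq '.recovered[] | {pod_name, total_recovery_time}' pods_status.json
```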
See Krkn config examples and Krknctl parameters for full details.
1 - Pod Scenarios using Krkn
Example Config
Add the pod disruption scenario type and the path to the scenario file to the Krkn config:
```yaml
kraken:
  chaos_scenarios:
    - pod_disruption_scenarios:
        - path/to/scenario.yaml
```
You can then create the scenario file with the following contents:
```yaml
# yaml-language-server: $schema=../plugin.schema.json
- id: kill-pods
  config:
    namespace_pattern: ^kube-system$
    label_selector: k8s-app=kube-scheduler
    krkn_pod_recovery_time: 120
    # Not needed by default, but can be used if you want to target pods on specific nodes
    # Option 1: Target pods on nodes with specific labels [master/worker nodes]
    node_label_selector: node-role.kubernetes.io/control-plane=  # Target control-plane nodes (works on both k8s and openshift)
    exclude_label: 'critical=true'  # Optional - pods matching this label will be excluded from the chaos
    # Option 2: Target pods of specific nodes (testing mixed node types)
    node_names:
      - ip-10-0-31-8.us-east-2.compute.internal    # Worker node 1
      - ip-10-0-48-188.us-east-2.compute.internal  # Worker node 2
      - ip-10-0-14-59.us-east-2.compute.internal   # Master node 1
```
Please adjust the schema reference to point to the schema file. This file will give you code completion and documentation for the available options in your IDE.
Pod Chaos Scenarios
The following are the components of Kubernetes/OpenShift for which a basic chaos scenario config exists today.
| Component | Description | Working |
|---|---|---|
| Basic pod scenario | Kill a pod. | :heavy_check_mark: |
| Etcd | Kills a single/multiple etcd replicas. | :heavy_check_mark: |
| Kube ApiServer | Kills a single/multiple kube-apiserver replicas. | :heavy_check_mark: |
| ApiServer | Kills a single/multiple apiserver replicas. | :heavy_check_mark: |
| Prometheus | Kills a single/multiple prometheus replicas. | :heavy_check_mark: |
| OpenShift System Pods | Kills random pods running in the OpenShift system namespaces. | :heavy_check_mark: |
2 - Pod Scenarios using Krkn-hub
This scenario disrupts the pods matching the label in the specified namespace on a Kubernetes/OpenShift cluster.
Run
If enabling Cerberus to monitor the cluster and pass/fail the scenario post-chaos, refer to the docs. Make sure to start it before injecting the chaos, and set the CERBERUS_ENABLED environment variable for the chaos injection container to autoconnect.
```bash
$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:pod-scenarios
$ podman logs -f <container_name or container_id> # Streams Kraken logs
$ podman inspect <container_name or container_id> --format "{{.State.ExitCode}}" # Outputs the exit code, which can be considered pass/fail for the scenario
```
Note
--env-host: This option is not available with the remote Podman client, including Mac and Windows (excluding WSL2) machines.
Without the --env-host option you'll have to set each environment variable on the podman command line like -e <VARIABLE>=<value>
```bash
$ docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:pod-scenarios
```
OR
```bash
$ docker run -e <VARIABLE>=<value> --name=<container_name> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:pod-scenarios
$ docker logs -f <container_name or container_id> # Streams Kraken logs
$ docker inspect <container_name or container_id> --format "{{.State.ExitCode}}" # Outputs the exit code, which can be considered pass/fail for the scenario
```
Tip
Because the container runs with a non-root user, ensure the kube config is globally readable before mounting it in the container. You can achieve this with the following commands:
```bash
kubectl config view --flatten > ~/kubeconfig && chmod 444 ~/kubeconfig && docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v ~/kubeconfig:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:<scenario>
```
Supported parameters
The following environment variables can be set on the host running the container to tweak the scenario/faults being injected:
Example if --env-host is used:
export <parameter_name>=<value>
OR set it on the command line, for example:
-e <VARIABLE>=<value>
See the list of variables that apply to all scenarios here; they can be set in addition to these scenario-specific variables.
| Parameter | Description | Default |
|---|---|---|
| NAMESPACE | Targeted namespace in the cluster (supports regex) | openshift-.* |
| POD_LABEL | Label of the pod(s) to target | "" |
| EXCLUDE_LABEL | Pods matching this label will be excluded from the chaos even if they match other criteria | "" |
| NAME_PATTERN | Regex pattern to match the pods in NAMESPACE when POD_LABEL is not specified | .* |
| DISRUPTION_COUNT | Number of pods to disrupt | 1 |
| KILL_TIMEOUT | Timeout in seconds to wait for the target pod(s) to be removed | 180 |
| EXPECTED_RECOVERY_TIME | Fails if the disrupted pod(s) do not recover within this timeout (seconds) | 120 |
| NODE_LABEL_SELECTOR | Label of the node(s) to target | "" |
| NODE_NAMES | Name of the node(s) to target. Example: ["worker-node-1","worker-node-2","master-node-1"] | [] |
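For example, to disrupt two backend pods in a production namespace while sparing monitoring pods (the values below are illustrative), export the scenario-specific variables and start the container exactly as in the Run section above:
```bash
# Illustrative values for the scenario-specific variables
export NAMESPACE="^production$"
export POD_LABEL="app=backend"
export EXCLUDE_LABEL="component=monitoring"
export DISRUPTION_COUNT=2
export EXPECTED_RECOVERY_TIME=120

# Same run command as above; --env-host passes the exported variables into the container
podman run --name=<container_name> --net=host --pull=always --env-host=true \
  -v <path-to-kube-config>:/home/krkn/.kube/config:Z \
  -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:pod-scenarios
```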
Note
Set the NAMESPACE environment variable to openshift-.* to pick and disrupt pods randomly in OpenShift system namespaces. DAEMON_MODE can also be enabled to disrupt the pods every x seconds in the background to check reliability.
Note
In case of using a custom metrics profile or alerts profile when CAPTURE_METRICS or ENABLE_ALERTS is enabled, mount the profiles from the host on which the container is run using podman/docker under /home/krkn/kraken/config/metrics-aggregated.yaml and /home/krkn/kraken/config/alerts. For example:
```bash
$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-custom-metrics-profile>:/home/krkn/kraken/config/metrics-aggregated.yaml -v <path-to-custom-alerts-profile>:/home/krkn/kraken/config/alerts -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:container-scenarios
```
Demo
See a demo of this scenario:
3 - Pod Scenarios using Krknctl
krknctl run pod-scenarios (optional: --<parameter>:<value> )
You can also set any of the global variables listed here.
Scenario specific parameters:
| Parameter | Description | Type | Default |
|---|---|---|---|
| --namespace | Targeted namespace in the cluster (supports regex) | string | openshift-* |
| --pod-label | Label of the pod(s) to target, e.g. "app=test" | string | |
| --exclude-label | Pods matching this label will be excluded from the chaos even if they match other criteria | string | "" |
| --name-pattern | Regex pattern to match the pods in NAMESPACE when POD_LABEL is not specified | string | .* |
| --disruption-count | Number of pods to disrupt | number | 1 |
| --kill-timeout | Timeout in seconds to wait for the target pod(s) to be removed | number | 180 |
| --expected-recovery-time | Fails if the disrupted pod(s) do not recover within this timeout (seconds) | number | 120 |
| --node-label-selector | Label of the node(s) to target | string | "" |
| --node-names | Name of the node(s) to target. Example: ["worker-node-1","worker-node-2","master-node-1"] | string | [] |
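For instance, a run that disrupts two backend pods in a production namespace might look like the following (the namespace, label, and count values are illustrative, and the standard --parameter value form is assumed):
```bash
krknctl run pod-scenarios --namespace "^production$" --pod-label "app=backend" --disruption-count 2
```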
To see all available scenario options
krknctl run pod-scenarios --help