Pod Scenarios

Krkn recently replaced PowerfulSeal with its own plugin-based pod scenarios. This scenario disrupts pods matching the given label in the specified namespace on a Kubernetes/OpenShift cluster.

Why pod scenarios are important:

Modern applications demand high availability, low downtime, and resilient infrastructure. Kubernetes provides building blocks like Deployments, ReplicaSets, and Services to support fault tolerance, but understanding how these interact during disruptions is critical for ensuring reliability. Pod disruption scenarios test this reliability under various conditions, validating that the application and infrastructure respond as expected.

Krkn Telemetry: Krkn collects metrics during chaos experiments, such as recovery timing. These indicators help assess how resilient the application is under test conditions.

Use cases and importance of pod scenarios

  1. Deleting a single pod
  • Use Case: Simulates unplanned deletion of a single pod.
  • Why It’s Important: Validates whether the ReplicaSet or Deployment automatically creates a replacement.
  • Customer Impact: Ensures continuous service even if a pod unexpectedly crashes.
  • Recovery Timing: Typically less than 10 seconds for stateless apps (seen in Krkn telemetry output).
  • HA Indicator: Pod is automatically rescheduled and becomes Ready without manual intervention.
kubectl delete pod <pod-name> -n <namespace>
kubectl get pods -n <namespace> -w # watch for new pods
  2. Deleting multiple pods simultaneously
  • Use Case: Simulates a larger failure event, such as a node crash or AZ outage.
  • Why It’s Important: Tests whether the system has enough resources and policies to recover gracefully.
  • Customer Impact: If all pods of a service fail, user experience is directly impacted.
  • HA Indicator: Application can continue functioning from other replicas across zones/nodes.
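To reproduce this by hand, you can delete every pod matching a label at once (label and namespace are placeholders, as above):
kubectl delete pods -l <label> -n <namespace> # delete all pods matching the label at once
kubectl get pods -n <namespace> -o wide -w # watch replacements schedule across nodes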
  3. Pod Eviction (Soft Disruption)
  • Use Case: Triggered by Kubernetes itself during node upgrades or scaling down.
  • Why It’s Important: Ensures graceful termination and restart elsewhere without user impact.
  • Customer Impact: Should be zero if readiness/liveness probes and PDBs are correctly configured.
  • HA Indicator: Rolling disruption does not take down the whole application.
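Evictions can be exercised manually by draining a node; kubectl drain honors PodDisruptionBudgets, so a correctly configured app should stay available (node name is a placeholder):
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data # evict pods gracefully, honoring PDBs
kubectl uncordon <node-name> # allow scheduling on the node again after the test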

How to know if it is highly available

  • Multiple Replicas Exist: Confirmed by checking kubectl get deploy -n <namespace> and seeing more than one replica.
  • Pods Distributed Across Nodes/Availability Zones: Enforced with topologySpreadConstraints (see the sketch after this list) or verified by observing pod distribution in kubectl get pods -o wide. See Health Checks for real-time visibility into the impact of chaos scenarios on application availability and performance.
  • Service Uptime Remains Unaffected: During chaos test, verify app availability (synthetic probes, Prometheus alerts, etc).
  • Recovery Is Automatic: No manual intervention needed to restore service.
  • Krkn Telemetry Indicators: End-of-run data includes recovery times, pod reschedule latency, and service downtime, which are vital metrics for assessing HA.
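A minimal sketch of spreading replicas across zones with topologySpreadConstraints (the Deployment name, labels, and image are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      topologySpreadConstraints:
        - maxSkew: 1 # allow at most a one-pod imbalance between zones
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule # fail scheduling rather than stack replicas in one zone
          labelSelector:
            matchLabels:
              app: web
      containers:
        - name: web
          image: nginx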

1 - Pod Scenarios using Krkn

Example Config

Add the pod disruption scenario to the chaos_scenarios list in the Krkn config:

kraken:
  chaos_scenarios:
    - pod_disruption_scenarios:
      - path/to/scenario.yaml

You can then create the scenario file with the following contents:

# yaml-language-server: $schema=../plugin.schema.json
- id: kill-pods
  config:
    namespace_pattern: ^kube-system$
    label_selector: k8s-app=kube-scheduler
    krkn_pod_recovery_time: 120
    

Please adjust the schema reference to point to the actual schema file. With it in place, your IDE can provide code completion and documentation for the available options.
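For example, a variant targeting etcd pods instead (the namespace pattern and label below are illustrative; only fields shown above are used):

- id: kill-pods
  config:
    namespace_pattern: ^openshift-etcd$
    label_selector: app=etcd
    krkn_pod_recovery_time: 120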

Pod Chaos Scenarios

The following are the components of Kubernetes/OpenShift for which a basic chaos scenario config exists today.

| Component | Description | Working |
| --- | --- | --- |
| Basic pod scenario | Kill a pod. | :heavy_check_mark: |
| Etcd | Kills a single/multiple etcd replicas. | :heavy_check_mark: |
| Kube ApiServer | Kills a single/multiple kube-apiserver replicas. | :heavy_check_mark: |
| ApiServer | Kills a single/multiple apiserver replicas. | :heavy_check_mark: |
| Prometheus | Kills a single/multiple prometheus replicas. | :heavy_check_mark: |
| OpenShift System Pods | Kills random pods running in the OpenShift system namespaces. | :heavy_check_mark: |

2 - Pod Scenarios using Krknctl

krknctl run pod-scenarios (optional: --<parameter> <value>)

You can also set any global variable listed here.

Scenario specific parameters:

| Parameter | Description | Type | Default |
| --- | --- | --- | --- |
| --namespace | Targeted namespace in the cluster (supports regex) | string | openshift-* |
| --pod-label | Label of the pod(s) to target, e.g. "app=test" | string | |
| --name-pattern | Regex pattern to match the pods in NAMESPACE when POD_LABEL is not specified | string | .* |
| --disruption-count | Number of pods to disrupt | number | 1 |
| --kill-timeout | Timeout to wait for the target pod(s) to be removed, in seconds | number | 180 |
| --expected-recovery-time | Fails if the disrupted pod(s) do not recover within the set timeout | number | 120 |
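
For example, to disrupt two pods labeled app=test in a namespace named payments (the namespace and label values are illustrative):

krknctl run pod-scenarios --namespace payments --pod-label app=test --disruption-count 2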

To see all available scenario options:

krknctl run pod-scenarios --help 

3 - Pod Scenarios using Krkn-hub

This scenario disrupts the pods matching the label in the specified namespace on a Kubernetes/OpenShift cluster.

Run

If enabling Cerberus to monitor the cluster and pass/fail the scenario post-chaos, refer to the docs. Make sure to start it before injecting the chaos, and set the CERBERUS_ENABLED environment variable for the chaos injection container so it autoconnects.

$ podman run --name=<container_name> --net=host --env-host=true -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:pod-scenarios
$ podman logs -f <container_name or container_id> # Streams Kraken logs
$ podman inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs the exit code, which can be considered pass/fail for the scenario

$ docker run $(./get_docker_params.sh) --name=<container_name> --net=host -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:pod-scenarios
OR 
$ docker run -e <VARIABLE>=<value> --name=<container_name> --net=host -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:pod-scenarios

$ docker logs -f <container_name or container_id> # Streams Kraken logs
$ docker inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs the exit code, which can be considered pass/fail for the scenario
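
A minimal sketch for gating a CI step on that exit code (the container name my-krkn-run is illustrative):

rc=$(podman wait my-krkn-run) # block until the detached container exits; prints its exit code
if [ "$rc" -ne 0 ]; then
  echo "pod-scenarios chaos run failed with exit code $rc" >&2
  exit 1
fi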

Supported parameters

The following environment variables can be set on the host running the container to tweak the scenario/faults being injected:

Example if --env-host is used:

export <parameter_name>=<value>

OR on the command line like example:

-e <VARIABLE>=<value> 

See the list of variables that apply to all scenarios here; these can be used/set in addition to the scenario-specific variables below.

| Parameter | Description | Default |
| --- | --- | --- |
| NAMESPACE | Targeted namespace in the cluster (supports regex) | openshift-.* |
| POD_LABEL | Label of the pod(s) to target | "" |
| NAME_PATTERN | Regex pattern to match the pods in NAMESPACE when POD_LABEL is not specified | .* |
| DISRUPTION_COUNT | Number of pods to disrupt | 1 |
| KILL_TIMEOUT | Timeout to wait for the target pod(s) to be removed, in seconds | 180 |
| EXPECTED_RECOVERY_TIME | Fails if the disrupted pod(s) do not recover within the set timeout | 120 |
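
For instance, to target two pods labeled app=api in a namespace named payments before starting the container (the values are illustrative):

export NAMESPACE=payments
export POD_LABEL=app=api
export DISRUPTION_COUNT=2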

For example:

$ podman run --name=<container_name> --net=host --env-host=true -v <path-to-custom-metrics-profile>:/home/krkn/kraken/config/metrics-aggregated.yaml -v <path-to-custom-alerts-profile>:/home/krkn/kraken/config/alerts -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:pod-scenarios

Demo

See a demo of this scenario: