Hog Scenarios

Hog Scenarios background

Hog Scenarios are designed to push the limits of memory, CPU, or I/O on one or more nodes in your cluster. They also serve to evaluate whether your cluster can withstand rogue pods that excessively consume resources without any limits.

These scenarios involve deploying one or more workloads in the cluster. Based on the specific configuration, these workloads will use a predetermined amount of resources for a specified duration.

Config Options

Common options

| Option | Type | Description |
| --- | --- | --- |
| duration | number | the duration of the stress test in seconds |
| workers | number (Optional) | the number of threads instantiated by stress-ng; if left empty, the number of workers will match the number of available cores on the node |
| hog-type | string (Enum) | can be cpu, memory, or io |
| image | string | the container image of the stress workload (quay.io/krkn-chaos/krkn-hog) |
| namespace | string | the namespace where the stress workload will be deployed |
| node-selector | string (Optional) | defines the node selector for choosing target nodes. If not specified, one schedulable node in the cluster will be chosen at random. If multiple nodes match the selector, all of them will be subjected to stress. If number-of-nodes is specified, that many nodes will be randomly selected from those identified by the selector. |
| taints | list (Optional), default [] | list of taints for which tolerations need to be created. Example: ["node-role.kubernetes.io/master:NoSchedule"] |
| number-of-nodes | number (Optional) | restricts the number of nodes selected by the selector |

Available Scenarios

Hog scenarios:

Rollback Scenario Support

Krkn supports rollback for all available Hog scenarios. For more details, please refer to the Rollback Scenarios documentation.

1 - CPU Hog Scenario

Overview

The CPU Hog scenario is designed to create CPU pressure on one or more nodes in your Kubernetes/OpenShift cluster for a specified duration. This scenario helps you test how your cluster and applications respond to high CPU utilization.

How It Works

The scenario deploys a stress workload pod on targeted nodes. These pods use stress-ng to consume CPU resources according to your configuration. The workload runs for a specified duration and then terminates, allowing you to observe your cluster’s behavior under CPU stress.

When to Use

Use the CPU Hog scenario to:

  • Test your cluster’s ability to handle CPU resource contention
  • Validate that CPU resource limits and quotas are properly configured
  • Evaluate the impact of CPU pressure on application performance
  • Test whether your monitoring and alerting systems properly detect CPU saturation
  • Verify that the Kubernetes scheduler correctly handles CPU-constrained nodes
  • Simulate scenarios where rogue pods consume excessive CPU without limits

Key Configuration Options

In addition to the common hog scenario options, CPU Hog scenarios support:

| Option | Type | Description |
| --- | --- | --- |
| cpu-load-percentage | number | The percentage of CPU that will be consumed by the hog |
| cpu-method | string | The CPU load strategy adopted by stress-ng (see the stress-ng documentation for available options) |

Usage

Select your deployment method to get started:

1.1 - CPU Hog Scenarios using Krkn

To enable this plugin add the pointer to the scenario input file scenarios/kube/cpu-hog.yml as described in the Usage section.

cpu-hog options

In addition to the common hog scenario options, you can specify the options below in your scenario configuration to set the amount of CPU to hog on a given worker node.

| Option | Type | Description |
| --- | --- | --- |
| cpu-load-percentage | number | the amount of CPU that will be consumed by the hog |
| cpu-method | string | the CPU load strategy adopted by stress-ng; please refer to the stress-ng documentation for all the available options |
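Putting the common and CPU-specific options together, a scenarios/kube/cpu-hog.yml file might look like the following sketch. The field values here are illustrative (in particular, `cpu-method: all` is just one of the stress-ng strategies); check the scenario file shipped with Krkn for the exact schema and defaults.

```yaml
# Illustrative cpu-hog scenario -- values are examples, not defaults
duration: 60                        # stress test length in seconds
workers: ''                         # empty: one worker per available core
hog-type: cpu                       # cpu, memory, or io
cpu-load-percentage: 90             # target CPU load per worker
cpu-method: all                     # stress-ng CPU load strategy
image: quay.io/krkn-chaos/krkn-hog  # stress workload container image
namespace: default                  # where the stress pod is deployed
node-selector: ''                   # empty: one schedulable node chosen at random
number-of-nodes: 1                  # cap on nodes matched by the selector
```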

Usage

To enable hog scenarios, edit the kraken config file: in the kraken -> chaos_scenarios section of the YAML structure, add a new element named hog_scenarios to the list, then add the desired scenario pointing to the scenario file.

kraken:
    ...
    chaos_scenarios:
        - hog_scenarios:
            - scenarios/kube/cpu-hog.yml

1.2 - CPU Hog Scenario using Krkn-Hub

This scenario hogs the CPU on the specified node of a Kubernetes/OpenShift cluster for a specified duration. For more information refer to the following documentation.

Run

If enabling Cerberus to monitor the cluster and pass/fail the scenario post chaos, refer to the docs. Make sure to start it before injecting the chaos and set the CERBERUS_ENABLED environment variable for the chaos injection container to autoconnect.

$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-cpu-hog
$ podman logs -f <container_name or container_id> # Streams Kraken logs
$ podman inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario
$ docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-cpu-hog
OR 
$ docker run -e <VARIABLE>=<value> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-cpu-hog

$ docker logs -f <container_name or container_id> # Streams Kraken logs
$ docker inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario

Supported parameters

The following environment variables can be set on the host running the container to tweak the scenario/faults being injected:

Example if --env-host is used:

export <parameter_name>=<value>

OR on the command line like example:

-e <VARIABLE>=<value> 

See the list of variables that apply to all scenarios here; they can be used/set in addition to these scenario-specific variables.

| Parameter | Description | Default |
| --- | --- | --- |
| TOTAL_CHAOS_DURATION | Set chaos duration (in sec) as desired | 60 |
| NODE_CPU_CORE | Number of cores (workers) of node CPU to be consumed | 2 |
| NODE_CPU_PERCENTAGE | Percentage of total CPU to be consumed | 50 |
| NAMESPACE | Namespace where the scenario container will be deployed | default |
| NODE_SELECTOR | Defines the node selector for choosing target nodes. If not specified, one schedulable node in the cluster will be chosen at random. If multiple nodes match the selector, all of them will be subjected to stress. If number-of-nodes is specified, that many nodes will be randomly selected from those identified by the selector. | "" |
| TAINTS | List of taints for which tolerations need to be created. Example: ["node-role.kubernetes.io/master:NoSchedule"] | [] |
| NUMBER_OF_NODES | Restricts the number of nodes selected by the selector | "" |
| IMAGE | The container image of the stress workload | quay.io/krkn-chaos/krkn-hog |
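When running with --env-host, the parameters above are exported on the host before starting the container. A minimal example (the values here are illustrative, not defaults):

```shell
# Illustrative values -- any of the parameters above can be exported this way
export TOTAL_CHAOS_DURATION=120   # run the hog for 120 seconds
export NODE_CPU_CORE=4            # spawn 4 stress-ng CPU workers
export NODE_CPU_PERCENTAGE=80     # drive each worker to 80% load
export NAMESPACE=default          # namespace for the stress workload
```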

For example:

$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-custom-metrics-profile>:/home/krkn/kraken/config/metrics-aggregated.yaml -v <path-to-custom-alerts-profile>:/home/krkn/kraken/config/alerts -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-cpu-hog

Demo

You can find a link to a demo of the scenario here

1.3 - Node CPU Hog using Krknctl

krknctl run node-cpu-hog (optional: --<parameter>:<value> )

Can also set any global variable listed here

| Parameter | Description | Type | Default |
| --- | --- | --- | --- |
| --chaos-duration | Set chaos duration (in secs) as desired | number | 60 |
| --cores | Number of cores (workers) of node CPU to be consumed | number | |
| --cpu-percentage | Percentage of total CPU to be consumed | number | 50 |
| --namespace | Namespace where the scenario container will be deployed | string | default |
| --node-selector | Node selector where the scenario containers will be scheduled, in the format "<key>=<value>". NOTE: a container will be instantiated on each selected node with the same scenario options. If left empty, a random node will be selected | string | |
| --taints | List of taints for which tolerations need to be created. For example ["node-role.kubernetes.io/master:NoSchedule"] | string | [] |
| --number-of-nodes | Restricts the number of nodes selected by the selector | number | |
| --image | The hog container image. Can be changed if the hog image is mirrored on a private repository | string | quay.io/krkn-chaos/krkn-hog |

To see all available scenario options

krknctl run node-cpu-hog --help 

2 - IO Hog Scenario

Overview

The IO Hog scenario is designed to create disk I/O pressure on one or more nodes in your Kubernetes/OpenShift cluster for a specified duration. This scenario helps you test how your cluster and applications respond to high disk I/O utilization and storage-related bottlenecks.

How It Works

The scenario deploys a stress workload pod on targeted nodes. These pods use stress-ng to perform intensive write operations to disk, consuming I/O resources according to your configuration. The scenario supports attaching node paths to the pod as a hostPath volume or using custom pod volume definitions, allowing you to test I/O pressure on specific storage targets.

When to Use

Use the IO Hog scenario to:

  • Test your cluster’s behavior under disk I/O pressure
  • Validate that I/O resource limits are properly configured
  • Evaluate the impact of disk I/O contention on application performance
  • Test whether your monitoring systems properly detect disk saturation
  • Verify that storage performance meets requirements under stress
  • Simulate scenarios where pods perform excessive disk writes
  • Test the resilience of persistent volume configurations
  • Validate disk I/O quotas and rate limiting

Key Configuration Options

In addition to the common hog scenario options, IO Hog scenarios support:

| Option | Type | Description |
| --- | --- | --- |
| io-block-size | string | The size of each individual write operation performed by the stressor |
| io-write-bytes | string | The total amount of data that will be written by the stressor. Can be specified as a percentage (%) of free space on the filesystem or in absolute units (b, k, m, g for Bytes, KBytes, MBytes, GBytes) |
| io-target-pod-folder | string | The path within the pod where the volume will be mounted |
| io-target-pod-volume | dictionary | The pod volume definition that will be stressed by the scenario (typically a hostPath volume) |

Example Values

  • io-block-size: "1m" - Write in 1 megabyte blocks
  • io-block-size: "4k" - Write in 4 kilobyte blocks
  • io-write-bytes: "50%" - Write data equal to 50% of available free space
  • io-write-bytes: "10g" - Write 10 gigabytes of data

Usage

Select your deployment method to get started:

2.1 - IO Hog Scenarios using Krkn

To enable this plugin add the pointer to the scenario input file scenarios/kube/io-hog.yaml as described in the Usage section.

io-hog options

In addition to the common hog scenario options, you can specify the options below in your scenario configuration to target specific pod I/O.

| Option | Type | Description |
| --- | --- | --- |
| io-block-size | string | the block size written by the stressor |
| io-write-bytes | string | the total amount of data that will be written by the stressor. The size can be specified as % of free space on the file system or in units of Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g |
| io-target-pod-folder | string | the folder where the volume will be mounted in the pod |
| io-target-pod-volume | dictionary | the pod volume definition that will be stressed by the scenario |
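Combining the common and IO-specific options, an io-hog scenario file might look like the sketch below. The hostPath volume definition is an illustrative assumption (a plain Kubernetes volume dict, as the overview describes); consult the scenario file shipped with Krkn for the exact schema.

```yaml
# Illustrative io-hog scenario -- values are examples, not defaults
duration: 30
workers: 3
hog-type: io
io-block-size: 1m             # size of each write operation
io-write-bytes: 10m           # total data written per stressor
io-target-pod-folder: /hog-data
io-target-pod-volume:         # hostPath volume stressed by the scenario
  name: node-volume
  hostPath:
    path: /tmp
image: quay.io/krkn-chaos/krkn-hog
namespace: default
```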

Usage

To enable hog scenarios, edit the kraken config file: in the kraken -> chaos_scenarios section of the YAML structure, add a new element named hog_scenarios to the list, then add the desired scenario pointing to the scenario file.

kraken:
    ...
    chaos_scenarios:
        - hog_scenarios:
            - scenarios/kube/io-hog.yml

2.2 - IO Hog Scenario using Krkn-Hub

This scenario hogs the IO on the specified node of a Kubernetes/OpenShift cluster for a specified duration. For more information refer to the following documentation.

Run

If enabling Cerberus to monitor the cluster and pass/fail the scenario post chaos, refer to the docs. Make sure to start it before injecting the chaos and set the CERBERUS_ENABLED environment variable for the chaos injection container to autoconnect.

$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-kube-config>:/root/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-io-hog
$ podman logs -f <container_name or container_id> # Streams Kraken logs
$ podman inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario
$ docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v <path-to-kube-config>:/root/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-io-hog
OR 
$ docker run -e <VARIABLE>=<value> --net=host --pull=always -v <path-to-kube-config>:/root/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-io-hog

$ docker logs -f <container_name or container_id> # Streams Kraken logs
$ docker inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario

Supported parameters

The following environment variables can be set on the host running the container to tweak the scenario/faults being injected:

Example if --env-host is used:

export <parameter_name>=<value>

OR on the command line like example:

-e <VARIABLE>=<value> 

See the list of variables that apply to all scenarios here; they can be used/set in addition to these scenario-specific variables.

| Parameter | Description | Default |
| --- | --- | --- |
| TOTAL_CHAOS_DURATION | Set chaos duration (in sec) as desired | 180 |
| IO_BLOCK_SIZE | Size of each write in bytes. Size can be from 1 byte to 4m | 1m |
| IO_WORKERS | Number of stressors | 5 |
| IO_WRITE_BYTES | Writes N bytes for each hdd process. The size can be expressed as % of free space on the file system or in units of Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g | 10m |
| NAMESPACE | Namespace where the scenario container will be deployed | default |
| NODE_SELECTOR | Defines the node selector for choosing target nodes. If not specified, one schedulable node in the cluster will be chosen at random. If multiple nodes match the selector, all of them will be subjected to stress. If number-of-nodes is specified, that many nodes will be randomly selected from those identified by the selector. | "" |
| TAINTS | List of taints for which tolerations need to be created. Example: ["node-role.kubernetes.io/master:NoSchedule"] | [] |
| NODE_MOUNT_PATH | The local path on the node that will be mounted in the pod and filled by the scenario | "" |
| NUMBER_OF_NODES | Restricts the number of nodes selected by the selector | "" |
| IMAGE | The container image of the stress workload | quay.io/krkn-chaos/krkn-hog |

For example:

$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-custom-metrics-profile>:/root/kraken/config/metrics-aggregated.yaml -v <path-to-custom-alerts-profile>:/root/kraken/config/alerts -v <path-to-kube-config>:/root/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-io-hog

2.3 - IO Hog using Krknctl

krknctl run node-io-hog (optional: --<parameter>:<value> )

Can also set any global variable listed here

| Parameter | Description | Type | Default |
| --- | --- | --- | --- |
| --chaos-duration | Set chaos duration (in sec) as desired | number | 60 |
| --io-block-size | Size of each write in bytes. Size can be from 1 byte to 4 Megabytes (allowed suffixes are b, k, m) | string | 1m |
| --io-workers | Number of stressor instances | number | 5 |
| --io-write-bytes | Writes N bytes for each hdd process. The size can be expressed as % of free space on the file system or in units of Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g | string | 10m |
| --node-mount-path | The path on the node that will be mounted in the pod and where the IO hog will be executed. NOTE: be sure that kubelet has the rights to write in that node path | string | /root |
| --namespace | Namespace where the scenario container will be deployed | string | default |
| --node-selector | Node selector where the scenario containers will be scheduled, in the format "<key>=<value>". NOTE: a container will be instantiated on each selected node with the same scenario options. If left empty, a random node will be selected | string | |
| --taints | List of taints for which tolerations need to be created. For example ["node-role.kubernetes.io/master:NoSchedule"] | string | [] |
| --number-of-nodes | Restricts the number of nodes selected by the selector | number | |
| --image | The hog container image. Can be changed if the hog image is mirrored on a private repository | string | quay.io/krkn-chaos/krkn-hog |

To see all available scenario options

krknctl run node-io-hog --help 

3 - Memory Hog Scenario

Overview

The Memory Hog scenario is designed to create virtual memory pressure on one or more nodes in your Kubernetes/OpenShift cluster for a specified duration. This scenario helps you test how your cluster and applications respond to memory exhaustion and pressure conditions.

How It Works

The scenario deploys a stress workload pod on targeted nodes. These pods use stress-ng to allocate and consume memory resources according to your configuration. The workload runs for a specified duration, allowing you to observe how your cluster handles memory pressure, OOM (Out of Memory) conditions, and eviction scenarios.

When to Use

Use the Memory Hog scenario to:

  • Test your cluster’s behavior under memory pressure
  • Validate that memory resource limits and quotas are properly configured
  • Test pod eviction policies when nodes run out of memory
  • Verify that the kubelet correctly evicts pods based on memory pressure
  • Evaluate the impact of memory contention on application performance
  • Test whether your monitoring systems properly detect memory saturation
  • Simulate scenarios where rogue pods consume excessive memory without limits
  • Validate that memory-based horizontal pod autoscaling works correctly

Key Configuration Options

In addition to the common hog scenario options, Memory Hog scenarios support:

| Option | Type | Description |
| --- | --- | --- |
| memory-vm-bytes | string | The amount of memory that the scenario will attempt to allocate and consume. Can be specified as a percentage (%) of available memory or in absolute units (b, k, m, g for Bytes, KBytes, MBytes, GBytes) |

Example Values

  • memory-vm-bytes: "80%" - Consume 80% of available memory
  • memory-vm-bytes: "2g" - Consume 2 gigabytes of memory
  • memory-vm-bytes: "512m" - Consume 512 megabytes of memory

Usage

Select your deployment method to get started:

3.1 - Memory Hog Scenarios using Krkn

To enable this plugin add the pointer to the scenario input file scenarios/kube/memory-hog.yml as described in the Usage section.

memory-hog options

In addition to the common hog scenario options, you can specify the options below in your scenario configuration to set the amount of memory to hog on a given worker node.

| Option | Type | Description |
| --- | --- | --- |
| memory-vm-bytes | string | the amount of memory that the scenario will try to hog. The size can be specified as % of free space on the file system or in units of Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g |
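Combining the common options with memory-vm-bytes, a scenarios/kube/memory-hog.yml file might look like this sketch (values are illustrative; check the scenario file shipped with Krkn for the exact schema):

```yaml
# Illustrative memory-hog scenario -- values are examples, not defaults
duration: 60
workers: 1
hog-type: memory
memory-vm-bytes: 90%          # fraction of memory to allocate; or e.g. 2g, 512m
image: quay.io/krkn-chaos/krkn-hog
namespace: default
node-selector: ''             # empty: one schedulable node chosen at random
```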

Usage

To enable hog scenarios, edit the kraken config file: in the kraken -> chaos_scenarios section of the YAML structure, add a new element named hog_scenarios to the list, then add the desired scenario pointing to the scenario file.

kraken:
    ...
    chaos_scenarios:
        - hog_scenarios:
            - scenarios/kube/memory-hog.yml

3.2 - Memory Hog Scenario using Krkn-Hub

This scenario hogs the memory on the specified node of a Kubernetes/OpenShift cluster for a specified duration. For more information refer to the following documentation.

Run

If enabling Cerberus to monitor the cluster and pass/fail the scenario post chaos, refer to the docs. Make sure to start it before injecting the chaos and set the CERBERUS_ENABLED environment variable for the chaos injection container to autoconnect.

$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-memory-hog
$ podman logs -f <container_name or container_id> # Streams Kraken logs
$ podman inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario
$ docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-memory-hog
OR 
$ docker run -e <VARIABLE>=<value> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-memory-hog

$ docker logs -f <container_name or container_id> # Streams Kraken logs
$ docker inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario

Supported parameters

The following environment variables can be set on the host running the container to tweak the scenario/faults being injected:

Example if --env-host is used:

export <parameter_name>=<value>

OR on the command line like example:

-e <VARIABLE>=<value> 

See the list of variables that apply to all scenarios here; they can be used/set in addition to these scenario-specific variables.

| Parameter | Description | Default |
| --- | --- | --- |
| TOTAL_CHAOS_DURATION | Set chaos duration (in sec) as desired | 60 |
| MEMORY_CONSUMPTION_PERCENTAGE | Percentage (expressed with the suffix %) or amount (expressed with the suffix b, k, m or g) of memory to be consumed by the scenario | 90% |
| NUMBER_OF_WORKERS | Total number of workers (stress-ng threads) | 1 |
| NAMESPACE | Namespace where the scenario container will be deployed | default |
| NODE_SELECTOR | Defines the node selector for choosing target nodes. If not specified, one schedulable node in the cluster will be chosen at random. If multiple nodes match the selector, all of them will be subjected to stress. If number-of-nodes is specified, that many nodes will be randomly selected from those identified by the selector. | "" |
| TAINTS | List of taints for which tolerations need to be created. Example: ["node-role.kubernetes.io/master:NoSchedule"] | [] |
| NUMBER_OF_NODES | Restricts the number of nodes selected by the selector | "" |
| IMAGE | The container image of the stress workload | quay.io/krkn-chaos/krkn-hog |

For example:

$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-custom-metrics-profile>:/home/krkn/kraken/config/metrics-aggregated.yaml -v <path-to-custom-alerts-profile>:/home/krkn/kraken/config/alerts -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-memory-hog

Demo

You can find a link to a demo of the scenario here

3.3 - Memory Hog using Krknctl

krknctl run node-memory-hog (optional: --<parameter>:<value> )

Can also set any global variable listed here

| Parameter | Description | Type | Default |
| --- | --- | --- | --- |
| --chaos-duration | Set chaos duration (in sec) as desired | number | 60 |
| --memory-workers | Total number of workers (stress-ng threads) | number | 1 |
| --memory-consumption | Percentage (expressed with the suffix %) or amount (expressed with the suffix b, k, m or g) of memory to be consumed by the scenario | string | 90% |
| --namespace | Namespace where the scenario container will be deployed | string | default |
| --node-selector | Node selector where the scenario containers will be scheduled, in the format "<key>=<value>". NOTE: a container will be instantiated on each selected node with the same scenario options. If left empty, a random node will be selected | string | |
| --taints | List of taints for which tolerations need to be created. For example ["node-role.kubernetes.io/master:NoSchedule"] | string | [] |
| --number-of-nodes | Restricts the number of nodes selected by the selector | number | |
| --image | The hog container image. Can be changed if the hog image is mirrored on a private repository | string | quay.io/krkn-chaos/krkn-hog |

To see all available scenario options

krknctl run node-memory-hog --help