Hog Scenarios
Hog Scenarios background
Hog Scenarios are designed to push the limits of memory, CPU, or I/O on one or more nodes in your cluster. They also serve to evaluate whether your cluster can withstand rogue pods that excessively consume resources without any limits.
These scenarios involve deploying one or more workloads in the cluster. Based on the specific configuration, these workloads will use a predetermined amount of resources for a specified duration.
Config Options
Common options
| Option | Type | Description |
|---|---|---|
| duration | number | The duration of the stress test in seconds |
| workers | number (optional) | The number of threads instantiated by stress-ng. If left empty, the number of workers will match the number of available cores on the node. |
| hog-type | string (enum) | Can be cpu, memory or io |
| image | string | The container image of the stress workload (quay.io/krkn-chaos/krkn-hog) |
| namespace | string | The namespace where the stress workload will be deployed |
| node-selector | string (optional) | Defines the node selector for choosing target nodes. If not specified, one schedulable node in the cluster will be chosen at random. If multiple nodes match the selector, all of them will be subjected to stress. If number-of-nodes is specified, that many nodes will be randomly selected from those identified by the selector. |
| taints | list (optional, default []) | List of taints for which tolerations need to be created. Example: ["node-role.kubernetes.io/master:NoSchedule"] |
| number-of-nodes | number (optional) | Restricts the number of nodes selected by the selector |
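Putting the common options together, a minimal scenario file might look like the sketch below. This is illustrative only: the exact values, and the type-specific options you must add (covered per scenario below), depend on the hog type you choose.

```yaml
# Illustrative hog scenario using only the common options
duration: 60          # run the stress for 60 seconds
workers: ''           # empty: match the number of available cores on the node
hog-type: cpu         # cpu, memory or io
image: quay.io/krkn-chaos/krkn-hog
namespace: default
node-selector: ''     # empty: pick one schedulable node at random
taints: []
number-of-nodes: ''
```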
Available Scenarios
Hog scenarios:
Rollback Scenario Support
Krkn supports rollback for all available Hog scenarios. For more details, please refer to the Rollback Scenarios documentation.
1 - CPU Hog Scenario
Overview
The CPU Hog scenario is designed to create CPU pressure on one or more nodes in your Kubernetes/OpenShift cluster for a specified duration. This scenario helps you test how your cluster and applications respond to high CPU utilization.
How It Works
The scenario deploys a stress workload pod on targeted nodes. These pods use stress-ng to consume CPU resources according to your configuration. The workload runs for a specified duration and then terminates, allowing you to observe your cluster’s behavior under CPU stress.
When to Use
Use the CPU Hog scenario to:
- Test your cluster’s ability to handle CPU resource contention
- Validate that CPU resource limits and quotas are properly configured
- Evaluate the impact of CPU pressure on application performance
- Test whether your monitoring and alerting systems properly detect CPU saturation
- Verify that the Kubernetes scheduler correctly handles CPU-constrained nodes
- Simulate scenarios where rogue pods consume excessive CPU without limits
Key Configuration Options
In addition to the common hog scenario options, CPU Hog scenarios support:
| Option | Type | Description |
|---|---|---|
| cpu-load-percentage | number | The percentage of CPU that will be consumed by the hog |
| cpu-method | string | The CPU load strategy adopted by stress-ng (see the stress-ng documentation for available options) |
Usage
Select your deployment method to get started:
1.1 - CPU Hog Scenarios using Krkn
To enable this plugin, add a pointer to the scenario input file scenarios/kube/cpu-hog.yml as described in the Usage section.
cpu-hog options
In addition to the common hog scenario options, you can specify the options below in your scenario configuration to set the amount of CPU to hog on a certain worker node:
| Option | Type | Description |
|---|---|---|
| cpu-load-percentage | number | The amount of CPU that will be consumed by the hog |
| cpu-method | string | The CPU load strategy adopted by stress-ng; please refer to the stress-ng documentation for all the available options |
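For example, a CPU hog scenario file combining the common options with the CPU-specific ones could look like the sketch below. Values are illustrative, not a verbatim copy of scenarios/kube/cpu-hog.yml.

```yaml
duration: 60
workers: ''                 # empty: match available cores on the node
hog-type: cpu
image: quay.io/krkn-chaos/krkn-hog
namespace: default
cpu-load-percentage: 80     # consume 80% of each stressed core
cpu-method: all             # stress-ng CPU load strategy
node-selector: "node-role.kubernetes.io/worker="
taints: []
```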
Usage
To enable hog scenarios, edit the Kraken config file: go to the kraken -> chaos_scenarios section of the YAML structure, add a new element named hog_scenarios to the list, and point it to the cpu-hog.yml scenario file.
kraken:
...
chaos_scenarios:
- hog_scenarios:
- scenarios/kube/cpu-hog.yml
1.2 - CPU Hog Scenario using Krkn-Hub
This scenario hogs the CPU on the specified node of a Kubernetes/OpenShift cluster for a specified duration. For more information, refer to the following documentation.
Run
If enabling Cerberus to monitor the cluster and pass/fail the scenario post chaos, refer to the docs. Make sure to start it before injecting the chaos and set the CERBERUS_ENABLED environment variable for the chaos injection container to autoconnect.
$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-cpu-hog
$ podman logs -f <container_name or container_id> # Streams Kraken logs
$ podman inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario
Note
--env-host: This option is not available with the remote Podman client, including Mac and Windows (excluding WSL2) machines.
Without the --env-host option you'll have to set each environment variable on the podman command line like -e <VARIABLE>=<value>:
$ docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-cpu-hog
OR
$ docker run -e <VARIABLE>=<value> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-cpu-hog
$ docker logs -f <container_name or container_id> # Streams Kraken logs
$ docker inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario
Tip
Because the container runs with a non-root user, ensure the kube config is globally readable before mounting it in the container. You can achieve this with the following commands:
kubectl config view --flatten > ~/kubeconfig && chmod 444 ~/kubeconfig && docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v ~/kubeconfig:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:<scenario>
Supported parameters
The following environment variables can be set on the host running the container to tweak the scenario/faults being injected:
Example if --env-host is used:
export <parameter_name>=<value>
Or on the command line, for example:
-e <VARIABLE>=<value>
See the list of variables that apply to all scenarios here; these can be used/set in addition to the scenario-specific variables below.
| Parameter | Description | Default |
|---|---|---|
| TOTAL_CHAOS_DURATION | Set chaos duration (in sec) as desired | 60 |
| NODE_CPU_CORE | Number of cores (workers) of node CPU to be consumed | 2 |
| NODE_CPU_PERCENTAGE | Percentage of total cpu to be consumed | 50 |
| NAMESPACE | Namespace where the scenario container will be deployed | default |
| NODE_SELECTOR | Defines the node selector for choosing target nodes. If not specified, one schedulable node in the cluster will be chosen at random. If multiple nodes match the selector, all of them will be subjected to stress. If number-of-nodes is specified, that many nodes will be randomly selected from those identified by the selector. | "" |
| TAINTS | List of taints for which tolerations need to be created. Example: ["node-role.kubernetes.io/master:NoSchedule"] | [] |
| NUMBER_OF_NODES | Restricts the number of selected nodes by the selector | "" |
| IMAGE | The container image of the stress workload | quay.io/krkn-chaos/krkn-hog |
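For instance, when running with --env-host, the parameters above can be exported on the host before launching the container. The values below are illustrative, not recommendations:

```shell
# Illustrative values: stress 4 cores at 75% for 2 minutes on worker nodes
export TOTAL_CHAOS_DURATION=120
export NODE_CPU_CORE=4
export NODE_CPU_PERCENTAGE=75
export NODE_SELECTOR="node-role.kubernetes.io/worker="
```

The podman run command shown above then picks these up via --env-host.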
Note
In case of using a custom metrics profile or alerts profile when CAPTURE_METRICS or ENABLE_ALERTS is enabled, mount the profiles from the host on which the container is run using podman/docker under /home/krkn/kraken/config/metrics-aggregated.yaml and /home/krkn/kraken/config/alerts. For example:
$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-custom-metrics-profile>:/home/krkn/kraken/config/metrics-aggregated.yaml -v <path-to-custom-alerts-profile>:/home/krkn/kraken/config/alerts -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-cpu-hog
Demo
You can find a link to a demo of the scenario here
1.3 - Node CPU Hog using Krknctl
krknctl run node-cpu-hog (optional: --<parameter>:<value> )
You can also set any global variable listed here.
| Parameter | Description | Type | Default |
|---|---|---|---|
| --chaos-duration | Set chaos duration (in secs) as desired | number | 60 |
| --cores | Number of cores (workers) of node CPU to be consumed | number | |
| --cpu-percentage | Percentage of total CPU to be consumed | number | 50 |
| --namespace | Namespace where the scenario container will be deployed | string | default |
| --node-selector | Node selector where the scenario containers will be scheduled, in the format key=value. NOTE: a container will be instantiated on each selected node with the same scenario options. If left empty, a random node will be selected. | string | |
| --taints | List of taints for which tolerations need to be created. For example ["node-role.kubernetes.io/master:NoSchedule"] | string | [] |
| --number-of-nodes | Restricts the number of nodes selected by the selector | number | |
| --image | The hog container image. Can be changed if the hog image is mirrored on a private repository | string | quay.io/krkn-chaos/krkn-hog |
To see all available scenario options:
krknctl run node-cpu-hog --help
2 - IO Hog Scenario
Overview
The IO Hog scenario is designed to create disk I/O pressure on one or more nodes in your Kubernetes/OpenShift cluster for a specified duration. This scenario helps you test how your cluster and applications respond to high disk I/O utilization and storage-related bottlenecks.
How It Works
The scenario deploys a stress workload pod on targeted nodes. These pods use stress-ng to perform intensive write operations to disk, consuming I/O resources according to your configuration. The scenario supports attaching node paths to the pod as a hostPath volume or using custom pod volume definitions, allowing you to test I/O pressure on specific storage targets.
When to Use
Use the IO Hog scenario to:
- Test your cluster’s behavior under disk I/O pressure
- Validate that I/O resource limits are properly configured
- Evaluate the impact of disk I/O contention on application performance
- Test whether your monitoring systems properly detect disk saturation
- Verify that storage performance meets requirements under stress
- Simulate scenarios where pods perform excessive disk writes
- Test the resilience of persistent volume configurations
- Validate disk I/O quotas and rate limiting
Key Configuration Options
In addition to the common hog scenario options, IO Hog scenarios support:
| Option | Type | Description |
|---|---|---|
| io-block-size | string | The size of each individual write operation performed by the stressor |
| io-write-bytes | string | The total amount of data that will be written by the stressor. Can be specified as a percentage (%) of free space on the filesystem or in absolute units (b, k, m, g for Bytes, KBytes, MBytes, GBytes) |
| io-target-pod-folder | string | The path within the pod where the volume will be mounted |
| io-target-pod-volume | dictionary | The pod volume definition that will be stressed by the scenario (typically a hostPath volume) |
Modifying the structure of io-target-pod-volume might alter how the hog operates, potentially rendering it ineffective.
Example Values
- io-block-size: "1m" - Write in 1 megabyte blocks
- io-block-size: "4k" - Write in 4 kilobyte blocks
- io-write-bytes: "50%" - Write data equal to 50% of available free space
- io-write-bytes: "10g" - Write 10 gigabytes of data
Usage
Select your deployment method to get started:
2.1 - IO Hog Scenarios using Krkn
To enable this plugin, add a pointer to the scenario input file scenarios/kube/io-hog.yml as described in the Usage section.
io-hog options
In addition to the common hog scenario options, you can specify the options below in your scenario configuration to target specific pod IO:
| Option | Type | Description |
|---|---|---|
| io-block-size | string | The block size written by the stressor |
| io-write-bytes | string | The total amount of data that will be written by the stressor. The size can be specified as % of free space on the file system or in units of Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g |
| io-target-pod-folder | string | The folder where the volume will be mounted in the pod |
| io-target-pod-volume | dictionary | The pod volume definition that will be stressed by the scenario |
Modifying the structure of io-target-pod-volume might alter how the hog operates, potentially rendering it ineffective.
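As an illustration, an IO hog scenario file might combine the common options with the IO-specific ones as sketched below. The field values and the hostPath volume definition are illustrative, not a verbatim copy of scenarios/kube/io-hog.yml.

```yaml
duration: 180
workers: 5
hog-type: io
image: quay.io/krkn-chaos/krkn-hog
namespace: default
io-block-size: 1m            # size of each write operation
io-write-bytes: 10m          # total data written per stressor
io-target-pod-folder: /hog-data
io-target-pod-volume:        # typically a hostPath volume; see the note above about its structure
  name: node-volume
  hostPath:
    path: /root
node-selector: ''
taints: []
```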
Usage
To enable hog scenarios, edit the Kraken config file: go to the kraken -> chaos_scenarios section of the YAML structure, add a new element named hog_scenarios to the list, and point it to the io-hog.yml scenario file.
kraken:
...
chaos_scenarios:
- hog_scenarios:
- scenarios/kube/io-hog.yml
2.2 - IO Hog Scenario using Krkn-Hub
This scenario hogs the IO on the specified node of a Kubernetes/OpenShift cluster for a specified duration. For more information, refer to the following documentation.
Run
If enabling Cerberus to monitor the cluster and pass/fail the scenario post chaos, refer to the docs. Make sure to start it before injecting the chaos and set the CERBERUS_ENABLED environment variable for the chaos injection container to autoconnect.
$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-kube-config>:/root/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-io-hog
$ podman logs -f <container_name or container_id> # Streams Kraken logs
$ podman inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario
Note
--env-host: This option is not available with the remote Podman client, including Mac and Windows (excluding WSL2) machines.
Without the --env-host option you'll have to set each environment variable on the podman command line like -e <VARIABLE>=<value>:
$ docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v <path-to-kube-config>:/root/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-io-hog
OR
$ docker run -e <VARIABLE>=<value> --net=host --pull=always -v <path-to-kube-config>:/root/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-io-hog
$ docker logs -f <container_name or container_id> # Streams Kraken logs
$ docker inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario
Tip
Because the container runs with a non-root user, ensure the kube config is globally readable before mounting it in the container. You can achieve this with the following commands:
kubectl config view --flatten > ~/kubeconfig && chmod 444 ~/kubeconfig && docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v ~/kubeconfig:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:<scenario>
Supported parameters
The following environment variables can be set on the host running the container to tweak the scenario/faults being injected:
Example if --env-host is used:
export <parameter_name>=<value>
Or on the command line, for example:
-e <VARIABLE>=<value>
See the list of variables that apply to all scenarios here; these can be used/set in addition to the scenario-specific variables below.
| Parameter | Description | Default |
|---|---|---|
| TOTAL_CHAOS_DURATION | Set chaos duration (in sec) as desired | 180 |
| IO_BLOCK_SIZE | Size of each write in bytes. Size can be from 1 byte to 4m | 1m |
| IO_WORKERS | Number of stressors | 5 |
| IO_WRITE_BYTES | Writes N bytes for each hdd process. The size can be expressed as % of free space on the file system or in units of Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g | 10m |
| NAMESPACE | Namespace where the scenario container will be deployed | default |
| NODE_SELECTOR | Defines the node selector for choosing target nodes. If not specified, one schedulable node in the cluster will be chosen at random. If multiple nodes match the selector, all of them will be subjected to stress. If number-of-nodes is specified, that many nodes will be randomly selected from those identified by the selector. | "" |
| TAINTS | List of taints for which tolerations need to be created. Example: ["node-role.kubernetes.io/master:NoSchedule"] | [] |
| NODE_MOUNT_PATH | The local path on the node that will be mounted in the pod and filled by the scenario | "" |
| NUMBER_OF_NODES | Restricts the number of nodes selected by the selector | "" |
| IMAGE | The container image of the stress workload | quay.io/krkn-chaos/krkn-hog |
Note
In case of using a custom metrics profile or alerts profile when CAPTURE_METRICS or ENABLE_ALERTS is enabled, mount the profiles from the host on which the container is run using podman/docker under /home/krkn/kraken/config/metrics-aggregated.yaml and /home/krkn/kraken/config/alerts. For example:
$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-custom-metrics-profile>:/root/kraken/config/metrics-aggregated.yaml -v <path-to-custom-alerts-profile>:/root/kraken/config/alerts -v <path-to-kube-config>:/root/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-io-hog
2.3 - IO Hog using Krknctl
krknctl run node-io-hog (optional: --<parameter>:<value> )
You can also set any global variable listed here.
| Parameter | Description | Type | Default |
|---|---|---|---|
| --chaos-duration | Set chaos duration (in sec) as desired | number | 60 |
| --io-block-size | Size of each write in bytes. Size can be from 1 byte to 4 Megabytes (allowed suffixes are b, k, m) | string | 1m |
| --io-workers | Number of stressor instances | number | 5 |
| --io-write-bytes | Writes N bytes for each hdd process. The size can be expressed as % of free space on the file system or in units of Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g | string | 10m |
| --node-mount-path | The path on the node that will be mounted in the pod and where the io hog will be executed. NOTE: be sure that kubelet has the rights to write in that node path | string | /root |
| --namespace | Namespace where the scenario container will be deployed | string | default |
| --node-selector | Node selector where the scenario containers will be scheduled, in the format key=value. NOTE: a container will be instantiated on each selected node with the same scenario options. If left empty, a random node will be selected. | string | |
| --taints | List of taints for which tolerations need to be created. For example ["node-role.kubernetes.io/master:NoSchedule"] | string | [] |
| --number-of-nodes | Restricts the number of nodes selected by the selector | number | |
| --image | The hog container image. Can be changed if the hog image is mirrored on a private repository | string | quay.io/krkn-chaos/krkn-hog |
To see all available scenario options:
krknctl run node-io-hog --help
3 - Memory Hog Scenario
Overview
The Memory Hog scenario is designed to create virtual memory pressure on one or more nodes in your Kubernetes/OpenShift cluster for a specified duration. This scenario helps you test how your cluster and applications respond to memory exhaustion and pressure conditions.
How It Works
The scenario deploys a stress workload pod on targeted nodes. These pods use stress-ng to allocate and consume memory resources according to your configuration. The workload runs for a specified duration, allowing you to observe how your cluster handles memory pressure, OOM (Out of Memory) conditions, and eviction scenarios.
When to Use
Use the Memory Hog scenario to:
- Test your cluster’s behavior under memory pressure
- Validate that memory resource limits and quotas are properly configured
- Test pod eviction policies when nodes run out of memory
- Verify that the kubelet correctly evicts pods based on memory pressure
- Evaluate the impact of memory contention on application performance
- Test whether your monitoring systems properly detect memory saturation
- Simulate scenarios where rogue pods consume excessive memory without limits
- Validate that memory-based horizontal pod autoscaling works correctly
Key Configuration Options
In addition to the common hog scenario options, Memory Hog scenarios support:
| Option | Type | Description |
|---|---|---|
| memory-vm-bytes | string | The amount of memory that the scenario will attempt to allocate and consume. Can be specified as a percentage (%) of available memory or in absolute units (b, k, m, g for Bytes, KBytes, MBytes, GBytes) |
Example Values
- memory-vm-bytes: "80%" - Consume 80% of available memory
- memory-vm-bytes: "2g" - Consume 2 gigabytes of memory
- memory-vm-bytes: "512m" - Consume 512 megabytes of memory
Usage
Select your deployment method to get started:
3.1 - Memory Hog Scenarios using Krkn
To enable this plugin, add a pointer to the scenario input file scenarios/kube/memory-hog.yml as described in the Usage section.
memory-hog options
In addition to the common hog scenario options, you can specify the option below in your scenario configuration to set the amount of memory to hog on a certain worker node:
| Option | Type | Description |
|---|---|---|
| memory-vm-bytes | string | The amount of memory that the scenario will try to hog. The size can be specified as % of available memory or in units of Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g |
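For example, a memory hog scenario file combining the common options with memory-vm-bytes could look like the sketch below. Values are illustrative, not a verbatim copy of scenarios/kube/memory-hog.yml.

```yaml
duration: 60
workers: 1
hog-type: memory
image: quay.io/krkn-chaos/krkn-hog
namespace: default
memory-vm-bytes: 90%         # or an absolute amount such as 2g
node-selector: ''
taints: []
```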
Usage
To enable hog scenarios, edit the Kraken config file: go to the kraken -> chaos_scenarios section of the YAML structure, add a new element named hog_scenarios to the list, and point it to the memory-hog.yml scenario file.
kraken:
...
chaos_scenarios:
- hog_scenarios:
- scenarios/kube/memory-hog.yml
3.2 - Memory Hog Scenario using Krkn-Hub
This scenario hogs the memory on the specified node of a Kubernetes/OpenShift cluster for a specified duration. For more information, refer to the following documentation.
Run
If enabling Cerberus to monitor the cluster and pass/fail the scenario post chaos, refer to the docs. Make sure to start it before injecting the chaos and set the CERBERUS_ENABLED environment variable for the chaos injection container to autoconnect.
$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-memory-hog
$ podman logs -f <container_name or container_id> # Streams Kraken logs
$ podman inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario
Note
--env-host: This option is not available with the remote Podman client, including Mac and Windows (excluding WSL2) machines.
Without the --env-host option you'll have to set each environment variable on the podman command line like -e <VARIABLE>=<value>:
$ docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-memory-hog
OR
$ docker run -e <VARIABLE>=<value> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-memory-hog
$ docker logs -f <container_name or container_id> # Streams Kraken logs
$ docker inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario
Tip
Because the container runs with a non-root user, ensure the kube config is globally readable before mounting it in the container. You can achieve this with the following commands:
kubectl config view --flatten > ~/kubeconfig && chmod 444 ~/kubeconfig && docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v ~/kubeconfig:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:<scenario>
Supported parameters
The following environment variables can be set on the host running the container to tweak the scenario/faults being injected:
Example if --env-host is used:
export <parameter_name>=<value>
Or on the command line, for example:
-e <VARIABLE>=<value>
See the list of variables that apply to all scenarios here; these can be used/set in addition to the scenario-specific variables below.
| Parameter | Description | Default |
|---|---|---|
| TOTAL_CHAOS_DURATION | Set chaos duration (in sec) as desired | 60 |
| MEMORY_CONSUMPTION_PERCENTAGE | Percentage (expressed with the suffix %) or amount (expressed with the suffix b, k, m or g) of memory to be consumed by the scenario | 90% |
| NUMBER_OF_WORKERS | Total number of workers (stress-ng threads) | 1 |
| NAMESPACE | Namespace where the scenario container will be deployed | default |
| NODE_SELECTOR | Defines the node selector for choosing target nodes. If not specified, one schedulable node in the cluster will be chosen at random. If multiple nodes match the selector, all of them will be subjected to stress. If number-of-nodes is specified, that many nodes will be randomly selected from those identified by the selector. | "" |
| TAINTS | List of taints for which tolerations need to be created. Example: ["node-role.kubernetes.io/master:NoSchedule"] | [] |
| NUMBER_OF_NODES | Restricts the number of nodes selected by the selector | "" |
| IMAGE | The container image of the stress workload | quay.io/krkn-chaos/krkn-hog |
Note
In case of using a custom metrics profile or alerts profile when CAPTURE_METRICS or ENABLE_ALERTS is enabled, mount the profiles from the host on which the container is run using podman/docker under /home/krkn/kraken/config/metrics-aggregated.yaml and /home/krkn/kraken/config/alerts. For example:
$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-custom-metrics-profile>:/home/krkn/kraken/config/metrics-aggregated.yaml -v <path-to-custom-alerts-profile>:/home/krkn/kraken/config/alerts -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-memory-hog
Demo
You can find a link to a demo of the scenario here
3.3 - Memory Hog using Krknctl
krknctl run node-memory-hog (optional: --<parameter>:<value> )
You can also set any global variable listed here.
| Parameter | Description | Type | Default |
|---|---|---|---|
| --chaos-duration | Set chaos duration (in sec) as desired | number | 60 |
| --memory-workers | Total number of workers (stress-ng threads) | number | 1 |
| --memory-consumption | Percentage (expressed with the suffix %) or amount (expressed with the suffix b, k, m or g) of memory to be consumed by the scenario | string | 90% |
| --namespace | Namespace where the scenario container will be deployed | string | default |
| --node-selector | Node selector where the scenario containers will be scheduled, in the format key=value. NOTE: a container will be instantiated on each selected node with the same scenario options. If left empty, a random node will be selected. | string | |
| --taints | List of taints for which tolerations need to be created. For example ["node-role.kubernetes.io/master:NoSchedule"] | string | [] |
| --number-of-nodes | Restricts the number of nodes selected by the selector | number | |
| --image | The hog container image. Can be changed if the hog image is mirrored on a private repository | string | quay.io/krkn-chaos/krkn-hog |
To see all available scenario options:
krknctl run node-memory-hog --help