Memory Hog Scenario
Overview
The Memory Hog scenario is designed to create virtual memory pressure on one or more nodes in your Kubernetes/OpenShift cluster for a specified duration. This scenario helps you test how your cluster and applications respond to memory exhaustion and pressure conditions.
How It Works
The scenario deploys a stress workload pod on targeted nodes. These pods use stress-ng to allocate and consume memory resources according to your configuration. The workload runs for a specified duration, allowing you to observe how your cluster handles memory pressure, OOM (Out of Memory) conditions, and eviction scenarios.
When to Use
Use the Memory Hog scenario to:
- Test your cluster’s behavior under memory pressure
- Validate that memory resource limits and quotas are properly configured
- Test pod eviction policies when nodes run out of memory
- Verify that the kubelet correctly evicts pods based on memory pressure
- Evaluate the impact of memory contention on application performance
- Test whether your monitoring systems properly detect memory saturation
- Simulate scenarios where rogue pods consume excessive memory without limits
- Validate that memory-based horizontal pod autoscaling works correctly
Key Configuration Options
In addition to the common hog scenario options, Memory Hog scenarios support:
| Option | Type | Description |
|---|---|---|
| memory-vm-bytes | string | The amount of memory that the scenario will attempt to allocate and consume. Can be specified as a percentage (suffix `%`) of available memory or in absolute units (`b`, `k`, `m`, `g` for Bytes, KBytes, MBytes, GBytes) |
Example Values
- `memory-vm-bytes: "80%"` - Consume 80% of available memory
- `memory-vm-bytes: "2g"` - Consume 2 gigabytes of memory
- `memory-vm-bytes: "512m"` - Consume 512 megabytes of memory
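To sanity-check a value before putting it in a scenario file, the absolute-unit forms can be converted to a byte count with a small helper. This is an illustrative snippet, not part of krkn; percentages are left untouched because they are resolved against the node's available memory at run time.

```shell
# Convert an absolute memory-vm-bytes value (b/k/m/g suffix) to bytes.
# Illustrative helper, not shipped with krkn.
to_bytes() {
  v=$1
  case $v in
    *%) echo "$v" ;;                              # percentage: resolved on the node
    *b) echo "${v%b}" ;;                          # plain bytes
    *k) echo $(( ${v%k} * 1024 )) ;;              # KBytes
    *m) echo $(( ${v%m} * 1024 * 1024 )) ;;       # MBytes
    *g) echo $(( ${v%g} * 1024 * 1024 * 1024 )) ;; # GBytes
  esac
}

to_bytes 2g     # 2147483648
to_bytes 512m   # 536870912
to_bytes 80%    # 80% (left as-is)
```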
Usage
Select your deployment method to get started:
1 - Memory Hog Scenarios using Krkn
To enable this plugin, add the pointer to the scenario input file `scenarios/kube/memory-hog.yml` as described in the Usage section.
memory-hog options
In addition to the common hog scenario options, you can specify the options below in your scenario configuration to specify the amount of memory to hog on a given worker node.
| Option | Type | Description |
|---|---|---|
| memory-vm-bytes | string | The amount of memory that the scenario will try to hog. The size can be specified as a percentage of free memory (suffix `%`) or in units of Bytes, KBytes, MBytes, or GBytes using the suffix `b`, `k`, `m`, or `g` |
Usage
To enable hog scenarios, edit the kraken config file: in the kraken -> chaos_scenarios section of the YAML structure, add a new element to the list named hog_scenarios pointing to the `scenarios/kube/memory-hog.yml` file.
```yaml
kraken:
  ...
  chaos_scenarios:
    - hog_scenarios:
        - scenarios/kube/memory-hog.yml
```
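For reference, a minimal `scenarios/kube/memory-hog.yml` might look like the following sketch. `memory-vm-bytes` is documented above; the remaining field names follow the common hog scenario options and are shown here as an assumption, so check the sample file shipped with krkn for the authoritative schema.

```yaml
# Sketch of a memory hog scenario file. Field names other than
# memory-vm-bytes are assumptions based on the common hog options.
duration: 60                          # chaos duration in seconds
workers: 1                            # stress-ng worker threads
hog-type: memory
image: quay.io/krkn-chaos/krkn-hog    # stress workload image
namespace: default
memory-vm-bytes: "80%"                # consume 80% of available memory
node-selector: "node-role.kubernetes.io/worker="
```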
2 - Memory Hog Scenario using Krkn-Hub
This scenario hogs the memory on the specified node of a Kubernetes/OpenShift cluster for a specified duration. For more information, refer to the following documentation.
Run
If enabling Cerberus to monitor the cluster and pass/fail the scenario post-chaos, refer to the docs. Make sure to start it before injecting the chaos, and set the CERBERUS_ENABLED environment variable for the chaos injection container to autoconnect.
```shell
$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-memory-hog
$ podman logs -f <container_name or container_id> # Streams Kraken logs
$ podman inspect <container_name or container_id> --format "{{.State.ExitCode}}" # Outputs exit code, which can be considered the pass/fail result for the scenario
```
Note
`--env-host`: This option is not available with the remote Podman client, including Mac and Windows (excluding WSL2) machines.
Without the `--env-host` option you'll have to set each environment variable on the podman command line like `-e <VARIABLE>=<value>`:
```shell
$ docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-memory-hog
```
OR
```shell
$ docker run -e <VARIABLE>=<value> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-memory-hog
$ docker logs -f <container_name or container_id> # Streams Kraken logs
$ docker inspect <container_name or container_id> --format "{{.State.ExitCode}}" # Outputs exit code, which can be considered the pass/fail result for the scenario
```
Tip
Because the container runs with a non-root user, ensure the kube config is globally readable before mounting it in the container. You can achieve this with the following commands:
```shell
kubectl config view --flatten > ~/kubeconfig && chmod 444 ~/kubeconfig && docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v ~/kubeconfig:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:<scenario>
```
Supported parameters
The following environment variables can be set on the host running the container to tweak the scenario/faults being injected:
Example if `--env-host` is used:
```shell
export <parameter_name>=<value>
```
OR on the command line, for example:
```shell
-e <VARIABLE>=<value>
```
See the list of variables that apply to all scenarios here; these can be used/set in addition to the scenario-specific variables below.
| Parameter | Description | Default |
|---|---|---|
| TOTAL_CHAOS_DURATION | Set chaos duration (in sec) as desired | 60 |
| MEMORY_CONSUMPTION_PERCENTAGE | Percentage (expressed with the suffix %) or amount (expressed with the suffix b, k, m or g) of memory to be consumed by the scenario | 90% |
| NUMBER_OF_WORKERS | Total number of workers (stress-ng threads) | 1 |
| NAMESPACE | Namespace where the scenario container will be deployed | default |
| NODE_SELECTOR | Defines the node selector for choosing target nodes. If not specified, one schedulable node in the cluster is chosen at random. If multiple nodes match the selector, all of them are subjected to stress. If NUMBER_OF_NODES is specified, that many nodes are randomly selected from those identified by the selector | "" |
| TAINTS | List of taints for which tolerations need to be created. Example: ["node-role.kubernetes.io/master:NoSchedule"] | [] |
| NUMBER_OF_NODES | Restricts the number of nodes selected by the selector | "" |
| IMAGE | The container image of the stress workload | quay.io/krkn-chaos/krkn-hog |
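For example, when `--env-host` is used, the scenario can be tuned by exporting the variables above on the host before starting the container. The values below are illustrative; any parameter not exported keeps its default.

```shell
# Illustrative values; any parameter left unset keeps its default
export TOTAL_CHAOS_DURATION=120           # run the hog for 2 minutes
export MEMORY_CONSUMPTION_PERCENTAGE=80%  # consume 80% of node memory
export NUMBER_OF_WORKERS=2                # two stress-ng workers
```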
Note
When using a custom metrics profile or alerts profile with CAPTURE_METRICS or ENABLE_ALERTS enabled, mount the profiles from the host on which the container is run (using podman/docker) at /home/krkn/kraken/config/metrics-aggregated.yaml and /home/krkn/kraken/config/alerts. For example:
```shell
$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-custom-metrics-profile>:/home/krkn/kraken/config/metrics-aggregated.yaml -v <path-to-custom-alerts-profile>:/home/krkn/kraken/config/alerts -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-memory-hog
```
Demo
A demo of the scenario is available here.
3 - Memory Hog using Krknctl
```shell
krknctl run node-memory-hog (optional: --<parameter>:<value>)
```
You can also set any global variable listed here.
| Parameter | Description | Type | Default |
|---|---|---|---|
| --chaos-duration | Set chaos duration (in sec) as desired | number | 60 |
| --memory-workers | Total number of workers (stress-ng threads) | number | 1 |
| --memory-consumption | Percentage (expressed with the suffix %) or amount (expressed with the suffix b, k, m or g) of memory to be consumed by the scenario | string | 90% |
| --namespace | Namespace where the scenario container will be deployed | string | default |
| --node-selector | Node selector where the scenario containers will be scheduled, in the format "key=value". NOTE: a container will be instantiated on each node selected, with the same scenario options. If left empty, a random node will be selected | string | |
| --taints | List of taints for which tolerations need to be created. For example ["node-role.kubernetes.io/master:NoSchedule"] | string | [] |
| --number-of-nodes | Restricts the number of nodes selected by the selector | number | |
| --image | The hog container image. Can be changed if the hog image is mirrored on a private repository | string | quay.io/krkn-chaos/krkn-hog |
To see all available scenario options:
```shell
krknctl run node-memory-hog --help
```