Memory Hog Scenario

    Overview

    The Memory Hog scenario is designed to create virtual memory pressure on one or more nodes in your Kubernetes/OpenShift cluster for a specified duration. This scenario helps you test how your cluster and applications respond to memory exhaustion and pressure conditions.

    How It Works

    The scenario deploys a stress workload pod on targeted nodes. These pods use stress-ng to allocate and consume memory resources according to your configuration. The workload runs for a specified duration, allowing you to observe how your cluster handles memory pressure, OOM (Out of Memory) conditions, and eviction scenarios.
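    Under the hood the workload behaves roughly like a stress-ng virtual-memory stressor. A minimal sketch of a comparable invocation (illustrative only; the exact flags baked into the krkn-hog image may differ):

    # one VM stressor worker allocating 80% of available memory for 60 seconds
    stress-ng --vm 1 --vm-bytes 80% --timeout 60s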

    When to Use

    Use the Memory Hog scenario to:

    • Test your cluster’s behavior under memory pressure
    • Validate that memory resource limits and quotas are properly configured
    • Test pod eviction policies when nodes run out of memory
    • Verify that the kubelet correctly evicts pods based on memory pressure
    • Evaluate the impact of memory contention on application performance
    • Test whether your monitoring systems properly detect memory saturation
    • Simulate scenarios where rogue pods consume excessive memory without limits
    • Validate that memory-based horizontal pod autoscaling works correctly
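    For instance, when validating limits and eviction behavior, the scenario is typically run against nodes hosting pods with explicit memory requests and limits. A minimal, hypothetical pod spec fragment such pods might carry:

    resources:
      requests:
        memory: "256Mi"
      limits:
        memory: "512Mi"   # container becomes an OOM-kill/eviction candidate under node memory pressure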

    Key Configuration Options

    In addition to the common hog scenario options, Memory Hog scenarios support:

    | Option | Type | Description |
    |--------|------|-------------|
    | memory-vm-bytes | string | The amount of memory that the scenario will attempt to allocate and consume. Can be specified as a percentage (%) of available memory or in absolute units (b, k, m, g for Bytes, KBytes, MBytes, GBytes). |

    Example Values

    • memory-vm-bytes: "80%" - Consume 80% of available memory
    • memory-vm-bytes: "2g" - Consume 2 gigabytes of memory
    • memory-vm-bytes: "512m" - Consume 512 megabytes of memory

    How to Run Memory Hog Scenarios

    Choose your preferred method to run memory hog scenarios:

    To enable this plugin, add a pointer to the scenario input file scenarios/kube/memory-hog.yml as described in the Usage section below.

    Example scenario file: memory-hog.yml
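    The upstream file is not reproduced here, but a minimal sketch based on the options documented on this page might look as follows (the hog-type field is an assumption from the common hog scenario format; the other field names mirror the parameters listed later on this page):

    duration: 60
    workers: 1
    hog-type: memory              # assumed discriminator from the common hog scenario format
    image: quay.io/krkn-chaos/krkn-hog
    namespace: default
    memory-vm-bytes: 90%
    node-selector: ""
    number-of-nodes: ""
    taints: []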

    memory-hog options

    In addition to the common hog scenario options, you can set the option below in your scenario configuration to specify the amount of memory to hog on a given worker node:

    | Option | Type | Description |
    |--------|------|-------------|
    | memory-vm-bytes | string | The amount of memory that the scenario will try to hog. The size can be specified as a percentage (%) of available memory or in units of Bytes, KBytes, MBytes, and GBytes using the suffix b, k, m, or g. |

    Usage

    To enable hog scenarios, edit the Kraken config file: in the kraken -> chaos_scenarios section of the YAML structure, add a new element to the list named hog_scenarios, then add the desired scenario pointing to the memory-hog.yml file.

    kraken:
        ...
        chaos_scenarios:
            - hog_scenarios:
                - scenarios/kube/memory-hog.yml
    

    Run

    python run_kraken.py --config config/config.yaml
    

    This scenario hogs the memory on the specified node of a Kubernetes/OpenShift cluster for a specified duration. For more information, refer to the following documentation.

    Run

    If enabling Cerberus to monitor the cluster and pass/fail the scenario post-chaos, refer to the Cerberus docs. Make sure to start it before injecting the chaos, and set the CERBERUS_ENABLED environment variable for the chaos injection container so it autoconnects.
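    For example, on the host (picked up via --env-host) or passed to the container with -e. CERBERUS_URL is shown here as an assumed companion variable pointing at your running Cerberus instance; verify the exact variable name against the Cerberus docs:

    export CERBERUS_ENABLED=True
    export CERBERUS_URL=http://0.0.0.0:8080   # assumption: adjust to where Cerberus is listening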

    $ podman run \
      --name=<container_name> \
      --net=host \
      --pull=always \
      --env-host=true \
      -v <path-to-kube-config>:/home/krkn/.kube/config:Z \
      -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-memory-hog
    $ podman logs -f <container_name or container_id> # Streams Kraken logs
    $ podman inspect <container-name or container-id> \
      --format "{{.State.ExitCode}}" # Outputs the exit code, which can be considered as pass/fail for the scenario
    
    $ docker run $(./get_docker_params.sh) \
      --name=<container_name> \
      --net=host \
      --pull=always \
      -v <path-to-kube-config>:/home/krkn/.kube/config:Z \
      -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-memory-hog
    $ docker run \
      -e <VARIABLE>=<value> \
      --net=host \
      --pull=always \
      -v <path-to-kube-config>:/home/krkn/.kube/config:Z \
      -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-memory-hog
    
    $ docker logs -f <container_name or container_id> # Streams Kraken logs
    $ docker inspect <container-name or container-id> \
      --format "{{.State.ExitCode}}" # Outputs the exit code, which can be considered as pass/fail for the scenario
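    A hypothetical CI gating snippet built on the same command, assuming exit code 0 means the scenario passed:

    # Treat exit code 0 as pass, anything else as fail
    rc=$(docker inspect <container_name> --format "{{.State.ExitCode}}")
    [ "$rc" -eq 0 ] && echo "scenario passed" || echo "scenario failed"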
    

    Supported parameters

    The following environment variables can be set on the host running the container to tweak the scenario/faults being injected:

    Example if --env-host is used:

    export <parameter_name>=<value>
    

    OR on the command line like example:

    -e <VARIABLE>=<value>
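
    For example, a two-minute run that hogs 4 GiB of memory using two stress-ng workers (variables as documented in the table below):

    export TOTAL_CHAOS_DURATION=120
    export MEMORY_CONSUMPTION_PERCENTAGE=4g
    export NUMBER_OF_WORKERS=2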
    

    See the list of variables that apply to all scenarios here; these can be used/set in addition to the scenario-specific variables below.

    | Parameter | Description | Default |
    |-----------|-------------|---------|
    | TOTAL_CHAOS_DURATION | Set chaos duration (in sec) as desired | 60 |
    | MEMORY_CONSUMPTION_PERCENTAGE | Percentage (expressed with the suffix %) or amount (expressed with the suffix b, k, m, or g) of memory to be consumed by the scenario | 90% |
    | NUMBER_OF_WORKERS | Total number of workers (stress-ng threads) | 1 |
    | NAMESPACE | Namespace where the scenario container will be deployed | default |
    | NODE_SELECTOR | Defines the node selector for choosing target nodes. If not specified, one schedulable node in the cluster is chosen at random. If multiple nodes match the selector, all of them are subjected to stress. If NUMBER_OF_NODES is specified, that many nodes are randomly selected from those identified by the selector. | "" |
    | TAINTS | List of taints for which tolerations need to be created. Example: ["node-role.kubernetes.io/master:NoSchedule"] | [] |
    | NUMBER_OF_NODES | Restricts the number of nodes selected by the selector | "" |
    | IMAGE | The container image of the stress workload | quay.io/krkn-chaos/krkn-hog |

    For example:

    $ podman run \
      --name=<container_name> \
      --net=host \
      --pull=always \
      --env-host=true \
      -v <path-to-custom-metrics-profile>:/home/krkn/kraken/config/metrics-aggregated.yaml \
      -v <path-to-custom-alerts-profile>:/home/krkn/kraken/config/alerts \
      -v <path-to-kube-config>:/home/krkn/.kube/config:Z \
      -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-memory-hog
    
    krknctl run node-memory-hog (optional: --<parameter>:<value> )
    

    You can also set any global variable listed here.

    | Parameter | Description | Type | Default |
    |-----------|-------------|------|---------|
    | --chaos-duration | Set chaos duration (in sec) as desired | number | 60 |
    | --memory-workers | Total number of workers (stress-ng threads) | number | 1 |
    | --memory-consumption | Percentage (expressed with the suffix %) or amount (expressed with the suffix b, k, m, or g) of memory to be consumed by the scenario | string | 90% |
    | --namespace | Namespace where the scenario container will be deployed | string | default |
    | --node-selector | Node selector where the scenario containers will be scheduled, in the format "key=value". NOTE: a container is instantiated on each selected node with the same scenario options. If left empty, a random node is selected | string | |
    | --taints | List of taints for which tolerations need to be created. For example ["node-role.kubernetes.io/master:NoSchedule"] | string | [] |
    | --number-of-nodes | Restricts the number of nodes selected by the selector | number | |
    | --image | The hog container image. Can be changed if the hog image is mirrored on a private repository | string | quay.io/krkn-chaos/krkn-hog |
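    For example, a two-minute run that hogs 4 GiB using two workers (flag names from the table above; verify the exact flag/value syntax with --help as shown below):

    krknctl run node-memory-hog --chaos-duration 120 --memory-workers 2 --memory-consumption 4g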

    To see all available scenario options

    krknctl run node-memory-hog --help
    

    Demo

    You can find a demo of the scenario here.