Memory Hog Scenario
Overview
The Memory Hog scenario is designed to create virtual memory pressure on one or more nodes in your Kubernetes/OpenShift cluster for a specified duration. This scenario helps you test how your cluster and applications respond to memory exhaustion and pressure conditions.
How It Works
The scenario deploys a stress workload pod on targeted nodes. These pods use stress-ng to allocate and consume memory resources according to your configuration. The workload runs for a specified duration, allowing you to observe how your cluster handles memory pressure, OOM (Out of Memory) conditions, and eviction scenarios.
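Conceptually, the stress pod that gets scheduled onto a targeted node behaves much like the minimal sketch below. This is an illustration only: the pod name, image, node selector, and timeout are placeholder assumptions for the example, not the actual manifest Krkn generates.

```yaml
# Illustrative sketch of the kind of workload pod the scenario runs on a targeted node.
# The pod name, image, node selector, and timeout below are placeholder assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: memory-hog-example
spec:
  restartPolicy: Never
  nodeSelector:
    kubernetes.io/hostname: worker-0            # hypothetical target node
  containers:
    - name: stress
      image: quay.io/example/stress-ng:latest   # placeholder image
      # One stress-ng VM worker allocating ~80% of available memory for 60 seconds.
      command: ["stress-ng", "--vm", "1", "--vm-bytes", "80%", "--timeout", "60s"]
```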
When to Use
Use the Memory Hog scenario to:
- Test your cluster’s behavior under memory pressure
- Validate that memory resource limits and quotas are properly configured
- Test pod eviction policies when nodes run out of memory
- Verify that the kubelet correctly evicts pods based on memory pressure
- Evaluate the impact of memory contention on application performance
- Test whether your monitoring systems properly detect memory saturation
- Simulate scenarios where rogue pods consume excessive memory without limits
- Validate that memory-based horizontal pod autoscaling works correctly
Key Configuration Options
In addition to the common hog scenario options, Memory Hog scenarios support:
| Option | Type | Description |
|---|---|---|
| `memory-vm-bytes` | string | The amount of memory that the scenario will attempt to allocate and consume. Can be specified as a percentage (%) of available memory or in absolute units (b, k, m, g for bytes, kilobytes, megabytes, gigabytes). |
Example Values
memory-vm-bytes: "80%"- Consume 80% of available memorymemory-vm-bytes: "2g"- Consume 2 gigabytes of memorymemory-vm-bytes: "512m"- Consume 512 megabytes of memory
Usage
Select your deployment method to get started:
- Memory Hog using Krkn - Configuration for direct Krkn usage
- Memory Hog using Krknctl - Configuration for Krknctl CLI
- Memory Hog using Krkn-Hub - Configuration for Krkn-Hub