HTTP Load Scenarios

This scenario generates distributed HTTP load against one or more target endpoints using Vegeta load testing pods deployed inside the Kubernetes cluster. It leverages the distributed nature of Kubernetes clusters to instantiate multiple load generator pods, significantly increasing the effectiveness of the load test.

The scenario supports multiple concurrent pods, configurable request rates, multiple HTTP methods (GET, POST, PUT, DELETE, PATCH, HEAD), custom headers, request bodies, and comprehensive results collection with aggregated metrics across all pods.

The configuration allows for the specification of multiple node selectors, enabling Kubernetes to schedule the attacker pods on a user-defined subset of nodes to make the test more realistic.

The attacker container source code is available here.

How to Run HTTP Load Scenarios

Choose your preferred method to run HTTP load scenarios:

Example scenario file: http_load_scenario.yml

Sample scenario config
- http_load_scenario:
    runs: 1                                            # number of times to execute the scenario
    number-of-pods: 2                                  # number of attacker pods instantiated
    namespace: default                                 # namespace to deploy load testing pods
    image: quay.io/krkn-chaos/krkn-http-load:latest    # http load attacker container image
    attacker-nodes:                                    # node affinity to schedule the attacker pods
      node-role.kubernetes.io/worker:                  # for each node label selector, multiple values
        - ""                                           # can be specified, so the kube scheduler can
                                                       # place the attacker pods as evenly as possible
                                                       # set an empty value `attacker-nodes: {}` to let kubernetes schedule the pods
    targets:                                           # Vegeta round-robins across all endpoints
      endpoints:                                       # supported methods: GET, POST, PUT, DELETE, PATCH, HEAD
        - url: "https://your-service.example.com/health"
          method: "GET"
        - url: "https://your-service.example.com/api/data"
          method: "POST"
          headers:
            Content-Type: "application/json"
            Authorization: "Bearer your-token"
          body: '{"key":"value"}'

    rate: "50/1s"                                      # request rate per pod: "50/1s", "1000/1m", "0" for max throughput
    duration: "30s"                                    # attack duration: "30s", "5m", "1h"
    workers: 10                                        # initial concurrent workers per pod
    max_workers: 100                                   # maximum workers per pod (auto-scales)
    connections: 100                                   # max idle connections per host
    timeout: "10s"                                     # per-request timeout
    keepalive: true                                    # use persistent HTTP connections
    http2: true                                        # enable HTTP/2
    insecure: false                                    # skip TLS verification (for self-signed certs)
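As a sanity check before running, it helps to estimate the aggregate load the sample values above would produce. This is a rough back-of-envelope sketch using the example's numbers (not a recommendation), since `rate` is per pod and all pods attack in parallel:

```shell
# Rough aggregate-load estimate for the sample config above:
# each pod attacks at `rate` for `duration`, and pods run in parallel.
PODS=2           # number-of-pods
RATE_PER_POD=50  # requests/second, from rate "50/1s"
DURATION_S=30    # duration "30s"
TOTAL_RPS=$((PODS * RATE_PER_POD))
TOTAL_REQUESTS=$((TOTAL_RPS * DURATION_S))
echo "cluster-wide rate: ${TOTAL_RPS} req/s, total requests: ${TOTAL_REQUESTS}"
# prints: cluster-wide rate: 100 req/s, total requests: 3000
```

Keep in mind that with `rate: "0"` (max throughput) the total depends on cluster and target capacity, so no such estimate applies.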

How to Use the Plugin

Add the scenario type to the chaos_scenarios list in the config/config.yaml file:

kraken:
    kubeconfig_path: ~/.kube/config                     # Path to kubeconfig
    ..
    chaos_scenarios:
        - http_load_scenarios:
            - scenarios/<scenario_name>.yaml

Run

python run_kraken.py --config config/config.yaml

HTTP Load scenario

This scenario generates distributed HTTP load against one or more target endpoints using Vegeta load testing pods deployed inside the cluster.

Run

If enabling Cerberus to monitor the cluster and pass/fail the scenario post-chaos, refer to the docs. Make sure to start it before injecting the chaos, and set the CERBERUS_ENABLED environment variable for the chaos injection container to autoconnect.
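For example, when running with `--env-host=true`, Cerberus integration can be switched on from the host environment. The URL below is an assumption (a common local default); point it at wherever your Cerberus instance actually listens:

```shell
# Hypothetical Cerberus wiring; with --env-host=true these variables
# propagate into the chaos injection container.
export CERBERUS_ENABLED=True
export CERBERUS_URL=http://0.0.0.0:8080   # assumed address/port; adjust to your Cerberus instance
```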

$ podman run --name=<container_name> \
  --net=host \
  --pull=always \
  --env-host=true \
  -v <path-to-kube-config>:/home/krkn/.kube/config:Z \
  -e TARGET_ENDPOINTS="GET https://myapp.example.com/health" \
  -e NAMESPACE=<target_namespace> \
  -e TOTAL_CHAOS_DURATION=30s \
  -e NUMBER_OF_PODS=2 \
  -e NODE_SELECTORS="<key>=<value>;<key>=<othervalue>" \
  -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:http-load

$ podman logs -f <container_name or container_id>

$ podman inspect <container-name or container-id> \
  --format "{{.State.ExitCode}}"

$ docker run $(./get_docker_params.sh) \
  --name=<container_name> \
  --net=host \
  --pull=always \
  -v <path-to-kube-config>:/home/krkn/.kube/config:Z \
  -e TARGET_ENDPOINTS="GET https://myapp.example.com/health" \
  -e NAMESPACE=<target_namespace> \
  -e TOTAL_CHAOS_DURATION=30s \
  -e NUMBER_OF_PODS=2 \
  -e NODE_SELECTORS="<key>=<value>;<key>=<othervalue>" \
  -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:http-load

$ docker logs -f <container_name or container_id>

$ docker inspect <container-name or container-id> \
  --format "{{.State.ExitCode}}"
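The exit code reported above can be turned into a pass/fail verdict, e.g. in a CI job. The following is an illustrative wrapper (the function name and messages are hypothetical, not part of krkn-hub); feed it the value printed by `docker inspect ... '{{.State.ExitCode}}'`:

```shell
# Illustrative helper: map the chaos container's exit code to a scenario verdict.
check_scenario() {
  exit_code="$1"
  if [ "$exit_code" -eq 0 ]; then
    echo "http-load scenario passed"
  else
    echo "http-load scenario failed (exit code ${exit_code})" >&2
  fi
  return "$exit_code"
}

check_scenario 0    # prints "http-load scenario passed"
```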

TIP: Because the container runs with a non-root user, ensure the kube config is globally readable before mounting it in the container. You can achieve this with the following commands:

kubectl config view --flatten > ~/kubeconfig && \
chmod 444 ~/kubeconfig && \
docker run $(./get_docker_params.sh) \
  --name=<container_name> \
  --net=host \
  --pull=always \
  -v ~/kubeconfig:/home/krkn/.kube/config:Z \
  -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:http-load

Supported parameters

The following environment variables can be set on the host running the container to tweak the scenario/faults being injected:

Example if --env-host is used:

export <parameter_name>=<value>

Or on the command line, for example:

-e <VARIABLE>=<value>

See the list of variables that apply to all scenarios here; these can be used/set in addition to the scenario-specific variables below.

| Parameter | Description | Default |
| --- | --- | --- |
| TARGET_ENDPOINTS | Semicolon-separated list of target endpoints. Format: `METHOD URL;METHOD URL HEADER1:VAL1,HEADER2:VAL2 BODY`. Example: `GET https://myapp.example.com/health;POST https://myapp.example.com/api Content-Type:application/json {"key":"value"}` | Required |
| RATE | Request rate per pod (e.g. `50/1s`, `1000/1m`, `0` for max throughput) | 50/1s |
| TOTAL_CHAOS_DURATION | Duration of the load test (e.g. `30s`, `5m`, `1h`) | 30s |
| NAMESPACE | The namespace where the attacker pods will be deployed | default |
| NUMBER_OF_PODS | The number of attacker pods that will be deployed | 2 |
| WORKERS | Initial number of concurrent workers per pod | 10 |
| MAX_WORKERS | Maximum number of concurrent workers per pod (auto-scales) | 100 |
| CONNECTIONS | Maximum number of idle open connections per host | 100 |
| TIMEOUT | Per-request timeout (e.g. `10s`, `30s`) | 10s |
| IMAGE | The container image that will be used to perform the scenario | quay.io/krkn-chaos/krkn-http-load:latest |
| INSECURE | Skip TLS certificate verification (for self-signed certs) | false |
| NODE_SELECTORS | Node selectors used to guide the cluster on where to deploy the attacker pods. One or more labels can be specified in the format `key=value;key=value2` (even using the same key) to choose one or more node categories. If left empty, the pods will be scheduled on any available node, depending on the cluster's capacity. | |
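The TARGET_ENDPOINTS format is easiest to get right when the whole value is quoted as a single string, since it contains semicolons, spaces, and possibly JSON bodies. A quick way to preview how the value splits into individual endpoint specs (the URLs are placeholders):

```shell
# Quote the whole value: it contains ';', spaces, and possibly a JSON body.
TARGET_ENDPOINTS='GET https://myapp.example.com/health;POST https://myapp.example.com/api Content-Type:application/json {"key":"value"}'

# Preview the individual endpoint specs the load generator round-robins across:
echo "$TARGET_ENDPOINTS" | tr ';' '\n'
# prints:
#   GET https://myapp.example.com/health
#   POST https://myapp.example.com/api Content-Type:application/json {"key":"value"}
```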

NOTE: If using a custom metrics profile or alerts profile when CAPTURE_METRICS or ENABLE_ALERTS is enabled, mount the profiles from the host on which the container is run using podman/docker at /home/krkn/kraken/config/metrics-aggregated.yaml and /home/krkn/kraken/config/alerts. For example:

$ podman run \
  --name=<container_name> \
  --net=host \
  --pull=always \
  --env-host=true \
  -v <path-to-custom-metrics-profile>:/home/krkn/kraken/config/metrics-aggregated.yaml \
  -v <path-to-custom-alerts-profile>:/home/krkn/kraken/config/alerts \
  -v <path-to-kube-config>:/home/krkn/.kube/config:Z \
  -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:http-load

krknctl run http-load (optional: --<parameter>:<value>)

You can also set any of the global variables listed here.

Scenario specific parameters:

| Parameter | Description | Type | Default |
| --- | --- | --- | --- |
| --target-endpoints | Semicolon-separated list of target endpoints. Format: `METHOD URL;METHOD URL HEADER1:VAL1,HEADER2:VAL2 BODY`. Example: `GET https://myapp.example.com/health;POST https://myapp.example.com/api Content-Type:application/json {"key":"value"}` | string | Required |
| --rate | Request rate per pod (e.g. `50/1s`, `1000/1m`, `0` for max throughput) | string | 50/1s |
| --chaos-duration | Duration of the load test (e.g. `30s`, `5m`, `1h`) | string | 30s |
| --namespace | The namespace where the attacker pods will be deployed | string | default |
| --number-of-pods | The number of attacker pods that will be deployed | number | 2 |
| --workers | Initial number of concurrent workers per pod | number | 10 |
| --max-workers | Maximum number of concurrent workers per pod (auto-scales) | number | 100 |
| --connections | Maximum number of idle open connections per host | number | 100 |
| --timeout | Per-request timeout (e.g. `10s`, `30s`) | string | 10s |
| --image | The container image that will be used to perform the scenario | string | quay.io/krkn-chaos/krkn-http-load:latest |
| --insecure | Skip TLS certificate verification (for self-signed certs) | string | false |
| --node-selectors | Node selectors used to guide the cluster on where to deploy the attacker pods. One or more labels can be specified in the format `key=value;key=value2` (even using the same key) to choose one or more node categories. If left empty, the pods will be scheduled on any available node, depending on the cluster's capacity. | string | |

To see all available scenario options:

krknctl run http-load --help