Syn Flood Scenarios
This scenario generates a substantial amount of TCP traffic directed at one or more Kubernetes services within
the cluster to test the server’s resiliency under extreme traffic conditions.
It can also target hosts outside the cluster by specifying a reachable IP address or hostname.
This scenario leverages the distributed nature of Kubernetes clusters to instantiate multiple instances
of the same pod against a single host, significantly increasing the effectiveness of the attack.
The configuration also allows for the specification of multiple node selectors, enabling Kubernetes to schedule
the attacker pods on a user-defined subset of nodes to make the test more realistic.
The attacker container source code is available here.
1 - Syn Flood Scenario using Krkn
Sample scenario config
packet-size: 120 # hping3 packet size
window-size: 64 # hping3 TCP window size
duration: 10 # chaos scenario duration
namespace: default # namespace where the target service(s) are deployed
target-service: target-svc # target service name (if set target-service-label must be empty)
target-port: 80 # target service TCP port
target-service-label: "" # target service label; can be used to target multiple services at once
                         # if they share the same label (if set, target-service must be empty)
number-of-pods: 2 # number of attacker pods instantiated per target
image: quay.io/krkn-chaos/krkn-syn-flood # syn flood attacker container image
attacker-nodes: # sets the node affinity used to schedule the attacker pods. Each node label selector
                # can list multiple values, so the kube-scheduler can place the attacker pods
                # as well as possible given the provided labels. Multiple labels can be specified
kubernetes.io/hostname:
- host_1
- host_2
kubernetes.io/os:
- linux
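As a hedged variant (the label value below is illustrative, not from the source), the same scenario can target every service that shares a label by leaving target-service empty and setting target-service-label instead:

```yaml
packet-size: 120
window-size: 64
duration: 10
namespace: default
target-service: ""                    # must be empty when target-service-label is set
target-port: 80
target-service-label: "app=frontend"  # hypothetical label; all services carrying it are targeted
number-of-pods: 2
image: quay.io/krkn-chaos/krkn-syn-flood
```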
2 - Syn Flood Scenario using Krkn-Hub
Syn Flood scenario
This scenario simulates a user-defined surge of TCP SYN requests directed at one or more services deployed within the cluster or an external target reachable by the cluster.
For more details, please refer to the following documentation.
Run
If enabling Cerberus to monitor the cluster and pass/fail the scenario post-chaos, refer to the docs. Make sure to start it before injecting the chaos, and set the CERBERUS_ENABLED
environment variable for the chaos injection container to autoconnect.
$ podman run --name=<container_name> --net=host --env-host=true -v <path-to-kube-config>:/home/krkn/.kube/config:Z \
    -e TARGET_PORT=<target_port> \
    -e NAMESPACE=<target_namespace> \
    -e TOTAL_CHAOS_DURATION=<duration> \
    -e TARGET_SERVICE=<target_service> \
    -e NUMBER_OF_PODS=10 \
    -e NODE_SELECTORS="<key>=<value>;<key>=<othervalue>" \
    -d quay.io/krkn-chaos/krkn-hub:syn-flood
$ podman logs -f <container_name or container_id> # Streams Kraken logs
$ podman inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario
Note
--env-host: This option is not available with the remote Podman client, including Mac and Windows (excluding WSL2) machines.
Without the --env-host option you’ll have to set each environment variable on the podman command line, e.g. -e <VARIABLE>=<value>
$ docker run $(./get_docker_params.sh) --name=<container_name> --net=host -v <path-to-kube-config>:/home/krkn/.kube/config:Z \
    -e TARGET_PORT=<target_port> \
    -e NAMESPACE=<target_namespace> \
    -e TOTAL_CHAOS_DURATION=<duration> \
    -e TARGET_SERVICE=<target_service> \
    -e NUMBER_OF_PODS=10 \
    -e NODE_SELECTORS="<key>=<value>;<key>=<othervalue>" \
    -d quay.io/krkn-chaos/krkn-hub:syn-flood
$ docker logs -f <container_name or container_id> # Streams Kraken logs
$ docker inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario
TIP: Because the container runs with a non-root user, ensure the kube config is globally readable before mounting it in the container. You can achieve this with the following commands:
kubectl config view --flatten > ~/kubeconfig && chmod 444 ~/kubeconfig && docker run $(./get_docker_params.sh) --name=<container_name> --net=host -v ~/kubeconfig:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:<scenario>
Supported parameters
The following environment variables can be set on the host running the container to tweak the scenario/faults being injected:
Example if --env-host is used:
export <parameter_name>=<value>
Or on the command line, for example:
-e <VARIABLE>=<value>
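A hedged sketch (the values below are illustrative): with --env-host, variables exported on the host shell are forwarded into the container, so any supported parameter can be set this way before running podman.

```shell
# Illustrative values; substitute your own service, port, and duration.
export TARGET_SERVICE=target-svc
export TARGET_PORT=80
export TOTAL_CHAOS_DURATION=120
echo "targeting $TARGET_SERVICE on port $TARGET_PORT for ${TOTAL_CHAOS_DURATION}s"
```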
See the list of variables that apply to all scenarios here; they can be used/set in addition to these scenario-specific variables.
| Parameter | Description | Default |
| --- | --- | --- |
| PACKET_SIZE | The size, in bytes, of the SYN packet | 120 |
| WINDOW_SIZE | The TCP window size, in bytes | 64 |
| TOTAL_CHAOS_DURATION | The number of seconds the chaos will last | 120 |
| NAMESPACE | The namespace containing the target service and where the attacker pods will be deployed | default |
| TARGET_SERVICE | The service name (or the hostname/IP address if an external target is hit) that will be affected by the attack. Must be empty if TARGET_SERVICE_LABEL is set | |
| TARGET_PORT | The TCP port that will be targeted by the attack | |
| TARGET_SERVICE_LABEL | The label used to select one or more services. Must be left empty if TARGET_SERVICE is set | |
| NUMBER_OF_PODS | The number of attacker pods that will be deployed | 2 |
| IMAGE | The container image used to perform the scenario | quay.io/krkn-chaos/krkn-syn-flood:latest |
| NODE_SELECTORS | Node selectors guide the cluster on where to deploy the attacker pods. You can specify one or more labels in the format key=value;key=value2 (even using the same key) to choose one or more node categories. If left empty, the pods will be scheduled on any available node, depending on the cluster’s capacity. | |
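Because NODE_SELECTORS uses ';' as the pair separator, the value must be quoted on the command line so the shell does not split the command at the semicolon. A sketch (label values are illustrative):

```shell
# Quote the value: an unquoted ';' would let the shell truncate the podman command.
NODE_SELECTORS="kubernetes.io/os=linux;kubernetes.io/hostname=host_1;kubernetes.io/hostname=host_2"
# The scenario receives three key=value pairs (the same key may repeat):
echo "$NODE_SELECTORS" | tr ';' '\n'
```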
NOTE: If a custom metrics profile or alerts profile is used when CAPTURE_METRICS or ENABLE_ALERTS is enabled, mount the profiles from the host on which the container is run using podman/docker under /home/krkn/kraken/config/metrics-aggregated.yaml and /home/krkn/kraken/config/alerts. For example:
$ podman run --name=<container_name> --net=host --env-host=true -v <path-to-custom-metrics-profile>:/home/krkn/kraken/config/metrics-aggregated.yaml -v <path-to-custom-alerts-profile>:/home/krkn/kraken/config/alerts -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:syn-flood