How to Run Node Network Filter Scenarios
Choose your preferred method to run node network filter scenarios:
Example scenario file: node-network-filter.yml
Configuration
- id: node_network_filter
  wait_duration: 300
  test_duration: 100
  label_selector: "kubernetes.io/hostname=ip-10-0-39-182.us-east-2.compute.internal"
  instance_count: 1
  execution: parallel
  namespace: 'default'
  # scenario specific settings
  ingress: false
  egress: true
  target: node-name
  interfaces: []
  protocols:
    - tcp
  ports:
    - 2049
  taints: []
  service_account: ""
For the common module settings, please refer to the documentation.
- `ingress`: filters incoming traffic on one or more ports
- `egress`: filters outgoing traffic on one or more ports
- `target`: the node name (if `label_selector` is not set)
- `interfaces`: network interfaces used for outgoing traffic when `egress` is enabled (same semantics as krknctl and krkn-hub)
- `ports`: ports that incoming and/or outgoing filtering applies to (depending on `ingress`/`egress`)
- `protocols`: the IP protocols to filter (`tcp` and `udp`)
- `taints`: list of taints for which tolerations are created. Example: `["node-role.kubernetes.io/master:NoSchedule"]`
- `service_account`: optional service account for the scenario workload (empty string uses the default)
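As a sketch of how these settings combine, a variant scenario file that filters incoming traffic instead of outgoing, selecting nodes by label rather than by name, could look like the following (the selector, ports, and durations are illustrative, not prescribed values):

```yaml
# Illustrative variant: block incoming TCP traffic on two worker nodes, one at a time
- id: node_network_filter
  wait_duration: 300
  test_duration: 120
  label_selector: "node-role.kubernetes.io/worker="   # select by label instead of target
  instance_count: 2
  execution: serial
  namespace: 'default'
  # scenario specific settings
  ingress: true      # filter incoming traffic
  egress: false
  interfaces: []
  protocols:
    - tcp
  ports:
    - 8080
    - 8443
  taints: []
  service_account: ""
```

Because `label_selector` is set, `target` is omitted; `interfaces` is left empty since it only applies to egress filtering.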
Usage
To enable node network filter scenarios, edit the kraken config file: go to the section kraken -> chaos_scenarios of the yaml structure,
add a new element to the list named network_chaos_ng_scenarios, then add the desired scenario
pointing to the node-network-filter.yml file.
kraken:
  ...
  chaos_scenarios:
    - network_chaos_ng_scenarios:
        - scenarios/kube/node-network-filter.yml
Note
You can specify multiple scenario files of the same type by adding additional paths to the list:
kraken:
  chaos_scenarios:
    - network_chaos_ng_scenarios:
        - scenarios/kube/node-network-filter-1.yml
        - scenarios/kube/node-network-filter-2.yml
        - scenarios/kube/node-network-filter-3.yml
You can also combine multiple different scenario types in the same config.yaml file. Scenario types can be specified in any order, and you can include the same scenario type multiple times:
kraken:
  chaos_scenarios:
    - network_chaos_ng_scenarios:
        - scenarios/kube/node-network-filter.yml
    - pod_disruption_scenarios:
        - scenarios/pod-kill.yaml
    - node_scenarios:
        - scenarios/node-reboot.yaml
    - network_chaos_ng_scenarios: # Same type can appear multiple times
        - scenarios/kube/node-network-filter-2.yml
Examples
Please refer to the use cases section for some real usage scenarios.
Run
python run_kraken.py --config config/config.yaml
Run
$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:node-network-filter
$ podman logs -f <container_name or container_id> # Streams Kraken logs
$ podman inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario
$ docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:node-network-filter
OR
$ docker run -e <VARIABLE>=<value> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:node-network-filter
$ docker logs -f <container_name or container_id> # Streams Kraken logs
$ docker inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario
TIP: Because the container runs with a non-root user, ensure the kube config is globally readable before mounting it in the container. You can achieve this with the following commands:
kubectl config view --flatten > ~/kubeconfig && chmod 444 ~/kubeconfig && docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v ~/kubeconfig:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:<scenario>
Supported parameters
The following environment variables can be set on the host running the container to tweak the scenario/faults being injected:
ex.)
export <parameter_name>=<value>
See list of variables that apply to all scenarios here that can be used/set in addition to these scenario specific variables
| Parameter | Description | Default |
|---|---|---|
| TOTAL_CHAOS_DURATION | set chaos duration (in sec) as desired | 60 |
| NODE_SELECTOR | defines the node selector for choosing target nodes. If not specified, one schedulable node in the cluster will be chosen at random. If multiple nodes match the selector, all of them will be subjected to stress. | "node-role.kubernetes.io/worker=" |
| NODE_NAME | the node name to target (if the label selector is not set) | |
| INSTANCE_COUNT | restricts the number of selected nodes by the selector | “1” |
| EXECUTION | sets the execution mode of the scenario on multiple nodes, can be parallel or serial | “parallel” |
| INGRESS | sets the network filter on incoming traffic, can be true or false | false |
| EGRESS | sets the network filter on outgoing traffic, can be true or false | true |
| INTERFACES | a list of comma separated names of network interfaces (eg. eth0 or eth0,eth1,eth2) to filter for outgoing traffic | "" |
| PORTS | a list of comma separated port numbers (eg 8080 or 8080,8081,8082) to filter for both outgoing and incoming traffic | "" |
| PROTOCOLS | a list of comma separated protocols to filter (tcp, udp or both) | tcp |
| TAINTS | List of taints for which tolerations need to be created. Example: ["node-role.kubernetes.io/master:NoSchedule"] | [] |
| SERVICE_ACCOUNT | optional service account for the Node Network Filter workload | "" |
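Putting the table together, a minimal sketch of the environment-variable approach might look like this on the host before starting the container (the specific values here are illustrative, not defaults):

```shell
# Illustrative values: filter outgoing NFS traffic (port 2049) for 2 minutes
export TOTAL_CHAOS_DURATION=120
export INGRESS=false
export EGRESS=true
export PORTS=2049
export PROTOCOLS=tcp

# Then run the container with --env-host=true so podman forwards these variables:
# podman run --name=<container_name> --net=host --pull=always --env-host=true \
#   -v <path-to-kube-config>:/home/krkn/.kube/config:Z \
#   -d quay.io/krkn-chaos/krkn-hub:node-network-filter
```

With docker, pass each variable explicitly via `-e <VARIABLE>=<value>` instead, as shown in the run commands above.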
NOTE: When using a custom metrics profile or alerts profile with CAPTURE_METRICS or ENABLE_ALERTS enabled, mount the profiles from the host on which the container is run using podman/docker under /home/krkn/kraken/config/metrics-aggregated.yaml and /home/krkn/kraken/config/alerts. For example:
$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-custom-metrics-profile>:/home/krkn/kraken/config/metrics-aggregated.yaml -v <path-to-custom-alerts-profile>:/home/krkn/kraken/config/alerts -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:node-network-filter
krknctl run node-network-filter (optional: --<parameter>:<value>)
Can also set any global variable listed here
Node Network Filter Parameters
krknctl marks --ingress and --egress as required flags (you should pass both). Values: at least one of --ingress or --egress must be true; both may be true to filter incoming and outgoing traffic.
| Argument | Type | Description | Required | Default Value |
|---|---|---|---|---|
| --chaos-duration | number | Chaos duration in seconds | false | 60 |
| --node-selector | string | Node label selector (format: key=value) | false | |
| --node-name | string | Specific node name to target (alternative to node-selector) | false | |
| --namespace | string | Namespace where the scenario container is deployed | false | default |
| --instance-count | number | Number of nodes to target when using node-selector | false | 1 |
| --execution | enum | Execution mode: parallel or serial | false | parallel |
| --ingress | boolean | Filter incoming traffic (true / false) | true | |
| --egress | boolean | Filter outgoing traffic (true / false) | true | |
| --interfaces | string | Network interfaces for outgoing traffic (comma-separated, e.g. eth0,eth1). Optional; empty uses workload defaults | false | |
| --ports | string | Network ports to filter traffic (comma-separated, e.g. 8080,8081,8082) | true | |
| --image | string | The network chaos injection workload container image | false | quay.io/krkn-chaos/krkn-network-chaos:latest |
| --protocols | string | Network protocols to filter: tcp, udp, or tcp,udp | false | tcp |
| --taints | string | Comma-separated taints (tolerations are derived for the workload). Same notation as elsewhere in the Network Chaos NG docs, e.g. node-role.kubernetes.io/master:NoSchedule | false | |
| --service-account | string | Service account for the workload (optional) | false | |
Parameter Format Details
Node Selection:
- `--node-selector`: Label selector in format `key=value` (e.g., `node-role.kubernetes.io/worker=`)
- `--node-name`: Specific node name (e.g., `ip-10-0-1-100.ec2.internal`)
- Specify either `--node-selector` OR `--node-name`, not both
- When using `--node-selector`, use `--instance-count` to limit the number of selected nodes
Port Format:
- Single port: `8080`
- Multiple ports: `8080,8081,8082` (comma-separated, no spaces)
Protocol Format:
- Valid values: `tcp`, `udp`, `tcp,udp`, or `udp,tcp`
- Default: `tcp`
Interface Format:
- Applies to egress (outgoing) filtering, matching the scenario image metadata
- Single interface: `eth0`
- Multiple interfaces: `eth0,eth1,eth2` (comma-separated, no spaces)
- May be left empty when not needed for your egress rules
Taints Format:
- Comma-separated Kubernetes taints; the workload gets matching tolerations
- Examples: `node-role.kubernetes.io/master:NoSchedule`, or `key=value:NoSchedule` when the taint includes a value
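For reference, a taint in the `key:effect` notation such as `node-role.kubernetes.io/master:NoSchedule` conventionally corresponds to a pod toleration like the one below under standard Kubernetes semantics (this is an illustration of what "tolerations are derived" means; the exact toleration the workload generates may differ):

```yaml
# Standard Kubernetes toleration matching the taint node-role.kubernetes.io/master:NoSchedule
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists      # tolerate the taint regardless of its value
    effect: NoSchedule
```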
Usage Notes
- Node targeting: This scenario targets nodes (not pods) and creates iptables rules on the target node(s) to filter network traffic
- Ingress/Egress: Pass both flags; at least one must be `true`. Both may be `true` to filter incoming and outgoing traffic
- Execution modes:
  - `parallel`: Applies network filtering to all selected nodes simultaneously
  - `serial`: Applies network filtering to nodes one at a time
Example Commands
Basic egress filtering (block outgoing traffic):
krknctl run node-network-filter \
--node-selector node-role.kubernetes.io/worker= \
--instance-count 1 \
--ingress false \
--egress true \
--ports 8080 \
--protocols tcp \
--chaos-duration 120
Ingress + egress filtering (block both incoming and outgoing):
krknctl run node-network-filter \
--node-name ip-10-0-1-100.ec2.internal \
--ingress true \
--egress true \
--ports 9090,9091 \
--protocols tcp,udp \
--interfaces eth0 \
--chaos-duration 300
Multi-port filtering with parallel execution:
krknctl run node-network-filter \
--node-selector kubernetes.io/os=linux \
--instance-count 3 \
--execution parallel \
--ingress false \
--egress true \
--ports 6379,6380,6381 \
--protocols tcp \
--chaos-duration 180