This scenario introduces a new infrastructure to refactor and port the current implementation of the network chaos plugins.
Network Chaos NG Scenarios
1 - Network Chaos API
AbstractNetworkChaosModule abstract module class
All the plugins must implement the AbstractNetworkChaosModule abstract class in order to be instantiated and run by the Network Chaos NG plugin.
This abstract class implements two main abstract methods:
run(self, target: str, kubecli: KrknTelemetryOpenshift, error_queue: queue.Queue = None) is the entrypoint for each Network Chaos module. If the module is configured to run in parallel, error_queue must not be None.
- target: the name of the resource (Pod, Node etc.) that will be targeted by the scenario
- kubecli: the KrknTelemetryOpenshift instance needed by the scenario to access the krkn-lib methods
- error_queue: a queue that will be used by the plugin to push the errors raised during the execution of parallel modules
get_config(self) -> (NetworkChaosScenarioType, BaseNetworkChaosConfig) returns the common subset of settings shared by all the scenarios (BaseNetworkChaosConfig) and the type of Network Chaos Scenario that is running (Pod Scenario or Node Scenario).
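As a sketch of this contract, a hypothetical module could look like the following. The stub base classes below stand in for the real krkn/krkn-lib types (KrknTelemetryOpenshift, NetworkChaosScenarioType) so the example is self-contained, and the module name ExampleChaosModule is invented for illustration:

```python
import queue


class BaseNetworkChaosConfig:
    """Stand-in for the real krkn config class; only the fields this
    sketch needs, while the actual class carries all common settings."""
    def __init__(self, scenario_id: str, test_duration: int):
        self.id = scenario_id
        self.test_duration = test_duration


class AbstractNetworkChaosModule:
    """Stand-in for the real abstract class: the two abstract methods
    described above."""
    def run(self, target, kubecli, error_queue=None):
        raise NotImplementedError

    def get_config(self):
        raise NotImplementedError


class ExampleChaosModule(AbstractNetworkChaosModule):
    """Hypothetical module showing the run/get_config contract."""
    def __init__(self, config):
        self.config = config

    def run(self, target, kubecli, error_queue: queue.Queue = None):
        try:
            pass  # inject the network fault on `target` here
        except Exception as e:
            if error_queue is not None:
                # parallel execution: report the error, don't raise
                error_queue.put(str(e))
            else:
                raise

    def get_config(self):
        # the real method returns a NetworkChaosScenarioType member;
        # a plain string stands in for it here
        return ("NodeScenario", self.config)
```

Note how error_queue switches the failure mode: in serial runs exceptions propagate, in parallel runs they are collected by the caller from the queue.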
BaseNetworkChaosConfig base module configuration
This is the base class that contains the common parameters shared by all the Network Chaos NG modules.
- id: the string name of the Network Chaos NG module
- wait_duration: if there is more than one network module config in the same config file, the plugin will wait wait_duration seconds before running the following one
- test_duration: the duration in seconds of the scenario
- label_selector: the selector used to target the resource
- instance_count: if greater than 0, randomly picks instance_count elements from the targets selected by the filters
- execution: if more than one target is selected by the selector, the scenario can target the resources either in serial or in parallel
- namespace: the namespace where the scenario workloads will be deployed
- service_account: optional service account for the scenario workload (empty string uses the cluster default)
- taints: list of taints for which tolerations need to be created. Example: ["node-role.kubernetes.io/master:NoSchedule"]
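As an illustration of the instance_count semantics above, target narrowing can be sketched like this (a hypothetical helper, not krkn internals):

```python
import random


def pick_targets(matched, instance_count):
    """Randomly narrow the resources matched by label_selector.
    instance_count <= 0 (or more than the matches) keeps them all."""
    if instance_count <= 0 or instance_count >= len(matched):
        return list(matched)
    return random.sample(matched, instance_count)
```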
2 - Node Interface Down
Brings one or more network interfaces down on a target node for a configurable duration, then restores them. Can be used to simulate network partitions, NIC failures, or loss of connectivity at the node level.
How to Run Node Interface Down Scenarios
Choose your preferred method to run node interface down scenarios:
Example scenario file: node_interface_down.yaml
Configuration
- id: node_interface_down
image: quay.io/krkn-chaos/krkn-network-chaos:latest
wait_duration: 0
test_duration: 60
label_selector: "node-role.kubernetes.io/worker="
instance_count: 1
execution: serial
namespace: default
# scenario specific settings
target: ""
interfaces: []
recovery_time: 30
taints: []
For the common module settings please refer to the documentation.
- target: the node name to target (used when label_selector is not set)
- interfaces: a list of network interface names to bring down (e.g. ["eth0", "bond0"]). Leave empty to auto-detect the node's default interface
- recovery_time: seconds to wait after bringing the interface(s) back up before continuing. Set to 0 to skip the recovery wait
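When interfaces is left empty, the module auto-detects the node's default interface. One common way to do that is to parse the output of `ip route show default`; the helper below is a sketch of that general technique, not necessarily what the module does internally:

```python
def default_interface(ip_route_output):
    """Return the device named after `dev` on the default-route line,
    e.g. 'default via 10.0.0.1 dev eth0 proto dhcp' -> 'eth0'."""
    for line in ip_route_output.splitlines():
        fields = line.split()
        # a default route line starts with 'default' and names a device
        if fields[:1] == ["default"] and "dev" in fields:
            return fields[fields.index("dev") + 1]
    return None
```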
Usage
To enable node interface down scenarios edit the kraken config file, go to the section kraken -> chaos_scenarios of the yaml structure
and add a new element to the list named network_chaos_ng_scenarios then add the desired scenario
pointing to the scenario yaml file.
kraken:
...
chaos_scenarios:
- network_chaos_ng_scenarios:
- scenarios/openshift/node_interface_down.yaml
Note
You can specify multiple scenario files of the same type by adding additional paths to the list:
kraken:
chaos_scenarios:
- network_chaos_ng_scenarios:
- scenarios/openshift/node_interface_down-1.yaml
- scenarios/openshift/node_interface_down-2.yaml
You can also combine multiple different scenario types in the same config.yaml file. Scenario types can be specified in any order, and you can include the same scenario type multiple times:
kraken:
chaos_scenarios:
- network_chaos_ng_scenarios:
- scenarios/openshift/node_interface_down.yaml
- pod_disruption_scenarios:
- scenarios/pod-kill.yaml
- node_scenarios:
- scenarios/node-reboot.yaml
Run
python run_kraken.py --config config/config.yaml
Run
$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:node-interface-down
$ podman logs -f <container_name or container_id> # Streams Kraken logs
$ podman inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario
$ docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:node-interface-down
OR
$ docker run -e <VARIABLE>=<value> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:node-interface-down
$ docker logs -f <container_name or container_id> # Streams Kraken logs
$ docker inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario
TIP: Because the container runs with a non-root user, ensure the kube config is globally readable before mounting it in the container. You can achieve this with the following commands:
kubectl config view --flatten > ~/kubeconfig && chmod 444 ~/kubeconfig && docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v ~/kubeconfig:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:<scenario>
Supported parameters
The following environment variables can be set on the host running the container to tweak the scenario/faults being injected:
ex.)
export <parameter_name>=<value>
See the list of variables that apply to all scenarios here; they can be used/set in addition to these scenario-specific variables.
| Parameter | Description | Default |
|---|---|---|
| TOTAL_CHAOS_DURATION | Duration in seconds to keep the interface(s) down | 60 |
| RECOVERY_TIME | Seconds to wait after bringing the interface(s) back up | 0 |
| NODE_SELECTOR | Label selector to choose target nodes. If not specified, a schedulable node will be chosen at random | “node-role.kubernetes.io/worker=” |
| NODE_NAME | The node name to target (used when label selector is not set) | |
| INSTANCE_COUNT | Restricts the number of nodes selected by the label selector | 1 |
| EXECUTION | Execution mode for multiple nodes: serial or parallel | serial |
| INTERFACES | Comma-separated list of interface names to bring down (e.g. eth0 or eth0,bond0). Leave empty to auto-detect the default interface | "" |
| NAMESPACE | Namespace where the chaos workload pod will be deployed | default |
| TAINTS | List of taints for which tolerations need to be created. Example: ["node-role.kubernetes.io/master:NoSchedule"] | [] |
NOTE: When using a custom metrics profile or alerts profile with CAPTURE_METRICS or ENABLE_ALERTS enabled, mount the profiles from the host on which the container is run using podman/docker under /home/krkn/kraken/config/metrics-aggregated.yaml and /home/krkn/kraken/config/alerts. For example:
$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-custom-metrics-profile>:/home/krkn/kraken/config/metrics-aggregated.yaml -v <path-to-custom-alerts-profile>:/home/krkn/kraken/config/alerts -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:node-interface-down
krknctl run node-interface-down (optional: --<parameter>:<value> )
Can also set any global variable listed here
Node Interface Down Parameters
| Argument | Type | Description | Required | Default Value |
|---|---|---|---|---|
--chaos-duration | number | Duration in seconds to keep the interface(s) down | false | 60 |
--recovery-time | number | Seconds to wait after bringing the interface(s) back up before continuing | false | 0 |
--node-selector | string | Label selector to choose target nodes | false | node-role.kubernetes.io/worker= |
--node-name | string | Node name to target (used when node-selector is not set) | false | |
--namespace | string | Namespace where the chaos workload pod will be deployed | false | default |
--instance-count | number | Number of nodes to target from those matching the selector | false | 1 |
--execution | enum | Execution mode when targeting multiple nodes: serial or parallel | false | serial |
--interfaces | string | Comma-separated list of interface names to bring down. Leave empty to auto-detect the default interface | false | |
--image | string | The chaos workload container image | false | quay.io/redhat-chaos/krkn-ng-tools:latest |
--taints | string | List of taints for which tolerations need to be created | false |
3 - Node Network Filter
How to Run Node Network Filter Scenarios
Choose your preferred method to run node network filter scenarios:
Example scenario file: node-network-filter.yml
Configuration
- id: node_network_filter
wait_duration: 300
test_duration: 100
label_selector: "kubernetes.io/hostname=ip-10-0-39-182.us-east-2.compute.internal"
instance_count: 1
execution: parallel
namespace: 'default'
# scenario specific settings
ingress: false
egress: true
target: node-name
interfaces: []
protocols:
- tcp
ports:
- 2049
taints: []
service_account: ""
For the common module settings please refer to the documentation.
- ingress: filters incoming traffic on one or more ports
- egress: filters outgoing traffic on one or more ports
- target: the node name (if label_selector is not set)
- interfaces: network interfaces used for outgoing traffic when egress is enabled (same semantics as krknctl and krkn-hub)
- ports: ports that incoming and/or outgoing filtering applies to (depending on ingress/egress)
- protocols: the IP protocols to filter (tcp and udp)
- taints: list of taints for which tolerations are created. Example: ["node-role.kubernetes.io/master:NoSchedule"]
- service_account: optional service account for the scenario workload (empty string uses the default)
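Conceptually, the filter drops packets matching the configured ports and protocols on the node. The sketch below builds illustrative iptables-style rule strings from the settings above; the function name and the exact rule layout are assumptions for illustration, not the module's actual implementation:

```python
def build_rules(ingress, egress, protocols, ports):
    """Build illustrative iptables-style DROP rules for the filter."""
    rules = []
    for proto in protocols:
        for port in ports:
            if ingress:
                # incoming traffic traverses the INPUT chain
                rules.append(
                    f"iptables -A INPUT -p {proto} --dport {port} -j DROP")
            if egress:
                # outgoing traffic traverses the OUTPUT chain
                rules.append(
                    f"iptables -A OUTPUT -p {proto} --dport {port} -j DROP")
    return rules
```

With the example configuration above (egress only, tcp, port 2049) this yields a single OUTPUT DROP rule, which would block NFS traffic leaving the node.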
Usage
To enable node network filter scenarios edit the kraken config file, go to the section kraken -> chaos_scenarios of the yaml structure
and add a new element to the list named network_chaos_ng_scenarios then add the desired scenario
pointing to the scenario yaml file.
kraken:
...
chaos_scenarios:
- network_chaos_ng_scenarios:
- scenarios/kube/node-network-filter.yml
Note
You can specify multiple scenario files of the same type by adding additional paths to the list:
kraken:
chaos_scenarios:
- network_chaos_ng_scenarios:
- scenarios/kube/node-network-filter-1.yml
- scenarios/kube/node-network-filter-2.yml
- scenarios/kube/node-network-filter-3.yml
You can also combine multiple different scenario types in the same config.yaml file. Scenario types can be specified in any order, and you can include the same scenario type multiple times:
kraken:
chaos_scenarios:
- network_chaos_ng_scenarios:
- scenarios/kube/node-network-filter.yml
- pod_disruption_scenarios:
- scenarios/pod-kill.yaml
- node_scenarios:
- scenarios/node-reboot.yaml
- network_chaos_ng_scenarios: # Same type can appear multiple times
- scenarios/kube/node-network-filter-2.yml
Examples
Please refer to the use cases section for some real usage scenarios.
Run
python run_kraken.py --config config/config.yaml
Run
$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:node-network-filter
$ podman logs -f <container_name or container_id> # Streams Kraken logs
$ podman inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario
$ docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:node-network-filter
OR
$ docker run -e <VARIABLE>=<value> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:node-network-filter
$ docker logs -f <container_name or container_id> # Streams Kraken logs
$ docker inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario
TIP: Because the container runs with a non-root user, ensure the kube config is globally readable before mounting it in the container. You can achieve this with the following commands:
kubectl config view --flatten > ~/kubeconfig && chmod 444 ~/kubeconfig && docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v ~/kubeconfig:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:<scenario>
Supported parameters
The following environment variables can be set on the host running the container to tweak the scenario/faults being injected:
ex.)
export <parameter_name>=<value>
See the list of variables that apply to all scenarios here; they can be used/set in addition to these scenario-specific variables.
| Parameter | Description | Default |
|---|---|---|
| TOTAL_CHAOS_DURATION | set chaos duration (in sec) as desired | 60 |
| NODE_SELECTOR | defines the node selector for choosing target nodes. If not specified, one schedulable node in the cluster will be chosen at random. If multiple nodes match the selector, all of them will be subjected to stress. | “node-role.kubernetes.io/worker=” |
| NODE_NAME | the node name to target (if label selector is not set) | |
| INSTANCE_COUNT | restricts the number of selected nodes by the selector | “1” |
| EXECUTION | sets the execution mode of the scenario on multiple nodes, can be parallel or serial | “parallel” |
| INGRESS | sets the network filter on incoming traffic, can be true or false | false |
| EGRESS | sets the network filter on outgoing traffic, can be true or false | true |
| INTERFACES | a list of comma separated names of network interfaces (eg. eth0 or eth0,eth1,eth2) to filter for outgoing traffic | "" |
| PORTS | a list of comma separated port numbers (eg 8080 or 8080,8081,8082) to filter for both outgoing and incoming traffic | "" |
| PROTOCOLS | a list of comma separated protocols to filter (tcp, udp or both) | |
| TAINTS | List of taints for which tolerations need to be created. Example: [“node-role.kubernetes.io/master:NoSchedule”] | [] |
| SERVICE_ACCOUNT | optional service account for the Node Network Filter workload | "" |
NOTE: When using a custom metrics profile or alerts profile with CAPTURE_METRICS or ENABLE_ALERTS enabled, mount the profiles from the host on which the container is run using podman/docker under /home/krkn/kraken/config/metrics-aggregated.yaml and /home/krkn/kraken/config/alerts. For example:
$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-custom-metrics-profile>:/home/krkn/kraken/config/metrics-aggregated.yaml -v <path-to-custom-alerts-profile>:/home/krkn/kraken/config/alerts -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:node-network-filter
krknctl run node-network-filter (optional: --<parameter>:<value>)
Can also set any global variable listed here
Node Network Filter Parameters
krknctl marks --ingress and --egress as required flags, so pass both. At least one of them must be true; both may be true to filter incoming and outgoing traffic.
| Argument | Type | Description | Required | Default Value |
|---|---|---|---|---|
--chaos-duration | number | Chaos duration in seconds | false | 60 |
--node-selector | string | Node label selector (format: key=value) | false | |
--node-name | string | Specific node name to target (alternative to node-selector) | false | |
--namespace | string | Namespace where the scenario container is deployed | false | default |
--instance-count | number | Number of nodes to target when using node-selector | false | 1 |
--execution | enum | Execution mode: parallel or serial | false | parallel |
--ingress | boolean | Filter incoming traffic (true / false) | true | |
--egress | boolean | Filter outgoing traffic (true / false) | true | |
--interfaces | string | Network interfaces for outgoing traffic (comma-separated, e.g. eth0,eth1). Optional; empty uses workload defaults | false | |
--ports | string | Network ports to filter traffic (comma-separated, e.g., 8080,8081,8082) | true | |
--image | string | The network chaos injection workload container image | false | quay.io/krkn-chaos/krkn-network-chaos:latest |
--protocols | string | Network protocols to filter: tcp, udp, or tcp,udp | false | tcp |
--taints | string | Comma-separated taints (tolerations are derived for the workload). Same notation as elsewhere in Network Chaos NG docs, e.g. node-role.kubernetes.io/master:NoSchedule | false | |
--service-account | string | Service account for the workload (optional) | false |
Parameter Format Details
Node Selection:
- --node-selector: Label selector in format key=value (e.g., node-role.kubernetes.io/worker=)
- --node-name: Specific node name (e.g., ip-10-0-1-100.ec2.internal)
- Specify either --node-selector OR --node-name, not both
- When using --node-selector, use --instance-count to limit the number of selected nodes
Port Format:
- Single port: 8080
- Multiple ports: 8080,8081,8082 (comma-separated, no spaces)
Protocol Format:
- Valid values: tcp, udp, tcp,udp, or udp,tcp
- Default: tcp
Interface Format:
- Applies to egress (outgoing) filtering, matching the scenario image metadata
- Single interface: eth0
- Multiple interfaces: eth0,eth1,eth2 (comma-separated, no spaces)
- May be left empty when not needed for your egress rules
Taints Format:
- Comma-separated Kubernetes taints; the workload gets matching tolerations
- Examples: node-role.kubernetes.io/master:NoSchedule, or key=value:NoSchedule when the taint includes a value
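The taint notation above maps onto pod tolerations in a straightforward way: an Exists toleration for key:Effect, an Equal toleration for key=value:Effect. The helper below is a hypothetical sketch of that mapping, not krkn's actual parsing code:

```python
def taint_to_toleration(taint):
    """Turn 'key:Effect' or 'key=value:Effect' into a toleration dict."""
    # split on the last ':' so keys containing '/' are handled
    key_part, _, effect = taint.rpartition(":")
    if "=" in key_part:
        key, value = key_part.split("=", 1)
        return {"key": key, "operator": "Equal",
                "value": value, "effect": effect}
    return {"key": key_part, "operator": "Exists", "effect": effect}
```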
Usage Notes
- Node targeting: This scenario targets nodes (not pods) and creates iptables rules on the target node(s) to filter network traffic
- Ingress/Egress: Pass both flags; at least one must be true. Both may be true to filter incoming and outgoing traffic
- Execution modes:
  - parallel: applies network filtering to all selected nodes simultaneously
  - serial: applies network filtering to nodes one at a time
Example Commands
Basic egress filtering (block outgoing traffic):
krknctl run node-network-filter \
--node-selector node-role.kubernetes.io/worker= \
--instance-count 1 \
--ingress false \
--egress true \
--ports 8080 \
--protocols tcp \
--chaos-duration 120
Ingress + egress filtering (block both incoming and outgoing):
krknctl run node-network-filter \
--node-name ip-10-0-1-100.ec2.internal \
--ingress true \
--egress true \
--ports 9090,9091 \
--protocols tcp,udp \
--interfaces eth0 \
--chaos-duration 300
Multi-port filtering with parallel execution:
krknctl run node-network-filter \
--node-selector kubernetes.io/os=linux \
--instance-count 3 \
--execution parallel \
--ingress false \
--egress true \
--ports 6379,6380,6381 \
--protocols tcp \
--chaos-duration 180
4 - Pod Network Filter
How to Run Pod Network Filter Scenarios
Choose your preferred method to run pod network filter scenarios:
Example scenario file: pod-network-filter.yml
Configuration
- id: pod_network_filter
wait_duration: 300
test_duration: 100
label_selector: "app=label"
instance_count: 1
execution: parallel
namespace: 'default'
# scenario specific settings
ingress: false
egress: true
target: 'pod-name'
interfaces: []
protocols:
- tcp
ports:
- 80
taints: []
For the common module settings please refer to the documentation.
- ingress: filters the incoming traffic on one or more ports. If set, one or more network interfaces must be specified
- egress: filters the outgoing traffic on one or more ports
- target: the pod name (if label_selector is not set)
- interfaces: a list of network interfaces where the incoming traffic will be filtered
- ports: the list of ports that will be filtered
- protocols: the IP protocols to filter (tcp and udp)
- taints: list of taints for which tolerations need to be created. Example: ["node-role.kubernetes.io/master:NoSchedule"]
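The constraints implied by these settings (at least one of ingress/egress enabled, interfaces required when ingress is set, at least one port) can be sketched as a small validation helper. This is illustrative only, not the module's actual validation code:

```python
def validate_filter_config(ingress, egress, interfaces, ports):
    """Return a list of config problems (empty means the config looks ok)."""
    errors = []
    if not ingress and not egress:
        errors.append("at least one of ingress/egress must be true")
    if ingress and not interfaces:
        # ingress filtering needs the interfaces to attach to
        errors.append("ingress filtering requires at least one interface")
    if not ports:
        errors.append("at least one port must be given")
    return errors
```

The example configuration above (egress only, port 80) passes these checks; enabling ingress without listing any interfaces would not.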
Usage
To enable pod network filter scenarios edit the kraken config file, go to the section kraken -> chaos_scenarios of the yaml structure
and add a new element to the list named network_chaos_ng_scenarios then add the desired scenario
pointing to the scenario yaml file.
kraken:
...
chaos_scenarios:
- network_chaos_ng_scenarios:
- scenarios/kube/pod-network-filter.yml
Note
You can specify multiple scenario files of the same type by adding additional paths to the list:
kraken:
chaos_scenarios:
- network_chaos_ng_scenarios:
- scenarios/kube/pod-network-filter-1.yml
- scenarios/kube/pod-network-filter-2.yml
- scenarios/kube/pod-network-filter-3.yml
You can also combine multiple different scenario types in the same config.yaml file. Scenario types can be specified in any order, and you can include the same scenario type multiple times:
kraken:
chaos_scenarios:
- network_chaos_ng_scenarios:
- scenarios/kube/pod-network-filter.yml
- pod_disruption_scenarios:
- scenarios/pod-kill.yaml
- node_scenarios:
- scenarios/node-reboot.yaml
- network_chaos_ng_scenarios: # Same type can appear multiple times
- scenarios/kube/pod-network-filter-2.yml
Examples
Please refer to the use cases section for some real usage scenarios.
Run
python run_kraken.py --config config/config.yaml
Run
$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-kube-config>:/home/krkn/.kube/config:z -d quay.io/krkn-chaos/krkn-hub:pod-network-filter
$ podman logs -f <container_name or container_id> # Streams Kraken logs
$ podman inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario
$ docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:z -d quay.io/krkn-chaos/krkn-hub:pod-network-filter
OR
$ docker run -e <VARIABLE>=<value> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:z -d quay.io/krkn-chaos/krkn-hub:pod-network-filter
$ docker logs -f <container_name or container_id> # Streams Kraken logs
$ docker inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario
TIP: Because the container runs with a non-root user, ensure the kube config is globally readable before mounting it in the container. You can achieve this with the following commands:
kubectl config view --flatten > ~/kubeconfig && chmod 444 ~/kubeconfig && docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v ~/kubeconfig:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:<scenario>
Supported parameters
The following environment variables can be set on the host running the container to tweak the scenario/faults being injected:
ex.)
export <parameter_name>=<value>
See the list of variables that apply to all scenarios here; they can be used/set in addition to these scenario-specific variables.
| Parameter | Description | Default |
|---|---|---|
| TOTAL_CHAOS_DURATION | set chaos duration (in sec) as desired | 60 |
| POD_SELECTOR | defines the pod selector for choosing target pods. If multiple pods match the selector, all of them will be subjected to stress. | “app=selector” |
| POD_NAME | the pod name to target (if POD_SELECTOR not specified) | |
| INSTANCE_COUNT | restricts the number of selected pods by the selector | “1” |
| EXECUTION | sets the execution mode of the scenario on multiple pods, can be parallel or serial | “parallel” |
| INGRESS | sets the network filter on incoming traffic, can be true or false | false |
| EGRESS | sets the network filter on outgoing traffic, can be true or false | true |
| INTERFACES | a list of comma separated names of network interfaces (eg. eth0 or eth0,eth1,eth2) to filter for outgoing traffic | "" |
| PORTS | a list of comma separated port numbers (eg 8080 or 8080,8081,8082) to filter for both outgoing and incoming traffic | "" |
| PROTOCOLS | a list of comma separated network protocols (tcp, udp or both of them e.g. tcp,udp) | “tcp” |
| TAINTS | List of taints for which tolerations need to be created. Example: [“node-role.kubernetes.io/master:NoSchedule”] | [] |
NOTE: When using a custom metrics profile or alerts profile with CAPTURE_METRICS or ENABLE_ALERTS enabled, mount the profiles from the host on which the container is run using podman/docker under /home/krkn/kraken/config/metrics-aggregated.yaml and /home/krkn/kraken/config/alerts. For example:
$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-custom-metrics-profile>:/home/krkn/kraken/config/metrics-aggregated.yaml -v <path-to-custom-alerts-profile>:/home/krkn/kraken/config/alerts -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:pod-network-filter
krknctl run pod-network-filter (optional: --<parameter>:<value> )
Can also set any global variable listed here
| Argument | Type | Description | Required | Default Value |
|---|---|---|---|---|
--chaos-duration | number | Chaos Duration | false | 60 |
--pod-selector | string | Pod Selector | false | |
--pod-name | string | Pod Name | false | |
--namespace | string | Namespace | false | default |
--instance-count | number | Number of instances to target | false | 1 |
--execution | enum | Execution mode | false | |
--ingress | boolean | Filter incoming traffic | true | |
--egress | boolean | Filter outgoing traffic | true | |
--interfaces | string | Network interfaces to filter outgoing traffic (if more than one separated by comma) | false | |
--ports | string | Network ports to filter traffic (if more than one separated by comma) | true | |
--image | string | The network chaos injection workload container image | false | quay.io/krkn-chaos/krkn-network-chaos:latest |
--protocols | string | The network protocols that will be filtered | false | tcp |
--taints | string | List of taints for which tolerations need to be created | false |