How to Run VMI Network Filter Scenarios
Choose your preferred method to run VMI network filter scenarios:
Example scenario file: virt_network.yaml
Configuration
```yaml
- id: vmi_network_filter
  image: "quay.io/krkn-chaos/krkn-network-chaos:latest"
  wait_duration: 300
  test_duration: 120
  label_selector: ""
  service_account: ""
  taints: []
  namespace: "my-namespace"
  instance_count: 1
  execution: serial
  target: ".*"
  interfaces: []
  ingress: true
  egress: true
```
For the common module settings please refer to the documentation.
- target: regex to match VMI names within the namespace (e.g. "<vmi-name-prefix>-.*" or ".*" for all)
- namespace: namespace containing the target VMIs (required; also supports regex to match multiple namespaces)
- interfaces: list of tap interface names to target. Leave empty to auto-detect the tap device in the virt-launcher network namespace
- ingress: apply iptables DROP rules to incoming traffic
- egress: apply iptables DROP rules to outgoing traffic
- ports: list of ports to block (omit or leave empty to block all ports)
- protocols: list of IP protocols to filter (tcp, udp, or both; defaults to ["tcp", "udp"])
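Note that the sample configuration above omits the optional ports and protocols keys; when present, they sit alongside the other fields of the scenario entry. A sketch with illustrative values (blocking HTTP/HTTPS over TCP):

```yaml
- id: vmi_network_filter
  # ... other fields as in the sample configuration ...
  ports:
    - 80
    - 443
  protocols:
    - tcp
```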
Note
ports and protocols are optional. When ports is omitted or empty, all traffic on the specified protocols is blocked, which is equivalent to full network isolation.

Catastrophic Configurations
Full network isolation (most catastrophic):
```yaml
ingress: true
egress: true
# no ports or protocols: blocks all TCP and UDP
```
Complete network cut to the VMI.
DNS blackout (cascading failures):
```yaml
ingress: true
egress: true
protocols:
  - tcp
  - udp
ports:
  - 53
```
Blocking DNS (port 53) causes every service inside the VM that resolves hostnames to fail with timeouts, producing cascading failures across the application stack without a hard cut. This is often the most realistic chaos scenario.
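Expressed as a complete scenario entry, the DNS blackout looks like the following (a sketch that reuses the field set from the sample configuration above; namespace and target are illustrative and should be adjusted for your cluster):

```yaml
- id: vmi_network_filter
  image: "quay.io/krkn-chaos/krkn-network-chaos:latest"
  wait_duration: 300
  test_duration: 120
  namespace: "my-namespace"
  instance_count: 1
  execution: serial
  target: ".*"
  interfaces: []
  ingress: true
  egress: true
  protocols:
    - tcp
    - udp
  ports:
    - 53
```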
Management plane loss:
```yaml
ingress: true
egress: true
protocols:
  - tcp
ports:
  - 22
  - 443
  - 6443
```
Blocks SSH, HTTPS, and the Kubernetes API server. The VM stays running but is unreachable for management and API calls.
Application layer only:
```yaml
ingress: true
egress: true
protocols:
  - tcp
ports:
  - 80
  - 443
  - 8080
  - 8443
```
Kills HTTP/HTTPS traffic only — tests application resilience without taking the entire VM offline.
Usage
To enable VMI network filter scenarios, edit the Kraken config file: under the kraken -> chaos_scenarios section of the YAML structure, add a new element named network_chaos_ng_scenarios to the list, then add the desired scenario pointing to the scenario YAML file.
```yaml
kraken:
  ...
  chaos_scenarios:
    - network_chaos_ng_scenarios:
        - scenarios/openshift/virt_network.yaml
```
Note
You can specify multiple scenario files of the same type by adding additional paths to the list:
```yaml
kraken:
  chaos_scenarios:
    - network_chaos_ng_scenarios:
        - scenarios/openshift/virt_network.yaml
        - scenarios/openshift/virt_network_2.yaml
```
You can also combine multiple different scenario types in the same config.yaml file:
```yaml
kraken:
  chaos_scenarios:
    - network_chaos_ng_scenarios:
        - scenarios/openshift/virt_network.yaml
    - pod_disruption_scenarios:
        - scenarios/pod-kill.yaml
```
Run
```bash
python run_kraken.py --config config/config.yaml
```
Run
```bash
$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:vmi-network-filter
$ podman logs -f <container_name or container_id> # Streams Kraken logs
$ podman inspect <container_name or container_id> --format "{{.State.ExitCode}}" # Outputs the exit code, which can be considered the pass/fail result for the scenario
```
```bash
$ docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:vmi-network-filter
```
OR
```bash
$ docker run -e <VARIABLE>=<value> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:vmi-network-filter
$ docker logs -f <container_name or container_id> # Streams Kraken logs
$ docker inspect <container_name or container_id> --format "{{.State.ExitCode}}" # Outputs the exit code, which can be considered the pass/fail result for the scenario
```
TIP: Because the container runs with a non-root user, ensure the kube config is globally readable before mounting it in the container. You can achieve this with the following commands:
```bash
kubectl config view --flatten > ~/kubeconfig && chmod 444 ~/kubeconfig && docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v ~/kubeconfig:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:vmi-network-filter
```
Supported parameters
The following environment variables can be set on the host running the container to tweak the scenario/faults being injected:
For example:

```bash
export <parameter_name>=<value>
```
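For instance, the DNS blackout described above could be configured through environment variables like this (all values illustrative; substitute your own namespace before launching the container):

```shell
# Scenario-specific variables for a DNS blackout run (illustrative values)
export NAMESPACE=my-namespace    # required: namespace containing the target VMIs
export VMI_NAME='.*'             # target every VMI in the namespace
export PORTS=53                  # block DNS only
export PROTOCOLS=tcp,udp         # both protocols (the default)
export TOTAL_CHAOS_DURATION=120  # seconds of chaos

echo "targeting namespace=$NAMESPACE ports=$PORTS"
# prints: targeting namespace=my-namespace ports=53
```

With podman, pass `--env-host=true` (as in the run command above) so these exported variables reach the container.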
See the list of variables that apply to all scenarios here; these can be set in addition to the scenario-specific variables below.
| Parameter | Description | Type | Default |
|---|---|---|---|
| TOTAL_CHAOS_DURATION | Chaos duration in seconds | number | 120 |
| NAMESPACE | Namespace containing the target VMIs (required) | string | |
| VMI_NAME | Regex to match VMI names (e.g. virt-server-.* or .* for all) | string | .* |
| LABEL_SELECTOR | Label selector to filter VMIs (e.g. app=myapp) | string | "" |
| INSTANCE_COUNT | Maximum number of VMIs to target | number | 1 |
| EXECUTION | Execution mode: serial or parallel | enum | serial |
| INGRESS | Apply DROP rules to incoming traffic | boolean | true |
| EGRESS | Apply DROP rules to outgoing traffic | boolean | true |
| INTERFACES | Comma-separated tap interface names (empty to auto-detect) | string | "" |
| PORTS | Comma-separated port numbers to block (empty = all ports) | string | "" |
| PROTOCOLS | Comma-separated protocols to filter: tcp, udp, or both | string | tcp,udp |
| WAIT_DURATION | Seconds to wait before running the next scenario in the same file | number | 300 |
| IMAGE | Network chaos injection workload image | string | quay.io/krkn-chaos/krkn-network-chaos:latest |
| TAINTS | List of taints for which tolerations are created (e.g. ["node-role.kubernetes.io/master:NoSchedule"]) | string | [] |
| SERVICE_ACCOUNT | Optional service account for the scenario workload | string | "" |
NOTE: If you use a custom metrics profile or alerts profile while CAPTURE_METRICS or ENABLE_ALERTS is enabled, mount the profiles from the host running the container (via podman/docker) under /home/krkn/kraken/config/metrics-aggregated.yaml and /home/krkn/kraken/config/alerts. For example:
```bash
$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-custom-metrics-profile>:/home/krkn/kraken/config/metrics-aggregated.yaml -v <path-to-custom-alerts-profile>:/home/krkn/kraken/config/alerts -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:vmi-network-filter
```
```bash
krknctl run vmi-network-filter [--<parameter> <value>]
```
You can also set any of the global variables listed here.
VMI Network Filter Parameters
| Argument | Type | Description | Required | Default Value |
|---|---|---|---|---|
| --chaos-duration | number | Chaos duration in seconds | false | 120 |
| --namespace | string | Namespace containing the target VMIs | true | |
| --target | string | Regex to match VMI names (e.g. <vmi-name-prefix>-.* or .* for all) | false | .* |
| --label-selector | string | Label selector to filter VMIs (e.g. app=myapp) | false | |
| --instance-count | number | Maximum number of VMIs to target | false | 1 |
| --execution | enum | Execution mode: parallel or serial | false | serial |
| --ingress | boolean | Apply DROP rules to incoming traffic | false | true |
| --egress | boolean | Apply DROP rules to outgoing traffic | false | true |
| --interfaces | string | Comma-separated tap interface names (empty to auto-detect) | false | |
| --ports | string | Comma-separated port numbers to block (e.g. 53 or 22,443,6443). Empty = all ports | false | |
| --protocols | string | Protocols to filter: tcp, udp, or tcp,udp | false | tcp,udp |
| --image | string | Network chaos injection workload image | false | quay.io/krkn-chaos/krkn-network-chaos:latest |
| --taints | string | Comma-separated taints for which tolerations are created (e.g. node-role.kubernetes.io/master:NoSchedule) | false | |
| --service-account | string | Optional service account for the scenario workload | false | |
| --wait-duration | number | Seconds to wait before running the next scenario in the same file | false | 300 |
Parameter Format Details
VMI Selection:
- --namespace: required; supports regex to match multiple namespaces (e.g. virt-density-.*)
- --target: regex matched against VMI names (e.g. <vmi-name-prefix>-.* targets all VMIs whose name starts with that prefix)
- Use --instance-count to limit how many matching VMIs are targeted

Port and Protocol Format:
- --ports: comma-separated integers, no spaces (e.g. 53 or 22,443,6443). Omit to block all ports
- --protocols: tcp, udp, or tcp,udp. Defaults to both

Interface Detection:
- Leave --interfaces empty to let the scenario auto-detect the tap device inside the virt-launcher network namespace
- Specify it explicitly (e.g. tap0) only if auto-detection fails
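The port-list format above (comma-separated integers, no spaces) can be illustrated with a small standalone validator. This helper is not part of krknctl; it is a hypothetical sketch that mirrors the documented rules for the --ports argument:

```python
import re


def parse_ports(spec: str) -> list[int]:
    """Parse a --ports-style value like '22,443,6443'.

    An empty string means 'all ports' and returns an empty list.
    Spaces or non-numeric entries raise ValueError, matching the
    'comma-separated integers, no spaces' rule documented above.
    """
    if not spec:
        return []
    if not re.fullmatch(r"\d+(,\d+)*", spec):
        raise ValueError(f"invalid port list: {spec!r}")
    ports = [int(p) for p in spec.split(",")]
    for p in ports:
        if not 1 <= p <= 65535:
            raise ValueError(f"port out of range: {p}")
    return ports


print(parse_ports("22,443,6443"))  # [22, 443, 6443]
print(parse_ports(""))             # [] (block all ports)
```

Note that a value like "22, 443" is rejected because of the space, which is why the examples below always pass ports without spaces.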
Example Commands
DNS blackout (most impactful cascading failure):
```bash
krknctl run vmi-network-filter \
  --namespace <namespace> \
  --target ".*" \
  --ports 53 \
  --protocols tcp,udp \
  --ingress true \
  --egress true \
  --chaos-duration 120
```
Full network isolation:
```bash
krknctl run vmi-network-filter \
  --namespace <namespace> \
  --target "<vmi-name>" \
  --ingress true \
  --egress true \
  --chaos-duration 60
```
Management plane loss (SSH + API):
```bash
krknctl run vmi-network-filter \
  --namespace <namespace> \
  --target "<vmi-name-prefix>-.*" \
  --instance-count 2 \
  --ports 22,443,6443 \
  --protocols tcp \
  --ingress true \
  --egress true \
  --chaos-duration 300
```
Application layer only (HTTP/HTTPS):
```bash
krknctl run vmi-network-filter \
  --namespace <namespace> \
  --target ".*" \
  --execution parallel \
  --ports 80,443,8080,8443 \
  --protocols tcp \
  --ingress true \
  --egress true \
  --chaos-duration 180
```