VMI Network Filter

    Injects iptables-based network filtering into a KubeVirt Virtual Machine Instance (VMI) by applying INPUT and OUTPUT rules inside the virt-launcher network namespace via nsenter. Supports port- and protocol-specific filtering, so you can selectively block DNS, SSH, HTTP, or any other traffic without cutting all connectivity. The tap interface (tap0) is targeted directly, so only the specific VMI is isolated, leaving OVN's BFD heartbeats and other node workloads unaffected.
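
    The injected rules take roughly the following shape — an illustrative sketch only, not the scenario's actual implementation. The launcher PID, interface name, protocol, and port are placeholder assumptions; the commands are composed as strings here so they can be inspected without a cluster:

```shell
# Illustrative sketch of the rules this scenario injects; the real
# implementation may differ. LAUNCHER_PID, tap0, and port 53 are
# placeholder assumptions.
LAUNCHER_PID=12345   # hypothetical virt-launcher process ID on the node
IFACE="tap0"
PORT=53

# Ingress: drop traffic arriving on the tap device (ingress: true)
INGRESS_RULE="nsenter -t $LAUNCHER_PID -n iptables -A INPUT -i $IFACE -p udp --dport $PORT -j DROP"
# Egress: drop traffic leaving via the tap device (egress: true)
EGRESS_RULE="nsenter -t $LAUNCHER_PID -n iptables -A OUTPUT -o $IFACE -p udp --dport $PORT -j DROP"

echo "$INGRESS_RULE"
echo "$EGRESS_RULE"
```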

    How to Run VMI Network Filter Scenarios

    Choose your preferred method to run VMI network filter scenarios:

    Example scenario file: virt_network.yaml

    Configuration

    - id: vmi_network_filter
      image: "quay.io/krkn-chaos/krkn-network-chaos:latest"
      wait_duration: 300
      test_duration: 120
      label_selector: ""
      service_account: ""
      taints: []
      namespace: "my-namespace"
      instance_count: 1
      execution: serial
      target: ".*"
      interfaces: []
      ingress: true
      egress: true
    

    For the common module settings please refer to the documentation.

    • target: regex to match VMI names within the namespace (e.g. "<vmi-name-prefix>-.*" or ".*" for all)
    • namespace: namespace containing the target VMIs (required; also supports regex to match multiple namespaces)
    • interfaces: list of tap interface names to target. Leave empty to auto-detect the tap device in the virt-launcher network namespace
    • ingress: apply iptables DROP rules to incoming traffic
    • egress: apply iptables DROP rules to outgoing traffic
    • ports: list of ports to block (omit or leave empty to block all ports)
    • protocols: list of IP protocols to filter — tcp, udp, or both (defaults to ["tcp", "udp"])
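
    Note that the base configuration above omits the optional ports and protocols keys. A DNS-only variant of the same scenario entry might append them like this (values illustrative):

```yaml
      # appended to the scenario entry shown above (values illustrative)
      ports:
        - 53
      protocols:
        - tcp
        - udp
```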

    Catastrophic Configurations

    Full network isolation (most catastrophic):

      ingress: true
      egress: true
      # no ports or protocols — blocks all TCP and UDP
    

    Complete network cut to the VMI.

    DNS blackout (cascading failures):

      ingress: true
      egress: true
      protocols:
        - tcp
        - udp
      ports:
        - 53
    

    Blocking DNS (port 53) causes every service inside the VM that resolves hostnames to fail with timeouts. Cascading failures across the application stack without a hard cut — often the most realistic chaos scenario.

    Management plane loss:

      ingress: true
      egress: true
      protocols:
        - tcp
      ports:
        - 22
        - 443
        - 6443
    

    Blocks SSH, HTTPS, and the Kubernetes API server. The VM stays running but is unreachable for management and API calls.

    Application layer only:

      ingress: true
      egress: true
      protocols:
        - tcp
      ports:
        - 80
        - 443
        - 8080
        - 8443
    

    Kills HTTP/HTTPS traffic only — tests application resilience without taking the entire VM offline.

    Usage

    To enable VMI network filter scenarios, edit the kraken config file: in the kraken -> chaos_scenarios section of the yaml structure, add a new element named network_chaos_ng_scenarios to the list, then add the desired scenario pointing to the scenario yaml file.

    kraken:
        ...
        chaos_scenarios:
            - network_chaos_ng_scenarios:
                - scenarios/openshift/virt_network.yaml
    

    Run

    python run_kraken.py --config config/config.yaml
    

    Run

    $ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:vmi-network-filter
    $ podman logs -f <container_name or container_id> # Streams Kraken logs
    $ podman inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs the exit code, which can be considered pass/fail for the scenario
    
    $ docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:vmi-network-filter
    OR
    $ docker run -e <VARIABLE>=<value> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:vmi-network-filter
    $ docker logs -f <container_name or container_id> # Streams Kraken logs
    $ docker inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs the exit code, which can be considered pass/fail for the scenario
    

    TIP: Because the container runs with a non-root user, ensure the kube config is globally readable before mounting it in the container. You can achieve this with the following commands:

    kubectl config view --flatten > ~/kubeconfig && chmod 444 ~/kubeconfig && docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v ~/kubeconfig:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:vmi-network-filter
    

    Supported parameters

    The following environment variables can be set on the host running the container to tweak the scenario/faults being injected:

    ex.) export <parameter_name>=<value>
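
    For example, a DNS-blackout run against a single namespace could be parameterized as follows (values illustrative; variable names come from the parameter table):

```shell
# Illustrative values; the variable names come from the parameter table.
export NAMESPACE="my-namespace"
export VMI_NAME=".*"
export PORTS="53"
export PROTOCOLS="tcp,udp"
export TOTAL_CHAOS_DURATION=120
# --env-host=true in the podman command passes these into the container.
```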

    See the list of variables that apply to all scenarios here; these can be used/set in addition to the scenario-specific variables below.

    | Parameter | Description | Type | Default |
    | --- | --- | --- | --- |
    | TOTAL_CHAOS_DURATION | Chaos duration in seconds | number | 120 |
    | NAMESPACE | Namespace containing the target VMIs (required) | string | |
    | VMI_NAME | Regex to match VMI names (e.g. virt-server-.* or .* for all) | string | .* |
    | LABEL_SELECTOR | Label selector to filter VMIs (e.g. app=myapp) | string | "" |
    | INSTANCE_COUNT | Maximum number of VMIs to target | number | 1 |
    | EXECUTION | Execution mode: serial or parallel | enum | serial |
    | INGRESS | Apply DROP rules to incoming traffic | boolean | true |
    | EGRESS | Apply DROP rules to outgoing traffic | boolean | true |
    | INTERFACES | Comma-separated tap interface names (empty to auto-detect) | string | "" |
    | PORTS | Comma-separated port numbers to block (empty = all ports) | string | "" |
    | PROTOCOLS | Comma-separated protocols to filter: tcp, udp, or both | string | tcp,udp |
    | WAIT_DURATION | Seconds to wait before running the next scenario in the same file | number | 300 |
    | IMAGE | Network chaos injection workload image | string | quay.io/krkn-chaos/krkn-network-chaos:latest |
    | TAINTS | List of taints for which tolerations are created (e.g. ["node-role.kubernetes.io/master:NoSchedule"]) | string | [] |
    | SERVICE_ACCOUNT | Optional service account for the scenario workload | string | "" |

    NOTE: When using a custom metrics profile or alerts profile with CAPTURE_METRICS or ENABLE_ALERTS enabled, mount the profiles from the host on which the container is run (via podman/docker) at /home/krkn/kraken/config/metrics-aggregated.yaml and /home/krkn/kraken/config/alerts respectively. For example:

    $ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-custom-metrics-profile>:/home/krkn/kraken/config/metrics-aggregated.yaml -v <path-to-custom-alerts-profile>:/home/krkn/kraken/config/alerts -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/krkn-chaos/krkn-hub:vmi-network-filter
    
    krknctl run vmi-network-filter [--<parameter> <value>]
    

    You can also set any global variable listed here.

    VMI Network Filter Parameters

    | Argument | Type | Description | Required | Default Value |
    | --- | --- | --- | --- | --- |
    | --chaos-duration | number | Chaos duration in seconds | false | 120 |
    | --namespace | string | Namespace containing the target VMIs | true | |
    | --target | string | Regex to match VMI names (e.g. <vmi-name-prefix>-.* or .* for all) | false | .* |
    | --label-selector | string | Label selector to filter VMIs (e.g. app=myapp) | false | |
    | --instance-count | number | Maximum number of VMIs to target | false | 1 |
    | --execution | enum | Execution mode: parallel or serial | false | serial |
    | --ingress | boolean | Apply DROP rules to incoming traffic | false | true |
    | --egress | boolean | Apply DROP rules to outgoing traffic | false | true |
    | --interfaces | string | Comma-separated tap interface names (empty to auto-detect) | false | |
    | --ports | string | Comma-separated port numbers to block (e.g. 53 or 22,443,6443). Empty = all ports | false | |
    | --protocols | string | Protocols to filter: tcp, udp, or tcp,udp | false | tcp,udp |
    | --image | string | Network chaos injection workload image | false | quay.io/krkn-chaos/krkn-network-chaos:latest |
    | --taints | string | Comma-separated taints for which tolerations are created (e.g. node-role.kubernetes.io/master:NoSchedule) | false | |
    | --service-account | string | Optional service account for the scenario workload | false | |
    | --wait-duration | number | Seconds to wait before running the next scenario in the same file | false | 300 |

    Parameter Format Details

    VMI Selection:

    • --namespace: required; supports regex to match multiple namespaces (e.g. virt-density-.*)
    • --target: regex matched against VMI names (e.g. <vmi-name-prefix>-.* targets all VMIs whose name starts with that prefix)
    • Use --instance-count to limit how many matching VMIs are targeted

    Port and Protocol Format:

    • --ports: comma-separated integers, no spaces (e.g. 53 or 22,443,6443). Omit to block all ports
    • --protocols: tcp, udp, or tcp,udp. Defaults to both

    Interface Detection:

    • Leave --interfaces empty to let the scenario auto-detect the tap device inside the virt-launcher network namespace
    • Specify explicitly (e.g. tap0) only if auto-detection fails
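
    As a sketch of what auto-detection amounts to (illustrative logic only, not the scenario's actual code): list the links in the launcher namespace and keep the first interface name starting with tap. Canned `ip -o link` output is parsed here so the logic runs without a cluster:

```shell
# Illustrative tap-device detection logic; not the scenario's actual code.
# Canned "ip -o link"-style output stands in for the virt-launcher namespace.
links='1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500
3: tap0: <BROADCAST,MULTICAST,UP> mtu 1500'

# Field 2 of "N: name: <flags>" is the interface name; take the first tap*.
TAP_IFACE=$(printf '%s\n' "$links" | awk -F': ' '$2 ~ /^tap/ {print $2; exit}')
echo "$TAP_IFACE"
```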

    Example Commands

    DNS blackout (most impactful cascading failure):

    krknctl run vmi-network-filter \
      --namespace <namespace> \
      --target ".*" \
      --ports 53 \
      --protocols tcp,udp \
      --ingress true \
      --egress true \
      --chaos-duration 120
    

    Full network isolation:

    krknctl run vmi-network-filter \
      --namespace <namespace> \
      --target "<vmi-name>" \
      --ingress true \
      --egress true \
      --chaos-duration 60
    

    Management plane loss (SSH + API):

    krknctl run vmi-network-filter \
      --namespace <namespace> \
      --target "<vmi-name-prefix>-.*" \
      --instance-count 2 \
      --ports 22,443,6443 \
      --protocols tcp \
      --ingress true \
      --egress true \
      --chaos-duration 300
    

    Application layer only (HTTP/HTTPS):

    krknctl run vmi-network-filter \
      --namespace <namespace> \
      --target ".*" \
      --execution parallel \
      --ports 80,443,8080,8443 \
      --protocols tcp \
      --ingress true \
      --egress true \
      --chaos-duration 180