KubeVirt VM Outage Scenario

Simulating VM-level disruptions in KubeVirt/OpenShift CNV environments

This scenario enables the simulation of VM-level disruptions in clusters where KubeVirt or OpenShift Container-native Virtualization (CNV) is installed. It allows users to delete a Virtual Machine Instance (VMI) to simulate a VM crash and test recovery capabilities.

Purpose

The kubevirt_vm_outage scenario deletes a specific KubeVirt Virtual Machine Instance (VMI) to simulate a VM crash or outage. This helps users:

  • Test the resilience of applications running inside VMs
  • Verify that VM monitoring and recovery mechanisms work as expected
  • Validate high availability configurations for VM workloads
  • Understand the impact of sudden VM failures on workloads and the overall system

Prerequisites

Before using this scenario, ensure the following:

  1. KubeVirt or OpenShift CNV is installed in your cluster
  2. The target VMI exists and is running in the specified namespace
  3. Your cluster credentials have sufficient permissions to delete and create VMIs
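
As a quick sanity check before running the scenario, commands along these lines can verify the prerequisites. The VMI name, target namespace, and the kubevirt namespace are placeholder assumptions; adjust them for your cluster:

$ kubectl get kubevirt -n kubevirt # KubeVirt control plane should report a Deployed phase
$ kubectl get vmi my-application-vm -n vm-workloads # Target VMI should be in the Running phase
$ kubectl auth can-i delete virtualmachineinstances -n vm-workloads # Should print "yes"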

Parameters

The scenario supports the following parameters:

| Parameter | Description | Required | Default |
|-----------|-------------|----------|---------|
| vm_name | The name of the VMI to delete | Yes | N/A |
| namespace | The namespace where the VMI is located | No | "default" |
| timeout | How long to wait (in seconds) for the VMI to start running again before attempting recovery | No | 60 |
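
For reference, a minimal scenario configuration using these parameters might look like the following; the VM name and namespace are illustrative, and full Kraken usage is covered later in this document:

scenarios:
  - name: "kubevirt vm outage"
    scenario: kubevirt_vm_outage
    parameters:
      vm_name: my-application-vm
      namespace: vm-workloads
      timeout: 60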

Expected Behavior

When executed, the scenario will:

  1. Validate that KubeVirt is installed and the target VMI exists
  2. Save the initial state of the VMI
  3. Delete the VMI
  4. Wait for the VMI to return to the Running state, up to the configured timeout
  5. Attempt to recover the VMI:
    • If the VMI is managed by a VirtualMachine resource with runStrategy: Always, it will recover automatically (see the example manifest after this list)
    • If automatic recovery doesn’t occur, the plugin will manually recreate the VMI using the saved state
  6. Validate that the VMI is running again
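
Automatic recovery relies on the VMI being owned by a VirtualMachine resource with runStrategy: Always. A minimal sketch of such a manifest is shown below; the name, namespace, and memory request are illustrative assumptions:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: my-application-vm    # hypothetical VM name
  namespace: vm-workloads    # hypothetical namespace
spec:
  runStrategy: Always        # KubeVirt's controller recreates the VMI if it is deleted
  template:
    spec:
      domain:
        devices: {}
        resources:
          requests:
            memory: 1Gi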

Advanced Use Cases

Testing High Availability VM Configurations

This scenario is particularly useful for testing high availability configurations, such as:

  • Clustered applications running across multiple VMs
  • VMs with automatic restart policies
  • Applications with cross-VM resilience mechanisms

Recovery Strategies

The plugin implements two recovery strategies:

  1. Automated Recovery: If the VM is managed by a VirtualMachine resource with runStrategy: Always, the plugin will wait for KubeVirt’s controller to automatically recreate the VMI.

  2. Manual Recovery: If automatic recovery doesn’t occur within the timeout period, the plugin will attempt to manually recreate the VMI using the saved state from before the deletion.
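
To see which strategy will apply before injecting chaos, you can inspect the owning VirtualMachine's runStrategy; the name and namespace here are placeholders:

$ kubectl get vm my-application-vm -n vm-workloads -o jsonpath='{.spec.runStrategy}'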

Limitations

  • The scenario currently supports deleting a single VMI at a time
  • If the VM spec changes during the outage window, manual recovery may not reflect those changes
  • The scenario doesn’t simulate partial VM failures (e.g., VM freezing), only a complete VM outage

Troubleshooting

If the scenario fails, check the following:

  1. Ensure KubeVirt/CNV is properly installed in your cluster
  2. Verify that the target VMI exists and is running
  3. Check that your credentials have sufficient permissions to delete and create VMIs
  4. Examine the logs for specific error messages
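
Assuming kubectl access, commands along these lines can help with the checks above; the resource names are placeholders:

$ kubectl get pods -n kubevirt # virt-api, virt-controller, and virt-handler pods should be Running
$ kubectl get vmi my-application-vm -n vm-workloads -o wide
$ kubectl auth can-i create virtualmachineinstances -n vm-workloads
$ kubectl get events -n vm-workloads --sort-by=.lastTimestamp | tail # Recent events often explain failed recoveries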

1 - KubeVirt VM Outage Scenario - Kraken

Detailed implementation of the KubeVirt VM Outage Scenario in Kraken

KubeVirt VM Outage Scenario in Kraken

The kubevirt_vm_outage scenario in Kraken enables users to simulate VM-level disruptions by deleting a Virtual Machine Instance (VMI) to test resilience and recovery capabilities.

Implementation

This scenario is implemented in Kraken’s core repository, with the following key functionality:

  1. Finding and validating the target VMI
  2. Deleting the VMI using the KubeVirt API
  3. Monitoring the recovery process
  4. Implementing fallback recovery if needed
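
The chaos injection step is functionally equivalent to deleting the VMI by hand, which can be useful as a dry run before automating it with Kraken; the name and namespace are placeholders:

$ kubectl delete vmi my-application-vm -n vm-workloads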

Usage

You can use this scenario in your Kraken configuration file as follows:

scenarios:
  - name: "kubevirt vm outage"
    scenario: kubevirt_vm_outage
    parameters:
      vm_name: <my-application-vm>
      namespace: <vm-workloads>
      timeout: 60

Detailed Parameters

| Parameter | Description | Required | Default | Example Values |
|-----------|-------------|----------|---------|----------------|
| vm_name | The name of the VMI to delete | Yes | N/A | "database-vm", "web-server-vm" |
| namespace | The namespace where the VMI is located | No | "default" | "openshift-cnv", "vm-workloads" |
| timeout | How long to wait (in seconds) for the VMI to become running before attempting recovery | No | 60 | 30, 120, 300 |

Execution Flow

When executed, the scenario follows this process:

  1. Initialization: Validates KubeVirt is installed and configures the KubeVirt client
  2. VMI Validation: Checks if the target VMI exists and is in Running state
  3. State Preservation: Saves the initial state of the VMI
  4. Chaos Injection: Deletes the VMI using the KubeVirt API
  5. Wait for Running: Waits for VMI to become running again, up to the timeout specified
  6. Recovery Monitoring: Checks if the VMI is automatically restored
  7. Manual Recovery: If automatic recovery doesn’t occur, manually recreates the VMI
  8. Validation: Confirms the VMI is running correctly
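
While the scenario runs, you can observe this flow from a second terminal; the name and namespace below match the sample configuration that follows:

$ kubectl get vmi my-vm -n kubevirt -w # Shows the VMI being deleted and, on recovery, returning to Running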

Sample Configuration

Here’s an example configuration for using the kubevirt_vm_outage scenario:

scenarios:
  - name: "kubevirt outage test"
    scenario: kubevirt_vm_outage
    parameters:
      vm_name: my-vm
      namespace: kubevirt
      timeout: 60

For multiple VMs in different namespaces:

scenarios:
  - name: "kubevirt outage test - app VM"
    scenario: kubevirt_vm_outage
    parameters:
      vm_name: app-vm
      namespace: application
      timeout: 120
  
  - name: "kubevirt outage test - database VM"
    scenario: kubevirt_vm_outage
    parameters:
      vm_name: db-vm
      namespace: database
      timeout: 180

Combining with Other Scenarios

For more comprehensive testing, you can combine this scenario with other Kraken scenarios in the chaos_scenarios list of the config file:

kraken:
    kubeconfig_path: ~/.kube/config                     # Path to kubeconfig
    ...
    chaos_scenarios:
        - hog_scenarios:
            - scenarios/kube/cpu-hog.yml
        - kubevirt_vm_outage:
            - scenarios/kubevirt/kubevirt-vm-outage.yaml

2 - KubeVirt Outage Scenarios using Krkn-Hub

This scenario deletes a VMI matching the namespace and name on a Kubernetes/OpenShift cluster.

Run

If you are enabling Cerberus to monitor the cluster and pass/fail the scenario post-chaos, refer to the Cerberus docs. Make sure to start it before injecting the chaos, and set the CERBERUS_ENABLED environment variable for the chaos injection container to autoconnect.

$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:kubevirt-outage
$ podman logs -f <container_name or container_id> # Streams Kraken logs
$ podman inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario

$ docker run $(./get_docker_params.sh) --name=<container_name> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:kubevirt-outage
OR
$ docker run -e <VARIABLE>=<value> --net=host --pull=always -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:kubevirt-outage

$ docker logs -f <container_name or container_id> # Streams Kraken logs
$ docker inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario

Supported parameters

The following environment variables can be set on the host running the container to tweak the scenario/faults being injected:

Example if --env-host is used:

export <parameter_name>=<value>

OR set it on the command line when running the container, for example:

-e <VARIABLE>=<value> 

See the list of variables that apply to all scenarios here; these can be used/set in addition to the scenario-specific variables below.

| Parameter | Description | Default |
|-----------|-------------|---------|
| NAMESPACE | VMI namespace to target | "" |
| VMI_NAME | VMI name to delete; supports regex | "" |
| TIMEOUT | Time (in seconds) to wait for the VMI to start running again; the scenario fails if the timeout is hit | 120 |
For example:
$ podman run --name=<container_name> --net=host --pull=always --env-host=true -v <path-to-custom-metrics-profile>:/home/krkn/kraken/config/metrics-aggregated.yaml -v <path-to-custom-alerts-profile>:/home/krkn/kraken/config/alerts -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:kubevirt-outage

3 - KubeVirt Outage Scenarios using Krknctl

krknctl run kubevirt-outage (optional: --<parameter>:<value> )

You can also set any global variable listed here.

Scenario-specific parameters:

| Parameter | Description | Type | Default |
|-----------|-------------|------|---------|
| --namespace | VMI namespace to target | string | "" |
| --vmi-name | VMI name to delete | string | |
| --timeout | Timeout (in seconds) to wait for the VMI to start running again | number | 180 |
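
For example, a run targeting a specific VMI might look like the following; the values are illustrative, and the exact flag syntax can be confirmed with the --help output shown below:

krknctl run kubevirt-outage --namespace vm-workloads --vmi-name my-application-vm --timeout 180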

To see all available scenario options:

krknctl run kubevirt-outage --help