KubeVirt VM Outage Scenario

Simulating VM-level disruptions in KubeVirt/OpenShift CNV environments

This scenario simulates VM-level disruptions in clusters where KubeVirt or OpenShift Container-native Virtualization (CNV) is installed. It lets users delete a Virtual Machine Instance (VMI) to simulate a VM crash and test recovery capabilities.

Purpose

The kubevirt_vm_outage scenario deletes a specific KubeVirt Virtual Machine Instance (VMI) to simulate a VM crash or outage. This helps users:

  • Test the resilience of applications running inside VMs
  • Verify that VM monitoring and recovery mechanisms work as expected
  • Validate high availability configurations for VM workloads
  • Understand the impact of sudden VM failures on workloads and the overall system

Prerequisites

Before using this scenario, ensure the following:

  1. KubeVirt or OpenShift CNV is installed in your cluster
  2. The target VMI exists and is running in the specified namespace
  3. You have the kubevirt Python client installed (included in krkn requirements.txt)
  4. Your cluster credentials have sufficient permissions to delete and create VMIs

Parameters

The scenario supports the following parameters:

| Parameter | Description | Required | Default |
|-----------|-------------|----------|---------|
| `vm_name` | The name of the VMI to delete | Yes | N/A |
| `namespace` | The namespace where the VMI is located | No | `default` |
| `duration` | How long to wait (in seconds) before attempting recovery | No | `60` |

Expected Behavior

When executed, the scenario will:

  1. Validate that KubeVirt is installed and the target VMI exists
  2. Save the initial state of the VMI
  3. Delete the VMI using the KubeVirt API
  4. Wait for the specified duration
  5. Attempt to recover the VMI:
    • If the VMI is managed by a VirtualMachine resource with runStrategy: Always, it will automatically recover
    • If automatic recovery doesn’t occur, the plugin will manually recreate the VMI using the saved state
  6. Validate that the VMI is running again
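The recovery decision in step 5 hinges on whether an owning VirtualMachine with `runStrategy: Always` exists. A minimal sketch of that check, assuming the VirtualMachine is available as a parsed dict (the helper name and dict shape are illustrative, not the plugin's actual code):

```python
from typing import Optional

def will_auto_recover(vm: Optional[dict]) -> bool:
    """Return True if KubeVirt's controller is expected to recreate the VMI.

    A VMI owned by a VirtualMachine with runStrategy: Always is recreated
    automatically; a standalone VMI (vm is None) must be recreated manually.
    """
    if vm is None:  # standalone VMI, no owning VirtualMachine
        return False
    return vm.get("spec", {}).get("runStrategy") == "Always"


# Hypothetical owning VirtualMachine specs:
managed = {"spec": {"runStrategy": "Always"}}
halted = {"spec": {"runStrategy": "Halted"}}
print(will_auto_recover(managed))  # → True: controller recreates the VMI
print(will_auto_recover(halted))   # → False: plugin recreates from saved state
print(will_auto_recover(None))     # → False: standalone VMI
```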

Sample Configuration

Here’s an example configuration for using the kubevirt_vm_outage scenario:

scenarios:
  - name: "kubevirt outage test"
    scenario: kubevirt_vm_outage
    parameters:
      vm_name: my-vm
      namespace: kubevirt
      duration: 60

For multiple VMs in different namespaces:

scenarios:
  - name: "kubevirt outage test - app VM"
    scenario: kubevirt_vm_outage
    parameters:
      vm_name: app-vm
      namespace: application
      duration: 120
  
  - name: "kubevirt outage test - database VM"
    scenario: kubevirt_vm_outage
    parameters:
      vm_name: db-vm
      namespace: database
      duration: 180

Recovery Strategies

The plugin implements two recovery strategies:

  1. Automated Recovery: If the VM is managed by a VirtualMachine resource with runStrategy: Always, the plugin will wait for KubeVirt’s controller to automatically recreate the VMI.

  2. Manual Recovery: If automatic recovery doesn’t occur within the timeout period, the plugin will attempt to manually recreate the VMI using the saved state from before the deletion.
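The two strategies combine into a single recovery loop: poll for automatic recovery until the timeout, then fall back to manual recreation. The sketch below illustrates that logic only and is not the plugin's actual code; `get_vmi_phase` and `recreate_vmi` stand in for client calls, and `_sleep` is injectable so the loop can be tested without waiting:

```python
import time

def wait_for_recovery(get_vmi_phase, recreate_vmi, timeout=60, interval=5,
                      _sleep=time.sleep):
    """Poll for automatic recovery; fall back to manual recreation.

    get_vmi_phase: callable returning the VMI phase ("Running", ...) or None
    recreate_vmi:  callable that recreates the VMI from the saved state
    Returns "automatic" or "manual" depending on which strategy applied.
    """
    waited = 0
    while waited < timeout:
        if get_vmi_phase() == "Running":
            return "automatic"   # KubeVirt's controller restored the VMI
        _sleep(interval)
        waited += interval
    recreate_vmi()               # timed out: manual fallback from saved state
    return "manual"
```

Injecting the two callables keeps the recovery policy separate from the API client, which is why it can be exercised with stubs.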

Limitations

  • The scenario currently supports deleting only a single VMI at a time
  • If the VM spec changes during the outage window, manual recovery may not reflect those changes
  • The scenario doesn't simulate partial VM failures (e.g., VM freezing), only a complete VM outage

Troubleshooting

If the scenario fails, check the following:

  1. Ensure KubeVirt/CNV is properly installed in your cluster
  2. Verify that the target VMI exists and is running
  3. Check that your credentials have sufficient permissions to delete and create VMIs
  4. Examine the logs for specific error messages

KubeVirt VM Outage Scenario - Kraken

Detailed implementation of the KubeVirt VM Outage Scenario in Kraken

KubeVirt VM Outage Scenario in Kraken

The kubevirt_vm_outage scenario in Kraken enables users to simulate VM-level disruptions by deleting a Virtual Machine Instance (VMI) to test resilience and recovery capabilities.

Implementation

This scenario is implemented in Kraken’s core repository, with the following key functionality:

  1. Finding and validating the target VMI
  2. Deleting the VMI using the KubeVirt API
  3. Monitoring the recovery process
  4. Implementing fallback recovery if needed

Usage

You can use this scenario in your Kraken configuration file as follows:

scenarios:
  - name: "kubevirt vm outage"
    scenario: kubevirt_vm_outage
    parameters:
      vm_name: <my-application-vm>
      namespace: <vm-workloads>
      duration: 60

Detailed Parameters

| Parameter | Description | Required | Default | Example Values |
|-----------|-------------|----------|---------|----------------|
| `vm_name` | The name of the VMI to delete | Yes | N/A | `database-vm`, `web-server-vm` |
| `namespace` | The namespace where the VMI is located | No | `default` | `openshift-cnv`, `vm-workloads` |
| `duration` | How long to wait (in seconds) before attempting recovery | No | `60` | `30`, `120`, `300` |

Execution Flow

When executed, the scenario follows this process:

  1. Initialization: Validates KubeVirt is installed and configures the KubeVirt client
  2. VMI Validation: Checks if the target VMI exists and is in Running state
  3. State Preservation: Saves the initial state of the VMI
  4. Chaos Injection: Deletes the VMI using the KubeVirt API
  5. Wait Period: Waits for the specified duration
  6. Recovery Monitoring: Checks if the VMI is automatically restored
  7. Manual Recovery: If automatic recovery doesn’t occur, manually recreates the VMI
  8. Validation: Confirms the VMI is running correctly

Integration with KubeVirt API

The scenario utilizes the KubeVirt Python client to interact with the KubeVirt API. Key API operations include:

  • Reading VMI objects: `kubevirt_api.read_namespaced_virtual_machine_instance()`
  • Deleting VMI objects: `kubevirt_api.delete_namespaced_virtual_machine_instance()`
  • Creating VMI objects: `kubevirt_api.create_namespaced_virtual_machine_instance()`
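Putting these operations together, the core inject/recover cycle can be sketched as below. This is a simplified illustration: the argument order of the client methods is assumed here rather than taken from the client's documentation, and `_FakeApi` is a hypothetical in-memory stand-in for the real kubevirt client so the sketch can run without a cluster:

```python
def inject_vmi_outage(api, name, namespace="default"):
    """Save the VMI's current state, then delete it via the KubeVirt API."""
    saved = api.read_namespaced_virtual_machine_instance(name, namespace)
    api.delete_namespaced_virtual_machine_instance(name, namespace)
    return saved  # kept for manual recovery if needed

def manually_recover(api, saved, namespace="default"):
    """Recreate a VMI from the state saved before deletion."""
    return api.create_namespaced_virtual_machine_instance(saved, namespace)

# Hypothetical in-memory stand-in for the real client API object.
class _FakeApi:
    def __init__(self):
        self.log = []
    def read_namespaced_virtual_machine_instance(self, name, namespace):
        self.log.append(("read", name, namespace))
        return {"metadata": {"name": name, "namespace": namespace}}
    def delete_namespaced_virtual_machine_instance(self, name, namespace):
        self.log.append(("delete", name, namespace))
    def create_namespaced_virtual_machine_instance(self, body, namespace):
        self.log.append(("create", body["metadata"]["name"], namespace))
        return body

api = _FakeApi()
saved = inject_vmi_outage(api, "my-vm", "kubevirt")
manually_recover(api, saved, "kubevirt")
print([op for op, *_ in api.log])  # → ['read', 'delete', 'create']
```

Saving the VMI object before deletion is what makes the manual-recovery fallback possible.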

Advanced Use Cases

Testing High Availability VM Configurations

This scenario is particularly useful for testing high availability configurations, such as:

  • Clustered applications running across multiple VMs
  • VMs with automatic restart policies
  • Applications with cross-VM resilience mechanisms

Combining with Other Scenarios

For more comprehensive testing, you can combine this scenario with other Kraken scenarios:

scenarios:
  - name: "node outage with vm recovery test"
    scenario: node_stop_start_scenario
    parameters:
      # Node scenario parameters
  
  - name: "vm outage during node recovery"
    scenario: kubevirt_vm_outage
    parameters:
      vm_name: <critical-vm>
      namespace: <production>
      duration: 120