Node Scenarios

This scenario disrupts the node(s) matching a label or node name(s) on a Kubernetes/OpenShift cluster. These scenarios are performed in one of two ways: either through the cluster's cloud CLI or through common/generic commands that can be run on any cluster.

Actions

The following node chaos scenarios are supported:

  1. node_start_scenario: Scenario to start the node instance. Needs access to the cloud provider
  2. node_stop_scenario: Scenario to stop the node instance. Needs access to the cloud provider
  3. node_stop_start_scenario: Scenario to stop and then start the node instance. Not supported on VMware. Needs access to the cloud provider
  4. node_termination_scenario: Scenario to terminate the node instance. Needs access to the cloud provider
  5. node_reboot_scenario: Scenario to reboot the node instance. Needs access to the cloud provider
  6. stop_kubelet_scenario: Scenario to stop the kubelet on the node instance. Needs access to the cloud provider
  7. stop_start_kubelet_scenario: Scenario to stop and start the kubelet on the node instance. Needs access to the cloud provider
  8. restart_kubelet_scenario: Scenario to restart the kubelet on the node instance. Can be used with the generic cloud type or when you don’t have access to the cloud provider
  9. node_crash_scenario: Scenario to crash the node instance. Can be used with the generic cloud type or when you don’t have access to the cloud provider
  10. stop_start_helper_node_scenario: Scenario to stop and start the helper node and check service status. Needs access to the cloud provider
  11. node_block_scenario: Scenario to block inbound and outbound traffic from other nodes to a specific node for a set duration (Azure only). Needs access to the cloud provider

Clouds

The supported cloud providers are AWS, Azure, GCP, OpenStack, Alibaba, VMware vSphere, IBMCloud, bare metal, and Docker (kind); setup details for each are covered in the sections below. Clusters on other platforms can use the generic cloud type.

Recovery Times

In each node scenario, the telemetry details at the end of the run show how long each node took to stop and to recover, depending on the scenario.

The details printed in telemetry:

  • node_name: Node name
  • node_id: Node ID
  • not_ready_time: Amount of time the node took to reach the NotReady state after the cloud provider stopped it
  • ready_time: Amount of time the node took to return to the Ready state after the cloud provider reported it as started
  • stopped_time: Amount of time the cloud provider took to stop the node
  • running_time: Amount of time the cloud provider took to get the node running
  • terminating_time: Amount of time the cloud provider took to terminate the node

Example:

"affected_nodes": [
    {
        "node_name": "cluster-name-**.438115.internal",
        "node_id": "cluster-name-**",
        "not_ready_time": 0.18194103240966797,
        "ready_time": 0.0,
        "stopped_time": 140.74104499816895,
        "running_time": 0.0,
        "terminating_time": 0.0
    },
    {
        "node_name": "cluster-name-**-master-0.438115.internal",
        "node_id": "cluster-name-**-master-0",
        "not_ready_time": 0.1611928939819336,
        "ready_time": 0.0,
        "stopped_time": 146.72056317329407,
        "running_time": 0.0,
        "terminating_time": 0.0
    },
    {
        "node_name": "cluster-name-**.438115.internal",
        "node_id": "cluster-name-**",
        "not_ready_time": 0.0,
        "ready_time": 43.521320104599,
        "stopped_time": 0.0,
        "running_time": 12.305592775344849,
        "terminating_time": 0.0
    },
    {
        "node_name": "cluster-name-**-master-0.438115.internal",
        "node_id": "cluster-name-**-master-0",
        "not_ready_time": 0.0,
        "ready_time": 48.33336925506592,
        "stopped_time": 0.0,
        "running_time": 12.052034854888916,
        "terminating_time": 0.0
    }
]

1 - Node Scenarios using Krkn

For any of the node scenarios, you’ll specify node_scenarios as the scenario type.

See example config here:

    chaos_scenarios:
        - node_scenarios: # List of chaos node scenarios to load
            - scenarios/***.yml
            - scenarios/***.yml # Can specify multiple files here

Sample scenario file; you can specify multiple list items under node_scenarios, and they will be run serially:

node_scenarios:
  - actions:                   # node chaos scenarios to be injected
    - <action>                 # Can specify multiple actions here
    node_name: <node_name>     # node on which scenario has to be injected; can set multiple names separated by comma
    label_selector: <label>    # when node_name is not specified, a node with matching label_selector is selected for node chaos scenario injection; can specify multiple by a comma separated list
    instance_count: <instance_number> # Number of nodes to perform action/select that match the label selector
    runs: <run_int>            # number of times to inject each scenario under actions (will perform on same node each time)
    timeout: <timeout>         # duration to wait for completion of node scenario injection
    duration: <duration>       # duration to stop the node before running the start action
    cloud_type: <cloud>        # cloud type on which Kubernetes/OpenShift runs  
    parallel: <true_or_false>  # Run action on label or node name in parallel or sequential, defaults to sequential
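
For instance, a filled-in scenario file might look like the following sketch. The action, label, and cloud type are illustrative choices for an AWS cluster with the default worker label, not values required by Krkn:

node_scenarios:
  - actions:
    - node_stop_start_scenario
    label_selector: node-role.kubernetes.io/worker   # assumed worker-node label
    instance_count: 1
    runs: 1
    timeout: 180
    duration: 120
    cloud_type: aws
    parallel: false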

AWS

Cloud setup instructions can be found here. Sample scenario config can be found here.

The cloud type in the scenario yaml file needs to be aws

Baremetal

Sample scenario config can be found here.

The cloud type in the scenario yaml file needs to be bm

Docker

The Docker provider can be used to run node scenarios against kind clusters.

kind is a tool for running local Kubernetes clusters using Docker container “nodes”.

kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.
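
As a quick sketch of how this fits together (the cluster name is arbitrary, and the docker value for cloud_type is an assumption based on the provider name, so verify it against your Krkn version):

# Create a local kind cluster whose "nodes" are Docker containers
kind create cluster --name krkn-test

# In the node scenario file, select the Docker provider (value assumed):
#   cloud_type: docker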

GCP

Cloud setup instructions can be found here. Sample scenario config can be found here.

The cloud type in the scenario yaml file needs to be gcp

OpenStack

How to set up the OpenStack CLI to run node scenarios is defined here.

The cloud type in the scenario yaml file needs to be openstack

The supported node-level chaos scenarios on an OpenStack cloud are currently limited to node_stop_start_scenario, stop_start_kubelet_scenario, and node_reboot_scenario.

To execute the scenario, make sure ssh_private_key in the node scenarios config file is set to the path of the private key used for the SSH connection to the helper node. Also ensure passwordless SSH is configured between the host running Kraken and the helper node to avoid connection errors.
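
For example, you might prepare the SSH access and config along these lines (the key path, user, and helper node address are placeholders, not values taken from this page):

# Enable passwordless SSH from the Kraken host to the helper node
ssh-copy-id -i ~/.ssh/id_rsa.pub <user>@<helper_node_address>

# In the node scenarios config file, point to the matching private key:
#   ssh_private_key: ~/.ssh/id_rsa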

Azure

Cloud setup instructions can be found here. Sample scenario config can be found here.

The cloud type in the scenario yaml file needs to be azure

Alibaba

How to set up the Alibaba CLI to run node scenarios is defined here.

The cloud type in the scenario yaml file needs to be alibaba

VMware

How to set up VMware vSphere to run node scenarios is defined here.

The cloud type in the scenario yaml file needs to be vmware

IBMCloud

How to set up IBMCloud to run node scenarios is defined here.

A sample IBM Cloud node scenarios example config file can be found here.

The cloud type in the scenario yaml file needs to be ibm

General

Use ‘generic’ as the cloud_type, or omit the ‘cloud_type’ key entirely, if your cluster is not set up on one of the currently supported cloud types.
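
For example, a sketch of a generic scenario that sticks to actions not requiring cloud provider access (the label value is an assumption) could look like:

node_scenarios:
  - actions:
    - node_crash_scenario          # works without cloud provider access
    label_selector: node-role.kubernetes.io/worker
    instance_count: 1
    cloud_type: generic            # or omit cloud_type entirely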

2 - Node Scenarios using Krknctl

krknctl run node-scenarios (optional: --<parameter>:<value> )

You can also set any global variable listed here.

Scenario-specific parameters (be sure to scroll right to see all columns):

| Parameter | Description | Type | Default | Possible Values |
| --- | --- | --- | --- | --- |
| --action | Action performed on the node; see https://github.com/krkn-chaos/krkn/blob/main/docs/node_scenarios.md for more info | enum | | node_start_scenario, node_stop_scenario, node_stop_start_scenario, node_termination_scenario, node_reboot_scenario, stop_kubelet_scenario, stop_start_kubelet_scenario, restart_kubelet_scenario, node_crash_scenario, stop_start_helper_node_scenario |
| --label-selector | Node label to target | string | node-role.kubernetes.io/worker | |
| --node-name | Node name to inject faults in case of targeting a specific node; can set multiple node names separated by a comma | string | | |
| --instance-count | Targeted instance count matching the label selector | number | 1 | |
| --runs | Iterations to perform action on a single node | number | 1 | |
| --cloud-type | Cloud platform on top of which the cluster is running; supported platforms: aws, azure, gcp, vmware, ibmcloud, bm | enum | aws | |
| --timeout | Duration to wait for completion of node scenario injection | number | 180 | |
| --duration | Duration to stop the node before running the start action | number | 120 | |
| --vsphere-ip | vSphere IP address | string | | |
| --vsphere-username | vSphere username | string (secret) | | |
| --vsphere-password | vSphere password | string (secret) | | |
| --aws-access-key-id | AWS Access Key ID | string (secret) | | |
| --aws-secret-access-key | AWS Secret Access Key | string (secret) | | |
| --aws-default-region | AWS default region | string | | |
| --bmc-user | Only needed for Baremetal (bm): IPMI/BMC username | string (secret) | | |
| --bmc-password | Only needed for Baremetal (bm): IPMI/BMC password | string (secret) | | |
| --bmc-address | Only needed for Baremetal (bm): IPMI/BMC address | string | | |
| --ibmc-address | IBM Cloud URL | string | | |
| --ibmc-api-key | IBM Cloud API Key | string (secret) | | |
| --azure-tenant | Azure Tenant | string | | |
| --azure-client-secret | Azure Client Secret | string (secret) | | |
| --azure-client-id | Azure Client ID | string (secret) | | |
| --azure-subscription-id | Azure Subscription ID | string (secret) | | |
| --gcp-application-credentials | GCP application credentials file location | file | | |

NOTE: The secret string types will be masked when the scenario is run

To see all available scenario options

krknctl run node-scenarios --help 
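
As an illustrative invocation (the parameter values are examples only, and the exact flag syntax should be confirmed with --help), rebooting one worker node on AWS might look like:

krknctl run node-scenarios \
  --cloud-type aws \
  --action node_reboot_scenario \
  --label-selector node-role.kubernetes.io/worker \
  --instance-count 1 \
  --aws-access-key-id <key_id> \
  --aws-secret-access-key <secret> \
  --aws-default-region <region>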

3 - Node Scenarios using Krkn-Hub

This scenario disrupts the node(s) matching the label on a Kubernetes/OpenShift cluster. Actions/disruptions supported are listed here

Run

If enabling Cerberus to monitor the cluster and pass/fail the scenario post chaos, refer to the docs. Make sure to start it before injecting the chaos, and set the CERBERUS_ENABLED environment variable for the chaos injection container to autoconnect.
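
For example (the True value is an assumed boolean-style setting; adjust to whatever your setup expects):

$ export CERBERUS_ENABLED=True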

$ podman run --name=<container_name> --net=host --env-host=true -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-scenarios
$ podman logs -f <container_name or container_id> # Streams Kraken logs
$ podman inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario

$ docker run $(./get_docker_params.sh) --name=<container_name> --net=host -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-scenarios
OR
$ docker run -e <VARIABLE>=<value> --net=host -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-scenarios

$ docker logs -f <container_name or container_id> # Streams Kraken logs
$ docker inspect <container-name or container-id> --format "{{.State.ExitCode}}" # Outputs exit code which can be considered as pass/fail for the scenario

Supported parameters

The following environment variables can be set on the host running the container to tweak the scenario/faults being injected:

Example if --env-host is used:

export <parameter_name>=<value>

OR set it on the command line when running the container, for example:

-e <VARIABLE>=<value> 

See the list of variables that apply to all scenarios here; these can be used/set in addition to the scenario-specific variables below.

| Parameter | Description | Default |
| --- | --- | --- |
| ACTION | Action to perform; one of the node chaos scenarios listed above | node_stop_start_scenario |
| LABEL_SELECTOR | Node label to target | node-role.kubernetes.io/worker |
| NODE_NAME | Node name to inject faults in case of targeting a specific node; can set multiple node names separated by a comma | "" |
| INSTANCE_COUNT | Targeted instance count matching the label selector | 1 |
| RUNS | Iterations to perform action on a single node | 1 |
| CLOUD_TYPE | Cloud platform on top of which the cluster is running; supported platforms: aws, vmware, ibmcloud, bm | aws |
| TIMEOUT | Duration to wait for completion of node scenario injection | 180 |
| DURATION | Duration to stop the node before running the start action; not supported for the vmware and ibm cloud types | 120 |
| VERIFY_SESSION | Only needed for vmware: set to True to verify the vSphere client session using certificates | False |
| SKIP_OPENSHIFT_CHECKS | Only needed for vmware: set to True if you don’t want to wait for the status of the nodes to change on OpenShift before passing the scenario | False |
| BMC_USER | Only needed for Baremetal (bm): IPMI/BMC username | "" |
| BMC_PASSWORD | Only needed for Baremetal (bm): IPMI/BMC password | "" |
| BMC_ADDR | Only needed for Baremetal (bm): IPMI/BMC address | "" |
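
These parameters are exported on the host before launching the container; as a sketch, stopping and starting a single worker node on AWS could use (values are illustrative):

$ export ACTION=node_stop_start_scenario
$ export LABEL_SELECTOR=node-role.kubernetes.io/worker
$ export INSTANCE_COUNT=1
$ export CLOUD_TYPE=aws
$ export DURATION=120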

For example, to also mount custom metrics and alerts profiles:

$ podman run --name=<container_name> --net=host --env-host=true -v <path-to-custom-metrics-profile>:/home/krkn/kraken/config/metrics-aggregated.yaml -v <path-to-custom-alerts-profile>:/home/krkn/kraken/config/alerts -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:node-scenarios

The following environment variables need to be set for the scenarios that require interacting with the cloud platform API to perform the actions:

Amazon Web Services

$ export AWS_ACCESS_KEY_ID=<>
$ export AWS_SECRET_ACCESS_KEY=<>
$ export AWS_DEFAULT_REGION=<>

VMware vSphere

$ export VSPHERE_IP=<vSphere_client_IP_address>
$ export VSPHERE_USERNAME=<vSphere_client_username>
$ export VSPHERE_PASSWORD=<vSphere_client_password>

IBM Cloud

$ export IBMC_URL=https://<region>.iaas.cloud.ibm.com/v1
$ export IBMC_APIKEY=<ibmcloud_api_key>

Baremetal

$ export BMC_USER=<bmc/IPMI user>
$ export BMC_PASSWORD=<bmc/IPMI password>
$ export BMC_ADDR=<bmc address>

Google Cloud Platform

$ export GOOGLE_APPLICATION_CREDENTIALS=<GCP Json>

Azure

$ export AZURE_TENANT_ID=<>
$ export AZURE_CLIENT_SECRET=<>
$ export AZURE_CLIENT_ID=<>

OpenStack

export OS_USERNAME=username
export OS_PASSWORD=password
export OS_TENANT_NAME=projectName
export OS_AUTH_URL=https://identityHost:portNumber/v2.0
export OS_TENANT_ID=tenantIDString
export OS_REGION_NAME=regionName
export OS_CACERT=/path/to/cacertFile

Demo

You can find a link to a demo of the scenario here