Testing your changes

This page explains how to configure a kind cluster for testing everything from krkn-lib (the lowest-level krkn-chaos repository) up through krknctl (the highest-level repository and our easiest way to run).

Configure Kind Testing Environment

  1. Install kind

  2. Create a cluster using the kind-config.yml file in the krkn-lib base folder

kind create cluster --wait 300s --config=kind-config.yml
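
The kind-config.yml in the krkn-lib repo is the file to actually use; purely as a reference, a minimal kind config with the 5-node (3 master, 2 worker) layout referenced later on this page looks roughly like this:

# Illustrative sketch only -- use the kind-config.yml shipped with krkn-lib
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: control-plane
  - role: control-plane
  - role: worker
  - role: worker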

Install Elasticsearch and Prometheus

To run the full test suite you need Elasticsearch and Prometheus properly configured on the cluster.

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add stable https://charts.helm.sh/stable
helm repo update

Prometheus

Deploy Prometheus on your cluster

kubectl create namespace monitoring
helm install \
--wait --timeout 360s \
kind-prometheus \
prometheus-community/kube-prometheus-stack \
--namespace monitoring \
--set prometheus.service.nodePort=30000 \
--set prometheus.service.type=NodePort \
--set grafana.service.nodePort=31000 \
--set grafana.service.type=NodePort \
--set alertmanager.service.nodePort=32000 \
--set alertmanager.service.type=NodePort \
--set prometheus-node-exporter.service.nodePort=32001 \
--set prometheus-node-exporter.service.type=NodePort \
--set prometheus.prometheusSpec.maximumStartupDurationSeconds=300
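
As a quick sanity check you can port-forward Prometheus and hit its readiness endpoint. The service name below is what this chart and release name typically generate; confirm it with kubectl get svc -n monitoring:

# Forward Prometheus to localhost:9090 (the address the functional tests use later)
kubectl --namespace monitoring port-forward svc/kind-prometheus-kube-prome-prometheus 9090:9090 &

# Prometheus reports that the server is ready once it is up
curl -s http://localhost:9090/-/ready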

Elasticsearch

Set the Elasticsearch environment variables

export ELASTIC_URL="https://localhost"
export ELASTIC_PORT="9091"
export ELASTIC_USER="elastic"
export ELASTIC_PASSWORD="test"

Deploy Elasticsearch on your cluster

helm install \
--wait --timeout 360s \
elasticsearch \
oci://registry-1.docker.io/bitnamicharts/elasticsearch \
--set master.masterOnly=false \
--set master.replicaCount=1 \
--set data.replicaCount=0 \
--set coordinating.replicaCount=0 \
--set ingest.replicaCount=0 \
--set service.type=NodePort \
--set service.nodePorts.restAPI=32766 \
--set security.elasticPassword=test \
--set security.enabled=true \
--set image.tag=7.17.23-debian-12-r0 \
--set security.tls.autoGenerated=true
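
A quick way to confirm the deployment matches the environment variables set above is to port-forward the REST API and query it. This assumes the service is named elasticsearch (the default for this release name); adjust if yours differs:

# Map the Elasticsearch REST API (9200) to the port the tests expect (9091)
kubectl port-forward svc/elasticsearch 9091:9200 &

# The chart auto-generates TLS certificates, hence https and -k (skip verification)
curl -k -u "${ELASTIC_USER}:${ELASTIC_PASSWORD}" "${ELASTIC_URL}:${ELASTIC_PORT}"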

Testing Changes in Krkn-lib

To run all the tests in the krkn-lib suite, you’ll need Prometheus and Elasticsearch properly configured. See the steps above for details.

Install poetry

In a virtual environment, install poetry and the krkn-lib requirements

$ pip install poetry
$ poetry install --no-interaction

Run tests

poetry run python3 -m coverage run -a -m unittest discover -v src/krkn_lib/tests/

Adding tests

If you are adding any new functions or functionality, be sure to add unit tests for them. We want to keep coverage above 80% in this repo since it provides our base functionality.
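
As a rough sketch (the module and function names here are placeholders, not real krkn-lib APIs), a new test module placed under src/krkn_lib/tests/ so the discovery command above picks it up looks like this:

# src/krkn_lib/tests/test_my_new_helper.py -- hypothetical file name
import unittest

# Import the code under test; my_new_helper is a placeholder for
# whatever you actually added to krkn-lib
# from krkn_lib.utils import my_new_helper


class TestMyNewHelper(unittest.TestCase):
    def test_happy_path(self):
        # Replace with assertions that exercise the new function
        result = "expected"  # e.g. my_new_helper(...)
        self.assertEqual(result, "expected")

    def test_error_path(self):
        # Cover failure and edge cases too, to keep coverage above 80%
        with self.assertRaises(ValueError):
            raise ValueError("replace with a call that should fail")


if __name__ == "__main__":
    unittest.main()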

Testing Changes in Krkn

Unit Tests

Krkn unit tests are located in the tests/ directory and use Python’s unittest framework with comprehensive mocking. IMPORTANT: These tests do NOT require any external infrastructure, cloud credentials, or Kubernetes cluster - all dependencies are mocked.

Prerequisites

Install krkn dependencies in a virtual environment:

# Create and activate virtual environment
python3.11 -m venv chaos
source chaos/bin/activate

# Install requirements
pip install -r requirements.txt

Running Unit Tests

Run all unit tests:

python -m unittest discover -s tests -v

Run all unit tests with coverage:

python -m coverage run -a -m unittest discover -s tests -v
python -m coverage report

Run specific test file:

python -m unittest tests.test_kubevirt_vm_outage -v

Run specific test class:

python -m unittest tests.test_kubevirt_vm_outage.TestKubevirtVmOutageScenarioPlugin -v

Run specific test method:

python -m unittest tests.test_kubevirt_vm_outage.TestKubevirtVmOutageScenarioPlugin.test_successful_injection_and_recovery -v

Viewing Coverage Results

After running tests with coverage, generate an HTML report:

# Generate HTML coverage report
python -m coverage html

# View the report
open htmlcov/index.html  # macOS
xdg-open htmlcov/index.html  # Linux

Or view a text summary:

python -m coverage report

Example output:

Name                                                             Stmts   Miss  Cover
---------------------------------------------------------------------------------
krkn/scenario_plugins/kubevirt_vm_outage/...                       215     12    94%
krkn/scenario_plugins/node_actions/ibmcloud_node_scenarios.py      185      8    96%
---------------------------------------------------------------------------------
TOTAL                                                             2847    156    95%

Test Output

Unit test output shows:

  • Test names and descriptions
  • Pass/fail status for each test
  • Execution time
  • Any assertion failures or errors

Example output:

test_successful_injection_and_recovery (tests.test_kubevirt_vm_outage.TestKubevirtVmOutageScenarioPlugin)
Test successful deletion and recovery of a VMI using detailed mocking ... ok
test_injection_failure (tests.test_kubevirt_vm_outage.TestKubevirtVmOutageScenarioPlugin)
Test failure during VMI deletion ... ok
test_validation_failure (tests.test_kubevirt_vm_outage.TestKubevirtVmOutageScenarioPlugin)
Test validation failure when KubeVirt is not installed ... ok

----------------------------------------------------------------------
Ran 30 tests in 1.234s

OK

Adding Unit Tests

When adding new functionality, always add corresponding unit tests. See the Adding Tests to Krkn guide for detailed instructions.

Key requirements (a minimal sketch follows this list):

  • Use comprehensive mocking (no external dependencies)
  • Add “IMPORTANT” note in docstring about no credentials needed
  • Test success paths, failure paths, edge cases, and exceptions
  • Organize tests into logical sections
  • Aim for >80% code coverage
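
A minimal sketch of the shape such a test can take, using unittest.mock so nothing touches a real cluster or cloud account (the class and file names below are placeholders, not actual krkn plugin APIs):

# tests/test_my_scenario.py -- hypothetical file name
#
# IMPORTANT: No cloud credentials or Kubernetes cluster are required;
# every external dependency is mocked.
import unittest
from unittest.mock import MagicMock


class TestMyScenarioPlugin(unittest.TestCase):
    def setUp(self):
        # Stand-in for the scenario plugin under test, with mocked dependencies
        self.plugin = MagicMock()

    def test_successful_run(self):
        # Success path: the scenario reports a zero exit status
        self.plugin.run.return_value = 0
        self.assertEqual(self.plugin.run("scenario.yaml"), 0)

    def test_injection_failure(self):
        # Failure path: the mocked dependency raises and the error surfaces
        self.plugin.run.side_effect = RuntimeError("simulated API failure")
        with self.assertRaises(RuntimeError):
            self.plugin.run("scenario.yaml")


if __name__ == "__main__":
    unittest.main()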

Functional Tests (if the scenario can run on a kind cluster)

Configuring the Test Cluster

After creating a kind cluster with the steps above, create these test pods on your cluster:

kubectl apply -f CI/templates/outage_pod.yaml
kubectl wait --for=condition=ready pod -l scenario=outage --timeout=300s
kubectl apply -f CI/templates/container_scenario_pod.yaml
kubectl wait --for=condition=ready pod -l scenario=container --timeout=300s
kubectl create namespace namespace-scenario
kubectl apply -f CI/templates/time_pod.yaml
kubectl wait --for=condition=ready pod -l scenario=time-skew --timeout=300s
kubectl apply -f CI/templates/service_hijacking.yaml
kubectl wait --for=condition=ready pod -l "app.kubernetes.io/name=proxy" --timeout=300s

Install Requirements

$ python3.11 -m venv chaos
$ source chaos/bin/activate
$ pip install -r requirements.txt

Run Tests

  1. Add the Prometheus configuration variables to the test config file (the resulting kraken section is sketched below, after these steps)
yq -i '.kraken.port="8081"' CI/config/common_test_config.yaml
yq -i '.kraken.signal_address="0.0.0.0"' CI/config/common_test_config.yaml
yq -i '.kraken.performance_monitoring="localhost:9090"' CI/config/common_test_config.yaml
  2. Add tests to the list of functional tests to run
echo "test_service_hijacking" > ./CI/tests/functional_tests
echo "test_app_outages" >> ./CI/tests/functional_tests
echo "test_container"      >> ./CI/tests/functional_tests
echo "test_pod" >> ./CI/tests/functional_tests
echo "test_namespace"      >> ./CI/tests/functional_tests
echo "test_net_chaos"      >> ./CI/tests/functional_tests
echo "test_time"           >> ./CI/tests/functional_tests
echo "test_cpu_hog" >> ./CI/tests/functional_tests
echo "test_memory_hog" >> ./CI/tests/functional_tests
echo "test_io_hog" >> ./CI/tests/functional_tests
  3. Run the tests
./CI/run.sh

Results can be seen in ./CI/results.markdown
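
For reference, after step 1 the kraken section of CI/config/common_test_config.yaml ends up with values along these lines (only the keys set above are shown; everything else in the file stays as it was):

kraken:
  port: "8081"
  signal_address: "0.0.0.0"
  performance_monitoring: "localhost:9090"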

Adding Tests

If you are adding any new scenario, be sure to add functional tests for it based on a 5-node (3 master, 2 worker) kind cluster. See more details on how to add functional tests here. The tests themselves live here, under CI/tests.
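
Purely as a sketch (the scenario type, file paths, and structure below are hypothetical; model the real script on the existing ones under CI/tests/), a functional test is a small bash script that points krkn at a scenario config and runs it against the kind cluster:

#!/bin/bash
# CI/tests/test_my_scenario.sh -- hypothetical file name
set -xeEo pipefail

function functional_test_my_scenario {
  # Start from the shared test config and point it at the new scenario
  # ("my_scenario" and the scenario file path are placeholders)
  cp CI/config/common_test_config.yaml CI/config/my_scenario_config.yaml
  yq -i '.kraken.chaos_scenarios=[{"my_scenario":["scenarios/kind/my_scenario.yaml"]}]' CI/config/my_scenario_config.yaml

  # Run krkn with the test config; a non-zero exit code fails the test
  python3 run_kraken.py --config CI/config/my_scenario_config.yaml
  echo "My scenario test: Success"
}

functional_test_my_scenario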

Testing Changes for Krkn-hub

Install Podman/Docker Compose

You can use either podman-compose or docker-compose for this step

NOTE: Podman might not work on Macs

pip3 install docker-compose

OR

To get the latest podman-compose features we need, use this installation command

pip3 install https://github.com/containers/podman-compose/archive/devel.tar.gz

Build Your Changes

  1. Run build.sh to create the Dockerfiles for each scenario
  2. Edit the docker-compose.yaml file to point to your quay.io repository (optional; required if you want to push or are testing krknctl); see the excerpt after the build commands below
For example, change

image: containers.krkn-chaos.dev/krkn-chaos/krkn-hub:chaos-recommender

to

image: quay.io/<user>/krkn-hub:chaos-recommender
  3. Build your image(s) from the base krkn-hub directory

    Builds all images in docker-compose file

    docker-compose build
    

    Builds single image defined by service/scenario name

    docker-compose build <scenario_type>
    

    OR

    Builds all images in podman-compose file

    podman-compose build
    

    Builds single image defined by service/scenario name

    podman-compose build <scenario_type>
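
For reference, the edit from step 2 only touches the image line of the relevant service; a trimmed, illustrative excerpt of docker-compose.yaml looks like this:

# Illustrative excerpt -- keep everything else build.sh generated unchanged
services:
  chaos-recommender:
    image: quay.io/<user>/krkn-hub:chaos-recommender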
    

Push Images to your quay.io

Push all Images using docker-compose

docker-compose push

Push a single image using docker-compose

docker-compose push <scenario_type>

OR

Push a single image (with podman you have to push images one by one)

podman-compose push <scenario_type>

OR

podman push quay.io/<username>/krkn-hub:<scenario_type>

Run your new scenario

docker run -d -v <kube_config_path>:/root/.kube/config:Z quay.io/<username>/krkn-hub:<scenario_type>

OR

podman run -d -v <kube_config_path>:/root/.kube/config:Z quay.io/<username>/krkn-hub:<scenario_type>

See the krkn-hub documentation for each scenario for all the possible variables you can use.
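
For example, scenario parameters are passed as environment variables on the run command (the variable names below are examples only; the real list depends on the scenario):

# Example only -- NAMESPACE and WAIT_DURATION stand in for whatever
# variables your scenario actually documents
podman run -d \
  -v <kube_config_path>:/root/.kube/config:Z \
  -e NAMESPACE="default" \
  -e WAIT_DURATION="30" \
  quay.io/<username>/krkn-hub:<scenario_type>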

Testing Changes in Krknctl

Once you’ve created a krknctl-input.json file using the steps here, you’ll want to test those changes using the steps below. You will need either podman or docker installed, as well as a quay account.

Build and Push to personal Quay

First, build your krkn-hub changes and push them to your own quay repository for testing, as described in the previous section.

Run Krknctl with Personal Image

Once you have your images in quay, you are all set to configure krknctl to look for these new images. Edit the krknctl config file found here and set quay_org to your quay username.

With these updates to your config, you’ll build your personal krknctl binary and you’ll be all set to start testing your new scenario and config options.

If any krknctl code changes are required, you’ll have to make the changes and rebuild the krknctl binary each time you test as well.
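
A rough sketch of that rebuild-and-test loop, assuming a standard Go build from the krknctl repo root (the krknctl README is authoritative for the exact build steps):

# Rebuild the binary from the krknctl repo root (build command is an assumption;
# check the krknctl README or Makefile)
go build -o krknctl .

# Run your new scenario through the locally built binary; the scenario name
# matches the krkn-hub image tag you pushed to your quay org
./krknctl run <scenario_type>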