
Installation

Details on how to install krkn, krkn-hub, and krknctl

Choose Your Installation Method

Krkn provides multiple ways to run chaos scenarios. Choose the method that best fits your needs:

Tool            What is it?                      Best For
krknctl         CLI tool with auto-completion    Complex workflow orchestration, querying and running scenarios, ease of use
krkn-hub        Pre-built container images       CI/CD pipelines, automation
Krkn Dashboard  Web UI for running scenarios     Visual runs, demos, teams that prefer a GUI
krkn            Standalone Python program        Full control, development, and customization

Installation Methods

krknctl

What is it? A dedicated command-line interface (CLI) tool that simplifies running Krkn chaos scenarios while providing powerful orchestration capabilities.

Why use it?

  • Complex workflow orchestration — chain and orchestrate multiple chaos scenarios in sophisticated workflows
  • Query capabilities — discover, understand, and explore all supported scenarios directly from the CLI
  • Ease of use — command auto-completion, built-in input validation, and interactive prompts remove the guesswork
  • No configuration files — no need to manage YAML configs or Python environments manually
  • Container-native — runs scenarios via container runtime (Podman/Docker) with zero setup overhead

Best for: All users — from first-time chaos engineers to teams building complex resilience testing workflows.

👉 Install krknctl →


krkn-hub

What is it? A collection of pre-built container images that wrap Krkn scenarios, configured via environment variables.

Why use it?

  • No Python environment setup required
  • Easy integration with CI/CD systems (Jenkins, GitHub Actions, etc.)
  • Consistent, reproducible chaos runs
  • Scenarios are isolated in containers

Best for: CI/CD pipelines, automated testing, and users who prefer containers over local Python setups.

👉 Install krkn-hub →


Krkn Dashboard

What is it? A web application that lets you run Krkn chaos scenarios from your browser, with real-time logs and Elasticsearch/Grafana integration.

Why use it?

  • Scenario selection and parameter forms
  • Real-time visibility into running chaos containers
  • Visualized run history and metrics

Best for: Demos, shared machines, and teams that prefer a GUI.

👉 Install Krkn Dashboard →


krkn (Standalone Python)

What is it? The core Krkn chaos engineering tool, run as a standalone Python program cloned from Git.

Why use it?

  • Full control over configuration and execution
  • Run multiple different scenario types in a single execution
  • Direct access to all features and customization options
  • Ideal for development and advanced customization

Best for: Advanced users, developers contributing to Krkn, and scenarios requiring fine-grained control.

👉 Install krkn →



1 - Krkn

Krkn aka Kraken

Installation

Clone the Repository

To clone and use the latest Krkn version, follow the directions below. If you want to contribute back to Krkn in the future, we recommend forking the repository before cloning.

See the latest release version here

$ git clone https://github.com/krkn-chaos/krkn.git --branch <release version>
$ cd krkn

Fork and Clone the Repository

Fork the repository

$ git clone https://github.com/<github_user_id>/krkn.git
$ cd krkn

Set your local clone to track the upstream repository:

git remote add upstream https://github.com/krkn-chaos/krkn

Disable pushing to upstream master:

git remote set-url --push upstream no_push
git remote -v

Install the dependencies

To make sure Krkn’s dependencies don’t interfere with other Python packages you may have locally, we recommend creating a virtual environment before installing them. Python 3.11 is the latest version we have tested.

$ python3.11 -m venv chaos
$ source chaos/bin/activate
$ pip install -r requirements.txt


Getting Started with Krkn

If you want to edit your configuration files and scenarios, see the getting started doc.

Running Krkn

$ python run_kraken.py --config <config_file_location>

Run containerized version

Krkn-hub is a wrapper that allows running Krkn chaos scenarios via podman or docker runtime with scenario parameters/configuration defined as environment variables.

krknctl is a CLI that allows running Krkn chaos scenarios via the Podman or Docker runtime, with scenario parameters/configuration passed as command-line options or as a JSON graph for complex workflows.

What’s next?

Please refer to the getting started guide, pick the scenarios of interest and follow the instructions to run them via Krkn, Krkn-hub or Krknctl. Running via Krkn-hub or Krknctl are recommended for ease of use and better user experience.

2 - krkn-hub

Krkn-hub aka kraken-hub

Hosts container images and wrappers for running scenarios supported by Krkn, a chaos testing tool that helps ensure Kubernetes clusters are resilient to failures. All you need to do is run the containers with the environment variables supported by each scenario, without having to maintain and tweak files!

Set Up

You can use Podman or Docker to run krkn-hub.

Install Podman for your operating system based on these instructions

or

Install Docker on your system.

Docker is also supported, but any variables you want to set (beyond the defaults) need to be passed on the command line in the form -e <VARIABLE>=<value>.

You can take advantage of the get_docker_params.sh script to create your parameter string. It takes all environment variables and puts them in the form -e <VARIABLE>=<value>, producing one long string that can be passed to the command.

For example: docker run $(./get_docker_params.sh) --net=host -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/redhat-chaos/krkn-hub:power-outages
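As a sketch of what such a parameter string looks like, the loop below builds the same kind of -e string from two illustrative variables (NAMESPACE and POD_LABEL are examples here, not a complete list of scenario options, and this is not the script's exact implementation):

```shell
# Illustrative sketch: build a "-e VAR=value" string from exported
# environment variables, the way get_docker_params.sh does.
# NAMESPACE and POD_LABEL are example variable names only.
export NAMESPACE=default
export POD_LABEL=app=test

PARAMS=""
for var in NAMESPACE POD_LABEL; do
  # look each value up by name and append it in "-e VAR=value" form
  PARAMS="$PARAMS -e $var=$(printenv "$var")"
done

echo "docker run$PARAMS ..."
# -> docker run -e NAMESPACE=default -e POD_LABEL=app=test ...
```

The resulting string can then be dropped into the docker run command shown above.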

What’s next?

Please refer to the getting started guide, pick the scenarios of interest and follow the instructions to run them via Krkn, Krkn-hub or Krknctl. Running via Krkn-hub or Krknctl are recommended for ease of use and better user experience.

3 - krknctl

how to install, build and configure the CLI

Use the official install script as the primary installation method:

Install using the official script

curl -fsSL https://raw.githubusercontent.com/krkn-chaos/krknctl/refs/heads/main/install.sh | bash

Verify installation:

krknctl --version

Alternative installation methods

Binary distribution

The krknctl binary is available for download from GitHub releases for supported operating systems and architectures. Extract the tarball and add the binary to your $PATH.
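The "extract and add to $PATH" step can be sketched as follows. A locally created tarball with a stub binary stands in for the downloaded GitHub release asset; real asset names depend on your operating system and architecture:

```shell
# Sketch of installing a downloaded release tarball.
# A stub binary in a temp dir stands in for the real GitHub asset.
workdir=$(mktemp -d) && cd "$workdir"
mkdir demo
printf '#!/bin/sh\necho "krknctl demo"\n' > demo/krknctl
chmod +x demo/krknctl
tar -czf krknctl.tar.gz -C demo krknctl   # stand-in for the downloaded asset

tar -xzf krknctl.tar.gz                   # 1. extract the binary
mkdir -p "$workdir/bin"                   # 2. in real use: e.g. /usr/local/bin
mv krknctl "$workdir/bin/"
PATH="$workdir/bin:$PATH"                 # 3. put it on $PATH
krknctl                                   # the stub prints "krknctl demo"
```

In real use you would extract the release tarball and move the binary to a directory already on your $PATH, such as /usr/local/bin.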

Build from source

Fork and clone the repository

Fork the repository:

$ git clone https://github.com/<github_user_id>/krknctl.git
$ cd krknctl

Set your cloned local to track the upstream repository:

git remote add upstream https://github.com/krkn-chaos/krknctl

Linux

Dictionaries

To generate random words we use the American English dictionary. It is often already installed, but if not:

  • Fedora/RHEL: sudo dnf install words
  • Ubuntu/Debian: sudo apt-get install wamerican

Build dependencies

Linux

The only system package required to build is libbtrfs:

  • Fedora/RHEL: sudo dnf install btrfs-progs-devel
  • Ubuntu/Debian: sudo apt-get install libbtrfs-dev

MacOS

  • gpgme: brew install gpgme

Build command

go build -tags containers_image_openpgp -ldflags="-w -s" -o bin/ ./...

Configure Autocompletion:

For the best experience with the tool, first install shell autocompletion, so that krknctl can suggest the available commands and their descriptions when you hit Tab twice.

Bash (linux):

source <(krknctl completion bash)

zsh (MacOS):

autoload -Uz compinit
compinit
source <(krknctl completion zsh)


Container Runtime:

The tool supports both Podman and Docker to run the krkn-hub scenario containers, and it interacts with the container runtime through a Unix socket. If both runtimes are installed on the system, the tool defaults to Podman.
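That preference can be illustrated with a small check for the common socket paths. These are typical default locations, not krknctl's exact detection logic, and they vary by distribution and setup:

```shell
# Sketch: prefer Podman's socket when both runtimes are present.
# Socket paths below are common defaults, not an exhaustive list.
runtime="none found"
if [ -S "/run/user/$(id -u)/podman/podman.sock" ] || [ -S /run/podman/podman.sock ]; then
  runtime="podman"
elif [ -S /var/run/docker.sock ]; then
  runtime="docker"
fi
echo "runtime: $runtime"
```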

Podman:

Steps required to enable Podman support

Linux:

  • enable and activate the podman API daemon
sudo systemctl enable --now podman
  • activate the user socket
systemctl enable --user --now podman.socket 

MacOS:

If both Podman and Docker are installed, be sure that Docker compatibility mode is disabled

Docker:

Linux:

Check that your user has been added to the docker group and can connect to the Docker Unix socket by running docker ps. If an error is returned, run sudo usermod -aG docker $USER and log in again.

What’s next?

Please refer to the getting started guide, pick the scenarios of interest and follow the instructions to run them via Krkn, Krkn-hub or Krknctl. Running via Krkn-hub or Krknctl are recommended for ease of use and better user experience.

4 - Setting Up Disconnected Environment

Getting Your Disconnected Environment Set Up

Getting Started Running Chaos Scenarios in a Disconnected Environment

Mirror the following images on the bastion host

  • quay.io/krkn-chaos/krkn-hub:node-scenarios-bm - Master/worker node disruptions on baremetal
  • quay.io/krkn-chaos/krkn-hub:network-chaos - Network disruptions/traffic shaping
  • quay.io/krkn-chaos/krkn-hub:pod-scenarios - Pod level disruptions and evaluating recovery time/availability
  • quay.io/krkn-chaos/krkn-hub:syn-flood - Generates a substantial amount of traffic/half-open connections targeting a service
  • quay.io/krkn-chaos/krkn-hub:node-cpu-hog - Hogs CPU on the target nodes
  • quay.io/krkn-chaos/krkn-hub:node-io-hog - Hogs IO on the target nodes
  • quay.io/krkn-chaos/krkn-hub:node-memory-hog - Hogs memory on the target nodes
  • quay.io/krkn-chaos/krkn-hub:pvc-scenarios - Fills up a given PersistentVolumeClaim
  • quay.io/krkn-chaos/krkn-hub:service-hijacking - Simulates fake HTTP response for a service
  • quay.io/krkn-chaos/krkn-hub:power-outages - Shuts down the cluster and turns it back on
  • quay.io/krkn-chaos/krkn-hub:container-scenarios - Kills a container via provided kill signal
  • quay.io/krkn-chaos/krkn-hub:application-outages - Isolates application Ingress/Egress traffic
  • quay.io/krkn-chaos/krkn-hub:time-scenarios - Tweaks time/date on the nodes
  • quay.io/krkn-chaos/krkn-hub:pod-network-chaos - Introduces network chaos at pod level
  • quay.io/krkn-chaos/krkn-hub:node-network-filter - Node ip traffic filtering
  • quay.io/krkn-chaos/krkn-hub:pod-network-filter - DNS, internal/external service outages

You will also need these images mirrored inside the cluster

  • Network disruptions - quay.io/krkn-chaos/krkn:tools
  • Hog scenarios (CPU, Memory and IO) - quay.io/krkn-chaos/krkn-hog
  • SYN flood - quay.io/krkn-chaos/krkn-syn-flood:latest
  • Pod network filter scenarios - quay.io/krkn-chaos/krkn-network-chaos:latest
  • Service hijacking scenarios - quay.io/krkn-chaos/krkn-service-hijacking:v0.1.3

How to Mirror

The strategy is simple:

  1. Pull & Save: On a machine with internet access, pull the desired image from quay.io and use podman save to package it into a single archive file (a .tar file).

  2. Transfer: Move this archive file to your disconnected cluster node using a method like a USB drive, a secure network file transfer, or any other means available.

  3. Load: On the disconnected machine, use podman load to import the image from the archive file into the local container storage. The cluster’s container runtime can then use it.
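The pull & save step can be batched when mirroring several images. The sketch below only prints the image-to-archive mapping so it runs anywhere; the podman commands are commented out, and the image list is a small subset of the table above:

```shell
# Sketch: derive an archive filename per image, then pull & save it.
# Uncomment the podman lines on a machine with internet access.
images="quay.io/krkn-chaos/krkn-hub:pod-scenarios
quay.io/krkn-chaos/krkn-hub:network-chaos"

for img in $images; do
  # strip registry/namespace, turn the tag separator into a dash
  name=$(echo "${img##*/}" | tr ':' '-')
  echo "$img -> $name.tar"
  # podman pull "$img"
  # podman save -o "$name.tar" "$img"
done
```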

Step-by-Step Instructions

Here’s a practical example using the quay.io/krkn-chaos/krkn-hub image.

Step 1: On the Connected Machine (Pull and Save)

First, pull the image from quay.io and then save it to a tarball.

Pull the image:

podman pull quay.io/krkn-chaos/krkn-hub:pod-scenarios

Save the image to a file: The -o or --output flag specifies the destination file.

podman save -o pod-scenarios.tar quay.io/krkn-chaos/krkn-hub:pod-scenarios

After this command, you will have a file named pod-scenarios.tar in your current directory.

Step 2: Transfer the Archive File

Move the pod-scenarios.tar file from your connected machine to a node within your disconnected cluster.

SCP (Secure Copy Protocol): If you have a secure, intermittent connection or bastion host.

scp ./pod-scenarios.tar user@disconnected-node-ip:/path/to/destination/

Step 3: On the Disconnected Machine (Load and Verify)

Once the file is on the disconnected machine, use podman load to import it.

Load the image: The -i or --input flag specifies the source archive.

podman load -i pod-scenarios.tar

Podman will read the tarball and restore the image layers into its local storage.

Verify the image is loaded: Check that the image now appears in your local image list.

podman images

You should see quay.io/krkn-chaos/krkn-hub in the output, ready to be used by your applications. 👍

REPOSITORY                     TAG            IMAGE ID      CREATED      SIZE
quay.io/krkn-chaos/krkn-hub    pod-scenarios  b1a13a82513f  3 weeks ago  220 MB

The image is now available locally on that node for your container runtime (like CRI-O in OpenShift/Kubernetes) to create containers without needing to reach the internet. You may need to repeat this loading process on every node in the cluster that might run the container, or push it to a private registry within your disconnected environment.

5 - Krkn-AI

How to install Krkn-AI

Installation

Prerequisites

  • Python 3.11+
  • Podman or Docker Container Runtime
  • krknctl
  • uv package manager (recommended) or pip

Clone the Repository

To clone and use the latest Krkn-AI version, follow the directions below. If you want to contribute back to Krkn-AI in the future, we recommend forking the repository before cloning.

$ git clone https://github.com/krkn-chaos/krkn-ai.git
$ cd krkn-ai

Fork and Clone the Repository

Fork the repository

$ git clone https://github.com/<github_user_id>/krkn-ai.git
$ cd krkn-ai

Set your local clone to track the upstream repository:

git remote add upstream https://github.com/krkn-chaos/krkn-ai

Disable pushing to upstream master:

git remote set-url --push upstream no_push
git remote -v

Install the dependencies

To make sure Krkn-AI’s dependencies don’t interfere with other Python packages you may have locally, we recommend creating a virtual environment before installing them. Python 3.11 is the latest version we have tested.

Using pip package manager:

$ python3.11 -m venv .venv
$ source .venv/bin/activate
$ pip install -e .

# Check if installation is successful
$ krkn_ai --help

Using uv package manager:

$ pip install uv
$ uv venv --python 3.11
$ source .venv/bin/activate
$ uv pip install -e .

#  Check if installation is successful
$ uv run krkn_ai --help

Getting Started with Krkn-AI

To configure Krkn-AI testing scenarios, check out getting started doc.

6 - Krkn Dashboard

How to install and run the Krkn Dashboard (local or containerized).

The Krkn Dashboard is a web UI for running and observing Krkn chaos scenarios. You can run it locally (Node.js on your machine) or containerized (Podman/Docker).


Prerequisites (both methods)

  • Kubernetes cluster — You need a cluster and a kubeconfig so that the dashboard can target it for chaos runs. If you don’t have one, see Kubernetes, minikube, K3s, or OpenShift.
  • Podman or Docker — The dashboard starts chaos runs by launching krkn-hub containers; the host must have Podman (or Docker) installed and available.

Local installation

Run the dashboard on your machine with Node.js.

Prerequisites for local run

  • Node.js and npm (the dashboard is installed with npm install and started with npm run dev)

Clone and run locally

  1. Clone the Krkn Dashboard repository:

    git clone https://github.com/krkn-chaos/krkn-dashboard.git
    cd krkn-dashboard
    
  2. Install dependencies:

    npm install
    
  3. Start the application:

    npm run dev
    

The application runs at http://localhost:3000 (or the port shown in the terminal).


Container installation

Build and run the dashboard in a container. The container uses Podman (or Docker) on the host to start krkn-hub chaos containers.

Get the source (choose one method)

Check available releases at krkn-dashboard releases.

Method 1: Clone a specific release (recommended)

# Replace <RELEASE_TAG> with your desired version (e.g., v1.0.0)
git clone --branch <RELEASE_TAG> --single-branch https://github.com/krkn-chaos/krkn-dashboard.git
cd krkn-dashboard

Method 2: Download release tarball

wget https://github.com/krkn-chaos/krkn-dashboard/archive/refs/tags/<RELEASE_TAG>.tar.gz
# Extract and cd into the directory

Method 3: Clone latest release

LATEST_TAG=$(curl -s https://api.github.com/repos/krkn-chaos/krkn-dashboard/releases/latest | grep '"tag_name":' | sed -E 's/.*"([^"]+)".*/\1/')
git clone --branch $LATEST_TAG --single-branch https://github.com/krkn-chaos/krkn-dashboard.git
cd krkn-dashboard
echo "Cloned release: $LATEST_TAG"

Build the image

Replace <image-name> with the image name and tag you want (e.g. krkn-dashboard:latest).

cd krkn-dashboard
podman build -t <image-name> -f containers/Dockerfile .

(Use docker build instead of podman build if you use Docker.)

Run the container

  1. Prepare a directory for assets (e.g. kubeconfig) in the git folder:

    export CHAOS_ASSETS=$(pwd)/src/assets
    

    Copy your kubeconfig into $CHAOS_ASSETS as kubeconfig (so the dashboard inside the container can target your cluster).

  2. Run the container (as root or with permissions for the Podman socket). Replace <container-name> with the name you want for the container, and <image-name> with the image you built in the previous step.

    podman run --env CHAOS_ASSETS \
      -v $CHAOS_ASSETS:/usr/src/chaos-dashboard/src/assets:z \
      -v "$(pwd)/database:/data:z" \
      -v /run/podman/podman.sock:/run/podman/podman.sock \
      -p 3000:3000 -p 8000:8000 \
      --net=host -d --name <container-name> <image-name>
    

    For Docker, use -v /var/run/docker.sock:/var/run/docker.sock instead of the Podman socket path, and ensure the container can reach the Docker daemon.

  3. Open http://localhost:3000 in your browser to use the dashboard and trigger Krkn scenarios.
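Step 1 above (staging the kubeconfig into $CHAOS_ASSETS) can be sketched as follows; a temp directory and a placeholder file stand in for the real paths here:

```shell
# Sketch of staging the kubeconfig the dashboard container mounts.
# In real use: src is your actual kubeconfig (e.g. ~/.kube/config)
# and CHAOS_ASSETS is $(pwd)/src/assets inside the git folder.
src=$(mktemp)
printf 'apiVersion: v1\nkind: Config\n' > "$src"   # placeholder kubeconfig
CHAOS_ASSETS=$(mktemp -d)
cp "$src" "$CHAOS_ASSETS/kubeconfig"               # the filename the dashboard expects
echo "staged: $CHAOS_ASSETS/kubeconfig"
```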