Installation
Details on how to install krkn, krkn-hub, and krknctl
Choose Your Installation Method
Krkn provides multiple ways to run chaos scenarios. Choose the method that best fits your needs:
| Tool | What is it? | Best For | Complexity |
|---|---|---|---|
| krknctl | CLI tool with auto-completion | Quick testing, ease of use | ⭐ Easy |
| krkn-hub | Pre-built container images | CI/CD pipelines, automation | ⭐⭐ Moderate |
| krkn | Standalone Python program | Full control, multiple scenarios | ⭐⭐⭐ Advanced |
Recommendation
New to Krkn? Start with krknctl - it’s the easiest way to get started with chaos testing!
Installation Methods
krknctl (Recommended)
What is it? A dedicated command-line interface (CLI) tool that simplifies running Krkn chaos scenarios.
Why use it?
- Command auto-completion for faster workflows
- Built-in input validation to catch errors early
- No need to manage configuration files manually
- Runs scenarios via container runtime (Podman/Docker)
Best for: Users who want a streamlined, user-friendly experience without managing configs.
👉 Install krknctl →
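Once installed, a session typically looks like the sketch below. The subcommand and flag usage shown is an assumption based on typical krknctl workflows; rely on tab completion and --help for the authoritative list.

```shell
# Illustrative only: confirm the binary works and inspect a scenario's options.
# Subcommand/flag names may differ between releases; check `krknctl --help`.
krknctl --help
krknctl run pod-scenarios --help
```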
krkn-hub
What is it? A collection of pre-built container images that wrap Krkn scenarios, configured via environment variables.
Why use it?
- No Python environment setup required
- Easy integration with CI/CD systems (Jenkins, GitHub Actions, etc.)
- Consistent, reproducible chaos runs
- Scenarios are isolated in containers
Best for: CI/CD pipelines, automated testing, and users who prefer containers over local Python setups.
Note
krkn-hub runs one scenario type per execution. For running multiple scenarios in a single run, use the standalone krkn installation.
👉 Install krkn-hub →
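A minimal krkn-hub invocation looks like the sketch below. The NAMESPACE variable and the scenario tag are illustrative; each scenario documents its own supported environment variables.

```shell
# Sketch: run the pod-scenarios krkn-hub image with Podman.
# NAMESPACE is an illustrative variable; each scenario documents its own set.
podman run --net=host \
  -v ~/.kube/config:/home/krkn/.kube/config:Z \
  -e NAMESPACE=default \
  -d quay.io/krkn-chaos/krkn-hub:pod-scenarios
```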
krkn (Standalone Python)
What is it? The core Krkn chaos engineering tool, run as a standalone Python program cloned from Git.
Why use it?
- Full control over configuration and execution
- Run multiple different scenario types in a single execution
- Direct access to all features and customization options
- Ideal for development and advanced customization
Best for: Advanced users, developers contributing to Krkn, and scenarios requiring fine-grained control.
Note
Requires a Python 3.9 environment and manual dependency management.
👉 Install krkn →
Important Considerations
Run External to Cluster
It is recommended to run Krkn external to the cluster (standalone or containerized), pointing at the Kubernetes/OpenShift API: running it inside the cluster might disrupt Krkn itself, and it may fail to report results if the chaos leads to API server instability.
Power Architecture (ppc64le)
To run Krkn on Power (ppc64le) architecture, build and run a containerized version by following the instructions
here.
1 - Krkn
Krkn aka Kraken
Installation
Clone the Repository
To clone and use the latest krkn version, follow the directions below. If you want to contribute back to krkn in the future, we recommend forking the repository before cloning.
See the latest release version here
$ git clone https://github.com/krkn-chaos/krkn.git --branch <release version>
$ cd krkn
Fork and Clone the Repository
Fork the repository
$ git clone https://github.com/<github_user_id>/krkn.git
$ cd krkn
Set your local clone to track the upstream repository:
cd krkn
git remote add upstream https://github.com/krkn-chaos/krkn
Disable pushing to upstream master:
git remote set-url --push upstream no_push
git remote -v
Install the dependencies
To make sure krkn’s dependencies don’t interfere with other Python dependencies you may have locally, we recommend creating a virtual environment before installing them. Krkn has only been tested up to Python 3.9.
$ python3.9 -m venv chaos
$ source chaos/bin/activate
$ pip install -r requirements.txt
Note
Make sure python3-devel and a recent pip version are installed on the system. The dependency install has been tested with pip >= 21.1.3.
Getting Started with Krkn
If you want to edit your configuration files and scenarios, see the getting started doc.
Running Krkn
$ python run_kraken.py --config <config_file_location>
Run containerized version
Krkn-hub is a wrapper that allows running Krkn chaos scenarios via podman or docker runtime with scenario parameters/configuration defined as environment variables.
krknctl is a CLI that allows running Krkn chaos scenarios via podman or docker runtime with scenarios parameters/configuration passed as command line options or a json graph for complex workflows.
What’s next?
Please refer to the getting started guide, pick the scenarios of interest and follow the instructions to run them via Krkn, Krkn-hub or Krknctl. Running via Krkn-hub or Krknctl are recommended for ease of use and better user experience.
2 - krkn-hub
Krkn-hub aka kraken-hub
Hosts container images and wrappers for running scenarios supported by Krkn, a chaos testing tool for Kubernetes clusters that helps ensure they are resilient to failures. All you need to do is run the containers with the respective environment variables supported by the scenarios, without having to maintain and tweak files!
Set Up
You can use Docker or Podman to run kraken-hub.
Install Podman on your operating system based on these instructions
or
Install Docker on your system.
Docker is also supported, but all variables you want to set (beyond the defaults) need to be passed on the command line in the form -e <VARIABLE>=<value>.
You can take advantage of the get_docker_params.sh script to build the parameter string: it takes all environment variables and formats them as "-e <VARIABLE>=<value>" pairs, producing a long string that can be passed to the command.
For example: docker run $(./get_docker_params.sh) --net=host -v <path-to-kube-config>:/home/krkn/.kube/config:Z -d quay.io/redhat-chaos/krkn-hub:power-outages
Tip
Because the container runs with a non-root user, ensure the kube config is globally readable before mounting it in the container. You can achieve this with the following commands: kubectl config view --flatten > ~/kubeconfig && chmod 444 ~/kubeconfig && docker run $(./get_docker_params.sh) --name=<container_name> --net=host -v ~/kubeconfig:/home/krkn/.kube/config:Z -d containers.krkn-chaos.dev/krkn-chaos/krkn-hub:<scenario>
What’s next?
Please refer to the getting started guide, pick the scenarios of interest and follow the instructions to run them via Krkn, Krkn-hub or Krknctl. Running via Krkn-hub or Krknctl are recommended for ease of use and better user experience.
3 - krknctl
How to install, build, and configure the CLI
Binary distribution (Recommended):
The krknctl binary is available for download from GitHub releases for supported operating systems and architectures. Extract the tarball and add the binary to your $PATH.
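On Linux, for example, the install can be as simple as the sketch below. The archive name is a placeholder; take the real filename and version from the GitHub releases page for your operating system and architecture.

```shell
# Sketch: extract the downloaded release tarball and put the binary on $PATH.
# The archive name is a placeholder; use the one matching your OS/arch.
tar -xzf krknctl-<version>-linux-amd64.tar.gz
sudo mv krknctl /usr/local/bin/
krknctl --help
```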
Build from sources :
Fork and Clone the Repository
Fork the repository
$ git clone https://github.com/<github_user_id>/krknctl.git
$ cd krknctl
Set your local clone to track the upstream repository:
git remote add upstream https://github.com/krkn-chaos/krknctl
Linux:
Dictionaries:
To generate random words we use the American English dictionary. It is often already available, but if that’s not the case:
- Fedora/RHEL: sudo dnf install words
- Ubuntu/Debian: sudo apt-get install wamerican
Building from sources:
Linux:
The only system package required to build is libbtrfs:
- Fedora/RHEL: sudo dnf install btrfs-progs-devel
- Ubuntu/Debian: sudo apt-get install libbtrfs-dev
MacOS:
- gpgme:
brew install gpgme
Build command:
go build -tags containers_image_openpgp -ldflags="-w -s" -o bin/ ./...
Note
To build for different operating systems/architectures refer to
the GOOS and GOARCH golang variables.
The first step to get the best experience with the tool is to install shell autocompletion, so that the tool can suggest the available commands and their descriptions when you hit tab twice.
Bash (linux):
source <(./krknctl completion bash)
Tip
To install autocompletion permanently, add this command to .bashrc (setting the krknctl binary path correctly).
zsh (MacOS):
autoload -Uz compinit
compinit
source <(./krknctl completion zsh)
Tip
To install autocompletion permanently, add this command to .zshrc (setting the krknctl binary path correctly).
Container Runtime:
The tool supports both Podman and Docker to run the krkn-hub scenario containers, interacting with the container runtime through a Unix socket. If both container runtimes are installed on the system, the tool defaults to Podman.
Podman:
Steps required to enable the Podman support
Linux:
- Enable and activate the Podman API socket:
sudo systemctl enable --now podman
systemctl enable --user --now podman.socket
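You can confirm the socket is up before running krknctl. On most Linux systems the rootless socket lives under /run/user/<uid>, though the exact path may vary by distribution:

```shell
# Sketch: check that the rootless Podman API socket is active.
systemctl --user is-active podman.socket
# The socket file itself is typically here (path may vary by distribution):
ls /run/user/$(id -u)/podman/podman.sock
```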
MacOS:
If both Podman and Docker are installed, be sure that Docker compatibility is disabled.
Docker:
Linux:
Check that your user has been added to the docker group and can connect to the Docker Unix socket by running the command docker ps; if an error is returned, run sudo usermod -aG docker $USER and log in again.
What’s next?
Please refer to the getting started guide, pick the scenarios of interest and follow the instructions to run them via Krkn, Krkn-hub or Krknctl. Running via Krkn-hub or Krknctl are recommended for ease of use and better user experience.
4 - Setting Up a Disconnected Environment
Getting Your Disconnected Environment Set Up
Getting Started Running Chaos Scenarios in a Disconnected Environment
Mirror the following images on the bastion host:
- quay.io/krkn-chaos/krkn-hub:node-scenarios-bm - Master/worker node disruptions on baremetal
- quay.io/krkn-chaos/krkn-hub:network-chaos - Network disruptions/traffic shaping
- quay.io/krkn-chaos/krkn-hub:pod-scenarios - Pod level disruptions and evaluating recovery time/availability
- quay.io/krkn-chaos/krkn-hub:syn-flood - Generates a substantial amount of traffic/half-open connections targeting a service
- quay.io/krkn-chaos/krkn-hub:node-cpu-hog - Hogs CPU on the target nodes
- quay.io/krkn-chaos/krkn-hub:node-io-hog - Hogs IO on the target nodes
- quay.io/krkn-chaos/krkn-hub:node-memory-hog - Hogs memory on the target nodes
- quay.io/krkn-chaos/krkn-hub:pvc-scenarios - Fills up a given PersistentVolumeClaim
- quay.io/krkn-chaos/krkn-hub:service-hijacking - Simulates fake HTTP response for a service
- quay.io/krkn-chaos/krkn-hub:power-outages - Shuts down the cluster and turns it back on
- quay.io/krkn-chaos/krkn-hub:container-scenarios - Kills a container via provided kill signal
- quay.io/krkn-chaos/krkn-hub:application-outages - Isolates application Ingress/Egress traffic
- quay.io/krkn-chaos/krkn-hub:time-scenarios - Tweaks time/date on the nodes
- quay.io/krkn-chaos/krkn-hub:pod-network-chaos - Introduces network chaos at pod level
- quay.io/krkn-chaos/krkn-hub:node-network-filter - Node ip traffic filtering
- quay.io/krkn-chaos/krkn-hub:pod-network-filter - DNS, internal/external service outages
You will also need the following images mirrored inside the cluster:
- Network disruptions - quay.io/krkn-chaos/krkn:tools
- Hog scenarios ( CPU, Memory and IO ) - quay.io/krkn-chaos/krkn-hog
- SYN flood - quay.io/krkn-chaos/krkn-syn-flood:latest
- Pod network filter scenarios - quay.io/krkn-chaos/krkn-network-chaos:latest
- Service hijacking scenarios - quay.io/krkn-chaos/krkn-service-hijacking:v0.1.3
How to Mirror
The strategy is simple:
Pull & Save: On a machine with internet access, pull the desired image from quay.io and use podman save to package it into a single archive file (a .tar file).
Transfer: Move this archive file to your disconnected cluster node using a method like a USB drive, a secure network file transfer, or any other means available.
Load: On the disconnected machine, use podman load to import the image from the archive file into the local container storage. The cluster’s container runtime can then use it.
Step-by-Step Instructions
Here’s a practical example using the quay.io/krkn-chaos/krkn-hub image.
Step 1: On the Connected Machine (Pull and Save)
First, pull the image from quay.io and then save it to a tarball.
Pull the image:
podman pull quay.io/krkn-chaos/krkn-hub:pod-scenarios
Save the image to a file: The -o or --output flag specifies the destination file.
podman save -o pod-scenarios.tar quay.io/krkn-chaos/krkn-hub:pod-scenarios
After this command, you will have a file named pod-scenarios.tar in your current directory.
Tip: You can save multiple images into a single archive to be more efficient.
podman save -o krkn-hub-images.tar quay.io/krkn-chaos/krkn-hub:pod-scenarios quay.io/krkn-chaos/krkn-hub:pod-network-chaos
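To cover the whole image list above, a small loop can automate the pull-and-save step. This sketch assumes podman is installed and the connected machine can reach quay.io; extend the tag list with the scenarios you need.

```shell
#!/bin/sh
# Sketch: pull several krkn-hub images and save each to its own tarball.
# Extend TAGS with the scenario tags you need from the list above.
TAGS="pod-scenarios node-cpu-hog network-chaos"

for tag in $TAGS; do
  podman pull "quay.io/krkn-chaos/krkn-hub:${tag}"
  podman save -o "${tag}.tar" "quay.io/krkn-chaos/krkn-hub:${tag}"
done
```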
Step 2: Transfer the Archive File
Move the pod-scenarios.tar file from your connected machine to a node within your disconnected cluster.
SCP (Secure Copy Protocol): If you have a secure, intermittent connection or bastion host.
scp ./pod-scenarios.tar user@disconnected-node-ip:/path/to/destination/
Step 3: On the Disconnected Machine (Load and Verify)
Once the file is on the disconnected machine, use podman load to import it.
Load the image: The -i or --input flag specifies the source archive.
podman load -i pod-scenarios.tar
Podman will read the tarball and restore the image layers into its local storage.
Verify the image is loaded: Check that the image now appears in your local image list by running podman images.
REPOSITORY                    TAG            IMAGE ID      CREATED       SIZE
quay.io/krkn-chaos/krkn-hub   pod-scenarios  b1a13a82513f  3 weeks ago   220 MB
You should see quay.io/krkn-chaos/krkn-hub in the output, ready to be used by your applications. 👍
The image is now available locally on that node for your container runtime (like CRI-O in OpenShift/Kubernetes) to create containers without needing to reach the internet. You may need to repeat this loading process on every node in the cluster that might run the container, or push it to a private registry within your disconnected environment.
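If you push to a private registry inside the disconnected environment instead, a retag-and-push step looks like the sketch below, where registry.example.local is a placeholder for your in-cluster registry.

```shell
# Sketch: retag the loaded image for a private registry and push it.
# registry.example.local is a placeholder for your in-cluster registry.
podman tag quay.io/krkn-chaos/krkn-hub:pod-scenarios \
  registry.example.local/krkn-chaos/krkn-hub:pod-scenarios
podman push registry.example.local/krkn-chaos/krkn-hub:pod-scenarios
```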
5 - Krkn-AI
How to install Krkn-AI
Installation
Prerequisites
- Python 3.9+
- Podman or Docker Container Runtime
- krknctl
- uv package manager (recommended) or pip
Clone the Repository
To clone and use the latest Krkn-AI version, follow the directions below. If you want to contribute back to Krkn-AI in the future, we recommend forking the repository before cloning.
$ git clone https://github.com/krkn-chaos/krkn-ai.git
$ cd krkn-ai
Fork and Clone the Repository
Fork the repository
$ git clone https://github.com/<github_user_id>/krkn-ai.git
$ cd krkn-ai
Set your local clone to track the upstream repository:
cd krkn-ai
git remote add upstream https://github.com/krkn-chaos/krkn-ai
Disable pushing to upstream master:
git remote set-url --push upstream no_push
git remote -v
Install the dependencies
To make sure Krkn-AI’s dependencies don’t interfere with other Python dependencies you may have locally, we recommend creating a virtual environment before installing them. Krkn-AI has only been tested up to Python 3.9.
Using pip package manager:
$ python3.9 -m venv .venv
$ source .venv/bin/activate
$ pip install -e .
# Check if installation is successful
$ krkn_ai --help
Using uv package manager:
$ pip install uv
$ uv venv --python 3.9
$ source .venv/bin/activate
$ uv pip install -e .
# Check if installation is successful
$ uv run krkn_ai --help
Note
Make sure python3-devel and a recent pip version are installed on the system. The dependency install has been tested with pip >= 21.1.3.
Getting Started with Krkn-AI
To configure Krkn-AI testing scenarios, check out getting started doc.