1 - Usage
Commands:
Commands are grouped by action and may include one or more subcommands to further define the specific action.
list <subcommand>
:
describe <scenario name>
:
Describes the specified scenario, giving the user an overview of the actions the scenario will perform on
the target system. It will also show all the available flags that the scenario accepts as input to modify its
behaviour.
run <scenario name> [flags]
:
Runs the selected scenario with the specified options.
Tip
Because the kubeconfig file may reference external certificates stored on the filesystem,
which won't be accessible once mounted inside the container, it will be automatically
copied to the directory where the tool is executed. During this process, the kubeconfig
will be flattened by encoding the certificates in base64 and inlining them directly into the file.
Tip
If you want to interrupt the scenario while running in attached mode, simply hit CTRL+C: the
container will be killed and the scenario interrupted immediately.
Common flags:
| Flag | Description |
|---|---|
| --kubeconfig | kubeconfig path (if empty, defaults to ~/.kube/config) |
| --detached | runs the scenario in detached mode (background); it is possible to reattach the tool to the container logs with the attach command |
| --alerts-profile | mounts a custom alerts profile in the container (check the krkn documentation for further info) |
| --metrics-profile | mounts a custom metrics profile in the container (check the krkn documentation for further info) |
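As a rough sketch of what the kubeconfig flattening described in the tip above involves (illustrative only, not krknctl's actual code; similar in spirit to `kubectl config view --flatten`), certificate file references in the kubeconfig are replaced by their base64-encoded contents:

```python
import base64

def flatten_kubeconfig(config):
    """Inline certificate file references as base64 data.
    `config` is a kubeconfig parsed into a dict; the key names follow
    the standard kubeconfig format. Sketch of the idea only."""
    key_map = {
        "certificate-authority": "certificate-authority-data",
        "client-certificate": "client-certificate-data",
        "client-key": "client-key-data",
    }
    for section in ("clusters", "users"):
        for entry in config.get(section, []):
            body = entry.get("cluster", entry.get("user", {}))
            for path_key, data_key in key_map.items():
                path = body.pop(path_key, None)
                if path is not None:
                    # replace the filesystem reference with inlined base64 data
                    with open(path, "rb") as f:
                        body[data_key] = base64.b64encode(f.read()).decode()
    return config
```

The resulting file is self-contained, so it can be safely mounted inside the scenario container.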
graph <subcommand>
:
In addition to running individual scenarios, the tool can also orchestrate
multiple scenarios in serial, parallel, or mixed execution by utilizing a
scenario dependency graph resolution algorithm.
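The resolution algorithm can be pictured as repeated "layer peeling" over a plan in which each scenario optionally declares a depends_on id. The following Python sketch (illustrative only, not krknctl's implementation) groups scenarios into execution steps and rejects cyclic configurations:

```python
def resolve_layers(plan):
    """Group scenarios into execution steps: each scenario runs in the
    first step after the one containing its dependency; independent
    scenarios in the same step run in parallel. Raises on cycles."""
    remaining = dict(plan)  # scenario id -> depends_on id (or None)
    done = set()
    layers = []
    while remaining:
        # a scenario is runnable once its dependency has already run
        layer = sorted(sid for sid, dep in remaining.items()
                       if dep is None or dep in done)
        if not layer:  # every remaining scenario waits on another: a cycle
            raise ValueError("dependency cycle among: %s" % sorted(remaining))
        for sid in layer:
            del remaining[sid]
        done.update(layer)
        layers.append(layer)
    return layers
```

With a serial chain, each step holds exactly one scenario; with several scenarios all depending on a single root, step one runs the root and step two runs everything else in parallel.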
scaffold <scenario names> [flags]
:
Scaffolds a basic execution plan structure in JSON format for all the scenario names provided.
The default structure is a serial execution with a root node, each node depending on the
previous one starting from the root. Starting from this configuration it is possible to define complex
scenarios by changing the dependencies between the nodes.
A random id is assigned to each scenario and the dependency is defined through the
depends_on
attribute. The scenario id is not strictly tied to the scenario type, so it is
perfectly legitimate to repeat the same scenario type (with the same or different attributes), varying the
scenario id and the dependencies accordingly.
./krknctl graph scaffold node-cpu-hog node-memory-hog node-io-hog service-hijacking node-cpu-hog > plan.json
will generate a (serial) execution plan containing all the available options for each of the scenarios mentioned, with default values
where defined, or a description of the content expected for the field.
Note
Any graph configuration is supported except cycles (self dependencies or transitive ones).
Supported flags:
| Flag | Description |
|---|---|
| --global-env | if set, adds global environment variables to each scenario in the graph |
run <json execution plan path> [flags]
:
It will display the resolved dependency graph, detailing all the scenarios executed at each dependency step, and will instruct
the container runtime to execute the krkn scenarios accordingly.
Note
Since multiple scenarios can be executed within a single running plan, the output is redirected
to files in the directory where the command is run. These files are named using the following
format: krknctl---.log.
Supported flags:
| Flag | Description |
|---|---|
| --kubeconfig | kubeconfig path (if empty, defaults to ~/.kube/config) |
| --alerts-profile | mounts a custom alerts profile in the container (check the krkn documentation for further info) |
| --metrics-profile | mounts a custom metrics profile in the container (check the krkn documentation for further info) |
| --exit-on-error | if set, the workflow is interrupted on error and the tool exits with a status greater than 0 |
Supported graph configurations:

Serial execution:
All the nodes depend on each other, forming a chain; the execution starts from the last item of the chain.
Mixed execution:
The graph is structured in different "layers", so the execution happens step by step, running all the scenarios of a
step in parallel and waiting for the step to finish before moving to the next.
Parallel execution:
To achieve full parallel execution, where each step can run concurrently (if it involves multiple scenarios),
the approach is to use a root scenario as the entry point, with several other scenarios dependent on it.
While we could have implemented a completely new command to handle this, doing so would have introduced additional
code to support what is essentially a specific case of graph execution.
Instead, we developed a scenario called dummy-scenario
. This scenario performs no actual actions but simply pauses
for a set duration. It serves as an ideal root node, allowing all dependent nodes to execute in parallel without adding
unnecessary complexity to the codebase.
random <subcommand>
Random orchestration can be used to test parallel scenarios by generating random graphs from a set of preconfigured scenarios.
Differently from the graph command, the scenarios in the JSON plan don't have dependencies between them, since the dependencies
are generated at runtime.
This might also be helpful to run multiple chaos scenarios at large scale.
scaffold <scenario names> [flags]
Creates the structure for a random plan execution, without any dependency between the scenarios. Once properly configured, this can
be used as a seed to generate large test plans for large scale tests.
This subcommand supports a basic scaffolding mode, where users specify the desired scenario names, and can also generate a plan file of any size using pre-configured scenarios as a template (or seed). This mode is extensively covered in the scale testing section.
Supported flags:
| Flag | Description |
|---|---|
| --global-env | if set, adds global environment variables to each scenario in the graph |
| --number-of-scenarios | the number of scenarios that will be created from the template file |
| --seed-file | template file with already configured scenarios used to generate the random test plan |
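Conceptually, expanding a seed into a random plan amounts to sampling scenarios from the seed and giving each copy a unique suffixed id, as in the scaffold output shown later (e.g. application-outages-1-1--6oJCqST). A hypothetical Python sketch of this expansion (not krknctl's actual code):

```python
import random
import string

def expand_seed(seed, number_of_scenarios, rng=random):
    """Build a random plan of the requested size by sampling scenarios
    from the seed and giving each copy a unique suffixed id.
    Illustrative sketch only; the real tool's logic may differ."""
    alphabet = string.ascii_letters + string.digits
    plan = {}
    for _ in range(number_of_scenarios):
        base_id = rng.choice(list(seed))
        suffix = "".join(rng.choice(alphabet) for _ in range(7))
        scenario = dict(seed[base_id])
        scenario.pop("depends_on", None)  # random plans carry no dependencies
        plan["%s--%s" % (base_id, suffix)] = scenario
    return plan
```

Each seed scenario may be replicated any number of times, which is why a larger, more varied seed yields a more diverse plan.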
run <json execution plan path> [flags]
Supported flags:
| Flag | Description |
|---|---|
| --alerts-profile | custom alerts profile file path |
| --exit-on-error | if set, the workflow is interrupted on error and the tool exits with a status greater than 0 |
| --graph-dump | specifies the name of the file where the randomly generated dependency graph will be persisted |
| --kubeconfig | kubeconfig path (if not set, defaults to ~/.kube/config) |
| --max-parallel | maximum number of parallel scenarios |
| --metrics-profile | custom metrics profile file path |
| --number-of-scenarios | allows you to specify the number of elements to select from the execution plan |
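One simple way to picture the runtime dependency generation (a hypothetical sketch; the tool's actual algorithm may differ) is to shuffle the scenarios and cut them into steps of at most max-parallel, with each scenario depending on a node from the previous step. The graph is acyclic by construction:

```python
import random

def random_graph(scenario_ids, max_parallel, rng=random):
    """Assign random depends_on edges at runtime: scenarios are shuffled
    and cut into steps of at most `max_parallel`, each depending on a
    node from the previous step. Illustrative sketch only."""
    ids = list(scenario_ids)
    rng.shuffle(ids)
    depends_on = {}
    previous_layer = []
    i = 0
    while i < len(ids):
        width = rng.randint(1, max_parallel)
        layer = ids[i:i + width]
        for sid in layer:
            # first layer has no dependency; later layers point backwards
            depends_on[sid] = rng.choice(previous_layer) if previous_layer else None
        previous_layer = layer
        i += width
    return depends_on
```

Since every edge points to an earlier step, no step ever exceeds max-parallel scenarios and cycles cannot occur.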
attach <scenario ID>
:
If a scenario has been executed in detached mode or through a graph plan and you want to attach to the container's
standard output, this command comes to help.
Tip
To interrupt the output hit CTRL+C; this won't interrupt the container, only the output.
Tip
If shell completion is enabled, pressing TAB twice will display a list of running
containers along with their respective IDs, helping you select the correct one.
clean
:
Removes all the krkn containers from the container runtime and deletes all the kubeconfig files
and logfiles created by the tool in the current folder.
query-status <container Id or Name> [--graph <graph file path>]
:
The tool will query the container platform to retrieve information about a container by its name or ID if the --graph
flag is not provided. If the --graph flag is set, it will instead query the status of all container names
listed in the graph file. When a single container name or ID is specified,
the tool will exit with the same status as that container.
Tip
This function can be integrated into CI/CD pipelines to halt execution if the chaos run encounters any failure.
Running krknctl on a disconnected environment with a private registry
If you’re using krknctl in a disconnected environment, you can mirror the desired krkn-hub images to your private registry and configure krknctl to use that registry as the backend. Krknctl supports this through global flags or environment variables.
Private registry global flags
| Flag | Environment Variable | Description |
|---|---|---|
| --private-registry | KRKNCTL_PRIVATE_REGISTRY | private registry URI (e.g. quay.io, without any protocol schema prefix) |
| --private-registry-insecure | KRKNCTL_PRIVATE_REGISTRY_INSECURE | uses plain HTTP instead of TLS |
| --private-registry-password | KRKNCTL_PRIVATE_REGISTRY_PASSWORD | private registry password for basic authentication |
| --private-registry-scenarios | KRKNCTL_PRIVATE_REGISTRY_SCENARIOS | private registry krkn scenarios image repository |
| --private-registry-skip-tls | KRKNCTL_PRIVATE_REGISTRY_SKIP_TLS | skips TLS verification on the private registry |
| --private-registry-token | KRKNCTL_PRIVATE_REGISTRY_TOKEN | private registry identity token for token-based authentication |
| --private-registry-username | KRKNCTL_PRIVATE_REGISTRY_USERNAME | private registry username for basic authentication |
Note
Not all options are available on every platform due to limitations in the container runtime platform SDK:
Podman
Token authentication is not supported
Docker
Skipping TLS verification cannot be done via CLI; the docker daemon needs to be configured for that purpose, please follow the documentation
Example: Running krknctl on quay.io private registry
Note
This example will run only on Docker, since token authentication is not yet implemented in the podman SDK.
For this example I will use an invented private registry on quay.io: my-quay-user/krkn-hub
- obtain an identity token from the registry:
curl -s -X GET \
--user 'user:password' \
"https://quay.io/v2/auth?service=quay.io&scope=repository:my-quay-user/krkn-hub:pull,push" \
-k | jq -r '.token'
- run krknctl with the private registry flags:
krknctl \
--private-registry quay.io \
--private-registry-scenarios my-quay-user/krkn-hub \
--private-registry-token <your token obtained in the previous step> \
list available
- your images should be listed on the console
Note
To make krknctl commands more concise, it’s more convenient to export the corresponding environment variables instead of prepending flags to every command. The relevant variables are:
- KRKNCTL_PRIVATE_REGISTRY
- KRKNCTL_PRIVATE_REGISTRY_SCENARIOS
- KRKNCTL_PRIVATE_REGISTRY_TOKEN
2 - Randomized chaos testing
The random subcommand is valuable for generating chaos tests on a large scale with ease and speed. The random scaffold command, when used with the --seed-file and --number-of-scenarios flags, allows you to expand a pre-existing random or graph plan used as a template (or seed). The tool randomly distributes scenarios from the seed file to meet the specified number of scenarios. The resulting output is compatible exclusively with the random run command, which generates a random graph from it.
Warning
Graph scaffolded scenarios can serve as input for random scaffold --seed-file and random run, as dependencies are simply ignored. However, the reverse is not true. To address this, graphs generated by the random run command are saved (with the path and file name configurable via the --graph-dump flag) and can be replayed using the graph run command.
Example
Let's start from the following chaos test graph called graph.json:
{
"application-outages-1-1": {
"image": "quay.io/krkn-chaos/krkn-hub:application-outages",
"name": "application-outages",
"env": {
"BLOCK_TRAFFIC_TYPE": "[Ingress, Egress]",
"DURATION": "30",
"NAMESPACE": "dittybopper",
"POD_SELECTOR": "{app: dittybopper}",
"WAIT_DURATION": "1",
"KRKN_DEBUG": "True"
}
},
"application-outages-1-2": {
"image": "quay.io/krkn-chaos/krkn-hub:application-outages",
"name": "application-outages",
"env": {
"BLOCK_TRAFFIC_TYPE": "[Ingress, Egress]",
"DURATION": "30",
"NAMESPACE": "default",
"POD_SELECTOR": "{app: nginx}",
"WAIT_DURATION": "1",
"KRKN_DEBUG": "True"
},
"depends_on": "root-scenario"
},
"root-scenario-1": {
"_comment": "I'm the root Node!",
"image": "quay.io/krkn-chaos/krkn-hub:dummy-scenario",
"name": "dummy-scenario",
"env": {
"END": "10",
"EXIT_STATUS": "0"
}
}
}
Note
The larger the seed file, the more diverse the resulting output file will be.
- Step 1: let's expand it to 100 scenarios with the command
krknctl random scaffold --seed-file graph.json --number-of-scenarios 100 > big-random-graph.json
This will produce a file containing 100 compiled scenarios, replicating the three scenarios above a random number of times each:
{
"application-outages-1-1--6oJCqST": {
"image": "quay.io/krkn-chaos/krkn-hub:application-outages",
"name": "application-outages",
"env": {
"BLOCK_TRAFFIC_TYPE": "[Ingress, Egress]",
"DURATION": "30",
"KRKN_DEBUG": "True",
"NAMESPACE": "dittybopper",
"POD_SELECTOR": "{app: dittybopper}",
"WAIT_DURATION": "1"
}
},
"application-outages-1-1--JToAFrk": {
"image": "quay.io/krkn-chaos/krkn-hub:application-outages",
"name": "application-outages",
"env": {
"BLOCK_TRAFFIC_TYPE": "[Ingress, Egress]",
"DURATION": "30",
"KRKN_DEBUG": "True",
"NAMESPACE": "dittybopper",
"POD_SELECTOR": "{app: dittybopper}",
"WAIT_DURATION": "1"
}
},
"application-outages-1-1--ofb4iMD": {
"image": "quay.io/krkn-chaos/krkn-hub:application-outages",
"name": "application-outages",
"env": {
"BLOCK_TRAFFIC_TYPE": "[Ingress, Egress]",
"DURATION": "30",
"KRKN_DEBUG": "True",
"NAMESPACE": "dittybopper",
"POD_SELECTOR": "{app: dittybopper}",
"WAIT_DURATION": "1"
}
},
"application-outages-1-1--tLPY-MZ": {
"image": "quay.io/krkn-chaos/krkn-hub:application-outages",
"name": "application-outages",
"env": {
"BLOCK_TRAFFIC_TYPE": "[Ingress, Egress]",
"DURATION": "30",
"KRKN_DEBUG": "True",
"NAMESPACE": "dittybopper",
"POD_SELECTOR": "{app: dittybopper}",
"WAIT_DURATION": "1"
}
},
... (and the other 96 scenarios)
- Step 2: run the randomly generated chaos test using the command
krknctl random run big-random-graph.json --max-parallel 50 --graph-dump big-graph.json
This instructs krknctl to orchestrate the scenarios in the specified file within a graph, allowing up to 50 scenarios to run in parallel per step, while ensuring all scenarios listed in the JSON input file are executed. The generated random graph will be saved to a file named big-graph.json.
Warning
The max-parallel value should be tuned according to machine resources, as it determines the number of parallel krkn instances executed simultaneously on the local machine via containers on podman or docker.
- Step 3: if you found the previous chaos run disruptive and you want to re-execute it periodically, you can store big-graph.json somewhere and replay it with the command krknctl graph run big-graph.json