This scenario creates an outgoing firewall rule on specific nodes in your cluster, chosen by node name or a selector. This rule blocks connections to AWS EFS, leading to a temporary failure of any EFS volumes mounted on those affected nodes.
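For context, the kind of workload this disrupts looks like the following: a pod writing to an EFS-backed PersistentVolumeClaim provisioned through the AWS EFS CSI driver. This is only an illustrative sketch; the names and the efs-sc storage class are assumptions, not part of the scenario itself.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim                  # illustrative name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc         # assumes the EFS CSI driver's storage class
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: efs-app                    # illustrative name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "while true; do date >> /data/out.txt; sleep 5; done"]
      volumeMounts:
        - name: efs-volume
          mountPath: /data
  volumes:
    - name: efs-volume
      persistentVolumeClaim:
        claimName: efs-claim

While the firewall rule is in place on the node running such a pod, writes to /data hang until the rule is removed at the end of the test duration.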
How to Run EFS Disruption Scenarios
Choose your preferred method to run EFS disruption scenarios:
Example scenario file: efs_disruption.yml
Sample scenario config
- id: node_network_filter
  wait_duration: 0
  test_duration: 60                # seconds the filter stays in place
  label_selector: ''
  service_account: ''
  namespace: 'default'
  instance_count: 1
  execution: parallel
  ingress: false
  egress: true                     # block outgoing traffic
  target: '<NODE_NAME>'            # name of the node to target
  interfaces: []
  ports: [2049]                    # NFS port used by EFS mounts
  taints: []
  protocols:
    - tcp
    - udp
  image: quay.io/krkn-chaos/krkn-network-chaos:latest
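To pick nodes by label instead of by name, the same plugin accepts a label_selector. A minimal sketch, assuming nodes are selected via label_selector when target is left empty (the worker-role label is an illustrative choice):

- id: node_network_filter
  label_selector: 'node-role.kubernetes.io/worker='   # illustrative label
  target: ''                                          # left empty when selecting by label
  instance_count: 2                                   # number of matching nodes to target
  # ...remaining fields as in the sample config above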
How to Use the Plugin Name
Add the plugin name to the chaos_scenarios list in the config/config.yaml file:
kraken:
    kubeconfig_path: ~/.kube/config                    # Path to kubeconfig
    ..
    chaos_scenarios:
        - network_chaos_ng_scenarios:
            - scenarios/<scenario_name>.yaml
Note
You can specify multiple scenario files of the same type by adding additional paths to the list:
kraken:
    chaos_scenarios:
        - network_chaos_ng_scenarios:
            - scenarios/efs-disruption-1.yaml
            - scenarios/efs-disruption-2.yaml
            - scenarios/efs-disruption-3.yaml
You can also combine multiple different scenario types in the same config.yaml file. Scenario types can be specified in any order, and you can include the same scenario type multiple times:
kraken:
    chaos_scenarios:
        - network_chaos_ng_scenarios:
            - scenarios/efs-disruption.yaml
        - pod_disruption_scenarios:
            - scenarios/pod-kill.yaml
        - node_scenarios:
            - scenarios/node-reboot.yaml
        - network_chaos_ng_scenarios:                  # The same type can appear multiple times
            - scenarios/efs-disruption-2.yaml
Run
python run_kraken.py --config config/config.yaml
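While the scenario runs, you can watch the pods scheduled on the targeted node to observe the disruption (the node name is a placeholder):

kubectl get pods --all-namespaces --field-selector spec.nodeName=<node_name> -w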
This scenario can also be run with Krkn-hub, which packages it as a container image; the scenario parameters are passed as environment variables.
Run
podman run -v ~/.kube/config:/home/krkn/.kube/config:z -e TEST_DURATION="60" \
-e INGRESS="false" -e EGRESS="true" -e PROTOCOLS="tcp,udp" -e PORTS="2049" \
-e NODE_NAME="<node_name>" quay.io/krkn-chaos/krkn-hub:node-network-filter
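Docker should work the same way as Podman here; substitute the runtime and keep the flags unchanged:

docker run -v ~/.kube/config:/home/krkn/.kube/config:z -e TEST_DURATION="60" \
    -e INGRESS="false" -e EGRESS="true" -e PROTOCOLS="tcp,udp" -e PORTS="2049" \
    -e NODE_NAME="<node_name>" quay.io/krkn-chaos/krkn-hub:node-network-filter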
krknctl run node-network-filter \
--chaos-duration 60 \
--node-name kind-control-plane \
--ingress false \
--egress true \
--protocols tcp,udp \
--ports 2049
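If your krknctl version supports it, you can inspect the scenario's available flags before running:

krknctl describe node-network-filter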