Cerberus

Guardian of Kubernetes and OpenShift Clusters

Cerberus watches Kubernetes/OpenShift clusters for dead nodes and system component failures/health, and exposes a go or no-go signal which other workload generators or applications in the cluster can consume and act on accordingly.

Workflow

Cerberus workflow

Installation

Instructions on how to set up, configure and run Cerberus can be found at Installation.

What Kubernetes/OpenShift components can Cerberus monitor?

The following are the components of Kubernetes/OpenShift that Cerberus can monitor today; we will be adding more soon.

| Component | Description | Working |
| --- | --- | --- |
| Nodes | Watches all the nodes including masters and workers, as well as nodes created using custom MachineSets | :heavy_check_mark: |
| Namespaces | Watches all the pods, including containers running inside the pods, in the namespaces specified in the config | :heavy_check_mark: |
| Cluster Operators | Watches all Cluster Operators | :heavy_check_mark: |
| Masters Schedulability | Watches and warns if master nodes are marked as schedulable | :heavy_check_mark: |
| Routes | Watches specified routes | :heavy_check_mark: |
| CSRs | Warns if any CSRs are not approved | :heavy_check_mark: |
| Critical Alerts | Warns the user on observing abnormal behavior which might affect the health of the cluster | :heavy_check_mark: |
| Bring your own checks | Users can bring their own checks and Cerberus runs and includes them in the reporting as well as the go/no-go signal | :heavy_check_mark: |

An explanation of all the components that Cerberus can monitor can be found here.

How does Cerberus report cluster health?

Cerberus exposes cluster health and failures through a go/no-go signal, a report, and a metrics API.

Go or no-go signal

When Cerberus is configured to run in the daemon mode, it continuously monitors the specified components, runs a lightweight http server at http://0.0.0.0:8080 and publishes the signal, i.e. True or False depending on the status of the components. Tools can consume the signal and act accordingly.
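
For example, a workload generator can poll the signal before proceeding. Here is a minimal sketch in Python using the requests library; the address is the default shown above, and the assumption that the response body is the literal string True or False follows from the description:

import requests

CERBERUS_URL = "http://0.0.0.0:8080"  # default address; adjust if the port is changed in the config

def cluster_is_healthy():
    # Cerberus publishes the go/no-go signal as the string "True" or "False"
    response = requests.get(CERBERUS_URL, timeout=5)
    response.raise_for_status()
    return response.text.strip() == "True"

if cluster_is_healthy():
    print("Go: cluster is healthy, continuing the workload")
else:
    print("No-go: cluster reported failures, stopping")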

Report

The report is generated in the run directory and contains information about the status of each check/monitored component per iteration, with timestamps. It also displays information about the components in case of failure. Refer to report for an example.

You can use the “-o <file_path_name>” option to change the location of the created report.

Metrics API

Cerberus exposes the metrics including the failures observed during the run through an API. Tools consuming Cerberus can query the API to get a blob of json with the observed failures to scrape and act accordingly. For example, we can query for etcd failures within a start and end time and take actions to determine pass/fail for test cases or report whether the cluster is healthy or unhealthy for that duration.

  • The failures in the past 1 hour can be retrieved in the json format by visiting http://0.0.0.0:8080/history.
  • The failures in a specific time window can be retrieved in the json format by visiting http://0.0.0.0:8080/history?loopback=.
  • The failures between two timestamps, the failures of specific issue types and the failures related to specific components can be retrieved in the json format by visiting the http://0.0.0.0:8080/analyze url. The filters have to be applied to scrape the failures accordingly (see the sketch below).
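
As an illustration, the history endpoint can be scraped like this in Python; the loopback value shown is an assumption about the parameter's format, and the same request pattern applies to the /analyze url with its filters:

import json
import requests

CERBERUS_URL = "http://0.0.0.0:8080"

# Failures observed in the past 1 hour
history = requests.get(CERBERUS_URL + "/history", timeout=5).json()
print(json.dumps(history, indent=2))

# Failures within a specific time window via the loopback parameter
# (the value format here is illustrative)
windowed = requests.get(CERBERUS_URL + "/history", params={"loopback": "1"}, timeout=5).json()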

Slack integration

Cerberus supports reporting failures in slack. Refer to slack integration for information on how to set it up.

Node Problem Detector

Cerberus also consumes node-problem-detector to detect various failures in Kubernetes/OpenShift nodes. More information on setting it up can be found at node-problem-detector.

Bring your own checks

Users can add additional checks to monitor components that are not monitored by Cerberus and consume them as part of the go/no-go signal. This can be accomplished by placing the relative paths of the files containing the additional checks under custom_checks in the config file. All the checks should be placed within the main function of the file. If the additional checks need to be considered in determining the go/no-go signal of Cerberus, the main function can return a boolean value. Alternatively, returning a dict of the format {'status': status, 'message': message} sends the signal to Cerberus along with a message to be displayed in the slack notification. However, returning a value is optional. Refer to example_check for an example custom check file.
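
A minimal sketch of such a file is shown below; the endpoint being checked is a made-up placeholder, while the main-function contract (boolean or status/message dict) is as described above:

# custom_checks/my_check.py -- hypothetical file listed under custom_checks in the config
import requests

def main():
    # Placeholder check: verify that an application endpoint responds
    try:
        response = requests.get("http://my-app.example.com/healthz", timeout=5)
        healthy = response.status_code == 200
    except requests.RequestException:
        healthy = False
    # A bare boolean would also work: return healthy
    return {"status": healthy, "message": "my-app /healthz check"}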

Alerts

Monitoring metrics and alerting on abnormal behavior is critical as they are the indicators of cluster health. Information on supported alerts can be found at alerts.

Use cases

There can be a number of use cases; here are some of them:

  • We run tools to push the limits of Kubernetes/OpenShift to look at the performance and scalability. There are a number of instances where system components or nodes start to degrade, which invalidates the results while the workload generator continues to push the cluster until it is unrecoverable.

  • When running chaos experiments on a Kubernetes/OpenShift cluster, they can potentially break components unrelated to the targeted ones, which the chaos experiment itself will not detect. The go/no-go signal can be used here to decide whether the cluster recovered from the failure injection, as well as whether to continue with the next chaos scenario.

Tools consuming Cerberus

  • Benchmark Operator: The intent of this Operator is to deploy common workloads to establish a performance baseline of a Kubernetes cluster on your provider. Benchmark Operator consumes Cerberus to determine if the cluster was healthy during the benchmark run. More information can be found at cerberus-integration.

  • Kraken: Tool to inject deliberate failures into Kubernetes/OpenShift clusters to check if they are resilient. Kraken consumes Cerberus to determine if the cluster is healthy as a whole, in addition to the targeted component, during chaos testing. More information can be found at cerberus-integration.

Blogs and other useful resources

Contributions

We are always looking for enhancements and fixes to make it better; any contributions are most welcome. Feel free to report or work on the issues filed on github.

More information on how to Contribute

Community

Key Members(slack_usernames): paige, rook, mffiedler, mohit, dry923, rsevilla, ravi

Credits

Thanks to Mary Shakshober ( https://github.com/maryshak1996 ) for designing the logo.

1 - Installation

The following ways are supported to run Cerberus:

  • Standalone python program through Git or python package
  • Containerized version using either Podman or Docker as the runtime
  • Kubernetes or OpenShift deployment

Git

Pick the latest stable release to install here.

$ git clone https://github.com/redhat-chaos/cerberus.git --branch <release>

Install the dependencies

NOTE: It is recommended to use a virtual environment (pyenv, venv) to prevent conflicts with already installed packages.

$ pip3 install -r requirements.txt

Configure and Run

Set up the config according to your requirements. Information on the available options can be found at usage.

Run

$ python3 start_cerberus.py --config <config_file_location>

NOTE: When the config file location is not passed, the default config is used.

Python Package

Cerberus is also available as a python package to ease the installation and setup.

To install the latest release:

$ pip3 install cerberus-client

Configure and Run

Set up the config according to your requirements. Information on the available options can be found at usage.

Run

$ cerberus_client -c <config_file_location>

Containerized version

Assuming docker ( 17.05 or greater with multi-build support ) is installed on the host, run:

$ docker pull quay.io/redhat-chaos/cerberus
# Set up the config ( https://github.com/redhat-chaos/cerberus/tree/master/config ) according to your requirements. Information on the available options can be found at usage.
$ docker run --name=cerberus --net=host -v <path_to_kubeconfig>:/root/.kube/config -v <path_to_cerberus_config>:/root/cerberus/config/config.yaml -d quay.io/redhat-chaos/cerberus:latest
$ docker logs -f cerberus

Similarly, podman can be used to achieve the same:

$ podman pull quay.io/redhat-chaos/cerberus
# Set up the config ( https://github.com/redhat-chaos/cerberus/tree/master/config ) according to your requirements. Information on the available options can be found at usage.
$ podman run --name=cerberus --net=host -v <path_to_kubeconfig>:/root/.kube/config:Z -v <path_to_cerberus_config>:/root/cerberus/config/config.yaml:Z -d quay.io/redhat-chaos/cerberus:latest
$ podman logs -f cerberus

The go/no-go signal ( True or False ) gets published at http://<hostname>:8080. Note that Cerberus only supports IPv4 for the time being.

If you want to build your own Cerberus image, see here. To run Cerberus on Power (ppc64le) architecture, build and run a containerized version by following the instructions given here.

Run containerized Cerberus as a Kubernetes/OpenShift deployment

Refer to the instructions for information on how to run cerberus as a Kubernetes or OpenShift application.

2 - Config

Cerberus Config Components Explained

Config

Set the components to monitor and the tunings like duration to wait between each check in the config file located at config/config.yaml. A sample config looks like:

cerberus:
    distribution: openshift                              # Distribution can be kubernetes or openshift
    kubeconfig_path: /root/.kube/config                      # Path to kubeconfig
    port: 8081                                           # http server port where cerberus status is published
    watch_nodes: True                                    # Set to True for the cerberus to monitor the cluster nodes
    watch_cluster_operators: True                        # Set to True for cerberus to monitor cluster operators
    watch_terminating_namespaces: True                   # Set to True to monitor if any namespaces (set below under 'watch_namespaces') start terminating
    watch_url_routes:
    # Route url's you want to monitor, this is a double array with the url and optional authorization parameter
    watch_master_schedulable:                            # When enabled checks for the schedulable master nodes with given label.
        enabled: True
        label: node-role.kubernetes.io/master
    watch_namespaces:                                    # List of namespaces to be monitored
        -    openshift-etcd
        -    openshift-apiserver
        -    openshift-kube-apiserver
        -    openshift-monitoring
        -    openshift-kube-controller-manager
        -    openshift-machine-api
        -    openshift-kube-scheduler
        -    openshift-ingress
        -    openshift-sdn                                   # When enabled, it will check for the cluster sdn and monitor that namespace
    watch_namespaces_ignore_pattern: []                  # Ignores pods matching the regex pattern in the namespaces specified under watch_namespaces
    cerberus_publish_status: True                        # When enabled, cerberus starts a light weight http server and publishes the status
    inspect_components: False                            # Enable it only when OpenShift client is supported to run
                                                         # When enabled, cerberus collects logs, events and metrics of failed components

    prometheus_url:                                      # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
    prometheus_bearer_token:                             # The bearer token is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes. This is needed to authenticate with prometheus.
                                                         # This enables Cerberus to query prometheus and alert on observing high Kube API Server latencies.

    slack_integration: False                             # When enabled, cerberus reports the failed iterations in the slack channel
                                                         # The following env vars needs to be set: SLACK_API_TOKEN ( Bot User OAuth Access Token ) and SLACK_CHANNEL ( channel to send notifications in case of failures )
                                                         # When slack_integration is enabled, a watcher can be assigned for each day. The watcher of the day is tagged while reporting failures in the slack channel. Values are slack member ID's.
    watcher_slack_ID:                                        # (NOTE: Defining the watcher id's is optional and when the watcher slack id's are not defined, the slack_team_alias tag is used if it is set else no tag is used while reporting failures in the slack channel.)
        Monday:
        Tuesday:
        Wednesday:
        Thursday:
        Friday:
        Saturday:
        Sunday:
    slack_team_alias:                                    # The slack team alias to be tagged while reporting failures in the slack channel when no watcher is assigned

    custom_checks:
        -   custom_checks/custom_check_sample.py       # Relative paths of files containing additional user defined checks

tunings:
    timeout: 20                                          # Number of seconds before requests fail
    iterations: 1                                        # Iterations to loop before stopping the watch, it will be replaced with infinity when the daemon mode is enabled
    sleep_time: 3                                       # Sleep duration between each iteration
    kube_api_request_chunk_size: 250                     # Large requests will be broken into the specified chunk size to reduce the load on API server and improve responsiveness.
    daemon_mode: True                                    # Iterations are set to infinity which means that the cerberus will monitor the resources forever
    cores_usage_percentage: 0.5                          # Set the fraction of cores to be used for multiprocessing

database:
    database_path: /tmp/cerberus.db                      # Path where cerberus database needs to be stored
    reuse_database: False                                # When enabled, the database is reused to store the failures

Watch Nodes

This flag reports any nodes where KernelDeadlock is not set to False and any nodes that do not have a Ready status.

Watch Cluster Operators

When watch_cluster_operators is set to True, Cerberus monitors the degraded status of all the cluster operators and reports a failure if any are degraded. If set to False, it will not query or report the status of the cluster operators.

Watch Routes

This parameter expects a double array, with each item containing the url and an optional bearer token or authorization needed to properly connect to the url.

For example:

watch_url_routes:
- - <url>
  - <authorization> (optional)
- - https://prometheus-k8s-openshift-monitoring.apps.****.devcluster.openshift.com
  - Bearer ****
- - http://nodejs-mongodb-example-default.apps.****.devcluster.openshift.com

Watch Master Schedulable Status

When this check is enabled, cerberus queries each of the nodes with the given label and warns if the taint effect does not equal “NoSchedule”, i.e. if a master node is schedulable.

watch_master_schedulable:                            # When enabled checks for the schedulable master nodes with given label.
    enabled: True
    label: <label of master nodes>
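
Roughly, the logic amounts to the following sketch using the kubernetes Python client; this is an illustration of the check, not Cerberus's actual implementation:

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

label = "node-role.kubernetes.io/master"  # the label from the config above
for node in v1.list_node(label_selector=label).items:
    taints = node.spec.taints or []
    if not any(taint.effect == "NoSchedule" for taint in taints):
        print("Warning: master node %s is schedulable" % node.metadata.name)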

Watch Namespaces

Cerberus supports monitoring pods in any namespaces specified in the config; the watch is enabled by default for the system components mentioned in the config, as they are critical for running operations on Kubernetes/OpenShift clusters.

watch_namespaces supports regex patterns. Any valid regex pattern can be used to watch all the namespaces matching it. For example, ^openshift-.*$ can be used to watch all namespaces that start with openshift-, or openshift can be used to watch all namespaces that have openshift in them. You can also use ^.*$ to watch all namespaces in your cluster.
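
For instance, an illustrative watch_namespaces snippet mixing literal names and patterns could look like:

watch_namespaces:
    -    openshift-etcd                                  # literal namespace name
    -    ^openshift-.*$                                  # every namespace starting with openshift-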

Watch Terminating Namespaces

When watch_terminating_namespaces is set to True, this will monitor the status of all the namespaces defined under watch_namespaces and report a failure if any are terminating. If set to False, it will not query or report the status of the terminating namespaces.

Publish Status

Parameter to set if you want to publish the go/no-go signal to the http server.

Inspect Components

If inspect_components is set to True, Cerberus will perform an oc adm inspect namespace <namespace> whenever a namespace has failing pods.

Custom Checks

Users can add additional checks to monitor components that are not monitored by Cerberus and consume them as part of the go/no-go signal. This can be accomplished by placing the relative paths of the files containing the additional checks under custom_checks in the config file. All the checks should be placed within the main function of the file. If the additional checks need to be considered in determining the go/no-go signal of Cerberus, the main function can return a boolean value. Alternatively, returning a dict of the format {'status': status, 'message': message} sends the signal to Cerberus along with a message to be displayed in the slack notification. However, returning a value is optional.

Refer to example_check for an example custom check file.

3 - Example Report

2020-03-26 22:05:06,393 [INFO] Starting ceberus
2020-03-26 22:05:06,401 [INFO] Initializing client to talk to the Kubernetes cluster
2020-03-26 22:05:06,434 [INFO] Fetching cluster info
2020-03-26 22:05:06,739 [INFO] Publishing cerberus status at http://0.0.0.0:8080
2020-03-26 22:05:06,753 [INFO] Starting http server at http://0.0.0.0:8080
2020-03-26 22:05:06,753 [INFO] Daemon mode enabled, cerberus will monitor forever
2020-03-26 22:05:06,753 [INFO] Ignoring the iterations set

2020-03-26 22:05:25,104 [INFO] Iteration 4: Node status: True
2020-03-26 22:05:25,133 [INFO] Iteration 4: Etcd member pods status: True
2020-03-26 22:05:25,161 [INFO] Iteration 4: OpenShift apiserver status: True
2020-03-26 22:05:25,546 [INFO] Iteration 4: Kube ApiServer status: True
2020-03-26 22:05:25,717 [INFO] Iteration 4: Monitoring stack status: True
2020-03-26 22:05:25,720 [INFO] Iteration 4: Kube controller status: True
2020-03-26 22:05:25,746 [INFO] Iteration 4: Machine API components status: True
2020-03-26 22:05:25,945 [INFO] Iteration 4: Kube scheduler status: True
2020-03-26 22:05:25,963 [INFO] Iteration 4: OpenShift ingress status: True
2020-03-26 22:05:26,077 [INFO] Iteration 4: OpenShift SDN status: True
2020-03-26 22:05:26,077 [INFO] HTTP requests served: 0
2020-03-26 22:05:26,077 [INFO] Sleeping for the specified duration: 5


2020-03-26 22:05:31,134 [INFO] Iteration 5: Node status: True
2020-03-26 22:05:31,162 [INFO] Iteration 5: Etcd member pods status: True
2020-03-26 22:05:31,190 [INFO] Iteration 5: OpenShift apiserver status: True
127.0.0.1 - - [26/Mar/2020 22:05:31] "GET / HTTP/1.1" 200 -
2020-03-26 22:05:31,588 [INFO] Iteration 5: Kube ApiServer status: True
2020-03-26 22:05:31,759 [INFO] Iteration 5: Monitoring stack status: True
2020-03-26 22:05:31,763 [INFO] Iteration 5: Kube controller status: True
2020-03-26 22:05:31,788 [INFO] Iteration 5: Machine API components status: True
2020-03-26 22:05:31,989 [INFO] Iteration 5: Kube scheduler status: True
2020-03-26 22:05:32,007 [INFO] Iteration 5: OpenShift ingress status: True
2020-03-26 22:05:32,118 [INFO] Iteration 5: OpenShift SDN status: False
2020-03-26 22:05:32,118 [INFO] HTTP requests served: 1
2020-03-26 22:05:32,118 [INFO] Sleeping for the specified duration: 5
+--------------------------------------------------Failed Components--------------------------------------------------+
2020-03-26 22:05:37,123 [INFO] Failed openshfit sdn components: ['sdn-xmqhd']

2020-05-23 23:26:43,041 [INFO] ------------------------- Iteration Stats ---------------------------------------------
2020-05-23 23:26:43,041 [INFO] Time taken to run watch_nodes in iteration 1: 0.0996248722076416 seconds
2020-05-23 23:26:43,041 [INFO] Time taken to run watch_cluster_operators in iteration 1: 0.3672499656677246 seconds
2020-05-23 23:26:43,041 [INFO] Time taken to run watch_namespaces in iteration 1: 1.085144281387329 seconds
2020-05-23 23:26:43,041 [INFO] Time taken to run entire_iteration in iteration 1: 4.107403039932251 seconds
2020-05-23 23:26:43,041 [INFO] ---------------------------------------------------------------------------------------

4 - Usage

Config

Set the supported components to monitor and the tunings like number of iterations to monitor and duration to wait between each check in the config file located at config/config.yaml. A sample config looks like:

cerberus:
    distribution: openshift                              # Distribution can be kubernetes or openshift
    kubeconfig_path: ~/.kube/config                      # Path to kubeconfig
    port: 8080                                           # http server port where cerberus status is published
    watch_nodes: True                                    # Set to True for the cerberus to monitor the cluster nodes
    watch_cluster_operators: True                        # Set to True for cerberus to monitor cluster operators. Parameter is optional, will set to True if not specified
    watch_url_routes:                                    # Route url's you want to monitor
        - - https://...
          - Bearer ****                                  # This parameter is optional, specify authorization need for get call to route
        - - http://...
    watch_master_schedulable:                            # When enabled checks for the schedulable master nodes with given label.
        enabled: True
        label: node-role.kubernetes.io/master
    watch_namespaces:                                    # List of namespaces to be monitored
        -    openshift-etcd
        -    openshift-apiserver
        -    openshift-kube-apiserver
        -    openshift-monitoring
        -    openshift-kube-controller-manager
        -    openshift-machine-api
        -    openshift-kube-scheduler
        -    openshift-ingress
        -    openshift-sdn
    cerberus_publish_status: True                        # When enabled, cerberus starts a light weight http server and publishes the status
    inspect_components: False                            # Enable it only when OpenShift client is supported to run.
                                                         # When enabled, cerberus collects logs, events and metrics of failed components

    prometheus_url:                                      # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
    prometheus_bearer_token:                             # The bearer token is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes. This is needed to authenticate with prometheus.
                                                         # This enables Cerberus to query prometheus and alert on observing high Kube API Server latencies.

    slack_integration: False                             # When enabled, cerberus reports status of failed iterations in the slack channel
                                                         # The following env vars need to be set: SLACK_API_TOKEN ( Bot User OAuth Access Token ) and SLACK_CHANNEL ( channel to send notifications in case of failures )
                                                         # When slack_integration is enabled, a watcher can be assigned for each day. The watcher of the day is tagged while reporting failures in the slack channel. Values are slack member ID's.
    watcher_slack_ID:                                        # (NOTE: Defining the watcher id's is optional and when the watcher slack id's are not defined, the slack_team_alias tag is used if it is set else no tag is used while reporting failures in the slack channel.)
        Monday:
        Tuesday:
        Wednesday:
        Thursday:
        Friday:
        Saturday:
        Sunday:
    slack_team_alias:                                    # The slack team alias to be tagged while reporting failures in the slack channel when no watcher is assigned

    custom_checks:                                       # Relative paths of files containing additional user defined checks
        -   custom_checks/custom_check_sample.py
        -   custom_check.py

tunings:
    iterations: 5                                        # Iterations to loop before stopping the watch, it will be replaced with infinity when the daemon mode is enabled
    sleep_time: 60                                       # Sleep duration between each iteration
    kube_api_request_chunk_size: 250                     # Large requests will be broken into the specified chunk size to reduce the load on API server and improve responsiveness.
    daemon_mode: True                                    # Iterations are set to infinity which means that the cerberus will monitor the resources forever
    cores_usage_percentage: 0.5                          # Set the fraction of cores to be used for multiprocessing

database:
    database_path: /tmp/cerberus.db                      # Path where cerberus database needs to be stored
    reuse_database: False                                # When enabled, the database is reused to store the failures

5 - Alerts

Cerberus consumes the metrics from Prometheus deployed on the cluster to report the alerts.

When provided the prometheus url and bearer token in the config, Cerberus reports the following alerts:

  • KubeAPILatencyHigh: alerts at the end of each iteration and warns if 99th percentile latency for given requests to the kube-apiserver is above 1 second. It is the official SLI/SLO defined for Kubernetes.

  • High number of etcd leader changes: alerts the user when an increase in etcd leader changes is observed on the cluster. Frequent elections may be a sign of insufficient resources, high network latency, or disruptions by other components and should be investigated.

NOTE: The prometheus url and bearer token are automatically picked from the cluster if the distribution is OpenShift since it’s the default metrics solution. In case of Kubernetes, they need to be provided in the config if prometheus is deployed.
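
For manual inspection, the underlying signals can be queried straight from the Prometheus HTTP API. A sketch in Python follows, where the metric name and the one-hour window are illustrative rather than Cerberus's exact query:

import requests

PROMETHEUS_URL = "https://prometheus.example.com"   # the prometheus url from the config
BEARER_TOKEN = "****"                               # the bearer token from the config

# Illustrative PromQL for etcd leader changes over the past hour
query = "increase(etcd_server_leader_changes_seen_total[1h])"
response = requests.get(
    PROMETHEUS_URL + "/api/v1/query",
    params={"query": query},
    headers={"Authorization": "Bearer " + BEARER_TOKEN},
    timeout=10,
)
for result in response.json()["data"]["result"]:
    print(result["metric"], result["value"][1])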

6 - Node Problem Detector

node-problem-detector aims to make various node problems visible to the upstream layers in cluster management stack.

Installation

Please follow the instructions in the installation section to set up Node Problem Detector on Kubernetes. The following instructions set it up on OpenShift:

  1. Create openshift-node-problem-detector namespace ns.yaml with oc create -f ns.yaml
  2. Add cluster role with oc adm policy add-cluster-role-to-user system:node-problem-detector -z default -n openshift-node-problem-detector
  3. Add security context constraints with oc adm policy add-scc-to-user privileged system:serviceaccount:openshift-node-problem-detector:default
  4. Edit node-problem-detector.yaml to fit your environment.
  5. Edit node-problem-detector-config.yaml to configure node-problem-detector.
  6. Create the ConfigMap with oc create -f node-problem-detector-config.yaml
  7. Create the DaemonSet with oc create -f node-problem-detector.yaml

Once installed, you will see node-problem-detector pods in the openshift-node-problem-detector namespace. Now enable openshift-node-problem-detector in the config.yaml. Cerberus only monitors the KernelDeadlock condition provided by the node problem detector, as it is system critical and can hinder node performance.

7 - Slack Integration

The user has the option to enable/disable the slack integration ( disabled by default ). To use the slack integration, the user first has to create an app and add a bot to it on slack. The SLACK_API_TOKEN and SLACK_CHANNEL environment variables have to be set: SLACK_API_TOKEN refers to the Bot User OAuth Access Token, and SLACK_CHANNEL refers to the ID of the slack channel in which the user wishes to receive the notifications. Make sure the Slack Bot Token Scopes contain these permissions: [calls:read] [channels:read] [chat:write] [groups:read] [im:read] [mpim:read].

  • Reports when cerberus starts monitoring a cluster in the specified slack channel.
  • Reports the component failures in the slack channel.
  • A watcher can be assigned for each day of the week. The watcher of the day is tagged while reporting failures in the slack channel instead of everyone. (NOTE: Defining the watcher id’s is optional and when the watcher slack id’s are not defined, the slack_team_alias tag is used if it is set else no tag is used while reporting failures in the slack channel.)

Go or no-go signal

When Cerberus is configured to run in the daemon mode, it continuously monitors the specified components, runs a simple http server at http://0.0.0.0:8080 and publishes the signal, i.e. True or False depending on the status of the components. Tools can consume the signal and act accordingly.

Failures in a time window

  1. The failures in the past 1 hour can be retrieved in the json format by visiting http://0.0.0.0:8080/history.
  2. The failures in a specific time window can be retrieved in the json format by visiting http://0.0.0.0:8080/history?loopback=.
  3. The failures between two timestamps, the failures of specific issue types and the failures related to specific components can be retrieved in the json format by visiting the http://0.0.0.0:8080/analyze url. The filters have to be applied to scrape the failures accordingly.

Sample Slack Config

This is a snippet of how your slack config could look within your cerberus_config.yaml.

    watcher_slack_ID:
        Monday: U1234ABCD   # replace with your Slack ID from Profile-> More -> Copy Member ID
        Tuesday:            # Same or different ID can be used for remaining days depending on who you want to tag
        Wednesday:
        Thursday:
        Friday:
        Saturday:
        Sunday:
    slack_team_alias:   @group_or_team_id

8 - Contribute

How to contribute

Contributions are always appreciated.

How to:

Pull request

In order to submit a change or a PR, please fork the project and follow these instructions:

$ git clone http://github.com/<me>/cerberus
$ cd cerberus
$ git checkout -b <branch_name>
$ <make change>
$ git add <changes>
$ git commit -a
$ <insert good message>
$ git push

Fix Formatting

Cerberus uses the pre-commit framework to maintain code linting and python code styling. The CI runs the pre-commit check on each pull request. We encourage our contributors to follow the same pattern while contributing to the code.

The pre-commit configuration file, .pre-commit-config.yaml, is present in the repository. It contains the different code styling and linting guides which we use for the application.

The following command can be used to run pre-commit: pre-commit run --all-files

If pre-commit is not installed on your system, it can be installed with: pip install pre-commit

Squash Commits

If there are multiple commits, please rebase/squash them before creating the PR by following:

$ git checkout <my-working-branch>
$ git rebase -i HEAD~<num_of_commits_to_merge>
   -OR-
$ git rebase -i <commit_id_of_first_change_commit>

In the interactive rebase screen, set the first commit to pick and all others to squash (or whatever else you may need to do).

Push your rebased commits (you may need to force), then issue your PR.

$ git push origin <my-working-branch> --force