Troubleshooting the Sensor

You've deployed the Soveren Sensor and everything should be working properly. However, if you don't see any data in the Soveren app, or something seems amiss, here are several troubleshooting steps you can follow.

Verifying the deployment

Ensure that you're running the latest version of the Soveren Sensor. You can verify this with the following command:

helm search repo soveren

You can then check the versions listed in the output with our customer success team to confirm that you're running the latest release.
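
If the versions in the output look stale, your local copy of the chart repository may simply be out of date. Refreshing it is a standard Helm step and safe to run at any time:

helm repo update

Then re-run the search to see the latest published chart versions.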

Refer to our current Helm chart for all values that can be tuned for the Soveren Sensors, as well as for the current image and component versions.
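
If you'd like to inspect the chart's tunable values locally, Helm can print them directly. The chart names below are assumptions based on the release names used later in this guide (soveren-agent and soveren-dar-sensor); adjust them if your chart names differ:

helm show values soveren/soveren-agent
helm show values soveren/soveren-dar-sensor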

Next, it's advisable to confirm that all Soveren Sensor components have been successfully deployed:

helm -n soverenio list

In this command, soverenio is the namespace where you've deployed the Sensor.

Ensure you observe all of the following:

  • interceptor: There should be several instances, equal to the number of nodes in your cluster. Interceptors collect the traffic from nodes and relay it to kafka.
  • digger: One instance, reads data from kafka, sends it to the detection-tool, collects results, and forwards relevant metadata to the Soveren Cloud.
  • kafka: Only one instance should exist, which receives traffic from the interceptors.
  • detection-tool: A single instance, performs the bulk of the work detecting sensitive data.
  • prometheus-agent: A single instance, monitors basic metrics from all other Sensor components.

helm -n soverenio-dar-sensor list

In this command, soverenio-dar-sensor is the namespace where you've deployed the Sensor.

Ensure you observe all of the following:

  • crawler: One instance. Reads data from data sources, sends it to the detection-tool, collects results, and forwards relevant metadata to the Soveren Cloud.
  • kafka: Only one instance should exist.
  • detection-tool: A single instance, performs the bulk of the work detecting sensitive data.
  • prometheus-agent: A single instance, monitors basic metrics from all other Sensor components.
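
As a quick complementary check, you can also list the pods in each namespace and confirm that they are all Running with the expected READY counts:

kubectl -n soverenio get pods
kubectl -n soverenio-dar-sensor get pods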

Additionally, ensure that all custom values specified in your values.yaml have been incorporated into the deployment:

helm -n soverenio get values soveren-agent | grep -v token
helm -n soverenio-dar-sensor get values soveren-dar-sensor | grep -v token

These commands offer a basic check of the consistency of your Soveren Sensor setup.

Be prepared to share the output of these commands when discussing issues with our customer success team.
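
If you want to compare the deployed values against the values.yaml you used during installation, one simple approach is to dump them to a file and diff the two (the file names here are placeholders for whatever you actually use):

helm -n soverenio get values soveren-agent | grep -v token > deployed-values.yaml
diff deployed-values.yaml values.yaml

The diff won't be exact, since helm get values adds a USER-SUPPLIED VALUES: header and the grep drops the token line, but it will highlight any custom value that didn't make it into the deployment.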

Verifying individual components

If the basic setup appears correct but issues persist, consider inspecting individual components.

Checking Deployments and DaemonSet

First, review the configurations of each component:

kubectl -n soverenio describe deployment -l app.kubernetes.io/component=[digger|kafka|prometheus-agent|detection-tool]
kubectl -n soverenio-dar-sensor describe deployment -l app.kubernetes.io/component=[crawler|kafka|prometheus-agent|detection-tool]

These components run as Kubernetes Deployments, and this command provides detailed information about each of them.

Since Interceptors function as a Kubernetes DaemonSet, they require a different command:

kubectl -n soverenio describe daemonset -l app.kubernetes.io/component=interceptor
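
Before digging into the describe output, a quicker overview is to check that the desired and ready counts match for every workload; this is plain kubectl, nothing Soveren-specific:

kubectl -n soverenio get deployments,daemonsets
kubectl -n soverenio-dar-sensor get deployments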

Permissions required by Interceptors

If issues arise specifically with the Interceptors, such as difficulties transitioning to running mode, confirm they possess the requisite permissions:

kubectl -n soverenio get daemonset -l app.kubernetes.io/component=interceptor -o yaml

The securityContext must contain the following:

securityContext:
  privileged: true

Also, ensure that the output includes:

dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
hostPID: true

Ensure the securityContext for Interceptors is properly set

Interceptors listen on the host's virtual interfaces, so they must run in privileged mode; otherwise, they will fail to capture traffic.
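
If you'd rather not scan the full YAML by eye, a jsonpath query can pull out just the relevant fields. This is a minimal sketch that assumes the privileged flag is set at the container level, as in the snippet above:

kubectl -n soverenio get daemonset -l app.kubernetes.io/component=interceptor -o jsonpath='{.items[*].spec.template.spec.containers[*].securityContext}'
kubectl -n soverenio get daemonset -l app.kubernetes.io/component=interceptor -o jsonpath='{.items[*].spec.template.spec.hostNetwork} {.items[*].spec.template.spec.hostPID} {.items[*].spec.template.spec.dnsPolicy}'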

Checking pods

If a particular component raises concerns, delve deeper into its associated pods.

For pods by component:

kubectl -n soverenio describe pod -l app.kubernetes.io/component=[digger|interceptor|kafka|prometheus-agent|detection-tool]
kubectl -n soverenio-dar-sensor describe pod -l app.kubernetes.io/component=[crawler|kafka|prometheus-agent|detection-tool]

To view all the Sensor's pods:

kubectl -n soverenio describe pod -l app.kubernetes.io/name=soveren-agent
kubectl -n soverenio-dar-sensor describe pod -l app.kubernetes.io/name=soveren-dar-sensor
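
To surface only the pods that are not healthy, you can combine the same labels with a field selector:

kubectl -n soverenio get pods -l app.kubernetes.io/name=soveren-agent --field-selector=status.phase!=Running
kubectl -n soverenio-dar-sensor get pods -l app.kubernetes.io/name=soveren-dar-sensor --field-selector=status.phase!=Running

Note that this filters only on the pod phase, so a pod stuck in CrashLoopBackOff may still report Running; the READY column in the regular get pods output is the better signal for that case.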

Checking logs

If a specific component seems problematic, consider inspecting its logs.

To view logs by component:

kubectl -n soverenio logs --tail=-1 -l app.kubernetes.io/component=[digger|interceptor|kafka|prometheus-agent|detection-tool]
kubectl -n soverenio-dar-sensor logs --tail=-1 -l app.kubernetes.io/component=[crawler|kafka|prometheus-agent|detection-tool]

To investigate logs from individual pods of the sensor components:

kubectl -n soverenio get pod -l app.kubernetes.io/component=[digger|interceptor|kafka|prometheus-agent|detection-tool]
kubectl -n soverenio-dar-sensor get pod -l app.kubernetes.io/component=[crawler|kafka|prometheus-agent|detection-tool]

This returns the names of the pods associated with the component. You can then retrieve logs from a specific pod by substituting its name for <POD_NAME>:

kubectl -n soverenio logs --tail=-1 <POD_NAME>
kubectl -n soverenio-dar-sensor logs --tail=-1 <POD_NAME>
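
If a pod has restarted, for example after a crash, its current logs may not contain the failure. The standard --previous flag retrieves logs from the prior container instance:

kubectl -n soverenio logs --tail=-1 --previous <POD_NAME>
kubectl -n soverenio-dar-sensor logs --tail=-1 --previous <POD_NAME>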

To increase log verbosity, you may need to adjust the log level of the component in question.