Troubleshooting the Sensor

You've deployed the Soveren Sensor and everything should be working properly. However, if you don't see any data in the Soveren app, or something seems amiss, here are several troubleshooting steps you can follow.

Verifying the deployment

Ensure that you're running the latest version of the Soveren Sensor. You can verify this with the following command:

helm search repo soveren

You can then confirm with our customer success team that the versions listed in the output are current.
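The comparison itself is a simple string check. A minimal sketch, where both version strings are hypothetical placeholders; in practice, take them from the `helm search repo soveren` output and from `helm -n soverenio list`:

```shell
# Hedged sketch: compare the chart version available in the repo with the
# deployed one. Both values below are placeholders, not real chart versions.
repo_version="1.20.0"      # e.g. the CHART VERSION column of `helm search repo soveren`
deployed_version="1.19.2"  # e.g. the CHART column of `helm -n soverenio list`

if [ "$repo_version" != "$deployed_version" ]; then
  echo "Sensor chart is out of date: $deployed_version installed, $repo_version available"
else
  echo "Sensor chart is up to date ($deployed_version)"
fi
```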

Refer to our current helm chart for all values that can be tuned for the Soveren Sensor, and for the current image and component versions.

Next, it's advisable to confirm that all Soveren Sensor components have been successfully deployed:

helm -n soverenio list

In this command, soverenio is the namespace where you've deployed the Sensor.

Ensure you observe all of the following:

  • interceptor: There should be several instances, equal to the number of nodes in your cluster. Interceptors collect the traffic from nodes and relay it to kafka.
  • kafka: Only one instance should exist, which receives traffic from the interceptors.
  • digger: One instance, reads data from kafka, sends it to the detection-tool, collects results, and forwards relevant metadata to the Soveren Cloud.
  • detection-tool: A single instance, performs the bulk of the work detecting sensitive data.
  • prometheus-agent: A single instance, monitors basic metrics from all other Sensor components.
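As a quick sanity check on the first item, the interceptor pod count should match the node count. A minimal sketch, using placeholder pod names in place of live `kubectl get pods` output:

```shell
# Hedged sketch: one interceptor pod is expected per cluster node.
# In practice: node_count=$(kubectl get nodes --no-headers | wc -l)
node_count=3

# The pod names below are placeholders for the output of:
#   kubectl -n soverenio get pods --no-headers | grep interceptor
interceptor_pods=$(printf '%s\n' \
  "soveren-agent-interceptor-abc12" \
  "soveren-agent-interceptor-def34" \
  "soveren-agent-interceptor-ghi56" | wc -l)

if [ "$interceptor_pods" -eq "$node_count" ]; then
  echo "OK: $interceptor_pods interceptor pods for $node_count nodes"
else
  echo "Mismatch: $interceptor_pods interceptor pods for $node_count nodes"
fi
```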

Additionally, ensure that all custom values specified in your values.yaml have been incorporated into the deployment:

helm -n soverenio get values soveren-agent | grep -v token

These commands offer a basic consistency check of the Soveren Sensor setup.

Be prepared to share the output of these commands when discussing issues with our customer success team.
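To make that easier, the checks above can be collected into a single file. A sketch, assuming the release name soveren-agent and the namespace soverenio used elsewhere in this guide:

```shell
# Sketch: gather the basic diagnostic output into one file to attach when
# contacting customer success. Release name and namespace are assumptions;
# adjust them to match your deployment.
{
  helm search repo soveren
  helm -n soverenio list
  helm -n soverenio get values soveren-agent | grep -v token
} > soveren-diagnostics.txt 2>&1
```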

Verifying individual components

If the basic setup appears correct but issues persist, consider inspecting individual components.

Checking Deployments and DaemonSet

First, review the configurations of each component:

kubectl -n soverenio describe deployment -l[digger|kafka|prometheus-agent|detection-tool]

Run the command above once for each of digger, kafka, prometheus-agent, and detection-tool. These components are Kubernetes Deployments, and the command prints detailed information about each.
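The per-component repetition can be scripted as a loop. A sketch; the label selector after -l is elided in this guide, so the kubectl line is left commented with a `<label>` placeholder to fill in with the selector your deployment actually uses:

```shell
# Sketch: run the describe command once per Deployment component.
# "<label>" is a placeholder, not a real selector key.
for component in digger kafka prometheus-agent detection-tool; do
  echo "== $component =="
  # kubectl -n soverenio describe deployment -l "<label>=$component"
done
```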

Since Interceptors function as a Kubernetes DaemonSet, they require a different command:

kubectl -n soverenio describe daemonset -l

Permissions required by Interceptors

If issues arise specifically with the Interceptors, such as difficulties transitioning to running mode, confirm they possess the requisite permissions:

kubectl -n soverenio get daemonset -l -o yaml

Each Interceptor pod contains two containers: rpcapd, which performs the actual traffic capture, and interceptor, which processes the captured data. Examine the securityContext for both interceptor and rpcapd; it should include:

  privileged: true

For the interceptor pod spec, ensure the output includes:

dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
hostPID: true

Ensure the securityContext for Interceptors is properly set

Interceptors listen on the host's virtual interfaces, so they must run in privileged mode; otherwise they cannot capture traffic.
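To pull just these fields out of the DaemonSet YAML, a simple filter can help. The heredoc below is a trimmed, illustrative sample of a correctly configured spec (container names and layout are assumptions); in practice, pipe the real `kubectl -n soverenio get daemonset ... -o yaml` output through the same grep:

```shell
# Sketch: filter the security-related fields out of the DaemonSet spec.
# Replace the sample heredoc with live kubectl output.
cat <<'EOF' | grep -E 'privileged|hostNetwork|hostPID|dnsPolicy'
spec:
  hostNetwork: true
  hostPID: true
  dnsPolicy: ClusterFirstWithHostNet
  containers:
    - name: rpcapd
      securityContext:
        privileged: true
    - name: interceptor
      securityContext:
        privileged: true
EOF
```

All five lines (hostNetwork, hostPID, dnsPolicy, and privileged for each container) should appear; a missing line points at the misconfigured setting.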

Checking pods

If a particular component raises concerns, delve deeper into its associated pods.

For pods by component:

kubectl -n soverenio describe pod -l[digger|interceptor|kafka|prometheus-agent|detection-tool]

Repeat the command above for digger, interceptor, kafka, prometheus-agent and detection-tool.

To view all the Sensor's pods:

kubectl -n soverenio describe pod -l

Checking logs

If a specific component seems problematic, consider inspecting its logs.

To view logs by component:

kubectl -n soverenio logs -l[digger|interceptor|kafka|prometheus-agent|detection-tool]

This command should be executed individually for digger, interceptor, kafka, prometheus-agent and detection-tool.

To investigate logs from individual pods, such as the Interceptors:

kubectl -n soverenio get pod -l

This lists the pod names associated with the Interceptors. You can then retrieve logs from a specific pod:

kubectl -n soverenio logs <POD_NAME>
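Fetching logs from every Interceptor pod can be combined into one loop. A sketch, with placeholder pod names standing in for the list you would generate from the get pod command above; the kubectl line is commented out so the placeholders are not queried:

```shell
# Sketch: fetch recent logs from each Interceptor pod.
# The pod names below are hypothetical placeholders.
pods="pod/soveren-agent-interceptor-abc12
pod/soveren-agent-interceptor-def34"

for pod in $pods; do
  echo "== $pod =="
  # kubectl -n soverenio logs "$pod" --tail=50
done
```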

To increase log verbosity, you may need to adjust the log level of the component in question.