Configuring the Sensor¶
Common configuration¶
Refer to the separate guide for security-related configuration options.
We use Helm to manage the deployment of Soveren Sensors. Refer to our Helm chart for all values that can be tuned for the Soveren Sensor.
To customize values sent to your Soveren Sensor, you need to create the `values.yaml` file in the folder that you use for custom Helm configuration.
Don't forget to run a `helm upgrade` command after you've updated the `values.yaml` file, providing `-f path_to/values.yaml` as a command-line option (see the updating guide).
Only use `values.yaml` to override specific values!
Avoid using a complete copy of our `values.yaml` from the repository. This can lead to numerous issues in production that are difficult and time-consuming to resolve.
Sensor token¶
You should use `values.yaml` to set the token for the sensor.
The token value is used to send metadata to the Soveren Cloud and to check for over-the-air updates of the detection model.
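For illustration, a minimal sketch of what the override might look like, assuming the chart accepts the token as a top-level value; check the chart's own `values.yaml` for the exact key:

```yaml
# Hypothetical values.yaml override; verify the exact key against the Soveren Helm chart
token: "<your-sensor-token>"   # token issued when the sensor was created in the Soveren app
```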
Use unique tokens for different deployments
If you're managing multiple Soveren deployments, please create unique tokens for each one. Using the same token across different deployments can result in data being mixed and lead to interpretation errors that are difficult to track.
You can also use Kubernetes secrets or store the token value in HashiCorp Vault and retrieve it at runtime using various techniques. Check the Securing Sensors page for instructions on how to do this.
Custom volumes¶
You can mount custom volumes, e.g., for secrets or configuration. To do this, define `volumeMounts` and `volumes` for each pod.
Example of how you can set up custom volume mounts
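A minimal sketch, assuming the chart exposes `volumeMounts` and `volumes` per component; the `digger` component and the secret name below are purely illustrative:

```yaml
# Hypothetical values.yaml fragment mounting a secret into the digger pod
digger:
  volumeMounts:
    - name: extra-secret
      mountPath: /etc/soveren/secret   # where the files become visible in the container
      readOnly: true
  volumes:
    - name: extra-secret
      secret:
        secretName: my-soveren-secret  # an existing Kubernetes secret in the same namespace
```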
Binding components to nodes¶
The Soveren Sensor consists of two types of components:
- Interceptors, which are distributed to each node via a DaemonSet. Interceptors are exclusively used by the Data-in-motion (DIM) sensors.
- Components instantiated only once per cluster via Deployments; these include `digger`, `crawler`, `kafka`, `detectionTool`, and `prometheusAgent`. These can be thought of as the centralized components.
The centralized components consume a relatively large yet steady amount of resources. Their resource consumption is not significantly affected by variations in traffic volume and patterns. In contrast, the resource requirements for Interceptors can vary depending on traffic.
Given these considerations, it may be beneficial to isolate the centralized components on specific nodes. For example, you might choose nodes that are more focused on infrastructure monitoring rather than on business processes. Alternatively, you could select nodes that offer more resources than the average node.
If you know exactly which nodes host the workloads you wish to monitor with Soveren, you can also limit the deployment of Interceptors to those specific nodes.
First, you'll need to label the nodes that Soveren components will utilize, for example with `kubectl label nodes <node-name> soveren=true` (the label name and value are up to you).
After labeling, you have two options for directing the deployment of components: using `nodeSelector` or `affinity`.
Option 1: using `nodeSelector`
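As a sketch only, assuming the nodes were labeled with the hypothetical `soveren=true` label and that the chart exposes a per-component `nodeSelector` value:

```yaml
# Hypothetical values.yaml fragment; component and label names are illustrative
digger:
  nodeSelector:
    soveren: "true"
kafka:
  nodeSelector:
    soveren: "true"
```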
Option 2: using `affinity`
The `affinity` option is conceptually similar to `nodeSelector` but allows for a broader set of constraints.
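A comparable sketch using standard Kubernetes node affinity; the key path under each component is an assumption:

```yaml
# Hypothetical values.yaml fragment; the affinity block itself is standard Kubernetes syntax
digger:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: soveren
                operator: In
                values: ["true"]
```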
Resources¶
We do not recommend changing the `requests` values. They are calibrated to ensure the minimum functionality required by the component with the allocated resources.
On the other hand, the `limits` for different containers can vary significantly and depend on the volume of collected data. There is no one-size-fits-all approach to determining them, but it's crucial to monitor actual usage and observe how quickly the data map is constructed by the product. The general trade-off: the more resources you allocate, the quicker the map is built.
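For illustration, raising the limits for one component might look like this, assuming the chart exposes a per-component `resources` block (the key names are not guaranteed):

```yaml
# Hypothetical values.yaml fragment; keep requests at the chart defaults, raise only limits
detectionTool:
  resources:
    limits:
      cpu: 3000m       # more CPU if the data map builds too slowly
      memory: 4096Mi   # more memory for larger traffic volumes
```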
It's important to note that the Soveren Sensor does not persist any data. It is normal for components to restart and virtual storage to be flushed. The `ephemeral-storage` values are set to prevent the overuse of virtual disk space.
Detailed breakdown of resources:
| Container | CPU requests | CPU limits | MEM requests | MEM limits | Ephemeral storage limits |
|---|---|---|---|---|---|
| interceptor | 50m | 1000m | 64Mi | 1536Mi | 100Mi |
| digger | 100m | 1500m | 100Mi | 768Mi | 100Mi |
| detection-tool | 200m | 2200m | 2252Mi | 2764Mi | 200Mi |
| kafka | 100m | 400m | 650Mi | 1024Mi | 10Gi |
| kafka-exporter | 100m | 400m | 650Mi | 1024Mi | 10Gi |
| prometheus | 75m | 75m | 192Mi | 400Mi | 100Mi |
Pods containing `interceptor` are deployed as a DaemonSet. To estimate the required resources, you will need to multiply the values by the number of nodes.
| Container | CPU requests | CPU limits | MEM requests | MEM limits | Ephemeral storage limits |
|---|---|---|---|---|---|
| crawler | 100m | 1500m | 100Mi | 768Mi | 100Mi |
| detection-tool | 200m | 2200m | 2252Mi | 4000Mi | 200Mi |
| kafka | 100m | 400m | 650Mi | 1024Mi | 10Gi |
| kafka-exporter | 100m | 400m | 650Mi | 1024Mi | 10Gi |
| prometheus | 75m | 75m | 192Mi | 400Mi | 100Mi |
Kafka¶
In our testing, Kafka was found to be somewhat heap-hungry. That's why we limited the heap usage separately from the main memory usage limits.
Default heap settings for Kafka
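A sketch of what such a setting might look like, assuming the chart passes Kafka heap options through a dedicated value; the key name and heap size below are assumptions:

```yaml
# Hypothetical values.yaml fragment; check the chart for the real Kafka heap setting
kafka:
  heapOpts: "-Xmx512m -Xms512m"   # keep in proportion to the kafka container's memory limit
```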
The rule of thumb is this: if you increase the memory `limits` value for the `kafka` container ×N-fold, also increase the heap ×N-fold.
The Soveren Sensor is designed to avoid persisting any information during runtime or between restarts. All containers are allocated a certain amount of `ephemeral-storage` to limit potential disk usage. Kafka is a significant consumer of `ephemeral-storage` as it temporarily holds collected information before further processing by other components.
There may be scenarios where you'd want to use a `persistentVolume` for Kafka. For instance, the disk space might be shared among various workloads running on the same node, and your cloud provider may not differentiate between persistent and ephemeral storage usage.
Enabling persistent volume for Kafka
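A minimal sketch, assuming the chart exposes a `persistentVolume` block for Kafka; the key names are illustrative:

```yaml
# Hypothetical values.yaml fragment enabling a persistent volume for Kafka
kafka:
  persistentVolume:
    enabled: true
    size: 10Gi              # matches the default ephemeral-storage limit
    storageClass: standard  # any storage class available in your cluster
```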
Local metrics¶
You can collect metrics from the Soveren Sensor locally and create your own dashboards.
Collecting metrics in your own Prometheus instance
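As an illustration only, a minimal scrape job for a self-managed Prometheus; the actual service names, ports, and metric paths exposed by the Sensor components must be taken from the chart:

```yaml
# Hypothetical Prometheus scrape configuration; the target is a placeholder
scrape_configs:
  - job_name: "soveren-sensor"
    static_configs:
      - targets: ["<sensor-metrics-service>:<metrics-port>"]
```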
Log level¶
By default, the log levels for all Soveren Sensor components are set to `error`. To adjust the verbosity of the logs according to your monitoring needs, you can specify different log levels for individual components.
You can adjust the log level for all components except Kafka.
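For illustration, a hypothetical per-component override; the exact key names should be checked against the chart's `values.yaml`:

```yaml
# Hypothetical values.yaml fragment; component and key names are illustrative
digger:
  logLevel: info
detectionTool:
  logLevel: warn
```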
DIM configuration¶
Multi-cluster deployment¶
For each Kubernetes cluster, you'll need a separate DIM sensor. When deploying DIM sensors across multiple clusters, they will be identified by the tokens and names assigned during their creation.
Use a separate sensor for each cluster
Sometimes you may want to automate the naming of your clusters in Soveren during deployment.
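A minimal sketch of such an override, assuming the chart exposes a cluster-name value; the key name here is illustrative:

```yaml
# Hypothetical values.yaml fragment; the actual key for the cluster name may differ
clusterName: "prod-eu-1"
```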
Without those settings, Soveren will default to using the Sensor's name defined in the Soveren app.
Namespace filtering¶
At times, you may want to limit the Soveren Sensor to specific namespaces for monitoring. You can achieve this by either specifying allowed namespaces (the "allow list") or by excluding particular ones (the "exclude list").
The syntax is as follows:
- If nothing is specified, all namespaces will be monitored.
- An asterisk (*) represents "everything."
- `action: allow` includes the specified namespace for monitoring.
- `action: deny` excludes the specified namespace from monitoring.
Filtering out namespaces from monitoring
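For illustration, a hypothetical filter list; the surrounding key path and the name field are assumptions, while the `action` values follow the documented syntax:

```yaml
# Hypothetical values.yaml fragment filtering namespaces
namespaces:
  - action: deny
    name: "kube-system"      # system namespace excluded from monitoring
  - action: deny
    name: "devspace-[1-9]"   # glob patterns are allowed
  - action: allow
    name: "*"                # keep monitoring all other namespaces
```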
When defining names, you can use wildcards and globs such as `foo*`, `/dev/sd?`, and `devspace-[1-9]`, as defined in the Go path package.
The Sensor's default policy is to work only with explicitly mentioned namespaces, ignoring all others.
End with `allow *` if you have any `deny` definitions
If you've included `deny` definitions in your filter list and want to monitor all other namespaces, make sure to conclude the list with an `action: allow` entry for `*` (everything). Failing to do so could result in the Sensor not monitoring any namespaces if only `deny` definitions are present.
Service mesh and encryption¶
Soveren can monitor connections encrypted through service meshes like Linkerd or Istio.
The Sensor will automatically detect if a service mesh is deployed on the node. Fine-tuning is only necessary if your mesh implementation uses non-standard ports.
Example of non-standard Linkerd port
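As a sketch only: if Linkerd's proxy listened on a non-default port, the override might look something like this; the key structure is an assumption, not the chart's documented schema:

```yaml
# Hypothetical values.yaml fragment; key names and the port value are illustrative
interceptor:
  linkerd:
    inboundPort: 5143   # non-standard proxy port used in your mesh
```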
TLS interception¶
Soveren can intercept encrypted traffic from applications running in containers that use the OpenSSL library.
Enabling application-level TLS interception
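A minimal sketch, assuming the feature is toggled through a single flag on the interceptor; the actual flag name may differ:

```yaml
# Hypothetical values.yaml fragment enabling application-level TLS interception
interceptor:
  tls:
    enabled: true
```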
TLS interception is a highly experimental feature and requires more resources than intercepting non-encrypted traffic. Therefore, it is disabled by default.
updateStrategy¶
You can adjust the update strategy for Interceptors.
Using `updateStrategy` for Interceptors (DaemonSet)
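For illustration, the block below uses standard Kubernetes DaemonSet `updateStrategy` fields; the key path under the interceptor component is an assumption:

```yaml
# Hypothetical values.yaml fragment; the updateStrategy body is standard Kubernetes syntax
interceptor:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # replace Interceptors one node at a time
```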
DAR configuration¶
Deployment¶
We recommend creating a separate sensor for each type of asset that you want to monitor. For example, one sensor for S3 buckets, one for Kafka, and one for each database type.
We recommend using a separate DAR sensor for each asset type (S3, or Kafka, or a database variant)
You can also have multiple sensors covering the same type of asset, for performance reasons. While it is possible to use one sensor for all types, this approach can complicate the resolution of potential performance bottlenecks and other issues.
Instead of passing the credentials directly, you can use secrets to pass the whole connection string or configuration section.
S3 buckets¶
To enable S3 bucket discovery and scanning, you must provide the sensor with credentials for access. This can be done either directly by providing an access key or by configuring a specific role that the sensor will assume at runtime.
Soveren supports various S3 implementations, such as AWS and MinIO. For each S3 implementation, ensure you create and configure a separate DAR sensor.
A separate DAR sensor must be deployed for each S3 implementation, such as AWS or MinIO
You can also use secrets to pass the configuration to the sensor.
S3 scanning configuration
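As a sketch only, a hypothetical configuration fragment; the key structure, field names, and credential handling are illustrative and must be checked against the chart (credentials can also come from a secret, as noted above):

```yaml
# Hypothetical DAR sensor values.yaml fragment for S3 scanning
crawler:
  s3:
    - name: aws-prod                         # how the asset appears in Soveren
      endpoint: "https://s3.amazonaws.com"   # or your MinIO endpoint
      region: "us-east-1"
      accessKeyId: "<access-key-id>"
      secretAccessKey: "<secret-access-key>"
```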
The user must be granted the `s3:ListAllMyBuckets` permission, as well as the following minimal `Actions` on all buckets that need to be monitored:
- `s3:GetBucketPolicyStatus`
- `s3:GetBucketPolicy`
- `s3:GetBucketAcl`
- `s3:GetObject`
- `s3:GetEncryptionConfiguration`
- `s3:GetBucketTagging`
- `s3:ListBucket`
Kafka¶
To enable Kafka scanning, you must provide the sensor with the instance name and address, as well as the necessary access credentials.
You can also use secrets to pass the configuration to the sensor.
Kafka scanning configuration
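A comparable sketch for Kafka; all key names are illustrative:

```yaml
# Hypothetical DAR sensor values.yaml fragment for Kafka scanning
crawler:
  kafka:
    - name: payments-kafka
      brokers: ["kafka-0.kafka.svc:9092"]
      username: "<username>"   # or reference a Kubernetes secret
      password: "<password>"
```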
SQL databases¶
To enable database scanning, you must provide the sensor with the instance name and the connection string containing necessary access credentials.
Currently we support PostgreSQL, SQL Server, and MySQL.
PostgreSQL¶
PostgreSQL configuration
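A sketch with an illustrative connection string in the standard PostgreSQL URI format; the surrounding key names are assumptions:

```yaml
# Hypothetical DAR sensor values.yaml fragment for PostgreSQL scanning
crawler:
  postgresql:
    - name: billing-db
      connectionString: "postgres://soveren_read_only_user:<password>@pg.internal:5432/billing?sslmode=require"
```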
The user must have `SELECT` permissions on the following tables:
- `information_schema.role_table_grants`
- `pg_catalog.pg_stat_ssl`
- `pg_catalog.pg_database`
- `pg_catalog.pg_roles`
- `pg_catalog.pg_auth_members`
- `pg_catalog.pg_tables`
- `pg_catalog.pg_class`
- `pg_catalog.pg_namespace`
- `pg_catalog.pg_attribute`
- (optional) `pg_catalog.pg_hba_file_rules`
In addition, the user must also have `SELECT` permissions on any databases, schemas, or tables to be scanned for sensitive data.
SQL Server¶
SQL Server configuration
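A comparable sketch for SQL Server; the connection-string format and key names are illustrative:

```yaml
# Hypothetical DAR sensor values.yaml fragment for SQL Server scanning
crawler:
  mssql:
    - name: crm-db
      connectionString: "sqlserver://soveren_read_only_user:<password>@mssql.internal:1433?database=crm"
```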
The user must have `SELECT` permissions on the following tables:
- `information_schema.COLUMNS`
- `sys.dm_exec_connections`
- `sys.tables`
- `sys.schemas`
- `sys.dm_db_partition_stats`
In addition, the user must also have `SELECT` permissions on any databases, schemas, or tables to be scanned for sensitive data.
MySQL¶
MySQL configuration
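A comparable sketch for MySQL; the DSN format and key names are illustrative:

```yaml
# Hypothetical DAR sensor values.yaml fragment for MySQL scanning
crawler:
  mysql:
    - name: orders-db
      connectionString: "soveren_read_only_user:<password>@tcp(mysql.internal:3306)/orders"
```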
The `soveren_read_only_user` must have `SELECT` permissions on any databases and tables to be scanned for sensitive data, as well as on some system tables.
This is how you should configure permissions for the `soveren_read_only_user`
NoSQL databases¶
To enable database scanning, you must provide the sensor with the instance name and the connection string containing necessary access credentials.
Currently we support MongoDB.
MongoDB¶
MongoDB configuration
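A comparable sketch for MongoDB using a standard connection URI; key names are illustrative:

```yaml
# Hypothetical DAR sensor values.yaml fragment for MongoDB scanning
crawler:
  mongodb:
    - name: sessions-db
      connectionString: "mongodb://soveren_read_only_user:<password>@mongo.internal:27017/?authSource=admin"
```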
The user must have the following permissions:
- `getCmdLineOpts` — globally;
- `listCollections` — on databases to be scanned for sensitive data;
- `find`, `collStats` — on databases and collections to be scanned for sensitive data.