Kaspersky Container Security

Solution installation

Kaspersky Container Security components are supplied as images in the Kaspersky Container Security manufacturer registry and deployed as containers.

Installation of the Kaspersky Container Security platform consists of the following steps:

  1. Installation of the Basic business logic module and the Scanner components.
  2. First launch of the Management Console.
  3. Configuration of the agent groups and agent deployment on the controlled cluster nodes.

After installation, you should prepare the solution for operation.

In this Help section

Installing the basic business logic module and scanner

First launch of the Management Console

Viewing and accepting the End User License Agreement

Checking solution functionality

Agent deployment

Viewing and editing agent groups

Configuring a proxy server

Connecting to external data storage resources

Installing private fixes

Page top
[Topic 276636]

Installing the basic business logic module and scanner

Before the solution installation, you must check the data integrity of the prepared Helm Chart package.

To check the data integrity:

  1. Download the archive with the prepared Helm Chart package and the hash file, and go to the download directory.
  2. Run the command:

    sha256sum -c kcs-2.0.0.tgz.sha

    The data integrity is confirmed if the following message is displayed:

    kcs-2.0.0.tgz: OK
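The integrity check above can be rehearsed with a locally created file. This is a minimal sketch of the sha256sum -c workflow; kcs-demo.tgz is a stand-in file, not the real distribution archive:

```shell
# A minimal sketch of the sha256sum -c workflow, using a locally created
# file as a stand-in for the real kcs-2.0.0.tgz archive.
printf 'demo chart contents' > kcs-demo.tgz
sha256sum kcs-demo.tgz > kcs-demo.tgz.sha
sha256sum -c kcs-demo.tgz.sha
```

If the file is modified after the hash is recorded, the same check reports FAILED instead of OK.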

Before starting the installation (including on AWS EKS or Microsoft Azure), pay attention to the storageClass and ingressClass settings in the default and ingress.kcs blocks of the configuration file. These settings are cluster-specific and, if necessary, must be changed to match your infrastructure. For example, the following variant is used for Azure:

default:
  storageClass: azurefile
  networkPolicies:
    ingressControllerNamespaces:
      - app-routing-system

ingress:
  kcs:
    ingressClass: webapprouting.kubernetes.azure.com

To install the basic business logic module and the scanner of Kaspersky Container Security,

after preparing the configuration file, run the solution installation:

cd kcs/

helm upgrade --install kcs . \
  --create-namespace \
  --namespace kcs \
  --values values.yaml \
  --set default.domain="example.com" \
  --set default.networkPolicies.ingressControllerNamespaces="{ingress-nginx}" \
  --set secret.infracreds.envs.POSTGRES_USER="user" \
  --set-string secret.infracreds.envs.POSTGRES_PASSWORD="pass" \
  --set secret.infracreds.envs.MINIO_ROOT_USER="user" \
  --set-string secret.infracreds.envs.MINIO_ROOT_PASSWORD="password" \
  --set-string secret.infracreds.envs.CLICKHOUSE_ADMIN_PASSWORD="pass" \
  --set secret.infracreds.envs.MCHD_USER="user" \
  --set-string secret.infracreds.envs.MCHD_PASS="pass" \
  --set pullSecret.kcs-pullsecret.username="user" \
  --set-string pullSecret.kcs-pullsecret.password="pass"

After installation, the solution components are deployed.

Also, when installing the Kaspersky Container Security Middleware module and scanner, you can configure the secure transfer of passwords, tokens, and secrets. This is achieved using a HashiCorp Vault storage, which you can configure in the values.yaml file and deploy when the Helm Chart package is started.

After installation is complete, a record of the installation command remains in the command shell history. You can open the command history file and delete this record, or disable command history logging in the command shell before installation.

The Management Console will be available at the address specified for the API_URL parameter in the envs subsection of the ConfigMap object:

http://${DOMAIN}
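For example, with the default.domain value from the installation command earlier in this section, the address can be composed as follows (a sketch; example.com is the placeholder value from that command):

```shell
# Sketch: the Management Console address composed from the domain value
# passed via --set default.domain during installation.
DOMAIN=example.com
API_URL="http://${DOMAIN}"
echo "$API_URL"
```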

Page top
[Topic 292077]

First launch of the Management Console

To start the Kaspersky Container Security Management Console:

  1. In your browser, navigate to the address specified for the Management Console during the Server installation.

    The authorization page opens.

  2. Enter your user name and password and click the Login button.

    During the installation of the solution, the same value is assigned to both the user name and the password: admin. You can change the user name and password after launching the Management Console.

    After 3 unsuccessful password entry attempts, the user is temporarily blocked. The default block duration is 1 minute.

  3. Following the request, change the current password for the user account: enter a new password, confirm it, and click the Change button.

    Passwords have the following requirements:

    • The password must contain numerals, special characters, and uppercase and lowercase letters.
    • The minimum password length is 6 characters, and the maximum password length is 72 characters.
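The requirements above can be expressed as a quick shell check. This is a sketch for illustration, not the solution's actual validation logic:

```shell
# Sketch: check a candidate password against the stated requirements
# (numerals, special characters, upper- and lowercase letters, 6-72 characters).
check_password() {
  p="$1"
  [ "${#p}" -ge 6 ] && [ "${#p}" -le 72 ] || return 1
  case "$p" in *[0-9]*) ;; *) return 1;; esac          # at least one numeral
  case "$p" in *[a-z]*) ;; *) return 1;; esac          # lowercase letter
  case "$p" in *[A-Z]*) ;; *) return 1;; esac          # uppercase letter
  case "$p" in *[!a-zA-Z0-9]*) ;; *) return 1;; esac   # special character
  return 0
}
check_password 'Secur3!pass' && echo "ok"
```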

The main page of the Management Console opens.

By default, the duration of a logged-in user session in the Management Console is 9 hours. In the Settings → Authentication section, you can set your own session duration, from a minimum of 1 hour to a maximum of 168 hours. After this time expires, the session ends.

You can change the connection settings in the Settings → Authentication section.

Page top
[Topic 250380]

Viewing and accepting the End User License Agreement

When you launch the Management Console in a browser for the first time, Kaspersky Container Security prompts you to read the End User License Agreement between you and Kaspersky. To continue working with the solution, confirm that you have fully read and accept the terms of the End User License Agreement for Kaspersky Container Security.

To confirm acceptance of the terms of the End User License Agreement,

at the bottom of the End User License Agreement window, click the Accept button.

The authorization page opens for launching the Management Console.

After installing a new version of the solution, accept the End User License Agreement again.

Page top
[Topic 255780]

Checking solution functionality

After installing Kaspersky Container Security and starting the Management Console, you can make sure that the solution is detecting security problems and protecting containerized objects.

To check the functionality of Kaspersky Container Security:

  1. Activate the solution using an activation code or key file.
  2. Configure integration with image registries. Integration with a single registry is sufficient to check the functionality.
  3. If necessary, configure the settings of the scanner policy that is created by default after installation of the solution.
  4. Add an image for scanning and make sure that the scan task is sent for processing.
  5. After the scan is complete, go to the page with detailed information about the image scan results.

Scanning an image and receiving valid results confirms that Kaspersky Container Security is operating correctly. After this, you can further configure the solution settings.

Page top
[Topic 266166]

Agent deployment

You should install Agents on all nodes of the cluster that you want to protect.

A separate group of agents is installed on each cluster.

To deploy agents in the cluster:

  1. In the main menu, go to the Components → Agents section.
  2. In the work pane, click the Add agent group button.
  3. On the General tab:
    1. Fill in the fields in the form.
      • Enter the group name. For convenient agent management, we recommend naming the group after the cluster whose nodes the agents will be deployed on.
      • If required, enter a description of the agent group.
      • Select the orchestrator to use.
      • Specify the namespace name.
    2. In the KCS registry section, enter the web address of the registry where the images used to install agents are located. To access the registry, you must specify the correct user name and password.
    3. Under Linked SIEM, select the SIEM system from the drop-down list.

      To link an agent group in Kaspersky Container Security, you must create and configure at least one integration with a SIEM system.
      One agent group can be linked with only one SIEM system.

      For each SIEM system integration, the drop-down list indicates the connection status – Success, Warning, or Error.

  4. On the Node monitoring tab, use the Disable/Enable toggles to start monitoring and analyzing the status of the network, processes inside containers, and file threat protection by means of the following settings:
    • Network connections monitoring. The status of network connections is monitored with traffic capture devices (network monitors) and eBPF modules. This process considers applicable runtime policies and container runtime profiles.
    • Container processes monitoring. Container processes are monitored using eBPF programs based on applicable runtime policy rules and container runtime profile rules.
    • File threat protection. To track anti-malware database updates, specify one of the following values:
      • Anti-malware database update URL: the web address of the Kaspersky Container Security update service.
      • Anti-malware database update proxy: the HTTP proxy for a cloud or local update server.

      If the kcs-updates container is used to update anti-malware databases, the URL of the database update tool must be specified as follows: <domain>/kuu/updates (for example, https://kcs.company.com/kuu/updates).

      By default, File Threat Protection databases are updated from Kaspersky cloud servers.

    • File operations. The solution tracks file operations using eBPF modules based on applicable runtime policies and container runtime profiles.

      Regardless of the mode specified in the runtime policy, only the Audit mode is supported for file operations. If the Enforce mode is specified in the applicable runtime policy, file operations are performed in Audit mode.

    Monitoring steps that are not needed can be disabled to avoid unnecessary load on the nodes.

  5. Click Save.
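The anti-malware database update URL format for the kcs-updates container described in step 4 can be composed as follows (a sketch; kcs.company.com is the placeholder domain from the example above):

```shell
# Sketch: the anti-malware database update URL when the kcs-updates
# container is used (the domain is a placeholder).
DOMAIN=kcs.company.com
UPDATE_URL="https://${DOMAIN}/kuu/updates"
echo "$UPDATE_URL"
```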

In the workspace, the Deployment data tab displays the following data necessary for deploying agents on the cluster:

  • The automatically generated deployment token is the identifier that the agent uses to connect to the server. You can copy the token by clicking the copy icon (Copy icon.) next to the Deployment token field.
  • Instruction for deploying agents on a cluster. You can copy the instruction from the Configuration field by clicking the copy icon (Copy icon), or download the instruction as a file in .YAML format.

    You can use this instruction to deploy agents on a cluster. For example:

    kubectl apply -f <file> -n <namespace>

    Following the application of the instruction, the agent is deployed on all worker nodes of the cluster.

The solution automatically updates the agent deployment instruction if you change the following parameters:

  • TLS certificates of the solution
  • URL, user name, and password for downloading the kube-agent and node-agent images
  • The linked SIEM system
  • Settings in the Node monitoring section

You must copy or download the updated instruction in a .YAML file again, and then apply it by using the kubectl apply -f <file> -n <namespace> command. Otherwise, changes of these parameters are not applied to deployed agents.

Page top
[Topic 294539]

Viewing and editing agent groups

The table under Components → Agents displays the created and deployed agent groups. The following information is provided for each of these groups:

  • Agent group name
  • Number of connected agents in the group
  • Orchestrator
  • Enabled node monitoring activities
  • Linked SIEM

You can filter agent groups by connection status (All, Connected, Disconnected, Pending) using the buttons above the table.

By clicking on the deployment icon (Right arrow icon.), you can expand each agent group in the table to view the following agent details:

  • The name of the agent and its connection status.
  • The role of the node where the agent is deployed (primary or worker)
  • The name of the pod with which the agent is associated.
  • Node monitoring actions (Container processes, Network connections, File Threat Protection, and File operations).
  • SIEM status
  • Date and time when the agent last connected

By clicking the agent name link, you can expand the sidebar to view agent status information.

To edit the agent group settings:

  1. Under Components → Agents, in the table with the list of agent groups, click the link in the agent group name.
  2. In the window that opens, edit the group settings.
  3. Click Save.

Page top
[Topic 283087]

Configuring a proxy server

In version 2.0, Kaspersky Container Security can proxy requests from private corporate networks to the external environment. The settings for connection through a proxy server are configured using the following environment variables in the Helm Chart package, which is included in the solution distribution kit:

  • HTTP_PROXY – proxy server for HTTP requests.
  • HTTPS_PROXY – proxy server for HTTPS requests.
  • NO_PROXY – a variable that specifies domains or domain masks to be excluded from proxying.

    If HTTP_PROXY or HTTPS_PROXY is used, the NO_PROXY variable is automatically generated in the Helm Chart package, and all the components used by Kaspersky Container Security are indicated in this variable.

    You can change the NO_PROXY variable if you need to specify domains and masks for operation of Kaspersky Container Security in order to exclude them from proxying.

  • SCANNER_PROXY – a specialized variable that specifies which proxy server receives requests from the scanner of the File Threat Protection component. These requests are used by Kaspersky servers to update databases.
  • LICENSE_PROXY – a specialized variable that specifies the proxy server through which the kcs-licenses module sends requests to Kaspersky servers to check and update information about the current license.

Depending on the domain name masks supported by your proxy server, use the following masks to specify Kaspersky servers in the proxy server's allow lists: *.kaspersky.com or .kaspersky.com, and *.kaspersky-labs.com or .kaspersky-labs.com. Port 80 must be open to access these servers.

You can specify the proxy server using an IP address or FQDN, and you can include a port in the proxy server parameters.

Special characters must be escaped.
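For example, special characters in proxy credentials can be percent-encoded before being embedded in the HTTP_PROXY variable. This is a sketch; the user name, password, and proxy address below are assumptions for illustration:

```shell
# Hypothetical example: percent-encode '@' and ':' in a proxy password
# before embedding it in the HTTP_PROXY variable.
PROXY_PASS='p@ss:word'
ENCODED=$(printf '%s' "$PROXY_PASS" | sed -e 's/@/%40/g' -e 's/:/%3A/g')
export HTTP_PROXY="http://user:${ENCODED}@proxy.company.com:3128"
echo "$ENCODED"   # p%40ss%3Aword
```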

The list below shows the Kaspersky Container Security components that can use environment variables, and the purpose of these environment variables.

Environment variables used by Kaspersky Container Security components:

  • kcs-ih (HTTP_PROXY, HTTPS_PROXY, NO_PROXY): getting access to external image registries that are not available from the Kaspersky Container Security namespace.
  • kcs-ih (SCANNER_PROXY): update of the databases of the File Threat Protection scanner using Kaspersky update servers.
  • kcs-middleware (HTTP_PROXY, HTTPS_PROXY, NO_PROXY): getting access to external image registries that are not available from the Kaspersky Container Security namespace.
  • kcs-scanner (SCANNER_PROXY): update of the vulnerability scanner databases using Kaspersky update servers.
  • kcs-licenses (LICENSE_PROXY): check and update of information about the current license using Kaspersky license servers.

You can configure agents to work through a proxy server; the proxy server forwards their requests to the Kaspersky Container Security installation address.

To configure the operation of agents using a proxy server:

  1. Under Components → Agents, in the table with the list of agent groups, click the link in the agent group name.
  2. In the window that opens, go to the Node monitoring tab and do the following:
    • Ensure that the File Threat Protection component is enabled by using the Disable/Enable toggle switch.
    • In the File Threat Protection section, specify the proxy server in Anti-malware database update proxy.
    • Click Save.
  3. Click the Deployment data tab.
  4. Copy or download the updated agent deployment instruction in a .YAML file again, and then apply it by using the kubectl apply -f <file> -n <namespace> command.
  5. Configure the HTTP_PROXY, HTTPS_PROXY, or NO_PROXY environment variables in the Deployment and DaemonSet objects of the agents.
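Step 5 might look like the following fragment of an agent Deployment or DaemonSet spec. This is a hedged sketch: the container name node-agent and the proxy address are assumptions, not values from the distribution kit.

```yaml
# Hypothetical fragment: proxy environment variables added to the agent
# container spec (container name and proxy address are placeholders).
spec:
  template:
    spec:
      containers:
        - name: node-agent
          env:
            - name: HTTP_PROXY
              value: "http://proxy.company.com:3128"
            - name: HTTPS_PROXY
              value: "http://proxy.company.com:3128"
            - name: NO_PROXY
              value: ".cluster.local,kcs.company.com"
```

After editing the objects, apply the updated manifests with kubectl apply, as in step 4.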

Page top
[Topic 293087]

Connecting to external data storage resources

In addition to the Kaspersky Container Security components included in the distribution kit, the solution can also work with the following external data storage resources:

  • PostgreSQL database
  • ClickHouse DBMS
  • MinIO S3-compatible file storage

Connections to external data storage resources are configured in the values.yaml configuration file.

In this section

Creating a user for an external PostgreSQL database

Using external ClickHouse DBMS

Configuring the MinIO external storage settings

Page top
[Topic 298985]

Creating a user for an external PostgreSQL database

For Kaspersky Container Security, you can use the PostgreSQL databases included in the solution or your own PostgreSQL databases. To use an external PostgreSQL database that does not yet contain the Kaspersky Container Security schema, you must create a separate user. You can do this by installing the Helm Chart package with the schema parameters specified for the external PostgreSQL database.

To create a user with a custom schema for an external PostgreSQL database:

  1. Run the following command to create a separate namespace for the external PostgreSQL database:

    kubectl create ns kcspg

    where kcspg is the namespace for the external PostgreSQL database.

  2. To deploy an external PostgreSQL database:
    1. Specify the parameters for deploying the external PostgreSQL database in the pg.yaml configuration file.

      Parameters for deploying the external PostgreSQL database

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        annotations:
          deployment.kubernetes.io/revision: "1"
        labels:
          app: postgres
          component: postgres
        name: postgres
        namespace: kcspg
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: postgres
            component: postgres
        strategy:
          type: Recreate
        template:
          metadata:
            creationTimestamp: null
            labels:
              app: postgres
              component: postgres
          spec:
            containers:
              - name: postgres
                image: postgres:13-alpine
                ports:
                  - containerPort: 5432
                env:
                  - name: POSTGRES_DB
                    value: api
                  - name: POSTGRES_USER
                    value: postgres
                  - name: POSTGRES_PASSWORD
                    value: postgres
                volumeMounts:
                  - mountPath: "/var/lib/postgresql/data"
                    name: "pgdata"
            imagePullSecrets:
              - name: ci-creds
            volumes:
              - hostPath:
                  path: "/home/docker/pgdata"
                name: pgdata

      The parameters specify the password of the database. You must then specify this password in the infraconfig section of the values.yaml configuration file, which is part of the Helm Chart package included in the distribution kit of the solution.

    2. Run the following command:

      kubectl apply -f pg.yaml -n kcspg

    The host name of this external database is formed as follows:

    <service_name>.<namespace>.svc.cluster.local

    For example, postgres.kcspg.svc.cluster.local
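The in-cluster DNS name above can be composed from the Service and namespace names created in the previous steps, as this sketch shows:

```shell
# Compose the in-cluster DNS name of the PostgreSQL Service
# from the Service name and the namespace created earlier.
SERVICE=postgres
NAMESPACE=kcspg
DB_HOST="${SERVICE}.${NAMESPACE}.svc.cluster.local"
echo "$DB_HOST"   # postgres.kcspg.svc.cluster.local
```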

  3. To deploy a Service object in a cluster:
    1. Specify the Service object deployment parameters in the svc.yaml configuration file.

      Parameters for deploying the Service object in a cluster

      apiVersion: v1
      kind: Service
      metadata:
        name: postgres
      spec:
        type: ClusterIP
        selector:
          component: postgres
        ports:
          - port: 5432
            targetPort: 5432

    2. Run the following command:

      kubectl apply -f svc.yaml -n kcspg

  4. To create a user, a schema, and a user-schema relation:
    1. Log in to the postgres pod deployed at step 2.
    2. Start the psql interactive terminal:

      psql -h localhost -U postgres -d api

    3. Run the following commands:

      CREATE ROLE kks LOGIN PASSWORD 'kks' NOINHERIT CREATEDB;
      CREATE SCHEMA kks AUTHORIZATION kks;
      GRANT USAGE ON SCHEMA kks TO PUBLIC;

  5. In the values.yaml configuration file, specify the necessary parameters to use an external PostgreSQL database.

    Parameters in the values.yaml file

    default:
      postgresql:
        external: true

    configmap:
      infraconfig:
        type: fromEnvs
        envs:
          POSTGRES_HOST: postgres.kcspg.svc.cluster.local
          POSTGRES_PORT: 5432
          POSTGRES_DB_NAME: api

    secret:
      infracreds:
        type: fromEnvs
        envs:
          POSTGRES_USER: kks
          POSTGRES_PASSWORD: kks

    The values of the parameters specified in values.yaml must match the values of corresponding parameters in the pg.yaml and svc.yaml configuration files.

  6. Start a solution update.

    Example of commands to create a user with an external PostgreSQL database

    export KUBECONFIG=/root/.kube/config
    export CHART_URL=repo.kcs.kaspersky.com
    export CHART_USERNAME=<CHART_USERNAME>
    export CHART_PASSWORD=<CHART_PASSWORD>
    export VERSION=2.0.0
    export KCS_HOSTNAME=kcs.apps.aws.ext.company.com
    export IMAGE_URL=company.gitlab.examplecloud.com:5050
    export IMAGE_USERNAME=<repo_user>
    export IMAGE_PASSWORD=<repo_pass>

    cd /tmp
    helm registry login --username $IMAGE_USERNAME --password $IMAGE_PASSWORD company.gitlab.examplecloud.com:5050/company/kcs/chart
    helm pull oci://company.gitlab.examplecloud.com:5050/company/kcs/chart/kcs --version $VERSION
    tar -xf kcs*.tgz -C /tmp
    cp -rf /tmp/values.yaml /tmp/kcs
    cd /tmp/kcs
    helm upgrade --install kcs-release --create-namespace --namespace kcs --values values.yaml --version $VERSION --timeout 30m --wait --debug .

Page top
[Topic 292954]

Using external ClickHouse DBMS

In addition to the ClickHouse DBMS, which is a component of Kaspersky Container Security and is included in the distribution kit, the solution can also work with an external ClickHouse DBMS. To do this, complete the steps described in the following subsections.

Page top
[Topic 298717]

Creating a database for Kaspersky Container Security

To create a database for Kaspersky Container Security,

In ClickHouse on your workstation, run the following command:

CREATE DATABASE IF NOT EXISTS kcs

where kcs is the name of the database for Kaspersky Container Security.

To configure the settings of the created database for Kaspersky Container Security:

  1. Add users and define their authorization method. To do this, you must do the following:
    1. Add the following users:
      • a user with rights to read data received by the Kaspersky Container Security core (reader).

        <roles>
          <kcs_reader_role>
            <grants>
              <query>GRANT SELECT ON kcs.*</query>
            </grants>
          </kcs_reader_role>
        </roles>

      • a user with rights to write data from external agent requests (writer).

        <roles>
          <kcs_writer_role>
            <grants>
              <query>GRANT CREATE TABLE, INSERT, ALTER, UPDATE ON kcs.*</query>
              <query>GRANT SELECT (source_ip, source_port, source_alias, dest_ip, dest_port, dest_alias, protocol, severity, action, event_time, count, type) ON kcs.node_agent_events</query>
            </grants>
          </kcs_writer_role>
        </roles>

    2. Specify the user authorization method: with a password or with a certificate.

      Example of configuring users with password authentication

      <clickhouse>
        ...
        <kcsuser-write>
          <password>*********</password>
          <networks>
            <ip>::/0</ip>
          </networks>
          ...
          <grants>
            <query>GRANT kcs_writer_role</query>
          </grants>
        </kcsuser-write>
        <kcsuser-read>
          <password>*********</password>
          <networks>
            <ip>::/0</ip>
          </networks>
          ...
          <grants>
            <query>GRANT kcs_reader_role</query>
          </grants>
        </kcsuser-read>
        ...
        <roles>
          <kcs_reader_role>
            <grants>
              <query>GRANT SELECT ON kcs.*</query>
            </grants>
          </kcs_reader_role>
          <kcs_writer_role>
            <grants>
              <query>GRANT CREATE TABLE, INSERT, ALTER, UPDATE ON kcs.*</query>
              <query>GRANT SELECT (source_ip, source_port, source_alias, dest_ip, dest_port, dest_alias, protocol, severity, action, event_time, count, type) ON kcs.node_agent_events</query>
            </grants>
          </kcs_writer_role>
          ...
        </roles>
        ...
      </clickhouse>

      Example of configuring users with certificate authentication

      <clickhouse>
        ...
        <kcsuser-write>
          <ssl_certificates>
            <common_name>kcsuser-write</common_name>
          </ssl_certificates>
          <networks>
            <ip>::/0</ip>
          </networks>
          ...
          <grants>
            <query>GRANT kcs_writer_role</query>
          </grants>
        </kcsuser-write>
        <kcsuser-read>
          <ssl_certificates>
            <common_name>kcsuser-read</common_name>
          </ssl_certificates>
          <networks>
            <ip>::/0</ip>
          </networks>
          ...
          <grants>
            <query>GRANT kcs_reader_role</query>
          </grants>
        </kcsuser-read>
        ...
        <roles>
          <kcs_reader_role>
            <grants>
              <query>GRANT SELECT ON kcs.*</query>
            </grants>
          </kcs_reader_role>
          <kcs_writer_role>
            <grants>
              <query>GRANT CREATE TABLE, INSERT, ALTER, UPDATE ON kcs.*</query>
              <query>GRANT SELECT (source_ip, source_port, source_alias, dest_ip, dest_port, dest_alias, protocol, severity, action, event_time, count, type) ON kcs.node_agent_events</query>
            </grants>
          </kcs_writer_role>
          ...
        </roles>
        ...
      </clickhouse>

  2. Specify disks for short-term and long-term data storage. When working with ClickHouse, Kaspersky Container Security can store large amounts of data with various retention periods. By default, most events are stored for a maximum of 30 minutes, whereas information about incidents is stored for up to 90 days. Since event recording requires considerable resources to ensure high write speed and sufficient disk space, we recommend using different disks for short-term and long-term data storage.

    Example of configuring data storage settings

    <clickhouse>
      ...
      <storage_configuration>
        <disks>
          <kcs_disk_hot>
            <path>/etc/clickhouse/hot/</path>
          </kcs_disk_hot>
          <kcs_disk_cold>
            <path>/etc/clickhouse/cold/</path>
          </kcs_disk_cold>
        </disks>
        <policies>
          <kcs_default>
            <volumes>
              <default>
                <disk>kcs_disk_hot</disk>
              </default>
              <cold>
                <disk>kcs_disk_cold</disk>
              </cold>
            </volumes>
          </kcs_default>
        </policies>
      </storage_configuration>
      ...
    </clickhouse>

Page top
[Topic 298742]

Configuring the external ClickHouse DBMS settings

To configure the Kaspersky Container Security settings to use the external ClickHouse DBMS:

  1. In the values.yaml configuration file, specify that the solution uses the external ClickHouse DBMS:

    default:
      kcs-clickhouse:
        external: true

  2. Specify the variables for using the external ClickHouse DBMS:

    configmap:
      infraconfig:
        type: fromEnvs
        envs:
          ...<variables for using the external ClickHouse DBMS>

    In this section you must specify the following variables:

    • EXT_CLICKHOUSE_PROTOCOL is the protocol for connection to the external ClickHouse DBMS.
    • EXT_CLICKHOUSE_HOST is the host for connection to the external ClickHouse DBMS.
    • EXT_CLICKHOUSE_PORT is the port for connection to the external ClickHouse DBMS.
    • EXT_CLICKHOUSE_DB_NAME is the name of the database prepared for use with Kaspersky Container Security.
    • EXT_CLICKHOUSE_COLD_STORAGE_NAME is the name of the disk where ClickHouse stores incident data for the long term.
    • EXT_CLICKHOUSE_STORAGE_POLICY_NAME is the name of the data storage policy according to which ClickHouse transfers incident data to the disk for long-term storage.

      If you use the same disk for short-term and long-term data storage, the EXT_CLICKHOUSE_COLD_STORAGE_NAME and EXT_CLICKHOUSE_STORAGE_POLICY_NAME values are not specified.

    • EXT_CLICKHOUSE_SSL_AUTH is the variable for SSL authorization of ClickHouse users. If the true value is specified, authorization is performed without passwords using client certificates.

      If TLS_INTERNAL is false, EXT_CLICKHOUSE_SSL_AUTH must also be false.

    • EXT_CLICKHOUSE_ROOT_CA_PATH is the path to the CA certificate, which is specified if the https protocol is used to connect to ClickHouse ( EXT_CLICKHOUSE_PROTOCOL: https). You can specify the path in one of the following ways:
      • Put the ClickHouse CA certificate in the directory specified by the path. In this case, you must uncomment the secret.cert-kcs-clickhouse-ca block.
      • Use Vault to store certificate data. In this case, you must uncomment the cert-kcs-clickhouse-ca block in the vault.certificate section.
  3. Specify values of secrets for using the external ClickHouse DBMS:

    secret:
      infracreds:
        type: fromEnvs
        envs:
          ...<secrets for using the external ClickHouse DBMS>

    In this section you must specify the following:

    • EXT_CLICKHOUSE_WRITE_USER is the name of the user with write permissions created for use with Kaspersky Container Security.
    • CLICKHOUSE_WRITE_PASSWORD is the password of the user with write permissions created for use with Kaspersky Container Security.
    • EXT_CLICKHOUSE_READ_USER is the name of the user with read permissions created for use with Kaspersky Container Security.
    • CLICKHOUSE_READ_PASSWORD is the password of the user with read permissions created for use with Kaspersky Container Security.

      CLICKHOUSE_READ_PASSWORD and CLICKHOUSE_WRITE_PASSWORD are not used if EXT_CLICKHOUSE_SSL_AUTH is set to true.

    Usernames and passwords can also be specified using the Vault secret storage.

    Example of configuring the external ClickHouse DBMS settings

    kcs-clickhouse:
      external: true
      persistent: true
    ...
    configmap:
      infraconfig:
        type: fromEnvs
        envs:
          ...
          EXT_CLICKHOUSE_PROTOCOL: https
          EXT_CLICKHOUSE_HOST: clickhouse.ns.svc.cluster.local
          EXT_CLICKHOUSE_PORT: 8443
          EXT_CLICKHOUSE_DB_NAME: kcs
          EXT_CLICKHOUSE_COLD_STORAGE_NAME: cold
          EXT_CLICKHOUSE_STORAGE_POLICY_NAME: kcs_default
          EXT_CLICKHOUSE_SSL_AUTH: false
          EXT_CLICKHOUSE_ROOT_CA_PATH: /etc/ssl/certs/kcs-clickhouse-ca.crt
          ...
    secret:
      ...
      infracreds:
        type: fromEnvs
        envs:
          ...
          EXT_CLICKHOUSE_WRITE_USER: kcsuser-write
          EXT_CLICKHOUSE_READ_USER: kcsuser-read
          CLICKHOUSE_WRITE_PASSWORD: **************
          CLICKHOUSE_READ_PASSWORD: ***********
          ...

    When using Vault:

    vault:
      ...
      secret:
        type: managedByVault
        ...
        EXT_CLICKHOUSE_WRITE_USER: kv/secret/kcs/clickhouse@EXT_CLICKHOUSE_WRITE_USER
        EXT_CLICKHOUSE_READ_USER: kv/secret/kcs/clickhouse@EXT_CLICKHOUSE_READ_USER
        CLICKHOUSE_WRITE_PASSWORD: kv/secret/kcs/clickhouse@CLICKHOUSE_WRITE_PASSWORD
        CLICKHOUSE_READ_PASSWORD: kv/secret/kcs/clickhouse@CLICKHOUSE_READ_PASSWORD
        ...

Page top
[Topic 298743]

Configuring the MinIO external storage settings

To configure the Kaspersky Container Security settings to use the external S3-compatible MinIO file storage:

  1. In the values.yaml configuration file, specify that the solution uses external MinIO file storage:

    default:
      kcs-s3:
        external: true

  2. Specify variable values for using MinIO:

    configmap:
      infraconfig:
        type: fromEnvs
        envs:
          ...<variables for using the external MinIO file storage>

    In this section you must specify the following variables:

    • MINIO_HOST is the host to connect to MinIO.
    • MINIO_PORT is the port to connect to MinIO.
    • MINIO_BUCKET_NAME is the name of the bucket in MinIO allocated for Kaspersky Container Security data.
    • MINIO_SSL is the variable for SSL connection to MinIO (that is, using the https protocol).

      If TLS_INTERNAL is false, MINIO_SSL must also be false.

    • MINIO_ROOT_CA_PATH is the path to the CA certificate, which is specified if the https protocol is used to connect to MinIO (MINIO_SSL: true). You can specify the path in one of the following ways:
      • Put the MinIO CA certificate in the directory specified by the path. In this case, you must uncomment the secret.cert-minio-ca block.
      • Use Vault to store certificate data. In this case, you must uncomment the cert-minio-ca block in the vault.certificate section.
  3. Specify values of secrets for using the external MinIO file storage:

    secret:
      infracreds:
        type: fromEnvs
        envs:
          ...<secrets for using the external MinIO file storage>

    In this section you must specify the following:

    • MINIO_ROOT_USER is the name of the MinIO user specified for Kaspersky Container Security.
    • MINIO_ROOT_PASSWORD is the password of the MinIO user specified for Kaspersky Container Security.

    Usernames and passwords can also be specified using the Vault secret storage.

    Example of configuring the MinIO external file storage settings

    kcs-s3:
      enabled: true
      external: true
    ...
    configmap:
      infraconfig:
        type: fromEnvs
        envs:
          ...
          MINIO_HOST: kcs-s3
          MINIO_PORT: 9000
          MINIO_BUCKET_NAME: reports
          MINIO_SSL: true
          MINIO_ROOT_CA_PATH: /etc/ssl/certs/minio-ca.crt
          ...
    secret:
      ...
      infracreds:
        type: fromEnvs
        envs:
          ...
          MINIO_ROOT_USER: kcs_user
          MINIO_ROOT_PASSWORD: ********
          ...

    When using Vault:

    vault:
      ...
      secret:
        type: managedByVault
        ...
        MINIO_ROOT_USER: kv/test/minio@MINIO_ROOT_USER
        MINIO_ROOT_PASSWORD: kv/test/minio@MINIO_ROOT_PASSWORD

Page top
[Topic 298749]

Installing private fixes

Due to the specifics of various corporate networks where Kaspersky Container Security is deployed, sometimes private fixes must be installed for the solution. A private fix is one or several customized Docker images. Such private fixes are not published on the official Kaspersky image source and are transferred directly to the customer for placement in the corporate image registry.

To put the received Docker image to an image registry:

  1. Save the archive with the Docker image in the selected directory.
  2. Run the docker load -i <archive_name> command.
  3. Run the docker tag <image_ID_obtained_at_step_2> <customer_registry>/<component>:fix command.
  4. Run the docker push <customer_registry>/<component>:fix command.
  5. Depending on the component for which the fix is installed, replace the image tag <component>:2.0 with <customer_registry>/<component>:fix in the corresponding configuration file.
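The tag composition in steps 3 and 4 can be sketched as follows. The registry and component names are assumptions, and the docker commands are shown as comments because they require a running Docker daemon:

```shell
# Hypothetical sketch: compose the target tag for a private-fix image.
# Registry and component names are placeholders.
REGISTRY=registry.company.com
COMPONENT=kcs-middleware
FIX_IMAGE="${REGISTRY}/${COMPONENT}:fix"
# docker load -i ${COMPONENT}-fix.tar
# docker tag <image_id> "$FIX_IMAGE"
# docker push "$FIX_IMAGE"
echo "$FIX_IMAGE"   # registry.company.com/kcs-middleware:fix
```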

Page top
[Topic 298784]