Kaspersky Unified Monitoring and Analysis Platform

Installing and removing KUMA

To complete the installation, you need a distribution kit:

  • kuma-ansible-installer-<build number>.tar.gz contains all necessary files for installing KUMA without the support for fault-tolerant configurations.
  • kuma-ansible-installer-ha-<build number>.tar.gz contains all necessary files for installing KUMA in a fault-tolerant configuration.

To complete the installation, you need the install.sh installer file and an inventory file that describes the infrastructure. You can create an inventory file based on a template. Each distribution contains an install.sh installer file and the following inventory file templates:

  • single.inventory.yml.template
  • distributed.inventory.yml.template
  • expand.inventory.yml.template
  • k0s.inventory.yml.template

KUMA places its files in the /opt directory, so we recommend making /opt a separate partition and allocating 16 GB for the operating system and the remainder of the disk space for the /opt partition.
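
For example, before installation you can check that /opt is mounted as a separate partition and has enough free space; the device names and sizes in the output will differ in your environment:

lsblk

df -h /opt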

KUMA is installed in the same way on all hosts using the installer and your prepared inventory file in which you describe your configuration. We recommend taking time to think through the setup before you proceed.

The following installation options are available:

  • Installation on a single server

    Single-server installation diagram

    You can install all KUMA components on the same server: specify the same server in the single.inventory.yml inventory file for all components. An "all-in-one" installation can handle a small stream of events, up to 10,000 EPS. If you plan to use multiple dashboard layouts and handle a high volume of search queries, a single server might not be sufficient. In that case, we recommend choosing the distributed installation instead.

  • Distributed installation

    Distributed installation diagram

    You can install KUMA services on different servers; you can describe the configuration for a distributed installation in the distributed.inventory.yml inventory file.

  • Distributed installation in a fault-tolerant configuration

    You can install the KUMA Core on a Kubernetes cluster for fault tolerance. Use the k0s.inventory.yml inventory file for the description.

In this section

Program installation requirements

Ports used by KUMA during installation

Synchronizing time on servers

About the inventory file

Installation on a single server

Distributed installation

Distributed installation in a fault-tolerant configuration

KUMA backup

Modifying the configuration of KUMA

Updating previous versions of KUMA

Troubleshooting update errors

Delete KUMA

Page top
[Topic 217904]

Program installation requirements

General application installation requirements

Before deploying the application, make sure the following conditions are met:

  • Servers on which you want to install the components satisfy the hardware and software requirements.
  • Ports used by the installed instance of KUMA are available.
  • KUMA components are addressed using the fully qualified domain name (FQDN) of the host. Before you install the application, make sure that the correct host FQDN is returned in the Static hostname field. For this purpose, execute the following command:

    hostnamectl status

  • The server where the installer is run does not have the name localhost or localhost.<domain>.
  • Time synchronization over Network Time Protocol (NTP) is configured on all servers with KUMA services.
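
If the hostnamectl status command shows localhost or a short name in the Static hostname field instead of the FQDN, you can set the FQDN before installation (kuma1.example.com is a hypothetical name used here as an example):

sudo hostnamectl set-hostname kuma1.example.com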

Installation requirements for Oracle Linux and Astra Linux operating systems

 

Oracle Linux:

  • Python version: 3.6 or later
  • SELinux module: disabled
  • Package manager: pip3
  • Basic packages: netaddr, firewalld

    The packages can be installed using the following commands:

    pip3 install netaddr

    yum install firewalld

Astra Linux:

  • Python version: 3.6 or later
  • SELinux module: disabled
  • Package manager: pip3
  • Basic packages: python3-apt, curl, libcurl4

    The packages can be installed using the following command:

    apt install python3-apt curl libcurl4

  • Dependent packages: netaddr, python3-cffi-backend

    The packages can be installed using the following command:

    apt install python3-netaddr python3-cffi-backend

If you are planning to query Oracle DB databases from KUMA, you must install the libaio1 Astra Linux package.

Packages that must be installed on a device with the KUMA Core for correct generation and downloading of reports

  • nss
  • gtk2
  • atk
  • libnss3.so
  • libatk-1.0.so.0
  • libxkbcommon
  • libdrm
  • at-spi2-atk
  • mesa-libgbm
  • alsa-lib
  • libgtk2.0.0
  • libnss3
  • libatk-adaptor
  • libdrm-common
  • libgbm1
  • libxkbcommon0
  • libasound2

 

User permissions level required to install the application

To assign the required permissions to the user account used for installing the application, run the following command:

sudo pdpl-user -i 63 <user name under which the application is being installed>

Page top
[Topic 231034]

Ports used by KUMA during installation

For the program to run correctly, you need to ensure that the KUMA components are able to interact with other components and programs over the network via the protocols and ports specified during the installation of the KUMA components.

Before installing the Core on the device, make sure that the following ports are free:

  • 9090: used by Victoria Metrics.
  • 8880: used by VMalert.
  • 27017: used by MongoDB.
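
For example, you can verify that these ports are not in use with the following command; empty output means the ports are free:

sudo ss -tln | grep -E ':(9090|8880|27017)\b'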

The list below shows the default network port values (protocol, port, direction, and destination of the connection). The installer automatically opens these ports during KUMA installation.

Network ports used for the interaction of KUMA components

  • HTTPS, port 7222. Direction: from the KUMA client to the server with the KUMA Core component. Destination of the connection: reverse proxy in the CyberTrace system.
  • HTTPS, port 8123. Direction: from the storage service to the ClickHouse cluster node. Destination of the connection: writing and receiving normalized events in the ClickHouse cluster.
  • HTTPS, port 9009. Direction: between ClickHouse cluster replicas. Destination of the connection: internal communication between ClickHouse cluster replicas for transferring data of the cluster.
  • TCP, port 2181. Direction: from ClickHouse cluster nodes to the ClickHouse keeper replication coordination service. Destination of the connection: receiving and writing of replication metadata by replicas of ClickHouse servers.
  • TCP, port 2182. Direction: from one ClickHouse keeper replication coordination service to another. Destination of the connection: internal communication between replication coordination services to reach a quorum.
  • TCP, port 7209. Direction: from the parent server with the KUMA Core component to the child server with the KUMA Core component. Destination of the connection: internal communication of the parent node with the child node in hierarchy mode.
  • TCP, port 7210. Direction: from all KUMA components to the KUMA Core server. Destination of the connection: receipt of the configuration by KUMA from the KUMA Core server.
  • TCP, port 7220. Direction: from the KUMA client to the server with the KUMA Core component; from storage hosts to the server with the KUMA Core component during installation or upgrade. Destination of the connection: user access to the KUMA web interface; interaction between the storage hosts and the KUMA Core during installation or upgrade (you can close the port after the installation or upgrade).
  • TCP, port 7221 and other ports used for service installation as the --api.port <port> parameter value. Direction: from the KUMA Core to KUMA services. Destination of the connection: administration of services from the KUMA web interface.
  • TCP, port 7223. Direction: to the KUMA Core server. Destination of the connection: default port used for API requests.
  • TCP, port 8001. Direction: from Victoria Metrics to the ClickHouse server. Destination of the connection: receiving ClickHouse server operation metrics.
  • TCP, port 9000. Direction: from the ClickHouse client to the ClickHouse cluster node. Destination of the connection: writing and receiving data in the ClickHouse cluster.

Ports used by the OOTB predefined resources

The installer automatically opens the ports during KUMA installation.

Ports used by the OOTB predefined resources:

  • 7230/tcp
  • 7231/tcp
  • 7232/tcp
  • 7233/tcp
  • 7234/tcp
  • 7235/tcp
  • 5140/tcp
  • 5140/udp
  • 5141/tcp
  • 5144/udp
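
The installer opens the listed ports automatically. If you manage the firewall yourself and firewalld is in use, manually opening one of the ports might look like this (port 7220 is used as an example):

sudo firewall-cmd --permanent --add-port=7220/tcp

sudo firewall-cmd --reload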

KUMA Core traffic in a fault-tolerant configuration

The "KUMA Core traffic in a fault-tolerant configuration" table shows the initiator of the connection (the source) and the destination. The port number on the initiator can be dynamic. Return traffic within the established connection must not be blocked.

KUMA Core traffic in a fault-tolerant configuration

Source

Destination

Destination port

Type

External KUMA services

Load balancer

7209

TCP

External KUMA services

Load balancer

7210

TCP

External KUMA services

Load balancer

7220

TCP

External KUMA services

Load balancer

7222

TCP

External KUMA services

Load balancer

7223

TCP

Worker node

Load balancer

6443

TCP

Worker node

Load balancer

8132

TCP

Control node

Load balancer

6443

TCP

Control node

Load balancer

8132

TCP

Control node

Load balancer

9443

TCP

Worker node

External KUMA services

Depending on the settings specified when creating the service.

TCP

Load balancer

Worker node

7209

TCP

Load balancer

Worker node

7210

TCP

Load balancer

Worker node

7220

TCP

Load balancer

Worker node

7222

TCP

Load balancer

Worker node

7223

TCP

External KUMA services

Worker node

7209

TCP

External KUMA services

Worker node

7210

TCP

External KUMA services

Worker node

7220

TCP

External KUMA services

Worker node

7222

TCP

External KUMA services

Worker node

7223

TCP

Worker node

Worker node

179

TCP

Worker node

Worker node

9500

TCP

Worker node

Worker node

10250

TCP

Worker node

Worker node

51820

UDP

Worker node

Worker node

51821

UDP

Control node

Worker node

10250

TCP

Load balancer

Control node

6443

TCP

Load balancer

Control node

8132

TCP

Load balancer

Control node

9443

TCP

Worker node

Control node

6443

TCP

Worker node

Control node

8132

TCP

Worker node

Control node

10250

TCP

Control node

Control node

2380

TCP

Control node

Control node

6443

TCP

Control node

Control node

9443

TCP

Control node

Control node

10250

TCP

Cluster management console (CLI)

Load balancer

6443

TCP

Cluster management console (CLI)

Control node

6443

TCP

Page top
[Topic 217770]

Synchronizing time on servers

To configure time synchronization on servers:

  1. Install chrony:

    sudo apt install chrony

  2. Configure the system time to synchronize with the NTP server:
    1. Make sure the virtual machine has Internet access.

      If access is available, go to step b.

      If internet access is not available, edit the /etc/chrony.conf file to replace 2.pool.ntp.org with the name or IP address of your organization's internal NTP server.

    2. Start the system time synchronization service by executing the following command:

      sudo systemctl enable --now chronyd

    3. Wait a few seconds and run the following command:

      sudo timedatectl | grep 'System clock synchronized'

      If the system time is synchronized correctly, the output will contain the line "System clock synchronized: yes".

Synchronization is configured.
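
If you use an internal NTP server (step 2.a), the change to /etc/chrony.conf might look like this; ntp.internal.example.com is a hypothetical server name, and the exact file location can differ depending on the distribution:

sudo sed -i 's/2.pool.ntp.org/ntp.internal.example.com/g' /etc/chrony.conf

sudo systemctl restart chronyd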

Page top
[Topic 255123]

About the inventory file

KUMA components can be installed, updated, and removed from the directory with the unpacked kuma-ansible-installer using the Ansible tool and the inventory file you created. You can specify values for KUMA configuration settings in the inventory file; the installer uses these values when deploying, updating, and removing the program. The inventory file uses the YAML format.

You can create an inventory file based on the templates included in the distribution kit. The following templates are available:

  • single.inventory.yml.template—Used to install KUMA on a single server. It contains the minimum set of settings optimized for installation on a single device without the use of a Kubernetes cluster.
  • distributed.inventory.yml.template—Used for the initial distributed installation of KUMA without using a Kubernetes cluster, for expanding the all-in-one installation to a distributed installation, and for updating KUMA.
  • expand.inventory.yml.template—Used in some reconfiguration scenarios: for adding collector and correlator servers, for expanding an existing storage cluster, and for adding a new storage cluster. When you use this inventory file to modify the configuration, the installer does not stop services across the entire infrastructure; it stops services only on the hosts that are listed in the expand.inventory.yml inventory file.
  • k0s.inventory.yml.template—Used to install or migrate KUMA to a Kubernetes cluster.

We recommend backing up the inventory file that you used to install the program. You can use it to add components to the system or remove KUMA.

Page top
[Topic 255188]

KUMA settings in the inventory file

The inventory file may include the following blocks:

  • all
  • kuma
  • kuma_k0s

For each host, you must specify the FQDN in the <host name>.<domain> format or an IPv4 or IPv6 address.

Example:

hosts:

  hostname.example.com:

    ip: 0.0.0.0

or

    ip: ::%eth0

all block

This block specifies the variables that apply to all hosts listed in the inventory file, including the implicit localhost on which the installation is started. Variables can be redefined at the level of host groups or even for individual hosts.

Example of redefining variables in the inventory file

all:

  vars:

    ansible_connection: ssh

    deploy_to_k8s: False

    need_transfer: False

    airgap: True

    deploy_example_services: True

kuma:

  vars:

    ansible_become: true

    ansible_user: i.ivanov

    ansible_become_method: su

    ansible_ssh_private_key_file: ~/.ssh/id_rsa

  children:

    kuma_core:

      vars:

        ansible_user: p.petrov

        ansible_become_method: sudo

The following table lists possible variables in the 'vars' section and their descriptions.

List of possible variables in the vars section

  • ansible_connection: the method used to connect to target machines. Possible values: ssh (connection to remote hosts via SSH) or local (no connection to remote hosts is established).
  • ansible_user: the user name used to connect to target machines and install components. If the root user is blocked on the target machines, use a user name that has the right to establish SSH connections and elevate privileges using su or sudo.
  • ansible_become: indicates whether the privileges of the user account used to install KUMA components must be elevated. Set it to true if the ansible_user value is not root.
  • ansible_become_method: the method for elevating the privileges of the user account used to install KUMA components. Specify su or sudo if the ansible_user value is not root.
  • ansible_ssh_private_key_file: the path to the private key in the /<path>/.ssh/id_rsa format. This variable must be defined if you need to specify a key file that is different from the default key file (~/.ssh/id_rsa).
  • deploy_to_k8s: indicates that KUMA components are deployed in a Kubernetes cluster. The default value is false for the single.inventory.yml and distributed.inventory.yml templates and true for the k0s.inventory.yml template.
  • need_transfer: indicates that KUMA components are to be migrated to a Kubernetes cluster. The default value is false for the single.inventory.yml and distributed.inventory.yml templates and true for the k0s.inventory.yml template.
  • airgap: indicates that there is no internet connection. The default value is true for the k0s.inventory.yml template.
  • generate_etc_hosts: indicates that the machines are registered in the DNS zone of your organization. Possible values: false or true. In this case, the installer automatically adds the IP addresses of the machines from the inventory file to the /etc/hosts files on the machines where KUMA components are installed. The specified IP addresses must be unique.
  • deploy_example_services: indicates whether predefined services are created during installation. Possible values: false (no services are needed; the default value for the distributed.inventory.yml and k0s.inventory.yml templates) or true (services must be created; the default value for the single.inventory.yml template).
  • low_resources: indicates that KUMA is installed in environments with limited computing resources. In this case, the Core can be installed on a host that has 4 GB of free disk space. By default, this variable is not set.

kuma block

This block lists the settings of KUMA components deployed outside of the Kubernetes cluster.

The following sections are available in the block:

  • In the vars section, you can specify the variables that are applied to all hosts indicated in the kuma block.
  • In the children section you can list groups of component settings:
    • kuma_core—KUMA Core settings. This may contain only one host.
    • kuma_collector—settings of KUMA collectors. Can contain multiple hosts.
    • kuma_correlator—settings of KUMA correlators. Can contain multiple hosts.
    • kuma_storage—settings of KUMA storage nodes. Can contain multiple hosts.

kuma_k0s block

This block defines the settings of the Kubernetes cluster that ensures fault tolerance of KUMA. This block is only available in an inventory file that is based on k0s.inventory.yml.template.

The minimum configuration allowed for installation is one controller combined with a worker node. This configuration does not provide fault tolerance for the Core and is only intended for demonstration of its capabilities or for testing the software environment.

To implement fault tolerance, 3 dedicated cluster controllers and a load balancer are required. For production use, it is recommended to use dedicated worker nodes and controllers. If a cluster controller is under workload and the pod with the KUMA Core is hosted on the controller, disabling the controller will result in a complete loss of access to the Core.

The following sections are available in the block:

  • In the vars section, you can specify the variables that are applied to all hosts indicated in the kuma_k0s block.
  • The children section defines the settings of the Kubernetes cluster that ensures fault tolerance of KUMA.

The table below shows a list of possible variables in the vars section and their descriptions.

List of possible variables in the vars section

  • kuma_lb: FQDN of the load balancer. You install the load balancer yourself. If the kuma_managed_lb = true parameter is indicated within the group, the load balancer is configured automatically during KUMA installation, the necessary network TCP ports (6443, 8132, 9443, 7209, 7210, 7220, 7222, 7223) are opened on its host, and a restart is performed to apply the changes.
  • kuma_control_plane_master: a host that acts as a dedicated primary controller for the cluster.
  • kuma_control_plane_master_worker: a host that combines the role of the primary controller and a worker node of the cluster.

    kuma_control_plane_master and kuma_control_plane_master_worker are groups for specifying the primary controller. A host must be assigned to only one of them.

  • kuma_control_plane: hosts that act as dedicated cluster controllers.
  • kuma_control_plane_worker: hosts that combine the roles of controller and worker node of the cluster.

    kuma_control_plane and kuma_control_plane_worker are groups for specifying secondary controllers.

  • kuma_worker: worker nodes of the cluster.

Each host in this block must have a unique FQDN or IP address indicated in the ansible_host parameter, except for the host in the kuma_lb section, for which the FQDN must be indicated. Hosts must not be duplicated across groups.

The parameter extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true" must be indicated for each cluster node and for a cluster controller that is combined with a worker node.
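
For illustration, a worker node entry in an inventory file based on k0s.inventory.yml.template might look as follows; the host name and IP address are hypothetical:

kuma_worker:

  hosts:

    kuma-worker1.example.com:

      ansible_host: 10.0.1.10

      extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true"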

Page top
[Topic 244406]

Installation on a single server

To install KUMA components on a single server, complete the following steps:

  1. Ensure that hardware, software, and installation requirements for KUMA are met.
  2. Prepare the single.inventory.yml inventory file.

    Use the single.inventory.yml.template inventory file template from the distribution kit to create a single.inventory.yml inventory file and describe the network structure of program components in that file. The installer uses the single.inventory.yml file to deploy KUMA.

  3. Install the program.

    Install the program and log in to the web interface using the default credentials.

If necessary, you can move application components to different servers to continue with a distributed configuration.

In this section

Preparing the single.inventory.yml inventory file

Installing the program on a single server

Page top
[Topic 217908]

Preparing the single.inventory.yml inventory file

KUMA components can be installed, updated, and removed from the directory containing the unpacked installer by using the Ansible tool and a user-created YAML inventory file that lists the hosts of KUMA components and other settings. If you want to install all KUMA components on the same server, you must specify the same host for all components in the inventory file.

To create an inventory file for installation on a single server:

  1. Copy the archive with the kuma-ansible-installer-<version name>.tar.gz installer to the server and unpack it using the following command (about 2 GB of disk space is required):

    sudo tar -xpf kuma-ansible-installer-<version name>.tar.gz

  2. Go to the KUMA installer folder by executing the following command:

    cd kuma-ansible-installer

  3. Copy the single.inventory.yml.template template and create an inventory file named single.inventory.yml:

    cp single.inventory.yml.template single.inventory.yml

  4. Edit the settings in the single.inventory.yml inventory file.

    If you want predefined services to be created during the installation, set deploy_example_services to true.

    deploy_example_services: true

    The predefined services will appear only as a result of the initial installation of KUMA. If you are upgrading the system using the same inventory file, the predefined services are not re-created.

  5. Replace all kuma.example.com strings in the inventory file with the name of the host on which you want to install KUMA components.
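
    For example, you can replace the strings with a single command; kuma1.example.com is used here as a placeholder for the actual FQDN of your host:

    sed -i 's/kuma.example.com/kuma1.example.com/g' single.inventory.yml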

The inventory file is created. Now you can use it to install KUMA on a single server.

We recommend backing up the inventory file that you used to install the program. You can use it to add components to the system or remove KUMA.

Sample inventory file for installation on a single server

Single.inventory.yml_example

Page top
[Topic 222158]

Installing the program on a single server

You can install all KUMA components on a single server using the Ansible tool and the single.inventory.yml inventory file.

To install KUMA on a single server:

  1. Download the kuma-ansible-installer-<build number>.tar.gz KUMA distribution kit to the server and extract it. The archive is unpacked into the kuma-ansible-installer directory.
  2. Go to the directory with the unpacked installer.
  3. Place the license key file in the <installer directory>/roles/kuma/files/ directory.

    The key file must be named license.key.

    sudo cp <key file>.key <installer directory>/roles/kuma/files/license.key

  4. Run the following command to start the component installation with your prepared single.inventory.yml inventory file:

    sudo ./install.sh single.inventory.yml

  5. Accept the terms of the End User License Agreement.

    If you do not accept the terms of the End User License Agreement, the program will not be installed.

As a result, all KUMA components are installed. After the installation is complete, enter the address of the KUMA web interface in the address bar of your browser and enter your credentials on the login page.

The address of the KUMA web interface is https://<FQDN of the host where KUMA is installed>:7220.

Default login credentials:
- login – admin
- password – mustB3Ch@ng3d!

After the first login, change the password of the admin account.
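
To check that the Core has started, you can query the status of the kuma-core service; the same service name is used later in this guide for starting and stopping the Core:

sudo systemctl status kuma-core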

We recommend backing up the inventory file that you used to install the program. You can use this inventory file to add components to the system or remove KUMA.

You can expand the installation to a distributed installation.

Page top
[Topic 222159]

Distributed installation

Distributed installation of KUMA involves multiple steps:

  1. Verifying that the hardware, software, and installation requirements for KUMA are satisfied.
  2. Preparing the test machine.

    The test machine is used during the program installation process: the installer files are unpacked and run on it.

  3. Preparing the target machines.

    The program components are installed on the target machines.

  4. Preparing the distributed.inventory.yml inventory file.

    Create an inventory file with a description of the network structure of program components. The installer uses this inventory file to deploy KUMA.

  5. Installing the program.

    Install the program and log in to the web interface.

  6. Creating services.

    Create the client part of the services in the KUMA web interface and install the server part of the services on the target machines.

    Make sure the KUMA installation is complete before you install KUMA services. We recommend installing services in the following order: storage, collectors, correlators, and agents.

    When deploying several KUMA services on the same host, you must specify unique ports for each service using the --api.port <port> parameters during installation.

If necessary, you can change the KUMA web console certificate to your company's certificate.

In this section

Preparing the test machine

Preparing the target machine

Preparing the distributed.inventory.yml inventory file

Installing the program in a distributed configuration

Modifying the self-signed web console certificate

Page top
[Topic 217917]

Preparing the test machine

To prepare the test machine for the KUMA installation:

  1. Ensure that hardware, software, and installation requirements of the program are met.
  2. Generate an SSH key for authentication on the SSH servers of the target machines by executing the following command:

    sudo ssh-keygen -f /root/.ssh/id_rsa -N "" -C kuma-ansible-installer

    If SSH root access is blocked on the test machine, generate an SSH key for authentication on the SSH servers of the target machines under a user from the sudo group. If the user does not yet have sudo rights, first add the user to the sudo group:

    usermod -aG sudo <user name>

    Then generate the SSH key under that user:

    sudo ssh-keygen -f /home/<name of the user from sudo group>/.ssh/id_rsa -N "" -C kuma-ansible-installer

    As a result, the key is generated and saved in the user's home directory. You should specify the full path to the key in the inventory file in the value of the ansible_ssh_private_key_file parameter so that the key is available during installation.

  3. Make sure that the test machine has network access to all the target machines by host name, and copy the SSH key to each target machine by running the following command:

    sudo ssh-copy-id -i /root/.ssh/id_rsa root@<host name of the target machine>

    If SSH root access is blocked on the test machine and you want to use the SSH key from the home directory of the sudo group user, make sure that the test machine has network access to all target machines by host name and copy the SSH key to each target machine using the following command:

    sudo ssh-copy-id -i /home/<name of a user in the sudo group>/.ssh/id_rsa root@<host name of the target machine>

  4. Copy the archive with the kuma-ansible-installer-<version name>.tar.gz installer to the test machine and unpack it using the following command (about 2 GB of disk space is required):

    sudo tar -xpf kuma-ansible-installer-<version name>.tar.gz

The test machine is ready for the KUMA installation.
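
Before running the installer, you can optionally verify that key-based SSH access to the target machines works; the command should print the FQDN of the target machine without prompting for a password (use the sudo group user instead of root if SSH root access is blocked):

ssh root@<host name of the target machine> hostname -f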

Page top
[Topic 222083]

Preparing the target machine

To prepare the target machine for the installation of KUMA components:

  1. Ensure that hardware, software, and installation requirements are met.
  2. Specify the host name. We recommend specifying the FQDN. For example, kuma1.example.com.

    You should not change the KUMA host name after installation: this will make it impossible to verify the authenticity of certificates and will disrupt the network communication between the program components.

  3. Register the target machine in your organization's DNS zone to allow host names to be translated to IP addresses.

    If your organization does not use a DNS server, you can use the /etc/hosts file for name resolution. The content of the file can be generated automatically for each target machine when installing KUMA (see the example after this procedure).

  4. To get the hostname that you must specify when installing KUMA, run the following command and record the result:

    hostname -f

    The test machine must be able to access the target machine using this name.

The target machine is ready for the installation of KUMA components.
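
If you rely on the /etc/hosts file instead of DNS (see step 3 of this procedure), the entries generated during installation might look like this; the IP addresses and host names below are hypothetical:

10.0.0.10 kuma-core.example.com

10.0.0.20 kuma-collector1.example.com

10.0.0.30 kuma-storage1.example.com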

Page top
[Topic 217955]

Preparing the distributed.inventory.yml inventory file

To create the distributed.inventory.yml inventory file:

  1. Go to the KUMA installer folder by executing the following command:

    cd kuma-ansible-installer

  2. Copy the distributed.inventory.yml.template template and create an inventory file named distributed.inventory.yml:

    cp distributed.inventory.yml.template distributed.inventory.yml

  3. Edit the settings in the distributed.inventory.yml inventory file.

We recommend backing up the inventory file that you used to install the program. You can use it to add components to the system or remove KUMA.

Sample inventory file for distributed installation

Single to distributed.inventory.yml_example

Page top
[Topic 222085]

Installing the program in a distributed configuration

KUMA is installed using the Ansible tool and the YML inventory file. The installation is performed from the test machine; the KUMA components are installed on the target machines.

To install KUMA:

  1. On the test machine, open the folder containing the unpacked installer.

    cd kuma-ansible-installer

  2. Place the license key file in the <installer directory>/roles/kuma/files/ directory.

    The key file must be named license.key.

  3. Run the installer from the folder with the unpacked installer:

    sudo ./install.sh distributed.inventory.yml

  4. Accept the terms of the End User License Agreement.

    If you do not accept the terms of the End User License Agreement, the program will not be installed.

KUMA components are installed. The screen will display the URL of the KUMA web interface and the user name and password that must be used to access the web interface.

By default, the KUMA web interface address is https://<FQDN or IP address of the core component>:7220.

Default login credentials (after the first login, you must change the password of the admin account):
- user name — admin
- password— mustB3Ch@ng3d!

We recommend backing up the inventory file that you used to install the program. You can use it to add components to the system or remove KUMA.

Page top
[Topic 217914]

Modifying the self-signed web console certificate

Before changing the KUMA certificate, make sure to back up the previous certificate and key with the names external.cert.old and external.key.old respectively.

After installing the KUMA Core, the installer creates the following certificates in the /opt/kaspersky/kuma/core/certificates folder:

  • The self-signed root certificate ca.cert and the ca.key key.

    This certificate signs all other certificates that are used for internal communication between KUMA components.

  • The internal.cert certificate signed with the root certificate, and the internal.key key of the Core server.

    Used for internal communication between KUMA components.

  • KUMA web console external.cert certificate and external.key.

    Used in the KUMA web console and for REST API requests.

    You can use your company certificate and key instead of the self-signed web console certificate. For example, if you want to replace the self-signed Core certificate with a certificate issued by an enterprise CA, you must provide an external.cert and an unencrypted external.key in PEM format.

    The following example shows how to replace a self-signed CA Core certificate with an enterprise certificate in PFX format. You can use the instructions as an example and adapt the steps according to your needs.

To replace the KUMA web console certificate with an external certificate:

  1. Switch to root user operation:

    sudo -i

  2. Go to the certificates directory:

    cd /opt/kaspersky/kuma/core/certificates

  3. Make a backup copy of the current certificate and key:

    mv external.cert external.cert.old && mv external.key external.key.old

  4. In OpenSSL, convert the PFX file to a certificate and an encrypted key in PEM format:

    openssl pkcs12 -in kumaWebIssuedByCorporateCA.pfx -nokeys -out external.cert

    openssl pkcs12 -in kumaWebIssuedByCorporateCA.pfx -nocerts -nodes -out external.key

    When carrying out the command, you are required to specify the PFX key password (Enter Import Password).

    As a result, the external.cert certificate and the external.key in PEM format are returned.

  5. Place the returned external.cert certificate and external.key files in the /opt/kaspersky/kuma/core/certificates directory.
  6. Change the owner of the key files:

    chown kuma:kuma external.cert external.key

  7. Restart KUMA:

    systemctl restart kuma-core

  8. Refresh the web page or restart the browser hosting the KUMA web interface.

Your company certificate and key have been replaced.
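
You can additionally verify the installed certificate, for example by checking its subject and validity period with OpenSSL:

openssl x509 -in /opt/kaspersky/kuma/core/certificates/external.cert -noout -subject -dates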

Page top
[Topic 217747]

Distributed installation in a fault-tolerant configuration

A fault-tolerant KUMA configuration is achieved by deploying the KUMA Core in a Kubernetes cluster deployed by the KUMA installer.

The Kubernetes cluster configuration is defined in the inventory file. It must include one controller (dedicated or combined with a worker node), at least one worker node (dedicated or combined with a controller), and 0 or more dedicated worker nodes.

To install a fault-tolerant configuration of KUMA, you must use the kuma-ansible-installer-ha-<build number>.tar.gz installer.

When installing the fault-tolerant application configuration, the KUMA Core is placed into a Kubernetes cluster using the installer and the inventory file. The KUMA Core can be placed in a Kubernetes cluster in the following ways:

  • Install KUMA in a Kubernetes cluster.
  • Migrate the Core of the existing KUMA installation to the Kubernetes cluster.

In this section

About KUMA fault tolerance

Additional application installation requirements

Managing Kubernetes and accessing KUMA

Time zone in a Kubernetes cluster

Page top
[Topic 244396]

About KUMA fault tolerance

KUMA fault tolerance is ensured by deploying the KUMA Core in the Kubernetes cluster deployed by the KUMA installer, and by using an external TCP traffic balancer.

There are 2 possible roles for nodes in Kubernetes:

  • Controllers (control-plane)—nodes with this role manage the cluster, store metadata, and distribute the workload.
  • Workers—nodes with this role bear the workload by hosting KUMA processes.

Learn more about the requirements for cluster nodes.

For production installations of the KUMA Core in Kubernetes, it is critically important to allocate 3 separate nodes that have only the controller role. This provides fault tolerance for the Kubernetes cluster and ensures that the workload (KUMA processes and others) cannot affect the tasks associated with managing the Kubernetes cluster. If you are using virtualization tools, make sure that these nodes reside on different physical servers and that no worker nodes are located on the same physical servers.

In cases where KUMA is installed for demo purposes, nodes that combine the roles of a controller and worker node are allowed. However, if you are expanding an installation to a distributed installation, you must reinstall the entire Kubernetes cluster while allocating 3 separate nodes with the controller role and at least 2 nodes with the worker node role. KUMA cannot be upgraded to later versions if there are nodes that combine the roles of a controller and worker node.

You can combine different roles on the same cluster node only for demo deployment of the application.

KUMA Core availability under various scenarios:

  • Malfunction or network disconnection of the worker node where the KUMA Core service is deployed.

    Access to the KUMA web interface is lost. After 6 minutes, Kubernetes initiates the migration of the Core pod to an operational node of the cluster. After deployment is complete, which takes less than one minute, the KUMA web interface becomes available again via URLs that use the FQDN of the load balancer. To determine which host the Core is now running on, run the following command in the terminal of one of the controllers:

    k0s kubectl get pod -n kuma -o wide

    When the malfunctioning worker node or access to it is restored, the Core pod is not migrated from its current worker node. A restored node can participate in the replication of the disk volume of the Core service.

  • Malfunction or network disconnection of a worker node containing a replica of the KUMA Core drive on which the Core service is not currently deployed.

    Access to the KUMA web interface is not lost via URLs that use the FQDN of the load balancer. The network storage creates a replica of the running Core disk volume on other running nodes. When accessing KUMA via a URL with the FQDN of running nodes, there is no disruption.

  • Loss of availability of one or more cluster controllers when quorum is maintained.

    Worker nodes operate in normal mode. Access to KUMA is not disrupted. A failure of cluster controllers extensive enough to break quorum leads to the loss of control over the cluster.

    Correspondence of the number of machines in use to ensure fault tolerance

    • 1 controller installed: quorum of 1; admissible number of failed controllers: 0
    • 2 controllers installed: quorum of 2; admissible number of failed controllers: 0
    • 3 controllers installed: quorum of 2; admissible number of failed controllers: 1
    • 4 controllers installed: quorum of 3; admissible number of failed controllers: 1
    • 5 controllers installed: quorum of 3; admissible number of failed controllers: 2
    • 6 controllers installed: quorum of 4; admissible number of failed controllers: 2
    • 7 controllers installed: quorum of 4; admissible number of failed controllers: 3
    • 8 controllers installed: quorum of 5; admissible number of failed controllers: 3
    • 9 controllers installed: quorum of 5; admissible number of failed controllers: 4

  • Simultaneous failure of all Kubernetes cluster controllers.

    The cluster cannot be managed and therefore will have impaired performance.

  • Simultaneous loss of availability of all worker nodes of a cluster with replicas of the Core volume and the Core pod.

    Access to the KUMA web interface is lost. If all replicas are lost, information will be lost.

Page top
[Topic 244722]

Additional application installation requirements

To protect the KUMA network infrastructure using Kaspersky Endpoint Security for Linux, first install KUMA in a Kubernetes cluster and then deploy Kaspersky Endpoint Security for Linux.

When you install a fault-tolerant configuration of KUMA, the following requirements must be met:

  • General application installation requirements.
  • The hosts that are planned to be used for Kubernetes cluster nodes do not use IP addresses from the following Kubernetes blocks:
    • serviceCIDR: 10.96.0.0/12
    • podCIDR: 10.244.0.0/16

    Traffic to proxy servers must also be excluded for the addresses from these blocks.

  • The nginx load balancer is installed and configured (more details about configuring nginx). For example, you can use the following command for installation:

    sudo yum install nginx

    If you want nginx to be configured automatically during the KUMA installation, install nginx and provide access to it via SSH in the same way as for the Kubernetes cluster hosts.

    Example of an automatically created nginx configuration

    The installer creates the /etc/nginx/kuma_nginx_lb.conf configuration file. An example of the file contents is shown below. The upstream sections are generated dynamically and contain the IP addresses of the Kubernetes cluster controllers (in the example, 10.0.0.2-4 in the upstream kubeAPI_backend, upstream konnectivity_backend, controllerJoinAPI_backend sections) and the IP addresses of the worker nodes (in the example 10.0.1.2-3), for which the inventory file contains the "kaspersky.com/kuma-ingress=true" value for the extra_args variable.

    The "include /etc/nginx/kuma_nginx_lb.conf;" line is added to the end of the /etc/nginx/nginx.conf file to apply the generated configuration file.

    Configuration file example:

    # Ansible managed

    #

    # LB KUMA cluster

    #

     

    stream {

        server {

            listen          6443;

            proxy_pass      kubeAPI_backend;

        }

        server {

            listen          8132;

            proxy_pass      konnectivity_backend;

        }

        server {

            listen          9443;

            proxy_pass      controllerJoinAPI_backend;

        }

        server {

            listen          7209;

            proxy_pass      kuma-core-hierarchy_backend;

            proxy_timeout   86400s;

        }

        server {

            listen          7210;

            proxy_pass      kuma-core-services_backend;

            proxy_timeout   86400s;

        }

        server {

            listen          7220;

            proxy_pass      kuma-core-ui_backend;

            proxy_timeout   86400s;

        }

        server {

            listen          7222;

            proxy_pass      kuma-core-cybertrace_backend;

            proxy_timeout   86400s;

        }

        server {

            listen          7223;

            proxy_pass      kuma-core-rest_backend;

            proxy_timeout   86400s;

        }

        upstream kubeAPI_backend {

            server 10.0.0.2:6443;

            server 10.0.0.3:6443;

            server 10.0.0.4:6443;

        }

        upstream konnectivity_backend {

            server 10.0.0.2:8132;

            server 10.0.0.3:8132;

            server 10.0.0.4:8132;

        }

        upstream controllerJoinAPI_backend {

            server 10.0.0.2:9443;

            server 10.0.0.3:9443;

            server 10.0.0.4:9443;

        }

        upstream kuma-core-hierarchy_backend {

            server 10.0.1.2:7209;

            server 10.0.1.3:7209;

        }

        upstream kuma-core-services_backend {

            server 10.0.1.2:7210;

            server 10.0.1.3:7210;

        }

        upstream kuma-core-ui_backend {

            server 10.0.1.2:7220;

            server 10.0.1.3:7220;

        }

        upstream kuma-core-cybertrace_backend {

            server 10.0.1.2:7222;

            server 10.0.1.3:7222;

        }

        upstream kuma-core-rest_backend {

            server 10.0.1.2:7223;

            server 10.0.1.3:7223;

        }

    }

  • An access key from the device on which KUMA is installed is added to the load balancer server.
  • The SELinux module is NOT enabled on the balancer server in the operating system.
  • The tar, systemctl, setfacl packages are installed on the hosts.

During KUMA installation, the hosts are automatically checked to meet the following hardware requirements. If these conditions are not met, the installation is terminated.

For demonstration purposes, you can disable the check of these conditions during installation by specifying the low_resources: true variable in the inventory file.

  • Number of CPU cores (threads) – 12 or more.
  • RAM – 22,528 MB or more.
  • Available disk space in the /opt/ partition – 1,000 GB or more.
  • For the initial installation, the /var/lib/ partition must have at least 32 GB of available space. If the cluster is already installed on this node, the size of the required available space is reduced by the size of the /var/lib/k0s directory.
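
If you install KUMA for demonstration purposes and want to disable this check, you can add the low_resources variable mentioned above to the vars section of the all block of the inventory file, for example:

all:

  vars:

    low_resources: true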

Additional requirements for the application installation in the Astra Linux Special Edition operating system

  • Installing a fault-tolerant configuration of KUMA is supported for the Astra Linux Special Edition RUSB.10015-01 operating system (2022-1011SE17MD, update 1.7.2.UU.1). Linux kernel version 5.15.0.33 or higher is required.
  • The following packages are installed on the machines intended for deploying a Kubernetes cluster:
    • open-iscsi
    • wireguard
    • wireguard-tools

    The packages can be installed using the following command:

    sudo apt install open-iscsi wireguard wireguard-tools

Additional requirements for the application installation in the Oracle Linux operating system

The following packages are installed on the machines intended for deploying a Kubernetes cluster:

  • iscsi-initiator-utils
  • wireguard-tools

Before installing the packages, add the EPEL repository as a source: sudo yum install oracle-epel-release-el8.

The packages can be installed using the following command:

sudo yum install iscsi-initiator-utils wireguard-tools

Page top
[Topic 244399]

Managing Kubernetes and accessing KUMA

When installing KUMA in a fault-tolerant configuration, the file named artifacts/k0s-kubeconfig.yml is created in the installer directory. This file contains the details required for connecting to the created Kubernetes cluster. The same file is created on the main controller in the home directory of the user set as ansible_user in the inventory file.

To ensure that the Kubernetes cluster can be monitored and managed, the k0s-kubeconfig.yml file must be saved in a location available for the cluster administrators. Access to the file must be restricted.

Managing a Kubernetes cluster

To monitor and manage a cluster, you can use the k0s application that is installed on all cluster nodes during KUMA deployment. For example, you can use the following command to view the load on worker nodes:

k0s kubectl top nodes

Access to the KUMA Core

The KUMA Core can be accessed at the URL https://<worker node FQDN>:<worker node port>. Available ports: 7209, 7210, 7220, 7222, 7223. Port 7220 is used by default to connect to the KUMA Core web interface. Access can be obtained through any worker node whose extra_args parameter contains the value kaspersky.com/kuma-ingress=true.

It is not possible to log in to the KUMA web interface on multiple worker nodes simultaneously using the same account credentials. Only the most recently established connection remains active.

If you are using an external load balancer in a fault-tolerant Kubernetes cluster configuration, the ports of the KUMA Core are accessed via the FQDN of the load balancer.

Page top
[Topic 244730]

Time zone in a Kubernetes cluster

The time zone within a Kubernetes cluster is always UTC+0, so this time difference should be taken into account when handling data created by the KUMA Core deployed in a fault-tolerant configuration:

  • In audit events, the time zone is UTC+0 in the DeviceTimeZone field.
  • In generated reports, the user will see the difference between the time the report was generated and the time in the browser.
  • In the dashboard, the user will see the difference between the time in the widget (the time of the user's browser is displayed) and the time in the exported widget data in the CSV file (the time within the Kubernetes cluster is displayed).
Page top
[Topic 246518]

KUMA backup

KUMA allows you to back up the KUMA Core database and certificates. The backup function is intended for restoring KUMA. To move or copy the resources, use the resource export and import functions.

Backup can be done in the following ways:

  • Using the kuma executable file (see the KUMA backup using the kuma file section below).
  • Using the REST API.

Special considerations for KUMA backup

  • Data may only be restored from a backup if the backup is restored to a KUMA installation of the same version as the one from which the backup was made.
  • Backup of collectors is not required unless the collectors have an SQL connection. When restoring such collectors, you should revert to the original initial value of the ID.
  • If KUMA cannot start after the restore, it is recommended to reset the kuma database in MongoDB.

    How to reset a database in MongoDB

    If the KUMA Core fails to start after data recovery, the recovery must be performed again but this time the kuma database in MongoDB must be reset.

    To restore KUMA data and reset the MongoDB database:

    1. Log in to the OS of the server where the KUMA Core is installed.
    2. On the KUMA Core server, run the following command:

      sudo systemctl stop kuma-core

    3. Log in to MongoDB by running the following commands:
      1. cd /opt/kaspersky/kuma/mongodb/bin/
      2. ./mongo
    4. Reset the MongoDB database by running the following commands:
      1. use kuma
      2. db.dropDatabase()
    5. Log out of the MongoDB database by pressing Ctrl+C.
    6. Restore data from a backup copy by running the following command:

      sudo /opt/kaspersky/kuma/kuma tools restore --src <path to folder containing backup copy> --certificates

      The --certificates flag is optional and is used to restore certificates.

    7. Start KUMA by running the following command:

      sudo systemctl start kuma-core

    8. Rebuild the services using the recovered service resource sets.

    Data is restored from the backup.

See also:

REST API

In this section

KUMA backup using the kuma file

Page top
[Topic 222208]

KUMA backup using the kuma file

To perform a backup:

  1. Log in to the OS of the server where the KUMA Core is installed.
  2. Execute the following command of the kuma executable file:

    sudo /opt/kaspersky/kuma/kuma tools backup --dst <path to folder for backup copy> --certificates

The backup copy has been created.

To restore data from a backup:

  1. Log in to the OS of the server where the KUMA Core is installed.
  2. On the KUMA Core server, run the following command:

    sudo systemctl stop kuma-core

  3. Execute the following command:

    sudo /opt/kaspersky/kuma/kuma tools restore --src <path to folder containing backup copy> --certificates

  4. Start KUMA by running the following command:

    sudo systemctl start kuma-core

  5. In the KUMA web interface, in the Resources → Active services section, select all services and click the Reset certificate button.
  6. Reinstall the services with the same ports and IDs.

Data is restored from the backup.

Page top
[Topic 244996]

Modifying the configuration of KUMA

The following KUMA configuration changes can be performed.

  • Extending an all-in-one installation to a distributed installation.

    To expand an all-in-one installation to a distributed installation:

    1. Create a backup copy of KUMA.
    2. Remove the pre-installed correlator, collector, and storage services from the server.
      1. In the KUMA web interface, under Resources → Active services, select a service and click Copy ID. On the server where the services were installed, run the service removal command:

        sudo /opt/kaspersky/kuma/kuma <collector/correlator/storage> --id <service ID copied from the KUMA web interface> --uninstall

        Repeat the removal command for each service.

      2. Then remove the services in the KUMA web interface.

      As a result, only the KUMA Core remains on the initial installation server.

    3. Prepare the distributed.inventory.yml inventory file and in that file, specify the initial all-in-one installation server in the kuma_core group.

      In this way, the KUMA Core remains on the original server, and you can deploy the other components on other servers. Specify the servers on which you want to install the KUMA components in the inventory file.

      Sample inventory file for expanding an all-in-one installation to a distributed installation

      Single to distributed.inventory.yml_example

    4. Create and install the storage, collector, correlator, and agent services on other machines.
      1. After you specify the settings for all sections in the distributed.inventory.yml inventory file, run the installer on the test machine.

        sudo ./install.sh distributed.inventory.yml

        Running the command causes the files necessary to install the KUMA components (storages, collectors, correlators) to appear on each target machine specified in the distributed.inventory.yml inventory file.

      2. Create storage, collector, and correlator services.

    The expansion of the installation is completed.

  • Adding servers for collectors to a distributed installation.

    The following instructions show how to add one or more servers to an existing infrastructure and then install collectors on these servers to balance the load. You can use these instructions as an example and adapt them to your requirements.

    To add servers to a distributed installation:

    1. Ensure that the target machines meet hardware, software, and installation requirements.
    2. On the test machine, go to the directory with the unpacked KUMA installer by running the following command:

      cd kuma-ansible-installer

    3. Copy the expand.inventory.yml.template template to create an inventory file called expand.inventory.yml:

      cp expand.inventory.yml.template expand.inventory.yml

    4. Edit the settings in the expand.inventory.yml inventory file and specify the servers that you want to add in the kuma_collector section.

      Sample expand.inventory.yml inventory file for adding collector servers

      Expand.inventory.yml_add_collector_example

    5. On the test machine start expand.inventory.playbook by running the following command as root in the directory with the unpacked installer:

      PYTHONPATH="$(pwd)/ansible/site-packages:${PYTHONPATH}" python3 ./ansible/bin/ansible-playbook -i expand.inventory.yml expand.inventory.playbook.yml

      Running this command creates, on each target machine specified in the expand.inventory.yml inventory file, the files required for creating and installing the collector.

    6. Create and install the collectors. A KUMA collector consists of a client part and a server part, therefore creating a collector involves two steps.
      1. Creating the client part of the collector, which includes a set of resources and the collector service.

        To create a set of resources for a collector, in the KUMA web interface, under Resources → Collectors, click Add collector and edit the settings. For more details, see Creating a collector.

        At the last step of the configuration wizard, after you click Create and save, a resource set for the collector is created and the collector service is automatically created. The command for installing the service on the server is also automatically generated and displayed on the screen. Copy the installation command and proceed to the next step.

      2. Creating the server part of the collector.
      1. On the target machine, run the command you copied at the previous step. The command looks as follows, but all parameters are filled in automatically.

        sudo /opt/kaspersky/kuma/kuma collector --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --install

        The collector service is installed on the target machine. You can check the status of the service in the web interface under Resources → Active services.

      2. Run the same command on each target machine specified in the expand.inventory.yml inventory file.
    7. Specify the added servers in the distributed.inventory.yml inventory file so that it has up-to-date information in case of a KUMA update.

    Servers are successfully added.

  • Adding servers for correlators to a distributed installation.

    The following instructions show how to add one or more servers to an existing infrastructure and then install correlators on these servers to balance the load. You can use these instructions as an example and adapt them to your requirements.

    To add servers to a distributed installation:

    1. Ensure that the target machines meet hardware, software, and installation requirements.
    2. On the test machine, go to the directory with the unpacked KUMA installer by running the following command:

      cd kuma-ansible-installer

    3. Copy the expand.inventory.yml.template template to create an inventory file called expand.inventory.yml:

      cp expand.inventory.yml.template expand.inventory.yml

    4. Edit the settings in the expand.inventory.yml inventory file and specify the servers that you want to add in the kuma_correlator section.

      Sample expand.inventory.yml inventory file for adding correlator servers

      Expand.inventory.yml_add_correlator_example

    5. On the test machine, start expand.inventory.playbook by running the following command as root in the directory with the unpacked installer:

      PYTHONPATH="$(pwd)/ansible/site-packages:${PYTHONPATH}" python3 ./ansible/bin/ansible-playbook -i expand.inventory.yml expand.inventory.playbook.yml

      Running this command creates the files required for creating and installing the correlator on each target machine specified in the expand.inventory.yml inventory file.

    6. Create and install the correlators. A KUMA correlator consists of a client part and a server part, therefore creating a correlator involves two steps.
      1. Creating the client part of the correlator, which includes a set of resources and the correlator service.

        To create a resource set for a correlator, in the KUMA web interface, under Resources → Correlators, click Add correlator and edit the settings. For more details, see Creating a correlator.

        At the last step of the configuration wizard, after you click Create and save, a resource set for the correlator is created and the correlator service is automatically created. The command for installing the service on the server is also automatically generated and displayed on the screen. Copy the installation command and proceed to the next step.

      2. Creating the server part of the correlator.
      1. On the target machine, run the command you copied at the previous step. The command looks as follows, but all parameter values are assigned automatically.

        sudo /opt/kaspersky/kuma/kuma <correlator> --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --install

        The correlator service is installed on the target machine. You can check the status of the service in the web interface under Resources → Active services; an optional command-line check is sketched after this procedure.

      2. Run the same command on each target machine specified in the expand.inventory.yml inventory file.
    7. Specify the added servers in the distributed.inventory.yml inventory file so that it has up-to-date information in case of a KUMA update.

    Servers are successfully added.
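
    If you also want to confirm the installation from the command line, you can look for the service's systemd unit on the target machine. This is an optional sketch; it assumes that correlator units follow the same kuma-<service type>-<service ID> naming pattern as the kuma-storage-<ID> units mentioned later in this document, which you should verify on your host.

      # Optional check on the target machine; the unit name pattern is an assumption.
      systemctl list-units 'kuma-correlator-*' --type=service --no-pager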

  • Adding servers to an existing storage cluster.

    The following instructions show how to add multiple servers to an existing storage cluster. You can use these instructions as an example and adapt them to your requirements.

    To add servers to an existing storage cluster:

    1. Ensure that the target machines meet hardware, software, and installation requirements.
    2. On the test machine, go to the directory with the unpacked KUMA installer by running the following command:

      cd kuma-ansible-installer

    3. Copy the expand.inventory.yml.template template to create an inventory file called expand.inventory.yml:

      cp expand.inventory.yml.template expand.inventory.yml

    4. Edit the settings in the expand.inventory.yml inventory file and specify the servers that you want to add in the 'storage' section. In the following example, the 'storage' section specifies servers for installing two shards, each of which contains two replicas. In the expand.inventory.yml inventory file, you only need to specify the FQDNs; the roles of shards and replicas are assigned later in the KUMA web interface, as described in these instructions. You can adapt this example to suit your needs.

      Sample expand.inventory.yml inventory file for adding servers to an existing storage cluster


    5. On the test machine, start the expand.inventory.playbook.yml playbook by running the following command as root in the directory with the unpacked installer:

      PYTHONPATH="$(pwd)/ansible/site-packages:${PYTHONPATH}" python3 ./ansible/bin/ansible-playbook -i expand.inventory.yml expand.inventory.playbook.yml

      Running this command creates the files required for creating and installing the storage on each target machine specified in the expand.inventory.yml inventory file.

    6. You do not need to create a separate storage because you are adding servers to an existing storage cluster. You must edit the storage settings of the existing cluster:
      1. In the Resources → Storages section, select an existing storage and open the storage for editing.
      2. In the ClickHouse cluster nodes section, click Add nodes and specify roles in the fields for the new node. The following example shows how to specify identifiers to add two shards, containing two replicas each, to an existing cluster. You can adapt the example to suit your needs.

        Example:

        ClickHouse cluster nodes

        <existing nodes>

        FQDN: kuma-storage-cluster1server8.example.com
        Shard ID: 1
        Replica ID: 1
        Keeper ID: 0

        FQDN: kuma-storage-cluster1server9.example.com
        Shard ID: 1
        Replica ID: 2
        Keeper ID: 0

        FQDN: kuma-storage-cluster1server10.example.com
        Shard ID: 2
        Replica ID: 1
        Keeper ID: 0

        FQDN: kuma-storage-cluster1server11.example.com
        Shard ID: 2
        Replica ID: 2
        Keeper ID: 0

      3. Save the storage settings.

        Now you can create storage services for each ClickHouse cluster node.

    7. To create a storage service, in the KUMA web interface, in the Resources → Active services section, click Add service.

      This opens the Choose a service window; in that window, select the storage you edited at the previous step and click Create service. Do the same for each ClickHouse storage node you are adding.

      As a result, the number of created services must be the same as the number of nodes added to the ClickHouse cluster, that is, four services for four nodes. The created storage services are displayed in the KUMA web interface in the Resources → Active services section.

    8. Now storage services must be installed on each server by using the service ID.
      1. In the KUMA web interface, in the Resources → Active services section, select the storage service that you need and click Copy ID.

        The service ID is copied to the clipboard; you need it for running the service installation command.

      2. Compose and run the following command on the target machine:

        sudo /opt/kaspersky/kuma/kuma <storage> --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --install

        The storage service is installed on the target machine. You can check the status of the service in the web interface under Resources → Active services.

      3. Run the storage service installation command on each target machine listed in the 'storage' section of the expand.inventory.yml inventory file, one machine at a time. In each command, specify the service ID that is unique to that machine within the cluster (a scripted example is sketched after this procedure).
    9. To apply changes to a running cluster, in the KUMA web interface, under Resources → Active services, select the check box next to all storage services in the cluster that you are expanding and click Update configuration. Changes are applied without stopping services.
    10. Specify the added servers in the distributed.inventory.yml inventory file so that it has up-to-date information in case of a KUMA update.

    Servers are successfully added to a storage cluster.
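
    If you prefer to drive step 8 from the test machine, the installation commands can be issued over SSH, one node at a time. This is only a sketch: the host names match the example above, the service IDs are placeholders that must be copied from Resources → Active services for each node, and the service type is assumed to be passed literally as storage.

      # Install each node's storage service with that node's own service ID (placeholders must be replaced).
      ssh root@kuma-storage-cluster1server8.example.com "/opt/kaspersky/kuma/kuma storage --core https://<KUMA Core server FQDN>:7210 --id <service ID for server8> --install"
      ssh root@kuma-storage-cluster1server9.example.com "/opt/kaspersky/kuma/kuma storage --core https://<KUMA Core server FQDN>:7210 --id <service ID for server9> --install"
      # Repeat for server10 and server11 with their own service IDs.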

  • Adding an additional storage cluster.

    The following instructions show how to add an additional storage cluster to existing infrastructure. You can use these instructions as an example and adapt them to your requirements.

    To add an additional storage cluster:

    1. Ensure that the target machines meet hardware, software, and installation requirements.
    2. On the test machine, go to the directory with the unpacked KUMA installer by running the following command:

      cd kuma-ansible-installer

    3. Copy the expand.inventory.yml.template template to create an inventory file called expand.inventory.yml:

      cp expand.inventory.yml.template expand.inventory.yml

    4. Edit the settings in the expand.inventory.yml inventory file and specify the servers that you want to add in the 'storage' section. In the following example, the 'storage' section specifies servers for installing three dedicated keepers and two shards, each of which contains two replicas. In the expand.inventory.yml inventory file, you only need to specify the FQDNs; the roles of keepers, shards, and replicas are assigned later in the KUMA web interface, as described in these instructions. You can adapt this example to suit your needs.

      Sample expand.inventory.yml inventory file for adding an additional storage cluster


    5. On the test machine, start the expand.inventory.playbook.yml playbook by running the following command as root in the directory with the unpacked installer:

      PYTHONPATH="$(pwd)/ansible/site-packages:${PYTHONPATH}" python3 ./ansible/bin/ansible-playbook -i expand.inventory.yml expand.inventory.playbook.yml

      Running this command creates the files required for creating and installing the storage on each target machine specified in the expand.inventory.yml inventory file.

    6. Create and install a storage. You must create a separate storage for each storage cluster (for example, three storages for three storage clusters). A storage consists of a client part and a server part, so creating a storage involves two steps.
      1. Creating the client part of the storage, which includes a set of resources and the storage service.
        1. To create a resource set for a storage, in the KUMA web interface, under Resources → Storages, click Add storage and edit the settings. In the ClickHouse cluster nodes section, specify roles for each server that you are adding: keeper, shard, replica. For more details, see Creating a set of resources for a storage.

          The created set of resources for the storage is displayed in the Resources → Storages section. Now you can create storage services for each ClickHouse cluster node.

        2. To create a storage service, in the KUMA web interface, in the Resources → Active services section, click Add service.

          This opens the Choose a service window; in that window, select the set of resources that you created for the storage at the previous step and click Create service. Do the same for each node of the ClickHouse cluster.

          As a result, the number of created services must be the same as the number of nodes in the ClickHouse cluster; in the example above, that is seven services for seven nodes (three dedicated keepers plus two shards with two replicas each). The created storage services are displayed in the KUMA web interface in the Resources → Active services section. Now storage services must be installed on each node of the ClickHouse cluster by using the service ID.

      2. Creating the server part of the storage.
      1. On the target machine, create the server part of the storage: in the KUMA web interface, in the Resources → Active services section, select the relevant storage service and click Copy ID.

        The service ID is copied to the clipboard; you need it for running the service installation command.

      2. Compose and run the following command on the target machine:

        sudo /opt/kaspersky/kuma/kuma <storage> --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --install

        The storage service is installed on the target machine. You can check the status of the service in the web interface under Resources → Active services.

      3. Run the storage service installation command on each target machine listed in the 'storage' section of the expand.inventory.yml inventory file, one machine at a time. On each machine, the unique service ID within the cluster must be specified in the installation command.
      4. Dedicated keepers are started automatically immediately after installation and are displayed in the Resources → Active services section with a green status. Services on other storage nodes may not start until services have been installed for all nodes of that cluster; until then, those services may be displayed with a red status. This is normal behavior when creating a new storage cluster or adding nodes to an existing storage cluster. As soon as services are installed on all nodes of the cluster, all services acquire the green status (a command-line verification sketch follows this procedure).
    7. Specify the added servers in the distributed.inventory.yml inventory file so that it has up-to-date information in case of a KUMA update.

    An additional storage cluster is successfully added.
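
    After installing the services on all nodes of the new cluster, you can optionally confirm from the command line that the corresponding units are running. This is a sketch under two assumptions: you have SSH access as root to the new nodes (the host names below are placeholders), and the storage units follow the kuma-storage-<service ID> naming shown later in this document.

      # Check the KUMA storage units on every node of the new cluster.
      for host in new-storage-node1.example.com new-storage-node2.example.com; do
        ssh root@"$host" "systemctl list-units 'kuma-storage-*' --type=service --no-pager"
      done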

  • Removing servers from a distributed installation.

    To remove a server from a distributed installation:

    1. Remove all services from the server that you want to remove from the distributed installation.
      1. Remove the server part of the service. Copy the service ID in the KUMA web interface and run the following command on the target machine (a filled-in example is sketched at the end of this procedure):

        sudo /opt/kaspersky/kuma/kuma <collector/correlator/storage> --id <service ID copied from the KUMA web interface> --uninstall

      2. Remove the client part of the service in the KUMA web interface in the Resources → Active services → Delete section.

        The service is removed.

    2. Repeat step 1 for each server that you want to remove from the infrastructure.
    3. Remove servers from the relevant sections of the distributed.inventory.yml inventory file to make sure the inventory file has up-to-date information in case of a KUMA update.

    The servers are removed from the distributed installation.
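
    For reference, a filled-in removal command for a collector service might look like the following. This is only a sketch: the service ID placeholder must be replaced with the ID copied from the web interface, and the service type is assumed to be passed literally.

      sudo /opt/kaspersky/kuma/kuma collector --id <service ID copied from the KUMA web interface> --uninstall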

  • Removing a storage cluster from a distributed installation.

    To remove one or more storage clusters from a distributed installation:

    1. Remove the storage service on each cluster server that you want to remove from the distributed installation.
      1. Remove the server part of the storage service. Copy the service ID in the KUMA web interface and run the following command on the target machine:

        sudo /opt/kaspersky/kuma/kuma <storage> --id <service ID> --uninstall

        Repeat for each server.

      2. Remove the client part of the service in the KUMA web interface in the Resources → Active services → Delete section.

        The service is removed.

    2. Remove servers from the 'storage' section of the distributed.inventory.yml inventory file to make sure the inventory file has up-to-date information in case of a KUMA update or a configuration change.

    The cluster is removed from the distributed installation.

  • Migrating the KUMA Core to a new Kubernetes cluster.

    Preparing the inventory file

    When migrating the KUMA Core to a Kubernetes cluster, it is recommended to use the template file named k0s.inventory.yml.template when creating the inventory file.

    The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your inventory file must contain the same hosts that were used when upgrading KUMA from version 2.0.x to version 2.1 or when performing a new installation of the application. In the inventory file, set the deploy_to_k8s, need_transfer, and airgap parameters to true. The deploy_example_services parameter must be set to false.

    Example inventory file with 1 dedicated controller and 2 worker nodes

    all:
      vars:
        ansible_connection: ssh
        ansible_user: root
        deploy_to_k8s: True
        need_transfer: True
        airgap: True
        deploy_example_services: False
    kuma:
      children:
        kuma_core:
          hosts:
            kuma.example.com:
              mongo_log_archives_number: 14
              mongo_log_frequency_rotation: daily
              mongo_log_file_size: 1G
        kuma_collector:
          hosts:
            kuma.example.com:
        kuma_correlator:
          hosts:
            kuma.example.com:
        kuma_storage:
          hosts:
            kuma.example.com:
              shard: 1
              replica: 1
              keeper: 1
    kuma_k0s:
      children:
        kuma_control_plane_master:
          hosts:
            kuma2.example.com:
              ansible_host: 10.0.1.10
        kuma_control_plane_master_worker:
        kuma_control_plane:
        kuma_control_plane_worker:
        kuma_worker:
          hosts:
            kuma.example.com:
              ansible_host: 10.0.1.11
              extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true"
            kuma3.example.com:
              ansible_host: 10.0.1.12
              extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true"

    Migrating the KUMA Core to a new Kubernetes cluster

    When the installer is started with this inventory file, it searches for an installed KUMA Core on all hosts where you plan to deploy worker nodes of the cluster. If a Core is found, it is moved from its host into the newly created Kubernetes cluster.
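
    Starting the migration itself comes down to running the installer with this inventory file from the directory with the unpacked fault-tolerant installer. This is a minimal sketch; it assumes that install.sh takes the inventory file as its argument, mirroring the uninstall.sh call shown later in this document.

      # Run as root from the directory with the unpacked installer.
      sudo ./install.sh k0s.inventory.yml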

    If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually rebuilt with the new Core in the KUMA web interface.

    Certificates for collectors, correlators, and storages are re-issued based on the inventory file so that these components can communicate with the Core inside the cluster. This does not change the Core URL for the components.

    On the Core host, the installer does the following:

    • Removes the following systemd services from the host: kuma-core, kuma-mongodb, kuma-victoria-metrics, kuma-vmalert, and kuma-grafana.
    • Deletes the internal certificate of the Core.
    • Deletes the certificate files of all other components and deletes their records from MongoDB.
    • Deletes the following directories:
      • /opt/kaspersky/kuma/core/bin
      • /opt/kaspersky/kuma/core/certificates
      • /opt/kaspersky/kuma/core/log
      • /opt/kaspersky/kuma/core/logs
      • /opt/kaspersky/kuma/grafana/bin
      • /opt/kaspersky/kuma/mongodb/bin
      • /opt/kaspersky/kuma/mongodb/log
      • /opt/kaspersky/kuma/victoria-metrics/bin
    • Migrates data from the Core and its dependencies to a network drive within the Kubernetes cluster.
    • On the Core host, it migrates the following directories:
      • /opt/kaspersky/kuma/core
      • /opt/kaspersky/kuma/grafana
      • /opt/kaspersky/kuma/mongodb
      • /opt/kaspersky/kuma/victoria-metrics

      to the following directories:

      • /opt/kaspersky/kuma/core.moved
      • /opt/kaspersky/kuma/grafana.moved
      • /opt/kaspersky/kuma/mongodb.moved
      • /opt/kaspersky/kuma/victoria-metrics.moved

      After you have verified that the Core was correctly migrated to the cluster, these directories can be deleted.

    If you encounter problems with the migration, analyze the logs of the core-transfer migration task in the kuma namespace in the cluster (this task is available for 1 hour after the migration).
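
    A minimal sketch of such an analysis, assuming you have kubectl access to the cluster (for example, through the k0s kubectl wrapper on a controller host); the kuma namespace and the core-transfer name are taken from the paragraph above, and the pod name is a placeholder.

      # List the pods in the kuma namespace and read the logs of the core-transfer task's pod.
      k0s kubectl -n kuma get pods
      k0s kubectl -n kuma logs <name of the core-transfer pod from the previous command>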

    If you need to perform the migration again, rename the /opt/kaspersky/kuma/*.moved directories back to their original names.
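
    For example, before a repeated migration attempt, the directories listed above can be renamed back as follows (run as root on the Core host):

      for d in core grafana mongodb victoria-metrics; do
        mv /opt/kaspersky/kuma/"$d".moved /opt/kaspersky/kuma/"$d"
      done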

    If the /etc/hosts file on the Core host contained lines that are not related to addresses in the 127.X.X.X range, the contents of that /etc/hosts file are entered into the CoreDNS ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of /etc/hosts from the host where the primary controller is deployed are entered into the ConfigMap.

Page top
[Topic 222160]

Updating previous versions of KUMA

The update is performed the same way on all hosts by using the installer and the inventory file. If you are using version 1.5 or 1.6 and want to update to KUMA 2.1.x, first update to version 2.0.x, and then update from 2.0.x to 2.1.x.

Upgrading from version 2.0.x to 2.1.x

To install KUMA version 2.1.x over version 2.0.x, complete the preliminary steps and then update.

Preliminary steps

  1. Create a backup copy of the KUMA Core.
  2. Make sure that all application installation requirements are met.
  3. Make sure that the MongoDB versions are compatible by running the following sequence of commands on the device where the KUMA Core is located (a non-interactive variant is sketched after these preliminary steps):

    cd /opt/kaspersky/kuma/mongodb/bin/

    ./mongo

    use kuma

    db.adminCommand({getParameter: 1, featureCompatibilityVersion: 1})

    If the component version is different from 4.4, set the value to 4.4 using the following command: 

    db.adminCommand({ setFeatureCompatibilityVersion: "4.4" })

  4. During installation or update, ensure network accessibility of TCP port 7220 on the KUMA Core for the KUMA storage hosts.
  5. If you have a keeper deployed on a separate device in the ClickHouse cluster, install the storage service on the same device before performing the update:
    • Use the existing storage of the cluster to create a storage service for the keeper in the web interface.
    • Install the service on a device with a dedicated ClickHouse keeper.
  6. In the inventory file, specify the same hosts that were used when installing KUMA version 2.0.X. Set the following settings to false:

    deploy_to_k8s: false
    need_transfer: false
    deploy_example_services: false

    When the installer uses this inventory file, all KUMA components are upgraded to version 2.1.0. The available services and storage resources are also reconfigured on hosts from the kuma_storage group:

    • ClickHouse systemd services are deleted.
    • Certificates are deleted from the /opt/kaspersky/kuma/clickhouse/certificates directory.
    • The Shard ID, Replica ID, Keeper ID, and ClickHouse configuration override fields are filled in for each node in the storage resource based on values from the inventory and configuration files of the service on the host. Subsequently, you will manage the roles of each node in the KUMA web interface.
    • All existing configuration files from the /opt/kaspersky/kuma/clickhouse/cfg directory are deleted (they will be subsequently generated by the storage service).
    • The value of the LimitNOFILE parameter (Service section) is changed from 64,000 to 500,000 in the kuma-storage systemd services.
  7. If you use alert segmentation rules, prepare the data for migrating the existing rules and save it. You can then use this data to re-create the rules at the final stage of preparing KUMA for work. During the update, alert segmentation rules are not migrated automatically.
  8. To perform an update, you need a valid password from the admin user. If you forgot the admin user password, contact Technical Support to reset the current password and use the new password to perform the update at the next step.
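
As noted in step 3 of the preliminary steps, the MongoDB compatibility check can also be run non-interactively. This is a sketch that assumes the bundled legacy mongo shell supports the standard --quiet and --eval options; the interactive sequence in step 3 remains the reference procedure.

    /opt/kaspersky/kuma/mongodb/bin/mongo kuma --quiet --eval 'db.adminCommand({getParameter: 1, featureCompatibilityVersion: 1})'
    # If the reported version differs from 4.4, set it to 4.4 (same as the interactive command in step 3):
    /opt/kaspersky/kuma/mongodb/bin/mongo kuma --quiet --eval 'db.adminCommand({setFeatureCompatibilityVersion: "4.4"})'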

Updating KUMA

  1. If you have a ready-made inventory file, follow the instructions for distributed installation of the program; a minimal example of launching the installer is sketched after this list.

    If an inventory file is not available for the current version, use the provided inventory file template and fill in the corresponding settings. To view the list of hosts and host roles in the current KUMA system, go to the Resources → Active services section in the web interface.

  2. When upgrading on systems that contain large amounts of data and are operating with limited resources, the system may return the 'Wrong admin password' error message after you enter the administrator password. If you specify the correct password, KUMA may still return this error because it could not start the Core service due to resource limits and a start timeout. If you enter the administrator password three times without waiting for the installation to complete, the update may end with a fatal error. Resolve the timeout error as described in Troubleshooting update errors to proceed with the update.
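
If you already have an up-to-date inventory file, launching the update looks the same as launching a distributed installation. A minimal sketch, assuming the installer archive was unpacked into kuma-ansible-installer, your inventory file is named distributed.inventory.yml, and install.sh takes the inventory file as its argument, mirroring the uninstall.sh call shown later in this document:

    cd kuma-ansible-installer
    sudo ./install.sh distributed.inventory.yml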

The final stage of preparing KUMA for work

  1. After updating KUMA, you must clear your browser cache.
  2. Re-create the alert segmentation rules.
  3. Manually update the KUMA agents.

KUMA update completed successfully.

Upgrading from version 2.1.x to 2.1.3

To install KUMA version 2.1.3 over version 2.1.x, complete the preliminary steps and then update.

Preliminary steps

  1. Create a backup copy of the KUMA Core.
  2. Make sure that all application installation requirements are met.
  3. During installation or update, ensure network accessibility of TCP port 7220 on the KUMA Core for the KUMA storage hosts.
  4. To perform an update, you need a valid password from the admin user. If you forgot the admin user password, contact Technical Support to reset the current password and use the new password to perform the update at the next step.

Updating KUMA

  1. If you have a ready-made inventory file, follow the instructions for distributed installation of the program.

    If an inventory file is not available for the current version, use the provided inventory file template and fill in the corresponding settings. To view the list of hosts and host roles in the current KUMA system, go to the Resources → Active services section in the web interface.

  2. When upgrading on systems that contain large amounts of data and are operating with limited resources, the system may return the 'Wrong admin password' error message after you enter the administrator password. If you specify the correct password, KUMA may still return this error because it could not start the Core service due to resource limits and a start timeout. If you enter the administrator password three times without waiting for the installation to complete, the update may end with a fatal error. Resolve the timeout error as described in Troubleshooting update errors to proceed with the update.

The final stage of preparing KUMA for work

  1. After updating KUMA, you must clear your browser cache.
  2. Manually update the KUMA agents.

KUMA update completed successfully.

Page top
[Topic 222156]

Troubleshooting update errors

When updating KUMA, you may encounter the following errors:

  • Timeout error

    When upgrading from version 2.0.x on systems that contain large amounts of data and are operating with limited resources, the system may return the Wrong admin password error message after you enter the administrator password. If you specify the correct password, KUMA may still return an error because KUMA could not start the Core service due to resource limit and a timeout error. If you enter the administrator password three times without waiting for the installation to complete, the update may end with a fatal error.

    Follow these steps to resolve the timeout error and successfully complete the update:

    1. Open a separate second terminal and run the following command to verify that the command output contains the timeout error line:

      journalctl -u kuma-core | grep 'start operation timed out' 

      Timeout error message:

      kuma-core.service: start operation timed out. Terminating.

    2. After you find the timeout error message, open the /usr/lib/systemd/system/kuma-core.service file and change the value of the TimeoutSec parameter from 300 to 0 to remove the timeout limit and temporarily prevent the error from recurring. This edit can also be scripted; see the sketch after these steps.
    3. After modifying the service file, run the following commands in sequence:

      systemctl daemon-reload

      service kuma-core restart

    4. After executing the commands and successfully starting the service in the second terminal, enter the administrator password again in the original first terminal when the installer prompts you for the password.

      KUMA will continue the installation. In resource-limited environments, installation may take up to an hour.

    5. After installation finishes successfully, in the /usr/lib/systemd/system/kuma-core.service file, set the TimeoutSec parameter back to 300.
    6. After modifying the service file, run the following commands in the second terminal:

      systemctl daemon-reload

      service kuma-core restart

    After the commands are executed, the update will be completed.
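
    Steps 2 and 3 can also be scripted. This is a sketch that assumes the unit file contains a line of the form TimeoutSec=300; verify the actual line before changing it, and revert the value as described in step 5 once the update completes.

      # Temporarily remove the kuma-core start timeout (assumes the line reads TimeoutSec=300).
      sed -i 's/^TimeoutSec=300$/TimeoutSec=0/' /usr/lib/systemd/system/kuma-core.service
      systemctl daemon-reload
      service kuma-core restart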

  • Invalid administrator password

    The admin user password is required to automatically populate the storage settings during an update. If you entered an incorrect admin user password nine times during the TASK [Prompt for admin password], the installer still performs the update, and the web interface is still available. However, the storage settings are not migrated, and the storages will show a red status.

    To fix the error and make the storages available for use, update the storage settings:

    1. Go to the storage settings, manually fill in the ClickHouse cluster fields, and click Save.
    2. Restart the storage service.

    The storage service will start with the specified parameters and will show a green status.

  • DB::Exception error

    After updating KUMA, the storage may be in a red status, and its logs may show errors about suspicious strings.

    Example error:

    DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, int, bool) @ 0xda0553a in /opt/kaspersky/kuma/clickhouse/bin/clickhouse

    To restart ClickHouse, run the following command on the KUMA storage server:

    touch /opt/kaspersky/kuma/clickhouse/data/flags/force_restore_data && systemctl restart kuma-storage-<ID of the storage where the error was detected>

Fix the errors to successfully complete the update.

Page top
[Topic 247287]

Delete KUMA

To remove KUMA, use the Ansible tool and the user-generated inventory file.

To remove KUMA:

  1. On the test machine, go to the installer folder:

    cd kuma-ansible-installer

  2. Execute the following command:

    sudo ./uninstall.sh <inventory file>

KUMA and all of the program data will be removed from the server.

The databases that were used by KUMA (for example, the ClickHouse storage database) and the information they contain must be deleted separately.

Special considerations for removing a fault-tolerant configuration of KUMA

The composition of the removed components depends on the value of the deploy_to_k8s parameter in the inventory file used to remove KUMA:

  • true – the Kubernetes cluster created during the KUMA installation is deleted.
  • false – all KUMA components except for the Core are deleted from the Kubernetes cluster. The cluster is not deleted.

In addition to the KUMA components installed outside the cluster, the following directories and files are deleted on the cluster nodes:

  • /usr/bin/k0s
  • /etc/k0s/
  • /var/lib/k0s/
  • /usr/libexec/k0s/
  • ~/k0s/ (for the ansible_user)
  • /opt/longhorn/
  • /opt/cni/
  • /opt/containerd

When a cluster is being deleted, error messages may appear; however, they do not interrupt the installer.

  • You can ignore such messages for the Delete KUMA transfer job and Delete KUMA pod tasks.
  • For the Reset k0s task (if the error message contains the text "To ensure a full reset, a node reboot is recommended.") and the Delete k0s Directories and files task (if the error message contains the text "I/O error: '/var/lib/k0s/kubelet/plugins/kubernetes.io/csi/driver.longhorn.io/"), we recommend restarting the host that the error relates to and then trying to uninstall KUMA again with the same inventory file.

After removing KUMA, restart the hosts on which the KUMA or Kubernetes components were installed.

Page top
[Topic 217962]