Contents
- Kaspersky Next XDR Expert maintenance
- Updating Kaspersky Next XDR Expert components
- Versioning the configuration file
- Removing Kaspersky Next XDR Expert components and management web plug-ins
- Reinstalling Kaspersky Next XDR Expert after a failed installation
- Stopping the Kubernetes cluster nodes
- Using certificates for public Kaspersky Next XDR Expert services
- Modifying the self-signed KUMA Console certificate
- Calculation and changing of disk space for storing Administration Server data
- Rotation of secrets
- Adding hosts for installing the additional KUMA services
- Replacing a host that uses KUMA storage
Kaspersky Next XDR Expert maintenance
This section describes updating, removing, and reinstalling Kaspersky Next XDR Expert components by using KDT. The section also provides instructions on how to stop the Kubernetes cluster nodes, update custom certificates for the public Kaspersky Next XDR Expert services, obtain the current version of the configuration file, and perform other actions with Kaspersky Next XDR Expert components by using KDT.
Updating Kaspersky Next XDR Expert components
KDT allows you to update the Kaspersky Next XDR Expert components (including management web plug-ins). New versions of the Kaspersky Next XDR Expert components are included in the distribution package.
Installing components of an earlier version is not supported.
To update the Kaspersky Next XDR Expert components:
- Download the distribution package with the new versions of the Kaspersky Next XDR Expert components.
- If necessary, on the administrator host, export the current version of the configuration file.
You do not need to export the configuration file if the installation parameters are not added or modified.
- Update the Kaspersky Next XDR Expert components:
- Run the following command for standard updating of the Kaspersky Next XDR Expert components:
./kdt apply -k <path_to_XDR_updates_archive> -i <path_to_configuration_file>
- If the version of an installed Kaspersky Next XDR Expert component matches the component version in the distribution package, the update of this component is skipped. To force the update of such a component, run the following command with the force flag:
./kdt apply --force -k <path_to_XDR_updates_archive> -i <path_to_configuration_file>
- If the distribution package contains a new version of the Bootstrap component, run the following command to update the Kubernetes cluster:
./kdt apply -k <path_to_XDR_updates_archive> -i <path_to_configuration_file> --force-bootstrap
In the commands described above, you need to specify the path to the archive with the component updates and the path to the current configuration file. You can omit the path to the configuration file if the installation parameters have not been added or modified.
- Read the End User License Agreement (EULA) and the Privacy Policy of the Kaspersky Next XDR Expert component, if a new version of the EULA and the Privacy Policy appears. The text is displayed in the command line window. Press the space bar to view the next text segment. Then, when prompted, enter the following values:
- Enter y if you understand and accept the terms of the EULA. Enter n if you do not accept the terms of the EULA. To use the Kaspersky Next XDR Expert component, you must accept the terms of the EULA.
- Enter y if you understand and accept the terms of the Privacy Policy, and you agree that your data will be handled and transmitted (including to third countries) as described in the Privacy Policy. Enter n if you do not accept the terms of the Privacy Policy.
To update the Kaspersky Next XDR Expert component, you must accept the terms of the EULA and the Privacy Policy.
After you accept the EULA and the Privacy Policy, KDT updates the Kaspersky Next XDR Expert components.
You can read the EULA and the Privacy Policy of the Kaspersky Next XDR Expert component after the update. The files are located in the /home/kdt/ directory of the user who runs the deployment of Kaspersky Next XDR Expert.
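If you modified the installation parameters earlier, a typical update run combines the commands described above: exporting the current configuration file, applying the update, and checking the result. The following is a minimal sketch; the archive and file names are hypothetical examples.
./kdt export-config --filename ./current-config.yaml    # export the current version of the configuration file
./kdt apply -k ./xdr-updates.tar -i ./current-config.yaml    # standard update of the components
./kdt status    # verify that the updated components have the Success status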
Versioning the configuration file
When working with Kaspersky Next XDR Expert, you may need to change the parameters that were specified in the configuration file before the Kaspersky Next XDR Expert deployment. For example, when changing the disk space used to store the Administration Server data, the ksc_state_size parameter is modified. The current version of the configuration file with the modified ksc_state_size parameter is updated in the Kubernetes cluster.
If you try to use the previous version of the configuration file in a KDT custom action that requires the configuration file, a conflict occurs. To avoid conflicts, use only the current version of the configuration file exported from the Kubernetes cluster.
To export the current version of the configuration file,
On the administrator host where the KDT utility is located, run the following custom action, and then specify the path to the configuration file and its name:
./kdt export-config --filename <path_to_configuration_file.yaml>
The current version of the configuration file is saved to the specified directory with the specified name.
You can use the exported configuration file, for example, when updating Kaspersky Next XDR Expert components or adding management plug-ins for Kaspersky applications.
You do not need to export the configuration file if the installation parameters have not been added or modified.
Removing Kaspersky Next XDR Expert components and management web plug-ins
KDT allows you to remove all Kaspersky Next XDR Expert components installed in the Kubernetes cluster, the cluster itself, and the KUMA services installed outside the cluster. By using KDT, you can also remove the management web plug-ins of Kaspersky applications, for example, the plug-in of Kaspersky Endpoint Security for Windows.
Removing Kaspersky Next XDR Expert
To remove the Kaspersky Next XDR Expert components and related data:
On the administrator host, run the following command:
./kdt remove --all
All Kaspersky Next XDR Expert components installed in the Kubernetes cluster and the cluster itself are removed. If you installed a DBMS inside the cluster, the DBMS is removed, too.
Also, KDT removes the KUMA services installed outside the cluster on the hosts that were specified in the inventory file.
Data related to the Kaspersky Next XDR Expert components is deleted from the administrator host.
If the administrator host does not have network access to a target host, removing the components is interrupted. You can restore network access and restart the removal of Kaspersky Next XDR Expert. Alternatively, you can remove the Kaspersky Next XDR Expert components from the target hosts manually (refer to the next instruction).
If you use multiple Kubernetes clusters managed by using contexts, this command removes only the current Kubernetes context, the corresponding cluster, and the Kaspersky Next XDR Expert components installed in the cluster. Other contexts and their clusters with Kaspersky Next XDR Expert instances are not removed.
- Remove the DBMS and data related to the Kaspersky Next XDR Expert components manually, if you installed the DBMS on a separate server outside the cluster.
- Close the ports used by Kaspersky Next XDR Expert that were opened during the deployment, if needed. These ports are not closed automatically.
- Remove the operating system packages that were automatically installed during the deployment, if needed. These packages are not removed automatically.
- Remove KDT and the contents of the /home/kdt and /home/.kdt directories.
The Kaspersky Next XDR Expert components, DBMS, and related data are removed, and the ports used by Kaspersky Next XDR Expert are closed.
To remove the Kaspersky Next XDR Expert components from the target hosts manually:
On the target host, run the following command to stop the k0s service:
/usr/local/bin/k0s stop
Remove the contents of the following directories:
Required directories:
/etc/k0s/
/var/lib/k0s/
/usr/libexec/k0s/
/usr/local/bin/
Optional directories:
/var/lib/containerd/
/var/cache/k0s/
/var/cache/kubelet/
/var/cache/containerd/
You can remove the /var/lib/containerd/ and /var/cache/containerd/ directories if the containerd service is used only for the operation of Kaspersky Next XDR Expert. Otherwise, your data contained in the /var/lib/containerd/ and /var/cache/containerd/ directories may be lost.
The contents of the /var/cache/k0s/, /var/cache/kubelet/, and /var/cache/containerd/ directories are automatically removed after you restart the target host. You do not have to clear these directories manually.
The Kaspersky Next XDR Expert components are deleted from the target hosts.
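The manual cleanup described above can be scripted. The following is a minimal sketch; it assumes that containerd and the listed directories are used only by Kaspersky Next XDR Expert, and that /usr/local/bin/ does not contain unrelated tools. Review the paths before running the commands.
# Run on each target host.
/usr/local/bin/k0s stop
# Required directories
rm -rf /etc/k0s/* /var/lib/k0s/* /usr/libexec/k0s/* /usr/local/bin/*
# Optional directories; remove the containerd directories only if containerd is used solely by Kaspersky Next XDR Expert
rm -rf /var/lib/containerd/* /var/cache/containerd/*
# /var/cache/k0s/ and /var/cache/kubelet/ are cleared automatically after the host restarts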
Removing management web plug-ins
You can remove the management web plug-ins of Kaspersky applications that provide additional functionality for Kaspersky Next XDR Expert. The plug-ins of the Kaspersky Next XDR Expert services (for example, the Incident Response Platform plug-in) are required for the correct operation of Kaspersky Next XDR Expert and cannot be removed.
To remove a management web plug-in:
If needed, run the following command to obtain the name of the plug-in that you want to remove:
./kdt status
The list of components is displayed.
On the administrator host, run the following command. Specify the name of the plug-in that you want to remove:
./kdt remove --cnab <plug-in_name>
The specified management web plug-in is removed by KDT.
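For example, assuming a hypothetical plug-in name taken from the output of the status command:
./kdt status    # find the name of the plug-in in the list of components
./kdt remove --cnab kes-windows-web-plugin    # the plug-in name here is hypothetical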
Reinstalling Kaspersky Next XDR Expert after a failed installation
During the installation of Kaspersky Next XDR Expert, on the administrator host, KDT displays an installation log that shows whether the Kaspersky Next XDR Expert components are installed correctly.
After installing Kaspersky Next XDR Expert, you can run the following command to view the list of all installed components:
./kdt status
The list of installed components is displayed. Correctly installed components have the Success status. If the installation of a component failed, this component has the Failed status.
To view the full installation log of the incorrectly installed Kaspersky Next XDR Expert component, run the following command:
./kdt status -l <component_name>
You can also output all diagnostic information about Kaspersky Next XDR Expert components by using the following command:
./kdt logs get --to-archive
You can use the obtained logs to troubleshoot problems on your own or with the help of Kaspersky Technical Support.
To reinstall incorrectly installed Kaspersky Next XDR Expert components,
- If you did not modify the configuration file, run the following command, and then specify the same transport archive that was used for the Kaspersky Next XDR Expert installation:
./kdt apply -k <path_to_transport_archive>
- If you need to change the installation parameters, export the configuration file, modify it, and then run the following command with the transport archive and the updated configuration file:
./kdt apply -k <path_to_transport_archive> -i <path_to_configuration_file>
KDT reinstalls only the incorrectly installed Kaspersky Next XDR Expert components.
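A typical troubleshooting sequence based on the commands described above may look as follows; the component name and archive path are placeholders.
./kdt status    # find components with the Failed status
./kdt status -l <component_name>    # view the full installation log of the failed component
./kdt logs get --to-archive    # collect diagnostic information for analysis or Technical Support
./kdt apply -k <path_to_transport_archive>    # reinstall the incorrectly installed components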
Stopping the Kubernetes cluster nodes
You may need to stop the entire Kubernetes cluster or temporarily detach one of the nodes of the cluster for maintenance.
In a virtual environment, do not power off virtual machines that are hosting active Kubernetes cluster nodes.
To stop a multi-node Kubernetes cluster (distributed deployment scheme):
- Log in to a worker node, and then initiate a graceful shutdown. Repeat this step for all worker nodes.
- Log in to the primary node, and then initiate a graceful shutdown.
To stop a single-node Kubernetes cluster (single node deployment scheme):
Log in to the primary node, and then initiate a graceful shutdown.
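The procedures above do not prescribe a specific shutdown command. A minimal sketch, assuming that the nodes run a standard systemd-based Linux distribution:
# Run on each node that you are stopping (worker nodes first, then the primary node).
sudo shutdown -h now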
Using certificates for public Kaspersky Next XDR Expert services
To work with the public Kaspersky Next XDR Expert services, you can use self-signed or custom certificates. By default, Kaspersky Next XDR Expert uses self-signed certificates.
Certificates are required for the following Kaspersky Next XDR Expert public services:
- console.<smp_domain>—Access to the OSMP Console interface.
- admsrv.<smp_domain>—Interaction with Administration Server.
- api.<smp_domain>—Access to the Kaspersky Next XDR Expert API.
The list of FQDNs of public Kaspersky Next XDR Expert services, for which self-signed or custom certificates are defined during the deployment, is specified in the pki_fqdn_list installation parameter.
A custom certificate must be specified as a file in the PEM format that contains the complete certificate chain (or only one certificate) and an unencrypted private key.
You can specify the intermediate certificate from your organization's public key infrastructure (PKI). Custom certificates for public Kaspersky Next XDR Expert services are issued from this custom intermediate certificate. Alternatively, you can specify leaf certificates for each of the public services. If leaf certificates are specified only for some of the public services, self-signed certificates are issued for the other public services.
For the console.<smp_domain> and api.<smp_domain> public services, you can specify custom certificates only before the deployment, in the configuration file. Specify the intermediate_bundle and intermediate_enabled installation parameters to use the custom intermediate certificate.
If you want to use the leaf custom certificates to work with the public Kaspersky Next XDR Expert services, specify the corresponding console_bundle, admsrv_bundle, and api_bundle installation parameters. Set the intermediate_enabled parameter to false and do not specify the intermediate_bundle parameter.
For the admsrv.<smp_domain> service, you can replace the issued Administration Server self-signed certificate with a custom certificate by using the klsetsrvcert utility.
Automatic rotation of certificates is not supported. Take into account the validity term of the certificate, and then update the certificate when it expires.
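Because certificates are not rotated automatically, you may want to check the expiration date of a certificate bundle periodically. A minimal check with standard OpenSSL commands; the bundle path is a hypothetical example.
# Print the subject and expiration date of the first certificate in the bundle.
openssl x509 -in ./intermediate_bundle.pem -noout -subject -enddate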
To update custom certificates:
- On the administrator host, export the current version of the configuration file.
- In the exported configuration file, specify the path to a new custom intermediate certificate in the intermediate_bundle installation parameter. If you use the leaf custom certificates for each of the public services, specify the console_bundle, admsrv_bundle, and api_bundle installation parameters.
- Run the following command and specify the path to the modified configuration file:
./kdt apply -i <path_to_configuration_file>
Custom certificates are updated.
Modifying the self-signed KUMA Console certificate
You can use your company certificate and key instead of the self-signed KUMA Console certificate. For example, if you want to replace the self-signed CA Core certificate with a certificate issued by an enterprise CA, you must provide an external.cert file and an unencrypted external.key file in PEM format.
The following example shows how to replace a self-signed CA Core certificate with an enterprise certificate in PFX format. You can use the instructions as an example and adapt the steps according to your needs.
To replace the KUMA Console certificate with an external certificate:
- If you are using a certificate and key in a PFX container, in OpenSSL, convert the PFX file to a certificate and encrypted key in PEM format by executing the following command:
openssl pkcs12 -in kumaWebIssuedByCorporateCA.pfx -nokeys -out external.cert
openssl pkcs12 -in kumaWebIssuedByCorporateCA.pfx -nocerts -nodes -out external.key
When executing the commands, you are required to specify the PFX key password (Enter Import Password).
As a result, the external.cert certificate and the external.key key in PEM format are created.
- In the KUMA Console, go to the Settings → General → KUMA Core section. Under External TLS pair, click Upload certificate and Upload key and upload the external.cert file and the unencrypted external.key file in PEM format.
- Restart KUMA:
systemctl restart kuma-core
- Refresh the web page or restart the browser hosting the KUMA Console.
Your company certificate and key have been replaced.
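If the KUMA Console reports a certificate error after the replacement, you can check that the converted certificate and key match each other. A minimal sketch with standard OpenSSL commands; it assumes an RSA key pair.
# The two digests must match.
openssl x509 -noout -modulus -in external.cert | openssl md5
openssl rsa -noout -modulus -in external.key | openssl md5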
Calculation and changing of disk space for storing Administration Server data
Administration Server data includes the following objects:
- Information about assets (devices).
- Information about events logged on the Administration Server for the selected client device.
- Information about the domain in which the assets are included.
- Data of the Application Control component.
- Updates. The shared folder additionally requires at least 4 GB to store updates.
- Installation packages. If some installation packages are stored on the Administration Server, the shared folder will require an additional amount of free disk space equal to the total size of all of the available installation packages to be installed.
- Remote installation tasks. If remote installation tasks are present on the Administration Server, an additional amount of free disk space equal to the total size of all installation packages to be installed will be required.
Calculation of the minimum disk space for storing Administration Server data
The minimum disk space required for storing the Administration Server data can be estimated approximately by using the formula:
(724 * C + 0.15 * E + 0.17 * A + U), KB
where:
- C is the number of assets (devices).
- E is the number of events to store.
- A is the total number of domain objects:
- Device accounts
- User accounts
- Accounts of security groups
- Organizational units
- U is the size of updates (at least 4 GB).
If domain polling is disabled, A is considered to equal zero.
The formula calculates the disk space required for storing typical data from managed devices and the typical size of updates. The formula does not include the amount of disk space occupied by data that is independent of the number of managed devices for the Application Control component, installation packages, and remote installation tasks.
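For example, a rough estimate for a hypothetical installation with 10,000 assets, 1,000,000 stored events, 15,000 domain objects, and 4 GB of updates can be calculated as follows:
# Hypothetical input values; the formula is the one given above, with U expressed in KB.
awk 'BEGIN {
  C = 10000; E = 1000000; A = 15000; U = 4 * 1024 * 1024
  kb = 724 * C + 0.15 * E + 0.17 * A + U
  printf "Estimated minimum disk space: %.1f GB\n", kb / 1024 / 1024
}'
# Prints approximately 11.1 GB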
Changing of the disk space for storing the Administration Server data
The amount of free disk space allocated to store the Administration Server data is specified in the configuration file before the deployment of Kaspersky Next XDR Expert (the ksc_state_size parameter). Take into account the minimum disk space calculated by using the formula.
To check the disk space used to store the Administration Server data after the deployment of Kaspersky Next XDR Expert,
On the administrator host where the KDT utility is located, run the following command:
./kdt invoke ksc --action getPvSize
The amount of the required free disk space in gigabytes is displayed.
To change the disk space used to store the Administration Server data after the deployment of Kaspersky Next XDR Expert,
On the administrator host where the KDT utility is located, run the following command and specify the required free disk space in gigabytes (for example, "50Gi"):
./kdt invoke ksc --action setPvSize --param ksc_state_size="<new_disk_space_amount>Gi"
The amount of free disk space allocated to store the Administration Server data is changed.
Rotation of secrets
KDT allows you to rotate the secrets that are used to connect to the Kubernetes cluster, to the infrastructure components of Kaspersky Next XDR Expert, and to the DBMS. The rotation period of these secrets can be specified in accordance with the information security requirements of your organization. Secrets are located on the administrator host.
Secrets that are used to connect to the Kubernetes cluster include a client certificate and a private key. Secrets for access to the Registry and DBMS include the corresponding DSNs.
To rotate the secrets for connection to the Kubernetes cluster manually,
On the administrator host where the KDT utility is located, run the following command:
./kdt invoke bootstrap --action RotateK0sConfig
New secrets for connection to the Kubernetes cluster are generated.
When updating Bootstrap, secrets for connection to the Kubernetes cluster are updated automatically.
To rotate the secrets for connection to the Registry manually,
On the administrator host where the KDT utility is located, run the following command:
./kdt invoke bootstrap --action RotateRegistryCreds
New secrets for connection to the Registry are generated.
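If your information security requirements define a fixed rotation period, you can schedule the rotation commands. The following is a minimal sketch using cron on the administrator host; the schedule and the KDT directory are hypothetical examples, and you should confirm that unattended rotation is acceptable in your environment.
# Hypothetical crontab entry: rotate both sets of secrets at 03:00 on the first day of each month.
0 3 1 * * cd /path/to/kdt && ./kdt invoke bootstrap --action RotateK0sConfig && ./kdt invoke bootstrap --action RotateRegistryCreds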
Adding hosts for installing the additional KUMA services
If you need to expand the storage, or add new collectors and correlators for the increased flow of events, you can add additional hosts for installation of the KUMA services.
You must specify the parameters of the additional hosts in the expand.inventory.yml file. This file is located in the distribution package with the transport archive, KDT, the configuration file, and other files. In the expand.inventory.yml file, you can specify several additional hosts for collectors, correlators, and storages at once. Ensure that hardware, software, and installation requirements for the selected hosts are met.
To prepare the required infrastructure on the hosts specified in the expand.inventory.yml file, you need to create the service directories to which the files that are required for the service installation are added. To prepare the infrastructure, run the following command and specify the expand.inventory.yml file:
./kdt invoke kuma --action addHosts --param hostInventory=<path_to_inventory_file>
As a result, on the hosts specified in the expand.inventory.yml file, the service directories are created, and the files that are required for the service installation are added to these directories.
Adding an additional storage, collector, or correlator
You can add an additional storage cluster, collector, or correlator to your existing infrastructure. If you want to add several services, it is recommended to install them in the following order: storages, collectors, and correlators.
To add an additional storage cluster, collector, or correlator:
- Sign in to KUMA Console.
You can use one of the following methods:
- In the main menu of OSMP Console, go to Settings → KUMA.
- In your browser, go to https://kuma.<smp_domain>:7220.
- In the KUMA Console, create a resource set for each KUMA service (storages, collectors, and correlators) that you want to install on the prepared hosts.
- Create services for storages, collectors and correlators in KUMA Console.
- Obtain the service identifiers to bind the created resource sets and the KUMA services:
- In the KUMA Console main menu, go to Resources → Active services.
- Select the required KUMA service, and then click the Copy ID button.
- Install the KUMA services on each prepared host listed in the kuma_storage, kuma_collector, and kuma_correlator sections of the expand.inventory.yml inventory file. On each machine, in the installation command, specify the service ID corresponding to the host. Run the corresponding commands to install the KUMA services:
- Installation command for the storage:
sudo /opt/kaspersky/kuma/kuma storage --core https://<KUMA Core server FQDN>:7210 --id <service ID copied from the KUMA Console> --install
- Installation command for the collector:
sudo /opt/kaspersky/kuma/kuma collector --core https://<KUMA Core server FQDN>:7210 --id <service ID copied from the KUMA Console> --api.port <port used for communication with the installed component>
- Installation command for the correlator:
sudo /opt/kaspersky/kuma/kuma correlator --core https://<KUMA Core server FQDN>:7210 --id <service ID copied from the KUMA Console> --api.port <port used for communication with the installed component> --install
The collector and correlator installation commands are automatically generated on the Setup validation tab of the Installation Wizard, and the port used for communication is added to the command automatically. Use the generated commands to install the collector and correlator on the hosts. This will allow you to make sure that the ports for communication with the services specified in the command are available.
By default, the FQDN of the KUMA Core is kuma.<smp_domain>. The port that is used for connection to KUMA Core cannot be changed. By default, port 7210 is used.
- Installation command for the storage:
The additional KUMA services are installed.
Adding hosts to an existing storage
You can expand an existing storage (storage cluster) by adding hosts as new storage cluster nodes.
To add hosts to an existing storage:
- Sign in to KUMA Console.
You can use one of the following methods:
- In the main menu of OSMP Console, go to Settings → KUMA.
- In your browser, go to https://kuma.<smp_domain>:7220.
- Add new nodes to the storage cluster. To do this, edit the settings of the existing storage cluster:
- In the Resources → Storages section, select an existing storage, and then open the storage for editing.
- In the ClickHouse cluster nodes section, click Add nodes, and then specify the corresponding host domain names from the kuma_storage section of the expand.inventory.yml file and the roles for the new nodes.
- Save changes.
You do not need to create a separate storage because you are adding servers to an existing storage cluster.
- Create storage services for each added storage cluster node in KUMA Console, and then bind the services to the storage cluster.
- Obtain the storage service identifiers for each prepared host to install the KUMA services:
- In the KUMA Console main menu, go to Resources → Active services.
- Select the required KUMA service, and then click the Copy ID button.
- Install the storage service on each prepared host listed in the kuma_storage section of the expand.inventory.yml inventory file. On each machine, in the installation command, specify the service ID corresponding to the host. Run the following command to install the storage service:
sudo /opt/kaspersky/kuma/kuma storage --core https://<KUMA Core server FQDN>:7210 --id <service ID copied from the KUMA Console> --install
By default, the FQDN of the KUMA Core is kuma.<smp_domain>. The port that is used for connection to KUMA Core cannot be changed. By default, port 7210 is used.
The additional hosts are added to the storage cluster.
Specify the added hosts in the distributed.inventory.yml inventory file so that it contains up-to-date information in case of an update of the KUMA components.
Replacing a host that uses KUMA storage
To replace a host that uses KUMA storage with another one:
- Fill in the expand.inventory.yml file, specifying the parameters of the host you want to replace.
- Run the following command, specifying the expand.inventory.yml file to remove the host:
./kdt invoke kuma --action removeHosts --param hostInventory=<path_to_inventory_file>
- Fill in the expand.inventory.yml file, specifying the parameters of the new host that you want to use to replace the previous host, and then run the following command:
./kdt invoke kuma --action addHosts --param hostInventory=<path_to_inventory_file>
- Follow steps 2-6 of the instruction for adding new hosts for KUMA services to add a new host with the KUMA storage.
The host with the KUMA storage is replaced with another one.
If your storage configuration includes a shard containing two replicas, and you replaced the second replica host with a new one by using the steps described above, then you may receive an error when installing a new replica. In this case, the new replica will not work.
To fix an error when adding a new replica of a shard:
- On another host with a replica of the same shard that owns the incorrectly added replica, launch the ClickHouse client by using the command:
/opt/kaspersky/kuma/clickhouse/bin/client.sh
If this host is unavailable, run the client on any other host with a replica included in the same storage cluster.
- Run the command to remove the data about the host that you wanted to replace:
- If the host with a replica of the same shard that owns the incorrectly added replica is available, run the following command:
SYSTEM DROP REPLICA '<replica number of read-only node>' FROM TABLE kuma.events_local_v2
- If you are using another storage cluster host with a replica, run the following command:
SYSTEM DROP REPLICA '<replica number of read-only node>' FROM ZKPATH '/clickhouse/tables/kuma/<shard number of read-only node>/kuma/events_local_v2'
- Run the following command to restore the operation of the added host with a replica:
SYSTEM RESTORE REPLICA kuma.events_local_v2
Operability of the added host with a replica is restored.
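The commands above require the replica number of the read-only node. One way to find it, as a hedged sketch, is to query the ClickHouse system tables from the client launched in step 1; the exact column set of system.replicas may vary between ClickHouse versions.
-- Run inside the ClickHouse client; shows the replicas of the events table and which of them are read-only.
SELECT database, table, replica_name, is_readonly FROM system.replicas WHERE table = 'events_local_v2'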