Kaspersky Next XDR Expert

Contents

KUMA services

Services are the main components of KUMA that help the system manage events: services allow you to receive events from event sources and subsequently bring them to a common form that is convenient for correlation analysis, as well as for storage and manual analysis. Each service consists of two parts that work together:

  • One part of the service is created inside the KUMA Console based on a set of resources for services.
  • The second part of the service is installed in the network infrastructure where the KUMA system is deployed as one of its components. The server part of a service can consist of multiple instances: for example, services of the same agent or storage can be installed on multiple devices at once.

    On the server side, KUMA services are located in the /opt/kaspersky/kuma directory.

    When you install KUMA in high availability mode, only the KUMA Core is installed in the cluster. Collectors, correlators, and storages are hosted on hosts outside of the Kubernetes cluster.

Parts of services are connected to each other via the service ID.

Service types:

  • Storages are used to save events.
  • Correlators are used to analyze events and search for defined patterns.
  • Collectors are used to receive events and convert them to KUMA format.
  • Agents are used to receive events on remote devices and forward them to KUMA collectors.

In the KUMA Console, services are displayed in the Resources → Active services section in table format. You can update the table of services by clicking the Refresh button and sort it by clicking the active column headers.

The maximum table size is not limited. To select all services, scroll to the end of the table and select the Select all check box.

Table columns:

  • Status—service status:
    • Green means that the service is running.
    • Red means that the service is not running.
    • Yellow is the status that applies to all services except the agent. The yellow status means that the service is running, but there are errors or alerts from Victoria Metrics. You can view the error message by hovering the mouse pointer over the status.
    • Purple is the status applied to running services whose configuration file in the database has changed but that have no other errors. If a service has an incorrect configuration file and also has errors, for example, from Victoria Metrics, the status of the service is yellow.
    • Gray—if a deleted tenant had a running service that continues to work, that service is displayed with the gray status on the Active services page. Services with the gray status are kept so that you can copy their IDs and remove the services on your servers. Only the Main administrator can delete services with the gray status. When a tenant is deleted, the services of that tenant are assigned to the Main tenant.
  • Type—type of service: agent, collector, correlator, or storage.
  • Name—name of the service. Clicking on the name of the service opens its settings.
  • Version—service version.
  • Tenant—the name of the tenant that owns the service.
  • FQDN—fully qualified domain name of the service server.
  • IP address—IP address of the server where the service is installed.
  • API port—Remote Procedure Call port number.
  • Uptime—the time showing how long the service has been running.
  • Created—the date and time when the service was created.

The table can be sorted in ascending and descending order, including by the Status column. To filter active services by status, right-click to open the context menu and select one or more statuses.

You can click the buttons in the upper part of the Services window to perform the following group actions:

  • Add service

    You can create new services based on existing service resource sets. We do not recommend creating services outside the main tenant without first carefully planning the inter-tenant interactions of various services and users.

  • Refresh

    You can refresh the list of active services.

  • Update configuration
  • Restart

To perform an action with an individual service, right-click the service to display its context menu. The following actions are available:

  • Reset certificate
  • Delete
  • Download log

    If you want to receive detailed information, enable the Debug mode in the service settings.

  • Copy service ID

    You need this ID to install, restart, stop, or delete the service.

  • Go to Events
  • Go to active lists
  • Go to context tables
  • Go to partitions

To change a service, select it under Resources → Active services. This opens a window with the set of resources based on which the service was created. You can edit the settings of the set of resources and save your changes. To apply the saved changes, restart the service.

If, when changing the settings of a collector resource set, you change or delete conversions in a normalizer connected to it, the edits are not saved, and the normalizer itself may be corrupted. If you need to modify conversions in a normalizer that is already part of a service, make the changes directly in the normalizer under Resources → Normalizers in the web interface.

In this section

Services tools

Service resource sets

Creating a storage

Creating a correlator

Creating an event router

Creating a collector

Creating an agent

Page top
[Topic 264669]

Services tools

This section describes the tools for working with services available in the Resources → Active services section of the KUMA Console.

In this section

Getting service identifier

Stopping, starting, checking status of the service

Restarting the service

Deleting the service

Partitions window

Searching for related events

Page top
[Topic 264670]

Getting service identifier

The service identifier is used to bind the two parts of a service, the part residing within KUMA and the part installed in the network infrastructure, into a single whole. An identifier is assigned to a service when it is created in KUMA and is then used when installing the service on the server.

To get the identifier of a service:

  1. Log in to the KUMA Console and open Resources → Active services.
  2. Select the check box next to the service whose ID you want to obtain, and click Copy ID.

The identifier of the service will be copied to the clipboard. It can be used, for example, for installing the service on a server.

Page top
[Topic 264671]

Stopping, starting, checking status of the service

While managing KUMA, you may need to perform the following operations.

  • Temporarily stop the service. For example, when restoring the Core from backup, or to edit service settings related to the operating system.
  • Start the service.
  • Check the status of the service.

The "Commands for stopping, starting, and checking the status of a service" table lists commands that may be useful when managing KUMA.

Commands for stopping, starting, and checking the status of a service:

  • Core:
    • Stop: sudo systemctl stop kuma-core.service
    • Start: sudo systemctl start kuma-core.service
    • Status: sudo systemctl status kuma-core.service
  • Services with an ID (collector, correlator, storage):
    • Stop: sudo systemctl stop kuma-<collector/correlator/storage>-<service ID>.service
    • Start: sudo systemctl start kuma-<collector/correlator/storage>-<service ID>.service
    • Status: sudo systemctl status kuma-<collector/correlator/storage>-<service ID>.service
  • Services without an ID (kuma-grafana.service, kuma-mongodb.service, kuma-victoria-metrics.service, kuma-vmalert.service):
    • Stop: sudo systemctl stop kuma-<grafana/mongodb/victoria-metrics/vmalert>.service
    • Start: sudo systemctl start kuma-<grafana/mongodb/victoria-metrics/vmalert>.service
    • Status: sudo systemctl status kuma-<grafana/mongodb/victoria-metrics/vmalert>.service
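For example, to check the status of a collector, substitute the service ID copied from the KUMA Console into the unit name (the ID below is hypothetical):

    sudo systemctl status kuma-collector-12345678-1234-1234-1234-123456789012.service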

Windows agents

To stop an agent service:

  1. Copy the agent ID in the KUMA Console.
  2. Connect to the host on which you want to stop the KUMA agent service.
  3. Run PowerShell as a user with administrative privileges.
  4. Run the following command in PowerShell:

    Stop-Service -Name "WindowsAgent-<agent ID>"

To start an agent service:

  1. Copy the agent ID in the KUMA Console.
  2. Connect to the host on which you want to start the KUMA agent service.
  3. Run PowerShell as a user with administrative privileges.
  4. Run the following command in PowerShell:

    Start-Service -Name "WindowsAgent-<agent ID>"

To view the status of an agent service:

  1. In Windows, go to the Start → Services menu, and in the list of services, double-click the relevant KUMA agent.
  2. This opens a window; in that window, view the status of the agent in the Service status field.

Page top
[Topic 270283]

Restarting the service

To restart the service:

  1. Log in to the KUMA Console and open Resources → Active services.
  2. Select the check box next to the service and select the necessary option:
    • Update configuration—perform a hot update of a running service configuration. For example, you can change the field mapping settings or the destination point settings this way.
    • Restart—stop a service and start it again. This option is used to modify the port number or connector type.

      Restarting KUMA agents:

      • KUMA Windows Agent can be restarted as described above only if it is running on a remote computer. If the service on the remote computer is inactive, attempting to restart it from KUMA returns an error. In that case, you must restart the KUMA Windows Agent service on the remote Windows machine. For information on restarting Windows services, refer to the documentation for the operating system version of your remote Windows computer.
      • KUMA Agent for Linux stops when this option is used. To start the agent again, you must execute the command that was used to start it.
    • Reset certificate—remove certificates that the service uses for internal communication. For example, this option can be used to renew the Core certificate.

      Special considerations for deleting Windows agent certificates:

      • If the agent has the green status and you select Reset certificate, KUMA deletes the current certificate and creates a new one; the agent continues working with the new certificate.
      • If the agent has the red status and you select Reset certificate, KUMA generates an error that the agent is not running. In the agent installation folder %APPDATA%\kaspersky\kuma\<Agent ID>\certificates, manually delete the internal.cert and internal.key files and start the agent manually. When the agent starts, a new certificate is created automatically.

      Special considerations for deleting Linux agent certificates:

      1. Regardless of the agent status, apply the Reset certificate option in the web interface to delete the certificate in the databases.
      2. In the agent installation folder /opt/kaspersky/agent/<Agent ID>/certificates, manually delete the internal.cert and internal.key files.
      3. Since the Reset certificate option stops the agent, to continue its operation, start the agent manually. When the agent starts, a new certificate is created automatically.
Page top
[Topic 264672]

Deleting the service

Before deleting the service, get its ID. The ID is required to remove the service from the server.

To remove a service in the KUMA Console:

  1. Log in to the KUMA Console and open Resources → Active services.
  2. Select the check box next to the service you want to delete, and click Delete.

    A confirmation window opens.

  3. Click OK.

The service has been deleted from KUMA.

To remove a service from the server, run the following command:

sudo /opt/kaspersky/kuma/kuma <collector/correlator/storage> --id <service ID> --uninstall
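For example, to remove a collector (XXXXX stands for the service ID copied from the KUMA Console):

sudo /opt/kaspersky/kuma/kuma collector --id XXXXX --uninstall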

The service has been deleted from the server.

Page top
[Topic 264673]

Partitions window

If the storage service was created and installed, you can view its partitions in the Partitions table.

To open the Partitions table:

  1. Log in to the KUMA Console and open Resources → Active services.
  2. Select the check box next to the relevant storage and click Go to partitions.

The Partitions table opens.

The table has the following columns:

  • Tenant—the name of the tenant that owns the stored data.
  • Created—partition creation date.
  • Space—the name of the space.
  • Size—the size of the space.
  • Events—the number of stored events.
  • Transfer to cold storage—the date when data will be migrated from the ClickHouse clusters to cold storage disks.
  • Expires—the date when the partition expires. After this date, the partition and the events it contains are no longer available.

You can delete partitions.

To delete a partition:

  1. Open the Partitions table (see above).
  2. Open the drop-down list to the left of the required partition.
  3. Select Delete.

    A confirmation window opens.

  4. Click OK.

The partition has been deleted. Audit event partitions cannot be deleted.

Page top
[Topic 264674]

Searching for related events

You can search for events processed by correlator or collector services.

To search for events related to a correlator or collector service:

  1. Log in to the KUMA Console and open Resources → Active services.
  2. Select the check box next to the required correlator or collector and click Go to Events.

    A new browser tab opens with the KUMA Events section open.

  3. To find events, click the magnifying glass icon.

    A table with events selected by the search expression ServiceID = <ID of the selected service> will be displayed.

Page top
[Topic 264675]

Service resource sets

A set of resources for a service is a type of resource and a KUMA component: a collection of resources whose settings determine how the corresponding KUMA service is created and operates.

Any resource added to a set of resources must be owned by the same tenant that owns the set of resources. The exception is the shared tenant: its resources can be used in the sets of resources of other tenants.

Resource sets for services are displayed in the Resources → <Resource set type for the service> section of the KUMA Console. Available types:

  • Collectors
  • Correlators
  • Storages
  • Agents

When you select the required type, a table opens with the available sets of resources for services of this type. The resource table contains the following columns:

  • Name—the name of a resource set. Can be used for searching and sorting.
  • Updated—the date and time of the last update of the resource set. Can be used for sorting.
  • Created by—the name of the user who created the resource set.
  • Description—the description of the resource set.

Page top
[Topic 264676]

Creating a storage

A storage consists of two parts: one part is created inside the KUMA Console, and the other part is installed on network infrastructure servers intended for storing events. The server part of a KUMA storage consists of ClickHouse nodes collected into a cluster. ClickHouse clusters can be supplemented with cold storage disks.

For each ClickHouse cluster, a separate storage must be installed.

Prior to storage creation, carefully plan the cluster structure and deploy the necessary network infrastructure. When choosing a ClickHouse cluster configuration, consider the specific event storage requirements of your organization.

It is recommended to use ext4 as the file system.
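To confirm which file system backs the directory that will hold KUMA data, you can query it with df (a quick check; the path is illustrative):

df -T /opt/kaspersky/kuma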

A storage is created in several steps:

  1. Creating a set of resources for a storage in the KUMA Console
  2. Creating a storage service in the KUMA Console
  3. Installing storage nodes in the network infrastructure

When creating storage cluster nodes, verify the network connectivity of the system and open the ports used by the components.

If the storage settings are changed, the service must be restarted.

In this section

ClickHouse cluster structure

ClickHouse cluster node settings

Cold storage of events

Creating a set of resources for a storage

Creating a storage service in the KUMA Console

Installing a storage in the KUMA network infrastructure

Page top
[Topic 264677]

ClickHouse cluster structure

A ClickHouse cluster is a logical group of devices that possess all accumulated normalized KUMA events. It consists of one or more logical shards.

A shard is a logical group of devices that possess a specific portion of all normalized events accumulated in the cluster. It consists of one or more replicas. Increasing the number of shards lets you do the following:

  • Accumulate more events by increasing the total number of servers and disk space.
  • Absorb a larger stream of events by distributing the load associated with an influx of new events.
  • Reduce the time taken to search for events by distributing search zones among multiple devices.

A replica is a device that is a member of a logical shard and possesses a single copy of that shard's data. If multiple replicas exist, it means multiple copies exist (the data is replicated). Increasing the number of replicas lets you do the following:

  • Improve high availability.
  • Distribute the total load related to data searches among multiple machines (although it's best to increase the number of shards for this purpose).

A keeper is a device that participates in the coordination of data replication at the whole-cluster level. At least one device per cluster must have this role. The recommended number of devices with this role is 3. The number of devices involved in coordinating replication must be an odd number. The keeper and replica roles can be combined on one machine.

Page top
[Topic 264678]

ClickHouse cluster node settings

Prior to storage creation, carefully plan the cluster structure and deploy the necessary network infrastructure. When choosing a ClickHouse cluster configuration, consider the specific event storage requirements of your organization.

When creating ClickHouse cluster nodes, verify the network connectivity of the system and open the ports used by the components.

For each node of the ClickHouse cluster, you need to specify the following settings:

  • Fully qualified domain name (FQDN)—a unique address to access the node. Specify the entire FQDN, for example, kuma-storage.example.com (a quick resolution check is shown after this list).
  • Shard, replica, and keeper IDs—the combination of these settings determines the position of the node in the ClickHouse cluster structure and the node role.
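Before adding a node, you can verify that its FQDN resolves on the cluster hosts (a minimal check using the example name above):

getent hosts kuma-storage.example.com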

Node roles

The roles of the nodes depend on the specified settings:

  • shard, replica, keeper—the node participates in the accumulation and search of normalized KUMA events and helps coordinate data replication at the cluster-wide level.
  • shard, replica—the node participates in the accumulation and search of normalized KUMA events.
  • keeper—the node does not accumulate normalized events, but helps coordinate data replication at the cluster-wide level. Dedicated keepers must be specified at the beginning of the list in the Resources → Storages → <Storage> → Basic settings → ClickHouse cluster nodes section.

ID requirements:

  • If multiple shards are created in the same cluster, the shard IDs must be unique within this cluster.
  • If multiple replicas are created in the same shard, the replica IDs must be unique within this shard.
  • The keeper IDs must be unique within the cluster.

Example of ClickHouse cluster node IDs:

  • shard 1, replica 1, keeper 1;
  • shard 1, replica 2;
  • shard 2, replica 1;
  • shard 2, replica 2, keeper 3;
  • shard 2, replica 3;
  • keeper 2.
Page top
[Topic 264679]

Cold storage of events

In KUMA, you can configure the migration of legacy data from a ClickHouse cluster to the cold storage. Cold storage can be implemented using the local disks mounted in the operating system or the Hadoop Distributed File System (HDFS). Cold storage is enabled when at least one cold storage disk is specified. If a cold storage disk is not configured and the server runs out of disk space, the storage service is stopped. If both hot storage and cold storage are configured, and space runs out on the cold storage disk, the KUMA storage service is stopped. We recommend avoiding such situations.

Cold storage disks can be added or removed.
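In practice, a local cold storage disk is a directory mounted in the operating system. A minimal preparation sketch, assuming a spare block device and the recommended ext4 file system (the device name and mount point are illustrative, not prescribed by KUMA):

sudo mkfs.ext4 /dev/sdb1
sudo mkdir -p /mnt/kuma-cold
sudo mount /dev/sdb1 /mnt/kuma-cold

The mount point, with leading and trailing "/" characters (for example, /mnt/kuma-cold/), is then specified in the Path field of the cold storage disk settings.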

After changing the cold storage settings, the storage service must be restarted. If the service does not start, the reason is specified in the storage log.

If the cold storage disk specified in the storage settings has become unavailable (for example, out of order), this may lead to errors in the operation of the storage service. In this case, recreate a disk with the same path (for local disks) or the same address (for HDFS disks) and then delete it from the storage settings.

Rules for moving the data to the cold storage disks

When cold storage is enabled, KUMA checks the storage terms of the spaces once an hour:

  • If the storage term for a space on a ClickHouse cluster expires, the data is moved to the cold storage disks. If a cold storage disk is misconfigured, the data is deleted.
  • If the storage term for a space on a cold storage disk expires, the data is deleted.
  • If the ClickHouse cluster disks are 95% full, the biggest partitions are automatically moved to the cold storage disks. This can happen more often than once per hour.
  • Audit events are generated when data transfer starts and ends.

During data transfer, the storage service remains operational, and its status stays green in the Resources → Active services section of the KUMA Console. When you hover the mouse pointer over the status icon, a message indicating the data transfer appears. When a cold storage disk is removed, the storage service has the yellow status.

Special considerations for storing and accessing events

  • When using HDFS disks for cold storage, protect your data in one of the following ways:
    • Configure a separate physical interface in the VLAN, where only HDFS disks and the ClickHouse cluster are located.
    • Configure network segmentation and traffic filtering rules that exclude direct access to the HDFS disk or interception of traffic to the disk from ClickHouse.
  • Events located in the ClickHouse cluster and on the cold storage disks are equally available in the KUMA Console. For example, when you search for events or view events related to alerts.
  • Storing events or audit events on cold storage disks is not mandatory; to disable this functionality, specify 0 (days) in the Cold retention period or Audit cold retention period field in the storage settings.

Special considerations for using HDFS disks

  • Before connecting HDFS disks, create directories for each node of the ClickHouse cluster on them in the following format: <HDFS disk host>/<shard ID>/<replica ID>. For example, if a cluster consists of two nodes containing two replicas of the same shard, the following directories must be created (a creation example is shown after this list):
    • hdfs://hdfs-example-1:9000/clickhouse/1/1/
    • hdfs://hdfs-example-1:9000/clickhouse/1/2/

    Events from the ClickHouse cluster nodes are migrated to the directories with names containing the IDs of their shard and replica. If you change these node settings without creating a corresponding directory on the HDFS disk, events may be lost during migration.

  • HDFS disks added to storage operate in the JBOD mode. This means that if one of the disks fails, access to the storage will be lost. When using HDFS, take high availability into account and configure RAID, as well as storage of data from different replicas on different devices.
  • The speed of event recording to HDFS is usually lower than the speed of event recording to local disks. The speed of accessing events in HDFS, as a rule, is significantly lower than the speed of accessing events on local disks. When using local disks and HDFS disks at the same time, the data is written to them in turn.
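For example, the directories for the two-node example above could be created with the Hadoop command-line client (a sketch; it assumes the hadoop client is installed and configured with access to the HDFS disk):

hadoop fs -mkdir -p hdfs://hdfs-example-1:9000/clickhouse/1/1/
hadoop fs -mkdir -p hdfs://hdfs-example-1:9000/clickhouse/1/2/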

In this section

Removing cold storage disks

Detaching, archiving, and attaching partitions

Page top
[Topic 264690]

Removing cold storage disks

Before physically disconnecting cold storage disks, remove these disks from the storage settings.

To remove a disk from the storage settings:

  1. In the KUMA Console, under Resources → Storages, select the relevant storage.

     This opens the storage settings.

  2. In the Disks for cold storage section, in the settings group of the required disk, click Delete disk.

     Data from the removed disk is automatically migrated to other cold storage disks or, if there are none, to the ClickHouse cluster. During data migration, the storage status icon is highlighted in yellow. Audit events are generated when data transfer starts and ends.

After event migration is complete, the disk is automatically removed from the storage settings. It can now be safely disconnected.

Removed disks can still contain events. If you want to delete them, you can manually delete the data partitions using the DROP PARTITION command.
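The partitions can be dropped with the ClickHouse client, following the invocation pattern used elsewhere in this section (a sketch; replace the placeholder with a real partition ID and run the command on the host that holds the data):

sudo /opt/kaspersky/kuma/clickhouse/bin/client.sh -d kuma --multiline --query "ALTER TABLE events_local_v2 DROP PARTITION ID '<partition ID>'"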

If the cold storage disk specified in the storage settings has become unavailable (for example, out of order), this may lead to errors in the operation of the storage service. In this case, create a disk with the same path (for local disks) or the same address (for HDFS disks) and then delete it from the storage settings.

Page top
[Topic 264691]

Detaching, archiving, and attaching partitions

If you want to optimize disk space and speed up queries in KUMA, you can detach data partitions in ClickHouse, archive partitions, or move partitions to a drive. If necessary, you can later reattach the partitions you need and perform data processing.

Detaching partitions

To detach partitions:

  1. Determine the shard on whose replicas you want to detach the partition.
  2. Get the partition ID using the following command:

    sudo /opt/kaspersky/kuma/clickhouse/bin/client.sh -d kuma --multiline --query "SELECT partition, name FROM system.parts;" | grep 20231130

    In this example, the command returns the partition ID for November 30, 2023.

  3. On each replica of the shard, detach the partition using the following command, specifying the partition ID:

    sudo /opt/kaspersky/kuma/clickhouse/bin/client.sh -d kuma --multiline --query "ALTER TABLE events_local_v2 DETACH PARTITION ID '<partition ID>'"

As a result, the partition is detached on all replicas of the shard. Now you can move the data directory to a drive or archive the partition.
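An illustrative end-to-end run for the example date above (the partition ID and part name in the output are hypothetical):

sudo /opt/kaspersky/kuma/clickhouse/bin/client.sh -d kuma --multiline --query "SELECT partition, name FROM system.parts;" | grep 20231130
# Example output (hypothetical): 20231130  20231130_10_10_0
sudo /opt/kaspersky/kuma/clickhouse/bin/client.sh -d kuma --multiline --query "ALTER TABLE events_local_v2 DETACH PARTITION ID '20231130'"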

Archiving partitions

To archive detached partitions:

  1. Find the detached partition in the disk subsystem of the server:

    sudo find /opt/kaspersky/kuma/clickhouse/data/ -name <ID of the detached partition>\*

  2. Change to the 'detached' directory that contains the detached partition and create the archive. Note that sudo does not apply to the cd shell builtin; if your user cannot access the ClickHouse data directory, switch to a root shell first (for example, with sudo -i):

    cd <path to the 'detached' directory containing the detached partition>

    sudo zip -9 -r detached.zip *

    For example:

    cd /opt/kaspersky/kuma/clickhouse/data/store/d5b/d5bdd8d8-e1eb-4968-95bd-d8d8e1eb3968/detached/

    sudo zip -9 -r detached.zip *

The partition is archived.

Attaching partitions

To attach archived partitions to KUMA:

  1. Increase the Retention period value.

    KUMA deletes data based on the date specified in the Timestamp field, which records the time when the event is received, and based on the Retention period value that you set for the storage.

    Before restoring archived data, make sure that the Retention period value covers the date in the Timestamp field; otherwise, the archived data will be deleted within 1 hour. For example, if the archived events were received 120 days ago, restoring them requires a Retention period greater than 120 days.

  2. Place the partition archive in the 'detached' directory of your storage and unpack the archive:

    sudo unzip detached.zip -d <path to the 'detached' directory>

    For example:

    sudo unzip detached.zip -d /opt/kaspersky/kuma/clickhouse/data/store/d5b/d5bdd8d8-e1eb-4968-95bd-d8d8e1eb3968/detached/

  3. Run the command to attach the partition:

    sudo /opt/kaspersky/kuma/clickhouse/bin/client.sh -d kuma --multiline --query "ALTER TABLE events_local_v2 ATTACH PARTITION ID '<partition ID>'"

    Repeat the steps of unpacking the archive and attaching the partition on each replica of the shard.

As a result, the archived partition is attached and its events are again available for search.

Page top
[Topic 270284]

Creating a set of resources for a storage

In the KUMA Console, a storage service is created based on the set of resources for the storage.

To create a set of resources for a storage in the KUMA Console:

  1. In the KUMA Console, under Resources → Storages, click Add storage.

    This opens the Create storage window.

  2. On the Basic settings tab, in the Storage name field, enter a unique name for the service you are creating. The name must contain 1 to 128 Unicode characters.
  3. In the Tenant drop-down list, select the tenant that will own the storage.
  4. You can optionally add up to 256 Unicode characters describing the service in the Description field.
  5. In the Retention period field, specify the period, in days from the moment of arrival, during which you want to store events in the ClickHouse cluster. When the specified period expires, events are automatically deleted from the ClickHouse cluster. If cold storage of events is configured, when the event storage period in the ClickHouse cluster expires, the data is moved to cold storage disks. If a cold storage disk is misconfigured, the data is deleted.
  6. In the Audit retention period field, specify the period, in days, to store audit events. The minimum value and default value is 365.
  7. If cold storage is required, specify the event storage term:
    • Cold retention period—the number of days to store events. The minimum value is 1.
    • Audit cold retention period—the number of days to store audit events. The minimum value is 0.
  8. In the Debug drop-down list, specify whether resource logging must be enabled. The default value (Disabled) means that only errors are logged for all KUMA components. If you want to obtain detailed data in the logs, select Enabled.
  9. If you want to change ClickHouse settings, in the ClickHouse configuration override field, paste the lines with settings from the ClickHouse configuration XML file /opt/kaspersky/kuma/clickhouse/cfg/config.xml. Specifying the root elements <yandex>, </yandex> is not required. Settings passed in this field are used instead of the default settings.

    Example:

    <merge_tree>
        <parts_to_delay_insert>600</parts_to_delay_insert>
        <parts_to_throw_insert>1100</parts_to_throw_insert>
    </merge_tree>

  10. If necessary, in the Spaces section, add spaces to the storage to distribute the stored events.

    There can be multiple spaces. You can add spaces by clicking the Add space button and remove them by clicking the Delete space button.

    Available settings:

    • In the Name field, specify a name for the space containing 1 to 128 Unicode characters.
    • In the Retention period field, specify the number of days to store events in the ClickHouse cluster.
    • If necessary, in the Cold retention period field, specify the number of days to store the events in the cold storage. The minimum value is 1.
    • In the Filter section, you can specify conditions to identify events that will be put into this space. You can select an existing filter from the drop-down list or create a new filter.

      Creating a filter in resources

      1. In the Filter drop-down list, select Create new.
      2. If you want to keep the filter as a separate resource, select the Save filter check box.

        In this case, you will be able to use the created filter in various services.

        This check box is cleared by default.

      3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain 1 to 128 Unicode characters.
      4. In the Conditions settings block, specify the conditions that the events must meet:
        1. Click the Add condition button.
        2. In the Left operand and Right operand drop-down lists, specify the search parameters.

          Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

        3. In the operator drop-down list, select the relevant operator.

          Filter operators

          • =—the left operand equals the right operand.
          • <—the left operand is less than the right operand.
          • <=—the left operand is less than or equal to the right operand.
          • >—the left operand is greater than the right operand.
          • >=—the left operand is greater than or equal to the right operand.
          • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
          • contains—the left operand contains values of the right operand.
          • startsWith—the left operand starts with one of the values of the right operand.
          • endsWith—the left operand ends with one of the values of the right operand.
          • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
          • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

            The value to be checked is converted to binary and processed right to left. Bits are checked whose index is specified as a constant or a list. For example, for the value 5 (binary 101), the check succeeds for positions 0 and 2 and fails for position 1.

            If the value being checked is a string, an attempt is made to convert it to an integer and process it as described above. If the string cannot be converted to a number, the filter returns False.

          • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

            If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

          • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
          • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed with the concatenated values of the selected event fields.
          • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
          • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
          • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
          • inContextTable—presence of the entry in the specified context table.
          • intersect—presence in the left operand of the list items specified in the right operand.
        4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

          The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators.

          This check box is cleared by default.

        5. If you want to add a negative condition, select If not from the If drop-down list.
        6. You can add multiple conditions or a group of conditions.
      5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
      6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

        You can view the nested filter settings by clicking the edit button.

    After the service is created, you can view and delete spaces in the storage resource settings.

    There is no need to create a separate space for audit events. Events of this type (Type=4) are automatically placed in a separate Audit space with a storage term of at least 365 days. This space cannot be edited or deleted from the KUMA Console.

  11. If necessary, in the Disks for cold storage section, add to the storage the disks where you want to transfer events from the ClickHouse cluster for long-term storage.

    There can be multiple disks. You can add disks by clicking the Add disk button and remove them by clicking the Delete disk button.

    Available settings:

    • In the Type drop-down list, select the type of the disk being connected:
      • Local—for the disks mounted in the operating system as directories.
      • HDFS—for the disks of the Hadoop Distributed File System.
    • In the Name field, specify the disk name. The name must contain 1 to 128 Unicode characters.
    • If you select Local disk type, specify the absolute directory path of the mounted local disk in the Path field. The path must begin and end with a "/" character.
    • If you select HDFS disk type, specify the path to HDFS in the Host field. For example, hdfs://hdfs1:9000/clickhouse/.
  12. If necessary, in the ClickHouse cluster nodes section, add ClickHouse cluster nodes to the storage.

    There can be multiple nodes. You can add nodes by clicking the Add node button and remove them by clicking the Remove node button.

    Available settings:

    • In the FQDN field, specify the fully qualified domain name of the node being added. For example, kuma-storage-cluster1-server1.example.com.
    • In the shard, replica, and keeper ID fields, specify the role of the node in the ClickHouse cluster. The shard and keeper IDs must be unique within the cluster, the replica ID must be unique within the shard. The following example shows how to populate the ClickHouse cluster nodes section for a storage with dedicated keepers in a distributed installation. You can adapt the example to suit your needs.

      [Distributed Installation diagram]

      Example:

      ClickHouse cluster nodes:

      • FQDN: kuma-storage-cluster1-server1.example.com; Shard ID: 0; Replica ID: 0; Keeper ID: 1
      • FQDN: kuma-storage-cluster1-server2.example.com; Shard ID: 0; Replica ID: 0; Keeper ID: 2
      • FQDN: kuma-storage-cluster1-server3.example.com; Shard ID: 0; Replica ID: 0; Keeper ID: 3
      • FQDN: kuma-storage-cluster1-server4.example.com; Shard ID: 1; Replica ID: 1; Keeper ID: 0
      • FQDN: kuma-storage-cluster1-server5.example.com; Shard ID: 1; Replica ID: 2; Keeper ID: 0
      • FQDN: kuma-storage-cluster1-server6.example.com; Shard ID: 2; Replica ID: 1; Keeper ID: 0
      • FQDN: kuma-storage-cluster1-server7.example.com; Shard ID: 2; Replica ID: 2; Keeper ID: 0

      In this example, an ID of 0 indicates that the node does not perform the corresponding role.

  13. On the Advanced settings tab, in the Buffer size field, enter the buffer size in bytes; when the buffer fills up, events are sent to the database. The default value is 64 MB. No maximum value is configured. If the virtual machine has less free RAM than the specified Buffer size, KUMA sets the limit to 128 MB.
  14. On the Advanced settings tab, in the Buffer flush interval field, enter the time in seconds that KUMA waits for the buffer to fill up. If the buffer is not full but the specified time has passed, KUMA sends events to the database. The default value is 1 second.
  15. On the Advanced settings tab, in the Disk buffer size limit field, enter the value in bytes. The disk buffer is used to temporarily store events that could not be sent for further processing or storage. If the disk space allocated for the disk buffer is exhausted, events are rotated as follows: new events replace the oldest events written to the buffer. The default value is 10 GB.
  16. On the Advanced settings tab, in the Disk buffer disabled drop-down list, select Enable or Disable to control the use of the disk buffer. By default, the disk buffer is enabled.
  17. On the Advanced settings tab, in the Write to local database table drop-down list, select Enable or Disable. Writing is disabled by default.

    In Enable mode, data is written only on the host where the storage is located. We recommend using this functionality only if you have configured balancing on the collector and/or correlator: at step 6 (Routing), in the Advanced settings section, the URL selection policy field is set to Round robin.

    In Disable mode, data is distributed among the shards of the cluster.

The set of resources for the storage is created and is displayed under Resources → Storages. Now you can create a storage service.

Page top
[Topic 264692]

Creating a storage service in the KUMA Console

When a set of resources is created for a storage, you can proceed to create a storage service in KUMA.

To create a storage service in the KUMA Console:

  1. In the KUMA Console, under Resources → Active services, click Add service.
  2. In the opened Choose a service window, select the set of resources that you just created for the storage and click Create service.

The storage service is created in the KUMA Console and is displayed under Resources → Active services. Now storage services must be installed on each node of the ClickHouse cluster by using the service ID.

Page top
[Topic 264693]

Installing a storage in the KUMA network infrastructure

To create a storage:

  1. Log in to the server where you want to install the service.
  2. Create the /opt/kaspersky/kuma/ folder.
  3. Copy the "kuma" file to the /opt/kaspersky/kuma/ folder. The file is located in the installer in the /kuma-ansible-installer/roles/kuma/files/ folder.

    Make sure the kuma file has permission to be executed (a command sketch is shown after this procedure).

  4. Execute the following command:

    sudo /opt/kaspersky/kuma/kuma storage --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 by default)> --id <service ID copied from the KUMA console> --install

    Example: sudo /opt/kaspersky/kuma/kuma storage --core https://kuma.example.com:7210 --id XXXXX --install

    When deploying several KUMA services on the same host, during the installation process you must specify unique ports for each component using the --api.port <port> parameter. The following setting values are used by default: --api.port 7221.

  5. Repeat steps 1–4 for each storage node.

The storage is installed.
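Steps 2 and 3 of this procedure, including the permission check, might look like this in the shell (a sketch; it assumes you run the commands from the directory where the KUMA installer was unpacked):

sudo mkdir -p /opt/kaspersky/kuma/
sudo cp kuma-ansible-installer/roles/kuma/files/kuma /opt/kaspersky/kuma/
sudo chmod +x /opt/kaspersky/kuma/kuma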

Page top
[Topic 264694]

Creating a correlator

A correlator consists of two parts: one part is created inside the KUMA Console, and the other part is installed on the network infrastructure server intended for processing events.

Actions in the KUMA Console

A correlator is created in the KUMA Console by using the Installation Wizard. This Wizard combines the necessary resources into a set of resources for the correlator. Upon completion of the Wizard, the service itself is automatically created based on this set of resources.

To create a correlator in the KUMA Console, start the Correlator Installation Wizard in one of the following ways:

  • In the KUMA Console, under Resources, click Add correlator.
  • In the KUMA Console, under Resources → Correlators, click Add correlator.

As a result of completing the steps of the Wizard, a correlator service is created in the KUMA Console.

A resource set for a correlator includes resources such as correlation rules, enrichment rules, response rules, and destinations (configured at steps 3–6 of the Installation Wizard).

These resources can be prepared in advance, or you can create them while the Installation Wizard is running.

Actions on the KUMA correlator server

If you are installing the correlator on a server that you intend to use for event processing, you need to run the command displayed at the last step of the Installation Wizard on the server. When installing, you must specify the ID automatically assigned to the service in the KUMA Console, as well as the port used for communication.

Testing the installation

After creating a correlator, it is recommended to make sure that it is working correctly.

In this section

Starting the Correlator Installation Wizard

Installing a correlator in a KUMA network infrastructure

Validating correlator installation

Page top
[Topic 264695]

Starting the Correlator Installation Wizard

To start the Correlator Installation Wizard, do one of the following:

  • In the KUMA Console, under Resources, click Add correlator.
  • In the KUMA Console, under Resources → Correlators, click Add correlator.

Follow the instructions of the Wizard.

Aside from the first and last steps of the Wizard, the steps of the Wizard can be performed in any order. You can switch between steps by clicking the Next and Previous buttons, as well as by clicking the names of the steps in the left side of the window.

After the Wizard completes, a resource set for the correlator is created in the KUMA Console under Resources → Correlators, and a correlator service is added under Resources → Active services.

In this section

Step 1. General correlator settings

Step 2. Global variables

Step 3. Correlation

Step 4. Enrichment

Step 5. Response

Step 6. Routing

Step 7. Setup validation

Page top
[Topic 264696]

Step 1. General correlator settings

This is a required step of the Installation Wizard. At this step, you specify the main settings of the correlator: the correlator name and the tenant that will own it.

To define the main settings of the correlator:

  • In the Name field, enter a unique name for the service you are creating. The name must contain 1 to 128 Unicode characters.
  • In the Tenant drop-down list, select the tenant that will own the correlator. The tenant selection determines what resources will be available when the correlator is created.

    If you return to this window from another subsequent step of the Installation Wizard and select another tenant, you will have to manually edit all the resources that you have added to the service. Only resources from the selected tenant and shared tenant can be added to the service.

  • If required, specify the number of processes that the service can run concurrently in the Workers field. By default, the number of worker processes is the same as the number of vCPUs on the server where the service is installed.
  • If necessary, use the Debug drop-down list to enable logging of service operations.
  • You can optionally add up to 256 Unicode characters describing the service in the Description field.

The main settings of the correlator are defined. Proceed to the next step of the Installation Wizard.

Page top
[Topic 264698]

Step 2. Global variables

If tracking values in event fields, active lists, or dictionaries is not enough to cover some specific security scenarios, you can use global and local variables. You can use them to take various actions on the values received by the correlators by implementing complex logic for threat detection. Variables can be assigned a specific function and then queried from correlation rules as if they were ordinary event fields, with the triggered function result received in response.

To add a global variable in the correlator,

click the Add variable button and specify the following parameters:

  • In the Variable window, enter the name of the variable.

    Variable naming requirements

    • Must be unique within the correlator.
    • Must contain 1 to 128 Unicode characters.
    • Must not begin with the character $.
    • Must be written in camelCase or CamelCase.
  • In the Value window, enter the variable function.

    Description of variable functions.

The global variable is added. It can be queried from correlation rules by adding the $ character in front of the variable name. There can be multiple variables. Added variables can be edited or deleted by using the cross icon.

Proceed to the next step of the Installation Wizard.

Page top
[Topic 264699]

Step 3. Correlation

This is an optional but recommended step of the Installation Wizard. On the Correlation tab of the Installation Wizard, select or create correlation rules. These resources define the sequences of events that indicate security-related incidents. When these sequences are detected, the correlator creates a correlation event and an alert.

If you have added global variables to the correlator, all added correlation rules can query them.

Correlation rules that are added to the set of resources for the correlator are displayed in the table with the following columns:

  • Correlation rules—name of the correlation rule resource.
  • Type—type of correlation rule: standard, simple, operational. The table can be filtered based on the values of this column by clicking the column header and selecting the relevant values.
  • Actions—list of actions that will be performed by the correlator when the correlation rule is triggered. These actions are indicated in the correlation rule settings. The table can be filtered based on the values of this column by clicking the column header and selecting the relevant values.

    Available values:

    • Output—correlation events created by this correlation rule are transmitted to other correlator resources: enrichment, response rule, and then to other KUMA services.
    • Edit active list—the correlation rule changes the active lists.
    • Loop to correlator—the correlation event is sent to the same correlation rule for reprocessing.
    • Categorization—the correlation rule changes asset categories.
    • Event enrichment—the correlation rule is configured to enrich correlation events.
    • Do not create alert—when a correlation event is created as a result of a correlation rule triggering, no alert is created for that. If you do not want to create an alert when a correlation rule is triggered, but you still want to send a correlation event to the storage, select the Output and No alert check boxes. If you select only the No alert check box, a correlation event is not saved in the storage.
    • Shared resource—the correlation rule or the resources used in the correlation rule are located in a shared tenant.

You can use the Search field to search for a correlation rule. Added correlation rules can be removed from the set of resources by selecting the relevant rules and clicking Delete.

Selecting a correlation rule opens a window with its settings, which can be edited and then saved by clicking Save. If you click Delete in this window, the correlation rule is unlinked from the set of resources.

Click the Move up and Move down buttons to change the position of the selected correlation rules in the table. It affects their execution sequence when events are processed. By clicking the Move operational to top button, you can move correlation rules of the operational type to the beginning of the correlation rules list.

To link the existing correlation rules to the set of resources for the correlator:

  1. Click Link.

    The resource selection window opens.

  2. Select the relevant correlation rules and click OK.

The correlation rules will be linked to the set of resources for the correlator and will be displayed in the rules table.

To create a new correlation rule in a set of resources for a correlator:

  1. Click Add.

    The correlation rule creation window opens.

  2. Specify the correlation rule settings and click Save.

The correlation rule will be created and linked to the set of resources for the correlator. It is displayed in the correlation rules table and in the list of resources under Resources → Correlation rules.

Proceed to the next step of the Installation Wizard.

Page top
[Topic 264701]

Step 4. Enrichment

This is an optional step of the Installation Wizard. On the Enrichment tab of the Installation Wizard, you can select or create enrichment rules and indicate which data from which sources you want to add to correlation events that the correlator creates. There can be more than one enrichment rule. You can add them by clicking the Add button and can remove them by clicking the cross button.

To add an existing enrichment rule to a set of resources:

  1. Click Add.

    This opens the enrichment rule settings block.

  2. In the Enrichment rule drop-down list, select the relevant resource.

The enrichment rule is added to the set of resources for the correlator.

To create a new enrichment rule in a set of resources:

  1. Click Add.

    This opens the enrichment rule settings block.

  2. In the Enrichment rule drop-down list, select Create new.
  3. In the Source kind drop-down list, select the source of data for enrichment and define its corresponding settings:
    • constant

      This type of enrichment is used when a constant needs to be added to an event field. Settings of this type of enrichment:

      • In the Constant field, specify the value that should be added to the event field. The value may not be longer than 255 Unicode characters. If you leave this field blank, the existing event field value will be cleared.
      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.

      If you are using the event enrichment functions for extended schema fields of "String", "Number", or "Float" type with a constant, the constant is added to the field.

      If you are using the event enrichment functions for extended schema fields of "Array of strings", "Array of numbers", or "Array of floats" type with a constant, the constant is added to the elements of the array.

    • dictionary

      This type of enrichment is used if you need to add a value from the dictionary of the Dictionary type.

      When this type is selected in the Dictionary name drop-down list, you must select the dictionary that will provide the values. In the Key fields settings block, you have to click the Add field button and select the event fields whose values will be used for dictionary entry selection.

      If you are using event enrichment with the "Dictionary" type selected as the "Source kind" setting, and an array field is specified in the "Key enrichment fields" setting, when an array is passed as the dictionary key, the array is serialized into a string in accordance with the rules of serializing a single value in the TSV format.

      Example: The "Key enrichment fields" setting uses the SA.StringArrayOne extended schema field. The SA.StringArrayOne extended schema field contains 3 elements: "a", "b" and "c". The following value is passed to the dictionary as the key: ['a','b','c'].

      If the "Key enrichment fields" setting uses an extended schema array field and a regular event schema field, the field values are separated by the "|" character when the dictionary is queried.

      Example: The "Key enrichment fields" setting uses two fields: the SA.StringArrayOne extended schema field and the Code field. The SA.StringArrayOne extended schema field contains 3 elements: "a", "b", and "c"; the Code string field contains the character sequence "myCode". The following value is passed to the dictionary as the key: ['a','b','c']|myCode.

    • event

      This type of enrichment is used when you need to write a value from another event field to the current event field. Settings of this type of enrichment:

      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.
      • In the Source field drop-down list, select the event field whose value will be written to the target field.
      • In the Conversion settings block, you can create rules for modifying the original data before it is written to the KUMA event fields. The conversion type can be selected from the drop-down list. You can click the Add conversion and Delete buttons to add or delete a conversion, respectively. The order of conversions is important.

        Available conversions

        Conversions are changes that can be applied to a value before it gets written to the event field. The conversion type is selected from a drop-down list.

        Available conversions:

        • lower—is used to make all characters of the value lowercase
        • upper—is used to make all characters of the value uppercase
        • regexp – used to convert a value using the regular expression RE2. When this conversion type is selected, the field appears where regular expression should be added.
        • substring—is used to extract characters in the position range specified in the Start and End fields. These fields appear when this conversion type is selected.
        • replace—is used to replace a specified character sequence with another character sequence. When this type of conversion is selected, new fields appear:
          • Replace chars—in this field you can specify the character sequence that should be replaced.
          • With chars—in this field you can specify the character sequence that should be used instead of the replaced characters.
        • trim—is used to remove the characters specified in the Chars field from both the beginning and the end of the value. The field appears when this type of conversion is selected. For example, a trim conversion with the Micromon value applied to Microsoft-Windows-Sysmon results in soft-Windows-Sys.
        • append—is used to add the characters specified in the Constant field to the end of the event field value. The field appears when this type of conversion is selected.
        • prepend—is used to add the characters specified in the Constant field to the beginning of the event field value. The field appears when this type of conversion is selected.
        • replace with regexp—is used to replace the results of an RE2 regular expression with a specified character sequence. When this type of conversion is selected, new fields appear:
          • Expression—in this field you can specify the regular expression whose results should be replaced.
          • With chars—in this field you can specify the character sequence that should be used instead of the replaced characters.
        • Converting encoded strings to text:
          • decodeHexString—used to convert a HEX string to text.
          • decodeBase64String—used to convert a Base64 string to text.
          • decodeBase64URLString—used to convert a Base64url string to text.

          When converting a corrupted string or if a conversion error occurs, corrupted data may be written to the event field.

          During event enrichment, if the length of the encoded string exceeds the size of the field of the normalized event, the string is truncated and is not decoded.

          If the length of the decoded string exceeds the size of the event field into which the decoded value is to be written, such a string is truncated to fit the size of the event field.
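
        For illustration, several of the conversions above can be reproduced in a minimal Go sketch. Treating trim as character-set trimming from both ends (the semantics of Go's strings.Trim) and the decode conversions as standard hex and Base64 decoding is an assumption inferred from the documented examples:

          package main

          import (
              "encoding/base64"
              "encoding/hex"
              "fmt"
              "strings"
          )

          func main() {
              // trim: remove any of the characters "Micromon" from both ends of the value.
              fmt.Println(strings.Trim("Microsoft-Windows-Sysmon", "Micromon")) // soft-Windows-Sys

              // lower: make all characters of the value lowercase.
              fmt.Println(strings.ToLower("KUMA")) // kuma

              // decodeHexString: convert a HEX string to text.
              h, _ := hex.DecodeString("6b756d61")
              fmt.Println(string(h)) // kuma

              // decodeBase64String: convert a Base64 string to text.
              b, _ := base64.StdEncoding.DecodeString("a3VtYQ==")
              fmt.Println(string(b)) // kuma
          }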

        Conversions when using the extended event schema

        Whether or not a conversion can be used depends on the type of extended event schema field being used:

        • For an additional field of the "String" type, all types of conversions are available.
        • For fields of the "Number" and "Float" types, the following types of conversions are available: regexp, substring, replace, trim, append, prepend, replaceWithRegexp, decodeHexString, decodeBase64String, decodeBase64URLString.
        • For fields of "Array of strings", "Array of numbers", and "Array of floats" types, the following types of conversions are available: append, prepend.

    • template

      This type of enrichment is used when you need to write a value obtained by processing Go templates into the event field. Settings of this type of enrichment:

      • Put the Go template into the Template field.

        Event field names are passed in the {{.EventField}} format, where EventField is the name of the event field whose value must be passed to the template.

        Example: Attack on {{.DestinationAddress}} from {{.SourceAddress}}.

      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.

      To convert the data in an array field in a template into the TSV format, you must use the toString function.

      If you are using enrichment of events that have the "Template" type selected as the "Source kind" setting, in which the target field has the "String" type, and the source field is an extended event schema field containing an array of strings, you can use one of the following examples for the template.

      Example:

      {{.SA.StringArrayOne}}

      Example:

      {{- range $index, $element := .SA.StringArrayOne -}}

      {{- if $index}}, {{end}}"{{$element}}"{{- end -}}
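
      Both examples can be checked with Go's standard text/template package, which processes the same template syntax. A minimal self-contained sketch (the event data here is hypothetical):

        package main

        import (
            "os"
            "text/template"
        )

        func main() {
            // A stand-in for an event whose SA.StringArrayOne extended field holds an array of strings.
            event := map[string]any{
                "SA": map[string]any{"StringArrayOne": []string{"a", "b", "c"}},
            }
            tpl := `{{- range $index, $element := .SA.StringArrayOne -}}` +
                `{{- if $index}}, {{end}}"{{$element}}"{{- end -}}`
            t := template.Must(template.New("enrich").Parse(tpl))
            if err := t.Execute(os.Stdout, event); err != nil { // prints: "a", "b", "c"
                panic(err)
            }
        }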

    • dns

      This type of enrichment is used to send requests to a private network DNS server to convert IP addresses into domain names or vice versa. IP addresses are converted to DNS names only for private addresses: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 100.64.0.0/10.

      Available settings:

      • URL—in this field, you can specify the URL of a DNS server to which you want to send requests. You can click the Add URL button to specify multiple URLs.
      • RPS—maximum number of requests sent to the server per second. The default value is 1,000.
      • Workers—maximum number of requests processed concurrently. The default value is 1.
      • Max tasks—maximum number of simultaneously fulfilled requests. By default, this value is equal to the number of vCPUs of the KUMA Core server.
      • Cache TTL—the lifetime of the values stored in the cache. The default value is 60.
      • Cache disabled—you can use this drop-down list to enable or disable caching. Caching is enabled by default.
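
      For illustration, the private-range check described above can be expressed as a minimal Go sketch (the function name is an assumption):

        package main

        import (
            "fmt"
            "net/netip"
        )

        // isResolvable reports whether an IP address falls into one of the ranges
        // for which IP addresses are converted to DNS names.
        func isResolvable(ip netip.Addr) bool {
            for _, cidr := range []string{"10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "100.64.0.0/10"} {
                if netip.MustParsePrefix(cidr).Contains(ip) {
                    return true
                }
            }
            return false
        }

        func main() {
            fmt.Println(isResolvable(netip.MustParseAddr("192.168.1.10"))) // true
            fmt.Println(isResolvable(netip.MustParseAddr("8.8.8.8")))      // false
        }
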
    • cybertrace

      This type of enrichment is used to add information from CyberTrace data streams to event fields.

      Available settings:

      • URL (required)—in this field, you can specify the URL of a CyberTrace server to which you want to send requests.
      • Number of connections—maximum number of connections to the CyberTrace server that can be simultaneously established by KUMA. By default, this value is equal to the number of vCPUs of the KUMA Core server.
      • RPS—maximum number of requests sent to the server per second. The default value is 1,000.
      • Timeout—amount of time to wait for a response from the CyberTrace server, in seconds. The default value is 30.
      • Mapping (required)—this settings block contains the mapping table for mapping KUMA event fields to CyberTrace indicator types. The KUMA field column shows the names of KUMA event fields, and the CyberTrace indicator column shows the types of CyberTrace indicators.

        Available types of CyberTrace indicators:

        • ip
        • url
        • hash

        The mapping table must contain at least one row. You can click the Add row button to add a row, and click the cross button to remove a row.

    • timezone

      This type of enrichment is used in collectors and correlators to assign a specific timezone to an event. Timezone information may be useful when searching for events that occurred at unusual times, such as nighttime.

      When this type of enrichment is selected, the required timezone must be selected from the Timezone drop-down list.

      Make sure that the required time zone is available on the server hosting the service that uses the enrichment. For example, you can check this by using the timedatectl list-timezones command, which lists the time zones available on the server. For more details on setting time zones, please refer to your operating system documentation.

      When an event is enriched, the time offset of the selected timezone relative to Coordinated Universal Time (UTC) is written to the DeviceTimeZone event field in the +-hh:mm format. For example, if you select the Asia/Yekaterinburg timezone, the value +05:00 will be written to the DeviceTimeZone field. If the enriched event already has a value in the DeviceTimeZone field, it will be overwritten.

      By default, if the timezone is not specified in the event being processed and enrichment rules by timezone are not configured, the event is assigned the timezone of the server hosting the service (collector or correlator) that processes the event. If the server time is changed, the service must be restarted.

      Permissible time formats when enriching the DeviceTimeZone field

      When processing incoming raw events in the collector, the following time formats can be automatically converted to the +-hh:mm format:

      Time format in a processed event    Example
      +-hh:mm                             -07:00
      +-hhmm                              -0700
      +-hh                                -07

      If the date format in the DeviceTimeZone field differs from the formats listed above, the collector server timezone is written to the field when an event is enriched with timezone information. You can create custom normalization rules for non-standard time formats.
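
      For illustration, a minimal Go sketch that computes such an offset string for a selected timezone; it reproduces the Asia/Yekaterinburg example above:

        package main

        import (
            "fmt"
            "time"
        )

        // offsetString returns the UTC offset of the given IANA timezone in the +-hh:mm format.
        func offsetString(tz string) (string, error) {
            loc, err := time.LoadLocation(tz)
            if err != nil {
                return "", err
            }
            _, offset := time.Now().In(loc).Zone() // offset in seconds east of UTC
            sign := "+"
            if offset < 0 {
                sign, offset = "-", -offset
            }
            return fmt.Sprintf("%s%02d:%02d", sign, offset/3600, (offset%3600)/60), nil
        }

        func main() {
            s, _ := offsetString("Asia/Yekaterinburg")
            fmt.Println(s) // +05:00
        }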

  4. Use the Debug drop-down list to indicate whether or not to enable logging of service operations. Logging is disabled by default.
  5. In the Filter section, you can specify conditions to identify events that will be processed using the enrichment rule. You can select an existing filter from the drop-down list or create a new filter.

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

          The value to be checked is converted to binary and processed right to left: the bits whose positions are specified in the constant or list are checked.

          If the value being checked is a string, an attempt is made to convert it to an integer and process it as described above. If the string cannot be converted to a number, the filter returns False.
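
          For illustration, a minimal Go sketch of the semantics described above; the function signature and the assumption that all listed bit positions must be set are inferred from the description:

            package main

            import (
                "fmt"
                "strconv"
            )

            // hasBit reports whether the value, converted to an integer and read as a
            // binary number from right to left, has ones at all of the listed positions.
            // A string that cannot be converted to a number yields false.
            func hasBit(value string, positions []uint) bool {
                n, err := strconv.ParseUint(value, 10, 64)
                if err != nil {
                    return false
                }
                for _, pos := range positions {
                    if n>>pos&1 == 0 {
                        return false
                    }
                }
                return true
            }

            func main() {
                fmt.Println(hasBit("5", []uint{0, 2})) // true: 5 is 101 in binary
                fmt.Println(hasBit("kuma", []uint{0})) // false: not a number
            }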

        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

          If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry defined by a key composed of the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
        • inContextTable—presence of the entry in the specified context table.
        • intersect—presence in the left operand of the list items specified in the right operand.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        The selection of this check box does not apply to the InSubnet, InActiveList, InCategory or InActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit button.

The new enrichment rule was added to the set of resources for the correlator.

Proceed to the next step of the Installation Wizard.

[Topic 264702]

Step 5. Response


This is an optional step of the Installation Wizard. On the Response tab of the Installation Wizard, you can select or create response rules and indicate which actions must be performed when the correlation rules are triggered. There can be multiple response rules. You can add them by clicking the Add button and can remove them by clicking the cross button.

To add an existing response rule to a set of resources:

  1. Click Add.

    The response rule settings window opens.

  2. In the Response rule drop-down list, select the relevant resource.

The response rule is added to the set of resources for the correlator.

To create a new response rule in a set of resources:

  1. Click Add.

    The response rule settings window opens.

  2. In the Response rule drop-down list, select Create new.
  3. In the Type drop-down list, select the type of response rule and define its corresponding settings:
    • KSC response—response rules for automatically starting tasks on Kaspersky Security Center assets. For example, you can configure automatic startup of a virus scan or database update.

      Tasks are automatically started when KUMA is integrated with Kaspersky Security Center. Tasks are run only on assets that were imported from Kaspersky Security Center.

      Response settings

      • Kaspersky Security Center task (required)—name of the Open Single Management Platform task that you want to start. Tasks must be created beforehand, and their names must begin with "KUMA ". For example, "KUMA antivirus check".

        Types of Open Single Management Platform tasks that can be started using KUMA:

        • Update
        • Virus scan
      • Event field (required)—defines the event field of the asset for which the Open Single Management Platform task must be started. Possible values:
        • SourceAssetID
        • DestinationAssetID
        • DeviceAssetID

      To send requests to Open Single Management Platform, you must make sure that the Open Single Management Platform is reachable over UDP.

    • Run script—response rules for automatically running a script. For example, you can create a script containing commands to be executed on the KUMA server when selected events are detected.

      The script file is stored on the server where the correlator service using the response resource is installed: /opt/kaspersky/kuma/correlator/<Correlator ID>/scripts.

      The kuma user of this server requires the permissions to run the script.

      Response settings

      • Timeout—the number of seconds the system will wait before running the script.
      • Script name (required)—the name of the script file.

        If the script Response resource is linked to the Correlator service, but there is no script file in the /opt/kaspersky/kuma/correlator/<Correlator ID>/scripts folder, the service will not start.

      • Script arguments—parameters or event field values that must be passed to the script.

        If the script includes actions taken on files, you should specify the absolute path to these files.

        Parameters can be written with quotation marks (").

        Event field names are passed in the {{.EventField}} format, where EventField is the name of the event field whose value must be passed to the script.

        Example: -n "\"usr\": {{.SourceUserName}}"

    • KEDR response—response rules for automatically creating prevention rules, starting network isolation, or starting the application on Kaspersky Endpoint Detection and Response and Kaspersky Security Center assets.

      Automatic response actions are carried out when KUMA is integrated with Kaspersky Endpoint Detection and Response.

      Response settings

      • Event field (required)—event field containing the asset for which the response actions are needed. Possible values:
        • SourceAssetID
        • DestinationAssetID
        • DeviceAssetID
      • Task type—response action to be performed when data matching the filter is received. The following types of response actions are available:
        • Enable network isolation.

          When selecting this type of response, you need to define values for the following settings:

          • Isolation timeout—the number of hours during which the network isolation of an asset will be active. You can indicate from 1 to 9,999 hours.

            If necessary, you can add an exclusion for network isolation.

            To add an exclusion for network isolation:

            1. Click the Add exclusion button.
            2. Select the direction of network traffic that must not be blocked:
              • Inbound.
              • Outbound.
              • Inbound/Outbound.
            3. In the Asset IP field, enter the IP address of the asset whose network traffic must not be blocked.
            4. If you selected Inbound or Outbound, specify the connection ports in the Remote ports and Local ports fields.
            5. If you want to add more than one exclusion, click Add exclusion and repeat the steps to fill in the Traffic direction, Asset IP, Remote ports and Local ports fields.
            6. If you want to delete an exclusion, click the Delete button under the relevant exclusion.

            When adding exclusions to a network isolation rule, Kaspersky Endpoint Detection and Response may incorrectly display the port values in the rule details. This does not affect application performance. For more details on viewing a network isolation rule, please refer to the Kaspersky Anti Targeted Attack Platform Help Guide.

        • Disable network isolation.
        • Add prevention rule.

          When selecting this type of response, you need to define values for the following settings:

          • Event fields to extract hash from—event fields from which KUMA extracts SHA256 or MD5 hashes of the files that must be prevented from starting.

            The selected event fields and the values selected in the Event field must be added to the inherited fields of the correlation rule.

          • File hash #1—SHA256 or MD5 hash of the file to be blocked.

          At least one of the above fields must be completed.

        • Delete prevention rule.
        • Run program.

          When selecting this type of response, you need to define values for the following settings:

          • File path—path to the file of the process that you want to start.
          • Command line parameters—parameters with which you want to start the file.
          • Working directory—directory in which the file is located at the time of startup.

          When a response rule is triggered for users with the Main administrator role, the Run program task will be displayed in the Task manager section of the program web interface. The Scheduled status is displayed for this task in the Created column of the task table. You can view task completion results.

          All of the listed operations can be performed on assets that have Kaspersky Endpoint Agent for Windows. On assets that have Kaspersky Endpoint Agent for Linux, the program can only be started.

          At the software level, creating prevention rules and network isolation rules for assets with Kaspersky Endpoint Agent for Linux is not restricted; however, KUMA and Kaspersky Endpoint Detection and Response do not provide any notifications if these rules cannot be applied.

    • Response via KICS for Networks—response rules for automatically starting tasks on KICS for Networks assets. For example, you can change the asset status in KICS for Networks.

      Tasks are automatically started when KUMA is integrated with KICS for Networks.

      Response settings

      • Event field (required)—event field containing the asset for which the response actions are needed. Possible values:
        • SourceAssetID
        • DestinationAssetID
        • DeviceAssetID
      • KICS for Networks task—response action to be performed when data matching the filter is received. The following types of response actions are available:
        • Change asset status to Authorized.
        • Change asset status to Unauthorized.

        When a response rule is triggered, KUMA will send KICS for Networks an API request to change the status of the specified device to Authorized or Unauthorized.

    • Response via Active Directory—response rules for changing the permissions of Active Directory users. For example, block a user.

      Tasks are started if integration with Active Directory is configured.

      Response settings

      • Account ID source—event field, source of the Active Directory account ID value. Possible values:
        • SourceAccountID
        • DestinationAccountID
      • AD command—command that is applied to the account when the response rule is triggered. Available values:
        • Add account to group
        • Remove account from group
        • Reset account password
        • Block account
  4. In the Workers field, specify the number of processes that the service can run simultaneously.

    By default, the number of workers is the same as the number of virtual processors on the server where the service is installed.

    This field is optional.

  5. In the Filter section, you can specify conditions to identify events that will be processed using the response rule. You can select an existing filter from the drop-down list or create a new filter.

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

          The value to be checked is converted to binary and processed right to left: the bits whose positions are specified in the constant or list are checked.

          If the value being checked is a string, an attempt is made to convert it to an integer and process it as described above. If the string cannot be converted to a number, the filter returns False.

        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

          If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry defined by a key composed of the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
        • inContextTable—presence of the entry in the specified context table.
        • intersect—presence in the left operand of the list items specified in the right operand.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        The selection of this check box does not apply to the InSubnet, InActiveList, InCategory or InActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit button.

The new response rule was added to the set of resources for the correlator.

Proceed to the next step of the Installation Wizard.

[Topic 264705]

Step 6. Routing

This is an optional step of the Installation Wizard. On the Routing tab of the Installation Wizard, you can select or create destinations with settings indicating the forwarding destination of events created by the correlator. Events from a correlator are usually redirected to storage so that they can be saved and later viewed if necessary. Events can be sent to other locations as needed. There can be more than one destination point.

To add an existing destination to a set of resources for a correlator:

  1. In the Add destination drop-down list, select the type of destination resource you want to add:
    • Select Storage if you want to configure forwarding of processed events to the storage.
    • Select Correlator if you want to configure forwarding of processed events to a correlator.
    • Select Other if you want to send events to other locations.

      This type of resource includes correlator and storage services that were created in previous versions of the program.

    The Add destination window opens where you can specify the event forwarding settings.

  2. In the Destination drop-down list, select the necessary destination.

    The window name changes to Edit destination, and it displays the settings of the selected resource. The resource can be opened for editing in a new browser tab by clicking the edit button.

  3. Click Save.

The selected destination is displayed on the Installation Wizard tab. A destination resource can be removed from the resource set by selecting it and clicking Delete in the opened window.

To add a new destination to a set of resources for a correlator:

  1. In the Add destination drop-down list, select the type of destination resource you want to add:
    • Select Storage if you want to configure forwarding of processed events to the storage.
    • Select Correlator if you want to configure forwarding of processed events to a correlator.
    • Select Other if you want to send events to other locations.

      This type of resource includes correlator and storage services that were created in previous versions of the program.

    The Add destination window opens where you can specify the event forwarding settings.

  2. Specify the settings on the Basic settings tab:
    • In the Destination drop-down list, select Create new.
    • In the Name field, enter a unique name for the destination resource. The name must contain 1 to 128 Unicode characters.
    • Click the Disabled toggle button to specify whether events will be sent to this destination. By default, sending events is enabled.
    • Select the Type for the destination resource:
      • Select storage if you want to configure forwarding of processed events to the storage.
      • Select correlator if you want to configure forwarding of processed events to a correlator.
      • Select nats-jetstream, tcp, http, kafka, or file if you want to configure sending events to other locations.
    • Specify the URL to which events should be sent in the hostname:<API port> format.

      You can specify multiple destination addresses by clicking the URL button for all types except nats-jetstream and file.

    • For the nats-jetstream and kafka types, use the Topic field to specify the topic that the data should be written to. The topic name must contain Unicode characters; for kafka, the topic name is limited to 255 characters.
  3. If necessary, specify the settings on the Advanced settings tab. The available settings vary based on the selected destination resource type:
    • Compression is a drop-down list where you can enable Snappy compression. By default, compression is disabled.
    • Proxy is a drop-down list for proxy server selection.
    • The Buffer size field is used to set buffer size (in bytes) for the destination. The default value is 1 MB, and the maximum value is 64 MB.
    • The Timeout field is used to set the timeout (in seconds) for a response from another service or component. The default value is 30.
    • The Disk buffer size limit field is used to specify the size of the disk buffer in bytes. The default size is 10 GB.
    • Cluster ID is the ID of the NATS cluster.
    • TLS mode is a drop-down list where you can specify the conditions for using TLS encryption:
      • Disabled (default)—do not use TLS encryption.
      • Enabled—encryption is enabled, but without verification.
      • With verification—use encryption with verification that the certificate was signed with the KUMA root certificate. The root certificate and key of KUMA are created automatically during program installation and are stored on the KUMA Core server in the folder /opt/kaspersky/kuma/core/certificates/.

      When using TLS, it is impossible to specify an IP address as a URL.
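
      For illustration only, a minimal Go sketch of the difference between the Enabled and With verification client modes; the certificate file name inside /opt/kaspersky/kuma/core/certificates/ is an assumption:

        package main

        import (
            "crypto/tls"
            "crypto/x509"
            "fmt"
            "os"
        )

        func clientTLS(verify bool) (*tls.Config, error) {
            if !verify {
                // Enabled: the connection is encrypted, but the certificate is not verified.
                return &tls.Config{InsecureSkipVerify: true}, nil
            }
            // With verification: trust only certificates signed by the KUMA root certificate.
            caPEM, err := os.ReadFile("/opt/kaspersky/kuma/core/certificates/ca.cert") // hypothetical file name
            if err != nil {
                return nil, err
            }
            pool := x509.NewCertPool()
            if !pool.AppendCertsFromPEM(caPEM) {
                return nil, fmt.Errorf("no certificates parsed")
            }
            return &tls.Config{RootCAs: pool}, nil
        }

        func main() {
            cfg, _ := clientTLS(false)
            fmt.Println(cfg.InsecureSkipVerify) // true
        }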

    • URL selection policy is a drop-down list in which you can select a method for determining which URL to send events to if several URLs have been specified:
      • Any. Events are sent to one of the available URLs as long as this URL receives events. If the connection is broken (for example, the receiving node is disconnected) a different URL will be selected as the events destination.
      • Prefer first. Events are sent to the first URL in the list of added addresses. If it becomes unavailable, events are sent to the next available node in sequence. When the first URL becomes available again, events start to be sent to it again.
      • Balanced means that packets with events are evenly distributed among the available URLs from the list. Because packets are sent either on a destination buffer overflow or on the flush timer, this URL selection policy does not guarantee an equal distribution of events to destinations.
    • Delimiter is used to specify the character delimiting the events. By default, \n is used.
    • Path—the file path if the file destination type is selected.
    • Buffer flush interval—this field is used to set the time interval (in seconds) at which the data is sent to the destination. The default value is 100.
    • Workers—this field is used to set the number of services processing the queue. By default, this value is equal to the number of vCPUs of the KUMA Core server.
    • You can set health checks using the Health check path and Health check timeout fields. You can also disable health checks by selecting the Health Check Disabled check box.
    • Debug—a toggle switch that lets you specify whether resource logging must be enabled. By default, this toggle switch is in the Disabled position.
    • The Disk buffer disabled drop-down list is used to enable or disable the use of a disk buffer. By default, the disk buffer is disabled.
    • In the Filter section, you can specify the conditions to define events that will be processed by this resource. You can select an existing filter from the drop-down list or create a new filter.

      Creating a filter in resources

      1. In the Filter drop-down list, select Create new.
      2. If you want to keep the filter as a separate resource, select the Save filter check box.

        In this case, you will be able to use the created filter in various services.

        This check box is cleared by default.

      3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain 1 to 128 Unicode characters.
      4. In the Conditions settings block, specify the conditions that the events must meet:
        1. Click the Add condition button.
        2. In the Left operand and Right operand drop-down lists, specify the search parameters.

          Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

        3. In the operator drop-down list, select the relevant operator.

          Filter operators

          • =—the left operand equals the right operand.
          • <—the left operand is less than the right operand.
          • <=—the left operand is less than or equal to the right operand.
          • >—the left operand is greater than the right operand.
          • >=—the left operand is greater than or equal to the right operand.
          • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
          • contains—the left operand contains values of the right operand.
          • startsWith—the left operand starts with one of the values of the right operand.
          • endsWith—the left operand ends with one of the values of the right operand.
          • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
          • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

            The value to be checked is converted to binary and processed right to left: the bits whose positions are specified in the constant or list are checked.

            If the value being checked is a string, an attempt is made to convert it to an integer and process it as described above. If the string cannot be converted to a number, the filter returns False.

          • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

            If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

          • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
          • inDictionary—checks whether the specified dictionary contains an entry defined by a key composed of the concatenated values of the selected event fields.
          • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
          • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
          • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
          • inContextTable—presence of the entry in the specified context table.
          • intersect—presence in the left operand of the list items specified in the right operand.
        4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

          The selection of this check box does not apply to the InSubnet, InActiveList, InCategory or InActiveDirectoryGroup operators.

          This check box is cleared by default.

        5. If you want to add a negative condition, select If not from the If drop-down list.
        6. You can add multiple conditions or a group of conditions.
      5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
      6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

        You can view the nested filter settings by clicking the edit button.

  4. Click Save.

The created destination is displayed on the Installation Wizard tab. A destination resource can be removed from the resource set by selecting it and clicking Delete in the opened window.

Proceed to the next step of the Installation Wizard.

[Topic 264706]

Step 7. Setup validation

This is the required, final step of the Installation Wizard. At this step, KUMA creates a set of resources for the service, and services are created automatically based on this set:

  • The set of resources for the correlator is displayed under ResourcesCorrelators. It can be used to create new correlator services. When this set of resources changes, all services that operate based on it will start using the new parameters after the services are restarted. To do so, you can use the Save and restart services or Save and update service configurations button.

    A set of resources can be modified, copied, moved from one folder to another, deleted, imported, and exported, like other resources.

  • Services are displayed in ResourcesActive services. The services created using the Installation Wizard perform functions inside the KUMA program. To communicate with external parts of the network infrastructure, you need to install similar external services on the servers and assets intended for them. For example, an external correlator service should be installed on a server intended to process events, external storage services should be installed on servers with a deployed ClickHouse service, and external agent services should be installed on Windows assets that must both receive and forward Windows events.

To finish the Installation Wizard:

  1. Click Create and save service.

    The Setup validation tab of the Installation Wizard displays a table of services created based on the set of resources selected in the Installation Wizard. The lower part of the window shows examples of commands that you must use to install external equivalents of these services on their intended servers and assets.

    For example:

    /opt/kaspersky/kuma/kuma correlator --core https://kuma-example:<port used for communication with the KUMA Core> --id <service ID> --api.port <port used for communication with the service> --install

    The "kuma" file can be found inside the installer in the /kuma-ansible-installer/roles/kuma/files/ directory.

    The port for communication with the KUMA Core, the service ID, and the port for communication with the service are added to the command automatically. You should also ensure the network connectivity of the KUMA system and open the ports used by its components if necessary.

  2. Close the Wizard by clicking Save.

The correlator service is created in KUMA. Now the equivalent service must be installed on the server intended for processing events.

[Topic 264707]

Installing a correlator in a KUMA network infrastructure

A correlator consists of two parts: one part is created inside the KUMA Console, and the other part is installed on the network infrastructure server intended for processing events.

To install a correlator:

  1. Log in to the server where you want to install the service.
  2. Create the /opt/kaspersky/kuma/ folder.
  3. Copy the "kuma" file to the /opt/kaspersky/kuma/ folder. The file is located in the installer in the /kuma-ansible-installer/roles/kuma/files/ folder.

    Make sure the kuma file has sufficient rights to run.

  4. Execute the following command:

    sudo /opt/kaspersky/kuma/kuma correlator --core https://<KUMA Core server FQDN>:<port used by KUMA Core server for internal communication (port 7210 by default)> --id <service ID copied from the KUMA Console> --api.port <port used for communication with the installed component> --install

    Example: sudo /opt/kaspersky/kuma/kuma correlator --core https://kuma.example.com:7210 --id XXXX --api.port YYYY --install

    You can copy the correlator installation command at the last step of the Installation Wizard. It automatically specifies the address and port of the KUMA Core server, the identifier of the correlator to be installed, and the port that the correlator uses for communication. Before installation, ensure the network connectivity of KUMA components.

    When deploying several KUMA services on the same host, during the installation process you must specify unique ports for each component using the --api.port <port> parameter. The following setting values are used by default: --api.port 7221.

The correlator is installed. You can use it to analyze events for threats.

[Topic 264709]

Validating correlator installation

To verify that the correlator is ready to receive events:

  1. In the KUMA Console, go to the ResourcesActive services section.
  2. Make sure that the correlator you installed has the green status.

If the stream of events fed into the correlator contains events that meet the correlation rule filter conditions, the Events tab will show correlation events with the DeviceVendor=Kaspersky and DeviceProduct=KUMA parameters. The name of the triggered correlation rule is displayed as the name of these correlation events.

If no correlation events are found

You can create a simpler version of your correlation rule to find possible errors. Use a simple correlation rule and a single Output action. It is recommended to create a filter to find events that are regularly received by KUMA.

When updating, adding, or removing a correlation rule, you must update the configuration of the correlator.

When you finish testing your correlation rules, you must remove all testing and temporary correlation rules from KUMA and update the configuration of the correlator.

[Topic 264710]

Creating an event router

An event router is a service that allows you to receive streams of events from collectors and correlators and then distribute the events to specified destinations in accordance with the configured filters.

To have events from a collector sent to the event router, you must create an eventRouter destination resource with the address of the event router and link this resource to the collectors whose events you want to send to the event router.

The event router receives events on the API port, just like storage and correlator destinations.

You can create a router in the Resources section.

Using an event router lets you reduce the utilization of links, which is important for low-bandwidth and busy links.

Possible use cases:

Collector—Event router in the data center


Cascade connection: Multiple collectors—Event router at the branch office; Event router at the branch office—Event router in the data center


The event router must be installed on a Linux device. Only a user with the Main administrator role can create the service. You can create a service in any tenant; the tenant relation does not impose any restrictions.

You can use the following metrics to get information about the service performance:

  • IO
  • Process
  • OS

As with other resources, the following audit events are generated for the event router in KUMA:

  • Resource was successfully added
  • Resource was successfully updated
  • Resource was successfully deleted

Installing an event router involves two steps:

  1. Create the event router service in the KUMA Console using the Installation Wizard.
  2. Install the event router service on the server.

In this section

Starting the event router installation wizard

Installing the event router on the server

[Topic 282795]

Starting the event router installation wizard

To start the event router installation wizard:

  1. In the KUMA Console, in the Resources section, click Event routers.
  2. In the Event routers window that opens, click Add.

Follow the instructions of the wizard.

In this section

Step 1. General settings of the event router

Step 2. Routing

Step 3. Setup validation

[Topic 282808]

Step 1. General settings of the event router

This is a required step of the Installation Wizard. At this step, you specify the main settings of the event router: its name and the tenant that will own it.

To specify the basic settings of the event router:

  1. In the Name field, enter a unique name for the service you are creating. The name must contain 1 to 128 Unicode characters.
  2. In the Tenant drop-down list, select the tenant that will own the event router. An event router belonging to a tenant is organizational in nature and does not impose any restrictions.
  3. If necessary, specify the number of processes that the service can run concurrently in the Handlers field. By default, the number of handlers is the same as the number of vCPUs on the server where the service is installed.
  4. If necessary, use the Debug toggle switch to enable logging of service operations.
  5. You can optionally add up to 4000 Unicode characters describing the service in the Description field.

The basic settings of the event router are configured. Proceed to the next step of the Installation Wizard.

[Topic 282809]

Step 2. Routing

This is a required step of the Installation Wizard. We recommend sending events to at least two destinations: to the correlator for analysis and to the storage to be saved. You can also select another event router as the destination.

To specify the settings of the destination to which you want the event router to send events received from collectors:

  1. In the Routing step of the installation wizard, click Add.
  2. This opens the Create destination window; in that window, specify the following settings:
    1. On the Basic settings tab, in the Name field, enter a unique name for the destination. The name must contain 1 to 128 Unicode characters.
    2. You can use the State toggle switch to enable or disable the service as needed.
    3. In the Type drop-down list, select the type of the destination.
    4. On the Advanced settings tab, specify the values of the parameters. The set of parameters that can be configured depends on the type of the destination selected on the Basic settings tab.

The created destination is displayed on the Installation Wizard tab. A destination resource can be removed from the resource set by selecting it and clicking Delete in the opened window.

Routing is configured. You can proceed to the next step of the installation wizard.

[Topic 282810]

Step 3. Setup validation

This is the required, final step of the Installation Wizard.

To create an event router in the installation wizard:

  1. Click Create and save service.

    The lower part of the window displays the command that you must use to install the event router on the server.

    Example command:

    /opt/kaspersky/kuma/kuma eventrouter --core https://kuma-example:<port used for communication with the KUMA Core> --id <event router service ID> --api.port <port used for communication with the service> --install

    The port for communication with the KUMA Core, the service ID, and the port for communication with the service are added to the command automatically. You must also ensure the network connectivity of KUMA and open the ports used by its components, if necessary.

  2. Close the Wizard by clicking Save.

The service is created in the KUMA Console. You can now proceed with installing the service in the KUMA network infrastructure.

[Topic 282812]

Installing the event router on the server

To install the event router on the server:

  1. Log in to the server where you want to install the event router service.
  2. Create the /opt/kaspersky/kuma/ folder.
  3. Copy the "kuma" file to the /opt/kaspersky/kuma/ directory. The file is located inside the installer in the /kuma-ansible-installer/roles/kuma/files/ directory.
  4. Make sure the kuma file has sufficient rights to run. If the file is not executable, make it executable:

    sudo chmod +x /opt/kaspersky/kuma/kuma

  5. Place the LICENSE file from the /kuma-ansible-installer/roles/kuma/files/ directory in the /opt/kaspersky/kuma/ directory and accept the license by running the following command:

    sudo /opt/kaspersky/kuma/kuma license

  6. Create the 'kuma' user:

    sudo useradd --system kuma && sudo usermod -s /usr/bin/false kuma

  7. Make the 'kuma' user the owner of the /opt/kaspersky/kuma directory and all files inside the directory:

    sudo chown -R kuma:kuma /opt/kaspersky/kuma/

  8. Add the KUMA event router port to firewall exclusions.

    For the program to run correctly, ensure that the KUMA components are able to interact with other components and programs over the network via the protocols and ports specified during the installation of the KUMA components.

  9. Execute the following command:

    sudo /opt/kaspersky/kuma/kuma eventrouter --core https://<FQDN of the KUMA Core server>:<port used by KUMA Core server for internal communication (port 7210 by default)> --id <service ID copied from the KUMA Console> --api.port <port used for communication with the installed component> --install

    Example:

    sudo /opt/kaspersky/kuma/kuma eventrouter --core https://kuma.example.com:7210 --id XXXX --api.port YYYY --install

The event router is installed on the server. You can use it to receive events from collectors and relay the events to specified destinations.

[Topic 282813]

Creating a collector

A collector consists of two parts: one part is created inside the KUMA Console, and the other part is installed on a server in the network infrastructure intended for receiving events.

Actions in the KUMA Console

A collector is created in the KUMA Console by using the Installation Wizard. This Wizard combines the necessary resources into a set of resources for the collector. Upon completion of the Wizard, the service itself is automatically created based on this set of resources.

To create a collector in the KUMA Console,

Start the Collector Installation Wizard:

  • In the KUMA Console, in the Resources section, click Add event source.
  • In the KUMA Console, in the ResourcesCollectors section, click Add collector.

As a result of completing the steps of the Wizard, a collector service is created in the KUMA Console.

A resource set for a collector includes the following resources:

  • Connector
  • Normalizer (at least one)
  • Filters (if required)
  • Aggregation rules (if required)
  • Enrichment rules (if required)
  • Destinations (normally two are defined for sending events to the correlator and storage)

These resources can be prepared in advance, or you can create them while the Installation Wizard is running.

Actions on the KUMA Collector Server

When installing the collector on the server that you intend to use for receiving events, run the command displayed at the last step of the Installation Wizard. When installing, you must specify the ID automatically assigned to the service in the KUMA Console, as well as the port used for communication.

Testing the installation

After creating a collector, you are advised to make sure that it is working correctly.

In this section

Starting the Collector Installation Wizard

Installing a collector in a KUMA network infrastructure

Validating collector installation

Ensuring uninterrupted collector operation

Predefined collectors

[Topic 264711]

Starting the Collector Installation Wizard

A collector consists of two parts: one part is created inside the KUMA Console, and the other part is installed on the network infrastructure server intended for receiving events. The Installation Wizard creates the first part of the collector.

To start the Collector Installation Wizard:

  • In the KUMA Console, in the Resources section, click Add event source.
  • In the KUMA Console, in the ResourcesCollectors section, click Add collector.

Follow the instructions of the Wizard.

Aside from the first and last steps of the Wizard, the steps of the Wizard can be performed in any order. You can switch between steps by clicking the Next and Previous buttons, as well as by clicking the names of the steps on the left side of the window.

After the Wizard completes, a resource set for a collector is created in the KUMA Console under ResourcesCollectors, and a collector service is added under ResourcesActive services.

In this section

Step 1. Connect event sources

Step 2. Transportation

Step 3. Event parsing

Step 4. Filtering events

Step 5. Event aggregation

Step 6. Event enrichment

Step 7. Routing

Step 8. Setup validation

[Topic 264714]

Step 1. Connect event sources

This is a required step of the Installation Wizard. At this step, you specify the main settings of the collector: its name and the tenant that will own it.

To specify the basic settings of the collector:

  1. In the Collector name field, enter a unique name for the service you are creating. The name must contain 1 to 128 Unicode characters.

    When certain types of collectors are created, agents named "agent: <Collector name>, auto created" are also automatically created together with the collectors. If this type of agent was previously created and has not been deleted, it will be impossible to create a collector named <Collector name>. If this is the case, you will have to either specify a different name for the collector or delete the previously created agent.

  2. In the Tenant drop-down list, select the tenant that will own the collector. The tenant selection determines what resources will be available when the collector is created.

    If you return to this window from another subsequent step of the Installation Wizard and select another tenant, you will have to manually edit all the resources that you have added to the service. Only resources from the selected tenant and shared tenant can be added to the service.

  3. If required, specify the number of processes that the service can run concurrently in the Workers field. By default, the number of worker processes is the same as the number of vCPUs on the server where the service is installed.
  4. If necessary, use the Debug drop-down list to enable logging of service operations.

    Error messages of the collector service are logged even when debug mode is disabled. The log can be viewed on the machine where the collector is installed, in the /opt/kaspersky/kuma/collector/<collector ID>/log/collector file.

  5. You can optionally add up to 256 Unicode characters describing the service in the Description field.

The main settings of the collector are specified. Proceed to the next step of the Installation Wizard.

[Topic 264717]

Step 2. Transportation

This is a required step of the Installation Wizard. On the Transport tab of the Installation Wizard, select or create a connector and in its settings, specify the source of events for the collector service.

To add an existing connector to a resource set,

select the name of the required connector from the Connector drop-down list.

The Transport tab of the Installation Wizard displays the settings of the selected connector. You can open the selected connector for editing in a new browser tab by clicking the edit button.

To create a new connector:

  1. Select Create new from the Connector drop-down list.
  2. In the Type drop-down list, select the connector type and specify its settings on the Basic settings and Advanced settings tabs. The available settings depend on the selected type of connector:

    When using the tcp or udp connector type, at the normalization stage, the IP addresses of the assets from which the events were received are written to the DeviceAddress event field if it is empty.

    When using a wmi or wec connector, agents will be automatically created for receiving Windows events.

    It is recommended to use the default encoding (UTF-8) and to apply other settings only if garbled characters appear in the event fields.

    Making a KUMA collector listen on a privileged port (a port number below 1024) requires running the service of the relevant collector with the CAP_NET_BIND_SERVICE capability. To do this, after installing the collector, add the line AmbientCapabilities=CAP_NET_BIND_SERVICE to the [Service] section of its systemd configuration file.
    The systemd unit file is located at /usr/lib/systemd/system/kuma-collector-<collector ID>.service.
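
    For reference, a minimal sketch of the resulting unit file fragment (only the added directive is shown; the other directives of the unit file are omitted):

      [Service]
      AmbientCapabilities=CAP_NET_BIND_SERVICE

    After editing the unit file, apply the change by reloading the systemd configuration (systemctl daemon-reload) and restarting the collector service.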

The connector is added to the resource set of the collector. The created connector is only available in this resource set and is not displayed in the Resources → Connectors section of the web interface.

Proceed to the next step of the Installation Wizard.

[Topic 264718]

Step 3. Event parsing


This is a required step of the Installation Wizard. On the Event parsing tab of the Installation Wizard, select or create a normalizer whose settings will define the rules for converting raw events into normalized events. You can add multiple event parsing rules to the normalizer to implement complex event processing logic. You can test the normalizer using test events.

When creating a new normalizer in the Installation Wizard, by default it is saved in the set of resources for the collector and cannot be used in other collectors. The Save normalizer check box lets you create the normalizer as a separate resource, in which case the normalizer can be selected in other collectors of the tenant.

If, when changing the settings of a collector resource set, you change or delete conversions in a normalizer connected to it, the edits will not be saved, and the normalizer itself may be corrupted. If you need to modify conversions in a normalizer that is already part of a service, the changes must be made directly to the normalizer under Resources → Normalizers in the web interface.

Adding a normalizer

To add an existing normalizer to a resource set:

  1. Click the Add event parsing button.

    This opens the Basic event parsing window with the normalizer settings and the Normalization scheme tab active.

  2. In the Normalizer drop-down list, select the required normalizer. The drop-down list includes normalizers belonging to the tenant of the collector and the Shared tenant.

    The Basic event parsing window displays the settings of the selected normalizer.

    If you want to edit the normalizer settings, in the Normalizer drop-down list, click the pencil icon next to the name of the relevant normalizer. This opens the Edit normalizer window with a dark circle. Clicking the dark circle opens the Basic event parsing window where you can edit the normalizer settings.

    If you want to edit advanced parsing settings, move the cursor over the dark circle to make a plus icon appear; click the plus icon to open the Advanced event parsing window. For details about configuring advanced event parsing, see below.

  3. Click OK.

The normalizer is displayed as a dark circle on the Basic event parsing tab of the Installation Wizard. Clicking on the circle will open the normalizer options for viewing.

To create a new normalizer in the collector:

  1. At the Event parsing step, on the Parsing schemes tab, click the Add event parsing button.

    This opens the Basic event parsing window with the normalizer settings and the Normalization scheme tab active.

  2. If you want to save the normalizer as a separate resource, select the Save normalizer check box; this makes the saved normalizer available for use in other collectors of the tenant. This check box is cleared by default.
  3. In the Name field, enter a unique name for the normalizer. The name must contain 1 to 128 Unicode characters.
  4. In the Parsing method drop-down list, select the type of events to receive. Depending on your choice, you can use the preconfigured rules for matching event fields or set your own rules. When you select some of the parsing methods, additional settings fields may need to be filled.

    Available parsing methods:

    • json

      This parsing method is used to process JSON data where each object, including its nested objects, occupies a single line in a file.

      When processing files with hierarchically arranged data, you can access the fields of nested objects by specifying the names of the parameters dividing them by a period. For example, the username parameter from the string "user": {"username": "system: node: example-01"} can be accessed by using the user.username query.

      Files are processed line by line. Multi-line objects with nested structures may be normalized incorrectly.

      In complex normalization schemes where additional normalizers are used, all nested objects are processed at the first normalization level, except for cases when the extra normalization conditions are not specified and, therefore, the event being processed is passed to the additional normalizer in its entirety.

      Newline characters can be \n and \r\n. Strings must be UTF-8 encoded.

      If you want to send the raw event for advanced normalization, at each nesting level in the Advanced event parsing window, select Yes in the Keep raw event drop-down list.
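
      The following minimal Go sketch illustrates the dot-notation access described above; the event line and the lookup helper are hypothetical and are not part of KUMA:

        package main

        import (
            "encoding/json"
            "fmt"
            "strings"
        )

        // lookup walks a parsed JSON object along a dot-separated path,
        // for example "user.username".
        func lookup(obj map[string]any, path string) (any, bool) {
            cur := any(obj)
            for _, part := range strings.Split(path, ".") {
                m, ok := cur.(map[string]any)
                if !ok {
                    return nil, false
                }
                if cur, ok = m[part]; !ok {
                    return nil, false
                }
            }
            return cur, true
        }

        func main() {
            line := `{"user": {"username": "system: node: example-01"}}`
            var event map[string]any
            if err := json.Unmarshal([]byte(line), &event); err != nil {
                panic(err)
            }
            if v, ok := lookup(event, "user.username"); ok {
                fmt.Println(v) // system: node: example-01
            }
        }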

    • cef

      This parsing method is used to process CEF data.

      When choosing this method, you can use the preconfigured rules for converting events to the KUMA format by clicking the Apply default mapping button.

    • regexp

      This parsing method is used to create custom rules for processing data in a format using regular expressions.

      In the Normalization parameter block field, add a regular expression (RE2 syntax) with named capture groups. The name of a group and its value will be interpreted as the field and the value of the raw event, which can be converted into an event field in KUMA format.

      To add event handling rules:

      1. Copy an example of the data you want to process to the Event examples field. This is an optional but recommended step.
      2. In the Normalization parameter block field add a regular expression with named capture groups in RE2 syntax, for example "(?P<name>regexp)". The regular expression added to the Normalization parameter must exactly match the event. Also, when developing the regular expression, it is recommended to use special characters that match the starting and ending positions of the text: ^, $.

        You can add multiple regular expressions by clicking the Add regular expression button. If you need to remove the regular expression, click the cross button.

      3. Click the Copy field names to the mapping table button.

        Capture group names are displayed in the KUMA field column of the Mapping table. Now you can select the corresponding KUMA field in the column next to each capture group. Alternatively, if you named the capture groups in accordance with the CEF format, you can use the automatic CEF mapping by selecting the Use CEF syntax for normalization check box.

      Event handling rules were added.
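
      As an illustration of how named capture groups become raw event fields, the following minimal Go sketch uses the regexp package (which implements RE2 syntax, the same syntax KUMA uses); the event line and pattern are hypothetical:

        package main

        import (
            "fmt"
            "regexp"
        )

        func main() {
            // The pattern must exactly match the event, so it is anchored with ^ and $.
            re := regexp.MustCompile(`^user=(?P<user>\S+) action=(?P<action>\S+)$`)
            raw := "user=admin action=login"

            match := re.FindStringSubmatch(raw)
            if match == nil {
                fmt.Println("no match: the event would not be normalized")
                return
            }
            // Each named capture group can then be mapped to a KUMA event field
            // in the Mapping table.
            for i, name := range re.SubexpNames() {
                if name != "" {
                    fmt.Printf("%s = %s\n", name, match[i])
                }
            }
        }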

    • syslog

      This parsing method is used to process data in syslog format.

      When choosing this method, you can use the preconfigured rules for converting events to the KUMA format by clicking the Apply default mapping button.

    • csv

      This parsing method is used to create custom rules for processing CSV data.

      When choosing this method, you must specify the separator of values in the string in the Delimiter field. Any single-byte ASCII character can be used as a delimiter.

    • kv

      This parsing method is used to process data in key-value pair format.

      If you select this method, you must provide values in the following required fields:

      • Pair delimiter—specify a character that will serve as a delimiter for key-value pairs. You can specify any one-character (1 byte) value, provided that the character does not match the value delimiter.
      • Value delimiter—specify a character that will serve as a delimiter between the key and the value. You can specify any one-character (1 byte) value, provided that the character does not match the delimiter of key-value pairs.
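
      A rough Go sketch of the key-value splitting logic, assuming a space as the pair delimiter and "=" as the value delimiter (the event line is hypothetical):

        package main

        import (
            "fmt"
            "strings"
        )

        func main() {
            raw := "src=10.0.0.1 dst=10.0.0.2 action=allow"
            fields := make(map[string]string)
            // Split the string into pairs using the pair delimiter,
            // then split each pair using the value delimiter.
            for _, pair := range strings.Split(raw, " ") {
                if kv := strings.SplitN(pair, "=", 2); len(kv) == 2 {
                    fields[kv[0]] = kv[1]
                }
            }
            fmt.Println(fields) // map[action:allow dst:10.0.0.2 src:10.0.0.1]
        }
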
    • xml

      This parsing method is used to process XML data in which each object, including its nested objects, occupies a single line in a file. Files are processed line by line.

      If you want to send the raw event for advanced normalization, at each nesting level in the Advanced event parsing window, select Yes in the Keep raw event drop-down list.

      When this method is selected, you can use the XML attributes parameter block to specify the key attributes to be extracted from tags. If an XML structure has several attributes with different values in the same tag, you can indicate the necessary value by specifying its key in the Source column of the Mapping table.

      To add key XML attributes,

      Click the Add field button, and in the window that appears, specify the path to the required attribute.

      You can add more than one attribute. Attributes can be removed one at a time using the cross icon or all at once by clicking the Reset button.

      If XML key attributes are not specified, then in the course of field mapping the unique path to the XML value will be represented by a sequence of tags.

      Tag numbering

      Tag numbering is available as of KUMA 2.1.3. This functionality lets you automatically number tags in XML events, so that you can parse an event with identical tags or unnamed tags, such as <Data>.

      As an example, we will use the Tag numbering functionality to number the tags of the EventData attribute of Microsoft Windows PowerShell event ID 800.

      PowerShell Event ID 800

      To parse such events, you must:

      • Configure tag numbering.
      • Configure data mapping for numbered tags with KUMA event fields.

      KUMA 3.0.x supports using XML attributes and Tag numbering functionality at the same time in the same extra normalizer. If an attribute contains unnamed tags or identical tags, we recommend using the Tag numbering functionality. If the attribute contains only named tags, use XML attributes. To use this functionality in extra normalizers, you must sequentially enable the "Keep raw event" setting in each extra normalizer along the path that the event follows to the target extra normalizer, and in the target extra normalizer itself.

      For an example of this functionality in action, you can refer to the MicrosoftProducts normalizer — the "Keep raw event" setting is enabled sequentially in the "AD FS" and "424" extra normalizers.

      To configure parsing of events with identically named or unnamed tags:

      1. Create a new normalizer or open an existing normalizer for editing.
      2. In the Basic event parsing window of the normalizer, in the Parsing method drop-down list, select 'xml' and in the Tag numbering field, click Add field.

        In the displayed field, enter the full path to the tag to whose elements you want to assign a number. For example, Event.EventData.Data. The first number to be assigned to a tag is 0. If the tag is empty, for example, <Data />, it is also assigned a number.

      3. To configure data mapping, under Mapping, click Add row and do the following:
        1. In the new row, in the Source field, enter the full path to the tag and its index. For the Microsoft Windows event from the example above, the full path with indices looks like this:
          • Event.EventData.Data.0
          • Event.EventData.Data.1
          • Event.EventData.Data.2 and so on
        2. In the KUMA field drop-down list, select the field in the KUMA event that will receive the value from the numbered tag after parsing.
      4. To save changes:
        • If you created a new normalizer, click Save.
        • If you edited an existing normalizer, click Update configuration in the collector to which the normalizer is linked.

      Parsing is configured.
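
      For illustration, a hypothetical event fragment with unnamed tags and the numbers that tag numbering assigns to them:

        <Event>
          <EventData>
            <Data>value one</Data>   <!-- Event.EventData.Data.0 -->
            <Data>value two</Data>   <!-- Event.EventData.Data.1 -->
            <Data />                 <!-- Event.EventData.Data.2: empty tags are also numbered -->
          </EventData>
        </Event>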

    • netflow5

      This parsing method is used to process data in the NetFlow v5 format.

      When choosing this method, you can use the preconfigured rules for converting events to the KUMA format by clicking the Apply default mapping button. If the netflow5 type is selected for the main parsing, extra normalization is not available.

      In mapping rules, the protocol type for netflow5 is not indicated in the fields of KUMA events by default. When parsing data in NetFlow format, on the Enrichment normalizer tab, you must create a constant data enrichment rule that adds the netflow value to the DeviceProduct target field.

    • netflow9

      This parsing method is used to process data in the NetFlow v9 format.

      When choosing this method, you can use the preconfigured rules for converting events to the KUMA format by clicking the Apply default mapping button. If the netflow9 type is selected for the main parsing, extra normalization is not available.

      In mapping rules, the protocol type for netflow9 is not indicated in the fields of KUMA events by default. When parsing data in NetFlow format, on the Enrichment normalizer tab, you must create a constant data enrichment rule that adds the netflow value to the DeviceProduct target field.

    • sflow5

      This parsing method is used to process data in sflow5 format.

      When choosing this method, you can use the preconfigured rules for converting events to the KUMA format by clicking the Apply default mapping button. If the sflow5 type is selected for the main parsing, extra normalization is not available.

    • ipfix

      This parsing method is used to process IPFIX data.

      When choosing this method, you can use the preconfigured rules for converting events to the KUMA format by clicking the Apply default mapping button. If the ipfix type is selected for the main parsing, extra normalization is not available.

      In mapping rules, the protocol type for ipfix is not indicated in the fields of KUMA events by default. When parsing data in IPFIX format, on the Enrichment normalizer tab, you must create a constant data enrichment rule that adds the netflow value to the DeviceProduct target field.

    • sql—this method becomes available only when using a sql type connector.

      The normalizer uses this method to process data obtained by making a selection from the database.

  5. In the Keep raw event drop-down list, specify whether to store the original raw event in the newly created normalized event. Available values:
    • Don't save—do not save the raw event. This is the default setting.
    • Only errors—save the raw event in the Raw field of the normalized event if errors occurred when parsing it. This value is convenient to use when debugging a service. In this case, every time an event has a non-empty Raw field, you know there was a problem.

      If fields containing the names *Address or *Date* do not comply with normalization rules, these fields are ignored. No normalization error occurs in this case, and the values of the fields are not displayed in the Raw field of the normalized event even if the Keep raw event → Only errors option was selected.

    • Always—always save the raw event in the Raw field of the normalized event.
  6. In the Keep extra fields drop-down list, choose whether you want to store the raw event fields in the normalized event if no mapping rules have been configured for them (see below). The data is stored in the Extra event field. Normalized events can be searched and filtered based on the data stored in the Extra field.

    Filtering based on data from the Extra event field

    Conditions for filters based on data from the Extra event field:

    • Condition—If.
    • Left operand—event field.
    • In this event field, you can specify one of the following values:
      • Extra field.
      • Value from the Extra field in the following format:

        Extra.<field name>

        For example, Extra.app.

        A value of this type is specified manually.

      • Value from the array written to the Extra field in the following format:

        Extra.<field name>.<array element>

        For example, Extra.array.0.

        The values in the array are numbered starting from 0.

        A value of this type is specified manually.

        To work with a value from the Extra field at depth 3 and below, use backquotes ``. For example, `Extra.lev1.lev2.lev3`.

    • Operator – =.
    • Right operand—constant.
    • Value—the value by which you need to filter events.

    By default, fields are not saved.

  7. Copy an example of the data you want to process to the Event examples field. This is an optional but recommended step.
  8. In the Mapping table, configure the mapping of raw event fields to fields of the event in KUMA format:
    1. In the Source column, provide the name of the raw event field that you want to convert into the KUMA event field.

      For details about the field format, refer to the Normalized event data model article. For a description of the mapping, refer to the Mapping fields of predefined normalizers article.

      Clicking the wrench button next to the field names in the Source column opens the Conversion window, in which you can click the Add conversion button to create rules for modifying the original data before they are written to the KUMA event fields.

      Available conversions

      Conversions are changes that can be applied to a value before it gets written to the event field. The conversion type is selected from a drop-down list.

      Available conversions:

      • lower—used to make all characters of the value lowercase.
      • upper—used to make all characters of the value uppercase.
      • regexp—used to convert a value using an RE2 regular expression. When this conversion type is selected, a field appears in which you must specify the regular expression.
      • substring—used to extract the characters in the position range specified in the Start and End fields. These fields appear when this conversion type is selected.
      • replace—used to replace a specified character sequence with another character sequence. When this type of conversion is selected, new fields appear:
        • Replace chars—in this field, specify the character sequence to be replaced.
        • With chars—in this field, specify the character sequence to be used instead of the replaced characters.
      • trim—used to simultaneously remove the characters specified in the Chars field from the beginning and end of the value. The field appears when this type of conversion is selected. For example, a trim conversion with the Chars value Micromon applied to Microsoft-Windows-Sysmon results in soft-Windows-Sys.
      • append—used to add the characters specified in the Constant field to the end of the event field value. The field appears when this type of conversion is selected.
      • prepend—used to prepend the characters specified in the Constant field to the start of the event field value. The field appears when this type of conversion is selected.
      • replace with regexp—used to replace the matches of an RE2 regular expression with a specified character sequence:
        • Expression—in this field, specify the regular expression whose matches should be replaced.
        • With chars—in this field, specify the character sequence to be used instead of the replaced characters.
      • Converting encoded strings to text:
        • decodeHexString—used to convert a HEX string to text.
        • decodeBase64String—used to convert a Base64 string to text.
        • decodeBase64URLString—used to convert a Base64url string to text.

        When converting a corrupted string, or if a conversion error occurs, corrupted data may be written to the event field.

        During event enrichment, if the length of the encoded string exceeds the size of the field of the normalized event, the string is truncated and is not decoded.

        If the length of the decoded string exceeds the size of the event field into which the decoded value is to be written, such a string is truncated to fit the size of the event field.

      Conversions when using the extended event schema

      Whether or not a conversion can be used depends on the type of extended event schema field being used:

      • For an additional field of the "String" type, all types of conversions are available.
      • For fields of the "Number" and "Float" types, the following types of conversions are available: regexp, substring, replace, trim, append, prepend, replaceWithRegexp, decodeHexString, decodeBase64String, decodeBase64URLString.
      • For fields of "Array of strings", "Array of numbers", and "Array of floats" types, the following types of conversions are available: append, prepend.

      In the Conversion window, you can swap the added rules by dragging them by the drag icon; you can also delete them using the cross icon.
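
      Note that the trim conversion removes a set of characters from both ends of the value rather than a literal substring. A minimal Go sketch reproducing the documented example:

        package main

        import (
            "fmt"
            "strings"
        )

        func main() {
            // Characters listed in Chars are stripped from both ends of the value
            // until a character that is not in the set is encountered.
            fmt.Println(strings.Trim("Microsoft-Windows-Sysmon", "Micromon")) // soft-Windows-Sys
        }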

    2. In the KUMA field column, select the required KUMA event field from the drop-down list. You can search for fields by entering their names in the field.

      Recommendations concerning the KUMA field column

      We recommend that you configure the mapping for the following KUMA fields. Otherwise, you will not be able to view observables in alert details and incident details.

      The recommended KUMA fields depend on the observable types:

      • For observables of the MD5 hash and SHA256 types:
        • FileHash
      • For observables of the URL type:
        • RequestUrl
      • For observables of the IP address type:
        • DeviceCustomIPv6Address1
        • DeviceCustomIPv6Address2
        • DeviceCustomIPv6Address3
        • DeviceCustomIPv6Address4
        • DestinationTranslatedAddress
        • DeviceTranslatedAddress
        • DestinationAddress
        • DeviceAddress
        • SourceTranslatedAddress
        • SourceAddress
      • For observables of the Domain name type:
        • DestinationDnsDomain
        • DeviceDnsDomain
        • DeviceNtDomain
        • DestinationNtDomain
        • SourceDnsDomain
        • SourceNtDomain
      • For observables of the UserName type:
        • DestinationUserName
        • SourceUserName
      • For observables of the HostName type:
        • DestinationHostName
        • DeviceHostName
        • SourceHostName
    3. If the name of the KUMA event field selected at the previous step begins with DeviceCustom* or Flex*, you can add a unique custom label in the Label field.

    New table rows can be added by clicking the Add row button. Rows can be deleted individually by clicking the cross button or all at once by clicking the Clear all button.

    If you want KUMA to enrich events with asset information, and the asset information to be available in the alert card when a correlation rule is triggered, in the Mapping table, configure a mapping of host address and host name fields depending on the purpose of the asset. For example, the mapping can apply to SourceAddress and SourceHostName, or DestinationAddress and DestinationHostName fields. As a result of enrichment, the event card includes a SourceAssetID or DestinationAssetID field, and a link to the asset card. Also, as a result of enrichment, asset information is available in the alert card.

    If you have loaded data into the Event examples field, the table will have an Examples column containing examples of values carried over from the raw event field to the KUMA event field.

    If the size of the KUMA event field is less than the length of the value placed in it, the value is truncated to the size of the event field.

  9. Click OK.

The normalizer is displayed as a dark circle on the Event parsing tab of the Installation Wizard. If you want to open the normalizer settings for viewing, click the dark circle. When you hover the mouse over the circle, a plus sign is displayed. Click it to add event parsing rules (see below).

Enriching normalized events with additional data

You can add additional data to newly created normalized events by creating enrichment rules in the normalizer. These enrichment rules are stored in the normalizer where they were created. There can be more than one enrichment rule.

To add enrichment rules to the normalizer:

  1. Select the main or additional parsing rule to open its settings window, and in that window, click the Enrichment tab.
  2. Click the Add enrichment button.

    The enrichment rule parameter block appears. You can delete the group of settings by clicking the cross button.

  3. Select the enrichment type from the Source kind drop-down list. Depending on the selected type, you may see advanced settings that will also need to be completed.

    Available Enrichment rule source types:

    • constant

      This type of enrichment is used when a constant needs to be added to an event field. Settings of this type of enrichment:

      • In the Constant field, specify the value that should be added to the event field. The value may not be longer than 255 Unicode characters. If you leave this field blank, the existing event field value will be cleared.
      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.

      If you are using the event enrichment functions for extended schema fields of "String", "Number", or "Float" type with a constant, the constant is added to the field.

      If you are using the event enrichment functions for extended schema fields of "Array of strings", "Array of numbers", or "Array of floats" type with a constant, the constant is added to the elements of the array.

    • dictionary

      This type of enrichment is used if you need to add a value from the dictionary of the Dictionary type.

      When this type is selected in the Dictionary name drop-down list, you must select the dictionary that will provide the values. In the Key fields settings block, you have to click the Add field button and select the event fields whose values will be used for dictionary entry selection.

      If you are using event enrichment with the "Dictionary" type selected as the "Source kind" setting, and an array field is specified in the "Key enrichment fields" setting, when an array is passed as the dictionary key, the array is serialized into a string in accordance with the rules of serializing a single value in the TSV format.

      Example: The "Key enrichment fields" setting uses the SA.StringArrayOne extended schema field. The SA.StringArrayOne extended schema field contains 3 elements: "a", "b" and "c". The following value is passed to the dictionary as the key: ['a','b','c'].

      If the "Key enrichment fields" setting uses an extended schema array field and a regular event schema field, the field values are separated by the "|" character when the dictionary is queried.

      Example: The "Key enrichment fields" setting uses two fields: the SA.StringArrayOne extended schema field and the Code field. The SA.StringArrayOne extended schema field contains 3 elements: "a", "b", and "c"; the Code string field contains the character sequence "myCode". The following value is passed to the dictionary as the key: ['a','b','c']|myCode.

    • table

      This type of enrichment is used if you need to add a value from the dictionary of the Table type.

      When this enrichment type is selected in the Dictionary name drop-down list, select the dictionary for providing the values. In the Key fields group of settings, click the Add field button to select the event fields whose values are used for dictionary entry selection.

      In the Mapping table, configure the dictionary fields to provide data and the event fields to receive data:

      • In the Dictionary field column, select the dictionary field. The available fields depend on the selected dictionary resource.
      • In the KUMA field column, select the event field to which the value is written. For some of the selected fields (*custom* and *flex*), in the Label column, you can specify a name for the data written to them.

      New table rows can be added by clicking the Add new element button. Rows can be deleted by clicking the cross button.

    • event

      This type of enrichment is used when you need to write a value from another event field to the current event field. Settings of this type of enrichment:

      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.
      • In the Source field drop-down list, select the event field whose value will be written to the target field.
      • Clicking the wrench button opens the Conversion window in which you can, by clicking the Add conversion button, create rules for modifying the original data before writing them to the KUMA event fields.

        Available conversions

        Conversions are changes that can be applied to a value before it gets written to the event field. The conversion type is selected from a drop-down list.

        Available conversions:

        • lower—used to make all characters of the value lowercase.
        • upper—used to make all characters of the value uppercase.
        • regexp—used to convert a value using an RE2 regular expression. When this conversion type is selected, a field appears in which you must specify the regular expression.
        • substring—used to extract the characters in the position range specified in the Start and End fields. These fields appear when this conversion type is selected.
        • replace—used to replace a specified character sequence with another character sequence. When this type of conversion is selected, new fields appear:
          • Replace chars—in this field, specify the character sequence to be replaced.
          • With chars—in this field, specify the character sequence to be used instead of the replaced characters.
        • trim—used to simultaneously remove the characters specified in the Chars field from the beginning and end of the value. The field appears when this type of conversion is selected. For example, a trim conversion with the Chars value Micromon applied to Microsoft-Windows-Sysmon results in soft-Windows-Sys.
        • append—used to add the characters specified in the Constant field to the end of the event field value. The field appears when this type of conversion is selected.
        • prepend—used to prepend the characters specified in the Constant field to the start of the event field value. The field appears when this type of conversion is selected.
        • replace with regexp—used to replace the matches of an RE2 regular expression with a specified character sequence:
          • Expression—in this field, specify the regular expression whose matches should be replaced.
          • With chars—in this field, specify the character sequence to be used instead of the replaced characters.
        • Converting encoded strings to text:
          • decodeHexString—used to convert a HEX string to text.
          • decodeBase64String—used to convert a Base64 string to text.
          • decodeBase64URLString—used to convert a Base64url string to text.

          When converting a corrupted string, or if a conversion error occurs, corrupted data may be written to the event field.

          During event enrichment, if the length of the encoded string exceeds the size of the field of the normalized event, the string is truncated and is not decoded.

          If the length of the decoded string exceeds the size of the event field into which the decoded value is to be written, such a string is truncated to fit the size of the event field.

        Conversions when using the extended event schema

        Whether or not a conversion can be used depends on the type of extended event schema field being used:

        • For an additional field of the "String" type, all types of conversions are available.
        • For fields of the "Number" and "Float" types, the following types of conversions are available: regexp, substring, replace, trim, append, prepend, replaceWithRegexp, decodeHexString, decodeBase64String, decodeBase64URLString.
        • For fields of "Array of strings", "Array of numbers", and "Array of floats" types, the following types of conversions are available: append, prepend.

      When using enrichment of events that have the "Event" selected as the "Source kind" setting and the fields of the extended event schema are used as arguments, the following special considerations apply:

      • If the source field is an "Array of strings" field and the target field is a "String" field, the values are written to the target field in the TSV format.

        Example: The SA.StringArray extended event schema field contains values: "string1", "string2", "string3". An event enrichment operation is performed. The result of the operation is written to the DeviceCustomString1 event schema field. As a result of the operation, the DeviceCustomString1 field contains ["string1", "string2", "string3"].

      • If the source field is an "Array of strings" field and the target field is an "Array of strings" field, the values of the source field are appended to the values of the target field and are placed in the target field, with commas (",") used as the separator character.

        Example: The SA.StringArrayOne extended event schema field contains values: "string1", "string2", "string3". An event enrichment operation is performed. The result of the operation is written to the SA.StringArrayTwo event schema field. As a result of the operation, the SA.StringArrayTwo field contains "string1", "string2", "string3".

    • template

      This type of enrichment is used when you need to write a value obtained by processing Go templates into the event field. Settings of this type of enrichment:

      • Put the Go template into the Template field.

        Event field names are passed in the {{.EventField}} format, where EventField is the name of the event field whose value must be passed to the template.

        Example: Attack on {{.DestinationAddress}} from {{.SourceAddress}}.

      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.

      To convert the data in an array field in a template into the TSV format, you must use the toString function.

      If you are using enrichment of events that have the "Template" type selected as the "Source kind" setting, in which the target field has the "String" type, and the source field is an extended event schema field containing an array of strings, you can use one of the following examples for the template.

      Example:

      {{.SA.StringArrayOne}}

      Example:

      {{- range $index, $element := .SA.StringArrayOne -}}

      {{- if $index}}, {{end}}"{{$element}}"{{- end -}}
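
      For reference, the second template above iterates over the array and renders it as a comma-separated list of quoted strings. A standalone Go sketch of the same logic (the event structure is illustrative; in KUMA the values come from the extended event schema):

        package main

        import (
            "os"
            "text/template"
        )

        func main() {
            event := struct {
                SA struct{ StringArrayOne []string }
            }{}
            event.SA.StringArrayOne = []string{"string1", "string2", "string3"}

            tpl := template.Must(template.New("enrich").Parse(
                `{{- range $index, $element := .SA.StringArrayOne -}}` +
                    `{{- if $index}}, {{end}}"{{$element}}"{{- end -}}`))
            // Prints: "string1", "string2", "string3"
            if err := tpl.Execute(os.Stdout, event); err != nil {
                panic(err)
            }
        }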

  4. In the Target field drop-down list, select the KUMA event field to which you want to write the data.

    This setting is not available for the enrichment source of the Table type.

  5. If you want to enable details in the normalizer log, set the Debug toggle switch to enabled. Details are disabled by default.
  6. Click OK.

The event enrichment rules with the additional data are added to the selected parsing rule of the normalizer.

Configuring parsing linked to IP addresses

You can direct events from multiple IP addresses, from sources of different types, to the same collector, and the collector will apply the corresponding configured normalizers.

You can use this method for collectors with a connector of the UDP, TCP, or HTTP type. If a UDP, TCP, or HTTP connector is specified in the collector at the Transport step, then at the Event parsing step, you can specify multiple IP addresses on the Parsing settings tab and choose the normalizer that you want to use for events coming from the specified addresses. The following types of normalizers are available: json, cef, regexp, syslog, csv, kv, xml.

In a collector with configured normalizers linked to IP addresses, if you change the connector type to any type other than UDP, TCP, HTTP, the Parsing settings tab disappears and only the first of the previously specified normalizers is specified at the Parsing step. The tab disappears from the web interface immediately, but the changes are applied after the resource is saved. If you want to restore the previous settings, exit the collector installation wizard without saving.

For normalizers of the Syslog and regexp types, you can use a normalizer chain by specifying extra normalization conditions depending on the value of the DeviceProcessName field. The difference from extra normalization is that you can specify shared normalizers.

To configure parsing with linking to IP addresses:

  1. At the Event parsing step, go to the Parsing settings tab.
  2. In the IP address(-es) field, specify one or more IP addresses from which events will be received. You can specify multiple IP addresses separated by commas. Available format: IPv4. The length of the address list is unlimited; however, we recommend specifying a reasonable number of addresses to keep the load on the collector balanced. This field is mandatory if you want to apply multiple normalizers in one collector.

    Limitation: for each IP+normalizer combination, the IP address must be unique. KUMA checks the uniqueness of addresses, and if you specify the same IP address for different normalizers, the "The field must be unique" message is displayed.

    If you want to send all events to the same normalizer without specifying IP addresses, we recommend creating a separate collector. We also recommend creating a separate collector with one normalizer if you want to apply the same normalizer to events from a large number of IP addresses; this helps improve the performance.

  3. In the Normalizer field, create a normalizer or select an existing normalizer from the drop-down list. The arrow next to the drop-down list takes you to the Parsing schemes tab.

    Normalization is triggered only for connectors of the UDP, TCP, or HTTP type; for the HTTP type, the event source header must also be specified.

    Taking into account the available connectors, the following normalizer types are available for automatic source recognition: json, cef, regexp, syslog, csv, kv, xml.

  4. If you selected the Syslog or regexp normalizer type, you can add an Additional condition. Conditional normalization is available if field mapping for DeviceProcessName is configured in the main normalizer. Under Condition, specify the process name in the DeviceProcessName field and create a normalizer or select an existing normalizer from the drop-down list. You can specify multiple DeviceProcessName + normalizer combinations; normalization is performed until the first match.

Parsing with linking to IP addresses is configured.

Creating a structure of event normalization rules

To implement a complex event processing logic, you can add multiple event parsing rules to the normalizer. Events are transmitted between the parsing rules depending on the specified conditions. The sequence of creating parsing rules is important. The event is processed sequentially, and its path is shown using arrows.

To create an additional parsing rule:

  1. Create a normalizer (see above).

    The created normalizer is displayed in the window as a dark circle.

  2. Hover the mouse over the circle and click the plus sign button that appears.
  3. In the Additional event parsing window that opens, specify the parameters of the additional event parsing rule:
    • Extra normalization conditions tab:

      If you want to send a raw event for extra normalization, select Yes in the Keep raw event drop-down list. The default value is No. We recommend passing a raw event to normalizers of the json and xml types. If you want to send a raw event for extra normalization to the second, third, and subsequent nesting levels, select Yes in the Keep raw event drop-down list at each nesting level.

      To send only the events with a specific field to the additional normalizer, specify this field in the Field to pass into normalizer field.

      On this tab, you can also define other conditions. When these conditions are met, the event is sent for additional parsing.

    • Normalization scheme tab:

      On this tab, you can configure event processing rules, similar to the main normalizer settings (see above). The Keep raw event setting is not available. The Event examples field displays the values specified when the initial normalizer was created.

    • Enrichment tab:

      On this tab, you can configure event enrichment rules (see above).

  4. Click OK.

The additional parsing rule is added to the normalizer and displayed as a dark block with the conditions under which this rule is triggered. You can change the settings of the additional parsing rule by clicking it. If you hover the mouse over the additional parsing rule, a plus button appears. You can click this button to create a new additional parsing rule. To delete a normalizer, click the button with the trash icon.

The upper-right corner of the window contains a search field where you can search parsing rules by name.

Proceed to the next step of the Installation Wizard.

[Topic 264719]

Step 4. Filtering events

This is an optional step of the Installation Wizard. The Event filtering tab of the Installation Wizard allows you to select or create a filter whose settings specify the conditions for selecting events. You can add multiple filters to the collector. You can swap the filters by dragging them by the drag icon, as well as delete them. Filters are combined by the AND operator.

To add an existing filter to a collector resource set,

Click the Add filter button and select the required filter from the Filter drop-down menu.

To add a new filter to the collector resource set:

  1. Click the Add filter button and select Create new from the Filter drop-down menu.
  2. If you want to keep the filter as a separate resource, select the Save filter check box. This can be useful if you decide to reuse the same filter across different services. This check box is cleared by default.
  3. If you selected the Save filter check box, enter a name for the created filter in the Name field. The name must contain 1 to 128 Unicode characters.
  4. In the Conditions section, specify the conditions that must be met by the filtered events:
    • Click the Add condition button to add filtering conditions. You can select two values (two operands, left and right) and assign the operation you want to perform with the selected values. The result of the operation is either True or False.
      • In the operator drop-down list, select the function to be performed by the filter.

        In this drop-down list, you can select the do not match case check box if the operator should ignore the case of values. This check box is ignored if the InSubnet, InActiveList, InCategory, and InActiveDirectoryGroup operators are selected. This check box is cleared by default.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

          The value to be checked is converted to binary and processed right to left. Bits are checked whose index is specified as a constant or a list.

          If the value being checked is a string, then an attempt is made to convert it to integer and process it in the way described above. If the string cannot be converted to a number, the filter returns False.
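
          A Go sketch of this check, assuming that all listed bit positions must be set (positions are counted from the right, starting at 0; the helper is illustrative):

            package main

            import (
                "fmt"
                "strconv"
            )

            func hasBit(operand string, positions []uint) bool {
                n, err := strconv.ParseUint(operand, 10, 64)
                if err != nil {
                    return false // the string cannot be converted to a number
                }
                for _, p := range positions {
                    if n&(1<<p) == 0 {
                        return false
                    }
                }
                return true
            }

            func main() {
                fmt.Println(hasBit("5", []uint{0, 2})) // true: 5 is 101 in binary
                fmt.Println(hasBit("5", []uint{1}))    // false: bit 1 of 101 is not set
            }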

        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

          If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed with the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
        • inContextTable—presence of the entry in the specified context table.
        • intersect—presence in the left operand of the list items specified in the right operand.
      • In the Left operand and Right operand drop-down lists, select where the data to be filtered will come from. As a result of the selection, Advanced settings will appear. Use them to determine the exact value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.
      • You can use the If drop-down list to choose whether you need to create a negative filter condition.

      Conditions can be deleted by clicking the cross button.

    • Click the Add group button to add groups of conditions. The group operator can be switched between AND, OR, and NOT.

      A condition group can be deleted by clicking the cross button.

    • By clicking Add filter, you can add existing filters selected in the Select filter drop-down list to the conditions. You can click the edit button to navigate to a nested filter.

      A nested filter can be deleted by clicking the cross button.

The filter has been added.

Proceed to the next step of the Installation Wizard.

[Topic 264720]

Step 5. Event aggregation

This is an optional step of the Installation Wizard. The Event aggregation tab of the Installation Wizard allows you to select or create an aggregation rule whose settings specify the conditions for aggregating events of the same type. You can add multiple aggregation rules to the collector.

To add an existing aggregation rule to a set of collector resources,

click Add aggregation rule and select the required rule from the Aggregation rule drop-down list.

To add a new aggregation rule to a set of collector resources:

  1. Click the Add aggregation rule button and select Create new from the Aggregation rule drop-down menu.
  2. Enter the name of the newly created aggregation rule in the Name field. The name must contain 1 to 128 Unicode characters.
  3. In the Threshold field, specify how many events must be accumulated before the aggregation rule triggers and the events are aggregated. The default value is 100.
  4. In the Triggered rule lifetime field, specify how long (in seconds) the collector must accumulate events to be aggregated. When this time expires, the aggregation rule is triggered and a new aggregation event is created. The default value is 60.
  5. In the Identical fields section, click the Add field button to select the fields that will be used to identify events of the same type. Selected fields can be deleted by clicking the buttons with a cross icon.
  6. In the Unique fields section, you can click Add field to select the fields that will disqualify events from aggregation even if the events contain fields listed in the Identical fields section. Selected fields can be deleted by clicking the buttons with a cross icon.
  7. In the Sum fields section, you can click the Add field button to select the fields whose values will be summed during the aggregation process. Selected fields can be deleted by clicking the buttons with a cross icon.
  8. In the Filter section, you can specify the conditions to define events that will be processed by this resource. You can select an existing filter from the drop-down list or create a new filter.

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

          The value to be checked is converted to binary and processed right to left. Bits are checked whose index is specified as a constant or a list.

          If the value being checked is a string, then an attempt is made to convert it to integer and process it in the way described above. If the string cannot be converted to a number, the filter returns False.

        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

          If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed with the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
        • inContextTable—presence of the entry in the specified context table.
        • intersect—presence in the left operand of the list items specified in the right operand.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        The selection of this check box does not apply to the InSubnet, InActiveList, InCategory or InActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit-grey button.

The aggregation rule is added. You can delete it by clicking the cross button.

Proceed to the next step of the Installation Wizard.

Page top
[Topic 264721]

Step 6. Event enrichment


This is an optional step of the Installation Wizard. On the Event enrichment tab of the Installation Wizard, you can specify which data from which sources should be added to events processed by the collector. Events can be enriched with data obtained using enrichment rules or LDAP.

Rule-based enrichment

There can be more than one enrichment rule. You can add them by clicking the Add enrichment button and can remove them by clicking the cross button. You can use existing enrichment rules or create rules directly in the Installation Wizard.

To add an existing enrichment rule to a set of resources:

  1. Click Add enrichment.

    This opens the enrichment rules settings block.

  2. In the Enrichment rule drop-down list, select the relevant resource.

The enrichment rule is added to the set of resources for the collector.

To create a new enrichment rule in a set of resources:

  1. Click Add enrichment.

    This opens the enrichment rules settings block.

  2. In the Enrichment rule drop-down list, select Create new.
  3. In the Source kind drop-down list, select the source of data for enrichment and define its corresponding settings:
    • constant

      This type of enrichment is used when a constant needs to be added to an event field. Settings of this type of enrichment:

      • In the Constant field, specify the value that should be added to the event field. The value may not be longer than 255 Unicode characters. If you leave this field blank, the existing event field value will be cleared.
      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.

      If you are using the event enrichment functions for extended schema fields of "String", "Number", or "Float" type with a constant, the constant is added to the field.

      If you are using the event enrichment functions for extended schema fields of "Array of strings", "Array of numbers", or "Array of floats" type with a constant, the constant is added to the elements of the array.

    • dictionary

      This type of enrichment is used if you need to add a value from the dictionary of the Dictionary type.

      When this type is selected, you must select the dictionary that will provide the values from the Dictionary name drop-down list. In the Key fields settings block, click the Add field button and select the event fields whose values will be used to select the dictionary entry.

      If you are using event enrichment with Dictionary selected as the Source kind setting, and an array field is specified in the Key enrichment fields setting, the array passed as the dictionary key is serialized into a string according to the rules for serializing a single value in the TSV format.

      Example: The "Key enrichment fields" setting uses the SA.StringArrayOne extended schema field. The SA.StringArrayOne extended schema field contains 3 elements: "a", "b" and "c". The following value is passed to the dictionary as the key: ['a','b','c'].

      If the "Key enrichment fields" setting uses an extended schema array field and a regular event schema field, the field values are separated by the "|" character when the dictionary is queried.

      Example: The "Key enrichment fields" setting uses two fields: the SA.StringArrayOne extended schema field and the Code field. The SA.StringArrayOne extended schema field contains 3 elements: "a", "b", and "c"; the Code string field contains the character sequence "myCode". The following value is passed to the dictionary as the key: ['a','b','c']|myCode.

    • event

      This type of enrichment is used when you need to write a value from another event field to the current event field. Settings of this type of enrichment:

      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.
      • In the Source field drop-down list, select the event field whose value will be written to the target field.
      • In the Conversion settings block, you can create rules for modifying the original data before it is written to the KUMA event fields. The conversion type can be selected from the drop-down list. You can click the Add conversion and Delete buttons to add or delete a conversion, respectively. The order of conversions is important.

        Available conversions

        Conversions are changes that can be applied to a value before it gets written to the event field. The conversion type is selected from a drop-down list.

        Available conversions:

        • lower—used to make all characters of the value lowercase.
        • upper—used to make all characters of the value uppercase.
        • regexp—used to convert a value using an RE2 regular expression. When this conversion type is selected, a field appears in which you must specify the regular expression.
        • substring—used to extract the characters in the position range specified in the Start and End fields. These fields appear when this conversion type is selected.
        • replace—used to replace a specified character sequence with another character sequence. When this type of conversion is selected, new fields appear:
          • Replace chars—in this field, specify the character sequence to be replaced.
          • With chars—in this field, specify the character sequence to be used instead of the replaced characters.
        • trim—used to simultaneously remove the characters specified in the Chars field from the leading and trailing positions of the value. The field appears when this type of conversion is selected. For example, a trim conversion with the Micromon value applied to Microsoft-Windows-Sysmon results in soft-Windows-Sys.
        • append—used to add the characters specified in the Constant field to the end of the event field value. The field appears when this type of conversion is selected.
        • prepend—used to prepend the characters specified in the Constant field to the start of the event field value. The field appears when this type of conversion is selected.
        • replace with regexp—used to replace the results of an RE2 regular expression with a specified character sequence. When this type of conversion is selected, new fields appear:
          • Expression—in this field, specify the regular expression whose results should be replaced.
          • With chars—in this field, specify the character sequence to be used instead of the replaced characters.
        • Converting encoded strings to text:
          • decodeHexString—used to convert a HEX string to text.
          • decodeBase64String—used to convert a Base64 string to text.
          • decodeBase64URLString—used to convert a Base64url string to text.

          When converting a corrupted string, or if a conversion error occurs, corrupted data may be written to the event field.

          During event enrichment, if the length of the encoded string exceeds the size of the field of the normalized event, the string is truncated and is not decoded.

          If the length of the decoded string exceeds the size of the event field into which the decoded value is to be written, such a string is truncated to fit the size of the event field.
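          As an illustration only, the following standard shell commands (not KUMA commands) reproduce what the decode conversions do to sample strings:

          echo '4b554d41' | xxd -r -p

          echo 'S1VNQQ==' | base64 -d

          Both commands print the text KUMA: the first reproduces decodeHexString for the HEX string 4b554d41, and the second reproduces decodeBase64String for the Base64 string S1VNQQ==.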

        Conversions when using the extended event schema

        Whether or not a conversion can be used depends on the type of extended event schema field being used:

        • For an additional field of the "String" type, all types of conversions are available.
        • For fields of the "Number" and "Float" types, the following types of conversions are available: regexp, substring, replace, trim, append, prepend, replaceWithRegexp, decodeHexString, decodeBase64String, decodeBase64URLString.
        • For fields of "Array of strings", "Array of numbers", and "Array of floats" types, the following types of conversions are available: append, prepend.

    • template

      This type of enrichment is used when you need to write a value obtained by processing Go templates into the event field. Settings of this type of enrichment:

      • Put the Go template into the Template field.

        Event field names are passed in the {{.EventField}} format, where EventField is the name of the event field from which the value must be passed to the template.

        Example: Attack on {{.DestinationAddress}} from {{.SourceAddress}}.

      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.

      To convert the data in an array field in a template into the TSV format, you must use the toString function.

      If you are using enrichment of events that have the "Template" type selected as the "Source kind" setting, in which the target field has the "String" type, and the source field is an extended event schema field containing an array of strings, you can use one of the following examples for the template.

      Example:

      {{.SA.StringArrayOne}}

      Example:

      {{- range $index, $element := .SA.StringArrayOne -}}

      {{- if $index}}, {{end}}"{{$element}}"{{- end -}}
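
      With the example array above ("a", "b", and "c"), the first template writes the array value as-is to the target field, while the second template iterates over the array elements and produces a comma-separated list of quoted values: "a", "b", "c".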

    • dns

      This type of enrichment is used to send requests to a private network DNS server to convert IP addresses into domain names or vice versa. IP addresses are converted to DNS names only for private addresses: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 100.64.0.0/10.

      Available settings:

      • URL—in this field, you can specify the URL of a DNS server to which you want to send requests. You can click the Add URL button to specify multiple URLs.
      • RPS—maximum number of requests sent to the server per second. The default value is 1,000.
      • Workers—maximum number of requests at any one point in time. The default value is 1.
      • Max tasks—maximum number of simultaneously fulfilled requests. By default, this value is equal to the number of vCPUs of the KUMA Core server.
      • Cache TTL—the lifetime of the values stored in the cache. The default value is 60.
      • Cache disabled—you can use this drop-down list to enable or disable caching. Caching is enabled by default.
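
      Before enabling this enrichment, it can be useful to confirm that the DNS server specified in the URL field answers lookups from the collector host. A minimal check with standard tools (the server name and addresses are placeholder examples):

      dig -x 10.0.0.5 @dns.internal.example.com

      dig +short host01.example.com @dns.internal.example.com

      The first command performs a reverse (PTR) lookup for a private address; the second performs a forward lookup.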
    • cybertrace

      This type of enrichment is used to add information from CyberTrace data streams to event fields.

      Available settings:

      • URL (required)—in this field, you can specify the URL of a CyberTrace server to which you want to send requests.
      • Number of connections—maximum number of connections to the CyberTrace server that can be simultaneously established by KUMA. By default, this value is equal to the number of vCPUs of the KUMA Core server.
      • RPS—maximum number of requests sent to the server per second. The default value is 1,000.
      • Timeout—amount of time to wait for a response from the CyberTrace server, in seconds. The default value is 30.
      • Mapping (required)—this settings block contains the mapping table for mapping KUMA event fields to CyberTrace indicator types. The KUMA field column shows the names of KUMA event fields, and the CyberTrace indicator column shows the types of CyberTrace indicators.

        Available types of CyberTrace indicators:

        • ip
        • url
        • hash

        In the mapping table, you must provide at least one row. You can click the Add row button to add a row, and can click the cross button to remove a row.

    • timezone

      This type of enrichment is used in collectors and correlators to assign a specific timezone to an event. Timezone information may be useful when searching for events that occurred at unusual times, such as nighttime.

      When this type of enrichment is selected, the required timezone must be selected from the Timezone drop-down list.

      Make sure that the required time zone is set on the server hosting the service that uses this enrichment. For example, you can use the timedatectl list-timezones command, which shows all time zones available on the server. For more details on setting time zones, please refer to your operating system documentation.
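
      For example, the following commands check that the required time zone is available and then set it (Asia/Yekaterinburg is used here only as an illustration):

      timedatectl list-timezones | grep Yekaterinburg

      sudo timedatectl set-timezone Asia/Yekaterinburg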

      When an event is enriched, the time offset of the selected timezone relative to Coordinated Universal Time (UTC) is written to the DeviceTimeZone event field in the +-hh:mm format. For example, if you select the Asia/Yekaterinburg timezone, the value +05:00 will be written to the DeviceTimeZone field. If the enriched event already has a value in the DeviceTimeZone field, it will be overwritten.

      By default, if the timezone is not specified in the event being processed and enrichment rules by timezone are not configured, the event is assigned the timezone of the server hosting the service (collector or correlator) that processes the event. If the server time is changed, the service must be restarted.

      Permissible time formats when enriching the DeviceTimeZone field

      When processing incoming raw events in the collector, the following time formats can be automatically converted to the +-hh:mm format:

      Time format in a processed event    Example

      +-hh:mm                             -07:00
      +-hhmm                              -0700
      +-hh                                -07

      If the date format in the DeviceTimeZone field differs from the formats listed above, the collector server timezone is written to the field when an event is enriched with timezone information. You can create custom normalization rules for non-standard time formats.

    • geographic data

      This type of enrichment is used to add IP address geographic data to event fields. Learn more about linking IP addresses to geographic data.

      When this type is selected, in the Mapping geographic data to event fields settings block, you must specify from which event field the IP address will be read, select the required attributes of geographic data, and define the event fields in which geographic data will be written:

      1. In the Event field with IP address drop-down list, select the event field from which the IP address is read. Geographic data uploaded to KUMA is matched against this IP address.

        You can click the Add event field with IP address button to specify multiple event fields with IP addresses that require geographic data enrichment. You can delete event fields added in this way by clicking the Delete event field with IP address button.

        When the SourceAddress, DestinationAddress, and DeviceAddress event fields are selected, the Apply default mapping button becomes available. You can click this button to add preconfigured mapping pairs of geographic data attributes and event fields.

      2. For each event field you need to read the IP address from, select the type of geographic data and the event field to which the geographic data should be written.

        You can click the Add geodata attribute button to add a Geodata attribute / Event field to write to pair of fields. You can also configure different types of geographic data for one IP address to be written to different event fields. To delete a field pair, click cross-red.

        • In the Geodata attribute field, select which geographic data corresponding to the read IP address should be written to the event. Available geographic data attributes: Country, Region, City, Longitude, Latitude.
        • In the Event field to write to, select the event field to which the selected geographic data attribute must be written.

        You can write identical geographic data attributes to different event fields. If you configure multiple geographic data attributes to be written to the same event field, the event will be enriched with the last mapping in the sequence.

  4. Use the Debug drop-down list to indicate whether or not to enable logging of service operations. Logging is disabled by default.
  5. In the Filter section, you can specify conditions to identify events that will be processed by the enrichment rule resource. You can select an existing filter from the drop-down list or create a new filter.

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

          The value to be checked is converted to binary and processed from right to left: bit positions are counted from zero starting at the least significant bit, and the bits at the positions specified in the constant or list are checked. For example, the decimal value 6 is 110 in binary, so checking position 1 or 2 returns True, while checking position 0 returns False.

          If the value being checked is a string, an attempt is made to convert it to an integer and process it as described above. If the string cannot be converted to a number, the filter returns False.

        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

          If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry defined by a key composed of the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
        • inContextTable—presence of the entry in the specified context table.
        • intersect—presence in the left operand of the list items specified in the right operand.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a logical operator (AND, OR, NOT) by clicking the AND button.
    6. If you want to add an existing filter, click the Add filter button and select the filter from the Select filter drop-down list.

      You can view the nested filter settings by clicking the edit-grey button.

The new enrichment rule is added to the set of resources for the collector.

LDAP enrichment

To enable enrichment using LDAP:

  1. Click Add enrichment with LDAP data.

    This opens the settings block for LDAP enrichment.

  2. In the LDAP accounts mapping settings block, click the New domain button to specify the domain of the user accounts. You can specify multiple domains.
  3. In the LDAP mapping table, define the rules for mapping KUMA fields to LDAP attributes:
    • In the KUMA field column, select the KUMA event field whose data should be compared to the LDAP attribute.
    • In the LDAP attribute column, specify the attribute that must be compared with the KUMA event field. The drop-down list contains standard attributes and can be augmented with custom attributes.

      Before configuring event enrichment using custom attributes, make sure that custom attributes are configured in AD.

      To enrich events with accounts using custom attributes:

      1. Add Custom AD Account Attributes in the LDAP connection settings.

        Standard imported attributes from AD cannot be added as custom attributes. For example, if you add the standard accountExpires attribute as a custom attribute, KUMA returns an error when saving the connection settings.

        The following account attributes can be requested from Active Directory:

        • accountExpires
        • badPasswordTime
        • cn
        • co
        • company
        • department
        • description
        • displayName
        • distinguishedName
        • division
        • employeeID
        • givenName
        • l
        • lastLogon
        • lastLogonTimestamp
        • Mail
        • mailNickname
        • managedObjects
        • manager
        • memberOf (this attribute can be used for search during correlation)
        • mobile
        • name
        • objectCategory
        • objectGUID (this attribute is always requested from Active Directory, even if you do not specify it)
        • objectSID
        • physicalDeliveryOfficeName
        • pwdLastSet
        • sAMAccountName
        • sAMAccountType
        • sn
        • streetAddress
        • telephoneNumber
        • title
        • userAccountControl
        • UserPrincipalName
        • whenChanged
        • whenCreated

        After you add custom attributes in the LDAP connection settings, the LDAP attribute to receive drop-down list in the collector automatically includes the new attributes. Custom attributes are identified by a question mark next to the attribute name. If you added the same attribute for multiple domains, the attribute is listed only once in the drop-down list. You can view the domains by moving your cursor over the question mark. Domain names are displayed as links. If you click a link, the domain is automatically added to LDAP accounts mapping if it was not previously added.

        If you deleted a custom attribute in the LDAP connection settings, manually delete the row containing the attribute from the mapping table in the collector. Account attribute information in KUMA is updated each time you import accounts.  

      2. Import accounts.
      3. In the collector, in the LDAP mapping table, define the rules for mapping KUMA fields to LDAP attributes.
      4. Restart the collector.

        After the collector is restarted, KUMA begins enriching events with accounts.

         

    • In the KUMA event field to write to column, specify in which field of the KUMA event the ID of the user account imported from LDAP should be placed if the mapping was successful.

    You can click the Add row button to add a row to the table, and can click the cross button to remove a row. You can click the Apply default mapping button to fill the mapping table with standard values.

Event enrichment rules for data received from LDAP are added to the set of resources for the collector.

If you add an enrichment to an existing collector using LDAP or change the enrichment settings, you must stop and restart the service.

Proceed to the next step of the Installation Wizard.

Page top
[Topic 264723]

Step 7. Routing

This is an optional step of the Installation Wizard. On the Routing tab of the Installation Wizard, you can select or create destinations with settings indicating the forwarding destination of events processed by the collector. Typically, events from the collector are routed to two destinations: to the correlator, to analyze events and search for threats, and to the storage, so that processed events are saved and can be viewed later. Events can be sent to other locations as needed. There can be more than one destination point.

To add an existing destination to a collector resource set:

  1. In the Add destination drop-down list, select the type of destination resource you want to add:
    • Select Storage if you want to configure forwarding of processed events to the storage.
    • Select Correlator if you want to configure forwarding of processed events to a correlator.
    • Select Other if you want to send events to other locations.

      This type of resource includes correlator and storage services that were created in previous versions of the program.

    The Add destination window opens where you can specify parameters for event forwarding.

  2. In the Destination drop-down list, select the necessary destination.

    The window name changes to Edit destination, and it displays the settings of the selected resource. To open the settings of a destination for editing in a new browser tab, click edit-grey.

  3. Click Save.

The selected destination is displayed on the Installation Wizard tab. A destination resource can be removed from the resource set by selecting it and clicking Delete in the opened window.

To add a new destination resource to a collector resource set:

  1. In the Add destination drop-down list, select the type of destination resource you want to add:
    • Select Storage if you want to configure forwarding of processed events to the storage.
    • Select Correlator if you want to configure forwarding of processed events to a correlator.
    • Select Other if you want to send events to other locations.

      This type of resource includes correlator and storage services that were created in previous versions of the program.

    The Add destination window opens where you can specify parameters for event forwarding.

  2. Specify the settings on the Basic settings tab:
    • In the Destination drop-down list, select Create new.
    • In the Name field, enter a unique name for the destination resource. The name must contain 1 to 128 Unicode characters.
    • Click the Disabled toggle button to specify whether events will be sent to this destination. By default, sending events is enabled.
    • Select the Type for the destination resource:
      • Select storage if you want to configure forwarding of processed events to the storage.
      • Select correlator if you want to configure forwarding of processed events to a correlator.
      • Select nats-jetstream, tcp, http, kafka, or file if you want to configure sending events to other locations.
    • Specify the URL to which events should be sent in the hostname:<API port> format.

      You can specify multiple destination addresses by clicking the URL button for all types except nats-jetstream, file, and diode.

    • For the nats-jetstream and kafka types, use the Topic field to specify which topic the data should be written to. The topic name must contain Unicode characters; a Kafka topic name is limited to 255 characters.
  3. If necessary, specify the settings on the Advanced settings tab. The available settings vary based on the selected destination resource type:
    • Compression is a drop-down list where you can enable Snappy compression. By default, compression is disabled.
    • Proxy is a drop-down list for proxy server selection.
    • The Buffer size field is used to set buffer size (in bytes) for the destination. The default value is 1 MB, and the maximum value is 64 MB.
    • The Timeout field is used to set the timeout (in seconds) for a response from another service or component. The default value is 30.
    • The Disk buffer size limit field is used to specify the size of the disk buffer in bytes. The default size is 10 GB.
    • Cluster ID is the ID of the NATS cluster.
    • TLS mode is a drop-down list where you can specify the conditions for using TLS encryption:
      • Disabled (default)—do not use TLS encryption.
      • Enabled—encryption is enabled, but without verification.
      • With verification—use encryption with verification that the certificate was signed with the KUMA root certificate. The root certificate and key of KUMA are created automatically during program installation and are stored on the KUMA Core server in the folder /opt/kaspersky/kuma/core/certificates/.

      When using TLS, it is impossible to specify an IP address as a URL.

    • URL selection policy is a drop-down list in which you can select a method for determining which URL to send events to if several URLs have been specified:
      • Any. Events are sent to one of the available URLs as long as this URL receives events. If the connection is broken (for example, if the receiving node is disconnected), a different URL is selected as the events destination.
      • Prefer first. Events are sent to the first URL in the list of added addresses. If it becomes unavailable, events are sent to the next available node in sequence. When the first URL becomes available again, events start to be sent to it again.
      • Balanced means that packages with events are evenly distributed among the available URLs from the list. Because packets are sent either on a destination buffer overflow or on the flush timer, this URL selection policy does not guarantee an equal distribution of events to destinations.
    • Delimiter is used to specify the character delimiting the events. By default, \n is used.
    • Path—the file path if the file destination type is selected.
    • Buffer flush interval—this field is used to set the time interval (in seconds) at which the data is sent to the destination. The default value is 100.
    • Workers—this field is used to set the number of services processing the queue. By default, this value is equal to the number of vCPUs of the KUMA Core server.
    • You can set health checks using the Health check path and Health check timeout fields. You can also disable health checks by selecting the Health Check Disabled check box.
    • Debug—a toggle switch that lets you specify whether resource logging must be enabled. By default, this toggle switch is in the Disabled position.
    • The Disk buffer disabled drop-down list is used to enable or disable the use of a disk buffer. By default, the disk buffer is disabled.

      The disk buffer is used if the collector cannot send normalized events to the destination. The amount of allocated disk space is limited by the value of the Disk buffer size limit setting.

      If the disk space allocated for the disk buffer is exhausted, events are rotated as follows: new events replace the oldest events written to the buffer.

    • In the Filter section, you can specify the conditions to define events that will be processed by this resource. You can select an existing filter from the drop-down list or create a new filter.

      Creating a filter in resources

      1. In the Filter drop-down list, select Create new.
      2. If you want to keep the filter as a separate resource, select the Save filter check box.

        In this case, you will be able to use the created filter in various services.

        This check box is cleared by default.

      3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain 1 to 128 Unicode characters.
      4. In the Conditions settings block, specify the conditions that the events must meet:
        1. Click the Add condition button.
        2. In the Left operand and Right operand drop-down lists, specify the search parameters.

          Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

        3. In the operator drop-down list, select the relevant operator.

          Filter operators

          • =—the left operand equals the right operand.
          • <—the left operand is less than the right operand.
          • <=—the left operand is less than or equal to the right operand.
          • >—the left operand is greater than the right operand.
          • >=—the left operand is greater than or equal to the right operand.
          • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
          • contains—the left operand contains values of the right operand.
          • startsWith—the left operand starts with one of the values of the right operand.
          • endsWith—the left operand ends with one of the values of the right operand.
          • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
          • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

            The value to be checked is converted to binary and processed from right to left: bit positions are counted from zero starting at the least significant bit, and the bits at the positions specified in the constant or list are checked. For example, the decimal value 6 is 110 in binary, so checking position 1 or 2 returns True, while checking position 0 returns False.

            If the value being checked is a string, an attempt is made to convert it to an integer and process it as described above. If the string cannot be converted to a number, the filter returns False.

          • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

            If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

          • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
          • inDictionary—checks whether the specified dictionary contains an entry defined by a key composed of the concatenated values of the selected event fields.
          • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
          • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
          • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
          • inContextTable—presence of the entry in the specified context table.
          • intersect—presence in the left operand of the list items specified in the right operand.
        4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

          The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators.

          This check box is cleared by default.

        5. If you want to add a negative condition, select If not from the If drop-down list.
        6. You can add multiple conditions or a group of conditions.
      5. If you have added multiple conditions or groups of conditions, choose a logical operator (AND, OR, NOT) by clicking the AND button.
      6. If you want to add an existing filter, click the Add filter button and select the filter from the Select filter drop-down list.

        You can view the nested filter settings by clicking the edit-grey button.

  4. Click Save.

The created destination is displayed on the Installation Wizard tab. A destination resource can be removed from the resource set by selecting it and clicking Delete in the opened window.

Proceed to the next step of the Installation Wizard.

Page top
[Topic 264724]

Step 8. Setup validation

This is the required, final step of the Installation Wizard. At this step, KUMA creates a set of resources for the service, and services are created automatically based on this set:

  • The set of resources for the collector is displayed under Resources → Collectors. It can be used to create new collector services. When this set of resources changes, all services that operate based on this set of resources will start using the new parameters after the services restart. To do so, you can click the Save and restart services and Save and update service configurations buttons.

    A set of resources can be modified, copied, moved from one folder to another, deleted, imported, and exported, like other resources.

  • Services are displayed in Resources → Active services. The services created using the Installation Wizard perform functions inside the KUMA program. To communicate with external parts of the network infrastructure, you need to install similar external services on the servers and assets intended for them. For example, an external collector service should be installed on a server intended as an events recipient, external storage services should be installed on servers that have a deployed ClickHouse service, and external agent services should be installed on the Windows assets that must both receive and forward Windows events.

To finish the Installation Wizard:

  1. Click Create and save service.

    The Setup validation tab of the Installation Wizard displays a table of services created based on the set of resources selected in the Installation Wizard. The lower part of the window shows examples of commands that you must use to install external equivalents of these services on their intended servers and assets.

    For example:

    /opt/kaspersky/kuma/kuma collector --core https://kuma-example:<port used for communication with the KUMA Core> --id <service ID> --api.port <port used for communication with the service> --install

    The "kuma" file can be found inside the installer in the /kuma-ansible-installer/roles/kuma/files/ directory.

    The port for communication with the KUMA Core, the service ID, and the port for communication with the service are added to the command automatically. You should also ensure the network connectivity of the KUMA system and open the ports used by its components if necessary.

  2. Close the Wizard by clicking Save collector.

The collector service is created in KUMA. Now you will install a similar service on the server intended for receiving events.

If a wmi or wec connector was selected for collectors, you must also install the automatically created KUMA agents.

Page top
[Topic 264725]

Installing a collector in a KUMA network infrastructure

A collector consists of two parts: one part is created inside the KUMA Console, and the other part is installed on a server in the network infrastructure intended for receiving events.

To install a collector:

  1. Log in to the server where you want to install the service.
  2. Create the /opt/kaspersky/kuma/ folder.
  3. Copy the "kuma" file to the /opt/kaspersky/kuma/ folder. The file is located in the installer in the /kuma-ansible-installer/roles/kuma/files/ folder.

    Make sure the kuma file has sufficient rights to run. If the file is not executable, make it executable:

    sudo chmod +x /opt/kaspersky/kuma/kuma

  4. Place the LICENSE file from the /kuma-ansible-installer/roles/kuma/files/ directory in the /opt/kaspersky/kuma/ directory and accept the license by running the following command:

    sudo /opt/kaspersky/kuma/kuma license

  5. Create the 'kuma' user:

    sudo useradd --system kuma && sudo usermod -s /usr/bin/false kuma

  6. Make the 'kuma' user the owner of the /opt/kaspersky/kuma directory and all files inside the directory:

    sudo chown -R kuma:kuma /opt/kaspersky/kuma/
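
    You can verify the account and the directory ownership before proceeding:

    id kuma

    ls -ld /opt/kaspersky/kuma/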

  7. Execute the following command:

    sudo /opt/kaspersky/kuma/kuma collector --core https://<FQDN of the KUMA Core server>:<port used by KUMA Core for internal communication (port 7210 is used by default)> --id <service ID copied from the KUMA Console> --api.port <port used for communication with the installed component>

    Example: sudo /opt/kaspersky/kuma/kuma collector --core https://test.kuma.com:7210 --id XXXX --api.port YYYY

    If errors are detected as a result of the command execution, make sure that the settings are correct: for example, check the required access level, the network connectivity between the collector service and the Core, and the uniqueness of the selected API port. After fixing the errors, continue installing the collector.

    If no errors were found, and the collector status in the KUMA Console is changed to green, stop the command execution and proceed to the next step.

    The command can be copied at the last step of the installer wizard. It automatically specifies the address and port of the KUMA Core server, the identifier of the collector to be installed, and the port that the collector uses for communication.

    When deploying several KUMA services on the same host, during the installation process you must specify unique ports for each component using the --api.port <port> parameter. The following setting values are used by default: --api.port 7221.

    Before installation, ensure the network connectivity of KUMA components.

  8. Run the command again by adding the --install key:

    sudo /opt/kaspersky/kuma/kuma collector --core https://<FQDN of the KUMA Core server>:<port used by KUMA Core for internal communication (port 7210 is used by default)> --id <service ID copied from the KUMA Console> --api.port <port used for communication with the installed component> --install

    Example: sudo /opt/kaspersky/kuma/kuma collector --core https://kuma.example.com:7210 --id XXXX --api.port YYYY --install

  9. Add KUMA collector port to firewall exclusions.

    For the program to run correctly, ensure that the KUMA components are able to interact with other components and programs over the network via the protocols and ports specified during the installation of the KUMA components.
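
    For example, on a host that uses firewalld, you can open the collector API port as follows (7221 is only the default value mentioned above; substitute the port you actually specified):

    sudo firewall-cmd --permanent --add-port=7221/tcp

    sudo firewall-cmd --reload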

The collector is installed. You can use it to receive data from an event source and forward it for processing.
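
You can also verify on the host that the service is running; the unit name follows the kuma-collector-<collector ID> pattern used in the troubleshooting steps later in this document:

sudo systemctl status kuma-collector-<collector ID>

sudo journalctl -u kuma-collector-<collector ID> --since "10 min ago"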

Page top
[Topic 264726]

Validating collector installation

To verify that the collector is ready to receive events:

  1. In the KUMA Console, go to the Resources → Active services section.
  2. Make sure that the collector you installed has the green status.

If the status of the collector is not green, view the log of this service on the machine where it is installed: the /opt/kaspersky/kuma/collector/<collector ID>/log/collector file. Errors are logged regardless of whether debug mode is enabled or disabled.
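
For example, you can view the most recent entries of this log on the collector host:

tail -n 100 /opt/kaspersky/kuma/collector/<collector ID>/log/collector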

If the collector is installed correctly and you are sure that data is coming from the event source, the table should display events when you search for events associated with the collector.

To check for normalization errors using the Events section of the KUMA Console:

  1. Make sure that the Collector service is running.
  2. Make sure that the event source is providing events to KUMA.
  3. Make sure that you selected Only errors in the Keep raw event drop-down list of the Normalizer resource in the Resources section of the KUMA Console.
  4. In the Events section of KUMA, search for events with the following parameters:

If any events are found with this search, it means that there are normalization errors and they should be investigated.

To check for normalization errors using the Grafana Dashboard:

  1. Make sure that the Collector service is running.
  2. Make sure that the event source is providing events to KUMA.
  3. Open the Metrics section and follow the KUMA Collectors link.
  4. See if the Errors section of the Normalization widget displays any errors.

If there are any errors, it means that there are normalization errors and they should be investigated.

For WEC and WMI collectors, you must ensure that unique ports are used to connect to their agents. This port is specified in the Transport section of the Collector Installation Wizard.

Page top
[Topic 264728]

Ensuring uninterrupted collector operation

An uninterrupted event stream from the event source to KUMA is important for protecting the network infrastructure. Continuity can be ensured through automatic forwarding of the event stream to a larger number of collectors:

  • On the KUMA side, two or more identical collectors must be installed.
  • On the event source side, you must configure control of event streams between collectors using third-party server load management tools, such as rsyslog or nginx.

With this configuration of the collectors in place, no incoming events will be lost if the collector server is unavailable for any reason.

Please keep in mind that when the event stream switches between collectors, each collector will aggregate events separately.

If the KUMA collector fails to start, and its log includes the "panic: runtime error: slice bounds out of range [8:0]" error:

  1. Stop the collector.

    sudo systemctl stop kuma-collector-<collector ID>

  2. Delete the DNS enrichment cache files.

    sudo rm -rf /opt/kaspersky/kuma/collector/<collector ID>/cache/enrichment/DNS-*

  3. Delete the event cache files (disk buffer). Run this command only if you can afford to lose the events currently stored in the collector's disk buffers.

    sudo rm -rf /opt/kaspersky/kuma/collector/<collector ID>/buffers/*

  4. Start the collector service.

    sudo systemctl start kuma-collector-<collector ID>

In this section

Event stream control using rsyslog

Event stream control using nginx

Page top
[Topic 264729]

Event stream control using rsyslog

To enable rsyslog event stream control on the event source server:

  1. Create two or more identical collectors that you want to use to ensure uninterrupted reception of events.
  2. Install rsyslog on the event source server (see the rsyslog documentation).
  3. Add rules for forwarding the event stream between collectors to the configuration file /etc/rsyslog.conf:

    *.* @@<main collector server FQDN>:<port for incoming events>

    $ActionExecOnlyWhenPreviousIsSuspended on

    *.* @@<backup collector server FQDN>:<port for incoming events>

    $ActionExecOnlyWhenPreviousIsSuspended off

    Example configuration file

    Example configuration file specifying one primary and two backup collectors. The collectors are configured to receive events on TCP port 5140.

    *.* @@kuma-collector-01.example.com:5140

    $ActionExecOnlyWhenPreviousIsSuspended on

    & @@kuma-collector-02.example.com:5140

    & @@kuma-collector-03.example.com:5140

    $ActionExecOnlyWhenPreviousIsSuspended off

  4. Restart rsyslog by running the following command:

    systemctl restart rsyslog

Event stream control is now enabled on the event source server.
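
To verify the setup, you can send a test message from the event source server and confirm that it reaches the primary collector. A minimal check, assuming the util-linux version of logger and the example FQDN and port from the configuration above:

logger -n kuma-collector-01.example.com -P 5140 -T "KUMA rsyslog test"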

Page top
[Topic 264730]

Event stream control using nginx

To control the event stream using nginx, you need to create and configure an nginx server to receive events from the event source and then forward them to collectors.

To enable nginx event stream control on the event source server:

  1. Create two or more identical collectors that you want to use to ensure uninterrupted reception of events.
  2. Install nginx on the server intended for event stream control.
    • Installation command in Oracle Linux 8.6:

      sudo dnf install nginx

    • Installation command in Ubuntu 20.04:

      sudo apt-get install nginx

      When installing from sources, you must compile nginx with the --with-stream option:

      sudo ./configure --with-stream --without-http_rewrite_module --without-http_gzip_module

  3. On the nginx server, add a stream module block with the rules for forwarding the event stream between collectors to the nginx.conf configuration file.

    Example stream module

    Example module in which the event stream is distributed between the collectors kuma-collector-01.example.com and kuma-collector-02.example.com, which receive events via TCP on port 5140 and via UDP on port 5141. Balancing is performed by the nginx.example.com nginx server.

    stream {
        upstream syslog_tcp {
            server kuma-collector-01.example.com:5140;
            server kuma-collector-02.example.com:5140;
        }

        upstream syslog_udp {
            server kuma-collector-01.example.com:5141;
            server kuma-collector-02.example.com:5141;
        }

        server {
            listen nginx.example.com:5140;
            proxy_pass syslog_tcp;
        }

        server {
            listen nginx.example.com:5141 udp;
            proxy_pass syslog_udp;
            proxy_responses 0;
        }
    }

    # worker_rlimit_nofile is the limit on the number of open files (RLIMIT_NOFILE) for worker processes. It is used to raise the limit without restarting the main process.
    worker_rlimit_nofile 1000000;

    events {
        # worker_connections is the maximum number of connections that a worker process can open simultaneously.
        worker_connections 20000;
    }
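
    Before restarting nginx, you can check the configuration for syntax errors:

    sudo nginx -t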

  4. Restart nginx by running the following command:

    systemctl restart nginx

  5. On the event source server, forward events to the nginx server.

Event stream control is now enabled on the event source server.

Nginx Plus may be required to fine-tune balancing, but certain balancing methods, such as Round Robin and Least Connections, are available in the base version of nginx.

For more details on configuring nginx, please refer to the nginx documentation.

Page top
[Topic 264731]

Predefined collectors

The predefined collectors listed in the table below are included in the OSMP distribution kit.

Predefined collectors:

  • [OOTB] CEF—Collects CEF events received over the TCP protocol.
  • [OOTB] KSC—Collects events from Kaspersky Security Center over the Syslog TCP protocol.
  • [OOTB] KSC SQL—Collects events from Kaspersky Security Center using an MS SQL database query.
  • [OOTB] Syslog—Collects events via the Syslog protocol.
  • [OOTB] Syslog-CEF—Collects CEF events that arrive over the UDP protocol and have a Syslog header.

Page top
[Topic 270287]

Creating an agent

A KUMA agent consists of two parts: one part is created inside the KUMA Console, and the second part is installed on a server or on an asset in the network infrastructure.

An agent is created in several steps:

  1. Creating a set of resources for an agent in the KUMA Console
  2. Creating an agent service in the KUMA Console
  3. Installing the server portion of the agent to the asset that will forward messages

A KUMA agent for Windows assets can be created automatically when you create a collector with the wmi or wec transport type. Although the set of resources and service of these agents are created in the Collector Installation Wizard, they must still be installed to the asset that will be used to forward a message.

In this section

Creating a set of resources for an agent

Creating an agent service in the KUMA Console

Installing an agent in a KUMA network infrastructure

Automatically created agents

Update agents

Transferring events from isolated network segments to KUMA

Transferring events from Windows machines to KUMA

Page top
[Topic 264732]

Creating a set of resources for an agent

In the KUMA Console, an agent service is created based on the set of resources for an agent that unites connectors and destinations.

To create a set of resources for an agent in the KUMA Console:

  1. In the KUMA Console, under Resources → Agents, click Add agent.

    This opens a window for creating an agent with the Base settings tab active.

  2. Specify the settings on the Base settings tab:
    • In the Agent name field, enter a unique name for the created service. The name must contain 1 to 128 Unicode characters.
    • In the Tenant drop-down list, select the tenant that will own the agent.
    • If necessary, move the Debug toggle switch to the active position to enable logging of service operations.
    • You can optionally add up to 256 Unicode characters describing the service in the Description field.
  3. Click AddResource to create a connection for the agent and switch to the added Connection <number> tab.

    You can remove tabs by clicking cross.

  4. In the Connector group of settings, add a connector:
    • If you want to select an existing connector, select it from the drop-down list.
    • If you want to create a new connector, select Create new in the drop-down list and specify the following settings:
      • Specify the connector name in the Name field. The name must contain 1 to 128 Unicode characters.
      • In the Type drop-down list, select the connector type and specify its settings on the Basic settings and Advanced settings tabs. The available settings depend on the selected type of connector:

        The agent type is determined by the connector that is used in the agent. The only exception is for agents with a destination of the diode type. These agents are considered to be diode agents.

        When using the tcp or udp connector type at the normalization stage, IP addresses of the assets from which the events were received will be written in the DeviceAddress event field if it is empty.

        The ability to edit previously created wec or wmi connections in agents, collectors, and connectors is limited. You can change the connection type from wec to wmi and vice versa, but you cannot change the wec or wmi connection to any other connection type. At the same time, when editing other connection types, you cannot select the wec or wmi types. You can create connections without any restrictions on the types of connectors.

    • You can optionally add up to 4,000 Unicode characters describing the resource in the Description field.

    The connector is added to the selected connection of the agent's set of resources. The created connector is only available in this resource set and is not displayed in the Resources → Connectors section of the web interface.

  5. In the Destinations group of settings, add a destination.
    • If you want to select an existing destination, select it from the drop-down list.
    • If you want to create a new destination, select Create new in the drop-down list and specify the following settings:
      • Specify the destination name in the Name field. The name must contain 1 to 128 Unicode characters.
      • In the Type drop-down list, select the destination type and specify its settings on the Basic settings and Advanced settings tabs. The available settings depend on the selected type of destination:
    • You can optionally add up to 4,000 Unicode characters describing the resource in the Description field.

      The advanced settings for an agent destination (such as TLS mode and compression) must match the advanced destination settings for the collector that you want to link to the agent.

    There can be more than one destination point. You can add them by clicking the Add destination button and can remove them by clicking the cross button.

  6. Repeat steps 3–5 for each agent connection that you want to create.
  7. Click Save.

The set of resources for the agent is created and displayed under Resources → Agents. Now you can create an agent service in KUMA.

Page top
[Topic 264733]

Creating an agent service in the KUMA Console

When a set of resources is created for an agent, you can proceed to create an agent service in KUMA.

To create an agent service in the KUMA Console:

  1. In the KUMA Console, under ResourcesActive services, click Add service.
  2. In the opened Choose a service window, select the set of resources that was just created for the agent and click Create service.

The agent service is created in the KUMA Console and is displayed under ResourcesActive services. Now agent services must be installed to each asset from which you want to forward data to the collector. A service ID is used during installation.

Page top
[Topic 264734]

Installing an agent in a KUMA network infrastructure

When an agent service is created in KUMA, you can proceed to installation of the agent to the network infrastructure assets that will be used to forward data to a collector.

Prior to installation, verify the network connectivity of the system and open the ports used by its components.
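
For example, assuming the default internal communication port, you can check from the asset that the KUMA Core is reachable with a utility such as nc (the FQDN is illustrative):

nc -zv kuma.example.com 7210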

In this section

Installing a KUMA agent on Linux assets

Installing a KUMA agent on Windows assets

Page top
[Topic 264735]

Installing a KUMA agent on Linux assets

A KUMA agent installed on a Linux device stops when you close the terminal or when the server is restarted. To avoid having to start agents manually, we recommend installing the agent by using a system that automatically starts applications when the server is restarted, such as Supervisor. To start agents automatically, define the automatic startup and automatic restart settings in the configuration file. For more details on configuring these settings, please refer to the official documentation of the automatic application startup system that you use. An example of such settings in Supervisor, which you can adapt to your needs:

[program:agent_<agent name>]
command=sudo /opt/kaspersky/kuma/kuma agent --core https://<KUMA Core server FQDN>:<port used by KUMA Core server for internal communication (port 7210 by default)> --id <service ID copied from the KUMA Console> --wd <path to the working directory of the agent>
autostart=true
autorestart=true
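
If you use systemd instead of Supervisor, a minimal unit file sketch follows (the unit content is illustrative and is not shipped with KUMA; substitute your own Core FQDN, service ID, and working directory):

[Unit]
Description=KUMA agent
After=network.target

[Service]
ExecStart=/opt/kaspersky/kuma/kuma agent --core https://<KUMA Core server FQDN>:7210 --id <service ID> --wd <working directory>
Restart=always

[Install]
WantedBy=multi-user.target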

To install a KUMA agent to a Linux asset:

  1. Log in to the server where you want to install the service.
  2. Create the following directories:
    • /opt/kaspersky/kuma/
    • /opt/kaspersky/agent/
  3. Copy the "kuma" file to the /opt/kaspersky/kuma/ folder. The file is located in the installer in the /kuma-ansible-installer/roles/kuma/files/ folder.

    Make sure the kuma file has the execute permission.
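
    For example, you can grant the execute permission as follows:

    sudo chmod +x /opt/kaspersky/kuma/kuma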

  4. Execute the following command:

    sudo /opt/kaspersky/kuma/kuma agent --core https://<KUMA Core server FQDN>:<port used by KUMA Core server for internal communication (port 7210 by default)> --id <service ID copied from the KUMA console> --wd <path to the directory that will contain the files of the installed agent. If this flag is not specified, the files will be stored in the directory where the kuma file is located>

    Example: sudo /opt/kaspersky/kuma/kuma agent --core https://kuma.example.com:7210 --id XXXX --wd /opt/kaspersky/kuma/agent/XXXX

The KUMA agent is installed on the Linux asset. The agent forwards data to KUMA, and you can set up a collector to receive this data.

Page top
[Topic 264736]

Installing a KUMA agent on Windows assets

Prior to installing a KUMA agent to a Windows asset, the server administrator must create a user account with the EventLogReaders and Log on as a service permissions on the Windows asset. This user account must be used to start the agent.

If you want to run the agent under a local account, you need administrator rights and the Log on as a service right. If you only want to perform collection remotely and read logs under a domain account, the EventLogReaders permissions are sufficient.

To install a KUMA agent to a Windows asset:

  1. Copy the kuma.exe file to a folder on the Windows asset. The C:\Users\<User name>\Desktop\KUMA folder is recommended for installation.

    The kuma.exe file is located inside the installer in the /kuma-ansible-installer/roles/kuma/files/ folder.

  2. Start the Command Prompt on the Windows asset with Administrator privileges and locate the folder containing the kuma.exe file.
  3. Execute the following command:

    kuma agent --core https://<fully qualified domain name of the KUMA Core server>:<port used by the KUMA Core server for internal communications (port 7210 by default)> --id <ID of the agent service that was created in KUMA> --user <name of the user account used to run the agent, including the domain> --install

    Example:

    kuma agent --core https://kuma.example.com:7210 --id XXXXX --user domain\username --install

    You can get help information by executing the kuma help agent command.

  4. Enter the password of the user account used to run the agent.

The C:\Program Files\Kaspersky Lab\KUMA\agent\<agent ID> folder is created and the KUMA agent service is installed in it. The agent forwards Windows events to KUMA, and you can set up a collector to receive them.

When the agent service is installed, it starts automatically. The service is also configured to restart in case of any failures. The agent can be restarted from the KUMA Console, but only when the service is active. Otherwise, the service needs to be manually restarted on the Windows asset.
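
If you need to restart the service manually on the asset, you can use the Windows service control utility; the service name here is a placeholder, check the actual name in the Services snap-in (services.msc):

sc.exe stop "<KUMA agent service name>"
sc.exe start "<KUMA agent service name>"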

Removing a KUMA agent from Windows assets

To remove a KUMA agent from a Windows asset:

  1. Start the Command Prompt on the Windows machine with Administrator privileges and locate the folder with kuma.exe file.
  2. Run the following command:

    kuma.exe agent --id <ID of the agent service that was created in KUMA> --uninstall

The specified KUMA agent is removed from the Windows asset. Windows events are no longer sent to KUMA.

When configuring services, you can check the configuration for errors before installation by running the agent with the following command:

kuma agent --core https://<fully qualified domain name of the KUMA Core server>:<port used by the KUMA Core server for internal communications (port 7210 by default)> --id <ID of the agent service that was created in KUMA> --user <name of the user account used to run the agent, including the domain>

Page top
[Topic 264737]

Automatically created agents

When creating a collector with wec or wmi connectors, agents are automatically created for receiving Windows events.

Automatically created agents have the following special conditions:

  • Automatically created agents can have only one connection.
  • Automatically created agents are displayed under ResourcesAgents, and auto created is indicated at the end of their name. Agents can be reviewed or deleted.
  • The settings of automatically created agents are defined automatically based on the collector settings from the Connect event sources and Transport sections. You can change the settings only for a collector that has a created agent.
  • The description of an automatically created agent is taken from the collector description in the Connect event sources section.
  • Debugging of an automatically created agent is enabled and disabled in the Connect event sources section of the collector.
  • When deleting a collector with an automatically created agent, you will be prompted to choose whether to delete the collector together with the agent or to just delete the collector. When deleting only the collector, the agent will become available for editing.
  • When deleting automatically created agents, the type of collector changes to http, and the connection address is deleted from the URL field of the collector.
  • If at least one Windows log name in a wec or wmi connector is specified incorrectly, the agent will not receive events from any Windows log listed in the connector. At the same time, the agent status will be green. Attempts to receive events are repeated every 60 seconds, and error messages are added to the service log.

In the KUMA Console, automatically created agents appear at the same time that the collector is created. However, they must still be installed on the assets that will be used to forward events.

Page top
[Topic 264739]

Updating agents

When updating KUMA versions, the WMI and WEC agents installed on remote machines must also be updated.

To update the agent, use an administrator account and follow these steps:

  1. In the KUMA Console, in the ResourcesActive servicesAgents section, select the agent that you want to update and copy its ID.

    You need the ID to install the new agent with the same ID after removing the old agent.

  2. In Windows, in the Services section, open the agent and click Stop.
  3. On the command line, go to the folder where the agent is installed and run the command to remove the agent from the server.

    kuma.exe agent --id <ID of agent service that was created in KUMA> --uninstall

  4. Place the new agent in the same folder.
  5. On the command line, go to the folder with the new agent and from that folder, run the installation command using the agent ID from step 1.

    kuma agent --core https://<fully qualified domain name of the KUMA Core server>:<port used by the KUMA Core server for internal communications (port 7210 by default)> --id <ID of the agent service that was created in KUMA> --user <name of the user account used to run the agent, including the domain> --install

The agent is updated.

Page top
[Topic 264740]

Transferring events from isolated network segments to KUMA

Data transfer scenario

Data diodes can be used to transfer events from isolated network segments to KUMA. Data transfer is organized as follows:

  1. A KUMA agent that is installed on a standalone server and has a destination of the diode type receives events and moves them to a directory from which the data diode will pick up the events.

    The agent accumulates events in a buffer until the buffer overflows or until a user-defined period after the last write to disk elapses. The events are then written to a file in the temporary directory of the agent. The file is moved to the directory processed by the data diode; its name is a combination of the hash of the file contents (SHA256) and the file creation time.

  2. The data diode moves files from the isolated server directory to the external server directory.
  3. A KUMA collector with a diode connector installed on an external server reads and processes events from the files of the directory where the data diode places files.

    After all events are read from a file, it is automatically deleted. Before reading events, the contents of files are verified based on the hash in the file name. If the contents fail verification, the file is deleted.
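
    The collector performs this verification automatically. As an illustration of the naming scheme, here is a sketch of how a transferred file could be checked by hand, assuming that the SHA256 hash is the first dot-separated component of the file name (the exact name format may differ):

    f=/diode/out/<transferred file>                   # path to a file moved by the data diode
    hash=$(sha256sum "$f" | awk '{print $1}')         # hash of the actual contents
    name_hash=$(basename "$f" | cut -d. -f1)          # hash embedded in the file name (assumed format)
    [ "$hash" = "$name_hash" ] && echo "contents match" || echo "verification failed"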

In the described scenario, the KUMA components are responsible for moving events to a specific directory within the isolated segment and for receiving events from a specific directory in the external network segment. The data diode transfers files containing events from the directory of the isolated network segment to the directory of the external network segment.

For each data source within an isolated network segment, you must create its own KUMA collector and agent, and configure the data diode to work with separate directories.

Configuring KUMA components

Configuring KUMA components for transferring data from isolated network segments consists of the following steps:

  1. Creating a collector service in the external network segment.

    At this step, you must create and install a collector to receive and process the files that the data diode will transfer from the isolated network segment. You can use the Collector Installation Wizard to create the collector and all the resources it requires.

    At the Transport step, you must select or create a connector of the diode type. In the connector, you must specify the directory to which the data diode will move files from the isolated network segment.

    The user "kuma" that runs the collector must have read/write/delete permissions in the directory to which the data diode moves data from the isolated network segment.

  2. Creating a set of resources for a KUMA agent.

    At this step, you must create a set of resources for the KUMA agent that will receive events in an isolated network segment and prepare them for transferring to the data diode. The diode agent resource set has the following requirements:

    • The destination in the agent must have the diode type. In this resource, you must specify the directory from which the data diode will move files to the external network segment.
    • You cannot select connectors of the sql or netflow types for the diode agent.
    • TLS mode must be disabled in the connector of the diode agent.
  3. Downloading the agent configuration file as a JSON file.
    1. The set of agent resources with a diode-type destination must be downloaded as a JSON file.
    2. If secret resources were used in the agent resource set, you must manually add the secret data to the configuration file.
  4. Installing the KUMA agent service in the isolated network segment.

    At this step, you must install the agent in an isolated network segment based on the agent configuration file that was created at the previous step. It can be installed to Linux and Windows devices.
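
A minimal sketch of granting the collector user the required directory permissions, assuming the user is named kuma and the directory path is illustrative:

sudo chown kuma:kuma /var/kuma/diode-in
sudo chmod u+rwx /var/kuma/diode-in    # write and execute on the directory are required to delete files in it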

Configuring a data diode

The data diode must be configured as follows:

  • Data must be transferred atomically from the directory of the isolated server (where the KUMA agent places the data) to the directory of the external server (where the KUMA collector reads the data).
  • The transferred files must be deleted from the isolated server.

For information on configuring the data diode, please refer to the documentation for the data diode used in your organization.

Special considerations

When working with isolated network segments, operations with SQL and NetFlow are not supported.

When using the scenario described above, the agent cannot be administered through the KUMA Console because it resides in an isolated network segment. Such agents are not displayed in the list of active KUMA services.

In this section

Diode agent configuration file

Description of secret fields

Installing Linux Agent in an isolated network segment

Installing Windows Agent in an isolated network segment

Page top
[Topic 264748]

Diode agent configuration file

A created set of agent resources with a diode-type destination can be downloaded as a configuration file. This file is used when installing the agent in an isolated network segment.

To download the configuration file:

In the KUMA Console, under ResourcesAgents, select the relevant set of agent resources with a 'diode' destination and click Download config.

The agent settings configuration is downloaded as a JSON file based on the settings of your browser. Secrets used in the agent resource set are downloaded empty. Their IDs are specified in the file in the "secrets" section. To use a configuration file to install an agent in an isolated network segment, you must manually add secrets to the configuration file (for example, specify the URL and passwords used in the agent connector to receive events).

You must use an access control list (ACL) to configure permissions to access the file on the server where the agent will be installed. File read access must be available to the user account that will run the diode agent.
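
A minimal sketch of such an ACL, assuming the agent runs under a user named kuma and the file path is illustrative:

sudo chown root:root /opt/kaspersky/kuma/agent-config.json
sudo chmod 600 /opt/kaspersky/kuma/agent-config.json
sudo setfacl -m u:kuma:r /opt/kaspersky/kuma/agent-config.json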

Below is an example of a diode agent configuration file with a kafka connector.

{
    "config": {
        "id": "<ID of the set of agent resources>",
        "name": "<name of the set of agent resources>",
        "proxyConfigs": [
            {
                "connector": {
                    "id": "<ID of the connector. This example shows a kafka-type connector, but other types of connectors can also be used in a diode agent. If a connector is created directly in the set of agent resources, the ID is not defined.>",
                    "name": "<name of the connector>",
                    "kind": "kafka",
                    "connections": [
                        {
                            "kind": "kafka",
                            "urls": [
                                "localhost:9093"
                            ],
                            "host": "",
                            "port": "",
                            "secretID": "<ID of the secret>",
                            "clusterID": "",
                            "tlsMode": "",
                            "proxy": null,
                            "rps": 0,
                            "maxConns": 0,
                            "urlPolicy": "",
                            "version": "",
                            "identityColumn": "",
                            "identitySeed": "",
                            "pollInterval": 0,
                            "query": "",
                            "stateID": "",
                            "certificateSecretID": "",
                            "authMode": "pfx",
                            "secretTemplateKind": "",
                            "certSecretTemplateKind": ""
                        }
                    ],
                    "topic": "<kafka topic name>",
                    "groupID": "<kafka group ID>",
                    "delimiter": "",
                    "bufferSize": 0,
                    "characterEncoding": "",
                    "query": "",
                    "pollInterval": 0,
                    "workers": 0,
                    "compression": "",
                    "debug": false,
                    "logs": [],
                    "defaultSecretID": "",
                    "snmpParameters": [
                        {
                            "name": "-",
                            "oid": "",
                            "key": ""
                        }
                    ],
                    "remoteLogs": null,
                    "defaultSecretTemplateKind": ""
                },
                "destinations": [
                    {
                        "id": "<ID of the destination. If the destination is created directly in the set of agent resources, the ID is not defined.>",
                        "name": "<destination name>",
                        "kind": "diode",
                        "connection": {
                            "kind": "file",
                            "urls": [
                                "<path to the directory where the destination should place events that the data diode will transmit from the isolated network segment>",
                                "<path to the temporary directory in which events are placed to prepare for data transmission by the diode>"
                            ],
                            "host": "",
                            "port": "",
                            "secretID": "",
                            "clusterID": "",
                            "tlsMode": "",
                            "proxy": null,
                            "rps": 0,
                            "maxConns": 0,
                            "urlPolicy": "",
                            "version": "",
                            "identityColumn": "",
                            "identitySeed": "",
                            "pollInterval": 0,
                            "query": "",
                            "stateID": "",
                            "certificateSecretID": "",
                            "authMode": "",
                            "secretTemplateKind": "",
                            "certSecretTemplateKind": ""
                        },
                        "topic": "",
                        "bufferSize": 0,
                        "flushInterval": 0,
                        "diskBufferDisabled": false,
                        "diskBufferSizeLimit": 0,
                        "healthCheckPath": "",
                        "healthCheckTimeout": 0,
                        "healthCheckDisabled": false,
                        "timeout": 0,
                        "workers": 0,
                        "delimiter": "",
                        "debug": false,
                        "disabled": false,
                        "compression": "",
                        "filter": null,
                        "path": ""
                    }
                ]
            }
        ],
        "workers": 0,
        "debug": false
    },
    "secrets": {
        "<secret ID>": {
            "pfx": "<encrypted pfx key>",
            "pfxPassword": "<password to the encrypted pfx key. The changeit value is exported from KUMA instead of the actual password. In the configuration file, you must manually specify the contents of secrets>"
        }
    },
    "tenantID": "<ID of the tenant>"
}

Page top
[Topic 264749]

Description of secret fields

Secret fields:

  • user (string): User name.
  • password (string): Password.
  • token (string): Token.
  • urls (array of strings): URL list.
  • publicKey (string): Public key (used in PKI).
  • privateKey (string): Private key (used in PKI).
  • pfx (string containing the base64-encoded PFX file): Base64-encoded contents of the PFX file. In Linux, you can get the base64 encoding of a file by running the following command: base64 -w0 src > dst
  • pfxPassword (string): Password of the PFX file.
  • securityLevel (string): Used in snmp3. Possible values: NoAuthNoPriv, AuthNoPriv, AuthPriv.
  • community (string): Used in snmp1.
  • authProtocol (string): Used in snmp3. Possible values: MD5, SHA, SHA224, SHA256, SHA384, SHA512.
  • privacyProtocol (string): Used in snmp3. Possible values: DES, AES.
  • privacyPassword (string): Used in snmp3.
  • certificate (string containing the base64-encoded PEM file): Base64-encoded contents of the PEM file. In Linux, you can get the base64 encoding of a file by running the following command: base64 -w0 src > dst
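
For example, for a connector that authenticates with a user name and password, a filled-in "secrets" section of the configuration file could look as follows (the values are illustrative):

"secrets": {
    "<secret ID>": {
        "user": "svc-kuma-reader",
        "password": "<actual password>"
    }
}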

Page top
[Topic 264750]

Installing Linux Agent in an isolated network segment

To install a KUMA agent to a Linux device in an isolated network segment:

  1. Place the following files on the Linux server in an isolated network segment that will be used by the agent to receive events and from which the data diode will move files to the external network segment:
    • Agent configuration file.

      You must use an access control list (ACL) to configure access permissions for the configuration file so that only the KUMA user will have file read access.

    • The executable file /opt/kaspersky/kuma/kuma (the "kuma" file can be found in the installer in the /kuma-ansible-installer/roles/kuma/files/ folder).
  2. Execute the following command:

    sudo ./kuma agent --cfg <path to the agent configuration file> --wd <path to the directory where the files of the agent being installed will reside. If this flag is not specified, the files will be stored in the directory where the kuma file is located>

The agent service is installed and running on the server in an isolated network segment. It receives events and relays them to the data diode so that they can be sent to an external network segment.

Page top
[Topic 264751]

Installing Windows Agent in an isolated network segment

Prior to installing a KUMA agent to a Windows asset, the server administrator must create a user account with the EventLogReaders and Log on as a service permissions on the Windows asset. This user account must be used to start the agent.

To install a KUMA agent to a Windows device in an isolated network segment:

  1. Place the following files on the Windows server in the isolated network segment that will be used by the agent to receive events and from which the data diode will move files to the external network segment:
    • Agent configuration file.

      You must use an access control list (ACL) to configure access permissions for the configuration file so that the file can only be read by the user account that will run the agent.

    • The kuma.exe executable file. This file can be found inside the installer in the /kuma-ansible-installer/roles/kuma/files/ directory.

    It is recommended to use the C:\Users\<user name>\Desktop\KUMA folder.

  2. Start the Command Prompt on the Windows asset with Administrator privileges and locate the folder containing the kuma.exe file.
  3. Execute the following command:

    kuma.exe agent --cfg <path to the agent configuration file> --user <user name that will run the agent, including the domain> --install

    You can get installer Help information by running the following command:

    kuma.exe help agent

  4. Enter the password of the user account used to run the agent.

The C:\Program Files\Kaspersky Lab\KUMA\agent\<Agent ID> folder is created, and the KUMA agent service is installed in it. The agent moves events to the configured folder so that they can be processed by the data diode.

When installing the agent, the agent configuration file is moved to the directory C:\Program Files\Kaspersky Lab\KUMA\agent\<agent ID specified in the configuration file>. The kuma.exe file is moved to the C:\Program Files\Kaspersky Lab\KUMA directory.

When installing an agent, its configuration file must not be located in the directory where the agent is installed.

When the agent service is installed, it starts automatically. The service is also configured to restart in case of any failures.

Removing a KUMA agent from Windows assets

To remove a KUMA agent from a Windows asset:

  1. Start the Command Prompt on the Windows machine with Administrator privileges and locate the folder with kuma.exe file.
  2. Run the following command:

    kuma.exe agent --id <agent ID specified in the configuration file> --uninstall

The specified KUMA agent is removed from the Windows asset. Windows events are no longer sent to KUMA.

When configuring services, you can check the configuration for errors before installation by running the agent with the following command:

kuma.exe agent --cfg <path to agent configuration file>

Page top
[Topic 264752]

Transferring events from Windows machines to KUMA

To transfer events from Windows machines to KUMA, a combination of a KUMA agent and a KUMA collector is used. Data transfer is organized as follows:

  1. The KUMA agent installed on the machine receives Windows events:
    • Using the WEC connector: the agent receives events arriving at the host under a subscription, as well as the server logs.
    • Using the WMI connector: the agent connects to remote servers specified in the configuration and receives events.
  2. The agent sends events (without preprocessing) to the KUMA collector specified in the destination.

    You can configure the agent so that different logs are sent to different collectors.

  3. The collector receives events from the agent, performs a full event processing cycle, and sends the processed events to the destination.

Receiving events from the WEC agent is recommended when using centralized gathering of events from Windows hosts using Windows Event Forwarding (WEF). The agent must be installed on the server that collects events; it acts as the Windows Event Collector (WEC). We do not recommend installing KUMA agents on every endpoint host from which you want to receive events.

The process of configuring the receipt of events using the WEC Agent is described in detail in the appendix: Configuring receipt of events from Windows devices using KUMA Agent (WEC).

For details about the Windows Event Forwarding technology, please refer to the official Microsoft documentation.

We recommend receiving events using the WMI agent in the following cases:

  • If it is not possible to use the WEF technology to implement centralized gathering of events, and at the same time, installation of third-party software (for example, the KUMA agent) on the event source server is prohibited.
  • If you need to obtain events from a small number of hosts (no more than 500 hosts per KUMA agent).

For connecting Windows logs as an event source, we recommend using the "Add event source" wizard. When using a wizard to create a collector with WEC or WMI connectors, agents are automatically created for receiving Windows events. You can also manually create the resources necessary for collecting Windows events.

An agent and a collector for receiving Windows events are created and installed in several stages:

  1. Creating a set of resources for an agent

    Agent connector:

    When creating an agent, on the Connection tab, you must create or select a connector of the WEC or WMI type.

    If at least one Windows log name in a WEC or WMI connector is specified incorrectly, the agent will receive events from all Windows logs listed in the connector, except the problematic log. At the same time the agent status will be green. Attempts to receive events will be repeated every 60 seconds, and error messages will be added to the service log.

    Agent destination:

    The type of agent destination depends on the data transfer method you use: nats, tcp, http, diode, kafka, file.

    You must use the \0 value as the destination separator.

    The advanced settings for the agent destination (such as separator, compression and TLS mode) must match the advanced destination settings for the collector connector that you want to link to the agent.

  2. Creating an agent service in the KUMA Console
  3. Installing the KUMA agent on the Windows machine from which you want to receive Windows events.

    Before installation, make sure that the system components have access to the network and open the necessary network ports (a firewall sketch follows this list):

    • Port 7210, TCP: from server with collectors to the Core.
    • Port 7210, TCP: from agent server to the Core.
    • The port configured in the URL field when the connector was created: from the agent server to the server with the collector.
  4. Creating and installing KUMA collector.

    When creating a set of collectors, at the Transport step, you must create or select a connector that the collector will use to receive events from the agent. Connector type must match the type of the agent destination.

    The advanced settings of the connector (such as delimiter, compression, and TLS mode) must match the advanced settings of the agent destination that you want to link to the agent.
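
If the servers use firewalld, a sketch of opening the Core port from the list in step 3 (the connector port depends on the value of the URL field; adapt ports and zones to your environment):

sudo firewall-cmd --permanent --add-port=7210/tcp
sudo firewall-cmd --reload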

For some playbooks to work correctly, you may need to configure additional enrichment of the collector.

To edit enrichment rule settings in the KUMA collector:

  1. Add an enrichment rule by clicking Add enrichment rule and specify the following information in the corresponding fields:
    • Name: Specify an arbitrary name for the rule.
    • Source kind: dns.
    • URL: IP address of the domain controller.
    • Requests per second: 5.
    • Workers: 2.
    • Cache TTL: 3600.
  2. Add an enrichment rule by clicking Add enrichment rule and do the following:
    1. Fill in the following fields:
      • Name: Specify an arbitrary name for the rule.
      • Source kind: event.
      • Source field: DestinationNTDomain.
      • Target field: DestinationNTDomain.
    2. Click Add conversion and specify the following information in the corresponding fields:
      • Type: append.
      • Constant: .RU.
      • Type: replace.
      • Chars: RU.RU.
      • With chars: RU.

      Together, these conversions normalize the domain suffix: EXAMPLE becomes EXAMPLE.RU, while EXAMPLE.RU would first become EXAMPLE.RU.RU and is then collapsed back to EXAMPLE.RU.
  3. Repeat the substeps from step 2 and specify SourceNTDomain as the Source field and Target field.
  4. Add enrichment with LDAP data and do the following:
    • Under LDAP accounts mapping, specify the name of the domain controller.
    • Click Apply default mapping to fill the mapping table with standard values.
Page top
[Topic 264753]