Kaspersky Unified Monitoring and Analysis Platform

Program architecture

The standard program installation includes the following components:

  • One or more Collectors that receive messages from event sources and parse, normalize, and, if required, filter and/or aggregate them
  • A Correlator that analyzes normalized events received from Collectors, performs the necessary actions with active lists, and creates alerts in accordance with the correlation rules
  • The Core that includes a graphical interface to monitor and manage the settings of system components
  • The Storage, which contains normalized events and registered incidents

Events are transmitted between components over optionally encrypted, reliable transport protocols. You can configure load balancing to distribute the load between service instances, and enable automatic switchover to a backup component if the primary one becomes unavailable. If all components are unavailable, events are saved to a hard disk buffer and sent later. The size of the disk buffer used for temporary event storage can be adjusted.
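
The failover logic can be pictured with a short sketch. The following Python fragment is not KUMA code: the destinations, the buffer directory, and the one-connection-per-event transport are simplifying assumptions made for illustration only.

    import os
    import socket
    import time
    import uuid

    BUFFER_DIR = "/var/kuma/buffer"  # hypothetical path for the disk buffer

    def send(host: str, port: int, event: bytes) -> None:
        """Toy reliable transport: one TCP connection per event."""
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(event)

    def deliver(event: bytes, destinations: list[tuple[str, int]]) -> None:
        """Try each destination in order; buffer to disk if all are down."""
        for host, port in destinations:
            try:
                send(host, port, event)
                return
            except OSError:
                continue  # destination unavailable, try the next one
        # All destinations are down: keep the event in the disk buffer
        # so it can be re-sent later.
        os.makedirs(BUFFER_DIR, exist_ok=True)
        path = os.path.join(BUFFER_DIR, f"{time.time_ns()}-{uuid.uuid4().hex}")
        with open(path, "wb") as f:
            f.write(event)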

[Figure: KUMA architecture]

In this Help topic

Core

Collector

Correlator

Storage

Basic entities


Core

The Core is the central component of KUMA that serves as the foundation upon which all other services and components are built. It provides a graphical user interface that is intended for everyday use by operators/analysts and for configuring the entire system.

The Core allows you to:

  • create and configure services, or components, of the program, as well as integrate the necessary software into the system;
  • manage program services and user accounts in a centralized way;
  • visualize statistical data on the operation of the program;
  • investigate security threats based on the received events.

Collector

A collector is an application component that receives messages from event sources, processes them, and transmits them to a storage, correlator, and/or third-party services to identify alerts.

For each collector, you need to configure one connector and one normalizer. You can also configure any number of additional normalizers, filters, enrichment rules, and aggregation rules. To enable the collector to send normalized events to other services, you must add destinations. Normally, two destinations are used: the storage and the correlator.

The collector operation algorithm includes the following steps (an illustrative sketch of steps 3-5 follows the list):

  1. Receiving messages from event sources

    To receive messages, you must configure an active or passive connector. The passive connector can only receive messages from the event source, while the active connector can initiate a connection to the event source, such as a database management system.

    Connectors can also vary by type. The choice of connector type depends on the transport protocol for transmitting messages. For example, for an event source that transmits messages over TCP, you must install a TCP type connector.

    The program has the following connector types available:

    • internal
    • tcp
    • udp
    • netflow
    • nats
    • kafka
    • http
    • sql
    • file
    • ftp
    • nfs
    • wmi
    • wec
    • snmp
  2. Event parsing and normalization

    Events received by the connector are processed using the parsing and normalization rules set by the user. The choice of normalizer depends on the format of the messages received from the event source. For example, you must select a CEF-type root normalizer for a source that sends events in CEF format.

    The following normalizers are available in the program:

    • JSON
    • CEF
    • Regexp
    • Syslog (as per RFC3164 and RFC5424)
    • CSV
    • Key-value
    • XML
    • NetFlow v5
    • NetFlow v9
    • IPFIX (v10)
  3. Filtering of normalized events

    You can configure filters that allow you to select only the events that satisfy the specified conditions for further processing. Events that do not meet the filtering conditions are eliminated at this stage and are not processed further.

  4. Enrichment and mutation of normalized events

    Enrichment rules let you supplement event contents with information from internal and external sources. The program has the following enrichment sources:

    • constant
    • cybertrace
    • dictionary
    • dns
    • event
    • ldap
    • template

    Mutation rules let you convert event contents in accordance with the defined criteria. The program has the following conversion methods:

    • lower—converts all characters to lower case.
    • upper—converts all characters to upper case.
    • regexp—extracts a substring using RE2 regular expressions.
    • substring—selects text strings by specified item numbers.
    • replace—replaces text with the entered string.
    • trim—deletes the specified characters.
    • append—adds characters to the end of the field value.
    • prepend—adds characters to the beginning of the field value.
  5. Aggregation of normalized events

    You can configure aggregation rules to reduce the number of similar events that are transmitted to the storage and/or the correlator. For example, you can aggregate into one event all messages about network connections transmitted over the same protocol (transport and application layers) between two IP addresses and received during a specified time interval. If aggregation rules are configured, multiple events are processed and saved as a single event. This reduces the load on the services responsible for further event processing, saves storage space, and reduces the number of events processed per second (EPS).

  6. Transmission of normalized events

    After all the processing stages are completed, the event is sent to configured destinations.
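
To make the stages above concrete, here is a minimal Python sketch of steps 3-5 operating on already-normalized events. The flat-dictionary representation and the field names (TransportProtocol, SourceAddress, and so on) are illustrative assumptions, not the KUMA data model.

    def keep(event: dict) -> bool:
        """Step 3, filtering: process only TCP connection events."""
        return event.get("TransportProtocol") == "tcp"

    def enrich(event: dict) -> dict:
        """Step 4, enrichment with a 'constant' source."""
        event["DeviceVendor"] = "ExampleVendor"
        return event

    def mutate(event: dict) -> dict:
        """Step 4, mutation: the 'lower' conversion applied to one field."""
        event["ApplicationProtocol"] = event.get("ApplicationProtocol", "").lower()
        return event

    def aggregate(events: list[dict]) -> list[dict]:
        """Step 5: merge events with the same protocols and address pair
        received within one aggregation window."""
        buckets: dict[tuple, dict] = {}
        for e in events:
            key = (e.get("TransportProtocol"), e.get("ApplicationProtocol"),
                   e.get("SourceAddress"), e.get("DestinationAddress"))
            if key in buckets:
                buckets[key]["Count"] += 1
            else:
                buckets[key] = {**e, "Count": 1}
        return list(buckets.values())

    def process(batch: list[dict]) -> list[dict]:
        kept = [mutate(enrich(e)) for e in batch if keep(e)]
        return aggregate(kept)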


Correlator

The Correlator is a program component that analyzes normalized events. Information from active lists and/or dictionaries can be used in the correlation process.

Event correlation is performed in real time. The operating principle of the correlator is based on event signature analysis: every event is processed according to the correlation rules set by the user. When the program detects a sequence of events that satisfies the conditions of a correlation rule, it creates a correlation event and sends it to the Storage. The correlation event can also be sent back to the correlator for repeated analysis, which lets you configure correlation rules that are triggered by the results of a previous analysis: the products of one correlation rule can be used by other correlation rules.

You can distribute correlation rules and the active lists they use among correlators, thereby sharing the load between services. In this case, the collectors will send normalized events to all available correlators.

The Correlator operation algorithm has the following steps (a minimal sketch follows the list):

  1. Obtaining an event

    The correlator receives a normalized event from the collector or from another service.

  2. Applying correlation rules

    You can configure correlation rules so they are triggered based on a single event or a sequence of events. If no alert was detected using the correlation rules, the event processing ends.

  3. Responding to an alert

    You can specify actions that the program must perform when an alert is detected. The following actions are available in the program:

    • Event enrichment
    • Operations with active lists
    • Sending notifications
    • Storing the correlation event
  4. Sending a correlation event

    When the program detects a sequence of events that satisfies the conditions of the correlation rule, it creates a correlation event and sends it to the storage. Event processing by the correlator is now finished.
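
As an illustration of the algorithm above, the following Python sketch implements the idea behind a sequence-based rule: N failed logons from one address within T seconds produce a correlation event. The field names and thresholds are hypothetical; real KUMA correlation rules are configured in the web interface, not written in code.

    from collections import defaultdict, deque

    WINDOW_SECONDS = 60  # illustrative threshold values, not KUMA defaults
    THRESHOLD = 5

    windows: dict[str, deque] = defaultdict(deque)  # per-source timestamps

    def on_event(event: dict) -> dict | None:
        """Return a correlation event if the rule fires, otherwise None."""
        if event.get("Name") != "logon_failed":
            return None  # rule conditions not met, processing ends
        src, ts = event["SourceAddress"], event["Timestamp"]
        window = windows[src]
        window.append(ts)
        while window and ts - window[0] > WINDOW_SECONDS:
            window.popleft()  # drop events outside the time window
        if len(window) >= THRESHOLD:
            window.clear()
            return {"Name": "bruteforce_detected", "SourceAddress": src,
                    "Timestamp": ts}  # sent to storage, may be re-correlated
        return None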


Storage

A KUMA storage is used to store normalized events so that they can be quickly and continually accessed from KUMA for the purpose of extracting analytical data. Access speed and continuity are ensured through the use of the ClickHouse technology. This means that a storage is a ClickHouse cluster bound to a KUMA storage service.

Storage components: clusters, shards, replicas, and keepers.

A cluster is a logical group of machines that possess all accumulated normalized KUMA events. It consists of one or more logical shards.

A shard is a logical group of machines that possess a specific portion of all normalized events accumulated in the cluster. It consists of one or more replicas. Increasing the number of shards lets you do the following:

  • Accumulate more events by increasing the total number of servers and disk space.
  • Absorb a larger stream of events by distributing the load associated with an influx of new events.
  • Reduce the time taken to search for events by distributing search areas among multiple machines.

A replica is a machine that is a member of the logical shard and possesses a copy of the data of this shard. If there are multiple replicas, there are multiple copies (data is replicated). Increasing the number of replicas lets you do the following:

  • Improve fault tolerance.
  • Distribute the total load related to data searches among multiple machines (although it's best to increase the number of shards for this purpose).

A keeper is an optional role of a replica that involves the replica's participation in coordinating data replication throughout the entire cluster. There must be at least one replica with this role for the entire cluster. It is recommended to have 3 keeper replicas. The number of replicas involved in coordinating replication must be an odd number.

When choosing a ClickHouse cluster configuration, consider the specific event storage requirements of your organization. For more information, please refer to the ClickHouse documentation.
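
The relationship between shards and replicas can be pictured with a short sketch. In practice ClickHouse distributes and replicates data itself; this hypothetical Python fragment (with invented host names) only illustrates the idea that each event lands on one shard and is copied to every replica of that shard.

    import hashlib

    # Hypothetical layout: 2 shards, each with 2 replicas.
    CLUSTER = [
        ["ch-shard1-replica1", "ch-shard1-replica2"],
        ["ch-shard2-replica1", "ch-shard2-replica2"],
    ]

    def shard_for(event_id: str) -> list[str]:
        """Pick a shard by hashing the event ID; every replica of the
        chosen shard eventually stores a copy of the event."""
        digest = hashlib.sha256(event_id.encode()).digest()
        return CLUSTER[int.from_bytes(digest[:8], "big") % len(CLUSTER)]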

Within storages, you can create spaces. Spaces let you organize the data in the cluster and, for example, store events of a certain type together.


Basic entities

This section describes the main entities that KUMA works with.

In this section

About tenants

About events

About alerts

About incidents

About assets

About resources

About services

About agents

About Priority


About tenants

KUMA supports multitenancy: a single instance of the KUMA application, installed in the infrastructure of the main organization (the main tenant), isolates its branches (tenants) so that each receives and processes its own events.

The system is managed centrally through the main interface, while tenants operate independently of each other and have access only to their own resources, services, and settings. Events of tenants are stored separately.

Users can have access to multiple tenants at the same time. You can also select which tenants' data will be displayed in sections of the KUMA web interface.

In KUMA, two tenants are created by default:

  • The main tenant contains resources and services related to the main tenant. These resources are available only to the general administrator.
  • The shared tenant is where the general administrator can place resources, asset categories, and monitoring policies that users of all tenants will be able to utilize.

About events

Events are instances of the security-related activities of network assets and services that can be detected and recorded. For example, events include login attempts, interactions with a database, and sensor information broadcasts. Each separate event may seem meaningless, but when considered together they form a bigger picture of network activities to help identify security threats. This is the core functionality of KUMA.

KUMA receives events from logs and restructures their information, making the data from different event sources consistent (this process is called normalization). Afterwards, the events are filtered, aggregated, and sent to the correlator service for analysis and to the Storage for retention. When KUMA recognizes a specific event or sequence of events, it creates correlation events, which are also analyzed and retained. If an event or sequence of events indicates a potential security threat, KUMA creates an alert. This alert consists of a warning about the threat and all related data that should be investigated by a security officer.

Throughout their life cycle, events undergo conversions and may receive different names. Below is a description of a typical event life cycle:

The first steps are carried out in a collector.

  1. Raw event. The original message received by KUMA from an event source using a Connector is called a raw event. This is an unprocessed message and it cannot be used yet by KUMA. To fit into the KUMA pipeline, raw events must be normalized into the KUMA data model. That's what the next stage is for.
  2. Normalized event. A normalizer is a set of parsers that map raw events into the KUMA data model. After this conversion, the original message becomes a normalized event and can be used by KUMA for analysis. From here on, only normalized events are used in KUMA. Raw events are no longer used, but they can be kept as part of normalized events inside the Raw field (see the parsing sketch after this list).

    The program has the following normalizers:

    • JSON
    • CEF
    • Regexp
    • Syslog (as per RFC3164 and RFC5424)
    • CSV/TSV
    • Key-value
    • XML
    • NetFlow v5, v9, IPFIX (v10)
    • SQL

    At this point normalized events can already be used for analysis.

  3. Event destination. After the Collector service has processed an event, it is ready to be used by other KUMA services and sent to the KUMA Correlator and/or Storage.
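
The raw-to-normalized conversion can be illustrated with a Regexp-style parser. In this hypothetical Python sketch, the message format and the target field names are assumptions made for illustration and do not reproduce the actual KUMA data model.

    import re

    PATTERN = re.compile(
        r"(?P<ts>\w{3} +\d+ [\d:]+) (?P<host>\S+) sshd\[\d+\]: "
        r"Failed password for (?P<user>\S+) from (?P<src>[\d.]+)"
    )

    def normalize(raw: str) -> dict | None:
        m = PATTERN.search(raw)
        if m is None:
            return None  # the message does not match this normalizer
        return {
            "Name": "logon_failed",
            "DeviceHostName": m.group("host"),
            "DestinationUserName": m.group("user"),
            "SourceAddress": m.group("src"),
            "Raw": raw,  # original message kept inside the normalized event
        }

    print(normalize("Oct  3 12:00:01 srv1 sshd[4242]: "
                    "Failed password for admin from 10.0.0.5 port 50514 ssh2"))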

The next event life cycle steps are completed in the correlator.

Event types:

  1. Base event. An event that was normalized.
  2. Aggregated event. When dealing with a large number of similar events, you can "merge" them into a single event to save processing time and resources. Aggregated events act as base events, but in addition to all the parameters of the parent events (the events that were merged), an aggregated event has a counter that shows the number of parent events it represents. Aggregated events also store the time when the first and last parent events were received (see the sketch after this list).
  3. Correlation event. When a sequence of events is detected that satisfies the conditions of a correlation rule, the program creates a correlation event. These events can be filtered, enriched, and aggregated. They can also be sent for storage or looped into the Correlator pipeline.
  4. Audit event. Audit events are created when certain security-related actions happen in KUMA; these events are used to ensure system integrity. They are stored for at least 365 days.
  5. Monitoring event. These events are used to track changes in the amount of data received by KUMA.
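
A minimal sketch of the aggregated-event structure described in item 2 above. The field names are illustrative assumptions, not the KUMA data model.

    from dataclasses import dataclass

    @dataclass
    class AggregatedEvent:
        base: dict         # parameters inherited from the parent events
        count: int         # number of parent events represented
        first_seen: float  # receipt time of the first parent event
        last_seen: float   # receipt time of the last parent event

    def merge(agg: AggregatedEvent, parent_ts: float) -> None:
        """Fold one more parent event into the aggregate."""
        agg.count += 1
        agg.first_seen = min(agg.first_seen, parent_ts)
        agg.last_seen = max(agg.last_seen, parent_ts)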

About alerts

In KUMA, an alert is created when a sequence of events is received that triggers a correlation rule. Correlation rules are created by KUMA analysts to check incoming events for possible security threats, so a triggered correlation rule is a warning that malicious activity may be taking place. Security officers should investigate these alerts and respond if necessary.

KUMA automatically assigns a priority to each alert. This parameter reflects the importance and the number of the processes that triggered the correlation rule. Alerts with higher priority should be dealt with first. The priority value is automatically updated when new correlation events are received, but a security officer can also set it manually. In this case, the alert priority is no longer automatically updated.

Related events are linked to alerts, enriching the alerts with data from these events. KUMA also offers drill-down functionality for alert investigations.

You can create incidents based on alerts.

Below is the life cycle of an alert:

  1. KUMA creates an alert when a correlation rule is triggered. The alert is updated if the correlation rule is triggered again. The alert is assigned the New status.
  2. A security officer assigns the alert to an operator for investigation. The alert status changes to Assigned.
  3. The operator performs one of the following actions:
    • Closes the alert as a false positive (the alert status changes to Closed).
    • Responds to the threat and closes the alert (the alert status changes to Closed).

Afterwards, the alert is no longer updated with new events, and if the correlation rule is triggered again, a new alert is created.
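
The life cycle above amounts to a small state machine. A minimal Python sketch, assuming only the three statuses named in this topic:

    # Allowed status transitions from the alert life cycle described above.
    TRANSITIONS = {
        "New": {"Assigned"},
        "Assigned": {"Closed"},  # closed as a false positive or after response
        "Closed": set(),         # closed alerts are never updated again
    }

    def change_status(current: str, new: str) -> str:
        if new not in TRANSITIONS[current]:
            raise ValueError(f"illegal transition: {current} -> {new}")
        return new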

Alert management in KUMA is described in this section.


About incidents

If the nature of the data received by KUMA, or the generated correlation events and alerts, indicates a possible attack or vulnerability, the symptoms of such an event can be combined into an incident. This allows security experts to analyze threat manifestations comprehensively and facilitates response.

You can assign a category, type, and priority to incidents, and assign incidents to data protection officers for processing.

Incidents can be exported to RuCERT.


About assets

Assets are network devices registered in KUMA. Network assets generate network traffic when they send and receive data. The KUMA program can be configured to track this activity and create base events with a clear indication of where the traffic is coming from and where it is going. The event can contain source and destination IP addresses, as well as DNS names. If you register an asset with certain parameters (for example, a particular IP address), a connection is formed between this asset and all events that contain this IP address in any of their parameters.

Assets can be divided into logical groups. This helps keep your network structure transparent and gives you additional ways to work with correlation rules. When an event involving an asset is processed, the category of this asset is taken into consideration. For example, if you assign high priority to a certain category of assets, base events involving these assets will trigger the creation of correlation events with higher priority. This in turn cascades into higher-priority alerts and, therefore, a faster response.
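
The connection between registered assets and events can be sketched as a lookup by IP address. The registry contents and field names in this Python fragment are hypothetical.

    # Hypothetical asset registry keyed by IP address.
    ASSETS = {"10.0.0.5": {"name": "hr-laptop-17", "category_priority": "High"}}

    def link_assets(event: dict) -> dict:
        """Attach every registered asset whose IP address appears in the
        event's address fields."""
        candidates = (event.get("SourceAddress"), event.get("DestinationAddress"))
        related = [ASSETS[ip] for ip in candidates if ip in ASSETS]
        return {**event, "RelatedAssets": related}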

It is worth having assets registered in KUMA because using them makes it possible to formulate clear and versatile correlation rules for much more efficient event analysis.

Asset management in KUMA is described in this section.


About resources

Resources are KUMA components that contain parameters for implementing various functions: for example, establishing a connection with a given web address or converting data according to certain rules. Like building blocks, these components are assembled into sets of resources for services, from which, in turn, KUMA services are created.


About services

Services are the main components of KUMA that work with events: receiving, processing, analyzing, and storing them. Each service consists of two parts that work together:

  • One part of the service is created inside the KUMA web interface based on a set of resources for services.
  • The second part of the service is installed in the network infrastructure where the KUMA system is deployed, as one of its components. The server part of a service can consist of several instances: for example, services of the same agent or storage can be installed on several computers at once.

Parts of services are connected to each other by using the IDs of services.


About agents

KUMA agents are services that are used to forward unprocessed events from servers and workstations to KUMA collectors.

Types of agents:

  • Wmi agents are used to receive data from remote Windows machines using Windows Management Instrumentation. They are installed to Windows assets.
  • Wec agents are used to receive Windows logs from a local machine using Windows Event Collector. They are installed to Windows assets.
  • Tcp agents are used to receive data over the TCP protocol. They are installed to Linux assets.
  • Udp agents are used to receive data over the UDP protocol. They are installed to Linux assets.
  • Nats agents are used for NATS communications. They are installed to Linux assets.
  • Kafka agents are used for Kafka communications. They are installed to Linux assets.
  • Http agents are used for communication over the HTTP protocol. They are installed to Linux assets.
  • File agents are used to get data from a file. They are installed to Linux assets.
  • Ftp agents are used to receive data over the File Transfer Protocol. They are installed to Linux assets.
  • Nfs agents are used to receive data over the Network File System protocol. They are installed to Linux assets.
  • Snmp agents are used to receive data over the Simple Network Management Protocol. They are installed to Linux assets.

About Priority

Priority reflects the relative importance of security-sensitive activity detected by a KUMA correlator. It shows the order in which multiple alerts should be processed, and indicates whether senior security officers should be involved.

The Correlator automatically assigns priority to correlation events and alerts based on correlation rule settings. The priority of an alert also depends on the assets related to the processed events, because correlation rules take into account the priority of a related asset's category. If the alert or correlation event has no linked assets with a defined priority, or no related assets at all, its priority is equal to the priority of the correlation rule that triggered it. The priority of an alert or correlation event is never lower than the priority of the correlation rule that triggered it.
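
In other words, the resulting priority behaves like a maximum over the rule's priority and the priorities of related asset categories. A minimal Python sketch, assuming the four priority values listed below:

    PRIORITY_ORDER = ["Low", "Medium", "High", "Critical"]

    def alert_priority(rule_priority: str, asset_priorities: list[str]) -> str:
        """Never lower than the rule's own priority; related asset
        categories can only raise it."""
        levels = [PRIORITY_ORDER.index(p)
                  for p in [rule_priority, *asset_priorities]]
        return PRIORITY_ORDER[max(levels)]

    assert alert_priority("Medium", ["Low", "High"]) == "High"
    assert alert_priority("Medium", []) == "Medium"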

Alert priority can be changed manually. The priority of alerts changed manually is no longer automatically updated by correlation rules.

Possible priority values:

  • Low
  • Medium
  • High
  • Critical