The first step is to analyze which types of logs exist and which could be used to detect the suspicious behavior. How is the behavior logged? Are there different systems on which logging works differently? Can our customers' systems generate the relevant log data, or does this require additional software? Will additional log volume be generated?
In the second step, we analyze how the behavior can be mapped with existing data models; this can later help us to correlate different events. The data models not only help us with the analysis, but also give us the opportunity to search efficiently in the logs and thus keep the demands on the infrastructure and therefore the costs for our customers low.
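As a rough illustration of why a common data model makes correlation cheap, the following sketch joins events from two hypothetical sources on a shared normalized field; all field names and values are invented for this example.

```python
# Hypothetical sketch: once events from different sources share a common
# data model, correlating them is a simple join on normalized fields.
auth_events = [
    {"user": "Admin", "src_ip": "10.0.0.5", "action": "login_failed"},
    {"user": "svc",   "src_ip": "10.0.0.9", "action": "login_ok"},
]
proxy_events = [
    {"src_ip": "10.0.0.5", "url": "http://evil.example"},
]

def correlate(auth, proxy):
    """Pair failed logins with proxy traffic from the same source IP."""
    suspicious_ips = {e["src_ip"] for e in auth if e["action"] == "login_failed"}
    return [e for e in proxy if e["src_ip"] in suspicious_ips]

print(correlate(auth_events, proxy_events))
# [{'src_ip': '10.0.0.5', 'url': 'http://evil.example'}]
```

Because both sources expose the same `src_ip` field, the correlation logic never needs to know what the raw logs looked like.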
The final question is whether the suspicious behavior can be simulated. If this is the case, we plan to integrate a procedure with which the use case can be tested end-to-end on a daily basis.
Now it's time for implementation! This begins with the normalization of the raw log data. The required field contents are extracted and assigned to the corresponding fields of the data model. For example, UserName='Admin' becomes the normalized field user='Admin'.
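The normalization step can be sketched as follows; the raw log line, the regular expression, and the rename table are invented for this example, not our actual parser.

```python
import re

# Hypothetical sketch: extract field contents from a raw log line and
# assign them to the corresponding fields of the data model
# (e.g. UserName='Admin' becomes the normalized field user='Admin').
RAW = "2024-05-01T12:00:00Z host=srv01 UserName='Admin' Action='login'"

def parse(raw: str) -> dict:
    # Pull out all key='value' pairs from the raw event.
    fields = dict(re.findall(r"(\w+)='([^']*)'", raw))
    # Rename source-specific field names to data-model field names.
    rename = {"UserName": "user", "Action": "action"}
    return {rename.get(k, k): v for k, v in fields.items()}

print(parse(RAW))  # {'user': 'Admin', 'action': 'login'}
```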
As soon as the data is available in the desired format, we start writing the "search". Depending on the use case, this can be very simple (for example, searching for certain field contents) or very complicated (if information from different data models needs to be merged and correlated). We note that each use case contains certain mandatory elements:
Whenever possible, we write a test procedure and integrate it into our end-to-end testing framework. This simulates the processes to be detected on a dedicated host on a daily basis and gives us the opportunity to determine whether all use cases are working properly.
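Conceptually, such an end-to-end test has two halves: simulate the behavior, then check that the expected alert was raised. The sketch below assumes hypothetical stand-in functions (`simulate_behavior`, `query_alerts`) and an invented use case ID; a real framework would run commands on a dedicated host and query the SIEM's alert store.

```python
from datetime import datetime, timezone

def simulate_behavior() -> dict:
    """Stand-in for executing a detectable action on a dedicated test host."""
    return {"host": "testhost01", "event": "suspicious_process",
            "ts": datetime.now(timezone.utc)}

def query_alerts(host: str) -> list:
    """Stand-in for querying the SIEM's alerts for that host (assumed API)."""
    return [{"host": host, "use_case": "UC-042"}]

def run_daily_test(use_case: str) -> bool:
    """Simulate the behavior, then verify the use case actually fired."""
    event = simulate_behavior()
    alerts = query_alerts(event["host"])
    return any(a["use_case"] == use_case for a in alerts)

print(run_daily_test("UC-042"))  # True when the alert pipeline works end to end
```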
As part of our release process, the new use cases are extensively tested and then rolled out to our customers.
Due to the large number of systems and network architectures, the list of possible sources of error is long. Without end-to-end testing, it is very difficult to assess how reliably a SIEM is actually working, and defects can remain undetected for years.
A selection of possible sources of error illustrates how important ongoing monitoring is:
The development of a use case is a complex matter, even for simple cases. Only in combination with an end-to-end testing framework can the reliable functioning of attack detection be guaranteed.
A use case describes an attack scenario. When we talk about a use case in the context of a SIEM solution, we mean the search for the suspicious behavior that the use case is meant to detect. Combined with the test procedure (see also Use Case Testing), this ensures that the search and alerting function reliably.
We now also have the option of rolling out use cases in an accelerated process, which is useful in the event of an acute threat, for example. Such emergency use cases are later either rolled out as normal use cases or removed once the threat no longer exists.
Many of our customers have their own specifications or wishes as to what should be monitored in addition to our existing use cases. In this case, we provide advice and implement customer-specific use cases.
This checks the integrity of the SIEM platform itself: for example, whether log data arrives late or whether internal limits are being reached.
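One such integrity check can be sketched as follows: flag every log source whose newest event is older than a threshold, i.e. data arriving late or not at all. The source names, timestamps, and the 30-minute threshold are invented for this example.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: detect log sources whose data arrives late.
MAX_DELAY = timedelta(minutes=30)

def late_sources(last_seen: dict, now=None) -> list:
    """Return sources whose most recent event exceeds MAX_DELAY."""
    now = now or datetime.now(timezone.utc)
    return [src for src, ts in last_seen.items() if now - ts > MAX_DELAY]

now = datetime.now(timezone.utc)
status = {"firewall": now - timedelta(minutes=5),
          "proxy": now - timedelta(hours=2)}
print(late_sources(status, now))  # ['proxy']
```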