Exploring Windows Event Logs and Elastic Security for Incident Response
Elastic Security is incredibly useful for threat hunting, especially with the success of tools like RockNSM and the HELK project. But what about hunting through old logs with the detection tools that threat hunters use? How feasible is it to use Elastic Security if you just want to stand up the tool and throw some data at it? In this blog we will explore how to take advantage of Elastic Security and the open source detection rules that are bundled with each release.
Problem Statement
If you have been in information security for any length of time you have probably heard the phrase at one point or another: "Prevention is ideal, but detection is a must". However detection, especially automated detection, has traditionally been hard. Under-resourced security teams, or folks responsible for securing their enterprise without much experience, often don't know what to focus on or how to start. For many years security teams have had to stitch together a patchwork of customized tooling and APIs to build their own SOC, because everything was a shiny black box. This is still the case in most environments. Even where there are folks who know how to collect the right telemetry, they are often left with a pile of logs to sift through, not knowing where to focus their efforts in the event of an incident.
Disclaimer: There are a large number of tools available for log analysis; many that I have used are no longer maintained, though they may have been better options in the past.
Thankfully, there are a number of options to get started. With the rise of tooling like DeepBlueCLI, EQL, and the ever popular Sigma, the detection engineering space has exploded, and the opportunity to learn security concepts has exploded with it. For this blog I was interested in Windows security specifically: Windows Event Logs and how they can be used as a goldmine for incident response.
Windows Security
All systems utilize some form of logging for troubleshooting, error reporting, and security detail. The Windows event service writes events to event channels. These event channels can be read using third-party tools such as NirSoft's MyEventViewer and Event Log Explorer, with PowerShell, and, if you're desperate, with the built-in Event Viewer in Windows.
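For example, the built-in `Get-WinEvent` cmdlet can read from a live channel or directly from an exported `.evtx` file (the file name below is just a placeholder):

```powershell
# Read the five most recent events from the live Security channel
# (requires an elevated prompt for the Security log)
Get-WinEvent -LogName Security -MaxEvents 5 |
    Format-Table TimeCreated, Id, LevelDisplayName -AutoSize

# Or read from an exported event log file
Get-WinEvent -Path .\sample.evtx -MaxEvents 5
```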
First I started exploring the different logs available at EVTX-ATTACK-SAMPLES. This repository houses Windows event log samples associated with specific attack and post-exploitation techniques. It's a great place to start if we want to see how a tactic such as Privilege Escalation manifests in our event log data.
Elastic Security
I am a huge fan of open source. I'm an even bigger fan of security tools that are open source or open in nature, and it's no surprise that the Elastic Stack (or ELK Stack for short) is used for security. Security teams have been using it for several use cases, including threat hunting, network security monitoring, vulnerability assessment, incident response, and many more, because of its ease of use and versatility. Elastic doubled down on security, which led to multiple product development choices, including the Elastic Security SIEM (the Security application within Kibana). But it wasn't until 7.6 that Elastic released the Detections feature, now deemed the detection engine, which houses detection logic across log types, with the majority focused on Windows logs. However, Elastic's primary use case for the detection engine is detection inside of a SIEM. I am using ThremulationStation for this blog post as well. :)
Let's dive in!
Let's first take a look around the Security app.
To get to Rules, we want to click on Alerts -> Manage rules.
- ThremulationStation enables all Windows-based rules OOTB (except for the ML ones).
Elastic Detection Rules
Next we will filter for Windows rules so we are only looking at those.
Windows Rules
We can see we have 324 prebuilt rules. Let's open one of them up!
Elastic Rules
This rule is fairly simple: it uses EQL, or Event Query Language, to query for a user being added to a privileged group such as "Domain Admins".
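As a rough sketch of what that EQL logic looks like (the event action and group names below are assumptions for illustration, not the exact rule body):

```
iam where event.action == "added-member-to-group" and
  group.name : ("Domain Admins", "Enterprise Admins", "Administrators")
```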
We can see this rule runs every five minutes. So every five minutes this rule will run against x indices, using the `@timestamp` field in the index pattern. To put that into simple terms, the `@timestamp` field contains the original timestamp of the event data, not the time it was ingested. This is an issue because, by default, the detection rule runs every five minutes relative to the current time. Another way to explain this is that we need to be able to run rules backwards, or rather trick Elastic into thinking the event time is relative to now. Prior to this blog post, I used this exercise as a way of enablement by building an ingest pipeline that copied the value of `event.ingested` to `@timestamp`, since it was closer to the current time. Tada! I had data! Thankfully Elastic has since addressed this in the Elastic discussion here: Run detection rules backwards, and implemented a timestamp override option that is enabled by default in all preloaded detection rules!
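For reference, that workaround pipeline amounted to something like this (a sketch in Kibana Dev Tools syntax; the pipeline name is arbitrary):

```
PUT _ingest/pipeline/copy-ingested-to-timestamp
{
  "description": "Copy event.ingested into @timestamp so old logs look current",
  "processors": [
    {
      "set": {
        "field": "@timestamp",
        "copy_from": "event.ingested"
      }
    }
  ]
}
```

With the timestamp override option now built in, this pipeline is no longer necessary, but it illustrates the underlying problem nicely.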
Getting Started
Without spending too much time coming up with a custom way to ingest these logs, there are really two ways we can dive in: the Elastic Agent or Winlogbeat. Since we are using ThremulationStation, we will give the Elastic Agent a shot. According to the docs for the Windows integration, it will pick up certain channels, and we can configure a custom event channel. However, the only way to do that is to stand up WEC/WEF, which is quite frankly too much work, so instead we will use Winlogbeat. Thankfully Samir has provided a bulk read script that is incredibly handy. First, we will download Winlogbeat and grab Samir's default Winlogbeat configuration.
- Grab Winlogbeat from: Download Winlogbeat | Ship Windows Event Logs | Elastic
- Unzip the file
- Move Samir's configuration file into place, or use the supplied command options in the bulk read script to tell it where the Winlogbeat configuration file is.

There are a few things in here to call out that are useful to know when reading in these logs.
```yaml
winlogbeat.event_logs:
  - name: ${EVTX_FILE}
```
This setting uses a variable called `$EVTX_FILE` that must be set to tell Winlogbeat which EVTX log we want it to read.
```yaml
winlogbeat.registry_file: "${CWD}/winlogbeat/evtx-registry.yml"
```
The line above keeps a history of the files read in by Winlogbeat. We will have to delete it if we want to re-read a log. Just like above, this line references a variable, in this case `$CWD`, which is synonymous with "current working directory". Wherever we run this script, these files will be written to that directory, so we will want to keep our structure relatively flat so that Winlogbeat knows where to find this file on subsequent runs.
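If we do want to replay a log that has already been recorded, a quick way to do it (assuming the registry path from the configuration above and a PowerShell prompt in the working directory) is simply:

```powershell
# Remove Winlogbeat's registry file so previously read EVTX files are replayed
Remove-Item .\winlogbeat\evtx-registry.yml -ErrorAction SilentlyContinue
```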
```yaml
output.elasticsearch:
  enabled: false
  hosts: ['http://localhost:9200']
```
This last block is pretty important, since it determines whether or not our data reaches Elasticsearch! In ThremulationStation we need to use `https` and specify our username and password as `vagrant`.
```yaml
output.elasticsearch:
  enabled: true
  hosts: ['https://localhost:9200']
  # The setting below is for ThremulationStation since the certificate is self-signed
  ssl.verification_mode: none

  username: "vagrant"
  password: "vagrant"
```
We are ready to give this a shot!
Why didn't it work? In older versions of Winlogbeat, the parsing/processing of our data from Sysmon, Security, and later PowerShell happened in the form of `js` files, or processors, in the Winlogbeat module directory. Now everything is done through ingest pipelines. We can examine these changes by comparing the configuration files between 7.16.3 and 8.5.0.
7.16.3 Winlogbeat
8.5.0 Winlogbeat
By looking at line 125 we can see that the output for Elasticsearch will point to an ingest pipeline called `winlogbeat-%{[agent.version]}-routing`. So we will need to change one line in our sample configuration to "route" the data appropriately. First we will run vanilla Winlogbeat with the correct settings in our configuration, and then we will run `winlogbeat.exe setup` to make sure the pipelines, dashboards, etc. are loaded for us.
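That one-line change amounts to adding a `pipeline` setting under the Elasticsearch output in our sample configuration, something like this (a sketch; the pipeline name comes from the 8.5.0 reference configuration above):

```yaml
output.elasticsearch:
  pipeline: "winlogbeat-%{[agent.version]}-routing"
```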
Winlogbeat Setup
Now we can just pick a tactic and start importing logs! For this example I moved the entire Privilege Escalation folder into my working directory. The bulk read script will recursively look through this directory and should parse everything in that folder.
Winlogbeat Bulk Read
Let's check on Elasticsearch..
We can see that we have data coming in from Winlogbeat, over 160 docs so far!
Validation
A peek at the Detection & Response dashboard shows us that we have alerts being generated from our data in the SIEM! Remember that `event.ingested` is populated with the current timestamp.
Just to make sure this data is what we expect, we will open up just the alerts by clicking on View Alerts.
By looking at the `Reason` column in the alerts table, we can see a number of alerts from a default hostname of `MSEDGEWIN10`, which comes from a virtual machine that we historically could download directly from Microsoft for testing Windows 10.
Lastly, with an extremely lazy KQL query in Discover, we can see that the field `log.file.path` contains our current working directory path and a reference to the files that were processed by Winlogbeat! We can also see with a simple sort that the oldest log we ingested was from February 2nd, 2019! This is a good gotcha to keep in mind: the alerts will exist at the time the script was run, but the data will reside at whatever dates the original timestamps were from.
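A lazy query along these lines is enough to confirm which files were read (the wildcard value is an assumption based on the Privilege Escalation folder used earlier; substitute whatever path you used):

```
log.file.path : *Privilege*
```

Sorting that result set by `@timestamp` ascending is what surfaces the oldest original event dates.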
From here there are several things we can do with this data. For instance, we could look at the different patterns in adversary tradecraft against the same detection rule. If we wanted to, we could single out a specific log file and see if we can ascertain what happened and whether there are any detection gaps we could potentially fill! Lastly, we could take this same data and run it through other security tools that analyze Windows logs. As an analyst and a researcher, I would encourage this! There could be more parts to this blog, and it may turn into a series in the future. More to come!