
Microsoft Sentinel Parsing tips – Whitespace control

This post is part of a series of posts covering data parsing in Microsoft Sentinel.

Intro

Kusto is a powerful query language and easy to adopt.

Even though Kusto is very powerful, working with custom log sources can sometimes be a mess. Some parsers require more effort than others, and some are very simple.

In general, being able to use operators like "parse" or "parse-kv" is very welcome. However, reality often has a different challenge in store for us.
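
As a quick illustration, here is a minimal parse-kv sketch; the message, field names, and delimiters are all made up for the example:

//parse-kv sketch with a hypothetical key-value message
print Message = "src=10.0.0.1 dst=10.0.0.2 action=allow"
| parse-kv Message as (src: string, dst: string, action: string) with (pair_delimiter=" ", kv_delimiter="=")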

In this post we want to share a quick pro tip to solve the mystery of hidden whitespace.

The challenge of whitespaces

Whitespace (" ") exists everywhere; the challenge is how it is presented in Log Analytics.

Log Analytics does a lot for the user in terms of presenting data nicely: in the result view it removes duplicate whitespace, as well as leading and trailing whitespace.

This can result in problems: parsers, regexes, and string operators like "==", "startswith", and "endswith" will fail, especially if the whitespace is not consistent.

Highlighting the string in the output view does not reveal the extra whitespace.

Copying the text and pasting it into a text editor will not show it either, as in the example below where we copied the output into VS Code (we can only see one dot, indicating a single whitespace between foo and bar).

However, the double whitespace is interpreted during execution; it is only in the presentation view that the extra whitespace is removed. In the example below we used split on " " to show the existence of the double whitespace.
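
To reproduce this yourself, here is a minimal sketch with a hard-coded double-spaced string; the empty element in the resulting array reveals the second space that the result view hides:

//Parts = ["foo", "", "bar"] - the empty element proves the double space
print Message = "foo  bar"
| extend Parts = split(Message, " ")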

When working with multiple log sources you do not want to search each one to see whether extra whitespace exists (this may change during the log source's life cycle); you would rather have a way to always normalize the log in your parser.

Solution

To properly address this (assuming there is no good way to change the settings of the system sending the logs), we handle the duplicate whitespace with the replace_regex function, matching whitespace "\s" with the quantifier "+" (one or more times) and replacing it with a single space " ".

This searches for runs of one or more whitespace characters and replaces each run with a single space; we do not want to remove single spaces. And by extending to the same column name, "SyslogMessage", we reuse the same column for our clean output.

Please note that this will not change the message in the database, only during execution.

Doing this gives us the following output.
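
A minimal sketch of the effect, using a hypothetical hard-coded string:

//Duplicates are collapsed; single leading/trailing spaces remain
print SyslogMessage = "  foo  bar "
| extend SyslogMessage = replace_regex(SyslogMessage, @"\s+", @" ") //Result: " foo bar "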

The next step is to remove the leading and trailing whitespace. If we, for instance, expect the first character to be a value, a leading whitespace could make our parser, or an analytics rule, fail.

We have seen occasions where this happens only from time to time, and not in all messages from a log source.

To fix the leading and trailing whitespace we use another regex, looking for the start and end of the string. This time we want to replace the match with nothing (an empty string), which is why we could not use this regex in the first cleaning step.

In the second run we use the same column name again to clean up SyslogMessage. Best practice is to always keep the original message; here, however, we are correcting an error from the log source rather than altering the content of the SyslogMessage.

The regex starts with the anchor "^" to match the start of the string, followed by a whitespace "\s"; since we already cleaned all duplicate whitespace we do not need a quantifier. To handle the trailing whitespace we use the OR operator "|" and check for a whitespace "\s" followed by the anchor "$", which marks the end of the string. Any hit is replaced with an empty string, and we have a clean string.

By adding these two lines of code to the parser, we avoid running into strange issues that could take time to troubleshoot.

//Sample
CustomLogSource_CL
| extend SyslogMessage = replace_regex(SyslogMessage,@"\s+",@" ") //Remove duplicate whitespaces
| extend SyslogMessage = replace_regex(SyslogMessage,@"^\s|\s$",@"") //Remove leading and trailing whitespaces

Happy Hunting!

Near-Real-Time analytic rules in Microsoft Sentinel

NRT Rules are hard-coded to run once every minute and capture events ingested in the preceding minute.

This enables faster detection and response.

Considerations

  • No more than 20 rules can be defined per customer at this time
  • As this type of rule is new, its syntax is currently limited but will gradually evolve. Therefore, at this time the following restrictions are in effect:
    • The query defined in an NRT rule can reference only one table. Queries can, however, refer to multiple watchlists and to threat intelligence feeds.
    • You cannot use unions or joins.
    • Because this rule type is in near real time, we have reduced the built-in delay to a minimum (two minutes).
    • Since NRT rules use the ingestion time rather than the event generation time (represented by the TimeGenerated field), you can safely ignore the data source delay and the ingestion time latency.
    • Queries can run only within a single workspace. There is no cross-workspace capability.
    • There is no event grouping. NRT rules produce a single alert that groups all the applicable events.

There is a technical limitation that blocks union, join, and similar operators; a single-table query like the sketch below works fine.
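
A minimal sketch of the kind of query that fits within these restrictions, referencing one table with no unions or joins (the table, event ID, and threshold are assumptions for illustration):

//Single-table query suitable for an NRT rule
SecurityEvent
| where EventID == 4625 //Failed logons
| summarize FailedLogons = count() by Account, Computer
| where FailedLogons > 10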

For further information about Near-Real-Time, NRT, analytic rules, please visit:

https://docs.microsoft.com/en-us/azure/sentinel/near-real-time-rules

Happy Hunting!

Becoming a Sentinel Notebooks Ninja – training links

Do you want to learn more about Sentinel Notebooks (built on Jupyter Notebooks)? Microsoft has released a set of trainings to skill up in the area.

Notebooks can be useful for cross-tenant hunting, and also for hunting across products and multiple data sources if needed.

They can also be interactive, acting as a manual playbook with steps mixed with queries and graphs, which makes them easy to follow through.

Sorry for the short blog post, but this one is about sharing content.

Happy Hunting!

Microsoft 365 Defender connector for Azure Sentinel in public preview


A new connector for Microsoft 365 Defender is in public preview in Azure Sentinel. This connector makes it possible to ingest the hunting data into Sentinel.

Currently, the Defender for Endpoint data is available.

To enable the connector:

  • Go to your Azure Sentinel instance and select Connectors
  • Search for Microsoft 365 Defender
  • Click Open Connector Page
  • Select which events you want to ingest
  • Click Apply Changes

Example queries

//Registry events
DeviceRegistryEvents
| where ActionType == "RegistryValueSet"
| where RegistryValueName == "DefaultPassword"
| where RegistryKey has @"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"
| project Timestamp, DeviceName, RegistryKey
| top 100 by Timestamp
//Process and Network events
union DeviceProcessEvents, DeviceNetworkEvents
| where Timestamp > ago(7d)
| where FileName in~ ("powershell.exe", "powershell_ise.exe")
| where ProcessCommandLine has_any("WebClient", "DownloadFile", "DownloadData",
    "DownloadString", "WebRequest", "Shellcode", "http", "https")
| project Timestamp, DeviceName, InitiatingProcessFileName, InitiatingProcessCommandLine,
    FileName, ProcessCommandLine, RemoteIP, RemoteUrl, RemotePort, RemoteIPType

If we look at the tables, we can see the newly created tables.

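One way to verify this from the query pane is to union the new tables by name prefix and count events per table; a minimal sketch, assuming the Defender for Endpoint tables all share the Device prefix:

//List the new tables and their event counts
union withsource=SentinelTable Device*
| summarize Events = count() by SentinelTable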

More information about the data in these tables is available in this post: https://blog.sec-labs.com/2018/06/threat-hunting-with-windows-defender-atp/

For further reading:

Azure Sentinel is now GA

Azure Sentinel, the cloud-native SIEM that empowers defenders, is now generally available.


Some of the new features are:

  • Workbooks are replacing dashboards, providing richer analytics and visualizations
  • New Microsoft and 3rd party connectors

Detection and hunting:

  • Out of the box detection rules: The GitHub detection rules are now built into Sentinel.
  • Easy elevation of MTP alerts to Sentinel incidents.
  • Built-in detection rules utilizing the threat intelligence connector.
  • New ML models to discover malicious SSH access, and fusion of identity and access data to detect 35 unique threats that span multiple stages of the kill chain. Fusion is now on by default and managed through the UI.
  • Template playbooks now available on GitHub.
  • New threat hunting queries and libraries for Jupyter Notebooks

Incidents:

  • The interactive investigation graph is now publicly available.
  • Incidents now support tagging, comments, and assignments, both manually and automatically using playbooks.

MSSP and enterprise support:

  • Azure Lighthouse for multi-tenant management
  • RBAC support

For further information:

Pricing: https://azure.microsoft.com/en-us/pricing/details/azure-sentinel/
Product page: https://azure.microsoft.com/en-us/services/azure-sentinel/
Documentation: https://docs.microsoft.com/en-us/azure/sentinel/

Happy Hunting!

Audit Scheduled tasks using Azure Sentinel

Azure Sentinel is a powerful cloud-based SIEM solution.
This blog series will cover how to work with Sentinel.

It will be example-based, covering different scenarios we might run into.

This first post is about how you can work with logs to get insight into scheduled tasks, a common way for attackers to persist in your network.

For further information regarding Sentinel, visit https://azure.microsoft.com/en-us/services/azure-sentinel/

Scheduled Tasks

By default, no events are created when someone creates or modifies a scheduled task. To enable logging, you have to enable auditing of object access.

To view current settings, use the following command:

auditpol.exe /get /category:*

Only Success is required for this; it enables us to get event 4698 (a scheduled task was created).

To enable logging, create a new GPO and assign the following settings (depending on whether you want Success/Failure or only Success).

You also have to configure your agents to send logs to your workspace; you can download the agent from the Azure Sentinel workspace / <workspace name> / Advanced Settings.

Otherwise, you can add the Sentinel workspace to your existing agents:

#Add the Log Analytics workspace used by Sentinel to an existing Microsoft Monitoring Agent
$Agent = New-Object -ComObject AgentConfigManager.MgmtSvcCfg
$ID = "<WorkspaceID>"
$Key = "<key>"
$Agent.AddCloudWorkspace($ID, $Key)
#Restart the agent service to apply the change
Restart-Service HealthService

In Azure Sentinel – Data connectors, configure Security Events


Verify heartbeats from computers:


Heartbeat | summarize arg_max(TimeGenerated, *) by Computer
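
A related sketch, if you also want to spot agents that have gone quiet (the 15-minute threshold is an assumption; tune it to your environment):

//Computers whose last heartbeat is older than 15 minutes
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| where LastHeartbeat < ago(15m)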


So now we have logs from two computers, and we want to query for scheduled tasks.

A simple way is to just query the EventID.

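A minimal sketch of that query:

//All scheduled task creation events
SecurityEvent
| where EventID == 4698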

We can use project to format our table, but we still want to extract information about the created tasks to get a better overview.


According to the documentation, we can use the parse operator to parse a string expression into one or more calculated columns:

https://docs.microsoft.com/en-us/azure/kusto/query/parseoperator

//Example
SecurityEvent
| where EventID == "4698"
| parse EventData with * '"SubjectUserName">' SubjectUserName '<' * '"SubjectDomainName">' SubjectDomainName '<' *

This query will:

  • Select all events where EventID == 4698
  • Parse the EventData column, looking for '"SubjectUserName">'
  • Put everything that follows into a column named SubjectUserName, until the character '<'
  • The wildcard then makes the parser skip ahead until the next match
  • Continue parsing until '"SubjectDomainName">'
  • Put everything that follows into a column named SubjectDomainName, until the character '<'

To get some really useful information, we continue parsing the content until we have everything we need.

//Sec-Labs Demo - Sentinel Hunting for Scheduled Tasks Persistance
let start=datetime("2019-03-12T19:39:47.762Z");
let end=datetime("2019-03-19T22:39:47.762Z");
SecurityEvent
| where TimeGenerated > start and TimeGenerated < end
| where EventID == "4698"
| parse EventData with * '"SubjectUserName">' SubjectUserName '<' * '"SubjectDomainName">' SubjectDomainName '<' * '"TaskName">\\' TaskName '<' * 'Author>' Author '<' * '<Command>' SchedCommand '</Command' * 'Arguments>' SchedArgs '</Arguments' * 'WorkingDirectory>' SchedDir '&' *
| where isnotempty(SubjectUserName)
| project TimeGenerated,SubjectUserName,Computer,Activity,SubjectDomainName,TaskName,SchedCommand,SchedArgs,SchedDir
| project-rename CreatedBy = SubjectUserName


To rename columns, you can use project-rename <new name> = <old column name>

Happy Hunting!