Tag Archive: SIEM

Creating NRT Rules in Microsoft Sentinel

For information about NRT rules, please see the previous blog post or visit:

https://docs.microsoft.com/en-us/azure/sentinel/near-real-time-rules

Creating NRT rules

Navigate to Microsoft Sentinel in the Azure portal

https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/microsoft.securityinsightsarg%2Fsentinel

In the navigation, select Analytics

Click Create and select NRT query rule


Give the rule a name, add a description, MITRE tactics, and a severity, then click Next

In the query configuration step, there is no schedule or lookback period to define

Configure your query accordingly and continue the wizard.

Requirements

The query can only reference one table and cannot use unions or joins

Cross-workspace queries are not supported

Use project to keep only the necessary fields, to avoid truncation caused by the size limit on alerts
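
As an illustration only (this sketch is not from the original post; the table, event ID, and field names are assumptions), a query that meets these requirements reads a single table and trims the output with project:

//Hypothetical NRT rule query: one table, no unions or joins, trimmed with project
SecurityEvent
| where EventID == 4625 and AccountType == "User" //4625 = failed sign-in
| project TimeGenerated, Computer, Account, IpAddress, LogonTypeName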

For further information, please visit

https://docs.microsoft.com/en-us/azure/sentinel/create-nrt-rules

Near-Real-Time analytic rules in Microsoft Sentinel

NRT Rules are hard-coded to run once every minute and capture events ingested in the preceding minute.

This enables faster detection and response.

Considerations

  • No more than 20 rules can be defined per customer at this time
  • As this type of rule is new, its syntax is currently limited but will gradually evolve. Therefore, at this time the following restrictions are in effect:
    • The query defined in an NRT rule can reference only one table. Queries can, however, refer to multiple watchlists and to threat intelligence feeds (see the first sketch below).
    • You cannot use unions or joins.
    • Because this rule type is in near real time, we have reduced the built-in delay to a minimum (two minutes).
    • Since NRT rules use the ingestion time rather than the event generation time (represented by the TimeGenerated field), you can safely ignore the data source delay and the ingestion latency (see the second sketch below).
    • Queries can run only within a single workspace. There is no cross-workspace capability.
    • There is no event grouping. NRT rules produce a single alert that groups all the applicable events.
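
To illustrate the watchlist point above (a sketch, not from the original post; the watchlist name HighValueAccounts is made up), one common pattern is to filter a single table against a watchlist with the in operator instead of a join:

//Hypothetical: single table, no join, filtered against a watchlist
SecurityEvent
| where EventID == 4624 //4624 = successful sign-in
| where Account in ((_GetWatchlist('HighValueAccounts') | project SearchKey))
| project TimeGenerated, Computer, Account, IpAddress, LogonTypeName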

The restriction on union, join, and similar operators is a hard technical limit of this rule type.
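
On the ingestion-time consideration in the list above, the sketch below (again just an illustration, not an NRT rule itself) uses the ingestion_time() function to show the gap between when an event was generated and when it was ingested into the workspace:

//Measure the delay between event generation (TimeGenerated) and ingestion
SecurityEvent
| extend IngestionDelaySeconds = datetime_diff('second', ingestion_time(), TimeGenerated)
| summarize avg(IngestionDelaySeconds), max(IngestionDelaySeconds) by Computer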

For further information about Near-Real-Time (NRT) analytic rules, please visit:

https://docs.microsoft.com/en-us/azure/sentinel/near-real-time-rules

Happy Hunting!

Azure Sentinel is now GA

Azure Sentinel, the cloud-native SIEM that empowers defenders, is now generally available


Some of the new features are:

  • Workbooks are replacing dashboards, providing for richer analytics and visualizations
  • New Microsoft and 3rd party connectors

Detection and hunting:

  • Out of the box detection rules: The GitHub detection rules are now built into Sentinel.
  • Easy elevation of MTP alerts to Sentinel incidents.
  • Built-in detection rules utilizing the threat intelligence connector.
  • New ML models that discover malicious SSH access and fuse identity and access data to detect 35 unique threats spanning multiple stages of the kill chain. Fusion is now on by default and managed through the UI.
  • Template playbooks now available on GitHub.
  • New threat hunting queries and libraries for Jupyter Notebooks

Incidents:

  • The interactive investigation graph is now publicly available.
  • Incidents support for tagging, comments, and assignments, both manually and automatically using playbooks.

MSSP and enterprise support:

  • Azure Lighthouse for multi-tenant management
  • RBAC support

For further information:

Pricing: https://azure.microsoft.com/en-us/pricing/details/azure-sentinel/
Product page: https://azure.microsoft.com/en-us/services/azure-sentinel/
Documentation: https://docs.microsoft.com/en-us/azure/sentinel/

Happy Hunting

Audit Scheduled tasks using Azure Sentinel

Azure Sentinel is a powerful cloud-based SIEM solution.
This blog series will cover how to work with Sentinel.

It will be example-driven, based on different scenarios we might run into.

This first post is about how you can work with logs to get insight into scheduled tasks, which attackers use as a way to persist in your network.

For further information regarding Sentinel, visit https://azure.microsoft.com/en-us/services/azure-sentinel/

Scheduled Tasks

By default, no events are created when someone creates or modifies a scheduled task. To get these events, you have to enable auditing of object access (specifically the Audit Other Object Access Events subcategory).

To view current settings, use the following command:

auditpol.exe /get /category:*

Only Success auditing is required for this. It gives us event ID 4698 (a scheduled task was created).

To enable logging, create a new GPO and configure the audit policy setting accordingly (depending on whether you want Success and Failure or only Success)

You also have to configure your agents to send logs to your workspace. You can download the agent from the Azure Sentinel workspace under <workspace name> / Advanced Settings

Alternatively, you can add the Sentinel workspace to your existing agents:

# Add the Sentinel (Log Analytics) workspace to an existing Microsoft Monitoring Agent
$Agent = New-Object -ComObject AgentConfigManager.MgmtSvcCfg
$ID = "<WorkspaceID>"
$Key = "<key>"
$Agent.AddCloudWorkspace($ID,$Key)
# Restart the agent service so the new workspace configuration is picked up
Restart-Service HealthService

In Azure Sentinel – Data connectors, configure Security Events


Verify heartbeats from computers


Heartbeat | summarize arg_max(TimeGenerated, *) by Computer


Now we have logs from two computers, and we want to query for scheduled tasks.

A simple way is to just query the EventID


We can use project to keep only the columns we need, but we still want details about the tasks that were created to get a better overview.
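
For example (a sketch, not from the original post, using standard SecurityEvent columns):

SecurityEvent
| where EventID == 4698
| project TimeGenerated, Computer, Account, Activity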


According to the documentation, we can use the parse operator to evaluate a string expression and parse its value into one or more calculated columns:

https://docs.microsoft.com/en-us/azure/kusto/query/parseoperator

//Example
SecurityEvent
| where EventID == "4698"
| parse EventData with * '"SubjectUserName">' SubjectUserName '<' * '"SubjectDomainName">' SubjectDomainName '<' *

This query will:

  • Select all events where EventID == 4698
  • Parse the EventData column, looking for the string '"SubjectUserName">'
  • Put everything up to the next '<' character into a column named SubjectUserName
  • Use the wildcard (*) to skip ahead and do the same thing again
  • Continue parsing until '"SubjectDomainName">'
  • Put everything up to the next '<' character into the column SubjectDomainName

To get to the really useful information, we continue parsing the content until we have everything we need:

//Sec-Labs Demo - Sentinel Hunting for Scheduled Tasks Persistence
let start=datetime("2019-03-12T19:39:47.762Z");
let end=datetime("2019-03-19T22:39:47.762Z");
SecurityEvent
|where TimeGenerated > start and TimeGenerated < end
| where EventID == "4698"
| parse EventData with * '"SubjectUserName">' SubjectUserName '<' * '"SubjectDomainName">' SubjectDomainName '<' * '"TaskName">\\' TaskName '<' * 'Author>' Author '<' * '<Command>' SchedCommand '</Command' * 'Arguments>' SchedArgs '</Arguments' * 'WorkingDirectory>' SchedDir '&' *
| where isnotempty(SubjectUserName)
| project TimeGenerated,SubjectUserName,Computer,Activity,SubjectDomainName,TaskName,SchedCommand,SchedArgs,SchedDir
| project-rename CreatedBy = SubjectUserName


To rename columns, you can use project-rename <new name> = <old column name>

Happy Hunting!

Windows Event Forward and Custom Logs

First of all, this post is more about configuring custom event channels than configuring WEF.

 

There is more than one way to work with event logs, and the most important thing is that you start working with them at all.

One of the benefits is having a single place to find the logs for multiple systems. If someone clears, for example, the Security log, it is important that you can still find the events from before that happened and trigger alerts on the clearing event.

Using the WEC (Windows Event Collector) service is a free option and one of the most frequently used ways to gather logs from Windows clients.

So where do these events end up?

 

Windows Event Forwarding uses WinRM to forward the logs from the source computer to the server that runs the Windows Event Collector service.

There are two subscription types: collector initiated, where the WEC server connects to the clients and pulls the events, and source initiated, where the clients push their events to the WEC server.

This is configured in the subscription settings in Event Viewer.

Besides the subscription type, you also have to configure the destination log (Forwarded Events by default) and select which events will be forwarded.

There are a few GitHub projects that provide event selections in XML (XPath) format, which you can use to populate the subscription automatically.

 

Security people are not the only ones who want to be able to forward events.

IT operations and endpoint management teams would benefit from WEF by being able to collect errors and other events that might help with troubleshooting.

If you are about to publish new AppLocker rules, you could set them to audit mode and then collect and analyze the events to see where the rules would impact users.

Since there are multiple use cases for WEF, you may want to separate the events into different destination logs.

Security people may not want the support-.log to fill their selection of security-related events.

You may want to forward the security logs into a SIEM solution like Splunk or QRadar, and you don't want to waste SIEM data license on non-security events.

 

To achieve this, we create a custom log.

Create the event manifest using ECManGen.exe (provided in one of the older Windows 10 SDKs; be aware that this tool has been removed from the latest releases).

Save the output to c:\temp\WEF and run the following commands:

"C:\Program Files (x86)\Windows Kits\10\bin\x64\mc.exe" C:\temp\wef\WEFEvents.man

"C:\Program Files (x86)\Windows Kits\10\bin\x64\mc.exe" -css WEFEvents.DummyEvent C:\temp\wef\WEFEvents.man

"C:\Program Files (x86)\Windows Kits\10\bin\x64\rc.exe" C:\temp\wef\WEFEvents.rc

"C:\Windows\Microsoft.NET\Framework64\v4.0.30319\csc.exe" /win32res:C:\temp\wef\WEFEvents.res /unsafe /target:library /out:C:\temp\wef\WEFEvents.dll C:\temp\wef\WEFEvents.cs

Copy the WEFEvents.dll and WEFEvents.man to c:\windows\system32 and register with:

wevtutil im c:\Windows\system32\WEFEvents.man

 

You will now be able to use these logs for WEF.

You can have, for example, one log for servers and one for clients, one with a Splunk forwarder attached, and one inserted into a database with a nice custom interface that suits your needs, depending on what you have.