Detect

Take actions from Threat Hunting in M365 Defender

We wrote a blog post earlier about the new threat hunting features:

New features in Advanced Hunting – Microsoft 365 Defender – SEC-LABS R&D

Another hunting feature that will speed up response in a threat hunting scenario is Take actions.

When selecting a record in the result, the Take actions button becomes visible, as seen in the picture below

take actions, m365 defender

So instead of just creating a new incident or adding events to an existing incident, we can take actions directly from the hunting experience.

In the Take actions experience we have actions grouped by Devices, Files and Users.

actionable items, m365 defender

The action options available depend on the data in the result. For instance, file information such as the checksum is required to be able to quarantine a file.
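As a minimal sketch, a hunting query that keeps the file information needed for file actions could look like this (the file name is just a placeholder for illustration):

//Illustrative only: include the file hash so file actions like quarantine are available
DeviceFileEvents
| where FileName == "suspicious.exe"   //hypothetical file name
| project Timestamp, DeviceId, DeviceName, FileName, FolderPath, SHA1, ReportId
| take 10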

When clicking Next we can review the selected target, then click Next again

We can add a Remediation name and Description for our action

This feature puts rapid response at the fingertips of the threat hunters for immediate action

For further information, please visit

https://docs.microsoft.com/en-us/microsoft-365/security/defender/advanced-hunting-take-action?view=o365-worldwide

Happy Hunting!

Sec-Labs Team

Creating NRT Rules in Microsoft Sentinel

For information about NRT rules, please see the previous blog post or visit

https://docs.microsoft.com/en-us/azure/sentinel/near-real-time-rules

Creating NRT rules

Navigate to Microsoft Sentinel in the Azure portal

https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/microsoft.securityinsightsarg%2Fsentinel

In the navigation, select Analytics

Click Create and select NRT query rule


Give it a name, add a Description, MITRE tactics and Severity, and click Next

In the configuration window, there is no schedule or lookback time to define

Configure your query accordingly and continue the wizard.

Requirements

  • You can only refer to one table and cannot use unions or joins
  • No cross-workspace queries
  • Use project and keep only the necessary fields, to avoid truncation due to the size limitations of alerts
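A rough sketch of an NRT-friendly query under these constraints (assuming, for illustration, that the SecurityEvent table is ingested in your workspace):

//Sample NRT query sketch: one table, no joins or unions, project only what is needed
SecurityEvent
| where EventID == 4625
| project TimeGenerated, Computer, Account, IpAddress, LogonType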

For further information, please visit

https://docs.microsoft.com/en-us/azure/sentinel/create-nrt-rules

Near-Real-Time analytic rules in Microsoft Sentinel

NRT Rules are hard-coded to run once every minute and capture events ingested in the preceding minute.

This allows for faster detection and response.

Considerations

  • No more than 20 rules can be defined per customer at this time
  • As this type of rule is new, its syntax is currently limited but will gradually evolve. Therefore, at this time the following restrictions are in effect:
    • The query defined in an NRT rule can reference only one table. Queries can, however, refer to multiple watchlists and to threat intelligence feeds.
    • You cannot use unions or joins.
    • Because this rule type is in near real time, we have reduced the built-in delay to a minimum (two minutes).
    • Since NRT rules use the ingestion time rather than the event generation time (represented by the TimeGenerated field), you can safely ignore the data source delay and the ingestion time latency (see above).
    • Queries can run only within a single workspace. There is no cross-workspace capability.
    • There is no event grouping. NRT rules produce a single alert that groups all the applicable events.

There is a technical limitation that blocks union, join, etc.

For further information about Near-Real-Time, NRT, analytic rules, please visit:

https://docs.microsoft.com/en-us/azure/sentinel/near-real-time-rules

Happy Hunting!

New features in Advanced Hunting – Microsoft 365 Defender

During Ignite, Microsoft announced a new set of features in Advanced Hunting in Microsoft 365 Defender.

These features will definitely help you in the threat hunting process, reduce the gap between analysts, responders and threat hunters, and simplify the life of a threat hunter.

Multi-tab support

When running hunting training classes, I usually recommend using multiple browser tabs: one for query development, and one for going back to previous queries to see how things were done earlier.

For example, if you are developing a hunting query and need an if statement, external data, regex or other more advanced features, it is easier to just open a previous query to see how it was solved last time, at least until you get more fluent in KQL. This avoids having to save your new query, go back to the old one, and then back to the new one again.
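As a small illustrative snippet of the kind of thing you might want to look up, here is an if statement (iff) in KQL (the table and logic are just placeholders):

//Illustrative example of an if statement (iff) in KQL
DeviceProcessEvents
| take 100
| extend IsSystemAccount = iff(AccountName == "system", "Yes", "No")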

With the multi-tab support we can open the query in a new tab

Resource usage

The new hunting page now provides the resource usage for the query, both the timing and an indicator of the resources consumed

This makes it easy to see when query optimization is recommended or needed.
You could, for example, use equals or has instead of contains, or remove unused columns to reduce the dataset, when feasible of course.
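A small illustrative example of the difference (the search string is just a placeholder):

//Less efficient: contains scans for the substring anywhere in the value
DeviceProcessEvents
| where ProcessCommandLine contains "mimikatz"

//More efficient: has matches on whole terms, and project reduces the dataset
DeviceProcessEvents
| where ProcessCommandLine has "mimikatz"
| project Timestamp, DeviceName, FileName, ProcessCommandLine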

If you would like to learn more about how to optimize queries, please visit:

https://docs.microsoft.com/en-us/microsoft-365/security/defender/advanced-hunting-best-practices?view=o365-worldwide

UX

Schema, Functions, Queries and Detection rules have been separated into tabs, which in my opinion gives easier access and pivoting, and a better overview in each tab.

Schema Reference

The schema reference will open as a side pane




When looking at one of the *events tables, the ActionType column is very useful to see which events are being logged.
Earlier, I usually selected distinct ActionType in the query to have a look at the events being logged. Now, it’s possible to use the quick access from the portal to expand all action types for a specific table.
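The query-based approach looks something like this (using DeviceFileEvents as an example):

//List the action types logged in a specific table
DeviceFileEvents
| distinct ActionType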

The image above shows the action types for DeviceFileEvents. In DeviceEvents there are around 180 different action types to query.

For hunting query development and hunting use cases, the action types are a great go-to resource.

The columns in the schema reference are clickable and can easily be added to the query

Simple query management

Inspect record

The inspect record pane is an easy way to see the data for one single row. When developing new queries I usually take a subset of data (take/limit 20) to see an overview of the results, and also select an event to see all data instead of side scrolling through all columns when needed.
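A minimal way to grab such a subset before inspecting a single record (the table name is just illustrative):

//Take a small subset of the results to get an overview
DeviceProcessEvents
| take 20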

A new feature in inspect record is that we can apply quick filters which will be added to the query.

In this example we would like to know more about process executions from the C:\AttackTools folder

If we would like other pre-defined FolderPath filters, we can select View more filters for FolderPath.
We can continue the query development and, as in the example below, get the count for each file in the folder specified in the query.
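A sketch of such a query, reusing the hypothetical C:\AttackTools folder from the example:

//Count process executions per file in the folder from the example
DeviceProcessEvents
| where FolderPath startswith @"C:\AttackTools"
| summarize ExecutionCount = count() by FileName
| sort by ExecutionCount desc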

Last but definitely not least: link the query results to an incident

This is my favorite; it will reduce the gap and simplify the process between threat hunters, responders, and analysts.

By selecting the relevant events in the result, they can be added to an existing incident, or used to create a new incident.

This feature will help organizations define their threat hunting both in a proactive hunting scenario and in a reactive, post-breach scenario where the hunters assist analysts and responders with a simplified process.

How to link the data to an incident

To be able to link the data you need to have the following columns in the output

  • Timestamp
  • DeviceId/AccountObjectID/AccountSid/RecipientEmailAddress (Depending on query table)
  • ReportId

Develop and run the query

Please note that you cannot have multiple queries in the query window when linking to an incident
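A minimal sketch of a device-based query that keeps the required columns for linking (reusing the hypothetical C:\AttackTools example from above):

//Keep Timestamp, DeviceId and ReportId so the results can be linked to an incident
DeviceProcessEvents
| where FolderPath startswith @"C:\AttackTools"
| project Timestamp, DeviceId, ReportId, DeviceName, FileName, ProcessCommandLine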

Choose to create a new incident or link to an existing one

Add the necessary details and click next
Select the impacted entities
After finishing the wizard, the data will end up in a new alert in the incident

Last tip

Run a quick check in your environment to see if you have remote, internet-based logon attempts on your devices by looking for RemoteIPType == "Public". There are other cases where RemoteIPType is useful, such as processes communicating with the Internet.
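A quick sketch of such a check against logon events (illustrative only, adjust to your environment):

//Internet-based logon attempts against your devices
DeviceLogonEvents
| where RemoteIPType == "Public"
| summarize count() by DeviceName, LogonType, ActionType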

Happy Hunting!

Becoming a Sentinel Notebooks Ninja – training links

Do you want to learn more about Sentinel Notebooks (built on Jupyter Notebooks)? Microsoft has released a set of trainings to skill up in the area

Notebooks can be useful for cross-tenant hunting, and also across products and multiple data sources if needed.

They can also be interactive, working as a manual playbook with steps mixed with queries and graphs, which makes them easy to follow through.

Sorry for the short blog post, but this one is about sharing content

Happy Hunting!

Download quarantined files is GA

As announced by Microsoft last week, Download quarantined files is now generally available.

This makes it simpler for SecOps to download quarantined files for further analysis.

So, why do SecOps want to download files?

One reason could be that they want to do forensic analysis on the file to see if the response actions taken were enough, or to extract indicators which they can hunt for.

The feature is found under advanced features and is enabled by default

MDATP Settings – Microsoft 365 security

Cloud protection integration

The file download is dependent on the sample submission settings. Make sure it’s turned on!

Requirements 

The file download is available from multiple pages in Defender

It’s also visible on the file page. The reason we want the download option on multiple pages is to avoid having to switch views and to be able to take the actions from wherever we are in the portal

Update

The possibility to set a password for the file download makes it safer and also avoids the file being detected during download

Live response API – build your custom playbooks

PUBLIC PREVIEW FEATURE

We have been able to use Live Response for some time now. It’s really great and we can take the response actions we find necessary and download data from the endpoint through the browser session.

Here is a very high-level view of how the architecture looks for the live response feature

Some things that may be difficult today, given the limitations of a single session, are that we can only connect to one machine at a time and that automation does not apply to a browser session

It is useful if a machine is compromised in some way, but if we want to automate the responses or run the same custom playbook against multiple devices, we need to use the API

The API can be used both to collect necessary artefacts from devices, and also take remediation actions.

At some events, we’ve presented how to use Live Response to dump memory and export the .dmp files to Azure storage, as an example of how powerful it is.

Requirements

Requirements and limitations

  1. Rate limitations for this API are 10 calls per minute (additional requests are responded with HTTP 429).
  2. 25 concurrently running sessions (requests exceeding the throttling limit will receive a “429 – Too many requests” response).
  3. If the machine is not available, the session will be queued for up to 3 days.
  4. RunScript commands time out after 10 minutes.
  5. Live response commands cannot be queued up and can only be executed one at a time.
  6. If the machine that you are trying to run this API call on is in an RBAC device group that does not have an automated remediation level assigned to it, you’ll need to at least enable the minimum remediation level for the given device group.
  7. Multiple live response commands can be run in a single API call. However, when a live response command fails, all subsequent actions will not be executed.

Minimum Requirements

Before you can initiate a session on a device, make sure you fulfill the following requirements:

Set up a service principal with API access

Sample code to connect with the service principal

Connecting to M365Defender

Connect to the MDE API (which applies in this case)

Request

Header

Name          | Type   | Description
Authorization | String | Bearer <token>. Required.
Content-Type  | string | application/json. Required.

Body

Parameter | Type   | Description
Comment   | String | Comment to associate with the action.
Commands  | Array  | Commands to run. Allowed values are PutFile, RunScript, GetFile.

Available commands

Command Type | Parameters | Description
PutFile      | Key: FileName, Value: <file name> | Puts a file from the library to the device. Files are saved in a working folder and are deleted when the device restarts by default.
RunScript    | Key: ScriptName, Value: <script from library>; Key: Args, Value: <script arguments> | Runs a script from the library on a device. The Args parameter is passed to your script. Times out after 10 minutes.
GetFile      | Key: Path, Value: <file path> | Collects a file from a device. NOTE: Backslashes in the path must be escaped.

Sample Live response request body

You can upload your own scripts to the library and call them in a similar way as when you use interactive Live Response

POST https://api.securitycenter.microsoft.com/api/machines/1e5bc9d7e413ddd7902c2932e418702b84d0cc07/runliveresponse


{
   "Commands":[
      {
         "type":"RunScript",
         "params":[
            {
               "key":"ScriptName",
               "value":"minidump.ps1"
            },
            {
               "key":"Args",
               "value":"OfficeClickToRun"
            }

         ]
      },
      {
         "type":"GetFile",
         "params":[
            {
               "key":"Path",
               "value":"C:\\windows\\TEMP\\OfficeClickToRun.dmp.zip"
            }
         ]
      }
   ],
   "Comment":"Testing Live Response API"
}

For further reading, please visit

https://docs.microsoft.com/en-us/microsoft-365/security/defender-endpoint/run-live-response

Happy Hunting!

Use Kusto to break down time stamps

Sometimes you might want to split the time stamp of an event into smaller pieces, like month, day, hour, etc.

For instance, you might want to see if you have more alerts during some specific hours of the day or if anyone is using RDP in the middle of the night.

To achieve this we use the function datetime_part, which can split the time stamp into the following parts:

  • Year
  • Quarter
  • Month
  • week_of_year
  • Day
  • DayOfYear
  • Hour
  • Minute
  • Second
  • Millisecond
  • Microsecond
  • Nanosecond

This data could, of course, be used for further analysis and joined with other events.

//Sample query
AlertInfo
| extend alerthour = datetime_part("hour", Timestamp)
| summarize count() by alerthour, DetectionSource
| sort by alerthour asc
| render areachart   
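In the same spirit, here is a sketch of the RDP-in-the-middle-of-the-night example mentioned above (treating 00-05 as "night" is just an assumption):

//Sample query: RDP logons during night hours
DeviceLogonEvents
| where LogonType == "RemoteInteractive"
| extend logonhour = datetime_part("hour", Timestamp)
| where logonhour between (0 .. 5)
| summarize count() by DeviceName, AccountName, logonhour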

For further reading about Kusto datetime_part, please visit
https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/datetime-partfunction

#HappyHunting

Helpful feature in MDATP

One of the benefits of using a cloud service backend instead of on-prem appliance boxes is that we can get new features without doing anything, except perhaps enabling them depending on the feature.

One feature I like is the “flag event” feature in the timeline.

flag event defender atp

In the machine timeline view there is a “flag” we can enable on each event we find interesting. This will make it easier to go back and further investigate suspicious activities.

In the overview we can see where the flags are located in the timeline and if we want, we can also filter on flagged events

Happy Hunting

Application Consent – Protect, Detect and Respond

As companies raise the bar and protect more and more accounts with multi-factor authentication, attacks are twisting into new angles. The method of using application consent is nothing new, but attackers haven’t had a need to use it, as a stolen password normally means less friction.

So what is application consent? Application consent is a way to grant applications the permissions they need to access your data in order to perform their specific task. An example could be a travel app that needs to read your travel itinerary so that it can automatically update your calendar with flight data or other information.

I am sure everyone has seen an app consent screen

Source: Microsoft docs

Kevin Mitnick did a malicious ransomware PoC around this using application consent roughly two years ago; feel free to watch the demo on the YouTube link.

Application Control Protect

The first thing you should ask yourselves: do you allow your users to grant permissions to their data themselves, or have you as an organization taken this control centrally?

The settings can be configured under your Azure Active Directory

First off, can your users register applications themselves, or is this under central control?

AAD > User Settings > Enterprise Applications

If you do not allow this, the users will never be able to grant an app consent either. But if you do, you can control how much data they can share and under what circumstances; you will find a few options in the detailed settings below.

AAD > Enterprise Applications > User Settings

AAD > Enterprise Applications > Consent and Permissions > User Consent Settings (Preview at the time of writing)

  • Do not allow user consent
  • Allow user consent for apps from verified publishers
  • Allow user consent for Apps

Allowing users to consent to apps will put you at risk, as they can be lured into accepting an application consent. This is not only sensitive from a security threat perspective, but also from a privacy/secrecy perspective, where third-party apps, malicious or not, are for example granted access to PII or customer data.

Here you need to find the balance between control and risk based on how much you can detect. With “Allow user consent for apps from verified publishers” you also have the option to control which data and methods are being granted. Note that offline_access is something you need to review thoroughly, as it opens up your exposure.

Another possibility is to use admin consent requests; in this case a user can request a consent that an admin will have to review and approve or deny.

AAD > Enterprise Applications > User Settings

Application Control Detection

There are a few ways to see and detect application consents: either you create a manual process to review them on a schedule, or you use the tools you have at hand. Below are some examples of what you can use, depending on how you are licensed and how you have integrated your logs.

If you have connected the Azure AD audit logs to Azure Sentinel, this is an example query to find application consent activity.

AuditLogs 
| where OperationName == "Consent to application"
| extend displayName_ = tostring(TargetResources[0].displayName)
| extend userPrincipalName_ = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| project displayName_, userPrincipalName_, ActivityDateTime 

Application Control Respond

So what can you do if you find applications that you suspect are doing malicious activities or putting your data at risk?

You have a few options. Start by documenting and keeping a timeline of all the activities you take; it’s easy to forget when you need to go back in time.

  • Block Sign-in to Application
  • Remove Users from the Application
  • Remove the Application Completely
  • Ban/Block Application in MCAS
  • Review Permissions under the App in AAD

I wouldn’t recommend removing the app until your investigation is complete; I’d rather block the sign-in. Depending on the tools you have, you can start going through your audit logs in relation to this app.

More Reading

https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-permissions-and-consent

https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/manage-consent-requests

https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-user-consent

https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/detect-and-remediate-illicit-consent-grants?view=o365-worldwide