Another feature in hunting that speeds up response in a threat hunting scenario is Take Action
When selecting a record in the results, the Take Action button becomes visible, as seen in the picture below
So instead of just creating a new incident or adding events to an existing incident, we can take actions directly from the hunting experience.
In the Take actions experience, the actions are grouped by Devices, Files and Users.
The action options available depend on the data in the result. For instance, file information like a checksum is required to be able to quarantine a file.
After clicking Next, we can review the selected target and continue
We can add a Remediation name and Description for our action
This feature puts rapid response at the fingertips of the threat hunters for immediate action
NRT Rules are hard-coded to run once every minute and capture events ingested in the preceding minute.
This provides a faster detection and response opportunity.
Considerations
No more than 20 rules can be defined per customer at this time
As this type of rule is new, its syntax is currently limited but will gradually evolve. Therefore, at this time the following restrictions are in effect:
The query defined in an NRT rule can reference only one table. Queries can, however, refer to multiple watchlists and to threat intelligence feeds.
You cannot use unions or joins; a technical limit blocks these operators.
Because this rule type is in near real time, we have reduced the built-in delay to a minimum (two minutes).
Since NRT rules use the ingestion time rather than the event generation time (represented by the TimeGenerated field), you can safely ignore the data source delay and the ingestion time latency.
Queries can run only within a single workspace. There is no cross-workspace capability.
There is no event grouping. NRT rules produce a single alert that groups all the applicable events.
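To make these restrictions concrete, here is a minimal sketch of a query shape that would be NRT-compatible: a single table, no joins or unions. The table, logic and threshold are illustrative only:

```kql
// Hypothetical NRT-compatible query: a single table, no joins or unions.
// Flags a burst of failed sign-ins per account and source IP.
SigninLogs
| where ResultType != "0"        // non-zero result codes are failed sign-ins
| summarize FailedAttempts = count() by UserPrincipalName, IPAddress
| where FailedAttempts > 20      // illustrative threshold
```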
For further information about Near-Real-Time, NRT, analytic rules, please visit:
At Ignite, Microsoft announced a new set of features in Advanced Hunting in Microsoft 365 Defender.
These features will definitely help you in the threat hunting process; they reduce the gap between analysts, responders and threat hunters, and simplify the life of a threat hunter.
Multi-tab support
When running hunting training classes, I usually recommend using multiple browser tabs: one for query development, and one for going back to previous queries to see how things were done earlier.
For example, if you are developing a hunting query and need an if statement, external data, a regex or other more advanced features, it is easier to open a previous query to see how it was solved last time, at least until you get more fluent in KQL. This avoids having to save your new query, go back to the old one, and then back to the new one again.
With multi-tab support, we can open the query in a new tab
Resource usage
The new hunting page now shows the resource usage for each query: both the run time and an indicator of how heavy the query was.
This makes it easy to see when query optimization is recommended or needed. You could, for example, use equality or has instead of contains, or remove unused columns to reduce the dataset, when feasible.
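As a minimal sketch, this kind of rewrite could look like the following; the table and filter values are illustrative:

```kql
// Illustrative only: filter early, prefer term matching ('has') over
// substring scanning ('contains'), and project only the columns you need.
DeviceProcessEvents
| where Timestamp > ago(1d)                   // narrow the time window first
| where FileName has "powershell"             // cheaper than 'contains'
| project Timestamp, DeviceName, FileName, ProcessCommandLine
```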
If you would like to learn more about how to optimize queries, please visit:
Schema, Functions, Queries and Detection Rules have been separated into tabs, which, in my opinion, gives easier access and pivoting and a better overview in each tab.
Schema Reference
The schema reference will open as a side pane
When looking at one of the *Events tables, the ActionType column is very useful for seeing which events are being logged. Earlier, I usually selected distinct ActionType in the query to get a look at the events being logged. Now it's possible to use the quick access in the portal to expand all action types for a specific table.
The image above shows the action types for DeviceFileEvents. In DeviceEvents there are around 180 different action types to query.
For hunting query development and hunting use cases, the action types are a great go-to resource.
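For reference, the manual approach mentioned above, selecting the distinct action types yourself, is a short query:

```kql
// List the distinct action types logged in a table, here DeviceFileEvents.
DeviceFileEvents
| distinct ActionType
| sort by ActionType asc
```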
The columns in the schema reference are clickable and can easily be added to the query
Simple query management
Inspect record
The inspect record pane is an easy way to see the data for one single row. When developing new queries, I usually take a subset of the data (take/limit 20) to get an overview of the results, and also select an event to see all its data instead of side-scrolling through all the columns.
A new feature in inspect record is quick filters, which are added directly to the query.
In this example we would like to know more about process executions from the C:\AttackTools folder
If we would like other pre-defined FolderPath filters, we can select View more filters for FolderPath. We can then continue the query development and, as in the example below, get the count for each file in the folder specified in the query.
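As a sketch, the resulting query could look like the following, combining the FolderPath filter with a per-file count (table and path as in the example above):

```kql
// Count process executions per file from the folder used in this example.
DeviceProcessEvents
| where FolderPath startswith @"C:\AttackTools"
| summarize ExecutionCount = count() by FileName
| sort by ExecutionCount desc
```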
Last but definitely not least… Link the query results to an incident
This is my favorite; it reduces the gap and simplifies the process between threat hunters, responders, and analysts.
By selecting the relevant events in the results, they can be added to an existing incident or used to create a new incident.
This feature helps organizations define their threat hunting both in a proactive hunting scenario and in a reactive, post-breach scenario where the hunters assist analysts and responders through a simplified process.
How to link the data to an incident
To be able to link the data, you need to have the following columns in the output:
Timestamp
DeviceId/AccountObjectID/AccountSid/RecipientEmailAddress (Depending on query table)
ReportId
Develop and run the query
Please note that you cannot have multiple queries in the query window when linking to an incident.
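Here is a minimal sketch of a device-based query that satisfies these requirements; the folder path is just the illustrative one from earlier:

```kql
// Returns the columns required for linking: Timestamp, ReportId and DeviceId.
DeviceProcessEvents
| where Timestamp > ago(1d)
| where FolderPath startswith @"C:\AttackTools"
| project Timestamp, ReportId, DeviceId, DeviceName, FileName, ProcessCommandLine
```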
Choose to create a new incident or link to an existing one
Add the necessary details and click Next. Select the impacted entities. After finishing the wizard, the data will end up in a new alert in the incident.
Last tip
Run a quick check in your environment to see if you have remote internet-based logon attempts on your devices by looking for RemoteIPType == “Public”. There are other scenarios where RemoteIPType is useful, such as finding processes communicating with the Internet.
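A quick sketch of such a check, assuming you are interested in remote interactive logons (column names as in DeviceLogonEvents):

```kql
// Remote logons from public IP addresses over the last 7 days.
DeviceLogonEvents
| where Timestamp > ago(7d)
| where RemoteIPType == "Public"
| where LogonType == "RemoteInteractive"
| project Timestamp, DeviceName, AccountName, RemoteIP, LogonType
```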
As announced by Microsoft last week, Download quarantined files is now generally available.
This makes it simpler for SecOps to download quarantined files for further analysis.
So, why do SecOps want to download files?
One reason could be that they want to do forensic analysis on the file to see if the response actions taken were sufficient, or to extract indicators they can hunt for.
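For example, if you extract a file hash from the downloaded sample, a sketch of a follow-up hunt could look like this (the hash value is a placeholder):

```kql
// Hunt for an indicator (SHA1) extracted from the downloaded file.
let suspectSha1 = "0000000000000000000000000000000000000000"; // placeholder value
DeviceFileEvents
| where SHA1 == suspectSha1
| project Timestamp, DeviceName, ActionType, FileName, FolderPath
```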
The feature is configured under advanced features and is enabled by default.
Devices must be running Windows 10 version 1703 or later, or Windows Server 2016 or 2019
The file download is available from multiple pages in Defender.
It's also visible on the file page. The reason for offering the download option on multiple pages is to avoid having to switch views, so we can take the action right where we are in the portal.
Update
The option to set a password for the file download makes it safer and also prevents the file from being detected during download.
We have been able to use Live Response for some time now. It’s really great and we can take the response actions we find necessary and download data from the endpoint through the browser session.
Here is a very high-level view of how the architecture looks for the live response feature.
Some things may be difficult today with the limitations of a single session: we can only connect to one machine at a time, and automation does not apply to a browser session.
If a machine is compromised in any way, the browser session is useful, but if we want to automate the responses or run the same custom playbook for multiple devices, we need to use the API.
The API can be used both to collect necessary artefacts from devices, and also take remediation actions.
At some events, we've presented how to use Live Response to dump memory and export the .dmp files to Azure Storage, as an example of how powerful it is.
Requirements and limitations
Rate limitations for this API are 10 calls per minute (additional requests receive an HTTP 429 response).
25 concurrently running sessions (requests exceeding the throttling limit will receive a “429 – Too many requests” response).
If the machine is not available, the session will be queued for up to 3 days.
The RunScript command times out after 10 minutes.
Live response commands cannot be queued up and can only be executed one at a time.
If the machine you are trying to run this API call against is in an RBAC device group that does not have an automated remediation level assigned, you will need to enable at least the minimum remediation level for that device group.
Multiple live response commands can be run on a single API call. However, when a live response command fails all the subsequent actions will not be executed.
Minimum Requirements
Before you can initiate a session on a device, make sure you fulfill the following requirements:
Verify that you’re running a supported version of Windows.Devices must be running one of the following versions of Windows
Sometimes you might want to split the timestamp of an event into smaller pieces, like month, day, hour, etc.
For instance, you might want to see if you have more alerts during some specific hours of the day, or if anyone is using RDP in the middle of the night.
To achieve this we use the function datetime_part, which can split the timestamp into the following parts:
Year
Quarter
Month
week_of_year
Day
DayOfYear
Hour
Minute
Second
Millisecond
Microsecond
Nanosecond
This data could, of course, be used for further analysis and joined with other events.
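As a sketch, here is how datetime_part could be used for the RDP-at-night example above (table and logon type as in Microsoft 365 Defender):

```kql
// Bucket RDP logons by hour of day to spot activity at unusual hours.
DeviceLogonEvents
| where LogonType == "RemoteInteractive"
| extend HourOfDay = datetime_part("Hour", Timestamp)
| summarize Logons = count() by HourOfDay
| sort by HourOfDay asc
```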
One of the benefits of using a cloud service backend instead of on-prem appliance boxes is that we get new features without doing anything, except possibly enabling them, depending on the feature.
One feature I like is the “flag event” feature in the timeline.
In the machine timeline view there is a “flag” we can enable on each event we find interesting. This will make it easier to go back and further investigate suspicious activities.
In the overview we can see where the flags are located in the timeline and, if we want, we can also filter on flagged events.
As companies raise their bars and protect more and more accounts with multi-factor authentication, attackers are twisting toward new angles. The method of using application consent is nothing new, but attackers haven't needed to use it, as a stolen password normally means less friction.
So what is application consent? Application consent is a way to grant applications the permissions they need to access your data in order to perform their specific task. An example could be a travel app that needs to read your travel itinerary so that it can automatically update your calendar with flight data or other information.
I am sure everyone has seen an app consent screen
Source: Microsoft docs
Kevin Mitnick did a malicious ransomware PoC using application consent roughly two years ago; feel free to watch the demo at the YouTube link.
Application Control Protection
The first thing you should ask yourselves: do you allow your users to grant permissions to their data themselves, or have you as an organization taken this control centrally?
These settings can be configured under Azure Active Directory.
First off: can your users register applications themselves, or is this under central control?
AAD > User Settings > Enterprise Applications
If you do not allow this, users will never be able to grant an app consent either. If you do allow it, you can control how much data they can share and under what circumstances; you will find a few options in the detailed settings below.
AAD > Enterprise Applications > User Settings
AAD > Enterprise Applications > Consent and Permissions > User Consent Settings (Preview at the time of writing)
Do not allow user consent
Allow user consent for apps from verified publishers
Allow user consent for Apps
Allowing users to approve apps puts you at risk, as they can be lured into accepting an application consent. This is sensitive not only from a security threat perspective, but also from a privacy/secrecy perspective, where third-party apps, malicious or not, are for example granted access to PII or customer data.
Here you need to find the balance between control and risk based on how much you can detect. With “Allow user consent for apps from verified publishers” you also have the option to control which data and methods are being granted. Note that offline_access is something you need to review thoroughly, as it opens up your exposure.
Another possibility is to use admin consent requests; in this case, a user can request a consent that an admin has to review and approve or deny.
AAD > Enterprise Applications > User Settings
Application Control Detection
There are a few ways to see and detect application consents: either you create a manual process to review them on a schedule, or you use the tools you have at hand. Below are some examples of what you can use, depending on how you are licensed and how you have integrated your logs.
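For example, if your Azure AD audit logs are integrated into a Log Analytics workspace, a starting point could be a query like the following sketch (the operation name may vary over time):

```kql
// Recent "Consent to application" operations from the Azure AD audit logs.
AuditLogs
| where TimeGenerated > ago(30d)
| where OperationName == "Consent to application"
| project TimeGenerated, OperationName, Result, InitiatedBy, TargetResources
```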
So what can you do if you find applications that you suspect are doing malicious activities or are putting your data at risk?
You have a few options. Start with documenting and building a timeline of all the activities you are taking; it's easy to forget details when you need to go back in time.
Block Sign-in to Application
Remove Users from the Application
Remove the Application Completely
Ban/Block Application in MCAS
Review Permissions under the App in AAD
I wouldn’t recommend removing the app until your investigations is complete, id rather block the Login. Depending on that tools you have you can start going through your audit logs in relation to this app.