Another feature in hunting, which will speed up responses in a threat hunting scenario, is Take Action.
When selecting a record in the result, the Take Action button becomes visible, as seen in the picture below.
So instead of just creating a new incident or adding events to an existing incident we can take actions from the hunting experience.
In the Take actions experience we have actions grouped by Devices, Files and Users.
The action options available depend on the data in the result. For instance, file information like a checksum is required to be able to quarantine a file.
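As a rough illustration (this query is not from the original post, and the filter values are just placeholders), keeping the file identifiers in the output is what makes file actions such as quarantine available:

```
// Illustrative sketch: keep file hashes in the result so file actions can be offered
DeviceFileEvents
| where Timestamp > ago(1d)
| where ActionType == "FileCreated"
| where FileName endswith ".exe"
| project Timestamp, DeviceId, ReportId, DeviceName, FileName, FolderPath, SHA1, SHA256
```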
When clicking Next, we can see the selected target, and we click Next again.
We can add a Remediation name and Description for our action
This feature puts rapid response at the fingertips of the threat hunters, enabling immediate actions.
At Ignite, Microsoft announced a new set of features in Advanced Hunting in Microsoft 365 Defender.
These features will definitely help you in the Threat Hunting process and also reduce the gap between analysts, responders and threat hunters and simplify the life of a threat hunter.
Multi-tab support
When I hold hunting training classes, I usually recommend using multiple browser tabs: one for query development, and one for going back to previous queries to see how things were done earlier.
For example, if you are developing a hunting query and need an if statement, external data, a regex or other more advanced features, it is easier to just open a previous query to see how it was solved last time, at least until you get more fluent in KQL. This avoids having to save your new query, go back to the old one, and then back to the new one again.
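As a rough sketch of the kind of constructs mentioned above, an if-style expression with iif() and a regex with extract() could look like this (the command-line pattern and threshold values are illustrative assumptions, not a recommended detection):

```
// Illustrative only: flag encoded PowerShell and pull out the encoded blob with a regex
DeviceProcessEvents
| where Timestamp > ago(1d)
| where FileName =~ "powershell.exe"
| extend IsEncoded = iif(ProcessCommandLine contains "-enc", "Yes", "No")
| extend EncodedBlob = extract(@"-enc\s+(\S+)", 1, ProcessCommandLine)
| take 20
```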
With the multi-tab support we can open the query in a new tab
Resource usage
The new hunting page now shows the query's execution time and an indicator of its resource usage.
This will make it easy to see when query optimization is recommended or needed. You could, for example, use equals or has instead of contains, or remove unused columns to reduce the dataset. Of course, only when it's feasible.
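As a rough sketch of those tips (the table and filter values are just examples), compare a broad contains filter with a time-bounded, term-based has filter and a trimmed column set:

```
// Heavier: unindexed substring scan across the whole table
DeviceProcessEvents
| where ProcessCommandLine contains "powershell"

// Lighter: time-bounded, term-based lookup, and only the columns we need
DeviceProcessEvents
| where Timestamp > ago(7d)
| where ProcessCommandLine has "powershell.exe"
| project Timestamp, DeviceName, FileName, ProcessCommandLine
```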
If you would like to learn more about how to optimize queries, please visit:
Schema, Functions, Queries and Detection Rules have been separated into tabs, which in my opinion gives easier access and pivoting, and a better overview in each tab.
Schema Reference
The schema reference will open as a side pane
When looking at one of the *events tables, the ActionType column is very useful to see which events are being logged. Earlier, I usually selected distinct ActionType in the query to have a look at the events being logged. Now, it’s possible to use the quick access from the portal to expand all action types for a specific table.
The image above shows the action types for DeviceFileEvents. In DeviceEvents there are around 180 different action types to query.
For hunting query development and hunting use cases, the action types are a great go-to resource.
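The same list can still be pulled with a quick query if you prefer; a minimal example:

```
// List the distinct action types logged in a table
DeviceEvents
| distinct ActionType
| sort by ActionType asc
```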
The columns in the schema reference are clickable and can easily be added to the query.
Simple query management
Inspect record
The inspect record pane is an easy way to see the data for one single row. When developing new queries I usually take a subset of data (take/limit 20) to see an overview of the results, and also select an event to see all data instead of side scrolling through all columns when needed.
A new feature in inspect record is that we can apply quick filters which will be added to the query.
In this example we would like to know more about process executions from the C:\AttackTools folder
If we would like other pre-defined FolderPath filters, we can select View more filters for FolderPath. We can then continue the query development and, as in the example below, get the count for each file in the folder specified in the query.
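A minimal sketch of that query, using the example folder from above (the time range is an assumption), could look like this:

```
// Count process executions per file from the example folder
DeviceProcessEvents
| where Timestamp > ago(30d)
| where FolderPath startswith @"C:\AttackTools\"
| summarize Executions = count() by FileName
| sort by Executions desc
```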
Last but definitely not least… Link the query results to an incident
This is my favorite; it will reduce the gap and simplify the process between threat hunters, responders, and analysts.
By selecting the relevant events in the result, they can be added to an existing incident, or used to create a new incident.
This feature will help organizations define their threat hunting both in a proactive hunting scenario and in a reactive, post-breach scenario, where the hunters assist analysts and responders with a simplified process.
How to link the data to an incident
To be able to link the data, you need to have the following columns in the output:
Timestamp
DeviceId/AccountObjectID/AccountSid/RecipientEmailAddress (Depending on query table)
ReportId
Develop and run the query
Please note, you cannot have multiple queries in the query window when linking to an incident.
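For illustration, a linkable query could look like the sketch below; the detection logic itself is just an assumed example, the important part is that Timestamp, DeviceId and ReportId are kept in the output:

```
// Example only: Office spawning PowerShell, with the required link columns projected
DeviceProcessEvents
| where Timestamp > ago(7d)
| where InitiatingProcessFileName =~ "winword.exe" and FileName =~ "powershell.exe"
| project Timestamp, DeviceId, ReportId, DeviceName, FileName, ProcessCommandLine
```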
Choose to create a new incident or link to an existing one.
Add the necessary details and click Next. Select the impacted entities. After finishing the wizard, the data will end up in a new alert in the incident.
Last tip
Run a quick check in your environment to see if you have remote internet-based logon attempts on your devices by looking for RemoteIPType == "Public". There are other scenarios where RemoteIPType is useful as well, like processes communicating with the Internet.
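A quick sketch of both checks (time ranges and grouping columns are just examples):

```
// Remote interactive logon attempts from public IP addresses
DeviceLogonEvents
| where Timestamp > ago(7d)
| where RemoteIPType == "Public" and LogonType == "RemoteInteractive"
| summarize Attempts = count() by DeviceName, RemoteIP, AccountName
| sort by Attempts desc

// Processes communicating with internet-based hosts
DeviceNetworkEvents
| where Timestamp > ago(1d)
| where RemoteIPType == "Public"
| summarize Connections = count() by InitiatingProcessFileName, RemoteUrl
| sort by Connections desc
```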
As announced by Microsoft last week, the Download quarantined files feature is now generally available.
This makes it simpler for SecOps to download quarantined files for further analysis.
So, why do SecOps want to download files?
One reason could be that they want to do forensic analysis on the file to see if the response actions taken were enough, or to extract indicators which they can hunt for.
The feature is controlled under advanced features and is enabled by default.
Devices must run Windows 10 version 1703 or later, or Windows Server 2016 or 2019.
The file download is available from multiple pages in Defender.
It's also visible on the file page. The reason we want the download option on multiple pages is to avoid having to switch views and to be able to take actions from wherever we are in the portal.
Update
The ability to set a password for the file download makes it safer and also avoids the file being detected during download.
We have been able to use Live Response for some time now. It’s really great and we can take the response actions we find necessary and download data from the endpoint through the browser session.
Here is a very high-level view of how the architecture looks for the live response feature.
Some things may be difficult today with the limitations of a single session: we can only connect to one machine at a time, and automation does not apply to a browser session.
If a single machine is compromised, the browser session is useful, but if we want to automate the responses or run the same custom playbook for multiple devices, we need to use the API.
The API can be used both to collect necessary artefacts from devices, and also take remediation actions.
At some events, we've presented how to use Live Response to dump memory and export the .dmp files to Azure storage, as an example of how powerful it is.
Requirements
Requirements and limitations
Rate limitations for this API are 10 calls per minute (additional requests are responded with HTTP 429).
25 concurrently running sessions (requests exceeding the throttling limit will receive a “429 – Too many requests” response).
If the machine is not available, the session will be queued for up to 3 days.
RunScript command timeouts after 10 minutes.
Live response commands cannot be queued up and can only be executed one at a time.
If the machine that you are trying to run this API call on is in an RBAC device group that does not have an automated remediation level assigned to it, you'll need to at least enable the minimum Remediation Level for that Device Group.
Multiple live response commands can be run on a single API call. However, when a live response command fails all the subsequent actions will not be executed.
Minimum Requirements
Before you can initiate a session on a device, make sure you fulfill the following requirements:
Verify that you’re running a supported version of Windows.Devices must be running one of the following versions of Windows
As companies raise the bar and protect more and more accounts with multi-factor authentication, the attacks are twisting into new angles. The method of using Application Consent is nothing new, but attackers haven't had much need for it, as a stolen password normally means less friction.
So what is Application Consent? Application consent is a way to grant applications the permissions they need to access your data in order to perform their specific task. An example could be a travel app that needs to read your travel itinerary so that it can automatically update your calendar with flight data or other information.
I am sure everyone has seen an app consent screen.
Source: Microsoft docs
Kevin Mitnick did a malicious ransomware PoC based roughly on Application Consent around two years ago; feel free to watch the demo via the YouTube link.
Application Control Protect
The first thing you should ask yourselves: do you allow your users to grant permissions to their data themselves, or have you as an organization taken this control centrally?
The settings can be configured under your Azure Active Directory
First off, can your users register applications themselves, or is this under central control?
AAD > User Settings > Enterprise Applications
So if you do not allow this, the users would never be able to allow an app consent either. But if you do, you can control how much data they can share and under what circumstances; you will find a few options in the detailed settings below.
AAD > Enterprise Applications > User Settings
AAD > Enterprise Applications > Consent and Permissions > User Consent Settings (Preview at the time of writing)
Do not allow user consent
Allow user consent for apps from verified publishers
Allow user consent for Apps
Allowing users to consent to apps will put you at risk, as they can be lured into accepting an application consent. This is not only sensitive from a security threat perspective, but also from a privacy/secrecy perspective, where third-party apps, malicious or not, are for example being granted access to PII or customer data.
Here you need to find the balance between control and risk, and how much you can detect. With "Allow user consent for apps from verified publishers" you also have the option to control what data and methods are being granted. Note that offline_access is something you need to review thoroughly, as it opens up your exposure.
Another possibility is to use Admin Consent Requests; in this case a user can request a consent that an admin will have to review and approve or deny.
AAD > Enterprise Applications > User Settings
Application Control Detection
There are a few ways to see and detect Application Consent: either you create a manual process to review this on a schedule, or you use the tools you have at hand. Some examples of what you can use are below, depending on how you are licensed and how you have integrated your logs.
So what can you do if you find applications that you suspect are doing malicious activities or are putting your data at risk?
You have a few options. Start with documenting and building a timeline of all the activities you are taking; it's easy to forget when you need to go back in time.
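For example, if your Azure AD audit logs are connected to a Log Analytics workspace or Azure Sentinel, a simple consent review query could look like this sketch (the column parsing is an assumption about the dynamic fields, adjust as needed):

```
// Review application consent grants in the Azure AD audit log
AuditLogs
| where TimeGenerated > ago(30d)
| where OperationName == "Consent to application"
| extend AppName = tostring(TargetResources[0].displayName)
| extend ConsentedBy = tostring(InitiatedBy.user.userPrincipalName)
| project TimeGenerated, AppName, ConsentedBy, Result
```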
Block Sign-in to Application
Remove Users from the Application
Remove the Application Completely
Ban/Block Application in MCAS
Review Permissions under the App in AAD
I wouldn't recommend removing the app until your investigation is complete; I'd rather block the sign-in. Depending on the tools you have, you can start going through your audit logs in relation to this app.
One thing we usually discuss with customers is the workload. Everyone has too much to do, and it can sometimes be difficult to prioritize investigations. Especially now, when you might be short on staff and the Covid-19 virus can strike at the SOC organization or reduce the number of available people. Of course, this does not only apply during the world crisis of Covid-19. Automation is also a help in the normal day-to-day work.
There are benefits to being able to automate responses, and we have these discussions with many customers.
MDATP
Automatic self-healing is built into Defender ATP and mimics the ideal steps a human would take to investigate and remediate organizational assets impacted by a cyber threat. This is done using 20 built-in investigation playbooks and 10 remediation actions.
Increased Capacity
Respond at the speed of automation
Investigate and remediate all alerts automatically
Free up critical resources to work on strategic initiatives
Cost implications
It will drive down the cost per investigation and remediation
Takes away manual, repetitive tasks
Automated remediation eliminates downtime
Get full value of your protection suite and people; quick configuration and you are up and running
SecOps Investigation (Manual)
Sometimes it will take some time from the alert being triggered until someone has the time to start looking at it. Manual work also requires more resources for review and approval of each action.
From a SecOps perspective, an initial response involves information gathering.
Collecting:
Process list
Services
Drivers
Network connections
Files created
Where did the file originate from?
etc
Based on our results, we will decide the remediation steps (if we do not follow a playbook here, the catch is that the result will differ depending on who performs the response).
Remediation:
The remediation will include connecting remotely or manually collecting the device, and then launching tools for the remediation process.
Automatic response with Auto IR
Fast time to respond, which will avoid additional damage and compromise of additional devices when attackers start moving laterally in the environment.
It's our 24/7 buddy who assists the SOC staff in remediating threats so the human staff can focus on other things.
MDATP sends telemetry data to the cloud
The MDATP cloud continuously analyzes the data to detect threats
Once a threat is identified, an alert is raised
The alert kicks off a new automated investigation
The AIRS component asks the Sense client to initiate SenseIR
SenseIR is then orchestrated by AIRS on what action should be executed (Collection/Remediation)
Based on the data collected from the machine (current and historical), AIRS decides what actions should be taken
For every threat identified, AIRS will automatically analyze the best course of action and tailor a dedicated surgical remediation action to be executed using on-device components (e.g. Windows Defender Antivirus)
Playbook is executed
The "suspicious host" playbook is just an example of a "catch all" playbook that is applied after the detailed AutoIR investigation of evidence raised by alerts/incidents, to ensure that nothing is missed.
Data Collection
Volatile data
All processes list – main image, loaded modules, handles, suspicious memory sections
All services list
All drivers list
All connections
Non-volatile data
Recently created files – x minutes before / after the alert
All persistence methods
Recently executed files
Download location
Incrimination
Microsoft Security Graph ecosystem – DaaS, AVaaS, TI, TA, Detection engine, ML infrastructure etc.
Custom TI indicators – for allow / block list
Remediation
How?
By leveraging OS components (e.g. Defender Antivirus) to perform the remediation (prebuilt into the system, low-level actions (driver), tried and tested)
What?
File actions
Process actions
Service actions
Registry actions
Driver actions
Persistence methods (Reg, link files, etc.) actions
Azure Sentinel—the cloud-native SIEM that empowers defenders is now generally available
Some of the new features are:
Workbooks are replacing dashboards, providing for richer analytics and visualizations
New Microsoft and 3rd party connectors
Detection and hunting:
Out of the box detection rules: The GitHub detection rules are now built into Sentinel.
Easy elevation of MTP alerts to Sentinel incidents.
Built-in detection rules utilizing the threat intelligence connector.
New ML models to discover malicious SSH access and to fuse identity and access data to detect 35 unique threats that span multiple stages of the kill chain. Fusion is now on by default and managed through the UI.
Template playbooks now available on Github.
New threat hunting queries and libraries for Jupyter Notebooks
Incidents:
The interactive investigation graph is now publicly available.
Incidents support for tagging, comments, and assignments, both manually and automatically using playbooks.
The 2019 version of the Gartner Magic Quadrant clearly shows that Microsoft is in the game to provide an extremely powerful endpoint protection platform (EPP). Microsoft is named a Leader!
With powerful built-in capabilities that tie into Protect, Detect and Respond, they have given us great tools for our security work.
Microsoft is unique in the EPP space, as it is the only vendor that can provide built-in endpoint protection capabilities tightly integrated with the OS. Windows Defender Antivirus (known as System Center Endpoint Protection in Windows 7 and 8) is now a core component of all versions of the Windows 10 OS, and provides cloud-assisted attack protection.
Microsoft Defender Advanced Threat Protection (ATP) provides an EDR capability, monitoring and reporting on Windows Defender Antivirus and Windows Defender Exploit Guard (“Exploit Guard”), vulnerability and configuration management, as well as advanced hardening tools.
The Microsoft Defender ATP incident response console consolidates alerts and incident response activities across Microsoft Defender ATP, Office 365 ATP, Azure ATP and Active Directory, as well as incorporates data sensitivity from Azure information protection.
Microsoft is much more open to supporting heterogeneous environments and has released EPP capabilities for Mac. Linux is supported through partners, while native agents are on the roadmap.
Microsoft has been placed in the Leaders quadrant this year due to the rapid market share gains of Windows Defender Antivirus (Defender), which is now the market share leader in business endpoints.
In addition, excellent execution on its roadmap makes it a credible replacement for competitive solutions, particularly for organizations looking to reduce complexity.
Gartner
The insights and protection these tools provide, together with the ability to use built-in SOAR capabilities, give security teams around the globe a better and much faster understanding of attacks, for a much faster response.
Many features like Exploit Protection, Network Protection, Attack Surface reduction, Firewall and more will provide a more reliable platform which is easy to manage.
The enriched alerts and incidents give security teams a chance to put their effort into the critical incidents and avoid spending time fighting the noise across different tools and manual tasks.
Automated investigations
Build your playbooks
Take back the control with live response
We also have the threat and vulnerability management feature, which gives you visibility into vulnerable software in your estate.
A few days ago, a post on Medium stated that arbitrary code execution was possible in Squirrel, which affected Teams and other applications that use Squirrel and NuGet for updates.
In the post, Teams is mentioned as an example, but other affected applications were mentioned on Twitter.
So, to see what our environment is up to with regards to this, we go to our favorite place: Defender ATP – Advanced Hunting!
To explain the query: since there are apps other than Teams which use Squirrel, we aim to keep the query as broad as we can.
Since some applications use Squirrel and the web for updates, we can't simply say that all web requests are malicious. But we have done some verification and discovered many apps vulnerable to this.
To make it easier to get an overview, we add the URL to a column.
To continue, we can count unique URLs to find anomalies.
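The original queries were shown as screenshots; a rough reconstruction of the idea (the file names, time range and regex are illustrative assumptions) might look like this:

```
// Squirrel-based updaters launched with a URL on the command line,
// with the URL extracted into a column and counted to spot anomalies
DeviceProcessEvents
| where Timestamp > ago(7d)
| where FileName in~ ("update.exe", "squirrel.exe")
| where ProcessCommandLine has "http"
| extend Url = extract(@"(https?://\S+)", 1, ProcessCommandLine)
| summarize Executions = count() by Url, InitiatingProcessFileName
| sort by Executions desc
```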