One challenge with configuring the Windows Firewall (beyond the defaults of blocking inbound and allowing outbound traffic) is that we need data. Before Defender for Endpoint, we had to rely on event forwarding and similar techniques to collect firewall data from the environment.
Microsoft has recently released a Defender Firewall (Windows Firewall) report where we can see the events. This has some prerequisites in terms of audit settings (Windows audit settings, not the firewall log).
Configure firewall auditing
Enable auditing for the Filtering Platform
Always verify the potential impact in your environment before enabling new settings on all endpoints
To get the data, the device must be onboarded to MDE (Microsoft Defender for Endpoint)
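As a sketch, Filtering Platform auditing can be enabled with auditpol (run elevated; the packet-drop subcategory in particular can be very noisy on busy endpoints, so test the impact first):

```powershell
# Audit allowed connections through the Windows Filtering Platform
auditpol /set /subcategory:"Filtering Platform Connection" /success:enable /failure:enable

# Optionally audit dropped packets as well (high event volume, enable with care)
auditpol /set /subcategory:"Filtering Platform Packet Drop" /success:enable /failure:enable
```

In a managed environment you would typically roll these settings out via Group Policy or Intune rather than running auditpol per device.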
What’s described in this post is no longer applicable, because the TID parameter is now added to links in the portal
We developed an extension that did exactly this, but since it is no longer needed, we won’t release it. Microsoft has updated the links in the portal to include the TID parameter, which is awesome. For people working with many customers this is really great news, and you don’t need multiple profiles either!
[Old post]
If you work with multiple customers in Microsoft 365 Defender, or work in a multi-tenant setup, you have probably noticed that you end up in the first tenant even if you change the tid parameter in the URL.
The reason this happens is that clicking links in Defender takes you to the tenant stored in a cookie, especially if the link does not include the tenant ID parameter.
It can be addressed by working with multiple browser profiles, but if you don’t want that you can just do the following
Open dev tools and go to Application, then expand Cookies. Select security.microsoft.com, right-click sccauth, and select Delete
Fine-tuning analytic rules to minimize the number of false positives can be time-consuming, and since you still want to keep high visibility, you don’t want to risk false negatives either. At the same time, managing a high number of incidents, especially false positives, is also time-consuming.
To fine-tune the analytic rules, we need historical data, the same data that was needed when developing the detection in the first place. For fine-tuning we also need the decisions made when classifying incidents, and whether those decisions were related to any specific entities.
Machine learning to the rescue
Microsoft Sentinel uses machine learning to analyze signals from the data sources, and the responses made to incidents over time, to assist and provide data for fine-tuning decisions.
Rules with fine-tuning recommendations are marked with a light bulb next to the rule name, as in the picture below.
Fine-tuning recommendations available (preview feature)
When editing the analytic rule, the Tuning insights are available in the Rule logic tab
There are several panes to scroll through, containing actionable items such as excluding accounts, IPs, and similar entities from the analytic rule
The third pane shows the importance of correctly mapped entities, since that is the only way to get results; it shows the four most frequent entities in the alerts generated by the analytic rule.
Hopefully this sheds some light on how to make your work more effective by working with your analytic rules to improve your detections.
Don’t forget to be careful and think through your exclusions to avoid losing visibility.
Another feature in hunting, which will speed up responses in a threat hunting scenario, is Take Action
When selecting a record in the result, the Take Action button becomes visible, as seen in the picture below
So instead of just creating a new incident or adding events to an existing incident we can take actions from the hunting experience.
In the Take actions experience we have actions grouped by Devices, Files and Users.
The action options available depend on the data in the result. For instance, file information such as a checksum is required to be able to quarantine a file.
When clicking Next we can see the selected target, and then click Next again
We can add a Remediation name and a Description for our action
This feature puts rapid response at the fingertips of threat hunters for immediate actions
NRT Rules are hard-coded to run once every minute and capture events ingested in the preceding minute.
This enables faster detection and response.
Considerations
No more than 20 rules can be defined per customer at this time
As this type of rule is new, its syntax is currently limited but will gradually evolve. Therefore, at this time the following restrictions are in effect:
The query defined in an NRT rule can reference only one table. Queries can, however, refer to multiple watchlists and to threat intelligence feeds.
You cannot use unions or joins.
Because this rule type is in near real time, we have reduced the built-in delay to a minimum (two minutes).
Since NRT rules use the ingestion time rather than the event generation time (represented by the TimeGenerated field), you can safely ignore the data source delay and the ingestion time latency (see above).
Queries can run only within a single workspace. There is no cross-workspace capability.
There is no event grouping. NRT rules produce a single alert that groups all the applicable events.
There is a technical limit that blocks operators such as union and join.
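As a sketch of what an NRT-compatible query can look like under these restrictions (a single table, no joins or unions; the table, event ID, and columns here are just illustrative):

```kql
// NRT-compatible sketch: one table only, no join/union, no cross-workspace
SecurityEvent
| where EventID == 4625   // failed logon, illustrative example
| project TimeGenerated, Computer, TargetAccount, IpAddress
```

Watchlist and threat intelligence lookups can still be referenced, as noted above, as long as the query stays on a single table.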
For further information about near-real-time (NRT) analytic rules, please visit:
During Ignite, Microsoft announced a new set of features in Advanced Hunting in Microsoft 365 Defender.
These features will definitely help you in the threat hunting process, reduce the gap between analysts, responders, and threat hunters, and simplify the life of a threat hunter.
Multi-tab support
When holding hunting training classes, I usually recommend using multiple browser tabs: one for query development, and one for going back to previous queries to see how things were done earlier.
For example, if you are developing a hunting query and need an if statement, external data, a regex, or other more advanced features, it is easier to just open a previous query to see how it was solved last time, at least until you become more fluent in KQL. This avoids having to save your new query, go back to the old one, and then back to the new one again.
With multi-tab support we can open the query in a new tab
Resource usage
The new hunting page now provides the resource usage for the query, both the timing and an indicator of the resource usage
This makes it easy to see when query optimization is recommended or needed. You could, for example, use equals or has instead of contains, or remove unused columns to reduce the dataset, when feasible, of course.
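As a rough sketch of that kind of optimization (using the standard DeviceProcessEvents schema; adjust to your own queries):

```kql
// Less efficient: contains performs a substring scan over the whole value
DeviceProcessEvents
| where FileName contains "power"

// More efficient: == (and has) can use indexed term matching,
// and project trims the result set to the columns actually needed
DeviceProcessEvents
| where FileName == "powershell.exe"
| project Timestamp, DeviceName, FileName, ProcessCommandLine
```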
If you would like to learn more about how to optimize queries, please visit:
Schema, Functions, Queries, and Detection Rules have been separated into tabs, which in my opinion gives easier access and pivoting, and a better overview in each tab.
Schema Reference
The schema reference will open as a side pane
When looking at one of the *Events tables, the ActionType column is very useful for seeing which events are being logged. Earlier, I usually ran a distinct ActionType query to look at the events being logged. Now it’s possible to use the quick access in the portal to expand all action types for a specific table.
The image above shows the action types for DeviceFileEvents. In DeviceEvents there are around 180 different action types to query.
For hunting query development and hunting use cases, the action types are a great go-to resource.
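The query-based approach mentioned above looks like this:

```kql
// List all action types currently being logged in DeviceEvents
DeviceEvents
| distinct ActionType
| sort by ActionType asc
```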
The columns in the schema reference are clickable and can easily be added to the query
Simple query management
Inspect record
The inspect record pane is an easy way to see the data for one single row. When developing new queries I usually take a subset of the data (take/limit 20) to get an overview of the results, and select an event to see all data instead of scrolling sideways through all the columns when needed.
A new feature in inspect record is quick filters, which are added to the query.
In this example we would like to know more about process executions from the C:\AttackTools folder
If we would like other pre-defined FolderPath filters, we can select View more filters for FolderPath. We can continue the query development and, as in the example below, get the count for each file in the folder specified in the query.
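A sketch of such a query (the folder path is the example's; adjust as needed):

```kql
// Count process executions per file from the example folder
DeviceProcessEvents
| where FolderPath startswith @"C:\AttackTools"
| summarize ExecutionCount = count() by FileName
| sort by ExecutionCount desc
```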
Last but definitely not least… Link the query results to an incident
This is my favorite; it will reduce the gap and simplify the process between threat hunters, responders, and analysts.
By selecting the relevant events in the result, they can be added to an existing incident or used to create a new incident.
This feature will help organizations define threat hunting both in a proactive hunting scenario and in a reactive, post-breach scenario where the hunters assist analysts and responders with a simplified process.
How to link the data to an incident
To be able to link the data, you need the following columns in the output
Timestamp
DeviceId/AccountObjectID/AccountSid/RecipientEmailAddress (Depending on query table)
ReportId
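For example, a device-centric query that keeps the columns required for linking might look like this (a sketch; DeviceId is the relevant entity column for the device tables, and the folder path is illustrative):

```kql
// Timestamp, ReportId, and DeviceId are required to link device results to an incident
DeviceProcessEvents
| where FolderPath startswith @"C:\AttackTools"
| project Timestamp, ReportId, DeviceId, DeviceName, FileName, ProcessCommandLine
```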
Develop and run the query
Please note that you cannot have multiple queries in the query window when linking to an incident
Choose to create a new incident or link to an existing one
Add the necessary details and click Next. Select the impacted entities. After finishing the wizard, the data will end up in a new alert in the incident
Last tip
Run a quick check in your environment to see if you have remote internet-based logon attempts on your devices by looking for RemoteIPType == “Public”. There are other scenarios where RemoteIPType is useful, such as processes communicating with the Internet.
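A quick sketch of such a check against DeviceLogonEvents:

```kql
// Logon attempts on devices arriving from public (internet-facing) IP addresses
DeviceLogonEvents
| where RemoteIPType == "Public"
| project Timestamp, DeviceName, AccountName, RemoteIP, LogonType, ActionType
```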
Sometimes real-time protection and on-demand scanning take a bit of time, and it can be difficult to see exactly what Defender is doing and what takes time.
A new set of PowerShell cmdlets has been released that lets us make a performance recording of Defender and troubleshoot performance: New-MpPerformanceRecording and Get-MpPerformanceReport.
When showing the help for the cmdlet, we can see that there are two parameters
-RecordTo <string>: The path of the output file
-Seconds <int>: Number of seconds to run the recording
The Seconds parameter is useful when running non-interactive sessions against multiple devices
Using the performance recorder
In PowerShell, we use the command New-MpPerformanceRecording and specify the output .etl file.
New-MpPerformanceRecording -RecordTo .\scan.etl
This will start the recorder
We can press Enter at any time to stop the recording, or Ctrl-C if we want to abort.
The output file is saved, and we can now open it with the cmdlet Get-MpPerformanceReport
The cmdlet allows us to look at the data in different ways
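For example (the Top* values are just a starting point, increase them for more detail):

```powershell
# Summarize the recording: the top scans, files, extensions, and processes by scan time
Get-MpPerformanceReport -Path .\scan.etl -TopScans 10 -TopFiles 10 -TopExtensions 10 -TopProcesses 10
```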
Since it’s an ETL file, we can actually open it with any ETL viewer; however, the result is not presented to us in the same way
Using PerfView as an example of opening etl files
We can see that Windows Performance Recorder is used under the hood
IMPORTANT: If you plan to use this troubleshooting to find paths for exclusions, be very careful. You might accidentally open up your device to threats. If you are not 100% certain of your exclusions, please ask for help!
As announced by Microsoft last week, the Download quarantined files feature is generally available.
This will make it simpler for SecOps to download quarantined files for further analysis.
So, why do SecOps want to download files?
One reason could be that they want to do forensic analysis on the file to see if the response actions taken were enough, or to extract indicators they can hunt for.
The feature is configured under Advanced features and is enabled by default
Devices must run Windows 10 version 1703 or later, or Windows Server 2016 or 2019
The file download is available from multiple pages in Defender
It’s also visible on the file page. The reason we want the option to download on multiple pages is to avoid having to switch views, and to be able to take actions wherever we are in the portal
Update
The ability to set a password for the file download makes it safer, and also avoids the file being detected during download