Fine-tuning analytic rules to minimize the number of false positives can be time-consuming, and you still want to keep high visibility, so you don't want to risk false negatives. At the same time, managing a high number of incidents, especially if they are false positives, is also time-consuming.
To fine-tune the analytic rules, we need historical data, just as we did when developing the detection in the first place. For fine-tuning we also need the decisions made when classifying the incidents, and whether those decisions were related to any specific entities.
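One way to gather that historical data yourself is to query the built-in SecurityIncident table for closed incidents and their classification. The query below is a minimal sketch; the 30-day lookback is an assumption, so adjust it to your retention period.

```kql
// Summarize how closed incidents were classified, per incident title.
// arg_max keeps only the latest record per incident, since the
// SecurityIncident table logs a row for every incident update.
SecurityIncident
| where TimeGenerated > ago(30d)
| where Status == "Closed"
| summarize arg_max(TimeGenerated, Classification, ClassificationReason, Title) by IncidentNumber
| summarize IncidentCount = count() by Title, Classification, ClassificationReason
| order by IncidentCount desc
```

A rule whose incidents are overwhelmingly closed as "FalsePositive" or "BenignPositive" is a natural candidate for tuning.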
Machine learning to the rescue
Microsoft Sentinel uses machine learning to analyze signals from the data sources, together with the responses made to incidents over time, to assist with and provide data for fine-tuning decisions.
Rules with fine-tuning recommendations are marked with a light bulb next to the rule name, as in the picture below.
When editing the analytic rule, the Tuning insights are available on the Rule logic tab.
There are several panes to scroll through, containing actionable items such as excluding accounts, IP addresses, and other entities from the analytic rule.
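An exclusion typically ends up as a filter inside the rule's KQL query. The sketch below assumes a sign-in detection over SigninLogs; the watchlist name BenignServiceAccounts and the IP addresses are illustrative placeholders, not real recommendations.

```kql
// Exclude known-benign accounts (kept in a watchlist so analysts can
// maintain the list without editing the rule) and known scanner IPs.
let ExcludedAccounts = _GetWatchlist("BenignServiceAccounts")
    | project UserPrincipalName;
SigninLogs
| where ResultType != 0                       // failed sign-ins only
| where UserPrincipalName !in (ExcludedAccounts)
| where IPAddress !in ("203.0.113.10", "203.0.113.11")  // example IPs (documentation range)
```

Using a watchlist for exclusions, rather than hard-coding values, also makes it easier to review and revoke exclusions later.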
The third pane shows the four most frequent entities in the alerts generated by the analytic rule, which highlights the importance of correctly mapped entities, since entity mapping is the only way to get these results.
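For entity mapping to work, the rule query has to surface the columns that the mapping refers to. A minimal sketch, assuming a failed sign-in detection; the column aliases AccountUPN and SourceIP are illustrative names that would then be mapped to the Account and IP entities in the rule's entity mapping settings.

```kql
// Project the columns used for entity mapping so Sentinel can
// attach Account and IP entities to every alert the rule raises.
SigninLogs
| where ResultType != 0
| summarize FailedCount = count()
    by AccountUPN = UserPrincipalName, SourceIP = IPAddress
| where FailedCount > 20
```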
Hopefully this sheds some light on how to make your work more effective by working with your analytic rules to improve your detections.
Don’t forget to be careful and think through your exclusions to avoid losing visibility.
For further reading, please visit