Latest Posts

CVE-2017-0290 – RCE in The Microsoft Malware Protection Engine

Last Friday, Tavis Ormandy and Natalie Silvanovich reported that they had discovered “the worst Windows remote code exec in recent memory”.

The vulnerability was reported to Microsoft who released an advisory: https://technet.microsoft.com/library/security/4022344.aspx

The good news: no action is required by enterprise administrators if the default configuration is kept, as malware definitions and the Malware Protection Engine are updated automatically.

Otherwise, patch now!

From the advisory:

Why is no action required to install this update?
In response to a constantly changing threat landscape, Microsoft frequently updates malware definitions and the Microsoft Malware Protection Engine. In order to be effective in helping protect against new and prevalent threats, antimalware software must be kept up to date with these updates in a timely manner.

For enterprise deployments as well as end users, the default configuration in Microsoft antimalware software helps ensure that malware definitions and the Microsoft Malware Protection Engine are kept up to date automatically. Product documentation also recommends that products are configured for automatic updating.

CVE ID: CVE-2017-0290
Vulnerability Title: Scripting Engine Memory Corruption Vulnerability
Exploitability Assessment for Latest Software Release: 2 – Exploitation Less Likely
Exploitability Assessment for Older Software Release: 2 – Exploitation Less Likely
Denial of Service Exploitability Assessment: Not applicable


To exploit this vulnerability, a specially crafted file has to be scanned by the system. The file can be delivered in numerous ways: via the web, as an email attachment, and so on.

The real-time scan will automatically scan files, and this functionality is not something you should disable.
The real-time scan also runs on file shares, so this vulnerability does not only apply to clients.

Affected products

All of the following antimalware products are rated Critical (Remote Code Execution) for the Microsoft Malware Protection Engine Remote Code Execution Vulnerability (CVE-2017-0290):

  • Microsoft Forefront Endpoint Protection 2010
  • Microsoft Endpoint Protection
  • Microsoft Forefront Security for SharePoint Service Pack 3
  • Microsoft System Center Endpoint Protection
  • Microsoft Security Essentials
  • Windows Defender for Windows 7
  • Windows Defender for Windows 8.1
  • Windows Defender for Windows RT 8.1
  • Windows Defender for Windows 10, Windows 10 1511, Windows 10 1607, Windows Server 2016, Windows 10 1703
  • Windows Intune Endpoint Protection


Actions:

  • Verify that the update is installed
  • If necessary, install the update

For further information:

https://bugs.chromium.org/p/project-zero/issues/detail?id=1252&desc=5


Security Best Practice for Active Directory

Securing Active Directory is really important.

We still see help desk staff being added to the Domain Admins group, and admins elevating to their DA account to run PowerShell, RSAT, and similar tools on the same device they use to download software, browse the internet, and do basically everything else in their day-to-day work.

In the past, Domain Admins was the easy way to manage almost everything: Exchange, users, systems running on member servers, servers (I’ve even seen domain controllers), and service accounts have all been added to the Domain Admins group. The simple reason for this was “It just works and it’s easy”, or the worst phrase of all: “We have always done it this way”.

Compromised credentials on servers or computers used for day-to-day administrative tasks are a common way to get the keys to the kingdom and the high-value assets every company tries so hard to protect.


If you have the time and want to provide proper AD security for your environment, Microsoft has published a best-practice guide to securing Active Directory:

https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/plan/security-best-practices/best-practices-for-securing-active-directory

Guide on how to Configure SQL Server Index Optimization with Ola Hallengren’s Maintenance Solution

Jörgen Nilsson and I presented this at TechEd North America, and we thought it would be a good idea to share this information in a more written form.

The purpose of this guide is to give non-database administrators a quick start on running SQL Server index optimization.

To maintain performance in a database, it is most often recommended to maintain its indexes. All applications have different support statements and best practices, so please review the specific application before you implement this. You also need to consider that index maintenance will generate a lot of transaction log activity, so keep an eye on disk space and make sure you schedule backups after the tasks have run.
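
As a quick way to keep an eye on that transaction log growth, the built-in DBCC SQLPERF command reports log size and usage per database; run it before and after the maintenance jobs:

-- Reports log size (MB) and log space used (%) for every database on the instance
DBCC SQLPERF (LOGSPACE);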

If you don’t maintain your indexes for System Center Configuration Manager, your system will eventually become slow and suffer from performance issues like slow updates of collections, delayed status messages, and so on. So it’s vital to maintain your indexes. To get you started, I have written a step-by-step guide to implementing a great community script from Ola Hallengren that does this.

I highly recommend that you read the FAQ before you proceed.

http://ola.hallengren.com/frequently-asked-questions.html

First off, you can run the IndexCheck script against the database you suspect has a high degree of fragmentation to display the actual fragmentation. The performance busters are most likely the results with both high page counts and high fragmentation.

1. Start SQL Server Management Studio and log in.

2. Open the IndexCheck script. I usually add the extra statements below to Ola’s script.

To point out the database to run against:

USE CM_S01

To sort the output in a more structured way, by fragmentation level, I add:

ORDER BY AvgFragmentationInPercent DESC

3. Execute the script and it will show you the fragmentation level of the indexes; the AvgFragmentationInPercent column displays the level for each index.
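
For reference, a minimal standalone query in the same spirit, built on the sys.dm_db_index_physical_stats DMV (the CM_S01 database and the 1000-page threshold are just example values), looks like this:

USE CM_S01;

-- List indexes by fragmentation, worst first; small indexes are filtered
-- out because maintaining them rarely improves performance
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name AS IndexName,
       ips.avg_fragmentation_in_percent AS AvgFragmentationInPercent,
       ips.page_count AS PageCount
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id
WHERE ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;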

4. OK, so now we see that we have fragmentation on our indexes. SQL Server offers three methods of maintaining indexes, and the solution will automatically choose between them depending on the fragmentation level; if you have SQL Server Enterprise, the “Rebuild Online” option will also be available.

  • Reorganize (a kind of defragmentation)
  • Rebuild Online (new indexes are created and rebuilt while staying available, NOTE: only available in SQL Server Enterprise)
  • Rebuild Offline (same as Rebuild Online, but the index is unavailable while it is rebuilt)

5. Before we install the Maintenance Solution, we want to create a database in SQL Server that we can use for logging the activities and saving our tasks, so we don’t put this information in any of the other databases.

6. In SQL Server Management Studio, create a new database.

7. Give it a name and set the size you want the database to have.

8. Under Options, change the Recovery Model to Simple and press OK.
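
If you prefer T-SQL over the GUI, a minimal equivalent of steps 6–8 (assuming the SQLMaintenance database name used later in this guide, and default size settings) would be:

-- Create the logging database and switch it to simple recovery
CREATE DATABASE SQLMaintenance;
ALTER DATABASE SQLMaintenance SET RECOVERY SIMPLE;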

9. Now that we have created a database to use for the solution, we need to start the SQL Server Agent. Right-click SQL Server Agent and select Start.

10. Once it’s started, we can open MaintenanceSolution.sql and make the following edits (see the sketch below):

  • Change master to the database name you created earlier, in my case SQLMaintenance.
  • For the other values, I suggest you change the backup path to where you want to store the backup files; for the rest I usually keep the defaults.
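
As a sketch only, assuming the variable names used in recent versions of MaintenanceSolution.sql (check the copy you downloaded) and a placeholder backup path, the edits look like this:

USE [SQLMaintenance]  -- was: USE [master]; objects and jobs will be created here

-- Further down, among the configuration values:
SET @BackupDirectory = N'D:\SQLBackup'  -- placeholder path; point to your backup location
SET @LogToTable = 'Y'                   -- keep logging enabled; used by the CommandLog table in step 17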

11. Execute the script and verify that it ran successfully and that the jobs were created under SQL Server Agent.

12. Now on to the IndexOptimize job, which we need to configure.

  • By default the job runs on all the user databases on the server, so if you want to run it only on specific databases you need to configure that.
  • You also need to set it up on a schedule; my recommendation is a weekly schedule at a time when the server is not under heavy load.

NOTE: The first time you run the task, I highly suggest that you allow plenty of time; we have seen this task take anywhere from minutes to 10+ hours when the indexes are badly fragmented, even on a SQL box that performs really well. So plan for that initial execution. Once it runs on a regular basis, it will not take that long each time.

13. To change the databases that the IndexOptimize task runs on, right-click the job and select Properties. In the properties, go to the Steps section and select Edit on the step.

14. Change the USER_DATABASES value to the databases you want to run this against.

So if I’d like the job to run against only CM_S01 and SUSDB, I would change the value to CM_S01, SUSDB. There are more options to choose from if you’d like to run against all databases and perhaps just exclude one; the exclude options are very well explained on the solution web page. http://ola.hallengren.com/sql-server-index-and-statistics-maintenance.html
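
As a sketch, the edited job-step command then looks something like this; by default the step calls the IndexOptimize procedure with @Databases = 'USER_DATABASES', and the database names below are the examples from above:

EXECUTE dbo.IndexOptimize
    @Databases = 'CM_S01, SUSDB',  -- run only against these two databases
    @LogToTable = 'Y'              -- log each command to dbo.CommandLog

-- To run against all user databases except one, a minus prefix excludes it:
-- @Databases = 'USER_DATABASES, -SUSDB'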

15. To set up a schedule, navigate to the Schedules section and click New.

16. Set up the schedule you want, and plan it so that it doesn’t interfere with any of your other maintenance tasks, like backups. Simply give it a name and configure the desired schedule.

17. As we enabled the solution to log to a table, we can explore what actions it has taken by simply reviewing the CommandLog table. To show what has been going on in my environment, run the command below. Here we can track how long index optimization takes, which indexes we frequently hit, and so forth.

USE SQLMaintenance
SELECT * FROM dbo.CommandLog
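
Building on that, the StartTime, EndTime, and CommandType columns of the CommandLog table let you pull out the slowest index operations first; the ALTER_INDEX filter below is just an example choice:

USE SQLMaintenance;

-- Show index maintenance commands ordered by how long they took
SELECT DatabaseName,
       ObjectName,
       IndexName,
       CommandType,
       DATEDIFF(SECOND, StartTime, EndTime) AS DurationSeconds
FROM dbo.CommandLog
WHERE CommandType = 'ALTER_INDEX'
ORDER BY DurationSeconds DESC;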

18. We also get log files on the disk drives; they are stored under the default SQL Server installation directory, under Log.

19. The solution itself also needs maintenance, so that we don’t fill the CommandLog table with years of data and the disks with years of log files. To clean up all these activities, I highly suggest you configure the additional cleanup jobs that the solution creates. You may want to adjust the steps in the jobs to keep the activities for the period of time you want; by default, data is kept for 30 days. As a recommendation, set these jobs to run on a weekly basis as well.
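
For illustration, the table-cleanup step essentially boils down to a delete with a retention window like this (30 days, matching the default mentioned above; adjust to your preferred retention):

USE SQLMaintenance;

-- Purge CommandLog rows older than the retention window (30 days)
DELETE FROM dbo.CommandLog
WHERE StartTime < DATEADD(DAY, -30, GETDATE());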

20. So now we have scheduled and configured index optimization. It is very important that you keep track of disk space and that you schedule backups after the tasks run, as they will generate transaction logs that may fill up your disks.

A special thanks to fellow MVP Steve Thompson for inspiring me to use a separate database for the Maintenance Solution, and to Ola Hallengren for making such an awesome solution and sharing it.