
Michael Weeks

Intrusion Detection Using PowerShell


1.2 In the late 1990s and at the turn of the millennium, Microsoft was not held in high regard with respect to security. Microsoft actually halted development while Bill Gates ushered in an era of what he called “Trustworthy Computing” (Callahan 2014). He defined Trustworthy Computing as “computing that is as available, reliable and secure as electricity, water services and telephony” (Gates 2012). These efforts have gone a long way toward making Microsoft the largest client-side operating system vendor on the planet (w3Schools.com 2014).

1.3 That being said, it is very likely that an intrusion detection analyst will be using some version of Microsoft Windows as their main workstation and analyzing Windows systems. In the past, intrusion analysts have had to download many tools, with a focus on Linux power tools, in order to perform analysis properly. Microsoft now provides tools that make this unnecessary, so an analyst can perform intrusion analysis without downloading much at all. The best tool Microsoft has created is PowerShell. PowerShell is a full-featured scripting language that Microsoft built for administration on top of the .NET Framework (TechNet 2013). PowerShell is much, much more: it can run all of the classic cmd.exe commands such as net.exe and netsh.exe, it has COM objects built in so VB scripts can be upgraded to the superior PowerShell, and it does all of this as an object-oriented scripting language.

1.4 After an analyst becomes familiar with PowerShell as an analysis language, it is a seamless transition to use its administrative capabilities to perform monitoring tasks against other Microsoft security technologies such as the Windows Firewall, Active Directory, and the Windows event logs. To take advantage of PowerShell's monitoring capability, an analyst will need to learn to script and use programmatic logic, which in PowerShell is not difficult, although some of the nuances can be complicated.

2. Body

2.1 The first thing an analyst has to learn when picking up a new tool is how to use it. If the analyst is using at least Windows 7 (and at the time of this paper they should be), PowerShell comes pre-installed. It is as easy as the old Start > Run “cmd”, except the analyst types “powershell” instead.

The Analyst will get the pretty blue PowerShell window where they can start to experiment.

So the analyst types “dir” and gets a directory listing, but it looks different, so the analyst tries something else.

OK, the analyst has some *nix experience and types “ls” – and that works too. What's going on?

After a little research on the Internet, the analyst finds the Get-Alias cmdlet and notices that “ls” is an alias for the Get-ChildItem cmdlet. PowerShell uses a verb-noun pair to name the .NET-based commands that make up its cmdlets. Get-Command is a good cmdlet for seeing which cmdlets are available (see Appendix A for a list), as is the Get-Help cmdlet. After reviewing the cmdlets, any intrusion analyst should immediately home in on the Select-String cmdlet; the first line of its DESCRIPTION section reads: “The Select-String cmdlet searches for text and text patterns in input strings and files. You can use it like Grep in UNIX and Findstr in Windows.” As an analyst, I may have a large text file full of firewall logs and need to find an IP address. To experiment, the analyst can take any syslog file and build the appropriate regular expression. Here, the syslog examples from Cisco (http://www.cisco.com/web/about/security/intelligence/identify-incidents-via-syslog.html) are copied to the analyst's system as CiscoLogFileExamples.txt; the analyst picks one of the IP addresses in the file, runs the following command, and gets the following results.
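A sketch of what that command might look like (the IP address searched for here is a placeholder, not one taken from the Cisco file):

```powershell
# Search the saved Cisco syslog samples for a placeholder IP address.
# Select-String emits MatchInfo objects: filename, line number, and line text.
Select-String -Pattern "192\.0\.2\.1" -Path .\CiscoLogFileExamples.txt
```

Each hit comes back as a MatchInfo object rather than plain text, which is why the output looks different from grep.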

Ah, it looks like grep but with some extra data. Let's try getting only the matches:
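The matches-only version might be sketched as follows, this time using the full IP-address regular expression:

```powershell
# -ExpandProperty Matches unwraps the regex match objects from each MatchInfo,
# so only the matched text (the Value property) remains.
Select-String -Pattern "\b(?:\d{1,3}\.){3}\d{1,3}\b" -Path .\CiscoLogFileExamples.txt |
    Select-Object -ExpandProperty Matches |
    Select-Object Value
```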

Much better. So let's try some more piping; it is always nice to see how many connections are made when analyzing a single host.
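Counting the hits for a single host could be sketched like this (again with a placeholder IP):

```powershell
# Measure-Object counts the MatchInfo objects coming down the pipeline,
# giving the number of log lines that mention this host.
Select-String -Pattern "192\.0\.2\.1" -Path .\CiscoLogFileExamples.txt |
    Measure-Object |
    Select-Object -ExpandProperty Count
```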

Nice, we got counts. Now let's do something interesting: the analyst checks all the IP addresses in the file. Here's the command to get the count:

If he back-tracks through the command he sees:

Removing Measure-Object shows all the individual IPs:

Removing Sort-Object –Unique shows:

However, it would be nice to be able to count each of those individually, using some shell magic:

With this type of analysis, the analyst can easily find the top talkers in a log file. Here is the full command for ease of use in the future: PS :> Select-String "\b(?:\d{1,3}\.){3}\d{1,3}\b" .\CiscoLogFileExamples.txt | select -ExpandProperty Matches | select Value | group Value | sort Count -Descending

2.2 A bane of Windows systems is the difficulty of centralizing logs in a single repository for review. Using the legacy netsh command, PowerShell's built-in logic, some .NET, and some Windows administration, an analyst can easily configure a central log repository for the Windows Advanced Firewall. The first step is to find where the logs live on a Windows system: PS :> netsh advfirewall show allprofiles | Select-String FileName | select -ExpandProperty Line | Select-String "%systemroot%.+\.log" | select -ExpandProperty Matches | select -ExpandProperty Value | sort -Unique. This returns the log-file setting for the Windows Firewall, which should be enabled via GPO policy for analysis. The command shows the file is at %systemroot%\system32\LogFiles\Firewall\pfirewall.log, so PowerShell will need to be run with administrative privileges to open it. The analyst simply types “powershell” into the Run/search bar, then right-clicks the result and chooses “Run as administrator” to get an elevated console. Now the analyst has to do a little scripting; the first step is to capture the output of the command above in a variable.
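That capture might be sketched as follows (the variable names are assumptions, not from the original):

```powershell
# Capture the firewall log path reported by netsh...
$fwLogSetting = netsh advfirewall show allprofiles |
    Select-String FileName |
    Select-Object -ExpandProperty Line |
    Select-String "%systemroot%.+\.log" |
    Select-Object -ExpandProperty Matches |
    Select-Object -ExpandProperty Value |
    Sort-Object -Unique

# ...and expand %systemroot% into a real path PowerShell can open.
$fwLogPath = $fwLogSetting -replace "%systemroot%", $env:systemroot
```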

Then, finally, he can run the above command and get the connection counts:

They will get the following information:

Very interesting information: the analyst can identify the volume of connections, the default gateway, and all connections from the system. So how does he solve the central-logging problem? First the analyst needs to determine where on the network to send the data. Let's assume he chooses “\\secureshare\logs\”; changing the command is simple. Say the analyst wants to send this system's logs to the share with the following name format: (Date)-(Hostname)-FWLogs.log. The analyst runs the same log-name identifier. With the log file cleaned up, it is a simple matter of selecting what you want and sending it to the log file:
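One way to sketch that copy step (the share path comes from the text above; the exact date format is an assumption):

```powershell
# Build the (Date)-(Hostname)-FWLogs.log name and copy the local
# firewall log to the central share chosen above.
$logName = "{0}-{1}-FWLogs.log" -f (Get-Date -Format "yyyyMMdd"), $env:COMPUTERNAME
$source  = "$env:systemroot\system32\LogFiles\Firewall\pfirewall.log"
Copy-Item -Path $source -Destination "\\secureshare\logs\$logName"
```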

The analyst can schedule this across the domain using GPO, run it with the Invoke-Command cmdlet, or create scheduled tasks on critical systems.

2.3 PowerShell has tremendous baked-in functionality, but when an analyst needs a specific capability there are PowerShell modules and community extensions. They are all worth looking at, but one set that is incredibly useful and has tremendous daily use is the Quest Active Directory cmdlets from Quest (Dell) Software. The cmdlets can be downloaded at http://www.quest.com/powershell/activeroles-server.aspx; after accepting the agreement, the analyst can download and run the installer. After installation the analyst can start the Quest Active Directory shell or run the following code: Add-PSSnapin Quest.ActiveRoles.ADManagement

This loads the PowerShell snap-in if it is not already added (if it is, the analyst will just receive an error, but there is an easy fix), which is especially useful for scripting. Scripts using this cmdlet set are very good for monitoring critical security groups such as “Domain Admins”. Say the analyst wants to monitor a security group: first, after installing the Quest Active Directory cmdlets, the analyst adds the code above to the top of the script.
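The “easy fix” for the already-loaded error can be sketched as a guarded load:

```powershell
# Only add the Quest snap-in if it is not already loaded in this session;
# -ErrorAction SilentlyContinue suppresses the error on the lookup itself.
if (-not (Get-PSSnapin -Name Quest.ActiveRoles.ADManagement -ErrorAction SilentlyContinue)) {
    Add-PSSnapin Quest.ActiveRoles.ADManagement
}
```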

Then he identifies the file variables he wants and what he wants to monitor:
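Those variables might be sketched as follows (the paths and group name are assumptions for illustration):

```powershell
# Group to watch and the two files used to detect changes between runs.
$group       = "Domain Admins"
$currentFile = "C:\ADMonitor\DomainAdmins-current.txt"
$compareFile = "C:\ADMonitor\DomainAdmins-compare.txt"

# Snapshot the current membership with the Quest cmdlet.
Get-QADGroupMember $group |
    Select-Object -ExpandProperty Name |
    Out-File $currentFile
```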

Then he creates the logic for the changes and the email messages, and there you have it:

The logic first tests whether the compare file exists; if so, it compares the current and compare files and, if there are any differences, sends a mail message to the analyst. It then moves the current file over the compare file for the next run. If the compare file does not exist, it moves the current file to the compare file and sends a message stating that monitoring has started. But how to make this a recurring activity? There are multiple ways: Windows scheduled tasks, or simply coding it into the script. The analyst can easily run this in a continuous loop with a classic while loop.
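That compare-and-mail logic might be sketched as follows (the SMTP server and addresses are placeholders; $group, $currentFile and $compareFile are assumed to be set earlier in the script):

```powershell
if (Test-Path $compareFile) {
    # Compare the new snapshot against the previous run.
    $diff = Compare-Object -ReferenceObject (Get-Content $currentFile) `
                           -DifferenceObject (Get-Content $compareFile)
    if ($diff) {
        Send-MailMessage -SmtpServer "smtp.example.com" `
            -From "monitor@example.com" -To "analyst@example.com" `
            -Subject "Membership change: $group" `
            -Body ($diff | Out-String)
    }
} else {
    # First run: no baseline yet, so announce that monitoring has started.
    Send-MailMessage -SmtpServer "smtp.example.com" `
        -From "monitor@example.com" -To "analyst@example.com" `
        -Subject "Monitoring started for $group" -Body "Baseline created."
}
# Either way, the current snapshot becomes the baseline for the next run.
Move-Item $currentFile $compareFile -Force
```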

The sleep at the end waits 100 seconds; this can be adjusted as needed. But what if the analyst needs to analyze more groups? Let's parameterize and functionalize the script.
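Using the ADMonitor name and parameters that appear in the text, the parameterized function might be sketched like this (file paths are assumptions; the body wraps the snapshot and the compare-and-mail logic described earlier):

```powershell
function ADMonitor {
    param(
        [int]$Interim = 100,              # seconds between checks
        [string]$Group = "Domain Admins"  # group to monitor
    )
    # Derive file names from the group so multiple monitors can coexist.
    $safe        = $Group -replace '\s', ''
    $currentFile = "C:\ADMonitor\$safe-current.txt"
    $compareFile = "C:\ADMonitor\$safe-compare.txt"

    while ($true) {
        Get-QADGroupMember $Group |
            Select-Object -ExpandProperty Name |
            Out-File $currentFile
        # ...comparison and Send-MailMessage logic described earlier...
        Start-Sleep -Seconds $Interim
    }
}
```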

Now all an analyst has to do is run this code in a shell and kick off monitoring as they need.

PS:> Start-Job {ADMonitor -Interim 100 -Group "Domain Admins"}

PS:> Start-Job {ADMonitor -Interim 100 -Group "Enterprise Admins"}

PS:> Start-Job {ADMonitor -Interim 100 -Group "Schema Admins"}

This way, the analyst can monitor multiple groups and have them run like a service. The same framework can even monitor locked accounts, using Get-QADUser -Locked.

2.4 However, knowing an account is locked out is not as important as knowing why. That really requires monitoring security events, and the best code for monitoring event logs from a central location comes from Robert Sheldon's Windows IT Pro article (2008). The script is well designed in that little needs to change except the SMTP information for the mail alerts (code attached). The .xml and .csv files are what really need to be modified.

If the analyst makes sure the paths for these files are set in the “logmonitor” script and the email information is correct, they will receive the log information every time an account locks out, including the system on which, and the reason why, the account was locked.

2.5 So the analyst can now determine when security groups change and why security accounts get locked out, obtain network connection logs across the domain, and process those logs through multiple stages, including regular-expression pattern matching. Monitoring Domain Admin accounts is great, but what about local admin accounts? Adding one is among the first things an attacker will do after exploiting a machine. Is it possible to use some of the techniques outlined earlier? Absolutely; let's see what our analyst can do. First the analyst needs a list of the machines he wants to monitor, which can be obtained with the Quest Active Directory cmdlets.
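That list might be sketched as follows (the output path is an assumption):

```powershell
# Pull every computer object from the domain with the Quest cmdlet
# and save the names to a file for the monitoring loop to iterate over.
Get-QADComputer |
    Select-Object -ExpandProperty Name |
    Out-File "C:\ADMonitor\systems.txt"
```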

This gets a list of systems on the domain, and since the analyst is already collecting this information, why not alert on it as well?

So how does he get the list of administrators in a local group on a remote system? The analyst does some checking; there are multiple methods, and he decides to go with an Active Directory Service Interfaces (ADSI) query, since he is already calling domain information to query the systems. He writes the following function:
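Using the function name and parameters from the text, the ADSI query might be sketched as:

```powershell
# Enumerate the members of a local group on a remote machine via the
# WinNT ADSI provider (works against workstations and member servers).
function listlocal-remote {
    param(
        [string]$strComputer,
        [string]$localgroup = "Administrators"
    )
    $group = [ADSI]"WinNT://$strComputer/$localgroup,group"
    $group.psbase.Invoke("Members") | ForEach-Object {
        # Reflect the Name property out of each COM member object.
        $_.GetType().InvokeMember("Name", "GetProperty", $null, $_, $null)
    }
}
```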

By running PS > listlocal-remote -strComputer $systemNameOrIP -localgroup Administrators, the analyst can easily get a list of local admins from a remote system. Here is the logic to compare the old and new lists, using the same approach as earlier:

2.6 The full script is attached in Appendix A; it will let the analyst know when a local admin group changes on any system in the domain. The analyst can create a scheduled task to run the check, or use a sleep with Start-Job, to keep the monitoring script running. This ensures the analyst knows when an admin account changes and can track down why and when it changed.

3. Conclusion:

3.1 Windows environments have historically fallen short in the automation world because they lacked a full-featured scripting language. That shortcoming has gone away with PowerShell. PowerShell is nominally an administration language, yet it has tremendous capability for security scripting and monitoring; the only limit is the analyst's imagination.

3.2 In this paper we looked at how to use PowerShell's shell commands to analyze logs and Windows systems using several cmdlets. We also looked at building monitoring that has traditionally been lacking on Windows systems, using the Quest Active Directory cmdlets, techniques to compare changes from one moment to the next, and automation and scheduling. We also looked at monitoring privileged accounts across the domain to ensure that no improper accounts are being created.

3.3 We also looked at centralizing the Windows Firewall logs across the domain so that the network-connection logs are in one place for analysis. This, coupled with the monitoring of other logs, is critical for analysis and detection.

3.4 Historically, intrusion analysts have depended on tools for the identification and interpretation of this type of information, and there are many tools out there that will do this kind of monitoring and analysis. They all cost a lot of money, though; for the analyst in a small shop who needs to monitor this information, Microsoft has provided the tools to collect and extract it. It just takes dedication and the will to accomplish the task.