*See the slides here*
As the art of red teaming evolves, more and more emphasis has been placed on team-based solutions to common problems. Authors of capabilities and support tools now focus on building a collaborative approach into their projects. It is no mystery why this has happened – during engagements or assessments, people work as a team (duh). Need examples of capabilities moving in this direction? Check out MSF Pro, Dradis Pro, Silent Break Security’s Throwback, Cobalt Strike, etc.
Over the years, I have been able to witness the collaborative approach to hacking first hand at various red team events and at my job. No longer are people stuffed into their own corner trying to individually tackle hundreds of systems, no longer are people screaming across the room to clumsily pass shells in Metasploit, and no longer is data being hoarded by a single point of failure. It is quite beautiful.
While the tools in use today are a big step forward and have components built for scanning, I find people still rely on individualized Nmap setups for pentesting. Testers often do not scan at all for fear of tripping sensors during more advanced engagements. With this approach, people fall back into limited sharing of data and often miss exploit opportunities. I was inspired to find a solution that would work for my engagements, but first…
With a read through of the Nmap book (http://nmap.org/book), you can learn quite a bit about the options and techniques available to you in different situations. During a large-scope pentest, I find the --min-rtt-timeout and --max-rtt-timeout options extremely useful, especially if you are conducting scans over the Internet. These options allow you to fine-tune the performance of your scan by manually setting timeout values. By pinging a live target, noting the round-trip time, and tuning the min/max options, the scan no longer has to adjust the timeout automatically, resulting in fewer dropped packets and a quicker scan. Another useful option is --min-hostgroup, which breaks the scan into smaller chunks. This lets you get results in stages and reduces the amount of work the scanner is doing at any one moment.
```
$ ping google.com
PING () 56(84) bytes of data.
64 bytes from (): icmp_req=1 ttl=128 time=18.4 ms

$ nmap -Pn -n -sS --top-ports 1000 --min-rtt-timeout 50ms --max-rtt-timeout 150ms --min-hostgroup 128 -vvv --open google.com -oA 98
```
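The tuning step above can be sketched in a few lines of Python: measure the round-trip time with ping, then derive the timeout flags from it. The 2x/8x margins are my own illustrative guesses, not values from Nmap's documentation, and `parse_ping_rtt` assumes Linux-style ping output.

```python
import re
import shlex

def parse_ping_rtt(ping_output):
    """Extract the round-trip time (in ms) from a line of `ping` output."""
    match = re.search(r"time=([\d.]+) ms", ping_output)
    return float(match.group(1)) if match else None

def build_scan_command(target, rtt_ms, hostgroup=128):
    """Build an Nmap command with RTT timeouts derived from a measured RTT.

    The 2x/8x multipliers are illustrative margins, not recommended values.
    """
    min_rtt = int(rtt_ms * 2)
    max_rtt = int(rtt_ms * 8)
    return (
        f"nmap -Pn -n -sS --top-ports 1000 "
        f"--min-rtt-timeout {min_rtt}ms --max-rtt-timeout {max_rtt}ms "
        f"--min-hostgroup {hostgroup} -vvv --open {shlex.quote(target)}"
    )

sample = "64 bytes from 203.0.113.5: icmp_req=1 ttl=128 time=18.4 ms"
print(build_scan_command("example.com", parse_ping_rtt(sample)))
```

From there you would hand the command string to a scheduler or shell it out yourself; the point is only that the timeouts come from a measurement, not a guess.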
For red teaming, scanning can be risky depending on the sophistication of the network operations staff and the nature of the network boundaries you will be crossing. As a general rule, I minimize or avoid automated scanning as much as possible during these complex engagements (especially if already internal to the network). Small numbers of TCP/UDP scans focused on individual IPs and specific ports are less risky, since they can blend into “normal” network traffic a bit better. If you must scan, keep the timing sporadic, focus on certain hosts/ports, and vary the originating host – but never use a component of your callback infrastructure or a non-moveable system for your scans. I have seen a person or two earn a dunce cap for doing just that.
Distributed scanning is an “old” concept dating back to the 90s or early 2000s (maybe earlier?). It uses a client-server architecture in which multiple client nodes are responsible for executing scans on your behalf. You can have an extremely large number of scanning nodes distributed around the world. The server is the centralized controller in charge of dishing out the scans and collecting the results. Essentially, this creates a “botnet” of scanning nodes.
There are several benefits to doing scanning this way:
- Efficient: By spreading out the work across multiple hosts, the scan will finish sooner. This is much more observable when you are scanning Class B ranges or larger.
- Covert-ish: By spreading out the scan over different hosts distributed worldwide, it reduces the number of times the same host makes a “malicious” request to a target network. While the network operations staff might still recognize it is a scan if their equipment is properly configured, attributing or predicting the scan will be much more difficult.
- Disposable: By using scanning nodes out in the world and not initiating scans from your operational infrastructure, you are minimizing the risk of mitigation against your more important exploitation actions. With rapidly deployable and cheap rackspace hosting, you can stand up scanning nodes and tear them down immediately after the scan.
- Viewpoint: You can see certain networks differently depending on where you originate. For example, certain countries/networks might block US addresses etc.
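The client-server dispatch described above can be reduced to a toy sketch: a server holds a queue of Nmap command lines and hands one to each node that connects. This is purely illustrative – real tools like DNmap use a persistent, authenticated protocol with result upload, and none of the names below come from any actual tool.

```python
import queue
import socket
import threading

def serve_jobs(jobs, host="127.0.0.1", port=0):
    """Hypothetical minimal job server: one scan command per connecting node."""
    q = queue.Queue()
    for job in jobs:
        q.put(job)
    srv = socket.socket()
    srv.bind((host, port))
    srv.listen()

    def loop():
        # Hand out jobs first-come, first-served until the queue is drained.
        while not q.empty():
            conn, _ = srv.accept()
            with conn:
                conn.sendall(q.get().encode())

    threading.Thread(target=loop, daemon=True).start()
    return srv.getsockname()[1]  # the OS-assigned port

def fetch_job(host, port):
    """What a scanning node would do: connect and receive one command line."""
    with socket.create_connection((host, port)) as c:
        return c.recv(4096).decode()

port = serve_jobs(["nmap -sS -p 80 10.0.0.1", "nmap -sS -p 80 10.0.0.2"])
print(fetch_job("127.0.0.1", port))  # → nmap -sS -p 80 10.0.0.1
```

A real node would then execute the command and ship the -oA output back to the server, which is exactly the part where authentication and transport security start to matter.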
There are a couple of tools already built to do this. RNmap appears to have been developed in early 2000, with development ending in 2003. It used a GUI-style interface on the server to communicate with clients, and it was an early proof of concept that paved the way for future methods. I found several smaller, more custom proof-of-concept distributed scanners, but nothing quite as full featured as DNmap.
DNmap was developed by Sebastian Garcia and has been presented many times at various cons. DNmap is a distributed scanner written in Python using the Twisted libraries. It implements a custom protocol to communicate to scanning nodes and retrieve results. It was easy to get running and very good at accomplishing the distributed scan. For a walkthrough on getting it running, check out the Texas Tech blog on it: Raidersec.
While it worked quite well in my individual test scenarios, I found a few things that could be improved. First and foremost, it was difficult to use in team environments: multiple people trying to use the scanning server or interfacing with the “jobs” file while a scan was running could screw up the scanning operations. DNmap also lacked client authentication, meaning anyone could connect to the scanning server and request a job. These small issues inspired me to play with my own implementation.
Minions – Collaborative Distributed Scanning
So when I started Minions, I had a list of things I wanted from my solution, and my development focused heavily on these items.
- Distributed – See reasons above
- Low Overhead… fire it off and leave it running
- Team Interface – SYNERGY!
- Automation – I don’t want to type the same Nmap options over and over again
- Scheduled jobs – Sometimes we scan customers overnight
- Secure – Client/Server authentication
- Built in mgmt capabilities – I don’t want to SSH to my scanning server unless something really breaks
What I came up with is a full-featured collaborative scanning tool that integrates with and uses a modified DNmap implementation. It uses Django on the backend and Bootstrap/jQuery on the front end. Minions provides an easy-to-use, team-based interface to control the DNmap server. I was able to implement most of the features needed to meet my original requirements:
- Execute and schedule scan jobs
- Create and manage scan profiles
- Query and manage previous scans
- Download all forms of scan output
  - Output files stored on the server upon processing
  - Server provides a zip for download
  - gnmap, xml, nmap (zip is nice)
- Implement different layers of security
  - SSL in transit, authentication from client to server, input checking & validation, server protections
To accomplish this, I had to make some changes to DNmap. Some of the modifications were to tailor the tool for my front end, while others simply add features to meet my requirements. The major modifications include:
- Added the ability to poll for new output files and parse results to SQL
- Added the ability to retrieve all -oA output forms (not just .nmap)
- Changed the way the jobs and trace files work to support endless execution and dynamic jobs
- Added client authentication using certs
  - Previously, anybody could connect to the server and grab scan jobs…
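To give a feel for the first modification, here is a rough sketch of pulling open ports out of a .gnmap file and into SQLite. The table layout and function names are my own illustration, not Minions' actual schema or code.

```python
import re
import sqlite3

def parse_gnmap(text):
    """Extract (host, port, service) tuples for open ports from .gnmap text."""
    results = []
    for line in text.splitlines():
        m = re.match(r"Host: (\S+) .*Ports: (.*)", line)
        if not m:
            continue
        host, ports = m.groups()
        for entry in ports.split(","):
            # Each entry looks like: 22/open/tcp//ssh///
            fields = entry.strip().split("/")
            if fields[1] == "open":
                results.append((host, int(fields[0]), fields[4]))
    return results

def store(rows, db=":memory:"):
    """Load parsed rows into a simple (illustrative) SQLite table."""
    conn = sqlite3.connect(db)
    conn.execute("CREATE TABLE IF NOT EXISTS ports (host TEXT, port INT, service TEXT)")
    conn.executemany("INSERT INTO ports VALUES (?, ?, ?)", rows)
    conn.commit()
    return conn

sample = "Host: 10.0.0.5 ()\tPorts: 22/open/tcp//ssh///, 80/open/tcp//http///"
conn = store(parse_gnmap(sample))
print(conn.execute("SELECT COUNT(*) FROM ports").fetchone()[0])  # → 2
```

Once the results live in SQL, per-port and per-host queries across an entire engagement become one SELECT instead of a pile of grep.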
While the tool works pretty well as is and can already provide value to teams, I have a couple of things I would like to work on going forward. They are mostly surface level.
- Better UI including better handling of various browser sizes and customization of the bootstrap theme.
- Parsing of the Nmap XML results so they can be queried
- Smart detection and setting of the RTT Timeouts
- Rewrite of the distributed scanning backend
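The XML-parsing item above could start as something like the sketch below, using the stdlib ElementTree module against Nmap's -oX output. The element names match Nmap's XML format, but the tuple shape is just one possible choice.

```python
import xml.etree.ElementTree as ET

def parse_nmap_xml(xml_text):
    """Walk an Nmap -oX document and return (host, port, state, service) tuples."""
    rows = []
    root = ET.fromstring(xml_text)
    for host in root.iter("host"):
        addr = host.find("address").get("addr")
        for port in host.iter("port"):
            state = port.find("state").get("state")
            svc = port.find("service")
            name = svc.get("name") if svc is not None else ""
            rows.append((addr, int(port.get("portid")), state, name))
    return rows

# A trimmed-down stand-in for real -oX output.
sample = """<nmaprun><host>
  <address addr="10.0.0.5" addrtype="ipv4"/>
  <ports><port protocol="tcp" portid="22">
    <state state="open"/><service name="ssh"/>
  </port></ports>
</host></nmaprun>"""
print(parse_nmap_xml(sample))  # → [('10.0.0.5', 22, 'open', 'ssh')]
```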
I have already thought of some pretty cool use cases. I have tested the idea of using this tool on internal red team engagements, where I can turn compromised Linux servers into scanning nodes. I’ve also toyed with using it on external assessments, where I can rapidly deploy a large number of IPs using rented rackspace. Minions can also help on internal pentests, where you could deploy the dnmap agent to multiple assessment machines and have a single tester control all scanning. This is faster than having one machine execute all scans, and more efficient than breaking up the target IP space by hand and having each tester run Nmap manually.
There are several screenshots below. You can find the tool on my GitHub along with setup instructions, details on how to work the tool, and more info. If you have questions, find me on freenode (sixdub) or email me!
Happy hacking and cheers!