Do You Miss Being a Red Teamer?

It is a question that gets posed to me pretty frequently: “Do you miss being a red teamer?” If you came all the way to my blog just for the answer, I will save you some time and a couple hundred words of reading – No. The real meaning of this post is not in that single-word answer; rather, it reveals itself when you consider the question “why don’t you miss it?”

First, we must rewind for a quick recap: In 2014, after separating from the USAF, I joined a small-ish (at that time) team of folks to do consulting, specifically as a penetration tester and red teamer. For three years, I was lucky to work with brilliant coworkers / researchers / hackers who pushed me every day to excel in the offensive space and encouraged a unique creativity that seemed natural when solving hard problems. I had the fortune of leading a multitude of engagements, from program development work with corporate red teams to external red team assessments for a variety of companies. I was also lucky to share my passion for offensive work as a trainer at BlackHat, where the days were long but seeing the joy people had in problem solving made it all worth it.

In 2017, there was a natural fork in the road and I decided to take a different path. I went to work for a product company doing network forensics and threat research work. When I made the change, there was a chain reaction that I did not fully anticipate – consistent challenges from people who sought to understand why I would take this new path. Over the past year, at conferences, on social media, and in hallway conversations, people have always found a way to ask the same old question: “Do you miss being a red teamer?” Oftentimes it is innocent curiosity that leads to the question, but sometimes people have been more direct: they think I made the wrong decision or can’t understand why I would make the change. At times it has been challenging, but I have always been open to sharing this response, and now I wanted to do it in a more open forum.

The Answer

Simply put – I don’t miss being a red teamer because I still feel like one. I still use my offensive skill set and contrarian personality to solve challenging problems in creative ways. I still get to emulate threats, study techniques and build malicious tools (just for a different purpose). I still get the satisfaction of helping people defend their networks and improve their security. I still get the thrill of competing and taking operational actions against adversaries who have their own objectives to accomplish. Every day I get to study the past, think outside the box, and learn something new. Many of the characteristics and qualities that made red team work enjoyable for me are present in my current blue team role. Many might disagree, but I still consider myself a red teamer – just one living in a long-term purple team exercise. I often wonder if there are others who feel like this and enjoy the mix.

I do not regret the path I have taken; on the contrary, I have thoroughly enjoyed it. I would encourage people who are curious on either side to apply their domain-specific knowledge to the competing domain. If you are a red teamer, go hang out with the IR / Intel / SOC / Malware / Ops folks for the day. If you are a blue teamer, ask the red team to do a ride-along during an engagement and make sure you are there for the campaign prep. For those who simply can’t “ride along”, there are other ways to do this: personal research, job rotations, technical exchanges, conference villages, CTFs, etc. Being able to attack problems from both angles is extremely rewarding and I promise you, having inside knowledge with a diverse skill set will force you to consistently seek improvement. There isn’t a day that goes by where I don’t reflect upon some offensive action I took in the past, knowing that I could have done so much better if I only had the knowledge I have now.

Infrastructure Diversity – Hunting In Shared Infrastructure

As an attacker, it is all too easy to settle into a rhythm. That rhythm of operations – the specific techniques and automation involved in conducting offensive work – boils down to foundational tradecraft decisions that are often reused between campaigns. Why reuse tradecraft between campaigns? Well, it enables scalable and efficient operations; unfortunately, it also creates a digital fingerprint. We have seen the results of this at a national level with the deep revelations of the operations of advanced threat actors. Recently, I have shifted jobs into a Security Engineer role where I get to work with customers and with “BIG” (notice the caps) data to do network forensics and threat detection. Being on the defender’s side of the breach has definitely helped to refine certain aspects of my tradecraft. Don’t worry… I will still be blogging about red team stuff :).
Continue reading

The Diamond Model and Network Based Threat Replication

I recently spoke at BSides DC with Chris Ross (@xorrior) on replicating adversarial post-exploitation tools and tradecraft. During that presentation, we briefly covered the concept of specific adversary emulation and discussed, at a very high level, how the Diamond Model of Intrusion Analysis could be used as a framework for adversary emulation. In hopes of better explaining the concept, I decided it would be best to expand our thoughts in a blog post – one that developed into quite a long piece.

As with most of my posts, I’ve tried my best to provide as much reference material as possible. I’ve cited these items throughout the post where appropriate, and want to ensure that the original authors get the credit they deserve:

How Do I Replicate Threats??

I’ve blogged previously about red team operations and the various components and considerations that go into these types of operations. One subset of red team operations is threat replication exercises. During these exercises, the red team is responsible for modeling and emulating a specific threat in order to measure the effectiveness of detection, response, and mitigation in the environment. There are strengths and weaknesses to such an approach, and oftentimes it takes immense resources to accurately replicate an adversary.

One common question I am routinely asked, and have never had an adequate answer for until now, is: how do you replicate threats given the lack of commercially available intelligence on adversary operations, intentions, objectives, capabilities, etc.? I am not currently aware of any widely-accepted red team frameworks that formally define how an offensive team can effectively simulate threats. After refreshing myself on the Diamond Model, I see definitive value in using it in a red team context to better shape how the red team can effectively emulate an adversary during an engagement.

Continue reading

Common Ground Part 3: Execution and the People Factor

This is part three of a blog series titled: Common Ground. In Part One, I discussed the background and evolution of red teaming. I dove deep into how it applies to the information security industry and some limitations that are faced on engagements. In Part Two, I discussed various components of the planning phase to help ensure a valuable and mature exercise. In this part, I will discuss how a red team can execute a thorough operation. I will steer clear of the technical components of the network red team, instead focusing on the most important outcome of a red team assessment: communication improvement and bias identification. I encourage you to read my disclaimer in Part One before continuing.

Decision Cycles

If you have been through any strategy classes or military training, there is a good chance you have heard of the widely applied (and arguably overused) concept of decision cycles. In school, I loved the numerous military strategy and history classes I took, but at the time, I too often thought the concepts were defined to the point of absurdity (and I am sure I will get some fun comments from friends now that I am writing about them here). Later in my career, when I found myself playing adversary against major industry blue teams, I fell back on some of those military strategy lessons to help achieve our objective in the wargame on their network. Throughout my first several engagements, I learned that those lessons could be applied heavily to help achieve a major goal of the assessment – improvement of process.

Decision cycles are a set of steps taken by an entity in a repetitive manner to plan their course of action, take that action, learn from the results, and implement new decisions. In theory, they are foundational to everything we do day-to-day, even if you do not realize it or formally follow a decision cycle. Formal examples:

  • Plan-Do-Check-Act – Heavily used in quality assurance
  • Observe-Orient-Decide-Act (OODA) – Used by the US military. A diagram of this process as formulated by Boyd is below
  • Observation–Hypothesis–Experiment–Evaluation (Scientific Method) – Used to evaluate hypotheses and make adjustments in scientific research

John Boyd, the person who formulated and studied the OODA cycle, also theorized about how these loops could be used to defeat an enemy. He believed that if you could work through your decision cycles faster than your adversary, you would be able to react to events more quickly and, therefore, gain a strategic advantage. In this concept, he covered numerous aspects that affect the speed of decision cycles and methods of intentionally slowing your adversary’s decision cycles. At each step in the cycle, many factors impact the decision maker and affect the outcome.



Here are the phases broken down:

  • Observation: During the observe phase, the entity uses the unfolding circumstances combined with current interactions within the environment to formulate their perception of what is happening. They combine this info with outside information (enrichment data) to form the basis of their understanding.
  • Orient: During this phase, the decision maker takes their perception of the situation, and aligns their thoughts with the actions that are occurring. They utilize the culture of their organization, knowledge of past events, and the constant flux of information to begin to shape a potential decision.
  • Decide: In this simple phase, the decision maker takes the information from the previous two phases and selects the best course of action.
  • Act: Having made the decision, the entity now executes that decision and utilizes any outcome as part of a feedback loop going back into the observe phase to mold a changing environment.
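For readers who think in code, the phases above can be sketched as a simple feedback cycle. This is purely illustrative – the functions and data below are hypothetical placeholders, not any real framework:

```python
# Illustrative sketch of the OODA loop as a feedback cycle.
# All functions and inputs here are hypothetical placeholders for the
# four phases described above; the point is the loop structure, where
# the outcome of "act" feeds back into the next "observe".

def observe(environment, feedback):
    # Perception = unfolding events combined with the results of our last action
    return environment + feedback

def orient(perception, context):
    # Filter the perception through context (culture, past events, noise)
    return [p for p in perception if p not in context.get("noise", [])]

def decide(options):
    # Choose a course of action from the oriented picture (here: the first option)
    return options[0] if options else None

def act(decision):
    # Execute the decision; the outcome becomes feedback for the next cycle
    return [f"result-of-{decision}"] if decision else []

feedback = []
history = []
for _ in range(3):  # three trips around the loop
    perception = observe(["alert"], feedback)
    options = orient(perception, {"noise": []})
    decision = decide(options)
    feedback = act(decision)
    history.append(decision)
```

Boyd’s insight maps onto this structure directly: whoever completes more trips around the loop in the same amount of time (or corrupts the other side’s `observe` and `orient` inputs) gains the advantage.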

Affecting Decisions In Each Phase – Case Study

With knowledge of this process and the goal of intentionally slowing down the blue team’s decision cycles, the red team can plot certain actions to identify not only technical vulnerabilities, but also human ones. Below are some possible ways that the red team can alter the decision cycles of the blue team. They are small examples that can be expanded upon greatly to build a strategic playbook of sorts.

  • Observation
    • Limit the amount of information the decision maker can gain from the environment or overwhelm the decision maker with too much competing information.
    • Plant false flags pointing to multiple adversaries to produce offensive counter-information and force the decision maker to rely on outside information that is actually not relevant.
    • Disable network security sensors to prevent them from gaining a perception in the environment.
  • Orient
    • Identify known personalities in the decision making process and leverage personality flaws or biases to hamper a proper decision.
    • Prevent the thorough analysis of information by shifting or changing TTPs frequently.
    • Identify cultural traditions and norms and utilize them as part of your attack path (it is normal for people to log in after hours, so log in after hours).
    • Study past breaches in the environment and identify errors that you can piggyback off of.
  • Decide
    • Adapt your TTPs rapidly so that the defenders consider changing their decision numerous times, thereby introducing a delay.
  • Act
    • Recognize that a decision has been executed and re-orient yourself before their feedback loop can complete.
    • Nullify their actions in a rapid fashion so they feel the need to repeat the same actions.
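Bullets like the ones above lend themselves to the “strategic playbook” structure mentioned earlier. A trivial sketch (the phase keys and action strings are simply the examples from this post, condensed):

```python
# A minimal "strategic playbook" mapping phases of the blue team's decision
# cycle to candidate red team actions for slowing that cycle down.
# The structure and entries are illustrative, condensed from the bullets above.
PLAYBOOK = {
    "observe": [
        "limit or flood the decision maker's information",
        "plant false flags pointing to multiple adversaries",
        "disable network security sensors",
    ],
    "orient": [
        "leverage personality flaws or biases of known decision makers",
        "shift TTPs frequently to prevent thorough analysis",
        "blend into cultural norms (e.g. after-hours logins)",
        "piggyback on errors from past breaches",
    ],
    "decide": [
        "adapt TTPs rapidly to force repeated re-decisions",
    ],
    "act": [
        "re-orient before their feedback loop completes",
        "rapidly nullify their actions so they repeat them",
    ],
}

def actions_for(phase):
    """Return candidate actions for a given phase of the target's cycle."""
    return PLAYBOOK.get(phase.lower(), [])
```

Even something this simple gives an operator a checklist to pull from when planning which phase of the defender’s cycle to target next.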

By recognizing the “people aspect” of an engagement, the red team is better equipped to use technical expertise in the environment to operate without a valid response and, therefore, achieve success. Understanding strategy and the psychology of influence helps hammer home the impact when the blue team has not operationally exercised its incident response plans.

Incident Response Cycle

While not technically a decision cycle, the incident response process, or “killchain,” is a cyclical process that is industry-recognized and widely applied. The process is defined in detail in NIST SP 800-61 R2, and its figure of the process is below. As a network adversary, recognizing the process your blue team opponent is operating within allows you to better predict their actions and plot their potential steps, which decreases the time required for your decision making and increases your strategic advantage.

IR Process

Outside of the technical response that will be conducted as part of the exercise, there is an assumed phase within each step of the process. This assumed phase is the communications, command, and control (C3) that occurs within the organization being exercised. Within C3, significant time delays can be introduced that prevent a rapid or thorough response, essentially crippling the technical responders. The red team can exercise this process during an engagement by including the communication plans in the exercise and ensuring that the blue team conducts a complete response as they would against a real adversary. Common issues we see with C3 in an organization:

  • Delegation – Certain decisions must be delegated down to prevent time lag.
  • Leadership Lag – Not all non-technical leaders will understand how to respond. They need practice in order to react more quickly during a real breach.
  • Out-of-Band C2 – During response efforts, communicating over a compromised network allows the adversary to sit inside your decision cycles by witnessing your response in real time. The adversary can also apply deception or disruption operations.

Another aspect of the process is the ability for adversaries and red teams to increase the lag time by attacking targets that intentionally increase the amount of communication required on behalf of network defenders. By laterally spreading through multiple organizations, subsidiaries, international components, or business lines, the red team is able to force collaboration and response across organizational or international lines, which is inherently slow, if it happens at all. Testing this multinational communication also forces decision makers to consider incorporating international and privacy laws into their response plans.

With the red team targeting the human factor as well as the technical, and working through these issues in advance, the organization will dramatically decrease the disparity between their mean time to detection and their mean time to response.


When analyzing decision cycles and incident response processes, one must recognize that bias plays a big role in sufficiently defending an organization at all levels. As a red team, recognizing the various biases that could be used as weapons against an organization can be extremely useful. As a blue team member, you must recognize that these biases exist and practice working through them.

  • Confirmation Bias – The tendency to look for information that supports your existing beliefs. Example: “I believe I have excellent egress controls so I will look for information that shows I am successfully blocking C2.”
  • Anchoring Bias – Jumping to conclusions or basing conclusions on information gathered early in the investigation. Example: “We have the adversary contained; the email gateway shows only one user was phished.”
  • Automation Bias – The overreliance on automated systems which skew your available information. Example: “The threat intelligence tool does not report any adversaries, so how can there be one?”
  • Framing Effect – Drawing different conclusions from the same information depending on how it is presented. Example: “Information is presented to the CIO by the CISO in a positive, hopeful light (things are under control) because he does not want to be fired even though the breach is uncontrolled.”
  • Information Bias – Tendency to seek more information even though it is not needed for decisions. This is extremely common in new decision makers. Example: “The new CISO wants to continue seeking external info on the threat and consulting with external teams before deciding on the course of action in response.”
  • Zero Risk Bias – Preference for reducing a small risk to zero over a greater reduction in a much larger risk profile. Example: “Although the organization has invested 100K in phishing prevention, I want to keep making sure phishing is entirely impossible even though user home directories are completely unsecured.”

If these biases impact your organization during a red team assessment, they can be identified through a rigorous debriefing process in which both teams share journals of activity for time comparisons. This allows for improvement and recognition of the risk of bias in decision making.
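One way to make that journal time comparison concrete is to line up timestamps from the two sides. A minimal sketch – the journal format, field names, and example entries here are assumptions for illustration, not any standard:

```python
# Sketch: compute detection lag per red team action by pairing each logged
# action with the first blue team journal entry that references it.
# Journal shape (action name -> timestamp) is an assumed format for illustration.
from datetime import datetime

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

# Hypothetical journals shared during the debrief
red_journal = {
    "lateral-move-1": parse("2016-05-02 09:15"),
    "exfil-staging": parse("2016-05-02 11:40"),
}
blue_journal = {
    "lateral-move-1": parse("2016-05-02 13:45"),  # first detection of the action
    # "exfil-staging" was never detected
}

def detection_lags(red, blue):
    """Minutes from each red team action to first detection (None if missed)."""
    lags = {}
    for action, acted_at in red.items():
        seen_at = blue.get(action)
        lags[action] = None if seen_at is None else (seen_at - acted_at).total_seconds() / 60
    return lags

lags = detection_lags(red_journal, blue_journal)
# lags["lateral-move-1"] -> 270.0 minutes of lag; lags["exfil-staging"] -> None (a gap)
```

Gaps (actions with no matching blue entry) and unusually long lags are exactly the spots where the debrief should dig for the biases listed above.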

For more information on cognitive bias and the impact on decision making, check out this article on MindTools. Also, this Harvard Business Review article is awesome: Why Good Leaders Make Bad Decisions.


As shown throughout this post, one major motivation for conducting a red team assessment is working through a full response and breach scenario to practice making decisions. Blue teams should recognize that their personalities are in-scope, and red teams should learn to focus on utilizing the psychological aspects of conflict to their advantage in addition to the technical vulnerabilities they uncover. Rigorous debriefing and teamwork can benefit all stakeholders involved, allowing for a fluid and rapid response during a real breach scenario.

Blog Series Wrap Up

That wraps up my brain dump and blog series about red teaming. While these posts were not overly technical in nature, I hope this series serves people well and encourages a proactive discussion on how analytical techniques can help organizations improve, both organizationally and technically. There might be another post or two like this down the line if these were helpful, particularly on debriefing. I will state it again: any form of this analysis conducted to better the organization is useful, even if it does not fit a strict definition. As the terms have evolved, there are subsets of study and room for additional applications in the industry.

Finally, since I understand the relationship between red and blue teams can be contentious, I offer this final advice:

To the red team: you are providing a service to your customer – the blue team. You should be as familiar with defensive techniques as you are with offensive ones. Prepare well, conduct a thorough assessment, and do not get wrapped up in politics.

To the blue team: the red team is on the same team. While they may cause you more work, it is much more fun practicing response in your real environment than in a training lab. Share info and teach the red team so that they can better train you!

Common Ground: Planning is Key

This is part two of a blog series titled: Common Ground. In part one, I discussed the background and evolution of red teaming, diving deep into how it applies to the information security industry and some limitations engagements face. In this part, I will discuss common components of red team planning and how they play into execution. There are many publications, documents, articles, and books focused on the structure of red teams, but I’m going to cover facets integral to engagement planning that I don’t see discussed enough.

Planning can be completed formally or informally. Organizations often benefit by being heavily involved in the planning process; however, sometimes the task is delegated to the red team with the organization giving final approvals. Finally, while not every single component here may be thoroughly planned in every engagement, I do not believe that lessens the validity of the engagement as long as execution ties back to the central theme or motivation for testing in the first place. I encourage you to read my disclaimer in my previous piece before continuing.

References that are useful for the generic planning process of network red team engagements:

Continue reading

Common Ground Part 1: Red Team History & Overview

Over the past ten years, red teaming has grown in popularity and has been adopted across different industries as a mature method of assessing an organization’s ability to handle challenges. With its widespread adoption, the term “red team” has come to mean different things to different people depending on their professional background. This is part one of a three-part blog series where I will break down and inspect red teaming. In this section, I will address what I believe red teaming is, how it applies to the infosec industry, how it is different from other technical assessments, and the realistic limitations on these types of engagements. In part two, I will discuss some topics important to planning a red team engagement, including organizational fit, threat models, training objectives, and assessment “events.” Finally, in part three, I will discuss red team execution, focusing on the human and strategic factors instead of technical aspects. That post will cover how network red teams supplement their technical testing by identifying human and procedural weaknesses, such as bias and process deficiencies inside the target’s incident response cycle, ranging from the technical responder up through the CIO.

Many thanks to my current team in ATD and those in the industry who continue to share in this discussion with me. I learn new stuff from all of you every day and I love that as a red team community, we want to continue honing our tradecraft.

Disclaimer: Before going too much further, it is obvious that this is a contentious topic. My viewpoints are derived from experience in the military planning and executing operations, followed by several years in the commercial industry helping to build and train industry-leading red teams; however, I am constantly learning and by no means think I have all of the answers. If you disagree with any points I discuss in these posts, I respect your viewpoint and the experience behind it, but we might have to agree to disagree. In the end, the importance of red teaming isn’t about a single person’s or even a group’s philosophy; rather, it’s about best preparing organizations to handle challenges as they arise. If your methodology suits your organization’s needs, I applaud it!

Continue reading

Creepy User-Centric Post-Exploitation

I love seeing red and blue teams square off during an engagement. It works best if both sides avoid selfish desires and focus on the task at hand; improvement and training are the ultimate goal. A key component of the offensive aspect of this feud is the ability for the red team to conduct adversarial actions against users to gather data and accomplish objectives. Throughout every engagement, the red team has to be constantly aware of user behavior — tracking their movements, exploiting their weaknesses, mapping relationships, and analyzing yielded data to better accomplish the adversarial mission. By collecting, analyzing, and processing user-based intelligence, the red team is armed and prepared to succeed in accomplishing training objectives while also carrying out realistic adversarial actions.

Keylogging, clipboard monitoring, and screenshots provide easy examples of user-centric post-exploitation actions that are both super useful for the red team and borderline creepy at times. These are also some of my favorite techniques for obtaining valuable intel, both before and after escalation. With strictly the data from these actions, our team has been able to obtain passwords to critical ICS nodes, get screenshots of admins accessing sensitive data repositories (e.g., mainframes for healthcare, finance, etc.), retrieve router configs copied to the clipboard, and many, many more awesome things. In short, these actions are crucial for success in a large-scale and long-term engagement.

One key thing about being on a red team: you must avoid limiting yourself to certain actions or tools out of habit. You have to ditch the myopic view and broaden your horizon. When I run out of ideas, I look to the real adversaries to see what they are doing. Several sets of threat actors (e.g., Flame, Duqu) have been particularly inspiring and have driven us to “up our game” when it comes to utilizing intelligence gathering against users. These actors all appear to have a wide array of modular capabilities in their tools that allow them to accomplish required actions. For our team, Empire and Cobalt Strike have the majority of capabilities we need for data collection; however, every so often we want to dig deeper and demonstrate additional actions that an adversary could carry out. In a recent engagement, those specific actions were webcam capture and microphone audio recording.

You might ask “… REALLY? Why do I need audio/video from a target?” If you have asked that, you might consider brainstorming about all the ways an adversary gathers intel from a system or why they gather it. For example’s sake, audio capture makes a lot of sense for a military command center or political office. In a separate case, video capture of a high ranking C-level executive in their private office might result in good blackmail material for manipulation or access to sensitive discussions.

Before I go too far, I would be remiss if I didn’t mention that the Metasploit Project has post-exploitation capabilities to carry out these actions. Due to a couple of fail cases (which all tools have depending on the situation), I took it upon myself to look for or develop alternatives… Plus, I <3 Offensive/Defensive PowerShell.
Continue reading

Empire & Tool Diversity: Integration is Key

Since the release of PowerShell Empire at BSidesLV 2015 by Will Schroeder (@harmj0y) and myself, the project has taken off. I could not be more proud of the community of contributors and users that has rallied together to help us maintain and continue building Empire. Since the project’s release, Matt Nelson (@enigma0x3) has joined our team and has taken charge of handling the various issues that arise from time to time (many thanks to him for this uphill battle). Also, Matt Graeber (@mattifestation) is now working with us and will likely have a lot of backstage influence on the continued development AND detection/mitigation of Empire. Come to think of it, I have been mostly hands-off with Empire development recently… Will and Matt work at speeds that I can only envy, and their vision for the tool is fantastic. This post continues an ongoing blog series that the Empire team is doing and will cover integration with existing toolsets, namely Metasploit and Cobalt Strike. The remainder of the series, with some background and an ongoing list of series posts, is kept here.

Early on, we recognized that we didn’t want Empire to be an “all-purpose” agent, so we wanted to ensure that it integrated well with existing operational tools and platforms. We firmly believe that there is no single universally perfect tool – every situation demands a different perspective and each blue team likely has different training objectives (red teams take note, they are your “customer”). With unity in mind, we wanted to primarily focus our integration on easily passing sessions back and forth between Empire, Metasploit, and Cobalt Strike.

Continue reading

Remote Weaponization of WSUS MITM

Network attacks (WPAD injection, HTTP/WSUS MITM, SMB relay, etc.) are a very useful attack vector for adversaries trying to laterally spread, gather credentials, or escalate privileges in a semi-targeted manner. This vector is used by known adversaries attempting to penetrate deep into networks, and numerous threat/malware reports have cited tools with functionality that allows attackers to perform these attacks in a remote fashion. Duqu 2.0 is a great example of where such attacks can be found in the wild, and the reports on this actor make a great case study.

I became even more familiar with these techniques thanks to demos and stories from Jeff Dimmock (@bluscreenofjeff) and Andy Robbins (@_wald0), with whom I work every day. After learning Responder, I toyed with broader capabilities such as MITMf, which combines a variety of tools into a weaponized platform for easy integration into your methodology. For those unfamiliar with these tools, please check out the following links:

In the case of the evil and wily APT actors in the referenced Duqu 2.0 report, the actors used a module built specifically for their toolkit and did not require the use of public tools or external scripts. Unfortunately, for a long period of time, public MiTM/relay attack tools still required you to physically be on a local LAN (that I’m aware of anyways… I welcome your comments). In early 2015, Kevin Robertson (@kevin_robertson) released Inveigh, a PowerShell network attack tool that uses raw sockets to implement a limited subset of techniques including LLMNR spoofing, MDNS spoofing, and SMB relay. Inveigh opens the door for many interesting attack chains and allows us to better emulate threats using these vectors in a remote fashion. If you care about why we emulate threats, look elsewhere for that discussion… Raphael Mudge has some really great ideas and thoughts on the topic.

Continue reading

Derivative Local Admin


User hunting is the process of tracking down where users are logged in or have a session in the network. By locating their login or session, you might be able to gain access to that machine, privesc (if required), and operate in the context of the new user. This is obviously most helpful with elevated user accounts.

Harmj0y has talked in-depth about user hunting in multiple blog posts and at several different conferences… you might wonder, why another post? While many people have paid attention and are plenty capable of running PowerView’s “Invoke-UserHunter” function, they might not fully understand how it works under the hood. This can prevent them from being successful in “austere” networks (you know, where things get weird), or in very, very large enterprise networks. Also, as people begin to lock down and follow best practices in Windows domains, I have noticed a new trend in how I am gaining access to elevated user accounts.

For some of the posts and presentations on user hunting, check out:

Continue reading