I recently spoke at BSides DC with Chris Ross (@xorrior) on replicating adversarial post-exploitation tools and tradecraft. During that presentation, we briefly covered the concept of specific adversary emulation and discussed, at a very high level, how the Diamond Model of Intrusion Analysis could be used as a framework for adversary emulation. In hopes of better explaining the concept, I decided it would be best to expand our thoughts in a blog post, one that developed into quite a long post.
As with most of my posts, I’ve tried my best to provide as much reference material as possible. I’ve cited these items throughout the post where appropriate, and want to ensure that the original authors get the credit they deserve:
- Original whitepaper: The Diamond Model of Intrusion Analysis, Sergio Caltagirone, Andrew Pendergast, and Christopher Betz
- Summary of the concept: Diamond Summary, Sergio Caltagirone
- Threat replication blog post: Puttering My Panda, Raphael Mudge
- APT Report on PUTTER PANDA: PUTTER PANDA, CrowdStrike
How Do I Replicate Threats?
I’ve blogged previously about red team operations and the various components and considerations that go into these types of operations. One subset of red team operations is threat replication exercises. During these exercises, the red team is responsible for modeling and emulating a specific threat in order to measure the effectiveness of detection, response, and mitigation in the environment. There are strengths and weaknesses to such an approach, and oftentimes it takes immense resources to accurately replicate an adversary.
One common question I am routinely asked, and have never had an adequate answer for until now, is: how do you replicate threats given the lack of commercially available intelligence on adversary operations, intentions, objectives, capabilities, etc.? I am not currently aware of any widely-accepted red team frameworks that formally define how an offensive team can effectively simulate threats. After refreshing myself on the Diamond Model, I see definitive value in using it in a red team context to better shape how the red team can effectively emulate an adversary during an engagement.
What is This Model Thingy?
The Diamond Model of Intrusion Analysis was published in a white paper in 2013 by Sergio Caltagirone (@cnoanalysis), Andrew Pendergast (@0xAndrew), and Christopher Betz. The model is based on set and graph theory and strives to create a framework for properly categorizing intrusion activity. The foundation of the model is an atomic element (the “event”) made up of at least four features (nodes) that are interconnected (links): the adversary, infrastructure, capability and victim. An event is a single step in a complete attack path that the adversary must take in order to achieve their objective. Activity threads are ordered series of events that represent the flow of the operation carried out by the operators. Threads can be correlated and matched across various campaigns to form activity groups, which share common tactics, techniques and procedures (TTPs).
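As a minimal illustration of these primitives (the class names, fields, and sample values below are my own for the sketch, not structures defined in the paper):

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """A single atomic event: the four core Diamond Model features."""
    adversary: str       # who conducted the action
    infrastructure: str  # assets used to deliver/control the capability
    capability: str      # tool or technique employed
    victim: str          # target of the action
    meta: dict = field(default_factory=dict)  # e.g. timestamp, phase, result

# An activity thread is an ordered series of events along one attack path.
thread = [
    Event("actor-1", "vps-198.51.100.7", "phishing-doc", "user@corp.example",
          meta={"phase": "delivery"}),
    Event("actor-1", "vps-198.51.100.7", "custom-rat", "workstation-42",
          meta={"phase": "c2"}),
]

# Threads sharing common TTPs across campaigns can then be correlated
# into activity groups.
assert all(e.adversary == "actor-1" for e in thread)
```

The point of the sketch is only that each event carries all four features, so any one feature can serve as a pivot to the others.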
From any one of the specific features, the intrusion analyst is able to observe activity of the other linked elements. For example, from a victim, the analyst will be able to identify the capabilities and infrastructure used in an event. From the capability or infrastructure, the intrusion analyst might be able to observe the adversary.
Now, let me break down some of these primitives a bit more.
Axiom 2 of the Diamond Model of Intrusion Analysis: There exists a set of adversaries (insiders, outsiders, individuals, groups, and organizations) which seek to compromise computer systems or networks to further their intent and satisfy their needs.
The adversary is the model feature focused on defining the person responsible for the malicious actions. While a simple concept, it can be broken down into the adversary customer and the adversary operator. The customer is the component of the adversary that defines the end goal of the operation and receives the intelligence collected. The operator is the technical component responsible for executing operations.
The capability feature describes the tools used and techniques employed by the adversary. It is important to note that within a given malicious event or activity thread, only a limited subset of the adversary's arsenal will likely be observed.
The infrastructure feature describes the physical and logical assets utilized to deliver, stage, control, and communicate with the capabilities. In the white paper, the authors define two types of infrastructure:
- Type I – Fully controlled or owned by the adversary
- Type II – Controlled or owned by an (often unwitting) intermediary
The infrastructure feature can reveal interesting details about the adversary behind the malicious action being investigated; it can also be used to link multiple malicious actions to a single adversary. The internet service provider (ISP) is a subcomponent of the infrastructure feature that can serve as a selector for identifying potential relationships between disparate events (actors often reuse the same ISP across actions). Further, the type of infrastructure (cloud-based, VPS, data center, private/residential, etc.) can be particularly interesting to observe because it reflects the adversary's operational infrastructure methodology, which is unlikely to change across campaigns, especially for actors with organizational division across operational functions (developers, operators, network ops, etc.).
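As a rough sketch of pivoting on the ISP selector (every event ID, ISP name, and actor label below is invented for illustration):

```python
from collections import defaultdict

# Hypothetical observed events: (event_id, isp, adversary_label)
events = [
    ("evt-1", "HostCo", "unknown"),
    ("evt-2", "HostCo", "actor-A"),
    ("evt-3", "OtherNet", "actor-B"),
]

# Group disparate events by the ISP selector; shared infrastructure
# suggests (but does not prove) a link to the same adversary.
by_isp = defaultdict(list)
for event_id, isp, label in events:
    by_isp[isp].append((event_id, label))

# evt-1 shares HostCo with a known actor-A event -> candidate attribution.
candidates = {e for e, label in by_isp["HostCo"] if label == "unknown"}
print(candidates)  # {'evt-1'}
```

In practice this is one weak signal among many, which is exactly why the model treats infrastructure as a pivot point rather than an answer.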
The victim model feature describes both the generic categorization of the victim, such as the industry, market, organization or persona, and the specific classification of the attacked victim assets. In targeted attacks, victims are often a means to an end and might be utilized as part of the infrastructure of the actor. This is best recognized during the early parts of an attack chain, where systems are compromised simply for their technical value while escalating privileges. For example, an adversary seeking to compromise a highly targeted organization might first compromise a vulnerable web server known to be used by users of the target. Next, they could waterhole the site to gain an initial foothold in the organization. Using credential abuse, they would laterally spread to available systems until eventually obtaining domain administrator rights. Up until this point, the assets compromised have no direct impact on the final objective; they simply allow the adversary to maintain a foothold while working toward it.
Meta-Features / Extended Model
In addition to the basic features, there are meta-features that focus on and define higher level constructs and relationships between the basic features outlined in the model. In the expanded diamond model, there are two critical meta-features added to expand the relationships and complexities involved in intrusion analysis: the socio-political factor and technology.
The socio-political meta-feature defines the underlying adversarial intentions/objectives in all malicious activity. This ties back to the larger strategic picture of adversary motivations, and is critical because it focuses on the concept of victimology. The intentions and objectives of the adversary lead to a selective choice of victims and how those victims play into the grander scheme.
The technology meta-feature introduces and expands upon the relationship between the infrastructure and the capability. This meta-feature ties together all of the backend technologies that make the communication between the capability and the infrastructure possible. Things like DNS, cloud VPS, compromised blogs, etc. can all potentially fall into this realm and contribute to the capabilities of the actor.
Putting It Together
Using this model, incident response teams can categorize observable activity as a single facet of an event and center their investigation on that feature. When analyzing relevant data about that node, they are then able to pivot to the corresponding features to expand their knowledge of the adversary. Furthermore, this model allows defenders to group specific threat actors and their linked activities across incidents using activity grouping, detailing patterns of shared TTPs.
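One simple way to approximate activity grouping is to score the overlap of TTP sets between incidents. This Jaccard-similarity sketch uses made-up incidents and an arbitrary threshold, purely to show the idea:

```python
# Hypothetical TTP sets observed across three incidents.
incidents = {
    "incident-1": {"spearphish", "xor-dropper", "cred-dump"},
    "incident-2": {"spearphish", "xor-dropper", "keylog"},
    "incident-3": {"sqli", "webshell"},
}

def jaccard(a, b):
    """Overlap of two TTP sets: |A intersect B| / |A union B|."""
    return len(a & b) / len(a | b)

# Pair up incidents whose TTP overlap crosses a chosen threshold;
# such pairs are candidates for the same activity group.
THRESHOLD = 0.4
pairs = [
    (x, y)
    for x in incidents for y in incidents
    if x < y and jaccard(incidents[x], incidents[y]) >= THRESHOLD
]
print(pairs)  # [('incident-1', 'incident-2')]
```

Real activity grouping weighs features far more carefully than a single similarity score, but the pivot-and-correlate motion is the same.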
So What? Why Does a Red Team Care?
Since blue teams can accurately and scientifically use this model to track adversaries, and the red team is attempting to emulate a specific adversary, it makes a lot of sense to utilize this model to also replicate the threat of choice. By starting with the knowledge of what adversary we are emulating, the red team can work to define and mimic that adversary across the various features in the diamond model.
First, the red team should either work with the customer to identify likely threats to the environment or take known threats they have observed before, and decide upon the particular adversary to be emulated. The red team will need to research all aspects of that actor and potentially seek additional information from threat intelligence teams, incident responders, and private threat sharing groups that are allowed to distribute reports, malware hashes, research, indicators, etc. To find groups and connections in this realm, I recommend working with established blue team members or malware analysts who likely work with partners in the same industry. In the private sector, intelligence is often proprietary, closely held, and rarely shared between competing organizations for fear of reputation loss or competitive advantage, which makes this step extremely difficult. During the exercise, the red team will be playing the role of both the adversary customer and the adversary operator, so there is no need to differentiate the two. The red team lead should heavily steer operational actions based on the predefined plans and intentions of the simulated customer.
For the purpose of capabilities, the red team can choose to closely mirror the TTPs and malware used by the actor, but does not need to employ the entire “arsenal” of the adversary. Since the exercise is simulating a series of malicious events within a single activity group, it would be highly unlikely for the adversary to be forced to use every tool at their disposal. Therefore, a “white card” can be introduced that allows the red team to simulate important aspects of the capabilities and indicators. As demonstrated during the BSides DC presentation, the red team could develop tools to specifically mimic the capabilities used at a near-identical level.
For infrastructure, it will not be necessary (or possible) to obtain the exact infrastructure used. The red team can closely simulate the type of infrastructure used, Type I or Type II, and the forms of infrastructure owned by the adversary. For example, if the adversary is using WordPress blogs for staging of payloads, the red team can easily set up and configure simulated victim WordPress pages to host their initial staging. Further, if the adversary is utilizing VPS providers, it would be possible for the red team to replicate those actions. As part of the infrastructure feature and the technology meta-feature, the red team will also want to ensure that their command and control (C2) mechanisms closely match those of the adversary. Cobalt Strike provides a method of performing Malleable C2, and Empire 2.0 also introduced extensible C2 modules that can aid red teams in this endeavor.
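As a rough illustration of shaping beacon traffic to reported indicators (this is not Cobalt Strike's or Empire's actual profile format; the URI, User-Agent, and host header below are invented placeholders):

```python
import urllib.request

# Hypothetical network indicators lifted from threat reporting
# on the emulated actor.
PROFILE = {
    "uri": "/search/index.php",
    "user_agent": "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)",
    "host_header": "update.example-cdn.com",
}

def build_beacon_request(c2_ip: str, profile: dict) -> urllib.request.Request:
    """Build an HTTP request whose observable indicators match the profile."""
    req = urllib.request.Request(f"http://{c2_ip}{profile['uri']}")
    req.add_header("User-Agent", profile["user_agent"])
    req.add_header("Host", profile["host_header"])
    return req

req = build_beacon_request("203.0.113.10", PROFILE)
# urllib normalizes stored header keys to capitalized form.
print(req.get_header("User-agent"))
```

Tools like Malleable C2 express this same idea declaratively: the defenders should see the adversary's indicators on the wire, not the red team's defaults.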
Throughout all phases of the engagement, the red team will want to ensure that each victim is chosen for a specific targeted purpose. Some victim assets will be used as a means to an end throughout the kill chain (intel collection, credential abuse, escalation of privileges) while others will serve as intermediary points to the final objective. This also ties into the socio-political meta-feature in that the end goal of the red team and the “crown jewel” objective should be based on knowledge of the adversary, not simply a blind selection by the red team. This will oftentimes be the hardest part to accurately carry out because the adversary's mindset and intentions are often unknown. In general, the red team can perform targeting analysis and center of gravity analysis to define what might be particularly useful to the adversary or harmful to the organization, but this will nearly always be an approximation.
Some of the model meta-features can be utilized to further replicate threats during the exercise. The meta-features are listed in the figures above and have varying effects on red team exercises. One feature I find useful to integrate is attacker resources. For example, if we are emulating an adversary with few resources, the red team should factor that into every facet of their engagement. Limited toolsets, software, training, facilities, etc. can be simulated to better emulate the actor. This hits home for me because I passionately believe that not every threat replication exercise should be a truly “advanced” threat. Many of the adversaries having success are extremely limited in resources and utilize open source tool sets.
Methodology is another meta-feature that is easy to build into the replication workflow, provided there is adequate reporting on the methodology of the actor in question. Many established APT groups have a plethora of reporting on them and case studies written based on response efforts.
Challenges of Replicating Threats
Simply stated, the challenge of replicating threats as a service is that we are not a real-world adversary. There is a high barrier to entry to performing these kinds of services. Here are some of the challenges we often face:
- Heavy requirement of reverse engineering and development time
- Custom tools require specific operator training and testing
- Intel must be enriched and modified to mold to the operational environment
- Must have a realistic mix of intelligence analysts and network operators, mirroring what the threats are using
Further, as ethical providers with the intention of improving our blue team counterparts, there are many limits set on our actions which prevent us from replicating certain aspects of tradecraft. Here is a list of some potential roadblocks to simulating adversaries as a service:
- Blackmail / extortion is off the table
- HUMINT recruiting is not in-scope
- Targeting of home networks or personal devices is prohibited
- Cannot target BYOD networks or segments due to lack of network authority
- Denial of service in any form is not approved
- Cannot compromise 3rd parties to waterhole or collect intel
- Cannot target affiliates or subsidiaries without explicit approval
- It is not cost-efficient or practical to use 0-day exploits for consulting
Case Study: PUTTER PANDA
Let’s pretend that we are a red team for a defense industrial base company, specifically one involved with aerospace. Through our local law enforcement contacts and many threat intelligence providers, we know that PUTTER PANDA is a threat that targets us, and we want to replicate it. We can plan and initiate the threat replication exercise using the model outlined above.
First, we research and gain knowledge of the adversary. Through CrowdStrike’s reporting, we learn a lot of potential information about the adversary, including a possible affiliation: PLA 3GSD 12th Bureau Unit 61486. We learn that the adversary is likely well-funded, with computer science backgrounds and military-based training. Furthermore, PUTTER PANDA likely has a medium to high amount of resources due to affiliations with other adversaries in the shared victim space. Interpreting the various reporting, we make the assumption that while the adversary does not use the most sophisticated techniques, they are largely successful using simple methods and focus on medium-term access until achieving the objective.
Pivoting to the infrastructure model feature, we can begin to break down and specifically plan for our engagement. We can register DNS domains similar to the style and naming of the reported indicators. We would likely have a mix of tech-related domains, “normal” domains, and aerospace-related domains. Based on analysis of the IP addresses used in previous campaigns, we learn that PUTTER PANDA uses Type I cloud infrastructure and data centers to host their C2 nodes (e.g., 220.127.116.11 -> WebNX). With this knowledge, we can provision systems within the same cloud hosting companies at geographically distributed locations.
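A quick sketch of enumerating candidate domains in that mixed naming style (every word, suffix, and TLD below is a placeholder, not an actual PUTTER PANDA indicator):

```python
# Hypothetical themes mirroring the naming style seen in reporting:
# a mix of tech-related and aerospace-related terms.
TECH = ["update", "cdn", "cloud"]
AERO = ["aero", "avionics", "flightsys"]
TLDS = [".com", ".net"]
SUFFIXES = ["portal", "news"]  # illustrative "normal"-looking suffixes

def candidate_domains(words, tlds):
    """Enumerate candidate C2 domains to consider registering."""
    return [f"{w}-{s}{tld}" for w in words for s in SUFFIXES for tld in tlds]

domains = candidate_domains(TECH + AERO, TLDS)
print(len(domains))  # 6 words x 2 suffixes x 2 TLDs = 24
print(domains[0])    # update-portal.com
```

A human would still vet the list against the reported indicators, since the goal is to echo the actor's style, not just to generate volume.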
For the capability model feature, we must plan, develop, and test the tools used to replicate the threat. If you are okay with general threat replication rather than highly specific replication, you could utilize a popular platform such as Cobalt Strike, which provides a TON of features to perform these kinds of actions. Cobalt Strike can generate MS Word droppers similar to the ones used by PUTTER PANDA and also has a Malleable C2 feature that makes the Beacon RAT communicate with configurable indicators. Also, the Empire RAT has a plug-and-play communication method that allows us to directly mirror the C2 method of PUTTER PANDA. For a more tailored, accurate, and specific threat replication, you will likely need to develop custom capabilities. Using C/C#, we could dev up a simple .scr or .exe dropper to simulate the initial dropper used by PUTTER PANDA with the same XOR obfuscation. Next, that dropper could be used to execute a repurposed and mocked-up version of their RAT. In all likelihood, some aspects of these tools will need to be modified and improved for better operational security and to prevent data exploitation/loss during the op. Follow-on post-exploitation capabilities used by the actor are mostly open source tools and can be delivered through the RAT.
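For the XOR obfuscation piece, a minimal sketch of a repeating-key XOR routine (the key and payload below are illustrative, not actual indicators) shows how a dropper could obfuscate and recover its second stage:

```python
def xor_obfuscate(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR, the style of obfuscation reported for the dropper."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

payload = b"simulated second-stage payload"
key = b"\x5A"  # illustrative single-byte key, not an actual indicator

blob = xor_obfuscate(payload, key)
assert blob != payload
# XOR is symmetric: applying the same key recovers the original.
assert xor_obfuscate(blob, key) == payload
```

The operational version would live in the C/C# dropper, but the transform is identical, which is what lets defenders write detections for it and the red team faithfully reproduce it.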
Prior to execution, operational planning can be conducted to analyze and identify what end state could be considered successful for the adversary customer. Generally, with suspected Chinese APT groups, the presumed intentions are intellectual property theft and intelligence targeting. With that in mind, the red team will likely go into the operation with the goal of obtaining sensitive plans or schematics for military hardware being produced by this company. A secondary objective, personal information about the engineers assigned to the projects, can be prioritized as it might feed intelligence recruitment of those engineers. Throughout the operation, the red team should carefully choose which assets to access based on the value those assets bring to either the attack chain or the ability to impact the two objectives.
Analyzing the methodology meta-feature, we can plan a smash-and-grab style operation utilizing phishing to gain initial access, followed by basic post-exploitation collection. We will utilize credential abuse, keylogging, and screenshots to gather information and laterally spread. Once we accomplish our objective, we will want to exfil it in chunks to various dead drop locations for later retrieval. The methodology meta-feature, combined with the timestamps meta-feature, defines the hours during which we operate; in this specific case, it dictates that we will likely want to operate during daytime hours in Shanghai based on the proposed attribution (nighttime in the US).
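A small sketch of enforcing that operating window (the 09:00-18:00 working hours are my assumption for illustration, not taken from the reporting):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

SHANGHAI = ZoneInfo("Asia/Shanghai")

def in_operating_window(utc_now: datetime, start_hour=9, end_hour=18) -> bool:
    """True if it is currently working hours for the emulated operators."""
    local = utc_now.astimezone(SHANGHAI)
    return start_hour <= local.hour < end_hour

# 02:00 UTC is 10:00 in Shanghai (UTC+8): inside the window.
assert in_operating_window(datetime(2017, 5, 1, 2, 0, tzinfo=timezone.utc))
# 14:00 UTC is 22:00 in Shanghai: outside the window.
assert not in_operating_window(datetime(2017, 5, 1, 14, 0, tzinfo=timezone.utc))
```

Gating beacon check-ins and operator activity on a check like this keeps the timestamp indicators consistent with the attribution story the exercise is telling.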
There is a definite barrier to entry in performing realistic threat replication during red team exercises. To be perfectly accurate, it will require reverse engineering, development, and proper modeling, and most organizations do not really understand how to accurately measure the threat they are trying to replicate. The Diamond Model of Intrusion Analysis presents a very clear approach by which we, as red team members, can lay out and present our plans for replicating a specific threat, the same way a SOC lays out and categorizes event activity. I would encourage red teams and blue teams alike to question the internal/external providers they use about how they perform threat replication and whether they truly replicate the multifaceted aspects of an adversary. I would also challenge red team members to become more familiar with the threats challenging organizations, and blue team members to categorize, track, and characterize the threats facing their organization rather than remediating and moving on. These approaches allow both sides to mature their methodologies.