Part 1 - Evaluating Security Stack Resilience Against Attack Use Cases - A Suggested Framework

The suggested Security Assessment Framework outlined here provides a structured approach to evaluating the resilience of a security stack against various specified attack use cases, such as malware delivery, command and control (CnC) communication, and lateral movement within a network. Each scenario is described with detailed prerequisites, including network configurations and the specific technologies involved, such as NGFW (Next-Generation Firewall), Network Sandbox, SIEM (Security Information and Event Management), and others. The framework focuses on actions within different network environments (e.g., Desktop LAN, with Cloud added later). It categorizes actions and suggests tags for easier selection of actions.

This framework offers an approach to help MSV customers maximize the utility and effectiveness of Mandiant Security Validation in their security environments.

In Part 1, I will outline the MSV evaluation process; in Part 2, I will post some of the use cases that can serve as guidelines when choosing and executing attacks.

MSV Life Cycle (revise as deemed necessary)

The following high-level MSV assessment life cycle is one approach to ensuring the security stack remains effective against evolving cybersecurity threats.

1. Plan:

    • Identify which action(s) to run based on:
      • the existing threat landscape and/or the security stack (or security technology) that should detect and/or prevent these actions
      • a threat profile - CTI- and threat-actor-driven, based on TTPs
      • security technology integration - use-case driven, based on expected configuration (block, alert, event generation); test data flows, event time stamping, and expected control behaviors
      • a threat model - an application/service-focused assessment to ensure controls function as expected (DB monitoring, authentication, WAF eventing, etc.)
    • Design and build assessments to find failures, not just to confirm that existing controls worked. Aim assessments at the weaker areas.
    • Schedule regular assessments (e.g., quarterly or bi-annually) to continually evaluate the security stack (or security technology) and to adapt to new threats, IT infrastructure changes, and technological advancements.
    • Note: the Security Assessment Framework using MSV can be the building block of this plan. A tag-based action selection sketch follows this list.
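As an illustration of tag-driven action selection, here is a minimal sketch in Python. The catalog, action IDs, and tag names below are hypothetical placeholders, not the actual MSV library schema; in practice you would export or query your own action library.

```python
# Minimal sketch: select actions for an assessment plan by tag.
# The catalog and its fields are hypothetical, not the MSV library schema.
from dataclasses import dataclass, field

@dataclass
class Action:
    action_id: str
    name: str
    tags: set = field(default_factory=set)  # e.g., ATT&CK TTPs, target controls

CATALOG = [
    Action("A-101", "HTTP malware download", {"T1105", "NGFW", "sandbox"}),
    Action("A-202", "DNS C2 beacon",         {"T1071.004", "NGFW", "SIEM"}),
    Action("A-303", "SMB lateral movement",  {"T1021.002", "SIEM"}),
]

def select_actions(catalog, wanted_tags):
    """Return actions whose tags overlap the threat-profile tags."""
    return [a for a in catalog if a.tags & set(wanted_tags)]

# Build a C2-focused plan aimed at the NGFW and SIEM.
for a in select_actions(CATALOG, {"T1071.004", "NGFW"}):
    print(a.action_id, a.name, sorted(a.tags))
```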

2. Run Attacks:

    • Run actions (attacks) to evaluate the effectiveness of the security stack (or security technology).

3. Analyze Results:

    • Analyze the outcomes of the simulation to identify weaknesses and gaps in the security stack (or security technology):
      • Why did the security stack (or a security technology) not block, alert on, or log an action?
      • Determine whether missed/unblocked/undetected attacks were due to configuration issues, outdated signatures, or other factors. A triage sketch follows this list.
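For illustration, here is a minimal sketch of triaging exported results into outcome buckets. The CSV file name and columns (action_id, blocked, alerted, logged) are hypothetical; map them to whatever your actual MSV export contains.

```python
# Minimal sketch: bucket exported action results for the Analyze step.
# The CSV layout is a hypothetical illustration, not an MSV export format.
import csv
from collections import Counter

def classify(row):
    """Reduce one action result to a single outcome bucket."""
    if row["blocked"] == "true":
        return "blocked"
    if row["alerted"] == "true":
        return "alerted"      # seen and alerted, but not stopped
    if row["logged"] == "true":
        return "logged_only"  # visibility exists, detection logic missing
    return "missed"           # gap: no block, alert, or log

with open("assessment_results.csv", newline="") as f:
    outcomes = Counter(classify(row) for row in csv.DictReader(f))

# e.g. Counter({'blocked': 41, 'alerted': 9, 'logged_only': 5, 'missed': 3})
print(outcomes)
```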

4. Close Gaps:

    • Based on the identified gaps:
      • Update signatures and rules.
      • Fine-tune settings, thresholds, and sensitivity to improve detection and reduce false positives.
      • Increase log visibility if needed.
      • Seek vendor support for guidance on closing these gaps (new signatures, feature requests, or others).
      • Implement additional security measures if needed.

5. Retest:

    • Track progress against the implementation of these improvements, and pre-compile a set of new actions for a future assessment to validate the implementation.
    • Rerun actions (attacks) to test the changes made.
    • Assess how the security stack (or security technology) performs after tuning and updates.

6. Ongoing Monitoring and Adjustment:

    • Continuously monitor the efficacy of the security stack (or security technology).
    • Regularly update and adjust validations based on emerging threats and changing network conditions.
    • Track progress against the implementation of improvements, and pre-compile a set of new actions for a future assessment to validate the implementation. A run-to-run comparison sketch follows this list.
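To make the retest and monitoring comparison concrete, here is a minimal sketch that diffs two assessment runs. The action IDs and outcome labels are hypothetical, reusing the buckets from the triage sketch above.

```python
# Minimal sketch: compare two runs to confirm gap closure and catch drift.
# Result dicts map action_id -> outcome bucket (hypothetical sample data).
before = {"A-101": "missed", "A-202": "logged_only", "A-303": "blocked"}
after  = {"A-101": "blocked", "A-202": "logged_only", "A-303": "missed"}

fixed     = [a for a in before if before[a] != "blocked" and after.get(a) == "blocked"]
regressed = [a for a in before if before[a] == "blocked" and after.get(a) != "blocked"]
unchanged = [a for a in before if a not in fixed and a not in regressed]

print("closed gaps:", fixed)       # tuning worked
print("regressions:", regressed)   # drift: investigate immediately
print("unchanged:", unchanged)
```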

Until the next post, stay tuned!


3 REPLIES

Hello @tameri, thank you for this post.
I agree with your framework, and we apply it regularly. Internally we call this framework "scenario-based tests", because we select a scenario (e.g., existing threat landscape, threat profile, security stack, ...) and run tests following all the points you outlined.

I'd also like to share the second type of test in our testing strategy. We call it "recurring tests".
We create a set of evaluations (we call them "special evaluations") that contain a sample of every type of attack we can select from the MSV library. The way we choose these actions is driven by:
- recent threat intelligence from our CSIRT/SOC group (we include the latest malware, TTPs, and actors related to our vertical)
- the need to cover all stages of attack (reconnaissance, delivery, exploitation, execution, C&C, action on target)
At the moment our special evaluations contain about 300 actions. We run these evaluations every weekend.
We then collect the results and plot them against time, to check that our security posture remains the same over time (a trend-tracking sketch follows below).
Thanks to recurring tests, in the past we have spotted situations where the firewall stopped blocking some actions due to a partially failed upgrade, or where alerts from the SIEM stopped working due to an overload.
The special evaluations are updated 2-3 times a year to be sure they contain the latest threats.
Every week we also try to "close gaps" (or at least some of them) for actions that were not detected/prevented/alerted.
The following week's test is also used to confirm the effectiveness of the change.
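For illustration, a minimal sketch of this kind of trend tracking, assuming each weekend's results land in a CSV with hypothetical columns date and detected (not an actual MSV export format):

```python
# Minimal sketch: plot weekly detection rate over time to spot posture drift.
# The CSV layout (ISO date, detected true/false) is a hypothetical example.
import csv
from collections import defaultdict
import matplotlib.pyplot as plt

per_week = defaultdict(lambda: [0, 0])  # date -> [detected_count, total]
with open("weekly_results.csv", newline="") as f:
    for row in csv.DictReader(f):
        per_week[row["date"]][1] += 1
        per_week[row["date"]][0] += row["detected"] == "true"

dates = sorted(per_week)  # ISO dates sort chronologically as strings
rates = [100 * per_week[d][0] / per_week[d][1] for d in dates]

plt.plot(dates, rates, marker="o")
plt.ylabel("% of actions detected")
plt.title("Security posture over time (recurring tests)")
plt.xticks(rotation=45)
plt.tight_layout()
plt.show()
```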

Thanks for starting this interesting discussion.
Paolo

Hello @paolocarrara, thanks for sharing your experience.

Regarding the weekly special evaluation exercise, do you run it manually? If yes, did you consider adding the Advanced Environmental Drift Analysis (AEDA) module to MSV?
It can automate this manual process at scale: you create monitors, which are scheduled jobs that repeatedly run security content with explicitly defined expected results; if those results are not met, MSV generates and sends you notifications (conceptually, something like the sketch below).
More information here: https://docs.mandiant.com/home/msv-monitors-advanced-environmental-drift-analysis-aeda
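Conceptually, a monitor reduces to a scheduled job like this minimal, product-agnostic sketch. It only illustrates the idea; run_evaluation() and notify() are hypothetical stand-ins, not AEDA's actual API.

```python
# Product-agnostic sketch of the "monitor" concept: rerun content on a
# schedule, compare against explicitly defined expected results, notify
# on drift. run_evaluation() and notify() are hypothetical placeholders.

EXPECTED = {"A-101": "blocked", "A-202": "alerted"}  # defined up front

def run_evaluation(action_ids):
    # Placeholder: call your validation platform here.
    # Dummy results so the sketch runs end to end.
    return {a: "blocked" for a in action_ids}

def notify(message):
    # Placeholder: send the alert wherever your team watches (mail, chat, ...).
    print("ALERT:", message)

def monitor_once():
    results = run_evaluation(list(EXPECTED))
    drifted = {a: (EXPECTED[a], results.get(a))
               for a in EXPECTED if results.get(a) != EXPECTED[a]}
    if drifted:
        notify(f"Drift detected (expected, got): {drifted}")

monitor_once()  # in practice a scheduler (e.g., cron) would run this weekly
```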

 

Hello @tameri,

Yes, we do it manually. I know the AEDA module but, at the moment, we are not in a position to buy it 😉