Agreeing on the scope of a security assessment, such as a penetration test, is easier said than done. Define the scope too narrowly, and you will miss the vulnerabilities another attacker could have exploited. On the other hand, an overly wide or ambiguous scope can unnecessarily increase the cost of the assessment and put the careers of people involved with the project at risk.
Defining the Scope
To understand the nature of the potential problem, read Jake Williams's blog post Penetration Testing Scope—Murky Waters Ahead, which highlights the challenges of determining what may be examined by the tester. For instance, if the assessment is supposed to exclude web applications, should a PHP vulnerability that allows remote code execution be excluded as well?
Defining the scope is as much a technical issue as it is a political one. Many assessments focus exclusively on "web applications" or only on "network services," instead of combining the two efforts, because these resources are usually maintained by different teams. As a result, the funding for these assessments comes from different sources, and different people will be blamed for security problems.
Unfortunately, this means that many penetration tests fail to adequately mimic a typical attacker's actions, because the scope of the assessment is artificially constrained.
Who Is Responsible for What?
One way to determine whether a particular vulnerability is in scope is to ask the client whether they are the ones responsible for patching that vulnerability. That's not satisfying, I know. The good news is that the assessor can still provide value by asking questions about the vulnerability, explaining its significance, and highlighting the need for someone within the client's organization to take a look at it.
A well thought-out penetration test will define the rules of engagement in a realistic manner without drawing these artificial boundaries. A consultant should strive to define the rules of engagement and the scope in this "fused" manner when selling the assessment. Similarly, organizations should combine these types of testing efforts into a single engagement, even if it means that two different teams need to collaborate and pool their budgets.
In a follow-up to his initial post, Jake pointed out that talking to "the customer is ultimately the correct way to obtain ultimate resolution on the issue." He explained that they report all issues they "find (even if they happen to be out of scope) … this provides a real value added to the client."
Talk to the Client Before Starting the Assessment
Thinking along these lines, consider asking the following questions before committing to or starting the engagement:
- Help the client understand and articulate what's important to them: Have you engaged providers in similar projects before? If so, what did you like about the outcome of that work? What do you wish you could have changed?
- Start defining procedures for changing the scope: Should I encounter concerns that the work is going beyond the scope of the project, whom should I contact first? Which people should get involved in the discussion related to keeping the project on track and within budget?
- Understand the client's success criteria: What top 3 factors would you examine to confirm that the project is a success? What aspects of the work should I monitor especially closely to keep the project from entering a bad state?
To further improve your security assessment skills, consider reading my Cheat Sheet for Creating Security Assessment Reports.