API Pentesting 101: The Rules of Engagement
Picture this. You are in the middle of an API security assessment. You stumble upon a staging server with several web applications that seem to have little to no defenses. The target seems to expose web services you haven’t previously seen during your penetration testing, and the API endpoints implemented here don’t seem to have any documentation.
What do you do?
Let’s talk about that.
Understanding Scope and the Rules of Engagement

It’s not uncommon for new web application penetration testers to get confused about the fundamental aspects of “scoping”. And I’d argue that carries forward to API pentesters too.
An engagement scope is part of a document that defines the parameters of a pentest, including the target systems, assets to be tested, and the testers’ permissions. The scope should also define what is and is not included in the pentest, as well as any restrictions on how the testers may conduct their assessment.
While the scope defines what is allowed to be tested, the rules of engagement (RoE) define how the testing will occur. And what to do when things go sideways. In many cases, the scope is part of the RoE itself.
There is a lot that can go wrong during API security testing. You want to set up for success and ensure you protect both your customer and yourself. A poor understanding of the rules of engagement can lead to trouble; we've seen pentesters end up with felony arrest records when the boundaries weren't clear.
So let’s define some of the key components you want to see documented in your API pentest engagements to support the RoE.
Technical Points of Contact

We can't always get direct access to the security experts. During our API security testing, we have to be ready to communicate with the right people affected by our work. It's important to have clear communication channels with the business as we go about our API testing. Nothing could be worse than a malformed API request negatively impacting the web service under test. Especially if it's in production.
Best practices include documenting the technical contact details for those responsible for key systems that are in scope. Chances are this won't be the actual developers responsible for the code, but the managers who are ultimately responsible for the API endpoints.
The importance of identifying the right point of contact for a component can't be overstated. If critical vulnerabilities are found during API security testing, you want to know exactly WHO to go to. Maybe it's a security triage contact. Maybe not. But you need a clear channel to communicate, especially if you end up finding a high-impact critical vulnerability that may expose production data.
I'll go one step further, though. I like FedRAMP's position on the Technical Point of Contact in their Penetration Testing Guidance, which calls for a backup contact for each subsystem and/or application that may be included in the API security testing engagement.
I agree with that. If, during penetration testing, you come across vulnerabilities that materially impact the business, the last thing you want is an out-of-office reply when you reach out to a contact.
Trust me. It happens. And it sucks.
Learn from my war wounds. I once brought a web service to its knees during some tainted injection testing, and it took several hours to recover because the primary contact had gone on holiday without informing his replacement that API security testing was occurring in that area of the web application.
Evidence & Sensitive Data Handling

An important component of your rules of engagement should cover how you will handle data throughout the engagement. There are several aspects to this.
Defining Sensitive Data
During security testing, you may be exposed to sensitive information about the company, the system, and/or its users. Sensitive data handling needs special attention in the RoE, and proper storage and communication measures should be taken to ensure the protection of that information. If your client is covered under regulatory regimes such as the Health Insurance Portability and Accountability Act (HIPAA) or European data privacy laws like the General Data Protection Regulation (GDPR), only authorized personnel should be able to view the data discovered.
The RoE should clearly articulate what sensitive data can be potentially exposed, who can access it, and how to handle it.
API request history
You should try to generate a complete log of all API traffic you send to the REST API. Professional tools like Burp Suite make this easy, as you can proxy all traffic and record every HTTP request sent to the site and/or service. You can leverage Burp Suite project files to record not only the HTTP history but also every input tried in the Repeater and Intruder tools.
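If you script any of your probing outside of Burp, you can still capture it by pointing your client at Burp's proxy listener so everything lands in the same project file. Here's a minimal sketch in Python using the requests library; the listener address (Burp's default of 127.0.0.1:8080), the CA certificate path, and the endpoint are all placeholders for your own setup.

```python
# A minimal sketch: route scripted API calls through Burp Suite's proxy
# listener so every request/response is captured in the project's HTTP history.
import requests

BURP_PROXY = "http://127.0.0.1:8080"              # assumed default listener; adjust to yours
PROXIES = {"http": BURP_PROXY, "https": BURP_PROXY}

# Point verify= at Burp's exported CA certificate (converted to PEM) so HTTPS
# traffic can be intercepted without disabling certificate validation outright.
BURP_CA_CERT = "burp-ca.pem"                      # hypothetical path

def call_api(method: str, url: str, **kwargs) -> requests.Response:
    """Send a request through the intercepting proxy so it gets logged."""
    return requests.request(
        method,
        url,
        proxies=PROXIES,
        verify=BURP_CA_CERT,
        timeout=10,
        **kwargs,
    )

if __name__ == "__main__":
    # Example probe against a placeholder in-scope endpoint.
    resp = call_api("GET", "https://api.example.com/v1/users", params={"page": 1})
    print(resp.status_code, len(resp.content))
```

Because everything flows through the proxy, your scripted tests end up in the same evidence trail you'll hand over with your report.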
It’s important that both parties agree on how logs should be shared, and in what format. While some parties prefer JSON or XML, others may expect CEF, CLF, or ELF. Heck, I’ve been asked for PCAPs on HTTPS endpoints before. You just never know.
Lucky for you, there are lots of tools to convert Burp Suite project logs into other formats. A bit of Google dorking will turn up what works for you, or you can roll a quick converter yourself, as sketched below.
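Here's a rough sketch that turns a Burp "Save items" XML export into Common Log Format (CLF) lines. The element names reflect a typical export and may differ between Burp versions, so treat this as a starting point rather than a finished converter.

```python
# A rough sketch: convert a Burp Suite "Save items" XML export into CLF lines.
# Element names (<item>, <host>, <method>, <path>, <status>, <responselength>,
# <time>) are assumptions based on a typical export -- verify against your file.
import xml.etree.ElementTree as ET

def burp_xml_to_clf(xml_path: str, out_path: str) -> None:
    tree = ET.parse(xml_path)
    with open(out_path, "w") as out:
        for item in tree.getroot().iter("item"):
            host = item.findtext("host", default="-")
            method = item.findtext("method", default="-")
            path = item.findtext("path", default="-")
            status = item.findtext("status", default="-")
            size = item.findtext("responselength", default="-")
            time = item.findtext("time", default="-")
            # CLF: host ident authuser [date] "request" status bytes
            out.write(f'{host} - - [{time}] "{method} {path} HTTP/1.1" {status} {size}\n')

if __name__ == "__main__":
    burp_xml_to_clf("burp_export.xml", "api_history.log")
```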
Error tracing
Tracing the error responses coming back from web applications, and understanding how that information is shared, is also important. As an example, you can tamper with parameters as you send requests to the server and see if you can leak information about the API's programming language through the stack traces in its error responses. I talked about this approach in a previous article.
This is important because, when communicating with those who manage the APIs, you want to know what they expect. You want to be able to share exactly what was sent, when, and how the REST API responded. You may also be prohibited from purposely abusing endpoints in this way, so it's important you understand what is and isn't allowed during the engagement.
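If the RoE does permit this kind of tampering, the probing itself can be as simple as the sketch below. The endpoint, parameter values, and leak signatures are illustrative only; the point is to record exactly what you sent and what came back so you can share it with the API owner in full context.

```python
# A minimal sketch (only where the RoE explicitly allows tampering): send
# malformed parameter values to a hypothetical in-scope endpoint and flag
# responses that leak implementation details through verbose errors.
import requests

TARGET = "https://api.example.com/v1/orders"   # placeholder endpoint
TAMPERED_VALUES = ["'", "%00", "{{7*7}}", "-1", "A" * 4096]

# Strings that commonly betray the underlying framework or language.
LEAK_SIGNATURES = [
    "Traceback (most recent call last)", "java.lang.",
    "org.springframework", "System.NullReferenceException",
    "ORA-", "SQLSTATE",
]

def probe(param: str) -> None:
    for value in TAMPERED_VALUES:
        resp = requests.get(TARGET, params={param: value}, timeout=10)
        hits = [sig for sig in LEAK_SIGNATURES if sig in resp.text]
        if resp.status_code >= 500 or hits:
            # Record exactly what was sent and what came back.
            print(f"[!] {param}={value!r} -> {resp.status_code} leaked {hits}")

if __name__ == "__main__":
    probe("id")
```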
Encrypting Communications
It is typically expected that when reporting security threats you do so over an encrypted channel. There should be a secure process that not only protects the data but also ensures a clear, non-repudiable chain of evidence.
As an example, whenever I share a Python proof-of-concept exploit I have written for a vulnerability I have discovered I both digitally sign and encrypt the exploit using GPG. This ensures that no one tampers with my code during transfer and that only the expected party is able to access it.
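The exact tooling doesn't matter much, but to make the idea concrete, here's a sketch that shells out to the gpg command line to sign and encrypt a PoC before it leaves your machine. The key IDs and filenames are placeholders; use your own signing key and the recipient key agreed upon in the RoE.

```python
# A sketch of the sign-and-encrypt step, shelling out to the gpg CLI.
# Keys and filenames below are placeholders, not real identities.
import subprocess

def sign_and_encrypt(path: str, recipient: str, signer: str) -> str:
    """Produce an ASCII-armored, signed and encrypted copy of a PoC file."""
    output = f"{path}.asc"
    subprocess.run(
        [
            "gpg",
            "--armor",               # ASCII output travels well over email/tickets
            "--sign",                # proves the file came from you, untampered
            "--encrypt",             # only the intended recipient can open it
            "--local-user", signer,
            "--recipient", recipient,
            "--output", output,
            path,
        ],
        check=True,
    )
    return output

if __name__ == "__main__":
    sign_and_encrypt("poc_exploit.py", "triage@client.example", "you@pentest.example")
```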
It’s also important when sharing your logs. If you share the complete Burp Suite project file as an example, you may be sharing sensitive data pulled back during your API security testing. That has to be encrypted to ensure protection of the data.
A good RoE will include the appropriate information on where to find and validate cipher keys needed for this process.
Reporting Security Issues

Since we are talking about communications, let’s also discuss the whole process of reporting security issues. The rules of engagement should clearly define when and how to report a finding.
As an example, Microsoft's Penetration Testing Rules of Engagement clearly describe what to do if you find a potential security flaw related to the Microsoft Cloud or any other Microsoft service. They include instructions on how to validate the finding first and then submit valid vulnerabilities to the Microsoft Security Response Center (MSRC).
The rules of engagement should be developed to make this clear so everyone knows when and how to report issues during API security testing, and clearly articulate how evidence handling should be maintained.
More importantly, the RoE should clearly define when to stop API security testing. Knowing the boundaries of an engagement is critical. Especially when informing key stakeholders of critical vulnerabilities discovered.
I think the Technical Guide to Information Security Testing and Assessment (NIST SP 800-115), Section 7 describes this best…
“appropriate personnel such as the CIO, CISO, and ISSO are informed of any critical high-impact vulnerabilities as soon as they are discovered.”
NIST SP 800-115 – Section 7
Permission to Test

Your rules of engagement should include the appropriate signatures of those in authority to give you permission to test. This should clearly articulate what you are allowed to do as part of your security testing, and when.
This is important even if you are doing an API assessment internally at a company. I have seen red teams forget to get this approval in writing and then get themselves into a world of hurt; I know someone who lost his job when a CTO and CSO couldn't agree on whether the red team's compromise objectives were properly authorized… and he was scapegoated for conducting the operation.
No one likes to be visited by HR because of confusion about an (un)authorized hack.
Leave no doubt. Get permission to test in writing.
Now, if you are engaging in API bug bounty hunting, this might be approached differently. You want to make sure the bug bounty program (BBP) includes written safe harbor policies that cover your jurisdiction.
Conclusion
The rules of engagement are critical when conducting an API pentest. They help to ensure that communications are clear and that everyone knows what is expected during the testing process.
Having a clear understanding of the rules of engagement will help prevent any misunderstandings or issues during the testing process.
Want more insights and resources to help you during your API pentest engagements? Then make sure you grab my Ultimate Guide to API Hacking Resources.