Why writing API exploits is important when reporting vulnerabilities

It’s incredible how many times I have spoken with an IT professional who finds out the kind of work I do and almost gets angry. They erroneously equate the proof-of-concept (PoC) exploit development I do with the work of malware authors. Or with one of those rude bastards who report issues on Friday afternoons, right before a long weekend.

It’s just not true.

Recently on Discord, I was discussing this with a few newer bug bounty hunters and was surprised to learn they don’t usually write a lot of PoC code themselves. As explained by one hacker, “I don’t want to get in trouble with the software vendor, and it’s too much effort.”

They were surprised I almost always include a working PoC for anything I report. And most were shocked when they heard I rarely have more than a few messages back and forth with security triage. All because the PoC exploit does most of the lifting in the reporting process.

Let’s talk about that.

So why is it important to write PoC exploits when reporting vulnerabilities?

Simply put, a PoC exploit can help demonstrate the severity of an issue in ways that a written description alone cannot. It shows the vendor exactly what an attacker could do with their vulnerable API and helps them take remedial action. This aids significantly in understanding how a vulnerability works and the potential risk it represents.

In addition, PoC exploits serve as excellent documentation of the issue. A proof of concept can support third-party assessments and provide additional context when required. Plus, creating a PoC exploit is often much faster than writing a detailed technical report on an API vulnerability.

I’m not saying you shouldn’t write a technical report. Exactly the opposite. But why waste time with pages and pages of text and screenshots of Burp Suite and/or your terminal windows when PoC code can do all the heavy lifting, supported with a screencast showing the exploit in action?

And remember the benefit to the developers on the other side. If they can quickly run a PoC to see the vulnerability in action, this can really accelerate how quickly the security issue can be fixed.

PoC + debugger = ❤️ for developers

Will a vendor actually run my exploit?

One comment I’ve heard a few times is that the frontline support team is too clueless to understand and use a PoC exploit. That’s a pretty big generalization. And an unfair one.

Sure, the software vendor may have terrible frontline support. But if they have enough security maturity to have a formal Vulnerability Disclosure Program (VDP) and/or Bug Bounty Program (BBP), there is a good chance their security triage team is different from the people responsible for general customer service and support.

You just need to find out how and where to submit your report properly. (More on that in a moment.)

A good security triage team will know how to review your submission, including any exploit code you provide. They will usually have sandboxed environments that allow them to test your exploit or, at the very least, have a documented escalation path to someone who can.

Having a working PoC as part of the report also elevates your report above others. Ask anyone in triage; if they have a working exploit that demonstrates impact, it gets pushed to engineering much faster than without.

But you need to know how and WHERE to submit it to get it looked at.

Finding where to submit exploit PoCs

Hopefully, you are working within the scope of a documented and defined program that already explains how to submit a vulnerability report. If the software vendor uses a crowdsourced platform like HackerOne or Bugcrowd, submitting through its reporting form is usually the best place to start. These platforms do a lot of the heavy lifting to prioritize and triage all issues within a program.

For example, here is HackerOne’s documentation on submitting a report through their platform.

But what if the vendor isn’t using one of these platforms?

Have you ever heard of security.txt? It’s a standard that allows websites to define their security policies in a well-known, established manner through a simple text file. In fact, it’s now formalized as an RFC (RFC 9116).

You can usually find this file in an app directory at <domain>/.well-known/security.txt or at the root of the domain at <domain>/security.txt. As an example, here is one of mine: https://www.vulscan.com/.well-known/security.txt.

The file usually includes the following:

  • The expiration date of the content in the security file
  • A contact you can reach to report an issue
  • The preferred language to use when communicating with the vendor
  • A link to any written security policies (VDP/BBP etc.)
  • A link to any public encryption keys (PGP/GPG etc.)
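For illustration, a minimal security.txt might look like the sketch below. The field names come from RFC 9116; the contact address and URLs are placeholders, not real endpoints.

```
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:00.000Z
Preferred-Languages: en
Policy: https://example.com/security-policy
Encryption: https://example.com/pgp-key.txt
```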

With that information, you should have everything you need to make a submission.

What to include in an exploit PoC

So you’ve decided to submit your reports with a PoC. Excellent! But what should you include in the exploit?

That’s a great question.

The best answer I can give you is this: write the least amount of code needed to demonstrate the vulnerability, and do it in a self-explanatory way. No more. No less.

Let me explain.

I’ve seen hackers write entire modules with detailed logging, a huge number of argument switches, and even full-color ANSI output displays. Heck, I once saw someone write a complete UI program in Python to show something that could have been done with a one-line curl command.

STOP IT! Don’t be cute. You aren’t any more ‘leet because you learned how to use an ANSI art generator.

You are just wasting the time of everyone involved. Sure, formatted output explaining what the exploit is doing at each stage is helpful. But the more ‘cute’ you try making your exploit, the more chances something will break.

What good is color output if the person in triage is color blind or using a non-standard terminal? Why force them to download ten extra packages so your exploit’s output can look cool? No one cares but you. So stop wasting your time.

Instead, your PoC exploit should include things like:

  • Well-documented variables. e.g., use 'target_url' instead of 'u', and 'counter' instead of 'c'.
  • Make sure variables can be altered at the top of the file or through command-line arguments. Never make someone modify values inline within the actual code blocks.
  • Avoid setting environment variables where possible; you want the exploit to run self-contained. If you must (e.g., for reusable access tokens), remember to unset them after execution.
  • Use self-describing function/method names. e.g., 'run_exploit()' is better than 'go()', and 'convert_response()' is better than 'conv_resp()'. You get the idea. Saving a few keystrokes doesn’t make it easier for the triage team to read. Remember, they may not understand the code you are writing. So don’t make it harder on them.
  • Use comments to explain the setup and execution. If a condition needs to be met, ensure it’s well described. But you aren’t writing a book here; if you need that much explanation, you haven’t done a great job in the demo code. SHOW, not TELL.
  • This isn’t production code. It’s OK to only have rudimentary error handling. Just be clear when something fails outside of the scope of what the exploit is attempting to do. As an example, there is no need to have complex self-healing connection retry logic. Just fail and error out, explaining to rerun the script. However, if the exploit could destabilize a system (e.g., a bad buffer overflow that could hang the service if it fails), make sure you alert and PROMPT the user first so they are aware of it. Yes, that means this is that one time when general try/catch handlers are OK. 🙃
  • Demonstrating impact doesn’t have to be overly complex. You don’t need a complex reverse shell to show remote code execution. Popping calc or having the server send a request to webhook.site or requestbin.com is just fine. Try to avoid using anything that requires triage to have a commercial license or an account. i.e., Burp Collaborator isn’t the best option unless you know the vendor has professional licenses to Burp Suite.
  • If demonstrating cross-tenant data leaks (CTDL), document and demonstrate that both tenants are under your control and that you, as the attacker, are authorized to access them. Use naming conventions like attacker and victim instead of tenant_1 and tenant_2 so it’s absolutely clear.
  • If you are demonstrating a privilege escalation vulnerability, use naming conventions like low_priv and high_priv instead of attacker and victim. Privilege boundaries are much easier to explain this way.
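To make that advice concrete, here is a minimal sketch of how such a PoC might be laid out. Everything in it is hypothetical — the endpoint, the token placeholder, and the broken-authorization flaw it pretends to exploit — but it shows the descriptive naming, command-line configurability, and rudimentary error handling described above.

```python
# Hypothetical PoC: attacker's token reads an invoice owned by the victim tenant.
# Usage: python poc_invoice_read.py https://api.example.com 42
import argparse
import sys
import urllib.error
import urllib.request


def build_request(target_url, victim_invoice_id):
    """Craft the single request that triggers the (hypothetical) flaw:
    the attacker's token fetching an invoice belonging to the victim."""
    return urllib.request.Request(
        f"{target_url}/api/v1/invoices/{victim_invoice_id}",
        headers={"Authorization": "Bearer ATTACKER_TOKEN_HERE"},
    )


def run_exploit(target_url, victim_invoice_id):
    request = build_request(target_url, victim_invoice_id)
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            print("[+] Server returned the victim's invoice -- vulnerable!")
            print(response.read().decode())
            return 0
    except urllib.error.HTTPError as error:
        # Rudimentary error handling is fine; just fail clearly.
        print(f"[-] Request rejected (HTTP {error.code}); target may be patched.")
        return 1


if __name__ == "__main__" and len(sys.argv) > 1:
    parser = argparse.ArgumentParser(description="PoC: cross-tenant invoice read")
    parser.add_argument("target_url", help="Base URL, e.g. https://api.example.com")
    parser.add_argument("victim_invoice_id", help="Invoice ID owned by the victim tenant")
    args = parser.parse_args()
    sys.exit(run_exploit(args.target_url, args.victim_invoice_id))
```

Notice there are no extra packages, no colors, and nothing to configure inside the code itself — the two values triage needs to change are command-line arguments.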

Where possible, use the simplest tools over more complex code. For example, have you read my article on how to Exploit APIs with cURL? I show how you can demonstrate the exploitation of APIs using nothing more than cURL, which exists on almost all operating systems these days, including macOS and Windows.

Protecting your exploits

A well-written PoC exploit could do real damage to the vendor if it falls into the wrong hands. While your payload may be designed to be benign, there are no guarantees others won’t weaponize it. As such, it’s a good idea to ensure you know exactly WHO is receiving it and that they are receiving EXACTLY what you wrote.

I encourage you to digitally sign and encrypt your exploit PoCs. If you can get access to the public PGP key of the security triage team (see security.txt above), you can use that to ensure only authorized personnel get access to it. You can ASCII-armor it and attach it as a text file to the report, giving you confidence that it can’t be altered during the submission process without the digital signature breaking.
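With GnuPG on hand, that flow might look like the sketch below. The key-generation step is only there so the commands run end to end with a throwaway key; in the real workflow you would import the triage team’s published public key instead, and the email address and filenames are placeholders.

```shell
# Use a throwaway keyring so this demo doesn't touch your real keys.
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# Stand-in for the triage team's key (normally: gpg --import their-key.asc).
gpg --batch --quiet --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'triage@example.com' default default never

# A trivial stand-in for your PoC script.
printf 'curl -s "https://api.example.com/api/v1/invoices/42"\n' > poc.sh

# Sign with your key and encrypt to the triage team's key, ASCII-armored
# so it can be attached to the report as a plain text file.
gpg --batch --quiet --pinentry-mode loopback --passphrase '' \
    --armor --sign --encrypt \
    --recipient 'triage@example.com' --trust-model always \
    --output poc.sh.asc poc.sh

head -n 1 poc.sh.asc
```

Only someone holding the triage team’s private key can decrypt poc.sh.asc, and any tampering in transit breaks the signature verification on their end.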


When writing PoC exploits for API vulnerabilities, the goal should always be to make them as simple and self-contained as possible. Use well-documented variables and function names that are descriptive in nature, and keep environmental setup/configuration to a minimum.

Doing this will help reduce the time it takes triage personnel to understand the exploit and allow them to focus more on escalating the issue quickly. And it will empower developers and testers to remediate quickly and test fixes.

Finally, ensure your PoC exploit is secure and encrypted so only authorized personnel can access it. This will help protect it from falling into the wrong hands and adds an extra layer of security to any vulnerability report you submit. Writing exploits for APIs doesn’t have to be complicated or overly complex – just keep it simple and secure.

Exploit all the things!

BTW, if you are just getting into API hacking and want more interesting resources, check out my free Ultimate Guide to API Hacking Resources.

Hack hard!

Dana Epp