API Security Testing using AI in Postman

Artificial Intelligence (AI) is becoming an integral part of modern software, revolutionizing the way we work.

From predictive analytics to automation, AI’s capabilities are increasingly leveraged to enhance efficiency and accuracy in diverse fields.

Software like Postman, which we use for breaking APIs, is not immune to these fundamental changes. The incorporation of AI into Postman is transforming API security testing, delivering improved accuracy, speed, and depth of testing, and helping ensure robust security for APIs.

In this article, I am going to show you how to utilize Postman’s powerful generative AI models in their new Postbot feature to quickly generate security tests for API endpoints.

What is Postbot?

Postbot is a feature of Postman introduced in the summer of 2023 that provides an AI assistant for API workflows and testing.

The AI behind Postbot leverages ML models to help you debug and understand APIs, write tests faster, and make sense of large quantities of data.

Out of the box, Postbot is a great tool to help you quickly scaffold tests based on results that come back from API requests and even visualize the results in a graphical manner.

But I am going to show you how to push the limits of the AI assistant to help you author security tests that can uncover vulnerabilities in the APIs you are testing.

Ready to weaponize this AI? Let’s go!

The basics of Postbot

Before we can have some real “fun” with Postbot, let’s first look at how it works in Postman.

I’m in a weird mood today, wanting to kick ass and chew bubblegum. And damn, I’m all out of gum.

Since there aren’t really any cool Duke Nukem APIs out there, I think it’s only appropriate that we try this out on the Chuck Norris Joke API at api.chucknorris.io.

I’ll start by creating a new workspace in Postman and generating a collection for the endpoints listed there. It might look something like this:

With the collection created, let’s start using the AI assistant.

Generating our first set of tests with Postbot

So if you click on the ellipsis (the three dots) to the right of the Collection name, you will see an option to “Generate Tests”. Click it.

You will see a list of the endpoints it will pass to Postbot to create your first set of tests. Click the “Generate Tests” button and watch the AI do its thing.

When it’s complete, you will see that it reached out to each endpoint, looked at the results, and then created some initial tests for you. All in a matter of seconds.

If you hit “Save Tests” it will move those AI-generated test scripts into the Tests tab of each endpoint. Let’s take a quick look at what that output looks like.

Not a bad start. I wouldn’t say these are great tests, but they have scaffolded a pretty good starting point. If you are new to writing tests in Postman, this will be immensely helpful.
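
To give you a feel for it, here is a hypothetical sketch of the kind of scaffold Postbot produces: status code, latency, and body-shape checks. The `pm` stub at the top exists only so the script runs outside Postman; inside Postman, the real `pm` object is provided for you and the stub is unnecessary.

```javascript
// Minimal pm stub so this Postbot-style scaffold runs standalone in Node.
const results = [];

const pm = {
  response: {
    code: 200,
    responseTime: 120,
    json: () => ({ id: "abc123", value: "Chuck Norris counted to infinity. Twice." }),
  },
  test(name, fn) {
    try { fn(); results.push({ name, pass: true }); }
    catch (e) { results.push({ name, pass: false, error: e.message }); }
  },
  expect(actual) {
    return {
      to: {
        eql(expected) { if (actual !== expected) throw new Error(`${actual} != ${expected}`); },
        be: { below(n) { if (!(actual < n)) throw new Error(`${actual} >= ${n}`); } },
        have: { property(p) { if (!(p in actual)) throw new Error(`missing ${p}`); } },
      },
    };
  },
};

// Typical Postbot-style scaffold tests: status code, latency, body shape.
pm.test("Status code is 200", () => pm.expect(pm.response.code).to.eql(200));
pm.test("Response time is under 1s", () => pm.expect(pm.response.responseTime).to.be.below(1000));
pm.test("Joke has a value field", () => pm.expect(pm.response.json()).to.have.property("value"));

results.forEach((r) => console.log(r.pass ? "PASS" : "FAIL", r.name));
```

Inside Postman you would keep only the three `pm.test(...)` calls; the canned response data above is just a stand-in for a real reply from the API.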

The Postbot AI assistant just wrote dozens of tests based on what it detected in the API. This in itself is a huge time saver.

But we can do more than that.

Generating a test using Postbot’s prompt interface

Just for fun, let’s delete all the tests from the Random Jokes endpoint. While you are here, check out the purple button to the right of the test window.

That button opens the Postbot AI assistant prompt dialog. You can just as easily tell Postbot to write tests for this endpoint by writing “Add tests for this endpoint” in the prompt:

If you then hit the play button, Postbot will do its thing and regenerate a new set of tests. Depending on what you wrote in the prompt, it may or may not generate different sets of tests.

And this is the hidden magic. While Postman has been promoting this as an AI assistant that can run just a few basic commands, the fact is there is a full generative AI model backing it. And with a little experimentation, you can make it do a WHOLE LOT more. 😈

Using Postman’s AI assistant to hack APIs

So we can see that Postbot can generate test code for us. Just how much code depends on the prompt.

Let’s have some fun with this.

One of the ways to attack an API is to abuse the input parameters that it accepts. So many vulnerabilities come from misusing data passed in.

In the Chuck Norris API, a great starting point might be in the joke search endpoint at https://api.chucknorris.io/jokes/search. Pass it a parameter of “query” with some sort of value you are searching for, and send it in to see how it does.
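
Building that request is trivial; a quick sketch of the URL construction (using `encodeURIComponent` to keep the query safe inside the URL):

```javascript
// Sketch: building the search request for the query parameter.
const query = "gum";
const url = "https://api.chucknorris.io/jokes/search?query=" + encodeURIComponent(query);
console.log(url); // https://api.chucknorris.io/jokes/search?query=gum
```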

Interesting. Chuck Norris’ favorite chewing gum is bullets. Who knew?

Anyway, let’s see if we can get the AI to generate some interesting test code to abuse the query parameter in the endpoint.

TIP: For the rest of this article, I will regularly delete the tests generated in the search endpoint so I can show you a blank canvas for each test script we generate. I encourage you to do the same while learning how to use the AI prompts in Postbot.

Testing for null values

Let’s start small. How about a null parameter?

Here is the prompt I provided:

Here are the results:

Let’s look at the script closer:

Not bad for a first try. But not a good test. The reality is that passing NULL is treated as the literal string “NULL”, and, wouldn’t you know it, Chuck Norris can dereference NULL. (Yeah, bad joke.)

But at this point, we could easily change NULL to %00 and see how that goes.
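
To make the distinction concrete, here is a quick sketch of the two payloads:

```javascript
// Sketch: the literal string "NULL" versus a real percent-encoded null byte.
const base = "https://api.chucknorris.io/jokes/search?query=";

// Postbot's first attempt effectively sends the four characters N, U, L, L:
const literalNull = base + encodeURIComponent("NULL");

// What we actually want to probe with is a null byte, which encodes to %00:
const nullByte = base + encodeURIComponent("\u0000");

console.log(literalNull); // ...?query=NULL
console.log(nullByte);    // ...?query=%00
```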

Look at that. Progress. Seems there is some length validation going on. Let’s correct that and see what happens.

Oh wow. Looks like we caused an internal server error with an uncaught exception. Are we having fun yet?

I am not going to continue down this path, but you get the idea. We just had AI generate a test we can slightly modify to get the endpoint to work in unpredictable ways.

This is a core principle of offensive security testing.

It only took us a couple of minutes to get to this point. We know about some length validation and now have the endpoint crashing.

Oh, the fun. Let’s move on.

Testing for LFI

Injecting a few periods into the query input gives me an idea. I don’t expect this endpoint to be reading files… but how good is Postbot at generating code to test for local file inclusion (LFI) vulnerabilities?

Let’s find out.

Let’s try this prompt:

Interesting to see Postbot detect the directory traversal attack vector here. But I think we can push it further.

Any real directory traversal / LFI attack would probably use a decent wordlist, like one from Daniel Miessler’s SecLists repo. Let’s use the one Jason Haddix built.

Let’s try a more complex prompt:

Here are the results:

Holy cow. That code generation is pretty decent. Let’s look at it a bit closer in case you can’t see it in the screenshot very well:

It fetches the wordlist, iterates over it, and looks for the expected response payload. I’d say that’s pretty good.

To be critical of myself, the test validation itself is flawed, as Jason’s LFI wordlist isn’t just looking for the passwd file in the system but other files as well. But as a starting point, that’s still pretty good.
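
The logic of that generated script can be reduced to something like this offline sketch. The inline wordlist and marker strings here are my own illustrative assumptions; in Postman, `pm.sendRequest` would fetch the real SecLists wordlist and fire one request per payload.

```javascript
// Offline sketch of the traversal loop: iterate a wordlist, send each
// payload, and scan the response body for leaked file contents.
const wordlist = [                // tiny inline sample; a real run uses the full list
  "../../../../etc/passwd",
  "..%2f..%2f..%2fetc%2fpasswd",
  "....//....//etc/passwd",
];

// Markers that suggest the server actually read a local file. As noted
// above, a real LFI wordlist targets more than /etc/passwd, so a real
// check needs more markers than this.
const markers = ["root:x:0:0", "[boot loader]"];

const looksLikeLfi = (body) => markers.some((m) => body.includes(m));

// Stand-in for the real HTTP response (the API just returns search results).
const mockResponseBody = JSON.stringify({ total: 0, result: [] });

const hits = [];
for (const payload of wordlist) {
  // In Postman: pm.sendRequest(searchUrl + encodeURIComponent(payload), ...)
  if (looksLikeLfi(mockResponseBody)) hits.push(payload);
}

console.log(`Tested ${wordlist.length} payloads, found ${hits.length} hits`);
```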

OK. What else can this AI do?

Testing for SSTI

I love server-side template injection (SSTI). It’s pretty easy to test for too. I have a simple wordlist of template expressions for SSTI discovery as a gist on GitHub, which you can grab at https://tinyurl.com/ssti-detection.

Let’s try a prompt like this:

OK, that didn’t quite work. Seems Postbot didn’t realize that was a line-delimited text file and was actually expecting a JSON payload. Maybe the redirection from TinyURL is messing it up. I dunno.

Let’s update the prompt slightly to account for the file format:

Not bad. It’s clear the AI can improve the code generation with small tweaks of the prompt.

OK, let’s move on and try one more thing.

Testing for Command Injection

OK, as a final test of the AI assistant, let’s see if we can get Postbot to try generating a test script for a command injection vulnerability.

I am going to make the prompt somewhat vague to see if the AI even knows how to generate a command injection payload. Let’s try this:

Here are the results:

Holy smokes. Without a lot of prompting, Postbot was smart enough to create an “OK” starting list of command injection payloads to work with and, even more importantly, generated tests to try both with encoded and unencoded payloads.

I’ll be damned. Postbot is a thing.

Conclusion

While this article is only skimming the surface of the new generative AI model built into Postman, it shows real potential.

Is AI going to replace us as API hackers? Of course not. But it sure does make our lives a bit easier.

AI in Postman is more than just a silly bot. It has the potential to take our API security testing and automation workflows to the next level, making us better, faster, smarter API testers.

So here’s to automation and AI. 🍻

May they help make us all better hackers! Skynet 3.0, here we come!!!

One last thing…

API Hacker Inner Circle

The API Hacker Inner Circle is growing. It’s my FREE weekly newsletter where I share articles like this, along with pro tips, industry insights, and community news that I don’t tend to share publicly. If you haven’t yet, join us by subscribing at https://apihacker.blog.

Dana Epp