API Recon Tip: Using AI to “Eyeball” your targets

Everyone has their own way of conducting API recon. There is no “one way” to do it.

In fact, my methodology changes from time to time based on new techniques I learn and new tools I learn about.

In today’s article, I will show you one way to improve your methodology by adding some automation and AI to discover and rank interesting potential targets. All thanks to “Eyeballer,” a research project by BishopFox that leverages convolutional neural networks to analyze screenshots taken during recon.

But first… why recon?

Before we get too deep into BishopFox’s research, let’s discuss why we would even want to leverage this project.

It all comes down to attack surface discovery and the value of finding resources within the sphere of influence of the target apps and infrastructure we are testing.

Let me explain.

As more agile development processes “shift left,” the creators of these APIs take on more responsibility for the continuous integration and continuous deployment (CI/CD) of their code.

This is more commonly called DevOps. Or, as I like to say, DevOops. 🤣

Here’s why.

Finding dev, test, and even staging environments that run alongside production resources is common. As you become more familiar with the nomenclature of deployed resources, you can start to detect more and more resources that may have different operational security controls than production systems.

Misconfigurations like this can give you access to resources, code, and data that you may not find as easily in prod.

Last year, when I wrote about hacking a .NET API in the real world, I never shared that I exploited the API in a staging slot of an Azure application deployment that had been forgotten about. After completing their deployment slot swap, the developers failed to destroy the old instance; wanting to keep it around in case they needed to roll back, they left it running past its lifetime. And I exploited that fact.

Finding additional API artifacts

I don’t want to go too deep into this, but I do want to share my approach to finding those secondary resources during recon. I will remind you to stay within the defined scope of the engagement. There is no sense in wasting time and resources looking at the attack surface of systems you have no business probing.

You will obviously use your favorite tools to conduct the recon. Don’t judge me on mine. I use what has worked for me for years and years. I know better and faster tools exist… but habits, combined with the automation tools and scripts I’ve built over the years, leave me in a comfortable position.

Initial asset discovery

Here are a few things I do at the beginning of my asset discovery:

  1. I scan the target domain using subfinder, looking for subdomains that may relate to the web app using the API.
  2. I then use assetfinder to find more domains and subdomains potentially related to a given domain.
  3. I then query https://crt.sh/ looking for any related subdomain resources that have had certificates issued to them and scan the Subject Alternative Name (SAN) metadata to extract even more subdomains.
  4. I then dedupe all the data to come up with a decent list of potential domains that are in the sphere of influence of the target.
  5. I then remove any domains known to be outside the scope of the engagement.

This is all automated. All I do is tell the system the URL of my main target, and it goes about doing the heavy lifting for me.
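
As a rough sketch, that pipeline might look something like this in shell (example.com, the file names, and the out-of-scope list are all placeholders):

  # steps 1-3: gather candidate subdomains from multiple sources
  subfinder -d example.com -silent >> domains.txt
  assetfinder --subs-only example.com >> domains.txt
  curl -s "https://crt.sh/?q=%25.example.com&output=json" | jq -r '.[].name_value' >> domains.txt

  # steps 4-5: dedupe, then drop anything known to be out of scope
  sort -u domains.txt | grep -v -f out-of-scope.txt > targets.txt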

Service discovery on potential targets

Once I’ve compiled the list of potential domains I want to look closer at, I resolve each domain to get a list of related IPs. This is extremely important since load balancers, caching servers, WAFs, and API gateways can get in the way.
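
A minimal version of that resolution step might look like this (the file names are placeholders):

  # resolve every candidate domain and collect the unique IPs
  # (the grep filters out CNAME lines, keeping IPv4 addresses)
  while read -r domain; do
    dig +short A "$domain"
  done < targets.txt | grep -E '^[0-9.]+$' | sort -u > ips.txt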

Once I have a database of IPs, I then run nmap against each IP address. There are a few specifics I do beyond a normal scan:

  1. I conduct a SYN scan (-sS), which requires elevated privileges. I usually do this in a cluster of ephemeral containers running in the cloud so I can scan a whole bunch of IPs in parallel from many different IPs, helping to reduce the chance of triggering security controls that may otherwise block my host during a scan.
  2. I scan ALL ports (-p-). While this takes considerably more time, it helps to ensure I don’t miss any unusual services that may exist on the host on non-standard ports.
  3. I write out the results to an XML file (-oX) so I can process all the found ports later.

A typical command line for a scan might be something like:
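
  # the IP and output file name are placeholders; -sS requires root
  sudo nmap -sS -p- -oX scan-results.xml 203.0.113.10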

Now that we clearly understand what ports are open on each host, we can check to see what’s there.

App discovery on potential targets

With a mapping of every port open on a potential target, we can now check to see if the host responds to HTTP requests. While you could try to visit each port in a browser, there is a much faster way that can be automated.

I use gowitness for this.

Gowitness is a web screenshot tool written in Golang that uses a headless version of Chrome to render a target website and save it as a PNG.

It includes an option to load up a nmap XML scan result file and use that as its input to try each port.

A simple command line looks something like this:
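
  # the XML file name and screenshot directory are placeholders
  gowitness nmap -f scan-results.xml -t 10 --timeout 15 -P screenshots/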

A few things to point out:

  • nmap : Tells gowitness to parse nmap results to load hosts and ports
  • -f "name" : The name of the nmap results XML file
  • -t 10 : The number of threads to use. The default is 4. I typically use ten as it balances disk write performance with CPU use.
  • --timeout 15 : Preflight timeout on the connection. The default is 10 seconds. I increase this to 15 seconds just in case of gremlins in the wire, along with traffic shaping.
  • -P "dir" : The path to store the screenshots. By default, it goes into a subdirectory called "screenshots". As I usually organize my work by the timestamps of each scan, I typically use a dynamic variable for this.

Additionally, depending on whether virtual hosts are discovered on the target, there are a few other params to use:

  • -N : Scans hostnames (for virtual hosting)
  • --header "string" : Adds a header to the HTTP request. Useful for mapping to a hostname on a virtual host, or for any additional headers the system requires.
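
For example, something like this maps a discovered virtual host (the hostname is a placeholder):

  gowitness nmap -f scan-results.xml -N --header "Host: app.example.com"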

The result? A directory full of screenshots of ports that speak HTTP/HTTPS.

At this point, I would usually browse through the backlog of images looking for interesting screenshots, taking note of pages that look like API endpoints, login pages, stack traces, etc. But on some larger targets, I could have dozens, if not hundreds, of potential screenshots to wade through.

At least until I found BishopFox’s Eyeballer tool.

Using AI to detect interesting targets

I find Eyeballer fascinating. It can scan through tons of screenshots and score them according to how likely they are to contain vulnerabilities. It does this using a convolutional neural network that BishopFox trained on screenshots labeled with predictable content categories.

Some examples include:

  • Old-looking sites: You know the ones. Blocky frames. Broken CSS. Looks like someone used FrontPage in the 2000s to make them. They are old. And probably riddled with vulns.
  • Login pages: Hey, with modern web apps these days, we can’t just grep for password fields. But login pages do have a typical “tell.” We know what they look like. So does the AI.
  • Webapp: This tells you that there is a larger group of pages and functionality available here that can serve as the surface area to attack.
  • Custom 404: Modern sites love to have cutesy custom 404 pages with pictures of broken robots or sad-looking dogs. Unfortunately, they also love to return HTTP 200 response codes while they do it. Eyeballer can help you tell the difference.
  • Parked domains: Parked domains are websites that look real but aren’t valid attack surfaces. Finding these pages and removing them from scope is really valuable over time.

I appreciate that BishopFox includes full insights into their training data. So you can use their model (bishop-fox-pretrained-vN.h5) right out of the box, knowing what it’s looking for. However, if you don’t like their model, you can train your own. If you are a data scientist and want to improve the model, go to town. But that’s beyond the scope of this article.

Using Eyeballer

Eyeballer is written in Python. Installation is as simple as checking out the GitHub repo and running:
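
  # clone the repo, then install the Python dependencies
  git clone https://github.com/BishopFox/eyeballer.git
  cd eyeballer
  pip3 install -r requirements.txt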

There is an option to build Eyeballer with GPU support. But setting this up with proper TensorFlow support could be a real headache, and since I spin this up in ephemeral containers, I won’t typically have access to a GPU anyway. YMMV, of course.

Once installed, download the pretrained model and place it in your install directory.

Now run it. I use this command:
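
  # "vN" is whichever pretrained model version you downloaded;
  # the screenshot directory is a placeholder
  python3 eyeballer.py --weights bishop-fox-pretrained-vN.h5 predict screenshots/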

Now sit back and wait. When it’s done, you will have results in HTML and CSV files. The HTML file shows each screenshot with its labels. You can click on any of the labels at the top to filter them.

Personally, I like the CSV file. It provides you with a comma-separated list of the weighted scores for each label. You can parse that out and organize the results by the most likely screenshots that are suspicious/interesting.
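
As a quick sketch, assuming the results CSV has a header row and the label score you care about in, say, column 2, something like this surfaces the top candidates:

  # keep the header, sort the remaining rows by the score column, highest first
  (head -1 results.csv && tail -n +2 results.csv | sort -t, -k2 -rn) | head -20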

Running Eyeballer against crapi.apisec.ai on an M2 MacBook Air

Conclusion

Recon is an important part of your hacking methodology for APIs. Looking for secondary targets in the sphere of influence of your target API server may help uncover systems that aren’t as hardened as production systems.

Using tricks like automating your subdomain and virtual host enumeration alongside service enumeration helps you to scan for web servers that may exist in unusual places and ultimately capture screenshots for potential new targets.

Add in some AI ❤️ with Eyeballer from BishopFox, and you can quickly prioritize where to look. Who knows what you might find during recon.

Good luck.

One last thing…

API Hacker Inner Circle

Have you joined The API Hacker Inner Circle yet? It’s my FREE weekly newsletter where I share articles like this, along with pro tips, industry insights, and community news that I don’t tend to share publicly. Subscribe at https://apihacker.blog.

Dana Epp