7 Deadly Sins of API Security Testing

Okay, so today, I want to discuss the seven deadly sins of API security testing. I covered this topic at the latest APISEC conference in May. I have embedded my talk below if you want to watch it.

Oh, you like to read? Awesome. You are my kind of peeps. 

Let’s get right to it.  

SIN #1 – Timing

Let’s start by discussing timing. This is all about when you kick off your testing — whether it’s too early or too late. It depends on the kind of testing you’re doing and the specific areas you focus on.

We’ve all seen those defect injection charts showing where defects creep in, where we discover them during testing, and the cost impact of discovery. It’s no surprise that most vulnerabilities in an API pop up during the coding phase but aren’t found until deployment.

As we move through the testing cycle, we often don’t catch these vulnerabilities until later, typically during functional and system testing. Ideally, we want to catch the majority of them before the release phase.

This underscores the importance of different types of testing throughout the process.

Static Application Security Testing (SAST)

Let’s kick things off with Static Application Security Testing, or SAST. This is where tools like SonarQube come into play, scanning the code for vulnerabilities. Looked at through a security lens, SAST done correctly can catch the costliest bugs, the ones introduced at the architectural and design levels, before anyone else even begins testing.

This early detection is ideal, but it isn’t always practical. Developers are primarily focused on writing code and often don’t have the bandwidth to think about testing strategies while in the thick of programming.

Dynamic Application Security Testing (DAST)

As we shift left and incorporate more unit testing, it certainly helps, but it doesn’t provide the complete picture of an API’s security posture.

That’s where dynamic application security testing (DAST) comes in. Tools like Burp Suite, with its web vulnerability and API scanning capabilities, can be handy here. These tools simulate attacks against running APIs to see how they respond.

DAST allows us to explore data tainting in various areas and verify things like authentication and authorization headers to ensure they are used correctly. However, while DAST is great for these purposes, it won’t fully understand the application.
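
To make that concrete, here’s a minimal sketch of the kind of probe DAST tools automate, written with Python’s requests library. The endpoint and token are hypothetical stand-ins for your own API under test:

```python
import requests

BASE = "https://api.example.com"  # hypothetical API under test

# A DAST-style probe: the same request with and without credentials.
# If the unauthenticated call doesn't come back 401/403, authorization
# enforcement on this endpoint deserves a much closer look.
with_auth = requests.get(
    f"{BASE}/v1/orders/1001",
    headers={"Authorization": "Bearer <valid-test-token>"},  # placeholder token
    timeout=10,
)
no_auth = requests.get(f"{BASE}/v1/orders/1001", timeout=10)

print(f"authenticated:   {with_auth.status_code}")
print(f"unauthenticated: {no_auth.status_code}")
assert no_auth.status_code in (401, 403), "endpoint served data without credentials"
```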

Human Application Security Testing (HAST)

This is where HAST, or Human Application Security Testing, comes into play. Some call it manual testing, but I think that’s a terrible name because much of HAST can be automated. Tools like Postman Collection Runner can automate many of these tests, allowing them to be rerun and retested with various data inputs.

For instance, API fuzzing is a technique that can be employed here. It helps validate that things work as intended and, more importantly, uncovers ways to make them behave in unintended ways — often revealing significant security vulnerabilities.
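
Here’s a tiny fuzzing sketch along those lines, again using Python’s requests. The endpoint and field name are hypothetical, and a real run would pull far more inputs from a wordlist:

```python
import requests

BASE = "https://api.example.com"  # hypothetical API under test

# A few hostile inputs; in practice you'd pull hundreds more from a
# wordlist such as the Big List of Naughty Strings.
fuzz_values = ["", "0" * 10_000, "'; DROP TABLE users;--", "\u0000", "{{7*7}}"]

for value in fuzz_values:
    resp = requests.post(f"{BASE}/v1/products", json={"name": value}, timeout=10)
    # 5xx responses or stack traces in the body suggest unhandled input,
    # exactly the "unintended behavior" worth digging into.
    if resp.status_code >= 500 or "Traceback" in resp.text:
        print(f"possible issue with input {value!r}: HTTP {resp.status_code}")
```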

SIN #2 – Ignorance

Beyond timing, another critical issue is ignorance — a lack of visibility into the APIs we need to test. This includes shadow APIs, rogue APIs, old zombie APIs, and, worst of all, undocumented APIs. 

Often, teams lack a comprehensive inventory of their API assets: not just the ones they’re developing, but also those they’re consuming and depending on. This lack of visibility increases our risk exposure, especially with undocumented APIs, where we have little insight.

Even API documentation can’t be fully trusted. Developers don’t always keep it up to date. While tools can automate documentation generation, they may not always produce quality docs, or in some cases, any docs at all. 

How often have you seen documentation for public-facing interfaces but not for admin endpoints? For thorough API security testing, we need to test everything, which requires full visibility.

This is why regular API discovery is crucial. While API docs are helpful, they can miss undocumented areas unless you generate your own based on captured traffic. Regular discovery helps us map out the complete attack surface, ensuring we know all the places we need to test. 
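
As a starting point, here’s a rough sketch of traffic-based discovery in Python. It assumes a common-log-format access log from your gateway or proxy; adjust the pattern to whatever you actually capture:

```python
import re
from collections import Counter

# Assumes a common-log-format access log; tweak the pattern for your gateway.
LINE = re.compile(r'"(GET|POST|PUT|PATCH|DELETE) (/\S+) HTTP')

inventory = Counter()
with open("access.log") as log:  # hypothetical capture of API traffic
    for line in log:
        m = LINE.search(line)
        if m:
            method, path = m.groups()
            # Collapse numeric IDs so /v1/users/42 and /v1/users/7 map together.
            path = re.sub(r"/\d+", "/{id}", path)
            inventory[(method, path)] += 1

# Anything listed here that isn't in your OpenAPI spec is an undocumented endpoint.
for (method, path), hits in inventory.most_common():
    print(f"{hits:6}  {method:6} {path}")
```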

SIN #3 – Negligence

Our third sin is negligence, which is failing to do proper recon to identify everything in the APIs that needs testing. Not knowing about the APIs is one thing, but not understanding how they function internally is another issue entirely.

Negligence often arises because people don’t fully fingerprint the entire application and its features. If there are unknown functions and features, an entire area of the attack surface is being missed, which is problematic. We need to find everything and anything within the API. If it’s a multi-role system, we must ensure we have access to every role under which the API might be used. In a multi-tenant system, we must understand the tenancy model, prevent cross-tenant data leakage, and ensure we cannot access other clients’ data.
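
If you want a feel for what a cross-tenant check looks like, here’s a minimal sketch in Python. The endpoints, tokens, and response shape are all hypothetical; the point is the pattern: create as one tenant, read as another:

```python
import requests

BASE = "https://api.example.com"  # hypothetical multi-tenant API

# Tokens for two different test tenants (placeholder values).
TENANT_A_TOKEN = "<tenant-a-token>"
TENANT_B_TOKEN = "<tenant-b-token>"

# Create a resource as tenant A...
created = requests.post(
    f"{BASE}/v1/invoices",
    headers={"Authorization": f"Bearer {TENANT_A_TOKEN}"},
    json={"amount": 100},
    timeout=10,
)
invoice_id = created.json()["id"]  # assumes the API echoes back an id

# ...then try to read it as tenant B.
cross = requests.get(
    f"{BASE}/v1/invoices/{invoice_id}",
    headers={"Authorization": f"Bearer {TENANT_B_TOKEN}"},
    timeout=10,
)
# Anything other than 403/404 means tenant isolation is leaking data.
assert cross.status_code in (403, 404), f"cross-tenant read returned {cross.status_code}"
```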

Fingerprinting

Fingerprinting helps us understand the application’s functions. It can be as simple as checking if we have the correct licensing to see everything in the API. We must also understand how the API works and how the application that consumes it functions. Regular reviews are crucial to see everything the API exposes, including how session management and tokens are utilized, and identifying special claims and artifact information that the APIs might rely on.
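
Part of that token review can be as simple as decoding a captured JWT to see what claims the API actually relies on. Here’s a stdlib-only Python sketch; the token shown is a placeholder:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode a JWT payload WITHOUT verifying the signature.

    This is recon only: it shows what claims the API relies on. It is
    not a substitute for proper signature validation.
    """
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Hypothetical session token captured from the application under test.
claims = jwt_claims("<captured.jwt.token>")
for name, value in claims.items():
    print(f"{name}: {value}")  # look for roles, tenant IDs, custom claims
```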

Data Analysis

When examining payloads, we need to consider data structures since they may change as APIs evolve. Proper versioning is essential, but unversioned changes can break functionality. Recon helps us find and document these details, providing clarity on object models, payloads, parameters, and headers. Without thorough application behavior analysis, we won’t understand the impacts of these elements.

Data Tainting

Testing should explore every aspect, such as what happens if we delete or alter cookies or encounter unexpected claims in tokens. Proper recon and analysis reveal how APIs respond to varied inputs. Specifications like OpenAPI (formerly Swagger) define endpoint functions and constraints, but we must verify whether the implementations match the documentation.

For example, a product object might have a price pattern constraint in the API docs. We must test whether the implementation follows this pattern. Maybe it’s supposed to be a string but is implemented as a float or double. Taint analysis and injection patterns, like those from the Big List of Naughty Strings, can show how APIs respond. Testing boundary conditions, like out-of-range integers, is also crucial.
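
To illustrate, here’s a sketch that checks a live response against the documented constraints using the jsonschema library. The schema and endpoint are hypothetical, distilled from what your OpenAPI spec might say:

```python
import requests
from jsonschema import validate, ValidationError  # pip install jsonschema

# The documented shape of a product payload (hypothetical, distilled
# from the API's OpenAPI spec).
PRODUCT_SCHEMA = {
    "type": "object",
    "required": ["id", "name", "price"],
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "string"},
        # Docs say price is a string matching a money pattern, not a float.
        "price": {"type": "string", "pattern": r"^\d+\.\d{2}$"},
    },
}

resp = requests.get("https://api.example.com/v1/products/1", timeout=10)
try:
    validate(instance=resp.json(), schema=PRODUCT_SCHEMA)
except ValidationError as err:
    # The implementation has drifted from the documented contract.
    print(f"schema drift detected: {err.message}")
```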

Lastly, negligence includes failing to track API changes and drift. As API security testing matures, early detection of impactful changes is vital. Tools like oasdiff can compare existing API specs with new ones to identify changes, providing early alerts for further testing. Leveraging such tools ensures we stay ahead of changes, even before the development team informs us.
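
A minimal way to wire that into a pipeline, sketched in Python around the oasdiff CLI (the spec paths are hypothetical; it assumes oasdiff is installed):

```python
import subprocess

# "oasdiff breaking <base> <revision>" reports breaking changes between
# two snapshots of an OpenAPI spec.
result = subprocess.run(
    ["oasdiff", "breaking", "specs/openapi-prev.yaml", "specs/openapi-current.yaml"],
    capture_output=True,
    text=True,
)

if result.stdout.strip():
    # Drift detected: flag the changed endpoints for retesting before release.
    print("Breaking API changes detected:")
    print(result.stdout)
```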

Neglecting these aspects results in missing critical vulnerabilities, leading to insufficient security testing. Regular API discovery, fingerprinting, and behavior analysis are essential for thorough API security testing.

SIN #4 – Chaos

Chaos is our fourth sin. It arises when people don’t follow a plan, and this lack of planning becomes a significant issue. It’s incredibly frustrating because there’s well-defined, documented guidance available. OWASP, for example, has long provided recommendations on how to conduct testing, particularly in API security.

When we fail to plan, it’s as if we’re planning to fail. 

OWASP offers valuable resources like the API Security Top Ten, which highlights the most common vulnerabilities found in the field. However, this list is not a comprehensive standard — it’s just the basics. Covering these basics is essential, but that’s only the beginning. 

Many vendors push their API security solutions based on the Top Ten, but proper testing requires a much broader scope.

This is where the Application Security Verification Standard (ASVS) comes into play, offering a detailed and granular approach to testing. ASVS includes an entire section (V13) dedicated to API security. 

For instance, V13.1.3 specifies testing to ensure API URLs don’t expose sensitive information like API keys and session tokens. The guidance provided by ASVS includes mappings to Common Weakness Enumerations (CWEs), which help identify the specific weaknesses and offer mitigation strategies. This is invaluable for informing developers during security testing.
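
A quick illustration of testing that control: a small Python sketch that flags captured URLs whose query strings look like they carry secrets. The URLs and parameter patterns are hypothetical examples:

```python
import re
from urllib.parse import urlparse, parse_qs

# Query parameter names that suggest credentials in the URL (ASVS V13.1.3).
SENSITIVE = re.compile(r"(api[-_]?key|token|session|secret|password)", re.I)

def flag_sensitive_urls(urls):
    """Yield (url, param) pairs whose query strings appear to carry secrets."""
    for url in urls:
        for param in parse_qs(urlparse(url).query):
            if SENSITIVE.search(param):
                yield url, param

# Hypothetical URLs harvested from proxy logs or traffic captures.
captured = [
    "https://api.example.com/v1/reports?api_key=abc123",
    "https://api.example.com/v1/users?page=2",
]
for url, param in flag_sensitive_urls(captured):
    print(f"V13.1.3 violation candidate: {param} in {url}")
```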

Without a solid plan, testing efforts become uncoordinated and haphazard, undermining the whole process. This leads to irregular testing schedules and misalignment with development and release cycles, increasing the window of exposure for potential vulnerabilities.

Moreover, without proper planning, tracking test results and ensuring issues and vulnerabilities are logged into defect tracking systems often falls through the cracks. 

Effective test planning must include robust reporting mechanisms. Without these, communication with the development team suffers, leaving them unaware of their security exposure. 

This chaos ultimately hinders the security testing process, making it less effective.

SIN #5 – Overambition

The fifth sin I want to discuss is overambition. This happens when we get overly excited about diving into security testing and try to cover too much at once. 

It’s a classic case of trying to boil the ocean — you simply can’t achieve everything all at once. Overambition can derail security programs because excessive testing can lead to too many false positives or too much noise, reducing the effectiveness and trust in the testing process.

When planning your testing strategy, it’s essential to start small. Focus on getting test coverage in the areas of highest concern. 

I recommend beginning with authentication, authorization, and session management. Every API endpoint will involve some form of validation, and robust testing in these areas can significantly reduce the risk of drive-by attacks. Ensuring proper authorization and authentication limits the blast radius of potential vulnerabilities by requiring access credentials.

While we shouldn’t neglect other areas, starting small allows for manageable progress. It’s about taking small bites — focusing on specific areas, getting them right, learning from the process, iterating, and then expanding coverage. 

This approach ensures a solid foundation and gradual improvement, making the security testing process more effective and sustainable.

SIN #6 – Blame

The sixth issue I want to address is blame. This is about creating a dangerous feedback loop where an “us versus them” mentality develops between security engineers and developers. 

This dynamic can create an environment that isn’t psychologically safe, where open, honest conversations about vulnerabilities are stifled, and trust in the security team diminishes.

To avoid this, we must foster a collaborative relationship with dev teams, making them our advocates rather than adversaries. When developers feel threatened or blamed, they become resistant to our input and reports. 

Instead, we should aim to help, not hinder, their work.

Here are some ways to achieve this:

  1. Provide Proof of Concept (PoC) Exploits: Including PoC exploits in vulnerability reports allows developers to see the issues firsthand (see the sketch after this list). If they can replicate the exploit, it helps them understand the problem better and prioritize fixes based on actual damage potential.
  2. Root Cause Mapping: Utilize CWEs in the ASVS to link vulnerabilities to their root causes. Provide developers with information and guidance on how to fix these issues. This common language helps both teams communicate effectively and understand the vulnerabilities.
  3. Create SAST Rules for Future Detection: Once a vulnerability is identified, work with the dev team to create SAST rules that can detect similar issues in the future. This proactive approach helps reduce the likelihood of recurring vulnerabilities.
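
For example, here’s what a PoC might look like for a hypothetical broken object level authorization (BOLA) finding, sketched in Python. Everything here, the endpoint, IDs, and token, is a stand-in:

```python
import requests

BASE = "https://api.example.com"  # hypothetical API under test

# PoC for a hypothetical BOLA/IDOR finding: user A's token retrieving
# another user's record. Including this in the report lets developers
# replay the exploit and see the impact for themselves.
USER_A_TOKEN = "<user-a-token>"  # placeholder credential

victim = requests.get(
    f"{BASE}/v1/users/2002/profile",  # object belonging to another user
    headers={"Authorization": f"Bearer {USER_A_TOKEN}"},
    timeout=10,
)
print(victim.status_code)  # 200 here demonstrates the broken object-level auth
print(victim.json())       # the leaked record is the "actual damage potential"
```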

By adopting these practices, we can create a supportive and collaborative environment where security and development teams work together to improve overall security. This builds trust and makes the entire process more effective and efficient.

SIN #7 – Faith

Our final sin is faith — putting too much trust in vendor products. 

We’ve discussed SAST and DAST, where vendors have developed impressive technology. When it comes to HAST, though, vendor tools can only assist. The reality is that the deepest, most impactful vulnerabilities are found through human ingenuity and thinking.

This means we can’t rely solely on vendor tools, because security testing is a process, not a product. A solid test plan should guide that process, with tools supporting it. You can’t buy your way out of security testing; the results should reflect a well-executed plan.

In other words, if the products you’re using can’t support your test plan and provide the necessary information and outputs to confirm whether you’ve tested a particular scenario effectively, then those tools might not be the best choice. If a tool doesn’t deliver meaningful results immediately, you should question its value.

To sum up, while vendor tools are valuable, the foundation of effective security testing lies in a well-structured process supported by these tools, not driven by them.

Conclusion 

So, where are we? We’ve discussed these seven sins, but let’s shift our perspective from the negative to the positive. How do we approach this effectively?

Start testing today

First, start your testing today. Assess what makes the most sense for your team. Should you begin with HAST? SAST? Or implement testing at various stages? Determine where it will add the most value to your system now and improve on it over time.

Recon and inventory your APIs

Next, inventory your APIs to ensure you have complete visibility of their locations and how they are being used. Conduct thorough recon on these APIs to fully understand their functionality, how applications consume them, and how data flows through the system.

Have a test plan

Then, build a comprehensive test plan. Leverage OWASP guidance, not just the API Security Top 10, but also the ASVS, especially section V13. This helps ensure you’re testing the right aspects. For instance, the OWASP API Security Top 10 includes fewer than 50 security controls, whereas ASVS Level 1, the bare minimum, has over 125 controls. Level 2, which is more suitable for API testing, includes over 250 security controls. Each endpoint needs to be validated against these controls to determine what works and what doesn’t.

Start small and iterate

Recognize that there is a lot of work to be done. Start small and iterate. You can’t test everything at once, so focus on the most critical areas first, then gradually expand and improve your testing efforts.

Support the developers

Support your developers; don’t hinder them. Provide them with the tools and information to quickly remediate vulnerabilities, enabling you to move forward effectively. Remember, you can’t buy your way out of security testing—it requires effort and diligence.

Use the right tools

Leverage the right tools to assist you in executing your test plan, but understand that no vendor tool can solve everything for you. They can’t fully grasp how your APIs and business logic operate. It takes a well-thought-out process, continuous effort, and the right mix of tools to achieve robust security testing.

Whew. That’s it — the 7 Deadly Sins of API Security Testing. I hope it resonates with you and that you have some ideas on how to avoid them in the future. 

Good luck!

One last thing…

API Hacker Inner Circle

Have you joined The API Hacker Inner Circle yet? It’s my FREE weekly newsletter where I share articles like this, along with pro tips, industry insights, and community news that I don’t tend to share publicly. If you haven’t, subscribe at https://apihacker.blog.
