The Security Researcher’s Guide to Reporting Vulnerabilities to Vendors

You’ve found a security vulnerability in some software you are testing. Good on ya.

Now what?

According to some on the Internet, the sky should be raining money. You are OWED something.

I’m here to tell you that if that’s your attitude… you better damn well know what you’re doing because the real world doesn’t work like that.

Don’t believe me? Ask the four students in Malta who were arrested after reporting a security vulnerability in a popular application they regularly use at school.

I won’t get into the debacle of poor communication, and the even worse police investigation and incident response, in that specific case. But I do want to get to the heart of the matter, which is how to safely report security vulnerabilities to companies (aka vendors).

Before we get too deep into it, I think we need to ensure we are all level set on what I am talking about.

What is security research?

Think of security research as the process of finding and analyzing weaknesses in digital systems. That could be software, hardware, or even the network itself. It can range from reverse engineering to fuzzing or other methods that taint input to modify a system’s output. The goal is to discover interesting ways to make the system behave in ways it was never intended to, ways that may expose people, their resources, and their data to risk.

Your motivations as a security researcher are your own. Maybe it’s curiosity. A need to protect the Internet. Perhaps it’s a desire to make money. Or to prove your skills to others. Whatever it is, you need to be honest with yourself and understand how that motivation may impact YOUR decisions on the targets you choose, the efforts you invest into the research, and how you will communicate your findings with the vendor.

Whatever the motivation, you must understand that communicating with the vendor may cause conflict if you aren’t careful. So it’s important to know what you should be doing as a security researcher to collaborate with the vendor and turn them into an advocate rather than an adversary:

  • You should ensure that any testing is legal and authorized.
  • You should respect the privacy of others.
  • You should make reasonable efforts to contact the security team of the vendor.
  • You should provide sufficient details to allow the vulnerabilities to be verified and reproduced.
  • You should NOT demand payment or rewards for reporting vulnerabilities outside an established bug bounty program.
  • You should give the vendor reasonable time to fix the issue before disclosing it publicly.

OWASP publishes a Vulnerability Disclosure Cheat Sheet that you can look at for more detail on your responsibilities when disclosing a vulnerability, as well as the vendor’s.

However, I would like to focus a bit more on collaborating with the vendor, as it’s a critical component of a successful engagement.

Have empathy: Vendors are powered by people.

If you spend any amount of time in this industry, you will find horror stories from security researchers and vendors alike about how the other side was unreasonable, unresponsive, or just plain belligerent.

Many times, when you dig deeper into all the drama, you find that one side needed to better understand the other’s position and perspective.

As someone who has been responsible for both reporting and triaging such incidents, let me try to paint a clear picture.

Vendors come in all shapes and sizes…

The vendor that’s not ready

First off, not all vendors have enough security maturity to actually have a security triage process to begin with. So if you blindly drop a vuln report in their inbox and demand that it get fixed, they may immediately get defensive.

Worse yet, it could be perceived as an attack against their systems and software. They never authorized you to conduct such research. And they may simply not be ready or capable of working with you.

At least, not yet.

Don’t take it personally. They are probably scared. They have yet to learn how to handle this sort of thing internally.

This might be a great time to help them. Who knows, this could be a great consulting opportunity.

I’ve helped vendors establish Vulnerability Disclosure Policies (VDP) and taught them how to triage incidents internally and communicate effectively externally.

Don’t give up on them. Just be patient. Be helpful. But don’t force it.

The vendor with a VDP but no Bug Bounty Program

If a vendor has a published VDP, you should at least have clear guidance on how they want to communicate and what they expect of you. And you should have a clear definition of what systems are in and out of scope to make sure neither party is wasting time.

Understand that vendors at this stage could be overwhelmed dealing with a lot of reports. So it’s essential that your reports are well-researched and documented so they’re easy to consume and understand. You would be surprised at how many poorly written emails and reports they have to wade through. They could be dealing with a flood of false positives caused by the latest update to a popular vulnerability scanner (ya, it happens), or be busy validating already-reported vulnerabilities that may be duplicates of yours.

Realize you aren’t the only security researcher in the world. Hell, HackerOne humble-brags about having over 1 million security researchers in their system alone. Depending on the popularity of the target and the security maturity of the organization, the security triage team may have a lot on their plate.

The vendor with a Bug Bounty Program

If a vendor has a Bug Bounty Program (BBP), this is great. It’s the one time when you should have clear guidance on not only the resources in scope but also how you can expect to be rewarded for security vulnerabilities that are successfully reported and confirmed.

These vendors usually have a pretty high level of security maturity. Their triage process should be pretty streamlined. But, know this… I rarely encounter security triage teams that are fully staffed and perfect in their engagements with security researchers regarding bug bounties.

See a trend here? Patience is essential, regardless of the type of vendor you are working with.

If the BBP is registered on a crowdsourced platform like HackerOne or BugCrowd, you have the opportunity to look at average response times and payout history to understand what you can expect. I’m a big fan of metrics like response service level agreements (SLAs) that help identify healthy programs to engage with.

Vendors vs. Researchers: You may be smarter than them, but you aren’t in their shoes

I alluded to this earlier when I was talking about the fact that you don’t know what’s going on inside security triage at the vendor. They are usually understaffed and under-resourced. Experienced people move on from triage to other areas within the company, which ultimately exposes the entire process to cracks where things will go wrong.

I am of course generalizing here, but I do hope you get my point. Mistakes happen.

But beyond mistakes, it goes deeper than that. Security technical debt is a thorn in the side of any software engineering group. Depending on how old the software is, it could be quite brittle and leverage older components or technologies that are themselves vulnerable. Or newer components may not have enough security testing to consider all possible attack vectors. The threat landscape changes almost daily, and you can’t expect developers to know how to respond to everything they encounter. It takes time to fix stuff, even when it may seem trivial on the surface.

I was humbled once back in the mid-90s when I was criticizing the response to a security vulnerability I found in Windows NT4. I was invited down to the Microsoft campus in Redmond, where a team walked me through what it took to deliver a fix that low in the stack and just how much testing across so many different languages had to happen before a patch could even be released. This was well before Bill Gates’ Trustworthy Computing memo, yet the process required to triage and fix a security vulnerability was clearly a gargantuan effort. I gained a lot more respect for the process that week.

Anyways, to my point. You probably understand the vulnerability you found and the PoC exploit you wrote way better than the people you talk to in security triage or support. But leave your ego at the door. You DON’T know how that vulnerability fits into the bigger picture on their end.

You don’t know what the current product backlog is. You don’t know the impact a fix may have on the vulnerable section of code or the modules that may interact with it. You don’t know how the vendor ranks the risk of this vulnerability in comparison to other security technical debt. You don’t know of mitigations they may put in place to reduce the risk of your vulnerability until a permanent fix can be implemented.

The list can go on. You just don’t know all the things happening behind the scenes. Don’t assume nefarious intent by the vendor. Assume they have good intentions and want to keep their customers, systems, and data safe. No one purposely writes insecure code and wants to put their customers at risk.

And whether they know it or not… they want to know about these vulnerabilities you found. It may not seem like it from time to time when you consider their responses… but they do. If it’s a real vulnerability that actually impacts the business, they want to know.

You just need to approach them in a positive way that helps them see that. Finding the right contact to talk to is the best place to start.

Finding the right contact to talk to

This might be depressing to hear, but sometimes finding a vuln is easier than finding the right person to talk to inside a company. Not all companies have a well-defined security triage process that is documented and easy to find.

When looking for the right point of contact, here are a few places to look:

  • security.txt: Just like there is a robots.txt for web crawlers, there is a security.txt for security researchers. If the company follows the spec (RFC 9116), you can usually find this file at <domain>/.well-known/security.txt or, as a legacy fallback, at the root of the domain at <domain>/security.txt. It will include contact and encryption key details, as well as links to any VDP and/or BBP documentation they have (see the sketch after this list).
  • Bug Bounty Program: Chances are that if the vendor has a BBP, they will have clear documentation that is easy to find. They may be registered on a crowdsourced platform like HackerOne or BugCrowd, or have links to the program somewhere on their website. Some Google dorking should be able to find it.
  • Contact page: Most vendors have a “Contact Us” page on their website. You can use this to contact the company and ask who the best person to talk to on the security team would be. Do NOT report your vulnerability here. All you are asking for is a referral to someone responsible for security.
  • Legal / Privacy pages: You’d be surprised how often you can find contact details for security-related issues deep within the legal and/or privacy pages of a vendor’s website. This is usually where things like disclosure policies and safe harbor statements are linked.
  • Domain registration details: The WHOIS record for the domain may include contact details for the people responsible for the resource. This might include an abuse@domain, security@domain, or webmaster@domain email address that you can use as a starting point.
  • Use CERT: If you can’t find a contact for a vendor, or the contacts you do find aren’t responding, you might want to reach out to your national CERT or the CERT Coordination Center (CERT/CC). They usually have ways to engage with vendors when you cannot.
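
If you want to automate that first check, here is a minimal sketch of what looking for a security.txt file could look like. It assumes the third-party requests package, and the domain is a placeholder:

```python
# A minimal sketch, not a complete tool. Assumes the third-party "requests"
# package; "vendor.example" is a placeholder domain.
import requests

CANDIDATE_PATHS = (
    "/.well-known/security.txt",  # location defined by RFC 9116
    "/security.txt",              # legacy fallback at the web root
)


def find_security_txt(domain):
    """Return the raw security.txt contents if the vendor publishes one."""
    for path in CANDIDATE_PATHS:
        url = f"https://{domain}{path}"
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        # RFC 9116 requires at least one Contact field, so use it as a sanity check.
        if resp.status_code == 200 and "Contact:" in resp.text:
            return resp.text
    return None


if __name__ == "__main__":
    contents = find_security_txt("vendor.example")
    if contents:
        print(contents)  # typical fields: Contact, Encryption, Policy, Expires
    else:
        print("No security.txt found; try the other options in the list above.")
```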

Submitting your report

Now that you have the right contact details, it’s time to reach out.

You want to first engage with them to determine how they wish to communicate with you and what they expect. If you haven’t already collected the details, you will need to ask them for the following:

  • Do they have a Vulnerability Disclosure Policy (VDP) you can follow?
  • Do they have a public PGP key to use for secure communications?

Notice how you DON’T ask if they have a bug bounty program at this stage? It can be taken the wrong way if it looks like you are demanding payment to submit the report. I like how Troy Hunt calls this “beg bounties.” Chester Wisniewski from Sophos went into even more detail to show the kind of reports you SHOULDN’T be submitting.

If you want compensation for your security research work, you should have already known whether the vendor has a bug bounty program before you started the work. But I digress.

Once you have as much information as possible to submit your report, do so.
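
If the vendor shared a public PGP key, encrypt the report before you send it. Here is a minimal sketch of what that could look like, assuming the third-party python-gnupg package, a local gpg binary, and placeholder file names:

```python
# A minimal sketch, assuming the third-party "python-gnupg" package and a local
# gpg binary. The key file, report file, and output file names are placeholders.
import gnupg

gpg = gnupg.GPG()

# Import the vendor's public key (e.g., the one linked from the "Encryption"
# field of their security.txt or from their VDP page).
with open("vendor-public-key.asc", "r") as key_file:
    import_result = gpg.import_keys(key_file.read())

with open("report.md", "r") as report_file:
    report = report_file.read()

# Encrypt the report to the vendor's key so only their security team can read it.
encrypted = gpg.encrypt(
    report,
    import_result.fingerprints,  # recipients: the key(s) we just imported
    always_trust=True,           # we deliberately imported this key ourselves
)

if encrypted.ok:
    with open("report.md.asc", "w") as out:
        out.write(str(encrypted))  # ASCII-armored ciphertext to send
else:
    print(f"Encryption failed: {encrypted.status}")
```

An ASCII-armored file like this can be pasted into an email or attached to the vendor’s intake form without worrying about who else might read it in transit.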

I was going to write a full breakdown of what to include in a good report and remembered that Vickie Li already wrote about that. So I encourage you to read her article.

One thing I will add is that the people reading your report are human. Always act professionally, and never take their responses personally. Never assume the worst. You don’t know what’s going on over there. Show respect. Be empathetic. And cut ties if they don’t reciprocate.

They owe you NOTHING. But you owe them NOTHING. If you find it challenging to collaborate, then move on. There are plenty of other companies that appreciate the efforts of professional security researchers.

One last thing…

If you regularly read my blog, you know that I believe hacking is not a crime. But I also believe that without safe harbor provisions, clear VDPs, and well-defined BBPs, you really shouldn’t be conducting research on production systems and expecting to be compensated for it.

Yes, you can make money hacking. To do so though, you have to play by the rules. Don’t waste the vendor’s time or yours. Don’t extort vendors. Don’t hold a vuln as ransom. Use coordinated vulnerability disclosure, and give the vendor time to fix the issue you found.

Conclusion

If you want to make money from your security research, you must go about it the right way. Know who you are dealing with and follow their guidelines for disclosure.

Be professional when submitting reports, and always follow ethical principles in your activities. If you’re respectful of the people and organizations you are dealing with, chances are that your vulnerability disclosure experience will be a much more pleasant one. You can fulfill whatever motivates you as a security researcher and not get thrown in jail.

Good luck out there! Happy hacking. 🙂


Are you interested in more resources about hacking apps and APIs? Grab my free PDF of The Ultimate Guide to API Hacking Resources.

Dana Epp