Imagine crowdsourced security: thousands of people dispersed across every time zone, available to warn your company of vulnerabilities in its websites, mobile applications, APIs or IoT devices. Imagine further that these individuals will agree to register with and be vetted by your program, and that they will accept your legal terms and conditions regarding which systems and services are in scope, which testing methods are permitted, and which actions are out of bounds (such as intentionally accessing or copying personal data, or hacking third-party providers). Imagine further still that these ethical hackers will not publicly disclose any vulnerabilities they find without express permission. Finally, imagine that you pay out a reward of your choosing only to those white hat hackers who actually discover vulnerabilities, and even then only if the bugs were not previously submitted and could not easily be found by running automated scans.
Sound too good to be true? It’s not, and it’s happening every day through well-organized vulnerability disclosure programs established by highly reputable organizations. Although “bug bounties” have existed for some time, program methodology has matured significantly over the last few years. Perhaps a turning point for institutional acceptance occurred in 2016 when the Department of Defense launched its Hack the Pentagon initiative. During a trial period that lasted less than a month, the Pentagon received and resolved 138 vulnerabilities that were found to be “legitimate, unique and eligible for a bounty,” with a total payout of $75,000 (a drop in the bucket for an organization that spends billions of dollars annually on information security).
Last year, the U.S. Department of Justice sought to facilitate bug bounties by issuing guidance to the private sector on the legal parameters of vulnerability disclosure programs. Among other things, DOJ stressed the need for companies to make clear exactly what conduct they are authorizing that would otherwise be illegal, and to remain mindful of sensitive data they may be placing at risk through the program.
Which brings us to this year, when NIST revised its Cybersecurity Framework to recommend – consistent with ISO/IEC 29147 – that companies consider establishing processes “to receive, analyze and respond to vulnerabilities disclosed to the organization from internal and external sources (e.g., internal testing, security bulletins, or security researchers).”
The process for engaging the security research community often includes these five steps:
- Ensure traditional code review and penetration tests have already been performed;
- Assess the organization’s capacity to vet and respond to disclosures (a fair number of which will be unhelpful “noise”);
- Consider using third-party vendors to host and publicize the program;
- Decide between a program that is public (known to all researchers) or private (entrusted to selected researchers); and
- Develop sound legal policy that protects both the company and the good-faith researcher.
If you’re still unsure about bug bounties, consider this: what better way is there to harness the power of hackers for good?
This post was originally published on Security Magazine