When I hand clients a pentest report with TLS findings, their eyes glaze over — then come the lines I hear most days:
"We already redirect to HTTPS, so it's fine."
"It's an internal app with a self-signed cert — low risk."
"The cert's about to expire; we'll renew it next sprint."
"We've enabled HSTS across the domain."
Those answers sound reasonable in a meeting room. They look dangerously optimistic when I'm on the network and can watch the traffic flow. Because SSL stripping — the attack that forces users to stay on HTTP while the attacker talks HTTPS to the real site — still works when small, practical gaps exist. And those "small" gaps are everywhere.
I'm not writing another "what is SSL stripping" post. There are plenty of those. I'm writing what I actually see in pentests, how that makes the attack realistic (not theoretical), and how you prove the risk is gone without playing the exploit demo on your corporate Wi-Fi.
The simple failure I keep seeing: HSTS "enabled" — but not everywhere
Teams love to show me that the homepage has:
```
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
```
Great. But then I crawl more pages (`/dashboard`, `/api/status`, `/error`, `/cdn/asset`) and some responses carry no HSTS header at all.
How does that happen?
- Header injection only at one layer (homepage served from a different host).
- Multiple backends are stitched under one domain; each team configures headers differently.
- CDN or cache misconfiguration strips security headers.
- Selective placement: "We only put HSTS on /login."
If any response for your domain can be served without HSTS, a browser may never cache the policy, so it can still make the initial plaintext HTTP request; that first request is all an attacker needs to intercept and block the redirect. In real terms, the downgrade window is still open.
It's a small operational gap. For attackers, it's an invitation.
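This kind of gap is easy to enumerate without any exploit tooling. Below is a minimal sketch that probes a list of paths and flags responses missing the HSTS header; the domain and path list are placeholders, substitute your own:

```shell
#!/usr/bin/env sh
# Sketch: flag responses on a domain that lack an HSTS header.
# Domain and paths below are placeholders -- adjust for your app.

has_hsts() {
  # $1: raw response headers; succeed if an HSTS header is present
  printf '%s\n' "$1" | grep -qi '^strict-transport-security:'
}

probe_hsts() {
  # $1: base URL, e.g. https://app.example.com
  for path in / /dashboard /api/status /error /cdn/asset; do
    headers=$(curl -sI "$1$path")
    if has_hsts "$headers"; then
      printf 'OK    %s\n' "$path"
    else
      printf 'MISS  %s\n' "$path"
    fi
  done
}

# Usage (only against hosts you are authorized to test):
#   probe_hsts https://app.example.com
```

A single `MISS` line in the output is exactly the artifact worth pasting into a ticket.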
Why "we didn't exploit it" is not a good defense
Clients sometimes demand a live exploit demo before they'll prioritize fixes. That's the wrong metric.
You don't need to run an exploit to prove a problem. The evidence that matters — and that convinces engineers and managers — is simple, repeatable, and safe:
- The server redirects from HTTP to HTTPS (so the browser performs an HTTP request first).
- The HSTS header is missing on some endpoints.
- The certificate is self-signed, expired, or has chain issues.
- Session cookies are missing the `Secure`/`HttpOnly`/`SameSite` flags.
These are not assertions; they're artifacts you can paste into a ticket. They show the path an attacker would use. And they're legally and ethically safe to show in a report.
Real-world scenarios I encountered:
"HSTS is on" (but only on login)

A client told me, "We've enabled HSTS across the domain." The login page did return:
```
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
```
But other routes like `/dashboard` or `/home/details` did not. In that deployment, those pages were only reachable after authenticating through `/login`, so a browser that had already visited `/login` would cache the HSTS policy and enforce HTTPS for post-login routes. In that narrow flow, missing post-login headers are not an immediate exploit.
The real problem is first-contact entry points. Users don't always start at `/login`; they arrive from bookmarks, marketing links, or `/register` and `/contact` pages. If any public entry point does not return HSTS, a fresh browser visiting that endpoint can be downgraded to HTTP before it ever sees `/login`. That's the downgrade window attackers exploit.
Fix: don't rely on "login has it." Inject HSTS consistently at the edge (CDN/load balancer/reverse proxy) so that every response — including login, register, contact, and landing pages — carries the header. Then protection does not depend on how users arrive.
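As one concrete way to do this at the edge: with nginx as the reverse proxy, a single `add_header` directive in the `server` block covers every response. This is a sketch under the assumption you run nginx (the domain is a placeholder); adapt the idea to your CDN or load balancer:

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;   # placeholder domain

    # "always" attaches the header to every response code,
    # including the 4xx/5xx error pages that teams often miss.
    add_header Strict-Transport-Security
               "max-age=31536000; includeSubDomains; preload" always;

    # ... ssl_certificate, locations, proxy_pass, etc.
}
```

One nginx-specific gotcha worth knowing: `add_header` directives declared inside a `location` block replace, rather than merge with, those inherited from the `server` block, which is itself a common source of the per-route gaps described above.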
Internal VPN/Admin login with a self-signed cert
I still see internal admin consoles and VPN login pages using self-signed certificates on internal IPs. Support trains staff to "click through" warnings during maintenance. That habit is dangerous: once users are conditioned to ignore TLS errors, an attacker on the network can impersonate the service with a fake cert and harvest credentials.
Fix: issue proper certificates from an internal CA or trusted PKI, use hostnames (not raw IPs), automate issuance and renewal, and stop the "click-through" culture. Internal doesn't mean exempt — attackers love internal logins because they're trusted entry points.
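Finding these consoles doesn't require clicking through anything: on a self-signed certificate the issuer and subject are identical, which you can check mechanically. A small sketch (the IP in the comment is a placeholder):

```shell
#!/usr/bin/env sh
# Sketch: flag a certificate as self-signed when issuer == subject.

is_self_signed() {
  # $1: path to a PEM certificate
  issuer=$(openssl x509 -in "$1" -noout -issuer)
  subject=$(openssl x509 -in "$1" -noout -subject)
  [ "${issuer#issuer=}" = "${subject#subject=}" ]
}

# Fetch a live cert first (authorized targets only), e.g.:
#   openssl s_client -connect 10.0.0.5:443 </dev/null 2>/dev/null \
#     | openssl x509 > admin-console.pem
#   is_self_signed admin-console.pem && echo "self-signed!"
```

Note that trusted root CAs are also technically self-signed, so treat a match as a flag to investigate, not an automatic finding.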
Practical acceptance tests (paste these in your ticket)
Use these safe, non-exploit checks to prove the problem and then to validate fixes. Engineers can run them and paste the results into the ticket.
1. Check redirect behavior

```shell
curl -I http://app.example.com
```

Expect: `HTTP/1.1 301` with a `Location: https://...` header; this proves the initial HTTP hop exists.
2. Check HSTS header on multiple endpoints

```shell
curl -I https://app.example.com
curl -I https://app.example.com/dashboard
curl -I https://app.example.com/api/status
```

Acceptance: every response must contain `Strict-Transport-Security: ...`.
3. Cert chain & expiry

```shell
openssl s_client -connect app.example.com:443 -servername app.example.com </dev/null 2>/dev/null | openssl x509 -noout -dates -issuer -subject
```

Acceptance: cert issued by a trusted CA, expiry > 60 days (or automated renewal in place with a successful dry run).
4. Cookie attributes

```shell
curl -I -L https://app.example.com | grep -i set-cookie
```

Acceptance: session cookies include `Secure; HttpOnly; SameSite=...`.
5. External TLS health check
- Run SSL Labs or Mozilla Observatory (public sites) and paste the report link or screenshot.
- Acceptance: no critical TLS flags, no TLSv1.0, acceptable grade.
These are the exact artifacts you should attach to a remediation ticket. No live MITM necessary. No "prove it by breaking it" mindset required.
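For the "expiry > 60 days" threshold in test 3, the remaining lifetime can be computed instead of read by eye. A sketch assuming GNU `date` (on macOS, use `date -j -f` with an explicit format string):

```shell
#!/usr/bin/env sh
# Sketch: print the number of whole days until a PEM cert expires.

days_left() {
  # $1: path to a PEM certificate
  end=$(openssl x509 -in "$1" -noout -enddate | cut -d= -f2)
  end_s=$(date -d "$end" +%s)   # GNU date; macOS needs: date -j -f
  now_s=$(date +%s)
  echo $(( (end_s - now_s) / 86400 ))
}

# Example gate for a ticket's acceptance test:
#   [ "$(days_left app.pem)" -gt 60 ] || echo "FAIL: cert expires soon"
```

Wiring this into CI turns "we'll renew it next sprint" into a check that fails loudly before the deadline.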
Public demos of SSL-stripping tools can be useful for learning, but they must be run in isolated lab environments or under explicit written authorization. Do not execute exploit code against production networks. If a client insists on a live demo, require a signed Rules of Engagement (ROE) and formal approval before proceeding.
AI and TLS: why old gaps matter more now
You don't need to be an AI engineer to see the next risk. Organisations add API-driven AI features — chat assistants, document summarizers, analytics agents — that call internal endpoints. If those API calls run on routes that miss HSTS or use weak TLS, an attacker who downgrades traffic can feed poisoned inputs to those AI agents. That turns a "stolen credential" problem into a data-poisoning or instruction-injection problem with a wider blast radius.
I'm not claiming every fintech or healthcare core system runs GenAI inside PCI/PHI paths — many don't, and those that do add controls. But where AI sits in the stack (chatbots, analytics pipelines, knowledge agents), treat its inputs with the same TLS discipline as you treat payment and patient data.
The Remediation Ticket template you can copy-paste
Title: Close TLS downgrade window — enforce global HSTS, replace self-signed/expiring certs, and harden cookies.
Summary: The initial HTTP → HTTPS redirect exists; HSTS is missing on secondary endpoints; some certificates are self-signed or expiring; session cookies lack `Secure`/`HttpOnly`. This allows a possible network-path downgrade and data interception.
Acceptance tests: Paste the `curl`, `openssl`, and external scan outputs from above. The fix is verified when every endpoint returns HSTS, certificates are issued by a trusted CA with automated renewal, and cookies are hardened.
SSL stripping isn't some exotic zero-day. It usually comes down to missed headers, self-signed certs, or expired renewals. These are operational gaps, not hard problems. The fixes are simple: enforce HSTS at the edge, automate certificates, and harden cookies. Do those consistently, and you've already closed off a whole class of man-in-the-middle risks.