In the world of software development, we often talk about "shifting left" — the practice of moving security testing and validation to the earliest possible point in the development lifecycle. It's a core principle of our "Secure Programming" competency, which challenges us to not just apply security standards but to anticipate and analyze potential issues throughout the entire SDLC.

Recently, I encountered a perfect textbook case of this principle in action. A static analysis tool in our CI/CD pipeline flagged a potential vulnerability, allowing us to patch a security hole before it ever had a chance to be merged, let alone deployed.

Here's the story of the bug, the discovery, and the fix.

The Hidden Danger: A Potential SSRF

We have a common utility function called authenticatedRequest. Its job is simple: take an API endpoint and an auth token, and make a fetch call to our backend.

The original code looked like this:

// The vulnerable "before" code
import { AuthError } from "./errors"; // AuthError is our app's custom auth exception; adjust the path to your project

const API_BASE_URL = process.env.NEXT_PUBLIC_API_URL || "http://localhost:8000";

export async function authenticatedRequest(
    endpoint: string,
    token: string,
    options: RequestInit = {},
): Promise<Response> {
    
    // VULNERABILITY: 'endpoint' is directly concatenated
    const response = await fetch(`${API_BASE_URL}${endpoint}`, {
        ...options,
        headers: {
            Authorization: `Bearer ${token}`,
            "Content-Type": "application/json",
            ...options.headers,
        },
    });

    if (response.status === 401 || response.status === 403) {
        throw new AuthError("Authentication failed");
    }

    return response;
}

The problem is subtle but critical: the endpoint string is being directly concatenated to our API_BASE_URL.

This is a classic Server-Side Request Forgery (SSRF) vulnerability, listed in the OWASP Top 10 (A10:2021). A malicious user (or another compromised part of the application) could pass a carefully crafted endpoint value to trick our server into making requests to places it should never talk to.

For example, an attacker could try inputs like:

  • //evil.com/steal (a protocol-relative URL that swaps out our host entirely)
  • http://169.254.169.254/latest/meta-data/ (a cloud metadata endpoint, a classic SSRF target)
  • ../internal/admin (path traversal to reach endpoints outside the intended API surface)

Because the fetch call happens on our server, it could potentially expose internal network services or send our application's auth token to an attacker's server.
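To see why concatenation is dangerous, it helps to look at how a crafted endpoint is actually parsed. This is an illustration only (evil.com stands in for an attacker-controlled host) and does not call our real authenticatedRequest:

```typescript
// Illustration: naive concatenation can silently change the request target.
// "evil.com" is a placeholder for an attacker-controlled host.
const API_BASE_URL = "http://localhost:8000";

// An attacker-supplied "endpoint" beginning with an @ sign:
const endpoint = "@evil.com/steal-token";

// Concatenation produces a syntactically valid URL...
const concatenated = `${API_BASE_URL}${endpoint}`;
// "http://localhost:8000@evil.com/steal-token"

// ...but its host is no longer our API: "localhost:8000" is parsed as
// userinfo (username:password), and evil.com becomes the actual host.
const parsed = new URL(concatenated);
console.log(parsed.host);     // "evil.com"
console.log(parsed.username); // "localhost"
```

Whether a given HTTP client will actually send this request varies (the WHATWG fetch spec rejects URLs with embedded credentials, but many other clients do not), yet the parsing alone shows how much trust string concatenation silently places in the endpoint argument.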

The Automated Watchdog: Discovery via Pipeline

The best part of this story isn't the bug, but how we found it. This vulnerability never made it to a staging environment or even a manual code review. It was caught automatically, thanks to two key pieces of our SDLC:

  1. GitLab CI/CD: Our development process is built around GitLab pipelines that run on every single commit push.
  2. Semgrep: A static application security testing (SAST) tool integrated into our pipeline, configured to run against our code with a ruleset that includes checks for common vulnerabilities like SSRF.

The moment I pushed my branch, the pipeline kicked off. Within minutes, the Semgrep job failed, pointing directly at the fetch(`${API_BASE_URL}${endpoint}`) line and identifying it as a potential SSRF flaw.

This is "shifting left" in its most practical form. The feedback was immediate, context-specific, and blocked the problematic code from proceeding — no human intervention required.

Building the Fortress: A Defense-in-Depth Fix

Understanding the "why" from the Semgrep report, I refactored the function to apply strict, multi-layered validation. This is the "application of standard secure programming" in practice.

Here is the new, secure version:

// The secure "after" code
import { AuthError } from "./errors"; // AuthError is our app's custom auth exception; adjust the path to your project

const API_BASE_URL = process.env.NEXT_PUBLIC_API_URL || "http://localhost:8000";

export async function authenticatedRequest(
    endpoint: string,
    token: string,
    options: RequestInit = {},
): Promise<Response> {
    
    // 1. VALIDATION: Block dangerous patterns
    if (
        endpoint.startsWith("//") || // Protocol-relative URL
        endpoint.includes("://") || // Absolute URL
        /^[a-zA-Z][a-zA-Z0-9+.-]*:/.test(endpoint) || // Any other scheme
        endpoint.includes("..") || // Path traversal
        endpoint.includes("\\") // Windows path separators
    ) {
        throw new AuthError("Invalid endpoint");
    }

    // 2. NORMALIZATION: Ensure it's a path
    if (!endpoint.startsWith("/")) endpoint = `/${endpoint}`;

    // 3. SAFE CONSTRUCTION: Use the URL constructor, not string concatenation
    const url = new URL(endpoint, API_BASE_URL);

    // 4. FINAL CHECK: Explicitly verify the origin
    if (url.origin !== new URL(API_BASE_URL).origin) {
        throw new AuthError("Invalid endpoint");
    }

    const response = await fetch(url.toString(), {
        ...options,
        headers: {
            Authorization: `Bearer ${token}`,
            "Content-Type": "application/json",
            ...options.headers,
        },
    });

    if (response.status === 401 || response.status === 403) {
        throw new AuthError("Authentication failed");
    }

    return response;
}

This new function is far more robust. Let's break down the layers of defense:

  1. Blocklist Known Bad Patterns: The initial if statement immediately rejects any input that looks like a path traversal (..) or an absolute/protocol-relative URL (://, //). This is a fast first-pass filter.
  2. Normalize the Path: We ensure the endpoint starts with a /. This helps the URL constructor correctly interpret it as a path, not a hostname.
  3. Use a Safe Constructor: We leverage the standard URL constructor, passing the API_BASE_URL as the base. This API is designed to combine a base URL and a relative path, handling edge cases like duplicate slashes or a missing leading slash far more reliably than simple string building.
  4. Allowlist the Origin: This is the most important check. After the URL object is constructed, we compare its final origin property against the origin of our API_BASE_URL. If they don't match exactly, we throw an error. This single check neutralizes the SSRF attempt, because the request can only ever be sent to our own API origin.
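To make the layered checks concrete, here is a small standalone sketch that mirrors the same validation logic (isSafeEndpoint is a hypothetical helper written for this post, not part of our codebase):

```typescript
// Standalone sketch of the four validation layers described above.
// isSafeEndpoint is illustrative only, not our production code.
const API_BASE_URL = "http://localhost:8000";

function isSafeEndpoint(endpoint: string): boolean {
    // Layer 1: blocklist known-bad patterns
    if (
        endpoint.startsWith("//") ||
        endpoint.includes("://") ||
        /^[a-zA-Z][a-zA-Z0-9+.-]*:/.test(endpoint) ||
        endpoint.includes("..") ||
        endpoint.includes("\\")
    ) {
        return false;
    }
    // Layer 2: normalize to a path
    const path = endpoint.startsWith("/") ? endpoint : `/${endpoint}`;
    // Layers 3 + 4: safe construction, then origin allowlist
    const url = new URL(path, API_BASE_URL);
    return url.origin === new URL(API_BASE_URL).origin;
}

console.log(isSafeEndpoint("/users/me"));        // true
console.log(isSafeEndpoint("//evil.com/steal")); // false: protocol-relative
console.log(isSafeEndpoint("https://evil.com")); // false: absolute URL
console.log(isSafeEndpoint("../../etc/passwd")); // false: path traversal
```

Notice that even a tricky input like "@evil.com" is harmless here: layer 2 turns it into the path "/@evil.com", so the URL constructor never treats it as a host.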

Analyzing Security Across the Entire SDLC

My experience with Semgrep and SSRF is a perfect example of security in the "Code" and "Build" phases. But a truly robust security posture, as our competency requires, means analyzing the entire lifecycle, from a developer's first keystroke to the application running in production.

This single fix is just one part of a larger security ecosystem. We must also analyze and anticipate other issues across the SDLC.

1. Source Code Integrity (The "Code" Phase)

  • The Issue: How do we trust that the code in our repository actually came from a verified developer? A compromised GitHub or GitLab account could be used to inject malicious code (like a backdoor or a crypto-miner) directly into our main branch.
  • Anticipation: This is precisely the problem solved by commit signing with GPG. By requiring all commits to be cryptographically signed, we move from "trusting the username" to "verifying the cryptographic identity." In our project, we can anticipate this by enabling branch protection rules that reject any unsigned commits, ensuring source code integrity.

2. Software Supply Chain (The "Build" Phase)

  • The Issue: Our application doesn't just contain our code; it contains hundreds of third-party dependencies from npm, Maven, or PyPI. A recent example is the xz-utils backdoor, where a malicious actor nearly compromised millions of servers. How do we trust our dependencies?
  • Anticipation: We can't trust them blindly. We must anticipate this risk by:
      • Dependency Scanning: Integrating tools like Snyk, Dependabot, or GitLab's own Dependency Scanning into our pipeline. These tools check our package-lock.json against a database of known vulnerabilities (CVEs) and alert us.
      • Artifact Signing: Just as we sign commits, we can sign our final build artifacts (e.g., Docker images using Sigstore/Notary). This allows our production environment to verify that the image it's about to run is the exact, unmodified image our CI pipeline built.

3. Secrets Management (The "Deploy" Phase)

  • The Issue: A common failure is hardcoding secrets (like the token in my example, database passwords, or API keys) directly in the code. This makes them visible to anyone with code access and permanently logs them in Git history.
  • Anticipation: We anticipate this by establishing clear processes:
      • Using a Secrets Manager: Never store secrets in code. Instead, we use GitLab CI/CD variables, HashiCorp Vault, or a cloud provider's secrets manager to inject them at build time or runtime.
      • Implementing Pre-Commit Hooks: We can use tools like gitleaks or trufflehog locally. These hooks scan code before it's even committed, preventing secrets from ever leaving the developer's machine and entering the repository.
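As a minimal illustration of the "never in code" rule, a service can fail fast at startup if a required secret was not injected into its environment. This is a sketch; requireSecret and DATABASE_PASSWORD are illustrative names, not part of our codebase:

```typescript
// Sketch: read secrets from the environment at runtime instead of
// hardcoding them. Names here are illustrative, not from our codebase.
function requireSecret(name: string): string {
    const value = process.env[name];
    if (!value) {
        // Fail fast at startup rather than at the first failed call
        throw new Error(`Missing required secret: ${name}`);
    }
    return value;
}

// Injected by GitLab CI/CD variables, Vault, etc.; never committed to Git.
// const dbPassword = requireSecret("DATABASE_PASSWORD");
```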

By analyzing security at every stage — from the developer's GPG key (Code) to dependency scanning (Build) and artifact signing (Deploy) — we evolve from a reactive "bug-fixing" model to a proactive, defense-in-depth culture.

Conclusion

This experience was a powerful reminder that security isn't a feature we add or a final step we take. It's a continuous process and a shared responsibility.

By building automated security analysis (Semgrep) directly into our core development workflow (GitLab CI), we transformed a potential security crisis into a simple, educational development task. This proactive, automated approach is the single most effective way to uphold our secure programming standards and build applications that are safe by design.