As the number of containers, Kubernetes manifests and Git repositories in an environment grows, security scanning can easily devolve into an ad-hoc, inconsistent practice. Trivy provides a powerful scanning engine, but in many setups it remains a one-off CLI command developers run manually on their own machines. A more robust approach is to expose Trivy through a small, central service that can run scans on remote servers, container images, filesystems and Git repositories, both on demand and on a schedule.

The following sections describe such an approach: how Trivy works under the hood (especially for vulnerabilities and misconfigurations), how its findings are represented in reports, and how a Spring Boot–based backend can orchestrate Trivy via SSH, generate HTML/JSON reports, and schedule periodic scans with Quartz.

How Trivy works under the hood

Trivy's execution model can be viewed in two dimensions:

  • What to scan (target) — image, filesystem, repository, Kubernetes, etc.
  • What to look for (scanner) — vulnerabilities, misconfigurations, secrets, licenses, and so on.

On first execution, Trivy downloads its own vulnerability database and caches it locally (via a trivy-db image hosted on GHCR). This database aggregates information from multiple sources: Linux distribution advisories, ecosystem-specific security advisories (e.g., GitHub Security Advisory, OSV), and global sources such as NVD. The database is updated regularly so that subsequent scans incorporate the latest advisories.

When scanning a container image or filesystem, Trivy first builds an inventory of what is present:

  • OS packages (Alpine, Debian, Ubuntu, RHEL, etc.)
  • Language-level dependencies derived from lockfiles such as package-lock.json, yarn.lock, Pipfile.lock, poetry.lock, Gemfile.lock, composer.lock, and similar artifacts

It then matches each (package, version) pair against its advisory database. When a match is found, the corresponding vulnerability record is attached to the scan result, including the identifier (e.g., CVE, AVD), severity, description, affected package, and, when available, fixed versions and references.
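Conceptually, this matching step is a lookup of every inventoried (package, version) pair against an advisory index. The sketch below is purely illustrative and is not Trivy's actual implementation: the record types, the map-based index and the naive string comparison are stand-ins for the real, ecosystem-aware version-range matching.

// Illustrative only: a toy version of "inventory + advisory lookup".
// Real matchers compare version ranges per ecosystem/distribution.
record InstalledPackage(String name, String installedVersion) {}
record Advisory(String vulnerabilityId, String packageName, String fixedVersion, String severity) {}

List<Advisory> findVulnerabilities(List<InstalledPackage> inventory,
                                   Map<String, List<Advisory>> advisoriesByPackage) {
    List<Advisory> findings = new ArrayList<>();
    for (InstalledPackage pkg : inventory) {
        for (Advisory advisory : advisoriesByPackage.getOrDefault(pkg.name(), List.of())) {
            // Naive comparison as a placeholder for proper version-range matching
            if (pkg.installedVersion().compareTo(advisory.fixedVersion()) < 0) {
                findings.add(advisory);
            }
        }
    }
    return findings;
}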

Misconfiguration scanning follows a different path. Trivy identifies infrastructure-as-code and configuration artifacts such as Dockerfiles, Kubernetes manifests, Terraform and CloudFormation templates, and converts them into an intermediate structured representation. These are then evaluated by a policy engine based on Rego (OPA) and Go. The checks themselves are distributed as a "checks bundle" (also via container images), which Trivy downloads and loads into OPA at runtime. Each check encodes a rule such as "containers should not run as root," "S3 buckets should not be public," or "Kubernetes RBAC roles should not be overly permissive," and is applied systematically to the relevant resources.

In summary:

  • Vulnerability scanning is essentially dependency and package inventory + advisory database lookup.
  • Misconfiguration scanning is IaC parsing + policy evaluation.

How findings are represented in reports

Trivy's JSON output encodes scan results in a structured and machine-readable way. For each scanned target, there is a list of result objects, and each vulnerability entry typically includes:

  • VulnerabilityID – often a CVE identifier or an Aqua-specific AVD ID
  • PkgName, InstalledVersion, FixedVersion (if known)
  • Title, Description
  • Severity (e.g., LOW, MEDIUM, HIGH, CRITICAL)
  • SeveritySource – indicating which provider's rating is being used as the primary one (NVD, vendor advisory, etc.)
  • PrimaryURL – usually a canonical page for the issue, e.g., an AVD link
  • References[] – links to NVD, vendor advisories, CVE.org, GitHub Advisory pages and related resources

Vendor-specific severity information is often preserved under VendorSeverity, allowing downstream tooling to understand how different distributions or vendors classify the same issue.
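For programmatic consumption, these fields map naturally onto a few small DTOs. The sketch below assumes Jackson is available and models only a subset of the report: a top-level Results array with its Target and Vulnerabilities fields, plus the vulnerability attributes listed above; everything else is ignored.

// Partial model of a Trivy JSON report; unknown fields are deliberately ignored.
@JsonIgnoreProperties(ignoreUnknown = true)
record TrivyReport(@JsonProperty("Results") List<Result> results) {}

@JsonIgnoreProperties(ignoreUnknown = true)
record Result(@JsonProperty("Target") String target,
              @JsonProperty("Vulnerabilities") List<Vulnerability> vulnerabilities) {}

@JsonIgnoreProperties(ignoreUnknown = true)
record Vulnerability(@JsonProperty("VulnerabilityID") String vulnerabilityId,
                     @JsonProperty("PkgName") String pkgName,
                     @JsonProperty("InstalledVersion") String installedVersion,
                     @JsonProperty("FixedVersion") String fixedVersion,
                     @JsonProperty("Severity") String severity,
                     @JsonProperty("PrimaryURL") String primaryUrl,
                     @JsonProperty("References") List<String> references) {}

// Usage: TrivyReport report = new ObjectMapper().readValue(jsonBytes, TrivyReport.class);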

Misconfiguration entries have a similar level of detail, but oriented around configuration rules rather than packages. A typical misconfiguration finding contains:

  • A rule or check ID (often the ID from the Trivy policy repository)
  • Severity
  • The affected resource (for example, Kubernetes kind/name/namespace or a Terraform resource)
  • The file and sometimes line information
  • A short description of the issue and guidance on remediation

These checks are backed by explicit Rego/Go policies, so their exact logic is inspectable for those who need deeper transparency.
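In the JSON report these findings appear as a Misconfigurations array alongside Vulnerabilities in each result object, so a parallel DTO can capture them. The fields modeled below (ID, Title, Severity, Message, Resolution, PrimaryURL) are only a subset; file and line details live in additional cause-metadata fields not shown here.

// Partial model of a misconfiguration entry from Trivy's JSON report.
@JsonIgnoreProperties(ignoreUnknown = true)
record Misconfiguration(@JsonProperty("ID") String id,
                        @JsonProperty("Title") String title,
                        @JsonProperty("Severity") String severity,
                        @JsonProperty("Message") String message,
                        @JsonProperty("Resolution") String resolution,
                        @JsonProperty("PrimaryURL") String primaryUrl) {}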

Trivy supports both JSON and HTML outputs. JSON lends itself well to automation and integration with other systems (e.g., dashboards, SIEM, ticketing). HTML is useful as a human-friendly format: especially when combined with visualizers such as scan2html, it becomes a single-page, filterable report where each vulnerability row presents severity, the affected package, versions and clickable references; each misconfiguration row similarly exposes rule metadata and remediation hints.

[Figure: Detected vulnerabilities from a container image in HTML format]

[Figure: Detected misconfigurations from a container image in HTML format]

Turning Trivy into a small HTTP service

A common pattern in teams is that each developer or operator runs Trivy locally with their own set of flags and scripts. This leads to inconsistent usage and scattered results. A small HTTP service around Trivy can instead provide:

  • A single place where scan configurations (targets, formats, schedules) are defined
  • A consistent way of executing scans on remote servers
  • A uniform representation of reports that the UI can list and expose for download
  • A foundation for later integration with CI/CD and other systems

A typical interaction flow for such a service is:

  1. The user connects through a web UI and provides the IP/hostname and SSH credentials (username/password and optionally a PEM file) for the server that will run Trivy.
  2. The user selects what to scan: a filesystem path, a container image, a Git repository or a Kubernetes cluster, and chooses the desired output format (HTML or JSON).
  3. For manual scans, the user triggers the scan immediately. For periodic scans, the user provides a cron expression and creates a scheduled task.
  4. The backend connects to the specified server over SSH, generates the appropriate Trivy command for the chosen target and format, executes it, and stores the path of the resulting report file.
  5. Reports are associated with manual runs or scheduled tasks in the database, and the UI can list them with download links.

All Trivy-specific details — CLI subcommands, scanner combinations, output file naming and directory layout — are encapsulated inside the backend. Users only describe the "what", "where" and "how often".

Executing Trivy remotely via SSH

To run Trivy on remote servers, the service needs to establish SSH sessions, execute commands and, for downloads, use SCP. Apache Mina SSHD is a practical choice for this.

A common pattern is to encapsulate SSH session lifecycle in a helper component. The following is a simplified example:

public class SshExecutor {

    public interface SessionCallback<T> {
        T doInSession(ClientSession session) throws Exception;
    }

    public <T> T execute(String host, String username, String password,
                         SessionCallback<T> callback) {

        SshClient client = SshClient.setUpDefaultClient();
        client.start();

        try (client) {
            ClientSession session = client.connect(username, host, 22)
                    .verify(10, TimeUnit.SECONDS)
                    .getSession();

            session.addPasswordIdentity(password);
            session.auth().verify(10, TimeUnit.SECONDS);

            try (session) {
                return callback.doInSession(session);
            }
        } catch (Exception e) {
            throw new IllegalStateException("SSH command failed for host " + host, e);
        }
    }
}

Each HTTP request uses this helper to open a fresh SSH session and closes it once its work is complete. There is no shared or global SSH state; session lifetime is intentionally kept at most as long as the request lifecycle, which avoids cross-request interference.

Command execution can then be encapsulated as another method on the same helper:

public String runCommand(ClientSession session, String command) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    ByteArrayOutputStream err = new ByteArrayOutputStream();

    try (ChannelExec channel = session.createExecChannel(command)) {
        channel.setOut(out);
        channel.setErr(err);
        channel.open().verify();

        channel.waitFor(EnumSet.of(ClientChannelEvent.CLOSED), 0);

        Integer exitStatus = channel.getExitStatus();
        if (exitStatus == null || exitStatus != 0) {
            String stderr = err.toString(StandardCharsets.UTF_8);
            throw new IOException("Command failed: " + command + "\n" + stderr);
        }

        return out.toString(StandardCharsets.UTF_8);
    }
}

With these two utilities, running a Trivy command on a remote host is reduced to calling sshExecutor.execute(...) with the appropriate callback.
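For example, checking the installed Trivy version on a remote host becomes a one-liner (host and credentials below are placeholders, and runCommand is assumed to be a method of the same SshExecutor helper):

// Open a session, run one command, return its stdout, close the session.
String trivyVersion = sshExecutor.execute("10.0.0.5", "deploy", "secret",
        session -> sshExecutor.runCommand(session, "trivy --version"));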

Building Trivy commands for different targets

Because Trivy uses different subcommands depending on the target (trivy image, trivy fs, trivy repo, trivy k8s), it is helpful to separate "what to scan" from "how to invoke Trivy" by using a small strategy layer.

An example interface:

public interface ScanCommandBuilder {
    ScanCommand build(ScanRequest request);

    record ScanCommand(String command, String remoteOutputPath) {}
}

Here, ScanRequest encapsulates the scan parameters (target string, target type, output format, optional task ID for periodic scans).
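The exact shape of ScanRequest is not important; a minimal version consistent with how it is used below might be:

// Minimal request model and enums used by the command builders.
public enum ScanTarget { IMAGE, FILESYSTEM, REPOSITORY, KUBERNETES }

public enum ScanFormat { HTML, JSON }

public record ScanRequest(String target, ScanTarget scanTarget, ScanFormat format, Long taskId) {}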

An implementation for container images might look like this:

public class ImageScanCommandBuilder implements ScanCommandBuilder {

    @Override
    public ScanCommand build(ScanRequest req) {
        String baseDir;
        String suffix = "";

        if (req.taskId() == null) {
            baseDir = "/tmp/trivy/manual";
        } else {
            baseDir = "/tmp/trivy/periodic/" + req.taskId();
            suffix = "-" + UUID.randomUUID();
        }

        String extension = req.format() == ScanFormat.HTML ? "html" : "json";
        String outputPath = baseDir + "/trivy-scan-output" + suffix + "." + extension;

        String command;
        if (req.format() == ScanFormat.HTML) {
            command = String.format(
                    "mkdir -p %s && trivy scan2html image --scanners vuln,misconfig %s %s",
                    baseDir, shellEscape(req.target()), outputPath
            );
        } else {
            command = String.format(
                    "mkdir -p %s && trivy image --scanners vuln,misconfig -f json -o %s %s",
                    baseDir, outputPath, shellEscape(req.target())
            );
        }

        return new ScanCommand(command, outputPath);
    }

    private String shellEscape(String value) {
        // At minimum, apply basic quoting to avoid breaking the shell
        return "'" + value.replace("'", "'\"'\"'") + "'";
    }
}

Similar builders can be introduced for filesystem, repository and Kubernetes targets. The service layer only needs to know which ScanTarget enum value is involved and can then ask a Map<ScanTarget, ScanCommandBuilder> for the correct builder.

This separation keeps controller and service code free from extensive if/else logic tied to Trivy's CLI.
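How that map is populated is an implementation detail. One option, sketched here as Spring configuration (the EnumMap and the additional builder classes are illustrative), is to register one builder per target type:

@Configuration
public class ScanCommandBuilderConfig {

    @Bean
    public Map<ScanTarget, ScanCommandBuilder> scanCommandBuilders() {
        // One builder per supported target; new targets only require a new entry here.
        Map<ScanTarget, ScanCommandBuilder> builders = new EnumMap<>(ScanTarget.class);
        builders.put(ScanTarget.IMAGE, new ImageScanCommandBuilder());
        // builders.put(ScanTarget.FILESYSTEM, new FilesystemScanCommandBuilder());
        // builders.put(ScanTarget.REPOSITORY, new RepositoryScanCommandBuilder());
        return builders;
    }
}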

Orchestrating a scan in the service layer

With SSH execution and command building in place, the core scan orchestration becomes straightforward. A simplified service might be:

public class ScanService {

    private final SshExecutor sshExecutor;
    private final Map<ScanTarget, ScanCommandBuilder> builders;

    public ScanService(SshExecutor sshExecutor,
                       Map<ScanTarget, ScanCommandBuilder> builders) {
        this.sshExecutor = sshExecutor;
        this.builders = builders;
    }

    public ScanResult runScan(ScanRequest req, String host, String username, String password) {
        ScanCommandBuilder builder = builders.get(req.scanTarget());
        ScanCommandBuilder.ScanCommand cmd = builder.build(req);

        String remotePath = cmd.remoteOutputPath();

        sshExecutor.execute(host, username, password, session -> {
            String output = sshExecutor.runCommand(session, cmd.command());
            // Logging or additional diagnostics can be added here if needed
            return null;
        });

        return new ScanResult(remotePath, req.format(), req.taskId());
    }

    public record ScanResult(String remotePath, ScanFormat format, Long taskId) {}
}

The remotePath returned here is the location on the remote host where Trivy produced the report. For manual scans, this is typically a fixed path under a "manual" directory. For periodic scans, incorporating the task ID and a random suffix makes it easy to store multiple historical reports per task.

Downloading reports via SCP

After a scan has completed, the report resides on the remote server. The HTTP service can provide a "download" endpoint that retrieves this file over SCP and then streams it to the client.

A minimal implementation for downloading a file with Apache Mina's SCP client:

public class ReportDownloader {

    private final SshExecutor sshExecutor;
    private final Path localBaseDir;

    public ReportDownloader(SshExecutor sshExecutor, Path localBaseDir) {
        this.sshExecutor = sshExecutor;
        this.localBaseDir = localBaseDir;
    }

    public Path download(String host, String username, String password,
                         String remotePath, String extension) {

        return sshExecutor.execute(host, username, password, session -> {
            ScpClient scp = ScpClientCreator.instance().createScpClient(session);

            Files.createDirectories(localBaseDir);
            Path local = localBaseDir.resolve(
                    "trivy-report-" + UUID.randomUUID() + "." + extension
            );

            scp.download(remotePath, local.toString());
            return local;
        });
    }
}

The controller can then read the resulting file into memory (or stream it) and return it with appropriate Content-Type and Content-Disposition headers so that the browser downloads it as a file, for example an .html or .json report.
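A hedged sketch of such an endpoint follows; the URL, request parameters and credential handling are illustrative, and in a real setup credentials would come from stored configuration rather than query parameters.

@RestController
@RequestMapping("/api/reports")
public class ReportController {

    private final ReportDownloader reportDownloader;

    public ReportController(ReportDownloader reportDownloader) {
        this.reportDownloader = reportDownloader;
    }

    @GetMapping("/download")
    public ResponseEntity<byte[]> download(@RequestParam String host,
                                           @RequestParam String username,
                                           @RequestParam String password,
                                           @RequestParam String remotePath,
                                           @RequestParam String extension) throws IOException {
        // Pull the report from the remote host via SCP, then return it as a file download
        Path local = reportDownloader.download(host, username, password, remotePath, extension);
        byte[] content = Files.readAllBytes(local);

        MediaType mediaType = "html".equals(extension) ? MediaType.TEXT_HTML : MediaType.APPLICATION_JSON;

        return ResponseEntity.ok()
                .contentType(mediaType)
                .header(HttpHeaders.CONTENT_DISPOSITION,
                        "attachment; filename=\"" + local.getFileName() + "\"")
                .body(content);
    }
}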

Scheduling periodic scans with Quartz

Security scanning is most valuable when it is not a one-time activity. To support recurring scans, the service can use Quartz to schedule jobs based on cron expressions.

A typical design is:

  • Store periodic scan definitions in a database table, including host, credentials (or reference to them), target, cron expression and output format.
  • On application startup, read all definitions and register corresponding Quartz jobs and triggers.
  • Each Quartz job uses the same ScanService and SSH logic as manual scans, but is driven by the stored configuration.

A simplified Quartz job might look like this:

@DisallowConcurrentExecution
public class TrivyScanJob implements Job {

    @Override
    public void execute(JobExecutionContext context) {

        JobDataMap data = context.getMergedJobDataMap();
        Long taskId = data.getLong("taskId");
        String host = data.getString("host");
        String username = data.getString("username");
        String password = data.getString("password");
        String target = data.getString("target");
        String targetType = data.getString("targetType");
        String format = data.getString("format");

        ScanRequest req = new ScanRequest(
                target,
                ScanTarget.valueOf(targetType),
                ScanFormat.valueOf(format),
                taskId
        );

        ScanService scanService = lookupScanService(); // injected in a real setup
        ScanService.ScanResult result = scanService.runScan(
                req, host, username, password
        );

        saveTaskFile(taskId, result.remotePath(), result.format());
    }
}

The critical design choice is that the database, not Quartz, is the system of record: on startup, Quartz jobs and triggers are rebuilt from the stored definitions, so periodic tasks survive application restarts.
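A minimal sketch of that startup registration, assuming a hypothetical ScanTaskRepository/ScanTaskDefinition pair for the stored definitions, could look like this:

@Component
public class ScanTaskScheduler {

    private final Scheduler scheduler;           // Quartz scheduler, e.g. from spring-boot-starter-quartz
    private final ScanTaskRepository repository; // hypothetical store of periodic scan definitions

    public ScanTaskScheduler(Scheduler scheduler, ScanTaskRepository repository) {
        this.scheduler = scheduler;
        this.repository = repository;
    }

    @EventListener(ApplicationReadyEvent.class)
    public void registerStoredTasks() throws SchedulerException {
        for (ScanTaskDefinition task : repository.findAll()) {
            // Rebuild the job and its cron trigger from the persisted definition
            JobDetail job = JobBuilder.newJob(TrivyScanJob.class)
                    .withIdentity("trivy-scan-" + task.getId())
                    .usingJobData("taskId", task.getId())
                    .usingJobData("host", task.getHost())
                    .usingJobData("username", task.getUsername())
                    .usingJobData("password", task.getPassword())
                    .usingJobData("target", task.getTarget())
                    .usingJobData("targetType", task.getTargetType())
                    .usingJobData("format", task.getFormat())
                    .build();

            Trigger trigger = TriggerBuilder.newTrigger()
                    .withIdentity("trivy-scan-trigger-" + task.getId())
                    .withSchedule(CronScheduleBuilder.cronSchedule(task.getCronExpression()))
                    .build();

            scheduler.scheduleJob(job, trigger);
        }
    }
}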

Conclusion

Trivy provides a comprehensive scanning engine for container images, filesystems, Git repositories and Kubernetes clusters. Its vulnerability and misconfiguration scanners are fueled by regularly updated advisory data and policy bundles, and its JSON/HTML outputs carry rich metadata and references for each finding.

Wrapping Trivy in a small HTTP service built with Spring Boot turns it from a local, ad-hoc CLI tool into a consistent, shared capability:

  • Remote execution over SSH ensures scans run in the right environment (for example, inside the same network or cluster as the workloads).
  • Strategy-based command building keeps support for different targets and formats modular.
  • A clear SSH session lifecycle per request avoids subtle concurrency and state-sharing issues.
  • SCP-based download endpoints make HTML and JSON reports easily accessible through a UI.
  • Quartz integration provides scheduled scanning for ongoing visibility.

On top of this core, additional capabilities — such as enabling Trivy's secret and license scanners, pushing JSON results into dashboards, computing risk scores or integrating with CI/CD gates — can be introduced incrementally. The combination of Trivy's engine and a dedicated orchestration service offers a practical foundation for making security scanning a routine and visible part of the software lifecycle, rather than an afterthought.

REFERENCES

Aqua Security, Trivy Documentation, trivy.dev
Aqua Security, Trivy Databases (trivy-db), trivy.dev / GitHub
Aqua Security, Misconfiguration Scanning, trivy.dev
Aqua Security & Community, Trivy Reporting & scan2html Plugin, trivy.dev / GitHub