If you've read my other articles, you'll know that I think highly of OWASP ZAP, but one question comes up again and again: what's the best way to automate ZAP?

There are a lot of options:

  • Writing your own code in your preferred language and leveraging ZAP's API
  • Using ZAP's SDK directly in your codebase
  • Using ZAP's built-in automation framework
  • Provisioning and running ZAP using the official Docker container

Which Is Best?

Honestly, it depends on what you're most comfortable with.

I've tried all of the above, and my personal preference is using the Docker container.

When I first started, I didn't even realise there was an SDK — the documentation wasn't great at the time (it's massively improved now). Because of that, I ended up writing my own API calls, which actually worked surprisingly well.

No matter which automation flavour you choose, though, everything boils down to the same fundamental model:

Input → Process → Output

  • Inputs: scan configuration, URL to scan
  • Process: the scan itself
  • Outputs: the report

Simple in theory — slightly less so in practice.
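
To make that concrete, here's a minimal sketch of an automation framework plan (zap.yaml). The target URL, template and paths are placeholders, and the exact jobs and parameters will depend on the add-ons you have installed:

# Input: what to scan and how
env:
  contexts:
    - name: "Default Context"
      urls:
        - "https://example.com"

# Process: the jobs that make up the scan itself
jobs:
  - type: spider
  - type: activeScan

  # Output: the report
  - type: report
    parameters:
      template: traditional-html
      reportDir: /zap/wrk
      reportFile: zap-report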

The Docker vs GUI Gap

One of my biggest bugbears is that although the automation plan can be run both via Docker and the GUI, the results don't always match.

Often this comes down to:

  • Different versions of add-ons
  • Differences in environment setup

However, I also suspect there are some underlying technical reasons why scans run via the GUI and the Docker container don't always produce identical results.

That said, I expect this gap to close as the automation framework continues to mature.

Running ZAP in Docker

So, how do we actually run the Docker container? Well… there are (it seems) nearly a million ways to do it. If you haven't already read it, the official documentation is a good starting point:

https://www.zaproxy.org/docs/docker/about/
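
For example, the packaged scan scripts that ship with the image are the quickest way to get a scan out of it. Something along these lines (the target URL is a placeholder) runs the baseline scan and writes an HTML report into the mounted directory:

docker run -v $(pwd):/zap/wrk/:rw -t zaproxy/zap-stable zap-baseline.py \
  -t https://example.com \
  -r baseline-report.html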

Orchestration Challenges in CI/CD

Running ZAP in a pipeline introduces another challenge: orchestration.

In most cases, we don't want ZAP to start, scan, and shut down before we've retrieved:

  • The report
  • Any other artifacts we care about

(Although you may choose to mount the dir to get the report.)

We may also want to ensure that ZAP is fully up and ready before triggering a scan, rather than relying on the container to "just work" in time. Its API is also handy for getting at the backend scan stats.
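
One simple way to handle that readiness check is to poll ZAP's API until it responds. A rough sketch, assuming ZAP is reachable on localhost:8080 and ZAP_API_KEY holds the key you started it with:

# Wait (up to ~2.5 minutes) for ZAP's API to answer before kicking off the scan
for i in $(seq 1 30); do
  if curl -sf "http://localhost:8080/JSON/core/view/version/?apikey=${ZAP_API_KEY}" > /dev/null; then
    echo "ZAP is up"
    exit 0
  fi
  sleep 5
done
echo "ZAP did not start in time" >&2
exit 1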

My Preferred Approach

My preferred solution is to run ZAP using a command like the one below, embedded within Docker Compose.

I often run other containers alongside ZAP to help manage orchestration. If you choose this method:

  • Be sure to run the container in detached mode, or your pipeline will hang
  • Use a condition: always (or equivalent) so the container shuts down cleanly after each run

There are many ways to run the Docker container; I prefer a flavour of this myself:
docker run -u zap -p 8080:8080 -i zaproxy/zap-stable zap.sh \
  -daemon \
  -host 0.0.0.0 \
  -port 8080 \
  -config api.addrs.addr.name=.* \
  -config api.addrs.addr.regex=true \
  -config api.key=<api-key>
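
Wrapped in Docker Compose, that might look roughly like the sketch below. The helper service, the healthcheck, the mounted ./zap directory and the ZAP_API_KEY variable are all illustrative assumptions rather than a drop-in config:

services:
  zap:
    image: zaproxy/zap-stable
    user: zap
    ports:
      - "8080:8080"
    volumes:
      - ./zap:/zap/wrk:rw          # mount a dir so the report lands on the host
    command: >
      zap.sh -daemon -host 0.0.0.0 -port 8080
      -config api.addrs.addr.name=.*
      -config api.addrs.addr.regex=true
      -config api.key=${ZAP_API_KEY}
    healthcheck:
      # assumes curl is available in the image; swap in whatever check suits you
      test: ["CMD-SHELL", "curl -sf http://localhost:8080 || exit 1"]
      interval: 10s
      retries: 12

  scan-runner:
    # hypothetical helper container that drives the scan once ZAP is healthy
    image: curlimages/curl
    depends_on:
      zap:
        condition: service_healthy
    command: ["sh", "-c", "echo 'ZAP is up - trigger the scan from here'"]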

Once I've confirmed that ZAP is running and can reach the site under test, I provision ZAP with the scan configuration (zap.yaml).
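
How you hand the plan to ZAP depends on how you started the container. If you don't need a long-running daemon, the simplest route is to mount the plan into the container and let ZAP run it in one shot, along these lines:

docker run -v $(pwd):/zap/wrk/:rw -t zaproxy/zap-stable zap.sh -cmd \
  -autorun /zap/wrk/zap.yaml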

My usual workflow looks like this:

  1. Add the site under test to the default context in the GUI
  2. Create an automation plan
  3. Run the plan in the GUI to prove it works
  4. Export the plan
  5. Open it in my IDE and make any container-specific changes needed, soft-coding values where appropriate

A very common change here is adjusting file paths, especially if the automation plan was created on a Windows machine but will be executed inside a Linux-based Docker container.
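
For instance, a report job exported from a Windows GUI session might carry a local path that has to become a container path. The parameter names below follow the report job, but treat the values as placeholders:

  - type: report
    parameters:
      # exported from the GUI on Windows:
      #   reportDir: C:\Users\me\ZAP\reports
      # rewritten for the Linux container (host dir mounted at /zap/wrk):
      reportDir: /zap/wrk
      reportFile: zap-report
      template: traditional-html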

Final Thoughts

Automating ZAP for CI/CD doesn't have a single "right" answer, but with a clear input–process–output model and a well-orchestrated Docker setup, it's absolutely achievable — and scalable.

If you're already using ZAP manually, automation is the natural next step.