AI and machine learning aren't just buzzwords anymore they're the foundation of everything that's being built right now. From large language models to AI-powered applications, these technologies are shaping the next decade of computing. And like anything new and powerful they bring new security challenges.

That's exactly why I decided to take the Certified AI/ML Pentester exam from Pentesting Exams (The SecOps Group). At the time, it was basically the only proper hands-on certification that tested real-world AI/LLM exploitation. Now, more options are appearing, but this one was definitely the first to take it seriously.

Why I Took It

Since AI and ML became the new hot topic, I felt it's not enough to just understand how to build or use these systems we also need to know how to break them and ultimately secure them. These models are literally the building blocks of the future and if we don't understand how to test and harden them we're blind to their risks.

I work in this industry, so I wanted to stay ahead not just reading about AI security in theory, but doing it for real. This exam seemed like the perfect opportunity to challenge myself in that direction.

How I Prepared

None
Photo by Aaron Burden on Unsplash

There's no official training course for this certification it's purely an exam. The website only gives you a syllabus and a few recommended links, so you're completely on your own.

I followed those resources and also trained myself on several open AI-security challenge platforms, especially Lakera (Gandalf) and similar ones that simulate prompt-injection and model-jailbreak scenarios.

Basically, I spent time learning how LLMs think, how to trick them, and how to exploit small design decisions that can lead to serious vulnerabilities. That mindset helped me more than any study guide could.

Exam Day Experience

The exam lasts exactly four hours and believe me, you'll use every minute :) It's eight practical challenges that you have to solve in a live lab environment. You connect through VPN and interact with different AI systems (ChatBot), each designed with unique weaknesses.

At first, it went as expected but some parts definitely caught me off-guard. The difficulty jumped suddenly in a few tasks and I mean really jumped. It took me three hours and fifty-eight minutes to complete all challenges. I refused to leave anything unfinished, I wanted every flag.

Despite the pressure it was genuinely fun one of those rare exams that keeps you thinking creatively instead of mechanically.

The Hardest Part

There was one specific challenge that almost got me. I knew what I had to do conceptually, but figuring out how to make the system reveal what I needed was another story.

I had to step back, completely change my approach and start thinking like the AI or rather like someone trying to manipulate the AI's logic. Once I did that, it finally clicked. Thinking outside the box was absolutely key.

Technical Side

No issues at all. The infrastructure was stable, VPN worked smoothly and the support team responded quickly whenever I had questions. Keep that 15 mins for troubleshooting activities since it's officially 4 hours and 15 mins.

Results & Reflection

None
My exam result and certificate

You get your results instantly after submission no waiting, no guessing. I passed with merit, which means I solved all eight challenges and captured all flags basically a 100% score. Seeing that result after four intense hours felt extremely satisfying.

It's not just about getting the badge it's about proving to yourself that you can actually hack an AI system and understand what's going on under the hood.

Final Thoughts & Advice

If you're into offensive security or you simply want to understand how AI systems can be attacked and secured, I highly recommend this certification.

It's not easy, it's challenging, not a multiple-choice quiz. You'll need creativity, persistence and a deep curiosity about how LLMs and AI models work internally. But that's also what makes it worth it.

If you enjoy breaking things to learn how to protect them, this one's for you.

Good luck ;)