From Coffee to Cold Sweat: How Reading AI Security Reports Led Me to Start This Blog (And Why XBOW Might Change Everything)
- Adrian Munday
- Jun 28
- 7 min read
Updated: Jul 5

A few weeks ago I opened a cybersecurity report from Anthropic that a friend had forwarded, partly because of my day job but mostly out of a long-standing general interest in AI. That interest started in 2016, when I sat fascinated in General Assembly's classrooms for ten weeks, taking the plunge into Python, data science, machine learning and the basics of neural network implementation.
This report, and the recent flurry of AI news more generally, made me realise that I needed a better way of making sense of the latest developments. I have been reading, listening and discussing with friends and colleagues from all walks of life, but I am still left feeling overwhelmed, as I'm sure many of you are.
Getting it down 'on paper' seemed like the sensible option. Hence this personal blog.
This first post is my attempt to make sense of what I discovered in those reports over a Sunday morning coffee - not as a technical AI expert (I'm not), but as a hands-on practitioner and someone who's paid to be professionally paranoid about emerging risks.
As this blog evolves I will share more thoughts on AI as I process them - what I'm seeing and hearing about the impact on society, industry and the way we live our lives; how I'm using the tools and the impact they're having; latest news and developments I'm reading or hearing about; and the blind alleys, mistakes and dark corners I get stuck in.
With that, let's dive in.
What's New: The Democratisation of Cybercrime
When my friend forwarded me Anthropic's April 2025 report it was with a simple note: "Catching up but thought you'd find this interesting." By the time I finished reading it and moved on to OpenAI's June report, my Sunday morning coffee had gone cold.
These reports (and the very latest news on the XBOW milestone - more on that later) took the democratisation-of-cybercrime trend, which began with ransomware-as-a-service some years ago, and tipped it into mainstream reality. The rising AI tide is "lifting all cybercrime boats", increasing the scale and impact of smaller, less sophisticated threat actors. That is my focus here; the tailored use of AI by more sophisticated threat actors is beyond the scope of this post.
# Professional Influence Operations at Scale
One of the most impactful case studies in the reports was the "influence-as-a-service" operation that wasn't just using AI to write fake social media posts - it was using Claude to orchestrate entire disinformation campaigns. As the Anthropic report detailed:
Anthropic identified at least four distinct campaigns running through the same infrastructure: from energy security and cultural identity narratives aimed at targeted audiences, to the promotion of development initiatives and political figures in one African nation.
The sophistication lies in playing the long game rather than going viral. As the report notes: "The operation's long-term engagement with 10s of thousands of authentic accounts represents a strategic approach to influence that does not rely on content 'breaking out' but instead gradually pulls users into politically aligned echo chambers through seemingly organic interactions."
# The Novice Becomes the Expert
One of the most eye-opening sections of Anthropic's report described a novice actor who used Claude to punch well above their technical weight class - someone with basic skills who suddenly had access to expert-level capabilities.
The report noted how this actor "went from simple scripts to sophisticated systems with the aid of Claude, developing tools that included facial recognition and dark web scanning capabilities." As someone who has personally hacked around in Python (read: Googled and spent time on Stack Overflow) only to discover the superpower of vibe-coding, I find the speed of this transformation jarring.
OpenAI's documentation of the "ScopeCreep" case illustrates the same chilling point.
# Professional Digital Personas
Employment fraud campaigns show how professional these operations have become. OpenAI's report detailed how threat actors didn't just create fake resumes - they built entire digital personas.
Reading this, I couldn't help wondering about the long-term impact on everything from how we represent our digital identities online to the "Know Your Customer" protocols used in banking.
# When AI Becomes the Hacker: The XBOW Milestone
Just as I was processing these reports, something happened that made all this feel even more immediate. In late June 2025, tech media exploded with a headline that sounded like science fiction: "the best hacker in the US is now an AI."
XBOW - a year-old startup - announced that its autonomous penetration-testing agent had achieved something unprecedented: "For the first time in bug bounty history, an autonomous penetration tester has reached the top spot on the US leaderboard." The AI system had edged out every human participant on HackerOne's United States leaderboard, a platform where the world's best ethical hackers compete to find vulnerabilities in major companies' software.
The scale is impressive. "XBOW submitted nearly 1,060 vulnerabilities." Of these, "54 of the submitted vulnerabilities were classified by the program owner as 'critical,' 242 as 'high,'" with many affecting household names like Disney, AT&T, Ford, and Epic Games.
However, UC Berkeley also published a study earlier this month which found that AI agents are less capable than the XBOW headlines suggest. Its CyberGym benchmark - testing AI agents against 1,507 real-world vulnerabilities across 188 major projects - found that "the top performer was OpenHands combined with Claude-3.7-Sonnet...", but, soberingly, "Even the best-performing agent completed only 11.9% of the tasks."
The gap between XBOW's bug bounty success and these academic results is telling. As one security expert noted on Substack, "XBOW seems to be solving the problems that are already solved." The vulnerabilities it highlights - SQL injection, XSS, path traversal - are the "bread and butter of conventional [security testing] tools" that have existed for 20 years.
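For the non-specialists: these bug classes are considered "solved" precisely because the fixes are mechanical. Here's a minimal Python sketch of my own (using the standard library's sqlite3 and a made-up users table, purely for illustration) showing the classic SQL injection pattern and its textbook remedy:

```python
import sqlite3

# Toy database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def lookup_unsafe(name: str):
    # Vulnerable: user input is spliced straight into the SQL string.
    # Input like "' OR '1'='1" returns every row -- classic SQL injection.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name: str):
    # Standard fix: a parameterised query. The driver treats the input
    # as data, never as SQL, so the injection string matches nothing.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # [('alice', 'admin')] -- leaks the table
print(lookup_safe(payload))    # [] -- payload treated as a literal name
```

Scanners have been flagging that exact pattern for two decades; XBOW's edge is not novelty but volume.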
Yet dismissing XBOW would be a mistake. Even if it's finding the "low-hanging fruit", the implications are profound. As one commenter noted, "The system can scan thousands of web applications simultaneously, something that would require an army of human researchers." That scalability changes everything.
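To make that concrete, here's a toy sketch of my own (emphatically not XBOW's actual architecture) of why software scales where humans don't: concurrency is close to free. A few lines of standard-library Python can probe a basic security header across an arbitrarily long target list in parallel:

```python
# Toy illustration: checking one security header across many sites
# concurrently. The loop that handles 3 targets handles 3,000 -- that
# is the scalability argument in miniature.
from concurrent.futures import ThreadPoolExecutor
import urllib.request

TARGETS = [  # hypothetical target list; real testing needs authorisation
    "https://example.com",
    "https://example.org",
    "https://example.net",
]

def check_headers(url: str) -> str:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            csp = resp.headers.get("Content-Security-Policy")
            return f"{url}: {'CSP present' if csp else 'no CSP header'}"
    except OSError as exc:
        return f"{url}: unreachable ({exc})"

# Hundreds of workers cost pennies; hundreds of human testers do not.
with ThreadPoolExecutor(max_workers=50) as pool:
    for result in pool.map(check_headers, TARGETS):
        print(result)
```

Swap the three example domains for three thousand and the code doesn't change - only the compute bill does.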
What's Concerning: Global Scale and Sophistication
Both the AI companies' reports and the XBOW development paint a picture of truly global threats: OpenAI found task scams (fake job offers promising easy money) run from Cambodia, comment spamming (fake posts, often promoting products) from the Philippines, and more besides.
The XBOW case adds another dimension: legitimate companies using AI for security testing are creating capabilities that could easily be weaponised. The same tooling that helps defend systems can be turned around to attack them. If a startup can build an AI that tops bug bounty leaderboards, what's stopping a well-funded crime syndicate from doing the same?
What's Next: The Paradox of Progress
# Detection Through Digital Breadcrumbs
Here's the twist: the very thing that makes these AI-powered attacks possible also makes them detectable, a point Anthropic makes in its report.
It's like watching criminals leave fingerprints at the crime scene. The digital breadcrumbs these actors leave behind when using AI tools create patterns that security teams can track and analyse.
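To illustrate the idea with a toy example of my own (not Anthropic's actual detection pipeline): orchestrated accounts often post with a mechanical regularity that humans rarely manage, and even a crude statistical check can surface that fingerprint:

```python
# Toy sketch of "breadcrumb" detection (my illustration, not any vendor's
# real pipeline): bot-orchestrated accounts often post at suspiciously
# regular intervals, a statistical pattern humans rarely leave behind.
from statistics import pstdev

def looks_automated(post_timestamps: list[float], min_posts: int = 10) -> bool:
    """Flag an account whose gaps between posts are near-identical."""
    if len(post_timestamps) < min_posts:
        return False  # not enough data to judge
    gaps = [b - a for a, b in zip(post_timestamps, post_timestamps[1:])]
    # Human posting gaps are noisy; a machine on a schedule is not.
    return pstdev(gaps) < 5.0  # seconds; threshold purely illustrative

# An account posting every 300s on the dot vs. a human-looking one:
bot = [i * 300.0 for i in range(12)]
human = [0, 240, 1100, 1160, 4000, 4300, 9000, 9050, 12000, 15000, 15100, 20000]
print(looks_automated(bot))    # True
print(looks_automated(human))  # False
```

Real detection is far more sophisticated, of course, but the principle is the same: automation at scale leaves patterns that automation at scale can find.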
# Early Days, Limited Impact (For Now)
Perhaps the most reassuring finding was how limited the actual impact of many operations has been. OpenAI's assessment of its "High Five" influence operation (activity targeting domestic politics in the Philippines) found its real-world impact to be minimal.
Similarly, while XBOW submitted over 1,000 reports, only around 130 had been resolved at the time of writing, with significant numbers marked as duplicates, or as informative but not actionable.
But I can't shake the feeling that we're in something like the "Wright Brothers" phase of AI-powered threats. The debatable impact of these first attempts is no excuse for complacency.
# The Race Between Capabilities and Security
Both Anthropic and OpenAI are investing heavily in detection and disruption capabilities, but as OpenAI's report warns: "As agentic AI systems improve we expect this trend to continue."
The race between AI capabilities and AI security feels like it's just beginning. How will these threats evolve as AI becomes more capable and autonomous? How quickly can the industry adapt to handle AI-augmented attacks? How do we balance AI innovation with security concerns?
Wrap up
After diving deep into these reports and the XBOW development, I'm left with a strange mix of concern and hope. Yes, bad actors are finding increasingly creative ways to weaponise AI, and in particular the technology is allowing smaller, less sophisticated actors to scale up. And yes, AI itself is now competing with - and beating - human security researchers at their own game. But the same capabilities that enable misuse are also powering sophisticated detection and prevention systems.
The key takeaway? We're all part of this story now. Whether you're in banking, a tech professional, a business owner, or just someone trying to navigate the digital world safely, understanding these threats isn't optional anymore - it's essential.
I'm curious about your experiences and perspectives. I'd love to hear your thoughts. Drop a comment below or reach out directly via LinkedIn.
And if you found this blog valuable, consider sharing it with others who might benefit. The more we understand these emerging threats together, the better equipped we'll be to face them.
Until next time, you'll find me watching YouTube updates on the latest n8n AI agent automations at 5.30am on a Sunday morning...
Resources & Further Reading
Primary Sources:
- Anthropic's April 2025 Detecting and Countering Malicious Uses of Claude (https://anthropic.com)
- Anthropic's Operating Multi-Client Influence Networks Across Platforms
- OpenAI's June 2025 Disrupting malicious use of AI (https://openai.com)
- XBOW's blog: The road to Top 1 (https://xbow.com/blog/top-1-how-xbow-did-it/)
- UC Berkeley's CyberGym benchmark (https://www.cybergym.io/)
Additional Reading:
- "The Coming Wave" by Mustafa Suleyman - Essential reading on AI's dual-use nature
- Bruce Schneier's blog on AI and security
- Does XBOW AI Hacker Deserve the Hype? (https://utkusen.substack.com/p/does-xbow-ai-hacker-deserve-the-hype) - Critical analysis
Technical Reports:
- UC Berkeley CyberGym paper (arXiv:2506.02548)
- XBOW technical findings (to be presented at Black Hat 2025)
Podcasts Worth Your Time:
- Risky.Biz - Various episodes
- Darknet Diaries - "The AI Con Artist" episode


