I was doomscrolling on holiday and pigeons made me question my entire career
- Adrian Munday
- Jul 16, 2025
- 10 min read

It’s a sunny Saturday and, rather than spending time outdoors, I'm doomscrolling Instagram when a post from mathematician Hannah Fry stops me in my tracks. As you know by now, this blog is about artificial intelligence, so her post immediately made me think of one of the ways we train AI models - reinforcement learning from human feedback (RLHF) - but applied to pigeons. What was going on?!
In 2015, researchers showed that pigeons (previously untrained in the science of radiology…) could learn to identify breast cancer in medical images with 85% accuracy. When the researchers "flock-sourced" their decisions - pooling the responses of several birds - accuracy hit 99%.
Yes, pigeons. The same birds that steal your sandwich in the park.
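As an aside for the quantitatively minded, the jump from 85% to 99% is roughly what you'd expect from pooling independent judgements. Here's a minimal Python sketch of that idea; it assumes the birds err independently and take a simple majority vote, which is an idealisation of how the study actually pooled their responses.

```python
# Idealised sketch of "flock-sourcing": if each pigeon is right 85% of the time
# and the birds err independently, a majority vote is right far more often.
# (The real study pooled the birds' responses; independence is my simplifying assumption.)
from math import comb

def majority_vote_accuracy(p: float, n: int) -> float:
    """Probability that more than half of n independent observers are correct."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range((n // 2) + 1, n + 1))

for flock in (1, 3, 5, 9):
    print(f"{flock} pigeon(s): {majority_vote_accuracy(0.85, flock):.1%}")
# 1 bird ~85%, 3 birds ~94%, 5 birds ~97%, 9 birds ~99%
```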
I tell you this story because we now know that AI also performs well at diagnosing cancer. Whether it's pigeons or AI doing the pattern-spotting, what does this all mean for those of us who've spent decades building our careers, or for those just starting out?
This question sent me down yet another AI rabbit hole, one that's really informed how I think about work, skills, and our collective future.
So grab your coffee (or tea, I don't judge), and let me share what I've discovered about navigating this strange new world where birds might be better at pattern recognition than we are.
With that, let’s dive in.
The Fog of Prediction (or: Why Even Bill Gates Got It Wrong)
Before we panic about our pigeon overlords, let's talk about our terrible track record at predicting technology's impact. There's no better example than Bill Gates trying to explain the internet to David Letterman in 1995.
Gates, fresh-faced and enthusiastic, described this revolutionary thing where people could "publish information" and send "electronic mail." Letterman's response? Pure gold. When the discussion turned to listening to baseball games on your computer, Letterman shot back: "Does radio ring a bell?" When Gates explained its on-demand nature, Letterman retorted: "Do tape recorders ring a bell?"
Watching this today is deeply humbling. If Gates - the person who, while still at high school, envisioned a future with a computer in every home - couldn't see the internet's full potential, what chance do we have of predicting AI's impact on our careers? It reminds me of the quote widely attributed to Niels Bohr: "Prediction is very difficult, especially if it's about the future."
We're all David Letterman now, trying to understand a revolution while it's happening around us.
Navigating the Jagged Frontier (with Ethan Mollick as our Guide)
This is where I've found Ethan Mollick's work invaluable. He is an Associate Professor at Wharton, where he focuses on innovation and entrepreneurship. He describes AI capability as a "jagged frontier" - imagine a mountain range with impossibly high peaks next to unexpected valleys. AI can write a perfect sonnet in the style of Jay-Z but fails to write exactly 50 words (it thinks in tokens, not words). It aces the bar exam but stumbles on children's riddles. Mollick's point is that what we have looks less like Artificial General Intelligence (AGI) and more like Artificial Jagged Intelligence (AJI).
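If you want to see the token point for yourself, here's a small Python sketch using OpenAI's open-source tiktoken tokeniser (the sentence is just an arbitrary example of mine):

```python
# Why "write exactly 50 words" is hard for a model that operates on tokens, not words.
# Requires: pip install tiktoken
import tiktoken

text = "The pigeons, previously untrained in radiology, learned to spot tumours."

enc = tiktoken.get_encoding("cl100k_base")   # tokeniser used by several OpenAI models
tokens = enc.encode(text)

print(f"Words : {len(text.split())}")        # what a human counts
print(f"Tokens: {len(tokens)}")              # what the model actually 'sees'
print([enc.decode([t]) for t in tokens])     # note how words split into fragments
```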
Our pigeons perfectly embody this jaggedness. Brilliant at distinguishing cancerous tissue - a task that challenges human experts - yet in parts of the same study they simply memorised the training images and failed to generalise to new ones. Pattern recognition "intelligence," whether biological or digital, isn't smooth. It's jagged.
This matters because AI's impact won't be a predictable wave washing over industries in logical order. It'll be chaotic, surprising, and often counter-intuitive.
What History Teaches Us
History gives us some clues. Each big technological shift - steam, electricity, computers - followed the same pattern: short-term disruption, long-term job creation. But here's what's challenging this time round: the timescale.
It took two centuries for steam power to fully transform the economy.
It took around five decades for electricity to make its full impact felt, enabling unprecedented increases in the productivity of capital and labour and creating entirely new industries centred on consumer appliances and communication.
The personal computing revolution of the second half of the 20th century is the closest analogue for understanding the current AI revolution, in the sense that it too was essentially cognitive in nature. That leap took about two decades to transform the economy, from the first IBM PC in the early 1980s through to the dot-com boom of the early 2000s.
What is different this time is that the timescales for AI appear to be far shorter, so the chaotic, jagged frontier will hit roles, individual companies and industries at an unprecedented pace - and unevenly.
The primary distinction between the PC and AI revolutions is that the PC's impact fell largely on routine tasks, while AI is increasingly affecting non-routine tasks. But this insight - that production is best understood not as a combination of factors (capital and labour) but as the completion of a set of tasks - is key to understanding AI's potential impact.
This mental shift, which changed my frame of reference when thinking about AI and jobs, came courtesy of economists Daron Acemoglu (MIT) and Pascual Restrepo (Yale):
AI doesn't automate jobs; it automates tasks within jobs.
A job is just a bundle of tasks. Take lawyers - they research precedents, draft documents, negotiate, advise clients, and manage billing. AI might automate the first two, but that transforms rather than eliminates the role. What this research points to is a "Great Re-Bundling": AI unbundles our work, automates some tasks, and creates entirely new ones.
The real question shifts from "Will AI take my job?" to "How will my tasks transform, and what new ones will emerge?"
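To make the task-based framing concrete, here's a toy Python sketch of a lawyer's week. The task weights and AI-exposure scores are invented placeholders of mine, not figures from Acemoglu and Restrepo - the point is simply that automating some tasks rarely takes the whole job to zero.

```python
# Toy model: a job as a bundle of tasks, each with an (invented) AI-exposure score.
# Weights and scores are illustrative assumptions, not research findings.

lawyer_tasks = {
    # task: (share of working week, estimated AI exposure 0-1)
    "research precedents": (0.25, 0.8),
    "draft documents":     (0.25, 0.7),
    "negotiate":           (0.20, 0.2),
    "advise clients":      (0.20, 0.1),
    "manage billing":      (0.10, 0.6),
}

# Weighted share of the week that is highly exposed to automation
automatable = sum(share * exposure for share, exposure in lawyer_tasks.values())
print(f"Roughly {automatable:.0%} of the week's tasks are exposed to automation")
print(f"Roughly {1 - automatable:.0%} remains - plus whatever new tasks appear")
```

On these made-up numbers, about half the week is exposed - which sounds dramatic until you remember the remaining half is where judgement, relationships and the new AI-adjacent tasks live.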
The Missing Rung Problem
Now that we've reframed the question, one type of role seems more under threat than any other, because AI is particularly good at automating entry-level tasks. Those mind-numbing 100-hour weeks that junior bankers and lawyers endure? The document review, basic research, first-draft writing? That's AI's sweet spot.
The latest data from the Federal Reserve Bank of New York, as of May 2025, shows the university degrees with the highest unemployment rates in the U.S. workforce. Anthropology majors top the list of the 20 fields examined with a 9.4% unemployment rate.
What will no doubt be as surprising to you as it was to me is that computer engineering majors face a 7.5% unemployment rate, while physics and computer science come in at 7.8% and 6.1% respectively. The first sign of the first rung of the career ladder being removed? Perhaps more noise than signal at this point, at least as far as AI's impact is concerned, but one to watch.
Here's the problem - that graduate grunt work is how we learn. I remember my first roles doing the basics of liquidity and foreign exchange exposure management. Managing trade finance for rubber exports and learning simple things like how bank holidays impact settlement dates in different jurisdictions was how I cut my teeth. That was my apprenticeship.
Without it, how do we develop the judgment and expertise needed for senior roles? We might need to completely reimagine professional development, perhaps borrowing from military training methods that create expertise through structured, intensive programs rather than years of repetitive tasks.
With three children in, or nearly in, university education I’m watching this space with keen interest.
The Great Paradox (And Why There's Hope)
Here's where it gets interesting. Our historical excursion earlier suggests technology creates as many jobs as it destroys, just different ones. A century ago, people in manufacturing worked 50-60 hour weeks; now it's as low as 30 hours in some parts of Europe. Perhaps AI simply continues this trend toward more productive, less grinding work and more leisure time.
The "Jevons Paradox" (a favourite of Microsoft CEO Satya Nadella) offers another lens. When steam engines became more efficient, Britain didn't use less coal - it used exponentially more because cheap power unlocked thousands of new applications. AI could do the same for cognitive work. By making analysis, diagnosis, and creative design radically cheaper, we might unlock massive latent demand.
Consider the mathematics of the long tail: today, a market of 200,000 customers might support only a small engineering team because the economics don't stack up, so the market is poorly served. With AI, you could serve increasingly niche markets cost-effectively - eventually down to markets of one. Personalised education, health coaching, financial advisory - all become economically viable at scale.
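Here's that long-tail argument as a rough back-of-envelope calculation in Python. Every number (the development cost, the price point, the assumed ten-fold cost reduction from AI) is an illustrative assumption rather than real market data:

```python
# Back-of-envelope: how AI-driven cost reduction changes the minimum viable market.
# Every number here is an illustrative assumption.

def min_viable_market(fixed_cost: int, cost_to_serve: int, price: int) -> int:
    """Smallest number of customers needed to break even."""
    margin_per_customer = price - cost_to_serve
    if margin_per_customer <= 0:
        raise ValueError("Price must exceed the cost to serve each customer")
    return -(-fixed_cost // margin_per_customer)  # ceiling division

# Before AI: bespoke build, high per-customer support cost
print(min_viable_market(fixed_cost=2_000_000, cost_to_serve=40, price=50))  # 200,000 customers

# After AI: assume build and support costs fall by ~10x
print(min_viable_market(fixed_cost=200_000, cost_to_serve=4, price=50))     # ~4,348 customers
```

Under those made-up numbers, the minimum viable market drops from 200,000 customers to a few thousand - which is the whole point of the long tail.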
This is my personal conviction (ignoring a sudden scaling to superintelligence for the time being) - the balance between the displacement effect (technology allowing capital to take over tasks performed by labour) and the reinstatement effect (creating new, labour-intensive tasks) ends up skewed heavily to the latter.
The timing can be debated, and there will inevitably be short-term disruption, with winners and losers across individual companies and industries. However, my strong sense is that assuming we won't see an explosion of new jobs demonstrates a lack of imagination. We'll come back to this theme in a moment.
To show you what I mean about imagination versus zero-sum thinking, I've sketched out below some of the negative commentary you'll read in the media alongside the plethora of new roles that AI may create. This is highly speculative, but hopefully you get the idea:
| Industry | Displacement Narrative | Expansion Thesis | Example New Roles |
| --- | --- | --- | --- |
| Entertainment & Media | AI will generate endless movies, displacing creative jobs | AI lowers production costs, enabling vast, personalised story universes with continuous updates | Universe Architect, AI Performance Director, Character Co-ordinator |
| Marketing & Advertising | AI will automate ad creation and media buying, eliminating marketing teams | AI creates hyper-personalised marketing for all businesses, creating a need to manage the "Netflix for brands" | Virtual Influencer Manager, AI Brand Therapist, Hyper-Personalisation Strategist |
| Healthcare & Wellness | AI will diagnose better than doctors, reducing the need for clinicians | Have you tried to see a specialist recently?! The industry shifts to proactive wellness based on automated data insights | Clinical AI Ethicist, Personalised Wellness Coach, AI-augmented Diagnostician |
| Education & Learning | AI tutors will replace teachers by automating lessons and grading | AI automates information delivery, freeing teachers to become mentors focused on holistic development | AI Learning Path Designer, Student Development Coach, Specialised Skills Development |
We're All Futurologists Now
Like Letterman, I'm left wondering where all this leaves us. In practice, this optimism means that if we can't predict which future will unfold, we need to get comfortable working with multiple scenarios. This is the theme of exercising our imagination that I referenced in the previous section.
As someone who spends his days thinking about risk, I find this familiar territory. And the one thing I've learned? When facing massive uncertainty, you don't try to predict the future - you prepare for multiple versions of it.
Futurology conjures images of an elite making predictions about the future in a remote think-tank. I'm suggesting something more practical - we all need to get better at "foresight literacy".
The alternative to letting a handful of tech bros design our future? We all get better at thinking about what might happen next. We need to democratise the structured methods of foresight - teaching a broader population the principles of scenario planning, the discipline of identifying and mitigating cognitive biases, and the importance of stakeholder analysis.
I believe we can all become better at this. In a way, that’s what this blog is about for me. When I experimented with building an AI game, I wasn’t just creating a ‘Butterlion’; I was developing a practical feel for AI’s creative capabilities. When we dug into those security reports together, it was practicing the discipline of identifying emerging risks. This is foresight literacy in action - not as an academic exercise but as a hands-on habit of curiosity and experimentation.
The goal is to equip everyone with the tools to participate meaningfully and critically in the conversation about the future, rather than being passive recipients of a future designed by and for a privileged few.
The Bottom Line
We started with pigeons diagnosing cancer and ended up exploring the future of human work. The journey from anxiety to understanding isn't smooth - much like AI's jagged frontier itself.
Will AI eliminate jobs? Transform them? Create entirely new categories of work? Probably all three, in ways we can't yet imagine. Just as Letterman couldn't envision social media from Gates' description of "electronic mail," we can't fully see where this leads.
What I do know is this: the worst response is paralysis - and this is why zero-sum thinking just isn’t helpful. The frontier rewards explorers, not observers. Our pigeon friends remind us that intelligence itself is being redefined. Our task is to find where human judgment, creativity, and wisdom remain irreplaceable - and where AI can amplify rather than replace our capabilities.
Perhaps the most consistent and important finding across the recent literature is that AI is already shifting the demand towards skills that are uniquely human and complementary to the technology. As AI automates codifiable and analytical tasks, the relative value of "soft" or "durable" skills increases.
An analysis of 12 million US job vacancies found that roles explicitly requiring AI skills were nearly twice as likely to also demand skills like resilience, agility, teamwork, and analytical thinking. Moreover, these complementary skills command a significant wage premium; for instance, data scientists with demonstrated capabilities in resilience or ethics were offered salaries 5-10% higher than their peers.
So what do I think the skill stack for the augmentation age looks like? This is best understood through comparison with the legacy skill stack:
| Core Function | Legacy Skill (Information Age) | Emerging Skill (Augmentation Age) |
| --- | --- | --- |
| Research & information gathering | Manual data collection, keyword searching, literature review | Prompt engineering: crafting nuanced queries to extract insights using AI. Data curation: evaluating and selecting the most relevant AI-generated information. |
| Analysis & synthesis | Statistical analysis, spreadsheet modelling, summarising findings | Insight synthesis: integrating AI-generated analysis with human intuition to form a strategic narrative. Second-order thinking: questioning and validating AI conclusions. |
| Creation & design | Technical proficiency in specific tools (e.g. Photoshop) | Creative direction: setting the vision and aesthetic for AI to execute. Generative curation: selecting, iterating and refining the best options from a multitude of AI-generated outputs. |
| Communication | Presenting finished data and reports | Strategic narrative building: weaving AI-generated data into a compelling story. Empathetic translation: explaining complex AI outputs to non-expert stakeholders. |
| Problem solving | Applying established frameworks to solve known problems | Problem framing: defining new, ambiguous problems in a way AI can help solve. Human-AI collaboration and 'context engineering': orchestrating the workflow between human experts and AI tools. |
Look at this table. Which ‘Legacy Skill’ forms the bulk of your work today? What steps are you taking in developing its ‘Emerging Skill’ counterpart? Where could you be spending more time? Share your thoughts in the comments - I’m genuinely curious to see how we’re all navigating this. This is a big topic which I will revisit as we see more data on its impact and the state of the art of AI evolves.
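Before I sign off, here's a tiny, purely illustrative example of the first row of that table - the shift from keyword searching to crafting a structured prompt. The wording is my own, not a canonical template:

```python
# Illustration of the first row of the table above: keyword search vs. a structured prompt.
# The prompt wording is my own example, not a canonical template.

legacy_query = "breast cancer screening AI accuracy studies"

emerging_prompt = """
You are a research assistant for a risk manager with no medical background.
Task: summarise the strongest recent evidence on AI-assisted breast cancer screening.
Constraints:
- Cite each study you rely on (author, year, journal).
- Flag anything that is preliminary or contested.
- Finish with three open questions I should dig into next.
Format: five bullet points, plain English, no jargon.
"""

print(legacy_query)
print(emerging_prompt)
```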
Until next time, you'll find me on a Sunday morning, training AI on my preferences while it helps me model the next set of scenarios for me to contemplate...
Resources & Further Reading
Primary Sources:
Levenson, R., et al. (2015). "Pigeons (Columba livia) as Trainable Observers of Pathology and Radiology Breast Cancer Images." PLOS ONE
Mollick, Ethan. (2024). Co-Intelligence: Living and Working with AI
Acemoglu, D., & Restrepo, P. (2019). "Automation and New Tasks: How Technology Displaces and Reinstates Labor." Journal of Economic Perspectives
Mäkelä, E., & Stephany, F. (2024). "Complement or Substitute? How AI Increases the Demand for Human Skills." arXiv. https://doi.org/10.48550/ARXIV.2412.19754
https://www.newyorkfed.org/research/college-labor-market#--:explore:outcomes-by-major
Worth Your Time:
The 1995 Bill Gates and David Letterman interview (YouTube)
Hannah Fry's social media for mathematical insights on everyday life
Ethan Mollick's Substack "One Useful Thing"
Podcasts:
Co-Intelligence: An AI Masterclass with Ethan Mollick (Stanford GSB)
Coaching for Leaders: "Principles for Using AI at Work"


