DevReady Podcast | AI in Software Development: Hype vs Reality in 2025

Episode Overview

In this follow-up episode of the DevReady Podcast, Anthony Sapountzis sits down again with Bill Lennan, Founder of 40 Percent Better, to explore how AI is changing software development, tech careers, and business decision-making. Bill brings a grounded, executive-level view on what is working, what is not, and why the AI boom feels both exciting and unsettling for teams worldwide. Connect with Bill on LinkedIn for more of his thinking on leadership, technology, and practical innovation. Together, Anthony and Bill unpack what staying relevant in an AI-driven tech industry really requires, and why human skills remain central to future-proofing your career.

Why AI Still Cannot Replace Great Software Engineers

AI tools are increasingly visible across product teams, but Bill is clear on one key point: no executive he has spoken with believes AI can replace top-tier software engineers yet. While AI can assist with generating prototypes or speeding up certain tasks, successful teams still rely on experienced humans to validate, refactor, and secure the code. The most effective use cases Bill sees involve engineers pairing AI outputs with traditional review, ensuring reliability and maintainability before anything reaches production.

Anthony agrees and frames the current moment as a “heat wave” across the tech sector. AI has accelerated trends already under way, including higher expectations for speed, more pressure on budgets, and widening gaps between skilled engineers and those who rely purely on tools. The takeaway for listeners is direct: AI can be transformative, but it is not a substitute for disciplined engineering or deep technical judgement.

Key points

  • AI-generated code still needs human oversight.
  • Security, performance, and long-term maintainability remain human-led responsibilities.
  • High-performing teams treat AI as an assistant, not a replacement.

Executive Adoption: Mixed Results and Unclear ROI

Anthony asks what Bill is hearing from executives across the board, especially those feeling pressure to “do AI” without clarity on what it can deliver. Bill explains that adoption is uneven. Some leaders tried AI tools, found no net positive impact, and stopped using them. Others experienced a clear downside, including reduced throughput, which makes any tool difficult to justify in a competitive environment.

However, Bill also shares a more nuanced reality: AI is delivering strong value in narrow, well-defined niches. One example is implementation teams, often staffed by people who understand customer workflows but are not deep coders. These teams use AI to interpret backend code into plain-language explanations, helping them support clients more effectively when internal documentation is weak. That sort of translation layer is a real productivity gain, even if AI is not writing the system itself.

Anthony adds that he has seen early signs of AI accelerating QA, noting a recent example where AI-generated tests far outpaced manual Selenium-style scripting. Bill acknowledges this as the first compelling story he has heard about AI replacing some QA capacity, and he uses it to underline how quickly the landscape is shifting.

Key points

  • Many companies are still experimenting, and some are abandoning tools after poor results.
  • AI works best when applied to specific, high-clarity tasks.
  • Emerging QA use cases are accelerating quickly, but remain early-stage.

Vibe Coding, Online Polarisation, and the Real Question of “When”

Anthony describes the polarised online conversation around AI in software. On one side, non-technical founders celebrate vibe coding and claim they no longer need engineers. On the other, developers warn that AI tools produce insecure, fragile code and cannot match human quality. Bill sees this conflict as familiar. He compares today’s AI moment to the early internet era, when many people insisted the internet would never scale or survive, because the infrastructure and standards were not yet mature.

To Bill, the real question is not whether AI will reshape software development, but when. Tech history follows a pattern: vision arrives first, then the hard work of building standards, compliance, security frameworks, and sustainable economics catches up. AI is currently in that awkward but predictable stage. We can build fast prototypes, but broad and safe scaling still requires human thinking and responsible engineering.

Bill predicts that AI will gradually develop stronger guardrails. Over time, tools will begin prompting users with security and data-handling questions in ways similar to a seasoned developer. Yet even if AI becomes more proactive, it still cannot define business value on its own. Human judgement remains the final filter for deciding what solutions should exist, who they serve, and whether they are worth building.

Key points

  • The hype-versus-scepticism split mirrors earlier tech revolutions.
  • AI capability is rising, but maturity takes time.
  • Human judgement stays essential for user value, ethics, and ROI.

AI’s Limits in Human-Centred Work and the Data Problem

Anthony shifts the conversation to AI’s limits, arguing that even though AI has seen most of what humans have written, it does not have mature emotional intelligence. It can produce clever outputs, but it cannot consistently deliver empathy, care, or contextual sensitivity, which is why work like nursing cannot be meaningfully automated.

Bill expands that point into a deeper limitation: AI only learns from what humans have documented, and human behaviour research is incomplete. He shares a practical example from his own writing. While researching people’s desires and motivations, he noticed most studies rely on narrow sampling, often drawn from small cohorts and specific cultures. When research is shallow or biased, AI inherits those distortions and produces answers that can sound confident while being wrong.

They also warn about how people consume AI content. Many users assume that if something appears in writing, it must be broadly true. Combined with confirmation bias, this creates real risk, because AI outputs can reinforce incorrect beliefs at scale. The implication for leaders and developers is to treat AI responses as a starting point for thinking, not an authority.

Key points

  • AI lacks emotional intelligence and empathy required in care roles.
  • Poor or narrow research leads to poor AI answers.
  • Confirmation bias makes AI misinformation more dangerous.

Staying Relevant in Tech: Why Soft Skills Matter More Than Ever

Anthony asks Bill what advice he would give to people in tech right now, especially students entering computer science with uncertainty about where the industry is headed. Bill’s answer is blunt but hopeful: broaden your skill set. He notes that the market currently has a surplus of programmers, unlike twenty years ago when developers were scarce. That means career resilience now relies on differentiation.

Bill argues that soft skills are the top priority. Every business is ultimately “business with people”, and teams that thrive do so because they communicate well, understand customers, and collaborate effectively. Even in peak technical demand years, the developers who succeeded were those who could speak with clients, lead discussions, and solve messy real-world problems. Coding is increasingly abstracted, but human relationships and problem framing are not.

Bill shares his own experience overcoming social anxiety. He started with tiny, repeatable practice, such as short conversations at a coffee shop. Over time, these micro-skills opened doors to leadership roles and opportunities he did not even know existed. Anthony echoes this, noting that strong communicators naturally become project leaders and client-facing problem solvers, roles that remain valuable regardless of tooling shifts.

Key points

  • The industry has more programmers than before, so differentiation is crucial.
  • Communication and interpersonal effectiveness are long-term advantages.
  • Small practice habits build confidence fast and unlock career mobility.

The Hidden Costs of AI: Energy Use and Unintended Consequences

Bill highlights a topic rarely discussed in mainstream excitement: AI is expensive to run from an energy perspective. He points to the escalating electricity demand behind large-scale AI infrastructure, including major companies exploring nuclear or on-site generation to power data centres. Anthony agrees that this is the opposite of “green tech” right now, and it reinforces why ROI remains so uncertain for executives.

Bill also warns about unintended consequences at the human level. He references recent findings that students who overuse AI can retain less knowledge, because they rely on outputs rather than learning the underlying reasoning. For employers, that becomes an anti-pattern. If AI reduces effort too far, it may also reduce competence, a trade-off organisations have not fully accounted for yet.

They touch briefly on broader societal questions like job displacement and universal basic income. Bill pushes back on the idea that this is outside anyone’s pay grade. If AI reshapes labour markets, leaders and builders will need to think actively about the kind of society they are helping produce.

Key points

  • AI infrastructure creates significant energy demand.
  • Overreliance can weaken learning and critical thinking.
  • Social consequences of AI require active leadership, not avoidance.

Topics Covered

  • AI’s current limits in replacing senior software engineers
  • Executive sentiment on AI adoption and mixed ROI outcomes
  • Where AI helps most today: narrow niches like implementation support
  • Rapid prototyping and “vibe coding” versus production-grade development
  • Security, quality, and QA implications of AI-generated code
  • Industry polarisation, job loss headlines, and historical tech parallels
  • AI’s lack of emotional intelligence and risks of biased training data
  • Hidden costs of AI, especially energy use and sustainability concerns
  • Future-proofing tech careers through soft skills and adaptability

Key Takeaways

  • AI is not replacing great engineers yet, but it is changing workflows quickly.
  • The best AI outcomes come from specific use cases and tight human review.
  • Security, user value, and ROI still depend on human judgement.
  • Soft skills are the most reliable way to future-proof a tech career.
  • AI’s energy and learning side-effects are real and need honest discussion.

Useful Links

Bill Lennan | LinkedIn

40 Percent Better | LinkedIn

40 Percent Better | Website

HAERT Program | LinkedIn

HAERT Program | Website

Email Bill: bill@40pb.com / hello@40pb.com

Frequently Asked Questions

What is the best way for developers to use AI today?

Treat AI as a productivity assistant for prototyping, documentation, and narrow tasks, while keeping human engineers responsible for quality, security, and architecture.

Will AI replace software engineering jobs?

AI may reduce some tasks, but the timeline is unclear. The strongest advantage for engineers is to build soft skills, product thinking, and client-facing problem-solving ability.

Why are some companies quitting AI tools?

Executives report unclear ROI, unpredictable costs, and in some cases reduced throughput when tools are poorly integrated or poorly understood.

©2025 Aerion Technologies. All rights reserved.