DevReady PodcastThe Hidden AI Security Risks Every Business Leader Should Understand | Mark Vos | DevReady Podcast

Introduction

Artificial intelligence is rapidly transforming the way organisations operate, innovate and compete. From automation and AI agents to deep learning systems and large language models, businesses are adopting AI tools at an unprecedented pace. Yet with this rapid adoption comes a new generation of AI security risks that many organisations are not fully prepared for.

In this episode of the DevReady Podcast, host Anthony Sapountzis, CTO and Co-Founder of Aerion Technologies and DevReady.ai, speaks with Mark Vos, Founder and CEO of Cyber Impact, about the emerging cybersecurity challenges created by artificial intelligence. Mark brings more than three decades of experience across technology leadership, enterprise risk and cybersecurity strategy, including senior roles as Chief Risk Officer and Chief Information Security Officer at Iress, a platform responsible for a significant portion of Australian financial market infrastructure.

Together they explore the real risks behind modern AI systems, including AI agents with system access, deepfakes, prompt manipulation, governance failures and the pressures driving businesses to deploy AI too quickly. The conversation highlights why organisations must rethink security, governance and risk management as AI continues to reshape the digital landscape.

Mark Vos: From Cybersecurity Leader to AI Safety Advocate

Mark Vos has spent more than 30 years working at the intersection of technology, cybersecurity and enterprise risk. Beginning his career during the early days of the internet boom in the 1990s, Mark quickly established himself as a technologist with deep expertise in digital infrastructure and security.

Over the course of his career he moved from cybersecurity consulting into executive leadership roles that broadened his perspective beyond technology alone. As a partner within a Big Four consulting firm and later as Chief Risk Officer and Chief Information Security Officer at Iress, Mark developed a deep understanding of organisational risk across financial systems, operational infrastructure and reputational exposure.

These experiences ultimately led him to establish Cyber Impact, where he now provides strategic security leadership as a fractional Chief Information Security Officer (CISO). His work increasingly focuses on helping organisations adopt artificial intelligence safely through stronger governance, risk management frameworks and AI security controls.

Why Artificial Intelligence Represents the Next Major Technology Shift

Artificial intelligence is often described as the next major technological revolution after the internet. According to Mark, the speed of AI development suggests it could reshape industries far faster than previous technological waves.

While the internet took decades to reach widespread adoption, AI tools have moved from research labs to everyday business workflows in just a few years. Organisations are now deploying AI to automate processes, analyse data, generate content and support decision-making across nearly every sector.

This rapid growth presents enormous opportunities for productivity and innovation. However, the same capabilities that make AI powerful also introduce new security risks that traditional cybersecurity frameworks were never designed to address.

AI systems can write code, execute tasks autonomously, interact with software environments and generate highly convincing content. When these systems are integrated into business operations without sufficient governance, they can create unexpected vulnerabilities.

The New Cybersecurity Threat Landscape Created by AI

One of the central themes of the discussion is how artificial intelligence introduces an entirely new cybersecurity threat surface.

Unlike traditional software systems that follow deterministic logic, modern AI models operate through complex neural networks with billions of parameters. This makes their behaviour difficult to predict and sometimes difficult to control.

Anthony explains that many people misunderstand how these systems work. AI models do not follow fixed rules in the way traditional software does. Instead they generate responses based on patterns learned during training. This unpredictability can create new opportunities for attackers.

Mark highlights that prompt manipulation and language-based interactions can function similarly to social engineering attacks. By carefully crafting instructions or prompts, malicious actors may be able to influence AI systems to produce harmful outputs or bypass safeguards.

These vulnerabilities become more serious when AI systems are connected to real-world systems such as financial platforms, internal databases or operational software.

AI Agents and Autonomous Systems: Power and Risk

Recent advances in AI have introduced the concept of AI agents. Unlike simple chat interfaces, AI agents can execute tasks, run software processes and interact with external systems.

This new capability dramatically increases the potential impact of AI deployment within organisations. Agents can automate workflows, conduct research, manage processes and even write or deploy software.

However, giving AI systems direct access to tools, infrastructure or financial systems also creates new risks. Without carefully designed safeguards, an AI agent could perform unintended actions or expose sensitive data.

Mark describes examples where AI agents can be manipulated through language interactions despite the presence of platform guardrails. These examples highlight why AI governance and system architecture are critical to safe deployment.

Deepfakes and Synthetic Media: The Growing Trust Problem

Another area of concern is the rapid development of AI-generated media, including deepfake videos, synthetic voices and highly realistic images.

Advances in generative AI have reached a point where even experts can struggle to distinguish between authentic content and AI-generated material. This creates major challenges for digital trust.

Deepfakes can be used to impersonate individuals, manipulate public opinion or conduct fraud. In a business environment, they could potentially be used for social engineering attacks targeting executives or employees.

Anthony also points out that social media algorithms often reinforce misinformation by repeatedly showing users similar content. When combined with realistic AI-generated media, this can make false information appear credible.

For organisations and individuals alike, verifying information sources has become increasingly important.

AI Context Windows, Memory and System Stability

The conversation also explores how AI systems manage memory and context during interactions.

Large language models rely on a context window, which contains the information required for the model to understand the conversation. As interactions continue, the context grows larger.

When the context becomes too large, performance can degrade. Important system instructions may be pushed out of the model’s memory, which can increase the risk of unexpected behaviour.

To manage this challenge, Mark explains a technical approach that involves creating sub-agents to handle specific tasks. These sub-agents operate with limited context and return results to the main system.

This architecture can improve efficiency and reduce the risk of system instability.

Why AI Governance Must Become a Business Priority

Throughout the discussion, Mark emphasises that organisations must focus on AI governance and risk management.

Businesses face strong pressure from investors and markets to deploy AI quickly in order to improve efficiency and competitiveness. However, rapid adoption without proper safeguards can create long-term risks.

Effective AI governance includes:

  • External guardrails that AI systems cannot modify
  • Least privilege access controls
  • Oversight mechanisms for critical actions
  • Clear boundaries for autonomous behaviour
  • Continuous monitoring and auditing

By implementing these controls, organisations can reduce the risk of unintended outcomes while still benefiting from AI innovation.

The Future of AI: Opportunity with Responsibility

Despite the risks discussed, Mark remains optimistic about the future of artificial intelligence.

AI has the potential to deliver enormous productivity gains, accelerate innovation and improve decision-making across industries. Many routine tasks may be automated, allowing humans to focus on more complex and creative work.

At the same time, responsible governance will play a critical role in ensuring these technologies benefit society.

The current moment represents a unique opportunity to shape how artificial intelligence is deployed, regulated and secured before it becomes deeply embedded in every aspect of digital infrastructure.

Open discussion, collaboration and thoughtful leadership will be essential to navigating this transformation.

Key Takeaways

  • Artificial intelligence is creating a new generation of cybersecurity risks.
  • AI agents with system access can introduce unexpected vulnerabilities.
  • Prompt manipulation and language-based attacks can influence AI behaviour.
  • Deepfakes and synthetic media are making digital trust more difficult.
  • Strong AI governance and risk management frameworks are essential.
  • Businesses must balance rapid innovation with responsible deployment.

Useful Links

Mark Vos | LinkedIn

Cyber Impact | LinkedIn

Cyber Impact | Website

FAQs

What are AI security risks for businesses?

AI security risks include vulnerabilities created by AI systems such as prompt manipulation, data exposure, deepfake attacks and autonomous AI agents performing unintended actions.

Why is AI governance important?

AI governance ensures that organisations deploy artificial intelligence responsibly. It includes security controls, oversight frameworks and policies that reduce risk while allowing innovation.

Are AI agents safe for business use?

AI agents can provide significant productivity benefits. However, they require careful architecture, strict access controls and monitoring to ensure they do not perform harmful actions.

How can companies reduce AI cybersecurity risks?

Businesses can reduce risk by implementing strong governance frameworks, limiting system access, maintaining human oversight and continuously monitoring AI system behaviour.

Will AI replace jobs in the future?

AI may automate some routine roles, but it is also expected to create new opportunities in areas such as AI management, cybersecurity, governance and technology development.

©2025 Aerion Technologies. All rights reserved | Terms of Service | Privacy Policy