Debunking 4 AI security myths: What organisations in Northern Ireland need to know

  • Ivor Buckley, Field CTO, Dell Technologies Ireland and Northern Ireland

    As organisations look to embrace AI, they can fall victim to myths that make securing AI seem far more complex than it truly is.

    The truth is, safeguarding AI systems doesn’t require a complex overhaul of existing infrastructure or security frameworks. It starts with applying foundational cybersecurity principles and adapting them to the risks and behaviours of AI technologies.

    Myth 1: AI systems are too complex to secure

    Threat actors are using AI to enhance a variety of attack types, from ransomware to zero-day exploits and even Distributed Denial of Service (DDoS). They can exploit unprotected AI systems to manipulate outcomes or escalate privileges, resulting in a broader attack surface. There’s a misconception that these risks make AI systems too complex to secure.

    Truth:

    AI systems are complex, but securing them is possible by reinforcing current cybersecurity practices and adapting them to AI-specific threats. Northern Ireland organisations can strengthen defences by engaging with security teams early, applying zero trust principles and building clear data policies.

    According to the Dell Technologies Innovation Catalyst Study, 84% of organisations in Northern Ireland view security as a key part of their business strategy. The benefits are clear: the study also revealed a 100% increase in confidence levels among Northern Irish organisations that have adopted zero trust principles, underscoring its growing value as a security framework. By reducing the attack surface and continuously monitoring AI workloads, organisations can make themselves a harder target for cybercriminals.

    Myth 2: No existing tools will secure AI

    Organisations may feel they have to adopt new security solutions and tools to secure their AI systems because AI is a newer, rapidly evolving workload. As a result, there’s a misconception that none of an organisation’s existing tools will secure AI.

    Truth:

    Existing cybersecurity investments still provide value to businesses in Northern Ireland. AI may be a different workload with unique elements, but it still benefits from foundational security measures like identity management, network segmentation and data protection. Maintaining strong cyber hygiene through regular system patching, access control and vulnerability management remains essential.

    To address AI-specific threats like prompt injection or compromised training data, organisations can tailor their current cybersecurity strategies rather than replacing them entirely. For example, regularly logging and auditing Large Language Model (LLM) inputs and outputs can help spot unusual activity or malicious use.
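As a minimal sketch of the logging-and-auditing idea above, the wrapper below records every LLM prompt and response and flags inputs matching common prompt-injection phrasings. The `generate` callable and the `SUSPICIOUS_PATTERNS` list are hypothetical stand-ins, not part of any particular product; a real deployment would use its own LLM client and a maintained detection ruleset.

```python
import logging
import re

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
audit_log = logging.getLogger("llm_audit")

# Hypothetical patterns that may indicate a prompt-injection attempt.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
]

def audit_llm_call(prompt: str, generate) -> str:
    """Log an LLM prompt and its response, flagging suspicious inputs.

    `generate` stands in for whatever LLM client an organisation uses:
    any callable that takes a prompt string and returns a response string.
    """
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            audit_log.warning("Suspicious prompt flagged: %r", prompt)
            break
    audit_log.info("Prompt: %r", prompt)
    response = generate(prompt)
    audit_log.info("Response: %r", response)
    return response
```

Because the wrapper sits between the caller and the model, the audit trail it produces can feed the same log-aggregation and alerting tools an organisation already runs, rather than requiring a new monitoring stack.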

    To secure AI, Northern Ireland organisations need to start by understanding how their current architecture and tools cover AI workloads. After reviewing their current security tools, organisations can spot where they need extra capabilities to address AI risks. This includes tools to monitor AI output, manage decisions and prevent unwanted actions.

    Importantly, 80% of organisations in Northern Ireland agree they can improve their detection of and response to cyber threats, highlighting the need to adapt current tools to AI rather than starting from scratch.

    Myth 3: Securing AI is only about protecting data

    LLMs operate by analysing data and generating output based on their findings. Since AI uses and generates large amounts of data, there’s a misconception that data protection alone is sufficient.

    Truth:

    Securing AI goes beyond protecting data alone. While safeguarding inputs and outputs is essential, securing AI involves the entire AI ecosystem, including models, Application Programming Interfaces (APIs), systems and devices. Organisations in Northern Ireland are particularly cautious, with 76% agreeing that their data and intellectual property are too valuable to be placed in an AI tool.

    LLMs, for example, are vulnerable to attacks that manipulate input data to produce misleading or harmful outputs. Addressing this risk requires tools and procedures to manage compliance policies and check AI inputs and outputs for safe responses. APIs, which serve as gateways to AI functionality, must be secured with strong authentication to block unauthorised access. And because AI systems continuously generate outputs, organisations need to monitor for anomalies or patterns that could indicate a breach or malfunction. By expanding the focus beyond data, Northern Irish businesses can build a more resilient and trustworthy AI environment.
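The strong-authentication point above can be illustrated with a small sketch of API-key checking at an AI gateway. The key values and the in-memory hash set are hypothetical; in practice, credentials would live in a secrets manager and checks would run inside the gateway or API management layer an organisation already operates.

```python
import hashlib
import hmac

# Hypothetical store of hashed API keys. Storing only hashes means a leaked
# store does not directly reveal usable credentials.
VALID_KEY_HASHES = {
    hashlib.sha256(b"example-key-123").hexdigest(),
}

def authenticate(api_key: str) -> bool:
    """Return True only if the presented key hashes to a known value.

    hmac.compare_digest performs a constant-time comparison, which avoids
    leaking information through timing side-channels.
    """
    presented = hashlib.sha256(api_key.encode()).hexdigest()
    return any(hmac.compare_digest(presented, valid) for valid in VALID_KEY_HASHES)
```

The same gateway is a natural place to attach the output monitoring the paragraph above describes, since every request and response to the AI system passes through it.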

    Myth 4: Agentic AI will ultimately replace the need for human oversight

    Agentic AI introduces autonomous agents that independently make decisions. Because these agents can make decisions independently, there’s a misconception that agentic AI will ultimately replace the need for human oversight.

    Truth:

    Even autonomous AI systems need governance to ensure they act ethically, predictably and in alignment with human values. Without human oversight, these systems risk deviating from assigned goals or exhibiting unintended and potentially harmful behaviours. To prevent misuse and ensure responsible deployment, organisations should set AI boundaries, use layered controls and involve humans in critical decisions. Regular audits and thorough testing are also essential to increase transparency and accountability across AI operations, especially as 96% of organisations report challenges integrating security into wider business strategies. Human oversight is foundational to safe and effective agentic AI.
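One way to picture the "boundaries plus human-in-the-loop" pattern above is an action gate: routine actions on an allow-list proceed autonomously, while anything outside it is escalated to a person. The action names and the `human_approves` callback are illustrative assumptions, not a prescribed design.

```python
from dataclasses import dataclass

# Hypothetical allow-list of actions an agent may take without review;
# everything else is escalated to a human reviewer.
AUTONOMOUS_ACTIONS = {"read_report", "summarise_document"}

@dataclass
class Decision:
    action: str
    approved: bool
    needs_human: bool

def gate_action(action: str, human_approves) -> Decision:
    """Allow routine actions automatically; escalate the rest.

    `human_approves` stands in for whatever review workflow an
    organisation uses: a callable returning True or False for an action.
    """
    if action in AUTONOMOUS_ACTIONS:
        return Decision(action, approved=True, needs_human=False)
    return Decision(action, approved=human_approves(action), needs_human=True)
```

Recording every `Decision` also gives auditors the trail of what the agent did on its own versus what a human signed off, supporting the transparency and accountability goals described above.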

    Debunking AI myths

    AI-enhanced threats may seem daunting, but the path to securing AI is more familiar than it may appear. By grounding security strategies in cybersecurity principles and adapting them to AI risks, organisations across Northern Ireland can protect AI systems effectively without unnecessary complexity or cost. Many existing tools and practices can extend to protect AI systems, saving time, reducing risk and maximising existing investments.

    Debunking these myths isn’t just about correcting misconceptions; it’s about empowering teams to take informed, proactive steps toward responsible AI adoption.

    These themes were explored in depth at Dell Technologies Innovate, where experts discussed practical approaches to securing AI across organisations, protecting critical data, and enabling innovation without compromising trust. The conversation reinforced how businesses can build a stronger cybersecurity posture while confidently harnessing AI’s potential.
