Insights

AI Security: The new cybersecurity battleground

November 14, 2025
AI Security: The new cybersecurity battleground
José Sousa Gonçalves, Application Security Analyst at Celfocus, advocates that the future of Artificial Intelligence depends on robust, regulation-aligned security frameworks to ensure safe, ethical, and sustainable innovation.

|---Module:text|Size:Small---| Artificial Intelligence has moved from promise to everyday reality. From recommendation systems to clinical decision support, and across financial services and telecommunications, AI is transforming industries and redefining business models. While this expansion brings undeniable benefits, it also introduces serious risks, especially when adoption is widespread and, at times, rushed. Chief among these challenges are privacy breaches, lack of transparency and regulatory non-compliance.

This last point is particularly relevant for businesses, as new frameworks recommend integrating security measures from the earliest design stages of intelligent systems. The question is no longer whether AI will be used, but how to ensure it is used safely, ethically and in compliance with applicable law. This is the context in which a new domain emerges for cybersecurity: AI security.

Traditional defences against software flaws are no longer sufficient. Beyond known vulnerabilities, organisations must anticipate risks specific to intelligent systems, such as manipulation of training data and adversarial attacks. These risks are amplified by the democratisation of AI. Deep technical expertise is no longer required to exploit weaknesses. Today, a system can be undermined simply by “talking” to it.

Just as the concept of privacy by design reshaped how we approach data protection, AI security must be embedded from the start. Trust is only possible when security runs through every phase of the technology lifecycle. From planning to continuous operation, including development and validation. This requires combining robust cybersecurity practices with principles of digital ethics and regulatory compliance.

The risks of ignoring this approach are too high. In telecommunications, algorithms managing automated user interactions can expose sensitive personal data that may be harvested for phishing campaigns. In healthcare, AI-based diagnostic systems must guarantee the integrity and reliability of predictions, otherwise patient safety is at risk. In financial services, models used for credit risk assessment or fraud detection must be protected against manipulation and bias, while preserving effectiveness and compliance with European and international standards.

A recent example illustrates the impact of lacking a holistic security view for AI-based systems. Researchers discovered vulnerabilities in the Yellow.ai platform. An agentic AI used by companies such as Sony, Logitech and Hyundai. They were able to manipulate the model to generate instructions containing malicious code, which the platform stored in its own systems. By interacting with the support service, an attacker could then access sensitive data belonging to the human operator. This case shows how “classic” techniques, such as XSS, can be combined with attacks on AI models, increasing the risk of compromising entire organisations.

The future of AI will depend on companies adopting robust security frameworks aligned with European guidance and emerging global standards. The message is clear. Action is required now. Building security, ethics and compliance into AI systems from the ground up is not just a competitive advantage. It is essential to ensure sustainable innovation and protect the digital society we all rely on.

This Article was published in Tek Sapo in October 2025.

DataAI

Written by
José Sousa Gonçalves
Application Security Analyst
Ready for a deep dive?