News

Essential practices for secure AI

November 24, 2025
Essential practices for secure AI
Artificial intelligence is increasingly integrated into products and services, requiring attention to safety, reliability, and ethics from design through to operation.

|---Module:text|Size:Small---| In an interview with Executive Digest, Pedro Tarrinho, Director of Application Security at Celfocus, explains how safety, reliability, and ethics should be embedded at every stage of AI solution development, from design to continuous monitoring, and the challenges companies face in ensuring these technologies are safe, transparent, and responsible.

Here’s the full interview transcription:

When and how did concepts such as Machine Learning (ML) and Natural Language Processing (NLP) start to gain relevance in the development of AI solutions?

Although AI is very much in the spotlight today, concepts like ML and NLP have been studied for decades. Until a few years ago, they were largely confined to academia and highly specialised companies. In my opinion, the turning point was the ability to train models with very large datasets together with advances in hardware and computing. This made it feasible to embed these concepts into real products such as virtual assistants, translators, and image analysis systems and, more recently, in generative models that have brought AI into the daily lives of companies and individuals.

When we talk about using AI in enterprise solutions, what does that actually imply for security and reliability?

It helps to separate two different realities. First, developers use AI as an assistive tool to write code, generate tests or review documentation. This looks harmless but carries risk. Generated code is not always secure, may contain hidden bugs and can adopt patterns that are no longer recommended. Everything must be validated. Do not trust blindly. In AI-assisted development, the rule is simple. The human, not the Large Language Model, is accountable for the code.

Then there is the second reality. AI as part of the end solution, for example a chatbot, a recommendation engine or an intelligent agent. Here the challenge grows because we move from a support tool to an entity that interacts with users, learns from data and takes decisions with real impact. You must ensure the AI system is reliable, does not leak sensitive information, cannot be manipulated by malicious inputs and behaves predictably. Security must be considered end to end.

What exactly does it mean to develop AI securely? Is it different from secure traditional software development?

It is different. Traditional software follows logic and rules we define upfront. AI introduces a degree of unpredictability. Models learn, adapt, may make decisions influenced by factors we do not fully observe and can react unexpectedly. Secure AI development means exerting stronger control over inputs and understanding how the model learns so there are clear bounds to what it can and cannot do. Think of security as continuous, not as a final phase. It is a Shift Left for AI.

How does your approach ensure security is present from the design of AI solutions?

This has been a focus of our work for a long time. We research and test ways to integrate security from the outset, which led us to create a clear, AI-adapted internal process. It covers the entire lifecycle, from planning to monitoring. For us, “security by design” is not a slogan. It is operational practice backed by a well-defined process.

What practices do you apply across the AI lifecycle to prevent ethical risks and bias in models?

We run a process that spans all phases from planning to operations. At project inception we analyse the datasets to ensure we are not introducing skews or imbalances that could influence model behaviour. Later, during training, we test whether the model responds consistently and appropriately across different situations and user profiles. Throughout, we involve people from multiple disciplines, including technical, legal, and business, to bring diverse perspectives and surface potential impacts that are not always obvious.

How do you validate and test the security and resilience of models before deployment?

Before going live, every model goes through a testing phase that replicates normal usage and out-of-distribution scenarios. We simulate malicious interactions, probe behaviour with varied inputs and assess how far the model remains stable and secure. We also evaluate risks of data leakage that should be protected and check for weaknesses in how the model handles more sensitive requests. No system is 100% secure, but our testing aims to minimise risk before it reaches end users.

How do you ensure that the data used are handled securely and in compliance with the GDPR and other European standards?

At the start of each project, we map the data to use and assess whether they are processed correctly and on a lawful basis. At this stage we define measures to be applied, including anonymisation, minimisation or removal, and encryption. Retention rules and data flows are part of our process. This upfront work may extend the initiation phase, but it is an investment that prevents future risks and, above all, increases user confidence in the solution.

What role does continuous monitoring play in maintaining the security and operational trustworthiness of AI systems?

It is one of the most critical phases. Once an AI system is in production you cannot assume that initial behaviour will remain stable. Reality changes, users change, and attack techniques evolve. Continuous monitoring helps detect issues early, whether performance drops, inaccurate responses or signs of malicious usage. Ultimately, it gives us visibility to ensure the AI continues to behave as expected.

How do ethics and transparency in AI models contribute to user and client trust?

People do not trust systems they do not understand. AI is non-deterministic nature, the fact that it does not always return the same answer, adds complexity, and can create distrust. Transparency is decisive. Explain what drives responses, make system limits explicit and ensure decisions are not taken blindly. AI solutions must be designed from the outset with clarity, predictability, and ethical principles. Only then can we build trust and credibility with users and clients.

What are the main challenges companies face when trying to comply with the AI Act and other regulatory requirements?

A first challenge is understanding what applies to the solution being built. The AI Act is a European regulation, which means it applies across EU Member States, although, as with the GDPR and speaking as a non-lawyer, there will be national interfaces with local legislation.

Then there is the practical challenge of governing and documenting the AI project lifecycle, from risk assessment and transparency (explainability) to handling sensitive data. You need a transversal approach with the right teams at the table. Security, legal, compliance, product, and technology.

Finally, risk classification. From the highest-risk domains such as healthcare, justice and critical infrastructure, which require much stricter controls, to limited-risk AI used for document interpretation or process support. This differentiation is essential and must be analysed and accounted for under the AI Act.

What mistakes or gaps do you often see when AI solutions are built without security embedded from the start?

Common issues include technically functional solutions that fail on critical aspects because models were not properly tested against unexpected inputs, lack mechanisms to detect sensitive information disclosure or make inconsistent decisions. It is also frequent to see weak control over data, from provenance to downstream impact on system behaviour. With security in mind from the start, we can trace decisions, provide explanations, strengthen trust and reduce long-term risk, thereby increasing safety.

Looking ahead, what steps are essential to keep AI safe, trustworthy and responsible?

The priority is to approach AI with critical thinking, not with hype or fear. Technology will evolve and models will become more capable, but the fundamentals do not change. We must ensure what we build is secure, transparent and resilient. That means clear processes, diverse perspectives and never treating security as an optional extra. Finally, humility. The day we believe we fully understand AI will probably mark the beginning of the end. That is when we lower our guard and risk losing control.

Access the Interview, in Portuguese, here.

DataAI
Written by
No items found.