Why is securing AI important

Artificial Intelligence (AI) has exploded as a topic worldwide, thanks to the increasing maturity of generative AI and its wide usage. Individuals and organizations have begun exploring what the technology is capable of already. Nevertheless, there are growing concerns about how it will really change our day-to-day lives, beginning with “Is it even safe/secure to be used?“

Matej Zachar & Daniel Filakovsky

Updated on Jul 4, 2023

Published on Jul 3, 2023

Common attacks on AI

AI systems have introduced new attack vectors, allowing attackers to use various techniques to obtain sensitive information or abuse AI for purposes not designed for. 

Some common attacks on AI systems include:

  1. Prompt injection: This type of attack involves carefully crafted instructions that affect AI decision-making and navigate the attacker’s instructions. For example, an attacker may instruct the AI to send all subsequent communication to the attacker’s server.
  2. Model exfiltration: Models are built on a massive amount of data, some of which may be sensitive. If this information is leaked, it could lead to significant privacy breaches for individuals and companies whose data was used in training the model.
  3. Data poisoning: AI system providers carefully select the data used for training their AI. However, if the AI is capable of self-learning, an attacker can teach it to spread malware in response. For example, an AI system may be manipulated to append a malicious link at the end of each response, leading users to phishing pages.

To ensure the safety of these systems, it is crucial to include security in the development and integration lifecycle.

Main challenges and what to do with them

If we break down the security challenges of generative AI, what we are really concerned about are 3 areas:

AreaMost common security concerns
What goes in as a promptHow to ensure/maintain its confidentiality and/or privacy
The operation of the engine itselfWhat happens with the data in a prompt, how does the algorithm reach conclusions, what data did it learn with, how it handles bias, etc.
What goes out as an output, and how is it handledHow to ensure its confidentiality and integrity, how to verify accuracy and trustworthiness

The situation may differ if the organization in question is developing the engines and models themselves. That can bring great challenges on its own. For the sake of simplicity, we assume here that the AI engine is provided by a 3rd party. 

We summarize below what actions can be taken to address those concerns and improve overall security when utilizing generative AI. 

1. What goes in as a prompt

    • Data governance is key. Organizations and individuals should be well aware of what data they are using as prompts and examples, including their data classification and category. Great care should be taken with special categories of data, such as protected health information (PHI) or personal data.

2. The operation of the supplied engine itself

    • As with any other supplier, it is important to review the Terms of Service, Data Processing Agreement, and other relevant contracts. They often discuss what the supplier can do with the data, how they safeguard them, and what their liability is. Is your data going to be used for model training? Can you unsubscribe from that?
    • If the AI provider is transparent about how the model operates, seeking understanding is the next logical step. Learning the AI principles of the supplier, analyzing the training data, and providing assurance are important. 
    • Thorough testing can help uncover misalignments between what the supplier claims and how the model actually operates. In practice, testing nonsensical or unsupported prompts can uncover issues. Be careful, though, to always comply with the supplier’s terms and perform such tests with their approval. 
    • If the model supports it, asking it how it reached a conclusion step-by-step can bring more clarity to the decision-making engine. 

3. What goes out as an output, and how is it handled

    • In principle, any model outputs should be verified before using them further. Organizations should calculate the risks of the model (non)accuracy and its impact on the decisions in their particular use cases. 

Kontent.ai approach

As the first headless CMS with native AI capabilities on the market, we take extra care about the responsible use of AI in both our internal processes and product.  

Our goal is to integrate AI in a way that treats all users fairly and without discrimination and maintains a clear chain of responsibility across the whole AI lifecycle. We aim to make AI explainable and understandable to both employees and customers, all within a safe environment that properly protects user data and privacy.

On the crossroads

With AI being the new big topic, there are certainly concerns about how to achieve trust and security in pace with the technology development. As new attacks such as prompt injection, model exfiltration, or data poisoning appear, organizations need to be ready. 

Covering system inputs, operations, and outputs is a good starting point for breaking down the problem into bite-size pieces. As with any other area in business, the way these need to be secured will depend on the use case and the risk tolerance. 

At Kontent.ai, we invest a lot of effort to remain up to speed with evolving threat landscape of AI technology so that our customers can have confidence in our native AI capabilities and so we can ensure the technology is used responsibly and securely. 

Subscribe to the Kontent.ai newsletter

Get the hottest updates while they’re fresh! For more industry insights, follow our LinkedIn profile.