When a Chatbot Says Too Much: Rules for Safe AI

AI chatbots and assistants are now helping answer customer questions, summarize data, and support administrators. They’re always available, faster than any human, and—if configured properly—surprisingly accurate. But with this convenience comes a pressing question: What happens when AI says more than it should—to your clients, your team, or even the entire world?

This isn’t a hypothetical concern. In early 2023, a student used a simple prompt-injection attack to get Microsoft’s Bing AI chatbot to reveal internal system rules that were meant to stay hidden. No exploit or privileged access was needed: any regular user could have done the same.

Cases like this demonstrate that AI is powerful—but still lacks context and risk awareness. That’s why the conversation around AI safety now includes not just companies but regulators too.

As of August 2024, the EU’s AI Act has come into force—the first comprehensive law governing artificial intelligence in the European Union. Its goal is to differentiate between high- and low-risk AI use cases and to establish clear rules where AI could impact users’ privacy, health, or safety.

Today, it’s not enough for developers and companies to simply “have AI.” It must be designed to comply with the AI Act and remain useful, trustworthy, and safe for all users.

In the following section, we’ll explore the most common mistakes companies make when implementing AI—and the practical rules that help prevent them.

How We Do It in Practice:

1. AI Never Gets More Than It Actually Needs

We design every AI solution with the principle of “narrow context” in mind. That means the model receives only the specific data it needs to respond accurately. It is never directly connected to a live database or production system. All inputs are preprocessed, filtered, and structured so only relevant information reaches the model. This significantly reduces the risk of misinterpretation, unintentional data exposure, or contextual errors.
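The “narrow context” principle can be sketched roughly like this (a minimal illustration, not our actual code; the field names and whitelist are hypothetical):

```python
# Narrow context: the model only ever receives whitelisted fields,
# never the raw database record.
ALLOWED_FIELDS = {"order_id", "product_name", "quantity"}  # hypothetical whitelist

def build_context(record: dict) -> dict:
    """Filter a record down to the whitelisted fields before prompting."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

record = {
    "order_id": "A-1042",
    "product_name": "Wall paint, white 10 l",
    "quantity": 3,
    "customer_email": "jan@example.com",   # never reaches the model
    "internal_margin": 0.31,               # never reaches the model
}

context = build_context(record)
assert "customer_email" not in context
```

The whitelist is the key design choice: anything not explicitly allowed is dropped by default, so a new sensitive column added to the database later cannot silently leak into prompts.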

This approach is essential in projects involving sensitive data—like medical records—but we apply it even when working with public data sets.


2. Data Stays in the EU and Is Never Used for Further Training

In any project involving personal or business data, we need to know exactly where the data is stored and what guarantees the infrastructure provides. For example, with the Crossuite medical system based in Belgium, we use AWS hosting in Frankfurt, Germany—fully complying with the AI Act requirements for high-risk systems. These include provisions that personal data collected within the EU must be processed primarily within the EU and always in line with EU regulations.

Amazon’s terms also explicitly guarantee that our input data is not stored, evaluated, or used to train the model further. This is especially critical in fields like healthcare, where any overlap between user data and model training is strictly prohibited.

Our responsibility is not only technical, but legal—clients must know that what AI “sees” stays between them and the model. No one else—not now, and not in the future—should have access to that data.


3. Secure Inputs Are the Foundation of Safe Outputs

AI safety isn’t just about what data the model receives, but how it’s guided to respond. In every project, we design prompts to ensure the model sticks to relevant information, stays on topic, and provides clear, predictable answers.

We use predefined templates that define what information the AI can access, the desired tone of the response, and how it should react to uncertainty. We also monitor whether it stays within these constraints. This approach—known as prompt engineering—turns AI into a trusted tool, not an unpredictable experiment.
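A predefined template of this kind might look as follows (an illustrative sketch, not a template from a real project; the role, scope, and fallback wording are assumptions):

```python
# A prompt template with a fixed role, an allowed scope, and an explicit
# fallback instruction for uncertainty. Only filtered data is interpolated.
TEMPLATE = """You are a support assistant for an e-commerce platform.
Answer ONLY questions about the order data below.
If the answer is not contained in that data, reply exactly:
"I don't have that information."

Order data:
{context}

Question: {question}
"""

def build_prompt(context: str, question: str) -> str:
    """Interpolate pre-filtered context and the user question into the template."""
    return TEMPLATE.format(context=context, question=question)

prompt = build_prompt("order A-1042: 3x wall paint, 10 l", "When will it ship?")
```

Because the role, scope, and fallback are fixed in the template rather than left to the model, the answers stay predictable and it is easy to test whether the model respects its constraints.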


4. If Needed, We Anonymize Data Before AI Even Sees It

Not every application handles sensitive data—but any app might one day. Dejmark, for example, uses AI to simplify repeat purchases for their sales team. A photo of an old order, receipt, or ERP screenshot is enough for the AI to identify products, quantities, and volumes, and create a new cart. Currently, this process doesn’t handle personal data—but in the future, it might process invoices or delivery notes, which could include names, addresses, or email contacts.

That’s why we built anonymization into the solution from day one—using regex filters or preprocessing at the application layer. The model never sees raw input, only a clean extract without identifiers.
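A regex-based preprocessing step of that kind can be sketched like this (the patterns are deliberately simplified examples, not the production filters):

```python
import re

# Application-layer anonymization: common identifiers are replaced with
# placeholders before the text ever reaches the model.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d /-]{7,}\d"), "[PHONE]"),
]

def anonymize(text: str) -> str:
    """Replace matched identifiers with neutral placeholders."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

clean = anonymize("Invoice for jan.novak@example.com, tel. +421 903 123 456")
# the model only ever sees the placeholder version of the input
```

In practice, real filters would cover more identifier types (names, addresses, account numbers), but the principle is the same: the raw input stops at the application layer.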


5. Every Output Is Traceable, Explainable, and Adjustable

Transparency builds trust. For every AI implementation, we create mechanisms to audit outputs. Our goal is to always be able to trace what was asked, how the model responded, and why—so the results can be verified and explained if needed.
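Such an audit mechanism can be sketched in a few lines (a minimal illustration under assumed names; a real deployment would write to durable, access-controlled storage):

```python
import json
import time
import uuid

# Minimal audit trail: every model call is recorded with its prompt,
# response, and timestamp, so any output can later be traced and explained.
def log_interaction(prompt: str, response: str, log: list) -> str:
    """Append one interaction to the audit log and return its ID."""
    entry = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "prompt": prompt,
        "response": response,
    }
    log.append(entry)
    return entry["id"]

audit_log: list = []
entry_id = log_interaction("Summarize order A-1042", "3x wall paint, 10 l", audit_log)
record_json = json.dumps(audit_log)  # serializable for later review
```

The essential property is that the log captures the exact prompt alongside the response: without it, a surprising answer cannot be reconstructed or explained after the fact.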

This is especially important in healthcare: if there’s a discrepancy in data summaries, we need to understand what the model based its response on. In chatbot applications, we check whether the model stays within its role, and if not, we can retrain it, adjust the prompt, or refine the input handling.

Every AI deployment goes through a two-step testing process: internal review by our developers and final validation by the client before going live.

This allows us to build AI that’s not only functional—but defensible.


6. Trusted Models, Clear Rules, Ongoing Oversight

AI safety begins with choosing the right model and provider. In our projects, we prioritize partners who offer proven technology, transparent policies, and strong data protection—like OpenAI via Azure, Google Vertex AI, AWS Bedrock, or self-hosted solutions. Their infrastructure includes encryption, access control, and regular audits.

On top of this infrastructure, we define strict usage rules—what data the model can process, its limits, and fallback instructions for uncertainty. Every solution undergoes internal testing and client validation. Regular security audits help us adapt these rules as technology, regulations, or data types evolve—so our AI remains safe and reliable long term.

AI Has Limits—And We Set Them So You Can Trust It

The biggest risk with AI isn’t the technology—it’s letting it operate without clear boundaries.

At bart.sk, we design AI solutions to be useful, trustworthy, and—above all—safe. We respect data sensitivity, legal obligations, and user needs. Trust isn’t something you leave to the model. It has to be built into the architecture from day one.



AI Security FAQ: How to Protect Your Data When Using Artificial Intelligence

Why could an AI chatbot accidentally reveal sensitive information?

If the chatbot’s inputs and access levels aren’t properly configured, it might process and reuse data it shouldn’t — like internal policies, personal information, or confidential files. That’s why it’s essential to follow the “narrow context” principle and only give the AI what it truly needs to respond.

What is the AI Act and who does it apply to?

The AI Act is an EU regulation that entered into force in August 2024, with most obligations phasing in over the following years. It classifies AI systems by risk level and introduces rules for transparency, data handling, safety, and auditability — especially for tools working with sensitive or regulated data. Businesses using AI must comply based on their use cases.

Is my AI system processing data outside the EU?

It depends on the model and where it’s hosted. When handling personal data, it’s best to use cloud services based in the EU — and ensure the data isn’t being used to train the model. Hosting through AWS Frankfurt or Azure Europe, for example, aligns with both GDPR and the AI Act.

What is prompt engineering and how does it improve security?

Prompt engineering is the practice of designing clear, safe, and well-scoped inputs for AI systems. It ensures the model responds in the right tone, stays within defined limits, and only handles the data it should. This reduces the risk of inaccurate or inappropriate outputs.

Can AI misuse information from documents I upload?

If the model is not properly configured, it might retain or reuse sensitive parts of your documents. Never feed raw, unfiltered data into an AI tool — anonymize it or extract only what’s necessary before it reaches the model.

What’s the difference between a secure AI model and a typical AI tool?

A secure model includes clear data handling rules, EU-based data storage, access restrictions, encryption, output logging, and full auditability. In contrast, many public AI tools don’t offer these guarantees, making them risky for use in regulated industries.

Can I track what the AI responded and why?

Yes. Professional solutions log who asked what, the exact prompt, the AI’s response, and the data sources used. This kind of audit trail is critical for industries like healthcare, finance, and internal enterprise tools.

Which AI models are considered safe for business use?

Recommended platforms include OpenAI via Azure, Google Vertex AI, AWS Bedrock, or private self-hosted models. These options offer strong guarantees for data security, encryption, transparency, and control over how information is handled.

What should I do before deploying AI in production?

First, define what data the AI will access, who can see it, how it will be filtered, and where it will be stored. Create internal security policies, anonymize sensitive inputs, use prompt templates, and enable logging for all outputs. Only then is it safe to go live.
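Those pre-deployment steps can be captured as a simple go-live checklist that is verified programmatically (an illustrative sketch; the field names are assumptions, not a real configuration schema):

```python
# Go-live checklist: deployment is blocked until every safeguard is in place.
deployment_config = {
    "data_sources_defined": True,
    "access_control_set": True,
    "input_filtering_enabled": True,
    "anonymization_enabled": True,
    "prompt_templates_in_use": True,
    "output_logging_enabled": True,
    "eu_hosting_confirmed": True,
}

def ready_for_production(config: dict) -> bool:
    """Safe to go live only when every check passes."""
    return bool(config) and all(config.values())

assert ready_for_production(deployment_config)
```

Encoding the checklist this way makes it auditable: a missing or failed safeguard is a hard stop, not a note in a document.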