AI chatbots and assistants are now helping answer customer questions, summarize data, and support administrators. They’re always available, faster than any human, and—if configured properly—surprisingly accurate. But with this convenience comes a pressing question: What happens when AI says more than it should—to your clients, your team, or even the entire world?
This isn’t a hypothetical concern. In early 2023, a student used a prompt injection attack to get Microsoft’s Bing AI chatbot to reveal its internal system instructions, rules that were meant to stay hidden. No insider access was needed; a cleverly worded prompt from an ordinary user was enough.
Cases like this demonstrate that AI is powerful—but still lacks context and risk awareness. That’s why the conversation around AI safety now includes not just companies but regulators too.
As of August 2024, the EU’s AI Act has come into force—the first comprehensive law governing artificial intelligence in the European Union. Its goal is to differentiate between high- and low-risk AI use cases and to establish clear rules where AI could impact users’ privacy, health, or safety.
Today, it’s not enough for developers and companies to simply “have AI.” It must be designed to comply with the AI Act and remain useful, trustworthy, and safe for all users.
Below, we walk through the practical rules we follow when implementing AI and the common mistakes they help prevent.
How We Do It in Practice:
1. AI Never Gets More Than It Actually Needs
We design every AI solution with the principle of “narrow context” in mind. That means the model receives only the specific data it needs to respond accurately. It is never directly connected to a live database or production system. All inputs are preprocessed, filtered, and structured so only relevant information reaches the model. This significantly reduces the risk of misinterpretation, unintentional data exposure, or contextual errors.
This approach is essential in projects involving sensitive data—like medical records—but we apply it even when working with public data sets.
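To make the idea concrete, here is a minimal sketch of what narrowing the context can look like at the application layer. The field names, sample data, and whitelist are purely illustrative, not our production code:

```python
# A minimal sketch of the "narrow context" principle: the model never sees the
# full record, only the fields the current task actually needs.

ALLOWED_FIELDS = {"appointment_date", "treatment_type", "follow_up_notes"}

def build_context(record: dict) -> dict:
    """Keep only whitelisted fields; everything else never reaches the model."""
    return {key: value for key, value in record.items() if key in ALLOWED_FIELDS}

record = {
    "patient_name": "…",            # filtered out before the model sees anything
    "insurance_id": "…",            # filtered out
    "appointment_date": "2024-05-12",
    "treatment_type": "physiotherapy",
    "follow_up_notes": "mild improvement, continue exercises",
}

prompt = f"Summarize this visit for the clinician:\n{build_context(record)}"
# `prompt` is what gets sent to the model; the raw record never does.
```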
2. Data Stays in the EU and Is Never Used for Further Training
In any project involving personal or business data, we need to know exactly where the data is stored and what guarantees the infrastructure provides. For example, for the Belgium-based medical system Crossuite, we use AWS hosting in Frankfurt, Germany. This keeps us aligned with the EU’s requirements for high-risk systems: personal data collected within the EU is processed within the EU and always handled in line with EU regulations.
Amazon’s terms also explicitly guarantee that our input data is not stored, evaluated, or used to train the model further. This is especially critical in fields like healthcare, where any overlap between user data and model training is strictly prohibited.
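In code, region pinning is a deliberate, one-line decision in the client configuration. A hedged sketch using AWS Bedrock; the model ID and prompt are placeholders, not taken from the Crossuite project, and model availability per region should be verified:

```python
# Illustrative only: pinning the AI client to AWS's Frankfurt region
# (eu-central-1) so requests are processed on EU infrastructure.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="eu-central-1")

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": "Summarize today's appointments."}],
    }),
)
print(json.loads(response["body"].read()))
```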
Our responsibility is not only technical, but legal—clients must know that what AI “sees” stays between them and the model. No one else—not now, and not in the future—should have access to that data.
3. Secure Inputs Are the Foundation of Safe Outputs
AI safety isn’t just about what data the model receives, but how it’s guided to respond. In every project, we design prompts to ensure the model sticks to relevant information, stays on topic, and provides clear, predictable answers.
We use predefined templates that define what information the AI can access, the desired tone of the response, and how it should react to uncertainty. We also monitor whether it stays within these constraints. This approach—known as prompt engineering—turns AI into a trusted tool, not an unpredictable experiment.
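To give a sense of what such a template looks like, here is a simplified sketch. The domain (an e-commerce back office) and the wording are invented for illustration:

```python
# A simplified example of a prompt template: the system message fixes the
# scope, tone, and fallback behaviour before any user input is added.
SYSTEM_TEMPLATE = """\
You are a support assistant for an e-commerce back office.
- Answer only questions about orders, invoices, and delivery status.
- Use only the provided order data; never guess missing values.
- Keep answers short and factual, in a neutral, professional tone.
- If the question is out of scope or the data is insufficient, reply exactly:
  "I can't answer that from the available data."
"""

def build_messages(order_context: str, user_question: str) -> list[dict]:
    """Assemble a chat-style request that keeps the model inside its role."""
    return [
        {"role": "system", "content": SYSTEM_TEMPLATE},
        {"role": "user", "content": f"Order data:\n{order_context}\n\nQuestion: {user_question}"},
    ]
```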
4. If Needed, We Anonymize Data Before AI Even Sees It
Not every application handles sensitive data—but any app might one day. Dejmark, for example, uses AI to simplify repeat purchases for their sales team. A photo of an old order, receipt, or ERP screenshot is enough for the AI to identify products, quantities, and volumes, and create a new cart. Currently, this process doesn’t handle personal data—but in the future, it might process invoices or delivery notes, which could include names, addresses, or email contacts.
That’s why we built anonymization into the solution from day one—using regex filters or preprocessing at the application layer. The model never sees raw input, only a clean extract without identifiers.
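A hedged sketch of what this preprocessing can look like; the patterns and sample text are illustrative, and real deployments layer more on top, such as field-aware filtering or named-entity detection for names and addresses:

```python
# Sketch of regex-based preprocessing at the application layer: obvious
# identifiers are masked before the text ever reaches the model.
import re

# Order matters: mask IBANs before the generic phone/number pattern runs.
PATTERNS = {
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s/-]{7,}\d"),
}

def anonymize(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

raw = "Invoice for jan.novak@example.com, tel. +421 900 123 456, IBAN SK3112000000198742637541"
print(anonymize(raw))
# -> "Invoice for [EMAIL], tel. [PHONE], IBAN [IBAN]"
```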
5. Every Output Is Traceable, Explainable, and Adjustable
Transparency builds trust. For every AI implementation, we create mechanisms to audit outputs. Our goal is to always be able to trace what was asked, how the model responded, and why—so the results can be verified and explained if needed.
This is especially important in healthcare: if there’s a discrepancy in data summaries, we need to understand what the model based its response on. In chatbot applications, we check whether the model stays within its role, and if not, we can retrain it, adjust the prompt, or refine the input handling.
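As a sketch of the underlying mechanism (not our actual logging stack), each model call can be written out with a trace ID so any answer can later be matched to its inputs. The field names and flat-file storage below are illustrative:

```python
# Minimal audit-trail sketch: every model call is recorded with enough context
# to reconstruct what was asked, what came back, and which records it used.
import json
import uuid
from datetime import datetime, timezone

def log_ai_call(model_id: str, prompt: str, response: str, source_refs: list[str]) -> dict:
    entry = {
        "trace_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,        # which model (and version) answered
        "prompt": prompt,            # exactly what the model was asked
        "response": response,        # exactly what it returned
        "source_refs": source_refs,  # which source records the context came from
    }
    with open("ai_audit.jsonl", "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(entry, ensure_ascii=False) + "\n")
    return entry
```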
Every AI deployment goes through a two-step testing process: internal review by our developers and final validation by the client before going live.
This allows us to build AI that’s not only functional—but defensible.
6. Trusted Models, Clear Rules, Ongoing Oversight
AI safety begins with choosing the right model and provider. In our projects, we prioritize partners who offer proven technology, transparent policies, and strong data protection—like OpenAI via Azure, Google Vertex AI, AWS Bedrock, or self-hosted solutions. Their infrastructure includes encryption, access control, and regular audits.
On top of this infrastructure, we define strict usage rules—what data the model can process, its limits, and fallback instructions for uncertainty. Every solution undergoes internal testing and client validation. Regular security audits help us adapt these rules as technology, regulations, or data types evolve—so our AI remains safe and reliable long term.
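One way to keep such rules explicit and reviewable is to store them as versioned configuration next to the integration code. A hypothetical example of what that configuration can capture; the structure and values are illustrative, not a provider-specific format:

```python
# Illustrative usage-policy object: what the assistant may process, its limits,
# and how it should behave when it is unsure.
USAGE_POLICY = {
    "allowed_data": ["order items", "delivery status", "public product info"],
    "forbidden_data": ["health records", "payment card numbers"],
    "max_context_tokens": 4000,
    "fallbacks": {
        "on_uncertainty": "Say you are not sure and offer a handover to a human.",
        "out_of_scope": "Decline politely and state what you can help with.",
    },
    "review_cycle": "re-audited after every regulatory or data-model change",
}
```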
AI Has Limits—And We Set Them So You Can Trust It
The biggest risk with AI isn’t the technology—it’s letting it operate without clear boundaries.
At bart.sk, we design AI solutions to be useful, trustworthy, and—above all—safe. We respect data sensitivity, legal obligations, and user needs. Trust isn’t something you leave to the model. It has to be built into the architecture from day one.
Sources
- https://www.darkreading.com/cyber-risk/shadow-ai-sensitive-data-exposure-workplace-chatbot-use
- https://www.lakera.ai/blog/chatbot-security
- https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
- https://support.park.edu/support/solutions/articles/6000275001-using-ai-chatbots-privacy-and-information-security-considerations
- https://www.theguardian.com/world/2018/jan/28/fitness-tracking-app-gives-away-location-of-secret-us-army-bases
- Ars Technica – AI-powered Bing Chat spills its secrets via prompt injection attack