AI tools are evolving fast, and in demos they can feel incredibly convincing, especially when built from a clean slate. The true test comes when they need to become part of a real product, one with history, dependencies, and complex integrations.
AWS Community Day in Košice wasn’t about shiny new features for me. It was about reality — what actually works in production and which changes make sense even for existing systems. These are three takeaways I brought back from the conference as a DevOps engineer and former backend developer at Crossuite (along with a few stickers and a notebook 🙂).
1. AI delivers the most value in clearly defined use cases
Across multiple talks, one theme kept coming up — the gap between what we can quickly build with AI today and what can actually run reliably in production.
Solutions built on LLMs, agents, or tools like Amazon Bedrock looked very impressive at first. The turning point came when the discussion shifted to real-world deployment — specifically how these solutions fit into existing systems, how their behavior is monitored, and how their outputs are controlled.
Key idea:
AI only makes sense when it’s useful, not when it’s trendy.
At Crossuite, this approach has proven itself in practice. We integrate AI into specific steps, such as data processing or working with text, where inputs and outputs are clearly defined and the impact is immediate. This is where AI becomes a natural part of the product, delivering real value.
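To make that concrete, here is a minimal sketch of what such a step can look like using Amazon Bedrock's Converse API via boto3. The model ID, region, and labels are illustrative assumptions, not details from the talk; the point is the contract: constrained input, constrained output, and a guard on the result.

```python
import boto3

# A single, clearly defined AI step: classify a free-text note into a fixed
# set of labels. Input and output are both constrained, so the result is
# easy to validate and monitor. Model ID, region, and labels are assumptions.
bedrock = boto3.client("bedrock-runtime", region_name="eu-west-1")

ALLOWED_LABELS = {"appointment", "billing", "clinical", "other"}

def classify_note(note: str) -> str:
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model
        messages=[{
            "role": "user",
            "content": [{
                "text": "Reply with exactly one word from this list: "
                        f"{', '.join(sorted(ALLOWED_LABELS))}.\n\nNote: {note}"
            }],
        }],
        inferenceConfig={"maxTokens": 10, "temperature": 0},
    )
    label = response["output"]["message"]["content"][0]["text"].strip().lower()
    # Guard the output: if the model strays outside the contract, fall back.
    return label if label in ALLOWED_LABELS else "other"
```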
2. Less code, more controlled flow
In serverless discussions, a clear shift toward simpler and more transparent architecture kept emerging. A standout moment came from Tomáš Sabol’s talk, which clearly articulated how the role of AWS Lambda is evolving.
Lambda remains a powerful tool for handling business logic, but the way we use it is changing. Its greatest value comes when it focuses on a specific operation, while orchestration is handled by other parts of the system — such as native AWS services like API Gateway, EventBridge, or Step Functions.
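As an illustration of what "one specific operation" can look like, here is a hypothetical single-purpose handler; the event shape and the validation rule are assumptions for the sketch, not code from the talk.

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Do exactly one thing: validate an uploaded object's metadata.

    No retries, no branching, no call to the next step. Sequencing and
    error handling belong to the orchestrator (e.g. Step Functions).
    """
    # Hypothetical input shape passed in by the orchestrator.
    bucket, key = event["bucket"], event["key"]

    head = s3.head_object(Bucket=bucket, Key=key)
    if head["ContentLength"] == 0:
        raise ValueError(f"Empty upload: s3://{bucket}/{key}")

    # Return only what the next state needs.
    return {"bucket": bucket, "key": key, "size": head["ContentLength"]}
```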
One recommendation stood out:
The less logic hidden inside Lambda functions, the more readable and stable the system becomes.
Instead of “gluing” services together with custom code, responsibility shifts into the architecture, where flow is explicitly defined and easier to control. This leads to less custom code and systems that are easier to debug, scale, and evolve — while remaining understandable even as complexity grows.
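As a sketch of what an explicitly defined flow looks like, here is a small Step Functions definition in Amazon States Language, written as a Python dict; the state names and function ARNs are placeholders. Ordering and retry policy are declared in the definition, not buried inside Lambda code.

```python
import json

# Hypothetical flow: validate -> process -> notify. Each state invokes one
# single-purpose Lambda; sequencing and retry policy live here in the
# architecture, not in glue code. ARNs below are placeholders.
state_machine = {
    "StartAt": "ValidateUpload",
    "States": {
        "ValidateUpload": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:validate-upload",
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}],
            "Next": "ProcessFile",
        },
        "ProcessFile": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:process-file",
            "Next": "NotifyUser",
        },
        "NotifyUser": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:notify-user",
            "End": True,
        },
    },
}

print(json.dumps(state_machine, indent=2))  # the definition you would deploy
```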

3. Karpenter shows that scaling is solved. Cost is not.
Viktor Vedmich’s talk on Karpenter stood out because it didn’t focus on Kubernetes cluster scaling itself, but on everything around it. He prepared two demos for the session.
The first one was smaller — a cluster running 10 pods. Using a visualization tool, he showed what was happening inside the cluster — Karpenter dynamically added and removed nodes and rescheduled pods between them. It was both impressive and easy to grasp.
The second demo aimed higher. When asked how many pods to scale to, someone in the audience suggested “5000.” Viktor kicked off the scripts, and hundreds of pods were supposed to start spinning up in the background. After about 15 minutes, it became clear that something wasn’t right: a small issue had stalled the process right at the start. It happens. The demo didn’t go as planned, but at Crossuite we know Karpenter works in practice.
The talk also included several practical tips on getting more out of Karpenter through configuration. By combining Spot and On-Demand instances (for example in a 50/50 split) and applying consolidation strategies, it’s possible to significantly reduce cluster operating costs.
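For reference, here is a hedged sketch of what such a setup can look like as a Karpenter NodePool, built as a Python dict and applied with the official Kubernetes client. It assumes Karpenter v1 CRDs and an existing EC2NodeClass named "default"; an exact 50/50 split would typically be tuned separately, for example with weighted NodePools, rather than expressed in a single resource.

```python
from kubernetes import client, config

# Sketch of a NodePool that allows both Spot and On-Demand capacity and
# enables consolidation. Assumes Karpenter v1 CRDs and an EC2NodeClass
# named "default" already exist in the cluster.
node_pool = {
    "apiVersion": "karpenter.sh/v1",
    "kind": "NodePool",
    "metadata": {"name": "mixed-capacity"},
    "spec": {
        "template": {
            "spec": {
                "nodeClassRef": {
                    "group": "karpenter.k8s.aws",
                    "kind": "EC2NodeClass",
                    "name": "default",
                },
                "requirements": [
                    {
                        # Let Karpenter prefer Spot when available and
                        # fall back to On-Demand capacity.
                        "key": "karpenter.sh/capacity-type",
                        "operator": "In",
                        "values": ["spot", "on-demand"],
                    },
                ],
            },
        },
        # Reclaim and repack underutilized nodes to cut idle cost.
        "disruption": {"consolidationPolicy": "WhenEmptyOrUnderutilized"},
        "limits": {"cpu": "100"},  # illustrative cap on total capacity
    },
}

config.load_kube_config()
client.CustomObjectsApi().create_cluster_custom_object(
    group="karpenter.sh", version="v1", plural="nodepools", body=node_pool
)
```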
We already run Karpenter in production at Crossuite. The talk confirmed that our foundations are solid, while also showing there’s still room to push further. Distributing workloads between Spot and On-Demand instances could bring additional cost savings without compromising application stability.
What I’m taking into practice
After the conference, I realized that many of these topics are already part of what we do. Still, it was valuable to see them clearly named and broken down in detail.
Across AI, serverless, and infrastructure, one pattern kept repeating:
The difference doesn’t come from big decisions, but from how the smaller ones are set up — and how often we revisit them over time.
Infrastructure stood out the most for me. Scaling works well today, but how efficiently a system runs between load peaks is where attention to detail really pays off.

Frequently Asked Questions about AI, Cloud, and Production Systems
Why do many AI solutions never move beyond the demo stage?
AI solutions often work well in isolation or early prototypes. The challenge comes when integrating them into real products with existing architecture, data, and operational constraints. That’s where production readiness is truly tested.
When does AI deliver the most value in a product?
AI delivers the most value in clearly defined use cases with structured inputs and expected outputs — such as data processing, automation, or text handling.
What does it mean that AI should be useful, not trendy?
It means AI should solve a real problem or improve an existing process. Its value lies in practical application, not in simply adopting the latest technology.
What is AWS Lambda used for?
AWS Lambda allows you to run code without managing servers. It’s commonly used for handling specific business logic, event-driven processing, and lightweight APIs.
Why does serverless architecture often lead to less code?
Serverless architectures rely on managed services like API Gateway, EventBridge, or Step Functions, which take over orchestration and infrastructure concerns, reducing the need for custom code.
What is Karpenter in Kubernetes?
Karpenter is a Kubernetes autoscaling tool that dynamically provisions infrastructure based on workload demands, helping optimize performance and cost.
Why isn’t scaling enough on its own?
Scaling ensures availability under load, but efficiency depends on how resources are managed outside peak times — including how quickly capacity is released and how workloads are distributed.
How can Kubernetes costs be optimized in AWS?
Costs can be reduced through proper configuration — including NodePools, Spot vs. On-Demand instance usage, and consolidation strategies.
What is the benefit of using Spot and On-Demand instances together?
Combining both allows you to lower costs with Spot instances while maintaining reliability with On-Demand capacity.
Why are conferences like AWS Community Day valuable?
They provide real-world insights, practical examples, and deeper understanding of how technologies behave in production environments.