What You’ll Experience
01
A walkthrough of real performance benchmarks of Kompact AI on commercial off-the-shelf (COTS) CPU hardware.
02
A comparison of Kompact AI's performance benchmarks against widely used contemporary inference stacks.
03
A comparison of Kompact AI's performance against widely available GPUs.
What this Experience Covers
01
Live deployments of Kompact AI across Google Cloud, Amazon AWS, Microsoft Azure, and on-premise CPU servers.
02
Demonstration of production-grade AI use cases—including natural language to SQL, long-document summarisation, conversational chat, math problem solving, and AI-powered code generation—delivered with production-quality outputs, running entirely on CPUs.
03
End-to-end deployment walkthrough: from selecting a Kompact AI image to launching an instance and sending prompts via sample client applications.
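The last step of the walkthrough above, sending prompts from a sample client application, can be sketched as follows. This is a minimal illustration only: the host, port, endpoint path, model name, and payload shape are assumptions (an OpenAI-style chat completion schema is assumed for illustration), not Kompact AI's documented API.

```python
import json
from urllib import request

# Hypothetical endpoint of a running Kompact AI instance; host, port,
# and path are illustrative assumptions, not the documented interface.
ENDPOINT = "http://localhost:8080/v1/chat/completions"

def build_payload(prompt: str, model: str = "example-model") -> dict:
    """Build an OpenAI-style chat payload (assumed shape, for illustration)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

def send_prompt(prompt: str) -> str:
    """POST the prompt to the instance and return the generated text."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = request.Request(
        ENDPOINT, data=data, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    # Response parsing mirrors the assumed OpenAI-style schema.
    return body["choices"][0]["message"]["content"]

# Example call against a running instance (requires a live server):
# print(send_prompt("Translate to SQL: total sales by region for 2024"))
```

Against a live instance, `send_prompt` would return the model's completion; the same pattern covers all the use cases listed above, from natural language to SQL through code generation.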
What this Experience Covers
01
Running intelligent RAG systems on Kompact AI runtime at scale
From enterprise documents and large datasets to fast, context-aware, low-latency responses.
02
Designing Agentic AI for real work
AI systems that retrieve, summarise, prioritise, and act autonomously across emails, calendars, and enterprise data, using production-ready architectures.
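The retrieval step at the heart of the RAG systems described above can be sketched in a few lines. This toy version scores document chunks against a query with bag-of-words cosine similarity; production systems use dense embeddings and a vector index, but the retrieve-then-generate flow is the same.

```python
import math
from collections import Counter

def vectorise(text: str) -> Counter:
    """Toy bag-of-words vector; real systems use dense embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Rank chunks by similarity to the query: the 'R' in RAG."""
    qv = vectorise(query)
    ranked = sorted(chunks, key=lambda c: cosine(qv, vectorise(c)), reverse=True)
    return ranked[:k]

chunks = [
    "Invoice payment terms are net 30 days from receipt.",
    "The cafeteria menu rotates weekly on Mondays.",
]
top = retrieve("What are the payment terms for invoices?", chunks)
# The top-ranked chunk is then passed to the model as context for generation.
```

The retrieved chunk becomes grounding context in the prompt, which is what keeps responses accurate against enterprise documents rather than the model's general knowledge alone.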
Why attend this Experience?
Gain insights into building production-ready RAG systems and intelligent AI agents that securely query enterprise knowledge and large datasets, operate autonomously, and deliver consistent performance on CPU-based infrastructure.
Experience Format
Duration
90 minutes
Format
Live walkthrough
What this Experience Covers
01
Run your own models on Kompact AI
From open-source and fine-tuned models to custom transformer architectures, integrated quickly using Kompact AI’s developer libraries.
02
From model wiring to benchmarking
See how models are integrated, executed, and evaluated with qualitative and quantitative benchmarks, all on CPU-based infrastructure.
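The quantitative side of the benchmarking mentioned above typically boils down to latency and token throughput. A minimal sketch, with a stand-in `generate` function in place of the real runtime call:

```python
import statistics
import time

def generate(prompt: str) -> list[str]:
    """Stand-in for a model's generate call; replace with the real runtime API."""
    return (prompt + " answer").split() * 8  # pretend these are output tokens

def benchmark(prompts: list[str]) -> dict:
    """Measure median per-request latency and overall token throughput."""
    latencies, tokens = [], 0
    for p in prompts:
        t0 = time.perf_counter()
        out = generate(p)
        latencies.append(time.perf_counter() - t0)
        tokens += len(out)
    total = sum(latencies)
    return {
        "p50_latency_s": statistics.median(latencies),
        "tokens_per_s": tokens / total if total else float("inf"),
    }

report = benchmark(["summarise this document", "write a SQL query"])
```

Numbers like these (alongside qualitative output review) are what make CPU-vs-GPU and stack-vs-stack comparisons concrete.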
Why attend this Experience?
Operationalise your own AI models, whether open-source, fine-tuned, or custom-built, on CPU infrastructure, without changing your architecture or relying on GPUs.
Session Details
Duration
90 minutes
Scope
Open-source models, fine-tuned models, and custom transformer architectures
Format
Live demos and hands-on walkthroughs
Code
Reference implementations licensed under Apache 2.0

