Academic and Research Institutions

AI for Academic Innovation

Freedom to Experiment, Build, and Scale AI on CPUs.

Research labs, universities, and innovators can develop and deploy open-source LLMs without infrastructure constraints or loss of output quality.

No GPU Dependency

Run large language models efficiently on CPUs.

Open Ecosystem

Integrate with open-source tools, frameworks, and datasets.

Accessible Research Infrastructure

AI runtime that adapts to your readily available compute.
CPU-Optimized Inference
Consistent performance on server-grade CPUs.

Kompact AI supports 600+ models across text, speech, vision, and multimodal domains.

View All Models 

Optimised for Your Own Models

Run proprietary or fine-tuned LLMs seamlessly on CPUs with Kompact AI.
We collaborate with universities to optimise custom and specialised models for high-performance CPU execution.

Shared Deployments Supported: Multiple Departments, One Runtime

Kompact AI is free for academic and research use.

Lower energy footprint, higher impact.
Efficient Compute – Reduce carbon cost of research computing.

Getting Started is Simple.

Get your model from Hugging Face. We’ll provide the Kompact AI runtime, which can be deployed on campus infrastructure or in the cloud. Run the model on Kompact AI, and you’re good to go.

Frequently Asked Questions

What does Kompact AI offer for universities and research labs?

With the Kompact AI runtime, universities and research labs can run any open-source LLM (up to 50B parameters) entirely on CPUs, with no loss in output quality and GPU-equivalent throughput. Kompact AI removes the infrastructure barriers that limit AI research.

How is Kompact AI different from using cloud-based AI services like Google Vertex or AWS Bedrock?

Kompact AI is not a managed or hosting service. It is a runtime that enables the execution of open-source LLMs on CPUs with no degradation in output speed and quality. Kompact AI lets researchers and students run and test AI workloads locally or on their preferred compute, without any dependency on GPUs.

A Kompact AI deployment includes:

  • The runtime that executes the models.
  • A REST-based server for serving model inferences remotely.
  • Observability tooling to track model and system performance.
  • Client-side SDKs in Go, Python, Java, .NET, and JavaScript, all OpenAI-compatible, for writing downstream applications that use Kompact AI models.

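Because the server is OpenAI-compatible, any OpenAI-style HTTP client can talk to it. Below is a minimal stdlib-only Python sketch of a chat-completion call; the host, port, and model name are illustrative assumptions, not documented Kompact AI values.

```python
import json
import urllib.request

# Assumed local endpoint of a Kompact AI REST server; adjust to your deployment.
KAI_BASE_URL = "http://localhost:8080/v1"
MODEL = "llama-3-8b-instruct"  # hypothetical model identifier

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completion POST request."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return urllib.request.Request(
        f"{KAI_BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def ask(prompt: str) -> str:
    """Send the prompt to the server and return the assistant's reply text."""
    with urllib.request.urlopen(build_chat_request(prompt)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Requires a running server at KAI_BASE_URL.
    print(ask("Summarise the transformer attention mechanism in one sentence."))
```

In practice you would use one of the official client SDKs instead; this sketch only shows the wire format a downstream application relies on.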
Which models are available for research use?

Kompact AI supports a wide range of open-source models across text, speech, vision, and multimodal domains.
Refer to our model catalogue for the complete list: https://www.ziroh.com/model-listing

Is there a free or academic license version available?

Yes. Kompact AI is free for academic and research use.

How do I deploy and test models on Kompact AI?

Deployment is simple. You can provision Kompact AI either in your university data centre or in your lab. Models can be obtained from Hugging Face.

Does it support running custom proprietary models?

Yes. We collaborate with universities and research labs to optimise proprietary or fine-tuned LLMs for CPU execution.

Does it support running fine-tuned models?

Yes. Kompact AI can run any fine-tuned model.

What are the typical system requirements for running LLMs on CPUs via Kompact AI?

Kompact AI runs on server-grade CPUs. We currently support Intel, AMD, ARM, and Ampere processors.

How does performance compare to running the same models on GPUs?

Kompact AI delivers GPU-equivalent performance, with lower latency, higher throughput, and predictable costs.

What are the pricing options for academic or research users?

Kompact AI is free for academic and research use.

Can multiple departments or researchers share one deployment?

Yes. If multiple teams use the same model on similar hardware, they can efficiently share a single Kompact AI deployment.

Do you support collaborations with universities for joint research?

Yes. We actively collaborate with universities for joint research programs.

Can we publish papers or research based on experiments done with Kompact AI?

Yes. You can publish papers and journal articles on AI applications built using Kompact AI. For citation, please use the following BibTeX entry.

How can we partner to build domain-specific AI models for academic use cases?

We believe domain-specific models will define the next phase of AI. Kompact AI will support fine-tuning of LLMs on CPUs by Q2 2026, enabling efficient, specialised model development for academia.