Academic and Research Institutions

AI for Academic Innovation: Affordable and Empowering

Freedom to Build and Experiment

Empowering research labs, universities, and innovators to develop and deploy AI without infrastructure constraints.

No GPU Dependency

Run large language models efficiently on CPUs.

Open Ecosystem

Integrate with open-source tools, frameworks, and datasets.

Innovation Without Limits

Build AI applications for diverse disciplines and use cases.

Accessible Research Infrastructure

AI that adapts to your available compute.
CPU-Optimized Inference
Consistent performance on standard academic infrastructure.
Cost-Effective Deployments
Enable more projects with the same budget.

Sustainable AI Research

Lower energy footprint, higher impact.
Efficient Compute
Reduce carbon cost of research computing.
Inclusive AI Access
Enable participation from resource-limited institutions.

Frequently Asked Questions

What does Kompact AI offer for universities and research labs?

With the Kompact AI runtime, universities and research labs can run any open-source LLM (up to 50B parameters) entirely on CPUs, with no loss in output quality and with GPU-equivalent throughput. Kompact AI removes the infrastructure barriers that limit AI research.

How is Kompact AI different from using cloud-based AI services like Google Vertex AI or Amazon Bedrock?

Kompact AI is not a managed or hosted service. It is a runtime that executes open-source LLMs on CPUs with no degradation in output speed or quality. It lets researchers and students run and test AI workloads locally or on their preferred compute, without any dependency on GPUs. A Kompact AI deployment includes:

  • The runtime to execute the models.
  • A remote REST-based server for serving model inferences.
  • Observability tooling to track model and system performance.
  • OpenAI-compatible client SDKs in Go, Python, Java, .NET, and JavaScript for writing downstream applications that use Kompact AI models (see the sketch below).
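
Because the SDKs are OpenAI-compatible, any OpenAI-style client can talk to the Kompact AI REST server. Below is a minimal Python sketch; the endpoint URL, API key, and model name are illustrative placeholders, not confirmed Kompact AI defaults.

    # Minimal sketch: query a locally deployed Kompact AI server through
    # its OpenAI-compatible REST interface. base_url, api_key, and the
    # model name are placeholders for your own deployment's values.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8080/v1",  # assumed local endpoint
        api_key="unused",                     # local servers often ignore the key
    )

    response = client.chat.completions.create(
        model="llama-3.1-8b-instruct",  # placeholder: a model from your deployment
        messages=[{"role": "user", "content": "Summarise the attention mechanism."}],
    )
    print(response.choices[0].message.content)

One practical consequence of this compatibility is that existing OpenAI-based research scripts and course material can be pointed at a local deployment by changing only the base URL and model name.
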
Which models are available for research use?

Kompact AI supports a wide range of open-source models across text, speech, vision, and multimodal domains.
Refer to our model catalogue for the complete list: https://www.ziroh.com/model-listing

Is there a free or academic license version available?

Yes. Kompact AI is free to use.

How do I deploy and test models on Kompact AI?

Deployment is simple. You can provision Kompact AI either in your university data centre or in your lab, and models can be obtained from Hugging Face.
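
As a hedged illustration of the model-fetching step, the huggingface_hub Python package can pull a model snapshot to local disk. The repo ID and target directory below are placeholders; registering the downloaded weights with Kompact AI follows your deployment's own instructions.

    # Sketch of fetching open-source model weights from Hugging Face.
    # repo_id and local_dir are placeholders: pick a model from the
    # Kompact AI catalogue and a path your deployment can read.
    from huggingface_hub import snapshot_download

    snapshot_download(
        repo_id="mistralai/Mistral-7B-Instruct-v0.3",  # placeholder model
        local_dir="/srv/kompact/models/mistral-7b",    # placeholder path
    )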

Does it support running custom proprietary models?

Yes. We collaborate with universities and research labs to optimise proprietary or fine-tuned LLMs for CPU execution.

Does it support running fine-tuned models?

Yes. Kompact AI can run any fine-tuned model.

What are the typical system requirements for running LLMs on CPUs via Kompact AI?

Kompact AI runs on server-grade CPUs. We currently support Intel, AMD, ARM, and Ampere processors.
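
To check what a given host offers before provisioning, a quick generic Python snippet (standard library only, not a Kompact AI utility) can report the CPU architecture and logical core count.

    # Generic host check: report CPU architecture and logical core count.
    # Uses only the Python standard library; not a Kompact AI tool.
    import os
    import platform

    print("Architecture:", platform.machine())  # e.g. x86_64, aarch64
    print("Logical CPUs:", os.cpu_count())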

How does performance compare to running the same models on GPUs?

Kompact AI delivers GPU-equivalent performance: comparable output quality with lower latency, higher throughput, and more predictable costs.

What are the pricing options for academic or research users?

Kompact AI is free to use.

Can multiple departments or researchers share one deployment?

Yes. If multiple teams use the same model on similar hardware, they can efficiently share a single Kompact AI deployment.

Do you support collaborations with universities for joint research?

Yes. We actively collaborate with universities for joint research programs.

Can we publish papers or research based on experiments done with Kompact AI?

Yes. You can publish papers and journal articles about AI applications built using Kompact AI. For citation, please use the following BibTeX entry.

How can we partner to build domain-specific AI models for academic use cases?

We believe domain-specific models will define the next phase of AI. Kompact AI will support fine-tuning of LLMs on CPUs by Q2 2026, enabling efficient, specialised model development for academia.