The open-source AI bet
Best-in-class open-source tools (LiteLLM, Milvus, LlamaIndex) integrated and ready. When they evolve, you benefit. No vendor lock-in, no licensing fees, no platform constraints. Bet on the ecosystem, not a single vendor.
Complete infrastructure for production AI. Deploy in your data center. Build with confidence. Keep your data in Switzerland.
The Swiss AI Hub is an open-source AI platform that you deploy and control. It's specifically designed for on-premise installation within your own Swiss data center. This means you can run the entire AI infrastructure – including language model gateways and knowledge databases – on your servers, ensuring complete control and data isolation. You can find more details in our Deployment Options.
Ensuring Swiss data sovereignty is a core principle of the Swiss AI Hub community. Because you deploy the platform, you choose where it runs – either on your own servers in Switzerland or within trusted Swiss data centers. By using local Large Language Models (LLMs), sensitive data processing stays entirely within your control, helping you meet FADP (revDSG) requirements. Read about our commitment in The Swiss Way: Privacy, Sovereignty, and Transparency.
The Swiss AI Hub offers an open-source alternative built by a community focused on user control. The platform's core infrastructure is licensed under Apache 2.0, meaning you own your deployment. This lets you avoid vendor lock-in, giving you freedom from specific ecosystems and unpredictable pricing structures common with large cloud providers. See how we compare in the Comparison Matrix.
The Swiss AI Hub community prioritizes transparency for trust. Our platform provides deep observability features. Every step an AI agent takes is visible, decisions are logged with context, and costs are tracked. Tools like Langfuse allow tracing every interaction, so you can always understand why an AI provided a certain answer, which is crucial for compliance and auditing. Explore these features under Auditing & Observability.
Yes, the Swiss AI Hub provides a complete, pre-integrated AI infrastructure stack that you deploy. It bundles essential components like authentication, monitoring, various databases (including vector databases for AI), data processing pipelines, and user interfaces right out of the box. This solves many common production AI challenges from day one. Learn about this in The 'Day 2' Advantage.
You can deploy the entire Swiss AI Hub platform in about 30 minutes using a single command. Because it is an open-source platform you install yourself, it ships with pre-built agents and interfaces that start working immediately, offering rapid time-to-value without complex cloud configuration. Get started with the Quick Start Guide.
The Swiss AI Hub platform includes built-in integrations for Microsoft Teams and Slack. This allows your employees to interact securely with AI agents, which have access to relevant company knowledge, directly within the collaboration tools they use every day, improving workflow. See details on Slack & Teams Integrations.
The Swiss AI Hub includes an integrated LLM Proxy (LiteLLM) that acts as a unified gateway to all your configured AI models. You can centrally manage model access, route requests based on policies, track costs across different providers, and even set up failover mechanisms. More info is available under Language Models.
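The gateway idea behind policy-based routing and failover can be sketched in a few lines. This is an illustration of the concept only, not LiteLLM's actual API; the model names, providers, and the `call_model` stub are all hypothetical:

```python
# Illustrative sketch of policy routing with failover, in the spirit of what
# an LLM gateway does. Model names and the call_model stub are made up.

ROUTING_POLICY = {
    # task type -> ordered list of (provider, model) to try
    "summarize": [("local", "llama-3-8b"), ("cloud", "gpt-4o-mini")],
    "code":      [("local", "codellama-13b"), ("cloud", "gpt-4o")],
}

def call_model(provider: str, model: str, prompt: str) -> str:
    """Stub for a real model call; raises to simulate an outage."""
    if provider == "local" and model == "codellama-13b":
        raise ConnectionError("local code model unavailable")
    return f"[{provider}/{model}] answer to: {prompt}"

def route(task: str, prompt: str) -> str:
    """Try each configured model in order; fail over on connection errors."""
    last_error = None
    for provider, model in ROUTING_POLICY[task]:
        try:
            return call_model(provider, model, prompt)
        except ConnectionError as exc:
            last_error = exc
    raise RuntimeError(f"all models failed for task {task!r}") from last_error

print(route("summarize", "Q3 report"))  # served by the first (local) model
print(route("code", "sort a list"))     # local model is down, fails over to cloud
```

The ordered list per task is the policy; failover is simply trying the next entry when a call raises.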
Our community built the Swiss AI Hub with transparent cost control in mind. The integrated LLM Proxy tracks token usage for every interaction, per user or agent. You can monitor AI spending in real-time dashboards and configure budgets to prevent unexpected costs. Learn more about Cost Control.
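Per-user token accounting is simple bookkeeping, sketched below. The prices and user IDs are invented for the example; the real proxy records this per request automatically:

```python
# Illustrative token-cost accounting of the kind the integrated LLM proxy
# performs. Per-1k-token prices are hypothetical placeholder rates.
from collections import defaultdict

PRICE_PER_1K_TOKENS = {"gpt-4o-mini": 0.60, "llama-3-8b": 0.0}

usage = defaultdict(float)  # user -> accumulated cost
BUDGETS = {"alice": 5.00}   # per-user spending caps

def record(user: str, model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Charge a request to a user and return its cost."""
    cost = (prompt_tokens + completion_tokens) / 1000 * PRICE_PER_1K_TOKENS[model]
    usage[user] += cost
    return cost

def within_budget(user: str) -> bool:
    return usage[user] <= BUDGETS.get(user, float("inf"))

record("alice", "gpt-4o-mini", 900, 100)  # 1000 tokens at 0.60 / 1k
record("alice", "llama-3-8b", 5000, 500)  # self-hosted model: zero marginal cost
print(f"alice spent {usage['alice']:.2f}")  # prints "alice spent 0.60"
```

Note how the self-hosted model contributes nothing to spend, which is exactly what the dashboards make visible.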
The Swiss AI Hub is ideal for strict data-isolation requirements. As an open-source platform you deploy, you can install it fully on-premise and use local, self-hosted LLMs. This ensures that absolutely no data (prompts, responses, documents) ever leaves your secure network perimeter. Review our comprehensive Security features.
The Swiss AI Hub provides the necessary production-ready infrastructure that development frameworks often lack. While LangChain helps build the AI logic, our platform delivers the essential surrounding components: robust deployment mechanisms, enterprise authentication, scaling, monitoring, and user interfaces needed for reliable enterprise use. See Our Solution.
The Swiss AI Hub includes a secure Retrieval-Augmented Generation (RAG) system. You configure automated Data Pipelines to ingest documents from your sources (like SharePoint). These pipelines securely process the documents and index them in a vector database that you own and control, allowing agents to access company knowledge safely.
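The ingest-then-retrieve flow can be sketched end to end. The toy character-frequency "embedding" and the in-memory list below stand in for a real embedding model and a vector database like Milvus; only the shape of the flow is the point:

```python
# Sketch of the RAG ingest/retrieve flow. The "embedding" is a normalized
# character-frequency vector, a deliberate toy; a real pipeline uses an
# embedding model and a vector store.
import math

def embed(text: str) -> list[float]:
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

index: list[tuple[str, list[float]]] = []  # stand-in for the vector store

def ingest(doc: str) -> None:
    """Embed a document chunk and add it to the index."""
    index.append((doc, embed(doc)))

def retrieve(query: str) -> str:
    """Return the indexed chunk most similar to the query."""
    qv = embed(query)
    return max(index, key=lambda item: sum(a * b for a, b in zip(item[1], qv)))[0]

ingest("Expense reports are due on the 5th of each month.")
ingest("The cafeteria opens at 11:30.")
print(retrieve("When are expense reports due?"))
```

The agent then answers from the retrieved chunk rather than from the model's parametric memory, which is what keeps responses grounded in your documents.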
The Swiss AI Hub can serve as your central, unified AI platform. It provides common infrastructure that all teams can build upon, ensures consistent governance and security policies, offers unified monitoring, and includes an OpenAI-Compatible API allowing integration with many existing tools, helping to reduce fragmentation.
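Because the API follows the OpenAI chat-completions convention, existing tools typically only need a different base URL. The host below is a placeholder for your own deployment, and the request is built but not sent:

```python
# Building a request against an OpenAI-compatible endpoint. The base URL and
# API key are placeholders; the payload shape is the standard chat format.
import json
import urllib.request

BASE_URL = "https://ai-hub.example.internal/v1"  # placeholder for your deployment

def build_chat_request(model: str, user_message: str) -> urllib.request.Request:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer <your-api-key>"},
        method="POST",
    )

req = build_chat_request("llama-3-8b", "Summarize our travel policy.")
print(req.full_url)  # same /v1/chat/completions path any OpenAI client expects
```

Any client library that lets you override the base URL can therefore point at the platform unchanged.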
The Swiss AI Hub utilizes Data Pipelines, built using the robust orchestrator Dagster. These pipelines automate the entire workflow: connecting to your data sources, parsing various file formats intelligently, creating semantic chunks, generating vector embeddings, and indexing them into your vector store (like Milvus). Find details in the Pipelines section.
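Each pipeline stage (parse, chunk, embed, index) is a small testable step; in the platform these are orchestrated as Dagster jobs. The chunker below is a plain illustration of the chunking stage, not the platform's actual code:

```python
# A minimal sentence-aware chunker, illustrating the "semantic chunks" stage
# of the pipeline. The 80-character cap is an arbitrary example value.

def chunk(text: str, max_chars: int = 80) -> list[str]:
    """Split text into chunks on sentence boundaries, capped at max_chars."""
    sentences = [s.strip() + "." for s in text.split(".") if s.strip()]
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

doc = ("Employees may work remotely two days per week. Approval comes from the "
       "line manager. Equipment requests go through IT.")
for c in chunk(doc):
    print(c)
```

Keeping sentences intact within a chunk is what makes the resulting embeddings semantically coherent, which in turn improves retrieval quality downstream.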
Yes. When you deploy the Swiss AI Hub on-premise and configure it to use only self-hosted Large Language Models (LLMs), the entire platform can operate without any external internet connection. This makes it suitable for air-gapped environments with the highest security requirements. See Deployment Options.
Trust is paramount. Swiss AI Hub Agents are designed to follow explicit, defined workflows. They primarily use Retrieval-Augmented Generation (RAG), meaning their answers are based on information retrieved from your verified company documents. Agents also cite their sources, and built-in "guardrails" check if the retrieved information is sufficient, ensuring reliable, fact-based responses. Learn more about Agents.
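The guardrail and citation behaviour can be sketched as a sufficiency check on retrieval scores. The threshold and the example scores are illustrative, not the platform's actual values:

```python
# Sketch of a retrieval guardrail: answer only when the retrieved evidence is
# strong enough, and always carry the source along. Threshold is illustrative.

SUFFICIENCY_THRESHOLD = 0.75

def answer_with_guardrail(hits: list[tuple[str, str, float]]) -> str:
    """hits: (passage, source, similarity score) tuples from the vector store."""
    strong = [h for h in hits if h[2] >= SUFFICIENCY_THRESHOLD]
    if not strong:
        # Guardrail: refuse rather than hallucinate an unsupported answer.
        return "I don't have enough verified information to answer that."
    passage, source, _ = max(strong, key=lambda h: h[2])
    return f"{passage} [source: {source}]"

print(answer_with_guardrail(
    [("Remote work is capped at two days.", "hr-policy.pdf", 0.91)]))
print(answer_with_guardrail(
    [("The lobby plant is a ficus.", "newsletter.pdf", 0.31)]))
```

The second call shows the key property: when retrieval is weak, the agent declines instead of inventing an answer, and every successful answer names its source.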
Yes, our platform supports Human-in-the-Loop (HITL) and Bot-in-the-Loop (BITL) workflows. An AI agent can be designed to pause its process at a specific step, send a request for input or approval to a designated human expert (for example, via a Slack message), and then seamlessly resume its work once the human responds. Explore Agent Fundamentals which enable these patterns.
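The pause-and-resume pattern reduces to a workflow that blocks on a human decision. In the platform the approval request would travel via Slack or Teams; here a simple callback stands in for that round trip:

```python
# Minimal human-in-the-loop pattern: generate, pause for approval, resume.
# The ask_human callback stands in for a Slack/Teams approval round trip.

def run_workflow(draft: str, ask_human) -> str:
    # Step 1: the agent produces a draft (stubbed here).
    proposal = f"DRAFT: {draft}"
    # Step 2: pause and hand the proposal to a human expert.
    approved = ask_human(proposal)
    # Step 3: resume only after the human responds.
    return proposal.replace("DRAFT", "SENT") if approved else "ABORTED"

print(run_workflow("Refund CHF 200 to customer 4711", lambda p: True))
print(run_workflow("Delete all archived invoices", lambda p: False))
```

In Bot-in-the-Loop variants the same hook is answered by another agent instead of a person; the pause/resume mechanics are identical.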
The platform offers flexible integration options. AI agents can make direct API calls to external systems; external systems can trigger agents via the platform's Agent Interaction API; automated Data Pipelines can sync knowledge from sources like SharePoint; and standard protocols are supported for custom connections. See External Integrations.
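One of those patterns, an external system triggering an agent run over HTTP, can be sketched as follows. The endpoint path and payload fields are assumptions for illustration, not the documented Agent Interaction API, and the request is built but not sent:

```python
# Hypothetical sketch: a ticketing tool triggering an agent run via HTTP.
# Host, path, and payload fields are invented for illustration only.
import json
import urllib.request

def trigger_agent(agent_id: str, event: dict) -> urllib.request.Request:
    """Build a POST request that would start a run of the given agent."""
    return urllib.request.Request(
        f"https://ai-hub.example.internal/agents/{agent_id}/runs",  # placeholder
        data=json.dumps({"input": event}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = trigger_agent("ticket-triage", {"ticket_id": 4711, "subject": "VPN down"})
print(req.full_url)
```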
Our ecosystem model is built on collaboration. The core platform is open-source, allowing Swiss organizations to pool efforts in building and improving the fundamental AI infrastructure. Everyone benefits from shared advancements, letting individual organizations focus resources on creating unique AI applications specific to their needs, strengthening Switzerland's overall AI capabilities. Read about The Ecosystem Model.
The Swiss AI Hub community designed the platform for simplified deployment and management. The installation uses a single command, and critical infrastructure challenges like authentication, monitoring, and scaling configuration are solved out-of-the-box. This significantly reduces the operational complexity compared to building AI infrastructure from scratch or managing intricate cloud vendor services. Check the Quick Start.