Build With Us
Pangea Chat pairs open-source instant messaging and a federated architecture with extensible AI tools. Whether you want to self-host, white-label, or contribute, there's a seat at the table.
Self-Host Your Server
Run the platform on your own infrastructure. Pangea Chat is built on Matrix, a battle-tested open protocol, using Synapse, its reference homeserver. Our team provides deployment support and ongoing maintenance assistance.
- Dedicated deployment support from our engineering team
- Full admin dashboard for user and room management
- Run a private instance or connect via Matrix federation
- Custom development services available
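Deployment details vary by environment, but as a rough sketch of what self-hosting involves, here is a minimal Docker Compose file for the official Synapse image. The server name, volume path, and port are placeholders, and a production deployment adds TLS termination, a Postgres database, and a reverse proxy.

```yaml
# Minimal sketch only. Generate an initial config into ./synapse-data first:
#   docker run -it --rm -v ./synapse-data:/data \
#     -e SYNAPSE_SERVER_NAME=chat.example.edu \
#     -e SYNAPSE_REPORT_STATS=no \
#     matrixdotorg/synapse:latest generate
services:
  synapse:
    image: matrixdotorg/synapse:latest
    restart: unless-stopped
    volumes:
      - ./synapse-data:/data   # homeserver.yaml, signing keys, media store
    ports:
      - "8008:8008"            # client/federation HTTP; terminate TLS upstream
```

Clients then connect at your domain once DNS and TLS are in place.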


White-Label Your Platform
Deploy Pangea Chat's technology under your institution's name. We work with institutional partners to deliver branded deployments tailored to your needs.
- Custom branding, domain, and app store listing
- All Pangea AI tools included
- Students see your institution's identity
- Ongoing updates and support from our team
Make Your Own AI
For institutions with specific pedagogical needs: heritage language programs, dialect-specific curricula, or custom feedback styles.
- Fine-tune models for your languages and curricula
- Create custom conversation activities
- Adjust AI feedback to match your teaching approach
- Integrate your own course materials
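Our production tutoring models are proprietary, but the general shape of a curriculum fine-tune looks like the sketch below, using the open-source Hugging Face transformers and datasets libraries. The base model and the curriculum.jsonl file are placeholder assumptions, not part of Pangea's actual pipeline.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Placeholder assumptions: a small open base model and a JSONL file of
# curriculum text, e.g. {"text": "Kumusta! Paano ang biyahe mo?"} per line.
BASE_MODEL = "distilgpt2"
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token   # GPT-2 family has no pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

dataset = load_dataset("json", data_files="curriculum.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-tutor",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # Causal LM objective: labels are the shifted input tokens, not masks.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice you would swap in your own base model, evaluation set, and feedback-style data; the point is that curriculum-specific tuning is a standard workflow, not bespoke research.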

Open Core
Pangea Chat is open core. The messaging client (Flutter/Dart, AGPL-3.0), the instant messaging server (Synapse, AGPL-3.0), and the Matrix communication protocol are all open source. Our AI language-learning tools are proprietary IP, available through licensing for trusted partners.
Open Source
Client app and server source code on GitHub. Inspect data flows, audit privacy, verify claims, or contribute.
Open Protocol
Built on Matrix, an open communication standard. Portable data, federation, no proprietary lock-in for messaging.
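To make the no-lock-in point concrete: any Matrix client library can talk to any homeserver, self-hosted or federated. Below is a minimal sketch using the community matrix-nio Python library; the homeserver URL, credentials, and room ID are placeholders.

```python
import asyncio
from nio import AsyncClient  # pip install matrix-nio

async def main():
    # Placeholders: point at any Matrix homeserver, self-hosted or federated.
    client = AsyncClient("https://chat.example.edu", "@student:chat.example.edu")
    await client.login("correct-horse-battery-staple")
    await client.room_send(
        room_id="!lessonroom:chat.example.edu",
        message_type="m.room.message",
        content={"msgtype": "m.text", "body": "Hola, ¿cómo estás?"},
    )
    await client.close()

asyncio.run(main())
```

The same account and rooms stay reachable from Element or any other Matrix client, which is what portability and federation mean in practice.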
Proprietary AI
Our AI writing tools, grammar feedback, and conversation activities are developed in-house. Licensing available for institutional partners.
Let's build something together
Whether you're exploring self-hosting, planning a white-label deployment, or just curious about the architecture, we'd love to hear from you.