Content Design
We build tools and explainers that make language models clearer, safer, and more useful for everyday, non-technical users.
Make AI less of a black box
AI is moving fast, but most people still meet it as a chat box that “gives answers.” That’s helpful—until it isn’t. When the process is unclear, it’s hard to judge what to trust, what to question, and how to use AI responsibly.
We’re a small start-up with a simple focus: make AI less of a black box. Not by dumbing it down, but by designing better ways to interact with it—so more people can benefit, not just experts.
Clear mental models
Practical explanations of what language models do (and don’t do).
Better interaction
Workflows that help people ask, check, and refine—not just “prompt and hope”.
Real outcomes
Prototypes and applications that turn good practice into something usable.
Real-time AI applications
Our prototypes explore how language models can support reflection, better judgment, and richer experiences in everyday life.
Reflection-first chatbots
Most chatbots jump straight to answers. Ours don’t. We design conversational systems that slow the interaction down—asking follow-up questions to help you clarify what you mean.
"Less instant output. More insight."
Social AI for gatherings
Games built around the people in the room. We're prototyping personalized quizzes and challenges based on the actual participants: their inside jokes and shared experiences.
"More laughter, less generic content."
From article to understanding
Paste a source and get a structured explanation. Not just a summary, but a map of “who affects what” and why it matters, without flattening complexity.
"Understanding as a starting point."
Odinette
A reflection partner designed to take meaning seriously.
Odinette treats what you write not as a command to execute, but as a meaningful act. Instead of rushing to a solution, Odinette responds with a reflection-first dialogue that can refine—or even change—your starting point.
- Helps users clarify what they mean before acting
- Surfaces assumptions through careful questions
- Supports better judgment without pretending to “know best”
"What do you actually mean by that?"
Why this matters
AI is becoming part of how people learn, decide, and form opinions. When a system can produce confident-sounding text in seconds, the real challenge isn’t speed—it’s judgment: what to trust, what to question, and how to stay accountable.
If only specialists can use these tools well, we get a new kind of inequality. And if AI outputs are treated as “knowledge” without context, public conversation gets noisier and more fragile.
We want everyday users to have:
- A realistic picture of what language models are doing
- Simple habits for checking and refining outputs
- Tools that support thoughtful use instead of autopilot
Better AI isn’t just better answers. It’s better ways for people to make sense of things—together.
Frequently Asked Questions
What do you mean by 'reflection-first'?
We design AI conversations that don’t rush to conclusions. Instead, they ask clarifying questions, surface assumptions, and help users explore alternatives—so the user stays in control of the final judgment.
Are you trying to make AI 'more human'?
No. We’re not building artificial people. We’re building better ways for humans to work with language technology—more clearly, more responsibly, and with fewer misunderstandings.
Can we trust language models?
They can be useful, but they’re not automatically reliable. That’s why our prototypes focus on transparency, verification habits, and interaction patterns that make uncertainty visible.
Who is this for?
Curious, non-technical users—and organizations that serve them (education, public institutions, media, NGOs).
How can we get involved?
If you’d like to test a prototype, run a pilot, or collaborate on a project, reach out. We’re currently prototyping and actively looking for real-world feedback.
Ready to collaborate?
If you’d like to collaborate, fund a pilot, or explore a partnership, we’d love to hear from you.
Get in Touch