At KORA Benchmark, our culture centers on collaboration, openness, and learning from real-world use. We are building an independent, open-source, nonprofit benchmark that evaluates how AI models behave when interacting with children and teens. Everything behind the scores is public: a leaderboard, model scorecards, the full conversations and evaluations, and open-source code so anyone can run, audit, and improve the benchmark. We collaborate with researchers, policymakers, and families to advance transparency, safety, and developmentally appropriate AI.
Led by Mathilde Collin and Quentin Calvez, KORA is a nonprofit initiative built in partnership with leading academic research labs and child safety experts. The team emphasizes measurement and transparency, publishing a public leaderboard, open data, and open-source code so researchers, parents, policymakers, and AI developers can audit, reproduce, and contribute. The project is designed to help guide safer, developmentally appropriate AI for children and teens, offering up-to-date results for frontier models alongside historical trends.