
More than a hundred students from across Europe gathered this spring to tackle the complex task of governing AI in critical domains like education and healthcare. Over four online lectures and two in-person workshops, they translated academic concepts into actionable governance frameworks - bridging theory and practice to inform real-world policy decisions.
The question that started it all
How do you govern artificial intelligence (AI) in critical domains such as education and healthcare? To answer this question, a student-led initiative under the umbrella of CIVICA universities launched the AI Governance Challenge 2025. Over four months, participants navigated the intersection of technology, ethics, and policy through four online lectures and two intensive workshops. Co-funded by CIVICA and the European Union, the initiative demonstrated that students can tackle real problems and deliver real solutions.
Building the foundation: from theory to practice
The journey began with four virtual sessions. On 22 February, Abhivardhan (Chairperson, Indian Society of Artificial Intelligence and Law) and Bogdan Grigorescu (Senior Technical Lead, Direct Line Group) presented “AI Ethics and Governance: Exploring Frameworks for Ethical AI Development and Deployment,” examining the gap between AI’s promises and its current realities.
On 5 March, Brian K S. (Associate Professor, LSE Department of Education) and Ayça Atabey (Researcher, University of Edinburgh and Digital Futures for Children, LSE) led “AI Governance in Education,” unpacking challenges in adaptive learning systems, automated grading tools, and the regulatory considerations needed to ensure transparency and fairness.
On 8 March, Anukriti Chaudhari and Mason Kyi (Co-Founders of FosterHealth.ai) took on “AI Governance in Health,” examining use cases such as AI-assisted diagnostics and patient data management.
The series concluded on 22 March with Alexandra Chesterfield (Behavioural scientist, author and founder) leading “Applying Lessons from Behavioural Risk to AI,” revealing how cognitive biases - like confirmation bias and automation bias - influence both AI developers and end users. Throughout these sessions, attendees engaged with policymakers, legal scholars, and industry professionals, debating technical, legal, and ethical facets of AI governance.
Where ideas meet reality
The London workshop
By April, the real work began. On 4 April, participants convened at the London School of Economics for the first workshop. Holly Marquez (Senior Policy Advisor, UK Cabinet Office) opened with her Hidden Risks Framework, a practical tool for identifying subtle dangers in human-AI interactions, such as misplaced trust in algorithmic recommendations and gaps in accountability when automated systems fail. Jeni Miles (former Google Apps Consultant), serving as on-site mentor, challenged teams to refine their governance strategies, drawing out real-world trade-offs in AI governance.
Participants formed interdisciplinary teams representing stakeholders like a government education agency, an EdTech startup (ClassRanker), the Human Rights Commission, and the Information Commissioner’s Office. Over several hours, each group assessed a hypothetical AI-driven learning platform’s data flows, decision points, and potential impacts on student privacy and learning outcomes.
The winners
The winning education team of Aleksandra Jaremko (MSc Education Policy, LSE), Emi Siraj (MSc Public Policy, LSE), Sakshi Nair (MSc Data Science for Public Policy, LSE), and Tanaya Kulkarni (MSc Education, LSE) presented “Making Discharge Summaries Safer.” Originally designed to standardise patient handoffs, their framework was adapted for educational contexts. By proposing standardised templates, clinician training modules on clear documentation, and feedback loops, the team showed how data standardisation and human-centred training can reduce errors and better prepare students for clinical rotations. Judges praised the proposal’s combination of behavioural, technical, and regulatory safeguards, noting its potential to address automation bias, depersonalisation of care, and oversight gaps. Presentations were judged by Martin Dinnage (Innovation and Design Strategist), Shyma Jundi (Behavioural Scientist, NHS), and Anukriti Chaudhari (Healthtech Founder).
The Berlin workshop
On 23 April in Berlin, participants engaged in a hands-on policy simulation that applied governance principles to real-world AI use cases in mental health. Case studies highlighted both successes and challenges in education and healthcare through a macro-level lens, incorporating a gender perspective to reveal systemic biases. Siddhi Pal (Senior Policy Researcher, Interface) opened with a session on “Gender Gap in Tech-Driven Migration – AI Talent Pool,” which addressed emerging risks and biases in AI systems. Throughout the day, students collaborated in diverse stakeholder teams—representing agencies, startups, regulators, and advocacy groups—to draft governance frameworks that balance innovation, privacy, and equitable access. By working through realistic scenarios, they identified governance approaches aimed at informing effective and ethical policy design.
The workshop winners
Team Red Lemon, comprising Jasmin Mehnert and Oliver Pollex (both MSc Data Science for Public Policy students at the Hertie School), proposed the project “Balancing Innovation and Privacy: Navigating AI Governance Challenges in Germany’s HealthTech Sector.” Their decentralised data governance framework recommended that hospitals and research centres contribute anonymised data to a shared platform while maintaining local control. The framework introduced a tiered consent mechanism, giving patients granular choices over how their data is used, and incorporated a monitoring system to detect unauthorised access. Judges commended the proposal for aligning with the European Health Data Space’s goals of data portability and interoperability while safeguarding patient information. By the end of the Berlin workshop, Team Red Lemon had secured first prize in the healthcare track.
People behind the success
This initiative was spearheaded by Padma Bareddy and Shruti Kakade (Hertie School), together with Anika Ghei and Anushka Jain (LSE). Drawing on a shared commitment to ethical AI and cross-disciplinary collaboration, the team conceptualised and led the challenge from the ground up. From crafting the grant-winning problem statement to designing governance scenarios that pushed participants to think critically about real-world implications, their work ensured a meaningful participant experience.
The Hertie School Data Science Lab supported both events. Judges Abhivardhan and Satya Sandeep Kalepu provided feedback throughout both workshops, while Professor Lynn Kaack (Chair, AI & Climate Change, Hertie School) guided the strategic framing of governance questions. Dania Abu-Sharkh and Sarah Lawton-Görlach from CIVICA coordinated across campuses.
Beyond the challenge: building tomorrow’s governance leaders
The AI Governance Challenge 2025 demonstrated that interdisciplinary, cross-campus collaboration can yield actionable policy proposals. Participants gained hands-on experience in risk assessment, regulatory analysis, and stakeholder engagement - skills vital in an AI-driven world. As the final report takes shape, frameworks developed by winning teams will be published for policymakers, academic institutions, and industry stakeholders. This initiative has established a foundation for ongoing research and dialogue around AI governance, equipping the next generation of social scientists and data practitioners with the tools they need to navigate our changing technological landscape.
Participants aren’t just preparing for careers in policy and technology - they are actively shaping the frameworks that will govern AI’s role in society. In an age where technology often outpaces regulation, initiatives like this offer hope that thoughtful governance can keep pace with innovation. The challenge may be over, but the real work of AI governance has just begun.
-
Shruti Kakade, MDS class of 2025
-
Aliya Boranbayeva, Associate, Communications and Events