What happens when you tell people that an Instagram post was made by AI? Can changing how we vote actually reduce political polarization? And why does asking ChatGPT the same question 100 times use wildly different amounts of energy each time?
These aren't just academic curiosities—they're the urgent questions driving this year's winners of the Hertie School's Data Science Thesis Awards. In a world where AI shapes our daily information diet, democracy feels increasingly fragile, and climate concerns mount, three master's students dove deep into the data to find answers that matter. After careful review by our faculty and instructors, we are proud to recognize the following exceptional works:
- Fabian Pawelczyk — Implied Authenticity Effect? The Impact of Explicit Labels on AI-Generated Content: A Survey Experiment
- Corbin Tyler Cerny — Plurality to Preference: Simulating the Impact of Ranked-Choice Voting on Ohio Senate Polarization
- Henry Baker — Inducing Variation in LLM Inference-time Energy Efficiency for Fixed Computational Workloads
The trust experiment that revealed how our minds handle AI labels
Picture this: you're scrolling through Instagram, and you see a political post. There's a small label that says "AI-generated." Do you trust it less? And here's the kicker: does seeing that label make you more trusting of posts without labels?
Fabian Pawelczyk suspected our brains might play tricks on us when it comes to AI-generated content (AIGC), so he designed a survey experiment with nearly 900 Germans to find out.
"People significantly trusted labeled AI content less," Fabian explains from his new PhD programme at the European University Institute. "Users exposed to labeled content report higher “trust” in unlabeled content. But one big question remains: what drives this spillover effect? Are people getting better at spotting AIGC, or are they becoming overconfident and blindly trusting content without labels?"
This isn't just a psychology experiment. With China implementing AI labeling requirements in September 2025 and similar regulations brewing worldwide, understanding how these labels actually affect us is crucial for getting disclosure policy right.
Fabian's journey to this discovery started with a simple walk around Peking University's campus with Prof. Pu Yan. "We were brainstorming current AI governance issues, and this question just emerged," he recalls. What began as casual academic curiosity became a rigorous pre-registered study with international funding and produced findings that could reshape how we think about AI disclosure policies.
An approach to elections that could help fix American polarization
Meanwhile, Corbin Tyler Cerny was exploring how the structure of elections shapes the kind of leaders we elect.
Growing up in Cleveland, Ohio, he became interested in local and state politics early on, noticing how political divisions were playing out not just in Washington but also much closer to home. While most people think first of federal politics, Corbin argues that it is often state and local governments that most directly affect people’s daily lives.
This perspective informed his master’s thesis at the Hertie School, where he studied the effects of voting systems on ideological polarization. Using Ohio’s state legislature as a case study, he simulated how ranked-choice voting (also known as preferential voting) might change the ideological makeup of representative assemblies.
Ranked-choice voting allows voters to rank candidates in order of preference. If no candidate wins a majority of first choices, the last-place candidate is eliminated and those ballots transfer to each voter's next remaining choice, repeating until someone clears a majority. Instead of rewarding only the candidate with the largest single share of votes, the system favors those with broader appeal across the electorate. Corbin's research supports the claim that such mechanisms can influence the degree of ideological distance between representatives and may open space for more consensus-driven politics.
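For readers curious about the mechanics, here is a minimal instant-runoff tally in Python. It illustrates the counting rule described above, not Corbin's simulation code, and the toy ballots are invented for the example:

```python
from collections import Counter

def instant_runoff(ballots):
    """Return the winner under instant-runoff counting. Each ballot is
    a list of candidate names ordered from most to least preferred."""
    ballots = [list(b) for b in ballots]
    while True:
        # Each ballot counts for its highest-ranked remaining candidate.
        tallies = Counter(b[0] for b in ballots if b)
        total = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > total:  # strict majority: we have a winner
            return leader
        # Otherwise eliminate the last-place candidate; their ballots
        # transfer to each voter's next remaining preference.
        loser = min(tallies, key=tallies.get)
        ballots = [[c for c in b if c != loser] for b in ballots]

# Toy electorate of 9 voters. Under plurality, A wins with 4 first-place
# votes; under instant runoff, C is eliminated, C's voters transfer to
# B, and B wins with a 5-4 majority.
ballots = [["A", "B", "C"]] * 4 + [["B", "A", "C"]] * 3 + [["C", "B", "A"]] * 2
print(instant_runoff(ballots))  # -> B
```

The toy example shows exactly the dynamic Corbin studied: a broadly acceptable candidate can beat a plurality leader once lower-ranked preferences come into play.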
“The way our democratic institutions are structured is core to their resilience,” Corbin explains. “My research suggests that electoral mechanisms are not neutral — they play a measurable role in how polarization develops.”
With one more year left in his dual master’s programme, Corbin is already preparing for his next thesis. Whether in a PhD or professional role, he plans to continue exploring how institutional design shapes the strength and resilience of democracy.
The energy detective who asked what AI really costs
Henry Baker’s research began with a straightforward but surprisingly unanswered question: how much energy does a ChatGPT query actually use? At the time, a widely cited statistic claimed it was ten times more than a Google search, but solid evidence was lacking.
As part of his master’s thesis at the Hertie School, Henry investigated the issue through a case study of large language models. His focus was the proposed EU AI Act, which aimed to use computational operations (FLOPs) as a measure of efficiency. The problem, he argued, is that FLOP-counting ignores how models are actually implemented in practice.
To test this, Henry ran a large-scale experiment: over 2,100 different configurations of two open-source models, varying deployment-level parameters such as batching, numerical precision, decoder sampling, and latency conditions. What he found was striking: energy use varied by as much as 516-fold for the smaller model and nearly 300-fold for the larger one, depending on implementation choices.
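For a sense of how such measurements are taken, here is a minimal sketch that reads an NVIDIA GPU's cumulative energy counter around a single inference call using the pynvml bindings. This is an illustration under stated assumptions, not Henry's released measurement package; `run_inference` is a hypothetical stand-in for any model call:

```python
# Minimal sketch: measure GPU energy around one inference call.
# Requires the pynvml bindings (pip install nvidia-ml-py) and a
# Volta-or-newer NVIDIA GPU that exposes the energy counter.
import pynvml

def measure_energy_joules(run_inference):
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    # Cumulative energy counter in millijoules since driver load.
    start_mj = pynvml.nvmlDeviceGetTotalEnergyConsumption(handle)
    run_inference()  # hypothetical stand-in for the model call
    end_mj = pynvml.nvmlDeviceGetTotalEnergyConsumption(handle)
    pynvml.nvmlShutdown()
    return (end_mj - start_mj) / 1000.0  # joules

# Sweeping this over deployment settings (batch size, precision,
# sampling, latency constraints) is what exposes the energy spread.
```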
“The way you deploy an AI system can matter as much as, sometimes more than, the model itself,” Henry explains. “FLOP-counting alone can miss the true energy costs.”
His findings highlight the need for end-to-end benchmarking frameworks that capture the full stack, rather than relying on static model-level metrics. To support further research, Henry released his measurement tool as a Python package, enabling others to test their own models.
Now working as a Research Engineer at Hertie, Henry continues to explore AI and sustainability issues alongside Prof. Lynn Kaack, shifting his focus from technical measurement to the broader policy questions that will determine how AI evolves in an energy-conscious world.
The future of data science research
These three projects represent something special: data science that doesn't just crunch numbers, but changes how we think about fundamental challenges facing our world. They demonstrate that the most important research often starts with the simplest questions and ends up revealing complex truths about human nature, democratic systems, and technological impact.
As Fabian begins his PhD journey exploring social media and politics, Corbin prepares for his final thesis on democratic governance, and Henry continues building the technical infrastructure for future research, their work reminds us why data science matters. It's not just about algorithms and datasets; it's about using rigorous methods to understand and improve the world around us.
The awards were presented at the Meet the Centre event, where these three researchers inspired the next generation of data scientists. Their message is clear: the biggest challenges of our time need smart people with good data and the courage to ask difficult questions.
In a world full of noise, their research cuts through to signal, and that signal might just help us navigate the complexities ahead.
- Aliya Boranbayeva, Associate Communications and Events | Data Science Lab
- Huy Ngoc Dang, Manager | Data Science Lab & Programme Coordinator | Master of Data Science for Public Policy