Experience two dynamic perspectives on shaping tomorrow. Finn Rietz, PhD candidate in Computer Science, demonstrates how safety constraints in reinforcement learning can create AI that avoids catastrophic errors. Meanwhile, Ulrika Sultan (Lecturer in Technology Education) explores how cultural biases steer girls away from STEM and shares ideas to inspire future innovators. Don't miss this thought-provoking session where technology meets inclusion.
Teaching AI What Not to Do: Safe Exploration in Reinforcement Learning
Finn Rietz
(PhD candidate)
Reinforcement Learning (RL) is a powerful branch of AI inspired by trial-and-error learning in behavioral psychology, capable of solving complex problems and often converging to "superhuman" performance levels. But there's a catch: since RL learns through trial and error over many attempts, it often has to make catastrophic mistakes, many times over, before it figures out which actions are undesirable and should be avoided. But... does it really have to be this way? Why should we let an AI explore in ways we already know to be suboptimal? What if we could build systems that simply never even consider ideas that are obviously bad for us?
In this talk, we'll explore how to design safer AI systems by integrating our existing knowledge, for example in the form of safety constraints, directly into the learning process. Instead of punishing bad behavior after it happens, we'll see how to build agents that are incapable of taking forbidden actions, even as they explore and learn. I'll begin with a brief introduction to RL, then walk through classical approaches to constrained learning, and finally turn to my current research on tools like normalizing flows that embed constraints and human preferences into the AI's decision-making from the start.
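To make this concrete, below is a minimal, hypothetical sketch of the classical "action masking" idea in tabular Q-learning: the agent explores a tiny gridworld, but moves into a known hazard cell are simply never offered to it, so the catastrophic mistake cannot happen even once. The gridworld, the hazard, and every name in the code are illustrative assumptions for this write-up, not code from the talk or from Finn Rietz's research.

```python
import random
from collections import defaultdict

# Hypothetical 3x3 gridworld: start at (0, 0), goal at (2, 2).
# Cell (1, 1) is a known hazard the agent must never enter.
GOAL, HAZARD = (2, 2), (1, 1)
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, move):
    """Apply a move, clipping at the grid edges."""
    r = min(max(state[0] + move[0], 0), 2)
    c = min(max(state[1] + move[1], 0), 2)
    return (r, c)

def safe_moves(state):
    """The safety layer: rule out any move that would enter the hazard."""
    return [m for m in MOVES if step(state, m) != HAZARD]

def train(episodes=500, alpha=0.1, gamma=0.95, eps=0.2):
    q = defaultdict(float)  # Q-values keyed by (state, move)
    for _ in range(episodes):
        state = (0, 0)
        while state != GOAL:
            moves = safe_moves(state)           # only safe moves are available
            if random.random() < eps:
                move = random.choice(moves)     # random exploration, never unsafe
            else:
                move = max(moves, key=lambda m: q[(state, m)])
            nxt = step(state, move)
            reward = 1.0 if nxt == GOAL else -0.01  # small cost per step
            # Standard Q-learning update; the lookahead max is also
            # restricted to safe moves.
            best = 0.0 if nxt == GOAL else max(q[(nxt, m)] for m in safe_moves(nxt))
            q[(state, move)] += alpha * (reward + gamma * best - q[(state, move)])
            state = nxt
    return q

q_table = train()  # learns a goal-reaching policy that never visits (1, 1)
```

Because the constraint is enforced before an action is ever sampled, the agent does not need to experience the catastrophe in order to learn to avoid it; that shift in perspective is what the talk builds on.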
Come find out how we're making AI not just intelligent, but safe, trustworthy, and aligned with human values -- without holding back its creativity.

Tech Isn’t Just for Boys: Rethinking Girls in STEM
Ulrika Sultan
(Lecturer in Technology Education)
Why do so many girls lose interest in tech before they even get started? And what can we do to change that? In this talk, Ulrika Sultan explores the hidden messages and cultural stereotypes that quietly push girls away from science, maths, and technology. Drawing on her research, she shows how early experiences shape who feels like they “belong” in STEM. You’ll hear stories of what works, what doesn’t, and how small shifts in everyday life can help all kids, especially girls, see themselves as future engineers, scientists, and creators.
