Speaker
Description
As machine learning systems become more pervasive in our daily lives, the emergence of biases, which often disproportionately affect vulnerable populations, has raised significant concerns. In response, governments have introduced regulatory measures, such as the European Union's recent Artificial Intelligence Act. However, addressing bias through regulation remains challenging, as the generation of bias within machine learning systems is complex, multifaceted, and still poorly understood.
In this presentation, I will offer a novel, theory-grounded approach to investigating this issue. Using tools from statistical physics, I will introduce a simplified model that isolates bias arising from the underlying data structure. By analysing both the asymptotic and dynamic behaviour of classifiers, I will demonstrate how bias can emerge and propagate through the system. Although bias can be introduced remarkably easily, the statistical physics framework reveals unexpected pathways and mechanisms through which it manifests. This approach sheds new light on some of the root causes of bias and ultimately offers a more rigorous basis for understanding and mitigating its impact.
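To give a flavour of how data structure alone can generate bias, the following is a minimal, hypothetical sketch, not the speaker's actual model: two groups share the same feature distribution but follow different labelling rules, and the minority group is under-represented. A single classifier trained on the pooled data then errs far more often on the minority group. All parameters and group sizes here are illustrative assumptions.

```python
# Toy illustration (assumptions only, not the talk's model): bias from data
# structure. A majority and an under-represented minority group have their
# own ground-truth labelling rules ("teachers"); one classifier is fitted on
# the pooled data and its test error is compared per group.
import numpy as np

rng = np.random.default_rng(0)
d = 50                        # input dimension (illustrative)
n_major, n_minor = 4000, 400  # minority group is under-represented

# Group-specific teacher vectors defining the ground-truth labels.
t_major = rng.standard_normal(d); t_major /= np.linalg.norm(t_major)
t_minor = rng.standard_normal(d); t_minor /= np.linalg.norm(t_minor)

def sample(n, teacher):
    """Draw Gaussian inputs and label them with the group's teacher."""
    X = rng.standard_normal((n, d))
    return X, np.sign(X @ teacher)

X0, y0 = sample(n_major, t_major)
X1, y1 = sample(n_minor, t_minor)
X, y = np.vstack([X0, X1]), np.concatenate([y0, y1])

# Ridge-regularised least-squares classifier fitted on the pooled data.
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def test_error(teacher, n_test=5000):
    """Fraction of fresh samples from a group that the pooled model mislabels."""
    Xt, yt = sample(n_test, teacher)
    return np.mean(np.sign(Xt @ w) != yt)

print(f"majority test error: {test_error(t_major):.3f}")  # small
print(f"minority test error: {test_error(t_minor):.3f}")  # much larger
```

In this toy setting the disparity follows directly from the composition of the training data: the pooled fit aligns with the majority teacher, so the under-represented group is systematically misclassified even though nothing in the learning algorithm treats the groups differently.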