This dissertation aims to address the matter of liability in cases involving the operation of Artificial Intelligence. Specifically, the following paper tackles the rapid integration of AI systems into diverse sectors of public and private
life, from transportation to healthcare, which presents immense opportunities
alongside critical ethical and legal challenges. Furthermore, this dissertation delves
into the complex topic of liability attribution surrounding such AI systems, focusing on
its impact on individuals and society. The research begins by defining AI and outlining
its unique risk profile compared to traditional systems. It then explores the current
legal landscape, commenting on EU and UN approaches as well as State national law
approaches, highlighting the problematic that arises in addressing the nuances of AIinduced harm, particularly due to the "black box" nature of algorithms and the
potential for data bias.
The main part of the dissertation focuses on dissecting the multifaceted roles of
various stakeholders within the AI lifecycle – developers, deployers, and end users. By
analyzing their responsibilities and potential negligence, the research untangles the
intricate web of accountability when harm manifests. Leveraging real-world cases and
emerging legal frameworks, the dissertation proposes a framework for navigating the largely uncertain and opaque problem of AI liability. It advocates for clear legal
standards, transparency and explainability in AI models, and emphasizes robust risk
management protocols. Ultimately, this dissertation strives to pave the way for
responsible development and deployment of AI systems, ensuring trust, fairness, and
safety in a world increasingly shaped by intelligent machines.