The governance of innovation is a persistent and demanding task, and the advent of
Artificial Intelligence (AI) presents new challenges owing to the technology's
distinctive features. Its intricate nature, lack of transparency, unpredictable
outcomes, and diverse implications have left policymakers around the world struggling
to keep pace with its rapid evolution. In recent years, the European Union has emerged
as a leading force in the field of AI regulation, with the primary objective of
ensuring the responsible and human-centered development and commercialization of these
systems. By contrast, the United States has adopted a less interventionist stance,
placing greater emphasis on fostering innovation and market-driven progress.
Commentators and stakeholders regard the EU approach as potentially burdensome but
grounded in ethical considerations, whereas the US approach is perceived as less
stringent but ultimately deficient in safety provisions. Despite these discernible
policy disparities, it is important to recognize that the two approaches share
fundamental elements, a point worth emphasizing given the imperative of international
collaboration in regulating AI systems. Establishing international standards for an
inherently transnational technology enhances legal certainty and enables firms to
prioritize the regulations most pertinent to their activities. Human welfare is also
substantially better protected, allowing the benefits of AI to be fully realized while
mitigating concerns over adverse effects and detrimental outcomes.