International Law Review

Regulating Artificial Intelligence in International Law: Challenges Facing an Interconnected World

Article Type: Research Article

Authors
1 Independent
2 Department of Public and International Law, Faculty of Law, Theology and Political Science, Science and Research Branch, Islamic Azad University, Tehran, Iran
DOI: 10.22066/cilamag.2025.2025772.2584
Abstract
Using a descriptive-analytical research method and a comparative study of international instruments and experiences, this article examines the legal and regulatory challenges arising from the global nature, rapid evolution, and diverse applications of artificial intelligence. The article's central hypothesis is that the absence of a harmonized and binding regulatory framework at the international level intensifies jurisdictional ambiguities, conflicts between standards, and ethical and social risks in the development and use of AI. The findings show that consensus among countries with different legal systems and cultural values can be achieved only by strengthening multilateral cooperation, developing shared principles and norms, adopting risk-based approaches, promoting responsible innovation, and empowering developing countries. The article concludes that a dynamic and flexible approach, public discourse, active stakeholder participation, and a commitment to transparency and accountability are essential for building an effective and equitable framework for the international governance of AI and for turning it into a positive force in the world.

Article Title (English)

Regulating Artificial Intelligence in International Law

Authors (English)

Mohammadreza Moshrefian 1
Mohammadreza Alipour 2
1 Independent
2 Department of Public and International Law, Faculty of Law, Theology and Political Science, Science and Research Branch, Islamic Azad University, Tehran, Iran
Abstract (English)

Extended Abstract

1. Introduction

Artificial intelligence poses major regulatory challenges due to its global scope, fast development, and wide-ranging uses. Key concerns include ethical issues like bias and privacy, as well as socio-economic effects such as job loss and inequality. The lack of a unified global legal framework risks worsening these problems. Effective AI governance demands international cooperation across governments, industry, academia, and civil society, despite differing legal and cultural contexts. This article examines these issues and offers strategies for responsible global AI development.

2. Research Gap and Objective:

Despite the growing recognition of AI's global implications and ongoing efforts by organizations such as the OECD and initiatives like the Global Partnership on AI (GPAI), there remains a significant gap in achieving a comprehensive and unified international regulatory framework for AI. Existing national regulations are fragmented, reflecting divergent priorities; some focus on stringent data privacy, as seen in the EU's General Data Protection Regulation (GDPR), while others prioritize innovation and economic growth. This regulatory divergence creates barriers for multinational corporations seeking to operate across borders and complicates efforts to establish a cohesive global framework. Furthermore, the absence of a universally accepted definition of AI complicates the delineation of its scope within legal frameworks, hindering effective enforcement and creating legal uncertainty. This research aims to identify key challenges in AI regulation at the international level and propose collaborative, flexible, and innovative strategies for overcoming them, balancing innovation with ethical accountability and ensuring broad societal benefits.

3. Methodology:

This study employs a comparative analytical approach to examine existing regulatory frameworks and initiatives at both national and international levels. It analyzes regulations and policies across various jurisdictions to identify commonalities, divergences, and best practices. The methodology also draws on interdisciplinary insights from technology, law, ethics, and policy studies to provide a comprehensive understanding of the complex issues surrounding AI governance. It critically evaluates successful and unsuccessful international collaborations, such as the OECD Principles on AI and the GPAI, to extract lessons learned and inform future policy recommendations. The research further incorporates stakeholder perspectives through a review of academic literature, industry reports, policy documents, and expert consultations.

4. Key Findings:

1. Regulatory Fragmentation: The lack of unified international standards leads to conflicting national rules, hindering cross-border collaboration and creating compliance burdens for multinational corporations.

2. Jurisdictional Ambiguity: The transnational nature of AI systems blurs legal boundaries, making it difficult to determine which legal framework applies when data is collected in one country, processed in another, and utilized in a third.

3. Technological Obsolescence: Traditional legislative processes struggle to keep pace with AI's rapid advancements, resulting in outdated or insufficient regulations that fail to address emerging ethical and societal challenges.

4. Ethical Risks: Issues such as algorithmic bias, lack of transparency in decision-making processes ("black box" systems), and potential misuse raise significant ethical dilemmas that require careful consideration and proactive regulation.

5. Economic Inequality: Uneven access to AI technologies and resources exacerbates global inequalities between developed and developing nations, potentially creating a digital divide that widens existing disparities.

6. Divergent Priorities: Countries differ in their regulatory focus based on economic interests, cultural values, and technological capabilities, leading to conflicting approaches that hinder the development of a unified global framework.

7. Absence of Common Definition: A lack of universal agreement on what constitutes "AI" complicates the process of setting clear legal boundaries and determining which systems fall under specific regulations.

8. The Role of Non-Governmental Actors: AI regulation involves not only governments but also non-governmental actors such as technology companies, civil society, and universities, which play crucial roles in shaping standards and policies. Despite their influence and expertise, the absence of binding legal obligations and the conflicting interests among stakeholders pose major challenges for accountability and transparency, underscoring the need for stronger cooperation and binding international agreements.

5. Contribution to the Field:

This paper contributes to the growing body of literature on AI governance by providing a comprehensive analysis of regulatory challenges from an international perspective. It highlights the need for harmonized yet flexible frameworks that can accommodate diverse national priorities while addressing shared global concerns. The study underscores the importance of interdisciplinary collaboration among technologists, legal experts, ethicists, policymakers, and civil society in developing effective AI governance mechanisms. By analyzing existing efforts and identifying key gaps, the research provides a roadmap for policymakers and stakeholders seeking to establish a robust and equitable international AI regulatory framework. It adds a nuanced perspective on balancing innovation with ethical considerations and on ensuring that the benefits of AI are fairly distributed across the world.

6. Implications and Applications:

This paper emphasizes that effective international regulation of artificial intelligence requires dynamic, adaptable legal frameworks that can keep pace with rapid technological change. It highlights the importance of global cooperation and standard-setting among governments, international organizations, industry, academia, and civil society to develop shared principles and best practices. The research also stresses the need to bridge the digital divide by empowering developing countries through capacity-building and technology transfer, ensuring equitable participation in the global AI ecosystem. Embedding ethical considerations, transparency, and accountability in AI design and deployment is essential for protecting human rights and social welfare. By integrating AI into legal and regulatory systems and encouraging inclusive governance involving a broad range of stakeholders, the paper provides actionable guidance for balancing innovation with risk management and fostering responsible, equitable, and effective global AI governance.

7. Conclusion:

In summary, the extended abstract highlights the profound regulatory challenges posed by the global, rapidly evolving, and multifaceted nature of artificial intelligence. Persistent gaps in international legal frameworks, stemming from jurisdictional ambiguities, the swift pace of technological innovation, and divergent ethical and cultural standards, underscore the need for coordinated and dynamic international responses. Achieving effective AI governance requires harmonizing standards, strengthening international cooperation, fostering responsible innovation, ensuring transparency and accountability, and empowering developing countries to participate in global rulemaking. By proposing flexible, risk-based approaches and advocating the involvement of a broad spectrum of stakeholders, including governments, industry, academia, and civil society, this research underscores the importance of adaptive and inclusive regulatory strategies. It contributes to international law by mapping both the obstacles and the possible pathways toward a more equitable and effective global governance framework for artificial intelligence. Ultimately, only through collective commitment and innovative legal mechanisms can AI be harnessed as a force for global good, minimizing risks while maximizing societal benefits.



Keywords:

Artificial Intelligence, Regulating, International Law, Human Rights, Harmonizing


Articles in Press, Accepted Manuscript
Available online from 10 Mordad 1404 (Solar Hijri)

  • Received: 22 Mordad 1403
  • Revised: 12 Khordad 1404
  • Accepted: 10 Mordad 1404