Preserving Human Dignity in the Age of Artificial Intelligence: Quranic Perspectives and Ethical Challenges

Article Type: Winter 1404 Special Issue

Authors

1 Associate Professor and Faculty Member, Organization for Educational Research and Planning

2 Assistant Professor, Department of Educational Sciences, Payame Noor University, Tehran, Iran

3 Department of Educational Sciences, Farhangian University, P.O. Box 14665-889, Tehran, Iran

Abstract

This study examines the ethical challenges of preserving human dignity in the age of artificial intelligence; its main aim is to emphasize the necessity of safeguarding human dignity in the applications of this emerging technology. With the expanding use of artificial intelligence in domains such as judicial decision-making, medicine, and education, threats such as algorithmic discrimination, privacy violations, and the erosion of human autonomy and agency have emerged. Drawing on the Qur’anic teachings of takrim bani Adam (the honoring of humankind) and khalifat Allah (human vicegerency of God), the study argues that preserving human dignity must be the basis of any ethical engagement with modern technologies. The research method is thematic analysis, and the study corpus comprises religious and academic sources, including the Holy Qur’an, authoritative narrations, exegeses, jurisprudential and Islamic-ethics texts, and contemporary philosophical and scientific literature on technology ethics and artificial intelligence. Samples were selected purposively and theoretically and evaluated for quality and relevance. Data were organized using reference-management software such as EndNote and specialized national and international databases, then analyzed with MAXQDA. The analytical process involved open, axial, and thematic coding, yielding overarching and subsidiary themes. A priori Qur’anic and jurisprudential principles concerning dignity, justice, responsibility, and privacy formed the conceptual framework of the analysis. The credibility of the analysis was confirmed through independent recoding and the calculation of an inter-coder agreement index (Cohen’s kappa = 0.82), and full documentation via a code–source matrix ensured transparency and traceability of the data. The findings show that the Qur’anic principles of dignity, justice, and responsibility can ground the ethical design of artificial intelligence and prevent consequences such as discrimination and the weakening of human agency. Finally, the study emphasizes that integrating Qur’anic teachings with current technical and ethical standards makes artificial intelligence a tool in the service of spiritual growth, social justice, and the preservation of human dignity, with public education playing an essential role in this regard.

Article Title [English]

Preserving Human Dignity in the Age of Artificial Intelligence: Quranic Perspectives and Ethical Challenges

Authors [English]

  • Nayereh Shahmohammadi 1
  • Parvaneh Mehrjoo 2
  • Esmaeil Rahimi 3
1 Associate Professor and Faculty Member at the Educational Research and Planning Organization
2 Assistant Professor, Department of Educational Sciences, Payame Noor University, Tehran, Iran
3 Department of Educational Sciences, Farhangian University, P.O. Box 14665-889, Tehran, Iran
Abstract [English]

This study explores the ethical challenges of preserving human dignity in the age of artificial intelligence, aiming to emphasize the necessity of safeguarding human dignity in the applications of this emerging technology. With the increasing use of artificial intelligence in sensitive domains such as judicial decision-making, medicine, and education, significant ethical risks have emerged, including algorithmic discrimination, violations of privacy, and the erosion of human autonomy and agency. Drawing on the Qur’anic teachings of takrim bani Adam, the honoring of humankind, and the concept of khalifat Allah, which views human beings as God’s vicegerents on earth, the study argues that preserving human dignity must form the ethical foundation for engaging with modern technologies. The research adopts a thematic analysis methodology. The study corpus consists of diverse religious and academic sources, including the Holy Quran, authoritative narrations, Qur’anic exegeses, jurisprudential texts, works on Islamic ethics, and contemporary philosophical and scientific literature on technology ethics and artificial intelligence. Sources were selected through purposive and theoretical sampling and evaluated in terms of quality and relevance. Data were organized using reference-management software such as EndNote and specialized national and international databases, and subsequently analyzed using MAXQDA software. The analytical process involved open coding, axial coding, and thematic coding, leading to the extraction of overarching themes and subsidiary categories. A priori Qur’anic and jurisprudential principles related to human dignity, justice, responsibility, and the protection of privacy constituted the conceptual framework guiding the analysis. The credibility of the findings was ensured through independent recoding and the calculation of inter-coder agreement using Cohen’s kappa coefficient of 0.82, indicating reliability. Documentation through a code–source matrix enhanced transparency and traceability. The findings demonstrate that Qur’anic principles of dignity, justice, and responsibility provide an ethical framework for designing artificial intelligence, preventing discrimination, protecting privacy, and preserving human agency. Finally, the study emphasizes that integrating Qur’anic teachings with contemporary technical and ethical standards can make artificial intelligence a tool in the service of spiritual growth, social justice, and the preservation of human dignity, with public education playing an essential role in this regard.
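The inter-coder agreement statistic reported above (Cohen’s kappa) can be sketched in a few lines of standard-library Python. This is a minimal illustration of the formula, not the study’s actual coding data: the excerpt labels assigned by the two hypothetical coders below are invented for demonstration.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement between two coders, corrected
    for the agreement expected by chance from each coder's label frequencies."""
    assert len(coder_a) == len(coder_b), "coders must label the same items"
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labeled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: product of the coders' marginal label probabilities.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical thematic codes assigned by two independent coders to ten excerpts.
a = ["dignity", "justice", "privacy", "dignity", "justice",
     "dignity", "privacy", "justice", "dignity", "privacy"]
b = ["dignity", "justice", "privacy", "dignity", "dignity",
     "dignity", "privacy", "justice", "dignity", "justice"]
print(round(cohens_kappa(a, b), 2))  # 0.69
```

A kappa of 0.82, as reported in the abstract, is conventionally read as "almost perfect" agreement on the Landis–Koch scale, which is why it is cited as evidence of coding reliability.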

Keywords [English]

  • Human Dignity
  • Artificial Intelligence
  • Quranic Ethics
  • Ethical Challenges
Volume 5, Winter Special Issue - Serial No. 19
"Artificial Intelligence and Transformation in Teaching and Learning"
Bahman 1404
Pages 113-129
  • Received: 16 Mehr 1404
  • Revised: 17 Azar 1404
  • Accepted: 29 Dey 1404
  • First Online: 30 Dey 1404
  • Published: 1 Esfand 1404