PEDAGOGICAL ETHICS IN THE AGE OF ARTIFICIAL INTELLIGENCE: CAN A LECTURER REMAIN AN AUTHOR?

Authors

DOI:

https://doi.org/10.32782/ped-uzhnu/2025-10-18

Keywords:

pedagogical ethics; artificial intelligence; academic integrity; lecturer authorship; ethical responsibility; Generative AI; transparency; human-centered education; evidence-based education; digital age

Abstract

The article examines the phenomenon of pedagogical ethics in the context of the rapid integration of generative artificial intelligence (GenAI) into higher education. The changes brought by tools such as ChatGPT, Grammarly, and Copilot transcend technical modernization: they affect the nature of authorship, the structure of academic integrity, assessment systems, and the professional identity of the lecturer. The author raises a key question: can a lecturer remain an author in a world where part of the text, decisions, and instructional scenarios are produced by a machine? The study substantiates the thesis that a lecturer's ethical standing in the digital age is defined not merely by the degree of technological proficiency but by the unique capacity to preserve human subjectivity, accountability, and transparency, making the lecturer's role more crucial than ever.

The paper argues that the key threat to contemporary academic integrity is not the use of AI per se but the loss of a human sense of authorship, when knowledge ceases to be associated with personal contribution and moral responsibility. To address this challenge, a new integrative E³-Author model (Ethical – Empathic – Evidence-based Authoring Model) is proposed for university lecturers. The model establishes a multi-level system of actions that ensures the ethical sustainability of pedagogical authorship under conditions of digital co-creation.

The ethical level of the model sets the norms of human–technology interaction: a deliberate decision to employ AI, transparent documentation of the process, final human oversight, and readiness to explain the provenance of each fragment. The empathic level emphasizes humanity and the moral connection between the lecturer and the produced material. The evidence-based level entails accountability: the lecturer retains versions, drafts, and prompts, conducts fact-checking, and is ready to demonstrate the creation process.
Within the model, practical implementation mechanisms are proposed:

– ethical audit of instructional materials as a form of collective reflection without a punitive thrust;

– ethical disclosure: a brief description of how AI was used, appended at the end of a course or article;

– reflective assessment: stage-by-stage recording of the contributions of students and lecturers;

– community of practice: departmental and inter-university platforms for discussing new standards of academic ethics.

As a result, the university receives not a set of prohibitions but a support system for professional autonomy in which ethics rests on trust and transparency. The proposed model aligns with a humanistic paradigm: education must remain human-centered, and technologies should serve as instruments for empowerment rather than control. The E³-Author approach does not deny AI's role but places it within a framework of accountability and co-creation, in which the lecturer retains the status of the chief moral agent of the educational process. Practically, the model enables universities to integrate GenAI without jeopardizing integrity while laying the groundwork for new assessment formats (oral, process-based, and multimodal) that improve learning quality and simultaneously curb misuse, providing a practical and effective solution to the challenges of the digital age.

The E³-Author model offers a universal scaffold for rethinking the lecturer's role as a moral leader, facilitator of integrity, and guarantor of educational quality. Prospects for further research include validating the model across academic disciplines, developing indicators of the "sense of authorship," and analyzing relationships among empathy, process transparency, and academic trust. Globally, the approach can serve as a conceptual basis for a new pedagogy of AI-era ethics, one that fuses innovation with humanity, inspiring hope for a future where technology and ethics coexist harmoniously.

References

Committee on Publication Ethics. Authorship and AI tools (COPE position statement). 2023, February 13. URL: https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools

Draxler F., Starke A., Diefenbach S. The AI ghostwriter effect: When users do not perceive ownership of AI-generated text. In: Proceedings of the CHI Conference on Human Factors in Computing Systems, 2024. DOI: https://doi.org/10.1145/3637875

International Committee of Medical Journal Editors. Defining the role of authors and contributors. 2023, December. URL: https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html

Kofinas A. K., Tsak C. H., Pike D. The impact of generative AI on academic integrity of authentic assessments within a higher education context. British Journal of Educational Technology, 2025, 56(2). DOI: https://doi.org/10.1111/bjet.13585

Miao F., Holmes W. Guidance for generative AI in education and research. Paris: UNESCO, 2023. URL: https://unesdoc.unesco.org/ark:/48223/pf0000386694

Tertiary Education Quality and Standards Agency (TEQSA). The evolving risk to academic integrity posed by generative artificial intelligence: Options for immediate action. 2024. URL: https://www.teqsa.gov.au/sites/default/files/2024-08/evolving-risk-to-academic-integrity-posed-by-generative-artificial-intelligence.pdf

Weber-Wulff D., Tschuggnall M., Breslin S., Bialke M., Gipp B., Stein B. Testing of detection tools for AI-generated text. International Journal for Educational Integrity, 2023, 19(26). DOI: https://doi.org/10.1007/s40979-023-00146-z

Published

2025-12-01

Issue

Section

SECTION 2 SPECIAL AND INCLUSIVE EDUCATION