Artificial intelligence in education: Addressing ethical challenges in K-12 settings

Selin Akgun and Christine Greenhow

Michigan State University, East Lansing, MI, USA

Abstract

Artificial intelligence (AI) is a field of study that combines the applications of machine learning, algorithm productions, and natural language processing. Applications of AI transform the tools of education. AI has a variety of educational applications, such as personalized learning platforms to promote students’ learning, automated assessment systems to aid teachers, and facial recognition systems to generate insights about learners’ behaviors. Despite the potential benefits of AI to support students’ learning experiences and teachers’ practices, the ethical and societal drawbacks of these systems are rarely fully considered in K-12 educational contexts. The ethical challenges of AI in education must be identified and introduced to teachers and students. To address these issues, this paper (1) briefly defines AI through the concepts of machine learning and algorithms; (2) introduces applications of AI in educational settings and benefits of AI systems to support students’ learning processes; (3) describes ethical challenges and dilemmas of using AI in education; and (4) addresses the teaching and understanding of AI by providing recommended instructional resources from two providers—i.e., the Massachusetts Institute of Technology’s (MIT) Media Lab and Code.org. The article aims to help practitioners reap the benefits and navigate ethical challenges of integrating AI in K-12 classrooms, while also introducing instructional resources that teachers can use to advance K-12 students’ understanding of AI and ethics.

Introduction

“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” — Stephen Hawking.

We may not think about artificial intelligence (AI) on a daily basis, but it is all around us, and we have been using it for years. When we are doing a Google search, reading our emails, booking a doctor’s appointment, asking for driving directions, or getting movie and music recommendations, we are constantly using AI applications and their assistance in our lives. This need for assistance and our dependence on AI systems have become even more apparent during the COVID-19 pandemic. The growing impact and dominance of AI systems reveal themselves in healthcare, education, communications, transportation, agriculture, and more. It is almost impossible to live in a modern society without encountering applications powered by AI [10, 32].

Artificial intelligence (AI) can be defined briefly as the branch of computer science that deals with the simulation of intelligent behavior in computers and their capacity to mimic, and ideally improve, human behavior [43]. AI dominates the fields of science, engineering, and technology, but it is also present in education through machine-learning systems and algorithm productions [43]. For instance, AI has a variety of algorithmic applications in education, such as personalized learning systems to promote students’ learning, automated assessment systems to support teachers in evaluating what students know, and facial recognition systems to provide insights about learners’ behaviors [49]. Besides these platforms, algorithm systems are prominent in education through different social media outlets, such as social network sites, microblogging systems, and mobile applications. Social media are increasingly integrated into K-12 education [7] and subordinate learners’ activities to intelligent algorithm systems [17]. Here, we use the American term “K-12 education” to refer to students’ education in kindergarten (K) (ages 5–6) through 12th grade (ages 17–18) in the United States, which is similar to primary and secondary education or pre-college schooling in other countries. These AI systems can increase the capacity of K-12 educational systems and support the social and cognitive development of students and teachers [8, 55]. More specifically, applications of AI can support instruction in mixed-ability classrooms; while personalized learning systems provide students with detailed and timely feedback about their writing products, automated assessment systems support teachers by freeing them from excessive workloads [26, 42].

Despite the benefits of AI applications for education, they pose societal and ethical drawbacks. As the famous scientist Stephen Hawking pointed out, weighing these risks is vital for the future of humanity. Therefore, it is critical to take action toward addressing them. The biggest risks of integrating these algorithms in K-12 contexts are: (a) perpetuating existing systemic bias and discrimination, (b) perpetuating unfairness for students from mostly disadvantaged and marginalized groups, and (c) amplifying racism, sexism, xenophobia, and other forms of injustice and inequity [40]. These algorithms do not occur in a vacuum; rather, they shape and are shaped by ever-evolving cultural, social, institutional, and political forces and structures [33, 34]. As academics, scientists, and citizens, we have a responsibility to educate teachers and students to recognize the ethical challenges and implications of algorithm use. To create a future generation where an inclusive and diverse citizenry can participate in the development of the future of AI, we need to develop opportunities for K-12 students and teachers to learn about AI via AI- and ethics-based curricula and professional development [2, 58].

Toward this end, the existing literature provides little guidance and contains a limited number of studies that focus on supporting K-12 students’ and teachers’ understanding of the social, cultural, and ethical implications of AI [2]. Most studies reflect university students’ engagement with ethical ideas about algorithmic bias, but few address how to promote students’ understanding of AI and ethics in K-12 settings. Therefore, this article: (a) synthesizes ethical issues surrounding AI in education as identified in the educational literature, (b) reflects on different approaches and curriculum materials available for teaching students about AI and ethics (i.e., featuring materials from the MIT Media Lab and Code.org), and (c) articulates future directions for research and recommendations for practitioners seeking to navigate AI and ethics in K-12 settings.

First, we briefly define the notion of artificial intelligence (AI) and its applications through machine-learning and algorithm systems. As education and educational technology scholars working in the United States, and at the risk of oversimplifying, we provide only a brief definition of AI below, and recognize that definitions of AI are complex, multidimensional, and contested in the literature [9, 16, 38]; an in-depth discussion of these complexities, however, is beyond the scope of this paper. Second, we describe in more detail five applications of AI in education, outlining their potential benefits for educators and students. Third, we describe the ethical challenges they raise by posing the question: “how and in what ways do algorithms manipulate us?” Fourth, we explain how to support students’ learning about AI and ethics through different curriculum materials and teaching practices in K-12 settings. Our goal here is to provide strategies for practitioners to reap the benefits while navigating the ethical challenges. We acknowledge that in centering this work within U.S. education, we highlight certain ethical issues that educators in other parts of the world may see as less prominent. For example, the European Union (EU) has highlighted ethical concerns and implications of AI, emphasized privacy protection, surveillance, and non-discrimination as primary areas of interest, and provided guidelines on how trustworthy AI should be [3, 15, 23]. Finally, we reflect on future directions for educational and other research that could support K-12 teachers and students in reaping the benefits while mitigating the drawbacks of AI in education.

Definition and applications of artificial intelligence

The pursuit of creating intelligent machines that replicate human behavior has accelerated with the realization of artificial intelligence. With the latest advancements in computer science, a proliferation of definitions and explanations of what counts as an AI system has emerged. For instance, AI has been defined as “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings” [49]. This particular definition highlights the mimicry of human behavior and consciousness. Furthermore, AI has been defined as “the combination of cognitive automation, machine learning, reasoning, hypothesis generation and analysis, natural language processing, and intentional algorithm mutation producing insights and analytics at or above human capability” [31]. This definition brings the different sub-fields of AI together and underlines their function while operating at or above human capability.

Combining these definitions, artificial intelligence can be described as technology that builds systems able to think and act like humans in pursuit of goals. AI is mainly known through different applications and advanced computer programs, such as recommender systems (e.g., YouTube, Netflix), personal assistants (e.g., Apple’s Siri), facial recognition systems (e.g., Facebook’s face detection in photographs), and learning apps (e.g., Duolingo) [32]. To build these programs, different sub-fields of AI have been used in a diverse range of applications. Evolutionary algorithms and machine learning are the most relevant to AI in K-12 education.

Algorithms are the core elements of AI. The history of AI is closely connected to the development of sophisticated and evolutionary algorithms. An algorithm is a set of rules or instructions to be followed by a computer in a problem-solving operation to achieve an intended end goal. In essence, all computer programs are algorithms. They involve thousands of lines of code representing mathematical instructions that the computer follows to solve the intended problem (e.g., computing a numerical calculation, processing an image, or grammar-checking an essay). AI algorithms are applied to fields that we might think of as essentially human behavior, such as speech and face recognition, visual perception, learning, and decision-making. In that way, algorithms can provide instructions for almost any AI system and application we can conceive [27].
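To make this concrete, the toy Python function below expresses one such set of instructions; the grading task and thresholds are invented purely for illustration.

```python
# A toy algorithm: a fixed sequence of instructions turning an input (a list
# of scores) into an output (a letter grade). Thresholds are invented.
def letter_grade(scores):
    """Average a list of numeric scores and map the result to a letter."""
    if not scores:
        raise ValueError("at least one score is required")
    average = sum(scores) / len(scores)
    # Each branch below is one explicit rule the computer follows in order.
    if average >= 90:
        return "A"
    elif average >= 80:
        return "B"
    elif average >= 70:
        return "C"
    else:
        return "F"

print(letter_grade([92, 85, 88]))  # -> "B" (average is about 88.3)
```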

Machine learning

Machine learning is derived from statistical learning methods and uses data and algorithms to perform tasks which are typically performed by humans [43]. Machine learning is about making computers act or perform without being given explicit, line-by-line instructions [29]. The working mechanism of machine learning is the learning model’s exposure to ample amounts of quality data [41]. Machine-learning algorithms first analyze the data to determine patterns and to build a model, and then predict future values through these models. In other words, machine learning can be considered a three-step process: first, the system gathers and analyzes data; second, it builds a model suited to a given task; and third, it acts on new inputs and produces the desired results without human intervention [29, 56]. Widely known AI applications such as recommender and facial recognition systems have all been made possible through the working principles of machine learning.
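The three-step process can be sketched in a few lines of Python with scikit-learn; the study-hours data and pass/fail task below are invented, and a real system would require far more data and careful validation.

```python
# A toy version of the three-step machine-learning process, using scikit-learn
# (assumed installed). The data and the pass/fail task are invented.
from sklearn.tree import DecisionTreeClassifier

# Step 1: gather and analyze data: [hours studied, prior quiz score] per student.
X = [[1, 55], [2, 60], [8, 85], [9, 90], [3, 58], [7, 88]]
y = [0, 0, 1, 1, 0, 1]  # 0 = did not pass the exam, 1 = passed

# Step 2: build a model that captures the pattern in the data.
model = DecisionTreeClassifier().fit(X, y)

# Step 3: act on new input, predicting for an unseen student.
print(model.predict([[6, 80]]))  # e.g., [1], predicted to pass
```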

Benefits of AI applications in education

Personalized learning systems, automated assessments, facial recognition systems, chatbots (often embedded in social media sites), and predictive analytics tools are being deployed increasingly in K-12 educational settings; they are powered by machine-learning systems and algorithms [29]. These applications of AI have shown promise to support teachers and students in various ways: (a) providing instruction in mixed-ability classrooms, (b) providing students with detailed and timely feedback on their writing products, and (c) freeing teachers from the burden of possessing all knowledge and giving them more room to support their students while they are observing, discussing, and gathering information in their collaborative knowledge-building processes [26, 50]. Below, we outline the benefits of each of these educational applications in the K-12 setting before turning to a synthesis of their ethical challenges and drawbacks.

Personalized learning systems

Personalized learning systems, also known as adaptive learning platforms or intelligent tutoring systems, are one of the most common and valuable applications of AI to support students and teachers. They provide students with access to different learning materials based on their individual learning needs and subjects [55]. For example, rather than practicing chemistry on a worksheet or reading a textbook, students may use an adaptive and interactive multimedia version of the course content [39]. Comparing students’ scores on researcher-developed or standardized tests, research shows that instruction based on personalized learning systems resulted in higher test scores than traditional teacher-led instruction [36]. Microsoft’s report (2018) of over 2,000 students and teachers from Singapore, the U.S., the UK, and Canada shows that AI supports students’ learning progressions. These platforms promise to identify gaps in students’ prior knowledge and to adapt learning tools and materials to support students’ growth. These systems generate models of learners based on their knowledge and cognition; however, the existing platforms do not yet model learners’ social, emotional, and motivational states [28]. Considering the shift to remote K-12 education during the COVID-19 pandemic, personalized learning systems offer a promising form of distance learning that could reshape K-12 instruction for the future [35].
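The gap-identification logic at the heart of such platforms can be illustrated with a small sketch. The skill names, response data, and mastery threshold below are our own invented stand-ins; production systems use far richer learner models.

```python
# A small sketch of adaptive-platform logic: estimate mastery per skill from a
# student's response history, then serve the weakest skill next. Invented data.
def mastery(history):
    """Estimate mastery as the fraction of correct responses per skill."""
    return {skill: (sum(tries) / len(tries) if tries else None)
            for skill, tries in history.items()}

def next_skill(history, threshold=0.8):
    """Serve the unattempted or weakest skill below the mastery threshold."""
    gaps = {s: m for s, m in mastery(history).items()
            if m is None or m < threshold}
    if not gaps:
        return None  # all skills mastered; advance to new material
    # Treat unattempted skills (None) as the largest gaps.
    return min(gaps, key=lambda s: -1.0 if gaps[s] is None else gaps[s])

history = {
    "fractions": [1, 1, 0, 1],  # 75% correct: below threshold
    "decimals":  [1, 1, 1, 1],  # mastered
    "ratios":    [],            # not yet attempted
}
print(next_skill(history))  # -> "ratios"
```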

Automated assessment systems

Automated assessment systems are becoming one of the most prominent and promising applications of machine learning in K-12 education [42]. These scoring systems are being developed to meet the need for scoring students’ writing, exams, and assignments, tasks usually performed by the teacher. Assessment algorithms can provide course support and management tools to lessen teachers’ workload, as well as extend their capacity and productivity. Ideally, these systems can provide additional support to students, as their essays can be graded quickly [55]. Providers of the biggest open online courses, such as Coursera and EdX, have integrated automated scoring engines into their learning platforms to assess the writing of hundreds of students [42]. Similarly, a tool called “Gradescope” has been used by over 500 universities to develop and streamline scoring and assessment [12]. By flagging wrong answers and marking correct ones, the tool supports instructors by reducing manual grading time and effort. Automated assessment systems handle essay marking and feedback very differently from numeric assessments, which simply analyze right or wrong answers on a test. Overall, these scoring systems have the potential to deal with the complexities of the teaching context and support students’ learning process by providing them with feedback and guidance to improve and revise their writing.
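One simple family of techniques behind such engines compares a student’s response to reference answers by text similarity. The sketch below, with invented texts, shows the idea using TF-IDF vectors; deployed scoring engines are substantially more sophisticated.

```python
# A deliberately simple sketch of one family of automated scoring techniques:
# compare a student answer to reference answers by TF-IDF text similarity.
# All texts are invented; production scoring engines are far more sophisticated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

references = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Plants use sunlight, water, and carbon dioxide to produce glucose and oxygen.",
]
student_answer = "Plants turn sunlight, water, and carbon dioxide into glucose."

vectors = TfidfVectorizer().fit_transform(references + [student_answer])
student_vec, reference_vecs = vectors[-1], vectors[:-1]

# Score the answer by its closest match to any reference response.
score = cosine_similarity(student_vec, reference_vecs).max()
print(round(float(score), 2))  # e.g., ~0.6; low scores could go to the teacher
```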

Facial recognition systems and predictive analytics

Facial recognition software is used to capture and monitor students’ facial expressions. These systems provide insights about students’ behaviors during learning processes and allow teachers to take action or intervene, which, in turn, helps teachers develop learner-centered practices and increase students’ engagement [55]. Predictive analytics algorithm systems are mainly used to identify and detect patterns about learners based on statistical analysis. For example, these analytics can be used to detect university students who are at risk of failing or not completing a course. Through these identifications, instructors can intervene and get students the help they need [55].
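As an illustration of the idea (not any specific vendor’s system), the sketch below trains a classifier on invented engagement features to estimate a student’s risk of not completing a course.

```python
# A minimal sketch of predictive analytics for at-risk detection (not any
# specific vendor's system). Engagement features and labels are invented.
from sklearn.linear_model import LogisticRegression

# Features per student: [logins per week, assignments submitted, quiz average]
X = [[5, 8, 82], [1, 2, 40], [4, 7, 75], [0, 1, 35], [6, 9, 90], [2, 3, 50]]
y = [0, 1, 0, 1, 0, 1]  # 1 = failed or did not complete the course

model = LogisticRegression().fit(X, y)

# Estimated risk for a new student: the basis for an instructor alert.
print(model.predict_proba([[2, 4, 55]])[0][1])  # a high value triggers outreach
```

Notably, the continuous tracking that feeds such models is exactly what raises the surveillance and autonomy concerns discussed later in this article.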

Social networking sites and chatbots

Social networking sites (SNSs) connect students and teachers through social media outlets. Researchers have emphasized the importance of using SNSs (such as Facebook) to expand learning opportunities beyond the classroom, monitor students’ well-being, and deepen student–teacher relations [5]. Different scholars have examined the role of social media in education, describing its impact on student and teacher learning and scholarly communication [6]. They point out that the integration of social media can foster students’ active learning, collaboration skills, and connections with communities beyond the classroom [6]. Chatbots also appear in social media outlets through different AI systems [21]. They are also known as dialogue systems or conversational agents [26, 52]. Chatbots are helpful because of their ability to respond naturally, with a conversational tone. For instance, a text-based chatbot system called “Pounce” was used at Georgia State University to help students through the registration and admission process, as well as with financial aid and other administrative tasks [7].
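At their simplest, such administrative chatbots retrieve the closest known question and return a canned answer. The sketch below illustrates that retrieval idea with invented FAQs; it is not the implementation of the “Pounce” system mentioned above.

```python
# A toy sketch of the retrieval idea behind administrative chatbots (not the
# actual implementation of "Pounce"): fuzzily match a student's question to a
# known FAQ and return the canned answer. FAQs and the email are invented.
import difflib

faq = {
    "how do i apply for financial aid": "Submit the FAFSA by the posted deadline.",
    "when is the registration deadline": "Registration closes on August 15.",
    "how do i contact the admissions office": "Email admissions@example.edu.",
}

def answer(question):
    match = difflib.get_close_matches(question.lower(), faq.keys(), n=1, cutoff=0.5)
    return faq[match[0]] if match else "Let me connect you with a staff member."

print(answer("How can I apply for financial aid?"))
# -> "Submit the FAFSA by the posted deadline."
```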

In summary, applications of AI can positively impact students’ and teachers’ educational experiences and help them address instructional challenges and concerns. On the other hand, AI cannot be a substitute for human interaction [22, 47]. Students have a wide range of learning styles and needs. Although AI can be a time-saving and cognitive aid for teachers, it is but one tool in the teacher’s toolkit. Therefore, it is critical for teachers and students to understand the limits, potential risks, and ethical drawbacks of AI applications in education if they are to reap the benefits of AI and minimize the costs [11].

Ethical concerns and potential risks of AI applications in education

The ethical challenges and risks posed by AI systems seemingly run counter to marketing efforts that present algorithms to the public as if they are objective and value-neutral tools. In essence, algorithms reflect the values of their builders who hold positions of power [26]. Whenever people create algorithms, they also create a set of data that represent society’s historical and systemic biases, which ultimately transform into algorithmic bias. Even though the bias is embedded into the algorithmic model with no explicit intention, we can see various gender and racial biases in different AI-based platforms [54].

Considering the different forms of bias and ethical challenges of AI applications in K-12 settings, we will focus on problems of privacy, surveillance, autonomy, bias, and discrimination (see Fig. 1). However, it is important to acknowledge that educators will have different ethical concerns and challenges depending on their students’ grade and age of development. Where strategies and resources are recommended, we indicate the age and/or grade level of student(s) they are targeting (Fig. 2).

Fig. 1. Potential ethical and societal risks of AI applications in education

Fig. 2. Student work from the “YouTube Redesign” activity (MIT Media Lab, AI and Ethics Curriculum, p. 1, [45])

One of the biggest ethical issues surrounding the use of AI in K-12 education relates to the privacy concerns of students and teachers [47, 49, 54]. Privacy violations mainly occur as people expose excessive amounts of personal information on online platforms. Although legislation and standards exist to protect sensitive personal data, AI-based tech companies’ violations with respect to data access and security increase people’s privacy concerns [42, 54]. To address these concerns, AI systems ask for users’ consent to access their personal data. Although consent requests are designed to be protective measures and to help alleviate privacy concerns, many individuals give their consent without knowing or considering the extent of the information (metadata) they are sharing, such as the language spoken, racial identity, biographical data, and location [49]. Such uninformed sharing in effect undermines human agency and privacy. In other words, people’s agency diminishes as AI systems reduce introspective and independent thought [55]. Relatedly, scholars have raised the ethical issue of forcing students and parents to use these algorithms as part of their education, in effect requiring them to give up privacy [14, 48]. They really have no choice if these systems are required by public schools.

Another ethical concern surrounding the use of AI in K-12 education is surveillance or tracking systems which gather detailed information about the actions and preferences of students and teachers. Through algorithms and machine-learning models, AI tracking systems not only monitor users’ activities but also predict their future preferences and actions [47]. Surveillance mechanisms can be embedded into AI’s predictive systems to foresee students’ learning performances, strengths, weaknesses, and learning patterns. For instance, research suggests that teachers who use social networking sites (SNSs) for pedagogical purposes encounter a number of problems, such as concerns about the boundaries of privacy, friendship authority, responsibility, and availability [5]. While monitoring and patrolling students’ actions might be considered part of a teacher’s responsibility and a pedagogical tool to intervene in dangerous online situations (such as cyber-bullying or exposure to sexual content), such actions can also be seen as surveillance systems which are problematic in terms of threatening students’ privacy. Monitoring and tracking students’ online conversations and actions also may limit their participation in the learning event and make them feel unsafe to take ownership of their ideas. How can students feel secure and safe if they know that AI systems are used for surveilling and policing their thoughts and actions? [49]

Problems also emerge when surveillance systems trigger issues related to autonomy, more specifically, a person’s ability to act on his or her own interests and values. Predictive systems which are powered by algorithms jeopardize students’ and teachers’ autonomy and their ability to govern their own lives [46, 47]. The use of algorithms to make predictions about individuals’ actions based on their information raises questions about fairness and self-freedom [19]. Therefore, the risks of predictive analysis also include the perpetuation of existing bias and prejudices of social discrimination and stratification [42].

Finally, bias and discrimination are critical concerns in debates of AI ethics in K-12 education [6]. In AI platforms, the existing power structures and biases are embedded into machine-learning models [6]. Gender bias is one of the most apparent forms of this problem, revealed when students in language learning courses use AI to translate between a gender-specific language and one that is less so. For example, while Google Translate translated the Turkish equivalent of “She/he is a nurse” into the feminine form, it also translated the Turkish equivalent of “She/he is a doctor” into the masculine form [33]. This shows how AI models in language translation carry the societal biases and gender-specific stereotypes in their data [40]. Similarly, a number of problematic cases of racial bias are associated with AI’s facial recognition systems. Research shows that facial recognition software has misidentified a number of African American and Latino American people as convicted felons [42].

Additionally, biased decision-making algorithms reveal themselves throughout AI applications in K-12 education: personalized learning, automated assessment, SNSs, and predictive systems. Although the main promise of machine-learning models is increased accuracy and objectivity, recent incidents have revealed the contrary. For instance, England’s A-level and GCSE secondary-level examinations were cancelled due to the pandemic in the summer of 2020 [1, 57], and an alternative assessment method was implemented to determine students’ qualification grades. The grade standardization algorithm was produced by the regulator Ofqual. Because Ofqual’s algorithm based grades largely on schools’ previous examination results, thousands of students were shocked to receive unexpectedly low grades. Although a full discussion of the incident is beyond the scope of this article [51], it revealed how the score distribution favored students who attended private or independent schools, while students from underrepresented groups were hit hardest. Unfortunately, automated assessment algorithms have the potential to produce unfair and inconsistent results, disrupting students’ final scores and future careers [53].
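To see how a standardization scheme of this general kind can override individual performance, consider the deliberately simplified sketch below. It is not Ofqual’s actual algorithm; the rule (assign each student the grade at their rank in the school’s historical distribution) and all names and grades are invented for illustration.

```python
# A heavily simplified sketch, NOT Ofqual's actual algorithm, of why anchoring
# grades to a school's historical distribution can harm individuals. The rule
# and all names and grades below are invented.
def standardize(teacher_rank_order, historical_grades):
    """Give each student the grade at their rank in the school's history."""
    # Letter grades A-E sort best-to-worst alphabetically, so sorted() puts
    # the school's best historical grade first.
    best_first = sorted(historical_grades)
    return {student: best_first[rank]
            for rank, student in enumerate(teacher_rank_order)}

ranking = ["Ada", "Ben", "Cam", "Dee"]  # teacher ranks Ada as strongest
school_history = ["C", "C", "D", "E"]   # best grade ever awarded here: C

print(standardize(ranking, school_history))
# {'Ada': 'C', 'Ben': 'C', 'Cam': 'D', 'Dee': 'E'}: however well Ada performs,
# her school's history caps her at a C.
```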

Teaching and understanding AI and ethics in educational settings

These ethical concerns suggest an urgent need to introduce students and teachers to the ethical challenges surrounding AI applications in K-12 education and how to navigate them. To meet this need, different research groups and nonprofit organizations offer a number of open-access resources on AI and ethics. They provide instructional materials for students and teachers, such as lesson plans and hands-on activities, and professional learning materials for educators, such as open virtual learning sessions. Below, we describe and evaluate three resources: the “AI and Ethics” curriculum and the “AI and Data Privacy” workshop from the Massachusetts Institute of Technology (MIT) Media Lab, as well as Code.org’s “AI for Oceans” activity. For readers who seek to investigate additional approaches and resources for K-12 level AI and ethics instruction, see: (a) The Chinese University of Hong Kong (CUHK)’s AI for the Future Project (AI4Future) [18]; (b) IBM’s Educator’s AI Classroom Kit [30]; (c) Google’s Teachable Machine [25]; (d) the UK-based nonprofit organization Apps for Good [4]; and (e) Machine Learning for Kids [37].

"AI and Ethics Curriulum" for middle school students by MIT Media Lab

The MIT Media Lab team offers an open-access curriculum on AI and ethics for middle school students and teachers. Through a series of lesson plans and hands-on activities, teachers are guided to support students’ learning of the technical terminology of AI systems as well as the ethical and societal implications of AI [2]. The curriculum includes various lessons tied to learning objectives. One of the main learning goals is to introduce students to the basic components of AI, such as algorithms, datasets, and supervised machine-learning systems, all while underlining the problem of algorithmic bias [45]. For instance, in the activity “AI Bingo”, students are given bingo cards with various AI systems, such as an online search engine, a customer service bot, and a weather app. Students work collaboratively with their partners on these AI systems. In their AI Bingo chart, students try to identify what prediction the selected AI system makes and what dataset it uses. In that way, they become more familiar with the notions of dataset and prediction in the context of AI systems [45].

In the second investigation, “Algorithms as Opinions”, students think about algorithms as recipes: sets of instructions that modify an input to produce an output [45]. Initially, students are asked to write an algorithm to make the “best” peanut butter and jelly sandwich. They explore what it means to be “best” and see how their opinions of “best” are reflected in their algorithms. In this way, students figure out that algorithms can have various motives and goals. Following this activity, students work on the “Ethical Matrix”, building on the idea of algorithms as opinions [45]. During this investigation, students first refer back to the algorithms they developed for their “best” peanut butter and jelly sandwich. They discuss what counts as the “best” sandwich for themselves (most healthy, practical, delicious, etc.). Then, through their ethical matrix (chart), students identify different stakeholders (such as their parents, teacher, or doctor) who care about their peanut butter and jelly sandwich algorithm. In this way, the values and opinions of those stakeholders are also embedded in the algorithm. Students fill out an ethical matrix and look for where those values conflict or overlap with each other. This matrix is a great tool for students to recognize the different stakeholders in a system or society and to see how stakeholders’ values are built into, and weighed within, an algorithm.

The final investigation that teaches about the biased nature of algorithms is “Learning and Algorithmic Bias” [45]. During the investigation, students think further about the concept of classification. Using Google’s Teachable Machine tool [2], students explore supervised machine-learning systems. Students train a cat–dog classifier using two different datasets. While the first dataset over-represents cats, the second dataset represents cats and dogs equally and diversely [2]. Using these datasets, students compare the accuracy of the two classifiers and then discuss which dataset and outcome are fairer. This activity leads students into a discussion about the occurrence of bias in facial recognition algorithms and systems [2].
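The activity’s central point, that an under-represented class is classified less accurately, can be reproduced numerically. In the sketch below, random feature vectors stand in for cat and dog images; it illustrates the principle rather than the Teachable Machine tool itself.

```python
# A numeric stand-in for the classifier comparison in the activity above:
# train the same model on imbalanced vs. balanced data and compare accuracy
# on the under-represented class. Random vectors stand in for cat/dog images.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n_cats, n_dogs):
    # Two overlapping Gaussian clusters stand in for "cat" (0) and "dog" (1).
    cats = rng.normal(loc=0.0, scale=1.0, size=(n_cats, 5))
    dogs = rng.normal(loc=1.0, scale=1.0, size=(n_dogs, 5))
    return np.vstack([cats, dogs]), np.array([0] * n_cats + [1] * n_dogs)

X_test, y_test = make_data(500, 500)  # balanced test set

for name, counts in [("imbalanced", (450, 50)), ("balanced", (250, 250))]:
    X_train, y_train = make_data(*counts)
    preds = LogisticRegression().fit(X_train, y_train).predict(X_test)
    dog_accuracy = (preds[y_test == 1] == 1).mean()
    print(f"{name} training data: accuracy on dogs = {dog_accuracy:.2f}")
# The model trained on cat-heavy data typically misses far more dogs,
# mirroring how under-represented faces fare in facial recognition systems.
```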

In the rest of the curriculum, similar to the AI Bingo investigation, students work with their partners to identify the various AI systems within the YouTube platform (such as its recommender algorithm and advertisement-matching algorithm). Through the “YouTube Redesign” investigation, students redesign YouTube’s recommender system. They first identify stakeholders and their values in the system, and then use an ethical matrix to reflect on the goals of their YouTube recommendation algorithm [45]. Finally, in the “YouTube Socratic Seminar” activity, students read an abridged version of a Wall Street Journal article and participate in a Socratic seminar. The article was edited to shorten the text and to provide more accessible language for middle school students. Students discuss which stakeholders were most influential or significant in proposing changes to the YouTube Kids app and whether or not technologies like autoplay should ever exist. During their discussion, students engage with questions such as: “Which stakeholder is making the most change or has the most power?” and “Have you ever seen an inappropriate piece of content on YouTube? What did you do?” [45]

Overall, the MIT Media Lab’s AI and Ethics curriculum is a high-quality, open-access resource with which teachers can introduce middle school students to the risks and ethical implications of AI systems. The investigations described above involve students in collaborative, critical thinking activities that force them to wrestle with issues of bias and discrimination in AI, as well as with surveillance and autonomy in predictive systems and algorithmic bias.

“AI and Data Privacy” workshop series for K-9 students by MIT Media Lab

Another quality resource from the MIT Media Lab’s Personal Robots Group is a workshop series designed to teach students (between the ages of 7 and 14) about data privacy and to introduce them to designing and prototyping data privacy features. The group has made the content, materials, worksheets, and activities of the workshop series into an open-access online document, freely available to teachers [44].

The first workshop in the series is “Mystery YouTube Viewer: A Lesson on Data Privacy”. During the workshop, students engage with the question of what privacy and data mean [44]. They observe YouTube’s home page from the perspective of a mystery user. Using clues from the videos, students make predictions about what the characters in the videos might look like or where they might live. In a way, students imitate the predictions YouTube’s algorithms make about a user. Engaging with these questions and observations, students think further about why privacy and boundaries are important and how each algorithm will interpret us differently based on who creates the algorithm itself.

The second workshop in the series is “Designing Ads with Transparency: A Creative Workshop”. Through this workshop, students think further about the meaning, aim, and impact of advertising and the role of advertisements in our lives [44]. Students collaboratively create an advertisement using an everyday object. The objective is to make the advertisement as “transparent” as possible. To do that, students learn about the notions of malware and adware, as well as the components of YouTube advertisements (such as sponsored labels, logos, news sections, etc.). By the end of the workshop, students design their ads as posters and share them with their peers.

The final workshop in MIT’s AI and data privacy series is “Designing Privacy in Social Media Platforms”. This workshop is designed to teach students about YouTube, design, civics, and data privacy [44]. During the workshop, students create their own designs to solve one of the biggest challenges of the digital era: problems associated with online consent. The workshop allows students to learn more about privacy laws and how they impact youth in terms of media consumption. Students consider YouTube through the lens of the Children’s Online Privacy Protection Rule (COPPA). In this way, students reflect on one of the components of the legislation: how might students get parental permission (or verifiable consent)?

Such workshop resources seem promising in helping educate students and teachers about the ethical challenges of AI in education. Specifically, social media such as YouTube are widely used as a teaching and learning tool within K-12 classrooms and beyond them, in students’ everyday lives. These workshop resources may facilitate teachers’ and students’ knowledge of data privacy issues and support them in thinking further about how to protect privacy online. Moreover, educators seeking to implement such resources should consider engaging students in the larger question: who should own one’s data? Teaching students the underlying reasons for laws and facilitating debate on the extent to which they are just or not could help get at this question.

Investigation of “AI for Oceans” by Code.org

A third recommended resource for K-12 educators trying to navigate the ethical challenges of AI with their students comes from Code.org, a nonprofit organization focused on expanding students’ participation in computer science. Sponsored by Microsoft, Facebook, Amazon, Google, and other tech companies, Code.org aims to provide opportunities for K-12 students to learn about AI and machine-learning systems [20]. To support students (grades 3–12) in learning about AI, algorithms, machine learning, and bias, the organization offers an activity called “AI for Oceans”, in which students train their own machine-learning models.

The activity is provided as an open-access tutorial for teachers to help their students explore how to train, model, and classify data, as well as to understand how human bias plays a role in machine-learning systems. During the activity, students first classify objects as either “fish” or “not fish” in an attempt to remove trash from the ocean. Then, they expand their training dataset by including other sea creatures that belong underwater. Throughout the activity, students also watch and interact with a number of visuals and video tutorials. With the support of their teachers, they discuss machine learning, the steps involved in training data and its influence, and the formation and risks of biased data [20].

Future directions for research and teaching on AI and ethics

In this paper, we provided an overview of the possibilities and the potential ethical and societal risks of AI integration in education. To help address these risks, we highlighted several instructional strategies and resources for practitioners seeking to integrate AI applications in K-12 education and/or instruct students about the ethical issues they pose. These instructional materials have the potential to help students and teachers reap the powerful benefits of AI while navigating ethical challenges, especially those related to privacy and bias. Existing research on AI in education provides insight into supporting students’ understanding and use of AI [2, 13]; however, research on how to develop K-12 teachers’ instructional practices regarding AI and ethics is still in its infancy.

Moreover, current resources, as demonstrated above, mainly address the privacy- and bias-related ethical and societal concerns of AI. Conducting more exploratory and critical research on teachers’ and students’ surveillance and autonomy concerns will be important for designing future resources. In addition, curriculum developers and workshop designers might consider centering culturally relevant and responsive pedagogies (by focusing on students’ funds of knowledge, family background, and cultural experiences) while creating instructional materials that address surveillance, privacy, autonomy, and bias. In such student-centered learning environments, students voice their own cultural and contextual experiences while trying to critique and disrupt existing power structures and cultivate their social awareness [24, 36].

Finally, as scholars in teacher education and educational technology, we believe that educating future generations of diverse citizens to participate in the ethical use and development of AI will require more professional development for K-12 teachers (both pre-service and in-service). For instance, through sustained professional learning sessions, teachers could engage with suggested curriculum resources and teaching strategies as well as build a community of practice where they can share and critically reflect on their experiences with other teachers. Further research on such reflective teaching practices and students’ sense-making processes in relation to AI and ethics lessons will be essential to developing curriculum materials and pedagogies relevant to a broad base of educators and students.

Funding

This work was supported by the Graduate School at Michigan State University, College of Education Summer Research Fellowship.

Data and code availability

Not applicable.

Declarations

The authors declare that they have no conflict of interest.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Selin Akgun, Email: akgunsel@msu.edu

Christine Greenhow, Email: greenhow@msu.edu


Artificial Intelligence Education Ethical Problems and Solutions

Sijing Li, Wang Lan. Published in International Conference on…, 1 August 2018. DOI: 10.1109/ICCSE.2018.8468773


The challenges and opportunities of Artificial Intelligence in education


Artificial Intelligence (AI) is producing new teaching and learning solutions that are currently being tested globally. These solutions require advanced infrastructures and an ecosystem of thriving innovators. How does that affect countries around the world, and especially developing nations? Should AI be a priority to tackle in order to reduce the digital and social divide?

These are some of the questions explored in a Working Paper entitled ‘Artificial Intelligence in Education: Challenges and Opportunities for Sustainable Development’ presented by UNESCO and ProFuturo at Mobile Learning Week 2019. It features case studies on how AI technology is helping education systems use data to improve educational equity and quality.

Concrete examples from countries such as China, Brazil, and South Africa illustrate AI’s contribution to learning outcomes, access to education, and teacher support. Case studies from countries including the United Arab Emirates, Bhutan, and Chile show how AI is helping with data analytics in education management.

The Paper also explores the curriculum and standards dimension of AI, with examples from the European Union, Singapore and the Republic of Korea on how learners and teachers are preparing for an AI-saturated world.

Beyond the opportunities, the Paper also addresses the challenges and policy implications of introducing AI in education and preparing students for an AI-powered future. The challenges presented revolve around:

  • Developing a comprehensive view of public policy on AI for sustainable development: The complexity of the technological conditions needed to advance in this field requires the alignment of multiple factors and institutions. Public policies have to work in partnership at international and national levels to create an ecosystem of AI that serves sustainable development.
  • Ensuring inclusion and equity for AI in education: The least developed countries are at risk of suffering new technological, economic, and social divides with the development of AI. Obstacles such as gaps in basic technological infrastructure must be addressed to establish the basic conditions for implementing new strategies that take advantage of AI to improve learning.
  • Preparing teachers for an AI-powered education: Teachers must learn new digital skills to use AI in a pedagogical and meaningful way, and AI developers must learn how teachers work and create solutions that are sustainable in real-life environments.
  • Developing quality and inclusive data systems: If the world is headed towards the datafication of education, the quality of data should be the chief concern. It’s essential to develop state capabilities to improve data collection and systematization. AI developments should be an opportunity to increase the importance of data in educational system management.
  • Enhancing research on AI in education: While it can reasonably be expected that research on AI in education will increase in the coming years, it is nevertheless worth recalling the difficulties that the education sector has had in taking stock of educational research in a significant way, both for practice and policy-making.
  • Dealing with ethics and transparency in data collection, use and dissemination: AI opens many ethical concerns regarding access to education systems, recommendations to individual students, personal data concentration, liability, impact on work, data privacy, and ownership of the data feeding algorithms. AI regulation will require public discussion on ethics, accountability, transparency, and security.

The key discussions taking place at Mobile Learning Week 2019 address these challenges, offering the international educational community, governments and other stakeholders a unique opportunity to explore together the opportunities and threats of AI in all areas of education.


The Ethics of Artificial Intelligence in Education

The Ethics of Artificial Intelligence in Education identifies and confronts key ethical issues generated over years of AI research, development, and deployment in learning contexts. Adaptive, automated, and data-driven education systems are increasingly being implemented in universities, schools, and corporate training worldwide, but the ethical consequences of engaging with these technologies remain unexplored. Featuring expert perspectives from inside and outside the AIED scholarly community, this book provides AI researchers, learning scientists, educational technologists, and others with questions, frameworks, guidelines, policies, and regulations to ensure the positive impact of artificial intelligence in learning.

TABLE OF CONTENTS

  • Introduction (19 pages)
  • Part I (125 pages): Introduction to Part I
  • Chapter 1 (22 pages): Learning to Learn Differently
  • Chapter 2 (27 pages): Educational Research and AIED
  • Chapter 3 (17 pages): AI in Education
  • Chapter 4 (22 pages): Student-Centred Requirements for the Ethics of AI in Education
  • Chapter 5 (33 pages): Pitfalls and Pathways for Trustworthy Artificial Intelligence in Education
  • Part II (135 pages): Introduction to Part II
  • Chapter 6 (23 pages): Equity and Artificial Intelligence in Education
  • Chapter 7 (29 pages): Algorithmic Fairness in Education
  • Chapter 8 (37 pages): Beyond “Fairness”
  • Chapter 9 (15 pages): The Overlapping Ethical Imperatives of Human Teachers and Their Artificially Intelligent Assistants
  • Chapter 10 (16 pages): Integrating AI Ethics Across the Computing Curriculum
  • Conclusions (11 pages)


Artificial Intelligence in Education: Ethical Issues and its Regulations

School of Education, Shaanxi Normal University, China

School of Education, Shaanxi Normal University, China and College of Humanities and Foreign Languages, Xi'an University of Science and Technology, China


ICBDE '22: Proceedings of the 5th International Conference on Big Data and Education


After the birth of human beings, biological evolution essentially ceased. Since then, the evolution of human civilization has been initiated, with curiosity driving science and the desire for control driving technology. The development of science and technology has unfolded in two dimensions, outward and inward: the outward from the solar system and the galaxy to the universe; the inward pointing to humanity itself, from the movement of life to the movement of consciousness, giving rise to artificial intelligence (AI). For the first time, the “human-like” nature of AI has shaken the social activity in which the world is the stage and humans are the main actors, leading to the ethical issues of AI, including the ethical issues of AI in education (AIEd). The ethical issues of AIEd are problems caused by the transformation of the actors through the implanting of humanoids into the human educational community. The main issues include the ethical problem of AI itself, the adaptation of new humans and new norms of human behavior, and the stage for the activities of the actors: the field of AIEd. The regulation of the ethical issues of AIEd should aim at the pursuit of human well-being and focus on practice, mainly through customs transfer, norms construction, and legislative constraints.

Published: 26 July 2022. DOI: https://dl.acm.org/doi/10.1145/3524383.3524406

Author tags: AI in Education; Artificial Intelligence; Educational Ethics; Ethical Issues; Regulations

Exploring the ethics of artificial intelligence in K-12 education

Artificial intelligence is everywhere: from reading emails to referencing a search engine. It is in the classroom too, such as with personalized learning or assessment systems. In recent months, AI in the K-12 classroom became even more prevalent as learning shifted to and, in some cases, remained online due to COVID-19. But what about its societal and ethical implications?

Two Michigan State University scholars explored the use of AI in K-12 classrooms, including possible benefits and consequences. Here’s what their research found.


“Artificial intelligence can help students get quicker and helpful feedback and can decrease workload for teachers, among other affordances,” said Selin Akgun, a doctoral student in the College of Education’s Curriculum, Instruction and Teacher Education (CITE) program and lead author on the paper, published in AI and Ethics. For example, teachers may use social media to encourage conversations amongst students or use platforms to support instruction in hybrid or mixed-ability classrooms. “There are a lot of affordances, but we also wanted to discuss concerns.”

Akgun and co-author Associate Professor Christine Greenhow identified four key areas teachers should consider when using AI in their classroom.

  • Privacy . Many AI systems ask users to consent to the program using and accessing personal data in ways they may or may not understand. Consider the “Terms & Conditions” often shared when downloading a new software. Users may just click “Accept” without fully reading and digesting how their data may be used. Or, if they do read and understand it, there are other layered ways the program could be using their data, like the system knowing their location. Moreover, if platforms are required as part of curricula, some argue parents and children are being “forced” to share their data.
  • Surveillance. AI systems may also track how a user interacts with them; the resulting data are used to personalize the experience. In education, this may include systems identifying strengths, weaknesses, and patterns in a student’s performance. While teachers do this to some degree in their teaching, Akgun and Greenhow say, “monitoring and tracking students’ online conversations and actions also may limit [student] participation … and make them feel unsafe to take ownership for their ideas.”
  • Autonomy. Because AI systems rely on algorithms—such as predicting how a student may perform on a test—students and teachers may find it difficult to retain a sense of independence in their work. This reliance also, the scholars say, “raise[s] questions about fairness and self-freedom.”
  • Bias and discrimination. These can appear in AI systems in a variety of ways, such as through gendered language translation (“She is a nurse,” but “he is a doctor”). Whenever algorithms are created, the scholars say, the makers also build “a set of data that represent society’s historical and systemic biases, which ultimately transform into algorithmic biases. Even though the bias is embedded into the algorithmic model with no explicit intention, we can see various gender and racial biases in different AI-based platforms.”

“Artificial intelligence can manipulate us in ways we don’t always think about,” reiterated Greenhow, co-author and a faculty member in Educational Psychology and Educational Technology at MSU.

The publication came as a result of the College of Education’s Mind, Media and Learning graduate course, which encourages students to develop a paper based on an area of research interest.

“We want to ultimately cultivate different pedagogies, materials and better support for teachers and students,” said Akgun, who is also a research assistant in MSU’s CREATE for STEM Institute on the ML-PBL project.

As one way to assist in that process, the paper outlines three free resources for teachers to use in the classroom. Akgun and Greenhow chose these—a few of the many available—because they provide several options, including collaborative and hands-on activities for students. The paper also offers considerations and suggestions for where education can go from here.

“The questions this article raised became increasingly important during the COVID-19 pandemic,” Greenhow said. “More and more online tools were being integrated into the classroom—sometimes on the fly and with little time to think. This paper raised important ethical considerations to think about as we move forward with those applications.”

More from our researchers

Akgun was featured on The Sci-Files podcast in February 2021, talking about educational research during a pandemic.

Greenhow recently answered questions about online and classroom learning during our second school year dealing with the pandemic. Read her expertise, and check out her website for even more.

Practical Ethical Issues for Artificial Intelligence in Education

Paulo Roberto Cordova

2022, Communications in Computer and Information Science

Due to the increasing use of Artificial Intelligence (AI) in Education, as well as in other areas, different ethical questions have been raised in recent years. Despite this, only a few practical proposals related to ethics in AI for Education can be found in scientific databases. For this reason, aiming to help fill this gap, this work proposes an ethics-by-design solution for teaching and learning processes, using a top-down approach for Artificial Moral Agents (AMA) and following the assumptions defended by Values Alignment (VA) in the AI area. Using the classic Beliefs, Desires, and Intentions (BDI) model, we propose an architecture that implements a hybrid solution applying both the utilitarian and the deontological ethical frameworks. While the deontological dimension of the agent guides its behavior by means of ethical principles, its utilitarian dimension helps the AMA solve ethical dilemmas. With this, we expect to contribute to the development of a safer and more reliable AI for the Education area.
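To illustrate the hybrid idea in miniature, the sketch below (our illustration, not the architecture proposed in the paper; all rules, action fields, and utility values are hypothetical) filters candidate actions through deontological principles first and then uses a utilitarian score to resolve whatever choices remain:

```python
# Minimal sketch of a hybrid deontological/utilitarian action selector.
# Illustrative only: the rules and utilities below are hypothetical and
# stand in for the far richer BDI-based AMA described in the abstract.

DEONTOLOGICAL_RULES = [
    lambda a: not a.get("shares_student_data", False),  # never expose student data
    lambda a: not a.get("deceives_learner", False),     # never deceive the learner
]

def permitted(action):
    """An action is permitted only if it violates no deontological rule."""
    return all(rule(action) for rule in DEONTOLOGICAL_RULES)

def choose_action(candidates):
    """Principles filter the options; expected utility breaks the ties."""
    allowed = [a for a in candidates if permitted(a)]
    if not allowed:
        return None  # no ethically permissible action is available
    return max(allowed, key=lambda a: a.get("expected_utility", 0.0))

actions = [
    {"name": "send_hint", "expected_utility": 0.8},
    {"name": "sell_profile", "expected_utility": 0.9, "shares_student_data": True},
]
print(choose_action(actions)["name"])  # -> send_hint
```

Note the ordering: the deontological filter runs before any utility comparison, so a high-utility but impermissible action can never win, which matches the division of labor the abstract describes.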

Related Papers

International Journal of Development Research

PAULO ROBERTO CORDOVA

As the area of artificial intelligence (AI) evolves and reaches new spaces, showing itself capable of promoting real changes in the way people interact, solve problems, and make decisions, it becomes more urgent to make it predictable, responsible, and reliable. Thus, solutions for value alignment (VA) in AI have been proposed in recent years. The present study proposes a model of artificial moral pedagogical agents (AMPA), adopting a top-down approach and the classic BDI model. In this article, we describe why the top-down approach is best suited to educational settings. Next, we explain in more detail the internal structure of the proposed model. Finally, we present some discussions on the topic and a possible situation in which such an agent could be applicable.

A Proposal for Artificial Moral Pedagogical Agents

While artificial intelligence technologies continue to proliferate in all areas of contemporary life, researchers are looking for ways to make them safe for users. In the teaching-learning context, this is a trickier problem because it must be clear which principles or ethical frameworks are guiding processes supported by artificial intelligence. After all, people's education is at stake. This inquiry presents an approach to value alignment in educational contexts using artificial pedagogical moral agents (AMPA) adopting the classic BDI model. In addition, we propose a top-down approach, explaining why the bottom-up and hybrid approaches would not be advisable on educational grounds.

International Journal of Artificial Intelligence in Education

Robert Aiken

Boris Grozdanoff

In 2015, Savulescu and Maslen argued, influentially, that a moral artificial intelligence (MAI) could be developed with the main function of morally advising human agents. Their proposal is one of the first practically oriented solutions suggested in the broader context of so-called ethical AI (EAI). EAI, however, is most often framed in terms of an AI being ethical, and much less often, if at all, grasped as an engine for ethical comprehension, advice and, let alone, action execution based on such advice. The status quo, both in the academic world and in the social layers of government, including state and industry management, is focused exclusively on defining a set of norms, widely accepted by policy makers and professional academics, as ethical. This set, allowing for non-trivial differences in the interests and goals of different states, is to be employed later in programming the actions of AI-based and AI-run electronic systems at large. The foreseen and unforeseen consequences for the life and, according to some, even the survival of humanity could hardly be overestimated; they will be anything but trivial for the future of human society. Thus, it is of timely importance to speed up the formulation of both theoretical systems of ethical AI and their practical implementation. Savulescu and Maslen's suggestion of MAI follows this trend in a novel and not particularly traditional way. While most professional ethicists would perhaps agree that an AI system could, in theory, have something to contribute to the millennia-old philosophical discipline of ethics, few of them, if any, would accept that it would do better than humans at this task. I, having tremendous intellectual respect for the achievements of the history of ethics, do disagree, however, and perhaps along the same lines as the proposed MAI. Where I differ is in the modesty of the MAI proposal, which follows the natural speed of evolution of academic ideas and not at all the pace of modern technological development; it is thus inadequate as a timely response to the pressing needs of yesterday's AI, which is fully indifferent to the presence or absence of an ethical system for AI. In what follows, I offer first a critical overview of the ethical AI theme and, second, a detailed proposal for an ethical AI agent.

Proceedings of the AAAI Conference on Artificial Intelligence

Judy Goldsmith

We argue that it is crucial to the future of AI that our students be trained in multiple complementary modes of ethical reasoning, so that they may make ethical design and implementation choices and ethical career decisions, and so that their software will be programmed to take into account the complexities of acting ethically in the world.

Mrinalini Luthra

The increasing pervasiveness, autonomy, and complexity of artificially intelligent technologies in human society has challenged the traditional conception of moral responsibility. To this extent, it has been proposed that the existing notion of moral responsibility be expanded in order to account for the morality of technologies. Machine ethics is the field of study dedicated to studying the computational entity as a moral entity, whose goal is to develop technologies capable of autonomous moral reasoning, namely artificial moral agents. This thesis begins by surveying the basic assumptions and definitions underlying this conception of artificial moral agency. It is followed by an investigation into why (and how) society would benefit from the development of such agents. Finally, it explores the main approaches to the development of artificial moral agents. In effect, this research serves as a critique of the emerging field of machine ethics.

Hanyang Law Review

A Suggestion on the Ethics in Artificial Intelligence, by Yong Eui Kim. Now and in the future, AI is inevitable and necessary for our lives in almost every respect. Our use of AI, and AI itself (as an independent actor in the future), generate many consequences, some of which are good and valuable, as intended and desired by us human beings, but some of which are bad, harmful, or dangerous, whether intended or not. Ever since AI came into use, there have been a variety of discussions on ethics that may serve as guidelines for regulating its development and use. Some of these ethics have not yet become enforceable norms, while others already exist as part of regulation enforceable under the power of governments. The designers and developers of such ethics range from individuals to international organizations. Almost none of the available AI ethics sufficiently satisfy the requirements, needs, and hopes of society's members at the local, national, or international level. They lack mechanisms for ensuring all stakeholders' participation in developing the ethics and for achieving such key objectives as accountability, explainability, traceability, freedom from bias, and privacy protection in the development, use, and improvement of AI. Based on a review and analysis of the currently available AI ethics, this article tries to find and suggest a method to design, develop, and continuously improve AI ethics through a National AI Ethics Platform, where all relevant stakeholders participate and exchange ideas and opinions, together with the AI itself as a device that helps, with its great capacity for dealing with big data, to process and operate the ethics through simulations utilizing all the input data provided by the participants and the situations surrounding them, not in a static mode but in a dynamic, continuing mode.

International Journal for Educational Integrity

Irene Glendinning

2021 IEEE Conference on Norbert Wiener in the 21st Century (21CW)

Humans have invented intelligent machinery to enhance their rational decision-making, which is why it has been named 'augmented intelligence'. The use of artificial intelligence (AI) technology is increasing enormously with every passing year, and it is becoming part of our daily lives. We are using this technology not only as a tool to enhance our rationality but also elevating it into an autonomous ethical agent for our future society. Norbert Wiener envisaged 'Cybernetics' with a view of a brain-machine interface to augment human beings' biological rationality. Being an autonomous ethical agent presupposes 'agency' in moral decision-making. According to contemporary theories of agency, AI robots might be entitled to some minimal rational agency. However, that minimal agency might not be adequate for a fully autonomous ethical agent's performance in the future. If we plan to deploy them as ethical agents for the future society, it will be difficult for us to judge their actual standing as moral agents. It is well known that any kind of moral agency presupposes consciousness and mental representations, which cannot be produced synthetically today. We can only anticipate that this milestone will be achieved by AI scientists before long, which will further help them triumph over 'the problem of ethical agency in AI'. Philosophers are currently probing pre-existing ethical theories to build a guidance framework for AI robots and to construct a tangible overview of artificial moral agency, although no unanimous solution is available yet. This will land us in another conflict between biological moral agency and autonomous ethical agency, leaving us in a baffled state. Creating rational and ethical AI machines will be a fundamental future research problem for the AI field. This paper aims to investigate 'the problem of moral agency in AI' from a philosophical standpoint and to survey the relevant philosophical discussions in search of a resolution.

Alice Pavaloiu , Utku Köse

Artificial Intelligence (AI) is a powerful science that employs approaches, methods, and techniques strong enough to solve real-world problems once considered unsolvable. Because of its unstoppable rise toward the future, there are also discussions about its ethics and safety. Shaping an AI-friendly environment for people and a people-friendly environment for AI can be a possible answer for finding a shared context of values for both humans and robots. In this context, the objective of this paper is to address the ethical issues of AI and explore the moral dilemmas that arise from ethical algorithms, whether from pre-set or acquired values. In addition, the paper also focuses on the subject of AI safety. Overall, the paper briefly analyzes the concerns and potential solutions to the ethical issues presented and aims to increase readers' awareness of AI safety as a related research interest.



Artificial intelligence in education: Addressing ethical challenges in K-12 settings

  • PMID: 34790956
  • PMCID: PMC8455229
  • DOI: 10.1007/s43681-021-00096-7


Keywords: Artificial intelligence; Ethics; K-12 education; Teacher education.

© The Author(s), under exclusive licence to Springer Nature Switzerland AG 2021.


Conflict of interest statement

The authors declare that they have no conflict of interest.

Figure: Potential ethical and societal risks of AI applications in education

Figure: Student work from the activity of “Youtube Redesign” (MIT Media Lab, AI and…)




Addressing the Ethical Dilemmas in Artificial Intelligence Education – Innovative Solutions for a Balanced Future

Intelligence has always been a fascinating subject for humans. From ancient times, philosophers and scientists have tried to unravel the mysteries of the human mind and create machines that can replicate human intelligence. With the advent of artificial intelligence (AI), this dream has become closer to reality than ever before. AI has the potential to revolutionize various industries, and education is no exception.

Artificial intelligence in education offers many benefits, such as personalized learning, real-time feedback, and improved accessibility. However, along with these advantages comes a set of ethical challenges that must be addressed. One of the main problems is the potential for bias in AI algorithms. These algorithms are trained on large datasets, which can unintentionally encode biases present in society, leading to unfair treatment of certain individuals or groups.

Another ethical concern is privacy and data security. AI systems in education collect massive amounts of data about students’ performance, behavior, and personal information. It is vital to ensure that this data is handled with care and is protected from unauthorized access or misuse. Additionally, there is the issue of transparency and accountability. AI-powered educational tools often make decisions or recommendations based on complex algorithms, making it challenging to understand how these decisions were reached and who is responsible for them.

Understanding the Ethical Challenges

In the field of artificial intelligence education, there are several ethical challenges that need to be addressed. These challenges arise due to the nature of artificial intelligence and the impact it has on society.

1. Problems with bias

One of the main ethical problems in AI education is the issue of bias. Artificial intelligence systems can unintentionally inherit the biases present in the data they are trained on. This can lead to discriminatory outcomes and reinforce societal biases. It is important to develop solutions that mitigate bias and promote fairness in AI systems.

2. Lack of transparency

Another ethical challenge in AI education is the lack of transparency in how AI systems work. Many AI algorithms are complex and difficult to understand, which makes it challenging to determine how decisions are being made. This lack of transparency can lead to doubts about the fairness or accuracy of AI systems. It is crucial to develop solutions that provide transparency and allow for accountability in AI systems.

3. Ethical decision making

Artificial intelligence systems often need to make ethical decisions, such as determining which data to collect, how to process it, and how to use it. These decisions can have significant impacts on individuals and society as a whole. Therefore, it is important to develop ethical frameworks and guidelines that can guide the decision-making process in AI systems.

  • One possible solution is to involve ethicists and philosophers in the development of AI systems, to ensure that ethical considerations are integrated into the design and implementation process.
  • Another solution is to establish regulatory frameworks that require AI systems to meet certain ethical standards and undergo ethical audits.

In conclusion, the ethical challenges in artificial intelligence education are complex and multifaceted. It is important to recognize these challenges and develop solutions that promote fairness, transparency, and ethical decision making in AI systems.

Addressing Bias in AI Education

As the field of artificial intelligence continues to advance, it becomes increasingly important to address the ethical problems that arise in AI education. One of the major challenges in this domain is the issue of bias.

Bias in AI education refers to the tendency of AI systems to favor certain groups or perspectives over others. This can result in unfair treatment and discrimination, perpetuating existing social inequalities. It is crucial to recognize and rectify these biases to ensure a fair and inclusive AI education.

Identifying Bias

The first step in addressing bias in AI education is to identify its presence. This involves examining AI algorithms and models for any indicators of discriminatory patterns or unequal representation of certain groups.

One common source of bias is the data used to train AI systems. If the training data is not diverse and representative of different demographics, it can result in biased outcomes. Therefore, it is important to collect and curate data that encompasses a wide range of perspectives, identities, and experiences.
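As a rough starting point, a minimal sketch such as the following (hypothetical file and column names; real audits are far more involved) can reveal how groups are represented in a training set:

```python
# Quick representation check on a (hypothetical) training dataset.
import pandas as pd

df = pd.read_csv("training_data.csv")  # assumed file with a demographic column

# Share of each demographic group in the training data.
shares = df["demographic_group"].value_counts(normalize=True)
print(shares)

# Flag groups that fall below a chosen representation threshold.
THRESHOLD = 0.10  # assumption: each group should be at least 10% of the data
underrepresented = shares[shares < THRESHOLD]
if not underrepresented.empty:
    print("Underrepresented groups:", list(underrepresented.index))
```

A check like this does not prove a model is fair, but a skewed result is an early warning that the curation step described above is needed.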

Addressing Bias

Once bias has been identified, appropriate solutions can be devised to address it. One solution is to enhance diversity and inclusion in AI education by including a wide range of voices and perspectives in the curriculum. This can help students develop a more holistic understanding of AI and its impact on society.

Another solution is to implement stringent quality control measures to ensure that AI algorithms and models are unbiased and fair. This can involve rigorous testing, validation, and auditing processes to detect and eliminate any biases that may have been inadvertently introduced into the system.

Furthermore, AI education should emphasize the importance of ethical considerations and responsible AI development. Students should be taught to critically analyze and question the biases that exist within AI systems and work towards creating more inclusive and fair AI solutions.

In conclusion, addressing bias in AI education is essential to ensure the development of ethical and responsible AI systems. By identifying and rectifying biases, fostering diversity and inclusion, and emphasizing ethical considerations, we can create a more equitable and just AI education.

The Importance of Transparency

In the field of artificial intelligence education, transparency plays a crucial role in addressing the ethical problems and finding effective solutions. As AI technology becomes more advanced and integrated into various aspects of our lives, it is important to ensure that the decision-making processes and algorithms used are transparent and accountable.

Transparency helps create trust and understanding between the users and creators of AI systems. It allows individuals to have a clear understanding of how AI algorithms work, how they are trained, and what data is being used. This is especially important in education, where students and educators need to be able to trust the AI systems that are being used to provide them with quality learning experiences.

Addressing Algorithmic Bias

Transparency is essential in addressing algorithmic bias, which is a major ethical problem in AI education. AI algorithms are trained on vast amounts of data, and if this data is biased or reflects societal prejudices, the algorithms can perpetuate and amplify these biases. By making the training process and data used transparent, educators and developers can identify and rectify any biases that may exist in the system.

Accountability and User Control

Transparency also enables accountability and user control. By understanding how AI systems work, users can hold developers and educators accountable for any negative outcomes or biases that may arise. Additionally, transparency allows users to have control over their data and privacy. They can make informed choices about the data they provide to AI systems and understand how their data is being used and protected.

Benefits of Transparency in AI Education

  • Builds trust between users and creators
  • Addresses algorithmic bias
  • Enables accountability and user control
  • Empowers users to make informed choices about their data

In conclusion, transparency is of utmost importance in the field of artificial intelligence education. It helps address ethical problems such as algorithmic bias and enables accountability and user control. By promoting transparency, we can ensure that AI systems in education are fair, unbiased, and trustworthy.

Data Privacy and Security Concerns

As the use of artificial intelligence (AI) becomes more prevalent in education, concerns about data privacy and security are also increasing. The collection and storage of student data by educational institutions and AI systems raise ethical questions about how this data is used and protected.

One of the main concerns is the potential for misuse or unauthorized access to sensitive student information. AI systems can collect a vast amount of personal data, including academic performance, behavior patterns, and even biometric information. This wealth of data creates a significant risk if it falls into the wrong hands, potentially leading to identity theft or other malicious activities.

Data breaches are another major issue that can compromise student privacy. Educational institutions must have robust security measures in place to protect student data from external threats. This includes secure storage systems, encryption protocols, and regular security audits.
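As one small illustration, student records can be encrypted at rest with the widely used Python `cryptography` package; the sketch below simplifies key management, which in practice belongs in a secrets manager rather than in code:

```python
# Minimal sketch of encrypting a student record at rest
# (pip install cryptography; key handling simplified for illustration).
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice: load from a secrets manager
cipher = Fernet(key)

record = b'{"student_id": "s-1042", "reading_level": "B2"}'
token = cipher.encrypt(record)           # store only the ciphertext
print(cipher.decrypt(token) == record)   # -> True
```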

Another concern relates to the ethical use of student data. AI systems rely on large datasets to train their algorithms and make accurate predictions or recommendations. However, the use of student data for AI purposes must be done ethically and with the informed consent of students and their parents or guardians.

Transparency and informed consent are crucial in maintaining trust and ensuring that student data is used responsibly. Educational institutions should clearly communicate their data collection practices, the purposes for which the data is used, and provide options for individuals to opt-in or opt-out of data collection and sharing.
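One way to make opt-in consent enforceable in software is to gate every collection call on a per-student consent record, as in this sketch (the field names are hypothetical; real systems must also satisfy applicable law, such as FERPA or GDPR, and institutional policy):

```python
# Sketch of per-student consent flags checked before any data collection.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConsentRecord:
    student_id: str
    analytics_opt_in: bool = False  # default: no collection without consent
    sharing_opt_in: bool = False

def collect_interaction_event(consent: ConsentRecord, event: dict) -> Optional[dict]:
    """Record an event only if the student (or guardian) has opted in."""
    if not consent.analytics_opt_in:
        return None  # no consent: the event is dropped, not stored
    return {"student_id": consent.student_id, **event}

consent = ConsentRecord(student_id="s-1042", analytics_opt_in=True)
print(collect_interaction_event(consent, {"action": "viewed_hint"}))
```

Defaulting the flags to False makes non-collection the baseline, so a missing or stale consent record can never silently authorize data gathering.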

To address these concerns, it is essential for educational institutions and AI developers to prioritize data privacy and security. They must establish clear policies and guidelines for data storage, access, and usage. Regular audits and oversight should also be in place to ensure compliance with these policies and address any potential ethical breaches.

By addressing data privacy and security concerns, the integration of artificial intelligence into education can provide significant benefits. It can enhance personalized learning, improve student outcomes, and help identify areas for improvement in educational systems. However, it is crucial that these advancements are made with a strong commitment to ethical practices and respect for student privacy rights.

Fostering Inclusivity in AI

As artificial intelligence continues to play a significant role in education, it is crucial to address the ethical problems and solutions that arise in this field. One of the key challenges is fostering inclusivity in AI education.

Education should be accessible to all individuals, regardless of their background or abilities. However, without proper attention, AI can reinforce existing biases and widen the gap between privileged and marginalized groups.

To foster inclusivity in AI, it is important to develop and implement ethical guidelines and practices. This can involve:

  • Creating diverse and inclusive datasets: AI systems rely on large amounts of data, and using biased datasets can perpetuate discrimination. By ensuring that datasets include a diverse range of individuals, AI education can promote inclusivity.
  • Implementing ethical algorithms: Algorithms should be designed to avoid discrimination, prejudice, and bias. AI developers must work to eliminate biases that may exist within their algorithms and ensure fair and equitable outcomes.
  • Providing accessible AI tools and resources: It is crucial to make AI education accessible to individuals with disabilities or those from marginalized communities. This can involve developing tools that are compatible with various assistive technologies or providing resources in multiple languages.
  • Encouraging diverse participation: It is essential to foster diversity in the AI field itself. This can involve creating mentorship programs, scholarships, and initiatives that promote the involvement of individuals from underrepresented groups in the development and implementation of AI education.

By implementing these solutions, AI education can move towards a more inclusive and equitable future. Fostering inclusivity in AI will not only ensure equal opportunities for all learners but also contribute to the development of ethical and responsible AI systems.

Ensuring Fairness in AI Education

As artificial intelligence (AI) continues to develop and become more integrated into education, it is crucial to address the potential problems and ensure fairness in its implementation. While AI offers numerous benefits in education, such as personalized learning experiences and efficient administrative tasks, it also poses ethical challenges that must be navigated. One such challenge is the potential for bias in AI algorithms.

AI algorithms are trained on large datasets, which can inadvertently include biased information. If these biased datasets are used to train AI models for educational purposes, it can lead to biased outcomes. For example, if a language processing algorithm is trained on text that includes biased language or stereotypes, it may exhibit gender or racial biases in its responses. This can have detrimental effects on students, perpetuating stereotypes or marginalizing certain groups.

To ensure fairness in AI education, it is essential to carefully select and curate the datasets used for training. This involves thorough examination of the data sources and removing any biased or discriminatory content. Additionally, ongoing monitoring and evaluation of the AI algorithms are necessary to detect and address any biases that may arise during usage. This can be done through regular audits and feedback from users.
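One common audit check compares the rate of favorable model outcomes across groups, in the spirit of demographic parity. The sketch below uses made-up predictions and an arbitrary ten-point threshold, both assumptions for illustration:

```python
# Simple fairness audit: compare favorable-outcome rates across two groups.
import numpy as np

preds = np.array([1, 1, 1, 0, 1, 0, 0, 0])  # 1 = favorable outcome (toy data)
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

rate_a = preds[group == "a"].mean()
rate_b = preds[group == "b"].mean()
gap = abs(rate_a - rate_b)
print(f"group a: {rate_a:.2f}, group b: {rate_b:.2f}, gap: {gap:.2f}")

if gap > 0.10:  # assumption: flag gaps above 10 percentage points
    print("Potential disparity: review the model and its training data.")
```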

Another solution to promoting fairness in AI education is through diverse representation. It is important to have diverse teams involved in the development and implementation of AI algorithms and systems. By including individuals with different backgrounds, perspectives, and experiences, biases can be identified and eliminated more effectively. Diverse representation also ensures that AI technologies consider the needs and perspectives of all students, promoting inclusivity in education.

Furthermore, education and awareness around AI and its ethical implications are crucial. Educational institutions should provide training and resources for both educators and students on understanding and addressing biases in AI. Students should be empowered to critically analyze the outputs of AI algorithms and question any potential biases or unfairness they may detect. This fosters a culture of transparency and accountability in AI education.

In conclusion, ensuring fairness in AI education is paramount to prevent the perpetuation of biases and to promote inclusive learning environments. By carefully curating datasets, promoting diverse representation, and fostering education and awareness, educational institutions can mitigate the ethical challenges associated with AI and create a more equitable future for AI education.

Implications of AI for Labor Market

The rapid advancements in artificial intelligence (AI) have the potential to significantly impact the labor market, leading to both ethical problems and possible solutions.

Ethical Problems

One ethical problem that arises from the integration of AI into the labor market is the displacement of human workers. As AI technology becomes more advanced, there is a concern that many jobs traditionally performed by humans may become automated, leading to a loss of employment opportunities for many individuals. This raises questions about socioeconomic inequality and the distribution of wealth in society.

Another ethical problem is the potential for bias in AI algorithms. AI systems are trained on large datasets, which can inadvertently reflect existing biases and perpetuate discriminatory practices. This can result in unfair hiring practices or biased decision-making, reinforcing existing inequalities and discrimination in the labor market.

Possible Solutions

One possible solution to address the displacement of human workers is through education and retraining programs. By providing opportunities for current workers to learn new skills and transition into emerging fields, societies can mitigate the negative impact of AI on the labor market. This includes investing in vocational training programs, online courses, and reskilling initiatives to ensure that individuals can adapt to the changing job landscape.

To mitigate bias in AI algorithms, it is essential to prioritize diversity and inclusivity in the development and deployment of AI systems. This means increasing representation in the field of AI, ensuring that diverse perspectives are taken into account when designing algorithms, and regularly evaluating and auditing AI systems for bias. Additionally, regulatory frameworks can be put in place to enforce fairness and transparency in AI algorithms used for decision-making in the labor market.

Ethical Problems and Possible Solutions

  • Displacement of human workers: education and retraining programs
  • Bias in AI algorithms: prioritize diversity and inclusivity

Exploring AI’s Impact on Society

Artificial intelligence (AI) has emerged as a powerful technology that has the potential to greatly impact society in various ways. However, AI also brings forth a range of ethical problems that need to be addressed in order to ensure its responsible and beneficial use.

  • Privacy concerns: AI systems often rely on collecting and analyzing large amounts of personal data, raising serious privacy concerns.
  • Algorithmic bias: AI systems can perpetuate and amplify existing biases in society, leading to discriminatory outcomes.
  • Job displacement: The widespread adoption of AI may lead to job displacement, impacting individuals and communities.
  • Transparency and accountability: AI algorithms can be complex and opaque, making it difficult to understand their decisions and to hold anyone accountable for them.

Impact on Society

AI has the potential to greatly impact society in various spheres:

  • Healthcare: AI can enable faster and more accurate diagnosis, treatment, and drug discovery.
  • Transportation: Autonomous vehicles powered by AI can enhance safety and efficiency on the roads.
  • Education: AI can personalize learning experiences and provide tailored recommendations for students.
  • Social media: AI algorithms can shape what content users see, potentially influencing opinions and behaviors.

In order to address the ethical problems associated with AI and maximize its positive impact on society, various solutions need to be considered. These solutions may include:

  • Developing robust privacy laws and regulations to protect individuals’ personal data.
  • Ensuring diverse and inclusive teams are involved in the development of AI systems to mitigate algorithmic bias.
  • Investing in retraining and reskilling programs to mitigate the impact of job displacement due to AI.
  • Advocating for transparency and explainability in AI algorithms to promote accountability.

By exploring the ethical problems and implementing these solutions, society can harness the power of artificial intelligence in a responsible and beneficial manner.

Ethical Design Principles in AI Education

As the field of artificial intelligence continues to advance, education in this area is becoming increasingly important. However, there are a number of ethical problems that can arise with the integration of AI into education systems. These problems need to be addressed in order to ensure that AI education is conducted in a responsible and ethical manner. In this section, we will explore some ethical design principles that can be applied to AI education to mitigate these problems and promote positive ethical outcomes.

  • Transparency: AI education systems should be transparent in their design and operation. It should be clear to students and educators how the AI algorithms work and make decisions. This transparency will help to build trust and promote ethical behavior.
  • Fairness: AI education systems should be designed in a way that promotes fairness for all students, regardless of their backgrounds or characteristics. Measures should be taken to prevent biases and discrimination in AI algorithms and decision-making processes.
  • Privacy: AI education systems should respect the privacy of students and educators. Personal data should be protected and used only for legitimate educational purposes. Policies and practices should be in place to safeguard sensitive information.
  • Accountability: Those responsible for the design and implementation of AI education systems should be held accountable for any negative consequences that may arise. There should be clear mechanisms in place to address and rectify any ethical problems that occur.
  • Education: Students and educators should be educated about the ethical implications of AI in education. They should be provided with the knowledge and skills to critically evaluate and use AI technologies in an ethical manner.

By following these ethical design principles, we can address the problems that arise from the integration of artificial intelligence into education systems, and ensure that AI education is conducted in a responsible and ethical manner. This will help to promote positive outcomes for students and educators, and contribute to the advancement of AI education as a whole.

The Role of Ethics Training

As artificial intelligence continues to advance and become integrated into various aspects of society, the need for ethics training in AI education becomes increasingly important. It is crucial that individuals working in the field of artificial intelligence are equipped with the knowledge and skills to navigate the ethical challenges that arise.

Education plays a vital role in shaping the ethical framework of future AI professionals. By incorporating ethics training into AI education, students are exposed to a range of ethical issues related to artificial intelligence. This training allows them to develop a deeper understanding of the potential implications and unintended consequences of their work.

One of the key goals of ethics training in AI education is to instill a sense of responsibility and accountability. AI professionals should be aware of the potential biases and discriminatory practices that can be embedded in AI algorithms. Providing students with ethical guidelines and frameworks helps them learn to identify and mitigate these issues, working towards fair and unbiased AI systems.

In addition to fostering ethical awareness, ethics training can also cultivate critical thinking skills. AI professionals need to be able to question the ethical implications of their work and consider alternative approaches. This requires a level of ethical reasoning and analysis that can be developed through education and training.

Furthermore, ethics training can serve as a platform for fostering collaboration and dialogue. By bringing together students from diverse backgrounds and disciplines, AI education can create an environment where ethical discussions and debates can take place. This interdisciplinary approach can help students gain different perspectives and challenge their own assumptions, leading to more comprehensive and thoughtful solutions to ethical problems.

In conclusion, ethics training plays a critical role in AI education by equipping students with the necessary skills and knowledge to address the ethical challenges posed by artificial intelligence. By fostering ethical awareness, critical thinking, and collaboration, this training can help ensure that the future of AI is guided by ethical principles and values.

Regulatory Frameworks for AI Education

The rapid advancements in artificial intelligence (AI) technology have presented various challenges in the field of education. While AI offers numerous opportunities for revolutionizing the learning process, it also brings about ethical concerns and potential problems that need to be addressed. In order to mitigate these issues, the establishment of regulatory frameworks for AI education is crucial.

One of the main problems associated with AI education is the potential bias and discrimination in the training data used to develop AI models. If these models are trained on data that is biased or discriminatory, it can perpetuate and amplify existing social inequalities. Therefore, regulatory frameworks should outline guidelines for ensuring fairness and equity in AI education, including the use of diverse training data and evaluation processes.

AI’s Role in Decision-making Processes

In the field of education, artificial intelligence (AI) has become an increasingly common tool used to assist in decision-making processes. AI algorithms are being developed and integrated into educational systems to help analyze and make decisions based on large amounts of data. However, the use of AI in decision-making processes raises a number of ethical problems that need to be addressed.

The Ethical Problems Arising from AI in Education

One of the main ethical problems with AI in education is the potential for bias. AI algorithms are developed using data from past experiences and decisions, which may contain inherent bias. If these biases are not identified and addressed, they can perpetuate and amplify existing biases in education systems.

Furthermore, the collection and use of personal data by AI systems raises concerns about privacy and security. AI algorithms often require access to personal information in order to make accurate predictions or recommendations. It is important to establish robust safeguards to protect this data and ensure that it is used responsibly and ethically.

Solutions for Ethical Challenges in AI Education

To address the ethical problems associated with AI in education, several solutions can be implemented. First, it is essential to enhance transparency and accountability in AI systems. This can be achieved by making the decision-making processes of AI algorithms more explainable and understandable to users. Additionally, implementing mechanisms for auditing and monitoring AI systems can help ensure that any biases or errors are identified and corrected.
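As one concrete (and deliberately simple) auditing mechanism, every automated recommendation can be appended to a log together with the inputs that produced it, so reviewers can later reconstruct individual decisions. The file name and fields below are hypothetical:

```python
# Sketch of an audit trail for AI-assisted decisions in an educational system.
import json
import time

AUDIT_LOG = "ai_decisions.jsonl"  # hypothetical append-only log file

def log_decision(student_id: str, inputs: dict, recommendation: str) -> None:
    """Append one decision, with the features the model saw, to the audit log."""
    entry = {
        "timestamp": time.time(),
        "student_id": student_id,
        "inputs": inputs,
        "recommendation": recommendation,
        "model_version": "v1.3",  # assumption: models are versioned
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_decision("s-1042", {"quiz_avg": 0.62, "time_on_task_min": 41},
             "assign_review_module")
```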

Second, diversity and inclusivity should be key considerations in the development and deployment of AI systems in education. This involves ensuring that diverse perspectives are represented in the data used to train AI algorithms and that the algorithms themselves are designed to account for individual differences and promote fairness.

Lastly, education and awareness play a crucial role in addressing the ethical challenges of AI in education. Educating students, teachers, and administrators about the potential biases and ethical implications of AI systems can empower them to critically evaluate and make informed decisions about the use of AI in education.

In conclusion, the use of AI in decision-making processes in education brings both great potential and ethical challenges. By acknowledging and addressing these challenges, we can harness the power of AI to enhance education while ensuring fairness, accountability, and privacy.

AI Ethics in Classroom Settings

Artificial intelligence (AI) is playing an increasingly prominent role in education, assisting teachers in various ways. However, the implementation of AI in the classroom also raises ethical concerns and challenges that need to be addressed.

Ethical Problems:

One of the main ethical problems in incorporating AI in education involves privacy. As AI systems collect and analyze data on students, there is a risk of compromising the privacy of students and their families. Additionally, there is a concern that AI algorithms may perpetuate bias and discrimination, potentially leading to unequal treatment of students.

Potential Solutions:

To address these ethical problems, it is crucial to prioritize data privacy and protection. Schools and educational institutions should implement strict policies regarding data collection, storage, and usage, ensuring that student information is safeguarded. Additionally, AI algorithms need to be regularly tested and audited for bias and fairness to mitigate the risk of discrimination.

Furthermore, it is important to educate students about AI ethics and the potential limitations of AI systems. By fostering a critical understanding of AI and its ethical implications, students can become active participants in shaping the responsible use of AI in their education.

In conclusion, while AI has the potential to revolutionize education, it is essential to address the ethical concerns associated with its integration into the classroom. By implementing robust privacy policies, monitoring AI algorithms for bias, and educating students about AI ethics, we can ensure that AI is used responsibly and ethically in the context of education.

Collaboration between Humans and AI

With the rapid advancement of artificial intelligence in education, there has been a growing interest in exploring the collaboration between humans and AI. While AI can bring various benefits to the field of education, there are also ethical problems that need to be addressed.

Benefits of Collaboration

The collaboration between humans and AI in education has the potential to enhance the learning experience for students. AI can provide personalized learning materials and adaptive feedback, catering to the individual needs and preferences of each student. It can also assist in automating administrative tasks, freeing up more time for teachers to focus on students’ individual needs.

However, the collaboration between humans and AI also raises ethical problems that need to be carefully considered. One of the main concerns is the potential for bias in AI algorithms, which can perpetuate inequalities and discrimination in education. It is essential to ensure that AI systems are trained on diverse and unbiased data to avoid perpetuating existing socio-economic disparities.

Another ethical concern is the privacy and security of student data. AI systems often require access to vast amounts of personal data to provide personalized learning experiences. It is crucial to establish strict guidelines and regulations to protect students’ privacy and prevent any misuse or unauthorized access to their data.

To address these ethical problems, collaboration between humans and AI should be approached with transparency and accountability. AI algorithms should be audited regularly to detect and mitigate biases. Additionally, there should be clear guidelines and regulations in place to safeguard student data and ensure its secure handling.

Furthermore, there should be ongoing training and professional development for educators to understand how to effectively collaborate with AI systems. This will empower teachers to leverage AI technology in a responsible and ethical manner while ensuring that they retain their critical role in the educational process.

In conclusion, the collaboration between humans and AI in education has the potential to revolutionize the learning experience. However, it is crucial to address the ethical problems associated with artificial intelligence and find solutions that prioritize fairness, privacy, and security. By doing so, we can create an educational environment that harnesses the benefits of AI while maintaining the values and principles of education.

Ensuring Accountability in AI Education

In the field of artificial intelligence education, ethical problems often arise due to the rapid advancements and limited regulation. It is essential to establish mechanisms for ensuring accountability to address these issues and promote responsible AI education.

One of the main challenges is the potential for bias in AI algorithms used in educational settings. These biases can lead to discrimination and unfair practices, affecting students’ learning experiences and outcomes. To tackle this problem, educators and developers must prioritize the development and implementation of unbiased algorithms. Additionally, establishing diverse and inclusive development teams can help mitigate unconscious biases and ensure fairness in AI education.

Transparency is another crucial aspect of accountability in AI education. It is essential for educators and developers to be transparent about the data sources, algorithms, and processes used in AI systems. This transparency enables stakeholders to understand how AI systems make decisions and helps identify any potential biases or ethical concerns. Educators should also disclose the limitations of AI systems to students to ensure they are aware of the potential pitfalls and uncertainties associated with AI education.

The implementation of comprehensive oversight and auditing mechanisms is vital to ensure accountability in AI education. These mechanisms can involve independent bodies or committees responsible for evaluating and monitoring AI systems used in educational settings. They can assess AI algorithms for fairness, accuracy, and ethical compliance. Regular audits and evaluations can help identify and rectify any issues promptly.

Education and training in ethics should be an integral part of AI education to foster responsible and ethical AI practices. It is crucial for both educators and students to understand the potential ethical implications of AI systems. This education can help students navigate ethical challenges, promote critical thinking, and instill a sense of responsibility in developing and using AI technology.

In conclusion, ensuring accountability in AI education is essential to address the ethical problems that arise due to the rapid advancements in artificial intelligence. By prioritizing unbiased algorithms, promoting transparency, implementing oversight mechanisms, and providing education in ethics, the field of AI education can promote responsible and ethical practices.

Building Trust in AI Systems

Artificial intelligence (AI) has the potential to revolutionize various industries and improve our daily lives, but with this power comes ethical problems that need to be addressed. Trust is a fundamental aspect in the adoption and acceptance of AI systems, as users need to feel confident in the decisions made by these intelligent machines.

The Problems

One of the key issues in building trust in AI systems is the lack of transparency. Many AI algorithms and models are considered “black boxes,” meaning their decision-making process is not easily explainable or understandable to humans. This lack of transparency can result in a loss of trust, as users may feel uncomfortable relying on a system they do not fully understand.

Another problem is bias in AI systems. Bias can be inadvertently introduced through the data that these systems are trained on, or it can be a result of biased programming. Either way, biased AI systems can lead to unfair or discriminatory outcomes, eroding trust in the technology.

The Solutions

To address the lack of transparency, efforts should be made to develop explainable AI systems. This involves designing algorithms and models that can provide clear explanations for their decisions, making the decision-making process more understandable and trustworthy. Techniques such as interpretable machine learning and rule-based AI can help in achieving explainability.
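As a small illustration of the first of these techniques, the sketch below fits an inherently interpretable model whose learned coefficients can be read directly, in contrast to a black box. It assumes scikit-learn is installed, and the two features and the tiny dataset are invented for the example, not drawn from any real system.

```python
from sklearn.linear_model import LogisticRegression

# Toy data with hypothetical features: hours studied and prior score (0-1).
X = [[1, 0.40], [2, 0.50], [3, 0.60], [4, 0.70], [5, 0.80], [6, 0.90]]
y = [0, 0, 0, 1, 1, 1]  # 1 = predicted to pass

model = LogisticRegression().fit(X, y)

# The coefficients state how each input pushes the decision, which is
# exactly the kind of explanation a black-box model cannot offer.
for name, coef in zip(["hours_studied", "prior_score"], model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```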

To tackle bias in AI systems, it is crucial to ensure that the datasets used for training are diverse and representative of different demographics. Additionally, continuous monitoring and auditing of AI systems can help identify and address any biases that may arise. Adopting ethical guidelines and standards for AI development and deployment can also play a significant role in mitigating bias and building trust.

Education and awareness are also essential in building trust. Informing users about the limitations, potential biases, and risks of AI systems can help manage expectations and prevent misunderstandings. Additionally, fostering a culture of ethics and responsibility within the AI community can contribute to the development of trustworthy AI systems.

In conclusion, building trust in AI systems requires addressing the problems of transparency and bias. By developing explainable AI systems, ensuring diverse and representative datasets, and promoting education and awareness, we can work towards fostering trust in artificial intelligence.

Questions and answers

What are the ethical problems in artificial intelligence education?

Some of the ethical problems in artificial intelligence education include bias and discrimination in algorithms, invasions of privacy, and the potential loss of jobs due to automation.

How can bias and discrimination be addressed in artificial intelligence education?

Bias and discrimination in AI education can be addressed by creating diverse and inclusive datasets, using explainable AI methods, and implementing ethical guidelines and regulations.

What are the potential solutions to the privacy issues in AI education?

Potential solutions to privacy issues in AI education include implementing strict data protection measures, anonymizing data, and obtaining informed consent from individuals before collecting and using their data.

What impact can artificial intelligence education have on job loss?

Artificial intelligence education has the potential to automate certain tasks and jobs, leading to job displacement. However, it can also create new job opportunities in fields related to AI development and maintenance.

How can ethical issues in AI education be addressed?

Ethical issues in AI education can be addressed by integrating ethics into AI curriculum, promoting transparency and accountability, and fostering discussions and debates on ethical implications of AI.

What are some ethical problems in artificial intelligence education?

Some ethical problems in artificial intelligence education include issues of privacy and data protection, biases in AI algorithms, and the potential for job displacement.

How can privacy and data protection be addressed in artificial intelligence education?

Privacy and data protection can be addressed in artificial intelligence education through the implementation of strict data handling and storage practices, as well as teaching students about the legal and ethical implications of collecting and using personal data.

What are some potential solutions for the problem of biases in AI algorithms in education?

Some potential solutions for the problem of biases in AI algorithms in education include diversifying the teams that develop AI algorithms, conducting regular audits to identify and address biases, and promoting transparency and accountability in AI systems.

How can the potential for job displacement be mitigated in the field of artificial intelligence education?

The potential for job displacement in the field of artificial intelligence education can be mitigated by providing retraining and upskilling opportunities for individuals whose jobs are at risk, as well as focusing on the development and implementation of AI systems that complement human abilities rather than replacing them.

What are the long-term implications of artificial intelligence education?

The long-term implications of artificial intelligence education include advancements in various industries, increased efficiency and productivity, as well as potential changes in the job market and the need for continuous learning and adaptation to new technologies.


4 Things to Know About AI’s ‘Murky’ Ethics


Overworked teachers and stressed-out high schoolers are turning to artificial intelligence to lighten their workloads.

But they aren’t sure just how much they can trust the technology—and they see plenty of ethical gray areas and potential for long-term problems with AI.

How are both groups navigating the ethics of this new technology, and what can school districts do to help them make the most of it, responsibly?

That’s what Jennifer Rubin, a senior researcher at foundry10, an organization focused on improving learning, set out to discover last year. She and her team conducted small focus groups on AI ethics with a total of 15 teachers nationwide as well as 33 high school students.

Rubin’s research is scheduled to be presented at the International Society for Technology in Education’s annual conference later this month in Denver.

Here are four big takeaways from her team’s extensive interviews with students and teachers:

1. Teachers see potential for generative AI tools to lighten their workload, but they also see big problems

Teachers said they dabble with using AI tools like ChatGPT to help with tasks such as lesson planning or creating quizzes. But many educators aren’t sure how much they can trust the information AI generates, or were unhappy with the quality of the responses they received, Rubin said.

The teachers “raised a lot of concerns [about] information credibility,” Rubin said. “They also found that some of the information from ChatGPT was really antiquated, or wasn’t aligned with learning standards,” and therefore wasn’t particularly useful.

Teachers are also worried that students might become overly reliant on AI tools to complete their writing assignments and would “therefore not develop the critical thinking skills that will be important” in their future careers, Rubin said.

2. Teachers and students need to understand the technology’s strengths and weaknesses

There’s a perception that adults understand how AI works and know how to use the tech responsibly.

But that’s “not the case,” Rubin said. That’s why school and district leaders “should also think about ethical-use guidelines for teachers” as well as students.

Teachers have big ethical questions about which tasks can be outsourced to AI, Rubin added. For instance, most teachers interviewed by the researcher saw using AI to grade student work or even offer feedback as an “ethically murky area because of the importance of human connection in how we deliver feedback to students in regards to their written work,” Rubin said.

And some teachers reverted to using pen and paper rather than digital technologies so that students couldn’t use AI tools to cheat. That frustrated students who are accustomed to taking notes on a digital device, and it runs contrary to what many experts recommend.

“AI might have this unintended backlash where some teachers within our focus groups were actually taking away the use of technology within the classroom altogether, in order to get around the potential for academic dishonesty,” Rubin said.

3. Students have a more nuanced perspective on AI than you might expect

The high schoolers Rubin and her team talked to don’t see AI as the technological equivalent of a classmate who can write their papers for them.

Instead, they use AI tools for the same reason adults do: To cope with a stressful, overwhelming workload.

Teenagers talked about “having an extremely busy schedule with schoolwork, extracurriculars, working after school,” Rubin said. Any conversation about student use of AI needs to be grounded in how students use these tools to “help alleviate some of that pressure,” she said.

For the most part, high schoolers use AI for help in research and writing for their humanities classes, as opposed to math and science, Rubin said. They might use it to brainstorm essay topics, to get feedback on a thesis statement for a paper, or to help smooth out grammar and word choices. Most said they were not using it for wholesale plagiarism.

Students were more likely to rely on AI if they felt that they were doing the same assignment over and over and had already “mastered that skill or have done it enough repeatedly,” Rubin said.

4. Students need to be part of the process in crafting ethical use guidelines for their schools

Students have their own ethical concerns about AI, Rubin said. For instance, “they’re really worried about the murkiness and unfairness that some students are using it and others aren’t and they’re receiving grades on something that can impact their future,” Rubin said.

Students told researchers they wanted guidance on how to use AI ethically and responsibly but weren’t getting that advice from their teachers or schools.

“There’s a lot of policing” for plagiarism, Rubin said, “but not a lot of productive conversation in classrooms with teachers and adults.”

Students “want to understand what the ethical boundaries of using ChatGPT and other generative AI tools are,” Rubin said. “They want to have guidelines and policies around what this could look like for them. And yet they were not, at the time these focus groups [happened], receiving that from their teachers or their districts, and even their parents.”


Ethics guidelines for trustworthy AI

On 8 April 2019, the High-Level Expert Group on AI presented Ethics Guidelines for Trustworthy Artificial Intelligence. This followed the publication of the guidelines' first draft in December 2018 on which more than 500 comments were received through an open consultation.

According to the Guidelines, trustworthy AI should be:

(1) lawful - respecting all applicable laws and regulations

(2) ethical - respecting ethical principles and values

(3) robust - both from a technical perspective while taking into account its social environment


The Guidelines put forward a set of 7 key requirements that AI systems should meet in order to be deemed trustworthy, and a specific assessment list aims to help verify the application of each of them (a simplified sketch of how a team might track such a checklist follows the list):

  • Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.
  • Technical robustness and safety: AI systems need to be resilient and secure. They need to be safe, ensuring a fallback plan in case something goes wrong, as well as being accurate, reliable and reproducible, so that unintentional harm can be minimized and prevented.
  • Privacy and data governance: besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data.
  • Transparency: the data, system and AI business models should be transparent. Traceability mechanisms can help achieve this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.
  • Diversity, non-discrimination and fairness: unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.
  • Societal and environmental well-being: AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly. Moreover, they should take into account the environment, including other living beings, and their social and societal impact should be carefully considered.
  • Accountability: mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes, plays a key role therein, especially in critical applications. Moreover, adequate and accessible redress should be ensured.
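As a rough illustration, the sketch below encodes the seven requirements as a simple self-assessment record that a development team could track over time. It is a hypothetical simplification; the official ALTAI checklist described later remains the authoritative instrument.

```python
# Hypothetical, simplified self-assessment inspired by the 7 requirements.
REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

def open_items(answers: dict) -> list:
    """Return the requirements the team has not yet evidenced."""
    return [r for r in REQUIREMENTS if not answers.get(r, False)]

# Example: a tutoring system with two requirements still unaddressed.
answers = {r: True for r in REQUIREMENTS}
answers["Transparency"] = False
answers["Accountability"] = False
print("Open items:", open_items(answers))
```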

The AI HLEG has also prepared a document which elaborates on a Definition of Artificial Intelligence used for the purpose of the Guidelines.


Piloting process

The document also provides an assessment list that operationalises the key requirements and offers guidance for implementing them in practice. Starting from 26 June, this assessment list underwent a piloting process in which all stakeholders were invited to test it and provide practical feedback on how it could be improved.

Feedback was received through different tracks:

  • An open survey or “quantitative analysis” sent to all those who registered for the piloting
  • In-depth interviews with a number of representative organisations to gather more detailed feedback for different sectors 
  • Continuous possibility to upload feedback and best practices through the European AI Alliance

The piloting phase closed on 1 December 2019.

Based on the feedback received, the AI HLEG presented the final Assessment List for Trustworthy AI (ALTAI) in July 2020. ALTAI is a practical tool that translates the Ethics Guidelines into an accessible and dynamic (self-assessment) checklist. The checklist can be used by developers and deployers of AI who want to implement the key requirements in practice. This new list is available as a prototype web-based tool and in PDF format.


Original article | Open access | Published: 19 June 2024

Knowledge, attitudes, and perceived ethics regarding the use of ChatGPT among Generation Z university students

Benicio Gonzalo Acosta-Enriquez, Marco Agustín Arbulú Ballesteros, Carmen Graciela Arbulu Perez Vargas, Milca Naara Orellana Ulloa, Cristian Raymound Gutiérrez Ulloa, Johanna Micaela Pizarro Romero, Néstor Daniel Gutiérrez Jaramillo, Héctor Ulises Cuenca Orellana, Diego Xavier Ayala Anzoátegui & Carlos López Roca

International Journal for Educational Integrity, volume 20, article number 10 (2024)

Abstract

Artificial intelligence (AI) has been integrated into higher education (HE), offering numerous benefits and transforming teaching and learning. Since its launch, ChatGPT has become the most popular learning model among Generation Z college students in HE. This study aimed to assess the knowledge, concerns, attitudes, and ethics of using ChatGPT among Generation Z college students in HE in Peru. An online survey was administered to 201 HE students with prior experience using ChatGPT for academic activities. Two of the six proposed hypotheses were confirmed: Perceived Ethics (B = 0.856) and Student Concerns (B = 0.802). The findings suggest that HE students’ knowledge and positive attitudes toward ChatGPT do not guarantee its effective adoption and use. It is important to investigate how attitudes of optimism, skepticism, or apathy toward AI develop and how these attitudes influence the intention to use technologies such as ChatGPT in HE settings. The dependence on ChatGPT raises ethical concerns that must be addressed with responsible use programs in HE. No sex or age differences were found in the relationship between the use of ChatGPT and perceived ethics among HE students; however, further studies with diverse HE samples are needed to confirm this relationship. To promote the ethical use of ChatGPT in HE, institutions must develop comprehensive training programs, guidelines, and policies that address issues such as academic integrity, privacy, and misinformation. These initiatives should aim to educate students and university teachers on the responsible use of ChatGPT and other AI-based tools, fostering a culture of ethical adoption of AI to leverage its benefits and mitigate its potential risks, such as a lack of academic integrity.

Introduction

Artificial intelligence (AI) is increasingly being integrated into higher education, offering numerous benefits and transforming teaching and learning experiences (Essien et al. 2020 ; Kuka et al. 2022 ). The use of AI technologies in education allows for the automation of processes and the development of innovative learning solutions (Nikolaeva et al. n.d. ; Rath et al. 2023 ). AI can improve student learning outcomes by providing personalized recommendations and feedback (Kuka et al. 2022 ). It also has the potential to enhance the role of teachers and promote the development of digital competencies (Ligorio 2022 ). However, the implementation of AI in higher education comes with challenges, such as the need to address diversity and inclusion, reduce socioeconomic barriers, and ensure the ethical use of AI (Isaac et al. 2023 ). As AI continues to advance, it is important for educators and institutions to adapt and equip students with the necessary skills for the digital age (Kuka et al. 2022 ).

Globally, the integration of AI in academia poses a concerning challenge that is not exclusive to any country or region but reflects a global trend toward digitalization and automation in education (Crompton and Burke 2023 ). Various studies have revealed that the use of the Chat Generative Pretrained Transformer (ChatGPT) by university students offers opportunities such as providing personalized assistance and support, particularly for those facing language barriers or other challenges (Hassan 2023 ). Students perceive the ChatGPT as a tool that supports autonomous learning, helping them with academic writing, language teaching, and in-class learning (Hassan 2023 ; Karakose and Tülübaş 2023 ). Likewise, the ChatGPT can enhance teaching and learning activities by providing creative ideas, solutions, and personalized learning environments, for example, by developing course content that resonates with diverse student needs and learning styles. This tailoring to individual preferences facilitates engagement and fosters a deeper understanding of complex subjects. Furthermore, teachers are increasingly relying on AI to generate innovative teaching materials and methods that incorporate multimedia elements and interactive activities, which enrich the learning experience and appeal to various learning modalities. It can also assist teachers in assessing student work and designing assessment rubrics (Hassan 2023 ; Karakose and Tülübaş 2023 ).

Academic integrity, a fundamental principle that encompasses dedication to honesty, fairness, trust, responsibility, and respect in all academic endeavors (Holden et al. 2021 ), as well as the practice of researching and completing academic work with equity and coherence and adhering to the highest ethical standards (Guerrero-Dib et al. 2020 ), has become more critical than ever in the current context. This urgency arises as the modern technological revolution and the increasing popularity of artificial intelligence chatbots such as ChatGPT have facilitated access to information and content generation (Bin-Nashwan et al. 2023 ).

The use of ChatGPT in academic settings raises concerns about academic integrity, as students may become overly reliant on the tool, leading to a decline in higher-order thinking skills such as critical thinking, creativity, and problem solving (Črček & Patekar 2023a ; Putra et al. 2023 ). One of the most significant risks associated with ChatGPT is plagiarism, as students may use the tool to generate entire assignments or substantial portions of their work without properly acknowledging the source or engaging in original thought (Črček & Patekar 2023a ; Putra et al. 2023 ). This form of academic misconduct can occur when students rely on ChatGPT for the complete development of tasks, effectively passing off AI-generated content as their own work. Moreover, students have expressed concerns about the potential for cheating, the spread of misinformation, and fairness issues related to the use of ChatGPT in academic contexts (Famaye et al. 2023 ).

Singh et al. ( 2023 ) revealed that students believe that universities should provide clearer guidelines and better education on how and where ChatGPT can be used for learning activities. Therefore, it is essential that educational institutions establish regulations and guidelines to ensure the responsible and ethical use of ChatGPT by university students (Zeb et al. 2024a ). Furthermore, ChatGPT can enhance the teaching-learning process, but its successful implementation requires that teachers be trained for proper use (Montenegro-Rueda et al. 2023 ).

Consequently, teachers and policymakers must balance the benefits of the ChatGPT with the need to maintain ethical practices that promote critical thinking, originality, and integrity among students (Farhi et al. 2023 ).

Moreover, significant changes in how students perceive and use the ChatGPT are observed internationally, reflecting cultural, educational, and regulatory differences (Roberts et al. 2023 ). In the U.S., concerns have focused on academic integrity, with debates on how ChatGPT can be used for original content generation versus the potential facilitation of plagiarism (Kleebayoon and Wiwanitkit 2023 ; Sarkar 2023 ). In the United Kingdom, the focus is on educational quality and equity. There is growing concern about whether the use of the ChatGPT could deepen the gap between students who have access to advanced technology and those who do not (Roberts et al. 2023 ). With its vast student population, India faces unique challenges related to access (Taylor et al. 2023 ). Here, the concern is how to democratize access to technologies such as the ChatGPT to ensure that all students, regardless of their location or resources, can benefit from it. In India, this is reflected in the effort to integrate AI into its vast digital infrastructure, which faces unique challenges in terms of security and regulation. This initiative highlights the importance of adapting emerging technology to diverse social and economic contexts (Gupta and Guglani 2023 ). In China, the debate revolves around privacy and data security. The integration of the ChatGPT into the Chinese educational system raises questions about how student data are handled and protected in an artificial intelligence (AI) environment (Ming and Bacon 2023 ). Germany, with its strong emphasis on technical and vocational education, is interested in how ChatGPT can be used to enhance specific skills and practical applications, maintaining an ethical and quality balance (von Garrel and Mayer 2023 ). Finally, in Australia, concerns have focused on how to integrate the ChatGPT in a way that complements and enriches traditional teaching methods without replacing human contact and experiential learning (Prem 2019 ).

The widespread adoption of ChatGPT by Generation Z university students has sparked debate among researchers (Singh et al. 2023; Gundu and Chibaya 2023; Montenegro-Rueda et al. 2023). Several studies have examined student attitudes toward using ChatGPT as a learning tool and have found a high level of positive attitudes toward its use (Ajlouni et al. 2023a). Furthermore, it has been demonstrated that ChatGPT can generate positive emotions (β = 0.418***), which influence the intention to use the tool frequently (Acosta-Enriquez et al. 2024). Studies also indicate that ChatGPT has the potential to facilitate the learning process (Ajlouni et al. 2023). However, there are doubts about the accuracy of the data produced by the ChatGPT, and some students feel uncomfortable using the platform (Ajlouni et al. 2023). It is important that educational institutions provide guidelines and training for students on how and where ChatGPT can be used for learning activities (Singh et al. 2023). Additionally, to implement ChatGPT successfully in education, it is necessary to train teachers on how to use the tool properly (Montenegro-Rueda et al. 2023). Overall, ChatGPT can enhance the educational experience if used responsibly and in conjunction with specific strategies (Gundu and Chibaya 2023).

Therefore, the overall goal of this study is to assess the knowledge, concerns, attitudes, and perceptions of ethics regarding the use of ChatGPT among Generation Z university students in the Peruvian context. This study is relevant because it analyses how knowledge and attitudes influence the use of the ChatGPT and how the use of this tool influences the perceived ethics and concerns of Peruvian students. In Peru, studies on the adoption of AI and its impact on higher education are still scarce. Consequently, it was relevant to conduct this study because it allowed for understanding from the students’ perspective how knowledge about ChatGPT and attitudes toward using ChatGPT influence their experiences. Additionally, it was possible to determine how the use of ChatGPT affects the perceived ethics and concerns of student users regarding ChatGPT.

Theoretical framework

ChatGPT and higher education

ChatGPT is a language model developed by OpenAI that has garnered significant attention since its launch on November 30, 2022, with potential applications across various fields, such as tourism, education, and software engineering. Mich and Garigliano ( 2023 ) suggested that the introduction of ChatGPT is a revolutionary force in digital transformation processes that involves a series of risks, such as inappropriate and unethical use, in addition to completely revolutionizing the way users carry out their activities, whether work-related or academic. It has rapidly spread not only in developed countries but also in developing ones (Kshetri 2023 ). In the context of education, the ChatGPT has had positive impacts on the teaching-learning process, but proper teacher training is crucial for successful implementation (Montenegro-Rueda et al. 2023 ). Furthermore, ChatGPT has been used in software engineering tasks, demonstrating its potential for integration into workflows and its ability to break down coding tasks into smaller parts (Abrahamsson et al. 2023 ). However, it is important to note that ChatGPT has inherent limitations and risks that must be considered (Mich and Garigliano 2023 ).

On the other hand, ChatGPT has gained significant popularity in higher education (Arista et al., n.d.-a; Hassan 2023 ) compared to other language models developed by Google and Meta (Farhi et al. 2023 ). There are experiences where it has been used to provide personalized assistance and support to students, help them navigate university systems, answer questions, and provide feedback (Hassan 2023 ). However, the introduction of the ChatGPT has raised concerns about academic integrity (Arista et al. 2023 ). Nonetheless, it is important that teachers receive training on the proper use of ChatGPT to maximize its benefits (Montenegro-Rueda et al. 2023 ).

Student knowledge predicts ChatGPT use

The technology acceptance model (TAM) has been widely used to explain technology acceptance among students in various educational contexts. This model includes constructs such as perceived usefulness and ease of use, which are fundamental in determining technology adoption (Tang and Hsiao 2023 ). Considering the rapid development and potential impact of AI technologies such as ChatGPT, it is essential to understand how students from different disciplines perceive and adopt such tools.

Surveys of university students have revealed that their attitudes toward technology adoption vary according to their area of knowledge. For example, according to Huedo-Martínez et al. (2023), engineering and architecture students are more likely to adopt technologies earlier than social sciences and humanities students. These findings suggest that students’ familiarity with technology and their perception of its relevance to their field of study can influence their willingness to adopt new tools.

In the context of the ChatGPT, a survey among computer science students showed that while many are aware of the tool, they do not routinely use it for academic purposes. They express skepticism about its positive impact on learning and believe that universities should provide more guidelines and education on its responsible use by university students and faculty (Singh et al. 2023). Furthermore, a study with senior computer engineering students revealed that although students admire ChatGPT’s capabilities and find them interesting and motivating, they also believe that its responses are not always accurate and that good prior knowledge is needed to work effectively with this language model (Sane et al. 2023; Shoufan 2023).

However, the TAM has been applied to various student groups beyond those in engineering and computer science. Duong et al. ( 2023 ) indicated that students who believe that ChatGPT can serve as a facilitator of knowledge exchange (a platform where they can exchange ideas, gather multiple perspectives, and collaboratively address academic challenges) will not only be more motivated but also more likely to actively engage in using this technology. This finding highlights the importance of perceived usefulness in driving technology adoption across different disciplines.

In summary, students’ knowledge and perceptions of ChatGPT play a crucial role in predicting its use. Factors such as familiarity with the tool, perception of its capabilities, and understanding of its limitations influence students’ decisions to use ChatGPT for academic purposes (Castillo-Vergara et al. 2109 ; Huedo-Martínez et al. 2023 ; Tang and Hsiao 2023 ). By considering the insights gained from the TAM and its application to various student groups, we can better understand the factors that drive the adoption of AI technologies such as ChatGPT in educational settings.

Hypothesis 1

Knowledge about the ChatGPT influences its use.

Attitudes toward artificial intelligence in education

An attitude is an evaluation of a psychological object and is characterized by dimensions such as good versus bad and pleasant versus unpleasant (Ajzen 2001; Svenningsson et al. 2022). Similarly, behavioral intention refers to an individual’s mental readiness to perform certain behaviors (Almasan et al. 2023). Attitude has traditionally been divided into affective, cognitive, and behavioral components (Abd-El-Khalick et al. 2015; Breckler 1984; Fishbein and Ajzen 1975).

Attitudes toward AI in education are formed and modified through complex interactions among experiences, beliefs, and knowledge. Psychological theories, such as classical and operant conditioning, social learning theory, and cognitive dissonance theory, provide a basis for understanding how these attitudes develop. Repeated exposure to positive experiences with AI in educational contexts can lead to a more favorable attitude toward it. Conversely, cognitive dissonance can arise when previous experiences or beliefs about AI conflict with new information or current experiences (Chan and Hu 2023).

Students’ attitudes toward AI, such as optimism, skepticism, or apathy, have a significant impact on their willingness to interact and learn with these technologies (Thong et al. 2023 ). A positive attitude can encourage greater exploration and use of tools such as ChatGPT (Yasin 2022 ). However, a negative attitude can result in resistance to using AI, limiting learning opportunities (Irfan et al. 2023 ). A study conducted at the University of Jordan revealed a high level of positive attitudes toward using the ChatGPT as a learning tool, with 73.2% of respondents agreeing on its potential to facilitate the learning process (Ajlouni et al. 2023 ).

Several factors influence attitudes toward AI in education. The cultural and social context is crucial, as cultural beliefs and social values play a significant role in shaping attitudes (Hsu et al. 2021 ). Previous educational experience with technologies in the educational environment can significantly influence attitudes toward AI (Adelekan et al. 2018 ). Based on the above, we formulate the following:

Hypothesis 2

The attitude toward the ChatGPT influences its use.

ChatGPT use influences students’ concerns

University students have shown a high level of positive attitudes toward using ChatGPT as a learning tool, with moderate affective components and high behavioral and cognitive components of attitudes (Ajlouni et al. 2023 ).

Famaye et al. ( 2023 ) used the theory of reasoned action to interpret students’ perceptions and dispositions toward ChatGPT, revealing that students perceived ChatGPT as a valuable tool to support learning but had concerns about cheating, misinformation, and fairness issues. Likewise, it has been found that ChatGPT has a substantial impact on student motivation and engagement in the learning process, with a significant correlation between teachers’ and students’ perceptions of ChatGPT use (Muñoz et al. 2023 ).

Studies specify that students have concerns about the use of the ChatGPT in educational settings, including skepticism about its positive effects on learning and concerns about potential fraud, misinformation, and fairness issues related to its use (Famaye et al. 2023 ; Singh et al. 2023 ).

Concerns have been raised about the accuracy of the data produced by the ChatGPT, discomfort in using the platform, and anxiety when unable to access ChatGPT services (Ajlouni et al. 2023 ). Additionally, students are skeptical about the positive impact of the ChatGPT on learning and believe that universities should provide clearer guidelines and better education on how and where the tool can be used for learning activities (Singh et al. 2023 ).

Furthermore, not only have students expressed their concerns, but educators have also voiced their concerns about the integration of the ChatGPT in educational settings, emphasizing the need for responsible and successful application in teaching or research (Halaweh 2023 ). Based on the reviewed literature, the following is proposed:

Hypothesis 3

Experience with the use of ChatGPT influences students’ concerns.

The use of the ChatGPT influences students’ perceptions of ethics

In the context of AI, certain ethical principles are fundamental for its responsible and fair use. These principles include transparency, where AI algorithms and operations must be understandable to users. Justice and equity are essential for ensuring that AI applications do not perpetuate biases or discrimination (Köbis and Mehner 2021 ). Regarding privacy, it is crucial to protect users’ personal and sensitive information. Moreover, accountability must be clearly established, especially in contexts where AI decisions have significant consequences (Elendu et al. 2023 ).

Ethical dilemmas in the use of AI in higher education include the use of student data, where privacy and consent issues arise (Khairatun Hisan and Miftahul Amri 2022 ). Biases in machine learning that could lead to unfair educational outcomes must also be considered (Goodman 2023 ). Academic autonomy must be balanced with the implementation of AI tools such as ChatGPT (Garani-Papadatos et al. 2022 ).

For the ethical implementation of AI in education, policies and regulations guiding its use are needed. This includes data protection regulations such as the General Data Protection Regulation (GDPR) in Europe and ethical guidelines for the development and use of AI provided by international and academic bodies (Ayhan 2023 ). Additionally, specific institutional policies of educational institutions must regulate the use of AI, ensuring alignment with the ethical values and principles of the academic community.

Students perceive the ChatGPT as a valuable tool for learning but have concerns about deception, misinformation, and equity (Famaye et al. 2023). Furthermore, the use of ChatGPT in academia has both negative and positive implications, with concerns about academic dishonesty and dependence on technology (Arista et al. 2023; Črček & Patekar 2023a; Farhi et al. 2023; Fuchs and Aguilos 2023; Hung and Chen 2023; Ogugua et al. 2023; Robledo et al. 2023; Zeb et al. 2024a; Zhong et al. 2023). On the other hand, studies show that students use the ChatGPT for various academic activities, including generating ideas, summarizing, and paraphrasing, with different perceptions of ethical acceptability (Črček & Patekar 2023a; Farhi et al. 2023).

Students expressed concern about the potential negative effects on cognitive abilities when they relied too much on the ChatGPT, indicating a moderate level of trust in the accuracy of the information provided by the ChatGPT (Bodani et al. 2023 ). Additionally, they believe that improvements to the ChatGPT are necessary and are optimistic that this will happen soon, emphasizing the need for developers to improve the accuracy of the responses given and for educators to guide students on effective prompting techniques and how to evaluate the generated responses (Shoufan 2023 ). Based on the above, the following is proposed:

Hypothesis 4

The use of the ChatGPT influences students’ perceptions of ethics.

The use of the ChatGPT and perceived ethics according to demographic variables

Few studies have used gender and age as moderators of ChatGPT use for comparison with students’ perceptions of ethics according to these demographic variables. However, studies have found that students perceive the use of ChatGPT as an idea generator to be more ethically acceptable, while other uses, such as writing part of an assignment, cheating, and misinformation regarding sources, are considered unethical, raising concerns about academic misconduct and fairness (Črček & Patekar 2023a; Famaye et al. 2023).

Previous research has shown that demographic factors such as gender and age can influence the adoption and perception of new technologies, including AI-based tools such as ChatGPT. For example, a study by Lopes et al. (2023) involving participants aged 18 to 43 years revealed significant differences between responses generated by ChatGPT and those generated by humans, suggesting that the perceived reliability of ChatGPT may vary across age groups. Additionally, a study conducted in Jordan revealed that undergraduate students generally held positive attitudes toward using ChatGPT as a learning tool, although they expressed concerns about data accuracy and discomfort associated with using the platform (Ajlouni et al. 2023). These findings indicate that age may play a role in shaping students’ perceptions and use of ChatGPT.

Furthermore, gender differences have been observed in technology adoption and perception. Studies have shown that men and women often have different attitudes toward technology, with women sometimes expressing more concerns about privacy, security, and ethical implications (Venkatesh et al. 2000 ; Goswami and Dutta 2015 ). In the context of the ChatGPT, Singh et al. ( 2023 ) found that students in the United Kingdom were skeptical about the positive effects of the ChatGPT on learning and emphasized the need for clearer guidelines and training on its use in academic activities. It is plausible that these concerns and attitudes may vary between male and female students.

Given the evidence of age and gender differences in technology adoption and perception, it is reasonable to hypothesize that the relationship between the use of ChatGPT and perceived ethics may differ by sex (Hypothesis 5) and age group (Hypothesis 6). Investigating these hypotheses will provide valuable insights into how demographic factors influence students’ engagement with and attitudes toward AI-based tools such as ChatGPT. Understanding these differences can help educators and institutions develop targeted strategies to address concerns, provide appropriate guidance, and promote the responsible use of ChatGPT in academic settings. Therefore, the following hypotheses are proposed:

Hypothesis 5

The relationship between the use of the ChatGPT and perceived ethics differs by sex.

Hypothesis 6

The relationship between the use of the ChatGPT and perceived ethics differs by age group. Figure 1 presents the proposed research model, along with the previously substantiated research hypotheses.

Figure 1. Proposed research model. The arrows indicate the direction of the hypothesized relationships, and the dotted line represents a potential moderating effect.

Methods

This research adopted a quantitative approach of an exploratory and explanatory nature, because its purpose was to evaluate the knowledge, attitudes, and perceptions of ethics regarding the use of the ChatGPT among university students. The study involved a nonexperimental, cross-sectional design because it was conducted over a single period. Furthermore, the hypothetical-deductive method was employed: based on a literature review, hypotheses were formulated and then tested empirically.

Participants

In the study, a non-probabilistic accidental (convenience) sampling method was employed, involving 201 participants who voluntarily completed the survey. Table 1 presents the sociodemographic characteristics of the participants. Regarding their fields of study, 1.49% were from agronomy and veterinary science, 1.49% from architecture and urban planning, 2.99% from exact and natural sciences, 46.27% from social sciences and humanities, 8.96% from health sciences, 9.45% from economics and administration, and 29.35% from engineering. The majority of participants were female (54.7%), and 45.3% were male. The predominant age ranges were 20–22 and 17–19 years. The majority of participants were undergraduate students (9.5%). Laptops, computers and mobile phones were the devices most commonly used by participants to connect to the ChatGPT. Finally, regarding usage time, 42.8% indicated that they had been using ChatGPT for 1 to 2 months, and 18.4% reported using it for 3 to 4 months.

The data collection instrument was created in Google Forms ( https://forms.gle/dchuMwzrQsZNAoeHA ) and presented in Spanish, contextualized to the Peruvian setting. The anonymous survey consisted of two sections: the first section presented the description of the questionnaire (objective, ethical aspects to consider, and contact of the principal investigator for inquiries) and included the question “Do you voluntarily agree to participate in the study?” With branching, participants who selected “yes” responded to the survey, and if they answered “no”, the form automatically closed. This section included sociodemographic questions such as age, sex, type of university, level of education, professional career, type of device, and approximate usage time of the ChatGPT to thoroughly understand the characteristics of the participants. In the second section, items for the constructs were placed, where knowledge (5 items adapted from Bodani et al. 2023 ), attitude (7 items adapted from Bodani et al. 2023 ), and concern (5 items adapted from Farhi et al. 2023 ) were measured with a three-point Likert scale (1 = Yes; 2 = No; 3 = Maybe). Moreover, the Perceived Ethics construct (6 items adapted from Malmström et al. 2023 ) and the Use of ChatGPT construct (6 items adapted from Haleem et al. 2022 ) used a five-point Likert scale ranging from 1 (Strongly Disagree) to 5 (Strongly Agree).

Procedure and data analysis

An online survey was administered to assess the concerns, knowledge, attitudes, and perceptions of ethics of university students regarding the use of the ChatGPT. The study was conducted between September and December 2023 at four Peruvian universities located in the La Libertad and Lima departments. An online form containing two sections was distributed; the first section included an information sheet for participants and sociodemographic questions, while the second section contained the questionnaire items. In total, 225 responses were collected; however, only 201 were used for analysis, since 24 respondents did not agree to participate in the study, selecting the “no” option in the mandatory branching question at the beginning of the form.

Regarding ethical aspects, the research protocol was approved by the ethics committee of the National University of Trujillo, and then the data were collected through an anonymous online survey. In addition, before participating in the study, all participants read the informed consent form and freely and voluntarily agreed to participate in the study.

The sociodemographic data were analyzed, and frequencies and percentages were obtained using Excel. To test the research hypotheses, structural equation modeling (SEM) was performed with the statistical software SmartPLS v.4.0.9.8, which is based on the partial least squares (PLS) technique (Ringle et al. 2022). Reliability was assessed using Cronbach’s alpha coefficient and composite reliability (CR), with values above 0.7 considered adequate (Table 2). Convergent validity was assessed with the average variance extracted (AVE), with values above 0.5 considered adequate (Table 2). Moreover, to evaluate discriminant validity, the criterion of Fornell and Larcker (1981) was followed, checking that the square root of the AVE of each construct exceeded the correlations between that construct and all the others (Table 3).
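For reference, the standard psychometric definitions behind these thresholds (not reproduced from the paper), stated for a construct with k indicators with standardized loadings λᵢ, are:

```latex
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_{\text{total}}^{2}}\right),
\qquad
\mathrm{CR} = \frac{\bigl(\sum_{i=1}^{k}\lambda_i\bigr)^{2}}
                   {\bigl(\sum_{i=1}^{k}\lambda_i\bigr)^{2} + \sum_{i=1}^{k}\bigl(1-\lambda_i^{2}\bigr)},
\qquad
\mathrm{AVE} = \frac{1}{k}\sum_{i=1}^{k}\lambda_i^{2}
```

where σᵢ² is the variance of item i and σ²_total is the variance of the summed scale.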

Descriptive results

Table  4 provides a descriptive analysis of the evaluated constructs. For the Knowledge construct, the KNW1 to KNW5 items exhibited medians ranging from 1.000 to 2.000, suggesting that the responses tended toward the lower end of the scale. The standard deviation ranged from 0.441 to 0.906, indicating moderate variability in the responses. Additionally, the range of responses is 2 for all items, implying that responses are distributed across two distinct points on the scale.

Regarding the Attitude construct, items ATT1 to ATT7 predominantly have a median of 1.000, except for ATT4 and ATT7, for which the median is 2.000, suggesting a generally negative or neutral attitude. The standard deviation ranged from 0.623 to 0.936, indicating significant variability in the responses. The range is 2, indicating a distribution across two distinct points on the scale for all measured attitudes.

In terms of student concerns, items SC1 to SC5 show medians varying from 1.000 to 4.000, reflecting greater concern in certain aspects. The standard deviation ranges from 0.928 to 0.974, suggesting high variability in student concerns. The range is 3, indicating that the responses span three distinct points on the scale.

For the Perceived Ethics construct, items PE1 to PE6 consistently had medians of 4.000, indicating a positive perception of ethics. The standard deviation varies between 0.904 and 0.994, showing considerable variability in ethical perception. The range is 3, suggesting a distribution across three distinct points on the scale.

Last, in the use of the ChatGPT construct, the GPTUS1 to GPTUS6 items presented medians ranging from 1.000 to 4.000, indicating varying levels of use and attitudes toward ChatGPT. The standard deviation ranged from 0.869 to 1.097, indicating high variability in the use of ChatGPT. The range is 4, implying that the responses cover the entire 5-point scale, reflecting a wide range of experiences and perceptions regarding the use of the ChatGPT.

Model measurement results

For convergent validity, as shown in Table 2, factorial loadings, Cronbach’s alpha (α), composite reliability (CR), and average variance extracted (AVE) were analyzed. According to the criteria of Hair (2009), the factorial loadings of all items should surpass the threshold of 0.50; the study construct items show factorial loadings ranging from 0.703 to 0.977, satisfying this threshold. Based on the criterion of Nunnally (1994), α and CR values greater than 0.70 are considered adequate, and as shown in Table 2, all the constructs meet this criterion. Finally, according to Teo and Noyes (2014), AVE values are considered adequate when they are above 0.50, and as evidenced in Table 2, all the constructs exceed this threshold.

To analyze the discriminant validity of the research model, the criterion of Fornell and Larcker (1981) was used: discriminant validity holds if the square root of the AVE on the diagonal is significantly greater than the correlations between constructs located off the diagonal. Table 3 shows that the diagonal values are significantly greater than the off-diagonal correlations; consequently, the SEM has high discriminant validity.
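The Fornell-Larcker comparison is simple enough to verify by hand; the sketch below runs the check on made-up AVE and correlation values (the numbers are illustrative, not those reported in Tables 2 and 3).

```python
import math

# Illustrative AVE values and inter-construct correlations.
ave = {"Knowledge": 0.62, "Attitude": 0.58, "Usage": 0.71}
corr = {
    ("Knowledge", "Attitude"): 0.44,
    ("Knowledge", "Usage"): 0.39,
    ("Attitude", "Usage"): 0.51,
}

# Criterion: sqrt(AVE) of each construct must exceed its correlation
# with every other construct (Fornell and Larcker 1981).
for construct, value in ave.items():
    others = [r for pair, r in corr.items() if construct in pair]
    verdict = "pass" if math.sqrt(value) > max(others) else "fail"
    print(f"{construct}: sqrt(AVE)={math.sqrt(value):.2f}, "
          f"max corr={max(others):.2f} -> {verdict}")
```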

Research hypothesis testing

Hypothesis testing for the SEM was conducted using the partial least squares (PLS) technique in SmartPLS software version 4.0.9.8. First, the goodness-of-fit indices were considerably acceptable: χ²= 4248.817, SRMR = 0.087, d_ULS = 4.009, d_G = 1.628, and NFI = 0.959.

Second, Table 5 and Fig. 2 show the standardized path coefficients, p values, and other results. Two out of the six hypotheses were accepted. The verified hypotheses presented path coefficients greater than 0.50, suggesting significant and positive relationships. Furthermore, the two paths of the accepted hypotheses were highly influential (ChatGPT Usage → Perceived Ethics = 12.985; ChatGPT Usage → Students’ Concerns = 23.754).

Third, the values of the coefficient of determination (R²) indicate that knowledge and attitude explain 84.2% of the variation in ChatGPT usage. In turn, ChatGPT usage explains 59% of the variation in Perceived Ethics and 64% of the variation in Students’ Concerns.

Figure 2. Research model solved. The figure illustrates the resolved structural model, detailing the standardized path coefficients for the hypothesized relationships between constructs such as knowledge, attitude, student concerns, and perceived ethics and their direct or indirect influence on ChatGPT usage. It also includes the moderating effects of gender and age on these relationships. The coefficients on the arrows represent the strength and direction of these relationships, with significant paths highlighted to denote their impact on the model.

Discussion

The main objective of the study was to evaluate the concerns, knowledge, attitudes, and ethics of university students regarding the use of the ChatGPT. The research model showed acceptable fit indices. Moreover, the R² values demonstrated that knowledge and attitude explained 84.2% of the variation in ChatGPT usage, while ChatGPT usage explained 59% of the variation in Perceived Ethics and 64% of the variation in Students’ Concerns.

Regarding hypothesis 1, the results indicate that knowledge about ChatGPT among students does not positively influence the use of this system (B = -0.072; p = 0.586 > 0.05). In another context, Duong et al. (2023) reported that students who believe that ChatGPT serves as a facilitator of knowledge exchange are more motivated and more likely to use it in the future. On the other hand, Huedo-Martínez et al. (2023) maintain that attitudes toward the adoption and use of technology vary depending on the area of knowledge; it is therefore likely that this hypothesis was not confirmed because the participants, although all from Generation Z, came from different professional careers. In addition, previous familiarity with similar technologies could have played an important role in students’ perceptions and use of the ChatGPT. Those with prior experience with artificial intelligence tools or chatbot systems may show differences in their approach to and valuation of ChatGPT compared with those without such experience. This aspect, which was not directly covered by our research, could explain the variability in the acceptance and use of the ChatGPT among students from different disciplines.

Regarding the second research hypothesis, the results showed that attitude does not influence the use of ChatGPT (β = -0.189; p = 0.404 > 0.05). In this respect, [53] indicated that students' attitudes toward AI, such as optimism, skepticism, or apathy, have a significant impact on their willingness to interact and acquire knowledge through these technologies. Halaweh (2023), on the other hand, noted that a negative attitude produces resistance to using AI, limiting learning opportunities. This relationship was therefore likely not confirmed because students are skeptical about the benefits of ChatGPT, which affects their willingness to use this language model. Such skepticism may stem from various sources: concerns about the accuracy of the information this AI-based tool provides, fears of overreliance on technology in learning environments, or the belief that these tools could undermine critical and analytical thinking abilities. These issues underscore the intricate dynamics between attitudes toward artificial intelligence and its successful integration into educational contexts. Although positive views of AI may promote the intention to use tools such as ChatGPT, adoption of AI technologies is not driven by attitudes alone; other elements, including perceived usefulness, ease of use, and the perceived risks of the technology, also contribute significantly. The relationship between attitudes toward AI and the effective implementation of AI-based tools is thus not straightforward but rather a complex interaction of multiple factors that must be carefully weighed when incorporating AI technologies into educational strategies.

The impact of contextual and individual factors on shaping attitudes toward ChatGPT should not be overlooked. Factors such as prior experience with technology, the level of understanding of AI, and even social and cultural contexts can profoundly influence perceptions and attitudes toward ChatGPT. This insight underscores the need for interventions designed to enhance receptiveness to ChatGPT. These should address the foundational elements by disseminating clear and precise information about its functions, limitations, and potential benefits for educational practices. When effectively incorporated into educational settings, ChatGPT can offer numerous advantages that augment the learning experience: immediate feedback, personalized assistance, and access to an extensive base of information, enabling deeper exploration of subjects and fostering autonomous learning. Additionally, ChatGPT can support educators in developing interactive educational content, creating assessment items, and providing tailored assistance to students with varied learning requirements. By elucidating these benefits and illustrating how ChatGPT can supplement and enhance conventional teaching approaches, educators can help students and other stakeholders appreciate the significant role this AI tool can play in education. Such understanding is likely to lead to more favorable attitudes and greater readiness to embrace ChatGPT as an essential educational resource.

On the other hand, regarding hypothesis 3, the results demonstrated that the use of ChatGPT influences students' concerns (β = 0.802; p < 0.001). Consistent with these findings, Famaye et al. (2023) and Singh et al. (2023) report that students are concerned about the use of ChatGPT in educational settings, including skepticism about its positive effects on learning and worries about potential fraud, misinformation, and equity issues related to its use. Students are also skeptical about the positive impact of ChatGPT on learning and believe that universities should provide clearer guidelines and better education on how and where the tool can be used for learning activities (Singh et al. 2023). The use of ChatGPT therefore generates concerns among students about its consequences for learning, privacy, and misinformation. This finding underlines the need for a comprehensive framework that addresses both the potential and the challenges ChatGPT presents in the educational realm (Ayhan 2023). To address these concerns, it is essential that educational institutions adopt a proactive approach, offering specific training on the ethical and effective use of ChatGPT as well as on identifying and preventing fraud and misinformation.

To address concerns regarding academic integrity and the appropriate utilization of ChatGPT in educational contexts, it is imperative to establish explicit policies and guidelines that extend beyond mere training programs. These policies should be crafted to ensure that technological applications enhance conventional teaching methodologies, promoting deeper and more critical learning without undermining academic integrity or student equity. An effective strategy involves the integration of academic integrity principles and practices directly within teaching and learning modules. For instance, educators can construct assignments and activities that compel students to critically evaluate the data generated by ChatGPT, juxtapose it with alternative sources, and contemplate the ramifications of employing AI-based tools within their learning trajectory. By incorporating these activities into the curriculum, students can attain a more profound understanding of the responsible use of ChatGPT and develop the competencies necessary to uphold academic integrity amid technological progress. Furthermore, being transparent about how ChatGPT can serve as a beneficial resource for specific learning scenarios and being clear about its constraints are crucial for fostering trust in its application. Educators should facilitate open dialogs with students concerning the advantages and challenges associated with using ChatGPT, offering advice on when and how to employ the tool both effectively and ethically. By embedding academic integrity principles and practices within teaching and learning modules, educators can foster a culture of responsible AI usage that transcends isolated training initiatives and becomes a fundamental component of the educational framework.

Similarly, in relation to hypothesis 4, the results suggest that the use of ChatGPT influences students' perceived ethics (β = 0.856; p < 0.001). In this respect, Famaye et al. (2023) noted that students regard ChatGPT as a valuable tool for learning but have concerns about deception, misinformation, and equity. The use of ChatGPT in academia has both negative and positive effects, with concerns about academic dishonesty and dependence on technology (Arista et al. 2023; Črček and Patekar 2023a; Farhi et al. 2023; Fuchs and Aguilos 2023; Hung and Chen 2023; Ogugua et al. 2023; Robledo et al. 2023; Zeb et al. 2024a; Zhong et al. 2023). Similarly, students who rely heavily on ChatGPT worry about potential negative effects on their cognitive abilities (Bodani et al. 2023). It is therefore important to address the ethical issues associated with the use of ChatGPT in educational settings to ensure that its implementation contributes positively to students' academic and personal development. The influence of ChatGPT use on students' perceived ethics underscores the importance of fostering ongoing dialog about ethical values and individual responsibilities in the use of AI technologies.

We believe that to address students' ethical concerns, it is essential for educational institutions to implement training programs that specifically address issues of academic integrity and responsible technology use. These programs should teach students how to use ChatGPT in a way that complements their learning without compromising their intellectual development or academic honesty. This includes guiding them on how to properly cite AI-generated material and on distinguishing acceptable collaboration with these tools from plagiarism.

Furthermore, it is crucial to promote a deeper understanding of the limitations of ChatGPT and other AI tools, as well as their implications for privacy and data security. By better understanding these limitations, students can make more informed and ethical decisions about when and how to use these technologies.

Moderation differences in ChatGPT use and perceived ethics by gender and age

The findings of the present study indicate that neither gender nor age moderates the relationship between ChatGPT use and perceived ethics among Generation Z students. In fact, few studies have used these demographic variables to evaluate this relationship. In other contexts, Črček and Patekar (2023a) and Famaye et al. (2023) indicate that students perceive the use of ChatGPT as an idea generator to be its most ethically acceptable use, while other uses are seen as unethical, leading to concerns about academic misconduct and equity. Lopes et al. (2023) reported differences between the responses generated by ChatGPT and those generated by students aged 18 to 43 years. In the United Kingdom, students expressed some skepticism about the positive benefits of ChatGPT for their learning. These moderating relationships may therefore not have been confirmed because of various contextual factors, providing evidence that male and female Generation Z students alike perceive risks such as dependence and misinformation when interacting with this system.
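For readers unfamiliar with how such moderation tests work in PLS-SEM, a common approach is multigroup analysis: the path is estimated separately for each group (e.g., men and women), and the difference is tested against what could arise by chance. The sketch below illustrates the permutation-test version of this idea with simulated data, using a simple OLS slope as a stand-in for a PLS path coefficient; all names and values are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(7)

def slope(x, y):
    # OLS slope of y on x; a simple stand-in for a PLS path coefficient
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Simulated standardized scores for two groups (illustrative only);
# both groups share the same true path, i.e., no moderation by gender.
n = 120
x_m, x_f = rng.normal(size=n), rng.normal(size=n)
y_m = 0.80 * x_m + rng.normal(scale=0.6, size=n)
y_f = 0.80 * x_f + rng.normal(scale=0.6, size=n)

observed_diff = abs(slope(x_m, y_m) - slope(x_f, y_f))

# Permutation test: shuffle group membership and recompute the slope difference.
x_all, y_all = np.concatenate([x_m, x_f]), np.concatenate([y_m, y_f])
diffs = []
for _ in range(2000):
    idx = rng.permutation(2 * n)
    xs, ys = x_all[idx], y_all[idx]
    diffs.append(abs(slope(xs[:n], ys[:n]) - slope(xs[n:], ys[n:])))

p_value = np.mean(np.array(diffs) >= observed_diff)
print(f"observed |difference| = {observed_diff:.3f}, permutation p = {p_value:.3f}")
# A large p value means no evidence that the path differs between groups.
```

A large permutation p value, as simulated here, corresponds to the study's finding that gender does not moderate the relationship.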

Theoretical and practical implications

The knowledge and attitudes of students toward ChatGPT do not automatically result in its broader adoption, underscoring the complexity of the factors that influence its acceptance. The ethical concerns raised by the use of ChatGPT highlight the necessity of developing theories that address the wider psychosocial implications of relying on and delegating tasks to AI, especially within the realm of academic integrity.

As students increasingly depend on ChatGPT for their academic tasks, they may outsource essential cognitive skills such as critical thinking, problem solving, and the generation of original content to this technology. This shift can lead to a reduced sense of personal accountability in upholding academic integrity, as students might view AI as a replacement for their own intellectual labor. Consequently, this transfer of functions to AI can contribute to the degradation of academic integrity, manifesting in behaviors such as plagiarism, the uncritical acceptance of AI-generated content, and a deficiency in original thought.

The evidence indicates that merely possessing knowledge of or holding favorable attitudes toward ChatGPT does not guarantee its effective integration into educational practices. A comprehensive approach that considers the interplay of multiple factors in the technology acceptance model, while also addressing the psychosocial effects of dependency on AI and its potential impact on academic integrity, is essential for thoroughly understanding and mitigating the challenges posed by the integration of ChatGPT in educational contexts.

Existing theoretical models, such as the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT2), could be expanded to include factors specific to AI technologies such as ChatGPT. These models should consider how previous experiences with similar technologies, performance expectations, the perceived effort required, social influence, and facilitating conditions shape predisposition toward AI use. Furthermore, it is critical to understand how attitudes toward AI, such as optimism or skepticism, develop and how these attitudes affect not only the intention to use but also effective adoption and usage practices.

Limitations and future studies

The main limitations of the study were as follows. First, no validated scale existed for measuring students' concerns or perceived ethics regarding the use of ChatGPT; therefore, although the quality tests of the study model were acceptable, these constructs could not be measured with greater precision. Second, the survey was based on the experience of Generation Z students using ChatGPT. Finally, the sample was obtained through nonprobabilistic accidental (convenience) sampling, so the study's results may not be generalizable to other contexts.

For future work, it would be important to create new scales to assess students' perceived ethics. Additionally, it would be beneficial to expand the scope of the study to include participants from different generations, academic disciplines, and cultures to better understand how these variables may influence the perception and use of ChatGPT and other artificial intelligence technologies. Such an expansion could provide a more nuanced and generalizable view of AI acceptance in educational contexts.

Another important aspect for future research is the detailed analysis of how different educational and professional experiences with AI, including both positive and negative experiences, affect the willingness to adopt technologies such as ChatGPT. Understanding the dynamics of these previous experiences could offer valuable insights for the development of pedagogical and technological strategies that foster more effective and ethical adoption of AI in education.

When investigating the implementation strategies and regulatory frameworks that educational institutions might establish to promote the ethical and responsible utilization of ChatGPT, it is crucial to acknowledge the diverse needs and challenges that vary across different academic disciplines. Establishing clear policies concerning acceptable use, academic integrity, data privacy, and security is essential; however, these policies may require adaptation to meet the specific needs and contexts of each discipline.

For instance, in disciplines such as creative writing or the arts, the use of ChatGPT could raise distinct concerns regarding originality and authenticity, necessitating policies tailored to address these specific issues. Conversely, in fields such as mathematics or computer science, the emphasis might shift toward ensuring that students comprehend the underlying principles and can apply them independently rather than depending on AI-generated solutions.

Additionally, the creation of training programs for both students and educators should consider the varied applications of ChatGPT across different disciplines. These programs should offer guidance for effectively integrating ChatGPT into particular disciplinary contexts while also addressing the potential risks and challenges posed by its use in each area.

By acknowledging the need for nuanced and context-specific approaches to regulatory frameworks and implementation strategies, educational institutions can more effectively support the ethical and responsible application of ChatGPT across a broad spectrum of disciplines. This approach ensures that the technology is leveraged in a manner that augments learning outcomes while preserving academic integrity.

Furthermore, given the importance of attitudes toward AI in the adoption of technologies such as ChatGPT, future research could explore in depth the factors that contribute to the development of these attitudes. This could include studies examining the impact of informational campaigns, practical experiences with technology, and the role of media and social networks in shaping perceptions about AI.

Future studies should investigate how perceptions regarding the ethics and utilization of ChatGPT may differ based on demographic variables such as gender, age, and cultural background. This could yield important insights into the distinct concerns of various groups, potentially guiding the development of more inclusive, equitable, and culturally sensitive tools and policies. For instance, should specific age demographics tend to perceive ChatGPT's impact on academic fairness negatively, educational institutions could develop tailored interventions to address these perceptions. Moreover, cultural factors might significantly influence attitudes toward AI-based tools such as ChatGPT. Students of diverse cultural origins might exhibit different levels of familiarity with, confidence in, or receptiveness to AI technologies, which could affect their readiness to embrace and utilize ChatGPT for educational objectives. Additionally, cultural norms and values concerning education, academic integrity, and the role of technology in education may vary among cultures. These variances could result in different perceptions regarding the ethicality and suitability of employing ChatGPT in educational contexts. For example, some cultural groups may value individual effort and originality more highly, whereas others might favor collaborative learning and the integration of technological support. By conducting research that addresses cultural barriers and differences, educational institutions can devise more culturally attuned approaches to incorporating ChatGPT into their pedagogical strategies. This might entail customizing training programs, support services, and policies to meet the particular needs and concerns of students from varied cultural backgrounds, ensuring that the technology is employed in a way that respects and accommodates cultural distinctions while fostering academic excellence and integrity.

Conclusions

The research contributes to the literature by suggesting that models such as the TAM and UTAUT2 incorporate specific variables related to AI technologies. Thus, a deeper understanding of how utility perceptions, effort, expectations, and previous experiences affect students’ disposition toward the use of ChatGPT can be obtained, providing a more solid foundation for the development of AI-oriented educational strategies and technologies.

The knowledge and positive attitudes of students toward ChatGPT do not guarantee its adoption and effective use. Despite the common belief that knowledge and positive attitudes toward a technology drive its use, our findings suggest that the reality is more nuanced. Factors such as the area of knowledge, previous experience with similar technologies, and cultural and social context play crucial roles in how students perceive and decide to use ChatGPT. Therefore, deeper theoretical models that consider context, disciplinary variables, and previous experience with AI are needed.

It is important to investigate how attitudes of optimism, skepticism, or apathy toward AI develop and how these attitudes influence the intention to use technologies such as ChatGPT.

The dependence of students on AI tools such as ChatGPT raises ethical concerns that must be addressed with training programs on responsible use. The significant impact of ChatGPT use on students’ ethical concerns highlights the critical need to develop and apply strong ethical frameworks in the implementation of AI in education.

This study examined the relationship between ChatGPT usage and perceived ethics, taking into account potential variations based on gender and age. Perceived ethics refers to students' beliefs concerning the moral and ethical consequences of utilizing ChatGPT for academic purposes. The results indicated no significant differences in this relationship with respect to gender or age, suggesting that male and female students, as well as students across various age groups, share similar views on the ethical implications of using ChatGPT. Nonetheless, further research involving diverse samples and varying educational contexts is required to deepen the analysis of this relationship and investigate potential demographic variations. Understanding how students perceive the ethical ramifications of using ChatGPT is vital for developing effective guidelines, policies, and training programs that foster the responsible and ethical use of AI-driven tools in education.

Finally, HEIs must develop policies, specific guidelines, and training programs that promote the ethical use of ChatGPT, addressing issues such as academic integrity, privacy, misinformation, and fraud. This proactive approach will not only help mitigate students' concerns but also promote more responsible and critical use of these technologies.

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Abbreviations

  • AI: Artificial intelligence
  • HEIs: Higher education institutions
  • TAM: Technology acceptance model
  • UTAUT2: The unified theory of acceptance and use of technology
  • α: Cronbach’s alpha
  • AVE: Average variance extracted
  • CR: Composite reliability
  • ChatGPT: Chat-Generative Pretrained Transformer
  • SEM: Structural equation model
  • PLS: Partial least squares

Abd-El-Khalick F, Summers R, Said Z, Wang S, Culbertson M (2015) Development and large-scale validation of an instrument to assess arabic-speaking students’ attitudes toward Science. Int J Sci Educ 37(16):2637–2663. https://doi.org/10.1080/09500693.2015.1098789


Abrahamsson P, Anttila T, Hakala J, Ketola J, Knappe A, Lahtinen D, Liukko V, Poranen T, Ritala T-M, Setälä M (2023) ChatGPT as a fullstack web developer - early results. LNBIP, vol 489, pp 201–209, Amsterdam. https://doi.org/10.1007/978-3-031-48550-3_20

Acosta-Enriquez BG, Arbulú Ballesteros MA, Huamaní Jordan O, López Roca C, Tirado SK (2024) Analysis of college students’ attitudes toward the use of ChatGPT in their academic activities: effect of intent to use, verification of information and responsible use. BMC Psychol 12(1). https://doi.org/10.1186/s40359-024-01764-z

Adelekan S, Williamson M, Atiku S (2018) Influence of social entrepreneurship pedagogical initiatives on students’ attitudes and behaviours. J Bus Retail Manage Res 12(03). https://doi.org/10.24052/JBRMR/V12IS03/ART-15

Ajlouni A, Wahba F, Almahaireh A (2023) Students’ attitudes towards using ChatGPT as a learning tool: the case of the University of Jordan. Int J Interact Mob Technol 17(18):99–117. https://doi.org/10.3991/ijim.v17i18.41753

Ajzen I (2001) Nature and operation of attitudes. Ann Rev Psychol 52:27–58. https://doi.org/10.1146/ANNUREV.PSYCH.52.1.27

Almasan O, Buduru S, Lin Y, Karan-Romero M, Salazar-Gamarra RE, Leon-Rios XA (2023) Evaluation of attitudes and perceptions in students about the use of artificial intelligence in dentistry. Dent J 11(5):125. https://doi.org/10.3390/DJ11050125

Arista A, Shuib L, Ismail M (2023) An Overview chatGPT in Higher Education in Indonesia and Malaysia. Pages 273–277, Online. https://doi.org/10.1109/ICIMCIS60089.2023.10349053

Ayhan Y (2023) The impact of Artificial Intelligence on Psychiatry: benefits and Concerns-An assay from a disputed ‘author’. Turkish J Psychiatry. https://doi.org/10.5080/u27365

Bin-Nashwan SA, Sadallah M, Bouteraa M (2023) Use of ChatGPT in academia: academic integrity hangs in the balance. Technol Soc 75:102370. https://doi.org/10.1016/j.techsoc.2023.102370

Bodani N, Lal A, Maqsood A, Altamash S, Ahmed N, Heboyan A (2023) Knowledge, attitude, and practices of general population toward utilizing ChatGPT: a cross-sectional study. SAGE Open 13(4). https://doi.org/10.1177/21582440231211079

Breckler SJ (1984) Empirical validation of affect, behavior, and cognition as distinct components of attitude. J Personal Soc Psychol 47(6):1191–1205. https://doi.org/10.1037/0022-3514.47.6.1191

Castillo-Vergara M, Álvarez-Marín A, Pinto VE, Valdez-Juárez LE (2022) Technological acceptance of Industry 4.0 by students from rural areas. Electronics 11(14):2109. https://doi.org/10.3390/electronics11142109

Chan C, Hu W (2023) Students’ voices on generative AI: perceptions, benefits, and challenges in higher education. Int J Educational Technol High Educ 20(1):43. https://doi.org/10.1186/s41239-023-00411-8

Črček N, Patekar J (2023a) Writing with AI: University Students’ Use of ChatGPT. 9(4):128–138. https://doi.org/10.17323/jle.2023.17379

Crompton H, Burke D (2023) Artificial intelligence in higher education: the state of the field. Int J Educational Technol High Educ 20(1). https://doi.org/10.1186/s41239-023-00392-8

Duong C, Vu T, Ngo T (2023) Applying a modified technology acceptance model to explain higher education students’ usage of ChatGPT: a serial multiple mediation model with knowledge sharing as a moderator. Int J Manage Educ 21(3):100883. https://doi.org/10.1016/J.IJME.2023.100883

Elendu C, Amaechi D, Elendu T, Jingwa K, Okoye O, John Okah M, Ladele J, Farah A, Alimi HA (2023) Ethical implications of AI and robotics in healthcare: a review. Medicine 102(50). https://doi.org/10.1097/MD.0000000000036671

Essien A, Chukwukelu G, Essien V (2020) Opportunities and challenges of adopting artificial intelligence for learning and teaching in higher education 67–78. https://doi.org/10.4018/978-1-7998-4846-2.ch005

Famaye T, Adisa I, Irgens G (2023) To Ban or Embrace: students’ perceptions towards adopting Advanced AI Chatbots in Schools. 1895:140–154. https://doi.org/10.1007/978-3-031-47014-1_10

Farhi F, Jeljeli R, Aburezeq I, Dweikat FF, Al-shami SA, Slamene R (2023) Analyzing the students’ views, concerns, and perceived ethics about chat GPT usage. Computers Education: Artif Intell 5(5). https://doi.org/10.1016/J.CAEAI.2023.100180

Fishbein M, Ajzen I (1975) Strategies of change: active participation. In: Belief, attitude, intention, and behavior: an introduction to theory and research, pp 411–450

Fornell C, Larcker D (1981) Evaluating Structural equation models with unobservable variables and measurement error. J Mark Res 18(1):39. https://doi.org/10.2307/3151312

Fuchs K, Aguilos V (2023) Integrating Artificial Intelligence in Higher Education: empirical insights from students about using ChatGPT. 13(9):1365–1371. https://doi.org/10.18178/ijiet.2023.13.9.1939

Garani-Papadatos T, Natsiavas P, Meyerheim M, Hoffmann S, Karamanidou C, Payne SA (2022) Ethical principles in Digital Palliative Care for children: the MyPal Project and experiences made in Designing a Trustworthy Approach. Front Digit Health 4. https://doi.org/10.3389/fdgth.2022.730430

Goodman B (2023) Privacy without persons: a buddhist critique of surveillance capitalism. AI Ethics 3(3):781–792. https://doi.org/10.1007/s43681-022-00204-1

Goswami A, Dutta S (2015) Gender differences in technology Usage—A literature review. Open J Bus Manage 4(1). https://doi.org/10.4236/ojbm.2016.41006

Guerrero-Dib JG, Portales L, Heredia-Escorza Y (2020) Impact of academic integrity on workplace ethical behavior. Int J Educational Integr 16(1). https://doi.org/10.1007/s40979-020-0051-3

Gundu T, Chibaya C (2023) Demystifying the impact of ChatGPT on teaching and learning. Vol 1862, pp 93–104, Gauteng. https://doi.org/10.1007/978-3-031-48536-7_7

Gupta A, Guglani A (2023) Scenario Analysis of Malicious Use of Artificial Intelligence and Challenges to Psychological Security in India. In The Palgrave Handbook of Malicious Use of AI and Psychological Security. https://doi.org/10.1007/978-3-031-22552-9_15

Hair J (2009) Multivariate Data Analysis. Faculty and Research Publications. https://digitalcommons.kennesaw.edu/facpubs/2925

Halaweh M (2023) ChatGPT in education: strategies for responsible implementation. Contemp Educ Technol 15(2). https://doi.org/10.30935/cedtech/13036

Haleem A, Javaid M, Singh RP (2022) An era of ChatGPT as a significant futuristic support tool: a study on features, abilities, and challenges. BenchCouncil Trans Benchmarks Stand Evaluations 2(4). https://doi.org/10.1016/J.TBENCH.2023.100089

Hassan A (2023) The Usage of Artificial Intelligence in Education in Light of the Spread of ChatGPT. Springer Science and Business Media Deutschland GmbH 687–702. https://doi.org/10.1007/978-981-99-6101-6_50

Holden OL, Norris ME, Kuhlmeier VA (2021) Academic Integrity in Online Assessment: A Research Review. Front Educ 6. https://doi.org/10.3389/feduc.2021.639814

Hsu S, Li T, Zhang Z, Fowler M, Zilles C, Karahalios K (2021) Attitudes Surrounding an Imperfect AI Autograder. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems 1–15. https://doi.org/10.1145/3411764.3445424

Huedo-Martínez S, Molina-Carmona R, Llorens-Largo F (2023) Study on the attitude of Young people towards Technology. 10925:26–43. https://doi.org/10.1007/978-3-319-91152-6_3

Hung J, Chen J (2023) The benefits, risks and regulation of using ChatGPT in Chinese academia: a content analysis. Soc Sci 12(7):380. https://doi.org/10.3390/socsci12070380

Irfan M, Aldulaylan F, Alqahtani Y (2023) Ethics and privacy in Irish higher education: a comprehensive study of Artificial Intelligence (AI) tools implementation at University of Limerick. Global Social Sci Rev VIII(II):201–210. https://doi.org/10.31703/gssr.2023(VIII-II).19

Isaac F, Diaz N, Kapphahn J, Mott O, Dworaczyk D, Luna-Gracía R, Rangel A (2023) Introduction to AI in Undergraduate Engineering Education 2023. https://doi.org/10.1109/FIE58773.2023.10343187

Karakose T, Tülübaş T (2023) How can ChatGPT facilitate teaching and learning: implications for Contemporary Education. 12(4):7–16. https://doi.org/10.22521/EDUPIJ.2023.124.1

Khairatun Hisan U, Miftahul Amri M (2022) Artificial Intelligence for Human Life: a critical opinion from Medical Bioethics perspective – part II. J Public Health Sci 1(02):112–130. https://doi.org/10.56741/jphs.v1i02.215

Kleebayoon A, Wiwanitkit V (2023) Artificial Intelligence, Chatbots, Plagiarism and Basic Honesty: comment. Cell Mol Bioeng 16(2):173–174. https://doi.org/10.1007/s12195-023-00759-x

Köbis L, Mehner C (2021) Ethical questions raised by AI-Supported mentoring in Higher Education. Front Artif Intell 4. https://doi.org/10.3389/frai.2021.624050

Kshetri N (2023) ChatGPT in developing economies. IT Prof 25(2):16–19. https://doi.org/10.1109/MITP.2023.3254639

Kuka L, Hörmann C, Sabitzer B (2022) Teaching and learning with AI in higher education: a scoping review. Springer Sci Bus Media Deutschland GmbH 456:551–571. https://doi.org/10.1007/978-3-031-04286-7_26

Ligorio M (2022) Artificial Intelligence and learning [INTELLIGENZA ARTIFICIALE E APPRENDIMENTO]. 34(1):21–26. https://doi.org/10.1422/103844

Lopes E, Jain G, Carlbring P, Pareek S (2023) Talking Mental Health: a Battle of Wits Between Humans and AI 2023. https://doi.org/10.1007/s41347-023-00359-6

Malmström H, Stöhr C, Ou A (2023) Chatbots and other AI for learning: a survey of use and views among university students in Sweden. https://doi.org/10.17196/CLS.CSCLHE/2023/01

Mich L, Garigliano R (2023) ChatGPT for e-Tourism: a technological perspective. 25(1):1–12. https://doi.org/10.1007/s40558-023-00248-x

Ming W, Bacon K (2023) How artificial intelligence promotes the education in China. ACM Int Conf Proceeding Ser 124–128. https://doi.org/10.1145/3588243.3588273

Montenegro-Rueda M, Fernández-Cerero J, Fernández-Batanero JM, López-Meneses E (2023) Impact of the implementation of ChatGPT in education: a systematic review. Computers 12(8):153. https://doi.org/10.3390/computers12080153

Muñoz S, Gayoso G, Huambo A, Tapia R, Incaluque J, Aguila O, Cajamarca J, Acevedo J, Huaranga H, Arias-Gonzáles J (2023) Examining the impacts of ChatGPT on Student Motivation and Engagement. 23(1):1–27

Nikolaeva IV, Levchenko AV, Zizikova SI (n.d.) Artificial intelligence technologies for evaluating quality and efficiency of education. Vol 378, pp 360–365. https://doi.org/10.1007/978-3-031-38122-5_50

Nunnally JC, Bernstein IH (1994) Psychometric theory. McGraw-Hill, New York. https://search.worldcat.org/title/28221417

Ogugua D, Yoon S, Lee D (2023) Academic Integrity in a Digital era: should the Use of ChatGPT be banned in schools? 28(7):1–10. https://doi.org/10.17549/gbfr.2023.28.7.1

Prem E (2019) Artificial intelligence for innovation in Austria. Technol Innov Manage Rev 9(12):5–15. https://doi.org/10.22215/timreview/1287

Putra F, Rangka I, Aminah S, Aditama M (2023) ChatGPT in the higher education environment: perspectives from the theory of high order thinking skills. 45(4):e840–e841. https://doi.org/10.1093/pubmed/fdad120

Rath K, Senapati A, Dalla V, Kumar A, Sahu S, Das R (2023) GROWING role of Ai toward digital transformation in higher education systems. Apple Academic, pp 3–26. https://doi.org/10.1201/9781003300458-2

Ringle CM, Wende S, Becker J-M (2022) SmartPLS 4. SmartPLS GmbH, Oststeinbek. https://www.smartpls.com . https://www.smartpls.com/documentation/getting-started/cite/

Roberts H, Babuta A, Morley J, Thomas C, Taddeo M, Floridi L (2023) Artificial intelligence regulation in the United Kingdom: a path to good governance and global leadership? Internet Policy Rev 12(2). https://doi.org/10.14763/2023.2.1709

Robledo D, Zara C, Montalbo S, Gayeta N, Gonzales A, Escarez M, Maalihan E (2023) Development and validation of a Survey Instrument on Knowledge, attitude, and practices (KAP) regarding the Educational Use of ChatGPT among Preservice teachers in the Philippines. 13(10):1582–1590. https://doi.org/10.18178/ijiet.2023.13.10.1965

Sane A, Albuquerque M, Gupta M, Valadi J (2023) ChatGPT didn’t take me very far, did it? Proceedings of the ACM Conference on Global Computing Education Vol 2, 204. https://doi.org/10.1145/3617650.3624947

Sarkar A (2023) Exploring perspectives on the impact of Artificial Intelligence on the Creativity of Knowledge Work: Beyond Mechanised Plagiarism and Stochastic parrots. ACM Int Conf Proceeding Ser. https://doi.org/10.1145/3596671.3597650

Shoufan A (2023) Exploring students’ perceptions of ChatGPT: thematic analysis and Follow-Up survey. 11:38805–38818. https://doi.org/10.1109/ACCESS.2023.3268224

Singh H, Tayarani-Najaran MH, Yaqoob M (2023) Exploring Computer Science Students’ perception of ChatGPT in Higher Education: a descriptive and correlation study 13(9). https://doi.org/10.3390/educsci13090924

Svenningsson J, Höst G, Hultén M, Hallström J (2022) Students’ attitudes toward technology: exploring the relationship among affective, cognitive and behavioral components of the attitude construct. Int J Technol Des Educ 32(3):1531–1551. https://doi.org/10.1007/S10798-021-09657-7/FIGURES/2

Tang KY, Hsiao CH (2023) Review of TAM used in Educational Technology Research: a proposal. 2:714–718

Taylor S, Gulson K, McDuie-Ra D (2023) Artificial Intelligence from Colonial India: race, statistics, and Facial Recognition in the Global South. Sci Technol Hum Values 48(3):663–689. https://doi.org/10.1177/01622439211060839

Teo T, Noyes J (2014) Explaining the intention to use technology among pre-service teachers: a multigroup analysis of the Unified Theory of Acceptance and Use of Technology. Interact Learn Environ 22(1):51–66. https://doi.org/10.1080/10494820.2011.641674

Thong C, Butson R, Lim W (2023) Understanding the impact of ChatGPT in education. ASCILITE Publications 234–243. https://doi.org/10.14742/apubs.2023.461

Venkatesh V, Morris MG, Ackerman PL (2000) A longitudinal field investigation of gender differences in Individual Technology Adoption Decision-Making processes. Organ Behav Hum Decis Process 83(1):33–60. https://doi.org/10.1006/obhd.2000.2896

von Garrel J, Mayer J (2023) Artificial Intelligence in studies—use of ChatGPT and AI-based tools among students in Germany. Humanit Social Sci Commun 10(1). https://doi.org/10.1057/s41599-023-02304-7

Yasin M (2022) Youth perceptions and attitudes about artificial intelligence. Izv Saratov Univ Philos Psychol Pedagogy 22(2):197–201. https://doi.org/10.18500/1819-7671-2022-22-2-197-201

Zeb A, Ullah R, Karim R (2024a) Exploring the role of ChatGPT in higher education: opportunities, challenges and ethical considerations 44(1):99–111. https://doi.org/10.1108/IJILT-04-2023-0046

Zhong Y, Ng DTK, Chu SKW (2023) Exploring the social media discourse: the impact of ChatGPT on teachers’ roles and identity. In: Proceedings of ICCE 2023, vol 1, pp 838–848


Acknowledgements

Not applicable.

Author information

Authors and Affiliations

Universidad Nacional de Trujillo, Trujillo, Perú

Benicio Gonzalo Acosta-Enriquez & Cristian Raymound Gutiérrez Ulloa

Universidad Tecnológica del Perú, Lima, Perú

Marco Agustín Arbulú Ballesteros

Universidad César Vallejo, Trujillo, Perú

Carmen Graciela Arbulu Perez Vargas

Universidad Técnica de Machala, Machala, Ecuador

Milca Naara Orellana Ulloa, Johanna Micaela Pizarro Romero & Néstor Daniel Gutiérrez Jaramillo

Unidad educativa particular bilingüe Principito y Marcel Laniado de Wind, Machala, Ecuador

Héctor Ulises Cuenca Orellana & Diego Xavier Ayala Anzoátegui

Universidad Nacional Mayor de San Marcos, Lima, Perú

Carlos López Roca


Contributions

Conceptualization: Benicio Gonzalo Acosta-Enriquez, Carmen Graciela Arbulu Perez Vargas, Marco Agustín Arbulú Ballesteros; Methodology: Milca Naara Orellana Ulloa, Cristian Raymound Gutiérrez Ulloa, Johanna Micaela Pizarro Romero; Formal analysis: Benicio Gonzalo Acosta-Enriquez, Marco Arbulú Ballasteros; Writing - preparation of the original draft: Néstor Daniel Gutiérrez Jaramillo, Héctor Ulises Cuenca Orellana, Carlos López Roca; Writing - revision and editing: Diego Xavier Ayala Anzoátegui, Carmen Graciela Arbulu Perez Vargas, Carlos López Roca. All the authors have read and approved the final manuscript.

Corresponding author

Correspondence to Benicio Gonzalo Acosta-Enriquez .

Ethics declarations

Conflict of interest.

The authors declare that they have no competing interests that could bias the results of the manuscript.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Acosta-Enriquez, B.G., Arbulú Ballesteros, M.A., Arbulu Perez Vargas, C.G. et al. Knowledge, attitudes, and perceived ethics regarding the use of ChatGPT among Generation Z university students. Int J Educ Integr 20, 10 (2024). https://doi.org/10.1007/s40979-024-00157-4


Received: 28 February 2024

Accepted: 04 June 2024

Published: 19 June 2024

DOI: https://doi.org/10.1007/s40979-024-00157-4


Keywords
  • Perceived ethics
  • College students
  • Educational technology
  • Higher education

International Journal for Educational Integrity

ISSN: 1833-2595



Ethics of Artificial Intelligence


Global AI Ethics and Governance Observatory

Getting AI governance right is one of the most consequential challenges of our time, calling for mutual learning based on the lessons and good practices emerging from the different jurisdictions around the world.

The aim of the Global AI Ethics and Governance Observatory is to provide a global resource for policymakers, regulators, academics, the private sector and civil society to find solutions to the most pressing challenges posed by Artificial Intelligence.

The Observatory showcases information about the readiness of countries to adopt AI ethically and responsibly.

It also hosts the AI Ethics and Governance Lab, which gathers contributions, impactful research, toolkits and good practices.

With its unique mandate, UNESCO has led the international effort to ensure that science and technology develop with strong ethical guardrails for decades.

Be it on genetic research, climate change, or scientific research, UNESCO has delivered global standards to maximize the benefits of the scientific discoveries, while minimizing the downside risks, ensuring they contribute to a more inclusive, sustainable, and peaceful world. It has also identified frontier challenges in areas such as the ethics of neurotechnology, on climate engineering, and the internet of things.


The rapid rise in artificial intelligence (AI) has created many opportunities globally, from facilitating healthcare diagnoses to enabling human connections through social media and creating labour efficiencies through automated tasks.

However, these rapid changes also raise profound ethical concerns. These arise from the potential AI systems have to embed biases, contribute to climate degradation, threaten human rights and more. Such risks associated with AI have already begun to compound on top of existing inequalities, resulting in further harm to already marginalised groups.

Artificial intelligence plays a role in billions of people’s lives

In no other field is the ethical compass more relevant than in artificial intelligence. These general-purpose technologies are re-shaping the way we work, interact, and live. The world is set to change at a pace not seen since the deployment of the printing press six centuries ago. AI technology brings major benefits in many areas, but without ethical guardrails, it risks reproducing real world biases and discrimination, fueling divisions and threatening fundamental human rights and freedoms.

— Gabriela Ramos

Recommendation on the Ethics of Artificial Intelligence

UNESCO produced the first-ever global standard on AI ethics, the ‘Recommendation on the Ethics of Artificial Intelligence’, in November 2021. This framework was adopted by all 193 Member States. The protection of human rights and dignity is the cornerstone of the Recommendation, based on the advancement of fundamental principles such as transparency and fairness, always remembering the importance of human oversight of AI systems. However, what makes the Recommendation exceptionally applicable are its extensive Policy Action Areas, which allow policymakers to translate the core values and principles into action with respect to data governance, environment and ecosystems, gender, education and research, and health and social wellbeing, among many other spheres.

Four core values

  • Respect, protection and promotion of human rights and fundamental freedoms and human dignity
  • Living in peaceful, just, and interconnected societies
  • Ensuring diversity and inclusiveness
  • Environment and ecosystem flourishing

A dynamic understanding of AI

The Recommendation interprets AI broadly as systems with the ability to process data in a way which resembles intelligent behaviour.

This is crucial as the rapid pace of technological change would quickly render any fixed, narrow definition outdated, and make future-proof policies infeasible.

A human rights approach to AI: ten core principles

  • Proportionality and do no harm: The use of AI systems must not go beyond what is necessary to achieve a legitimate aim. Risk assessment should be used to prevent harms which may result from such uses.
  • Safety and security: Unwanted harms (safety risks) as well as vulnerabilities to attack (security risks) should be avoided and addressed by AI actors.
  • Right to privacy and data protection: Privacy must be protected and promoted throughout the AI lifecycle. Adequate data protection frameworks should also be established.
  • Multi-stakeholder and adaptive governance and collaboration: International law and national sovereignty must be respected in the use of data. Additionally, participation of diverse stakeholders is necessary for inclusive approaches to AI governance.
  • Responsibility and accountability: AI systems should be auditable and traceable. There should be oversight, impact assessment, audit and due diligence mechanisms in place to avoid conflicts with human rights norms and threats to environmental wellbeing.
  • Transparency and explainability: The ethical deployment of AI systems depends on their transparency and explainability (T&E). The level of T&E should be appropriate to the context, as there may be tensions between T&E and other principles such as privacy, safety and security.
  • Human oversight and determination: Member States should ensure that AI systems do not displace ultimate human responsibility and accountability.
  • Sustainability: AI technologies should be assessed against their impacts on ‘sustainability’, understood as a set of constantly evolving goals including those set out in the UN’s Sustainable Development Goals.
  • Awareness and literacy: Public understanding of AI and data should be promoted through open and accessible education, civic engagement, digital skills and AI ethics training, and media and information literacy.
  • Fairness and non-discrimination: AI actors should promote social justice, fairness, and non-discrimination while taking an inclusive approach to ensure AI’s benefits are accessible to all.

Actionable policies

Key policy areas make clear arenas where Member States can make strides towards responsible developments in AI

While values and principles are crucial to establishing a basis for any ethical AI framework, recent movements in AI ethics have emphasised the need to move beyond high-level principles and toward practical strategies.

The Recommendation does just this by setting out eleven key areas for policy actions.

Recommendation on the Ethics of Artificial Intelligence - 11 Key policy areas

Implementing the Recommendation

The Readiness Assessment Methodology (RAM) is designed to help assess whether Member States are prepared to effectively implement the Recommendation. It helps them identify their status of preparedness and provides a basis for UNESCO to custom-tailor its capacity-building support.

The Ethical Impact Assessment (EIA) is a structured process which helps AI project teams, in collaboration with the affected communities, to identify and assess the impacts an AI system may have. It allows them to reflect on the system’s potential impact and to identify needed harm prevention actions.

Women4Ethical AI expert platform to advance gender equality

UNESCO’s Women4Ethical AI is a new collaborative platform to support governments’ and companies’ efforts to ensure that women are represented equally in both the design and deployment of AI. The platform’s members will also contribute to the advancement of all the ethical provisions in the Recommendation on the Ethics of AI.

The platform unites 17 leading female experts from academia, civil society, the private sector and regulatory bodies, from around the world. They will share research and contribute to a repository of good practices. The platform will drive progress on non-discriminatory algorithms and data sources, and incentivize girls, women and under-represented groups to participate in AI.


Business Council for Ethics of AI

The Business Council for Ethics of AI is a collaborative initiative between UNESCO and companies operating in Latin America that are involved in the development or use of artificial intelligence (AI) in various sectors.

The Council serves as a platform for companies to come together, exchange experiences, and promote ethical practices within the AI industry. By working closely with UNESCO, it aims to ensure that AI is developed and utilized in a manner that respects human rights and upholds ethical standards.

Currently co-chaired by Microsoft and Telefonica, the Council is committed to strengthening technical capacities in ethics and AI, designing and implementing the Ethical Impact Assessment tool mandated by the Recommendation on the Ethics of AI, and contributing to the development of intelligent regional regulations. Through these efforts, it strives to create a competitive environment that benefits all stakeholders and promotes the responsible and ethical use of AI.


Examples of ethical dilemmas

Examples of gender bias in artificial intelligence, originating from stereotypical representations deeply rooted in our societies.

The use of AI in judicial systems around the world is increasing, creating more ethical questions to explore.

The use of AI in culture raises interesting ethical reflections. For instance, what happens when AI has the capacity to create works of art itself?

An autonomous car is a vehicle that is capable of sensing its environment and moving with little or no human involvement.





Artificial intelligence in education: Addressing ethical challenges in K-12 settings

  • Published: 22 September 2021
  • Volume 2, pages 431–440 (2022)


  • Selin Akgun   ORCID: orcid.org/0000-0002-6672-498X 1 &
  • Christine Greenhow 1  




1 Introduction


Despite the benefits of AI applications for education, they pose societal and ethical drawbacks. As the scientist Stephen Hawking famously warned, weighing these risks is vital for the future of humanity; therefore, it is critical to take action toward addressing them. The biggest risks of integrating these algorithms in K-12 contexts are: (a) perpetuating existing systemic bias and discrimination, (b) perpetuating unfairness for students from mostly disadvantaged and marginalized groups, and (c) amplifying racism, sexism, xenophobia, and other forms of injustice and inequity [ 40 ]. These algorithms do not occur in a vacuum; rather, they shape and are shaped by ever-evolving cultural, social, institutional and political forces and structures [ 33 , 34 ]. As academics, scientists, and citizens, we have a responsibility to educate teachers and students to recognize the ethical challenges and implications of algorithm use. To create a future generation in which an inclusive and diverse citizenry can participate in shaping the future of AI, we need to develop opportunities for K-12 students and teachers to learn about AI via AI- and ethics-based curricula and professional development [ 2 , 58 ].

Toward this end, the existing literature provides little guidance and contains a limited number of studies that focus on supporting K-12 students’ and teachers’ understanding of the social, cultural, and ethical implications of AI [ 2 ]. Most studies examine university students’ engagement with ethical ideas about algorithmic bias, but few address how to promote students’ understanding of AI and ethics in K-12 settings. Therefore, this article: (a) synthesizes ethical issues surrounding AI in education as identified in the educational literature, (b) reflects on different approaches and curriculum materials available for teaching students about AI and ethics (i.e., featuring materials from the MIT Media Lab and Code.org), and (c) articulates future directions for research and recommendations for practitioners seeking to navigate AI and ethics in K-12 settings.

First, we briefly define the notion of artificial intelligence (AI) and its applications through machine-learning and algorithm systems. As educational and educational technology scholars working in the United States, and at the risk of oversimplifying, we provide only a brief definition of AI below, and recognize that definitions of AI are complex, multidimensional, and contested in the literature [ 9 , 16 , 38 ]; an in-depth discussion of these complexities, however, is beyond the scope of this paper. Second, we describe in more detail five applications of AI in education, outlining their potential benefits for educators and students. Third, we describe the ethical challenges they raise by posing the question: “how and in what ways do algorithms manipulate us?” Fourth, we explain how to support students’ learning about AI and ethics through different curriculum materials and teaching practices in K-12 settings. Our goal here is to provide strategies for practitioners to reap the benefits while navigating the ethical challenges. We acknowledge that in centering this work within U.S. education, we highlight certain ethical issues that educators in other parts of the world may see as less prominent. For example, the European Union (EU) has highlighted ethical concerns and implications of AI, emphasizing privacy protection, surveillance, and non-discrimination as primary areas of interest, and has provided guidelines on what trustworthy AI should look like [ 3 , 15 , 23 ]. Finally, we reflect on future directions for educational and other research that could support K-12 teachers and students in reaping the benefits while mitigating the drawbacks of AI in education.

2 Definition and applications of artificial intelligence

The pursuit of creating intelligent machines that replicate human behavior has accelerated with the realization of artificial intelligence. With the latest advancements in computer science, a proliferation of definitions and explanations of what counts as an AI system has emerged. For instance, AI has been defined as “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings” [ 49 ]. This particular definition highlights the mimicry of human behavior and consciousness. Furthermore, AI has been defined as “the combination of cognitive automation, machine learning, reasoning, hypothesis generation and analysis, natural language processing, and intentional algorithm mutation producing insights and analytics at or above human capability” [ 31 ]. This definition brings the different sub-fields of AI together and underlines their function in producing insights and analytics at or above human capability.

Combining these definitions, artificial intelligence can be described as the technology that builds systems to think and act like humans, with the ability to achieve goals. AI is mainly known through different applications and advanced computer programs, such as recommender systems (e.g., YouTube, Netflix), personal assistants (e.g., Apple’s Siri), facial recognition systems (e.g., Facebook’s face detection in photographs), and learning apps (e.g., Duolingo) [ 32 ]. Building on these programs, different sub-fields of AI have been used in a diverse range of applications. Of these sub-fields, algorithms and machine learning are most relevant to AI in K-12 education.

2.1 Algorithms

Algorithms are the core elements of AI. The history of AI is closely connected to the development of sophisticated and evolutionary algorithms. An algorithm is a set of rules or instructions that a computer follows in problem-solving operations to achieve an intended end goal. In essence, all computer programs are algorithms. They involve thousands of lines of code representing mathematical instructions that the computer follows to solve the intended problems (e.g., performing a numerical calculation, processing an image, or checking grammar in an essay). AI algorithms are applied to fields that we might think of as essentially human behavior, such as speech and face recognition, visual perception, learning, and decision-making. In that way, algorithms can provide instructions for almost any AI system and application we can conceive [ 27 ].
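
To make the idea concrete, the short Python sketch below is our own illustration (not drawn from any cited system): an explicit set of instructions that turns an input sentence into an output list of flagged issues, in the spirit of a grammar checker.

```python
# A minimal, invented sketch of an algorithm: a fixed set of
# instructions the computer follows to turn an input (a sentence)
# into an output (a list of flagged issues).

def toy_grammar_check(sentence: str) -> list:
    """Flag immediately repeated words, a common typing error."""
    issues = []
    words = sentence.lower().split()
    for first, second in zip(words, words[1:]):
        if first == second:
            issues.append(f"repeated word: '{first}'")
    return issues

print(toy_grammar_check("The the algorithm follows explicit rules."))
# -> ["repeated word: 'the'"]
```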

2.2 Machine learning

Machine learning is derived from statistical learning methods and uses data and algorithms to perform tasks that are typically performed by humans [ 43 ]. Machine learning is about making computers act or perform without being given line-by-line instructions [ 29 ]. Its working mechanism is the learning model’s exposure to ample amounts of quality data [ 41 ]. Machine-learning algorithms first analyze the data to determine patterns and build a model, and then use that model to predict future values. In other words, machine learning can be considered a three-step process: first, it analyzes and gathers the data; then, it builds a model that captures the patterns relevant to the task; and finally, it takes action and produces the desired results without human intervention [ 29 , 56 ]. Widely known AI applications such as recommender and facial recognition systems have all been made possible through the working principles of machine learning.
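
The three steps can be sketched in a few lines. The example below is a minimal illustration built on invented toy data and the widely available scikit-learn library; it is not any particular platform’s implementation.

```python
# A minimal sketch of the three-step process described above, using
# scikit-learn and an invented toy dataset: [hours studied, practice
# problems completed] -> passed (1) or not (0). Numbers are illustrative.
from sklearn.tree import DecisionTreeClassifier

# Step 1: gather and analyze the data (features X, labels y).
X = [[1, 5], [2, 10], [3, 20], [8, 40], [9, 55], [10, 60]]
y = [0, 0, 0, 1, 1, 1]

# Step 2: build a model that captures the patterns in the data.
model = DecisionTreeClassifier().fit(X, y)

# Step 3: predict future values without human intervention.
print(model.predict([[6, 30]]))  # e.g., [1]
```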

3 Benefits of AI applications in education

Personalized learning systems, automated assessments, facial recognition systems, chatbots (on social media sites), and predictive analytics tools are being deployed increasingly in K-12 educational settings; they are powered by machine-learning systems and algorithms [ 29 ]. These applications of AI have shown promise in supporting teachers and students in various ways: (a) providing instruction in mixed-ability classrooms, (b) providing students with detailed and timely feedback on their writing products, and (c) freeing teachers from the burden of possessing all knowledge and giving them more room to support their students as they observe, discuss, and gather information in collaborative knowledge-building processes [ 26 , 50 ]. Below, we outline the benefits of each of these educational applications in the K-12 setting before turning to a synthesis of their ethical challenges and drawbacks.

3.1 Personalized learning systems

Personalized learning systems, also known as adaptive learning platforms or intelligent tutoring systems, are one of the most common and valuable applications of AI to support students and teachers. They provide students with access to different learning materials based on their individual learning needs and subjects [ 55 ]. For example, rather than practicing chemistry on a worksheet or reading a textbook, students may use an adaptive and interactive multimedia version of the course content [ 39 ]. Comparing students’ scores on researcher-developed or standardized tests, research shows that instruction based on personalized learning systems results in higher test scores than traditional teacher-led instruction [ 36 ]. Microsoft’s 2018 report on over 2000 students and teachers from Singapore, the U.S., the UK, and Canada shows that AI supports students’ learning progressions. These platforms promise to identify gaps in students’ prior knowledge by tailoring learning tools and materials to support students’ growth. The systems build models of learners based on their knowledge and cognition; however, existing platforms do not yet model learners’ social, emotional, and motivational states [ 28 ]. Considering the shift to remote K-12 education during the COVID-19 pandemic, personalized learning systems offer a promising form of distance learning that could reshape K-12 instruction for the future [ 35 ].
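
The adaptive logic behind such platforms can be caricatured in a few lines. The function below is our own toy sketch, with invented thresholds and level bounds, not any vendor’s actual system:

```python
# A toy sketch of the adaptive sequencing inside a personalized learning
# platform: choose the next exercise's difficulty from the student's
# recent answers. Thresholds and level bounds are invented.

def next_difficulty(recent_correct: list, current_level: int) -> int:
    """Raise difficulty after strong performance, lower it after weak."""
    if not recent_correct:
        return current_level
    accuracy = sum(recent_correct) / len(recent_correct)
    if accuracy >= 0.8:                    # level appears mastered
        return min(current_level + 1, 10)  # go harder, cap at level 10
    if accuracy <= 0.4:                    # likely knowledge gap
        return max(current_level - 1, 1)   # go easier, floor at level 1
    return current_level                   # keep practicing this level

print(next_difficulty([True, True, False, True, True], current_level=3))  # 4
```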

3.2 Automated assessment systems

Automated assessment systems are becoming one of the most prominent and promising applications of machine learning in K-12 education [ 42 ]. These scoring algorithms are being developed to meet the need for grading students’ writing, exams, and assignments, tasks usually performed by the teacher. Assessment algorithms can provide course support and management tools that lessen teachers’ workload, as well as extend their capacity and productivity. Ideally, these systems can also support students, as their essays can be graded quickly [ 55 ]. Providers of the biggest open online courses, such as Coursera and EdX, have integrated automated scoring engines into their learning platforms to assess the writing of hundreds of students [ 42 ]. Similarly, a tool called “Gradescope” has been used by over 500 universities to develop and streamline scoring and assessment [ 12 ]. By flagging wrong answers and marking correct ones, the tool reduces instructors’ manual grading time and effort. Automated essay assessment thus differs greatly from numeric assessment, which simply checks right or wrong answers on a test: marking and giving feedback on essays means engaging with the complexities of written work. Overall, these scoring systems have the potential to handle such complexities of the teaching context and support students’ learning process by providing them with feedback and guidance to improve and revise their writing.
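
As a rough sketch of the mechanism, an automated scorer maps an essay’s features to a numeric grade. Production engines rely on trained natural-language models; the hand-set features and weights below are purely illustrative:

```python
# A deliberately simplified sketch of automated essay scoring: extract
# surface features from the text and combine them into a 0-10 score.
# Real engines use trained NLP models; these weights are invented.

def score_essay(text: str) -> float:
    words = text.split()
    n_words = len(words)
    vocab_richness = len({w.lower() for w in words}) / max(n_words, 1)
    avg_word_len = sum(len(w) for w in words) / max(n_words, 1)
    raw = 0.02 * n_words + 4.0 * vocab_richness + 0.5 * avg_word_len
    return round(min(raw, 10.0), 1)

essay = ("Automated scoring systems estimate writing quality from "
         "surface features such as length and vocabulary variety.")
print(score_essay(essay))  # a feedback-ready numeric score
```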

3.3 Facial recognition systems and predictive analytics

Facial recognition software is used to capture and monitor students’ facial expressions. These systems provide insights about students’ behaviors during learning processes and allow teachers to take action or intervene, which, in turn, helps teachers develop learner-centered practices and increase students’ engagement [ 55 ]. Predictive analytics algorithm systems are mainly used to identify and detect patterns about learners based on statistical analysis. For example, these analytics can be used to detect university students who are at risk of failing or not completing a course. Through these identifications, instructors can intervene and get students the help they need [ 55 ].
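
The predictive-analytics pattern can be sketched briefly: fit a model on past students’ records, then flag current students whose predicted risk crosses a threshold. All data, features, and the threshold below are invented for illustration:

```python
# A minimal sketch of predictive analytics for early intervention:
# fit a model on past students' (invented) records, then flag current
# students whose predicted risk of failing is high.
from sklearn.linear_model import LogisticRegression

# Past students: [attendance rate, assignments submitted]; 1 = failed.
X_past = [[0.95, 10], [0.90, 9], [0.50, 4], [0.40, 3], [0.85, 8], [0.30, 2]]
y_past = [0, 0, 1, 1, 0, 1]

model = LogisticRegression().fit(X_past, y_past)

# Current students: flag anyone above an (illustrative) risk threshold.
current = {"student_a": [0.92, 9], "student_b": [0.45, 3]}
for name, features in current.items():
    risk = model.predict_proba([features])[0][1]
    if risk > 0.5:
        print(f"{name}: at risk (p = {risk:.2f}), consider intervening")
```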

3.4 Social networking sites and chatbots

Social networking sites (SNSs) connect students and teachers through social media outlets. Researchers have emphasized the importance of using SNSs (such as Facebook) to expand learning opportunities beyond the classroom, monitor students’ well-being, and deepen student–teacher relations [ 5 ]. Different scholars have examined the role of social media in education, describing its impact on student and teacher learning and scholarly communication [ 6 ]. They point out that the integration of social media can foster students’ active learning, collaboration skills, and connections with communities beyond the classroom [ 6 ]. Chatbots, also known as dialogue systems or conversational agents [ 26 , 52 ], are likewise embedded in social media outlets through different AI systems [ 21 ]. Chatbots are helpful because of their ability to respond naturally, with a conversational tone. For instance, a text-based chatbot system called “Pounce” was used at Georgia State University to help students through the registration and admission process, as well as with financial aid and other administrative tasks [ 7 ].

In summary, applications of AI can positively impact students’ and teachers’ educational experiences and help them address instructional challenges and concerns. On the other hand, AI cannot be a substitute for human interaction [ 22 , 47 ]. Students have a wide range of learning styles and needs. Although AI can be a time-saving and cognitive aide for teachers, it is but one tool in the teachers’ toolkit. Therefore, it is critical for teachers and students to understand the limits, potential risks, and ethical drawbacks of AI applications in education if they are to reap the benefits of AI and minimize the costs [ 11 ].

4 Ethical concerns and potential risks of AI applications in education

The ethical challenges and risks posed by AI systems seemingly run counter to marketing efforts that present algorithms to the public as objective and value-neutral tools. In essence, algorithms reflect the values of their builders, who hold positions of power [ 26 ]. Whenever people create algorithms, they also create the datasets those algorithms learn from, and these datasets carry society’s historical and systemic biases, which ultimately transform into algorithmic bias. Even though such bias is embedded in the algorithmic model with no explicit intention, we can see various gender and racial biases in different AI-based platforms [ 54 ].

Considering the different forms of bias and ethical challenges of AI applications in K-12 settings, we will focus on problems of privacy, surveillance, autonomy, bias, and discrimination (see Fig.  1 ). However, it is important to acknowledge that educators will have different ethical concerns and challenges depending on their students’ grade and age of development. Where strategies and resources are recommended, we indicate the age and/or grade level of student(s) they are targeting (Fig. 2 ).

Fig. 1 Potential ethical and societal risks of AI applications in education

Fig. 2 Student work from the activity “YouTube Redesign” (MIT Media Lab, AI and Ethics Curriculum, p. 1, [ 45 ])

One of the biggest ethical issues surrounding the use of AI in K-12 education relates to the privacy concerns of students and teachers [ 47 , 49 , 54 ]. Privacy violations mainly occur as people expose an excessive amount of personal information on online platforms. Although legislation and standards exist to protect sensitive personal data, AI-based tech companies’ violations of data access and security increase people’s privacy concerns [ 42 , 54 ]. To address these concerns, AI systems ask for users’ consent to access their personal data. Although consent requests are designed as protective measures to help alleviate privacy concerns, many individuals give their consent without knowing or considering the extent of the information (metadata) they are sharing, such as the language spoken, racial identity, biographical data, and location [ 49 ]. Such uninformed sharing in effect undermines human agency and privacy; people’s agency diminishes as AI systems reduce introspective and independent thought [ 55 ]. Relatedly, scholars have raised the ethical issue of forcing students and parents to use these algorithms as part of their education even if they explicitly agree to give up privacy [ 14 , 48 ]: families have no real choice when these systems are required by public schools.

Another ethical concern surrounding the use of AI in K-12 education involves surveillance or tracking systems that gather detailed information about the actions and preferences of students and teachers. Through algorithms and machine-learning models, AI tracking systems not only monitor users’ activities but also predict their future preferences and actions [ 47 ]. Surveillance mechanisms can be embedded into AI’s predictive systems to foresee students’ learning performances, strengths, weaknesses, and learning patterns. For instance, research suggests that teachers who use social networking sites (SNSs) for pedagogical purposes encounter a number of problems, such as concerns about boundaries of privacy, friendship authority, responsibility, and availability [ 5 ]. While monitoring and patrolling students’ actions might be considered part of a teacher’s responsibility and a pedagogical tool for intervening in dangerous online situations (such as cyber-bullying or exposure to sexual content), such actions can also be seen as surveillance, which is problematic in that it threatens students’ privacy. Monitoring and tracking students’ online conversations and actions may also limit their participation in the learning event and make them feel unsafe taking ownership of their ideas. How can students feel secure and safe if they know that AI systems are used for surveilling and policing their thoughts and actions? [ 49 ]

Problems also emerge when surveillance systems trigger issues related to autonomy, that is, a person’s ability to act on his or her own interests and values. Predictive systems powered by algorithms jeopardize students’ and teachers’ autonomy and their ability to govern their own lives [ 46 , 47 ]. The use of algorithms to make predictions about individuals’ actions based on their information raises questions about fairness and self-freedom [ 19 ]. The risks of predictive analysis therefore include the perpetuation of existing biases and prejudices of social discrimination and stratification [ 42 ].

Finally, bias and discrimination are critical concerns in debates over AI ethics in K-12 education [ 6 ]. In AI platforms, existing power structures and biases are embedded into machine-learning models [ 6 ]. Gender bias is one of the most apparent forms of this problem; the bias is revealed when students in language-learning courses use AI to translate between a gender-specific language and one that is less so. For example, while Google Translate translated the Turkish equivalent of “She/he is a nurse” into the feminine form, it translated the Turkish equivalent of “She/he is a doctor” into the masculine form [ 33 ]. This shows how AI models in language translation carry the societal biases and gender-specific stereotypes present in their data [ 40 ]. Similarly, a number of problematic cases of racial bias are associated with AI’s facial recognition systems. Research shows that facial recognition software has misidentified a number of African American and Latino American people as convicted felons [ 42 ].
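
The mechanism behind such translation bias is easy to demonstrate. A system resolving a genderless pronoun (like the Turkish “o”) tends to pick whichever gendered pronoun co-occurred most often with the occupation in its training text; the tiny invented corpus below is only a caricature of the web-scale data real systems learn from:

```python
# A toy illustration of how skew in training text becomes algorithmic
# bias. The "corpus" is invented and stands in for the web-scale data
# real translation models learn from.
from collections import Counter

corpus = [
    "she is a nurse", "she is a nurse", "he is a nurse",
    "he is a doctor", "he is a doctor", "she is a doctor", "he is a doctor",
]

def most_likely_pronoun(occupation: str) -> str:
    """Pick the pronoun that most often co-occurs with the occupation."""
    counts = Counter(s.split()[0] for s in corpus if s.endswith(occupation))
    return counts.most_common(1)[0][0]

print(most_likely_pronoun("nurse"))   # 'she': reflects data skew, not fact
print(most_likely_pronoun("doctor"))  # 'he'
```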

Additionally, biased decision-making algorithms reveal themselves throughout AI applications in K-12 education: personalized learning, automated assessment, SNSs, and predictive systems. Although the main promise of machine-learning models is increased accuracy and objectivity, recent incidents have revealed the contrary. For instance, England’s A-level and GCSE secondary-level examinations were cancelled due to the pandemic in the summer of 2020 [ 1 , 57 ], and an alternative assessment method was implemented to determine students’ qualification grades: a grade standardization algorithm produced by the regulator Ofqual. Because Ofqual’s algorithm based its assessment on schools’ previous examination results, thousands of students were shocked to receive unexpectedly low grades. Although a full discussion of the incident is beyond the scope of this article [ 51 ], it revealed how the score distribution favored students who attended private or independent schools, while students from underrepresented groups were hit hardest. Unfortunately, automated assessment algorithms have the potential to produce unfair and inconsistent results, disrupting students’ final scores and future careers [ 53 ].
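
A toy sketch, which simplifies drastically and should not be read as Ofqual’s actual model, shows how anchoring grades to a school’s history can override individual merit:

```python
# A toy sketch (not Ofqual's actual algorithm) of rank-based grade
# standardization: rank this year's students by teacher-assessed grade,
# then hand them the grades the school's past cohort earned at the
# same rank positions.

def standardize(teacher_grades: list, historical_grades: list) -> list:
    ranked = sorted(range(len(teacher_grades)),
                    key=lambda i: teacher_grades[i], reverse=True)
    past_sorted = sorted(historical_grades, reverse=True)
    result = [0] * len(teacher_grades)
    for rank, student in enumerate(ranked):
        result[student] = past_sorted[rank]
    return result

# The teacher predicted a 9 (top grade) for the first student, but the
# school's best historical result was a 6, so the student is downgraded.
print(standardize([9, 5, 4], historical_grades=[6, 5, 3]))  # [6, 5, 3]
```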

5 Teaching and understanding AI and ethics in educational settings

These ethical concerns suggest an urgent need to introduce students and teachers to the ethical challenges surrounding AI applications in K-12 education and to ways of navigating them. To meet this need, different research groups and nonprofit organizations offer a number of open-access resources on AI and ethics. They provide instructional materials for students and teachers, such as lesson plans and hands-on activities, and professional learning materials for educators, such as open virtual learning sessions. Below, we describe and evaluate three resources: the “AI and Ethics” curriculum and the “AI and Data Privacy” workshop series from the Massachusetts Institute of Technology (MIT) Media Lab, as well as Code.org’s “AI for Oceans” activity. For readers who seek additional approaches and resources for K-12 level AI and ethics instruction, see: (a) the Chinese University of Hong Kong (CUHK)’s AI for the Future Project (AI4Future) [ 18 ]; (b) IBM’s Educator’s AI Classroom Kit [ 30 ]; (c) Google’s Teachable Machine [ 25 ]; (d) the UK-based nonprofit Apps for Good [ 4 ]; and (e) Machine Learning for Kids [ 37 ].

5.1 "AI and Ethics Curriulum" for middle school students by MIT Media Lab

The MIT Media Lab team offers an open-access curriculum on AI and ethics for middle school students and teachers. Through a series of lesson plans and hands-on activities, teachers are guided to support students’ learning of the technical terminology of AI systems as well as the ethical and societal implications of AI [ 2 ]. The curriculum includes various lessons tied to learning objectives. One of the main learning goals is to introduce students to the basic components of AI (algorithms, datasets, and supervised machine-learning systems), all while underlining the problem of algorithmic bias [ 45 ]. For instance, in the activity “AI Bingo”, students are given bingo cards with various AI systems, such as online search engines, customer service bots, and weather apps. Working collaboratively with a partner, students try to identify what prediction each selected AI system makes and what dataset it uses, becoming more familiar with the notions of dataset and prediction in the context of AI systems [ 45 ].

In the second investigation, “Algorithms as Opinions”, students think about algorithms as recipes: sets of instructions that modify an input to produce an output [ 45 ]. Initially, students are asked to write an algorithm to make the “best” peanut butter and jelly sandwich. They explore what it means to be “best” and see how their opinions of “best” are reflected in their algorithms. In this way, students figure out that algorithms can have various motives and goals. Following this activity, students work on the “Ethical Matrix”, building on the idea of algorithms as opinions [ 45 ]. During this investigation, students first refer back to their “best” peanut butter and jelly sandwich algorithms and discuss what counts as the “best” sandwich for themselves (most healthy, practical, delicious, etc.). Then, in an ethical matrix (chart), students identify the different stakeholders (such as their parents, teacher, or doctor) who care about their sandwich algorithm, since the values and opinions of those stakeholders are also embedded in the algorithm. Students fill out the matrix and look for where those values conflict or overlap. The matrix is a useful tool for students to recognize the different stakeholders in a system or society and to see how stakeholders’ values are built into an algorithm.

The final investigation, which teaches about the biased nature of algorithms, is “Learning and Algorithmic Bias” [ 45 ]. During the investigation, students think further about the concept of classification. Using Google’s Teachable Machine tool [ 2 ], students explore supervised machine-learning systems by training a cat–dog classifier on two different datasets. While the first dataset over-represents cats, the second represents dogs and cats equally and diversely [ 2 ]. Students compare the accuracy of the resulting classifiers and then discuss which dataset and outcome are fairer. This activity leads students into a discussion of how bias occurs in facial recognition algorithms and systems [ 2 ].
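
The point of the comparison can be captured in a few lines of code. The sketch below is our own variation on the activity, using invented labels rather than real images: a model that simply learned the majority class scores high overall accuracy while failing the under-represented group entirely.

```python
# A sketch of the lesson behind the two datasets: when one class
# dominates, overall accuracy can look high while the under-represented
# class is consistently misclassified. Labels are invented; no images.

test_labels = ["cat"] * 90 + ["dog"] * 10   # cats over-represented
predictions = ["cat"] * 100                 # a model that learned the skew

overall = sum(p == t for p, t in zip(predictions, test_labels)) / len(test_labels)
dogs = [(p, t) for p, t in zip(predictions, test_labels) if t == "dog"]
dog_accuracy = sum(p == t for p, t in dogs) / len(dogs)

print(f"overall accuracy: {overall:.0%}")       # 90%, looks impressive
print(f"accuracy on dogs: {dog_accuracy:.0%}")  # 0%, every dog is missed
```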

In the rest of the curriculum, similar to the AI Bingo investigation, students work with their partners to identify the various AI systems within the YouTube platform (such as its recommender algorithm and advertisement-matching algorithm). In the “YouTube Redesign” investigation, students redesign YouTube’s recommender system: they first identify stakeholders and their values in the system, and then use an ethical matrix to reflect on the goals of their redesigned recommendation algorithm [ 45 ]. Finally, in the “YouTube Socratic Seminar” activity, students read an abridged version of a Wall Street Journal article (edited to shorten the text and provide more accessible language for middle school students) and participate in a Socratic seminar. They discuss which stakeholders were most influential or significant in proposing changes to the YouTube Kids app and whether technologies like autoplay should ever exist. During their discussion, students engage with questions such as: “Which stakeholder is making the most change or has the most power?” and “Have you ever seen an inappropriate piece of content on YouTube? What did you do?” [ 45 ]

Overall, the MIT Media Lab’s AI and Ethics curriculum is a high-quality, open-access resource with which teachers can introduce middle school students to the risks and ethical implications of AI systems. The investigations described above involve students in collaborative, critical-thinking activities that push them to wrestle with issues of bias and discrimination in AI, as well as with the surveillance and autonomy concerns raised by predictive systems.

5.2 “AI and Data Privacy” workshop series for K-9 students by MIT Media Lab

Another quality resource from the MIT Media Lab’s Personal Robots Group is a workshop series designed to teach students (between the ages 7 and 14) about data privacy and introduce them to designing and prototyping data privacy features. The group has made the content, materials, worksheets, and activities of the workshop series into an open-access online document, freely available to teachers [ 44 ].

The first workshop in the series is “Mystery YouTube Viewer: A Lesson on Data Privacy”. During the workshop, students engage with the question of what privacy and data mean [ 44 ]. They observe YouTube’s home page from the perspective of a mystery user and, using clues from the recommended videos, make predictions about what the mystery viewer might look like or where they might live. In effect, students imitate how YouTube’s algorithms make predictions about their users. Engaging with these questions and observations, students think further about why privacy and boundaries are important and how an algorithm’s interpretation of us depends on who created the algorithm itself.

The second workshop in the series is “Designing Ads with Transparency: A Creative Workshop”. In this workshop, students think further about the meaning, aims, and impact of advertising and the role of advertisements in our lives [ 44 ]. Students collaboratively create an advertisement for an everyday object, with the objective of making the advertisement as “transparent” as possible. To do so, students learn about the notions of malware and adware, as well as the components of YouTube advertisements (such as sponsored labels, logos, and news sections). By the end of the workshop, students present their ad designs as posters and share them with their peers.

The final workshop in MIT’s AI and data privacy series is “Designing Privacy in Social Media Platforms”. This workshop is designed to teach students about YouTube, design, civics, and data privacy [ 44 ]. During the workshop, students create their own designs to address one of the biggest challenges of the digital era: problems associated with online consent. The workshop allows students to learn about privacy laws and how they affect young people’s media consumption. Students consider YouTube through the lens of the Children’s Online Privacy Protection Rule (COPPA) and reflect on one component of the legislation: how might students get parental permission (or verifiable consent)?

Such workshop resources seem promising in helping educate students and teachers about the ethical challenges of AI in education. Specifically, social media such as YouTube are widely used as a teaching and learning tool within K-12 classrooms and beyond them, in students’ everyday lives. These workshop resources may facilitate teachers’ and students’ knowledge of data privacy issues and support them in thinking further about how to protect privacy online. Moreover, educators seeking to implement such resources should consider engaging students in the larger question: who should own one’s data? Teaching students the underlying reasons for laws and facilitating debate on the extent to which they are just or not could help get at this question.

5.3 Investigation of “AI for Oceans” by Code.org

A third recommended resource for K-12 educators trying to navigate the ethical challenges of AI with their students comes from Code.org, a nonprofit organization focused on expanding students’ participation in computer science. Sponsored by Microsoft, Facebook, Amazon, Google, and other tech companies, Code.org aims to provide opportunities for K-12 students to learn about AI and machine-learning systems [ 20 ]. To support students (grades 3–12) in learning about AI, algorithms, machine learning, and bias, the organization offers an activity called “AI for Oceans”, in which students train their own machine-learning models.

The activity is provided as an open-access tutorial that helps teachers show their students how to train, model, and classify data, as well as how human bias plays a role in machine-learning systems. During the activity, students first classify objects as either “fish” or “not fish” in an attempt to remove trash from the ocean. Then, they expand their training dataset by including other sea creatures that belong underwater. Throughout the activity, students watch and interact with a number of visuals and video tutorials. With the support of their teachers, they discuss machine learning, the steps and influence of training data, and the formation and risks of biased data [ 20 ].

6 Future directions for research and teaching on AI and ethics

In this paper, we provided an overview of the possibilities and the potential ethical and societal risks of AI integration in education. To help address these risks, we highlighted several instructional strategies and resources for practitioners seeking to integrate AI applications in K-12 education and/or instruct students about the ethical issues they pose. These instructional materials have the potential to help students and teachers reap the powerful benefits of AI while navigating ethical challenges, especially those related to privacy and bias. Existing research on AI in education provides insight into supporting students’ understanding and use of AI [ 2 , 13 ]; however, research on how to develop K-12 teachers’ instructional practices regarding AI and ethics is still in its infancy.

Moreover, current resources, as demonstrated above, mainly address privacy and bias-related ethical and societal concerns of AI. Conducting more exploratory and critical research on teachers’ and students’ surveillance and autonomy concerns will be important to designing future resources. In addition, curriculum developers and workshop designers might consider centering culturally relevant and responsive pedagogies (by focusing on students’ funds of knowledge, family background, and cultural experiences) while creating instructional materials that address surveillance, privacy, autonomy, and bias. In such student-centered learning environments, students voice their own cultural and contextual experiences while trying to critique and disrupt existing power structures and cultivate their social awareness [ 24 , 36 ].

Finally, as scholars in teacher education and educational technology, we believe that educating future generations of diverse citizens to participate in the ethical use and development of AI will require more professional development for K-12 teachers (both pre-service and in-service). For instance, through sustained professional learning sessions, teachers could engage with suggested curriculum resources and teaching strategies as well as build a community of practice where they can share and critically reflect on their experiences with other teachers. Further research on such reflective teaching practices and students’ sense-making processes in relation to AI and ethics lessons will be essential to developing curriculum materials and pedagogies relevant to a broad base of educators and students.

Data availability

Not applicable.

Code availability

Not applicable.

References

Adams, R., McIntyre, N.: England A-level downgrades hit pupils from disadvantaged areas hardest. https://www.theguardian.com/education/2020/aug/13/england-a-level-downgrades-hit-pupils-from-disadvantaged-areas-hardest (2020). Accessed 10 September 2020

Ali, S. A., Payne, B. H., Williams, R., Park, H. W., Breazeal, C.: Constructionism, ethics, and creativity: developing primary and middle school artificial intelligence education. Paper presented at International Workshop on Education in Artificial Intelligence (EDUAI). Palo Alto, CA, USA. (2019)

Almeida, D., Shmarko, K., Lomas, E.: The ethics of facial recognition technologies, surveillance, and accountability in an age of artificial intelligence: a comparative analysis of US, EU, and UK regulatory frameworks. AI Ethics (2021). https://doi.org/10.1007/s43681-021-00077-w

Apps for Good: https://www.appsforgood.org/about Accessed 28 August 2021

Asterhan, C.S.C., Rosenberg, H.: The promise, reality and dilemmas of secondary school teacher–student interactions in Facebook: the teacher perspective. Comput. Educ. 85 , 134–148 (2015)

Krutka, D., Manca, S., Galvin, S., Greenhow, C., Koehler, M., Askari, E.: Teaching “against” social media: confronting problems of profit in the curriculum. Teachers College Record 121 (14), 1–42 (2019)

Greenhow, C., Galvin, S., Brandon, D., Askari, E.: A decade of research on K-12 teaching and teacher learning with social media: insights on the state-of-the-field. Teachers College Record 122 (6), 1–7 (2020)

Greenhow, C., Galvin, S., Staudt Willet, K.B.: Inquiring tweets want to know: #Edchat supports for #remoteteaching during COVID-19. British Journal of Educational Technology. 1–21 , (2021)

Barr, A., Feigenbaum, E.A.: Handbook of artificial intelligence, vol. 1and 2. Kaufmann, Los Altos (1981)

Bendici, R.: Rise of the machines. Univ. Bus. 21 (10), 53–54 (2018)

Blank, G., Bolsover, G., Dubois, E.: A new privacy paradox: young people and privacy on social network site. Oxford Internet Institute (2014)

Blumenstyk, G.: Can artificial intelligence make teaching more personal? The Chronicle of Higher Education. https://www.chronicle.com/article/Can-Artificial-Intelligence/243023 (2018)

Borenstein, J., Howard, A.: Emerging challenges in AI and the need for AI ethics education. AI Ethics 1 , 61–65 (2020)

Bulger, M.: Personalized learning: the conversations we’re not having. Data and Society Research Institute. https://datasociety.net/library/personalized-learning-the-conversations-were-not-having/ (2016)

Cath, C., Wachter, S., Mittelstadt, B., Tadder, M., Floridi, L.: Artificial Intelligence and the ‘good society’: the US, EU, and UK approach. Sci. Eng. Ethics 24 , 505–528 (2018)

Chaudhry, M.A., Kazim, E.: Artificial intelligence in education (AIEd): a high-level academic and industry note 2021. AI Ethics (2021). https://doi.org/10.1007/s43681-021-00074-z

Cheney-Lippold, J.: We are data: algorithms and the making of our digital selves. New York University Press, New York (2017)

Chiu, T.K.F., Meng, H., Chai, C.S., King, I., Wong, S., Yam, Y.: Creation and evaluation of a pre-tertiary artificial intelligence (AI) curriculum. IEEE Trans. Educ. (2021). https://doi.org/10.1109/TE.2021.3085878

Citron, D.K., Pasquale, F.A.: The scored society: due process for automated predictions. Wash. Law Rev. 89 , 1–33 (2014)

Code.org.: AI for Oceans. https://code.org/oceans (2020)

Dignum, V.: Ethics in artificial intelligence: introduction to the special issue. Ethics Inf. Technol. 20 , 1–3 (2018)

Dishon, G.: New data, old tensions: big data, personalized learning, and the challenges of progressive education. Theory Res. Educ. 15 (3), 272–289 (2017)

Tiple, V.: Recommendations on the European Commission’s White Paper on Artificial Intelligence - A European approach to excellence and trust, COM(2020) 65 final (the ‘AI White Paper’) (2020). https://doi.org/10.2139/ssrn.3706099

Gay, G.: Culturally responsive teaching: theory, research, and practice. Teachers College Press, New York (2010)

Google Teachable Machine: https://teachablemachine.withgoogle.com/ Accessed 28 August 2021

Hrastinski, S., Olofsson, A.D., Arkenback, C., Ekström, S., Ericsson, E., Fransson, G., Jaldemark, J., Ryberg, T., Öberg, L., Fuentes, A., Gustafsson, U., Humble, N., Mozelius, P., Sundgren, M., Utterberg, M.: Critical imaginaries and reflections on artificial intelligence and robots in postdigital K-12 education. Postdigit. Sci. Educ. 1 , 427–445 (2019)

Henderson, P., Sinha, K., Angelard-Gontier, N., Ke, N. R., Fried, G., Lowe, R., Pineau, J.: Ethical challenges in data-driven dialogue systems. In: Proceedings of AAAI/ACM Conference on AI Ethics and Society (AIES-18), New Orleans, Lousiana, USA. (2000)

Herder, E., Sosnovsky, S., Dimitrova, V.: Adaptive intelligent learning environments. In: Duval, Erik, Sharples, Mike, Sutherland, Rosamund (eds.) Technology enhanced learning, pp. 109–114. Springer International Publishing, Cham (2017)

Holmes, W., Bialik, M., Fadel, C.: Artificial intelligence in education: promises and implications for teaching and learning. Center for Curriculum Redesign, Boston (2019)

IBM the Educator's AI in the Classroom Toolkit: https://docs.google.com/document/d/1Zqi74ejYYYLAAEjFBJIYuXUcOMQI56R6K0ZsvQbDmOA/edit Accessed 28 August 2021

IEEE Corporate Advisory Group (CAG): IEEE guide for terms and concepts in intelligent process automation. The Institute of Electrical and Electronics Engineers Standards Association. 1–16 (2017). https://ieeexplore.ieee.org/iel7/8070669/8070670/08070671.pdf

Iman, M., Arabnia, H. R., Branchinst, R. M.: Pathways to artificial general intelligence: a brief overview of developments and ethical issues via artificial intelligence, machine learning, deep learning, and data science. Paper presented at International Conference on Artificial Intelligence (ICAI). Las Vegas, Nevada, USA. (2020)

Johnson, M.: A scalable approach to reducing gender bias in Google translate. https://ai.googleblog.com/2020/04/a-scalable-approach-to-reducing-gender.html Accessed 26 March 2021

Ko, A.J., Oleson, A., Ryan, N., Register, Y., Xie, B., Tari, M., Davidson, M., Druga, S., Loksa, D.: It is time for more critical CS education. Commun. ACM 63 (11), 31–33 (2020)

Krueger, N.: Artificial intelligence has infiltrated our lives. Can it improve learning? International Society for Technology in Education (ISTE). (2017).

Ladson-Billings, G.: Toward a theory of culturally relevant pedagogy. Am. Educ. Res. J. 32 (3), 465–491 (1995)

Machine Learning for Kids: https://machinelearningforkids.co.uk/#!/links . Accessed 28 August 2021

McCarthy, J.: What is artificial intelligence? http://jmc.stanford.edu/artificial-intelligence/what-is-ai/ Accessed 28 August 2021

McMurtrie, B.: How artificial intelligence is changing teaching. The Chronicle of Higher Education. https://www.chronicle.com/article/How-Artificial-Intelligence-Is/244231 (2018)

Miller, F.A., Katz, J.H., Gans, R.: The OD imperative to add inclusion to the algorithms of artificial intelligence. OD Practitioner. 5 (1), 6–12 (2018)

Mohri, M., Rostamizadeh, A., Talwalkar, A.: Foundations of machine learning. MIT Press, Cambridge (2012)

Murphy, R. F.: Artificial intelligence applications to support k–12 teachers and teaching: a review of promising applications, challenges, and risks. Perspective. 1–20 (2019). https://doi.org/10.7249/PE315

Naqvi, A.: Artificial intelligence for audit, forensic accounting, and valuation: a strategic perspective. Wiley (2020)

Nguyen, S., DiPaola, D.: AI & data privacy activities for k-9 students. MIT Media Lab. https://www.media.mit.edu/projects/data-privacy-design-for-youth/overview/ (2020)

Payne, B. H.: An ethics of artificial intelligence curriculum for middle school students. MIT Media Lab. https://www.media.mit.edu/projects/ai-ethics-for-middle-school/overview/ (2019)

Piano, S.L.: Ethical principles in machine learning and artificial intelligence: cases from the field and possible ways forward. Humanit. Soc. Sci. Commun. 7 (9), 1–7 (2020)

Regan, P.M., Jesse, J.: Ethical challenges of edtech, big data and personalized learning: twenty-first century student sorting and tracking. Ethics Inf. Technol. 21 , 167–179 (2019)

Regan, P. M., Steeves, V.: Education, privacy, and big data algorithms: taking the persons out of personalized learning. First Monday,  24 (11), (2019)

Remian, D.: Augmenting education: ethical considerations for incorporating artificial intelligence in education (Unpublished master’s thesis). University of Massachusetts, Boston (2019)

Roll, I., Wylie, R.: Evolution and revolution in artificial intelligence in education. Int. J. Artif. Intell. Educ. 26 , 582–599 (2016)

Smith, H.: Algorithmic bias: should students pay the price? AI Soc. 35 , 1077–1078 (2020)

Smutny, P., Schreiberova, P.: Chatbots for learning: a review of educational chatbots for the Facebook Messenger. Comput. Educ. 151 , 1–11 (2020)

Specia, M.: Parents, Students and Teachers Give Britain a Failing Grade Over Exam Results. The New York Times, (2020). https://www.nytimes.com/2020/08/14/world/europe/england-a-level-results.html

Stahl, B.C., Wright, D.: Ethics and privacy in ai and big data: implementing responsible research and innovation. IEEE Secur. Priv. 16 (3), 26–33 (2018)

The Institute for Ethical AI in Education. Interim report: towards a shared vision of ethical AI in education. https://www.buckingham.ac.uk/wp-content/uploads/2020/02/The-Institute-for-Ethical-AI-in-Educations-Interim-Report-Towards-a-Shared-Vision-of-Ethical-AI-in-Education.pdf . (2020)

Waller, M., Paul, W.: Why predictive algorithms are so risky for public sector bodies. Social Science Research Network, Rochester (2020)

Weale, S., Stewart, H.: A-level and GCSE results in England to be based on teacher assessments in U-turn. The Guardian, (2020).

Zimmerman, M.: Teaching AI: exploring new frontiers for learning. International Society for Technology in Education, Portland (2018)

Funding

This work was supported by the Graduate School at Michigan State University, College of Education Summer Research Fellowship.

Author information

Authors and Affiliations

Michigan State University, East Lansing, MI, USA

Selin Akgun & Christine Greenhow

Corresponding author

Correspondence to Selin Akgun .

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Akgun, S., Greenhow, C. Artificial intelligence in education: Addressing ethical challenges in K-12 settings. AI Ethics 2 , 431–440 (2022). https://doi.org/10.1007/s43681-021-00096-7

Received : 09 July 2021

Accepted : 01 September 2021

Published : 22 September 2021

Issue Date : August 2022

DOI : https://doi.org/10.1007/s43681-021-00096-7

Keywords

  • Artificial intelligence
  • K-12 education
  • Teacher education
  • Find a journal
  • Publish with us
  • Track your research

The state of AI in 2023: Generative AI’s breakout year

You have reached a page with older survey data. please see our 2024 survey results here ..

The latest annual McKinsey Global Survey  on the current state of AI confirms the explosive growth of generative AI (gen AI) tools . Less than a year after many of these tools debuted, one-third of our survey respondents say their organizations are using gen AI regularly in at least one business function. Amid recent advances, AI has risen from a topic relegated to tech employees to a focus of company leaders: nearly one-quarter of surveyed C-suite executives say they are personally using gen AI tools for work, and more than one-quarter of respondents from companies using AI say gen AI is already on their boards’ agendas. What’s more, 40 percent of respondents say their organizations will increase their investment in AI overall because of advances in gen AI. The findings show that these are still early days for managing gen AI–related risks, with less than half of respondents saying their organizations are mitigating even the risk they consider most relevant: inaccuracy.

The organizations that have already embedded AI capabilities have been the first to explore gen AI’s potential, and those seeing the most value from more traditional AI capabilities—a group we call AI high performers—are already outpacing others in their adoption of gen AI tools. 1 We define AI high performers as organizations that, according to respondents, attribute at least 20 percent of their EBIT to AI adoption.

The expected business disruption from gen AI is significant, and respondents predict meaningful changes to their workforces. They anticipate workforce cuts in certain areas and large reskilling efforts to address shifting talent needs. Yet while the use of gen AI might spur the adoption of other AI tools, we see few meaningful increases in organizations’ adoption of these technologies. The percent of organizations adopting any AI tools has held steady since 2022, and adoption remains concentrated within a small number of business functions.

Table of Contents

  • It’s early days still, but use of gen AI is already widespread
  • Leading companies are already ahead with gen AI
  • AI-related talent needs shift, and AI’s workforce effects are expected to be substantial
  • With all eyes on gen AI, AI adoption and impact remain steady

About the research

1. it’s early days still, but use of gen ai is already widespread.

The findings from the survey—which was in the field in mid-April 2023—show that, despite gen AI’s nascent public availability, experimentation with the tools  is already relatively common, and respondents expect the new capabilities to transform their industries. Gen AI has captured interest across the business population: individuals across regions, industries, and seniority levels are using gen AI for work and outside of work. Seventy-nine percent of all respondents say they’ve had at least some exposure to gen AI, either for work or outside of work, and 22 percent say they are regularly using it in their own work. While reported use is quite similar across seniority levels, it is highest among respondents working in the technology sector and those in North America.

Organizations, too, are now commonly using gen AI. One-third of all respondents say their organizations are already regularly using generative AI in at least one function—meaning that 60 percent of organizations with reported AI adoption are using gen AI. What’s more, 40 percent of those reporting AI adoption at their organizations say their companies expect to invest more in AI overall thanks to generative AI, and 28 percent say generative AI use is already on their board’s agenda. The most commonly reported business functions using these newer tools are the same as those in which AI use is most common overall: marketing and sales, product and service development, and service operations, such as customer care and back-office support. This suggests that organizations are pursuing these new tools where the most value is. In our previous research , these three areas, along with software engineering, showed the potential to deliver about 75 percent of the total annual value from generative AI use cases.

In these early days, expectations for gen AI’s impact are high : three-quarters of all respondents expect gen AI to cause significant or disruptive change in the nature of their industry’s competition in the next three years. Survey respondents working in the technology and financial-services industries are the most likely to expect disruptive change from gen AI. Our previous research shows  that, while all industries are indeed likely to see some degree of disruption, the level of impact is likely to vary. 2 “ The economic potential of generative AI: The next productivity frontier ,” McKinsey, June 14, 2023. Industries relying most heavily on knowledge work are likely to see more disruption—and potentially reap more value. While our estimates suggest that tech companies, unsurprisingly, are poised to see the highest impact from gen AI—adding value equivalent to as much as 9 percent of global industry revenue—knowledge-based industries such as banking (up to 5 percent), pharmaceuticals and medical products (also up to 5 percent), and education (up to 4 percent) could experience significant effects as well. By contrast, manufacturing-based industries, such as aerospace, automotives, and advanced electronics, could experience less disruptive effects. This stands in contrast to the impact of previous technology waves that affected manufacturing the most and is due to gen AI’s strengths in language-based activities, as opposed to those requiring physical labor.

Responses show many organizations not yet addressing potential risks from gen AI

According to the survey, few companies seem fully prepared for the widespread use of gen AI—or the business risks these tools may bring. Just 21 percent of respondents reporting AI adoption say their organizations have established policies governing employees’ use of gen AI technologies in their work. And when we asked specifically about the risks of adopting gen AI, few respondents say their companies are mitigating the most commonly cited risk with gen AI: inaccuracy. Respondents cite inaccuracy more frequently than both cybersecurity and regulatory compliance, which were the most common risks from AI overall in previous surveys. Just 32 percent say they’re mitigating inaccuracy, a smaller percentage than the 38 percent who say they mitigate cybersecurity risks. Interestingly, this figure is significantly lower than the percentage of respondents who reported mitigating AI-related cybersecurity last year (51 percent). Overall, much as we’ve seen in previous years, most respondents say their organizations are not addressing AI-related risks.

2. Leading companies are already ahead with gen AI

The survey results show that AI high performers—that is, organizations where respondents say at least 20 percent of EBIT in 2022 was attributable to AI use—are going all in on artificial intelligence, both with gen AI and more traditional AI capabilities. These organizations that achieve significant value from AI are already using gen AI in more business functions than other organizations do, especially in product and service development and risk and supply chain management. When looking at all AI capabilities—including more traditional machine learning capabilities, robotic process automation, and chatbots—AI high performers also are much more likely than others to use AI in product and service development, for uses such as product-development-cycle optimization, adding new features to existing products, and creating new AI-based products. These organizations also are using AI more often than other organizations in risk modeling and for uses within HR such as performance management and organization design and workforce deployment optimization.

AI high performers are much more likely than others to use AI in product and service development.

Another difference from their peers: high performers’ gen AI efforts are less oriented toward cost reduction, which is a top priority at other organizations. Respondents from AI high performers are twice as likely as others to say their organizations’ top objective for gen AI is to create entirely new businesses or sources of revenue—and they’re most likely to cite the increase in the value of existing offerings through new AI-based features.

As we’ve seen in previous years , these high-performing organizations invest much more than others in AI: respondents from AI high performers are more than five times more likely than others to say they spend more than 20 percent of their digital budgets on AI. They also use AI capabilities more broadly throughout the organization. Respondents from high performers are much more likely than others to say that their organizations have adopted AI in four or more business functions and that they have embedded a higher number of AI capabilities. For example, respondents from high performers more often report embedding knowledge graphs in at least one product or business function process, in addition to gen AI and related natural-language capabilities.

While AI high performers are not immune to the challenges of capturing value from AI, the results suggest that the difficulties they face reflect their relative AI maturity, while others struggle with the more foundational, strategic elements of AI adoption. Respondents at AI high performers most often point to models and tools, such as monitoring model performance in production and retraining models as needed over time, as their top challenge. By comparison, other respondents cite strategy issues, such as setting a clearly defined AI vision that is linked with business value or finding sufficient resources.

The findings offer further evidence that even high performers haven’t mastered best practices regarding AI adoption, such as machine-learning-operations (MLOps) approaches, though they are much more likely than others to do so. For example, just 35 percent of respondents at AI high performers report that where possible, their organizations assemble existing components, rather than reinvent them, but that’s a much larger share than the 19 percent of respondents from other organizations who report that practice.

Many specialized MLOps technologies and practices  may be needed to adopt some of the more transformative uses cases that gen AI applications can deliver—and do so as safely as possible. Live-model operations is one such area, where monitoring systems and setting up instant alerts to enable rapid issue resolution can keep gen AI systems in check. High performers stand out in this respect but have room to grow: one-quarter of respondents from these organizations say their entire system is monitored and equipped with instant alerts, compared with just 12 percent of other respondents.

3. AI-related talent needs shift, and AI’s workforce effects are expected to be substantial

Our latest survey results show changes in the roles that organizations are filling to support their AI ambitions. In the past year, organizations using AI most often hired data engineers, machine learning engineers, and Al data scientists—all roles that respondents commonly reported hiring in the previous survey. But a much smaller share of respondents report hiring AI-related-software engineers—the most-hired role last year—than in the previous survey (28 percent in the latest survey, down from 39 percent). Roles in prompt engineering have recently emerged, as the need for that skill set rises alongside gen AI adoption, with 7 percent of respondents whose organizations have adopted AI reporting those hires in the past year.

The findings suggest that hiring for AI-related roles remains a challenge but has become somewhat easier over the past year, which could reflect the spate of layoffs at technology companies from late 2022 through the first half of 2023. Smaller shares of respondents than in the previous survey report difficulty hiring for roles such as AI data scientists, data engineers, and data-visualization specialists, though responses suggest that hiring machine learning engineers and AI product owners remains as much of a challenge as in the previous year.

Looking ahead to the next three years, respondents predict that the adoption of AI will reshape many roles in the workforce. Generally, they expect more employees to be reskilled than to be separated. Nearly four in ten respondents reporting AI adoption expect more than 20 percent of their companies’ workforces will be reskilled, whereas 8 percent of respondents say the size of their workforces will decrease by more than 20 percent.

Looking specifically at gen AI’s predicted impact, service operations is the only function in which most respondents expect to see a decrease in workforce size at their organizations. This finding generally aligns with what our recent research  suggests: while the emergence of gen AI increased our estimate of the percentage of worker activities that could be automated (60 to 70 percent, up from 50 percent), this doesn’t necessarily translate into the automation of an entire role.

AI high performers are expected to conduct much higher levels of reskilling than other companies are. Respondents at these organizations are over three times more likely than others to say their organizations will reskill more than 30 percent of their workforces over the next three years as a result of AI adoption.

4. With all eyes on gen AI, AI adoption and impact remain steady

While the use of gen AI tools is spreading rapidly, the survey data doesn’t show that these newer tools are propelling organizations’ overall AI adoption. The share of organizations that have adopted AI overall remains steady, at least for the moment, with 55 percent of respondents reporting that their organizations have adopted AI. Less than a third of respondents continue to say that their organizations have adopted AI in more than one business function, suggesting that AI use remains limited in scope. Product and service development and service operations continue to be the two business functions in which respondents most often report AI adoption, as was true in the previous four surveys. And overall, just 23 percent of respondents say at least 5 percent of their organizations’ EBIT last year was attributable to their use of AI—essentially flat with the previous survey—suggesting there is much more room to capture value.

Organizations continue to see returns in the business areas in which they are using AI, and they plan to increase investment in the years ahead. We see a majority of respondents reporting AI-related revenue increases within each business function using AI. And looking ahead, more than two-thirds expect their organizations to increase their AI investment over the next three years.

The online survey was in the field April 11 to 21, 2023, and garnered responses from 1,684 participants representing the full range of regions, industries, company sizes, functional specialties, and tenures. Of those respondents, 913 said their organizations had adopted AI in at least one function and were asked questions about their organizations’ AI use. To adjust for differences in response rates, the data are weighted by the contribution of each respondent’s nation to global GDP.
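As a rough illustration of that weighting step, an aggregate can be computed by scaling each country’s responses by its GDP share; the shares and response rates below are invented for the example, not survey figures.

```python
# Toy GDP-weighted aggregation; every number here is made up for illustration.
gdp_share = {"US": 0.25, "CN": 0.18, "DE": 0.04}  # assumed shares of global GDP
adoption = {"US": 0.60, "CN": 0.55, "DE": 0.50}   # assumed shares reporting AI use

weighted = sum(gdp_share[c] * adoption[c] for c in gdp_share)
weighted /= sum(gdp_share.values())               # normalize the weights
print(f"GDP-weighted adoption rate: {weighted:.1%}")  # about 57.2%
```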

The survey content and analysis were developed by Michael Chui, a partner at the McKinsey Global Institute and a partner in McKinsey’s Bay Area office, where Lareina Yee is a senior partner; Bryce Hall, an associate partner in the Washington, DC, office; and senior partners Alex Singla and Alexander Sukharevsky, global leaders of QuantumBlack, AI by McKinsey, based in the Chicago and London offices, respectively.

They wish to thank Shivani Gupta, Abhisek Jena, Begum Ortaoglu, Barr Seitz, and Li Zhang for their contributions to this work.

This article was edited by Heather Hanselman, an editor in the Atlanta office.

What Is Artificial Intelligence? Definition, Uses, and Types

Learn what artificial intelligence actually is, how it’s used today, and what it may do in the future.


Artificial intelligence (AI) refers to computer systems capable of performing complex tasks that historically only a human could do, such as reasoning, making decisions, or solving problems. 

Today, the term “AI” describes a wide range of technologies that power many of the services and goods we use every day – from apps that recommend TV shows to chatbots that provide customer support in real time. But do all of these really constitute artificial intelligence as most of us envision it? And if not, then why do we use the term so often?

In this article, you’ll learn more about artificial intelligence, what it actually does, and different types of it. In the end, you’ll also learn about some of its benefits and dangers and explore flexible courses that can help you expand your knowledge of AI even further.  


What is artificial intelligence?

Artificial intelligence (AI) is the theory and development of computer systems capable of performing tasks that historically required human intelligence, such as recognizing speech, making decisions, and identifying patterns. AI is an umbrella term that encompasses a wide variety of technologies, including machine learning, deep learning, and natural language processing (NLP).

Although the term is commonly used to describe a range of different technologies in use today, many disagree on whether these actually constitute artificial intelligence. Instead, some argue that much of the technology used in the real world today actually constitutes highly advanced machine learning that is simply a first step towards true artificial intelligence, or “artificial general intelligence” (AGI).

Yet, despite the many philosophical disagreements over whether “true” intelligent machines actually exist, when most people use the term AI today, they’re referring to a suite of machine learning-powered technologies, such as ChatGPT or computer vision, that enable machines to perform tasks that previously only humans could do, such as generating written content, steering a car, or analyzing data.

Artificial intelligence examples 

Though the humanoid robots often associated with AI (think Star Trek: The Next Generation’s Data or Terminator’s T-800) don’t exist yet, you’ve likely interacted with machine learning-powered services or devices many times before.

At the simplest level, machine learning uses algorithms trained on data sets to create machine learning models that allow computer systems to perform tasks like making song recommendations, identifying the fastest way to travel to a destination, or translating text from one language to another. Some of the most common examples of AI in use today include: 

  • ChatGPT: Uses large language models (LLMs) to generate text in response to questions or comments posed to it.
  • Google Translate: Uses deep learning algorithms to translate text from one language to another.
  • Netflix: Uses machine learning algorithms to create personalized recommendation engines for users based on their previous viewing history.
  • Tesla: Uses computer vision to power self-driving features on its cars.
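To make “algorithms trained on data sets” concrete, the sketch below fits a tiny model on invented song features and then scores unseen songs. The features, data, and choice of scikit-learn are assumptions for illustration, not how any product listed above actually works.

```python
# Toy "song recommendation" model; features, data, and labels are invented.
# Assumes scikit-learn is installed (pip install scikit-learn).
from sklearn.linear_model import LogisticRegression

# Training set: [tempo_bpm, energy]; label 1 = listener liked it, 0 = skipped.
X = [[120, 0.80], [128, 0.90], [90, 0.30], [70, 0.20], [125, 0.70], [65, 0.25]]
y = [1, 1, 0, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# The fitted model can now score songs it has never seen before.
print(model.predict([[118, 0.75], [72, 0.20]]))  # expected: [1 0]
```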


AI in the workforce

Artificial intelligence is prevalent across many industries. Automating tasks that don't require human intervention saves money and time, and can reduce the risk of human error. Here are a couple of ways AI could be employed in different industries:

Finance industry. Fraud detection is a notable use case for AI in the finance industry. AI's capability to analyze large amounts of data enables it to detect anomalies or patterns that signal fraudulent behavior (a toy sketch follows these examples).

Health care industry. AI-powered robotics could support surgeries close to highly delicate organs or tissue to mitigate blood loss or risk of infection.
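Returning to the fraud-detection example, here is a minimal sketch of the anomaly-detection idea using synthetic transactions; real fraud systems are far more elaborate, and the features and thresholds here are assumptions.

```python
# Synthetic illustration of anomaly-based fraud flagging; assumes scikit-learn.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)
# Typical transactions: modest amounts around midday. Columns: [amount, hour].
normal = rng.normal(loc=[50.0, 14.0], scale=[20.0, 3.0], size=(500, 2))
# Two suspicious transactions: very large amounts in the early morning.
suspicious = np.array([[5000.0, 3.0], [4200.0, 4.0]])
transactions = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0)
flags = detector.fit_predict(transactions)  # -1 = anomaly, 1 = normal
# Prints the flagged rows: the two planted outliers, possibly plus a few
# extreme but legitimate transactions near the contamination cutoff.
print(transactions[flags == -1])
```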


What is artificial general intelligence (AGI)? 

Artificial general intelligence (AGI) refers to a theoretical state in which computer systems will be able to achieve or exceed human intelligence. In other words, AGI is “true” artificial intelligence as depicted in countless science fiction novels, television shows, movies, and comics. 

As for the precise meaning of “AGI” itself, researchers don’t quite agree on how we would recognize “true” artificial general intelligence when it appears. However, the most famous approach to identifying whether a machine is intelligent or not is known as the Turing Test or Imitation Game, an experiment first outlined by the influential mathematician, computer scientist, and cryptanalyst Alan Turing in a 1950 paper on computer intelligence. There, Turing described a three-player game in which a human “interrogator” is asked to communicate via text with another human and a machine and judge who composed each response. If the interrogator cannot reliably identify the human, then Turing says the machine can be said to be intelligent [ 1 ].

To complicate matters, researchers and philosophers also can’t quite agree whether we’re beginning to achieve AGI, if it’s still far off, or just totally impossible. For example, while a recent paper from Microsoft Research and OpenAI argues that GPT-4 is an early form of AGI, many other researchers are skeptical of these claims and argue that they were just made for publicity [ 2 , 3 ].

Regardless of how far we are from achieving AGI, you can assume that when someone uses the term artificial general intelligence, they’re referring to the kind of sentient computer programs and machines that are commonly found in popular science fiction. 

Strong AI vs. Weak AI

When researching artificial intelligence, you might have come across the terms “strong” and “weak” AI. Though these terms might seem confusing, you likely already have a sense of what they mean. 

Strong AI is essentially AI that is capable of human-level, general intelligence. In other words, it’s just another way to say “artificial general intelligence.” 

Weak AI , meanwhile, refers to the narrow use of widely available AI technology, like machine learning or deep learning, to perform very specific tasks, such as playing chess, recommending songs, or steering cars. Also known as Artificial Narrow Intelligence (ANI), weak AI is essentially the kind of AI we use daily.


The 4 Types of AI 

As researchers attempt to build more advanced forms of artificial intelligence, they must also begin to formulate more nuanced understandings of what intelligence or even consciousness precisely mean. In their attempt to clarify these concepts, researchers have outlined four types of artificial intelligence .

Here’s a summary of each AI type, according to Professor Arend Hintze of Michigan State University [ 4 ]:

1. Reactive machines

Reactive machines are the most basic type of artificial intelligence. Machines built in this way don’t possess any knowledge of previous events but instead only “react” to what is before them in a given moment. As a result, they can only perform certain advanced tasks within a very narrow scope, such as playing chess, and are incapable of performing tasks outside of their limited context. 

2. Limited memory machines

Machines with limited memory possess a limited understanding of past events. They can interact more with the world around them than reactive machines can. For example, self-driving cars use a form of limited memory to make turns, observe approaching vehicles, and adjust their speed. However, machines with only limited memory cannot form a complete understanding of the world because their recall of past events is limited and only used in a narrow band of time. 

3. Theory of mind machines

Machines that possess a “theory of mind” represent an early form of artificial general intelligence. In addition to being able to create representations of the world, machines of this type would also have an understanding of other entities that exist within the world. As of this moment, this reality has still not materialized. 

4. Self-aware machines

Machines with self-awareness are theoretically the most advanced type of AI and would possess an understanding of the world, others, and themselves. This is what most people mean when they talk about achieving AGI. Currently, this is a far-off reality.

AI benefits and dangers

AI has a range of applications with the potential to transform how we work and our daily lives. While many of these transformations are exciting, like self-driving cars, virtual assistants, or wearable devices in the healthcare industry, they also pose many challenges.

It’s a complicated picture that often summons competing images: a utopia for some, a dystopia for others. The reality is likely to be much more complex. Here are a few of the possible benefits and dangers AI may pose: 

Benefits:

  • Greater accuracy for certain repeatable tasks, such as assembling vehicles or computers.
  • Decreased operational costs due to greater efficiency of machines.
  • Increased personalization within digital services and products.
  • Improved decision-making in certain situations.
  • Ability to quickly generate new content, such as text or images.

Dangers:

  • Job loss due to increased automation.
  • Potential for bias or discrimination as a result of the data set on which the AI is trained.
  • Possible cybersecurity concerns.
  • Lack of transparency over how decisions are arrived at, resulting in less than optimal solutions.
  • Potential to create misinformation, as well as inadvertently violate laws and regulations.

These are just some of the ways that AI provides benefits and dangers to society. When using new technologies like AI, it’s best to keep a clear mind about what it is and isn’t. With great power comes great responsibility, after all. 


Build AI skills on Coursera

Artificial Intelligence is quickly changing the world we live in. If you’re interested in learning more about AI and how you can use it at work or in your own life, consider taking a relevant course on Coursera today. 

In DeepLearning.AI’s AI For Everyone course, you’ll learn what AI can realistically do and not do, how to spot opportunities to apply AI to problems in your own organization, and what it feels like to build machine learning and data science projects.

In DeepLearning.AI’s AI For Good Specialization, meanwhile, you’ll build skills combining human and machine intelligence for positive real-world impact using AI in a beginner-friendly, three-course program.

Article sources

1. UMBC. “Computing Machinery and Intelligence by A. M. Turing,” https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf. Accessed March 30, 2024.

2. ArXiv. “Sparks of Artificial General Intelligence: Early experiments with GPT-4,” https://arxiv.org/abs/2303.12712. Accessed March 30, 2024.

3. Wired. “What’s AGI, and Why Are AI Experts Skeptical?,” https://www.wired.com/story/what-is-artificial-general-intelligence-agi-explained/. Accessed March 30, 2024.

4. GovTech. “Understanding the Four Types of Artificial Intelligence,” https://www.govtech.com/computing/understanding-the-four-types-of-artificial-intelligence.html. Accessed March 30, 2024.


Encyclopedia Britannica


artificial intelligence

What is artificial intelligence?


Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason. Although there are as yet no AIs that match full human flexibility over wider domains or in tasks requiring much everyday knowledge, some AIs perform specific tasks as well as humans.

Are artificial intelligence and machine learning the same?

No, artificial intelligence and machine learning are not the same, but they are closely related. Machine learning is a method for training a computer to learn from its inputs without being explicitly programmed for every circumstance. Machine learning helps a computer achieve artificial intelligence.


artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks—such as discovering proofs for mathematical theorems or playing chess—with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match full human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, voice or handwriting recognition, and chatbots.


What is intelligence?

All but the simplest human behaviour is ascribed to intelligence, while even the most complicated insect behaviour is usually not taken as an indication of intelligence. What is the difference? Consider the behaviour of the digger wasp, Sphex ichneumoneus. When the female wasp returns to her burrow with food, she first deposits it on the threshold, checks for intruders inside her burrow, and only then, if the coast is clear, carries her food inside. The real nature of the wasp’s instinctual behaviour is revealed if the food is moved a few inches away from the entrance to her burrow while she is inside: on emerging, she will repeat the whole procedure as often as the food is displaced. Intelligence—conspicuously absent in the case of Sphex—must include the ability to adapt to new circumstances.



Psychologists generally characterize human intelligence not by just one trait but by the combination of many diverse abilities. Research in AI has focused chiefly on the following components of intelligence: learning, reasoning, problem solving, perception, and using language.

There are a number of different forms of learning as applied to artificial intelligence. The simplest is learning by trial and error. For example, a simple computer program for solving mate-in-one chess problems might try moves at random until mate is found. The program might then store the solution with the position so that the next time the computer encountered the same position it would recall the solution. This simple memorizing of individual items and procedures—known as rote learning—is relatively easy to implement on a computer. More challenging is the problem of implementing what is called generalization. Generalization involves applying past experience to analogous new situations. For example, a program that learns the past tense of regular English verbs by rote will not be able to produce the past tense of a word such as jump unless it previously had been presented with jumped, whereas a program that is able to generalize can learn the “add ed” rule and so form the past tense of jump based on experience with similar verbs.
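The rote-versus-generalization contrast can be made concrete in a few lines of code; the sketch below is a deliberate toy that assumes only regular verbs.

```python
# Rote learning: only verbs seen during "training" can be conjugated.
rote_memory = {"walk": "walked", "talk": "talked"}

def rote_past_tense(verb: str):
    return rote_memory.get(verb)  # returns None for anything never memorized

# Generalization: the learned "add ed" rule extends to unseen regular verbs.
def generalized_past_tense(verb: str) -> str:
    return verb + "ed"

print(rote_past_tense("jump"))         # None, since "jumped" was never presented
print(generalized_past_tense("jump"))  # "jumped"
```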

Media Statements

Privacy Commissioner’s Office Publishes “Artificial Intelligence: Model Personal Data Protection Framework”

The Model Framework covers recommended measures in the following four areas:

  • Establish AI Strategy and Governance: Formulate the organisation’s AI strategy and governance considerations for procuring AI solutions, establish an AI governance committee (or similar body) and provide employees with training relevant to AI;
  • Conduct Risk Assessment and Human Oversight: Conduct comprehensive risk assessments, formulate a risk management system, adopt a “risk-based” management approach, and, depending on the levels of the risks posed by AI, adopt proportionate risk mitigating measures, including deciding on the level of human oversight;
  • Customisation of AI Models and Implementation and Management of AI Systems: Prepare and manage data, including personal data, for customisation and/or use of AI systems, test and validate AI models during the process of customising and implementing AI systems, ensure system security and data security, and manage and continuously monitor AI systems; and
  • Communication and Engagement with Stakeholders: Communicate and engage regularly and effectively with stakeholders, in particular internal staff, AI suppliers, individual customers and regulators, in order to enhance transparency and build trust.


  • Organisations should have an internal AI governance strategy, which generally comprises (i) an AI strategy, (ii) governance considerations for procuring AI solutions, and (iii) an AI governance committee (or similar body) to steer the process, including providing directions on the purposes for which AI solutions may be procured, and how AI systems should be implemented and used; 
  • Consider governance issues in the procurement of AI solutions, including whether the potential AI suppliers have followed international technical and governance standards, the general criteria for submission of an AI solution to the AI governance committee (or similar body) for review and the relevant procedures, any data processor agreements to be signed, and the policy on handling the output generated by the AI system (e.g. employing techniques to anonymise personal data contained in AI-generated content, label or watermark AI-generated content and filter out AI-generated content that may pose ethical concerns);
  • Establish an internal governance structure with sufficient resources, expertise and authority to steer the implementation of the AI strategy and oversee the procurement, implementation and use of the AI system, including establishing an AI governance committee (or similar body) which should report to the board, and establish effective internal reporting mechanisms for reporting any system failure or raising any data protection or ethical concerns to facilitate proper monitoring by the AI governance committee; and
  • Provide AI-related training to employees to ensure that they have the appropriate knowledge, skills and awareness to work in an environment using AI systems. For instance, for AI system users (including operational personnel in the business), the training topics may include compliance with data protection laws, regulations and internal policies, cybersecurity risks, and general AI technology.  
  • Comprehensive risk assessment is necessary for organisations to systematically identify, analyse and evaluate the risks, including privacy risks, involved in the procurement, use and management of AI systems. Factors that should be considered in a risk assessment include requirements of the PDPO; the volume, sensitivity and quality of data (including personal data); security of data; the probability that privacy risks (e.g. excessive collection, misuse or leakage of personal data) will materialise and the potential severity of the harm that might result. For example, an AI system which assesses the credit worthiness of individuals tends to carry a higher risk than an AI system used to present individuals with personalised advertisements because the former may deny them access to credit facilities which, generally speaking, has a higher impact than the latter;
  • Adopt risk management measures that are proportionate to the relevant risks, including deciding on an appropriate level of human oversight, for example, “human-out-of-the-loop” (where AI makes decisions without human intervention), “human-in-command” (where human actors oversee the operation of AI and intervene whenever necessary), and “human-in-the-loop” (where human actors retain control in the decision-making process to prevent and/or mitigate improper output and/or decisions by AI; a toy sketch of this risk-to-oversight mapping appears after this list); and
  • When seeking to mitigate AI risks to comply with the Ethical Principles for AI, organisations may need to strike a balance when conflicting criteria emerge and make trade-offs between the criteria. Organisations may need to consider the context in which they are deploying the AI to make decisions or generate contents and thus decide how to justifiably address the trade-offs that arise. For example, explainability may be relatively important in a context where a decision of an AI system affects a customer’s access to services, and a human reviewer who performs human oversight would need to explain the AI system’s decision to the customer.  
  • Adopt measures to ensure compliance with the requirements of the PDPO when preparing and managing data (including personal data) for customisation and/or use of the AI model, such as using the minimum amount of personal data, ensuring the quality of data, and properly documenting the handling of data for the customisation and use of AI;
  • During the process of customisation and implementation of the relevant AI system, validate it with respect to privacy obligations and ethical requirements including fairness, transparency and interpretability; test the AI model for errors to ensure reliability, robustness and fairness; and perform rigorous User Acceptance Test;
  • Ensure system security and data security, such as implementing measures (e.g. red teaming) to minimise the risk of attacks against machine learning models; implementing internal guidelines for staff on the acceptable input to be fed into and the permitted / prohibited prompts to be entered into the AI system; and establishing mechanisms to enable the traceability and auditability of the AI system’s output;
  • Manage and continuously monitor the AI system and adopt a review mechanism (including conducting re-assessments of the AI system to identify and address new risks, especially when there is a significant change to the functionality or operation of the AI system or to the regulatory or technological environments);
  • Establish an AI Incident Response Plan, encompassing elements of defining, monitoring for, reporting, containing, investigating and recovering from an AI incident; and
  • Internal audits (and independent assessments, where necessary) should be conducted regularly to ensure that the use of AI continues to comply with the requirements of the relevant policies of the organisation and align with its AI strategy.  
  • Communicate and engage regularly and effectively with stakeholders, in particular internal staff, AI suppliers, individual customers and regulators, in order to enhance transparency and build trust;
  • Handle data access and correction requests, and provide feedback channels;
  • Provide explanations for decisions made by and output generated by AI, disclose the use of the AI system, disclose the risks, and consider allowing opt-out; and
  • Use plain language that is clear and understandable to lay persons when communicating with stakeholders, particularly consumers.
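As flagged in the human-oversight bullet above, the “risk-based” choice of oversight level can be pictured as a simple mapping from an assessed risk score to an oversight mode. The three mode names come from the Model Framework itself; the numeric thresholds are hypothetical.

```python
# Hypothetical mapping from an assessed risk score to a human-oversight mode.
# Mode names follow the Model Framework; thresholds are invented for the sketch.
def oversight_mode(risk_score: float) -> str:
    if risk_score >= 0.7:            # e.g. credit-worthiness assessments
        return "human-in-the-loop"   # humans retain decision-making control
    if risk_score >= 0.3:
        return "human-in-command"    # humans oversee and intervene as needed
    return "human-out-of-the-loop"   # e.g. personalised advertisements

print(oversight_mode(0.8))  # human-in-the-loop
print(oversight_mode(0.1))  # human-out-of-the-loop
```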
