
Educational Design Principles of Using AI Chatbot That Supports Self-Regulated Learning in Education: Goal Setting, Feedback, and Personalization


1. Introduction

Educational chatbots, also called conversational agents, hold immense potential for delivering personalized and interactive learning experiences to students [1,2]. However, the advent of ChatGPT and other generative AI tools poses a substantial challenge to the role of educators, raising concerns that students may exploit generative AI to obtain academic credit without actively engaging in the learning process. In light of this transformative development, AI clearly represents a contemporary trend in education, and learners will inevitably use it. Rather than attempting to suppress the use of AI in education, educators should proactively explore ways to adapt to its presence. This adaptation can be effectively achieved through fruitful collaborations between educators, instructional designers, and researchers in the AI field. Such partnerships should strive to integrate pedagogical principles within AI platforms, ensuring that students not only derive benefits from AI but also acquire the essential skills mandated by the educational curriculum. Consequently, it becomes crucial for chatbot designers and educators to collaborate closely, considering key pedagogical principles such as goal setting, self-assessment, and personalization at various stages of learning [3,4]. These principles should guide the design process, guaranteeing that the chatbot effectively supports the student learning experience.

In this paper, drawing on Barry Zimmerman’s Self-Regulated Learning (SRL) framework, we propose several key pedagogical principles that teachers can consider when they decide to integrate AI chatbots into classrooms in order to foster SRL. In composing this paper, it became evident that the majority of research on generative AI tools (such as ChatGPT) focuses mainly on their wide-ranging applications and potential drawbacks in education; there has been a notable shortage of studies that actively engage teachers and instructional designers to determine the most effective ways to incorporate these AI tools in classroom settings. In one of our pedagogical principles, we specifically draw on Judgement of Learning (JOL), which refers to assessing and evaluating student understanding and progress [5,6], and explore how JOL can be integrated into AI-based pedagogy and instructional design that fosters students’ SRL. By integrating Zimmerman’s SRL framework with JOL, we hope to address the major cognitive, metacognitive, and socio-educational concerns that contribute to the enhancement and personalization of AI in teaching and learning.

2. Theoretical Framework

Let us conceptualize a learning scenario involving writing. A student accesses their institution’s learning management system (LMS) and selects the course titled “ENGL 100—Introduction to Literature”, a foundational writing course in the Department of English. Upon navigating to an assignment, the student delves into its details and reads the assignment instructions. After a brief review, the student copies the instructions. In a separate browser tab, the student opens ChatGPT and decides to engage with it, pasting the assignment instructions and prompting ChatGPT with, “Plan my essay based on the provided assignment instructions, [copied assignment instructions]”.

In response, ChatGPT outlines a structured plan, beginning with the crafting of an introduction. However, the student is puzzled about the nature and structure of an introduction, so the student re-prompts, “Could you provide an example introduction for the assignment?” ChatGPT then offers a sample. After studying the example, the student opens word-processing software on their computer and commences the writing process. Upon completing the introduction, the student seeks feedback from ChatGPT, asking, “Could you assess and evaluate the quality of my introduction?” ChatGPT provides its evaluation. Throughout the writing process, the student frequently consults ChatGPT for assistance with various elements, such as topic sentences, examples, and argumentation, refining the work until the student is satisfied with what they produce for the ENGL 100 assignment.

This scenario depicts an ideal SRL cycle executed by the student, encompassing goal setting, reproduction of standards, iterative engagement with ChatGPT, and solicitation of evaluative feedback. However, in real-world educational contexts, students might not recognize this cycle; they might perceive ChatGPT merely as a problem-solving AI chatbot that can help them with the assignment. Instructors, for their part, are often not fully aware of how AI tools can be integrated into their pedagogy, yet they fear that students will use AI chatbots unethically.

We argue that generative AI tools, like ChatGPT, have the potential to facilitate SRL when instructors are aware of the SRL processes from which students can benefit. To harness the potential of generative AI tools, educators must be cognizant of their capabilities, functions, and pedagogical value. To this end, we employ Zimmerman’s multi-level SRL framework, which is elaborated in the subsequent section.

2.1. Review of Zimmerman’s Multi-Level Self-Regulated Learning Framework

Zimmerman’s multi-level SRL framework [7,8] encompasses four distinct levels: observation, emulation, self-control, and self-regulation (see Figure 1). Each level represents a progressive stage in the development of SRL skills. This framework guides our exploration of how a chatbot can facilitate SRL at each stage. For example, when students use AI chatbots for their learning, they treat the chatbots as a resource: they enter questions or commands, hoping to obtain clarifications or information for the task at hand. We assume that this type of use of AI chatbots elicits students’ self-regulation, and we propose that Zimmerman’s multi-level SRL framework helps to interpret the SRL processes undertaken by students.

Specifically, the observation level denotes a stage where students possess prior knowledge of how conversations occur in real-life contexts and a general goal for the learning task. During this phase, students may set their goals, or primarily observe and learn from others who prompt the chatbot, gaining insights into the expected outcomes and interactions. Moving on to the emulation level, students demonstrate their comprehension of the task requirements by independently prompting the chatbot using their own words, phrases they have observed, or phrases recommended by others. At this stage, students strive to replicate successful interactions they have witnessed, applying their understanding of the task to engage with the chatbot; they may also feed their goals into the chatbot as prompts, or use prompts they observe from others. The self-control level, on the other hand, represents a critical juncture where students face decisions about their learning, such as decisions about ethical conduct and academic integrity, or about further re-engagement (re-prompting the chatbot). Specifically, once the chatbot generates a response, students must choose between taking the chatbot’s responses verbatim for their assignments (an academic integrity and ethical conduct decision) and modifying their approach, such as re-prompting or working out other strategies. This phase provides an opportunity for the chatbot to contribute by offering evaluations and feedback on students’ work, guiding them to determine whether their output meets the required standards or whether further revisions are necessary. In sum, the self-control stage can be considered a two-way interaction between the chatbot and students.

As students advance to the self-regulation level, they begin to recognize the potential of the chatbot as a useful, efficient, and valuable tool to assist their learning. At this level, students may seek an evaluation of a revised paragraph generated with the chatbot’s help, or they might request a learning analytics report: fine-grained student data can be visualized as learning analytics within the chatbot, and students can receive recommendations for further learning improvement. This stage exemplifies the students’ growing understanding of how the chatbot can facilitate their learning process, guiding them toward achieving specific objectives and refining their SRL skills. Zimmerman’s multi-level SRL framework thus provides a comprehensive perspective on the gradual development of SRL abilities, illustrating how students progress from observing and emulating others to exercising self-control and ultimately achieving self-regulation by harnessing the chatbot’s capabilities as a supportive learning resource.

2.2. Definition and Background of JOL

In Zimmerman’s self-control and self-regulation phases of SRL, students have to engage in some level of judgement about the chatbot’s output so that they can decide on their next actions. Such judgement is known as self-assessment, and self-assessment is grounded in Judgement of Learning (JOL), a prominent concept in educational psychology.

JOL is a psychological and educational concept that refers to an individual’s evaluation of their learning [6]. It reflects the extent to which an individual believes they have learned or retained new information, which can impact their motivation and behavior during the learning process [5]. Several studies have indicated that various factors could impact an individual’s JOL, including the difficulty of the material, the individual’s pre-existing knowledge and skills, and the effectiveness of the learning strategy used [5,6]. There is empirical evidence showing that people with a higher JOL tend to be more motivated to learn and more likely to engage in SRL activities, while those with a lower JOL may be less motivated and avoid difficult learning tasks [9,10]. JOL can also serve as a feedback mechanism for learners by allowing them to identify areas where they need to focus more effort and adjust their learning strategies accordingly [11,12]. Additionally, JOL can influence an individual’s confidence, which in turn can affect their overall approach to learning [11].

One of the most influential theories of JOL is the cue-utilization approach, which proposes that individuals use various cues, or indicators, to assess their learning [5]. These cues can include how difficult the material was to learn, how much time was spent studying, and how well the material was understood. According to Koriat [5], individuals are more likely to have a higher JOL if they encounter more favorable cues while learning (e.g., domain-specific knowledge), and more likely to have a lower JOL if they encounter less favorable cues (e.g., feelings of unfamiliarity or difficulty). JOL is also closely linked to metacognitive awareness, which emphasizes the role of metacognitive processes, or higher-order thinking skills, in the learning process. Research [13,14] shows that individuals use metacognitive strategies, such as planning, monitoring, and evaluating, to guide their learning and assess their progress; as a result, individuals with higher JOL are more likely to use effective metacognitive strategies and to be more successful learners. In certain conditions, students recognize their lack of understanding of specific concepts, a phenomenon referred to as “negative JOL” [15], which may result in the improvement of previously adopted learning skills and strategies. If the student does not change their strategy use following such a judgement, their metacognitive behavior is called “static”, implying that they are aware of their knowledge deficit but are resistant to change [16]. Various models of JOL have been proposed: for example, the social cognitive model [17] emphasizes the influence of social and environmental factors on learning, while the self-perception model suggests that individuals’ JOL is influenced by their perceptions of their abilities and self-worth [18].

Taken together, incorporating Zimmerman’s SRL framework and JOL into the existing capacity of AI in education has significant potential for improving students’ SRL. Currently, AI technology operates in a unidirectional manner, where users (or students) prompt the generative AI tool to fulfill its intended functions and purposes (in the following section, we also call this “goal setting”), as shown above with respect to the emulation and self-control stages. In education, however, it is crucial to emphasize the importance of bidirectional interaction (from user to AI and from AI to user). Enabling AI to initiate personalized learning feedback (i.e., learning analytics, which we elaborate on in Section 3.4) can create meaningful and educational interactions. In the sections below, we propose several educational principles that can guide the integration of chatbots into various aspects of educational practice.

3. Educational Principles That Guide Integration of Chatbots

3.1. Define Chatbots and Describe Their Potential Use in Educational Settings

The term “chatbot” refers to computer programs that communicate with users using natural language [19]. The history of chatbots can be traced back to the early 1950s [20]; in particular, ELIZA [21] and A.L.I.C.E. [22] were well-known early chatbot systems that simulated real human communication. Chatbots are technological innovations that may efficiently supplement services delivered to humans. In addition to educational chatbots [23,24] and deep learning algorithms applied in learning management systems [25], chatbots have been used for many purposes across a wide range of industries, such as medical education [26,27], counseling consultations [28], marketing education [29], telecommunications support, and financial services [30,31].

In particular, research in recent years has investigated the methods and impacts of chatbot implementation in education [25,32,33]. Chatbots’ interactive learning features and their flexibility in terms of time and location have made them more appealing and popular in the field of education [23]. Several studies have shown that utilizing chatbots in educational settings can provide students with a positive learning experience, as human-to-chatbot interaction allows real-time engagement [34], improves students’ communication skills [35], and improves the efficiency of students’ learning [36].

The growing adoption of AI technology, combined with natural language processing capabilities and machine learning techniques, has opened a new avenue for constructing chatbots [37]. Smutny and Schreiberova’s study [2] showed that chatbots have the potential to become smart teaching assistants, as they might be capable of supplementing in-class instruction alongside instructors. In the case of ChatGPT, some students might have used it as a personal assistant, regardless of the ethical implications in academia. However, we argue that generative AI chatbots, like ChatGPT, can serve as a platform for students to become self-regulated, provided that they are taught the context of appropriate use: when, where, and how they should use the AI chatbot system for learning. In addition, according to a meta-analysis conducted by Deng and Yu [38], chatbots can have a medium-to-high effect on achievement and learning outcomes. Therefore, integrating AI chatbots into classrooms is now a question of how educators should do so appropriately to foster learning, rather than how they should suppress them so that students observe the boundaries of ethical conduct.

Conventional teaching approaches, such as giving students feedback, encouraging students, and customizing course material to student groups, remain dominant pedagogical practices. If we take these conventional approaches into account while integrating AI into pedagogy, we believe that computers and other digital devices can open up far-reaching possibilities that have yet to be fully realized. For example, incorporating process data on student learning may offer students opportunities to monitor their understanding of materials, as well as additional opportunities for formative feedback, self-reflection, and competence development [39]. Hattie [40] has argued that feedback has a median effect size of d = 0.75 on achievement, and Wisniewski et al. [41] have shown that highly informative feedback can produce an effect size of d = 0.99 on student achievement. Such feedback may foster the SRL process and strong metacognitive monitoring and control [8,15,42]. Given this evidence, we propose that AI tools that model teachers’ scaffolding and feedback mechanisms after students prompt them will support SRL activities.

As stated earlier, under the unidirectional condition (student-to-AI), it has been unclear what instructional and pedagogical functions chatbots can serve to produce learning effects. In particular, it is unclear what the teaching and learning implications are when students use a chatbot to learn. We therefore propose an educational framework for integrating an AI educational chatbot based on learning science: Zimmerman’s SRL framework along with JOL.

To the best of our knowledge, the design of chatbots has focused largely on backend design [43], user interfaces [44], and improving learning [36,45,46]. For example, Winkler and Söllner [46] reviewed the application of chatbots to improving student learning outcomes and suggested that chatbots could support individuals’ development of procedural knowledge and competency skills such as information searching, data collection, decision making, and analytical thinking.

Specifically regarding learning improvement, since the rise of OpenAI’s ChatGPT there have been several emerging calls to examine how ChatGPT can be integrated pedagogically to support the SRL process. As Dwivedi et al. [47] write, “Applications like ChatGPT can be used either as a companion or tutor, [or] to support … self-regulated learning” [47] (p. 9). A recent case study also found that the feedback ChatGPT gave on student assignments was comparable to that of a human instructor [48]. Lin and Chang’s study [49] and Lin’s doctoral dissertation have also provided a clear blueprint for designing and implementing chatbots for educational purposes and documented several interaction pathways leading to effective peer-reviewing activities and writing achievement [49]. Similarly, Zhu et al. [50] argued that “self-regulated learning has been widely promoted in educational settings, the provision of personalized support to sustain self-regulated learning is crucial but inadequately accomplished” (p. 146). We are therefore addressing the emerging need to integrate chatbots into education and to understand how chatbots can be developed or used to support learners’ SRL activities. This is why fundamental educational principles for pedagogical AI chatbots need to be established. To this end, we have identified several instructional dimensions that we argue should be featured in the design of educational chatbots to facilitate effective learning, or at least to supplement classroom instruction: (1) goal setting, (2) feedback and self-assessment, and (3) personalization and adaptation.

3.2. Goal Setting and Prompting

Goals and motivation are two highly correlated constructs in education, and both can guide the design of educational chatbots. In the field of education, the terms learning goals, learning objectives, and learning outcomes have been used interchangeably, though with some conceptual differences [51]. Prøitz [51] noted that “the two terms [learning outcomes and learning objectives] are often intertwined and interconnected in the literature[, which] makes it difficult to distinguish between them” (p. 122). In the context of SRL and AI chatbots, we argue that the three are inherently similar to some extent. This is because, according to Burke [52] and Prøitz [51], these teacher-written statements contain a learning orientation and a purpose orientation that manifest teachers’ expectations of students. These orientations can therefore serve as process-oriented or result-oriented goals that guide learners’ strategies and SRL activities.

In goal-setting theory, learning goals (objectives or outcomes) that are process-oriented, specific, challenging, and achievable can motivate students and serve SRL functions. For instance, Locke and Latham [53] explained that goals may help shape students’ strategies for tackling a learning task, help them monitor their progress during a study session, and increase engagement and motivation. Consider a scenario: a student needs to write a report. This result-oriented goal can give rise to two process-based sub-goals: first, to synthesize information A, B, and C during a writing session; second, to generate an argument. To synthesize information, the student may need to apply strategies; the synthesis goal can drive the student to use process-oriented writing strategies such as classifying, listing, or comparing and contrasting. To generate an argument, the student may need to find out what is missing in the synthesized information or what is common across the syntheses. This example demonstrates that goals articulate two dimensions of learning: the focus of attention and the resources needed to achieve the result. As Leake and Ram [54] argued, “a goal-driven learner determines what to learn by reasoning about the information it needs, and determines how to learn by reasoning about the relative merits of alternative learning strategies in the current circumstances” (p. 389).

SRL also involves learners exercising metacognitive control and metacognitive monitoring. These two processes are guided by pre-determined, result-oriented outcomes: objectives or goals [8,42,55,56,57]. SRL researchers generally agree that goals can trigger several SRL events and metacognitive activities that should be investigated as they occur during learning and problem-solving activities [55,58,59]. Moreover, Paans et al. [60] argue that learner-initiated SRL activities, such as goal setting and knowledge acquisition, can develop and occur simultaneously at the micro- and macro-levels. This implies that, in certain pedagogical tasks or problem-solving environments, such as working with chatbots, students need to identify goals by prompting the AI chatbot in a learning session corresponding to the task.

Additionally, goals can function as benchmarks by which learners assess the efficacy of their learning endeavors. When students possess the capacity to monitor their progress toward these goals, they are more likely to sustain their motivation and active involvement in the learning process [61]. Within the context of AI chatbot interaction, consider a scenario where a student instructs a chatbot to execute certain actions, such as synthesizing a given set of information. The chatbot then provides the requested synthesis, allowing the student to evaluate its conformity with their expectations and the learning context. Within Zimmerman’s SRL framework, this process aligns with the stages of emulation and self-control: once a student prompts the chatbot for a response, they continuously monitor and self-assess its quality, subsequently re-prompting the chatbot for further actions. This bidirectional interaction transpires within the stages of emulation and self-control, as students actively participate in a cycle of prompting, monitoring and adjusting, and re-prompting, persisting until they attain a satisfactory outcome. We must acknowledge, however, that this interaction assumes student autonomy, with students continually prompting the chatbot and relying on its output. A more sophisticated form of student–chatbot interaction is bidirectional, where a chatbot is capable of reverse prompting, a concept we explore in more depth in the next section.
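The prompt, monitor, and re-prompt cycle can be sketched programmatically. The following is a minimal illustration only: `query_chatbot` is a hypothetical stub standing in for a real chatbot API call, and the keyword-based `meets_goal` check is a crude placeholder for the student's own self-assessment against their goal criterion.

```python
def query_chatbot(prompt: str) -> str:
    """Hypothetical chatbot call; a real system would query an LLM API."""
    # Stubbed response that improves once the prompt adds a strategy cue.
    if "compare and contrast" in prompt:
        return "Synthesis of A, B, and C with a compare-and-contrast structure."
    return "A loose summary of A, B, and C."

def meets_goal(response: str, criterion: str) -> bool:
    """Stand-in for the student's self-assessment of the output."""
    return criterion in response

def srl_cycle(initial_prompt: str, criterion: str, max_rounds: int = 3) -> list[str]:
    """Prompt, self-assess, and re-prompt until the goal criterion is met."""
    transcript = []
    prompt = initial_prompt
    for _ in range(max_rounds):
        response = query_chatbot(prompt)
        transcript.append(response)
        if meets_goal(response, criterion):
            break  # the student is satisfied; exit the cycle
        # Re-prompt with a refined, process-oriented goal.
        prompt = prompt + " Please compare and contrast the sources."
    return transcript

history = srl_cycle("Synthesize information A, B, and C.", "compare-and-contrast")
print(len(history))  # → 2: one unsatisfactory round, then a refined re-prompt
```

The loop terminates either when the student's (simulated) judgement is satisfied or after a fixed number of rounds, mirroring how the cycle persists "until they attain a satisfactory outcome".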

We believe it is crucial to teach students how to effectively prompt a generative AI chatbot. As mentioned earlier, prompts are the goals that students set for the AI chatbot, but often students just want the tool’s output without engaging in the actual process. To better understand this, we can break prompts down into two types, cognitive prompts and metacognitive prompts, by drawing on Bloom’s Taxonomy [62]. Cognitive prompts are goal-oriented, strategic inquiries that learners feed into a generative AI chatbot. Metacognitive prompts, on the other hand, are intended to foster learners’ learning judgement and metacognitive growth. For example, in a writing class, a cognitive prompt could be, “Help me grasp the concept of a thesis statement”, while an outcome-based prompt might be, “Revise the following sentence for clarity”. As a metacognitive prompt, a teacher could encourage students to reflect on their essays by asking the AI chatbot, “Evaluate my essay and suggest improvements”; here, the AI chatbot functions as a writing consultant that provides feedback. Undeniably, students might take a quicker route by framing the process in a more “outcome-oriented” way, such as asking the AI, “Refine and improve this essay”. This is where the educator’s role comes in: explaining the ethics of conduct and the associated consequences. Self-regulated learners stand as ethical AI users who care about the learning journey, valuing more than just the end product. In summary, goals, outcomes, or objectives can be utilized as defined learning pathways (i.e., prompts) when students interact with chatbots. Students defining goals while working with a chatbot can be seen as setting parameters for their learning; this goal defining (or prompting) helps students clearly understand what they are expected to achieve during a learning session and facilitates self-assessment of their work while working with a chatbot.
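To make the prompt taxonomy above concrete, the sketch below roughly sorts student prompts into cognitive, metacognitive, and outcome-oriented types. The keyword lists are our own illustrative assumptions, not an established classification scheme; a real educational system would need far more robust intent detection than substring matching.

```python
# Illustrative cue lists (assumed for this sketch, not validated instruments).
COGNITIVE_CUES = ("help me grasp", "explain", "plan my")
METACOGNITIVE_CUES = ("evaluate my", "assess", "suggest improvements", "reflect")
OUTCOME_CUES = ("revise", "refine", "improve this", "write my")

def classify_prompt(prompt: str) -> str:
    """Roughly sort a student prompt into one of the prompt types discussed."""
    p = prompt.lower()
    if any(cue in p for cue in METACOGNITIVE_CUES):
        return "metacognitive"
    if any(cue in p for cue in OUTCOME_CUES):
        return "outcome-oriented"
    if any(cue in p for cue in COGNITIVE_CUES):
        return "cognitive"
    return "unclassified"

print(classify_prompt("Help me grasp the concept of a thesis statement"))  # → cognitive
print(classify_prompt("Evaluate my essay and suggest improvements"))       # → metacognitive
print(classify_prompt("Refine and improve this essay"))                    # → outcome-oriented
```

A classroom tool built along these lines could, for instance, flag purely outcome-oriented prompts and nudge students toward cognitive or metacognitive alternatives, supporting the ethical-use conversation described above.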

3.3. Feedback and Self-Assessment Mechanism

Self-assessment is a process in which individuals evaluate their learning, performance, and understanding of a particular subject or skill. Research has shown that self-assessment can positively impact learning outcomes, motivation, and metacognitive skills [63,64,65]. Specifically, self-assessment can help learners identify their strengths and weaknesses, re-set goals, and monitor their progress toward achieving those goals. Self-assessment, grounded in JOL, involves learners reflecting on their learning and making judgements about their level of understanding and progress [66]. Self-assessment is also a component of SRL, as it allows learners to monitor their progress and adjust their learning strategies or learning goals as needed [67]. Self-assessment can therefore be a feature of a chatbot, whether learners employ it to self-assess their learning on their own or the chatbot system automatically prompts them to self-assess.

However, we have found that current AI-powered chatbots, like ChatGPT, have limited capabilities in reverse prompting when used for educational purposes. Reverse prompting provides guiding questions after students prompt the chatbot. As suggested in the last section, after learners identify their prompts and goals, chatbots can ask learners to reflect on their learning and provide “reverse prompts” for self-assessment. The concept of reverse prompting is similar to reciprocal questioning, a group-based process in which two students pose their own questions for each other to answer [68]; this method has been used mainly to facilitate the reading process for emergent readers [69,70,71]. For instance, a chatbot could ask a learner an explanatory question such as “Now, I have given you the two thesis statements you requested. Can you provide more examples of the relationship between statements X and Y?” or “Can you provide more details on the requested speech or action?”; ask a reflective question such as “How would you generalize this principle to similar cases?”; ask the learner to rate their understanding of a particular concept on a scale from 1 to 5; or ask them to identify areas where they need more practice. We mock up an example of such a conversation in Figure 2.
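A reverse-prompting wrapper can be sketched as follows: after answering the student's prompt, the chatbot appends a guiding question for self-assessment. This is a minimal illustration, not an implementation of any existing system; `generate_answer` is a hypothetical stand-in for a real LLM call, and the reverse-prompt templates paraphrase the example questions above.

```python
import random

# Reflective follow-up templates, paraphrasing the examples in the text.
REVERSE_PROMPTS = [
    "Can you provide more examples of the relationship between {topic}?",
    "How would you generalize this principle to similar cases?",
    "On a scale from 1 to 5, how well do you understand {topic}?",
]

def generate_answer(prompt: str) -> str:
    """Hypothetical chatbot call; a real system would query an LLM API."""
    return f"Here are the two thesis statements you requested about {prompt}."

def respond_with_reverse_prompt(prompt: str, topic: str, seed: int = 0) -> str:
    """Answer the student's prompt, then append a reflective reverse prompt."""
    answer = generate_answer(prompt)
    rng = random.Random(seed)  # seeded only to keep this sketch reproducible
    follow_up = rng.choice(REVERSE_PROMPTS).format(topic=topic)
    return f"{answer}\n\n{follow_up}"

reply = respond_with_reverse_prompt("thesis statements on X and Y", "X and Y")
print(reply)
```

In a production system, the reverse prompt would ideally be generated from the conversation context rather than drawn from fixed templates, but the interaction pattern (answer first, then a self-assessment question) is the same.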

The chatbot could then provide feedback and resources to help the learner improve in areas with potential knowledge gaps and low confidence. In this way, chatbots can be an effective tool for encouraging student self-assessment and SRL. A large body of evidence shows that the combined effect of self-assessment and just-in-time feedback goes beyond understanding and learning new concepts and skills [72]. Goal-oriented, criteria-based self-assessment (e.g., self-explanation and reflection prompts) allows the learner to identify the knowledge gaps and misconceptions that often lead to incorrect conceptions or cognitive conflicts. Just-in-time feedback (i.e., the information provided by an agent or tutor in response to the diagnosed gap) can then act as a knowledge-repair mechanism if the learner perceives the provided information as clear, logical, coherent, and applicable [73].

Based on Table 1 and the previous section on prompting and reverse prompting, teachers can also focus on facilitating judgments of learning while teaching students to work with an AI chatbot. However, we propose that reverse prompting from the AI chatbot is equally important for achieving educational value and supporting SRL.

In Zimmerman’s [8] terms, a chatbot is a form of social assistance that students can obtain. If the chatbot provides reverse prompts that guide thinking, reflection, and self-assessment, students can then execute strategies that fit their goals and knowledge level. When learners engage in self-assessment activities, they are making judgments about their own learning. Through self-assessment, learners develop an awareness of their strengths and weaknesses, which can help them modify or set new goals. If they are satisfied with their goals, they can use them to monitor their progress and adjust their strategies as needed. This process also aligns with the self-control phase of Zimmerman’s SRL model: students can decide whether to follow the chatbot’s suggestions or to adapt and implement them in their own way. For example, a chatbot could reverse-prompt learners to describe the strategies they used to solve a particular problem or to reflect on what they learned from a particular activity. This type of reflection can help learners become more aware of their learning processes and develop more effective learning strategies [74,75]. Thus, the reverse interaction from chatbot to student provides an opportunity to develop self-awareness: learners become more self-directed, self-regulated, and independent while working with the chatbot, which can lead to improved academic performance and overall success. Furthermore, by incorporating self-assessment prompts into educational chatbots, learners receive immediate feedback and support during the self-assessment process, which can further develop their metacognitive skills and promote deeper learning.

3.4. Facilitating Self-Regulation: Personalization and Adaptation

Personalization and adaptation are unique characteristics of learning technology. When students engage with a learning management system (LMS), the platform inherently captures and records their behaviors and interactions, such as page views, time spent per page, link traversal, and page-specific operations. Even composing a discussion forum post can yield rich trace data, such as timestamps marking when the post was started and finished, the syntactic structures employed, discernible genre attributes, and lexical choices. This traceable data forms the foundation for generating comprehensive learning analytics for learners, manifested as either textual reports or information visualizations that synthesize pertinent insights about students’ learning trajectories [76]. These fine-grained analytical outputs can furnish students with a holistic overview of how and what they learn, fostering opportunities for reflection, evaluation, and informed refinement of their learning tactics. Therefore, by using the data-driven insights and algorithms described above, chatbots can be tailored to the individual needs of learners, providing personalized feedback and guidance that supports their unique learning goals and preferences. However, we believe that current AI-powered chatbots are inadequate for education; in particular, they lack capabilities for learning personalization and adaptation. A chatbot like ChatGPT often acts as a knowledge giver unless a learner knows how to feed it prompts. Our framework repositions educational AI chatbots from knowledge providers to facilitators in the learning process. By encouraging students to initiate interactions through prompts, the chatbot assumes the role of a learning partner that progressively understands the students’ requirements.
As outlined in the preceding section, the chatbot can tactfully prompt learners when necessary, offering guidance and direction rather than outright solutions.
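
The trace-data pipeline described above can be illustrated with a minimal aggregation step: raw LMS events are reduced to per-page summaries that a chatbot or analytics dashboard could present back to the learner. The event tuples below are assumed for illustration; real LMS event schemas (e.g., in Canvas or Moodle) differ.

```python
from collections import defaultdict

# Illustrative LMS trace events: (student_id, page, seconds_spent).
events = [
    ("s1", "module-1", 120),
    ("s1", "module-1", 60),
    ("s1", "assignment-1", 15),
]

def time_per_page(events):
    """Aggregate the total time a student spent on each page."""
    totals = defaultdict(int)
    for _, page, seconds in events:
        totals[page] += seconds
    return dict(totals)

print(time_per_page(events))  # {'module-1': 180, 'assignment-1': 15}
```

Summaries of this kind are the raw material for the personalized guidance discussed next: a chatbot that can see a learner spent 15 seconds on an assignment page is in a position to redirect rather than simply answer.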

Learner adaptation can be effectively facilitated through learning analytics, a valuable method for collecting learner data and enhancing overall learning outcomes [75]. Chatbots have become more practical and intelligent through advances in natural language processing, data mining, and machine learning. A chatbot could use trace data collected in the LMS to suggest the best course of action to students. Such data can include students’ time spent on a page, their clicking behaviors, deadlines set by instructors, or prompts (goals) initiated by the students. For example, suppose a student has not viewed the module’s assignment pages on a learning management system for a long time but asks the chatbot to generate a sample essay for an assignment. Instead of producing the sample essay directly, the chatbot can direct the student to review the assignment pages more closely (e.g., “It looks like you haven’t spent enough time on this page; I suggest you review it before asking me for an essay”), as shown in Figure 3. In this way, learning analytics can also help learners take ownership of their learning by providing real-time feedback on their progress and performance. By giving learners access to their learning analytics, educators can empower students to learn actively and make informed decisions about improving their performance [75,77]; an example is shown in Figure 4. Through personalized and adaptive chatbot interactions, learners can thus receive feedback and resources tailored to their specific needs and performance, improving their metacognitive skills and ultimately enhancing their overall learning outcomes.
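
The adaptation rule in the example above (withholding a direct answer until the learner has engaged with the relevant page) can be sketched as a simple threshold check. The 300-second threshold and the messages below are arbitrary illustrative values; a real system would calibrate such a rule against course data.

```python
def respond_to_essay_request(seconds_on_assignment_page: int,
                             threshold: int = 300) -> str:
    """Decide whether to fulfil an essay request or redirect the learner.

    The threshold is an illustrative assumption, not an empirical value.
    """
    if seconds_on_assignment_page < threshold:
        # Redirect: nudge the learner back to the source material first.
        return ("It looks like you haven't spent much time on the "
                "assignment page. I suggest reviewing it before asking "
                "me for a sample essay.")
    # Fulfil: the learner has engaged with the page, so provide support.
    return "Sure, here is a sample essay outline to get you started: ..."
```

The design choice worth noting is that the gate is conditional rather than absolute: the chatbot remains a facilitator, delaying the outcome-based output only until minimal engagement is observed.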

4. Limitations

Lo’s [78] comprehensive rapid review identifies three primary limitations inherent in generative AI tools: (1) biased information, (2) constrained access to current knowledge, and (3) a propensity for disseminating false information [78]. Baidoo-Anu and Ansah [79] underscore that the efficacy of generative AI tools is intricately linked to the training data fed into the tool: biases in the composition of the training data can inadvertently manifest in AI-generated content, potentially compromising the neutrality, objectivity, and reliability of the information imparted to student users. The precision and accuracy of the information generated by these tools emerge as a further key concern. Scholarly investigations have documented several instances in which content produced by ChatGPT was inaccurate or spurious, particularly when the tool was asked to generate citations for academic papers [79,80].

Amid these acknowledged limitations, our position emphasizes students’ educational use of these tools over a preoccupation with the tools’ inherent bias, inaccuracy, or falsity. Our proposal aims to develop students’ capacity for self-regulation and discernment when evaluating the information they receive. Furthermore, educators bear an important role in guiding students to harness generative AI tools to enhance the learning process, rather than treating these tools as textbook-like sources of information. This is why we integrate Zimmerman’s SRL model, illustrating how the judicious incorporation of generative AI tools can foster students’ self-regulation in synergy with the guidance of educators and sound instructional technology design.

5. Concluding Remarks

This paper explores how educational chatbots, also called conversational agents, can support students’ self-regulatory processes and self-evaluation during learning. As shown in Figure 5 below, drawing on Zimmerman’s SRL framework, we postulate that chatbot designers should consider pedagogical principles, such as goal setting and planning, self-assessment, and personalization, to ensure that the chatbot effectively supports student learning and improves academic performance. We suggest that such a chatbot could provide personalized feedback to students on their understanding of course material and promote self-assessment by prompting them to reflect on their learning process. We also emphasize the importance of establishing the pedagogical functions of chatbots so that they fit the actual purposes of education and supplement teacher instruction. The paper provides examples of successful implementations of educational chatbots that can inform the SRL process as well as self-assessment and reflection based on JOL principles. Overall, this paper highlights the potential benefits of educational chatbots for personalized and interactive learning experiences while emphasizing the importance of pedagogical principles in their design. Educational chatbots may supplement classroom instruction by providing personalized feedback and prompting reflection on learning progress, but designers must carefully consider how these tools fit into existing pedagogical practices to ensure their effectiveness in supporting student learning.

Through the application of our framework, future researchers are encouraged to delve into three important topics of inquiry that can empirically validate our conceptual model. The first dimension entails scrutiny of educational principles. For instance, how can AI chatbots be designed to support learners in setting and pursuing personalized learning goals, fostering a sense of ownership over the learning process? Addressing this question involves exploring how learners can form a sense of ownership over their interactions with the AI chatbots, while working towards the learning objectives.

The second dimension involves a closer examination of the actual self-regulated learning (SRL) process. This necessitates an empirical exploration of how AI chatbots can effectively facilitate learners’ self-regulated reflections and the honing of self-regulation skills. For example, how effective is an AI’s feedback on a student’s essay, and how do students develop subsequent SRL strategies to address that feedback and evaluation? Inquiries might also revolve around educators’ instructional methods for leveraging AI chatbots not only to nurture learners’ skills in interacting with the technology but also to foster their self-regulatory processes. Investigating the extent to which AI chatbots can provide learning analytics as feedback that harmonizes with individual learners’ self-regulation strategies is also significant. Moreover, ethical considerations must be taken into account when integrating AI chatbots into educational settings, ensuring the preservation of learners’ autonomy and self-regulation.

The third dimension relates to user interface research. One research endeavor could identify which conversational interface proves most intuitive for learners as they engage with an AI chatbot. Another might probe the extent to which an AI chatbot should engage in dialogue within educational contexts. Furthermore, delineating the circumstances under which AI chatbots should abstain from delivering outcome-based outputs to learners constitutes a worthwhile avenue of investigation. Numerous additional inquiries can be derived from our conceptual model, yet our central message remains clear: our objective is to engage educators, instructional designers, and students in the learning process while navigating this AI world. It is important to educate students on the potential of AI chatbots to enhance their self-regulation skills while also emphasizing the importance of avoiding actions that contravene the principles of academic integrity.

Conceptualization, D.H.C. and M.P.-C.L.; writing—original draft preparation, D.H.C.; writing—review and editing, D.H.C., M.P.-C.L., S.H. and Q.Q.W.; visualization, Q.Q.W.; funding acquisition, D.H.C. All authors have read and agreed to the published version of the manuscript.

Not applicable.

The authors declare no conflict of interest.

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Figure 1. Zimmerman’s multi-level SRL Framework (adopted from Panadero [7]).

Figure 2. A mocked example of reverse prompting from a chatbot.

Figure 3. An example of a chatbot in a learning management system that supports SRL by delivering personalized feedback.

Figure 4. An example of a chatbot that supports SRL by delivering learning analytics.

Figure 5. Putting it all together: educational principles, SRL, and directionality.

Table 1. Types of prompts based on Bloom’s Taxonomy [ ].

Prompt Types | Process-Based | Outcome-Based
Cognitive | Understand, Remember | Create, Apply
Metacognitive | Evaluate |
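
The mapping in Table 1 can be encoded directly as a small data structure that a chatbot could consult to decide whether a learner's request is process- or outcome-oriented. The following Python sketch is a representation aid under our own assumptions; the lookup function and verb lists are illustrative, not a validated classification scheme.

```python
# A minimal encoding of Table 1: prompt types mapped to Bloom's levels.
PROMPT_TAXONOMY = {
    "cognitive": {
        "process-based": ["understand", "remember"],
        "outcome-based": ["create", "apply"],
    },
    "metacognitive": {
        "process-based": ["evaluate"],
        "outcome-based": [],
    },
}

def classify(verb: str):
    """Return (prompt type, orientation) for a Bloom's-level verb,
    or None if the verb is not in the taxonomy table."""
    for ptype, groups in PROMPT_TAXONOMY.items():
        for orientation, verbs in groups.items():
            if verb.lower() in verbs:
                return ptype, orientation
    return None

print(classify("Apply"))  # ('cognitive', 'outcome-based')
```

A chatbot using such a table could, for instance, answer process-based prompts directly while responding to outcome-based prompts with reverse prompts, consistent with the directionality argument made earlier in the paper.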

1. Pérez, J.Q.; Daradoumis, T.; Puig, J.M.M. Rediscovering the use of chatbots in education: A systematic literature review. Comput. Appl. Eng. Educ. ; 2020; 28 , pp. 1549-1565. [DOI: https://dx.doi.org/10.1002/cae.22326]

2. Smutny, P.; Schreiberova, P. Chatbots for learning: A review of educational chatbots for the Facebook Messenger. Comput. Educ. ; 2020; 151 , 103862. [DOI: https://dx.doi.org/10.1016/j.compedu.2020.103862]

3. Kuhail, M.A.; Alturki, N.; Alramlawi, S.; Alhejori, K. Interacting with educational chatbots: A systematic review. Educ. Inf. Technol. ; 2023; 28 , pp. 973-1018. [DOI: https://dx.doi.org/10.1007/s10639-022-11177-3]

4. Okonkwo, C.W.; Ade-Ibijola, A. Chatbots applications in education: A systematic review. Comput. Educ. Artif. Intell. ; 2021; 2 , 100033. [DOI: https://dx.doi.org/10.1016/j.caeai.2021.100033]

5. Koriat, A. Monitoring one’s own knowledge during study: A cue-utilization approach to judgments of learning. J. Exp. Psychol. Gen. ; 1997; 126 , pp. 349-370. [DOI: https://dx.doi.org/10.1037/0096-3445.126.4.349]

6. Son, L.K.; Metcalfe, J. Judgments of learning: Evidence for a two-stage process. Mem. Cogn. ; 2005; 33 , pp. 1116-1129. [DOI: https://dx.doi.org/10.3758/BF03193217] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/16496730]

7. Panadero, E. A Review of Self-regulated Learning: Six Models and Four Directions for Research. Front. Psychol. ; 2017; 8 , 422. [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28503157][DOI: https://dx.doi.org/10.3389/fpsyg.2017.00422]

8. Zimmerman, B.J. Attaining Self-Regulation. Handbook of Self-Regulation ; Elsevier: Amsterdam, The Netherlands, 2000; pp. 13-39. [DOI: https://dx.doi.org/10.1016/B978-012109890-2/50031-7]

9. Baars, M.; Wijnia, L.; de Bruin, A.; Paas, F. The Relation Between Students’ Effort and Monitoring Judgments During Learning: A Meta-analysis. Educ. Psychol. Rev. ; 2020; 32 , pp. 979-1002. [DOI: https://dx.doi.org/10.1007/s10648-020-09569-3]

10. Leonesio, R.J.; Nelson, T.O. Do different metamemory judgments tap the same underlying aspects of memory?. J. Exp. Psychol. Learn. Mem. Cogn. ; 1990; 16 , pp. 464-470. [DOI: https://dx.doi.org/10.1037/0278-7393.16.3.464]

11. Double, K.S.; Birney, D.P.; Walker, S.A. A meta-analysis and systematic review of reactivity to judgements of learning. Memory ; 2018; 26 , pp. 741-750. [DOI: https://dx.doi.org/10.1080/09658211.2017.1404111]

12. Janes, J.L.; Rivers, M.L.; Dunlosky, J. The influence of making judgments of learning on memory performance: Positive, negative, or both?. Psychon. Bull. Rev. ; 2018; 25 , pp. 2356-2364. [DOI: https://dx.doi.org/10.3758/s13423-018-1463-4]

13. Hamzah, M.I.; Hamzah, H.; Zulkifli, H. Systematic Literature Review on the Elements of Metacognition-Based Higher Order Thinking Skills (HOTS) Teaching and Learning Modules. Sustainability ; 2022; 14 , 813. [DOI: https://dx.doi.org/10.3390/su14020813]

14. Veenman, M.V.J.; Van Hout-Wolters, B.H.A.M.; Afflerbach, P. Metacognition and learning: Conceptual and methodological considerations. Metacognition Learn. ; 2006; 1 , pp. 3-14. [DOI: https://dx.doi.org/10.1007/s11409-006-6893-0]

15. Nelson, T.; Narens, L. Why investigate metacognition. Metacognition: Knowing about Knowing ; MIT Press: Cambridge, MA, USA, 1994; [DOI: https://dx.doi.org/10.7551/mitpress/4561.003.0003]

16. Tuysuzoglu, B.B.; Greene, J.A. An investigation of the role of contingent metacognitive behavior in self-regulated learning. Metacognition Learn. ; 2015; 10 , pp. 77-98. [DOI: https://dx.doi.org/10.1007/s11409-014-9126-y]

17. Bandura, A. Social Cognitive Theory: An Agentic Perspective. Asian J. Soc. Psychol. ; 1999; 2 , pp. 21-41. [DOI: https://dx.doi.org/10.1111/1467-839X.00024]

18. Bem, D.J. Self-Perception Theory. Advances in Experimental Social Psychology ; Berkowitz, L. Academic Press: Cambridge, MA, USA, 1972; Volume 6 , pp. 1-62. [DOI: https://dx.doi.org/10.1016/s0065-2601(08)60024-6]

19. Abu Shawar, B.; Atwell, E. Different measurements metrics to evaluate a chatbot system. Proceedings of the Workshop on Bridging the Gap: Academic and Industrial Research in Dialog Technologies ; Rochester, NY, USA, 26 April 2007; pp. 89-96. [DOI: https://dx.doi.org/10.3115/1556328.1556341]

20. Turing, A.M. Computing machinery and intelligence. Mind ; 1950; 59 , pp. 433-460. [DOI: https://dx.doi.org/10.1093/mind/LIX.236.433]

21. Weizenbaum, J. ELIZA—A computer program for the study of natural language communication between man and machine. Commun. ACM ; 1966; 9 , pp. 36-45. [DOI: https://dx.doi.org/10.1145/365153.365168]

22. Wallace, R.S. The anatomy of A.L.I.C.E. Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer ; Epstein, R.; Roberts, G.; Beber, G. Springer: Dordrecht, The Netherlands, 2009; pp. 181-210. [DOI: https://dx.doi.org/10.1007/978-1-4020-6710-5_13]

23. Hwang, G.-J.; Chang, C.-Y. A review of opportunities and challenges of chatbots in education. Interact. Learn. Environ. ; 2021; pp. 1-14. [DOI: https://dx.doi.org/10.1080/10494820.2021.1952615]

24. Yamada, M.; Goda, Y.; Matsukawa, H.; Hata, K.; Yasunami, S. A Computer-Supported Collaborative Learning Design for Quality Interaction. IEEE MultiMedia ; 2016; 23 , pp. 48-59. [DOI: https://dx.doi.org/10.1109/MMUL.2015.95]

25. Muniasamy, A.; Alasiry, A. Deep Learning: The Impact on Future eLearning. Int. J. Emerg. Technol. Learn. (iJET) ; 2020; 15 , pp. 188-199. [DOI: https://dx.doi.org/10.3991/ijet.v15i01.11435]

26. Bendig, E.; Erb, B.; Schulze-Thuesing, L.; Baumeister, H. The Next Generation: Chatbots in Clinical Psychology and Psychotherapy to Foster Mental Health—A Scoping Review. Verhaltenstherapie ; 2022; 32 , pp. 64-76. [DOI: https://dx.doi.org/10.1159/000501812]

27. Kennedy, C.M.; Powell, J.; Payne, T.H.; Ainsworth, J.; Boyd, A.; Buchan, I. Active Assistance Technology for Health-Related Behavior Change: An Interdisciplinary Review. J. Med. Internet Res. ; 2012; 14 , e80. [DOI: https://dx.doi.org/10.2196/jmir.1893]

28. Poncette, A.-S.; Rojas, P.-D.; Hofferbert, J.; Sosa, A.V.; Balzer, F.; Braune, K. Hackathons as Stepping Stones in Health Care Innovation: Case Study with Systematic Recommendations. J. Med. Internet Res. ; 2020; 22 , e17004. [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32207691][DOI: https://dx.doi.org/10.2196/17004]

29. Ferrell, O.C.; Ferrell, L. Technology Challenges and Opportunities Facing Marketing Education. Mark. Educ. Rev. ; 2020; 30 , pp. 3-14. [DOI: https://dx.doi.org/10.1080/10528008.2020.1718510]

30. Behera, R.K.; Bala, P.K.; Ray, A. Cognitive Chatbot for Personalised Contextual Customer Service: Behind the Scene and beyond the Hype. Inf. Syst. Front. ; 2021; pp. 1-21. [DOI: https://dx.doi.org/10.1007/s10796-021-10168-y]

31. Crolic, C.; Thomaz, F.; Hadi, R.; Stephen, A.T. Blame the Bot: Anthropomorphism and Anger in Customer–Chatbot Interactions. J. Mark. ; 2022; 86 , pp. 132-148. [DOI: https://dx.doi.org/10.1177/00222429211045687]

32. Clarizia, F.; Colace, F.; Lombardi, M.; Pascale, F.; Santaniello, D. Chatbot: An education support system for student. CSS 2018: Cyberspace Safety and Security ; Castiglione, A.; Pop, F.; Ficco, M.; Palmieri, F. Lecture Notes in Computer Science Book Series; Springer International Publishing: Cham, Switzerland, 2018; Volume 11161 , pp. 291-302. [DOI: https://dx.doi.org/10.1007/978-3-030-01689-0_23]

33. Firat, M. What ChatGPT means for universities: Perceptions of scholars and students. J. Appl. Learn. Teach. ; 2023; 6 , pp. 57-63. [DOI: https://dx.doi.org/10.37074/jalt.2023.6.1.22]

34. Kim, H.-S.; Kim, N.Y. Effects of AI chatbots on EFL students’ communication skills. Commun. Ski. ; 2021; 21 , pp. 712-734.

35. Hill, J.; Ford, W.R.; Farreras, I.G. Real conversations with artificial intelligence: A comparison between human–human online conversations and human–chatbot conversations. Comput. Hum. Behav. ; 2015; 49 , pp. 245-250. [DOI: https://dx.doi.org/10.1016/j.chb.2015.02.026]

36. Wu, E.H.-K.; Lin, C.-H.; Ou, Y.-Y.; Liu, C.-Z.; Wang, W.-K.; Chao, C.-Y. Advantages and Constraints of a Hybrid Model K-12 E-Learning Assistant Chatbot. IEEE Access ; 2020; 8 , pp. 77788-77801. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.2988252]

37. Brandtzaeg, P.B.; Følstad, A. Why people use chatbots. INSCI 2017: Internet Science ; Kompatsiaris, I.; Cave, J.; Satsiou, A.; Carle, G.; Passani, A.; Kontopoulos, E.; Diplaris, S.; McMillan, D. Lecture Notes in Computer Science Book Series; Springer International Publishing: Cham, Switzerland, 2017; Volume 10673 , pp. 377-392. [DOI: https://dx.doi.org/10.1007/978-3-319-70284-1_30]

38. Deng, X.; Yu, Z. A Meta-Analysis and Systematic Review of the Effect of Chatbot Technology Use in Sustainable Education. Sustainability ; 2023; 15 , 2940. [DOI: https://dx.doi.org/10.3390/su15042940]

39. de Quincey, E.; Briggs, C.; Kyriacou, T.; Waller, R. Student Centred Design of a Learning Analytics System. Proceedings of the 9th International Conference on Learning Analytics & Knowledge ; Tempe, AZ, USA, 4 March 2019; pp. 353-362. [DOI: https://dx.doi.org/10.1145/3303772.3303793]

40. Hattie, J. The black box of tertiary assessment: An impending revolution. Tertiary Assessment & Higher Education Student Outcomes: Policy, Practice & Research ; Ako Aotearoa: Wellington, New Zealand, 2009; pp. 259-275.

41. Wisniewski, B.; Zierer, K.; Hattie, J. The Power of Feedback Revisited: A Meta-Analysis of Educational Feedback Research. Front. Psychol. ; 2020; 10 , 3087. [DOI: https://dx.doi.org/10.3389/fpsyg.2019.03087]

42. Winne, P.H. Cognition and metacognition within self-regulated learning. Handbook of Self-Regulation of Learning and Performance ; 2nd ed. Routledge: London, UK, 2017.

43. Serban, I.V.; Sankar, C.; Germain, M.; Zhang, S.; Lin, Z.; Subramanian, S.; Kim, T.; Pieper, M.; Chandar, S.; Ke, N.R. et al. A deep reinforcement learning chatbot. arXiv ; 2017; arXiv: 1709.02349

44. Shneiderman, B.; Plaisant, C. Designing the User Interface: Strategies for Effective Human-Computer Interaction ; 4th ed. Pearson: Boston, MA, USA, Addison Wesley: Hoboken, NJ, USA, 2004.

45. Abbasi, S.; Kazi, H. Measuring effectiveness of learning chatbot systems on student’s learning outcome and memory retention. Asian J. Appl. Sci. Eng. ; 2014; 3 , pp. 57-66. [DOI: https://dx.doi.org/10.15590/ajase/2014/v3i7/53576]

46. Winkler, R.; Soellner, M. Unleashing the Potential of Chatbots in Education: A State-Of-The-Art Analysis. Acad. Manag. Proc. ; 2018; 2018 , 15903. [DOI: https://dx.doi.org/10.5465/AMBPP.2018.15903abstract]

47. Dwivedi, Y.K.; Kshetri, N.; Hughes, L.; Slade, E.L.; Jeyaraj, A.; Kar, A.K.; Baabdullah, A.M.; Koohang, A.; Raghavan, V.; Ahuja, M. et al. Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manag. ; 2023; 71 , 102642. [DOI: https://dx.doi.org/10.1016/j.ijinfomgt.2023.102642]

48. Dai, W.; Lin, J.; Jin, F.; Li, T.; Tsai, Y.S.; Gasevic, D.; Chen, G. Can large language models provide feedback to students? A case study on ChatGPT. 2023; preprint [DOI: https://dx.doi.org/10.35542/osf.io/hcgzj]

49. Lin, M.P.-C.; Chang, D. Enhancing post-secondary writers’ writing skills with a Chatbot: A mixed-method classroom study. J. Educ. Technol. Soc. ; 2020; 23 , pp. 78-92.

50. Zhu, C.; Sun, M.; Luo, J.; Li, T.; Wang, M. How to harness the potential of ChatGPT in education?. Knowl. Manag. E-Learn. ; 2023; 15 , pp. 133-152. [DOI: https://dx.doi.org/10.34105/j.kmel.2023.15.008]

51. Prøitz, T.S. Learning outcomes: What are they? Who defines them? When and where are they defined?. Educ. Assess. Eval. Account. ; 2010; 22 , pp. 119-137. [DOI: https://dx.doi.org/10.1007/s11092-010-9097-8]

52. Burke, J. Outcomes, Learning and the Curriculum: Implications for Nvqs, Gnvqs and Other Qualifications ; Routledge: London, UK, 1995; [DOI: https://dx.doi.org/10.4324/9780203485835]

53. Locke, E.A. New Developments in Goal Setting and Task Performance ; 1st ed. Routledge: London, UK, 2013; [DOI: https://dx.doi.org/10.4324/9780203082744]

54. Leake, D.B.; Ram, A. Learning, goals, and learning goals: A perspective on goal-driven learning. Artif. Intell. Rev. ; 1995; 9 , pp. 387-422. [DOI: https://dx.doi.org/10.1007/BF00849065]

55. Greene, J.A.; Azevedo, R. A Theoretical Review of Winne and Hadwin’s Model of Self-Regulated Learning: New Perspectives and Directions. Rev. Educ. Res. ; 2007; 77 , pp. 334-372. [DOI: https://dx.doi.org/10.3102/003465430303953]

56. Pintrich, P.R. A Conceptual Framework for Assessing Motivation and Self-Regulated Learning in College Students. Educ. Psychol. Rev. ; 2004; 16 , pp. 385-407. [DOI: https://dx.doi.org/10.1007/s10648-004-0006-x]

57. Schunk, D.H.; Greene, J.A. Handbook of Self-Regulation of Learning and Performance ; 2nd ed. In Educational Psychology Handbook Series Routledge: New York, NY, USA, Taylor & Francis Group: Milton Park, UK, 2018.

58. Chen, C.-H.; Su, C.-Y. Using the BookRoll e-book system to promote self-regulated learning, self-efficacy and academic achievement for university students. J. Educ. Technol. Soc. ; 2019; 22 , pp. 33-46.

59. Michailidis, N.; Kapravelos, E.; Tsiatsos, T. Interaction analysis for supporting students’ self-regulation during blog-based CSCL activities. J. Educ. Technol. Soc. ; 2018; 21 , pp. 37-47.

60. Paans, C.; Molenaar, I.; Segers, E.; Verhoeven, L. Temporal variation in children’s self-regulated hypermedia learning. Comput. Hum. Behav. ; 2019; 96 , pp. 246-258. [DOI: https://dx.doi.org/10.1016/j.chb.2018.04.002]

61. Morisano, D.; Hirsh, J.B.; Peterson, J.B.; Pihl, R.O.; Shore, B.M. Setting, elaborating, and reflecting on personal goals improves academic performance. J. Appl. Psychol. ; 2010; 95 , pp. 255-264. [DOI: https://dx.doi.org/10.1037/a0018478]

62. Krathwohl, D.R. A Revision of Bloom’s Taxonomy: An Overview. Theory Pract. ; 2002; 41 , pp. 212-218. [DOI: https://dx.doi.org/10.1207/s15430421tip4104_2]

63. Bouffard, T.; Boisvert, J.; Vezeau, C.; Larouche, C. The impact of goal orientation on self-regulation and performance among college students. Br. J. Educ. Psychol. ; 1995; 65 , pp. 317-329. [DOI: https://dx.doi.org/10.1111/j.2044-8279.1995.tb01152.x]

64. Javaherbakhsh, M.R. The Impact of Self-Assessment on Iranian EFL Learners’ Writing Skill. Engl. Lang. Teach. ; 2010; 3 , pp. 213-218. [DOI: https://dx.doi.org/10.5539/elt.v3n2p213]

65. Zepeda, C.D.; Richey, J.E.; Ronevich, P.; Nokes-Malach, T.J. Direct instruction of metacognition benefits adolescent science learning, transfer, and motivation: An in vivo study. J. Educ. Psychol. ; 2015; 107 , pp. 954-970. [DOI: https://dx.doi.org/10.1037/edu0000022]

66. Ndoye, A. Peer/self assessment and student learning. Int. J. Teach. Learn. High. Educ. ; 2017; 29 , pp. 255-269.

67. Schunk, D.H. Goal and Self-Evaluative Influences During Children’s Cognitive Skill Learning. Am. Educ. Res. J. ; 1996; 33 , pp. 359-382. [DOI: https://dx.doi.org/10.3102/00028312033002359]

68. King, A. Enhancing Peer Interaction and Learning in the Classroom Through Reciprocal Questioning. Am. Educ. Res. J. ; 1990; 27 , pp. 664-687. [DOI: https://dx.doi.org/10.3102/00028312027004664]

69. Mason, L.H. Explicit Self-Regulated Strategy Development Versus Reciprocal Questioning: Effects on Expository Reading Comprehension Among Struggling Readers. J. Educ. Psychol. ; 2004; 96 , pp. 283-296. [DOI: https://dx.doi.org/10.1037/0022-0663.96.2.283]

70. Newman, R.S. Adaptive help seeking: A strategy of self-regulated learning. Self-Regulation of Learning and Performance: Issues and Educational Applications ; Lawrence Erlbaum Associates, Inc.: Hillsdale, NJ, USA, 1994; pp. 283-301.

71. Rosenshine, B.; Meister, C. Reciprocal Teaching: A Review of the Research. Rev. Educ. Res. ; 1994; 64 , pp. 479-530. [DOI: https://dx.doi.org/10.3102/00346543064004479]

72. Baleghizadeh, S.; Masoun, A. The Effect of Self-Assessment on EFL Learners’ Self-Efficacy. TESL Can. J. ; 2014; 31 , 42. [DOI: https://dx.doi.org/10.18806/tesl.v31i1.1166]

73. Moghadam, S.H. What Types of Feedback Enhance the Effectiveness of Self-Explanation in a Simulation-Based Learning Environment?. Available online: https://summit.sfu.ca/item/34750 (accessed on 14 July 2023).

74. Vanichvasin, P. Effects of Visual Communication on Memory Enhancement of Thai Undergraduate Students, Kasetsart University. High. Educ. Stud. ; 2020; 11 , pp. 34-41. [DOI: https://dx.doi.org/10.5539/hes.v11n1p34]

75. Schumacher, C.; Ifenthaler, D. Features students really expect from learning analytics. Comput. Hum. Behav. ; 2018; 78 , pp. 397-407. [DOI: https://dx.doi.org/10.1016/j.chb.2017.06.030]

76. Marzouk, Z.; Rakovic, M.; Liaqat, A.; Vytasek, J.; Samadi, D.; Stewart-Alonso, J.; Ram, I.; Woloshen, S.; Winne, P.H.; Nesbit, J.C. What if learning analytics were based on learning science?. Australas. J. Educ. Technol. ; 2016; 32 , pp. 1-18. [DOI: https://dx.doi.org/10.14742/ajet.3058]

77. Akhtar, S.; Warburton, S.; Xu, W. The use of an online learning and teaching system for monitoring computer aided design student participation and predicting student success. Int. J. Technol. Des. Educ. ; 2015; 27 , pp. 251-270. [DOI: https://dx.doi.org/10.1007/s10798-015-9346-8]

78. Lo, C.K. What Is the Impact of ChatGPT on Education? A Rapid Review of the Literature. Educ. Sci. ; 2023; 13 , 410. [DOI: https://dx.doi.org/10.3390/educsci13040410]

79. Baidoo-Anu, D.; Ansah, L.O. Education in the era of generative Artificial Intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. SSRN Electron. J. ; 2023; pp. 1-22. [DOI: https://dx.doi.org/10.2139/ssrn.4337484]

80. Mogali, S.R. Initial impressions of ChatGPT for anatomy education. Anat. Sci. Educ. ; 2023; pp. 1-4. [DOI: https://dx.doi.org/10.1002/ase.2261] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36749034]


© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The emergence of ChatGPT and generative AI technologies presents educators with significant challenges, as concerns arise regarding students potentially exploiting these tools unethically, misrepresenting their work, or gaining academic merit without active participation in the learning process. To navigate this shift effectively, it is crucial to embrace AI as a contemporary educational trend and establish pedagogical principles for properly utilizing emerging technologies like ChatGPT to promote self-regulation. Rather than suppressing AI-driven tools, educators should foster collaboration among stakeholders, including educators, instructional designers, AI researchers, and developers. This paper proposes three key pedagogical principles for integrating AI chatbots in classrooms, informed by Zimmerman’s Self-Regulated Learning (SRL) framework and Judgment of Learning (JOL). We argue that the current conceptualization of AI chatbots in education is inadequate, and we advocate for the incorporation of goal setting (prompting), self-assessment and feedback, and personalization as three essential educational principles. First, we propose that teaching prompting is important for developing students’ SRL. Second, configuring reverse prompting into the AI chatbot’s capabilities will help guide students’ SRL and their monitoring for understanding. Third, developing a data-driven mechanism that enables an AI chatbot to provide learning analytics helps learners reflect on their learning and develop SRL strategies. By combining Zimmerman’s SRL framework with JOL, we aim to provide educators with guidelines for implementing AI in teaching and learning contexts, with a focus on promoting students’ self-regulation in higher education through AI-assisted pedagogy and instructional design.


Design of a Chatbot Learning System: Case Study of a Reading Chatbot


Bulletin of the Technical Committee on Learning Technology (ISSN: 2306-0212)
Volume 22, Number 1, 2-7 (2022)
Received December 13, 2021
Accepted January 29, 2022
Published online February 21, 2022
This work is under the Creative Commons CC-BY-NC 3.0 license.


Technology is more than a tool; the use of technology is also a skill that needs to be developed. Teachers are expected to have the ability to integrate technology into their teaching methods, but whether they have the required technological expertise is often neglected. Therefore, technologies that can be easily used by teachers must be developed. In this study, an algorithm was developed that integrates Google Sheets with Line to offer teachers who are unfamiliar with programming a quick method for constructing a chatbot based on their teaching plan, question design, and the material prepared for a reading class. To meet the needs of reading classes, reading theories and principles of effective reading instruction are incorporated into the learning and teaching mechanism of the chatbot system. To create a guidance structure that is suitable for students of various levels of ability, Nelson’s multipath digital reading model was employed because it can maintain a reading context while simultaneously responding to the diverse reading experiences of different readers.

Keywords: Educational technology, Learning management systems, Mobile learning

I. INTRODUCTION

According to [ 1 ], the use of technological tools to supplement teaching and learning activities can help students access information efficiently, develop self-directed learning in students, and improve the quality of instruction. However, many teachers continue to adhere to traditional teaching methods rather than integrating technology into teaching because of their negative attitude toward and low confidence in technology use; insufficient professional knowledge and technological competence; and a lack of technological resources, training, and support [ 2 ], [ 3 ]. Therefore, this study focused on the development of a system to cater to teachers’ technological abilities and teaching needs.

In this study, a reading class was considered the target context. In accordance with the theory of reading comprehension, Kintsch’s single-text [ 4 ] and Britt’s multitext [ 5 ] reading comprehension models were integrated into the learning mechanism design. To provide assistance to students on the basis of the level of comprehension with which they have difficulty, Nelson’s multipath reading model was employed to design the question and answer mechanism [ 6 ].

To make the system easily operable and accessible for teachers who lack a programming background, Line, which is the most used communication platform in Taiwan, was used as the front-end interface, and Google Sheets, a commonly used cloud-based spreadsheet, was employed as the database containing teaching content and learning records. Moreover, programs and algorithms were developed using Google App Script to connect the Line and Google Sheets services.

A. Models of Reading Comprehension

According to Kintsch’s reading comprehension model, which is called the construction–integration model, reading comprehension is a process of continuous construction and integration [ 4 ], [ 7 ]. In this model, each sentence in a text is transformed into a semantic unit, which is called a proposition. The reader then constructs a coherent understanding by continually recombining these propositions in an orderly fashion. Reference [ 8 ] reviewed studies on single-text processing and assumed that the reading process involves at least three levels of memory representation. The surface level represents decoding of word meaning in the early reading stage. The textbase level represents the process of transforming a text into a set of propositions on the basis of lexical knowledge, syntactic analysis, and information retrieved from memory. The situation level represents the process of constructing a coherent understanding of the situation described in the text through experience accumulated in life.

The main limitation of a single text is that it reflects only the viewpoint of a specific author rather than offering comprehensive viewpoints. Even when arguments are objectively summarized in a literature review, the author still selects from among original sources. According to [ 5 ] and [ 9 ], if students are to address an issue critically and know how to construct a complete understanding of an issue, they should be allowed to learn by reading actual texts, practice selecting and organizing information, and interpret thoughts in their own manner. In multitext reading, texts have the role of providing raw information; readers must thus be clear about the purpose of their reading if they are to select and integrate relevant information and manage diverse or even contradictory viewpoints; otherwise, they may become lost in the ocean of information. Britt et al. extended the Kintsch model to propose the documents model and suggested that a higher level of proposition is event related and includes several clauses and paragraphs; this level involves understanding construction in multitext reading [ 5 ], [ 10 ]. Reference [ 8 ] reviewed studies on multitext reading and concluded that the reading process involves at least three memory representations: the integrated model represents the reader’s global understanding of the situation described in several texts, the intertext model represents their understanding of the source material, and the task model represents their understanding of their goals and appropriate strategies for achieving these goals. Compared with Kintsch’s theory, multitext reading theory is more reader-directed and emphasizes the reader’s approach to constructing a coherent and reasonable understanding from texts representing various viewpoints.

As suggested in [ 8 ], the challenge faced in the teaching of multiple-document reading is how to design a guidance structure that considers the reading paths of different students. Nelson proposed a digital reading model that can maintain a context and simultaneously respond to the diverse reading experiences of different readers. Nelson suggested breaking a text into smaller units and inserting hyperlinks in these units, allowing readers to jump from the current document to the content pointed to by the hyperlinks without affecting the structure of the text [ 6 ]. Moreover, reference [ 11 ] used Nelson’s model in a clear manner by treating reading units as nodes, interunit relationships as links, and reading experience as a network composed of nodes and links. Therefore, the collection of content with which the reader interacts can be treated as a representation of the reader’s reading process. Nelson’s multipath digital reading model inspired us to shift the complex teacher–student interaction during reading instruction to a chatbot system. Learning content can be considered a node, and question–answer pairs can be considered links to related learning content. If question–answer pairs fully represent students’ understanding, the students can be guided to the content they require on the basis of the answer they select. The following section explains the factors that must be accounted for within a well-designed question–answer framework.

B. Design of Questions and Instructions

Two particular reading interventions are employed to promote comprehension: an instructional framework based on self-regulated learning targets, which is used for basic-level comprehension, and a framework based on teacher-facilitated discussion targets, which is employed for  high-level comprehension and critical–analytic thinking [ 12 ]. Among interventions for teacher-facilitated discussion, questioning differs from direct explanation and strategic interventions, which help students develop reading comprehension through direct transfer of skills. Instead, questioning, involving asking students questions in a step-by-step manner, helps them actively construct and develop their understanding of a text.

A good question does not always come to mind easily; thus, teachers must prepare well before class. According to [ 13 ], before designing questions, teachers must have a general understanding of the text, consider probable student reactions, possess specific thinking skills, and decide which competencies should be evaluated.  According to [ 14 ] and [ 15 ], when designing questions, the level of the question should be based on the complexity of the cognitive processing required to answer the question. For example, factual questions, requiring the lowest level of processing, require students to recall relevant information from the text; paraphrased questions require students to recall specific concepts and express them in their own way; interpretive questions require students to search for and deduce a relationship among concepts that are not explicitly stated in the text; and evaluative questions, requiring the highest level of processing, require students to analyze and evaluate a concept in the text by using the context and their prior knowledge.

Questions can not only reflect the level of comprehension but also promote thinking. If higher-level questions are posed, students are more likely to think beyond the surface of the topic [ 16 ]. For example, even if a student can answer factual questions correctly, they do not necessarily gain insight from the facts. If the teacher then asks the student to paraphrase or interpret a concept, which would indicate whether the student can link facts together, the student is likely to demonstrate higher-level competency [ 16 ].

In recent years, the OECD’s Programme for International Student Assessment reading comprehension standards [ 17 ] have increasingly emphasized the role of the reader’s personal reflection in reading comprehension. However, irrespective of whether the questions require the students to organize information from texts, use their prior knowledge, or reflect on their life experiences, students must respond in accordance with the context of the text. In other words, they should not express their opinions and feelings freely as they wish. If making deductions from a text is the main competency to be assessed, the level of students’ comprehension can be determined by evaluating their selection of original sources while expressing their thoughts. Moreover, if students are asked to cite original sources, they are more likely to avoid straying from the topic and to demonstrate critical thinking [ 9 ].

To create a good questioning practice, teachers must consider the different levels of the various students and provide assistance accordingly. The different types of questions represent different levels of reading comprehension. Higher-order questions involve more complex cognitive strategies than lower-order questions. Reference [ 18 ] stated that for students who have trouble constructing meaning from a text, teachers should provide a supporting task, such as word recognition. References [ 14 ] and [ 19 ] have highlighted that for students who need help answering challenging questions, teachers should encourage more advanced use of thinking skills, such as metacognition and awareness of thinking.

The instant feedback that a teacher can provide on the basis of a student’s reply cannot be easily replaced by a predetermined instructional framework. Instead of replacing face-to-face instruction in a class, the system aims to solve the problems encountered during oral question-and-answer sessions and to provide teachers with students’ learning information to enable further counseling. Because identifying how students make inferences from texts is difficult for a teacher during oral communication, a recording mechanism is needed to help the teacher note the source of a student’s inference. According to [ 20 ], even if a teacher is well prepared, poor oral presentation skills can affect students’ understanding of questions. Therefore, a digital tool that fully implements a teacher’s questioning framework can be used to prevent misunderstanding. According to [ 21 ], some students fail to take the opportunity to practice because they feel reluctant to express themselves in public; thus, an individual-oriented learning system can ensure that every student practices an equal amount.

By summarizing the aspects that needed to be considered in the design of questions and instructions, the main guidelines of this system were defined as follows. The question design should support true/false questions, multiple-choice questions, and essay questions for different levels of students. The mechanism of replying to a question should support self-expression and connection with corresponding resources. The system must provide a basic mechanism for determining students’ level of reading comprehension from their qualitative reply and guide them to reread the text for self-modification and self-monitoring.

C. Application of a Chatbot

The earliest chatbot, ELIZA, developed by Weizenbaum in 1966, used algorithmic processing and predefined response content to interact with humans [ 22 ]. Chatbots are commonly used to assist individuals in completing specific tasks, and the dialogues are designed to be purposeful and guided [ 23 ].

Recently, chatbots have been widely applied in educational settings and have been demonstrated to have beneficial effects on learning. For example, in [ 24 ] and [ 25 ], chatbots were applied to language learning and found to stimulate interest and motivation in learning and to increase students’ willingness to express themselves. The results of one study [ 26 ], in which chatbots were applied to computer education, revealed that students who learned in the chatbot-based learning environment performed comparably to those who learned through traditional methods. Moreover, [ 27 ] recently developed a chatbot framework by using natural language processing (NLP) to generate appropriate responses to inputs. They used NLP to distinguish and label students’ learning difficulties, connect students with the corresponding grade-level learning subjects, and quickly search for learning content that met the students’ needs. Other scholars [ 28 ] applied a chatbot to the learning management system of a university and employed artificial intelligence to analyze the factors that motivate students to learn actively, monitor academic performance, and provide academic advice. The results indicated that the method improved student participation in their courses.

Many commonly used communication platforms and free digital resources now support the development of chatbots. Designing and maintaining a teaching-aid system from scratch would be time-consuming. Chatbots already have high usability and are widely accepted by the public, meaning that using an existing platform to develop a chatbot can reduce users’ cognitive load during the learning process. Therefore, this study developed algorithms to link the services of two widely used platforms, Google and Line, and created a cloud spreadsheet that acts as a database for storing teaching content and learning records. Because the algorithms connect with a spreadsheet, creating a new chatbot learning system by using the proposed approach is easy: the spreadsheet is duplicated, and the settings are updated with information on the new chatbot.

II. DESIGN OF SYSTEM

A. Instructional Flow Design

1) Structure

A piece of text contains several propositions, and the propositions may be parallel or subordinate to a large proposition. Therefore, the structure of textual analysis and the teaching structure are hierarchical. The proposed system has three levels: the text, chapter, and content levels (Fig. 1). Each card in a carousel template represents one text, and having multiple texts is acceptable (Fig. 2).  Chatbot designers can update the chatbot interface and carousel template on the basis of their teaching structure once they have added a new text in Google Sheets (Fig. 2). Students can select any text they want to learn from at any time because of a menu button, called “Classes for Guided Reading of Texts,” which prompts the carousel template to pop up (Fig. 3). Each chapter has its own ID number, and the system connects the chapter’s learning content by the ID. For example, the ID of “Characteristic” is “01”; thus, if students press the button showing “Characteristic”, the system searches for the teaching content labeled “010000” for publishing on the chatbot and then moves to the next content in accordance with the next ID assigned by the designer (Fig. 4).
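The ID-based publishing just described can be illustrated with a minimal sketch. The in-memory table stands in for rows of the Google Sheet; the column names, the `next` pointer, and the helper names are hypothetical, not the authors’ actual schema.

```python
# Illustrative sketch of chapter-ID content lookup, assuming a
# hypothetical two-column layout (content text + designer-assigned
# next ID). The IDs follow the "010000" pattern from the paper.
CONTENT = {
    "010000": {"text": "Introduction to the 'Characteristic' chapter...", "next": "010100"},
    "010100": {"text": "First teaching point of the chapter...", "next": "010200"},
}

def publish(content_id):
    """Return the teaching content to publish for a given content ID."""
    entry = CONTENT.get(content_id)
    return entry["text"] if entry else None

def next_id(content_id):
    """Follow the designer-assigned 'next' pointer to the next content."""
    entry = CONTENT.get(content_id)
    return entry["next"] if entry else None
```

Pressing the “Characteristic” button would thus amount to calling `publish("010000")` and then advancing with `next_id`.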


Fig. 1. Teaching structure (for a sample text).

Fig. 2. Carousel template.


Fig. 3. Rich menu.


Fig. 4. Teaching content.

2) Instructional Content Design

According to Kintsch’s theory, instruction should assist students at the level at which they fail to arrive at a correct understanding of the text. At the surface level, instruction should provide word explanations. At the textbase level, instruction should help connect propositions that the students have ignored. At the situation level, the system should guide students in expressing a concept in their own way and in accordance with their experience. In some cases, instructional contents are strongly coherent and the levels are not clearly distinct. Therefore, the teacher’s instructional flow can be designed as a linear structure or created with branches and flexibility to help guide students to content at an appropriate level, depending on whether the student knows specific concepts.

Teaching content that comprises an instructional flow is coded. The content in question form can be used to create a branch for the instructional flow. Each question can accept up to 13 branches. To arrange the next content to be published, the system requires the teacher to assign IDs to the branches of each question. According to multitext reading theory, at the integrated level, instructions should guide students to construct a global understanding of the texts. Therefore, each content ID is generated uniquely so that the next ID to be assigned is not limited to the range of texts currently being learned. For paraphrased questions that require students to respond in their own way and when no answer accurately represents a student’s thoughts, the system allows the student to reply by typing out a response if the next ID is set to “000000” (Fig. 4). The system stores the student’s learning progress by recording the order in which the student encountered the content, the buttons they pressed, and their replies (Fig. 5).
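The branch routing above can be sketched as follows. The question record and option labels are hypothetical; only the 13-branch limit and the “000000” free-reply sentinel come from the text.

```python
FREE_REPLY = "000000"  # sentinel from the paper: student types a free-text answer

# Hypothetical question row: each answer option maps to a next-content ID
# assigned by the teacher at design time.
QUESTION = {
    "id": "010300",
    "options": {
        "A": "010400",    # e.g. correct answer -> continue the flow
        "B": "010310",    # e.g. misconception -> remediation content
        "C": FREE_REPLY,  # no option fits -> ask for a typed reply
    },
}

def route(question, chosen_option):
    """Return ('publish', next_id) or ('ask_free_reply', None)."""
    if len(question["options"]) > 13:
        raise ValueError("a question accepts at most 13 branches")
    nxt = question["options"][chosen_option]
    if nxt == FREE_REPLY:
        return ("ask_free_reply", None)
    return ("publish", nxt)
```

Because content IDs are globally unique, a branch may also point into a different text, supporting the integrated level of multitext reading.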

For both multiple-choice and paraphrased questions, the system asks the student to provide their qualitative reasoning and original sources; these responses enable us to understand how students interpret texts (details in Section II-B-5). If a student’s thought is not represented by any answer, their qualitative reply is treated as an exceptional case not considered by the teacher during the design stage, and all such replies are collected and given to the teacher.


Fig. 5. Learning record.

B. Design of Question and Answer Mechanism

1) Questioning Mechanism

Whether the students answer a question correctly does not reflect whether they fully understand a text. Examining the process of students’ interpretation can be a way to accurately follow their real thinking. According to Kintsch’s construction–integration model, a text is a combination of multiple propositions. Similarly, a reading comprehension question must be answered by combining different propositions. Therefore, by comparing the combinations of propositions used by the teacher and the students, it can be determined whether students have overlooked specific aspects, and appropriate guiding questions can then be provided to help the students review the text.

2) Text Processing

To help the teacher more effectively identify the connection between student responses and the text, the system cuts and codes the text provided by teachers by using punctuation marks as breakpoints. The system then creates a webpage on the basis of these sentence units and gives students the link to the webpage in a chatbot dialog (Fig. 6). The webpage has a guide that helps students reply, explain their reasoning, and pick sentences as sources to support their viewpoint (Fig. 7). The webpage is connected to the Line Login service; thus, the user’s identity is recognized, and students’ replies are recorded and sent back to the same Google Sheet used by the chatbot system and to another subsheet for storage (Fig. 8).
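The cutting and coding step might look like the following sketch. The punctuation set and the `<text_id>-<index>` ID format are assumptions, since the paper does not specify them.

```python
import re

def code_sentences(text, text_id):
    """Split a text at punctuation breakpoints and give each sentence
    unit a unique ID, as the system does before building the reply
    webpage. Punctuation set and ID format are illustrative choices."""
    units = [u.strip() for u in re.split(r"[.!?;]", text) if u.strip()]
    return {f"{text_id}-{i + 1:03d}": u for i, u in enumerate(units)}
```

The resulting IDs are what students later select on the webpage to cite sentences as sources for their interpretations.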


Fig. 6. Chatbot dialog when answering a question.


Fig. 7. The webpage.


Fig. 8. Record of the reply in Google Sheets.

3) Sentence Marker

When a teacher designs questions, they usually have a reference answer in mind and need to refer to specific information and propositions from the text for support, interpretation, and inference. Therefore, teachers are asked to provide reference answers with the corresponding textual sources when designing questions. Similarly, students must select corresponding textual sentences as the basis for their interpretations. According to multitext reading theory, at the intertext level, sourcing across texts is one of the main competencies that must be developed and evaluated. Because each sentence is coded uniquely, students can pick sentences across texts.

4) Sentence Match

To calculate the similarity between a student’s answer and the reference answer provided by the teacher, the system compares the references of both. On the basis of the difference between the references, the system can distinguish and label the completeness of the student’s comprehension and provide a guiding question with which the student can review the text.
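One plausible realization of this comparison is a set overlap between the sentence IDs cited by the student and those in the teacher’s reference answer. The Jaccard metric and the function names are assumptions; the paper does not specify the actual similarity measure.

```python
def reference_overlap(student_ids, teacher_ids):
    """Jaccard overlap between cited sentence IDs; one plausible way
    to quantify how closely a student's sources match the reference."""
    s, t = set(student_ids), set(teacher_ids)
    if not s and not t:
        return 1.0
    return len(s & t) / len(s | t)

def missing_references(student_ids, teacher_ids):
    """Reference sentences the student did not cite, i.e. candidates
    for a guiding question that sends the student back to the text."""
    return sorted(set(teacher_ids) - set(student_ids))
```

The IDs returned by `missing_references` could then be mapped to the propositions the student overlooked when choosing a guiding question.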

5) Qualitative Replies Classification and Analysis

Because the learning patterns of a group of students are unknown at the beginning of a course, the teacher should track students’ learning process in the long term and observe how students’ explanations and sentence selection evolve under the influence of the guiding questions provided by the system. Before analysis, if a user’s replies include multiple-choice selections and qualitative explanations with supporting sentences, the replies are classified into correct and incorrect. If a user’s replies are paraphrased replies rather than multiple-choice selections, their correctness is determined manually because the system is not yet capable of automatically determining correctness. Another area of analysis in which we are interested is comparing how different students interpret a given question; thus, we plan to classify qualitative explanations on the basis of sentence IDs.
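The classification and grouping steps above can be sketched as follows. The record layout and function names are hypothetical; only the correct/incorrect/manual distinction and the grouping by cited sentence IDs come from the text.

```python
def classify_reply(reply, answer_key):
    """Label a multiple-choice reply 'correct' or 'incorrect'; typed
    paraphrases are flagged for manual marking, as in the paper."""
    if reply.get("choice") is None:
        return "manual"  # paraphrased reply: correctness judged by hand
    return "correct" if reply["choice"] == answer_key else "incorrect"

def group_by_sources(replies):
    """Group qualitative explanations by the set of sentence IDs cited,
    so interpretations resting on the same sources cluster together."""
    groups = {}
    for r in replies:
        key = tuple(sorted(r["sources"]))
        groups.setdefault(key, []).append(r["explanation"])
    return groups
```

Clusters sharing a source set would let the teacher compare, at a glance, how different students built an interpretation from the same sentences.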

III. SUMMARY AND FUTURE RESEARCH

The integration of technology into teaching requires consideration of many aspects, such as the teacher’s attitude, technological knowledge and ability, and teaching needs, which are often overlooked. Because we believe that tools should be useful, not just usable, this study aimed to develop a teacher-friendly teaching-aid system based on theories of the teaching and learning of reading and on empirical studies of technology applications.

Thanks to the advancement of technology and each platform’s willingness to open its services to developers, we were able to link Google, Line, and web services by using algorithmic mechanisms. The advantage of this integration is that we did not need to spend considerable time and money developing a system from scratch but could use the existing advantages and convenience of these platforms to achieve a similar experience. Moreover, as system developers, we were able to focus on the development and implementation of pedagogical theories rather than the basic operation and maintenance of the system.

To investigate the usability of the system and to help us improve the system, we will invite students and teachers as participants. This system is a prototype. Some message types follow a Line template, and thus, there are limitations, such as the number of buttons, length of the content, and appearance of the message. In addition, in the Google Sheet employed in this study, restrictions and drop-down lists cannot be implemented to prevent designers from constructing learning content with an incorrect format. Therefore, many functions need to be implemented and improved to make the system more accessible for designers. Moreover, because students’ data stored in Google Sheets cannot currently be read easily, the data must be organized; we expect to take the same Google Sheet format as the basis for developing another chatbot with which teachers can produce statistical analyses of students’ learning records.

The system is expected to be a tool that can help teachers understand how students make interpretations and inferences when reading a text. Especially for students who cannot obtain the correct understanding, the relationship between their explanations and text sentences can help teachers to counsel such students or help researchers analyze the factors causing misunderstanding. In the future, we expect to apply machine learning models to further distinguish and label students’ reading difficulties.

[ 1 ]     J. Fu, “Complexity of ICT in education: A critical literature review and its implications,” International Journal of Education and Development using ICT, vol. 9, no. 1, pp. 112-125, 2013.

[ 2 ]    P. A. Ertmer, A. T. Ottenbreit-Leftwich, O. Sadik, E. Sendurur, and P. Sendurur, “Teacher beliefs and technology integration practices: A critical relationship,” Computers & education,  vol. 59, no. 2, pp. 423-435, 2012, DOI: 10.1016/j.compedu.2012.02.001

[ 3 ]     F. A. Inan and D. L. Lowther, “Factors affecting technology integration in K-12 classrooms: A path model,” Educational Technology Research and Development, vol. 58, no. 2, pp. 137-154, 2010, DOI: 10.1007/s11423-009-9132-y

[ 4 ]     W. Kintsch, Comprehension: A Paradigm for Cognition. Cambridge: Cambridge University Press, 1998.

[ 5 ]     M. A. Britt, C. A. Perfetti, R. Sandak, and J.-F. Rouet, “Content integration and source separation in learning from multiple texts,” in  Narrative comprehension, causality, and coherence: Essays in honor of Tom Trabasso . Mahwah, NJ: Lawrence Erlbaum Associates, 1999,    pp. 209-233.

[ 6 ]     T. H. Nelson, “Complex information processing: a file structure for the complex, the changing and the indeterminate,” in  Proceedings of the 1965 20th national conference , 1965, pp. 84-100, DOI: 10.1145/800197.806036

[ 7 ]     T. A. Van Dijk and W. Kintsch,  Strategies of discourse comprehension , New York: Academic Press, 1983.

[ 8 ]     S. R. Goldman  et al. , “Disciplinary literacies and learning to read for understanding: A conceptual framework for disciplinary literacy,”  Educational Psychologist,  vol. 51, no. 2, pp. 219-246, 2016, DOI: 10.1080/00461520.2016.1168741

[ 9 ]     Ø. Anmarkrud, I. Bråten, and H. I. Strømsø, “Multiple-documents literacy: Strategic processing, source awareness, and argumentation when reading multiple conflicting documents,”  Learning and Individual Differences,  vol. 30, pp. 64-76, 2014, DOI: 10.1016/j.lindif.2013.01.007

[ 10 ]   M. A. Britt and J.-F. Rouet, “Learning with multiple documents: Component skills and their acquisition,” in  Enhancing the quality of learning: Dispositions, instruction, and learning processes,  J. R. Kirby and M. J. Lawson, Eds. Cambridge: Cambridge University Press, 2012 ,  pp. 276-314.

[ 11 ]  T. J. Berners-Lee and R. Cailliau, “Worldwideweb: Proposal for a hypertext project,” CERN European Laboratory for Particle Physics, Nov. 1990.

[ 12 ]   M. Li  et al. , “Promoting reading comprehension and critical–analytic thinking: A comparison of three approaches with fourth and fifth graders,”  Contemporary Educational Psychology,  vol. 46, pp. 101-115, 2016, DOI: 10.1016/j.cedpsych.2016.05.002

[ 13 ]   W. J. Therrien and C. Hughes, “Comparison of Repeated Reading and Question Generation on Students’ Reading Fluency and Comprehension,”  Learning disabilities: A contemporary journal,  vol. 6, no. 1, pp. 1-16, 2008.

[ 14 ]   P. Afflerbach and B.-Y. Cho, “Identifying and describing constructively responsive comprehension strategies in new and traditional forms of reading,”  Handbook of research on reading comprehension . New York: Routledge, 2009,   pp. 69-90.

[ 15 ]   T. Andre, “Does answering higher-level questions while reading facilitate productive learning?,”  Review of Educational Research,  vol. 49, no. 2, pp. 280-318, 1979, DOI:  0.3102/00346543049002280

[ 16 ]   S. Degener and J. Berne, “Complex questions promote complex thinking,”  The Reading Teacher,  vol. 70, no. 5, pp. 595-599, 2017, DOI: 10.1002/trtr.1535

[ 17 ]   OECD. PISA 2018 assessment and analytical framework, Paris: OECD Publishing, 2019.

[ 18 ]   B. M. Taylor, P. D. Pearson, K. F. Clark, and S. Walpole, “Beating the Odds in Teaching All Children To Read,” Center for the Improvement of Early Reading Achievement, University of Michigan, Ann Arbor, Michigan, CIERA-R-2-006, 1999.

[ 18 ]   B. M. Taylor, P. D. Pearson, D. S. Peterson, and M. C. Rodriguez, “Reading growth in high-poverty classrooms: The influence of teacher practices that encourage cognitive engagement in literacy learning,”  The Elementary School Journal,  vol. 104, no. 1, pp. 3-28, 2003, DOI: 10.1086/499740

[ 19 ]   B. M. Taylor, P. D. Pearson, D. S. Peterson, and M. C. Rodriguez, “Reading growth in high-poverty classrooms: The influence of teacher practices that encourage cognitive engagement in literacy learning,”  The Elementary School Journal,  vol. 104, no. 1, pp. 3-28, 2003, DOI:  10.1086/499740

[ 20 ]   E. A. O’connor and M. C. Fish, “Differences in the Classroom Systems of Expert and Novice Teachers,” in  the meetings of the American Educational Research Association , 1998.

[ 21 ]   D. Linvill, “Student interest and engagement in the classroom: Relationships with student personality and developmental variables,”  Southern Communication Journal,  vol. 79, no. 3, pp. 201-214, 2014, DOI: 10.1080/1041794X.2014.884156

[ 22 ]   G. Davenport and B. Bradley, “The care and feeding of users,”  IEEE multimedia,  vol. 4, no. 1, pp. 8-11, 1997, DOI: 10.1109/93.580390

[ 23 ]   A. Rastogi, X. Zang, S. Sunkara, R. Gupta, and P. Khaitan, “Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset,” in  Proceedings of the AAAI Conference on Artificial Intelligence , 2020, vol. 34, no. 05, pp. 8689-8696, DOI: 10.1609/aaai.v34i05.6394

[ 24 ]   L. Fryer and R. Carpenter, “Bots as language learning tools,”  Language Learning & Technology,  vol. 10, no. 3, pp. 8-14, 2006, DOI: 10125/44068

[ 25 ]   L. K. Fryer, K. Nakao, and A. Thompson, “Chatbot learning partners: Connecting learning experiences, interest and competence,”  Computers in Human Behavior,  vol. 93, pp. 279-289, 2019, DOI: 10.1016/j.chb.2018.12.023

[ 26 ]   J. Yin, T.-T. Goh, B. Yang, and Y. Xiaobin, “Conversation technology with micro-learning: The impact of chatbot-based learning on students’ learning motivation and performance,”  Journal of Educational Computing Research,  vol. 59, no. 1, pp. 154-177, 2021, DOI: 10.1177/0735633120952067

[ 27 ]   P. Tracey, M. Saraee, and C. Hughes, “Applying NLP to build a cold reading chatbot,” in  2021 International Symposium on Electrical, Electronics and Information Engineering , 2021, pp. 77-80, DOI: 10.1145/3459104.3459119

[ 28 ]   W. Villegas-Ch, A. Arias-Navarrete, and X. Palacios-Pacheco, “Proposal of an Architecture for the Integration of a Chatbot with Artificial Intelligence in a Smart Campus for the Improvement of Learning,”  Sustainability,  vol. 12, no. 4, p. 1500, 2020, DOI: 10.3390/su12041500

This research was supported by the Taiwan Ministry of Science and Technology (MOST 110-2410-H-007-059-MY2).

Wen-Hsiu Wu


received her B.S. and M.S. in Physics from National Tsing Hua University (NTHU, Taiwan, R.O.C.). She is currently continuing her research as a graduate student in the Institute of Learning Sciences and Technologies at National Tsing Hua University. Her research interests include digital learning environment analysis and design, specifically designing for cross-disciplinary learning and reading comprehension.

Guan-Ze Liao


received his doctoral degree in Design from National Taiwan University of Science and Technology (NTUST). During his academic study at the university, he incorporated professional knowledge from various disciplines (e.g., multimedia, interaction design, visual and information design, arts and science, interactive media design, computer science, and information communication) into research and applications in human interaction design and communication and media design. His cross-disciplinary research interests involve methods in information media research, interaction/interface design, multimedia games, and computation on geometric patterns. He is now a professor in the Institute of Learning Sciences and Technologies at National Tsing Hua University (NTHU, Taiwan, R.O.C.). His professional experience focuses on digital humanities, game-based learning, visual narrative, information design, and design for learning.

  • Research article
  • Open access
  • Published: 15 December 2021

Educational chatbots for project-based learning: investigating learning outcomes for a team-based design course

  • Jeya Amantha Kumar, ORCID: orcid.org/0000-0002-6920-0348

International Journal of Educational Technology in Higher Education, volume 18, Article number: 65 (2021)


Educational chatbots (ECs) are chatbots designed for pedagogical purposes and are viewed as an Internet of Things (IoT) interface that could revolutionize teaching and learning. These chatbots are strategized to provide personalized learning through the concept of a virtual assistant that replicates humanized conversation. Nevertheless, in the education paradigm, ECs are still novel, with challenges in facilitating, deploying, designing, and integrating them as an effective pedagogical tool across multiple fields, one such area being project-based learning. The present study therefore investigates how integrating ECs to facilitate team-based projects in a design course could influence learning outcomes. Based on a mixed-method quasi-experimental approach, ECs were found to improve learning performance and teamwork with a practical impact. Moreover, ECs facilitated collaboration among team members, which indirectly influenced their ability to perform as a team. Nevertheless, affective-motivational learning outcomes such as perception of learning, need for cognition, motivation, and creative self-efficacy were not influenced by ECs. This study adds to the current body of knowledge on the design and development of ECs by introducing a new collective design strategy and its pedagogical and practical implications.

Introduction

Chatbots are defined as computer programs that replicate human-like conversations by using natural language structures (Garcia Brustenga et al., 2018; Pham et al., 2018) in the form of text messages (websites or mobile applications), voice (Alexa or Siri), or a combination of both (Pereira et al., 2019; Sandoval, 2018). These automated conversational agents (Riel, 2020) have been used extensively to replicate customer service interaction (Holotescu, 2016) in various domains (Khan et al., 2019; Wang et al., 2021), to the extent that they have become a common trend (Wang et al., 2021). The use of chatbots has expanded further owing to their affordance and cost (Chocarro et al., 2021), development options (Sreelakshmi et al., 2019; Wang et al., 2021), and the adoption facilitated by social network and mobile instant messaging (MIM) applications (apps) (Brandtzaeg & Følstad, 2018; Cunningham-Nelson et al., 2019) such as WhatsApp, Line, Facebook, and Telegram.

Accordingly, chatbots popularized by social media and MIM applications have been widely accepted (Rahman et al., 2018; Smutny & Schreiberova, 2020) and are referred to as mobile-based chatbots. These bots have been found to facilitate collaborative learning (Schmulian & Coetzee, 2019), multimodal communication (Haristiani et al., 2019), scaffolding, real-time feedback (Gonda et al., 2019), personalized learning (Oke & Fernandes, 2020; Verleger & Pembridge, 2019), scalability, and interactivity (Dekker et al., 2020), and to foster knowledge creation and dissemination effectively (Verleger & Pembridge, 2019). Nevertheless, given the possibilities of MIM in conceptualizing an ideal learning environment, we often overlook whether instructors are capable of engaging in high-demand learning activities, especially around the clock (Kumar & Silva, 2020). Chatbots can potentially be a solution to such a barrier (Schmulian & Coetzee, 2019), especially by automatically supporting learning communication and interaction (Eeuwen, 2017; Garcia Brustenga et al., 2018), even for a large number of students.

Nevertheless, Wang et al. (2021) claim that while the application of chatbots in education is novel, it is also scarce. Smutny and Schreiberova (2020), Wang et al. (2021), and Winkler and Söllner (2018) added that current research on educational chatbots (ECs) has focused on language learning (Vázquez-Cano et al., 2021), economics, medical education, and programming courses. Hence, the role of ECs has not been widely explored outside these contexts (Schmulian & Coetzee, 2019; Smutny & Schreiberova, 2020), as the field is still at an introductory stage (Chen et al., 2020), and it is also constrained by the limited pedagogical examples available in the educational context (Stathakarou et al., 2020). Nevertheless, while this absence is inevitable, it also creates potential for exploring innovations in educational technology across disciplines (Wang et al., 2021). Furthermore, according to Tegos et al. (2020), investigation of the integration and application of chatbots is still warranted in real-world educational settings. Therefore, the objective of this study is first to address research gaps in the literature concerning the application and the design and development strategies of ECs. Next, situating the study in these selected research gaps, the effectiveness of ECs is explored for team-based projects in a design course using a quasi-experimental approach.

Literature review

The term “chatbot” combines two attributes: “chat,” for its conversational nature, and “bot,” short for robot (Chocarro et al., 2021). Chatbots are automated programs designed to execute instructions based on specific inputs (Colace et al., 2018) and provide feedback that replicates a natural conversational style (Ischen et al., 2020). According to Adamopoulou and Moussiades (2020), the main chatbot parameters that determine design and development considerations are:

knowledge domain—open and closed domains

services—interpersonal, intrapersonal, and inter-agent chatbots

goals—informative, chat-based, or task-based

input processing and response generation—rule-based model, retrieval-based model, and generative model

build—open-source or closed platforms.

These parameters convey that a chatbot can fulfill numerous communication and interaction functionalities depending on needs, platforms, and technologies. Typically, chatbots are an exemplary use of artificial intelligence (AI), which has in turn given rise to various state-of-the-art platforms for developing them, such as Google’s DialogFlow, IBM Watson Conversation, Amazon Lex, Flow XO, and Chatterbot (Adamopoulou & Moussiades, 2020). However, while using AI is impressive, chatbot applications are limited because they primarily rely on artificial narrow intelligence (ANI) (Holotescu, 2016). An ANI-based chatbot can therefore only perform a single task based on a programmed response, such as examining inputs, providing information, or predicting subsequent moves. While limited, ANI is the only form of AI that humanity has achieved to date (Schmulian & Coetzee, 2019). Conversely, this limitation also enables a non-technical person to design and develop chatbots without much knowledge of AI, machine learning, or neuro-linguistic programming (Gonda et al., 2019). While this creates an “openness with IT” (Schlagwein et al., 2017) across various disciplines, big-tech giants such as Google, Facebook, and Microsoft also view chatbots as the next popular technology for the IoT era (Følstad & Brandtzaeg, 2017). Hence, if chatbots gain uptake, they will change how people obtain information, communicate (Følstad et al., 2019), and learn and gather information (Wang et al., 2021); hence the introduction of chatbots for education.
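To make the response-generation models listed above concrete, here is a minimal sketch of a retrieval-based responder: it returns the canned answer whose stored question shares the most words with the user’s query. The Q&A pairs and wording below are invented for illustration and are not from any real course bot.

```python
# Naive retrieval-based responder: rank stored Q&A pairs by word overlap
# with the incoming query and return the best-matching answer.
# The pairs below are hypothetical examples.
QA_PAIRS = [
    ("what is the project deadline", "The project is due in week 10."),
    ("how do i submit the report", "Upload the report to the course portal."),
]

def retrieve(query: str) -> str:
    """Return the answer whose stored question overlaps most with the query."""
    q_words = set(query.lower().split())
    _question, best_answer = max(
        QA_PAIRS, key=lambda qa: len(q_words & set(qa[0].split()))
    )
    return best_answer
```

By contrast, a generative model would synthesize a reply token by token, and a rule-based model hard-codes the mapping from keywords to responses.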

Chatbots in education

Chatbots deployed through MIM applications are simplistic bots known as messenger bots (Schmulian & Coetzee, 2019). Platforms such as Facebook, WhatsApp, and Telegram have largely introduced chatbots to facilitate automatic around-the-clock interaction and communication, primarily in the service industries. Even though MIM applications were not intended for pedagogical use, their affordance and their undemanding role in facilitating communication have established them as learning platforms (Kumar et al., 2020; Pereira et al., 2019). Hence, as teaching is an act of imparting knowledge through effective communication, the ubiquitous format of a mobile-based chatbot could also enhance the learning experience (Vázquez-Cano et al., 2021); chatbots strategized for educational purposes are thus described as educational chatbots.

Bii (2013) defined educational chatbots as chatbots conceived for explicit learning objectives, whereas Riel (2020) defined them as programs that aid in achieving educational and pedagogical goals within the parameters of a traditional chatbot. Empirical studies have positioned ECs as personalized teaching assistants or learning partners (Chen et al., 2020; Garcia Brustenga et al., 2018) that provide scaffolding (tutor support) through practice activities (Garcia Brustenga et al., 2018). They also support personalized learning, multimodal content (Schmulian & Coetzee, 2019), and instant interaction without time limits (Chocarro et al., 2021). Numerous benefits have been reported reflecting positive experiences (Ismail & Ade-Ibijola, 2019; Schmulian & Coetzee, 2019), including improved learning confidence (Chen et al., 2020), motivation, self-efficacy, learner control (Winkler & Söllner, 2018), engagement (Sreelakshmi et al., 2019), knowledge retention (Cunningham-Nelson et al., 2019), and access to information (Stathakarou et al., 2020). Furthermore, ECs were found to provide value and learning choices (Yin et al., 2021), which in turn is beneficial for customizing learning preferences (Tamayo et al., 2020).

Besides, as ECs promote anytime, anywhere learning strategies (Chen et al., 2020; Ondas et al., 2019), they are individually scalable (Chocarro et al., 2021; Stathakarou et al., 2020) to support learning management (Colace et al., 2018) and the delivery of context-sensitive information (Yin et al., 2021). They thereby encourage participation (Tamayo et al., 2020; Verleger & Pembridge, 2019) and the disclosure (Brandtzaeg & Følstad, 2018; Ischen et al., 2020; Wang et al., 2021) of personal aspects that would not surface in a traditional classroom or face-to-face interaction. This may also provide an opportunity to promote mental health (Dekker et al., 2020), as an EC can offer a ‘safe’ environment in which to make mistakes and learn (Winkler & Söllner, 2018). Furthermore, ECs can be operated to answer FAQs automatically, manage online assessments (Colace et al., 2018; Sandoval, 2018), and support peer-to-peer assessment (Pereira et al., 2019).

Moreover, according to Cunningham-Nelson et al. (2019), one of the key benefits of ECs is that they can support a large number of users simultaneously, which is undeniably an advantage as it reduces instructors’ workload. Colace et al. (2018) describe ECs as instrumental when dealing with multiple students, especially for testing behavior, keeping track of progress, and assigning tasks. Furthermore, ECs were also found to increase autonomous learning skills and to reduce the need for face-to-face interaction between instructors and students (Kumar & Silva, 2020; Yin et al., 2021), an added advantage for online learning at the onset of the pandemic. Likewise, ECs can also be used purely for administrative purposes, such as delivering notices, reminders, notifications, and data-management support (Chocarro et al., 2021), and as a platform for providing standard information such as rubrics, learning resources, and content (Cunningham-Nelson et al., 2019). According to Meyer von Wolff et al. (2020), chatbots are a suitable instructional tool for higher education, and students are accepting of their application.

Conversely, Garcia Brustenga et al. (2018) categorized ECs into eight tasks in the educational context, as described in Table 1. Correspondingly, these tasks reflect that ECs may be beneficial in fulfilling the three learning domains by providing a platform for information retrieval, emotional and motivational support, and skills development.

Albeit, from the instructor’s perspective, ECs could be intricate and demanding, especially for instructors who do not know how to code (Schmulian & Coetzee, 2019); automating some of these interactions could free educators to focus on other pedagogical needs (Gonda et al., 2019). Nevertheless, acquiring such skills is often time-consuming, and teachers are usually not mentally prepared to take on a designer’s (Kim, 2021) or programmer’s role. The solution may lie in developing code-free chatbots (Luo & Gonda, 2019), especially via MIM (Smutny & Schreiberova, 2020).

Accordingly, for EC development, it is imperative to ensure there are design principles or models that can be adapted to pedagogical needs. Numerous models have been applied in the educational context, such as CommonKADS (Cameron et al., 2018), Goal-Oriented Requirements Engineering (GORE) (Arruda et al., 2019), and retrieval-based and QANet models (Wu et al., 2020). Nevertheless, these models reflect a coding approach that does not emphasize strategies or principles for achieving learning goals. While Garcia Brustenga et al. (2018), Gonda et al. (2019), Kerly et al. (2007), Satow (2017), Smutny and Schreiberova (2020), and Stathakarou et al. (2020) have highlighted some design guidelines for ECs, a more concise model was required. Therefore, based on the suggestions of these empirical studies, the researcher identified three main design attributes: reliability, pedagogy, and experience (Table 2).

Nevertheless, it was observed that the communicative aspect was absent. Undeniably, chatbots are communication tools that stimulate interpersonal communication (Ischen et al., 2020; Wang et al., 2021); therefore, integrating interpersonal communication was deemed essential. Interpersonal communication is defined as communication between two individuals who have established a relationship (Devito, 2018), and such a relationship is also significant in MIM, representing the communication between peers and instructors (Chan et al., 2020). Furthermore, according to Han and Xu (2020), interpersonal communication moderates the relationship and perception that influence the use of an online learning environment. According to Hobert and Berens (2020), while chatbot interaction could facilitate small talk that influences learning, such capabilities should not be overemphasized. Therefore, it was concluded that four fundamental attributes or strategies are critical for EC design: Reliability, interpersonal communication, Pedagogy, and Experience (RiPE), as explained in Table 3.

Nevertheless, ECs are not without flaws (Fryer et al., 2019). According to Kumar and Silva (2020), acceptance, facilities, and skills are still a significant challenge for students and instructors. Similarly, designing and adapting chatbots into existing learning systems is often taxing (Luo & Gonda, 2019), as instructors sometimes have limited competencies and strategic options for fulfilling EC pedagogical needs (Sandoval, 2018). Moreover, the complexity of designing for and capturing all scenarios of how a user might engage with a chatbot also creates frustration, as expectations may not always be met for both parties (Brandtzaeg & Følstad, 2018). Hence, while ECs as conversational agents have been projected to substitute for learning platforms in the future (Følstad & Brandtzaeg, 2017), much is still to be explored from the stakeholders’ viewpoint in facilitating such an intervention.

Research gaps in EC research

Three categories of research gaps were identified from empirical findings: (i) learning outcomes, (ii) design issues, and (iii) assessment and testing issues. Firstly, research gaps concerning learning outcomes include measuring effectiveness (Schmulian & Coetzee, 2019), perception, social influence (Chaves & Gerosa, 2021), personality traits, affective outcomes (Ciechanowski et al., 2019; Winkler & Söllner, 2018), acceptance (Chen et al., 2020; Chocarro et al., 2021), satisfaction (Stathakarou et al., 2020), interest (Fryer et al., 2019), motivation, learning performance (Yin et al., 2021), mental health (Brandtzaeg & Følstad, 2018), engagement (Riel, 2020), and cognitive effort (Nguyen & Sidorova, 2018). EC studies have primarily focused on language learning, programming, and health courses, implying that EC application and learning outcomes have not been investigated across other educational domains and levels of education.

Next, as for design and implementation issues, there is an implied need to consider strategies that align EC application with teaching and learning (Haristiani et al., 2019; Sjöström et al., 2018), mainly to supplement activities that can replace face-to-face interactions (Schmulian & Coetzee, 2019). According to Schmulian and Coetzee (2019), mobile-based chatbot applications are still scarce in the educational domain, and while ECs in MIM have been gaining momentum, studies addressing their implementation remain lacking. Furthermore, there are also limited studies on strategies for improving the EC’s role as an engaging pedagogical communication agent (Chaves & Gerosa, 2021). Besides, students’ expectations and the current reality of simplistic bots may not be aligned, as Miller (2016) claims that ANI’s limitations have delimited chatbots to simplistic menu-prompt interactions.

Lastly, in regard to assessment issues, measurement strategies for both intrinsic and extrinsic learning outcomes (Sjöström et al., 2018), applying experimental approaches to evaluate user experience (Fryer et al., 2019; Ren et al., 2019) and psychophysiological reactions (Ciechanowski et al., 2019), have been lacking. Nevertheless, Hobert (2019) claims that the main issue with EC assessment is the narrow view used to evaluate outcomes, based on specific fields rather than a multidisciplinary approach. Moreover, evaluating the effectiveness of ECs is a complex process (Winkler & Söllner, 2018), as it is unclear which characteristics are important in designing a specific chatbot (Chaves & Gerosa, 2021) and how stakeholders will adapt to its application to support teaching and learning (Garcia Brustenga et al., 2018). Furthermore, there is a need to understand how users experience chatbots (Brandtzaeg & Følstad, 2018), especially when they are not familiar with such an intervention (Smutny & Schreiberova, 2020). Finally, due to the novelty of ECs, the author has not found any studies on ECs in design education or project-based learning, or focusing on teamwork outcomes.

Purpose of the study

This study aims to investigate the effects of ECs on learning outcomes, namely learning performance, perception of learning, need for cognition, motivation, creative self-efficacy, and teamwork, in an instructional design course that applies team-based projects. Learning performance is defined as the students’ combined scores accumulated from the project-based learning activities in this study. Next, perception of the learning process is described as the perceived benefits obtained from the course (Wei & Chou, 2020), and the need for cognition as an individual’s tendency to participate in and take pleasure in cognitive activities (de Holanda Coelho et al., 2020). The need for cognition also indicates positive acceptance of problem-solving (Cacioppo et al., 1996) and enjoyment (Park et al., 2008), and it is critical for teamwork, as it fosters team performance and information-processing motivation (Kearney et al., 2009). Hence, we speculated that ECs might influence the need for cognition by simplifying learning tasks (Ciechanowski et al., 2019), especially for teamwork.

Subsequently, motivational beliefs are reflected by the perceived self-efficacy and intrinsic value students attach to their cognitive engagement and academic performance (Pintrich & de Groot, 1990). According to Pintrich et al. (1993), self-efficacy and intrinsic value strongly correlate with task value (Eccles & Wigfield, 2002), such as interest, enjoyment, and usefulness. Furthermore, Walker and Greene (2009) explain that the motivational factors that facilitate learning are not always solely reliant on self-efficacy, and Pintrich and de Groot (1990) claim that a combination of self-efficacy and intrinsic value better explains the extent to which students are willing to take on a learning task. The researcher also considered creative self-efficacy, defined as the students’ belief in their ability to produce creative outcomes (Brockhus et al., 2014). Prior EC studies have not mentioned creativity as a learning outcome. However, according to Pan et al. (2020), there is a positive relationship between creativity and the need for cognition, as it also reflects individual innovation behavior; it was likewise deemed necessary given the design nature of the project. Lastly, teamwork perception was defined as students’ perception of how well they performed as a team to achieve their learning goals. According to Hadjielias et al. (2021), the cognitive state of teams involved in digital innovation is usually affected by the tasks within the innovation stages. Hence, consideration of these variables is warranted.

Therefore, it was hypothesized that using ECs could improve learning outcomes, and a quasi-experimental design comparing EC and traditional classroom (CT) groups was facilitated, as suggested by Wang et al. (2021), to answer the following research questions.

Does the EC group perform better than students who learn in a traditional classroom setting?

Do students who learn with EC have a better perception of learning, need for cognition, motivational belief, and creative self-efficacy than students in a traditional classroom setting?

Does EC improve teamwork perception in comparison to students in a traditional classroom setting?

Educational chatbot design, development, and deployment

According to Adamopoulou and Moussiades (2020), it is impossible to categorize chatbots exhaustively due to their diversity; nevertheless, specific attributes can be predetermined to guide design and development goals. In this study, the rule-based approach using the if-else technique (Khan et al., 2019) was applied to design the EC. A rule-based chatbot responds only to the rules and keywords programmed into it (Sandoval, 2018); designing an EC therefore requires anticipating what students may inquire about (Chete & Daudu, 2020). Furthermore, a designer should also consider the chatbot’s capabilities for natural language conversation and how it can aid instructors, especially in repetitive, low-cognitive-level tasks such as answering FAQs (Garcia Brustenga et al., 2018). As mentioned previously, the goal can be purely administrative (Chocarro et al., 2021) or pedagogical (Sandoval, 2018).
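The rule-based if-else technique described above can be sketched in a few lines. This is a hedged illustration only: the keyword sets and replies are hypothetical, not the actual rules of the QMT212 chatbot.

```python
import re

# Minimal sketch of a rule-based (if-else) chatbot: each rule pairs a
# keyword set with a canned reply. Keywords and replies are invented.
RULES = [
    ({"deadline", "due"}, "The project report is due in week 10."),
    ({"rubric", "marking"}, "The assessment rubric is pinned in the group channel."),
    ({"register", "group"}, "Send /register followed by your five member names."),
]

FALLBACK = "Sorry, I can only answer questions about the course project."

def reply(message: str) -> str:
    """Return the response of the first rule whose keywords match."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    for keywords, response in RULES:
        if words & keywords:        # any shared keyword triggers the rule
            return response
    return FALLBACK                 # unanticipated input falls through
```

Because unanticipated inputs fall through to the fallback reply, the designer must enumerate likely student inquiries up front, which is exactly the anticipation requirement noted above.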

Next, for the design and development of the EC, Textit ( https://textit.com/ ), an interactive chatbot-development platform, was utilized. Textit is third-party software developed by Nyaruka and UNICEF that offers chatbot building without coding, using the concept of flows, and deployment through various platforms such as Facebook Messenger, Twitter, Telegram, and SMS. For this EC, Telegram was used because of its data-encryption security (de Oliveira et al., 2016), cloud storage, and the privacy students and the instructor would have without using their personal social media accounts. Telegram has previously been used in this context for retrieving learning content (Rahayu et al., 2018; Thirumalai et al., 2019), information and progress (Heryandi, 2020; Setiaji & Paputungan, 2018), learning assessment (Pereira, 2016), project-based learning, teamwork (Conde et al., 2021), and peer-to-peer (P2P) assessment (Pereira et al., 2019).

Subsequently, the chatbot, named after the course code (QMT212), was designed as a teaching assistant for an instructional design course. It was targeted to be task-oriented (Yin et al., 2021), content-curating, and long-term (10 weeks) (Følstad et al., 2019). Students worked in groups of five during the ten weeks, and the EC interactions were diversified to aid teamwork activities: registering group members, information sharing, progress monitoring, and peer-to-peer feedback. According to Garcia Brustenga et al. (2018), an EC can be designed without educational intentionality, where it is used purely for administrative purposes to guide and support learning. Hence, 10 ECs (Table 4) were deployed throughout the semester: EC1–EC4 were used for administrative purposes as suggested by Chocarro et al. (2021), EC5–EC6 for assignments (Sjöström et al., 2018), EC7 for user feedback (Kerly et al., 2007) and acceptance (Yin et al., 2021), EC8 for monitoring teamwork progress (Colace et al., 2018), EC9 as a project guide FAQ (Sandoval, 2018), and lastly EC10 for peer-to-peer assessment (Colace et al., 2018; Pereira et al., 2019). The ECs were also developed based on micro-learning strategies to ensure that students do not spend long hours with the EC, which may cause cognitive fatigue (Yin et al., 2021). Furthermore, the goal of each EC was to facilitate group collaboration around a project-based activity in which the students were required to design and develop an e-learning tool, write a report, and present their outcomes. Finally, based on the new design principles synthesized by the researcher, RiPE was contextualized as described in Table 5.

Example flow diagrams from Textit for the design and development of the chatbot are presented in Fig.  1 . The number of choices and possible outputs determines the complexity of the chatbot: some chatbots involve simple interactions, such as registering groups (Fig.  2 ), while others involve much more complex interactions, such as peer-to-peer assessment (Fig.  3 ). Example screenshots from Telegram are depicted in Fig.  4 .

figure 1

Textit flow diagrams

figure 2

Textit flow diagram for group registration

figure 3

Textit flow diagram for peer-to-peer evaluation

figure 4

Telegram screenshots of the EC

Methodology

Participants

The participants of this study were second-year Bachelor of Education (Teaching English to Speakers of Other Languages, TESOL) students minoring in multimedia and enrolled in a higher learning institute in Malaysia. The 60 students were divided into two classes (30 students per class): a traditional learning class (control group, CT) and a chatbot learning class (treatment group, EC). Of the 60 participants, 11 were male and 49 were female, a distribution typical of this learning program. Both groups were exposed to the same learning content, class duration, and instructor; the only differences were the class schedules and that only the treatment group used the EC as an aid for teaching and learning the course. Both groups provided written consent to participate in the study and were given an honorarium for participation. However, additional consent regarding data protection was obtained from the EC group, as the intervention included the use of a social media application; this consent was obtained through EC1: Welcome Bot.

The instructional design course aims to provide fundamental skills in designing effective multimedia instructional materials and covers topics such as needs analysis, instructional analysis, learner analysis, context analysis, defining goals and objectives, developing instructional strategies and materials, developing assessment methods, and conducting formative and summative assessments. The teaching and learning in both classes were identical: students were required to design and develop a multimedia-based instructional tool as their course project. Students independently chose their groupmates and worked as a group to fulfill their project tasks. Moreover, both classes were managed through the institution's learning management system for distributing notes, taking attendance, and submitting assignments.

This study applied an intervention using a quasi-experimental design. Creswell (2012) explained that education-based research in most cases requires intact groups, as creating artificial groups may disrupt classroom learning. Therefore, a one-group pretest–posttest design was applied for both groups in measuring learning outcomes, except for learning performance and perception of learning, which used only a post-test design. The total intervention time was ten weeks, as represented in Fig.  5 . Each EC was deployed to the treatment class one day before class, except for EC6 and EC10, which were deployed during class. This strategy ensured that the instructor could guide the students the next day if there were any issues.

figure 5

Study procedure

This study integrated five instruments, measuring perception of learning (Silva et al., 2017), perceived motivational belief using the Motivated Strategies for Learning Questionnaire (MSLQ) (Pintrich & de Groot, 1990) and a modified MSLQ (Silva et al., 2017), need for cognition using the Need for Cognition Scale–6 (NCS-6) (de Holanda Coelho et al., 2020), creative self-efficacy using the Creative Self-Efficacy questionnaire (QCSE) (Brockhus et al., 2014), and teamwork using a modified version of the Team Assessment Survey Questions (Linse, 2007). The teamwork survey included the following open-ended questions:

Give one specific example of something you learned from the team that you probably would not have learned on your own.

Give one specific example of something other team members learned from you that they probably would not have learned without you.

What problems have you had interacting as a team so far?

Suggest one specific, practical change the team could make that would help improve everyone’s learning.

The instruments were rated on a Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree) and administered using Google Forms for both groups. Meanwhile, learning performance was assessed through the project assessment, which includes the report, product, presentation, and peer-to-peer assessment.

A series of one-way analyses of covariance (ANCOVA) was employed to evaluate the differences between the EC and CT groups in need for cognition, motivational belief for learning, creative self-efficacy, and team assessment. For learning performance and perception of learning, a t-test was used to identify the difference between the groups. The effect size was evaluated according to Hattie (2015), where an average effect size (Cohen’s d ) of 0.42 for an intervention using technologies for college students is considered indicative of improved achievement (Hattie, 2017). Furthermore, as the teamwork survey included open-ended questions, the difference between the groups was evaluated qualitatively using text analysis performed with the Voyant tool at https://voyant-tools.org/ (Sinclair & Rockwell, 2021). Voyant Tools is an open-source online tool for text analysis and visualization (Hetenyi et al., 2019); in this study, collocates graphs, directed network graphs of keywords and terms that occur in close proximity, were used.
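The collocates idea behind the qualitative analysis can be illustrated with a minimal sketch. This is not Voyant's implementation, merely the underlying notion of counting terms that occur within a small window of a chosen keyword; the feedback strings are hypothetical examples:

```python
import re
from collections import Counter

def collocates(texts, keyword, window=3):
    """Count terms occurring within `window` words of `keyword`,
    approximating the idea behind a collocates graph."""
    counts = Counter()
    for text in texts:
        words = re.findall(r"[a-z]+", text.lower())
        for i, w in enumerate(words):
            if w == keyword:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(x for x in words[lo:hi] if x != keyword)
    return counts

# Hypothetical feedback lines, for illustration only:
feedback = [
    "I learned teamwork and how to solve complicated problems",
    "teamwork and acceptance in a group are important",
]
top_terms = collocates(feedback, "teamwork").most_common(3)
```

The resulting counts can then be drawn as a directed network graph with the keyword at the center, which is how the figures below are read.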

Learning performance for the course

The EC group (µ = 42.500, SD = 2.675) compared with the CT group (µ = 39.933, SD = 2.572) demonstrated a significant difference at t (58) = 3.788, p < 0.001, d  = 0.978, indicating a difference in learning achievement in which the EC group outperformed the control group. The Cohen’s d value, as described by Hattie (2017), indicated that learning performance was improved by the intervention.
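The reported t statistic and Cohen's d can be reproduced from the summary statistics alone (n = 30 per group), assuming a pooled-variance Student's t-test; a sketch:

```python
import math

def pooled_t_and_d(m1, sd1, n1, m2, sd2, n2):
    """Student's t (pooled variance) and Cohen's d from summary statistics."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    t = (m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))
    d = (m1 - m2) / sp
    return t, d

t, d = pooled_t_and_d(42.500, 2.675, 30, 39.933, 2.572, 30)
# t ≈ 3.79, d ≈ 0.98, matching the values reported above
```

This also confirms that the reported d of 0.978 exceeds Hattie's 0.42 benchmark for technology interventions with college students.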

Need for cognition

The initial Levene’s test and normality checks indicated that the homogeneity of variance assumption was met at F (1,58) = 0.077, p = 0.782. The adjusted means of µ = 3.416 for the EC group and µ = 3.422 for the CT group indicated that the post-test scores were not significantly different at F (1, 57) = 0.002, p  = 0.969, η2p = 0.000, d  = 0.012, indicating that students’ perception of enjoyment and tendency to engage in the course were similar for both groups.

Motivational beliefs

The initial Levene’s test and normality checks indicated that the homogeneity of variance assumption was met at F (1,58) = 0.062, p = 0.804. The adjusted means of µ = 4.228 for the EC group and µ = 4.200 for the CT group indicated that the post-test scores were not significantly different at F (1, 57) = 0.046, p  = 0.832, η2p = 0.001, d  = 0.056, indicating that students’ motivation to engage in the course was similar for both groups.

Creative self-efficacy

The initial Levene’s test and normality checks indicated that the homogeneity of variance assumption was met at F (1,58) = 0.808, p = 0.372. The adjusted means of µ = 3.566 for the EC group and µ = 3.627 for the CT group indicated that the post-test scores were not significantly different at F (1, 57) = 0.256, p  = 0.615, η2p = 0.004, d  = 0.133, indicating that students’ perception of creative self-efficacy was similar for both groups.

Perception of learning

The EC group (µ = 4.370, SD = 0.540) compared with the CT group (µ = 4.244, SD = 0.479) demonstrated no significant difference at t (58) = 0.956, p = 0.343, d  = 0.247, indicating no quantitative difference in how students perceived their learning process. Nevertheless, we also asked what impacted their learning (project design and development) the most during the course, and the findings, as shown in Table 6, indicated that both groups (EC = 50.00% and CT = 86.67%) found the group learning activity to have the most impact. The control group was more partial towards the group activities than the EC group, which also pointed to online feedback and guidance (30.00%) and interaction with the lecturer as influences. Both groups also indicated that constructive feedback was mostly obtained from fellow course mates (EC = 56.67%, CT = 50.00%) and the instructor (EC = 36.67%, CT = 43.33%) (Table 7), while minimal effort was made to obtain feedback outside the learning environment.

Team assessment

The initial Levene’s test and normality checks indicated that the homogeneity of variance assumption was met at F (1,58) = 3.088, p = 0.051. The adjusted means of µ = 4.518 for the EC group and µ = 4.049 for the CT group indicated that the post-test scores were significantly different at F (1, 57) = 5.950, p  = 0.018, η2p = 0.095, d  = 0.641, indicating a significant difference between the groups in how they performed in teams. The Cohen’s d value, as described by Hattie (2017), indicated that the intervention improved teamwork.

Next, we examined their perception of teamwork based on what they learned from their teammates, what they felt others learned from them, the problems faced as a team, and recommendations to improve their experience in the course. Based on the feedback, themes such as teamwork, technology, learning management, emotional management, creativity, and none were identified to categorize the feedback. The descriptive data are presented in Table 8 for both groups, and the trends reflecting the changes in feedback are described as follows:

Respondent learned from teammates

This question asked respondents to describe one aspect they learned from their team that they probably would not have learned independently. Figure 6 illustrates the changes in each group (EC and CT) pre- and post-intervention. First, teamwork showed an increasing trend for EC, whereas CT showed only slight changes pre- and post-intervention.

Next, using text analysis collocates graphs (Fig.  7 ) for the EC group post-intervention, a change in teamwork perception was observed: from merely learning new ideas, communicating, and accepting opinions towards a need to cooperate as a team to achieve the goal of developing the project. Mere communication was no longer the main priority, as cooperation towards problem-solving became of utmost importance. Examples of feedback include “I learned teamwork and how to solve complicated problems” and “The project was completed in a shorter period of time, compared to if I had done it by myself.” Next, in both groups, creativity declined from being an essential aspect in the project's initial phase towards the end of the semester, whereas greater importance was given to emotional management when handling project matters. An example of feedback is “I learn to push myself more and commit to the project's success.” Nevertheless, the trends in both groups are almost similar.

figure 6

Change in perception pre- and post-intervention based on aspects learned from teammates

figure 7

Change in perception for the EC group based on aspects learned from teammates

Teammates learned from the respondent

This question asked about an aspect the respondent believes their team members learned from them. Initially, both groups reported being unaware of their contribution, stating “nothing” or “I don’t know,” which was classified as “other” (Fig.  8 ). Intriguingly, both groups showed a decline in such negative perceptions post-intervention, which can be attributed to self-realization of their contribution to task completion. Furthermore, different trends were observed between the groups for teamwork: the EC group made more references to increased teamwork contribution, whereas the CT group remained unaffected post-intervention. In terms of technology application, respondents in both groups described being a valuable resource for teaching their peers about technology; one respondent stated, “My friends learn how to make an application from me.”

figure 8

Change in perception pre- and post-intervention based on aspects teammates learned from respondents

Problem respondent faced as a team

Based on the analysis, the main issues faced in both groups were related to teamwork (Fig.  9 ). The CT group reflected more teamwork issues than the EC group, and in both groups these issues escalated during the learning process.

figure 9

Graphical representation of issues faced as a team

Based on the text analysis, the EC group initially reported issues with identifying an appropriate time for group discussions, as some teammates were absent or unavailable (Fig.  10 ); one respondent stated that “We can barely meet as a group.” Post-intervention, the group reported similar issues, highlighting a lack of communication and availability due to insufficient time and busy learning schedules. An example response is “We do not have enough time to meet up, and most of us have other work to do.” For the CT group pre-intervention, similar issues were observed as for the EC group, but communication issues were more prevalent, as respondents mentioned differences in opinion or a lack of feedback that affected how they solved problems collectively (Fig. 11 ). An example of feedback is “One of the members rarely responds in the group discussion.” Post-intervention, the CT group claimed that the main issues besides communication were non-contributing members and bias in task distribution. Examples are “Some of my teammates were not really contributing” and “The task was not distributed fairly.”

figure 10

Change in perception for the EC group based on issues faced as a team

figure 11

Change in perception for the CT group based on issues faced as a team

Recommendations to improve teamwork

Two interesting trends were observed in Fig.  12 : (a) the EC group reflected a greater need for teamwork, whereas the CT group showed the opposite; and (b) the CT group emphasized learning management for teamwork, whereas the EC group showed the opposite. When assessing the changes in the EC group (Fig.  13 ), transformations were observed between pre- and post-intervention, where students expressed the need for more active collaboration in providing and accepting ideas. One respondent from the treatment group reflected that acceptance is vital for successful collaboration, stating that “Teamwork and acceptance in a group are important.” For the CT group (Fig.  14 ), the complex pre-intervention characterization of teamwork (communicating, confidence, and contributing ideas) was transformed into a greater need for commitment: “Make sure everyone is committed and available to contribute accordingly.”

figure 12

Graphical representation of recommendations pre and post-intervention for both groups

figure 13

Changes in perception for the EC group based on recommendations for learning improvement as a team

figure 14

Changes in perception for the CT group based on recommendations for learning improvement as a team

According to Winkler and Söllner (2018), ECs have the potential to improve learning outcomes due to their ability to personalize the learning experience. This study aimed to evaluate the difference in learning outcomes based on the impact of an EC on a project-based learning activity. The outcomes were compared quantitatively and qualitatively to explore how the introduction of the EC influenced learning performance, need for cognition, motivational belief, creative self-efficacy, perception of learning, and teamwork. Based on the findings, the EC influenced learning performance ( d  = 0.978) and teamwork ( d  = 0.641); since both Cohen’s d values are above 0.42, a significant impact on these outcomes was deduced. However, the other outcomes (need for cognition, motivational belief, creative self-efficacy, and perception of learning) did not reflect significant differences between the groups.

Firstly, Kearney et al. (2009) explained that in homogeneous teams (as investigated in this study), the need for cognition may have limited influence, as both groups were required to be equally innovative in providing project solutions. Lapina (2020) added that problem-based learning and solving complex problems can improve the need for cognition. Hence, as both classes had the same team-based project task, the homogeneous nature of the sampling may have contributed to the similarity in outcomes, overshadowing the effect of the ECs. The same applies to motivational belief, the central aspect needed to encourage strategic learning behavior (Yen, 2018). Its positive relation with cognitive engagement, performance, and the use of metacognitive strategies (Pintrich & de Groot, 1990) is attributed to the need to regulate and monitor learning (Yilmaz & Baydas, 2017), especially in project-based learning activities (Sart, 2014). These attributes were apparent in both groups, given the same learning task: students completed their task (cognitive engagement) and, to do so, had to plan their task, schedule teamwork activities (metacognition), and design and develop their product systematically.

Moreover, individual personality traits such as motivation have also been found to influence creativity (van Knippenberg & Hirst, 2020), which indirectly influences the need for cognition (Pan et al., 2020). Nevertheless, these nonsignificant findings make an interesting contribution, as they imply that project-based learning tends to improve these personality-based learning outcomes, while the introduction of ECs did not create cognitive barriers that would have affected the cognitive, motivational, and creative processes involved in project-based learning. Furthermore, as there is a triangulated relationship between these outcomes, the author speculates that these outcomes were justified, especially given the small sample size used, as Rosenstein (2019) explained.

However, when the EC is regarded as a human-like conversational agent (Ischen et al., 2020) used as a digital assistant for managing and monitoring students (Brindha et al., 2019), the question arises of how to measure such implications and confirm its capability to transform learning. As a digital assistant, the EC was designed to aid in managing the team-based project: it was intended to communicate with students, inquire about challenges, and provide support and guidance in completing their tasks. According to Cunningham-Nelson et al. (2019), such a role improves academic performance as students prioritize such needs. Likewise, for teamwork, technology-mediated communication such as that of ECs has been found to encourage interaction in team projects (Colace et al., 2018), as students perceive the ECs as helping them learn more, even when they have communication issues (Fryer et al., 2019). This supports the outcome of this study, in which the EC group's learning performance and teamwork outcomes showed larger effect sizes than the CT group's.

As for the qualitative findings, even though the perception of learning did not vary much statistically, the EC group gave additional weight to group activities, online feedback, and interaction with the lecturer as impactful. Interestingly, the percentage of students citing “interaction with lecturer” and “online feedback and guidance” was higher in the EC group than in the control group, which may reflect a tendency to perceive the chatbot as an embodiment of the lecturer. Furthermore, as for constructive feedback, the outcomes for both groups were very similar, as the critiques came mainly from teammates and the instructor, and the ECs were not designed to critique the project task.

Next, it was interesting to observe the differences and similarities between the groups in teamwork. In the EC group, students shifted from identifying learning from individual team members towards a collective perspective of learning from the team. Similarly, there was more emphasis on how they contributed as a team, especially in providing technical support. For the CT group, little difference was observed pre- and post-intervention for teamwork; however, post-intervention, both groups reflected a reduced need for creativity and an emphasis on managing the learning task cognitively and emotionally as a team. Concurrently, self-realization of their value as contributing team members increased in both groups from pre- to post-intervention, and more so in the CT group.

Furthermore, in regard to the problems faced, the EC group's perception transformed from collaboration issues towards communication issues, whereas the opposite occurred in the CT group. According to Kumar et al. (2021), collaborative learning has a symbiotic relationship with communication skills in project-based learning. This study identified a need for more active collaboration in the EC group and for commitment in the CT group. Overall, the group task performed through the ECs contributed towards team building and collaboration, whereas in the CT group the concept of individuality was more apparent. Interestingly, no feedback from the EC group mentioned difficulties in using the EC or complexity in interacting with it. It is presumed that students welcomed such interaction as it provided learning support, and that they understood its significance.

Furthermore, the feedback also helped explain why the other variables (need for cognition, perception of learning, creative self-efficacy, and motivational belief) did not show significant differences. For instance, both groups portrayed high self-realization of their value as team members at the end of the course, and it was deduced that their motivational belief was influenced by higher self-efficacy and intrinsic value. Likewise, in both groups, creativity was overshadowed post-intervention by the significance of teamwork. Therefore, we conclude that ECs significantly impact learning performance and teamwork, but affective-motivational improvements may have been masked by the homogeneous learning process in both groups. Furthermore, the main contribution of the ECs can be seen as creating a “team spirit,” especially in completing administrative tasks, interacting, and providing feedback on team progress, and such interaction was fundamental in influencing learning performance.

Theoretical and practical implication

This study reports theoretical and practical contributions in the area of educational chatbots. Firstly, given the novelty of chatbots in educational research, this study enriches the current body of knowledge on EC design characteristics and their impact on learning outcomes. Even though the findings were not uniformly positive regarding the affective-motivational learning outcomes, ECs as tutor support did facilitate teamwork and cognitive outcomes that support project-based learning in design education. In view of that, it is worth noting that the embodiment of ECs as a learning assistant does create openness in interaction and interpersonal relationships among peers, especially when the tasks are designed to facilitate these interactions.

Limitation and future studies

This study focused on using chatbots as a learning assistant from an educational perspective by comparing the educational implications with a traditional classroom. Therefore, the outcomes reflect only the pedagogical outcomes intended for design education and project-based learning, not interaction behaviors. Even though empirical studies have stipulated the role of chatbots in facilitating learning as communicative agents, instructional designers should consider the underdeveloped role of an intelligent tutoring chatbot (Fryer et al., 2019) and question its limits in an authentic learning environment. As users, students may have different or higher expectations of ECs, potentially a spillover from use behavior with chatbots in other service industries. Moreover, questions to ponder include the ethical implications of using ECs, especially outside scheduled learning time, and whether such practices are welcomed, warranted, and accepted by today's learners as a much-needed learning strategy. According to Garcia Brustenga et al. (2018), while ECs can perform some administrative tasks and appear more appealing with multimodal strategies, the author questions how successful such strategies can be as a personalized learning environment without the teacher as the EC’s instructional designer. Therefore, future studies should look into educators' challenges, needs, and competencies and align them to fulfill EC-facilitated learning goals. Furthermore, there is much to be explored in understanding the complex dynamics of human–computer interaction in realizing such goals, especially educational goals currently influenced by the onset of the Covid-19 pandemic. Future studies should also look into different learning outcomes, social media use, personality, age, culture, context, and use behavior to understand the use of chatbots for education.

Availability of data and materials

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Abbreviations

EC: Educational chatbot

CT: Control group

RiPE: Reliability, interpersonal communication, pedagogy, and experience

GORE: Goal-oriented requirements engineering

Adamopoulou, E., & Moussiades, L. (2020). An overview of chatbot technology. In: Maglogiannis, I., Iliadis, L. & Pimenidis, E. (eds) Artificial intelligence applications and innovations. AIAI 2020. IFIP advances in information and communication technology , vol 584 (pp. 373–383). Springer. https://doi.org/10.1007/978-3-030-49186-4_31 .

Arruda, D., Marinho, M., Souza, E. & Wanderley, F. (2019) A Chatbot for goal-oriented requirements modeling. In: Misra S. et al. (eds) Computational science and its applications—ICCSA 2019. ICCSA 2019. Lecture Notes in Computer Science , vol 11622 (pp. 506–519). Springer. https://doi.org/10.1007/978-3-030-24305-0_38 .

Bii, P. (2013). Chatbot technology: A possible means of unlocking student potential to learn how to learn. Educational Research, 4 (2), 218–221.

Brandtzaeg, P. B., & Følstad, A. (2018). Chatbots: User changing needs and motivations. Interactions, 25 (5), 38–43. https://doi.org/10.1145/3236669 .

Brindha, S., Dharan, K. R. D., Samraj, S. J. J., & Nirmal, L. M. (2019). AI based chatbot for education management. International Journal of Research in Engineering, Science and Management, 2 (3), 2–4.

Brockhus, S., van der Kolk, T. E. C., Koeman, B., & Badke-Schaub, P. G. (2014). The influence of ambient green on creative performance. Proceeding of International Design Conference (DESIGN 2014) , Croatia, 437–444.

Cacioppo, J. T., Petty, R. E., Feinstein, J. A., & Jarvis, W. B. G. (1996). Dispositional differences in cognitive motivation: The life and times of individuals varying in need for cognition. Psychological Bulletin, 119 , 197–253.

Cameron, G., Cameron, D. M., Megaw, G., Bond, R. B., Mulvenna, M., O’Neill, S. B., Armour, C., & McTear, M. (2018). Back to the future: Lessons from knowledge engineering methodologies for chatbot design and development. Proceedings of British HCI 2018 , 1–5. https://doi.org/10.14236/ewic/HCI2018.153 .

Chan, T. J., Yong, W. K. Y., & Harmizi, A. (2020). Usage of WhatsApp and interpersonal communication skills among private university students. Journal of Arts & Social Sciences, 3 (January), 15–25.

Chaves, A. P., & Gerosa, M. A. (2021). How should my chatbot interact? A survey on social characteristics in human–chatbot interaction design. International Journal of Human-Computer Interaction, 37 (8), 729–758. https://doi.org/10.1080/10447318.2020.1841438 .

Chen, H. L., Widarso, G. V., & Sutrisno, H. (2020). A chatbot for learning Chinese: Learning achievement and technology acceptance. Journal of Educational Computing Research, 58 (6), 1161–1189. https://doi.org/10.1177/0735633120929622 .

Chete, F. O., & Daudu, G. O. (2020). An approach towards the development of a hybrid chatbot for handling students’ complaints. Journal of Electrical Engineering, Electronics, Control and Computer Science, 6 (22), 29–38.

Chocarro, R., Cortiñas, M., & Marcos-Matás, G. (2021). Teachers’ attitudes towards chatbots in education: A technology acceptance model approach considering the effect of social language, bot proactiveness, and users’ characteristics. Educational Studies, 00 (00), 1–19. https://doi.org/10.1080/03055698.2020.1850426 .

Ciechanowski, L., Przegalinska, A., Magnuski, M., & Gloor, P. (2019). In the shades of the uncanny valley: An experimental study of human–chatbot interaction. Future Generation Computer Systems, 92 , 539–548. https://doi.org/10.1016/j.future.2018.01.055 .

Colace, F., De Santo, M., Lombardi, M., Pascale, F., Pietrosanto, A., & Lemma, S. (2018). Chatbot for e-learning: A case of study. International Journal of Mechanical Engineering and Robotics Research, 7 (5), 528–533. https://doi.org/10.18178/ijmerr.7.5.528-533 .

Conde, M. Á., Rodríguez-Sedano, F. J., Hernández-García, Á., Gutiérrez-Fernández, A., & Guerrero-Higueras, Á. M. (2021). Your teammate just sent you a new message! The effects of using Telegram on individual acquisition of teamwork competence. International Journal of Interactive Multimedia and Artificial Intelligence, 6 (6), 225. https://doi.org/10.9781/ijimai.2021.05.007 .

Creswell, J. W. (2012). Educational research: Planning, conducting and evaluating quantitative and qualitative research (4th ed.). Pearson Education.

Cunningham-Nelson, S., Boles, W., Trouton, L., & Margerison, E. (2019). A review of chatbots in education: Practical steps forward. In 30th Annual Conference for the Australasian Association for Engineering Education (AAEE 2019): Educators becoming agents of change: Innovate, integrate, motivate (pp. 299–306). Engineers Australia.

de Holanda Coelho, G. L., Hanel, H. P., & Wolf, J. L. (2020). The very efficient assessment of need for cognition: Developing a six-item version. Assessment, 27 (8), 1870–1885. https://doi.org/10.1177/1073191118793208 .

de Oliveira, J. C., Santos, D. H., & Neto, M. P. (2016). Chatting with Arduino platform through Telegram Bot. 2016 IEEE International Symposium on Consumer Electronics (ISCE) , 131–132. https://doi.org/10.1109/ISCE.2016.7797406 .

Dekker, I., De Jong, E. M., Schippers, M. C., De Bruijn-Smolders, M., Alexiou, A., & Giesbers, B. (2020). Optimizing students’ mental health and academic performance: AI-enhanced life crafting. Frontiers in Psychology, 11 (June), 1–15. https://doi.org/10.3389/fpsyg.2020.01063 .

Devito, J. (2018). The interpersonal communication book (15th ed.). Pearson Education Limited.

Eccles, J. S., & Wigfield, A. (2002). Motivational Beliefs, Values, and Goals. Annual Review of Psychology , 53 (1), 109–132. https://doi.org/10.1146/annurev.psych.53.100901.135153 .

Eeuwen, M. V. (2017). Mobile conversational commerce: Messenger chatbots as the next interface between businesses and consumers. Unpublished master's thesis, University of Twente.

Følstad, A., Skjuve, M., & Brandtzaeg, P. B. (2019). Different chatbots for different purposes: towards a typology of chatbots to understand interaction design. In: Bodrunova S. et al. (eds) Internet Science. INSCI 2018. Lecture Notes in Computer Science , vol 11551 (pp. 145–156). Springer. https://doi.org/10.1007/978-3-030-17705-8_13 .

Følstad, A., & Brandtzaeg, P. B. (2017). Chatbots and the new world of HCI. Interactions, 24 (4), 38–42. https://doi.org/10.1145/3085558 .

Fryer, L. K., Nakao, K., & Thompson, A. (2019). Chatbot learning partners: Connecting learning experiences, interest and competence. Computers in Human Behavior, 93 , 279–289. https://doi.org/10.1016/j.chb.2018.12.023 .

Garcia Brustenga, G., Fuertes-Alpiste, M., & Molas-Castells, N. (2018). Briefing paper: Chatbots in education . eLearn Center, Universitat Oberta de Catalunya. https://doi.org/10.7238/elc.chatbots.2018 .

Gonda, D. E., Luo, J., Wong, Y. L., & Lei, C. U. (2019). Evaluation of developing educational chatbots based on the seven principles for good teaching. Proceedings of the 2018 IEEE international conference on teaching, assessment, and learning for engineering, TALE 2018 , Australia, 446–453. IEEE. https://doi.org/10.1109/TALE.2018.8615175 .

Hadjielias, E., Dada, O., Discua Cruz, A., Zekas, S., Christofi, M., & Sakka, G. (2021). How do digital innovation teams function? Understanding the team cognition-process nexus within the context of digital transformation. Journal of Business Research, 122 , 373–386. https://doi.org/10.1016/j.jbusres.2020.08.045 .

Han, R., & Xu, J. (2020). A comparative study of the role of interpersonal communication, traditional media and social media in pro-environmental behavior: A China-based study. International Journal of Environmental Research and Public Health . https://doi.org/10.3390/ijerph17061883 .

Haristiani, N., Danuwijaya, A. A., Rifai, M. M., & Sarila, H. (2019). Gengobot: A chatbot-based grammar application on mobile instant messaging as language learning medium. Journal of Engineering Science and Technology, 14 (6), 3158–3173.

Hattie, J. (2015). The applicability of Visible Learning to higher education. Scholarship of Teaching and Learning in Psychology, 1 (1), 79–91. https://doi.org/10.1037/stl0000021 .

Hattie, J. (2017). Visible Learning plus: 250+ influences on student achievement. Visible Learning plus. www.visiblelearningplus.com/content/250-influences-student-achievement .

Heryandi, A. (2020). Developing chatbot for academic record monitoring in higher education institution. IOP Conference Series: Materials Science and Engineering . https://doi.org/10.1088/1757-899X/879/1/012049 .

Hetenyi, G., Lengyel, A., & Szilasi, M. (2019). Quantitative analysis of qualitative data: Using voyant tools to investigate the sales-marketing interface. Journal of Industrial Engineering and Management, 12 (3), 393–404. https://doi.org/10.3926/jiem.2929 .

Hobert, S. (2019). How are you, chatbot? Evaluating chatbots in educational settings—Results of a literature review. In N. Pinkwart & J. Konert (Eds.), DELFI 2019 (pp. 259–270). Gesellschaft für Informatik, Bonn. https://doi.org/10.18420/delfi2019_289 .

Hobert S. & Berens F. (2020). Small talk conversations and the long-term use of chatbots in educational settings—experiences from a field study. In: Følstad A. et al. (eds) Chatbot research and design. CONVERSATIONS 2019. Lecture Notes in Computer Science, vol 11970 (pp. 260–272). Springer. https://doi.org/10.1007/978-3-030-39540-7_18 .

Holotescu, C. (2016). MOOCBuddy: A chatbot for personalized learning with MOOCs. In: A. Iftene & J. Vanderdonckt (Eds.), Proceedings of the 13th international conference on human-computer interaction RoCHI’2016 , Romania, 91–94.

Ischen C., Araujo T., Voorveld H., van Noort G., Smit E. (2020) Privacy concerns in chatbot interactions. In: Følstad A. et al. (eds) Chatbot research and design. CONVERSATIONS 2019. Lecture Notes in Computer Science , vol 11970 (pp. 34–48). Springer. https://doi.org/10.1007/978-3-030-39540-7_3 .

Ismail, M., & Ade-Ibijola, A. (2019). Lecturer’s Apprentice: A chatbot for assisting novice programmers. Proceedings—2019 International multidisciplinary information technology and engineering conference, IMITEC 2019 . South Africa, 1–8. IEEE. https://doi.org/10.1109/IMITEC45504.2019.9015857 .

Kearney, E., Gebert, D., & Voelpel, S. (2009). When and how diversity benefits teams: The importance of team members’ need for cognition. Academy of Management Journal, 52 (3), 581–598. https://doi.org/10.5465/AMJ.2009.41331431 .

Kerly, A., Hall, P., & Bull, S. (2007). Bringing chatbots into education: Towards natural language negotiation of open learner models. Knowledge-Based Systems, 20 (2), 177–185. https://doi.org/10.1016/j.knosys.2006.11.014 .

Khan, A., Ranka, S., Khakare, C., & Karve, S. (2019). NEEV: An education informational chatbot. International Research Journal of Engineering and Technology, 6 (4), 492–495.

Kim, M. S. (2021). A systematic review of the design work of STEM teachers. Research in Science & Technological Education, 39 (2), 131–155. https://doi.org/10.1080/02635143.2019.1682988 .

Kumar, J. A., & Silva, P. A. (2020). Work-in-progress: A preliminary study on students’ acceptance of chatbots for studio-based learning. Proceedings of the 2020 IEEE Global Engineering Education Conference (EDUCON) , Portugal, 1627–1631. IEEE https://doi.org/10.1109/EDUCON45650.2020.9125183 .

Kumar, J. A., Bervell, B., Annamalai, N., & Osman, S. (2020). Behavioral intention to use mobile learning: Evaluating the role of self-efficacy, subjective norm, and WhatsApp use habit. IEEE Access, 8 , 208058–208074. https://doi.org/10.1109/ACCESS.2020.3037925 .

Kumar, J. A., Silva, P. A., & Prelath, R. (2021). Implementing studio-based learning for design education: A study on the perception and challenges of Malaysian undergraduates. International Journal of Technology and Design Education, 31 (3), 611–631. https://doi.org/10.1007/s10798-020-09566-1 .

Lapina, A. (2020). Does exposure to problem-based learning strategies increase postformal thought and need for cognition in higher education students? A quasi-experimental study (Publication No. 28243240) Doctoral dissertation, Texas State University-San Marcos. ProQuest Dissertations & Theses Global.

Linse, A. R. (2007). Team peer evaluation. In Schreyer Institute for Teaching Excellence . http://www.schreyerinstitute.psu.edu/ .

Luo, C. J., & Gonda, D. E. (2019). Code Free Bot: An easy way to jumpstart your chatbot! Proceedings of the 2019 IEEE International Conference on Engineering, Technology and Education (TALE 2019) , Australia, 1–3, IEEE. https://doi.org/10.1109/TALE48000.2019.9226016 .

Meyer von Wolff, R., Nörtemann, J., Hobert, S., Schumann, M. (2020) Chatbots for the information acquisition at universities—A student’s view on the application area. In: Følstad A. et al. (eds) Chatbot research and design. CONVERSATIONS 2019. Lecture Notes in Computer Science , vol 11970 (pp. 231–244). Springer. https://doi.org/10.1007/978-3-030-39540-7_16 .

Miller, E. (2016). How chatbots will help education. Venturebeat. http://venturebeat.com/2016/09/29/how-chatbots-will-help-education/ .

Nguyen, Q. N., & Sidorova, A. (2018). Understanding user interactions with a chatbot: A self-determination theory approach. Proceedings of the Twenty-Fourth Americas Conference on Information Systems , United States of America, 1–5. Association for Information Systems (AIS).

Oke, A., & Fernandes, F. A. P. (2020). Innovations in teaching and learning: Exploring the perceptions of the education sector on the 4th industrial revolution (4IR). Journal of Open Innovation: Technology, Market, and Complexity., 6 (2), 31. https://doi.org/10.3390/JOITMC6020031 .

Ondas, S., Pleva, M., & Hladek, D. (2019). How chatbots can be involved in the education process. Proceedings of the 17th IEEE international conference on emerging eLearning technologies and applications ICETA 2019, Slovakia , 575–580. https://doi.org/10.1109/ICETA48886.2019.9040095 .

Pan, Y., Shang, Y., & Malika, R. (2020). Enhancing creativity in organizations: The role of the need for cognition. Management Decision . https://doi.org/10.1108/MD-04-2019-0516 .

Park, H. S., Baker, C., & Lee, D. W. (2008). Need for cognition, task complexity, and job satisfaction. Journal of Management in Engineering, 24 (2), 111–117. https://doi.org/10.1061/(asce)0742-597x(2008)24:2(111) .

Pereira, J. (2016). Leveraging chatbots to improve self-guided learning through conversational quizzes. Proceedings of the Fourth International Conference on Technological Ecosystems for Enhancing Multiculturality—TEEM ’16 , Spain, 911–918, ACM. https://doi.org/10.1145/3012430.3012625 .

Pereira, J., Fernández-Raga, M., Osuna-Acedo, S., Roura-Redondo, M., Almazán-López, O., & Buldón-Olalla, A. (2019). Promoting learners’ voice productions using chatbots as a tool for improving the learning process in a MOOC. Technology, Knowledge and Learning, 24 (4), 545–565. https://doi.org/10.1007/s10758-019-09414-9 .

Pham, X. L., Pham, T., Nguyen, Q. M., Nguyen, T. H., & Cao, T. T. H. (2018). Chatbot as an intelligent personal assistant for mobile language learning. Proceedings of the 2018 2nd international conference on education and e-Learning—ICEEL 2018 , Indonesia, 16–21. ACM. https://doi.org/10.1145/3291078.3291115 .

Pintrich, P. R., & de Groot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82 (1), 33–40. https://doi.org/10.1037/0022-0663.82.1.33 .

Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1993). Reliability and predictive validity of the motivated strategies for learning questionnaire (MSLQ). Educational and Psychological Measurement, 53 (3), 801–813. https://doi.org/10.1177/0013164493053003024 .

Rahayu, Y. S., Wibawa, S. C., Yuliani, Y., Ratnasari, E., & Kusumadewi, S. (2018). The development of BOT API social media Telegram about plant hormones using Black Box Testing. IOP Conference Series: Materials Science and Engineering . https://doi.org/10.1088/1757-899X/434/1/012132 .

Rahman, A. M., Al Mamun, A., & Islam, A. (2018). Programming challenges of chatbot: Current and future prospective. 5th IEEE Region 10 Humanitarian Technology Conference 2017 (R10-HTC 2017) , India, 75–78, IEEE. https://doi.org/10.1109/R10-HTC.2017.8288910 .

Ren, R., Castro, J. W., Acuña, S. T., & De Lara, J. (2019). Evaluation techniques for chatbot usability: A systematic mapping study. International Journal of Software Engineering and Knowledge Engineering, 29 (11–12), 1673–1702. https://doi.org/10.1142/S0218194019400163 .

Riel, J. (2020). Essential features and critical issues with educational chatbots: Toward personalized learning via digital agents. In M. Khosrow-Pour (Ed.), Handbook of research on modern educational technologies, applications, and management (pp. 246–262). IGI Global.

Rosenstein, L. D. (2019). Research design and analysis: A primer for the non-statistician . Wiley.

Sandoval, Z. V. (2018). Design and implementation of a chatbot in online higher education settings. Issues in Information Systems, 19 (4), 44–52. https://doi.org/10.48009/4_iis_2018_44-52 .

Sart, G. (2014). The effects of the development of metacognition on project-based learning. Procedia—Social and Behavioral Sciences, 152 , 131–136. https://doi.org/10.1016/j.sbspro.2014.09.169 .

Satow, L. (2017). Chatbots as teaching assistants: Introducing a model for learning facilitation by AI Bots. SAP Community . https://blogs.sap.com/2017/07/12/chatbots-as-teaching-assistants-introducing-a-model-for-learning-facilitation-by-ai-bots/ .

Schlagwein, D., Conboy, K., Feller, J., Leimeister, J. M., & Morgan, L. (2017). “Openness” with and without information technology: A framework and a brief history. Journal of Information Technology, 32 (4), 297–305. https://doi.org/10.1057/s41265-017-0049-3 .

Schmulian, A., & Coetzee, S. A. (2019). The development of Messenger bots for teaching and learning and accounting students’ experience of the use thereof. British Journal of Educational Technology, 50 (5), 2751–2777. https://doi.org/10.1111/bjet.12723 .

Setiaji, H., & Paputungan, I. V. (2018). Design of Telegram Bots for campus information sharing. IOP Conference Series: Materials Science and Engineering, 325 , 1–6. https://doi.org/10.1088/1757-899X/325/1/012005 .

Silva, P.A., Polo, B.J., Crosby, M.E. (2017). Adapting the studio based learning methodology to computer science education. In: Fee S., Holland-Minkley A., Lombardi T. (eds) New directions for computing education (pp. 119–142). Springer. https://doi.org/10.1007/978-3-319-54226-3_8 .

Sinclair, S., & Rockwell, G. (2021). Voyant tools (2.4). https://voyant-tools.org/ .

Sjöström, J., Aghaee, N., Dahlin, M., & Ågerfalk, P. J. (2018). Designing chatbots for higher education practice. Proceedings of the 2018 AIS SIGED International Conference on Information Systems Education and Research .

Smutny, P., & Schreiberova, P. (2020). Chatbots for learning: A review of educational chatbots for the Facebook Messenger. Computers and Education, 151 (February), 103862. https://doi.org/10.1016/j.compedu.2020.103862 .

Sreelakshmi, A. S., Abhinaya, S. B., Nair, A., & Jaya Nirmala, S. (2019). A question answering and quiz generation chatbot for education. Grace Hopper Celebration India (GHCI), 2019 , 1–6. https://doi.org/10.1109/GHCI47972.2019.9071832 .

Stathakarou, N., Nifakos, S., Karlgren, K., Konstantinidis, S. T., Bamidis, P. D., Pattichis, C. S., & Davoody, N. (2020). Students’ perceptions on chatbots’ potential and design characteristics in healthcare education. In J. Mantas, A. Hasman, & M. S. Househ (Eds.), The importance of health informatics in public health during a pandemic (Vol. 272, pp. 209–212). IOS Press. https://doi.org/10.3233/SHTI200531 .

Tamayo, P. A., Herrero, A., Martín, J., Navarro, C., & Tránchez, J. M. (2020). Design of a chatbot as a distance learning assistant. Open Praxis, 12 (1), 145. https://doi.org/10.5944/openpraxis.12.1.1063 .

Tegos, S., Demetriadis, S., Psathas, G. & Tsiatsos T. (2020) A configurable agent to advance peers’ productive dialogue in MOOCs. In: Følstad A. et al. (eds) Chatbot research and design. CONVERSATIONS 2019. Lecture Notes in Computer Science , vol 11970 (pp. 245–259). Springer. https://doi.org/10.1007/978-3-030-39540-7_17 .

Thirumalai, B., Ramanathan, A., Charania, A., & Stump, G. (2019). Designing for technology-enabled reflective practice: Teachers’ voices on participating in a connected learning practice. In R. Setty, R. Iyenger, M. A. Witenstein, E. J. Byker, & H. Kidwai (Eds.), Teaching and teacher education: South Asian perspectives (pp. 243–273). Palgrave Macmillan.

van Knippenberg, D., & Hirst, G. (2020). A motivational lens model of person × situation interactions in employee creativity. Journal of Applied Psychology, 105 (10), 1129–1144. https://doi.org/10.1037/apl0000486 .

Vázquez-Cano, E., Mengual-Andrés, S., & López-Meneses, E. (2021). Chatbot to improve learning punctuation in Spanish and to enhance open and flexible learning environments. International Journal of Educational Technology in Higher Education, 18 (1), 33. https://doi.org/10.1186/s41239-021-00269-8 .

Verleger, M., & Pembridge, J. (2019). A pilot study integrating an AI-driven chatbot in an introductory programming course. Proceedings of the 2018 IEEE Frontiers in Education Conference (FIE) , USA. IEEE. https://doi.org/10.1109/FIE.2018.8659282 .

Walker, C. O., & Greene, B. A. (2009). The relations between student motivational beliefs and cognitive engagement in high school. Journal of Educational Research, 102 (6), 463–472. https://doi.org/10.3200/JOER.102.6.463-472 .

Wang, J., Hwang, G., & Chang, C. (2021). Directions of the 100 most cited chatbot-related human behavior research: A review of academic publications. Computers and Education: Artificial Intelligence, 2 , 1–12. https://doi.org/10.1016/j.caeai.2021.100023 .

Wei, H. C., & Chou, C. (2020). Online learning performance and satisfaction: Do perceptions and readiness matter? Distance Education, 41 (1), 48–69. https://doi.org/10.1080/01587919.2020.1724768 .

Winkler, R., & Söllner, M. (2018). Unleashing the potential of chatbots in education: A state-of-the-art analysis. In Academy of Management Annual Meeting (AOM) . https://www.alexandria.unisg.ch/254848/1/JML_699.pdf .

Wu, E. H. K., Lin, C. H., Ou, Y. Y., Liu, C. Z., Wang, W. K., & Chao, C. Y. (2020). Advantages and constraints of a hybrid model K-12 E-Learning assistant chatbot. IEEE Access, 8 , 77788–77801. https://doi.org/10.1109/ACCESS.2020.2988252 .

Yen, A. M. N. L. (2018). The influence of self-regulation processes on metacognition in a virtual learning environment. Educational Studies, 46 (1), 1–17. https://doi.org/10.1080/03055698.2018.1516628 .

Yilmaz, R. M., & Baydas, O. (2017). An examination of undergraduates’ metacognitive strategies in pre-class asynchronous activity in a flipped classroom. Educational Technology Research and Development, 65 (6), 1547–1567. https://doi.org/10.1007/s11423-017-9534-1 .

Yin, J., Goh, T. T., Yang, B., & Xiaobin, Y. (2021). Conversation technology with micro-learning: The impact of chatbot-based learning on students’ learning motivation and performance. Journal of Educational Computing Research, 59 (1), 154–177. https://doi.org/10.1177/0735633120952067 .

Acknowledgements

Not applicable.

Funding

This study was funded under the Universiti Sains Malaysia Short Term Research Grant 304/PMEDIA/6315219.

Author information

Authors and Affiliations

Centre for Instructional Technology and Multimedia, Universiti Sains Malaysia, Minden, Pulau Pinang, Malaysia

Jeya Amantha Kumar

Contributions

The author read and approved the final manuscript.

Corresponding author

Correspondence to Jeya Amantha Kumar .

Ethics declarations

Competing interests

The author declares that there is no conflict of interest.

Ethical approval and consent to participate

Informed consent was obtained from all participants for being included in the study based on the approval of The Human Research Ethics Committee of Universiti Sains Malaysia (JEPeM) Ref No: USM/JEPeM/18050247.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Kumar, J.A. Educational chatbots for project-based learning: investigating learning outcomes for a team-based design course. Int J Educ Technol High Educ 18 , 65 (2021). https://doi.org/10.1186/s41239-021-00302-w

Received : 02 July 2021

Accepted : 23 September 2021

Published : 15 December 2021

DOI : https://doi.org/10.1186/s41239-021-00302-w

Keywords

  • Design education
  • Project-based learning
  • Collaborative learning
  • Mobile learning

  • Essel, H.B.; Vlachopoulos, D.; Tachie-Menson, A.; Johnson, E.E.; Baah, P.K. The impact of a virtual teaching assistant (chatbot) on students’ learning in Ghanaian higher education. Int. J. Educ. Technol. High. Educ. 2022 , 19 , 28. [ Google Scholar ] [ CrossRef ]
  • Fryer, L.K.; Nakao, K.; Thompson, A. Chatbot learning partners: Connecting learning experiences, interest and competence. Comput. Hum. Behav. 2019 , 93 , 279–289. [ Google Scholar ] [ CrossRef ]
  • Durall Gazulla, E.; Martins, L.; Fernández-Ferrer, M. Designing learning technology collaboratively: Analysis of a chatbot co-design. Educ. Inf. Technol. 2023 , 28 , 109–134. [ Google Scholar ] [ CrossRef ]
  • González, L.A.; Neyem, A.; Contreras-McKay, I.; Molina, D. Improving learning experiences in software engineering capstone courses using artificial intelligence virtual assistants. Comput. Appl. Eng. Educ. 2022 , 30 , 1370–1389. [ Google Scholar ] [ CrossRef ]
  • González-Castro, N.; Muñoz-Merino, P.J.; Alario-Hoyos, C.; Kloos, C.D. Adaptive learning module for a conversational agent to support MOOC learners. Australas. J. Educ. Technol. 2021 , 37 , 24–44. [ Google Scholar ] [ CrossRef ]
  • Han, J.-W.; Park, J.; Lee, H. Analysis of the effect of an artificial intelligence chatbot educational program on non-face-to-face classes: A quasi-experimental study. BMC Med. Educ. 2022 , 22 , 830. [ Google Scholar ] [ CrossRef ]
  • Han, S.; Liu, M.; Pan, Z.; Cai, Y.; Shao, P. Making FAQ Chatbots More Inclusive: An Examination of Non-Native English Users’ Interactions with New Technology in Massive Open Online Courses. Int. J. Artif. Intell. Educ. 2022 , 33 , 752–780. [ Google Scholar ] [ CrossRef ]
  • Haristiani, N.; Rifai, M.M. Chatbot-based application development and implementation as an autonomous language learning medium. Indones. J. Sci. Technol. 2021 , 6 , 561–576. [ Google Scholar ] [ CrossRef ]
  • Hew, K.F.; Huang, W.; Du, J.; Jia, C. Using chatbots to support student goal setting and social presence in fully online activities: Learner engagement and perceptions. J. Comput. High. Educ. 2022 , 35 , 40–68. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Hsu, M.-H.; Chen, P.-S.; Yu, C.-S. Proposing a task-oriented chatbot system for EFL learners speaking practice. Interact. Learn. Environ. 2021 , 1–12. [ Google Scholar ] [ CrossRef ]
  • Hsu, H.-H.; Huang, N.-F. Xiao-Shih: A Self-Enriched Question Answering Bot with Machine Learning on Chinese-Based MOOCs. IEEE Trans. Learn. Technol. 2022 , 15 , 223–237. [ Google Scholar ] [ CrossRef ]
  • Huang, W.; Hew, K.F.; Gonda, D.E. Designing and evaluating three chatbot-enhanced activities for a flipped graduate course. Int. J. Mech. Eng. Robot. Res. 2019 , 8 , 813–818. [ Google Scholar ] [ CrossRef ]
  • Jasin, J.; Ng, H.T.; Atmosukarto, I.; Iyer, P.; Osman, F.; Wong, P.Y.K.; Pua, C.Y.; Cheow, W.S. The implementation of chatbot-mediated immediacy for synchronous communication in an online chemistry course. Educ. Inf. Technol. 2023 , 28 , 10665–10690. [ Google Scholar ] [ CrossRef ]
  • Lee, Y.-F.; Hwang, G.-J.; Chen, P.-Y. Impacts of an AI-based chabot on college students’ after-class review, academic performance, self-efficacy, learning attitude, and motivation. Educ. Technol. Res. Dev. 2022 , 70 , 1843–1865. [ Google Scholar ] [ CrossRef ]
  • Li, K.-C.; Chang, M.; Wu, K.-H. Developing a task-based dialogue system for english language learning. Educ. Sci. 2020 , 10 , 306. [ Google Scholar ] [ CrossRef ]
  • Li, Y.S.; Lam, C.S.N.; See, C. Using a Machine Learning Architecture to Create an AI-Powered Chatbot for Anatomy Education. Med. Sci. Educ. 2021 , 31 , 1729–1730. [ Google Scholar ] [ CrossRef ]
  • Liu, Q.; Huang, J.; Wu, L.; Zhu, K.; Ba, S. CBET: Design and evaluation of a domain-specific chatbot for mobile learning. Univers. Access Inf. Soc. 2020 , 19 , 655–673. [ Google Scholar ] [ CrossRef ]
  • Mendez, S.L.; Johanson, K.; Conley, V.M.; Gosha, K.; Mack, N.; Haynes, C.; Gerhardt, R. Chatbots: A tool to supplement the future faculty mentoring of doctoral engineering students. Int. J. Dr. Stud. 2020 , 15 , 373–392. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Neo, M. The Merlin Project: Malaysian Students’ Acceptance of an AI Chatbot in Their Learning Process. Turk. Online J. Distance Educ. 2022 , 23 , 31–48. [ Google Scholar ] [ CrossRef ]
  • Neo, M.; Lee, C.P.; Tan, H.Y.-J.; Neo, T.K.; Tan, Y.X.; Mahendru, N.; Ismat, Z. Enhancing Students’ Online Learning Experiences with Artificial Intelligence (AI): The MERLIN Project. Int. J. Technol. 2022 , 13 , 1023–1034. [ Google Scholar ] [ CrossRef ]
  • Ong, J.S.H.; Mohan, P.R.; Han, J.Y.; Chew, J.Y.; Fung, F.M. Coding a Telegram Quiz Bot to Aid Learners in Environmental Chemistry. J. Chem. Educ. 2021 , 98 , 2699–2703. [ Google Scholar ] [ CrossRef ]
  • Rodríguez, J.A.; Santana, M.G.; Perera, M.V.A.; Pulido, J.R. Embodied conversational agents: Artificial intelligence for autonomous learning. Pixel-Bit Rev. De Medios Y Educ. 2021 , 62 , 107–144. [ Google Scholar ]
  • Rooein, D.; Bianchini, D.; Leotta, F.; Mecella, M.; Paolini, P.; Pernici, B. aCHAT-WF: Generating conversational agents for teaching business process models. Softw. Syst. Model. 2022 , 21 , 891–914. [ Google Scholar ] [ CrossRef ]
  • Sáiz-Manzanares, M.C.; Marticorena-Sánchez, R.; Martín-Antón, L.J.; González Díez, I.; Almeida, L. Perceived satisfaction of university students with the use of chatbots as a tool for self-regulated learning. Heliyon 2023 , 9 , e12843. [ Google Scholar ] [ CrossRef ]
  • Schmulian, A.; Coetzee, S.A. The development of Messenger bots for teaching and learning and accounting students’ experience of the use thereof. Br. J. Educ. Technol. 2019 , 50 , 2751–2777. [ Google Scholar ] [ CrossRef ]
  • Suárez, A.; Adanero, A.; Díaz-Flores García, V.; Freire, Y.; Algar, J. Using a Virtual Patient via an Artificial Intelligence Chatbot to Develop Dental Students’ Diagnostic Skills. Int. J. Environ. Res. Public Health 2022 , 19 , 8735. [ Google Scholar ] [ CrossRef ]
  • Vázquez-Cano, E.; Mengual-Andrés, S.; López-Meneses, E. Chatbot to improve learning punctuation in Spanish and to enhance open and flexible learning environments. Int. J. Educ. Technol. High. Educ. 2021 , 18 , 33. [ Google Scholar ] [ CrossRef ]
  • Villegas-Ch, W.; Arias-Navarrete, A.; Palacios-Pacheco, X. Proposal of an Architecture for the Integration of a Chatbot with Artificial Intelligence in a Smart Campus for the Improvement of Learning. Sustainability 2020 , 12 , 1500. [ Google Scholar ] [ CrossRef ]
  • Wambsganss, T.; Zierau, N.; Söllner, M.; Käser, T.; Koedinger, K.R.; Leimeister, J.M. Designing Conversational Evaluation Tools. Proc. ACM Hum.-Comput. Interact. 2022 , 6 , 1–27. [ Google Scholar ] [ CrossRef ]
  • Wan Hamzah, W.M.A.F.; Ismail, I.; Yusof, M.K.; Saany, S.I.M.; Yacob, A. Using Learning Analytics to Explore Responses from Student Conversations with Chatbot for Education. Int. J. Eng. Pedagog. 2021 , 11 , 70–84. [ Google Scholar ] [ CrossRef ]
  • Yildiz Durak, H. Conversational agent-based guidance: Examining the effect of chatbot usage frequency and satisfaction on visual design self-efficacy, engagement, satisfaction, and learner autonomy. Educ. Inf. Technol. 2023 , 28 , 471–488. [ Google Scholar ] [ CrossRef ]
  • Yin, J.; Goh, T.-T.; Yang, B.; Xiaobin, Y. Conversation Technology with Micro-Learning: The Impact of Chatbot-Based Learning on Students’ Learning Motivation and Performance. J. Educ. Comput. Res. 2021 , 59 , 154–177. [ Google Scholar ] [ CrossRef ]
  • Bakouan, M.; Kamagate, B.H.; Kone, T.; Oumtanaga, S.; Babri, M. A chatbot for automatic processing of learner concerns in an online learning platform. Int. J. Adv. Comput. Sci. Appl. 2018 , 9 , 168–176. [ Google Scholar ] [ CrossRef ]
  • Briel, A. Toward an eclectic and malleable multiagent educational assistant. Comput. Appl. Eng. Educ. 2022 , 30 , 163–173. [ Google Scholar ] [ CrossRef ]
  • González-González, C.S.; Muñoz-Cruz, V.; Toledo-Delgado, P.A.; Nacimiento-García, E. Personalized Gamification for Learning: A Reactive Chatbot Architecture Proposal. Sensors 2023 , 23 , 545. [ Google Scholar ] [ CrossRef ]
  • Janati, S.E.; Maach, A.; Ghanami, D.E. Adaptive e-learning AI-powered chatbot based on multimedia indexing. Int. J. Adv. Comput. Sci. Appl. 2020 , 11 , 299–308. [ Google Scholar ] [ CrossRef ]
  • Jimenez Flores, V.J.; Jimenez Flores, O.J.; Jimenez Flores, J.C.; Jimenez Castilla, J.U. Performance comparison of natural language understanding engines in the educational domain. Int. J. Adv. Comput. Sci. Appl. 2020 , 11 , 753–757. [ Google Scholar ]
  • Karra, R.; Lasfar, A. Impact of Data Quality on Question Answering System Performances. Intell. Autom. Soft Comput. 2023 , 35 , 335–349. [ Google Scholar ] [ CrossRef ]
  • Kharis, M.; Schön, S.; Hidayat, E.; Ardiansyah, R.; Ebner, M. Mobile Gramabot: Development of a Chatbot App for Interactive German Grammar Learning. Int. J. Emerg. Technol. Learn. 2022 , 17 , 52–63. [ Google Scholar ] [ CrossRef ]
  • Kohnke, L. A Pedagogical Chatbot: A Supplemental Language Learning Tool. RELC J. 2022 , 1–11. [ Google Scholar ] [ CrossRef ]
  • Lippert, A.; Shubeck, K.; Morgan, B.; Hampton, A.; Graesser, A. Multiple Agent Designs in Conversational Intelligent Tutoring Systems. Technol. Knowl. Learn. 2020 , 25 , 443–463. [ Google Scholar ] [ CrossRef ]
  • Mateos-Sanchez, M.; Melo, A.C.; Blanco, L.S.; García, A.M.F. Chatbot, as Educational and Inclusive Tool for People with Intellectual Disabilities. Sustainability 2022 , 14 , 1520. [ Google Scholar ] [ CrossRef ]
  • Memon, Z.; Aghian, H.; Sarfraz, M.S.; Hussain Jalbani, A.; Oskouei, R.J.; Jalbani, K.B.; Hussain Jalbani, G. Framework for Educational Domain-Based Multichatbot Communication System. Sci. Program. 2021 , 2021 , 5518309. [ Google Scholar ] [ CrossRef ]
  • Nguyen, H.D.; Tran, D.A.; Do, H.P.; Pham, V.T. Design an Intelligent System to automatically Tutor the Method for Solving Problems. Int. J. Integr. Eng. 2020 , 12 , 211–223. [ Google Scholar ] [ CrossRef ]
  • Pashev, G.; Gaftandzhieva, S. Facebook Integrated Chatbot for Bulgarian Language Aiding Learning Content Delivery. TEM J. 2021 , 10 , 1011–1015. [ Google Scholar ] [ CrossRef ]
  • Schmitt, A.; Wambsganss, T.; Leimeister, J.M. Conversational Agents for Information Retrieval in the Education Domain: A User-Centered Design Investigation. Proc. ACM Hum.-Comput. Interact. 2022 , 6 , 1–22. [ Google Scholar ] [ CrossRef ]


Reference | Type | Year | Number of Primary Studies/Number of Data Sources | Method | Subject
[ ] | SLR | 2021 | 29/1 | Coding scheme based on Chang & Hwang (2019) and Hsu et al. (2012) | Learning domains of ECAs; Learning strategies used by ECAs;
Research design of studies in the field;
Analysis methods utilized in relevant studies;
Nationalities of authors and journals publishing relevant studies;
Productive authors in the field
[ ] | SLR | 2023 | 36/3 | Guidelines based on Keele et al. (2007) | Fields in which ECAs are used;
Platforms on which the ECAs operate;
Roles that ECAs play when interacting with students;
Interaction styles that are supported by the ECAs;
Principles that are used to guide the design of ECAs;
Empirical evidence that exists to support the capability of using ECAs as teaching assistants for students;
Challenges of applying and using ECAs in the classroom
[ ] | SLR | 2021 | 53/6 | Methods based on Kitchenham et al. (2007), Wohlin et al. (2012) and Aznoli & Navimipour (2017) | The most recent research status or profile for ECA applications in the education domain;
The primary benefits of ECA applications in education;
The challenges faced in the implementation of an ECA system in education;
The potential future areas of education that could benefit from the use of ECAs
[ ] | SLR | 2020 | 80/8 | PRISMA framework | The different types of educational and/or educational environment chatbots currently in use;
The way ECAs affect student learning or service improvement;
The type of technology ECAs use and the learning result that is obtained from each of them;
The cases in which a chatbot helps learning under conditions similar to those of a human tutor;
The possibility of evaluating the quality of chatbots and the techniques that exist for that
[ ] | SLR | 2020 | 47 CAs/1 | Undefined | Qualitative assessment of ECAs that operate on Meta Messenger
[ ] | SLR | 2021 | 74/4 | PRISMA framework | The objectives for implementing chatbots in education;
The pedagogical roles of chatbots;
The application scenarios that have been used to mentor students;
The extent to which chatbots are adaptable to students’ personal needs;
The domains in which chatbots have been applied so far
Inclusion Criteria | Exclusion Criteria
IC1: The examined chatbot application was used in teaching a subject to students and the article contains information about designing, integrating or evaluating it, as well as mentioning the tools or environments used to achieve that. | EC1: The chatbot application was designed for the training of specific target groups other than students or learners of an educational institution
IC2: The publication year of the article is between 2018 and 2023
IC3: The document type is a journal article | EC2: Articles that focus mainly on the results for the learners and do not describe the role of the CA and how it contributes to those results
IC4: The retrieved article is written in English
Educational Grade Level | References
K-12 education (14)[ , , , , , , , , , , , , , ]
Tertiary education (43)[ , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ]
Unspecified (16)[ , , , , , , , , , , , , , , , ]
Category | References
Linear (28)
[ , , , , , , , , , , , , , , , , , , , , , , , , , , , ]
Iterative (3)
[ , , ]
Kind of Information | References
A suitable methodology to develop an ECA.[ ]
Communication methods and strategies that are going to be used to form the function of the educational tutor.[ ]
Design requirements to develop an ECA.[ ]
Design principles to develop an ECA.[ ]
Empirical information by examining similar applications.[ , , , ]
Learning methods that are going to be used to form the function of the educational tutor.[ ]
Mechanism prototypes that will be used to develop the educational agent.[ ]
Ready-to-use chatbots for the learning process.[ , , ]
Suitable environments and tools to develop an ECA.[ , , , , ]
Technical requirements to develop an ECA.[ , ]
Theories relevant to the development of an ECA.[ ]
Requirements | References
Users’ needs and expectations.[ , , , , , , , , ]
Technical requirements.[ ]
Collection of students’ common questions and most searched topics to be used in the educational material.[ , , , , , , , ]
Objects to Be Defined | References
An application flow to show the function of the ECA.[ ]
Communication channels the ECA is going to be accessible from. Common to all
Education plan.[ ]
Learning material and items the chatbot is going to use. Common to all
Learning methods and techniques that will be used to develop the agent.[ , ]
Learning objectives, tasks and goals.[ , ]
Student personas and conversational standards of the student–agent interaction.[ ]
Teaching and learning processes.[ , , , ]
The conversational flow between the student and the chatbot.[ , ]
The design principles of the chatbot.[ , , ]
The processing mechanisms of the chatbots that are going to be used.[ , ]
The purpose of the educational CA.[ , , , ]
Tools and environments or the ready solutions of already built CAs that are going to be used. Common to all
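The design-stage objects listed above can be gathered into a single machine-readable specification. The sketch below is an illustrative Python dataclass (the class, field names and `validate` helper are our own, not taken from any reviewed study); `validate` flags the three "common to all" items when they are missing.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the design-stage artifacts from the table above,
# captured as one specification object for an educational conversational
# agent (ECA). All names here are illustrative.
@dataclass
class ECADesignSpec:
    purpose: str                                            # purpose of the educational CA
    channels: list = field(default_factory=list)            # communication channels (common to all)
    learning_material: list = field(default_factory=list)   # material the chatbot will use (common to all)
    learning_objectives: list = field(default_factory=list) # objectives, tasks and goals
    conversational_flow: dict = field(default_factory=dict) # state -> possible next states
    design_principles: list = field(default_factory=list)

    def validate(self) -> list:
        """Return the names of the 'common to all' items that are missing."""
        missing = []
        if not self.purpose:
            missing.append("purpose")
        if not self.channels:
            missing.append("channels")
        if not self.learning_material:
            missing.append("learning_material")
        return missing
```

For example, `ECADesignSpec(purpose="grammar tutor", channels=["Telegram"], learning_material=["unit 1"]).validate()` returns an empty list, while an empty spec reports all three missing items.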
Usage Roles of ECAs | References
Course evaluator[ ]
Learner taught by the student[ , ]
Learning guide in a gamification environment[ ]
Self-Evaluator (learning partner/educational tutor)[ , , , , , , , , , , , , , , , , , , , , , , , ]
Storytelling conversational assistant[ ]
Student assistant (provider of supportive educational material)[ , , , , , ]
Question solver[ , ]
Proposed Steps | References
Evaluating students’ questions to measure their complexity and bias rate.[ ]
Shaping of the final learning material that is going to be used by the educational tutor. Common to all
Proposed Steps | References
Evaluating students’ competence in answering questions to modify the learning content accordingly.[ ]
Enriching the educational material with various forms of content apart from text messages.[ , , , ]
Studying the curriculum of the educational institute and adapting the design principles of the ECA to it.[ , , ]
Studying the curriculum of the educational institute and modifying the function of the ECA to it.[ , ]
Adapting the function of the ECA to the teaching process.[ ]
Getting domain experts’ opinions to modify the function and the learning material used by the ECA.[ , ]
Design Direction | Design Suggestion | References
Adaptation of the ECA’s function to students’ needs | Adaptation of the ECA’s function to the emotional needs of the students[ ]
Alignment of the ECA’s function with students’ learning needs[ , , , , ]
Adjustment of the ECA’s function to the curriculum of the students[ , ]
Modification of the ECA’s function to match user expectations[ , ]
Definition of the ECA’s vocabulary and expression style to be suitable with the students’ linguistic capabilities[ ]
Construction of the ECA in order to be an inclusive educational tool (suitable for every learner)[ ]
Accessibility | Alignment of the ECA’s function with the selected communication channels[ ]
The ECA should be accessible from various communication channels[ , ]
Guaranteed chatbot availability regardless of the external conditions[ , ]
Conversational traits | Equipping the ECA with many variations of phrases with the same meaning[ , , ]
Acceptance of oral messages as input[ , ]
The ECA should address students by their name to provide personalized conversations[ , ]
Avoidance of spelling errors[ ]
Capability to discuss a wide range of topics, including casual, non-educational subjects[ , , , ]
The ECA should collect more information when it cannot understand the user, by discussing with them to identify their intent[ , ]
The ECA should let the user define the conversational flow when the CA cannot respond[ ]
Provision of messages about the limitation of the ECA when it cannot respond[ ]
Provision of motivational comments and rewarding messages to students[ , , , , , , ]
The ECA should produce quick responses[ , ]
Redirection of students to an educator’s communication channel when the ECA cannot respond[ ]
Wise usage of humor in the user interaction[ ]
Usage of easy-to-understand language in the response[ ]
Utilization of human-like conversational characteristics such as emoticons, avatar representations and greeting messages[ , , , , ]
Utilization of previous students’ responses to improve its conversational ability and provide personalized communication[ , , , , ]
Usage of button-formed topic suggestion for quicker interaction[ ]
Usage of “start” and “restart” interaction buttons [ ]
Design process general suggestions | Engage every possible stakeholder of the development process to gain better results[ ]
Make the database of the system expandable[ ]
Handling of the educational material | Explanation of the educational material from various perspectives[ , , ]
The ECA should predict different subjects the students did not comprehend and provide relevant supportive material[ ]
Suggestion of external learning sources to students when it cannot respond[ ]
Proposition of learning topics similar to the current one to help students learn on their own[ ]
The ECA should provide educational material in small segments with specific content[ , , ]
Provision of educational material in various forms apart from text message[ , , , , , , , , ]
Navigation buttons between the segments of the presented educational material[ ]
Oral narration to accompany the offered learning material[ ]
Handling of quizzes, tests or self-evaluation material[ , , , , , , , , , , , , , , , , , , , , , , , ]
Recommendation of suitable practice exercises to the students[ ]
Instruction provision | The ECA should provide usage instructions through the functional environment of the ECA[ , ]
Provided learning experience | Integration of other technologies such as AR to provide a better user experience[ ]
The ECA should provide feedback to students[ , , , , ]
The ECA should provide personalized learning experience[ ]
Use of gamification elements[ ]
Question handling and control by the students | Addition of buttons so students can handle the questions they cannot answer[ , ]
The ECA should allow students to trace back to previous exercises and definitions [ ]
Provision of hints to students when they cannot answer a question[ ]
Use of button-formed reappearance of wrongly answered questions so that they are easier to answer[ ]
Regulations for the function of the system | Alignment of the ECA’s function with the ethics policies and rules for the protection of the user data[ , , , , , ]
Adjustment of the ECA’s function to the policies of the educational institution [ , ]
Students’ notifications | The ECA should provide updates to students for important course events such as deadlines[ ]
Traits of the provided learning activities | Provision of challenging and interesting student learning activities[ , , ]
The ECA should provide collaborative learning activities[ ]
Utilization of competitive learning activities[ , ]
Teacher support | Alignment of the ECA’s function with the form of the teaching material[ ]
Adjustment of the ECA’s function to the teaching style of the educator[ ]
The ECA should provide goal-setting possibilities to the teachers[ ]
Tutoring approach | Alignment of the ECA’s function with specific learning theories[ , ]
Adjustment of the ECA’s function to specific motivational framework[ , ]
Alignment of the ECA’s function with the learning purpose[ ]
Design of the ECA as a student that learns from the students[ ]
The ECA should utilize predefined learning paths[ ]
Usage of students’ previous knowledge and skills to help them learn new information[ ]
Utilization of learning motives such as students’ grades to increase students’ engagement willingness[ ]
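Several of the conversational-trait suggestions in the table above — matching many phrase variations to one intent, asking a clarifying question when the user is not understood, and redirecting to an educator after repeated failures — can be sketched as a minimal fallback policy. The intents, phrases and the two-failure threshold below are invented for illustration, not taken from any reviewed study:

```python
# Illustrative fallback policy for an educational chatbot.
# INTENTS maps each intent to several phrase variations with the same meaning.
INTENTS = {
    "deadline": ["when is the deadline", "due date", "when is it due"],
    "greeting": ["hi", "hello", "good morning"],
}

def match_intent(message: str):
    """Return the first intent whose phrase variations appear in the message."""
    text = message.lower()
    for intent, phrases in INTENTS.items():
        if any(p in text for p in phrases):
            return intent
    return None

def respond(message: str, failed_turns: int = 0):
    """Return (reply, updated_failed_turns) for one student turn."""
    intent = match_intent(message)
    if intent is not None:
        return f"intent:{intent}", 0
    if failed_turns >= 2:
        # Limitation message plus redirection to an educator's channel.
        return "I can't help with that - I've notified your teacher.", 0
    # Collect more information to identify the user's intent.
    return "Sorry, I didn't get that. Could you rephrase?", failed_turns + 1
```

In use, `respond("When is the deadline?")` matches the deadline intent directly, while two consecutive unmatched messages trigger the clarification prompt and a third triggers the educator redirect.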
Proposed Steps | References
Using previously collected or preconstructed material to train the chatbot [ , ]
Training Method | References
Datasets of the development platforms[ , ]
Existing corpora of data[ ]
Educational material (predefined sets of questions that students have done or formed by domain experts or educators)[ , , , , , , , ]
Machine learning techniques[ , ]
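As a minimal illustration of training a chatbot on educational material — predefined question sets formed by educators, as in the table above — the sketch below uses fuzzy string matching from Python's standard library as a simple stand-in for the machine-learning techniques some studies employ. The FAQ pairs and the 0.6 similarity cutoff are invented:

```python
import difflib

# Hypothetical "training data": question-answer pairs prepared by educators.
FAQ = {
    "what is a variable": "A variable is a named storage location for a value.",
    "what is a loop": "A loop repeats a block of statements.",
}

def answer(question: str, cutoff: float = 0.6):
    """Return the answer for the closest known question, or None below cutoff."""
    normalized = question.lower().strip("?! ")
    match = difflib.get_close_matches(normalized, FAQ.keys(), n=1, cutoff=cutoff)
    return FAQ[match[0]] if match else None
```

A query close to a known question retrieves its answer; an unrelated query falls below the cutoff and returns `None`, which a real system would route to a fallback.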
Proposed Steps | References
Trying a pilot application with a few students, teachers or domain experts to evaluate the first function of the ECA.[ , , , , , , , , , ]
Applying modifications based on the testing results.
Testing Method | References
Domain expert or educator testing[ , ]
Student testing[ , , , , , , ]
Testing using performance metrics[ , ]
Proposed Steps | References
Providing guidance to the students on how to use the chatbot. [ , , , , , , , ]
Motivating students to use the agent.[ , , ]
Proposed Steps | References
Evaluating the chatbot based on system analytics and user evaluations. Common to all
Restarting the procedure from the design stage and using the evaluation data to improve the agent.
Evaluation Instruments | References
Interviews[ , , , , , , , , , , ]
Learning and interaction analytics[ , , , , , , , , , , , , , , , ]
Questionnaires[ , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ]
Student performance[ , , , , , , ]
Technical performance metrics[ , , , , , , ]
Evaluation Method | References
Comparison of the current ECA with other ECAs [ , ]
Functional assessments by domain experts [ , ]
Student usage evaluation. Common to all
Usability evaluation[ , , , , ]
Workshops for users or stakeholders[ ]
Evaluation Category | Evaluation Metric | References
Interaction metrics | Acceptance of chat messages[ ]
Acceptance or indifference to the agent’s feedback[ , ]
Average duration of the interaction between the user and the ECA[ , ]
Capability of answering user questions and providing a pleasant interaction[ ]
Capability of conducting a personalized conversation[ ]
Capability of providing a human-like interaction and keeping a conversation going[ ]
Capability of understanding and responding to the user[ , , , , , ]
Capability of understanding different uses of the language[ ]
Periodicity of chat messages[ ]
Quality and accuracy of responses[ , ]
The average number of words in the messages written by the students[ ]
The number of propositions that were utilized by the learners[ ]
The total duration of students’ time spent interacting with the ECA [ , , ]
The total number of buttons that were pressed by the learners[ ]
The total number of propositions the ECA offered to the learners.[ ]
The total number of users that utilized the ECA[ ]
The total number of words that were produced by the ECA[ ]
The total number of words written by the students[ ]
The number of user inputs that were formed using natural language and were understood by the chatbot[ ]
Total number of interactions between the student and the ECA[ , ]
Total number of messages between the student and the ECA[ ]
Support and scaffolding of the educational process | Capability of supporting the achievement of students’ learning goals and tasks[ , , ]
Fulfillment of the initial purpose of the agent[ ]
Rate of irrelevant (in an educational context) questions asked by the students[ ]
Students’ rate of correct answers (to questions posed by the chatbot)[ ]
The quality of the educational material suggestions made by the ECA[ , , , , ]
Technical suitability and performance | Compatibility with other software[ ]
Maintenance needs[ ]
User experience | Students’ self-efficacy and learning autonomy[ , ]
Students’ workload[ ]
Usability[ , , , , ]
User motivation[ , ]
User satisfaction[ , , , ]
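A few of the interaction metrics in the table above — total number of messages, total interaction duration, and the average number of words per student message — can be computed directly from a chat log. The log format, timestamps and messages below are invented for illustration:

```python
# Toy chat log: timestamps in seconds since the start of the session.
log = [
    {"t": 0,  "sender": "student", "text": "hi there"},
    {"t": 5,  "sender": "bot",     "text": "Hello! How can I help?"},
    {"t": 30, "sender": "student", "text": "explain loops please"},
    {"t": 34, "sender": "bot",     "text": "A loop repeats statements."},
]

student_msgs = [m for m in log if m["sender"] == "student"]

total_messages = len(log)                    # total messages between student and ECA
total_duration = log[-1]["t"] - log[0]["t"]  # total interaction time in seconds
avg_words_per_student_msg = (
    sum(len(m["text"].split()) for m in student_msgs) / len(student_msgs)
)
```

On this toy log the script yields 4 total messages, a 34-second interaction, and an average of 2.5 words per student message.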
Suggestion | References
Evaluation based on the initial purpose of the ECA[ ]
Utilization of specific evaluation plan[ , ]
Usage of progress bar and “skip buttons” in the evaluation form[ ]

Share and Cite

Ramandanis, D.; Xinogalos, S. Designing a Chatbot for Contemporary Education: A Systematic Literature Review. Information 2023 , 14 , 503. https://doi.org/10.3390/info14090503



A framework to implement AI-integrated chatbot in educational institutes

  • Babpu Debnath Middle East College
  • Aparna Agarwal Middle East College

The purpose of this paper is to explore the usefulness of chatbots in educational institutions such as schools and colleges, and to propose a chatbot development plan that meets their needs. Chatbots are usually built for one specific purpose, for example, to answer general queries prospective students might have regarding admission. This paper aims to provide an artificial intelligence (AI)-integrated chatbot framework that can support the development of a multi-use chatbot. The study is based largely on qualitative data collected from case studies and journal articles. Primary data was also collected through interviews and questionnaires with relevant staff and students at a college, in this case Middle East College. Integrating AI into the chatbot, making it self-reliant, intelligent and able to learn from user interaction, is necessary for it to deal with multiple fields. This requires complex algorithms, database management and extensive labor, making it very costly. However, once developed, this single chatbot could greatly help students, faculty and other staff, not just as an assistant answering frequently asked questions, but also in learning and teaching. The chatbot can be integrated with a mobile app, making it part of daily life. Due to its complexity, the chatbot would first be developed for use in one field and then gradually expanded to others. A chatbot built for multiple purposes certainly involves more complexity than a single-purpose chatbot. That being said, having the software developed and tested in real life would have helped to better understand its flexibility and functionality.
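The multi-use idea described in the abstract — one chatbot that serves several fields and is expanded gradually — can be sketched as a simple router that dispatches each query to a field-specific handler. The handlers, keywords and replies below are invented for illustration and are not part of the proposed framework:

```python
# Hypothetical field-specific handlers; new fields are added by registering
# another handler, matching the gradual expansion described in the abstract.
def admissions_handler(query: str) -> str:
    return "Admissions office: applications close in May."

def library_handler(query: str) -> str:
    return "The library is open 8am-10pm."

ROUTES = {
    "admission": admissions_handler,
    "apply": admissions_handler,
    "library": library_handler,
}

def route(query: str) -> str:
    """Dispatch the query to the first handler whose keyword matches."""
    text = query.lower()
    for keyword, handler in ROUTES.items():
        if keyword in text:
            return handler(query)
    return "Sorry, that field isn't supported yet."
```

Queries about admissions or the library reach their handlers; anything else falls through to a not-yet-supported message, which is where the next field's handler would be plugged in.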



Copyright (c) 2020 SHAIK Mazhar Hussain

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Copyright holder(s) granted JSR a perpetual, non-exclusive license to distribute & display this article.


Title: Developing Effective Educational Chatbots with ChatGPT Prompts: Insights from Preliminary Tests in a Case Study on Social Media Literacy (with Appendix)

Abstract: Educational chatbots come with a promise of interactive and personalized learning experiences, yet their development has been limited by the restricted free interaction capabilities of available platforms and the difficulty of encoding knowledge in a suitable format. Recent advances in language learning models with zero-shot learning capabilities, such as ChatGPT, suggest a new possibility for developing educational chatbots using a prompt-based approach. We present a case study with a simple system that enables mixed-turn chatbot interactions and discuss the insights and preliminary guidelines obtained from initial tests. We examine ChatGPT's ability to pursue multiple interconnected learning objectives, adapt the educational activity to users' characteristics, such as culture, age, and level of education, and its ability to use diverse educational strategies and conversational styles. Although the results are encouraging, challenges are posed by the limited history maintained for the conversation and the highly structured form of responses by ChatGPT, as well as their variability, which can lead to an unexpected switch of the chatbot's role from a teacher to a therapist. We provide some initial guidelines to address these issues and to facilitate the development of effective educational chatbots.
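The prompt-based approach the abstract describes can be sketched as follows. The objective strings, learner fields, and the stubbed `call_llm` function are hypothetical, not the authors' system; the point is that objectives, learner characteristics, and role constraints are encoded in the system prompt rather than in a hand-built dialogue tree:

```python
# Sketch of a prompt-based educational chatbot: learning objectives, learner
# characteristics, and a role constraint are encoded in the system prompt.
# All names and the call_llm stub are illustrative assumptions.

def build_system_prompt(objectives, learner):
    lines = ["You are a tutor on social media literacy."]
    lines += [f"Learning objective: {o}" for o in objectives]
    lines.append(f"Adapt tone and examples to a learner aged {learner['age']} "
                 f"from {learner['culture']} with {learner['education']} education.")
    # Guard against the role drift the abstract mentions (teacher -> therapist).
    lines.append("Stay in the teacher role; do not act as a therapist.")
    return "\n".join(lines)

def call_llm(system_prompt, history, user_msg):
    # Placeholder for a real chat-model API call; returns a canned reply here.
    return "stub reply"
```

In a real system `call_llm` would pass the system prompt plus the (length-limited) conversation history to a chat model on each turn, which is where the paper's concern about limited conversation history comes in.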
Comments: Poster version accepted at the 31st International Conference on Computers in Education (ICCE)
Subjects: Human-Computer Interaction (cs.HC); Artificial Intelligence (cs.AI); Computers and Society (cs.CY)


Designing an Educational Chatbot: A Case Study of CikguAIBot

Nurul Amelina Nasharuddin, N. Sharef, E. Mansor, et al. · Jun 15, 2021


2021 Fifth International Conference on Information Retrieval and Knowledge Management (CAMP)

Key takeaway

CikguAIBot, a chatbot application for teaching AI in the Malay language, achieved its objectives and is expected to foster successful learning in Malaysia's education system.

This research aims to design a chatbot application to teach Artificial Intelligence (AI) in the Malay language. CikguAIBot offers learners the possibility to learn and interact with an agent without the need for a teacher to be around. The development of CikguAIBot is based on the RAD model with the involvement of a number of experts and real users. The main focus of this paper is on the content and flow design of the chatbot so that its objectives are achieved. Results from the expert review sessions are reported, and a detailed evaluation strategy with students is also included, although the evaluation session itself is planned as future work. This research is expected to foster the usage of chatbot technology in supporting successful learning in Malaysia’s education system.

Overthink Group

10 Case Studies on Chatbots

by Ryan Nelson | Nov 7, 2017 | Case study | 0 comments


No customer service rep wants to answer the same question a hundred times a day. No sales rep wants to talk to people who aren’t going to buy. And if you’re leading an organization, you can’t afford to let either of those scenarios be the norm.

Chatbots (more affectionately known as virtual assistants) provide a solution to both of these problems. Their infinite capacity helps free up your employees and scale your organization’s efforts. Whether you use chatbots for customer service, sales, or something else, their artificial intelligence ensures that your human resources are only used when they’re needed, and that your organization communicates with the most people possible.

But the fear many organizations have is that chatbots are heavy on the artificial and light on the intelligence. Few things are more infuriating when you need help than having to repeatedly rephrase your question or jump through hoops to talk to a real person. Most of us prefer talking to humans, and that’s OK. That’s why chatbots are most-suited for highly specialized tasks.

The best chatbots interact with more people faster than humans will ever be able to. The trick is knowing when and how to use them. In many cases, you’ll find that chatbots are basically a more informal way for people to navigate your website.

To help you see if there are opportunities for your organization to use chatbots, we found 10 case studies of companies that used them successfully. We’ll show you what they did, how they did it, and where you can go to see the full case study.

Some of these organizations started with live chat systems before switching to chatbots. Some used chatbots conservatively, and others used them for everything.

Check out these 10 case studies on chatbots.

1. Amtrak: 5 million questions answered by chatbots annually

Chatbot system: Next IT
Industry: Public transportation
Key stats:

  • 800% return on investment.
  • Increased bookings by 25%.
  • Saved $1,000,000 in customer service expenses in a single year.
  • Over 5,000,000 questions answered every year.
  • Bookings through chatbots generate 30% more revenue.

Major takeaways:

  • Chatbots with advanced AI provide site visitors with a “self-service” option.

Where the study came from: Next IT shared this case study of Amtrak’s experience with “Julie”, which began in 2012, on their website.

Amtrak is the largest organization you’ll find in our list of case studies. They have 20,000 employees and serve 30 million passengers per year. At the time Next IT published this case study, Amtrak.com was getting 375,000 visitors every day.

Using Next IT’s advanced AI chat platform, they created “Ask Julie” to help visitors find what they needed without having to call or email customer service.

Here’s what Next IT says she’s capable of:

“Travelers can book rail travel by simply stating where and when they’d like to travel. Julie assists them by pre-filling forms on Amtrak’s scheduling tool and providing guidance through the rest of the booking process. And, of course, she’s easily capable of providing information on what items can be carried on trains or helping make hotel and rental-car reservations.”

Instead of making a phone call or waiting for customer service to email them back, more and more visitors are turning to Julie. In fact, Next IT reported a 50% growth in Julie’s usage year over year.

Julie “was designed to function like Amtrak’s best customer service representative,” and with 5 million answered questions per year, it’s hard to argue that she isn’t their best customer service rep.

Not to mention, when Julie answers questions, she tacks on subtle upsells like these:

[Image: Ask Julie chatbot. Source: Next IT]

So in addition to answering more questions and increasing the number of bookings, Julie actually increased the value of bookings. Bookings made through Julie resulted in an average of 30% more revenue than bookings made through other means.

Clearly, the self-serve model is working for Amtrak. What’s interesting about Julie is that despite the smiling face, you know you’re talking to a robot. It doesn’t feel like AI that they’re trying to pass off as a real person. It’s almost like she’s a more advanced search feature of the website. When visitors ask questions, she pulls in only the relevant information, and it’s all contextualized to fit their specific question.

Maybe it’s just me, but that could be the difference between a helpful tool and a frustrating conversation.

2. Anymail finder: 90% of “big customers” chat before buying

Chatbot system: Intercom
Industry: Email verification software (SaaS)
Key stats:

  • 1 in 3 buyers used the chat system before making a purchase.
  • 9 out of 10 “big buyers” used the chat system before making a purchase.
  • Estimated 60% of revenue comes from chatbots.
  • Average response time was three minutes in their first 30 days of using Intercom.
  • Chatbots allowed a two-person team to stay on top of support and sales.
  • Popular customer questions provided content ideas, and eventually prewritten responses.

Where the study came from: Pardeep Kullar published this case study on the Upscope blog in 2017.

As a two-person marketing startup, Anymail finder was stretched thin between sales, marketing, and support. They were answering the same few questions over and over via email.

Intercom’s operator bot helped this two-person team look and feel like they had a full-fledged support department.

Pardeep Kullar of Anymail finder says that the same handful of questions kept popping up in the chat window. They were usually questions like “how are you different from your competitor?” or “how do I upload this file?”

So Pardeep and his colleague wrote detailed articles that answered these popular questions and any related ones, then incorporated the articles into readymade responses and automated messages. Website visitors encountered one of 10 automated chat messages, depending on the page they arrived at.

[Image: Anymail finder Intercom chatbot. Source: Anymail finder]

It’s like putting multiple fishing lines in the water at once, waiting for potential customers or users to bite. When a visitor replied to an automated message, employees got a push notification so they could promptly respond to every inquiry. Anymail finder’s prewritten responses to popular questions let them reply to some inquiries within seconds.
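The page-targeting described above amounts to a simple mapping from landing page to opening message. A minimal sketch, with made-up URLs and copy rather than Anymail finder's real configuration:

```python
# Sketch of page-targeted automated chat messages: pick the opening message
# from the page the visitor lands on. Paths and copy are illustrative.

PAGE_MESSAGES = {
    "/pricing": "Questions about plans? Ask away.",
    "/docs/upload": "Stuck uploading a file? We can help.",
}
DEFAULT_MESSAGE = "Hi! How can we help?"

def opening_message(path: str) -> str:
    # Longest-prefix match so /docs/upload/csv still hits the /docs/upload rule.
    matches = [p for p in PAGE_MESSAGES if path.startswith(p)]
    return PAGE_MESSAGES[max(matches, key=len)] if matches else DEFAULT_MESSAGE
```

Tools like Intercom let you express these rules in their dashboard instead of code, but the underlying shape is the same: page context in, targeted opener out.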

Intercom’s messaging metrics let Anymail finder gauge which automated messages were producing the best results:

[Image: Intercom chatbot stats. Source: Upscope]

One of the primary benefits of chatbot services is that they can answer most questions customers have and qualify your leads without eating up valuable time from your customer service or sales staff.

But Intercom is pretty anti-AI, and their chatbots serve a more limited role. For Anymail finder, real people were waiting behind every automated message, but chatbots still helped them provide superior customer service with a limited team.

3. RapidMiner: replaced all lead capture forms with chatbots

Chatbot system: Drift
Industry: Data science software
Key stats:

  • 4,000 leads generated by chatbots
  • 25% of sales pipeline was influenced by chatbots
  • Chatbots can bypass lengthy lead generation campaigns

Where the study came from: Drift published this case study on their website.

RapidMiner went all-in—they replaced every lead capture form on their site with a chatbot. (Even for their whitepapers!) RapidMiner realized that automated conversations could filter and qualify leads in minutes, whereas a sequential email campaign could take weeks.

Chatbots let them circumvent this messy process and direct the best leads straight to sales:

[Image: lead nurture campaign sequence. Source: Drift]

For CMO Tom Wentworth, the change was about understanding why people came to RapidMiner’s site:

“People who come to our website aren’t coming there because they want to surf our site, they’re coming there because they have a specific problem, whether it’s a question about our product or what it does, whether it’s some technical support they need, or whether it’s they want to talk to someone in sales.”

Chatbots made it possible to address the reasons people came to RapidMiner.com without bombarding the sales team with unqualified leads. In an article in the Harvard Business Review, RapidMiner shared:

“The Drift bot now conducts about a thousand chats per month. It resolves about two-thirds of customer inquiries; those that it cannot, it routes to humans.”

Wentworth went on to say, “It’s the most productive thing I’m doing in marketing.”

Drift’s Leadbot asked visitors the same questions salespeople would’ve asked—and it never sleeps, so leads trickle in 24/7. So far, it’s brought in over 4,000 leads, influenced 25% of their open sales pipeline, and accounted for 10% of all new sales.

This case study provides some helpful insights into the differences between a chatbot and a live chat service, but it’s worth noting: ditching forms altogether is a pretty drastic step.

If you’re using blogging to increase your traffic , gated whitepapers and sequential email campaigns help you build an audience and create long-term relationships. A chatbot on your blog is bound to convert some visitors into leads, but this probably isn’t going to grow an email list you can reach out to again and again:

[Image: RapidMiner chatbot. Source: RapidMiner]

4. MongoDB: increased new leads by 70% in three months

Chatbot system: Drift
Industry: Database/development platform
Key stats:

  • Increased net new leads by 70%
  • Increased total messaging response by 100%
  • Chatbots are more scalable than live chat services.
  • Chatbots can help your customer service and sales teams by scheduling meetings and screening inquiries

MongoDB was having a lot of success with live chat, but like all humans, their salespeople were limited by things like “time” and “space.” They couldn’t significantly increase the number of conversations they were having without significantly increasing the size of their team.

As their director of demand generation puts it:

“We needed a messaging tool that could scale with our business and increase the volume of our conversations, leading to the increase of our pipeline and Sales Accepted Leads (SALs)—the metrics my Demand Generation team are measured on.”

Like RapidMiner, MongoDB let Drift’s Leadbot ensure that their sales reps only talked to the people who were most likely to buy. And with Drift’s meeting scheduler, people didn’t have to play phone tag to make an appointment:

[Image: chatbot scheduling a meeting]

For MongoDB, automating lead-qualifying conversations allowed them to have more conversations, and automating the scheduling process let them turn more of those conversations into leads.

5. Leadpages: welcome messages led to 267% more conversations

Chatbot system: Drift
Industry: Drag-and-drop landing page creator
Key stats:

  • Welcome messages led to a 267% increase in chat conversations.
  • Website conversion rate increased by 36%.
  • Targeted messages had an open rate of 30% and a 21% clickthrough rate.
  • Welcome messages help chatbots and live chat get more engagement.
  • In the right place at the right time, relevant, automated messages can be highly effective.

Where the study came from: Drift’s case studies page shared how Leadpages used automated messages.

Leadpages started using Drift’s chat system to let their site visitors ask questions. Within a few weeks, they were averaging 100+ questions per week. And they didn’t even have a welcome message. They quickly realized that there was a much bigger opportunity to encourage conversations that lead to conversions.

Leadpages CMO Dustin Robertson says, “Site visitors ask questions through Drift as they consider purchasing our software. But there’s more to Drift than just chat. We can proactively reach out to visitors.”

So they added a welcome message.

In the month prior to adding the message, they had 310 visitors use the chat system. In the month after, they had 1,168. That’s a 267% increase.

But the quantity of messages wasn’t the only thing they were improving. Leadpages started using targeted, automated messages to try to increase conversions on specific pages. Depending on where visitors were on the website, they’d see a different message that fit with the page and asked them to take a specific action.

Like this message on their comparison page:

[Image: Leadpages comparison page chat]

These targeted messages had an open rate of 30% and a click-through rate of 21%.

“With Drift’s automation features, we’ve been able to increase the conversion rate of our site visitors by 36%,” Robertson says.

Interestingly, at the time we prepared this case study roundup, Leadpages didn’t appear to be using chatbots on their comparison page, which has been reworked to feature in-depth comparison reports.

6. Perfecto Mobile: Increased website conversion rate by 230%

Chatbot system: Drift
Industry: Web, mobile, and IoT testing platform
Key stats:

  • Visitor-to-lead conversion rate increased from 6% to 20% in six months.
  • Targeted chat let sales reps focus on leads who were likely to buy.
  • Bypassing forms meant qualified leads could move through the pipeline faster.

Where the study came from: Perfecto Mobile helped Drift prepare this case study, which was published on Drift.com.

Perfecto Mobile had a problem. Most of their “leads” weren’t within their target audience. They didn’t want their sales development reps wasting that kind of time on a live chat system, so they went with Drift.

“Our leads tend to be 70% out of our target, 30% in,” says Perfecto CMO Chris Willis. “Now, I expected with web chat we’d see about the same thing. So people chatting and just essentially taking up the time of our SDRs when they could be working on more productive activities. And so right out of the gate, we identified with Drift that we were going to see the ability to manage that process. So we’re able to, by IP address, identify companies by their size, and only present to our SDRs chats that come from companies that we want to sell to.”

If a website visitor was coming from a company that was too small to be in Perfecto’s target audience, they didn’t see the chatbot.
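That gating logic can be sketched as a firmographic lookup. The IP-to-company table and employee threshold below are invented for illustration; real products resolve IPs against commercial firmographic databases rather than a hard-coded dict:

```python
# Sketch of firmographic chat gating: look up the visitor's company by IP and
# only show the chat widget when the company meets a size threshold.
# The lookup table and threshold are illustrative assumptions.

IP_TO_COMPANY = {
    "203.0.113.7": {"name": "BigCo", "employees": 5000},
    "198.51.100.2": {"name": "TinyShop", "employees": 8},
}
MIN_EMPLOYEES = 200  # assumed cutoff for "in target"

def show_chat_widget(ip: str) -> bool:
    company = IP_TO_COMPANY.get(ip)
    # Unknown IPs (home users, VPNs) are treated as out of target here;
    # a real system might fall back to a generic self-serve bot instead.
    return bool(company) and company["employees"] >= MIN_EMPLOYEES
```

The design choice worth noting is that the filter runs before any human sees the conversation, which is exactly how Perfecto kept SDR time focused on in-target accounts.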

Check out what Chris has to say about their experience:

The other major benefit Perfecto noticed was that chatbots allowed them to capitalize on leads at the most opportune time.

“Leads that come in through chat tend to have a higher velocity,” Chris says. “So you’re able to solve the problem or meet the needs of the request in real-time. So you think in terms of somebody coming to a website, and having a question, and filling in a contact us form. And they’ll hear back in 24 hours, or two days…that problem might not be there anymore. If they’re able to initiate a conversation, so skip the form, and have a conversation in real-time, we’re seeing that move very quickly.”

Here’s an actual example Chris shared about how this worked for Perfecto:

  • An anonymous visitor came to the Perfecto Mobile website and started a conversation through Drift.
  • Based on IP address, the conversation was routed to an SDR.
  • The anonymous visitor turned out to be someone from a major sports brand, and they wanted to meet in-person with a sales rep.
  • During that conversation, the SDR called the sales rep and gave him all the info.
  • Two days later that sales rep was standing in that major sports brand’s offices in New York.

For Perfecto Mobile, chatbots helped them qualify leads faster, and hand them off to the right people at the right time.

7. Charter Communications: 500% ROI in six months

Chatbot system: Next IT
Industry: Cable/Internet provider
Key stats:

  • 500% ROI within six months.
  • Reduced live chat volume by 83%.
  • Decreased time it took customers to reset passwords by 50%.
  • Common customer service questions are now handled completely through the chatbot.
  • Chatbots can resolve issues faster by reducing handoffs.

Where the study came from: Charter Communications implemented Next IT’s chatbot in 2012. Next IT published this case study on their website.

Charter Communications is the second largest cable provider and the fifth largest phone provider in the U.S. They have 16,000 employees and 25 million customers.

Before switching to a chatbot service, Charter Communications had 200,000 live chats per month. 38% of these live chat conversations were for forgotten usernames and passwords. That’s 76,000 ridiculously simple requests that had to be handled by a real person every month.

Obviously, all of those conversations take up a lot of customer service time. Since so many people were accustomed to resolving issues through chat, Charter didn’t want to pull the plug on the entire chat system, but they needed a self-serve option to save their customer service reps for more complex problems.

When they switched to a chatbot, it didn’t just take over those basic password and username questions. 83% of all chat communications were handled by the bot. That’s 166,000 chat requests per month that Charter no longer had to worry about.
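As a quick sanity check, the chat-volume figures quoted in this section are consistent with each other:

```python
# Quick arithmetic check of the Charter figures quoted in this section.
monthly_chats = 200_000
password_requests = round(monthly_chats * 0.38)  # forgotten username/password chats
bot_handled = round(monthly_chats * 0.83)        # chats taken over by the bot
print(password_requests, bot_handled)  # 76000 166000
```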

But Charter’s chatbot wasn’t just bumbling its way through these conversations, either. Part of their goal was to increase first-contact resolution rates, so customers wouldn’t need to be relayed through several people to get what they needed. The chatbot could also handle those tedious password and username requests 50% faster than a real person.

Ultimately, chatbots delivered a solid win for Charter and for their customers.

Facebook Messenger bot case studies

Since Facebook opened up its Messenger app for developers to create their own bots, a lot of brands have seized the opportunity to interact with their audience this way. The case studies you’ll see below are a little lighter than the ones we’ve looked at so far, but they showcase a few ways organizations are successfully using Facebook Messenger bots. Some ecommerce sites have had a lot of success with Messenger bots, but the three examples we’re going to look at are all primarily content-focused brands.

Something to think about: while the other chatbots we’ve looked at live on your website, this one lives in an app people are already using, and they can find your bot there. Facebook shares Messenger bots in the discover tab, and if you open Messenger right now and search, you’ll find “bots” right below “people.” In other words, a Facebook Messenger bot could grow your audience.

8. BabyCenter: 53% click through rate from Facebook Messenger

Chatbot system: Facebook Messenger
Industry: Baby products
Key stats:

  • 84% read rate on automated messages.
  • 53% click through rate from Facebook Messenger to BabyCenter.com.
  • Used a Messenger bot to drive traffic to the website.
  • Facebook messages were opened more than emails.

Where the study came from: BabyCenter asked ubisend to design a Facebook Messenger bot in 2016. Ubisend published this case study on their website.

BabyCenter is one of the most trusted pregnancy websites out there (seriously, I’ve seen my wife’s OBGYN check this site during appointments). One of their biggest draws is a sequential email campaign that follows you every step of the way through pregnancy, and their revenue model is based on advertisements and a strong affiliate sales program.

Through ubisend, BabyCenter created a bot on Facebook Messenger to do two things:

  • Drive traffic to their website.
  • Provide an alternative content delivery system.

As you can see in the GIF below, the bot also provided a more interactive way for people to consume BabyCenter’s content.

[Image: BabyCentre chatbot. Source: ubisend]

The new bot accomplished both objectives, with some impressive results. On average, 84% of people read the message, and 53% of those who opened also clicked through to the website. Ubisend compared that to MailChimp’s open and click-through rates and, with some unstated math, determined that the Messenger bot had a 1,428% higher engagement rate. I can’t speak to the validity of that claim, but here are a couple of reasons why the bot may have had better open and click-through rates than email:

  • The floating messenger icon and that little red number is a lot harder to ignore than an email.
  • People are used to glancing at a subject line without opening the email.
  • Far fewer brands are on Messenger, so a notification is more likely to be from someone you know. (And unless you’re avoiding someone, you’re probably going to open it.)
  • The load time for a Facebook message is almost instant. Email? Not so much.
  • It only takes two taps to open a message and click through. Email takes a little more navigation.

Whatever the reason, a Messenger bot was clearly a viable content delivery system for BabyCenter. If enough people adopt it, the Messenger bot may even rival their well-established sequential email campaign.

9. Good Spa Guide: 29% increase in website traffic

Chatbot system: Facebook Messenger
Industry: Spa reviews
Key stats:

  • 47% click through rate on automated messages.
  • 29% increase in website traffic in six weeks.
  • 13% increase in spa bookings.
  • Messenger bots can help consumers navigate the website before they even get there.

Where the study came from: Good Spa Guide solicited ubisend’s services in 2016. Ubisend published this case study on their website.

As the name implies, Good Spa Guide reviews spas. They make money when people use the site to book a spa, so not surprisingly, they really value website traffic.

Like BabyCenter, Good Spa Guide was looking for an alternative to their email list. They used ubisend to design a Messenger bot that functions a lot like Amtrak’s “Ask Julie” bot. It basically provides a more conversational way to navigate the website—but without actually being on the website.

Check it out:

[Image: Good Spa Guide chatbot]

After a short conversation with the bot, people can go to the exact spa review page they need, and continue their hunt on the website.

With a 29% increase in traffic and a 13% increase in spa bookings, it looks like a Facebook Messenger bot helped Good Spa Guide either tap into a new audience, or engage their existing audience in a better way.

10. MyTradingHub: 59% decrease in churn

Chatbot system: Facebook Messenger
Industry: Forex trading education
Key stats:

  • 59% decrease in churn.
  • 17% increase in website traffic.
  • Messenger bots can be very effective at keeping your audience consistently engaged.

Where the study came from: In 2016, MyTradingHub was struggling to keep subscribers engaged, so they turned to ubisend. This case study was published on ubisend.com.

MyTradingHub is a web-based social and educational platform for people who trade on the foreign exchange market. They’re after users, not customers, and they use a sequential email campaign to keep their users engaged.

Their primary metric is what they call “Trader Training Completion,” which measures the number of people who have viewed 80% of MyTradingHub’s content and performed specific tasks like quizzes. When this metric started declining, they learned that users weren’t completing the training because “they forgot about it.”

They decided to try an interactive Messenger bot to bring up the number of people who made it through training. They wound up creating a bot that could help people interact with the trading platform and continue their training.

[Image: MyTradingHub chatbot]

MyTradingHub saw their TTC metric increase by 59% following the launch of the bot, and their training pages saw 17% more traffic.

In this case, it looks like a Messenger bot functioned as a sort of half-measure. MyTradingHub has been around since 2010, but to continue to be a strong “social platform,” they probably need their own app. In the meantime, MyTradingHub’s Messenger bot appears to be keeping users more engaged with their existing content.

Honorable mention: PG Tips, 150 messages per second

Chatbot system: Facebook Messenger
Industry: Tea
Key stats:

  • In six weeks, ubisend designed a branded chatbot with 215 conversation topics.
  • The bot was capable of sending more than 150 messages per second.
  • Chatbots have an insane capacity for simultaneous conversations.

Where the study came from: PG Tips asked ubisend to design a chatbot for a charity promotion. Ubisend then published this case study on their website in 2017.

PG Tips (a brand by Unilever) decided to turn their “Most Famous Monkey” into an AI chatbot to generate donations for charity. They wanted a conversational chatbot to tell jokes for their “one million laughs” campaign.

It took six weeks for ubisend to turn a chatbot into a mediocre standup comic. (They went pretty heavy on the dad jokes.) The bot could send more than 150 messages per second and handle 215 different conversation topics.

We only included this one because it shows how quickly you can set up a fairly intelligent, completely custom chatbot.

Bonus: Facebook Messenger bot examples

After allowing developers to create their own chatbots for Messenger, Facebook shared this roundup of brands successfully using chatbots. More than 30,000 chatbots were created in the first six months they were supported on Facebook Messenger. The roundup highlights four that Facebook thinks are worth checking out.

What do these chatbots all have in common?

In most cases, chatbots aren’t going to fool anyone. The chatbots we’ve looked at here are obviously not real people. The brands that use them and the companies that make them might be excited about how human they seem, but that’s not the point.

In the right situations, chatbots can provide customers and users with a better experience because they process your request instantly, and it doesn’t matter how many other conversations they’re having. And if you’re waiting around for basic help (like, say, password reset), you’re really not going to care if the person who’s helping you is a Bob or a bot.

Unless you’re looking for something gimmicky (like a chatbot monkey that tells dad jokes), most chatbots simply provide a more conversational way for your audience to consume the information on your website. It’s certainly not for everyone—some people (myself included) would rather navigate websites the old-fashioned way and read blog posts on a blog—but for many people, chatbots provide a helpful shortcut to the information they’re looking for. And that’s something you should probably care about.

One clear takeaway: if you’re using a live chat service right now, a chatbot can either outright replace it or vastly improve it. Ask your customer service reps what questions they get the most and how often they get them. Go ahead, ask them.
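If your live chat tool exports transcripts, you don't even have to rely on your reps' memory: a quick tally of the questions they answer most often tells you which intents a bot should cover first. A minimal sketch, with an invented log format and made-up questions standing in for your real transcripts:

```python
from collections import Counter

# Hypothetical questions pulled from live-chat transcripts; the log
# format and the questions themselves are invented for illustration.
chat_log = [
    "how do i reset my password",
    "where is my order",
    "how do i reset my password",
    "do you ship internationally",
    "how do i reset my password",
    "where is my order",
]

# The questions your reps field most often are the first intents
# a chatbot should handle.
top_questions = Counter(chat_log).most_common(3)
for question, count in top_questions:
    print(f"{count}x  {question}")
```

In a real export you'd normalize phrasing first (lowercasing, maybe clustering similar wordings), but even a crude frequency count like this is usually enough to rank your top five intents.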

But even if you’re not already using some sort of chat service, chatbots can:

  • Automate the lead generation process.
  • Deliver your content in new ways (maybe even to new people).
  • Drive traffic to key pages of your website.
  • Save your customer service and sales reps a boatload of time.
