Volume 22, Number 1, 2-7 (2022)
Received December 13, 2021
Accepted January 29, 2022
Published online February 21, 2022
This work is under the Creative Commons CC-BY-NC 3.0 license.
Technology is more than a tool; the use of technology is also a skill that needs to be developed. Teachers are expected to have the ability to integrate technology into their teaching methods, but whether they have the required technological expertise is often neglected. Therefore, technologies that can be easily used by teachers must be developed. In this study, an algorithm was developed that integrates Google Sheets with Line to offer teachers who are unfamiliar with programming a quick method for constructing a chatbot based on their teaching plan, question design, and the material prepared for a reading class. To meet the needs of reading classes, reading theories and effective reading instruction are incorporated into the learning and teaching mechanism of the chatbot system. To create a guidance structure that is suitable for students of various levels of ability, Nelson’s multipath digital reading model was employed because it can maintain a reading context while simultaneously responding to the diverse reading experiences of different readers.
Keywords: Educational technology, Learning management systems, Mobile learning
According to [ 1 ], the use of technological tools to supplement teaching and learning activities can help students access information efficiently, develop self-directed learning in students, and improve the quality of instruction. However, many teachers continue to adhere to traditional teaching methods rather than integrating technology into teaching because of their negative attitude toward and low confidence in technology use; insufficient professional knowledge and technological competence; and a lack of technological resources, training, and support [ 2 ], [ 3 ]. Therefore, this study focused on the development of a system to cater to teachers’ technological abilities and teaching needs.
In this study, a reading class was considered the target context. In accordance with the theory of reading comprehension, Kintsch’s single-text [ 4 ] and Britt’s multitext [ 5 ] reading comprehension models were integrated into the learning mechanism design. To provide assistance to students on the basis of the level of comprehension with which they have difficulty, Nelson’s multipath reading model was employed to design the question and answer mechanism [ 6 ].
To make the system easily operable and accessible for teachers who lack a programming background, Line, which is the most widely used communication platform in Taiwan, was used as the front-end interface, and Google Sheets, a commonly used cloud-based spreadsheet, was employed as the database containing teaching content and learning records. Moreover, programs and algorithms were developed using Google Apps Script to connect the Line and Google Sheets services.
According to Kintsch’s reading comprehension model, which is called the construction–integration model, reading comprehension is a process of continuous construction and integration [ 4 ], [ 7 ]. In this model, each sentence in a text is transformed into a semantic unit, which is called a proposition. The reader then constructs a coherent understanding by continually recombining these propositions in an orderly fashion. Reference [ 8 ] reviewed studies on single-text processing and assumed that the reading process involves at least three levels of memory representation. The surface level represents decoding of word meaning in the early reading stage. The textbase level represents the process of transforming a text into a set of propositions on the basis of lexical knowledge, syntactic analysis, and information retrieved from memory. The situation level represents the process of constructing a coherent understanding of the situation described in the text through experience accumulated in life.
The main limitation of a single text is that it only reflects the viewpoint of a specific author rather than offering comprehensive viewpoints. Even when arguments are objectively summarized in a literature review, the author still selects from among original sources. According to [ 5 ] and [ 9 ], if students are to address an issue critically and know how to construct a complete understanding of an issue, they should be allowed to learn by reading actual texts, practice selecting and organizing information, and interpret thoughts in their own manner. In multitext reading, texts have the role of providing raw information; readers must thus be clear on the purpose of their reading if they are to select and integrate relevant information and manage diverse or even contradictory viewpoints; otherwise, they may become lost in the ocean of information. Britt et al. extended the Kintsch model to propose the documents model and suggested that a higher level of proposition is event related and includes several clauses and paragraphs; this level involves understanding construction in multitext reading [ 5 ], [ 10 ]. Reference [ 8 ] reviewed studies on multitext reading and concluded that the reading process involves at least three memory representations: the integrated model represents the reader’s global understanding of the situation described in several texts, the intertext model represents their understanding of the source material, and the task model represents their understanding of their goals and appropriate strategies for achieving these goals. Compared with Kintsch’s theory, multitext reading theory is more reader-directed and emphasizes the reader’s approach to constructing a coherent and reasonable understanding from texts representing various viewpoints.
As suggested in [ 8 ], the challenge faced in the teaching of multiple-document reading is how to design a guidance structure that considers the reading paths of different students. Nelson proposed a digital reading model that can maintain a context and simultaneously respond to the diverse reading experiences of different readers. Nelson suggested breaking a text into smaller units and inserting hyperlinks in these units, allowing readers to jump from the current document to the content pointed to by the hyperlinks without affecting the structure of the text [ 6 ]. Moreover, reference [ 11 ] concretely realized Nelson’s model by treating reading units as nodes, interunit relationships as links, and reading experience as a network composed of nodes and links. Therefore, the collection of content with which the reader interacts can be treated as a representation of the reader’s reading process. Nelson’s multipath digital reading model inspired us to shift the complex teacher–student interaction during reading instruction to a chatbot system. Learning content can be considered a node, and question–answer pairs can be considered links to related learning content. If question–answer pairs fully represent students’ understanding, the students can be guided to the content they require on the basis of the answer they select. The following section explains the factors that must be accounted for within a well-designed question–answer framework.
Two particular reading interventions are employed to promote comprehension: an instructional framework based on self-regulated learning targets, which is used for basic-level comprehension, and a framework based on teacher-facilitated discussion targets, which is employed for high-level comprehension and critical–analytic thinking [ 12 ]. Among interventions for teacher-facilitated discussion, questioning differs from direct explanation and strategic interventions, which help students develop reading comprehension through direct transfer of skills. Instead, questioning, involving asking students questions in a step-by-step manner, helps them actively construct and develop their understanding of a text.
A good question does not always come to mind easily; thus, teachers must prepare well before class. According to [ 13 ], before designing questions, teachers must have a general understanding of the text, consider probable student reactions, possess specific thinking skills, and decide which competencies should be evaluated. According to [ 14 ] and [ 15 ], when designing questions, the level of the question should be based on the complexity of the cognitive processing required to answer the question. For example, factual questions, requiring the lowest level of processing, require students to recall relevant information from the text; paraphrased questions require students to recall specific concepts and express them in their own way; interpretive questions require students to search for and deduce a relationship among concepts that are not explicitly stated in the text; and evaluative questions, requiring the highest level of processing, require students to analyze and evaluate a concept in the text by using the context and their prior knowledge.
Questions can not only reflect the level of comprehension but also promote thinking. If higher-level questions are posed, students are more likely to think beyond the surface of the topic [ 16 ]. For example, even if a student can answer factual questions correctly, they do not necessarily gain insight from the facts. If the teacher then asks the student to paraphrase or interpret a concept, which would indicate whether the student can link facts together, the student is likely to demonstrate higher-level competency [ 16 ].
In recent years, the OECD’s Programme for International Student Assessment reading comprehension standards [ 17 ] have increasingly emphasized the role of the reader’s personal reflection in reading comprehension. However, irrespective of whether the questions require the students to organize information from texts, use their prior knowledge, or reflect on their life experiences, students must respond in accordance with the context of the text. In other words, they should not express their opinions and feelings freely as they wish. If making deductions from a text is the main competency to be assessed, the level of students’ comprehension can be determined by evaluating their selection of original sources while expressing their thoughts. Moreover, if students are asked to cite original sources, they are more likely to avoid straying from the topic and to demonstrate critical thinking [ 9 ].
To create a good questioning practice, teachers must consider the different levels of the various students and provide assistance accordingly. The different types of questions represent different levels of reading comprehension. Higher-order questions involve more complex cognitive strategies than lower-order questions. Reference [ 18 ] stated that for students who have trouble constructing meaning from a text, teachers should provide a supporting task, such as word recognition. References [ 14 ] and [ 19 ] have highlighted that for students who need help answering challenging questions, teachers should encourage more advanced use of thinking skills, such as metacognition and awareness of thinking.
The instant feedback that a teacher can provide on the basis of a student’s reply cannot be easily replaced by a predetermined instructional framework. Instead of replacing face-to-face instruction in a class, the system aims to solve the problems encountered during oral question-and-answer sessions and to provide teachers with students’ learning information to enable further counseling. Because identifying how students make inferences from texts is difficult for a teacher during oral communication, a recording mechanism is needed to help the teacher note the source of a student’s inference. According to [ 20 ], even if a teacher is well prepared, poor oral presentation skills can affect students’ understanding of questions. Therefore, a digital tool that fully implements a teacher’s questioning framework can be used to prevent misunderstanding. According to [ 21 ], some students fail to take the opportunity to practice because they feel reluctant to express themselves in public; thus, an individual-oriented learning system can ensure that every student practices an equal amount.
By summarizing the aspects that needed to be considered in the design of questions and instructions, the main guidelines of this system were defined as follows. The question design should support true/false questions, multiple-choice questions, and essay questions for different levels of students. The mechanism of replying to a question should support self-expression and connection with corresponding resources. The system must provide a basic mechanism for determining students’ level of reading comprehension from their qualitative reply and guide them to reread the text for self-modification and self-monitoring.
The earliest chatbot, ELIZA, developed by Weizenbaum in 1966, used algorithmic processing and predefined response content to interact with humans [ 22 ]. Chatbots are commonly used to assist individuals in completing specific tasks, and the dialogues are designed to be purposeful and guided [ 23 ].
Recently, chatbots have been widely applied in educational settings and have been demonstrated to have beneficial effects on learning. For example, in [ 24 ] and [ 25 ], chatbots were applied to language learning and determined to induce interest and motivation in learning and increase students’ willingness to express themselves. The results of one study [ 26 ], in which chatbots were applied to computer education, revealed that students who learned in the chatbot-based learning environment performed comparably to those who learned through traditional methods. Moreover, [ 27 ] recently developed a chatbot framework by using natural language processing (NLP) to generate appropriate responses to inputs. They used NLP to distinguish and label students’ learning difficulties, connect students with the corresponding grade-level learning subjects, and quickly search for learning content that met the students’ needs. Other scholars [ 28 ] applied a chatbot to the learning management system of a university and employed artificial intelligence to analyze the factors that motivate students to learn actively, monitor academic performance, and provide academic advice. The results indicated that the method improved student participation in their course.
Many commonly used communication platforms and free digital resources now support the development of chatbots. Designing and maintaining a system of teaching aids from scratch would be time-consuming. Chatbots already have high usability and are accepted by the public, meaning that using an existing platform to develop a chatbot would reduce users’ cognitive load during the learning process. Therefore, this study developed algorithms to link the services of two open platforms, Google and Line, and create a cloud spreadsheet that can act as a database for storing teaching content and learning records. Because the algorithms connect with a spreadsheet, creating a new chatbot learning system by using the proposed approach is easy; the spreadsheet would be duplicated, and the setting would be updated with information on the new chatbot.
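The linkage described above can be sketched in Apps Script-style JavaScript. The payload shape follows the public Line Messaging API reply endpoint; the function name and the webhook plumbing shown only in comments are our illustration under stated assumptions, not the paper's actual code.

```javascript
// Sketch of the bridge logic: build a Line Messaging API reply payload
// from chatbot message texts. The payload fields (replyToken, messages,
// type, text) follow the public Line Messaging API; buildLineReply is a
// hypothetical helper name.
function buildLineReply(replyToken, texts) {
  return {
    replyToken: replyToken,
    messages: texts.map(function (t) {
      return { type: 'text', text: t };
    })
  };
}

// In Google Apps Script, the Line webhook would be received in doPost(e)
// and the payload posted back, for example:
// UrlFetchApp.fetch('https://api.line.me/v2/bot/message/reply', {
//   method: 'post',
//   contentType: 'application/json',
//   headers: { Authorization: 'Bearer ' + CHANNEL_ACCESS_TOKEN },
//   payload: JSON.stringify(buildLineReply(token, ['Hello']))
// });
```

Keeping the payload construction separate from the network call keeps the logic testable outside the Apps Script runtime.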
A. Instructional Flow Design
1) Structure
A piece of text contains several propositions, and the propositions may be parallel or subordinate to a large proposition. Therefore, the structure of textual analysis and the teaching structure are hierarchical. The proposed system has three levels: the text, chapter, and content levels (Fig. 1). Each card in a carousel template represents one text, and having multiple texts is acceptable (Fig. 2). Chatbot designers can update the chatbot interface and carousel template on the basis of their teaching structure once they have added a new text in Google Sheets (Fig. 2). Students can select any text they want to learn from at any time because of a menu button, called “Classes for Guided Reading of Texts,” which prompts the carousel template to pop up (Fig. 3). Each chapter has its own ID number, and the system connects the chapter’s learning content by the ID. For example, the ID of “Characteristic” is “01”; thus, if students press the button showing “Characteristic”, the system searches for the teaching content labeled “010000” for publishing on the chatbot and then moves to the next content in accordance with the next ID assigned by the designer (Fig. 4).
Fig. 2. Carousel template.
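As an illustration of the ID scheme above, the content lookup can be sketched as follows. The assumption that the first two digits identify the chapter and the remaining four index the content item is inferred from the "Characteristic"/"010000" example; the row fields are hypothetical stand-ins for the Google Sheets columns.

```javascript
// Minimal sketch of the six-digit content-ID lookup. The rows array
// stands in for the Google Sheet holding teaching content; in Apps Script
// it would come from SpreadsheetApp.getActiveSpreadsheet().
function findContent(rows, chapterId) {
  var startId = chapterId + '0000'; // first content item of the chapter
  for (var i = 0; i < rows.length; i++) {
    if (rows[i].id === startId) return rows[i];
  }
  return null; // chapter not found in the sheet
}

// Illustrative rows: each content item carries the ID of the next item,
// as assigned by the designer.
var sheetRows = [
  { id: '010000', text: 'Intro to Characteristic', nextId: '010001' },
  { id: '010001', text: 'First question', nextId: '010002' }
];
```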
2) Instructional Content Design
According to Kintsch’s theory, instruction should assist students at the level at which they fail to arrive at a correct understanding of the text. At the surface level, instructions should provide word explanations. At the textbase level, instructions should help connect propositions that the students have ignored. At the situation level, the system should guide students in expressing a concept in their own way and in accordance with their experience. In some cases, instructional contents at adjacent levels are strongly coherent and not clearly distinct. Therefore, the teacher’s instructional flow can be designed as a linear structure or created with branches and flexibility to help guide students to content at an appropriate level depending on whether the student knows specific concepts.
Teaching content that comprises an instructional flow is coded. The content in question form can be used to create a branch for the instructional flow. Each question can accept up to 13 branches. To arrange the next content to be published, the system requires the teacher to assign IDs to the branches of each question. According to multitext reading theory, at the integrated level, instructions should guide students to construct a global understanding of the texts. Therefore, each content ID is generated uniquely so that the next ID to be assigned is not limited to the range of texts currently being learned. For paraphrased questions that require students to respond in their own way and when no answer accurately represents a student’s thoughts, the system allows the student to reply by typing out a response if the next ID is set to “000000” (Fig. 4). The system stores the student’s learning progress by recording the order in which the student encountered the content, the buttons they pressed, and their replies (Fig. 5).
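A minimal sketch of this branching logic follows, assuming a question row maps button labels to next-content IDs and using the "000000" sentinel described above; the field names and the `nextStep` helper are illustrative, not the paper's actual implementation.

```javascript
// "000000" is the sentinel the paper assigns to free-text replies.
var FREE_TEXT = '000000';

// Dispatch on the button a student pressed: jump to the designer-assigned
// content ID, switch to free-text input, or flag an undesigned label.
function nextStep(question, pressedLabel) {
  var nextId = question.branches[pressedLabel];
  if (nextId === undefined) return { action: 'unknown' };   // label not designed
  if (nextId === FREE_TEXT) return { action: 'awaitText' }; // open reply
  return { action: 'publish', id: nextId };                 // publish next content
}

// Illustrative question with three of the up-to-13 branches; note "No"
// jumps to another text ("02…"), since content IDs are globally unique.
var q = {
  id: '010001',
  branches: { 'Yes': '010002', 'No': '020000', 'Other': '000000' }
};
```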
For both multiple-choice and paraphrased questions, the system asks the student to provide their qualitative reasoning and original sources; these responses enable us to understand how students interpret texts (details in section II-B-5). If a student’s thoughts are not represented by any answer, the student’s qualitative reply is treated as an exceptional case not considered by the teacher during the design stage, and all such replies are collected and given to the teacher.
B. Design of the Question and Answer Mechanism
1) Questioning Mechanism
Whether the students answer a question correctly does not reflect whether they fully understand a text. Examining the process of students’ interpretation can be a way to accurately follow their real thinking. According to Kintsch’s construction–integration model, a text is a combination of multiple propositions. Similarly, a reading comprehension question must be answered by combining different propositions. Therefore, by comparing the combinations of propositions used by the teacher and the students, it can be determined whether students have overlooked specific aspects, and appropriate guiding questions can then be provided to help the students review the text.
To help the teacher more effectively identify the connection between student responses and the text, the system cuts and codes the text provided by teachers by using punctuation marks as breakpoints. The system then creates a webpage on the basis of these sentence units and gives students the link to the webpage in a chatbot dialog (Fig. 6). The webpage has a guide that helps students reply, explain their reasoning, and pick sentences as resources to support their viewpoint (Fig. 7). The webpage is connected to the Line Login service; thus, the user’s identity is recognized, and students’ replies are recorded and sent back to the same Google Sheet for the chatbot system and another subsheet for storage (Fig. 8).
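The cutting-and-coding step might look like the following sketch. The punctuation set and the ID format are assumptions; the paper only states that punctuation marks serve as breakpoints and that each unit is coded.

```javascript
// Split teacher-provided text at punctuation breakpoints and give each
// unit a unique ID so students can cite it on the reply webpage.
function cutText(textId, text) {
  // Each unit is a run of non-punctuation characters plus its closing
  // punctuation; trailing text without punctuation is kept as a final
  // unit. The Chinese full-width marks are included as an assumption.
  var units = text.match(/[^.!?。！？]+[.!?。！？]+|[^.!?。！？]+$/g) || [];
  return units.map(function (s, i) {
    return { sentenceId: textId + '-' + (i + 1), sentence: s.trim() };
  });
}
```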
3) Sentence Marker
When a teacher designs questions, they usually have a reference answer in mind and need to refer to specific information and propositions from the text for support, interpretation, and inference. Therefore, teachers are asked to provide reference answers with the corresponding textual sources when designing questions. Similarly, students must select corresponding textual sentences as the basis for their interpretations. According to multitext reading theory, at the intertext level, sourcing across texts is one of the main competencies that must be developed and evaluated. Because each sentence is coded uniquely, students can pick sentences across texts.
To calculate the similarity between a student’s answer and the reference answer provided by the teacher, the system compares the references of both. On the basis of the difference between the references, the system can distinguish and label the completeness of the student’s comprehension and provide a guiding question with which the student can review the text.
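One simple way to realize this comparison is an overlap measure over sentence IDs, sketched below. The paper does not specify the metric, so the coverage ratio and the completeness labels here are illustrative assumptions.

```javascript
// Compare the sentence IDs cited by a student against those in the
// teacher's reference answer. Coverage is the fraction of reference
// sentences the student also cited; the labels are hypothetical.
function compareReferences(teacherIds, studentIds) {
  var covered = teacherIds.filter(function (id) {
    return studentIds.indexOf(id) !== -1;
  });
  var missing = teacherIds.filter(function (id) {
    return studentIds.indexOf(id) === -1;
  });
  var coverage = teacherIds.length ? covered.length / teacherIds.length : 0;
  var label = coverage === 1 ? 'complete'
            : coverage > 0   ? 'partial' // guide the student to missed sentences
            :                  'missed';
  return { coverage: coverage, missing: missing, label: label };
}
```

The `missing` list is what would drive the guiding question, pointing the student back at the reference sentences they overlooked.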
Because the learning patterns of a group of students are unknown at the beginning of a course, the teacher should track students’ learning process in the long term and observe how students’ explanations and sentence selection evolve under the influence of the guiding questions provided by the system. Before analysis, if a user’s replies include multiple-choice selections and qualitative explanations with supporting sentences, the replies are classified into correct and incorrect. If a user’s replies are paraphrased replies rather than multiple-choice selections, their correctness is determined manually because the system is not yet capable of automatically determining correctness. Another area of analysis in which we are interested is comparing how different students interpret a given question; thus, we plan to classify qualitative explanations on the basis of sentence IDs.
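The planned classification by sentence IDs could be sketched as grouping replies by the set of sentences they cite, so that different interpretations of the same question can be compared side by side; the record shape here is an assumption.

```javascript
// Group students' qualitative explanations by the combination of
// sentence IDs they cited. Sorting the IDs makes the grouping key
// order-independent, so {s1,s2} and {s2,s1} land in the same group.
function groupBySentenceIds(replies) {
  var groups = {};
  replies.forEach(function (r) {
    var key = r.sentenceIds.slice().sort().join(',');
    (groups[key] = groups[key] || []).push(r.student);
  });
  return groups;
}
```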
The integration of technology into teaching requires consideration of many aspects that are often overlooked, such as the teacher’s attitude, technological knowledge and ability, and teaching needs. Because we believe that tools should be useful, not just usable, this study aimed to develop a teacher-friendly teaching-aid system based on theories of the teaching and learning of reading and empirical studies of technology applications.
Thanks to the advancement of technology and the willingness of each platform to release development permission, we were able to link Google, Line, and web services by using algorithmic mechanisms. The advantage of this integration is that we do not need to spend considerable time and money developing a system from scratch but can instead use the existing advantages and convenience of these platforms to achieve a similar experience. Moreover, as system developers, we are able to focus on the development and implementation of pedagogical theories rather than the basic operation and maintenance of the system.
To investigate the usability of the system and to help us improve the system, we will invite students and teachers as participants. This system is a prototype. Some message types follow a Line template, and thus, there are limitations, such as the number of buttons, length of the content, and appearance of the message. In addition, in the Google Sheet employed in this study, restrictions and drop-down lists cannot be implemented to prevent designers from constructing learning content with an incorrect format. Therefore, many functions need to be implemented and improved to make the system more accessible for designers. Moreover, because students’ data stored in Google Sheets cannot currently be read easily, the data must be organized; we expect to take the same Google Sheet format as the basis for developing another chatbot with which teachers can produce statistical analyses of students’ learning records.
The system is expected to be a tool that can help teachers understand how students make interpretations and inferences when reading a text. Especially for students who cannot obtain the correct understanding, the relationship between their explanations and text sentences can help teachers to counsel such students or help researchers analyze the factors causing misunderstanding. In the future, we expect to apply machine learning models to further distinguish and label students’ reading difficulties.
[ 1 ] J. Fu, “Complexity of ICT in education: A critical literature review and its implications,” International Journal of Education and Development using ICT, vol. 9, no. 1, pp. 112-125, 2013.
[ 2 ] P. A. Ertmer, A. T. Ottenbreit-Leftwich, O. Sadik, E. Sendurur, and P. Sendurur, “Teacher beliefs and technology integration practices: A critical relationship,” Computers & Education, vol. 59, no. 2, pp. 423-435, 2012, DOI: 10.1016/j.compedu.2012.02.001
[ 3 ] F. A. Inan and D. L. Lowther, “Factors affecting technology integration in K-12 classrooms: A path model,” Educational Technology Research and Development, vol. 58, no. 2, pp. 137-154, 2010, DOI: 10.1007/s11423-009-9132-y
[ 4 ] W. Kintsch, Comprehension: A Paradigm for Cognition. Cambridge: Cambridge University Press, 1998.
[ 5 ] M. A. Britt, C. A. Perfetti, R. Sandak, and J.-F. Rouet, “Content integration and source separation in learning from multiple texts,” in Narrative comprehension, causality, and coherence: Essays in honor of Tom Trabasso . Mahwah, NJ: Lawrence Erlbaum Associates, 1999, pp. 209-233.
[ 6 ] T. H. Nelson, “Complex information processing: a file structure for the complex, the changing and the indeterminate,” in Proceedings of the 1965 20th national conference , 1965, pp. 84-100, DOI: 10.1145/800197.806036
[ 7 ] T. A. Van Dijk and W. Kintsch, Strategies of discourse comprehension , New York: Academic Press, 1983.
[ 8 ] S. R. Goldman et al. , “Disciplinary literacies and learning to read for understanding: A conceptual framework for disciplinary literacy,” Educational Psychologist, vol. 51, no. 2, pp. 219-246, 2016, DOI: 10.1080/00461520.2016.1168741
[ 9 ] Ø. Anmarkrud, I. Bråten, and H. I. Strømsø, “Multiple-documents literacy: Strategic processing, source awareness, and argumentation when reading multiple conflicting documents,” Learning and Individual Differences, vol. 30, pp. 64-76, 2014, DOI: 10.1016/j.lindif.2013.01.007
[ 10 ] M. A. Britt and J.-F. Rouet, “Learning with multiple documents: Component skills and their acquisition,” in Enhancing the quality of learning: Dispositions, instruction, and learning processes, J. R. Kirby and M. J. Lawson, Eds. Cambridge: Cambridge University Press, 2012 , pp. 276-314.
[ 11 ] T. J. Berners-Lee and R. Cailliau, “Worldwideweb: Proposal for a hypertext project,” CERN European Laboratory for Particle Physics, Nov. 1990.
[ 12 ] M. Li et al. , “Promoting reading comprehension and critical–analytic thinking: A comparison of three approaches with fourth and fifth graders,” Contemporary Educational Psychology, vol. 46, pp. 101-115, 2016, DOI: 10.1016/j.cedpsych.2016.05.002
[ 13 ] W. J. Therrien and C. Hughes, “Comparison of Repeated Reading and Question Generation on Students’ Reading Fluency and Comprehension,” Learning disabilities: A contemporary journal, vol. 6, no. 1, pp. 1-16, 2008.
[ 14 ] P. Afflerbach and B.-Y. Cho, “Identifying and describing constructively responsive comprehension strategies in new and traditional forms of reading,” Handbook of research on reading comprehension . New York: Routledge, 2009, pp. 69-90.
[ 15 ] T. Andre, “Does answering higher-level questions while reading facilitate productive learning?,” Review of Educational Research, vol. 49, no. 2, pp. 280-318, 1979, DOI: 10.3102/00346543049002280
[ 16 ] S. Degener and J. Berne, “Complex questions promote complex thinking,” The Reading Teacher, vol. 70, no. 5, pp. 595-599, 2017, DOI: 10.1002/trtr.1535
[ 17 ] OECD, PISA 2018 Assessment and Analytical Framework. Paris: OECD Publishing, 2019.
[ 18 ] B. M. Taylor, P. D. Pearson, K. F. Clark, and S. Walpole, “Beating the Odds in Teaching All Children To Read,” Center for the Improvement of Early Reading Achievement, University of Michigan, Ann Arbor, Michigan, CIERA-R-2-006, 1999.
[ 19 ] B. M. Taylor, P. D. Pearson, D. S. Peterson, and M. C. Rodriguez, “Reading growth in high-poverty classrooms: The influence of teacher practices that encourage cognitive engagement in literacy learning,” The Elementary School Journal, vol. 104, no. 1, pp. 3-28, 2003, DOI: 10.1086/499740
[ 20 ] E. A. O’Connor and M. C. Fish, “Differences in the Classroom Systems of Expert and Novice Teachers,” presented at the meetings of the American Educational Research Association, 1998.
[ 21 ] D. Linvill, “Student interest and engagement in the classroom: Relationships with student personality and developmental variables,” Southern Communication Journal, vol. 79, no. 3, pp. 201-214, 2014, DOI: 10.1080/1041794X.2014.884156
[ 22 ] G. Davenport and B. Bradley, “The care and feeding of users,” IEEE multimedia, vol. 4, no. 1, pp. 8-11, 1997, DOI: 10.1109/93.580390
[ 23 ] A. Rastogi, X. Zang, S. Sunkara, R. Gupta, and P. Khaitan, “Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset,” in Proceedings of the AAAI Conference on Artificial Intelligence , 2020, vol. 34, no. 05, pp. 8689-8696, DOI: 10.1609/aaai.v34i05.6394
[ 24 ] L. Fryer and R. Carpenter, “Bots as language learning tools,” Language Learning & Technology, vol. 10, no. 3, pp. 8-14, 2006, DOI: 10125/44068
[ 25 ] L. K. Fryer, K. Nakao, and A. Thompson, “Chatbot learning partners: Connecting learning experiences, interest and competence,” Computers in Human Behavior, vol. 93, pp. 279-289, 2019, DOI: 10.1016/j.chb.2018.12.023
[ 26 ] J. Yin, T.-T. Goh, B. Yang, and Y. Xiaobin, “Conversation technology with micro-learning: The impact of chatbot-based learning on students’ learning motivation and performance,” Journal of Educational Computing Research, vol. 59, no. 1, pp. 154-177, 2021, DOI: 10.1177/0735633120952067
[ 27 ] P. Tracey, M. Saraee, and C. Hughes, “Applying NLP to build a cold reading chatbot,” in 2021 International Symposium on Electrical, Electronics and Information Engineering , 2021, pp. 77-80, DOI: 10.1145/3459104.3459119
[ 28 ] W. Villegas-Ch, A. Arias-Navarrete, and X. Palacios-Pacheco, “Proposal of an Architecture for the Integration of a Chatbot with Artificial Intelligence in a Smart Campus for the Improvement of Learning,” Sustainability, vol. 12, no. 4, p. 1500, 2020, DOI: 10.3390/su12041500
This research was supported by the Taiwan Ministry of Science and Technology (MOST 110-2410-H-007-059-MY2).
received her B.S. and M.S in Physics from National Tsing Hua University (NTHU, Taiwan, R.O.C.). She is currently continuing her research as a graduate student in the Institute of Learning Sciences and Technologies at National Tsing Hua University. Her research interests include digital learning environment analysis and design, specifically designing for cross-disciplinary learning and reading comprehension.
received his doctoral degree in Design programs at National Taiwan University of Science and Technology (NTUST). During his academic study at the university, he incorporated professional knowledge from various disciplines (e.g., multimedia, interaction design, visual and information design, arts and science, interactive media design, computer science, and information communication) into human interaction design, communication and media design research studies and applications. His cross-disciplinary research interests involve methods in information media research, interaction/interface design, multimedia game, and Computation on Geometric Patterns. Now he is a professor in the Institute of Learning Sciences and Technologies at National Tsing Hua University (NTHU, Taiwan, R.O.C.). His professional experience is focused in the fields of digital humanity, game-based learning, visual narrative, and information design, and the domain of design for learning.
International Journal of Educational Technology in Higher Education, volume 18, Article 65 (2021)
Educational chatbots (ECs) are chatbots designed for pedagogical purposes and are viewed as an Internet of Things (IoT) interface that could revolutionize teaching and learning. These chatbots are designed to provide personalized learning through the concept of a virtual assistant that replicates humanized conversation. Nevertheless, in the education paradigm, ECs are still novel, with challenges in facilitating, deploying, designing, and integrating them as an effective pedagogical tool across multiple fields, one such area being project-based learning. Therefore, the present study investigates how integrating ECs to facilitate team-based projects in a design course could influence learning outcomes. Based on a mixed-method quasi-experimental approach, ECs were found to improve learning performance and teamwork with a practical impact. Moreover, ECs were found to facilitate collaboration among team members, which indirectly influenced their ability to perform as a team. Nevertheless, affective-motivational learning outcomes such as perception of learning, need for cognition, motivation, and creative self-efficacy were not influenced by ECs. Hence, this study adds to the current body of knowledge on the design and development of ECs by introducing a new collective design strategy and discussing its pedagogical and practical implications.
Chatbots are defined as computer programs that replicate human-like conversations using natural language structures (Garcia Brustenga et al., 2018; Pham et al., 2018) in the form of text messages (websites or mobile applications), voice (Alexa or Siri), or a combination of both (Pereira et al., 2019; Sandoval, 2018). These automated conversational agents (Riel, 2020) have been used extensively to replicate customer service interaction (Holotescu, 2016) in various domains (Khan et al., 2019; Wang et al., 2021), to the extent that they have become a common trend (Wang et al., 2021). The use of chatbots has expanded further because of their affordance, cost (Chocarro et al., 2021), development options (Sreelakshmi et al., 2019; Wang et al., 2021), and the adoption facilitated by social network and mobile instant messaging (MIM) applications (apps) (Brandtzaeg & Følstad, 2018; Cunningham-Nelson et al., 2019) such as WhatsApp, Line, Facebook, and Telegram.
Accordingly, chatbots popularized by social media and MIM applications have been widely accepted (Rahman et al., 2018; Smutny & Schreiberova, 2020) and are referred to as mobile-based chatbots. These bots have been found to facilitate collaborative learning (Schmulian & Coetzee, 2019), multimodal communication (Haristiani et al., 2019), scaffolding, real-time feedback (Gonda et al., 2019), personalized learning (Oke & Fernandes, 2020; Verleger & Pembridge, 2019), scalability, and interactivity (Dekker et al., 2020), and to foster knowledge creation and dissemination effectively (Verleger & Pembridge, 2019). Nevertheless, given the possibilities of MIM in conceptualizing an ideal learning environment, we often overlook whether instructors are capable of engaging in high-demand learning activities, especially around the clock (Kumar & Silva, 2020). Chatbots can potentially be a solution to such a barrier (Schmulian & Coetzee, 2019), especially by automatically supporting learning communication and interactions (Eeuwen, 2017; Garcia Brustenga et al., 2018), even for a large number of students.
Nevertheless, Wang et al. (2021) claim that while the application of chatbots in education is novel, it also remains scarce. Smutny and Schreiberova (2020), Wang et al. (2021), and Winkler and Söllner (2018) added that current research on educational chatbots (ECs) has focused on language learning (Vázquez-Cano et al., 2021), economics, medical education, and programming courses. Hence, the role of ECs, while not widely explored outside these contexts (Schmulian & Coetzee, 2019; Smutny & Schreiberova, 2020) owing to being in the introductory stages (Chen et al., 2020), is also constrained by limited pedagogical examples in the educational context (Stathakarou et al., 2020). Nevertheless, while this absence is inevitable, it also provides potential for exploring innovations in educational technology across disciplines (Wang et al., 2021). Furthermore, according to Tegos et al. (2020), investigation of the integration and application of chatbots is still warranted in real-world educational settings. Therefore, the objective of this study is first to address research gaps in the literature, application, and design and development strategies for ECs. Next, by situating the study in these selected research gaps, the effectiveness of ECs is explored for team-based projects in a design course using a quasi-experimental approach.
The term “chatbot” was coined to represent two main attributes: “chat” for the conversational attribute and “bot”, short for robot (Chocarro et al., 2021). Chatbots are automated programs designed to execute instructions based on specific inputs (Colace et al., 2018) and provide feedback that replicates a natural conversational style (Ischen et al., 2020). According to Adamopoulou and Moussiades (2020), six main chatbot parameters determine design and development considerations:
knowledge domain—open and closed domains
services—interpersonal, intrapersonal, and inter-agent chatbots
goals—informative, chat-based, or task-based
input processing and response generation—rule-based model, retrieval-based model, and generative model
human-aid—human-mediated or autonomous chatbots
build—open-source or closed platforms.
These parameters convey that a chatbot can fulfill numerous communication and interaction functionalities depending on needs, platforms, and technologies. Typically, they are an exemplary use of artificial intelligence (AI), which has in turn spawned various state-of-the-art platforms for developing chatbots, such as Google’s DialogFlow, IBM Watson Conversation, Amazon Lex, Flow XO, and Chatterbot (Adamopoulou & Moussiades, 2020). However, while using AI is impressive, chatbot applications are limited because they primarily use the concept of artificial narrow intelligence (ANI) (Holotescu, 2016). Therefore, a chatbot can only perform a single task based on a programmed response, such as examining inputs, providing information, and predicting subsequent moves. While limited, ANI is the only form of AI that humanity has achieved to date (Schmulian & Coetzee, 2019). Conversely, this limitation also enables a non-technical person to design and develop chatbots without much knowledge of AI, machine learning, or neuro-linguistic programming (Gonda et al., 2019). While this creates an “openness with IT” (Schlagwein et al., 2017) across various disciplines, big-tech giants such as Google, Facebook, and Microsoft also view chatbots as the next popular technology for the IoT era (Følstad & Brandtzaeg, 2017). Thus, if chatbots gain uptake, they will change how people obtain information, communicate (Følstad et al., 2019), and learn (Wang et al., 2021); hence the introduction of chatbots for education.
Chatbots deployed through MIM applications are simplistic bots known as messenger bots (Schmulian & Coetzee, 2019). Platforms such as Facebook, WhatsApp, and Telegram have largely introduced chatbots to facilitate automatic around-the-clock interaction and communication, primarily in the service industries. Even though MIM applications were not intended for pedagogical use, owing to their affordance and undemanding role in facilitating communication, they have established themselves as learning platforms (Kumar et al., 2020; Pereira et al., 2019). Hence, as teaching is an act of imparting knowledge through effective communication, the ubiquitous format of a mobile-based chatbot could also potentially enhance the learning experience (Vázquez-Cano et al., 2021); thus, chatbots strategized for educational purposes are described as educational chatbots.
Bii (2013) defined educational chatbots as chatbots conceived for explicit learning objectives, whereas Riel (2020) defined them as programs that aid in achieving educational and pedagogical goals within the parameters of a traditional chatbot. Empirical studies have positioned ECs as a personalized teaching assistant or learning partner (Chen et al., 2020; Garcia Brustenga et al., 2018) that provides scaffolding (tutor support) through practice activities (Garcia Brustenga et al., 2018). They also support personalized learning, multimodal content (Schmulian & Coetzee, 2019), and instant interaction without time limits (Chocarro et al., 2021). Numerous benefits have been reported, reflecting positive experiences (Ismail & Ade-Ibijola, 2019; Schmulian & Coetzee, 2019) that improved learning confidence (Chen et al., 2020), motivation, self-efficacy, learner control (Winkler & Söllner, 2018), engagement (Sreelakshmi et al., 2019), knowledge retention (Cunningham-Nelson et al., 2019), and access to information (Stathakarou et al., 2020). Furthermore, ECs were found to provide value and learning choices (Yin et al., 2021), which in turn is beneficial in customizing learning preferences (Tamayo et al., 2020).
Besides, as ECs promote anytime-anywhere learning strategies (Chen et al., 2020; Ondas et al., 2019), they are individually scalable (Chocarro et al., 2021; Stathakarou et al., 2020) to support learning management (Colace et al., 2018) and the delivery of context-sensitive information (Yin et al., 2021). They thereby encourage participation (Tamayo et al., 2020; Verleger & Pembridge, 2019) and disclosure (Brandtzaeg & Følstad, 2018; Ischen et al., 2020; Wang et al., 2021) of personal aspects that would not be possible in a traditional classroom or face-to-face interaction. Conversely, they may provide an opportunity to promote mental health (Dekker et al., 2020), as they can be perceived as a ‘safe’ environment in which to make mistakes and learn (Winkler & Söllner, 2018). Furthermore, ECs can be operated to answer FAQs automatically, manage online assessments (Colace et al., 2018; Sandoval, 2018), and support peer-to-peer assessment (Pereira et al., 2019).
Moreover, according to Cunningham-Nelson et al. (2019), one of the key benefits of ECs is that they can support a large number of users simultaneously, an undeniable advantage as it reduces instructors' workload. Colace et al. (2018) describe ECs as instrumental when dealing with multiple students, especially for testing behavior, keeping track of progress, and assigning tasks. Furthermore, ECs were also found to increase autonomous learning skills and tend to reduce the need for face-to-face interaction between instructors and students (Kumar & Silva, 2020; Yin et al., 2021), an added advantage for online learning during the onset of the pandemic. Likewise, ECs can also be used purely for administrative purposes, such as delivering notices, reminders, and notifications, and providing data management support (Chocarro et al., 2021). Moreover, they can be a platform for providing standard information such as rubrics, learning resources, and content (Cunningham-Nelson et al., 2019). According to Meyer von Wolff et al. (2020), chatbots are a suitable instructional tool for higher education, and students are receptive to their application.
Meanwhile, Garcia Brustenga et al. (2018) categorized ECs based on eight tasks in the educational context, as described in Table 1. Correspondingly, these tasks reflect that ECs may be potentially beneficial in fulfilling the three learning domains by providing a platform for information retrieval, emotional and motivational support, and skills development.
From the instructor’s perspective, however, ECs could be intricate and demanding, especially when instructors do not know how to code (Schmulian & Coetzee, 2019); automating some of these interactions could help educators focus on other pedagogical needs (Gonda et al., 2019). Nevertheless, enhancing such skills is often time-consuming, and teachers are usually not mentally prepared to take up a designer's (Kim, 2021) or programmer's role. The solution may lie in developing code-free chatbots (Luo & Gonda, 2019), especially via MIM (Smutny & Schreiberova, 2020).
Accordingly, for EC development, it is imperative to ensure there are design principles or models that can be adapted for pedagogical needs. Numerous models have been applied in the educational context, such as CommonKADS (Cameron et al., 2018), Goal-Oriented Requirements Engineering (GORE) (Arruda et al., 2019), and retrieval-based and QANet models (Wu et al., 2020). Nevertheless, these models reflect a coding approach that does not emphasize strategies or principles for achieving learning goals. While Garcia Brustenga et al. (2018), Gonda et al. (2019), Kerly et al. (2007), Satow (2017), Smutny and Schreiberova (2020), and Stathakarou et al. (2020) have highlighted some design guidelines for ECs, a concise model was still required. Therefore, based on the suggestions of these empirical studies, the researcher identified three main design attributes: reliability, pedagogy, and experience (Table 2).
Nevertheless, it was observed that the communicative aspect was absent. Undeniably, chatbots are communication tools that stimulate interpersonal communication (Ischen et al., 2020; Wang et al., 2021); therefore, integrating interpersonal communication was deemed essential. Interpersonal communication is defined as communication between two individuals who have established a relationship (Devito, 2018), and such a relationship is also significant through MIM in representing the communication between peers and instructors (Chan et al., 2020). Furthermore, according to Han and Xu (2020), interpersonal communication moderates the relationship and perception that influence the use of an online learning environment. According to Hobert and Berens (2020), while chatbot interaction could facilitate small talk that influences learning, such capabilities should not be overemphasized. Therefore, it was concluded that four fundamental attributes or strategies are critical for EC design: Reliability, interpersonal communication, Pedagogy, and Experience (RiPE), as explained in Table 3.
Nevertheless, ECs are not without flaws (Fryer et al., 2019). According to Kumar and Silva (2020), acceptance, facilities, and skills are still a significant challenge for students and instructors. Similarly, designing and adapting chatbots into existing learning systems is often taxing (Luo & Gonda, 2019), as instructors sometimes have limited competencies and strategic options for fulfilling EC pedagogical needs (Sandoval, 2018). Moreover, the complexity of designing for and capturing all scenarios of how a user might engage with a chatbot also creates frustration, as expectations may not always be met for both parties (Brandtzaeg & Følstad, 2018). Hence, while ECs as conversational agents may have been projected to substitute learning platforms in the future (Følstad & Brandtzaeg, 2017), much is still to be explored from the stakeholders' viewpoint in facilitating such interventions.
Three categories of research gaps were identified from empirical findings: (i) learning outcomes, (ii) design issues, and (iii) assessment and testing issues. Firstly, research gaps concerning learning outcomes include measuring effectiveness (Schmulian & Coetzee, 2019), perception, social influence (Chaves & Gerosa, 2021), personality traits, affective outcomes (Ciechanowski et al., 2019; Winkler & Söllner, 2018), acceptance (Chen et al., 2020; Chocarro et al., 2021), satisfaction (Stathakarou et al., 2020), interest (Fryer et al., 2019), motivation, learning performance (Yin et al., 2021), mental health (Brandtzaeg & Følstad, 2018), engagement (Riel, 2020), and cognitive effort (Nguyen & Sidorova, 2018). EC studies have primarily focused on language learning, programming, and health courses, implying that EC application and the investigation of learning outcomes have not been examined across various educational domains and levels of education.
Next, as for design and implementation issues, a need has been implied to consider strategies that align EC application with teaching and learning (Haristiani et al., 2019; Sjöström et al., 2018), mainly to supplement activities that can replace face-to-face interactions (Schmulian & Coetzee, 2019). According to Schmulian and Coetzee (2019), mobile-based chatbot applications in the educational domain are still scarce, and while ECs in MIM have been gaining momentum, this has not instigated studies addressing their implementation. Furthermore, there are also limited studies on strategies that can be used to improve the EC's role as an engaging pedagogical communication agent (Chaves & Gerosa, 2021). Besides, it was stipulated that students' expectations and the current reality of simplistic bots may not be aligned, as Miller (2016) claims that ANI’s limitations have restricted chatbots to simplistic menu-prompt interaction.
Lastly, with regard to assessment issues, measurement strategies for both intrinsic and extrinsic learning outcomes (Sjöström et al., 2018) that apply experimental approaches to evaluate user experience (Fryer et al., 2019; Ren et al., 2019) and psychophysiological reactions (Ciechanowski et al., 2019) have been lacking. Nevertheless, Hobert (2019) claims that the main issue with EC assessment is the narrow view used to evaluate outcomes based on specific fields rather than a multidisciplinary approach. Moreover, evaluating the effectiveness of ECs is a complex process (Winkler & Söllner, 2018), as it is unclear which characteristics are important in designing a specific chatbot (Chaves & Gerosa, 2021) and how stakeholders will adapt to its application to support teaching and learning (Garcia Brustenga et al., 2018). Furthermore, there is a need to understand how users experience chatbots (Brandtzaeg & Følstad, 2018), especially when they are not familiar with such interventions (Smutny & Schreiberova, 2020). Finally, owing to the novelty of ECs, the author has not found any studies on ECs in design education or project-based learning that focus on teamwork outcomes.
This study aims to investigate the effects of ECs on learning outcomes, namely learning performance, perception of learning, need for cognition, motivation, creative self-efficacy, and teamwork, in an Instructional Design course built around team-based projects. Learning performance is defined as the students' combined scores accumulated from the project-based learning activities in this study. Next, perception of the learning process is described as the perceived benefits obtained from the course (Wei & Chou, 2020), and the need for cognition as an individual’s tendency to participate in and take pleasure in cognitive activities (de Holanda Coelho et al., 2020). The need for cognition also indicates positive acceptance of problem-solving (Cacioppo et al., 1996) and enjoyment (Park et al., 2008), and it is critical for teamwork, as it fosters team performance and information-processing motivation (Kearney et al., 2009). Hence, we speculated that ECs might influence the need for cognition, as they aid in simplifying learning tasks (Ciechanowski et al., 2019), especially for teamwork.
Subsequently, motivational beliefs are reflected by the perceived self-efficacy and intrinsic value students hold toward their cognitive engagement and academic performance (Pintrich & de Groot, 1990). According to Pintrich et al. (1993), self-efficacy and intrinsic value strongly correlate with task value (Eccles & Wigfield, 2002), such as interest, enjoyment, and usefulness. Furthermore, Walker and Greene (2009) explain that motivational factors that facilitate learning are not always solely reliant on self-efficacy, and Pintrich and de Groot (1990) claim that a combination of self-efficacy and intrinsic value better explains the extent to which students are willing to take on a learning task. The researcher also considered creative self-efficacy, defined as students' belief in producing creative outcomes (Brockhus et al., 2014). Prior research has not mentioned creativity as a learning outcome in EC studies; however, according to Pan et al. (2020), there is a positive relationship between creativity and the need for cognition, as it also reflects individual innovation behavior. Likewise, it was deemed necessary given the nature of the project, which involves design. Lastly, teamwork perception was defined as students' perception of how well they performed as a team to achieve their learning goals. According to Hadjielias et al. (2021), the cognitive state of teams involved in digital innovations is usually affected by the tasks involved within the innovation stages. Hence, the consideration of these variables is warranted.
Therefore, it was hypothesized that using ECs could improve learning outcomes, and a quasi-experimental design comparing the EC and traditional (CT) groups was facilitated, as suggested by Wang et al. (2021), to answer the following research questions.
Does the EC group perform better than students who learn in a traditional classroom setting?
Do students who learn with EC have a better perception of learning, need for cognition, motivational belief, and creative self-efficacy than students in a traditional classroom setting?
Does EC improve teamwork perception in comparison to students in a traditional classroom setting?
According to Adamopoulou and Moussiades (2020), it is impossible to categorize chatbots exhaustively owing to their diversity; nevertheless, specific attributes can be predetermined to guide design and development goals. For example, in this study, the rule-based approach using the if-else technique (Khan et al., 2019) was applied to design the EC. A rule-based chatbot responds only to the rules and keywords programmed into it (Sandoval, 2018); therefore, designing an EC requires anticipating what students may inquire about (Chete & Daudu, 2020). Furthermore, a designer should also consider a chatbot's capabilities for natural language conversation and how it can aid instructors, especially in repetitive, low-cognitive-level tasks such as answering FAQs (Garcia Brustenga et al., 2018). As mentioned previously, the goal can be purely administrative (Chocarro et al., 2021) or pedagogical (Sandoval, 2018).
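As a concrete illustration of the rule-based, if-else technique described above, consider the following minimal sketch; the keywords and canned replies are hypothetical and do not reflect the actual QMT212 rules:

```python
def reply(message: str) -> str:
    """Match programmed keywords in the student's message and return a canned reply."""
    text = message.lower()
    if "deadline" in text:
        return "The project report is due in week 10."
    elif "rubric" in text:
        return "The grading rubric is pinned in the course channel."
    elif "group" in text:
        return "Send your five member names to register your group."
    else:
        # The bot only knows its programmed rules, so unanticipated
        # questions fall through to a generic prompt.
        return "Sorry, I can only answer questions about the course project."
```

Because such a bot responds only to rules and keywords fixed in advance, the designer must anticipate likely student inquiries, as noted above.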
Next, for the design and development of the EC, Textit ( https://textit.com/ ), an interactive chatbot development platform, was utilized. Textit is a third-party tool developed by Nyaruka and UNICEF that offers chatbot building without coding, using the concept of flows, and deployment through various platforms such as Facebook Messenger, Twitter, Telegram, and SMS. For this EC, Telegram was used because of its data encryption security (de Oliveira et al., 2016), cloud storage, and the privacy students and the instructor would have without using their personal social media accounts. Telegram has previously been used in this context for retrieving learning contents (Rahayu et al., 2018; Thirumalai et al., 2019), information and progress (Heryandi, 2020; Setiaji & Paputungan, 2018), learning assessment (Pereira, 2016), project-based learning and teamwork (Conde et al., 2021), and peer-to-peer (P2P) assessment (Pereira et al., 2019).
Subsequently, the chatbot, named after the course code (QMT212), was designed as a teaching assistant for an instructional design course. It was targeted to be a task-oriented (Yin et al., 2021), content-curating, long-term EC (10 weeks) (Følstad et al., 2019). Students worked in groups of five during the ten weeks, and the ECs' interactions were diversified to aid teamwork activities: registering group members, sharing information, monitoring progress, and giving peer-to-peer feedback. According to Garcia Brustenga et al. (2018), an EC can be designed without educational intentionality, where it is used purely for administrative purposes to guide and support learning. Hence, 10 ECs (Table 4) were deployed throughout the semester, where EC1-EC4 were used for administrative purposes as suggested by Chocarro et al. (2021), EC5-EC6 for assignments (Sjöström et al., 2018), EC7 for user feedback (Kerly et al., 2007) and acceptance (Yin et al., 2021), EC8 for monitoring teamwork progress (Colace et al., 2018), EC9 as a project guide FAQ (Sandoval, 2018), and lastly EC10 for peer-to-peer assessment (Colace et al., 2018; Pereira et al., 2019). The ECs were also developed based on micro-learning strategies to ensure that students did not spend long hours with the EC, which may cause cognitive fatigue (Yin et al., 2021). Furthermore, the goal of each EC was to facilitate group collaboration around a project-based activity in which the students were required to design and develop an e-learning tool, write a report, and present their outcomes. Next, based on the new design principles synthesized by the researcher, RiPE was contextualized as described in Table 5.
Example flow diagrams from Textit for the design and development of the chatbot are represented in Fig. 1. The number of choices and possible outputs determines the complexity of the chatbot: some chatbots involve a simple interaction, such as registering a group (Fig. 2), whereas others involve a much more complex interaction, such as peer-to-peer assessment (Fig. 3). Example screenshots from Telegram are depicted in Fig. 4.
Textit flow diagrams
Textit flow diagram for group registration
Textit flow diagram for peer to peer evaluation
Telegram screenshots of the EC
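Conceptually, a flow such as the group-registration example above is a small state machine in which each user reply selects the next node. A minimal sketch under that assumption follows; the states and prompts are hypothetical and do not reflect Textit's actual flow format:

```python
# Each node holds a prompt and a rule mapping the user's answer to the next node.
FLOW = {
    "start": {
        "prompt": "Hi! Type 'register' to enrol your group.",
        "next": lambda ans: "ask_names" if "register" in ans.lower() else "start",
    },
    "ask_names": {
        "prompt": "Please list your five group members.",
        "next": lambda ans: "done" if ans.strip() else "ask_names",
    },
    "done": {"prompt": "Group registered. Good luck!", "next": None},
}

def run_step(state: str, answer: str) -> str:
    """Advance the flow by one step based on the user's answer."""
    rule = FLOW[state]["next"]
    return state if rule is None else rule(answer)
```

The number of nodes and branches grows with the interaction's complexity, mirroring the observation that peer-to-peer assessment required a far larger flow than group registration.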
Participants.
The participants of this study were second-year Bachelor of Education (Teaching English to Speakers of Other Languages, TESOL) students minoring in multimedia and enrolled in a higher learning institution in Malaysia. The 60 students were grouped into two classes (30 students per class), either a traditional learning class (control group, CT) or a chatbot learning class (treatment group, EC). Of the 60 participants, 11 were male and 49 were female; such a distribution is typical for this program. Both groups were exposed to the same learning content, class duration, and instructor; the only differences were the class schedules and that only the treatment group used the EC as an aid for teaching and learning the course. Both groups provided written consent to participate in the study and were given an honorarium for participation. However, additional consent regarding data protection was obtained from the EC group, as the intervention included the use of a social media application; this consent was obtained through EC1: Welcome Bot.
The instructional design course aims to provide fundamental skills in designing effective multimedia instructional materials and covers topics such as need analysis, instructional analysis, learner analysis, context analysis, defining goals and objectives, developing instructional strategy and materials, developing assessment methods, and assessing them by conducting formative and summative assessments. The teaching and learning in both classes are identical, wherein the students are required to design and develop a multimedia-based instructional tool that is deemed their course project. Students independently choose their group mates and work as a group to fulfill their project tasks. Moreover, both classes were also managed through the institution's learning management system to distribute notes, attendance, and submission of assignments.
This study applied an intervention using a quasi-experimental design. Creswell (2012) explained that education-based research in most cases requires intact groups, as creating artificial groups may disrupt classroom learning. Therefore, a one-group pretest–posttest design was applied to both groups for measuring learning outcomes, except for learning performance and perception of learning, which used only a post-test design. The total intervention time was ten weeks, as represented in Fig. 5. Each EC was usually deployed to the treatment class one day before the class, except for EC6 and EC10, which were deployed during the class. This strategy ensured that the instructor could guide the students the next day if there were any issues.
Study procedure
This study used five instruments, measuring perception of learning (Silva et al., 2017); perceived motivational belief, using the Motivated Strategies for Learning Questionnaire (MSLQ) (Pintrich & de Groot, 1990) and a modified MSLQ (Silva et al., 2017); need for cognition, using the Need for Cognition Scale-6 (NCS-6) (de Holanda Coelho et al., 2020); creative self-efficacy, using the Creative Self-Efficacy questionnaire (QCSE) (Brockhus et al., 2014); and teamwork, using a modified version of the Team Assessment Survey Questions (Linse, 2007). The teamwork survey included the following open-ended questions:
Give one specific example of something you learned from the team that you probably would not have learned on your own.
Give one specific example of something other team members learned from you that they probably would not have learned without you.
What problems have you had interacting as a team so far?
Suggest one specific, practical change the team could make that would help improve everyone’s learning.
The instruments were rated on a Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree) and administered using Google Forms for both groups. Meanwhile, learning performance was assessed based on the project, which included a report, a product, a presentation, and peer-to-peer assessment.
A series of one-way analyses of covariance (ANCOVA) was employed to evaluate the differences between the EC and CT groups in the need for cognition, motivational belief for learning, creative self-efficacy, and team assessment. For learning performance and perception of learning, t-tests were used to identify differences between the groups. Effect size was evaluated according to Hattie (2015), for whom an average effect size (Cohen’s d) of 0.42 for an intervention using technologies with college students reflects improved achievement (Hattie, 2017). Furthermore, as the teamwork survey included open-ended questions, the difference between the groups was evaluated qualitatively using text analysis performed with the Voyant tool at https://voyant-tools.org/ (Sinclair & Rockwell, 2021). Voyant Tools is an open-source online tool for text analysis and visualization (Hetenyi et al., 2019); in this study, collocates graphs were used to represent keywords and terms occurring in close proximity as a directed network graph.
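For reference, the pooled-variance Cohen's d and the independent-samples t statistic used in these analyses can be computed as follows; this is a generic sketch of the standard formulas, not the study's data or analysis code:

```python
import math
from statistics import mean, variance  # variance() is the sample variance

def pooled_sd(a, b):
    """Pooled standard deviation of two independent samples."""
    n1, n2 = len(a), len(b)
    return math.sqrt(((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2))

def cohens_d(a, b):
    """Cohen's d: standardized mean difference using the pooled SD."""
    return (mean(a) - mean(b)) / pooled_sd(a, b)

def t_statistic(a, b):
    """Independent-samples t statistic assuming equal variances."""
    n1, n2 = len(a), len(b)
    return (mean(a) - mean(b)) / (pooled_sd(a, b) * math.sqrt(1 / n1 + 1 / n2))
```

Against Hattie's (2017) benchmark of d = 0.42 for technology interventions, values above that threshold are interpreted here as practically meaningful.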
The EC group (µ = 42.500, SD = 2.675), compared with the CT group (µ = 39.933, SD = 2.572), demonstrated a significant difference at t(58) = 3.788, p < 0.001, d = 0.978, indicating a difference in learning achievement in which the EC group outperformed the control group. The Cohen’s d value, as described by Hattie (2017), indicated that the intervention improved learning performance.
The initial Levene’s test and normality checks indicated that the homogeneity-of-variance assumption was met at F(1,58) = 0.077, p = 0.782. The adjusted means of µ = 3.416 for the EC group and µ = 3.422 for the CT group indicated that the post-test scores did not differ significantly at F(1, 57) = 0.002, p = 0.969, η2p = 0.000, d = 0.012, indicating that students’ perception of enjoyment and tendency to engage in the course were similar for both groups.
The initial Levene's test and normality checks indicated that the homogeneity of variance assumption was met, F(1, 58) = 0.062, p = 0.804. The adjusted means of 4.228 for the EC group and 4.200 for the CT group indicated that the post-test scores did not differ significantly, F(1, 57) = 0.046, p = 0.832, η2p = 0.001, d = 0.056, indicating that students' motivation to engage in the course was similar in both groups.
The initial Levene's test and normality checks indicated that the homogeneity of variance assumption was met, F(1, 58) = 0.808, p = 0.372. The adjusted means of 3.566 for the EC group and 3.627 for the CT group indicated that the post-test scores did not differ significantly, F(1, 57) = 0.256, p = 0.615, η2p = 0.004, d = 0.133, indicating that students' perception of creative self-efficacy was similar in both groups.
The EC group (M = 4.370, SD = 0.540) and the CT group (M = 4.244, SD = 0.479) demonstrated no significant difference, t(58) = 0.956, p = 0.343, d = 0.247, indicating no quantitative difference in how students perceived their learning process. Nevertheless, we also asked what impacted their learning (project design and development) the most during the course; as shown in Table 6, both groups (EC = 50.00%, CT = 86.67%) identified the group learning activity as having the most impact. The control group was more partial towards the group activities than the EC group, which also attributed influence to online feedback and guidance (30.00%) and interaction with the lecturer. Both groups also indicated that constructive feedback was mostly obtained from fellow course mates (EC = 56.67%, CT = 50.00%) and the instructor (EC = 36.67%, CT = 43.33%) (Table 7), while minimal effort was made to obtain feedback outside the learning environment.
The initial Levene's test and normality checks indicated that the homogeneity of variance assumption was met, F(1, 58) = 3.088, p = 0.051. The adjusted means of 4.518 for the EC group and 4.049 for the CT group indicated that the post-test scores differed significantly, F(1, 57) = 5.950, p = 0.018, η2p = 0.095, d = 0.641, indicating a significant difference between the groups in how they performed in teams. The Cohen's d value, as described by Hattie (2017), indicated that the intervention improved teamwork.
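The reported partial eta squared values are consistent with the F statistics and degrees of freedom; the standard conversion η2p = F·df1 / (F·df1 + df2) reproduces them (a sketch added here for illustration, not taken from the paper):

```python
def partial_eta_squared(f, df_effect, df_error):
    """Recover partial eta squared from an F statistic and its degrees of freedom."""
    return (f * df_effect) / (f * df_effect + df_error)

# Reported F values with df = (1, 57) reproduce the reported effect sizes:
print(round(partial_eta_squared(5.950, 1, 57), 3))  # 0.095 (teamwork)
print(round(partial_eta_squared(0.256, 1, 57), 3))  # 0.004 (creative self-efficacy)
```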
Next, we asked about their perception of teamwork based on what they learned from their teammates, what they felt others learned from them, the problems faced as a team, and recommendations to improve their experience in the course. Based on the feedback, themes such as teamwork, technology, learning management, emotional management, creativity, and none were identified to categorize the responses. The descriptive data for both groups are presented in Table 8, and the trends reflecting the changes in feedback are described as follows:
Aspects respondents learned from teammates
This question asked respondents to report one aspect they learned from their team that they probably would not have learned independently. Fig. 6 illustrates the changes in each group (EC and CT) pre- and post-intervention. First, teamwork showed an increasing trend for the EC group, whereas the CT group showed only slight changes pre- and post-intervention.
Next, the text-analysis collocates graphs (Fig. 7) for the EC group post-intervention showed a change in teamwork perception: from merely learning new ideas, communicating, and accepting opinions towards a need to cooperate as a team to achieve the goal of developing the project. Communication alone was no longer the main priority, as cooperation towards problem-solving became of utmost importance. Example feedback includes "I learned teamwork and how to solve complicated problems" and "The project was completed in a shorter period of time, compared to if I had done it by myself." In both groups, creativity declined from being an essential aspect in the project's initial phase towards the end of the semester, whereas greater importance was given to emotional management in handling project matters, for example, "I learn to push myself more and commit to the project's success." Nevertheless, the trends in both groups were largely similar.
Change in perception pre- and post-intervention based on aspects learned from teammates
Change in perception for the EC group based on aspects learned from teammates
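The collocates graphs produced by Voyant link a keyword to terms that occur near it in the feedback. The underlying counting can be sketched in plain Python (the feedback strings and window size here are illustrative; Voyant's actual tokenization and defaults may differ):

```python
from collections import Counter

def collocates(texts, keyword, window=3):
    """Count terms occurring within `window` words of each `keyword` occurrence."""
    counts = Counter()
    for text in texts:
        words = text.lower().split()
        for i, word in enumerate(words):
            if word == keyword:
                neighbours = words[max(0, i - window):i + window + 1]
                counts.update(w for w in neighbours if w != keyword)
    return counts

feedback = ["I learned teamwork and how to solve complicated problems",
            "teamwork and acceptance in a group are important"]
print(collocates(feedback, "teamwork").most_common(3))
```

Edges in the directed network graph then connect the keyword to its most frequent collocates, weighted by these counts.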
Aspects teammates learned from the respondent
This question asked about an aspect the respondents believed their team members had learned from them. Initially, both groups reported being unaware of their contribution, stating "nothing" or "I don't know," which was classified as "other" (Fig. 8). Intriguingly, both groups showed a decline in such negative perception post-intervention, which can be attributed to self-realization of their contribution to task completion. Furthermore, different trends were observed between the groups for teamwork: the EC group made more references to increased teamwork contribution, whereas the CT group remained unaffected post-intervention. In terms of technology application, respondents in both groups described being a valuable resource for teaching their peers about technology, with one respondent stating, "My friends learn how to make an application from me."
Change in perception pre and post-intervention based on aspects teammates learned from respondents
Problems respondents faced as a team
Based on the analysis, the main issues faced in both groups were related to teamwork (Fig. 9). The CT group reported more teamwork issues than the EC group, and in both groups these issues escalated during the learning process.
Graphical representation of issues faced as a team
Based on the text analysis, the EC group initially reported issues with finding an appropriate time for group discussions, as some teammates were either absent or unavailable (Fig. 10); one respondent stated, "We can barely meet as a group." Post-intervention, the group reported similar issues, highlighting a lack of communication and availability due to insufficient time and busy learning schedules. An example response is, "We do not have enough time to meet up, and most of us have other work to do." For the CT group pre-intervention, similar issues were observed, but communication issues were more prevalent, as respondents mentioned differences in opinion or a lack of feedback that affected how they solved problems collectively (Fig. 11); for example, "One of the members rarely responds in the group discussion." Post-intervention, the CT group claimed that the main issues, besides communication, were non-contributing members and bias in task distribution, for example, "Some of my teammates were not really contributing" and "The task was not distributed fairly."
Change in perception for the EC group based on issues faced as a team
Change in perception for the CT group based on issues faced as a team
Recommendations to improve teamwork
Two interesting trends were observed in Fig. 12: (a) the EC group reflected a greater need for teamwork, whereas the CT group showed the opposite, and (b) the CT group emphasized learning management for teamwork, whereas the EC group did not. When assessing the changes in the EC group (Fig. 13), transformations were observed between pre- and post-intervention, where students expressed the need for more active collaboration in providing and accepting ideas. One respondent from the treatment group reflected that acceptance is vital for successful collaboration, stating, "Teamwork and acceptance in a group are important." For the CT group (Fig. 14), the complex definition of teamwork pre-intervention, involving communication, confidence, and the contribution of ideas, was transformed into a greater need for commitment: "Make sure everyone is committed and available to contribute accordingly."
Graphical representation of recommendations pre and post-intervention for both groups
Changes in perception for the EC group based on recommendations for learning improvement as a team
Changes in perception for the CT group based on recommendations for learning improvement as a team
According to Winkler and Söllner (2018), ECs have the potential to improve learning outcomes through their ability to personalize the learning experience. This study aimed to evaluate differences in learning outcomes based on the impact of an EC on a project-based learning activity. The outcomes were compared quantitatively and qualitatively to explore how the introduction of the EC influenced learning performance, need for cognition, motivational belief, creative self-efficacy, perception of learning, and teamwork. Based on the findings, the EC influenced learning performance (d = 0.978) and teamwork (d = 0.641); as both Cohen's d values exceeded 0.42, a significant impact on these outcomes was deduced. However, the other outcomes, namely need for cognition, motivational belief, creative self-efficacy, and perception of learning, did not reflect significant differences between the groups.
Firstly, Kearney et al. (2009) explained that in homogeneous teams (as investigated in this study), the need for cognition might have limited influence, as both groups were required to be innovative simultaneously in providing project solutions. Lapina (2020) added that problem-based learning and solving complex problems could improve the need for cognition. Hence, when both classes had the same team-based project task, the homogeneous nature of the sampling may have contributed to the similarity in outcomes that overshadowed the effect of the ECs. The same applies to motivational belief, the central aspect needed to encourage strategic learning behavior (Yen, 2018). Its positive relation with cognitive engagement, performance, and the use of metacognitive strategies (Pintrich & de Groot, 1990) is attributed to the need to regulate and monitor learning (Yilmaz & Baydas, 2017), especially in project-based learning activities (Sart, 2014). Therefore, because of the shared learning task, these attributes were apparent in both groups: the students completed their task (cognitive engagement) and, to do so, were required to plan the task, schedule teamwork activities (metacognition), and design and develop their product systematically.
Moreover, individual personality traits such as motivation have also been found to influence creativity (van Knippenberg & Hirst, 2020), which indirectly influences the need for cognition (Pan et al., 2020). Nevertheless, these nonsignificant findings may make an interesting contribution, as they imply that project-based learning itself tends to improve these personality-based learning outcomes, while the introduction of ECs did not create cognitive barriers that would have affected the cognitive, motivational, and creative processes involved in project-based learning. Furthermore, as there is a triangulated relationship among these outcomes, the authors speculate that these outcomes were justified, especially given the small sample size used, as Rosenstein (2019) explained.
However, when an EC is framed as a human-like conversational agent (Ischen et al., 2020) used as a digital assistant for managing and monitoring students (Brindha et al., 2019), the question arises of how we measure such implications and confirm its capability to transform learning. As a digital assistant, the EC was designed to aid in managing the team-based project: it communicated with students to inquire about challenges and provided support and guidance in completing their tasks. According to Cunningham-Nelson et al. (2019), such a role improves academic performance as students prioritize such needs. For teamwork, technology-mediated communication such as that through ECs has been found to encourage interaction in team projects (Colace et al., 2018), as students perceived the ECs as helping them to learn more, even when they had communication issues (Fryer et al., 2019). This supports the outcome of this study, in which the EC group's learning performance and teamwork outcomes showed larger effect sizes than the CT group's.
As for the qualitative findings, firstly, even though the perception of learning did not vary much statistically, the EC group gave additional weight to group activities, online feedback, and interaction with the lecturer as impactful. Interestingly, the percentage of students who identified "interaction with lecturer" and "online feedback and guidance" was higher in the EC group than in the control group, which may reflect a tendency to perceive the chatbot as an embodiment of the lecturer. As for constructive feedback, the outcomes for both groups were very similar: critiques came mainly from teammates and the instructor, and the ECs were not designed to critique the project task.
Next, it was interesting to observe the differences and similarities between the groups in teamwork. In the EC group, students' accounts shifted from learning from individual team members towards a collective perspective of learning from the team. Similarly, there was more emphasis on how they contributed as a team, especially in providing technical support. For the CT group, little difference was observed pre- and post-intervention in teamwork; however, post-intervention, both groups reflected a reduced emphasis on creativity and a greater emphasis on managing their learning task cognitively and emotionally as a team. Concurrently, the self-realization of their value as contributing team members increased in both groups from pre- to post-intervention, and more so in the CT group.
Furthermore, regarding the problems faced, the EC group's perception shifted from collaboration issues towards communication issues, whereas the opposite was observed for the CT group. According to Kumar et al. (2021), collaborative learning has a symbiotic relationship with communication skills in project-based learning. This study identified a need for more active collaboration in the EC group and for commitment in the CT group. Overall, the group task performed through the EC contributed towards team building and collaboration, whereas for the CT group the concept of individuality was more apparent. Interestingly, no feedback from the EC group mentioned difficulty in using the EC or complexity in interacting with it. It is presumed that students welcomed such interaction as it provided learning support, and that they understood its significance.
Furthermore, the feedback also explains why the other variables, namely the need for cognition, perception of learning, creative self-efficacy, and motivational belief, did not show significant differences. For instance, both groups portrayed high self-realization of their value as team members at the end of the course, and it was deduced that their motivational belief was influenced by higher self-efficacy and intrinsic value. Likewise, in both groups, creativity was overshadowed post-intervention by the significance of teamwork. Therefore, we conclude that ECs significantly impact learning performance and teamwork, but affective-motivational improvement may have been overshadowed by the homogeneous learning process shared by both groups. Furthermore, the main contribution of the ECs can be seen as creating a "team spirit," especially in completing administrative tasks, interacting, and providing feedback on team progress, and such interaction was fundamental in influencing learning performance.
This study reports theoretical and practical contributions in the area of educational chatbots. Firstly, given the novelty of chatbots in educational research, this study enriches the current body of knowledge and literature on EC design characteristics and their impact on learning outcomes. Even though the findings were not uniformly positive regarding the affective-motivational learning outcomes, ECs as tutor support did facilitate the teamwork and cognitive outcomes that underpin project-based learning in design education. In view of that, it is worth noting that the embodiment of ECs as a learning assistant does create openness in interaction and interpersonal relationships among peers, especially if the task is designed to facilitate these interactions.
This study focused on using chatbots as a learning assistant from an educational perspective by comparing the educational implications with those of a traditional classroom. Therefore, the outcomes of this study reflect only the pedagogical outcomes intended for design education and project-based learning, not interaction behaviors. Even though empirical studies have stipulated the role of chatbots as communicative agents that facilitate learning, instructional designers should consider the underdeveloped role of an intelligent tutoring chatbot (Fryer et al., 2019) and question its limits in an authentic learning environment. As users, students may have different or higher expectations of an EC, potentially a spillover of use behavior from chatbots in other service industries. Moreover, questions to ponder are the ethical implications of using an EC, especially outside scheduled learning time, and whether such practices are welcomed, warranted, and accepted by today's learners as a much-needed learning strategy. While ECs can perform some administrative tasks and appear more appealing through multimodal strategies (Garcia Brustenga et al., 2018), the authors question how successful such strategies will be as a personalized learning environment without the teacher as the EC's instructional designer. Therefore, future studies should look into educators' challenges, needs, and competencies and align them to fulfill EC-facilitated learning goals. Furthermore, there is much to be explored in understanding the complex dynamics of human–computer interaction in realizing such goals, especially educational goals currently influenced by the onset of the Covid-19 pandemic. Additionally, future studies should look into different learning outcomes, social media use, personality, age, culture, context, and use behavior to understand the use of chatbots for education.
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Educational chatbots
Control group
Reliability, interpersonal communication, pedagogy, and experience
Goal-oriented requirements engineering
Adamopoulou, E., & Moussiades, L. (2020). An overview of chatbot technology. In: Maglogiannis, I., Iliadis, L. & Pimenidis, E. (eds) Artificial intelligence applications and innovations. AIAI 2020. IFIP advances in information and communication technology , vol 584 (pp. 373–383). Springer. https://doi.org/10.1007/978-3-030-49186-4_31 .
Arruda, D., Marinho, M., Souza, E. & Wanderley, F. (2019) A Chatbot for goal-oriented requirements modeling. In: Misra S. et al. (eds) Computational science and its applications—ICCSA 2019. ICCSA 2019. Lecture Notes in Computer Science , vol 11622 (pp. 506–519). Springer. https://doi.org/10.1007/978-3-030-24305-0_38 .
Bii, P. (2013). Chatbot technology: A possible means of unlocking student potential to learn how to learn. Educational Research, 4 (2), 218–221.
Brandtzaeg, P. B., & Følstad, A. (2018). Chatbots: User changing needs and motivations. Interactions, 25 (5), 38–43. https://doi.org/10.1145/3236669 .
Brindha, S., Dharan, K. R. D., Samraj, S. J. J., & Nirmal, L. M. (2019). AI based chatbot for education management. International Journal of Research in Engineering, Science and Management, 2 (3), 2–4.
Brockhus, S., van der Kolk, T. E. C., Koeman, B., & Badke-Schaub, P. G. (2014). The influence of ambient green on creative performance. Proceeding of International Design Conference (DESIGN 2014) , Croatia, 437–444.
Cacioppo, J. T., Petty, R. E., Feinstein, J. A., & Jarvis, W. B. G. (1996). Dispositional differences in cognitive motivation: The life and times of individuals varying in need for cognition. Psychological Bulletin, 119 , 197–253.
Cameron, G., Cameron, D. M., Megaw, G., Bond, R. B., Mulvenna, M., O’Neill, S. B., Armour, C., & McTear, M. (2018). Back to the future: Lessons from knowledge engineering methodologies for chatbot design and development. Proceedings of British HCI 2018 , 1–5. https://doi.org/10.14236/ewic/HCI2018.153 .
Chan, T. J., Yong, W. K. Y., & Harmizi, A. (2020). Usage of whatsapp and interpersonal communication skills among private university students. Journal of Arts & Social Sciences, 3 (January), 15–25.
Chaves, A. P., & Gerosa, M. A. (2021). How should my chatbot interact? A survey on social characteristics in human–chatbot interaction design. International Journal of Human-Computer Interaction, 37 (8), 729–758. https://doi.org/10.1080/10447318.2020.1841438 .
Chen, H. L., Widarso, G. V., & Sutrisno, H. (2020). A chatbot for learning Chinese: Learning achievement and technology acceptance. Journal of Educational Computing Research, 58 (6), 1161–1189. https://doi.org/10.1177/0735633120929622 .
Chete, F. O., & Daudu, G. O. (2020). An approach towards the development of a hybrid chatbot for handling students’ complaints. Journal of Electrical Engineering, Electronics, Control and Computer Science, 6 (22), 29–38.
Chocarro, R., Cortiñas, M., & Marcos-Matás, G. (2021). Teachers’ attitudes towards chatbots in education: A technology acceptance model approach considering the effect of social language, bot proactiveness, and users’ characteristics. Educational Studies, 00 (00), 1–19. https://doi.org/10.1080/03055698.2020.1850426 .
Ciechanowski, L., Przegalinska, A., Magnuski, M., & Gloor, P. (2019). In the shades of the uncanny valley: An experimental study of human–chatbot interaction. Future Generation Computer Systems, 92 , 539–548. https://doi.org/10.1016/j.future.2018.01.055 .
Colace, F., De Santo, M., Lombardi, M., Pascale, F., Pietrosanto, A., & Lemma, S. (2018). Chatbot for e-learning: A case of study. International Journal of Mechanical Engineering and Robotics Research, 7 (5), 528–533. https://doi.org/10.18178/ijmerr.7.5.528-533 .
Conde, M. Á., Rodríguez-Sedano, F. J., Hernández-García, Á., Gutiérrez-Fernández, A., & Guerrero-Higueras, Á. M. (2021). Your teammate just sent you a new message! The effects of using Telegram on individual acquisition of teamwork competence. International Journal of Interactive Multimedia and Artificial Intelligence, 6 (6), 225. https://doi.org/10.9781/ijimai.2021.05.007 .
Cunningham-Nelson, S., Boles, W., Trouton, L., & Margerison, E. (2019). A review of chatbots in education: practical steps forward. In 30th Annual conference for the australasian association for engineering education (AAEE 2019): Educators becoming agents of change: innovate, integrate, motivate. Engineers Australia , 299–306.
Creswell, J. W. (2012). Educational research: Planning, conducting and evaluating quantitative and qualitative research (4th ed.). Pearson Education.
de Holanda Coelho, G. L., Hanel, H. P., & Wolf, J. L. (2020). The very efficient assessment of need for cognition: Developing a six-item version. Assessment, 27 (8), 1870–1885. https://doi.org/10.1177/1073191118793208 .
de Oliveira, J. C., Santos, D. H., & Neto, M. P. (2016). Chatting with Arduino platform through Telegram Bot. 2016 IEEE International Symposium on Consumer Electronics (ISCE) , 131–132. https://doi.org/10.1109/ISCE.2016.7797406 .
Dekker, I., De Jong, E. M., Schippers, M. C., De Bruijn-Smolders, M., Alexiou, A., & Giesbers, B. (2020). Optimizing students’ mental health and academic performance: AI-enhanced life crafting. Frontiers in Psychology, 11 (June), 1–15. https://doi.org/10.3389/fpsyg.2020.01063 .
Devito, J. (2018). The interpersonal communication book (15th ed.). Pearson Education Limited.
Eccles, J. S., & Wigfield, A. (2002). Motivational Beliefs, Values, and Goals. Annual Review of Psychology , 53 (1), 109–132. https://doi.org/10.1146/annurev.psych.53.100901.135153 .
Eeuwen, M. V. (2017). Mobile conversational commerce: messenger chatbots as the next interface between businesses and consumers . Unpublished Master's thesis. University of Twente.
Følstad, A., Skjuve, M., & Brandtzaeg, P. B. (2019). Different chatbots for different purposes: towards a typology of chatbots to understand interaction design. In: Bodrunova S. et al. (eds) Internet Science. INSCI 2018. Lecture Notes in Computer Science , vol 11551 (pp. 145–156). Springer. https://doi.org/10.1007/978-3-030-17705-8_13 .
Følstad, A., & Brandtzaeg, P. B. (2017). Chatbots and the new world of HCI. Interactions, 24 (4), 38–42. https://doi.org/10.1145/3085558 .
Fryer, L. K., Nakao, K., & Thompson, A. (2019). Chatbot learning partners: Connecting learning experiences, interest and competence. Computers in Human Behavior, 93 , 279–289. https://doi.org/10.1016/j.chb.2018.12.023 .
Garcia Brustenga, G., Fuertes-Alpiste, M., & Molas-Castells, N. (2018). Briefing paper: Chatbots in education . eLearn Center, Universitat Oberta de Catalunya. https://doi.org/10.7238/elc.chatbots.2018 .
Gonda, D. E., Luo, J., Wong, Y. L., & Lei, C. U. (2019). Evaluation of developing educational chatbots based on the seven principles for good teaching. Proceedings of the 2018 IEEE international conference on teaching, assessment, and learning for engineering, TALE 2018 , Australia, 446–453. IEEE. https://doi.org/10.1109/TALE.2018.8615175 .
Hadjielias, E., Dada, O., Discua Cruz, A., Zekas, S., Christofi, M., & Sakka, G. (2021). How do digital innovation teams function? Understanding the team cognition-process nexus within the context of digital transformation. Journal of Business Research, 122 , 373–386. https://doi.org/10.1016/j.jbusres.2020.08.045 .
Han, R., & Xu, J. (2020). A comparative study of the role of interpersonal communication, traditional media and social media in pro-environmental behavior: A China-based study. International Journal of Environmental Research and Public Health . https://doi.org/10.3390/ijerph17061883 .
Haristiani, N., Danuwijaya, A. A., Rifai, M. M., & Sarila, H. (2019). Gengobot: A chatbot-based grammar application on mobile instant messaging as language learning medium. Journal of Engineering Science and Technology, 14 (6), 3158–3173.
Hattie, J. (2017). Visible Learningplus 250+ influences on student achievement. In Visible learning plus . www.visiblelearningplus.com/content/250-influences-student-achievement .
Hattie, J. (2015). The applicability of Visible Learning to higher education. Scholarship of Teaching and Learning in Psychology, 1 (1), 79–91. https://doi.org/10.1037/stl0000021 .
Heryandi, A. (2020). Developing chatbot for academic record monitoring in higher education institution. IOP Conference Series: Materials Science and Engineering . https://doi.org/10.1088/1757-899X/879/1/012049 .
Hetenyi, G., Lengyel, A., & Szilasi, M. (2019). Quantitative analysis of qualitative data: Using voyant tools to investigate the sales-marketing interface. Journal of Industrial Engineering and Management, 12 (3), 393–404. https://doi.org/10.3926/jiem.2929 .
Hobert, S. (2019). How are you, chatbot? Evaluating chatbots in educational settings—Results of a literature review. In N. Pinkwart & J. Konert (Eds.), DELFI 2019 (pp. 259–270). Gesellschaft für Informatik, Bonn. https://doi.org/10.18420/delfi2019_289 .
Hobert S. & Berens F. (2020). Small talk conversations and the long-term use of chatbots in educational settings—experiences from a field study. In: Følstad A. et al. (eds) Chatbot research and design. CONVERSATIONS 2019. Lecture Notes in Computer Science, vol 11970 (pp. 260–272). Springer. https://doi.org/10.1007/978-3-030-39540-7_18 .
Holotescu, C. (2016). MOOCBuddy: A chatbot for personalized learning with MOOCs. In: A. Iftene & J. Vanderdonckt (Eds.), Proceedings of the 13th international conference on human-computer interaction RoCHI’2016 , Romania, 91–94.
Ischen C., Araujo T., Voorveld H., van Noort G., Smit E. (2020) Privacy concerns in chatbot interactions. In: Følstad A. et al. (eds) Chatbot research and design. CONVERSATIONS 2019. Lecture Notes in Computer Science , vol 11970 (pp. 34–48). Springer. https://doi.org/10.1007/978-3-030-39540-7_3 .
Ismail, M., & Ade-Ibijola, A. (2019). Lecturer’s Apprentice: A chatbot for assisting novice programmers. Proceedings—2019 International multidisciplinary information technology and engineering conference, IMITEC 2019 . South Africa, 1–8. IEEE. https://doi.org/10.1109/IMITEC45504.2019.9015857 .
Kearney, E., Gebert, D., & Voelpel, S. (2009). When and how diversity benefits teams: The importance of team members’ need for cognition. Academy of Management Journal, 52 (3), 581–598. https://doi.org/10.5465/AMJ.2009.41331431 .
Kerly, A., Hall, P., & Bull, S. (2007). Bringing chatbots into education: Towards natural language negotiation of open learner models. Knowledge-Based Systems, 20 (2), 177–185. https://doi.org/10.1016/j.knosys.2006.11.014 .
Khan, A., Ranka, S., Khakare, C., & Karve, S. (2019). NEEV: An education informational chatbot. International Research Journal of Engineering and Technology, 6 (4), 492–495.
Kim, M. S. (2021). A systematic review of the design work of STEM teachers. Research in Science & Technological Education, 39 (2), 131–155. https://doi.org/10.1080/02635143.2019.1682988 .
Kumar, J. A., & Silva, P. A. (2020). Work-in-progress: A preliminary study on students’ acceptance of chatbots for studio-based learning. Proceedings of the 2020 IEEE Global Engineering Education Conference (EDUCON) , Portugal, 1627–1631. IEEE https://doi.org/10.1109/EDUCON45650.2020.9125183 .
Kumar, J. A., Bervell, B., Annamalai, N., & Osman, S. (2020). Behavioral intention to use mobile learning: Evaluating the role of self-efficacy, subjective norm, and WhatsApp use habit. IEEE Access, 8, 208058–208074. https://doi.org/10.1109/ACCESS.2020.3037925 .
Kumar, J. A., Silva, P. A., & Prelath, R. (2021). Implementing studio-based learning for design education: A study on the perception and challenges of Malaysian undergraduates. International Journal of Technology and Design Education, 31 (3), 611–631. https://doi.org/10.1007/s10798-020-09566-1 .
Lapina, A. (2020). Does exposure to problem-based learning strategies increase postformal thought and need for cognition in higher education students? A quasi-experimental study (Publication No. 28243240) Doctoral dissertation, Texas State University-San Marcos. ProQuest Dissertations & Theses Global.
Linse, A. R. (2007). Team peer evaluation. In Schreyer Institute for Teaching Excellence . http://www.schreyerinstitute.psu.edu/ .
Luo, C. J., & Gonda, D. E. (2019). Code Free Bot: An easy way to jumpstart your chatbot! Proceeding of the 2019 IEEE International Conference on Engineering, Technology and Education (TALE 2019) , Australia, 1–3, IEEE. https://doi.org/10.1109/TALE48000.2019.9226016 .
Meyer von Wolff, R., Nörtemann, J., Hobert, S., Schumann, M. (2020) Chatbots for the information acquisition at universities—A student’s view on the application area. In: Følstad A. et al. (eds) Chatbot research and design. CONVERSATIONS 2019. Lecture Notes in Computer Science , vol 11970 (pp. 231–244). Springer. https://doi.org/10.1007/978-3-030-39540-7_16 .
Miller, E. (2016). How chatbots will help education. Venturebeat. http://venturebeat.com/2016/09/29/how-chatbots-will-help-education/ .
Nguyen, Q. N., & Sidorova, A. (2018). Understanding user interactions with a chatbot: A self-determination theory approach. Proceedings of the Twenty-Fourth Americas Conference on Information Systems , United States of America, 1–5. Association for Information Systems (AIS).
Oke, A., & Fernandes, F. A. P. (2020). Innovations in teaching and learning: Exploring the perceptions of the education sector on the 4th industrial revolution (4IR). Journal of Open Innovation: Technology, Market, and Complexity., 6 (2), 31. https://doi.org/10.3390/JOITMC6020031 .
Ondas, S., Pleva, M., & Hladek, D. (2019). How chatbots can be involved in the education process. Proceedings of the 17th IEEE International Conference on Emerging eLearning Technologies and Applications (ICETA 2019), Slovakia, 575–580. https://doi.org/10.1109/ICETA48886.2019.9040095.
Pan, Y., Shang, Y., & Malika, R. (2020). Enhancing creativity in organizations: The role of the need for cognition. Management Decision . https://doi.org/10.1108/MD-04-2019-0516 .
Park, H. S., Baker, C., & Lee, D. W. (2008). Need for cognition, task complexity, and job satisfaction. Journal of Management in Engineering, 24 (2), 111–117. https://doi.org/10.1061/(asce)0742-597x(2008)24:2(111) .
Pereira, J. (2016). Leveraging chatbots to improve self-guided learning through conversational quizzes. Proceedings of the Fourth International Conference on Technological Ecosystems for Enhancing Multiculturality—TEEM ’16 , Spain, 911–918, ACM. https://doi.org/10.1145/3012430.3012625 .
Pereira, J., Fernández-Raga, M., Osuna-Acedo, S., Roura-Redondo, M., Almazán-López, O., & Buldón-Olalla, A. (2019). Promoting learners’ voice productions using chatbots as a tool for improving the learning process in a MOOC. Technology, Knowledge and Learning, 24 (4), 545–565. https://doi.org/10.1007/s10758-019-09414-9 .
Pham, X. L., Pham, T., Nguyen, Q. M., Nguyen, T. H., & Cao, T. T. H. (2018). Chatbot as an intelligent personal assistant for mobile language learning. Proceedings of the 2018 2nd international conference on education and e-Learning—ICEEL 2018 , Indonesia, 16–21. ACM. https://doi.org/10.1145/3291078.3291115 .
Pintrich, P. R., & de Groot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82 (1), 33–40. https://doi.org/10.1037/0022-0663.82.1.33 .
Pintrich, P. R., Smith, D. A. F., Garcia, T., & Mckeachie, W. J. (1993). Reliability and predictive validity of the motivated strategies for learning questionnaire (MSLQ). Educational and Psychological Measurement, 53 (3), 801–813. https://doi.org/10.1177/0013164493053003024 .
Rahayu, Y. S., Wibawa, S. C., Yuliani, Y., Ratnasari, E., & Kusumadewi, S. (2018). The development of BOT API social media Telegram about plant hormones using Black Box Testing. IOP Conference Series: Materials Science and Engineering . https://doi.org/10.1088/1757-899X/434/1/012132 .
Rahman, A. M., Al Mamun, A., & Islam, A. (2018). Programming challenges of chatbot: Current and future prospective. 5th IEEE Region 10 Humanitarian Technology Conference 2017 (R10-HTC 2017) , India, 75–78, IEEE. https://doi.org/10.1109/R10-HTC.2017.8288910 .
Ren, R., Castro, J. W., Acuña, S. T., & De Lara, J. (2019). Evaluation techniques for chatbot usability: A systematic mapping study. International Journal of Software Engineering and Knowledge Engineering, 29 (11–12), 1673–1702. https://doi.org/10.1142/S0218194019400163 .
Riel, J. (2020). Essential features and critical issues with educational chatbots: toward personalized learning via digital agents. In: M. Khosrow-Pour (Ed.), Handbook of research on modern educational technologies, applications, and management (pp. 246–262). IGI Global. https://doi.org/10.1142/S0218194019400163 .
Rosenstein, L. D. (2019). Research design and analysis: A primer for the non-statistician . Wiley.
Sandoval, Z. V. (2018). Design and implementation of a chatbot in online higher education settings. Issues in Information Systems, 19 (4), 44–52. https://doi.org/10.48009/4_iis_2018_44-52 .
Sart, G. (2014). The effects of the development of metacognition on project-based learning. Procedia—Social and Behavioral Sciences, 152 , 131–136. https://doi.org/10.1016/j.sbspro.2014.09.169 .
Satow, L. (2017). Chatbots as teaching assistants: Introducing a model for learning facilitation by AI Bots. SAP Community . https://blogs.sap.com/2017/07/12/chatbots-as-teaching-assistants-introducing-a-model-for-learning-facilitation-by-ai-bots/ .
Schlagwein, D., Conboy, K., Feller, J., Leimeister, J. M., & Morgan, L. (2017). “Openness” with and without information technology: A framework and a brief history. Journal of Information Technology, 32 (4), 297–305. https://doi.org/10.1057/s41265-017-0049-3 .
Schmulian, A., & Coetzee, S. A. (2019). The development of Messenger bots for teaching and learning and accounting students’ experience of the use thereof. British Journal of Educational Technology, 50 (5), 2751–2777. https://doi.org/10.1111/bjet.12723 .
Setiaji, H., & Paputungan, I. V. (2018). Design of Telegram Bots for campus information sharing. IOP Conference Series: Materials Science and Engineering, 325 , 1–6. https://doi.org/10.1088/1757-899X/325/1/012005 .
Silva, P. A., Polo, B. J., & Crosby, M. E. (2017). Adapting the studio based learning methodology to computer science education. In S. Fee, A. Holland-Minkley, & T. Lombardi (Eds.), New directions for computing education (pp. 119–142). Springer. https://doi.org/10.1007/978-3-319-54226-3_8.
Sinclair, S., & Rockwell, G. (2021). Voyant tools (2.4). https://voyant-tools.org/ .
Sjöström, J., Aghaee, N., Dahlin, M., & Ågerfalk, P. J. (2018). Designing chatbots for higher education practice. Proceedings of the 2018 AIS SIGED International Conference on Information Systems Education and Research .
Smutny, P., & Schreiberova, P. (2020). Chatbots for learning: A review of educational chatbots for the Facebook Messenger. Computers and Education, 151 (February), 103862. https://doi.org/10.1016/j.compedu.2020.103862 .
Sreelakshmi, A. S., Abhinaya, S. B., Nair, A., & Jaya Nirmala, S. (2019). A question answering and quiz generation chatbot for education. Grace Hopper Celebration India (GHCI), 2019 , 1–6. https://doi.org/10.1109/GHCI47972.2019.9071832 .
Stathakarou, N., Nifakos, S., Karlgren, K., Konstantinidis, S. T., Bamidis, P. D., Pattichis, C. S., & Davoody, N. (2020). Students’ perceptions on chatbots’ potential and design characteristics in healthcare education. In J. Mantas, A. Hasman, & M. S. Househ (Eds.), The importance of health informatics in public health during a pandemic (Vol. 272, pp. 209–212). IOS Press. https://doi.org/10.3233/SHTI200531 .
Tamayo, P. A., Herrero, A., Martín, J., Navarro, C., & Tránchez, J. M. (2020). Design of a chatbot as a distance learning assistant. Open Praxis, 12 (1), 145. https://doi.org/10.5944/openpraxis.12.1.1063 .
Tegos, S., Demetriadis, S., Psathas, G., & Tsiatsos, T. (2020). A configurable agent to advance peers’ productive dialogue in MOOCs. In A. Følstad et al. (Eds.), Chatbot research and design. CONVERSATIONS 2019. Lecture Notes in Computer Science, vol. 11970 (pp. 245–259). Springer. https://doi.org/10.1007/978-3-030-39540-7_17.
Thirumalai, B., Ramanathan, A., Charania, A., & Stump, G. (2019). Designing for technology-enabled reflective practice: teachers’ voices on participating in a connected learning practice. In R. Setty, R. Iyenger, M. A. Witenstein, E. J. Byker, & H. Kidwai (Eds.), Teaching and teacher education: south asian perspectives (pp. 243–273). Palgrave Macmillan. https://doi.org/10.1016/S0742-051X(01)00046-4 .
van Knippenberg, D., & Hirst, G. (2020). A motivational lens model of person × situation interactions in employee creativity. Journal of Applied Psychology, 105 (10), 1129–1144. https://doi.org/10.1037/apl0000486 .
Vázquez-Cano, E., Mengual-Andrés, S., & López-Meneses, E. (2021). Chatbot to improve learning punctuation in Spanish and to enhance open and flexible learning environments. International Journal of Educational Technology in Higher Education, 18 (1), 33. https://doi.org/10.1186/s41239-021-00269-8 .
Verleger, M., & Pembridge, J. (2019). A pilot study integrating an AI-driven chatbot in an introductory programming course. Proceedings of the 2018 IEEE Frontiers in Education Conference (FIE), USA. IEEE. https://doi.org/10.1109/FIE.2018.8659282.
Walker, C. O., & Greene, B. A. (2009). The relations between student motivational beliefs and cognitive engagement in high school. Journal of Educational Research, 102 (6), 463–472. https://doi.org/10.3200/JOER.102.6.463-472 .
Wang, J., Hwang, G., & Chang, C. (2021). Directions of the 100 most cited chatbot-related human behavior research: A review of academic publications. Computers and Education: Artificial Intelligence, 2 , 1–12. https://doi.org/10.1016/j.caeai.2021.100023 .
Wei, H. C., & Chou, C. (2020). Online learning performance and satisfaction: Do perceptions and readiness matter? Distance Education, 41 (1), 48–69. https://doi.org/10.1080/01587919.2020.1724768 .
Winkler, R., & Söllner, M. (2018). Unleashing the potential of chatbots in education: A state-of-the-art analysis. In Academy of Management Annual Meeting (AOM) . https://www.alexandria.unisg.ch/254848/1/JML_699.pdf .
Wu, E. H. K., Lin, C. H., Ou, Y. Y., Liu, C. Z., Wang, W. K., & Chao, C. Y. (2020). Advantages and constraints of a hybrid model K-12 E-Learning assistant chatbot. IEEE Access, 8 , 77788–77801. https://doi.org/10.1109/ACCESS.2020.2988252 .
Yen, A. M. N. L. (2018). The influence of self-regulation processes on metacognition in a virtual learning environment. Educational Studies, 46 (1), 1–17. https://doi.org/10.1080/03055698.2018.1516628 .
Yilmaz, R. M., & Baydas, O. (2017). An examination of undergraduates’ metacognitive strategies in pre-class asynchronous activity in a flipped classroom. Educational Technology Research and Development, 65 (6), 1547–1567. https://doi.org/10.1007/s11423-017-9534-1 .
Yin, J., Goh, T. T., Yang, B., & Xiaobin, Y. (2021). Conversation technology with micro-learning: The impact of chatbot-based learning on students’ learning motivation and performance. Journal of Educational Computing Research, 59 (1), 154–177. https://doi.org/10.1177/0735633120952067 .
This study was funded under the Universiti Sains Malaysia Short Term Research Grant 304/PMEDIA/6315219.
Authors and affiliations.
Centre for Instructional Technology and Multimedia, Universiti Sains Malaysia, Minden, Pulau Pinang, Malaysia
Jeya Amantha Kumar
The author read and approved the final manuscript.
Correspondence to Jeya Amantha Kumar .
Competing interests.
The author declares that there is no conflict of interest.
Informed consent was obtained from all participants for being included in the study based on the approval of The Human Research Ethics Committee of Universiti Sains Malaysia (JEPeM) Ref No: USM/JEPeM/18050247.
Publisher's note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .
Cite this article.
Kumar, J.A. Educational chatbots for project-based learning: investigating learning outcomes for a team-based design course. Int J Educ Technol High Educ 18 , 65 (2021). https://doi.org/10.1186/s41239-021-00302-w
Received : 02 July 2021
Accepted : 23 September 2021
Published : 15 December 2021
DOI : https://doi.org/10.1186/s41239-021-00302-w
Designing a chatbot for contemporary education: A systematic literature review
2. Related work
3.1. Eligibility criteria
3.2. Information sources
3.3. Search strategy
3.4. Selection process and data collection process
3.5. Data items
4.1. Educational grade levels
4.2. What are the steps for designing an educational chatbot for contemporary education
4.2.1. Research
4.2.2. Analysis
4.2.3. Definition
4.2.4. Shaping and formulation
4.2.5. Adaptation and modifications
4.2.6. Design principles and approaches
4.2.7. Implementation and training of the ECA
4.2.8. Testing of the ECA
4.2.9. Guidance and motivation provision for the adoption of the ECA from the students
4.2.10. Application in the learning process
4.2.11. Evaluation of an ECA
5. Discussion
6. Limitations
7. Conclusions and further research
Author contributions
Data availability statement
Conflicts of interest
Reference | Type | Year | Number of Primary Studies/Number of Data Sources | Method | Subject |
---|---|---|---|---|---|
[ ] | SLR | 2021 | 29/1 | Coding scheme based on Chang & Hwang (2019) and Hsu et al. (2012) | Learning domains of ECAs; Learning strategies used by ECAs; Research design of studies in the field; Analysis methods utilized in relevant studies; Nationalities of authors and journals publishing relevant studies; Productive authors in the field |
[ ] | SLR | 2023 | 36/3 | Guidelines based on Keele et al. (2007) | Fields in which ECAs are used; Platforms on which the ECAs operate on; Roles that ECAs play when interacting with students; Interaction styles that are supported by the ECAs; Principles that are used to guide the design of ECAs; Empirical evidence that exists to support the capability of using ECAs as teaching assistants for students; Challenges of applying and using ECAs in the classroom |
[ ] | SLR | 2021 | 53/6 | Methods based on Kitchenham et al. (2007), Wohlin et al. (2012) and Aznoli & Navimipour (2017) | The most recent research status or profile for ECA applications in the education domain; The primary benefits of ECA applications in education; The challenges faced in the implementation of an ECA system in education; The potential future areas of education that could benefit from the use of ECAs |
[ ] | SLR | 2020 | 80/8 | PRISMA framework | The different types of educational and/or educational environment chatbots currently in use; The way ECAs affect student learning or service improvement; The type of technology ECAs use and the learning result that is obtained from each of them; The cases in which a chatbot helps learning under conditions similar to those of a human tutor; The possibility of evaluating the quality of chatbots and the techniques that exist for that |
[ ] | SLR | 2020 | 47 CAs/1 | undefined | Qualitative assessment of ECAs that operate on Meta Messenger |
[ ] | SLR | 2021 | 74/4 | PRISMA framework | The objectives for implementing chatbots in education; The pedagogical roles of chatbots; The application scenarios that have been used to mentor students; The extent in which chatbots are adaptable to personal students’ needs; The domains in which chatbots have been applied so far |
Inclusion Criteria | Exclusion Criteria |
---|---|
IC1: The examined chatbot application was used in teaching a subject to students and contains information about designing, integrating or evaluating it, as well as mentioning the tools or environments used in order to achieve that. | EC1: The chatbot application was designed for the training of specific target groups but not students or learners of an educational institution |
IC2: The publication year of the article is between 2018 and 2023 | |
IC3: The document type is a journal article | EC2: Articles that are focused too much on the results for the learners and do not describe the role of the CA and how it contributes to the results |
IC4: The retrieved article is written in English |
Educational Grade Level | References |
---|---|
K-12 education (14) | [ , , , , , , , , , , , , , ] |
Tertiary education (43) | [ , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] |
Unspecified (16) | [ , , , , , , , , , , , , , , , ] |
Category | References |
---|---|
Linear (28) | [ , , , , , , , , , , , , , , , , , , , , , , , , , , , ] |
Iterative (3) | [ , , ] |
Kind of Information | References |
---|---|
A suitable methodology to develop an ECA. | [ ] |
Communication methods and strategies that are going to be used to form the function of the educational tutor. | [ ] |
Design requirements to develop an ECA. | [ ] |
Design principles to develop an ECA. | [ ] |
Empirical information by examining similar applications. | [ , , , ] |
Learning methods that are going to be used to form the function of the educational tutor. | [ ] |
Mechanism prototypes that will be used to develop the educational agent. | [ ] |
Ready-to-use chatbots for the learning process. | [ , , ] |
Suitable environments and tools to develop an ECA. | [ , , , , ] |
Technical requirements to develop an ECA. | [ , ] |
Theories relevant to the development of an ECA. | [ ] |
Requirements | References |
---|---|
Users’ needs and expectations. | [ , , , , , , , , ] |
Technical requirements. | [ ] |
Collection of students’ common questions and most searched topics to be used in the educational material. | [ , , , , , , , ] |
Objects to Be Defined | References |
---|---|
An application flow to show the function of the ECA. | [ ] |
Communication channels the ECA is going to be accessible from. | Common to all |
Education plan. | [ ] |
Learning material and items the chatbot is going to use. | Common to all |
Learning methods and techniques that will be used to develop the agent. | [ , ] |
Learning objectives, tasks and goals. | [ , ] |
Student personas and conversational standards of the student–agent interaction. | [ ] |
Teaching and learning processes. | [ , , , ] |
The conversational flow between the student and the chatbot. | [ , ] |
The design principles of the chatbot. | [ , , ] |
The processing mechanisms of the chatbots that are going to be used. | [ , ] |
The purpose of the educational CA. | [ , , , ] |
Tools and environments or the ready solutions of already built CAs that are going to be used. | Common to all |
Usage Roles of ECAs | References |
---|---|
Course evaluator | [ ] |
Learner taught by the student | [ , ] |
Learning guide in a gamification environment | [ ] |
Self-Evaluator (learning partner/educational tutor) | [ , , , , , , , , , , , , , , , , , , , , , , , ] |
Storytelling conversational assistant | [ ] |
Student assistant (provider of supportive educational material) | [ , , , , , ] |
Question solver | [ , ] |
Proposed Steps | References |
---|---|
Evaluating students’ questions to measure their complexity and bias rate. | [ ] |
Shaping of the final learning material that is going to be used from the educational tutor. | Common to all |
Proposed Steps | References |
---|---|
Evaluating students’ competence in answering questions to modify the learning content accordingly. | [ ] |
Enriching the educational material with various forms of the educational material apart from text messages. | [ , , , ] |
Studying the curriculum of the educational institute and adapting the design principles of the ECA to it. | [ , , ] |
Studying the curriculum of the educational institute and modifying the function of the ECA to it. | [ , ] |
Adapting the function of the ECA to the teaching process. | [ ] |
Getting domain experts’ opinions to modify the function and the learning material used by the ECA. | [ , ] |
Design Direction | Design Suggestion | References |
---|---|---|
Adaption of the ECA’s function to students’ needs | Adaptation of the ECA’s function to the emotional needs of the students | [ ] |
Alignment of the ECA’s function with students’ learning needs | [ , , , , ] | |
Adjustment of the ECA’s function to the curriculum of the students | [ , ] | |
Modification of the ECA’s function to match user expectations | [ , ] | |
Definition of the ECA’s vocabulary and expression style to be suitable with the students’ linguistic capabilities | [ ] | |
Construction of the ECA in order to be an inclusive educational tool (suitable for every learner) | [ ] | |
Accessibility | Alignment of the ECA’s function with the selected communication channels | [ ] |
The ECA should be accessible from various communication channels | [ , ] | |
Guaranteed chatbot availability regardless of the external conditions | [ , ] | |
Conversational traits | Equipment of the ECA with many variations for phrases with the same meaning | [ , , ] |
Acceptance of oral messages as input | [ , ] | |
The ECA should address students by their name to provide personalized conversations | [ , ] | |
Avoidance of spelling errors | [ ] | |
Capability to discuss wide range for discussion topics including casual, non-educational subjects | [ , , , ] | |
The ECA should collect more information when the user cannot be understood by discussing with them to identify their intent | [ , ] | |
The ECA should let the user define the conversational flow when the CA cannot respond | [ ] | |
Provision of messages about the limitation of the ECA when it cannot respond | [ ] | |
Provision of motivational comments and rewarding messages to students | [ , , , , , , ] | |
The ECA should produce quick responses | [ , ] | |
Redirection of students to an educator’s communication channel when the ECA cannot respond | [ ] | |
Wise usage of humor in the user interaction | [ ] | |
Usage of easy-to-understand language in the response | [ ] | |
Utilization of human-like conversational characteristics such as emoticons, avatar representations and greeting messages | [ , , , , ] | |
Utilization of previous students’ responses to improve its conversational ability and provide personalized communication | [ , , , , ] | |
Usage of button-formed topic suggestion for quicker interaction | [ ] | |
Usage of “start” and “restart” interaction buttons | [ ] | |
Design process general suggestions | Engage every possible stakeholder of the development process to gain better results | [ ] |
Make the database of the system expendable | [ ] | |
Handling of the educational material | Explanation of the educational material from various perspectives | [ , , ] |
The ECA should predict different subjects the students did not comprehend and provide relevant supportive material | [ ] | |
Suggestion of external learning sources to students when it cannot respond | [ ] | |
Proposition of learning topics similar to the current to help students learn on their own | [ ] | |
The ECA should provide educational material in small segments with specific content | [ , , ] | |
Provision of educational material in various forms apart from text message | [ , , , , , , , , ] | |
Navigation buttons between the segments of the presented educational material | [ ] | |
Oral narration to accompany the offered learning material | [ ] | |
Handling of quizzes, tests or self-evaluation material | [ , , , , , , , , , , , , , , , , , , , , , , , ] | |
Recommendation of suitable practice exercises to the students | [ ] | |
Instruction provision | The ECA should provide usage instructions through the functional environment of the ECA | [ , ] |
Provided learning experience | Integration of other technologies such as AR to provide better user experience | [ ] |
The ECA should provide feedback to students | [ , , , , ] | |
The ECA should provide personalized learning experience | [ ] | |
Use of gamification elements | [ ] | |
Question handling to and control by the students | Addition of buttons so students can handle the questions they cannot answer | [ , ] |
The ECA should allow students to trace back to previous exercises and definitions | [ ] | |
Provision of hints to students when they cannot answer a question | [ ] | |
Use of button-formed reappearance of wrong questions so as to be easier to answer | [ ] | |
Regulations for the function of the system | Alignment of the ECA’s function with the ethics policies and rules for the protection of the user data | [ , , , , , ] |
Adjustment of the ECA’s function to the policies of the educational institution | [ , ] | |
Students’ notifications | The ECA should provide updates to students for important course events such as deadlines | [ ] |
Traits of the provided learning activities | Provision of challenging and interesting student learning activities | [ , , ] |
The ECA should provide collaborative learning activities | [ ] | |
Utilization of competitive learning activities | [ , ] | |
Teacher support | Alignment of the ECA’s function with the form of the teaching material | [ ] |
Adjustment of the ECA’s function to the teaching style of the educator | [ ] | |
The ECA should provide goal-setting possibilities to the teachers | [ ] | |
Tutoring approach | Alignment of the ECA’s function with specific learning theories | [ , ] |
Adjustment of the ECA’s function to specific motivational framework | [ , ] | |
Alignment of the ECA’s function with the learning purpose | [ ] | |
Design of the ECA as a student that learns from the students | [ ] | |
The ECA should utilize predefined learning paths | [ ] | |
Usage of students’ previous knowledge and skills to help them learn new information | [ ] | |
Utilization of learning motives such as students’ grades to increase students’ engagement willingness | [ ] |
Proposed Steps | References |
---|---|
Using previously collected or preconstructed material to train the chatbot | [ , ] |
Training Method | References |
---|---|
Datasets of the development platforms | [ , ] |
Existing corpora of data | [ ] |
Educational material (predefined sets of questions that students have done or formed by domain experts or educators) | [ , , , , , , , ] |
Machine learning techniques | [ , ] |
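The "educational material" row above, training a chatbot on predefined question sets prepared by educators or domain experts, can be sketched in a few lines. This is a minimal retrieval-based illustration, not any specific system from the reviewed studies; the Q&A pairs, the similarity threshold, and the fallback message are all invented:

```python
from difflib import SequenceMatcher

# Invented educator-authored training material: question -> answer pairs.
FAQ_PAIRS = {
    "what is photosynthesis": "Photosynthesis is the process by which plants convert light into chemical energy.",
    "when is the assignment due": "The assignment is due at the end of week 6.",
    "how do i submit my work": "Upload your file through the course portal before the deadline.",
}

def best_match(user_question: str, threshold: float = 0.6) -> str:
    """Return the stored answer whose question is most similar to the input,
    or a fallback message when no stored question is similar enough."""
    normalized = user_question.lower().strip("?! .")
    best_q, best_score = None, 0.0
    for question in FAQ_PAIRS:
        score = SequenceMatcher(None, normalized, question).ratio()
        if score > best_score:
            best_q, best_score = question, score
    if best_q is None or best_score < threshold:
        return "I am not sure about that - please ask your instructor."
    return FAQ_PAIRS[best_q]
```

A production agent would replace the string-similarity lookup with a trained intent model, but the "train on predefined question sets" step stays the same: the educator-authored pairs are the training data.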
Proposed Steps | References |
---|---|
Trying a pilot application with a few students, teachers or domain experts to evaluate the first function of the ECA. | [ , , , , , , , , , ] |
Applying modifications based on the testing results. | |
Testing Method | References |
---|---|
Domain expert or educator testing | [ , ] |
Student testing | [ , , , , , , ] |
Testing using performance metrics | [ , ] |
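The "testing using performance metrics" row above can be illustrated with a small evaluation harness. The keyword classifier, the intent names, and the labeled test set below are hypothetical stand-ins for a trained model and real student questions:

```python
def classify_intent(message):
    """Toy keyword-based intent classifier standing in for a trained model."""
    text = message.lower()
    if "due" in text or "deadline" in text:
        return "deadline_query"
    if "submit" in text or "upload" in text:
        return "submission_query"
    return "unknown"

def evaluate(test_set):
    """Compute accuracy and per-intent recall over (message, true_intent) pairs."""
    correct = 0
    per_intent = {}  # intent -> (hits, total)
    for message, true_intent in test_set:
        hit = classify_intent(message) == true_intent
        correct += hit
        hits, total = per_intent.get(true_intent, (0, 0))
        per_intent[true_intent] = (hits + hit, total + 1)
    return {
        "accuracy": correct / len(test_set),
        "recall": {i: h / t for i, (h, t) in per_intent.items()},
    }

# Invented labeled test set for the pilot evaluation.
TEST_SET = [
    ("When is the essay due?", "deadline_query"),
    ("What is the deadline?", "deadline_query"),
    ("How do I upload my file?", "submission_query"),
    ("Tell me a joke", "unknown"),
]
```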
Proposed Steps | References |
---|---|
Providing guidance to the students on how to use the chatbot. | [ , , , , , , , ] |
Motivating students to use the agent. | [ , , ] |
Proposed Steps. | References |
---|---|
Evaluating the chatbot based on system analytics and user evaluations. | Common to all |
Restarting the procedure from the design stage and using the evaluation data to improve the agent. | |
Evaluation Instruments | References |
---|---|
Interviews | [ , , , , , , , , , , ] |
Learning and interaction analytics | [ , , , , , , , , , , , , , , , ] |
Questionnaires | [ , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] |
Student performance | [ , , , , , , ] |
Technical performance metrics | [ , , , , , , ] |
Evaluation Method | References |
---|---|
Comparison of the current ECA with other ECAs | [ , ] |
Functional assessments by domain experts | [ , ] |
Student usage evaluation | Common to all |
Usability evaluation | [ , , , , ] |
Workshops for users or stakeholders | [ ] |
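One concrete instrument behind the usability evaluations listed above is the System Usability Scale (SUS), a ten-item Likert questionnaire with a standard scoring rule: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is scaled by 2.5 to a 0-100 score. A sketch of that rule follows; the response data would come from the student evaluations themselves:

```python
def sus_score(responses):
    """Compute the 0-100 SUS score from ten 1-5 Likert responses."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses in the range 1-5")
    total = 0
    for i, r in enumerate(responses):
        # Items 1, 3, 5, 7, 9 (index 0, 2, ...) are positively worded and
        # contribute (response - 1); items 2, 4, 6, 8, 10 are negatively
        # worded and contribute (5 - response).
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5
```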
Evaluation Category | Evaluation Metric | References
---|---|---|
Interaction metrics | Acceptance of chat messages | [ ]
 | Acceptance or indifference to the agent’s feedback | [ , ]
 | Average duration of the interaction between the user and the ECA | [ , ]
 | Capability of answering user questions and providing a pleasant interaction | [ ]
 | Capability of conducting a personalized conversation | [ ]
 | Capability of providing a human-like interaction and keeping a conversation going | [ ]
 | Capability of understanding and responding to the user | [ , , , , , ]
 | Capability of understanding different uses of the language | [ ]
 | Periodicity of chat messages | [ ]
 | Quality and accuracy of responses | [ , ]
 | The average number of words in the messages written by the students | [ ]
 | The number of propositions that were utilized by the learners | [ ]
 | The total duration of students’ time spent interacting with the ECA | [ , , ]
 | The total number of buttons that were pressed by the learners | [ ]
 | The total number of propositions the ECA offered to the learners | [ ]
 | The total number of users that utilized the ECA | [ ]
 | The total number of words that were produced by the ECA | [ ]
 | The total number of words written by the students | [ ]
 | The number of user inputs that were formed using natural language and were understood by the chatbot | [ ]
 | Total number of interactions between the student and the ECA | [ , ]
 | Total number of messages between the student and the ECA | [ ]
Support and scaffolding of the educational process | Capability of supporting the achievement of students’ learning goals and tasks | [ , , ]
 | Fulfillment of the initial purpose of the agent | [ ]
 | Rate of irrelevant (in an educational context) questions asked by the students | [ ]
 | Students’ rate of correct answers (to questions posed by the chatbot) | [ ]
 | The quality of the educational material suggestions made by the ECA | [ , , , , ]
Technical suitability and performance | Compatibility with other software | [ ]
 | Maintenance needs | [ ]
User experience | Students’ self-efficacy and learning autonomy | [ , ]
 | Students’ workload | [ ]
 | Usability | [ , , , , ]
 | User motivation | [ , ]
 | User satisfaction | [ , , , ]
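Many of the interaction metrics in the table above (message counts, word counts, interaction duration) can be derived mechanically from chat transcripts. A minimal Python sketch, assuming a hypothetical list-of-dicts log format — the field names and sample messages are illustrative, not taken from any of the reviewed systems:

```python
from datetime import datetime

# Hypothetical transcript format: one dict per message.
log = [
    {"sender": "student", "text": "what is a chatbot", "ts": "2023-05-01T10:00:00"},
    {"sender": "bot", "text": "A chatbot is a program that converses with users.", "ts": "2023-05-01T10:00:05"},
    {"sender": "student", "text": "give me an example", "ts": "2023-05-01T10:01:00"},
    {"sender": "bot", "text": "ELIZA is a classic example.", "ts": "2023-05-01T10:01:04"},
]

def interaction_metrics(log):
    """Compute a few of the interaction metrics listed in the review."""
    student_msgs = [m for m in log if m["sender"] == "student"]
    times = [datetime.fromisoformat(m["ts"]) for m in log]
    return {
        "total_messages": len(log),
        "student_messages": len(student_msgs),
        "avg_words_per_student_message":
            sum(len(m["text"].split()) for m in student_msgs) / len(student_msgs),
        "total_duration_seconds": (max(times) - min(times)).total_seconds(),
    }

print(interaction_metrics(log))
```

Questionnaire and performance measures obviously need separate instruments; only the analytics row reduces to log processing like this.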
Suggestion | References |
---|---|
Evaluation based on the initial purpose of the ECA | [ ] |
Utilization of specific evaluation plan | [ , ] |
Usage of progress bar and “skip buttons” in the evaluation form | [ ] |
Ramandanis, D.; Xinogalos, S. Designing a Chatbot for Contemporary Education: A Systematic Literature Review. Information 2023 , 14 , 503. https://doi.org/10.3390/info14090503
The purpose of this paper is to explore the usefulness of chatbots in educational institutions such as schools and colleges and to propose a chatbot development plan that meets their needs. Chatbots are usually built for one specific purpose, for example, to answer general queries prospective students might have regarding admission. This paper aims to provide an artificial-intelligence (AI)-integrated chatbot framework that can help develop a multi-use chatbot. The study is based largely on qualitative data collected from case studies and journal articles. Primary data were also collected from interviews and questionnaires administered to relevant staff and students at a college, in this case, Middle East College. Integrating AI into the chatbot to make it self-reliant, intelligent, and able to learn from user interaction is necessary for it to handle multiple fields. This requires complex algorithms, database management, and extensive labor, making it very costly. However, once developed, this single chatbot can greatly help students, faculty, and other staff, not just as an assistant answering frequently asked questions, but also in learning and teaching. The chatbot can be integrated with a mobile app, making it part of daily life. Owing to this complexity, the chatbot will first be developed for use in one field and then gradually expanded to others. A chatbot built for multiple purposes certainly involves more complexity than a single general-purpose chatbot. That said, having the software developed and tested in real life would have helped to better understand its flexibility and functionality.
Copyright (c) 2020 SHAIK Mazhar Hussain
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Copyright holder(s) granted JSR a perpetual, non-exclusive license to distribute and display this article.
Title: Developing Effective Educational Chatbots with ChatGPT Prompts: Insights from Preliminary Tests in a Case Study on Social Media Literacy (with Appendix)
Abstract: Educational chatbots come with a promise of interactive and personalized learning experiences, yet their development has been limited by the restricted free interaction capabilities of available platforms and the difficulty of encoding knowledge in a suitable format. Recent advances in language learning models with zero-shot learning capabilities, such as ChatGPT, suggest a new possibility for developing educational chatbots using a prompt-based approach. We present a case study with a simple system that enables mixed-turn chatbot interactions and discuss the insights and preliminary guidelines obtained from initial tests. We examine ChatGPT's ability to pursue multiple interconnected learning objectives, adapt the educational activity to users' characteristics, such as culture, age, and level of education, and its ability to use diverse educational strategies and conversational styles. Although the results are encouraging, challenges are posed by the limited history maintained for the conversation and the highly structured form of responses by ChatGPT, as well as their variability, which can lead to an unexpected switch of the chatbot's role from a teacher to a therapist. We provide some initial guidelines to address these issues and to facilitate the development of effective educational chatbots.
Comments: Poster version accepted at the 31st International Conference on Computers in Education (ICCE)
Subjects: Human-Computer Interaction (cs.HC); Artificial Intelligence (cs.AI); Computers and Society (cs.CY)
Nurul Amelina Nasharuddin , N. Sharef , E. Mansor +6 more · Jun 15, 2021
2021 Fifth International Conference on Information Retrieval and Knowledge Management (CAMP)
CikguAIBot, a chatbot application for teaching AI in the Malay language, successfully achieved its objectives and is fostering successful learning in Malaysia’s education system.
This research aims to design a chatbot application to teach Artificial Intelligence (AI) in the Malay language. CikguAIBot offers learners the possibility to learn and interact with an agent without the need for a teacher to be around. The development of CikguAIBot is based on the RAD model with the involvement of a number of experts and real users. The main focus of this paper is on the content and flow design of the chatbot so that its objectives are achieved. Results from the expert review sessions are reported, and a detailed evaluation strategy with the students is also included, although the evaluation session itself is planned for the future. This research is expected to foster the usage of chatbot technology in supporting successful learning in Malaysia’s education system.
by Ryan Nelson | Nov 7, 2017 | Case study | 0 comments
No customer service rep wants to answer the same question a hundred times a day. No sales rep wants to talk to people who aren’t going to buy. And if you’re leading an organization, you can’t afford to let either of those scenarios be the norm.
Chatbots (more affectionately known as virtual assistants) provide a solution to both of these problems. Their infinite capacity helps free up your employees and scale your organization’s efforts . Whether you use chatbots for customer service, sales, or something else, their artificial intelligence ensures that your human resources are only used when they’re needed, and that your organization communicates with the most people possible.
But the fear many organizations have is that chatbots are heavy on the artificial and light on the intelligence. Few things are more infuriating when you need help than having to repeatedly rephrase your question or jump through hoops to talk to a real person. Most of us prefer talking to humans, and that’s OK. That’s why chatbots are most-suited for highly specialized tasks.
The best chatbots interact with more people faster than humans will ever be able to. The trick is knowing when and how to use them. In many cases, you’ll find that chatbots are basically a more informal way for people to navigate your website.
To help you see if there are opportunities for your organization to use chatbots, we found 10 case studies of companies that used them successfully. We’ll show you what they did, how they did it, and where you can go to see the full case study.
Some of these organizations started with live chat systems before switching to chatbots. Some used chatbots conservatively, and others used them for everything.
Check out these 10 case studies on chatbots.
Chatbot system: Next IT Industry: Public transportation Key stats:
Major takeaways:
Where the study came from: Next IT shared this chatbot case study on their website about Amtrak’s experience with “Julie”, which began in 2012.
Amtrak is the largest organization you’ll find in our list of case studies. They have 20,000 employees and serve 30 million passengers per year. At the time Next IT published this case study, Amtrak.com was getting 375,000 visitors every day .
Using Next IT’s advanced AI chat platform , they created “ Ask Julie ” to help visitors find what they needed without having to call or email customer service.
Here’s what Next IT says she’s capable of:
“Travelers can book rail travel by simply stating where and when they’d like to travel. Julie assists them by pre-filling forms on Amtrak’s scheduling tool and providing guidance through the rest of the booking process. And, of course, she’s easily capable of providing information on what items can be carried on trains or helping make hotel and rental-car reservations.”
Instead of making a phone call or waiting for customer service to email them back, more and more visitors are turning to Julie. In fact, Next IT reported a 50% growth in Julie’s usage year over year.
Julie “was designed to function like Amtrak’s best customer service representative,” and with 5 million answered questions per year, it’s hard to argue that she isn’t their best customer service rep.
Not to mention, when Julie answers questions, she tacks on subtle upsells like these:
Image source: Next IT
So in addition to answering more questions and increasing the number of bookings, Julie actually increased the value of bookings. Bookings made through Julie resulted in an average of 30% more revenue than bookings made through other means.
Clearly, the self-serve model is working for Amtrak. What’s interesting about Julie is that despite the smiling face, you know you’re talking to a robot. It doesn’t feel like AI that they’re trying to pass off as a real person. It’s almost like she’s a more advanced search feature of the website. When visitors ask questions, she pulls in only the relevant information, and it’s all contextualized to fit their specific question.
Maybe it’s just me, but that could be the difference between a helpful tool and a frustrating conversation.
Chatbot system: Intercom Industry: Email verification software (SaaS) Key stats:
Where the study came from: Pardeep Kullar published this case study on the Upscope blog in 2017.
As a two-person marketing startup, Anymail finder was stretched thin between sales, marketing, and support. They were answering the same few questions over and over via email.
Intercom’s operator bot helped this two-person team look and feel like they had a full-fledged support department.
Pardeep Kullar of Anymail finder says that the same handful of questions kept popping up in the chat window. They were usually questions like “how are you different from your competitor?” or “how do I upload this file?”
So Pardeep and his colleague wrote detailed articles that answered these popular questions and any related ones, then incorporated the articles in readymade responses and automated messages. Website visitors encountered one of 10 automated chat messages, depending on the page they arrived at.
Image source: Anymail finder
It’s like putting multiple fishing lines in the water at once, waiting for potential customers or users to bite. When a visitor replied to an automated message, employees got a push notification so they could promptly respond to every inquiry. Anymail finder’s prewritten responses to popular questions let them reply to some inquiries within seconds.
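The per-page targeting described above boils down to a lookup from landing path to canned opening message. A minimal sketch — the paths and copy below are hypothetical, not Anymail finder’s actual configuration:

```python
# Hypothetical mapping of landing-page paths to canned opening messages,
# in the spirit of Anymail finder's 10 page-specific automated messages.
AUTOMATED_MESSAGES = {
    "/pricing": "Hi! Wondering how our plans compare? Ask away.",
    "/docs/upload": "Need help uploading a file? Here's a quick guide.",
    "/vs-competitor": "Curious how we differ from alternatives? Happy to explain.",
}
DEFAULT_MESSAGE = "Hi! Let us know if you have any questions."

def opening_message(path: str) -> str:
    """Pick the automated chat message for the page a visitor landed on."""
    return AUTOMATED_MESSAGES.get(path, DEFAULT_MESSAGE)

print(opening_message("/pricing"))
```

The point of the pattern is that each message links to a prewritten article, so a human only steps in when the visitor replies.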
Intercom’s messaging metrics let Anymail finder gauge which automated messages were producing the best results:
Image source: Upscope
One of the primary benefits of chatbot services is that they can answer most questions customers have and qualify your leads without eating up valuable time from your customer service or sales staff.
But Intercom is pretty anti-AI , and their chatbots serve a more limited role. For Anymail finder, real people were waiting behind every automated message, but chatbots still helped them provide superior customer service with a limited team.
Chatbot system: Drift Industry: Data science software Key stats:
Where the study came from: Drift published this case study on their website.
RapidMiner went all-in—they replaced every lead capture form on their site with a chatbot. (Even for their whitepapers !) RapidMiner realized that automated conversations could filter and qualify leads in minutes, whereas a sequential email campaign could take weeks.
Chatbots let them circumvent this messy process and direct the best leads straight to sales:
Image source: Drift
For CMO Tom Wentworth, the change was about understanding why people came to RapidMiner’s site:
“People who come to our website aren’t coming there because they want to surf our site, they’re coming there because they have a specific problem, whether it’s a question about our product or what it does, whether it’s some technical support they need, or whether it’s they want to talk to someone in sales.”
Chatbots made it possible to address the reasons people came to RapidMiner.com without bombarding the sales team with unqualified leads. In an article on the Harvard Business Review , RapidMiner shared:
“The Drift bot now conducts about a thousand chats per month. It resolves about two-thirds of customer inquiries; those that it cannot, it routes to humans.”
Wentworth went on to say, “It’s the most productive thing I’m doing in marketing.”
Drift’s Leadbot asked visitors the same questions salespeople would’ve asked—and it never sleeps, so leads trickle in 24/7. So far, it’s brought in over 4,000 leads, influenced 25% of their open sales pipeline, and accounted for 10% of all new sales.
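A lead-qualifying bot of this kind is essentially a scripted decision tree. A minimal hypothetical sketch — the questions and routing thresholds are illustrative, not Drift’s actual logic:

```python
# Illustrative qualification script: ask the questions a sales rep would,
# then route the lead based on the answers.
QUESTIONS = [
    ("team_size", "How big is your data science team?"),
    ("use_case", "What problem are you trying to solve?"),
    ("timeline", "When are you hoping to have a solution in place?"),
]

def qualify(answers: dict) -> str:
    """Route a lead from collected answers (thresholds are made up)."""
    if answers.get("team_size", 0) >= 5 and answers.get("timeline") == "this quarter":
        return "route_to_sales"      # hot lead: hand off to a rep
    if answers.get("use_case"):
        return "send_resources"      # warm lead: nurture with content
    return "end_chat"                # unqualified

print(qualify({"team_size": 10, "use_case": "churn prediction", "timeline": "this quarter"}))
```

Because the script runs around the clock, leads that arrive at 3 a.m. get the same triage as leads that arrive during business hours.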
This case study provides some helpful insights into the differences between a chatbot and a live chat service, but it’s worth noting: ditching forms altogether is a pretty drastic step.
If you’re using blogging to increase your traffic , gated whitepapers and sequential email campaigns help you build an audience and create long-term relationships. A chatbot on your blog is bound to convert some visitors into leads, but this probably isn’t going to grow an email list you can reach out to again and again:
Image source: RapidMiner
Chatbot system: Drift Industry: database/development platform Key stats:
MongoDB was having a lot of success with live chat, but like all humans, their salespeople were limited by things like “time” and “space.” They couldn’t significantly increase the number of conversations they were having without significantly increasing the size of their team.
As their director of demand generation puts it:
“We needed a messaging tool that could scale with our business and increase the volume of our conversations, leading to the increase of our pipeline and Sales Accepted Leads (SALs)—the metrics my Demand Generation team are measured on.”
Like RapidMiner, MongoDB let Drift’s Leadbot ensure that their sales reps only talked to the people who were most likely to buy. And with Drift’s meeting scheduler , people didn’t have to play phone tag to make an appointment:
For MongoDB, automating lead-qualifying conversations allowed them to have more conversations, and automating the scheduling process let them turn more of those conversations into leads.
Chatbot system: Drift Industry: Drag-and-drop landing page creator Key stats:
Where the study came from: Drift’s case studies page shared how Leadpages used automated messages .
LeadPages started using Drift’s chat system to let their site visitors ask questions. Within a few weeks, they were averaging 100+ questions per week. And they didn’t even have a welcome message. They quickly realized that there was a much bigger opportunity to encourage conversations that lead to conversions.
LeadPages CMO Dustin Robertson says, “Site visitors ask questions through Drift as they consider purchasing our software. But there’s more to Drift than just chat. We can proactively reach out to visitors.”
So they added a welcome message.
In the month prior to adding the message, they had 310 visitors use the chat system. In the month after, they had 1,168. That’s a 267% increase.
But the quantity of messages wasn’t the only thing they were improving. LeadPages started using targeted, automated messages to try to increase conversions on specific pages. Depending on where visitors were on the website, they’d see a different message that fit with the page and asked them to take a specific action.
Like this message on their comparison page:
These targeted messages had an open rate of 30% and a click-through rate of 21%.
“With Drift’s automation features, we’ve been able to increase the conversion rate of our site visitors by 36%,” Robertson says.
Interestingly, at the time we prepared this case study roundup, Leadpages didn’t appear to be using chatbots on their comparison page , which has been reworked to feature in-depth comparison reports.
Chatbot system: Drift Industry: Web, mobile, and IoT testing platform Key stats:
Where the study came from: Perfecto Mobile helped Drift prepare this case study , which was published on Drift.com.
Perfecto Mobile had a problem. Most of their “leads” weren’t within their target audience. They didn’t want their sales development reps wasting that kind of time on a live chat system, so they went with Drift.
“Our leads tend to be 70% out of our target, 30% in,” says Perfecto CMO Chris Willis. “Now, I expected with web chat we’d see about the same thing. So people chatting and just essentially taking up the time of our SDRs when they could be working on more productive activities. And so right out of the gate, we identified with Drift that we were going to see the ability to manage that process. So we’re able to, by IP address, identify companies by their size, and only present to our SDRs chats that come from companies that we want to sell to.”
If a website visitor was coming from a company that was too small to be in Perfecto’s target audience, they didn’t see the chatbot.
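The gating Willis describes can be sketched as a simple check: resolve the visitor’s IP to a company profile, then show the widget only above a size threshold. The lookup table and threshold below are hypothetical stand-ins for a real reverse-IP enrichment service:

```python
def lookup_company_size(ip: str) -> int:
    """Stand-in for a reverse-IP firmographic API call (demo data only)."""
    demo_db = {
        "203.0.113.7": 5000,   # large enterprise -> in target
        "198.51.100.9": 12,    # small shop -> out of target
    }
    return demo_db.get(ip, 0)  # unknown IPs treated as out of target

def should_show_chatbot(ip: str, min_employees: int = 100) -> bool:
    """Only surface the chat widget for companies Perfecto-style gating would target."""
    return lookup_company_size(ip) >= min_employees

print(should_show_chatbot("203.0.113.7"))
```

Everyone else still sees the normal site; only visitors worth an SDR’s time ever see the chat prompt.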
Check out what Chris has to say about their experience:
The other major benefit Perfecto noticed was that chatbots allowed them to capitalize on leads at the most opportune time.
“Leads that come in through chat tend to have a higher velocity,” Chris says. “So you’re able to solve the problem or meet the needs of the request in real-time. So you think in terms of somebody coming to a website, and having a question, and filling in a contact us form. And they’ll hear back in 24 hours, or two days…that problem might not be there anymore. If they’re able to initiate a conversation, so skip the form, and have a conversation in real-time, we’re seeing that move very quickly.”
Here’s an actual example Chris shared about how this worked for Perfecto:
For Perfecto Mobile, chatbots helped them qualify leads faster, and hand them off to the right people at the right time.
Chatbot system: Next IT Industry: Cable/Internet provider Key stats:
Where the study came from: Charter Communications implemented Next IT’s chatbot in 2012. Next IT published this case study on their website.
Charter Communications is the second-largest cable provider and the fifth-largest phone provider in the U.S. They have 16,000 employees and 25 million customers.
Before switching to a chatbot service, Charter Communications had 200,000 live chats per month. 38% of these live chat conversations were for forgotten usernames and passwords. That’s 76,000 ridiculously simple requests that had to be handled by a real person every month.
Obviously, all of those conversations take up a lot of customer service time. Since so many people were accustomed to resolving issues through chat, Charter didn’t want to pull the plug on the entire chat system, but they needed a self-serve option to save their customer service reps for more complex problems.
When they switched to a chatbot, it didn’t just take over those basic password and username questions. 83% of all chat communications were handled by the bot. That’s 166,000 chat requests per month that Charter no longer had to worry about.
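The volume figures follow directly from the 200,000 monthly chats:

```python
# Charter's reported percentages applied to their monthly chat volume.
monthly_chats = 200_000
password_requests = round(monthly_chats * 0.38)  # forgotten username/password chats
bot_handled = round(monthly_chats * 0.83)        # chats the bot took over

print(password_requests, bot_handled)  # 76000 166000
```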
But Charter’s chatbot wasn’t just bumbling its way through these conversations, either. Part of their goal was to increase first-contact resolution rates, so customers wouldn’t need to be relayed through several people to get what they needed. The chatbot could also handle those tedious password and username requests 50% faster than a real person.
Ultimately, chatbots delivered a solid win for Charter and for their customers.
Since Facebook opened up its Messenger app for developers to create their own bots , a lot of brands have seized the opportunity to interact with their audience this way. The case studies you’ll see below are a little lighter than the ones we’ve looked at so far, but they showcase a few ways organizations are successfully using Facebook Messenger bots. Some ecommerce sites have had a lot of success with Messenger bots, but the three examples we’re going to look at are all primarily content-focused brands.
Something to think about: while the other chatbots we’ve looked at live on your website, this one lives in an app people are already using, and they can find your bot there. Facebook shares Messenger bots in the discover tab , and if you open Messenger right now and search, you’ll find “bots” right below “people.” In other words, a Facebook Messenger bot could grow your audience.
Chatbot system: Facebook messenger Industry: baby products Key stats:
Where the study came from: BabyCenter asked ubisend to design a Facebook Messenger bot in 2016. Ubisend published this case study on their website.
BabyCenter is one of the most trusted pregnancy websites out there (seriously, I’ve seen my wife’s OBGYN check this site during appointments). One of their biggest draws is a sequential email campaign that follows you every step of the way through pregnancy, and their revenue model is based on advertisements and a strong affiliate sales program.
Through ubisend, BabyCenter created a bot on Facebook Messenger to do two things:
As you can see in the GIF below, the bot also provided a more interactive way for people to consume BabyCenter’s content.
Image source: ubisend
The new bot accomplished both objectives, with some impressive results. On average, 84% of people read the message, and 53% of those who opened also clicked through to the website. Ubisend compares that to MailChimp’s open and click-through rates, and with some unstated math determined that the Messenger bot had a 1,428% higher engagement rate. I can’t speak to the validity of that claim, but here are a couple of reasons why the bot may have had better open and click-through rates than email:
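For what it’s worth, ubisend’s figure is reproducible if “engagement” means open rate times click-through rate, compared against an email baseline of roughly 2.9%. That baseline is my assumption, chosen so the arithmetic lands near ubisend’s number; it is not a figure from the case study:

```python
# Bot engagement: 84% opened, 53% of openers clicked through.
bot_engagement = 0.84 * 0.53           # ~0.445

# Assumed email engagement baseline (hypothetical, not from the case study).
email_engagement = 0.02913             # ~2.9%

uplift_pct = (bot_engagement / email_engagement - 1) * 100
print(round(uplift_pct))  # 1428
```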
Whatever the reason, a Messenger bot was clearly a viable content delivery system for BabyCenter. If enough people adopt it, the Messenger bot may even rival their well-established sequential email campaign.
Chatbot system: Facebook Messenger Industry: Spa reviews Key stats:
Where the study came from: Good Spa Guide solicited ubisend’s services in 2016. Ubisend published this case study on their website.
As the name implies, Good Spa Guide reviews spas. They make money when people use the site to book a spa, so not surprisingly, they really value website traffic.
Like BabyCenter, Good Spa Guide was looking for an alternative to their email list. They used ubisend to design a Messenger bot that functions a lot like Amtrak’s “Ask Julie” bot. It basically provides a more conversational way to navigate the website—but without actually being on the website.
Check it out:
After a short conversation with the bot, people can go to the exact spa review page they need, and continue their hunt on the website.
With a 29% increase in traffic and a 13% increase in spa bookings, it looks like a Facebook Messenger bot helped Good Spa Guide either tap into a new audience, or engage their existing audience in a better way.
Chatbot system: Facebook Messenger Industry: Forex trading education Key stats:
Where the study came from: In 2016, MyTradingHub was struggling to keep subscribers engaged, so they turned to ubisend. This case study was published on ubisend.com.
MyTradingHub is a web-based social and educational platform for people who trade on the foreign exchange market. They’re after users, not customers, and they use a sequential email campaign to keep their users engaged.
Their primary metric is what they call “Trader Training Completion,” which measures the number of people who have viewed 80% of MyTradingHub’s content and performed specific tasks like quizzes. When this metric started declining, they learned that users weren’t completing the training because “they forgot about it.”
They decided to try an interactive Messenger bot to bring up the number of people who made it through training. They wound up creating a bot that could help people interact with the trading platform and continue their training.
MyTradingHub saw their TTC metric increase by 59% following the launch of the bot, and their training pages saw 17% more traffic.
In this case, it looks like a Messenger bot functioned as a sort of half-measure. MyTradingHub has been around since 2010, but to continue to be a strong “social platform,” they probably need their own app. In the meantime, MyTradingHub’s Messenger bot appears to be keeping users more engaged with their existing content.
Chatbot system: Facebook Messenger Industry: Tea Key stats:
Where the study came from: PG Tips asked ubisend to design a chatbot for a charity promotion. Ubisend then published this case study on their website in 2017.
PG Tips (a brand by Unilever) decided to turn their “Most Famous Monkey” into an AI chatbot to generate donations for charity. They wanted a conversational chatbot to tell jokes for their “one million laughs” campaign.
It took six weeks for ubisend to turn a chatbot into a mediocre standup comic. (They went pretty heavy on the dad jokes.) The AI could handle 150 conversations per second across 215 different conversation topics.
We only included this one because it shows how quickly you can set up a fairly intelligent, completely custom chatbot.
After allowing developers to create their own chatbots for Messenger, Facebook shared this roundup of brands successfully using chatbots. More than 30,000 chatbots were created in the first six months they were supported on Facebook Messenger. The roundup highlights four that Facebook thinks are worth checking out.
In most cases, chatbots aren’t going to fool anyone. The chatbots we’ve looked at here are obviously not real people. The brands that use them and the companies that make them might be excited about how human they seem, but that’s not the point.
In the right situations, chatbots can provide customers and users with a better experience because they process your request instantly, and it doesn’t matter how many other conversations they’re having. And if you’re waiting around for basic help (like, say, password reset), you’re really not going to care if the person who’s helping you is a Bob or a bot.
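That scalability point is easy to picture with a toy sketch. This is a hedged illustration only: the `handle_chat` coroutine, its simulated lookup delay, and the reply text are all invented for the example, not taken from any real chatbot platform.

```python
import asyncio

# Toy illustration of the point above: each conversation is an independent
# task, so a bot's response time barely changes whether it is handling one
# chat or a hundred. The handler and its reply text are invented.

async def handle_chat(user: str) -> str:
    await asyncio.sleep(0.1)  # simulate looking up an answer (e.g., a reset link)
    return f"{user}: here is your password reset link"

async def main() -> None:
    # 100 simultaneous conversations complete in roughly the time of one,
    # because the simulated waits overlap instead of queuing.
    replies = await asyncio.gather(*(handle_chat(f"user{i}") for i in range(100)))
    print(f"answered {len(replies)} chats")

asyncio.run(main())  # prints "answered 100 chats"
```

A human support team, by contrast, queues: the hundredth person in line waits a hundred times longer than the first.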
Unless you’re looking for something gimmicky (like a chatbot monkey that tells dad jokes), most chatbots simply provide a more conversational way for your audience to consume the information on your website. It’s certainly not for everyone—some people (myself included) would rather navigate websites the old-fashioned way and read blog posts on a blog—but for many people, chatbots provide a helpful shortcut to the information they’re looking for. And that’s something you should probably care about.
One clear takeaway: if you’re using a live chat service right now, a chatbot can either outright replace it or vastly improve it. Ask your customer service reps what questions they get the most and how often they get them. Go ahead, ask them.
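As a minimal sketch of that takeaway: once you know which questions your reps hear most often, even a few lines of keyword matching can deflect them before a human steps in. Everything here—the keywords, canned answers, and escalation message—is invented for illustration, not a real product’s logic.

```python
# Minimal FAQ bot sketch: canned answers for the questions support reps hear
# most often, with everything else escalated to a human. All keywords and
# answers below are invented for illustration.

FAQ = {
    "password": "You can reset your password from the account settings page.",
    "hours": "Live support is available 9am-5pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 business days of approval.",
}

def reply(message: str) -> str:
    """Return a canned answer if an FAQ keyword appears in the message."""
    text = message.lower()
    for keyword, answer in FAQ.items():
        if keyword in text:
            return answer
    return "Let me connect you with a human agent."

print(reply("How do I reset my password?"))
# prints "You can reset your password from the account settings page."
```

Real chatbot platforms replace the keyword lookup with intent classification, but the flow—match what you can, escalate what you can’t—is the same.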
But even if you’re not already using some sort of chat service, a chatbot can still give your audience a quicker, more conversational path to the answers they’re looking for.