AI-powered chatbots, designed ethically, can support high-quality university teaching

While COVID-19 forced an urgent move to online learning in universities, learning how to teach online efficiently and effectively, using various platforms and tools, is and will remain a positive addition to education.

To maintain this beneficial development and ensure quality education, universities should focus on helping the faculty embrace and guide change.

The ethical and strategic use of artificial intelligence in teaching and learning centers can help with this task by helping educators troubleshoot and innovate their online teaching practices. Teaching and learning centers are the units responsible for promoting educational technology, advancing teaching and learning, and supporting course design.

Extensive move to online education

Research published in August 2020 on 19 teaching and learning centers and their equivalents in Canada, the United States, Lebanon, the United Kingdom and France showed that the staff at these centers used all available resources to support faculty through the rapid switch to online education.

Staff worked 10- to 14-hour days during the first phase of the pandemic to meet the increased needs of teachers and other personnel. These centers also reported difficulties in recruiting and training qualified candidates.

Strategically deployed, chatbots could take on the repetitive, low-level administrative tasks that teaching and learning centers would otherwise handle, and help prevent bottlenecks. A chatbot, also known as a conversational agent or virtual agent, is software or a computer system designed to communicate with people using natural language processing.

This communication can be via text messages or voice commands.

Teaching and learning centers must be equipped to support teachers in their teaching.

Why a chatbot?

Chatbots offer a workable win-win solution for teaching and learning centers and faculties. They are available around the clock, can respond to thousands of simultaneous requests and offer immediate and reliable service support when needed.

Using chatbots could free up teams to handle the complex queries that require human intervention.



Read more: Online Learning During COVID-19: 8 Ways Universities Can Improve Equal Opportunities and Access


Collaboration between the centers’ experts and technology could provide better services and support to faculty, enhancing the learning experiences they create for students. Chatbots can direct educators to appropriate and effective professional development resources and activities, such as how-to articles, tutorials and upcoming workshops. These would be tailored to faculty members’ individual needs, their diverse digital skills, and their backgrounds in designing hybrid learning experiences.

Chatbot systems are already used in educational institutions for teaching and learning, taking on administrative tasks, advising students and supporting them in research.

How would it work?

When it comes to chatbots’ conversational AI abilities, there are two options:

  1. The Artificial Intelligence Markup Language (AIML) approach: Programmers provide the AI with a library of question/answer pairs and keyword associations via a database. From there, the chatbot can give appropriate answers within a strictly defined framework.

  2. The natural language processing (NLP) approach: This allows more flexibility. Once programmers have created an initial dataset, the AI-powered tool learns from ongoing exchanges to find the best answers to recurring questions from faculty members. The AI can then identify keywords in a sentence and understand the context of a question.
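The first, rule-based approach can be sketched as a simple keyword lookup. This is a minimal illustration of the idea, not a production AIML engine; the keywords and answers below are invented examples, not any center's actual library.

```python
# Minimal sketch of the rule-based (AIML-style) approach: the chatbot
# matches keywords against a hand-built library of canned responses.
# All keywords and answers here are invented examples.

RESPONSE_LIBRARY = {
    ("quiz", "test", "exam"): "See our guide on creating online assessments.",
    ("video", "record", "lecture"): "Our tutorial covers recording and captioning lectures.",
    ("workshop", "training"): "Upcoming workshops are listed on the centre's calendar.",
}

FALLBACK = "I don't understand that yet. Would you like to talk to a staff member?"

def reply(message: str) -> str:
    """Return the first answer whose keywords appear in the message."""
    words = set(message.lower().split())
    for keywords, answer in RESPONSE_LIBRARY.items():
        if words & set(keywords):  # any keyword present in the message?
            return answer
    return FALLBACK

print(reply("How do I record a video lecture?"))
```

The strictly defined framework is visible here: any question outside the keyword library falls straight through to the fallback message.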

Over time, programmers are expected to add data from these conversations to an ever-growing dataset. When a question is asked, the chatbot responds based on its current knowledge base. If the conversation introduces a concept it doesn’t understand, the chatbot can indicate that it doesn’t understand the question, or route the communication to a human operator.
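The answer-or-escalate flow described above can be sketched with a similarity threshold. The word-overlap scoring here is a stand-in for a real NLP intent classifier, and the questions, answers and threshold value are all illustrative assumptions.

```python
# Sketch of the fallback flow: answer from the current knowledge base
# when confident enough, otherwise escalate to a human operator and log
# the unanswered question so it can feed the next training round.

from difflib import SequenceMatcher

# Known question -> answer pairs (invented examples).
KNOWLEDGE_BASE = {
    "how do i set up a discussion forum": "Follow the forum setup tutorial.",
    "how do i grade assignments online": "See the online grading guide.",
}

CONFIDENCE_THRESHOLD = 0.6  # below this, the bot admits it doesn't know
unanswered_log = []  # questions collected for the ongoing dataset

def handle(question: str) -> str:
    """Answer if a known question is similar enough; otherwise escalate."""
    q = question.lower().strip("?")
    best_answer, best_score = None, 0.0
    for known, answer in KNOWLEDGE_BASE.items():
        score = SequenceMatcher(None, q, known).ratio()
        if score > best_score:
            best_answer, best_score = answer, score
    if best_score >= CONFIDENCE_THRESHOLD:
        return best_answer
    unanswered_log.append(question)  # grows the dataset for retraining
    return "Routing you to a human operator."

print(handle("How do I set up a discussion forum?"))
```

Logging the unanswered questions is what lets programmers expand the dataset over time, as the article describes.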

In any case, the chatbot will learn from this interaction as well as from future ones, gradually gaining in scope and relevance.

The diagram shows a person sending a message through a natural language processing system; the chatbot starts a query, which goes either to a machine learning system, to an external source such as a person who can answer the question, or to a system that logs the query.
Chatbot implementation scenario for teaching and learning centers to support the faculty.
(Nadia Naffi)

To be reliable and trustworthy, a chatbot must effectively help these centers support their faculty. Implementing and fine-tuning chatbots until they are fully operational is important, even if it takes time and resources.

Ethical Framework for AI in Education

The Institute for Ethical AI in Education, based at the University of Buckingham in the UK and funded by McGraw Hill, Microsoft Corporation, Nord Anglia Education and Pearson PLC, published The Ethical Framework for AI in Education in 2020. The framework argues that AI systems should increase the capacities of organizations and the autonomy of learners while respecting human relationships and ensuring human control.

Chatbots in university environments should be inherently ethical, meaning they should be sensitive to values like security, protection, accountability and transparency. When they are used in teaching and learning centers, users should be protected from all forms of harm or abuse. Users also need to feel that they are treated fairly and always have the option of reaching a human. Faculty members need to know they are interacting with an AI.

Chatbots can and should be accessible. Tolerating user errors and input variations, designing for different skills, and enabling multilingual text communication are examples of facilitating accessibility.

Do no harm: privacy

Chatbots should be designed to “do no harm,” as stated in the latest UNESCO recommendation on the ethics of artificial intelligence. Doing no harm starts with privacy.

AI-powered tools raise concerns about how data is recorded. Strong safeguards around data collection and storage are required. Following the European data-protection approach, centers should minimize data collection: only necessary information should be stored, for example certain parts of a conversation, but not the identity of the interlocutor.

A transparency-based approach lets users decide which personal data may or may not be shared with the centers. This would keep confidence in the tool high and make it easy to use. In the event of a malfunction, faculty members would report the problem and the centers would fix it.

Centers should consider anonymizing users, strongly encrypting all stored data, and storing data internally where possible, or with a trusted third party bound by equivalent data-protection provisions.
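The minimization and anonymization principles above can be sketched as follows: keep only the question asked, and replace the user's identity with a salted one-way pseudonym. The field names and salt are illustrative assumptions, not any real center's schema.

```python
# Sketch of data minimization: store only what is needed (the question),
# replace the user's identity with a salted one-way hash, and never log
# the raw name or email address.

import hashlib

SALT = b"centre-secret-salt"  # would be kept separately from the stored logs

def anonymize(user_id: str) -> str:
    """One-way pseudonym: the same user always maps to the same token,
    but the token cannot be directly reversed to recover the identity."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]

def log_interaction(user_id: str, question: str) -> dict:
    """Keep only the minimum: a pseudonym and the question itself."""
    return {"user": anonymize(user_id), "question": question}

record = log_interaction("jane.doe@university.ca", "How do I add captions?")
print(record)
```

Because the pseudonym is stable, the chatbot can still link a user's follow-up questions together without ever storing who the user is.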

A person holding a phone and showing a chat.
Data collection should be minimized to protect user privacy.
(Shutterstock)

Combating bias and environmental impact

Potential bias in the original database also needs to be addressed. Whether the variable is gender, ethnicity, language or anything else, the original dataset must be cleaned and carefully analyzed before it is used to train the AI, whether AIML or NLP methods are used. For the latter, continuous monitoring should be considered.

Green data-storage solutions are also needed to reduce the carbon cost of these activities on the environment. Universities could, for example, examine whether water cooling could replace air conditioning in server rooms.

The university of the future, as envisioned by many academics and policy-makers, has already begun. Technology, when used ethically and strategically, can support faculty in their mission of preparing students for the needs of our society and the future of work.
