The Inevitable Use of ChatGPT by Students

From: Susana Pérez de Pablos. OpenMind BBVA. 28 March 2023. https://www.bbvaopenmind.com/en/technology/artificial-intelligence/rafif-srour-daher-chatgpt-revolution-ai-in-education/

Puzzled and intrigued. That’s how you feel when you receive by email the answers to an interview with an expert in artificial intelligence (AI) and, more specifically, in the teaching of this discipline, and discover that she has answered your questions using ChatGPT. And far from concealing the fact, she mentions it in an endnote.

The interviewee is Rafif Srour Daher (Beirut, Lebanon, 1976), an expert in data science and Vice Dean of the School of Science & Technology at IE University in Madrid. She researches how to push the boundaries of robotics and AI, exploring new ways to apply this technology and to direct its use toward making society more inclusive.

So, before we end the interview, it is obligatory to ask her a few final questions, given the suspicion that this interview might itself be part of her own research, or that ChatGPT is the one answering questions about ChatGPT.

Doesn’t answering using ChatGPT limit your ability to think through the answers yourself, using your own reasoning?

“ChatGPT doesn’t limit your thought process; on the contrary, it helps to reinforce what you know and invites you to be critical of the information it generates for you,” answers Srour. “I like using it because its answer contains most of the information available on the question. It’s true that it can often make things up, so you have to be very attentive and verify the information it gives you.”

So you recommend that your students, and students in general, use ChatGPT to do their work?

We need to teach our students to use the tool, but also to be aware of its limitations and to be critical of the output it generates for them. On the other hand, our educators need to change the way they assess students by creating tasks that ChatGPT can’t easily do, otherwise we are teaching them to use this technology to do the work for them, which is not what we want.

From what age do you think ChatGPT can be used in teaching?

Definitely from the age of 16.

But won’t its use interfere with students’ ability to learn and their future capabilities? 

If it’s used correctly, we don’t have to worry about that occurring. But with our current assessment system, which is based on rote learning, it’s easy to fall into that trap. We can use these kinds of AI tools to change our education systems.

Having clarified these points, as well as the fact that she used ChatGPT as a tool to gather information for this interview, but that her answers were then prepared by her, Srour explains why data science and AI have become her passion. “On the one hand, the transformation of data into information and its application in decision-making is revolutionizing all businesses. And on the other hand, the pace at which the technology is evolving holds great promise for improving the human condition,” she says optimistically. 

AI is based on the ability to process data and make decisions based on mathematical algorithms that mimic human behaviour. The more data it uses, the more accurate it becomes. But what can’t it do that humans can? “It cannot completely replace human intelligence or human judgement. And there are tasks that it cannot yet perform efficiently, such as abstract and creative thinking, emotional understanding, the ability to make ethical and moral decisions, and the ability to build human relationships and understand cultural and social contexts. ChatGPT, for example, sometimes uses language that sounds very authoritative, using repetitive phrases, and you have to keep checking the accuracy of the text.”

However, she warns that AI is evolving at a very fast pace. “Some of the most recent and promising developments include collaborative and autonomous robotics, which improve on the current functionality of robots. Deep learning has enabled significant advances in areas such as computer vision, natural language processing and decision making. In addition, the integration of robotics and AI with the so-called Internet of Things is enabling the creation of connected intelligent systems that can collect and analyze large amounts of data to improve decision-making and efficiency.”

But the researcher stresses that, paradoxically, one of the main goals of AI is not to replace humans, but to improve their lives: “It certainly has the potential to do this in a wide range of fields such as health, education, work and welfare. For example, AI has long been used to diagnose and treat disease, and in education it can personalize teaching to meet the individual needs of each student and assist teachers in identifying areas where students need more help.”

Returning to ChatGPT, the company that created it, OpenAI, has just released GPT-4, a more accurate version that can reason better and understand everything from words to images. What will be the next step? “I think we will witness a major rollout of generative models—text, videos and audios—that will be more accurate and concise than previous ones.”

The ethical debate about AI and its impact on society is inevitable. Does the research take into account the ethics of what might occur with what you are creating? “In all of our projects, we are very aware of the risks associated with the use of data in machine learning algorithms. That’s why it’s important to teach our students how to collect representative, clean data, and to be very aware of the issues that can arise when there are insufficient data, outliers and biases in data collection. And with the rise of data artificially generated by AI, problems arise when you try to validate it with real data,” i.e., data not generated by an AI.

Among the new professions that could emerge in the short term from all this progress, Rafif Srour highlights that of artificial intelligence engineer, adding AI ethics expert, AI application developer and AI project manager to the list. “Software engineers will be in high demand to develop low-code or no-code data processing tools and comprehensive AI products,” she says.

And she encourages students, and especially female students (“who bring diversity and creativity to these fields”), to pursue STEM careers (those related to science, technology, engineering and mathematics) without fear: “We are at a very critical moment in history. Technology is advancing at an exponential rate and is impacting every aspect of our lives. Studying any STEM-related field is a direct pathway to the most promising jobs in the world. I can’t think of any area of society where technology isn’t being applied, so there is no limit to what a student can do in the future with a STEM degree.”

There has already been talk of AI drones being used in the war in Ukraine, autonomously deciding who to shoot at. “Controlling the misuse of AI robots is an important and complex issue. The only way to get a handle on it is through regulations and international treaties, while at the same time launching educational campaigns on the ethical use of AI.” And it is possible, she adds, “that technologies will be developed to track and control drones and AI robots used in warfare, which could include tracking and monitoring systems to ensure proper use and identification of the operators of these systems.” 

And she has a message for those in government regarding their responsibility for what may happen in the future: “Governments need to establish appropriate regulatory frameworks that encourage innovation and the development of AI, while protecting citizens from potential negative impacts, such as discrimination, lack of privacy and unemployment. In addition, mechanisms for the accountability and review of AI decisions must be put in place, and data protection must be ensured.”

We conclude with one last intriguing question. Might there come a time when humans will no longer be able to understand the reasoning of AI? “AI is developing at such a breakneck pace that we have already reached that point. As AI becomes ever more complex and sophisticated, it will be difficult for humans to fully understand how it works and how it makes decisions. For this reason, it is important for AI developers to ensure that their systems are understandable and explainable.”