Embracing ChatGPT: Redefining education in the age of artificial intelligence

The rapid pace of technological advancement has always been a double-edged sword, especially in the realm of education. Innovations from the calculator to the internet have sparked debates about their impact on learning. Now, with ChatGPT, an Artificial Intelligence (AI) chatbot developed by OpenAI, the conversation about the future of education is more compelling than ever. But what is education? Is it repetition and rote learning? Or is it cultivating students to become innovative creators who learn to synthesize their ideas?

Generation Z is increasingly scrutinizing the alignment of current educational priorities with the rapidly evolving world. Credit: Danny Lawson.

In the not-so-distant past, education was often equated with the ability to recall vast amounts of information. Memorization was a hallmark of academic success, and our education system held this as a universal truth. However, with the invention of calculators and the internet, access to facts became effortless, making rote learning less relevant. Slowly, the education system shifted from rote learning toward idea synthesis, critical thinking, and problem-solving. The focus transitioned to ensuring that students could assimilate and structure facts into coherent narratives and solve complex logical problems. This shift in how we conceive of education was a significant leap forward: it emphasized the development of skills in a world where access to information was becoming ubiquitous.

The processes of idea synthesis, critical thinking, and problem-solving were intended to redefine the meaning of education. However, this definition is now under threat, as ChatGPT has landed in education's midst with little warning. ChatGPT can produce high-quality content across a wide range of academic subjects nearly instantaneously, presenting a new challenge to the education system. While the first reaction might be to ban the use of such technology in education, the approach of "building a better mousetrap" to catch students using ChatGPT may prove ineffective in the long term, as students often find ways around such measures. Moreover, the reality is that the current generation of students will eventually enter a workforce inundated with generative AI programs. Embracing ChatGPT, with caution, can be a positive first step toward cultivating the next generation of students who are adept at working alongside AI instead of being hindered by it.

Rather than engaging in an uphill battle against rapidly improving AI chatbots, educational institutions should consider adopting a more nuanced approach, such as permitting the use of ChatGPT for some assignments while restricting it for others. For assignments that require information synthesis, such as research-based assignments, students can use ChatGPT as a valuable resource to generate synopses of several complex research articles and then rephrase the findings in their own words to demonstrate comprehension. However, for assignments that demand creative thinking or originality, such as creative writing, the use of ChatGPT should be restricted. Incorporating ChatGPT in this way enhances productivity in certain areas while preserving the core educational objective of nurturing students' idea synthesis, critical thinking, and problem-solving skills in others. This balanced approach offers educational institutions a more productive and skill-enriched learning environment, one that prepares students to work alongside AI.

Implanted brain electrodes relay electrical activity to a computer algorithm that translates the activity signals. Credit: Noah Berger.

The integration of AI in education comes with its share of challenges. These multifaceted challenges include ensuring AI-generated content is factually correct, upholding academic integrity, and ensuring that proper credit is given to original creators and authors. To properly harness the power of AI tools like ChatGPT, students must understand their strengths, their weaknesses, and the ethical considerations of their use. As educators, we carry the responsibility of instructing our students on the fundamental principles of academic integrity. Responsible use includes understanding the biases that may be present in AI models and recognizing the potential for misuse and manipulation.

Critics argue that bias is an inevitable byproduct of AI's training data, which is collected by people. For example, law enforcement agencies have started using AI algorithms to predict crime hotspots and allocate resources accordingly. While proponents argue that this can help reduce crime rates and enhance public safety, there are concerns about misuse and about biases inherited from the training data. When historical crime data reflects biases such as racial profiling or excessive policing in particular neighborhoods, AI algorithms trained on that data may unjustly focus on those same areas in the present. Such misuse can lead to increased surveillance and harassment of marginalized groups, further eroding trust in government. Being aware of the biases within AI systems and their training data is essential for critical thinking, as it encourages students to question the information they receive and consider different perspectives.

We must also teach students how to distinguish between applications where AI serves beneficial purposes and those that may lead to misuse and manipulation. For example, AI can be a powerful tool for advancing scientific research. Researchers have developed brain implants that can convert neural activity (the electrical signals neurons send to communicate with one another) into text with remarkable speed and accuracy. In two studies, electrodes were placed on the surface of brain regions that control the muscles used for speech, and algorithms were trained to recognize the neural activity associated with phonemes, the smallest units of sound that can change the meaning of a word. This technology achieved 95% accuracy in matching neural activity to words in individuals with and without speech difficulties. Here, AI technology like that behind ChatGPT serves as a tool for advancing biomedical research, giving individuals who are incapable of verbal communication, whether due to stroke or neurodegenerative disease, the ability to speak again.

However, we also need to recognize the existence of AI misuse in biomedical research. Misuse involves manipulating data unethically, for example through cherry-picking: selectively analyzing and presenting data in a way that supports a particular argument while omitting contradictory data. Such manipulation can lead to unsafe treatments and harm public health. Proponents argue that AI can enhance data analysis, but strict oversight is crucial to prevent its misuse. By educating students about both the positive and negative aspects of AI technology, we equip them with the tools needed to use AI responsibly and ethically, ensuring that they harness its benefits while mitigating its drawbacks in society.

Handwriting samples from children reveal that when a letter was spontaneously drawn, there was heightened activity in three critical brain regions, a phenomenon not observed during letter tracing or typing. Credit: Karin James.

Despite the advantages AI tools like ChatGPT offer, it's essential to remember that they should complement, not replace, students' cognitive processes. Traditional methods, such as handwriting, still confer unique cognitive benefits that technology cannot fully replicate. A New York Times article highlighted how writing information by hand engages our brains in a way that typing on a keyboard cannot. The article covered a study showing that the physical act of handwriting enhances memory, creativity, and critical thinking, an effect attributed to the activation of distinct regions of the brain that promote better comprehension and retention of information. In this digital age, where efficiency often takes precedence, we should not disregard the cognitive advantages offered by traditional methods. These cognitive processing skills remain relevant and valuable in a world that increasingly relies on technology.

ChatGPT and similar AI tools are reshaping the landscape of education. The future of education needs to balance harnessing the power of technology with preserving cognitive skills and ethical values. Just like handwriting, our memory, creativity, and critical thinking are not relics of the past, and students should not outsource these cognitive processes to ChatGPT. Barriers have been broken, and technology like ChatGPT is here to stay, evolving continuously as part of our society. Instead of viewing ChatGPT as a threat to education, we can take this opportunity to redefine education for a world where AI is now an integral part of our daily lives.

Jasmine Pathan is a Neuroscience PhD candidate at the City University of New York (CUNY) Graduate Center in NYC. Her passion for understanding the brain compelled her to study neuroscience. Jasmine spent most of her childhood in the concrete jungle of NYC before moving to a rural village in Taiwan, then moving back to New York to finish high school. She ventured across the country to the West Coast for college, graduating with a Bachelor of Arts in Psychology from the University of Washington in 2018. Jasmine’s doctoral research focuses on how different methods of neuromodulation can prevent transneuronal degeneration of key spinal neuron classes. Jasmine also likes doing Pilates, reading non-fiction books, and mastering pottery to make planters for her many plants! Jasmine is also the neuroscience student representative to the Biology PhD Program Executive Committee at CUNY where she advocates for student well-being.

Edited by Denise Croote, PhD

 