AI: Navigating the Future of Education and Redefining Human Intelligence

A look at how AI is transforming education with personal tutoring for all.

Feb 28, 2024 - 5 min read

Illustration of woman looking at a laptop screen.

One-to-one teaching is rare in a classroom setting, as most schools and universities model their teaching to address large numbers of students at once. This can easily allow students to fall behind or disengage. A one-to-one learning experience that delivers an approach tailored to an individual's needs can elevate a student's ability from below average to above average, or from above average to excellent. The Oxbridge system has long championed one-to-one tutoring to ensure a high level of success among its cohorts, but most institutions can't afford to give this experience to their students. AI has changed this: it can now act as a personal tutor for every student and a teaching assistant for every educator, in and outside of the classroom.

While many professors shuddered to think of chatbots aiding students in their academic studies, Sal Khan was an early adopter of AI in the classroom. He speaks candidly about the success stories of his AI-assisted class members and addresses the key ways it delivers individualised teaching. One student was struggling to grasp the meaning of Jay Gatsby’s obsession with the green light at the end of Daisy Buchanan’s dock, and a chatbot wrote to her in the style of Gatsby, addressing her as ‘old sport’ and carefully explaining his longing for Daisy and its symbolism of the unreachable American dream.

When a student didn’t understand the relevance of a video about cell replication, asking the chatbot ‘Why do I have to learn this?’, the chatbot related its answer to the individual’s desire to become a professional athlete, explaining that understanding the body at a cellular level can improve insight into nutrition and recovery – crucial to their intended career path. As chatbots learn more about what each student values, they can continue to hone and develop their answers to deliver a more engaging experience, as well as a deeper understanding of the subject at hand.

Students can now ask as many questions as they want without feeling embarrassed or interrupting a class; they are guided to answers through nudges and prompts that improve problem-solving ability; and chatbots even provide quizzes at the end of discussions to evaluate information retention. With careful oversight of how students use AI, it can be a rewarding tool that elevates students and their capabilities. But educators like Marc Natangara see more opportunity in what AI can’t answer for their students.

Artificial intelligence was designed and modelled on human intelligence, and the way it learns is based on the way humans learn. We determine intelligence through essays and exams, yet AI systems can now write essays and pass exams with flying colours. Natangara poses the question: should we continue to educate and test people in ways where technology can so effectively step in and replace us? He suggests now is the perfect time to reimagine how we measure and define human intelligence in the face of machine intelligence and its ability to achieve tasks we thought only we could do.

The way Natangara visualises the opportunities AI provides in the education space is not dissimilar to those that photography offered the art world: when a camera could accurately capture a scene, the need for figurative painting shifted and artists took the opportunity to interpret their surroundings using radically new approaches in their practice. AI’s ability to interpret its surroundings may lead the way for humans to innovate and problem-solve in profound new ways, challenging current notions of what intelligence can look like. Natangara reminds us that computers can only think inside the box, because they are the box, and that students should now explore problems and experiences that AI cannot solve or participate in, so that we can begin to value human intelligence as more than just computer-like processing.

In contrast to Natangara’s optimistic vision of AI’s influence on education, technology is also being used to pressure students into resisting human behaviour rather than embracing it. Schools across China are using electronic headbands that measure the brainwaves produced by electrical activity in children’s brains to detect whether they are in a state of focus or absentmindedness, sending information in real time to teachers and parents so they can discipline students whose concentration wanders. This is not deemed to be an exact science, and many question whether focus is something these bands can really quantify. In terms of the user experience, students talk about how the headbands leave them with sore heads, and how they worry about what their parents and peers will think of them if they are seen to be losing focus.

AI’s ability to answer questions and produce images is shaped by the large amounts of data fed into these models, and Sasha Luccioni addresses how pre-trained discrimination in AI can enforce unhealthy stereotypes for students. If she uses OpenAI’s image generator to produce an image of a woman called Sasha, the results are nearly all women in their underwear. If she asks the generator for an image of a scientist, lawyer or CEO, it shows a white male in almost 100% of the generated outcomes. These biases, combined with copyright and data protection issues, reveal some of the more concerning sides of AI and its exposure to young people, making the need for ethical and inclusive AI models that draw from more diverse pools of data imperative to sculpting a more responsible tool for students.

In navigating the intricate landscape of AI, Sasha Luccioni not only sheds light on the pitfalls of pre-existing biases but also raises a pressing concern: the environmental impact of artificial intelligence. Beyond the digital realm, the physical machinery behind AI leaves an indelible mark on our planet, contributing significantly to carbon emissions. As we witness the rapid integration of AI into our daily lives, from smartphones to household appliances, the environmental toll becomes increasingly apparent. In the last quarter of 2023, GPT-3 alone produced over 500 tons of carbon. (If the idea of digital activity carrying a significant carbon footprint is new to you, you may be interested, or horrified, to know that the average person in the developed world is thought to add 136 kg of CO2 to their yearly carbon footprint through email alone.)

Luccioni emphasises the need for a paradigm shift, urging the development of ethically sound and environmentally friendly AI models. By using tracking technology to identify the carbon footprint AI carries, she presents an opportunity for us to make conscientious choices, aligning the trajectory of technology with sustainability. With a call to action, Luccioni invites us not to turn away from AI but to forge a path toward eco-conscious advancements that will benefit students and society at large. In the same way that we can make sustainable choices in the foods we eat or the clothes we buy, we will be able to make sustainable choices in the AI we put into our homes, classrooms, and economies.

Image: Subkontr

About the author

Jacob Pulley

UX Designer

Jacob is a UX Designer with a passion for solving customer problems. He has a background in the retail and fashion industry, where he worked with Louis Vuitton, and was a winner of the British Airways 2023 hackathon.