By David Cornwell, Content Writer at Bon Education
One of the growing concerns with Artificial Intelligence (AI) is that we end up in a Sorcerer's Apprentice scenario. In this famous tale, an old sorcerer departs his workshop, leaving his apprentice with chores to perform. Tired of fetching water by hand, the apprentice decides to enchant a broom to do the work for him. He uses magic in which he has not been fully trained, and soon the water is everywhere. The apprentice realizes that he cannot stop the broom since he does not know the magic required to do so. Unfortunately, like the apprentice in the story, we may lack the necessary information to stop computers when we want them to stop. There is also a risk that computers will take on tasks for which they were not programmed. A similar scenario could occur if an advanced AI is developed and released into the world without proper safeguards. Should it develop sentience and decide to fulfill its goals (whatever they may be) without regard for human life, there could be no way to stop it…
Not convinced by the analogy?
Well… How would you feel if you knew that the entire paragraph was written by a piece of AI software known as GPT-3?
The rest of this month’s newsletter article was produced by the team of human beings at Bon Education. It takes a closer look at the Alignment Problem in AI and its implications for the future of education.
AI Crash Course
AI is defined by Amazon, the world’s most famous exponent of the technology, as “the field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, problem solving and pattern recognition.”
The applications of AI range from everyday pieces of technology that no longer impress us, like navigation apps and voice assistants, to things that still feel vaguely ‘futuristic’ in the year 2022, such as self-driving cars and deepfake videos.
Central to all these applications of AI, however, is the concept of Machine Learning (ML). Machine Learning refers to a set of algorithms that can analyze, learn from, and make predictions based on collected data.
Why seek advice from one or two experts, a Machine Learning enthusiast will argue, when you could use AI-driven data analysis to compare your problem to millions of similar problems – and let the machine tell you what to do?
ML is essentially an optimization tool and is employed in a huge variety of contexts, from the recommendations list in your Netflix feed, to entire production plants capable of generating their own maintenance reports.
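This “learn from data, then predict” loop can be sketched in a few lines. The example below is a minimal illustration in plain Python, with invented data (hours studied vs. test scores): it fits a straight line to past observations, then uses that line to predict a new case.

```python
# Minimal illustration of the ML loop: fit a model to data, then predict.
# The study-hours data below is invented for illustration only.

def fit_line(xs, ys):
    """Least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Training data": hours studied vs. test score.
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 70]

slope, intercept = fit_line(hours, scores)

# "Prediction": estimate the score of a student who studies for 6 hours.
print(round(slope * 6 + intercept, 1))  # -> 73.9
```

Real ML systems differ in scale rather than in kind: instead of five data points and one feature, they fit millions of examples with many features, but the fit-then-predict pattern is the same.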
The world of education has already felt the positive effects of Machine Learning. The “virtual teaching revolution” brought about by lockdown learning has meant that more students around the world have had greater (often 24/7) access to education. Many of the innovations that have enabled this shift, such as the large-scale ability to automate administrative tasks, have been powered by ML technologies.
However, even more interesting to futurists in the field of education is a special subset of Machine Learning known as Deep Learning (DL). Deep Learning algorithms consist of large neural networks that can engage in “unsupervised learning” and interact with one another to test out problem-solving strategies.
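To make the term “neural network” a little more concrete: at bottom, such a network is layers of simple weighted units feeding into one another. Here is a deliberately tiny, hand-wired sketch in Python (the weights and inputs are invented for illustration; in real Deep Learning systems the weights are learned from data, not set by hand):

```python
import math

# A tiny, illustrative neural network: 2 inputs -> 2 hidden units -> 1 output.
# Weights are fixed by hand here; in real Deep Learning they are learned
# automatically from data.

def sigmoid(z):
    """Squash any number into the range (0, 1)."""
    return 1 / (1 + math.exp(-z))

def forward(x, w_hidden, w_out):
    """Pass inputs through the hidden layer, then to the output unit."""
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(w, x))) for w in w_hidden]
    return sigmoid(sum(wo * h for wo, h in zip(w_out, hidden)))

w_hidden = [[0.5, -0.6], [0.3, 0.8]]   # weights into the two hidden units
w_out = [1.2, -0.7]                    # weights into the output unit

print(round(forward([1.0, 0.5], w_hidden, w_out), 3))  # -> 0.548
```

Stacking many such layers, and letting an algorithm adjust the weights from examples, is what the “deep” in Deep Learning refers to.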
As Deep Learning becomes more and more integrated into the field of education, expect chatbots to morph into digital teaching assistants. From recruiting and onboarding students to managing their homework diaries and providing personalized tuition pointers, AI-driven chat interfaces will provide students with efficient, around-the-clock study support.
Georgia Tech recently ran an experiment, deploying a teaching-assistant chatbot named Jill Watson in one of its courses; the software ended up being nominated for a teaching award.
What is the Alignment Problem?
Efficient automation of menial administrative tasks, super-intelligent chatbot teaching assistants, personalized learning programs with optimized learning outcomes…
You might be thinking, What’s the problem here?
Brian Christian is the author of several books about AI, including The Most Human Human (2011) and The Alignment Problem: Machine Learning and Human Values (2020). He points out that, like the Sorcerer’s Apprentice, we can conjure nightmarish outcomes when we program super-intelligent AI machines with incomplete or imprecise instructions.
According to Christian, the Alignment Problem stems from the challenge of ensuring that AI systems “capture our norms and values, understand what we mean or intend, and, above all, do what we want.”
The Alignment Problem arises from the tension between computational efficiency and human values. But its heart is not simply that we are involving AI in our decision-making processes; there are countless tasks that AI machines can perform better than the brightest human minds that have ever existed.
Rather, the Alignment Problem emerges when we give these thinking machines control over our value-based decisions. When we do this, we risk ceding control of “what makes us human” to the machines – and often with disastrous consequences.
Elaine Herzberg was tragically killed by a self-driving Uber SUV that wasn’t programmed to recognize jaywalking. Microsoft had to disable Tay, its Twitter chatbot, after it produced a torrent of abusive, racist and misogynistic tweets. Even Amazon, whose success has been built on productively harnessing AI technologies, was forced to abandon a secret automated hiring program after it discovered the algorithms contained problematic selection biases.
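The Amazon case illustrates a general mechanism: a model trained on skewed historical decisions will faithfully reproduce the skew. The toy Python sketch below (invented data; not Amazon’s actual system) shows “bias in, bias out” with the simplest possible “model”, one that merely replays historical hiring rates:

```python
# Toy illustration of training-data bias (invented data, not any real system).
# Historical hiring records: (keyword_on_cv, was_hired). If past decisions
# favored one keyword, a naive frequency-based "model" learns that preference.

history = [
    ("chess_club", True), ("chess_club", True), ("chess_club", True),
    ("netball_team", False), ("netball_team", False), ("netball_team", True),
]

def hire_rate(keyword):
    """Fraction of past candidates with this keyword who were hired."""
    outcomes = [hired for kw, hired in history if kw == keyword]
    return sum(outcomes) / len(outcomes)

def model_recommends(keyword, threshold=0.5):
    # The "model" just replays historical rates -- bias in, bias out.
    return hire_rate(keyword) >= threshold

print(model_recommends("chess_club"))    # -> True  (the favored keyword)
print(model_recommends("netball_team"))  # -> False (the disfavored keyword)
```

No one programmed a prejudice here; the skew in the historical data alone determines the recommendations, which is why auditing training data matters as much as auditing code.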
Build on Values
The silver lining for educators is that the solution to the Alignment Problem is in our hands.
The Alignment Problem is, really, an expression of another term from the field of AI research: The Moravec Paradox. This is the idea that the kind of tasks that humans find complex (math problems, data analysis) are relatively easy to teach AI, while the kind of tasks that humans find easy (distinguishing between a soccer ball and a bald head) require incredible computational power. As Brian Christian poetically puts it: “It is in the fundamental, experiential, embodied, raw stuff of daily life where the unsung computational complexity of human life resides.”
This realm – the “fundamental, experiential, embodied, raw” sphere of human experience – is also, of course, where learning happens.
It is where we, the educators, remain the experts.
In the future, we will increasingly need to act as guardians of that realm.
The widespread and rapid uptake of AI technology across all industries has been, and will continue to be, driven by for-profit considerations, as the AI industry’s projected 34% CAGR in the UAE through 2030 suggests. But the kind of educational values that many institutions aspire to – inclusivity, tolerance, equality – could be directly threatened by the Alignment Problem if AI technology is not implemented with the utmost responsibility.
The values that underpin our educational institutions must be protected as this wave of technology sweeps through the industry. This may require policymakers in education to ask themselves difficult questions.
- Why are we implementing this new technology?
- How does it safeguard our values?
- Does it threaten our values?
- What do we stand to gain by implementing this technology, and what do we stand to lose?
- Are we sacrificing quality of education for quantity of education?
Recent Stanford research tells us that most people now find their partners through online dating apps, meaning that “matchmaking is now done primarily by algorithms.” Shudu Gram is one of the world’s most talked-about supermodels, and she is not a human being. Our phone cameras are replacing our skin specialists, and parents with young children are realizing that they may never have to teach their kids to drive.
It would be naïve not to anticipate similarly seismic changes in education as AI technology becomes more widespread.
As educators, we must continue to embrace the latest technological advances and ensure we are equipping students with the skills they need to navigate the world of tomorrow. But, if we wish to safeguard the future of learning and avoid falling into the role of the Sorcerer’s Apprentice, we must work to codify our values and ensure they act as the firewalls of the AI evolution.
The future of education will be steered by technology. The values of its institutions must, therefore, serve as the guardrails to ensure that the journey is safe.
“The aim of education is the knowledge, not of facts, but of values.”
— William Ralph Inge