Artificial Intelligence (AI) is a broad term describing the overall goal of creating machines that can perform tasks that typically require human intelligence. The term was first coined in 1955 by John McCarthy, a Dartmouth College computer scientist. Today, our everyday lives are drenched in AI, from large language models like ChatGPT to visual recognition features on phones and cameras, recommendation systems on Netflix and Spotify, Google Search, credit card fraud detection, self-driving cars, and more. It is everywhere.
Within the larger bucket of AI are several major areas, including:
- Machine Learning: Systems that learn patterns from data on their own rather than being explicitly programmed
- Natural Language Processing: Programs that can understand, interpret, and generate human text and speech
- Computer Vision: Allows machines to interpret and understand visual information from images and videos
- Robotics: Combining AI with physical systems to create machines that can act in the real world
Until recently, AI relied heavily on expert systems and rule-based programming, where humans had to manually encode the decision logic and provide explicit guidance on how to make decisions. But in recent years, Machine Learning (ML) has taken over as the dominant AI technique because it can handle the complex, unstructured data found in the real world, whether that is all the text on the internet, images, videos, or any other type of data. The more data you feed ML systems, the better they typically perform. That fact, combined with advances in processing power from more efficient computer chips and companies willing to spend billions of dollars on data centers, means most major AI breakthroughs of the last decade rely heavily on ML techniques, from AlphaGo beating the best Go player in the world to ChatGPT mimicking human conversation to the newest image and video generators.
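The difference can be sketched with a toy spam filter. Everything below is invented for illustration (the keywords, the training messages, and the 0.5 threshold are not from any real system): a hand-written rule encodes a human's guesses, while the "learned" version estimates which words signal spam directly from labeled examples.

```python
# Toy contrast between a hand-coded rule and a rule learned from data.
# All keywords, messages, and thresholds here are made up for illustration.

def rule_based_spam_filter(message: str) -> bool:
    """Expert-system style: a human hard-codes the decision logic."""
    suspicious = {"winner", "free", "prize"}
    words = message.lower().split()
    return any(word in words for word in suspicious)

def train_word_spam_scores(examples: list[tuple[str, bool]]) -> dict[str, float]:
    """ML style: estimate, from labeled data, how often each word appears in spam."""
    counts: dict[str, list[int]] = {}  # word -> [spam_count, total_count]
    for message, is_spam in examples:
        for word in set(message.lower().split()):
            spam_count, total = counts.get(word, [0, 0])
            counts[word] = [spam_count + int(is_spam), total + 1]
    return {word: spam / total for word, (spam, total) in counts.items()}

def learned_spam_filter(message: str, scores: dict[str, float]) -> bool:
    """Average the learned per-word spam rates; flag the message if above 0.5."""
    words = [w for w in message.lower().split() if w in scores]
    if not words:
        return False
    return sum(scores[w] for w in words) / len(words) > 0.5

# The learned filter picks up a pattern ("urgent" correlates with spam)
# that nobody explicitly programmed into it.
training_data = [
    ("urgent claim your prize now", True),
    ("urgent free money waiting", True),
    ("meeting notes attached", False),
    ("lunch at noon tomorrow", False),
]
scores = train_word_spam_scores(training_data)
```

The point of the sketch is that the first function only knows what its author thought to write down, while the second improves as `training_data` grows, which is why more data typically means better ML performance.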
Risks and Rewards
But in addition to clear potential benefits, machine learning comes with a significant number of risks that are becoming increasingly apparent as these systems are more widely deployed:
- Data and Bias Issues: If the dataset fed to an ML system is biased, the resulting model will be biased, too. For example, many dermatology training sets focus almost exclusively on white patients, so the resulting models work well for them but poorly for patients with darker skin. Similarly, a model trained on historical data that contains discrimination will perpetuate it, producing hiring systems that favor certain demographics, facial recognition systems that misidentify people with dark skin, and so on. As ML models become more pervasive and start to replace human judgment, their accuracy and fairness become critical. The 2016 book Weapons of Math Destruction goes in-depth on the societal impacts of algorithms, especially the many that perpetuate and even exacerbate preexisting inequality.
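The mechanism behind biased models can be seen in a deliberately simplified sketch. The group labels, hiring history, and 50% threshold below are all invented for illustration, but the underlying dynamic, a model reproducing the statistics of its training data, is the real one: nothing in the code is "prejudiced," yet the learned model faithfully replays the discrimination baked into its history.

```python
# Toy illustration (invented data): a "hiring model" trained on biased
# historical decisions reproduces that bias, because all it can learn
# is the statistics of the decisions it was shown.
from collections import defaultdict

def train_hire_rates(history: list[tuple[str, bool]]) -> dict[str, float]:
    """Learn, per group, the historical rate of positive hiring decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired_count, seen_count]
    for group, hired in history:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {group: hired / seen for group, (hired, seen) in counts.items()}

def model_recommends(group: str, rates: dict[str, float]) -> bool:
    """Recommend a candidate if their group was historically hired >50% of the time."""
    return rates.get(group, 0.0) > 0.5

# Historical record in which equally qualified candidates from group B
# were rejected far more often than those from group A.
biased_history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]
rates = train_hire_rates(biased_history)
```

Fixing this requires auditing the data and the outcomes, not just the code, since the code itself is doing exactly what it was trained to do.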
- Black Box Problem: In many ML systems, especially the deep neural networks that power LLMs like ChatGPT, even the creators can’t explain how the model reached a specific decision. Programmers feed a model massive amounts of data, train it with enormous amounts of computer processing, and then evaluate whether the end result seems accurate. But the inside is effectively a “black box”: ML engineers can’t trace a model’s internal reasoning to understand why a particular decision was made. As these systems become ever more incorporated into daily life, especially in critical areas such as medicine, science, policing, and finance, the risks of relying on programs we don’t fully understand continue to grow.
- Privacy and Surveillance: ML gives governments and private businesses unprecedented surveillance capabilities via facial recognition, behavior analysis, and data mining. Governments can monitor and control populations at a scale never before possible, while companies can engage in “algorithmic wage discrimination,” such as nursing gig apps that offer different pay rates to a nurse based on their credit history. The worse a nurse’s credit history, and, by extension, the more desperate they are for money, the lower the rate the app will offer. The hospital and the app pocket the difference, exploiting the person’s financial situation. Similar examples are emerging across a growing number of professions.
- Job Loss: As ML automates more tasks, it threatens to displace workers faster than new jobs are created, leading to significant economic and political unrest and most likely exacerbating already record levels of inequality.
- Education: ML models, especially LLMs, are already running rampant in education. Students use LLMs to write papers, and overworked teachers rely on them to grade large amounts of student work. There is a real risk that as people become more dependent on ML systems, they will lose critical thinking skills, domain expertise, and the ability to step in should these systems start to fail in any way.
It is not hyperbole to state that the current AI explosion is analogous to the Industrial Revolution of the 19th century: back then, physical labor was mechanized; today, ML is mechanizing mental labor and the knowledge economy. The Industrial Revolution didn’t just change how things were made; it completely restructured economies and politics, creating new classes of workers and concentrating power ever more in the hands of a wealthy elite.
Both revolutions started with specific applications and then quickly cascaded across society. Steam power began in mining before transforming transportation, manufacturing, and agriculture. ML started in tech companies but is now actively reshaping healthcare, finance, transportation, and even creative industries.
The Future
Machine Learning is on track to radically remake human society. Back in 1930, the English economist John Maynard Keynes wrote a famous essay, Economic Possibilities for our Grandchildren, in which he speculated about the effects of technological progress and economic growth over the following 100 years. He estimated that living standards would increase by 4 to 8 times and that people would only need to work about 15 hours per week to maintain a comfortable lifestyle. Keynes’ main concern was what people would do with all that leisure time once their basic economic needs were met; he worried that humanity might face an existential crisis when work was no longer necessary for survival.
From 1930 to 2020, real GDP per capita in developed countries like the U.K. and the United States actually increased 6 to 7 times, so Keynes was remarkably accurate in his estimate. But he was wrong in his predictions about everyday life. Most people today are forced to work ever longer hours just to get by, because most of the economic gains of the last hundred years, powered by advances in technology and productivity, have been concentrated in the hands of a few capital-owning elites rather than distributed evenly across the general population.
The question now is: what comes next? Will modern political systems adapt quickly enough to manage this transformation? If not, as seems likely, we will find ourselves living through a period of profound social upheaval, where unprecedented job losses overlap with ever-more-powerful AI systems that operate beyond democratic control, shaping society according to the narrow wishes and whims of those who build and own them.