I’ve been doing quite a bit of reading lately on the topic of AI (Artificial Intelligence). AI is naturally interesting to me as a former IT and tech person, but also because its apparent rapid development, its economic and employment impacts, and its social policy implications are so much in the news at the moment. I have two or three more new books on this topic waiting on my library list and bookshelf, but today I want to discuss one I’ve already read.
The book is entitled The AI Con: How to Fight Big Tech's Hype and Create the Future We Want by Emily M. Bender and Alex Hanna. Dr. Bender is a professor of Linguistics and affiliate faculty in Computer Science at the University of Washington, and she often consults with domestic and international organizations on understanding "AI" technologies. Dr. Hanna is Director of Research at the Distributed AI Research Institute, and a lecturer at the School of Information at UC Berkeley.
This book is a dense, thorough critique of the entire field of "AI", coming at it from many different levels and angles. It discusses the financial and political motivations of AI's main proponents, its foundation in intellectual property theft, and the essential fraudulence of the hype claiming that what has been created in these software products is "intelligence". It compares the hype around AI to other tech bubbles of the past, and discusses both the need and some methods for resisting the inclusion of AI features in many of our common computer and internet applications.
The book explores many of the adverse effects on society that can be expected from further implementations of AI in many arenas, including job automation and job loss, misplaced legitimization of discriminatory social policies, climate change impacts from massive data center requirements, destructive effects on many types of careers and human creative endeavors, and many more.
I mentioned in another review recently that I have some skepticism and doubt about the current clamor over AI and all the things it can do. This book clarifies and supports my uneasiness over much of what the public has been hearing lately about AI from the wealthy tech moguls who are promoting it.
One interesting dynamic the authors describe is the way the tech leaders seem to fall into two opposing camps about the promise and perils of AI. There are the “boosters” (or accelerationists), who claim that AI will deliver untold wealth, intellectual capacity and scientific benefits to humanity in the very near future, which is why we must do whatever it takes to develop it quickly. And then there are the “doomers”, who make much of their belief that AI will soon reach the “Singularity”, where the machines will outstrip humanity’s ability to reason, and may then consign us all to the dustbin of history, as in the Terminator films.
What the authors note about these two camps is that they both tend to be largely made up of people who are within the small circles of rich high-tech corporate leadership. In fact, the two conflicting views of AI’s risk/reward profiles are often present in the same people.
In one breath, they argue that they need to go all-in on investing in and developing AI as fast as possible, for all the positive benefits they foresee. Then they turn around and say that they are the only ones who can be trusted to protect us from the potential disastrous AI outcomes they fear. Sam Altman and Elon Musk are prime examples of these warring visions within a single person, with their self-serving rationalizations for going ahead and doing whatever they want to do, despite all the dire risks to humanity they’ve predicted.
Both authors are active in AI-related research fields, and are not blind to the enormous potential economic and social benefits that may result from AI developments. At the same time, they see right through the breathless claim of the AI proponents that AGI – Artificial General Intelligence, or machines that can think as humans do – is just around the corner.
They point out that this claim about impending AGI has been made repeatedly for over fifty years, and they share some of the history of that. But as they explain, the technical approach behind current generative AI (like ChatGPT), based on large language models and neural networks, is still basically a parlor trick, seemingly displaying magical “intelligence” that is in fact produced by an inanimate but sophisticated prediction engine.
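To make that “prediction engine” idea concrete, here’s a toy sketch of my own (not from the book, and enormously simplified): a tiny Python program that “learns” from a few sentences by counting which word follows which, then generates text by always predicting the most likely next word. Real LLMs use neural networks trained on billions of documents rather than simple counts, but the underlying task is the same one the authors describe: predict the next token.

```python
from collections import Counter, defaultdict

# Illustrative toy only: a bigram "language model" built from word-pair
# counts. Real LLMs replace these counts with neural networks trained on
# vast scraped text, but the core task is identical: predict the next token.

training_text = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
)

# Count how often each word follows each other word in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Generate a short "sentence" by repeatedly predicting the next word.
word = "the"
output = [word]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # prints "the cat sat on the cat"
```

The output looks fluent in form while meaning nothing at all, because the program has no idea what cats or mats are – which, scaled up by many orders of magnitude, is essentially the authors’ point about where the appearance of “intelligence” comes from.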
AI doesn’t think, and it doesn’t feel, according to the authors, and they contend that with current models and approaches, it’s not likely to do so anytime soon. What generative AI does do is leverage existing human intelligence, creativity, art and information stored as data to seemingly “create” new text and images, based on content taken without compensation from the many people who originally created it.
This is not intelligence, say the authors – it is merely theft and repurposing on a colossal scale. And it is theft that allows those who own and control the vast computing resources AI tools require to profit and extract great value from the fruits of human creativity and labor they’ve stolen from others and used to train their AI models.
One of the most telling points the authors make is that an unacknowledged but very high priority of the AI moguls seems to be to use these smart machines to eliminate costly humans from the workforce. Everyone knows by now that AI is a potential threat to many existing jobs, especially in the white collar service industries, but that’s not something the AI hype merchants want to emphasize. Or so I assumed, until on a recent trip to San Francisco I saw billboards advertising a new AI product with the slogan “Never hire a human again”. Perhaps in the current political moment, these tech leaders no longer even feel they need to hide their desire to take away people’s livelihoods for the sake of their bottom lines.
After thoroughly pulling back the veil on all the hype, the lies and flawed reasoning behind the “AI con”, the authors lay out some reasonable if limited ideas for how to resist the AI juggernaut. For example, they suggest not using AI agents to query for information when a simple web search would give comparable results.
The authors suggest this for several reasons: first, because widespread refusal to use new AI products might slow investment in and further development of them; second, because these AI agents carry vastly greater environmental and climate change costs than a simple web search; and third, because the AI’s answers might be made up, and typically come without sources or any attribution for how the conclusions were reached.
With a web search, you know where the information is coming from, and can draw your own conclusions about its validity, reliability and truthfulness. You also might discover some unexpected but valuable information farther down the page of search results. With an AI agent search, you have to take it on faith that the AI didn’t make things up, which it often does (popularly known as “hallucinating”, although the authors reject that term for its implicit hint of actual machine cognition).
Of course, as the authors are aware, resisting the hype and the promotional campaigns being waged for AI by the leaders of our largest tech companies will be difficult. And there is a need not to throw out every AI and advanced computing innovation that might truly benefit humanity, although it is hard to know how lay people are supposed to distinguish the worthwhile from the wasteful or dangerous among the many new AI products being released.
The ultimate message the authors seem to hope readers will take away from their in-depth presentation is that if we’re going to have AI technologies at all, they need to be limited, regulated and well-controlled, and in the service of humanity’s greater good and noblest aspirations, not the science fiction-based fantasies and greed of a small group of wealthy tech entrepreneurs.
That all makes sense to me. Highly recommended.