
A Brief History of Artificial Intelligence: 10 Takeaways


Step into an exciting recap of the evolution of AI, from its beginnings as a philosophical and mathematical concept through its implementation, iteration, and, finally, its limitations.

A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going

Michael Wooldridge

Consider A Brief History of AI for these online reading challenge tasks:

  • PopSugar Reading Challenge 2024. #16: A book set 24 years before you were born.
  • 52 Book Club Reading Challenge 2024. #22: A plot similar to another book.
  • Reader Haven Reading Challenge 2024. #8: Set during a historical time period you don’t know much about.
  • Shelf Reflections Reading Challenge 2024. #4: A nonfiction book about technology.
  • Library 24 in 2024. #18: A book set in Europe – 2.

The Turing test established the idea that the goal of AI was to produce behavior that was indistinguishable from that of humans. … The goal of AI thus began to shift from building agents that make human choices to agents that make optimal choices.

From concept to disruptor: A Brief History of AI in summary

AI is everywhere these days. It's tempting to think that AI somehow came into being in November 2022, when OpenAI announced its disruptive ChatGPT generative AI model. However, as Wooldridge illustrates in his fascinating and engaging examination of AI's history, AI has been around for decades. Once solely the purview of mathematical researchers and philosophers, artificial intelligence has become a matter of technological debate only relatively recently. Wooldridge draws on his experience as a researcher to paint a colorful history of AI, discussing its origins, its many "false starts," and even its periods of stagnation ("AI winters"), and then exploring its ethical and political ramifications. While this book was published a few years before the current wave of AI hysteria, it offers excellent context and a glimpse behind the scenes of the science and philosophy.

I love AI because it is the most endlessly fascinating subject I know of. It draws upon and contributes to an astonishing range of disciplines, including philosophy, psychology, cognitive science, neuroscience, logic, statistics, economics, and robotics. And ultimately, of course, AI appeals to fundamental questions about the human condition and our status as Homo sapiens—what it means to be human, and whether humans are unique.

4 concepts in A Brief History of AI

  1. Turing Test: This concept, developed as a thought experiment by Alan Turing, would become a theoretical underpinning of artificial intelligence. Turing proposed that if a human interacting with a machine cannot reliably tell that it is a machine, the machine passes the test. The crux of the Turing Test is that it does not matter whether a thing actually is intelligent, only whether it behaves as if it is. Architecting systems to behave intelligently became a primary goal of AI researchers.
  2. Expert-based systems: Much early AI research centered on infusing machines with narrow but deep expertise. These systems were logic-based, but researchers soon found that logic could not capture everything, so they often fell short of their promises. (A toy sketch of this rule-based approach follows this list.)
  3. The Grand Dream: Lurking behind the scenes of AI research is the specter of artificial general intelligence, or AGI. For every innovation and leap forward, there is a corresponding worry: "What if this leads to a worst-case scenario in which AI runs amok and destroys humanity?"
  4. AI dangers: Although the Grand Dream hasn't materialized (and likely never will), Wooldridge points out that AI does introduce new dangers and threats to the human community. Combating bias in AI is a significant issue, since the humans architecting AI can infuse algorithms with their own known and unknown biases, with significant negative consequences. In addition, AI's ability to "fool" humans can lead to deepfakes and disinformation that could destabilize public trust.
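
To make the expert-systems idea concrete, here is a minimal sketch of a MYCIN-style consultation: a chain of yes/no questions driving IF-THEN rules, plus an explanation of which findings fired the rule. The rules and findings below are invented for illustration; they are not from MYCIN or from Wooldridge's book.

```python
# A toy, MYCIN-style consultation: ask yes/no questions, fire IF-THEN rules,
# and explain which findings led to the conclusion. The rules are invented
# for illustration only.

RULES = [
    # (conclusion, findings that must all be true for the rule to fire)
    ("bacterial infection suspected", ["fever", "elevated white cell count"]),
    ("viral infection suspected", ["fever", "normal white cell count"]),
]

def consult():
    findings = {}

    def ask(finding):
        # Cache answers so each question is asked at most once.
        if finding not in findings:
            answer = input(f"Does the patient have {finding}? (y/n) ")
            findings[finding] = answer.strip().lower() == "y"
        return findings[finding]

    for conclusion, conditions in RULES:
        if all(ask(c) for c in conditions):
            # "Transparency of reasoning": report why the rule fired.
            print(f"Conclusion: {conclusion}")
            print("Because: " + " and ".join(conditions))
            return
    print("No rule applies; no conclusion reached.")

if __name__ == "__main__":
    consult()
```

The explanation step is the point: like MYCIN, even this toy can say which findings led to its conclusion, the "transparency of reasoning" the book highlights.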

By the 1950s, all the key ingredients of the modern computer had been developed. Machines that realized Turing’s mathematical vision were a practical reality—all you needed was enough money to buy one and a building big enough (to house the Ferranti Mark 1 required two storage bays, each sixteen feet long, eight feet high, and four feet wide; the machine consumed twenty-seven kilowatts of electricity—about enough to power three modern homes).

3 connections

  1. OpenAI's announcements as referenced in Four Battlegrounds: OpenAI was criticized for withholding information about its ChatGPT generative AI innovation. The company said the non-disclosure was a matter of risk management, allowing time for safeguards; academics and researchers argued that the risks were overstated and that the secrecy only contributed to AI fears and misinformation. Wooldridge touches on similar themes as he explores AI's history and future.
  2. Elon Musk's views on AI in Breaking Twitter: One of Elon Musk's internal motivating forces is the protection of humanity. After discussions with an academic, he came to believe that AI, if improperly guided, could destroy humanity's future (the Grand Dream gone wrong). Accordingly, he invested in OpenAI. Similar motivations prompted him to invest in Twitter, to enable the free discussion of ideas.
  3. Theories of intelligence in Kinds of Minds: When does AI become "intelligent"? And what accommodations should we afford artificial versus "organic" intelligence? Questions with roots similar to the Turing Test's form the foundation of Daniel Dennett's Kinds of Minds. (I hope Dennett revisits Kinds of Minds for the AI era!)

MYCIN demonstrated, for the first time, that AI systems could outperform human experts in important problems and provided the template for countless systems that followed. MYCIN was intended to be a doctor’s assistant, providing expert advice about blood diseases in humans. … MYCIN became iconic because it embodied all the key features that came to be regarded as essential for expert systems. First, the operation of the system was intended to resemble a consultation with a human user—a sequence of questions to the user, to which the user provides responses. … Second, MYCIN was able to explain its reasoning. This issue—the transparency of reasoning—became crucially important for applications of AI.

2 conflicts

  1. ELIZA: An early AI chatbot that used keyword matching to approximate conversation (see the sketch after this list). This implementation would not pass a Turing Test and was not truly AI: it did not communicate with understanding but simply followed scripted pattern-and-response rules. Although it may have fooled some of its users, it had no understanding of its own responses. ELIZA illustrates the limits of the mid-1960s state of the art: computing power and programming techniques were not yet up to the challenge of genuine inference.
  2. Terminator fears: The 1984 movie The Terminator introduced the concept of destructive AI to the public consciousness. The Grand Dream turned lethal is often personified in the Skynet/Terminator characters. Reality falls far, far short.
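
For a feel of how shallow ELIZA's trick was, here is a minimal sketch of keyword-driven response selection, the core of the approach. The keywords and replies are invented for illustration and are far simpler than Weizenbaum's actual script.

```python
# A minimal ELIZA-style responder: scan the input for keywords and emit a
# canned reply, falling back to a content-free prompt when nothing matches.
# Keywords and replies are invented for illustration.

KEYWORD_REPLIES = [
    ("mother", "Tell me more about your family."),
    ("always", "Can you think of a specific example?"),
    ("sad",    "I am sorry to hear you are sad. Why do you feel that way?"),
]

def respond(user_input: str) -> str:
    text = user_input.lower()
    for keyword, reply in KEYWORD_REPLIES:
        if keyword in text:
            return reply
    # No keyword matched: a vague prompt keeps the "conversation" going
    # without any understanding at all.
    return "Please go on."

if __name__ == "__main__":
    print(respond("I am sad because my mother never listens."))
    # -> "Tell me more about your family."  (first matching keyword wins)
```

Nothing here models meaning; the program's apparent engagement is entirely surface pattern matching, which is why ELIZA was not AI in any substantive sense.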

The singularity is the hypothesized point at which computer intelligence (in the general sense) exceeds that of humans. … Computers could start to apply their own intelligence to improving themselves, and this process will then start to feed off itself. [Then, the] argument goes, it will be impossible for mere human intelligence to regain control. … Kurzweil’s main argument hinges on the idea that computer hardware (processors and memory) is developing at such a rate that the information-processing capacity of computers will soon exceed that of the human brain.

1 big idea

  1. The evolution of artificial intelligence demonstrates a virtuous cycle: philosophical imaginings drive scientific progress, which ultimately yields technological breakthroughs and social impact.

DENDRAL showed that expert systems could be useful, MYCIN showed they could outperform human experts at their own game, and R1/XCON showed that they could make serious money.


A reader’s thoughts on A Brief History of AI

My rating: ⭐⭐⭐⭐ 4/5

I learned so much from this book! Although I knew AI did not suddenly emerge on the scene with ChatGPT in 2022 (I first started working with AI in 2015), I did not know even half of AI's fascinating history. This book was immensely readable and easy for even a layperson to understand.

One of the things that struck me was AI's philosophical underpinnings. I didn't realize that AI started in theoretical mathematics and appealed more to thought experimenters than to programmers. In fact, at one point in the book, a conference attendee laments to the author that there are too many computer scientists in the room and not enough philosophers and mathematicians! This can likely be attributed to technology lagging behind the philosophical imagination: for neural networks and inferential learning to become (relatively) commonplace, computers had to become smaller, less expensive, and more powerful, and that would take decades.

The author's writing style was also appealing. He writes with academic rigor (sound and complete explanations, citations, expertise) but in an approachable way that reminds one of a friendly chat over a glass of non-alcoholic wine at a dinner party.

My only regret with this book is that it was published before AI made its way into the mainstream. Much as I would love to hear Daniel Dennett's thoughts on AI vis-à-vis Kinds of Minds, I would love to hear Michael Wooldridge's perspective on AI today and its future. Hopefully, this book gets an updated edition, with a new chapter, soon.

Excerpts from A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going.