Member Reviews

You can find the full review in The Washington Post:
https://www.washingtonpost.com/outlook/2022/01/28/an-ai-loop-that-ensnares-consumers-critics-alike/

The backlash against tech began in books. In the 2010s, when Google’s motto “Don’t be evil” felt unironic and TED talks lauding the Internet as the great social equalizer still drew credulous audiences, books such as Eli Pariser’s “The Filter Bubble” and Shoshana Zuboff’s “The Age of Surveillance Capitalism” were defining terms that would massively shift public opinion. Over the next decade, technology, or more specifically software, would come to be seen less as an innovative convenience and more as a harbinger of societal turmoil.

In 2022, anti-tech is mainstream. Ideas that originated in books about how platforms and their undergirding artificial-intelligence algorithms threaten society have made their way into Netflix documentaries, op-eds, bipartisan legislation and even the latest “Space Jam” (with supervillain Al-G Rhythm, played by Don Cheadle). Technology scholars such as Lina Khan and Meredith Whittaker, once considered fringe for their criticisms of technology’s structural harms, have found themselves with prominent appointments in the Biden administration. The world is finally listening to technology critics. So now the question is: What should they write about next?

The easy answer is to ride the wave of tech’s new unpopularity, and that is the option NBC News tech correspondent Jacob Ward chose in writing “The Loop: How Technology Is Creating a World Without Choices and How to Fight Back.” The book argues that capitalistic AI technologies “prey on our psychological frailties” and threaten to create “a world in which our choices are narrowed, human agency is limited, and our worst unconscious impulses dominate society.” More than telling readers anything new about the dangers of technology, though, “The Loop” provides evidence that tech criticism itself is calcifying into a mainstream genre.

The titular “loop” that Ward warns his readers about is rooted in the power, predictability and stupidity of our unconscious minds. When humans make decisions, our brains are quick to take shortcuts. In doing so, we make predictable, systematic errors such as miscalculating risk and overtrusting authority. Technology companies, Ward argues, use algorithms to hijack these unconscious patterns for profit. The “loop” is Ward’s speculation that our ever-increasing dependence on AI products — Spotify for music recommendations, social media algorithms for news, automated weapons for waging war — drives our thoughtlessness, which in turn makes us more dependent on AI, and so on. “In a generation or two,” Ward posits, “we’ll be an entirely different species — distracted, obedient, helpless to resist the technologies we use to make our choices for us, even when they’re the wrong choices.”


Throughout the book, Ward interviews technologists, academics and everyday users to understand how different AI products have become inextricably woven into people’s lives. In one section, he talks to people addicted to “social casino games,” free-to-play gambling simulators that lull users, often poor and in dark places in their lives, into spending tens of thousands of real dollars on in-game currency. In another, Ward rides along with police officers as they patrol a beat dictated by PredPol (now Geolitica), an infamously racially biased algorithm that predicts where crime will occur based on past incidents. Ward asks, “What happens when budgets and schedules for policing are built on the assumption that a software subscription can replace the need to pay overtime for detectives?” For people and institutions alike, once AI is introduced, it’s hard to remove.


I really enjoyed this book. 10/10 would recommend
I would say that "The Loop" was an eye-opening journey into the dangers of AI. It explores the hazards that arise when AI and human behavior combine, which often leads to flawed decision-making. It also debunks a common misconception about AI: the peril is not robots enslaving humanity. Instead, Ward highlights the true danger, our own minds influencing AI systems. By drawing attention to the ways our brains make choices through shortcuts, biases, and hidden processes, the author successfully highlights the pressing need to reevaluate how we approach AI development.


An important book that should be widely read. The thought-provoking subject matter needs exploration.


This book is just as scary as it is informative and educational about the ways artificial intelligence impacts the lives of ordinary people. Yes, we are being scanned and recorded far more often than we realize, and all of it purportedly for the common good. However, history teaches us that what starts out for the common good often takes a drastic turn to our detriment and regret. This book is extremely detailed and somewhat of a technical read, with real-life examples scattered throughout about the pluses and minuses of AI. You will learn much, and perhaps become more wary of who might be watching you… Thanks to NetGalley for the advance read copy.


Jacob Ward is an experienced science and technology journalist, and he's worried about where we humans are headed in our pursuit of, and relationship to, artificial intelligence (AI). In this thought-provoking book Ward lays out his concerns and demonstrates how some of them are already playing out.

Artificial intelligence as a concept has been around since at least the 1950s. Over the years, funding for AI has waxed and waned, as has the enthusiasm for, and perception of, its usefulness.

Roughly defined, AI is the ability of computers to reason, plan, and learn as a human would. Today a form of AI known as machine learning is predominant, and it is an often-used tool of tech companies trying to influence us as we use the internet.

Machine learning software is based on pattern recognition. The software is shown numerous examples of things and over time begins to recognize patterns and to act on them. An easy-to-understand example is the Netflix software that monitors your viewing habits and recommends movies and shows it thinks you might like. It does this based on its observations of the viewing choices of others who've watched the shows you've watched.
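The Netflix-style recommendation described above is a form of collaborative filtering. As a rough illustration only (not Netflix's actual system; the user names and shows below are hypothetical), a toy item-based version can be sketched in a few lines of Python:

```python
from collections import defaultdict
from itertools import combinations

# Toy viewing histories: user -> set of shows watched (hypothetical data).
histories = {
    "ann":  {"Dark", "Mindhunter", "Ozark"},
    "ben":  {"Dark", "Mindhunter"},
    "cara": {"Ozark", "Mindhunter"},
    "dev":  {"Dark", "Bridgerton"},
}

# Count how often each pair of shows appears in the same user's history.
co_watch = defaultdict(int)
for shows in histories.values():
    for a, b in combinations(sorted(shows), 2):
        co_watch[(a, b)] += 1
        co_watch[(b, a)] += 1

def recommend(user, k=2):
    """Rank unseen shows by how often they co-occur with the user's shows."""
    seen = histories[user]
    scores = defaultdict(int)
    for show in seen:
        for (a, b), n in co_watch.items():
            if a == show and b not in seen:
                scores[b] += n
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("ben"))  # prints ['Ozark', 'Bridgerton']
```

Real systems use far larger datasets and learned models rather than raw co-occurrence counts, but the core pattern-recognition idea is the same: items frequently watched alongside your history rank highest.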

That pattern recognition is at the heart of Ward's concerns. He argues that humans are not as rational as we might like to think we are. We tend to make decisions based on instinct, gut feel and compulsions rather than on any logical reasoning of risk and probability. As any Star Trek fan knows, we humans are much more the instinctual Captain Kirk than the logical Mr. Spock. Now with AI we are coupling our instincts and compulsions to computer systems designed to generate profits by feeding us what those instincts and compulsions will respond to.

In the first part of the book the author lays out the research into how we humans make decisions and how much of our decision-making happens at a subconscious level. In the second part he demonstrates how AI finds patterns in our subconscious decision-making and reinforces them, shows how much of that reinforcement serves the profit of others, and argues that it leaves us in a restricting loop of smaller and smaller individual choice.

These AI deployments designed "not just [to] spot patterns in human behavior, but also to shape them" include finding ways to make online "for fun casino" gaming software more addictive and to entice players to move on to real gambling. It can also include automating the placement of ads as we move around the internet to optimize sales. Notice that the primary aim in both these examples is not to benefit us, the people who are subject to this software, but rather the companies deploying the software for profit.

The book is, as I said, thought-provoking, and Ward's metaphor of loops within loops constricting our choices is helpful. But he does a better job in his discussion of our human decision-making mechanisms than in describing the ways in which AI is already harming us. There is no discussion at all, for example, of how social media algorithms reinforce tribal responses to online (and real-world) political discussion, at least a portion of which I suspect is the product of the deployment of AI.

Even so, if you are interested in a better understanding of the potential dangers of artificial intelligence as it's being deployed right now, you will find this book well worth the read. Despite the topic, it is not overly technical but rather highly readable. I give The Loop Four Stars ⭐⭐⭐⭐.

NOTE: I received an advanced reviewer's copy of this book through NetGalley and Hachette Books in exchange for a fair and honest review. The book is generally available January 25, 2022.


The Loop is an interesting book that draws on the author's experiences as a news correspondent to explain why we all may be stuck in the loop. I found it interesting on the technology side, but some of the stories interested me less than others.

Thanks to the publisher and NetGalley for a copy to honestly review.


One can only hope that "The Loop" will hit the top of the nonfiction bestseller lists once it is released. While a few books have been written about AI and its impact, Jacob Ward takes the subject to a new level. The theme of the book is that we basically give up our free will by consuming too much algorithmically calculated content instead of human-curated content. It is not only that the AI solutions in our lives sometimes make harsh and bad decisions; influenced by AI, we make such decisions ourselves. Lots of references to previous books provide great context for the scholar as well as the novice on the topic.


<i>The Loop</i> is a broad cautionary tale about data science technology, mainly deep learning, but it has an unfortunate tendency to oversimplify (and occasionally misinterpret) the state of the ML industry and research.

I imagine that this book would benefit from a narrower scope. It tries to address everything from high-tech surveillance to the dangers of AGI and a "one-size-fits-all" mentality, and ends up somewhat disjointed and patchy. Ward brings up a lot of genuine questions and concerns in present-day scenarios, but mitigation is another thing entirely, and there aren't really any concrete action items. It's mostly some lofty ideals with a healthy dose of pessimism:
<blockquote>It's not even clear that if we clearly articulate the problem and outline a solution, people in a position of power will be willing to act on any of it.</blockquote>

Most of the overall points are highly worthwhile to think about. A brief sampling of such ideas follows:

- A good objective function for business may not necessarily be a desirable objective function for society.
- There are dangers in fine-tuning models originally trained for a different objective.
- People shouldn't blindly trust and follow algorithms out of convenience, or because they don't want to make a hard decision.
- There's an urgent need for accountability as applications outstrip regulation.
- It's important to distinguish between correlation vs. causation in data, especially in predictive models determining housing, credit scores, crime, etc.
- There are things we <i>should</i> do the hard way to force more deliberate thought.

However, attempts at technical explanations and extrapolation often don't make much sense, and reveal a lack of understanding of the techniques being used, as well as why and how decisions are being made in the field.

This is most visible in ch.7, where the book tries to give a crash course in ML terminology. There seems to be some confusion about what, exactly, an objective function is ("the objective function for you may be very different from the objective function for me", "setting objective functions for humans"), an example of reinforcement learning is just standard supervised learning, and there's some misunderstanding about fine-tuning and shared model architectures.
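To make the terminology concrete for readers: in machine learning, an objective (or loss) function is the quantity a training procedure minimizes over model parameters; it is a property of the training setup, not a per-person preference. A minimal sketch, using hypothetical data and a deliberately crude grid search in place of real training:

```python
# An objective function scores a model's parameters against data;
# training searches for the parameters that minimize it.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # hypothetical (x, y) pairs

def objective(w):
    """Mean squared error of the model y = w * x over the data."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# Crude grid search: try candidate slopes 0.00..3.99 and keep the best one.
best_w = min((w / 100 for w in range(0, 400)), key=objective)
print(round(best_w, 2))  # prints 2.04, the slope that best fits the data
```

Real training replaces the grid search with gradient descent and the toy model with a neural network, but the objective's role is unchanged: it is the single formal definition of "best," which is why a business's choice of objective matters so much.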

Later, in ch.9, Ward seems to misinterpret the point of algorithms designed to measure audience reactions: from the interviews he quotes, it's clear that they're picking up engagement patterns, not specific emotional reactions. Companies like Wattpad are not trying to promote things that elicit the exact same viewer reaction every time based on simple emotional response; they just want to see if they're driving community excitement. There's also a curious lack of exploration into alternative explanations for some of the points raised. I'd posit that art/music/writing becoming trite comes from the fact that things often need to be inoffensive and bland to have mass appeal (see: Marvel), not because algorithms only create things that are "matched to a few of our most basic emotions."

Other passages here and there make it obvious that Ward is not terribly familiar with the industry itself. He mentions: "... today, AI is being refined entirely inside for-profit companies." This is strictly untrue, and in fact his next example is of a GTech research project funded by DARPA. It's also a little odd that he goes on so much about how secretive and closed-off the industry is, when most companies seem to be falling over themselves to publish their work. In the course of my job (DL engineer), I see papers from many different companies, large and small, and most famous models have multiple open-source implementations and downloadable checkpoints, with a large community of people writing blog posts about how they work.

As a side note, I do have to question the weird vendetta against Google Maps and assisted navigation that briefly comes up as an example of limiting human choice (because you are given only a few options as a default). I don't know about you, but I really do not miss the MapQuest and AAA folding-map days. There are a few other examples of machines limiting choice where it seems like the algorithms are simply mitigating human error (flight paths, etc.), so I don't really buy them as supporting evidence.

The book also tends to be quite melodramatic:
<blockquote>I worry that as we become caught in a cycle of sampled behavioral data and recommendation, we will be instead caught in a collapsing spiral of choice, at the bottom of which we no longer know what we like or how to make choices or how to speak to one another.</blockquote>

With some straight-up fear-mongering:
<blockquote>And while the examples I'm about to describe may feel disconnected, remember that the interoperability of machine learning means a set of algorithms built to do one thing can also do many others well enough that you'll never know its various roles, so anything AI can do in one part of your life will inevitably metastasize into others.</blockquote>
(I already have a hard enough time with transfer-learning within the same domain...)

Finally, there's a strong narrative of complacent people who get used to offloading thinking/choice to machines, leading to nothing truly new. I would be interested in seeing studies of whether or not this is true, rather than speculation: percentages of people who search for specifics vs. autoplay on YouTube, for example, or how many people blindly take recommended food choices on delivery systems, and so on. One could also argue that recommender systems help people with niche tastes discover new artists who match those tastes (I know several people for whom this is true), even if they're not mainstream, rather than constraining choice. Just because an algorithm recommends something doesn't mean that your tastes change, and that's an idea that wasn't explored at all.

All in all, <i>The Loop</i> uses some alarmist language and iffy technical explanations that damage the credibility of its argument, but it does still bring up a <i>lot</i> of interesting ideas. Where it excels is in the fascinating current examples that Ward has dug up and the informative interviews with various people in the industry; these examples and thought-provoking questions earn it three stars and are well worth your while. At the very least, it's a jumping-off point for further discussion, and is useful as a general survey of issues (current and upcoming) in the field.
