Why Artificial Intelligence Often Feels Like Magic


In 2022, artificial-intelligence firms produced an overwhelming spectacle, a rolling carnival of new demonstrations. Curious people outside the tech industry could line up to interact with a variety of alluring and mysterious machine interfaces, and what they saw was dazzling.

The first major attraction was the image generators, which converted written commands into images: illustrations mimicking specific styles, photorealistic renderings of described scenarios, evocations of particular objects, characters, textures, or moods. Similar generators for video, music, and 3-D models were in development, and demos trickled out.

Soon, millions of people encountered ChatGPT, a conversational bot built on top of a large language model. It was by far the most convincing chatbot ever released to the public. It felt, in some contexts, and especially upon first contact, as though it could actually participate in something like conversation. What felt truly magical to many users, however, were the hints at the underlying model’s broader capabilities. You could ask it to explain things to you, and it would try — with confident and frequently persuasive results. You could ask it to write things for you — silly things, serious things, things that you might pass off as work product or school assignments — and it would.

As new users prompted these machines to show what they could do, the machines repeatedly prompted us to do a little dirty extrapolation of our own: If AI can do this already, what will it be able to do next year? Meanwhile, other demonstrations cobbled together AI’s most sensational new competencies into more explicitly spiritual answers to that question.

If these early AI encounters didn’t feel like magic, they often felt, at least, like very good magic tricks — and like magic tricks, they were disorienting. It wasn’t just direct encounters with these demonstrations that were confounding, though. Explanations of how deep-learning systems and large language models actually work often emphasized incomprehensibility or, to use the terms of art, a model’s explainability or interpretability, or lack thereof. The companies making these tools could describe how they were designed, how they were trained, and on what data. But they couldn’t reveal exactly how an image generator got from the words purple dog to a specific image of a large mauve Labrador, not because they didn’t want to but because it wasn’t possible — their models were black boxes by design. They were creating machines that they didn’t fully understand, and we were playing with them. These models were inventing their own languages. Maybe they were haunted.
