Sometimes, AI can look like magic.
When ChatGPT hit the scene, a major part of the innovation was the front end. OpenAI gave anyone access to large language models through a familiar chat interface, hiding the technical components, the API calls, and the code behind a layer of design.
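To make that concrete, here is a rough sketch of the kind of request the chat window hides. It assumes the Chat Completions HTTP endpoint, the `requests` library, an API key in an environment variable, and an illustrative model name; it is an illustration of the abstraction, not a prescription.

```python
# A sketch of the raw API call the chat interface abstracts away.
# Assumptions: the Chat Completions endpoint, the `requests` library,
# an API key in OPENAI_API_KEY, and an illustrative model name.
import os
import requests

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o",  # illustrative; any chat model name works here
        "messages": [{"role": "user", "content": "Why did this answer come out?"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```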
Design can help abstract away complexity, but it can also abstract away accountability. For example, consider cars. Those who drive a manual transmission are a little closer to the engine, so to speak. If something goes wrong, they are more likely to have an idea of what it will take to fix it.¹ An automatic transmission, on the other hand, abstracts away the need to manage gears.
Next, think about the difference between MySpace and Facebook. People on MySpace experimented with HTML snippets and heavy customization. If something went wrong on your page, you could check your code and try to piece together a fix. On Facebook, customization is limited. So is the ability to troubleshoot.
Now let’s turn to ChatGPT. Here, the question of what went wrong is pushed not only back to OpenAI but, within OpenAI, onto the model itself. “Why did this answer come out?” The customer doesn’t know. The engineering team doesn’t know. The model can’t “know,” but it can generate more text to appease the person asking “Why?”
When you abstract away complexity, you leave the door open for shoulder-shrugging about accountability. Whatever goes wrong gets pushed upstream. And if what sits upstream is a complex AI model with billions or trillions of weights? Then it gets waved away.
-
1. I am the exception that proves the rule. I drive stick, and I know nothing about cars.