The Illusion of Explanatory Depth

What do you know? Less than you think you do.

Jorge Arango
4 min read · Feb 6, 2019

“The only true wisdom is in knowing you know nothing.”
— Socrates

You know less than you think you do. We all do. Consider an object you interact with every day: a flushing toilet. You know how to operate this device. Depending on where you live, you activate it by either pushing a button or pulling on a small lever, which causes water to flush away waste. Fine, but how does it do this? Knowing how to operate a thing isn’t the same as understanding how it works. You probably have a rough mental model of how the toilet does its thing, but if asked to draw a diagram that explains it in detail, you’d likely have to do a bit of research.

This is an example of a cognitive bias called The Illusion of Explanatory Depth. Although it’s an old principle (as evidenced by Socrates’s quote), it was first named by cognitive scientists Leonid Rozenblit and Frank Keil. In a 2002 paper, Rozenblit and Keil explained that most of us think we know how things work, when in fact we have incomplete understandings. Our “folk theories” offer explanations that lead us to believe we know more than we actually do. We become overconfident, our mental models inadequate.

When we interact with complex systems, we often experience only a small part of them. Over time, we develop an understanding of cause-effect relationships through the elements we experience directly. While this understanding may correspond to the way that subsystem actually works, it doesn’t necessarily correspond to the way the whole works. Our understanding of the subsystem leads us to think we understand the whole. This is a challenge when interacting with systems where we can directly experience cause-effect relationships (e.g., we pull the flush lever and see and hear water rushing through the toilet), but it’s an even greater challenge in systems where such mechanics are hidden away from the user.

I’ve owned my Apple Watch for four years, and I still don’t understand why the device’s battery sometimes lasts all day, while at other times it’s completely depleted shortly after midday. At first, I was confident in my understanding of the problem: surely the Watch worked like an iPhone, a device I had some experience with (and therefore one for which I had a reliable mental model of energy usage). I tried tweaking the Watch the same way I do the iPhone, but nothing worked as I expected. Eventually, I had to admit to myself that my model of how the Watch uses energy was flawed. I’ve since adopted a Socratic mindset with regard to the Apple Watch: I just don’t know what triggers greater energy consumption on the device. The only thing I know for sure on this subject is that I don’t know.

The Illusion of Explanatory Depth leads us to make less-than-optimal decisions. Intervening in a complex system while thinking you know more than you actually do about its workings can lead to disastrous results. Designers — people who intervene in systems for a living — must adopt a “beginner’s mind” attitude toward those workings. Even if (especially if) we think we understand what’s going on, we must assume we don’t really.

Designers should also aspire to create systems that are easy to use yet offer some degree of transparency, allowing their users to build mental models that correspond to how the thing actually works. The first time I opened a toilet tank was a revelation: I could clearly see the chain of interactions that led from my pulling the lever to water rushing from the tank and down the tubes. Opening the tank isn’t something you do in your day-to-day use of the toilet, but it’s an ability the system affords. I can’t lift the lid on my Apple Watch to examine how it uses up energy.

Increasingly, the systems we design look more like an Apple Watch than a flush toilet: they’re extremely complex and driven by algorithms and data models that are opaque and often emergent. When we design an “intuitive” user interface to such a system, we run the risk of making people overconfident about how it works. We can’t build good models of systems if we can’t see how they do what they do. While this may not be an issue for some classes of systems, it could be extremely problematic for others. As we move key social interactions to some of these systems, our inability to build good models of how they work, coupled with their “easy-to-use” UIs, can cause serious challenges for our societies at every level.

Originally published at jarango.com on February 6, 2019.
