
After far too long away doing some personal things, I wanted to come back with a Cognitive Bias we could really sink our teeth into! Today we’re going to talk about a very timely one: Automation Bias (sometimes called the automation effect).
Automation Bias suggests that we are more likely to believe information that comes from an automated decision-making system than from other sources, even when those other sources are right and the system is wrong. Oh yeah. This one is going to be a problem for us. There are lots of situations in which we rely on automated information to be absolutely correct. When we look at our phones, we 100% believe the time on the front. This is because over time it has been consistently accurate, and because we understand that the time on our phones comes from somewhere reliable, like a network time server or a GPS satellite. If the time on our phones became unreliable, we would be less likely to believe it, but I think we have cracked the nut on phone time. It’s a simple transmission of information; it’s not an automated decision-making system. The real danger point comes when we think about the future of our decision-making systems, in which all arrows point towards AI.
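(For the curious, here’s roughly what “getting the time from somewhere reliable” can look like in code. This is a simplified sketch using the third-party ntplib package and the public pool.ntp.org servers; your phone’s actual time sync is handled by the OS, the carrier network, or GPS, not a script like this.)

```python
# Minimal sketch: ask a public NTP server for the current time.
# Assumes the third-party "ntplib" package (pip install ntplib).
import ntplib
from datetime import datetime, timezone

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)

# tx_time is the server's transmit timestamp (seconds since the Unix epoch).
server_time = datetime.fromtimestamp(response.tx_time, tz=timezone.utc)
print(f"Network time: {server_time:%Y-%m-%d %H:%M:%S %Z}")
print(f"This machine's clock is off by {response.offset:+.3f} seconds")
```

Notice there’s no “decision” being made here; it’s exactly the simple transmission of information described above, which is why trusting it rarely burns us.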
When our phone autocorrects us, we immediately notice and often get annoyed. Nobody has ever actually wanted to say “ducking,” have they? The autocorrect system on our phones is an automated decision-support/decision-making system, but we don’t often suffer from Automation Bias when it comes to full-word autocorrect “fails,” likely because we know what our intent was. We might second-guess it if it corrects a spelling to something that could be right but is unfamiliar to us, but we generally know what we are trying to say and know when the system has made an error.
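To see why autocorrect counts as a decision-making system, here’s a toy spell-corrector in Python. Real keyboards use context-aware language models, and the little dictionary below is entirely made up, but the failure mode is the same: the system confidently substitutes the closest word it knows, whether or not that’s what you meant.

```python
# Toy autocorrect: replace a typo with the closest word in a tiny dictionary.
# Real phone keyboards use context-aware language models; this is only a sketch.
from difflib import get_close_matches

DICTIONARY = ["duck", "ducking", "luck", "lucky", "track", "tracking"]

def autocorrect(word: str) -> str:
    matches = get_close_matches(word.lower(), DICTIONARY, n=1, cutoff=0.6)
    # If nothing in the dictionary is close enough, leave the word alone.
    return matches[0] if matches else word

print(autocorrect("dcking"))  # -> "ducking": a confident decision, right or not
print(autocorrect("luk"))     # -> "luck"
```

Because we know our own intent, we catch these decisions instantly, which is exactly why this is one automated system we don’t over-trust.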
Now let’s move on to really complex systems like airplane autopilots. An autopilot takes in a ton of information from various systems: speed, altitude, navigation, and many others. The autopilot FLIES the plane. Does this not freak anyone else out? But based on all of the information it receives, the autopilot does a great job! We don’t usually let it take off or land, but it probably could. And this is how these problems get started. We trust autopilot. We couldn’t possibly calculate wind speed, trajectory, and the rest in our heads without the help of the systems that feed the autopilot. We expect it to take perfect information and make perfect decisions because, unlike human beings, computers seem infallible. This reliance can lead to misuse and to inattention to other cues that might be important, and it’s easy to imagine disregarding those cues, especially when they come from far more fallible humans.
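To give a feel for the kind of loop running inside an autopilot, here’s a drastically simplified altitude-hold controller. Real autopilots fuse many redundant, certified sensor systems; the gain, setpoint, and fake physics below are all invented for the demo.

```python
# Drastically simplified "altitude hold": sensor reading in, control command out.
# All numbers here are made up for illustration; real autopilots are nothing
# this naive and run on redundant, certified hardware.

TARGET_ALT = 10_000.0  # feet (hypothetical setpoint)
KP = 0.02              # proportional gain, chosen arbitrarily for the demo

def pitch_command(current_alt: float) -> float:
    """Return a pitch command (degrees) proportional to the altitude error."""
    error = TARGET_ALT - current_alt
    # Clamp so the toy controller can't command an absurd attitude.
    return max(-10.0, min(10.0, KP * error))

altitude = 9_500.0
for step in range(5):
    cmd = pitch_command(altitude)
    altitude += cmd * 10  # toy stand-in for the aircraft's response
    print(f"step {step}: altitude {altitude:8.1f} ft, pitch {cmd:+.2f} deg")
```

The catch, and this is the whole point of Automation Bias, is that the loop is only as good as the sensor value coming in. Feed it a bad altitude reading and it will confidently, smoothly fly you to the wrong place.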
Now let’s think about our increasing reliance on AI. I’m hoping you all watch Black Mirror and have seen how fun science fiction becomes when we begin relying more and more on automated decision-making systems. The problem is that when we trust these systems more than we trust humans… well… trouble. AI systems have not yet actually accomplished their ultimate goals. Artificial intelligence today is only as good as the humans who have programmed it. For example, a computer that is taught to add numbers cannot subtract them unless it’s been specifically taught to do so. Everything computers can do is still controlled by humans. Totally fallible humans. Sure, they can take in and process much more information than a human. But without common sense, and without a human paying attention to understand the nuance, AI is not yet infallible.
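To make that “taught to add, can’t subtract” point concrete, here’s a toy decision system whose entire repertoire is whatever a human handed it. The operation names and error message are invented for the example; the point is just that every capability traces back to a fallible human’s choice.

```python
# Toy "decision system": it can only do what a human explicitly taught it.

OPERATIONS = {
    "add": lambda a, b: a + b,  # a human taught it addition...
    # ...but nobody taught it subtraction, so that ability simply
    # does not exist, however confident the interface looks.
}

def compute(op: str, a: float, b: float) -> float:
    if op not in OPERATIONS:
        raise ValueError(f"I was never taught how to {op!r}")
    return OPERATIONS[op](a, b)

print(compute("add", 2, 3))       # 5
print(compute("subtract", 5, 2))  # ValueError: never taught to 'subtract'
```

So, let’s not fall for this Automation Bias quite yet.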