The Usability of Buttons that Harm or Save Lives

Per Axbom
10 min read · Jan 15, 2018

I cannot begin to imagine the terror experienced in Hawaii when residents’ smartphones alerted them with the most horrific of messages:

BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.

[view tweet with image of this notification]

The aftermath of this message has been documented in worldwide media coverage: 38 minutes of fear, of seeking shelter, and of confusion over the lack of news on radio and TV. Many of these reports attribute the harmful, and false, message to “human error”.

Not surprisingly, many voices in the design community have expressed concern about this simplified conclusion. Human error has the ring of something that occurs naturally in our everyday lives… mishearing directions and taking a left instead of a right, adding two cups of sugar to the bowl instead of two teaspoons. But an action leading to the terrorization of a million human beings can only happen when a human is the victim of bad design.

Sadly, when the hunt for fault begins, it is not the person who designed the system who is blamed, nor the person who approved the design. The consequences are suffered by the person at the very end of the chain of design: the person who was hired to use the badly designed system, a person with zero opportunity to affect its design.

And inevitably, the person who clicked the button will suffer emotionally and psychologically to a much greater extent than any person who placed the button in their path.

Usability in warfare

Usability, a professional field with the intent to make products and services better serve the people who use them, is no stranger to war-related interfaces. Drawing on the field of human factors, weapon arsenals are continuously improved to become better at killing more people. In a Smashing Magazine article these usability metrics are outlined for artillery cannons:

  • How quickly will a new crew member learn how to use the artillery cannon (now that the former crew member is dead)?
  • How many rounds per minute (ordnance) is the cannon able to fire with an inexperienced versus an experienced crew?
  • How will improving the design of the cannon improve target acquisition (and thus kill more enemies)?
  • How does a design improvement decrease soldier fatigue (as a consequence of a lighter cognitive load)?

Again, the designer of these artillery cannons is quite disconnected from the people who use them. The mental well-being of the person pushing the button to hurt others is lost in translation. Alas, the loss of human life is in this instance the very goal of the usability measures.

Usability is not, in and of itself, a force of benevolence, although its practitioners often claim to have the best interest of humans in mind. The insights, tools and methods that usability research provides can be used in ways that make it easier for people to harm others, make it easier for people to harm themselves, or create harm to the ecosystem of the planet we inhabit.

It is when this happens that I call it misusability.

Interfaces to prevent loss of life

The system in question, whose “human error” is blamed for the alarm and ensuing panic in Hawaii, was designed for the opposite purpose: to minimize the loss of human life. In the aftermath we are receiving clues about a poorly designed system that could be triggered with little friction, with no easy way of aborting and no quick way of sending a follow-up message in the case of a false alarm. There seems to have been no consideration of handling false alarms at all, as if they were not possible. Certainly they were not anticipated.

While many in the design industry will now call attention to the poor user interface and the importance of user-centric design, there is little we know for certain about what the interface really looks like. What is being discussed is mostly drawn from anecdotal accounts in the press.

As practitioners of user experience (UX) will be well aware, the interface of a system is only part of the story. Organizational processes, culture, verbal and written communication, as well as adjacent systems, all likely contribute to the entropy of the larger system (not only the computerized one). Fatigue, for example, is caused not only by and within the technical product, but also by the many touchpoints of the organization and by pressure imposed by other humans.

In the inevitable inquiry into the events, I would urge transparent and open disclosure of everything that transpired and everything that will be changed to make the situation better. The best possible result from this incident (albeit a poor consolation for people who experienced the fear of death) is a distribution of shared knowledge that allows more people around the world to improve the usability of their systems, elevating rather than diminishing trust.

In Misusability I outline five components of addressing any negative impact of design:

  1. Listen and understand what has gone wrong.
  2. Accept and assume ownership of the problem. Recognize that it is within your power to address it.
  3. Present your findings to achieve leadership buy-in for taking action.
  4. Be transparent with your findings and your response to the problem.
  5. Be prepared to add positive friction, ensuring that people are more mindful of the consequences of their actions within the system (a minimal sketch of such friction follows this list).
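To make the fifth point concrete, here is a minimal sketch of what positive friction before a live alert could look like. Everything in it (the confirmation phrase, the two-operator rule, the function names) is a hypothetical illustration of the idea, not a description of any real emergency management system.

```python
# A hypothetical sketch of positive friction before a live alert is sent.
# The phrase, the two-operator rule and the function names are illustrative
# assumptions, not features of any actual alerting system.

LIVE_CONFIRMATION_PHRASE = "SEND LIVE ALERT TO ALL RESIDENTS"


def confirm_live_alert(typed_phrase: str, second_operator_approved: bool) -> bool:
    """Allow the send only if the operator typed the exact phrase
    and a second operator independently approved it."""
    return typed_phrase.strip() == LIVE_CONFIRMATION_PHRASE and second_operator_approved


def send_live_alert(message: str, typed_phrase: str, second_operator_approved: bool) -> str:
    if not confirm_live_alert(typed_phrase, second_operator_approved):
        return "Aborted: confirmation failed, nothing was sent."
    # In a real system the alert would be handed to broadcast infrastructure here.
    return f"LIVE ALERT SENT: {message}"


# A mistyped phrase or a missing second approval stops the send.
print(send_live_alert("THIS IS NOT A DRILL.", "send alert", False))
```

The point is not this specific mechanism, but that the irreversible action should demand a slower, more deliberate act than the routine one.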

We are just two days after the events in Hawaii and many are quick to speculate, with reason, about the interface itself. I would like to start by addressing three things that are already transparent and apparent:

1. The message is sent without a link

The notification that was sent out to all mobile phones is in all capital letters, something that for many people is emotionally disconcerting in itself, but more importantly: the notification lacks a web link.

Imagine how many questions a person may have during the first minute of receiving a message announcing the likelihood of imminent death. Imagine how many more lives could potentially be saved with clear instructions on where to seek shelter, what to avoid, what to think about, how to talk to loved ones. Imagine how easily such a web page could have been updated to inform of the error.

A web page is the ultimate information carrier of our modern age, quick to update with relevant information, a place to post clarifying explanations. Please use it in crisis situations.

It is beyond my understanding how a system designed to save lives in the 21st century can still be designed to provide a bare minimum of information to its recipients, planting only a seed for the spread of doubt, confusion and fear.
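As an illustration only, and not the actual Wireless Emergency Alert format (which has its own strict constraints on length and content), an alert payload could carry a link to a page that is updated as the situation changes. The field names and the URL below are hypothetical.

```python
# A hypothetical alert payload that carries a follow-up web link.
# Field names and the URL are illustrative assumptions, not the real
# Wireless Emergency Alert message format.

from dataclasses import dataclass


@dataclass
class EmergencyAlert:
    headline: str          # short text shown in the notification itself
    instructions_url: str  # a page that can be corrected or expanded at any time
    is_drill: bool         # an explicit flag so a drill can never masquerade as live


alert = EmergencyAlert(
    headline="Ballistic missile threat inbound to Hawaii. Seek immediate shelter.",
    instructions_url="https://example.gov/alerts/latest",  # hypothetical address
    is_drill=False,
)

print(f"{alert.headline} More information: {alert.instructions_url}")
```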

2. The test system and live system are in the SAME INTERFACE

Interface designers will argue in the coming months over the best way to differentiate two actions, or buttons, with very different purposes. I myself have already critiqued the idea of a dropdown as “the worst possible design option for this type of interaction”.

But we need to go beyond the interface and ask this: would everyone not be better off if these two actions were not even in the same interface? They should never be on the same screen at the same time. Allow people to go to one interface for testing purposes, and another one for announcing that thousands of people will die.

I would even propose not only putting them on different web addresses, but on different devices or even in separate rooms. Respect the potential harm imposed by false alarms about nuclear weapons(!) and configure the physical space accordingly.
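One way to read that proposal in technical terms, as a sketch under my own assumptions rather than a description of the real system, is that the test path and the live path should not even share a machine. The hostname and function names below are made up.

```python
# A hypothetical sketch of separating test and live alert dispatch so the two
# actions never appear in the same interface. Hostname and names are made up.

import socket

LIVE_ALERT_HOST = "alert-live-console"  # a dedicated machine, ideally in its own room


def dispatch_test_alert(message: str) -> str:
    # The test path is always available and unmistakably labelled as a test.
    return f"TEST ONLY (not broadcast): {message}"


def dispatch_live_alert(message: str) -> str:
    # The live path refuses to run anywhere except the dedicated console.
    if socket.gethostname() != LIVE_ALERT_HOST:
        raise RuntimeError("Live alerts can only be dispatched from the live console.")
    return f"LIVE ALERT: {message}"


print(dispatch_test_alert("Monthly internal drill."))
```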

3. Trust is already low and must be managed

“Is this for real?” “Is this a joke? Ballistic missile threat warning…” “Anyone else get that ballistic missile threat?”

Disbelief, even tonic immobility, when threatened is a common human response, and it is clearly demonstrated in the many questioning tweets posted minutes after the mobile notification went out. Coming to terms with the idea of a ballistic missile headed your way, and then acting on it, is not something the brain is regularly trained for.

In the face of this disbelief, information that helps you act, that puts one foot in front of the other, becomes all the more important. Once again, where is the information provided to complement the alarm? If the goal truly is to minimize the loss of lives, there is a huge gap to be filled in the provision of supporting instructions.

Undoubtedly, disbelief will be even stronger in any similar future event, and the willingness to act even lower, not only in Hawaii but for anyone in the world witnessing the unfolding events. We understand that the codes are tested regularly, but how is the response of the recipients tested and evaluated? In this scenario we have real people responding in a very real way, and so much that can be learned.

The user-centric design process starts with listening and the time to listen to real people, to real fears and doubts, is right now.

Field researchers need to be sent out to learn from residents of Hawaii and feed back insights to the design of early warning systems. And this should be done openly with publicly available information and results, for the benefit of trust and shared knowledge.

Crumbling confidence in security measures

In the hours after the incident in Hawaii I shared a short video clip on Twitter from the Dreamworks animated film Monsters vs Aliens. It plays out a scene in the White House Situation Room where the president walks up to a huge button on the wall and, before he pushes it, is interrupted by his panicking advisors and concerned military colleagues. This dialogue ensues:

– “That button launches all of our nuclear missiles!”
– “Well, then which button gets me a latte?”
– “Uhm, that would be the other one sir.”

The camera pans out to reveal an identical button two feet away from the first one. The president lazily pushes it to begin the dispensing of his java.

It’s a comedic take on usability that I never thought would feel as real as it does today. There is a huge trust deficit in the political arena and it is trickling down to the systems meant to protect us. How secure are they? How resilient to inadvertent mistakes are they? It seems they aren’t as stable as we would have hoped. Fresh in my own mind is also how, only six months ago, the public emergency siren went off falsely in Stockholm.

To restore trust I truly believe we need a significant display of transparency around the mechanisms that put all of us and our world at risk. I believe mistrust itself has a boiling point, and that it can erupt into a lot of damage to society.

Responding to “human error” with algorithms

My other concern with pushing the narrative of “human error” is that it supports the idea that by ridding ourselves of humans in decision-making we will have systems that perform better. I’ve already seen sentiments such as this one: “Shouldn’t those systems be fully computerized and equipped with artificial intelligence?”

I would argue, given all the evidence I have presented above, that there is hardly any consensus around what a defense-AI would actually do. What should it send out and when? How does it support a decision to respond? Should it be allowed autonomy in acting and responding? Should the AI communicate hope when there is none? Can it decide to save some at the expense of others?

There is a disconcerting tendency to ignore that what we call AI today is algorithms written by humans, often just enforcing and maintaining all the bias, prejudice and error that the humans bring to the code.

Today I have been rewatching bits of the 1983 cold war movie WarGames. The film involves a computerized system, an artificial intelligence, assuming control over the situation room. The storyline asserts that the AI cannot recognize human error (or the difference between real life and a game simulating it), and has no understanding of human futility.

In response to a missile attack in a game, the computer decides to launch real missiles in return, unable to differentiate between virtual and real. To save the world before launch, the young computer whiz has the computer play many iterations of the game Global Thermonuclear War against itself, having it finally ‘realize’ that there is no way to win.

The computer, Joshua, aborts the game and concludes:

A strange game. The only winning move is not to play. How about a nice game of chess?

The idea of nuclear war is already such a blatant logical error on the side of humanity that we could not possibly expect a true artificial intelligence to accept it as an option, assuming the goal is to preserve human life. We can either allow an artificial intelligence to learn exactly this, or we can teach it to be an agent of human misdirection… so that the second it detects the launch of a missile in our direction, by error or not, it launches one back. In the case of Hawaii, the artificial intelligence would perhaps have immediately launched its missile in response to the threat believed by so many to be absolutely real for 38 minutes.

I would certainly feel more comfortable encouraging our world leaders to ensure that the only pawns they play with are the wooden ones on a chess board.

Returning to reality for a moment: where do we go from here, today, in this world we share with each other? My current demand would be transparency, and a long overdue movement of funds from usability investments in systems designed to kill more people to usability investments in systems intended to save lives.

Update January 16: A picture of the interface has now been released by Hawaii Emergency Management Agency (HI-EMA). I first saw it via CivilBeat. It is many times worse than I would have dared imagine, and I believe it is easy for anyone to see how this “interface” leads to errors. As if by ironic fate, it is even reminiscent of the menu of games displayed in the movie WarGames that I referenced earlier. This similarity was first mentioned by Thomas Renger.

Update January 17: The released screenshot appears to be a phony. It would seem the transparency is not as high as we would hope for.

Hawaii emergency management officials have communicated to Honolulu CivilBeat that this facsimile is closer to the truth.

I am experimenting with mockups of this interface.


Per Axbom

Making tech safe and compassionate through design, coaching and teaching. Independent consultant. Co-host of UX Podcast. Primary publication: axbom.com