Connecting the Dots

Airlines, Algorithms, and Accidents

April 18, 2017

United Airlines had a rough week, but not as rough as one of its passengers. Dr. David Dao received a concussion, a broken nose, and two broken teeth after airport police forcibly dragged him off the plane he had boarded only a short while earlier.

As you have probably heard by now, the airline needed to get four crew members from Chicago, IL to Louisville, KY for a flight the next day and decided to boot four passengers off the plane after nobody volunteered to give up their seat. In light of the ensuing publicity disaster, it would have been cheaper and better for United to buy four Porsches and let the crew members drive to their destination.

Courts will sort out who owes what to whom, but I have a different angle.

One in a Million > Zero

Earlier this month, JPMorgan Chase CEO Jamie Dimon talked about the massive contracts we all routinely accept without reading.

Most people do this online every day: “Click here to agree with our terms and conditions.” You can read them if you want, but few people do. You’d never get anything else done.

Dimon said:

“When you pushed that button and said ‘I agree,’ you have no idea what you agreed to. I do.”

Theoretically, United and all other airlines have the right to give your seat to someone else under certain conditions. It’s in the fine print. If you buy a plane ticket, that’s what you’re agreeing to in their “Contract of Carriage.” It’s a take-it-or-leave-it proposition: accept their terms or go away.

Those contracts, drawn up by high-paid lawyers, cover every imaginable scenario (even the one-in-a-million ones) to protect the company—not the customers.

Whether or not United acted within its right to remove passengers from the plane, the airline staff who asked Dr. Dao to deplane—and then called police when he refused—certainly thought they were right.

Complex Machinery

An airline is an incredibly complicated machine with a zillion moving parts: planes, passengers, crew, luggage—and all of it must be in certain places at certain times, or the whole thing will fall apart. It’s remarkable when you think about it.

Modern airlines operate at the scale and speed they do by automating all those little details. Computer algorithms make many of the decisions; people just execute them.

The computer knows the rules, but it’s not perfect. One of those one-in-a-million scenarios will arrive eventually—or worse, several consecutive ones.

I suspect that’s what happened with United.

Several different processes converged. Computers issued instructions and people responded. No one wanted violence, but a series of flawed decisions produced it.

Things like that happen when you don’t build an adequate error margin into a complex system. Occasionally, conditions will combine to create results no one wants or expects.

Here’s the scary part.

The airline industry isn’t the only one that depends on complex algorithms. So do banks, brokerages, utility companies, hospitals, pharmacies, health insurers, governments, and more.

All these organizations let their algorithms make complex decisions in rapidly changing conditions. They work fine… until they don’t.

Yet we all constantly agree to contracts we don’t understand, then submit ourselves to algorithms that follow them to ludicrous extremes.

See the problem?

Everything Awesome Until Not

While writing this story, I remembered a tweet on this subject from Harald Malmgren, adviser and senior aide to four US presidents.

(By the way: Harald alone is worth the price of attending our Strategic Investment Conference next month, but you’ll meet a total of 25 financial and geopolitical wizards if you join us there.)

Harald describes the financial-market equivalent of what just happened to United Airlines: algorithms generate financial news and analysis, other algorithms read it, and still other algorithms launch trades a millisecond later.

“Neverland” is exactly right even if you understand the investments you own, and it’s even worse if you don’t.

For instance…

Maybe you own one or more exchange-traded notes (ETNs). They look a lot like exchange-traded funds (ETFs), but most people don’t know that, aside from both trading on exchanges, the two are entirely different species.

  • An ETF is an “investment company,” a corporation or trust legally separate from the company that sponsors and sells it. If the sponsor goes bankrupt, the ETF survives under new management.
  • An ETN, on the other hand, is a kind of bond. Instead of owning a slice of a big portfolio, you have loaned your money to a bank. The bank promises to repay you based on an index. The banks explain all this in the disclosure documents, but few people read them.
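The difference matters most when the sponsoring institution fails. Here’s a minimal sketch of that distinction, using hypothetical numbers (the 40% recovery rate is an assumption for illustration, not data about any real bankruptcy):

```python
# Illustrative sketch (hypothetical numbers): how ETF and ETN holders
# fare differently if the sponsoring institution goes bankrupt.

def etf_value_after_sponsor_bankruptcy(portfolio_value: float) -> float:
    # An ETF is a separate legal entity. Its holders still own the
    # underlying portfolio, so the sponsor's bankruptcy doesn't wipe it out.
    return portfolio_value

def etn_value_after_issuer_bankruptcy(note_value: float,
                                      recovery_rate: float) -> float:
    # An ETN is unsecured debt of the issuing bank. Holders stand in line
    # with other unsecured creditors and recover only a fraction.
    return note_value * recovery_rate

invested = 10_000.0
print(etf_value_after_sponsor_bankruptcy(invested))       # 10000.0
print(etn_value_after_issuer_bankruptcy(invested, 0.40))  # 4000.0 (assumed 40% recovery)
```

Same ticker-style product on the surface, very different claim underneath: the ETF holder owns assets, while the ETN holder owns a promise.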

Eventually, one of those banks—which depend on algorithms much like United does—will fail, and those who hold its ETNs will have to stand in line with other unsecured creditors, hoping to recover some of their investment. Good luck.

Deep Learning

This problem will get worse, not better.

We already let algorithms decide who gets a job, who can get a loan and on what terms, which funds go in your retirement portfolio, whether that spot on your MRI is cancer, and more.

That’s not always bad. In fact, it’s usually helpful and sometimes life-saving.

The latest “deep learning” artificial intelligence systems teach themselves how to do these things. They learn fast without human teachers. (We’re more like the textbooks.)

One thing AI systems can’t do, however, is explain why they decided X is better than Y. We have to trust them on that.

Most of the time it works out well. Occasionally it doesn’t. If you are the unlucky person interacting with the system at that moment, you might get a result like Dr. Dao’s.

I don’t think stopping progress is the solution. We probably couldn’t stop it even if we tried.

So, we’ll see more of these events where complex processes produce ridiculous or even harmful results. Welcome to Neverland.

See you at the top,

Patrick Watson

P.S. If you’re reading this because someone shared it with you, click here to get your own free Connecting the Dots subscription. You can also follow me on Twitter: @PatrickW.

Comments

Patrick Watson

April 19, 2017, 8:02 a.m.

Maybe I wasn’t clear. The police decision to remove the passenger forcibly was only the last in a sequence of decisions by both machines and people. There were several points at which better choices would have changed the outcome.

james@lasenby.com

April 18, 2017, 9:37 p.m.

I get your point.  But you totally missed the point.  Those cops were not robots.  They had within their power to make a discretionary call.  At the very least a little more tact in handling the situation.  More a reflection of the police state than an algo paradise in my opinion.  Definitely will see more of this, it’s got nothing to do with progress.