As you have probably heard by now, the airline needed to get four crew members from Chicago, IL to Louisville, KY for a flight the next day and decided to boot four passengers off the plane after nobody volunteered to give up their seat. In light of the ensuing publicity disaster, it would have been cheaper and better for United to buy four Porsches and let the crew members drive to their destination.
Courts will sort out who owes what to whom, but I have a different angle.
One in a Million > Zero
Earlier this month, JPMorgan Chase CEO Jamie Dimon talked about the massive contracts we all routinely accept without reading.
Most people do this online every day: “Click here to agree with our terms and conditions.” You can read them if you want, but few people do. You’d never get anything else done.
“When you pushed that button and said ‘I agree,’ you have no idea what you agreed to. I do.”
Theoretically, United and all other airlines have the right to give your seat to someone else under certain conditions. It’s in the fine print. If you buy a plane ticket, that’s what you’re agreeing to in their “Contract of Carriage.” It’s a take-it-or-leave-it proposition: accept their terms or go away.
Those contracts, drawn up by high-paid lawyers, cover every imaginable scenario (even the one-in-a-million ones) to protect the company—not the customers.
Whether or not United acted within its right to remove passengers from the plane, the airline staff who asked Dr. Dao to deplane—and then called police when he refused—certainly thought they were right.
An airline is an incredibly complicated machine with a zillion moving parts: planes, passengers, crew, luggage—and all of it must be in certain places at certain times, or the whole thing will fall apart. It’s remarkable when you think about it.
Modern airlines operate at the scale and speed they do by automating all those little details. Computer algorithms make many of the decisions; people just execute them.
The computer knows the rules, but it’s not perfect. One of those one-in-a-million scenarios will arrive eventually—or worse, several consecutive ones.
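As a toy illustration of what "the computer knows the rules" means in practice: an automated bump-selection rule might look something like the sketch below. The fields, weights, and ranking are entirely invented, not United's actual system; the point is that the algorithm handles only the scenarios its rules anticipate.

```python
# Toy sketch of an automated "involuntary denied boarding" selection rule.
# Purely illustrative -- the field names, ranking criteria, and priorities
# are invented, not any airline's actual algorithm.

def select_bumps(passengers, seats_needed):
    """Rank passengers from lowest to highest priority and return the
    lowest-priority ones to bump: cheapest fare class first, then lowest
    loyalty tier, with later check-in breaking ties. Note what is missing:
    there is no rule for a selected passenger who refuses to leave."""
    ranked = sorted(
        passengers,
        key=lambda p: (p["fare_class"], p["loyalty_tier"], -p["checkin_order"]),
    )
    return ranked[:seats_needed]

flight = [
    {"name": "A", "fare_class": 3, "loyalty_tier": 0, "checkin_order": 5},
    {"name": "B", "fare_class": 1, "loyalty_tier": 2, "checkin_order": 1},
    {"name": "C", "fare_class": 2, "loyalty_tier": 1, "checkin_order": 3},
    {"name": "D", "fare_class": 3, "loyalty_tier": 0, "checkin_order": 2},
]
print([p["name"] for p in select_bumps(flight, 2)])  # ['B', 'C']
```

The rules are tidy and defensible on their own terms; the one-in-a-million case (nobody volunteers, and the selected passenger says no) simply isn't in them.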
I suspect that’s what happened with United.
Several different processes converged. Computers issued instructions and people responded. No one wanted violence, but a series of flawed decisions produced it.
Things like that happen when you don’t build an adequate error margin into a complex system. Occasionally, conditions will combine to create results no one wants or expects.
Here’s the scary part.
Airlines aren’t the only industry to depend on complex algorithms. So do banks, brokerages, utility companies, hospitals, pharmacies, health insurers, governments, and more.
All these organizations let their algorithms make complex decisions in rapidly changing conditions. They work fine… until they don’t.
Yet we all constantly agree to contracts we don’t understand, then submit ourselves to algorithms that follow them to ludicrous extremes.
See the problem?
Everything Awesome Until Not
While writing this story, I remembered a tweet Harald Malmgren, adviser and senior aide to four US presidents, wrote on this subject. Here it is.
“Algorithms increasingly write headlines, so entering neverland, algos writing for algos w/o cautious skepticism. Everything awesome until not.” https://t.co/eG0uyAw7uP —Harald Malmgren (@Halsrethink), October 3, 2016
(By the way: Harald alone is worth the price of attending our Strategic Investment Conference next month, but you’ll meet a total of 25 financial and geopolitical wizards if you join us there.)
Harald describes the financial-market equivalent of what just happened to United Airlines. Algorithms emit financial news and analysis. Other algorithms read it. Yet other algorithms launch trades a millisecond later.
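A toy sketch of that loop, with entirely invented dynamics: one algorithm turns price moves into "headlines," another trades on the headlines, and each output becomes the other's input with no skeptical human in between.

```python
# Toy feedback loop between a news algorithm and a trading algorithm.
# The dynamics are invented for illustration -- the point is only that
# each algorithm's output feeds the other's input.

def headline_sentiment(price_change):
    # News algo: converts the latest price move into a sentiment score.
    return 1.0 if price_change > 0 else -1.0

def trade_impact(sentiment, aggressiveness=0.5):
    # Trading algo: buys on good news, sells on bad, moving the price.
    return sentiment * aggressiveness

price, change = 100.0, 0.2  # a small initial uptick
for step in range(5):
    sentiment = headline_sentiment(change)  # algo reads the last move
    change = trade_impact(sentiment)        # algo trades on the headline
    price += change
print(price)  # 102.5 -- the initial blip gets amplified, step after step
```

With no "cautious skepticism" anywhere in the loop, a small initial move compounds in one direction until something outside the loop interrupts it.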
“Neverland” is exactly right. It’s risky enough even if you actually understand the investments you own, and worse if you don’t.
Maybe you own one or more exchange-traded notes (ETNs). They look a lot like exchange-traded funds (ETFs). What most people don’t know: aside from the fact that ETFs and ETNs both trade on exchanges, they’re entirely different species.
- An ETF is an “investment company,” a corporation or trust legally separate from the company that sponsors and sells it. If the sponsor goes bankrupt, the ETF survives under new management.
- An ETN, on the other hand, is a kind of bond. Instead of owning a slice of a big portfolio, you have loaned your money to a bank. The bank promises to repay you based on an index. They explain all this in the disclosure documents, but few people read them.
Eventually, one of those banks—which depend on algorithms much like United does—will fail, and those who hold its ETNs will have to stand in line with other unsecured creditors, hoping to recover some of their investment. Good luck.
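The structural difference above can be sketched in a few lines. The numbers, the index return, and the 25% recovery rate are invented for illustration; actual recoveries for unsecured creditors in a bank failure vary widely.

```python
# Toy comparison of ETF vs. ETN payouts. All figures are invented
# for illustration.

def etf_value(shares, nav_per_share):
    # An ETF holder owns a slice of a legally separate portfolio;
    # if the sponsor fails, the assets survive under new management.
    return shares * nav_per_share

def etn_value(principal, index_return, issuer_solvent, recovery_rate=0.25):
    # An ETN holder owns an unsecured promise from a bank, indexed to
    # something. If the bank fails, the holder stands in line with the
    # other unsecured creditors and recovers some fraction, maybe.
    promised = principal * (1 + index_return)
    return promised if issuer_solvent else promised * recovery_rate

print(etf_value(100, 52.0))                           # 5200.0 -- portfolio survives
print(etn_value(5000, 0.25, issuer_solvent=True))     # 6250.0 -- bank pays in full
print(etn_value(5000, 0.25, issuer_solvent=False))    # 1562.5 -- creditors' line
```

Same exchange, same ticker-like look and feel, but the ETN's payoff has an extra variable in it: the issuer's solvency.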
This problem will get worse, not better.
We already let algorithms decide who gets a job, whether you can get a loan and on what terms, what funds go in your retirement portfolio, whether that spot on your MRI is cancer, and more.
That’s not always bad. In fact, it’s usually helpful and sometimes life-saving.
The latest “deep learning” artificial intelligence systems teach themselves how to do these things. They learn fast without human teachers. (We’re more like the textbooks.)
One thing AI systems can’t do, however, is explain why they decided X is better than Y. We have to trust them on that.
Most of the time it works out well. Occasionally it doesn’t. If you are the unlucky person interacting with the system at that moment, you might get a result like Dr. Dao’s.
I don’t think stopping progress is the solution. We probably couldn’t stop it even if we tried.
So, we’ll see more of these events where complex processes produce ridiculous or even harmful results. Welcome to Neverland.
See you at the top,