Gerd Leonhard on the Societal Impact of IoT

Gerd Leonhard is a futurist, a lifelong IT insider, and a friend. He is one of the very few futurists who does not look at the future only through a technological lens. And, like yours truly, he tries to guess what the future holds and perhaps help shape it a bit. Here I would like to share one of his broadest and most relevant talks to date. The motto? Embrace technology—don’t become it!

Watch the Talk

A Matter of Purpose

If you have even a passing interest in how technology and innovation affect everyone’s life, you already know their impact has been profound and will only grow. Life was different when people did not have smartphones, laptops, and the other digital artefacts we are all familiar with today. In the same way, life will be different in ten or twenty years. But, it should be asked, what is the purpose of it all?

Gerd Leonhard’s talk is about lifting your nose from the grindstone and taking a broader look at what is going on. To do so, he says, you need to ask yourself about purpose: the purpose of any business, any innovation, anything that matters. What is the purpose of business? Of IT? Of life itself?

If the purpose is happiness, then the next question is: can we reduce happiness to an algorithm—something we can compute, perhaps pack into an app? Are there things that data alone will not tell us?

Consider an innovation as crucial as the Internet of Things (IoT). Without a doubt, it will come with many benefits. The ability to collect and digest massive amounts of data almost everywhere could help solve global problems such as climate change, scarcity of drinking water, energy waste, or even cancer. All of this is within reach. But what could it do to us? What might the unintended, possibly nasty consequences be?

When you care about technology, nay, devote your life to it, it becomes all too easy to forget about what is properly human and cannot be reduced to something else. To capture this, Gerd Leonhard coined a new concept: androrithms. Algorithms are the quintessence of technology, the very processes that handle information about everything.

But, as Gerd Leonhard says, we humans are also made up of features that cannot be boiled down to sheer numbers or rules or categories.

Emotion, creativity, the ability to design or negotiate are absolutely human and cannot be equated with algorithms. Perhaps they can be emulated by a clever artificial intelligence; and yet, what is emulated is not us. We use algorithms; we have tools as well as purposes, yet a tool and a purpose are different things. If our purposes define us, we should be wary of trying to reproduce ourselves in sheer non-human technology.

How Tech Users Become Dumb and Dependent

For many of us, our phone is a second brain. This ubiquitous device has already become an extension of our intelligence. Our phone numbers, schedules, personal notes, music, and books are all stored in our smartphones. As these devices keep growing in capability, it will be all the more tempting to offload our own intelligence onto them.

Imagine pushing Tinder a little further than it already goes. Imagine a Tinder 2.0 that lets you date tonight and, if the encounter goes well, marry next week. The app would plan and organize everything. You would not have to choose where the date takes place or what the wedding ceremony will look like. The app would plan it all the way through. What happens? Simple: you stop thinking. As you delegate every task and every choice to the algorithm, your mind grows lazy and you become accustomed to depending completely on the machine. This thought experiment is not far-fetched, and if we turn our attention to long-term prospects, we can see the same phenomenon at a wide, exponential scale.

At the end of the 1980s, several accidents involving ultramodern planes and ships raised awareness of a rather disturbing trend. US Navy sailors mistook an airliner for a hostile fighter, fired on it, and shot it down, killing all 290 civilian passengers in the process. Elsewhere, the black boxes of crashed planes showed that their crews had been too confident in the data they received and failed to take manual control when they should have. Research followed, and the outcome was the identification of the glass cockpit syndrome: overwhelmed with massive and usually reliable information, pilots stop thinking. They lose the ability to choose and act on their own.

On a less dramatic note, last April a Vietnamese-American doctor was forcibly dragged off an overbooked plane by security officers. (Another passenger filmed the scene and the video went viral.) This rather pathetic event happened because the airline’s computer determined that the flight was overbooked and, consequently, that someone had to leave. The computer drew at random and selected this passenger for removal. What happened next made the headlines. Well, this is what happens when tech goes wrong.

The hazard is not that machines take over. Machines are far less creative and autonomous than they would need to be to do so. No: the danger is that we become more and more like machines, less and less human, and dumber. A computer does not have to care about people’s feelings when it chooses. We cannot ask a machine to “understand” what its algorithms cannot feel. But it is rather troubling that we start thinking and acting like machines after growing used to delegating so much of our thinking, feeling, and choosing to them.

Do We Really Need to Merge with Technology?

The very point of the singularity is that, sooner or later, humans will merge with technology. The distinction between what is alive and what is not, between mind and “brute” matter, between human and non-human, has already started to blur. The IoT will not remain a separate thing but will push past the limits we still know.

Connectivity is already a religion, technology already an altar, mobile phones have become the new cigarettes, and yet, Gerd Leonhard says, “on a scale of connectivity of 1 to 100, we are at 5.” Soon we will have brain-computer interfaces. Tech guru Elon Musk is preparing what he calls a “neural lace” to connect the neocortex with a “digital cortex.”

Beyond the business aspect—if Musk’s company sells this first, it will earn literally trillions of dollars—this advance will be a decisive step towards merging us with technology, especially since artificial intelligence will be necessary, on the side of the “digital cortex”, to handle the raw data pouring out of our brains. From this point of view, it becomes difficult to tell the difference between users and their tools. And here comes the potential dumbing down: “my kids, or my grand-kids”, Gerd Leonhard says, “will never know how to drive a car themselves.”

What will it mean to be human in a world where we merge with technology? How computable are we—and what should remain off-limits? Again, we are already there: Google funds a project called the Global Brain, and Facebook already has a digital brain that computes data from 2 billion users. Our smartphones are already our digital cortexes; they are just not electronically connected to our heads. But we are definitely moving in this direction.

The evolution towards the singularity puts us into a “Hellven”, a strange mix of heaven and hell. On the one hand, we are becoming “like God.” We are able to create well-crafted, realistic, immersive virtual worlds. We see it in video games, in so-called augmented reality, or even on Facebook—the company that changed, or distorted, the very notion of what friendship means. Also, “many of us will be able to do singlehandedly jobs that required a thousand individuals to be done…” After the factories were deserted, the skyscrapers will be, too: armies of white-collar workers harvesting and analyzing data—work an AI can now do—will be a thing of the past.

On the other hand, being jobless, useless, and locked-up inside one’s own digital devices does not look like a bright future at all.

The Androrithms that will Flourish

From the point of view of sheer productivity, humans are hopelessly inefficient. We have to sleep and eat. We like to invent new things. We enjoy daydreaming and exercising our imagination. We cannot be reduced to task-based, process-segmented cognition. We could remove some of our nasty traits, such as lying or driving badly, through the systematic use of lie detectors and automated driving. But should we?

Companies automate everything they can and lay off as many people as possible. Looking for cost-efficiency, they give in to the temptation of automation. People are inefficient and expensive, after all. People complain. Yet the instant, flawless calculations of algorithms do not have to mean replacing us.

As the AI researcher Luciano Floridi stated:

Algorithms outperform human intelligence when it is not about understanding, mental, or emotional states, intentions, interpretations, deep semantic skills, consciousness, self-awareness… flexible intelligence.

All of which means this is properly human territory and should receive our full attention. Gerd Leonhard suggests thinking of intelligence this way: humans have social, emotional, and intellectual intelligence, and machines have their own artificial form of it. Machines think, but in a different way than we do. Machines are far better calculators, but they are much less holistic than we are (have you ever tried programming a bot to solve CAPTCHAs? If you do, good luck!), and they don’t have a body. Our emotions affect our body temperature. We are our bodies, just as we are our emotions. Algorithms are something different.

All routine jobs are declining. Routine can be automated; machines can handle it, so they do. Cashiers are out and machines are in. Even beer can be impeccably served by robots. This means we will have to rethink our jobs. Culture will have to take precedence over technology, and our skills will have to become more humane to fit what will be needed, that is, what machines cannot do. “We will need great humans, not only great engineers or programmers”, Gerd Leonhard says.

Some Final Issues, and Why a Mission Control is Necessary

To conclude the talk, Gerd Leonhard provides a takeaway list of relevant issues and two solutions.

  • De-skilling: what skills should we maintain or cultivate? What skills can we delegate to the machines? Is knowing how to drive a car important enough for us to forbid fully automated driving?
  • More broadly, is technology manipulating us? Imagine that a computer able to carry out a genetic analysis can also tell you whether it is prudent to have a child. This is a complex ethical issue.
  • As more and more functions are delegated to algorithms and machines, do we also have an ethical imperative to harness the power of it all—IoT, AI, robotics—for the good of mankind? If so, how?
  • As a handful of companies own most key innovations, the richest one per cent’s share of wealth keeps increasing while the other ninety-nine per cent stagnates or grows poorer. Not to mention the digital divide between developed countries that can afford the devices and the rest. How do we prevent the rise or aggravation of inequalities? And how could, for example, African countries be equipped with IoT?
  • Privacy, of course, will remain a recurring issue.

Mark Zuckerberg once said he created a monster. The idea of Facebook becoming autonomous is a bit frightening. Someone must remain in a “mission control” position. Someone has to know where the data is and what the rules are. The machines won’t take over, but they make human mistakes and negligence exponentially costlier.

Thus, Gerd Leonhard suggests a couple of solutions to at least start addressing all these issues:

  1. The creation of a digital ethics council. As everyone on Earth becomes more and more interdependent, and as trillions in future earnings are at stake, it would be foolish to hope that all this will self-regulate: it won’t. A council making sure ethical rules—rules like the Asilomar AI Principles—are respected and enforced is likely to save us from what mere algorithms will never foresee on their own.
  2. As properly human, i.e. “androrithmic”, non-reducible virtues gain in importance, we must focus on developing many qualities and on improving constantly. We must be able to keep pace, up to the point of becoming “exponential humans.” Emotional intelligence will reign over IQ, and it must be fostered and cherished.

For, once again, what matters most is not technology itself, but the dreams and purposes behind it.

To learn more about Gerd Leonhard’s ideas, check out his website.
