Imagine a software program that can learn on its own. Initially, it starts out as a few dumb lines of code. But over time, the application is able to improve itself through constant iterations.

If it’s able to improve itself at a modest rate of 1% per day, that program will be twice as smart in about 70 days.

That’s not a huge gain.

But within its first year, the program will be 37 times smarter. And that’s just with 1% daily growth.

Another year later, it'll be 37 times smarter again (roughly 1,400 times where it started).
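For the curious, here's a minimal sketch of the compounding math above, assuming a constant 1% daily improvement rate (the numbers are illustrative, not a model of any real system):

  # Compound a hypothetical 1% daily self-improvement rate.
  DAILY_RATE = 0.01

  def capability_after(days, rate=DAILY_RATE):
      # Multiple of the program's starting capability after `days` of compounding.
      return (1 + rate) ** days

  print(round(capability_after(70), 1))    # ~2.0    -> roughly twice as capable in ~70 days
  print(round(capability_after(365), 1))   # ~37.8   -> about 37x after one year
  print(round(capability_after(730), 1))   # ~1427.3 -> another ~37x on top of that after two years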

With that kind of exponential improvement, it wouldn’t take long for those “dumb” lines of code to outgrow the programmer’s original intentions.

Eventually, it’ll start making its own decisions.

And those decisions won’t necessarily benefit humanity.

Is Uncontrollable Artificial Intelligence Inevitable or Avoidable?

From The Terminator to The Matrix to Daemon, the scenario above is a popular plotline in countless books and movies:

  • Humans create artificial intelligence (AI) with the best of intentions.
  • The AI improves faster than anyone could have imagined.
  • It eventually unleashes death and destruction on its creators.

And this hypothetical danger is hardly new.

Science-fiction writer Isaac Asimov even penned a series of “laws” to accompany his seminal 1950 classic I, Robot:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov was way ahead of his time, formulating these Three Laws of Robotics back when computers could barely handle simple arithmetic.

It made for great science fiction writing.

But when author and programmer Daniel Suarez wrote Daemon in 2006, he did so at a time when we already had the ability to create autonomous, self-learning programs.

This is no longer science fiction.

It’s just science.

From traffic lights to wristwatches to coffeemakers, we are now surrounded by artificial intelligence. And these programs are only getting smarter and faster – exponentially.

In fact, Google recently made headlines when its DeepMind program AlphaGo beat the world’s top players of Go – one of the most complex board games in the world. And just this month, a newer version, AlphaGo Zero, taught itself the game purely through self-play, with no human game data at all. It beat its predecessor 100 games to 0.

What happens when these dumb lines of code outgrow us humans?

What happens when smart lines of code don’t need humans, period?

Can Asimov’s Laws of Robotics Keep Us Safe?

If you listen to the likes of Elon Musk, Stephen Hawking, or Hollywood, there’s a lot to fear. We’re fast approaching a technological singularity, after which we can’t possibly predict what the world will be like.

But we tend to side with the Mark Zuckerbergs and Ray Kurzweils of the world, believing that AI, automation, and machine learning will ultimately yield net positives for humanity.

How do we justify such a rosy outlook?

Well, we maintain that self-learning applications aren’t inherently malicious or benevolent. As with any tool, there is tremendous potential for good or bad. And it ultimately comes down to how these programs are used.

Because humans are humans, careful thought must go into how AI tools are deployed in the real world. We must anticipate the unintended consequences of misuse and take appropriate steps to control these emerging technologies (and our own worst instincts).

We’ve already done this with potentially dangerous tools like the automobile. Cars can be convenient forms of transportation, or they can be death traps. It depends on what laws and regulations we have in place.

And the same thing needs to happen with autonomous cars (or whatever AI platforms we develop).

If we do nothing at all, these tools will end up doing everything for us, including:

  • Creating their own values.
  • Ranking their own priorities.
  • Making their own decisions.
  • Executing their own directives.

And this could be very bad for everyone.

However, if we design the right frameworks, provide proper training, and give intelligent instructions, then artificial intelligence can be a blessing – both for society as a whole and for the testers and developers who work “under the hood.”

Do you agree that it’s possible to “control” programs that are smarter and faster than we are?

Are you scared of the coming singularity and what that means for the human species?

Do you fear retroactive punishment for not helping AI improve faster (as outlined in the horrifying thought experiment known as Roko’s Basilisk)?

Don’t be shy.

Share your thoughts down below.