Each new video of Spot performing amazing displays of agility and power raises questions about ethics and the future of robotics. It turns out a good place to look for answers is the past. When wondering whether Spot is the next Talos, mythology can teach us about AI.

Thirty years ago, the main concern about robots working in factories was how many people they were putting out of jobs. Now, new anxieties are arising because robots are becoming extremely mobile and smart.

Just for the record, I am a big fan of robots, but that does not mean I ignore some of the other questions that are surfacing now.  Oddly enough, people have been trying to tackle these ethical questions since the dawn of humanity. Yes, back in Ancient Greece they did not have the technical means to build what we nowadays consider robots, yet they were capable of creating several robot-like contraptions, prosthetics, and divinatory systems. These inventions raised the same questions we have today about our tech. Here are a few examples:

Talos being captured by the Argonauts

 

Should Robots Look Like Humans or Animals?

In Greek mythology, Talos was a bronze robot built to protect the island of Crete.  Talos had human traits, was autonomous, and had what we could call a basic AI system.
The darker side of Talos was that it was ruthless at doing its job, up to and including killing people in many gruesome ways should the circumstances call for it.

Very much like Spot.  The adorable robot from Boston Dynamics looks really cute and is an accomplished dancer.  However, it is also a machine capable of walking through fire and lifting very heavy loads. Hopefully the lethal part has not been installed yet!

But here is the kicker:  Talos killed many of its enemies using a very cunning plan.  It looked human and displayed emotions. What did Talos do with some of its enemies?  It acted innocent and gave them hugs.  Yes, hugs.  Then its bronze body would heat up and burn them to death.

This is an early example of humans being deceived into trusting a robot because of its appearance.  Look, everyone loves Spot because it looks like a dog, and I have seen several videos of people petting Spot without even realizing they were doing it.

Recently, a local meetup group set up a call with some of the Boston Dynamics staff, and the discussion took this observation further.

As humans, we pass judgment based on our past experience and, except for a few rare breeds, would not think twice about getting close to a robot that looks like something or someone we like.  The problem is that this judgment call bypasses any assessment of the risks.

Spot may not be programmed to bite us, but considering its size and weight, there is plenty of damage the auto-doggy can inflict on its environment without meaning to.

Which leads us to the next question to ponder:


Should AI have fail-safe mechanisms?

In antiquity (or even earlier, in the Egyptian myths where the first reports of magicians appear), it turns out that there is always a way to regain control of the AI.  In Talos' case, the talented Medea found the robot's literal Achilles' heel: the single vein in its ankle, sealed with a bronze nail, that kept its life fluid inside.  As an aside, Medea, who perhaps should be known as the first hacker, found many ways to disrupt systems.

Should we build fail-safe mechanisms in by default?  Many robots have them, but let's take it a step further in terms of AI.  While we are still far away from true AI as depicted in the movies, the systems are getting smarter, and too often the fail-safes are treated as an afterthought. For example, GDPR, HIPAA, and FERPA can be seen as fail-safes and not just pains-in-the-you-know-what.  It's always a good thing to wonder what would happen should something go wrong with what we are building; a minimal sketch of what a default-on fail-safe could look like follows below.
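To make that concrete, here is a minimal, hypothetical sketch in Python of a fail-safe built in by default: a watchdog that keeps a robot moving only while every safety check passes. The robot interface, its methods, and the limit values are all invented for illustration; real platforms like Spot expose their own safety APIs.

```python
import time

# Hypothetical limits for illustration; a real robot defines these in its safety spec.
MAX_JOINT_TORQUE = 50.0    # newton-meters
HEARTBEAT_TIMEOUT = 0.5    # seconds of operator silence before we stop


class FailSafeController:
    """Wraps a (hypothetical) robot control loop with a default-on kill switch."""

    def __init__(self, robot):
        self.robot = robot                  # assumed to expose step(), torques(), stop()
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Called periodically by the operator's station; silence trips the fail-safe."""
        self.last_heartbeat = time.monotonic()

    def run(self):
        try:
            while True:
                # Fail safe, not fail silent: any violated invariant halts motion.
                if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
                    raise RuntimeError("lost operator heartbeat")
                if max(self.robot.torques()) > MAX_JOINT_TORQUE:
                    raise RuntimeError("joint torque limit exceeded")
                self.robot.step()
        except Exception as reason:
            # The one branch that must always work: power down the actuators.
            self.robot.stop()
            print(f"fail-safe engaged: {reason}")
```

The design point is that the safe state is the default: motion continues only while the heartbeat and the limit checks hold, rather than stopping only when someone remembers to press a button.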

Oddly enough, the concept of a fail-safe mechanism also brings back the question of what the AI (and the robot) should look like.

Let’s say that Boston Dynamics decides to take things a step further and really make Spot look like a dog.  So now rather than looking at THIS:

Robotic dog

(Credit: Boston Dynamics)

you are looking at THIS:

One day, Spot goes a bit bonkers and needs to be taken out.  As humans, which of the two do you think would be simpler for us to “kill”?  Not the lifelike, dog-shaped one. That’s for sure.

It’s quite a dilemma to tackle, because the lifelike shape that is problematic now is also what made it simple to adopt the robot in the first place.

 

Are we ready to live with the consequences of AI/Robots?

In mythology, the reason why Prometheus betrays the gods and steals fire is simple: he wanted to help humans.  Compared to the natural world around us and its predators, we were ill-equipped and could use some extra help.  Tools and fire provided that help.

And we have not looked back since, always turning to technology and inventions to improve our lives.  That’s who we are.

The question is whether we are willing to live with the consequences of our advancements.  

The Ancient Greeks were already searching for recipes for eternal life but quickly wrote about the downfall of living for eternity stuck in an aging body!

Nowadays, we need to shift our thinking from how the devices and contraptions we invent impact our lives to how AI will transform us.

AI, or at least machine-driven algorithms, is already way faster than us at analyzing situations. Some would even argue that it is becoming smarter than us.  However, we are still a long way from machines that can replicate human empathy and reasoning.  That gap is where we face both the greatest threat and the greatest opportunity.

We get to decide how we want technology to help us and which limits we want to set.  And as we do this, let’s not be blinded by our own times; by looking back in history, we can learn a lot about how our ancestors dealt with similar challenges and learn from both their mistakes and their accomplishments.

If you are interested in AI, ethics, and robotics, I highly recommend picking up Gods and Robots by Adrienne Mayor.  Her deep dive into AI, gods, and mythology is extremely well written and sheds new light on several myths.

 

 
