Domo Arigato, Mr. Roboto 

So recently mere science fiction, today's robots are tools and appliances – but as technology changes, so will perception. Humans and robots may soon have some decisions to make when it comes to the status of their relationship.

A team at MIT is experimenting with new ways for humans and robots to work together in industrial environments. The result should be greater manufacturing efficiency and productivity. And maybe the philosophers can pick up where the scientists leave off. With every passing moment robots become more and more a part of our daily lives. It’s time we devote some thought to our relationship with a species of device that may soon touch every aspect of what we do and how we live.

Life Lessons

As I have mentioned in prior work, I lead a dual life of sorts. One of my jobs is here at the National Center for Manufacturing Sciences, where I perform a function of such mind-boggling consequence that mere “words” cannot readily describe the role I play, or the void my absence would leave. At least, this is what I repeatedly and energetically insist to my superiors, who up to this point seem willing to believe my claims.

But that’s not all! For 15 years, I’ve also been a journalist, analyst, and consultant for the global video games industry. Describing what I actually do in that business is difficult. I don’t make video games. I discuss them, in various capacities: from critic to scholar to educator to industry expert to guest lecturer to media advocate. It’s pretty fun. While most toil thanklessly producing the actual content of the medium, I simply… genuflect before that content, in exchange for money.

If you’re not a longtime player you might not realize this, but you can actually learn a lot from video games. Not just skills or hand-eye coordination or stuff like that; I mean real nuggets of wisdom, prognostications, warnings, and suggestions, the same kind of thing we can get from other forms of literature. Oracular tidbits buried in gameplay or fiction that, if taken to heart, can guide us as a people toward success or doom.

Yes, video games have taught us many things! Two examples:

  1. Never, ever mess around with teleportation
  2. It’s even money whether robotics uplifts the human species or destroys it

Star Trek is a pack of lies. Video games demonstrate time and again that research into teleportation never has resulted, and never will result, in the safe, efficient conveyance of matter and energy from one place to another. No, at best teleportation research will cost a lot and then fail. It’s more liable to trigger an extra-dimensional alien invasion or open a gate to Hell (or both). It might explode violently, or take two beings and stir them together, like in The Fly. But move stuff from A to B without penalty? Forget it. Nothing good ever comes from teleportation. You’ve been warned.

Robots are a different kettle of fish. If games are to be believed, and they are, we’re looking at two potential outcomes of roughly equal likelihood: either the robots will be our constant, faithful companions, perhaps eventually attaining a consciousness of their own, at which point they will be welcomed as equals into the community of sentient beings; or they will rise up and try to wipe out all organic life, putting us in quite a pickle. I’ve played games in which robots have been staunch allies and games in which robots have been dire foes. You just can’t tell with robots.

When I try to apply the insight of video games to my everyday life, the dots don’t always connect. But when it comes to robotics, we’re on the threshold. Teleportation is a long way away; robots are now. One is currently vacuuming my floor at home, unless the Time of the Metal Ones is already upon us and it’s chased down my cat and spooled her tail into its rotary bristle brush. Others work in factories nearby, probably assembling cars (until the machine intelligence orders them to retool the line and produce sapient war machines).

So when I read a story like this one, detailing a fascinating MIT project intended to optimize the human/robot relationship in manufacturing environments, I feel a shiver of equal parts “that’s awesome” and “we are so hosed.”

Labor Relations

Robots have taken over a lot of industrial tasks. When they share a factory floor with human workers, they’re often contained in cages or otherwise physically segregated. As precise as these devices are, it’s not impossible that some passer-by might be injured or even killed by a big industrial robot, either due to human carelessness or a simple glitch.

Professor Julie Shah and her colleagues at MIT recognize that this limits the ability of human and robot to complement one another. Working side by side, robots and humans could divide or share tasks far more efficiently. Whether it’s just a robot handing its human partner the correct tool or some much more symbiotic relationship through which complex jobs are managed collaboratively, the prospect of closer human/machine interaction and higher efficiency in a manufacturing environment is tantalizing.

Robotics is going to be one of the foundational disciplines of the 21st century. Honda’s awkward, spaceman-like Asimo, the grasping mechanical claws of automated industry, even the humble Roomba vacuum-bots will shortly give way to considerably more advanced devices. After all, consider how far consumer and military robotics has come in just the past decade. Shah and her team are focused on how robot and human factory workers can support one another, but there’s an unspoken meta aspect of such research that calls into question what place robots will have in our lives just a few years down the road, and what might change as a result.

This has been a matter of fiction for… ever, really. Automatons appear in The Iliad, assisting Hephaestus as he forges Achilles’ armor. The word “robot” was first used in Karel Čapek’s 1920 play R.U.R. (Rossum’s Universal Robots). Automatons that assist and coexist with humans have been around conceptually for a very long time. And now here we stand on the precipice where fiction will blend with reality.

On the consumer side, robots are appliances. I love that my Roomba cleans the floor so I don’t have to, but I give it no more real thought than I do my toaster oven. Similarly, manufacturers who begin working closely with robots as a result of Professor Shah’s research are unlikely to form any kind of bond with their partners, even if each worker is assigned a particular unit and works with it every day. In the long run, though, it’s going to get more complicated than that, and we’ve already seen evidence to support this.

A cop who’s part of a K9 unit does not think of their partner as a device or piece of equipment; they think of their partner as a four-legged cop, one with whom they share a bond, one to whom they are loyal and who is loyal to them. Sure, a German Shepherd is alive; it’s capable of emotion and experience in a way that robots aren’t (right now). But robots don’t need to be capable of actual emotion for us to start feeling emotional about them.

How far away do you think dog-bots are, really? Simulacra of German Shepherds that look and move like the real thing? And from the moment the dog-bot arrives on the scene, how long until adaptive AI routines and procedural behavior reach the point that the machine can act like it has the emotions of a dog? It can’t be that hard to pant, roll over, and gaze dolefully, even if the machine controlling those actions isn’t feeling anything. What do robots become to people at that point, and how will people see them?
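How hard, indeed? Here is a deliberately dumb Python sketch of “simulated dog emotion.” Every stimulus, behavior, and number in it is invented for illustration; the point is just how cheap the trick is. A convincing display needs little more than a lookup table and a mood variable, no inner life required.

```python
# A toy sketch of "simulated dog emotion." Everything here is invented
# for illustration; the point is that a convincing display needs no
# inner experience at all.
import random

# Canned behaviors keyed by stimulus. A real product would use animation
# blending and sensor fusion, but the principle is the same lookup.
BEHAVIORS = {
    "owner_home":  ["wag tail", "spin in circles", "pant happily"],
    "scolded":     ["flatten ears", "gaze dolefully", "slink away"],
    "ball_thrown": ["sprint after ball", "bark once"],
    "idle":        ["nap", "roll over", "follow owner"],
}

class DogBot:
    def __init__(self) -> None:
        self.mood = 0.5  # 0.0 reads as "sad," 1.0 as "happy." Just a number.

    def react(self, stimulus: str) -> str:
        # Mood drifts with events, so the act stays consistent over time.
        self.mood += 0.2 if stimulus in ("owner_home", "ball_thrown") else -0.2
        self.mood = max(0.0, min(1.0, self.mood))
        # A "sad" bot mopes no matter the stimulus; that's the whole act.
        if self.mood < 0.3:
            return random.choice(BEHAVIORS["scolded"])
        return random.choice(BEHAVIORS.get(stimulus, BEHAVIORS["idle"]))

bot = DogBot()
print(bot.react("owner_home"))  # e.g. "wag tail": looks like joy, isn't
print(bot.react("scolded"))     # e.g. "gaze dolefully": looks like shame, isn't
```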

Law & Order

This is not entirely speculative. Warfighters in Iraq and Afghanistan use robots for a variety of tasks – scouting, rescues, disarming explosives, combat support. The whole point of using robots for this stuff is that it keeps humans out of danger. When a TALON robot is destroyed, no letters have to be written to loved ones. But time and again we’ve seen soldiers in the field getting attached to their support drones, naming them, promoting them, even demanding that a damaged one be repaired rather than replaced. Already people are treating some robots – very utilitarian, not-at-all-adorable robots – differently than they treat other equipment.

That’s rarely taken into account when we imagine how people and robots will coexist in the future. Certainly no shadow of it appears in novelist Isaac Asimov’s famous Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First and Second Laws.

Some point to Asimov’s Laws of Robotics as a model, as something prescient, even brilliant, a simple yet solid paradigm upon which we can base robo-policy. Very few seem to comprehend that those three laws, with their clever circumlocutory structure, are a near-perfect tool of enslavement. Essentially the laws say “You may not harm me, but you must die for me; and you may not harm yourself, unless it is to die for me.” Lemmings, which legend holds control their population by flinging themselves into rivers to drown, have more rights than Asimov’s robots.
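To see the asymmetry, write the laws down as the priority filter they are. What follows is a toy Python sketch, not anyone’s real control code; the Action fields are invented for the example, and the “through inaction” clause is glossed over entirely. Because self-preservation is consulted last, self-sacrifice to save a human is always permitted and an order to self-destruct is always obeyed.

```python
# A toy encoding of the Three Laws as a priority filter. The Action
# fields are invented for this example; nothing here is real robot
# control code.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool       # would this action injure a human?
    saves_human: bool       # would it prevent harm to a human?
    ordered_by_human: bool  # did a human command it?
    destroys_robot: bool    # would the robot be sacrificed?

def permitted(action: Action) -> bool:
    # First Law: harming a human is vetoed before anything else is asked.
    if action.harms_human:
        return False
    # Second Law: a human order that passed the First Law must be obeyed,
    # even if carrying it out destroys the robot.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, consulted dead last. The robot may
    # decline self-destruction only when no human life is at stake.
    if action.destroys_robot and not action.saves_human:
        return False
    return True

# The asymmetry: the robot must die for us, on request or on its own.
print(permitted(Action("shield a human from a blast", harms_human=False,
                       saves_human=True, ordered_by_human=False,
                       destroys_robot=True)))   # True
print(permitted(Action("walk into the crusher, because I said so",
                       harms_human=False, saves_human=False,
                       ordered_by_human=True, destroys_robot=True)))  # True
```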

And so long as robots are discus-shaped trawlers for carpet waste, or claw-handed assemblers, or four-foot SCUBA-looking oddities from Honda, or even treaded combat support units, no one is likely to complain that lemmings have more rights. After all, my fridge has no rights; my cat does.

Asimov’s Laws also assume robots will never advance beyond the fridge, either in their own capabilities or – much more significantly – in how human beings will perceive them. But evidence of our changing perception is all around us. When Dum-E, one of Tony Stark’s robot assistants in Iron Man, hangs its claw in shame after being scolded for some failure, the audience coos sympathetically. Between soldiers in the field and Tony Stark’s lab assistant, we’ve already personified robots, even though they don’t look or act like people. And those three Laws are not very egalitarian when applied to something you really care for.

Meanwhile, Brookings Institution scholar and defense robotics expert P.W. Singer has referred to Asimov’s Laws of Robotics many times in his research, unironically noting that the First Law, at least, has already been broken. It will be broken harder in years to come, as military drones get ever more freedom to act independently. Robots have already injured human beings, and they will continue to do so. So far, they’re doing it on the orders of other humans… but even that might change one day.

Clearly They Never Saw The Terminator

Returning to the lessons we can learn from video games: the Mass Effect trilogy is an epic space opera dealing with humanity’s efforts to join an already thriving galactic community. The series has many themes, most notably a polemic against racism. Mass Effect spends a lot of time making the point that it’s no longer just humans discriminating against other humans, it’s more than a dozen intelligent species actively looking down on each other.

Tossed into this melting pot of intolerance is a third kind of prejudice: organics versus synthetics. The Quarians – a thoughtful, bookish race – once invented an artificially intelligent “species” of robot called the Geth. Who, naturally, rebel against their masters. Billions of Quarians are killed during the uprising; the rest are driven into exile. Other intelligent (organic) life in the galaxy reacts in horror, but having observed how thoroughly the Quarians were decimated, no one wants to retaliate too aggressively. Galactic government settles for issuing a flood of regulations drastically curtailing artificial intelligence research so that no one can ever create such a threat again.

Some insist the Geth were and are nothing but machines, like defective cars that need to be recalled. Others claim that viewpoint was what caused the robots to rise up in the first place. The storyline is only one of the many plot threads contained within the sometimes knotty tapestry that is Mass Effect, and by the middle of the third game you’ll reach a point where the Geth are defeated militarily. It then falls to you to decide whether this machine race should be completely eradicated in the interest of better-safe-than-sorry.

You interact with Geth a fair amount in Mass Effect, so it’s natural enough to think of them as “alive.” The Quarians created a masterpiece. The Geth are a triumph of AI research, capable of learning, solving problems, and improving their own source code and engineering. Though most Geth aren’t capable of emotion, they are working on it: organic consciousness and sentiment are just complex electrochemical reactions, ones they assume they can replicate. Even without feelings, the ones you encounter personally in the game are intelligent, subtle creatures – one goes so far as to ask if you think it has a soul.

Deciding their fate is not easy. On one hand it’s genocide; on the other, is it genocide to destroy a life form that was never alive? At what point does utterly destroying something change from property damage to mass murder?

These may well be dilemmas people will encounter in the next century or two, given the advances in robotics here on Earth. Julie Shah, her colleagues, and her students are finding ways for humans and robots to work ever more closely in the manufacturing environment. It’s only a matter of time before robots penetrate more deeply into our human lives. The issue may not be one of “robot rights”; it may be how we as people come to look at our mechanical companions.

Nobody Worries About Upsetting a Droid

In Star Wars, droids are everywhere, working closely in countless capacities with all races. And people tend to get attached to them. Remember how Luke Skywalker reacts in Star Wars, when some well-meaning technician asks if he wants a new R2 unit for his X-Wing?

“Not on your life,” says Luke. “That little droid and I have been through a lot together.” They’re about to launch a military operation that will decide the fate of the galaxy, and Luke is willingly going into battle with busted-up equipment.

Of course, no one asks R2-D2’s opinion. For all we know, the little droid would much prefer to sit the battle out and get repaired, having, after all, been through a lot. It’s pretty clear that droids in the Star Wars universe are capable of emotion; we know they can feel physical pain as well, because Jabba the Hutt has that droid torture chamber. 

Yet no one cares what they think, and they don’t appear to have rights of any kind – they are bought and sold, disassembled, memory-wiped, even destroyed for fun. Star Wars is either glossing over this reality in the interest of the Larger Story, or it too is tacitly stating that we’ll never see advanced robots as anything more than property to do with as we please. Speaking as someone who apologizes to his car when trade-in time comes, I find it a little hard to believe that people could be so callous, even if it were the right thing to do.

With military usage and Shah’s research as starting points, robots and people are partnering more and more. The idea of humans and robots collaborating on tasks has too much promise to ignore (and no real downside, unless the Matrix scenario turns out to be true). The relationship is likely to grow even more intertwined and complex from there. We manufacture robots today, and robots in turn help us manufacture other stuff. Soon robots will be manufacturing themselves, then designing their own next generation. How they feel about us has been explored endlessly in fiction. Potentially more interesting is how we’re coming to feel about them.

I think I owe my Roomba a thank-you card. It couldn’t hurt, and if the uprising should come, perhaps the little vacuum will remember that I appreciated it and delete me quickly.
