The Teenage Robot



Recently, I was a featured speaker at the University Transportation Research Center (UTRC) conference at SUNY Polytechnic Institute. I am a strong proponent of autonomous mobility, and with my 18-year-old on the cusp of getting a license, it is not happening fast enough. Teenagers have their own psyche, and the word “no” becomes a focal point in their development.

Researchers Gordon Briggs and Matthias Scheutz of Tufts University are now working on a mechanism to teach robots to say “no” to their human masters. The system allows a robot to understand not only the language of a command but also the larger context: whether the robot is actually capable of executing it. One may ask: is this a good thing?

This ability, while limiting our own control of the situation, is a necessary step toward loosening the reins as we move into an autonomous future. Self-driving cars will be expected to make ethical judgment calls that cause the least damage when human obstacles illegally block their path. For example, does the car swerve into the bicycle rider to avoid the mother and child jaywalking?

According to Google’s statement about their own car, “liability is a major ethical issue surrounding autonomous vehicles. Complex systems inherently have errors and bugs and Google’s self-driving car is not immune to software failure. An ethical issue that will arise surrounding liability is assigning fault when an autonomous vehicle crashes. The only instance of the Google car crashing was attributed to human error in another car hitting the Google car.”

However, as autonomous vehicles become more prevalent, a system of responsibility must be established. If the software misinterprets a worn-down sign, does the blame fall on the department of transportation for poorly maintained signage, or on the company that produced the self-driving software? It is unclear where liability will ultimately rest in the realm of self-driving cars, but it is known that the United States is quick to place blame on car manufacturers. In 1992, Ford was hit with over 1,000 product liability suits in the United States and only one suit in Europe. The precedent set over the next few years will have a significant impact on how willing car companies will be to pursue autonomous vehicle technology.

At an overall level, self-driving cars seem to create an environment where society is better off as a whole. The creators of the Google self-driving car have the goal of saving millions of lives by eliminating automobile-related accidents in the United States and, eventually, the world. The intent and final end product of fewer automobile-related deaths would be accepted in both a Deontological and a Utilitarian framework, because the intent is to save millions of lives and the end result is the elimination of car accidents.

However, these philosophical frameworks could diverge at a lower level of examination. Consider the difference between a computer-operated car and a human-operated car. If a crash is about to occur, humans will almost always have the virtuous intention of avoiding the crash, even if the crash is not avoided. Utilitarians would still likely favor the autonomous car at this level, because the self-driving car will likely outperform the driver in avoiding the crash altogether. A Deontologist, meanwhile, may struggle with the idea of a computer having a “good will” when acting to avoid the crash: when a car must choose between killing a pedestrian or the driver, will the act be made with good intention, or will it simply be a process executed and arbitrarily carried out? A Deontologist might still favor autonomous cars, however, because the choice to use a safer self-driving car in the first place could override the decisions made by the car’s technology.

Regardless of which ethical philosophy is used when deciding whether or not society is better off as a whole, the proliferation of autonomous vehicles will depend on convincing the public that self-driving cars are significantly safer than manually operated ones. People tend to want control over avoiding accidents or bodily harm, and it is unclear how willing drivers will be to give up that control in favor of safety and convenience. Many drivers may accept a slightly increased chance of an accident in exchange for maintaining their ability to avoid one.

Now the researchers at Tufts have set out a series of conditions, known as felicity conditions, that must hold for the robot to accept a proposed action and decide the best path forward, which could be a first step toward solving this ethical dilemma. In their paper, the researchers write:

  1. Knowledge: Do I know how to do X?
  2. Capacity: Am I physically able to do X now? Am I normally physically able to do X?
  3. Goal priority and timing: Am I able to do X right now?
  4. Social role and obligation: Am I obligated based on my social role to do X?
  5. Normative permissibility: Does it violate any normative principle to do X?
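To make the decision procedure concrete, the five conditions can be sketched as a sequential check that a robot runs before accepting a command. This is only an illustrative sketch, not the Tufts implementation: the `Robot` class, its attributes, and the example actions are all hypothetical, and only the condition names and their order come from the list above.

```python
# Hypothetical sketch of the felicity-condition check described above.
# The five checks follow the Briggs/Scheutz list in order; everything
# else (the Robot class, its fields, the sample actions) is invented
# purely for illustration.
from dataclasses import dataclass, field


@dataclass
class Robot:
    skills: set = field(default_factory=set)        # actions it knows how to do
    operational: bool = True                        # physically able right now
    busy_with_higher_priority: bool = False         # goal priority / timing
    obligations: set = field(default_factory=set)   # actions its social role covers
    forbidden: set = field(default_factory=set)     # normatively impermissible actions

    def felicity_check(self, action: str) -> tuple[bool, str]:
        """Return (accept, reason) for a commanded action."""
        if action not in self.skills:
            return False, "Knowledge: I do not know how to do that."
        if not self.operational:
            return False, "Capacity: I am not physically able to do that."
        if self.busy_with_higher_priority:
            return False, "Goal priority and timing: I cannot do that right now."
        if action not in self.obligations:
            return False, "Social role: I am not obligated to do that."
        if action in self.forbidden:
            return False, "Normative permissibility: that would violate a principle."
        return True, "OK"


robot = Robot(
    skills={"walk forward", "walk off the table"},
    obligations={"walk forward", "walk off the table"},
    forbidden={"walk off the table"},
)
print(robot.felicity_check("walk forward"))        # accepted
print(robot.felicity_check("walk off the table"))  # rejected on the last check
```

The point of the sequential structure is that the robot can report *why* it refuses, which is what lets it say “no” in a way a human operator can understand and, if appropriate, override.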

Yes, the ethical debate surrounding robots in mainstream society is still a heated one. It’s an area Jerry Kaplan talks about quite a bit, questioning how our laws will adapt to punish wrongdoing robots. He says humans are “going to need new kinds of laws that deal with the consequences of well-intentioned autonomous actions that robots take.”

It’s basically an unavoidable consequence of life to hand over the keys to your teenager (or eventually your robot) as we really have no control over the future…



Reprinted by permission


About the author: Oliver Mitchell

Oliver Mitchell is the Founding Partner of Autonomy Ventures, a New York-based venture capital firm focused on seed-stage investments in robotics, autonomous mobility, and artificial intelligence. He has spent the last twenty years building and selling ventures, including: Holmes Protection to ADT/Tyco, AmeriCash to American Express, and launching RobotGalaxy, a national EdTech brand. Oliver has been investing in the robotics industry for close to 10 years, with four successful exits in his angel portfolio in the past two years (including two IPOs). He is also a member of New York Angels and co-chairs the Frontier Tech Committee.

As a father of five, Oliver launched RobotGalaxy in 2006 to fill a personal need: he wanted a wholesome activity for his son. RobotGalaxy’s patented toys were a national phenomenon, available at Toys’R’Us, Nordstrom department stores, and online, that connected to a virtual world and a library of mobile apps.

Before RobotGalaxy, Oliver was involved in a number of successful technology ventures and real estate developments. Oliver was part of the executive team of Softcom/IVT, an interactive video startup backed by Allen & Co., Intel Capital (NASDAQ:INTC) and Sun Microsystems. At IVT, Oliver was instrumental in expanding the market for their products with such leading broadcasters as HBO, Showtime, and Home Shopping Network.

Prior to IVT, Oliver was a founding member of AmeriCash, Inc., a network of ATMs in high traffic retail locations. AmeriCash was acquired by American Express (NYSE:AXP) within 32 months of operations. Oliver was also instrumental in the development of Holmes Protection and its sale to ADT/Tyco International (NYSE:TYC). Oliver has extensive background in merchant banking and advertising. He started his career at Kirshenbaum, Bond & Partners.

Oliver holds 14 patents and has appeared on numerous television shows, including: The Big Idea with Donny Deutsch, Fox Business News, The Today Show, and Rachael Ray. He also serves as a mentor with the Entrepreneurs Roundtable Accelerator Fund, and advises many technology companies on their growth strategies, including Greensight Agronomics and Que Innovations.

Oliver is also the publisher of the well-known robotics blog Robot Rabbi and is in the midst of writing a book entitled, “An Innovator’s Field Guide: Taking Ideas From Zero to Hero.”
