Saturday, January 16, 2016

Self-Driving Cars — And Trust


I want to take a somewhat deeper dive on a specific issue concerning autonomous driving: trust. The context here is Google’s insistence on moving straight to entirely autonomous vehicles (ones without steering wheels), and its disparaging comments about more incremental approaches.

Users don’t trust machines

The starting point here has to be the fact that users don’t generally trust machines, at least at first. Most users take quite some time to warm up to machines and computers: experience teaches them that machines often don’t work the way they expect, and often malfunction. In the context of a machine that’s supposed to drive you places at dangerous speeds, that’s a real problem. When the worst outcome we’re used to with most computers is a figurative crash, a literal one is pretty intimidating.

Incrementalism is the key

In this context, incrementalism is not an inferior approach; it is likely the key to gaining user trust. Can you imagine walking into a car dealership tomorrow and buying a self-driving car without ever having driven one? Would you even feel comfortable test-driving a car with no steering wheel, with no previous experience of such a thing? The prospect would be utterly intimidating. But if you took small, incremental steps in that direction first, over time you’d probably be quite prepared to make the much smaller remaining leap to a fully self-driving car. The U.S. Department of Transportation’s National Highway Traffic Safety Administration (NHTSA) has defined five levels of automated driving:


  • Level 0: No automation
  • Level 1: Function-specific automation
  • Level 2: Combined-function automation
  • Level 3: Limited self-driving automation
  • Level 4: Full self-driving automation

Most drivers, accustomed to vehicles operating at Level 0 or Level 1, won’t have developed the trust necessary to climb into a car operating at Level 4. But if they move through the stages in between, they may gain that trust, assuming they don’t have a poor experience along the way. That likely means starting with cruise control and moving on to brake assistance, electronic stability control, smart cruise control, and so on. Google’s biggest challenge will be that it appears to be working only toward the ultimate goal of Level 4 automation, without the ability to take users through the intermediate steps.
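For readers who think in code, here is a minimal sketch of those levels and the incremental features just mentioned. It is purely illustrative: the enum names and the feature-to-level mapping are my own rough labels, not drawn from any real automotive system.

```python
from enum import IntEnum


class AutomationLevel(IntEnum):
    """The five NHTSA levels of automated driving."""
    NO_AUTOMATION = 0          # Level 0: the driver does everything
    FUNCTION_SPECIFIC = 1      # Level 1: a single function is automated
    COMBINED_FUNCTION = 2      # Level 2: two or more functions work together
    LIMITED_SELF_DRIVING = 3   # Level 3: the car drives itself in some conditions
    FULL_SELF_DRIVING = 4      # Level 4: no driver input expected at all


# Hypothetical mapping of familiar driver-assist features to the level they
# roughly correspond to; this is the incremental ladder described above.
FEATURE_LEVELS = {
    "cruise_control": AutomationLevel.FUNCTION_SPECIFIC,
    "brake_assistance": AutomationLevel.FUNCTION_SPECIFIC,
    "electronic_stability_control": AutomationLevel.FUNCTION_SPECIFIC,
    "smart_cruise_control": AutomationLevel.FUNCTION_SPECIFIC,
    "lane_keeping_plus_adaptive_cruise": AutomationLevel.COMBINED_FUNCTION,
}
```

The point of the ladder is that each rung lets drivers hand over one small piece of the task at a time, rather than all of it at once.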

Good early experiences are key

Those early experiences, though, are critical in developing user trust; incrementalism alone won’t cut it. If a car performs poorly at those lower-level automation tasks, users will never trust it to do more. Tesla’s recent move to Level 2 automation with its Autopilot feature was exciting for drivers, some of whom posted videos to YouTube showing the technology in action. But search for “Tesla autopilot fail” on YouTube and you’ll get quite a few results demonstrating that the technology isn’t quite ready in some cases. These kinds of failures, if they become frequent or tragic enough, will start to erode trust even in Level 2 automation, which will make it much harder to move users toward Levels 3 and 4.

Ina Fried of Re/code recently posted a video of a ride in a self-driving car in which the demo driver frequently had to intervene, another example of a poor experience. Conversely, in a simulator demo at CES last week, the “car” allowed me to relinquish control only in circumstances where it would do a better job than I would. When I did, it performed well every time, and that kind of consistently good experience is just as critical to gaining user trust.

Mimicking (some aspects of) human drivers

Interestingly, another important aspect of developing driver (or passenger) trust is mimicking some of the characteristics of human drivers. That obviously doesn’t include the more dangerous aspects of human driving, but it does mean machines can’t simply take what appears to be the logical approach, and can’t always drive at the maximum safe speed. People I’ve talked to who work in this field tell me that human drivers tend to take a freeway exit more slowly than an automated car would, for example. The car may be traveling perfectly safely, but if it doesn’t feel that way to the driver or passenger, based on her personal experience, then it doesn’t matter. The bar for developing trust is not merely driving safely, but driving in such a way that the occupants of the vehicle feel safe.

Since not everyone drives the same way, over time this mimicking of human drivers will need to move beyond average driver behavior toward multiple driver profiles: some driving faster and some slower, some prioritizing the earliest arrival time and others maximum fuel efficiency, for example.
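To make that concrete, here is a minimal sketch of what such profiles might boil down to. The parameter names and values are entirely hypothetical, my own invention rather than any vendor’s actual system; the idea is simply that the planner caps its behavior at the lower of what is objectively safe and what this particular occupant finds comfortable.

```python
from dataclasses import dataclass


@dataclass
class DriverProfile:
    """One occupant's preferences; the planner adapts to these, not vice versa."""
    max_comfortable_speed_mph: float  # comfort ceiling, which may sit below the safe limit
    exit_ramp_comfort_factor: float   # fraction of the objectively safe ramp speed tolerated
    prioritize_arrival_time: bool     # True: arrive sooner; False: favor fuel efficiency


def target_speed(safe_speed_mph: float, profile: DriverProfile) -> float:
    # The bar is not just driving safely but feeling safe: never exceed the lower
    # of the objectively safe speed and the occupant's comfort ceiling.
    return min(safe_speed_mph, profile.max_comfortable_speed_mph)


def exit_ramp_speed(safe_ramp_speed_mph: float, profile: DriverProfile) -> float:
    # Human drivers tend to take exits more slowly than an automated car safely
    # could, so scale down toward what this particular occupant is used to.
    return safe_ramp_speed_mph * profile.exit_ramp_comfort_factor


cautious = DriverProfile(60.0, 0.8, prioritize_arrival_time=False)
hurried = DriverProfile(75.0, 1.0, prioritize_arrival_time=True)
```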

A long game

We already know the technological side of developing truly autonomous vehicles is a long game. Though the tech is moving quickly, mapping technology, LiDAR, regulation and many other aspects of this field have a long way to go. But gaining user trust is also going to be a long game, and one that companies can’t shortcut.

Thankfully, companies have plenty of time to train users to trust these computers in a way they haven’t been able to trust other computers in the past.
