Part 1. Human vs AV

Solji Lee
6 min read · Dec 15, 2020

UW Graduate Design Thesis 2020–2021

This post is part of a series of design papers on the theme of ‘New Interface Design for the Semi-autonomous Car’.

Hello! I’m Solji, a second-year MDes student at the University of Washington. I majored in industrial design, and I now work as a UX designer based in Seattle.

In the last post, I talked about what causes fatal accidents in self-driving cars. In short, cognitive differences and communication failures between human and machine drivers cause accidents. Today, I would like to look more closely at how these two entities operate.

1. Cognitive Differences

First, the decision-making processes of machine and human drivers in the car are not so different. Both follow the OODA loop (Observe–Orient–Decide–Act), a cyclical control process first devised in the United States Air Force.

OODA, devised by a U.S. Air Force colonel

Imagine a human driver following this loop: the driver observes the surrounding environment with their eyes, orients and judges the situation based on that visual information, and acts on the car with their hands and feet. The human does everything directly. In other words, the human is in the loop.

However, as driving automation technology advances, humans are becoming supervisors above the control loop (on the loop), and machines control the vehicle in the loop instead. The machine’s control loop also has more branches than the human’s, as shown below.

VEHICLE SYSTEM DYNAMICS 2020, VOL. 58, NO. 5, 672–704
  1. Observe the surroundings with multiple sensors.
  2. Localize itself in the situation using the in-vehicle computer and cloud networking.
  3. Judge the next action based on data from all sensors and the cloud.
  4. Operate the vehicle.
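To make the loop concrete, here is a minimal Python sketch of a machine driver cycling through OODA. The sensor readings, world model, and control commands are hypothetical placeholders I made up for illustration, not any vendor’s actual stack.

```python
import random
import time

def observe():
    """Observe: collect raw readings from each sensor (stand-ins here)."""
    return {
        "camera": random.random(),  # e.g. an image frame
        "radar": random.random(),   # e.g. range/velocity returns
        "lidar": random.random(),   # e.g. a point cloud
    }

def orient(readings, hd_map):
    """Orient: fuse sensor readings with map/cloud data into a world model."""
    return {"obstacle_ahead": readings["lidar"] > 0.8,
            "position": hd_map["segment"]}

def decide(world_model):
    """Decide: choose the next maneuver based on the world model."""
    return "brake" if world_model["obstacle_ahead"] else "cruise"

def act(command):
    """Act: send the chosen command to the vehicle actuators."""
    print(f"actuating: {command}")

hd_map = {"segment": "I-5 northbound, mile 164"}
for _ in range(3):      # a real vehicle runs this loop continuously
    act(decide(orient(observe(), hd_map)))
    time.sleep(0.1)     # real stacks cycle many times per second
```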

Vision is the first step in the control loop and is currently the most controversial stage in AVs.

Self-driving cars use three major sensors to obtain the same visual information as humans: camera, radar, and LiDAR. (As an exception, Tesla chose to remove LiDAR and measure depth and distance with only cameras and radar.) In addition, a machine driver receives traffic information, such as signal systems and road construction, through GPS and HD maps.

The three sensors are combined because each has different characteristics, such as field of view, range, and resolution, as shown in the table below.

Characteristics table by self-driving car sensor
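As a rough illustration of why the sensors complement one another, here is a toy Python sketch of confidence-weighted fusion. The weights and readings are invented for illustration and do not reflect any real sensor specification; the general intuition is that cameras degrade in rain and darkness while radar keeps working.

```python
# Toy confidence-weighted fusion of distance estimates to one object.
# Condition-dependent weights are assumptions made up for this sketch.
WEIGHTS = {
    "clear": {"camera": 0.5, "radar": 0.2, "lidar": 0.3},
    "rain":  {"camera": 0.1, "radar": 0.6, "lidar": 0.3},
}

def fuse_distance(estimates: dict[str, float], weather: str) -> float:
    """Combine per-sensor distance estimates (meters) into one value."""
    weights = WEIGHTS[weather]
    return sum(weights[sensor] * dist for sensor, dist in estimates.items())

# The same readings yield different fused results as trust shifts by condition.
readings = {"camera": 41.0, "radar": 44.5, "lidar": 42.2}
print(fuse_distance(readings, "clear"))  # leans on the camera
print(fuse_distance(readings, "rain"))   # leans on the radar
```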

However, despite the AV industry’s efforts to reach the level of human vision, self-driving cars still have limitations. Sensors are affected by weather and by external factors as small as bird droppings. In addition, the process of combining and analyzing sensor data is not yet as integrated and rapid as human perception.

2. Communication failure

Another factor causing unstable autonomous driving is the failure of communication between humans and machines.

1) Car HMI: Human Machine Interface

  • Information for human drivers
    With the development of autonomous driving technology, the human role is now closer to that of a supervisor than a driver. With a semi-AV, the human sits on top of the loop. In other words, the information we need while driving has also changed: inside the vehicle, indicators that let us evaluate how well the machine driver is operating matter more than traditional driving information such as the speedometer and traffic data.
Tesla, Full Self-Driving Beta interface, Oct 20, 2020

An example is Tesla users’ reaction to the Full Self-Driving beta interface released this year. The screen looks more like a developer’s debugging view than a display designed for drivers, and most HMI designers would balk at it. The reaction of Tesla users, however, was the opposite: they prefer this screen because it shows in real time how the car perceives its surroundings and what action it is about to take. As a result, the human driver, acting as a supervisor, can respond quickly when the machine makes a wrong judgment.
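Here is a rough sketch of that idea, not Tesla’s actual software: a supervisor-oriented display surfaces what the machine perceives and intends so the human can catch a wrong judgment early. Every field and the confidence threshold are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class MachineState:
    """Hypothetical snapshot of what the machine driver sees and plans."""
    detected_objects: list[str]
    planned_action: str
    confidence: float  # 0.0-1.0: how sure the system is of its plan

def render_supervisor_view(state: MachineState) -> str:
    """Format the machine's perception and intent for the supervising human."""
    lines = [
        f"sees: {', '.join(state.detected_objects) or 'nothing'}",
        f"plans: {state.planned_action} (confidence {state.confidence:.0%})",
    ]
    if state.confidence < 0.6:  # threshold invented for illustration
        lines.append("LOW CONFIDENCE: watch the road, be ready to take over")
    return "\n".join(lines)

print(render_supervisor_view(
    MachineState(["pedestrian", "cyclist"], "slow to 15 mph", 0.45)))
```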

  • Communication methods for human drivers
    The medium of the interface is as important as the content of the information delivered to human drivers. With the development of technology, communicating with the car while driving has become possible not only through physical keys and buttons, but also through cell phones, hand gestures, and voice.
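As a hedged sketch of that idea, the snippet below normalizes commands arriving from different input mediums into one command set; the modality names and commands are assumptions made up for illustration.

```python
# Hypothetical mapping of multimodal driver input to vehicle commands.
COMMANDS = {
    ("button",  "stalk_pull"):    "engage_autopilot",
    ("voice",   "take the exit"): "plan_exit",
    ("gesture", "swipe_right"):   "change_lane_right",
    ("phone",   "precondition"):  "start_climate",
}

def handle_input(modality: str, signal: str) -> str:
    """Map a raw (modality, signal) pair to a single vehicle command."""
    return COMMANDS.get((modality, signal), "ignored")

print(handle_input("voice", "take the exit"))  # -> plan_exit
print(handle_input("gesture", "wave"))         # -> ignored
```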

2) Differences between expected capabilities and actual AV technology
Here is a more fundamental problem causing fatal accidents. Anyone interested in self-driving cars knows that the term “autonomous driving mode/autopilot” has been controversial. The level of self-driving the term leads drivers to expect is that the car will take charge of driving while they stop paying attention and handle trivial things such as emails and phone calls. However, any driver who has actually used a car with a self-driving mode has likely seen a message stating that the driver is responsible for all accidents. In other words, self-driving today is at the level of a driver-assistance system, despite what the word conveys.

On the standard self-driving level scale, level 0 is the level at which the human driver handles all driving without any assistance, and level 5 is a fully autonomous mode that does not even require a driver’s seat. Currently, commercially available cars with autonomous driving modes, such as those from Tesla, BMW, and Hyundai, are at level 1–2, because they still require hands-on control and driver engagement. The expectation that we are completely out of the OODA control loop leads drivers to neglect their duties as supervisors, which causes self-driving car accidents.
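To ground the distinction in something concrete, here is a small Python sketch of the SAE J3016 automation levels with a check for whether the driver must still supervise. The descriptions summarize the standard; the helper function itself is my own illustration.

```python
# SAE J3016 driving automation levels, summarized.
SAE_LEVELS = {
    0: "No automation: the human does all the driving",
    1: "Driver assistance: steering OR speed support (e.g. adaptive cruise)",
    2: "Partial automation: steering AND speed, human must supervise",
    3: "Conditional automation: system drives, human takes over on request",
    4: "High automation: no human needed within a defined domain",
    5: "Full automation: no human driver needed anywhere",
}

def driver_must_supervise(level: int) -> bool:
    """At levels 0-2 the human remains responsible at all times."""
    return level <= 2

for level in (2, 5):
    print(level, SAE_LEVELS[level], "| supervise:", driver_must_supervise(level))
```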

To summarize the driving traits of humans and cars: both human and machine drivers follow the OODA loop, but each step of the process is much more complicated for the machine.

In particular, there is still much controversy over observation and vision. A human completes the whole process within five seconds, drawing on prior learning. The machine, however, requires multi-dimensional analysis to obtain the same visual information, and because it has not yet been fully trained, errors and delays occur.

Another issue is how to reflect the changed human role in the car. Until fully autonomous vehicles are commercialized, human drivers will act as on-the-loop supervisors, not as in-the-loop drivers. However, we are so enthusiastic about fully autonomous cars that we neglect the conflicts we will run into during this in-between period. More research is needed on what information and interfaces humans need as supervisors.

If you have any questions or advice, please do not hesitate to email soljilee@uw.edu. I’m happy to talk to you :)

Reference:
Dickey Singh, “Visual Perception in Humans, Animals, Machines, and Autonomous Vehicles”
Tulga Ersal et al. (2019), “Connected and automated road vehicles: state of the art and future challenges,” Vehicle System Dynamics
NVIDIA, “DRIVE Labs: How We’re Building Path Perception for Autonomous Vehicles,” https://blogs.nvidia.com/blog/2019/04/30/drive-labs-path-perception/
