Social competence of self-driving cars

Ramin Assadollahi
3 min read · Jun 18, 2021

Cars not only need to understand what other cars and humans do, they also need to communicate what they perceive.

With every software update, my car seems to grow more mature in its understanding of the world. Its descriptions of elements of the world become more precise: it can distinguish between different types of traffic cones, for example, and its differentiation between traffic lights becomes more sophisticated.
Recently, I noticed that the car is seeing dogs and not just passengers and cyclists. To me, it is a joy to see my car “grow up” and with it the whole industry of self-driving cars.

It occurred to me that the perception of the outer world actually serves two purposes: enabling the car to better navigate the world, especially cities, but also communicating to the passengers that it actually understands the world, thereby building up trust.

First-time users of (semi-)self-driving cars can’t really assess the degree of safety the car can provide, so it seems to me very important that car designers let the car communicate what it sees or feels, so that the trust between “driver” and car increases over time. I think Tesla does a good job here, as it continually shows what the car sees even when it is not in self-driving mode.
There are also other channels for communicating with the user: when the car crosses a lane marking while the turn indicator is not active, for example, the steering wheel gives a bit of resistance.
Currently, the driver is also required to communicate to the car that she is attentive by touching the steering wheel every couple of minutes. When the driver doesn’t provide this signal, the car warns her with increasing urgency: flashing the screen, playing a sound, and finally braking sharply for a split second.
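As a toy sketch, this kind of escalation ladder could be modelled as a list of thresholds. The stage names and timings here are entirely made up for illustration; real vehicles use far richer signals than a single timer.

```python
from typing import Optional

# Hypothetical escalation ladder: seconds without a steering-wheel
# touch, paired with the warning the car would issue at that point.
# Thresholds and actions are illustrative, not from any real vehicle.
STAGES = [
    (15, "flash_screen"),
    (25, "play_chime"),
    (35, "brief_brake_pulse"),
]

def warning_for(seconds_without_touch: float) -> Optional[str]:
    """Return the strongest warning whose threshold has been exceeded."""
    triggered = None
    for threshold, action in STAGES:
        if seconds_without_touch >= threshold:
            triggered = action  # later stages override earlier ones
    return triggered
```

Touching the wheel would simply reset the timer to zero, dropping the driver back below every threshold.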

The problem is that sometimes the car’s perceptions are wrong for a fraction of a second, for example on the autobahn, where the shadow of a bridge is occasionally mistaken for a truck and the car brakes sharply until it notices that the perception was wrong (this is commonly referred to as “phantom braking”).
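One common way to soften such transient misperceptions is to require a detection to persist across several consecutive frames before acting on it. The sketch below is a minimal, hypothetical debouncer; the frame count is a made-up parameter, and real systems fuse many sensors rather than gating on a single boolean.

```python
from collections import deque

class DetectionDebouncer:
    """Only confirm an obstacle once it has been seen in N consecutive
    frames. Purely illustrative; not how any production stack works."""

    def __init__(self, frames_required: int = 3):
        self.frames_required = frames_required
        # Keep only the most recent N observations.
        self.history = deque(maxlen=frames_required)

    def update(self, obstacle_detected: bool) -> bool:
        self.history.append(obstacle_detected)
        # Confirmed only when the window is full and all frames agree.
        return (len(self.history) == self.frames_required
                and all(self.history))
```

A single-frame “truck” caused by a bridge shadow would never fill the window, so it would never trigger braking, at the cost of a few frames of reaction latency for real obstacles.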

In the communication design between driver and car, it seems to me especially important not to overload the human with information. So which cars should be shown on the display? Parked cars on the side are only displayed when the car senses that the human is looking for a parking gap, not when driving slowly towards traffic lights. Cars that are relevant for changing lanes are shown.
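This kind of relevance filtering could be sketched as a simple rule set over the car’s current intent. The classes, rules, and lane-offset convention below are my own hypothetical mirror of the behaviour described above, not anyone’s actual display logic.

```python
from dataclasses import dataclass

@dataclass
class TrackedCar:
    lane_offset: int  # 0 = ego lane, +/-1 = adjacent lanes
    parked: bool

def cars_to_display(cars, searching_for_parking: bool,
                    changing_lanes: bool):
    """Filter perceived cars down to those the driver needs to see.
    Heuristics are hypothetical, echoing the behaviour described above."""
    shown = []
    for car in cars:
        if car.parked:
            if searching_for_parking:
                shown.append(car)  # parked cars matter when parking
        elif changing_lanes and abs(car.lane_offset) == 1:
            shown.append(car)      # adjacent-lane traffic matters now
        elif car.lane_offset == 0:
            shown.append(car)      # always show cars in our own lane
    return shown
```

The point of the sketch is that the filter depends on the driver’s inferred intent, not just on what the sensors perceive.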

Thus, the car not only has to have some sort of attention towards elements that determine its own actions, it also needs to anticipate what the human needs to see or know. Certainly, today’s robots such as self-driving cars do not learn when to display or communicate what; they are programmed by human designers.

Still, to me it is super-intriguing how this subtle communication between humans and robots is happening and I’m very much looking forward to the upcoming developments.
