
Scientific experiment gone wrong: early thoughts on Uber fatality

Not your typical car crash!

Cars are weird. To get from point A to point B faster, humanity created these odd-looking metal boxes on wheels and called them ‘cars’. Moving at relatively high speed, they inevitably become dangerous, especially in the hands of an unskilled driver. It has been reported that 3,000+ people die in road crashes every day - a truly terrifying statistic.

However, the case at hand is different. A woman in Arizona died after being hit by an Uber-operated self-driving car. This tragic incident marks the first recorded death caused by an autonomous vehicle (apart from the crash in which a Tesla driver was killed). Such a scenario had been envisaged and debated by lawyers, regulators and...just about everyone.

Self-driving cars became a reality incredibly quickly. One day, people talking about self-driving cars on public roads were called futurists; what feels like five minutes later, we see them as an obvious and imminent technological advancement. Uber has paused further tests of its autonomous cars, and more details regarding the incident should come out shortly.

How did the car not ‘see’?

Self-driving cars use video, radar and LiDAR technologies to navigate. The latter (the transformer-esque box at the top of the Uber car below) was supposed to come into play to detect the obstacle in front of the car. In simple terms (not that I have the ability to freely operate the complex ones), the technology measures distance by calculating the time it takes for a laser ‘shot’ to reach the target and return (for the keen ones - here).
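The time-of-flight idea above boils down to one line of arithmetic: the pulse travels to the target and back, so the one-way distance is half the round trip multiplied by the speed of light. A minimal sketch (purely illustrative - this is the textbook formula, not Uber's actual sensor stack; the function name and example timing are my own):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second, in vacuum

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to a target given a laser pulse's round-trip time.

    The pulse covers the distance twice (out and back), hence the
    division by two.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after ~200 nanoseconds implies a target roughly 30 m away -
# and the whole measurement happens far faster than any human could blink.
print(round(distance_from_round_trip(200e-9), 1))  # ~30.0 metres
```

The striking part is the timescale: detecting an obstacle tens of metres away requires resolving delays of mere nanoseconds, which is exactly why these sensors should, in principle, react long before a human driver.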

The video above shows that the pedestrian indeed came from ‘nowhere’ and the system had minimal time to react. Minimal…but still time. Even an alert human driver with good reaction could have attempted to steer the car to the side. Isn’t technology generally agreed to be ‘quicker’ than a human in this situation (some explanation here)? There are reports suggesting that Waymo (cutie below) self-driving vehicles perform better than the rest of the market. Could it be that their tech would have done something differently?

What was the human operator doing?

Judging from the video, the human operator in the car was not giving his undivided attention to the testing of the vehicle. I should say that the media coverage of this aspect of the accident has not been of the highest standard. Far too many papers kept mentioning the operator's criminal record in their headlines.

The criminal charge was not for negligence or hitting a pedestrian - it was for armed robbery. That crime, committed more than 10 years ago, most likely has little bearing on the operator's current state of mind or skill set. Let’s assume that the individual in charge of the car testing was objectively suitable for the position he held at Uber.

That aside, while watching the video, I seriously struggled to understand what caused this person to constantly look away from the road and look down. Initially, I thought he was checking the data of the test drive or logging some observations. However, having spotted the micro smile that appeared on his face after looking down, I am not so sure. If he was using a phone, tablet or other form of entertainment, questions should be asked as to whether the operator was discharging his duties in accordance with the reasonable standard of skill expected from an individual in his position. The Arizona police department suggested that there was ‘little a driver could have done’ because of how quickly everything happened.

Of course, as humans we do lose focus. To make matters worse, this particular job requires limited input and, frankly, is quite boring. Perhaps Uber will have to implement some tech solutions, such as eye-movement sensors, to keep the drivers 'in check'.

Scientific experiment to benefit the shareholders?

You have to be undereducated on the subject matter or biased not to see the potential of self-driving cars. In order to perform better, the cars have to be tested in a real-life environment, i.e. on the roads. The state of Arizona has permitted the testing of self-driving cars on its roads. Many sources suggest that such testing constitutes a ‘scientific experiment’ and needs to be treated as such. I do have some reservations regarding this 'definition'.

For something to be called an 'experiment', it needs to have clear hypotheses and a set methodology to prove or disprove them. The race for supremacy in the production of self-driving vehicles counts giants of both the tech industry and traditional car manufacturing as avid participants. Their attraction to the idea is very much profit-driven, for understandable reasons, and cannot be regarded as a mere 'experiment'.

Interestingly, Uber considered shutting down its automated car initiative when the new CEO took the reins of the company. Dara Khosrowshahi chose to keep it alive, understanding how much is at stake and how disproportionately high the returns can be if Uber ‘gets it right’. Uber and its competitors all need real-world data to improve their respective technologies.

As we all know, in the modern world data is king (or queen!) - having the most of it gives any developing technology an edge over the competition. Perhaps scientific in nature, the testing of automated cars is aimed at getting an edge over competitors and ultimately 'pleasing' the shareholders. For this reason, it is preferable that society and the regulators take into account the overwhelmingly commercial nature of what Uber, Google and others are doing and treat the testing accordingly.

Thanks for reading and stay safe!

#selfdrivingcars #tech #regulation