Self-driving and autonomous vehicles have so far been, and will remain in the near future, the preserve of developed markets. The automotive sector has seen plenty of disruption over the past decade in its move towards autonomy.
A north star for both vehicle and component manufacturers could be developing systems that transform the way we travel, making road journeys safer. This, along with the advent of EVs, lays the foundations for a cleaner and more sustainable future. Governments the world over have been very supportive of EVs, offering lucrative tax rebates on their purchase compared to their internal-combustion counterparts.
These benefits, along with incentives such as free or low-cost charging, are starting to drive a marked shift in buyer choices. Nonetheless, there is still a long way to go: ‘range anxiety’ remains a common concern, particularly given the need to improve charging infrastructure across non-urban areas.
Taking autonomous tech to the road
Today, in the western world, commercial vehicles equipped with semi-autonomous systems are widely available for purchase. Some OEMs sell ‘self-driving as a subscription service’, with the underlying hardware already fitted in the vehicle. Periodic software updates continuously add improvements and fix bugs.
At present, most of these systems primarily consist of traffic aware adaptive cruise control, with some having the additional ability to change lanes in moving traffic. This is a step towards achieving full L5 SAE autonomy.
The SAE has defined six levels of autonomy, from L0, complete manual control, to L5, where no human attention is required at all (Lyft’s former self-driving unit was called ‘Level 5’).
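The six levels above can be summarised as a simple lookup. This is a hypothetical sketch for illustration (the names `SAE_LEVELS` and `requires_human_supervision` are my own, not part of the SAE standard):

```python
# Hypothetical sketch: the six SAE driving-automation levels as a lookup table.
SAE_LEVELS = {
    0: "No automation: the human driver does everything",
    1: "Driver assistance: steering OR speed is assisted",
    2: "Partial automation: steering AND speed assisted; driver supervises",
    3: "Conditional automation: system drives; driver must take over on request",
    4: "High automation: no driver needed within a limited operating domain",
    5: "Full automation: no human attention required anywhere",
}

def requires_human_supervision(level: int) -> bool:
    """Levels 0-2 need constant supervision; from L3 up the system drives itself
    (within limits) and the human is no longer continuously monitoring."""
    if level not in SAE_LEVELS:
        raise ValueError(f"Unknown SAE level: {level}")
    return level <= 2
```

The key boundary is between L2 and L3: below it, the human is always responsible; above it, the system takes over the driving task in at least some conditions.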
To the best of my knowledge, the self-driving packages commercially available for purchase today are at best L4. That achievement by itself is quite exciting. The question has shifted from ‘if’ to ‘when’ we will reach L5. Meanwhile, some tangential driver-assistance technologies are impactful in the near term, aiming to reduce repetitive tasks and improve safety. Assisted (and self) parking, collision warning systems, pedestrian detection, lane departure warning, and driver alertness monitoring systems have all been introduced more widely.
This begs the question, how soon can we expect to see this in Asian markets, in the crowded start-stop traffic conditions in New Delhi or along the narrow inner roads of Goa?
Clearly, not in the near future, and only after these systems have proved themselves in the more standardised western driving conditions. The complexity grows exponentially, with non-homogeneous driving conventions sometimes varying even within the same city, let alone a full metropolitan area.
Street signs, driving patterns, and varying road and weather conditions are just some of the challenges these systems need to generalise across. The popular opinion today is that some complexities in self-driving have a longer tail than initially expected, with deadlines repeatedly pushed back. As these systems keep improving and learning from real-world data, we can expect to see continued incremental gains.
OEMs partner with start-ups
Much of what we see is due in part to advances in machine learning and computer vision techniques over the past decade. Organisations have leveraged these as part of their core technology stacks, with robotics and autonomous driving among the major applied fields that have benefited. Convolutional neural networks in particular have been transformational, able to learn complex representations from camera footage.
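To make the core operation behind these networks concrete, here is a minimal pure-Python sketch of a single convolution pass, where a small filter slides over an image and responds to a visual pattern (illustration only; real systems use optimised libraries and learn the filter weights from data):

```python
# Minimal sketch of the 2D convolution at the heart of a CNN (pure Python).

def conv2d(image, kernel):
    """Slide a small kernel over the image (valid padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# A hand-crafted vertical-edge detector: responds where brightness
# changes from left to right.
edge_kernel = [[1.0, 0.0, -1.0],
               [1.0, 0.0, -1.0],
               [1.0, 0.0, -1.0]]

# A toy "camera frame": dark on the left half, bright on the right half.
frame = [[0.0] * 3 + [1.0] * 3 for _ in range(5)]

feature_map = conv2d(frame, edge_kernel)
```

In a trained network, thousands of such filters are stacked in layers, and their weights are learned rather than hand-crafted, which is what lets the system pick up complex features like lane markings or pedestrians.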
A key dynamic here is the growing collaboration between OEMs and start-ups in this space. Most of these start-ups focus on targeted problems in the self-driving domain, leaving the challenges of manufacturing to the large OEMs. General Motors + Cruise, Argo AI + Ford + Volkswagen, and Aurora + Toyota are just a few of the many such partnerships in the field. Larger organisations have also been acquiring start-ups (Zoox by Amazon).
Being at the cutting edge, these collaborations have even involved academic institutions, such as Carnegie Mellon's Argo AI Center for Autonomous Vehicle Research and the Stanford Toyota Center for AI Research. Uber famously built its own test track around the city of Pittsburgh for its Advanced Technologies Centre division, created in partnership with Carnegie Mellon University.
There are a few interesting differences in how organisations approach this complex problem. Most start top-down, adding as many varied sensors as possible (radar, lidar, cameras) to help the software ‘understand’ the environment and guide control accordingly. A few have adopted another approach, taking inspiration from how humans learn. Tesla recently decided to move to vision only, mimicking how humans have driven all along. The American EV major argues that lidar and radar systems are quite expensive, and that real-world camera data from its fleet of passenger vehicles can provide the scale needed instead.
Tesla and why it’s a game-changer
Some time ago, I decided to purchase a Tesla Model 3 as my first vehicle, a car quite ubiquitous on the sunny California roadways. It is more a computer than a piece of mechanical engineering, with fewer than 20 moving parts (roughly a tenth of what is found in ICE cars).
One of the many intriguing bits from a UI point of view is the absence of any mechanical instrument panel or physical car key. A single touch-screen (slightly larger than an iPad) controls everything from navigation settings and air-conditioning to entertainment (Netflix, gaming, a web browser). Periodic software updates keep older vehicles up to date with what’s available in the newest models.
I particularly enjoy ‘Camp mode’, with the seats folding into a bed for quick naps on long road trips while the battery is conserved and essential systems keep running. Also, there’s no spare tyre: a courtesy tow truck is provided in case of a flat, which might not work out so well in a remote area. The route planner is very important for long road trips; it automatically sets the route to optimise charging stops across Tesla's charging network, which saves some planning, and it’s reasonably accurate.
Overall, ‘FSD’ as a subscription service has been intriguing. The ‘Summon’ feature transforms me back into a kid: the car navigates from its parking spot to the owner (currently restricted to private parking lots), right out of the movies. I've set the collision warning to ‘early’ in my case, and it's been completely worth the money spent. On multiple trips along the busy stretch of highway that connects San Francisco to Silicon Valley, it has detected near-collisions among vehicles ahead of it from traffic patterns.
While this technology is not foolproof yet, requiring a driver's full attention (there is a lot of potential for misuse; people have been caught napping at the wheel), I am quite optimistic about the future, seeing the fleet of vehicles collecting data across various sources.
The author is an AI researcher at Google DeepMind. He has served as a reviewer of academic publications in the domain of AI and autonomous systems.
Link: 6 Levels of SAE Autonomy Standards