In a first for any government worldwide, the US Department of Transportation (DOT) has issued guidelines on the regulation of the manufacture and sale of highly automated vehicles (HAVs). The guidelines are significant not only for the future of transportation but also for the regulation of new technologies in general. The policy addresses user privacy, data sharing and cybersecurity, all of which are pertinent, recurring issues with emerging digital technologies like wearable devices and the Internet of Things (IoT). At 116 pages, the policy is voluminous, but issuing it as voluntary guidelines for companies to follow is a step in the right direction. Moving forward, regulators will have to strike the right balance in defining new technologies and calibrating the threshold of regulation required.
Defining new technology
It is a challenging proposition to create a legal regime for an evolving technology, especially one such as HAVs, which are still in their testing phase. The guidelines have come at a time when Uber is testing its driverless vehicles in Pittsburgh and the world’s first self-driving taxis have made their debut in Singapore. The Policy adopts the standards created by SAE (Society of Automotive Engineers) on “levels” of automation in vehicles, which are based primarily on whether the human driver or the machine performs the main driving task. The Policy applies only to highly automated vehicles at Levels 3-5, where the machine does most of the driving and human drivers are expected to step in whenever requested (Level 3), only in limited circumstances (Level 4), or not at all (Level 5). SAE makes it clear while defining these standards that they are meant to be descriptive and technical rather than normative or legal.
The DOT, however, went ahead and adopted these standards as the basis for the entire policy. This is likely to create issues of interpretation and compliance for manufacturers. Moreover, the levels of automation are centered on the amount of human involvement required. This can prove problematic: even at Levels 3 and 4, a human driver may be called upon to intervene, and it may be difficult for drivers to stay alert when the machine is doing most of the job. California regulators have in fact called for car companies not to advertise their cars as “self-driving,” “automated” or “auto-pilot” when in reality the cars are incapable of driving themselves without human intervention.
This move could lead to a debate on whether only those vehicles that are truly driverless (with no steering wheels at all) should be considered autonomous. Tesla’s Autopilot, which can steer on its own and change lanes and is often held up as the gold standard for what HAVs can do, qualifies only as Level 2 automation and would not be covered by this federal policy. The discussion around the definition of HAVs is a crucial one, given fatal accidents such as the one involving a Tesla a few months ago, in which the driver was reportedly watching a movie while the car drove in Autopilot mode.
An important outcome of the Policy is the Safety Assessment checklist that the DOT has prepared, calling for manufacturers and other entities to document and share data. The checklist addresses fifteen different areas, including vehicle cybersecurity, post-crash behavior and ethical considerations. According to the Policy, the DOT is trying to instill public faith in this new technology through transparency while protecting consumer privacy and competitive interests.
Manufacturers might disagree, though, and would not be keen on sharing so much information, since a lot of it can be proprietary or confidential. The Policy calls for companies to share data with one another on event reconstructions of crashes, positive outcomes and the like, in order to build the safety of HAV systems as a whole. This guideline is unlikely to go down well with many companies, as they may fear losing their competitive edge. At the end of the comment period for the Policy, there will be a clearer idea of what data, and how much of it, companies are willing to reveal and share.
The Policy lists the data protection principles that manufacturers’ privacy policies and practices should uphold, such as consent, security, integrity and de-identification. These principles were identified in the 2014 principles on Privacy for Vehicle Technologies and Services that several car companies voluntarily signed up for.
HAVs will present various scenarios in which a user’s geo-location, biometric and driver-behavior information can be shared and used, be it for marketing purposes or for government requests. Every movement of a person and every aspect of a driver’s behavior can be tracked, with huge repercussions for the future of privacy and liability. These cars should come with an “incognito” mode, or affirmative consent should be obtained at the beginning of every ride. It is not inconceivable that this information could land in the wrong hands and be misused. Manufacturers should be in a position to store this information securely and to issue clear guidelines on the destruction of data retrieved from vehicles.
Efficient regulation has to be created around autonomous vehicles, as they promise to enhance safety, cut down pollution and traffic congestion, and make driving accessible to more people. Standards will have to be devised for the components and software that each HAV must possess. Methods of enforcing these standards across the board, including after sale when software updates are pushed out, will also have to be explored.
A Chinese security team recently hacked a Tesla Model S remotely, managing to unlock the car and even take control of its brakes from 12 miles away. Tesla has since released a firmware update addressing this security flaw. Manufacturers have their work cut out for them as they try to find lasting solutions and safeguards against evolving problems like security vulnerabilities. Regulators, on the other hand, can try to get ahead of these issues or wait for developments in technologies that minimize the scope for exploiting vulnerabilities.
(This commentary first appeared at the ORF Website)