According to SAE International’s standard J3016, there are six levels of car automation, with level 0 being the old-fashioned car with no automation and the highest, level 5, offering full automation in any geographical region and (weather) condition, with a human only entering the destination. As a reference point, Tesla cars are at level 3 (short sellers claim it is level 2, while Elon believes it is level 5, or soon-to-be level 5).
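For readers who, like me, keep mixing up the levels, here is a minimal sketch of the J3016 taxonomy as a Python enum; the level names and one-line comments are my own paraphrase, not the standard’s official wording.

from enum import IntEnum

class SAELevel(IntEnum):
    """Paraphrased summary of the SAE J3016 automation levels (not official wording)."""
    NO_AUTOMATION = 0           # the human does everything
    DRIVER_ASSISTANCE = 1       # steering OR speed assistance
    PARTIAL_AUTOMATION = 2      # steering AND speed, human monitors constantly
    CONDITIONAL_AUTOMATION = 3  # car drives itself, human must take over on request
    HIGH_AUTOMATION = 4         # no human needed, but only in a limited operational domain
    FULL_AUTOMATION = 5         # no human needed, anywhere, in any weather

# The article's reference points, in these terms:
print(SAELevel.CONDITIONAL_AUTOMATION)  # where I place Tesla above
print(SAELevel.HIGH_AUTOMATION)         # where I place Waymo below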
Waymo is probably the farthest ahead with level 4 cars: full automation on select types of roads, in select geographical regions (“trained” in the Bay Area or Phoenix, but not in both), and in select weather conditions (let the car not drive in Chicago in the snow, or experience potholes when the snow is off the ground). The atmospheric river storms in San Francisco have probably also wreaked havoc on Waymo’s cars.
In “Self-Driving Cars Have a Problem: Safer Human-Driven Ones” (The Wall Street Journal, June 15, 2019), Waymo’s CEO John Krafcik stated that level 5 was decades away. A human will be needed to intervene occasionally for the foreseeable future (a flooded street in San Francisco or a large pothole in Chicago).
Let us now switch to the other side of the equation: humans, a.k.a. drivers. Apparently, they will be needed for decades. The existence of humans in the future is not really in question (except perhaps for believers in superintelligence and the singularity), but the survival of drivers is much less clear.
I have three young-adult children, each with a driver’s license, but they are likely the last generation with driving knowledge. Ten years from now, I actually doubt they will still know how to drive. While they currently have their own car at home, owned by the parents, they frequently use car-sharing. I doubt they will ever own a car, since they will not need one. The number of miles they drive per year is steadily going down, and I am confident that five years from now it will drop to zero. As with any other skill or knowledge, if you do not practice it, you forget how to do it.
Thirty years from now, I predict that the only people with driving knowledge will be those who are currently between 30 and 45 years old (I am assuming that those of us older than 45 will either not be delighted to drive at the age of 75 or will have rested in peace). Those who are now 30 years old or younger will forget how to drive in a decade or so. Those currently below the driving age, and those yet to be born, will probably never even learn how to drive.
People outside this 30-to-45 age range will not be able to own a car unless we get level 5 cars, which seems unlikely. No matter how much they long to have a car, they will occasionally have to take control of it, and they will not know how to operate it. As a result, there will be no car ownership for those outside the 30-to-45 range.
The logic does not quite add up, since car-sharing drivers will always be needed to transport my children and, occasionally, me. In short, thirty years from now, the only drivers will be those who are currently between 30 and 45 years old or those who are, and will be, car-sharing drivers. The latter will be in a situation similar to that of airplane pilots today. Car-sharing drivers will mostly be trained in simulators, just enough to sporadically take control of a car. Since we consider today’s pilots to know how to fly, we should likewise call future car-sharing operators drivers.
There is another problem with not having level 5 cars soon. I personally find Tesla’s Autopilot useless (I do not own a Tesla, so this claim is based on reported facts), and the same goes for level 4 cars. The main purpose of an autonomous car is to increase my productivity: the car should drive while I work on more important things. If I have to pay attention to the road and traffic even without actively driving, it defeats the purpose; it is still a waste of time. The only useful autonomous cars are level 5 ones. There is clearly a safety benefit to levels 1-4, but, at least in my case, this argument goes only so far.
In summary, the future of car automation levels 0-4 is bleak. They do increase safety, but they do not increase productivity if a human driver needs to be in the car paying attention to the weather and the potholes. Furthermore, the lack of future drivers will make them even more problematic. In short, a human in the loop is not a winning proposition.
One solution is to have level 4 cars remotely assisted, or teleoperated. This approach is already being advocated by some start-ups (e.g., Designated Driver) and is, in essence, a human on the loop. In such a scenario, I would be able to do productive work while being driven by an autonomous car with teleoperated assistance. This business model also aligns nicely with the aforementioned lack of drivers, since there will only be a need for ‘drivers’ capable of driving in a simulator. Is this a next-generation job that will not be wiped out by AI? You bet it is, and we will probably need many.
There is a challenge on the technical side. A car would have to identify an unusual situation, or a low-confidence action, in order to invoke a teleoperator. Whether this can be done with sufficient reliability remains to be seen. Current deep learning models are capable of concluding “this is an animal that I have not yet seen” without specifying what kind of animal it is. There is hope that deep learning solutions will be invented that reliably identify an unseen situation and pass control to a teleoperator.
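To make the idea concrete, here is a minimal sketch (in Python) of what such a handoff gate could look like; it is not how Waymo or any other company actually does it. I assume a perception model exposes per-frame class probabilities, and the thresholds below are made-up numbers: when the top probability is low or the entropy is high, control would be passed to a teleoperator.

import math
from typing import Sequence

CONFIDENCE_THRESHOLD = 0.85   # hypothetical tuning knob
ENTROPY_THRESHOLD = 1.0       # high entropy roughly means "I have not seen this before"

def softmax_entropy(probs: Sequence[float]) -> float:
    """Shannon entropy of a probability vector; higher means less certain."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def should_hand_off(probs: Sequence[float]) -> bool:
    """Decide whether to keep driving autonomously or invoke a teleoperator."""
    top = max(probs)
    return top < CONFIDENCE_THRESHOLD or softmax_entropy(probs) > ENTROPY_THRESHOLD

# Hypothetical per-frame outputs from a perception model:
confident_frame = [0.95, 0.03, 0.01, 0.01]   # clearly recognized scene
confusing_frame = [0.40, 0.30, 0.20, 0.10]   # "an animal I have not yet seen"

print(should_hand_off(confident_frame))  # False: keep driving
print(should_hand_off(confusing_frame))  # True: call the teleoperator

The catch is that plain softmax confidence is known to be overconfident on genuinely unseen inputs, which is exactly why the “reliably” above is doing so much work.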