With Autonomous Cars, It’s Time to Realize We’re Trying to Solve the Wrong Problem
They roll in weekly. We watch them. We rub our hands together with schadenfroh glee.
I’m speaking of Tesla Autopilot crash videos.
Like a train wreck, we seem unable to avert our eyes from videos depicting the Silicon Valley darling’s sheet metal kissing concrete dividers and other animate and inanimate objects. Time and time again, owners of Tesla’s Autopilot-equipped Model S and Model X vehicles throw caution to the wind and let the computer issue orders in situations where human intervention is imperative.
And it’s not going to change, not tomorrow, not ever, until we change course. That’s because we’re trying to answer the wrong question when it comes to autonomous mobility.
First, let’s contrast two things: Tesla’s Autopilot (or any other autonomous system) and someone with a well-below-average IQ.
In the latest video depicting a Tesla Autopilot crash, the environment is easy to decipher: the highway is diverted due to construction, Botts’ dots are visible on the road to indicate a new temporary lane for vehicles to follow, and that new lane is bordered by an Armco-and-concrete barrier to protect workers in the construction area and keep drivers from hitting heavy equipment.
Autopilot’s camera and radar sensors are going to have a very difficult time finding Botts’ dots. Complicating the scenario is a vehicle directly ahead of the Tesla, which you can see following the newly demarcated lane in the video before the crash. Because of this, we don’t know whether the Tesla “sensed” the barrier here, but let’s give Tesla credit and assume it did for the sake of argument. There’s another vehicle beside and just behind the Tesla Model S moments before the crash, which forces the Model S into a quandary: should I stay (in my lane) or should I go (into the other lane and hit another vehicle)? The system, completely unaware of the Botts’ dots, chooses to hit the barrier instead of hitting a vehicle. That vehicle in the other lane, driven by a human, is following the Botts’ dots. Had the Autopilot system seen the Botts’ dots, it would have gently steered to the right, knowing the object blocking it (the human-driven vehicle) would move along and not pose a threat.
That’s a situation where we put a lot of stock in the capabilities of Autopilot. For all we know, Autopilot didn’t notice the barrier at all thanks to being screened by the vehicle ahead of it like a winger screening a goalie in hockey.
Now replace Autopilot with any sober, licensed (or maybe unlicensed) driver. Intelligence, even a minute amount of it, is key here. This intellectual idiot, who’s still infinitely more intelligent than a computer, would notice construction signs, see taller heavy equipment ahead, and plan accordingly before the lane diverges. Above all, this person would be able to make these decisions in varying weather conditions.
The great thing about the human brain is its ability to make decisions based on small bits of incomplete information and fill in the blanks. For instance, if we are fiddling with the radio and pop our heads up just in time to see a diamond-shaped orange sign drift by, we know there’s likely construction ahead even if we don’t read the sign itself. Conversely, if a camera only faintly sees a snow-covered sign through a blizzard against an equally white background, it won’t know what to do with it. But we imperfect humans do.
So what does this have to do with asking the wrong question? Well, we’re now at a point where we’re trying to digitally sense and program our way around an infrastructure designed for the human interface. Signs are meant to be read by eyes, not cameras. The same logic applies to temporary road markings like Botts’ dots and others. All these warnings, cues, hints, and commands are designed with humans in mind. And we’re now trying to engineer sensors (LIDAR, radar, and cameras) and software to interpret the world as humans do, without the necessary intelligence to back it all up. Until we’re able to control the weather and develop some sort of artificial intelligence on par with even the least capable human mind, this effort is all for naught.
But there is a solution, and it has the ability to fix this and other problems: we need to change our infrastructure to best support autonomous mobility.
It’s no secret that road infrastructure is falling apart, and not just in the United States. Decades of under-funding transportation departments are coming home to roost, and we are in for a collective crisis when it comes to the health of our roads. We could just rebuild them again as we’ve done in the past and continue on the 30-year cycle of replacing concrete, or we could take this opportunity to future-proof our roads to handle autonomous operation.
Should you be a member of the camp campaigning for an autonomous vehicle future, you should be cheerleading an infrastructure upgrade. In-road communication between cars and central information hubs is the only currently foreseeable way to solve many of the challenges afflicting the autonomous vehicles we see today, whether they be semi-autonomous Teslas or fully autonomous Waymos. Weather is no longer a concern if vehicles no longer need to “see” road lines through snow and slush. Construction signs can be a thing of the past as central information hubs can alert vehicles to construction ahead. And if, God forbid, there’s an accident a mile down the road in this autonomous utopia, a message could be sent to inbound vehicles to zipper merge without causing excessive delays.
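To make the hub idea concrete, here is a minimal sketch of how a central information hub might broadcast location-tagged alerts to inbound vehicles. Every class, field, and value here is a hypothetical illustration, not any real V2X standard or deployed protocol:

```python
from dataclasses import dataclass
from enum import Enum

class AlertType(Enum):
    CONSTRUCTION = "construction"
    ACCIDENT = "accident"
    LANE_SHIFT = "lane_shift"

@dataclass
class RoadAlert:
    alert_type: AlertType
    mile_marker: float   # where the hazard begins
    advisory: str        # e.g. "zipper merge left"

class InfoHub:
    """Central hub that collects alerts and serves the ones relevant to a vehicle."""
    def __init__(self):
        self.alerts = []

    def publish(self, alert: RoadAlert) -> None:
        self.alerts.append(alert)

    def alerts_ahead(self, vehicle_mile: float, lookahead: float = 2.0) -> list:
        # Only hazards within the next `lookahead` miles matter to this vehicle.
        return [a for a in self.alerts
                if vehicle_mile <= a.mile_marker <= vehicle_mile + lookahead]

hub = InfoHub()
hub.publish(RoadAlert(AlertType.ACCIDENT, mile_marker=42.0, advisory="zipper merge left"))
hub.publish(RoadAlert(AlertType.CONSTRUCTION, mile_marker=55.0, advisory="temporary lane shift right"))

# A vehicle at mile 41 is told about the accident a mile ahead, nothing else.
for alert in hub.alerts_ahead(vehicle_mile=41.0):
    print(alert.alert_type.value, "-", alert.advisory)
```

The point of the sketch is that the vehicle no longer has to interpret cones, dots, or snow-covered signs; the hazard arrives as structured data long before the sensors could ever see it.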
But best of all, and this is me wishing for a perfect world, maybe our meatbag-driven vehicles could be equipped with the same message-reception technology. Instead of being expected to see a school-zone sign hidden behind a badly manicured bush, that sign could be displayed on a heads-up display. We could be warned of accidents ahead and which lane will get us through the bottleneck most efficiently. And (this is reaching, but allow me a moment) maybe I could blast up the middle lane of a three-lane freeway at 80 mph while a sea of fully autonomous vehicles parts into the other lanes as if I’m some sort of petrol-powered Moses. We can all hope.
In-road communication isn’t without its flaws. In a world where everything digitally connected is also hackable, there’s the risk of hijack via message spoofing, wherein an external actor sends an unauthorized message down the communication channel while pretending to be a “road authority.” That could have disastrous results and be a target for large-scale terrorism. But the same risk already applies to all connected autonomous vehicles, and to all connected, non-autonomous vehicles too.
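The standard defense against spoofing is message authentication: vehicles act only on messages they can verify came from the road authority. The sketch below uses a shared-secret HMAC purely for illustration; a real deployment would use asymmetric signatures under a public-key infrastructure, and every name and key here is invented:

```python
import hashlib
import hmac
import json

# Hypothetical shared secret; real systems would use PKI, not a hardcoded key.
ROAD_AUTHORITY_KEY = b"demo-key-not-for-production"

def sign_message(payload: dict, key: bytes) -> dict:
    """Road authority attaches an HMAC tag computed over the canonical payload."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": tag}

def verify_message(message: dict, key: bytes) -> bool:
    """Vehicle recomputes the tag and compares in constant time before acting."""
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["signature"])

authentic = sign_message({"type": "lane_closure", "mile": 42}, ROAD_AUTHORITY_KEY)
spoofed = {"payload": {"type": "lane_closure", "mile": 13}, "signature": "deadbeef"}

print(verify_message(authentic, ROAD_AUTHORITY_KEY))  # True
print(verify_message(spoofed, ROAD_AUTHORITY_KEY))    # False
```

A spoofed or tampered message fails verification and is simply dropped, which is why the threat, while real, is a solvable engineering problem rather than a reason to abandon the idea.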
I don’t want to be a party-pooper, but there needs to come a time when we never see car crash videos outside of NASCAR again, because the price of those events isn’t paid by an autonomous algorithm; it’s paid by something intelligent and human.
via The Truth About Cars http://ift.tt/Jh8LjA
March 3, 2017 at 06:02AM