In the 60s, there was a rather famous project where scientists thought that solving image recognition would take them a few months at most. It's 2018, and the best algorithms on the planet will answer with 99% confidence that a sofa in a zebra print is, in fact, a zebra, unless they were very specifically trained against this scenario.
I feel this exact same thing is happening with autonomous cars - yes, it's possible to get them to be extremely good at recognising the road and surroundings - but the last few percent, those crucial few percent that make the technology actually usable, I don't see those happening for another 50 years at least.
My unpopular (to me, even) prediction is that wartime will be the catalyst that makes self-driving cars ubiquitous. In times of peace having a car that kills its occupant even 1/10,000 of the time is unacceptable. In times of war having a car (or more likely, truck/tanker/tank) with no occupant is a huge competitive advantage, because someone is actively trying to kill the occupant. Build something with even a 90% chance of crashing and you still win, because you can take people off the battlefield entirely while your opponent loses precious soldiers with every vehicle that's destroyed.
Then after the war, people's risk tolerances get reset because quibbling over a 1/10,000 chance of death seems ridiculous when people have been actively trying to kill you for the last 5 years and a countable percentage of your friends are now dead.
The way global politics is going we probably don't even have to wait 10 years for this.
You seem to have a war like WW2 in mind. But a war between big powers in the 21st century couldn't drag on for five years while you frantically work on new technology. Even without use of nukes, one side or the other is going to be flattened a lot faster than in the days when bombers used propellers.
Now, a new Cold War, that might push technological competition, although you won't get that "reset" of people's attitudes that you're after.
It sounds more like he has a war like Iraq and Afghanistan in mind. Imagine resupply convoys being driven autonomously, capable of launching drones for defense.
> It sounds more like he has a war like Iraq and Afghanistan in mind. Imagine [...]
So... that would make them exactly the wars in Iraq and Afghanistan. If that were the case we wouldn't have to imagine, we'd have the results in the field already.
So that war must be of a different kind. Not the kind where you already have technical superiority and have 0 incentive to develop it because you can already make truckloads of money by supplying current generation equipment to the front lines. It has to be a war where developing the new tech is the difference between your country existing or not a decade or more from now.
That's a special kind of war. It could be a cold war, but it's unlikely one would unfold the same way the 20th-century one did. And if that kind of "hot" war is the only one that can bring these improvements, I'd rather stick to driving my own car and labeling my own photo library :).
I think you are half right. It would take wars exactly like Iraq and Afghanistan. And here we are, testing autonomous vehicles. Perhaps I'm wrong, but my understanding was that the current raft of autonomous technology was supported in its early stages by DARPA, with their priorities set by what was going on in Iraq and Afghanistan.
Someone I know frequently relays an anecdote. They were designing a new military helicopter. One of the requirements was that if the pilot was injured the aircraft should be able to return to base and come to a hover. They were trying to figure out how to cut weight. My friend said, well, if we just remove the pilot and all the equipment needed to support them, we'll lose a lot of weight, and the vehicle will be more aerodynamic. His suggestion wasn't taken seriously. That was before Iraq/Afghanistan.
That project got canceled, my friend retired. I have another friend working at the same company. They are building an autonomous helicopter.
What I mean is that it's not these wars that will bring you autonomous vehicles. It's war in general, the idea of using autonomous machines of any kind for war is very old, and so is the study of it. But Iraq has been a war zone for decades now (with intermezzos). And although some of the tech was there since before the Gulf War we still haven't progressed that far in ~30 years.
This kind of war brings slightly accelerated incremental progress, evolution.
Something like a world war, or a cold war where you question whether your city will be the next Hiroshima, brings you a jump: the nuclear bomb, the ICBM, man in space and on the Moon, and so much other sci-fi tech. That's what I meant.
Yes, today we have slightly better autonomous vehicles than a decade ago but this is natural evolution and it relied on progress in so many other (not necessarily war driven) improvements: computers, electronics, etc.
I would rather not see the war that brings you the AI for autonomous machines.
I've seen a number of projects already for self-driving supply trucks, and even walkers from Boston Dynamics. There's a (probably highly scripted) one featured in a later episode of Top Gear, for example.
Well, you probably won't have to wait 10 years for another non-peer war like that.
I think he's envisaging a big peer war with mass mobilization à la WW1/WW2, but he imagines such a war would be like a big Iraq/Afghanistan. It wouldn't.
It seems like it might be feasible for a proxy war (like the one in Syria) to escalate significantly, to the point where multiple major powers are contributing manpower and materiel directly to the front lines; in such a case, it may be possible for an extended war to take place while still incentivizing deployment of autonomous weaponized platforms - which are already in development today [1].
I'm uncertain exactly what the next major war will be like or who the sides will be, but I was thinking of a war like Syria but in a developed country. Perhaps you'd have Jesusland vs. Union of Socialist States of America vs. the Neoconfederacy vs. Ecotopia vs. The Reconquista vs. Sovereign Citizens vs. drug cartels vs. Steel Glory vs. Greater Canada against the backdrop of Central American refugees in North America. Or Cosmopolitan London vs. Hail Britannia vs. Neo-Luddites vs. Scotland vs. Ireland in the British Isles. Or whatever the fissures are in Russia or China.
One observation about the Syrian Civil War (and Iraq and Afghanistan) is that large areas of the country became extremely dangerous, because everyone was fighting over them and you often couldn't see the adversary or know who you were fighting. Another is that supply lines were quite vulnerable; forces could hole up in a military base and be relatively safe, but to continue operations in the field, they needed food/ammo/fuel, all of which needed to be transported at significant risk. A third was that whichever force brought security to a region and stopped the fighting there often won political power, because a majority of people don't care who rules them, they just want to not die.
Drones + self-driving supply lines would allow a belligerent to fortify & disperse their own industrial and operations base, well out of contested zones, and then project power at zero risk to their own lives into disputed areas. If the drones are smart enough (i.e. minimize civilian casualties but can easily detect and eliminate belligerents), they're also likely to win political points for eliminating combatants.
I dearly hope that the Western cultural fractures don't turn into this.
In any case, this is unlikely to spell victory for the belligerent who chooses this route. Humans are cheaper than self driving military trucks. You can afford to lose them. Literally-literally.
It's only in our cushy situation of relative world peace that we find ourselves entertaining the idea that spending a million dollars on a truck is somehow cheaper than losing a ten-thousand-dollar truck with 6 men on it.
I also dearly hope that Western cultural fractures don't turn into this.
And no, humans are not cheaper than self-driving trucks. They appear so at the beginning of a conflict because wars usually start when there's an excess of humans and a shortage of resources for them all. However, it takes 18 years to grow a human to the point where they can fight in a war, and another year or two to train them. It takes 2-3 years to tool up a factory to produce drones & self-driving trucks (and maybe a decade to get the software right), but once you do, you can produce one every couple days. Assuming you can maintain your industrial & technological base long enough to get that factory up, guess which one is going to win?
The limiting factor for the Japanese in WW2 wasn't planes, it was trained pilots - they had no problem crashing planes into ships with untrained pilots, because planes and untrained pilots were both abundant resources, but they were incapable of fighting a sustainable air war.
The US Army is still putting funding into a variety of autonomous vehicle programs. For example they want to send a convoy of vehicles to resupply a remote outpost without putting human drivers at risk.
The reasons I don't believe you're correct are (a) in war, most ground truck transport already involves transporting non-driver humans - boots on the ground are indefinitely the workhorse of occupation, (b) active remote control works great in situations where human drivers create extraordinary risk, and (c) it might never even be possible to create self-driving cars that perform better than humans, no matter how much the military wants it or how many years are invested.
Isn't that mostly Hollywood illusion? That the Americans are the Good Guys, paragons of morality and protectors of all that is alive, while everyone in "backward countries" treats their own as cannon fodder?
No, I'd say from personal experience that Hollywood severely downplays reality here. Typical American movies feature villains and antagonists who are still pretty "Americanized" in terms of culture.
Well, it depends. The armies certainly do value their highly trained soldiers and pilots, if only for the fact that they invested a lot of time and money in them. And during normal times they also value the average grunt, because the public does not like casualties.
But in a serious war, nobody cares too much about a normal soldier.
To me, a fundamental question is "is this a problem?"
Humans have similar problems too. Our intelligence is trained by experience and evolution to operate within certain parameters. More concerning to me is the fact that a human can conceptualize "couch" from a single example. ML algorithms need to see thousands of couches before they can classify them.
Preface: I didn't intend for this to be so long... I just got on a roll...
I'm not sure it's true that we conceptualize couch from a single example. We've all seen thousands of couches in many different contexts, homes, schools, doctor's offices, on TV. If you had a person who had never seen a couch or a zebra before, and all you could tell them was right or wrong, it would probably take them (a lot?) more than a single try before they could distinguish between couch and zebra without fail.
The only reason it's easy for us is that we have this giant scaffolding built around zebras as living things that look like horses and donkeys, and couches as inanimate objects that look like things people sit on and regularly have some pattern on them.
I think people tend to underestimate just how much "training" in the ML sense goes into a human brain. After all, humans spend the first... decade? of life incapable of all but the simplest tasks. That's a decade in which our brains are consuming petabytes of information and processing it constantly.
Watching my children grow up, the way they learn seems remarkably similar to the way computers learn. If you watch a baby learn to move, it is purely an exercise in going too far in one direction and then too far in the other direction, repeated for basically years until they're coordinated enough to move roughly like an adult by the time they're 3 or so. It's the same with words and concepts too, they're just guessing based on things they already know (and the guesses are often waaaay off, because they don't yet know much), but they're constantly filing things away into their frameworks until their frameworks get big enough that this too resembles the way adults learn.
By the time we're adults, the human brain makes ML algorithms look pathetic, but that doesn't take into account the decades-long head start that our brains got.
I thought it might have been a reference to (LISP inventor) McCarthy's famous Dartmouth Conference in 1956, where they thought that 10 people over 2 months could make "significant advances" in making machines "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves".