So he's saying that concentration of "wealth" is bad, and war is bad.
A quick glance at the definition of "moral" gives "a person's standards of behavior or beliefs concerning what is and is not acceptable for them to do," which suggests that morals may be fluid. Are killer robots necessarily any less moral than killer humans? We seek to replace humans with "robots" in many cases under the assumption that they perform better. I suppose in the case of killer robots this could mean more effective killing, or perhaps it could mean more accurate strikes and fewer civilian casualties? (I'm not saying I'm an advocate for military AI, just posing some questions.)
Finally, suggesting that we need to focus less on incremental progress when DL still isn't completely understood seems premature. I'm not sure another great leap in AI is on the horizon until there's a leap in computational power or a new framework is discovered.
My take on the "robots mean more accurate/less risky warfare" argument is that this is precisely the problem (at least if we start from the assumption that war is bad). By "industrializing" warfare and reducing the cost in lives, we make it more politically palatable.
Risk assessment is a massive part of waging war. If the risk to one side is reduced by using robots (or other weapons-of-cheap-destruction) instead of humans, then the likelihood that side will favor war as a conflict resolution mechanism is (all other things being equal) increased.
On the other hand, if the risk is too high, then alternative options are more likely to be favored. This seems to be the thinking around things like nuclear disarmament and why proliferation is generally seen as a bad thing despite nukes being hands-down the most cost-effective way to end a war (at least against an enemy not similarly equipped - see the Cold War). The reason given for the US bombing of Japan was to save lives by shortening the war - though I'm not about to get into whether that decision was justified.
And that's before we introduce AI, which has had notorious failures like performing worse at identifying dark-skinned faces than light-skinned faces (https://www.bostonmagazine.com/news/2018/02/23/artificial-in...) or driverless car crashes. As uncomfortable as I am with the proliferation of killer tech in general, introducing AI actually makes my skin crawl.