
I work in this field, I have a project specifically on adversarial examples, and I have a strong opinion on this. I personally think worrying about adversarial examples in real-life production systems is like worrying about getting the vanilla Linux kernel to perform RT-critical tasks. It is fundamentally not a burden you should put on that one component alone; it is a problem you can only solve with a system approach. And if you do that, it is for all practical purposes already solved: apply multiple random perturbations to the input, project your perturbed versions onto a known, safe image space, and establish consensus. [1] is a work from my university which I like to point towards. Yes, this lowers accuracy; yes, you won't be able to use this for time-critical tasks anymore, but that's the price you pay for safety. Not getting hyped about CNNs and adopting a fail-safe approach that is only augmented with NNs is (in my humble opinion) why Waymo now has 30k miles between disengagements [2], while Tesla is either going to make me eat this post (not impossible, given that Andrej Karpathy is much smarter than me) OR is trying to hide the fact that they will never have anything resembling FSD by declining to report numbers.
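To make the consensus idea concrete, here is a minimal sketch of a randomized-consensus defense. This is my own illustration of the general recipe (perturb, project, vote), not the exact method of [1], and the projection step (project_to_safe_space, e.g. a denoising autoencoder or JPEG re-encode) is a placeholder you would have to supply yourself:

    # Randomized-consensus inference: perturb, project, vote, abstain on disagreement.
    # Sketch only; `model` is any PyTorch classifier, `project_to_safe_space` is a
    # hypothetical projection onto a known, safe image space that you provide.
    import torch

    def consensus_predict(model, x, project_to_safe_space,
                          n_samples=16, noise_std=0.05, agreement=0.75):
        """Classify a single image `x` (C, H, W) only if a clear majority of
        randomly perturbed, projected copies agree; otherwise return None."""
        model.eval()
        votes = []
        with torch.no_grad():
            for _ in range(n_samples):
                noisy = x + noise_std * torch.randn_like(x)      # random perturbation
                safe = project_to_safe_space(noisy.clamp(0, 1))  # project onto the safe image space
                votes.append(model(safe.unsqueeze(0)).argmax(dim=1).item())
        top = max(set(votes), key=votes.count)
        # Fail safe: abstain unless the consensus is strong enough.
        return top if votes.count(top) / n_samples >= agreement else None

The abstain branch is the fail-safe part: on disagreement the system hands off to a non-NN fallback instead of trusting a single forward pass, which is also why this costs you both accuracy and latency.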

[3] is another paper I recommend for anyone who wants to USE CNNs in applications and calmly assess the risk associated with adversarial examples.

Now, from a research perspective they are fascinating: they highlight weaknesses in our ability to train models, they are a valuable tool for training robust CV models in the low-data regime, and they have paved the way towards understanding the types of features learned by CNNs (our neighbours just released this [4], which in my eyes debunks a previously held assumption that CNNs have a bias towards high-frequency features, a fascinating result).

But for anyone wanting to use these models: you shouldn't worry about adversarial examples, because you shouldn't be using the models for anything critical in a place where an attack can happen anyway. The same way that "what is the best way to encrypt our users' passwords so they cannot be stolen?" is the wrong way to approach passwords, "how can we make the deep neural network in the application's critical path robust against targeted attacks?" is (for now) the wrong way to approach CV.

[1] https://arxiv.org/abs/1802.06806

[2] https://www.forbes.com/sites/bradtempleton/2021/02/09/califo...

[3] https://arxiv.org/abs/1807.06732

[4] https://proceedings.neurips.cc/paper/2020/hash/1ea97de85eb63...


