Hacker News

Exactly! While there may be some neutral to slightly positive uses of this tech (haha, funny video), I can only really see the malicious uses: scams, misinformation, propaganda, all easy for anyone to create at massive scale.

I really don't see the argument for this tech being any kind of good, unless you think moving into an era where you cannot trust any image or video is somehow a neutral outcome, AND you are happy with the people who control this tech. Which, I guess, describes a larger part of the HN crowd than I'd hoped.




My perspective is different: we could never fully trust videos and images in the past either. Our hope, back then, was that the cost of faking such media would remain permanently high and deter people from doing so. But that was always wishful thinking.

GenAI has presented tangible proof of such risks and is forcing society to reevaluate the way we trust evidence. In my eyes, it serves as an opportunity to move our foundations of trust from something that relies on the goodwill of random authorities toward something more objective.

Also, I haven't really seen anyone celebrating the large corporations that control AI tech. That could just be the people I'm around, but most AI enthusiasts I've seen are, at a minimum, more interested in open-weights models.


IMO what's really wishful thinking is believing that society will necessarily adapt for the better in response to a deluge of AI spam/ads/propaganda.

You could have said the same about, say, pre-AI deceptively edited/ragebait/made-up content going viral on FB: "actually this is good, because soon people will realize they're being tricked/lied to, and they'll think extra critically before sharing dubious content next time."

Which has not happened. I can only see AI videos/images making the problem worse, as people are fed personalized, narrowly targeted content that seems to perfectly appeal to their own beliefs/biases/emotions/etc.

Also, if anything, it seems like we will have to trust authoritative groups more thanks to GenAI. If I have to treat every video on the internet from, e.g., Iran as potentially fake, I'm going to turn to the NYT or WSJ, who can (usually) be relied on to share only original content or highly vetted third-party content.


I agree that the solution we find may not necessarily be for the better. In fact, a couple of the solutions I've seen fall into that category, like banning GenAI (which does nothing to solve the underlying issue, and that kind of control over economic production always requires increased authoritarianism).

I can't really provide a truly good solution, as this problem has large ramifications in philosophy and ethics, but I'd guess it would involve things like attestation and certificates, and, primarily, treating shared media (text, images, videos, etc.) not as facts, but strictly as allegations.
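To make the attestation idea concrete, here's a toy Python sketch: a hypothetical capture device (or publisher) tags media bytes, and a verifier checks that the bytes are unchanged. It uses a shared-key HMAC purely for illustration; real provenance schemes (e.g. C2PA) use asymmetric signatures and certificate chains, and the key name below is an assumption, not part of any real system.

```python
import hashlib
import hmac

# Assumption: a device key provisioned out of band. Real systems would
# use a private signing key plus a certificate chain, not a shared secret.
DEVICE_KEY = b"hypothetical-device-key"

def attest(media: bytes, key: bytes = DEVICE_KEY) -> str:
    """Produce a tag binding the key holder to exactly these media bytes."""
    return hmac.new(key, media, hashlib.sha256).hexdigest()

def verify(media: bytes, tag: str, key: bytes = DEVICE_KEY) -> bool:
    """Check the tag; any tampering with the bytes invalidates it."""
    return hmac.compare_digest(attest(media, key), tag)

original = b"frame data from a camera"
tag = attest(original)

assert verify(original, tag)             # untouched media verifies
assert not verify(b"edited frame", tag)  # edited media fails
```

Note what this does and doesn't give you: verification only shows that some key holder attested to these exact bytes. Whether the footage reflects reality remains an allegation about the world, which is exactly the framing above.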



