
I don't think trusting that someone wrote the code themselves was ever a good assurance of anything, and I don't see how that changes with AI. There will always be certain _individuals_ who are more reliable than others, not because they handcraft code, but because they follow through with it (making sure it works, fixing bugs after release, keeping an eye on it once it's shipped, etc.).

Yes, AI will enable exponentially more people to write code, but that's not a new phenomenon - bootcamps enabled an order of magnitude more people to become developers. So did higher-level languages, IDEs, frameworks, etc. The march of technology has always been about doing more while having to understand less - higher and higher levels of abstraction. Isn't that a good thing?



Until now, the march of technology has been limited, or at least slowed, by the pace of our advances in physical and cognitive capabilities. That has given us ample time to catch up, to adjust.

The cognitive reality of AI - and more specifically of AI plus humans in a socially and globally connected world - operates at a higher level of sophistication and can unfold much faster, which in turn might generate entirely unexpected trajectories.


Has it really? What evidence do we have that it's such an insanely fast, exponential advancement?



