Underwhelmed is the absolute correct word to use here.
Absolutely everyone raves about this but other than a few basic computer related tasks I’ve not seen compelling use cases that justify the billions being lit on fire trying to pursue it.
My cynical take is the crypto bro’s needed something to do with their useless GPU’s after the crash and found the perfect answer in LLM’s.
It’s primarily about confidence and motivations. People with high confidence at what they do are supremely unmotivated to use something like AI to solve problems they don’t have.
People with low confidence will be super excited for AI because it solves problems they weren’t even thinking about.
Executives that don’t write code are super excited about AI because hopefully it means they can continue to high low confidence people, which are plentiful and cost less.
I am sitting on the sidelines watching in disbelief. I don’t use AI and don’t plan to. I used to write JavaScript for a living and still get JavaScript job alerts from a lot of job boards. The compensation for JavaScript work is starting to shoot through the roof as employers are moving away from garbage like React and Angular. The recent jobs are becoming fewer and are more reliant upon people with tons of experience that can actually program. Clearly AI is not replacing positions for higher talent with greater than 8-12 years experience.
"I refuse to pick up the magic hammer that nails things in by me just thinking about it while holding it in my hand; nosiree, give me that old fashioned hammer so I can sit here and nail some nails into a 2x4 while the guy using the other tool is building whole slop neighborhoods. Ha, that guy is so dumb and I'm too cool because I won't ever use that hammer."
I don't get it. Proudly saying you don't plan to use better tools is not some 'cool' look or the brag you think it is. You're just making yourself less valuable and being ignorant on purpose.
Yeah, I'm sure I'm the one who doesn't get it, not the guy refusing to use the paradigm changing tools, "because".
You sound like my Grandmother who refused to even look at a computer screen. Literally the same. It's amazing it's coming from so called tech-literate people in the field.
I have heard this same logic numerous times through my career and its a bias void of evidence, a loud indication of low confidence.
People would lose their minds when they discovered I did not pray at the cult of jQuery and then later React and so forth. I didn't need them. I was more productive without them and still managed to produce applications that executed dramatically faster with substantially less code. AI tools fall into this same camp. What could they provide me that I cannot do better myself at this point in my career? That is a serious question, by the way.
AI tools might work well for you. I am not you. Affirming your bias with baseless assumptions void of evidence will not make you a better programmer. Real developers write their own original code and/or architecture plans. Real engineers measure things and live or die by those measurements.
AI is just another tool. It is not a skill and will not compensate for skills. Perhaps I will use AT later to write test automation because that is something that is very simple to validate and likewise something I really don't want to bother with.
"Real developers write their own original code and/or architecture plans."
Nice tired no true Scotsman argument. You literally sound like a stubborn boomer refusing to see the change. You sound ignorant, not wise.
I've created several multi-modal AI applications deployed in production; fully created through Codex and Claude code. SOC2 compliant and created in a month, not years.
You can be an old man, but you don't need to have an old man mentality.
You wouldn't hire someone who only knows how to use a typewriter in a world of computers. No big deal, right? "A computer is just 'another tool', why should I not be fine with my typewriter?"
Literally fucking blows my mind I'm even having this discussion on this website, with people who should know better.
Your frequent astonishment indicates you have not been writing code very long. I also suspect you also may have some combination of ADHD or ASD, which would explain both your necessary dependence on a tool for relevance as well as your hostile defensiveness. Either way, in the long run, you will have trouble sticking around because tools eventually get replaced.
If after 20 years you still have not figured out how this industry works, hope that tool evangelism will save your career, and cannot measure things you almost certainly have autism. If you are not already diagnosed I strongly recommend seeking an evaluation.
The reality of success in this industry has nothing to do with tools. Its all about KPIs (however your organization defines them), superior planning/communication skills, and leading people. The people that produce the most with the least maintenance overhead are the people most well rewarded. Simply just not getting fired is not a metric of success.
I literally won a corporate Innovation award last year at my company (that does $30B in revenue yearly) for some of the previous applications I put into production that I mentioned. Even got posted on LinkedIn where thousands of people liked it. I am also in a technical role that is not far removed from the C-Suite, reporting to a VP.
Your analysis couldn't be further from reality, except the ADHD part, lmao.
I have the industry figured out buddy, it's you who thought they did, but doesn't anymore.
This website is literally unrecognizable from 10 years ago. I don't even know where to go now. /r/accelerate seems to be the only place with people who aren't blinded by some kind of emotional bias, plain stubbornness, or straight up stupidity.
It seems like you get personally offended by people using their critical reasoning abilities.
I know a folk who did a PhD in the area, and work at one of those frontier labs as a researcher, and privately he is as sceptical as the most "stubborn" HN denizen you mention.
Unbounded enthusiasm for AI without any reservations is something that can only be born out of minds utterly deprived of imagination and creativity.
Totally right! The folks who were very recently telling us we were all going to be trading NFTs in the metaverse are the clear eyed optimists not motivated by anything but rational consideration for the truth.
As a senior dev who has been using these tools to their fullest effectiveness in production environments, until AI can reduce the entropy of a codebase while still adding capability I will continue to be underwhelmed.
When you use the term "luddite" in the way you do, you reveal that you aren't aware of who the Luddites actually were. Luddites weren't anti-technology; many of them were experts at using advanced machinery. What they opposed was the poor quality output of automated factories and the use of machinery to circumvent apprenticeships and decent wages.
As for your promise of a great leap at some vague point in the future, that's such a widely-mocked AI industry trope at this point that it's a little embarrassing you went there.
The only thing that will be embarrassing is how badly your comments, and those like yours will age.
I don't know what happened to this place, but it went from actual young people sharing information on the newest things in tech, tech philosophy, interesting stuff; to now old men yelling at the clouds about the new tech.
I agree with your basic point, but it’s not just an age thing. There are plenty of older people enthusiastically using AI for software development now. Just as an example, Steve Yegge, who vibe-coded the Beads and Gas Town AI projects, is around 57. I’m a bit older than him, and I’m working with Claude, Gemini, and Codex on a daily basis, having great fun and learning tons.
What we seem to be seeing with AI is that the prospect of completely changing the way you work is threatening for a lot of people, and of course so is the prospect of losing your job. When people are faced with something threatening, a common reaction is to criticize it in every possible way - you can’t admit anything about it is good because that risks encouraging the threat. It’s not exactly rational, but it’s what people often do.
HN has never been exempt from that, it’s just that AI is a big change that brings out this instinct in many more people.
"Yes, it sucks now, but believe me it won't be for long" spiel has been hyped for several years now.
Oh, don't get me wrong, these tools are amazing. But just yesterday a very small refactoring resulted in 480 fully duplicated lines in a 5000-line codebase (on top of extremely bad DB access patterns) despite all the best shamanic rituals this world has to offer [1].
So yeah, senior engineers especially use these tools daily, and keep being completely honest about their issues and shortcomings. Unlike the hype and scam artists.
[1] Oh, sorry. I meant to say skills, context engineering and management, memory, prompt engineering.
" But just yesterday a very small refactoring resulted in 480 fully duplicated lines in a 5000-line codebase (on top of extremely bad DB access patterns) despite all the best shamanic rituals this world has to offer."
And even staying within the comfort of AI enthusiasm: Google wasn't exactly leading in this race. If you have this much confidence in what those presenters and engineers at Google told you, you now have some opportunities to make a lot of money.
Anyone here who is currently 'underwhelmed'; please get through all 5 levels here and then say the same thing.
This is just the beginning. I seriously can't believe this place turned into neo-boomerism ideology on tech. I honestly don't get it, just makes me think everyone here talking about being seniors and architecture and blah blah; don't actually know shit, and aren't actually good at what they do.
That is the completed instructions for the fifth level, I leave it as an exercise to the reader to actually read more and find the rest of the steps on their own.
I spent some time chatting with Google engineer who put this together, Ayo Adedeji, at UCLA's SAIRS conference.
You asked about Google and what impressed me so much, going through this exercise, while not exactly helpful for me and my work directly (I'm doing similar things but completely in the Azure ecosystem), it is definitely a great display of how agents are more than just an 'LLM' that everyone here seems to think is equivalent to AI.
It's seriously the opposite feeling of imposter syndrome at this point, I'm in my 30's, a senior data engineer myself at a F200 company; I can't believe so many of my peers are so behind and ignorant of what is going on, confident enough to makes publicly lasting comments about how 'unreliable', 'bad', 'slop'; 'AI will never this or that'.
Even SOTA models when used in agents in simple NLP tasks such as text classification still fail more times than acceptable when evaluated against a realistic evaluation dataset with sufficient example variety and with some adversarial prompts included.
Improving such uses cases is mostly an artisanal endeavor, sometimes a few-shot prompt improves things, sometimes it improves things at the expense of kind of overfitting it, sometimes structured reasoning works, sometimes it doesn't, or sometimes it works and then the latency and token explodes, etc etc....
And yet a lot of teams don't see this problem because they don't care much about evaluations, and will only find this issues in production a few months after deployment.
"AI-insiders" are trying to market their tools to you. See Anthropic's continuous lithany of "all programmers will be replaced in 6 months" while they struggle to make their TUI API wrapper consume less than 2-4 GB of RAM (they brought it down from 68 GB[1]), or have a decent uptime.
> When did Hacker news start becoming a luddite, bad takes everywhere I look, feels like everyone is '50 year old burnt out guy' that has no idea what is going on vibe?
Much to the opposite, I think healthy skepticism is a sign of maturity. The overeager embracing of hype cycles is extremely cringe.
> I just got back from a SAIRS conference at UCLA and talked directly with some of the presenters and engineers at Google.
Cringe, as I was saying.
Conferences are just mutual fart smelling, swagger, and expensing trips on company momey. I am not against it, but treating your participation in some conference as a sign of the future is very silly.
Every conference I participated always overhyped every current bullshit.
There is emotional bias and stubbornness in nearly all of your responses in this thread, the very same traits you lambasted HN broadly for in another comment. Rather than calling people, "stupid and wrong", why don't you make your case?
If you don't want to be bothered to argue your points, and this place truly chaps your ass to the degree it does, why even waste your time commenting at all when, according to you, there's a more fun place with bigger brains that-a-way, as far as you're concerned? points
I mean, it takes more energy and effort to be angry and annoyed than to just move on and leave us luddites in the dust.
It is pretty emotional seeing a place with people you respected and learned from for so long, where you could rely on for the place to find the newest and most interesting things happening in tech, where people in the know discussed the technical aspects; to now neo-luddites everywhere bashing shit they don't understand, ON A FUCKING TECH FORUM; like THE tech forum.
I feel like I'm living in some kind of bizzarro world now when I read anything AI related on HN. It's insane.
Yes, we know, you've said that a few times already.
You could foster that high-level dialogue you seem to value so much by trying to better articulate your view so that the plebs understand, kinda like I suggested just there. Ya know, "be the change you want to see in the world" and all that, but okay...
"Or: they actually understand the tech, and see its limitations. Unlike wide-eyed neophytes and zealots."
I'm sure you super qualified rando's on HN know and understand the tech and know its limitations better than the Google engineers actually making the stuff.
Real ripe coming from a guy who can't even refactor a few lines of code correctly with an AI.
If you hate everyone here so much, why did you come back today? Further, why did you come back just to spew more negative, unhelpful comments that just parrot what you've ranted about already, rather than attempt to foster the "smart" dialogue that you wax poetic about?
This place actually hates all technology after the invention of Lisp. And there's the common online incentive to dunk on things that also exists here. Hence the infamous Dropbox comment and others.
But it's also been anti-Javascript, anti-cloud, anti-social-media, anti-crypto, anti-React, and so on.
I would therefore not in a million years expect it to be pro-LLM, and this is so obvious to me that I'm a bit suspicious of your motives for acting confused about it, as if it was ever any different.
> But it's also been anti-Javascript, anti-cloud, anti-social-media, anti-crypto, anti-React, and so on.
It was never any of these things, and you're misremembering if you think it was. There's never been a mono-opinion held by some all-encompassing hivemind.
I'm not misremembering. You can easily find monoculturey threads about all of these things. Just because there's a small slice of counter views, doesn't mean the average HN positions on these things isn't or wasn't decidedly negative.
It's literally unbearable now. I don't know how the place that once used to be exciting and deep in the know; is now old-man-yells-at-clouds ignorant of what is happening. It's actually really sad. /g/ and /r/accelerate seem like the last bastions of actual intelligent people discussing these things.
Google has a degree of seperation value stored on every account. Once the algorithm determines its been wronged it increases the radius so expect your household members and work colleagues accounts to be at risk when you try this.
Definitely not bullshit. I have a friend who was banned simply for returning a Pixel phone after accidentally ordering 2. Some automated mechanism flagged it as potential fraud and nothing worked to reverse the ban. Going to the bank to block payments, remove authorization, or God forbid, do a chargeback for the money they already took after banning you is playing Russian roulette with your Google account.
It's also the only way to stop Google from stealing your money short of going to a lawyer.
The Youtube account is banned. Google can escalate things and widen the net to ban anything and everything you have in the Google ecosystem, like a Gmail account. You can see here [0] that OP still has access to the Gmail account.
When you consider the cross section of the tech community posting on HN, is it really that surprising?
It’s mad for sure, but I’d bet 99.9% of people spending money on AI aren’t spending their own hard earned sooo… “YOLO it’s a business expense/investment”…
This is pretty anti-thetical to most good practices but the older and more experienced I get the more(13 years as a C# dev) I think copy & pasting sections of code is wayyyyyy more appropriate than extracting into a method/class/library or other forms of abstraction.
Everything starts out with good intentions when someone comes along and says “hey you could make that an abstraction” and I just clench my jaw because I’ve seen that happen so much and then that simple clean abstraction eventually ends up being a horrible 1000 line monster that barely anyone understands and no one wants to change.
I agree with everything except for it being anti-thetical to good practice. I have noticed a lot of experienced devs agree with that sentiment.
It has been a pretty common trend for the last few years of people breaking out of the “OOP style programming” and practices they were taught at university. I am not saying avoiding things like over abstraction is new, but I do think there is a newer generation of programmers who have been taught and warned about drawbacks from practices like that.
Similarly, my anecdotal experience tells me more newer game devs are aware of basic memory practices being better than overly complex OOP code. Think flat arrays and simple cache alignment over something abstract and over engineered
100% this. All the abstractions and OOP stuff make you end up with a codebase where half the code doesn’t actually DO anything in the product itself, it just connects to other code! It also becomes impossible to follow the flow of execution because it passes through dozens of files and layers of abstraction.
reply