Hacker News

I really don’t people will be programming like we do today in five years


I think the biggest blind spot for many programmers/coders is that yes, it might not change much for them, but it will allow many more people to code and do things they were not able to do before. As the models get better and people use them more and learn how to use them more efficiently, they will start changing things.

I am hoping we get to the point where the models are good enough that classes in schools are introduced on how to use them rather than just how to build them, as the number of people wanting or willing to learn programming is a lot smaller than the number of people looking for ways to do things more efficiently.


It's not like schools have any other option on the table, students will find a way to use all the help they can get like they always have. Embracing it is the only way they can stay relevant in the coming age of one-on-one AI tutors.

It reminds me of the Middle Ages, when only the priest was allowed to read and interpret the Bible, mostly by virtue of knowing Latin. Then suddenly the printing press comes around and everyone can get their own cheap Bible in their own language. You just can't fight and suppress this kind of thing in the face of such insane progress. In 100 years (if we're not extinct by then) people will probably look back on mass education, where one overworked teacher tries to explain something in a standard way to 30 people (over half of whom are bored or can't keep up), as some kind of savagery from an older age.


I’ve been programming since middle school. That would be 30 years. Nothing has really changed much. C++ is incrementally more convenient but fundamentally the same. Code editors are the same. Debuggers are the same. The shell is the same.

I am certain in 30 years everything will still be the same.


The way I write code was fundamentally altered in the last year by GPT4 and Copilot. Try having GPT4 write your code; you won’t be so certain about the future of programming afterward, I guarantee it.


I am the same, 35 years. I use GPT 4 every day now. It sure is handy. It speeds up some things. It is a time saver but it does not seem to be better than me. It is like an OK assistant.

I would agree, not a fundamental or radical improvement yet.

Will it be? I hope so.


GPT 4 does not produce code that I'm ready to accept. The time it takes to convince it to produce code that I'll accept is significantly larger than the time it takes to write that code myself.

GPT 4 is fine for tasks absolutely foreign to me, like writing a PowerShell script, because I know almost nothing about PowerShell. However, those tasks are rare, and I'm generally competent at the things I need to do.


I have free Copilot due to my OSS work. This week I disabled it for C++ because it is chronically incapable of matching brackets. I was wasting too much time fixing the messes.

I use it for TypeScript/React. But it’s just a more comprehensive code completion. Incremental.


Uh huh, try GPT4 and report back. It’s a generational leap above copilot. I use copilot to auto complete one liners and GPT4 to generate whole methods.


Except that 30 years ago you were writing a whole shitload more buffer/integer overflows. Hell, that's why we've created numerous languages since then to make it a hell of a lot harder to footgun yourself.

If coding hasn't changed much in 30 years, it may mean you have not changed much in 30 years.


The process improved: version control, CI, unit testing. But not the tools. Clang is “recent” but it is still a traditional CLI compiler.

I write fundamental software. Chromium, Node. It is good old, largely incremental, C++.


I don’t see why not.

I like programming how I do now. I don’t plan to stop.

People do lots of things manually that machines have been able to do for a long time.


You can do what you want, you just won't be paid to do it anymore.


I still get paid for a project involving Oracle 9i running on Itanium HP-UX and a Delphi application running on Windows XP. This project is not going anywhere in the next 5 years. And there are numerous other projects which will not go anywhere either. I just don't believe that the programming landscape will change much. Maybe in the California startup world. My world moves slower.

I don't think anything significantly changed in my approach to code since 2013. We will see how 2033 goes, but I don't expect anything big either. ChatGPT is just a Google replacement, Copilot is just smart autocomplete. I can use Google instead of ChatGPT, and I could use Google 10 years ago. I can use vim macros instead of Copilot. This AI stuff saves me a few hours a month, I guess, so it's worth a few bucks of subscription, but nothing groundbreaking so far.


I think most are going to wait on the sidelines watching the bleeding edge. The promise is great, but so is the risk of disaster. Imo it's going to be a generational shift.


We can't all run a YouTube channel for the programming equivalent of Primitive Technology, fun though that would be. 99.99% of us will have to adapt to AI being a coworker, who will probably eventually replace us.

Right now we're still OK because the AI isn't good enough; when it gets good enough, doing things manually will be about as economically sensible as making your own iron by gathering a few times your mass in wood, burning some of it in a sealed clay dome to turn it into charcoal, digging up some more clay to make a porous pot and a kiln to fire it in, filling it with iron-rich bacterial scum from a creek, letting the water drain, building a furnace, preheating it with the rest of the wood, then smelting the bacterial ore with the charcoal, to yield about 7 grams of iron.


Nice analogy! I saw an estimate recently on the cost of programming that predicted automated coding will cost 10,000 times less than human coders. It was all back-of-the-envelope and questionable, but still food for thought. Will we be 10,000 times more productive or will we be out of work? I think a lot of people will be out of work.


Thanks! :)

> Will we be 10,000 times more productive or will we be out of work? I think a lot of people will be out of work.

It can be both. Automation of farming means we've gone from a constant risk of starvation to an epidemic of obesity, while simultaneously reducing the percentage of the workforce in agriculture.


> who will probably eventually replace us.

No one is going to use AI and then just have it 'replace them'; they're going to use it to augment their abilities and avoid replacement.


The people using AI to write code aren't necessarily former professional programmers, for the same reasons people using AI to make pictures aren't necessarily former professional artists, and those using aim bots aren't necessarily former professional snipers or olympic shooters.

A manager can prompt a chatbot to write a thing instead of prompting me to write the same thing. For the moment, what (I hope) keeps me employable is that the chatbot is "only" at the level of getting good grades rather than having n years of professional experience.

I have no expectation of any specific timeline for that to change. Perhaps there are enough incentives that it will never get trained to that level, but perhaps it was already trained 4 months back and the improved capabilities are what caused the OpenAI drama.


I mean, if we had something that capable, I have zero idea why you think your manager is going to be in a safer position than you? That seems ridiculous.

You could easily flip it around: ask the bot to manage you better than your manager does and make the best use of your time, or something like that?


Managers have more business contacts and access to money.

But otherwise your point is valid.


And they generally get outcompeted sooner or later.

All disciplines evolve over time and those who fail or refuse to keep up will be left behind.


Just because you can have a robot/machine that can efficiently churn out 1000 frozen lasagnas a second doesn't necessarily mean that Italian restaurants have been "outcompeted" or "left behind" by not using such a machine in their business.

Sometimes quality and responsibility matter. Even if a machine is really good at producing bug-free code, often someone is going to have to read/understand the code that the machine produces and take responsibility for it, since machines cannot take responsibility for things.

In the end, you'll always need a human that understands the formal algorithmic language (i.e., a programmer), capable of parsing the formal program that will be run, if you want to be able to trust the program, since any way of mapping an informal request (described imperfectly in an informal language) to a formal construct is always going to be "lossy" and prone to errors (and you'll know this if you ever used an automatic code generator). Just because someone is willing to blindly trust automatically-generated code, doesn't mean everyone else is: there are contexts in which you need a person to blame, when things go wrong.


Ok, but to continue this analogy, industrialization and the ability to create 1000 frozen lasagnas a second had an enormous impact on the world. Not only on the economics of production, but ultimately on human society.

Sure, handmade lasagna still exists, but the world looks nothing like it did 200 years ago.


Heh, most cooking these days is like using libraries to build an application.

I don't slaughter an animal, I buy a cut of meat.

It's very rare I make pasta, I rehydrate dried husks.

The cheese comes in some kind of jar or package. The vegetables come from a store.

This has been the general move in applications too. I see companies with very large programs that are just huge sets of node modules joined together with a small amount of code.


> and take responsibility for it, since machines cannot take responsibility for things.

That's an interesting thought: people "taking responsibility" as a form of labour, for machines that stole their lunch. There would probably be around zero applicants for that job.

Responsibility is a complex quality needing capacity and competence. Right now even the manufacturers of "AI" are unable to assert much about behaviour.

Where "responsibility" exists around robotics and ML it will more likely be a blunt legal instrument tied to ownership, like owning a dangerous animal.


AI can be used to support that activity too. Models can just as well be used to explain existing code, possibly cranked out by another AI. I bet many companies are thinking about fine-tuning or LoRA-ing language models on their moldy codebases and outdated piles of documentation to make onboarding, refactoring, and routine extensions easier.

To interpret what AI models themselves are actually doing, researchers employ AI models as well.


People do things because they enjoy doing them. And people will continue to do things they enjoy doing such as programming.

I don’t think it has anything to do with competition or being left behind.


The point is that people won't get paid anymore to do it. It has happened before: many activities that have been replaced by technology have been almost forgotten (for example the newspaper reader in the factory) or are practiced as art or niche crafts only. Careers built on these are either wildly successful or have highly unsteady income since they are literally reliant on the whims, not on the needs, of people.


They’re making fun of your typo, but you’re right. Pretty much every software job in 5 years will be an AI job. This rustles a lot of feathers, but ignoring the truth will only hurt your career.

I think the era of big tech paying fat stacks to a rather large number of technical staff will start to wane as well. Better hope you have top AI paper publications and deep experience with all parts of using LLMs/whatever future models there are, because if not, you’ll be in for a world of pain if you got used to cushy tech work and think it’s inevitable in a world where AI is advancing so fast.


Have you ever worked in tech and had to deal with the typical illiteracy and incompetence of management and execs?

If LLMs got this good, the brick wall these orgs will hit is what will really ruffle feathers. Leadership will have to be replaced by their technical workers in order for the company to continue existing. There's simply not enough information in the very high-level plain-English requirements they're used to thinking in. From a theoretical and practical perspective, you very likely cannot feed that half-assed junk to any LLM, no matter how advanced, and expect useful results. This has very much already been the case human-to-human for all of history.

Either that or nothing happens, which is the current state of things. Writing code is not even 10% of the job.


> you very likely cannot feed that half-assed junk to any LLM no matter how advanced and expect useful results

Why don't you think that a sufficiently advanced AI can do the same as what technical humans do today with vague directions from managers?


I feel that issue with AI is similar to issues with AI cars.

An AI car won't ever reach its destination in my city, because you need to actively break the rules a few times if you want to get there. There's a stream of cars and you need to merge into it. You don't have the right of way, so you'd need to wait until that stream of cars ends; but you could be waiting for hours. In reality you act aggressively and someone lets you in. An AI will not do that. Every driver does it all the time.

So when AI tries to integrate into human society, it'll hit the same issues. You send mail to a manager and the mail gets lost because the manager doesn't feel like answering it. You need to seek him out, you need to face him and ask your question so he has nowhere to run. AI doesn't have a physical presence, nor does it have the aggression necessary for this. It'll just helplessly send emails around, headed straight for spam.


Indeed.

I can give a vague, poorly written, poorly spelled request to the free version of ChatGPT and it still gives me a correct response.

As correct as usual at least (85-95%), but that's a different problem.


Correct compared to what?

There's gonna be a lot of context implied in project docs based on previous projects and the LLM won't ask hard questions back to management during the planning process. It will just happily return naive answers from its Turing tarpit.

No offense intended to anyone, but we already see this when there are other communication problems due to language barrier or too many people in a big game of corporate telephone. An LLM necessarily makes that problem worse.


Correct compared to what I ask it for.

Previous projects can be fed into LLMs either by context window (those are getting huge now) or fine tuning… but of course it's not a magic wand like some expect it to be.

People keep being disappointed it's not as smart as a human, but everyone should look how broad it is and ask themselves: if it were as good as a human, why would companies still want to employ you? What skills do you have which can't be described adequately in writing?


LLMs are cool and will continue to change society in ways we cannot readily predict, but they are not quite that cool. GPT3 has been around for a little bit now and the world has not ended or encountered a singularity. The models are expensive to run both in compute and expertise. They produce a lot of garbage.

I see the threat right now to low-paid writing gigs. I’m sure there’s a whole stratum of those they have wiped out, but I also know real live humans still doing that kind of work.

What developers may use in five years is a better version of Copilot trained on existing code bases. They will let developers do more in the time they have, not replace them. Open source software has not put us all out of jobs. I foresee the waning of Big Tech for other reasons.


> GPT3 has been around for a little bit now and the world has not ended or encountered a singularity.

And they won't, right up until they do. The reason is that…

> The models are expensive to run both in compute and expertise.

…doesn't extend to the one cost that matters: money.

Imagine a future AI that beats graduates and not just students. If it costs as much per line of code as 1000 gpt-4-1106-preview[0] tokens, the cost of rewriting all of Red Hat Linux 7.1 from scratch[1] is less than 1 million USD.

[0] $0.03 / 1K tokens

[1] https://dwheeler.com/sloc/
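The arithmetic behind that estimate can be sketched out, assuming dwheeler's roughly 30 million physical SLOC figure for Red Hat Linux 7.1 and the hypothetical budget of 1,000 output tokens per line:

```python
# Back-of-the-envelope cost of AI-regenerating Red Hat Linux 7.1.
# Assumptions (not hard numbers): ~30 million physical SLOC per
# dwheeler.com/sloc, and 1,000 gpt-4-1106-preview output tokens
# spent per line of code, at $0.03 per 1K output tokens.

sloc = 30_000_000           # estimated lines of code in Red Hat 7.1
tokens_per_line = 1_000     # assumed token budget per finished line
price_per_1k_tokens = 0.03  # USD, gpt-4-1106-preview output pricing

cost = sloc * (tokens_per_line / 1_000) * price_per_1k_tokens
print(f"${cost:,.0f}")      # roughly $900,000 -- under a million USD
```

Even if the real token budget per line were several times higher, the total stays in the single-digit millions, which is the point of the comparison.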


I like financial breakdowns like this. The thing an LLM cannot do is all the decision making that went into that. Framing the problem is harder to quantify, and is almost certainly an order of magnitude more work than writing and debugging the code. But a sufficiently good LLM should be able to produce code cheaper than humans. Maybe with time and outside sources of truth, better.


I give it two years. Salaries will drop like a rock.


I don't people will be either brotha, I don't people will be <3


Cool


People don’t like it is, but it


People don't think it be like it is, but it do.



