Hacker News | mjr00's comments

> However, code quality is becoming less and less relevant in the age of AI coding, and to ignore that is to have our heads stuck in the sand. Just because we don't like it doesn't mean it's not true.

Strongly disagree with this thesis, and in fact I'd go completely the opposite: code quality is more important than ever thanks to AI.

LLM-assisted coding is most successful in codebases with attributes strongly associated with high code quality: predictable patterns, well-named variables, use of a type system, no global mutable state, very low mutability in general, etc.

I'm using AI on a pretty shitty legacy area of a Python codebase right now (like, literally right now, Claude is running while I type this) and it's struggling for the same reason a human would struggle. What are the columns in this DataFrame? Who knows, because the dataframe is getting mutated depending on the function calls! Oh yeah and someone thought they could be "clever" and assemble function names via strings and dynamically call them to save a few lines of code, awesome! An LLM is going to struggle deciphering this disasterpiece, same as anyone.
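For anyone who hasn't had the pleasure, here's a minimal sketch of the two antipatterns described above. The names are invented for illustration (not from the actual codebase), and a plain dict of columns stands in for the DataFrame:

```python
# Hypothetical sketch of the antipatterns described above; names are invented.
# A plain dict of column lists stands in for the DataFrame.

def _clean_prices(df: dict) -> None:
    # Mutates the "frame" in place: the column set depends on call order.
    df["price_usd"] = [p * 1.1 for p in df.pop("price")]

def _clean_dates(df: dict) -> None:
    df["date"] = [d.replace("/", "-") for d in df["date"]]

def run_step(df: dict, step: str) -> None:
    # "Clever" dynamic dispatch: the callee name is assembled from a string,
    # so neither a reader nor an LLM can grep for the call site.
    globals()[f"_clean_{step}"](df)

df = {"price": [10.0], "date": ["2024/01/01"]}
for step in ("prices", "dates"):
    run_step(df, step)

print(sorted(df))  # → ['date', 'price_usd']: which columns exist depends on which steps ran
```

Static analysis, grep, and an LLM reading `_clean_prices` in isolation all lose here: nothing in the source ever literally calls it, and the set of columns is a function of execution history.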

Meanwhile for newer areas of the code with strict typing and a sensible architecture, Claude will usually just one-shot whatever I ask.

edit: I see most replies are saying basically the same thing here, which is telling.


> LLM-assisted coding is most successful in codebases with attributes strongly associated with high code quality: predictable patterns, well-named variables, use of a type system, no global mutable state, very low mutability in general, etc.

That's all very true, but what you're missing is that the proportion of codebases that need this is shrinking relative to the total number of codebases. There's an incredible proliferation of very small, bespoke, simple, AI-coded apps, that are nonetheless quite useful. Most are being created by people who have never written a line of code in their life, who will do no maintenance, and who will not give two craps how the code looks, any more than the average YouTuber cares about the aperture of their lens or the average forum commenter cares about the style of their prose.

We don't see these apps because we're professional software engineers working on the other stuff. But we're rapidly approaching a world where more and more software is created by non-professionals.


> That's all very true, but what you're missing is that the proportion of codebases that need this is shrinking relative to the total number of codebases. There's an incredible proliferation of very small, bespoke, simple, AI-coded apps, that are nonetheless quite useful. Most are being created by people who have never written a line of code in their life, who will do no maintenance, and who will not give two craps how the code looks, any more than the average YouTuber cares about the aperture of their lens or the average forum commenter cares about the style of their prose.

I agree that there will be more small, single-use utilities, but you seem to believe that this will decrease the number or importance of traditional long-lived codebases, which doesn't make sense. The fact that Jane Q. Notadeveloper can vibe code an app for tracking household chores is great, but it does not change the fact that she needs to use her operating system (a massive codebase) to open Google Chrome (a massive codebase) and go to her bank's website (a massive codebase) to transfer money to her landlord for rent (a process which involves many massive software systems interacting with each other, hopefully none of which are vibe coded).

The average YouTuber not caring about the aperture of their lens is an apt comparison: the median YouTube video has 35 views[0]. These people likely do not care about their camera or audio setup, it's true. The question is, how is that relevant to the actual professional YouTubers, MrBeast et al, who actually do care about their AV setup?

[0] https://www.intotheminds.com/blog/en/research-youtube-stats/


This is where I get into much more speculative land, but I think people are underestimating the degree to which AI assistant apps are going to eat much of the traditional software industry. The same way smart phones ate so many individual tools, calculators, stop watches, iPods, etc.

It takes a long time for humanity to adjust to a new technology. First, the technology needs to improve for years. Then it needs to be adopted and reach near ubiquity. And then the slower-moving parts of society need to converge and rearrange around it. For example, the web was quite ready for apps like Airbnb in the mid 90s, but the adoption+culture+infra was not.

In 5, maybe 10, certainly 15 years, I don't think as many people are going to want to learn, browse, and click through a gazillion complex websites and apps and flows when they can easily just tell their assistant to do most of it. Google already correctly realizes this as an existential threat, as do many SaaS companies.

AI assistants are already good enough to create ephemeral applications on the fly in response to certain questions. And we're in the very, very early days of people building businesses and infra meant to be consumed by LLMs.


> In 5, maybe 10, certainly 15 years, I don't think as many people are going to want to learn, browse, and click through a gazillion complex websites and apps and flows when they can easily just tell their assistant to do most of it.

And how do you think their assistant will interact with external systems? If I tell my AI assistant "pay my rent" or "book my flight" do you think it's going to ephemerally vibe code something on the banks' and airlines' servers to make this happen?

You're only thinking of the tip of the iceberg which is the last mile of client-facing software. 90%+ of software development is the rest of the iceberg, unseen beneath the surface.

I agree there will be more of this, but again, that does not preclude more of the big backend systems existing.


Just like everyone has a 3D printer at home?

People want convenience, not a way to generate an application that creates convenience.


> "vibe coded" is NOT the bad thing you think it is.

It's not inherently bad in the same way that a first draft of a novel is not inherently bad.

But if someone asked me to read their novel and it was a first draft that they themselves had clearly not bothered reading or editing, I'd tell them to fuck off.


At least in the novel example the author had the decency to write what they're asking you to read.

These are more like sending someone a LMGTFY link they didn't ask for and expecting them to read all the results. Just a complete lack of awareness and respect for the maintainers.


> assume it is only a matter of time until they will be useful tools in the hands of even the untrained masses.

IMO this vastly overestimates how good the "untrained masses" are at thinking in a logical, mathematical way. Apparently something as basic as Calculus II has a fail rate of ~50% in most universities.


How does this follow?

There's nothing "basic" about Calculus II. Calculus is uniquely cursed in mathematical education because everything that comes before it is more or less rooted in intuition about the real world, while calculus is built on axioms that are far more abstract and not substantiated well (not until later in your mathematical education). I expect many intelligent, resourceful people to fail it and I think it says more about the abstractions we're teaching than anything else.

But also, prompting LLMs to give good results is nowhere near as complex as calculus.


Who cares? People know what they want and need and AI is increasingly able to take it from there.

> People know what they want and need

If they truly did, there wouldn't be a huge amount of humans whose role is basically "Take what users/executives say they want, and figure out what they REALLY want, then write that down for others".

Maybe I've worked for too many startups, and only consulted for larger companies, but everywhere in businesses I see so many problems that are basically "Others misunderstood what that person meant" and/or "Someone thought they wanted X, they actually wanted Y".


> People know what they want and need

The multi-decade existence of roles like "business analysts" and "product owners" (and sometimes "customer success") is pretty strong evidence that this is not the case.


What they want? Sometimes. What they need? Almost never.

That’s why you can’t generalise opinions on here.

Most people on here don’t belong to that group of people. So ofc they can find a way to create value out of a thing that requires some tinkering and playing with.

The question is whether the techniques can evolve into technologies that produce stuff with minimal effort, whilst only knowing the bare minimum. I'm not convinced personally - it's a pipe dream that overlooks the innate skill necessary to produce stuff.


> Many men (Not all, but many) are there simply because they want to get laid. They're not looking for a relationship, they're looking for a hook-up, and they're not honest about their intentions.

In fairness, this is not at all exclusive to online dating.


In fairness, this is not at all exclusive to men.

My experience with OKCupid was that women must lie to get laid, moreso than men. A man can state "just want sex" on his profile and it is socially neutral. A woman who posts such a thing has social consequences.


Or starting a job; wanting to advance in the office; becoming an entrepreneur; wanting to go into politics; wanting to go into the clergy; wanting to become president; wanting to visit islands; wanting to run casinos; wanting to run beauty pageants...

Hrm...


I was really confused how this could be possible for such a seemingly simple site but it looks like it's storing + writing many new commits every time there's a new review, or new financial data, or a new show, etc.

Someone might want to tell the author to ask Claude what a database is typically used for...


JSON in git for reference data actually isn't terrible. Having it with the code isn't great, and the repo is massively bloated in other ways, but for change-tracking a source of truth, it's not bad, except that maybe it should be canonicalized.
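On the canonicalization point: the usual trick is a fixed serialization (sorted keys, fixed separators, trailing newline) so that re-serializing unchanged data never produces a spurious diff. A sketch for plain JSON data:

```python
import json

def canonical_dumps(obj) -> str:
    # One byte-stable rendering per value: sorted keys, no incidental
    # whitespace, and a trailing newline for clean git diffs.
    return json.dumps(obj, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False) + "\n"

a = canonical_dumps({"rating": 4.5, "show": "Example"})
b = canonical_dumps({"show": "Example", "rating": 4.5})
print(a == b)  # → True: key order can no longer create diff noise
```

With this, a commit only appears when a value actually changes, not when the serializer happens to order keys differently.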

It's not a terrible storage mechanism, but 36,625 workflow runs taking between ~1 and 12 minutes each seems like a terrible use of runner resources. Even at many orgs, actions constantly running for very little benefit has been a challenge. Whether it's wasted dev time or wasted CPU, to say nothing of the horrible security environment that globally-triggerable arbitrary PR actions introduce, there's something wrong with Actions as a product.

What is git if not a database for source code?

Meh, then filesystems are databases for bytes. Airplanes are buses for flying.

I could make that argument, but I wouldn’t believe it.


Both of those statements are true.

It is pretty damn fast though.

Molyneux is obviously a well-known gamedev figure, but he's always been much more on the design side than programming side, as opposed to someone like Carmack or even J Blow. I wouldn't take his opinions on minutiae like coroutines as authoritative.

> Beyond the initial 2-3 minutes of novelty, what do AI generated videos really have to offer when there is no shortage of people making professional, high quality content on competing platforms?

I don't know where they got September from; Sora launched in Feb 2024[0] which was a bit before people had become tired of awful AI-generated content. There was real belief that people would be willing to spend all day scrolling a social network with infinite AI-generated content. See the similar hype with Suno AI, which started a whole "musicians are obsolete" movement before becoming mostly irrelevant.

I think Sora 2 produced quite good videos, at least of a certain type. It was very good at producing convincing low-resolution cellphone footage. Unfortunately you had to have a very creative mind to get anything interesting out of it, as the copyright and content restrictions were a big "no fun allowed" clause, which contributed to its demise. Everything on the main Sora page was the same "cute animals doing something wholesome and unexpected" video.

My "favorite" part was how the post-generation checks would self-report. e.g. it was impossible to make a video of an angry chef with a British accent, because Sora would always overfit it to Gordon Ramsay and flag its own generated video after it was created!

[0] https://news.ycombinator.com/item?id=39386156 - only one mention of "AI slop" in the entire thread, though partial credit goes to "movieslop".


To nitpick a tiny bit, from Wikipedia[0]:

> In February 2024, OpenAI previewed examples of its output to the public,[1] with the first generation of Sora released publicly for ChatGPT Plus and ChatGPT Pro users in the US and Canada in December 2024[2][3] and the second generation, Sora 2, was released to select users in the US and Canada at the end of September 2025.

[0] https://en.wikipedia.org/wiki/Sora_(text-to-video_model)


AFAIK both of these use cases had many millions of dollars dumped into them during the Blockchain hype, and neither resulted in anything. It might not be an exact match for (1), but there was famously the ASX blockchain project[0], which turned out to be a total failure. For (2), IBM made "Farmer Connect"[1], now almost entirely scrubbed from their website, which promised to do supply chain logistics on a blockchain.

[0] https://www.reuters.com/markets/australian-stock-exchanges-b...

[1] https://mediacenter.ibm.com/media/Farmer+Connect+%2B+IBM/1_8...


> ASX blockchain project[0] which turned out to be a total failure.

FWIW if you know anything about the ASX, you'll know that the failure was a result of the people running the ASX and not necessarily the tech behind it.


> why are the examples given of futuristic capabilities always so visionless - it's always booking a flight or scheduling a meeting.

This AI wave is filled with "ideas guys/gals" who thought they had an amazing idea, and that if only they knew how to program they could build a best-selling, billion-dollar product, now being confronted with the reality that their ideas are really uninteresting as well.

They're still happy to write blog posts about how their bleeding-edge Claw setup sends them a push notification whenever someone comments on one of their LinkedIn posts, though.


I have "new genius" ideas very often. After doing a quick search, I discover that any idea worth implementing is either already implemented, or its seemingly low barrier to entry clashes with some legal obstacles.


I have the opposite problem. I have a genius idea, and I start to research it.

I find a company that actually built a solid product, dangit this is really good. They appear to have executed well, but they failed, or went nowhere, heck the app is still out there. Maybe they are even chugging along but its a smaller business even with a better product than I would have been able to build. Had I been a founder of the product, I would be questioning staying.

Then I also find that sometimes I was doing it all wrong and the world has moved past my notions of products. I think there's a market opportunity because I don't realize the rest of the world is already happily using a $15 Bluetooth plant hygrometer that can also keep track of the medicine or food in your cooler; my notion of the value of something is skewed by Western costs.


Interestingly, that sort of research is actually what I've used Claude/ChatGPT deep research and openclaw for. If I have an idea, I get an agent to go and do some product research for me and see if there is a market, if anyone has tried it, and if anyone is doing it.

It has unironically saved me a lot of time I would have otherwise spent going down rabbit holes.

Of the models, I've found that Claude doesn't gas you up as much as GPT, so for stuff like this, where the answer can be "no, that's not a good idea", I usually use Claude.


Yup. I do have a 4-step process for this (just prompts and some bash scripts that call CC): 1. Breadth-first 2. Compress 3. Per-player deep research 4. Per-player compression. Then I just merge all the markdown files, fit it into 250k tokens, load any model that supports that much, and you can pretty much "talk to market".

The biggest limitation here is data access though. A lot of market data is gated behind registration or anti-bot captchas, so the project that my CC is working on now is a playwright clone that is not easily detectable + can be used with CLI same as playwright itself.


Sure, I do use AI to do research on my ideas as well.


I find ChatGPT so infuriating in the way it always agrees with everything you say. The product is optimised for engagement, so it wants its users to be delighted.

Jim Rohn said one time “just pick a direction and go, you will find out sooner”, if its good or bad.

That advice was adjusted for the '80s. In today's world you can know whether something is worth pursuing in minutes. Tip: in 99.9% of cases it's not, but you will still learn along the way. Maybe you find something new.


Hmm, I often have ideas that I don't see anywhere else, but I'm just in it for the curiosity and learning. I absolutely hate the business side, and usually I do stuff for free just so I don't have to complicate my taxes.

So being an entrepreneur would never work for me.


Story of my life.


The whole obsequious way LLMs amp them up into thinking they're onto something incredible is throwing gas on this dumpster fire.

"What a great idea! This will revolutionize linkedin commenting. Let's implement it together."


Anthropic tried to fix this, I think, because Claude is the only model that will push back, but the result is even funnier.

Ask a question, it will say yes, ask "are you sure?", it will reverse direction full throttle, then ask are you sure again and it'll go back to initial response saying "yeah I confused myself there". You can do this until context window exhaustion and this will never stop.

On the other side of this, Gemini will stand by whatever it generated the first time, no matter how much you push back and no matter how stupid the idea is.


Oh for sure. When I present something to the LLM it always tells me how great it is until I make it "question" it, then it says it was overestimating this or that. Eh. Quite annoying.



You have to remember that LLMs don't have any persistent capacity to hold a "judgement". You ask for something, it provides an attempt at a completion for it. No fact checking, no reasoning, just a plausible-looking output, tuned to hopefully get you to repeat the interaction.

Half the reason the dominant UX is a "Chat" is that's the only way to provide a facsimile of memory or persistence across requests. Append the last few turns, press go. Over time you can develop an eye for the model's tics/attractor topics.

Remember that they bill by token use, and suddenly, the entire UX/architecture starts making sense.
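That "append the last few turns, press go" loop is small enough to sketch. `fake_model` below is a stand-in for the real (stateless) completion endpoint, not any actual API:

```python
# Sketch of the chat facade: the model call itself is stateless; the only
# "memory" is the transcript we re-send, and re-pay for, on every turn.
def fake_model(messages: list) -> str:
    # Stand-in for a real completion endpoint.
    return f"echo #{len(messages)}"

history: list = []

def chat(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = fake_model(history)  # the whole transcript goes out every time
    history.append({"role": "assistant", "content": reply})
    return reply

chat("hi")
chat("are you sure?")
print(len(history))  # → 4: the context, and the token bill, grow every turn
```

Each "are you sure?" doesn't update any stored judgement; it just produces a fresh completion over a slightly longer transcript, which is exactly why the model can flip-flop indefinitely.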


Yeah it seems like we're still in the "XYZ ... but on a computer!" stage of AI.


Wait til you see my todo app though…


Can it suggest the things I should do? Can it talk me into overcoming what's blocking me from completing tasks at an emotional level? :)



nope.

It won't even help you understand that the 20-second task you've been putting off for 6 months, causing anxiety, will only take 20 seconds (nor will we learn from this).


Or the fact that in the time it took me to read this thread I could have finished that task. Sometimes I really want to punch my brain in its stupid face lol.


No they didn't. You can see all the commits as this was built iteratively[0]. This project started development on Saturday morning and now it's here.

This is pretty common now, people love to rapidly throw together stuff and show it off a few days later. The only thing different about this from your average Show HN sloppa is that it's living under the NVIDIA Github org, though that also has 700+ repositories[1] in it so they don't appear too discerning about what makes it into the official repo.

My best guess is this was an internal hackathon project they wanted to release publicly.

[0] https://github.com/NVIDIA/NemoClaw/commits/main/?after=241ff...

[1] https://github.com/orgs/NVIDIA/repositories?type=all


Cash in on the claw brand recognition by having "claw, but Nvidia".

And, to be fair to them, it works. It sticks. It gets the desired reactions.


it's the new norm that you put together stuff, it works and you show it off.

all the naysayers, "senior" engineers who haven't done any Claude/Codex-assisted coding, just need to either get with the program or retire, as this is just the beginning.

if you can't ship stuff in days then I have some bad news for you.


> it's the new norm that you put together stuff, it works and you show it off.

You're probably right, but it'd be nice if the new norm were you put together stuff quickly using AI-assisted coding, you use it yourself and iterate on the product for a while as you discover things you dislike/features you want/etc, and then you share it with the world.

It seems like everyone wants to skip the second step. Most of the "Show HN" sloppa that gets built in a few days and shared here ends up abandoned immediately after.


There was some kind of public knowledge of this project over a week ago because people were trying to domain squat them and submit it to HN: https://news.ycombinator.com/from?site=nemoclaw.bot


Sorry to be the one to inform you that we edit history in git.

There has been reporting on nemoclaw for the last couple weeks. Are you supposing that journalists were writing about software that hadn't even been designed?


> Sorry to be the one to inform you that we edit history in git.

Who is "we"? Do you work for NVidia?

> There has been reporting on nemoclaw for the last couple weeks.

The earliest reporting I've seen was yesterday. Can you link something from prior to March 14?

edit: I did find some articles from before March 14[0] which say NVidia was "prepping" this. Which is extremely funny, because it means they were hyping up software which hadn't even started being written yet. The AI bubble truly does not stop delivering.

> Are you supposing that journalists were writing about software that hadn't even been designed?

If you think journalists writing about things that will never exist is new, welcome to the real world. There's a whole term for it.[1]

[0] https://fudzilla.com/nvidia-opens-the-gates-with-nemoclaw/

[1] https://en.wikipedia.org/wiki/Vaporware


alright so the git history goes back 4 days.

I learned about nemoclaw 5 days ago here: https://www.youtube.com/watch?v=fL2lMpLjxWA

but it was reported 8 days ago here: https://www.youtube.com/watch?v=345GsxnrHHg

I am not anyone special. I don't know anything about nvidia. I just know that the "4 day history" you think matters is not a reasonable belief, given that random youtubers have been reporting on it.

and by "we" i mean git users. people who used git for its usefulness before github existed, and understand the value of a clean history over an accurate history.
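To make that concrete: the dates a commit carries are client-supplied metadata, settable to anything at commit time. A throwaway-repo sketch (names and dates invented):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
echo hello > file.txt
git add file.txt

# Both timestamps on a commit are just environment variables at commit
# time; nothing ties them to when the work actually happened.
GIT_AUTHOR_DATE="2024-03-16 09:00:00 -0700" \
GIT_COMMITTER_DATE="2024-03-16 09:00:00 -0700" \
git -c user.name=dev -c user.email=dev@example.com \
    commit -q -m "feat: initial commit"

git log -1 --format=%ad --date=short  # prints 2024-03-16
```

And tools like `git rebase` or `git filter-repo` can rewrite an entire existing history the same way, which is what makes a clean-looking public history possible in the first place.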


There's nothing clean about the history. You think commits like [0], with the commit message "improve", count as "clean"? What do you think the motivation for the author would be to modify git history to make it appear that this was written over a weekend, including separating each feature/commit by a few hours, which corresponds to a reasonable amount of time that it may have taken to write that feature? Including a break on Mar 15 at 1:18 AM PDT before continuing to commit at Mar 15 at 12:43 PM PDT. Hey, isn't there a normal human behaviour that occurs around this time every day which takes 6-10 hours?

I'm fully aware you can rewrite git history to whatever you want, but this is an Occam's razor situation here. You'd only think this wasn't a weekend project if you desperately wanted to believe that this was some major initiative for some reason.

[0] https://github.com/NVIDIA/NemoClaw/commit/b9382d27d13b160dcf...


Just let go of the notion that a 4 day github history necessarily means the project is only 4 days old. It's a ridiculous assumption to base an argument off of. It's extremely normal to have work in one, perhaps internal, repo which you then blast over to a public repo in one (or a few) big commits. There is zero reason for them to let you see their internal progress.


> It's extremely normal to have work in one, perhaps internal, repo which you then blast over to a public repo in one (or a few) big commits.

Did you even read the commit history? That is not what is happening here.

This is turning into a "don't believe your lying eyes" situation. Why are you people so desperate to pretend this wasn't written in a weekend?

> There is zero reason for them to let you see their internal progress.

Again, I ask you -- what is the reason for them to edit commit history to show incremental progress as if it were written in a weekend, when it actually was not?


I don't have to know their reasoning in order to know public github history is not necessarily an accurate record of all changes.


Okay, so there's overwhelming evidence that their public github history is accurate and Nemoclaw was written in a weekend, and the only reason to think it's not accurate is that... it's technically possible to edit git history, and also there's no reasonable explanation for why they would have edited git history the way they did.

So... yeah, draw your own conclusion I guess, whatever.


> there's overwhelming evidence that their public github history is accurate and Nemoclaw was written in a weekend

Aside from commits on github, which we've already established mean absolutely nothing, what is the overwhelming evidence?


Lmfao. This is how I know you have never worked at a big company before. I promise you every big company has processes around open sourcing things. It's not something that you just whip up and release over a weekend. Just the legal approval would have taken months.

I have buddies at Nvidia. Their primary platform is not GitHub. Sorry you're so naive. Almost certainly this was built in house for at least a month or two prior. Then private repo. Approvals. Then public

Not to mention the fact that Jensen literally announced it in their biggest yearly launch conference. No you're totally right. He mandated someone build it over the weekend while drafting up a full presentation and launch announcement about it

That's more plausible than the very normal practice of developing internally, scrubbing commits of any accidental whoopsies, vetting it and then putting it out publicly

"Overwhelming evidence" = git history that is completely fungible. Once you're done here I have a lobster claw to sell you


> Again, I ask you -- what is the reason for them to edit commit history to show incremental progress as if it were written in a weekend, when it actually was not?

Answer this question or we're done here, thanks.

> Almost certainly this was built in house for at least a month or two prior. Then private repo. Approvals. Then public.

Source, other than you making it up?

> That's more plausible than the very normal practice of developing internally, scrubbing commits of any accidental whoopsies, vetting it and then putting it out publicly

Could you point to a specific commit you believe is a representation of an internal data transfer from a separate source control system which is not representative of work achievable within the time period represented by the differential between the commit time and the time of the prior commit?


You cannot really be this naive but i'll play along:

> what is the reason for them to edit commit history to show incremental progress as if it were written in a weekend, when it actually was not?

Like i said. You are letting on that you have never actually worked on an internal project that is going to go open source. There are a million and one reasons. Here are some completely normal and plausible ones. It was worked on over weeks internally, commits referenced other internal NVIDIA software/libraries they used. It name dropped projects and code names. Maybe it was just an extremely long chain of messy commits that is improper to have on a potentially big open source repo. So here's what happens (since you clearly are unaware of how people operate in this world), you "unstage" everything and write canonical commits free of all the garbage. You squash, you merge, you set up standards, you leave a clean commit history. All of it very important for open source

> Source, other than you making it up?

Ah yes, let me just go ping the people who worked on it. Lol. Source is my decade-long experience working on similar projects, where I literally did this scrubbing of commits. Your circuitous argument, "It was done in a weekend because the commits say so", is really quite the hill to die on.

> Could you point to a specific commit you believe is a representation of an internal data transfer

If there was any indication left over of a "transfer", it wouldn't have served its purpose, would it? But if you really are looking for something, how about the fact that there's only one human contributor on the first few commits. Very odd; you would think a massive open sourcing of a project like this would probably involve a team, right? Or do you believe AI tools have gotten that good that one engineer is just driving with Claude and open sourcing full launches?

Here, how about we just do some critical thinking. Nvidia setup a "Set up NemoClaw" booth at their GTC that was happening just a few days ago. Jensen had a full presentation for it and it was a big highlight.

Do you really think a company as big as Nvidia is hinging the release of a big announcement on the hope that ONE engineer is going to START working on it a few days before the announcement and ACTUALLY get it done to a point where they can talk about it on stage?

Please come on, no one can be this dense. You have to be trolling. Try another argument than "The commits say so". Just apply a basic level of understanding of how software is built and released


I asked you specific questions and you failed to answer any of them; I think that says everything. Thanks.


Nice job proving you can't read either. I answered everything. Gave you some critical thinking homework and you didn't do it. Great job

"It's true because commit history says so" - mjr00 2026. Hall of fame comment really

Try answering my questions next:

1. Do you really believe a company like Nvidia would announce a project in their yearly conference when that project was done the weekend before?

2. Do you really believe ONE engineer wrote the entire project in one weekend with Claude

3. Do you really believe companies like Nvidia don't have internal private GitHub/GitLab repos where they pre-build projects like this?

Thanks. I'll wait. Sorry these won't have simple answers like "The commit history says so"


Nothing more to discuss here, the commit history (and your lack of coherent responses beyond hypothetical "it's technically possible it COULD have happened this way") speak for themselves. Thanks for trying though.

edit: Wait, you don't "have buddies at NVidia" -- you literally work at NVidia. Weird that you tried to hide this information? No wonder you're so desperate to pretend this project is more than it actually is though, it must be embarrassing for you that your company didn't scrub git history properly before making this public!


Ding ding ding. See it would have been too easy to just say "i know for a fact". I just wanted to walk you to the conclusion. Congrats.

Now you are more enlightened about how things work. Of course, Nvidia is a big company; not everyone who works at Nvidia knows everything about every team. That's by design. Welcome to working at a big company! I do have buddies who worked on this project internally, and yes, it was done over many weeks and months.

Thanks for playing. I do know for a fact it's definitely not what you think it is, but I had a chuckle watching you twist yourself in a knot trying to convince me you knew better. Why would I disclose information about myself? Odd thing to expect from someone. But I had you riled up enough to go looking through my comment history, then my GitHub, then my website, huh! Must have really struck a nerve. Don't worry, I won't do the same to you. I don't care enough about random people yapping on the internet.


edit: Removing, not productive to engage with this. pre-emptive apology to dang/tom if this gets cleaned up, most of this thread is not productive and I should not have continued responding much earlier.


Lol, where did I make it sound like any of that? I just saw you confidently make the wrong claim and tried to Socratic-method you into understanding. You are sadly too far gone to understand.

Good ad hominem. I'd be riled up too if I was publicly dressed down and proven wrong. So now you know: commit history doesn't mean jack sh!t. Sorry I had to ruin Christmas for you.

> you guys wanted to make this look like it was written in a weekend though

Imagine thinking this was done to convince anyone about the TIME it took to write this project. Here's a very simple explanation: those commits reflect a PORT over to public GitHub, done to coincide with the launch. The author chose to do it in some number of commits instead of one "feat: Full implementation" commit. The port happened before the announcement, not the writing of the code.

Now I won't propose hypotheses, because clearly the Socratic method didn't work on you. So now sit down and learn how things work.

And next time, try not to be so confidently wrong on the internet. I had a very good laugh watching you twist and turn. You must have been typing furiously, thinking you really were in the right :)


edit: Removing, not productive to engage with this.


This is quite funny

> Why are you people so desperate to pretend this wasn't written in a weekend?

Because it wasn't? And your only "proof" of it was commit history. "You're telling me not to believe my lying eyes." Hilarious. You are being told again and again that it means nothing. It's not a blockchain. You are allowed to write commits as you see fit without making them a system of record of time spent.
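For anyone still unconvinced, here's a minimal sketch (hypothetical paths; assumes git is installed) showing that commit timestamps are just author-supplied metadata that can be set to any value at commit time:

```shell
# Sketch: git commit dates are arbitrary metadata, not a trusted
# timeline. All paths here are hypothetical.
set -e
rm -rf /tmp/backdated && mkdir -p /tmp/backdated && cd /tmp/backdated
git init -q
git config user.email dev@example.com
git config user.name Dev
echo hello > file.txt
git add file.txt
# Set both the author and committer dates to an arbitrary past date.
GIT_AUTHOR_DATE="2020-01-01T00:00:00" \
GIT_COMMITTER_DATE="2020-01-01T00:00:00" \
git commit -qm "looks like it was written in 2020"
git log -1 --format=%ad --date=short   # prints 2020-01-01
```

Nothing in the repo records that this commit was actually made today; the dates shown in `git log` (and on GitHub) are whatever the committer chose.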

> People with above room temp IQ can figure out what's going on here

Yes, we can. We have one person convinced they can look at commit history and say for sure that is exactly when the code was written. No developer agrees with you, as you have been told a couple of times by other people above.

It's quite obvious you work at some small shop or are a freelancer and have never worked in any kind of big environment. No, you cannot just open-source a "weekend" project at a big company. Wherever you are, you may be allowed to vibe-code and ship something under your company's GitHub willy-nilly.

It's just not the reality at any serious place. No one is trying to deceive you; you have deceived yourself. Thanks again for playing.

You can have the last word you are so desperate for


> Here are some completely normal and plausible [reasons] a repo might be published with a fresh history. It was worked on for weeks internally; commits referenced other internal NVIDIA software/libraries they used; they name-dropped projects and code names. Maybe it was just an extremely long chain of messy commits that would be improper to have on a potentially big open-source repo.

... it referenced internal servers and they want to scrub that for security reasons

... it might have had secrets embedded at some point because it was a quick and dirty proof-of-concept

... it could have had swear words in the code

... it had enormous binaries checked in at one point and they don't want the repo to be huge

... they don't want you to know the names of everyone that worked on it

... it's forked off other internal work that isn't public yet

There are so many reasons that the easiest thing to do is just snapshot it and have minimal public git history. Some places I've worked publish only one commit per release. Did Nvidia do this? Well, they didn't collapse it down to a single commit, but we have no evidence that the commits we see reflect the actual internal development timeline.
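As one concrete illustration of the snapshot approach (a sketch with hypothetical paths, not NVIDIA's actual process), exporting only the working tree into a fresh repo wipes the internal timeline entirely:

```shell
# Sketch: copy the working tree into a brand-new repo, so the public
# history starts at the release commit. Paths are hypothetical.
set -e
rm -rf /tmp/internal /tmp/public
mkdir -p /tmp/internal && cd /tmp/internal
git init -q
git config user.email dev@example.com
git config user.name Dev
echo "months of work" > app.py
git add . && git commit -qm "wip 1"
echo "more work, secrets scrubbed later" >> app.py
git add . && git commit -qm "wip 2"    # messy internal history

# Public release: copy the files, not the history.
mkdir -p /tmp/public && cd /tmp/public
git init -q
git config user.email dev@example.com
git config user.name Dev
cp /tmp/internal/app.py .
git add . && git commit -qm "Initial public release"
git rev-list --count HEAD              # prints 1
```

The public repo's first commit date is the export date, regardless of how long the code was developed internally; splitting that export into a handful of commits instead of one changes nothing about what the history proves.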

