However, the cause is not really Java as such, but massive frameworks and tooling (e.g. Maven). Maybe AI will bring some fresh air to Java, because old truths might no longer be valid. But it will be hard to get rid of old habits, because they are baked into the AI training data.
"AI guys" use Claude CLI that renders to a terminal via freakin' react.
Come on, Java starts up fast enough, and the memory usage can be set (better throughput vs. less memory is a classic tradeoff; Go just defaults to lower throughput).
Really, without some fat framework doing all kinds of initialization stuff, java code starts up practically instantly.
Java takes flags for min/initial/max memory that, as a strict law of nature, are always wrong the first time you try. And it holds onto unused memory unless you pass another flag. Idk exactly why there's no reasonable default for that, but probably cause it's in a VM. No other language has this problem.
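For reference, a minimal sketch of the flags in question. These are real HotSpot options, but the sizes are arbitrary, and whether the collector actually gives memory back depends on which GC you're running:

```java
// Print what the JVM actually reserved. Run with e.g.:
//   java -Xms64m -Xmx512m -XX:MaxHeapFreeRatio=30 HeapInfo
// -Xms / -Xmx set the initial and max heap; MaxHeapFreeRatio nudges the
// GC to shrink the heap and return unused memory to the OS.
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("max heap (MB):   " + rt.maxMemory() / (1024 * 1024));
        System.out.println("total heap (MB): " + rt.totalMemory() / (1024 * 1024));
        System.out.println("free heap (MB):  " + rt.freeMemory() / (1024 * 1024));
    }
}
```

Run it twice with different -Xms/-Xmx and you'll see the "always wrong the first time" problem firsthand.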
Java's "compiled" code isn't a native binary like Go's; it really does run in a VM. I honestly don't know if that's why they handle memory differently, though.
> Come on, java starts up fast enough and the memory usage can be set
Yeah, it can be adjusted; on the other hand, you have a language that just works.
Starts instantly, and memory usage is bounded by actual usage, not by a guesstimate of how much the JVM will need.
> Really, without some fat framework doing all kinds of initialization stuff, java code starts up practically instantly
Have you seen many Java projects that didn't use some fat framework?
Good for you if you have, but 100% of the Java projects I've had the displeasure of touching were a huge mess of deprecated frameworks on ancient Java versions.
Why does every AI skeptic assume that everyone is lying to them? There are millions of developers using AI to be more productive, and you just keep plugging your ears and screaming, claiming it's only dumb managers; meanwhile Linus Torvalds is vibe coding stuff.
Who said anything about that? The argument was "if you're not using AI RIGHT NOW, you will fall behind forever"
This is the nonsense management and CTOs are pushing. Use it now if you want, I do. Wait for things to cool down if you want. You'll be fine either way. The comical view that there will be a "winner takes all" subset of developers who somehow figured out secret AI techniques that make them 10Kx more productive, while every other developer is SOL, is laughable.
I agree, and it's strange that this failure mode continually gets lumped onto AI. The whole point of longer-term software engineering was to make it so that the context inside a particular person's head should not affect a new employee's ability to contribute to a codebase. Turns out everything we do to ensure that for a human also works for an agent.
As far as I can tell, the only reason AI agents currently fail is that they don't have access to the undocumented context inside people's heads, and if we can just properly put that in text somewhere, there will be no problems.
The failure mode is getting lumped into AI because AI is a lot more likely to fail.
We've done this with Neural Networks v1, Expert Systems, Neural Networks v2, SVMs, etc., etc. It's only a matter of time before we figure it out with deep neural networks. Clearly getting closer with every cycle, but there's no telling how many cycles we have left, because there is no sound theoretical framework.
At the same time, we have spent a large part of the existence of civilisation figuring out organisational structures and methods to create resilient processes using unreliable humans, and it turns out a lot of those methods also work on agents. People just often seem miffed that they have to apply them on computers too.
You can do that with mocks if it's important that something is only called once, or, if there's some unintended side effect of calling it twice, tests would catch the bug.
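A hand-rolled sketch of that idea, using a counting stub in place of a mocking library (all names here are hypothetical, not from the thread's actual code):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CallOnceCheck {
    interface Backend { int load(); }

    // Hypothetical code under test: should hit the backend exactly once.
    static int cachedLoad(Backend b) {
        return b.load();
    }

    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        Backend stub = () -> { calls.incrementAndGet(); return 42; };

        cachedLoad(stub);

        // The "verify called exactly once" check a mocking framework provides:
        if (calls.get() != 1)
            throw new AssertionError("expected 1 call, got " + calls.get());
        System.out.println("ok: backend called exactly once");
    }
}
```

This only works when "called once" is an observable contract of the dependency; it says nothing about redundant work done internally on already-returned data.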
The first filter is redundant in this example, but duplicate-code checkers look for exactly matching lines.
I am unaware of any linter or static analyzer that would flag this.
What's more, the unit test for printEvens (there is one) passes because the function works properly... and the unit test that calls the calling function passes because it works properly too.
Alternatively, write the failing test for this code.
Nothing in there is wrong. There is no test that would fail, short of going through the hassle of creating a new type that does some sort of introspection of its call stack to verify which function it's being called from.
Likewise, identify if a linter or other static analysis tool could catch this issue.
Yes, this is a contrived example and it likely isn't idiomatic C++ (C++ isn't my 'native' language). The actual code in Java was more complex and had a lot more going on in other parts of the files. However, it should serve to show that there isn't a test for printEvens or someCall that would fail because it was filtered twice. Additionally, it should show that a linter or other static analysis wouldn't catch the problem (I would be rather impressed with one that did).
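For concreteness, here's the shape of the example sketched in Java (the names and types are my guesses; the actual code isn't shown in the thread):

```java
import java.util.List;
import java.util.stream.Collectors;

public class DoubleFilter {
    // someCall already returns only even numbers.
    static List<Integer> someCall(List<Integer> input) {
        return input.stream().filter(n -> n % 2 == 0).collect(Collectors.toList());
    }

    // printEvens filters again "just in case": the output is identical,
    // the second pass is pure wasted work, and no unit test can see it.
    static void printEvens(List<Integer> nums) {
        nums.stream().filter(n -> n % 2 == 0).forEach(System.out::println);
    }

    public static void main(String[] args) {
        printEvens(someCall(List.of(1, 2, 3, 4)));  // prints 2 and 4
    }
}
```

Both functions are individually correct, so any test of either passes; the redundancy only exists in the composition, which is exactly why neither tests nor line-based duplicate checkers flag it.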
> You could write a test that makes sure the output of someCall is passed directly to printeven without being modified.
But why would anyone ever do that? There's nothing incorrect about the code; it's just less efficient than it should be. And there's no reason to limit printEvens to accepting only the output of someCall.
A redundant filter() isn't observable (except in execution time).
You could pick it up if you were to explicitly track whether it's being called redundantly, but that would be very hard, and by the time you'd thought of doing that, you'd certainly have already checked the code for it manually.