Hacker News | imjonse's comments

> The real answer is something HN doesn't like so I won't advocate it openly, but it involves society paying to take care of people to provide homes, provide medical access, things like this. Neither party is interested in that.

There are enough people on HN who think working social democracy is a great option; not everybody here is a libertarian cryptobro, an Eastern European with decades-long PTSD, or a hardcore conservative.


It probably goes against Vim's tradition, culture, and freedom to choose, but I wish they added even more built-in features (like Helix has) that are currently implemented in competing and sometimes brittle plugins, which have to be put together into (also competing) starter packs, plugin distros, and config files just to get a modern setup out of the box.


I agree in principle that absorbing the best from the ecosystem is good. However, anything pulled into core should have a long lifetime and be considered part of the API. This deserves careful consideration, and plugins work really well until it is clear there is a reason to pull something in.


Not to talk about the other side of the holy war too much, but one of the things I appreciate about GNU ELPA is it's treated as part of the Emacs distribution and needs to follow all the rules of Emacs proper as a result.


There is zero reason not to include a picker like Helix does. I'm gonna guess 90% of everyone running Neovim has a picker.


I believe we are thinking about different time horizons, and your language and comparison to <modern editor> reveal a lot of unsaid assumptions in your reasoning.

I don't think comparison to other editors is a good basis for deciding what should be pulled in. The vi ecosystem was and remains weird to those outside, but in a way that is internally consistent with the usage patterns of its own users over decades.

Also, the percentage of users using feature X is a bad selection criterion for pulling in a plugin-provided feature, unless that number is close to 100% and there is little deviation in how it's configured. There is very little friction in pulling in a plugin as a user.

So what are some good criteria for absorbing plugin functionality?

- extensions that provide an API for entire ecosystems (plenary, lazy.nvim)

- plugins that add missing features or concepts that are too useful and timeless to ignore

- plugins that would benefit themselves or neovim by moving to native code

Honestly, the bar for absorbing plugins should be pretty high. There should be a real benefit that outweighs the added maintenance, coupling, and complexity.

The cost of installing plugins is pretty low, and they are great at keeping the core software simple.



Does ex not count?


This is what happened with the Language Server Protocol.

Prior to 0.9 (if I recall correctly), you had to install a plugin to be able to interface with LSP servers, and in 0.9 they integrated the support into NeoVim itself.


Would be nice to also have such support for DAP, though nvim-dap is doing a good job so far.


I believe neovim started as a fork specifically to implement features like LSP support and package management, and Vim eventually caught up. But I don't believe anything is off the table, or against Vim tradition. Which features do you want to see built in, specifically?


I’m also pretty sure that on an episode of The Standup, one of the Neovim core maintainers, TJ DeVries (Teej), said that it is a good idea to prove new ideas in the form of a plugin rather than submitting pull requests to Neovim itself for ideas that have not yet been tested and proven in the real world. The implication being that Neovim is indeed open to bringing features from plugins into core, if they prove useful for a lot of people.

Unfortunately I don’t remember which episode it was, or even whether it was specifically on The Standup at all, or in some other video that he and ThePrimeagen did outside the show.


This is essentially how the new package manager got done. `mini.deps` was created as basically a proposal for a built-in package manager (beyond also just being its own thing), sat in the wild for a year or two, and then a derived version got imported.


Multi threading, but yeah.

Original HN post here if you’re interested. https://news.ycombinator.com/item?id=7279358


> The author of NeoVim (Thiago de Arruda)

I've always wondered what this legend is doing now


That's why I stopped using it. Didn't want to have "reconfiguring an editor to be an IDE" as a hobby.


As others have said, the fact that they're letting the ecosystem settle before including something out-of-the-box is beneficial in some sense. It's allowed time for experiments (including my own "how would I do UI in Neovim": morph.nvim [1]).

For some, this stage of a project attracts tinkerers and builders, and lets the community shape how things are done in the future. It's not always practical, but it does have a certain appeal.

[1] https://github.com/jrop/morph.nvim


Neovim is actively moving in that direction.


Which is why I just went with Helix and learned their keybindings. I have much more important things to do than figuring out why a plugin stopped working.


Doesn't seem like it, if you can waste time learning all the keybinds just because you switched editors. Also, how does "can't do things since there are no plugins yet" rank higher in importance than "sometimes stops working"?


It took me about 10 min to learn the keybindings. It does take longer to get familiar and efficient with them, but I wasn't a Vim master to begin with. (I can navigate efficiently and am proficient with a few combinations that I use the most, but that's about it.)

> "can't do things since there are no plugins yet"

Depending on what I am doing, I will probably go back to VSCode to get things done. Terminal editors are nice, but VSCode's extension ecosystem and usability are unmatched. I say that as someone who has spent hundreds of hours developing VSCode extensions. For me, "can't do things" is not (necessarily) a reason to set up Neovim plugins. It means I should figure out 1) whether that's something I need to do regularly, and 2) if so, what's the best way to get it done.

(I am very well aware of what you can do with vim/Neovim plugins, just like zsh and tmux etc. Not spending time hand writing my config or setting up my plugins is an intentional choice. I like to start with a commonly used setup, discover pain points and bottlenecks, and then optimize or find some other solutions.)


> 10 min to learn the keybindings. It does take longer to get familiar and efficient

So the "10 min" is a red herring (and there are hundreds of keybinds, so the initial learning wasn't 10 min either)

> like to start with a commonly used setup, discover pain points and bottlenecks, and then optimize or find some other solutions.)

Which you've presumably already done at least twice, with vim and VSCode, so it's just a waste of time to start from scratch yet again instead of configuring for the things you know you need


No-one started as a vim master.

Your arguments here are valid, for a particular kind of person who values a particular kind of workflow.

Some of us would rather use vi than vscode. If you take away the plugin ecosystem, the core value is still there.


Just pin the plugin or don't use it.


Not a choice if you need a specific new feature or a certain fix.

The entire software development world would be much simpler if nobody needed new features, bugs and CVEs didn't exist, and "just pin the version" actually worked.


Neovim's API isn't (yet) fully stable, so updating Neovim could also break a plugin.

There are a lot of ready-made Neovim configs you can copy. I was experimenting recently with lazy.vim, and it took a git clone and a cp command to get up and running.


I love the batteries included in Helix. Just the right amount that I don't need much else.

At this point I just want a decent Helix-Evil-Mode.


But this isn't Vim, so it doesn't go against those?

> 0.13 “The year of Batteries Included”

> 0.12 “The year of Nvim OOTB”


nice to see that.


Define “modern”!

Almost all such complaints are close to "I want to be cool and be seen as a haxor, but all I know is a bit of VSCode and IDEA, make it easier for me, plz".


I think what they did with first-party support for LSP would be an example of this.

However, Neovim explicitly states that they don't want to turn Vim into an IDE. The features the parent is talking about seem to fall into that kind of vertical integration rather than composability.


"The TurboQuant paper (ICLR 2026) contains serious issues in how it describes RaBitQ, including incorrect technical claims and misleading theory/experiment comparisons.

We flagged these issues to the authors before submission. They acknowledged them, but chose not to fix them. The paper was later accepted and widely promoted by Google, reaching tens of millions of views.

We’re speaking up now because once a misleading narrative spreads, it becomes much harder to correct. We’ve written a public comment on openreview (https://openreview.net/forum?id=tO3ASKZlok).

We would greatly appreciate your attention and help in sharing it."

https://x.com/gaoj0017/status/2037532673812443214


I guess I'm trying to understand. I'm hearing this paper has been around for a year -- I would think that many companies would have already implemented and measured its performance in production by now... is that not the case?


Okay, I spent about half an hour reading about this and asking Gemini; I guess my best understanding is this:

The main breakthrough [rotating by an orthogonal matrix so that important outliers are averaged across more dimensions] comes from RaBitQ. It sounds like the RaBitQ team was much more involved, and earlier, and the TurboQuant paper very deliberately tries to avoid crediting and acknowledging RaBitQ.

My understanding is that the efficacy of these methods isn't in dispute; what TurboQuant did was take the method being used in vector databases, adapt it for transformers, and pass it off more as a new invention than an adaptation.


The OpenReview link is not working; it was apparently split.

https://openreview.net/forum?id=tO3ASKZlok


> It’s called parenting.

That clearly is required here, but the scale of the existing and potential harm is such that relying on parenting alone is the equivalent of using paper instead of plastic straws while the world's biggest companies and militaries are burning down the environment.


> I don't envy the kids that found an outlet doing something productive only to have a nanny state eventually rip it away from them.

99% of today's social media usage is the opposite of productive; too bad the laws concentrate on policing internet use, though.


I'm sure parents back in the day thought the same about video games. You're lumping a lot of kids together there, maybe some of them will become journalists... Maybe social media is the only media that will even be relevant in 20 years when their career gets serious.


Games could cause a lot of 'lost' time, but you had a say in games; there's a lot more consuming and almost no producing in social media use. And games did not cause you anxiety and FOMO, nor did they programmatically lure you into spending your time and money on them.


Oh we definitely are in need of more 15s "journalism"


what other assumptions sound more reasonable?


As a European I see what you mean, but that 'we all' in your sentence probably hasn't included people from Latin America, and large parts of Africa or Asia, since long before Trump. The US pulled quite a few less-than-admirable tricks (to use a euphemism) on non-Europeans during the 20th century.


Exactly.


I was a bit worried you were paraphrasing Rob Pike, but no, he actually agrees with that Knuth quote.

I am almost certain that people building bloated software are not willfully misunderstanding this quote; most likely they have never heard of it. Let's not ignore the relevance of this half-century-old advice just because many programmers do not care about efficiency or do not understand how computers work.

Premature optimization is exactly that: the fact that it's premature is what makes it wrong, regardless of whether it's about GOTO statements in the 70s or some modern equivalent where, in the name of craft or fun, people make their apps a lot more complex than they need to be. I wouldn't be surprised if some of the brutally inefficient code you mention got that way because people optimized prematurely for web scale, and their app never ever needed those abstractions and extra components. The advice applies both to hackers doing micro-optimizations and to architecture astronauts dreaming too big, IMHO.


No, I've definitely heard plenty of people use this as some kind of inarguable excuse not to care about performance. Especially if they're writing something in Python that really shouldn't be super slow. "It's fine! Premature optimisation and all that. We'll optimise it later."

And then of course later is too late; you can't optimise most Python.


> On the LLM Architecture Gallery, it’s interesting to see the variations between models, but I think the 30,000ft view of this is that in the last seven years since GPT-2 there have been a lot of improvements to LLM architecture but no fundamental innovations in that area.

After years of showing up in papers and toy models, hybrid architectures like Qwen3.5 contain one such fundamental innovation: linear attention variants that replace the core of the transformer, the self-attention mechanism. In Qwen3.5 in particular, only one in every four layers is a self-attention layer.

MoEs are another fundamental innovation - also from a Google paper.
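The linear-attention idea mentioned above can be sketched in a few lines of NumPy. This is only an illustrative toy (single head, made-up sizes, and a simple positive feature map `phi` chosen for the example), not any model's actual implementation; the point is the reassociation that turns the O(n²) attention product into an O(n) one.

```python
import numpy as np

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1.0):
    """Kernelized linear attention: instead of forming the n-by-n matrix
    (phi(Q) @ phi(K).T) @ V, reassociate as phi(Q) @ (phi(K).T @ V)."""
    Qp, Kp = phi(Q), phi(K)
    kv = Kp.T @ V                # (d, d_v): fixed-size summary of keys/values
    z = Qp @ Kp.sum(axis=0)      # (n,): per-query normalizer
    return (Qp @ kv) / z[:, None]

rng = np.random.default_rng(0)
n, d = 6, 4                      # toy sequence length and head dimension
Q, K, V = rng.normal(size=(3, n, d))
out = linear_attention(Q, K, V)
print(out.shape)                 # (6, 4)
```

Because `phi(K).T @ V` is a fixed-size summary independent of sequence length, it can also be maintained as a running state during decoding, which is part of why these layers are attractive for long contexts.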


Thanks for the note about Qwen3.5. I should keep up with this more. If only it were more relevant to my day to day work with LLMs!

I did consider MoEs but decided (pretty arbitrarily) that I wasn’t going to count them as a truly fundamental change. But I agree, they’re pretty important. There’s also RoPE too, perhaps slightly less of a big deal but still a big difference from the earlier models. And of course lots of brilliant inference tricks like speculative decoding that have helped make big models more usable.


It's more likely the other way around: the .ai domain, with a fairly generic and maybe future-proof name, needed a quick vibecoded project so it wouldn't be empty at launch.

