Hello Boris! How do I increase the 1 hour prompt cache window for the main agent? I would love to be able to set that to, say, 4 hours. That gives me enough time to work on something, go teach a class, grab a snack, and come back and pick up where I left off.
author here. Indeed, a more precise title could be
> given everything we know about OSS incentives from prior studies and how easy it is to load an OSS library with your AI agent, the demand-reducing effect of vibe coding is larger than the productivity-increasing effect
This looks very interesting. I wish it came with some guides for using it with a local LLM. I have an MBP with 128GB of RAM and I have been trying to find a local open source coding agent. This feels like it could be the thing.
I'll add docs! TL;DR: in the onboarding (or in the Add Model menu section), you can select adding a custom LLM. It'll ask you for your API base URL, which is whatever localhost+port setup you're using, and then an env var to use as an API credential. Just put in any non-empty credential, since local models typically don't actually use authentication. Then you're good to go.
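To make that concrete, here's a minimal sketch for an LM Studio-style local server (port 1234 is LM Studio's default; the env var name is just an example, not an Octo default):

```shell
# Hypothetical example: wiring a custom-model entry to a local
# OpenAI-compatible server. The URL and env var name below are
# assumptions -- substitute whatever your local setup uses.

# Any non-empty value works; local servers ignore the credential.
export LOCAL_LLM_API_KEY="not-used"

# In the Add Model dialog you'd then enter:
#   API base URL:       http://localhost:1234/v1
#   Credential env var: LOCAL_LLM_API_KEY
```

The only real requirement is that the server speaks the OpenAI-compatible chat completions API, which LM Studio, ollama, and llama.cpp's server mode all do.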
IMO gpt-oss-120b is actually a very competent local coding agent — and it should fit on your 128GB MacBook Pro. I've used it while testing Octo, actually; it's quite good for a local model. The best open model in my opinion is zai-org/GLM-4.5, but it probably won't fit on your machine (although it works well with APIs — my tip is to avoid OpenRouter, though, since quite a few of the round-robin hosts have broken implementations).
I'm trying to set it up right now with LM Studio and qwen3-coder-30b. Hopefully it's going to work. Happy to take any pointers on anything y'all have tried that seemed particularly promising.
They're just Llama 3.1 8b Instruct LoRAs, so yes — you can run them locally! Probably the easiest way is to merge the weights, since AFAIK ollama and llama.cpp don't support LoRAs directly — although llama.cpp has utilities for doing the merge. In the settings menu or the config file you should be able to set up any API base URL + env var credential for the autofix models, just like any other model, which allows you to point to your local server :)
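Assuming you do the merge with llama.cpp, the sketch looks roughly like this (tool name from llama.cpp's export-lora utility; all file names are placeholders — check the flags against your build):

```shell
# Sketch: bake a LoRA adapter into its Llama 3.1 8B Instruct base
# using llama.cpp's export-lora tool. Both the base model and the
# adapter need to be in GGUF format first (see llama.cpp's
# conversion scripts). File names here are examples only.
./llama-export-lora \
  -m llama-3.1-8b-instruct.gguf \
  --lora autofix-adapter.gguf \
  -o llama-3.1-8b-autofix-merged.gguf

# The merged GGUF then serves like any ordinary local model,
# so you can point the autofix base URL at it as described above.
```

Once merged, there's no LoRA-awareness needed at inference time, which is why this route works even with runtimes that can't load adapters directly.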
As a proud graduate of MTSU's CS department, I'm so happy to see my school listed here. Even back in the 1990s when I attended there, everyone knew the recording industry program was something special. For a small to medium town, Murfreesboro had an incredible music scene.
I loved getting my bachelor's degree there. Best 9 years of my life :-D
Proud RIM (Recording Industry Major) here, class of 2001; cool to see MTSU mentioned on HN.
It was a super fun degree and looks to be even more fun now that they offer a complete songwriting concentration. It was either Music Business or Audio Engineering when I went.
This was a really enjoyable couple of minutes. I was able to find some of the more ancient cities I've visited (Ceuta, for example), and see how they're represented.
Now that we've seen how quickly digitized information becomes unusable, it's interesting to reflect that the actual physical Mappa Mundi will far outlive (barring fires/disasters) any of its digital reflections.
I've lived in single family homes and apartments in the USA, Switzerland, and Spain. I never understood why the apartment buildings in the USA felt so different, and now it makes sense. Even in my 15-story apartment building in Zurich, there was a single stair. It made the apartment layouts much better, made it easier to design apartments with a lot more light, and enabled many of the things this article talks about.
Now I live in Spain in a building from the 1960s. A 4 story apartment building, retrofitted in the 1980s with a tiny elevator. It's a really efficient design, though my wife and I have discussed that from an accessibility standpoint, it leaves a lot to be desired.
Now I understand the constraints of apartment designers in the USA a bit better!