Hacker News | earino's comments

Claude going down is how I know I can start messaging with my west coast friends.

Hello Boris! How do I increase the 1 hour prompt cache window for the main agent? I would love to be able to set that to, say, 4 hours. That gives me enough time to work on something, go teach a class, grab a snack, and come back and pick up where I left off.

Another CC team member confirmed it's 5 minutes now, not 1 hour.

See the links in https://news.ycombinator.com/item?id=47747209


Two of the authors are engaging on Bluesky regarding the "clickbaityness" of the paper:

https://bsky.app/profile/gaborbekes.bsky.social/post/3md4rga...

(Note, I receive a thanks in the paper.)


author here. indeed, a more precise title could be

> given everything we know about OSS incentives from prior studies and how easy it is to load an OSS library with your AI agent, the demand-reducing effect of vibe coding is larger than the productivity-increasing effect

but that would be a mouthful


How about "Vibe coding might hurt open source"?


What a joy to bump into you in the comments section! I definitely preferred starts at 5 minutes past the hour, but my calendar was pretty awful.

What was really awful, however, was when your calendar was a random mishmash of starts at :00, :05, :30 and :35 :-)


Indeed :) Ah yes, I agree! That was awful for everyone (and to analyze!).


Built this over the weekend with my wife using Claude Code. The GitHub repo for the site is here: https://github.com/earino/nonprofit-ai and the GitHub repo for the prompt testing harness is here: https://github.com/earino/prompt-harness

Was a really fun weekend project!


This looks very interesting. I wish it came with some guides for using it with a local LLM. I have an MBP with 128GB of RAM and I have been trying to find a local open source coding agent. This feels like it could be the thing.


I'll add docs! TL;DR: in the onboarding (or in the Add Model menu section), you can select adding a custom LLM. It'll ask you for your API base URL, which is whatever localhost+port setup you're using, and then an env var to use as an API credential. Just put in any non-empty credential, since local models typically don't actually use authentication. Then you're good to go.
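To make that concrete, here's a rough sketch of wiring up a local OpenAI-compatible server. The port and env var name are just examples, not Octofriend defaults:

```shell
# Local servers ignore the key's value; it just needs to be non-empty.
export LOCAL_LLM_API_KEY="anything-non-empty"

# Sanity-check that the local server is answering before pointing the agent
# at it; the `|| true` keeps this sketch from aborting if nothing is
# listening on that port yet.
curl -s http://localhost:1234/v1/models || true
```

Then use `http://localhost:1234/v1` as the API base URL and `LOCAL_LLM_API_KEY` as the credential env var.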

IMO gpt-oss-120b is actually a very competent local coding agent, and it should fit on your 128GB MacBook Pro. I've used it while testing Octo, actually; it's quite good for a local model. The best open model in my opinion is zai-org/GLM-4.5, but it probably won't fit on your machine (although it works well with APIs; my tip is to avoid OpenRouter though, since quite a few of the round-robin hosts have broken implementations).


Ok wonderful! Thanks.

I'm trying to set it up right now with LM Studio and qwen3-coder-30b. Hopefully it's going to work. Happy to take any pointers on anything y'all have tried that seemed particularly promising.


For sure! We also have a Discord server if you need any help: https://discord.gg/syntheticlab


Follow-up question: can the diff-apply and fix-json models be run locally as well with octofriend, or do they have to hit your servers? Thanks!


They're just Llama 3.1 8B Instruct LoRAs, so yes — you can run them locally! Probably the easiest way is to merge the weights, since AFAIK ollama and llama.cpp don't support LoRAs directly, although llama.cpp has utilities for doing the merge. In the settings menu or the config file you should be able to set up any API base URL + env var credential for the autofix models, just like any other model, which allows you to point to your local server :)

The weights are here:

https://huggingface.co/syntheticlab/diff-apply

https://huggingface.co/syntheticlab/fix-json

And if you're curious about how they're trained (or want to train your own), the entire training pipeline is in the Octofriend repo.
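For anyone following along, the merge step with llama.cpp might look roughly like this. This is a sketch that assumes you already have GGUF conversions of the base model and the adapter; the file names are placeholders:

```shell
# Merge a LoRA adapter into its base model with llama.cpp's export tool.
# Requires a llama.cpp build that includes llama-export-lora; all file
# names below are hypothetical placeholders.
llama-export-lora \
  -m llama-3.1-8b-instruct.gguf \
  --lora diff-apply-lora.gguf \
  -o diff-apply-merged.gguf
```

The merged GGUF can then be served like any other model (e.g. with llama-server) and pointed at from the autofix-model settings.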


I think this might be your best bet right now. GLM-4.5-Air is probably next best. I'd run them at 8-bit using MLX.
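For reference, quantizing and serving a model at 8-bit with MLX is roughly the following, using the mlx-lm package. The model ID here is illustrative, and the download is large:

```shell
pip install mlx-lm

# Convert a Hugging Face model to an 8-bit MLX quantization
# (writes to ./mlx_model by default):
mlx_lm.convert --hf-path Qwen/Qwen3-Coder-30B-A3B-Instruct -q --q-bits 8

# Generate from the quantized model to verify it runs:
mlx_lm.generate --model ./mlx_model --prompt "Write a binary search in Python."
```
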


As a proud graduate of MTSU's CS department, I'm so happy to see my school listed here. Even back in the 1990s when I attended there, everyone knew the recording industry program was something special. For a small to medium town, Murfreesboro had an incredible music scene.

I loved getting my bachelor's degree there. Best 9 years of my life :-D


Proud RIM (Recording Industry) major here, class of 2001; cool to see MTSU mentioned on HN. It was a super fun degree, and it looks to be even more fun now that they offer a complete songwriting concentration to study in. It was either Music Business or Audio Engineering when I went.


MIS graduate here. Crazy to think that stuff I listened to while working in the computer lab is now in the school's archive.


This was a really enjoyable couple of minutes. I was able to find some of the more ancient cities I've visited (Ceuta, for example), and see how they're represented.

Now that we've seen how quickly digitized information becomes unusable, it's interesting to reflect that the actual physical Mappa Mundi will far outlive (barring fires/disasters) any of its digital reflections.


I now live in a 4 story building in Spain and I too miss having BBQs.


What a wonderful article.

I've lived in single family homes and apartments in the USA, Switzerland, and Spain. I never understood why the apartment buildings in the USA felt so different, and now it makes sense. Even in my 15-story apartment building in Zurich, there was a single stair. It made the apartment layouts much better, made it easier to design apartments with a lot more light, and enabled many of the other things this article talks about.

Now I live in Spain in a building from the 1960s: a four-story apartment building, retrofitted in the 1980s with a tiny elevator. It's a really efficient design, though my wife and I have discussed that from an accessibility standpoint, it leaves a lot to be desired.

Now I understand the constraints of apartment designers in the USA a bit better!

