Yeah. Great callout. You basically need signal all the way to production. Did it get nuked during a code review? Is it still running 2 mo later? How many incidents were caused by this code?
This is tough though because enterprises go absolutely ballistic over “training on our data” - which is understandable, but will also hold us back.
This reduces to the problem of summarization - a genuinely difficult one. At commit time it’s difficult to know what questions future readers will have. You can get close but never all the way there.
Pre-AI, when engineers couldn’t find the answer in commit messages or documentation, they would ask the author “why” and that human would “compute” the summary on demand.
I think that’s what I expect to do with these agent sessions - I don’t want more markdown, I want to ask it questions on demand. Git AI (https://github.com/git-ai-project/git-ai) uses the prompts that way. I think that model will win out. Save sessions. Read/ask questions relevant to the current agent’s work.
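To make the “save sessions, query on demand” model concrete, here’s a minimal hypothetical sketch (not git-ai’s actual API - the class and field names are invented for illustration): persist each agent session alongside the files it touched, then at question time retrieve the raw sessions relevant to the code in front of you instead of relying on pre-written markdown summaries.

```python
# Hypothetical sketch - not git-ai's actual API. Illustrates the model:
# save agent sessions at commit time, query them on demand later.
from dataclasses import dataclass, field


@dataclass
class Session:
    commit: str
    files: set          # paths this agent run modified
    transcript: str     # full prompt/response log from the run


@dataclass
class SessionStore:
    sessions: list = field(default_factory=list)

    def save(self, session: Session) -> None:
        # Cheap at write time: no summarization, just persist the raw log.
        self.sessions.append(session)

    def relevant_to(self, path: str) -> list:
        # On-demand lookup: pull the sessions that touched this file and
        # let the current agent answer "why" questions from them.
        return [s for s in self.sessions if path in s.files]


store = SessionStore()
store.save(Session("a1b2c3", {"auth.py"}, "User asked for retry logic..."))
store.save(Session("d4e5f6", {"db.py"}, "Chose optimistic locking because..."))

hits = store.relevant_to("db.py")
```

The point of the sketch is the asymmetry: writing is cheap and lossless, and the expensive summarization step happens lazily, only for the questions a reader actually asks.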
On asking peers. This is regrettably on the way out today - I’ll ask engineers about complex code they generated and they can’t give good answers. I think it’s because it all happened so fast — they didn’t sit with the problem for 48 hours. So even if they steered the agent thoughtfully it’s hard to remember all the decisions they made a week later.
love the shout but git-ai is decidedly not trying to replace the SCMs. there are teams building code review tools (commercial and internal) on top of the standard, and I don't think it'll be long before GitHub, GitLab and the usual suspects start supporting it, since folks in the community have already been hacking it into Chrome extensions - this one got play on HN last week https://news.ycombinator.com/item?id=46871473
I met Marshall a few times. He was a good teacher and someone who had a positive impact on several successive classes of students who wanted to start companies and build meaningful products + technologies on that campus.
And I trust (quite a bit) that whatever he brought to light should be followed up on - if for no other reason than to respect his memory. I hope it is taken seriously and those who retaliated find themselves w/o their positions of responsibility and power over other faculty.
Useful hearsay from some investors at the last few demo days: not all of the companies that are "copiers" apply and are accepted with the "copying" idea. Many founding teams end up pivoting during the batch and scramble to get proof points on the board before demo day. They're most likely to pivot to well-known problems, hence the clustering around a few common themes. It doesn't explain all of the data, but it's a big part of it. When you have to come up with a fundable new idea in a week and prove it out in a month, this can happen...