Hacker News | simonw's comments

Thaler v. Perlmutter said that an AI system cannot be listed as the sole author of a work - copyright requires a human author.

US Copyright Office guidance in 2023 said work created with the help of AI can be registered as long as there is "sufficient human creative input". I don't believe that has ever been qualified with respect to code, but my instinct is that the way most people use coding agents (especially for something like kernel development) would qualify.


Sounds like using AI as a tool is fine, but those autonomous clawbots are not. All the more reason to reject their submissions, I guess.

Interesting. That seems to suggest that one would need to retain the prompts in order to pursue copyright claims if a defendant can cast enough doubt on human authorship.

Though I guess such a suit is unlikely if the defendant could just AI wash the work in the first place.


> A recent leak of Claude’s code prompted the startup to publish a blogpost at the beginning of the month saying that AI models had surpassed “all but the most skilled humans at finding and exploiting software vulnerabilities” [...]

I've seen a bunch of people conflate the Claude Code source-map leak with the Mythos story, though not quite as blatantly as here. I'm confident that they are totally unrelated.


I have a pet theory that the uptick in normal cybersecurity PRs you mention as a trend on your blog was driven by Claude Code's stealth mode and Mythos.

Yeah, regular web chat Claude and ChatGPT both have access to a full container (even on the free version, at least for ChatGPT) that can run CLI tools.

Both of them can even install CLI tools from npm and PyPI - they're limited in terms of what network services they can contact aside from those allow-listed ones though, so CLI tools in those environments won't be able to access the public web.

... unless you find the option buried deep in Claude for enabling additional hosts for the default container environment to talk to. That's a gnarly lethal trifecta exfiltration risk so I recommend against it, but the option is there!
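To sketch the kind of commands those sandboxed containers can run (a hypothetical session, not a real transcript - the specific tools here are just examples):

  pip install sqlite-utils    # PyPI is reachable via the allow-list
  npm install -g prettier     # so is the npm registry
  curl https://example.com    # but general outbound traffic fails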

More notes on ChatGPT's ability to install tools:

- https://simonwillison.net/2026/Jan/26/chatgpt-containers/


The ads for prediction markets on TikTok are aggressive - like (paraphrasing) "this is your new source of passive income and you'd be crazy to miss it" aggressive.

So basically the standard online scam script for 20+ years but in a TikTok. I remember seeing AdWords text ads in the 2000s for "make $$$ working from home".

Pelicans: https://simonwillison.net/2026/Apr/8/muse-spark/

I also had a poke around with the tools exposed on https://meta.ai/ - they're pretty cool, there's a Code Interpreter Python container thing now and they also have an image analysis tool called "container.visual_grounding" which is a lot of fun.


Alexandr Wang suggesting this might be open-weights/source in the future gives me hope. Hopefully they stay on this path.

I have a feeling it won't be this exact model, but rather smaller distilled variants, similar to the Gemma line.

It is fair to think so, because that is what everyone else is doing. But being Meta, and considering Llama, if MSL is going to keep releasing models and wants to rejoin the AI war, they may actually open the weights just to get more attention. Once they establish a sizable community, they can start guarding their frontier models.

Seems like not all tools are available everywhere? I don't have access to visual_grounding sadly, only these: https://embed.fbsbx.com/playables/view/4208761039384112/?ext...

Interesting, you got some I didn't: animate image, create video and get reference audio.

The only benchmark I care about! Just curious, Simon - which model do you think has created the best pelican riding a bicycle thus far?

Gemini 3.1 Pro: https://simonwillison.net/2026/Feb/19/gemini-31-pro/

But GLM-5.1 has the best NORTH VIRGINIA OPOSSUM ON AN E-SCOOTER: https://simonwillison.net/2026/Apr/7/glm-51/


> but you can try it out today on meta.ai (Facebook or Instagram login required).

I guess I will have to wait. I hope it will at least be available on OpenRouter soon. Overall, I am really excited to try it out.


I've been trying that prompt against other leading models and honestly GLM-5.1's is by far the best.

Not only did this one draw me an excellent pelican... it also animated it! https://simonwillison.net/2026/Apr/7/glm-51/

It made it realistic. A pelican is much more likely to be flying in the sky than riding a bicycle.

Surely at this point it’s part of the training set and the benchmark has lost its value?

these comments are as useless as simon posting his pelicans

Simon, you need to come up with improved benchmarks soon.

Agree. But you can keep the pelican theme in whatever new benchmark you choose to come up with. Iconic at this point.

let me see Tayne with a hat wobble

I buy the rationale for this. There's been a notable uptick over the past couple of weeks of credible security experts unrelated to Anthropic sounding the alarm about the recent influx of actually valuable AI-assisted vulnerability reports.

From Willy Tarreau, lead developer of HAProxy: https://lwn.net/Articles/1065620/

> On the kernel security list we've seen a huge bump of reports. We were between 2 and 3 per week maybe two years ago, then reached probably 10 a week over the last year with the only difference being only AI slop, and now since the beginning of the year we're around 5-10 per day depending on the days (fridays and tuesdays seem the worst). Now most of these reports are correct, to the point that we had to bring in more maintainers to help us.

> And we're now seeing on a daily basis something that never happened before: duplicate reports, or the same bug found by two different people using (possibly slightly) different tools.

From Daniel Stenberg of curl: https://mastodon.social/@bagder/116336957584445742

> The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good.

> I'm spending hours per day on this now. It's intense.

From Greg Kroah-Hartman, Linux kernel maintainer: https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_...

> Months ago, we were getting what we called 'AI slop,' AI-generated security reports that were obviously wrong or low quality. It was kind of funny. It didn't really worry us.

> Something happened a month ago, and the world switched. Now we have real reports. All open source projects have real reports that are made with AI, but they're good, and they're real.

Shared some more notes on my blog here: https://simonwillison.net/2026/Apr/7/project-glasswing/


Could this potentially be because more researchers are becoming accustomed to the tools and adding them to their pipelines?

The reason I ask is that I've been using them to snag bounties to great effect for quite a while, and while other models have of course improved, they were useful for this kind of work well before now.


I have a project to help with that:

  uvx datasette data.db
That starts a web app on port 8001 that looks like this:

https://latest.datasette.io/fixtures
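
If you don't already have a SQLite file to point it at, here's a minimal sketch using sqlite-utils, another of my projects (data.csv and the table name are placeholders):

  uvx sqlite-utils insert data.db mytable data.csv --csv
  uvx datasette data.db

That loads the CSV into a table, then serves it up for browsing and arbitrary SQL queries.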



"Not as a proof of concept. Not for a side project with three users. A real store" - suggestion for human writers, don't use "not X, not Y" - it carries that LLM smell whether or not you used an LLM.

And that's just the opening paragraph; the full text is rounded off with:

"The constraint is real: one server, and careful deploy pacing."

Another strong LLM smell, "The <X> is real", nicely bookends an obviously generated blog post.

