Thaler v. Perlmutter said that an AI system cannot be listed as the sole author of a work - copyright requires a human author.
US Copyright Office guidance in 2023 said work created with the help of AI can be registered as long as there is "sufficient human creative input". I don't believe that has ever been qualified with respect to code, but my instinct is that the way most people use coding agents (especially for something like kernel development) would qualify.
Interesting. That seems to suggest that one would need to retain the prompts in order to pursue copyright claims if a defendant can cast enough doubt on human authorship.
Though I guess such a suit is unlikely if the defendant could just AI wash the work in the first place.
> A recent leak of Claude’s code prompted the startup to publish a blogpost at the beginning of the month saying that AI models had surpassed “all but the most skilled humans at finding and exploiting software vulnerabilities” [...]
I've seen a bunch of people conflate the Claude Code source-map leak with the Mythos story, though not quite as blatantly as here. I'm confident that they are totally unrelated.
I have a pet theory that the uptick in normal cybersecurity PRs you mention as a trend on your blog was produced with Claude Code's stealth mode and Mythos.
Yeah, regular web chat Claude and ChatGPT both have access to a full container (even on the free tier, at least for ChatGPT) that can run CLI tools.
Both of them can even install CLI tools from npm and PyPI - they're limited in terms of what network services they can contact aside from those allow-listed ones though, so CLI tools in those environments won't be able to access the public web.
... unless you find the option buried deep in Claude for enabling additional hosts for the default container environment to talk to. That's a gnarly lethal trifecta exfiltration risk so I recommend against it, but the option is there!
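For what it's worth, you can probe this from inside the container itself. A minimal sketch of such a reachability check (the URL below uses the reserved `.invalid` TLD purely as a deterministic example; which hosts actually succeed depends entirely on the environment's allow-list):

```python
import urllib.request


def can_reach(url: str, timeout: float = 3.0) -> bool:
    """Return True if an HTTP request to `url` succeeds from this environment."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except Exception:
        # DNS failure, blocked/refused connection, or timeout all land here.
        return False


# .invalid is a reserved TLD that never resolves, so this is False
# everywhere -- a handy sanity check that the probe itself works.
print(can_reach("http://example.invalid"))  # False
```

Running it against a few candidate hosts gives you a quick map of what the sandbox will and won't talk to.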
The ads for prediction markets on TikTok are aggressive - like (paraphrasing) "this is your new source of passive income and you'd be crazy to miss it" aggressive.
So basically the standard online scam script for 20+ years but in a TikTok. I remember seeing AdWords text ads in the 2000s for "make $$$ working from home".
I also had a poke around with the tools exposed on https://meta.ai/ - they're pretty cool, there's a Code Interpreter Python container thing now and they also have an image analysis tool called "container.visual_grounding" which is a lot of fun.
It is fair to think so, because that is what everyone else is doing. But given that this is Meta, and considering Llama, if MSL is going to keep releasing models and wants to rejoin the AI race, they may actually release open weights just to get attention. Once they establish a sizable community, they can start guarding their frontier models.
I buy the rationale for this. There's been a notable uptick over the past couple of weeks in credible security experts unrelated to Anthropic raising the alarm about the recent influx of genuinely valuable AI-assisted vulnerability reports.
> On the kernel security list we've seen a huge bump in reports. We were between 2 and 3 per week maybe two years ago, then reached probably 10 a week over the last year, with the only difference being AI slop, and now since the beginning of the year we're at around 5-10 per day depending on the day (Fridays and Tuesdays seem the worst). Now most of these reports are correct, to the point that we had to bring in more maintainers to help us.
> And we're now seeing on a daily basis something that never happened before: duplicate reports, or the same bug found by two different people using (possibly slightly) different tools.
> The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good.
> I'm spending hours per day on this now. It's intense.
> Months ago, we were getting what we called 'AI slop,' AI-generated security reports that were obviously wrong or low quality. It was kind of funny. It didn't really worry us.
> Something happened a month ago, and the world switched. Now we have real reports. All open source projects have real reports that are made with AI, but they're good, and they're real.
Could this potentially be because more researchers are becoming accustomed to the tools and adding them to their pipelines?
The reason I ask is that I've been using them to snag bounties to great effect for quite a while, and while the models have of course improved, they were already useful for this kind of work before now.
"Not as a proof of concept. Not for a side project with three users. A real store" - suggestion for human writers, don't use "not X, not Y" - it carries that LLM smell whether or not you used an LLM.