One thing that MCP solves well, which neither CLI apps (like the `gh` CLI) nor letting your LLM call arbitrary APIs via curl does, is setting granular permissions per tool.
Most agent frontends I've used, like Claude Code, only let you authorize CLI commands one level deep, which works fine for commands like `docker build:*`. But for complex CLIs like GitHub's or Azure's, it just doesn't scale. It is absurd to grant Claude Code permission to `az vm:*` when that includes everything from `az vm show` to `az vm delete`. Likewise, the argument that you should just let your LLM call APIs directly via curl does not hold up well when Claude Code just wants raw access to all of `curl:*`.
Meanwhile, MCP tools are (currently, at least in CC) managed at the individual tool level, which is very convenient for managing granular permissions.
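For illustration, here's roughly what that contrast looks like in a Claude Code `settings.json` (a sketch based on the current permission-rule format; the server and tool names are just examples): a Bash rule can only match a command prefix, while an MCP rule names one specific tool.

```json
{
  "permissions": {
    "allow": [
      "Bash(docker build:*)",
      "mcp__github__get_pull_request"
    ],
    "deny": [
      "Bash(az vm delete:*)"
    ]
  }
}
```

The Bash deny rule only helps if you can enumerate every dangerous prefix up front; the MCP rule is allow-listed per tool by construction.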
Perhaps there could be some "CTCP" (CLI tool context protocol; the CCP acronym does not work well) where CLI apps could expose their available tools to the LLM, and it could then be dynamically loaded and managed at a granular level. But until then, I'm going to keep using MCP.
This is solved by the agent having its own identity and credentials. Why would you share your login and identity with your AI agent?
Access control and permissions should be handled on the backend by enforcing IAM on well-defined principals, not with MCP middleware. Claude can already bypass MCP and call APIs or use CLIs if it runs into blockers using MCP, so it’s not an effective point to implement the control.
Anti-pattern imho. Agents should operate within granular identity and permission scopes, with audit and log trails for all data operations (read, write, etc).
Could a CLI utility be made that denies any request from proceeding? Let's call this CLI `b`. It would take a user-level configuration at, say, `~/.config`, or let you enter rules via the CLI, or read them from the context of the folder it is running in.
Then agents could run `b az vm delete test123`, and `b` would check whether the `az vm delete` command itself is allowed. If it finds that it's denied, it gives an error: "This command isn't allowed to run."
But if something like `b az vm create test123` is run, then the command is allowed to proceed.
Someone must have made a utility similar to `b`; perhaps someone can share links to things like this. But what are your thoughts on something like this, Paul? I definitely feel like convenience can be wrapped around something like this rather than continuing to use the MCP protocol.
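A minimal sketch of what a `b`-style wrapper could look like, assuming a simple prefix-based allow/deny list in a JSON config (the config path, file format, and rule semantics here are all hypothetical, not an existing tool):

```python
#!/usr/bin/env python3
"""Hypothetical 'b' wrapper: refuse to run CLI commands not on an allowlist."""
import json
import subprocess
import sys
from pathlib import Path

# Hypothetical config location, e.g. {"deny": ["az vm delete"], "allow": ["az vm"]}
CONFIG = Path.home() / ".config" / "b" / "rules.json"


def load_rules():
    if CONFIG.exists():
        return json.loads(CONFIG.read_text())
    return {"deny": [], "allow": []}


def is_allowed(cmd, rules):
    joined = " ".join(cmd)
    # Deny prefixes take precedence over allow prefixes.
    if any(joined.startswith(p) for p in rules["deny"]):
        return False
    return any(joined.startswith(p) for p in rules["allow"])


def main():
    rules = load_rules()
    cmd = sys.argv[1:]
    if not cmd or not is_allowed(cmd, rules):
        print("This command isn't allowed to run.", file=sys.stderr)
        sys.exit(1)
    # Pass through to the real command and propagate its exit code.
    sys.exit(subprocess.call(cmd))


if __name__ == "__main__" and len(sys.argv) > 1:
    main()
```

So `b az vm delete test123` would be rejected by the deny prefix, while `b az vm create test123` would pass through. The real work is in choosing prefix granularity well, which is exactly the problem MCP sidesteps by naming individual tools.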
As a vegetarian of 20 years, I like being able to go to restaurants and have something that is on par with what my friends and family are eating (although I do prefer Impossible to Beyond, by far). Even without friends and family, there's a social (and distinctively American) aspect to being able to have a realistic burger and beer at my local sports bar/grill and not just have a salad or some Sysco frozen black bean burger.
Avalonia is FOSS (MIT licensed). You only need Avalonia XPF if you are migrating legacy stuff.
Moq is largely unnecessary today, with LLMs able to easily generate mock classes. I personally prefer to hand-roll my mocks, but if you prefer the Moq-like approach, there's NSubstitute (BSD 3-Clause).
AutoMapper and MediatR are both libraries I avoided prior to the license change anyway, because I don't like runtime "magic" and not being able to trace dependency calls through my code. But there are Mapster and Wolverine to fill those needs (both MIT). Wolverine can also replace much of MassTransit.
Telerik stuff - there are many good FOSS alternatives to these UI components; too many to list since it depends on which stack you're using.
PDF is indeed a sore spot. PdfPig is good, but limited in capability. I've started offloading PDF processing to a separate Python container with a simple, stateless Flask API using PyMuPDF.
> we have a problem with opensource being asymmetrically underfunded and if people going commercial is the cost perhaps we've failed.
Completely agree with this, though. My company and I personally contribute a lot of time back to OSS, and I feel like that is part of the social contract of OSS. To have these libraries rug-pulled feels like a slap in the face as an OSS contributor and maintainer.
I agree with almost all of this, especially MediatR being nonsense, but I would recommend against using an LLM to generate a mock. That's just more code that you need to maintain and update on every interface change. NSubstitute is a fine library.
Another popular library that went commercial is FluentAssertions; Shouldly is a good open-source alternative.
PDF is an enormous festering wound in .NET. I've also been doing .NET since day one and never bought a single commercial component. I've used it to build some massive commercial products, all on OSS.
BUT. PDF has always been a nightmare. It's gotten a lot better in the last year, since LLMs have vast knowledge of all the functions available in each of the .NET PDF OSS libraries and can usually find a way to do the thing I need now. (I've even had them just hack the PDF streams directly when there is no library for the task, since they know the whole spec.)
Any recommendation of good alternatives to Telerik? We've been using it for years, but I'm open to considering alternatives even though it doesn't cost me anything to pay for the license.
Depends on what layer of Telerik [0]. Honestly of late since I'm extra rusty on frontend I just get Copilot with Claude to help generate UI widgets since that's allowed.
Before that, years ago, I just YOLOed with WebSharper and built composition helpers to make 'spartan but correct' UIs that could be prettied up with bootstrap if needed.
That said, alas, Bolero (which replaced WebSharper) is F#-specific rather than also supporting C#.
I mostly bring those up because they have various libraries out there to work with different JS bits.
Yes, you are right, if you are on 5.0+, however the 4.x stuff is definitely out of support.
Sorry, I did not know they had actually brought non-Core ASP.NET forward into 5.0+, but it makes sense given how much of .NET Framework they continued support for and how much ASP.NET and Forms stuff is still around in enterprise with no budget for bringing it forward.
Totally agree with breaking the chain though, we moved to Core around 2.0 and never looked back, as an ecosystem it is so much better.
> however the 4.x stuff is definitely out of support [...] Sorry, I did not know they had actually brought non-Core ASP.NET forward into 5.0+
None of this is true, you've gotten yourself very confused. The only real change with .NET 5 was the "Core" name being dropped and the Mono runtime being merged in. .NET Framework 4.x is still around and is still fully supported for legacy applications.
It seems like everyone is dog-piling on Anker over this, so I'd like to put forward a bit more positive of a take.
I have an M5 that I got through the original Kickstarter campaign. I love this printer. I use it casually, but I rarely have a failed print that I can't attribute to, e.g., poorly chosen supports. Almost all of the parts on it are still original and working well, apart from the hot end, which I've had to replace a few times after accidentally breaking the screws when changing nozzles. (The original hot end was poorly designed: its very thin, long aluminum screws did not have the shear strength to survive accidentally torquing the nozzle head. This was fixed with the all-metal hot end, which is not ideal for all filaments.)
Despite my issue with the hot end (which I believe could easily be fixed with an updated design) and the nearly useless "AI" feature, I feel like this printer was a great value at the time. It's very well built, looks great, and is very reliable. I really enjoy every opportunity I have to use it and do not regret my purchase decision at all.
I'm saddened that they seem to be pulling out of the market, even though it makes sense compared to the competition. It really seemed like they had a promising start. If this is truly the end, then RIP with positive sentiments from me.
> The hot end that I've had to replace a few times due to accidentally breaking the screws when changing nozzles. (It was a poor design of the original hot end...)
That bit was highlighted in the article, though, as one of the more annoying aspects of Anker pulling out of the market. It's likely that if your hot end fails again, you'll suddenly have 10 lbs of useless 3D printer to deal with. Most people will just toss them in a landfill.
There was a time I thought 3D printers would break free from the 'every part is proprietary' industry of 2D printers, where you have cheap disposable hardware and people are incentivized to buy new printers frequently to replace dodgy old equipment.
But outside of the passionate Voron community and a few companies who still have at least some of the community/repairability-first ethos, it seems the wider industry is moving towards proprietary hardware, even to the point of blocking out (or at least making difficult) 3rd party accessories, mods, and community software.
+100 this. I understand them exiting the market, but it's a tragedy we won't have access to spare parts. I wish they had at least produced a ton of hot ends in advance, so owners could have a reliable supply for a few more years.
I'm a happy M5 user, but now I'm counting the days until it becomes a paperweight.
Azure SQL Database has long been the most cost-effective way of running SQL Server as a PaaS database, and still is if you choose the DTU-based modes, making it a very attractive option. Combined with the rich feature set, maturity, and reliability of SQL Server, it is hardly legacy; in fact it's very capable and continues to get new updates, like vector operations.
I've helped create apps that support millions to hundreds of millions of dollars in revenue on Azure SQL Databases that cost at most a few hundred dollars per month. And you can get started with an S0 database for $15/mo, which is absolutely suitable for production use for simple apps.
Unfortunately, I think Microsoft realized how good of a value the DTU-based model was, and has started pushing everyone to the vCore model, which dramatically increases the barrier to entry for Azure SQL Database, making PostgreSQL a much more attractive option. If Microsoft ever kills off the DTU purchasing model of Azure SQL Database, I likely won't be recommending or choosing Azure SQL Database at all going forward. It'll 100% be PostgreSQL.
Can someone check my understanding: does this mean they have eight logical qubits on the chip? It appears that way from the graphic where it zooms into each logical qubit, although it only shows two there.
If that is true, it sounds like having a plan to scale to millions of logical qubits on a chip is even more impressive.
They have never demonstrated even a single physical qubit.
Microsoft has claimed for a while to have observed some signatures of quantized Majorana conductance, which might potentially allow building a qubit in the future. However, other researchers in the field have strongly criticized their analysis, and the field is full of retracted papers and allegations of scientific misconduct.
They have no qubits at all, "logical" or not, yet. They plan to make millions. It is substantially easier to release a plan for millions of qubits than it is to make even one.
> Cadmium is bad news. Lead and mercury get all the press, but cadmium is just as foul, even if far fewer people encounter it. Never in my career have I had any occasion to use any, and I like it that way.
It seems clear that he doesn’t want to work with cadmium, regardless of the compound.
I mean, sure. But then you read past that sentence, and you see that the rest of the article is about this particular compound and its unique tendency to explode, form toxic gases when burned, and so on.
I can't speak for the guy, but lots of things are "bad news", colloquially, and yet we work with them in the laboratory as an accepted everyday risk. I am not an inorganic chemist, but I'm pretty certain they work with far riskier things than inorganic cadmium on a regular basis.
I see you plan on making money by charging for the hosted service. Given that, and given recent history in the industry with companies starting out with this model only to rug-pull it from users later and move to a more restrictive license, can you publicly commit to keeping the code MIT/AGPLv3-licensed into the future?
Yes. Both Zai and I care a lot about FOSS — we also believe that open-source business models work, and that most proprietary devtools will slowly but surely be replaced by open-source alternatives. Our monetization strategy is very similar to Supabase — build in the open, and then charge for hosting and support. Also, we reject any investors that don't commit to the same beliefs.