ndr's comments | Hacker News

Check it out from this point onwards and the following one. You get a nice summary at the top right. Mind that Anthropic alone is already doing $30B/y annualized.

Take a snapshot and check again in a few months. It's not perfect but it's much more falsifiable than a lot of the noise.

https://ai-2027.com/#narrative-2026-04-30


I think the grandparent meant "fundamentalism" as "mechanistic", and there are lots of things we can know to be useful (as you say, using the scientific method) long before we have a good mechanistic explanation of how they work.

Some examples: aspirin (willow bark used for thousands of years, the drug synthesized in 1897, and the mechanism explained almost 100 years later), or general anesthesia, in use since the mid-1800s while its mechanism is still debated.

This is not to downplay the long-term, or developmental, risks that using something novel can entail. But we can empirically know something about the effects without having good mechanistic models.


But it is usually not necessary for approval of a compound to be able to describe how it works on a molecular or cellular level. What you need to show are three things: efficacy, safety and quality, so basically: the compound has the intended clinical benefit, has an acceptable safety profile and can be produced with a consistent manufacturing quality. Most compounds fail because of lack of efficacy (roughly half), and roughly a third because of lack of acceptable safety.

The vast majority of drug candidates don't make it to the trial stage. Much of the research has to be defensible prior to the trial, and what makes candidates defensible is having a mechanism of action. Of course, once a drug is being used off-label there starts to be some empirical data that can be used for trials, and it seems we'll get lucky with the GLP-1 agonists.

You are entirely correct. New compounds for trials do not come out of thin air; you usually derive them from compounds whose mechanisms you already understand. For instance, we know very well how Semaglutide works, and the same goes for many other peptides currently being studied. However, you are correct that we do not understand why they would help with ME/CFS, simply because we do not understand ME/CFS in the first place. As I've written above, it's a severely neglected disease.

Anyway, I don't think we really disagree, I rather misunderstood your original post. It's good to hear that these new peptides are helping with your condition, and I wish you all the best!


Thanks for the feedback. I'm noticing that 'fundamentalism' didn't translate properly; I should have referred to first principles and mechanisms of action. I need better words for these and will try to find them.

As a fun aside, consider the effect of the birthday paradox on empiricism: as the pool of candidates grows, the probability of a purely coincidental match rises quickly, because the number of potential matching pairs grows quadratically.
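A rough sketch of that scaling, using the standard birthday-problem approximation (the pool sizes below are made up for illustration):

    // Rough sketch: chance of at least one purely coincidental "match" among n
    // candidates, each landing uniformly in one of `buckets` possibilities.
    // The driver is the number of candidate pairs, n*(n-1)/2, which grows
    // quadratically in n.
    function coincidentalMatchProbability(n: number, buckets: number): number {
      return 1 - Math.exp((-n * (n - 1)) / (2 * buckets));
    }

    console.log(coincidentalMatchProbability(23, 365).toFixed(2)); // ~0.50
    console.log(coincidentalMatchProbability(60, 365).toFixed(2)); // ~0.99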


It's not as ergonomic as they make it out to be.

The fact that you have to bundle input+output signatures, and that everything is dynamically typed (sometimes into the args), just makes it annoying to use in codebases that have type annotations everywhere.

Plus, their out-of-the-box agent loop has been a joke for the longest time; writing your own is feasible, but it's night and day compared to getting something done with pydantic-ai.

Too bad because it has a lot of nice things, I wish it were more popular.


Yeah! I can agree with this. There are some ergonomic improvements to be had here.


Have you looked at ADK? How does it compare? Does it even fit in the same space as DSPy?

https://google.github.io/adk-docs/

Disclaimer: I use ADK and haven't really looked at DSPy (though I had heard of it before). ADK certainly addresses all of the points you have in the post.


I personally haven't looked super closely at ADK. But I would love it if someone more knowledgeable could do a sort of comparison. I imagine there are a lot of similar/shared ideas!


There are dozens if not hundreds of agent frameworks in use today, thousands if you peruse /new. I'm curious what features will make for longevity. One thing about ADK is that it comes in four languages (Py, TS, Go, Java; so far), which means understanding can transfer between teams in larger orgs, and they can share the same backing services (like the DB used to persist sessions).


Related self-plug: I built https://codeglf.com on Pyodide, a weekly Python code golf site where every submission runs locally in the browser.

WebAssembly's isolation means you get sandboxing for free without any server infrastructure. Of course users could cheat and submit code that doesn't actually pass the tests, but so far everybody has been friendly and I haven't needed to validate results server-side. Hopefully sharing the link here doesn't make that worse!
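For anyone curious, the browser-side core is roughly this (a minimal sketch against the standard Pyodide API, not the site's actual code):

    // Minimal sketch: run an untrusted Python submission inside the WebAssembly
    // sandbox, no server involved. Assumes the "pyodide" npm package; the real
    // site also wraps this in a test harness and byte counting.
    import { loadPyodide } from "pyodide";

    async function runSubmission(source: string): Promise<string> {
      const pyodide = await loadPyodide();
      // runPython returns the value of the last expression in `source`.
      return String(pyodide.runPython(source));
    }

    runSubmission("sum(range(10))").then(console.log); // "45"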


TIL these are not stem cells directly but rather reprogrammed cells that "comes from an XY donor isolated from neonatal foreskin", which I take to mean baby circumcisions.

From their previous work on Pong: https://pmc.ncbi.nlm.nih.gov/articles/PMC9747182/


Immutability is underrated in general. It's a sore point every time I have to handle non-Clojure code.


Given the ubiquity of React, I think immutability is generally rated pretty appropriately. If anything, I think mutability is underrated. I mean, it wouldn't be applicable to the domain of Temporal, but sometimes a mutable hash map is a simpler/more performant solution than any of the immutable alternatives.


The props data passed to React itself isn't immutable, which is probably one of the missing bricks.

React only checks references, but since the objects aren't immutable they could have changed even without the reference changing.

Immutability also has a performance price which is not always great.
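A minimal sketch of the pitfall (hypothetical `Label` component, not from any real codebase): mutating a props/state object in place keeps the same reference, so React's reference check sees nothing new, while replacing the object behaves as expected.

    // Hypothetical example: React compares references, not contents.
    import React, { memo, useState } from "react";

    const Label = memo(({ user }: { user: { name: string } }) => <p>{user.name}</p>);

    function App() {
      const [user, setUser] = useState({ name: "Ada" });
      return (
        <div>
          <Label user={user} />
          {/* Mutation: same reference, so React bails out and Label never updates. */}
          <button onClick={() => { user.name = "Grace"; setUser(user); }}>bad</button>
          {/* Replacement: new reference, so React sees the change. */}
          <button onClick={() => setUser({ ...user, name: "Grace" })}>good</button>
        </div>
      );
    }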


Yes, you can mutate props. But no, it's probably not going to do what you want if you did it intentionally. If React added Object.freeze() (or a deep freeze) to the component render invoker, everything would be the same, except props would be formally immutable instead of being only expected to be immutable. But this seems like a distinction without much of a difference, because if you just try to use a pattern like that without a pretty deep understanding of React internals, it's not going to do what you wanted anyway.


React doesn’t really force you to make your props immutable data. Using mutable data with React is allowed and just as error prone as elsewhere. But certainly you are encouraged to use something like https://immutable-js.com together with React. At least that’s what I used before I discovered ClojureScript.


Well, mutability is the default, and React tries to address some of the problems with mutability. So React being popular as a subecosystem inside a mutable environment isn't really evidence that people are missing out on the benefits of mutability.

Though React is less about immutability and more about uni-directional flow + the idiosyncrasy where you need values that are 'stable' across renders.


This seems like a Chesterton's fence fail.

Protobuf solved serialization with schema evolution and backward/forward compatibility.

Skir seems to have great devex for the codegen part, but that's the least interesting aspect of protobufs. I don't see how the serialization proposed here solves that without an equivalent of numerical tagging.


Hey, Skir does have numerical tagging, see https://skir.build/docs/language-reference#structs


This seems new and retrofitted.

The implicit version is a brittle design for backwards compatibility.

People/LLMs will keep adding fields out of order, and whatever has been serialised (both in client/server interactions and stored in DBs) will be broken.
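A toy sketch of the concern (nothing to do with Skir's or protobuf's actual encodings): with explicit, stable tags an old reader simply ignores fields it doesn't know about, while order-derived tags shift as soon as someone inserts a field, so old readers silently read the wrong data.

    // Toy illustration, not a real wire format: a "message" is just tag -> value.
    type Encoded = Map<number, unknown>;

    // Explicit tags stay stable when v2 adds `nickname`:
    // v1 schema: { name: 1, email: 2 }; v2 adds { nickname: 3 }.
    const explicitV2: Encoded = new Map<number, unknown>([
      [1, "Ada"], [2, "ada@example.com"], [3, "ada99"],
    ]);

    // Implicit, declaration-order tags shift when `nickname` is inserted between
    // `name` and `email`: email silently moves from tag 2 to tag 3.
    const implicitV2: Encoded = new Map<number, unknown>([
      [1, "Ada"], [2, "ada99"], [3, "ada@example.com"],
    ]);

    // A reader built against the v1 schema only knows tags 1 (name) and 2 (email).
    function readV1(msg: Encoded) {
      return { name: msg.get(1), email: msg.get(2) }; // unknown tags are ignored
    }

    console.log(readV1(explicitV2)); // { name: 'Ada', email: 'ada@example.com' }
    console.log(readV1(implicitV2)); // { name: 'Ada', email: 'ada99' }  <- corrupted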


At 2'58'' you can see a frame of them projecting on Senate House, London.

During WW2 it was used by the Ministry of Information, and it inspired Orwell's description of the Ministry of Truth building. His wife Eileen worked in the building for the Censorship Department.

https://en.wikipedia.org/wiki/Senate_House,_London


Worth checking out this post from someone who actually worked on this change:

> I take significant responsibility for this change.

https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/responsibl...


This guy from Effective Altruism pivoted away from helping the poor to trying to keep AI from becoming a Terminator-type entity, and then pivoted to, ah, it's okay for it to be a Terminator-type entity.

> Holden Karnofsky, who co-founded the EA charity evaluator GiveWell, says that while he used to work on trying to help the poor, he switched to working on artificial intelligence because of the “stakes”:

> “The reason I currently spend so much time planning around speculative future technologies (instead of working on evidence-backed, cost-effective ways of helping low-income people today—which I did for much of my career, and still think is one of the best things to work on) is because I think the stakes are just that high.”

> Karnofsky says that artificial intelligence could produce a future “like in the Terminator movies” and that “AI could defeat all of humanity combined.” Thus stopping artificial intelligence from doing this is a very high priority indeed.

https://www.currentaffairs.org/news/2022/09/defective-altrui...

He is just giving everyone permission to do bad things by saying a lot of words around it.


> then pivoted to being, ah, its okay for it to be a terminator type entity.

Isn't that the opposite of what he's saying? He's saying it could become that powerful, and given that possibility it's incredibly important that we do whatever we can to gain more control of that scenario.


> Isn’t that the opposite of what he’s saying?

The quote was from 2022, about the first pivot to AI, to prevent it from becoming a Terminator-style entity. The last pivot was not in the quote but is the topic of this current Hacker News post, where he takes credit for dropping the safety pledge:

"That decision included scrapping the promise to not release AI models if Anthropic can’t guarantee proper risk mitigations in advance."

I expect the next pivot will be that we need to allow the US military to use Anthropic to kill people, because otherwise they will use a less pure AI to kill people, and our Anthropic is better at only killing the bad guys, so it is the lesser evil.


I think the poster here has an axe to grind, considering they quoted something that directly contradicted their point and didn't even notice.


The quote was only about the 2022 pivot to AI safety; the 2026 pivot away from AI safety is the topic of this Hacker News post.


Effective Altruism is such a beautiful term for a pretentious Karen who needs to wrap their selfish actions in moral superiority.

It's that perfect blend of "I'm doing what everyone else is doing" and "I'm better than everyone else."

Chef's kiss.


Getting SBF vibes from this. "Earn to give" is an inherently flawed philosophy.


Effective altruism came from the "rationalist" movement.

It was never about helping poor people.

For some reason, the rationalist movement and its offshoots are really pervasive in Silicon Valley. I don't see them much in other tech cities.


> I generally think it’s bad to create an environment that encourages people to be afraid of making mistakes, afraid of admitting mistakes and reticent to change things that aren’t working

"move fast and break things" ?


"don't hold me liable"


> > I take significant responsibility for this change.

Empty words. I would like to know one single meaningful way he will be held responsible for any negative effects.


Did this guy actually write this?

Incredibly long and verbose. I'll stop short of accusing him of using an AI to generate slop, but whatever happened to people's ability to make short, strong, simple arguments?

If you can't communicate the essence of an argument in a short and simple way, you probably don't understand it in great depth, and clearly don't care about actually convincing anybody because Lord knows nobody is going to RTFA when it's that long...

At best, you're just communicating to academics who are used to reading papers... We need to expect better from these people if we want to actually improve the world... Standards need to be higher.


This is where people go to post long verbose statements.

You can usually find the short version on Twitter.


Perhaps they didn’t have the time to write a shorter version.

Or the discipline.

Maybe neither.


This style is in vogue in the LessWrong community.


I genuinely believe that website is responsible for a lot of the worst ideas currently permeating the technology sector.


pretty much the intellectual equivalent of looksmaxxing


Been thinking about the nature of this behavior for a long time; you have nailed it so well that no one will be able to take out this nail.


Worth checking out what someone working on it actually has to say: https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/responsibl...

