Hacker News | exe34's comments

I've always thought we should maintain a list of people like you. Every time we cure something, like blindness in one person, one of you gets picked and your eyes get poked out. That way the total amount of suffering will be conserved, but those who think that's necessary get to be the ones who pay the price for their beliefs.

In the first half of my comment, I explained that I don't think people should suffer. I'm just also aware that if everyone can pick their child's attributes, it could lead to a nation of blond-haired, blue-eyed kids.

Would you pick blond hair and blue eyes for your kids? Would Black people pick it? Would Asian people?

I wouldn't, but I can imagine a lot of people would.

If you wouldn’t, why would a big chunk of the population do it? And if they did, so what? Why are blond hair and blue eyes bad?

Btw, I also wouldn’t if I could choose.


No it wouldn't. If these traits were everywhere, then they would no longer be exclusive and therefore would lose their appeal. There is nothing inherently "hot" or "attractive" about blond hair or blue eyes.

You must have had a very good surgeon then! Congrats!

Would it be better to bring this up in the retro? We're getting sidetracked here. We could set up a meeting with the stakeholders.

Could you explain the incompatibility? These seem like orthogonal axes to me.

Just a joke, but you’re right: I can like open source and not want to self-host.

It should be at least cheaper if anyone can host it, no?

I think this is the majority view.

If the rumours are true that OpenAI is doing 70% margins on inference and Anthropic 30%, then open-weights models hosted on clouds happy with 10% margins increase competition and decrease cost. I’m game, like most. It also makes compliance with data-sovereignty concerns much easier.

> OpenAI are doing 70% margins on inference and Anthropic doing 30% margins

That difference is actually pretty surprising. Is Claude that much more expensive to host? The end-user pricing seems to be pretty similar, or better for OpenAI.


You could have given Linus the weekend off.

Electron apps will expand to fill the available space!

build the power source next door and charge them extra.

What does deterministic mean to you?

In this context, it means being able to deterministically predict properties of the output based on properties of the input. That is, you don’t treat each distinct input as a unicorn, but instead consider properties of the input, and you want to know useful properties of the output. With LLMs, you can only do that statistically at best, but not deterministically, in the sense of being able to know that whenever the input has property A then the output will always have property B.

I mean can’t you have a grammar on both ends and just set out-of-language tokens to zero. I thought one of the APIs had a way to staple a JSON schema to the output, for ex.

We’re making pretty strong statements here. It’s not like it’s impossible to make sure DROP TABLE doesn’t get output.
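To make the grammar idea concrete, here's a toy sketch of logit masking during decoding. This isn't any particular provider's API; the function names and the five-token vocabulary are made up for illustration. The point is just that if out-of-language tokens are forced to minus infinity, sampling can never leave the grammar.

```python
import math

def mask_logits(logits, allowed_ids):
    """Set every token the grammar forbids at this step to -inf,
    so sampling can only ever pick in-language tokens."""
    return [x if i in allowed_ids else -math.inf
            for i, x in enumerate(logits)]

def greedy_pick(logits):
    """Pick the highest-scoring token id after masking."""
    return max(range(len(logits)), key=lambda i: logits[i])

# Toy step: suppose token ids 0-2 are in-grammar here, 3-4 are not.
logits = [0.1, 2.5, 0.3, 9.9, 4.2]
masked = mask_logits(logits, allowed_ids={0, 1, 2})
print(greedy_pick(masked))  # 1, even though id 3 scored highest unmasked
```

This guarantees the output is *in* the language (e.g. no DROP TABLE token sequence), but says nothing about whether the in-language output is the right answer.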


You still can’t predict whether the in-language responses will be correct or not.

As an analogy: If, for a compiler, you verify that its output is valid machine code, that doesn’t tell you whether the output machine code is faithful to the input source code. For example, you might want to have the assurance that if the input specifies a terminating program, then the output machine code represents a terminating program as well. For a compiler, you can guarantee that such properties are true by construction.

More generally, you can write your programs such that you can prove from their code that they satisfy properties you are interested in for all inputs.

With LLMs, however, you have no practical way to reason about relations between the properties of inputs and outputs.


And also run the LLM output through a keyword-blacklist detection program afterwards; that's probably the easiest filter.

I think they mean having some useful predicates P, Q such that for any input i and for any output o that the LLM can generate from that input, P(i) => Q(o).

If you could do that, why would you need an LLM? You'd already know the answer...

Having that property is still a looooong way away from being able to get a meaningful answer. Consider P being something like "asks for SQL output" and Q being "is syntactically valid SQL output". This would represent a useful guarantee, but it would not in any way mean that you could do away with the LLM.
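One cheap way to implement a Q like "is syntactically valid SQL" (here SQLite's dialect, purely as an example) is to ask the engine to plan the statement without running it. This is a sketch, not a production validator:

```python
import sqlite3

def is_valid_sqlite(sql: str) -> bool:
    """Check validity by asking SQLite to plan the statement.
    EXPLAIN parses and compiles the SQL without executing it.
    Caveat: references to missing tables also fail, so this
    checks slightly more than pure syntax."""
    conn = sqlite3.connect(":memory:")
    try:
        conn.execute("EXPLAIN " + sql)
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()

print(is_valid_sqlite("SELECT 1 + 1"))   # True
print(is_valid_sqlite("SELEC 1 FROM"))   # False
```

Both print statements illustrate the gap: the checker tells you the first query parses and the second doesn't, but nothing about whether a parsing query answers the user's question.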

Let's eat grandma.

The device still works. It's being crippled on purpose.

