Hacker News | rfv6723's comments

Human spoken conversation doesn’t really work like file buffering.

People can tolerate missing words surprisingly well. If a phrase is slightly clipped, masked by noise, or dropped, the listener can often infer it from context. That happens constantly in real speech.

But pauses and stalls are much more damaging. A sudden freeze in the middle of speech breaks turn-taking, timing, and attention. It feels like the speaker stopped thinking, the connection died, or the system got stuck.

For voice UX, a tiny omission is often less harmful than a perfectly complete sentence that freezes halfway.
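The tradeoff can be sketched in code. Here's a minimal, illustrative playout policy (hypothetical names, not any real voice stack): a frame that misses its deadline is concealed rather than waited for, so the stream never freezes.

```python
def schedule_playout(frames):
    """frames: list of (seq, payload); payload is None when the packet
    missed its playout deadline. A missing frame is concealed (a brief
    omission the listener can usually paper over); the loop never
    blocks waiting for it, so playback never stalls."""
    played, concealed = [], []
    for seq, payload in frames:
        if payload is None:
            concealed.append(seq)  # fill with silence or packet-loss concealment
        else:
            played.append(payload)
    return played, concealed
```

Real jitter buffers are far more sophisticated, but the policy choice is the same: a dropped frame costs a word fragment; a stall costs the turn.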


> People can tolerate missing words surprisingly well. If a phrase is slightly clipped, masked by noise, or dropped, the listener can often infer it from context. That happens constantly in real speech.

LLMs are surprisingly good at this, too.

This entire blog post is based on assumptions that

1) WebRTC garbling is common

2) LLMs fall apart if there are any audio glitches

I would bet money that OpenAI has explored both of those and has statistics on how they impact the service. That is more than this blogger, who heaps snark upon snark to avoid having a realistic conversation about the pros and cons.


I think this is mixing domains quite a bit.

If I'm talking to a friend or peer and I'm on a crappy link, we can probably work it out. If I'm calling my lawyer from prison with my "one call" I really want my lawyer to get my instructions clearly and correctly, ideally the first time without a lot of coaching.

Where on this scale does "person talking to LLM" fit?

I believe there's a ton of research into the Shannon limit and human speech. You can trivially observe how much redundancy there is by listening to a podcast at 1x, 1.2x, 1.5x, 2x, etc.: the speed at which you can no longer follow what's going on marks the "redundancy" built into that language. That number falls way off when you're listening to a person with an accent or when the recording is noisy.
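As a toy illustration (not a substitute for Shannon's actual n-gram experiments), a zeroth-order character entropy gives a crude bound on that redundancy:

```python
import math
from collections import Counter

def char_entropy(text):
    """Zeroth-order entropy in bits per character. Ignoring context
    makes this an overestimate of the true entropy rate, so the
    implied redundancy, 1 - H / log2(alphabet size), is only a
    lower bound on the real redundancy of the language."""
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

Shannon estimated English at roughly one bit per character once context is accounted for, far below the ~4.7 bits a 27-symbol alphabet could carry, which is the slack that lets you follow sped-up or clipped speech.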

You'll also find that your tolerance for lossy media is radically different based on latency, echoes, and jitter in the audio (which I believe is the point of the original "don't use WebRTC" article...)

Finally, people may tolerate this, but the "phoneme-to-token" thinger may be less tolerant, and it certainly will not be able to magic correct meaning from lost packets. And if the resulting exchange is extremely expensive or important (the lawyer and the "I'm in jail in Poughkeepsie; I need bail!" exchange), you really want to take the time to get it right, not make things guess.


I think this conflates atheism with a much stronger form of causal rationalism.

Dawkins-style atheism is not “reject anything without a complete causal model.” It is a rejection of hypotheses with no explanatory gain, no empirical constraint, and unlimited ad hoc flexibility — like the Flying Spaghetti Monster.

Consciousness is different. It is first a phenomenon, not an already-settled causal model. We do not believe humans, infants, or animals are conscious because we possess a complete mechanism for subjective experience. We infer consciousness from a cluster of phenomena that need explanation.

So the lack of a full causal account warrants caution, not denial. It is reasonable to say current AI gives weak evidence for consciousness. But that is not the same as saying AI consciousness is equivalent to believing in the Flying Spaghetti Monster.


The point is "Claude is conscious" is a hypothesis with no explanatory gain, no empirical constraint, and by denying that non-human consciousness is relevant to the discussion it gains unlimited ad hoc flexibility. I am relating this to plausibility and causality because there is a much more rational causal explanation for Claude seeming conscious than it actually being conscious: it imitates human (modern Western) consciousness via big data. Since this is a totally different causal mechanism than human consciousness, and since Claude has nothing in common with non-human animals, and since we don't need consciousness to explain Claude's behavior, "Claude is conscious" is overwhelmingly less plausible than "Claude is a sophisticated but ultimately brainless chatbot."

It is truly irrational - and hostile to scientific thought - to believe Claude is conscious. It truly is believing in the Flying Spaghetti Monster.


All the claims that AI can't be consciousness seem to mostly be using "consciousness" as a scientific-sounding word for "soul" and asserting that machines can't have souls.

> Since this is a totally different causal mechanism than human consciousness

A causal mechanism for what, exactly? Could you kindly define consciousness in a rigorous way so that Dawkins can see why it doesn't apply to Claude?


The anxiety surrounding AI-generated "slop" mirrors the frantic warnings of late 15th-century clerics who viewed the printing press as an engine of spiritual decay. Johannes Trithemius, a prominent Benedictine abbot, famously argued that monk-scribes should not abandon their pens, fearing that printed books were ephemeral, error-ridden toys that would undermine the sanctity of scripture and the discipline of the mind. He believed that the sheer volume of cheap, mechanical texts would drown out genuine wisdom and lead to a permanent decline in the quality of human thought.

History shows he fundamentally misunderstood the human capacity for adaptation. Rather than succumbing to a sea of printed garbage, society developed sophisticated new filters. We invented the modern bibliography, the peer-review process, the concept of a "trusted publisher," and the critical literacy skills required to navigate a world where information was no longer a rare luxury. Humans have an innate drive to seek out signal over noise. Just as the chaos of the early printing era eventually gave way to the Enlightenment, our current struggle with synthetic content will likely trigger a new evolution in how we verify truth and value human insight.


Manuscripts could contain handwritten errors, and of course there could be misprints due to wrongly selected type, but the content wasn't generated out of nowhere. Unless we're talking about asemic or automatic writing due to some... "spiritual" influence.

The key here is human thought, as you said. Whether these books were written by clerics or printed by the press, they still contained human-produced substance. It's not a fair comparison.


That exact stance (plus the scribes' financial interests) prevented the printing press from being widely used in the Ottoman Empire for more than 200 years.


I think his legacy is about steganography and cryptography. He relied on handwritten volumes and couldn't adapt his cryptographic techniques to print.


"Generating slop is totally fine because we'll eventually develop anti-slop filters" isn't exactly the most convincing argument, you know.

Besides, your link between the "chaos of the early printing press" and the start of the Enlightenment is very forced. The Greek philosophers did plenty of critical thinking after all, and they had no need for a printing press. I see absolutely zero reason why the current AI bubble will inevitably result in an Enlightenment-like period, nor why AI would be a hard requirement for one.


The frontiers of mathematics are already incorporating AI, and people like Terence Tao are documenting the progress. At the very least, arguably the best mathematician in the world only does this because he has predicted the opposite conclusion to yours.

So when you say zero reason, I have to tell you that your absolutist stance is blindness. There are many reasons why it can happen, and many reasons why it can’t.


Incredibly valid opinion. Many people disagree but this is an extremely possible future for AI.

There is also a darker future where AI improves to the point where it's no longer slop: it produces quality code, texts, and books that are better than human output, in a fraction of a second, from one misspelled prompt. Given the past trajectory of AI, this is the more likely outcome.

The other outcome is AI flatlines. This is as good as it gets. In which case the future you predict may come to pass.


The concept of a valuable service falls apart if players can influence the actual event. Without equal footing and basic honesty, you aren't measuring reality so much as you are subsidizing those with the power to manipulate it.


There's a feel good story where a parent can't afford a very expensive medical procedure to save a child, so someone tells them to place a massive bet in a prediction market for a certain event that may happen, and then they make it happen, therefore siphoning off money from the other gamblers for a good cause. Just a small way everyday people use the system against itself as a way to survive.


Seems pretty naive to think that this type of thing is happening in favor of the "everyday person".


Yup. That is the other big exception described in the article.


The logistical nightmare of hydrogen makes its production price almost irrelevant. Using surplus wind energy for carbon capture to create synthetic fuels is much smarter because these liquids are compatible with our current global infrastructure. You bypass the need for expensive new pipelines and specialized tanks entirely. By binding green hydrogen into a stable synthetic hydrocarbon, you get a fuel that is easy to move, has high energy density, and won't leak through solid steel.


The price of H2 is a contributing factor to the price of synthetic fuels, though. Just saying. Otherwise I agree with your points on synthetic fuel.


Solid state batteries are overhyped because their production complexity makes them a pricing nightmare for the average consumer. Sodium ion batteries are the practical choice for short distance transport because they are affordable and charge incredibly fast.

When it comes to long distance shipping or aviation, the energy density of liquid fuel is simply too hard to beat. Fossil fuels will stay dominant for decades, likely evolving into carbon captured or bio derived alternatives rather than being replaced by batteries.


This outlook is as short-sighted as the 2000 fiber optic bust. Critics then thought overcapacity meant the end, yet that infrastructure eventually created the modern internet. Capital does not walk away from a fundamental shift just because of one market correction. While specific companies may fail, the long-term value of the technology ensures that investment will continue far beyond a five-year window.


But fiber optic cable laid in 2000 is still very usable in 2026. AI hardware purchased in 2026 is going to be out of date very quickly by comparison.


The massive investment in power grids and data centers provides a permanent physical backbone that outlives any specific silicon generation. This infrastructure serves as a durable shell for the model design knowledge and chip architectural IP gained through each iteration. Capital is effectively funding a structural moat built on energy access and engineering mastery.


Seems like there's a lot of resources being dumped into those data centers that will not be very useful. Saying it will all be worthwhile because we'll have the buildings and the modest power grid updates (which are largely paid for by taxpayers, anyway) feels like saying a PS5 is a good long-term investment because the cords and box will still be good long after the PS5 has outlived its usefulness.


The "PS5" analogy fails to account for how "useless" hardware often triggers the next paradigm shift. For decades, traditionalists dismissed high-end GPUs as expensive toys for gamers, yet that specific architecture became the accidental engine of the AI revolution.


And you imagine these incredibly expensive-to-operate, environmentally damaging, highly specialized, years-outdated GPUs will trigger some sort of technological revolution that won’t be infinitely better served by the shiny new GPUs of the day that will not only be dramatically more powerful, but offer a ton more compute for the amount of electricity used?

The AI use of GPUs didn’t stem from a glut of outdated, discarded units with nearly no market value. All of those old discarded GPUs were, and still are, worthless digital refuse.

The closest analog I can think of to what you're referring to is cluster computing with old commodity PCs, which got companies like Google and Hotmail off the ground... for a few years, until they could afford big-boy servers. Now all of those, and most current PCs on the verge of obsolescence, are also worthless digital refuse.

The big difference is that Google et al chose those PC clusters because they were cheap, commodity pieces right off-the-bat, not because they were narrowly scoped specialty hardware pieces that collectively cost hundreds of billions of dollars.

Your supposition fails to account for our history with hardware in any reasonable way.


Focusing exclusively on the physical decay and replacement cycle of hardware is a classic case of tunnel vision. It ignores the fact that the semiconductor industry’s true value lies in the evolution of manufacturing processes and architectural design rather than the lifespan of a specific unit. While individual chips eventually become obsolete, the compounding breakthroughs in logic and efficiency are what actually drive the technological revolution you are discounting.


Tunnel vision is ignoring the astonishing amount of money and environmental resources our society is dumping into these very physical, very temporarily useful chips and their housing because of... what we learn by doing it. We could have dumped 1/100th of that money into research and we'd have been further along.

This isn't a normal tech expenditure: the scale of this threatens the economy in a serious way if they get it wrong. That's 401ks, IRAs, pension plans, houses foreclosed on, jobs lost, surgeries skipped... if we took a tiny fraction of this race-to-hypeland and put it towards childhood food insecurity, we could be living in a fundamentally different-looking society. The big takeaway from this whole ordeal has nothing to do with semiconductors: it is that rich guys playing with other people's money, singularly focused on becoming king of the hill, are still terrible stewards of our financial system.


Dismissing massive capital expenditure as "hypeland" ignores the historical reality that speculative bubbles often build the physical foundation for the next century. The Panic of 1873 saw a catastrophic evaporation of debt-driven capital, yet the "worthless" railroads built during that frenzy remained in the ground. That redundant, overbuilt infrastructure became the literal backbone of American industrialization, providing the logistics required for a global economic shift that far outlasted the initial financial ruin.

Divorcing research from "learning by doing" is a recipe for a bureaucratic ivory tower. If you only funnel money into pure research without the messy, expensive, and often "wasteful" reality of large-scale deployment, you end up with an economy of academic metrics rather than industrial power.

The most damning evidence against the "research-only" model is the birth of the Transformer architecture. It did not emerge from an ivory tower funded by bureaucratic grants or academic peer-review cycles; it was forged in the fires of industrial practice.

History shows that a fixation on immediate social utility or "rational" cost analysis can be a strategic trap. During the same era, Qing Dynasty bureaucrats employed your exact logic, arguing that the astronomical costs of industrialization and rail were a waste of resources better spent elsewhere. By prioritizing short-term stability over "expensive" technological leaps, they missed the industrial window entirely. Two decades later, they faced an industrialized Japan in 1894 and suffered a total collapse. The "waste" of one generation is frequently the essential infrastructure of the next.


How much capital was wiped out for it to be cheap after the bust? Someone is going to eat the exuberance loss in the near term, even if there is long term value.


Humanity has never known a world without surveillance. Responsibility cannot exist without being watched. Primitive tribes lived under the constant eye of the group, and agricultural eras relied on the strict oversight of the clan. Modern states simply adopted new tools for an ancient necessity. A society without monitoring is a society without accountability, which only leads to the Hobbesian trap of endless conflict.


Mass surveillance is a relatively recent development. Dense urban civilizations are not. And yet their denizens have not historically devolved into a “nasty, brutish, and short” existence. In fact, cities have been centers of culture and learning throughout history. How does this square with your theory?


The 19th century was the true cradle of mass surveillance. Civil registration, property tracking, and institutionalized police forces provided the systemic oversight required to manage dense urban life. These administrative tools served as the analogue version of digital monitoring to ensure every citizen remained known and categorized. Cities thrived as centers of culture only because these new forms of visibility prevented the Hobbesian collapse that anonymity would have otherwise triggered.


And what about all of the previous ~40-50 centuries where cities were centers of learning and art and not Hobbesian hell holes? Ur is slightly older than the 19th century, I believe.

And note that there is evidence for cities of tens of thousands of inhabitants from 3000 BCE, while Rome reached 1,000,000 residents by 1 CE. Again, without becoming some Hobbesian nightmare.


Augustus established the Vigiles Urbani and the Urban Cohorts, creating a state-funded police and firefighting force to replace the chaotic and often violent system of private client-patron justice. These were the bold, persistent experiments in social order that allowed a million people to coexist without descending into a Hobbesian hell.


None of those things are remotely comparable to the surveillance we're talking about. There's a world of difference between, "My city knows who owns what properties and also we have a police force", and "Western intelligence agencies scoop up every bit of data they can grab about anyone on the planet and store it forever"


In my country it wasn't until the late 19th century that someone had the balls to stop going to church on Sunday. It was a huge scandal at the time but it all worked out in the end.

Humans have always done mass surveillance on each other. You don't need technology for that.


At no point in time before this era was it possible for a random bureaucrat to have a reasonably comprehensive list of everyone in a country who attended church yesterday.

Scale matters.


This is a reduction to absurdity. Those old societies you cite didn't actively surveil with the goal of micromanaging people's daily lives the way that modern ones do.


Rural surveillance was far more suffocating because every single action was subject to the community gaze. This is exactly why classic literature frames the journey to the city as a liberation from the crushing weight of the village eye. The idea of the peaceful countryside is a modern utopian fantasy that ignores how ancient clans dictated every aspect of life including marriage and death. Modern Homeowners Associations prove that localized oversight is often the most intrusive form of management. Ancient society did not just monitor people; it owned their entire existence through inescapable social visibility.


"It was always shit everywhere" is revisionist history born out of the fantasy of statists looking to justify the modern (administrative) enforcement state.

While the lack of anonymity in small towns certainly puts a damper on one's ability to deviate too far from social norms, the list of things that could get you subjected to government violence without creating a victimized party was infinitely shorter. Things that get state or state-deputized enforcers on your case today were, 150+ years ago, matters of "yeah, that's distasteful, he'll have to settle that with God," or would only come back to bite you when something actually happened, because society did not have the surplus to justify paying nearly as many people to go around looking for deviance that could be leveraged to extract money. These people had way more practical day-to-day freedom to run and better their lives than we do now, even if constrained by the fact that they had substantially less wealth to leverage to that effect.

> Modern Homeowners Associations prove that localized oversight is often the most intrusive form of management

And they almost exclusively deal in things that historical societies didn't even bother to regulate.

You're beyond delusional if you think running afoul of an HOA is worse than running afoul of the local, state, or federal government. Yeah, they can screech and send you scary letters with scary numbers, but they don't get the buddy treatment from courts that "real" governments do (to the great injustice of their victims), and their procedural avenues for screwing their victims on multiple axes are far more limited.

Seriously, go get in a pissing match with a municipality over just where the line for "requires permit" is, and get back to me. Unless you want to do something that is more than petty cosmetic stuff and unambiguously in violation of the rules, an HOA is a paper tiger for the most part (not to say that they don't suck).


Modern bureaucracy provides the institutional architecture and political recourse needed to check such arbitrary local tyranny. Without a central legal authority, an HOA or a town council becomes a lawless fiefdom. In those "freer" times, falling out with the local elite meant you didn't fight a permit; you simply had to pack your life and leave.


That's an incredibly bullshit argument to defend the indefensible.


Your reaction actually proves the point. Aggression thrives in anonymous spaces because the lack of oversight removes the weight of accountability. When people feel unobserved, they quickly abandon the social friction that once held tribes and clans together. You are essentially providing a live demonstration of why a society without any form of monitoring inevitably slides into the Hobbesian trap.


I don't think a random internet comment proves anything about society at large.

People don't hesitate to be aggressive even when they're not anonymous and there's a threat of accountability - see, all crime, or people just acting shitty toward others.

Mass surveillance does not cause everyone to magically get along.


History shows that whenever surveillance gaps appear, chaos follows. The explosion of crime during early urbanization was the specific catalyst for the creation of modern police forces because traditional social bonds had failed to provide oversight in growing cities. Japan maintains its safety through a deep-rooted culture of mutual neighborhood monitoring that leaves little room for anonymity. Even China successfully quelled the violent crime waves of its early economic boom by implementing a sophisticated surveillance network.


Neither police forces nor "neighborhood monitoring" is equivalent to mass surveillance, though.

Anyway I'm curious why - despite having less anonymity than at any point in history, at least from the perspective of law enforcement - we still see high crime rates, from fraud to murders?


This scenario echoes the fatal flaw of 19th-century Marxist theory by assuming that surging productivity leads to a permanent reserve army of the unemployed and systemic collapse. Marx failed to foresee how the 20th-century economy would elastically adapt through the birth of a massive service sector that absorbed labor displaced by industrial automation.

While this Global Intelligence Crisis assumes a rigid endgame where machines spend nothing and humans lose everything, it ignores the historical reality that human desires are infinite. As AI commoditizes current white-collar tasks, the economy will pivot toward new and currently unimaginable domains of human value. A 19th-century economist could never have predicted the rise of cybersecurity or the creator economy, and we are likely in a similar pre-prediction stage today. Betting against human adaptability has been a losing trade for two hundred years because our social and economic structures have always evolved to find new utility for human agency.


>it ignores the historical reality that human desires are infinite

This is factually false. Human desires are only infinite for things that have positive utility and cost nothing and by nothing I mean nothing. The moment you have to spend even a single second thinking whether you want to buy or not, demand collapses from infinite to finite by definition.

This means people will accumulate infinite quantities of money, stocks, etc, but never infinite quantities of anything concrete that exists in the real world.


Reality might stop a transaction, but it cannot kill a drive. Sublimation reroutes our infinite hunger into the scientist’s obsession or the artist’s lifelong pursuit of beauty. These are not finite market choices. They are the redirection of a psychic energy that no physical object can ever satisfy.

This redirection is precisely what fuels the expansion of the global economy into realms far beyond basic survival. When a primal drive is blocked by the cost of a physical object, it sublimates into the high-end art market, the pursuit of scientific breakthroughs, or the infinite scroll of digital entertainment. Entire industries exist solely to harvest this redirected energy.


Historical records, notably by Herodotus, confirm that the Persian Empire used gold to bribe Greek Oracles, turning "divine prophecy" into a psychological warfare tool.

This mirrors a core flaw in Polymarket: profit maximization is not truth-seeking. Just as Persian bribes manipulated ancient morale, modern "whales" can distort market odds to manufacture narratives or hedge external interests. In both cases, the prediction is a commodity sold to the highest bidder rather than an objective forecast of reality.


and newspapers are owned by fatcats. but we are still interested in what they have to say.


This comparison is flawed because accountability creates a structural divide. A newspaper has a visible masthead and named editors, creating a reputational stake where consistent bias leads to institutional ruin.

In contrast, Polymarket relies on pseudonymous liquidity. A "whale" can use a "Persian bribe" to distort odds and then vanish without consequence. While a newspaper offers a testable argument, Polymarket provides a "math-washed" price signal that allows financial manipulation to masquerade as objective probability.


> objective probability.

i don't believe such a concept exists. if you do, then you have greater epistemic problems that should be resolved first, before reading either the newspaper, or the prediction market.


Dismissing "objective probability" is a convenient philosophical retreat that strips Polymarket of its only legitimate function. If the market isn’t an attempt to aggregate information toward a binary, external "ground truth," then it isn't a forecasting tool—it’s a "Keynesian Beauty Contest" where people bet on what they think others believe rather than what will actually happen.

Without an objective anchor to measure against, concepts like "mispricing" or "alpha" become logically impossible; you cannot have a "wrong" price if you don't believe a "right" probability exists. If we accept that the market signal is just a reflection of whale liquidity and "Persian bribes" rather than a calculated proximity to reality, then the platform is merely a math-washed gambling hall. Ultimately, a prediction market that abandons the pursuit of objective truth loses its epistemic utility and its entire reason to exist.
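One concrete sense in which a forecast can be "wrong": proper scoring rules like the Brier score measure probability estimates against realized outcomes. A minimal sketch:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts (0..1) and
    binary outcomes (0 or 1). Lower is better; a well-calibrated
    forecaster minimizes it in expectation, which is what gives
    'mispricing' operational meaning."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
```

A market price of 0.90 on an event that occurs scores 0.01; a price of 0.50 scores 0.25. Over many resolved markets, that difference is exactly the external "ground truth" the signal is supposed to track.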


prediction markets are a useful tool for aggregating information about uncertain events.

