Hacker News | harrall's comments

This article gets ahead of itself.

The issue isn’t the splitting. There is no fiber to even split in most places. A lot of places in America had their “network” infra built 50-100 years ago on copper and no one wants to pay to basically rebuild all of it.

I happen to live in an area where there are still above ground utilities.

We got >5 gig fiber fast. We have 700Mbps 5G. I literally watched them string the fiber on the poles.

It’s still not shared, but it’s fast because it’s new. Shared would be preferred, but you need to tear down and build new first, and most people are fine with what copper gives them. Shared may even be cheaper, but most people don’t think we need to rebuild anything.


> A lot of places in America had their “network” infra built 50-100 years ago on copper

That's no different to Switzerland so far…

> and no one wants to pay to basically rebuild all of it.

…but the Swiss seem to have decided it's worth the investment.

> I happen to live in an area where there are still above ground utilities.

If anything, that can make things cheaper. You don't need to bury everything, and in some places (e.g. earthquake-prone Japan) burying is actively counterproductive. But even where it isn't counterproductive, it's certainly more expensive.

Sent from a 25G internet connection. My laptop only has 10G via TB though.


It's almost certainly shared. 99% of FTTH is (X)G(S)-PON, which shares the fibre among nearby properties. Usually something like 32 max.

The Swiss use point-to-point fibre (there are a few small pockets of this elsewhere). But in reality it is very hard to saturate. XGS-PON has 10G/10G shared across the node. GPON has 2.4 Gbit down/1.2 Gbit up shared across the node.

In reality, point-to-point is not really a benefit in 99.99% of scenarios; residential internet use cannot saturate 10G/10G for long, even with many 'heavy' internet users (most users can't really get more than 1 gig internally over WiFi to start with).

And if it is a problem there is now 50G-PON which can run side by side, so you just add more bandwidth that way.
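To put rough numbers on that sharing, here's a back-of-envelope sketch. It assumes the nominal line rates quoted above and a 1:32 split; real deployments vary:

```python
# Worst-case per-subscriber downstream share on a PON splitter.
# Line rates are the nominal ones mentioned above; the 1:32 split
# is a common default, not a hard rule.
PON_DOWN_GBPS = {"GPON": 2.4, "XGS-PON": 10.0, "50G-PON": 50.0}

def worst_case_share_mbps(tech: str, split: int = 32) -> float:
    """Downstream Mbps per subscriber if every subscriber on the
    splitter tried to saturate the link at once (which rarely happens)."""
    return PON_DOWN_GBPS[tech] * 1000 / split

for tech in PON_DOWN_GBPS:
    print(f"{tech}: {worst_case_share_mbps(tech):.1f} Mbps each at a 1:32 split")
```

Even the worst case on XGS-PON (about 312 Mbps each) assumes all 32 subscribers are saturating simultaneously, which is the point: bursty residential traffic almost never collides like that.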


Residential 2.5 Gbps equipment is just starting to appear as a “default” and 10 Gbps is still pretty rare, though accessible if you want it.

I don’t even have 25 Gbps and I’ve a home lab!


I had 600 Mbps down/200 up (I could have upgraded to 1 Gbps) and I downgraded to 175 down/50 up (to switch to a more reliable provider) and didn’t notice any difference (family of 4).

You’re not seeing better engines because there aren’t any. We are reaching the limits of physics.

That’s why we are working on alternatives like refueling in space or reusable ships.

The Artemis missions are testing areas where we still have a lot of room to improve — materials (a huge one), international standards for things like docking ports, computing, radiation safety, and a lot more.


Yeah, RS25/SSME still have a higher specific impulse than any boost stage engine in operation, past or present.

Artemis II doesn’t have any docking hardware since it won’t have anything to dock with. And Artemis in general is just using the IDSS used on the ISS and by Dragon and Starliner, nothing new being discovered or tested there.

<snark>Did we really need to spend $90 billion and send people past the Moon to troubleshoot Bluetooth?</snark>[1]

Sharing because it captures the je ne sais quoi that seems off about Artemis for me.

1. https://github.com/RICLAMER/Artemis_II_2026

NASA's Artemis II Live Views from Orion, 04 - Day 1-2 - 03-04-2026 - 1645-Transcript-EN.txt: "03/04/2026 - 18:57:27 (-3 TMZ) | 01:23:22:27 (Artemis Clock) "No joy seeing the device in the list of available devices when I attempt to re-pair it after doing the Bluetooth forget."


The reality is that nothing in life can be trusted but everything can be modeled.

For example, you never know what a driver going 60 miles/hr will do, but you do know that the laws of physics say that the driver can’t suddenly go backwards.

Once you figure this out, you realize you can work through absolute chaos because you can work with black boxes.

It doesn’t matter if the media is lying. For example, the source might say there’s this magic pill that has cured cancer, but if that were actually true, we wouldn’t still have chemotherapy. Therefore, without ever having to grapple with the question of trust, the actual truth is bounded between “fake news” and “there may be potential new developments.” If you still care, you can look into it, but 3 seconds of modeling already gave you a good black-box answer.

What people mistakenly do is try to determine if the statement is true or not, but that’s a waste of time in most cases. It’s better to model the system enough to work within it and then move on.


Sure, but I think the media mostly misleads through omission and shifting the focus.

It seems trivial, but on a national or global scale, so many things happen that it becomes a powerful force.

Every day, the media ignores millions of events that happen in the country. It only reports on a few hundred. The way it chooses what is important gives it massive power.

Every day, some politicians somewhere act in a corrupt manner. The media covers a tiny fraction of those. Instead the media might fill the space with celebrity gossip. This creates a false impression that things are alright when they are not.

Unfortunately it's hard for us to get a general sense for how people in our society are doing because our perception is badly distorted.

My sense is that our current society is terrible and many people are harmed and left behind but the suffering is covered up and nobody is held accountable. This is based on what I've observed of people who I used to go to school with (for example).


Sure but I think it’s more a deeper structural problem.

When in human history have we had access to information at this rate? Yes, there are bad people, but there always have been and always will be bad people (like people selling quack medicines 200 years ago).

We’re in this situation because it’s a brand new problem (information density) and we are still looking for a solution. I mean we now have a way to ensure medicines you buy are reasonably vetted, but it took us a while to figure that one out.

It just sucks that we have to live during the times when we haven’t figured it out, but there’s always going to be some new problem for human society to grapple with.


ISO changes the analog gain and in a way yes, it does make it more sensitive to a certain range of brightness.

This is because the ADC (analog-to-digital converter) right after it can only handle so many bits of data (like 12-16ish in consumer cameras). You want to “center” the data “spread” so that when the “ends” get cut off, it’s not so bad. Adjusting the ISO moves this spread around. In addition, even if you had an ADC with unlimited bit depth, noise gets added between the gain circuit and the ADC, so you want to raise the base signal above the “noise floor” before it gets to the ADC.

Gain is not great — it amplifies noise too. You want as low ISO as possible (lowest gain), but the goal is not actually to lower gain; your goal is to change the environment so you can use a lower gain. If you have the choice between keeping the lights off and using higher ISO versus turning on the lights and using a lower ISO, the latter will always have less noise.

Most photo cameras have one gain circuit that has to cover both dark and light scenes. Some cameras, like the Sony FX line, actually have two gain circuits connected to each photosite and you can choose, with one gain circuit optimized for darker scenes and the other optimized for brighter scenes. ARRI digital cinema cameras have both, and both are actually running at the same time (!).
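The "raise the signal above the noise floor before the ADC" idea can be sketched in a toy simulation. The noise level, bit depth, and scene brightness below are invented for illustration, not taken from any real sensor:

```python
import numpy as np

rng = np.random.default_rng(0)

ADC_BITS = 12        # consumer-ish ADC depth, per the comment above
READ_NOISE = 0.002   # assumed noise injected between the gain stage and the ADC
LEVELS = 2**ADC_BITS - 1

def capture(signal, gain):
    """Amplify, add post-gain read noise, quantize in the ADC,
    then refer the result back to input units by dividing out the gain."""
    v = np.clip(signal * gain + rng.normal(0, READ_NOISE, signal.shape), 0.0, 1.0)
    return np.round(v * LEVELS) / LEVELS / gain

dim_scene = np.full(100_000, 0.001)   # a signal far below full scale
for gain in (1, 8, 64):
    err = np.std(capture(dim_scene, gain) - dim_scene)
    print(f"gain {gain:3d}: rms error {err:.1e}")
```

The input-referred error shrinks as gain rises, because the fixed post-gain noise and the quantization step both get divided by the gain; that's exactly why you raise ISO for dim scenes, and why it stops helping once the amplified signal would clip.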


> your goal is to change the environment

...or integration time.


This is how free drink refills, airplane tickets, Internet service, unlimited data plans, insurance, flat rate shipping, monthly transit passes, Netflix, Apple Music, gym memberships, museum memberships, car wash plans, amusement park passes, all you can eat buffets, news subscriptions, and many more work.

Either you get a flat rate fee based on certain allowed usage patterns or everyone has to be billed à la carte.
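The flat-rate logic can be sketched with toy numbers (all invented): the price has to cover the average customer's cost, so light users end up subsidizing heavy ones.

```python
# Hypothetical monthly usage (in arbitrary units) across five customers.
usage_units = [2, 5, 8, 40, 120]
cost_per_unit = 0.10   # assumed marginal cost per unit
margin = 1.25          # assumed 25% markup

# The flat price is set off the *average* cost, not any individual's usage.
flat_price = sum(usage_units) / len(usage_units) * cost_per_unit * margin
print(f"flat price: ${flat_price:.2f}")

# The heaviest user pays far less than their cost; the lightest pays far more.
for u in (usage_units[0], usage_units[-1]):
    print(f"usage {u:3d}: actual cost ${u * cost_per_unit:.2f}, billed ${flat_price:.2f}")
```

The model only works while the usage distribution stays roughly as assumed; if everyone drifts toward the heavy end, the flat price has to rise or limits appear.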


This is a different case. Those all have limitations grounded in human behavior (it's not necessary or possible to constantly be washing your car the entire month when you pay for unlimited washes), and that constraint doesn't exist here. The types of plans available should reflect that reality. If gyms faced a situation where people would go and spend 18 hours working out every day for a month, they would probably change how they billed things.

Your comparisons are all also "unlimited" situations to Claude's very much limited situation. You can't buy a plan for Claude that is marketed as being unlimited. They're already selling people metered usage. They're just also adding restrictions on top of that.


They sell metered usage with the implied expectation that most won't use it fully. Power users and users of stuff like OpenClaw don't match that idea.

So they further restricted the metered caps, which were only ever offered on the assumption that few would reach them.

Simple as that.


>Power users and users of stuff like OpenClaw don't match that idea.

Then they should figure out how to structure an offering that accommodates this type of usage, not just blanket-ban it.


> Then they should figure out how to structure an offering that accommodates this type of usage

They did, didn't they? You can pay the non-plan rate.

> not just blanket ban it

They didn't do that. The email specifically tells you how to use Openclaw with Anthropic. There is no "blanket ban".


Why "should" they? There's no reason they would, especially when their competitor now owns OpenClaw.

Because a big part of Anthropic's story is that they build based on how people actually use AI. Power users aren't just annoying edge cases, they're signal. Throttling them and calling it done is inconsistent with that.

> Power users aren't just annoying edge cases, they're signal.

You got that right; in this case they are signalling that AI token providers are not going to be able to run at a profit anytime soon.

Not sure if that helps or hurts your argument, though.


> Power users aren't just annoying edge cases, they're signal.

Not all power users. Some re-invent the wheel and/or do things inefficiently, and in most cases there's no business incentive to adapt the service to fit the usage patterns of those users, or of other users that deviate from the norm in regards to resource usage.


They build based on how _people_ use AI.

Sorry to tell you, but generally any company's "story" is all marketing and PR; if it interferes with their making money, which it does in this case, that company will not hesitate to leave it behind.

Oh, the billion-dollar VC-backed pre-IPO company's story was this? OMG, and they somehow are not delivering up to your standards? Damn, they better get their act together lest people like you whine on Twitter about them losing their way.

> Why "should" they?

Because it is clear that there is a market demand for it.


There is also a clear market demand for $10 bills sold for $5, but I don't see you tapping into that opportunity!

I didn't write anything about pricing. I just claim that people would love an offering without the discussed restriction, and because there is clear evidence of such a demand, it would make sense for Anthropic to prepare such an offering.

>I didn't write anything about pricing

Yes, and that's exactly the problem I'm pointing at.

Your comment that "people would love an offering without the discussed restriction" ignores the pricing burden of that offering, which is why it's confusing to ask why Anthropic doesn't just offer it.


> I didn't write anything about pricing. I just claim that people would love an offering without the discussed restriction,

The API has no restrictions; what is the people's objection to that?


Then you mean the API, and if that's not sufficient, then you do have an issue with wanting something for nothing.

They did figure out how to structure an offering that accommodates that type of usage: pay for your tokens.

They did: just use the metered API.

Don’t cry while you’re ruining it for everyone.

Isn’t that just usage based charges?

"Unlimited" has always been a lie. There is no free lunch. There are always limits.

I've had to unwind "unlimited" within startups that oversold. I've been bit by ISPs, storage providers, music streamers, fuckin _Ubers_, now AI subscription services, that all dealt in "unlimited". None of them delivered in the long run.

I'd be mad at Anthropic if it weren't for the fact that my experience lets me see this sort of thing coming from a mile away. There are a lot of folks, even on HN, who haven't been around as long. I understand the outrage. I've been there. But these computers cost money to run, and companies don't operate at a loss in the fullness of time.

Once you know that unlimited trends towards limited, the real question is whether we're equipped as a society to deal with the fact that the capital-L Labor input to the economic equation is about to be replaced with a Capital input for which only a handful of companies have a non-zero value.


You can both know that "unlimited" means "limited" and also be pissed that they market it as such and try to conceal the actual limits.

> You can both know that "unlimited" means "limited" and also be pissed that they market it as such and try to conceal the actual limits.

Reminds me of when AT&T had a fake 5G decoration on phones.

"AT&T won’t remove fake 5G logo even after ad board says it’s misleading"[0]

You can just get away with lying. That's the level of enforcement that exists against unethical behavior in business today.

0. https://www.theverge.com/2020/5/20/21265048/att-5g-e-mislead...


On your 1.5Mbps link, you could theoretically download 500GB per month. A huge amount, but I believe it was often genuinely allowed, because their uplinks could cope with it. Unlimited could genuinely be unlimited.

But now you might get things like “unlimited” 1Gbps… which reverts to 10Mbps (1% speed) or worse after 3.6TB (eight hours). And so your new theoretical maximum is about 6.8TB per month rather than 330TB.
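Those figures check out. A quick sketch of the arithmetic, assuming a flat 30-day month:

```python
SECONDS_PER_MONTH = 30 * 24 * 3600

def monthly_cap_tb(rate_gbps, throttle_gbps=None, throttle_after_tb=None):
    """Maximum TB downloadable in a month, optionally with a speed
    cut to throttle_gbps after throttle_after_tb TB has been used."""
    full_tb = rate_gbps / 8 * SECONDS_PER_MONTH / 1000   # Gbps -> TB over the month
    if throttle_gbps is None or full_tb <= throttle_after_tb:
        return full_tb
    secs_at_full = throttle_after_tb * 1000 * 8 / rate_gbps
    remaining = SECONDS_PER_MONTH - secs_at_full
    return throttle_after_tb + throttle_gbps / 8 * remaining / 1000

print(monthly_cap_tb(0.0015))            # the old 1.5 Mbps link: ~0.5 TB
print(monthly_cap_tb(1.0))               # unthrottled 1 Gbps: ~324 TB
print(monthly_cap_tb(1.0, 0.01, 3.6))    # 1 Gbps cut to 10 Mbps after 3.6 TB: ~6.8 TB
```

So the throttle quietly shrinks the theoretical ceiling by a factor of nearly fifty while the plan still says "1 Gbps".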


This is all just the classic "the first hit's free" business model.

>If gyms faced a situation where people would go and spend 18 hours working out every day for a month, they would probably change how they billed things.

Not the best example. The upkeep cost of a gym is pretty flat regardless of how much people use the facilities. Two people can't use a single machine at the same time and make it wear out twice as fast. The price of memberships is not correlated with usage; it's inversely correlated with the number of memberships sold.


That two people can't use a machine at the same time is the issue. If you have 50 machines and 200 customers, all of whom want to be in the gym 18 hours per day, that's quickly going to lead to cancelled subscriptions. Now you need more space and machines, or some other way to balance things.

Agreed, but it's an indirect causal link, not a direct one. If the demand far outstrips the possibly supply the demand will have to go down, and it can either go down by people accepting that they can't be in the gym as much time as they would like, or as you say by memberships being cancelled (in which case the price may go up or something else might change).

>Two people can't use a single machine at the same time and make it wear out twice as fast

The machine doesn't care about the number of people using it. If it's constantly being used, it will wear out faster. You are conflating "we price based on expected under-utilization" with "costs don't scale with usage." Those are different things.

The inverse correlation you talk about isn't relevant here. People buy gym memberships intending to go, feel good about the intention, and then don't follow through. The business model is built on that gap. That's pretty specific to fitness and a handful of similar industries where aspiration drives purchase.

Anthropic doesn't sell based on a "golly gee I hope people don't use this" gap - they sell compute. Different business.


> Anthropic doesn't sell based on a "golly gee I hope people don't use this" gap - they sell compute. Different business.

There is nothing anywhere hinting at that.

They don’t sell compute. They sell a subscription for LLM token budgets that they hope people don’t use because the compute is vastly more expensive than what they charge or what users are ever willing to pay.

Especially with enterprise subscription plans the idea is for customers to never utilize anywhere close to their limits.


>If it's constantly being used, it will wear out faster.

Yeah, but there's an absolute limit to that, beyond which the cost doesn't keep increasing. Beyond that point, the QoS goes down (queues).

>You are conflating "we price based on expected under-utilization" with "costs don't scale with usage."

I'm not conflating anything, I'm responding to what you said:

>If gyms faced a situation where people would go and spend 18 hours working out every day for a month, they would probably change how they billed things.

Why would a gym need to change how they bill things if all their customers were aiming for maximal utilization, when their costs would barely see any change? I doubt your typical gym operates on razor-thin margins.


Gym costs absolutely scale with usage. Equipment wears faster under heavier use. Cleaning and maintenance staff hours scale with how much the facility is used. Consumables like towels, soap, and chalk go faster. HVAC runs harder. The reason gyms can offer flat-rate pricing is that they bet on under-utilization, not that costs are flat.

Setting that aside, even if we accept your argument that gym costs barely scale with usage, then that makes gyms a bad comparison case for Anthropic, whose costs directly scale with usage. You can't use the gym model to defend Anthropic's pricing decisions if the two cost structures are nothing alike.

I'm arguing that both gyms and Anthropic have costs that scale with usage, but the gym business model assumes a large margin of under-utilization and a hard cap on "power use"; I think neither of those extremes applies to Anthropic's situation. Under-utilizers aren't paying for AI; they have a free tier. There's also a natural ceiling on how much any one person can use a gym. There's no equivalent constraint on API usage.


> The reason gyms can offer flat-rate pricing is that they bet on under-utilization, not that costs are flat.

Yes. In fact I remember hearing about a gym which offered a flat-rate pricing model but explicitly excluded certain professions from it. The deal excluded police, bouncers, models, actors, and air stewardesses; they had a separate, more costly tier for these people. (And I think I heard about it from the indignation the deal caused online.)


>You can't use the gym model to defend Anthropic's pricing decisions if the two cost structures are nothing alike.

Am I? I think you read something into my comments that I didn't write.


> Under-utilizers aren't paying for AI they have a free tier.

Sure they do. Free tiers suck. I may not always need to use AI, but when I need it, I don't want to immediately get hit by stupidly low quotas and rate limits, or get anything but SOTA models.


Rent doesn't work that way... yet. Imagine if it did though, people would be arguing:

"Well, you're not expected to be able to live in that home the entire month that you paid for!"


À la carte is honest; overprovisioning just slows progress by preventing demand from creating pressure to innovate proper solutions.

The commons? Tragic.

Internet service and unlimited data plans work like this though.

If you consider information theory, any system with multiple distinguishable states can store data, which means you can store data in almost any system.

The placement of coffee cups on a table can be used to encode data.

At that point, only your audience needs to know that data is there.
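As a toy illustration (names and layout invented): treat eight spots on the table as bit positions, cup present = 1, absent = 0, and the table holds one byte.

```python
def cups_to_byte(cups):
    """Read the table left to right as a binary number."""
    value = 0
    for present in cups:
        value = (value << 1) | int(present)
    return value

def byte_to_cups(value):
    """Place cups so the table encodes one byte (MSB on the left)."""
    return [bool((value >> i) & 1) for i in range(7, -1, -1)]

table = byte_to_cups(ord("A"))      # arrange cups to spell "A" (65)
assert cups_to_byte(table) == 65    # anyone who knows the scheme can read it back
```

Only the audience's shared convention makes it data; to everyone else it's just cups on a table.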


As part of the contract for construction, the county or city must buy a certain amount of water every year.

Because desalinated water is not economically competitive, it is more expensive, and this effective subsidy raises the cost of the water bill.

This is how it works for the facility in San Diego County.

Building a desalination facility is economically hard to justify because the break-even point seems far away. It also assumes the state won’t eventually create a state-wide solution, which would benefit from a state-level economy of scale that a city/county effort might not.


> It also assumes the state won’t eventually create a state-wide solution, which would benefit from a state-level economy of scale that a city/county effort might not.

How would a state-level solution to who deserves water more benefit from economies of scale? This is about as core of an example of where you don't want central planning as you can find.


This is like saying I know how to do plumbing so now I’m going to do all my own plumbing.

Yet I will still pay for a plumber. I wonder why.


Not the same at all lol. One would require robotics to solve. This is an asinine comparison

That’s why California has passed dozens of state laws in the past 3 years that override many city and county ordinances to force additional supply. Lawmakers knew the only solution was more housing.

These bills permit the construction of denser housing near transit, force cities to actually meet housing quotas, allow the state to override a city’s zoning to permit more housing if they don’t make a good faith attempt, loosen the rules to build ADUs, and many other changes.

Is it ideal that we had to get to this point? No, but housing wasn’t being built.


And the biggest reason, learning from history, is that instability in food prices destabilizes nations.

There have been many “bread riots.”


Bread is much cheaper today (at least relative to incomes) than in the days of bread riots.



Wheat is HEAVILY subsidized.

