Hacker News | miyuru's comments

>My last chance is with Scaleway.

I have been using Scaleway for object storage for a while and haven't had an issue with them.

The only problem I had was with another product called Cockpit (they ship metrics about object storage there), which they bundled with the object storage product and which cannot be disabled.

This month I had a 12 euro surcharge, because they enabled something in Cockpit by default a while ago and suddenly started charging for it.


I just tried their domains page: it pulled 10.8MB of data and took 2s for the DOM to be ready.

The page actually took 17s to fully render, with multiple layout shifts.

All to render a domain search bar similar to the Google home page.

https://railway.com/domains


There are some easy optimization wins for this page, but none of the top ones are framework related. Maybe with the faster build times they can easily optimize images and third-party dependencies. As someone else pointed out, nearly half that data is unoptimized images.

For the curious, Google's current homepage is a 200KB payload all in, or about 50 times smaller.


Who remembers sprite sheets? Does that give my age away?

I did an optimization pass for a client once where I got rid of a ton of the sprites but didn't have the energy to redo it all, so it just had huge sections that were blank.

Super snappy loading afterwards though.


Yes, good times! With HTTP/2 and HTTP/3 they don't really matter anymore though; you get similar benefits from request multiplexing.

Spriting is actually harmful for performance except in specific HTTP/1 scenarios.

Doesn't McMaster Carr still use sprites? Is that like the one optimization they managed to get wrong?

Looks like it, but isn't this site famous for being a "classic" storefront?

Some CMSs would auto-generate sprites. If you are showing most of them, it's still a positive, I'd assume. And, if it ain't broke, don't fix it.


I indeed remember.

HTTP 2+ (supported by every web browser) obviates sprite sheets.

They were a useful hack, but still a hack.


web dev is a sewer

All my projects are server rendered with jinja/minijinja, bootstrap, jQuery, and htmx when I need a little bit of SPA behavior on forms.

No builds, just static <script src= tags. Very fast and easy. I'll never recommend anything else.


I'm coming back to Django after a decade of experience with it post-0.96 and having moved to Next.js a few years ago. Going from 1,700 dependencies to 65 total with Django + Wagtail + HTMX.

Sounds more difficult than modern web frameworks. We've all done this for little projects, but for anything with users or development teams, your method is DOA.

I disagree; most webapps, like 99.9% I would say, are just forms, links, and pages. Meaning they can be done with zero reactivity, and that is the simplest and most straightforward way to do it.

Less code is basically always better, so if you can skip the huge amounts of JS and orchestration required by modern web frameworks, then it will be easy. People are out here using React to render static pages. It's very overkill.


That can't be your measurement when you're loading 3 huge JS libraries, which are a lot more code than, say, Svelte, which also excels at SSG.

Eh, there’s tradeoffs. They’re real. But I’ve done plenty of this on teams back in the day before all these frameworks and it can absolutely work. It may even be easier now with JS modules.

When I am given the choice to pick a stack, it is classical Java and .NET Web frameworks, with minimal JavaScript.

On hobby projects, the same script approach without any kind of build step.


With C#'s Blazor templating, you can ditch all JS logic, and use raw C# for all front-end logic, and have it all be transparently server rendered similar to how Phoenix has LiveView.

I also have experimented with HTMX and Django, and that seems to be a nice combination.

Everything is AJAX again.


And all the latency of classic ASP.Net Webforms. Click a button and see the page change in the length of a short yawn. Or, switch to client side wasm and load a payload that makes the typical react dev jealous.

I'm a C# fanboy, but Blazor's DX just isn't very good compared to, say, Vite.

There are many conditions under which the hot reload just straight up crashes out regularly.


Hot reload definitely needs to grow.

The 3.57MB background PNG is hilarious [0]

[0] https://railway.com/dots-oxipng.png


Ha! I normally wouldn’t find it quite so hilarious, but it’s a stylistically pixelated image. There’s just too much irony packed in there to not chuckle.

It's more halftone (might not be the correct term) than pixelated.

There might be more irony in saying it's stylized pixels without realizing that the style of the image can't be replicated with blocks of the same size, but I dunno, I'm not Alanis Morissette.


I got it down to 1.03MB by just switching the PNG to palette encoding.
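For context, a truecolor RGBA PNG spends four bytes per pixel, while a palette-indexed one spends one byte plus a small colour table, which lines up with the roughly 3.5MB-to-1MB drop. A quick standard-library sketch (my own, not whatever tool the parent used) to check which encoding a PNG file uses:

```python
# Read the bit depth and color type from a PNG's IHDR chunk.
# Color type 3 = palette-indexed, 6 = truecolor + alpha (RGBA).
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_color_info(data: bytes) -> tuple[int, int]:
    """Return (bit_depth, color_type) from raw PNG bytes."""
    if data[:8] != PNG_SIGNATURE:
        raise ValueError("not a PNG file")
    # IHDR is always the first chunk: 4-byte length, b"IHDR", then a
    # 13-byte body: width(4), height(4), bit depth(1), color type(1), ...
    bit_depth, color_type = struct.unpack(">BB", data[24:26])
    return bit_depth, color_type
```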

Got the same running it through TinyPNG.

Perhaps if those geniuses at Railway were slightly more competent they wouldn't have created a 10-minute-to-build frontend app, regardless of the choice of underlying framework.

They could have saved themselves 3MB by converting it to AVIF.

Dear lord. It's actually laggy for me to scroll on that page.

Same here, and I'm using a beefy MacBook (Apple M4 Max, 64GB RAM). Something is wrong with the front-end code. There are a lot of animations, so my hunch would be that something goes wrong there.

Moore said computers get twice as fast every 18 months. Web devs took that as a challenge.

He said transistor count on a chip doubles. (The more accurate pithy comment would be that web devs took it as available resources.)

🤓

FWIW, with a pretty aggressive uBlock setup it's "just" 7MB and 1.6s to load, so it might be just their love for analytics, tracking, and measuring, plus a lack of smart code splitting, that's killing the performance.

Not everyone lives in the USA or earns a USA-based salary.

Also, as I said in another thread, they charge $1 even for a single testing HTTP request.

https://news.ycombinator.com/item?id=46873521


Feel free to use local services then; not every company has to support the entire world. Some are fine with a small slice. Expecting otherwise isn't sustainable for the sub-trillion-dollar non-monopolist companies, not without massive public support from the government at least.

Why would you be a useful target market for a business running these services then? Seriously, if you can't pay anything at all, of what value is catering product offerings to you? It is thus irrelevant that you aren't happy with not being offered a free service.

I'm not in the USA or earn USA salary but I can pay 1 euro a month for a thing.

There is a screenshot in the thread that makes it all make sense.

> I'll tell our DMCA agent you're in the clear

https://github.com/mikf/gallery-dl/discussions/9304#discussi...

Similar to patent trolls, there are now DMCA trolls for hire.


The commit author is here and has only posted slop here as well.

https://news.ycombinator.com/submitted?id=ndhandala

Wonder when he will submit them here.


I think that account should be banned. Going further, the whole oneuptime.com domain should probably be blacklisted.

only-eu.eu is registered with Porkbun LLC and hosted on Cloudflare, Inc.

https://whois.eurid.eu/en/search/?domain=only-eu

MX points to route1.mx.cloudflare.net as well.

They should use their own product before giving others advice.


> registered on Porkbun LLC and hosted on Cloudflare, Inc

And is built with Astro, which was created by an American, existed as an American company, and then was absorbed by Cloudflare.


Using Astro and giving your data to American companies are two entirely different things

The site isn't limited to just cloud service providers; it includes Mattel and suggests replacing it with Lego. Are people giving their data to American companies by buying Barbies?

It's all performative anyway.

> Europe does it Better.

> Europe does it Safer.

> Europe does it Greener.

> Europe does it Fairer.

> Europe does it Private.

> Europe does it Stronger.

Unfortunately I think it's mostly just a meme at this point.


I don't need any of these statements to be true to want to divest from a US monopoly on essential software.

It's about decentralization, always has been.


> Europe doesn't trust the USA.

Most of the upper management at companies who use them don't have the technical competence to see it (e.g. banks, supermarket chains, manufacturing companies).

Once they are in, no one likes to admit they made a mistake.


DNS should be auto configured and work with multiple redundancy these days.

If it breaks so badly that you cannot even do a dig, you need to rethink your network.


Oh yes, that's really convenient for home users. "Install this thing on several computers and keep it in sync or you're not qualified to have a network"

Home users would ideally be served by things like mDNS and LLMNR, which should just work in the background. If I want to connect to the thermostat I should be able to just go to http://honeywell-thermostat and have it work. If I want to connect to the printer it should just be ipp://brother and I shouldn't even need to have a DNS server.
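For the curious, mDNS (RFC 6762) is just ordinary DNS sent over multicast UDP to 224.0.0.251 port 5353, with names under .local. A standard-library sketch of building such a query (the thermostat hostname is hypothetical):

```python
# Build a minimal mDNS (RFC 6762) A-record query with only the stdlib.
import socket
import struct

MDNS_GROUP, MDNS_PORT = "224.0.0.251", 5353

def build_mdns_query(hostname: str) -> bytes:
    """Return a raw DNS query packet asking for an A record."""
    # Header: id=0, flags=0, QDCOUNT=1, AN/NS/ARCOUNT=0 (mDNS uses id 0)
    header = struct.pack("!6H", 0, 0, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(p)]) + p.encode() for p in hostname.split("."))
    question = qname + b"\x00" + struct.pack("!2H", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

query = build_mdns_query("honeywell-thermostat.local")  # hypothetical name
# Actually sending it requires a live multicast-capable network:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(query, (MDNS_GROUP, MDNS_PORT))
```

In practice responders like Avahi or Bonjour answer these queries, so applications never build packets by hand; this is just to show there is no central server involved.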

And if DNS fails, I have to use a serial console to get into my router and fix it, because I can't remember what address to type in ssh?

Your interface has a default gateway configured for it, doesn't it? Isn't that default gateway the router? NDP should show the local routers through router advertisements. There is also LLDP to help find such devices. LLMNR/mDNS provides DNS services even without a centralized nameserver (hence the whole "I shouldn't even need to have a DNS server"). So much out there other than just memorizing numbers. I've been working with IPv6 for nearly 20 years and I've never had an issue of "what was the IP address of the local router", because there's so many ways to find devices.

Even then, nobody is stopping you from giving them memorable IP addresses. Giving your local router a link-local address of fe80::1 is perfectly valid. Or if you need more than just link-local networking and still want memorable addresses, use ULAs and have the router on network one be fd00:1::1, the router on network two be fd00:2::1, the router on network three be fd00:3::1, etc. Is fe80::1 or fd00:1::1 really that much harder to memorize than 192.168.0.1 or 192.168.1.1 or 10.0.0.1, if you're really super gung-ho about memorizing numbers?
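Both claims are easy to sanity-check with Python's stdlib ipaddress module:

```python
# Verify the memorable router addresses above are valid IPv6:
# fe80::/10 is link-local; fc00::/7 (which contains the fd00::/8 ULAs)
# is the private unique-local range.
import ipaddress

router = ipaddress.ip_address("fe80::1")
ula_router = ipaddress.ip_address("fd00:1::1")

print(router.is_link_local)                            # True
print(ula_router.is_private)                           # True
print(ula_router in ipaddress.ip_network("fc00::/7"))  # True
```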


> Giving your local router a link-local address of fe80::1 is perfectly valid.

You're right. That would work.


Really, home users who mess with DNS settings? A lot of people here are living in a bubble.

My DNS "server" is a router which can "add" static entries. Easy with static addresses, won't work with dynamic addresses.

What redundancy, multiple servers? Do you think everybody runs a dedicated homelab to access a Raspberry Pi?


> My DNS "server" is a router which can "add" static entries...won't work with dynamic addresses.

Sounds like a pretty poor setup; systems which can auto-add DHCP'd or discovered entries have been around for literally decades. You're choosing to live within that limitation.
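dnsmasq is one such long-standing example: it serves DHCP leases and automatically registers each client's hostname in its local DNS. A minimal config sketch (the domain and address range here are made up):

```
# /etc/dnsmasq.conf — sketch only; values are hypothetical.
domain=lan                                   # clients resolve as <hostname>.lan
dhcp-range=192.168.1.50,192.168.1.150,12h    # leases; hostnames auto-registered
expand-hosts                                 # also qualify plain /etc/hosts names
```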

> What redundancy, multiple servers?

Multicast name resolution is a thing. Hosts can send out queries and other devices can respond back. You don't need a centralized DNS server to have functional DNS.


It will naturally happen when you work with it long term, similar to how it was with v4.

They do use it in their speedtest server.

  curl -v https://speedtest.ams.t-mobile.nl.prod.hosts.ooklaserver.net:8080
  ...
  * Connected to speedtest.ams.t-mobile.nl.prod.hosts.ooklaserver.net (2a02:4240::e) port 8080

Probably a requirement from Ookla, so again "They refuse to implement anything that isn't strictly required".
