The author dismisses cubemaps pretty quickly, but IMO it's the simplest solution, and it's what I did when rendering dynamic gas giants on my own personal project a number of years back. Using a cubemap doesn't result in a 6x increase in memory usage: you're just splitting one large rectangular texture into six smaller square faces, so the total texture detail is the same. The nice part about a cubemap is you don't have to worry about pole pinching at all, plus you can use a 3- or 4-dimensional noise function to easily create a seamless flow field for texture animation/distortion.
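To illustrate, a minimal sketch of the direction-to-face lookup a cubemap sampler does. Face ordering and orientation conventions vary by API; this follows the common +X/-X/+Y/-Y/+Z/-Z ordering:

```python
def cubemap_face_uv(x, y, z):
    """Map a 3D direction to a cubemap face index and (u, v) in [0, 1].

    Faces are ordered +X, -X, +Y, -Y, +Z, -Z; (sc, tc) signs roughly
    follow the OpenGL cube map convention.
    """
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:                  # X-major axis
        face, sc, tc, ma = (0, -z, -y, ax) if x > 0 else (1, z, -y, ax)
    elif ay >= az:                             # Y-major axis
        face, sc, tc, ma = (2, x, z, ay) if y > 0 else (3, x, -z, ay)
    else:                                      # Z-major axis
        face, sc, tc, ma = (4, x, -y, az) if z > 0 else (5, -x, -y, az)
    u = 0.5 * (sc / ma + 1.0)                  # gnomonic projection onto the face
    v = 0.5 * (tc / ma + 1.0)
    return face, u, v
```

Feeding the same direction into a tiling 3D/4D noise function instead of a per-face 2D texture is what makes the flow field seamless across face edges.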
If you want to avoid weird seam artifacts, using two hemispheres of stereographic projection is probably better still. Each hemisphere projects to a disk, but you can fill out the texture to the boundary of a square, duplicating a bit of each hemisphere into the corners of the other's texture (or you could leave those parts blank if you want). There's a 1:2 difference in scale from the center to the edge of each disk, so you could argue this is slightly wasteful of pixels for a given minimum required level of detail. But the projection is conformal, so it's considerably less tricky to figure out how to sample it when deciding the color for destination pixels drawn at steep perspective, and the stereographic projection is very cheap to compute in both directions (one division plus some additions and multiplications per projected point), even cheaper than the gnomonic projection used for a cubemap.
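Both directions really are that cheap; a sketch projecting the upper hemisphere from the opposite pole onto the plane z = 0:

```python
def stereo_project(x, y, z):
    """Project a unit-sphere point onto the plane z = 0 from the pole (0, 0, -1).

    Covers the hemisphere z >= 0 with the unit disk; one division plus a
    couple of multiplies per point.
    """
    inv = 1.0 / (1.0 + z)          # the single division
    return x * inv, y * inv

def stereo_unproject(u, v):
    """Inverse map: plane point back to the unit sphere."""
    d = u * u + v * v
    inv = 1.0 / (1.0 + d)
    return 2.0 * u * inv, 2.0 * v * inv, (1.0 - d) * inv
```

The other hemisphere uses the same formulas with z negated, and the 1:2 scale variation falls straight out of the `1 / (1 + z)` factor going from z = 1 at the center to z = 0 at the disk edge.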
If you want something conformal that has less scale variation and wastes fewer corner pixels than a pair of stereographically projected hemispheres, and is still not too conceptually tricky, you can use a pair of slightly overlapping Mercator projections at right angles to each other, covering the sphere like the two pieces of leather covering a baseball. Each one can have a rectangular texture. There are some NOAA papers suggesting this approach for the grids used to solve the differential equations in weather simulation of the Earth.
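A rough sketch of the two-chart idea; the chart-selection rule here is my own simplification, but it keeps every point within 45° of its chart's own equator, so neither chart gets near its singular poles:

```python
import math

def mercator(x, y, z):
    """Standard Mercator: longitude -> u, log-tangent of latitude -> v."""
    lon = math.atan2(y, x)
    lat = math.asin(max(-1.0, min(1.0, z)))
    return lon, math.log(math.tan(math.pi / 4.0 + lat / 2.0))

def baseball_chart(x, y, z):
    """Pick one of two Mercator charts at right angles, baseball-cover style.

    Chart 0 has its poles on the z axis; chart 1 swaps z and x so its poles
    sit on chart 0's equator. Each point goes to the chart whose poles it is
    farther from (i.e. the smaller pole-axis coordinate).
    """
    if abs(z) <= abs(x):
        return 0, mercator(x, y, z)
    return 1, mercator(z, y, x)   # rotated frame: poles along +/- x
```

With this split, each chart's v coordinate stays within about +/-0.88 (latitude 45° in its own frame), so the rectangular textures stay compact and the overlap region near the seams can be blended.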
The most pixel-efficient projection I know starts by breaking the sphere into an octahedron, then taking each octant to be covered in a grid of hexagonal pixels, using "spherical area coordinates" in each octant to determine the grid. Each octant can then be represented in an ordinary square-pixel image by a half square (a 45-45-90 right triangle), so the result is something like this <https://observablehq.com/@jrus/sac-quincuncial> with a hexagon grid like <https://observablehq.com/@jrus/sphere-resample> (scroll a few examples down from the top of the page). But figuring out the details of how to sample the texture when you need to cross edge boundaries, etc., makes this quite a bit fiddlier than the two-stereographic-projection version. And there will be some seam artifacts.
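To show the octant-to-half-square idea in its simplest form: this is the plain L1-normalized octahedral map often used for normal encoding, not the "spherical area coordinates" grid from the linked notebook, which distributes area more evenly:

```python
def octahedral_encode(x, y, z):
    """Planar octahedral map: sphere direction -> square [-1, 1]^2.

    Each of the eight octants lands on a 45-45-90 triangle of the square;
    the lower four octants are folded outward past the diagonal seams.
    """
    s = abs(x) + abs(y) + abs(z)
    px, py = x / s, y / s
    if z < 0.0:                       # fold the lower octants outward
        px, py = ((1.0 - abs(py)) * (1.0 if px >= 0 else -1.0),
                  (1.0 - abs(px)) * (1.0 if py >= 0 else -1.0))
    return px, py
```

The fiddly part the comment mentions is exactly those diagonal fold seams: a texture fetch whose filter footprint crosses one has to remap its neighbors into the adjacent octant's triangle.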
And if you want to further minimise distortion, you could group the triangular faces of an icosahedron into 10 rhombuses, each covered by one square texture.
It's more math, though, and usually not worth it unless you're already planning to subdivide the surface further for some other reason.
I've since done a bunch more on planetary cloud rendering - I need to do a proper write-up, but it's a combination of volumetric noise, flow fields, and atmospheric scattering.
The story of how I rediscovered the first real project I built after teaching myself to program - an RPG written in QBasic - and how I got it playable again 25 years later.
Great job explaining the whole process! I've been building some similar stuff recently for my procedural space-exploration side project (https://www.threads.net/@mrsharpoblunto/post/CufzeNxt9Ol). I was planning to write up a lot of the details in a dev blog post, but yours covers most of the tricks :)
A couple of extra things I ended up doing were:
1) Using a lower-res texture to modulate the high-res raymarched density. This gives you control over the overall macro shape of the clouds and lets you transition between LODs better (i.e. as you move from ground level up to space, you can lerp between the raymarched renderer and just rendering the low-res 2D texture without any jarring transition).
2) Using some atmospheric simulation to colorize the clouds for sunrise/sunset. To make this performant I had to build some lookup tables for the atmospheric density at given angles of the sun.
3) Altering the step size of the raymarch based on density. I took this from the Horizon Zero Dawn developers: they run a coarse raymarch, and as soon as they get a dense enough sample they step back and switch to a finer step size until the density hits zero again.
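A minimal sketch of that coarse/fine stepping scheme; the `density(t)` callback, step sizes, and extinction coefficient here are all made-up placeholders, and it accumulates plain Beer-Lambert transmittance rather than a full lighting integral:

```python
import math

def march_transmittance(density, t0, t1, coarse=1.0, fine=0.25, sigma=1.0):
    """Adaptive raymarch: coarse steps through empty space, fine steps in cloud.

    On the first non-zero sample we step back one coarse step and re-march
    that stretch at the fine step size; once density returns to zero we go
    back to coarse stepping.
    """
    transmittance = 1.0
    t = t0
    step = coarse
    was_dense = False
    while t < t1:
        d = density(t)
        if d > 0.0:
            if step == coarse:                 # entered a cloud: back up, refine
                t = max(t0, t - coarse)
                step = fine
                was_dense = False
                continue
            transmittance *= math.exp(-sigma * d * step)
            was_dense = True
        elif step == fine and was_dense:       # cloud ended: coarse again
            step = coarse
            was_dense = False
        t += step
    return transmittance
```

Empty stretches of the ray cost only one sample per coarse step, while the cloud interior still gets sampled at the fine resolution.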
There's also OpenC1, an almost-complete open-source remake of Carmageddon. Jeff has also posted a ton of content about the internal workings, file formats, etc. on his dev blog - http://1amstudios.com/projects/openc1/.
Hi bgirard! I support the FB team that works on this - it's on the roadmap to open source it, hopefully in H1 next year :) It's continued to find significant leaks in other parts of our infra too, so hopefully it will be equally helpful to others. Great to see others looking at memory tools now too - I'll definitely be checking Fuite out.
Makes me wonder if you could effectively apply Chernoff faces (https://en.wikipedia.org/wiki/Chernoff_face) to make different hashes easier for humans to recognize. TL;DR: map parts of the hash to aspects of a face (position, size, orientation of eyes, ears, etc.) and you can take advantage of the built-in circuitry in the human brain that can identify very small differences in facial appearance.
The idea is explored a bit in Peter Watts' novel Blindsight - not for hashes, but for visualizing high-dimensional multivariate data via clouds of tormented faces :)
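A sketch of the hash-to-face mapping; the feature names and per-byte scaling here are made up for illustration, and a real implementation would hand these values to a renderer that actually draws the face:

```python
import hashlib

def chernoff_params(data: bytes):
    """Map a hash digest to a handful of hypothetical Chernoff-face parameters.

    Each byte of the SHA-256 digest drives one facial feature, scaled
    into [0, 1]; nearby hashes produce visibly different faces.
    """
    digest = hashlib.sha256(data).digest()
    features = ["eye_size", "eye_spacing", "ear_size", "nose_length",
                "mouth_curve", "face_width", "brow_slant", "pupil_size"]
    return {name: digest[i] / 255.0 for i, name in enumerate(features)}
```

Since the digest is effectively random, any single-bit change in the input reshuffles every feature, which is exactly what you want for telling two hashes apart at a glance.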
I used the 4000 for a number of years, but found that it was causing problems in my right shoulder. The reason is that the full numpad makes the keyboard so wide that your mouse hand has to sit really far off to your right, keeping your shoulder in constant rotation and eventually straining it. I eventually took a hacksaw to it to remove the numpad, which improved things - but this was the main reason I preferred its successor (the Sculpt), where the numpad is a separate detachable piece. These days I'm a split-keyboard ErgoDox weirdo, because it lets me put a trackpad between the keyboard halves, which means my shoulders can stay in a neutral position the whole time.
Have you ever tried switching to using your left hand for the mouse? It's generally recommended for ergonomic reasons even if you don't have a keyboard issue. It'll seem weird to your brain at first, but I got used to it pretty quickly, as did everyone I know who has tried.
When I was in grad school (before I had any ergonomic pains), I had my mouse on the left side at work, and the right side at home - so I was using both configurations daily.
I switched to mousing lefty a long time ago. If you want to get up to speed, just play a few games of solitaire or something like that.
Unfortunately, gaming while mousing lefty is often inconvenient. Many games are set up assuming your left hand is on the keyboard. In most cases you can re-bind the keys though.
I added support for ES2017 bundles for compatible browsers for Instagram.com a couple of years ago - we saw a 5.7% reduction in the size of our core consumer JavaScript bundles when we removed all ES2017 transpiling plugins from our build. In testing we found that the end-to-end load times for the feed page improved by 3% for users who were served the ES2017 bundle compared with those who were not (more info here https://instagram-engineering.com/making-instagram-com-faste...)
Why do you think it's within the margin of error? I never mentioned the error bars on the statistic. We ran an extensive A/B test, and it was a 3% stat-sig improvement.
A 3% improvement on a 3 second page load means it loads in 2.91 seconds. So even if it were statistically significant, it's not practically significant?
Unless your initial page load is taking like 10 seconds, a human wouldn't even notice. And if it is taking 10 seconds, well, then you've got better things to fix.
That's not an actual improvement. That's just noise.
I worked at a big web site and we could demonstrate lots of incremental revenue (many millions of dollars) from a change like this. Even though it doesn’t seem noticeable to an individual, it adds up. If you draw a graph with page load time on one axis, and usage on the other, it’s a smooth curve, not a step function. Out of all your customers, some fraction of them are going to get distracted and disengage after 3 seconds but not after 2.9 seconds.
Imagine you lower the grip on everyone’s tires by 3%. Most people wouldn’t notice, but for a few people it will prevent them from being T-boned when they wait too long to brake for a red light. That improvement would be measurable if your data was good enough. I’m not saying optimizing your website is important like preventing car crashes, but it’s an illustration of how a small change can lead to a measurable improvement.
That's why you should also track other metrics when A/B testing performance changes. During my time at IG, even small perf improvements almost always corresponded to stat-sig improvements in engagement metrics. 100ms is a pretty significant performance win, definitely not noise. The other thing is that those wins often scale with the quality of device and connection, so a 100ms win for a high-end desktop on fiber could be multiple seconds for a low-end mobile on 3G.
I'm actually kinda curious about this in regards to browser finger printing.
This isn't like statistics in a political poll, where there's an expected margin of error. When a site is doing the logging, you know essentially everything about your users' browser, operating system, and sometimes even their hardware.
3% of a million is 30,000; 3% of 500,000,000 is 15,000,000. An absolutely real chunk of users that can produce real value (for Instagram, ad dollars).
Why would there be a margin of error at all? How would it be introduced? A large portion of crawlers masking themselves? That's possible, but I don't know if it would be enough to throw off proper logging.