If, like me, you have too much "P5 api" muscle memory to really get into LÖVE (not a criticism of that library, tbc), then L5 might be a nice alternative:
Sort-of unrelated (but very on-brand for people into BeOS, I think): it's so satisfying when a webpage is so free of bloat that navigation feels instant and clicking on things has no perceptible latency.
> i think i'll rest for a bit after this. i can only do 80-hour weeks for so long
Jesus Christ, 80 hours?! I really hope the author takes a proper break! I mean, they seem to be riding that incredible high that comes from a breakthrough in deeply understanding a really tough problem after thinking about it for so long, so I kind of get it, but that is all the more reason to take good care of the precious brain that now stores all that knowledge, before it burns out!
String is also a pretty damn fundamental object, and I'm sure trim() calls are extremely common too. I wouldn't be surprised if seemingly small optimizations like this, applied in the interpreter before the JIT kicks in, are not premature optimizations in that context.
There might be common scenarios where this has a real, significant performance impact, e.g. use cases where it's such a bottleneck in the interpreter that it measurably affects warm-up time. Also, string manipulation seems like the kind of thing you see in small scripts that end before a JIT even kicks in but that are also called very often (although I don't know how many people would reach for Java in that case).
EDIT: also, if you're a commercial entity trying to get people to use your programming language, it's probably a good idea to make the language perform less badly on the most common terrible code. And accidentally quadratic (or worse) string manipulation involving excessive calls to trim() seems like a very likely scenario in that context.
This is a really great article, and I really appreciate how it explains the different parts of how JPEG works with so much clarity and interactive visualizations.
However, I do have to give one bit of critique: it also makes my laptop fans spin like crazy even when nothing is happening at all.
Now, this is not intended as a critique of the author. I'm assuming that she used some framework to get the results out quickly, and that there is a bug in how that framework handles events and reactivity. But it would still be nice if whatever causes this issue could be fixed. It would be sad if the website had the same issue on mobile and caused my phone battery to drain quickly when 90% of the time is spent reading text and watching graphics that don't change.
I share this frustration. It's ironic that an article explaining compression and efficiency requires so much client-side overhead.
I've been experimenting with a 'Zero-Framework' approach for a biotech project recently, precisely to avoid this. By sticking to Vanilla JS and native APIs (like Blob for real-time PDF generation), I managed to keep the entire bundle under 20KB with a 0.3s TTI.
We often forget that for users on legacy devices or unstable 3G/Edge connections, a 'heavy' interactive page isn't just slow, it's inaccessible. Simplicity shouldn't just be an aesthetic choice, but a core engineering requirement for global equity.
It looks like there's some requestAnimationFrame call going on more than once per second. It's definitely an energy intensive tab.
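A common fix for that (a sketch of the general dirty-flag pattern, not a claim about how this particular page is built; `makeRenderLoop` and its parameters are names I made up) is to only schedule animation frames when something actually changed, so the tab goes fully idle while you're just reading:

```javascript
// Gate the render loop behind a "dirty" flag: requestAnimationFrame is
// only scheduled after a state change, and at most one frame is pending.
// `raf` is injected (in a browser you'd pass window.requestAnimationFrame).
function makeRenderLoop(raf, render) {
  let dirty = false;     // did the state change since the last paint?
  let scheduled = false; // is a frame already queued?

  function frame() {
    scheduled = false;
    if (dirty) {
      dirty = false;
      render();
    }
  }

  return {
    invalidate() {       // call this whenever the visible state changes
      dirty = true;
      if (!scheduled) {  // coalesce: never more than one pending frame
        scheduled = true;
        raf(frame);
      }
    },
  };
}
```

With this shape, a page showing static text and graphics schedules zero frames, instead of burning a callback 60 times per second.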
But for reference, keeping CNN.com open puts more than double that memory pressure on my 5 year old Mac laptop, and it handles both fine. Do your fans really kick in for heavy sites?
I have an 8 year old laptop that works fine as long as I don't bother with sites like CNN.com. Heck, I even have a 13 year old laptop that works fine on most sites. Absurd ad-tech and tracking technology is not a motivation for me to upgrade, but to avoid badly coded sites.
> However, I misunderstood and came up with an even more extreme version: instead of tracing versions of normal instructions, I had only one instruction responsible for tracing, and all instructions in the second table point to that. Yes I know this part is confusing, I’ll hopefully try to explain better one day. This turned out to be a really really good choice. I found that the initial dual table approach was so much slower due to a doubling of the size of the interpreter, causing huge compiled code bloat, and naturally a slowdown.
> By using only a single instruction and two tables, we only increase the interpreter by a size of 1 instruction, and also keep the base interpreter ultra fast. I affectionally call this mechanism dual dispatch.
I really do hope they'll write that better explanation one day because this sounds pretty intriguing all on its own.
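In the meantime, here's my guess at what the quoted mechanism looks like (purely my interpretation of "one tracing instruction plus two tables"; all names below are made up): the base table maps opcodes to normal handlers, while every slot of the second table points at the same tracing handler, which records the instruction and then falls through to the base handler. Swapping tables toggles tracing without duplicating the interpreter body.

```javascript
// Base dispatch table: one handler per opcode.
const base = {
  push: (vm, arg) => vm.stack.push(arg),
  add: (vm) => vm.stack.push(vm.stack.pop() + vm.stack.pop()),
};

// The single extra "instruction": record the op, then reuse the base handler.
function traceAndDispatch(vm, arg, op) {
  vm.trace.push(op);
  base[op](vm, arg);
}

// Second table: every opcode points at that one tracing instruction.
const tracing = Object.fromEntries(
  Object.keys(base).map((op) => [op, (vm, arg) => traceAndDispatch(vm, arg, op)])
);

// The interpreter loop itself never changes; only the table does.
function run(vm, program, table) {
  for (const [op, arg] of program) table[op](vm, arg);
}
```

If that's roughly right, it explains the code-bloat argument: the interpreter grows by one handler rather than by a full duplicated set of traced handlers.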
Because it looks like your opponent is a Swedish former demoscener who started programming at age 12 on the C64 and Amiga computers in 1990, quickly moving on to writing games and demos in assembly, then professionally developing physics engines since 2001, specializing in game performance profiling and squeezing performance out of optimized mobile games.
As far as game dev stereotypes go you basically picked a Final Boss fight. Good luck, you'll need it :p
I half-agree, but in general I'm always a bit wary of saying "x is easy" for anything in programming, because it might appear easy in isolation, but often that depends on also having a decent understanding of all the interconnected parts of the computer, the OS, and so on that relate to it.
And in turn each of those may also appear simple in isolation, but as a whole it can still be an overwhelming amount of knowledge to learn, integrate and connect the dots between (and then once you reach that level of mastery, there's the meta-problem of applying outdated rules of thumb from a few decades ago to modern hardware). That's where the advantage of having the kind of lifelong experience of a former demoscener gamedev comes in.
Anyway, my earlier post might have been a bit tongue-in-cheek but I'm rooting for you to surpass this guy one day! :)
The quote makes much more sense as an in-joke between two like-minded people, because Alan Kay isn't exactly humble himself nor does he avoid provocative statements.
And speaking as a Dutch man, given the kind of humor we have I'm pretty certain Dijkstra appreciated a good roast like that too.
I have seen that presentation, but it still does not give the full context. At least, I don't think it is obvious from the video alone whether this remark was a friendly jab between friends, or a stereotypical vicious academic back-and-forth between two big names in a field.
Perhaps we would have more of a chance if we make a collection of international differences in checkmark designs and propose that set of glyphs as a whole.
https://l5lua.org/l5-for-processingp5/