Bug report: on Chrome 95 / Windows, fullscreened on a 1080p display, the data-testid="RetoolGrid:container_question" element gets a dynamically-calculated translate3d transform that's not pixel-aligned, making all the text in the code regions extremely blurry.
Thanks, wasn't sure if it was just me. Trying it again on Firefox.
Edit: Win 10, Chrome 95 maximized on 1600x900
Edit 2: Surprisingly, the font that won when I ran it on Chrome was #2 when I tried again on Firefox (I avoided looking at the names as much as possible so they wouldn't bias me).
Twitter had an intern investigating making this situation not suck for screen readers (some magic to detect and convert sequences) last year, but it seems like it got abandoned. Not high enough priority, I suppose.
If I were coding a screen reader, I would detect homoglyphs[0] and map things like Fraktur 'fonts' to their ASCII equivalents. Likewise, if I were coding a search engine, I would scan for Unicode homoglyphs and map them to their ASCII equivalents.
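For the "styled letters" case specifically, much of that mapping already exists in Unicode itself: the mathematical alphanumeric symbols carry compatibility decompositions, so NFKC normalization folds them back to plain ASCII. A minimal sketch (the function name is just illustrative, and this only covers characters with compatibility mappings, not lookalikes like Cyrillic 'а'):

```python
import unicodedata

def fold_styled_letters(text: str) -> str:
    """Fold Unicode 'styled' letters (Fraktur, bold script, etc.)
    back to plain ASCII via NFKC compatibility normalization."""
    return unicodedata.normalize("NFKC", text)

# "Stay" written in Mathematical Bold Script code points:
fancy = "\U0001D4E2\U0001D4FD\U0001D4EA\U0001D502"
print(fold_styled_letters(fancy))  # prints "Stay"
```

True homoglyph detection (visually confusable but unrelated characters) needs a confusables table on top of this, but NFKC alone would fix the Fraktur/bold-script tweets being discussed here.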
The author says "(Maybe it does cause a problem, but it wouldn’t have to.)" and this feels extremely dismissive of people who actually rely on screenreaders today.
I had Safari/VoiceOver speak this tweet [0] (no offense to the author, it was just the first one in my feed that had this style of character in it). Every letter is some form of "Mathematical bold script capital s"
In that particular tweet, not only does it not read the word, but the "i" character is "Mathematical bold script small 1"
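That letter-by-letter announcement is exactly what the official Unicode character names say; a naive screen reader that falls back to character names would read precisely this. A quick way to see it (same bold-script characters as in that style of tweet):

```python
import unicodedata

# The official Unicode names are what a naive screen reader
# announces, character by character.
for ch in "\U0001D4E2\U0001D4FD":  # bold script "St"
    print(unicodedata.name(ch))
# MATHEMATICAL BOLD SCRIPT CAPITAL S
# MATHEMATICAL BOLD SCRIPT SMALL T
```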
This does feel like a screen reader bug though. I would expect software like that to be advanced enough to know from "context" (twitter.com) that it's probably not reading mathematical formulas.
I don't think it's dismissive at all (let alone extremely) to say "this might cause a problem [and by extension, if this is important, do more research], but it shouldn't." It's pointing out an area where screen readers fall short unnecessarily.
This prioritizes the wrong thing. If a nice formatting technique excludes people who use screen readers from perceiving the content correctly, and you are aware of this, then using those characters is dismissive, not just extremely but completely.
Pointing the blame at the screen reader software absolves the decision maker of a choice that they themselves control.
It's also really handwavy to say that in context screen readers should know to announce something different than the actual characters being used. What if I want to tweet an actual math formula, for example? How can that be solved for?
It's a hack, and it doesn't work in screenreaders because hacks are often not robust when context changes, otherwise they wouldn't be hacks, they'd just be "the way".
I don’t understand, the screen reader is doing the correct thing. People tweet math, code, all sorts of things that aren’t prose formatted with hacky unintended use of unicode. Maybe some form of AI could detect that, but I wouldn’t bet accessibility on it.
The real issue is that there’s a general use case for basic formatting of real prose, and perhaps unicode should accommodate that. But expecting screen readers to understand these kinds of workarounds is unrealistic, and expecting it despite knowing it isn’t supported is user hostile.
It's not that it might be a problem - it is a real-world accessibility problem for people who use screenreaders.
Should the software be able to detect context? Sure. But that's not where we are today and to withhold content from people because their software hasn't gotten that far just seems wrong.
It's an interesting exercise in seeing what we can do with various Unicode characters, for sure. The 'disclaimers' presented could be worded a bit more strongly, as in "This will as of 2021 cause problems for people using screenreaders, so it's not recommended to use it."
> Many keyboards with US or US-International layout display the broken bar on a keycap even though the solid vertical bar character is produced. This includes many German QWERTZ keyboards.
2004 is a long time ago. I don't remember threading being discussed anywhere when I started learning game development at a hobby and university level around then. Probably because multi-core CPUs weren't a thing yet, so threading would mostly add complexity and cost performance. Core 2 was released in 2006: https://en.wikipedia.org/wiki/Intel_Core_2
IIRC, all the places I learned from said to throw everything into the game loop and remember to use the delta since the last frame in all calculations
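That "delta since last frame" advice amounts to a variable-timestep loop, where every per-frame change is scaled by the elapsed time so movement speed stays frame-rate independent. A minimal sketch in Python (names are illustrative, not from any engine):

```python
import time

def game_loop(update, max_frames):
    """Variable-timestep loop: each update receives the time
    elapsed since the previous frame (the 'delta')."""
    last = time.perf_counter()
    for _ in range(max_frames):
        now = time.perf_counter()
        dt = now - last  # seconds since last frame
        last = now
        update(dt)

# Example: position advances at 100 units/second
# regardless of how fast frames are produced.
state = {"x": 0.0}

def update(dt):
    state["x"] += 100.0 * dt

game_loop(update, max_frames=10)
```

A fixed-timestep loop (accumulating real time and stepping the simulation in constant increments) is the more robust variant for physics, but the delta-scaling idea above is the version usually taught first.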
I think Assembly eliminated the 64k category years ago, because 4k had for all practical purposes become the new 64k, and 64k entries had been scarce and not super impressive for a while.
At Revision 2015 to 2017, the PC 64k compo was absolutely insane, with production values exceeding pretty much every other category, including unconstrained PC demos. If 4k is the new 64k, 64k at that time became the new PC demo.
256 bytes is the big (hum...) thing now. With one ridiculous entry that managed to pack 8 different effects, with music and transitions https://www.pouet.net/prod.php?which=85227
The 4k category was blown wide open in 2009 by rgba/tbc with Elevated [1] (live reactions: [2]). It demonstrated that 4k was more than sufficient for impressive effects and music -- suddenly, 64k was no longer an interesting constraint.
and I'm sure the extension doesn't just send the URL of every page you visit somewhere...
EDIT: Oh, it appears to be hashed... that's not as bad as I expected - but it's still extremely abusable. Nope... just nope - absolutely not without some sort of differential privacy or other concealment, thanks.
Oh, thanks! Sorry. I remember them too, the air fingers. Also a great idea. The latency and occasional loss of virtual digits was quirky, but I had hopes it would get incorporated into a VR HMD. 30m is better than bust.