> but at the point where there isn't enough economically useful things for everyone to do
This assumes that, for example, a person who has been an artist for 20 years can easily switch professions to become a machinist. That's an insane way to think; it's not how humans work.
The definition of display fonts is quite loose. Generally speaking, display fonts are made to grab attention by incorporating more extravagant visual features (think something like Papyrus).
They are made for shorter texts that are often set at a larger size. Again, I'm talking in very general terms because it depends on the font and other factors, but usually this includes things like headings. So they use slightly different proportions that wouldn't work as well at small sizes but stand out more at larger sizes compared to the "text" variant.
So in this case you would use Zed Text for all your larger text blocks and Zed Display for headings or maybe emphasized words. But to be honest, since they are pretty close visually, you can get away with using Zed Text for everything imho.
https://github.com/allegro-systems/score: a full-stack Swift web framework with a controller pattern for the backend and SwiftUI-like syntax for frontend code. Everything renders down to minified HTML, CSS, and JS. Includes features for auth, REST, frontend reactivity via signals, and more.
Building in the same space. vital-stack.com focuses specifically on interactions between supplements and drugs, rather than efficacy: the question "is it safe to take with what I'm already on" rather than "does this work". The interaction data typically lives in clinical pharmacology databases and isn't easily surfaced in a way consumers can use. We have 182 supplements and 1,312 interactions catalogued so far, with an MCP server that lets AI assistants query the interaction database in real time. Curious how supplementdex sources efficacy data. Is it summarized literature along the lines of Examine.com, or closer to primary research?
If you go to the premium section and look at the benefits, it literally says (from memory): if you pay €3.50/month, you get a small boost on interactions for your tweets; if you pay €5/month, you get a bigger boost; and if you pay (I don't remember exactly how much) ~€22/month, you get the biggest boost for your tweets.
To me, that sounds like if you don't pay, you're going to be at the bottom of the feed for everyone else.
As you have so much RAM I would suggest running Q8_0 directly. It's not slower, and might even be faster, while being almost identical in quality to the original model.
And just to be sure: you are running the MLX version, right? The mlx-community quantization seemed to be broken when I tried it last week (it spat out garbage), so I downloaded the unsloth version instead. That too was broken in mlx-lm (it crashed), but has since been fixed on the main branch of https://github.com/ml-explore/mlx-lm.
I unfortunately only have 16 GiB of RAM on an M1 MacBook, but I just tried running the Q8_0 GGUF version on a 2023 AMD Framework 13 with 64 GiB of RAM using just the CPU, and it works surprisingly well, generating tokens much faster than I can read them.
I don't really have the hardware to try it out, but I'm curious to see how Qwen3.5 stacks up against Gemma 4 in a comparison like this. Especially this fine-tune, aimed at tool calling, which has more than 500k downloads as of this moment:
https://huggingface.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-...
Exactly. Under Windows, this isn't even consistent across applications. I'm in France, with the location set to France, using English display language and "English (Europe)" formatting. This means that the expected date is DD/MM/YYYY. It's what shows up in the taskbar, for example. But many applications seem to do this based on language, so I sometimes get MM/DD/YYYY.
I don't normally run Windows, so I can't check right now, but I think it's mostly "modern" applications that mess this up. Like the MS Store, Teams (obviously).
I would not call ln(0) = -∞ a "non-standard arithmetic convention".
This is the standard convention when doing operations in the extended real number line, i.e. in the set of the real numbers completed with positive and negative infinities.
When floating-point exceptions are masked (the default), any modern CPU implements floating-point operations as operations in the extended real number line.
So in computing this convention has been standard for more than 40 years, while in mathematics it has been standard for a couple of centuries or so.
As always in mathematics, when evaluating expressions, i.e. when computing any kind of function, you must be well aware of the sets within which you operate.
If you work with the real numbers (i.e., on a computer, you enable trapping of FP exceptions), then ln(0) is undefined. However, if you work with the extended real number line, which is actually the default setting in most current programming languages, then ln(0) is well defined and equals -∞.
If you used a username, you wouldn't have this problem. As it stands, signing up someone else's address on a lot of sites so they get spammed with confirmations is already an attack vector used in the wild. And that is legitimately spam: it should be reported as spam, and sites that do this are spam amplifiers.
Don't agree there, considering x86 has ModRM, size prefixes (16/32-bit and later 64-bit operand sizes), SIB (with a prefix for 32-bit), segment/selector prefixes, etc.
Perhaps the biggest place where the 68000 is more complicated is post-increment addressing, but considering all the cruft 32-bit x86 inherited from the 8086 compared to the "clean" 32-bit variants of the 68000, I'd call it a toss-up at best, leaning toward the 68000 being easier (stuff like IP-relative addressing also exists on the RISC-y ARM architecture).
Apart from addressing, the sheer number of weird x86 instructions and prefixes has always been the bane of low-power x86.
Evidence typing on a per-claim basis is the right abstraction. Most fact-checking tools treat a document as a single unit, but technical writing mixes well-established facts, contested interpretations, and unverifiable assertions, and they all need different handling. The domain where this is most valuable is health and supplementation, where a single paragraph can mix claims backed by RCTs, mechanistic plausibility statements, and marketing assertions with zero evidence basis. I've been building a supplement interaction checker (vital-stack.com), and the signal-to-noise problem on health claims is significant enough that an evidence-grade layer would be directly useful. Does Grainulator handle the case where a claim is supported by evidence but the evidence quality is low, say an n=20 observational study vs. a preregistered meta-analysis?
Collection of 15 diagnostic tools (VPN leak test, DNS checker, port scanner, etc.) built after a WiFi security incident. All client-side, no data collection.
> I’m generally with you, but I am not prepared to say companies should be forced to host and distribute content they believe reflects badly on them.
If Apple and Google are hell-bent on killing sideloading, and they control 99% of the mobile market, I think they have an obligation to host things they don't like, as long as it is legal.
I tend to think it's equally likely that the same corporations that cannot justify their LLM spending[1] need to shift the blame, and (unsurprisingly) the old, crotchety, and sociopathic who claw their way to positions of ownership, management, and other power find it both easy and convenient to target the younger generations, especially Gen Z.
(In short: the Principal Skinner "the children are wrong" meme.)
[1]: See the panoply of HN posts in recent months about how LLMs are great for eliminating workers' idiosyncratic drudgery, but workers cannot or do not reinvest that saved time/effort for non-software companies' benefit, hence non-software companies see no positive impact to their bottom lines.