(Disclaimer: I am not a cryptographer and this is a heavily simplified explanation). Homomorphic encryption is built on the foundation of 'hard problems' (e.g. the Learning with Errors Problem) - loosely, computational problems that are thought to be impossible to reverse without being in the possession of a secret key.
The crux of HE is that it provides a _homomorphism_: you map from the space of plaintexts to the space of ciphertexts, but the mapping preserves arithmetic properties such as addition and multiplication. To be clear - this means that the server can add and multiply the ciphertexts, but the plaintext result of that operation is still irrecoverable without the private key. To the server, it looks like random noise.
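To make the "homomorphism" idea concrete, here's a toy sketch (emphatically *not* a real or secure HE scheme - just a one-time-pad-style additive cipher) showing the key property: adding ciphertexts, then decrypting, gives the sum of the plaintexts. All names and the modulus are illustrative.

```rust
// Toy *additive* homomorphism, NOT real homomorphic encryption:
// "encryption" is just adding a secret key modulo a public N.
const N: u64 = 1_000_003; // toy public modulus

fn encrypt(m: u64, k: u64) -> u64 {
    (m + k) % N
}

fn decrypt(c: u64, k: u64) -> u64 {
    (c + N - k % N) % N // subtract the key mod N
}

fn main() {
    let (m1, m2) = (42, 100);
    let (k1, k2) = (123_456, 654_321); // secret keys

    let (c1, c2) = (encrypt(m1, k1), encrypt(m2, k2));

    // The "server" adds ciphertexts without ever seeing m1 or m2.
    let c_sum = (c1 + c2) % N;

    // Decrypting with the combined key recovers the plaintext sum.
    assert_eq!(decrypt(c_sum, (k1 + k2) % N), (m1 + m2) % N);
    println!("decrypted sum: {}", decrypt(c_sum, (k1 + k2) % N));
}
```

Real schemes like those built on Learning with Errors support multiplication too, and manage the noise that grows with each operation - that's where all the hard cryptography lives.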
I don't think it's helpful to think about this as connected to deep learning or embedding spaces. An excellent resource I'd recommend is Jeremy Kun's guide: https://www.jeremykun.com/2024/05/04/fhe-overview/
I had good fun transliterating it to Rust as a learning experience (https://github.com/stochastical/microgpt-rs). The trickiest part was working out how to represent the autograd graph data structure with Rust types. I'm finalising some small tweaks to make it run in the browser via WebAssembly, and then I'll write it up for my blog :) Andrej's code is really quite poetic, I love how much it packs into such a concise program
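For a flavour of why the autograd graph is tricky in Rust: nodes are shared by multiple children, and gradients need mutation during the backward pass, which fights the borrow checker. One common pattern (a sketch only - names are illustrative, not necessarily what my repo does) is `Rc<RefCell<...>>` for shared ownership plus interior mutability:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A micrograd-style value node: shared ownership via Rc,
// interior mutability via RefCell so backward() can write grads.
#[derive(Default)]
struct Node {
    value: f64,
    grad: f64,
    parents: Vec<Rc<RefCell<Node>>>,
}

// Addition builds a new node that records its parents in the graph.
fn add(a: &Rc<RefCell<Node>>, b: &Rc<RefCell<Node>>) -> Rc<RefCell<Node>> {
    Rc::new(RefCell::new(Node {
        value: a.borrow().value + b.borrow().value,
        grad: 0.0,
        parents: vec![Rc::clone(a), Rc::clone(b)],
    }))
}

// Local backward rule for addition: d(out)/d(parent) = 1,
// so each parent accumulates the output's gradient.
fn backward_add(out: &Rc<RefCell<Node>>) {
    for p in &out.borrow().parents {
        p.borrow_mut().grad += out.borrow().grad;
    }
}

fn main() {
    let a = Rc::new(RefCell::new(Node { value: 2.0, ..Default::default() }));
    let b = Rc::new(RefCell::new(Node { value: 3.0, ..Default::default() }));
    let c = add(&a, &b);
    c.borrow_mut().grad = 1.0; // seed d(c)/d(c) = 1
    backward_add(&c);
    assert_eq!(a.borrow().grad, 1.0);
    assert_eq!(b.borrow().grad, 1.0);
}
```

Other designs avoid `Rc` entirely by storing nodes in an arena (a `Vec`) and using indices as edges, which sidesteps the shared-mutability issue at the cost of some indirection.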
Handwritten! (aka no LLM assistance :) It wasn't transpiled or anything like that. I've been meaning to post a little about it on my blog; just been caught up with other stuff atm.
One thing that was a _little_ frustrating coming from Python, though, was the need to rely on crates for basic things like random number generation and network requests. It pulls in a lot, even if you only need a little. I understand the Rust community prefers it that way as it's easier to evolve rather than be stuck with backwards-compatibility requirements. But I still missed "batteries included" Python.
I recently wrote an eigenvalue solver for an interactive component on my blog with Rust compiled to WebAssembly. Being able to write once and compile for both the web and desktop felt like the future. But then, I'm no fan of JavaScript and wouldn't have attempted it if WASM didn't exist.
I've just recently done the same, turned Rust into WASM and it does feel great. Being able to compile mature and well-tested libraries into WASM instead of trying to find a JS equivalent is incredible value.
Thanks for writing this game! I came across it after seeing your checkers game written in Rust for WASM[0], and thought it deserved an HN submission of its own.
Ooh, thanks for sharing that algorithm! Somehow, I didn't come across this and jumped straight into using the QR algorithm cited everywhere.
I found it hard to find a good reference that had a clean implementation end to end (without calling BLAS/LAPACK subroutines under the hood). It also wasn't easy to find proper convergence properties for different classes of matrices, but I fear I likely wasn't looking in the right places.
> We did a complete overhaul of Hugo’s template system in v0.146.0. We’re working on getting all of the relevant documentation up to date, but until then, see this page.
I don't mind breaking changes, but it'd sure be nice if the documentation reflected the changes.
Pitch: Currently working as a Data Scientist and early-careers AI researcher at a major Australian bank. Bachelor's in mathematics & CS, looking to go back for a Master's in mathematics. I'm looking for challenging roles that ideally involve maths. I enjoy functional programming, reading books and papers, blogging about arcane things, and learning new languages and algorithms.