Feels amazing to get a code review within minutes of putting up a PR. The signal-to-noise ratio has been great as well; most of the review comments are actionable and relevant.
Actually it is really really fast relative to speeds we are familiar with in our lives. Space is just so big that it makes the speed of light (c) feel slow.
Even on Earth, c is slow and it's pretty inconvenient. We wouldn't have to worry too much about which edge location a resource is served from if intercontinental latencies were on the order of 100 microseconds instead of 100 milliseconds.
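To put numbers on it, here's a quick back-of-the-envelope sketch in Python. The distance and the fiber slowdown factor are rough assumptions, not measurements of any real link:

```python
# Back-of-the-envelope: the physical floor on intercontinental latency.
C_VACUUM_KM_S = 299_792  # speed of light in vacuum, km/s
FIBER_FACTOR = 1.5       # light in optical fiber is roughly 1.5x slower than in vacuum

def one_way_latency_ms(distance_km: float, medium_factor: float = FIBER_FACTOR) -> float:
    """Minimum one-way latency in milliseconds over the given distance."""
    return distance_km / (C_VACUUM_KM_S / medium_factor) * 1000

# New York -> London is roughly 5,600 km as the crow flies (assumed figure).
print(f"vacuum: {one_way_latency_ms(5600, 1.0):.1f} ms")  # ~18.7 ms
print(f"fiber:  {one_way_latency_ms(5600):.1f} ms")       # ~28.0 ms
```

So even with a perfectly straight fiber and zero switching overhead, a transatlantic round trip can't get much under ~56 ms — physics, not engineering, sets the floor.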
This video[1] by Dr James O'Donoghue really made me stop and consider how slow the fastest possible speed is relative to the awesome vastness of space. I mean, how many people have actually paused for 8 minutes 20 seconds and thought, "wow, that's a really long time for light to arrive from a star that seems like a stone's throw away!"
Ah, but there is a reason for my watch to have the same seconds value as yours: our watches are kept synchronized to an accurate time source, e.g. via NTP. By assuming otherwise, you're sidestepping the problem by claiming you already have a reliable RNG.
Heck, even granted an unsynchronized watch, your "random" numbers are easy to predict: I'll ask you for a random number once per second, and after a few answers I won't need to ask any more to know your answer.
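To make the predictability concrete, here's a toy Python sketch. The watch offset and timestamps are made up for illustration:

```python
# Toy model of using an unsynchronized watch's seconds hand as an "RNG".
class WatchRNG:
    def __init__(self, offset_s: int):
        self.offset = offset_s                # the watch's drift, unknown to the attacker
    def next(self, now: float) -> int:
        return int(now + self.offset) % 60    # the "random" answer: the seconds hand

rng = WatchRNG(offset_s=37)                   # hypothetical offset

# The attacker asks once, at a known time, and recovers the offset mod 60.
t0 = 1_000_000.0                              # arbitrary shared clock reading
seen = rng.next(t0)
inferred_offset = (seen - int(t0)) % 60

# Every future "random" number is now predictable without asking again.
for dt in (1, 5, 42):
    predicted = (int(t0 + dt) + inferred_offset) % 60
    assert predicted == rng.next(t0 + dt)
print("all future answers predicted")
```

One observed sample is enough to pin down the offset, after which the "randomness" is fully determined by the clock.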
This is an example of a precise analysis that is unhelpful because it is irrelevant in the contexts in which the algorithm would be used. It's like saying "all algorithms are constant time-complexity in practice because computers are finite". True but useless.
In addition to this, there is also a quantum effect called quantum tunneling [1], which gives particles with insufficient energy a tiny probability of fusing upon collision anyway.
The main challenge in working with these high-temperature plasmas is confinement. In order to achieve nuclear fusion, matter needs to be heated to immense temperatures so that the kinetic energy of colliding nuclei can overcome the electrostatic repulsion of the protons pushing each other apart and "fuse" into larger nuclei (held together by the "strong force"), converting a fraction of the reaction mass into a relatively large amount of energy in the process.
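An order-of-magnitude sketch of why that electrostatic barrier is so hard to overcome thermally (the constants are standard; the ~1 fm approach distance is a rough assumption):

```python
# Rough comparison: Coulomb barrier for two protons vs. their thermal energy
# at the Sun's core temperature. Order-of-magnitude sketch only.
K_E2_MEV_FM = 1.44         # Coulomb constant * e^2, in MeV * femtometers
BARRIER_DISTANCE_FM = 1.0  # protons must get within ~1 fm for the strong force to win
K_B_EV_PER_K = 8.617e-5    # Boltzmann constant in eV/K

coulomb_barrier_ev = K_E2_MEV_FM / BARRIER_DISTANCE_FM * 1e6  # ~1.44 MeV, in eV
thermal_energy_ev = K_B_EV_PER_K * 1.5e7                      # Sun's core, ~15 million K

print(f"Coulomb barrier: ~{coulomb_barrier_ev / 1e6:.2f} MeV")
print(f"typical thermal energy: ~{thermal_energy_ev / 1e3:.1f} keV")
print(f"ratio: ~{coulomb_barrier_ev / thermal_energy_ev:.0f}x")
```

The barrier is roughly a thousand times the typical thermal energy, which is exactly why the tunneling effect mentioned above (plus the high-energy tail of the velocity distribution) is what actually makes stellar fusion go.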
In order to keep the plasma at the temperatures where fusion can occur, rather extreme measures have to be taken. In the Tokamak approach, the plasma is placed in a toroidal vacuum chamber and "suspended" in the center of the torus by electromagnets that line the chamber's walls. At such high temperatures the plasma is so energetic that its fast-moving particles are very hard to contain. If the plasma "escapes" confinement and contacts anything (e.g. the walls of the Tokamak), it rapidly cools down below the temperatures where fusion can happen.
The immense engineering challenge here is to heat plasma to ridiculous temperatures, and keep it confined in a very small volume at great temperature and pressure to mimic conditions that give rise to nuclear fusion in the center of stars.
>The immense engineering challenge here is to heat plasma to ridiculous temperatures, and keep it confined in a very small volume at great temperature and pressure to mimic conditions that give rise to nuclear fusion in the center of stars.
This is not exactly true. Inertial confinement fusion has conditions that are similar to stars. The engineering challenge for magnetically confined fusion is to keep the low-density plasma confined for durations long enough for fusion to occur.
For anyone interested in further reading, look up the Lawson Criterion.
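As a rough sketch, the Lawson "triple product" check looks like this. The ~3e21 threshold is the usual textbook figure for D-T fusion, and the plasma numbers are made up for illustration, not taken from any real machine:

```python
# Sketch of the Lawson triple product check for D-T fusion.
# Ignition requires roughly n * T * tau >= ~3e21 keV * s / m^3 (textbook figure).
TRIPLE_PRODUCT_THRESHOLD = 3e21  # keV * s / m^3, approximate D-T ignition value

def triple_product(density_m3: float, temp_kev: float, confinement_s: float) -> float:
    """Density * temperature * energy confinement time."""
    return density_m3 * temp_kev * confinement_s

# Illustrative tokamak-like numbers (assumed): particles/m^3, keV, seconds.
n, T, tau = 1e20, 15.0, 3.0
value = triple_product(n, T, tau)
print(f"{value:.1e} vs threshold {TRIPLE_PRODUCT_THRESHOLD:.0e}",
      "-> ignition-range" if value >= TRIPLE_PRODUCT_THRESHOLD else "-> below ignition")
```

The trade-off the criterion captures: magnetic confinement runs low density but long confinement times, while inertial confinement runs enormous density for nanoseconds — both routes to the same product.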
> If the plasma "escapes" the confinement and contacts anything (ie. the walls of the Tokamak) it rapidly cools down to temperatures below where fusion can happen.
Sounds like a relief. I used to think that «if the plasma "escapes" the confinement and contacts anything (ie. the walls of the Tokamak) it rapidly…» disintegrates everything around it or, when the power is huge enough, causes an apocalypse…
One of the beautiful things about nuclear fusion reactors is that they are inherently unstable at STP. In the event of a catastrophic failure, they will simply stop working (potentially after some large bangs).
Nuclear fission reactions can continue on their own for quite a while. This is one of the reasons they can be so dangerous.
At those temperatures, it will disintegrate whatever it touches. It's just that, unlike fission, fusion is unstable[1] so it quickly fizzles out, and damage will be local.
[1] Unstable in the sense that it is hard to maintain fusion conditions, not in the Hollywood sense that it blows up if you look at it sideways.
That description of destruction is entirely correct, it's just that the amount involved is tiny. Just like a big enough firecracker could destroy anything, but the ones we make just go pop.
This is not a very useful comparison. You can say that there is less energy in a stick of dynamite than in a chocolate chip cookie, and yet that stick of dynamite should still be handled carefully.
Yep even if it's hot enough it still needs the correct densities. It's akin to trying to compress a balloon with your hands. If enough 'hands' all push simultaneously it can work, but you can imagine the instabilities.
CoreOS is built using Gentoo. Gentoo is a meta-distribution whose main contribution is the Portage tree, a collection of ebuilds that describe how to build each individual package: the dependencies, the configuration flags, etc. Every Gentoo user effectively builds and produces their own Linux distribution, with their own CFLAGS, USE flags, etc. ChromeOS is built using Gentoo; so is CoreOS. But neither ChromeOS nor CoreOS is a Gentoo-based distro, because Gentoo is a toolkit for building distributions, not a distribution itself.
The chain of trust doesn't quite stop at compiling the source, in order to be really sure that nothing unintended is going on you have to compile the compiler yourself. At the end of the day you will have to trust some bootstrapping binary compiler unless you put it together yourself in machine language.
Schneier's summary of Wheeler's method says: "This countermeasure will only fail if both [compilers] are infected in exactly the same way. The second compiler can be malicious; it just has to be malicious in some different way: i.e., it can't have the same triggers and payloads of the first. You can greatly increase the odds that the triggers/payloads are not identical by increasing diversity: using a compiler from a different era, on a different platform, without a common heritage, transforming the code, etc."
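Here's a toy Python model of the diverse double-compiling idea. Real DDC compiles actual compiler sources; this just fakes "compilation" with hashes to show why the final outputs diverge when one toolchain carries a trusting-trust infection:

```python
import hashlib

def h(*parts: str) -> str:
    """Stand-in for 'the bytes a compilation produces'."""
    return hashlib.sha256("|".join(parts).encode()).hexdigest()[:12]

COMPILER_SRC = "source code of the compiler under test"

# A clean compiler's output depends only on the input source.
def clean_compile(source: str) -> str:
    return h("binary-of", source)

# A trusting-trust-infected compiler recognizes the compiler's own source
# and plants a backdoor into the binary it emits.
def infected_compile(source: str) -> str:
    if source == COMPILER_SRC:
        return h("backdoored-binary-of", source)
    return h("binary-of", source)

def double_compile(compile_fn, source: str) -> str:
    # Stage 1: build the compiler-under-test with `compile_fn`.
    stage1 = compile_fn(source)
    # Stage 2: use stage1 to rebuild the same source. In this toy model,
    # a clean stage1 compiles cleanly; a backdoored stage1 re-infects.
    tainted = stage1 != h("binary-of", source)
    return infected_compile(source) if tainted else clean_compile(source)

# DDC: rebuild via two independent compilers and compare the final outputs.
via_trusted = double_compile(clean_compile, COMPILER_SRC)
via_suspect = double_compile(infected_compile, COMPILER_SRC)
print("match" if via_trusted == via_suspect else "MISMATCH: a toolchain is compromised")
```

The point Schneier's summary makes survives the simplification: the attack stays hidden only if both toolchains lie in exactly the same way, so any honest (or differently-dishonest) second compiler exposes the divergence.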
Good point. Given sufficient paranoia this train of suspicion can be continued even deeper down the rabbit hole: you'd need to inspect the hardware designs and make sure the hardware you've got was actually produced according to the inspected designs.
In technology as elsewhere, it seems life is ultimately based on trust in someone.
Trust is a function of the expected incentives of the trusted.
One way to manage their incentives is by exercising control, but there are other more friendly ways, too. For example, shared goals, community, reputation, financial rewards, reciprocity and ethics standards all provide weaker or stronger reasons to trust others.
Indeed - for all the folks disillusioned by systemd, Gentoo is a source-based rolling-release distro whose fundamental tenet is choice, so it shouldn't be a surprise that it's possible to use alternate init systems on Gentoo. In fact, the Gentoo liveCD/handbook defaults to OpenRC while providing the choice to run other init systems (I run systemd with a somewhat complicated RAID setup with no problems).
Or use Gentoo, that's what I do. You can verify hashes/signatures on the Firefox source archive and audit the source code if necessary before compiling.
That was only half serious - I know there are valid use cases for people to prefer binary distros. However, I think this particular issue is a good example of why, IMO, even binary distros need to provide a convenient option to locally build any package for security-conscious users.
That sounds tangential. The point is if two people build the same thing, they should be able to compare their builds to see if they are truly the same. If not, the argument is that one of them has a "tampered" environment.
In other words, if you don't know your compiled binary is the same as the distributed binary, you have no reason to think yours does not have a vulnerability added by the toolchain.
Unless I'm the one that is misunderstanding, of course. :)
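For what it's worth, "compare their builds" in practice boils down to hashing the artifacts and comparing digests out-of-band; a minimal sketch (the file paths are hypothetical):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a build artifact so two builders can compare results out-of-band."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Two people build the same source; with a reproducible toolchain the hashes
# should match. A mismatch means at least one environment differs or is tampered.
# mine   = sha256_of("my-build/firefox.tar")   # hypothetical local artifact
# theirs = "ab12..."                           # digest published by the other builder
# print("reproduced!" if mine == theirs else "environments differ")
```

The hash comparison only tells you the binaries are *identical*, of course; whether either one is *trustworthy* is the separate question argued below.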
Well, it's a solution to the same underlying problem: by running binaries compiled by a 3rd party, you're trusting that they aren't adding code to compromise your privacy (voluntarily or not). If you compile the application from source yourself, you don't need that leap of faith - no need to compare identical binaries or have deterministic builds (which, as the bug report demonstrates, is not trivial).
I'm not sure your solution solves that. If Firefox has vulnerabilities in the source right now, you do little to protect yourself by compiling on your own. Even if you can verify that you and someone else produce the same binary, they could just both be vulnerable.
In fact, if you compile it yourself, unless you can verify the compile against a "known good" one, then you can't even be sure that your local toolchain hasn't been compromised. (I mean, sure, if you were a perfect auditor of your entire toolchain, then you could have some confidence here. You have to be perfect, though.)
Consider, you do a compile of Firefox and it is different than the one for download. Why? As things stand now, you don't know. And that is the problem.
> If Firefox has vulnerabilities in the source right now, you do little to protect yourself by compiling on your own.
You do more to protect yourself than taking the same vulnerable source and compiling it with Mozilla's "reproducible build chain".
If the source itself is corrupt then having a verified build of malicious source is completely useless.
With Gentoo you can verify the source itself matches the "trusted" upstream source and then build it with your own trustworthy build chain.
And before you go "what if your build chain isn't trustworthy huh????" think about it a little further: if your own local build chain can't be trusted, you're already screwed before you download anything from mozilla.org, just as you would be if you downloaded a "bit-verified" binary from Mozilla to run on your already-pwned local operating system.
No you don't. You do nothing to protect yourself from vulnerabilities in the code by compiling it yourself. Literally nothing.
You do protect yourself from vulnerabilities in their toolchain. And this is where the effort makes sense. If there are differences in the builds, then you can at least suspect one of you has a tampered environment. Right now, you have no way of knowing that one way or the other. You just have the joy of having done your own build.
My main question is still just one of magnitude. Consider, I have not had a wreck or other car mishap in 20 years. I could conclude that seatbelts, then, have not increased my safety really. I am not trying to make that claim, as I feel it is false. So, my question here is essentially, how much safer would this really make things? (Or trustworthy, if you'd rather that term.)
Fellow gentoo user here. Gentoo does not protect you from Trusting Trust attacks (mentioned above). But then again neither do reproducible builds, because you still have to trust the original compiler. Reducing the variance (to zero in this case) by using a deterministic build system DOES protect against compromise of everything except for the original build environment. Yes, that makes the original build env a target for attacks, but if we honestly believe we have a "trustable" reference build environment then those attacks are also exceptionally hard to pull off.
The question is how to provide those same benefits to most people in the world. Most of them are not in a position to compile their own software, for various reasons ranging from (reasonably!) not wanting to spend that much time on it to using an OS where it's even more of a pain than on Linux (e.g. Windows, or Android, or iOS).
The only sane way to help these people trust their software is to enable meaningful third-party audits of said software. And that requires that the auditor be auditing exactly the same thing as the user is using.