>As the foundation for this new operating system, we chose seL4 as the microkernel because it puts security front and center; it is mathematically proven secure, with guaranteed confidentiality, integrity, and availability.
>KataOS provides a verifiably-secure platform that protects the user's privacy because it is logically impossible for applications to breach the kernel's hardware security protections and the system components are verifiably secure.
The wording seems quite confident; maybe it could use an additional "at least according to its specification".
This approach doesn't protect against hardware bugs and side-channel attacks.
Especially when one thinks of unexpected attacks like Rowhammer, there is probably no way to include them in a formal systems model beforehand.
> protects the user's privacy because it is logically impossible for applications to breach the kernel's hardware security protections and the system components are verifiably secure.
Notice also that they're doing the traditional Google trick of pretending that it respects the user's privacy because it's secure, while ignoring the fact that most of the user's privacy will be destroyed by things they designed the operating system to intentionally do in its security model.
I chose Google's mesh routers because I have more trust in their security than any competitor. I would feel pretty safe with PFSense, but my needs are way too simple to bother with that.
I know a lot of techies opt for prosumer, small-business gear for their home networks, but most of those vendors have crap security track records. Ubiquiti seems to be the sole standout in this space, but I don't trust them any more than Google to produce secure hardware.
Security is not privacy though. If I had a piece of information that could kill, I would trust Google with safeguarding it. What I don't trust them to do is not use the information they've got. Or rather, I expect them to use it (but safeguard it from others, because that's how they make money: by being the only ones allowed access to private information).
Have you seen Krebs on Security's most recent update? He pretty much disavowed his reporting on the Ubiquiti breach.
Everybody has security breaches. I'm not out there counting CVEs, but anecdotally, Netgear, TP-Link, and MikroTik seem to have much worse reputations.
I don't pay attention to the market for high end networking (Cisco, Juniper).
It protects the user's privacy against attackers other than Google.
To be fair, this is an entirely reasonable threat model for a lot of people. For instance, if you're a reporter in an authoritarian country, Google is almost certainly not colluding with the attackers who are literally trying to kill you, and using a Chromebook and Gmail is probably the best option out there. Your threat model is "Don't die," not "Don't be subject to surveillance capitalism."
But it's also something we should collectively be pushing back on. The motivating example for these products is "intelligent ambient systems," i.e., things like Nest hubs and doorbells that capture audio/video all the time. These products probably shouldn't exist at all, and to the extent they do, they should process data locally and discard it as soon as they can.
Google sucks up a lot of data, and is in a position to do a lot of bad stuff with it, but historically they have never told my spouse about my affair, my government about my accounts in the caymans, or leaked my nude pictures to my grandma. (I don't actually have any of these!)
I really don't care how much data of mine they have while they limit their evil they use it for to deciding if they should show an ad for baseball or football shirts...
And I trust them not to accidentally leak it far more than I trust my government or any smaller/less techy company.
This 100x. Of all the companies/entities that have had some sort of data of mine over the years Google feels by far the most trustworthy.
My country's agencies (Canada) have leaked more data than Google, and MS can claim they're secure all they want, I've had accounts on MS services hacked but never Gmail or Google services...
> historically they have never told my spouse about my affair
Have we forgotten Google Buzz? Google changed GMail to publicly list the people you email most. In one case, this de-anonymized a woman's blog and enabled her abusive ex-husband to stalk her. https://fugitivus.wordpress.com/2010/02/11/fuck-you-google/
This is IMO the most likely way that "bad stuff" will happen: not maliciously, but through privacy-invading misfeatures connected to pushing people to share more.
That's 12 years old... I think it's a real testament to Google's privacy behaviour that amongst their 2 billion+ users over 11 years, there are no fresher news stories that come to mind.
Compare with facebook/instagram, where it seems every other week someone messes up the privacy settings and posts something to an audience they didn't intend because the product is deliberately designed to encourage accidental oversharing.
> Google sucks up a lot of data, and is in a position to do a lot of bad stuff with it, but historically they have never told my spouse about my affair, my government about my accounts in the caymans, or leaked my nude pictures to my grandma. (I don't actually have any of these!)
"""It's unclear how widespread Barksdale's abuses were, but in at least four cases, Barksdale spied on minors' Google accounts without their consent, according to a source close to the incidents. In an incident this[2010] spring involving a 15-year-old boy who he'd befriended, Barksdale tapped into call logs from Google Voice, Google's Internet phone service, after the boy refused to tell him the name of his new girlfriend, according to our source. After accessing the kid's account to retrieve her name and phone number, Barksdale then taunted the boy and threatened to call her. [...]"""
Fwiw that was 12 years ago, and a lot of the Google infra has changed quite a bit since then to make looking at user data much harder and track access more explicitly.
I.e., I want them to commit to "No human who works at Google will ever see your email or photos without you knowing about it". And then splash that statement all over TV ads.
Set up some system so every time an engineer sees user data, the owner of that data is sent a notification (and there are legit reasons for that, like investigating a bug a user has reported). It doesn't need to be for every kind of user data, just the super sensitive ones like the text of emails.
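A minimal sketch of what such an access-notification wrapper could look like, with all names hypothetical (this is not any real Google-internal API): sensitive data is only reachable through an audited accessor, so every read necessarily produces an audit entry that could drive a notification to the data's owner.

```rust
// Hypothetical sketch: every read of sensitive user data goes through an
// audited accessor that records who looked, whose data it was, and why.
// In a real system, `record` would also enqueue a notification to the owner.

struct AuditLog {
    entries: Vec<String>,
}

impl AuditLog {
    fn new() -> Self {
        AuditLog { entries: Vec::new() }
    }

    // Record the access; a production version would notify `owner` here.
    fn record(&mut self, engineer: &str, owner: &str, reason: &str) {
        self.entries
            .push(format!("{engineer} accessed data of {owner}: {reason}"));
    }
}

// Sensitive data can only be read through this wrapper, so every access
// necessarily leaves an audit entry.
fn read_user_email(
    log: &mut AuditLog,
    engineer: &str,
    owner: &str,
    reason: &str,
    mailbox: &str,
) -> String {
    log.record(engineer, owner, reason);
    mailbox.to_string()
}

fn main() {
    let mut log = AuditLog::new();
    let body = read_user_email(
        &mut log,
        "eng@example",
        "user@example",
        "debugging a reported delivery bug",
        "hello world",
    );
    assert_eq!(body, "hello world");
    assert_eq!(log.entries.len(), 1);
}
```

The key design point is that the raw mailbox is never handed out directly; the only path to it goes through the logging call, which is exactly the property you'd want to be able to audit.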
Absolutely agree, but how do you do that in practice?
Do you self-host your services on some Linux distro? How many FAANG employees have upload access to that distro or maintain its infrastructure?
(Or maybe you audited everything yourself and you're 100% confident in your audit, somehow, and you've turned off automatic updates. How many FAANG employees are working on fuzzers to automatically find new exploitable security vulnerabilities and scale out those fuzzers on their employers' infrastructure?)
This is true now, but once they have that data you can't know what they will use it for in the future. Maybe they will keep using it the same way as now, maybe not. Also don't forget the recent case of users who got reported to the police by Google because they took pictures of their children for medical reasons.
It's actually spelled "auto-deletion" of data, scare quotes included, since you can't prove it's been deleted.
Google and other US tech companies have no right to be trusted after PRISM. Not to mention the US government's complete abdication of public oversight under the guise of national security, with secret courts, secret rulings, and national security letters compelling silence from these same organizations while complying with whatever demands they make.
You realize many tech companies responded to PRISM by making their data centers and private fiber more secure against domestic state sponsored hacking, right?
Unfortunately, I believe that there were 2 possible outcomes in a post-PRISM world:
1) Tech companies increased their security, but it wasn't enough, and security services still have a feed of nearly all data, through a combination of software/hardware/algorithmic flaws.
2) Tech companies did manage to mostly stem the flow of information into security services. However, security services simply sent secret letters to all the big players demanding an API/backdoor and requiring them not to talk about it.
My lukewarm take is that it is possible to construct your company/infra in such a way that functionally, any employee can audit that (2) is not the case, and that Google comes very close to doing this.
If you take security and specifically insider threats seriously, you can't privilege or hide any subsystem, or it becomes a threat of its own, so the same processes that prevent an attacker from creating a shadow-system in your infrastructure also prevent you from doing the same thing.
Apple's "cooperation" with authoritarian governments tends to only go so far as it needs to in order for the next iPhone to come out on time and in sufficient supply. Otherwise Apple moves heaven and earth to engineer their devices to be as secure as they can make them, even against state authorities.
That said, if you live in China, you probably don't want to sync your stuff to iCloud. Not because Apple doesn't want to protect your data, but more because you can't trust anything in any data centers that are physically on Chinese soil.
But let's get real. If you're in mainland China and the authorities decide they need to confiscate your phone, you're already fscked.
Digging through the link the other commentator posted, Apple complied with 88% of Russia's requests for information and 94% of China's with over 1000 requests from each of those nations...
Versus Google which has avoided giving information to or censoring search results in both countries and as a result is mostly banned.
With Apple leaving Russia and removing government-affiliated apps from App Store with no way to side-load them, the only other option is Android now and blocking Google completely will probably render most smartphones useless, as most Android phones rely on Google services to function. I think that's why it's not banned yet.
> Apple's "cooperation" with authoritarian governments tends to only go so far as it needs to in order for the next iPhone to come out on time and in sufficient supply
That statement is kind of information-free. If China knows they have Apple over a barrel, why wouldn't they demand a lot?
But for how they cooperate, Apple's own transparency report shows they give information on Apple customers to Chinese authorities thousands of times per year, and accept the vast majority of requests: https://www.apple.com/legal/transparency/cn.html
>If you're in mainland China and the authorities decide they need to confiscate your phone, you're already fscked.
Funny how you specifically mention China, as if it worked differently in USA - the country where you can get four years of jail time for talking back to police.
Because of hypocrisy? They pretend not to be in the ads business with your data.
So now everyone is doing the same thing while claiming to value "privacy" (aka only they get to collect your data for their own personalized ads). So in the end you just pick which one hoards your data and shows you the ads. What's the difference again?
Google, being a US-based company, is legally obliged to provide all the data they have to three-letter agencies, without any real oversight. They can’t refuse even if they wanted to.
Regardless, I care less about the US government having my info than, say, Russia (especially being part Ukrainian, having Ukrainian friends and family, etc...).
Lol. Selling your data to the government is one of the ways they make money. BigTech and BigBrother have been in cahoots for more than 2-3 decades now. Read https://en.wikipedia.org/wiki/PRISM for more info.
This is a research project. There are no users and probably won’t be in this form. If it makes it into a real product, it might not even be done by Google (since they’re open sourcing it.)
So nobody is being tricked. It's just too early to say what real products will do.
I mean, it does also claim to be "almost entirely written in Rust", which is true if you ignore almost the entire OS part of the OS (the kernel and the minimal seL4 runtime).
TL from the project here: you're right, the changes to the kernel are not yet formally verified, but that's on the roadmap -- there's quite a lot of work that has been done here, and tons more to come. The vast majority of changes we have made have involved lots of conversations with folks on the seL4 mailing list including Gernot Heiser and video conferences to work out the best way to do what we're doing.
I realize the blog post comes out pretty strongly on this topic, and that's my oversight -- I let my aspirations leak out instead of tempering them properly (this is not your typical PM-driven project).
Please understand that this is an engineer-driven project in Research with a very small team where we're trying our hardest to do the right thing, so please bear with us.
Don't sweat it, this is just a blip. I for one have wanted an SEL4 + Rust based OS for a long time, really cool that someone is finally doing it. It's clear what the aspiration is, just keep working toward it.
Okay, then you should fix your mistake and edit the post, or issue a new post that does not claim or imply that “KataOS provides a verifiably-secure platform”, since it does not. You'll have achieved that when new readers of the post no longer mistakenly believe that it is currently verifiably secure.
TL from the project here: yeah, I should have done more work on the wording -- we locked the content too fast, and I pushed a tad too hard at getting the post out. :P
Side-channel attacks are out of scope for the security model of both seL4 and our KataOS project, so bear that in mind for sure.
XMAS comes early as far as I'm concerned... A Rust OS & RISC-V implementation is sorely needed & I expect to begin experimenting on private cloud frankenwulfs immediately. I can see why you rushed; this, in my humble opinion, is bigger than the release of the Go programming lang ;)
Thank you very much for putting all the effort into this project. It is a great step towards more secure computing in general, and you earned respect for that.
It's either verifiably secure or it isn't, and that makes an enormous difference. Also, the issue of hardware bugs purports to be addressed by verifiably secure CPU designs.
Of course that leaves the multitude of programmable peripheral devices. But starting with hardware and software that are implemented to be provably secure is a big change. It is table stakes for systems to be vastly harder to penetrate.
Absolutely, and this is specifically why I chose to start with seL4 and use Rust for the userland we built. seL4 has a verification framework already in place, so we can use it to ensure our system design and implementation is good. We've spent this time working with the seL4 guys to find a good middle ground in these changes, and we're going to see about verifying the design as we go, but we wanted to get these things out sooner rather than waiting because it affords more chances for feedback and collaboration. My only regret is not being able to open the entire source tree at once yet. We'll get there, but this is a good start in the meantime.
We do not have our changes formally verified yet, but that is definitely on our roadmap -- otherwise, what's the point of starting from this set of options? Likewise, this is why we chose Rust -- there are several projects already in progress to produce formal verification tools for Rust, so we can hopefully use those as additional proofs.
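As a rough illustration of why a Rust userland pairs well with seL4's capability model (this is my own sketch, not KataOS's actual API): the type system can make capabilities unforgeable and make use-after-revoke a compile-time error rather than a runtime bug.

```rust
// Illustrative sketch only, not KataOS's real interface. The raw slot
// number inside `Cap` is private, so user code can never forge a
// capability from an integer; revocation consumes the value, so the
// borrow checker rejects any later use at compile time.

mod kernel {
    // Tuple field is private: outside this module, `Cap` values can only
    // come from `request_cap`, never from `Cap(some_number)`.
    pub struct Cap(u64);

    pub fn request_cap(slot: u64) -> Cap {
        Cap(slot) // a real kernel would validate the request first
    }

    // Stand-in for a system call that uses the capability.
    pub fn invoke(cap: &Cap) -> u64 {
        cap.0
    }

    // Revocation takes ownership; using `cap` afterwards won't compile.
    pub fn revoke(cap: Cap) -> u64 {
        cap.0
    }
}

fn main() {
    let cap = kernel::request_cap(7);
    assert_eq!(kernel::invoke(&cap), 7);
    let slot = kernel::revoke(cap);
    assert_eq!(slot, 7);
    // kernel::invoke(&cap); // compile error: `cap` was moved by `revoke`
}
```

None of this replaces formal verification of the kernel itself, but it shows the class of userland bugs (forged handles, use-after-revoke) that Rust's ownership rules rule out for free.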
Are there any formally verified CPUs that support any of the constructs needed for anything more than microcontrollers? Like, I have not yet found a formally verified CPU which supports virtual memory or caching.
I've always said that computer science has a PR problem.
Formally verified applications are such a foreign concept to people that when you say "verified correct" they get skeptical and mistrust the whole concept.
Saying something is "secure" when it has been formally verified will be taken with a grain of salt, but it is much easier to say than:
"We wrote a detailed specification that defines the whole system via algebra, and then we let a theorem prover run through all possible permutations of the specification. It has now tested a billion edge cases and we have reached a state where it no longer finds any deviation from the specification."
At least it is provably better than someone saying "it is secure because we think it is".
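A toy Lean example of the distinction being described: a theorem prover doesn't sample a billion cases, it establishes the property for *every* input, once and for all.

```lean
-- Proved for all m and n, not just tested on samples: once this
-- checks, no counterexample can exist.
theorem add_comm' (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n
```

The seL4 proofs are the same idea at vastly larger scale: the "inputs" are all possible sequences of kernel calls, and the property is conformance to the specification.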
I do not quite get it: seL4 is verified. Is the rest of the code as well? I understand that verification of Rust is just starting to gain traction (compared to C, Java or Ada), or did they make major progress here?
The idea is to use some people's receptivity to religion for shifting their awareness to the risks of traveling.
It can also help to center thoughts, calm down, and increase mindfulness while steering the motorized vehicle, therefore decreasing traffic accidents.
(I wonder whether there are studies on if this actually helps.)
Ah, that's interesting. Speaking from personal experience (and as an atheist, who happened to grow up in a religious household), I think many atheists are too quick to dismiss the value of rituals/prayers/meditations. As you pointed out, there can be value even if the deity to whom the prayer is offered doesn't exist or isn't listening.
For example, growing up my family began and ended each meal with a prayer of, essentially, thanksgiving, with a few other requests thrown in ("keep us safe", etc.) This delimited the meal time: we were there for the duration, we all started eating together and there was no leaving before the meal had ended. I don't remember the prayer any more but I do have fond memories of family dinners, the unity and connection we all experienced during that time and the spirited conversations they featured. Prayers of thanksgiving also center the experience of eating in one of gratitude.
There's deeply embedded programming in our brains related to sharing a meal with others which strengthens social bonds. Rituals like this very much play into that. As do lunch with colleagues or dinner dates.
Kids are "defiant" or have "bad behavior"? Just abuse them more. Even better, just hand them to institutional abusers.
> Cynthia says they eventually settled with the programme for an undisclosed amount on the condition they could speak freely about the circumstances of their daughter's death. She learned Erica had been pushed to keep hiking as her condition worsened throughout the day. She later testified to Congress about how her daughter's distress had been mistaken for teenage belligerence by staff.