CORS Is Stupid (kevincox.ca)
92 points by rainworld on Aug 24, 2024 | 48 comments


Lots of stuff about this article is strange:

* The author first describes a mildly convoluted cookie-stripping middleware under "Actually Solving The Problem" and includes the much simpler `SameSite=Lax` cookie under "Defense in Depth" later on.

* The author neglects to mention that `SameSite=Lax` is the default on all major browsers (still good to set it, though), which basically blows up the premise of the article, that all your cookies are one cross-origin form away from being misused.

* The author recommends _loosening_ the default CORS protections, which I guess makes sense if you think "CORS Is Stupid", but definitely makes your server less secure for basically no reason.

The best way to deal with the Cross-Origin Resource Sharing restrictions is to simply set up your site so that it doesn't need to make cross-origin requests: serve your site from the same domain that it makes requests to, and make sure you set your cookies with `SameSite=Lax`, `Secure`, and `HttpOnly`. I outline this approach in more detail here: https://htmx.org/essays/web-security-basics-with-htmx/
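
For example (a minimal sketch, assuming a Flask app; the cookie name and value are placeholders, adapt to whatever framework you use):

    from flask import Flask, make_response

    app = Flask(__name__)

    @app.post("/login")
    def login():
        resp = make_response("ok")
        # SameSite=Lax: cookie withheld from cross-site subrequests and
        # non-GET navigations; Secure: HTTPS only; HttpOnly: hidden from JS
        resp.set_cookie("session", "opaque-token",
                        samesite="Lax", secure=True, httponly=True)
        return resp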

Also, while I still think that existing browser controls basically solve this problem, it's true that POST forms have some sneaky vulnerabilities due to backwards compatibility concerns. One way to help fix that is by introducing PUT, PATCH, and DELETE to HTML forms—methods which don't have the same CORS exceptions: https://alexanderpetros.com/triptych/form-http-methods


FTA: >> The default policy allows making requests, but you can’t read the results.

This seems like the central premise of the article. Is the author just wrong about this? I'm confused. As I understand it, if the remote site sets Access-Control-Allow-Origin to a wildcard, and doesn't explicitly name the origin, then by default the browser will not send credentials with a request. Isn't that CORS working as intended? Was there some time in the past when browsers defaulted cookies to `SameSite=None`?


How can the browser know not to send the credential before it gets the response that contains the Access-Control-Allow-Origin header? This is the crux of the issue. For many types of request (notably with `Content-Type: application/json`) the browser will send a preflight request. But there is a carve-out for "simple" requests which includes some POST requests.

https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#simpl...

If you can be sure that your application doesn't do any writes on "simple requests" then your job gets a lot easier and you can mostly rely on `Access-Control-Allow-Origin: *` omitting credentials. But if you have endpoints that accept form posts or don't check the request's Content-Type header then you need to be extra careful.
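
To illustrate, a guard along those lines might look like this (a rough sketch, assuming Flask; the JSON-only policy is just one choice):

    from flask import Flask, request, abort

    app = Flask(__name__)

    @app.before_request
    def reject_simple_writes():
        # POST is the only state-changing method that can be "simple";
        # requiring application/json forces cross-origin callers through
        # a CORS preflight.
        if request.method == "POST":
            ctype = (request.content_type or "").split(";")[0].strip()
            if ctype != "application/json":
                abort(415)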


This is a good point. I tried to research `SameSite=Lax`, but most of the docs don't actually mention how it applies to form posts. The actual RFC does specify that Lax cookies are only sent cross-site on top-level navigations with "safe" methods. So form POSTs are actually protected.

https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-coo...

This does actually make it pretty robust, although it does become an issue if you actually want to support cross-site form posts. But that is a pretty niche use case.

> includes the much simpler `SameSite=Lax` cookie under "Defense in Depth" later on.

The reason that I put the cookie-ignoring solution first is that it is robust and flexible. If you do want to support some cross-domain interaction with a logged-in user you need something other than `SameSite=Lax`. The "ignore auth unless explicitly handled" approach provides a simple and effective baseline while being much more flexible. I also don't think ignoring cookies across your app is any more "mildly convoluted" than ensuring that all cookies have a specific set of attributes. Both basically require a common layer to implement, but it seems more likely that someone adds a cookie somewhere without the approved helper than that they somehow accept a request that bypasses the middleware.
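
A rough sketch of the idea (assuming Flask and the Sec-Fetch-Site header; details vary by stack, and older browsers that omit the header would need an Origin-based fallback):

    from flask import Flask, request

    app = Flask(__name__)

    @app.before_request
    def ignore_cross_site_cookies():
        # "same-origin" and "none" (direct navigation) keep their cookies;
        # anything else is treated as an anonymous request.
        site = request.headers.get("Sec-Fetch-Site", "")
        if site not in ("same-origin", "none", ""):
            # Strip the Cookie header before any handler can read it
            request.environ.pop("HTTP_COOKIE", None)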

> `SameSite=Lax` is the default on all major browsers

It isn't. For some reason Firefox has had it pref'd off since forever: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Se...

But it is well supported if you set it explicitly.

> loosening the default CORS protections ... but definitely makes your server less secure for basically no reason.

Can you elaborate? The "loosening" described is equivalent to what can be accomplished via a CORS proxy (other than possible network perspective concerns if they apply to your use case). So it isn't really loosening it. How does this make your server less secure?

This is explained here: https://kevincox.ca/2024/08/24/cors/#shouldnt-i-be-more-spec...

> One way to help fix that is by introducing PUT, PATCH, and DELETE to HTML forms—methods which don't have the same CORS exceptions

This would be nice, but it seems to mostly be a workaround: you can choose a different verb just to get CORS protection, but now you can't use POST even when it is appropriate.


> For some reason Firefox has had it prefed off since forever

I'm not sure this is correct. I use Firefox and have the setting off (by default), but I still get the more secure SameSite=Lax behaviour by default, according to the sandbox they link to: https://samesite-sandbox.glitch.me/

For me the setting they mention makes no difference to the behaviour whether it is off or on.

EDIT: It seems like it's because of enhanced tracking protection that I get the more secure behaviour. With that off, I get the less secure behaviour.


First off, really appreciate you engaging with the feedback so thoughtfully!

> This does actually make it pretty robust, although it does become an issue if you actually want to support cross-site form posts. But that is a pretty niche use case.

In my opinion, the dominant security message for web applications should be: unless you are a specialist you should not be supporting cross-domain credentialed interactions. It's a simple rule that lets the browser do most of the security work for you.

The premise of your article that I most disagree with is "the automatically provided cross-origin protections are completely broken. Every single site that uses cookies needs to explicitly handle it." That's a scary line and it's simply not true for single-origin sites, which are a) most sites and b) what should be encouraged.

> The reason that I put the cookie-ignoring solution first is that it is robust and flexible.

The robustness and flexibility of the solution also add to its complexity (I did say "mildly"). Are the couple of lines of Python terribly complex? No. But middleware is a relatively big jump in complexity from just setting the `SameSite` property. If you said "in the situations where you need to support cross-origin requests, you'll have to do some extra work here" I'd have no problem with it.

> I also don't think ignoring cookies across your app is any more "mildly convoluted" than ensuring that all cookies have a specific set of attributes.

Agree to disagree here, I guess, but setting cookie properties is definitely easier (it's one line!) than filtering all your request cookies at runtime. The latter also involves a bunch of unnecessary computation.

> Can you elaborate? The "loosening" described is equivalent to what can be accomplished via a CORS proxy (other than possible network perspective concerns if they apply to your use case). So it isn't really loosening it. How does this make your server less secure?

Well, you're setting a header that says to allow a bunch of stuff that the browser would otherwise block (for instance, cross-origin PUT and DELETE methods). The point of CORS isn't that bad actors can't proxy it (you can always just send a curl); it's to make the median web client (i.e. your less tech-savvy relative) more secure. In effect, you're disabling a chunk of CORS because it's annoying and replacing it with your own security mechanism. I don't think most people should do that, and there's better, easier advice we can give them.

> This would be nice, but it seems to mostly be a workaround: you can choose a different verb just to get CORS protection, but now you can't use POST even when it is appropriate.

True, I'd also like to see them add a CREATE method with the same semantics as POST but with CORS protections. But there's only so much you can accomplish in one proposal :P


> The premise of your article ... it's simply not true for single-origin sites

But it is. Even if you consider your site single-origin, "simple requests" including form posts are allowed by default on some browsers. This requires some sort of solution. There are a handful of possible solutions, but something needs to be done.

Example solutions:

1. Use SameSite cookies.

2. Ensure that you don't respond dangerously to any simple request.

3. Ignore credentials or block cross site requests.


My biggest gripe with CORS is how the destination site needs to opt into it for requests of every kind. For instance, by default a cross-origin XMLHttpRequest or fetch() request will not include cookies, and if you ask it to include cookies, then the destination must include an Access-Control-Allow-Credentials header if you want the browser to allow it.
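
(For reference, that opt-in on the server side looks roughly like this; a sketch assuming Flask, with a made-up trusted origin. Note the origin has to be echoed back, since the wildcard is rejected for credentialed requests:)

    from flask import Flask, request

    app = Flask(__name__)
    TRUSTED = {"https://app.example.com"}  # hypothetical

    @app.after_request
    def cors_with_credentials(resp):
        origin = request.headers.get("Origin")
        if origin in TRUSTED:
            resp.headers["Access-Control-Allow-Origin"] = origin
            resp.headers["Access-Control-Allow-Credentials"] = "true"
            resp.headers["Vary"] = "Origin"  # keep shared caches origin-aware
        return resp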

So that's all well and good, except for the part where even requests without cookies still need an explicit Access-Control-Allow-Origin in the response. As far as I am aware, the only attack vector this protects against is when the destination decides to release sensitive information solely based on the IP address of the source, and not based on its cookies or any other information not already accessible to JS. (Obviously, the biggest instance of this is in the case of local network addresses, but there are efforts to further restrict cross-origin requests to such addresses, using a separate header, Access-Control-Allow-Private-Network [0].)

I find this a shame, since it means that it's impossible to download most public data in a web app without proxying it through your own server, just because of this one edge case.

[0] https://github.com/WICG/private-network-access


>it means that it's impossible to download public data from a web app without proxying it through your own server.

Maybe with a subdomain that points to the target website IP? If it supports HTTP and doesn't check the host header it should be fine. That's pretty exceptional these days though.

Edit: I just checked and a subdomain won't do it, it needs CORS as well.


This post doesn’t mention one of the other reasons the same-origin policy was necessary: intranets.

If your company hosts content on https://company-news.corp.internal/ it’s very important that some random malicious site on the internet can’t use fetch() or XMLHttpRequest to read that page and exfiltrate that information, just from one of your employees being tricked into visiting that site.

So it’s about more than just cookies.


LAN segmentation is mostly stupid too.

There is a case to be made for old protocols and stuff Microsoft just won't fix (it's always Microsoft). But segmenting a web intranet because you expect the information not to escape is doubly stupid. Not only will it escape through the infinity of exploits that exist everywhere; you get exactly two groups of people for access control, which is certainly insufficient; and finally, you constrain people to working physically inside your office.


Just because it's stupid doesn't mean people haven't been doing it for more than twenty years.


> But segmenting a web intranet because you expect the information not to escape is doubly stupid.

It's one layer in the security stack. It's not sufficient on its own but there's no reason to get rid of it because it's not 100% effective.

> you constrain people to working physically inside your office.

Or, you know, just have a VPN.


Hmm, true. But perhaps you could mitigate this with cookies as OP suggests. Simply don't return anything unless the GET request has a valid intranet cookie?

Or perhaps the client can tell the server what webpage it's fetching from and the security check can be done server-side?

It is just strange to me that this security check has to be done on client-side (in the browser) as opposed to on the web server actually responsible for distributing the content.
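
(It turns out the client does tell the server: browsers attach an Origin header to cross-origin requests. A rough sketch of a server-side check, assuming Flask and a made-up intranet hostname:)

    from flask import Flask, request, abort

    app = Flask(__name__)
    MY_ORIGIN = "https://company-news.corp.internal"  # hypothetical

    @app.before_request
    def block_cross_origin():
        origin = request.headers.get("Origin")
        # An absent Origin usually means a same-origin GET or a direct navigation
        if origin is not None and origin != MY_ORIGIN:
            abort(403)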


You definitely could. But I guess it should be secure by default. Even if you didn't implement any check on the server.

Because people are lazy, or they may forget to implement the security checks, or simply be unaware of them.

E.g., say you hacked up a super simple script to show the number of today's users of your startup on the big screen in your office. You would probably want something as simple as possible. This page is just 3-5 lines of code. Maybe a one-liner even. No authorization or other security, as it's for the office intranet.

Without CORS any website that is visited by people from your office could fetch that number on screen.


Even with CORS, DNS rebinding may be a concern here. I think HTTPS may prevent it, as the cert wouldn't match the original site, but in this setup where you want "no other security" the attack would probably work.


Ideally TFA would have also explained why some requests do go through without a preflight. What's called a 'simple' request is the explanation behind TFA's whole premise.

https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#simpl...

These days, pretty much everything requires a non-simple request in order to invoke an action, regardless of whether the client can read the result.

Agreed in spirit though that CORS is annoying to use, and it's always worth consulting the manual.


Yeah, I had a section on this but decided to cut it because I felt the other protections mostly obsoleted the need to talk about this.

> pretty much everything requires a non-simple request in order to invoke an action

Except for POST requests such as forms. They may be a little out of fashion with JS-based frontends and JSON APIs but I would consider it a pretty gaping hole. If the list could be reduced to just HEAD and GET requests I would consider it a pretty complete protection. But POST seems like too big of a hole. (Although if you can block all POST requests with a "simple" Content-Type then you unlock a pretty robust protection.)


I didn't get this article. The first part explains common CSRF vulnerabilities, and then even explains the SameSite attribute, but it neglects to point out that SameSite=Lax is now the default on browsers, and has been for some time, specifically to mitigate the problems outlined in this article and to get rid of the need for CSRF tokens.


> SameSite=Lax is now the default on browsers

Not on all major browsers, unfortunately: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Se...

A real shame, because it would be a pretty robust solution for sites that don't require any cross-origin interactions at all. That being said it is well supported if set explicitly. So if you can ensure that all cookies that your app sets have the attribute you should be good to go.


One of my favorite explanations of CORS is this one, which includes an interactive playground: https://jakearchibald.com/2021/cors/


> It provides both opt-out protections as an attempt to mitigate XSS attacks

nitpick, I guess, but CORS is about CSRF (Cross Site Request Forgery) and not XSS (Cross Site Scripting). If you have an XSS bug, CORS can't save you since they can make requests from your origin.


That's a pretty important nitpick. CSRF and XSS are very much not the same thing.


Sorry, you are completely correct. I'll update the article.


I was expecting a rant, but, wow, it was far better than that. This is a well founded criticism.

CORS is an afterthought that was never carefully integrated into the ecosystem, which resulted in bad DX. It’s so blunt that applying CORS always feels like a hack.


I run into CORS issues often when fetching RSS feeds from browser JavaScript [0], where the RSS provider has failed to correctly set the Access-Control-Allow-Origin header.

[0] https://porjo.github.io/freshtube/


100%, this is one of the reasons that I want to promote `Access-Control-Allow-Origin: *`. It allows client-side RSS and link previews without needing a useless (and potentially privacy-harming) CORS proxy.
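
For a public feed, that's a one-header change on the provider's side (a sketch assuming Flask; the route and payload are placeholders):

    from flask import Flask, Response

    app = Flask(__name__)

    @app.get("/feed.xml")
    def feed():
        resp = Response("<rss/>", mimetype="application/rss+xml")
        # "*" never permits credentials, so only public data is exposed
        resp.headers["Access-Control-Allow-Origin"] = "*"
        return resp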


You cannot use `Access-Control-Allow-Origin: *` indiscriminately, though. In some cases, it can be dangerous: https://security.stackexchange.com/questions/227779/concrete...


I agree with those points but I don't think they mean that we shouldn't be promoting that header as a common solution.

> Server bound to an inaccessible network interface

This is a niche use case. Most sites don't have this problem.

> Distributed client-side brute-force attack against login

This is pretty easy to solve by adding checks on your login endpoint. But really you should have more robust login rate limiting whether or not attempts can be triggered by clients on different sites.


Do people use XSRF tokens nowadays? That used to be the standard approach to this, before all the browser CORS protection came to be. The server gives the client a token on the top-level page that must be included in any subsequent POST requests alongside the cookies. Seems like a safer approach, unless you fully trust all browsers to get CORS correct.

It's similar to the Authorization header technique, except you would normally submit it as a parameter in the POST request instead of a header. Explicit credentials are good but have some drawbacks: being in the headers, they must be submitted using fetch(), making them difficult to use in forms or <a> tags, where implicit credentials work more smoothly.
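
The token flow, roughly (a sketch of the synchronizer-token pattern, assuming Flask sessions; not production-hardened):

    import secrets
    from flask import Flask, session, request, abort, render_template_string

    app = Flask(__name__)
    app.secret_key = "replace-me"  # placeholder

    @app.get("/form")
    def form():
        session["csrf"] = secrets.token_urlsafe(32)
        return render_template_string(
            '<form method="post" action="/submit">'
            '<input type="hidden" name="csrf" value="{{ tok }}">'
            '<button>Submit</button></form>', tok=session["csrf"])

    @app.post("/submit")
    def submit():
        expected = session.get("csrf")
        # Reject if no token was issued or the submitted one doesn't match
        if not expected or not secrets.compare_digest(
                request.form.get("csrf", ""), expected):
            abort(403)
        return "ok"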


I know that CORS is stupid.

It took a CORS expert from W3C to answer this convoluted CORS question.

https://stackoverflow.com/questions/62289103/same-origin-req...



But that's not a CORS problem. It's a CSP problem. (And CSP deserves to be despised.)


It’s great when people learn things and write about it; that behavior embiggens us all. But “X is stupid” screeds show a lack of appreciation for historical context and why things are the way they are. No one had a grand design for the web and it is, for better or worse, a series of compromises.


While I do partly agree, as someone who is old, I don't expect the younglings to think we didn't do stupid things. We did, because we did not know any better, because the world was new and fast and they seemed like good ideas at the time. Expecting the future to value the compromises we have chosen doesn't scale, and piling on complexity does not scale either. I don't know how to solve this, but I doubt continuing to add complex and fragile hacks will work forever. In this sense "X is stupid".



You really should be using Content-Security-Policy to block untrusted scripts. And anything you do not host yourself should be untrusted, no matter how much your marketing department whines they want to include Google Tag Manager.
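
For example, a policy that only allows scripts you serve yourself (a minimal sketch, assuming Flask; real policies usually list more directives):

    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def add_csp(resp):
        # 'self' restricts scripts to this origin; inline scripts and
        # third-party tags (e.g. Google Tag Manager) are blocked
        resp.headers["Content-Security-Policy"] = "script-src 'self'"
        return resp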


CSP is a poorly designed, dangerous, user-hostile standard that should have never been written, let alone implemented.

Better idea than "using Content-Security-Policy to block untrusted scripts": don't link against untrusted scripts.


Every time I need to fiddle with CORS or CSP, I have to read the MDN pages for them; I could never get them to stick intuitively. This is a nice article but more info on CORS would be nice.


Anyone who complains about CORS has my instant upvote, haha!

Seriously though, I think, like this guy suggests, if you avoid cookies on your site or use SameSite, you're fine and CORS is mostly a waste.


CORS is clunky. This article is not very good and I would not recommend just blindly following the advice given.


100% agree. CORS is very clunky


Wait what

The main premise of this post is false.

The mentioned request can’t succeed without first completing a preflight check.


It depends on the request including things like custom headers and other non-approved changes. Content-Type is one of them, but even it has some whitelisted values.

Read about 'simple' requests to see what you can and cannot do before a preflight is required: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#simpl...


Probably you didn’t see my first reply.

Put simply, if I make a cross-origin POST request with a JSON content type, it cannot be a simple request.

There are no ways around this


Didn't see it :) you nailed it!


It depends on content-type.


I find it difficult to assume that’s what the author meant, especially since they made no mention of content type.

It doesn’t seem the author was aware of content type once you see their solution.

Normally I would give the benefit of the doubt, but the “fix” is pretty damning here.



