The dilemmas are: who gatekeeps the gatekeepers, and how much do they want to act like a publisher vs. a town square? If too much like a publisher, maybe they'd become liable for the content they publish.
>maybe they'd become liable for the content they publish
And therein you have found the crux.
Even setting aside the massive human-resource cost of moderating this, they still run the risk of implying approval or endorsement of anything the algorithm throws up that they don't manually kill.
Personally, I think you could just consider that training for the AI.
They will have to do something eventually; I can't see the EU, whose member nations do have hate speech laws, letting this rabbit hole continue for much longer.
This is amazing: this is exactly the thing people demand of YouTube, so that the algorithm stops promoting anti-vaxxers and Holocaust deniers.