Kinda seems like we’re rapidly headed for the complete collapse of the internet as we know it.
Every site that is driven by user posting seems to be headed towards being overrun by AI bots chatting with each other, either for the sake of promoting something or for farming karma.
And there’s really not much point in publishing good content anymore, since AI is just going to slurp it up and regurgitate it without driving you any traffic.
Though it’ll be interesting to see what happens to ChatGPT and the like once the amount of quality content for them to consume slows to a trickle. Will people still use ChatGPT to get product recommendations without Reddit posts and Wirecutter providing good content for those recommendations?
The bot problem cannot be solved. Even if you strongly authenticate, people are letting bots act on their behalf (moltbook is a great example of this), and what's to stop people from doing that in the future? Bots can build identity and reputation autonomously, with all the benefits that come with that.
This happens now on Onlyfans too. Content creators hire agencies which, in the best case, outsource chatting with "customers" to armies of cheap labour in Asia, and in the worst case use bots.
The dead internet theory [1] is probably not just a theory anymore. HN recently adopted a policy disallowing AI posts and posters, but do you honestly think that's going to work? I would bet that within the next year a top HN poster is outed for using AI to post on their behalf.
The bot problem can easily be solved. It’s just that no one likes the cure. Think about this for a minute: what would happen if you had a country where all its citizens could act anonymously with no consequences, no reputation, no repercussions, and no trace? Would you want to go there? Live there? No, because it would be a lawless wasteland dominated by the worst of the worst.
Yet people act like the internet is somehow different. The internet is a massive society. Social networks are very much like virtual countries, or even continents. We’ve all enjoyed the benefits of living in this society of zero consequence, but it’s now been overrun by the very worst people, just like the imaginary country above.
You claim we can’t solve this problem, but we already have solved it here in the physical world with identities, laws, and consequences. The real problem is that most people don’t want to let go of the very thing that is the problem: anonymity. Unfortunately, there won’t be a choice for much longer. The internet will certainly be dead without a system that ties IP addresses and online identities to real people.
No, it’s not the internet we all wanted, but humanity has ruined the one we have.
I can imagine an "anonymity" or "reputation" filter attached to every interaction on the internet: enabled by default, but you can disable safe mode and watch the bots having fun.
Also, for me the problem is not anonymity itself but the lack of reputation. If I have a signal that an entity can be trusted, I don't care much about its real identity.
I think this is a great way to frame the conversation and a possible solution: reputation. Things like accumulated karma or credits and IRL connections (big data will love this) all begin to feel dystopian, whereas reputation, I believe, is something that everybody can get behind. It can absolutely remain anonymous while still benefiting from IRL meetups for big reputation bumps (just use your handle). We all hang out in lots of places online; let that rep build and be usable everywhere. Pretty sure they were trying to do something like this in the fediverse, but I haven't touched base on it in a long time...
So you are missing something here. Up until recently, IRL was effectively anonymous, because capturing all that data about what people are doing was expensive and the data was difficult to process. Cameras weren't everywhere either.
If you lie to me in the real world, I know what you look like and won’t trust you again. You cannot change your face. If you punch me in the real world, I can punch you back. If you stab me in the real world, you’re likely going to jail once the police catch up to you. You don’t do any of those things because the lack of anonymity imparts consequence. There is no anonymity in the real world unless you run around in a full face mask, in which case no one will trust you anyways.
>The real problem is that most people don’t want to let go of the very thing that is the problem: anonymity.
Anonymity is not the problem though. We've gone with anonymity for a long while and it has worked fine. Would a removal of anonymity suddenly fix all this? No, absolutely not. Astroturfing and PR campaigns happened before AI comments were a concern, same as bad actors.
The problem here is the "recent" development of trusting whatever you read online. Of insisting that content should be personal, trustable and real, when none of this can ever be ensured. The separate, but related problem of engagement-based economy makes it way worse.
And remember: social media sites don't actually want to get rid of bots, for the most part. That's not in their interest as long as bots increase engagement. Does anyone trust them to actively hurt their bottom line in order to promote honest, productive discourse? Please.
Suppose bots get so good that they become indistinguishable from humans. If that's true, then it doesn't actually matter if your community is all bots. Except it does matter, because authenticity matters to humans. They will seek authenticity where they can successfully sense it, which will be in person.
Human simulacra will one day cause a repeat of this issue. Then we'll have a whole Blade Runner 2049 debate about what exactly authenticity is.
"Creator", on the other hand, is beautiful. It means you don't have to pick a lane. Anything can be creative. Documentary filmmaking, stop motion, dance, costume work, historical reenactment, indie animation, economics essays, game dev...
The problem is we don't have a nice word that holistically captures the output of creators. They're not all making films or illustrations. So what do you call it? "Art" is awkward.
"Content" works, but it sounds like slop. We need a better alternative word that elevates creative output.
You can call it video essays, Let's Plays, news reports, slop videos, and so on.
To repeat fgiesen/ryg: to a content plumber, it's all content, just like the mail system delivers packages and doesn't really care what's in them. The video engineers at YouTube don't care whether it's a news report or a slop video, as long as the frames get on your screen. However the sender and receiver had better care what's in the package or something's gone horribly wrong.
> people are letting bots act on their behalf (moltbook is a great example of this) and what's to stop people doing that in the future.
Verifiable credentials; services can get persistent pseudonymous identifiers that are linked to a real-world identity. Ban them once and they stay banned. It doesn’t matter if a person lets a bot post inauthentic content using their identity if, when they are caught, that person cannot simply register a new account. This solves a bunch of problems – online abuse, spam, bots, etc. – without telling websites who you are or governments what you do.
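One way to picture persistent pseudonymous identifiers is a per-service pseudonym derived from a secret that an identity provider issues after a real-world ID check. This is a minimal sketch, not any specific verifiable-credentials standard: the names and the HMAC construction are illustrative assumptions, and a production scheme would use blind signatures or zero-knowledge proofs so the identity provider itself can't track usage.

```python
import hashlib
import hmac

def pairwise_pseudonym(user_secret: bytes, service_domain: str) -> str:
    """Derive a stable, service-specific identifier.

    The same user always maps to the same pseudonym on a given
    service (so a ban sticks across re-registrations), but pseudonyms
    for different services are unlinkable without user_secret.
    """
    return hmac.new(user_secret, service_domain.encode(), hashlib.sha256).hexdigest()

# Hypothetical secret issued once, after a physical ID check.
secret = b"issued-by-identity-provider-after-id-check"

a = pairwise_pseudonym(secret, "forum.example")
b = pairwise_pseudonym(secret, "forum.example")
c = pairwise_pseudonym(secret, "other.example")

assert a == b  # re-registering yields the same pseudonym, so a ban persists
assert a != c  # different services cannot correlate the same user
```

The key property is in the two asserts: a banned user can't mint a fresh identity on the same site, yet no site learns who the user is or what they do elsewhere.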
There is the other side of this too: Real people - fake posts.
So, you have other folks on here already saying that the code their bots write is better than their own, right?
How long until someone who is karma focused just uses a bot to write their comments and post their threads? I mean, it's probably already happening, right? Just like a bot doing your homework for you, but with somehow even less stakes. I imagine that non native speakers will take their posts and go to an AI to help clean them up, at the very least. At the worst, I can imagine a person having a bot interact fully under their name.
So even if we have some draconian system of verification, we will still have some non-zero percentage of bot spam. My out-of-my-butt guess is somewhere near 40%.
IMO this is inevitable. HN is freaking out about the end of the anonymous internet, but it's already over and we're just figuring it out. Eventually the bots will find their 90s cyberpunk cosplay IRC channel too.
You kinda skipped the bit I wrote alongside this about strong authentication. There are numerous ways to do this. For example, in Finland you have to physically identify yourself to open a bank account and you can then use that to authenticate. It's used for all public sector services and a few others with strict accreditation.
The issue is that it solves nothing if you can't distinguish between text that is written by AI and text that isn't, regardless of strong authentication.
I'd rather have a system where there's a small investment cost to making an account, but you could always make another.
Imagine a system where there's a vending machine outside City Hall: you spend $X on a charity of your choice, and you get a one-time, anonymous token. You can "spend" it with a forum to indicate "this is probably a person, or close enough to it."
Misuse of the system could be curbed by making it so that the status of a token cannot be tested non-destructively.
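The destructive-check idea above can be sketched in a few lines. Everything here is a toy illustration under stated assumptions: the issuer stores only a hash of each sold token (so redeemed tokens can't be linked back to a purchase), and redemption consumes the token, which is exactly what makes non-destructive probing impossible.

```python
import hashlib
import secrets

class TokenIssuer:
    """Vending-machine side: sells anonymous, single-use tokens."""

    def __init__(self):
        self.unspent = set()  # hashes of tokens sold but not yet redeemed

    def sell_token(self) -> str:
        token = secrets.token_hex(16)
        # Only the hash is stored, so the issuer cannot map a
        # redeemed token back to a specific purchase.
        self.unspent.add(hashlib.sha256(token.encode()).hexdigest())
        return token

    def redeem(self, token: str) -> bool:
        """Destructive check: a token is valid at most once."""
        digest = hashlib.sha256(token.encode()).hexdigest()
        if digest in self.unspent:
            self.unspent.remove(digest)
            return True
        return False

issuer = TokenIssuer()
t = issuer.sell_token()
assert issuer.redeem(t) is True   # first spend succeeds
assert issuer.redeem(t) is False  # replay fails; status can't be probed without spending
```

A real deployment would want blind signatures on top of this so that even the issuer's database can't correlate sale and redemption timing, but the spend-once semantics are the core of the anti-misuse argument.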
Something Awful made you pay $10 for an account, directly to the forum. If you got banned you could pay another $10 to try again. Somehow this didn't lead to incentives as bad as you'd think it would.
The ban reason and the moderator's name were public on Something Awful, which allowed the community to respond (actively or passively) and more senior moderators/admins to take public action against rogue moderators. The transparent audit trail countered the incentive to ban somewhat, but a lot of people also treated getting banned as a game.
It's because you can't reasonably put everything in the rules. They would be thousands of words and still have holes and special carve-outs, _and_ users will still argue about rules application if you say your rules cover everything.
It's more reasonable to have "a spirit of the law", so to speak.
Lemmy isn't simply Lemmy since it's federated. A screenshot like this is somewhat meaningless without specifying on which instance this happened. There are instances with very lax or even no moderation at all.
For the majority of large, well-federated instances, I don't think it's meaningless, because deletions also propagate to other instances.
If a mod on one server doesn't like something I say, and they delete my comment, all the other (well-behaved) federated instances will also delete my comment.
Of course this also creates problems in the other direction, like servers that ignore deletion requests.
Combine that with the large number of instances blocked across the board, and you get into this "which direction would you like to piss into the wind" situation where you have no idea how many people/instances will actually see your message, if any.
>transactional emails from various services that you’ve signed up for
These are among the main culprits of unwanted emails... and a toll system would make them all the more valuable for even worse actors to take advantage of.
Probably not. The problem is that spammers/scammers are looking for whales, and if you are talking about draining the retirement account of an American who's been saving all their life, that's quite a big payout, in the six or seven figures.
In the case of the 419 scams, I used to ask "who would expect $20M to fall out of the sky?" The obvious answer is "someone who already had $20M fall out of the sky."
I've talked about this on here before, but we think the solution is an auth layer built on top of credit score through an intermediary like creditkarma. The score itself doesn't really matter but it does solve big problems.
Plus, if you wanted to implement a filtering system for users, I personally would rather trust reviews/comments from users with credit scores over 650; they have less incentive to be astroturfing.
But yes, I think your conclusion is correct. This is the only way.
How is that creditkarma score accumulated? By other "users"? Does the intermediary guarantee that this account is a valid person, now and always, and that the account hasn't been sold or stolen? I mean, we will always need some middlemen, I guess?
Anubis is one such answer [0]. Cryptocurrency and micro transactions are another.
In the last few decades, spam was a problem because the marginal transaction costs of information exchange were orders of magnitude lower than they had been. Note that physical mail spam was, and still is, an issue. Focusing on perceptual or fuzzy computation as the limiting factor, through captchas and other 'human tests', allowed most spam to be effectively mitigated.
Now that intelligence is becoming orders of magnitude cheaper, perceptual computation challenges no longer work, but we can still do computation challenges in the form of proof of work or proxies thereof. Spam will never wholly go away but we can at least cause more friction by charging bot networks to execute in the form of energy or money.
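A hashcash-style challenge is the classic form of this idea. The sketch below is a generic proof-of-work scheme, not necessarily the exact mechanism Anubis uses: the server hands out a per-request challenge, the client burns CPU finding a nonce whose hash clears a difficulty target, and the server verifies it with a single hash.

```python
import hashlib
from itertools import count

def solve(challenge: str, difficulty_bits: int) -> int:
    """Find a nonce whose SHA-256 hash has difficulty_bits leading zero bits.

    Expected cost for the client: about 2**difficulty_bits hash evaluations.
    """
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(challenge: str, nonce: int, difficulty_bits: int) -> bool:
    """Server-side check: one hash, regardless of difficulty."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = solve("per-request-challenge", 16)  # ~65k hashes on average
assert verify("per-request-challenge", nonce, 16)
```

The asymmetry is the point: verification is one hash while solving is tens of thousands, so a crawler hitting millions of pages pays a real energy bill while a single human visitor barely notices.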
I don't see how Anubis solves anything. If a human lets the bot control a completely vanilla computer (which there is now a lot of tooling for), then how is it going to stop that?
You're right. My proposed solution only addressed AI at scale (web crawlers, mass spam campaigns, etc.) and doesn't address low bandwidth, "high friction" events like PRs, code reviews, blog posts, etc.
I don't want to dismiss the concerns outright and I'm not sure I have a concise response but my feeling is that, in some sense, it doesn't really matter. If AI is used to create a high quality output, then it should be accepted. If AI is creating low quality output, then it should be easy to verify, maybe with better (AI) tooling.
In other words, the bot problem cannot be solved, in that we might never know whether the source is from human or machine, but it won't matter as that's not the core of the problem, quality content is.
My opinion is that DIT is overstated and, where it isn't, we'll see much better technology evolve to separate the signal from the noise. As an analogy: in the late 1990s, internet search engines were abysmal, ranking by document keyword matches, and so were easily gameable by content that had nothing to do with the search intent. Google came along with PageRank and, almost overnight, made the internet usable. From the bad Yahoo search results one might have been tempted to think that the entire internet looked like what Yahoo was serving, but that was the wrong impression: there were plenty of interesting things on the internet; it just took PageRank to provide the filter necessary to make the internet usable.
At most, PoW makes it a bit annoying to scale: you need to add some form of RPC that delegates solving to a beefy+cheap Hetzner server. If you're really scaling and it's getting expensive, you can rent a GPU to do batch solves.
With AI running rampant, it seems security through obscurity is basically the best thing we have. Everyone knows reddit, facebook, xitter, etc so any clown can and does have bots running loose. HN is "obscure" in that most normies don't know about this place, and so it's relatively safe from the floods of spam. But I think it's just a matter of time until non-tech people start looking for those few bastions of human comments online, come across this place, and a great flood begins and it'll never be undone. After that, I guess it'll be a rise of invite-only forums like we had in the early 2000s all over again.
HN may not be “mainstream” but it is certainly _very_ vulnerable to bot spam given the topics discussed and the make-up of the audience.
You can already see it happening now - at least the bots that write like vanilla Claude/ChatGPT. Presumably there is a much larger hidden cohort of bots that are instructed to talk more naturally and thus are better adept at flying under the radar…
If you are rate limited, a moderator has manually applied a rate limit to your account. Accounts are not rate limited by default. You can appeal the decision by emailing hn@ycombinator.com.
I think there's a short-term rate limit applied to everyone, e.g. you get a message if you try to post three replies in the same minute. I've seen it once, and I don't think I'm active enough to have earned a manual flag.
The karma points you get on HN are worthless, which I think is a bonus. They don't buy you anything. On Reddit, for instance, many parts of the site are walled off until you have "farmed" enough karma to participate.
You get the right to downvote, and if I promote my totally-not-a-scam product on HN, people will check my user account and see: oh wow, over 9000 karma? Gotta be trustworthy! When in truth it's just been karma farming.
I don't know, I never found much value in karma. I recreate an account at least once a year for no particular reason, and it takes me roughly a week to get enough karma to do what is important (flagging posts).
I’ve never seen people on the likes of blackhatworld selling hacker news accounts or services. The glass half full take on this is that hn is surprisingly robust in its ability to deal with vote manipulation.
150M page views a month is peanuts, very far away from the "social" networks' numbers. I don't have those numbers, but I know how many page views we had in 2011 while running a German browser-game community.
The internet seems to have grown massively within the past couple years (unfortunately, almost certainly because of bots). I bet the number today is orders of magnitude higher.
I've asked ChatGPT a question about something I read in a thread here and it responded with a comment from that thread, even though the thread was less than an hour old. HN is well known in the tech community and there are certain subjects, especially anything involving Israel or India, that nearly instantly result in a flood of comments from bad actors. HN isn't Reddit but it's also a shadow of what it once was, which is driving away more of the productive participation in favor of agenda-based posting.
Note that these topics often involve comments which you can predict very easily. Internet users are like that, agenda or no. Wasn’t it in the heyday of forums that you could recognize the most prolific/annoying members by their style and vocabulary? A model should have no problem pulling such things off.
I was thinking the same thing, that this wouldn't necessarily be a bad thing. I'm curious how far it will go.. if we'll get invite-only mesh networks with self-contained mini-internets and the like.
The future is human curated content. Provide the same experience people get today but without the noise. Give them just the good stuff and don't let just anyone make a post. A book has an author, a movie has a director, maybe websites can have webmasters again who filter through the garbage for you.
This means that only sites which verify identity will have any value in the future. And by verified, I mean checked against government ID and confirmed as real.
No amount of sign up fee works as an alternative.
Note that a site can verify identity, prevent sock puppets, ban bad actors and prevent re-registration, all while keeping that ID private.
You still get a handle and publicly facing nick if you want it.
The company which handles this correctly will have a big B after it. Digg actually has a chance at this.
It has no users, so the outrage won't exist in the same capacity. Existing platforms will be pummeled in the market if they try to convert to this type of site, as their DAU will likely drop a thousandfold, just due to the eliminated bots.
But Digg could relaunch this way. And as exhibited, this is now the only way.
The age of the anonymous internet is over, it's done. People not realizing this are living in the past.
Note, I don't like this, but acknowledging reality is vital. Issues with leaked databases, users, hacking of Pii are all technical and legislative issues, and not relevant to whether or not this happens.
Because it will happen, and is happening.
It should be noted that falsifying ID is a crime. Fake ID coupled with computer fraud laws will eventually result in hefty jail time. This is sensible, if people want a world where ecommerce and discourse are online... and the general public does.
And has exhibited a complete lack of care about privacy regardless.
I think people who want to stay anonymous just will not participate anymore. Like I’ve enjoyed using this site, Reddit etc but couldn’t care less about dropping them if I need to have an id verification to access. Someone will probably create a new communication method to replace this.
>No amount of sign up fee works as an alternative.
Simply put, money is worth too much; at some point someone will want access to this human audience and will offer too much to be resisted.
>It should be noted that falsifying ID is a crime
Lol, no one gives a shit on the internet. People will use stolen IDs to get accounts. If the network is lucrative enough, governments will provide fake IDs to spread propaganda.
You've nailed it. Social media is no longer and will never again be a substitute for real human interactions. It sort of worked when it was mostly real humans, but that era is ending and not coming back. Algorithms are now controlling what you see, and bots and agents are increasingly creating and posting most of the content.
Everything clicks nice, so to speak. A nice UI you have there.
I would suggest you explain what it's about in one sentence, just like you do in your HN profile. The About page doesn't say much. You could add some explanation there, or even just one sentence at the top of the homepage (or other pages).
Every website needs to add the "friend or foe" system[0] so that I can mark bots to avoid their content and mark good posters so I can filter just to theirs.
no, I truly do not want to read IHeartHitler88's opinion on jews, or donttreadonme09's bright opinions about how the economy would be better if we listened to Ayn Rand. I'll be very happy when they're out of my sight. If I want to have a miserable day, sure, I'll turn it off.
Fact of the matter is, most posts on the internet are already dogshit. Now they're also populated by AI, but the point stands. Most of what you will say online is at best useless.
I know, it hurts. Most of what I say in this website doesn't matter. Even if it did, it's about the same thing as screaming into the void. And it applies to you too.
The vast majority of what we post is vapid, useless bullshit.
> And there’s really not much point in publishing good content anymore, since AI is just going slurp it up and regurgitate it without driving you any traffic.
You just published good content knowing AI will slurp it up and not give you any traffic in return. I'm now replying to you with more content with the same expectations about AI and traffic. Why care about AI or traffic or recognition? Isn't the content the thing that matters?
It's like answering technical questions in an anonymous/pseudonymous chat or forum, which I'm sure you've done, too. We do it to help others. If an AI can take my answer and spread it around without paying me or mentioning one of my random usernames I change every month or so, I would be happy. And if the AI gives me credit like "coffeecup543 originally posted that on IRC channel X 5 years ago", I couldn't care less. It would be noise to the reader. Even if the AI uses my real name, so what?
The people who cared about traffic and money from their posts rarely made good content, anyway. Listicles and affiliate marketing BS and SEO optimizations and making a video that could be 1 minute into 10 minutes, or text that could've been 5 articles into a long book - all existed from before AI. With AI I actually get less of this crap - either skip it or condense it.
It's two different problems. People who run review sites and blogs and such care about traffic, and not getting attribution will kill their desire to participate. People who post here and on Reddit etc. care about talking with other human beings, and feeling ignored in a sea of botspam will kill *their* desire to participate.
> feeling ignored in a sea of botspam will kill their desire to participate.
The bots are not really that bad, they're (still) pretty easy to spot and not engage with. I'm more perplexed about the negativity filled comments sections, and I'm pretty sure most posters are real grass-fed certified humans.
I don't get why negative posts get so upvoted, get so popular on the front page, and why people still debate with outdated arguments in them. People come in and fight other demons, make straw-man arguments, and in general promote negative stuff like there's no tomorrow. I think you can get so much more signal from positive examples, from "hey, I did a thing" type posts, and so on. Even overhyped stuff like the claw-mania can still be useful. Yet the "I did a thing" posts get so overwhelmed by negativity, nitpicking, and "haha, not perfect means DOA" type messages. That makes me want to participate less...
Oh that's just human nature: there's a reason why trashy tabloids continue to exist despite how public sentiment seems to universally agree that they're awful spreaders of rumour and insecurity. More people are Skankhunt42 than we'd like to admit.
Sure, just be aware of what you're up against: if religion teaches us anything it's that even concerted, systematic efforts over millennia to conquer human nature (eg: libido) still fail. But if you want to give it a go, by all means: one can only imagine Sisyphus happy.
In the most simple sense - Yes, it is the content that matters.
In the more practical sense - cognitive and emotional resources are limited and our brains are not content agnostic.
We have different behaviors, expectations and capacities for talking to machines and talking to humans.
For example, if I am engaging with a human I can expect to potentially change their minds.
For a machine? Why bother even responding. It’s of no utility to me to respond.
Furthermore, all human communication comes with a human emotional context. There are vast amounts of information implied through tone, through what we choose not to say. Sometimes people say things in one emotional state that is not what they would say on another occasion.
To move the conversation forward, addressing the emotional payload behind the words used, matters more than the words used themselves.
There are a myriad reasons why humans are practically poorer for these tools.
This could be positive. So far things were gamed and manipulated to some extent, with some fake content, but it was never too obvious, and a bit of a cat and mouse game with filters and whatnot. Now, it's so easy to fake content that robust systems will have to evolve, or most social media sites will become worthless, and advertisers will catch up eventually when they are paying for bot-only sites.
The downside of course is that these robust systems are hard to imagine without complete loss of anonymity of the users.
Web of trust weakens anonymity, but doesn’t eliminate it.
- You know who your online invitees are, but not your invitees-of-invitees-of-…
- You can create an account, get it invited, then create an alt account and invite it. Now the alt account is still linked to you, but others don’t know whether it’s your friend or yourself. (Importantly, you can’t evade bans with alts; if your invited users keep getting banned, you’ll be prevented from inviting more if not banned yourself)
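The ban-evasion argument above can be made concrete with a small invite-graph sketch. This is an illustrative toy, not any particular site's implementation: every account records who invited it, and once enough of someone's invitees are banned, their own ability to invite is frozen, so alts bought with invites always cost the inviter something.

```python
class InviteGraph:
    """Invitation tree where bans count against the inviter."""

    BAN_THRESHOLD = 3  # hypothetical: banned invitees before inviting is frozen

    def __init__(self):
        self.inviter = {}        # account -> inviter (None for root accounts)
        self.banned = set()
        self.invite_frozen = set()

    def register(self, account, invited_by=None):
        if invited_by is not None and (
            invited_by in self.banned or invited_by in self.invite_frozen
        ):
            raise PermissionError(f"{invited_by} may not invite new accounts")
        self.inviter[account] = invited_by

    def ban(self, account):
        self.banned.add(account)
        parent = self.inviter.get(account)
        if parent is not None:
            # Count how many of the parent's invitees are now banned.
            bad = sum(1 for acct, inv in self.inviter.items()
                      if inv == parent and acct in self.banned)
            if bad >= self.BAN_THRESHOLD:
                self.invite_frozen.add(parent)

g = InviteGraph()
g.register("root")
for name in ("alt1", "alt2", "alt3"):
    g.register(name, invited_by="root")
for name in ("alt1", "alt2", "alt3"):
    g.ban(name)
assert "root" in g.invite_frozen  # three banned invitees freeze root's invites
```

Others still only see pseudonymous handles and a one-level invite edge, which is the sense in which the scheme weakens anonymity without eliminating it.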
You mean a complete collapse of social media, not the whole internet. The internet is a telecom ecosystem and has a lot more to it than just forums and link aggregators.
I honestly believe it might not even be such a bad thing. People were arguably better without social networks and media, and it's perhaps better to let the cancerous thing just die and keep the internet just as a utility powering boring things like banking and academia.
What would you say are the major applications of the internet? It's used for business and academia in ways that aren't going away, yes. M2M communication will stay. Social media is the largest user-facing segment and it might not. I don't have a sense of how big these sectors are relative to each other. If the largest sectors of the internet disappear, the internet shrinks a lot.
Asking people for money in order to read stuff, and promoting the content people are actually ready to part with real money to read, is an interesting first step. (See: Substack, Patreon, etc...)
I know this is going to sound horrible, but: how about asking for money to contribute, period? Maybe have a free tier of a couple of comments, etc... But if you want to build a troll factory, sure... show us the cash?
I do believe that charging for it is one way to create some friction, but it's not enough.
Twitter is full of blue checks that are just bots and automated reply guys.
I'm now treating all these bots as a stressor on our defense systems: we will end up having to learn how to build a real Web of Trust, and really up our game on the PKI side. We also need some good zero-knowledge proof of humanity that people can tie to their Keyoxide profile, so that we can just filter out any message that is not provably associated with a human.
Instead of building a Web of Trust, a better solution might be to find efficient ways of clustering people.
Look how civil and insightful this discussion is. Why? Because people have different "quality", even if it's not politically correct to talk about it, we all know it intuitively.
Imagine a forum, where all the HN-level people are gradually clustered together, all the rednecks with conspiracy theories, all the leftists dreaming of communism, all form their own independent echo-chambers.
Yes, echo-chambers are bad, but let's face it, you won't be able to change most people's opinions anyway. Don't agree with me? Go on reddit, try saying something "controversial", like "men can't get pregnant", see how many people you would be able to convince :)
> Imagine a forum, where all (...) form their own independent echo-chambers.
That would be a horrible nightmare. You are falling for the same identity politics trap of the ones you are implicitly criticizing. You being on HN makes you no better than the "others".
> you won't be able to change most people's opinions anyway.
Who cares about "changing most people's opinions"?
Every website that was driven by traffic is also dying. I have put nearly a decade of work into mine, and AI overviews and ChatGPT have reduced traffic by over 60%. At some point I will need to give up and find a job, and that corner of the internet will get no new original information, just rehashed slop.
> Though it’ll be interesting to see what happens to ChatGPT and the like once the amount of quality content for them to consume slows to a trickle.
The creative loop moves inside the agentic chat room, where we do learning, work, art, research, leisure, planning, and other activities. Already OpenAI is close to 1B users and puts multiple trillions of tokens per day into our heads, while we put our own tokens into their logs: an experience flywheel, or extended-cognition wheel, of planetary size. LLMs can reflect and detect which of their responses compound better in downstream activities and derive RLHF/RLVR signals from all our interactions. One good thing is that a chat room is less about posing than a forum, though LLMs have taken to sycophancy, so they are not immune, just easier to deal with than forums. And you can more easily find another LLM than a replacement specialty forum.
As someone who came of age before "the internet as you know it", I am looking forward to all of the cancerous Web 2.0 OG slop and narcissism factories succumbing to their own fates. Let me tell you, the internet as we know it sucks, and the internet it ate 25 years ago was a marked improvement. We should be so lucky. Now go write a personal blog in plain text, and rejoice.
Unless you're allowed to say slurs without being banned, your forum will be overrun with bots. The sanitation of the internet is the perfect breeding ground for brand-safe AI promotion bots.
Curious how you came to that conclusion. Anecdotally, places where you can slur to your heart's content like /r/conservative seem far more inundated with bots than other areas of Reddit. I feel like that's really saying something too, because Reddit has a really bad bot problem overall.