> This is not a reflection of their talent, their effort, or their belief in what we were building. It's a reflection of the brutal reality of finding product-market fit in an environment that has fundamentally changed.
Ironic, they use AI in their shutdown post that blames AI.
>> This is not a reflection of their talent, their effort, or their belief in what we were building. It's a reflection of the brutal reality of finding product-market fit in an environment that has fundamentally changed.
> Ironic, they use AI in their shutdown post that blames AI.
This… seems like regular prose to me. What makes you say so confidently it was written by AI?
I think you're spot on. It feels like parts were edited with AI and parts were left alone.
> This isn't just a Digg problem. It's an internet problem. But it hit us harder because trust is the product.
The statement this is making is presumably the crux of the problem (Digg cannot survive without trust!) but it's worded so poorly that it's hard to imagine someone sat down and figured these three sentences were the best way to make the point.
The rule of three is a basic writing structure taught to 12-year-olds. I know people have given up on even the basics (capitalisation) in recent years, but let's not banish structured writing to "AI" territory.
"We underestimated the gravitational pull of existing platforms. Network effects aren't just a moat, they're a wall."
It's a mixed metaphor that doesn't make any sense. There are very few ways in which this can be considered good writing; the grammar is fine, I suppose, even if the content is nonsense.
So let's break it down. "Underestimated the gravitational pull": OK, this is nice; I like where it's going, talking about these big competitors sucking in users. But then the metaphor is extended to breaking point:
Network effects are a moat, but not just a moat, they're a wall (which is really nothing like a moat). So which of these three things are they, and why are we mixing the metaphors of gravity (pulling in customers), moats (competitive moats), and walls (walled gardens)?
It's all a bit nonsensical: the kind of fuzzy prose that seems superficially impressive without actually saying anything meaningful, which is exactly what LLMs excel at. Try generating an article from just the headings in this one and see how similarly it reads.
For a gradation to work, the items need to be similar and progressively stronger. That's why this one fails: a wall is not "stronger" than a moat. "Not a fence, a rampart" would work.
Compare the canonical example from Cyrano de Bergerac: "'Tis a rock! ... a peak! ... a cape! -- A cape, forsooth! 'Tis a peninsula!"
That’s the entire point - network effects are commonly discussed as being a moat (people can’t cross without difficulty) but are actually a wall - people can’t cross and can’t view the other side. Seems simple and straightforward to me.
Walls are crossed just like moats; the two were also used in tandem, so they are not natural opposites.
Also, the problems here stem from mixing metaphors between things that attract and things that repel, and from conflating attracting customers with repelling competitors, without any clear explanation.
That’s what makes this bad writing and a classic example of LLM slop which people are willing to make post-hoc excuses to try to make sense of. It doesn’t make sense because sense was not involved in the making of it.
In a castle built for defence, yes: similar in function, though not in form, and often used together rather than one or the other.
In business metaphors, no: they are used for different things. And when you create a metaphor you should stick with it; that's what makes this one jarring and weird.
"Network effects aren't just a moat, they're a wall." is a VERY ChatGPT way to write. It's not proof, but the parent is right that this smells a bit of AI writing.
Not to the same extent at all. If you use ChatGPT for a while, you'll see it writes like that very frequently. Humans do write like that sometimes, but nowhere near the frequency that ChatGPT does. That makes it weak evidence for it being ChatGPT.
Suppose ChatGPT uses semicolons more often than an individual person does. On a page full of comments from many random people, someone using a semicolon doesn't mean they're a bot, even if 100% of their comments on that page include one.
> It behooves you to not write like that if you don’t want people dehumanizing you.
I have to strongly disagree with you on this. It behooves us (as a species) not to degrade our own manner of speaking and writing simply because of a (possibly temporary) technical anomaly.
In my view, it would be really, really sad to lose expressive punctuation or ways of constructing sentences simply because they're overused by AI.
I, for one, won't be a part of that, and I hope you won't, either.
I think a human would have split the "it's not this, it's that" type of sentence into two separate sentences that could be more descriptive. This is a blog post, not a tweet, so there's no length constraint.
If they wanted to keep it to a single sentence, they could have used a word like "rather" to act as a separator between moat and wall.