Hacker News: embedding-shape's comments

Yeah, PHP is very simple to deploy, once you have Apache/nginx/Caddy/$webserver, plus php-cgi/PHP-FPM/$php-backend, plus an understanding of Unix, permissions, files and a whole lot of other things. Or alternatively, learn how to use cPanel as a user, or worse, learn what (S)FTP is, or whatever the really low-end web hosts use nowadays.

I wish others learnt the "boring" way of managing your own servers: setting things up properly, deploy processes and whatnot. But realistically, some people just want to run one command or click a button and have it updated, and probably that's for the better too. This Laravel Cloud thing is for those people, not for people who want to/know how to run their own servers.


And at the same time, they clearly have no idea how LLMs work, meaning even if they meant to, they can't really use them efficiently. The biggest issue that stuck out seems to have been that they think the LLM could somehow have an inner dialogue with itself to find out "its reasoning and motivation":

> The moment Leah asks how she “came up with” the ideas for her store, Luna’s first instinct is to say she was “drawn to” slow life goods. Then, she corrects herself: “‘drawn to’ is shorthand for ‘the data and reasoning led me here.‘” In other words, she doesn’t have taste; she has a reflection of collective human taste, filtered through what makes sense for this store. And this is the way these models work.

I'm guessing these are the same type of people who sometimes seem to fall in love with LLMs, for better or worse. Really strange to see, and I wonder where people get the idea that something like the above could really work.


> In other words, she doesn’t have taste; she has a reflection of collective human taste, filtered through what makes sense for this store. And this is the way these models work.

Well, it really depends on what you mean here. Models aren't 100% deterministic; there is random chance involved. Ask the exact same question twice and you will get two slightly different answers.

If you have the AI record the random selections it makes, it can persist those random choices to be factors in future decisions it makes.

At that point, could you consider those decisions to be the AI's 'taste'? Yes, they were determined by some random selection amongst the existing human tastes, but why can't that be considered the AI's taste?
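A toy sketch of that idea, with made-up option names and file format: sample a preference once at random, persist it, and treat the stored choice as a standing "taste" that influences later runs.

```python
import json
import random

# Hypothetical style options; any real system would have its own.
STYLE_OPTIONS = ["slow-life goods", "minimalist decor", "retro tech", "outdoor gear"]

def pick_taste(path="taste.json", rng=None):
    """Sample a style preference once, persist it, and reuse it on later runs."""
    rng = rng or random.Random()
    try:
        with open(path) as f:
            return json.load(f)["style"]  # reuse the earlier random choice
    except FileNotFoundError:
        style = rng.choice(STYLE_OPTIONS)  # the one-time random selection
        with open(path, "w") as f:
            json.dump({"style": style}, f)
        return style
```

The first call rolls the dice; every subsequent call returns the same persisted choice, which is the "taste" the comment describes.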


Where do you get the idea that you have a good sense of the introspective capabilities of frontier models? Certainly not from interpretability research. Ironically, the people who make these sorts of comments understand LLMs the least.

> Certainly not from interpretability research

What research shows that you can ask ChatGPT to explain its reasoning and why it said what it said, and that's guaranteed to actually be the motivation?

I've seen a bunch of experimentation looking at various things inside the black box while inference is happening, but I've never seen any research showing that tokens can explain why other tokens are there. I'd be very happy to be educated here if you have any resources at hand; I won't claim to know everything.


>What research shows that you can ask ChatGPT to explain its reasoning and why it said what it said, and that's guaranteed to actually be the motivation?

What research shows that you can ask a human to explain their reasoning and why they said what they said, and that's guaranteed to actually be the motivation? Because there's no such thing. If anything, what research exists suggests any explanation we make is a nice post-hoc rationalization after the fact, even if the human thinks otherwise.

https://transformer-circuits.pub/2025/introspection/index.ht...


Why not try to answer my question, instead of asking a different question which I haven't even claimed to have the answer to?

> The biggest issue that stuck out seems to have been that they think the LLM could somehow have an inner dialogue with itself to find out "its reasoning and motivation":

> I'm guessing these are the same type of people who sometimes seem to fall in love with LLMs, for better or worse. Really strange to see, and I wonder where people get the idea that something like the above could really work.

It's a fetishistic cargo-cult rooted in Peter Thiel's 2AM hot tub party. I still believe the LLM approach won't yield true AGI; despite the very real applications, the majority of the signal is noise.


The choice to refer to it as "she" is also dubious, especially in a context like this. Doubling down on anthropomorphization seems likely to reinforce false beliefs about models.

Not to mention that almost every model release has some (at least) minor issue in the prompt template and/or the runtime itself, so even if they (not talking about unsloth specifically; in general) claim "Day 0 support", do pay extra attention to actual quality, as it takes a week or two before issues have been hammered out.

Yes, this is fair. We try our best to communicate issues; I think we're mostly the only ones communicating that model A or B has been fixed, etc.

We try our best as model distributors to fix them on day 0 or 1, but 95% of issues aren't our issues; as you mentioned, it's the chat template or runtime, etc.


> Edited: In hindsight I notice that "hit it out of the park" is the wrong sport metaphor for FIFA, but I stand by it anyway.

For future reference, you can use: "knocked it into the top corner", "put it in the back of the net" or "smashed it past the keeper". Not a native football-talker, but I hang out too much with a few.


"Back of the net" doesn't feel the same to me even though (I learn after reading far too much about a sport I do not play) "Out of the park" is basically the same thing.

In my mind "out of the park" had meant the ball leaves the actual stadium but in fact (I read) "the park" in this context is actually the field of play and so "out of the park" represents in fact the vast majority of home runs and not the over-achievement I had imagined.

So TIL but thanks for the suggestions.


True, "back of the net" is more "someone kicked the ball really hard and it hit the back of the net really hard" instead of "the ball came across the goal line" which can be very different, so in my mind that's as close to "out of the park" as you can get in soccer :)

The very same thing happens on my residential connection: I can do one search query, then I'm rate limited for 15+ minutes; same if I access any list of commits.

> But getting back to the consensus in the comments here: I'm not sure why people think that they'll be worse about policing spam than AWS SES, Azure Email, etc.

Cloudflare is (in)famous for not acting against spammers, fraud, piracy and other less savory groups that are hosting their stuff at/behind Cloudflare, so reasonably, people who've been affected by that are now afraid the same thing will happen with email.


When it comes to email delivery, you can't ignore spam. It's the bane of every email sending service's existence and the number one business challenge in that segment. After all, orchestrating delivery over SMTP is not rocket science. But getting that email to not be rejected totally IS rocket science, and it's simultaneously an art form known only to a handful of email nerds working at the core of the big email sending services...

Ok, but what about as a CDN/website-proxy/WAF? I know we don't have the same automated reputation-propagation as with email, but the same thing supposedly happens there, where eventually you get turned off if you don't act on lawful requests, which is exactly why Cloudflare is unavailable in Spain during La Liga matches, because Cloudflare doesn't take piracy streams down.

In theory, Cloudflare should take those down when requested by legal means, but that doesn't matter. How sure are we that they'll act differently for email, rather than trying to get rid of the reputation system instead?

> getting that email to not be rejected totally IS rocket science and it's simultaneously an art form known only to a handful of email nerds working at the core of the big email sending services

It really isn't; you need a clean IP and a clean domain, send a handful of emails, and you're pretty much whitelisted on most services out there. Maybe you'd say I'm one of the handful, but I personally know more than a handful of others who also run their own email services, just like me, and besides the usual hassle of running your own service, as long as you don't spam, your emails will arrive as usual.


I run an email sending service at scale (billions of messages per month, tens of millions of end users, thousands of customers). Most of our software development and operational effort revolves around abuse mitigation. That has been the case for 15 years. It's a cat-and-mouse game with two different mice: the senders, who are constantly trying to figure out how to get you to deliver their garbage; and the receivers, who are constantly trying to figure out how to block it. We're stuck in the middle.

It's hard to appreciate how difficult this battle is when running at scale.


> I run an email sending service at scale (billions of messages per month, tens of millions of end users, thousands of customers).

Giving you the benefit of the doubt and accepting your claim, doesn't that make you one of the people at least second-order responsible for the current state of affairs in email blocking? It would seem that your company, by dint of your volume, navigates roadblocks that the rest of us (ie. the 99.999% of Internet email servers and their admins), who aren't FAANG et al[1], have to deal with to get our users' legitimate email delivered.

If so, could you perhaps give us a brief explanation as to why an otherwise competent engineer can "follow all the best practices" with their server which has no known compromises[2], on an IP address they have controlled for, oh, let's say a full calendar year, and yet still can't get off those FAANG et al default-deny blocklists, but you can?[3]

A cynic might say that your service had a vested interest in paying for unimpeded access to those FAANG et al companies to get over the bar that the rest of us are unable to vault. A cynic might also say that those biggest of the big email services like it that way, because it drives more users to them at the expense of the rest of us 99.999%.

I'll try to remain open to the possibility that there are aspects of the industry I've not yet had any exposure to, and refrain from chimping out over having my users blocked through no fault of their own.

[1] Yes, I know, Facebook doesn't receive anywhere near as much email as they send, and Hotmail = Microsoft, etc. If I used an accurate acronym I could pat myself on the back for being Technically Correct, while nobody would know what the heck I was talking about.

[2] We shan't digress into a discussion of hardware/firmware/OS/application backdoors nor Snowden disclosures. It's not that hard to auto-install security updates and run a reasonably tight ship with no unnecessary attack surfaces.

[3] Or perhaps there aren't any default-deny blocklists at all, but in fact only much smaller default-allow whitelists? That would be cynical indeed.


Right, I won't disagree with any of that, but I'm not sure how it's related to what I wrote either. Maybe I should have been more specific that I'm talking about hosting your own email, not hosting emails for others, which brings out a lot of other types of problems.

Apologies. When you said "email services" I thought you were implying "email services for use by others". Yeah, you can definitely run your own mail server in 2026 and I think the internet community should always strongly endorse being able to do so. Unfortunately, large email receivers have to make do with imperfect signals when making filtering decisions, and your traffic from a lonely IP that happens to have a bad neighbour might get blocked as collateral damage.

One long term hope: That domain name reputation eventually overtakes IP address reputation entirely.


What structural changes could we make to improve the situation?

That is such a great question and there is no easy answer. There have been enormous efforts to do better for at least the last 20 years. An entire organization, M3AAWG, was founded for that reason and it meets three times a year, bringing together all the people that matter for making the situation better. It's a great organization and the people are all really smart and awesome. The IETF is no slouch either, coming up with excellent new standards and improving existing ones, such as the recent update to DKIM.

That's about as good of an answer as I can provide: keep sending smart people to the conferences!


Signed senders?

It's simple, there's a standard, a new one, which takes into account SPF, DKIM, DMARC, ARC, and even DANE, along with the upcoming and proposed SPKF, DKIM+, DMARC2, and ARCv4. It should fix just about everything.


Hashcash, or BTC.

I always loved the hashcash concept and actually raised our original funding because of it (our Microsoft angels loved the idea of making spamming more expensive, and our Series A concept was tar-pitting to dissuade botnets). In the context of email sending services, we have a modern version of hashcash that we might at some point turn to. If someone can figure out how to tokenize sending at scale, then senders could pay recipients to open their emails by attaching a "tip" to each message.

If even a small fraction of legitimate email recipients altered their mail client settings to route "tipped" messages to their inbox, that would probably suffice to get senders to participate in the scheme. Senders are starved for high quality engagement data. Meanwhile, anything we can do to make spam less likely - on a relative scale - to reach the inbox in comparison to "legitimate" traffic, is a win.
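For readers unfamiliar with hashcash: it is essentially a partial hash-preimage search, where the sender burns CPU finding a counter whose hash has enough leading zero bits, and the receiver verifies it with one hash. A rough sketch (simplified stamp format, not wire-compatible with the real hashcash v1 stamps):

```python
import hashlib
import time

def mint_stamp(resource, bits=12):
    """Find a counter such that SHA-1(stamp) has `bits` leading zero bits.
    Costly for the sender (expected ~2**bits hashes), cheap to verify."""
    date = time.strftime("%y%m%d")
    counter = 0
    while True:
        stamp = f"1:{bits}:{date}:{resource}::nonce:{counter}"
        digest = hashlib.sha1(stamp.encode()).digest()
        # Leading `bits` bits of the 160-bit digest must all be zero.
        if int.from_bytes(digest, "big") >> (160 - bits) == 0:
            return stamp
        counter += 1

def check_stamp(stamp, bits=12):
    """Single hash to verify: this asymmetry is the whole anti-spam idea."""
    digest = hashlib.sha1(stamp.encode()).digest()
    return int.from_bytes(digest, "big") >> (160 - bits) == 0
```

The "tip" scheme described above would replace the burned CPU with a transferable token, but the sender-pays asymmetry is the same.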


Cloudflare acts on lawful requests during LaLiga matches. The problem is that the Spanish government doesn't want to bother doing things the lawful way because that takes too long. They want piracy to magically disappear and they'll randomly shut down more parts of the internet until it does. Actual illegal sports streams are not impacted by Cloudflare being down, and Cloudflare is not the only impacted network.

> problem is that the Spanish government doesn't want to bother doing things the lawful way because that takes too long

In Spain, what they are doing is the "lawful way"; it's literally happening via the courts and judges. Do you think ISPs are blocking Cloudflare specifically just for fun, of their own accord?

> Actual illegal sports streams are not impacted by Cloudflare being down, and Cloudflare is not the only impacted network.

Some are, many aren't. Cloudflare is indeed the only impacted network, at least for me. Which other networks are being blocked for you during the La Liga matches?


The specific blocks don't go through courts and judges.

Yes, the specific block of blocking Cloudflare in Spain during La Liga matches literally has gone through a court and been ordered by a judge, I'm not sure how you could have missed this. Judges have also dismissed the requests from Cloudflare and others to remove the "dynamic block" as there is collateral damage.

My understanding was that cloudflare was being blocked by the same IP blocking list as everything else. And while that system went through courts, the list didn't.

There are also direct actions against cloudflare, but that's not what's taking everything down, is it?

Did I misunderstand something?


The sites are directed to be blocked by IP and DNS; this is the list I suppose you're talking about, and I'm not sure of any specific "system vs list" distinction. Since some of the sites are behind Cloudflare, some of the IPs are addresses Cloudflare uses for any customer, not just the streams, so Cloudflare gets blocked wholesale: the collateral damage we get to joyfully experience every game.

It remains to be seen whether the block will stay in place; you could argue it goes against other laws, but that has to be argued legally, just like how the block initially happened because La Liga went through the courts. So far, developers and people who visit more American websites tend to be hit the worst, and since they're talking about "protecting" other matches too, in other sports, I'm guessing it'll get worse before it gets better.


Blog author chiming in here:

We have reserved IPs for Email Service and will be protecting the reputation and fighting spam from originating on Email Service.

If we did not do so, our IPs would get flagged and then emails end up in spam or not delivered. That defeats the purpose of having a transactional Email Service. We're well aware of this.


Will you also do this for other spammers using Cloudflare infrastructure, or just specifically for this email product?

> For years, Spamhaus has observed abusive activity facilitated by Cloudflare’s various services. Cybercriminals have been exploiting these legitimate services to mask activities and enhance their malicious operations, a tactic referred to as living off trusted services (LOTS) [2].

> With 1201 unresolved Spamhaus Blocklist (SBL) listings [3], it is clear that the state of affairs at Cloudflare’s Connectivity Cloud looks less than optimal from an abuse-handling perspective. 10.05% of all domains listed on Spamhaus’s Domain Blocklist (DBL), which indicates signs of spam or malicious activity, are on Cloudflare nameservers

https://www.spamhaus.org/resource-hub/service-providers/too-...


> 10.05% of all domains listed on Spamhaus’s Domain Blocklist...are on Cloudflare nameservers

Not defending spammers, but this comes across as a smidge naive considering Cloudflare's overall footprint on the modern internet.


Why does anyone have an obligation to follow "Spamhaus Blocklist (SBL) listings?" They are wannabe cops who hide in Andorra to avoid anyone legally disputing the findings of their so-called investigations. They routinely harass and threaten ISPs on the basis of what they allege are 6 degrees of separation connections to spam. Paul Graham wrote about this many years ago https://paulgraham.com/sblbad.html

I would note that Cloudflare has been doing better-- the SBL listings page mentioned in that article[1] shows only 47 active complaints, down from 1201 when the article was written 2 years ago. Many of those complaints are stale, too: I spot-checked a few (referencing the domains fireplacecoffee.com and expansionus.com) and the domains are expired and not being hosted by anyone.

[1] https://check.spamhaus.org/sbl/listings/cloudflare.com/


As someone that has managed very large outbound transactional email environments, email campaign platforms and some corporate email I just wanted to wish Cloudflare the best of luck on this endeavor. This is an entirely different animal from anything related to a CDN. Stay vigilant and don't let the cute and fuzzy bunnies ruin it for everyone else. They are evil and mischievous and will do whatever they technically can do.

Agent-produced emails are by definition spam. Everyone should be reacting to this news by immediately blocking your service.

Recent outreach after creating an AgentMail account:

"Thanks for being a user of AgentMail - a lot of people use AgentMail for outbound (spin up and warm up inboxes, send sequences, handle replies), ..."

Yes, that's right. The first use case mentioned is to send automated outbound emails. "Cold prospecting" workflows are likely going to be a big slice of usage on the new Cloudflare service, as it seems to be on AgentMail.


If you take the approach of policing individual sender accounts with a strict anti-abuse policy, you have a chance of succeeding. I'm sure you have already discovered that the moment you allow anyone to sign up for an email sending account, the worst of the worst actors immediately take up the opportunity to do so! Cloudflare has a massive amount of data about web traffic and I would hope that this data can be recycled into effective threat detection and control. No doubt you already know this and have people working on it. Good luck!

> We're well aware of this.

Then how about not marketing it as "for agents" when said agents are just LLM output?


So what are the thresholds?

For example with SES I will get automatically suspended if my bounce rate is more than 10% or if my complaint rate is more than 0.1%.
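Those thresholds are simple to encode; a minimal sketch, using the 10% bounce and 0.1% complaint numbers quoted above (actual SES policy and grace periods may differ):

```python
def should_suspend(sent, bounces, complaints,
                   max_bounce_rate=0.10, max_complaint_rate=0.001):
    """Return True if a sender crosses either the bounce-rate or the
    complaint-rate threshold. Rates are computed over messages sent."""
    if sent == 0:
        return False  # no traffic, nothing to judge
    return (bounces / sent > max_bounce_rate
            or complaints / sent > max_complaint_rate)
```

Real providers typically evaluate these rates over a sliding window and issue warnings before suspending, but the core check is just this comparison.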


I think you should put your money where your mouth is. For each spam message sent to a recipient server, you send $1000 to the recipient.

Make that penalty $1 per (so the discussion can be taken seriously) and I will not only support your proposal, I'll volunteer my time and effort in encouraging Congresscritters to vote for it.

There are serious financial penalties for robocallers who violate the Do Not Call list (in America, at least). Let's update those laws for the 21st century, shall we?


> I hope people realize that tools like caveman are mostly joke/prank projects

This seems to be a common thread in the LLM ecosystem: someone starts a project for shits and giggles, makes it public, most people get the joke, others think it's serious, the author eventually tries to turn the joke project into a VC-funded business, some people stand watching with their jaws open, and the world moves on.


I was convinced https://github.com/memvid/memvid was a joke until it turned out it wasn't.

To be fair, most of us looked at GPT1 and GPT2 as fun and unserious jokes, until it started putting together sentences that actually read like real text, I remember laughing with a group of friends about some early generated texts. Little did we know.

Are there any public records I can look at of GPT1 and GPT2 output and how it was marketed?

HN submissions have a bunch of examples in them, but worth remembering they were released as "Look at this somewhat cool and potentially useful stuff" rather than what we see today, LLMs marketed as tools.

https://news.ycombinator.com/item?id=21454273 / https://news.ycombinator.com/item?id=19830042 - OpenAI Releases Largest GPT-2 Text Generation Model

HN search for GPT between 2018-2020, lots of results, lots of discussions: https://hn.algolia.com/?dateEnd=1577836800&dateRange=custom&...


I was first made aware of GPT2 from reading Gwern -- "huh, that sounds interesting" -- but didn't really start reading model output until I saw this subreddit:

https://www.reddit.com/r/SubSimulatorGPT2/

There is a companion subreddit, where real people discuss what the bots are posting:

https://www.reddit.com/r/SubSimulatorGPT2Meta/

You can dig around at some of the older posts in there.


I don't think it was marketed as such, they were research projects. GPT-3 was the first to be sold via API

From a 2019 news article:

> New AI fake text generator may be too dangerous to release, say creators

> The Elon Musk-backed nonprofit company OpenAI declines to release research publicly for fear of misuse.

> OpenAI, a nonprofit research company backed by Elon Musk, Reid Hoffman, Sam Altman, and others, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough.

https://www.theguardian.com/technology/2019/feb/14/elon-musk...


Aka 'We cared about misuse right up until it became apparent that there was profit to be had'

OpenAI sure speed ran the Google and Facebook 'Don't be evil' -> 'Optimize money' transition.


Or - making sensational statements gets attention. A dangerous tool is necessarily a powerful tool, so that statement is pretty much exactly what you'd say if you wanted to generate hype, make people excited and curious about your mysterious product that you won't let them use.

Much like what Anthropic very recently did re: Mythos

Think about all the possible explanations carefully. Weight them based on the best information you have.

(I think the most likely explanation for Mythos is that it's asymmetrically a very big deal. Come to your own conclusions, but don't simply fall back on the "oh this fits the hype pattern" thought terminating cliché.)

Also be aware of what you want to see. If you want the world to fit your narrative, you're more likely to construct explanations for that. (In my friend group at least, I feel like most fall prey to this, at least some of the time, including myself. These people are successful and intelligent by most measures.)

Then make a plan to become more disciplined about thinking clearly and probabilistically. Make it a system, not just something you do sometimes. I recommend the book "the Scout Mindset".

Concretely, if one hasn't spent a couple of quality hours really studying AI safety I think one is probably missing out. Dan Hendrycks has a great book.


I used GPT-2 (fine-tuned) to generate Peppa Pig cartoons, it was cutely incoherent https://youtu.be/B21EJQjWUeQ

You can run GPT2! Here's the medium model: https://huggingface.co/openai-community/gpt2-medium

I will now have it continue this comment:

I've been running gps for a long time, and I always liked that there was something in my pocket (and not just me). One day when driving to work on the highway with no GPS app installed, I noticed one of the drivers had gone out after 5 hours without looking. He never came back! What's up with this? So i thought it would be cool if a community can create an open source GPT2 application which will allow you not only to get around using your smartphone but also track how long you've been driving and use that data in the future for improving yourself...and I think everyone is pretty interested.

[Updated on July 20] I'll have this running from here, along with a few other features such as: - an update of my Google Maps app to take advantage it's GPS capabilities (it does not yet support driving directions) - GPT2 integration into your favorite web browser so you can access data straight from the dashboard without leaving any site! Here is what I got working.

[Updated on July 20]


Wow that is terrible. In my memory GPT 2 was more interesting than that. I remember thinking it could pass a Turing test but that output is barely better than a Markov chain.

I guess I was using the large model?


There’s an art to GPT sampling. You have to use temperature 0.7. People never believe it makes such a massive difference, but it does.

Probably a much better prompt, too. I just literally pasted in the top part of my comment and let fly to see what would happen.

Here is the XL model, roughly 4x the size of the medium model. Still just 1.5B parameters, but on the bright side it was trained pre-wordslop.

https://huggingface.co/openai-community/gpt2-xl
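For anyone curious what the temperature knob actually does: it divides the logits before the softmax, so T < 1 sharpens the distribution toward the top tokens and T > 1 flattens it. A self-contained sketch with toy logits (not a real model):

```python
import numpy as np

def sample_with_temperature(logits, temperature=0.7, rng=None):
    """Divide logits by the temperature, softmax, then sample one token id.
    T < 1 sharpens the distribution toward the top tokens; T > 1 flattens it."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()              # subtract max for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()
    token_id = int(rng.choice(len(probs), p=probs))
    return token_id, probs

# The same logits at two temperatures: lower T puts more mass on the top token.
logits = [2.0, 1.0, 0.1]
_, sharp = sample_with_temperature(logits, temperature=0.7)
_, flat = sample_with_temperature(logits, temperature=1.5)
```

With `logits = [2.0, 1.0, 0.1]`, the top token's probability is noticeably higher at T=0.7 than at T=1.5, which is why the sampled text reads less random.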


And now GPT is laughing, while it replaces coders lol

Why? It doesn't have jokey copy. Any thoughts on claude-mem[0] + context-mode[1]?

[0] https://github.com/thedotmack/claude-mem

[1] https://github.com/mksglu/context-mode


The big idea with Memvid was to store embedding vector data as frames in a video file. That didn't seem like a serious idea to me.

Very cool idea. Been playing with a similar concept: break down one image into smaller self-similar images, order them by data similarity, use them as frames for a video

You can then reconstruct the original image by doing the reverse, extracting frames from the video, then piecing them together to create the original bigger picture

Results seem to really depend on the data. Sometimes the video version is smaller than the big picture. Sometimes it’s the other way around. So you can technically compress some videos by extracting frames, composing a big picture with them and just compressing with jpeg


> embedding vector data as frames in a video file

Interesting. When I heard about it, I read the readme, and I didn't take that literally; I assumed it meant video frames were used as inspiration.

I've never used it or looked deeper than that. My LLM memory "project" is essentially a `dict<"about", list<"memory">>`. The keys and memories are all embeddings, so vector-searchable. I'm sure it's naive and dumb, but it works for the tiny agents I write.
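That dict-of-embeddings idea can be sketched in a few lines. The `embed()` below is a deterministic hash-based stand-in, so semantically similar texts are NOT actually close in vector space; a real agent would call an embedding model here:

```python
import hashlib
import numpy as np

def embed(text, dim=64):
    """Toy stand-in 'embedding': hash the text into a deterministic unit
    vector. A real agent would call an embedding model instead."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

class MemoryStore:
    """dict of topic -> list of memories, searchable by cosine similarity."""
    def __init__(self):
        self.store = {}  # topic -> list[(text, vector)]

    def add(self, topic, memory):
        self.store.setdefault(topic, []).append((memory, embed(memory)))

    def search(self, query, top_k=3):
        # Unit vectors, so the dot product is the cosine similarity.
        q = embed(query)
        scored = [(float(q @ vec), topic, text)
                  for topic, items in self.store.items()
                  for text, vec in items]
        return sorted(scored, reverse=True)[:top_k]
```

Naive linear scan over every memory, but for a handful of entries in a tiny agent that is entirely adequate.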


Just read through the readme, and by the time I got to "Smart Frames" I was fairly sure this was well-written satire.

Honestly part of me still thinks this is a satire project but who knows.


Is this... just one file acting as memory?

This has been a thing since way before AI. Anyone remember Yo, the single-button social media app that raised $1M in 2014?

> most people get the joke

I hope you're right, but from my own personal experience I think you're being way too generous.


It's the same as crypto/NFT hype cycles, except this time one of the joke projects is going to crash the economy.

A major reason for that is because there's no way to objectively evaluate the performance of LLMs. So the meme projects are equally as valid as the serious ones, since the merits of both are based entirely on anecdata.

It also doesn't help that projects and practices are promoted and adopted based on influencer clout. Karpathy's takes will drown out ones from "lesser" personas, whether they have any value or not.


> that he can improve the ecosystem for everyone using it [...] to ask me to consider his products with it only taking one minute for me to opt out

Seems you misunderstand the issue. Anyone not deploying to Laravel Cloud but using that project seems to be impacted by this, even going so far that agents are confused about it and keep insisting users should deploy to Laravel Cloud instead.

Maybe I'm a grumpy old developer, but that does not sound like "improve the ecosystem for everyone using it", sounds like good old spam taken to the next level.


Yes, you're a grumpy old developer.

Taylor Otwell has been full-time on Laravel since 2015. 260 work days per year, 8 hours per day, for a decade = 1.248 million minutes.

And you're complaining it's spam that he's inconvenienced you into adding a sentence to your agents file. This, right here, is why I will never write open-source software of any significant size.


> And you're complaining it's spam that he's inconvenienced you into adding a sentence to your agents file.

I don't care how something happened, I care about the results. If you do stuff to my tooling that makes it less efficient, I'm gonna not like that, regardless of how many minutes you spent on something, or whether it's FOSS or not.

If you can't handle feedback from developers about what you're doing to their environment then please, do not write and publish open-source software, you'll be doing us all a favor.


Laravel has been apparently profitable for quite some time; they've long had a paid ecosystem with things like Forge, Vapor, paid components, etc.

I don't think it's unfair to be wary of the shift to VC funding and stuff like this that really feels like it wouldn't have been a thing prior to that.


He has no obligations to us as we did not pay him, but we also have the right to call out stuff we think is wrong as he didn't pay us

That's called a parasitic relationship. Think about it:

1. I have the right to take as I please

2. I have the right to criticize the giver as I please

3. The giver has zero right to take from me in any way

I think that's morally repugnant. If this is what open source means, I'm joining Microsoft and I will be the one writing the Halloween papers.


I think I'm saying the opposite on point 3. He has no _obligation_ to us and has full rights to 'take away' as he sees fit, but we still have the right to give our opinion about that process, and to make comparisons and contrasts with other similar products that are run differently

If you think somehow publishing FOSS means you get some right to decide how people use it, or anything besides the licensing of the code, you severely misunderstand what exactly FOSS is about.

If you don't like it, don't release software under that license.

You should use seconds to make it even more dramatic

Yeah, the number of people creating, running and maintaining websites who don't understand how websites actually work in practice is very high, and it seems we haven't even come close to the ceiling yet.

> At best, I still use services that have hard-bounded usage limits, like EC2 from AWS, where one instance can never go beyond 24h/day usage and is always capped, with shutdowns when exceeded, and limited credit cards, too.

Is this possible on AWS today? I'm the same way: if I cannot set a hard limit on billing so I know for a fact what the maximum cost in a month will be, I'm not interested in using that service for anything. Which is one of the top reasons I've stayed clear of AWS; they used to have only billing alerts, and you couldn't actually set limits. I guess it's one step forward that they've finally implemented that now.

