Hacker News

Can I just say what a dick move it was to do this as part of a "12 days of Christmas" event. To be honest, I agree with the arguments that this isn't as impressive as my initial impression suggested, but they clearly intended it to be shocking/a show of possible AGI, which is rightly scary.

It feels so insensitive to do that right before a major holiday, when the likely outcome is a lot of people feeling less secure in their careers/jobs/lives.

Thanks again, OpenAI, for showing us you don’t give a shit about actual people.



Or maybe the target audience that watches 12 launch videos in the morning is genuinely excited about the new model. They intended it to be a preview of something to look forward to.

What a weird way to react to this.


It sounds like you aren't thinking about this that deeply, then. Or at least not understanding that many smart (and financially disinterested) people who are thinking about it are coming to concerning conclusions.

https://www.transformernews.ai/p/richard-ngo-openai-resign-s...

>But while the “making AGI” part of the mission seems well on track, it feels like I (and others) have gradually realized how much harder it is to contribute in a robustly positive way to the “succeeding” part of the mission, especially when it comes to preventing existential risks to humanity.

Almost every single one of the people OpenAI had hired to work on AI safety has left the firm with similar messages. Perhaps you should at least consider the thinking of experts?


There is no AGI; it’s just marketing. This stuff is overhyped. Enjoy your holidays, you won’t lose your job ;)


I agree; it’s more about the intent than anything else, like boasting about your amazing new job to someone who has recently been made redundant, just before Christmas.


The vast majority of people who will lose jobs to AI aren’t following AGI benchmarks, or don't even know what AGI stands for.


That’s true and a reasonable point. But looking at this thread, you can see there has been this reaction from quite a few people.


I don't know, maybe it's a bit off topic, but at least in the cases I'm imagining, I would always hire a human rather than fully rely on AI. Let the human consult with AI if needed, but still finalize the decision or result. The human will be thinking about the problem for months or years; even passively, during a vacation, an idea will occasionally pop up. AI will think about its task for seconds. If it missed some information or whatever, it will never wake up in the middle of the night thinking "s**, I forgot about X".


I feel you. It's tough trying to think about what we can do to avert this. Individuals are often powerless, but in this regard it feels worse than almost anything that's come before.


Some of us actual people are actually enthusiastic about AGI. Although I'm a bit weird in being into the sci-fi upload / ending death stuff.


Out of interest, what do you think would happen to your sense of subjective experience on sci-fi upload? And secondly, have you watched Black Mirror? That show depicts many great ways where the end of death is just the beginning of eternal techno-suffering.


I'm not quite sure - we need to work on the details. I've not watched very much Black Mirror.


I would think it would not lead to a transfer of consciousness and would instead just make a copy of you. I recommend Black Mirror; it deals with one technological change (usually) and shows how it can be dystopian (usually; there are occasional happy endings). Each episode is standalone.


I'm hoping we get to the stage fairly soon where we can make AI with something like human consciousness and be able to study and understand it better. That stuff will probably start as a very crude model and get closer as both AI and brain science advance. I figure a way to avoid dystopian problems is to experiment and play around with it so you figure out how it works. Most dystopian examples I've come across in real life have been very much driven by human behaviour rather than tech.

Yeah, maybe Black Mirror, but I'm not sure it's really my thing.


Blaming OpenAI for progress is like blaming a calendar for Christmas: it’s not the timing, it’s your unwillingness to adapt.


Unwillingness to adapt to the destruction of the middle class and knowledge work is pretty reasonable tbh.


Historically, when tech has taken over jobs, people have done OK; they've just done something else, usually something more pleasant.


Wow, you just solved the ethics of technology in a one liner. Impressive.


This is a you problem. Yes, there will be pain in the short term, but it will be worth it in the long term.

Many of us look forward to what a future with AGI can do to help humanity, and hopefully change society for the better, mainly by achieving a post-scarcity economy.


Surely the elites that control this fancy new technology will share the benefits with all of us _this_ time!


No, it'll be like when tech took over 97% of agricultural work, with 97% of us starving while all the money went to the farm elites.


How did that go for the farm workers?


I guess they did other stuff instead.


https://www.transformernews.ai/p/richard-ngo-openai-resign-s...

>But while the “making AGI” part of the mission seems well on track, it feels like I (and others) have gradually realized how much harder it is to contribute in a robustly positive way to the “succeeding” part of the mission, especially when it comes to preventing existential risks to humanity.

Almost every single one of the people OpenAI had hired to work on AI safety has left the firm with similar messages. Perhaps you should at least consider the thinking of experts? There is a real chance that this ends with significant good. There is also a real chance that this ends with the death of every single human being. That's never been a choice we've had to make before, and it seems like we as a species are unprepared to approach it.


Post-scarcity seems very unlikely. Humans might be worthless, but there will still be a finite number of AIs and finite compute, space, and resources.


How are you going to make housing, healthcare, etc. not scarce, and pay for them?


Robots supply that, controlled by democratic government.


Robots supply the land and physical labor that underlie the price of housing? Are you thinking of space colonies or something?

You need to make these expensive things nearly free if you're going to speak of post scarcity.


Robots supply the physical labour. The land shortages are largely regulatory: there's a lot of land out there, or you could build higher.


I hate the deliberate fear-mongering that these companies peddle to the population to get higher valuations.


Wtf is wrong with you, dude? It's just another tech; some jobs will get worse, some jobs will get better. Happens every couple of decades. Stop freaking out.


This is not a very kind or humble comment. There are real experts talking about how this time is different. As an analogy, think about how horses, for thousands of years, always had new things to do, until one day they didn't. It's hubris to think that we're somehow so different from them.

Notably, the last key AI safety researcher just left OpenAI: https://www.transformernews.ai/p/richard-ngo-openai-resign-s...

>But while the “making AGI” part of the mission seems well on track, it feels like I (and others) have gradually realized how much harder it is to contribute in a robustly positive way to the “succeeding” part of the mission, especially when it comes to preventing existential risks to humanity.

Are you that upset that this guy chose to trust the people that OpenAI hired to talk about AI safety, on the topic of AI safety?



