
Another thing that has not been discussed yet: OpenAI does not want to be responsible for the output of this model. Can you imagine the headlines? "OpenAI released a text generating AI and it is racist as hell". People should have learned their lesson after the Tay fiasco.

I am ambivalent on the issue of release vs. non-release, but the mockery and derision they faced make me ashamed to contribute to this field. No good faith is assumed; instead, accusations of PR blitzes and academic penis envy get projected onto them.

Perhaps AI researchers are simply not best suited to deal with ethical and societal issues. Look at how long it took to get decent research into fairness, and how little focus it receives in industry today. If you were in predictive modeling 10 years ago, you likely contributed to promoting and institutionalizing bias and racism. Do you want these same people deciding on responsible disclosure standards? Does the head of Facebook AI, or those who OK'd Project Dragonfly or Project Maven, have any real authority on the responsible, ethical use of new technology?

I am not too sure about the impact of a human-level text generating tool. It may throw us back to the old days of email spam (before Bayesian spam filters). It is always easier to troll and derail than to employ such techniques for good. Scaling up disinformation campaigns is a real threat to our democracies (or maybe this decade-old technique is already in use at scale by militaries, and this work merely shows what the AI community's love for military funding looks like).

I am sure that the impact of the NIPS abbreviation is an order of magnitude lower than that of this technology, yet companies like NVIDIA used "NeurIPS" in their marketing PR before the name was officially introduced (it made them look like the good guys, for a profit). How is that for misaligning ML research for PR purposes? Would the vitriol currently displayed in online discussions have been appreciated when the name change was proposed for the betterment of society?

Disclaimer: this comment in favor of OpenAI was written by a real human. Could you tell for sure, now that you know the current state of the art? What would these comment sections look like if one person controlled 20% of the accounts here?



> It is always easier to troll and derail than it is to employ such techniques for good.

Curious: what possible good comes from the ability to generate grammatically correct text devoid of actual meaning?

Sure, you could generate an arbitrary essay, but it's less an essay about anything and more just an arrangement of words that happen to relate to each other statistically. Markov chains already do this, and while the output looks technically correct, you're not going to learn or interpret anything from it.
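(For what it's worth, that whole trick fits in a few lines. A toy word-level Markov generator, sketched below; the corpus string is a placeholder, and the order and output length are arbitrary choices:)

    import random
    from collections import defaultdict

    def build_chain(words, order=2):
        # Map each run of `order` words to the words observed right after it.
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, length=30):
        # Walk the chain, sampling successors by their observed frequency.
        state = random.choice(list(chain))
        out = list(state)
        for _ in range(length):
            successors = chain.get(state)
            if not successors:
                break
            out.append(random.choice(successors))
            state = tuple(out[-len(state):])
        return " ".join(out)

    # Placeholder corpus; any plain text would do.
    corpus = ("the model writes text that looks like text "
              "because the model samples words that follow words").split()
    print(generate(build_chain(corpus)))

The output is locally fluent and globally empty, which is exactly the parent's point.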

Same goes for things like autocomplete. You could generate an entire passage just by accepting its suggestions. It would pass a grammar test but wouldn't mean anything.

Chatbots are an obvious application, but how is that "good," or any different from the Harlow experiments performed on humans? Fooling people into developing social relationships with an algorithm (however short or trivial) is cruel and unethical beyond belief.

A photoshopped image might be pleasant or stimulating to look at, and does have known problems in terms of normalizing a fictitious reality. But fake text? What non-malicious use can there possibly be?


Current application: Better character- and token-level completion makes it easier for people with physical disabilities to interact with computers. http://www.inference.org.uk/djw30/papers/uist2000.pdf (pdf)
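(To make that concrete: even a toy character-bigram model can rank likely next characters, which is the kind of prediction the linked Dasher work builds on. A sketch, not the model from the paper; the training text is a placeholder:)

    from collections import Counter, defaultdict

    text = "the cat sat on the mat and the dog lay on the log"  # placeholder
    counts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1

    def suggest(prefix, k=3):
        # Offer the k most likely next characters given the last one typed.
        if not prefix or prefix[-1] not in counts:
            return []
        return [c for c, _ in counts[prefix[-1]].most_common(k)]

    print(suggest("th"))  # -> ['e'] on this corpus

Every correct suggestion is one fewer keystroke; better models mean fewer keystrokes per word.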

Research progress: Better compression measures progress toward general intelligence. http://mattmahoney.net/dc/rationale.html
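(The link, roughly: a model's average -log2 p(next symbol) is exactly what an arithmetic coder driven by that model would spend, so better prediction is better compression. A sketch with a stand-in model; the uniform baseline is only for illustration:)

    import math

    def bits_per_char(model_prob, text):
        # Average -log2 p(char | context): the per-character cost an
        # arithmetic coder driven by this model would pay.
        total = sum(-math.log2(model_prob(text[:i], ch))
                    for i, ch in enumerate(text))
        return total / len(text)

    # Stand-in model: uniform over 27 symbols (a-z and space).
    uniform = lambda context, ch: 1 / 27
    print(bits_per_char(uniform, "hello world"))  # ~4.75; real LMs score far lower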

Future application: Meaningful completion of questions, leading to personalized learning material for students all over the world. If only there were an OpenQuora.


We incubate harmful viruses and bacteria so we can learn how they work, experiment, and test them. Having the output of the full model could allow analysis of structural weaknesses, or the training of GAN-style discriminators to detect fake text.
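(Sketch of that detector idea, using a plain bag-of-ngrams classifier rather than a full GAN; the two toy corpora below are placeholders for real human and model-generated samples, which is exactly why access to the model's output matters:)

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    human = ["I walked to the store and bought milk.",
             "The meeting ran long again today."]
    machine = ["The store walked milk and bought to the I.",
               "Again long ran the meeting the today."]  # placeholder samples

    y = [0] * len(human) + [1] * len(machine)  # 0 = human, 1 = generated
    vec = TfidfVectorizer(ngram_range=(1, 2))
    X = vec.fit_transform(human + machine)
    clf = LogisticRegression().fit(X, y)

    print(clf.predict(vec.transform(["The dog ran long again the."])))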

The technology is obviously going to get out there, so why give well-funded actors (nation states, troll farms) a head start instead of giving everyone (researchers, hobbyists) an opportunity to defend against it?



