"Idle hands are the devil's workshop." The more people that are economically sidelined, who nave nothing to do and no prospects for self-advancement, the more appealing radical ideologies will become and the more social unrest there will be. A major source of social cohesion is the belief that if you work hard you will be able to succeed, but if it's structurally impossible to find work, and all you have is a UBI which doesn't afford much, you can expect that mass stagnation to provide ample breeding ground for a colorful assortment of social ills--drug abuse, suicide, crime, and finally, once the individual angst finds collective expression and organization, revolts and political radicalization.
Work is one of the major sources of purpose people have. Earning an income and putting food on the table is fundamental to one's self-respect. Like educational achievement, it gives the impression that there is some way to advance beyond your lot. Take that away and it will cause a mass existential crisis unless there is something to fill the void. All the blights of the Rust Belt and other economically overlooked areas should be expected to spread.
Are you saying that, if we ever arrive at a technologically advanced, post-scarcity society where there is no money and everyone has access to as much education, entertainment, and material stuff as they want without having to work, then we'll all fall into a nihilistic despair and commit suicide or become heroin junkies?
I think they're saying that's the inevitable result of the current paradigm, not the natural trajectory of a human being. I do think there are a lot of people who define themselves by their work and by the (frequently flawed) notion that what they're working toward is a promotion, a pay raise, a title-bump. In a post-scarcity world there might still be that mentality and opportunities to satisfy it, but the implication is that more people will need to redefine their personal concept of success and fulfillment.
UBI is the opposite of a post-scarcity society: it props up a society of scarcity and exploitation by maintaining the current economic structure while unemployment keeps rising.
> The more people who are economically sidelined, with nothing to do and no prospects for self-advancement
Eh, that's a wild assumption. If anything, automating away crap labour that barely keeps you sheltered and giving people the economic stability to focus on something long term should enable various forms of self-advancement for masses of people who are now just stuck doing a crap job until they get to retire.
This seems plausible. There's usually a proliferation phase and then a consolidation phase with any period of innovation, where people invent all sorts of ways of doing things and then gradually figure out what works best together.
Why the client side has seen such an era of fecundity in recent years is beyond me. Perhaps it's because there was a real problem with leaving the job to jQuery. Regardless, eventually there will just be too many available solutions and not enough solidified, proven methods, where everything you need has been packaged together neatly because trial and error and years of iteration have pulled the optimal toolchains together and left the others by the wayside.
The way everything client-oriented is just sort of floating around with no real scheme to it creates a lot of possibilities, but at the same time much confusion and exhaustion. Eventually (hopefully), standards will come.
There's something in human nature that finds meaning in turbulence, friction, conflict, movement. A sense of being challenged, pushing past limits, propelling into excitement and change. When you reach a state where you have all your needs met and don't need to go any further, there still remains a sense that there's more left undone. Comfort can be equated with stagnation. It's a Nietzschean notion, a will-to-danger (not to sound too crazy). There's a sense of being domesticated that adds a subtle discomfort to the comfort.
I've worked blue collar and white collar jobs, and it was the blue collar jobs that made me feel more alive, because I was around dangerous tools and hazards, using my body the way it was meant to be used, outside and exposed to the elements. I went home afterward and my rest felt deserved and truly regenerative, like my body was drinking it in. The white collar work, though it pays better, offers more comfort, and is more cognitively demanding, feels at the same time too breezy, too safe.
It seems like a really stupid argument, that things can be bad because they're too good (affluenza and so on), but there's something to it.
Sounds like the complete opposite of what Jiddu Krishnamurti would say. Krishnamurti would ask whether you can live without conflict, agony, and turbulence, and yet live more fully. To me, Krishnamurti is speaking from a position of sanity, a sanity that nowadays seems to exist only on a small scale, in places where the question of "what's the right thing to do" is not muddied by the accumulated daily problems arising from the human psyche and our current way of life.
Or are you saying that there is no way to be productive without the stress that is more and more present in our daily lives, in this environment of competition and constant yearning for more?
I think OP means that idealized ML experiments using contrived data can distort the picture. Real-world, ecologically valid results can only be discovered as they emerge once the algorithms are deployed in production. ML algorithms sometimes cook up solutions that surprise or even disturb their creators.
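As a toy illustration of that gap (a minimal sketch with made-up synthetic data, not anyone's actual experiment): a model that looks flawless on a contrived training distribution can fall apart the moment production inputs drift.

```python
# Minimal sketch: a model fit on contrived data degrades under the
# kind of input shift production traffic tends to have. The data and
# the "model" (an ordinary least-squares line) are both toy choices.
import numpy as np

rng = np.random.default_rng(0)

# "Lab" data: a narrow input range where a straight line fits well.
x_train = rng.uniform(0.0, 0.5, 200)
y_train = x_train ** 2 + rng.normal(0.0, 0.01, 200)

slope, intercept = np.polyfit(x_train, y_train, 1)

def mse(x):
    """Mean squared error against the true function y = x^2."""
    return float(np.mean((slope * x + intercept - x ** 2) ** 2))

print("in-distribution MSE:", mse(rng.uniform(0.0, 0.5, 200)))  # tiny
print("shifted-input MSE:  ", mse(rng.uniform(1.5, 2.0, 200)))  # huge
```

The benchmark number says nothing about the shifted regime, which is exactly the "only discoverable in production" problem.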
I'm not sure I exactly agree with this premise. If you read about the principles of chaos engineering (https://principlesofchaos.org/), it's possible to simulate real-world events in testing. And if there's a rigorous mathematical backbone to ML, as there clearly is, some determinations about its limitations should be universal across all cases, even if the emergent results in production are unpredictable and could range over intractably many possible outcomes.
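In that spirit, here's a minimal, framework-agnostic sketch of the chaos idea (FlakyService and resilient_fetch are hypothetical names, not from any real library): deliberately inject faults into a dependency and assert the caller degrades gracefully instead of crashing.

```python
# Chaos-style fault injection, sketched by hand: wrap a dependency
# so it fails at random, then verify the code under test survives.
import random

class FlakyService:
    """Stand-in for a network dependency that we deliberately break."""
    def __init__(self, failure_rate=0.5, seed=42):
        self.failure_rate = failure_rate
        self.rng = random.Random(seed)

    def fetch(self, key):
        if self.rng.random() < self.failure_rate:
            raise ConnectionError("injected fault")
        return f"value-for-{key}"

def resilient_fetch(service, key, default="fallback"):
    """The code under test: must never let a dependency failure escape."""
    try:
        return service.fetch(key)
    except ConnectionError:
        return default

# Under sustained injected failures, every call should still return
# something -- no unhandled exceptions, just degraded answers.
service = FlakyService()
results = [resilient_fetch(service, i) for i in range(1000)]
assert all(r is not None for r in results)
print(sum(r == "fallback" for r in results), "of 1000 calls fell back")
```

Real chaos engineering does this against live infrastructure rather than a mock, but the principle (inject the failure, observe the steady state) is the same.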
Right, web apps aren't websites. The distinction is pretty sharp when you move from a simple, static, old-fashioned display to Amazon-tier complexity, where money is being transacted, login information stored, orders updated, and a million more things besides.
The big question is whether the internet and its primary portals, the desktop browser and the smartphone, will reach the end of their tether. The only thing I could see replacing them is neurotechnology of some sort, and that seems so far off and so difficult to commercialize that it's not something web developers need to worry about any time soon. Or perhaps quantum computing becomes commercially viable and simply shoves digital computing aside, negating the need for those skills completely.
Another alternative is that the web changes radically and unforeseeably, and new practices instantly antiquate everything people are learning now. Given how difficult these skills can be for many people to learn, and how much knowledge one must integrate to be a productive developer, this is a more frustrating and realistic possibility, and one that is bound to happen eventually.
Not every field carries these risks. Law practice isn't going to change fundamentally in the coming decades. And some aspects of coding can be exhausting and demoralizing to learn. Past generations of coders have had their skill sets rendered null and void, and it will probably happen again at some point to this generation.
I'd go even further and say that Amazon-level things aren't web apps either; that's a different level of web resource. It's one thing to write a fairly complex web app with auth, login info, security, etc.; building an enterprise-level system is a different thing.
Another risk, I think, is the automation of coding. It already happens all the time, and in principle coding should be the most automatable of anything. The only thing saving it is that it's very hard to automate the creation of good ideas or purposeful human intentions, which naturally inform most successful coding projects. There's also a bit of a conflict of interest: why would coders building automation systems want to automate themselves out of existence? I think these rationales protect software developers' job security for the near- to mid-term future.
I'm also thinking that how people code could change drastically, but even so, that's less of a problem, because the rules and logic of any programming language boil down to fundamental computer science principles, which should remain constant regardless of the interface you're using to code, whether it's a keyboard and monitor or some kind of visual drag-and-drop thing or whatever someone comes up with.
You have to think of it at a higher level: My job isn't coding, it's clarifying ideas. Programming languages can get higher-level, and more automated, but nothing is ever going to free us from the burden of clarifying our ideas. I believe this makes my job (the interesting parts, at least) pretty safe in the long-term.
I tend not to worry about programming automation, because I believe it has actually been going on for a long time with higher-level languages. E.g., when programming was done at the assembly level, the advent of C did not reduce the number of jobs; rather, it made programming more powerful and created more jobs. Same with Java and "automated" memory management. Same with Ruby on Rails, PHP, etc. By simplifying developers' jobs, it lets them create bigger and better products, which increases adoption of technology, which creates demand for even better products, which leads to more jobs and more automation. Maybe I'm wrong, but I don't see any end to this cycle anytime soon.
I agree... but I just find that automation leads to more productivity, which leads to more demand. There are always additional projects and features to be developed. The way development prioritizes features over paying down technical debt, or adds bodies to ship features (which only makes tech debt accumulate faster), will always lead to more work.
> in principle coding should be the most automatable of anything
You still have to describe to the automaton what you want it to do -- and in doing that, you're programming.
Modern languages are all "automated programming" already -- you aren't writing machine code, you're describing to the automation system what you want, and then it generates the machine code for you.
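A concrete way to see this, using Python's dis module (bytecode rather than machine code, but the same idea): you describe what you want at a high level, and the compiler emits the lower-level instructions for you.

```python
# You write the high-level description; the language generates the
# lower-level instructions. dis shows the bytecode Python compiled.
import dis

def total(prices, tax_rate):
    """High-level statement of intent."""
    return sum(prices) * (1 + tax_rate)

dis.dis(total)  # prints the generated instructions you never wrote
```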
This feels unrealistic. Coding practices, tools, and technology have changed only minimally since the 70s.
Faster machines, sure, but with each new language you still have the baggage of design/code/compile/test/debug/refactor, and the library/dependency hell that plagues things like, oh, say, Node.js with NPM, Java with Maven, C/C++ with make, etc., etc.
Then there is the flood of new "UI hotness libraries" in the JavaScript space alone. Pick one, wait six months, and see what else is now the new hotness, or whether your current one has changed with a new major version release (Angular?).
There should be more folks like Bret Victor working with language teams in the FANG(+MS) companies to drive some further breakthroughs in reducing cognitive load and tool/library dependency complexities.
Where are this generation's Alan Kay and Jean-Marie Hullot types, pushing the envelope on software development environments?
> in principle coding should be the most automatable of anything
All coding is itself the act of automating something. I don't think there is a risk to developers as there is always more to automate. We use products that automate our own programming tasks to create more code to automate yet more things.
Whether or not programming can be completely automated is an interesting question. When I think about what would make it unnecessary for a human to specify the behavior of a computational system, the only phrase that comes to mind is "general AI".
>Even though eye beams do not exist in reality, and even though most people do not intellectually believe in them, they may exist as a part of the rich, implicit social model that we naturally apply to seeing agents.
It makes sense that people would attempt to reconstruct a model or theory of mind from scant cues like others' gazes. In many contexts, such as encountering strangers, you don't have anything else to work with, so the emphasis is on leveraging what you can.
There's also the point that even if eye beams or extramissions have no empirical reality, the concept of them can still be utilized in representations of others' minds. It reminds me of Zizek's interest in the "reality of the virtual": even things that aren't empirically real but are merely conceived can have real-world impacts.
I agree that the government will need to work on a transparent front-end to make this data universally accessible. Nonprofits without the budgets for advanced tech workers and without volunteers will need clearly organized links to download. They may not know how to do shell scripting.
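For a sense of what that barrier looks like, even "simple" programmatic access means something like this (a hypothetical sketch; the endpoint and dataset below are made up, not a real government URL):

```python
# Hypothetical sketch of pulling an open-data CSV programmatically.
# The URL is a made-up placeholder, not a real endpoint.
import csv
import io
import urllib.request

DATASET_URL = "https://data.example.gov/catalog/grants.csv"  # placeholder

with urllib.request.urlopen(DATASET_URL) as resp:
    text = resp.read().decode("utf-8")

rows = list(csv.DictReader(io.StringIO(text)))
print(f"downloaded {len(rows)} records")
```

Trivial for a developer; a real wall for an organization with no technical staff, which is exactly why the front-end matters.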
The more data everyone can use, rather than data that is owned and commoditized or usable only by specialists, the better.
It's good to see the US govt making an effort to step up its technical level. A vast if hidden problem in the political sphere is that most politicians do not have a technical education. This creates a serious misalignment between the govt and other centers of soft power, like large-cap tech companies.
There's no way around it: these companies have to work with the government to secure public interests against 21st-century threats. Just today I read an article about black hat hackers targeting outdated industrial control systems more vigorously than ever before. The government, in its current condition and without technical upgrades, cannot face down this problem on its own. Which is why opening up data is a beneficial thing.
Openness of data is a double-edged sword. Having as much data as possible in a consistently machine-readable format will make malicious agents' jobs easier, but it will also help those on the other side.
If tech is one of the things that can bolster and improve government, tech needs to work in the optimal environment, which is one with open data.
If humanlike reasoning is the destination for AGI, there's more than just symbolic reasoning to factor in. Emotions exert huge control over human reasoning.
People essentially rely on emotions to make all their decisions. Emotions implicitly represent rapid-fire unconscious decision work.
Again, the current popular understanding of the mind separates emotion from thinking, but they are not distinct. Emotional processing is another kind of thinking, and it drives the show.
I see emotions as analogous to the value function in RL: essentially a prediction of future rewards based on the current state and action plan. Artificial RL agents, in that sense, learn emotion as it relates to their tasks and environments.
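To make the analogy concrete, here's a minimal TD(0) sketch on a made-up toy chain environment (no particular agent architecture implied): the learned V(s) is literally a running prediction of discounted future reward.

```python
# TD(0) on a toy 5-state chain: V(s) converges to a prediction of
# discounted future reward -- the quantity the emotion analogy maps to.
import random

N_STATES = 5      # states 0..4; entering state 4 pays reward 1.0
GAMMA = 0.9       # discount factor
ALPHA = 0.1       # learning rate

V = [0.0] * N_STATES          # value estimates, start indifferent
rng = random.Random(0)

for _ in range(2000):         # episodes
    s = 0
    while s < N_STATES - 1:
        s_next = s + 1 if rng.random() < 0.9 else s   # noisy progress
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # TD(0): nudge V(s) toward reward plus discounted next value.
        V[s] += ALPHA * (r + GAMMA * V[s_next] - V[s])
        s = s_next

print([round(v, 2) for v in V])  # estimates rise as states near the reward
```

Whether that deserves the word "emotion" is the philosophical part, but the mechanics, an anticipatory signal shaping behavior before any outcome arrives, do line up.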
If you want AGI, you need to give it a world to live in. The ecological component of perception is missing. Without full senses, a machine doesn't have a world to think generally about; it just has the narrow subdomain of inputs it is able to process.
You could bet that AGI won't manifest until AI and robotics are properly fused. Cognition does not happen in a void. This image of a purely rational mind floating in an abyss is an outdated paradigm to which many in the AI community still cling. Instead, the body and environment become incorporated into the computation.