Wolfram stuff is like a completely different world that doesn't acknowledge anyone else. Within their own context it makes sense, but to everyone else it's either "WTF" or "meh", with not much in between.
I remember the pre-Wolfram Alpha marketing. It wasn't nearly as impressive as they made it out to be. The product was cool, but hyping it up like that only did damage.
To be honest it always reminds me of Futurama: "welcome to the world of tomorrow!" (which is actually pretty shitty).
The problem for them, of course, is that they exist in the context of everything else in the world. So a "huge" innovation within their local context -- because they've just realized something everybody else already knows -- registers as "meh" or "already tried" and gets dismissed as uninteresting or useless.
The problem for everybody else is that there's just enough that's interesting in Wolfram's output to make it hard to dismiss outright.
Your thesis seems to be that we're so naive that we re-invent discarded or failed ideas and call them innovations. Can you give an example?
I'll give you just two counter-examples off the top of my head.
1. Our frontend uses what turned out to be functional reactive programming to achieve a seamlessly interactive UI. And FRP is widely considered to be a Good Idea.
2. We invented the notebook-based REPL as the world knows it today, which is of course copied by iPython (to such rave reviews). In fact, our language embraces homoiconicity to such a degree that our notebooks are themselves Wolfram Language expressions -- iPython is just JSON.
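To make the homoiconicity contrast concrete, here's a rough sketch in Python. The structures below are simplified illustrations, not the real .nb or .ipynb schemas: a Jupyter-style cell really is stored as JSON, while the "expression" notebook here is a stand-in tuple encoding of something like Notebook[Cell[...]].

```python
import json

# An IPython/Jupyter notebook cell is inert data: a JSON object that a
# separate front end must parse and interpret.
ipynb_cell = json.loads('{"cell_type": "code", "source": ["1 + 1"]}')

# A homoiconic notebook is itself an expression in the language: the same
# machinery that evaluates code can build, inspect, and rewrite notebooks.
# Simplified stand-in for Notebook[Cell[...]] as nested tuples.
def Cell(content):
    return ("Cell", content)

def Notebook(*cells):
    return ("Notebook",) + cells

nb = Notebook(Cell("1 + 1"), Cell("2 + 2"))

# Because the notebook is an ordinary expression, transforming it is just
# ordinary evaluation -- e.g. mapping evaluation over its cells:
evaluated = Notebook(*(Cell(eval(c[1])) for c in nb[1:]))

print(ipynb_cell["cell_type"])  # the JSON form only *describes* a cell
print(evaluated)                # the expression form was *computed with*
```

The point is not the toy encoding but the closure property: a notebook you can pattern-match and rewrite with the language's own evaluator, versus a data format that only a front end understands.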
Seems like we're ahead of the curve in those two cases.
Well, the idea of building as much as possible into the language is old and has repeatedly failed. Languages like PL/I, APL, and to a lesser extent PHP have all tried this and failed. People do not need huge languages with as much stuff as possible crammed into them. What they need are languages where you can be reasonably certain of, and maybe even formally prove, things about the execution of your code (putting everything in The Cloud (tm) also hampers this), and they need to be able to grow the language to fit their needs (e.g. Scheme, Haskell, Clojure, etc. are all pretty good at this).
The whole idea of claiming "moar data" solves everything comes across as incredibly naive and ignorant of what people like Tony Hoare, Dijkstra, or Alan Perlis have said about computing.
> There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult. -- Tony Hoare
> People do not need huge languages with as much stuff as possible crammed into them.
I think you should replace "people" with "systems programmers" in order not to express an opinion that will in time (if not already) prove to be anachronistic and narrow-minded.
And the kinds of things we're "cramming in" (in reality, very slowly and methodically adding) have little to do with what you'd find in the languages you mention.
Think of it like this: standard libraries are great, but the functionality they give you grows roughly linearly with the amount of stuff you learn, because their parts cannot be too tightly coupled if they are to remain modular in the usual way.
With the Wolfram Language, you have more of a network effect: the more you learn, the more useful what you already know becomes, because the topic modelling you did over there can produce a network you can cluster over here and that you can then decorate with the graphics primitives you just learned about.
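A toy illustration of that compositional claim, in plain Python rather than Wolfram Language (all the data and thresholds here are made up for the example): a crude similarity step produces a network, the network feeds a clustering step, and the clusters feed a "decoration" step -- each stage reuses the previous stage's output because they share one representation.

```python
# Illustrative documents as bags of words (entirely made-up data).
docs = {
    "a": {"graphs", "clusters", "networks"},
    "b": {"graphs", "networks", "nodes"},
    "c": {"paint", "color", "canvas"},
    "d": {"color", "canvas", "brush"},
}

# Step 1 ("the topic modelling you did over there"): a similarity network.
def similar(x, y):
    return len(docs[x] & docs[y]) / len(docs[x] | docs[y])

edges = {(x, y) for x in docs for y in docs if x < y and similar(x, y) > 0.2}

# Step 2 ("cluster over here"): connected components of that network,
# via a minimal union-find.
def clusters(nodes, edges):
    parent = {n: n for n in nodes}
    def find(n):
        while parent[n] != n:
            n = parent[n]
        return n
    for x, y in edges:
        parent[find(x)] = find(y)
    groups = {}
    for n in nodes:
        groups.setdefault(find(n), []).append(n)
    return sorted(sorted(g) for g in groups.values())

groups = clusters(docs, edges)

# Step 3 ("decorate with the graphics primitives you just learned about"):
# here just a color per cluster, standing in for real graphics.
palette = ["red", "blue", "green"]
colored = {tuple(g): palette[i] for i, g in enumerate(groups)}

print(groups)   # -> [['a', 'b'], ['c', 'd']]
print(colored)
```

In a standard-library world these three steps would typically live in three libraries with three data formats; the "network effect" argument is precisely that they shouldn't have to.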
> The whole idea of claiming "moar data" solves everything comes across as incredibly naive and ignorant of what people like Tony Hoare, Dijkstra, or Alan Perlis have said about computing.
What is the "everything" that needs to be "solved"? Writing systems software?
I'm sure having graphs and images and timeseries built into your language isn't tremendously useful for writing a server. But people have already written servers. We don't need more languages to write servers (other than Rust or Go, perhaps).
We "need" the next generation of smart apps, things you couldn't even contemplate if you didn't already have access to curated data and algorithms that all work together. Or rather, that's what Wolfram Research is trying to make possible.
So in conclusion, it's not a systems programming language.
I think we got that.
So to break it down, and this is how I understand it as a user, it's a collection of algorithms, Lego bricks of math and a library of data sources glued together with a functional REPL that is semantically represented as a notebook.
Can you stick that on the web site somewhere, without the millions of bullet points, marketoid spew, and weaseling?
Maybe system programmers are “doing it wrong” now?
Even when you write a simple VGA driver you have to solve some equations. They are simple, so what does an average [good] C programmer do? He writes them down in his favourite inexpressive specialized language. The consequences: 1) a simple concept immediately becomes an incomprehensible mess; 2) approaches using more advanced ideas rarely get proper attention, because a systems programmer is unable even to write them down, let alone reason about them and tweak them. There is no reason why a VGA driver for a given microprocessor could not be generated by a high-level language, with the basic concepts behind it expressed clearly, and nevertheless formally.
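To make the "VGA driver is just equations" point concrete, here is a sketch in Python using the widely published industry-standard timings for the 640x480 @ 60 Hz mode. Written this way, the driver's magic register constants fall out of a few named quantities instead of being hard-coded:

```python
# Standard 640x480 @ 60 Hz VGA timing (published industry values).
PIXEL_CLOCK_HZ = 25_175_000

# Horizontal timing in pixels: visible area, front porch, sync pulse, back porch.
H_VISIBLE, H_FRONT, H_SYNC, H_BACK = 640, 16, 96, 48
# Vertical timing in lines.
V_VISIBLE, V_FRONT, V_SYNC, V_BACK = 480, 10, 2, 33

H_TOTAL = H_VISIBLE + H_FRONT + H_SYNC + H_BACK  # 800 pixels per scanline
V_TOTAL = V_VISIBLE + V_FRONT + V_SYNC + V_BACK  # 525 lines per frame

# The "equations" a driver must solve, written as what they are:
line_rate_hz = PIXEL_CLOCK_HZ / H_TOTAL  # ~31.47 kHz horizontal rate
refresh_hz = line_rate_hz / V_TOTAL      # ~59.94 Hz vertical refresh

# Derived sync positions -- the values that usually end up as opaque
# constants poked into controller registers:
h_sync_start = H_VISIBLE + H_FRONT       # 656
h_sync_end = h_sync_start + H_SYNC       # 752

print(H_TOTAL, V_TOTAL, round(refresh_hz, 2))  # -> 800 525 59.94
```

A code generator could take exactly this description and emit the register writes for a particular controller; the high-level form stays both readable and formally checkable.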
Agree with you entirely (especially after recently writing an SSDP and UPnP stack), but unfortunately the academics with all the supposed solutions have managed to deliver nothing with any momentum, so we're stuck with a 40-year-old programming system and 30-year-old abstractions.
Then again, you know what? It works pretty well, so perhaps we're not doing it wrong.
Out of curiosity, would you classify this in the 4GL camp?
Not to denigrate the technology and functionality that the Wolfram language is exposing, but it sure sounds a lot like the 4GL pitches from the 90s. Is that a fair comparison?
The notebook REPL has actually been around since way before Mathematica -- probably 1972. Perhaps it wasn't semantically complete, but it was essentially there. I would suggest the concept is obvious.
On a similar note, it would really help if the website for the Wolfram Programming Cloud had just one bloody-awesome/blow-me-away example or tutorial that would make it accessible to a lot of capable programmers who may not have used a Mathematica Notebook.
Folks who haven't used Mathematica ever - I recommend the Home Edition. Yes, it is worth paying for it!
I'm willing to admit I lack imagination or vision here; the lack of killer examples to show me the way is hobbling me. Wolfram's demo video was interesting, but it came across to me more like a demo of a fancy calculator than a path towards a new way of doing development. I have a drawer full of fancy calculators I don't use, because my need to "compute" things in such a general sense as presented here is pretty rare, or so narrow in scope that I can usually hunt down the relevant information myself in less time than it takes to learn all of this new stuff.
I'll probably sit on the sidelines for a while and see what kinds of things come out of this work.
Please take my skepticism as a healthy challenge and not as dismissal. The day the Wolfram language shows me something that fundamentally transforms some aspect of my life I didn't know needed transforming, I'll sign up.