Persistence has some good properties, particularly if you are writing compilers.
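To make "persistence" concrete: I mean persistent data structures, i.e. immutable structures that share most of their representation with older versions. Here is a minimal C++ sketch of the kind of thing a compiler gets from that, a lexical environment you can extend without ever disturbing the enclosing scope's view; all names here are made up for illustration, not taken from any particular compiler.

```cpp
// Minimal persistent (immutable, structurally shared) association list,
// sketching how a compiler might represent lexical environments.
#include <iostream>
#include <memory>
#include <optional>
#include <string>

struct Env {
    std::string name;
    int value;
    std::shared_ptr<const Env> rest;   // shared tail: older bindings are never copied or mutated
};

// "Extending" an environment allocates one node; the old environment is untouched,
// so every scope still holding it sees exactly what it saw before.
std::shared_ptr<const Env> bind(std::shared_ptr<const Env> env,
                                std::string name, int value) {
    return std::make_shared<const Env>(Env{std::move(name), value, std::move(env)});
}

std::optional<int> lookup(const std::shared_ptr<const Env>& env, const std::string& name) {
    for (const Env* e = env.get(); e != nullptr; e = e->rest.get())
        if (e->name == name) return e->value;
    return std::nullopt;
}

int main() {
    auto outer = bind(nullptr, "x", 1);
    auto inner = bind(outer, "x", 2);   // shadows x without touching `outer`
    std::cout << *lookup(inner, "x") << ' ' << *lookup(outer, "x") << '\n';  // prints: 2 1
}
```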
Just about every language that promises an advance in parallelism (other than solidly "worse is better" approaches such as Hadoop and Pig) is selling some kind of snake oil, and immutability is one of the worst of them.
Immutability has benefits in terms of correctness, but not in terms of scalability to more processors or total throughput. With modern memory hierarchies a lot revolves around never letting two threads touch the same cache line; if that happens at all you lose at least an order of magnitude in performance. Garbage collection involves global properties of the system, so there will always be some "stop the world" element to GC, and GC itself becomes a scaling bottleneck when you allocate lots of memory and throw it all behind your rear.
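To put something concrete behind the cache-line claim, here is an illustrative C++ sketch (exact figures depend on the machine, this is not a rigorous benchmark): two threads each bump their own counter, but in the first layout the two counters share a cache line, so every increment invalidates the other core's copy; padding them onto separate lines removes that coherence traffic.

```cpp
// Illustrative false-sharing demo: same work, different memory layout.
#include <atomic>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <thread>

constexpr std::int64_t kIters = 100'000'000;

struct Shared {                    // both counters land on the same cache line
    std::atomic<std::int64_t> a{0};
    std::atomic<std::int64_t> b{0};
};

struct Padded {                    // 64-byte alignment pushes b onto its own line
    alignas(64) std::atomic<std::int64_t> a{0};
    alignas(64) std::atomic<std::int64_t> b{0};
};

template <typename T>
double run(T& counters) {
    auto start = std::chrono::steady_clock::now();
    std::thread t1([&] { for (std::int64_t i = 0; i < kIters; ++i) counters.a.fetch_add(1, std::memory_order_relaxed); });
    std::thread t2([&] { for (std::int64_t i = 0; i < kIters; ++i) counters.b.fetch_add(1, std::memory_order_relaxed); });
    t1.join(); t2.join();
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();
}

int main() {
    Shared s; Padded p;
    std::cout << "same cache line: " << run(s) << " s\n";
    std::cout << "separate lines:  " << run(p) << " s\n";  // typically several times faster
}
```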
Although the funny thing is that, as the 2nd oldest surviving computer language after FORTRAN, and back then used for very difficult problems that had to be competitive, let alone justifiable, on extremely scarce and expensive hardware, "Lisp" has always been "obsessed" with performance. It first ran on the IBM 704, a vacuum tube computer derived from the IBM 701, which initially used Williams tubes for memory (https://en.wikipedia.org/wiki/Williams_tube).
Scare quotes because "anyone" can program up a Lisp "in a week". High-performance Lisps, not surprisingly, require very roughly the same amount of effort as any other high-performance language implementation. Lisp got its first compiler in 1962, 4 years after its launch (and it was the first self-hosting compiler, i.e. written in the language it compiles). As the inventor of garbage collection, Lisp has long been a major driver of innovations in GC (although of course much less so now that Java etc. have given GCed languages mainstream cred).
But by and large, except for strictness of dynamic type checking (often a tunable variable), sacrificing correctness for speed has never been part of Lisp's DNA.
Yes, but there are many ways to be obsessed with performance while keeping correctness.
As you might imagine by my post history I had a specific language in mind.
Funny that you mention FORTRAN, as it reminds me of a systems programming language used for writing several OSes, about the same age as FORTRAN and Lisp: Algol and its variants.
When the compiler vendor asked its customers whether Algol compilers should support disabling bounds checking, they said no responsible engineer would ever need it, as Hoare describes in his Turing Award lecture.
To indicate how far we've come from that, my biggest concern with the lowRISC project I mention elsewhere is that it's a RISC-V project, and in a very New Jersey way, that CPU has no provision for integer math errors other than divide by zero. That also makes the large-integer cryptographic crowd rather upset.
While this is ... tolerable, I seriously wonder what other corners are being cut, e.g. in the unreleased privileged ISA.
(A bit more background: RISC-V is intended to be an open core for everything, so making the base processor simple for education is good, but promoting it as a CPU for industry is in the direction of bad, IMHO.)
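For a sense of what "no provision for integer math errors" costs in software: without a carry flag or an overflow trap, multi-word arithmetic and overflow checks have to be reconstructed with extra compares (or compiler builtins). A rough, portable C++ sketch, not tuned bignum code:

```cpp
// Sketch of carry propagation and overflow checking done "by hand", the way a
// compiler must emit it on an ISA with no carry flag or overflow trap
// (e.g. the RISC-V base integer ISA). Real bignum libraries use tuned code instead.
#include <cstdint>
#include <cstdio>

// Add two 2-limb (128-bit) unsigned numbers, least-significant limb first.
void add128(const std::uint64_t a[2], const std::uint64_t b[2], std::uint64_t out[2]) {
    std::uint64_t lo = a[0] + b[0];
    std::uint64_t carry = (lo < a[0]) ? 1 : 0;   // the "missing" carry flag, recomputed with a compare
    out[0] = lo;
    out[1] = a[1] + b[1] + carry;                // a longer number would need another compare per limb
}

// Signed addition with explicit overflow detection, since the hardware will not trap.
bool checked_add(std::int64_t a, std::int64_t b, std::int64_t& result) {
    return !__builtin_add_overflow(a, b, &result);   // GCC/Clang builtin; true here means "no overflow"
}

int main() {
    std::uint64_t a[2] = {~0ULL, 0}, b[2] = {1, 0}, r[2];
    add128(a, b, r);
    std::printf("%llu %llu\n", (unsigned long long)r[0], (unsigned long long)r[1]);  // prints: 0 1

    std::int64_t sum;
    std::printf("overflowed: %s\n", checked_add(INT64_MAX, 1, sum) ? "no" : "yes");  // prints: yes
}
```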
> Immutability has benefits in terms of correctness, but not in terms of scalability to more processors or total throughput.
Wrong. Immutability can (and often does) increase scalability. It does that mainly by making this situation harmless:
> With modern memory hierarchies a lot revolves around never letting two threads touch the same cache line; if that happens at all you lose at least an order of magnitude in performance.
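Here is a sketch of why (illustrative C++, not from any particular system): when the shared structure is immutable, readers only ever read it, so its cache lines can sit in the shared state in every core; an "update" is a freshly built snapshot published once, rather than stores into lines other cores are actively reading.

```cpp
// Illustrative sketch: read-only sharing of an immutable snapshot.
// Readers never write the shared data, so there is no cache-line ping-pong
// between them; an update builds a new snapshot and publishes it once.
#include <memory>
#include <mutex>
#include <numeric>
#include <thread>
#include <vector>

using Snapshot = std::vector<int>;

std::shared_ptr<const Snapshot> g_current;   // current immutable version
std::mutex g_publish;                        // guards only the pointer swap

std::shared_ptr<const Snapshot> load() {
    std::lock_guard<std::mutex> lk(g_publish);
    return g_current;
}

void publish(std::shared_ptr<const Snapshot> next) {
    std::lock_guard<std::mutex> lk(g_publish);
    g_current = std::move(next);
}

int main() {
    publish(std::make_shared<const Snapshot>(1'000'000, 1));

    std::vector<std::thread> readers;
    for (int t = 0; t < 8; ++t)
        readers.emplace_back([] {
            auto snap = load();              // grab the current version once
            long long sum = std::accumulate(snap->begin(), snap->end(), 0LL);
            (void)sum;                       // pure reads: no invalidations between readers
        });

    // "Mutation": build a whole new snapshot off to the side, then swap the pointer.
    auto next = std::make_shared<Snapshot>(*load());
    (*next)[0] = 42;
    publish(std::move(next));

    for (auto& r : readers) r.join();
}
```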