I disagree that measuring program length by the size of the parse tree is a good measure of conciseness. To an author and a reader, it is the number of words you read that matters, so I use word count as the benchmark. Newspaper writers are given a word count to hit, you type a certain number of words per minute, and people read at a certain speed. Words are units that are easily counted, so there will be no dispute over the measurement.
A parse tree is not particularly comparable between languages; most modern languages make extensive use of powerful runtimes to handle complex things that are part of the OS. There is typically over a million lines of code behind a simple text entry field.
In my Beads language, for example, I go to great lengths to give declarations a great deal of power, but they are not executable code, and thus harbor hardly any errors. Lisp is not a particularly declarative type of language and is thus more error prone. The more of your code that is never executed, the more reliable the software will be, so declarative programming is even better than Lisp.
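To make the "code you don't execute" claim concrete, here is a small sketch in Python (not Beads syntax; the `RULES` table and `validate` function are hypothetical names I made up for illustration). The declarative version expresses the rules as data: it can contain a wrong entry, but not a control-flow bug, and one small interpreter carries all the executable risk.

```python
# Declarative: rules as data, interpreted by one small piece of code.
# A wrong rule is a wrong table entry, easy to spot; it cannot be an
# off-by-one in hand-written branching logic.
RULES = {
    "age":  {"type": int, "min": 0, "max": 150},
    "name": {"type": str, "min_len": 1},
}

def validate(record):
    errors = []
    for field, rule in RULES.items():
        value = record.get(field)
        if not isinstance(value, rule["type"]):
            errors.append(f"{field}: wrong type")
            continue
        if "min" in rule and value < rule["min"]:
            errors.append(f"{field}: below minimum")
        if "max" in rule and value > rule["max"]:
            errors.append(f"{field}: above maximum")
        if "min_len" in rule and len(value) < rule["min_len"]:
            errors.append(f"{field}: too short")
    return errors

print(validate({"age": 200, "name": "Ada"}))  # ['age: above maximum']
print(validate({"age": 30, "name": "Ada"}))   # []
```

Adding a new field is a one-line change to the table, with no new executable code at all.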
Lisp-derivative languages are notorious for their poor readability; hence the avoidance of Lisp by companies who fear "read-only" code bases that cannot be transferred to a new person. Lisp, Forth, and APL all win contests for fewest characters, but lose when it comes to the transfer phase. Yet like a bad penny, Lisp keeps coming back, and it always will, because second-level programming, where you modify the language itself (essentially creating your own domain-specific language for each app), is the standard operating methodology of Lisp programmers. That one cannot understand such a domain-specific language without executing the code makes Lisp very unsuitable for commercial use.
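What "creating your own domain-specific language for each app" looks like can be sketched even without macros; this hypothetical Python example (all names invented for illustration) builds a tiny vocabulary for discount rules. A newcomer must first learn the team's made-up words (`over`, `both`, `discount`) before any rule reads naturally, which is exactly the transfer-phase cost being argued about.

```python
# A tiny embedded vocabulary for order-discount rules.
def over(amount):
    return lambda order: order["total"] > amount

def is_member(order):
    return order["member"]

def both(a, b):
    return lambda order: a(order) and b(order)

def discount(pct, when):
    return lambda order: order["total"] * pct / 100 if when(order) else 0.0

# Once the vocabulary exists, the "program" is one line of domain words:
loyalty = discount(10, both(over(100), is_member))

print(loyalty({"total": 200, "member": True}))   # 20.0
print(loyalty({"total": 200, "member": False}))  # 0.0
```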
Yes, there are a few notable successful commercial products (AutoCAD) that used Lisp to great effect, but for the next 5-10 million programmers coming on board in the next few years, I laugh at anyone with the audacity to imagine that Lisp would be remotely suitable. Lisp is an archaic language in so many ways. It cannot run backwards. It has no concept of drawing; it is firmly rooted in the terminal/console era from which it sprang. With no database, graphics, or event model, you have to use non-standardized APIs to make any graphical interactive product, which is what the majority of programmers are making.
>> Lisp derivative languages are notorious for their poor readability
Are they, though?
I find at least two other languages less readable:
- Scala: with operators everywhere, you need to have a cheat sheet
- Perl: you know the joke that it's the only language that looks the same before and after RSA is applied to it?
Lisp, on the other hand, is pretty readable, at least to me. I have only used Clojure from the Lisp family, and it had a great impact on how I think and write code in other languages; the result is more readable code. For a long time MIT taught its CS courses using Lisp, and SICP is also a pretty amazing read.
Scala and Perl are even worse, I agree with you there. Some people love Haskell too, but I think it's awful.
MIT has replaced Lisp with Python: even though they forced it upon their students for decades, they had to admit Lisp was archaic and not particularly readable. The Pharo IDE is arguably the most sophisticated IDE around today, but the fact remains that Lisp doesn't permit easily interchangeable parts, which is a major goal of the new programming languages being developed. Although very bright people can get very good at Lisp, the average person finds it extremely hard. Remember, you are reading it inside-out, which is highly unnatural to someone used to reading books left to right.
Words are called 'symbols' in Lisp. Building extremely expressive domain-level constructs reduces code size a lot in larger Lisp programs.
> Lisp is not a particularly declarative type of language
Just the opposite: Lisp is one of the major tools for writing declarative code. The declarations are part of the running system and can be queried and changed while the program is running.
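The "queried and changed while running" point has a rough analog outside Lisp; this Python sketch (a much weaker stand-in for a live Lisp image, with all names invented for illustration) shows a class definition that is itself a live object, inspectable and replaceable at runtime, with existing instances picking up the change.

```python
# A class "declaration" that stays live at runtime.
class Account:
    def __init__(self, balance):
        self.balance = balance
    def describe(self):
        return f"balance={self.balance}"

acct = Account(100)
print(acct.describe())  # balance=100

# Query the declaration while the program runs:
print([m for m in vars(Account) if not m.startswith("_")])  # ['describe']

# Change it while the program runs; the existing instance sees the change:
Account.describe = lambda self: f"Account holding {self.balance}"
print(acct.describe())  # Account holding 100
```

In a Lisp image this works on a far larger scale: functions, classes, and even the syntax itself can be redefined in the running system.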
> Lisp ... all win contests for fewest characters,
Most Lisp code since the end of the 70s doesn't care about character count; Lisp usually uses very descriptive symbols as operator names. Lisp users even invented machines with larger memory sizes in the late 70s to get rid of limitations on code and data size.
Paul Graham favors low-character-count code, but that is not representative of general Lisp code, which favors low-symbol-count code through expressive constructs.
> understand this domain specific language without executing the code
The languages will be documented, and their words will be operators in the language. Other programming languages also create large vocabularies, just in different ways. Lisp makes it easy to integrate syntactic abstractions into the software itself, without the need to write external languages, which many other systems have to do.
For example, one can see the Common Lisp Object System as a domain-specific extension for writing object-oriented code in Lisp. Its constructs are well documented and widely used in Lisp.
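The embedded-versus-external distinction above can be shown concretely; in this hypothetical Python sketch (function and rule names invented for illustration), an external mini-language arrives as strings and needs a parser before anything runs, while the embedded version is ordinary host-language code that the editor and debugger already understand.

```python
import re

# External language: rules are strings in a made-up syntax,
# so the host program must parse them first.
def run_external(rule, x):
    m = re.fullmatch(r"(\w+)\s*(>|<)\s*(\d+)", rule)
    if not m:
        raise ValueError(f"bad rule: {rule!r}")
    name, op, limit = m.group(1), m.group(2), int(m.group(3))
    return x[name] > limit if op == ">" else x[name] < limit

# Embedded vocabulary: the same rule is plain host-language code;
# no parser needed.
def gt(name, limit):
    return lambda x: x[name] > limit

print(run_external("age > 18", {"age": 21}))  # True
print(gt("age", 18)({"age": 21}))             # True
```

Lisp macros go one step further than the embedded version here: they let the new constructs get their own syntax while remaining inside the host language.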
> it is firmly rooted in the terminal/console era...