Hacker News
Can a language have Lisp's powerful macros without the parentheses?
6 points by BrandonM on April 5, 2007 | 20 comments


It would have to have some notation for parse trees, and if that wasn't s-expressions, it would probably be something less convenient, not more.


I had a sneaking suspicion that that was the case.

This reminds me of something that an Associate Professor at Ohio State said when I was presenting on a mathematical tool that I was (and am) implementing. He couldn't remember the order of operations, so he was having trouble parsing the statement:

(forall x, y in Z)(x >= y AND x <= y ==> x = y)

That led him to ask why I shouldn't just include the unnecessary parens, i.e. ((x >= y AND x <= y) ==> x = y), which then led to the question of where to stop adding parentheses. Finally, he recalled talking to another professor who made the case that having rules for order of operations is a bad idea (because those rules have to be memorized), and asked why not just parenthesize everything. I pointed out, of course, that he was almost exactly describing Lisp.

That thought, however, led me to consider the idea of making order of operations always evaluate left-to-right, except for parenthesized expressions, which would be evaluated first (in the same manner). Leveraging this idea, perhaps it would be possible to escape the hold of exclusively using s-expressions?

My main goal would be to write a general-purpose language which could be used for writing any other language by using the macro facilities, instead of being forced to write Lisp-like languages as appears to be the case with Lisp. Of course, maybe to get that general, you may as well simply use lex and yacc to write a compiler.

Well, anyways, these are mostly just ramblings, and I'm glad to see that others are interested in this idea.


"That thought, however, led me to consider the idea of making order of operations always evaluate left-to-right"

Smalltalk does something similar. In Smalltalk, any infix operator is just a binary message send, and they're parsed left to right. So for example, 12 + 6 / 3 results in 6 instead of 14. This result seems perfectly natural to most Smalltalkers, but it drives me nuts.
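That strict left-to-right rule is easy to sketch. Here is a minimal Python illustration (not Smalltalk itself; the function name is invented):

```python
# A sketch of Smalltalk-style evaluation: binary operators applied
# strictly left to right, with no precedence table at all.
def eval_left_to_right(tokens):
    """tokens: [number, op, number, op, number, ...]"""
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b,
           '/': lambda a, b: a / b}
    result = tokens[0]
    for i in range(1, len(tokens), 2):
        result = ops[tokens[i]](result, tokens[i + 1])
    return result

print(eval_left_to_right([12, '+', 6, '/', 3]))  # 6.0, not the usual 14
```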

Personally, I think it's important to consider the audience when designing a programming language (or anything, really). Operator precedence is totally illogical and adds lots of unnecessary complexity - but it's drilled into people's heads from elementary school onwards. It's like Qwerty keyboards, American date formats, and English spelling. In most cases, though, it's better to work with people's illogical expectations than to say "No, you're wrong, here's a better way." Perhaps that's why Lisp never caught on.


"In most cases, though, it's better to work with people's illogical expectations.... Perhaps that's why Lisp never caught on."

I appreciate the insight here. I think that if the goal is to create a general-purpose programming language for consumption by the average population, it might be better to meet the expectations of the average person. I'm not saying that it needs to be C-like necessarily, but I think that you're right that things like order of operations and other expectations should probably be properly preserved, at least for mathematical operations.

I also realized that my approach was still wrong. If the goal is to be able to build any programming language, the left-to-right approach would already be limited to building a language which evaluates from left-to-right. So it seems that I still have work to do in order to figure out how best to design a language which is able to look like whatever the user desires it to.

One possibility is to sacrifice ease of macro-writing for making other areas of the language more consistent (but not Lisp-like). That is, I would like to keep the expressive power of macros high (even if it involves modifying the read table in some way) while keeping the language intuitive for a beginner. This would likely make it a bit more difficult to write macros, but by the time you are using macros a lot, you are probably a bit more advanced, anyways.

I'm thinking that there has to be some kind of grammar theory that I can leverage here, where the programmer could actually modify the language's parser in the program while maintaining consistency and lack of ambiguity, or being alerted to the potential ambiguity in case they fail to.
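One way to picture that "programmer-modifiable parser" is a precedence-climbing (Pratt-style) core whose operator table can be extended at runtime. A minimal sketch, with all names hypothetical:

```python
# An extensible expression parser: the user registers operators with a
# precedence and associativity, and the parser picks them up immediately.
OPERATORS = {}  # symbol -> (precedence, associativity)

def define_op(symbol, precedence, assoc='left'):
    OPERATORS[symbol] = (precedence, assoc)

def parse(tokens, min_prec=0):
    """Precedence-climbing parse; returns a nested prefix tuple."""
    left = tokens.pop(0)
    while tokens and tokens[0] in OPERATORS:
        op = tokens[0]
        prec, assoc = OPERATORS[op]
        if prec < min_prec:
            break
        tokens.pop(0)
        # Left-associative operators raise the bar for the right side.
        next_min = prec + 1 if assoc == 'left' else prec
        left = (op, left, parse(tokens, next_min))
    return left

define_op('+', 1)
define_op('*', 2)
print(parse(['a', '+', 'b', '*', 'c']))  # ('+', 'a', ('*', 'b', 'c'))
```

Detecting ambiguity or conflicts between user-defined operators is the hard part this sketch punts on.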

In any case, I am quite convinced that Lisp is probably one of the most ideal languages. This means that my hypothetical language may be an example of the phenomenon pg describes, where the best languages are those which are designed for use by the designer, and languages like Java and Cobol are the result of designing for others. I would like to hope that's not the case here, since I'm approaching this with a use-case in mind: to design a language which can be used to design any other language.


But, of course, you don't have to have code files full of parens in order to have s-expressions. Alternative syntaxes have never taken hold in the Lisp world, which may mean they just offer nothing to an experienced Lisp hacker, but they're certainly not impossible.

You can have user-definable infix operators; Haskell has them and uses them to good effect. (The way Haskell does it, you can't have identifiers composed of both letters and operator characters, which Lisp people like to do, but I think we can safely treat that as a matter of taste.) You pick a symbol, say ~, and assign it a precedence and left- or right-associativity:

infixr 3 ~

infixl 7 .!.

Now (x ~ y .!. z) is parsed as ((~) x ((.!.) y z)), where (~) is the Haskell way of using the ~ operator as a prefix function. Haskell also lets you use a prefix function as an infix operator, by putting backquotes around it.

If you want to apply this in Lisp, there is the issue that (foo) means something other than foo; I would like to allow (foo x + bar y) to mean ((+) (foo x) (bar y)), but I wouldn't want (x + y) to mean ((+) (x) (y)). My solution would be that a single value is parsed as just a value, but multiple values next to each other are a function call: (foo x + y) is ((+) (foo x) y); to call y without parameters you have to write (foo x + (y)).
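That juxtaposition rule can be sketched as a simple grouping pass over a token list (Python, purely illustrative; the function name is invented):

```python
# Sketch of the proposed rule: between infix operators, a run of
# several tokens is a function call, a single token is just a value.
def group(tokens, operators=('+',)):
    out, run = [], []
    for tok in tokens + [None]:          # None marks end of input
        if tok in operators or tok is None:
            # Emit the run: a call if several tokens, a bare value if one.
            out.append(tuple(run) if len(run) > 1 else run[0])
            if tok is not None:
                out.append(tok)
            run = []
        else:
            run.append(tok)
    return out

print(group(['foo', 'x', '+', 'bar', 'y']))  # [('foo', 'x'), '+', ('bar', 'y')]
print(group(['foo', 'x', '+', 'y']))         # [('foo', 'x'), '+', 'y']
```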

This leaves you with the parentheses for control structures and the like. If you like layout-based languages, you can adopt the rule that "expression + colon + newline + indented expression 1 + indented expression 2 + ..." is parsed as "expression (indented expression 1) (indented expression 2) ...".

Some examples of code from On Lisp formatted like this:

http://himalia.it.jyu.fi/~benja/2007/layoutedlisp.txt

Of course, it still looks like Lisp, with less parens.


That's definitely true in terms of defmacro and syntax-case. However, I do think with some effort the scripting-language community could come up with something almost as powerful as syntax-rules from R5RS, thus catching up to where the Scheme community was in 1991. With syntax-rules the person writing the macro does not have to explicitly deal with the parse tree.
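The pattern/template idea behind syntax-rules can be illustrated without Scheme: the macro writer supplies a pattern and a template, and the expander does all the tree work. A toy Python version (nested tuples stand in for s-expressions; everything here is illustrative):

```python
# Toy syntax-rules-style expansion: match a form against a pattern,
# binding ?variables, then instantiate a template with those bindings.
def match(pattern, form, bindings):
    if isinstance(pattern, str) and pattern.startswith('?'):
        bindings[pattern] = form          # pattern variable: bind anything
        return True
    if isinstance(pattern, tuple) and isinstance(form, tuple):
        return (len(pattern) == len(form) and
                all(match(p, f, bindings) for p, f in zip(pattern, form)))
    return pattern == form                # literal: must match exactly

def expand(template, bindings):
    if isinstance(template, tuple):
        return tuple(expand(t, bindings) for t in template)
    return bindings.get(template, template)

# (my-if c t e) => (cond (c t) (else e)), written as pattern + template
pattern  = ('my-if', '?c', '?t', '?e')
template = ('cond', ('?c', '?t'), ('else', '?e'))
b = {}
assert match(pattern, ('my-if', 'x', 1, 2), b)
print(expand(template, b))  # ('cond', ('x', 1), ('else', 2))
```

Real syntax-rules adds ellipsis patterns and hygiene, which this toy skips entirely.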


Yes, it can.

http://www.livelogix.net/logix/intro.html

Logix is built on Python and compiles down to Python bytecode, so you can use it with Python modules, but it has macros that I think are probably as powerful as Lisp's. It's meant for DSLs.


Could you give an example that's more powerful? From the 60-second intro you linked to, these look no more powerful than C's macros.

http://en.wikipedia.org/wiki/C_preprocessor#Macro_definition_and_expansion


I don't use it myself; I just had it in my bookmarks.

I am also not very familiar with Lisp's macros. You may find something of interest in the more complete documentation.

Sorry.


I agree, thanks for the link. Unfortunately, I find the complete lack of syntax to make it even harder to read than Lisp macros (which I don't personally find difficult, but some do). Additionally, having to use numbers to indicate operator precedence seems a bit strange. Of course, Lisp solves this by always using parentheses, but I would be curious to see if there is a more elegant way of handling this than the numeric approach (or even the approach suggested in their proposed changes to Logix).


I agree with you. The lack of parentheses makes that hard to read. I am trying to get my head wrapped around Lisp (maybe that's it, I am _looking_ for the parentheses :)).


Thanks for posting that. We're building a DSL for our users and this will get put on the stack of things to look into. We're Python hackers so it's a good find.


In a few of Paul Graham's essays and in some of my own experiences, I have found that Lisp is a very powerful language for hard programming problems, largely due to its powerful macro facilities. These macros give the programmer the power to create entirely new, domain-specific languages without too much difficulty.

I am currently a Master's student and I hope to eventually be a founder, and my question is the title of this submission. I can understand why some people would shy away from Lisp, but I also see how many upcoming programmers appreciate the power of Python and Ruby. I believe, then, that the next big language will be one that combines the macros of Lisp with the ease of use of Python. I think that such a language would be perfect for startups.

I am a glutton for punishment, so my graduate interests lie in programming languages. Here at Ohio State University, there is a language called RESOLVE that a lot of students don't like too much (it's built on top of C++ and is much too wordy), but it does have some interesting concepts built in that I would like to put in a language of my own. In creating a new language, one of the things that would be first and foremost in my mind would be to ensure it had powerful macro capabilities, so I am interested to hear your feedback on the viability of a language with powerful macros but fewer parentheses.


See the Io programming language:

"Io is a small, prototype-based programming language. The ideas in Io are mostly inspired by Smalltalk (all values are objects), Self (prototype-based), NewtonScript (differential inheritance), Act1 (actors and futures for concurrency), LISP (code is a runtime inspectable/modifiable tree) and Lua (small, embeddable)."



Thanks for the link. It looks pretty interesting; I'm installing it right now to play around with at some point.


I've never understood why this is such a popular question.

Remember this remark from The Matrix? "After a while you don't even see the code. All you see is blonde, redhead..." Well, after a while you don't even see the parens. All you see is closure, continuation...

Seriously, just start reading On Lisp and by the time you're half-way through it you'll be at this point.


Oh, I totally get that. The "problem" is that of building languages on top of Lisp, which is really half of the point. The languages that can be trivially built on top of Lisp generally look like Lisp. I would like to be able to use a language with Lisp's capabilities to build a new language which looks completely different.

I'm not really asking for an answer here; this is really just a thought exercise on my part that I thought I'd share.


In the io programming language you can generate code (message trees) at runtime and then eval them in any context that you want to.

You can also pass message trees (code) to a method without evaling them, and then modify them however you want before evaling them within whatever context you want.
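Python's ast module offers a rough analogue of this "code as a runtime tree" idea, for comparison (this is standard Python, not Io):

```python
# Build or rewrite a code tree at runtime, then compile and eval it.
import ast

tree = ast.parse("x + y", mode="eval")   # code as an inspectable tree
tree.body.op = ast.Mult()                # modify it: + becomes *
ast.fix_missing_locations(tree)
code = compile(tree, "<generated>", "eval")
print(eval(code, {"x": 6, "y": 7}))      # 42
```

The difference is that Io makes this the normal way code exists, rather than an escape hatch.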


My favorite approach was detailed here: "Growing Languages with Metamorphic Syntax Macros" http://www.brics.dk/RS/00/24/

Monty



