
It's obvious, to me at least, that hygienic macros are generally preferable to unhygienic ones. It's also clear to me that macros are an atypical way of thinking about programming. And yet I have a hard time seeing what advantage they yield over code that employs dynamic typing and proper composition of functions.

I have seen lots of examples of macros used to do things like square numbers. These are just minimal demonstrations of the technique, so you expect the code to be longer and make less sense. But the same thing appears to happen when the problem is more involved. The code makes less sense, and it's longer than the code I would write for a more traditional solution.

I know little about Lisp, but I wonder if Lisp programmers aren't conflating the advantages of macros with the advantages you get simply by building up functions in an intelligent way. Macros are neat, granted, but I've yet to see an instance where they actually increase productivity.

If this hypothesis is wrong, I would love to have someone point out why. I have been wary about approaching Lisp precisely because the preoccupation with macros seemed to me to be unjustified (and thus symptomatic of a culture more preoccupied with doing things the computationally interesting way rather than the more powerful way, e.g., using recursion when a for-loop would be shorter and faster).



Well, the short answer is that composing functions allows you to build algebraic abstractions but not syntactic abstractions.

I would never use squaring things as an example of macros: as you properly point out, this does not illustrate anything that cannot be done at least as easily with functions.

The examples I would trot out are things like let, which adds block scoping to languages that don't have it built in, or things like cond/case/switch. If you have a language that doesn't have a case-like statement, you can use a macro to make one.

Your current language probably has these things built in, but such macros at least demonstrate a kind of abstraction that cannot be done with functions.

For example, you could build a function that looks like a case statement, but because most languages these days employ eager evaluation, your case function would evaluate all of its outcomes rather than only evaluating the one you want.

To make it work as a pure function, you would have to roll your own thunks by passing anonymous functions in for each case to be evaluated and each outcome. That works, but all that boilerplate becomes tedious.

The macro merely automates that kind of boilerplate for you in places where eager evaluation is harmful.
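To see what that boilerplate looks like, here's a minimal Python sketch (the `case` helper is hypothetical, not from any library): because arguments are evaluated eagerly, each branch has to be wrapped in a zero-argument lambda so that only the selected branch actually runs.

```python
# Hypothetical "case" as a plain function. Each branch is a thunk
# (zero-argument lambda), so only the chosen one is evaluated.
def case(value, branches, default=lambda: None):
    """branches: dict mapping keys to zero-argument thunks."""
    return branches.get(value, default)()

result = case("add", {
    "add": lambda: 1 + 2,
    "boom": lambda: 1 / 0,  # this thunk is never called, so no error
})
print(result)  # 3
```

A macro version would let you write the branches as plain expressions and insert the lambdas for you.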

Time for me to stop, but I'll leave with a suggestion: syntactic abstraction is most useful when there's something unusual about the way you want to evaluate things.


> you would have to roll your own thunks by passing anonymous functions... all that boilerplate becomes tedious.

In many cases you could just define plain functions and have a way to denote what parts of the input should be converted to lambdas. I proposed something like this on the Arc forum, but it just got brushed aside. I still think it's a decent idea.

http://arclanguage.org/item?id=7216


To make [case] work as a pure function, you would have to roll your own thunks by passing anonymous functions in for each case to be evaluated and each outcome. That works, but all that boilerplate becomes tedious.

I like Ruby's block syntax for doing this without macros, but unfortunately you only get to pass in one block per function.


It's a worse-is-better thing: it makes some forms of abstraction awkward, and you have to fool around with method chaining. But it does take the most common case and make it simple.

It seems like a legitimate design choice, even if I don't always like the results.


For a class this quarter, I was writing a compiler from scratch in C. For the lexer, we had to have a table mapping current state and current character to next state. Moreover, it was necessary to figure out what states would be needed to produce the desired tokens.

Of course, knowing the language beforehand, one could come up with the necessary states and then produce the table by hand, but that is labor-intensive and error-prone, and having the language we were parsing hard-coded into my lexer code did not satisfy me.

My first approach was to figure out the states by hand, and then, rather than creating the transition table by hand, I defined a simpler structure near the top of my code which was an array of

{ current_state, next_state, "characters-resulting-in-this-transition" }

arrays. Then, at runtime, I would have to perform initialization to translate this structure into the actual transition table itself. It was at this point that I really wished for the macro power of Lisp, to have the full power of the language at compile time to define structures that would remain static at runtime.
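The idea, sketched in Python with made-up states rather than my actual C, is to expand the compact triples into the full state-by-character table at startup:

```python
# Expand compact (current_state, next_state, "characters") triples into a
# full state-by-character transition table. States are illustrative only.
ERROR = -1

transitions = [
    (0, 1, "0123456789"),  # start   -> integer literal
    (1, 1, "0123456789"),  # integer -> integer (more digits)
    (0, 2, "+-*/"),        # start   -> operator
]

num_states = 1 + max(max(cur, nxt) for cur, nxt, _ in transitions)
table = [[ERROR] * 128 for _ in range(num_states)]
for cur, nxt, chars in transitions:
    for ch in chars:
        table[cur][ord(ch)] = nxt
```

With Lisp macros, this same expansion could run at compile time and the finished table could be baked into the program as static data.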

As an aside, I ended up writing Python (to figure out both the necessary states and to build the transition table) that generated the C code, and then I added items to my Makefile that would run the Python to create the C and then compile it. If I was using Lisp, I could have done this in a much more straightforward manner.


Not to denounce the educational aspect of doing it all, but it would seem the most straightforward way to do this would be to use lex and yacc, which implement DSLs specifically designed to do this. Using a general purpose language seems like overkill (which is why parser generators exist).


The very last assignment was to use lex and yacc to compile the same language that we had spent the whole quarter writing a compiler and simulator for. The point of the class ("Compiler Design and Implementation") was not to learn the quickest way to write a compiler, but to learn about the common methods used in writing compilers (REs (finite automata) for the lexer, and LL, SLR, and LALR parsers for the compiler). Since this is precisely how lex and yacc work, someone coming out of the class would have the knowledge to port lex and yacc to another language or to perhaps even improve upon it.

As for saying that lex and yacc "implement DSLs", that is a huge simplification of what they do. A large part of the work of lex and yacc is generating the tables and the engine code that are used to translate characters to tokens and token sequences to parser rules. Thanks to the work we did in using this same approach ourselves, it was much easier for me to understand what lex and yacc do and why various design decisions were made.


'As for saying that lex and yacc "implement DSLs", that is a huge simplification of what they do.'

The DSLs I'm referring to are the lex input format and the yacc input format, both of which are specific languages for the domains of lexical analysis and parser generation respectively.


Ahh... I thought you meant that lex and yacc are typically used to implement DSLs.


A rough equation: Macro = language-level customization (for example, you want to mix two different languages, like HTML + Java or Java + SQL, etc., and still make the resulting code very readable versus using quoted strings all over the place).

Think of frameworks or code generators. (Assuming familiarity with Java) Consider a JSP page, and contrast it to the Java servlet that a JSP page generates.

JSP allows you to embed Java fragments in a HTML file. A Java compiler cannot by itself read it, so a JSP pre-compiler takes it, generates Java servlet code out of it. That servlet code has HTML strings embedded in it all over the place, so is hard to read, hard to maintain, and hard to write in the first place.

The JSP solution allows you to separate the visual design part (HTML/CSS) from the Java part. The pre-compiler generates normal Java code from this Java-embedded-HTML.

This is a classic macro solution. In Lisp, unlike Java, the macro code generation facility is built-in. This is both a plus and a minus. You don't do Lisp-embedded-in-HTML, but you would do Lisp-made-to-look-sort-of-like-HTML. A well-designed macro like that would make code easier to understand than traditional non-macro code which will have HTML fragments embedded as strings all over the place.
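To make the shape of that concrete, here's a rough sketch in Python rather than Lisp (the `tag` helper is invented for illustration). Nested calls stand in for nested macro forms; this only shows the structure, not the compile-time expansion a real macro would give you:

```python
# Generate HTML from nested calls instead of embedded strings.
# "tag" is a hypothetical helper, not a library function.
def tag(name, *children):
    return f"<{name}>{''.join(children)}</{name}>"

page = tag("html",
    tag("body",
        tag("h1", "Hello"),
        tag("p", "Generated without string soup"),
    ))
```

The payoff is that the document's structure is checked by the host language's own syntax, instead of living inside opaque string literals.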

I personally prefer code generators to macros, because when you want that linguistic convenience, you really want it (I want full HTML/CSS, not someone's bastardized macro that kind of looks like HTML).

Thinking in terms of code generators also keeps macro-abuse minimal, simply because you have to work a bit harder to do code generation. A lot of Lisp macros are really mental masturbation by show-off artists.


I too am not very experienced in Lisp, but one rule of thumb I have seen for macros is: "If you don't want to automatically evaluate all the arguments of a function, use a macro instead."

As you point out, macros can be overused, but there are some things that you can do with macros that simply can't be done with functions alone (like implementing short-circuiting "and"s and "or"s using just "if" statements).

Many syntactic sugar improvements to programming languages -- like .NET's "using" statement, which allocates a resource, runs some code, then frees the resource -- can be implemented directly with macros. These improvements make code shorter, easier to read, and less error-prone... and with macros you don't have to wait for the language designer to add that functionality for you.
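Python eventually got this as the built-in "with" statement, but a macro-less language can approximate the pattern with a higher-order function that takes the body as a thunk. This is a sketch with illustrative names, not any library's API:

```python
# Approximating a "using"-style construct without macros: acquire a
# resource, run the body, and guarantee release even on an exception.
def using(acquire, release, body):
    resource = acquire()
    try:
        return body(resource)
    finally:
        release(resource)

log = []
result = using(
    acquire=lambda: log.append("open") or "handle",
    release=lambda r: log.append("close"),
    body=lambda r: len(r),
)
print(result, log)  # 6 ['open', 'close']
```

The lambdas are exactly the boilerplate a macro (or dedicated syntax like "using"/"with") would hide.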


Most languages short-circuit ands and ors already, but adding additional short-circuiting demands macros:

My practical example of this is my desire for a short-circuit "implies" operator to use in assertions (typically in post-conditions).

I want to be able to write checks like

  assert (foo != null _implies_ foo.someProperty);
where the _implies_ operator short-circuits.

Writing my own implies(a,b) function is no good in most languages, because strict evaluation would yield a null-reference exception in the example when a is null.

Without macros I end up writing:

  assert(a == null || a.someProperty); 
or worse

  if (a != null) { assert(a.someProperty); }

With lisp-strength macros I could easily add an implies operator and get readable rather than clever code.
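Lacking macros, the closest a strictly-evaluated language gets is delaying the consequent by hand. A Python sketch of that workaround (the `implies` function and names are hypothetical):

```python
# Short-circuiting "implies" without macros: the consequent is passed as
# a zero-argument lambda, so it is only evaluated when the antecedent holds.
def implies(antecedent, consequent_thunk):
    return (not antecedent) or consequent_thunk()

foo = None
# Safe even though foo is None: the thunk is never called.
assert implies(foo is not None, lambda: foo.some_property)
```

It works, but the caller has to remember the lambda at every call site, which is precisely the clutter a macro would remove.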


> and thus symptomatic of a culture more preoccupied with doing things the computationally interesting way rather than the more powerful way, e.g., using recursion when a for-loop would be shorter and faster

That would indeed be a strange culture, but it's not the Lisp culture. We use loops.

In fact, most (all?) of our looping constructs are implemented as macros. Try implementing a looping construct in CL without macros; I bet you'll find it shorter and faster to use a macro than a function.


I agree that Lispers sometimes overestimate the usefulness of macros. A lot of supposedly powerful macros can be (and are, in other languages) replaced by plain functions. I ranted about this on the Arc forum recently, but no one seemed to share my concern.

Also, many common macros (e.g. HTML generators) confer a performance advantage by doing computation at compile time, but do not confer any expressive advantage.

But, this is not to say that macros can't increase expressivity. You agree that the creation of new languages (metalinguistic abstraction) is an important tool, right? Think of a macro as a compiler for an arbitrary language or sub-language into Lisp. Lisp's homoiconic syntax just makes such compilers extremely easy to write. The trade-off is that the homoiconic syntax is harder to read (IMO).

If you want an example of a useful macro, here's one I used in a recent program. The interface was a simple command line where commands and their arguments generally corresponded to functions and their arguments. Much of the code looked like this (highly simplified):

  (define (add a b) (print (+ a b)))

  (new-command 'add add "Adds two integers" (a integer) (b integer))
The program would then display "add" among the list of available commands, along with its description and syntax. When the user typed e.g. "add 1 2" it would validate the input and call add on it.

Obviously, there's a lot of duplication above, because the properties of a function (its name, its parameters, etc.) aren't available at runtime; theoretically some reflection system could provide this, but it would be kind of a mess. So I wrote a macro that unified define and new-command, roughly like this:

  (define-command add (a integer b integer)
    "Adds two integers"
    (print (+ a b)))
In short, I designed a domain-specific language to attack my problem, and Lisp made it easier than any other language would have.
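For comparison, a decorator-based Python sketch gets part of the way there through runtime registration rather than compile-time expansion (all names here are invented for illustration):

```python
# Register a command's handler, description, and parameter types in one
# place, so the function's name and signature aren't repeated.
COMMANDS = {}

def command(description, **param_types):
    def register(fn):
        COMMANDS[fn.__name__] = (fn, description, param_types)
        return fn
    return register

@command("Adds two integers", a=int, b=int)
def add(a, b):
    print(a + b)

# Dispatch "add 1 2": convert each string argument to its declared type.
fn, desc, params = COMMANDS["add"]
fn(*(t(arg) for t, arg in zip(params.values(), ["1", "2"])))  # prints 3
```

The difference is that the Lisp macro does this rewriting before the program runs, with no registry lookup or reflection needed at runtime.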

Also, I'd suggest dropping the rhetoric about Lisp "culture," which you clearly have only a vague understanding of. Have some trust in your fellow humans.



