That's not the case; apply doesn't care how many arguments the function takes. Compare map (which takes one mandatory argument plus a rest parameter) with map1 (which takes exactly two arguments):
arc> (help map)
(from "arc.arc")
[fn] (map f . seqs)
Applies the elements of the sequences to the given function.
Returns a sequence containing the results of the function.
See also [[each]] [[mapeach]] [[map1]] [[mappend]] [[andmap]]
[[ormap]] [[reduce]]
arc> (map list '(1 2 3) '(4 5 6))
((1 4) (2 5) (3 6))
arc> (apply map list '((1 2 3) (4 5 6)))
((1 4) (2 5) (3 6))
arc> (apply map (list list '(1 2 3) '(4 5 6)))
((1 4) (2 5) (3 6))
arc> (help map1)
(from "arc.arc")
[fn] (map1 f xs)
Return a sequence with function f applied to every element in sequence xs.
See also [[map]] [[each]] [[mappend]] [[andmap]] [[ormap]]
arc> (map1 [* 2 _] '(1 2 3))
(2 4 6)
arc> (apply map1 (list [* 2 _] '(1 2 3)))
(2 4 6)
As long as the last argument to apply is a list, you're golden.
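To spell out the rule: apply conses any leading arguments onto that final list before calling the function, so these two are equivalent:
arc> (apply + 1 2 '(3 4))
10
arc> (+ 1 2 3 4)
10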
Interesting. I was wondering if that might be the case.
So I guess that we don't need an anon. macro for that ;)
I'm also wondering whether we would ever actually want an anonymous macro, since macros often turn out to be so general-purpose that you'd want to make a named utility out of them anyway.
I'm not really clear on what an anonymous macro would do. Transform code? If we're writing it in one place, we can just transform it right there. Can you give a usage example?
And no, Arc only has ordinary macros and ssyntax. Though actually---and this just occurred to me now---you can use ssyntax to create symbol macros. For instance, I have a few ssyntax definitions in a library file.
The first two let me write (@func arg1 restargs) instead of (apply func arg1 restargs), and ($func xs) instead of (map func xs). However, that use of $ made the existing $ macro misbehave, so I also added ssyntax turning bare @ into apply and bare $ into the original $ macro. It turns out that this technique is fully generic, since ssyntax essentially does a find-and-replace on symbols:
arc> (= defsym add-ssyntax-top)
#3(tagged mac #<procedure>)
arc> (defsym RANDOM (rand))
t
arc> RANDOM
0.5548165450808223
arc> RANDOM
0.15745063842035198
arc> (= things '(alpha beta gamma))
(alpha beta gamma)
arc> (defsym THING1 (car things))
t
arc> THING1
alpha
arc> (= THING1 'one)
one
arc> things
(one beta gamma)
arc> (defsym NOISY (do (prn 'HI) nil))
t
arc> NOISY
HI
nil
arc> (car NOISY)
HI
nil
This is actually really nifty, even if it's an unintended side effect. On some level, though, I'm not fully convinced of the value of this: a symbol macro is a bit too pervasive, since the replacement applies to every occurrence of the symbol. Then again, Common Lisp gets along fine with symbol macros, so perhaps they're fine for Arc too. Are there naming conventions for symbol macros in Common Lisp? I used capital letters above just because I wanted some distinguishing mark.
There are a couple of caveats. First, every time you run add-ssyntax-top, you add another entry to the ssyntax table instead of overwriting the old one, so repeated redefinition will gradually make ssyntax slower. Second, the five tokens R, r, L, l, and ... are all unavailable, even if quoted; they turn the given string into modifying ssyntax, not standalone ssyntax. Still, this is an option I hadn't thought about before.
I just found the macro varif on Anarki, which returns a variable's value if it's bound. Would that be better, or worse? I should think it might be a little better, but I don't know.
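If I'm reading its definition right, it also takes an optional fallback for the unbound case; for example (the names here are throwaways):
arc> (varif not-yet-defined 'fallback)
fallback
arc> (= not-yet-defined 42)
42
arc> (varif not-yet-defined 'fallback)
42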
True, this is running in the REPL, I think, so it doesn't matter that much here. But having a good way to look up the value of a symbol is kind of important, I think. I want to do similar things that might be used in a "performance-sensitive" environment, not that Arc is really performance-sensitive in general. ;)
rntz wanted two separate functions to search macs and defs, so I suggested filtering the sig table based on whether each symbol was a mac or fn; that would only be possible if there was a way to turn the symbol into a value we could take the type of.
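A sketch of the kind of filtering I mean (this assumes every name in the sig table is actually bound, so eval won't error):
; names in sig whose global values are macros
(def macnames ()
  (keep [isa (eval _) 'mac] (keys sig)))

; likewise for ordinary functions
(def fnnames ()
  (keep [isa (eval _) 'fn] (keys sig)))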
Poking around the DreamHost support wiki and terms of service, I can't see anything stopping you. You can run an Arc interpreter in the background, so long as it doesn't eat too many resources, and nothing seems to prohibit forwarding requests to it.
However, it will not be officially supported in the same way that Ruby and Python are, so you'll be on your own when it comes to fixing problems. It might be better to use their private hosting service so that they don't complain about your Arc process (which they may do if a bug causes it to behave badly). Although this means it runs on a private virtual server, you still get all the same support and software as with the shared service.
I'm not sure I understand what you're doing, but perhaps you need another table that maps the symbols you created to the tables they refer to. Call it table-names. Then you could rewrite that like this:
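Something along these lines (a sketch with made-up names, since I'm guessing at your code):
; a registry from symbols to the tables they name
(= table-names (table))

; store a table under a symbolic name
(def name-table (sym tbl)
  (= (table-names sym) tbl))

; recover the table from its symbol later
(def lookup-table (sym)
  (table-names sym))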
But I suspect there is a better way of doing what you're trying to do, so if you give us some more details then we might be able to suggest a more sane alternative.
Another example: Mathematica is well integrated with its environment and uses much more than plain ASCII. Not only does it have all kinds of mathematical symbols but you can even paste pictures into Mathematica code to do image processing.
Something like that for a Lisp language would be very cool.
Interesting. Code as data -> Data as Code. Pictures as data; pictures as code? That reminds me of a "graphical" programming language I saw a while ago. It used blocks of color to control the interpreter, much like a Turing tape: the head would move up, down, left, or right depending on the color, and perform various other operations.
Well, term-rewriting languages have been around for over 40 years, and Lisp isn't dead yet! :)
The reason term rewriting can't replace macros is that the power of macros doesn't come from their similarity to rewrite rules, but from the fact that they run before normal execution. This two-step process makes macros an extension of the language's syntax. Pure doesn't have that ability: Pure's rules are a replacement for Lisp's functions, not for Lisp's macros.
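You can watch the first step on its own with Anarki's macex1, which expands a single level (unless2 is just a throwaway name, so as not to clobber the real unless):
arc> (mac unless2 (test . body) `(if (no ,test) (do ,@body)))
#3(tagged mac #<procedure>)
arc> (macex1 '(unless2 nil (prn "hi")))
(if (no nil) (do (prn "hi")))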
Ultimately, term rewriting and normal functional programming (the lambda calculus) are very similar; I don't think either has much of an advantage over the other.
Macros are similar to term-rewriting rules. However, the Arc macro system is less powerful than the term rewriting of Pure (and Q) for the following reasons:
1. Symbols are not evaluated at macro-expansion time.
2. Most Arc operators are functions or special forms (e.g. if, set, +), so they will not be evaluated until after all macros have been expanded.
3. Macros are expanded leftmost-outermost, which makes recursive macros impossible (Pure uses leftmost-innermost evaluation). On the other hand, this makes macros better for code transformation.
4. Pure expresses rules using equations. Arc expresses macros as functions on code, which is less convenient (but more consistent with the way Arc expresses normal functions).
5. Arc macros do not form a Turing-complete language, but Pure is Turing-complete. This is because of the leftmost-outermost order of evaluation, which causes recursive macros to expand infinitely rather than converge (sketched just below).
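To illustrate point 5: a macro whose expansion reproduces the original call never converges under outermost-first expansion (broken is a throwaway name):
; (broken 1) expands to (broken 1), which expands to (broken 1), forever
(mac broken (x)
  `(broken ,x))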
I once tried to create a language that would use both full term rewriting and macros together. It was very difficult, but I can't remember why.
5. Macro bodies are written in Arc; of course they're Turing complete! Even using macros and some non-Turing-complete subset of the rest of Arc, it should be possible to write a Turing-complete language using recursive macros, albeit not one that you'd really want to use. It's true that recursion without a base case won't terminate, but a Turing-complete language whose evaluation always terminates is a contradiction.
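And recursion with an expansion-time base case converges just fine, since the macro body can inspect its arguments before emitting code. A throwaway example (my-list just rebuilds list):
arc> (mac my-list args (if (no args) nil `(cons ,(car args) (my-list ,@(cdr args)))))
#3(tagged mac #<procedure>)
arc> (my-list 1 2 3)
(1 2 3)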
Actually, I apologize. My earlier statement about macros plus some Turing-incomplete subset of Arc being Turing-complete makes no sense. There is no such thing as "macros on their own": macros are just Arc code that gets evaluated before what we like to think of as "runtime", so Arc macros sans the rest of Arc are nothing. The "difference" here is not that macros are (or rather, macroexpansion is) Turing-incomplete; the difference is that Arc delineates macroexpansion from normal evaluation, whereas Pure makes no such distinction: everything is term rewriting.
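One way to see the delineation directly: a macro's body runs when a form is compiled, not when the resulting code runs. A made-up example (expand-time is a throwaway name):
arc> (mac expand-time () (prn "this runs during macroexpansion") ''done)
#3(tagged mac #<procedure>)
arc> (def f () (expand-time))
this runs during macroexpansion
#<procedure: f>
arc> (f)
done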