With interpreter semantics, in which a macro gets expanded anew every time a function is called, the late binding comes for free. ;-) Then, if you want the runtime performance that comes from compilation, you optimize for the case where people are not redefining functions, and invalidate the old compilation results when they do. I think that rough plan should be workable, though I haven't implemented enough of a system to say how well it turns out in practice. But I think that's essentially the only way to get anything close to good performance in JavaScript VMs (not that they expand macros, but I expect they inline function calls and such, which requires similar assumptions about global definitions), and there it seems to have been done.
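To make the invalidation idea concrete, here's a toy sketch in Python (not taken from any real system; DefTable and its methods are made up for illustration): each "compiled" result records which global definitions it assumed, and redefining one of those definitions throws the stale result away, so calls that never see a redefinition stay on the fast path.

```python
class DefTable:
    def __init__(self):
        self.defs = {}       # name -> current source-level definition
        self.compiled = {}   # name -> (compiled callable, set of names it assumed)

    def define(self, name, fn):
        self.defs[name] = fn
        # Anything compiled against the old definition of `name` is now stale.
        self.compiled = {k: (c, deps) for k, (c, deps) in self.compiled.items()
                         if name not in deps}

    def compile(self, name):
        # Stand-in for the real work (macroexpansion, inlining, codegen):
        # bake the current definition into a closure and record the assumption.
        # A real compiler would record every global it expanded or inlined.
        fn = self.defs[name]
        return (lambda *args: fn(*args)), {name}

    def call(self, name, *args):
        if name not in self.compiled:            # compile lazily, once
            self.compiled[name] = self.compile(name)
        return self.compiled[name][0](*args)     # fast path: reuse cached result


defs = DefTable()
defs.define("greet", lambda who: f"hello, {who}")
print(defs.call("greet", "world"))   # compiles once, then reuses the result
defs.define("greet", lambda who: f"hi, {who}")
print(defs.call("greet", "world"))   # old compiled entry was invalidated
```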
For separate compilation, it does seem clear that what gets serialized will be references like "the object [probably a function] named 'foo in module bar", and structures (s-expressions or otherwise) containing such references. Given that compilation implies macroexpansion, you do have to assume (or verify) that the macros from other modules are what they used to be—and that non-macros (used in functional position at least) are still non-macros. If you have a full-blown Makefile kind of build system, then by default I suppose every file's output depends on the contents of every other file that it uses; or, as an optimization, depends merely on the exact set and definitions of macros exposed from those files. (In the C++ system I encounter at work, code is separated into .cpp and .h files, and editing a .h file causes the recompilation of every .cpp file that recursively depends on it, but editing a .cpp file only causes its own recompilation. If you wanted to imitate that, I guess you'd put macros into a distinctively named set of files, and forbid exportable macros anywhere else.)
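And a sketch of that last optimization, under an entirely hypothetical module representation (plain dicts with "macros" and "functions" slots): at compile time you record a hash of each dependency's exported macro definitions, and later you only recompile if one of those hashes has changed, so edits to ordinary functions in a dependency don't trigger rebuilds.

```python
import hashlib

def macro_export_hash(module):
    """Hash only the macro definitions a module exposes, so edits to its
    ordinary functions don't force dependents to recompile."""
    items = sorted((name, src) for name, src in module["macros"].items())
    return hashlib.sha256(repr(items).encode()).hexdigest()

def needs_recompile(compiled_record, modules):
    """compiled_record remembers, per dependency, the macro-export hash
    observed when this file was last compiled."""
    return any(macro_export_hash(modules[dep]) != old_hash
               for dep, old_hash in compiled_record["dep_macro_hashes"].items())


modules = {
    "bar": {"macros": {"with-foo": "(defmacro with-foo ...)"},
            "functions": {"foo": "(define foo ...)"}},
}
record = {"dep_macro_hashes": {"bar": macro_export_hash(modules["bar"])}}

modules["bar"]["functions"]["foo"] = "(define foo ... changed ...)"
print(needs_recompile(record, modules))   # False: only a non-macro changed

modules["bar"]["macros"]["with-foo"] = "(defmacro with-foo ... changed ...)"
print(needs_recompile(record, modules))   # True: a macro this file expanded changed
```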
---
Thanks! I've sold out and have been working for a medium-sized company doing mostly C++ and bash (the latter is unbelievably useful) for the past 3.5 years. I make intermittent progress on the side doing other things.