a) All examples are now checked when you load tests.arc.
b) You can pass in an expected value whose printed form can't be read back in as an expression (such as a hash table) by wrapping it in valueof. For example:
(examples sref
  (ret x '(1 2 3)
    (sref x 4 1))
  (1 4 3)
  (ret x "abc"
    (sref x #\d 0))
  "dbc"
  (ret x (obj a 1 b 2)
    (sref x 3 'd))
  (valueof (obj a 1 b 2 d 3)))
This is how it looks:
arc> (help sref)
[fn] (sref tem v k)
Sets position 'indices' in 'aggregate' (which might be a list, string, hash
table, or other user-defined type) to 'value'.
Examples:
arc> (ret x '(1 2 3)
       (sref x 4 1))
(1 4 3)
arc> (ret x "abc"
       (sref x #\d 0))
"dbc"
arc> (ret x (obj a 1 b 2)
       (sref x 3 'd))
#hash((b . 2) (a . 1) (d . 3))
Summary of rules for expected value:
i) If it's _, checking is skipped and help doesn't print the result.
ii) If it's of the form (valueof x) then we evaluate x when printing and comparing.
iii) Otherwise we compare against the raw value.
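Roughly, the checking side of those rules is a three-way dispatch, something like this sketch (a hypothetical check-example helper using iso; not the actual code, and the help-printing side is left out):

(def check-example (expr expected)
  (if (is expected '_)
       nil                              ; rule i: skip the check entirely
      (caris expected 'valueof)
       (unless (iso (eval expr)         ; rule ii: evaluate the form inside valueof
                    (eval (cadr expected)))
         (prn "example failed: " expr))
       (unless (iso (eval expr)         ; rule iii: compare against the raw value
                    expected)
         (prn "example failed: " expr))))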
I could eliminate b) by always evaluating the expected value. This would make tests look like:
(examples list
  (list 1 2 3)
  '(1 2 3)              <-- note the quote
  (list "a" '(1 2) 3)
  '("a" (1 2) 3))       <-- note the quote
rather than like the current:
(examples list
  (list 1 2 3)
  (1 2 3)
  (list "a" '(1 2) 3)
  ("a" (1 2) 3))
Which do people prefer?
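For what it's worth, always evaluating the expected value would also let rules ii) and iii) collapse into one in the checker, roughly (again a hypothetical sketch, not the actual code):

(def check-example (expr expected)
  (unless (is expected '_)              ; rule i is unchanged
    (unless (iso (eval expr) (eval expected))
      (prn "example failed: " expr))))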
(Then again, perhaps there's no point polishing this further if we find a clean way to manage examples as strings that can continue to be checked, while handling ordering and so on.)
Personally, I'm of the opinion that tests and examples should probably be kept separate. Examples are intended to be evaluated and displayed for help as part of the documentation, while tests are designed to prevent errors in the code and often need to be written with that purpose in mind. Checking for equality is only one of many assertions one may wish to make about output, and as noted before, many things have side effects that are not so easily compared.
Merging the two concepts can be helpful, but it demands more of the people writing the examples in the first place. Also, just because a code snippet makes a good example does not mean it makes a good test, and vice versa.
I would prefer a solution where 'examples didn't include any predefined results at all, and they were all just evaluated during help. If desired, someone working with the unit test suite could write code that leveraged the examples, but it wouldn't be necessary. That way we could use good illustrative examples that may not make good tests, and good thorough tests that may not make good examples.
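Something like this, say (a purely hypothetical sketch; the names stored-examples*, examples-only, and show-examples are made up and are not the existing macro):

(= stored-examples* (table))

; Store the example forms unevaluated; nothing is checked at load time.
(mac examples-only (name . exprs)
  `(= (stored-examples* ',name) ',exprs))

; Evaluate and display them only when someone asks, e.g. from help.
(def show-examples (name)
  (each expr (stored-examples* name)
    (pr "arc> ")
    (write expr)
    (prn)
    (write (eval expr))
    (prn)))

Then (examples-only list (list 1 2 3)) would record the form without running it, (show-examples 'list) would evaluate it on demand, and anyone building a test suite on top could iterate over stored-examples* themselves.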
Yeah you may well be right. Is it really not useful to also be able to see the result of an example call right there next to the code?
I certainly agree that the vocabulary of matchers is incomplete. And if we can't find a small basis set of them, this whole second alternative to tests starts to seem inelegant.
Perhaps we should just tag some tests to be shown in help and call it a day. Though I like seeing the tests right next to each function; that seems useful even if we rip out the online help altogether. In fact, I started out writing examples when the macro was a no-op. We could just inline all the tests, but then it would be overwhelming to have 20 tests for each function. Ok, I'll stop rambling now.
Summary of use cases:
a) Getting feedback that something broke when we make a change. (Unit tests currently do this.)
b) Seeing some examples as we read the code.
c) Online help at the repl.
d) Online help in the browser (like kens's /ref/).
I don't think including the tests alongside the code would help much; many tests are rather complicated, involving setup, teardown, and more complex assertions than just 'eq. Not that one couldn't understand what a test meant, just that it's not as clear as an example.
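For instance, even a simple test of something like writefile needs scaffolding around it; a hypothetical sketch:

(let path "/tmp/writefile-test-scratch"    ; scratch path chosen for illustration
  (writefile '(1 2 3) path)                ; setup: produce the side effect
  (unless (iso (readfile1 path) '(1 2 3))  ; assert on the file's contents, not the return value
    (err "writefile did not round-trip"))
  (rmfile path))                           ; teardown: remove the scratch file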
I hadn't thought of the use case for wanting examples while perusing the code itself, but I must admit that I find that somewhat uncommon. I don't often know which file a function is defined in, and rarely need it given the help support we have in the repl. If I do want to look at the code, the 'src utility shows the source of a function. If I want to edit it, I often use the help utilities to find the file anyway. So having the results only available via the repl wouldn't bother me any.
Use cases c) and d) can get by with just evaluating the examples and showing the output.
No, unless you wanted to dynamically read the examples from the docstrings to evaluate them when the examples are queried, either directly or as part of help.
Actually, I don't know if the examples should be automatically displayed with 'help, or queried separately.
Either way, it would be nice to make them automatically evaluated, unless they can't be for whatever reason. It seems like that would be easiest to do with something like the existing examples macro, but if you think it would be doable with docstrings I guess that could work.
Ah, ok. So you don't care about being able to see them next to the function, but you would like some way to see them at the repl along with their results. Let me know if I'm still missing something.
(Sorry I'm asking basic questions. The discussion has ranged far enough that I just want to circle back to nail down precisely what you're suggesting among all the use cases and possible UIs.)
Well, that's what I'm suggesting. I don't see the other cases as essential, and the interface seems simpler that way. I'm lazy too, which means that if I'm making examples, I'd rather not have to provide the results. Not that I couldn't, but I'd like the option at least to do otherwise.
As always, you don't have to change anything just to meet my opinions. I'm used to being in the minority, in fact.
Maybe nobody else cares. It's a pretty small community at this point anyway, and I doubt everyone checks daily. I know I've gone through long periods without checking.