Thanks for chiming in, @jancsika. You confirmed what I thought about the "stack overflow" counter. Pondering the issue further, I think what I am trying to get at is the difference between memory overflows and CPU overflows. These seem like very different things, even though they are covered by the same error message. I'm going to elaborate on my question, because I suspect that you do know the answer, if only I can communicate it clearly.
My intuition tells me that the first example I posted uses a lot more memory than the second, precisely because it has to remember the numbers 0-98 before it gets to print anything. If the process continued for long enough, it would lead to a memory overflow, i.e. the stack would be required to store more numbers than its capacity allows. This is what the "stack overflow" counter is there to protect against.
In the second example, however, memory overflow doesn't seem to enter the picture. The number is forwarded to the print object and "forgotten" before it is sent back to the top. So if the second example were run without [moses] or any other safety limit, it would not eat up memory. It would, of course, lead to a CPU overflow, since the process would never terminate, and the computer would still crash if the "stack overflow" counter failed to intervene. But it would crash for a very different reason than the first example, i.e. because of a CPU overflow, not a memory overflow.
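If it helps to pin down what I mean, here is a rough analogy in Python rather than Pd. These are not my actual patches, and I realise Pd's message passing and its depth counter don't map exactly onto a scripting language's call stack; it's just the cleanest way I can think of to write the distinction down in text:

```python
def count_then_print(n, limit=99):
    """Analogy for the first example: nothing is printed until the
    deepest call is reached, so every pending number has to be held
    in a stack frame until the way back out (they come out in reverse
    order here, but the memory behaviour is the point)."""
    if n >= limit:
        return
    count_then_print(n + 1, limit)  # recurse first...
    print(n)                        # ...print only while unwinding

def print_then_count(n, limit=99):
    """Analogy for the second example: each number is printed and
    'forgotten' before the next step, so no data piles up. (Python
    still adds a frame per call because it has no tail-call
    elimination, so the analogy only concerns the data, not the depth.)"""
    if n >= limit:
        return
    print(n)
    print_then_count(n + 1, limit)

def print_with_loop(limit=99):
    """The [until]-style rewrite: same output, constant memory. Remove
    the bound and it would spin the CPU forever without ever growing
    the stack, which is the 'CPU overflow' I am trying to describe."""
    n = 0
    while n < limit:
        print(n)
        n += 1
```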
Does my reasoning here seem correct? Is there anything I've overlooked?
I'll now give some context to my question, because you're probably wondering why I care so much about these details. The context is Context, the library that I'm building. Context uses a lot of recursive functions similar to the second example I posted, and I'm trying to decide whether or not it's worth rebuilding them with [until]. That would be a lot of work, but there are two reasons why it might be worth doing: 1) it would make Context more efficient, and 2) recursion will lead to unwanted error messages, as @ingox pointed out.
I am currently much more interested in 1 than in 2. For whatever reason, unwanted error messages have not been a problem for me so far (perhaps because I haven't reached the limit). As for efficiency, the standard answer seems to be that it doesn't make an appreciable difference on modern computers, but I am in fact finding reasons to care about it. Context has a heavy loading procedure, involving several recursive algorithms of the type outlined in my second example. Load time matters to me because it can lead to audio dropouts, so I am trying to decide whether I need to replace my recursive systems with iterative, [until]-based ones. I ran a simple test* on one such system and found that the recursive version actually seemed to run faster than its iterative counterpart, contrary to what I had expected. This makes me inclined to stick with recursion, but I'd like to know if there is anything I have failed to consider.
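In case it clarifies what I mean by a "simple test", here is the shape of it sketched in Python. The routines are made-up stand-ins, not Context internals, and the numbers mean nothing; it's just the kind of crude timing I did:

```python
import time

def recursive_count(n, limit):
    # stand-in for a recursive subsystem: one call per element
    if n >= limit:
        return
    recursive_count(n + 1, limit)

def iterative_count(limit):
    # stand-in for the [until]-based rewrite: one loop pass per element
    n = 0
    while n < limit:
        n += 1

def time_it(label, fn, *args):
    # crude wall-clock measurement, averaged over repeated runs
    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000 / runs
    print(f"{label}: {elapsed_ms:.3f} ms per run")

time_it("recursive", recursive_count, 0, 900)
time_it("iterative", iterative_count, 900)
```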
*I can provide more details about this test if you like, but I'm worried that if I do so now it would lead to a memory overflow!