It is not intrinsic, but hard to avoid. Alternatives include:
- just allocate, never collect (not infeasible with 64-bit memory spaces, if you have lots of swap and can reboot fairly frequently, but bad for cache locality)
- garbage collect at provable idle times. Question is: when are those?
- concurrent garbage collect, and proof that it can keep up with allocations
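The first alternative can be made concrete with a bump (arena) allocator: hand out chunks from one big buffer, never free individual objects, and reclaim everything at once by dropping the arena or restarting. A minimal sketch in Rust (the `Bump` type and its API are illustrative, not any particular crate):

```rust
// "Just allocate, never collect": a bump allocator hands out slices from
// a fixed arena and never frees individual allocations. Memory comes back
// only when the whole arena is dropped -- or the process reboots.
struct Bump {
    buf: Vec<u8>,
    offset: usize,
}

impl Bump {
    fn new(capacity: usize) -> Self {
        Bump { buf: vec![0; capacity], offset: 0 }
    }

    // Allocate `n` bytes by advancing a cursor. Returns None once the
    // arena is exhausted -- the point where a real system using this
    // scheme would have to restart.
    fn alloc(&mut self, n: usize) -> Option<&mut [u8]> {
        if self.offset + n > self.buf.len() {
            return None;
        }
        let start = self.offset;
        self.offset += n;
        Some(&mut self.buf[start..start + n])
    }
}

fn main() {
    let mut arena = Bump::new(16);
    assert!(arena.alloc(8).is_some());
    assert!(arena.alloc(8).is_some());
    // No individual frees, so eventually allocation fails outright.
    assert!(arena.alloc(1).is_none());
}
```

Allocation is a pointer increment, which is why this is fast; the cache-locality complaint in the bullet above comes from the fact that long-lived and dead objects stay interleaved in the arena forever.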
Finally, you could try and design a language where one can (often) prove that bar can be mutated in place in expressions such as
bar = foo(bar,baz)
(That's possible if you can prove there's only one reference to bar at the time of the call)
(Rust's memory model may help here)
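In Rust this works out because `foo` can take `bar` by value: ownership guarantees there is exactly one reference at the call site, so the callee may mutate the existing buffer and hand it back without a fresh allocation. A small sketch (the function `foo` is hypothetical):

```rust
// Because `foo` takes ownership of `bar`, the compiler knows no other
// reference exists, so the vector's heap buffer can be mutated in place
// and returned by move -- no copy, no new allocation.
fn foo(mut bar: Vec<i32>, baz: i32) -> Vec<i32> {
    for x in bar.iter_mut() {
        *x += baz;
    }
    bar // moved back to the caller; same heap storage
}

fn main() {
    let mut bar = vec![1, 2, 3];
    let before = bar.as_ptr();
    bar = foo(bar, 10); // the `bar = foo(bar, baz)` pattern from above
    let after = bar.as_ptr();
    // The heap buffer address is unchanged: the update happened in place.
    assert_eq!(before, after);
    assert_eq!(bar, vec![11, 12, 13]);
}
```

The borrow checker enforces the 'one reference' property statically, which is exactly the proof obligation the hypothetical language would need to discharge.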
I am not aware of any claims that it is possible to write meaningful systems based on this model that do not have to allocate new objects regularly. The problem is that, to guarantee the 'one reference' property, you have to make fresh copies of objects all the time, and that defeats the reason you wanted the 'one reference' rule in the first place.