Hacker News

Whatever LLVM static analysis can be done to figure out the lifecycle of objects also applies to other GC algorithms.

ARC may be less likely to pause on release, but ARC doesn't compact free memory, and therefore fragmentation can cause object allocations (malloc) to become more expensive, possibly leading to pauses.

If you're willing to accept a non-compacting GC (ARC doesn't compact either), then there are low-pause GC algorithms out there that give pretty good bounds (e.g. 5ms pauses).

I'd like to see actual benchmarks of both ARC and say, mark-and-sweep on a mobile device rather than speculation and opinion, and both benchmarks must get the same static compilation treatment. That is, if escape analysis tells you that the lifetime of the object is bounded by the current stack frame, then allocate it on the stack, and don't penalize the non-reference-counted GC by allocating 100% of everything on the heap.
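For what it's worth, HotSpot's JIT already does a version of this. A minimal, hypothetical sketch of the kind of allocation escape analysis can eliminate (class names are made up for illustration):

```java
// The Point allocated in addPoints() never escapes the method, so a
// JIT with escape analysis (e.g. HotSpot's) can scalar-replace it:
// the fields live in registers/stack and no heap allocation happens,
// meaning the GC is never charged for it.
public class EscapeDemo {
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    static int addPoints(int a, int b) {
        Point p = new Point(a, b); // lifetime bounded by this stack frame
        return p.x + p.y;          // eligible for scalar replacement
    }

    public static void main(String[] args) {
        System.out.println(addPoints(2, 3)); // prints 5
    }
}
```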



ARC may be slower than a good GC, but what it has is deterministic behaviour. A 10ms pause could be tolerable if you know when it will happen.

A fair few years ago I worked in the murky world of J2ME games, and a common pattern was byte[] _variables = new byte[1024] so you could ensure there were no GC pauses.
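For anyone who never suffered through J2ME: the trick was to allocate everything up front and reuse it, so the collector had nothing new to find mid-game. A rough sketch of the idea (names hypothetical):

```java
// Preallocation pattern: the scratch buffer is created once at class
// load, then reused every frame, so gameplay code generates no garbage
// and the GC has no reason to pause.
public class FrameBuffers {
    // allocated once, up front, instead of "new byte[1024]" per frame
    private static final byte[] scratch = new byte[1024];

    static int sumFrame(byte[] frameData) {
        int n = Math.min(frameData.length, scratch.length);
        System.arraycopy(frameData, 0, scratch, 0, n); // reuse, don't allocate
        int sum = 0;
        for (int i = 0; i < n; i++) sum += scratch[i];
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumFrame(new byte[]{1, 2, 3})); // prints 6
    }
}
```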


> A fair few years ago I worked in the murky world of J2ME games, and a common pattern was byte[] _variables = new byte[1024] so you could ensure there were no GC pauses.

That's not the same as ARC, though. The Java equivalent to ARC would be something like having an array of volatile reference counts to go with your variables and fiddling with those at most accesses.
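To make that concrete, a hypothetical sketch of what hand-rolled reference counting in Java would look like, roughly the bookkeeping ARC inserts for you at compile time:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Manual reference counting: every retain/release touches an atomic
// counter, and the cleanup action runs deterministically the moment
// the count hits zero -- no collector involved, but every access pays.
public class RefCounted<T> {
    private final T value;
    private final AtomicInteger count = new AtomicInteger(1);
    private final Runnable onFree; // runs when count reaches 0

    public RefCounted(T value, Runnable onFree) {
        this.value = value;
        this.onFree = onFree;
    }

    public T retain() {
        count.incrementAndGet();
        return value;
    }

    public void release() {
        if (count.decrementAndGet() == 0) onFree.run();
    }
}
```

The deterministic `onFree` call is the upside being argued for; the atomic increment/decrement on every handoff is the cost a tracing GC avoids.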


Not the same, but they had the same problem - GC pauses when it likes, not when you like. If you know calling move_hero() takes 23ms, 15 of which is ARC you can plan for that. You can't plan for move_hero() taking 7ms, except when GC happens.


Depending on the VM, Java developers can plan where the pauses happen. One way they do that is by object pooling, another is by using DirectBuffers/off-heap memory. A third way to do it is with scoped heaps.
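A minimal sketch of the first two techniques (class names hypothetical): a pool recycles instances instead of allocating fresh ones, and a direct ByteBuffer lives outside the GC-managed heap entirely.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;

public class BulletPool {
    public static final class Bullet { int x, y; }

    private final ArrayDeque<Bullet> free = new ArrayDeque<>();

    public Bullet acquire() {
        Bullet b = free.poll();
        return (b != null) ? b : new Bullet(); // allocate only when the pool is cold
    }

    public void release(Bullet b) {
        free.push(b); // recycle: no new garbage for the collector
    }

    public static void main(String[] args) {
        // Off-heap memory: the buffer's storage is outside the Java heap,
        // so the GC neither scans nor moves it.
        ByteBuffer offHeap = ByteBuffer.allocateDirect(1 << 20);
        System.out.println(offHeap.isDirect()); // prints true
    }
}
```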

It's not like tons of games haven't been shipped with non-ref-counting GCs. Minecraft is the most famous, but games on Unity3D/Mono can also have a non-ref-counting GC, and Lua, which is shipped in tons of game engines, uses a classic tracing GC.

Regardless of whether you are using automatic GC, or if you are using malloc/free, you can't write a high performance game without carefully working around memory issues. Even C/C++ games like Max Payne have shipped with frame hiccups caused by poor malloc/free behavior that had to be fixed with custom allocators.

If you're writing a game, you have to pay attention to what you're doing. But do non-framerate-limited games and regular apps need ref-count GC? I suggest no, they do not.


> But do non-framerate-limited games and regular apps need ref-count GC? I suggest no, they do not.

Practically the first piece of advice usually given when someone asks "how do I make my Android app non-laggy" is to avoid allocation wherever possible; even a single frame drop due to a GC pause when the user is scrolling a list, say, can be noticeable.


Android's GC is not optimal/best of breed. ART is introducing much lower pause times for Android AFAIK.

Low-pause GC is possible. See http://openjdk.java.net/jeps/189 or Azul's Zing (absolutely pauseless GC, or so they claim).


Azul C4 uses a custom kernel extension. It's not really a general purpose GC for user-level apps. (There are kernel-extension-free variations of it, but they suffer from reduced throughput over the HotSpot GC.)


That's true, but on a mobile device you can control the kernel, and it may be, from a user experience point of view, that reducing pause latency is better than high throughput. I dunno, has anyone ever considered using Azul-like tricks on mobile kernels? Does ARM have the required HW to support it efficiently?

What's FirefoxOS doing for its GC?



