Sure, but that's not relevant to the discussion here. The context was a response to the statement that Go pundits claim the garbage collector has low latency, but that it always has bugs that will be fixed in the next version. I was providing evidence that the garbage collector actually does achieve low latency right now.
Also, there are ways to solve your throughput problems if you have control over your allocations, but there are not ways to solve your latency problems. Indeed, even if you don't have control of your allocations, you can often run multiple copies of your program to increase your throughput, but multiple copies will not help your latency.
Also, tuning your latency does in fact help increase throughput, because if your process spends a significant amount of time in allocations, its latency suffers. It's not a zero-sum knob between the two, especially in the presence of humans that care about tuning the performance of their applications.
You don't have much control over allocation in Go (in fact, according to the spec, you have none at all). Language constructs allocate in ways that are not obvious.
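As a sketch of one such non-obvious case: converting a value to an interface type looks like a plain assignment, but on current gc compilers it typically heap-allocates. The names below are made up for the example, and the exact behavior (including the runtime's small-integer cache) is implementation-defined, not specified.

```go
package main

import (
	"fmt"
	"testing"
)

// sink is package-level so the compiler cannot optimize the stores away.
var sink interface{}

// box converts an int to an interface value. Nothing in the source says
// "allocate", but on current gc compilers this conversion heap-allocates
// the int (for values outside the runtime's small-integer cache).
func box(v int) interface{} { return v }

func main() {
	n := 0
	allocs := testing.AllocsPerRun(100, func() {
		n++
		sink = box(n + 1000) // hidden allocation: int boxed into an interface
	})
	fmt.Printf("allocations per interface conversion: %.0f\n", allocs)
}
```

`testing.AllocsPerRun` averages the allocation count over many runs, which makes hidden allocations like this easy to spot empirically even though the spec never mentions them.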
> Also, tuning your latency does in fact help increase throughput, because if your process spends a significant amount of time in allocations, its latency suffers.
I assume you meant to write "throughput" at the end of that sentence. But that's not how throughput is defined. Throughput isn't "how long does an allocation take", though that influences it; throughput measures how much time is spent in memory management in total over some workload. Optimizing for latency over throughput means you are choosing to spend more time in GC overall.
> You don't have much control over allocation in Go
Maybe you mean something different by "control over allocation"? Go does allow that; for example, the "The Journey of Go's Garbage Collector" talk has a good summary.
Specifically the sections about value-oriented programming are exactly that: Go allows the developer to avoid a lot of allocation by embedding structs, passing interior pointers, etc. Compared to Java or C#, it can have a much smaller number of allocations as a result.
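To make the value-oriented style concrete, here is a minimal sketch (the type and field names are hypothetical): embedding structs by value keeps the whole object in one contiguous block, so building it needs at most a single allocation, or none if it stays on the stack.

```go
package main

import "fmt"

// Header and Body are embedded by value in Message, so a Message is one
// contiguous block of memory rather than a graph of separately
// heap-allocated objects.
type Header struct{ ID, Flags uint32 }
type Body struct{ Payload [64]byte }

type Message struct {
	Header // embedded value: no separate heap object
	Body   // likewise
}

// The pointer-heavy alternative, closer to a typical Java/C# object
// graph, needs a separate allocation per field:
type PtrMessage struct {
	Header *Header
	Body   *Body
}

func main() {
	var m Message // a plain value; no allocation required here
	m.ID = 7      // promoted field from the embedded Header
	fmt.Println(m.ID, len(m.Payload))
}
```

Accessing `m.ID` through the embedded `Header` is an ordinary field offset, not a pointer chase, which is part of why this style reduces both allocation count and GC pressure.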
> Specifically the sections about value-oriented programming are exactly that: Go allows the developer to avoid a lot of allocation by embedding structs, passing interior pointers, etc.
To elaborate, for example, indirect calls cause parameters to those calls to be judged escaping and unconditionally allocated on the heap. Go style encourages frequent use of interfaces such as io.Reader. So in order to interoperate with common Go code, such as that of the standard library, you will be allocating a lot.
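A hedged sketch of that effect, with a made-up reader type standing in for something like io.Reader usage in the standard library: because the call goes through an interface, escape analysis generally cannot prove the buffer stays local, so the buffer is heap-allocated on every call. The exact outcome depends on the compiler version and on whether the call gets devirtualized.

```go
package main

import (
	"fmt"
	"io"
	"testing"
)

// zeroReader is a trivial io.Reader invented for this example.
type zeroReader struct{}

func (zeroReader) Read(p []byte) (int, error) {
	for i := range p {
		p[i] = 0
	}
	return len(p), nil
}

// A package-level interface variable keeps the compiler from
// devirtualizing the indirect call below.
var r io.Reader = zeroReader{}

var sink byte

func main() {
	allocs := testing.AllocsPerRun(100, func() {
		buf := make([]byte, 16)
		r.Read(buf) // indirect call: buf is judged escaping, so it is heap-allocated
		sink = buf[0]
	})
	fmt.Printf("allocations per call: %.0f\n", allocs)
}
```

Running the compiler with `-gcflags=-m` shows the same thing statically, reporting that `buf` escapes to the heap at the interface call site.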
> Compared to Java or C#, it can have a much smaller number of allocations as a result.
C# also allows you to embed structs within other structs and pass interior pointers. Java HotSpot has escape analysis as well, though it's less important for that JVM (and other JVMs), since HotSpot has a generational GC with fast bump allocation in the nursery.
As I've mentioned before, I also have a problem with the conclusion of this talk: that generational garbage collection isn't the right thing for Go. The problem is that nobody has tested generational GC in Go with its biggest practical benefit: bump allocation in the nursery. I'm not surprised that generational GC is a loss without that benefit.
> avoid a lot of allocation by embedding structs, passing interior pointers, etc
I can create a struct and take a ref to an interior field of that struct in C#, can I not? And if I wanted to badly enough, I could use unsafe code and take a pointer to the third byte of an interior field of that struct.
No, I meant what I wrote. The time spent in GC in total includes the time spent performing allocations. If you reduce the time spent in allocations, all else being equal, you both reduce latency and increase throughput. In that way, optimizing your allocation latency also increases your total throughput.
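As an illustration of that point (with hypothetical function names), here is a workload written two ways: one allocates a fresh buffer per call, the other reuses a caller-supplied buffer. Cutting the allocation helps the per-call latency and the total time spent in memory management at once.

```go
package main

import (
	"fmt"
	"testing"
)

// allocEach builds its result in a freshly allocated buffer every call.
func allocEach(n int) []byte {
	buf := make([]byte, 0, n)
	for i := 0; i < n; i++ {
		buf = append(buf, byte(i))
	}
	return buf
}

// reuse does the same work in a caller-supplied buffer: truncating to
// length zero keeps the capacity, so no new allocation is needed.
func reuse(buf []byte, n int) []byte {
	buf = buf[:0]
	for i := 0; i < n; i++ {
		buf = append(buf, byte(i))
	}
	return buf
}

var sink []byte

func main() {
	perCall := testing.AllocsPerRun(100, func() { sink = allocEach(1024) })
	shared := make([]byte, 0, 1024)
	reused := testing.AllocsPerRun(100, func() { sink = reuse(shared, 1024) })
	fmt.Printf("fresh buffer: %.0f allocs/op, reused buffer: %.0f allocs/op\n", perCall, reused)
}
```

On current compilers the fresh-buffer version reports one allocation per call and the reused version zero, so the same change improves latency and throughput together rather than trading one for the other.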