I guess that, having been part of the Java ecosystem for a long time, it gets tiring to have outsiders (in general, not referring to you) constantly conflating Java with whatever happens to ship with their PC, as if C were defined by GCC.
As for the actual question: naturally, having value types helps reduce GC pressure. In Java's case this can be helped by trying out Graal or other JVMs that do a better job at escape analysis than HotSpot. Alternatively, although it is arguably cheating, one can use the language extensions for value types from either Azul or IBM.
In any case, when inline classes (aka value types) arrive, Java will easily be able to do the same as Go here.
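To illustrate the point about GC pressure, here is a sketch using today's plain Java records (since Valhalla's value-class syntax is not yet final, this example is illustrative only and the names are made up): a small wrapper object allocated per iteration versus the same data held in primitive locals. With effective escape analysis the JIT may scalarize the record and eliminate the allocation, which is exactly what value types would guarantee rather than merely permit.

```java
// Sketch: GC pressure from small wrapper objects vs. plain primitives.
// Illustrative names; not from any real codebase discussed above.
class ValueTypeSketch {
    // Today a record is still a heap-allocated class; under Project Valhalla,
    // a value class like this could be flattened or kept entirely in registers.
    record Point(double x, double y) {}

    static double sumBoxed(int n) {
        double total = 0;
        for (int i = 0; i < n; i++) {
            Point p = new Point(i, i); // may allocate on every iteration
            total += p.x() + p.y();    // escape analysis *may* scalarize this
        }
        return total;
    }

    static double sumUnboxed(int n) {
        double total = 0;
        for (int i = 0; i < n; i++) {
            double x = i, y = i;       // no object, no GC pressure
            total += x + y;
        }
        return total;
    }

    public static void main(String[] args) {
        // Same result either way; the difference is purely allocation behavior.
        System.out.println(sumBoxed(1000) == sumUnboxed(1000));
    }
}
```

Whether `sumBoxed` actually allocates depends on the JIT: HotSpot's escape analysis handles simple cases like this, but gives up easily once the object flows across method boundaries, which is where Graal's partial escape analysis tends to do better.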
JIT compilation and de-optimizations certainly play a role, and are to blame for some of the performance impact; this can be further improved by using a JVM like J9, which allows PGO data to persist across runs.
Finally, while HotSpot has good defaults, tuning all the knobs is a science in itself, even with the help of tools like JRockit Mission Control and VisualVM, which opens the door for performance consulting.
The JVMs I mentioned are targeted at soft real-time deployment scenarios; as such, they offer APIs for low-level control of memory management, while also supporting AOT compilation with PGO, thus allowing low-level fine-tuning that is out of reach for regular Java developers on pure Java SE implementations.
Since most of the explanations you gave involve things like controlling value types and data layout, it sounds like you might agree with this statement: in languages with better control over allocation and data layout, tuning a garbage collector for latency over throughput can be a good idea, because time spent allocating matters less. Is that fair?
In this case, "tuning for latency over throughput" means "not having a generational GC". Whether this choice makes sense does not depend on how much a program allocates; rather, it depends on whether the generational hypothesis holds. The generational hypothesis, the observation that most objects die young, is one of the most powerful and consistent observations in the entire CS field. It certainly holds for .NET, which has a memory model very similar to Go's, and .NET accordingly uses a generational garbage collector.
It's not as simple as whether the generational hypothesis holds: there are real downsides to having a generational GC, chiefly that it requires a copying collector. Go heap values never move, which dramatically simplifies everything else (for example, other threads don't need to be paused to update heap pointers, reducing STW times).
You don't need stop-the-world pauses for generational GC. As long as your write barrier ensures that pointers to a young object are entirely local to the TLAB that the object is allocated in, you will never need to stop other threads to sweep the nursery.