Why Go and Not Rust? (kristoff.it)
471 points by kristoff_it on Sept 16, 2019 | hide | past | favorite | 477 comments


Not disagreeing with the sentiment, but I dislike the word "simple" used to describe languages, because it has two meanings: simple = "consists of few parts" and simple = "easy to use", and these are not the same.

Certain things in Go are not easy (not simple), because Go has few language features (is simple).

Go definitely has a lower barrier to entry and is very productive for a certain size and complexity of project. However, as you start pushing beyond the scale Go is designed for, Go becomes less simple to use. To appreciate things like generics, non-nullable types, and no shared mutable state, you have to be working on a problem that requires them; only then do these extra features pay off and make development and maintenance easier.


Wherever I used the word `simple`, I did not mean easy. You raise a good point and I agree that simplicity != ease. I think it's a well-known point of discussion from Rob Pike and, although less authoritative, it's also the focus of my first ever blog post [1] :)

That said, I wanted to talk about Go for enterprise development, and I did explicitly make the point that some abstractions are, in my opinion, detrimental to the reality of enterprise software, and I suspect generics might be one of them. It's a big topic and I honestly don't know the definitive answer, but I just wanted to point that out. I do agree that non-nullable types are good and a big hole in Go's repertoire.

[1] https://kristoff.it/blog/simple-not-just-easy/


Have you gone back and read that post you linked recently?

It seems to me like the final section has a lot of bad things to say about the methodology of a Go developer: huge, bloated toolchains required to make progress; code volume rather than appropriate abstraction. To me, your MS Word vs LaTeX graph sure seems like it's damning "easy" languages like Golang in favor of other languages with more powerful abstractions.

You also assert that Golang is easier to learn than other languages. I don't think you've really got a leg to stand on there. It's "easy" for folks doing service development because the arbitrary decisions made by Golang were made from the perspective of someone experienced at writing web services. If you already know C and a bit of Python, Golang cherry-picks a lot of good stuff, but if you don't (or you're not writing a bunch of web services that essentially do nothing but punt to C frameworks and validate strings in request headers), then Go's going to struggle, and folks have pointed this out.

It's pretty surprising to me that you write about how generics are the death of enterprise software when so much quite-usable software uses them. Why do you simply get to forget the existence of the huge body of Java work and instead assert it's bad? Android uses generics in many cases where appropriate, and it's one of the better UI kits available these days.

Even the Actor methodology you're citing as part of Golang as good is actually a fairly sophisticated abstraction with lots of implications for the runtime and execution order. Why is that specific programming abstraction given a pass because of its benefits, but writing a generic linked list is going to be the death of your programming organization?

You also defend the structuring of many enterprise groups even as you suggest that they lack training, refuse to pay technical debt, and place unreasonable burdens on developers. You seem to have just accepted this and said Golang helps you be more complicit in this mode of operation that you also seem to suggest is somewhat bad. Why would we want to pander to a methodology that asks junior developers to proceed without training and places focus on process and hyper-specialized domain experts rather than clear communication, sound architecture and sustainable velocity?

I'm quite confused how you can hold both these opinions at the same time. It seems like they're contradictory.


> "easy" languages like Golang in favor of other languages with more powerful abstractions.

Go is easier to learn than C# (I'll omit Java in this comment as I have more experience with the former), but that doesn't make Go an "easy language". The name itself implies that mastery doesn't come immediately.

The powerful abstractions that you mention, including generics, are indeed good things, but pardon me, I suspect you've never seen how badly and easily they can be abused in certain environments. You've probably seen the Factory<Factory<NaturalNumbersFacade>> joke somewhere; real-life enterprise software is sometimes like this, but unironically.

> Generics are the death of enterprise software

That's hyperbole; I never made such a strong statement.

>Even the Actor methodology you're citing as part of Golang as good is actually a fairly sophisticated abstraction with lots of implications for the runtime and execution order. Why is that specific programming abstraction given a pass because of its benefits, but writing a generic linked list is going to be the death of your programming organization?

Goroutines and channels are simpler than the threaded multiplexing that C# does with its stackless coroutines. My argument is mainly related to the many ways you can mess up async/await in C# vs the 2-3 ways you can deadlock in Go. I link to a talk, an image and a blog post related to the subject.
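For what it's worth, the model being compared can be sketched in a few lines of Go (a minimal illustration of my own, not taken from the linked post); the main deadlock hazard is noted in the comments:

```go
package main

import "fmt"

// result demonstrates the channel handoff: the goroutine computes and
// sends; the caller blocks on the receive until the value arrives.
// The classic deadlock hazard is equally small to state: a send on an
// unbuffered channel with no receiver (or vice versa) blocks forever.
func result() int {
	ch := make(chan int) // unbuffered: send and receive must rendezvous
	go func() { ch <- 21 * 2 }()
	return <-ch
}

func main() {
	fmt.Println(result()) // prints 42
}
```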

> You also defend the structuring of many enterprise groups even as you suggest that they lack training, refuse to pay technical debt, and place unreasonable burdens on developers.

I don't defend or encourage this, but that's what happens in real life. It's the result of many factors at play, some of which I described in the post and some of which even I don't understand. If I did, I would be much richer :)

You can refute my assessment of reality, but "being complicit" is not exactly appropriate when 80% of the enterprise consultancy jobs out there are like this.

> Why would we want to pander to a methodology that asks junior developers to proceed without training and places focus on process and hyper-specialized domain experts rather than clear communication, sound architecture and sustainable velocity?

You can't change how things work until you understand why they work the way they do (or at least you can't reliably change the world unless you do). Rejecting the current state of things is step 0 of N, and I've learned from experience that change in big systems comes incrementally; you can't "disrupt all the things". So Go is (in my opinion) a step in the right direction.

I would recommend you talk with somebody you know that has this type of work experience, they will be able to convey to you how these things work better than I can, probably.


> The powerful abstractions that you mention, including generics, are indeed good things, but pardon me, I suspect you've never seen how badly and easily they can be abused in certain environments. You've probably seen the Factory<Factory<NaturalNumbersFacade>> joke somewhere; real-life enterprise software is sometimes like this, but unironically.

Isn't that more of a criticism of enterprise software than of generics? If they can write ugly generic code, they sure as hell can write ugly Go code as well. They're just different programming styles; some like one, others prefer the other. Even when you code in ASM, you'll probably end up writing generic code down the line - you'll just be managing it manually rather than having a compiler, preprocessor, or templating engine do it for you. It's a preference, and I say to each their own. Also, the whole moralizing, holier-than-thou simplicity and anti-abstraction talk is getting kind of annoying, but then again, there are lots of moralizing people in the Rust camp who are just as annoying.


> Isn't that more of a criticism of enterprise software than generics?

Absolutely yes! My whole argument is restricted to (how I experienced) enterprise development.


Isn't this just post-hoc rationalization though?

People complain that Rustaceans are insistent to the point of being obnoxious about Rust. But I insist that Gophers turn post-hoc rationalization into an art form. Things are bad because Go didn't do them, if they were good Go would have.


> You've probably seen the Factory<Factory<NaturalNumbersFacade>> joke somewhere, real live enterprise software is sometime like this, but unironically.

I'm sorry, this is a terrible argument. That's definitely not a common usage of Generics.

The common enterprise OOP idiom that is mocked is NaturalNumbersFacadeFactoryFactory, which uses no Generics at all, leading to a large number of classes.

Ironically, if one used generics to replace factories like you did in your example, you would be able to replace hundreds or thousands of Factory or FactoryFactory classes in an enterprise application with a single Factory<T> class. Of course, that would probably be pointless. There's a reason multiple factories exist in a program, even though there are ways to replace them with something simpler.

I get it that Go programmers don't want the language to be complex, but Generics themselves don't have to be complex. They solve a lot of problems in a simple way and are MUCH easier to use than ad-hoc generics made using interface{} + Reflection.
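To make that last point concrete, here's a minimal sketch (my own, assuming pre-generics Go as of 2019) of the interface{}-based "ad-hoc generic" style: the element type is erased, so every caller has to assert on the way out, and mistakes only surface at runtime:

```go
package main

import "fmt"

// MapAny is an interface{}-based "ad-hoc generic" map: it works for any
// element type, but only because the type is erased entirely. Callers
// lose static typing and must type-assert the results themselves.
func MapAny(xs []interface{}, f func(interface{}) interface{}) []interface{} {
	out := make([]interface{}, len(xs))
	for i, x := range xs {
		out[i] = f(x)
	}
	return out
}

func main() {
	nums := []interface{}{1, 2, 3}
	doubled := MapAny(nums, func(x interface{}) interface{} {
		return x.(int) * 2 // runtime assertion; a string in nums would panic here
	})
	fmt.Println(doubled) // [2 4 6]
}
```

With a generic `Map[T, U any]` the assertion disappears and the compiler does the checking, which is the "MUCH easier to use" claim above.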


It definitely is common with generics; it happens all the time. Here's some production code I've worked on recently:

     new TypeReference<FooResponseCollectionResource<MemberUpdate>>() {}
I like generics, but I don't love this sort of thing, and it's not even a misuse of them.


Many languages do support type aliases. In C# you would do

    using NiceType = TypeReference<FooResponseCollectionResource<MemberUpdate>>;
    // ....
    new NiceType()


My point was that the commonly mocked VisitorObserverFacadeFactoryFactory Java idiom doesn't normally use generics at all.

Of course you can mix generics with terribly named classes and OOP patterns, but that's hardly a problem with generics themselves.


What's the alternative? Using an "untyped" variable?

But then you lose type checking, and some people don't like to lose type checking...


Do not use patterns that require you to write code like that.


You never have cases where you have a list of lists? Or do you introduce a pointless wrapper type for the inner lists to "hide" the generics?


That's the answer for me.

The problem in those cases is always the abuse of often-obsolete OOP patterns and misnamed classes.


Or use type aliases that are helpful.


> I'm sorry, this is a terrible argument. That's definitely not a common usage of Generics.

It is very common indeed in the three large enterprise Java houses I've worked in over the past 15 years. FizzBuzzEnterpriseEdition is the reality in all three of those places. Not only is it reality, it is a requirement.


FizzBuzzEnterpriseEdition doesn't implement any generic classes.

You're just making my point for me.


> Go is easier to learn than C# (I'll omit Java in this comment as I have more experience with the former), but that doesn't make Go an "easy language". The name itself implies that mastery doesn't come immediately.

I'm using "easy" in the sense of "easy" vs "simple". I think that the modern "commonly in-use" parts of C# are about the same size as Golang, to my sense of scale.

> , but pardon me, I suspect you've never seen how badly and easily they can be abused in certain environments. You've probably seen the Factory<Factory<NaturalNumbersFacade>> joke somewhere; real-life enterprise software is sometimes like this, but unironically.

Pardon me if I think it's profoundly disingenuous for you to conflate factory patterns (which can exist in literally any type system and language) with Generics, and let me again positively beg for pardoning if I come across thinking you don't really understand the Golang argument for generics if this is your go-to example.

> That's hyperbole; I never made such a strong statement.

Perhaps, but you have lumped generics in with a group of features and said, "The Go community regards as anti-patterns many abstractions regularly employed by Java / C#", and essentially intimated that generics are almost never good.

Further, your rhetoric carefully partitions everyone else's abstractions as risky anti-patterns while below you carefully rationalize Go's abstractions as in fact good. Your argument there essentially boils down to taste and fear.

> Goroutines and channels are simpler than the threaded multiplexing that C# does with its stackless coroutines. My argument is mainly related to the many ways you can mess up async/await in C# vs the 2-3 ways you can deadlock in Go. I link to a talk, an image and a blog post related to the subject.

No, they're not universally so. Firstly, actors and threads define equivalent systems [0], they simply have different tradeoffs within that space. There are some constructs where actors are easier (e.g., when a process maps well to an individual loop consuming a mailbox) and some where they simply are not (e.g., when spinning over shared memory and hoping to pull out a copy to another space). What's more, there's an awful lot of progress on the shared space model by attacking the memory coherency problem.

It's quite possible to build systems that are as resistant to deadlock as Golang using threaded models. They're also amenable to static analysis. There's also a very large and useful body of research on using structures that are unopinionated about the order that they receive updates in (CRDTs are a good place to start here), making the strict linearization of actors unnecessary and even a performance bottleneck sometimes.

You're either unaware of it, or you're uninterested. I don't know, but if it is the former then you should probably keep up with what's going on there.

> I don't defend or encourage this, but that's what happens in real life. It's the result of many factors at play, some of which I described in the post and some of which even I don't understand. If I did, I would be much richer :)

You are encouraging it though. You're saying we should use tools that are designed to accommodate it. That's literally baking this mode of operation into our automation at a fundamental level. And as you've implied, once there it often takes monumental effort to get it dislodged.

> You can't change how things work until you understand why they work the way they do

That's my line. The way it works is people suggesting that there is no other way it could work, and then baking these assumptions deeply into their corporate structure.

> I would recommend you talk with somebody you know that has this type of work experience, they will be able to convey to you how these things work better than I can, probably.

Hi. I'm Dave. I'm a SRM at Google right now but I've also been a Director for Capital One, worked in software at numerous companies including Microsoft and about a dozen startups in technical and advisory capacities, and founded (and sold) my own startup. I've been in management in one capacity or another for nearly a decade.

And a lot of my time spent when I'm not working directly on projects is advocating for developers to be more empowered, receive more training, and have the power to actually set and run a sustainable pace, even if that means a slow start to burn away the technical debt in place.

But thanks, I'll keep that in mind.

[0]: "On the Duality of Operating System Structures", by Lauer & Needham http://web.cecs.pdx.edu/~walpole/class/cs533/papers/duality7...


> Your argument there essentially boils down to taste and fear.

I suspect the larger factor may simply be familiarity. It is very rare to see a balanced piece about some programming language - let alone one about two programming languages - where the author has an equal and substantial amount of experience with both and is really able to compare them on their merits.


You can mess up async/await in C# in exactly 0 ways. Calling .Result is just a bad practice and should never be done.


Uhh, I've messed it up a lot more ways than zero, and it's easy to do. In fact, doing the exact same thing in a console app and a Windows Forms app will result in a deadlock in one of them, for absolutely no obvious reason.

Async and await in C# may be wonderful to some, but they are horribly broken to others.


I agree there are caveats; I experienced that too.

I disagree they're broken.

C# 7.1 introduced async main. No reason to call Task.Result in console programs.

Even before that feature arrived, the Visual Studio debugger could resolve that in literally a minute. Press F5, reproduce the deadlock, and you'll see exactly what's locked and why.


I also don't understand your argument on simplicity. You make mention at multiple points of "await". You don't like it; we get it. In what way is it not simple? And how does it follow that Go is simpler (using the same definition)?


I actually quite like it. C# has a peculiar async runtime that when misused tends to deadlock, and that's the entirety of my issue with it... and the fact that async/await is a half baked monad in most languages, but I say that with love (mostly) :)


I almost completely agree with everything in the article except the word "Enterprise". It might just be me, but "good for enterprises" can mean a lot of things.

We could say it's (probably) good for web and services enterprises, but for a different kind of enterprise it can vary, depending on their business model and requirements, as the significance of its features changes.

Tl;dr: all languages have a place (IMHO); Go is a simpler language with reasonable value in the current industry for various reasons.


> Certain things in Go are not easy (not simple), because Go has few language features (is simple).

Too true, I touch on that in a post I made here as well. In Go I often found myself having to reuse logic in functions that, in Rust, could be hidden behind iterators or custom iterator implementations.

While the Rust version is definitely more complex if you look at the implementation of the iterator, in practice you are just using the iterator - and that ends up meaning the problem you're solving keeps locality and is easier to reason about. In my experience, at least.


I wonder if it is actually more complex. Emergent complexity is a thing, and one only needs to look at Go (the game; pun definitely (not) intended). The rules are dead-simple; you can learn them in under a minute. But to actually play the game well, you have to learn a lot more. You have to memorize openings, learn to identify and utilize patterns, etc. There is a lot of "meta-play" going on, and you could say that high-level Go play is full of abstractions. In fact, you won't have a chance without the abstractions. If you only see individual stones rather than shapes, you're done for.

I think the same applies to languages like Go. You don't have generics, but if you want to effectively re-use code, you need abstractions. The difference is that in languages like Rust, you have predefined and marked abstractions, while in Go, you define them yourself and need to recognize them yourself. Granted, Go also makes certain kinds of abstractions difficult or impossible to manually define within the language, so you may end up with copy-pasted code, but I imagine even that will be manually abstracted through naming and such.

The Rust iterator is a pretty simple abstraction in my opinion. It's essentially a lazy list. You see an iterator used, you know what you're getting. You see the map function used, you know it's the same as a for loop over all the elements. A fold/reduce is just a for loop over all elements with a result variable that gets returned. I really don't think using iterators is any more complex than a for loop, and with a for loop, you actually have to look through it to identify the pattern used and find out what it does. I'd say having to manually check for patterns is more work, but I couldn't say which one is more complex; maybe neither is.


You can encapsulate iteration logic in Go.

I do it in my libraries when warranted.

The pattern is:

  v := NewIteratedValue()
  for v.Next() {
    item := v.Item
    // process item
  }
  if v.Err() != nil {
     // process error
  }
A variation of this pattern exists in the standard library and in third-party libraries.
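For a concrete standard-library instance of this shape, bufio.Scanner follows the same protocol, with Scan standing in for Next(), Text for v.Item, and Err checked once the loop ends; a minimal sketch:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// collect drives a bufio.Scanner to completion, the standard library's
// version of the pattern above.
func collect(s string) ([]string, error) {
	sc := bufio.NewScanner(strings.NewReader(s))
	var lines []string
	for sc.Scan() { // "for v.Next()"
		lines = append(lines, sc.Text()) // "item := v.Item"
	}
	return lines, sc.Err() // "if v.Err() != nil"
}

func main() {
	lines, _ := collect("a\nb\nc")
	fmt.Println(lines) // [a b c]
}
```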

I know that you meant implementing some sort of iteration protocol natively supported by the language, so that you can write:

  for v { ... }
vs:

  for v.Next() { }
But you ended up making a much stronger, and untrue, claim: that one can't write re-usable iteration logic in Go.


That's an apples-to-oranges comparison because your iterator interface here isn't generic like Rust's. It isn't reusable.

Your iterator has to be specific to a set of types: "v.Item" in your example is always of some specific type. Using an untyped interface{} would be worse than any alternative, because every single use would require type switches and casting; you lose compile-time type safety.

Because of this, even if you have a local iteration system, it isn't composable beyond your package and its types. Given this (simpler) interface:

  type Iterator interface {
    Next() (Foo, error)
  }
...then I can write a map function:

  func Map(it Iterator, mapper func(Foo) Foo) Iterator
...but I cannot write a generic one that works with anything other than Foo. And if I need this:

  func MapFooToBar(it Iterator, mapper func(Foo) Bar) Iterator
...then I have to write it just for that purpose.

That's why there's no "iterator library" for Go, like there is with Rust.
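A minimal sketch of what that monomorphic Map looks like in practice (Foo, sliceIter, and mapIter are hypothetical names, and Next here returns a bool instead of an error for brevity) - every element type would need its own copy of all of this:

```go
package main

import "fmt"

// Foo is the one type this whole iterator stack is welded to.
type Foo struct{ N int }

type Iterator interface {
	Next() (Foo, bool)
}

// sliceIter adapts a []Foo to the Iterator interface.
type sliceIter struct {
	xs []Foo
	i  int
}

func (s *sliceIter) Next() (Foo, bool) {
	if s.i >= len(s.xs) {
		return Foo{}, false
	}
	x := s.xs[s.i]
	s.i++
	return x, true
}

// mapIter lazily applies f to each element of src.
type mapIter struct {
	src Iterator
	f   func(Foo) Foo
}

func (m *mapIter) Next() (Foo, bool) {
	x, ok := m.src.Next()
	if !ok {
		return Foo{}, false
	}
	return m.f(x), true
}

// Map composes, but only over Foo; a Foo-to-Bar version means a
// separate, hand-written copy of everything above.
func Map(it Iterator, f func(Foo) Foo) Iterator {
	return &mapIter{src: it, f: f}
}

func main() {
	it := Map(&sliceIter{xs: []Foo{{1}, {2}}}, func(f Foo) Foo { return Foo{f.N * 10} })
	for x, ok := it.Next(); ok; x, ok = it.Next() {
		fmt.Println(x.N) // 10, then 20
	}
}
```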


> But you ended up making much stronger and untrue claim that one can't write re-usable iteration logic in Go.

Not at all. Are you telling me that `v.Next()` can be written in a reusable way for different data structures? Nonsense.

`v.Next()` will have a single, bound return type. So at that point you're re-implementing iterators for anything you want to iterate on.

Furthermore, let's say I want to `v.Next().Map()`. How can I write a Map function in a reusable way? The entirety of your iterator chain would have to be custom-implemented for the one data structure you're using. Or you throw your entire type information away and use `interface{}`.

With Go, the limit of reusability is hand-writing the entire iterator implementation per type, which is hardly reusable in my view. I touched on this when I mentioned creating separate functions for these patterns. That's all you're doing here - creating custom methods that make something look like, but not behave like, an actual iterator. No Map support, no Filter, no... anything. All you made was a loop, and a hand-written one.

I'm not trying to be overly negative here. However saying Go has an iterator pattern is imo strongly misleading.


re: Simple

It took me a number of years running a tech agency to realize that the words "simple" and "easy" are bad. In my job I now correct myself, and our project managers, from using the word "simple" to the word "straightforward". I find that "simple" connotes a feeling of the task/work being easy and thereby quick to get done. For an agency, the term "simple" speaks not only to risk and ease of completion but also to budget, as a function of time.

Moving 10,000 bricks from point A to B is likely simple or easy, but it takes time. If you're not careful, a client can think "oh, it's just moving 10K bricks 5 feet over; they should automagically be able to do this since John said it was 'simple'". What I intended to impart was that I know how to do it, no solutioning needed, we have the skills and resources to get it done, and we can probably get moving on this quickly. Now I say something more like "Moving 10,000 bricks from A to B is straightforward for my team; when do you want us to get started?"


Rich Hickey describes simple as unbraided - e.g. a class is identity, state, and schema all braided together.

And easy as close by and accessible, i.e. `npm i latest-framework` might be easy but not simple.

https://www.infoq.com/presentations/Simple-Made-Easy/


This presentation had an outsize influence on my professional development as a programmer. If I've watched it once (and I have), I've watched it a dozen times.

edit: The "Limits" slide (go to 12:30 in the vid) is one that I really internalized early on. And looking at it again years later, the principles from that slide absolutely guide my app development:

- We can only hope to make reliable those things we can understand

- We can only consider a few things at a time

- Intertwined things must be considered together

- Complexity undermines understanding


isn't the same idea exactly covered by the term "(de)coupled"?


It can include decoupling, but no it's not synonymous.


> However, as you start pushing beyond "scale" Go is designed for,

Go was designed at Google to be used internally at Google, so frankly it's hard to imagine commonly pushing past the "scale" Go was designed for.


I assume the GP meant "scale of codebase complexity" – which is orthogonal to whether something is "web scale" (ie. everything Google does).


Isn't the Google codebase infamously a monorepo with billions of loc? That seems like it fits "scale of codebase complexity" as well as "web scale"


It's a monolithic repo, but not a monolithic codebase. It contains many many separate projects, libraries, and applications. There's a lot of code reuse, but it's not like you have to wrap your head around the whole thing.


Yes, but much (if not most) of that codebase is in C++.


How do you check out a project like that?


Go was designed for Google-scale and Google-style logs analysis, for which it has been very successful. The fact that it comes with a decent HTTP and RPC server is the inevitable consequence of the fact that every process at that company is expected to present HTTP and RPC control and diagnostic services in addition to its primary purpose.


>Go was designed at Google to be used internally at Google

That's not true, except in a pedantic way. Go was "designed at Google to be used internally at Google" only in that:

(a) a few people at Google, on their own, designed a language (mostly based on an older Plan 9 language some of them had helped build) - not at the request of Google execs, nor as an explicit company-mandated project to create a language to solve Google's problems. It was almost one of those "20% time" things.

(b) these people added the features that they thought would be nice for programming Google-style stuff. Those choices were based on their ad-hoc (and quite idiosyncratic) intuition and personal experience, not on special research into programming at scale or on involving the company at large.

Go was designed at Google, but not "from Google", if this makes sense. A few Googlers building a language on their own initiative, among other work, is not the same as Google (i.e., some higher-ups) saying "we need a language of our own that's a match for our developers' challenges" and the company devoting resources to it.

Google has two language projects it has really put money behind: JavaScript (as V8) and Dart. It built top-notch specialist teams, spent lots of money on promotion and branding, built IDEs and developer tools for both, etc.

Apple has had Obj-C and now Swift like that, MS has C#, etc. Those are language projects with a big weight of the companies behind them.

Golang was not like that, but, according to all official accounts and recollections, a grassroots project, by a small team:

"Robert Griesemer, Rob Pike and Ken Thompson started sketching the goals for a new language on the white board on September 21, 2007. Within a few days the goals had settled into a plan to do something and a fair idea of what it would be. Design continued part-time in parallel with unrelated work. By January 2008, Ken had started work on a compiler with which to explore ideas; it generated C code as its output. By mid-year the language had become a full-time project and had settled enough to attempt a production compiler. In May 2008, Ian Taylor independently started on a GCC front end for Go using the draft specification. Russ Cox joined in late 2008 and helped move the language and libraries from prototype to reality."

It was never officially intended to be "Google's development language" or had a major push from Google. Since then, and after the first few years, Google seems to have devoted more money and time to Go, and several Google internal projects have adopted it, but Google is fine with C++, Java, and Python as well.


It seems like you're setting up a false dichotomy. Researchers working at a company are often thinking about solving company problems, even if their particular solution doesn't have management support yet. Getting official approval often involves advocates selling their particular solution to management.

In particular, language designers typically have concrete problems in mind, even if they're inventing a general-purpose language. It's enough to say that Go's designers expected to use their new language at Google, so it needed to work well in Google's environment and solve some problems that weren't currently being solved well in that environment. If it didn't work at Google, then they wouldn't have succeeded at their original goal.

While it's not the most popular language at Google, Go is officially supported for server-side projects (communicating via RPC with many other internal servers), and has been for some time. That's a high bar that few languages meet, and about as official as it gets. If it didn't succeed internally then the Go team probably wouldn't have had consistent management support and stable funding over many years.

The other languages you mention (Dart and JavaScript) are considered client-side only within Google so they mostly don't compete with Go. For example, Node is supported for developer tools and external users, not because Google runs its own servers using Node. (Or at least that was true when I left.)


I don't think it's a false dichotomy. I don't think the teams that originally worked on the development of the Go language were Google production engineers of any sort.

I've been working at Google for 8 years and have yet to touch a Go codebase in production.

It's just not used that frequently.


If we're sharing anecdotes, I didn't write any C++ in over a decade at Google.

But I'm not going to dispute that it's the most-used server-side language, because I've seen the statistics.


How about the statistics you've seen on Go?


I don't remember (other than being up and to the right) and it seems not to be published.


>Researchers working at a company are often thinking about solving company problems, even if their particular solution doesn't have management support yet

Yes.

But it's one thing to have a company's management say "we want a team to create a language to solve programming at our scale", throw full resources and money at it (at C#/Java/V8/Swift scale), and have it adopted by mandate for further development (as is often implied),

and another thing to have an independent group of a few devs at a company sit down on their own and say "you know what would be interesting to build? A language to handle what we see as Google-scale problems", then get some more resources, and see some teams at the company adopt it, as just one more language used alongside several other company-approved languages for greenfield stuff...


Uh, Google is unlikely to standardize on a single language. The codebase is too big, the incumbent languages are too well entrenched, and they've heavily invested in supporting multiple languages. I'm not sure anyone claimed that either?

If anything the trend is towards slowly adding acceptable languages, but the bar is pretty high.


It's not that hard to imagine. Dropbox has talked about using Rust over Go for storage because cpu/memory efficiency was considerably better (4x if I recall) in Rust.


Go is pretty explicitly designed (as in there are quotes from the designers) with a low ceiling because Google doesn't want to trust their engineers to have nuanced programming taste.


This is why Go is good. If you care about writing beautiful code, Go might not be for you. If you care about getting shit done, it’s a great language.


Go is good at getting things done... until it isn't, and you bump your head on the ceiling.

I know Go and Haskell equally well, and I get just as much shit done in Haskell without sometimes having to be a copy-paste machine. There are many benefits to Haskell and like-minded languages besides being "beautiful."

Feels like if the Go designers had had the sense to include parametric polymorphism and sums (aka proven features from the '70s whose main "downside" is being weird to Go's target audience), it would be a great middle ground.


Eh, in practice I haven't missed explicit (closed) sum types as much as I thought I would. I started using Haskell many years ago (20-ish?) and have written some medium-size apps with it, and I'm very comfortable using "switch x := y.(type)" in Go where I would use an ordinary "case y of" in Haskell. The only thing missing here is a compile-time check for exhaustiveness.

Missing polymorphism is a valid complaint but it hasn’t had the impact I thought it would.

Meanwhile, some of the programs I had written in Haskell I've rewritten in Go, e.g. due to problems with the use of finalizers (causing crashes!) in Haskell. You can chalk this up to poor library design, but it simply isn't a problem in Go, which seems to lean on finalizers less than other languages.

I’m going to continue using both Go and Haskell, I’m happy with both.


Emulating sums with interfaces isn't the worst, but proper sums with exhaustiveness checking make it way easier for the consumers of your code to also perform matches in a maintainable way. When I emulate sums, I keep the matching internal, and at best I expose a function that is the equivalent of an exhaustive match (one which takes a function per case).

I've never run into finalizer issues (or finalizers at all really) in Haskell but I'm sure they exist. What libraries in particular used finalizers that you had issues with? In general though I find resource management much easier in Haskell thanks to bracket, ResourceT, and the (imo very good!) exception handling stuff. Async exceptions in particular are a surprisingly nice feature when combined with threading!

The biggest place lack of parametric polymorphism hurts is in concurrency. I have to hand-roll so much concurrency in Go and I don't really have a good option for abstracting over various patterns.

> I'm going to continue using both Go and Haskell, I'm happy with both.

Yeah same here. I'm glad to have them both in my toolkit.


I'm not asking for specifics, but do you use Haskell at work? Is it in academia? I have worked for several large and small corps, and none of them used Haskell. Most use C++, a lot use Go, a few use some Rust, but none were using Haskell. I've met people who use Haskell in their spare time out of love for CS, but never in production code.


Without specifics... I've been writing Haskell professionally and in production for several years now, for multiple companies.

None were academia, and most were commercial enterprises (that most people have heard of).


Beautiful is subjective, I know, but I think you can write beautiful code in Go. I like its simplicity and explicitness. Those things make it beautiful to me.


I think the OP is thinking about a different kind of scale.

It's not scale in terms of number of users or data; it's complexity scale.

The last enterprise app I worked on provided something like 10,000 different API calls (and yes, mostly because of bad/wrong initial design decisions), and over 50k different possible queries that it could run against the database. (My job was to "make the database go faster".)

Just running the unit tests took longer than 6 hours (of course, most unit tests went out and connected to the database, because why not \s).

It could all be replaced with a dozen or so Go projects of 10k to 30k lines each, which would be a lot easier to maintain and to scale out.


People may also mean different things by scale.

For me, scale means more micro-services. For another person it means a bigger monolith.


Which is probably the reason Google has made tools like a code generator to transform code with generics into valid Go code.


“non-nullable types“

This is not some advanced feature. I would think this should be the default in any language.


None of the popular languages from the last generation had non-nullable types.


We're accustomed to thinking of our world as really fast moving with constant technology turnover, but computer programming languages turn over at maybe twice the speed of human generations. A lot of the features that HN posters, including me, simply can't imagine a language without have yet to penetrate the top ten list of current programming languages at all.

My point isn't that this is wrong, but just to suggest that people's mental models incorporate the idea that programming languages actually move pretty slowly. There's a constant froth of undergrowth in the forest, but it takes a long time to establish a tree, and a long time for it to be supplanted by another.


And it was a mistake in all of them.


"I call it my billion-dollar mistake…At that time, I was designing the first comprehensive type system for references in an object-oriented language. My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years." – Tony Hoare, inventor of ALGOL W.


A good example of that is the difference of allocation handling in both languages: Go is simpler than Rust because in Go everything is automatic, while in Rust there is an explicit Box if you want to allocate on the heap. But avoiding heap allocation is easier in Rust than in Go because of the explicitness vs automatic management.


Good point. Actually, C is a perfect example of this: it's quite a simple language that is not that simple to use.


Rich Hockey gave a nice talk about this.

https://youtu.be/34_L7t7fD_U


Auto-complete turning his name into "Rick Hockey" does actually explain what's been going on with Rich's hair over the past few years.


How stupid of me not to spot this ...


he also had another talk https://www.infoq.com/presentations/Design-Composition-Perfo...

and I would argue the idea there is that simplicity of the instrument/tool is not a problem in itself, and we shouldn't lose it in pursuit of 'ease'


Yup. This is the origin of this idea.


Go cannot be simple, because software itself is a complex beast; the language really does not matter.

One thing that gophers say a lot is that Go's keyword count is minimal. This is literally true: only 25 keywords as of Go 1.13. But it does not reduce the complexity of writing code, it just hides it.

For example, Go does not have a `new` keyword, but it has the `make` built-in function, which is so powerful that one of my friends actually thought `make` was a keyword.


Can somebody post an example of when you would use generics to solve a problem in the real world, and how that would be implemented in Go instead?


Non-nullable types in Rust are implemented using a combination of (powerful) enumeration types and generics.

Enumeration types in Rust can be the usual ones you find in other languages:

    enum UserStatus {
      Anonymous,
      Registered,
      Confirmed,
    }
But their variants are not limited to names, they can be other things[1]:

    enum UserStatus {
      Anonymous,
      Registered { email: Email, date: Date }, // a struct variant
      Confirmed(Date), // a tuple variant
    }
Enum types can also be made generic. For example the Date type could be a parameter in the example above:

    enum UserStatus<T> {
      Anonymous,
      Registered { email: Email, date: T }, // a struct variant
      Confirmed(T), // a tuple variant
    }

Which brings us to the Option type[2] in Rust. It is an enum with a generic type parameter, defined like this:

    enum Option<T> {
        Some(T), // a tuple variant with a single item of type T
        None,
    }
In Rust, when a function says it returns an `i32`, it will always return an `i32`. It never returns `i32` or null.

If you sometimes want to not return an `i32`, you must specify exactly what else it can be. It is often convenient to return an `Option<i32>` instead. Then it can return `Some(i32)` or `None`. The difference from the "implicit null" of before is that the compiler will force you to deconstruct the option safely, at compile time. You will never get a "Null Pointer Exception" at run time because of this.

For me, this is huge, and one of the reasons I like Rust more than Go.

[1]: https://doc.rust-lang.org/rust-by-example/custom_types/enum.... [2]: https://doc.rust-lang.org/std/option/


The simplest example I can think of is: Map(slice, func)

You can implement this generically in Go using interface{} types and runtime type checking, but then you have runtime type-checking failures.

A java/c++-esque "generics" implementation would be able to type-check at compile time.


You can implement it in Go using loops, and it's not significantly slower or more error prone.


Unless you want to implement that on top of your own custom structure, like, for example, a B-tree. Then you either need to write a different implementation for each separate tree, or you need to use `interface{}`.


Yes, custom containers are one place that generics are very useful.

Go has the most useful containers built in. Most code I write doesn't need custom containers, even in languages with templates (I write far more C++ and Java than Go) -- so I find that I don't miss them much when I spend time in Gopherspace.


The most egregious missing container is "set", coming from Python (as many Go developers do). After debugging, reviewing, and writing at least 3 different local implementations of a set, I can't wait for generics so that it can be written once and then never looked at again.


I'm slightly surprised you found three different set implementations - `map[T]struct{}` is the idiomatic one I see everywhere.

I suppose it's not sorted or concurrent-safe?


I mean that people make their own per-file or per-package set implementation for their service. There's no point putting it in a common utility package even within the same company, because it's specific to the type of what they're storing.

`map[T]struct{}` is only a type, not an implementation. You still need a handful of methods, which might be named inconsistently in each implementation by different developers. I will be happy when I never have to think about this again.


> I'm slightly surprised you found three different set implementations - `map[T]struct{}` is the idiomatic one i see everywhere.

Set storage is not really the concern here; it's the set-theoretic operations that make sets useful (generally speaking; there are cases where all you need is for the set to be a set, e.g. deduplication).

So I wouldn't be surprised that GP found several different implementations of union, intersection, difference, symmetric difference, subset, superset, …


Yeah, but you can't implement it generically.

That means for every conceivable combination of input and output types you have to write a new Map or Reduce function to handle it.

All modern languages except Go and Elm require only one implementation of Map that can handle all type combinations.

It's called parametric polymorphism.


Elm has parametric polymorphism.


You're right, my mistake.


Java is one example of a modern language without proper generics. It has syntax sugar over Object references, but you can do almost the same in Go using interface{}.


I'm not sure what you mean by 'proper generics', but Java does have compile-time generics, ie List<T>, <A, B> B foo(A bar), etc.


Proper generics would allow you to parameterize with all types including primitive ones, not just Objects. Check out java.util.Arrays class as an example of why Java's generics are not proper: https://docs.oracle.com/javase/7/docs/api/java/util/Arrays.h...


You've got a kind of made-up definition of 'proper'. Java has generics; it doesn't have the best implementation of them I've seen, but I wouldn't call it 'improper'. It's just not a word that has any specific meaning in this context.

I seem to remember an article being written a while ago that found Java's type system, with the addition of generics, was unsound. Maybe that's what you're trying to say.


Java generics are syntax sugar. They don't add anything substantial. You can use Object (or another bound type) everywhere you're using a generic type. You only need to add a few explicit casts, which are added by the compiler anyway.

You can do exactly the same thing in Go. Just use interface{} everywhere and add a few casts where needed. That is Java generics.


> Java generics are syntax sugar.

That's not true, Java generics are not just syntactic sugar.

> They don't add anything substantial.

They add parametric polymorphism and type safety.

> You can use Object (or another bound type) everywhere you're using a generic type.

Only if you go out of your way to do so, and ignore warnings. This has less to do with how generics work in Java and more to do with the fact that Object can be explicitly cast to another type and vice versa.

> You can do exactly the same thing in Go. Just use interface{} everywhere and add a few casts where needed.

No, it's completely different.


Yes, I know what it's called. I have implemented generics in compilers. I still don't miss them in most Go code, in practice.


One example is n-dimensional arrays where each element could be a byte, uint16, int16, ... , uint64, int64, float, or even some custom vector. In a language with generics, you could describe the nD array more simply and the operators work depending on what was passed. The implementation can also be shared across a number of similar element types. In Go, though, you’d need to use interfaces and at some point you’d have to make tradeoffs about speed, simplicity of the implementation (e.g. hand-coding each element implementation) vs use of interface function calls to get properties, etc.

Another example is to look at how Go's image package handles this vs. other languages with generics. https://golang.org/pkg/image/


In fact, with Go it's even more obvious: Go doesn't have inheritance!

Traits are one way to solve this. But the problem is that in Go, types are "too open". If you want to constrain them more, traits are not an elegant solution.

Generics are better in this case.

BTW: this is how it works in Rust. Rust has traits, but you fill in the rest with generics.

I made a mini Go database command-line tool that I call from Rust (so I get access to drivers that are not yet available in Rust). It needs much more boilerplate for things that on the Rust side are just Value<T>.

And it allows type errors that Rust doesn't.

I think that Rust and Go sharing some (most?) design principles about how to model types makes it easier to see what each side makes harder...


You can prevent external packages from implementing your interfaces by adding unexported method names to your interface. Usually I'd call it an anti-pattern for interfaces, but I've done it a couple of times when there was literally no conceivable way for an external package to implement something conforming to a particular interface for various technical reasons.


Yeah, most "good" uses of this technique are kind of abusing it to substitute for some other feature the language doesn't have, the most common of which is combining it with type switches to get poor-man's sum types (what Rust calls enums). You see this a lot in code working with ASTs, especially.


You can solve any problem in a language without generics, such as Go, C, Fortran-IV, or any other Turing-complete language, including the Turing machine "assembly".

The difference is in speed of producing such code, amount of bugs introduced, ability to understand, modify, and evolve the code, etc.


I don't know Fortran, but while C doesn't have generics, you get similar behavior from macros (for example, min() and max() are macros) and weak typing. So the need for generics there is much weaker.


There are a number of tools (and the "go generate" construct) designed to support code generation. If you just want macro-level behavior, that's easy to do.


Yes, and that's what I'm currently doing, but it is essentially just automating writing the same code multiple times. It still has issues, like having to remember to regenerate the code on change, or making it harder to use IDE features to refactor. It also has some other limitations.


> automating writing the same code multiple times

What would you call a language that forces you to write / use a preprocessor to introduce higher-level features?

Low-level. More specifically, low-level for the domain where you're working. C is admittedly low-level because it had to be minimal even for 1970 and close to the metal. Other languages usually enjoy less drastic design constraints.


Yep, that is why some were already using PL/I and Algol dialects almost 10 years before C came to be, alongside Fortran, Cobol and Lisp.


Just like Borland C++ 2.0 for MS-DOS did, before templates had even started to be designed.

I let you research when it was released.


Even C has generics now (_Generic), even if quite limited.


Neat, I had no idea.


I have functions that have to deal with hierarchical data objects, JSON-like things. All collections are identically typed (although they have optional fields, different-length arrays, etc.), but there is more than one type. In order to write simple code, I wanted to write one function that gets them from the database, one to update a single record, one to notify of changes, etc. So my top-level type covering all of these collections is map[string]interface{}, which in TypeScript would be Map<string, any>.

Then I ran unit tests, and sometimes a weird error cropped up. Turns out I made an error in an append: instead of concatenating two arrays, I added the second array as the last element of the first. That was a time-consuming bummer.

This problem could have been prevented by two approaches: copying every function for every type, or generics.

Go is a nice language, which makes some tasks easy to implement, almost fool proof, but it still lacks in other areas.


For a Math.max function you would normally have generic comparables and implement it on those.

Go makes you implement max and min in all your codebases.


Same with floats: if you for some reason need to use float32, tough luck. Makes you wonder why the language even has types that aren't supported by the standard library.


Literally any data structure that stores different types in the same way: hashmaps, linked lists, vectors, binary trees (assuming you can rely on some ordering function), etc., and then all the operations you expect to work on those data structures. You could also define mapping operations that work on the containing structure and don't care much about the specific type of the contents. There are a million and one uses for generics (polymorphism).

In Go, I believe you need to give up type safety to write these implementations, by using interface{}


A lot of the time you'll end up with an implementation for each discrete possibility, but that's pretty crappy, especially if you have more than 3 or 4 discrete types you need logic for (though it's not always bad; some Go libs follow this pattern, e.g. sort offers a method for each basic type). You could use the empty interface (which is essentially 'any'), but that generally requires you to do casting or runtime type checking, and that's suboptimal in a statically typed, compiled language.

Generally it makes more sense to describe an interface that can be satisfied by different types, and writing logic to handle that interface instead. Go may not have traditional generics but the interface pattern is certainly a kind of generic programming that can be used to achieve some powerful results. Because structs can satisfy many interfaces and interfaces can be satisfied with any kind of struct (if you want an in language example check out the io.Writer interface and the many functions that use it) you can end up with highly reusable code despite not having traditional generics.


In my experience Generics become much more useful, when your typesystem is strict. I use them quite often in Rust, especially when writing functions that should take many different types as inputs/outputs.

You can get away without them in many cases, but if you are working on stuff that you want to reuse, generics are the way to go. In combination with Rust’s traits this is especially cool.



>However, as you start pushing beyond "scale" Go is designed for, Go becomes less simple to use.

I wonder if that is really true; some really complex systems are written in Go, and it seems to scale well for them. Take Kubernetes, for example.


I view it as a problem of scaling comprehension. Not scaling of performance.


Yes this is still true[1]

[1]: https://www.youtube.com/watch?v=4VNDjwzzKPo


Kind of agree, but also disagree. The runway for Go to go (pun unintended) from easy to difficult to use because it consists of few parts is far longer than the runway for a developer using a feature-rich language to eventually shoot themselves in the foot. Let me throw another language into the mix to describe what I personally think is _almost_ the best tradeoff between this difficult and easy simplicity: Elixir. It has almost the same number of keywords (as a proxy measure for API surface) as Go, but on the other hand it also exposes metaprogramming, so if you really want to, you can easily shoot yourself in the foot. But in both languages, as I said for Go, the runway for language usability to go from easy to difficult due to their limited features is very, very long.


The tasks you do with any language are often not simple, and the simpler the language you use, the more complex and harder the task gets. Assembly language is "simple" in the extreme, but getting a task done in it is not trivial.


> However, as you start pushing beyond "scale" Go is designed for

Like Google scale?

That's what Go is designed for.


Yes, that is why Google mostly uses Java and C++ instead.


Or it may be because Java and C++ are much older, so a lot more code is written in them. And companies do not replace working code just because a new language developed in-house has a few more features.


Except that lots of new code also gets written in those languages.

Google's major Go projects appear to be gVisor, the Android GPGPU debugger, Fuchsia's TCP/IP stack and volume-management tools, and the download server, since that was done by the Go team.

https://commandcenter.blogspot.com/2012/06/less-is-exponenti...


You forgot Kubernetes.



From description:

"In this talk we explore the devastating effects using object oriented antipatterns in go while building programs in a monorepo. ... Unknown to most, Kubernetes was originally written in Java. If you have ever looked at the source code, or vendored a library you probably have already noticed a fair amount of factory patterns and singletons littered throughout the code base. "

So basically, writing Java in Go led to a clusterfuck codebase.


This is a joke talk (it's hyperbolic).

Most of the codebase problems in Kube are:

1. At one point we depended on half the Go ecosystem (Docker, gRPC, etcd, a few others), which is hard to manage given Go's dependency story (few standardized libraries)

2. Serialization performance mattered, and JSON and protobuf were still raw at the time

I don't think Kubernetes is any worse than any other large (3M+ LOC), relatively young codebase I've seen, on average.


That's why they created Go: to get rid of Java and C++ for things like download.google.com.

They are not scalable or maintainable at Google scale.

That's why Google's C++ guidelines basically disallow everything that is problematic in C++ and enforce a style guide:

> Avoid surprising or dangerous constructs. C++ has features that are more surprising or dangerous than one might think at a glance

> Avoid constructs that our average C++ programmer would find tricky or hard to maintain. C++ has features that may not be generally appropriate because of the complexity they introduce to the code

> Be mindful of our scale. With a codebase of 100+ million lines and thousands of engineers, some mistakes and simplifications for one engineer can become costly for many.


Yeah, better tell that to the Android, ChromeOS, Fuchsia, Maps, Search and Flutter teams.


You are sidetracking the conversation with a trolling attitude.

I was replying to

> However, as you start pushing beyond "scale" Go is designed for

Go is designed for Google scale; I don't know anybody pushing beyond Google scale right now.

Anyway, I'm a nice person, you'll find your answer below

Enjoy

----------------

Most of those projects are legacy software.

Did you expect Search to be written in a language invented 10 years in the future?

Go is slowly replacing C++ to write the tooling.

C++ is simply not scalable enough for day-to-day use; Linus knew that 15 years ago.

Even Firefox is replacing it for its engine; that must mean something.

That something can be done in C++ doesn't mean it should be anymore.

Java is there because it runs the ads systems; you don't replace your core product overnight, just like in some banks you still find Cobol.

Java is the new Cobol.

Fuchsia is written mainly in C and Dart.

You wanna write a Haiku-style microkernel?

C++ is fine; it's just a few thousand lines of code.

Wanna write millions lines of code in a maintainable way at Google scale?

You don't start the project in C++ today, unless you're a crazy person.

Even ES5 Javascript is more maintainable than C++.


As a now Rust lover and prior (6 years?) Go lover, this article hits home. We have a lot of tech debt in my shop (who doesn't, lol), and generally I advocate Rust or Go. I usually start the conversation with: would this be a program you'd write in C++ or in Python? And then I advocate Rust or Go accordingly, with one amendment: data structures, and frequency of conversion, etc.

Having converted some large Go programs to Rust recently, I now have the perspective that some programs, while a perfectly fine fit for Go, are miserable to write without generics. I had a program which did some pretty minor work, but it did so with a lot of varying data structures, and writing it in Go was less than pleasant. Now, Go might handle this better once it has generics; however, I imagine this will only move the bar. Certain problems are just going to be friendlier (though more complex!) to solve with a more advanced type system.

I do look forward to the day Go gets basic iterator behavior via generics (if ever). Some of the smallest things in Rust were the biggest sighs of relief for me. Converting a slice of slices from one type to another is just a PITA in Go (in my opinion), and iterators make a world of difference for short, easy-to-read implementations.

In my experience, Go's biggest fault is the code bloat that "simplicity" brings. A simple goal can turn into a multi-function implementation in Go that frankly shouldn't have to be one. Then code locality gets worse over time, and suddenly simple doesn't feel simple. Rust's larger complexity (iterators and the like) improves code locality, and thus simplicity, in my experience.

It's a game of tradeoffs. They're both great languages, and they both have their places in my company.


I'm a Rust evangelist too, but I think it's going a bit too far to blanket-recommend Rust and Go over C++ and Python. Python in particular is still an excellent language, and there are many situations where it makes perfect sense to continue using it.

I can see a future where Rust and Go overshadow Python but that future isn't near. Python still has superior libraries for many use cases, a great community, easy to learn syntax, and little training requirement (most people know it or can become proficient in a very short time). Moreover with the type annotations and linting tools it's becoming a lot easier to write a large and maintainable Python codebase.


You only mentioned one tradeoff for Rust, and that is the complexity of the overall language.

I would say that the complexity of the overall language is a one-time hurdle. Once you get past that, and once Rust has more mature libraries, which language in your opinion is the better one?


For the record, I like Go. I also like Rust. But I really see them as aimed at different markets: Go is a better Python/Java; Rust is a better C/C++. The interesting thing is that Rob Pike envisioned Go as a systems programming language, but any language with GC is a complete nonstarter for systems programming.

So as for the author's pretext (writing some tool in Go), personally my view is "why not?" not "why not Rust?". If someone had written that same tool in Rust I wouldn't be saying "why not Go?". Whatever floats your boat.

But here is the one part where I disagree with the author:

> Go is unapologetically simple

It is not as simple as it seems, just like every GC language out there. I find that Go advocates in particular seem to be dismissive of any GC complexities or downsides. Maybe because many of them just don't have the background of dealing with this from years of Java.

I worked on a team at Google that wrote and maintained a project using Go. Note that I never wrote anything for this project so my experience was second hand but close second hand.

One thing I remember was the Go binary blowing up on memory limits (10GB+) in production. Some debugging found there to be millions of Go channels (IIRC) that needed to be cleaned up. For whatever reason the GC didn't clean them up, possibly because of a non-obvious dangling reference, possibly not.

Anything can have bugs, obviously. My point is just that it's a myth that GC is a silver bullet.

There was a guy who worked on the Go team (David Crawshaw) who'd stop by every now and again. Occasionally I'd get into debates with him about Go and GC. It's from these that I established my general observations of Go pundits:

1. Full GC pauses and GC in general are totally not a problem in Go.

2. If they were, they're totally going to be fixed in Go 1.N+1 where N = the current version of Go.

(At the time, the argument David made was that Go's STW GC pauses were sub-millisecond so totally not a problem).


The memory leak due to channels not being cleaned up was almost certainly because goroutines themselves are not garbage collected when they're forever blocked. In other words, `ch := make(chan int); go func() { ch <- 1 }()` is a memory leak. This is analogous to spawning a thread that deadlocks by attempting to lock a mutex twice, and has nothing to do with garbage collection.

About the GC pauses, check out the latency graphs at https://github.com/ixy-languages/ixy-languages (being careful not to mix up the Javascript and Go lines). Go handily beat every other garbage collected language in latency, and is in the same ballpark as the two non-GC languages (Rust and C). The peak tail latency times are measured in the hundreds of microseconds even at the highest loads tested, so I think the pundits may have a point.


GC is more than just pauses. Throughput matters too; in fact, it often matters more than latency (for example, when writing batch jobs like compilers).


Sure, but that's not relevant to the discussion here. In this case the context was responding to the statements that Go pundits claim the garbage collector has low latency but it always has bugs that will be fixed in the next version. I was providing evidence that the garbage collector actually does achieve low latency right now.

Also, there are ways to solve your throughput problems if you have control over your allocations, but there are not ways to solve your latency problems. Indeed, even if you don't have control of your allocations, you can often run multiple copies of your program to increase your throughput, but multiple copies will not help your latency.

Also, tuning your latency does in fact help increase throughput, because if your process spends a significant amount of time in allocations, its latency suffers. It's not a zero-sum knob between the two, especially in the presence of humans that care about tuning the performance of their applications.


You don't have much control over allocation in Go (in fact, according to the spec, you have none at all). Language constructs allocate in ways that are not obvious.

> Also, tuning your latency does in fact help increase throughput, because if your process spends a significant amount of time in allocations, its latency suffers.

I assume you meant to write "throughput" in that last sentence. That's not how throughput is defined. Throughput isn't "how long does it take to allocate", though that influences throughput. It measures how much time is spent in memory management in total during some workload. Optimizing for latency over throughput means you are choosing to spend more time in GC.


> You don't have much control over allocation in Go

Maybe you mean something different by "control over allocation"? Go does allow that; for example, the "The Journey of Go's Garbage Collector" talk has a good summary:

https://blog.golang.org/ismmkeynote

Specifically the sections about value-oriented programming are exactly that: Go allows the developer to avoid a lot of allocation by embedding structs, passing interior pointers, etc. Compared to Java or C#, it can have a much smaller number of allocations as a result.
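A sketch of what that value-oriented style looks like in practice (the type and function names here are mine, not from the talk):

```go
package main

import "fmt"

type metadata struct{ hits int }

// pointerNode is the Java/C#-object style: the metadata lives in its
// own heap allocation, reached through a pointer.
type pointerNode struct{ meta *metadata }

// valueNode is the value-oriented style: metadata is embedded directly,
// so a single allocation (or a stack slot) holds both.
type valueNode struct{ meta metadata }

// bump takes an interior pointer: code that wants a *metadata works the
// same whether the struct is embedded or separately allocated.
func bump(m *metadata) { m.hits++ }

func main() {
	n := valueNode{}
	bump(&n.meta) // interior pointer into the embedded field
	fmt.Println(n.meta.hits)
}
```

The embedded version gives the GC one object to trace instead of two, which is the kind of difference the talk is getting at.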


> Specifically the sections about value-oriented programming are exactly that: Go allows the developer to avoid a lot of allocation by embedding structs, passing interior pointers, etc.

To elaborate, for example, indirect calls cause parameters to those calls to be judged escaping and unconditionally allocated on the heap. Go style encourages frequent use of interfaces such as io.Reader. So in order to interoperate with common Go code, such as that of the standard library, you will be allocating a lot.

> Compared to Java or C#, it can have a much smaller number of allocations as a result.

C# also allows you to embed structs within other structs and pass interior pointers. Java HotSpot has escape analysis as well, though it's less important for that JVM (and other JVMs), since HotSpot has a generational GC with fast bump allocation in the nursery.

> https://blog.golang.org/ismmkeynote

As I've mentioned before, I also have a problem with the conclusion of this talk: that generational garbage collection isn't the right thing for Go. The problem is that nobody has tested generational GC with the biggest practical benefit of it: bump allocation in the nursery. I'm not surprised that generational GC is a loss without that benefit.


> avoid a lot of allocation by embedding structs, passing interior pointers, etc

I can create a struct and take a ref to an interior field of that struct in C#, can I not? And if I wanted to badly enough, I could use unsafe code and take a pointer to the third byte of an interior field of that struct.


No, I meant what I wrote. The total time spent in GC includes the time spent performing allocations. If you reduce the time spent in allocations, all else being equal, you both reduce latency and increase throughput. In that way, optimizing your allocation latency also increases your total throughput.


Sounds very similar to "leaking" event listeners in other languages.

The handler and all of the scope it can reach stick around forever even though they should have been replaced or removed.


I'm not disagreeing with you, but the example you're using (the ixy driver) is a pretty poor one for comparing garbage-collected languages: Go performs particularly well on this one because the Go implementation doesn't allocate anything on the heap (so the GC has nothing to collect). Being able to work only on the stack is a feature of the Go language, but it has nothing to do with its garbage collector.


This example is just to show that the pauses are not in the same class as in other fully managed languages. I haven't looked at the code to know how much it stresses the garbage collector, but I agree it does seem unlikely to do so.

That said, being able to work on the stack does influence the design of the garbage collector. Throughput becomes less important because you can choose to allocate less, and pause time is more important because latency is harder to optimize. Latency is affected by your whole stack and can’t be meaningfully reduced by just adding more machines in the same way that throughput can be increased.


> This example is just to show that pauses are not in the same class as any other fully managed languages.

But you chose one of the few examples where there is no pause at all, because Go's GC didn't even run once!

Go could have a 10-second stop-the-world pause and it would still perform the same way here. That's why I say this example is a poor one for the point you're making (which is valid, even though one could argue that 10ms pauses are better than 30% of CPU usage going to the GC, which was actual production behavior on a Go service at my former work).


> but any language with GC is a complete nonstarter as a systems programming language.

Check this out: http://www.projectoberon.com/

A full system (not just a language, but also custom CPU on FPGA, OS with GUI, applications, etc) where the language is garbage collected. The system has 1MB of RAM (by default). Note that the garbage collector is implemented entirely in the language itself.

The Oberon language was also a big inspiration for Go.


> The interesting thing about Go is Rob Pike envisioned it as a systems programming language but any language with GC is a complete nonstarter as a systems programming language.

AFAIK, by "systems programming" he meant "non-user-facing programming", not "kernel or embedded programming". So, basically, servers, batch programs and other os utilities.


I'd always assumed he was using "systems language" to just mean "as opposed to a scripting language" like perl or bash.


> any language with GC is a complete nonstarter as a systems programming language

If it was possible to write whole operating systems in garbage-collected LISP in the 80s, then it surely is possible to use a GC'ed language for systems programming thirty years later.


If you dig into those machines, anything vaguely real time including huge chunks of the drivers were written in user writable microcode.


Sure, but that's not really a universal indictment of a system or a language. For example, C gets a lot of justifiable criticism, but the ability to drop down to assembly when it's needed is considered a feature and not a bug.

The highest-performance gc systems (OCaml?) might very well be associated with functional programming languages, but the more relatable system for a lot of people might be the Oberon system, which did influence Go. I'm pretty certain Oberon did not rely on any hand-coded assembly to build a functioning workstation. (although, on the topic of microcode: the newest Wirth Oberon incarnation does have the student program an FPGA to run the system, as an exercise...)


Oberon for sure depends on hand coded assembly. It's just in the compiler rather than .asm or .s files, but it's still a big blob of assembly that the type system and GC have no real knowledge of.

Right now, state of the art for real time GC is that it's a huge trade-off between throughput and determinism. Like orders of magnitude lower throughput to be able to make guarantees that you'd expect out of a desktop system.


> It's just in the compiler

What's your objection to having the compiler emit "hand coded assembly?" It's not clear to me what axe is being ground here. (that's true for me in a big picture way here too. There's a very strong argument against gc languages being made here and I'm not sure if you're saying Oberon doesn't count as a gc language, or what.)

> Like orders of magnitude lower throughput to be able to make guarantees that you'd expect out of a desktop system.

Do you have a current citation for that?


More recent examples to check out were Spin (Modula-3), JX (Java), and House (Haskell).


It was also found in the 80s that any operating system written in LISP was dog slow. They even started making dedicated hardware interpreters of LISP to try and get around this. https://en.wikipedia.org/wiki/Lisp_machine

When you need to start designing your hardware around your computer language you know there's a problem going on.


> It was also found in the 80s that any operating system written in LISP was dog slow.

Lisp Machine operating systems were written in the mid 70s - developed for a new breed of computers: single user workstations with graphical user interfaces.

They were developed at Xerox PARC and at the MIT AI Lab.

> They even started making dedicated hardware interpreters of LISP to try and get around this.

The machines were built to get around slow Lisp systems on time-shared computers with tiny memory. They wanted dedicated machines with exclusive memory for a single user.

A bunch of stuff was then invented for those systems, including better automatic memory management like generational garbage collectors.

Around 1980 (!) such machines were extremely expensive and still had tiny hardware: around 1 MB RAM and approaching 1 VAX MIPS of speed. Mid/end 80s they had 20 MB RAM and 5 MIPS...

No wonder: an entirely new class of systems was developed on the slow hardware of the time.


To be fair those machines made sense at the gate counts we're talking about. They were true von Neumann machines with no caches, so the vast majority of the value add was putting the interpreter in microcode where it doesn't compete with the fetch bandwidth of the rest of the application. That's why CISC machines of the time tend to have mem(cpy/set/cmp) instructions, getting the instruction bandwidth out of the way of data heavy loads in a way that makes sense for the use case.

That's also why these dedicated machines tended to disappear right as instruction caches became standard; an I$ solves the same problem in a far more general way. As long as the hot path of your interpreter fits in the I$, it's six of one, half a dozen of the other.


It's not just putting the bytecode interpreter into microcode; there is the question of data representations suitable for Lisp, like pointers and numbers with tags.

Caching doesn't really help with this. Even when everything is nicely in I$ and L1, it costs cycles to do the type checking. What helps is compiler techniques: type inference to eliminate some of the checks. But hardware can basically bury the cost of the checks; you can do them all the time on all operands.


> started making dedicated hardware interpreters of LISP to try and get around this.

That's a misconception; the actual technology of that type provides an instruction set architecture (which isn't Lisp), to which Lisp is compiled.


The issue with Go and garbage collection is basically that Go's GC is just a GC tuned for latency above all else. The GC of Java HotSpot, on the other hand, balances throughput and latency, and is configurable to target one or the other. The latter is typically what applications want, even if it can't advertise the same pause times. For example, allocation is an order of magnitude cheaper in Java HotSpot than it is in Go.


How do you explain the throughput and latency numbers posted by https://github.com/ixy-languages/ixy-languages? Go is beating Java in both latency and throughput, and the author attempted many different Java GC tunings (https://github.com/ixy-languages/ixy-languages/blob/master/J...)

Maybe tuning for latency is the appropriate trade-off given Go's high level of control over allocation and data layout?



You're jumping to conclusions. I understand the distinction and that there are many options for Java. Please be charitable.

To rephrase the question more precisely: can you explain why OpenJDK, which has a multitude of garbage collector implementations tuned for both throughput and latency, performs worse on both throughput and latency than Go on this benchmark which has a garbage collector that, according to the original statement, is "tuned for latency above all else"?

Perhaps OpenJDK is problematic in other ways (the author of the benchmarks suspects it's JIT-induced, even though they attempted to control for that), or perhaps this test depends less on the GC for some reason. Or maybe typical Go programs don't require as many allocations, making allocation time much less important and tuning for latency the correct engineering decision. Would HotSpot do significantly better, as is claimed? That's the sort of interesting technical discussion I'd like to have, instead of pedantry.


Sorry about that.

I guess being a long-time part of the Java ecosystem, one gets tired of outsiders (in general, not referring to you) always conflating Java with whatever comes with their PC, as if C were defined by GCC.

As for the actual question, naturally having value types helps reduce GC pressure. Which on Java's case could be helped by trying out Graal or other JVMs that do better job at escape analysis than Hotspot. Alternatively, although it kind of is cheating, using the language extensions from either Azul or IBM for value types.

In any case, when inline classes (aka value types) arrive, Java can easily do the same as Go here.

JIT and de-optimizations play a role certainly, and are to blame for some performance impact, which can be further improved if a JVM like J9 gets used, given that it allows for PGO across runs.

Finally, while HotSpot has good defaults, tuning all the knobs is a science, even with the help of JRockit and VisualVM, which opens the door for performance consulting.

The JVMs I mentioned are targeted at soft real-time deployment scenarios; as such they have APIs for low-level control of memory management, while also supporting AOT compilation with PGO, thus allowing low-level fine tuning out of reach for regular Java developers (pure Java SE implementations).


Thanks.

Since most of the explanations that you gave involve things like controlling value types and data layout, it sounds to me like you might agree with the statement that in languages with better control over allocation and data layout, tuning a garbage collector for latency over throughput can be a good idea because your time spent allocating is less important. Is that fair?


In this case, "tuning for latency over throughput" means "not having a generational GC". Whether this choice makes sense does not depend on how much a program allocates. Rather, it depends on whether the generational hypothesis holds. The generational hypothesis is one of the most powerful and consistent observations in the entire CS field. It certainly holds for .NET, which has a very similar memory model to that of Go, and therefore .NET has a generational garbage collector.


It's not as simple as whether the generational hypothesis holds; there are real downsides to having a generational GC, because it means you need a copying collector. Go heap values never move, which dramatically simplifies everything else (e.g. other threads don't need to be paused to update heap pointers, reducing STW times).

Some previous discussion https://news.ycombinator.com/item?id=17551012


You don't need stop-the-world pauses for generational GC. As long as your write barrier ensures that pointers to a young object are entirely local to the TLAB that the object is allocated in, you will never need to stop other threads to sweep the nursery.


Go's a pretty simple language.

It should be straightforward (not easy, because compilers aren't easy) for somebody to write another implementation, tuned for another use case - I can't imagine it wouldn't be easier than doing it for Java.


> How do you explain the throughput and latency numbers posted by https://github.com/ixy-languages/ixy-languages?

Because those graphs measure overall throughput and latency in an I/O setting, not GC throughput and latency specifically. There are many other confounding factors. In particular, the application in question has been tuned to perform as few allocations as possible, so GC throughput will naturally not show up as much as in other apps!

Thanks to its generational GC and TLABs, allocation in HotSpot is like 5 instructions.

> Maybe tuning for latency is the appropriate trade-off given Go's high level of control over allocation and data layout?

Go doesn't give you control over allocation in a meaningful way.


I believe we misunderstood each other. In particular "the application in question has been tuned to perform as few allocations as possible" seems contradictory to "Go doesn't give you control over allocation in a meaningful way."

Allocation being fast isn't very important if you don't spend a significant portion of your time allocating. Isn't it possible that some languages (maybe even Go) by the nature of their semantics spend significantly less time allocating than other languages, and so tuning a garbage collector for latency is the appropriate decision?

Why do you seem to believe that the Go application has been tuned more to avoid allocations than the Java application? If it hadn't been, then you would have to agree that tuning the garbage collector for latency is better because you can invest some effort into your application to have it perform better on both throughput and latency.

Or is the argument that because "[throughput] is typically what applications want", this application is not a good example of most applications (according to what measure of "most")?


You can sometimes use unnatural programming patterns in both Go and Java to reduce allocations in practice. Nontrivial programs will always allocate, though, because various language constructs allocate in ways that are not obvious, and escape analysis is complex and hard to reason about. And the allocation semantics are not part of the language and are subject to change: this is what I mean by it not being meaningfully controllable.

The most salient difference between Go's GC and HotSpot's GC is that the latter is generational. There is no convincing reason I've seen for Go not to have a generational GC, which would dramatically improve throughput by enabling bump allocation in the TLAB. The tiny amount of latency that this could add is by no means worth the cost of making allocations an order of magnitude slower. Allocation in HotSpot is five instructions.


> You can sometimes use unnatural programming patterns in both Go and Java to reduce allocations in practice.

This whole "you can't control your allocations in Go" thing is a strawman. You don't have absolute control because of escape analysis, that's true. But there's a pretty wide gap between "oh I'll just move this allocation point out of a loop" and "oh I wrote a bunch of unnatural code". It's pretty idiomatic to manage allocations in Go, just look at all the posts about using pprof.

Besides, the spectrum of "clarity" (or whatever) to performance is present no matter what language you're using. Ex: ripgrep takes on more complexity so it can search a big buffer of text instead of just a line [1]. I wouldn't call that "unnatural" at all, just systems programming.

[1]: https://blog.burntsushi.net/ripgrep/#mechanics
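A quick sketch of what "managing allocations idiomatically" looks like before reaching for a full pprof profile: testing.AllocsPerRun measures allocations per call, so you can verify a change actually removed them. The function names and loop sizes here are mine:

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

var sink string // global sink so results aren't optimized away

// allocsNaive measures allocations for string += in a loop, which
// reallocates the string on nearly every iteration.
func allocsNaive() float64 {
	return testing.AllocsPerRun(100, func() {
		s := ""
		for i := 0; i < 10; i++ {
			s += "x"
		}
		sink = s
	})
}

// allocsBuilder does the same work with a pre-grown strings.Builder,
// which allocates its buffer once.
func allocsBuilder() float64 {
	return testing.AllocsPerRun(100, func() {
		var b strings.Builder
		b.Grow(10)
		for i := 0; i < 10; i++ {
			b.WriteByte('x')
		}
		sink = b.String()
	})
}

func main() {
	fmt.Println("naive allocs/run:  ", allocsNaive())
	fmt.Println("builder allocs/run:", allocsBuilder())
}
```

This is exactly the "move the allocation point" style of tuning: no unnatural code, just choosing the API that allocates once instead of in a loop.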


> But there's a pretty wide gap between "oh I'll just move this allocation point out of a loop" and "oh I wrote a bunch of unnatural code".

Here's just one example: the Go compiler judges parameters to indirect calls, such as interface method calls, to be escaping. So in Go, if you don't want to allocate, you have no choice but to avoid interfaces. But Go heavily encourages the use of interfaces, especially in the standard library.

In C, C++, and Rust, on the other hand, you can use indirect calls without allocating, because the language guarantees the escaping behavior. This is a significant difference.
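A small sketch of the distinction (the types and helpers are mine; exact escape decisions depend on the compiler version, so treat the comments as the typical outcome rather than a guarantee):

```go
package main

import "fmt"

type buf struct{ b [64]byte }

func (p *buf) Len() int { return len(p.b) }

// direct takes a concrete pointer; the compiler can usually prove the
// pointee doesn't escape, so callers can keep it on the stack.
func direct(p *buf) int { return len(p.b) }

// indirect takes an interface; the compiler generally can't see through
// the dynamic dispatch, so the value boxed into the interface is judged
// escaping and ends up on the heap.
func indirect(v interface{ Len() int }) int { return v.Len() }

func main() {
	var x buf
	fmt.Println(direct(&x), indirect(&x))
	// Build with `go build -gcflags=-m` to see the compiler's escape
	// analysis decisions for each call site.
}
```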


My experience on the Golang side of this is limited to profiling and optimizing Kubernetes, which doesn't make extensive use of interfaces in general, but over the last 5 years I would say 90% of the wins we've had have been:

1. optimizing allocations away in serialization

2. optimizing allocations away in api or biz logic in critical paths around that

3. algorithmic improvements on certain naive code paths

4. blocking

... distant gap

5. everything else

Serialization is its own can of worms in Go, and many places we tried to optimize came down to avoiding a) the stupid (don't use pointers to naive objects) and b) the obvious (use value types), and then we hit a wall where there weren't many cheap wins left.

I can probably count on one hand the number of hot paths that were interface-related, so while I've occasionally been annoyed at the forced allocation when moving into an interface, it's rarely actually something I've gotten a win from.

That's just one particular experience, but the AMAZING integration with pprof from the very early days has saved me far more time in improving perf than other Go annoyances have cost (relative to my experience in Java, 2005-2011).


Oh yeah, totally agree that you have way more control in C/C++/Rust, and that's non-negotiable in many important cases. I just don't think that's the same as "you don't have any meaningful control over allocations in Go"; you still have a lot of control. It's a spectrum, and if you don't do enough research you might find you need more control (you used Go) or you might also find you signed up to deal with more low-level memory management than you needed to (C/C++/Rust).


And yet here’s an empirical benchmark providing evidence against both of those claims. What empirical evidence do you have supporting yours? Can you provide them?

Who defines when a construct is “unnatural”? Can you show examples of “unnatural” patterns in the provided benchmark programs supporting your hypothesis that you have to write unnaturally to control allocations? If not, why are you making those claims?

Why is throughput of allocations important for “most” applications? How much time is typically spent on allocations in those programs? What is a typical application? What about typical applications in Go specifically since that’s the only language that Go’s garbage collector matters for?

Also, why did you bring up the point about generational collectors? It seems like a red herring. This discussion has been about tuning for latency, and how in this benchmark, Go’s implementation did better on both throughput and latency than any OpenJDK collector under any tuning, and that tuning for latency is possibly a sound engineering decision.

I’ve also noticed that you’ve made no explicit attempt to answer my more difficult questions, and if you continue to do so I will no longer assume you are acting in good faith.


Again, that benchmark isn't a benchmark of garbage collection; it's a whole-program benchmark.

There's plenty of empirical evidence—a wealth of papers dating back to the 80s—that generational GC provides better throughput than non-generational GC on most applications. I'd be extremely surprised if a properly-implemented generational GC with bump allocation in the nursery wouldn't improve the performance of Go's GC by trading off a small amount of latency for increased throughput. The reason why you won't see a benchmark like that for Go is that nobody has implemented such a collector for Go.


No cited references, no answers about what “most” is, no answers about what “most” is in the specific context of Go, no answers about any “unnatural” patterns existing, no answers about the claim that it was “tuned” to reduce allocations, answering a question that wasn’t asked dodging the question that was (generational having better throughput vs when does throughput matter).

There’s no way I can believe you are acting in good faith. It’s obvious to me now that you’ve just been trolling for years every time you discuss the topic of garbage collectors. I’m no longer going to engage.


I am not sure I'd go that far, but I want to point a thing out:

Back in the 90s, it was a running joke that if you were going backpacking, you should take along a 3' length of fiber optic cable. "If you get lost, just bury the fiber optic cable and ask the backhoe operator for a ride back to town."

In at least three programming-related communities I'm in, I have made a quip about "I think I'm gonna replace the 3' length of fiber optic cable in my backpacking gear with a short post about Go's garbage collector", and people have filled in "so if I get lost, I can just take the post out, and then ask pcwalton for a ride back to town".

I can't say whether or not the argument is made in good faith, but man, there sure is a lot of it.


If you're trying to write high-performance Java, that driver was a pretty poor attempt. While not idiomatic in the normal sense, there are a lot of low-latency, high-throughput Java apps that never allocate after startup (beyond maybe some object-pool growth before hitting a steady state). That benchmark was even running out of memory when using the non-collecting GC, which shows they were allocating quite a bit, and their allocations probably escaped, had to be moved off the TLAB, and became expensive. I'm not sure they really knew how to write high-performance Java.

I've been part of systems that process millions of events a second and never allocate. These will beat most C++ systems I've seen, and Go doesn't even stand a chance. Java and HotSpot can do some amazing things (especially around inlining), but you do have to do them a little differently (and carefully); at least you can. My experience with Go is that I never had the same level of control, and I don't think it is possible.


> There is no convincing reason I've seen for Go not to have a generational GC

Well, they explored generational GC (after admitting that their uber-ultimate GC that they marketed as future-proof until 2025 could be better): https://blog.golang.org/ismmkeynote

> It isn't that the generational hypothesis isn't true for Go, it's just that the young objects live and die young on the stack. The result is that generational collection is much less effective than you might find in other managed runtime languages.


> any language with GC is a complete nonstarter as a systems programming language

As someone else hinted, your rant loses everyone familiar with the history of computing right here.

"There were bugs in Go at one time" is not much of an argument against it either.


And in the meantime Java finally got its GC fixed so it doesn't stall. Yes, it's a real problem; ignoring it for many years cost some people a lot of productivity.


I can see moving from C/C++ to Rust for buffer-copying style code, but for most general-use programs the cognitive load required to deal with resource management seems way too high to justify moving from Java/C#/Node to Rust. (I just got a call about a shop moving their Node code to Rust, and I can't imagine why any code that would make good Rust code was written in JS in the first place.) Go seems like a non-starter to me: sure, features can be abused (as implementation inheritance has been for years), but omitting generics at this point just seems silly.

As for moving from C# to Rust, you can learn to program with the Span<> APIs and get the non-GC performance of Rust for the most part, while keeping the GC and other C# niceties (like the whole toolchain and ecosystem...), in much less time than a wholesale move to Rust. As for message-passing/agent/channel programming, you can certainly do that in C# if you like, and though there is certainly a lot of foot-shooting that can be done with async and thread contexts, in general the Task<> system is a joy to use compared to almost anything else out there, not only for async code but for concurrent code.

Don't get me wrong, I always thought that C++ was a "worst of all worlds" language and appreciate moving to Rust from there, but for most user-facing applications, which tend to have complex object lifetimes, I just can't see why you'd want to deal with RAII when modern GCs are so darn good.


Have you actually used Rust? I spend very little time (or code) on resource management when writing Rust code. And when I do, it's usually the compiler helping me understand an aspect of my design that doesn't work as I expected. Sure, other languages might allow me to gloss over that, but that will likely come down to finding the edge case in production.


Yes. Rust's type system is very restricted in what it can express because it needs to be verifiable at compile time. This means some trivial things become a huge pain to describe to the Rust compiler, and Rust programmers seem to develop a Stockholm-syndrome relationship with it, singing its praises at every turn. Memory management in dynamic graph structures with cycles is hard without GC, and Rust's type system is not equipped to deal with it. Although you can still have leaks by holding on to unused references, the problem becomes a higher-level one, and describing and managing those structures becomes much simpler with a GC.

The guy saying that you can easily avoid GC in C# is also not being realistic, or doesn't know what he's talking about. Even plain language expressions can lead to object allocation in C#; you really need to know what you're doing to avoid the landmines. It's very clear the language wasn't designed for writing a lot of code like that.


LOL, I'm "that guy" I think... Glad we agree about the complex/cyclic structures at least.

Regarding C#, I said "for the most part". I stick by my assertion that (a) the GC is quite efficient (especially with gen0 objects generated by your "expressions") and far from "lame", and (b) working with value types and Span<> (like Rust slices) together can significantly reduce GC pressure to the point where it's acceptable, assuming there was actually an issue to begin with. Doing this on hot paths in C# is certainly much less effort than moving to Rust wholesale, and you haven't thrown out the GC baby with the bathwater.

All the things that you do to make Rust efficient - allocate on the stack instead of the heap when you can, pre-allocate when size is known, use static/nested lifetimes, use slices to owned structures, etc., you can do in C#, but you don't have to worry about it until it actually becomes an issue.

If you're in the kernel or embedded system or the middle of a game rendering loop, then sure, Rust's compiler guarantees make this style programming easier if that's the way you want to go - and Rust has macros and other features that C# lacks. (Although the .Net JITTer is going to generate type-specialized methods and inline them, etc., w/a Rust macro you know up-front exactly what is being generated, which is nice.)


My experience avoiding the C# GC comes from the XNA era, when Microsoft had a shoddy .NET runtime for the Xbox 360 that allowed anyone to develop for it. The GC was so bad you had to be very, very careful to avoid it, and stuff like foreach would allocate (AFAIK nowadays that can be optimized away in some cases, but not all, and you still need to know when if you want to avoid allocation). People wrote code that avoided GC, but it looked nothing like C#, and was in fact more verbose than something like Rust or C++. You were in uncharted territory, since no-allocation programming is not really a C# thing and using high-level features would lead you into invisible traps. So you ended up using preallocated arrays, static functions, globals, etc., in a language with poor generics (you can't even specify an operator as a generic constraint), no top-level functions, and so on.

I've kept track of .NET progress since then and read about stuff like Span and ValueTask; the ASP.NET Core team did a good job leveraging those for perf optimisations (e.g. their optimised JSON parser). In such scenarios I agree C# with low-level stuff sprinkled in is a good choice.

But if your problem domain requires avoiding GC throughout and a better understanding of what the abstractions will compile to, pick a language that's designed for that. It's like when I had to review some Java 6 code which tried to work around the fact that Java doesn't have value types by using byte arrays: it was just so bad compared to even the C equivalent that it was better to rewrite and go through JNA.


You are right that cyclic mutable graphs are very hard to model in Rust.

But very little code actually looks like that in reality. A lot of code looks like that by accident.


Cyclic mutable graphs are very hard to model in _safe_ Rust (without arena). They can be implemented easily in unsafe Rust, using pointers instead of references, like in C.


They can also be implemented by not using pointer spaghetti ;)

Even in C++ I generally prefer to use handles over pointers for cases like that because graphs are just plain hard to get right, and if you use handles you can get a nice error message when something goes wrong instead of a segfault.


To be fair, I spend the same amount of time on resource management in C++ too (compared to Rust). Unless I'm writing a data structure (happens occasionally) or some explicit resource management layer (texture pool, mesh LOD system, etc), I delegate management/ownership to the system designed to handle it.


I'm a hobby Rust user and I love the language, but I definitely feel the heavy hand of the borrow checker a lot of the time. It's the only language where coding something seemingly simple can push me to the limit of my abilities and understanding.

However, I usually pick Rust for performance-critical code with quite a bit of concurrency, so the problems involved are inherently tricky. I'm gaining more and more intuition about ownership and rustc-friendly software design, and so I hope I'll be able to better understand if my struggle is due more to limitations of the compiler and Rust's semantics or the limitations of my skills.

For comparison, I've written ~6000 SLOC of Rust in my lifetime, so I do have some experience but I'm definitely not an expert.


I use it occasionally and trying to port an old Gtkmm toy project to Gtk-rs was an eye opener regarding both build performance and the pain of dealing with callbacks in GUI code.

First, having neither a build cache nor binary dependencies means that C++ wins the "make world" build, because naturally all my third-party libraries are already compiled.

Then there is incremental compilation, incremental linking, pre-compiled headers and modules to help with the rest.

GUI code then becomes a fest of Rc<RefCell<>> in event handlers, or of the arrays-with-generational-indices workaround shown in Catherine's talk.


Could you link that talk?


"RustConf 2018 - Closing Keynote - Using Rust For Game Development by Catherine West"

https://www.youtube.com/watch?v=aKLntZcp27M


I have a bit -- but I don't have a project that makes sense to do in Rust at the moment. I'm mostly doing long-running symbolic AI code where I don't want to mess around with object lifetime because imposing the concept of "ownership" would be difficult. (F#/C# is the best fit at the moment.)


It takes a little while to wrap your head around it -- I think it took me a solid 3 months before it clicked. Once it clicks though, the lifetime stuff is completely automatic. You really don't think about it. I think because Rust's syntax is so similar to what you might see in a C derived language it adds to the cognitive burden of learning lifetimes the first time around, but IMO, Rust should be the first thing people learn.


> long-running symbolic AI code

That sounds cool. An obvious question is: why not a Lisp? But generally, do you find F# works well on .NET mixed with C#?


I love Lisp and Prolog the languages and I went there first -- the issue always turns out to be that (a) they make hard stuff really easy, but stuff that should be easy really hard, and (b) IMHO nothing even comes close to the .NET ecosystem when it comes to debugging tools, libs, etc. Clojure+Cursive comes closest, but the JVM world is a bit of a turn-off for me -- maybe I've been away from it too long, but it just seems a bit clunky. (And of course, reified generics, value types, etc., aren't available in the JVM.) The meta-programming story isn't as good as in Lisp/Prolog, of course. JS is (as others have said) an acceptable Lisp and has some advantages for the type of work I'm doing -- my problem with JS (and TypeScript) isn't the language, but the run-time environment of Node, which makes things like true parallel programming a PITA and incurs serialization overhead between workers.


Thanks for the reply. I've been looking at F# or OCaml for a symbolic code generation project, but haven't been able to decide.


Ah, the generics. The fact that empty interfaces have been accepted as the way to get things done is absolutely terrifying to me. I understand that C++'s approach to generic type usage has slowly evolved into the most absurd collection of left angle brackets out there, but Go seriously needs to shape up about generics: when people are regularly hacking around your type rules and advising others to do the same, then... maybe there was an error in your rules.

Go is interesting, and I really wonder how it'd be doing if someone went all in on the multi-return syntax sugar (pretty awesome stuff), the prohibition (mostly) on exceptions, channels and trivial threading, and a bunch of other nicely packaged features, while giving some ground on generics. All languages have their warts, but I think this one is particularly easy to address. Though it may take a major version bump and introduce some BC breaks, I feel those could be minimized; the biggest cost would be library incompatibility.


Multiple return values are a lot less general and harder to use than tuple values; you can't store them in maps or pass them over channels or compose functions (while f(g()) works, f(g(), h()) is not allowed).


You could implement an entire corner of the language to work with special-cased / non-reified MRV; that's what Common Lisp does.

Go doesn't do that though. There is some magic for builtins (e.g. variable-arity MRV) but as usual mere mortals need not apply.


> C++'s approach to generic type usage has slowly evolved into the most absurd collection of left angle brackets out there

C++ templates are a purely functional Lisplike language. There's nothing strange or absurd about it; it's a very vanilla way to approach term rewriting systems.

(It's ugly to read, yes, but then all Lisplikes are too.)


> but for most user-facing applications that tend to have complex object lifetimes, I just can't see why you'd want to deal with RAII when modern GC's are so darn good.

My day job is ~half c++ and ~half c#, and I couldn't possibly disagree more.

The first problem is IDisposable and event handlers. Because C# doesn't have anything like weak pointers, event handlers require that you manually dispose of half your objects. In C, there's a simple rule: you always dispose of your objects. In C++, there's a simple rule: the destructor of your data structure/smart pointer always disposes of your objects. In C#, the rule isn't simple. Half the time you have a nontrivial destructor doing nontrivial things; the other half of the time you can leave it up to the GC. But it isn't necessarily immediately clear which is which.

The second is `using`. Again, you must necessarily mix the semantics of non-deterministic GC cleanup and deterministic RAII cleanup, which is worse than having a simple rule that always works.


> c# doesn't have anything like weak pointers

https://docs.microsoft.com/en-us/dotnet/api/system.weakrefer...

https://docs.microsoft.com/en-us/dotnet/api/system.weakrefer...

https://docs.microsoft.com/en-us/dotnet/api/system.runtime.c...

> event handlers require that you manually dispose of half your objects.

You probably have a software design issue at your day job. C# events can be great, but they're not a silver bullet.

For loosely coupled application wide events, event aggregator pattern works better, see IEventAggregator from Caliburn.Micro for an example.

If the two sides of the event handlers need strong coupling for a good reason, an interface or abstract class for the consumer works better, see ExpressionVisitor from System.Linq.Expressions for an example.


You forgot to add safe handles as well.

https://docs.microsoft.com/en-us/dotnet/api/system.runtime.i...

This, and the thread of the other day regarding having to use C for what C# is capable of; apparently many don't look in their toolboxes.


There is obviously a learning curve but it's hard to argue that:

    #[get("/hello/<name>/<age>")]
    fn hello(name: String, age: u8) -> String {
        format!("Hello, {} year old named {}!", age, name)
    }

is more complex than Express + Node. With the added bonus that you get validation for free.


I'm a firm believer that Rust is the greatest imperative programming language ever designed. But your example is a pathologically simple problem with a horrendously complex solution in Rust. In responding to a simple GET request you have to lean on both function annotations and a macro. That code would be so much nicer in Ruby.


Would you post equivalent code in Ruby? (that, like the above, does validation of inputs)


If you need input validation like that, which you normally wouldn't in Ruby because your models should do it for you, I'd probably use grape or something similar:

  require 'grape'

  params do
    requires :age, type: Integer
    requires :name, type: String
  end
  get "/hello/:name/:age" do
     "Hello, #{params[:age]} year old named #{params[:name]}!"
  end

In any case, my beef is more with the macro than with the function annotation, which is rather spiffy. The macro hides the fact that string manipulation in Rust definitely requires a little bit of manual reading. And I feel that as soon as your app becomes more complex, Rust will just start getting more in the way. And to an experienced Rust dev that might not be a big deal, because an experienced dev knows how to work with strings or which memory management strategy to use, so it won't bog them down. But if you're just working on something and you quickly want to whip out a service that tells people what age they are, I'd definitely go for a quick Ruby or Go service.


One of the things this API can do for you is not give you just strings, but fully typed instances of structs. And you don’t need to do the conversions yourself. So you’re not wrong, but it’s easier than you may imagine.


Not exactly the same, since there's no UInt8 in Ruby.

    require 'sinatra'
    get %r{/hello/([^/]+)/(\d+)} do |name, age|
      "Hello, #{age} year old named #{name}!"
    end


I am guessing age would be a string though?


yes, the regex part handles the validation.


> the cognitive load required

There's no cognitive load required to code in C++. You just don't know the language.

There are no easy tutorials or learning materials for it, but it's not hard.


I'm not sure if this is sarcasm, honestly. C++ is one of the hardest languages to use, full of edge cases; it blows my mind every time I pick it up.

> You just don't know the language.

Nobody knows the language haha. If anyone tells you they know C++, they're absolutely wrong. This has been, IME, the easiest way to know if a candidate is over-stating their resume -- if they say they "know" C++.


> C++ is one of the hardest languages to use,

No.

> full of edge cases,

Definitely no.

> it blows my mind every time I pick it up.

I'm tempted to say "programming is not for you", but I'll be charitable and just point out that you've never learned the actual language to make statements like that.

> If anyone tells you they know C++, they're absolutely wrong.

That's a load of crapola. It's impossible to "know C++" in the sense of knowing the ISO standard for C++; but that is also true of any other language with a real standard.

Languages without standards, of course, are worse in every way.


C++ (as a result of its C roots) has some of the most obscure and arcane library assumptions out there. Different platforms only need to roughly adhere to a standard, and this can cause wildly different usages. Additionally, it's the only language I've ever worked in that provides ample opportunities to blow off your own foot while copying a string. Lastly, `const`: what does it do, where, and why, and should anyone ever use it... C++ is one of the simplest languages at a surface level, but the standard is incredibly dirty.


> Additionally it's the only language I've ever worked in that provides ample opportunities to blow off your own foot while copying a string.

False.

    std::string x = y;
There's no way to mess this up.

Again, learn the language. There is no such thing as "C/C++". 99% of the problems stem from the fact that people don't understand that C and C++ are completely different languages, despite sharing one compiler.

Learn C++ as itself and your problems go away.

> Lastly `const` what does it do where and why and should everyone ever use it...

`const` is a contract that means "I will not modify this object here". Nothing confusing or complex about it, unless you're trying to shoehorn this concept into C semantics somehow.


>> C++ is one of the hardest languages to use

> No.

Most defenses I've seen of C++ boil down to something along the lines of "you're using it wrong (tm)".

I've seen enough C++ to take the side of "the design is probably wonky if it's that easy to do the wrong thing."


> Most defenses I've seen of C++ boils down to something along the lines of "you're using it wrong (tm)".

Maybe, but that wasn't what I said. The problem is that lots of people know C, but very, very few people know C++.

A big part of the problem is that both languages share one compiler, and people come from C thinking that C++ is just an upgrade with some features bolted on.

It's not. It's a completely different language, and if you approach it as "I'll learn a bit of C and then throw in some C++ features" you're setting yourself up for a world of hurt.

The hurt goes away if you forget C, start with a clean slate and learn C++ as a new language.


C++ has more historical baggage than any other mainstream language. Consider something as simple as initialization: https://www.youtube.com/watch?v=7DTlWPgX6zs


That's not true, and an example of it would be scoping rules. If you see 'x' in the code for a C++ method, there are strictly more places it could come from than in, say, Python or Go.


Having used both, I admire Rust as an intellectual exercise, but would write back-end web stuff in Go.

Go is an OK language, not a great one. The real advantage is that you have the libraries that Google uses for their own web server-side stuff. Those have been pounded on by billions of transactions, and the special cases have been handled. There's one well-tested library for each major web-service-related function.

Rust's libraries don't have that volume of use behind them. Look up "http" in the Rust libraries. You find "Note that this crate is still early on in its lifecycle so the support libraries that integrate with the http crate are a work in progress!"[1] (Rust enthusiasts may comment with a complicated excuse for why this isn't a problem, if they like. That's what the official document says.)

[1] https://docs.rs/http/0.1.18/http/


Your point isn't wrong, but you've linked to a weird package to make it; the http crate is an attempt to standardize some of the types used across multiple implementations. That takes longer than actually building the functionality, as the whole point is that you have multiple implementations, and then consolidate afterwards.

It's also not an "official document"; that's just a package that exists. It's not run by the Rust project.


Rust isn't there for http services yet (if you're being conservative). Come back this time next year and things will be different though.

A nice thing about Rust is that so much safety is built into the language that even random libraries tend to work quite reliably.


I don't think it is necessary to integrate http as a standard library. CGI allows using common web servers, and there is a separate set of ecosystems around those servers.


You still need most of the http libraries for parsing requests and generating responses with CGI. You just skip the TCP bit.


>>Go is an OK language, not a great one.

This is probably the most succinct and accurate description of Go. It's fairly decent and reasonably reliable at small/medium scale (which is why it is popular among the microservices crowd), but has no outstanding features.

IMO by far the biggest reason it became popular is that it is backed by Google (which makes it a "safe" choice). If it weren't, most people would not have heard of it today.


"Decent and reasonable at small/medium scale" is a weird way to sum up Go in a Go vs. Rust discussion, since Go is deployed at drastically larger scales than Rust is currently. I don't think there's any validity to the idea that Go tops out at medium-scale projects; in fact, a more common argument against it is that it makes sacrifices to facilitate large-scale programming that people don't like.


> Go is deployed at drastically larger scales than Rust is currently.

I think it's important to define which "scale" you're talking about here; Go is deployed at a larger scale in the sense of number of deployments, but both are deployed in production inside the largest tech companies in the world. For example, Rust is now at the core of all of AWS Lambda (and Fargate).

That being said, I do think that "decent and reasonable at small/medium scale" is an attempt at damning through faint praise, and is certainly not how I'd characterize Go in any sense.


Go is doing more stuff in large-scale deployments, processing more traffic, doing more transactions, than Rust is. I don't just mean there are a lot more Go backend projects (there are). I mean that some of those projects do a lot more work than Rust does right now --- I don't mean ubiquitous infrastructure things like Docker and k8s, I mean purpose-built components for specific large-scale applications/platforms.

For instance: "Today, most Dropbox infrastructure is written in Go." Or: "Today Go is at the heart of CloudFlare's services". Or: "Rend is a high-performance proxy written in Go with Netflix use cases [ed: all internal memcache] as the primary driver for development". Or: "The search infrastructure on [Soundcloud] Next is driven by Elastic Search, but managed and interfaced with the rest of SoundCloud almost exclusively through Go services." Or: "How We Built Uber Engineering’s Highest Query per Second Service Using Go.". Or: "Handling five billion sessions a day – in real time [at Twitter]".

I don't see where there's room for a "that said" here.

Both languages are obviously capable of scaling.


Yeah, as mentioned, I don't think that's true anymore. Historically, it has, but Rust has had a lot of high-profile, high-traffic deployments in the last year. But honestly, the high order bit is:

> Both languages are obviously capable of scaling.

This is clearly true, and I agree fully. I don't really want to argue "is this deployment really larger than that deployment", as it's kind of silly. The point is that both have demonstrated the ability to scale to the largest workloads, and so knocking either one of Rust or Go on this axis doesn't make sense.


I like Firecracker too, more than I like Docker (which is, obviously, Go). I don't have any problem with Rust. The parent comment, about Go being suited for small/medium scale programming, was wrong.


PHP is doing more by whatever metric you want than Go or Rust. Does that mean it is better?


That's not the question the thread is discussing (thankfully; "better" or "worse" languages is a stupid message board debate to have).


Proof being that almost no one cared about Limbo and Plan 9 keeps being referenced, while forgetting about Inferno.


If Rust had all the libraries you needed, would it be a better language than golang?


I have written a command-line log processor in Rust, an XML processor in Go, and some small microservices in both. My experience in working with both languages is that Rust has a steeper learning curve than Go, and it took me longer to feel proficient in Rust. However, once I felt proficient in both I felt much more productive in Rust.

It's my impression that you may need to have worked with functional languages and have an understanding of type systems before you can be efficiently productive in Rust, but for those who reach that point I think they can be much more productive in Rust than in Go, based only on my personal experience. One of Rust's biggest and best features, and its biggest barrier to grokability, is the borrow checker.

I think Go is a very good language, has awesome concurrency designs (I love Go channels), and is more accessible to more developers, but it also doesn't seem as flexible. I think the OP hit the nail on the head with the idea that Go is designed for the enterprise, where a sea of journeyman engineers works under a small number of senior engineers. I think Rust, OTOH, is designed for senior engineers, but is usable by less experienced engineers if they are mentored properly, i.e. 1:1 or at most 1:3.

Also, I like the tooling better in Rust than in Go. Rust tools are easier for me to install, and while the cargo build tool is not part of the language, it is the standard build tool, so I get to have my cake and eat it too: I can customize the build tool to my needs, and have different customizations for different projects. That can get out of hand, which is why so many hated Gradle. OTOH, the level of customization is why many shops with senior engineers loved Gradle (e.g., Netflix).

Examples of build tool install:

* Go: $ GO111MODULE=on go get golang.org/x/tools/cmd/stress

* Rust: $ cargo install cargo-stress

One thing I think both languages will need to watch out for is the P3 problem (Package Proliferation Pachyderm), where there becomes an ocean of packages, some too trivial to really be an effective package (e.g., a package with a single functor to add two numbers). I've seen this with Node.js, though the community has noticed it and is working to rectify the situation. This, like so many problems with language adoption and evolution, is a policy/community problem and not a technical problem. Both Go and Rust have great communities, so hopefully we can avoid P3 in the future.


I have zero field experience regarding go, but it seems to me that it's made to remove friction for teams.

Lean toolchain, lean build times, lean formatting (can you imagine the amount of time wasted on IDE config, commit syntax, and formatting-style debates?) so that large groups can just go to work.

ps: I'd love to work in rust. As you said, it seems very potent at making very expressive yet very efficient code.


I was trying to do some CLI formatting this weekend and discovered that an equivalent to leftpad exists in the Node API.

I was up to my elbows in troubleshooting (when you don't know how to solve the problem, describe it more clearly), so I didn't have time to dig into the history, but I got a chuckle out of that.


> Go is a better Java / C#, while Rust is not.

Mm, not really. Go is a less sophisticated Java / C#, in the same way that a bicycle is a less sophisticated motorcycle. There are situations for both, and there's nothing wrong with either, but sometimes you want or need to travel hundreds of kilometers in a day, and a bicycle isn't going to cut that for most people.


I hate bad analogies that have us arguing more about how well the analogy fits than simply debating the underlying question. When we talk about concurrency or the ability to make simple yet high-performing server apps, it's ridiculous to say Go is a bicycle compared to the motorcycle that is Java / C#.

User-space network driver? The benchmarks group Go closer to Rust with Java / C# way behind: https://github.com/ixy-languages/ixy-languages

This would be a very complex comparison and simple analogies do us a disservice. I think the author's blanket statement that Go is faster than Java / C# also seems too broad.


> User-space network driver? The benchmarks group Go closer to Rust with Java / C# way behind

That's a very generous reading of those benchmarks. What you said is true in the latency benchmarks, but in the throughput benchmarks, C# _beats_ Go at high packet rates and is much closer to Go than Go is to Rust at low to medium packet rates.


How do you get that C# is way behind Go on this one? If anything, those benchmarks show these groups of languages performing at roughly the same level:

1. C/Rust

2. Go/C#

3. Java/OCaml/Haskell

4. JavaScript/Swift

5. Python


I agree that my wording is too strong due to my memory of the latency results. As you increase the load, Go is closer to Rust/C than C# is (see the last benchmarks), and at a given load, C# isn't in the picture. It's fair to say that Go/C# are similar while Java is far behind, depending on the benchmarks that are important to you.


> and at a given load, C# isn’t in picture

And depending on how you draw the picture, Go might not even be in the bandwidth picture.


The analogy is better than you intended. It's a lot easier to kill or injure yourself with a motorcycle than a bicycle, too.

The Java/C# vs Go equivalent being causing your project to run over budget and be a trainwreck of bugs due to overengineering and runaway complexity.


golang has nothing to offer compared to Java's introspection capability, performance, tooling, tunability, and even concurrency (see java.util.concurrent). Not to mention maturity and widespread adoption.

It's just an overhyped, subpar language.


Concise, clean, effective and without cruft. That's the main reason for Golang's unstoppable growth.

Java is a language and computing platform from the 90s and it shows. It really needs to just stop and the mess around licensing only serves to help kill it.

I also wouldn't tout the concurrency of Java over Golang - having the concurrency model baked into the language and runtime means a clear advantage for Go and one which simplifies the communication among different agents.

Please learn some basics.


I have used both in production, and Java/JVM is far superior.

Concise is not a word I'd use to describe golang. I can't count how many times I've come across code that would be 1 or 2 lines in Java compared to 15 lines or more in golang. It's quite ironic given the unsubstantiated claims around golang. Lots of cruft: "if err != nil" littered everywhere is barely scratching the surface.

Java is getting a green thread implementation by means of project Loom. However, the JVM already handles very high concurrency systems in production by using libraries like Akka, Vertx, and Reactor. It is already used by major corporations to handle extremely high load systems. It's already proven itself over decades.

I don't know what you mean by Java being a computing platform from the 90s. If anything, golang is at a similar level as Java when it was first released (no generics), except worse (error prone error handling, poor design of interfaces, and many bad and unsubstantiated design decisions). Not to mention the JVM having state of the art GC, as well as performing optimizations way beyond what golang is capable of.

OpenJDK is fully open source, no licensing there. Not only that, but Oracle has open sourced previously closed source projects pertaining to the JVM and tooling around it.

The main reason for golang is the hype behind it. Proof is that some of the same authors worked on its predecessor a long time ago, and nothing ever came out of it because they didn't have the Google brand behind them. People today follow hype without substantiation.


You're arguing a lot of abstract points here but let's have a walkthrough...

> I have used both in production

likewise, and I have found the opposite stance for Java/JVM based applications.

Terrible deployment story, terrible amount of tweaking and JVM "hacks".

> Java is getting a green thread implementation by means of project Loom

Well done to Java. It's getting features already commonplace in Golang. Bolting on Akka, Vertx... yeah, enjoy your Frankenstein's monster lol.

> no generics

Generics _aren't_ critical for a language, exceptions are a mess, and as for the JVM being state of the art... yeah. In the 90s it was.

> The main reason for golang is the hype behind it

I just fundamentally disagree - if that was truly the case, I'd have adopted Java back when it was hyped and relevant and would've moved onto Rust by now from Golang which never happened.


> Terrible deployment story, terrible amount of tweaking and JVM "hacks".

A lot of people these days are building fat/uber jars. All it takes to run your code is `java -jar foo.jar`. Can't get much simpler than that.

Tweaking is a plus that golang doesn't offer. The JVM runs an extremely wide range of workloads, and gives you the ability to tune it accordingly, e.g. whether you care more about throughput vs latency. Compared to golang where this is extremely limited.

It's not surprising that a huge number of data processing workloads run on the JVM (regardless of implementation language).

> Bolting on Akka, Vertx... yeah, enjoy your frankensteins monster lol.

That's not an argument. These frameworks are mature and battle tested, not to mention built on sound principles (e.g. Akka is similar to Erlang's actor model, which includes supervisor capability and remoting - nothing like this exists in golang).

> Generics _arent_ critical for a language, exceptions are a mess

They're not critical in the same way "functions/procedures" are not critical, but you're going to end up with a messy code base when it comes to reality. Again, the number of times I've seen what amounts to map/filter calls littering the code base, making it more difficult to read (not to mention error prone) is too many to count. Generics fix this. Funnily enough, it's golang that chose to disregard best practices from the 70s.

> and as for the JVM being state of the art... yeah. In the 90s it was.

Tell that to Google, FB, Apple, Amazon, and many more who run their critical infrastructure on the JVM. The introspection and monitoring it provides is literally second to none, not to mention performance, tunability, hot-swapping, rich ecosystem, and many more.

> I just fundamentally disagree

You can, but the fact remains that some of the same authors worked on a golang predecessor which never went anywhere, precisely because it didn't have the Google name behind it.


but but... anybody can ride a bike and you don’t need to pay for gas. and it’s better for the environment. think about the new people joining the team.


Fortunately, I've used Go, Kotlin and C# in production, building systems dealing with millions of rpm. I can correlate my experience with different parts the author has described, but I would disagree with a few parts, especially the simplicity part. You see, the so-called "simplicity" comes at the cost of extra verbosity and pain, e.g. no generics; I have to add 3 extra lines of code, and repeatedly do casting and `ok` checks, just to get a value out of an in-memory LRU cache. Now some folks might argue this is the right way to do it. IMHO generics are one example of the compiler slacking on me and putting the burden on the developer to maintain those extra lines of code. I do agree about C#'s breakable TPL model where you can choke threads; I think that problem exists due to Tasks being slapped onto a legacy runtime down the road. If they were to reinvent another language, I bet nobody with sane ideas would ignore the async programming model. Kotlin is one such example.

I can keep going on and on, but to cut it short: yes, I would write a high performance router, reverse proxy, or a very simple micro-service in Go. But I would never recommend anyone write a high business complexity service in Go. It's a matter of time before somebody writes a transpiler that takes these shortcomings of Go and fixes them (like Kotlin, Nim etc.).


I'm honestly conflicted about generics. I see your point, and I have felt the need for them in the past. My main concern is that, as of now, the lack of generics forces people to rethink their approach to development a little, while with generics I fear there will be a proliferation of not-always-great patterns copy-pasted from equivalent Java/C# code.

The Go community has a lot of good ideas on designing interaction through interfaces that I really hope won't get lost in Go 2.

That said, yes, casting interface{} manually over and over is stupid, let's hope we can get the best of both worlds.


I've moved from Java to Go and I appreciate it very much. It's very practical for day-to-day development. The 2 killer features for me are:

1. the built-in tooling (cross-platform building, profiling, formatting, test coverage, etc)

2. the standard library. I've written services that have convoluted TLS certificate handling, encryption and REST calls and never had to look elsewhere.

The language itself is good, though 2 gripes for me:

1. x.Y. At first glance, it isn't obvious whether this is function Y in package x, or method Y on object x. Package::function would be better IMHO.

2. The ':=' assignment with inference operator has sometimes led to unintended variable shadowing.

edit: another one comes to mind. The usual ... generics, pretty pls?

BTW, the author claims Go is faster than Java. Is this true, because it didn't used to be?


> 2. the standard library. I've written services that have convoluted TLS certificate handling, encryption and REST calls and never had to look elsewhere.

Is this going to be an issue in a decade or two? http://pyfound.blogspot.com/2019/05/amber-brown-batteries-in...


That's a good point. I suppose "usefulness" in today's world reflects the kind of problems being solved today. eg Java had CORBA support for years, just because in the late 90s, distributed objects were going to be a thing. XML/WS support is still there, but it took a while to get JSON/REST, etc.

If you want to remove stdlib packages, I guess you would need Go to ship a tool that automatically rewrites sources to point those package imports to a standard "golang/x" location (or something) that provides the same package.

Or alternatively, some kind of indirection via the go.mod file.


> BTW, the author claims Go is faster than Java. Is this true, because it didn't used to be?

I very recently ran an evaluation that included load testing feature identical business focused microservices written in both Java and Go. These are IO bound processes. The results are that, in this context, Go is on par with Java when it comes to performance (i.e. throughput and latency).

http://glennengstrand.info/software/architecture/microservic...


> 2. The ':=' assignment with inference operator has sometimes led to unintended variable shadowing.

I've been bitten by this once or twice. It is helpful when you declare and assign multiple return values at once.

eg:

    g, ctx := errgroup.WithContext(ctx)

there is of course a shadowed variable above. In practice it hasn't really been an issue for me, but I can see the foot gun.

The thing I like most about Go is the resulting output is a lot easier to follow. Maybe that's just because I am more familiar with Go than Java. Java seems to have endless abstractions that are not intuitive to me


In my experience, the multiple-return `, err` convention and the `:=` syntax sugar are constantly clashing. It's confusing because the `err` is reused, but still has to be created at some point; and because `:=` is allowed even if only one variable is being created, but forbidden if you're only assigning.

It doesn't always lead to a bug, but it's always frustrating. Even when restricting the problem to a single (non-error) return value, there are 6 combinations to consider:

* What do you do with the non-error return value? (none/assign/create)

* Is it the first time you've checked an error? (yes/no)

   none, yes    |    err := f()
   none, no     |    err = f()
   assign, yes  |    var err error; r, err = f()
   assign, no   |    r, err = f()
   create, yes  |    r, err := f()
   create, no   |    r, err := f()
The shadowing case comes up infrequently enough to surprise you when it does. You've been trained by the other examples to change `=` to `:=` when you see a certain error, and you don't always get a warning about the shadow this creates.


This is annoying and should be fixed: err should be pre-declared along with the stack. := should both allow assignment without creation and NOT create shadow variables (which could still be made explicitly with var). It would be a poor argument that var and := show where declaration happens. The := rules seem made for the clarity of the parser rather than the programmer.


Those changes would break existing code, so I don't think they'll go for that. I'm slightly hopeful that whatever Go 2 does for error handling will dodge this issue by not making you declare and re-use the `err` variable, but I don't remember enough about that proposal to tell if it would actually help.

In general, the trend of making things easier for the compiler rather than the developer is a thing that annoys me about go, you're right to point that out


This, Rust is awesome but its standard library is truly lacking.


What do you miss in the standard library?

Fwiw, Rust attempts, by design, to not be batteries included, as this would tie Rust to specific architectures, or operating systems – or even require Rust code to run on an operating system, which doesn't have to be the case.

So, yeah, as a Rust dev, I pretty much always rely upon external crates. But hey, that's what they're here for :)


When I was playing with Rust a while back, I recall there being no easy way to generate a random number using the standard library.


A lot of what you might find in the standard library of other languages is offloaded to crates, by design in order to keep the core Rust footprint small.

For random numbers, there is the rand crate, described here in the Rust Cookbook:

https://rust-lang-nursery.github.io/rust-cookbook/algorithms...


The problem is that it is easy to trust standard libraries, but third party dependencies? For something this basic, you'd think Rust could have included it in its std.

It makes no sense to call out to a third party, and maybe a broken one, when you just want to generate a random number.

The worst part about Rust is its standard library, and that's not a secret.


rand is the first thing that comes to mind, json parser, any crypto, etc.

Take a look at the golang standard library.


I feel exactly the same about your two gripes! It seems odd that a language that considers ternary operators too risky to give devs makes it so easy to accidentally shadow variables.

And the package thing gets annoying because it's natural to call a package that deals with foos the `foos` package, but that's also the natural name somewhere else for a slice of foos.


> BTW, the author claims Go is faster than Java. Is this true, because it didn't used to be?

I think for certain use cases C# or Java will be faster. There was a post here the other day about a network driver written in C, Rust, C#, Go, Java and a few others, and the C# one was faster than Go. Can't remember if Java was faster or not.


> The C# one was faster than Go

As pointed out over here (https://news.ycombinator.com/item?id=20984503), compared to C#, Go was faster (higher throughput) for smaller batch sizes, and it always had significantly lower latency.

.NET Core is particularly good these days, so I'm sure there are times when C# is definitely faster than Go, but the network driver isn't a great example to support that.


Are we looking at the same charts? Go and C# are practically on top of each other for smaller batch sizes, then C# pulls ahead significantly.


If you’re looking at the latency chart, higher is worse. The right hand side of the C# graph is not a good thing.

The throughput graph doesn’t show C# pulling ahead significantly. Significant is how far behind Java is from Go and C# in that benchmark.

Also be sure you're not looking at JavaScript on the throughput graph, since it is colored very similarly to Go.


[Another] keen observation on that network driver thread is that performance is bound by memory copying, for which Java, C#, etc. can all be about the same.

FWIW:

Back in ~1998, I wrote a VRML browser that was faster (FPS and event loop) than Sony's. Benchmarks showed Sony's was then the fastest (publicly available). They used 'C'. I used Java (JDK 1.2). We both used same OpenGL stack.

Same kind of deal as a network stack. Most of the FPS was due to the graphics card and the use of OpenGL. Java's JNI added a minor performance penalty.

As for the all the other stuff, like reading files, parsing, user interface, I just figured Sony's VRML team sucked (worse than me).

Update: Others made the point about latencies.


> BTW, the author claims Go is faster than Java. Is this true, because it didn't used to be?

Not for any meaningful deployments. The amounts of optimizations done by HotSpot completely dominate anything that golang does (which isn't much). In golang, not even function parameters are optimized to be passed in registers, let alone aggressive interface devirtualization, etc.


I tend to agree with the message in the article, even as someone who's in the past been a bit overzealous about evangelising Rust. The most striking point to me, though, was the one about not using tribalistic names like Rustaceans and Gophers. These never sat well with me because they sound silly, but I also see how they could be inadvertently reinforcing siloing across programming languages.


I agree and it echoes what Paul Graham said here [0] about ego.

If your identity is tied to being an 'Xer' you are going to have a hard time working with 'Y', even if it is the best solution in this case.

Don't identify as an Xer or a Yer, but as someone solving problems and creating value.

[0] http://www.paulgraham.com/identity.html


Rust is lower-level; it is a replacement for C and C++. I'd love to work with Rust but there are only so many low-level projects where it makes sense: kernels, web engines, things like HTTP servers. Go covers those cases where you would like to use Python but it would be too slow.


Eh, I use Rust as a python replacement for most tasks (exception is for something super simple that's just a couple lines with pandas). Rust isn't bound to low level tasks - it certainly excels at them, but there's nothing stopping you from writing web services in it


>I use Rust as a python replacement for most tasks

Come again??? My wig has been snatched.


Rust's complexity is a bit overblown. I also now reach for it automatically if I feel like TypeScript (Node.js) won't cut it, even for < 1K LOCs.

I may be biased since I've been using it for almost 6 years, but you mostly don't need to worry about lifetime annotations which IMO is the really foreign part if you are already comfortable with functional languages, especially since the compiler is unbelievably useful.

Many well-known Rust crates also account for the majority of the best all-around libraries I've ever personally used in any language, and you can actually use them without a PhD in build systems, thanks to Cargo.

Re-usability is superb thanks to traits (composition over inheritance) and generics, even if still-immature domains tend to be overly generic for the end user (HTTP networking for instance, though it's getting better fast).


I'm beginning to lean toward the camp of Rust replacing Python, too--and I've been using Python since 1996-ish. My primary limitation in Rust is the state of the libraries--Python libraries and frameworks are simply a LOT more stable in general.

The first question is "What does Python do better than Rust?" Libraries/frameworks is the big one--there is no excellent Django-equivalent or Jupyter notebook-equivalent, for example (maybe there never will be by virtue of the language differences, but it is what it is). Python helper scripts generally ship in .py form so you can fix bugs in crappy Python scripts (if people start shipping Rust auxiliary scripts that's going to be a step backwards). Verbosity--okay, I'll concede that if I want things to be mostly GC, I'm writing a lot of extra wording in Rust.

And when I reverse the question: "What does Rust do better than Python?" The answer is a whole lot. Strong typing is a big one--because of Rust doing okay type inference I don't wind up with types repeated very often--it feels more like Python. Performance, sure, but I rarely care. Tooling seems to be a bit better--and Rust seems to be targeting to be a first class citizen on Windows. And, of course, concurrency--Python should have just sucked it up, taken the performance hit in 3.0 to remove the GIL and then optimized performance back over 7 versions (Good concurrency is becoming a standard in languages--I suspect Python is eventually going to concede this or start losing mindshare--and the pain would have been best suffered back at 3.0).

Yes, there are issues. If you have a doubly-linked anything in Rust, you're about to have a bad time (or not, if you REALLY need double-linkage--it's time to quit being pedantic, to break out "unsafe" and to encapsulate things). If you're writing a library, your lifetime annotations look like 300 baud line noise.

I really learn a new language about every 10 years or so--I have to feel a significant upside without a lot of downside in order to move (my last big change was Perl->Python in about 1996--my history of languages that I knew cold is assembly->C(with a C++ excursion for a bit--I got better)->Perl->Python). At this point, for one-off things that I'm writing a quick 100 lines, I'm starting to reach for Rust more and more and Python less and less.


To be honest, I find it extremely frustrating to write anything non-trivial in Python (or any dynamically typed language, for that matter).


Would be cool if Rust could cover that gap.


It's close enough with its syntactic sugar and type system. There's no gap at all, really.

One place where Java and C# win, and sometimes C++, is the availability of excellent libraries. Rust doesn't have as many and not as fleshed out either. Go has the same problem though.


Agreed, I write everything in Rust these days. I usually dogfood areas where normally I would recommend Go, because I'm curious what the end product will turn out like in Rust. So far, all the projects have been super easy and only as complex as I want them, with one exception:

Diesel. If a developer came to me and wanted to write something in Rust over Go, but didn't know Rust well and wanted to use Diesel, I'd warn them not to. The errors you can get from Diesel are insanely unhelpful and quite advanced in the type system.

With that said, I don't think Go really has anything comparable to Diesel either, so perhaps this is an unfair warning. If instead you use raw SQL with Diesel, suddenly it becomes just as friendly as Go's sqlx, and yet again Rust becomes largely as simple as Go.

Rust is in my view surprisingly simple. If you choose advanced features, you usually understand them. It seems to me that libraries are the biggest threat to making Rust feel insanely complex, e.g. Diesel's type system.

As an aside, I hope one day Diesel can achieve errors akin to TQL (which I found recently), as TQL really nails the error reporting from what I've seen, at least.


I'm having a similar experience with Rust. Libraries that expose complex types can be hard to use because mistakes lead to confusing compiler error messages with over 100 lines of mostly irrelevant details. I can use libraries that expose complex types, but I have to be prepared for a steep learning curve for each such library. On the other hand, there are many libraries that expose only simple types and I can use them easily.

BTW, I love how the Rust compiler uses all CPU threads. It's fast on my 8 year old Ubuntu desktop with 6 cores / 12 hyperthreads.


I agree. I made my own pseudo-ORM layer instead: a small Python script that spits out Rust structs and such, and I cover the rest with traits.

It's not bad.

I think for RDBMS work Rust lacks something like Python's DBAPI interface. There is too much reimplementation at the low level (i.e. I adapt postgres and sqlite, and frankly the two drivers are dissimilar in unnecessary ways).


Garbage collection makes Java, C#, and Go remarkably more productive for most programs.


Until your memory blows up or the GC stalls become a performance issue you have to solve. And by then it's very hard to fix as it got interwoven in the design of your application.

Rust forces you to think about memory use and ownership ahead of time, but after some short practice it really gets out of the way and does not become a problem, while avoiding bad code nasties like interlinked singletons, cyclical references or random shared_ptr held forever. Additionally the explicit memory model makes interacting with C (and often C++) code much more straightforward.

Source: had to wrangle that performance issue on Android both in Java and C++. Had to help JVM not stall forever in IntelliJ.


I've done a bunch of performance work.

For many applications I've had to speed up, the performance issues were either localized with low hanging fruit, or were architectural, where layers of abstraction led to translations between impedance mismatches throughout the whole application -- transforming an image 4 times between different coordinate spaces before drawing it on the screen does not make for a fast application.

A language change isn't going to solve that. GC pauses have almost never been an issue: It's irrelevant for servers. For UI, most code just doesn't end up allocating that much in the UI thread. Yeah, for things like games that do a lot of work and need to update at 120fps, GC can be a problem. I generally don't write that kind of code.

I'd really rather have a GC for 90% of all code that I ship.


For Kubernetes in general this has been my experience over the last 5 years (and the year preceding on Docker and other "learning Go" projects).

For web style apps with moderate to heavy client complexity (Kube APIs, client libraries) I have appreciated being able to start with the GC; when we hit a scale cliff, go and remove 5-10 stupid allocation patterns (inlining, value types, removing pointers or replacing them with internal index references so the collector ignores them), get another 10x scale from that, then go back and do architectural improvements, get another 10x, then trim some fat around the edges and get another 5x.

You definitely hit a wall at some point (comparing the best that you can do in Go with protobuf / JSON relative to, say, Serde) where the remaining 2-10x efficiency is not achievable. That is where I'd be most interested in generic-like constructs in Go where I can type and define down. It's hard to beat the clarity and brevity of Serde; no one in Go has done it at anywhere near the performance.


Most modern programs are written to be short lived, so memory leaks aren't a showstopper (just kill the instance if it's a server or refresh the page if it's a client)


> but after some short practice it really gets out of the way and does not become a problem

It's similar in C#. Once you have some experience writing latency-sensitive .NET code like games or realtime multimedia, GC works quite well in practice because you learn how to write code in a way that doesn't stress the GC too much. People who don't care about latency are still able to get their stuff done.


>> cases where you would like to use Python but it would be too slow.

> Would be cool if Rust could cover that gap.

To really cover most of that gap, writing Rust would need to be as easy as writing Python, since that's one of the main selling points of Python. But Rust's massive benefit, compile-time memory management, is definitely not as easy to write as code that uses GC.

GC has overhead that you sometimes can't tolerate, but it's just simpler to write. If you have a graph of objects, it's easy to do in Python or C# or JavaScript. In Rust it's more complicated, because it's hard to do that at compile time!

The huge benefits of Rust are directly relevant for systems code, but for most Python code it's probably not the best tool.


I agree with the author that some Rust zealots are overdoing it and don't do their community a huge favour, they were one among several reasons why I chose Go instead of Rust for a larger project I'm working on. It's especially annoying to talk to the fanatics when you know Ada fairly well. (Most of them seem to come primarily from C++...?)

Like the article suggests and was stated numerous times, Rust and Go are not competing languages, they have different use cases. Rust is a good choice if you want to write a rock solid and secure library to replace a C++ library, or instead of writing it in C++ in first place. Go is a good language if you want to write a web service and want to rely on many existing libraries to cut down development time.

Both are great languages if you know them intimately as an expert and are deeply familiar with the common frameworks. Almost every language is great in that case.

However, for my use cases Rust would be a bad choice. Relying on GC is not only perfectly fine for my performance requirements, it also makes life much easier.

Personally, I think that both Rust and Go are a step back when we're talking about pure language features. They are both needlessly and artificially restrictive in comparison to good old languages like Ada and CL. But in the end, the frameworks and 3rd party libraries are more important anyway.


I was having a conversation the other day with a principal JavaScript engineer who's been in the industry for 20 years. I asked him, "What's the use-case for Node in industry? It's so much slower than other server platforms, which in the age of the cloud translates directly to real expense. Is it only used by startups who need maximum agility?"

His answer was that for larger companies, server costs are orders of magnitude cheaper than payroll. It's often much more valuable for a company to improve their development process than server performance. So high-productivity (and ease of hiring) technologies like Node and Go can end up being a really good business tradeoff in terms of actual dollars, even if they result in double or triple the server costs.


> Go is faster than Java / C#, more memory-efficient than Java / C#, and definitely has better concurrency than Java / C#

C# is a great language and has a ton of modern features. It's also quite a bit faster than the author is giving it credit for. In fact there was a network stack written in it that was posted here last week that had it beating Go by an order of magnitude.

I don't really agree with the statement that 'Rust is a better C++' either. It very much can take the place of C in many applications also.


"beating Go by an order of magnitude."

This is not what happened. Go was faster until batch size was > 16, then C# took over, but even then C# latency was 2-3x higher than Go in every benchmark.

Also C# / Java had to use C code where the Go driver was pure Go.

https://github.com/ixy-languages/ixy-languages/raw/master/im...

https://github.com/ixy-languages/ixy-languages/raw/master/im...

At 20 Mpps the C# latency was no longer on the graph:

https://github.com/ixy-languages/ixy-languages/raw/master/im...


I think it's fascinating that someone could look at those graphs and come away with the conclusion that c# is an order of magnitude faster than go.


On the repo page, there’s a latency graph, and on it there are two green lines. One is for Javascript and has poor latencies. The other is Go and has some of the best latencies. Because the Go one is clustered with other overlapping lines, I think people often misinterpret the Javascript line as the Go line.

https://github.com/ixy-languages/ixy-languages


It's also fascinating that someone could look at those graphs and say that "Go was faster until batch size was > 16 then C# took over," when Go was barely faster but C# was significantly faster.


Well, in my defense, I wasn't looking at the graph when I wrote that, I was trying to remember a graph that I saw a week ago.


C# had to use C code due to lack of .NET knowledge of the authors as discussed on the other thread.


Would be interesting to get a PR then and see the difference.


If you go to the other thread someone posted a GitHub gist showing how to use mlock() from C#, which apparently was impossible to do.


There would be 20 fewer lines of C?


Yes, interop and Span<> should be fine...


As if you have a choice not to use pure Go code; Go doesn't play nice with C.


Cgo is a very well supported paradigm and many projects use cgo copiously.


I considered using Rust for a project, but I ended up going with C# because it appeared to have better support for Oracle database access and I could easily have a nice Windows desktop UI. C# seems more mature or fleshed out, but some of that may just be due to the "simplicity" of Go.


GUI is definitely an open problem in Rust, but to be fair, cross platform GUI is hard. In all of programming, it seems that Qt is about the best that we have when it comes to cross platform GUI, and I don't think most Rust developers care enough about Windows to write and maintain native Windows-only GUI libraries. C# (or I guess F# or any of the other .NET languages) is definitely a better fit for Windows UI programming.


If I was doing Windows programming I would definitely write a C# GUI, every day and all day, over what's currently available in Rust, but still do my business logic in Rust :) . I simply adore the language at this point.


I agree. .NET is not bad at all. I work with F# and like the results.

BUT

It's easier to make a fast, memory-efficient app in Rust or Go than in .NET.

The "defaults" of a language matter a lot, and what it gives you for "free" has a big impact, especially for code written by less skilled OR skilled-but-tired developers.


Go is garbage-collected just like .NET.

Rust indeed always pushes you towards efficiency :)


Not necessarily. I've moved C++ code to C#, and it got much faster. The reason is because of copy (and other flavors of) c'tors that needlessly copied data because it was either happening unbeknownst to the author, or it was easier than dealing with the consequences of proper aliasing semantics. I have a feeling that this is not uncommon. So IOW, bad manual memory management is going to perform worse than modern GC systems. Pretty much the same as an automatic transmission on a high-end sports sedan -- unless you really know what you're doing, the AT is going to do a better job of shifting than you are.


> Go is garbage-collected just like .NET.

And I bet the .NET GC is much better than Go.

But what I was trying to say is that it is very common in .NET to build layers upon layers of abstraction. .NET is still largely used by a lot of "enterprisey" developers with bad or non-existent training.

With Go you get a much simpler deal: you have structs (like POCOs in C#) and you pass them around. Sometimes you add interfaces.

It's akin to what is preached for good F#/C# code, but .NET still carries the inertia.

However, if you get into .NET with modern runtime (aka: around 4.5+) and idioms then I think the results will be very good.


'Rust is a better C++' works well in that, insofar as C++ aimed at being a better C, (IMO) Rust seems well positioned to do it better.


What features does Rust have that C++ lacks that allows it to be used as a C replacement when C++ can't be?


Memory safety?

C++ has lots of bells and whistles, some of which Rust doesn't have (classes, to name one), but those aren't necessarily more important bells and whistles.

Also, the Rust infrastructure is light-years ahead of linking in libraries in C/C++. Compile times are higher, but not horribly so.


> Also, the Rust infrastructure is light-years ahead of linking in libraries in C/C++. Compile times are higher, but not horribly so.

Sorry, but you've got this backwards. From a user's perspective, cargo seems like a much nicer build system than the more-or-less manual build systems of make et al. But that's because cargo is opinionated and inflexible: it really wants to be the primary driver of the build system, and it also doesn't quite support all of the things you can do with custom linking.

As a result, when you have large projects composed of multiple libraries, your choices are either to drive rustc manually and skip cargo altogether, or to create a single macro crate for cargo to link all the rust code as a sublibrary for linking into your main project code. You end up with something like this: https://dxr.mozilla.org/mozilla-central/source/toolkit/libra... to describe all of the Rust libraries you have in your project.


What you're saying is essentially that Cargo follows the principle of "easy things should be easy; hard things should be possible". CMake, Autotools, Make, etc. do not follow this principle; hard things are possible, but easy things are not easy. This might look like inflexibility, but I don't really see that. Cargo does let you do hard things; it's just that it has a "normal" workflow as well, which makes the simple case simple.


... is that bad?

Seems different. But it doesn't seem bad.

Granted, I've not worked on a project of that size, but it seems to me that if it works for 'normal' sized projects quite well and works as well as anything else for large projects (as in, nothing works perfectly for large projects, they all need tweaking), then it is ahead of the alternatives.


What it shows is that Rust's aggressive approach to static linking means that it doesn't slot into build systems the same way that C/C++ code does. This has benefits for smaller projects, but it probably discourages use in larger projects.

I don't know what all of the trade-offs are, so I don't know where the current situation sits in this space. https://gist.github.com/rylev/0e3c3895dcb40b6a1c1cf8c427c01b... is the minutes of a session between the scary-custom-build-system people discussing (among other things) the pain points of cargo with their build systems.


That only happens in an extremely small amount of cases. Almost all of the time cargo does nothing but make your life easier.

No tool is perfect, but the team seems pretty dedicated to improving things.


Sure, but memory safety is irrelevant when considering C++ as a C replacement (C++'s memory safety, terrible as it is, is not worse than C's). I'm purely curious as to why you'd be able to write something in either a) C b) Rust but not c) C++.


You are reading that backwards. The comment said that Rust is better positioned as a C replacement than it is as a C++ replacement. It said nothing about C being able to do things C++ can’t.


I think that's true. Also given the dominance of C in the unix eco system, it's a more important problem to solve.


> Sure, but memory safety is irrelevant when considering C++ as a C replacement (C++'s memory safety, terrible as it is, is not worse than C's).

Wouldn't it be fair to say that reducing memory unsafety as much as possible without hurting performance is a good goal to have when making a replacement for C? Assuming that's the case, it's relevant that C++ doesn't eliminate it as much as Rust when comparing the two languages as C replacements.


(1) Memory safety guarantees through the ownership model, and a clear delineation of unsafe code that doesn't follow it. Also, no UB (undefined behavior).

(2) Both Rust and C++ can be used to write most / all of the code that can be written in C, at a zero runtime cost. Their difference is in ergonomics, safety features, ecosystem maturity, etc.


Arguably, the point is precisely that Rust lacks features that C++ has which make C++ not as good as it could be at replacing C.


Rust has put more effort than C++ into figuring out how much of the standard library can be used in "freestanding environments" (to borrow the C terminology). C++ without exceptions, RTTI, and the standard library is already a pretty effective C replacement.

The other "feature" that C++ has in this space is that it has a bad reputation as far as the C stalwarts are concerned, whereas Rust has not gained that reputation.


C++ isn't very homogenous in the features that people use. I was speaking more to the fact that one can use Rust in situations where C traditionally dominates: OSes, embedded development. Its memory safety model allows you to section off unsafe code and craft safe interfaces while still operating on the bare metal. Like sibling comments mentioned, no_std is pretty great too.


I don't think the parent was saying you can use Rust anywhere you can't use C++. I think they were saying, Rust is not only a good C++ alternative, but also a good C alternative.


Assuming GP's post was not edited, I don't see where it implies that Rust can be used as a C replacement where C++ can't.


This is a good question, and it's a shame that none of the responses so far answer it.


The answer to that specific question is borrow checker (i.e. memory safety) and much better type system.


Those things can't be essential in a C replacement, because C doesn't have them. The statement implies that there are things C can do that C++ can't but Rust can.


I feel this article so hard on so many levels. I've experienced Dante's Enterprise.

BUT, regarding what I feel to be a false equivalency: is Go truly right for the exact kind of Enterprise Mush that the author is describing? The claim in TFA is that Go was invented at Google to solve Google problems. And yet, Google is famous for having a very high bar for engineering hiring, with a strong focus on CS prowess.

So, is the kind of code mush that gets created at Enterprises who put production code written by one-week-of-pluralsight Junior devs headed by analyst-preached-cargo-culting managers - the same as - the Software produced at Google? If not, does Go still fit that former zeitgeist?


Hello fellow lost soul.

I don't ultimately know, I think some enterprises are doomed to fail regardless of the technology they end up choosing.

In some ways their inevitable doom is capitalism's greatest gift to society, but I've worked in a shop where we had both Go and C# at the same time, and I would have quit way earlier if I had to work on the C# code full-time. Both green-field projects btw.


Go's hidden feature is that there are no personal dialects of Go, not even in the whitespace, because everyone uses "go fmt".

As such, you look at it, you see what it's doing, not how clever the programmer was. And you can feel free to dive in and edit.


I write golang at my dayjob and rust by the night. Rust has the same with rustfmt. Run "cargo fmt" on your working directory and you're done.

NB: When I was writing C++, we had clang-format setup to do the same for the C++ code. This advantage is in no way unique to go.


I haven't written a ton of go, but my read on the language is that it's built for large teams with lots of turnover. It makes it really easy to write boring, readable code and it makes it really hard to write code which is too clever by half.

If I am choosing a language for a personal project, or for a small team of developers I'm really confident in, I'm probably going to choose something which is more fun to write. But if I had to recommend a tool for a large shop where long-term maintainability regardless of the team makeup is more important than programmer happiness, Go seems like a great option.


> I'm probably going to choose something which is more fun to write.

Fun-to-write code isn't always that great to read, and your code will probably be read more times than it was written. I get it: I wrote Perl applications in a previous lifetime and I had fun doing it (Perl's text manipulation is unparalleled), until it was time to grok something a teammate wrote. I appreciate that Go is 'boring'.


Until about six months ago, the only programming I'd done was a little qbasic in my teens - over 20 years ago. What I love most about Go is that I can read someone else's code somewhere, then write my own code based on that, and then months later come back to my own code and actually understand it even without comments.

Maybe in time I'll get advanced enough where I'll miss features in other languages that other people argue about. But for the moment I'm having fun making useful programs in Go.


And yet Go programmers are often happy.

I think the reason is: Go code has a low level of abstraction-construction and a high level of explicitly writing the algorithm. You can see through the language, which is "boring" and does not distract your attention, to what it is trying to achieve, which is interesting.


Yes, I agree with this, but need to caveat it. Go is a fun language to write code in (it has the simplicity and directness of C, which is also a fun language to write code in, while avoiding the warts of C). Like C, it isn’t a fun language to rapidly develop features in, because it is very boilerplatey due to limited abstraction.

Incidentally, for this reason I’ve always enjoyed coding in C because in domains where I’ve used it (most embedded code) the required feature set is quite limited, so C’s slow pace of development isn’t a problem. On the other hand when working with Go server code at a startup, I was deeply frustrated because I just wanted a much more abstract language to rapidly write features in.


I can totally understand the appeal of Go, and I can understand why a lot of people probably really like working in it. I also really prefer boring code so that's not a knock!

It's just that in some cases I do like to work with a bit more expressive and low-level tools than Go has on offer, so it's not my first choice. But that's 100% personal preference.


what's that saying about ignorance and bliss again


”While the story above is 100% the result of my imagination, it’s no secret that the Rust fandom has a few overexcited members who feel compelled to enlighten every lost soul about the virtues of the Crab-God.”

Just like any programming language and probably most of all Go. The fanaticism around it is just next level.


I can't think of any other programming language discussed on HN besides Rust where hordes of people will doggedly praise it and nitpick any criticism of it to death. Maybe Haskell, although even that doesn't compare.

Every mention of Rust on HN feels like a public relations exercise where one is force fed an opinion until they learn to like it.

The Go community's actually quite relaxed, all things considered, which makes them look more self-confident and less desperate.


Was this supposed to be sarcasm?

I have actually never seen anything like what you mentioned when it comes to Rust. I think I have actually lost ~50-70 karma here on HN for constructively calling out some flaws and things I don't like about Go. Hasn't happened with any other languages though.

I think the cult mentality with Go is a special kind.

Btw. I'm Go developer myself and I do like the language, but I just don't get the "jihadist" attitude that seems to be far too common here on HN.


It's incredible how Rust stans get so much stick, when there is if anything more blind fanboyism around Go.


Indeed. This is from a comment on this thread:

And yet Go programmers are often happy.

If the programming language you use is your primary source of happiness, you might want to reflect a bit on what happiness means to you. :)


This is pointlessly dismissive. Is it somehow weird to think that a language you use and think about 8 hours a day, 5 days a week will significantly affect your general happiness? "yet Go programmers are often happy" is a vague and slippery argument for sure, but somehow this response is even worse in how pretentious it is.


A minor nit about this article:

> Go has great concurrency support, but Rust has provably-correct concurrency.

I believe it's incorrect to say Rust has provably-correct concurrency. It prevents data-races, but not race conditions: https://doc.rust-lang.org/nomicon/races.html.


I love Common Lisp, which is definitely on the other end of the spectrum, but still enjoy Go a lot too.

For me it is a perfect fit for writing automation, services and command line tools. It is stable, it is simple to use/write/read, it is easy to deploy, it has great IDE support and tooling, it just makes sense in the enterprise, as author himself notes.

I like Rust too, but compared to Go it has the feel of a research project. I think Rust has great future, I definitely see using it, but for now I will stick with CL/TS/Go.


Out of curiosity, do you use Common Lisp for work or just for fun? I get to do Clojure about half-time (it was an easy sell due to a lot of our stack being Java), and really like it, but haven't used Common Lisp at all.

If you do get to use it for work, how does it scale (programmer/code-wise, not efficiency-wise) for "real" projects?


I used it at work[0] for a few years and it was great, but I did not scale that project to large number of people, mainly because it did not need to.

Folks at Grammarly have a writeup about how they are using it, it might have more relevant information for you[1].

I would definitely consider using it again, but proper buy-in takes time, and I have simply been moving too fast since then.

I am still not a big fan of Clojure myself, but it definitely seems easier to sell than CL. In my humble opinion, if you are already a convert, the benefits of going from Clojure to CL will not be as great as those of going from Java to Clojure.

[0] https://news.ycombinator.com/item?id=13979002

[1] http://tech.grammarly.com/blog/posts/Running-Lisp-in-Product...


I actually went Haskell->F#->Clojure in terms of job progression, so I am ok with largely theoretical stuff.

The reason I ask is that Clojure is an easy-ish sell largely because there's a near guarantee that you will never be blocked due to lack of libraries, since you can mooch off of anything in the Java ecosystem (similar arguments can be made for F#). I hate Java, but it has been around a long time and is extremely popular, and as a result there is a library for virtually anything for it. As far as I know, there is no such guarantee for CL...unless I'm mistaken (which wouldn't surprise me).

When using it for work, did you ever get stuck because of a lack of libraries (e.g. JSON parsing, protobufs, socketing stuff, threadsafe collections)?


> When using it for work, did you ever get stuck because of a lack of libraries..?

Nope, I did find some to be less well documented than ideal, but pretty much everything had tests and/or examples making up for it. Have a look at Quicklisp[0] and explore yourself.

CL has many implementations[1], including the one targeting the JVM, so you can make use of Java ecosystem from it too.

CL has been around a long, long time. First edition of CLTL[2] was released in 1984! and at that point Lisp had already been in heavy use for almost three decades. The standard has not been altered since 1994, and likely never will, but due to its nature innovation continues in the individual implementations and in the community, and there is a well established culture of portability libraries.

[0] https://www.quicklisp.org/beta/

[1] https://common-lisp.net/implementations - Not an exhaustive list

[2] https://en.wikipedia.org/wiki/Common_Lisp_the_Language


I would choose Go to solve network I/O bound problems, Rust to solve CPU / memory bound problems, and Clojure to solve business bound problems: where mental cycles are more expensive than computational cycles.


From the article: "Go is faster than Java / C#"

This is more often untrue than it is true. There are times when the low-latency GC is beneficial, though.


That’s right, and this is confirmed by many benchmarks I’ve seen. I agree with pretty much everything in this article except the repeated claim that “Go is fast”.

Of course “fast” is relative, but I would reserve it for languages that are nearly as fast as their competitors in their segment. There are too many languages with ergonomics very similar to Go's that are much faster for us to meaningfully call Go “fast”.

That said, I don’t really think that’s a problem. I think people who use Go are often just happy it’s faster than Python, and that’s okay.

My personal dislike of Go comes simply from its unapologetic[1] embrace of default nullable pointers (the “billion dollar mistake”[2]): there is very strong theoretical (and practical) ground supporting the approach of Rust/Zig/Swift/etc. (using algebraic data types instead) as objectively better, yielding inherently more reliable results with virtually no ergonomic compromise[3].

In other words, in the 21st century, we know how to design statically typed languages that guarantee the impossibility of null dereference exceptions (not counting bugs in external libraries from other languages). And we can do this without any runtime performance or code ergonomics compromise!

Therefore there are no good excuses anymore for any statically typed language in the 21st century to not provide this extremely beneficial guarantee.

[1] There are no plans to fix this, ever: I’ve seen entire articles written by members of the Go team not just defending “all pointers are nillable”, but encouraging this as an idiomatic Go style of coding.

[2] https://www.infoq.com/presentations/Null-References-The-Bill...

[3] The ergonomic difficulties of Rust come from the borrow checker, not from their use of algebraic data types to replace nullable pointers.
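To make the algebraic-data-type point concrete, here is a minimal, hypothetical Rust sketch (the `find_user` function and its data are made up for illustration): absence is modeled as a value of `Option<T>`, so the compiler forces the caller to handle the "no result" case before the inner value can be touched — there is no way to accidentally dereference null.

```rust
// Sketch: a lookup that can fail returns Option<T>, not a nullable pointer.
fn find_user(id: u32) -> Option<&'static str> {
    match id {
        1 => Some("alice"),
        _ => None, // absence is an ordinary value of the type
    }
}

fn greeting(id: u32) -> String {
    // The compiler rejects any attempt to use the &str without first
    // checking which variant we actually have.
    match find_user(id) {
        Some(name) => format!("hello, {}", name),
        None => String::from("no such user"),
    }
}

fn main() {
    println!("{}", greeting(1)); // hello, alice
    println!("{}", greeting(2)); // no such user
}
```

The same compile-time guarantee exists in Swift (`Optional`), Zig (`?T`), and others; the ergonomic cost is essentially one `match` or `if let` at each point where absence is possible.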


I replied to the parent comment on my experience, but I do agree that Java and C# can be very fast.

I also agree on the problem with null pointers, which is really ridiculous. Another complaint I have about Go is how for some reason Google decided to implement many web protocols in the standard library, but never really cared to get websockets right, to the point that their own documentation just sends you to a third-party library. That + QUIC/HTTP3 makes me want to take my tinfoil hat out of the drawer.

https://godoc.org/golang.org/x/net/websocket


I would add parametric enums to your excellent rant. When I first learned Go I thought its use of constants with iota was quite a clean approach for enums. But after spending some time with Scala, Rust and Swift, well, I was wrong. Being able to exhaustively pattern match is simply excellent. And like non-nullable types, this is a zero overhead language feature that is simple, feels great to use, and reduces bugs.

It feels like a real step backwards using languages without parametric enums. My litmus test when learning a new language involves porting across some plain text operational transform code. The go code came out about 40% larger than the rust and swift implementations for this reason. It was also much uglier and harder to read. Like, those extra lines were pure overhead.

Eg this rust code is beautiful: https://github.com/josephg/textot.rs/blob/bb14b4b483e7dace67...

And that’s prettier and about as performant as this C implementation of the same function (I think I somehow lost the Go code - but it wasn’t much better than this): https://github.com/ottypes/libot/blob/902470a22d3a99d9b776ce...

The equivalent javascript code (my go-to language!) is larger than the equivalent swift / rust code and, last time I checked, about 8x slower. Most of the gap in readability is this one beautiful feature!
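For readers who haven't seen parametric enums: here is a hypothetical mini-example (not the linked OT code — the `Op` variants and `doc_len_delta` are invented for illustration) of what exhaustive matching over a data-carrying enum looks like in Rust.

```rust
// Each variant carries its own payload; there is no separate tag field
// plus union-of-fields as you'd write in C or Go.
enum Op {
    Skip(usize),
    Insert(String),
    Delete(usize),
}

// Deleting any arm below is a compile error, so adding a new variant
// later forces every match site in the codebase to be updated.
fn doc_len_delta(op: &Op) -> isize {
    match op {
        Op::Skip(_) => 0,
        Op::Insert(s) => s.chars().count() as isize,
        Op::Delete(n) => -(*n as isize),
    }
}

fn main() {
    let ops = vec![Op::Skip(3), Op::Insert(String::from("hi")), Op::Delete(1)];
    let delta: isize = ops.iter().map(doc_len_delta).sum();
    println!("{}", delta); // 0 + 2 - 1 = 1
}
```

The Go equivalent typically needs an interface plus a type switch (with no exhaustiveness check), or a struct with a tag field and several mostly-unused members — which is where the line-count overhead described above comes from.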


Has there been a statically typed language with no null and no generics? I'm pretty sure that every language I've used that implements an Option-like type has it be generic. Maybe TypeScript, where you can have "number | null"?


Go is already getting away with a magic generic map type; they could have done it the same way, if they wanted to.


Go has generics (e.g. in its data structures). They just aren’t usable by the programmer. The same could have been done for optionals.


QBasic :-P.

Though all dynamic allocation happens by resizing predefined arrays.


Glad that you had a better experience than what I got. The statement you quote gains full meaning with the sentence that follows about how it's not necessarily a matter of how fast the language is intrinsically as it is about what people end up writing in practice in the midst of "real life" development.

In my experience C# is a great language by nature, but riddled with foot guns, bloated libraries that love doing runtime reflection and other organizational problems that typically surround the development process.


Interesting, given the Go footguns of nil interfaces, reading from closed channels, gc of goroutines, reflection via magic struct tags.


Apples and oranges. OpenJDK does have low-pause collectors in the 1ms range which also perform compacting. Go on the other hand buys its sub-millisecond pause times at the expense of fragmented heap and lower memory locality (an advantage the JVM then immediately wastes through pointer chasing...)


"Well, you could answer that Go is what you know, so that’s what you used to solve your problem,"

This is definitely an appropriate answer for the author to give. He wrote a small tool, and most folks writing a small tool would just run with what they already know.


Yeah. The whole premise of the article is "if someone trolls you, feed the troll".


I come from a network background where it is common to separate data plane (speed) from control plane (complexity). I think Rust and Go can fill one of these niches each.

Because I wanted speed, Rust became my choice. But that zero-cost abstraction does not extend to the programmer: the cost is in productivity. Although productivity in Rust is likely still better than in most older languages, C in particular.

I also like Starlark, very weak language in terms of features. That is its strength imho.

For the sake of context. Rust is my eleventh language, but I haven’t written much software, and most of it was ten years ago.


That's a very good point. There's a comment on Reddit [1] that outlines how this distinction is one way of seeing how Rust and Go embody two ways of understanding "C", I think you both make two excellent points, I'll keep both in mind.

[1] https://old.reddit.com/r/programming/comments/d50u9g/why_go_...


We've even seen this in shipping products; Linkerd 2 is exactly that: data plane in Rust, control plane in Go.


Maybe just me, but I feel this thread is just validating everything he’s saying. Literally every comment is an explanation/excuse why you should use Rust or why Rust is better at literally everything.

Disclaimer: I’ve had a “few” beers so I might talking with my ass, but damn...


Every article about Go you can CTRL-F "rust"


Honestly I'm a bit confused why these languages are constantly pitted against each other as though one has to eventually win out. They were each designed with different goals in mind and suit different tasks.


A good answer is: if you can afford it (and you often easily can for server side code), GC is a no-brainer as it offers memory safety, ease of use and very good performance all at the same time.


Go is simple & simplicity scales when it comes to development (learnability, dev-effort, tooling, etc). So from a manager's point-of-view he/she can hire/train people far more quickly with Go than Rust. As much as I love Rust and have advocated for it in my company I cannot argue on this point with my management.


As a devoted and long-time Rust user, I'll tell you: by all means, use the language in which you're most comfortable and/or which seems best suited for the task!

Rust is not meant to replace every single language on Earth. Rust is, as you mention, a better C++, with great predictability, great safety and great concurrency. That's a feature set that's critical for some classes of applications, but absolutely secondary for others.

It's a good thing we have more than one language to pick from :)


I think they have different applications. If I were to choose a language for writing a web service I'd use Go but if I wanted to write an opengl renderer I'd use Rust.


Memory safety is Rust's hallmark. Is memory safety such a concern with opengl renderers?


Graphics code has a long history of memory unsafety, in more recent years causing serious security issues when we tried to put OpenGL on the web with WebGL, resulting in every device driver bug turning into a kernel-level vulnerability in the JS code. You've got a lot of people writing code with more brains than discipline [1], writing in an environment where performance is priorities one, two, and three, writing in memory unsafe languages... yeah, memory unsafety in rendering code has been a big problem for a long time.

[1]: One of the most dangerous types of coder there is is the brilliant one, who has several times the working memory capacity of a normal person, and who can write 1000-line "functions" without breaking a sweat, and builds entire codebases out of such things. (Sometimes, perhaps the core loop of a program will be a bit of a monster, but the whole program isn't like that.) The cost of salvaging such things can be disturbingly close to the cost of rewriting such things, and indeed, the process for either may not be all that different.


Also, even disregarding security issues. Maybe you just don't want your video game to crash during a boss fight? Memory safety isn't just important for security; it can also prevent lots of annoying bugs.


Rust is more than just memory safety. For example, it also enables safe concurrency, which sounds like something you would want in a renderer.


> it also enables safe concurrency

Small correction, Rust provides a barrier against data access race conditions. It does not prevent deadlocks, cache hits/misses, work distribution bottlenecks, or any number of other concurrency issues.
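A minimal sketch of what the compiler does and doesn't enforce (the `parallel_count` helper is made up for illustration): shared mutable state must live behind a synchronization primitive like `Mutex` — removing it makes the program fail to compile — which is what rules out data races. Nothing stops you from deadlocking, or from making a stale decision between two separate `lock()` calls.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Several threads increment a shared counter. The type system requires
// the Mutex (a bare shared `&mut usize` across threads won't compile),
// so torn or lost updates are impossible. It does NOT rule out deadlocks
// or check-then-act bugs: between two separate lock() calls, another
// thread may run and change the value.
fn parallel_count(threads: usize, per_thread: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let mut handles = Vec::new();
    for _ in 0..threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..per_thread {
                *counter.lock().unwrap() += 1;
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let n = *counter.lock().unwrap();
    n
}

fn main() {
    // Always exactly threads * per_thread, never a lost increment.
    println!("{}", parallel_count(4, 1000)); // 4000
}
```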


Knowing very little about go, I jumped in to do some maintenance on a few go projects two weeks ago. I was able to do what needed doing within a few hours and implement a few simple changes. I did not consult any books or tutorials. But simply learned by example and did a few targeted searches to figure out basic stuff like "how do I assign a value to a variable", "what syntax do I need to use to create a struct". Mostly, it's all fairly straightforward.

The code base has a history of non-Go programmers doing work on it. It's not pretty but it works and is maintainable. IMHO that is impressive. I'm sure a few senior Go developers would be all over the code base to improve it. But as it is, it kind of does what it is supposed to without much fuss. Having looked at but not really used Rust, I'm pretty sure there's no way I could be productive in that language on a similar timeline. Unreadable Rust is a thing if you are not familiar with all its syntax, macros, type voodoo, and pointer juggling.

That being said, Go does look kind of ugly and verbose to me. I've never seen so many "if err != nil" statements before; the code I was working on is littered with them. Error handling is going on all over the place, along with a lot of conditional logic around it, which is a bit of a code smell. I also noticed a poor man's version of OO where you have structs and then a bunch of functions operating on that struct as the first parameter. I think I'd prefer the real thing. It also seems variables are mutable by default, which seems a bit backward these days. So, there's also a lot not to like.


> There are only two levels of visibility for variables, and the only concurrency model is CSP.

I think this is generally true, but in the interest of pedantry, I feel like I have to point out that Go does have "traditional" locks and mutexes and whatnot, if you're feeling masochistic and don't want to use channels for some reason [0].

[0] : https://gobyexample.com/mutexes


Love these points. They answer "which language is better for building enterprise services in the vast majority of cases." Go, is simply good enough and makes well-thought out compromises for "non"-zero cost abstractions.

For me, I'm a part-time solo "services" developer using Rust. I've built some terrible projects that I hate peeking into to add features and maintain. But, ultimately that's gotten better and the web-services story is maturing (Actix is somewhat unstable but easy to work with, and async/await will finally hit stable in a few months). I also write non-idiomatic, quasi-functional code that works for me but that others would scoff at or at least find hard to maintain. Rust, to me, is an enjoyable language to use and has great tooling (debugging being a major exception). I've become productive in it after a heavy upfront cost. And, although I haven't quantified this exactly, its speed saves me money considering my relatively small single server.

I think, as a sole-developer, the largest issue with using Rust for services is the package support. Python, Go and JS all have mature Redis packages, for instance. Rust's works, but it has some peculiarities particularly with connection management and some Redis commands. It's enough where I could get everything working with enough time. But I don't have all the time in the world. It's a similar story for lots of packages, even in Actix web. You often have to dig into the source code to figure things out. I'm trialing a "microservice"-like architecture solely because it's easier to do some things in other languages. Hopefully widespread WASM support hits soon but I'm a long-time single-server guy now trying to figure out Kubernetes because I find Rust enjoyable.


Well maybe the Redis packages problem can be solved, I started working on one for Zig [1], maybe I'll bite the bullet in the future and contribute to redis-rs too :)

[1] https://github.com/kristoff-it/zig-heyredis (far from complete)


To summarize the answer, Go was created as a better replacement for Java and other managed languages. Rust was created as a better replacement for systems programming languages (the author brings this point in the conclusion).

This claim however is somewhat misleading: "Rust is a C++, not a C". It's clearly neither of them; it's a new language. However, it's a good replacement for both C and C++. Maybe the above idea means that those familiar with C++ concepts can find Rust easier to understand. That's true: it uses major ideas from C++, like scope-based resource management (aka RAII), and so on. But it uses other ideas as well.

No language is perfect in absolute sense, and each has its own trade offs. But the above gives a rough framework to answer "why Go and not Rust" or "why Rust and not Go" in some particular situation.

There is always going to be some overlap area, where both are adequate enough, so any answer to such questions would be moot there.


This comment on Reddit I think explains why I think there is a big difference between C and C++ and why in my opinion Rust is not the best substitute for the former:

https://old.reddit.com/r/programming/comments/d50u9g/why_go_...


I agree with the view of "portable assembly", or to say it more correctly, language that allows high level of control. Rust at least aims at that, so it should be a good replacement for C.

Some things were still missing though, if I remember correctly something like handling memory allocation errors was an issue, which is important for embedded systems and similar scenarios. I suppose there was some plan to address that.


Tangentially, what language right now most closely hits the mark of Rust + GC?

I'm thinking Algol based syntax, expressive type system (sum types/generics/structurally typed structs), and optional mutability like Rust, but without the mental overhead of memory management.


I'd say Scala probably, but it leans very heavily towards providing escape hatches that let you use it as Java++, so it's easy to write code with mutability everywhere. This isn't much of a problem if you're disciplined about it yourself, but scaling across an organization, it's very tempting to treat it as Java++ and use it as such.


You might want to look at Kotlin, though the mutability options aren’t as good as Rust’s.


Can you describe (or link) a scenario where more advanced mutability options would be preferred / better practice?


This article looks pretty good: https://blog.stylingandroid.com/kotlin-mutability/

I would describe the difference between Rust and Kotlin as: Rust offers control over interior mutability and guarantees that there is only ever one mutable reference at a time, whereas Kotlin offers exterior mutability control, not all that dissimilar from final in Java or const in C/C++, though it is better.
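A small sketch of the Rust side of that difference (the `grow` function is invented for illustration): immutability in Rust is transitive — a non-`mut` binding freezes the whole value, unlike Kotlin's `val` or Java's `final`, which only freeze the reference while still allowing the object's contents to be mutated.

```rust
fn grow() -> Vec<i32> {
    let v = vec![1, 2, 3];
    // v.push(4);        // compile error: `v` is not declared `mut`,
    //                   // so even the Vec's contents are frozen
    let _ = v;

    let mut w = vec![1, 2, 3];
    w.push(4);           // fine: the binding is `mut`

    let r = &mut w;      // only one mutable reference may exist at a time
    r.push(5);
    // let r2 = &mut w;  // compile error while `r` is still in use
    w
}

fn main() {
    println!("{:?}", grow()); // [1, 2, 3, 4, 5]
}
```

In Kotlin, by contrast, `val xs = mutableListOf(1, 2, 3)` still permits `xs.add(4)`; the `val` only prevents rebinding `xs` itself.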


Crystal has been very nice on a project at work. Ruby-like syntax but statically typed, compiled language so a lot of the things that bites ruby programmers don’t even come up. It’s what I was hoping Go to be, it feels clean. It beats Go in lots of benchmarks, loses in others.


I would second this. I moved from Go to Crystal, and Crystal has been great to work with. Thanks to its rich standard library, the scarcity of third-party libraries may not be much of a problem, especially if you are looking to develop web applications and command line tools (two areas where I have used it). It also has a very friendly and helpful community.


Swift. Automatic reference counting still requires more developer attention (to break reference cycles) than a GC, but is still easy to get right. Too bad that outside of iOS development it's still unusable.


I agree with this assessment. Swift has a lot of the things that make Rust nice to work with, but favors usability over performance while Rust does the reverse.

> Too bad that outside of iOS development it's still unusable.

Sub-optimal for sure, but not unusable. It's being used in production on the server in several instances.


I have a swift web service running in production, it's definitely not unusable. You're right it's rough around the edges but it's early days.


D.

It's not based on an ML-ish type system, but the amazing metaprogramming capabilities allowed std.variant.visit (sum type matching) to be implemented in normal library code:

https://dlang.org/blog/2018/03/29/std-variant-is-everything-...

Immutability is not the default either… but on the other hand, there's @nogc and betterC mode, purity annotations, contract programming, C++ (!) interop, an option to use the "C-ish" linking model (to build with e.g. Meson instead of Dub, install dynamic libs system-wide, and so on), and again, just next-level metaprogramming.

You can do anything with Rust procedural macros, but they give you a token stream and you drag in the rust parser as a dependency and operate on the raw AST. That's hard and "special". With D metaprogramming you can, for example, just `static foreach` over the list of the current class's members directly inline where you want it.


IMO, Rust + GC = OCaml.


I'd say D, but it has optional immutability. It feels like C++ with GC more than Rust.


I’ve written a bit of Go and am learning Rust. In my toolbox I believe there will be a place for both.


> Well, you could answer that Go is what you know, so that’s what you used to solve your problem, but that’s probably not going to be a satisfactory answer.

It should be


It is, for sure.

As long as the tool gets the job done, I don't see the issue.

That's not to say some tools aren't better suited to a task than others, or that there isn't room for improvement, but, unless things are manifestly mismatched I don't see why it wouldn't be a satisfactory answer, especially for a "hey i wrote this cool thing" project.


Contrary to popular belief there is not one ring to rule them all.


As an experienced Java dev, I switched to golang for a few years and really liked it. Recently, I find myself on an "enterprise" job with Java. I really miss golang.

Boxed types? What a crazy idea that an int can be null. Just fixed a bunch of bugs related to that. String can be null? Just say no. golang's value types are much less error prone. You can go out of your way and pass a pointer to a string, but having to always worry about whether something is null in Java is a PITA.
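For instance, a minimal sketch of what I mean (the Config struct is just an illustration):

```go
package main

import "fmt"

// A hypothetical config struct: value-typed fields can never be nil,
// so there is no null check before using them.
type Config struct {
	Host string // zero value is "", never nil
	Port int    // zero value is 0, never nil
}

func main() {
	var c Config // no initialization needed; fields hold usable zero values
	fmt.Printf("host=%q port=%d\n", c.Host, c.Port)

	// Only an explicit pointer can be nil, and you opt into that risk:
	var p *string
	fmt.Println(p == nil) // a *string must be checked; a string never
}
```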

The concurrency story in Java is also horrible. So much effort is required to achieve simple things, and you're always making trade-offs with the heavyweight threading model. Thread pools? What is this, the 90s?

With golang, I never once messed with the gc. Not once. We have a gc related discussion about once a week with Java.

And the deployment story. So simple with golang. So much extra work for nothing with Java. Just give me an executable I can run. How hard is that?


The "enterprise software" bit is a drum I've been banging on for a while, and it always gets me into trouble with Java/.NET/OOP advocates, who usually sail under the "legacy modernisation" flag and point out that dependency management in Go is sub-optimal in closed environments.

But I think it will get there, eventually.


While I don’t think it should be the only consideration when deciding on a language to use, I think there is something to the idea that you should use a language that makes you happy. I know a lot of people that use Go that really like using it.


AFAICT this is a "Go is great" article, not a "Go vs. Rust" article as the title suggests.

It is OK to love and promote a language. Go is terrific and we have been waiting for it for a long time.

But so is Rust.

If you want to say Go is a better choice than Rust ("Why Go and not Rust" is the title, after all), listing features of Go ("...Go doesn't want unused variables or imports...") is not a convincing way to proceed.

Horses for courses and we should have stopped language wars in 1995!


I like to think that Go is C for the Cloud.

And don't let detractors get you down. No matter what you do, or how successful you are, there will always be critics. Ignore them.


> Go was created at Google to solve Google problems

Still, the vast majority of critical and infrastructure code at Google remains written in C++ and Java: languages with superior performance and ecosystems that have actually been proven to work for large-scale programming (as opposed to hand-wavy, incorrect claims of being so).

golang is more or less a hobby project with a lot of bad design decisions that went too far.


I like them both but honestly if you don't know which one to use for your task, you probably don't know them both well enough.


Because the Go standard library is nice - net/http is a joy to use


I'm not discounting your experience, nor am I trying to devalue the problem, but I think there is more than one reason why someone may ask "Why not Rust?".

In my case, I sometimes ask it to learn about particular real or perceived shortcomings of Rust, which may be fixable.

If I believe that Rust should address the problem, and someone chose not to use Rust (which is different from choosing to use Go!) I am curious as to what obstacle the person hit.

We should tame the zealous members of our community, but don't cluster everyone into the same bucket :) Some of us are just trying to build a great ecosystem and that requires us to find blindspots and notice hard-to-observe-from-inside problems.


> I think there is more than one reason why someone may ask "Why not Rust?"

Strongly agree, that's how I concluded the blog post :)


Go was designed by Ken Thompson, arguably the best programmer alive.


There is a lot of why go in the article, but not much about why not rust. I couldn't tell why the conclusion was that Go is better than Rust at certain things.


"The devils you know vs. the devils you don't." It's been keeping C alive for decades. Now Go is an old boy, too. Rust is the new kid that seasoned developers don't trust: it doesn't have enough features, and yet it already has too many.


I always thought Rust was for system-level stuff, like writing a database engine, and Go was for more backend server jobs, like distributed processing, etc.


Try creating a type in Go to represent arbitrary JSON.


interface{}


very safe option


It's 100% memory safe. In other languages you would use the same thing (some variation of Object).


There is no safer option anyway if you can be given an arbitrary JSON object in the first place.


you mean the interface{} or the json?

:)


Why are people upvoting this religious language war drivel? The arguments aren't even good. Heck, they aren't even substantiated, not even badly.

I'd like to offer constructive criticism, but really, there is nothing to salvage here.

- Go is much easier to learn than Java or C#.

- The Go community regards as anti-patterns many abstractions regularly employed by Java / C#.

- With Go, it’s easier as a junior developer to be more productive, and harder as a mid-level developer to introduce brittle abstractions that will cause problems down the line.

Huge red flags all.


I haven't used go much yet but I can say that the O'Reilly book on Rust is a really enjoyable educational experience.


I love this. I haven't found many articles comparing and contrasting languages this way, and I really appreciate it.


"Rust is a better C++"

Rust is very clearly an ML derivative, not (just) a better C++


Can you share memory between cores with go without copying the memory?


> Can you share memory between cores with go without copying the memory?

Yes. Go's "do not communicate by sharing memory; instead, share memory by communicating" thing [1] is advice, not a language requirement. Go's memory model is similar to C's. Channels are completely optional; you can share pointers and protect stuff with mutexes. You can skip the mutexes and have data races, too, unfortunately. At least Go has good tooling for finding these kinds of races dynamically. [2]

Rust is similar, except that it prevents data races (as well as other types of memory errors) in safe Rust code, barring compiler bugs. I absolutely love this property, and it can save you from very difficult-to-debug production problems. It has a significant upfront cost in terms of "fighting the borrow checker", adding extra annotations to your code, etc., so it's definitely not worth it for every developer in every situation.

[1] https://blog.golang.org/share-memory-by-communicating

[2] https://blog.golang.org/race-detector


I will remember these talking points when talking to Rust prophets. That said, although Rust is a mediocre language at best, Go is truly quite poor. Neither of them are good languages, and Go will stay that way; only Rust can be good in the future.


The problem, for me, with Rust is a lot of people who evangelize it seem to think there should only be one programming language.

It's kind of a put-off.


I would recommend not factoring that into your decisions. Yes it's off-putting, but the technical merits of the language are truly there.


Warning: brain dump ahead.

Broadly speaking, in any system, there is necessary complexity (imposed by the problem domain) and unnecessary complexity (imposed by the tools, the developers, the customer's interpretation of The Matrix, etc).

Go aims to minimise the latter, which is a worthy goal: I understand very personally how easy it is to add unnecessary complexity while trying to 'contain' the necessary complexity...

However, languages also provide useful and usable means to tame the necessary complexity. It's a tradeoff.

These days I try to write C# like Go: no IoC, composition over inheritance, specificity over genericity, etc. But enforcing those kinds of limitations at the language layer places a hard cap on the abstractions available later on.

To some extent, again, that's a good thing: far too many abstraction astronaut libraries in DotNet-land, spawned by someone's pet project getting overgeneralised. Such things need to be better considered and better contained within the developer group using them, unless they're extremely well-designed which is rare.

But if you remove the clean way to do something, people will do it anyway and it will be a mess. Quite probably in an 'edge' codebase, in a system where the senior devs have enough to deal with already, and the clique of devs looking after that particular edge develop their own little dialect of the language. (I don't care how much you think 'any dev can modify anything' in your code, if you've got more than ~10 devs you have 'preferred' people for certain areas, and some newbies will pick up some bits preferentially.)

Or you could give them a tool with higher-order concepts, and they'd be able to work within its idioms.

Restricting the available idioms just forces people to create new ones. You get doubleplus ungood code from that in fairly short order. Idioms get generated when the culture doesn't already have them, and the culture fragments according to the idioms its subcultures generate if they can't crosspollinate effectively.

Don't try to solve a social problem with technology. Solve it with education, discussion and code review instead. If you don't have time for that, you don't have time to fix the mess resulting from the technical solution either.

That said, the smaller the service the less likely that it's going to need to deal with higher-order abstractions. If you really are 'just' gluing together services in interesting ways (note that the filesystem is a service of sorts, and Git LFS is written in Go) and you can confine your code to solving specific problems, you probably don't need the higher order abstractions anyway.

In which case any other language could probably do the job too, to be honest, but they'd make it easier to sprawl... which brings us back to a language which puts an 'awkward' cap on making things sprawl.

Remember that it's very easy for a project to grow beyond its initial scope, and choice of language can encourage, enable, smother or doom that, and any of those outcomes can be good depending on your outlook...


Fanaticism in programming languages sometimes sounds very funny


I've read this article and I have yet to conclude why Go and not Rust. The discussion is all very generic, with no proof or concrete argument to support the article's title.


[flagged]


(author here) Uhh, I've just setup the normal progressive webapp manifest, it's not supposed to be asking you actively about it.

I'd recommend checking out the settings in your browser.


It's not asking me actively, but it's just crazy that I'm offered to install a random blog as an app.


Though impolitely made, I think he has a point.

Why does a blog need to be a progressive webapp? Why would one want to “install” it?


Looks like the site is made with gatsby. PWA install comes as standard.


Yep!


[flagged]


You needed to create a new account to make this comment?


That's... not how those work.


User name checks out... his Zen is rusty.

Sorry, I'll show myself out...


Why <trendy language 1> and not <trendy language 2> ?

Maybe because trendy languages aren't a panacea.


Maybe people aren't looking for the panacea, but to ease particular pains.


We all use C instead of assembly because it makes it easier to express the same thing, though we lose some control, and pick up bloat. Each successive language tends to do the same. But once you're comparing languages that are basically at the same level, what are you really gaining by switching?

There's always going to be pain. I think we just assume the other pain will be more tolerable.


"Rust has provably-correct concurrency"? Too bad they're so busy being passive-aggressive that they won't fix the build of Rust itself so that it's deterministic.


It seems like Elixir is a better fit than Go for all of the benefits listed here, especially in the age of distributed systems.

https://blog.codeship.com/comparing-elixir-go/


Your linked article states quite clearly that Elixir does not cross compile, and is best deployed via some deployment tool, seeing as it doesn't produce easily managed, static, standalone executables like Go does. So it seems like Elixir isn't.


OTP release tarballs aren't much harder to deploy than static binaries :)

Elixir doesn't need to "cross compile" because its only compile target is BEAM. That's like saying "Java doesn't cross-compile".


But how something is deployed is often not the job of the code author, as deployment has become its own technical craft. The argument should largely go to whether a tool can fit the domain given typical resource constraints (team size), not whether you get a static binary.


I tell folks that working in Go is like coming full circle to C again. It is the ease of Go, the general presence of good ole shoot-yourself-in-the-foot pointers, and even the presence of interface{} as an analog of void*, that causes the link in my mind.

As many of you know, the Go community has been having a knock-down drag-out argument over whether/how generics should be added to the language. I joked a bit about this, but I am serious in my contention that much of the value proposition I see of Go over Java would be wiped away if a feature that complex gets bolted onto it.


"the general presentation of good ole shoot-yourself-in-the-foot pointers... presence of interface{} as an analog of void*"

Go doesn't have "shoot yourself in the foot" pointers. For that, you need pointer arithmetic, and Go doesn't have it (outside of the "unsafe" library). interface{} may be an analog of void∗, but it lacks the failure mode in C where casting something to void∗ loses all type information; in Go, if you put a "string" into an interface{}, the runtime still knows it's a "string" and you can't put it into any sort of strongly typed variable except a string. The runtime/virtual machine Go implements knows the types of all values at all times.
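For instance, a small sketch of the difference:

```go
package main

import "fmt"

func main() {
	var i interface{} = "hello" // the dynamic type (string) is remembered
	s := i.(string)             // succeeds: the runtime knows it's a string
	fmt.Println(s)

	// Asserting the wrong type fails safely; there is no C-style
	// reinterpretation of the underlying bytes.
	n, ok := i.(int)
	fmt.Println(n, ok) // 0 false
}
```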

I really wish the Go designers had called their pointers "references". Considered across the entire landscape of what various programming languages call "pointers" and "references", there's substantial overlap but I think Go's pointers are closer to the center of the "references" cluster than the "pointers" one.


Not shootable in the foot in terms of pointer arithmetic, but shootable in the foot in terms of unchecked dereferences of nil. As a C++ programmer historically, I think of references as not being nilable.
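Right, though a nil dereference in Go is at least a checked runtime panic rather than undefined behavior, e.g. this sketch:

```go
package main

import "fmt"

func main() {
	defer func() {
		// The panic is a defined, recoverable runtime error,
		// not C-style undefined behavior.
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	var p *int
	_ = *p // panics: invalid memory address or nil pointer dereference
}
```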


AKA it has 'pointers', but not in a way that adds value.


I have little doubt that Go is easier to get started with than Rust. However, I'm very skeptical that Go "scales" better on any other axis than onboarding new people quickly. Also not convinced the Go toolchain is better than Rust (other than in terms of compile time).

Especially for "big scope", I place a lot of value on leveraging Rust's more extensive type system and abstractions to model the "complex domains".

I would also bet that services written in Rust would be more robust than those initially written in Go. Initially standing it up is, in the end, such a small part of your long-term productivity, and Rust really shines in all the other parts.

Seems to me that Go is more compelling than Rust for enterprise software development because many businesses do a bad job of looking at their development costs over a longer time frame, thus valuing short learning curve over high reliability.

So really, it seems like the author does not have a profound familiarity with Rust. Which is fine, but not a great basis for confidently expressing semi-universal truths on your blog.


I'm an open-source, solo dev. Not an enterprise dev. I use Go for most of my projects.

For me, the biggest hit to productivity is long compile times. One of my projects is an ~80k LOC C++ app (SumatraPDF), and I pretty much abandoned working on it because the long compilation times were killing me.

Go is good in that regard. My medium sized projects compile pretty much instantly.

I hear Rust is not doing well there.

The second productivity boost is GC (Garbage Collector). I don't want to manually track every allocation (C++) or think about how to contort the code to a form that Rust will be happy about.

Long term productivity is very much why I use Go.

In addition to GC and fast compile times, there's now a very rich ecosystem of libraries for almost everything you might need.

The language has been stable for the past 10 years. I don't need to fix my code every major release because the language changed (Swift), and I don't need to learn a large number of new things because of a significant new addition (Rust).

And for what I use it for (backend servers, cmd-line apps), Go is more than fast enough, which can't be said about other languages with similar productivity, like Python, Ruby, or Node.

If there's a C++ competitor I'll be willing to look at, it'll be JAI (when it's released).


> Also not convinced the Go toolchain is better than Rust (other than in terms of compile time).

Never claimed this. I compared the Go toolchain with other toolchains I happened to encounter in enterprise development.

> I place a lot of value on leveraging Rust's more extensive type system and abstractions to model the "complex domains". I would also bet that services written in Rust would be more robust than those initially written in Go.

In my experience it's not easy to get there in an environment like the one I described in my post, and any non-trivial abstraction carries a real risk of later becoming a problem. The problem is that it doesn't matter how "sound" the abstraction is: the product owner will come in and change things in surprising ways. So at that point investing in sophisticated mechanisms becomes a form of premature optimization. That's the same reason you can't afford microservices if the business doesn't have a well-defined structure.

>Seems to me that Go is more compelling than Rust for enterprise software development because many businesses do a bad job of looking at their development costs over a longer time frame, thus valuing short learning curve over high reliability.

For some businesses, "high reliability" does have less value than a shorter time-to-market, and in enterprise software development, providing value to the business is the entirety of your job, not building cool tech. That's why reconciliation procedures exist even now that we have tons of tools to ensure transactional/eventual consistency.



