I've been using Go since 2011. One year less than the author. Channels are bad. No prioritization. No combining with other synchronisation primitives without extra goroutines. In Go, there's no way to select on a variable number of channels (without more goroutines). The poor type system doesn't let you improve abstractions. Basically anywhere I see a channel in most people's code, particularly in the public interface, I know it's going to be buggy. And I've seen so many bugs. Lots of projects were abandoned because they started with channels and never dug themselves out.
The lure to use channels is too strong for new users.
The nil and various strange shapes of channel methods aren't really a problem; they're just hard for newbs.
Channels in Go should really only be used for signalling, and only if you intend to use a select. They can also act as reducers or fan-outs in certain cases. Very often in those cases you have a very specific buffer size, and you're still only using them to avoid adding extra goroutines and reverting to pure signalling.
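To make that concrete, the signalling-only style looks roughly like this (a minimal sketch; `done` and `jobs` are placeholder names):

package main

import (
    "fmt"
    "sync"
)

// worker drains jobs until it is signalled to stop via done.
// Closing done carries no data at all; it is pure signalling,
// consumed through a select as described above.
func worker(done <-chan struct{}, jobs <-chan int, wg *sync.WaitGroup) {
    defer wg.Done()
    for {
        select {
        case <-done:
            return // told to stop
        case j := <-jobs:
            fmt.Println("processed", j)
        }
    }
}

func main() {
    done := make(chan struct{})
    jobs := make(chan int)
    var wg sync.WaitGroup

    wg.Add(1)
    go worker(done, jobs, &wg)

    jobs <- 1
    jobs <- 2
    close(done) // broadcast "stop" to every worker listening on done
    wg.Wait()
}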
This is almost completely down to Go's terrible type system and is more proof that Google should have improved SML/CML (Standard ML/Concurrent ML) implementations/libraries rather than create a new language. They'd have a simpler and more powerful language without all the weirdness they've added on (e.g., generics being elegant and simple rather than the tacked-on abomination of syntax that Go has).
Go user for ten years and I don’t know what happened, but this year I hit some internal threshold with the garbage type system, tedious data structures, and incessant error checking being 38% of the LoC. I’m hesitant to even admit what language I’m considering a full pivot to.
Java 21 is pretty damn nice, 25 will be even nicer.
For your own application code, you don't have to use exceptions; you can write custom Result objects and force callers to pattern match on the types (and you can always wrap library/std exceptions in that Result type).
Structured Concurrency looks like a banger of a feature - it's what CompletableFuture should've been.
Virtual threads still need a few more years for most production cases imo, but once they're there, I truly don't see a reason to choose Go over Java for backend web services.
And Java has a non-trivial advantage over Go in being architecture-independent, so one can run and debug on an ARM Mac the same deployment artifact that runs on an x86 server.
Plus, these days Java's GC has addressed most of the problems that plagued Java on the backend for years. Memory usage is still higher than with Go, simply because more dynamic allocation happens due to the nature of the language, but GC pauses are no longer a significant problem. And if they are, switching to Go would not help; one needs a non-GC language then.
If you're building tools that need to be deployed to machines, Go/Rust with their static binaries make a lot of sense. But for backend web services, it's hard not to go with Java imo.
fwiw - My favorite language is Rust, but Async Rust has ruined it for me.
My company is trying to force Kotlin as the default, but I just prefer modern Java tbh. Kotlin is a very nice language, and I'd be fine with writing it, but modern Java just seems like it has "caught up" and even surpassed Kotlin in some features lately.
I like the idea behind Go, but I feel physical pain every time I have some sort of `go mod` behaviour that is not immediately obvious. Import/export is so easy, I still don't get how you can fuck it up.
I found Golang to be a gateway drug to Rust for me.
If you want strong control, a very unforgiving type system, and even more unforgiving memory-lifetime management, so you know your program can get even faster than corresponding C/C++ programs, then Rust is a no-brainer.
But I did not pick it for the speed, though that's a very welcome bonus. I picked it mostly for the strong static type system. And I like having the choice to super-optimize my program in terms of memory and speed when I need to.
Modelling your data transformations with enums (sum types) and Result/Option was eye-opening and improved my programming skills in all other languages I am using.
> you know your program can get even faster than corresponding C/C++ programs
I have seen this argument a few times here on HN. To be clear, I interpret your phrase to mean that Rust "will eventually" be faster than C or C++. (If I read that wrong, please correct me.) However, I have never seen any compelling evidence that demonstrates equivalent Rust code is faster than C or C++ code. Some light Googling tells me that their speeds are roughly equivalent (within 1-2%).
As mentioned, I didn't go to Rust for the performance alone, so I didn't track the articles that tried to prove the claim. The stated logic was that, with more information to work with, the toolchain could optimize quite aggressively, to the point of the code becoming lighter-weight than C.
Whether that's true or not, I make no claims. Not a compiler developer.
And yeah I'd be quite content with 1-2% margin compared to C as well.
OCaml is underrated IMO. It's a systems language like Go with a simple runtime, but functional, with a great type system and probably the best error handling of any language I've used (polymorphic variants).
Scripting languages would be (usually interpreted) languages for doing small snippets of code. Examples would be bash, Applescript, python, etc.
Application languages are generally managed/GC'd languages with JITs or AOT compilation for doing higher-performance apps. Examples would be Java, Go, OCaml, Haskell, etc.
System languages are generally concerned with portability, manual memory layout, and direct hardware access as opposed to managed APIs. Examples would be C/C++, Rust, Zig, Pascal, etc.
Scripts are programs that carry out a specific task and then exit. If failure occurs, it is usually on the operator to fix the issue and run the script again. I agree that bash, AppleScript, Python, etc. are geared towards these types of programs.
Applications are programs with interfaces (not necessarily graphical, but they can be) that are used interactively. If failure occurs, the software usually works with the user to address the problem. It's not completely unheard of to use Go here, but I wouldn't call that its strength.
Systems are "forever"-running programs that provide foundational services to other programs. If failure occurs, the software usually does its best to recover automatically. This is what Go was designed for – specifically in the area of server development. There are systems, like kernels, that Go wouldn't be well suited for, but the server niche was always explicit.
Your pet definitions are fine too – it is a perfectly valid take – but know that's not what anyone else is talking about.
Caught me, C#. Library quality has improved a lot in ten years, the language feels modern, and one of Go's biggest advantages, single-binary cross-compilation, is way less relevant now that dotnet installs easily on every OS I care about. I was prototyping some code that needed to talk to OpenAI, Slack, and Linear, and the result was... fast and extremely readable inline async code. I've interacted with these APIs in Go as well, and by comparison it's ultra clunky.
We're a video game studio as well using C#, and while game programmer != backend programmer, I can at least delegate small fixes and enhancements out to the team more easily.
“Game studio” suggests you made the right choice, but the advantages you mention apply to Rust and TypeScript too. Both of those alternatives are data-race free, unlike Go, C#, C++, and Java. (TypeScript is single-threaded and GC'd. Together, those properties mean it doesn't need a borrow checker.)
As you mentioned "improvement of an existing language", I'd like to mention that Haskell has green threads that are most probably lighter (stack size 1K) than goroutines (minimum stack size 2K).
Haskell also has software transactional memory where one can implement one's own channels (they are implemented [1]) and atomically synchronize between arbitrarily complex reading/sending patterns.
The inner channel is a poor man's future. I came up with this to let Lua runtimes process in parallel while maintaining ordering (A, B, C in; results of A, B, C out).
I have a channel for my gRPC calls to send work to the static, lock-free workers; I have a channel of channels to reuse the same channels, because allocating 40k channels per second cost a bit of CPU. Some days I am very pleased with this fix and some days I am ashamed of it.
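Roughly the shape of that, as a hypothetical sketch rather than their actual code: each request carries its own reply channel, and reply channels are recycled through a pool.

package main

import "fmt"

type request struct {
    payload int
    reply   chan int // per-request reply channel: the "inner channel" / poor man's future
}

func main() {
    work := make(chan request)
    pool := make(chan chan int, 64) // channel of channels: recycled reply channels

    // Worker: answers each request on the reply channel it came with.
    go func() {
        for req := range work {
            req.reply <- req.payload * 2
        }
    }()

    for i := 0; i < 5; i++ {
        // Take a reply channel from the pool, or allocate one if the pool is empty.
        var reply chan int
        select {
        case reply = <-pool:
        default:
            reply = make(chan int, 1)
        }

        work <- request{payload: i, reply: reply}
        fmt.Println(<-reply) // waiting on the reply channel preserves submission order

        pool <- reply // recycle instead of allocating 40k channels per second
    }
}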
You joke, but this is not uncommon at all among channels purists, and is the inevitable result when they try to create any kind of concurrency abstractions using channels.
Ugh... I hope I never have to work with channels again.
I've always thought a lot of it was due to how channels + goroutines were designed with CSP in mind, but how often do you see CSP used "in the wild"? Go channels are good for implementing CSP and can be good at similar patterns. Not that this is a big secret; if you watch all the concurrency-pattern videos they made in Go's early days, you get a good feeling for what they are good at. But I can only think of a handful of times I've seen those patterns in use. Though much of this is likely due to having so much of our code designed by mid-level developers, because we don't value experience in this field.
One nit: reflect.Select supports a dynamic set of channels. Very few programs need it though, so a rough API isn’t a bad trade-off. In my entire experience with Go, I’ve needed it once, and it worked perfectly.
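For reference, a small sketch of how that API is used:

package main

import (
    "fmt"
    "reflect"
)

func main() {
    // Build a select over a number of channels known only at runtime.
    chans := make([]chan int, 3)
    cases := make([]reflect.SelectCase, len(chans))
    for i := range chans {
        chans[i] = make(chan int, 1)
        cases[i] = reflect.SelectCase{Dir: reflect.SelectRecv, Chan: reflect.ValueOf(chans[i])}
    }

    chans[1] <- 42

    // chosen is the index of the case that fired; recv is the received value.
    chosen, recv, ok := reflect.Select(cases)
    fmt.Println(chosen, recv.Int(), ok) // 1 42 true
}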
I almost always only use channels as the data path between fixed-size pools of workers. At each point I can control whether to block or not, and my code uses all the (allocated) CPUs pretty evenly. Channels are excellent for this data-flow design use case.
I have a little pain when I do a CLI, as the work appears during the run and it's tricky to guarantee you exit when all the work is done and not before. Usually I sleep one second, wait for the WaitGroup, then sleep one more second at the end of the CLI's main. If my work doesn't take minutes or hours to run, I generally don't use Go.
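For the simpler case where all the work is known up front, the clean-exit shape is close-the-channel plus a WaitGroup (a sketch; the awkward part described above is precisely that new work keeps appearing during the run):

package main

import (
    "fmt"
    "sync"
)

func main() {
    jobs := make(chan int)
    var wg sync.WaitGroup

    // Fixed-size pool: channels as the data path between producer and workers.
    for w := 0; w < 4; w++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for j := range jobs { // exits when jobs is closed and drained
                fmt.Println("worker", id, "did job", j)
            }
        }(w)
    }

    for j := 0; j < 20; j++ {
        jobs <- j
    }
    close(jobs) // no more work; lets the range loops end
    wg.Wait()   // exit only after every worker has drained its work
}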
When I did my 20% on Go at Google, about 10 years ago, we already had a semi-formal rule that channels must not appear in exported function signatures. It turns out that using CSP in any large, complex codebase is asking for trouble, and that this is true even about projects where members of the core Go team did the CSP.
If you take enough steps back and really think about it, the only synchronization primitive that exists is a futex (and maybe atomics). Everything else is an abstraction of some kind. If you're really determined, you can build anything out of anything. That doesn't mean it's always a good idea.
Looking back, I'd say channels are far superior to condition variables as a synchronized cross-thread communication mechanism - when I use them these days, it's mostly for that. Locks (mutexes) are really performant and easy to understand and generally better for mutual exclusion. (It's in the name!)
> When I did my 20% on Go at Google, about 10 years ago, we already had a semi-formal rule that channels must not appear in exported function signatures.
That sounds reasonable. From what little Erlang/Elixir code I’ve seen, the sending and receiving of messages is also hidden as an implementation detail in modules. The public interface did not expose concurrency or synchronization to callers. You might use them under the hood to implement your functionality, but it’s of no concern to callers, and you’re free to change the implementation without impacting callers.
How large do you deem to be large in this context?
I had success in using a CSP style, with channels in many function signatures in a ~25k line codebase.
It had ~15 major types of process, probably about 30 fixed instances overall in a fixed graph, plus a dynamic sub-graph of around 5 processes per 'requested action'. So those sub-graph elements were the only parts which had to deal with tear-down and clean-up.
There were then additionally some minor types of 'process' (i.e. goroutines) within many of those major types, but they were easier to reason about as they only communicated with that major element.
Multiple requested actions could be present, so there could be multiple sets of those 5 process groups connected, but they had a maximum lifetime of a few minutes.
I only ended up using explicit mutexes in two of the major types of process, where they happened to make the most sense and hence reduced system complexity. There were about 45 instances of the 'go' keyword.
(Updated numbers, as I'd initially misremembered/miscounted the number of major processes)
How many developers did that scale to? Code bases that I’ve seen that are written in that style are completely illegible. Once the structure of the 30 node graph falls out of the last developer’s head, it’s basically game over.
To debug stuff by reading the code, each message ends up having 30 potential destinations.
If a request involves N sequential calls, the control flow can be as bad as 30^N paths. Reading the bodies of the methods that are invoked generally doesn’t tell you which of those paths are wired up.
In some real world code I have seen, a complicated thing wires up the control flow, so recovering the graph from the source code is equivalent to the halting problem.
None of these problems apply to async/await because the compiler can statically figure out what’s being invoked, and IDE’s are generally as good at figuring that out as the compiler.
That was two main developers, one doing most of the code and design, the other a largely closed subset of 3 or 4 nodes. Plus three other developers co-opted for implementing some of the nodes. [1]
The problem space itself could have probably grown to twice the number of lines of code, but there wouldn't have needed to be any more developers. Possibly only the original two. The others were only added for meeting deadlines.
As to the graph, it was fixed, but not a full mesh. A set of pipelines, with no power of N issue, as the collection of places things could talk to was fixed.
A simple diagram represented the major message flow between those 30 nodes.
Testing of each node was able to be performed in isolation, so UT of each node covered most of the behaviour. The bugs were three deadlocks, one between two major nodes, one with one major node.
The logging around the trigger for the deadlock allowed the cause to be determined and fixed. The bugs arose due to time constraints having prevented an analysis of the message flows to detect the loops/locks.
So for most messages, there were a limited number of destinations, mostly two, for some 5.
For a given "request", the flow of messages to the end of the fixed graph would be passing through 3 major nodes. That then spawned the creation of the dynamic graph, with it having two major flows. One a control flow through another 3, the other a data flow through a different 3.
Within that dynamic graph there was a richer flow of messages, but the external flow from it simply had the two major paths.
Yes, reading the bodies of the methods does not inform as to the flows. One either had to read the "main" routine which built the graph, or better refer to the graph diagram and message flows in the design document.
Essentially a similar problem to dealing with "microservices", or pluggable callbacks, where the structure cannot easily be determined from the code alone. This is where design documentation is necessary.
However I found it easier to comprehend and work with / debug due to each node being a probeable "black box", plus having the graph of connections and message flows.
[1] Of those, only the first had any experience with CSP or Go. The CSP experience was with a library for C, the Go experience some minimal use a year earlier. The other developers were all new to CSP and Go. The first two developers were "senior" / "experienced".
I think the two basic synchronisation primitives are atomics and thread parking. Atomics allow you to share data between two or more concurrently running threads, whereas parking allows you to control which threads are running concurrently. Whatever low-level primitives the OS provides (such as futexes) are more an implementation detail.
I would tentatively make the claim that channels (in the abstract) are at heart an interface rather than a type of synchronisation per se. They can be implemented using mutexes, pure atomics (if each message is a single integer), or any number of different ways.
Of course, any specific implementation of a channel will have trade-offs. Some more so than others.
To me, message passing is its own thing. It's the most natural way of thinking about information flow in a system consisting of physically separated parts.
I think they mean that message channels are an expensive and performance unstable abstraction.
You could address the concern by choosing a CPU architecture that included infinite-capacity FIFOs that connected its cores into arbitrary runtime-directed graphs.
Of course, that architecture doesn’t exist. If it did, dispatching an instruction would have infinite tail latency and unbounded power consumption.
This still exists today. For example, I am on the payments team but I have a 20% project working on protobuf. I had to get formal approval from my management chain and someone on the protobuf team. And it is tracked as part of my performance reviews. They just want to make sure I'm not building something useless that nobody wants and that I'm not just wasting the company's time.
I never worked at Google (or any other large corp for that matter), but this sounds like the exact opposite of an environment that spawned GMail.
If you think back even to the very early days of computing, you'll find individuals or small teams like Grace Hopper, the Unix gang, PARC, etc. that managed to change history by "building something useless". Granted, throughout history that happened less than 1% of the time, but it will never happen if you never try.
Maybe Google no longer has any space for innovation.
Before LLMs and ChatGPT even existed ... a lot of us somehow hallucinated the idea that GMail came from Google's 20% Rule. E.g. from 2013-08-16 : https://news.ycombinator.com/item?id=6223466
I see, thank you for debunking. But I think my general point still stands. You can progress by addressing a need, but true innovation requires adequate space.
I see why they do this, but man it almost feels like asking your boss for approval on where you go on vacation. Do people get dinged if their 20% time project doesn't pan out, or they lose interest later on?
Unlike the author, I would actually say that Go is bad. This article illustrates my frustration with Go very well, on a meta level.
Go's design consistently, at every turn, chose the simplest (one might say "dumbest", but I don't mean it entirely derogatorily) way to do something. It was the simplest, most obvious choice made by a very competent engineer. But it was entirely made in isolation, not by a language design expert.
Go's designers did not actually go out and research language design. They just went with their gut feel.
But that's just it: those rules are there for a reason. It's like the rules of airplane design: every single rule was written in blood. You toss those rules out (or don't even research them) at your own, and your users', peril.
Go's design reminds me of Brexit, and the famous "The people of this country have had enough of experts". And like with Brexit, it's easy to give a lame catch phrase, which seems convincing and makes people go "well what's the problem with that, keeping it simple?".
Explaining just what the problem is with this "design by catchphrase" is what the article illustrates. It needs ~100 paragraphs (a quick, error-prone scan counted 86, plus sample code) to explain just why these choices lead to a darkened room with rakes sprinkled all over it.
And this article is just about Go channels!
Go could get 100 articles like this written about it, covering various aspects of its design. They would all have the same root cause: Go's designers had had enough of experts, and it takes longer to explain why something leads to bad outcomes than to just show the catchphrase-level "look at the happy path. Look at it!".
I dislike Java more than I dislike Go. But at least Java was designed, and doesn't have this particular meta-problem. When Go was made we knew better than to design languages this way.
Go's designers were experts. They had extensive experience building programming languages and operating systems.
But they were working in a bit of a vacuum. Not only were they mostly addressing the internal needs of Google, which is a write-only shop as far as the rest of the software industry is concerned, they also didn't have broad experience across many languages, and instead had deep experience with a few languages.
In it, he seems to believe that the primary use of types in programming languages is to build hierarchies. He seems totally unfamiliar with the ideas behind ML or Haskell.
Rob Pike is not a PL theoretician, but that doesn't make him not an expert in creating programming languages.
Go was the third language he played a major part in creating (predecessors are Newsqueak and Limbo), and his pedigree before Google includes extensive experience on Unix at Bell Labs. He didn't create C but he worked directly with the people who did and he likely knows it in and out. So I stand by my "deep, not broad" observation.
Ken Thompson requires no introduction, though I don't think he was involved much beyond Go's internal development. Robert Griesemer is a little more obscure, but Go wasn't his first language either.
Re-reading Pike's C++ critique now, I do chuckle at things like:
> Did the C++ committee really believe that what was wrong with C++ was that it didn't have enough features?
C++ was basically saved by C++11. When written in 2012 it may have been seen as a reasonable though contrarian reaction, but history does not agree. It's just wrong.
I'm very much NOT saying that every decision Go made was wrong. And often experts don't predict the future very well. But I do think this gives further proof that, while not a bad coder by any means, no, Pike is very much not an expert in PL. Implementing ideas you had in the 1980s and adding things you've learned since is not the same thing as learning lessons from the industry in that subfield.
I would actually say that even in 2012 the observation was probably wrong already, but notably Pike actually made it in the mid-aughts and was simply relaying it in 2012. However, being wrong about what was good for the future of C++ is neither here nor there; Thompson also famously disliked C++. Designing a language that intentionally isn't like C++ does not equate to me as a rejection of expertise.
I think both C and Go (the former is relevant due to Thompson's involvement in both and the massive influence it had on Go) are very "practical" languages, with strict goals in mind, and which delivered on those goals very well. They also couldn't have existed without battle-tested prior experience, including B for C and Limbo for Go.
I also think it's only from the perspective of a select few, plus some purists, that the authors of Go can be considered anything other than experts. That they made some mistakes, including some borne of hubris, doesn't really diminish their expertise to me.
But my point is that articles like this show that if you don't keep up with the state of the art, you run the risk of making these predictable mistakes.
If Go had been designed in the 1980s then it would have been genius. But now we know better. Expertise is more than knowing the state of the art as of 30 years prior.
I don't think the state of the art ca. 2007 was the same as it seems today.
For one thing, Go took a number of forward-thinking stances (or ones which were, at least, somewhat unusual for its time and target audience), like UTF-8 strings (granted, Thompson and Pike created the encoding in the first place), fat pointers for strings and slices, green threads with CSP (though channels proved to be less useful than envisioned) and no function coloring, first-class functions, a batteries-included standard library, etc.
The only things I can think of which Go did that seemed blatantly wrong in that era would be the lack of generics and proper enums. Null safety, which IMO proved to be the killer development of that era, was not clearly formed in industry yet. Tony Hoare hadn't even given his famous "billion-dollar mistake" talk yet, though I'm sure some idea of the problem already existed (and he did give it in 2009, a couple of years into Go's development but also before its first public release). I know others find the type system lacking, but I don't think diving hard into types is obviously the best way to go for every language.
If one were to seriously investigate whether expertise was valued by Pike et al., I think it would start by looking at Erlang/OTP. In my opinion, that ecosystem offers the strongest competition against Go on Go's own strengths, and it predates Go by many years. Were the Go designers aware of it at all? Did they evaluate its approach, and if so, did they decide against it for considered reasons? Arguing against C++ was easy coming from their desired goals, but what about a stronger opponent?
The Brexit comparison doesn't hold water — Brexit is widely viewed as a failure, yet Go continues to gain popularity year after year. If Go were truly as bad as described, developers wouldn't consistently return to it for new projects, but clearly, they do. Its simplicity isn't a rejection of expertise; it's a practical choice that's proven itself effective in real-world scenarios.
This is optics versus reality. Its goal was to address shortcomings in C++ and Java. It has replaced neither at Google, and its own creators were surprised it competed with Python, mostly on the value of having an easier build and deploy process.
If we're using "did not meet the stated goal" as a bar for success, then Java also "failed", because it was developed as an embedded systems language and only pivoted to enterprise applications after being a dismal and abject failure at the stated goal.
If Java is not a failure then neither is Go.
If Go is a failure then so is Java.
Personally I think it is inaccurate to judge a mainstream, popular and widely adopted language as a failure just because it did not meet the goal set at the initiation of the project, prior to even the first line of code getting written.
We use a boatload of off-the-shelf Go components, but I don't see it making any progress at replacing Java at my bank. We are extremely happy with where Java is these days...
Except that it did. Just because people aren't rewriting Borg and Spanner in Go doesn't mean it isn't the default choice for many infra projects. And Python got completely superseded by Go even during my tenure.
I would say this is another thing that would take quite a while to flesh out. Not only is it hard to have this conversation text-only on Hacker News, but HN will also rate-limit replies, so a conversation once started cannot continue here long enough for the discussion participants to come to an understanding of what they all mean. Discussion will just stop once HN tells a poster "you're posting too often".
Hopefully saving this comment will work.
Go, unlike Brexit, has pivoted to become the solution to something other than its stated target. So sure, Go is not a failure. It was intended to be a systems language to replace C++, but has instead pivoted to be a "cloud language", or a replacement for Python. I would say that it's been a failure as a systems language. Especially if one tries to create something portable.
I do think that its simplicity is the rejection of the idea that there are experts out there, and/or their relevance. It's not decisions based on knowledge and rejection, but of ignorance and "scoping out" of hard problems.
Another long article could be written about the clearly not thought through use of nil pointers, especially typed vs untyped nil pointers (if that's even the term) once nil pointers (poorly) interact with interfaces.
But no, I'm not comparing the outcome of Go with Brexit. Go pivoting away from its stated goals is not the same thing as Brexiteers claiming a win from being treated better than the EU in the recent tariffs. But I do stand by my point that the decision process seems similarly expert-hostile.
Go is clearly a success. It's just such a depressingly sad lost opportunity, too.
> It was intended to be a systems language to replace C++
More specifically, it was intended to replace the systems that Google wrote in C++ (read: servers). Early on, the Go team expressed happy surprise that people found utility in the language outside of that niche.
> but has instead pivoted to be a "cloud language"
I'm not sure that is really a pivot. At the heart of all the "cloud" tools it is known for is a HTTP server which serves as the basis of the control protocol, among other things. Presumably Go was chosen exactly because of it being designed for building servers. Maybe someone thought there would be more CRUD servers written in it too, but these "cloud" tools are ultimately in the same vein, not an entirely different direction.
> or a replacement for Python
I don't think you'd normally choose Go to train your ML/AI model. It has really only gunned for Python in the server realm; the very thing it was intended to be for. What was surprising to those living in an insular bubble at Google was that the rest of the world wrote their servers in Python and Ruby rather than C++ like Google – so it being picked up by the Python and Ruby crowd was unexpected to them – but not to anyone else.
> I do think that its simplicity is the rejection of the idea that there are experts out there, and/or their relevance. It's not decisions based on knowledge and rejection, but of ignorance and "scoping out" of hard problems.
Ok, I'll ask the obvious question. Who are these experts and what languages have they designed?
> Another long article could be written about the clearly not thought through use of nil pointers, especially typed vs untyped nil pointers (if that's even the term) once nil pointers (poorly) interact with interfaces.
You're getting worked up about something that's hardly ever an issue in practice. I suspect that most of your criticisms are similar.
I think there's two layers to the typed vs. untyped nil issue (which is really about nil pointers in an interface value vs. the zero value of interface types).
The first is simple confusion, such as trying to handle "all nils" the same way. This is not necessary, though I have seen some developers coming from other languages get hung up on it. In my experience, the only real use for "typed nil" is reflection, and if you're using reflection, you should already know the language in and out.
However, the other aspect is that methods with pointer receivers on concrete types can have default behavior when the receiver is nil, while interfaces cannot. In the expression foo.Bar(), you can avoid a panic if foo is *T, but when foo is an interface, there is no way around the panic without checking foo's value explicitly first. Since nil is the default value of all interface types, even if you create a dummy/no-op implementation of the interface, you still risk nil panics. You can mitigate but never truly remove this risk.
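A small illustration of that asymmetry, using hypothetical `T` and `Barer` types:

package main

import "fmt"

type T struct{}

// Bar has a pointer receiver and deliberately tolerates a nil receiver.
func (t *T) Bar() string {
    if t == nil {
        return "default for nil *T"
    }
    return "real value"
}

type Barer interface{ Bar() string }

func main() {
    var p *T
    fmt.Println(p.Bar()) // fine: concrete nil pointer, the method handles it

    var i Barer = p
    fmt.Println(i.Bar()) // also fine: the interface wraps a nil *T, Bar still runs

    var j Barer          // the zero value of the interface itself
    fmt.Println(j.Bar()) // panics: there is no concrete method to dispatch to
}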
The creators thought that having 50% of your codebase be `if (err != nil) { ... }` was a good idea. And that channels somehow make sense in a world without pattern matching or generics. So yeah, it's a bizarrely idiosyncratic language - albeit with moments of brilliance (like structural typing).
I actually think Java is the better PL, but the worse runtime (in what world are 10s GC pauses ever acceptable). Java has an amazing standard library as well - Golang doesn't even have many basic data structures implemented. And the ones it does, like heap, are absolutely awful to use.
I really just view Golang nowadays as a nicer C with garbage collection, useful for building self contained portable binaries.
The intersection of nil and interfaces is basically one giant counter-intuitive footgun.
Or how append() sometimes returns a new slice and sometimes it doesn't (so if you forget to assign the result, sometimes it works and sometimes it doesn't). Which is understandable if you think about it in terms of low-level primitives, but in Go this somehow became the standard way of managing a high-level list of items.
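A quick sketch of that dual behaviour:

package main

import "fmt"

func main() {
    base := make([]int, 1, 4) // len 1, cap 4

    a := append(base, 10)   // fits in spare capacity: reuses base's backing array
    b := append(base, 20)   // also fits: silently overwrites the element a just wrote
    fmt.Println(a[1], b[1]) // 20 20 - a and b alias the same array

    base = base[:1:1]       // full slice expression: len 1, cap 1, no spare room
    c := append(base, 30)   // capacity exceeded: append allocates a new backing array
    d := append(base, 40)   // and another one
    fmt.Println(c[1], d[1]) // 30 40 - now they are independent
}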
For what it is, the iota design is really good. Languages like C and TypeScript, which have the exact same feature hidden behind some weird special syntax, look silly in comparison, not to mention that the weird syntax obscures what is happening. What Go has is much clearer to read and understand (which is why it gets so much grief where other languages with the same feature don't – there are no misunderstandings about what it is).
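For anyone unfamiliar, the whole feature is just ordinary constant declarations (a sketch):

package main

import "fmt"

// iota counts up by one per constant in the block, so an "enum" is
// nothing more than ordinary constant declarations.
type Weekday int

const (
    Sunday Weekday = iota // 0
    Monday                // 1
    Tuesday               // 2
    Wednesday             // 3
)

// It also composes with expressions, e.g. bit flags:
const (
    Read  = 1 << iota // 1
    Write             // 2
    Exec              // 4
)

func main() {
    fmt.Println(Tuesday, Read|Exec) // 2 5
}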
But maybe you are implying that no language should present raw enums, rather they should be hidden behind sum types? That is not an unreasonable take, but that is not a design flaw. That is a different direction. If this is what you are thinking, it doesn't fit alongside the other two which could be more intuitive without completely changing what they are.
It's very possible I'm just bad at Go but it seems to me that the result of trying to adhere to CSP in my own Go projects is the increasing use of dedicated lifecycle management channels like `shutdownChan`. Time will tell how burdensome this pattern proves to be but it's definitely not trivial to maintain now.
You're not bad at Go, literally everyone I know who has tried to do this has concluded it's a bad idea. Just stop using channels, there's a nice language hidden underneath the CSP cruft.
I find myself using channels in async Rust more than any other sync primitives. No more deadlock headaches. Easy to combine multiple channels in one state-keeping loop using combinators. And the dead goroutines problem described in the article doesn't exist in Rust.
This article has an eerie feeling now that async rust is production grade and widely used. I do use a lot the basic pattern of `loop { select! { ... } }` that manages its own state.
And compared to the article, there's no dead coroutine, and no shared state managed by the coroutine: seeing the `NewGame` function return a `*Game` to the managed struct is an invitation for dumb bugs. This would be downright impossible in Rust, which coerces you into an actual CSP pattern where the interaction with the shared state is only through channels. Add a channel for exit, another for bookkeeping, and you're golden.
I often have a feeling that a lot of the complaints are self-inflicted Go problems. The author briefly touches on them with the special snowflakes that are the stdlib's types. Yes, genericity is one point where channels are different, but the syntax is another one. Why on earth is a `chan <- elem` syntax necessary over `chan.Send(elem)`? This would make non-blocking versions trivial to expose and discover for users (hello Rust's `.try_send()` methods).
Oh, and related to the first example of "exiting when all players left", we also see the lack of a proper API for Go channels: you can't query whether there are still producers for the channel, because of GC and pointers and the shared channel object itself and yadda yadda. Meanwhile in Rust, producers are reference-counted and the channel is automatically closed when there are no more producers. Native Go channels can't do that (granted, they could, with a wrapper and dedicated sender and receiver types).
Same. It’s a pattern I’m reaching for a lot, whenever I have multiple logical things that need to run concurrently. Generally:
- A struct that represents the mutable state I’m wrapping
- A start(self) method which moves self to a tokio task running a loop reading from an mpsc::Receiver<Command> channel, and returns a Handle object which is cloneable and contains the mpsc::Sender end
- The handle can be used to send commands/requests (including one shot channels for replies)
- When the last handle is dropped, the mpsc channel is dropped and the loop ends
It basically lets me think of each logical concurrent service as being like a tcp server that accepts requests. They can call each other by holding instances of the Handle type and awaiting calls (this can still deadlock if there’s a call cycle and the handling code isn’t put on a background task… in practice I’ve never made this mistake though)
Some day I’ll maybe start using an actor framework (like Axum/etc) which formalizes this a bit more, but for now just making these types manually is simple enough.
The fact that all goroutines are detached is the real problem imo. I find you can encounter many of the same problems in Rust with overuse of detached tasks.
Channels are only problematic if they're the only tool you have in your toolbox, and you end up using them where they don't belong.
BTW, you can create a deadlock equivalent with channels if you write "wait for A, reply with B" and "wait for B, send A" logic somewhere. It's the same problem as ordering of nested locks.
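A minimal reproduction of that shape:

package main

func main() {
    a := make(chan int)
    b := make(chan int)

    // Goroutine: wait for A, then reply with B.
    go func() {
        <-a
        b <- 1
    }()

    // Main: wait for B, then send A - the mirror image of the goroutine above.
    <-b    // blocks forever: B is only sent after A has been received
    a <- 1 // never reached
    // The runtime aborts with: fatal error: all goroutines are asleep - deadlock!
}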
I think since a concept of channel was something new and exciting back when Go was introduced, people (including myself) tried using it everywhere they could. Over time, as you collect your experience with the tool you get better at it, and certainly for shared state management channels are rarely the best option, however there still are quite a few places where you can't do something equivalent to what channels provide easily, which is to block until you've received new data. It just so happens that those situations are quite rare in Go.
You've misunderstood the example. The `scores` channel aggregates scores from all players, you can't close it just because one player leaves.
I'd really, really recommend that you try writing the code, like the post encourages. It's so much harder than it looks, which neatly sums up my overall experience with Go channels.
In both examples, the HandlePlayer for loop only exits if .NextScore returns an error.
In both cases, you’d need to keep track of connected players to stop the game loop and teardown the Game instance. Closing the channel during that teardown is not a hurdle.
It’s not entirely clear whether the author is describing a single or multiplayer game.
Among the errors in the multiplayer case is the lack of score attribution, which isn't a bug with channels as much as it's using an int channel when you needed a struct channel.
Hi! No, I think you've misunderstood the assignment. The example posits that you have a "game" running, which should end when the last player leaves. While only using channels as a synchronization primitive (a la CSP), at what point do you decide the last player has left, and where and when do you call close on the channel?
I don't think there's much trouble at all fixing the toy example by extending the message type to allow communication of the additional conditions, and I think my changes are better than the alternative of using a mutex. Have I overlooked something?
Assuming the number of players is set up front, and players can only play or leave, not join. If the expectation is that players can come and go freely and the game ends some time after all players have left, I believe this pattern can still be used with minor adjustments.
(please overlook the pseudo code adjustments, I'm writing on my phone - I believe this translates reasonably into compilable Go code):
type Message struct {
    exit  bool
    score int
    reply chan bool
}

type Game struct {
    bestScore int
    players   int // > 0
    messages  chan Message
}

func (g *Game) run() {
    for message := range g.messages {
        if message.exit {
            g.players--
            if g.players == 0 {
                return
            }
            continue
        }
        if g.bestScore < 100 && g.bestScore < message.score {
            g.bestScore = message.score
        }
        acceptingScores := g.bestScore < 100
        message.reply <- acceptingScores
    }
}

func (g *Game) HandlePlayer(p Player) error {
    for {
        score, err := p.NextScore()
        if err != nil {
            g.messages <- Message{exit: true}
            return err
        }
        reply := make(chan bool, 1)
        g.messages <- Message{score: score, reply: reply}
        if !<-reply {
            g.messages <- Message{exit: true}
            return nil
        }
    }
}
I don't think channels should be used for everything. In some cases I think it's possible to end up with very lean code. But yes, if you have a stop channel for the other stop channel it probably means you should build your code around other mechanisms.
Since CSP is mentioned, how much would this apply to most applications anyway? If I write a small server program, I probably won't want to write it on paper first. With one possible exception I never heard of anyone writing programs based on CSP (calculations?)
> Since CSP is mentioned, how much would this apply to most applications anyway? If I write a small server program, I probably won't want to write it on paper first. With one possible exception I never heard of anyone writing programs based on CSP (calculations?)
CSP is really in the realm of formal methods. No you wouldn't formulate your server program as CSP, but if you were writing software for a medical device, perhaps.
This is the FDR4 model checker for CSP, it's a functional programming language that implements CSP semantics and may be used to assert (by exhaustion, IIRC) the correctness of your CSP model.
I believe I'm in the minority of Go developers that have studied CSP, I fell into Go by accident and only took a CSP course at university because it was interesting, however I do give credit to studying CSP for my successes with Go.
Adding an atomic counter is absolutely a great solution in the real world, and compare-and-swap or a mutex or something similar is totally what you want to do. In fact, that's my point in that part of the post: you want an atomic variable or a mutex or something there. Other synchronization primitives are more useful than sticking with the CSP idea of only using channels for synchronization.
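For what it's worth, the atomic version of that best-score bookkeeping is tiny. A sketch, assuming Go 1.19+ for atomic.Int64:

package main

import (
    "fmt"
    "sync"
    "sync/atomic"
)

func main() {
    var best atomic.Int64 // shared best score, no channel or owning goroutine
    var wg sync.WaitGroup

    for p := 0; p < 10; p++ {
        score := int64(p * 7 % 13) // arbitrary per-player scores
        wg.Add(1)
        go func() {
            defer wg.Done()
            // Compare-and-swap loop: only install our score if it is still the highest.
            for {
                cur := best.Load()
                if score <= cur || best.CompareAndSwap(cur, score) {
                    return
                }
            }
        }()
    }

    wg.Wait()
    fmt.Println("best:", best.Load())
}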
This was 2016. Is it all still true? I know things will be backwards compatible, but I haven't kept track of what else has made it into the toolbox since then.
Absolutely nothing has changed at the language level, and for using channels and the `go` keyword directly, there isn't really tooling to help either.
Most experienced Golang practitioners have reached the same conclusions as this blog post: Just don't use channels, even for problems that look simple. I used Go professionally for two years, and it's by far the worst thing about the language. The number of footguns is astounding.
The only thing that changed was Context and its support in networking and other libraries to do asynchronous cancellation. It made managing network connections with channels somewhat easier.
But in general the conclusion still stands. Channels bring unnecessary complexity. In practice, message passing with one queue per goroutine and support for priority message delivery (which one cannot implement with channels) gives better designs with fewer issues.
My hot take on context is that it's secretly an anti-pattern, used only because of resistance to thread-locals. While I understand the desire to avoid spooky action at a distance, the fact that I have to include it in every function signature I could possibly use it in is just a bit exhausting. That I could inadvertently spin up a new one at will also makes me a bit uneasy.
One of the often-mentioned advantages of Go's thread model is that it does not color functions, allowing any code to start a goroutine. But with a Context needed for any code that can block, that advantage is lost, with the ctx argument being the color.
Context is only needed where one has a dynamic graph where one wishes to cleanly tear down that graph. One can operate blocking tx/rx within a fixed graph without the need for any Context to be present.
Possibly folks are thinking of this only for "web services" types of things, where everything is being exchanged over an HTTP/HTTPS request.
However channels / CSP can be used in much wider problem realms.
The real gain I found from the CSP / Actor model approach to things is the ability to reason through the problems, and the ability to reason through the bugs.
What I found with CSP is that I accept the possibility of accidentally introducing potential deadlocks (where there is a dependency loop in messaging) but gain the ability to reason about state/data evolution. As opposed to the "multithreaded" approach of manipulating data (albeit with locks/mutexes), where one cannot easily reason through how a change occurred, or its timing.
For the bugs which occur in the two approaches, I found the deadlock with incorrect CSP to be a lot easier to debug (and fix) vs the unknown calling thread which happened to manipulate a piece of (mutex protected) shared state.
This arises from the Actor like approach of a data change only occurring in the context of the one thread.
The actor model does not need CSP. Erlang uses a single message-passing queue per thread (process, in Erlang terminology) with an unbounded buffer, so the sender never blocks.
As the article pointed out, the lack of support for unbounded channels complicates reasoning about the absence of deadlocks. For example, they are not possible at all with the Erlang model.
Of course, unbounded message queues have their own drawbacks, but then Erlang supports safe killing of threads from other threads, so with Erlang a supervisor thread can send health pings periodically and kill the unresponsive thread.
Function arguments do not color functions. If you'd like to call a function that takes a Context argument without it, just pass in a context.Background(). It's a non-issue.
Coloring is a non-issue; context itself is. Newcomers find it unintuitive, and implementing a cancellable piece of work is incredibly cumbersome. One can only cancel a read/write on a channel, implementing cancellation implies a three-line increase to every channel operation, and cancelling blocking I/O defers to the catchphrase that utilises the name of the language itself.
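Concretely, the per-operation growth looks something like this (a sketch; `produce` is a made-up example):

package main

import (
    "context"
    "errors"
    "fmt"
    "time"
)

// produce shows the boilerplate: every channel operation that must be
// cancellable turns from one line into a select over ctx.Done().
func produce(ctx context.Context, out chan<- int) error {
    for i := 0; ; i++ {
        select {
        case out <- i: // the actual work
        case <-ctx.Done(): // the extra lines, repeated per operation
            return ctx.Err()
        }
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
    defer cancel()

    out := make(chan int)
    errc := make(chan error, 1)
    go func() { errc <- produce(ctx, out) }()

    fmt.Println(<-out, <-out)
    err := <-errc
    fmt.Println(errors.Is(err, context.DeadlineExceeded)) // true
}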
Cancelling blocking IO is such a pain point in Go indeed. One can do it via careful Close() calls from another goroutine while ensuring that close is called only once. But Go provides absolutely no help for that in the standard library.
By the same token, being async doesn't color a function since you can always blocking-wait on the returned promise. But then the callers of that function can't use it async.
Similarly with Context, if your function calls other functions with Context but always passes in Background(), you deprive your callers of the ability to provide their own Context, which is kinda important. So in practice you still end up adding that argument throughout the entire call hierarchy all the way up to the point where the context is no longer relevant.
Yes. See update 2 FTA for a 2019 study on go concurrency bugs. Most go devs that I know consider using higher level synchronization mechanisms the right way to go (pun intended). sync.WaitGroup and errgroup are two common used options.
Channels haven't really changed since then, unless there was some significant evolution between 2016 and ~2018 that I don't know about. 2025 Go code that uses channels looks very similar to 2018 Go code that uses channels.
I'm also wondering about the internals though. There are a couple of places that GC and the hypothetical sufficiently-smart-compiler are called out in the article where you could think there might be improvements possible without breaking existing code.
I'd like to refute the 'channels are slow' part of this article.
If you run a microbenchmark which seems like what has been done, then channels look slow.
If you try the contention with thousands of goroutines on a high-core-count machine, there is a significant inflection point where channels start outperforming sync.Mutex.
The reason is that sync.Mutex, if left to wait long enough, will enter a slow code path and, if memory serves, will call out to a kernel futex. The channel will not do this, because the mutex that a channel is built with exists in the Go runtime - that's the special sauce the author complains doesn't exist, but didn't try hard enough to seek out.
Anecdotally, we have ~2m lines of Go and use channels extensively in a message passing style. We do not use channels to increment a shared number, because that's ridiculous and the author is disingenuous in their contrived example. No serious Go shop is using a channel for that.
Do you have any benchmarks for the pattern you described where channels are more efficient?
> sync.Mutex, if left to wait long enough, will enter a slow code path and, if memory serves, will call out to a kernel futex. The channel will not do this, because the mutex that a channel is built with exists in the Go runtime
Do you have any more details about this? Why isn’t sync.Mutex implemented with that same mutex channels use?
> [we] use channels extensively in a message passing style. We do not use channels to increment a shared number
What is the rule of thumb your Go shop uses for when to use channels vs mutexes?
Since my critique of the OP is that it's a contrived example, I should mention so is this: the mutex version should use sync/atomic and the channel version should have one channel per goroutine if you were attempting to write a performant concurrent counter; both of those alternatives would have low or zero lock contention. In production code, I would be using sync/atomic, of course.
On my 8c16t machine, the inflection point is around 2^14 goroutines - after which the mutex version becomes drastically slower; this is where I believe it starts frequently entering `lockSlow`. I encourage you to run this for yourself.
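For anyone who wants to poke at this, the rough shape of such a micro-benchmark (not the parent's actual code; run with something like `go test -bench . -cpu 1,4,16`, and `b.SetParallelism` can push the goroutine count higher):

package counter_test

import (
    "sync"
    "testing"
)

func BenchmarkMutexCounter(b *testing.B) {
    var mu sync.Mutex
    n := 0
    b.RunParallel(func(pb *testing.PB) {
        for pb.Next() {
            mu.Lock()
            n++
            mu.Unlock()
        }
    })
    _ = n
}

func BenchmarkChannelCounter(b *testing.B) {
    ch := make(chan int)
    done := make(chan struct{})
    go func() { // a single owner goroutine holds the counter
        n := 0
        for d := range ch {
            n += d
        }
        _ = n
        close(done)
    }()
    b.RunParallel(func(pb *testing.PB) {
        for pb.Next() {
            ch <- 1
        }
    })
    close(ch)
    <-done
}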
> Do you have any more details about this? Why isn’t sync.Mutex implemented with that same mutex channels use?
Why? Designing and implementing concurrent runtimes has not made its way onto my CV yet; hopefully a lurking Go contributor can comment.
If I had to guess, the channel mutex may be specialised since it protects only enqueuing or dequeuing onto a simple buffer. A sync.Mutex is a general construct that can protect any kind of critical region.
> What is the rule of thumb your Go shop uses for when to use channels vs mutexes?
Rule of thumb: if it feels like a Kafka use case but within the bounds of the local program, it's probably a good bet.
If the communication pattern is passing streams of work where goroutines have an acyclic communication dependency graph, then it's a no brainer: channels will be performant and a deadlock will be hard to introduce.
If you are using channels to protect shared memory, and you can squint and see a badly implemented Mutex or WaitGroup or Atomic; then you shouldn't be using channels.
Channels shine where goroutines are just pulling new work from a stream of work items. At least in my line of work, that is about 80% of the cases where a synchronization primitive is used.
> On my machine, the inflection point is around 10^14 goroutines - after which the mutex version becomes drastically slower;
How often are you reaching 10^14 goroutines accessing a shared resource on a single process in production? We mostly use short-lived small AWS spot instances so I never see anything like that.
> Why? Designing and implementing concurrent runtimes has not made its way onto my CV yet; hopefully a lurking Go contributor can comment.
> If I had to guess, the channel mutex may be specialised since it protects only enqueuing or dequeuing onto a simple buffer. A sync.Mutex is a general construct that can protect any kind of critical region.
Haha fair enough, I also know little about mutex implementation details. Optimized specialized tool vs generic tool feels like a reasonable first guess.
Though I wonder, if you use channels for more generic mutex purposes, are they less efficient in those cases? I guess I'll have to do some benchmarking myself.
> If the communication pattern is passing streams of work where goroutines have an acyclic communication dependency graph, then it's a no brainer: channels will be performant and a deadlock will be hard to introduce.
I agree with your rules. I used to always use channels for single-process thread-safe queues (similar to your Kafka rule), but recently I ran into a cyclic communication pattern with a queue and eventually relented to using a Mutex. I wonder if there are other painful channel concurrency patterns lurking for me to waste time on.
> How often are you reaching 10^14 goroutines accessing a shared resource on a single process in production? We mostly use short-lived small AWS spot instances so I never see anything like that.
I apologize, that should've said 2^14, each sub-benchmark is a doubling of goroutines.
2^14 is about 16,000, which for contention on a shared resource is quite a reasonable order of magnitude.
> We do not use channels to increment a shared number, because that's ridiculous and the author is disingenuous in their contrived example. No serious Go shop is using a channel for that.
Talk about knocking down strawmen: it's a stand-in for shared state, and understanding that should be a minimum bar for serious discussion.
According to the article, channels are slow because they use mutexes under the hood. So it doesn't follow that channels are better than mutexes for large N. Or is the article wrong? Or my reasoning?
Putting aside this particular topic, I'm seeing posts talking negatively about the language. I got my feet wet with Go many many years ago and for unknown reasons I never kept digging on it, so...
Is it worth learning it? What problems are best solved with it?
Author of the post here, I really like Go! It's my favorite language! It has absolutely nailed high concurrency programming in a way that other languages' solutions make me cringe to think through (await/async are so gross and unnecessary!)
If you are intending to do something that has multiple concurrent tasks ongoing at the same time, I would definitely reach for Go (and maybe be very careful or skip entirely using channels). I also would reach for Go if you intend to work with a large group of other software engineers. Go is rigid; when I first started programming I thought I wanted maximum flexibility, but Go brings uniformity to a group of engineers' output in a way that makes the overall team much more productive IMO.
Basically, I think Go is the best choice for server-side or backend programming, with an even stronger case when you're working with a team.
I once read a book from 1982 arguing, about the CSP implementation in Ada, that it led to a proliferation of threads (called tasks in Ada) and code complexity when mutex-based solutions were simpler.
Go's implementation of CSP somewhat mitigated the problems raised in the book by supporting buffered channels, but even so, with CSP one ends up with unnecessary tasks, which also brings the problem of their lifetime management, as the article mentioned.
Unfortunately, Go also made their channels worse by having their nil semantics be complete lunacy. See the "channel API is inconsistent and just cray-cray" section in the article.
The fact that Go requires a zero value for everything, including nil for objects, like it was Java and the 1990s all over again, is one of the reasons I'm not particularly inclined to use Go for anything. Even PHP is non-nullable-by-default nowadays.
Nil is IMO far and away Go's biggest wart, especially today with a lot of the early language's other warts filed off. And there's really no easy way to fix this one.
I think this article on channels suffers from "Seinfeld is Unfunny" syndrome, because the complaints about channels have largely been received and agreed upon by experienced Go developers. Channels are still useful but they were way more prominent in the early days of Go as a solution to lots of problems, and nowadays are instead understood as a sharp tool only useful for specific problems. There's plenty of other tools in the toolbox, so it was easy to move on.
Whereas, nil is still a pain in the ass. Have any nontrivial data that needs to turn into or come from JSON (or SQL, or XML, or ...)? Chances are good you'll need pointers to represent optionality. Chaining structVal.Foo.Bar.Qux is a panic-ridden nightmare, while checking each layer results in a massive amount of boilerplate code that you have to write again and again. Heck, you might even need to handle nil slices specially because the other side considers "null" and "[]" meaningfully distinct! At least nil slices are safe in most places, while nil maps are panic-prone.
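Concretely, the shape of the problem, with hypothetical types:

package main

import (
    "encoding/json"
    "fmt"
)

// Pointer fields are the usual way to tell "absent" from "zero value" in JSON.
type Qux struct {
    Value *int `json:"value,omitempty"`
}

type Bar struct {
    Qux *Qux `json:"qux,omitempty"`
}

type Foo struct {
    Bar *Bar `json:"bar,omitempty"`
}

func main() {
    var f Foo
    _ = json.Unmarshal([]byte(`{}`), &f)

    // fmt.Println(*f.Bar.Qux.Value) // would panic: nil pointer dereference

    // ...so every access turns into boilerplate like this:
    if f.Bar != nil && f.Bar.Qux != nil && f.Bar.Qux.Value != nil {
        fmt.Println(*f.Bar.Qux.Value)
    } else {
        fmt.Println("value absent")
    }
}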
> the complaints about channels have largely been received and agreed upon by experienced Go developers. Channels are still useful but they were way more prominent in the early days of Go as a solution to lots of problems, and nowadays are instead understood as a sharp tool only useful for specific problems.
As the author of the post, it's really gratifying to hear that this is your assessment nowadays. I agree, and while I'm not sure I had much to do with this turn of events (it probably would have happened with or without me), curbing the use of channels is precisely why I wrote the post. I felt like Go could be much better if everyone stopped messing around with them. So, hooray!
Wait until you try refactoring in Go. Java has plenty of warts, but the explicit inheritance in it and other languages makes working across large codebases much simpler when you need to restructure something. Structural typing is surprisingly messy in practice.
I love structural typing, use it all day long in TypeScript, and wish every language had it. I've never run into issues with refactoring, though I suppose renaming a field in a type when the consumers are only matching by a literal shape might be less than 100% reliable. It looks like Go does support anonymous interfaces, but I'm not aware of how much they're actually used in real-world code (whereas in TS inlined object shapes are fairly common in trivial cases).
I too like it, in fact I find myself missing it in other languages. I will say that anonymous interfaces are kind of rare in Go, and more generally interfaces in my experience as used in real codebases seem to lean more on the side of being producer-defined rather than consumer-defined.
But there are a couple of problems I can think of.
The first is over-scoping: a structural interface is matched by everything that appears to implement it. This can complicate refactoring when you just want to focus on those types that implement a "specialized" form of the interface. So, the standard library's Stringer interface (with a single `String() string` method) is indistinguishable from my codebase's MyStringer interface with the exact same method. A type can't say it implements MyStringer but not Stringer. The solution for this is dummy methods (add another method `MyStringer()` that does nothing) and/or mangled names (change the only method to `MyString() string` instead of `String() string`) but you do have to plan ahead for this.
The second is under-matching: you might have intended to implement an interface only to realize too late that you didn't match its signature exactly. Now you may have a bunch of types that can't be found as implementations of your interface even though they were meant to be. If you had explicit interface implementation, this wouldn't be a problem, as the compiler would have complained early on. However, this too has a solution: you can statically assert an interface is supposed to be implemented with `var _ InterfaceType = ImplementingType{}` (with the right-hand side adjusted as needed to match the type in question).
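A minimal sketch of both tricks, with an illustrative Temperature type standing in for a real one:

```go
package main

import "fmt"

// MyStringer is structurally identical to fmt.Stringer, so everything that
// satisfies one satisfies the other. A do-nothing marker method (say,
// MyStringerMarker()) would be needed to tell them apart.
type MyStringer interface {
	String() string
}

type Temperature float64

func (t Temperature) String() string { return fmt.Sprintf("%.1f degC", float64(t)) }

// Compile-time assertions that Temperature implements both interfaces; if a
// signature drifts, the build breaks here instead of at some distant call site.
var (
	_ fmt.Stringer = Temperature(0)
	_ MyStringer   = Temperature(0)
)

func main() {
	fmt.Println(Temperature(21.5)) // 21.5 degC
}
```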
I have to trace structure and data handoffs across multiple projects sometimes that span multiple teams. I'm used to rather easily being able to find all instances of something using a thing, where something is originally defined, etc. I can't always do that easily in Go, due to the structural typing, without either a very powerful IDE (JetBrains is the only one I know of that does Go well, via GoLand) or intimate knowledge of the project. It's surprisingly painful, and our tooling, while not JetBrains level, is supposed to be some of the best out there. And Yahweh help me if I actually have to refactor.
I really doubt this comes up in smaller, one-team repos. But in the coding jungle I often deal with, I spend much more time tracing code because of this issue than I'd spend refactoring. Make no mistake: I like Go, but this irks me.
I had time to spare so I toyed with the example exercise.
Now I am not sure if I misunderstood something, because the solution is fairly simple using only channels: https://go.dev/play/p/tD8cWdKfkKW
I replied to your comment on my website, but for posterity here, yes, I do think you did a good job for the part about exiting when bestScore > 100. There's nitpicks, but this is fine! It makes sense, and nice use of a select over a send.
I did expect that this exercise would come after the first one though, and doing this on top of a solution to the first exercise is a bit harder. That said, I also don't mean to claim either are impossible. It's just tough to reason about.
I don't think you examined the code in full. main spawns 10 goroutines that are constantly sending player scores to the game. That means 10 different players are sending their scores concurrently until someone reaches a score of 100.
Seriously! This caused such a ruckus when I posted this 9 years ago. I lost some professional acquaintanceships over it! Definitely a different reception.
The biggest mistake I see people make with Go channels is prematurely optimizing their code by making channels buffered. This is almost always a mistake. It seems logical. You don't want your code to block.
In reality, you've just made your code unpredictable and there's a good chance you don't know what'll happen when your buffered channel fills up and your code then actually blocks. You may have a deadlock and not realize it.
So if the default position is unbuffered channels (which it should be), you then realize at some point that this is an inferior version of cooperative async/await.
Another general principle is you want to avoid writing multithreaded application code. If you're locking mutexes or starting threads, you're probably going to have a bad time. An awful lot of code fits the model of serving an RPC or HTTP request and, if you can, you want that code to be single-threaded (async/await is fine).
>The biggest mistake I see people make with Go channels is prematurely optimizing their code by making channels buffered. This is almost always a mistake. It seems logical. You don't want your code to block.
Thank you. I've fixed a lot of bugs in code that assumes because a channel is buffered it is non-blocking. Channels are always blocking, because they have a fixed capacity; my favorite preemptive fault-finding exercise is to go through a codebase and set all channels to be unbuffered, lo-and-behold there's deadlocks everywhere.
If that is the biggest mistake, then the second biggest mistake is attempting to increase performance of an application by increasing channel sizes.
A channel is a pipe connecting two workers; if you make the pipe wider the workers do not process their work any faster, it just makes them more tolerant of jitter, and that's it. I cringe when I see a channel buffer with a size greater than ~100 - it's a telltale sign of a misguided optimization or finger-waving session. I've seen some channels sized at 100k for "performance" reasons, where the consumer is pushing out to the network, say 1ms for processing and network egress. Are you really expecting the consumer to block for 100 seconds, or did you just think bigger number = faster?
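A tiny illustration of the first point - the buffer doesn't remove the blocking, it only postpones it:

```go
package main

import "fmt"

func main() {
	// With a buffer of 1 this send "works". Change it to make(chan int) and the
	// same program dies with "all goroutines are asleep - deadlock!", because
	// nothing ever receives. The buffer didn't fix the bug, it hid it until the
	// day the buffer fills up.
	ch := make(chan int, 1)
	ch <- 1
	fmt.Println("sent with no receiver: bug deferred, not fixed")
}
```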
> So if the default position is unbuffered channels (which it should be), you then realize at some point that this is an inferior version of cooperative async/await.
In Go, channels are in many cases unavoidable because of APIs. As was already pointed out in other threads, a good rule of thumb is not to use them in public method signatures.
A valid use case for channels is to signal to the consumer, via a channel close in the Context.Done() style, that something is ready which can then be fetched through a separate API.
Then if you need to serialize access, just use locks.
WorkGroup can replace channels in surprisingly many cases.
A message passing queue with priorities implemented on top of mutexes/signals can be used in many cases that require complex interactions between many components.
Cheers for the clear summary. I take it you mean sync.WaitGroup and not WorkGroup? As a beginner I started out with WaitGroup usage years ago since it was more intuitive to me... worked fine afaict.
Amusing, like the Blub paradox but backwards. Programmers with no experience in Go think they can critique it before they've understood it.
If you don't understand how to use channels, you should learn first. I agree that you might have to learn by experimenting yourself, and that
a) there is a small design flaw in Go channels, but one that is easily fixed; and
b) the standard documentation does not teach good practices for using channels.
First, the design flaw: close(channel) should be idempotent. It is not. Is this a fatal flaw? Hardly. The workaround is trivial. Create a wrapper struct with a mutex that allows you to call Close() on the struct, and that effects an idempotent close of the member channel. Yes this is a bit of work, but you do it once, put it in a re-usable library, and never bother to think much about it again.
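Something like this minimal sketch (here using sync.Once instead of a bare mutex, which amounts to the same thing):

```go
package main

import (
	"fmt"
	"sync"
)

// SafeChan wraps a channel so that Close may be called any number of times,
// from any goroutine, and the underlying channel is closed exactly once.
type SafeChan[T any] struct {
	C    chan T
	once sync.Once
}

func NewSafeChan[T any](buf int) *SafeChan[T] {
	return &SafeChan[T]{C: make(chan T, buf)}
}

func (s *SafeChan[T]) Close() {
	s.once.Do(func() { close(s.C) })
}

func main() {
	sc := NewSafeChan[int](0)
	sc.Close()
	sc.Close() // harmless; a second bare close(sc.C) would panic
	_, ok := <-sc.C
	fmt.Println("channel open:", ok) // channel open: false
}
```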
Second, the poor recommended practice (range over a channel). The original article makes this mistake, and it is what causes his problem: you can never use range over a channel in production code. You must always do any select on a channel alongside a shutdown (bailout) channel, so there will always be at least two channels being select-ed on.
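In code, the shape of that rule looks roughly like this:

```go
package main

import "fmt"

// A consumer in the shape described above: not a bare `for v := range ch`, but a
// select that also watches a shutdown (bailout) channel.
func consume(ch <-chan int, done <-chan struct{}) {
	for {
		select {
		case v, ok := <-ch:
			if !ok {
				return // the producer closed the channel
			}
			fmt.Println("got", v)
		case <-done:
			return // told to shut down even though the producer never closed ch
		}
	}
}

func main() {
	ch := make(chan int)
	done := make(chan struct{})

	go func() {
		ch <- 1
		ch <- 2
		close(done) // a shutdown signal instead of (or in addition to) closing ch
	}()

	consume(ch, done)
}
```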
So yes. The docs could be better. It was immediately obvious to me when I learned Go 12 years ago that nobody at Google ever shuts down their services deliberately. Fortunately I was learning Test Driven Development at the time. So I was forced to figure out the above two rules pretty quickly.
Once those two trivial fixes are in place, Go sails. There are Go libraries on GitHub that do this for you. You don't even have to think. But you should.
Handling errors is only as verbose as you want it to be. You do realize you can call a function instead of writing if err != nil so much, right? Sheesh.
Go _is_ something of a sharp tool. Maybe we need to put a warning on it: for mature audiences only.
> you can never use range over a channel in production code. You must always do any select on a channel alongside a shutdown (bailout) channel, so there will always be at least two channels being select-ed on.
What if you spawned a new goroutine that waits for a waitgroup to complete and then closes the channel?
Well, the problem is best stated as "I don't know how to use channels and I don't intend to learn". So perhaps some kind of developer education is the solution?
My rule of thumb is that the goroutine that writes to a channel is responsible for closing it. In this case, a deferred call to close the channel in HandlePlayer is sufficient.
Still, this example has other issues (naked range over a channel?!) potentially contributing to the author’s confusion.
However, this post was also written almost a decade ago, so perhaps it’s a result of being new to the language? If I cared to look, I’d probably be able to find the corresponding HN thread from that year full of arguments about this, hah.
> My rule of thumb is that the goroutine that writes to a channel is responsible for closing it. In this case, a deferred call to close the channel in HandlePlayer is sufficient.
This isn't your rule of thumb, it's the only practical way to do it. The problems arise when you have multiple goroutines writing to a channel, which is the case here.
> Still, this example has other issues (naked range over a channel?!) potentially contributing to the author’s confusion.
You sound way more confused than the author. I think you've misunderstood what the (admittedly very abstract) example is supposed to be doing.
If you want strong control and a very unforgiving type system, with even more unforgiving memory lifetime management, so that your program can get even faster than the corresponding C/C++ program, then Rust is a no-brainer.
But I did not pick it for the speed, though that's a very welcome bonus. I picked it for the strong static typing system mostly. And I like having the choice to super-optimize my program in terms of memory and speed when I need to.
Modelling your data transformations with enums (sum types) and Result/Option was eye-opening and improved my programming skills in all other languages I am using.
Whether that's true or not, I make no claims. Not a compiler developer.
And yeah I'd be quite content with 1-2% margin compared to C as well.
OCaml needs more coherence and unity. It offers too much choice.
Application languages are generally managed/GC'd languages with JITs or AOT compilation for doing higher-performance apps. Examples would be Java, Go, Ocaml, Haskell, etc.
System languages are generally concerned with portability, manual memory layout, and direct hardware access as opposed to managed APIs. Examples would be C/C++, Rust, Zig, Pascal, etc.
Scripts are programs that carry out a specific task and then exit. If failure occurs, it is usually on the operator to fix the issue and run the script again. I agree that bash, AppleScript, Python, etc. are geared towards these types of programs.
Applications are programs with interfaces (not necessary graphical, but can be) that are used interactively. If failure occurs, the software usually works with the user to address the problem. It's not completely unheard of to use Go here, but I wouldn't call that its strength.
Systems are "forever"-running programs that provide foundational services to other programs. If failure occurs, the software usually does its best to recover automatically. This is what Go was designed for – specifically in the area of server development. There are systems, like kernels, that Go wouldn't be well suited for, but the server niche was always explicit.
Your pet definitions are fine too – it is a perfectly valid take – but know that's not what anyone else is talking about.
We're a video game studio as well using C#, and while game programmer != backend programmer, I can at least delegate small fixes and enhancements out to the team more easily.
Haskell also has software transactional memory where one can implement one's own channels (they are implemented [1]) and atomically synchronize between arbitrarily complex reading/sending patterns.
[1] https://hackage.haskell.org/package/stm-2.5.3.1/docs/Control...
In my not so humble opinion, Go is a library in Haskell from the very beginning.
https://github.com/twpayne/go-pubsub/blob/master/pubsub.go#L...
The inner channel is a poor man's future. Came up with this to have lua runtimes be able to process in parallel while maintaining ordering (A B C in, results of A B C out)
Ugh... I hope I never have to work with channels again.
I have a little pain when I do a CLI, as the work appears during the run and it's tricky to guarantee you exit when all the work is done and not before. Usually I have a sleep of one second, wait for the wait group, then sleep one more second at the end of the CLI main. If my work doesn't take minutes or hours to run, I generally don't use Go.
If you take enough steps back and really think about it, the only synchronization primitive that exists is a futex (and maybe atomics). Everything else is an abstraction of some kind. If you're really determined, you can build anything out of anything. That doesn't mean it's always a good idea.
Looking back, I'd say channels are far superior to condition variables as a synchronized cross-thread communication mechanism - when I use them these days, it's mostly for that. Locks (mutexes) are really performant and easy to understand and generally better for mutual exclusion. (It's in the name!)
That sounds reasonable. From what little Erlang/Elixir code I’ve seen, the sending and receiving of messages is also hidden as an implementation detail in modules. The public interface did not expose concurrency or synchronization to callers. You might use them under the hood to implement your functionality, but it’s of no concern to callers, and you’re free to change the implementation without impacting callers.
I had success in using a CSP style, with channels in many function signatures in a ~25k line codebase.
It had ~15 major types of process, probably about 30 fixed instances overall in a fixed graph, plus a dynamic sub-graph of around 5 processes per 'requested action'. So those sub-graph elements were the only parts which had to deal with tear-down, and clean up.
There were then additionally some minor types of 'process' (i.e. goroutines) within many of those major types, but they were easier to reason about as they only communicated with that major element.
Multiple requested actions could be present, so there could be multiple sets of those 5 process groups connected, but they had a maximum lifetime of a few minutes.
I only ended up using explicit mutexes in two of the major types of process. Where they happened to make most sense, and hence reduced system complexity. There were about 45 instances of the 'go' keyword.
(Updated numbers, as I'd initially misremembered/miscounted the number of major processes)
To debug stuff by reading the code, each message ends up having 30 potential destinations.
If a request involves N sequential calls, the control flow can be as bad as 30^N paths. Reading the bodies of the methods that are invoked generally doesn’t tell you which of those paths are wired up.
In some real world code I have seen, a complicated thing wires up the control flow, so recovering the graph from the source code is equivalent to the halting problem.
None of these problems apply to async/await because the compiler can statically figure out what’s being invoked, and IDE’s are generally as good at figuring that out as the compiler.
The problem space itself could have probably grown to twice the number of lines of code, but there wouldn't have needed to be any more developers. Possibly only the original two. The others were only added for meeting deadlines.
As to the graph, it was fixed, but not a full mesh. A set of pipelines, with no power of N issue, as the collection of places things could talk to was fixed.
A simple diagram represented the major message flow between those 30 nodes.
Testing of each node was able to be performed in isolation, so UT of each node covered most of the behaviour. The bugs were three deadlocks, one between two major nodes, one with one major node.
The logging around the trigger for the deadlock allowed the cause to be determined and fixed. The bugs arose due to time constraints having prevented an analysis of the message flows to detect the loops/locks.
So for most messages, there were a limited number of destinations, mostly two, for some 5.
For a given "request", the flow of messages to the end of the fixed graph would be passing through 3 major nodes. That then spawned the creation of the dynamic graph, with it having two major flows. One a control flow through another 3, the other a data flow through a different 3.
Within that dynamic graph there was a richer flow of messages, but the external flow from it simply had the two major paths.
Yes, reading the bodies of the methods does not inform as to the flows. One either had to read the "main" routine which built the graph, or better refer to the graph diagram and message flows in the design document.
Essentially a similar problem to dealing with "microservices", or plugable call-backs, where the structure can not easily be determined from the code alone. This is where design documentation is necessary.
However I found it easier to comprehend and work with / debug due to each node being a probeable "black box", plus having the graph of connections and message flows.
[1] Of those, only the first had any experience with CSP or Go. The CSP experience was with a library for C, the Go experience some minimal use a year earlier. The other developers were all new to CSP and Go. The first two developers were "senior" / "experienced".
I would tentatively make the claim that channels (in the abstract) are at heart an interface rather than a type of synchronisation per se. They can be implemented using Mutexes, pure atomics (if each message is a single integer) or any number of different ways.
Of course, any specific implementation of a channel will have trade-offs. Some more so than others.
You could address the concern by choosing a CPU architecture that included infinite-capacity FIFOs that connected its cores into arbitrary runtime-directed graphs.
Of course, that architecture doesn’t exist. If it did, dispatching an instruction would have infinite tail latency and unbounded power consumption.
As you think back even to the very early days of computing, you'll find individuals or small teams like Grace Hopper, the Unix gang, PARC, etc that managed to change history by "building something useless". Granted, throughout history that happened less than 1% of the time, but it will never happen if you never try.
Maybe Google no longer has any space for innovation.
Friendly fyi... GMail was not a "20% project" which I mentioned previously: https://news.ycombinator.com/item?id=39052748
Somebody (not me but maybe a Google employee) also revised the Wikipedia article a few hours after my comment: https://en.wikipedia.org/w/index.php?title=Side_project_time...
Before LLMs and ChatGPT even existed ... a lot of us somehow hallucinated the idea that GMail came from Google's 20% Rule. E.g. from 2013-08-16 : https://news.ycombinator.com/item?id=6223466
Go's design consistently at every turn chose the simplest (one might say "dumbest", but I don't mean it entirely derogatory) way to do something. It was the simplest most obvious choice made by a very competent engineer. But it was entirely made in isolation, not by a language design expert.
Go's designers did not actually go out and research language design. They just went with their gut feel.
But that's just it, those rules are there for a reason. It's like the rules of airplane design: Every single rule was written in blood. You toss those rules out (or don't even research them) at your own, and your user's, peril.
Go's design reminds me of Brexit, and the famous "The people of this country have had enough of experts". And like with Brexit, it's easy to give a lame catch phrase, which seems convincing and makes people go "well what's the problem with that, keeping it simple?".
The article illustrates just what the problem is with this "design by catchphrase" approach. It needs ~100 paragraphs (a quick, error-prone scan counted 86, plus sample code) to explain just why these choices lead to a darkened room with rakes sprinkled all over it.
And this article is just about Go channels!
Go could get a hundred articles like this written about it, covering various aspects of its design. They all have the same root cause: Go's designers had enough of experts, and it takes longer to explain why something leads to bad outcomes than to just show the catchphrase-level "look at the happy path. Look at it!".
I dislike Java more than I dislike Go. But at least Java was designed, and doesn't have this particular meta-problem. When Go was made we knew better than to design languages this way.
But they were working in a bit of a vacuum. Not only were they mostly addressing the internal needs of Google, which is a write-only shop as far as the rest of the software industry is concerned, they also didn't have broad experience across many languages, and instead had deep experience with a few languages.
In it, he seems to believe that the primary use of types in programming languages is to build hierarchies. He seems totally unfamiliar with ideas behind ML or haskell.
Go was the third language he played a major part in creating (predecessors are Newsqueak and Limbo), and his pedigree before Google includes extensive experience on Unix at Bell Labs. He didn't create C but he worked directly with the people who did and he likely knows it in and out. So I stand by my "deep, not broad" observation.
Ken Thompson requires no introduction, though I don't think he was involved much beyond Go's internal development. Robert Griesemer is a little more obscure, but Go wasn't his first language either.
> Did the C++ committee really believe that what was wrong with C++ was that it didn't have enough features?
C++ was basically saved by C++11. When written in 2012 it may have been seen as a reasonable though contrarian reaction, but history does not agree. It's just wrong.
I'm very much NOT saying that every decision Go made was wrong. And often experts don't predict the future very well. But I do think this is further evidence that, while not a bad coder by any means, Pike is very much not an expert in PL. Implementing ideas you had in the 1980s and adding things you've learned since is not the same thing as learning lessons from the industry in that subfield.
I don't think the word encompasses "have done it several times before, but has not actually even looked at the state of the art".
If you're a good enough engineer, you can build anything you want. That doesn't make you an expert.
I have built many websites. I'm not a web site building expert. Not even remotely.
I also think it's only from the perspective of a select few, plus some purists, that the authors of Go can be considered anything other than experts. That they made some mistakes, including some borne of hubris, doesn't really diminish their expertise to me.
If Go had been designed in the 1980s then it would have been genius. But now we know better. Expertise is more than knowing state of the art as of 30 years prior.
For one thing, Go took a number of forward-thinking stances (or ones which were, at least, somewhat unusual for its time and target audience), like UTF-8 strings (granted, Thompson and Pike created the encoding in the first place), fat pointers for strings and slices, green threads with CSP (though channels proved to be less useful than envisioned) and no function coloring, first-class functions, a batteries-included standard library, etc.
The only things I can think of which Go did that seemed blatantly wrong in that era would be the lack of generics and proper enums. Null safety, which IMO proved to be the killer development of that era, was not clearly formed in industry yet. Tony Hoare hadn't even given his famous "billion-dollar mistake" talk yet, though I'm sure some idea of the problem already existed (and he did give it in 2009, a couple of years into Go's development but also before its first public release). I know others find the type system lacking, but I don't think diving hard into types is obviously the best way to go for every language.
If one were to seriously investigate whether expertise was valued by Pike et al., I think it would start by looking at Erlang/OTP. In my opinion, that ecosystem offers the strongest competition against Go on Go's own strengths, and it predates Go by many years. Were the Go designers aware of it at all? Did they evaluate its approach, and if so, did they decide against it for considered reasons? Arguing against C++ was easy coming from their desired goals, but what about a stronger opponent?
If Java is not a failure then neither is Go.
If Go is a failure then so is Java.
Personally I think it is inaccurate to judge a mainstream, popular and widely adopted language as a failure just because it did not meet the goal set at the initiation of the project, prior to even the first line of code getting written.
Except that it did. Just because people aren't rewriting Borg and Spanner in Go doesn't mean it isn't the default choice for many infra projects. And Python got completely superseded by Go even during my tenure.
Hopefully saving this comment will work.
Go, unlike Brexit, has pivoted to become the solution to something other than its stated target. So sure, Go is not a failure. It was intended to be a systems language to replace C++, but has instead pivoted to be a "cloud language", or a replacement for Python. I would say that it's been a failure as a systems language. Especially if one tries to create something portable.
I do think that its simplicity is a rejection of the idea that there are experts out there, and/or of their relevance. Its decisions were made not from knowledge and considered rejection, but from ignorance and the "scoping out" of hard problems.
Another long article could be written about the clearly not thought through use of nil pointers, especially typed vs untyped nil pointers (if that's even the term) once nil pointers (poorly) interact with interfaces.
But no, I'm not comparing the outcome of Go with Brexit. Go pivoting away from its stated goals are not the same thing as Brexiteers claiming a win from being treated better than the EU in the recent tariffs. But I do stand by my point that the decision process seems similarly expert hostile.
Go is clearly a success. It's just such a depressingly sad lost opportunity, too.
More specifically, it was intended to replace the systems that Google wrote in C++ (read: servers). Early on, the Go team expressed happy surprise that people found utility in the language outside of that niche.
> but has instead pivoted to be a "cloud language"
I'm not sure that is really a pivot. At the heart of all the "cloud" tools it is known for is a HTTP server which serves as the basis of the control protocol, among other things. Presumably Go was chosen exactly because of it being designed for building servers. Maybe someone thought there would be more CRUD servers written in it too, but these "cloud" tools are ultimately in the same vein, not an entirely different direction.
> or a replacement for Python
I don't think you'd normally choose Go to train your ML/AI model. It has really only gunned for Python in the server realm; the very thing it was intended to be for. What was surprising to those living in an insular bubble at Google was that the rest of the world wrote their servers in Python and Ruby rather than C++ like Google – so it being picked up by the Python and Ruby crowd was unexpected to them – but not to anyone else.
Ok, I'll ask the obvious question. Who are these experts and what languages have they designed?
> Another long article could be written about the clearly not thought through use of nil pointers, especially typed vs untyped nil pointers (if that's even the term) once nil pointers (poorly) interact with interfaces.
You're getting worked up about something that's hardly ever an issue in practice. I suspect that most of your criticisms are similar.
The first is simple confusion, such as trying to handle "all nils" the same way. This is not necessary, though I have seen some developers coming from other languages get hung up on it. In my experience, the only real use for "typed nil" is reflection, and if you're using reflection, you should already know the language in and out.
However, the other aspect is that methods with pointer receivers on concrete types can have default behavior when the receiver is nil, while interfaces cannot. In the expression foo.Bar(), you can avoid a panic if foo is *T, but when foo is an interface, there is no way around the panic without checking foo's value explicitly first. Since nil is the default value of all interface types, even if you create a dummy/no-op implementation of the interface, you still risk nil panics. You can mitigate but never truly remove this risk.
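A small sketch of that asymmetry, with an illustrative Config type:

```go
package main

import "fmt"

// Config is an illustrative concrete type; Named is the interface it satisfies.
type Config struct{ Name string }

// A pointer-receiver method can defend itself against a nil receiver...
func (c *Config) DisplayName() string {
	if c == nil {
		return "(default)"
	}
	return c.Name
}

type Named interface {
	DisplayName() string
}

func main() {
	var c *Config
	fmt.Println(c.DisplayName()) // "(default)" - a nil *Config is fine

	var n Named
	// ...but a nil interface has no method table at all, so calling through it panics.
	if n != nil {
		fmt.Println(n.DisplayName())
	} else {
		fmt.Println("n is nil; n.DisplayName() would panic")
	}
}
```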
I actually think Java is the better PL, but the worse runtime (in what world are 10-second GC pauses ever acceptable?). Java has an amazing standard library as well - Golang doesn't even have many basic data structures implemented, and the ones it does, like heap, are absolutely awful to use.
I really just view Golang nowadays as a nicer C with garbage collection, useful for building self contained portable binaries.
Java is a child of the 90s. My full rant at https://blog.habets.se/2022/08/Java-a-fractal-of-bad-experim... :-)
Or how append() sometimes returns a new slice and sometimes it doesn't (so if you forget to assign the result, sometimes it works and sometimes it doesn't). Which is understandable if you think about it in terms of low-level primitives, but in Go this somehow became the standard way of managing a high-level list of items.
Or that whole iota thing.
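For the append point, a minimal example of why the result must always be assigned:

```go
package main

import "fmt"

func main() {
	s := make([]int, 0, 4) // capacity 4: early appends can reuse the backing array

	// Wrong: discarding append's result. The value is written into the backing
	// array, but s's length stays 0, so the caller never sees it - and once the
	// capacity runs out, append allocates a new array and the write is lost entirely.
	_ = append(s, 42)

	// Right: always assign the result; append may or may not return a new slice.
	s = append(s, 1, 2, 3, 4, 5) // exceeds cap, so a new backing array is allocated
	fmt.Println(s)               // [1 2 3 4 5]
}
```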
What is the whole iota thing?
For what it is, the iota design is really good. Languages like C and TypeScript, which have the exact same feature hidden behind some weird special syntax, look silly in comparison, not to mention that the weird syntax obscures what is happening. What Go has is much clearer to read and understand (which is why it gets so much grief where other languages with the same feature don't – there are no misunderstandings about what it is).
But maybe you are implying that no language should present raw enums, rather they should be hidden behind sum types? That is not an unreasonable take, but that is not a design flaw. That is a different direction. If this is what you are thinking, it doesn't fit alongside the other two which could be more intuitive without completely changing what they are.
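For anyone who hasn't run into it, iota is just an auto-incrementing counter inside a const block:

```go
package main

import "fmt"

// iota resets to 0 in each const block and increments per line,
// so this is Go's spelling of a plain C-style enum.
type LogLevel int

const (
	Debug LogLevel = iota // 0
	Info                  // 1
	Warn                  // 2
	Error                 // 3
)

// It also works inside expressions, e.g. for size units or bit flags.
const (
	_  = iota             // skip 0
	KB = 1 << (10 * iota) // 1 << 10
	MB                    // 1 << 20
	GB                    // 1 << 30
)

func main() {
	fmt.Println(Info, Warn) // 1 2
	fmt.Println(KB, MB, GB) // 1024 1048576 1073741824
}
```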
Actually... https://100go.co/
And compared to the article, there's no dead coroutine, and no shared state managed by the coroutine: having the `NewGame` function return a `*Game` pointing at the managed struct is an invitation for dumb bugs. This would be downright impossible in Rust, which coerces you into an actual CSP pattern where interaction with the shared state happens only through channels. Add a channel for exit, another for bookkeeping, and you're golden.
I often have a feeling that a lot of the complaints are self-inflicted Go problems. The author briefly touches on them with the special snowflakes that are the stdlib's types. Yes, genericity is one point where channels are different, but the syntax is another one. Why on earth is a `chan <- elem` syntax necessary over `chan.Send(elem)`? This would make non-blocking versions trivial to expose and discover for users (hello Rust's `.try_send()` methods).
Oh and related to the first example of "exiting when all players left", we also see the lack of a proper API for Go channels: you can't query whether there are still producers for the channel, because GC and pointers and the shared channel object itself and yadda. Meanwhile in Rust, producers are reference-counted and the channel is automatically closed when there are no more producers. The native Go channels can't do that (granted, they could, with a wrapper and dedicated sender and receiver types).
Care to show any example? I'm interested!
- A struct that represents the mutable state I’m wrapping
- A start(self) method which moves self to a tokio task running a loop reading from an mpsc::Receiver<Command> channel, and returns a Handle object which is cloneable and contains the mpsc::Sender end
- The handle can be used to send commands/requests (including one shot channels for replies)
- When the last handle is dropped, the mpsc channel is dropped and the loop ends
It basically lets me think of each logical concurrent service as being like a tcp server that accepts requests. They can call each other by holding instances of the Handle type and awaiting calls (this can still deadlock if there’s a call cycle and the handling code isn’t put on a background task… in practice I’ve never made this mistake though)
Some day I’ll maybe start using an actor framework (like Axum/etc) which formalizes this a bit more, but for now just making these types manually is simple enough.
BTW, you can create a deadlock equivalent with channels if you write "wait for A, reply with B" and "wait for B, send A" logic somewhere. It's the same problem as ordering of nested locks.
1. send a close message on the channel that stops the goroutine
2. use a Context instance - `ctx.Done()` returns a channel you can select on
Both are quite easy to grasp and implement.
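A minimal sketch of both options in one loop (the names are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// worker exits either when its input channel is closed (option 1)
// or when the context is cancelled (option 2).
func worker(ctx context.Context, jobs <-chan int) {
	for {
		select {
		case j, ok := <-jobs:
			if !ok {
				fmt.Println("jobs channel closed, stopping")
				return
			}
			fmt.Println("job", j)
		case <-ctx.Done():
			fmt.Println("context cancelled, stopping")
			return
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	jobs := make(chan int)

	go worker(ctx, jobs)
	jobs <- 1
	cancel() // option 2; alternatively, close(jobs) for option 1
	time.Sleep(50 * time.Millisecond) // give the goroutine time to print and exit
}
```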
I'd really, really recommend that you try writing the code, like the post encourages. It's so much harder than it looks, which neatly sums up my overall experience with Go channels.
In both cases, you'd need to keep track of connected players to stop the game loop and tear down the Game instance. Closing the channel during that teardown is not a hurdle.
What am I missing?
I was thinking the same, the only problem is the author not keeping track of players
On HandlePlayer returning an error you would decrement a g.players counter, or something, and in Game.run just do `if !g.hasPlayers() { break }` followed by `close(g.scores)`.
The solution requires nothing special, just basic logic that should probably be there anyway
If anything this post shows that mutexes are worse, by making bad code work
Among the errors in the multiplayer case is the lack of score attribution which isn’t a bug with channels as much as it’s using an int channel when you needed a struct channel.
Assuming the number of players is set up front, and players can only play or leave, not join. If the expectation is that players can come and go freely and the game ends some time after all players have left, I believe this pattern can still be used with minor adjustment.
(Please overlook the pseudo-code-level adjustments, I'm writing on my phone - I believe this translates reasonably into compilable Go code.)
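A minimal sketch of the pattern being described, with made-up details and a sync.WaitGroup owning the close rather than a player counter:

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
)

// Illustrative stand-ins for the article's Game/HandlePlayer, not its actual code.
type Game struct {
	scores chan int
}

// Each player reports a few scores and then leaves; the WaitGroup records the leaving.
func (g *Game) HandlePlayer(wg *sync.WaitGroup) {
	defer wg.Done()
	for i := 0; i < 3; i++ {
		g.scores <- rand.Intn(50)
	}
}

// run consumes scores until the channel is closed, i.e. until every player has left.
func (g *Game) run() {
	best := 0
	for s := range g.scores {
		if s > best {
			best = s
		}
	}
	fmt.Println("all players left; best score:", best)
}

func main() {
	g := &Game{scores: make(chan int)}
	var wg sync.WaitGroup

	for i := 0; i < 10; i++ {
		wg.Add(1)
		go g.HandlePlayer(&wg)
	}

	// Exactly one goroutine owns the close, and it fires only after the last producer exits.
	go func() {
		wg.Wait()
		close(g.scores)
	}()

	g.run()
}
```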
Since CSP is mentioned, how much would this apply to most applications anyway? If I write a small server program, I probably won't want to write it on paper first. With one possible exception, I have never heard of anyone writing programs based on CSP (calculations?).
CSP is really in the realm of formal methods. No you wouldn't formulate your server program as CSP, but if you were writing software for a medical device, perhaps.
This is the FDR4 model checker for CSP, it's a functional programming language that implements CSP semantics and may be used to assert (by exhaustion, IIRC) the correctness of your CSP model.
https://cocotec.io/fdr/
I believe I'm in the minority of Go developers that have studied CSP, I fell into Go by accident and only took a CSP course at university because it was interesting, however I do give credit to studying CSP for my successes with Go.
Most experienced Golang practitioners have reached the same conclusions as this blog post: Just don't use channels, even for problems that look simple. I used Go professionally for two years, and it's by far the worst thing about the language. The number of footguns is astounding.
But in general the conclusion still stands. Channels bring unnecessary complexity. In practice, message passing with one queue per goroutine and support for priority message delivery (which one cannot implement with channels) gives better designs with fewer issues.
Context is only needed where one has a dynamic graph where one wishes to cleanly tear down that graph. One can operate blocking tx/rx within a fixed graph without the need for any Context to be present.
Possibly folks are thinking of this only for "web services" kinds of things, where everything is exchanged over an HTTP/HTTPS request.
However channels / CSP can be used in much wider problem realms.
The real gain I found from the CSP / Actor model approach to things is the ability to reason through the problems, and the ability to reason through the bugs.
What I found with CSP is that I accept the possibility of accidentally introducing potential deadlocks (where there is a dependency loop in messaging) but gain the ability to reason about state / data evolution. As opposed to "multithreaded" access to manipulate data (albeit with locks/mutexes), where one cannot obviously reason through how a change occurred, or its timing.
For the bugs which occur in the two approaches, I found the deadlock with incorrect CSP to be a lot easier to debug (and fix) vs the unknown calling thread which happened to manipulate a piece of (mutex protected) shared state.
This arises from the Actor like approach of a data change only occurring in the context of the one thread.
As the article pointed out, the lack of support for unbounded channels complicates reasoning about the absence of deadlocks. For example, they are not possible at all with the Erlang model.
Of course, unbounded message queues have their own drawbacks, but then Erlang supports safe killing of threads from other threads, so with Erlang a supervisor thread can send health pings periodically and kill the unresponsive thread.
Full rebuttal by jerf here: https://news.ycombinator.com/item?id=39317648
Similarly with Context, if your function calls other functions with Context but always passes in Background(), you deprive your callers of the ability to provide their own Context, which is kinda important. So in practice you still end up adding that argument throughout the entire call hierarchy all the way up to the point where the context is no longer relevant.
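A small illustration of the difference (loadUser and queryDB are stand-ins, not real APIs):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// Anti-pattern: swallowing the context means callers can never cancel the call.
func loadUserBad(id int) string {
	return queryDB(context.Background(), id)
}

// Better: accept ctx and pass it down, all the way from the request handler.
func loadUserGood(ctx context.Context, id int) string {
	return queryDB(ctx, id)
}

// queryDB stands in for any ctx-aware call (database/sql, net/http, ...).
func queryDB(ctx context.Context, id int) string {
	select {
	case <-time.After(100 * time.Millisecond):
		return fmt.Sprintf("user-%d", id)
	case <-ctx.Done():
		return "cancelled"
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
	defer cancel()

	fmt.Println(loadUserGood(ctx, 1)) // "cancelled": the caller's deadline is honoured
	fmt.Println(loadUserBad(1))       // "user-1": the caller's deadline was silently dropped
}
```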
Like closures, channels are very flexible and can be used to implement just about anything; that doesn't mean doing so is a good idea.
I would likely reach for atomics before mutexes in the game example.
If you run a microbenchmark, which seems to be what has been done here, then channels look slow.
If you try the contention with thousands of goroutines on a high core count machine, there is a significant inflection point where channels start outperforming sync.Mutex
The reason is that sync.Mutex, if left to wait long enough, will enter a slow code path and, if memory serves, will call out to a kernel futex. The channel will not do this, because the mutex that a channel is built with exists inside the Go runtime - that's the special sauce the author is complaining doesn't exist, but they didn't try hard enough to seek it out.
Anecdotally, we have ~2m lines of Go and use channels extensively in a message passing style. We do not use channels to increment a shared number, because that's ridiculous and the author is disingenuous in their contrived example. No serious Go shop is using a channel for that.
> sync.Mutex, if left to wait long enough will enter a slow code path and if memory serves, will call out to a kernel futex. The channel will not do this because the mutex that a channel is built with is exists in the go runtime
Do you have any more details about this? Why isn’t sync.Mutex implemented with that same mutex channels use?
> [we] use channels extensively in a message passing style. We do not use channels to increment a shared number
What is the rule of thumb your Go shop uses for when to use channels vs mutexes?
https://go.dev/play/p/qXwMJoKxylT
go test -bench=.* -run=^$ -benchtime=1x
Since my critique of the OP is that it's a contrived example, I should mention so is this: the mutex version should be an atomic (sync/atomic) and the channel version should have one channel per goroutine if you were attempting to write a performant concurrent counter; both of those alternatives would have low or zero lock contention. In production code, I would be using sync/atomic, of course.
On my 8c16t machine, the inflection point is around 2^14 goroutines - after which the mutex version becomes drastically slower; this is where I believe it starts frequently entering `lockSlow`. I encourage you to run this for yourself.
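For anyone who wants the general shape without the playground link, here's a rough sketch (not the linked code) that can be dropped into a *_test.go file; it scales contention with -cpu rather than by doubling goroutine counts:

```go
package counter_test

import (
	"sync"
	"testing"
)

// Parallel goroutines bump a shared counter, either through a sync.Mutex or by
// sending increments to a single owner goroutine over a channel.
func BenchmarkMutexCounter(b *testing.B) {
	var mu sync.Mutex
	var n int
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			mu.Lock()
			n++
			mu.Unlock()
		}
	})
}

func BenchmarkChannelCounter(b *testing.B) {
	ch := make(chan int)
	done := make(chan struct{})
	var n int
	go func() {
		for d := range ch { // the one goroutine that owns the counter
			n += d
		}
		close(done)
	}()
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			ch <- 1
		}
	})
	close(ch)
	<-done
}
```

Run it with something like `go test -bench=Counter -cpu=1,8,32` to see how the gap moves as parallelism grows.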
> Do you have any more details about this? Why isn’t sync.Mutex implemented with that same mutex channels use?
Why? Designing and implementing concurrent runtimes has not made its way onto my CV yet; hopefully a lurking Go contributor can comment.
The channel mutex: https://go.dev/src/runtime/chan.go
Is not the same mutex as a sync.Mutex: https://go.dev/src/internal/sync/mutex.go
If I had to guess, the channel mutex may be specialised since it protects only enqueuing or dequeuing onto a simple buffer. A sync.Mutex is a general construct that can protect any kind of critical region.
> What is the rule of thumb your Go shop uses for when to use channels vs mutexes?
Rule of thumb: if it feels like a Kafka use case but within the bounds of the local program, it's probably a good bet.
If the communication pattern is passing streams of work where goroutines have an acyclic communication dependency graph, then it's a no brainer: channels will be performant and a deadlock will be hard to introduce.
If you are using channels to protect shared memory, and you can squint and see a badly implemented Mutex or WaitGroup or Atomic; then you shouldn't be using channels.
Channels shine where goroutines are just pulling new work from a stream of work items. At least in my line of work, that is about 80% of the cases where a synchronization primitive is used.
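That 80% case is basically the classic worker-pool shape - a sketch, with squaring standing in for real work:

```go
package main

import (
	"fmt"
	"sync"
)

// The "stream of work items" shape: a producer feeds a channel, a fixed pool of
// workers drains it, and the dependency graph is acyclic
// (producer -> jobs -> workers -> results), so deadlocks are hard to introduce.
func main() {
	jobs := make(chan int)
	results := make(chan int)

	var wg sync.WaitGroup
	for w := 0; w < 4; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobs {
				results <- j * j // stand-in for real work
			}
		}()
	}

	// Close results once every worker has exited.
	go func() {
		wg.Wait()
		close(results)
	}()

	// Producer: emit the work, then close to signal "no more".
	go func() {
		for i := 1; i <= 10; i++ {
			jobs <- i
		}
		close(jobs)
	}()

	sum := 0
	for r := range results {
		sum += r
	}
	fmt.Println("sum of squares:", sum) // 385
}
```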
> On my machine, the inflection point is around 10^14 goroutines - after which the mutex version becomes drastically slower;
How often are you reaching 10^14 goroutines accessing a shared resource on a single process in production? We mostly use short-lived small AWS spot instances so I never see anything like that.
> Why? Designing and implementing concurrent runtimes has not made its way onto my CV yet; hopefully a lurking Go contributor can comment.
> If I had to guess, the channel mutex may be specialised since it protects only enqueuing or dequeuing onto a simple buffer. A sync.Mutex is a general construct that can protect any kind of critical region.
Haha fair enough, I also know little about mutex implementation details. Optimized specialized tool vs generic tool feels like a reasonable first guess.
Though I wonder if you are able to use channels for more generic mutex purposes is it less efficient in those cases? I guess I'll have to do some benchmarking myself.
> If the communication pattern is passing streams of work where goroutines have an acyclic communication dependency graph, then it's a no brainer: channels will be performant and a deadlock will be hard to introduce.
I agree with your rules. I used to always use channels for single-process thread-safe queues (similar to your Kafka rule), but recently I ran into a cyclic communication pattern with a queue and eventually relented to using a Mutex. I wonder if there are other painful channel concurrency patterns lurking for me to waste time on.
I apologize, that should've said 2^14, each sub-benchmark is a doubling of goroutines.
2^14 is 16000, which for contention of a shared resource is quite a reasonable order of magnitude.
The article that the OP article references does not show the code for their benchmark, but I must assume it's not using a large number of goroutines.
> I cringe when I see a channel buffer with a size greater than ~100 - it's a telltale sign of a misguided optimization or finger-waving session.
I feel so validated by this comment.
> A message passing queue with priorities implemented on top of mutexes/signals can be used in many cases that require complex interactions between many components.
I think that was one of the successes of Go.
Every big enough concurrent system will conclude that raw sync primitives are dangerous and will implement a queue system more similar to channels.
Mutexes always look easier for starters, but channels/queues will help you model the problem better in the long term, and make it easier to debug.
Also, as a rule of thumb, you should probably handle panics every time you start a new thread/goroutine.
> What if you spawned a new goroutine that waits for a waitgroup to complete and then closes the channel?
But in any case you will end up using a wrapper of some kind.
It is not clear from the example, but I presume there would be multiple players, i.e. multiple HandlePlayer goroutines all sending on the same channel. In such a case, one player closing the channel would affect the rest of the producers too.