> and (c) requires a language feature (sequence points) to disambiguate in which order side effects get executed. With Haskell, we just don’t have to care.
Reading up to this point, I had to chuckle a bit. I have struggled with the Haskell type system more than I care to admit; it's almost never "we just don't have to care" when comparing to most other popular languages.
That being said, this article does a nice job of gently introducing some of the basics that will trip up somebody who is casually looking through code wondering what *> and <*> and <* do. As usual, there is a steep learning curve because of stuff like this all over the codebase. If I walk away for a month, I need to revisit >>= vs >> and other common operators before I can be productive. It probably doesn't help that I never actually speak to a human about these concepts so in my head it's always ">>=" and not "bind."
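For anyone else squinting at those operators, here is a minimal sketch using Maybe as the effect; the names and values below are made up purely for illustration:

```haskell
-- Applicative/Monad operators illustrated with Maybe.
example, keepLeft, sequenced, bound :: Maybe Int
example   = (+) <$> Just 1 <*> Just 2     -- <*> applies a wrapped function: Just 3
keepLeft  = Just 1 <* Just "ignored"      -- run both, keep the left result: Just 1
sequenced = Just 1 >> Just 2              -- sequence, discard the left result: Just 2
bound     = Just 3 >>= \n -> Just (n * 2) -- >>= ("bind") feeds the result onward: Just 6

keepRight :: Maybe String
keepRight = Just 1 *> Just "kept"         -- run both, keep the right result: Just "kept"

main :: IO ()
main = print (example, keepLeft, sequenced, bound, keepRight)
```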
> It's almost never "we just don't have to care" when comparing to most other popular languages.
Struggling with the Haskell type system is not the experience of somebody who has developed an intuition about it. Granted, it is not a binary thing; you can have good intuition about some parts of it and struggle with others.
I think the way you put it is, while technically true, not fair. Those "most other" languages are very similar to one another. It is not a C# achievement that you don't struggle with its type system coming from Java.
This is like people struggling with Rust because of burrow checker, well, they have probably never programmed with burrow checker before.
I try to avoid >>= and >> (or *>) because I know it trips people up; do-notation is more than fine. The exception is when parsing with one of the parsecs where you get a lot of <* and *> usage and all the tutorials use those symbols.
But I like <|> , it feels very clear that it has a sort of "or" meaning.
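To make the parsing case concrete, here is a tiny sketch using ReadP from base rather than one of the parsec libraries (the operators behave the same way): `*>` and `<*` drop the delimiters, and `<|>` reads as "or else".

```haskell
import Control.Applicative ((<|>))
import Data.Char (isDigit)
import Text.ParserCombinators.ReadP

number :: ReadP Int
number = read <$> munch1 isDigit

-- Parse either "(42)" or "42": *> and <* throw away the parens,
-- <|> falls back to a bare number.
value :: ReadP Int
value = (char '(' *> number <* char ')') <|> number

main :: IO ()
main = print (readP_to_S (value <* eof) "(42)")  -- [(42,"")]
```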
> I try to avoid >>= and >> (or *>) because I know it trips people up; do-notation is more than fine
Interesting. Probably it's just me, but I first learned monads using >>= (in OCaml), so at the beginning I found the Haskell do notation more confusing (and indentation rules didn't help). >>= is just a function and I understand its signature well. On the other hand, "do" is a syntactic sugar that I sometimes had to "mentally desugar" in some cases.
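For reference, a small sketch of the desugaring being discussed; the two definitions below behave identically:

```haskell
withDo :: IO ()
withDo = do
  line <- getLine
  putStrLn "echoing:"
  putStrLn line

-- What the do block means in terms of >>= and >>.
withBind :: IO ()
withBind =
  getLine >>= \line ->
    putStrLn "echoing:" >>
      putStrLn line

main :: IO ()
main = withDo
```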
In a nutshell, first class effects and built in set of patterns for composing them get rid of boilerplate code. Combine that with type safety and you can churn out relatively bug free code very fast.
I always maintain that this is just familiarity; Haskell is in truth quite a simple language. It's just that the way it works isn't similar to the languages most people have started with.
I believe there's a strange boundary around the idea of simple vs easy (to quote Rich Hickey) and I don't know what to call it.. (or if somebody named it before)
functional and logical languages are indeed very simple, small core, very general laws.. (logic, recursion, some types) but grokking this requires unplugging from a certain kind of reality.
Most people live in the land of tools, syntax and features .. they look paradoxically both simpler than SML/Haskell so people are seduced by them, yet more complex at the same time (class systems are often large and full of exceptions) but that also makes it feel like they're learning something advanced, (and familiar, unlike greek single variables and categ-oids :).
Accessibility is not an issue. It takes only a little bit of effort to get productive with a Haskell codebase. I think it's more of a mental block because the language is different from what one might be used to. What Haskell needs, and doesn't have, is a compelling reason for people to make that small effort (i.e. the killer usecase).
"Relatively bug free code very fast" sounds like a killer use case to me.
So why hasn't it happened? Some possibilities:
1. People are just ignorant/unenlightened.
2. Haskell is too hard to use for most people. I think that different programmers think in different ways, and therefore find different languages to be "natural". To those whom Haskell fits, it really fits, and they have a hard time understanding why it isn't that way for everyone, so they wind up at 1. But for those who it doesn't fit, it's this brick wall that never makes sense. (Yes, this is about the same as 1, just seen from the other side. It says the problem is the language, not the people - the language really doesn't fit most people very well, and we can change languages easier than we can change people.)
3. Haskell isn't a good fit for many kinds of programming. The kind of programs where it fits, it's like a superpower. The kinds where it doesn't, though, it's like picking your nose with boxing gloves on. (Shout out to Michael Pavlinch, from whom I stole that phrase.)
What kinds of programs fit? "If you can think of your program like a pipe" is the best explanation I've seen - if data flows in, gets transformed, flows out. What kind of program doesn't fit? One with lots of persistent mutable state. Especially, one where the persistent mutable state is due to the problem, not just to the implementation.
Myself, I lean toward a combination of 2 and 3.
Oracle influenced (read: bought) academia into teaching Java for a generation. See Dijkstra’s criticisms[1] from the time, when his department was forced to stop teaching Haskell to undergrads for political reasons. Note that Haskell had not been too hard for Dijkstra’s undergrads.
Later, Python took its place, since people realized the Java ecosystem was way too complicated and was turning off would-be CS students. Python directly targeted the academic use case by having similarities to C, Java, and Bash; it was not a better language, it just made existing imperative and object-oriented assignments easier for classroom environments. Believe it or not, a lot of programmers and even academics sort of give up on exploring significantly unfamiliar directions after graduating.
[1] https://www.cs.utexas.edu/~EWD/OtherDocs/To%20the%20Budget%2...
4. History. In those types of discussions, there are always "rational" arguments presented, but this one is missing.
> One with lots of persistent mutable state.
You mean like a database? I don't see a problem here. In fact, there is a group of programs, large enough that it cannot be 3, that Haskell fits nicely: REST/HTTP APIs. This is pretty much your data goes in, data goes out.
The generalized version of `traverse`/`mapM` that doesn't just work for lists, but any `Traversable` type, is absolutely amazing and is useful in so many cases.
`traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)`
And you can derive it for free for your own datatypes!
The amount of code I’ve manually written in other languages to get a similar effect is painfully large.
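To make the "derive it for free" part concrete, here is a minimal sketch; `Pair` and `halve` are made-up names for illustration:

```haskell
{-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable #-}

-- Any container-ish type can have Traversable derived for it.
data Pair a = Pair a a
  deriving (Show, Functor, Foldable, Traversable)

halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

main :: IO ()
main = do
  print (traverse halve (Pair 4 6))  -- Just (Pair 2 3)
  print (traverse halve (Pair 4 5))  -- Nothing: one failure fails the whole traversal
```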
I had a generic report class that essentially fetched a bunch of database rows, did some stuff for each row, and then combined the results together into a report. (This was in Scala, I know Haskell doesn't have classes, but presumably similar situations can happen)
For one client, we needed to accumulate some extra statistics for each. For another, we needed to call their web API (so async I/O) to get some of the data used in the report. By making the generic superclass use a generic Applicative type, we could keep the report business logic clear and allow the client-specific subclasses to do these client-specific things and have them compose the right way.
Wanting custom applicative types is rarer than using a standard one, but it can be a good way to represent any kind of "secondary effect" or "secondary requirement" that your functions might have. E.g. "requires this kind of authorisation" or "must happen in a database transaction". But a lot of the time you can implement custom things using reader/writer/state, or free, rather than having to write a completely from-scratch applicative.
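A rough sketch of that shape of design, with made-up names (`Row`, `buildReport`): the report logic is written once against any Applicative, and each "client" picks the effect.

```haskell
import Data.Functor.Identity (Identity (..))

type Row = Int

-- Business logic written once, for any Applicative f.
buildReport :: Applicative f => (Row -> f String) -> [Row] -> f [String]
buildReport perRow = traverse perRow

-- A client with no extra effects instantiates f = Identity ...
plainClient :: [Row] -> [String]
plainClient = runIdentity . buildReport (Identity . show)

-- ... another needs IO (say, a web API call per row) and reuses the same logic.
ioClient :: [Row] -> IO [String]
ioClient = buildReport (\r -> pure ("fetched row " ++ show r))

main :: IO ()
main = do
  print (plainClient [1, 2, 3])
  ioClient [1, 2, 3] >>= print
```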
Haskell dilettante here… The “IO a” vs “a” reminded me of async vs sync - where the first one returns a promise/future to be awaited on, rather than a result.
Is there any parallel there, or is it an entirely wrong perception I got ?
There’s a parallel because Promises in a language like JavaScript are “monad-like”, so they’re similar to the IO Monad here. I am not a functional wizard so I’m sure that was not a fair comparison in some way, but it’s how I have thought of it. They’re both a representation of a side effect and require that effect be respected before you can get to the value inside it
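The parallel only goes so far, though: an `IO a` is an inert description that doesn't start running when you create it, so it's closer to an async function you haven't called yet than to a promise already in flight. A tiny sketch:

```haskell
main :: IO ()
main = do
  let greet = putStrLn "hello"  -- greet :: IO (); nothing has printed yet
  greet                         -- runs the action
  greet                         -- runs it again (a settled promise wouldn't re-run)
```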
I like Haskell, but I think it suffers from the same issues preventing most lisps to gain any traction: every codebase is different and reinvents its own DSL or uses different extensions.
Lispers hailing macros and metaprogramming cannot understand that power is also the very thing that makes jumping from project to project difficult, I have no intention of relearning your cleverly designed DSL or new extensions.
There's a reason why Java or PHP have plenty of widely used killer software, while monocle-wielding lispers and haskellers have very little to show after many decades.
It's not FP being the issue, it's just that their power attracts crowds interested in code more than the features/business/product.
I don't blame them, I love Haskell and Racket but I think very few teams can scale and make the compromise worth it.
The upside for haskell is that whenever you come to some new code, or return to your own old DSL-y code, the types are there as spikes in a cliff to help you move onward. With, e.g elisp, I always get a mild headache when I need to add some new feature to a codebase I myself wrote.
Everyone else is responding with FOSS, so I'll respond with some companies:
Co-Star, the astrology SaaS, is apparently written with a Haskell backend. I'd love to have seen the casting call for that.
I believe the Mercury bank also runs most of their backend stuff on Haskell. Functional languages in general are surprisingly common among financial investment firms.
Some of Target's stuff is written in Haskell. I think there was at least one big Facebook project that was written in Haskell, but they may have moved away from it by now. Awake Security does some Haskell stuff.
One thing which might be surprising is Haskell is apparently quite strong for general backend web dev.
> Haskell is apparently quite strong for general backend web dev
Yep. Mostly because of the https://www.servant.dev/ framework (but see also IHP, Yesod, and other frameworks). Servant lets you declare your HTTP API at the type level, and then it will infer the correct types for your endpoint handlers. You can also extract OpenAPI specs from it, generate clients for Haskell or other languages, etc.
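For a flavour of what "declare your HTTP API at the type level" looks like, here is a rough sketch from memory (check the servant docs for the real thing); `User` is just a placeholder:

```haskell
{-# LANGUAGE DataKinds, TypeOperators #-}

import Servant

data User = User  -- placeholder; a real type would need JSON instances

type UserAPI =
       "users" :> Get '[JSON] [User]                        -- GET /users
  :<|> "users" :> Capture "userId" Int :> Get '[JSON] User  -- GET /users/:userId

-- Handlers are then checked by the compiler against this type:
-- server :: Server UserAPI
```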
My current employer, Bellroy, uses Haskell for pretty much all new code and Servant for all new HTTP APIs. https://exploring-better-ways.bellroy.com/our-technology-sta... is an older post discussing the shift to Haskell. We've found Haskell code to be much more compact than the equivalent Ruby, and significantly more robust.
Going from Ruby to Haskell is, itself, quite a good signal of quality for me. Start strong and end stronger. Sounds like you've got a good thing going!
I maintain Haskell code for five different customers, some large projects, some smaller, projects of varying ages up to over a decade. All the projects do "server backend" stuff, some web frontend too. I love how secure I feel making changes to things I haven't touched in a while.
I've used Haskell several times for implementing isolated 'maths business logic' units in commercial backend applications.
One such system I built had the main (REST-API-exposing) backend implemented in Kotlin, with a separate application in Haskell applying a complex set of maths-driven business rules to GIS data to calculate area-specific prices.
The amount of IO on the Haskell side was fairly minimal and abstracted away quite nicely.
Haskell allowed expressing all complexity in a way that was easy to audit and translate from business/data analyst requirements.
Would do again :-) But only with the correct amount of isolation so you can lean into Haskell's strong sides.
If you want to stay in the land of monads, there is https://github.com/SimulaVR/Simula "a VR window manager for Linux". Should've been called MetaMonad ;) but I guess that was already taken by the phylum Metamonada, don't want to get on their bad side.
I once had a hard to track down bug in some code making use of conduit[0], which is introduced using examples like `main = runConduit $ (yield 1 >> yield 2) .| mapM_C print`.
Dutifully replacing every occurrence of (>>) with (*>), because it was more modern, suddenly changed the semantics somewhere, due to the fact that (>>) is defined with fixity `infixl 1 >>` and (*>) as `infixl 4 *>` - i.e. both are left-associative operators, but (*>) binds tighter than (>>) and some of the myriad of other operators you may encounter.
-- [0] - https://github.com/snoyberg/conduit
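To spell out how that can bite, a sketch: conduit's `.|` is, if memory serves, `infixr 2`, which sits between the two fixities, so an unparenthesised pipeline groups differently after the swap.

```haskell
-- infixl 1 >>   vs   infixl 4 *>   vs   infixr 2 .|
--
-- Without explicit parentheses, the same-looking pipeline parses differently:
--
--   yield 1 >> yield 2 .| printC   parses as   yield 1 >> (yield 2 .| printC)
--   yield 1 *> yield 2 .| printC   parses as   (yield 1 *> yield 2) .| printC
```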
"This seems rather … procedural. Even though we get all the nice guarantees of working with side effectful functions in Haskell, the code itself reads like any other procedural language would. With Haskell, we get the best of both worlds."
Working with the IO monad is much more complex, especially if you want to use other monadic types inside that code.
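For what it's worth, "other monadic types inside that code" usually ends up meaning a transformer stack; a minimal sketch, with made-up names:

```haskell
import Control.Monad.IO.Class (liftIO)
import Control.Monad.Trans.Reader (ReaderT, asks, runReaderT)

newtype Config = Config { greeting :: String }

-- Plain IO no longer fits: the code now lives in ReaderT Config IO,
-- and every raw IO action needs a liftIO.
app :: ReaderT Config IO ()
app = do
  g <- asks greeting
  liftIO (putStrLn g)

main :: IO ()
main = runReaderT app (Config "hello")
```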
Am I the only one who never tried Haskell but who, when reading discussions about it, ends up thinking real-world Haskell (with GHC extensions) has way too much (sometimes historical) cruft? It really puts me off.
Am I right in thinking that there are efforts to provide a better out-of-the-box experience, with some of that cruft dealt with for people who don't need the backwards compatibility? For myself, I found long prologues of extensions/options/whatever massively off-putting.
It's a thirty-year-old language, it's bound to have cruft. However, modern codebases tend to showcase a pretty efficient combination of language features, oftentimes optimised for productivity rather than research in lazy FP. Examples of such codebases are https://github.com/flora-pm/flora-server or https://github.com/change-metrics/monocle
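On the "long prologues of extensions" point: the block below is the sort of thing that used to sit at the top of many modules; newer projects can instead set the GHC2021 language edition, which turns on a large bundle of these by default and shortens the prologue considerably.

```haskell
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE DeriveFoldable #-}
{-# LANGUAGE DeriveTraversable #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeApplications #-}
-- ...all of the above are already enabled under GHC2021.
```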
> - map only works on Lists (one needs fmap for functors)
> - head throwing exceptions instead of returning Maybe
> - dependent types bolted on later: they're much nicer in Idris
- map is easier on newcomers. Once you understand functor, you'll remember to use fmap anyway. Also, I still use map for lists after 10+ years.
- I don't fully understand this, but do you mean that every `IO a` function would be better off being `IO (Maybe a)`?
- AFAIK, there are no dependent types in Haskell yet, but if the type-level programming you can do today is what you mean, then you are already in quite advanced territory. Yeah, I guess it could be more polished.
I mean that `head` should be `List a -> Maybe a` rather than `List a -> a`. If you really really want an `a`, you should provide `NonEmptyList a` or something.
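A sketch of the total version being asked for (the standard library answer to "I really want an `a`" is `Data.List.NonEmpty`, whose `head` is total):

```haskell
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

main :: IO ()
main = print (safeHead [1, 2, 3 :: Int], safeHead ([] :: [Int]))
```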
It's a very good explanation, but I'd welcome more humility. Instead of using the phrase lie-to-kids, I could say that all the dance around effects in Haskell is just an implementation detail and/or a leaky abstraction.
It makes Haskell even more procedural: you can use intermediate variables in do blocks.
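A tiny sketch of what that looks like in practice: `<-` binds the result of an action, `let` names a pure intermediate value.

```haskell
import Data.Char (toUpper)

main :: IO ()
main = do
  name <- getLine                 -- result of an IO action
  let shouted = map toUpper name  -- pure intermediate value
  putStrLn shouted
```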
http://blog.sigfpe.com/2023/08/what-does-it-mean-for-monad-t...
> This is like people struggling with Rust because of burrow checker
FYI it's "borrow" (as in when someone lends something to you) not "burrow" (which is a tunnel/hole)
Instead of writing this:
`thing <- getThing`
`case thing of ...`
writing this:
`getThing >>= \case ...`
Not so much because it is less code, but fewer variables to name.
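For anyone who hasn't seen `\case`: it needs the LambdaCase extension; a tiny sketch with a made-up `getThing`:

```haskell
{-# LANGUAGE LambdaCase #-}

getThing :: IO (Maybe String)
getThing = pure (Just "thing")

main :: IO ()
main =
  getThing >>= \case
    Just t  -> putStrLn t
    Nothing -> putStrLn "no thing"
```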
> And you can derive it for free for your own datatypes!
I understand how Applicative works, but I don’t know how to apply (pun intended) to my data types.
hledger - a plain text accounting tool
Private messenger for desktop and mobile platforms. It's mostly written in Haskell except for UI.
> cpp-for-mobile
> Template for cross-platform mobile app with native UIs and C++ core
Their flake.nix indicates they use Haskell to generate a cross-compiled shared library for Android, iOS, Mac, Linux and Windows.
I am not an expert in Nix, but at a high level I can see they are indeed using Haskell.
also http://detexify.kirelabs.org/classify.html was surprisingly useful in university
CADQuery/build123d is the other big one I'm interested in.
https://www.shellcheck.net/
I tried it on a couple of one-liners and it found a couple of potentially problematic points, one for each one-liner.