Hacker News | shigeo's comments

Out of curiosity, how and how often do you make use of Haskell's laziness, and how often does it get in your way?

And yes, the syntax of Haskell (besides unary minus) really does feel like it's close to some kind of optimum.


If you remove monadic IO, laziness and immutability from Haskell, then you're a lot closer to OCaml than you are to Haskell.

But more importantly, immutability is used everywhere in the industry.


Nice idea! Some cheapshots:

1. The comments aspect of this doesn't really jump out at me from the landing page.

2. The words "knowledge base" scare most people.

3. "You will be our customer and we promise to treat you the way we’d like to be treated too… with respect" is much more ominous with ellipsis than it would be with an em dash or a colon.


Thank you. Great points. I'll fix that.


Intuitionistic logic and intuitionistic type theory follow essentially the same set of rules, so how does type theory allow for a "marriage of mathematics and computation" any more than logic, which everyone already uses?


Because conventional mathematics, whether based on set theory or some categorical alternative, has (usually classical) logic as a background theory. In type theory, that's all you need. It is a theory of computation that has a "built-in" logic. And there's more than just intuitionism. There are type theories that model non-constructive/classical logic and are yet still computable programming languages. So we can write computable code and prove properties about that code in the same theory.
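To make "write computable code and prove properties about that code in the same theory" concrete, here is a tiny sketch in Lean 4 syntax (my example, not from the parent): a program and a proof about it are both just terms, checked by the same kernel.

```lean
-- A program: ordinary computable code.
def double (n : Nat) : Nat := n + n

-- A proof about that program, written in the same language.
-- Here the specification holds by definitional unfolding alone.
theorem double_spec (n : Nat) : double n = n + n := rfl
```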


> There are type theories that model classical logic and are yet still computable programming languages.

Thanks, but this seems absurd to me. Please elaborate!


I'm not an expert, so please take the following with a grain of salt.

One example of a computational formalism that provides classical logic at the type level is Parigot's [λμ-calculus](https://en.wikipedia.org/wiki/Lambda-mu_calculus). The λμ-calculus extends the ordinary λ-calculus with μ-abstraction.

μ-abstraction resembles the notion of a continuation. With that addition, Peirce's law of classical logic becomes derivable without any workarounds (e.g. double-negation translation). The program that proves Peirce's law is just the call/cc function.
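For intuition, here's Peirce's law proved in Lean 4 (my illustration, not the λμ-calculus itself): the proof hinges on classical case analysis, which is exactly the step intuitionistic logic refuses, and the proof term it produces plays the role call/cc plays in λμ.

```lean
-- Peirce's law: ((p → q) → p) → p, derivable once classical
-- case analysis on p is available.
theorem peirce (p q : Prop) : ((p → q) → p) → p := by
  intro h
  by_cases hp : p
  · exact hp                              -- if p holds, we are done
  · exact h (fun hp' => absurd hp' hp)    -- else p → q holds vacuously
```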

This is still an area of ongoing research, so you can find plenty of articles online. Many interesting introductory articles are among the advanced materials of [this course](https://www.cs.ru.nl/~freek/courses/tt-2011/), including Parigot's original paper.


There are a number of different type theory models for classical logic. The first to my knowledge was a model that uses the call-with-current-continuation (call/cc) control flow operator. While this does model classical logic, some say it is not particularly useful as a practical programming construct, which is perhaps why it does not seem to have been adopted by any major proof assistant. More recently I've seen a few papers showing that type-theoretic models of concurrency can also model classical logic, e.g. see "Classical Proofs as Parallel Programs" <https://arxiv.org/abs/1809.03094>. The original idea that the Curry-Howard correspondence only applies to intuitionistic logic is wrong.


Tangentially, Rust goes well out of its way to avoid "type inference at a distance". For example, unlike Haskell or most MLs, type inference will not work across a function boundary.


Here's one that caught me off guard:

    #[derive(Debug)]
    struct Foo<T> { pub bar: T }

    fn hmm<T>(x: T) -> Foo<T> {
        Foo { bar: x }
    }

    fn main() {
        let f: Foo<f32> = hmm(1.23);
        print!("got: {:?}\n", f);
    }
So maybe that's not crossing function boundaries, but the type parameter of the generic "hmm" call on the right-hand side is being inferred from the explicitly declared type on the left-hand side. I know it's pointless to try to change anyone's mind about this, but I personally find it unsettling :-)


Why does that matter?


Out of curiosity, why does the condition "without egregious side effects" exist? Aren't egregious side effects already factored into the "net benefit" calculation?


My moral compass says, don't harm individuals for the greater good.

I should add, you're not wrong to question the necessity of including the second part. It is most certainly a factor. There's a natural justice strand in the formulation of Benthamite utilitarianism (which is, broadly speaking, where I'm coming from) that reinforces an enlightened approach to individual consequences, since in the long run the absence of individual justice poisons the greater good anyway.

However that's a bit of a mouthful, and it does need surfacing because there are other formulations of utilitarianism that lead to dystopian nightmare societies, and besides, I'm rather fond of the word egregious.

However, I do regret forgetting to riff on the laws of robotics in the phrasing.


> I wouldn’t have bet that this is what would have happened.

Which part of the results are you referring to? It's well-known that reference counting has significantly lower throughput than tracing garbage collectors, so the fact that C++ is outperformed here isn't surprising at all.


As a Lisp programmer, I confess that when I see a performance shootout between Go, Java, and C++, I don't expect to hear the result that Go won -- and that its performance is on par with their old Common Lisp program.


Great point. Here’s the issue: there are tons of ways of doing reference counting in C++. Some go all-in with implied borrowing. Some make great use of C++’s various references. Some use the reference counting only for a subset of objects and rely on unique_ptr for hot things.

So, there is no universal answer to how C++ reference counting compares to GC.

There is a well known answer, that you’re probably referring to, if you reference count every object and you do it soundly by conservatively having every pointer be a smart pointer. But it just isn’t how everyone who does reference counting in C++ does it, and I was surprised because I’m used to folks going in the other direction: being unsound as fuck but hella fast.


> there are tons of ways of doing reference counting in C++

There are but critically there are also a lot of ways to not do ref counting at all. C++ isn't a refcounted language, it's a language where you can use refcounting (shared_ptr), but you don't have to (unique_ptr, value types). It's not even recommended to be primarily refcounted.

They chose a really odd subset of C++ to use here (shared_ptr exclusively), very unorthodox and not something I've ever seen elsewhere or recommended.


If either of those affects performance there are WAY too many memory allocations happening.


Sure, but I guess that's the point --- the GCs in Java and Go can handle any allocation pattern you throw at them reasonably well, but there's not, as far as I know, such a "one-size-fits-all" solution in C++ (not that it needs one.)


If you don't want to plan out proper (as in performant) data structures and memory management, C++ is indeed the wrong language. And judging by the stuff discussed in comments above, this is exactly what happened here.


I'm not sure I would call 'using your entire CPU to do trivially small heap allocations' a pattern unless you mean that it's a pattern you see when people wonder why their software is so slow.


Side note: the study of monoids isn't a very big field in mathematics, because they're too simple to say too much about them. But if you modify them just a bit, either in the direction of groups or in the direction of categories, you get massive fields of study.


I'm reminded of John Baez writing recently (https://twitter.com/johncarlosbaez/status/124647011232195379...) about commutative semigroups. Even simpler, but still lots to say!


I probably still stand corrected, but commutativity is a much stronger condition than the existence of an identity!


I think the existence of an identity isn't very important. You can adjoin an identity to any semigroup to turn it into a monoid.

In any case, doesn't adding more properties make a theory simpler? For example the classification of finite simple groups is much harder than the classification of finite simple commutative groups.


Or that a monoid is merely a category with exactly one object.


