That Polymarket is an incompetent cabal that doesn't deserve a license is passé at this point: it's so ingrained in the standard narrative that comes up whenever prediction markets are mentioned, and so universally agreed upon, that it frankly isn't that interesting to most people. We all agree with it already; there is no one left to convince, certainly not on this site, so it's better to talk about more interesting things that are nevertheless related. It's, well, boring, and that is exactly why a comment like this wasn't the top comment. I'm not saying such a comment wouldn't be true (it would), just that it isn't going to be a top comment, and that fact is unsurprising. I do share your sentiment, though, that it would have been nice if human attention worked slightly differently here, but it does not.
> Hand over the names of the large bettors on that side of the market to the Israeli police
Woah there, didn't see this coming to be honest. But, I mean we already trust Israel to spend our tax dollars on genocide, so heck, why not, they'll do a great job!
> If Polymarket (and its competitors) cannot fix this, then it's time to get tough. Maybe it's time to actually crack down on Americans using VPNs
First of all, why is it time to get tough on limiting human rights? Why are your freedom of speech and your privacy the first things on the chopping block, rather than Polymarket's license to operate, which is a much more obvious target for all of our disdain?
And it's also practically impossible to crack down on VPNs 100%, especially within the constitutional framework we have in the US and the technical reality of the modern internet. Keep in mind that the US government itself invented Tor to help destabilize and leak information out of autocratic regimes that have implemented sweeping censorship and VPN blocking (you know, the sort of autocratic regime you apparently want to start here in the US, given your comments on the matter).
From a technical perspective, with TLS now standard for most web traffic, it is trivial to set up VPNs that hide in all kinds of ways, look completely innocuous, and are effectively unblockable. You can even hide exfiltration traffic in DNS queries; I got to see this a few times in practice back when I worked for the DoD. People will go to all kinds of lengths to get information in and out of a closed system, and stopping that is like trying to catch air with your hands: you always end up hurting and restricting the average citizen, while the bad actor who wants to slip through still slips through.
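For the curious, the DNS-exfiltration trick mentioned above can be sketched in a few lines. This is a toy illustration, not a real tunnel (tools like iodine add encryption, flow control, and a response channel); it just shows how arbitrary bytes fit into innocent-looking query names, given that DNS labels allow only a limited character set and at most 63 bytes each. All function names here are invented for the example.

```python
import base64

MAX_LABEL = 63  # DNS limits each label to 63 bytes

def encode_queries(data: bytes, domain: str) -> list[str]:
    """Split `data` into base32 chunks and emit DNS names under `domain`."""
    b32 = base64.b32encode(data).decode().rstrip("=")
    labels = [b32[i:i + MAX_LABEL] for i in range(0, len(b32), MAX_LABEL)]
    # One query per chunk; the leading sequence number survives reordering.
    return [f"{seq}.{label}.{domain}" for seq, label in enumerate(labels)]

def decode_queries(queries: list[str]) -> bytes:
    """Server side: reassemble the payload from the observed query names."""
    chunks = {}
    for q in queries:
        seq, label, _domain = q.split(".", 2)
        chunks[int(seq)] = label
    b32 = "".join(chunks[i] for i in sorted(chunks))
    return base64.b32decode(b32 + "=" * (-len(b32) % 8))  # restore padding
```

From a censor's point of view, each query is just a lookup of an odd-looking subdomain under a domain the exfiltrator controls, which is part of why blanket blocking is so hard.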
Such a ban or KYC program is also fundamentally against every notion of privacy and freedom enshrined in the Constitution, and even more so against the founding spirit and values of the internet and of reputable tech circles that care about privacy and freedom. You sound just as bad as Zuckerberg and Palantir pushing KYC on Linux right now to help build a better mass-surveillance state for their fascist overlords. You seem like the sort who would be extremely supportive of that project, though I sincerely hope that is not the case.
> we have extradition with Israel
For now. Once the Boomers finish their reverse mortgages and are finally out of the equation, we'll hopefully cut all funding, sanction Israel, and label it a terroristic, genocidal state, which it is. As usual, moral progress is delayed by rich old men refusing to let go. Ethnostates are extremely problematic in their own right, never mind genocidal ones. Certainly not morally praiseworthy, certainly not good "ally" material. They should be sanctioned at the very least, and definitely not subsidized.
And the funniest part is that I agree with you that Polymarket is bad, which is the underlying point you care so strongly about throughout this thread. But your arrogance and sudden, undeserved spitefulness, in response to completely innocuous, imaginative, on-topic comments, comes off as really bitter and makes you hard to talk to and even harder to agree with. You seem to have an axe to grind, and I think it is rooted in some deep-seated fear that everyone around you is as evil and misanthropic as you come across through your espoused values.
I really wonder whether privacy would actually be the answer here. Prediction markets as they exist now, where the odds are public, really shouldn't be called prediction markets; they should be called "outcome-shaping markets", because that is largely what they do: they let people shape real-world outcomes with massive amounts of liquidity. If instead these were privacy protocols, where you can see how much total liquidity is on a specific market but no one can decrypt how much liquidity is on each side until the outcome executes (i.e. using threshold encryption, commit-reveal, etc.), you'd have a very different situation, where your ability to predict in advance is what gets rewarded and there is no ability to "copy trade".
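The commit-reveal variant mentioned here could look roughly like this toy Python sketch (all class and method names are made up for illustration). Each bettor publishes only a salted hash of their side and stake, so the yes/no split is unreadable until the market resolves and bettors reveal. Note that plain commit-reveal lets a loser simply refuse to reveal, which is one reason real designs reach for threshold encryption instead.

```python
import hashlib
import secrets

class SealedMarket:
    """Toy commit-reveal market: stakes are committed as salted hashes,
    so nobody can read the yes/no split before bettors reveal."""

    def __init__(self):
        self.commitments = {}  # bettor -> hash of (side, amount, salt)
        self.revealed = {}     # bettor -> (side, amount), after resolution

    @staticmethod
    def commit_hash(side: str, amount: int, salt: bytes) -> str:
        return hashlib.sha256(f"{side}:{amount}:".encode() + salt).hexdigest()

    def place_bet(self, bettor: str, side: str, amount: int) -> bytes:
        salt = secrets.token_bytes(16)
        self.commitments[bettor] = self.commit_hash(side, amount, salt)
        return salt  # the bettor keeps this secret until reveal time

    def reveal(self, bettor: str, side: str, amount: int, salt: bytes) -> bool:
        ok = self.commit_hash(side, amount, salt) == self.commitments.get(bettor)
        if ok:
            self.revealed[bettor] = (side, amount)
        return ok

    def odds(self):
        """Only computable after reveals; nothing leaks beforehand."""
        yes = sum(a for s, a in self.revealed.values() if s == "yes")
        no = sum(a for s, a in self.revealed.values() if s == "no")
        return yes, no
```

A bettor cannot later lie about which side they took, because any changed side, amount, or salt produces a different hash than the one committed.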
I heard something remarkable many years ago: that there are websites on the dark web where you can bet on which day someone is going to die. In other words, you can pay to have them assassinated in a very indirect way. It's like Kickstarter, for assassinations.
When enough funds are raised, a willing volunteer simply places a bet on the day when they plan to carry out the deed. So they win the bet by default.
Obviously this is a horrible use case, and I'm not sure if such websites actually exist or are just rumors. But I have to wonder about the model itself.
I often thought about an inverse Kickstarter. Where you post an idea, people fundraise the idea, so the idea itself is validated. And then worthy contenders step forth to build it. (I guess donors could vote on who ends up getting the money to actually build it, or if there's enough they could even get divided between them for prototypes.)
For example, there seems to be a lot of interest in e-ink laptops, and a successor to flash. People have been complaining for decades, but not much gets built. How much interest is there? Well we can measure that objectively! You vote with your wallet.
Right now, it's a "pull" system. You have to hope and wait for a small number of highly motivated people. You have to hope they will launch something you're interested in. But what if you could push?
I think we could do a lot more proactiveness on the crowd side of things. As a recent article here mentioned... people actually do know what they want.
And of course this idea isn't just limited to products and services. I think there's a lot of potential for this idea in government as well.
> Obviously this is a horrible use case, and I'm not sure if such websites actually exist or are just rumors.
Polymarket has an @died tag, which I assume is for betting on people's deaths (I never used the site, and it's currently inaccessible), given that someone apparently recently made half a mil betting on Khamenei's death, and a cool billion was traded on bets on the timing of the bombing of Iran https://www.npr.org/2026/03/01/nx-s1-5731568/polymarket-trad...
It's also fundamentally something you can't really do properly in crypto. You need a central authority to decide if "thing has happened". There's no way to properly incentivize accuracy in cases where the majority of stake benefits from an inaccuracy.
Not acknowledging a problem doesn't make it go away. Isolating people decreases the problem but alienates them from the wider economy. I'd rather live in a world with open, regulated [0] assassination markets, even if that increases the amount of money behind assassinations.
Of course I'd rather live in a world with no murder and death, but seriously, grow up.
[0] The value of a human life is $xM, plus $yM for their occupation, so no payouts for hits below that amount.
I actually really like that inverse-Kickstarter idea. It makes a lot of sense, especially if you could enter some pain point or problem, search through existing idea markets, and throw money toward ideas that would solve it. Builders would essentially have a market of validated ideas and could submit a 'bid' before a set deadline; the funders would vote on which is best, and the funds would then resolve to whichever builder made the best product.
Kickstarter exists. The real problem is that most things cost more than the average person should dare risk. Some widget today is worth more than the same widget tomorrow. Most people should not back your e-ink laptop, even though if it existed they would pay more for it.
For the consumer the ideal case is that the money is not released from the escrow until their product actually arrives.
But that would be overly restrictive, and limit development to companies with significant funds.
Kickstarter has a balance where you only pledge money after deciding you trust the company.
I think we could do it so the initial fundraiser is provisional. And then everyone would know that some portion of that would fall through. Over time you would learn what the ratio is. If a million is pledged, maybe you can expect 200k to be committed in the final round.
We could even divide it further. There is some portion of users which are more open-minded and enthusiastic. They might be happy to fund a round of crappy prototypes. So in this way smaller players could gain trust and credibility.
That wouldn’t solve the issue in the article. The gamblers threatening the journalist know how much money they individually gambled on an outcome, which means they know how much they stand to lose if they cannot pressure a source to alter reporting in their favor.
A problem intrinsic to gambling is that people internalize expected winnings rather than what they actually put in, making the feeling of loss even greater than it objectively is.
The problem in the article is sort of boring; this is nothing new. The centralized authorities who run prediction markets have been doing a terrible job at wording the predictions, and then people get pissed when those poorly written words are put through a forcing function. Incompetence is boring.
Yeah. It seems like one of these two things is true:
Case 1. sam0x17 has read TFA carefully and he is proposing, without spelling it out very well, that the oracle should point at anonymous deciders instead of at vetted outlets with well-known reporters. This is the charitable case, but it seems unlikely, and it carries its own problems.
or
Case 2. sam0x17 goes through his life wielding a giant hammer called "privacy", which he swings about wildly without ever feeling the need to look very closely at any given nail. In fact, he'll even shout "privacy" in the face of a situation where privacy *is the problem*. Goons are hiding behind privacy to threaten a reporter and his family? Let's give them more privacy, that'll fix it!
HN can be very predictable sometimes, but hey, at least nobody ITT has suggested Rust yet.
You didn't even read my message. At no point do I say the oracle target should be private; that would be ridiculous. Public oracle, private markets on both sides of the issue, until the issue is considered closed, at which point we get to decrypt and find out who bet yes and who bet no. Liquidity would still be public, so you know how much money is in, but you get no intelligence signal, so world events don't get shaped by the market.
> At no point do I say the oracle target should be private
Yeah, that's why I said that Case 1 (the one where you actually read TFA) was unlikely. I included it out of charity, because it was the only way I could eke out something remotely relevant to the article from your original comment.
Instead, your original said stuff like:
> they let people shape real-world outcomes with massive amounts of liquidity
> no one can decrypt how much liquidity is on each side
> where your ability to predict in advance is what gets rewarded and there is no ability to "copy trade"
These things are not at issue in TFA. You are talking about completely different news stories. So now, we're in Case 2, the more likely case, where your comment has nothing to do with the news story under discussion, and you're just injecting your irrelevant fixation.
In case you didn't notice, I was replying to a commenter who exposed your irrelevance:
"The gamblers threatening the journalist know how much money they individually gambled on an outcome, which means they know how much they stand to lose if they cannot pressure a source to alter reporting in their favor."
> because it was the only way I could eke out something remotely relevant to the article from your original comment.
Remotely relevant? It's not like I'm talking about cats, I'm talking about prediction markets, which is the topic of the article. You would have to be extremely uncharitable to not see a holistic discussion about other problems prediction markets face and possible solutions to those problems as relevant. These topics very seldom come up on HN so a mere mention of prediction markets can and should be a very good excuse to have many side threads on the topic that have been waiting to happen.
If you want to bet on a binary outcome without knowing how much other participants are betting on the 2 possible outcomes, that's a bookie, not a prediction market.
The entire rationale for claiming that prediction markets "aren't just gambling" and have a legitimate social value is that they surface sentiment and valuation in a transparent and credible way.
(And that's even before we get to the fact that any kind of betting dependent on journalistic work could incentivize outcomes like the death threats reported here.)
You're still making a prediction, it's just much less likely to affect the outcome
I personally think that entire social value rationale is actually what makes prediction markets dangerous / socially bad
Seeing the total liquidity is fine. Seeing the odds in real time is really, really bad: once liquidity is high enough, it will shape real-world outcomes relative to a world where the market didn't exist.
> but no one can decrypt how much liquidity is on each side until the outcome executes
This would remove the supposed purpose of prediction markets, where you can get information about the probability of an event based on how much people are betting on it.
I mean, the real purpose is to make money on intelligence-based bets. And what's great is that after the fact, when everything decrypts, you would still get to see the exact timeline of sentiment as it changed, completely undisturbed by the market's existence. This is extremely good data for designing systems that make the correct prediction next time based on things happening in the real world, instead of following the herd and reading tea leaves in the odds graph. It's the only way to get pure, dollar-weighted sentiment data without the market skewing itself.
Speaking of privacy, there's a cost to fame and notoriety in societies where these systems exist. IMHO markets like this, taken to their extremes, incentivize small local communities, local governance, and very effective communication across the boundaries between communities, since those boundaries act as an event horizon: individuals needn't be known outside them, and you never want to be known too much as an individual outside your circle of community.
I'm not sure that sounds like such a bad world tbh. I just don't like how it gets there
Should be the opposite IMO. I don't think anonymity should exist on websites that are publicly accessible.
If people want to start forming their own meshnets over wifi or LoRA or whatever and remain anonymous, then all the power to them - because those kind of people are the exact opposite of the type of people who make death threats to journalists.
Surely I am misreading what you mean here. But your comment very much reads like you think "Iran strikes Israel on March 10" at 25% odds is what in fact caused Iran to strike Israel on March 10. I'll search the space of other interpretations in the mean time, but if you could help me out here and clarify…
> But your comment very much reads like you think "Iran strikes Israel on March 10" at 25% odds is what in fact caused Iran to strike Israel on March 10
I don't know how you extrapolated that from the parent's comment. It literally said nothing about the cause and effect of this particular event.
Knowing the odds in a prediction market IS a big part of the problem brought up in the linked article though (and the bets themselves). Knowing how much can be made from being right creates an upper-bound on what a financially-rational malicious actor will spend in trying to change the outcome.
Their comment proposed something that would "be the answer here".
What does "here" mean? It's logical to expect "here" to refer to a scenario that includes cases like the one in the article. If it's some scenario that excludes cases like the one in the article, then it's not actually relevant to the discussion.
(Tangents are OK. It's just confusing if they're introduced with phrasing that makes them sound like they're not tangents.)
"here" in that comment is not referring to any specific scenario. It is referring to the problem discussed in the sentence immediately following it, that public prediction markets can shape the outcome of the events they are predicting.
I wasn't extrapolating; that was the literal meaning of the words. The context was that someone commented "shouldn't be called prediction markets, they should be called outcome-shaping markets" in direct reply to "Polymarket gamblers threaten to kill me over... [the prediction market 'Iran strikes Israel on March 10']". I interpret that as: Polymarket gamblers outcome-shaped Iran striking Israel. It was at 25% odds when they struck. I don't think the commenter actually meant that literally, which is why I asked them to clarify. I'm just doing my best here.
I'm not even talking about the specific situation. The problem with prediction markets that has been talked about for months is that the predictions themselves are being used as an indicator that "thing will happen", and eventually there is so much liquidity on certain markets that the market determines the outcome, not the other way around.
> I'm not parent commenter, but any example of an event that actually matters?
Elections, wars, the economy, healthcare systems, laws, court cases?
I think the point is more that if it's already happening at this level of adoption, and these are only the ones we know about, surely there are many more where this is happening and we don't know, and as adoption increases, it will get worse and worse since the liquidity on the line will be much higher.
I'm not parent commenter, but any example of an event that actually matters?
Elections, wars, the economy, healthcare systems, laws, court cases?
The example you provided does look like insider-trading-type stuff, just like how 1v1 professional athletes will sometimes intentionally lose after receiving a bribe to do so, but neither your example nor sports really seems to have any meaningful impact on anything or anyone who isn't gambling, no?
I don't know. I imagine if someone was going to adjust the timing or targets of strikes in a war in response to a prediction market bet, or the decisions of a high-profile court case etc, they wouldn't say so in public the way the CEO of Coinbase did on that earnings call. It's pretty rare that someone would actually claim they're taking an action specifically with the purpose of altering the outcome of the prediction market bet, rather than giving some other reason. So even if it were happening, the only evidence I might expect to find would be suspicious prediction-market trades around low-probability events.
In situations like this, where you wouldn't expect to see evidence of something even if it were happening, you're basically left to make a judgement based on prior probability. So here that would come down to: is the financial incentive provided by the prediction market high enough relative to the decisionmaker's perceived risk and penalty of being caught? IMO, the answer is currently 'no' for most high-profile cases, but in a future where more money is riding on the bets, or where decisionmakers are insulated from consequences, that could swing to 'yes'.
The decision-makers don't need to change their decision whatsoever to corruptly profit from it, though; they can just place bets on the timeframe or outcome of whatever decision they originally intended to make. Why risk operational disruptions and a greater chance of getting caught when you can still profit exorbitantly from insider trading on your knowledge without incurring those unnecessary risks?
How can we possibly find an example if the names of the bettors aren't public? There are public bets on activities of the US government that can easily be exploited by people who control the outcome.
Maybe not so directly or clearly, but there is some influence factor for sure. We see this with sports betting, for example: at the far end of the spectrum it is literally players or coaches fixing games to satisfy bets, but somewhere short of that overt fraud there is influence-making going on, like submarine stories on ESPN or sports blogs highlighting a player's bad shoulder, hurting their perceived value going into a free-agency period.
This applies to governance as well. Note how a bet was placed on Polymarket on Maduro's capture mere hours before the raid was conducted, and this has led to legislation moving forward in an effort to combat suspicious prediction-market activity based on White House insider knowledge.
I was confused as well. After further consideration, I realized that Polymarket only resolves on actual event outcomes insofar as mainstream journalism accurately and credibly reports on those events and the resolution criteria rely on those reports. When it's easier or more reliable to influence the reporting around outcomes than the outcomes themselves, bettors will seek to influence the behavior of reporters rather than the behavior of, e.g., national militaries.
"Iran strikes Israel on March 10" is a difficult outcome to force one way or the other. But "The Times of Israel’s military correspondent or other credible sources reports that Iran strikes Israel on March 10" merely requires intimidating or bribing one journalist. The existence of the bet didn't cause the missile strike or the failed interception. But it did cause a significant, heavily motivated dis-information campaign from people who stood to lose a lot of money.
In a similar way, people fear that prediction markets estimating times of death are equivalent to assassination markets. But murder is a serious, aggressively prosecuted crime. It seems far easier and lower-risk to bribe, coerce, pretend to be, or literally be a reporter who gets a false obituary published: wait until the "victim" is going to be offline for a few days and can't be contacted to prove they're not dead, trick some tropical country's coast guard into confirming that the victim's yacht exploded offshore, point Polymarket at the obituaries, grab all the crypto, and disappear. If you fail to disappear successfully, your worst crime is publishing a fictional news article or bribing/threatening a journalist. You don't have to risk being in the victim's legal jurisdiction and getting your hands bloody; you don't even have to be in the same hemisphere. You only need to convince the prediction market's resolution criteria that something happened.
Scott Alexander wrote about a related issue last month in the colorfully named section [1] "Annals of the Rulescucks", where he described a half dozen scenarios where the outcome may or may not have diverged from the actual event. A bet isn't resolved by eyewitnesses, it's resolved over the Internet through another financial instrument.
> you only need to convince the prediction market's resolution criteria that something happened.
You don't even need the market to actually resolve, as soon as your rumor becomes credible the market pricing will adjust and you can just sell your shares before resolution.
I guess the next step in this evolution is to set up controlled news sources. You get people who have an official press card to report on things as you need as part of the reporting manipulation business.
"Hey there's this newspaper that says this obscure thing happened, please resolve the bet in my favour"
One reason could be to correct uncalibrated markets. This only works if your intuition is better than the market's current intuition. If a big whale with a big idea makes a big splash, you can profit off of the instability by gently betting against them. This doesn't require you to have any particular knowledge.
Get a pet and narrate everything you do, you can even talk to the pet, they love that. I do this whenever my husband is away and I find it pretty calming/enjoyable
I've always felt very alone in my view on this, so don't feel bad if you disagree with me, because most people probably do. But I just feel super morally icky when I hear about how part of our justice system is built around "retribution" / "vindication". It is one thing to punish; it is quite another to allow others to derive some sort of satisfaction from that punishment, even if they were victims. I just find it sick. It means that, as a society, we are no better than the perpetrators at the end of the day.
> it is quite another to allow others to derive some sort of satisfaction from that punishment
I sometimes see this behavior in close friends, and it totally changes the way I see them. I don't know if it's a moral failing on their part, but I just don't experience the desire for vengeance the same way they do, and it really scares me to see how they experience it. What will they do when they start to have mental decline, and (incorrectly) decide they were wronged in some way? :(
I think this thought process is something only people who have never been wronged can afford. There comes a time in life when the punishment must fit the crime, even if it's only to make an example of the criminal.
Life is hard enough; we should deter crimes at every opportunity, and people are rarely punished for every evil they commit.
See, this is my point though: it shouldn't matter what has happened to you. If that matters, then this is 100% emotional and not based on reason or justice.
I feel you, but hear me out. OP is right. I've wanted pretty much everything he's talking about here for years, I just never thought of all of this in as quite a formal way as he has. We need the ability to say "this piece of code can't panic". It's super important in the domains I work in. We also need the ability to say "this piece of code can't be non-deterministic". It's also super important in the domains I work in. Having language level support for something like this where I add an extra word to my function def and the compiler guarantees the above would be groundbreaking
IMO Rust started at this from the wrong direction. Compare with something like Zig, which just cannot panic unless the developer wrote the thing that does the panic, cannot allocate unless the developer wrote the allocation, etc.
Rust instead has all these implicit things that just happen, and now needs ways to specify that, in particular cases, they don't.
He's talking about this problem. Can this code panic?
foo();
You can't easily answer that in Rust or Zig. In both cases you have to walk the entire call graph of the function (which could be arbitrarily large) and check for panics. That's not feasible to do by hand, but the compiler could do it.
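The compiler pass described here is essentially transitive reachability over the call graph: mark every function that panics directly, then propagate backwards to every caller. A toy Python sketch of the analysis (the graph encoding and function names are made up for illustration):

```python
def can_panic(calls: dict[str, list[str]], direct_panics: set[str]) -> set[str]:
    """Given edges `caller -> callees` and the set of functions that panic
    directly, return every function from which a panic is reachable."""
    # Invert the graph, then flood outward from the direct panickers.
    callers: dict[str, list[str]] = {}
    for caller, callees in calls.items():
        for callee in callees:
            callers.setdefault(callee, []).append(caller)
    reachable = set(direct_panics)
    work = list(direct_panics)
    while work:
        f = work.pop()
        for caller in callers.get(f, []):
            if caller not in reachable:
                reachable.add(caller)
                work.append(caller)
    return reachable
```

A real compiler would additionally have to handle indirect calls, trait/virtual dispatch, and implicit panic sites (bounds checks, arithmetic overflow), which is exactly why doing this by hand doesn't scale.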
"Panic-free" labels are difficult to ascribe without being misleading, because runtime memory conditions can cause panics. Pushed too much onto your stack because the function happened to be preceded by a ton of other stack allocations? Crash. Heap too full and malloc failed? Crash. These things can happen from user input, so labelling a function no_panic just because it doesn't do any unchecked indexing can dangerously mislead readers into thinking the code can't crash when it can.
There's plenty of independent interest in properly bounding stack usage because this would open up new use cases in deep embedded and Rust-on-the-GPU. Basically, if you statically exclude unbounded stack use, you don't even need memory protection to implement guard pages (or similar) for your call stack usage, which Rust now requires. But this probably requires work on the LLVM side, not just on Rust itself.
Failable memory allocations are already needed for Rust-on-Linux, so that also has independent interest.
Or effect aliases. But given that it's strictly a syntactic transformation, it seems like a case of "make the wrong default today, fix it in the next edition". (Editions come with tools to update syntax changes.)
Something like that, except you probably also want to be able to express things like “whatever the callback I’m passed can throw, I can throw all of that and also FooException”. And correctly handle the cases when the callback can throw FooException itself, and when one of the potential exceptions is dependent on a type parameter, and you see how this becomes a whole thing when done properly. But it’s doable.
> Comparing to something like zig which just cannot panic unless the developer wrote the thing that does the panic
The zig compiler can’t possibly guarantee this without knowing which parts of the code were written by you and which by other people (which is impossible).
So really it’s not “the developer” wrote the thing that does the panic, it’s “some developer” wrote it. And how is that different from rust?
Huh? It seems to me that in these respects the two languages are almost identical. If I tell the program to panic, it panics, and if I divide an integer by zero it... panics and either those are both "the developer wrote the thing" or neither is.
In Zig, dividing by 0 does not panic unless you decide that it should or go out of your way to use unsafe primitives [1]. Same for trying to allocate more memory than is available. The general difference is as follows (IMO):
Rust tries to prevent developers from doing bad things, then has to include ways to avoid these checks for cases where it cannot prove that bad things are actually OK. Zig (and many others such as Odin, Jai, etc.) allow anything by default, but surface the fact that issues can occur in its API design. In practice the result is the same, but Rust needs to be much more complex both to do the proving and to allow the developers to ignore its rules.
Could you clarify what's going on in the Zig docs[0], then? My reading of them is that Zig definitely allows you to try to divide by 0 in a way the compiler doesn't catch, and this results in a panic at runtime.
I'd be interested if this weren't true, since the only feasible compiler solutions for preventing division-by-zero errors are either defining the behaviour (which always ends up surprising people later on) or incredibly cumbersome or underperformant type systems/analyses that ensure denominators are never 0.
> the only feasible compiler solutions for preventing division-by-zero errors are either defining the behaviour (which always ends up surprising people later on) or incredibly cumbersome or underperformant type systems/analyses that ensure denominators are never 0.
I don't think it's very cumbersome if the compiler checks whether the divisor could be zero. Some programming languages (Kotlin, Swift, Rust, TypeScript...) already do something similar for possible null-pointer access: they require that you add a check "if s == null" before the access. The same can be done for division (and remainder/modulo). In my own programming language, this is what I do: you cannot have a division by zero at runtime, because the compiler does not allow it [1]. In my experience, integer division by a variable is not all that common in practice. (And floating-point division does not panic, and integer division by a non-zero constant doesn't panic either.) If needed, one could use a static function that returns 0, or panics, or whatever is best.
>Some programming languages (Kotlin, Swift, Rust, Typescript...) already do something similar for possible null pointer access: they require that you add a check "if s == null" before the access.
For Rust, this is not accurate (though I don't know for the other languages). The type system instead simply enforces that pointers are non-null, and no checks are necessary. Such a check appears if the programmer opts in to the nullable pointer type.
The comparison between pointers and integers is not a sensible one, since it's easy to stay in the world of non-null pointers once you start there. There's no equivalent ergonomics for the type of non-zero integers, since you have to forbid many operations that can produce 0 even on non-0 inputs (or onerously check that they never yield 0 at runtime).
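A small Rust sketch of that asymmetry: once you start with a non-null reference you stay in the non-null world for free, whereas ordinary arithmetic does not stay inside the non-zero-integer world:

```rust
use std::num::NonZeroU32;

fn main() {
    // Non-null is the default: nullability is opt-in via Option, and the
    // compiler forces a check before the value can be used.
    let maybe: Option<&str> = Some("hello");
    if let Some(s) = maybe {
        println!("{}", s.len());
    }

    // A non-zero integer type exists, but subtraction can produce 0 even on
    // non-zero inputs, so you drop back to a plain u32 and must re-prove
    // non-zero-ness at runtime.
    let a = NonZeroU32::new(5).unwrap();
    let b = a.get() - 5; // back to u32, and it happens to be 0 here
    assert!(NonZeroU32::new(b).is_none());
}
```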
>The same can be done for division (and remainder / modulo). In my own programming language, this is what I do: you can not have a division by zero at runtime, because the compiler does not allow it... In my experience, integer division by a variable is not all that common in reality
That's another option, but I hardly find it a real solution, since it involves the programmer inserting a lot of boilerplate to handle a case that might never actually come up in most code, and where a panic would often be totally fine.
Coming back to the actual article, this is where an effect system would be quite useful: programmers who actually want to have their code be panic-free, and who therefore want or need to insert these checks, can mark their code as lacking the panic effect. But I think it's fine for division to be exposed as a panicking operation by default, since it's expected and not so annoying to use.
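For what it's worth, today's Rust already exposes exactly this split at the API level, just without an effect system to track it: the default operator panics, and opting out is a separate, explicit method:

```rust
fn main() {
    let n: u32 = 10;
    let d: u32 = 0;

    // The default `/` on integers panics on a zero divisor:
    // let q = n / d; // would panic: "attempt to divide by zero"

    // The non-panicking path is opt-in via `checked_div`:
    assert_eq!(n.checked_div(d), None);
    assert_eq!(n.checked_div(2), Some(5));
}
```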
The syntax in Kotlin is: "val name: String? = getName(); if (name != null) { println(name.length) // safe: compiler knows it's not null }"
So, there is no explicit type conversion needed.
I'm arguing for integer / and %; there is no need for an explicit "non-zero integer" type: the divisor is just an integer, and the compiler needs to have a proof that the value is not zero. For places where a panic is fine, there could be a method that explicitly panics in case of zero.
I agree an annotation / effect system would be useful, where you can mark sections of the code "panic-free" or "safe" in some sense. But "safe" has many flavors: array-out-of-bounds, division by zero, stack overflow, out-of-memory, endless loop. Ada SPARK allows proving the absence of runtime errors using "pragma Annotate". Dafny and Lean have similar features (in Lean you can give a proof).
> I think it's fine for division to be exposed as a panicking operation by default
That might be true. I think division (by non-constants) is not very common, but it would be good to analyze this in more detail, maybe over a large codebase... Division by zero does cause issues sometimes, so the question is how much of a problem it is if you disallow unchecked division, versus the problems you get if you don't check.
More specifically, Zig will return an error type from the division and if this isn't handled THEN it will panic, kind of like an exception except it can be handled with proper pattern matching.
I can't find anything related to division returning an error type. Looking at std.math.divExact, rem, mod, add, sub, etc. it looks to me like you're expected to use these if you don't want to panic.
Actually you're right, I was going by the source code which was in the link of the comment you replied to, but I missed that that was specifically for divExact and not just primitive division.
I have no argument against using the right tool for a job. Decorating a function with a keyword to get more compile-time guarantees does sound great, but I bet it comes with strings attached that affect how it can be used, which will lead to strange business logic. Anecdotally, I have not (perhaps yet) run into a situation where I needed more language features; I felt Rust had enough primitives that I could adapt the current feature set to my needs. Yes, at times I had to scrap what I was working on and rewrite it another way so I could have compile-time guarantees. Yes, here language features would offer speed of implementation.
Could you share a situation where the behavior is necessary? I am curious if I could work around it with the current feature set.
Perhaps I take issue with peers who throw bleeding-edge features at situations that don't warrant them. Last old-man anecdote: as a hobbyist woodworker, it pains me to see people buying expensive tools to accomplish something. They seem to lack the creativity to use the tools they already have. "If I had xyz tool I would build something so magnificent," they say. This amounts to having many low-quality, single-purpose tools where a single high-quality table saw could fit the need. FYI, a table saw can suit 90% of your cutting/shaping needs with the right jig. I don't want this to happen in Rust.
> Could you share a situation where the behavior is necessary?
The effects mentioned in the article are not too uncommon in embedded systems, particularly if they are subject to more stringent standards (e.g., hard realtime, safety-critical, etc.). In such situations predictability is paramount, and that tends to correspond to proving the absence of the effects in the OP.
Ah, the embedded application. Very valid point. I'm guilty of forgetting about that discipline.
I do wonder if it is possible to bin certain features to certain, uh, distributions(?), of rust? I'm having trouble articulating what I mean but in essence so users do not get tempted to use all these bells and whistles when they are aimed at a certain domain or application? Or are such language features beneficial for all applications?
For example, SIM cards are mini computers that actually implement the JVM, and you can write Java and run it on SIM cards (!). But only a subset of Java is allowed; not all features are available. In that case it is due to compute/resource restrictions, but is something to a similar tune possible for Rust?
I guess the no_std/alloc/std split is sort of like what you're talking about? It's not an exact match though; I think that split is more borne out of the lack of built-in support some targets have for particular features rather than trying to cordon off subsets of the language to try to prevent users from burning themselves.
On that note, I guess one could hypothetically limit certain effects to certain Rust subsets (for example, an "allocates" effect may require alloc, a "filesystem" effect may require std, etc.), but I'd imagine the general mechanism would need to be usable everywhere considering how foundational some effects can be.
> Or are such language features beneficial for all applications?
To (ab)use a Pixar quote, I suppose one can think of it as "not all applications may need these features, but these features should be usable anywhere".
Sorry, I don't understand. The result we all want is ensuring at compile time that some behavior cannot happen at runtime. OP argues we need these features built into the language; I am trying to understand what behavior we cannot achieve with the current primitives. So far the only compelling argument is that embedded applications have different requirements (which I personally cannot speak to) that separate their use case from, say, deploying to a server for your SaaS company. No doubt there are more; I am trying to discover them.
I am biased to think more features negatively impact how humans can reason about code, leading to more business-logic errors. I want to understand: can we make the compiler understand our code differently without additional features, by wielding mastery of the existing primitives? I very well may be wrong in my bias. Human ingenuity and creativity are not to be understated, but neither is laziness. Users will default to "out of the box" solutions over building with language primitives. Adding more and more features will dilute our mastery of the fundamentals.
For example, in Substrate-based blockchains, if you panic in runtime code the chain is effectively bricked, so they use hundreds of Clippy lints to try to prevent this from happening, but you can only do so much statically without language-level support. There are crates like no_panic; however, they basically can't be used anywhere there is dynamic memory allocation, because allocation can panic.
Same thing happens in real time trading systems, distributed systems, databases, etc., you have to design some super critical hot path that can never fail, and you want a static guarantee that that is the fact.
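A sketch of what such a hot path tends to look like in Rust without language support: every fallible step is routed through a checked API rather than one that can panic (the function name and arguments here are hypothetical; tools like the no_panic crate can then try to verify that no panic path survives optimization):

```rust
// A hot-path style function written to be panic-free by construction:
// fallible operations return Option instead of panicking.
fn price_update(book: &[u64], index: usize, qty: u64) -> Option<u64> {
    let level = book.get(index)?;   // no out-of-bounds panic
    level.checked_mul(qty)          // no overflow panic
}

fn main() {
    let book = [100u64, 250, 575];
    assert_eq!(price_update(&book, 1, 4), Some(1000));
    assert_eq!(price_update(&book, 9, 4), None); // bad index, no panic
}
```

The cost, as the comment says, is that nothing stops a later refactor from reintroducing a panicking call; only a static guarantee would catch that.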
Perhaps there are similarities to Scala, from my anecdotal observation. Coming from Java and doing the Scala Coursera course years ago, it felt like arriving in a candy shop. All the wonderful language features are there, true power yours to wield. And then you bump into the code lines crafted by the experts, and they are line for line so 'smart' that they take a real long time to figure out how the heck it all fits together.
People say "Rust is more complex to onboard to, but it is worth it", but a lot of the onboarding hurdle is the extra complexity added by experts being smart. And it may be a reason a language doesn't get the adoption its creators hoped for (Scala?). Rust does not have issues with popularity, and the high onboarding barrier may eventually have a positive impact, where "Just rewrite it in Rust" is no more and people only choose Rust where it is most appropriate. Use the right language for the job.
The complexity of Rust made me check out Gleam [0], a language designed for simplicity, ease of use, and developer experience. A wholly different design philosophy, but not less powerful: as a BEAM language it compiles to Erlang, but it also compiles to JavaScript if you want to do regular web stuff.
At least from what I’ve seen around me professionally, the issue with most Scala projects was that developers started new projects in Scala while also still learning Scala through a Coursera course, without having a FP background and therefore lacking intuition/experience on which technique to apply when and where. The result was that you could see “more advanced” Scala (as per the course progression) being used in newer code of the projects. Then older code was never refactored resulting in a hodgepodge of different techniques.
This can happen in any language and is more indicative of not having a strong lead safeguarding the consistency of the codebase. Now Scala has had the added handicap of being able to express the same thing in multiple ways, all made possible in later iterations of Scala, and finally homogenised in Scala 3.
I agree. IMO, Scala can be written in Li Haoyi's way, and it's a pleasure to work with. However, the FP and effect-system Scala people are so loud and so smart that if I write Scala in Li Haoyi's way, I feel like I'm too stupid.
I like Rust because of no GC, no VM, and memory safety. If Rust gets features that a Joe Java programmer like me can't understand, I guess it'll be like Scala.
I honestly just don't believe that Rust is more complex to onboard to compared to languages like Python. It just does not match my experience at all. I've been a professional Rust developer for about three years. Every time I look at Python code, it's doing something insane where the function argument definition basically looks like line noise with args and kwargs, with no types, so it's impossible to guess what the parameters will be for any given function. Every Python developer I know makes heavy use of the REPL just to figure out what methods they can call on some return value of some underdocumented method of a library they're using. The first time I read pandas code, I saw something along the lines of df[df["age"] < 3] and thought I was having a stroke. Yet Python has a reputation for being easy to learn and use. We have a Python developer on our team and it probably took me about a day to onboard him to Rust and get him able to make changes to our (fairly complicated) Rust codebase.
Don't get me wrong, rust has plenty of "weird" features too, for example higher rank trait bounds have a ridiculous syntax and are going to be hard for most people to understand. But, almost no one will ever have to use a higher rank trait bound. I encounter such things much more rarely in rust than in almost any other mainstream language.
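For the curious, a minimal example of the higher-rank trait bound syntax being referred to: the `for<'a>` reads as "for every lifetime 'a", meaning the closure must accept a borrow of any lifetime, not one chosen by the caller:

```rust
// HRTB: `f` must work for *any* lifetime, so it can be applied to a
// borrow that lives only inside `apply`'s body.
fn apply<F>(f: F) -> usize
where
    F: for<'a> Fn(&'a str) -> usize,
{
    let local = String::from("hello");
    f(&local)
}

fn main() {
    assert_eq!(apply(|s| s.len()), 5);
}
```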
The language itself is not more complex to onboard to. For Scala, also not. It feels great to have all these language features at one's disposal. The added complexity is in the way expert code is written. The experts are empowered and productive, but their practices heighten the barrier to entry for newcomers. Note that they might also expertly write more accessible code to avoid the issue, and then I agree with you (though I can't compare to Python, never used it).
Hm, you claim that Rust and Scala are not more complex to onboard than Python... but then you say you never used Python? If that's the case, how do you know? Having used both, I do think Rust is harder to onboard, just because there is more syntax that you need to learn. And Rust is a lot more verbose. And that's before you are exposed to the borrow checker.
Well, the parent wrote "I honestly just don't believe that Rust is more complex to onboard to compared to languages like Python." And you wrote "The language itself is not more complex to onboard." So... to contrast Rust with Scala, I think it's clearer to write "The language itself is not more complex to onboard _than Scala_."
To that, I completely agree! Scala is one of the most complex languages, similar to C++. In terms of complexity (roughly the number of features) / hardness to onboard, I would have the following list (hardest to easiest): C++, Scala, Rust, Zig, Swift, Nim, Kotlin, JavaScript, Go, Python.
I see the confusion. ChadNauseam mentioned Python in reply to another comment of mine, where I mentioned Gleam. In your hardest-to-easiest list, perhaps Gleam is even easier than Python. They literally advertise it as "the language you can learn in a day".
Thanks a lot! I wasn't aware of Gleam, it really does seem simple. I probably wouldn't say "learn in a day", and I'm not sure if it's simpler than Python, but it's statically typed, and that necessarily adds some complexity.
> I honestly just don't believe that Rust is more complex to onboard to compared to languages like Python.
Most people conflate "complexity" and "difficulty". Rust is a less complex language than Python (yes, it's true), but it's also much more difficult, because it requires you to do all the hard work up-front, while giving you enormously more runtime guarantees.
Doing the hard work up front is easier than doing it while debugging a non-trivial system. And there are boilerplate patterns in Rust that allow you to skip the hard work while doing throwaway exploratory programming, just like in "easier" languages. Except that then you can refactor the boilerplate away and end up with a proper high-quality system.
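One common shape of that throwaway-phase boilerplate, as an illustrative sketch: clone freely and unwrap, so the borrow checker and error handling stay out of the way during exploration, then refactor the clones and unwraps away once the design settles:

```rust
fn main() {
    let data = vec![String::from("alpha"), String::from("beta")];
    let snapshot = data.clone();             // sidestep borrow questions for now
    let first = snapshot.first().unwrap();   // accept a possible panic while prototyping
    assert_eq!(first, "alpha");
}
```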
> And then you bump into the code lines crafted by the experts, and they are line for line so 'smart' they take a real long time to figure out how the heck it all fits together.
Thing is, the alternative to "smart" code that packs a lot into a single line is code where that line turns into multiple pages of code, which is in fact worse for understanding. At least with PL features, you only have to put in the work once and you can grok how they're meant to be used anywhere.
Yeah what a lot of people are missing here is tons of small startups are laying people off, but it's not because they don't need engineers, it's because they are out of runway because their entire vertical (usually some sort of SaaS, often b2b SaaS) is basically now nonexistent. Traditionally businesses favored buying software over building it for cost reasons. Now they can cheaply build exactly what they want instead of paying through the teeth for something that is only slightly like what they want. This doesn't mean the work is gone, but it does largely mean large swathes of the SaaS vertical will be gone. The work itself is shifting to the individual businesses that were once the customers of the SaaS.
SWEs will be fine, all these small VC-funded startups building another CRUD app will not.
I work for a startup that makes a b2b SaaS that is _way_ too complex for anyone to spec out in a markdown file, especially when taking things like ITAR compliance into consideration.
We have seen steady growth and there’s been no signs of slowing down.
Our software facilitates order/quote/factory floor workflow automation with auditable trails in the manufacturing space, with cad file analysis and complex procedural pricing equations for quote generation, alongside a Shopify style storefront and many more goodies. We interface with things like shipping, taxes, erp integrations, and so much more.
I don’t see anyone vibe coding an alternative to our software even if they could. Manufacturers have enough on their plate managing their factory floors.
That said, we facilitate $millions in manufacturing orders per week and our engineering team is 3 people. We couldn’t do what we do without AI, and we would have needed to hire more engineers to handle the scale of our business if it weren’t for the power of Claude Code and Cursor.
I see it the opposite way actually with respect to the CS degree. If you earned your CS degree (or any degree) before 2022 or so, the value of that degree is going to grow and grow and grow until the last few people who had to learn before AI are dying out like the last COBOL developers
AI has fundamentally broken the education system in a way that will take decades for it to fully recover. Even if we figure out how to operate with AI properly in an educational setting in such a way that learners actually still learn, the damage from years of unqualified people earning degrees and then entering academia is going to reverberate through the next 50 years as those folks go on to teach...
What I think is disappearing is not so much the quality of academic education, but the baptism by firehose that entry level CS positions used to offer - where you had no choice but learn how things actually work while having a safe space to fail during a period in your career when productivity expectations of you were minimal to none.
That time when you got to internalise through first hand experience what good & bad look like is when you built the skill/intuition that now differentiates competent LLM wielding devs from the vibers. The problem is that expectations of juniors are inevitably rising, and they don't have the experience or confidence (or motivation) to push back on the 'why don't you just AI' management narrative, so are by default turning to rolling the dice to meet those expectations. This is how we end up with a generation of devs that truly don't understand the technology they're deploying and imho this is the boringdystopia / skynet future that we all need to defend against.
I know it's probably been said a million times, but this kinda feels like global warming, in that it's a problem that we fundamentally will never be able to fix if we just continue to chase short term profit & infinite growth.
> What I think is disappearing is not so much the quality of academic education, but the baptism by firehose that entry level CS positions used to offer - where you had no choice but learn how things actually work while having a safe space to fail during a period in your career when productivity expectations of you were minimal to none
I would say that baptism by fire _is_ where the quality of an academic education comes from, historically at least. They are the same picture.
Agreed. I remember (a long time ago) being on an internship (workterm) and after doing some amount of work for the day, I spent some time playing around with C pointers, seeing what failed, what didn't, what the compiler complained about, etc.
That's not something enthusiasts here and elsewhere want to hear, and that's pretty obvious in this discussion too. People seem extremely polarized these days.
AI is either the next wheel or abysmal doom for future generations. I see both and neither at the same time.
In a corporate environment where navigating processes, politics, and other non-dev tasks takes significantly longer than actual coding, AI is just a slightly better Google search. And trust me, all these non-dev parts are still growing, and growing fast. It's useful, but it's not elevating people beyond their true levels in any significant way (I guess we can agree that, e.g., number of lines produced per day isn't a good measure, more the premise for some Dilbert-esque comic for a Friday afternoon).
Agree that AI is a force multiplier in small-cap and a better search at best in large-cap due to internal bureaucracy.
The bigger question (that remains to be seen and played out) is whether AI will be a forcing function towards small-cap. If smaller companies can build the same products as larger companies with fewer people then they can compete on price and win, hollowing out the revenue base of large-cap companies and leading to their downfall due to the dwindling workforce not having the culture to adapt. Of course reality is more complicated (moats, larger companies making too-good-to-be-true offers to acquire smaller competitors to prevent the emergence of genuine competition, boys-club who-you-know-not-what-you-know corruption via lawfare ...) so, yeah, remains to be seen.
This comment would make sense 6 months ago. Now it is much, much, much more likely any given textually answerable problem will be way easier for a bleeding edge frontier AI than a human, especially if you take time into account
Besides being tough, it's shaping students' writing in a specific direction. That dense style I think of as 19th century English philosophy prose, though I hear it may still be the ideal in parts of Europe.
We're now reaching the point where people have gone through their whole college education with AI, and I've noticed a huge rise in the number of engineers who struggle to write basic stuff by hand. I had someone tell me they forgot how to append to a list in their chosen language, and they couldn't define a simple tree data structure with correct syntax. This has made me very cautious about maintaining my fluency in programming, and I'll usually turn off AI tools for a good chunk of the day just to make sure I don't get too rusty.
I hope so. I will never type a single thought of my own or personal detail into an OpenAI product again. I have no doubt at some point OpenAI will be asked by DoD to hand over customer data and they will do so. If I use AI at all for nonprofessional reasons it will be Anthropic/Claude.
The US government doesn't really need a contract with Anthropic to force them to hand over customer data, do they? What would prevent that from happening as long as they're a US-based company?
Such a ban or KYC program is also fundamentally against every notion of privacy and freedom enshrined in the constitution, and even more so against the founding spirit and values of the internet and of reputable tech circles that care about privacy and freedom. You sound just as bad as Zuckerberg and Palantir trying to push KYC on Linux right now to help build a better mass-surveillance state for their fascist overlords. You seem like the sort who would be extremely supportive of that project, though I sincerely hope that is not the case.
> we have extradition with Israel
For now. Once the Boomers finish their reverse mortgages and are finally out of the equation we'll hopefully cut all funding, sanction Israel, and label them a terroristic genocidal state, which they are. As usual, moral progress is always delayed by rich old men refusing to let go. Ethnostates are also extremely problematic in their own right, never mind genocidal ones. Certainly not morally praiseworthy, certainly not good "ally" material. Should be sanctioned in the very very least, definitely not subsidized.
And the funniest part is I agree with you that Polymarket is bad, which is your underlying point that you care so strongly about throughout this thread, but your arrogance and sudden undeserved spitefulness, in response to completely innocuous, imaginative, on-topic comments, just comes off as really bitter and makes you hard to talk to and even harder to agree with. Like you really have an axe to grind and I think it is rooted in some sort of deep-seated fear that everyone around you is as evil and misanthropic as you claim to be through your espoused values.