Hacker News | badamp's comments

Like all “good” tech ideas, this isn’t a terribly bad idea on its surface, and it has no tech requirements. But running it just sounds like a nightmare: I really don’t want to be adminning griefing and trolling targeted at cancer patients.

Well run moderated communities for the dying are nothing new.


> in one control is just given up and regained unpredictably

Which one? It’s “cooperative”, i.e. not unpredictable. The points where one can block are predictable and documented explicitly; otherwise, how would the programmer know they won’t block forever? The same should hopefully be the case for async/await APIs.

In fact, the points where async/await will actually give up control are harder to tease out.

The differences are really not as big as they would seem.


In cooperative multitasking you can program when to give up control, not when it is regained. The regaining part is unpredictable, which introduces a lot of non-determinism to deal with, plus overhead.


This is no different from async/await. At some point you await a scheduled primitive (a timer, I/O readiness, an I/O completion...) and yield to a scheduler. You don’t specify explicitly when you return; these are not tightly coupled coroutines. This is precisely what goes on in cooperative multitasking.

I don’t see how this increases overhead to deal with either.

Basically, coop multitasking and async/await operate on the exact same execution framework, the latter just gives convenient syntactic support.
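A tiny asyncio sketch of this (Python, since the thread links the asyncio docs; the task names are just illustrative): each `await` is an explicit, documented point where a coroutine hands control back to the scheduler, exactly like a cooperative yield.

```python
import asyncio

order = []

async def worker(name: str) -> None:
    # Each await is an explicit, documented point where this coroutine
    # may yield control to the event loop (the scheduler/"executor").
    order.append(f"{name}:start")
    await asyncio.sleep(0)          # yield; the scheduler may resume another task
    order.append(f"{name}:end")

async def main() -> None:
    # The two tasks interleave only at their await points -- the
    # "give up control at known points" of cooperative multitasking.
    await asyncio.gather(worker("a"), worker("b"))

asyncio.run(main())
print(order)  # ['a:start', 'b:start', 'a:end', 'b:end']
```

The interleaving is fully determined by where the awaits sit, which is the point being made above.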

Perhaps you should look at how TypeScript compiles async/await down to plain JS.


Await is just syntactic sugar. You do not really await anything. What actually happens is that an event handler gets called on an event, where it sets up more event handlers for more events, and so on. This is the essence of asynchronous programming. There are no tasks, no yielding, practically no overhead, and everything is deterministic (in relation to external events, obviously) [1].

The only cooperative multitasking implementations with the same amount of determinism are those implemented strictly on top of event loops that lack a yield function, so they cannot really be called cooperative multitasking implementations, as they can't "cooperate". All actual implementations have yielding, do not get control deterministically (dealing with that non-determinism requires stuff like semaphores) and have relatively significant overhead.

[1] If implemented with care (not doing syscalls in the middle of async primitives, using fast, nearly-O(1) algorithms for timers, etc.), it can be incredibly fast. And of course Rust also gives enough room to mess up all that nice determinism.
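To illustrate the "handlers setting up more handlers" view, here is a toy event loop (a sketch in Python; everything here is invented for illustration, it's not any real library's API):

```python
from collections import deque

# A toy event loop: handlers are invoked one by one, deterministically.
# There are no tasks and no yielding; a handler just registers further
# handlers for future "events" (here, simple queued callbacks).
ready = deque()
log = []

def post(handler, *args):
    ready.append((handler, args))

def on_data(payload):
    log.append(f"got {payload}")
    # "Awaiting" desugars to: set up the next handler for the next event.
    post(on_done, payload.upper())

def on_done(result):
    log.append(f"done {result}")

post(on_data, "ping")
while ready:                      # the loop: run handlers until drained
    handler, args = ready.popleft()
    handler(*args)

print(log)  # ['got ping', 'done PING']
```

Nothing ever blocks or yields mid-handler; the order of `log` is fully determined by the order events were posted.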


> You do not really await anything. What actually happens is an event handler gets called on an event,

So the event handler gets called immediately? No, that’s not right. What would be the point of that? The event handler (or continuation) obviously needs to be scheduled on something that is awaitable. Meanwhile, other concurrent tasks may be able to run.

> This is the essence of asynchronous programming. There are no tasks, no yielding, practically no overhead and everything is deterministic

This is just totally wrong. Especially re tasks: https://docs.python.org/3/library/asyncio-task.html#creating...

There is nothing inherent about async and await that prevents “yielding”... the issue of yielding and semaphores is a concurrency issue and since async and await are used in concurrent programming environments, the same issues apply.

While it is true that async and await don’t require any kind of cooperative concurrency framework to work, that framework is kind of their whole reason for existing. A single-task async/await system isn’t terribly interesting.
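For a concrete version of the "tasks exist and run while you await" point, here is a minimal sketch using `asyncio.create_task` from the Python docs linked in the comment (the `fetch` function and its delays are made up for illustration):

```python
import asyncio

async def fetch(delay: float, value: str) -> str:
    await asyncio.sleep(delay)   # suspension point; other tasks may run here
    return value

async def main() -> list:
    # Tasks are first-class objects in asyncio.
    slow = asyncio.create_task(fetch(0.02, "slow"))
    fast = asyncio.create_task(fetch(0.01, "fast"))
    # While we await one task, the event loop keeps running the other.
    return [await fast, await slow]

results = asyncio.run(main())
print(results)  # ['fast', 'slow']
```

Both tasks make progress concurrently; awaiting `fast` first does not stop `slow` from running in the background.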


> So the event handler gets called immediately? No that’s not right. What would be the point of that? The event handler or continuation obviously needs to be scheduled on something that is awaitable. Meanwhile, other concurrent tasks may be able to run.

It's kind of like this: async/await is syntactic sugar for higher-order abstractions around event loops. At the level of event loops and event handlers there is no awaiting anymore. And the whole point of event loops is to not run event handlers concurrently; that's why they are even called loops: they invoke handlers one by one, deterministically, without concurrent tasks, and once there is nothing more to run they just block and wait for new events. Obviously you can run multiple event loops in parallel, but you shouldn't share memory between them, as that defeats the purpose, is always slower, and is never really necessary; you can just use asynchronous message passing to communicate between event loops when you have to.

> A single task async/await system isn’t terribly interesting.

And yet this is the whole point of async/await, promises, futures and event loops. All of them exist to avoid the mistakes and performance problems of shared-memory concurrency. I mean, really, if you have semaphores or mutexes in event handlers, futures, promises, or async functions, you are in broken-concurrency-model territory.


It is regained in exactly the same cases it would be in the async model: when a blocking operation completes and the scheduler resumes the now-ready thread. The scheduler is called an executor in the async world, and a thread is a coroutine, but the concepts are very similar.


He’s not getting swap. Or in a sense he is... you can still thrash: the read-only pages of the executable and any memory-mapped files are still eligible to be paged out. When you get into a memory-pressure situation, you end up with a handful of executable pages of all the active programs getting faulted in on every context switch.


“Worse is better” is a simple-minded and wrong interpretation. In reality, the outward simplicity of pledge(2) masks a great deal of high-quality engineering and research. The categories for pledge were not just pulled out of someone’s ass.

Seccomp, like so many Linux interfaces, is the “fuck it, here’s an exhaustive yet half-baked set of tools, you can do anything!” approach. This barely works out in general-purpose programming, and is always an unmitigated disaster in anything security-related.


Linux already has what you’re talking about with eventfd and epoll.

In Linux each thread can get an eventfd and you can POLLIN all of them.

In fact, I would argue that using futexes is the “roll your own solution” using lower-level primitives (and easier to fuck up), much more so than eventfd and epoll.

As mentioned (somewhat poorly) in the post, using futexes gives a performance boost, which is not surprising since they are fast userspace mutexes. FWIW I didn’t think Windows events had a fast user-space path, but I may be mistaken.

For most worker pool scenarios you’re describing, the overhead of eventfd is probably in the noise.


You’re talking about interfaces for waiting on multiple kernel resources but the new futex interface enables you to wait for multiple user resources.

Though it can emulate a Win32 API for waiting on multiple “objects”, it’s strictly more powerful than WaitForMultipleObjects if you are dealing with user objects, since futexes impose very few constraints on how your user synchronization object is shaped and how it works.

So, the new interface is totally different from things like epoll. In one case the kernel is helping you wait for multiple user objects, and in the other case it’s helping you wait for multiple kernel objects. The distinction is intentional, because the whole point is that the user object holding the futex can be shaped however the user likes, and can implement whatever synchronization protocol the user likes.

Finally, it’s worth remembering that futex interfaces are all about letting you avoid going into the kernel unless there is actually something to wait for. The best part of the API is that it helps you avoid calling it. So for typical operations, if the resource being waited on can have its wait state represented as a 32-bit int in user memory, the futex-based APIs will be a lot faster.
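The shape of that fast path can be sketched without real futexes (this is not the futex syscall, just the usage pattern it enables, modeled in Python with a condition variable standing in for the kernel wait):

```python
import threading

class FutexishEvent:
    """Sketch of the futex usage pattern (NOT a real futex syscall):
    a plain integer in user memory is checked first, and the expensive
    blocking path is taken only if the event is not already set."""

    def __init__(self) -> None:
        self.state = 0                      # the "32-bit word" waiters check
        self._cond = threading.Condition()  # stand-in for the kernel wait queue
        self.slow_waits = 0                 # counts trips to the slow path

    def set(self) -> None:
        with self._cond:
            self.state = 1
            self._cond.notify_all()

    def wait(self) -> None:
        if self.state == 1:                 # fast path: no blocking machinery
            return
        with self._cond:                    # slow path: actually block
            self.slow_waits += 1
            while self.state == 0:
                self._cond.wait()

ev = FutexishEvent()
ev.set()
ev.wait()             # already set: fast path, never enters the slow path
print(ev.slow_waits)  # 0
```

With a real futex the slow path is the `futex(FUTEX_WAIT, ...)` syscall, and the uncontended case never leaves user space at all.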


They point out that they already have an implementation that does just this... and it fails on some programs due to running out of file descriptors (they have one program that needs ~1 million of them...).


If you read the full thread, that is a bit of a red herring and beside the point (that's why I said the conveyance of the performance implication was poor)... indeed, Windows WFMO only supports 64 objects per call. They mention that the fd issue is due to leaking objects in many Windows programs, which was an odd mention and a little off the main subject. The main motivator is performance. If eventfds performed better, it would likely be better to fix the fd leak issue with a cache.

Again: eventfd and epoll cover the same use case as WFMO and Events.


Curious, how would a cache fix the fd leak issue?


Perhaps a better term would be “pool”. Anyway, what’s being leaked is “handles” or events, not actually fds. You only actually need as many fds as the maximum number ever passed to a syscall at once. The mapping of handles/event objects in user space does not have to be 1:1 with the kernel resource.


I recall higher performance browsers also use up large numbers of FDs; I suspect it might be for this very reason.


Yes, and you have to cobble together an event implementation out of eventfd and epoll. There are two problems (specifically talking about multi-platform software):

1. You'll likely get it wrong and have subtle bugs.

2. This is significantly different than the Windows model where you wait on events. Now you have two classes of events - regular ones, and ones that can be waited on in multiple. The second class also comes with its own event manager class that needs to manage the eventfd for this group of events.

You end up with a specialised class of event that needs to be used whenever you need to wait on several of them at once. Then you realise you used a normal POSIX event somewhere else and now you want to wait on that as well, so you have to rewrite parts of your program to use your special multi-waitable event.

It's mostly trivial to write an event wrapper on top of POSIX events that behaves the same as Windows Events, except for the part where you might want to wait on multiple of them. I would expect that once this kernel interface is implemented we'll get GNU extensions in glibc allowing for waiting on multiple POSIX events. I absolutely do not want to roll my own thread synchronisation primitives except for very thin wrappers over the platform-specific primitives. Rolling your own synchronisation primitives is about as fraught with peril as rolling your own crypto.

To be honest, WaitForMultipleObjects will probably become not very useful in the near future. We're getting 32-core workstation CPUs today, and it's quite likely there will be CPUs with more than 64 cores in near-future workstations, making it impossible to use this classic Windows primitive; but I suspect Microsoft will provide WaitForMultipleObjectsEx2.


The short answer is that it can be both (i.e. safe, beneficial, and effectively a patent grab). It is also unlikely that an enantiopure chemical is less safe than its racemic counterpart, and not unlikely that it is beneficial (this is not without precedent... this is also chemistry 101 and I don’t feel this is the forum for it)...

The doubt is not so much about the safety, but whether the benefit of the enantiopure version is worth the cost.


Per my comment above, thalidomide is a clear exception. This stands even if the chirality changes within the body.


That was an example in the other direction: a pure chirality was safe, but the racemic mixture was dangerous. Here we know the racemic mixture is safe, which strongly suggests that either chirality in isolation is also safe.


And to make the previous case worse, it would apparently “autobalance” and flip chirality, which meant it wouldn't stay isolated even if they 100% isolated it at great expense. That's why we never saw a “rebranded Thalidomide, but isolated to be all morning-sickness drug, no birth defects”. I don't know enough details to tell how long the flip took, or whether it could even be used safely under ludicrously impractical assumptions like “if you take it freshly isolated in bulk within five seconds it would be safe, but it expires in an hour”.


I think it flipped when in the body, so however long the shelf life was, it still wouldn't matter.

Incidentally, and AIUI, the body has a tagging system for unwanted proteins. Once tagged, the proteins are removed. Thalidomide tagged the proteins that were specific to embryonic limb formation, and the body duly cleared them, leading to the infamous result. Again, AIUI.


> I don't have numbers to counter this off the top of my head, but congenital adrenal hyperplasia is a rare syndrome, but still common enough that it is taught to every medical student and tested on our boards repeatedly.

You are confusing congenital adrenal hyperplasia (i.e. 21-alpha-hydroxylase deficiency), the bane of every 3rd-year med student, with adrenal medullary hyperplasia, a much rarer condition (I don’t recall it ever coming up during medical school).

Congenital adrenal hyperplasia involves the cortex, where the corticosteroids are produced. Epinephrine and norepinephrine are catecholamines produced by the medulla. AMH is more similar to pheochromocytoma, though.


Temperature sensors in thermostats are nowhere near as precise as you seem to believe. And unless you have calibrated your sous vide against a standard, I am willing to bet good money it's not anywhere within 1 degree of accuracy (having tested this, you’re lucky to get +/- 3 C). Also, nothing in cooking requires precision down to fractional degrees C. You’re not going to obtain it anyway, even if you think you are.

Everything you’re saying is all in your head.


If it’s a recurring payment for a contract term, sure. But most free-trial offers are “cancel anytime” and pay in advance of the next subscription period. There’s nothing for them to pursue you for if you decline to pay; they just stop providing the service.


> That’s not how it works

What you’re saying makes little sense for most “free trial” offers. These offers do not come with a term agreement (i.e. they are cancel-anytime), and furthermore they are prepaid: you are offered a trial period and then billed in advance for the continuing service. So without a term agreement, and with prepay, you’re not on the hook for anything. You just never pay, they terminate your service, and that’s it.

