Hacker News

> But the result often has fairly poor load distribution (e.g. some threads finish long before others), so it could obviously be done better by software that changes the variable it's parallel on, and how it distributes those variables, adaptively as it goes.

It's astonishing to me that you're staring my point straight in the face and still not getting it. Let me restate in terms of portfolios: If I distribute my stock picks across many portfolios, without an oracle to tell me how to distribute them, some of the portfolios will perform poorly and some will perform well, but on average they will perform as if I just had a single portfolio.



I do wonder how we're talking past each other. Usually when I use an SMT solver, I'm interested in finding a satisfying solution. Breaking up the problem on some variable, so the solver searches the pieces across multiple processors, can improve the wall-clock time until finding one.
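A minimal sketch of that kind of split, using a toy brute-force search over a DIMACS-style clause list (not a real SMT solver; the instance and helper names here are made up for illustration):

```python
from itertools import product

def satisfies(assignment, clauses):
    # assignment: {var: bool}; clauses: DIMACS-style lists of signed ints
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

def solve_branch(clauses, n_vars, fixed):
    # Exhaustively search the subspace where the variables in `fixed` are
    # pinned; each call is independent, so each branch can run on its own core.
    free = [v for v in range(1, n_vars + 1) if v not in fixed]
    for bits in product([False, True], repeat=len(free)):
        assignment = dict(fixed)
        assignment.update(zip(free, bits))
        if satisfies(assignment, clauses):
            return assignment
    return None  # this branch is UNSAT

# Toy instance: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]

# Split on x1: two independent subproblems, each half the search space.
models = [solve_branch(clauses, 3, {1: False}),
          solve_branch(clauses, 3, {1: True})]
```

Handing the two `solve_branch` calls to separate processes (e.g. via `concurrent.futures.ProcessPoolExecutor`) turns this into the wall-clock race being discussed: whichever branch returns a model first ends the search.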

I doubt I'm following your metaphors/allusions, but the parallel may be that you're only hoping for any portfolio to have excess performance, and not the sum total.


> I doubt I'm following your metaphors/allusions, but the parallel may be that you're only hoping for any portfolio to have excess performance, and not the sum total.

I'm sorry, but I don't know how else to explain it to you when you're ignoring such an obvious fact: if you're hoping for any portfolio to hit big, but they're all picked without any intelligence (i.e., without an oracle), then you will always be only as good as the worst portfolio. It's such a patently obvious logical argument that I don't know how to state it more clearly. I could easily write out the formula for the expected value of the max of N uniform iid random variables, but you've literally been a witness to it by watching the wall-clock time on your threads!
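For reference, the expected maximum of N iid U(0,1) draws is N/(N+1); a quick Monte Carlo check (N = 4 chosen arbitrarily):

```python
import random

random.seed(0)  # fixed seed for reproducibility
N, trials = 4, 200_000
est = sum(max(random.random() for _ in range(N))
          for _ in range(trials)) / trials
# Closed form: E[max of N iid U(0,1)] = N / (N + 1) = 0.8 for N = 4
```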

Let me put it without any metaphors/allusions: you simply don't understand search and/or SAT if you think that dividing a 2^N search space into N pieces will give you a speedup. It will not. It's basically the "geometric" definition of NP.


Of course, dividing the search space into K pieces is not guaranteed to give you a speed-up _in the worst case_: you might pick an unlucky division where K-1 pieces are easily UNSAT but the last remaining piece is as hard as the original (so the wall-clock time is unchanged). However, in practice variable-fixing can and often does give an _expected time_ speed-up, especially if the SAT instance has some inherent parallel search structure (e.g., key or midstate bits for ciphers), and such heuristic tactics are still useful.
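As a toy model of that expected-time argument (not a claim about real SAT instances): draw K per-piece solve times, run them in parallel, and compare the wall clock to solving the pieces sequentially. The exponential distribution is an arbitrary assumption here, chosen only to make the simulation concrete.

```python
import random

random.seed(1)
K, trials = 8, 50_000
ratios = []
for _ in range(trials):
    # Hypothetical per-piece solve times (exponential is just a stand-in).
    times = [random.expovariate(1.0) for _ in range(K)]
    sequential = sum(times)  # one core works through every piece in turn
    parallel = max(times)    # K cores; wall clock is the slowest piece
    ratios.append(sequential / parallel)
avg_speedup = sum(ratios) / trials
# In this model parallel is never slower (max <= sum), but the adversarial
# case above corresponds to one piece carrying essentially all the time.
```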


> However, in practice variable-fixing can and often does give an _expected time_ speed-up

Proof? Because as far as I know, literally none of the relationships of RP, BPP, or ZPP to NP are known.


Indeed no proof. By "in practice" I meant "on instances encountered in real-world applications."


Then instead of _expected time_ you should say _hoped-for time_, because _expected time_, in this context, is already firmly defined.


I'm afraid I'm still mystified and suspect we must be talking past each other, ... But, in any case, I'm grateful you took the time to try to teach me something.



