Was he abused by his father or something? Sounds like it should be investigated further. People usually don't just become like this without a good reason.
Taking into account the history of how lines have changed isn't much better, sorry Anu. (Or if you think it is, please give some very compelling real world examples).
I believe that you need to understand the semantics of the code to truly do what you are trying to do well, and for all other cases the snapshot model is more than good enough; given how we structure and modify code, it works out really well in practice. Code dealing with a single aspect should be, and almost always is, co-located, so conflicts of intention in a merge are very rare. There are other human aspects, like code ownership and collaborating teams, which make the issue even less of a problem.
I don’t think there are any open implementations of data type aware DVCS yet (would be glad to be proved wrong). However, I believe a reliable file/line DVCS based on sound patch theory would be a step in the right direction. A type-aware DVCS not based on sound patch theory would probably be a disaster.
I don't know about Anu (haven't looked at it yet), but with Pijul it would be perfectly possible to take advantage of semantic knowledge. Line-based changes are the default, but you could certainly apply file deltas based on a richer understanding of the underlying filetype.
I’m not convinced by this but I’m also not convinced by the argument of the comment you’re replying to. The theoretical foundation of Pijul/Anu works by starting with files as lists of lines (or some other thing) and patches as (injective) mappings from one list of lines to another which preserve the relative order between lines, then constructing the smallest generalisation of this structure to one where all merges exist and are, in some sense, well behaved. This generalisation moves from lists of lines to partial orders of lines, where “B is preceded by A” becomes “A < B”.
To do something similar with more structured files, one must find the corresponding idea to “a list of lines”, and this must work in a good way (e.g. changes like x -> (x); [a; b] -> [a] foo [b]; [[p, q], [r, s]] -> [p, q, r, s] must in some sense be natural operations in your structure (and diffs need to be reasonably easy to compute)). And of course it still needs to work in a sane way for unstructured data in big comments. Therefore I don’t agree that Anu would be easily generalised to this.
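To make the list-of-lines model above concrete, here is a toy sketch (my own illustration, not Pijul's actual data structures): a "file" is a set of lines plus a partial order, each patch adds lines with ordering edges relative to existing lines, merging is just taking the union of both patches, and a conflict shows up as two lines that the merged order leaves mutually unordered.

```python
def merge(base, patch_a, patch_b):
    """base and patches are (lines, edges), where edges is a set of
    (before, after) pairs. Merging is simply the union of both sides."""
    lines = base[0] | patch_a[0] | patch_b[0]
    edges = base[1] | patch_a[1] | patch_b[1]
    return lines, edges

def transitive_closure(edges):
    """Naive closure: if a<b and b<c then a<c."""
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

def conflicts(lines, edges):
    """Pairs of lines the merged partial order leaves unordered."""
    closure = transitive_closure(edges)
    return {frozenset((a, b)) for a in lines for b in lines
            if a != b and (a, b) not in closure and (b, a) not in closure}

# Base file: A < B. Each patch inserts a different line between A and B.
base = ({"A", "B"}, {("A", "B")})
patch_a = ({"X"}, {("A", "X"), ("X", "B")})
patch_b = ({"Y"}, {("A", "Y"), ("Y", "B")})

lines, edges = merge(base, patch_a, patch_b)
print(conflicts(lines, edges))  # X and Y are unordered: a conflict
```

The point is that the merge always exists (it's just a union), and "conflict" stops being a special failure mode and becomes an ordinary, inspectable property of the merged structure.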
I think this is basically impossible to do for situations where you want to capture all the structure (such that a patch to rename something merges well with other patches). I think it’s likely extremely hard for a part way solution.
Finally I’m not convinced that the change would be that useful. Much of the structure of computer programs is implicit in the scoping rules in such a way that the “move blocks around” changes that line-based VCSes often struggle with will still be invalid with structural diffs.
This is the same underlying theory as the “operational semantics” that is used by Google docs to merge out-of-order changes by simultaneous editors and resolve into a single consistent shared global state. So take that as a proof of principle that it works for more complex structured information.
The underlying theory is not really the same. The practice is also not the same.
Google doesn’t need a different representation where all pushouts exist because they rely on a centralised server, low latency, and arbitrarily choosing how to resolve conflicts. In a DVCS, you can rely on none of these.
I’m not sure if Google still uses operational semantics for Docs, but that is not how operational semantics works. The theory allows you to take two quite different stacks of changes and interleave them in a consistent way. It does not rely on low latency or a centralised server. The choice of an arbitrary tie breaker vs. manual resolution in the case of conflicts is an application-domain choice, not something mandated by the theory. Obviously, in the case of Docs the tie breaker makes more sense.
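The interleaving step can be sketched in a few lines (illustrative only, not Docs' real algorithm): each site transforms the other site's concurrent insert against its own before applying it, so both sites converge regardless of application order.

```python
def transform(op, other):
    """Shift op's position to account for a concurrently applied insert.
    Ops are (position, text). Ties are broken by comparing the text,
    which is the kind of arbitrary tie breaker discussed above."""
    pos, text = op
    o_pos, o_text = other
    if pos > o_pos or (pos == o_pos and text > o_text):
        return (pos + len(o_text), text)
    return op

def apply(doc, op):
    pos, text = op
    return doc[:pos] + text + doc[pos:]

doc = "abcd"
op1 = (1, "X")   # site 1 inserts "X" at index 1
op2 = (3, "Y")   # site 2 concurrently inserts "Y" at index 3

# Site 1 applies its own op first, then the transformed remote op;
# site 2 does the mirror image.
site1 = apply(apply(doc, op1), transform(op2, op1))
site2 = apply(apply(doc, op2), transform(op1, op2))
assert site1 == site2  # both converge to "aXbcYd"
```

Note that nothing here requires a central server or low latency; those only matter for how quickly users see each other's changes and how tolerable the tie breaking feels.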
I think this is getting off topic as Anu/Pijul is not doing operational transformations (I assume this is what you meant when you wrote operational semantics).
I still claim that the reason OT works well with google docs is that it can rely on a centralised server, low latency and tie breaking.
Tie breaking means one doesn’t need to worry about representations of conflicts (and allowing changes to merge in sound ways) which is in some sense the main thing pijul does.
Low latency means that users are able to cope with the tie breaking rules doing the wrong thing.
A centralised server means that there is less need for the merges to work in the sound way that pijul aims to make them work.
Therefore I put it to you that Google Docs is neither an example of the same theory that Pijul is based on nor evidence that OT would work for some kind of well-behaved structure-aware DVCS.
You should ask the people having the issue to switch graphics cards, PSU and RAM since this >90% smells like a memory/power issue. I think you've just gotten unlucky and collected a few similar reports of people having memory corruptions. I am guessing you cache renderings and that is why the issue sticks around until you re-render.
If you want to reproduce it yourselves then perhaps try pointing a hairdryer from a distance at the various components until they start to create trouble, or alternatively just overclock them towards the breaking points.
I once lost a job opportunity by passionately speaking about nothing but jiu-jitsu during the interview. They kept asking and I kept talking. After about 20 minutes they said thank you and goodbye.
I think one of the problems people have with free open source in general is that it's hard to get a bearing on exactly what additional risks come with it now and in the future.
Will it stay open source and free to use?
Will it be actively maintained?
Can I get support for issues in a timely manner?
Will someone be there to helpfully guide or at least review and accept my contributions?
The categorisation needs many more dimensions, and it would be very nice if OSS projects could be graded along them all.
This is a concern of mine as well, so when I find a repo I want to guarantee future access to, I fork it. Then I have a copy that's licensed with the original open source license and which will stick around as long as I care to keep it around.
>Can I get support for issues in a timely manner?
IMO this should not be an expectation in open source. It's nice when it happens, but expect to DIY.
>Will someone be there to helpfully guide or at least review and accept my contributions?
Also nice when it happens, also by no means guaranteed in open source. If someone else isn't available, you can always carry on in your own fork.
Overall I think people place certain expectations on FOSS because they tend to align with the open source culture but strictly speaking, the reality of FOSS doesn't support those expectations in the long term.
Hah I wish. My experience in a javascript shop is someone finds a library, they add it to the project, then one day in the distant future, long after the developer has left, everything breaks.
"Something you are" and "something you have" are the same class, just that the thing you have is physically attached to your body. Doesn't matter if it's a fingerprint, a chip installed under your skin or a tattoo. Pretty pointless distinction. Fingerprints, faces and eyes are merely conveniences.
Nope, they are quite different exactly because "something you are" is attached to you and "something you have" is not. One can be swapped out if compromised or lost. The other cannot (intentionally or unintentionally) be replaced, but -- because it is something biological -- undergoes slow changes over time. These differences are sufficiently large that it makes sense to split them into two categories when modeling the whole system from a security -- or usability -- standpoint.
And can be compromised without theft, coercion or any other trace.
> One can be swapped out if compromised or get lost.
Which makes something you are strictly worse than something you have.
> undergoes slow changes over time
You are lacking an argument for anything attached to this point.
> ...it makes sense to split it into two categories
So you are arguing that because something is strictly worse from a security standpoint, it should be categorised as a new category? Have I summed up your position correctly?
There are usability benefits, but they would exist just as well if you attached something which couldn't be easily compromised to your body: for example a chip under your skin, or a watch on your wrist which you could authenticate with after putting it on and which would un-authenticate automatically when taken off. Nobody would argue that you are your chip or your watch.
Something you know is different because there are no plausible ways, aside from coercion and the like, of extracting such secrets while they sit idle; the other alternative is to get compromised on usage. It's about the threat models.
"From the days immediately following the Lion Air accident, we've had teams of our top engineers and technical experts working tirelessly..."
If they've been working tirelessly, then they should have understood the risks and grounded the fleet.
Either they understood the risks, but neglected to ground the fleet, or they didn't understand the risks and hence we can't trust the fix.
I also find it sort of nauseating that the CEO implicitly gets the message across that the fix has already been worked on for a long time, has thus matured, can now be fully trusted, and that we are just weeks away from flying a safe plane.
I don't buy any of it. Let's analyse this critically.
The MCAS still needs to augment the flight characteristics. There is nothing that can be fundamentally changed regarding this fact. We can only change the conditions under which MCAS activates and the conditions under which it is deactivated.
It still has to have the same authority for a nose-down and recovering from an erroneous high-magnitude nose-down will still be mechanically hard or require additional pilot knowledge and actions. The latter should be impossible without recertification.
The operational characteristics of the airplane are not matched with the operational controls offered to the pilots, by design constraint. The plane is thus unsafe and will forever be unsafe without redesign and recertification, because with the constraints in place, they can only add additional information on displays, add more reliability by having the MCAS utilise input from more sensors, add more conditions under which the MCAS deactivates, etc., but none of this attacks the fundamental impedance mismatch between characteristics and controls, nor the lack of education for it. Deactivation also simply exchanges the risk of stalls for the risk of nose-downs.
All-in-all, the MAX is simply an airplane with a worse flight envelope as far as safety is concerned, and nothing can be done about it.
XKCD has a long history of citing their work, why can't an effective argument be made with visuals? I don't understand why the medium of the argument matters here.
There is definitely an argument made. It might not be in the form that you're looking for, but the argument is there. The graph clearly points to a high correlation between the industrial revolution and a fast and huge (relative to other changes) rise in temperatures. How is this not an argument?