What CRDTs solve is conflicts at the system level, not at the semantic level. Two or more engineers setting a variable to different values is not something a CRDT can meaningfully resolve.
Engineer A intended value = 1
Engineer B intended value = 2
CRDT picks 2
The outcome could be semantically wrong. It doesn't reflect the intent.
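A last-writer-wins register makes this concrete. The sketch below (illustrative types and tie-breaking rules, not any particular CRDT library) converges to the same value in either merge order, but engineer A's intent is silently discarded:

```typescript
// Minimal last-writer-wins (LWW) register sketch. Names and the
// tie-breaking rule are illustrative, not from a specific library.

type LWW<T> = { value: T; timestamp: number; actor: string };

// Merge is commutative and associative: both replicas converge on the
// same winner regardless of delivery order.
function merge<T>(a: LWW<T>, b: LWW<T>): LWW<T> {
  if (a.timestamp !== b.timestamp) return a.timestamp > b.timestamp ? a : b;
  // Deterministic tie-break on actor id so replicas still agree.
  return a.actor > b.actor ? a : b;
}

const engineerA: LWW<number> = { value: 1, timestamp: 100, actor: "A" };
const engineerB: LWW<number> = { value: 2, timestamp: 100, actor: "B" };

// Both merge orders yield value = 2; engineer A's write is silently lost.
console.log(merge(engineerA, engineerB).value); // 2
console.log(merge(engineerB, engineerA).value); // 2
```

The system converges (both replicas agree on 2), which is all a CRDT promises; whether 2 is the *right* answer is outside its scope.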
I think the primary issue with git and every other version control system is the terrible names for everything: pull, push, merge, fast-forward, stash, squash, rebase, theirs, ours, origin, upstream, and that's just a subset. And the GUIs. They're all very confusing, even to engineers who have been doing this for a decade. On top of this, conflict resolution is confusing because you don't get any prior warning.
It would be incredibly useful if, before you were about to edit a file, the version control system warned you that someone else has already made changes to it or is actively working on it. In large teams, this sort of automation would reduce conflicts, as long as humans agree not to touch the same file. It would also reduce the quality regressions that result from bad conflict resolutions.
Shameless self plug: I am trying to solve both issues with a simpler UI around git that automates some of this and it's free. https://www.satishmaha.com/BetterGit
The CRDT library knows that the value is in conflict, and it decides what to do about it. Most CRDTs are built for realtime collaborative editing, where picking an answer is an acceptable choice. But the CRDT could instead add conflict markers and make the user decide.
Conflicts are harder for a CRDT library to deal with, because you need to keep merging and growing a conflict range, and do that in a way that converges no matter the order in which the operations arrive. But it's a very tractable problem: someone just has to figure out the semantics of conflicts in a consistent way, code it up, and put a decent UI on top.
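One consistent semantics for this is a multi-value register: instead of picking a winner, merge keeps every write not causally dominated by another and surfaces the survivors as a conflict set for a human to resolve. A sketch, with illustrative names and a simplified version-vector check:

```typescript
// Multi-value register sketch: concurrent writes are kept as a
// conflict set rather than resolved automatically. Causality is
// tracked with a version vector; details are illustrative.

type VersionVector = Record<string, number>;
type Entry<T> = { value: T; vv: VersionVector };

// a dominates b if a >= b component-wise (a causally includes b).
function dominates(a: VersionVector, b: VersionVector): boolean {
  return Object.keys({ ...a, ...b }).every((k) => (a[k] ?? 0) >= (b[k] ?? 0));
}

// Merge keeps every entry not strictly dominated by another entry.
// The result is the same in either argument order, so replicas
// converge on the same conflict set.
function merge<T>(xs: Entry<T>[], ys: Entry<T>[]): Entry<T>[] {
  const all = [...xs, ...ys];
  return all.filter(
    (e, i) =>
      !all.some(
        (o, j) => j !== i && dominates(o.vv, e.vv) && !dominates(e.vv, o.vv)
      )
  );
}

// Engineers A and B write concurrently from the same base state.
const a: Entry<number>[] = [{ value: 1, vv: { A: 1 } }];
const b: Entry<number>[] = [{ value: 2, vv: { B: 1 } }];

// Neither write dominates the other, so both survive: the register is
// "in conflict" and a UI can show markers instead of guessing.
console.log(merge(a, b).map((e) => e.value)); // [1, 2]
```

A later write whose version vector covers both entries would dominate them and collapse the conflict, which is exactly the "resolve and move on" flow a UI would drive.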
For that you need a very centralized VCS, not a decentralized one. Perforce lets you lock a file so nobody else can edit it. If they implemented more fine-grained locking within files, or warned other users trying to check them out for edit, they'd be just where you want a VCS to be.
How, or better yet, why would Git warn you about a potential conflict beforehand, when the use case is that everyone has a local clone of the repo and might be driving it towards different directions? You are just supposed to pull commits from someone's local branch or push towards one, hence the wording. The fact that it makes sense to cooperate and work on the same direction, to avoid friction and pain, is just a natural accident that grows from the humans using it, but is not something ingrained in the design of the tool.
We're collectively using Git for the silliest and simplest subset of its possibilities (a VCS with a central source of truth) while bearing the burden of complexity that comes with a tool designed for distributed workflows.
> It would be incredibly useful if, before you were about to edit a file, the version control system warned you that someone else has already made changes to it or is actively working on it. In large teams, this sort of automation would reduce conflicts, as long as humans agree not to touch the same file. It would also reduce the quality regressions that result from bad conflict resolutions.
Bringing me back to my VSS days (and I'd much rather you didn't)
Yeah, same thoughts. I also think semantic merge is the best. Also it would be nice if you could add a plugin for custom binary file formats, such as sqlite (which obviously can't be merged like a text file).
Well, the mismatch here is widened by the fact that almost everyone, it seems, uses git with a central, prominent, visible remote repository, whereas git was developed with a truly distributed vision. Sure, that truly distributed work only becomes final when it reaches some 'central' repo, but it's quite a bit different from what we all do.
I haven't used them, but doesn't SVN or Mercurial do something like this? They can block people from working on a file by locking it. The problem is that in large teams there are legitimate reasons for multiple people to work on the same files, especially something like a large i18n file.
Absolutely. Cheaper Chromebooks are terrible machines. Those screens should be illegal and probably cause a lot of eye strain and headaches. Same with a lot of the sub-$800 PC laptops. The colors aren't even... colors. The trackpad? Yuck. Everything else falls apart right outside the warranty period of just 90 days or 1 year. Oh, and good luck spending the first day uninstalling and formatting everything from scratch and getting the vendor-specific features working again.
For people who have always wanted an Apple laptop, this is it. The niceties are not necessary, and perfect little things to cut out to bring the price down for the masses.
I wanted to believe this article, but the writing is difficult to follow, and the thread even harder. My main issue is the contradiction about frameworks and using what the large tech companies have built vs real engineering.
The author seems to think that coding agents and frameworks are mutually exclusive. The draw of Vercel/Next.js/iOS/React/Firebase is allowing engineers to ship. You create a repo, point to it, and boom! Instant CI/CD, instant delivery to customers in seconds. This is what you're complaining about!? You're moaning that it took one click to get this for free!? Do you have any idea how long it would take to set up just the CI part on Jenkins a few years ago? Where are you going to host that thing? On your Mac mini?
There's a distinction between frameworks and libraries. Frameworks exist to make the entire development lifecycle easier. Libraries are for getting things that others do better than you (encryption, networking, storage, sound, etc.). A framework like Next.js or React or iOS/macOS exists because someone did the heavy work of building the things that need to already exist before you can build an application. Not using one because you want to perform "real engineering" is not engineering at all; that's just called tinkering and shipping nothing.
Mixing coding agents with whatever framework or platform to get you the fastest shipping speed should be your #1 priority. Get that application out. Get that first paid customer. And if you achieve a million customers and your stuff is having scaling difficulties, then you already have teams of engineers to work on bringing some of this stuff in house like moving away from Firebase/Vercel etc. Until then, do what lets you ship ASAP.
You don't need a form fitting faraday cage. Want a free one? Find a food delivery bag that is insulated with foil. I believe some of the meal prep delivery services package their groceries in this. Stick your phone in there and wrap it up. All signals gone as far as I can tell.
Or, even more free or cheap: Wrap it in aluminum foil.
This is a good write-up on the internals. However, in actual use, Turbopack has severe limitations compared to Webpack. I've been working on https://jsonquery.app and rely on the jq WASM dependency for the queries. But Turbopack cannot handle importing a WASM binary or its glue code directly; the workaround is to have a script copy the binary to the public directory. And that's not all: jq-wasm has a dependency on the 'fs' module, even though no fs functions are used, and resolving this in Turbopack is not possible. Two days of fighting it were a waste of time.
Webpack solved this problem with a few lines in the next.config.ts
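For reference, the "few lines" in question are typically a resolve fallback that stubs out the Node-only 'fs' module for browser bundles. The shape below is a sketch, not the author's exact config, and may vary with your Next.js version:

```typescript
// next.config.ts -- sketch of the Webpack-side workaround: stub out the
// Node-only 'fs' module so a browser bundle pulling in jq-wasm resolves.
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  webpack: (config) => {
    // Tell Webpack to resolve 'fs' to an empty module in the browser.
    config.resolve.fallback = { ...config.resolve.fallback, fs: false };
    return config;
  },
};

export default nextConfig;
```

Turbopack has no equivalent of an arbitrary `webpack` mutation function, which is why this class of workaround doesn't carry over.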
For now, I'm back to using Webpack with Next.js 16 via the --webpack flag. I hope they keep allowing this in future versions.
Spot on. The misconceptions, even among other EV owners, are astounding. People are constantly confused about kWh vs kW, amps, voltage, temperature, range, mi/kWh, etc. Even Computer Science PhDs and other highly educated folks who have owned EVs for a long time can't quite articulate the difference between those units. So of course, when a curious person asks them, they just repeat the falsehoods someone told them.
Some examples:
1. I constantly see EV owners install 60A/11kW service, costing them on average $10k, when their driving needs don't require it.
2. People thinking they need more than 300mi of range, and that they will run out of battery like they do with their headphones.
All of this needs an understanding of the aforementioned units and basic physics. But, you're not going to get that by just talking to people. Salespeople are especially not going to do that, they can't even do that for combustion cars.
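To make the units concrete, here is a back-of-the-envelope sketch. Power (kW) is a rate, energy (kWh) is an amount, and an 11 kW circuit delivers far more than a typical 100 km/day of driving requires. All figures are illustrative assumptions, not measurements:

```typescript
// kW vs kWh, worked with illustrative numbers.

const chargerPowerKw = 11; // a 60A/240V home install, roughly
const efficiencyKmPerKwh = 6; // ~6 km per kWh (~0.17 kWh/km)

// Energy delivered = power x time.
const overnightHours = 8;
const energyAdded = chargerPowerKw * overnightHours; // 88 kWh, more than a full mid-size pack

// Daily need for 100 km of driving.
const dailyKwh = 100 / efficiencyKmPerKwh; // ~16.7 kWh
const hoursNeeded = dailyKwh / chargerPowerKw; // ~1.5 h on the 11 kW circuit

console.log({
  energyAdded,
  dailyKwh: dailyKwh.toFixed(1),
  hoursNeeded: hoursNeeded.toFixed(1),
});
```

On these assumptions the expensive circuit sits idle most of the night; a much smaller feed covers the same daily need, which is the point about over-specced installs.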
Most households do not drive more than 100 km per day... yet people are obsessed with range.
My next EV will be a small BYD (Dolphin or Dolphin Surf); these get between 200 km and 400 km per full charge, depending on your speed and settings. If you use the "slow" wall charger (which doesn't require installation or modifications to home circuits), not only will the battery last longer, it will easily recover your 100 km of actual daily driving in a few hours, typically overnight.
If you empty the battery each day and recharge it each night, that nets you 300 km per charge, or 2100 km per week. I don't know a single person or family that does 2100 km a week in their cars. So the whole range-anxiety thing is rubbish: just plug in every night, go to bed, and tomorrow you have another 300 km available.
Oh, and then there are public fast chargers if you do get stuck. I live in Africa and this is a solved problem.
Sorry for the rant... your comment about the expensive charger installations makes my blood boil, because most people can just use the normal wall charger and charge overnight.
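The overnight-charging claim is easy to sanity-check with rough numbers (all assumptions, not measurements):

```typescript
// Can an ordinary wall socket cover typical daily driving overnight?
// Illustrative figures for a small EV on a European-style socket.

const socketPowerKw = 2.3; // ~10A at 230V, a common wall socket
const consumptionKwhPerKm = 0.15; // small EV, mixed driving

const dailyKm = 100;
const dailyEnergyKwh = dailyKm * consumptionKwhPerKm; // ~15 kWh

const chargeHours = dailyEnergyKwh / socketPowerKw; // ~6.5 h: easily overnight
console.log(chargeHours.toFixed(1)); // "6.5"
```

Around six and a half hours at socket power replenishes a full day of driving, so no dedicated charger installation is needed for this usage pattern.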
The thing with range is it's another "thing to worry about"; with a gas car, it's basically nothing to worry about unless you happen to be absolutely on empty with no time to fill up the tank (5-10 minutes, unless you have to go way out of your way for gas; rare).
It's like when phones went from 8-10 hour capacity to over a day; suddenly it wasn't a thing you think about anymore.
LOL, it's because they started with "regulations bad" and then went the usual technocrat/libertarian move of let the markets decide. And then rehashed the exact same arguments in favor of regulation.
Interesting read. I had the KR7A-133R motherboard, as it won AnandTech's gold award for the best 4-bank DIMM support. It was $200 IIRC, one of the more expensive boards. I paired it with an Athlon XP 1800+ and a Radeon 8500. Funny enough, the AMD naming at the time was meant to signal that the Athlon's lower clock speed (1.53GHz) was competitive with Intel CPUs running at 1.8GHz.
Asus was a strong competitor even then, and I remember buying one a few years before the Abit board that supported SD-RAM as well as DDR, as a way to ease the transition for consumers.
It was a good time when IRC, AIM, and physical electronics shopping was still a thing. The only big tech presence that techies hated was Microsoft. Sigh.
A strange and anecdotal take. The infrastructure is already here; millions of Americans are happily driving across the country charging their EVs just fine. You probably had a bad experience with a ChargePoint L2 charger in a garage that needed an app to operate, and the first-time user experience on those is really painful. But no one is using an L2 charger while road-tripping. L2 charging is primarily for home or work, while parked for 8-12 hours. The L3 chargers people use during trips feature tap-to-pay.
As for maintenance, seeing one charger out of ten down is not an infrastructure problem. EV drivers have figured out waiting in line and queuing just fine, and with most stations charging at fast L3 speeds, the wait time is short.