A good API does two things. Firstly, it DRYs the code out. Many APIs start life doing only that, as a collection of routines you get tired of writing over and over again. Secondly, its functions are designed in a way that reduces the need to share information. Typically they do that by hiding a whole pile of details in their implementation, so knowledge of those details lives in one place rather than being scattered across a code base. APIs that don't do that well are often called "leaky", or we say they're a "leaky abstraction".
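To make that second property concrete, here's a toy Python sketch; the UserClient name and the conn.request call are invented for illustration:

```python
# Leaky: every caller must know the config is a nested dict, that the
# timeout is stored in milliseconds, and must handle retries itself.
def fetch_user_leaky(conn, config_dict, user_id):
    timeout = config_dict["network"]["timeout_ms"] / 1000
    return conn.request(f"/users/{user_id}", timeout=timeout)

# Encapsulated: timeouts, retries, and the URL scheme are implementation
# details. Change any of them and no caller needs to know.
class UserClient:
    def __init__(self, conn, timeout_s=5.0, retries=3):
        self._conn = conn
        self._timeout_s = timeout_s
        self._retries = retries

    def fetch_user(self, user_id):
        last_err = None
        for _ in range(self._retries):
            try:
                return self._conn.request(f"/users/{user_id}",
                                          timeout=self._timeout_s)
            except TimeoutError as err:
                last_err = err
        raise last_err
```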
Perhaps a good API has other, more subjective attributes, but it must have those two. LLMs suck at both. You can see that in the comments here, when people complain that LLMs write verbose code. It's verbose because the LLM didn't go looking for duplicated functionality - if it needed something, it just wrote the code wherever it happened to be focused at the time.
If they are bad at DRY then I need a better superlative to describe how they fare at respecting the isolation boundaries that underpin good module design. As far as I can tell, they have no idea about the concept or how to implement it. Let loose, they are like a bull in a china shop, breaking one boundary after another.
> I suggest reading up on wifi and RF before going further.
I'd suggest neither matters in the face of how the problem is solved in the consumer cards the OP was talking about. They solve it by locking down the firmware that controls the radios.
The reality is most routers do that too. You can replace the firmware in most of them with OpenWRT or something similar. You still can't exceed regulatory limits because of the signed blobs of firmware in the radios.
Nonetheless, here we are getting comments like yours, which imply all the firmware in the device must be behind a proprietary wall because a relatively small blob of firmware inside it must be protected. That blob has its own protections. It doesn't need to be protected by the OS or the application that runs on top of it.
Yet it's in those applications where most of the vulnerabilities show up. Making them consumer-replaceable would help solve the problem. Protecting the firmware is not a good reason not to do it.
I was responding to the original post about open standards. My point is that anything with an RF transceiver will never be as open as a standard PC with replaceable components. The radio portion will always be blocked off. That relatively small blob will always limit how much control you can exert over the device.
We don't have to look far. The embedded space with Arduinos, ESP32s and even RPis is a hacker's paradise. Yet the radio stack is restricted in all of them. For instance, it's not possible to take an ESP32 board and turn its single antenna into a MIMO configuration, even if you make a custom PCB with trace antennas.
> My point is that anything with an RF transceiver will never be as open as a standard PC with replaceable components. The radio portion will always be blocked off.
Sure, but again, why would the RF transceiver on my desktop PC or in my laptop be any different from the one in my router?
> Seems like everyone, everywhere is overworked, underpaid, and under supported. How much longer can we frogs survive the boiling?
I'm Australian. In Australia, if you are forced to work overtime the rate of pay goes up by 50%, or if it's extreme, it doubles. As a consequence, "underpaid" isn't a common complaint of people working lots of overtime.
This has some negative consequences of course. If labour is plentiful you can have lots of people on hand and pay them on an hours-worked basis. The same deal applies - if you go beyond 40 hours a week their rate of pay goes up, but that shouldn't happen if labour is plentiful and management is on the ball.
But if, as in this case, labour isn't plentiful, then they are going to have to fix it some other way - like paying to train more staff. What the employers can't do is offload the problem entirely onto their employees, so there are forces compelling them to get their act together.
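A back-of-the-envelope Python sketch of why those multipliers push employers to hire rather than stretch existing staff (the rates and thresholds here are invented for illustration, not any actual award):

```python
def weekly_pay(hours, base_rate, ot_threshold=40, extreme_threshold=50):
    """Toy Australian-style overtime: 1.5x past the threshold,
    2x past an 'extreme' threshold. Real awards vary."""
    normal = min(hours, ot_threshold)
    ot = min(max(hours - ot_threshold, 0),
             extreme_threshold - ot_threshold)
    extreme = max(hours - extreme_threshold, 0)
    return base_rate * (normal + 1.5 * ot + 2.0 * extreme)

# One person doing 60 hours costs more than 1.5 people doing 40 each:
print(weekly_pay(60, 40.0))        # 40 + 10*1.5 + 10*2.0 hours -> $3000
print(1.5 * weekly_pay(40, 40.0))  # 60 ordinary hours          -> $2400
```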
The OP makes it sound like the dynamic is very different in the US.
The USA has time and a half overtime above 40 hours as well under the FLSA. This applies to ATC.
Unfortunately, this is now priced into certain government jobs in the USA and people rely on it. Americans see the obscene amounts of money and hours as a challenge until they actually burn out.
ATC isn't even the worst offender. Law enforcement and prison guards can pull 100+ hours a week on a regular basis. This is how prison guards can pull $400k/year.
> ATC isn't even the worst offender. Law enforcement and prison guards can pull 100+ hours a week on a regular basis. This is how prison guards can pull $400k/year.
There are definitely elements of that - but part of it is that many pensions are based on the two highest-earning years of your career, so it's "common" for cops who are planning to retire to spend two years working every possible piece of OT available, to maximize their pension income.
Sounds like a weird incentive for sure. Why not base the pension on the average over all the years worked, as in many other countries? When you offer such incentives, people will naturally work accordingly.
Because you'd lose half a career's worth of inflationary salary rises that way. Also, women might work part time after having children, which would skew the average annual salary down. Over a 40-year career, from inflation alone, you'd be getting about half your final salary that way, even ignoring any increases later on from being better qualified or taking on more responsibility.
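A quick sanity check on the "about half" figure, assuming a flat real salary and 3.5% annual inflation (both assumptions mine):

```python
career_years = 40
inflation = 0.035

# Normalise the final-year salary to 1.0; year y's nominal salary is
# smaller by however many years of inflation still lie ahead of it.
salaries = [1.0 / (1 + inflation) ** (career_years - 1 - y)
            for y in range(career_years)]

print(sum(salaries) / career_years)  # ~0.55: roughly half the final salary
```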
Mind you, in the UK, defined-benefit pension schemes are very rare nowadays, but where they exist they are defined as a percentage of the final year's salary with that company, so the highest-two-years thing seems a bit weird to me, but for a different reason.
In the US, social security is based on the 35 highest paying years. If that system is good enough for social security, I don't see why we don't do the same for government pensions.
But wouldn't it be cheaper for them to just hire more people to cover the same number of hours, so that no overtime was needed? And they would get better work output as well, since people would be rested.
Yes, but it's a local maximum since hiring more people is going to be expensive/difficult until overtime is fixed.
Some state prisons have escaped the overtime pit by offering huge sign-on bonuses and doing a hiring surge. But it takes longer to train ATC than a CO.
It would, yes. There's large worker/union pressure in many of these fields to not take away overtime by reducing hours, though, since it is such a huge part of total compensation.
Workers in these jobs in the US have fewer protections than the private sector, as they are deemed imperative to operating the country. As such, it is illegal for them to strike for better wages, but they do receive 1.5x wages during their mandatory overtime work, and they have a base wage over twice the annual median income, before their significant overtime income. I think the burnout is a bigger cause.
> The OP makes it sound like the dynamic is very different in the US.
The obvious reason that US air traffic control has been understaffed for "a while now" is that, roughly a decade ago, the FAA caved in to political pressure to stop having so many white controllers by decommissioning any hiring practices that posed a risk of hiring white controllers.
This meant the size of the workforce froze, stressing the system.
That scandal exacerbated the problem, but there would still be a severe shortage had it never happened. The core issues, pay and grueling hours, predate that scandal by decades.
I've met truck drivers in the US who were driving 16 hours per day. I'm not sure if that is legal or not, but it certainly wasn't considered exceptional. It's insane the kind of pressure some jobs put you under. Now, ATC obviously has more potential for misery than truck driving, but a passenger bus / truck collision isn't a small thing either.
16 hours is generally not allowed unless there are severe adverse conditions, but it's only recently, with ELD (Electronic Logging Device) mandates, that these rules are being enforced to a degree. Before that, many drivers would simply go as many hours as they humanly could to keep moving.
It's mostly a matter of engineering whether you have enough downtime to "move" your "driven" hours into.
For long-haul it's probably a bit different, but other routes have a lot of annoying delays.
E.g. waiting at a port, waiting for a trailer replacement, waiting for receiving, etc.
AFAIK, these are all classified as driving hours for logbook purposes.
It creates a situation where you legally have to park a truck on the side of the road when you hit your cap, even though half of your day might have been spent waiting around for something.
IMHO, that's a bit ridiculous, and I'm sympathetic to shadow logbooks there.
For the 16-hours-straight cross-country pounders, less so. But long-haul is what autonomous trucking will likely eat first.
The toll it takes on your sleep schedule is also brutal, because the rule is 10hr on / 8hr off. If those 8 "off" hours happen to coincide with sleeping hours you might get some rest, but that won't happen often, or be enough. It would be better, smarter, and safer to just drive 16hr and then sleep for 8hr. But the rules are the rules; they don't have to make sense.
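The drift is easy to see with a little arithmetic: 10 on + 8 off is an 18-hour cycle, so each rest window starts 6 hours earlier on the clock than the last (a toy model of the schedule as described above, not of the full HOS rules):

```python
cycle_hours = 10 + 8   # drive 10, rest 8: an 18-hour "day"
rest_start = 22        # first rest starts at 22:00

for day in range(5):
    print(f"rest window {day} starts at {rest_start % 24:02d}:00")
    rest_start += cycle_hours  # next rest begins one full cycle later

# 22:00, 16:00, 10:00, 04:00, 22:00 ... rotating right around the clock
```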
Much of my extended family was in the trucking industry one way or another. Before the electronic logs you had manual log books. Lying in your log book was a very big deal; I want to say you could get in trouble with the law in addition to getting fired. Before that, though, it was even more the Wild West than it is now. My step-father knew my grandfather's "outfit", and he would joke that if they had a chain long enough to go around it, they would haul it, no questions asked.
> The rest of us do not have the upfront capital to purchase these trucks.
You don't need any upfront capital. Do it when the truck becomes due for refurbishment. Then it's almost a no-brainer, as it's cheaper to convert it to an EV: https://www.januselectric.com.au/
The company you are thinking of still exists. It was split from HP in 1999. It is called Agilent Technologies. HP kept the name and went into the business of flogging commodity computer products; Agilent continues to design and sell low-volume, high-end gear and kept the engineering culture that requires.
HP later split again into consumer and corporate. To put the result into perspective HP Inc's (consumer) revenue is $55B/yr, HP Enterprise is $37B/yr, and Agilent is $7B/yr.
Given the crap being thrown here you would think the splits were a disaster. I don't know if the engineering culture of Agilent would have survived if it hadn't happened.
If you start blaming people rather than processes, the obvious fix is to disenfranchise the people (or worse). If you blame the process and then change it to get a better outcome, everyone wins.
There is a lot of low-hanging bad fruit in how the USA runs its democracy. You allow gerrymandering, and you allow politicians to make it difficult for people to vote. The small voter turnout means the fringe single-issue voters get a disproportionate say. You use first past the post, which means the candidate the majority think is the "least worst" may not get elected. (No voting system is perfect, but FPP is by far the worst.) Your political donation laws favour corporates, who by definition have no interest in voter welfare.
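The standard illustration of the FPP problem is vote splitting; a toy example with invented numbers:

```python
# 60% of voters prefer either A or B over C, but A and B split that
# vote, so under first past the post C wins with 40%.
first_choices = {"A": 32, "B": 28, "C": 40}
print(max(first_choices, key=first_choices.get))  # C

# Under a runoff or ranked ballot, B is eliminated and B's voters
# (assumed here to prefer A over C) push A to 60 vs C's 40.
```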
Learning is OpenClaw's distinguishing feature. It has an array of plugins that let it talk to various services - but lots of LLM applications have that.
What makes it unique is its memory architecture. It saves everything it sees and does. Unlike an LLM's context, its memory never overflows. It can search for the relevant bits on request. Its recall is nowhere near as good as an LLM's attention heads, but apparently good enough to make a difference. Save + Recall == memory.
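A minimal sketch of the save + recall pattern as I understand it; this is my mental model, not OpenClaw's actual code, and the keyword overlap stands in for whatever retrieval it really uses:

```python
import time

class Memory:
    def __init__(self):
        self._log = []  # append-only, so unlike a context it never overflows

    def save(self, text):
        self._log.append((time.time(), text))

    def recall(self, query, k=3):
        """Crude keyword-overlap scoring; a real system would likely
        use embeddings or full-text search."""
        words = set(query.lower().split())
        scored = [(len(words & set(text.lower().split())), ts, text)
                  for ts, text in self._log]
        scored.sort(reverse=True)
        return [text for score, _, text in scored[:k] if score > 0]

mem = Memory()
mem.save("user prefers tabs over spaces")
mem.save("deploy script lives in tools/deploy.sh")
print(mem.recall("where is the deploy script"))
```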
> Context is the plateau. It's why RAM prices are spiking.
Yes, context is the plateau. But I don't think the bottleneck is RAM. The mechanism described in "Attention Is All You Need" is O(N^2), where N is the size of the context window. I can "feel" this in everyday usage. As the context window grows, the model's responses slow down, a lot. That's due to compute being serialised because there aren't enough resources to do it in parallel. The scarce resources are more likely compute and memory bandwidth than RAM.
If there is a breakthrough, I suspect it will be models turning the O(N^2) into O(N * ln(N)), which is generally how we speed things up in computer science. That in turn implies abstracting the knowledge in the context window into a hierarchical tree, so the attention mechanism only has to look across a single level in the tree. That in turn requires it to learn and memorise all these abstract concepts.
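You can see the quadratic term in a rough FLOP estimate (the model dimensions below are invented, and real kernels add KV caching and other optimisations):

```python
def attention_flops(n_ctx, d_model=4096, n_layers=32):
    """Each layer forms an n_ctx x n_ctx score matrix (QK^T) and then
    multiplies it back into V, so the context-dependent part is
    roughly O(n_ctx^2 * d_model) per layer."""
    return 2 * n_layers * n_ctx ** 2 * d_model

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} tokens: {attention_flops(n):.1e} FLOPs")
# 10x the context -> ~100x the attention cost
```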
When models are trained they learn abstract concepts, which they then retrieve almost effortlessly, but they don't do that same type of learning while in use. I presume that's because it requires a huge amount of compute, repetition, and time. If only they could do what I do - go to sleep for 8 hours a day, dream about the day's events using local compute, and learn from them. :D Maybe, one day, that will happen, but not any time soon.
> It seems to me to be a 'solution' to a non-existent problem.
Electronic voting has lots of advantages. It can be end-to-end verified, it can be a great help to disadvantaged people (the blind, the illiterate), it can deliver results faster, and it can probably be made more robust against retail-level tampering than paper ballots, provided a paper audit trail is kept (as all electronic systems designed with security in mind do).
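"End-to-end verified" means, roughly, that each voter can check their own ballot made it into the count without revealing how they voted. A toy Python sketch of the underlying commitment idea (real schemes use much heavier cryptography, e.g. mixnets or homomorphic tallying):

```python
import hashlib
import secrets

def commit(ballot: str) -> tuple[str, str]:
    """The voter keeps the nonce as a private receipt;
    only the hash is published."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256(f"{ballot}:{nonce}".encode()).hexdigest()
    return digest, nonce

bulletin_board = []  # public list of commitments, one per ballot
digest, receipt = commit("candidate-a")
bulletin_board.append(digest)

# Later, the voter re-derives the hash and checks it was recorded:
check = hashlib.sha256(f"candidate-a:{receipt}".encode()).hexdigest()
print(check in bulletin_board)  # True -> my ballot is in the count
```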
The one question mark in my mind: the current US system resisted Trump's efforts to corrupt it pretty well. I think that was because of the inertia created by all the people involved in staffing the ballot stations and counting and verifying the votes. The machinations of the electoral college being highly visible puts people doing the wrong thing at high risk for decades after Trump leaves the stage.
An automated electronic system could remove a lot of that human inertia. Human efficiency is not an advantage in an electoral system; it's a weakness. You want as many people involved as possible.