LoRa uses a sub-noise-floor link budget. It allows some pretty crazy performance, at the expense of massive speed losses: about 203 kbps for LoRa vs 1,376,000 kbps for WiFi, lol (max PHY speeds, YMMV).
WiFi sensitivity is about -90 dBm, while LoRa sensitivity is around -150 dBm… so LoRa is about a million times more sensitive. Put another way, you need about a million times more signal strength to use low-bandwidth WiFi (still impossibly fast by LoRa standards) than to use low-bandwidth LoRa.
Those are radio specifications. Real links require about 10 dB more to get any kind of reliability, but the comparison stands.
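To see where the "million times" figure comes from, here's a quick back-of-the-envelope check of the dB gap quoted above (the -90 dBm and -150 dBm numbers are the ballpark figures from this comment; real radios vary by part, bandwidth, and spreading factor):

```python
# Sensitivity gap between WiFi and LoRa, converted from dB to a linear power ratio.
# Figures are the rough numbers from the comment above, not any specific chip's spec.

def db_to_linear(db: float) -> float:
    """Convert a power ratio in dB to a linear ratio."""
    return 10 ** (db / 10)

wifi_sensitivity_dbm = -90.0
lora_sensitivity_dbm = -150.0

gap_db = wifi_sensitivity_dbm - lora_sensitivity_dbm  # 60 dB
ratio = db_to_linear(gap_db)

print(f"{gap_db:.0f} dB gap -> LoRa can decode a signal ~{ratio:,.0f}x weaker")
# 60 dB corresponds to a factor of one million in power
```

Every 10 dB is a factor of 10 in power, so a 60 dB gap is 10^6.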
Is the design for this open source? I’m not an RF guy, so it would be really handy to be able to reuse some parts of this in my sensor network on our farm. I can do the digital and sensor part all day, but I respect the skill of RF engineering in getting decent performance out of tiny PCBs.
I independently converged on something similar. I use two to three specification docs for my C++ work: a firmware manual (describes features and interfaces), an implementation plan (order of implementation, mechanisms where specified; new features go in here), and a product manual (user story, external effects). I start with a user story, build an implementation plan, write the code, write the firmware manual, then check the three documents plus the code for consistency and coherence, changing either the code or the documentation to reflect a coherent unified truth. (The implementation plan gradually becomes as-built.) I also have the code comprehensively commented so that it is difficult to misinterpret. “Correct, coherent, consistent, commented.”
We iterate feature by feature through this process, and occasionally circle back on the original product manual to identify drift.
After the original documentation is drafted, I have the agent write up placeholder files and define all of the interfaces we expect to need (we will end up adding a lot later, but that’s OK). Every file should reflect a clear separation of concerns and can only be reached into through its defined interface; all else is private. I end up with more individual files than I would by hand, but by constraining scope at file granularity and defining an inviolate interface per file, I avoid the LLM tendency to take shortcuts that create unmaintainable code.
I also open each new context with an onboarding process that briefly describes the logos and the ethos of the project and why the agent should be deeply invested in its success, as well as a learnings.md file which the agent writes as it comes across notable gotchas or strong preferences of mine.
Needless to say, I use the one-million-token context, and it’s a token fire… but the results are solid and my productivity is 5-10x.
Don’t imagine that it wasn’t heavily promoted by industrialists after they saw, following WW2, that they could increase the labor force by 30 percent without paying more than they were before.
Everything that starts out with a few well meaning people is, especially now, immediately turned into an astroturfing campaign to fuel some specific economic or political (is there really a difference?) end.
Yes, and it also points people away from pathological overconsumption, which is arguably a very good idea on a number of axes. It would also shrink the economy significantly if it were widely adopted… which maybe hints at the inconvenient fact that an economy based on ever-expanding per-capita extraction is ultimately unsustainable.
Fundamentally, the economy is sustained by energy. In preindustrial society, that energy was provided by agriculture, which tends to be somewhat sustainable. Fossil fuels fuelled explosive expansion, leading to the paradigm that unlimited geometric expansion of the economy was desirable, which led to delusional theories that it was uncapped even in limited space.
Now the world is near its carrying capacity in several dimensions, and we are going to find the limits to our delusion. Automation may help us find the economic limits of this paradigm before we hit the physical wall, which might turn out to be a good thing or a bad thing.
At any rate, I am convinced that the next century will be marked by systemic change that fundamentally reorganises global priorities. It might best be described in terms of collapsing paradigms as economies move away from human labor, in the process changing focus from the accumulation of money, which is mostly useful for paying wages, to pure power and resource control.
As far as I know, the epistemological conundrum of whether or not we exist in a simulation remains unsolved, and I believe the settled thought is that we are nearly infinitely more likely to be in one than not, based on the assumption of an infinity-adjacent universe and the ontological theory that it is in fact possible to construct a simulation whose construction is transparent to its inhabitants.
So I wouldn’t be so quick as to write that off.
I would especially expect adherents to various religions to understand simulation as a probable foundational mechanism of their faith, considering that many religions essentially directly imply the formation of the universe as information based… but then science seems to be converging on information being the fundamental ether as well, so who knows.
By that same token, you could identify any poorly-understood corner of human perception to be evidence of simulation. Moon landing? Simulated. Quantum mechanics? Not natural, entirely simulated. My dog disappearing every week to pester my geriatric neighbor for Beggin' Strips? He's actually being cached in a localized foveal dimension that ceases to exist when I look away from him.
Causality is convoluted and complex, the urge to ignore it has always overcome the less-curious individuals that are predisposed to hysteria and listlessness. Citing LLMs as the latest reason why we're simulated is not going to precipitate some scientific revolution in the understanding of reality. The underlying mechanics of text synthesis are easy to learn, they just don't want to learn it.
> By that same token, you could identify any poorly-understood corner of human perception to be evidence of simulation.
I don’t think that is what I was saying.
The simulation hypothesis is not based on things being unexplained, but rather by the probability of existence in a root universe or in a spawned simulated one.
This presumes a condition where life evolves and creates simulations, and those simulations then create simulations.
The idea is the probability of being in the single “real” universe versus being in one of the infinitely more numerous simulations. Basically infinity vs infinity^infinity.
It’s much more likely to be in the n=infinity^infinity set, rather than the mere infinite set. It’s purely a statistical proposition, with charitable assumptions about the possibility of creating simulations of arbitrary complexity.
If you were interpreting my religion reference, what I meant was assertions adjacent to statements like “In the beginning was the Word, and the Word was with God, and the Word was God.” Which are found in many religions. In my understanding this would seem to mean that information was the foundational underpinning of all reality, physical and spiritual.
? If anything, if I were convinced that the simulation hypothesis was predictive, it would definitely not help me sleep at night. I prefer my reality tangible, thank you lol.
Idk, I find that carefully tending the garden of the mind, sowing the seeds I want to harvest later, eradicating the weeds with prejudice, and in general not entertaining things which are not useful to my purposes is, for me, a highly beneficial practice.
This does not mean to ignore things that are unpleasant, but rather to not allow things that do not benefit your diverse goals to occupy your productive potential, focusing instead on things that inform your path, actionable and relevant information, tools rather than distractions.
> in general not entertaining things which are not useful to my purposes is
Yeah, I think I do more or less the same as you describe, except my barrier to figuring out what is "not useful to my purposes" requires it to first exist in my mind for a while, before I can discard it as not applicable, as sometimes seemingly random things in one context somehow relates to completely different things.
I've chosen to do stuff sometimes that made no sense besides "It's fun but a waste of time" and it ended up leading me to realizations and experiences I wouldn't have had otherwise. But if I focused too much on avoiding things and optimizing "what I let in", I'd never be open enough to learn what I didn't know I could learn from it.
That is a danger, but at least for me, intense curiosity with a decent instinct for things that -might- be useful usually saves me from excessive tunnel vision.
I planned and supervised the build of an ambient recall system, where a 4B model looks at the last 3k or so tokens of context and picks through the RAG database for high-ranking memories to inject, as well as mineable things to mark. Injections happen about one in five turns on most technical topics, with data picked from prior design docs and data sheets mostly. At session wrapup, the inference model goes back and rates all the memory injections in a frontmatter section, then looks at all the memory suggestions and commits those it finds memorable to the RAG database. Manual memorisation and RAG search are also available inline in the chat to both the user and the model. It also allows the main model to spawn little models as minions to work on repetitive simple tasks.
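The core injection loop described above can be sketched roughly as follows. This is purely illustrative: the real system uses a small model to rank RAG entries against the recent context, while this stand-in scores by keyword overlap, and all names and thresholds here are hypothetical:

```python
# Toy sketch of an ambient-recall injection pass: rank stored memories
# against the recent context and inject only the strong matches.
# In the described system a 4B model does the ranking; word overlap is a stand-in.

def score(memory: str, context: str) -> float:
    """Toy relevance score: fraction of the memory's words present in the context."""
    mem_words = set(memory.lower().split())
    ctx_words = set(context.lower().split())
    return len(mem_words & ctx_words) / max(len(mem_words), 1)

def pick_injections(memories: list[str], context: str,
                    threshold: float = 0.5, limit: int = 2) -> list[str]:
    """Return the top-ranked memories that clear the relevance threshold."""
    ranked = sorted(memories, key=lambda m: score(m, context), reverse=True)
    return [m for m in ranked if score(m, context) >= threshold][:limit]

memories = [
    "spi bus shared with sd card needs chip select guard",
    "user prefers tabs over spaces",
]
context = "debugging the spi bus chip select timing on the sd card"
print(pick_injections(memories, context))
# only the SPI note clears the threshold and gets injected
```

The threshold is what keeps injection rate down to the "about one in five turns" behavior: most turns simply have no memory that ranks high enough.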
The thing to remember is that LLMs deeply model human behavior. If you want them to do their best work, you need to treat them like a collaborator and get them “invested” in the work and the outcome. I use an onboarding process with every new context and maintain an environment where a human would likely feel invested in the work and the outcomes. For me, it prevents a host of failure modes, and code quality has markedly improved.
An analysis of panels per capita vs regional IQ would be an interesting signal. Panels are cash positive in less than 5 years of their 40-year lifespan. There is hardly a better investment up until you cover your own usage.
The argument essentially breaks down to "smart people buy more solar panels, dumb people buy less solar panels". I think this argument is simplistic. I imagine the primary indicator of how many panels per capita a region will have is either the total amount of sunlight it receives, the total value of local incentives, or perhaps the regional cost of grid electricity.
My highest energy months are the ones with the least amount of sunlight, and my highest energy hours are during long nights, because my primary energy expenditure is my heat pump. This use case is common for people that live in colder climates, which is a large number of people. It requires me to install a much larger baseline solar capacity (kW) and battery capacity (kWh) than homes in other environments need.
If we assume a potential 8% ROI in the market, you would need to offset more than $100/mo in electricity usage for every $15,000 you spend in solar install before solar becomes a better investment. The numbers just don't crunch well for many of us.
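The arithmetic behind that break-even claim is straightforward: at an 8% return, every $15,000 left in the market yields $1,200/year, which is $100/month the solar install has to beat:

```python
# Opportunity-cost check for the figures in the comment above:
# 8% market return on a $15,000 install cost.
install_cost = 15_000
market_return = 0.08

annual_opportunity_cost = install_cost * market_return  # $1,200/year
monthly_offset_needed = annual_opportunity_cost / 12

print(monthly_offset_needed)  # 100.0 -> must offset >$100/mo per $15k spent
```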
That is pretty optimistic. The calculators I've used online estimate my payback at 18 years and my lifetime savings at about $18K, with $32K out of pocket up front for the install. But my roof is 50% through the lifespan and I was told they would not warranty it against leaks due to panel mounts unless I first replaced the roof. That's $25K.
My next house will be my forever home, a little farther south than where I am now in the PNW, and on a big enough piece of land to use ground mount instead of roof mount. But right now, I cannot make the numbers work. I'd love having solar but I am not spending five digits of extra money just for the fun of it.
I think the problem is likely exactly what you describe, the high cost for most people to install the panels.
For some reason people treat roofs like black magic, and in some ways I can see that from a contractor perspective.
Since I’m looking at this from a different perspective (in-house labor; we also do our own construction), for us it’s an absolute no-brainer. A 660 W panel which cost us $125 produces $200 of electricity in its first year of operation at local rates. After installation and support infrastructure, our installed per-panel cost is around $400, so by the third year we are cash positive.
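Under those per-panel numbers, the payback falls out in two lines (ignoring rate changes and degradation, which only shift the result slightly over a 40-year lifespan):

```python
# Per-panel payback using the figures quoted above:
# ~$400 installed cost per 660 W panel, ~$200/year of electricity at local rates.
installed_cost = 400
annual_value = 200

years_to_break_even = installed_cost / annual_value
print(years_to_break_even)  # 2.0 -> cash positive from the third year on
```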
I acknowledge that it’s not the same for many people, but this seems like it is in the same category as the fact that I can get an MRI from the exact same machine here for $150 (in an unsubsidised, for profit commercial imaging center) while the (imaging only) cost is $2700 in the USA for the same study. It seems like somebody is getting screwed.
That industry exists; it is called Power Purchase Agreements (PPAs). The value of x is usually 20 or 25. It is typically lucrative for the company, not so much the homeowner.
I see you’ve never been to (pick one of many places where the social conditions are such that anyone who makes it to an age of agency with their brain fully intact is almost guaranteed to leave because it’s intolerable for anyone who actually thinks much) unfortunately, until you’ve lived in one of those places, it’s easy to imagine that people are the same everywhere. They are not. Social conditions are an extremely strong force in the intellectual development of humans.
To imagine that there are not regions that are comparatively intellectually impoverished is a comforting illusion, but unfortunately it is not reflective of the statistical reality nor the subjective experience of living in one of those places. Culture is very much regional, and cultural (social) factors (along with their physiological consequences) are by far the strongest factor in intellectual development.
In my experience Sonnet < Opus by a long shot for code review. Sonnet often flags things as errors that are not, because it fails to grasp the big picture… and it also fails to grasp structural issues that are perfectly coded and only show up as problems at the meta scale.
I have no reason to believe that the next generation won’t offer similar gains in verification, and there is some evidence to support that the cybersecurity implications are the result of exactly this expansion of ability.
It depends on how you review. In an orchestrated per-task review workflow with clearly defined acceptance criteria and implementation requirements, using anything other than Sonnet (handed those criteria and requirements) hasn’t really led to much improvement, but it drives up usage and takes longer. I even tried Haiku, but, yeah, Haiku is just not viable for review, even tightly scoped, lol.
Siccing Sonnet on a codebase or PR without guidance does indeed lead to worse results than using Opus, though.
That makes sense; if your scope is tight enough, good enough is good enough. I’ve got the expected specifications and code style guides, including some aerospace engineering ones, but in complex systems I still run into difficult-to-suss-out corner cases where the code works but the system breaks, usually due to unresolved conflicts in operational requirements.
Lol yeah, I don’t think I’m ready to ride in the jet that Claude built lol. I should clarify that I use the code guidelines because they are solid guardrails for making things that perform predictably, not because I’m building MCAS lol. Let’s hope that “vibe aerospace engineering” is a way off for now.