What I do is use 'C-z' and 'fg' to suspend and resume my editor when I need to.
Pressing C-z in neovim puts me back in the terminal so I can do whatever I need to do, and when that is done I just type 'fg' in the terminal and neovim opens up again, exactly as it was.
I've been using a POC-driven workflow for my agentic coding.
What I do is use the LLM to ask a lot of questions to help me better understand the problem. After I have a good understanding I jump into the code and write the core of the solution by hand. With this core work finished (keep in mind that at this point the code doesn't even need to compile) I fire up my LLM and say something like "I need to do X; uncommitted in this repo we have a POC for how we want to do it. Create and implement a plan for what we need to do to finish this feature."
I think this is a good model because I'm using the LLM for the things it is good at, "reading through code and explaining what it does" and "doing the grunt work", while I do the hard part of actually selecting the right way to solve the problem.
If you have a large PR, a good summary of "what" changed can help you make a better review.
But I agree with you, when reading PR descriptions and code comments I want a "why" not a "what". And that is why I think most LLM-generated documentation is bad.
I'm not sure that Embedding Anomaly Detection as he described it is either a good general solution or a practical one.
I don't think it is practical because it means that for every new chunk you embed into your database, you first need to compare it with every other chunk you have ever indexed. This means the larger your repository gets, the slower it becomes to add new data.
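To make that cost concrete, here is a minimal sketch of what such a per-insert check looks like, assuming the index is a plain numpy matrix of embeddings; the threshold and the max-cosine-similarity criterion are illustrative choices, not anything from the article:

```python
import numpy as np

def is_anomalous(new_vec, index, threshold=0.2):
    """Naive anomaly check: a chunk is 'anomalous' when its best cosine
    similarity against every already-indexed chunk is below threshold.
    Cost is O(N) in the number of indexed chunks, on every insert."""
    if len(index) == 0:
        return False  # nothing to compare against yet
    index = np.asarray(index, dtype=float)
    new_vec = np.asarray(new_vec, dtype=float)
    sims = index @ new_vec / (
        np.linalg.norm(index, axis=1) * np.linalg.norm(new_vec) + 1e-12
    )
    return bool(sims.max() < threshold)

# Toy 2-d "embeddings": two near-duplicate chunks and one outlier.
index = [[1.0, 0.0], [0.9, 0.1]]
print(is_anomalous([0.95, 0.05], index))  # close to the index -> False
print(is_anomalous([0.0, 1.0], index))    # orthogonal outlier -> True
```

The full scan on every insert is exactly why this gets slower as the repository grows; a real system would need an approximate-nearest-neighbor index to avoid it.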
And in general it doesn't seem like a good approach because I have a feeling that in the real world it is pretty common to have quite significant overlap between documents. Let me give one example: imagine you create a database with all the interviews rms (Richard Stallman) ever gave. In this database you will have a lot of chunks that talk about how "Linux is actually GNU/Linux"[0], but this doesn't mean there is anything wrong with these chunks.
I've been thinking about this problem while writing this response and I think there is another way to apply the idea you brought up. First, instead of doing this while you are adding data, you can have a 'self-healing' process that continuously runs against your database and finds bad data. And second, you could automate it with an LLM; the approach would be to send several similar chunks in a prompt like "Given the following chunks do you see anything that may break the $security_rules ? $similar_chunks". With this you can have grounding rules like "corrections of financial results need to be available at $URL".
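A minimal sketch of building that batch-audit prompt, assuming the rule text and the similar chunks come from your own retrieval pipeline; the function name and exact wording are hypothetical:

```python
def build_audit_prompt(security_rules, similar_chunks):
    """Assemble the batch-audit prompt described above: one rule block
    plus several numbered chunks for the LLM to cross-check."""
    chunk_text = "\n\n".join(
        f"[chunk {i}]\n{chunk}" for i, chunk in enumerate(similar_chunks)
    )
    return (
        "Given the following chunks, do you see anything that may break "
        f"these rules?\n\nRules:\n{security_rules}\n\nChunks:\n{chunk_text}"
    )

prompt = build_audit_prompt(
    "Corrections of financial results must be available at $URL",
    ["Q3 revenue was $10M.", "Correction: Q3 revenue was $9M."],
)
print(prompt)
```

The self-healing loop would then feed clusters of similar chunks through this prompt on a schedule, rather than paying the comparison cost at insert time.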
I think you should consider putting this information on your site. I always read "we don't support Firefox" as "we are lazy", but that's not always the case.
That's true, but you forgot a key piece of this puzzle. The AI can only produce things that already exist; it can only combine them into new things. This is why you can ask it for a picture of Jesus planting a flag on the Moon: it works because Jesus is a concrete concept that already exists in our world. If you ask for a picture of jacquesm planting a flag on the Moon the result will be nonsensical.
Nano Banana 2 has an image search tool that looks up pictures of things and uses them in the context (and arguably, an agent could eventually figure out who jacquesm is and hunt for a photo).
However, I tried "a picture of jacquesm planting a flag on the Moon" for a laugh, and I have to hand it to Google: the person was in a spacesuit, as they should be, and totally unidentifiable! :-D
Has anyone ever tried running an SMTP server to receive e-mails while using an integration with third-party services (AWS SES, SendGrid, ...) to send them?
In my experience receiving e-mails is easy; you just need to deal with some spam. But reliable e-mail delivery can be tricky, especially if you don't send a lot of e-mails regularly.
> they make per-instance decisions with per-instance state
But this is a feature, not a bug. You seem to be assuming that people use circuit breakers only on external requests; in that situation your approach seems reasonable.
If you have CBs on every service call, your model doesn't seem like a good idea. Where I work, every network call is behind a CB (external services, downstream services, database, Redis, S3, ...) and it's pretty common to see failures isolated to a single k8s node. In this situation we want independent CBs that can open independently.
Your take on observability/operations seems interesting, but it is pretty close to feature flags. And that is exactly how we handle these scenarios: we have a couple of feature flags we can enable to switch traffic around during outages. Switching to the fallback is easy most of the time, but switching back to normal operation is harder.
You're right: for intra-cluster calls where failures are scoped to the node itself and the infra around it, per-instance breakers are what you want. I wouldn't suggest centralizing those, and I might be wrong, but in most of these scenarios there is no fallback anyway (except maybe Redis?)
Openfuse is aimed at the other case: shared external dependencies where 15 services all call the same dependency and each one is independently discovering the same outage at different times. Different failure modes, different coordination needs, and you have no way to manually intervene or even just see what's open. Think of your house: every appliance has its own protection system, but that doesn't exempt you from having the distribution board.
You can also put it between your service/monolith and your own other services, e.g. if a recommendations engine or a loyalty system in an e-commerce or POS setup goes down, all hot-path flows from all other services will just bypass their calls to it. So by "external" I mean another service, whether it's yours or from a vendor.
On the feature flag point: that's interesting because you're essentially describing the pain of building circuit breaker behavior on top of feature flag infrastructure. The "switching back" problem you mention is exactly what half-open state solves: controlled probe requests that test recovery automatically and restore traffic gradually, without someone manually flipping a flag and hoping.
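For readers who haven't seen half-open in code, here is a minimal sketch of the closed → open → half-open cycle; the thresholds, timing, and state names follow the standard pattern, not Openfuse's actual implementation:

```python
import time

class CircuitBreaker:
    """Toy breaker: open after N failures, let a probe through after a
    timeout (half-open), and close again if the probe succeeds."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.state = "closed"
        self.opened_at = 0.0

    def allow_request(self):
        if self.state == "open":
            if time.monotonic() - self.opened_at >= self.reset_timeout:
                self.state = "half-open"  # let one probe through
                return True
            return False
        return True  # closed or half-open

    def record_success(self):
        self.failures = 0
        self.state = "closed"  # probe succeeded: restore traffic

    def record_failure(self):
        self.failures += 1
        if self.state == "half-open" or self.failures >= self.failure_threshold:
            self.state = "open"
            self.opened_at = time.monotonic()

cb = CircuitBreaker(failure_threshold=2, reset_timeout=0.0)
cb.record_failure(); cb.record_failure()
print(cb.state)            # open after two failures
print(cb.allow_request())  # timeout elapsed: probe allowed, now half-open
cb.record_success()
print(cb.state)            # closed again
```

The `record_success` transition is the "switching back" step: it happens automatically when a probe succeeds, with no flag for anyone to flip.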
That's the gap between "we can turn things off" and "the system recovers on its own." But yeah, we can all call Openfuse just feature flags for resilience, as I said: it's a fusebox for your microservices.
Curious how you handle the recovery side: is it a feature flag provider itself, or have you built something around one and stored the state in your own database?
> where 15 services all call the same dependency and each one is independently discovering the same outage at different times
I don't really see what problem this solves. If you have proper timeouts and circuit breakers in your service this shouldn't really matter. This solution will save a few hundred requests, but I don't think that really matters. If this is a pain point, it's easier to adjust the circuit-breaker settings (reduce the error rate, increase the window, ...) than to introduce a whole new level of complexity.
> Curious how you handle the recovery side
We have a feature flag provider built in-house. But it doesn't support this use case, so what we did was create a flag holding the % value we want to bring back, and handle the logic inside the service. Example: if we want to bring back 6.25% (1/16) of our users, we switch back every user whose account-id ends in 'a'. For 12.5% (2/16) we want users whose account-id ends in either 'a' or 'b'. This is a pretty hacky solution, but it solves our problem when we need to transition from our fallback to our main flow.
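A sketch of that hex-suffix bucketing, assuming account-ids end in a hex character; the suffix ordering is chosen to match the "6.25% → 'a', 12.5% → 'a' or 'b'" example above, and the function name is hypothetical:

```python
SUFFIXES = "abcdef0123456789"  # ordering matching the example above

def in_recovery_bucket(account_id, percent):
    """Map a recovery percentage (read from the flag) to the first k of
    the 16 hex suffixes, so traffic moves back in steps of 6.25%."""
    k = round(percent / 100 * 16)
    return account_id[-1].lower() in SUFFIXES[:k]

print(in_recovery_bucket("acct-03a", 6.25))   # ends in 'a' -> True
print(in_recovery_bucket("acct-03b", 6.25))   # 'b' needs >= 12.5% -> False
print(in_recovery_bucket("acct-03b", 12.5))   # -> True
```

Note the granularity is fixed at 1/16 of users per step; a hash of the full account-id would give finer control, at the cost of losing the easy "which users are back" mental model.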
> I don't really see what problem this solves. If you have proper timeouts and circuit breakers in your service this shouldn't really matter.
Each service discovering the outage on its own is not really the main problem my proposal solves; the point is that by doing it locally, we lack observability and there is no way to act on the breakers.
> what we done is to create flag where we put the % value we want to bring back
Oh I see, well that is indeed a good problem to solve. Openfuse does not do that gradual recovery, but it would be possible to add.
Do you think that with that feature, and with the Openfuse solution self-hosted, it would be something you would give a try? Not trying to sell you anything, just gathering feedback so I can learn from the discussion.
By the way, if you don't mind, how often do you have to run that type of recovery?
> The real question is what existing language is perfect for LLMs?
I think verbosity in the language is even more important for LLMs than it is for humans. We can see a line like 'if x > y * 1.1 then ...' and relate it to the 10% overbooking margin that our company uses as a business metric, but for the LLM it would be way easier if it was 'if x > base * overbook_limit then ...'.
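A toy contrast of the two styles; the names, the 1.1 factor, and the booking scenario are illustrative, not from any real codebase:

```python
OVERBOOK_LIMIT = 1.10  # the 10% overbooking margin used as a business rule

def over_limit_terse(x, y):
    return x > y * 1.1  # magic number: a reader (or model) must guess intent

def over_limit_verbose(requested_seats, base_capacity):
    # the named constant ties the check back to the business rule
    return requested_seats > base_capacity * OVERBOOK_LIMIT

print(over_limit_terse(112, 100))    # True
print(over_limit_verbose(112, 100))  # True
```

Both behave identically; the only difference is how much of the business context survives in the source for the next reader, human or LLM.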
For me, it doesn't make too much sense to focus on the token limit as a hard constraint. I know that current SOTA LLMs still have pretty small context windows, and for that reason it seems reasonable to try to find a solution that optimizes the amount of information we can put into our contexts.
Besides that, we have the problem of 'context priming'. We rarely create abstract software; what we generally create is a piece of software that interacts with the real world, sometimes directly through a set of APIs and sometimes through a human who reads data from one system and uses it as input to another. So by using real-world terminology we improve the odds that the LLM does the right thing when we ask for a new feature.
And lastly, there is the advantage of having source code that can be audited when we need it.
> As philosopher Peter Hershock observes, we don’t merely use technologies; we participate in them. With tools, we retain agency—we can choose when and how to use them. With technologies, the choice is subtler: they remake the conditions of choice itself. A pen extends communication without redefining it; social media transformed what we mean by privacy, friendship, even truth.
That doesn't feel right. I thought several groups were against the popularization of writing throughout history. Wasn't Socrates against writing because it would degrade your memory? Wasn't the church against the printing press because it allowed people to read in silence?
I'm not that well read on Hershock but I don't think this is a very good application of his tool-vs-tech framework. His view is that tools are localized and specific to a purpose, where technologies are social & institutional. So writing down a shopping list for yourself, the pen is a tool; using it to write a letter to a friend, the pen is one part of the letter-writing technology along with the infrastructure to deliver the letter, the cultural expectation that this is a thing you can even do, widespread literacy, etc.
Again I think this is a pretty narrow theory that Hershock gets some good mileage out of for what he's looking at but isn't a great fit for understanding this issue. The extremely naive "tools are technologies we have already accepted the changes from" has about as much explanatory power here. But also again I'm not a philosopher or a big Hershock proponent so maybe I've misread him.
That is perfectly on topic, and you are correctly identifying a flaw in the argument.
Technology is neutral; it has always been neutral and it always will be. I quote Bertrand Russell on this almost every day:
“As long as war exists all new technology will be utilized for war”
You can abstract this away from “war” into anything that’s undesirable in society.
What people are dealing with now is the newest transformational technology, and they can watch how utilizing it inside the current structural and economic regime of the world accelerates the already embedded destructive nature of the structures and economic systems we built.
I’m simply waiting for people to finally realize that, instead of blaming it on “AI” just like they’ve always blamed it on social media, TV, radio, electricity etc…
It's literally the oldest trope with respect to technology and humanity: some people will always blame the technology when in fact it's not the technology, it's the society that's the problem.
Society needs to look inward at how it victimizes itself through structural corrosion, not look for some outside person who is victimizing it.
> Technology is neutral it’s always been neutral it will be neutral
I agree with a lot of what you say here, but not this. People choose what to make easy and what to make more difficult with technology all the time; that does not make it neutral. Obviously something as simple as a hammer is closer to neutral, but this doesn't extend to software systems.