The crowd around this post shows how superficial knowledge about Claude Code is. It gets releases every day, and most of this is already built into the vanilla version - not to mention subagents working in worktrees, memory.md, plans you can comment on directly from the interface, subagents launched in the research phase, and also basic MCPs like LSP/IDE integration and context7 so you're not stuck at the knowledge cutoff.
If you go to YouTube and search for something like "7 levels of Claude Code", this post would land at maybe level 3-4.
Oh, one more thing - quality is not consistent, so be ready for 2-3 rounds of "are you happy with the code you wrote?" and for defining audit skills crafted for your application domain - for example a RODO/compliance audit, etc.
I'm using the built-in features as well, but I like the flow I have with superpowers. You've made a lot of assumptions in your comment that are just not true (at least for me).
I find that brainstorming + (executing plans OR subagent-driven development) is way more reliable than the built-in tooling.
I made no assumptions about you - I simply replied to your comment, which I liked, and wanted to follow up on that point of view :)
Also, if you use a language with more than 24 letters - like, you know, most of the world - you can't type {left Alt}+n in Teams, while {right Alt}+n works perfectly fine, and I haven't found a way to disable this awful behavior.
Like, mate - I'm on a Mac; I use Cmd+N for new tabs, not Windows-style shortcuts...
I second this suggestion. It might sound obvious, but during my therapy my psychologist asked me to do exactly this, in a way that's non-personal and non-threatening for the relationship: just tell them that I'm working through my issues and would like honest feedback (ideally written, with no back-and-forth) - what makes them uncomfortable, and so on.
This helped me a lot - seeing how different the message was on the receiving end from what I had intended.
While these optimizations are solid improvements, I was hoping to see more advanced techniques beyond the standard bulk insert and deferred constraint patterns. These are well-established PostgreSQL best practices - would love to see how pgstream handles more complex scenarios like parallel workers with partition-aware loading, or custom compression strategies for specific data types.
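For readers unfamiliar with the deferred-constraint pattern mentioned above, here's a minimal sketch. It uses SQLite purely as a self-contained stand-in (in PostgreSQL you would declare the foreign key `DEFERRABLE INITIALLY DEFERRED`, or run `SET CONSTRAINTS ALL DEFERRED`, instead of the pragma): foreign keys are checked once at COMMIT rather than per row, which lets a bulk load insert child rows before their parents exist.

```python
import sqlite3

# isolation_level=None gives manual transaction control (no implicit BEGINs)
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE child (id INTEGER PRIMARY KEY, "
    "parent_id INTEGER REFERENCES parent(id))"
)

conn.execute("BEGIN")
# Defer FK enforcement until COMMIT (PostgreSQL: SET CONSTRAINTS ALL DEFERRED)
conn.execute("PRAGMA defer_foreign_keys = ON")
# Bulk-insert children whose parent doesn't exist yet - no per-row FK check
conn.executemany(
    "INSERT INTO child (id, parent_id) VALUES (?, ?)",
    [(i, 1) for i in range(1000)],
)
conn.execute("INSERT INTO parent (id) VALUES (1)")  # satisfy the FK before COMMIT
conn.execute("COMMIT")  # constraints verified once, here

loaded = conn.execute("SELECT COUNT(*) FROM child").fetchone()[0]
```

Without the deferral, the very first `INSERT INTO child` would fail; with it, the whole batch loads and the check is paid once at commit time.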
> It's pretty common to use a cheaper model to fix these errors to match the schema if it fails with a tool call.
This hasn't been true for a while.
For open models there's zero need for these kinds of hacks: libraries like XGrammar and Outlines (and several others) exist both as standalone solutions and as components used by a wide range of open-source tools to ensure structured generation happens at the logit level. There's no need to multiply your inference cost when in some cases (XGrammar) these tools can actually reduce it.
For proprietary models, more and more providers are using proper structured generation (i.e., constrained decoding) under the hood. Most notably, OpenAI's current version of Structured Outputs uses logit-based methods to guarantee the structure of the output.
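To illustrate what "structured generation at the logit level" means, here's a toy sketch (not XGrammar's or Outlines' actual implementation): the decoder tracks a grammar state and masks every token the grammar disallows before picking the next token, so invalid output becomes impossible rather than something to detect and retry.

```python
import math
import random

# Toy vocabulary and a tiny state machine accepting exactly: { k : v }
VOCAB = ["{", "}", "k", ":", "v"]
DFA = {  # state -> {allowed token: next state}; state 5 is accepting
    0: {"{": 1},
    1: {"k": 2},
    2: {":": 3},
    3: {"v": 4},
    4: {"}": 5},
}

def constrained_decode(logits_fn, max_steps=10):
    state, out = 0, []
    for _ in range(max_steps):
        if state == 5:  # reached the accepting state
            break
        logits = logits_fn(out)  # stand-in for the model's forward pass
        allowed = DFA[state]
        # Disallowed tokens get -inf, so they can never win the argmax
        masked = [l if t in allowed else -math.inf
                  for t, l in zip(VOCAB, logits)]
        tok = VOCAB[max(range(len(VOCAB)), key=masked.__getitem__)]
        out.append(tok)
        state = allowed[tok]
    return "".join(out)

# A "model" emitting random preferences; the mask still forces valid output
random.seed(0)
result = constrained_decode(lambda out: [random.random() for _ in VOCAB])
# result == "{k:v}"
```

Real implementations compile a JSON Schema or grammar into this kind of token-level automaton and apply the mask over the full vocabulary at each decoding step, which is why the marginal cost can be near zero.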