My solution is to _only_ chat. No auto completion, nothing agentic, just chats. If it goes off the rails, restart the conversation. I have the chat window in my "IDE" (well, Emacs), and though it can add entire files as context and stuff like that, I curate the context in a fairly fine-grained way, by copying and pasting, quickly writing out pseudocode, and so on.
Any generated snippets I treat like StackOverflow answers: Copy, paste, test, rewrite, or for small snippets, I just type the relevant change myself.
Whenever I'm sceptical I'll prompt stuff like "are you sure X exists?", or do a web search. Once I get my problem solved, I spend a bit of time really understanding the code and figuring out what could be simplified, even silly stuff like parameters the model just set to their default values.
It's the only way of using LLMs for development I've found that works for me. I'd definitely say it speeds me up, though certainly not 10x. Compared to just being armed with Google, maybe 1.1x.