Knowing how to trigger it properly is key, though. Just like with all LLMs, the context it's given determines the quality of its output.
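A rough sketch of what that means in practice: the same question, with and without supporting context prepended to the prompt. The helper name and prompt template here are made up for illustration, not any real API.

```python
# Illustrative only: a hypothetical prompt builder showing how
# supplying context changes what the model actually sees.

def build_prompt(question: str, context: str = "") -> str:
    """Prepend background context so the model can ground its answer."""
    if context:
        return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return f"Question: {question}\nAnswer:"

# Without context, the model has only the question to work with.
bare = build_prompt("Why is the build failing?")

# With context, the model can answer from the actual error output.
grounded = build_prompt(
    "Why is the build failing?",
    context="Compiler log excerpt, relevant config files, recent diffs...",
)

print(bare)
print("---")
print(grounded)
```

The point is just that everything the model "knows" about your problem has to be in that string; leave the context out and the quality drops accordingly.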