Hacker News

Yes, what's your point? That is literally what it does - it adds relevant knowledge to the prompt before generating a response, in order to ground it more effectively.
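For what it's worth, the retrieve-then-prompt pattern being described is roughly the sketch below. The corpus, the word-overlap scoring, and the prompt template are all illustrative assumptions; a real system would score by embedding similarity and feed the prompt to an actual LLM.

```python
import re

# Toy sketch of retrieval-augmented prompting: rank documents by naive
# word overlap with the query, then prepend the top hits to the prompt.
# (Hypothetical corpus and template, for illustration only.)

def retrieve(query, corpus, k=2):
    """Return the k documents sharing the most words with the query."""
    q_words = set(re.findall(r"\w+", query.lower()))
    return sorted(
        corpus,
        key=lambda doc: -len(q_words & set(re.findall(r"\w+", doc.lower()))),
    )[:k]

def build_prompt(query, corpus):
    """Ground the question by prepending retrieved context."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "The Eiffel Tower is in Paris.",
    "Python was created by Guido van Rossum.",
    "The Great Wall is in China.",
]
prompt = build_prompt("Who created Python?", corpus)
```

The point of contention in the thread is whether this prompt-time grounding substitutes for knowledge baked into the weights, or merely steers it.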


My point is that this doesn't scale. You want the LLM to have knowledge embedded in its weights, not prompted in.


It scales fine if done correctly.

Even with knowledge in the weights, the extra context lets the model move to the correct region of its representation space.

Much as with humans, there are terms that are meaningless without knowing the context.


Would it be possible to make GPT-3 from GPT-2 just by prompting? No - it doesn't work, and it doesn't scale.


Bit of a straw-man there.



