Thanks for the explanation, this is very interesting! Just so I'm sure I'm understanding: this doesn't have to do with what GradientJ currently offers, or does it?
It does! Right now we're focused on what teams need to get that first version out the door, but ultimately we want to offer people a platform that lets them manage their NLP app throughout its lifecycle (LLM or otherwise).
Going through that process of idea -> first model -> optimized model is the core "loop" of the LLM lifecycle. The problem is that to do it effectively, you need to set up the right infrastructure to both aggregate the data going into and coming out of your model AND set up benchmarks for running experiments.
Having this data-eval engine set up is what lets you easily (or even autonomously) evaluate whether it makes sense to switch from that prompted model to a smaller model.
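To make that concrete, here's a rough sketch of the benchmark-then-decide step. Everything in it is hypothetical (the function names, the exact-match metric, the tolerance threshold) and it's just meant to illustrate the loop, not GradientJ's actual API:

```python
# Hypothetical sketch of a data-eval loop: score two models on a benchmark
# built from aggregated production data, then decide whether to switch.

def exact_match(prediction: str, expected: str) -> float:
    """Score one example: 1.0 on an exact match, else 0.0."""
    return 1.0 if prediction.strip() == expected.strip() else 0.0

def evaluate(model, benchmark) -> float:
    """Average score of `model` (a callable) over (input, expected) pairs."""
    scores = [exact_match(model(x), y) for x, y in benchmark]
    return sum(scores) / len(scores)

def should_switch(prompted_model, small_model, benchmark, tolerance=0.02) -> bool:
    """Switch to the smaller model if it stays within `tolerance`
    of the prompted model's benchmark score."""
    return evaluate(small_model, benchmark) >= evaluate(prompted_model, benchmark) - tolerance

# Toy benchmark standing in for aggregated input/output pairs from production.
benchmark = [("2+2?", "4"), ("capital of France?", "Paris"), ("3*3?", "9")]

# Stub models standing in for a large prompted LLM and a smaller candidate.
prompted = lambda x: {"2+2?": "4", "capital of France?": "Paris", "3*3?": "9"}[x]
small = lambda x: {"2+2?": "4", "capital of France?": "Paris", "3*3?": "8"}[x]

print(should_switch(prompted, small, benchmark))  # smaller model misses one example -> False
```

The point is just that once the data and benchmarks are in place, "should we move off the prompted model?" becomes a cheap, repeatable query rather than a judgment call.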
Right now, GradientJ lays some of the rudimentary groundwork for this loop by letting you set up testing for prompt-based LLMs and automatically aggregate the input/output data that goes through your model in production. We've got some basic fine-tuning capabilities, but really we're still working on refining the tools that use that data to evaluate across multiple NLP models (both LLM and non-LLM).