Since most of our early users are running foundation models over API (like OpenAI models), we're still working out the best way to manage uploading custom weights and NLP models. However, for users who need it right away, we can upload and download fine-tuned weights and architectures manually.
In terms of privacy policy, we haven't had many users doing much with fine-tuned deltas yet, but we treat them the same way we treat all model data: all inference and benchmarking data belongs to the user, and we don't aggregate it across users or share it between orgs.