Not to diminish the recording, but I felt more nostalgia for the last sentence of the article "Sometimes, the internet is good" than for the music itself.
We all know it... but I think they were very bold with this warning about using your private messages to train public models.
_Your messages with AIs will be used to improve AI at Meta. Don't share information, including sensitive topics, about others or yourself that you don't want the AI to retain and use_
"A watchdog kernel thread monitors RAM and NVMe pressure and signals userspace before things get dangerous." - which kind of danger this type of solution can have?
Removing the "Market expert" which uses OHLCV (Open, High, Low, Close, Volume) also drops the sharpee from 5.01 to 1.88 while also increasing the max draw down to 13.29% (v.s. 9.70% for the index). I'd be very surprised if the pre training of the base model was the only source of leakage...
I think there may be another solution for this: have the LLM write valid code that calls the MCPs as functions. Think of it like a Python script where each MCP tool is mapped to a function. A simple example:
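A minimal sketch of that idea, assuming hypothetical tool names (`search_files`, `read_file`, and `summarize` are illustrative stand-ins, not a real MCP SDK): instead of the model emitting one tool call per round trip, it emits a single script that composes the tools with ordinary control flow.

```python
# Hypothetical sketch: each MCP tool is wrapped as a plain Python function,
# so the LLM can write one script instead of many sequential tool calls.
# The bodies below are stubs; a real runtime would proxy to the MCP server.

def search_files(query: str) -> list[str]:
    # Stand-in for an MCP "search" tool.
    return [f"{query}_report.txt", f"{query}_notes.txt"]

def read_file(path: str) -> str:
    # Stand-in for an MCP "read" tool.
    return f"contents of {path}"

def summarize(texts: list[str]) -> str:
    # Stand-in for an MCP "summarize" tool.
    return f"summary of {len(texts)} documents"

# This part is what the LLM would generate: plain code over the tool
# functions, with loops and intermediate variables instead of round trips.
paths = search_files("q3")
docs = [read_file(p) for p in paths]
print(summarize(docs))  # -> summary of 2 documents
```

The appeal is that branching, loops, and error handling live in the script itself, so the model doesn't need a new inference round for every intermediate result.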
Yes! If you want to see how this can work in practice, check out https://lutra.ai; we've been using a similar pattern there. The challenge is making the code runtime work well for it.
Unfortunately, it uses Miniconda, which does not allow usage in companies with more than 200 employees. I think that conflicts with the AGPL license. I created a PR to fix that.