No, I would argue that of the three main ingredients - training data, model source code, and weights - the weights are the furthest from anything akin to source code. They're more like obfuscated binaries. When it comes to fine-tuning only, however, things shift a little, yes.
The GPL defines the “source code” of a work as the preferred form of the work for making modifications to it. If Meta released a petabyte of raw training data, would that really be easier to extend and adapt than fine-tuning the weights?
No, open weights are the output of a proprietary and secretive training process. It’s like sharing a precompiled application instead of what you need to reproduce it.
AI2’s OLMo is an example of what open source actually looks like for LLMs.
Open source requires, at the very least, that you can use it for any purpose. This is not the case with Llama.
The Llama license has a lot of restrictions based on user base size, type of use, and so on. For example, you're not allowed to use Llama to train or improve other models.
But it goes much further than that. The government of India can't use Llama because they're too large. Sex workers are not allowed to use Llama due to the license's acceptable use policy. Then there is also the vague language prohibiting discrimination, racism, etc. Good luck getting something like that approved by your legal team.