It's not that simple. OpenAI originally released a classifier to detect whether content was generated by an LLM, and later dropped the service because it wasn't accurate. Today's models are so good at text generation that in most cases it isn't possible to tell human-written from machine-generated text.
Well, they could just refuse prompts that appear to be producing blogspam. If they wanted to stop it, they definitely could.
Their argument is that since it's centralized, things like that are possible (whereas with Llama 2 they aren't), and they do "patch" things all the time. But since blogspam contributes to paying back the billions Microsoft expects, they're not going to.
It would be easy to work around using other open-source models: use GPT-4 to generate the content, then Llama 2 or something else to change the style slightly.
Also, it would require OpenAI to store the history of everything its API has ever produced, which would conflict with their privacy policy and privacy protections.
If it's a straightforward hash, that's easy to evade by modifying the output slightly (even programmatically).
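To see why, here's a minimal sketch assuming the registry stores exact SHA-256 digests of model outputs (my assumption, purely for illustration): a one-character edit yields a completely different digest, so an exact-match lookup misses it.

```python
import hashlib

original = "Today's models are remarkably good at text generation."
# Trivial programmatic edit: swap one word for a synonym.
modified = original.replace("remarkably", "very")

h1 = hashlib.sha256(original.encode()).hexdigest()
h2 = hashlib.sha256(modified.encode()).hexdigest()

# Cryptographic hashes have the avalanche property: any change to the
# input scrambles the whole digest, so the edited text no longer matches
# anything in an exact-hash registry.
print(h1 == h2)  # False
```

Any whitespace tweak, synonym swap, or reordering pass has the same effect, and all of those are easy to automate.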
If it's a perceptual hash, that's easy to _exploit_: just ask the AI to repeat something back to you, and it typically does so with few errors. Now you can mark anything you like as "AI-generated". (Compare to Schneier's take on SmartWater at https://www.schneier.com/blog/archives/2008/03/the_security_...).
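A toy version of the exploit, using word-trigram shingles compared by Jaccard overlap as a stand-in "perceptual" fingerprint (not any real vendor's scheme): paste human-written text into the model, ask it to repeat it, and the near-verbatim echo lands in the provider's fingerprint database, where the original human text then matches it.

```python
# Toy perceptual text hash: fingerprint a text by its set of word trigrams
# and compare fingerprints by Jaccard overlap.
def shingles(text, n=3):
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b):
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)

human_text = "the quick brown fox jumps over the lazy dog near the river bank"

# The model echoes the pasted text with only a minor wording slip:
echoed = "the quick brown fox jumps over the lazy dog by the river bank"

# The human original still overlaps heavily with the "AI-generated" echo.
print(similarity(human_text, echoed))  # ~0.57 despite the changed word
```

The fuzzier the matching (which is the whole point of a perceptual hash), the easier it is to get arbitrary human text flagged this way.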