I think it could potentially be useful. Sometimes I need "simple" shapes that are still somewhat annoying to create. And you don't need to one-shot these; the process is allowed to be iterative! The skills can be improved over time by revising AGENTS.md, e.g. "when I say L-bracket, I probably mean..".
I think going from a picture to an initial starting point with well-"thought"-out structure for CAD purposes could potentially be very useful. Optimally you could just enter the measurements and be done.
I always choose to go with positive terms for variables etc., so this would then be ALLOW_TRACKING=0. It brings some consistency and makes things easier to reason about, since you avoid double negation.
Perhaps the "DO NOT TRACK" name is somewhat of an established term, though.
One could also implement ALLOW_TRACKING as a comma-separated list of applications I choose to allow. Say I'd like to share telemetry with go and brew, but not with aws or the rest: ALLOW_TRACKING=go,brew
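A minimal sketch of how a tool could interpret that convention. ALLOW_TRACKING is this thread's proposal, not an established variable, and `tracking_allowed` is a hypothetical helper name:

```rust
// Hypothetical helper for the convention proposed above: given the raw
// value of an ALLOW_TRACKING variable (None if unset), decide whether the
// application named `app` may send telemetry. "1" allows everything,
// "0" or unset allows nothing, anything else is a comma-separated allowlist.
fn tracking_allowed(allow_tracking: Option<&str>, app: &str) -> bool {
    match allow_tracking {
        None | Some("0") | Some("") => false,
        Some("1") => true,
        Some(list) => list.split(',').any(|item| item.trim() == app),
    }
}

fn main() {
    // In a real tool this would come from std::env::var("ALLOW_TRACKING").ok()
    let value = Some("go,brew");
    println!("go:  {}", tracking_allowed(value, "go"));  // true
    println!("aws: {}", tracking_allowed(value, "aws")); // false
}
```

The positive-term framing pays off here: the unset case and the "0" case collapse into the same match arm, with no double negation to untangle.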
I think the difference is that Cloudflare is the one party providing streaming access for their customers: not just anyone can proxy the data through Cloudflare, they need to be a Cloudflare customer first.
When I post this message to Hacker News, I'm the "customer" of this website. I'm not a customer of all the intermediate nodes in the chain. So if I were to write something illegal and HN were unresponsive to takedown requests, the courts could order HN's IP to be blocked, not some intermediate ISP.
You're the customer of your ISP who's the customer of another ISP who's the customer of another ISP who Hacker News is a customer of.
The Digital Services Act speaks of "conduits" instead of the specific form a conduit may take. It does not give special rights to someone who forwards IP packets unmodified, or to someone who receives IP packets and reissues other IP packets, or to someone who changes the IP addresses in the packets. It only cares about the net effect of the transmission, and the fact is that Cloudflare is a conduit with caching.
I once thought the same about all the copyrighted works on which LLMs are currently trained. Surely they can't just hoover everything up? Haha, silly me.
I understand that creating an LLM itself is transformative, but an LLM trained on copyrighted works remains capable of generating derivative works, which will eventually result in successful copyright lawsuits against LLM users who redistribute those derivative works.
In advance of that day, the great race is to build a licensed corpus as aggressively as possible (see GitHub's latest decision to opt in Copilot usage). Even if Blender doesn't send your data on every save, various options can be developed, such as publishing to a Blender-controlled public channel.
I'm relatively sure the source code I've written and stored on my local computer has not been sucked up into the LLM training data. And I believe people working with Blender models are in much the same situation: they don't host their data on a third-party service and openly share it.
I think it should be pretty clear that if you provided the tool the specification for the code you want, you have already provided creative input.
After all, is this not what happens with compilers as well? LLM agents are just quite advanced compilers that don't require the specification to be as detailed as with traditional compilers.
>it should be pretty clear that if you provided the tool the specification for the code you want, you have already provided creative input.
If you provided a human contractor with the specification for the code you want, the courts have repeatedly made clear that you have not provided the creative input from a copyright perspective, and the contractor needs to explicitly assign those rights to you if you want to own the copyright on the code.
Let's say we didn't have assemblers, but instead we would have three professions:
- Specifiers, who make the specification for the system
- Programmers, who write C code
- Machine encoders, who take that C code and write machine code for a CPU
Would it be that the copyright would then belong to programmers, if no other explicit assignments would be made?
---
Thinking about it, probably yes: copyright of the spec belongs to the specifiers, copyright of the C code belongs to the programmers, and copyright of the machine code to the machine encoders. Or would it depend on the amount of optimization the machine encoders do, i.e. whether it is creative or not? And how does this relate to the copyrightability of C compiler output, where optimizations can sometimes surprise the developer?
In music, you can have a copyright for a composition (the lyrics and sheet music) and another for a master recording. If you sell a copy of a song, you generally have to pay royalties to both copyright holders.
So, in your example, the specifiers would own the specification, the programmers the C code, and machine encoders own the machine code.
But the ownership wouldn't be complete. If you sell the machine code, you'd have to pay royalties to all three. If you only sold the C code, only to the specifiers and the programmers.
The compiler analogy is the right one to reach for and the Copyright Office addressed it directly: the question is not whether you provided input, it is whether the creative expression in the output reflects human authorship. With a traditional compiler, the programmer authors every expression in the source. With an LLM, the programmer authors the intent and the model makes the expressive decisions about structure, naming, pattern, and implementation. Whether that distinction matters legally is what Allen v. Perlmutter is working through right now. The summary judgment briefing completed in early 2026 and it may be the next landmark ruling on exactly this question.
Specifications are not necessarily creative input. E.g. if I write a prompt that just says “write a rate limiter in Python”, there’s really no creative input. I didn’t decide on the API, the algorithm to bucket requests, where to store counters, etc. I just gave it statements of fact, which are inherently not creative.
Compilers are different in that the resulting binaries are not separately copyrighted. They are the same object to the Copyright Office because one produces the other, in the same way that converting an image to a PDF is still the same copyright.
LLMs don’t do that. The stuff coming in may not be copyrighted, and may not be copyrightable. The stuff that comes out is not a rote series of transformations, there are decisions being made. In common use, running a prompt 10 times might yield 10 meaningfully different results.
I’m dubious the outcome will be “any level of prompting is enough creativity”.
Possibly; I'm not going to hazard a guess on what the Supreme Court will decide the exact bar is. I just don't think it will be either extreme. "Nothing is copyrighted" is too damaging to the economy, "everything is copyrighted" has weird impacts on non-LLM copyrights that conflict with precedent.
This is actually the opposite of what the copyright office has said. Directly addressing AI generated code/prompts, they compared it to someone who is commissioning art, describing to the artist what they want.
The copyright falls to the artist, not the person commissioning it.
Complicated in this case, because there is no artist.
The thing they were gesturing at, correctly, is the naming. This is of course a convention and not a promise, but by convention `Goose::as_crow` would be a function that is cheap and gets you, say, `&Crow` instead of the `&Goose` you might have now, whereas `Goose::to_donkey` suggests that although we can have a `Donkey` instead of this `Goose`, it's expensive to do that.
Commonly `as_` conversions are actually no-ops at runtime (the type changes but the data does not; no CPU instructions are emitted) whereas `to_` conversions might do quite a lot, especially if they bring into existence an actual thing at runtime -- maybe `Goose::to_donkey` actually needs to allocate memory for a `Donkey` and destroy the `Goose`.
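The `Goose`/`Crow`/`Donkey` types are the commenter's hypotheticals, but the standard library shows the same convention; a small sketch with real std methods:

```rust
fn main() {
    let s = String::from("goose");

    // `as_` conversion: a cheap view change. `as_bytes` just reinterprets
    // the String's existing buffer as &[u8]; no allocation, no copying.
    let bytes: &[u8] = s.as_bytes();
    assert_eq!(bytes.len(), 5);

    // `to_` conversion: may do real work. `to_uppercase` walks the string
    // and allocates a brand-new String for the result.
    let shouted: String = s.to_uppercase();
    assert_eq!(shouted, "GOOSE");

    // Both leave the original intact here; a conversion that consumes
    // its input would by convention be named `into_` (e.g. String::into_bytes).
    assert_eq!(s, "goose");
}
```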
Yes it's unsafe because the Vec doesn't enforce the promise we made about this being UTF-8 text whereas String did, so now that promise is ours to keep and `unsafe` is how we signify that you the programmer took on the responsibility for safety here.
Yes, naming does play a role here, but the biggest hint is `as_mut_vec` returning a reference. For that to work a `Vec<u8>` needs to exist somewhere, and continue to exist after this function returns. For comparison, `to_` conversions generally just return the new data, so this reasoning doesn't apply to them.
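For the record, the std method under discussion is `String::as_mut_vec`; a minimal demonstration of both points, the reference and the unsafety:

```rust
fn main() {
    let mut s = String::from("hi");

    // `as_mut_vec` hands back &mut Vec<u8> pointing at the String's own
    // buffer; that Vec already exists inside the String, which is why an
    // `as_` conversion can return a reference at all. It is `unsafe`
    // because we are now trusted to keep the bytes valid UTF-8.
    unsafe {
        let v: &mut Vec<u8> = s.as_mut_vec();
        v.push(b'!'); // ASCII, so the UTF-8 promise still holds
    }

    assert_eq!(s, "hi!");
}
```

Pushing an invalid byte sequence (say, a lone `0xFF`) would compile just as happily; the `unsafe` block marks exactly where that responsibility shifted to us.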
I don't know how large the cache is, but Gemini guessed that the quantized cache for Gemini 2.5 Pro / Claude 4 with a 1M context could be 78 gigabytes. ChatGPT guessed even bigger numbers. If someone can deliver a more precise estimate, you're welcome to :-).
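The back-of-the-envelope formula behind such guesses is standard, even though none of these models' configurations are public, so every parameter below is a made-up assumption for illustration:

```rust
// KV-cache size: 2 (one K and one V tensor) * layers * KV heads *
// head dimension * bytes per element * cached tokens.
fn kv_cache_bytes(layers: u64, kv_heads: u64, head_dim: u64,
                  bytes_per_elem: u64, tokens: u64) -> u64 {
    2 * layers * kv_heads * head_dim * bytes_per_elem * tokens
}

fn main() {
    // Assumed config: 40 layers, 8 KV heads (grouped-query attention),
    // head_dim 128, 8-bit quantized cache, 1M-token context.
    let bytes = kv_cache_bytes(40, 8, 128, 1, 1_000_000);
    println!("{:.1} GB", bytes as f64 / 1e9); // ~81.9 GB, same ballpark as the guesses above
}
```

Grouped-query attention (few KV heads) and 8-bit quantization are exactly the knobs that keep this in the tens rather than hundreds of gigabytes.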
So it would probably be quite a long transfer to perform in these cases, and probably not very feasible to implement at scale.
Wouldn't it help if the system did compaction before the eviction happens? The problem is that Claude probably doesn't want to automatically compact all sessions that have been left idle for an hour (and very likely abandoned already); that would probably introduce even more additional costs.
Maybe the UI could do that for sessions that the user hasn't left yet, when the deadline comes near.