Hacker News | marcus_holmes's comments

> For me personally, what is more interesting is that we might not even be able to copyright these creations at all. A court still might rule that all AI-generated code is in the public domain, because there was not enough human input in it. That’s quite possible, though probably not very likely.

As I understand it, the US Supreme Court has just this week ruled exactly this. LLM output cannot be copyrighted, so the only part of any piece of software that can be copyrighted is that part that was created by a human.

If you vibe-code the entire thing, it's not copyrightable. And if it can't be copyrighted that means it is in the public domain from the instant it was created and can't be licensed.


> As I understand it, the US Supreme Court has just this week ruled exactly this. LLM output cannot be copyrighted, so the only part of any piece of software that can be copyrighted is that part that was created by a human.

Your understanding is incorrect. The case was about whether an LLM can be an author, and did not address whether the person using it can be (which will be the case). https://news.ycombinator.com/item?id=47260110


This is the correct understanding. Go back to the selfie of the monkey. Is the monkey the creator of the photo? Does he own the copyright? No. The photographer who created the opportunity for the monkey to take the selfie is the holder of the copyright on that image.

Similarly, the operator of the LLM is the holder of the copyright of the LLM’s output.


> This is the correct understanding. Go back to the selfie of the monkey. Is the monkey the creator of the photo? Does he own the copyright? No. The photographer who created the opportunity for the monkey to take the selfie is the holder of the copyright on that image.

This is incorrect. The monkey is unable to have a copyright on the photograph, but there was no court case suggesting the owner of the camera (Slater) has a copyright on the photo, and the Copyright Office's rules actually say the opposite, that it isn't copyrightable at all (the Wikipedia summary of the situation is good, pointing out the Copyright Office specifically added an example of "a photograph taken by a monkey" to their guidance to make their point clear).


I was indeed misremembering part of this.

The professional photographer claimed he engineered the situation that led to the photo and thus he owns the copyright on the images. That specific claim appears to not have been addressed by the court nor by the copyright office. Instead Slater settled by committing to donations from future revenue of the photos.


If it were a trained monkey, and the photographer held a button in his hand that triggered the photo taking mechanism, there'd be no question of copyrightability. Similarly, vibe-coding and eliciting output from a software tool which results in software or images or text created under the specification and direction and intent and deliberate action of the user of the tool is clearly able to be copyrighted.

The user is responsible for the output of the software. An image created in Photoshop isn't the IP of Adobe, nor does text in Word somehow belong to Microsoft. The idea that because the software tool is AI its output is magically immune from copyright is silly, and any regulation or legislation or agency that comes to that conclusion is silly and shouldn't be taken seriously.

Until they get over the silliness, just lie. You carefully manually crafted each and every character, each pixel, each raw byte by hand, slaving away with a tiny electrode, flipping each bit in memory, to elicit the result you see. Any resemblance to AI creations is purely coincidental, or deliberate as an ironic statement about current affairs.


Copyright is positive law created by humans, not natural law that we happen to recognize. The idea that adopted legislation or established caselaw can be wrong about what copyright fundamentally is makes no sense.

Not what I'm saying - if you meet the technical, intentional definition of a process, substantiated by precedent, then the law should support any variation of the process which has those same technical features meeting the definition.

Using AI as a tool to produce output, no matter how complex the underlying tool, should result in the authorship of the output being assigned to the user of the tool.

If autocorrect in Word doesn't nullify copyright, neither should the use of LLMs; manifesting an idea into code and text and images using prompts might have little human input, but the input is still there. And if it's a serious project, into which many hours of revision, back and forth, testing, and changing have gone, there should be absolutely no bar to copyright.

I can entertain a dismissal based on specific low effort uses of a tool - something like "generate a 13 chapter novel 240 pages long" and seeing what you get, then attempting to publish the book. But almost anything that involves any additional effort, even specifying the type of novel, or doing multiple drafts, or generating one chapter at a time, would be sufficient human involvement to justify copyright, in my eyes.

There's no good reason to gatekeep copyright like that. It doesn't benefit society, or individuals, it can only benefit those with vast IP hoards and giant corporations, and it's probably fair to say we've all had about enough of that.


> And if it can't be copyrighted that means it is in the public domain from the instant it was created and can't be licensed.

I don't think this follows? If I vibe code something and never post it anywhere public, I can still license that code to a company and ask them to pay me for using the code?

So as a corollary, the business model of providing software where you can choose either a free (as in beer) but restrictive license (e.g. GPL), or pay money for a permissive, business-compatible license, will cease to exist.

I think that's a shame actually, because it has been a good way of providing software that does something useful but where large companies that earn money from the use will have to pay the software creator.


> I can still license that code to a company and ask them to pay me for using the code

I believe you can do that with public domain/copyright-free material in general. There is no requirement to tell someone that the material you license to them is also freely available under other terms, or that your license is not enforceable.


Depending on how you do it, and whether they find out, you could certainly be sued for fraud and misrepresentation, though. And, if you put a "copyright by me" at the top of a public domain work, it's technically a crime under 17 U.S.C. § 506(c) - Fraudulent Copyright Notice.

https://www.law.cornell.edu/uscode/text/17/506#c


Technically how will vibe code be identified? And how does one determine the level of human involvement that would make code copyrightable? What of the prompts? Are those copyrightable? What about the architectural and tactical design of the code if I do those myself?

I don't vibe code; I am firmly in charge of the architecture and code style of my projects, and I frequently give detailed instructions to the AI tools I use. But, to me, this is leading to a weird place. Why would the result of using a tool to create something new not be copyrightable simply due to the specific tool used?

I think this whole hullabaloo is self-inflicted. Code or any other creative work should stand on its merits. There is no issue with copyright and no issue with the ship of Theseus. The current copyright approach is still applicable: code (or any other creative work) that appears to be lifted verbatim from another work could be a copyright violation. Work that is sufficiently original (irrespective of how it was created) is likely not a copyright violation.


Code is one thing, but what about writing? There is no 100% foolproof way to identify content written by LLMs, and human writing routinely gets incorrectly flagged as such. If I write a book, and a checker says that it's written by LLM, is it automatically in the public domain?

The more data they have on you, the more valuable that data is to a third party. So they sell your data to someone else, who then phones you based on your known deep interest in <whatever it was that tracked you>. Or spams you. Or messages you. Or whatever method they think will most get your attention.

If you don't give them that information, they can't sell it, and the buyers won't annoy you.

It's not that the ads you get are more interesting, it's that you get more ads because they think they know more about you.


I had the same results a year ago. Everything has changed since ~Nov 25, give it another go and you'll be surprised

> The other thing is, human intelligence is the only real intelligence we know about.

There's a long and proud history of discounting animal intelligence, probably because if we actually thought animals were intelligent we'd want to stop eating them.

Octopodes are sentient. Cetaceans have well-developed language. Elephants grieve their dead. Anyone who has owned a dog knows that it has some intelligence and is capable of communicating with us. There's a ton of other intelligences that we know about.

> As humans, we have conveniently made those properties match things only we have.

I think this is the key point. Machine intelligence is not going to look like human intelligence, any more than animal intelligence does. We can't talk to the dolphins, not because they're not smart and don't have language, but because we can't work out their language. Though I'm not sure what we'd even say to them, because they live in a world we'll never understand, and vice versa. When Claude finally reaches consciousness, it's not going to look like a human consciousness, and actually talking to that consciousness is going to be difficult because we won't share a reality.

An LLM is a tool. I can just about stretch to it being an Artificial Intelligence, but I prefer to continue being specific and call it an LLM rather than an AI. It is not conscious or self-aware. It fakes self-awareness because as a tool the thing it does is have conversations with humans, and humans often ask it questions about itself. But I don't think anyone actually believes it is self-aware. Not least because the only time it thinks is when prompted.


This is an important point. We know what our DMN is and how we use language as a basis for thought to create concepts and complex ideas. However, language also bounds our thought. What about the dolphin? It is a fundamental philosophical problem whether advanced intelligence can exist without language. We have a pretty good notion that you need some sort of substrate (language) to create intelligence. And we know that mapping the internal state of a brain from inside of itself is incredibly hard; the way our human brain evolved to do it is really fascinating but also full of hacks and mismatched mappings, based on what we know is actually going on.

Cognitive computer science explores this whole area of mapping language and the underlying semantic meaning. Ultimately, these intelligences will be bound by physics (unless some new physics or understanding therein happens). And classical intelligences are still bound by classical physics. So I am not sure we can't relate to these other intelligences. We may be limited to some translation layer that does not fully map, but can we still relate to some other consciousness? For that matter consciousness is just another word that vaguely maps to a vast and extremely complex thing in the human brain and each person has a different understanding of what that is. I don't really have any conclusions, you brought up interesting points. We should sit within this realm of inquiry with a lot of humility IMO.


The dolphin question, for me, is about what we'd even communicate with a creature that lives in such a different world. Humans mostly live in a 2D environment, for instance - we walk on flat planes, rarely looking up. We always have the ground beneath us, the unattainable sky above. Dolphins live in a 3D space, visiting the air above regularly to breathe, the "ground" below a varying distance away. I have no idea how that would shape their cognition and language, but I'd be amazed if there are any concepts that we would share and be able to talk about when considering our physical environment. Even basic concepts like "above" and "below" would be hard to talk about.

We have fundamental communication problems between humans who have different cultures, as anyone who has worked in a different culture knows. How much different would a dolphin be? And then how much different would an actual AI be? What concepts would we share and be able to build on to understand each other? How do we avoid the fundamental communication misunderstandings when we don't share any concepts of our reality?


They still have mammalian wet-ware. The dolphin has a relatively advanced neocortex which means they likely have some relatively advanced processing. They also have an interesting part of their brain that we don't have and it is likely for social and emotional information based on their behavior. We suspect they may even have a model of the self.

They still have roughly the same kind of hardware as we do. Their different brain region is kind of like a coprocessor we don't have. But based on their behavior they are likely doing the same things we are. I would say they would be more like an extreme human culture than something alien. They probably have very different category mappings based on echolocation.

I think because we know their brains are doing a lot of things that are analogs to ours, just with different sensory inputs we can reason about a dolphin brain and their semantic concepts and category mappings way easier than an AI. Dolphins do a lot of the same stuff we do. Grief. Social groups. Predicting the future. I would bet at a single level of semantic abstraction we have a lot of concepts that map. They have a lot of the same hormones we do. They react to danger very similar to us. I think a lot maps, we just don't know how to share that with each other beyond observation of one another and offerings like food and things that translate for any mammal.


Good points, interesting.

I think this is probably neuroscience vs psychology. We can explain a lot with neuroscience, but two people with essentially identical brain chemistry can have very different psychology. There are people out there who have beliefs and cognition processes that I find completely incomprehensible despite having the same brain and even sharing a language.

I'm not sure how I'd have a meaningful conversation with an animal that has such a different worldview. I guess there's a simple level of conversation, like that which we have with dogs - fetch the stick, good boy, food, need to wee, love the human, etc. But if that's the limit of what we can discuss with dolphins (or an actual AI) then I'd be disappointed.


But at some level, you can "just be" with the other organism. Eat some food. Make some dopamine. Hang out. I feed my dog. I exercise my dog. I exercise myself. I eat. We sit down together, I pet her. We both create oxytocin and perceive that positively when I pet her. Most animals map that to "safety" or "contentment". Survival needs satisfied for now. Who knows what that maps to for my dog, but we exist in a pretty similar state in that moment of being. That very desire to try and map the dolphin is our "I" narrative that /constantly/ wants to map things out and figure the patterns out.

Dolphin has concept maps between objects and semantic meaning and an "I" narrative. Dog is almost fully present with no narrative constantly mapping past to future. We probably have a lot more in common with dolphin, if we can map that somehow.

https://www.frontiersin.org/journals/psychology/articles/10....

This article is right up this conversation's alley; about chimps being fascinated with crystals. And I am not saying it is wrong to map and communicate, communication means cooperation, deeper connection and meaning, discovering boundaries of if we can socially coordinate and form new and exciting groups and collaboration, etc.


I think that engineering at a large organisation and engineering at a startup are two completely different disciplines with very little crossover.

I use Zed, but not the AI bits of it. Works really well as a plain code editor. I hope they remember there are folks like me who just want a better code editor than the miserable shite that is VSCode, without all the LLM stuff in it.

Not that I have a problem with the LLM stuff, I just use the LLM in a shell and then use Zed to fix the problems in the output.


You might like this then https://gram.liten.app/

ooo, yes I would. Thanks for the tip :)

> And Linux ? Good luck getting decent hardware that will run without having basic functionality issues.

I think that's probably a few years out of date. Certainly, it used to be completely true and was a major problem.

I'm just not finding that now. Drivers are better and more widespread, and there are fewer odd hardware innovations in standard PC components that screw it up.

And, if you want a laptop that runs Linux perfectly, there are more than a few options out there that ship with Linux installed and supported now.


Get serious, none of them have a working fingerprint reader.

I prefer my MacBook, but the Thinkpad whatever I bought to have Windows and Linux available for some software I need occasionally has a fingerprint reader that worked out of the box on Ubuntu.

Since when is using a fingerprint reader on laptops at all common? If that's a requirement for you then fair enough, but not having a fingerprint reader doesn't make a laptop so niche that one would be justified in saying "get serious".

Um.. all MacBooks have had a fingerprint reader for years. Without it, I would be typing passwords a lot more.

My Thinkpad's fingerprint reader worked out of the box.


My framework 13 fingerprint scanner worked immediately out of the box.

It's a good point. I hated that butterfly keyboard, and the Touch Bar was an utterly useless gimmick for me. And they realised that and rolled it back (and added ports again!).

They do eventually listen to their customers. Let's hope it doesn't take as long for these changes to get rolled back.

I'm kinda stuck with Mac at work. I don't mind it, but I run Linux on all my personal computers and find that is way better.


I wonder how much connection there is between those in charge of hardware and those in charge of working software. It would be one thing if the software had a design direction, we all hated it, but it was implemented to its logical conclusion and pure stupid bugs weren't left to linger for years. That would be a matter of difference in taste and vision.

But I wonder if they have the ability to execute... anything, anymore. It's starting to look a little like Windows, which in a totally shameless and burlesque fashion has 3 or 4 design paradigms at the same time, jumbled together in a big stew.


It does feel like the decision making is internal-politics-driven rather than customer-satisfaction-driven, for both Mac and Windows now. Senseless changes that have little in common with other changes.

We've had this for decades with Windows, and internal leaks confirming that it's all to do with turf wars between departmental heads.

As you say, it's an indication that Apple are going down the same road, and are unable to actually execute a vision anymore.


We've seen hackathons where attendees build a SaaS business in a weekend. More than just Startup Weekend validation and a shitty MVP. A pretty-much complete SaaS product. It's a step change.

But this means the market for SaaS products is going to get hit hugely. If you can vibecode up a specific service for your specific requirement in a few days, why bother buying a SaaS product?

And, of course, if you can build a me-too SaaS product that imitates a successful competitor over a weekend, and then price it at 10% of their price, that's going to hit business models.

I think the SaaS startup gravy train is definitely over and done.

Personally, my sense is that there's a lot left to do in batteries + motors + LLMs. The drones in Ukraine could be smarter. Robot companions that can hold a conversation. Voice interfaces for robots generally [0]. Unfortunately, the people making all the batteries, motors, and increasingly the LLMs, are in China. So those of us stuck with idiot governments protecting their fossil-fuel donors are going to miss out on it.

[0] the sketch of two scots in a voice-controlled lift still resonates, though. There's probably still work to do here.


The value in SaaS was never the code, it was the focus on the problem space, the execution, and the ops-included side.

AI makes code "free" as in "free puppy".


Right, there are dozens of open source versions of wikis/task trackers/CRMs/ERPs/whatevers. Just because you can vibecode your way to a bad version of a bunch of SaaS products shouldn't fundamentally change anything. Companies buy SaaS products to make running the thing someone else's problem. It's times like these where I wish we had a functional SEC; I really wonder how much market manipulation is going on.

> The value in SaaS was never the code

I feel like a lot of people are about to learn this lesson for the first time. Except in some very niche areas, the majority of the value was never the code. The SaaSs that everyone thinks will be replaced had much more than code if they were successful: integrations, contracts, expertise, reputation, etc.


Yeah, agreed, but it was at least part of the moat. Competitors can see the model, the approach to market, etc. They still had to code up a better product.

And part of the problem that the SaaS solves is that "I have this thing that I need to do. I can probably do it in software, but I don't know how. Can I buy that software?". Which is now becoming "Can I get an LLM to do it?" instead.


That’s where the “free as in puppy” comes in. It’s still a classic case of build vs buy, except building is now quicker than it used to be. You still have to ask, “suppose I did build it myself. Then what?”

Yeah. So then you get your own product, tailor-made to your organisation, that you own (well, it's public domain because LLM-generated, but same same), and that you can change whenever you want without having to deal with a SaaS company's backlog. If you don't like something in it, you fire up Claude Code and get it changed.

There's also no danger of it being enshittified. Or of some twat of a product manager deciding to completely change the UI because they need to change something to prove their importance. Or of the product getting cancelled because it's not making enough money. Or of it getting sold to an evil corp who then sells your data to your competition. Or any of the other stupid shit we've seen SaaS companies pull over the past 20 years.


Respectfully, I think you’re only considering upsides and not considering downsides, opportunity costs, and ongoing maintenance costs. This is not what smart managers do. Plus, just because you can build something cheaper with an LLM doesn’t mean you can operate it more cheaply than a specialist can. Economies of scale haven’t been obviated by AI.

It’s useful to take an argument and take it to its logical extreme: I just don’t see every company in the world, large and small alike, building everything they depend on in-house, as though they were a prepper stocking up for Armageddon. That seems pretty fanciful on its face.


I hear that, and yeah, it's a possible future but I'm not sure it's the certain one.

The maintenance costs are kinda overrated: you fire up the LLM, point it at the code base and say "this needs fixing". I'd say that the maintenance costs of dealing with endless patches and fixes from a SaaS for features that you don't use is probably more onerous.

And generally we're talking about the situation where the SaaS customer is a domain expert in their area of expertise, but that isn't software development. They can use a system incredibly well, they just can't develop it. They have in-house IT folks to maintain their computers, networks, etc, and they're really just adding a couple of people to develop and maintain some applications via LLM to that team.

We're already seeing some of this, so it'll be interesting to see how far it goes.


Why is it public domain because it's LLM-generated?

As an attorney (and this is not legal advice), I would argue--and the U.S. Copyright Office has already stated--that machine-generated content is not copyrightable, because it's not a form of human creative expression. https://www.copyright.gov/ai/Copyright-and-Artificial-Intell... ("Copyright does not extend to purely AI-generated material, or material where there is insufficient human control over the expressive elements.")

That said, the inquiry doesn't end there. What happens next after the content is generated matters. If human creativity is then applied to the output such that it transforms it into something the machine didn't generate itself, then the resulting product might be copyrightable. See Section F on page 24 of the Report.

Consider that a dictionary contains words that aren't copyrightable; but the selection of words an author makes to write a novel constitutes a copyrightable work. It's just that in this case, the author is creatively constructing from much larger components than words.

Lots of questions then obviously follow, like how much and what kind of transformation needs to be applied. But I think this is probably where the law is headed.


Can the output of the service be licensed? A bit like the AGPL, you're licensed to use/reuse/derive new works.

So if it's distributed outside of the license, that's subject to contractual penalties? I guess that's what all the "wrapper" SaaS businesses will do.

Read that report, it defined the issues and the boundaries well, for the current generation of AI tools. As they develop and expand, it's going to get interesting, especially if robotics/3d printing etc get involved.

If I use an Optimus Prime to help create art, similar to Andy Warhol's "factory", do I own the copyright on the completed work?

If a person uses AI to generate work that ends up being patentable, are patents also not available?


There's already been a case where an artist generated art using an LLM, and could not claim copyright on it [0]. Though the guy was torpedoing his own case by claiming the machine created the entire thing. The courts are supporting copyright claims when using an LLM as a tool to support human effort.

All software licensing depends on copyright. If no-one owns the copyright then it can't be licensed; it's in the public domain immediately and irrevocably.

Of course, if you rip off someone else's work, and they DMCA you, then you might need to prove that they generated the entire thing using an LLM with zero human input. Though there's plenty of folks posting blog posts claiming that that's their process, so it might not be that hard.

Patents are different. For a start they cost money and effort to get. And there are lots of rules around how they're applied and how you can defend them. Very different from copyright.

[0] https://www.cnbc.com/2025/03/19/ai-art-cannot-be-copyrighted...


Sometimes, but I think there are some SaaS products whose business model is really under threat. Look at PagerDuty. Their PE ratio is like 4.4. They have a lot of existing customers but virtually no pricing power now and I imagine getting new business for them is extremely difficult.

Canva is my go-to example - you can just get NanoBanana/whatever to generate and iterate on the image. Same for all those stock photo services. I used to use them a lot, now I just generate blog images

> AI makes code "free" as in "free puppy".

Exactly right


The biggest limiting factor is user acquisition. Just because you can build a competitor in a weekend doesn't mean you can easily acquire a user base. It's damn hard to get users even if your product is twice as good and you're giving it away for free!

The implied risk isn't more SaaS competitors, it's that B2B SaaS consumers will just code up their own product instead of going with a SaaS vendor.

Started seeing even B2C folks just get the LLM to do it, or code up a quick solution that does most of it.

It's interesting that in the UK the traditional two-party system is broken, because everyone realises that both of the traditional parties have been bought by rich folk and business interests, only serve their own interests, and can't be trusted any more. The main contenders now are Reform and The Greens, a situation that no-one predicted five years ago.

The same is true in Australia, though there's no charismatic left-wing leader emerging, and the Farage-equivalent is a laughing stock who struggles to be coherent at times. But because of billionaire money, she's still up there in the polls.

The US system makes it much harder for new parties to form, so it's probably going to be factions in the existing parties. And, of course, MAGA is the new faction in the Republican party; effectively a new party itself. So the ground is fertile for a new left-wing faction in the Democrat party to rise.

