
> Even Copilot loses money, to the tune of around $20 to $80 a month per user, according to The Wall Street Journal

Wow. Does that mean that once the VC money dries up that will be the real cost of the products ($30-$90 a month)? That is much less accessible than $10.



OpenAI/MS are also building a DB of code recommendations and user fixes (GPT writes crap, you fix it), which is an excellent asset. I expect in 5 years it’ll be a couple of sentences to replicate most of whatever startups we create while using Copilot, but now with the fixes built in.

I’ve used it constantly for a while now and assume all files in the project are basically training material to replicate what I’m doing. Annoying, but that’s what I get for not using local or a service from a privacy focused provider. Although I am using Claude in my scripts to distribute my footprints.

I’ve added my name to the wait list. This seems like a company that intends to respect IP, and that’s worth something.


The cost is supposed to go down as LLMs become more efficient and hardware efficiency goes up with time.


I fail to see what advantage they get from subsidising users in the meantime, and I definitely agree that these cloud AI solutions are subsidised.

Still, it gets worse for these cloud models: if I'm using, say, VS Code, all I have to do is install a different extension, configure a few settings, and Bob's your uncle, I'm using a different AI. What benefit exactly do companies get from blowing hundreds of dollars subsidising my usage again? Where's the moat? Where's the lock-in?


Patterns of behavior (people will often keep using what they're used to, or assume their favorite model gives better answers regardless), free QA and training data, free advertising.

Even when the switching cost is lower than what you describe, people tend to keep using the product they had gotten used to.


In business school you learn that you can't compete with "as good as". If you are using Copilot and another company comes along that is "as good as Copilot", there is very little incentive for people to switch.


This is a lot like the Amazon strategy. At one point you probably had a few alternatives in mind for online shopping. Much less so now.


High costs are due to inefficient chips and overpowered models. It’s like we discovered TNT and now we’re using it to weed a garden. Purpose-built chips and TPUs and pruned models will be cheaper. You can do speech-to-text on a $10 chip. https://www.nature.com/articles/s41586-023-06337-5

Imagine this chip for JavaScript.


If it's, say, $50/month, that will be easily affordable for businesses and not really most other users. (I'm assuming it's providing nontrivial benefit, of course)


It's probably expected that the cost goes down with time. Also, what VC money? Copilot is Microsoft.


WSJ is wrong. Autocomplete code models are small (5B param) and very cheap. A single inference is roughly $0.000001.
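That order of magnitude is easy to sanity-check with a back-of-the-envelope calculation. Every number below is an assumption for illustration (completion length, GPU throughput, and rental price are not from the comment), not a measured figure:

```python
# Rough check of the "~$0.000001 per autocomplete inference" claim.
# All inputs are assumed round numbers, not measurements.

params = 5e9                  # 5B-parameter autocomplete model (per the comment)
tokens_per_completion = 30    # assumed average completion length
flops_per_token = 2 * params  # ~2 FLOPs per parameter per generated token

gpu_flops_per_sec = 1e14      # assumed sustained throughput of one datacenter GPU
gpu_cost_per_hour = 2.0       # assumed cloud rental price, USD

total_flops = flops_per_token * tokens_per_completion   # 3e11 FLOPs
gpu_seconds = total_flops / gpu_flops_per_sec           # ~3 ms of GPU time
cost = gpu_seconds / 3600 * gpu_cost_per_hour
print(f"~${cost:.1e} per completion")                   # on the order of 1e-6 USD
```

Under those assumptions the per-completion cost lands around a couple of millionths of a dollar, which is consistent with the figure above; real serving costs add batching overhead, idle capacity, and the chat product on top.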


Copilot uses GPT4 now


I mentioned autocomplete, not the chat service.


GPT4 isn’t a chat service, friend.


I think the point that ipsum2 is trying to make is that Copilot's chat service and its code completion service could be using different models, which is not uncommon for coding assistants.

Continue[0] for example can use up to 3 different models in a session: a large model such as GPT-4 Turbo for chat and code QA, a smaller low latency model such as StarCoder2-3B for code completion, and yet another model such as all-MiniLM-L6-v2 for generating embeddings.

[0] https://www.continue.dev/
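As a sketch of what that three-model setup can look like, here is a hypothetical Continue `config.json` fragment (field names and provider/model values are illustrative and should be checked against Continue's configuration docs):

```json
{
  "models": [
    { "title": "GPT-4 Turbo", "provider": "openai", "model": "gpt-4-turbo" }
  ],
  "tabAutocompleteModel": {
    "title": "StarCoder2-3B", "provider": "ollama", "model": "starcoder2:3b"
  },
  "embeddingsProvider": {
    "provider": "transformers.js", "model": "all-MiniLM-L6-v2"
  }
}
```

The point being that chat, tab-completion, and embeddings are configured independently, so each can use a model sized for its latency and quality needs.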



