Calling a Polymarket dashboard a "Bloomberg Terminal" is doing a lot of heavy lifting here.
The value of a Bloomberg Terminal isn't the UI (which is famously terrible/efficient); it's the latency, the hardware, the proprietary data feeds, the chat network, and the reliability.
Building a React frontend that fetches some JSON from an API in 2 hours is impressive, sure, but it’s not the hard part of fintech. We need to stop conflating "I built a UI that looks like X" with "I rebuilt the business value of X."
Because to really have a "Bloomberg Terminal" is to have access to all of the sources it aggregates. A vibe-coded "Bloomberg Terminal" is just an interface for slop sources. It's the information sources and the validation that make it worth the price. "Vibe coding" is the most superficial of all possible implementations. "Slop coding" is a more appropriate term, because the acceptance criterion is "looks passable to me".
How big were the lots? How far a walk was it to the closest bar, grocery store, or cafe? Do you have to walk onto someone's property to talk to them if they are sitting on the porch?
I lived in a car-dependent burb for 20+ years and would rarely, if ever, run into my neighbors out on the town. I've lived in a walkable neighborhood in a medium-low density city for under a year, and I regularly run into my neighbors.
Standard 0.25-acre suburban lots. No markets, cafes, or anything like that; it was a bog-standard subdivision. There was a small park sort of centrally located, but that was really the only amenity. The supermarket was a few miles away. Nobody walked there; you took a car to go anywhere. Neighbors still knew one another, at least on the same streets. Kids met at school and figured out where each other lived.
I imagine a combination of stop loss and market share. If larger shops use up compute, you can't capture as many customers by headcount.
// There was a figure floating around for o3, an astonishing model punching far above the weights (ahem) of models that came after it, suggesting that its thinkiest mode cost on the order of $3,500 per deep-research run. Perhaps OpenAI can afford that, while Anthropic can't.
Sounds plausible that they're not really making any. Arbitrary and inflexible pricing policies aren't unusual, but it seems easy enough for a new, rapidly growing company to let the account managers decide which companies they have a chance of upselling to 150-seat enterprise licenses, and just bill overage for everyone else...
That leads to the obvious question: is the API next on the chopping block? Or would they just increase API pricing to the point where A) they're making a profit off it, and B) nobody would use the API just to power a different client?
I'm pretty sure everyone is pricing their APIs to break even, maybe turning a profit if people use caching properly (like GPT-5 can if you structure the prompts for it).
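To see why caching moves the margin so much, here is a back-of-the-envelope sketch. All prices and the cache discount are illustrative assumptions for the arithmetic, not any provider's real rates.

```python
# Illustrative cost model for prompt caching. All numbers are
# assumptions for the sake of the arithmetic, not real pricing.
PRICE_PER_M_INPUT = 2.50  # $ per 1M uncached input tokens (assumed)
CACHED_DISCOUNT = 0.90    # cached tokens billed at a 90% discount (assumed)

def request_cost(prompt_tokens: int, cached_tokens: int) -> float:
    """Input cost of one request, given how many tokens hit the cache."""
    uncached = prompt_tokens - cached_tokens
    cached_rate = PRICE_PER_M_INPUT * (1 - CACHED_DISCOUNT)
    return (uncached * PRICE_PER_M_INPUT + cached_tokens * cached_rate) / 1_000_000

# A chat app resending a 50k-token system prompt + history every turn:
cold = request_cost(50_000, 0)       # no cache hit
warm = request_cost(50_000, 48_000)  # static prefix served from cache
print(f"cold: ${cold:.4f}, warm: ${warm:.4f}")
```

Under these made-up rates, the cached request costs roughly a seventh of the uncached one, which is the difference between break-even and profit at scale.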
There will be no royalties; simply require that all models trained on the public internet also be public.
This won't help tailwind in this case, but it'll change the answer to "Should I publish this thing free online?" from "No, because a few AI companies are going to exclusively benefit from it" to "Yes, I want to contribute to the corpus of human knowledge."
It can. The problem is the practice of using open source as a marketing funnel.
There are many projects that love to brag about being open source (it's "free"!), only to lock useful features behind a paywall, or do the inevitable license rug-pull after other companies start profiting from the freedoms they were granted. It's the same tactic drug dealers use to get you hooked on the product.
Instead, the primary incentive to release a project as open source should be the desire to contribute to the corpus of human knowledge. That doesn't mean that you have to abandon any business model around the project, but that shouldn't be your main goal. There are many successful companies built around OSS that balance this correctly.
"AI" tools and services corrupt this intention. They leech off the public goodwill, and concentrate the data under the control of a single company. This forces well-intentioned actors to abandon open source, since instead of contributing to human knowledge, their work contributes to "AI" companies. I'm frankly not upset when this affects projects that were abusing open source to begin with.
So GP has a point. Forcing "AI" tools, and even more crucially, the data they collect and use, to be free/libre, would restore the incentive for people to want to provide a public good.
The narrative that "AI" will bring world prosperity is a fantasy promoted by the people who will profit the most. The opposite is true: it will concentrate wealth and power in the hands of a few even more than it is today. It will corrupt the last vestiges of digital freedoms we still enjoy today.
I hope we can pass regulation that prevents this from happening, but I'm not holding my breath. These people are already in power, and governments are increasingly in symbiotic relationships with them.
> The narrative that "AI" will bring world prosperity is a fantasy promoted by the people who will profit the most. The opposite is true: it will concentrate wealth and power in the hands of a few even more than it is today. It will corrupt the last vestiges of digital freedoms we still enjoy today.
I feel that, to be consistent, the output of that model should also be under that same open license.
I can see this being extremely limiting for training data, as only compatibly licensed data could be packaged together to train each model.
So what? Figure it out. They have billions in investor funding and we’re supposed to just let them keep behaving this way at our expense?
Facebook was busted torrenting all sorts of material in violation of the same laws/regulations that would get my internet cut off by my ISP. They did it at scale and faced no consequences. Scraping sites, taking down public libraries, torrenting: they just do whatever they want with impunity. You should be angry!
I'm getting sick and tired of people on Twitter/X making wild claims that they built a profitable app with 100% vibe coding, so I started poking around, and I can almost always find a business-destroying vulnerability.
In this case it was a user claiming their app is doing $60k MRR while, get this, building a vibe coding management platform & boilerplate. Quite the house of cards.
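For what it's worth, the most common class of business-destroying flaw I see in vibe-coded backends is broken access control: the handler trusts a client-supplied record ID instead of checking it against the authenticated session (an insecure direct object reference). A hypothetical sketch, with all names and data invented for illustration:

```python
# Invented in-memory "database" for illustration.
INVOICES = {
    1: {"owner": "alice", "total": 120},
    2: {"owner": "bob", "total": 9999},
}

def get_invoice_vulnerable(invoice_id: int) -> dict:
    # Typical vibe-coded handler: no ownership check, so any
    # logged-in user can read any invoice by guessing IDs.
    return INVOICES[invoice_id]

def get_invoice_fixed(session_user: str, invoice_id: int) -> dict:
    # Enforce that the requested record belongs to the session's user.
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice["owner"] != session_user:
        # Same error for "missing" and "forbidden", to avoid
        # leaking which IDs exist.
        raise PermissionError("not found")
    return invoice

print(get_invoice_vulnerable(2))             # alice can read bob's invoice
print(get_invoice_fixed("bob", 2)["total"])  # bob reading his own: fine
```

The fix is one conditional, which is exactly why it's so damning when it's missing: the generated code "works" in every happy-path demo the author ever runs.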
You give the agent a URL, it records itself going through UX flows, you hand that video to a coding agent, and you have quite a feature.