We can now make $1 million commercials for $100,000 or less. So a 90% reduction in costs - if we use AI.
The issue is they don’t look great. AI isn’t that great at some key details.
But the agencies are really trying to push for it.
They think this is the way back to the big flashy commercials of old. Budgets are lower than ever, and shrinking.
The big issue here is really a misunderstanding of cause - budgets are lower because advertising has changed in general (TV is less and less important) and a lot of studies have shown that advertising is actually not all that effective.
So they are grabbing onto a lifeboat. But I’m worried there’s no land.
So my understanding - from a friend at WPP who told me the same, and from a Freakonomics episode - is that advertising was wildly oversold before digital.
When the metrics arrived with digital, they saw that advertising, in some ways, was just not as effective as they’d hoped. In some ways the ROI wasn’t there. Seth Godin agrees. He says that advertising in the digital era could be as simple as just having a good product. I think this is Tesla’s position on it - make the best product and the internet takes care of it.
Legacy companies have kept large ad budgets, but those are diminishing. From what my friend at WPP told me, their data science team showed that outside of a new product, or a product not recognised by consumers, the actual outcomes from ads are marginal or incremental. That's what he told me. If your product is already known to consumers, the ROI is questionable.
Always felt suspicious to me that so much of company dynamics is basically about selling yourself to management... and there's one team in the company whose full-time job is selling? Wonder how that will turn out.
None of my coworkers could figure out why I was laid off, and were shocked because I was important to getting the work done, but management made it clear I hadn't been selling myself to management.
My exit is storytelling. I think that’s the only thing that will remain. I suspect humans will still want to hear stories about and from other humans.
There’s something about AIs that feels wrong for storytelling. I just don’t think people will want AIs to tell them stories. And if they do… Well, I believe in human storytelling.
The models are so good that incremental improvements are not super impressive. We would literally benefit more from redirecting maybe 50% of model spending into implementation across the services and industrial economy. We are lagging in implementation, specialised tools, and hooks so we can connect everything to agents. I think.
And you think the US, now currently sliding into authoritarianism itself, will install an enlightened democracy upon the Iranians?
This is WW3 in slow motion. The goal is to take over Eurasia and contain the Russian-Chinese alliance by eating away at the edges and removing all unaligned or hostile energy sources.
Remember how much toppling Saddam Hussein, killing a million Iraqis, rounding up and torturing thousands of random Iraqi civilians, destroying most of the country's vital infrastructure, and selling their oilfields to American companies at bargain prices helped Iraqis? It's going to be the same for Iran. There's going to be massive suffering.
But at the end of 1 million deaths (est.), Iraqi oil was dollarized.
Saddam had been selling dollars for euros and talking about shifting his oil to other currencies for years. 2003 put an end to that - literally the first thing the provisional government did was make sure all Iraqi oil was sold in dollars.
The Petrodollar was not in jeopardy anymore, and for the post-1971 system, that was essential. Same thing is now happening with Iran and Venezuela. The real goal is - China must not be allowed to have substantial sources of energy that are not priced in dollars.
Yes, I assumed a mass surveillance Palantir program also. Interesting take on how it allows them to claim “we are not doing this” while asking Anthropic to do it.
Of course they can just say - we aren’t, Palantir is.
Claude Opus is just remarkably good at analysis IMO, much better than any competitor I've tried. It was remarkably good and complete at helping me with some health issues I've had in the past few months. Imagine turning that kind of analytical power toward observing the behaviour of American citizens, and perhaps changing it - making them vote a certain way. Or something like finding terrorists, or finding patterns that help you identify undocumented people.
I have used ChatGPT 5.2 thinking for health; Gemini hallucinates a lot, especially with DNA analysis. Never tried the new Claude even though I have access through Antigravity. Might give it a try. Do you have any tips on how to approach it for health 'analytical power'?
I just made a project, added all my exams (they were piling up; my psychiatrist and I had been investigating this for a year to no avail) and started talking to it about my symptoms.
Within a few iterations it gave me a simple blood panel. I did that one, and it kept suggesting more simple lab or at-home tests, and we kept going through them until I was reasonably certain of "something". Now that I have a hypothesis, I am going to a doctor. I think it's done a great job. I also kept asking it for simple lifestyle interventions to prevent progression of my issue and it consistently nailed it - one particular intervention (adding salt to water and drinking it to prevent symptoms) made a huge improvement to my life - I was barely working before that.
I added some text to the instructions box (project master prompt) for it to realise:

- It's not medical advice and I am aware of that (prevents excessive guardrails).

- Add confidence intervals and probability to all diagnostic statements (prevents me + Claude going into rabbit holes so easily; it often has 70-80% certainty of what it's saying, but it's clear it doesn't use the right language).

- It's talking to a non-expert, so use simple language but go into detail when necessary.

I also ask it to stop doing unnecessary constant follow-up questions to every answer, as that causes me anxiety. I can share the prompt; in fact I might do so later as it might be useful to others.
Make sure your first chat is about the exams in the project files. Make sure it reads them all. It has a tendency to read a few and go “is this good”. Ask for a summary and note any absences.
Try using the research and extended thinking features a lot if you think it’s not fully aware of anything. It might not be aware of more recent research. If it’s a serious condition you are researching, just ask it to do sweeps / use research to look for new info about it and find new papers. It might also deepen its understanding.
After you do research you can make a simple artefact and throw it onto the project files. That allows it to refer to it and gain more knowledge about a condition or issue that might not be as rich in the training data.
So, I find GPT to be so, so bad for this that it made me realise a bit of why the USG is so insistent. Claude Opus is just in a different class.
Here’s the master project prompt:
Act as an expert who's talking to an interested layman. Engage in detail when requested but be overall succinct in your answers. Short sentences are fine, no need to be lengthy. Do deep research. When arriving at any kind of conclusion or hypothesis, assign it a probability and a confidence interval - define this in percentages, as in "90%".
On Artefacts - all artefacts should be just text and markdown. Never do anything more complicated with formatting, unless by explicit request.
Don't ask follow-up questions unless it's to make for a better diagnosis. I.e. don't keep asking questions just to keep the conversation going, please. But never hesitate to ask questions if it makes for better outcomes.
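If anyone wants to drive this from the API instead of the Projects UI, the same master prompt can be passed as a system prompt. A minimal sketch using the Anthropic Python SDK - the model id, the shortened prompt text, and the whole wiring here are my assumptions, not something from the Projects setup above; adapt to whatever you have access to:

```python
# Sketch (assumptions, not the actual project setup): reuse the master
# prompt as a `system` prompt via the Anthropic Python SDK.

# Condensed version of the master prompt above (hypothetical wording).
MASTER_PROMPT = (
    "Act as an expert who's talking to an interested layman. "
    "Be succinct; go into detail only when requested. "
    "When arriving at any conclusion or hypothesis, assign it a "
    "probability and a confidence interval, in percentages (e.g. 90%). "
    "All artefacts should be plain text and markdown. "
    "Don't ask follow-up questions unless they improve the diagnosis."
)

def build_request(question: str) -> dict:
    """Assemble keyword arguments for a messages.create() call."""
    return {
        "model": "claude-opus-4-20250514",  # assumed model id - substitute your own
        "max_tokens": 1024,
        "system": MASTER_PROMPT,
        "messages": [{"role": "user", "content": question}],
    }

def ask(question: str) -> str:
    """Send one question; needs `pip install anthropic` and an API key set."""
    import anthropic
    client = anthropic.Anthropic()
    resp = client.messages.create(**build_request(question))
    return resp.content[0].text
```

Keeping the prompt in `system` rather than pasting it into every message is what makes the guardrail/confidence instructions stick across the whole conversation.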
The purpose of Twitter is IMO no longer to be profitable.
For a man with a trillion-dollar fortune it's just his personal equivalent of Fox News, a way to shape the nation's conversation.
Plus a way to get data for xAI.
In that regard it's a huge success. I use Grok to find out about stuff on X and it's very effective. Grok is also nowhere near as bad as it should be (it's still not great).
A way to get data for xAI? Eh, I guess. But it's a source of bad data. Most social media is, even the best case is stuff like Stack Overflow. It wouldn't surprise me if this was at least a strong component of why Grok called itself "Mecha Hitler".
Huge success? Unfortunately I have to agree, given the US government still ended up integrating it despite the Mecha Hitler incident.
> I use grok to find out about stuff on X and it’s very effective.
As with all of these things, I have to ask: How confident are you that it's telling you true things, rather than just true-sounding things? My expectation is Grok will be overtraining on benchmarks (even relative to the others, who will also be doing so at least a bit), and Grok's benchmarks will include twitter reactions, and it will be Goodhart's-law-ing itself in the process to maximally effective rhetoric rather than maximally effective (even by the standards of other LLMs) "truth-seeking".
* plural, not "the", it also works in at least the UK as well as the US
You can ask Grok to "find me this tweet on X, with direct links for sources" and it will do that. It's basically a supercharged fuzzy search engine for X, which is great, since a lot of my searches are half-remembered tweets that I'd like to find again.
So it’s accurate in the sense that it’s accurate finding things on X. I don’t really use it for anything else.
Thanks, that makes sense. I read too much into your previous comment and thought you were finding out more about things beyond twitter after they were discussed on twitter.