I recall that in the late 1990s, physical synthesis was thought to possibly be the next big thing, that it might take over synthesis of musical instruments entirely from the then-current options of wavetables and FM synthesis. It didn't, but my point is that this is where it stood: a prominent alternative that everyone in the relevant fields was aware of and that many people tried to make work, not a recent invention and not just an obscure academic pursuit.
For myself, I've curated my recommendation algorithm down to the point that I don't mind the shorts I get recommended; they're generally from content creators I like anyhow, or creators that use shorts as their primary medium in ways I'm generally OK with. But the UI is trash. For some reason I can cast normal videos to my Roku, but if I try to cast a short, it cancels casting, quite explicitly, with a popup saying "hey, this is going to cancel casting, are you sure?" Yet the Roku YouTube app is perfectly capable of navigating to a short in its own UI and playing it.
And no matter how much I curate the algorithm, the thing it wants to play next in the Shorts UI is effectively random to me. Not once have I seen even a decent recommendation there. Maybe I'm hitting some weird edge case, because I'm having the opposite problem from what some people report: Shorts aren't horrifically addictive for me, and far from being unable to stop scrolling, I can't start. The recommendations in my feed are OK, but the "next short" is uniformly terrible for me.
That's why I try to prune them down a bit.
I keep up the fight because, as a recent article noticed, YouTube is still a unique video service with an astonishing amount of high-quality content from small creators: fascinating math videos, how-to videos, etc. I'm more-or-less winning the fight with the algorithm at the moment, and it still often turns up interesting things. But it is a constant fight to keep it from becoming a lowest-common-denominator feed. Goodness help you if someone links you a YouTube video of a cat being stupid, or anything political; get that watch out of your History before you forget.
Huge, huge numbers of machines behind a single external IP mean that your internet access carries all their reputation by proxy. Since switching from Comcast to a smaller fiber company that uses CGNAT, I've seen somewhat more Cloudflare challenges.
Short-short version: code will still accrue value in proportion to how much of the real world it has encountered. The bottleneck on building valuable code will be how much real world there is to go around. As is so often the case, what may initially seem to kill SaaS companies will actually make them stronger, as they end up with more exposure to the real world than some random guy's random AI code.
I still think even now they could print money if they'd just do an honest adaptation of the Thrawn trilogy. Even with all the damage they've done to the brand.
It's what they probably should have done from day one.
I'm not saying the Thrawn trilogy is like the highest art ever made. It's glorious pop schlock fun, as Star Wars should be. But it reminds me of the way that manga is so often treated as storyboards for the anime adaptations. The books, being books, can't be quite so directly translated, but it's close enough that it shouldn't strain any competent scriptwriter. Assuming Disney still knows how to find such people, a proposition not well supported by recent evidence.
Also, it's an automatic three-movie plan, which could be used to fix up one of Hollywood's biggest problems. I don't know where Hollywood gets its swaggering confidence that it can make multi-movie epics while simultaneously having no plans whatsoever for what the next movie will be, after its repeated, catastrophic, and expensive failures trying to create them. What if, and hear me out here, try not to let your head explode at the audacity of this idea, they didn't spend billions of dollars just sort of "winging it"? What if they had an actual plan for how they were going to spend billions of dollars over the course of a decade? I know, I know, it's a crazy idea, but maybe they should give it a try.
> The books, being books, can't be quite so directly translated, but it's close enough that it shouldn't strain any competent scriptwriter.
It should be noted that many writers want to make their own mark and don't care too much about the original work and its popularity… which kind of defeats the purpose of "adapting" it.
I've heard The Witcher suffers/ed from this (≥S02), as some writers hated the original stuff (in which case, why are you here and/or why were you hired?).
Even Shadows of the Empire as a standalone movie would be a breath of something new and fun.
Maybe not the highest art ever, but both were solid stories, memorable enough for us to remember decades later. Thrawn was a respectable villain: we weren't supposed to like him, but we couldn't deny his competence.
As for memorable blue people: for comparison, I've watched Avatar twice and don't remember a thing about it.
So much ripe fruit just sitting there that the decision to wander away and do their own thing would still have been objectively crazy even if they had fully succeeded. It just would have been an objectively crazy decision that they managed to succeed despite making.
>"I still think even now they could print money if they'd just do an honest adaptation of the Thrawn trilogy. Even with all the damage they've done to the brand."
They could print money by just having Bob Iger stand silently in front of a camera flipping off the audience for two hours and calling it Star Wars. Every one of those films was profitable. They'll keep doing it, and people will keep paying for it.
I fed an unpublished draft of mine to an AI. I saw it searching the internet and prompted it with the fact that it could stop searching, since the draft was not published. From there it guessed on the spot that the author was me, which I thought was kind of funny. Can't deny the meta-logic there.
It referred to me by my login name on the AI site rather than the name it would have used if it had actually found my website, so I think it was more logic than an actual identification. But that had clearly contaminated the search enough that it was no longer a valid test.
Which does make me wonder about the original article: if the AI has in context any sort of clue that the user is "Kelsey Piper" (a memory of their name, a username of kpiper or kelseyp, etc.), that will radically tip the balance in favor of the AI guessing that way, just by the nature of LLMs. That is to say, it greatly increases the odds of that guess even if it's wrong.
Even if that is the case, though, the general identifiability of writing remains true. It's been shown for a while with techniques a lot less powerful than a frontier LLM.
It is weird to me that Amazon chose a fairly common name. There are plenty of short, more distinctive names out there.
I have ours set to “Computer” anyways, partly due to Star Trek and partly because it annoys my wife when we use the term in conversation and it picks it up. It has the side effect of being harder to pronounce for our kids, which was probably a good thing.
And when the CEO says "Hey, we really need to make our contact information more visible because I get a lot of customer reports that they can't figure out how to contact us", sure.
When the founders say they want the picture bigger and the logo a bit more purple and can we add underlines to all the menu items and also bold them, probably not.
> When the founders say they want the picture bigger and the logo a bit more purple and can we add underlines to all the menu items and also bold them
Simple: they’re trying to give you the solution, and it’s your duty as the responsible designer/developer to find out what problem they see. Here’s a nice set of questions I’m using (from Managing projects, people, and yourself [1] by Nick Toverovskiy):
1. What did you mean by that?
2. Why is it important?
3. How is this related to the purpose of the project?
4. How does this relate to other parts of the system? What else could be affected by this change?
5. Why is it critical to resolve this before the next release / deadline?
This should paint a fairly decent picture of what’s really on your client’s (or manager’s) mind. Then you can propose a solution to the real problem – which might very well be the one that your client has proposed!
(Some questions might sound stupid in context. You can skip them, or just admit it: “I’m gonna ask some questions which might make me sound like an idiot, but that would really help me figure out the problem better. Would that be alright with you?”)
My problem with most of these books is they are indirectly trying to solve the real problem. The problem that IME HN is allergic to discussing.
Power Dynamics.
The reason the CEO is nitpicking your job is that he is not a good CEO and doesn't know his place or how to do his job. Almost all these books are an indirect way of dealing with the fact that this person is an ID10T and you have to deal with them because they have more power than you. Yet that is literally NEVER discussed.
The books (IDK about this one) really summarize indirect ways of being subservient and not accidentally antagonizing your "superiors", who are frequently just people born into a better lot in life than you, without feeling like that is what you are doing.
What are the CEO's primary duties? Networking? Sales? COMMUNICATING. Yet it's your job to read books on how to tiptoe around sussing out what they cannot COMMUNICATE?
I'm a pretty opinionated engineer but I'll still volunteer that in a majority of "engineering" disputes, I care more about having a coordinated and consistent approach than I do about the absolute tack taken.
Maybe I've just been lucky to mostly work with decent managers, but basically I consider the tie-breaking function to be intrinsically valuable.
With this particular book, the prerequisite is that your client is trying to achieve something, yeah. I think I know the type of CEOs and CTOs you're talking about: the ones that only want to sound smart and don't really care about the end result. Unfortunately, there's not much you can do in that case apart from looking for a workplace where people do care about what they do.
We do it like that with everything. If you consider yourself an artist, it is quite simple to say you can't put your name on it if anything changes. You can also explain what you just wrote: you've hired me, trust me to do it and focus on your own tasks. Or: we will be different from others, but in a limited number of ways, and your suggestions don't offer enough ROI to make the cut.
> it’s your duty as the responsible designer/developer to find out what problem they see.
I tried with wildly varying degrees of success to impress this on my fellow developers for decades. In every case it was an utterly new and foreign idea to them, including those who had actually studied computer science at degree level.
In general a neural net does not have any way of knowing "why" it is doing what it is doing. This completely applies to humans too. Metacognition means we can make some decent guesses, and sometimes the "reasons" are at a metacognitive level (e.g., "having examined my three options it is only rational to select B" is a reasonable "reason") but that is the exception, not the rule.
You can get something of an intuitive sense of what I mean if I ask you to pick a neuron in your brain and tell me when it fires. You can't even pick a neuron in your brain. You can't even tell whether a broad section of your brain is firing. It is only through scientific examination that we have any idea what parts of the brain are doing what; we certainly have no direct access to that information. There are entire cultures who thought the seat of cognition was the heart or the gut. That's how bad our access to our own neural processes is.
So "why" explanations always need to be taken with a grain of salt when a neural net (again, yes, fully including humans) tries to "explain" what it is doing.
Contrast this with a symbolic reasoner, which has nothing but "why" some claim is true (if it yields the full logic train as its answer and not just "yes"/"no"), no pathway for any other form of information to emerge.
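To make the contrast concrete, here is a toy sketch in Python of a forward-chaining symbolic reasoner (the rules and facts are invented for illustration). Because every derived fact is produced by a named rule firing on known premises, the complete "why" is just the derivation record itself; there is no other channel by which the answer could have arrived:

```python
# Toy forward-chaining reasoner. Each derived fact records the premises
# that produced it, so the full proof tree is available by construction.

# Horn-clause rules: (conclusion, [premises]) -- hypothetical examples.
RULES = [
    ("mortal(socrates)", ["human(socrates)"]),
    ("human(socrates)", ["greek(socrates)"]),
]
FACTS = {"greek(socrates)"}  # ground facts, taken as given

def derive():
    """Fire rules to a fixed point, recording why each fact holds."""
    proofs = {f: "given" for f in FACTS}
    changed = True
    while changed:
        changed = False
        for head, body in RULES:
            if head not in proofs and all(p in proofs for p in body):
                proofs[head] = body  # the premises ARE the explanation
                changed = True
    return proofs

def explain(fact, proofs, depth=0):
    """Print the complete derivation tree for a fact."""
    pad = "  " * depth
    if proofs[fact] == "given":
        print(f"{pad}{fact}  (given)")
    else:
        print(f"{pad}{fact}  because:")
        for premise in proofs[fact]:
            explain(premise, proofs, depth + 1)

proofs = derive()
explain("mortal(socrates)", proofs)
```

Asking a neural net "why" gets you a post-hoc guess; asking this reasoner "why" gets you the only thing it has, the chain of rule firings.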
Sure; I just mean relative to the degree of plausibility LLMs typically provide with technical explanations. They're often wrong there too, but the difference in plausibility in these scenarios is something I found interesting.