Hacker News | Insensitivity's comments

No matter what I wrote in the audio profile, AI Studio never followed it, regardless of scene or context.

For example, I tried to get a male voice and kept getting female ones. I'm not sure if it's an AI Studio bug or if I was doing something wrong.


The voice is determined by the voice parameter; you can't control it via the prompt. The prompt only directs how the chosen voice delivers the lines.
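For example, sketching the shape of a Gemini TTS request (treat the exact field names as an assumption to check against the current docs; "Kore" is one of the prebuilt voice names):

```typescript
// The voice is selected by config, not by the prompt text. The prompt
// only controls delivery (tone, pacing, emotion).
const request = {
  contents: [{ parts: [{ text: "Say cheerfully: have a wonderful day!" }] }],
  generationConfig: {
    responseModalities: ["AUDIO"],
    speechConfig: {
      voiceConfig: {
        // This is where the voice (and hence male/female) is chosen.
        prebuiltVoiceConfig: { voiceName: "Kore" },
      },
    },
  },
};
```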


> LLMs are pretty good at picking up the style in your repo. So keeping it clean and organized already helps.

At least in my experience, they're good at imitating a "visually" similar style, but they'll introduce a lot of hidden coupling that's easy to miss, since they don't understand the concepts they're imitating.

They think "Clean Code" means splitting code into tiny functions rather than cohesive ones. The Uncle Bob style of "Clean Code" is horrifying.

They're also very trigger-happy about adding methods to interfaces (or contracts) that leak implementation details, or that exist only for testing, which means the tests end up testing implementation rather than behavior.
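A made-up illustration of that failure mode (all names here are invented):

```typescript
// Leaky: the interface exposes storage internals and a test-only hook,
// so tests written against it end up pinned to the implementation.
interface LeakyUserStore {
  getUser(id: string): string | undefined;
  flushInternalCache(): void; // implementation detail
  _setUserForTest(id: string, name: string): void; // test-only hook
}

// Behavior-focused: callers (and tests) only see what the store does.
interface UserStore {
  getUser(id: string): string | undefined;
  addUser(id: string, name: string): void;
}

class InMemoryUserStore implements UserStore {
  private users = new Map<string, string>();
  addUser(id: string, name: string): void {
    this.users.set(id, name);
  }
  getUser(id: string): string | undefined {
    return this.users.get(id);
  }
}
```

Tests against the second interface keep working when the storage changes; tests against the first break on any refactor.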


Yeah. We have an AI reviewer. Just now I had a PR where I didn't normalize paths in some configuration and then compared them. I.e. let's say the configuration had

    file = /foo/bar/
and then my code would do:

    if file == other_file:
        ...
instead of:

    if normalized(file) == normalized(other_file):
        ...
and the AI reviewer suggested a fix that removed the trailing slash instead. I.e. the fix would have "worked", but it would've been a bad fix, because the configuration isn't under the program's control, so it can't ensure the paths are normalized.

I've encountered a lot of attempted "fixes" of this kind. So, in my experience, it's good to have AI look at the PR, because it's just another free pair of eyes, but it's not very good at fixing the problems when it does find them. It also tends to miss conceptual problems while concentrating on borderline irrelevant issues (e.g. flagging that a CI script doesn't handle the case where Git isn't installed, a use case beyond the scope of the program and of no real merit).
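The correct fix, in TypeScript rather than the pseudocode above (`normalized` and `samePath` are hypothetical helper names), is to normalize both sides before comparing, so the result no longer depends on how the user happened to write the path in the configuration:

```typescript
import * as path from "node:path";

// Collapse "." segments and redundant separators, and strip a trailing
// slash, so "/foo/bar/" and "/foo/./bar" both become "/foo/bar".
function normalized(p: string): string {
  const n = path.posix.normalize(p);
  return n.length > 1 && n.endsWith("/") ? n.slice(0, -1) : n;
}

// Both sides are normalized, so the comparison doesn't rely on the
// configuration file containing pre-normalized paths.
function samePath(file: string, otherFile: string): boolean {
  return normalized(file) === normalized(otherFile);
}
```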


And so many helper factories in places that don't need it.


They could've come out and said "Hey, due to increased demand, we're reducing the limit", but instead they chose to do a rug pull in the guise of an "off-hours" limit increase, while absolutely butchering the limit in both peak and non-peak hours.

Additionally, they chose to gaslight the users: wait until people made a lot of noise, "investigate", and reach the conclusion that the same users who have been using their tools, with the same exact models and workflows for months, suddenly can't estimate their own usage compared to previous months.

Such bad business etiquette, but nothing unexpected.


The "useCanUseTool.tsx" hook is definitely something I would hate seeing in any codebase I come across.

It's extremely nested; it's basically if-statement soup.

`useTypeahead.tsx` is even worse: extremely nested, with a ton of "if else" statements. I doubt you'd look at it and think it's sane code.


  export function extractSearchToken(completionToken: {
    token: string;
    isQuoted?: boolean;
  }): string {
    if (completionToken.isQuoted) {
      // Remove @" prefix and optional closing "
      return completionToken.token.slice(2).replace(/"$/, '');
    } else if (completionToken.token.startsWith('@')) {
      return completionToken.token.substring(1);
    } else {
      return completionToken.token;
    }
  }
Why even use else if with return...


> Why even use else if with return...

What is the problem with that? How would you write that snippet? It's common in the new functional JS landscape, even if it is pass-by-ref.


Using guard clauses. Way more readable and easy to work with.

  export function extractSearchToken(completionToken: {
    token: string;
    isQuoted?: boolean;
  }): string {
    if (completionToken.isQuoted) {
      return completionToken.token.slice(2).replace(/"$/, '');
    }
    if (completionToken.token.startsWith('@')) {
      return completionToken.token.substring(1);
    }
    return completionToken.token;
  }


I always write code like that. I don't like early returns. This approximates `if` statements being an expression that returns something.
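For what it's worth, the closest TypeScript gets to an actual `if` expression is a conditional (ternary) chain; the same function from above reads like this (just a sketch of the style, not a suggestion for that codebase):

```typescript
// Each branch is a value, so the whole body is one expression,
// which is what the if / else if / else chain was approximating.
function extractSearchToken(completionToken: {
  token: string;
  isQuoted?: boolean;
}): string {
  return completionToken.isQuoted
    ? completionToken.token.slice(2).replace(/"$/, "")
    : completionToken.token.startsWith("@")
      ? completionToken.token.substring(1)
      : completionToken.token;
}
```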


> This approximates `if` statements being an expression that returns something.

Do you care to elaborate? "if (...) return ...;" looks closer to an expression to me:

  export function extractSearchToken(completionToken: { token: string; isQuoted?: boolean }): string {
    if (completionToken.isQuoted) return completionToken.token.slice(2).replace(/"$/, '');

    if (completionToken.token.startsWith('@')) return completionToken.token.substring(1);

    return completionToken.token;
  }


I’m not strongly opinionated, especially with such a short function, but in general early return makes it so you don’t need to keep the whole function body in your head to understand the logic. Often it saves you having to read the whole function body too.

But you can achieve a similar effect by keeping your functions small, in which case I think both styles are roughly equivalent.


I'm not that familiar with TypeScript/JavaScript - what would be a proper way of handling complex logic? Switch statements? Decision tables?


Here I think the logic is unnecessarily complex: isQuoted is doing work that's already implicit in the token.
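To sketch that: assuming quoted tokens always arrive in the `@"..."` form (which is what the comment in the original snippet suggests), the flag can be derived from the token itself:

```typescript
// Hypothetical rewrite with no isQuoted flag: whether the token is
// quoted is visible in the token text (it starts with `@"`).
function extractSearchToken(token: string): string {
  if (token.startsWith('@"')) {
    // Remove the `@"` prefix and the optional closing quote.
    return token.slice(2).replace(/"$/, "");
  }
  if (token.startsWith("@")) {
    return token.slice(1);
  }
  return token;
}
```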


Fits with the origin story of Claude Code...


insert "AI is just if statements" meme


useCanUseTool.tsx looks special, maybe it's codegen'ed or copy 'n' pasted? `_c` as an import name, no comments, use of promises instead of async functions. Or maybe it's just bad vibing...


Maybe, I do suspect _some_ parts are codegen or source map artifacts.

But if you take a look at the other file, for example `useTypeahead`, you'd see that, even with a few codegen / source-map artifacts, the core logic and behavior is still just a big bowl of soup.


Lol even the name is crazy


Funny how, before the announcement, people who were experiencing this were being gaslit on different platforms into thinking they had a "skill" issue using Claude Code.

Additionally, this was practically predicted and expected by so many people the second the off-hours increase was announced.

Shoddy company


It's a sign that they can't simply raise the price for existing paying customers to avoid putting on limits, and/or convert more free users into paying customers.

The longer this goes on the more it becomes clear Google is going to be the last one standing.


Why? Because 7% of people complain the loudest? I would imagine that most power users have big followings on X, Reddit, HN, etc., but that hardly reflects the reality of what most people are experiencing.


The remaining 93% exist mostly because these 7% were loud.

Users who aren't using their quota will gradually disappear when that 7% starts being loud in the other direction.


Are you within an organization? They can control which subset of models is available to you.

Maybe it's related to the supply chain risk designation.


No, just a regular Pro subscription. Apparently it's not just me: GitHub seems to have removed these models from the "Student" subscription [0], and it seems they were also removed from regular "Pro" subscriptions, as there are many reports in their discussions. [1]

[0] https://github.com/orgs/community/discussions/189268

[1] https://github.com/orgs/community/discussions


Yeah, I got the email that they removed them from the student plan:

"As part of this transition, however, some premium models, including GPT-5.4, and Claude Opus and Sonnet models, will no longer be available for self-selection under the GitHub Copilot Student Plan."

They specifically said this was primarily for student plans. I'm surprised they did this for the normal Pro plans too; it's likely a mistake, since the plans page [1] still says the models will be available.

However, TBH, I've never liked Microsoft's flavor of these; they always seem lobotomized compared to using the models directly in Claude Code / Codex. I rarely use AI in VS Code because it's just bad.

[1]: https://docs.github.com/en/copilot/get-started/plans


I was looking at the Meet repository as an example; people literally don't know how to write React without drowning in `useEffect`, `eslint-disable`, and `any`. React has its issues (and a ton of them), but with code written like this, I expect it to end up exactly like Microsoft Teams quality-wise.

Honestly, at that point, it's indistinguishable from LLM slop


Why would one decide to even go with React in recent years anyway? Strangely I've seen it happen a lot too.

I'd have thought that Vue or Svelte would be a slam dunk choice. Do project managers love bloat and lag or something?


I personally don't mind React, but after using it for a couple of years, I do acknowledge that it seems to be a magnet for issues. It's the kind of framework where, if you're not writing it properly, mostly like [Thinking in React](https://react.dev/learn/thinking-in-react) (with some caveats for niche performance optimizations), you're going to have a rough time, and you're going to make life miserable for anyone who does know what they're doing.

It has a weird learning curve, where you can ship something somewhat working fairly fast, but to write it properly, with no bugs, you need to understand a lot of niche React-specific things and their solutions (and those solutions are never useEffect: https://react.dev/learn/you-might-not-need-an-effect).
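A tiny made-up example of the kind of thing that page warns about: mirroring derived data into state with a useEffect, instead of just computing it during render (names here are hypothetical):

```typescript
// The anti-pattern is useState<Item[]> plus a useEffect that re-filters
// whenever `items` changes: an extra render and a second source of
// truth that can go stale. Derived data is just a function of inputs:

type Item = { name: string; visible: boolean };

function visibleItems(items: Item[]): Item[] {
  return items.filter((i) => i.visible);
}

// In a component: `const shown = visibleItems(props.items);` in the
// body (wrapped in useMemo only if profiling shows the filter is hot),
// with no useState and no useEffect involved.
```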

At this point, I wouldn't recommend it to anyone who isn't already experienced with React. It's been an uphill battle trying to work with anyone using React who doesn't understand how to write it properly.


The reactivity model fits well in real-time applications; perhaps SolidJS is a better alternative in this context, though.


I have met very few devs who know how to avoid useEffect


As impressive as it sounds, the game is riddled with people cheating


I'm not sure if React Native is much different from React itself, but a few things made me give a side eye when looking at this write-up:

1. The instant 5 levels of providers (and an additional one later) seems excessive.

2. The usage of useAnimatedReaction, which seems almost like a "useEffect" kind of hook, sprinkled into almost every code block.

3. The imperative size calculations: does React Native not support any responsive-like constructs? I recall solving the same problem by separating "history" and "tail" and adding a "grower" component, without using any JS (purely HTML & CSS), albeit on web rather than native.

4. Personally, when I see something like the scroll code, where you have to call scroll, wait a frame, call scroll again, set a timeout, then scroll again, I would have raised my eyebrows about the architecture / code flow way before that.

5. The amount of "floating" hook calls like useKeyboardAwareMessageList(), useScrollMessageListFromComposerSizeUpdates(), and useUpdateLastMessageIndex() that don't return anything always makes me raise an eyebrow, usually in React Web codebases where users just spam useEffects and effect chains.

Not sure if it's just my ignorance of React Native, but if I had seen the equivalent in a React Web app, I would've been baffled.
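To sketch the "history + grower + tail" idea from point 3 as flexbox-shaped style objects (the form React Native styles take; all names and values here are hypothetical):

```typescript
// The grower soaks up leftover vertical space, so the tail stays
// pinned at the bottom without any imperative size math in JS.
const styles = {
  messageList: { flex: 1, flexDirection: "column" }, // full-height column
  history: { flexGrow: 0, flexShrink: 1 },           // scrollable past messages
  grower: { flex: 1 },                               // absorbs free space
  tail: { flexShrink: 0 },                           // streaming tail / composer
};
```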


I’ve been using jj for a few months now and still love its workflow, but I keep running into the same problem you mentioned. The advantages of jj far outweigh this issue, so I’d really like to figure out a clean way to avoid these conflicts.


What does git think of the tree after you pull? Does everything seem fine to git, but jj shows a conflict?

