I remain convinced App Store Connect is the project they put interns on. That would also explain why they keep redesigning / reimplementing it, then losing interest and leaving it part-finished and incoherent: the interns working on it go back to school.
Most of the time, I don't personally look at it as cheap labour, because I'm ordering at volume, e.g. 60,000 of something or 100,000 of something else.
It's cheap, yes. I could indeed buy 1,000 of something locally, or from somewhere other than China.
But when it comes to scale, when you need vast shipments, they are the ones who can actually ship it and do so reliably. It also happens to be cheaper, which is more of a convenience or cherry on top than the actual attractive part: vast scale.
Is it the noise cancellation making a feedback sound, or is it the pressure differential in the ear canal pulling the ear drum back to produce a white noise?
He said that it goes away when he yawns, so I'm thinking it might be the pressure differential.
Yawning alters the conformation of the external auditory canal by displacing the mandible, which articulates with the tympanic plate of the temporal bone adjacent to the canal.
The seal might be so good that a small pressure differential develops as cabin pressure drops, which causes some issue with the microphone or speaker. Yawning might break that seal, or otherwise cause pressure equalization. Why only the left one? Apple might put some kind of special signal diagnostics or sensors in that side that bug out under those conditions, or maybe human anatomy on the left side is consistently subtly different in a subset of people.
Because this doesn't happen to everybody, it could be some kind of "instrument effect" where the particular shape of someone's ear canal, and its interaction with their ear drum and the speakers and sensors in the earbud, creates this tone, likely assisted by the constant driving signal of cabin white noise.
> The seal might be so good that a small pressure differential happens as cabin pressure drops
That's my guess. I'm very sensitive to pressure changes and I know that cabin pressure on most planes is not constant even when cruising. It's in a range that most people won't notice but it definitely fluctuates near constantly within that band.
You write the requirements, you write the spec, etc. before you write the code.
You then determine the inputs / outputs for each function / method / class / etc.
You also determine what these functions / methods / classes / etc. compute within their blocks.
Now you have that on paper and have it planned out, so you write tests first for valid / invalid values, edge cases, etc.
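A minimal sketch of what "tests first" looks like at that point, assuming a hypothetical clamp_channel function whose spec says it clamps a colour channel to [0, 255] and rejects non-numeric input:

```python
# Tests written before the implementation exists, straight from the spec:
# valid values, invalid values, edge cases.
import pytest

from colors import clamp_channel  # hypothetical module; doesn't exist yet


def test_valid_value_passes_through():
    assert clamp_channel(128) == 128


def test_negative_value_clamps_to_zero():
    assert clamp_channel(-5) == 0


def test_overflow_clamps_to_max():
    assert clamp_channel(300) == 255


def test_rejects_non_numeric_input():
    with pytest.raises(TypeError):
        clamp_channel("red")
```

The suite is red until you write clamp_channel; the spec work above is what makes these cases writable before any code exists.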
There are workflows that work for this, but nowadays I automate a lot of test creation. It's a lot easier to hack a few iterations first, play with it, and then write some tests once I have my desired behaviour. Gradually you shift to writing tests first; you may even keep a repo somewhere of tests you can reuse for common patterns.
I want to have a CUDA-based shader that decays the colours of a deformable mesh, based on texture data fetched via Perlin noise. It also has to have a "wow" look, per designer requirements.
Quite curious about the TDD approach to that, especially taking into account the religious "no code without broken tests" mantra.
Break it down into its independent steps; you're not trying to write an integration test out of the gate. Color decay code, Perlin noise, etc. Get all the sub-parts of the problem mapped out and tested.
Once you've got unit tests and built what you think you need, write integration/e2e tests and try to get those green as well. As you integrate you'll probably also run into more bugs, make sure you add regression tests for those and fix them as you're working.
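For instance, the colour-decay step on its own is a pure function you can pin down before any GPU code exists. A sketch, with decay_color as a hypothetical CPU reference implementation the eventual CUDA kernel can be validated against:

```python
# Unit test for one isolated sub-part: exponential colour decay.
# decay_color is a hypothetical CPU reference; the eventual CUDA kernel
# can be checked against it on sample inputs.
import math


def decay_color(rgb, dt, rate):
    """Exponentially decay each channel toward black over elapsed time dt."""
    factor = math.exp(-rate * dt)
    return tuple(channel * factor for channel in rgb)


def test_zero_elapsed_time_means_no_decay():
    assert decay_color((0.5, 0.2, 0.9), dt=0.0, rate=1.0) == (0.5, 0.2, 0.9)


def test_decay_is_monotonic_over_time():
    start = (1.0, 1.0, 1.0)
    later = decay_color(start, dt=1.0, rate=0.5)
    even_later = decay_color(start, dt=2.0, rate=0.5)
    assert all(l < s for l, s in zip(later, start))
    assert all(e < l for e, l in zip(even_later, later))
```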
1. Write test that generates an artefact (e.g. picture) where you can check look and feel (red).
2. Write code that makes it look right, running the test and checking that picture periodically. When it looks right, lock in the artefact which should now be checked against the actual picture (green, if it matches).
3. Refactor.
The only criticism I've heard of this is that it doesn't fit some people's conceptions of what they think TDD "ought to be" (i.e. some bullshit with a low-level unit test).
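A sketch of steps 1 and 2, assuming a hypothetical render_frame that produces the picture; the golden file gets committed only after visual sign-off:

```python
# Golden-artefact test: red until the golden file exists, green once the
# rendered output matches the locked-in artefact.
# render_frame is hypothetical; real suites often compare with a perceptual
# tolerance rather than byte-for-byte, since GPU output can be nondeterministic.
from pathlib import Path

from renderer import render_frame  # hypothetical

GOLDEN = Path("tests/golden/decay_frame.png")
OUTPUT = Path("tests/output/decay_frame.png")


def test_rendered_frame_matches_golden():
    OUTPUT.parent.mkdir(parents=True, exist_ok=True)
    OUTPUT.write_bytes(render_frame(seed=42))
    assert GOLDEN.exists(), "Inspect tests/output/ and lock in the golden file"
    assert OUTPUT.read_bytes() == GOLDEN.read_bytes()
```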
You can do this with LLM-as-a-judge as well. Feed screenshots into an LLM judge panel and get them to rank the design 1-10. Give the panel a few different perspectives/models to get a good distribution of ranks, and establish a rank floor for test passing.
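A sketch of that panel, with ask_judge as a hypothetical wrapper around whatever model API you use (it is not a real client):

```python
# LLM-as-judge panel: several judge personas score a screenshot 1-10;
# the test fails if the median score falls below the rank floor.
from statistics import median

from judges import ask_judge  # hypothetical: (persona, image_path) -> int in 1..10

PERSONAS = [
    "minimalist product designer",
    "accessibility reviewer",
    "brand/marketing designer",
]
RANK_FLOOR = 7


def test_design_clears_rank_floor():
    scores = [ask_judge(p, "tests/output/decay_frame.png") for p in PERSONAS]
    assert median(scores) >= RANK_FLOOR, f"Panel scores too low: {scores}"
```

Median rather than mean keeps one eccentric judge from single-handedly sinking or rescuing the run.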
Parent mentioned "subjective look and feel"; LLMs are absolutely trash at that and have no subjective taste. You'll get the blandest designs out of LLMs, which makes sense considering how they were created and trained.
LLMs can get you to about a 7.5-8/10 just by iterating on their own output. The main thing you have to do is wireframe the layout and give the agent a design that you think is good to target.
Again, they have literally zero artistic vision, and no, you cannot get an LLM to create a 7.5-out-of-10 web design or anything else artistic, unless you too lack the faculties to properly judge what actually works and looks good.
You can get an AI to produce a 10/10 design trivially by taking an existing 10/10 design and introducing variation along axes that are orthogonal to user experience.
You are right that most people wouldn't know what 10/10 design looks/behaves like. That's the real bottleneck: people can't prompt for what they don't understand.
Yeah, obviously if you're talking about copying/cloning, but that's not what I thought the context here was. I thought we were talking about LLMs themselves being able to create something that would look and feel good to a human, without just "copy this design from here".
Yeah, we really need LLMs to work swimmingly with Lean 4. The current state is hot garbage: they don't understand proof composition, exploring proof extensions, lemma search, etc. They don't explore an open-ended node of a mathematical knowledge graph by substituting various options.
I'd happily work with someone on a conversational theorem prover, if anyone's up for it.
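For concreteness, the kind of move I mean, in a toy Lean 4 / Mathlib example (lemma search via exact?, then composing what it finds):

```lean
-- Lemma search is a first-class tactic in Lean 4 with Mathlib: exact?
-- searches the library and, when it finds a closing lemma, suggests it.
-- Chaining such finds across goals is exactly what current LLMs fumble.
import Mathlib.Tactic

example (a b c : Nat) (h : a ≤ b) : a + c ≤ b + c := by
  exact?  -- finds and applies Nat.add_le_add_right h c
```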
"...one where founders don't have to be particularly talented to hit the jackpot."
That's where we're at right now anyways.
"If tech companies are this stupid, it ought to be very easy to disrupt and usurp them by simply shipping--"
And that's how we got here.
The code rot issue will blow up a lot more over the next few years, to the point that we can finally complete that sentence and start "shipping competing code that works".
I worry that mopping up this catastrophe is going to be a task that people will again blindly set AI upon, without the deep knowledge of what exactly to do, as opposed to a vague "do something, in general, over there, behind that hill".
The pattern itself is a little bit different and has some conceptual overhead, but it's also fairly clean and scalable.