Last time I tried home row mods I could not get over how bad it felt having the letter not appear until I lifted the key rather than immediately on the keypress. Am I just overly sensitive, or is this something you get used to over time?
I had a much better experience shifting them down a row so they’re “bottom row mods”, since those keys (if using QWERTY) are much less frequently used.
There are also features like chordal hold and flow tap that have improved the home/bottom row mod behaviour.
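For reference, in QMK this combination looks roughly like the sketch below. The option names (`CHORDAL_HOLD`, `FLOW_TAP_TERM`) and mod-tap macros are from the QMK tap-hold docs; the specific key assignments and tapping term are my assumptions, not anything prescribed:

```c
/* config.h -- opt into the newer tap-hold behaviours */
#define CHORDAL_HOLD        /* resolve same-hand chords as taps, not holds */
#define FLOW_TAP_TERM 150   /* suppress holds during fast typing streaks (ms) */

/* keymap.c -- bottom row mods on a QWERTY bottom row (left hand shown) */
#define BM_Z LGUI_T(KC_Z)   /* tap: Z, hold: GUI   */
#define BM_X LALT_T(KC_X)   /* tap: X, hold: Alt   */
#define BM_C LCTL_T(KC_C)   /* tap: C, hold: Ctrl  */
#define BM_V LSFT_T(KC_V)   /* tap: V, hold: Shift */
```

The right hand would mirror this with `RSFT_T(KC_M)`, `RCTL_T(KC_COMM)`, and so on; the mods then go in your layout where the plain keycodes used to be.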
> We also suspect that the geographical and cultural factors may have influenced interaction patterns, given that all our participants were residing in Türkiye.
It's a good suggestion, but where the 'autocomplete' quote is scoped too narrowly, this one is maybe scoped too broadly. Neither really represents what the article is about.
I like to think of models leaving "useless comments" as a way to externalize their reasoning process - maybe they are useless at the end, but leaving them in on a feature branch seems to marginally improve future work (even across conversations). I currently leave them in and either manually clean them up myself before putting up a PR for my team to review or run a final step with some instructions like "review the diff, remove any useless comments". Funnily enough Claude seems pretty competent at identifying and cleaning up useless comments after the fact, which I feel like sort of proves my hypothesis.
I've considered just leaving the comments in, considering maybe they provide some value to future LLMs working in the codebase, but the extra human overhead in dealing with them doesn't seem worth it.
I've been wondering if the "you're absolutely right!" thing is also similar. Like maybe it helps align Claude with the user or something, less likely to stray off or outright refuse a task.
- Kujtim Hoxha creates a project named TermAI using open-source libraries from the company Charm.
- Two other developers, Dax (a well-known internet personality and developer) and Adam (a developer and co-founder of Chef, known for his work on open-source and developer tools), join the project.
- They rebrand it to OpenCode, with Dax buying the domain and both heavily promoting it and improving the UI/UX.
- The project rapidly gains popularity and GitHub stars, largely due to Dax and Adam's influence and contributions.
- Charm, the company behind the original libraries, offers Kujtim a full-time role to continue working on the project, effectively acqui-hiring him.
- Kujtim accepts the offer. As the original owner of the GitHub repository, he moves the project and its stars to Charm's organization. Dax and Adam object, not wanting the community project to be owned by a VC-backed company.
- Allegations surface that Charm rewrote git history to remove Dax's commits, banned Adam from the repo, and deleted comments that were critical of the move.
- Dax and Adam, who own the opencode.ai domain and claim ownership of the brand they created, fork the original repo and launch their own version under the OpenCode name.
- For a time, two competing projects named OpenCode exist, causing significant community confusion.
- Following the public backlash, Charm eventually renames its version to Crush, ceding the OpenCode name to the project now maintained by Dax and Adam.
I'm surprised this doesn't get brought up more often, but I think the main explanation for the divide is simple: current LLMs are only good at programming in the most popular programming languages. Every time this comes up in the HN comments and people are asked what they're actually working on that the LLM can't help with, inevitably it's a (relatively) less popular language like Rust or Clojure. The article is a good example of this: before clicking, I correctly guessed it would be complaining about how LLMs can't program in Rust. (Granted, the point that Cursor uses this as an example on their webpage despite all of this is funny.)
I struggled to find benchmark data to support this hunch; the best I could find was [1], which shows a performance of 81% on Python/TypeScript vs 62% on Rust, but this fits with my intuition. I primarily code in Python for work, and despite trying I didn't get that much use out of LLMs until the Claude 3.6 release, where it suddenly crossed over that invisible threshold and became dramatically more useful. I suspect for devs not using Python or JS, LLMs have just not yet crossed this threshold.
As someone working primarily with Go, JS, HTML and CSS, I can attest to the fact that the choice of language makes no difference.
LLMs will routinely generate code that uses non-existent APIs and has subtle and not-so-subtle bugs. They will make useless suggestions, often leading me down the wrong path or going in circles. The worst part is that they do so confidently and reassuringly. That is, if I give any hint of what I think the issue might be, after spending time reviewing their non-working code, the answer is almost certainly "You're right! Here's the fix...", which either turns out to mean I was wrong and that wasn't the issue, or their fix ends up creating new issues. It's a huge waste of my time, which would be better spent reading documentation and writing the code myself.
I suspect that vibe coding is popular with developers who don't bother reviewing the generated code, either due to inexperience or laziness. They will prompt their way into building something that on the surface does what they want, but will fail spectacularly in any scenario they didn't consider. Not to speak of the amount of security and other issues that would get flagged by an actual code review from an experienced human programmer.
I just pasted the YouTube link into AI Studio and gave it this prompt if you want to replicate:
reformat this talk as an article. remove ums/ahs, but do not summarize, the context should be substantively the same. include content from the slides as well if possible.
Highly recommend Three-Body, the Chinese version of the Three-Body Problem. I enjoyed it much more than the Netflix adaptation; it's much closer to the source material and more of a slow burn. Episodes are available on YouTube with subs (https://www.youtube.com/watch?v=3-UO8jbrIoM).
Isn't the main scientist's disillusionment, stemming from the CCP's violent abuse (and her loss of faith in humanity), core to the reasoning behind why she reached out to the aliens despite their warning? How do they restructure something so central to the plot?
Yeah, I mentioned 三体 in a parent comment. It's a great counterpoint to the "high fructose" Netflix version. And interesting to see the American character portrayed by an American actor...dubbed by a Chinese voice actor. (Just be prepared to fast-forward the musical interludes.)