
From what I understand, macOS uses a weird kernel implementation, which is almost open source, but not 100%

You're correct, but for a bit more context: the macOS kernel is XNU, which is derived from the Mach kernel but heavily modified. The kernel itself is open source, but some drivers/kernel extensions are not, so the open-source release isn't actually usable on its own (unless you provide your own implementations of those).

Just reinforces the golden rule of not doing business with friends or family.

Cold showers - good for the immune system. Heat exposure - good as well. I guess "what doesn’t kill us makes us stronger" is true after all.

Cryotherapy, also.

I'm wondering if what's actually going on is that the temperature swings themselves are what's important, not how they come about.

I have also seen both scuba diving and skydiving suggested as beneficial due to the oxygen changes.

Could this perhaps be a form of exercise for the body's regulatory systems?


I’m genuinely curious: what is a use case for 25 Gbps Internet in a typical Swiss household?

It's true that few servers even provide that kind of bandwidth. That's why I personally stick to 1 Gbps and save a few bucks. But I have friends who use the 25 Gbps e.g. for off-site backups (a NAS at the office for home backups, one at home for office backups). Stuff like game downloads or uploading large media are also sometimes mentioned. And you can really benefit from the parallel connections in P2P stuff.
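
Back-of-the-envelope (assuming the link itself is the only bottleneck, and rounding 25 Gbps to ~3 GB/s):

    150 GB game at 25 Gbps: 150 / 3     ≈  50 s
    150 GB game at  1 Gbps: 150 / 0.125 ≈ 1200 s ≈ 20 min

In practice a single server rarely fills the fat pipe, which is why the parallel-connection cases (P2P, multi-stream backups) are where it actually pays off.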

It absolutely doesn’t look as sharp as 4K, even on a 22-inch screen

I've seen 4K monitors in person. Disabling anti-aliasing (with full hinting enabled) on a 1080p monitor increases text sharpness to equal that of anti-aliased text on a 4K monitor. The only drawback is that the font no longer approximates the shape of printed text. This doesn't matter except in the outlier cases of Chinese and Japanese, which use some extremely visually intricate characters.

I find the title very misleading. "Linux containers" typically means LXC, but in the readme you say it’s intended for running OCI-based containers.

I think you are reinventing the wheel: https://github.com/nix-community/home-manager

I recently used Claude Code to help me learn Nix + Home Manager! For anyone considering it: it’s been fun, genuinely useful in my day-to-day, and I can’t recommend it enough. I now have a source-controlled toolkit that I can take with me anywhere I go.

I agree. I started with Nix flakes in my project and fell in love with them. Then I started using Home Manager, and now I feel complete. I even played with nix-darwin and NixOS. It's an amazing piece of software.

I’ve gotten used to it, and with an LLM it’s easier to set up the config without learning all the obscure syntax, but on macOS it still feels very un-native compared to Homebrew. Having to sudo all the time feels weird for just updating user-space apps and configs.

If the wheel was a stellated rhombicosidodecahedron

Are we going to see a trillion dollar IPO?

Because 90% of the training data was in English, and therefore the model performs best in that language.

In my experience these models work fine using another language, if it’s a widely spoken one. For example, sometimes I prompt in Spanish, just to practice. It doesn’t seem to affect the quality of code generation.

They literally just have to subtract the vector for the source language and add the vector for the target.

It’s the original use case for LLMs.
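
For the record, that’s the classic word2vec-style analogy arithmetic. A toy sketch in Python, where the 3-d vectors and vocabulary are invented purely for illustration (real embeddings would come from a trained model):

    import numpy as np

    # Invented "embeddings": same concept vectors, plus a constant
    # offset along the third axis for the Spanish variants.
    emb = {
        "cat_en": np.array([1.0, 0.2, 0.0]),
        "dog_en": np.array([0.8, 0.6, 0.0]),
        "cat_es": np.array([1.0, 0.2, 1.0]),
        "dog_es": np.array([0.8, 0.6, 1.0]),
    }

    def nearest(v):
        # nearest vocabulary entry by cosine similarity
        return max(emb, key=lambda w: emb[w] @ v /
                   (np.linalg.norm(emb[w]) * np.linalg.norm(v)))

    # Estimate the English->Spanish direction from one known pair,
    # then apply it to a different word.
    offset = emb["cat_es"] - emb["cat_en"]
    print(nearest(emb["dog_en"] + offset))  # -> dog_es

Whether anything this clean holds inside a modern transformer is much murkier than in the word-vector days, of course.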


Thank you. +1. There are obviously differences and things getting lost or slightly misaligned in the latent space, and these do cause degradation in reasoning quality, but the decline is very small in high-resource languages.

It’s just a subjective observation.

It just can’t be the case, simply because of how ML works. In short, the more diverse, high-quality texts with reasoning-rich examples a language has in the training set, the better the model performs in that language.

So unless the Spanish subset had much more quality-dense examples to make up for the lower volume, there is no way the quality of reasoning in Spanish is on par with English.

I apologise for the rambling explanation; I’m sure someone with ML expertise here can explain it better.


I saw a curious post recently that explored this idea, and showed that it isn’t really the case. The internal layers of the model aren’t really reasoning in English, or in any human language.

Translation in/out of human languages only happens at the edges of the model.

Internal-layer activations for the same concept are similar regardless of language, while activations at the top/bottom layers diverge. Meanwhile, the pattern is reversed for same-language, different-content inputs.
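
If you want to poke at this yourself, here’s a minimal sketch using the Hugging Face transformers library (GPT-2 purely as a stand-in; a multilingual model would be a fairer test). If the post’s description holds, the middle layers should score higher cosine similarity across the translated pair than the first/last layers:

    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)

    def layer_states(text):
        ids = tok(text, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids)
        # mean-pool over tokens -> one vector per layer
        return [h.mean(dim=1).squeeze(0) for h in out.hidden_states]

    a = layer_states("The cat sleeps on the sofa.")
    b = layer_states("El gato duerme en el sofá.")
    for i, (ha, hb) in enumerate(zip(a, b)):
        sim = torch.cosine_similarity(ha, hb, dim=0).item()
        print(f"layer {i:2d}: cosine {sim:.3f}")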


So we do at least agree on the fact that the quality of the human language <-> embedding transition depends on how well the target language is represented in the training dataset?

Even if it happens at the edges, it happens on every conversation turn, so I’d assume the small subtleties of meaning that aren’t captured can accumulate into significant error over time.
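
As a toy model of that accumulation (the 99% figure is invented purely for illustration): if each turn preserved 99% of the intended meaning, after 50 turns only

    0.99^50 ≈ 0.61

of it would survive, i.e. roughly 40% lost. Real models don’t degrade this mechanically, but it shows how small per-turn losses could compound.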


The agent may be translating to English behind the scenes. There are all kinds of hacks layered on top of the underlying models. But however exactly it works, it’s clearly not the case that these things only make use of Spanish training data when you prompt them in Spanish.

Let’s take a language with orders of magnitude less training data, like Nahuatl. The following is what happens if I ask Claude Code in Nahuatl to give me an example of a simple program that I can execute inside a web browser (avoiding any direct keyword prompts like 'JavaScript' or 'Hello World'). Interestingly, it responds in Spanish. I doubt it has any relevant training data in Nahuatl. (Apologies if the Nahuatl is wrong – I just used Google Translate.)

    ¿Uelis techmakas se neskayotl tlen se programa tlen amo ouij tlen ueliskia nijchiuas ipan se navegador web?

    -----

    Entiendo tu pregunta en náhuatl. Aquí un ejemplo sencillo — una calculadora en HTML puro que puedes abrir directo en el navegador, sin servidor:

    <!DOCTYPE html>
    <html>
    <head>
        <meta charset="UTF-8">
        <title>Calculadora</title>
        <style>
        body { font-family: sans-serif; display: flex; justify-content: center; padding: 40px; }
        .calc { border: 1px solid #ccc; padding: 20px; border-radius: 8px; width: 200px; }
        input { width: 100%; margin-bottom: 10px; font-size: 1.2em; text-align: right; }
        button { width: 22%; margin: 2px; padding: 8px; font-size: 1em; cursor: pointer; }
        .wide { width: 47%; }
        </style>
    </head>
    <body>
    <div class="calc">
        <input type="text" id="pantalla" readonly>
        <div>
        <button onclick="tecla('7')">7</button>
        <button onclick="tecla('8')">8</button>
        <button onclick="tecla('9')">9</button>
        <button onclick="tecla('/')">÷</button>
    
        <button onclick="tecla('4')">4</button>
        <button onclick="tecla('5')">5</button>
        <button onclick="tecla('6')">6</button>
        <button onclick="tecla('*')">×</button>
    
        <button onclick="tecla('1')">1</button>
        <button onclick="tecla('2')">2</button>
        <button onclick="tecla('3')">3</button>
        <button onclick="tecla('-')">−</button>
    
        <button onclick="tecla('0')">0</button>
        <button onclick="tecla('.')">.</button>
        <button onclick="tecla('+')">+</button>
        <button onclick="limpiar()">C</button>
    
        <button class="wide" onclick="calcular()">=</button>
        <button class="wide" onclick="borrar()">⌫</button>
        </div>
    </div>
    
    <script>
        const pantalla = document.getElementById('pantalla');
    
        function tecla(val) {
        pantalla.value += val;
        }
    
        function calcular() {
        try {
            pantalla.value = eval(pantalla.value);
        } catch {
            pantalla.value = 'Error';
        }
        }
    
        function limpiar() { pantalla.value = ''; }
        function borrar() { pantalla.value = pantalla.value.slice(0, -1); }
    </script>
    </body>
    </html>
    
    Guarda esto como calculadora.html y ábrelo en cualquier navegador — no necesita servidor ni dependencias. Es un buen punto de partida para aprender HTML,
    CSS y JavaScript.

> it’s clearly not the case that these things only make use of Spanish training data when you prompt them in Spanish.

It’s not! And I’ve never said that.

Anyways, I’m not even sure what we are arguing about, as it’s 100% a fact that SOTA models perform better in English. The only interesting question here is how much better: is it negligible, or does it actually make a difference in real-world use cases?


It’s negligible as far as I can tell. If the LLM can “speak” the language well then you can prompt it in that language and get more or less the same results as in English.

Good to know your personal preferences. Please keep us posted!


I started with Claude for a basic JS project. It failed over and over; Gemini sorted out the same problems faster. Claude always wanted to rip out huge blocks of code and replace them. Did that fix the problem? Almost never. And it was a small JS codebase Claude had made itself.

Claude was my first coding AI, I liked it, I wanted to use it. But when I ran out of tokens I went to Gemini and got way better results.

And now every day I see Claude spam like it’s the best thing that ever happened. Real-world use tells a different story. I didn’t just try it once and have a problem; this is weeks and weeks of issues.

Claude fails basic questions when given very clear prompts - ON VANILLA JAVASCRIPT.

