Hacker News | NewsaHackO's comments

But they don't store that much data locally, which is what would be relevant for a discussion about local storage costs.

No, it doesn't.

What would be required for you to see it as a policy?

But why should they? This just seems like the groundwork for an initial refactor and moving from one language to another. They haven't actually committed to switching from Zig to Rust yet. I mean, I get if you are an investor and you want to see if they are using their time effectively, but why would it matter to anyone else?

They’re not required to do so, but like I said, it would be nice, because it removes a lot of speculation. And development is in the open, so people notice what they’re doing.

Lots of people, me included, heavily invested their time and expertise into Bun, using it as a daily driver, to bundle production code or even using it in production as a JS/TS runtime. Of course, we are interested in Bun to stay a useful tool. The Anthropic acquisition was worrying enough on its own.

But there isn't any change to someone's expertise in Bun right now, just to development. Why would they have to give you a daily stand-up about their development process?

Bun may become unusable after Anthropic meddles with it. In that case the expertise would be wasted. It's not a big deal for most users, but still.

Come on, they could still get a blood sample to really verify that it's the user

Yeah, I cannot imagine how anyone could learn anything well with access to AI. I am grateful that I finished my schooling before AI hit mainstream, because it is just too easy to turn your brain off and just AI a question before thinking about it. Great for getting things done, useless for learning. I guess hallucinations still keep us on our toes.

"Useless for learning" is just wrong. I've found LLMs immensely useful for directing my learning projects. Of course, a lot of the actual learning must come from doing things and puzzling through them myself. But I now find LLMs to be indispensable in finding out what I need to learn to accomplish a task, finding keywords to search on Wikipedia or in textbooks, and answering questions when I'm confused about something.

Part of the difference in your case is the motivation for learning. Many of us in grade school had a motivation to get good grades/pass a class outside of the pursuit of knowledge. Even for those of us that really liked to learn, it was usually directed at a certain subject matter and not everything that we would need to be successful as adults (I loved math, but would never willingly write an essay if I could get away with it). Because grade school kids are "forced" to learn things they do not want to, they always look for the easiest way to get through the material, and AI provides a way to do this.

I agree with your general point, but if people are going to use AI regardless, the question is whether we should teach young people how to use it effectively. If they don't learn this, they're more likely to use it in a way that hampers their development.

Now, I don't know at what level that should begin. Probably somewhere around the high school level, when they're learning to do research projects and synthesize information from multiple sources, is when teaching AI literacy will be most important.


What value to a person does teaching "how to use it effectively" deliver?

How does that benefit their development, learning, society as a whole?

Before you start in with "it'll help them get a job", full stop: education as a public good isn't strictly vocational technician training. It isn't job training for companies.


For the same reason that we should teach people how to use a library, or a search engine, or an academic database. The tools for information retrieval are constantly evolving, and in a democratic society it's important that people learn how to educate themselves on a continuous basis throughout their lives. If you use AI properly, you can learn things that you wouldn't have had the time or skillset to learn otherwise.

It's worth remembering that this isn't that. What the poster describes is constant pushing from ChromeOS, designed to train dependence on the tools and to essentially check out of the education process. In my opinion this is definitely useless for learning.

I'm an adult with a fairly balanced view of AI and I find it difficult to learn coding without occasionally using AI to help me navigate to the most relevant bits of MDN or help me check if my thinking on an approach is correct (it's all entry level stuff so should be well represented in the training data).

I find it easy to get into a long chat with an LLM about some project I'd like to try and what's involved with it. I find it easy to get into a chat with an LLM about a lot of things as a kind of unproductive excursion that my brain tells me at the time is 'useful'. I'm of average will, so I dread to think how this will work out with children who get to 'partner up with AI to assist them' or whatever marketing speak is used to obfuscate their goals. Then combine that with social developmental issues or below-average focus.

It's bleak because the more entangled they get with the system the more they'll seek to push back regulations.


Using AI in an intentional way with purpose and direction should be great for practicing thinking.

The right way to teach children to use AI is to teach them to scope, filter, design process, edit, refine. How to ask a question, how to think through steps, how to use language to describe all these things. Each kid has something that can think and respond as fast as they input.

The goal should be to perfect sequence and iteration, not skip to final output.

These skills also should NOT be framed in some kind of "teaching AI" as much as teaching communication and critical thinking and analysis. It is the exact same skills you need to solve problems and interact with humans.


> Yeah, I cannot imagine how anyone could learn anything well with access to AI.

You must not have much of an imagination then. Or maybe you're just being overdramatic? AI is arguably the best way to learn most subjects now. Frontier models have made a lot of progress on reducing hallucinations, and AI can teach you at whatever pace you're capable of learning it. There are very few topics it can't teach, and it can go into more depth than you'll find in any textbook.


"and it can go into more depth than you'll find in any textbook"

How does it manage that, when it only knows as much as is written in textbooks?


It can peruse research papers, world news, encyclopedias, product manuals, and dissertations.

I would not say that. My child asks the AI factual questions the same way she would ask an adult. That's one kind of learning. There are others, of course.

When I say AI, I obviously don't mean using AI like people used to use search engines. Of course asking it factual questions like it is an encyclopedia is okay.

The other thing though is that there are situations where you only have a limited number of tries for a password, and incorrect tries can have dire consequences. If you are being asked for a password by an armed guard, and you hack the system completely and get the password, but didn't know about the last obscured step that you were supposed to type it with your left hand, not your right, you will still face whatever consequences even though that step didn't add any security.

As a fan of and believer in obscurity in support of security, I do not understand why

> that step didn't add any security.

It is a decision that’s part of the entire process. A branch of many in the decision tree. Other branches are deciding which characters to type for the password; ASCII characters can be as little as 1 bit apart. Deciding between left and right is also 1 bit apart.

I think it boils down to what people commonly understand to be publicly knowable information versus understood-to-be-secret information.

One example: I self-host my password manager at pw.example.com/some-secret-path/. That extra path adds as much to security as a randomly picked username in HTTP Basic Auth: arguably none. Yet, it is as impossible for attackers to enumerate and find that path as it is with passwords.

The difference is that the path leaks more easily. It's not generally understood to be a secret. Yet I argue it helps security. (Example: leaking the domain name through certificate transparency logs AND even, say, user credentials still means an unsuccessful attack; a strictly necessary piece of the puzzle is missing.)
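The entropy argument above can be made concrete. A back-of-the-envelope sketch (the 16-character length and 36-symbol alphabet here are made-up illustration values, not the actual setup):

```python
import math
import secrets

# Hypothetical alphabet for a random URL path segment:
# lowercase letters plus digits (36 symbols).
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

def entropy_bits(length: int, alphabet_size: int) -> float:
    """Bits of entropy in a uniformly random string of the given length."""
    return length * math.log2(alphabet_size)

# A 16-character random path segment drawn with a CSPRNG...
path = "".join(secrets.choice(ALPHABET) for _ in range(16))

# ...carries roughly 83 bits of entropy: not enumerable in practice,
# which is the whole point of the hidden-path argument.
print(round(entropy_bits(len(path), len(ALPHABET)), 1))
```

Of course, unlike a password, the path travels in every request URL, so it can leak through logs, referrers, and browser history; the entropy only holds up as long as the path stays off those channels.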


>Both aim to avoid Windows, neither replace Linux but instead tries to move more to Linux.

I don't agree: WSL is an attempt to use programs developed for Linux in Windows. It is clearly for people who want to use Linux programs but don't want the headache of setting up Linux or dual booting.


> WSL is an attempt to use programs developed for Linux in Windows.

Then I'd think it would be available as a "right-click > Launch Linux Program" option or something like that, like WSL1, rather than the VM approach WSL2 takes, which gives you an entire environment. Even Microsoft themselves market WSL like that:

> Windows Subsystem for Linux (WSL) lets developers run a GNU/Linux environment - https://learn.microsoft.com/en-us/windows/wsl/

I agree with your last part though, it's for people who want to use Linux without the headache of dual-booting or managing their own VMs, so they use predefined packaged VMs like WSL instead.


I guess I was more contesting that WSL is for people to get away from Windows, when it is actually the other way around; it reduces the friction between tools developed to work only on Linux and Windows users, so that the Windows user can keep using just Windows. Back when I used Windows, this was always a point of contention when installing most dev-related apps, and trying to use MinGW was such a pain (WSL was broken on my computer then due to Hyper-V being disabled in the BIOS). I use Linux now on my main computer, but I recently tried WSL on a family member's computer, and I can see how, if you just do all dev work in WSL, you would never have to go through the process of migrating to an entirely new OS and still get all of the benefits.

It's just unfortunate. Like the pharmaceutical company named "Isis" that changed its name due to the association with the terrorist group. That said, while people will notice for the next couple of months, I don't think it warrants changing the name.

More importantly, there's a really good band named Isis that probably has not benefited from the Islamic State. (Ok, they disbanded in 2010, but still.)

Yeah, I think the OP is overreacting. I am pretty sure it even says so in the initial installation instructions and gives a clear reason why it interferes with other language servers. From the "Features" section of the VSCodium extension: "Adds language features from Pyrefly's analysis like go-to definition, hover, etc. (full list here) and disables Pylance completely (VSCode's built-in Python extension)". I suspect there is an element of LLM-sycophancy-driven activism here.

OP is an LLM.

Yea, I was asking a SOTA model about copy.fail, and it was freaking out and tried to indirectly call me a hacker a few times. Weirdly, all I did was slightly reword requests, and they all went through. Granted, I am not actually a hacker, so I guess my follow-up questions made it realize that I was asking for educational purposes, but it was definitely the most accusatory, curt, and outright abrasive I have seen an LLM behave.

The biggest problem isn't the token slot machine refusing to give you the answer, but the fact that multiple refusals can end up flagging your account and getting banned from the service.

While contributing to a friend's Remembrance research, I was pretty surprised when Gemini Pro suddenly refused to answer any more questions about photos from the Höcker Album after it spotted an "SS" insignia.

Ironically, the justification it gave was that it wasn't its fault because it was just following orders. I hope this hasn't landed me on Google's list of undesirables.

Grok, for better or worse, didn't seem to mind.


this is the best "anti-alignment" example I have ever read.

I've been able to get DeepSeek to give me an unofficial account of what happened in Tiananmen Square in 1989.

It even went as far as confirming that we should always base our opinion on multiple sources, not just the government.

We should create badges like "script kiddie", "llm hacker", "grandpa's printer adjuster"

