Hacker News: Ucalegon's comments

That's the thing about a normalization system: it is going to normalize outputs because it's not built to output uniqueness; it's built to winnow uniqueness down to a baseline. That is good in some instances, assuming that baseline is correct, but it also closes the aperture of human expression.

I agree in a "the purpose of a system is what it does" sense but I'm not sure they're inherently normalization systems.

Token selection is based on normalization. Even if you train a model to produce outlier answers, in that process you are biasing toward a subset of outliers, which is inherently normalizing.

Could you elaborate on "token selection is based on normalization"?

Sure:

https://arxiv.org/pdf/1607.06450

Depending on the model architecture, normalization takes place in multiple places in order to save compute and ensure (some) consistency in output. Training, by its very nature, is also a normalization function, since you are telling the model which outputs are and are not valid, shaping the weights that define features.
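To make the linked paper concrete, here's a minimal sketch of what layer normalization does (the learned gain/bias parameters from the paper are omitted): each activation vector gets rescaled to zero mean and unit variance, which is the "winnowing to a baseline" step in a literal, mechanical sense.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Rescale each row (activation vector) to zero mean and unit
    # variance, per Ba et al. 2016; gain/bias parameters omitted.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

h = np.array([[2.0, 4.0, 6.0]])
print(layer_norm(h))  # each row now has mean ~0, variance ~1
```

Whatever the raw activations were, they come out on the same scale; stack dozens of these layers and the "baseline" effect compounds.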


"Rep Josh Gottheimer (D-NJ) announced the Parents Decide Act, bipartisan, commonsense legislation to strengthen online protections for children and give parents greater control over what their kids can access on phones, tablets, and other devices. Gottheimer’s new Parents Decide Act will:

- Require operating system developers like Apple and Google to verify users’ ages when setting up a new device, rather than relying on self-reported ages.

- Allow parents to set age-appropriate content controls from the start, including limiting access to social media, apps, and AI platforms.

- Ensure that age and parental settings securely flow to apps and AI platforms, so content is tailored appropriately for children.

- Prevent children from accessing harmful or explicit content—including inappropriate AI chatbot interactions—by creating a consistent, trusted standard across platforms."

This is the summary [0] from the Benton Institute for Broadband & Society, who seem to support the legislation. I get the feeling the definition of 'operating system' within the legislation isn't how many on HN, or in real life, would define an OS, since it's implied to be aimed at mobile devices, but we shall see once the actual text is posted.

[0] https://www.benton.org/headlines/rep-gottheimer-announces-bi...


Seems like legislation should come after senators and members of Congress directly call Tim Cook en masse to complain that:

1. Screen time reporting has been 100% broken for decades. Just does not work as advertised. False advertising is indeed illegal.

2. The parental controls are a joke. Can't block apps that were ever downloaded by a member of the household. Don't want the kid to have TikTok? You better not have downloaded it on any device ever.


I do not disagree that there is A LOT that Apple could, and should, be doing to enable parents. The problem we have is that if a vendor like Apple just decides to keep shipping broken systems, there isn't a way to compel them to fix the problem outside of legislation. And because most people in the House/Senate have a complete lack of technical literacy, we get situations where they define things poorly, or special interests get to set those definitions in their favor or for ideological reasons, rather than to make good policy.

We agree that legislation won't work because legislators aren't competent.

But you claim that only legislation can force behavior, and I'm pretty sure that if a few senators just relayed their frustration with broken screen time reporting to Tim Cook personally we could get some results.


Calls don't have enforcement mechanisms/consequences needed to ensure compliance with the desired outcome. The whole point of government is not to ask nicely that something be done, it is to use the power of the state to ensure that something is done. Assuming that the state decides to actually enforce its laws, but that is an entirely different conversation.

You know it's bad when they call it the opposite of what it is.

Racism and fascism have been used correctly; it's just that people do not like to have their beliefs associated with negative things, and thus, rather than engage in self-reflection, they decide the problem exists elsewhere. I am sure you can come up with outliers that support what you are saying, but across the vast majority of applications, both words are used correctly relative to their definitions.

>As a former R&D scientist there is no way I’d inject any peptide that hasn’t at least gone through a phase 1 safety study in humans. Otherwise you have no idea what it could be doing to your body.

A lot of people do not understand the trial system or the value of Phase 0/1 tests when it comes to the substances they put into their bodies. And thanks to the influencer/grifter/biohacker ecosystem that exists, more people would put their trust in anecdotal evidence from people whose incentive is to make money off of them, while complaining that the pharmaceutical industry operates on a profit motive.



The problem with this argument is that forcing people to use technology, without proper training and against their will, introduces them to risks as well. Anyone with older parents/family can tell you about the harms that come with phishing and other fraud scenarios, which cost more than just accommodating people who don't use technology, both at the micro and macro level. Insulting people and bullying them into technology adoption, when there are relatively simple fixes to the problem, increases their risk exposure for no reason other than 'I believe that people who don't use technology are somehow lesser'.



I don't think the discourse is about just this one guy; it's about an entire class of people for whom swiping around a smartphone is a bewildering experience they managed to live their whole lives so far without. If you're not adept at it, it makes you feel stupid. Maybe you haven't had that experience, but there's more to being a Luddite than stubbornness.

If I can get along with the rest of my life on a flip phone, it seems pretty unreasonable to buy a device just to buy sports tickets.


> If I can get along with the rest of my life on a flip phone, it seems pretty unreasonable to buy a device just to buy sports tickets.

I would agree. It also seems unreasonable to expect the organization to make an exception to a completely legitimate anti-scalping measure for one person.


>for one person

For everybody. Nobody should be forced to use a proprietary phone app.


Why not? Going to a Dodgers game is not a constitutional right; if the business wants to make it harder for people to give them money, that might be stupid, but it's their right.

Do you know how much old people get scammed out of per year in the United States because they are using technology that they are not trained on but assume they have to use in order to function, with minimal practical gain relative to the costs? It's around 12.5 billion dollars in 2024, up from 10 billion in 2023 [1]. Why is introducing someone to that risk worth it to watch a baseball game?

Asserting that individuals 'get smart' doesn't actually solve the actual harms, and if it were that simple, we would not be seeing the upward trend in fraud that we are seeing among the elderly.

[1] https://www.aarp.org/money/scams-fraud/older-adults-ftc-frau...

edit: fixed the years


The numbers you mention are total fraud losses. Most fraud has nothing to do with phones; it is fraudulent money transfers and card charges.

Where is the initial point of engagement when it comes to most scams targeting the elderly? It is via phones, email, and messaging services.

80 year old people do not have the same neuroplasticity as 20 year olds. It is not reasonable to expect them to quickly learn new things that are constantly changing.

In particular, it's very reasonable to be 80 and decide "I don't want to deal with learning how to use a smartphone and getting one".


> It is not reasonable to expect them to quickly learn new things that are constantly changing.

Of course it is. Maybe if we didn't normalize people refusing to learn things for no other reason than "I don't wanna" they'd have better neuroplasticity.

> it's very reasonable to be 80 and decide "I don't want to deal with learning how to use a smartphone and getting one".

I agree with you 100% on this, but it doesn't follow that you get to make the Will Call clerk for the Dodgers print your ticket for every game, even though you've been told for multiple years that season tickets are going paperless as an anti-scalping measure.


Then it’s reasonable to expect ticket sellers to use modern technology to implement zero-knowledge proofs, physical RFID tokens, and other measures that prevent scalping.

The technology does exist, but it might take more effort than a lazy smartphone app, which probably isn’t effective against scalping anyway. Can’t a phone app / QR code be forged?
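For what it's worth, the usual answer to the forgery question is that app tickets aren't static: vendors typically rotate the barcode on a short timer, TOTP-style, so a screenshot goes stale in seconds. A minimal sketch of the idea (hypothetical; not any vendor's actual scheme, and the secret/period values are made up):

```python
import hashlib
import hmac
import struct
import time

def rotating_code(secret: bytes, period: int = 15, now: float = None) -> str:
    # TOTP-style rolling code: HMAC over the current time window,
    # so the displayed barcode is only valid for `period` seconds.
    t = time.time() if now is None else now
    counter = int(t // period)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha256).digest()
    return digest[:4].hex()

# Two renders in the same window match; the next window does not:
print(rotating_code(b"ticket-secret", now=1_000_000))
print(rotating_code(b"ticket-secret", now=1_000_015))
```

A forwarded screenshot or a printed copy fails verification as soon as the window rolls over, which a static printed QR code can't do.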


I'm going to be harsh, sorry.

In this case nobody is forcing them to buy a Dodgers ticket. It's a completely optional and absurdly expensive luxury good that is purely for leisure. They can simply not buy a ticket if they don't want to accept the conditions of sale.


Yeah... I mean, who says I should have to put in wheelchair ramps for my ballpark that seats tens of thousands? I mean, so few people use/need them, I should just be able to refuse service to those people. Right?

/sarc


I don't want to blow your mind but choosing not to have a smartphone and being in a wheelchair are not remotely comparable.

So, you want to force people to give money to specific, monopolistic corporations? Why would I want a smart phone if I'm blind... how am I expected to use a smart phone when I am blind, exactly?

Because quality of life doesn't have value in and of itself. Especially for the elderly: they should be excluded from enjoying the end of their lives simply because no one wants to think of a solution to the problem that doesn't require them to introduce massive amounts of risk into their lives, which also negatively impacts their quality of life.

If you work in an industry that is based solely on customer delight, stories like these are what you are looking to avoid due to brand damage. It is going to cost more time/energy to deal with the backlash than just coming up with a simple solution in the first place.



The devil is in the details. For example, OAI does not have regional processing for AU [0], and their ZDR does not cover files [1]. Anthropic's ZDR [2] also does not cover files, so you really need to be careful, as a patient/consumer, to ensure that your health, or other sensitive data, being processed by SaaS frontier models is not contained in files. That is asking a lot of the medical provider, who would have to know how their systems work; they won't, which is why I will never opt in.

[0] https://developers.openai.com/api/docs/guides/your-data#whic...

[1] https://developers.openai.com/api/docs/guides/your-data#stor...

[2] https://platform.claude.com/docs/en/build-with-claude/zero-d...
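If files are the gap in ZDR coverage, the obvious mitigation is to extract text client-side and pass it inline as ordinary chat messages rather than uploading the file itself. A minimal sketch of what that request shape looks like (the model name, prompt, and field layout here are illustrative assumptions, not a vetted compliance pattern):

```python
def build_inline_request(report_text: str) -> dict:
    # Inline the extracted text as a plain chat message instead of a
    # file upload, so nothing lands in the file storage that the ZDR
    # terms exclude. Model name is an assumption, for illustration only.
    return {
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "system", "content": "Summarize the patient report."},
            {"role": "user", "content": report_text},  # inline text, not a file
        ],
    }

# Hypothetical send step with the openai client (client = OpenAI()):
# resp = client.chat.completions.create(**build_inline_request(text))
```

Of course, this only works if the provider's integration actually does the extraction client-side, which brings it back to the problem above: the medical provider has to know how their systems work.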


Azure OpenAI is not the same as paying OpenAI directly. While you may not be able to pay OpenAI for them to run models in Australia, you can pay Azure: https://azure.microsoft.com/en-au/pricing/details/azure-open...

The models are licensed to Microsoft, and you pay them for the inference.


There is no way to upload files as part of the context with Azure deployments; you have to use the OAI API [0]. And without an architecture diagram of the solution, I am not going to trust it, based on the known native limitations of Azure's OAI implementation.

[0] https://github.com/openai/openai-python/issues/2300


Marketing is marketing; nothing about it was ever about being factual when there is a total addressable market to go after and dollars to be made! This is in line with much of the other marketing that exists in the AI space as it stands now, not to mention the use of 'AGI' within the space currently.


Sure, but there are plenty of cases where a deceptive name has been considered enough to at least warrant an investigation: https://en.wikipedia.org/wiki/Long_Blockchain_Corp.

I'm not saying anything is going to happen; ARM Holdings has a lot more money and lawyers than Long Blockchain did. I'm just saying that it's not weird to think that a deceptive name could be considered false advertising.


That would not hold up considering that they consistently use 'agentic' in their press release and make no mention of 'artificial general intelligence'. Just because two things have the same acronym does not mean that they stand for the same thing. Marketing being cheeky is not a crime.


It's not "being cheeky". They know that the holy grail for AI is AGI. They know that people are going to see the acronym AGI and assume Artificial General Intelligence. They know that people aren't going to read the full article.

This isn't just a crass joke or a pun, it's outright deception. I'm not a lawyer, maybe it wouldn't hold up in court, but you cannot convince me that they aren't doing this on purpose.


Of course they did it on purpose, but that's not illegal. They are not at fault for individuals not reading what the acronym stands for, given the intent they state within the press release, which is very, very clear. They are not obligated or liable for others' lack of due diligence.


They may not be criminally liable but they are at fault for sure.


The AGI in "Arm AGI CPU" isn't an acronym, and that is no coincidence.


