> For now, sure, we're in the transitional period, but in the long run? Why?
Assuming that after the transitional period it will still be humans working with AI tools to build things, i.e. that humans actually add value to the process: will the human+AI pairing where the AI can explain what it built in detail, and the human leverages that to build something better, be more productive than the pairing where the human does not leverage those details?
That 'explanation' will be, or can act as, the human-readable code or its equivalent. It does not need to be any coding language we know today, however. The languages we have today are already abstractions and generalizations over architectures, OSs, etc., and that 'explanation' will be different but in the same vein.
Which seems like a silly, accidental overreach of the law, if that is the way it applies.
The literal reading of the law says this is only required when a child is the primary user of the device.
> (b) (1) A developer shall request a signal with respect to a particular user from an operating system provider or a covered application store when the application is downloaded and launched.
but 'user' here is:
> (i) “User” means a child that is the primary user of the device.
So these rules should only apply to accounts/devices where a child is the primary user.
Grep on an adult's machine would not need to check how old you are, at least with a literal reading of the law.
I do not think the law provides guidance here. The signal is only required when children are the primary device/account users. So one model would be: any initial account set up is automatically considered the 'account holder' and not a child account. Then it would be the prerogative of the 'account holder' to set up child accounts or not. That seems to fit both the spirit and the literal text of the law.
So grep/ls/etc are all installed as part of that 'account holder' and do not need to do any age verification.
The signal only needs to be checked when the device/account user is a child and when downloading apps. I think an unfortunate consequence here is that the literal definition of the law says package managers probably cannot run on child accounts without jumping through a bunch of hoops. Which is bad for children learning code/computers/etc.
The first thing I would change about this law would be:
> (b) (1) A developer shall request a signal with respect to a particular user from an operating system provider or a covered application store when the application is downloaded and launched.
Any application that does not need to know a user's age should not be required to request the 'signal'.
The whole point of the bill is to create a cause of action for the Attorney General to sue companies. In the bill, they say the damages are up to $2,500 per negligently affected child ($7,500 if intentional), so it doesn't matter how many non-children it affects. E.g. if the OS/appstore/accounts/application is in the context of a workplace that only employs adults, none of this matters.
So it looks like the law only requires it on first launch. Which makes sense if the application can only be run from that one account. Apps that can be launched from multiple accounts are not singled out in the law, but the spirit of the law would have you checking which account is launching the app and whether it is in the correct age range.
That's not a guarantee. It's up to how the courts interpret it. And given that this law is meant to handle a moving target like age, I fully expect them to interpret it in its disjunctive form.
> why does the operating system need to be involved in this?
The goal in my mind is to have an account a parent can set up for their child. This account is set up by an account with more permissions. Then the app store depends on that OS-level feature to tell what apps can be offered to the account.
Let's say the age questions happen when you install the app store. That means if you can install the app store while logged in as the child account, the child can answer whatever they want and get access to apps outside of their age range. The law could require the app to be installable and configurable from a different account and then given access to, or installed on, the child account; however, at a glance that seems a larger hurdle than OS/account-level parental control features.
The headline calls this age verification, but the quote in the article, "(2) Provide a developer who...years of age.", makes it sound way different and much more reasonable than what Discord is doing.
I would much rather have OSs be mandated to provide parental control features than what Discord is currently doing. I am going to read the bill later, but here is how Discord age verification could work under this law.
During account creation Discord accesses a browser-level API and verifies it server side. Discord now knows if the OS account is labeled as for someone under 13 years, over 13 and under 16, over 16 and under 18, or over 18. Then it sets their Discord account with the appropriate access.
No face scan, no third party, and no government ID required.
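A minimal sketch of that flow, assuming a hypothetical OS/browser signal (no such API exists today): the OS account stores a birth date once, and apps only ever see the coarse age bracket, never the date itself.

```python
from datetime import date

# Hypothetical: map an OS account's stored birth date to the coarse age
# brackets described above (under 13, 13-15, 16-17, 18+). An app like
# Discord would receive only the bracket label, not the birth date.
def age_bracket(birth_date: date, today: date) -> str:
    # Compute completed years, accounting for whether this year's
    # birthday has happened yet.
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    if age < 13:
        return "under-13"
    elif age < 16:
        return "13-15"
    elif age < 18:
        return "16-17"
    return "18-plus"

print(age_bracket(date(2015, 6, 1), date(2026, 1, 1)))  # → under-13
```

The privacy-relevant design choice is that the bracket is computed OS-side, so the app never learns anything it cannot use.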
> The goal in my mind is to have an account a parent can set up for their child. This account is set up by an account with more permissions. Then the app store depends on that OS-level feature to tell what apps can be offered to the account.
That sounds like an OS feature that parents would like to have. Probably has some market value. Maybe just let the market figure that one out.
Or, we could have an overbroad law passed that torpedoes every open-source OS in existence. If I were MS, Google, or Apple, that'd be a great side benefit of this law. Heck, they probably already have this functionality in place.
The problem here is legally-mandated age verification, not where it is placed (although forcing it into all OSes is absolutely ...). The gains are minimal for children and the losses are gigantic for children and adults. I'm not keen to have children avoid blisters by cutting off their feet.
Put control back with the parents. Let them buy tech that restricts their children's access. This law doesn't protect children from the mountains of damaging content online.
And let all the adults run Linux if they want to without requiring Torvalds to put some kind of age question in the kernel and needing `ls` to check it every single run.
> That sounds like an OS feature that parents would like to have. Probably has some market value. Maybe just let the market figure that one out.
If there were a competitive market for OSs this probably would work, but we do not really have that. Getting the market to be competitive likely takes either considerable time or other forms of government intervention. If there really were a competitive market, this would have been a solved problem ~15-20 years ago, since parents have been complaining about this for ~25-30 years at this point.
> Or, we could have an overbroad law passed that torpedoes every open-source OS in existence. If I were MS, Google, or Apple, that'd be a great side benefit of this law. Heck, they probably already have this functionality in place.
I do not think the law does that. Either an additional feature making age/birth-date entry and age-bracket queries available, or an indication that the OS is not intended for use in California, both seem to let developers continue along like normal. edit: Or, I think, an indication that it is not for use by children.
> The problem here is legally-mandated age verification, not where it is placed (although forcing it into all OSes is absolutely ...). The gains are minimal for children and the losses are gigantic for children and adults. I'm not keen to have children avoid blisters by cutting off their feet.
In this case the mandate is entering an age/birth date at account creation where you can lie about said age/birth date. The benefit is the ability of an adult to set up parental controls for a child account.
> Put control back with the parents. Let them buy tech that restricts their children's access. This law doesn't protect children from the mountains of damaging content online.
This puts control in the parents hands. When they set up their child's account they can put in their child's age, or not, they can make it an adult account.
> And let all the adults run Linux if they want to without requiring Torvalds to put some kind of age question in the kernel and needing `ls` to check it every single run.
So from the literal reading of the law, the age checks are only required for "a child that is the primary user of the device". They do not need to affect accounts where the primary user is not a child. Nor does it seem like any application needs to run the check every time the application is launched.
The law unfortunately does require:
> (b) (1) A developer shall request a signal with respect to a particular user from an operating system provider or a covered application store when the application is downloaded and launched.
So in the case where a child is the primary account/device user, the app needs to request the signal at least once, when first launched, though it is not required to do anything with it. Delegating that to the package manager would make sense, but this part of the law should be modified: apps that cannot use the signal for anything, 'ls' for example, should not be required to request it.
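A minimal sketch of what that once-per-install requirement might look like in practice, assuming a hypothetical `request_age_signal()` API (stubbed out here, since nothing like it exists today): the app requests the signal on first launch, caches the result, and never asks again.

```python
import json
from pathlib import Path

# Hypothetical stand-in for the OS/app-store API the law would mandate.
def request_age_signal() -> str:
    return "18-plus"  # placeholder for the OS-provided age-bracket signal

def ensure_signal_requested(state_file: Path) -> str:
    """Request the signal on first launch only; reuse the cached result after."""
    if state_file.exists():
        # Not the first launch: the one required request already happened.
        return json.loads(state_file.read_text())["bracket"]
    bracket = request_age_signal()  # first launch: the one required request
    state_file.parent.mkdir(parents=True, exist_ok=True)
    state_file.write_text(json.dumps({"bracket": bracket}))
    return bracket
```

For something like 'ls' even this is pointless boilerplate, which is exactly why the request should happen once in the package manager rather than in every app.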
I agree. The headline says "all operating systems, including Linux, need to have some form of age verification at account setup", which is pretty inaccurate.
It's just asking for some OS feature to report age. There's no verification during account setup. The app store or whatever will be doing verification by asking the OS. Still dumb to write this into law, but maybe not a bad way to handle the whole age verification panic we're going through.
> there are people out there who think it's trash because we can trick it if we ask questions in weird ways.
Some of this sentiment comes from wanting AI to be predictable, and for me, stumbling into questions that the current models interpret oddly is not uncommon. There are a bunch of rules of thumb that can be used to help when you run into cases like this, but no guarantee that they will work, or that the problem will remain solved after a model update, or across models.
There are a lot of rules of thumb you can follow to avoid getting bitten by a rattlesnake, but the easiest way is to just not pick up random snakes. I don't know where I'm going with this, but I am going for a walk.
> No, they do not delegate the power to lay (set) taxes to the executive, they do assign the executive the function of collecting the taxes laid by Congress.
The quote from the Constitution is "The Congress shall have Power To lay and collect Taxes," not for the executive to collect taxes. If they can delegate collecting to the IRS in the executive branch, why can they not delegate the "Power To lay" taxes?
None of these seem to apply, and I am not a lawyer, but if they do not apply, then why would the president have the power of taxation when that is given to the legislative branch, not the executive branch?
Not clear to me why these new tariffs would be on better footing than the last and the last never seemed to be on good footing.
> Its like going 70 mph in a sleepy subdivision because a road sign on the interstate says you can go 70 there.
> Trump is taking an law that says "You can do X if Y" and saying "I can do X"
I think it's more like going 70mph downtown because there's a sign saying "if on an interstate you can do 70mph" -- the "if on an interstate" is pretty important there!
I have to assume that some of that 4% has second order negative effects on US importers and consumers.
Profit margins cannot always go down by 4%, and in those cases goods and services then not being available to US importers and consumers is only one example.
My assumption is that the 96% statistic does not fully encapsulate the negative costs to consumers. I have to wonder how much higher the burden is than 96% when all second-order effects are taken into account.
I kept having issues like this, with different kinds of videos, until I scrubbed my history of any of the kinds of videos I did not want.
If I click on something I thought I would want to watch and it is the kind of video I do not want recommended to me, I immediately delete it from my watch history, block the channel, and sometimes block that profile from viewing my YouTube channel.
~2 years ago I never had to delete anything from my watch history and my feed/recommendations were OK; now I have to, if I do not want my feed/recommendations to occasionally be flooded with something I do not want.
I watch things from unknown-to-me creators in a private window, then copy the URL over to a logged-in window if it's any good. Same idea, might be an easier workflow.
Absurd that we have these sorts of workarounds, but of course the view numbers are better if it keeps fishing for just the right kind of clickbait trash that you'll wolf down endlessly.
From my reading yes, but I think I am likely reading the statement differently than you are.
> from first principles
Doing things from first principles is a known strategy, so is guess and check, brute force search, and so on.
For an LLM to follow a first-principles strategy, I would expect it to take in a body of research, come up with some first principles or guess at them, then iteratively construct a tower of reasonings/findings/experiments.
Constructing a solid tower is where things are currently improving for existing models, in my mind, but when I try the OpenAI or Anthropic chat interfaces, neither does a good job for long, not independently at least.
Humans also often have a hard time with this. In general it is not a skill that everyone has, and I think you can be a successful scientist without ever heavily developing first-principles problem solving.
"Constructing a solid tower" from first principles is already super-human level. Sure, you can theorize a tower (sans the "solid") from first principles; there's a software architect at my job that does it every day. But the "solid" bit is where things get tricky, because "solid" implies "firm" and "well anchored", and that implies experimental grounds, experimental verification all the way, and final measurable impact. And I'm not even talking particle physics or software engineering; even folding a piece of paper can give you surprising mismatches between theory and results.
Even the realm of pure mathematics and elegant physics theories, where you are supposed to take a set of axioms ("first principles") and build something with it, has cautionary tales such as the Russell paradox or the non-measurability of Feynman path integrals, and let's not talk about string theory.