How about if I see an ad I should be allowed to see who bought the ad?
How about how they chose to target me? Is it because of my race or interests or gender or age?
How about the option to opt out?
How about the option to configure your future ad targets / attributes?
How about no special black-box algorithms? Beyond the addiction issues, how can these exist while guaranteeing no racial or other forms of discrimination?
How about no-algorithm opt-outs?
How about no more quasi-curated trending sections, and the option to opt out and see purely organic results (Twitter Moments, Twitter comments, Facebook feed/news, Google Search, YouTube Trending, etc.)?
Finally... how about reparations for anti-consumer practices in some form, or jail time for executives?
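The ad-transparency demands above could be pictured as a simple disclosure record attached to every ad. This is purely a sketch; the class and field names are hypothetical and don't come from any real platform API:

```python
from dataclasses import dataclass

# Hypothetical sketch of what a mandated ad-disclosure record might carry:
# who bought the ad, and which attributes were used to target the viewer.
@dataclass
class AdDisclosure:
    ad_id: str
    buyer: str                   # who paid for the ad
    targeting_attributes: dict   # attribute -> value used to target this user

    def uses_sensitive_attribute(self) -> bool:
        # Flag targeting that relies on protected categories.
        sensitive = {"race", "gender", "age", "religion"}
        return any(k in sensitive for k in self.targeting_attributes)

disclosure = AdDisclosure(
    ad_id="ad-123",
    buyer="Example Corp",
    targeting_attributes={"interests": "cycling", "age": "25-34"},
)
print(disclosure.uses_sensitive_attribute())  # True: "age" is treated as protected here
```

A record like this would make the "who bought it" and "why me" questions answerable at the point where the ad is shown.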
I'd still like to go back to the days when my feeds on YouTube, Twitter, Facebook, etc. weren't all just algorithmic guesses at what I like; it would be nice if they were chronological again. I'm all for also having a recommended feed on a site like YouTube, since I do want to find similar content I might enjoy, so I appreciate that YouTube still offers a normal chronological subscription feed I can view too.
How would you know that they haven't done illegal things without actually looking into whether they did? I think gaining insight into that black box is exactly what they're after: seeing whether what happens inside it is illegal.
I get that you're using "algorithm" as shorthand here, but can you elaborate and describe what types of algorithms you suggest be opt-outable?
"Algorithm" is a super-general term, but everyone uses it to describe something super-specific; everyone sort of has their own personal definition. I'm in favor, at least in principle, of legislation that would curb some of the more abusive uses of data science, but the idea of trying to legislate "algorithms" seems... fraught.
Agree it is hard but important to do to avoid discrimination. It’s less “regulating algorithms” and more “protecting personal data and access”.
Google and other big-data companies should not be allowed to profile people unregulated, and that's exactly what they are doing.
I think a reasonable solution is that if a company uses someone's "personal data" (legal definition pending) to configure/curate/personalize/uniquely alter their experience in any way, it must let the consumer or user change those settings.
Configuration variables will open up transparency and are the first step toward a personal data right.
If an "algorithm" is too complex to let a user configure it (a black box), the company needs to provide a way to opt out to a general, organic, main-audience one instead.
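The opt-out fallback proposed above could be sketched as a feed builder that checks a user setting before applying any personalization. This is only an illustration; `personalized_rank` and `chronological` are hypothetical stand-ins for a platform's internal ranking functions:

```python
from datetime import datetime

def chronological(posts):
    # Organic baseline: newest first, no personal data consulted.
    return sorted(posts, key=lambda p: p["posted_at"], reverse=True)

def personalized_rank(posts, interest_profile):
    # Stand-in for an opaque, engagement-optimized ranker.
    return sorted(posts, key=lambda p: interest_profile.get(p["topic"], 0),
                  reverse=True)

def build_feed(posts, user):
    # Honor the user's setting: fall back to the organic feed
    # whenever personalization is declined.
    if user.get("opted_out_of_personalization", False):
        return chronological(posts)
    return personalized_rank(posts, user["interest_profile"])

posts = [
    {"topic": "cycling", "posted_at": datetime(2023, 1, 2)},
    {"topic": "cooking", "posted_at": datetime(2023, 1, 3)},
]
feed = build_feed(posts, {"opted_out_of_personalization": True})
print([p["topic"] for p in feed])  # ['cooking', 'cycling'] — newest first
```

The point of the sketch is that the organic path uses no personal data at all, so the opt-out is meaningful rather than a cosmetic toggle.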
“race/gender/x/y/z is a protective sensitive category prohibited from advertising to directly”
Does anyone actually believe they don't provide ways for advertisers (including nefarious ones) to do so by inferring it?
Or that their own internal tools and content-curation algorithms follow these rules? Even if they try, and only discriminate a tiny little bit while continuously working on it, how on earth is that allowed or okay?
I agree with a lot of these, but the option to opt out is already your choice: stop using the service. Or are you thinking that by opting out you could pay for the service instead?