I just spent three months hiring in NYC, and now that I think about it, I haven't seen a single person mention they were considering counteroffers from Facebook. For context, Facebook and Google are the two largest tech companies with a significant NYC presence. It's telling that a substantial portion of our candidates admitted to considering competing offers from Google, but literally no one was considering Facebook.
> Usually half of the close is done for recruiters with the brand Facebook has
I'm also finding that company brand plays a huge role in closing candidates. Our company's brand is generally pretty strong, and I've found one of the things candidates respond to most is the story we tell about our company's past, present, and future. Facebook's story has become "we were founded by a jerk who didn't care about privacy, our not caring about privacy has had massive consequences for American and global society, and our promises to improve our approach to privacy in the future have proven to be disingenuous smokescreens."
It's no wonder the substantial portion of people who care about their employer's ethics are turned off.
There is also an issue with the 'evaporative' effect. If no one who works there is seen as 'ethical', then you'd expect the people who do work there to be unethical or dubious. So trying to get a promotion becomes more cut-throat, the lunch crew has a few more 'jerks', HR is a bit more biting, etc. Your hackles get raised and you become more suspicious of the motivations (however benign) of others. Better to just not get involved.
Sheryl hired the Swift Boat campaigners to stop Congress. Finding out about that made me assume that, over time, most of their employees would trend away from optimism.
I wonder if the exec team realizes that the ad-tech industry has had its Great Financial Crisis. Nobody is in love with them anymore. They'll get about as much reception from politicians as Wall Street did when Congress passed Dodd-Frank. Banks don't earn much more than their cost of capital anymore.
I’m on mobile so I can’t read the numbers on the Excel screenshot you provided, but the historic high return on average equity for banks[1] (not including brokerage) was 16.29% in 1999. At last measure it was 11.85%. Dodd-Frank was merely a speed bump.
The vast majority of banks have long since recovered from the crisis.
That's not a good chart for banks. Their cost of capital is 8-10%; doing 11 or 12% is pretty shitty compared to the pre-crisis era.
And the smaller community banks have less onerous regulations than the big ones. Banks have to be much more capitalized and have less leverage because of Dodd-Frank. That's why the return on equity is mediocre.
It isn't right. The majority of banks are profitable, the industry is somewhere around $200B/year of profit (I think that's just retail banking, not including brokerages and stuff).
> In 2004, he played a critical role in President George W. Bush’s winning re-election campaign, serving as the Research Director for Bush-Cheney ’04, where he helped develop the campaign’s opposition research, message development and rapid response operations.
The oppo research director of the campaign can have no official links to the 527 group, of course. But the group exists for one reason only - to discredit political opponents.
Sheryl Sandberg and Zuck are a team. They are aligned in some bizarre goal that seems to function as a juggernaut that I got into tech specifically to avoid.
There’s also the reputation impact. When all of the bad things at Uber eventually became public, Uber engineers started reporting difficulty getting new jobs. Apparently hiring managers assumed, perhaps correctly, that anyone who stuck it out at a toxic place that long was possibly a source of the toxicity themselves.
Can confirm. Worked there. Deeply disenchanted with the ethics of many people especially in the product, marketing, and (as you would guess) senior leadership groups.
> "It's telling that a substantial portion of our candidates admitted to considering competing offers from Google, but literally no one was considering Facebook."
Interesting anecdote. Google is a bigger concern for privacy and personal liberty, yet job seekers are shunning Facebook because of its more wide-ranging negative press.
>google is a bigger concern for privacy and personal liberty
Big claim. Any proof?
With Facebook, what annoyed me was that they set up internal teams to help with political candidates' social media campaigns - no matter who the candidates are.
Extremely worrisome if you prefer people to be elected through a democratic process based on discussion. This is not like understanding what people want or how they think through big-data analysis, but manufacturing it.
Sure - provocations and lies are not new; they have always been part of politics. But with social media, everything happens at scale and everything happens violently.
> With Facebook, what annoyed me was that they set up internal teams to help with political candidates' social media campaigns - no matter who the candidates are.
(EU citizen here): I would prefer corporations (especially ones with such depth of funds and breadth of influence) be kept entirely outside the electoral process. If that's not feasible, then the second best option is, indeed, that they provide the same service to all candidates, no matter who those candidates are.
Am I getting it right that you'd prefer they pick some candidates to help, to the detriment of others? Because that option does not sound very healthy to me, personally.
My ideal system is to eliminate all private money from elections. You as a candidate are given a stipend by the FEC at the beginning of the campaign season, the same amount as any other candidate for the same office, and you are free to spend it. You ran out? Tough. See you next election cycle.
Nobody is allowed to give you money or anything else of value (including free airtime) -- not individuals, not companies, just the FEC. Anything else is bribery, and a crime. That way, you're not going to do things just to please your benefactors and gain an edge over your opponent next time, as your war budget is already accounted for. Instead you can focus on doing what's best for the people.
If you really want money out of politics then you replace all of them with sortition (assemblies of the people). They are representative, being drawn at random from the whole population, and they are not elected, so they don't need to campaign. Like politicians, they need to be supported by experts and advisors on the specific topic they are working on. I'd rather trust a group of random people deliberating than a bunch of professional liars. Sortition is a way for people to participate in democracy more than by voting once every couple of years and posting on FB.
I have been thinking exactly the same. I now have to wonder if you work at Facebook and have "read my mind" via my likes… ;-)
I've been musing on whether there should be some small barriers to joining, as there are in jury service - perhaps the elected get to oppose a certain number of candidates, or candidates must pass a civics/governance test first so at least they have some technical knowledge going in. I can see that being twisted into something bad, though.
Much as I dislike the Lords Spiritual in the current system, I wonder if the sortition should embrace it and be a "tulip farm" of certain interest groups e.g. 20 each for religion, business, justice, commoners etc, as then there's a definite base of understanding in important areas.
However it would be arranged it'd be hard for it to be worse than having the Lords full of lords though.
OK, what if people with money want to spend that money on political speech _without_ coordinating with the candidate? That's the Citizens United problem.
There's no quid pro quo bribery, but if the NRA spends a bazillion dollars attacking your opponent but not you, it'd be hard to say there's no influence on your decision-making process. At the same time, it's really tricky to ban. Is something like Michael Moore's Fahrenheit 9/11 a form of political advertising?
> Is something like Michael Moore's Fahrenheit 9/11 a form of political advertising?
Not only that, is CNN or Fox News coverage of a sitting politician who is running for reelection a form of political advertising? How they choose to report stories -- and which stories they choose to report -- can certainly affect how the voters view the candidates.
Probably the best solution is to just have a signature requirement where if you get that many signatures, the government gives your campaign an amount of money equal to the average amount of private money raised by successful candidates running for the same level of office in the previous election.
Then the average privately-funded campaign will have twice that much (if they get the signatures too), but a factor of two isn't huge here. It's more of a threshold situation where once you reach a saturation point it's diminishing returns. Get the candidate to that point with public money and the value of trading legislation for private money would be much diminished.
Of course, you still have the problem that too many people vote for who cable news tells them to.
This is reasonably similar, AFAIK, to how it works in France (and I assume many other developed countries).
Campaigns do spend their own money but it is capped at some low value to even the playing field, and they also receive public funding. TV stations are required to give equal time to all candidates (in the 2007 election there were 12 candidates, only 3 of which had any realistic chance of winning, but major TV stations spent equal time interviewing all of them).
> Nobody is allowed to give you money or anything else of value
That sounds like a form of extreme boycott - and however much I despise politicians in general, subjecting them to essentially expulsion from society (at least temporarily, until the election ends) plus a complete gag order and media blackout (because otherwise I could promote a candidate without giving them money directly - I would just publish ads under my own name praising the candidate) just for wanting to be elected seems a bit extreme to me. Not to mention that, at least in the US, it's probably incompatible with at least half of the constitutional amendments.
Yes, the idea that campaign donations = speech is a new one as well. Tbh I think we need a constitutional convention to really solve all the problems with the American political system.
And what if a candidate wants to spend that stipend on Facebook ads. Is Facebook not allowed to have a salesperson take that money and sell ads to the candidate?
What does free airtime mean in this context? People used that term a lot about the coverage of Trump during the 2016 election, but forbidding news outlets to cover a candidate is obviously absurd.
> I would prefer corporations (especially ones with such depth of funds and breadth of influence) be kept entirely outside the electoral process.
Do you mean corporations being banned from providing services to any electoral campaign? Probably not, because in that case election campaigns would be impossible. If so, then Facebook would be free to provide promotion services to any political campaign too - they are a service provider like any other.
> because in that case election campaigns would be impossible
Would that be a bad thing?
I would love an election that was simply an announcement of the election date and a website to see the candidates and their politics. The only campaigning would then be the government campaigning to get people to vote.
With Facebook, it's easy for an average individual to leave the platform for good: stop using Fb/Insta/Whatsapp and install something like Privacy Badger to avoid tracking on all the other sites that have some form of Fb integration.
Leaving Google, by contrast, is way more difficult: their ecosystem reaches literally every corner of the web, and you have to deal with it even if you don't consciously use any Google product. For example, if Recaptcha doesn't like you, everyday online tasks like paying public school fees [1] or signing up to an online forum become much harder. Another example is AMP, where the fact that you are reading an article hosted on Google infrastructure is often hidden from you; there are many more examples. Trying to quit Google feels like that episode of Black Mirror where a woman is ostracised by everyone because she doesn't have the same cybernetic implant everyone else is using. Just because Google hasn't been caught in any scandal comparable to Cambridge Analytica doesn't mean it's OK for them to have so much unchecked power.
It's getting a little wearying to have to rehearse the ways in which Google is a threat to privacy. But let's get the band together one more time:
Google runs search and email for essentially the entire web, controls the market dominant browser and mobile OS, has tracking scripts on >75% of the top million websites and runs a fair amount of the internet's infrastructure. It is the senior partner in the online advertising duopoly (together with Facebook) and runs one of the three major cloud computing services. It has also become the de facto standards authority for the internet and runs a massive continuous operation to collect photos of every street on the planet, which it is now expanding into interior spaces. It sells always-on microphones for the home, as well as a line of internet-connected home appliances. It does so much invasive stuff that I've probably forgotten half of it here.
So it's neither a big nor a controversial claim in 2019 to point out that Google has unique breadth of visibility into both the physical world and anything that touches a connected device.
No one disputes that Google has its tentacles everywhere -- and definitely needs to be kept on a leash. But the claim was that Google is not just a matter of concern but somehow a clearly bigger threat than FB.
That seems obvious. If you don't use Facebook you're pretty much outside of the Facebook tracking network with a few exceptions wrt Facebook cookie tracking which you can kill with a browser plugin like Facebook Disconnect. With Google, the tracking surface area is orders of magnitude more ubiquitous - everything from Search to YouTube to Chrome to Email to Android and on and on. Facebook is almost (but not quite) negligible in comparison.
That's a pretty considerable exaggeration. There is a big difference between "I have friends that use Facebook" and "I have friends who take pictures of me and upload them to Facebook" or "I have friends that upload their contact list to Facebook" and even then, the amount of data that Facebook can extract from you in that way is pretty minimal relative to just about any other activity people commonly engage in online.
"Orders or magnitude" (plural) means something on the order of 100x.
Which suggest to me that either you're either exaggerating quite a bit - or you were using the term without quite knowing what it means. (Which something more specific than simply "a lot").
Anyway, a 100x tracking surface area is pretty accurate and not an exaggeration in the least; if anything it is too conservative an estimate. Just Android, search and analytics on their own are easily 100x the tracking surface area of everything Facebook does, and that's without considering:
gmail, home, docs, amp, drive, maps, hangouts, chrome, chrome os, messages, voice, ads, gcp, youtube, firebase, music, waze, play-store, places, wallet, domains, duo and so many more. I don't understand how this isn't completely obvious.
The "bubble" is called reality; I elaborated on my point with detailed reasoning and all you've done is throw around insults like a troll. No point in continuing this discussion any further. Have a nice day.
“includes detailed location records involving at least hundreds of millions of devices worldwide and dating back nearly a decade.”
US law enforcement had been regularly accessing Sensorvault user data in a dragnet-like fashion to obtain location details for hundreds or thousands of users
> With Facebook, what annoyed me was that they set up internal teams to help with political candidates' social media campaigns - no matter who the candidates are.
I have become more and more convinced that this is Facebook's real business model; that enabling instances of archetype CambridgeAnalytica is the purpose the company actually exists for.
Just look at the amount of personal information Google knows/records about you. Your search history, web stats through Chrome, location history through Android, with whom you exchange emails if you're using GMail, which sites you visit and how long you stay on them through Google Analytics, probably online purchases with a combination of AdSense/AdWords & Analytics, everything you watch on YouTube etc.
They definitely collect much more data than Facebook. The only reason they haven't faced the same shitstorm is because they don't seem to share all that data with 3rd parties.
> The only reason they haven't faced the same shitstorm is because they don't seem to share all that data with 3rd parties.
This is the whole point though, is it not? As far as we know, Google treats the data they collect more thoughtfully and responsibly than Facebook. And so they are (rightly or not) viewed as less of a threat to the public good.
Of course, they could just be better at hiding their abuse of our data... But that's a conspiracy theory, not a matter of public record like the Cambridge Analytica scandal.
No, it's not. They share the data indirectly by allowing companies to target individuals for advertising purposes based on that data. You search for shoes on Google and then ads about shoes follow you all over the web. So while you can't download users' posts like CA did in order to profile them for their political affiliation you can surely target them for whatever product you want to sell. If it was just about ads on Google everything would be hunky dory. But it's not. Just because they're nice and cool doesn't mean we have to give them a free pass to our personal lives.
Is that really any better? Google is so monolithic and all encompassing that data collected by their services can be shipped around internally instead of having to be sold to third parties.
How is that a problem? The issue at hand is the irresponsible handling of data (especially wrt 3rd parties), not the general handling of 1st-party data competently within an internal network.
In what way? If Google uses your search history to target you with ads, is that somehow better than them leaking the data and a third-party service targeting you the same way? The end result is the same.
Assuming that the single source is trustworthy, sure. But we're talking about the likes of Facebook and Google here.
The two use cases for data aren't identical, and actually shipping the raw data out is worse. But, in my opinion, the two things are similar and the shipping out of data is not that much worse.
>They definitely collect much more data than Facebook. The only reason they haven't faced the same shitstorm is because they don't seem to share all that data with 3rd parties.
And that is something that is much more relevant to many users. I don't mind sharing a lot of my data as long as I know where my data actually ends up. If Google uses my data to improve their ad algorithm I'm fine with it; if my Facebook data ends up in the hands of some election-manipulation company I'm not fine with it, no matter how much data it is.
And how do you know what Google does with it? AFAIK Google has never officially stated in specific detail what data they collect, what they do with it, who can access it, etc.
Their Privacy Policy gives them a giant escape hatch to essentially do anything with it -
"We provide personal information to our affiliates and other trusted businesses or persons to process it for us, based on our instructions and in compliance with our Privacy Policy and any other appropriate confidentiality and security measures. "
I think it's quite disingenuous that you didn't quote the rest of that sentence:
>"...For example, we use service providers to help us with customer support"
As far as I'm aware there is no evidence that Google shares my personal information, without my explicit consent, with third parties like Cambridge Analytica, which collected tens of millions of individual user profiles.
Sorry, how is it disingenuous? I didn't consider the example relevant to the policy itself, and I provided a link to the source material for anyone to read. Giving a benign example is meant to downplay the fact that Google can do anything they want with your data.
My wife and I typically donate to a few non profits, such as the ACLU and Trout Unlimited. They occasionally mail us, but we did give them our address so that’s ok.
But one day she donated to the Environmental Defense Fund. Since then the number of surveys and donation requests from random non-profits has exploded to 3-4 a week, including weird ones like evangelical surveys and pro-Israel things. My wife is pissed at the EDF and will never give them another dollar.
The point? We were both fine with the non-profits having our address and using it, but knowing that one of them sold that data really pissed her off.
A bigger problem to me is Google's search bias and subtle manipulation. The same goes for Facebook's news curation algorithm. These things can directly impact our democracy, yet it's much harder to tackle or even investigate, because the whole thing is so elusive and subjective.
To me privacy seems to be already a lost cause. We've lost it and there's little hope to take it back. Also privacy violation is a relatively easy problem to understand. For bias and manipulation, however, we don't even know what to do.
Has Google ever disclosed exactly what data they collect, what they do with it, who can look at it, etc? We "know" that Google takes privacy "seriously", but that is a faith based position.
And I can't vouch for all of Google, but regarding location data, Google has been pretty transparent regarding which data is collected and stored; papers like NYT covered it extensively - see [1].
And Google also gives you clear ways to delete this data, as referenced in that NYT article [2].
And moreover, Google has been consistently on track to store less private data. Example: location data is going to be auto-deleted for users that want that, as of this month[3]. Maps now gets an incognito mode[4].
>but that is a faith based position.
Hope the links I referenced will help dispel this notion. Google does take privacy seriously.
(Disclaimer: I work for Google. The opinions expressed here are mine and not those of my employer; what I said is public knowledge.)
> And I can't vouch for all of Google, but regarding location data, Google has been pretty transparent regarding which data is collected and stored; papers like NYT covered it extensively - see [1].
How did you read that article and come away with the conclusion that Google has been "pretty transparent"? The story was written after more than a year of other news outlets reporting on law enforcement using Google's location data to fish for suspects. Google had been providing this data for at least two years before the Times reported on it [0].
> And moreover, Google has been consistently on track to store less private data.
Such as credit card transaction data collected without most people's knowledge [1] or location data after you've explicitly told it not to [2]?
Technology companies need to understand that both words "informed consent" are important. We currently have very little in the way of choices when it comes to data collection. It is simply not possible to opt-out anymore without tremendous effort and personal cost. I like this quote from Maciej Ceglowski:
"A characteristic of this new world of ambient surveillance is that we cannot opt out of it, any more than we might opt out of automobile culture by refusing to drive. However sincere our commitment to walking, the world around us would still be a world built for cars. We would still have to contend with roads, traffic jams, air pollution, and run the risk of being hit by a bus. Similarly, while it is possible in principle to throw one’s laptop into the sea and renounce all technology, it is no longer be possible to opt out of a surveillance society."
A big push towards openness and privacy has happened over the last year.
On an individual level, I don't think it's hard to opt out of Google's tracking.
I won't argue with Maciej's quote, though, because, just like with automobiles, people will still opt into the surveillance society willingly: because the utility it brings them outweighs other considerations.
Ask people if they want to be tracked at all times, and they'll say "no".
Ask people if they want to be able to locate their phone when they lose it, and their answer might be different.
Ask them if they'd want be able to cal 911 and ask to come and help them even if they aren't sure where they are, and you'll get a different distribution of answers again.
In the latter case, lack of "surveillance" is seen as a "tragic shortfall" [0], and adding it is a "feature"[1].
So see, it's not the surveillance per se that people object to. It's implementation details. Welcome to Ceglowski's world.
Two of them are more than a year old, but the practices described in each are ongoing. The third, which describes Google's tracking of users after they've specifically opted not to be tracked is from nine months ago.
> A big push towards openness and privacy has happened over the last year.
After literally a decade of constructing what is very likely the largest database of personal information in the world. Since the late 2000s, when Google purchased DoubleClick, it has worked to collect information without the informed consent of its users. What fraction of your users know that Google purchases their credit card transaction histories?
What is the "big push"? The only things I can think of were the opt-in auto-deletion of a subset of data announced over the last week or two. All the user has to do is pay attention to the tech press, then remember to activate the feature when it launches at an unspecified future date!
What is this "openness"? Working on a censored search engine for China without informing their own head of security?
> ...people will still opt into the surveillance society willingly: because the utility it brings them outweighs other considerations.
Sure, they absolutely do. There can be significant utility gains from large collections of information. But much of that utility could be gained from information collected in an anonymity-protecting manner. In order to have traffic information, for example, Google doesn't need to continuously track your location history.
> Ask people if they want to be tracked at all times, and they'll say "no". Ask people if they want to be able to locate their phone when they lose it, and their answer might be different.
And neither of these requires surveillance. The phone could be located either by returning its location on command, or by uploading encrypted location data to which only the user has the key. WhatsApp, for example, shows that end-to-end encryption can be seamlessly integrated.
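To make the second option concrete, here is a minimal sketch in Python (using the third-party `cryptography` package) of the idea: the key is generated on the device and never leaves it, so a server storing the uploaded tokens only ever sees opaque ciphertext. All names here are hypothetical, not any real vendor's API.

```python
import json
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# Hypothetical sketch: the key is generated on-device and never uploaded.
device_key = Fernet.generate_key()
cipher = Fernet(device_key)

def encrypt_location(lat: float, lon: float) -> bytes:
    """Encrypt a location fix on-device before it is uploaded."""
    payload = json.dumps({"lat": lat, "lon": lon}).encode()
    return cipher.encrypt(payload)

def decrypt_location(token: bytes) -> dict:
    """Only the key holder (the user) can recover the location."""
    return json.loads(cipher.decrypt(token))

# The server stores only the opaque token; it cannot read the coordinates.
token = encrypt_location(40.7128, -74.0060)
assert b"40.7128" not in token
assert decrypt_location(token) == {"lat": 40.7128, "lon": -74.0060}
```

A real find-my-phone service would derive the key from something the user controls (e.g. a passphrase) rather than keeping it in process memory, but the point stands: location recovery doesn't require the provider to be able to read the data.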
> Ask them if they'd want be able to cal 911 and ask to come and help them even if they aren't sure where they are, and you'll get a different distribution of answers again.
>
> In the latter case, lack of "surveillance" is seen as a "tragic shortfall" [0], and adding it is a "feature"[1].
Once again, this does not require ubiquitous surveillance, and it is misleading, at best, to imply that it does. Do you really not see the difference between location data provided to assist emergency response from a 911 caller and continuous location monitoring so that Google can serve more profitable ads?
Pre-Disclaimer: I don't mean to only pick on Google here, it applies to any company that collects such a vast amount of personal data on users. Also.. nothing personal :)
>Actually, Google has.
In extremely vague terms, yes. I want to see an itemized list.
For example, at company X, this is what we collect:
1) Your name, age, location, DOB.
2) Your location is sent to company X every 10 minutes.
3) Your IP is tracked per session.
4) All this data is linked to your profile.
5) Anything you type in the search bar is sent to a company X server.
6) After anonymizing (if we do it), this is what your data looks like.
7) We never delete any of the above, for the following reasons...
etc.
>And moreover, Google has been consistently on track to store less private data.
The default should be zero, or as little data collection as possible. From what you've said it seems people can opt out of some data collection, but it's vague as to what data is still being collected versus what isn't.
>Hope the links I referenced will help dispel this notion. Google does take privacy seriously.
Unfortunately they don't. I won't dispute your second claim.
> The default should be zero/as little as possible collection of data.
Really? What about telemetry for self-driving cars? Is it immoral to develop a system that leads to less blunt trauma and death on roads? We (HN users, I don't work for any of these companies) can define your term "as little as possible" about like you seem to define parent's term "seriously". The point being that such adjectives are difficult to pin down but also difficult to avoid. Define "difficult" however you see fit.
They own the cars so they can track them all they want.
Tracking me all over the place after I click the "Do Not Track Me" button isn't acceptable.
> Is it immoral to develop a system that leads to less blunt trauma and death on roads?
It could well be. Just as we humans decided not to use the scientific research the Nazis generated on unwilling human subjects, there are definite limits to what is acceptable even if the overall benefits are huge.
Collectively, we did no such thing. Many individual researchers and journals refused to use Nazi research, but many felt that it was unethical not to use it if it could save lives. In particular, I believe that the results of Nazi hypothermia experiments were extensively used after the war. It's certainly not a cut-and-dried problem with an obvious ethical answer.
Facebook has their privacy policy too. So what? Even if all the listed policies are followed, and even if they don't have loopholes (and they almost certainly do), Google still collects and retains a metric fuckton of information that isn't necessary to provide the services it actually provides. The NYT article is a great demonstration. And there is very little oversight around this.
Thanks for linking to the policy document. They have this convenient line that allows them to do anything.
"We provide personal information to our affiliates and other trusted businesses or persons to process it for us, based on our instructions and in compliance with our Privacy Policy and any other appropriate confidentiality and security measures."
>It's all here, and you can delete it (including batch delete by period or source)
That scratches the surface, but an iceberg hides underneath. For one, how do we know it's all the data? For another, there is no indication as to who has seen it or how Google uses it. That is my point: Google has never detailed those things, I suppose for legal reasons. A user has a right to know exactly what they are trading with Google in exchange for free services. They can then make up their own mind about whether it's worth it. I'm just picking on Google here because it's a soft target, but this should apply to any service. We need new privacy regulations to formalize this.
Sounds like they just needed to spin up one "affiliate" and provide the data to that for data mining / etc purposes.
Anyone deleting the data "Google" holds would have zero effect on the affiliate, while giving some people the feeling Google was doing the right thing.
So they claim, but I don't know why anyone should trust them about that.
Aside from that, though, what about the data collected from me? I have no Google account, but they're collecting data from me anyway. Same as Facebook.
> With Facebook, what annoyed me was that they set up internal teams to help with political candidates social media campaigns - no matter who the candidates are.
That sounds like the behavior of an honest service provider. Worse behavior would be helping candidates who match their political biases while working against candidates who disagree with them. That would look like abusing their position as steward of a worldwide platform.
> With Facebook, what annoyed me was that they set up internal teams to help with political candidates social media campaigns - no matter who the candidates are.
Isn't political campaigning part of the discussion? If so, why is it bad to give candidates equal access to tools to perform this campaigning? Would you begrudge a printer that would print signs for any candidate, no matter who they are? If not, how are electronic signs functionally different from printed ones?
> everything is happening violently.
I feel people are really abusing the word "violently" nowadays. Nothing that happens on Facebook is violence; it's mostly just talk.
>>google is a bigger concern for privacy and personal liberty
>Big claim. Any proofs?
All this skepticism around Google's capacity to abuse their troves of data and invasive services is a clear indicator that this discussion has very little to do with real privacy. It is mostly a playground for various corporate, political and media shills.
>>google is a bigger concern for privacy and personal liberty
>Big claim. Any proofs?
Well. How about the following facts?
-- Android's market share is about 75% of all smartphone users. Do non-smartphone users even still exist?
-- It's probably safe to say that it's much, much easier to consciously avoid using Facebook's services than to consciously avoid using Google's.
-- Android's hard-coded DNS server is a Google DNS server. The vast majority of people don't use a VPN (the only way in Android to change the DNS server is through a VPN setup) to get around this. I haven't looked it up, but it's probably safe to assume that all Chromebooks use Google's DNS by default too. My limited networking knowledge tells me this means Google knows who's using which Android/Google device at what time (by matching IP to Google login to Android device), who's visiting which website at what time, and who's using which app at what time (via requests to the app's server and to Google's servers for location, payment, auth, and other info).
-- A lot of people use Gmail. Google can literally read all Gmail emails. Even those sent into Gmail from outside of Google servers.
-- The vast majority of Android users have enabled "Google location services." This is a one-time "click OK to continue" dialog that permanently enables location services until they're manually disabled again in settings (which nobody does). Weather apps, Tinder, navigation apps, etc. almost all require it. This means those people are continually sending location data points, at sub-second (~500 ms) resolution, to Google's servers. Even when those apps are turned "off" (meaning running in the background; "off" doesn't really exist in Android). Google can literally know how long you poop, who your secret girlfriend is, and which specialist you visited at the hospital. The NYT had a huge piece about this: https://www.nytimes.com/interactive/2018/12/10/business/loca...
-- People use Chrome. With automated Google login. Meaning Google knows everything they do online.
-- I'm not even going to go into other popular Google apps, we all know them: Youtube, Google Assistant, etc. With all the accounts nicely automatically linked to one another.
-- All of the above information, and probably much more, can be used for profiling users. I think it's safe to say that Google knows absolutely everything about its users at this point.
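To make the DNS point above concrete, here is a toy sketch (all data fabricated) of the correlation being described: if the same party sees both account logins (user to IP) and DNS queries (IP to domain), a simple join attributes browsing history to a named person. The emails, IPs, and domains are hypothetical placeholders.

```python
from collections import defaultdict

# (ip, user) pairs observed from authenticated sessions
logins = [
    ("203.0.113.5", "alice@example.com"),
    ("198.51.100.7", "bob@example.com"),
]

# (ip, domain) pairs observed at the DNS resolver
dns_queries = [
    ("203.0.113.5", "oncology-clinic.example"),
    ("203.0.113.5", "news.example"),
    ("198.51.100.7", "jobs.example"),
]

ip_to_user = dict(logins)

# Join the two logs: browsing history now attaches to an identity.
browsing = defaultdict(list)
for ip, domain in dns_queries:
    user = ip_to_user.get(ip)
    if user:
        browsing[user].append(domain)

print(dict(browsing))
# {'alice@example.com': ['oncology-clinic.example', 'news.example'],
#  'bob@example.com': ['jobs.example']}
```

Real-world linkage is noisier (NAT, shared IPs, churn), but with timestamps and enough volume the join works the same way.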
Then there's the following:
-- Law enforcement (internationally?) can request any "sensor vault" data from any Google user in their country. AKA: they can request to look into all of the details of one's life.
-- The NSA can secretly request access to any of Google's data. Google is not allowed to disclose this access. By law. That's what they can legally do. Snowden has told us what they illegally do: anything they want. The NSA literally knows everything there is to know about everybody. In the world. And Google, by law, has to help them with that. In secret. This obviously has political consequences for the simple fact that "information = power". Those political consequences are not in favor of democracy.
Where I write "Google knows," I mean that Google's servers receive that information (a fact). It is debatable whether Google stores that information or not. As for the location data mentioned above, it is a fact that third parties (app makers) do store this location information and sell it (illegally; see the NYT link above). It is also debatable whether Google actually deletes your information when you ask them to.
My personal guess is that absolutely all information is stored, but that this storage is not disclosed to the public. I also personally believe that when you request your Google data to be deleted, only the "public-facing layer" of your info gets deleted. I don't believe for a second that they actually delete your "anonymized" data. In quotes, because such data can very easily be de-anonymized.
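A minimal sketch (fabricated points) of why "anonymized" location traces de-anonymize so easily: the most frequent nighttime location is almost certainly your home, the most frequent daytime location your workplace, and that (home, work) pair is close to a unique identifier on its own. The cell names and hours here are made up for illustration.

```python
from collections import Counter

# (hour_of_day, grid_cell) samples for one "anonymous" device
trace = [
    (2, "cell_home"), (3, "cell_home"), (23, "cell_home"),
    (10, "cell_office"), (11, "cell_office"), (14, "cell_office"),
    (19, "cell_gym"),
]

# Most common location overnight -> likely home
night = Counter(cell for hour, cell in trace if hour >= 22 or hour < 6)
# Most common location during working hours -> likely workplace
day = Counter(cell for hour, cell in trace if 9 <= hour < 17)

home = night.most_common(1)[0][0]
work = day.most_common(1)[0][0]
print(home, work)  # cell_home cell_office
```

The NYT piece linked above used essentially this kind of inference to re-identify specific individuals from a "pseudonymous" location dataset.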
I mean, just take the fact that there is deliberately no "DNS server address" setting in Android. Ask yourself why. Why would Google make it so much easier to just use Google's DNS server? Why does it offer a free DNS server to begin with? Why does all your location data have to go through Google's servers before being consumed by apps that run locally on your phone? That says it all to me.
Google makes a lot of genuinely useful products and services. We've all got to wrestle with the privacy tradeoffs of "free" maps, "free" email, "free" Android, etc. But at least the satisfaction of using well-built tools to accomplish more is enough of an offset for many people.
Facebook is much more likely to be seen as a guilty pleasure, or a marvelous time-waster, or something else that's a bit farther down the utility curve.
Perhaps in some countries/demographics. I'd bet Instagram and WhatsApp, as free, basic communication, are seen as much higher utility by a significant portion of the global population.
But much of that utility comes from the fact that everyone uses it, rather than some inherent quality of the product, like is the case for Maps. Feeling somewhat "forced" to use it does not help with a positive view of the company.
Google and Facebook are exactly the same on this score. They both collect as much data about you as they can get their grubby paws on, they both use that data for themselves, and they both allow others to leverage that data in exchange for cash.
Has there been any evidence of abuse and misdirection on the part of Google at the same level as Facebook?
This is more than just negative press, this is a question of how data collection has been misused, and what lies executives have told about current and future plans surrounding privacy and data abuse.
But, personally, I'm staying away from FB, Google, Amazon, Snapchat, et al for the reasons you've mentioned; negative press or no, I cannot ethically work for companies that are haphazardly building the foundations of a potential technocratic dystopia in their chase for profitability.
I wonder how much of it is that Facebook isn't all that much fun anymore, and working there would provoke all sorts of "I don't use it anymore" comments from one's peers.
Google has far greater potential for invading an individual's privacy than Facebook does. Google has Android, Chrome, search, maps and gmail (and now photos). Those are all very critical pieces to a person's real world life. Facebook has FB, Instagram and WhatsApp. Yes, some private communication but it's limited to "social networking". Your taxes and utility bills don't get mailed to FB Messenger. You don't search for cancer research on Instagram and Facebook can't tell what other apps are installed on your phone.
well, it's the nature of the advertising business to defy privacy and liberty. competition occurs around how well you know consumers and how well you can manipulate those consumers into actions favorable to you (i.e., exerting power over you). further, online advertising is basically a duopoly of google and facebook, with google being twice as big as facebook and much more invasive.
google's, or more broadly, alphabet's, only competitive advantage is a thin lead on what might be called data intelligence (or surveillance, for the more cynical). they collect data across all internet ingresses/egresses, on not just those who opt-in, but even those who actively avoid google (through android, gmail, google apps, analytics, dns, internet access, etc.). and that data is super-valuable--alphabet had $30B in profits on $137B in revenue (an extraordinary margin).
to be clear, i'm not attempting to judge or disparage individual engineers at google. i'm sure most are mighty fine folks.
but for the foreseeable future, google really has no choice in the matter, not until it finds a different massive market from which to derive revenues. it's the nature of the business. and in the meantime, it's also under assault from intelligence, paramilitary, corporate, and governmental organizations from across the globe.
at least for americans, privacy and liberty are fundamental and inalienable rights. even though the constitution explicitly forbids only governmental interference in those rights, they apply more broadly to any entity, particularly global corporations, attempting to exert power over individuals. and while inalienable, citizens still have a duty to be vigilant against such infringements.
I too was curious about this balance weighting. FB slurps in all of the data that users voluntarily post. Google just learns things about users through inference, whereas FB gets data posted directly by the user. Seems to me that FB is able to be way more invasive.
> FB slurps in all of the data that users voluntarily post.
That seems likely to be a grand understatement. FB has the opportunity to collect a great deal of data about their users beyond what they explicitly post -- for example, data about when and how they use Facebook mobile apps, how they interact with the Facebook web site, and what external web sites they visit which contain Facebook Like widgets.
On the other hand, FB is inherently social. I assume everything I give to FB has a chance of being public one day. I have some private conversations, but in the back of my head is that time the UI was deceiving and made seemingly direct messages public. FB is for sharing things. Google runs my phone, my work and personal email, my calendar, and more. I think they have a better attitude toward it, hence my willingness to trust them so far, but from a standpoint of ability to be invasive, Google blows everyone else out of the water on my devices.
I can see your concern about messages via email, but for me personally, email is just not a thing anymore. Plain spam aside, corporations/marketing/etc. have ruined email into a channel with such a low S/N ratio that it's just not useful. What percentage of internet users actually use email for communication anymore? Sure, some, but it's not my largest attack vector (I consider Google/FB as attacking me).
Anything serious goes through email, and this is the data I'd be most worried about leaking: anything from security-related stuff like login/ID confirmations to receipts, confirmations, sensitive data, and professional communication.
Waaay more valuable than FB scraping my phonebook and photos
This may be true for personal communication but any sort of business deal is going to be happening over email. Mortgages, selling your company, large sales... All of the contracts are going to end up in your inbox.
This whole debate about who is worse, Google or Facebook, is a bit ridiculous. They're both unacceptably awful, and practically speaking I don't think it matters which is more awful than the other.
> google is a bigger concern for privacy and personal liberty
I disagree. I think that on the whole, they're both about the same. But in terms of integrity, honesty, and ethics, Google has a (small) lead on Facebook.
In my experience, Facebook used to be a cool thing to be on when you were documenting college party shenanigans and sharing pictures with friends, before it reached mass adoption to the point that your parents/grandparents were trying to add you as a friend. This was a time when organizing/sharing pictures with friends digitally was not a straightforward process.
I've come to terms with a simple fact of life that after graduating, it gets harder to make friends as you get older and start to settle down away from your college towns. Most of the acquaintances I've added on Facebook might as well not exist as we don't talk offline and my core circle of friends communicate over imessage/sms or various chat apps and we try to make time to see each other, further cementing our friendships offline.
Another thing that bothers me about Facebook, having first joined around the time a .edu email address was required (I think?), is that every time I visit the site, the new interface and feature bloat make it feel less and less like what made it dead simple to connect with people in earlier times. The current experience for me consists of a noisy, ad-infested newsfeed, ultra-optimized to inject itself straight into your brain's reward center with statistically significant A/B-tested precision and autoplaying clickbait media nonsense, all while functioning as an echo chamber for long-lost acquaintances' political outrage spam.
I wonder if people from my age cohort feel similar cognitive dissonance and that's why Facebook isn't even on their mind career wise, cause it's like an ancient digital museum that houses dusty pictures from their younger years and has long been replaced by Instagram.
> a simple fact of life that after graduating, it gets harder to make friends as you get older
This is not really a simple fact of life, in my opinion. It only gets harder because people make less of an effort. If you put as much time and energy into being social later in life as you do in college, then it isn't any harder to make new friends.
The main difference is that in school, you're automatically surrounded by a lot of varied people. Out of school, that's not automatic -- you have to intentionally put yourself in such situation. Often this is done by joining and participating in clubs and organization that cover things you're interested in (dancing, crafting, whatever).
That's my point exactly, relatively speaking you'll never be surrounded by ~30,000 university students who are forced to cohabit the same location in their most formative years.
The situation is much different when you have to find a babysitter for your kids to free up what little time you might have each day that is then split between you and your life partner to afford to socialize regularly.
I've just internalized this phenomenon as a fact of life after entering mid adulthood and settling down.
Ah, I understand. I was reading more into your statement than I should have. I'm a 50-something man and I often hear others of my general age complain about how hard it is to make friends, but they rarely realize that's something they can actually fix.
> The situation is much different when you have to find a babysitter for your kids to free up what little time you might have each day
Indeed! That was what taught me the real reason to arrange "playdates". It's not really for the kids, it's so that the adults can socialize with less hassle around babysitters and such.
But having children certainly makes lots of things more difficult. Mine are adults now, and I can tell you from experience that once the kids are off to college and beyond, then your social life can come back in its entirety.
And yet, here in the Bay Area, my company (a startup) sent out two offers to candidates quite recently, and both candidates went to FB instead.
There is no shortage of people joining FB because there's no shortage of people wanting to join a big company. Maybe if they're all comparing offers between big companies then they'll join some other big co but if the difference is startup vs Facebook... FB wins.
It seems like your company should consider remote workers. I live in Denver and have told Facebook recruiters that I'm specifically not interested in working at Facebook, but I would consider a remote position at a startup. I'm sure as hell not relocating to the Bay Area, is all.
I mean... does any startup, when compared to FAANG? Salaries are basically the same, but the total compensation is, obviously, wildly different, since the expected value of startup stock is horrible.
Facebook is offering new CS grads from top schools $180k+ a year plus $30k+ signing bonus. That's cash. Most startups can't afford that unless they are very well funded.
$180k salary? I know they offer some ridiculous numbers for top candidates but that seems high for any new grad. That'd push them near $250k with stock.
You should really count them. They're basically cash after you vest with a public company. Startups have a very low chance of that stock becoming worth anything even after it vests.
Comparing FAANG compensation at $300k TC vs. $180k salary plus Monopoly money... FAANG wins often enough, since you never get enough stock in a startup for it to really be worth it (short of being a founder).
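Back-of-envelope version of that comparison (all numbers assumed for illustration, not actual offers): public-company RSUs that vest monthly can be treated as close to cash, while startup paper value gets discounted by some assumed probability of ever paying out.

```python
# FAANG-style offer: salary plus RSUs that vest monthly and are
# immediately sellable, so they count near face value.
base_faang = 180_000
rsu_grant_annual = 120_000
faang_tc = base_faang + rsu_grant_annual

# Startup offer: same salary, same paper value of equity, but
# discounted by an assumed 10% chance of a real liquidity event.
base_startup = 180_000
option_paper_value = 120_000
p_liquidity = 0.1
startup_expected_tc = base_startup + p_liquidity * option_paper_value

print(faang_tc, startup_expected_tc)  # 300000 192000.0
```

The exact discount is debatable, but under any plausible probability the expected-value gap is large, which is the "Monopoly money" point.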
I've been in the industry too long to put any real value on them, regardless of whether they're from a Fortune 50 company or a startup. Sure, sometimes they pay, but it's always a gamble. I sorta view them more like lottery tickets than actual compensation.
Dude, the stock you get as a Facebook employee is literally money. It vests every month, so you can go to a broker every month and sell it for thousands of dollars of hard cash. No waiting for IPO, no hoping the stock goes up, no 4 year vesting, no board approvals, no nothing.
It may be close, but it is not literally money. If it were, then why wouldn't they just pay the money rather than going through the hassle and expense (for both the company and the employee) of issuing stock?
But that's all beside the point. I understand why these sorts of things may be appealing to people. They just aren't to me, so they don't factor in as "compensation" when I'm evaluating a job opportunity.
> It's no wonder the substantial portion of people who care about their employer's ethics are turned off
Nope. The people I know who turned down FB offers did so purely because they see it as a less stable company and have doubts about whether the stock will keep falling. No one wants to wake up a month later to find out their signing bonus just got reduced by 10% due to a bad news cycle. I would estimate that fewer than 10% of people turn down an employer over privacy-related ethics. Also, as a side note, FB has jacked up stock bonuses for existing employees. Their attrition rate is virtually unaffected despite all the bad news.
With SO much negative press, I feel that Facebook has lost its mission among the wider public. If it is a net negative for society, or even just perceived as one, it is hard to hire someone who shares your vision; you only get mercenaries.
Good people are weird, though. They work for money, like everyone else, but not just money.
> one of the things candidates respond to most is the story we tell about our company's past, present, and future
I hear this storyline fairly often (though exclusively from corporate recruiters) and I have a super hard time understanding why this would matter. Can someone who actually listens to this kind of (IMO) propaganda weigh in and help me understand why it matters to them?
It matters in terms of internal opportunities to advance. Say you're one of the first data science or product people in a fast-growing company: that's a lot of potential opportunity for someone ambitious and self-driven.
It also matters in terms of how good this will look on my CV and the story I can tell later. I joined a now well-established and fast-growing tech company as employee no. 267; we're now at ~1200. That looks great on my CV, and in an interview, if I talk about scaling issues (both technical and cultural), they'll likely believe me.
I agree with everything you've said until the last part. Google is only marginally better than FB when it comes to some of these issues of privacy. The issue people have with Facebook is that it has a reputation for being a pressure cooker.
In other words, Facebook now has no redeeming qualities.
I've gotten a bunch of pings from them over the last few months, and I just chuckle, say "hahahano", delete it and move on. I don't know if it's a coincidence that the pings happened after the scandal or if they have gotten into 'look under every rock' mode.
I think it is the latter. All the pings I see now are so mundane and banal (most of mine are friend suggestions for people I’ve never met). They really must be scraping the metaphorical bottom of the barrel.
> It's telling that a substantial portion of our candidates admitted to considering competing offers from Google, but literally no one was considering Facebook.
Anecdotal, but in the past year, I had tons of recruiters from Google/Amazon/etc. knocking on my LinkedIn box. However, not a single one from Facebook. Maybe they just simply didn’t fund recruiting efforts as much as the other tech companies or weren’t hiring as aggressively.
One of the (many) things that pleased me about deleting my LinkedIn account was that I no longer routinely heard from the likes of Google/Amazon/Facebook.
There was a time when recruiters would put on a sheepish and embarrassed-to-bring-it-up look when mentioning the higher paying jobs they had for tobacco companies. Paid more, but few wanted the social stigma, even if their personal ethics were OK with it.