Zucked (2019) is one early Facebook investor’s personal wake-up call about the dangers of the platform. It vividly depicts how Facebook is harming both public health and the health of our democracies. From the manipulation of public opinion to the fostering of technology addiction, the picture painted in Zucked is of a business unmoored from civic or moral responsibility.
- Who is it for?
- Get the real story of Facebook and its damaging impact on society
- Technological and economic shifts enabled Facebook’s rapid growth and a dangerous internal culture
- Facebook gathers vast amounts of data on its users and has shown blatant disregard for their privacy
- Facebook uses psychological manipulation to keep you online longer and increase its profits
- Filter bubbles foster polarization of viewpoints
- Russia used Facebook covertly but effectively to sway US elections
- The Cambridge Analytica scandal exposed Facebook’s cavalier attitude toward data privacy
- Facebook and other big tech companies need robust regulation to limit the harm they can do
- Summary
- About the author
Who is it for?
- Everyone who uses Facebook
- Anyone concerned about data privacy, the manipulation of public opinion, or tech addiction
- Anyone curious about the future of social media and the tech giants
Get the real story of Facebook and its damaging impact on society
Facebook is one of the most popular companies in history. With 2.2 billion users and revenues that surpassed $40 billion in 2017, it is nothing short of an extraordinary success. But more than merely popular – and profitable – Facebook is influential. In less than two decades, it has become a vital part of the public sphere: the platform on which we not only talk to our friends but also read the news, exchange opinions, and debate the issues of the day.
Facebook’s popularity and influence, however, conceal a grim reality: the company lacks clear moral or civic values to guide it. And in the absence of effective regulation, it is actively harming our society.
In these summaries, you’ll discover how Facebook uses manipulative techniques to keep you engaged, and how one consequence is a polarized public discourse. You’ll see how Facebook thrives on surveillance, accumulating data about you to hold your attention on the site and increase your value to its advertisers. And you’ll learn just how easily outside actors like Russia have been able to exploit Facebook to influence users in the United States.
Technological and economic shifts enabled Facebook’s rapid growth and a dangerous internal culture
Back in the twentieth century, there weren’t many successful Silicon Valley start-ups run by people fresh out of college. Successful computer engineering depended on skill and experience, and it had to overcome the constraints of limited processing power, storage, and memory. The need for serious hardware infrastructure meant that not just anyone could found a start-up and become an instant success.
Technological advances in the late twentieth and early twenty-first centuries changed this fundamentally. When Mark Zuckerberg launched Facebook in 2004, many of these barriers to new companies had simply disappeared. Engineers could put together a workable product quickly, thanks to open-source software components like the Mozilla browser. And the rise of cloud storage meant that start-ups could simply pay a monthly fee for their network infrastructure rather than building something expensive themselves.
Suddenly, the lean start-up model emerged. Companies like Facebook no longer needed to inch toward perfection before launching a product. They could quickly build something basic, push it out to users, and improve it from there. Facebook’s famous “move fast and break things” philosophy was born.
This also had a profound impact on the culture of companies like Facebook. An entrepreneur like Zuckerberg no longer needed a large, experienced pool of engineers with deep systems expertise to execute a business plan.
In fact, we know that Zuckerberg didn’t want people with experience. Inexperienced young men – and they were mostly men – were not only cheaper but could be molded in his image, making the company easier to manage.
In Facebook’s early years, Zuckerberg himself was unshakably confident, not just in his business plan but in the self-evidently worthy goal of connecting the world. And as Facebook’s user numbers – and, eventually, profits – soared, why would anyone on his team question him? Even if they had wanted to, Zuckerberg had designed Facebook’s shareholding rules to give himself a “golden vote,” meaning the company would always do what he decided.
To move as fast as possible, Facebook did whatever it could to eliminate sources of friction: the product would be free, and the business would steer clear of regulation, thereby also avoiding any demand for transparency about its algorithms that might invite criticism.
Unfortunately, while these were the right conditions for the growth of a global phenomenon, they were also conditions that bred a disregard for user privacy, safety, and civic responsibility.
Facebook gathers vast amounts of data on its users and has shown blatant disregard for their privacy
Now you know a little about Facebook. But how much does Facebook know about you?
Facebook holds up to 29,000 data points on each of its users. That’s 29,000 little details it knows about your life, from the fact that you like cat videos to whom you’ve been socializing with lately.
So where does Facebook get all that data?
Consider Connect, a service launched in 2008 that lets users sign in to third-party websites with their Facebook credentials. Many users appreciate the convenience of not having to remember countless passwords for other sites. What most don’t realize is that the service doesn’t just log them in; it also lets Facebook track them on any site or app that uses the log-in. Use Connect to sign in to news websites? Facebook knows exactly what you’re reading.
Or consider photos. Many of us enjoy tagging our friends after a fun day or night out. You may think of it as a simple way to share with your friends, but to Facebook you’re handing over a valuable set of information about your location, your activities, and your social connections.
Now, if a business is so hungry for your personal data, you’d at least expect it to handle that data with care, right? Unfortunately, since its earliest days, Mark Zuckerberg’s company has shown an apparent disregard for data privacy.
Indeed, according to Business Insider, after Zuckerberg signed up his first few thousand users, he messaged a friend to say that if they ever wanted information on anyone at their university, they should just ask. He now had thousands of emails, photos, and addresses. People had simply handed them over, the young entrepreneur noted. They were, in his reported words, “foolish.”
That cavalier attitude toward data privacy has persisted at Facebook ever since. In 2018, for instance, journalists revealed that Facebook had sent marketing material to phone numbers users had provided for two-factor authentication, a security feature, despite having promised not to do so.
That same year, it emerged that Facebook had simply downloaded the phone records – including calls and texts – of users with Android phones. Again, the users in question had no idea this was happening.
Facebook wants your data for a reason: to make more money by keeping you on the platform longer, thereby making you more valuable to its advertisers. Let’s look at this in more detail.
Facebook uses psychological manipulation to keep you online longer and increase its profits
For social networking platforms, time is money – more specifically, your time is their revenue. The more time and attention you give Facebook, Twitter, or Instagram, the more advertising they can sell.
Capturing and holding your attention is therefore at the heart of Facebook’s commercial success. No company has been better at getting inside your head.
Some of its techniques revolve around how information is presented. These include autoplaying videos and an endless stream of content. Such tactics keep you hooked by removing the usual cues to stop: you can reach the end of a traditional newspaper, but there is no end to Facebook’s news feed.
Other techniques dig deeper into human psychology – by exploiting the fear of missing out (FOMO), for instance. Try to deactivate your Facebook account and you won’t just see a routine confirmation screen; you’ll be greeted with photos of your closest friends – say, Tom and Jane – along with the message, “Tom and Jane will miss you.”
The most sophisticated and sinister tactics lie in the decision-making of Facebook’s artificial intelligence, which determines what content you see.
Scrolling through Facebook, you may assume you’re reading a simple news feed. In fact, you’re up against a massive artificial intelligence armed with vast amounts of data about you, tailoring content to keep you on the platform longer. Unfortunately, that often means material that appeals to your most primal emotions.
Why does this matter? Because stirring our basic emotions is what holds our attention. Positive emotions work, as the abundance of cute cat videos attests. But which emotions have the greatest effect? Fear and anger.
So Facebook tends to steer us toward content that provokes strong reactions, because emotionally charged users consume and share more. As a result, you’re less likely to see calm, factual headlines and more likely to face sensationalist claims in short, attention-grabbing videos.
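To make that dynamic concrete, here is a minimal, purely illustrative sketch of an engagement-maximizing feed ranker, written in Python. Everything in it – the Post structure, the emotion weights, and the scoring function – is invented for this example; Facebook’s real ranking system is proprietary and vastly more complex.

```python
from dataclasses import dataclass

@dataclass
class Post:
    headline: str
    emotion: str      # dominant emotional tone, e.g. "anger", "fear", "joy", "neutral"
    relevance: float  # how well the post matches the user's interests, 0..1

# Hypothetical weights: emotionally charged content is assumed to hold
# attention longer, so it scores higher than calm, factual content.
EMOTION_WEIGHTS = {"anger": 1.5, "fear": 1.4, "joy": 1.1, "neutral": 0.6}

def predicted_engagement(post: Post) -> float:
    """Score a post by how long it is expected to keep the user on the site."""
    return post.relevance * EMOTION_WEIGHTS.get(post.emotion, 1.0)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed so the most 'engaging' posts appear first."""
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = rank_feed([
    Post("City council passes annual budget", "neutral", 0.9),
    Post("THEY are coming for your neighborhood", "fear", 0.7),
    Post("Outrageous scandal rocks the capital", "anger", 0.7),
])
# Despite being less relevant, the fear- and anger-laden posts outrank
# the calm, factual headline - exactly the dynamic described above.
for post in feed:
    print(post.headline)
```

Even in this toy version, notice that the ranker needs no editorial intent to surface sensationalism; simply favoring whatever maximizes predicted engagement is enough to push fear and anger to the top.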
This dynamic is dangerous, especially when we end up trapped in a bubble where our anger, fears, or other emotions are constantly reinforced by people who share our views. That is the peril of the so-called filter bubble, explored in the next chapter.
Filter bubbles foster polarization of viewpoints
Every time you browse Facebook, you feed information into its filtering algorithm. The result? A filter bubble, in which Facebook screens out content it thinks isn’t right for you while serving up content you’re more likely to read, like, and share.
Eli Pariser, head of the advocacy group MoveOn, was among the first to draw attention to the consequences of filter bubbles, in a TED Talk in 2011. Pariser noticed that although he had a fairly balanced mix of conservative and liberal friends on Facebook, his news feed was conspicuously slanted. Because he engaged more with liberal content, Facebook fed him more of what it assumed he wanted, until conservative content vanished from his feed entirely.
This is a problem. Many people get their news from Facebook and assume they’re being shown a range of perspectives. In reality, algorithms with enormous influence but no civic obligations feed them a skewed picture of world events.
Things get worse when filter bubbles push users from mainstream views toward more extreme ones. This shift can happen because algorithms steer users toward emotionally charged, outlandish content.
For instance, Guillaume Chaslot, a former YouTube employee, built software that shows how YouTube’s algorithmic recommendations work. It revealed that after watching any video about the 9/11 attacks, a user would be recommended conspiracy-theory videos about 9/11.
Even without algorithms, social media can radicalize users. This is especially true of Facebook groups. Facebook hosts an abundance of groups catering to every political persuasion, which makes targeted advertising easy.
But groups bring problems of their own. Legal scholar and Nudge co-author Cass Sunstein has shown that when people with similar views discuss issues together, their opinions tend to harden and grow more extreme over time.
Groups are also vulnerable to manipulation. Data for Democracy has found that as little as 1 to 2 percent of a group’s members can steer and control the group’s conversation, provided they know what they’re doing.
This is exactly what the Russians did in the run-up to the 2016 US elections.
Russia used Facebook covertly but effectively to sway US elections
Can you be sure where the content you see on Facebook comes from? If you lived in the United States in 2016, chances are high that you consumed – and perhaps shared – Facebook content crafted by Russian agents.
Despite mounting evidence, Facebook denied that Russia had exploited the platform until September 2017, when it admitted to finding roughly $100,000 of advertising spending by fake Russian accounts. Facebook later disclosed that Russian interference had reached 126 million users on its platform and another 20 million on Instagram. Given that 137 million people voted in the election, it’s hard to dismiss the idea that the interference had some effect.
Russia’s tactics in the 2016 election aimed to energize Trump supporters while suppressing turnout among likely Democratic voters.
This was distressingly easy, because Facebook groups gave Russia a direct route to specific demographics. Russian operatives set up numerous groups targeting people of particular ethnicities – such as the group Blacktivist – to spread falsehoods that would discourage users from backing the Democratic candidate, Hillary Clinton.
These groups also made it easy for content to spread. We tend to trust fellow group members, assuming that information shared within a group aligned with our identity comes from reliable sources.
The author himself watched acquaintances share deeply misogynistic images of Hillary Clinton that originated in Facebook groups supporting Bernie Sanders, Clinton’s rival in the Democratic primary. It was hard to believe that Sanders’ campaign had endorsed such content, yet it spread rapidly.
Russia’s effectiveness at manipulating users through groups was vividly illustrated by the notorious 2016 mosque protests in Houston, Texas, where Facebook events created by Russians organized simultaneous protests for and against Islam outside the same mosque. The manipulation was part of Russia’s broader strategy of stoking division and confrontation in the United States around anti-minority and anti-immigrant sentiment – a strategy expected to benefit the Trump campaign.
Four million people voted for Obama in 2012 but did not back Clinton in 2016. How many of those four million withheld their support because of Russian disinformation and falsehoods about the Clinton campaign?
The Cambridge Analytica scandal exposed Facebook’s cavalier attitude toward data privacy
In 2011, Facebook reached a settlement with the Federal Trade Commission, the American consumer-protection regulator, over deceptive data privacy practices. Under the settlement, Facebook was required to obtain explicit, informed consent from users before sharing their data. Facebook failed to live up to those terms.
In March 2018, a scandal broke that linked Facebook’s political influence to its disregard for user privacy. Cambridge Analytica, a firm that had provided data analytics for Donald Trump’s election campaign, had unlawfully harvested and exploited nearly 50 million Facebook user profiles.
Cambridge Analytica had paid a researcher named Aleksandr Kogan to build a database of American voters. Kogan created a personality quiz on Facebook, which 270,000 people took in exchange for a small payment. The quiz collected details about their personality traits.
Crucially, it also harvested data about the quiz-takers’ friends – 49 million people in all – without those friends’ knowledge or consent. The data team behind a controversial presidential candidate thus gained access to a vast trove of highly detailed personal information on some 49 million people. Although Facebook’s terms of service prohibited commercial use of the data, Cambridge Analytica exploited it anyway.
The story grew more damning when a whistleblower revealed that Cambridge Analytica had matched Facebook profiles against 30 million verified voter files. This gave the Trump campaign valuable data on 13 percent of the country’s voters and let it target propaganda at those individuals with remarkable precision. Given that Trump’s Electoral College victory rested on three swing states won by a combined 77,744 votes, it’s hard to believe that Cambridge Analytica’s precise targeting, built on Facebook’s data breach, didn’t affect the result.
After the scandal broke, Facebook tried to cast itself as a victim of Cambridge Analytica’s misconduct. Its own actions tell a different story. When Facebook discovered the breach, it asked Cambridge Analytica in writing to destroy the dataset, but there was no follow-up audit or inspection; Cambridge Analytica merely had to confirm compliance by checking a box on a form. What’s more, while Cambridge Analytica was working for the Trump campaign, Facebook embedded three of its own staff within the campaign’s digital operations.
The Cambridge Analytica saga was a turning point, convincing many that Facebook had abandoned its ethical and societal responsibilities in favor of growth and profit.
If these charges are true, a pressing question arises: What can society do in response?
Facebook and other big tech companies need robust regulation to limit the harm they can do
The Russian interference and Cambridge Analytica episodes show how weak Facebook’s commitment to self-regulation is. It’s time to consider external regulation.
One element could be economic regulation to dilute the overwhelming market dominance of Facebook and the other tech giants, similar to the measures applied in the past to behemoths like Microsoft and IBM. Facebook’s formidable position stems partly from its strategic acquisitions of competitors like Instagram and WhatsApp.
Such regulation need not stifle economic growth or innovation, as the case of the telecommunications giant AT&T shows. In 1956, AT&T reached a settlement with the government to curb its growing power. The agreement required AT&T to confine its operations to landline telephony and to license its patents free of charge for others to use.
Ultimately, the arrangement proved a boon for the US economy. By making AT&T’s pivotal patent – the transistor – freely available, the antitrust settlement catalyzed the emergence of Silicon Valley: computers, video games, smartphones, and the internet are all rooted in transistor technology.
The approach even served AT&T itself. Though confined to its core business, the company thrived to such an extent that it faced another monopoly case in 1984. Applying similar logic to Facebook and Google would let them prosper while curbing their monopoly power and fostering healthy competition.
Economic regulation is crucial, but addressing Facebook’s impact on society requires regulation that tackles the root causes of the harm.
One place to start would be to require Facebook to offer an unfiltered view of the news feed. If users could switch with a single click between their personalized feed – shaped by algorithms that favor prolonged engagement – and a more neutral view of events, they would have access to a broader spectrum of information.
Another constructive step would be to regulate algorithms and artificial intelligence themselves. A technological equivalent of the FDA could oversee the deployment of algorithms, ensuring they serve human interests rather than exploit them, while mandatory third-party audits would add transparency and curb echo chambers and manipulation.
Regulation is commonplace in other industries, balancing public welfare against economic freedom. In tech, that balance needs resetting. Change is imperative.
Summary
The key message in these summaries:
Facebook has become a menace. It traps users in endless screen time, nudges them toward extreme views, tramples on personal privacy, and sways elections. It’s time to push back and stop accepting Facebook’s harm to individuals and society as tolerable.
Actionable advice: Adjust your devices’ display settings to reduce their impact on your well-being.
A couple of tweaks to your digital devices can make a big difference. First, enabling night-shift mode reduces the blue light your screen emits, easing eye strain and helping you sleep better. Second, switching your smartphone to grayscale mode dulls its visual intensity, reducing the dopamine rush that screen time triggers.
About the author
Roger McNamee has more than three decades of investment experience in Silicon Valley and was an early investor in both Facebook and Google. He co-founded the investment firm Elevation Partners with U2’s Bono, and today he campaigns to raise awareness of the harmful effects of social media.