Deplatforming: Big Tech’s gag order

Ever since the likes of Alex Jones and Milo Yiannopoulos were kicked off their dominant platforms, debate has raged over whether tech companies should be armed with the power of deplatforming.

What is it?

Deplatforming isn’t a concept exclusive to technology. Some of its earliest forms were seen on college campuses when controversial speakers were invited. Fearing backlash from students, parents and even the general public, college administrations would preemptively ban certain speakers.

The crux of that argument still holds good, even more so today. The purpose of deplatforming is fundamentally to restrain the speech of certain individuals by removing the platforms they use to express their opinions, thereby withdrawing the medium through which they spread their message.

Typically these opinions tend to be polarizing, which is why deplatforming is often considered a form of political activism.

With social media, the worry is greater. Opinions posted tend to be unfiltered, and that, coupled with social media’s massive reach, makes for a compelling platform. You don’t need to be an esteemed philosopher or scientist to air your views; on such a platform, qualifications don’t attract views, personalities do. Any convincing individual can thrive there, which is why the likelihood of large masses of people being brainwashed is higher and, sadly, ever-increasing. Big Tech has recognized this liability and has become an increasingly vigilant watchdog of what is posted on its platforms.

Deplatforming isn’t limited to social media sites. Any platform that allows such individuals to benefit from it has become notoriously vigilant. Recently, many GoFundMe pages and PayPal accounts have been taken down on such grounds.

The Case For Deplatforming

While the area hasn’t seen much research funding, the sparse few studies that exist claim that deplatforming, at least in the long run, significantly reduces a person’s audience. While this is debatable, one research lead had this to say:

Generally the falloff is pretty significant and they don’t gain the same amplification power they had prior to the moment they were taken off these bigger platforms.

Joan Donovan, Data and Society’s platform accountability research lead

Essentially, the audience that these individuals draw on Facebook, Twitter or YouTube doesn’t transfer to other platforms. Cutting them off usually means removing a vast set of eyes.

Another study, by Georgia Tech, examined the effects of banning subreddits filled with hate speech. The crucial finding was that removing hubs of hate reduces hate elsewhere, throughout the platform. Other findings regarding these hate-speech-spewing redditors were as follows:

  • Hate speech by the same redditors dropped by 80-90% post-ban.
  • Some of these redditors migrated to other, similar hate speech subreddits and toyed with the limits of what they could post, but in general, hate speech in those subreddits didn’t quantitatively increase.
  • Some left Reddit altogether.

The greatest power of social media is its options. The lack of a monopoly allows individuals to switch platforms as they wish and carry on. This undermines the power of deplatforming. Deplatforming deals its greatest blow, however, when it is coordinated. Take Alex Jones: in just a day, Facebook, Spotify, Apple and YouTube banned him, cutting away millions of his listeners and viewers. Whether it was an ethical or democratic move is questionable, but it undeniably dealt a crippling blow to his business.

The Case Against Deplatforming

In a free market with countless players, there’s bound to be one that will accept you no matter how radical you are. While deplatformed individuals were pushed out of mainstream focus, they managed to survive elsewhere on platforms like Gab, Voat and BitChute. There is an entire suite of extremist versions of the same sites we scroll through daily. While these platforms don’t have nearly the numbers that mainstream ones do, their user bases are far from insignificant. If these people are deplatformed by mainstream media and still prosper without having to change what they put out, was the initial deplatforming really effective? Does this make deplatforming a tool of the past?

Additionally, pushing these extremists to the corners of the internet doesn’t mean they have vanished, just that they are harder to find. These people still garner a loyal fanbase, and relegating them to the depths of the internet doesn’t mean that ardent fans won’t follow.

These alternate platforms may not be as popular, but filling them with extremists is bound to create a concentration of hate speech. They become echo chambers where radical thoughts go unchecked and amplified. This alienation from the mainstream is dangerous for two reasons. Firstly, it leads to a limited social circle, one bounded by (usually) the same political views. In an environment of similar lines of thinking, the same radical thoughts get applauded and participants become unaware of the diametrically opposing views that exist. This leads to hyper-radicalization on both sides. Secondly, such a system removes the venue for constructive debate, even if that was a possibility in the first place. If people are separated into different platforms based on their views, everyone begins to live in their own bubble. Pushing people away fragments political discussion. While the media doesn’t often portray it as such, people do listen to reason during discourse; deplatforming removes the opportunity to even try.

This segregation has already shown its teeth. Many people noted that if the Pittsburgh synagogue shooter, Robert Bowers, had posted this message on a mainstream site instead:

“HIAS [Hebrew Immigrant Aid Society] likes to bring invaders in that kill our people. I can’t sit by and watch my people get slaughtered. Screw your optics, I’m going in.”

it would have been flagged sooner and the casualties probably minimized. Since it was posted on Gab (alt-right Twitter), however, his followers didn’t seem to have any objections to his statements. Had there been no separation, the attack might have been preventable.

The Bottom Line

Deplatforming has always been sold as preventing violence and curbing the spread of socially destructive misinformation, but in truth it has always been a form of virtue signalling. In an age where everything is fueled by profit, this is yet another aspect of a product or service that can be monetized: if you can prove that you won’t even associate yourself with these radical views, your brand becomes more attractive. Whether or not this is the ulterior aim, it hands undue power to these tech companies.

Silicon Valley has been handed a very potent tool of censorship, and if history is any indication, it will be misused, if it hasn’t been already (allegedly, only alt-right figures have been targeted).

The issue of censorship is age-old. The line between free speech and hate speech is blurred, and we need a better way to moderate the latter. Deplatforming already seems like a solution of the past, a battle of increasingly short-lived victories. While it has served its purpose of limiting the spread of hate speech, we need a solution that identifies and prevents that spread altogether. Such a solution was a luxury we couldn’t afford in the days of banning college speakers, but times have changed. We need a lasting answer, and one based on societal consensus.

Facebook: Company profile

The social networking behemoth Facebook has grown from a forum purely meant for connections into an all-encompassing platform that has, of late, found itself embroiled in many serious controversies.

How it began

It hasn’t always been like this. In the beginning, Facebook had an excellent PR image. Famously built in a Harvard dorm room by Mark Zuckerberg and his roommates Eduardo Saverin, Andrew McCollum, Dustin Moskovitz and Chris Hughes, the product was launched on February 4, 2004.

Initially positioned as a very exclusive forum, Facebook was open only to Harvard students. It then expanded to other Ivy League colleges and later to high schools, before finally opening the floodgates to the general public.

Facebook then began to position itself as a mediator facilitating connections between old friends, a forum for making new ones and, later, a venue for businesses and events. As the company steadily grew, so did the public’s reverence for its CEO, Mark Zuckerberg, who had not only boldly dropped out of the esteemed Harvard College but was also touted for a unique approach that took Silicon Valley by storm. Famously, he attended business meetings in a hoodie, much to the shock of suit-wearing investors and partners.

Through this, Facebook had crafted a powerful weapon: its image. Zuckerberg used it to his advantage, fending off lawsuits from the Winklevoss twins, who claimed that the idea for Facebook was theirs. Typically this would have been a red flag for many venture capitalists, and likely the general public, yet Facebook was able to sweep the issue under the rug.

They weren’t alone in their battles either. Zuckerberg and co. were able to find partners, many of whom are still tied to Facebook in some way. Facebook received an angel investment of $200,000 from Peter Thiel, alongside Greylock Partners’ Reid Hoffman. It went on to raise 10 rounds of funding worth a total of $2.3 Bn before finally going public. In that journey, it grew from a simple networking app into a force to be reckoned with, breaking many records in incredible fashion.

Some of these accolades include:

  • Becoming the most used social networking service (based on monthly active users) just 2 years and 3 months after its rollout to the general public.
  • Amassing over 500 million users in under 4 years.
  • Being credited with a shift toward heavier mobile usage for tasks once thought possible only on PCs. This may not seem like much, but everything we now expect from a PC we also expect from a phone; less than a decade ago, this was deemed impossible.
  • Holding the largest IPO valuation to date, at $104 Bn.

What it is now

Now, just shy of two decades later, Facebook is an established leviathan. Headquartered at the cleverly named 1 Hacker Way in Menlo Park, California, Facebook boasts a revenue of $70.7 Bn (estimated for 2019). However, the company has seen a considerable slowdown in revenue growth over the past few years, primarily because growth in its advertising revenue has slowed. According to an Investopedia article, advertising makes up almost 99% of Facebook’s revenue, but with platforms like Google, YouTube, Apple and Amazon competing for the same budgets, it is harder for Facebook to attract marketers who seek the platform with the most eyes. User growth has also slowed tremendously, due in part to a significant increase in public distrust of the platform. This has created a vicious cycle that is ultimately leading to dwindling revenue.

However, Facebook has been crafty. Having previously invested in and acquired other platforms in the social connection sphere, like Instagram and WhatsApp, it is still reaping the rewards of holding a strong leash over this sector. It has been proactive too, finding new avenues to establish its dominance. Over the years, Facebook has aggressively pushed its range of Oculus VR products, promising a futuristic, idyllic world where cyber-bullying and the worries of the real world are no more. How it will achieve this is an open question, but it is undeniable that Facebook is now more than just a social media website. It has clearly outgrown the days when its business model was solely the ‘language of friendship’.

Attempts to foray into AI and cryptocurrency have been made, but those have not seen the resounding success that Facebook is used to. One reason is that the media is on its back. Ever since the Cambridge Analytica scandal, the media has been ready to pounce on Facebook, which has sometimes led to inaccurate and almost harmful portrayals of its work. For example, when headlines broke that a Facebook AI bot had developed a new language altogether, skeptics were quick to blame Facebook. What they didn’t realize is that these bots were simply making communication between themselves more efficient by deriving a shorthand, not inventing an entirely new language. The claims the headlines made were reckless, and because of the fiasco Facebook had to suspend the program so the public would lower their pitchforks.

Similarly, their cryptocurrency project Libra received fiery opposition. Fears about its volatility, its lack of regulation and general agitation toward blockchain and cryptocurrency caused widespread apprehension and scrutiny. France and Germany were quick to prohibit it, claiming that a private corporation dictating monetary policy impinged on national sovereignty. These nations, which hold strong influence in the EU, also vowed that EU nations would not adopt it. Even on home ground, US lawmakers were wary and critical of how Libra would be implemented, and many senators pressured the corporations backing the project to pull out. In the days that followed, the Libra Association crumbled with the departures of fintech giant PayPal and financial behemoths like Visa, Mastercard, Stripe and more.

While what is in store for Facebook’s future seems uncertain, the company definitely isn’t going anywhere. It seems very clear that Facebook is ready to move on from the once beloved, harmless connecting platform built in the dorm rooms of Harvard.

The board

Mark Zuckerberg

  • Created the company in his Harvard dorm room and has been at its helm ever since as CEO.
  • Has a controlling stake in the company. Owns 60% of Class B shares, giving him a voting majority, so all other board members are essentially glorified advisors.

Sheryl Sandberg

  • Was a senior executive at Google before becoming COO at Facebook. She has a deep understanding of Facebook’s (Zuckerberg’s) mission and long-term plans, so she can support his ideas.
  • Also has political experience, having served in the Clinton administration.
  • She rose to the spotlight with her book ‘Lean In’, which made her a popular figure for women’s rights and feminism.
  • While she was considered an icon with a resounding PR image after the book’s release, reports of her commissioning opposition research on Google and George Soros, among other instances, have tainted her public image.

Peter Thiel

  • Early seed investor who has been on the board for a long time.
  • His advice usually differs from that of the other, left-leaning members. For example, he heavily advocated against fact-checking political ads, which happened to be a very polarizing issue.
  • Due to his conservative political views, he often clashed with Reed Hastings and Erskine Bowles.
  • Despite not being well-liked by some of the other board members, he is somewhat considered a mentor to Zuckerberg.

Marc Andreessen

  • Prominent VC who has made impressive personal investments like Business Insider & Slack, and professional ones like Airbnb & Lyft.
  • In 2016, Zuckerberg wanted to sell his voting (i.e. Class B) shares while still maintaining majority voting status. To get this approved, he had to present a formal proposal to a sub-committee of the board consisting of Susan Desmond-Hellmann, Erskine Bowles and Andreessen. During the process, Andreessen would text Zuckerberg which ideas worked and which didn’t, coaching him through it. This was seen as a conflict of interest, as he is supposed to be an independent board member (as are the other two).
  • Sold a lot of his own shares.
  • Both actions are very sketchy, but the investments in FB aren’t just his own; they are also his firm’s. This adds another layer of complexity that is harder to navigate.

Peggy Alford

  • Appointed due to her “rare expertise in business management to finance operations to product development”.
  • Held prominent positions at PayPal, eBay and Rent.com.
  • Additionally she served as CFO and head of operations for the Chan Zuckerberg initiative (Zuckerberg’s philanthropy foundation).
  • One of the replacements for Reed Hastings & Erskine Bowles, both of whom were known to openly challenge Zuckerberg. Her personal relationship with Zuckerberg makes people sceptical that she will be as critical as her predecessors were.

Drew Houston

  • CEO and founder of Dropbox, and a good friend of Zuckerberg.
  • Can provide good oversight on technical aspects.
  • Shareholders have tried multiple times to oust Zuckerberg as CEO, but the board has never voted for such a move.
  • With shareholders wanting more board members who will challenge Zuckerberg, it’s surprising that a good friend and mentee is the new addition.
  • He assumes the position of an independent board member, but personal relations again make critics apprehensive.

Tracey Travis

  • Has held prominent positions at many companies, including CFO at Estée Lauder and Ralph Lauren. Also holds a board position at Accenture.
  • According to a press release from Facebook, she is being added for her “strong and corporate leadership background”. This might be useful, as Facebook has lately seen senior leadership leave the company and a decline in users on the Facebook platform.

Nancy Killefer

  • Worked at the US Department of the Treasury during the Obama administration, so she brings invaluable government experience that shareholders and the general public think the company sorely lacks.
  • Something of a replacement for Erskine Bowles in terms of qualifications and experience.

Robert Kimmitt

  • Deputy Secretary of the US Treasury during the Bush administration and a distinguished political figure, known for his role as Ambassador to Germany, among others.
  • Brings much-needed public sector experience to the company.

Other prominent ex-members

Reed Hastings

  • Often clashed ferociously with Peter Thiel, primarily due to differences in political views.
  • Consequently also questioned Zuckerberg.

Erskine Bowles

  • Was famously vocal when Facebook was used as a political manipulation tool by the Kremlin. Has notable government experience.
  • Was known to pushback against Zuckerberg.

Susan Desmond-Hellmann

  • Effectively a representative of Bill (and Melinda) Gates, as she was their foundation’s CEO.
  • Independent board member.

Why it’s important

The public has made it abundantly clear that its trust in Facebook has disappeared, and rightfully so. It is alarming, to say the least, that any skilled engineer could use Facebook as a tool to invade our privacy. This is a violation not only of human rights but of democratic norms. Even the idea that Facebook could be used for political manipulation is chilling; never has a single tool been so powerful in threatening the sanctity of democratic elections. The revelations rocked countries around the world. Many have hauled in Facebook executives demanding answers and action, yet Facebook has done virtually nothing. A fiasco of this scale has prompted multiple calls for Zuckerberg’s head, yet many, skeptics and fans alike, still think he is the man for the job.

Whether or not that is true, the ossification in the Facebook boardroom must stop. It can no longer be an echo chamber for Zuckerberg’s opinions and actions alone. The current set of board members, given their personal connections to him, does not seem like the kind of group that will challenge and question Zuckerberg. This puts the company in a precarious position, as ‘independent’ board members are unlikely to voice their own opinions or those of other shareholders.

While it may have been fine in its early frat days to hold board meetings with Zuckerberg’s buddies in some basement and agree on everything, Facebook has outgrown that phase, yet it is still stuck with that mentality.

Whether or not Facebook knew that its glorified PR image would be fickle, the company was smart to diversify into so many other potent domains. Whether or not it manages to deliver a utopian escape from the struggles of life through Oculus, or to revolutionise the way we pay, save and earn through Libra, the avenues Facebook can pursue are endless. However, ever since this wave of controversies, the company has come under the scanner. It has become the embodiment of why Big Tech is reckless. While its rallying cry in the past was to ‘move fast and break things’, Facebook now desperately needs to establish a system of accountability. To prove to the world that it can responsibly handle the forefront of technology, Facebook needs to regain the public’s trust. That can’t happen while Zuckerberg is being pampered by the army of yes-men (and women) who occupy his boardroom. It is imperative that accountability is established, and how it is done will help answer the eternal question: does Big Tech need policing?

Cambridge Analytica: Political hijacking

When news broke that a firm was able to illegally harvest, analyse and use our own data against us, we were horrified. We couldn’t believe that a scheme like this could run undetected for so long. It wasn’t just terrifying but infuriating: the bodies to which we had entrusted power, i.e. Big Tech, had failed us. It was becoming increasingly apparent that these tech gods couldn’t handle such responsibility. This was the tipping point. People had finally had enough. The whole Cambridge Analytica data scandal pointed to a larger issue: corporate accountability.

How it went down

  • 2014: Facebook survey is released to the public

    Aleksandr Kogan developed an app that would collect data based on the results of a survey titled “This is your digital life”. The survey received hundreds of thousands of responses.

    As a data scientist and psychology professor at Cambridge University, Kogan was able to devise the right questions to work out a great deal about a user’s profile. In addition, the app exploited a flaw in Facebook’s API to collect the actions (likes, interactions, interests and dislikes) not just of respondents but of their entire friend circles. Through this, even though the survey was taken by only a few hundred thousand individuals, he was able to mine around 90 million profiles (a back-of-the-envelope sketch of this multiplier follows the timeline).

    He later sold this data to Cambridge Analytica, which then filtered it and micro-targeted groups with similar psychographic profiles (personality, values, interests, etc.).

  • Jan 2015: Cambridge Analytica is incorporated

    At the time, the company had only one employee: Alexander Nix (who was also CEO). Cambridge Analytica (CA) sold itself as a political consultancy firm built on academically approved data-analysis methods.

  • Dec 2015: First report of misuse

    A Guardian report claimed that US Senator Ted Cruz was using Cambridge Analytica in his presidential campaign. The problem was that the firm was harvesting personal information without users’ consent. Facebook declined to comment but opened an investigation into the issue.

  • 2015: Work done for Leave.EU campaign

    It was later revealed that CA worked for the Leave side of the 2016 referendum that led to Brexit. The company preyed on user data to convert swing voters to Leave. While the extent of its impact isn’t exactly ascertainable, it is plausible that without CA the Leave side wouldn’t have succeeded, as it edged out with just 51.9% of the vote.

  • 2016: Donald Trump presidential campaign

    Based on their Facebook activity, US voters were targeted with specific ads intended to influence how they voted (i.e. for Trump). Essentially, Cambridge Analytica catered specific posts to people with certain psychographic profiles, those most likely to react positively. These ads preyed on personality traits that we wouldn’t consciously know even about our own loved ones. By tailoring ads to such a personal degree, Cambridge Analytica reached millions with a level of personal connection that even door-to-door campaigns couldn’t emulate.

    This eventually helped Republicans win swing states by minute margins and ultimately led to Trump becoming president despite losing the popular vote (receiving fewer total votes).

  • Dec 2016-Mar 2017: More reports surface of data misuse

    News publications around the world, like Das Magazin (Switzerland), The Intercept (Brazil) and The Guardian (UK), reported different cases of CA misusing user data to influence elections. These reports indicated that CA was active in many regions, tampering with multiple democratic elections simultaneously.

  • Early 2018: Proposal to influence Indian elections

    According to whistleblower Christopher Wylie, Cambridge Analytica presented a plan for how the Congress party (INC) could use CA’s data to steer multiple state elections (Karnataka, Chhattisgarh and Madhya Pradesh) in 2018 and even the general election in 2019. Whether or not it used CA’s help, the INC couldn’t garner enough votes to win the general election, though it did win those state elections (later losing power in two of them when coalition alliances broke).

    Whether Cambridge Analytica was explicitly involved isn’t clear; nonetheless, the Indian government demanded that Facebook explain how such a data breach could occur.

  • Mar 17, 2018: Whistleblower emerges

    Carole Cadwalladr (a reporter at the Guardian and Observer), who had written the initial report, got whistleblower Christopher Wylie to come forward and reveal first-hand the shady activities of CA.

    In a coordinated effort, three different news publications put out the story, which reached millions and caused outrage. This spawned the #deleteFacebook campaign, after which Facebook finally had to comment.

  • Late Mar 2018: Outrage follows

    Facebook’s CEO, Mark Zuckerberg, publicly apologized for the data lapse and vowed to do better. In the days that followed, about $134B was erased from Facebook’s market capitalization, and the parliaments of many nations demanded that Zuckerberg and other Facebook executives appear for questioning.

  • Mar 19, 2018: Channel 4 News sting operation

    An undercover reporter from Channel 4 News recorded a private conversation in which Alexander Nix boasted about the power of Cambridge Analytica. With his guard down, Nix revealed how the firm would entrap political opponents, its involvement in bribery, and its general practices.

    After the release, Cambridge Analytica put out a statement claiming that the video footage was “edited and scripted to grossly misrepresent” Nix.

  • April 10, 2018: Mark Zuckerberg’s testimony

    Appearing before Congress, the Facebook CEO repeatedly apologized as he was grilled on the undetectable ways people could leverage his platform to influence political races. Facebook was later fined $5B over the data breach.

  • May 2, 2018: Termination

    The company announced that they would be closing down and had already begun insolvency proceedings.

  • May 16, 2018: Christopher Wylie’s testimony

    He too appeared before Congress to respond to a wide range of questions. His testimony covered the much-discussed Russian involvement, his thoughts on Facebook’s response and why he decided to out Cambridge Analytica.

    As the firm’s early Director of Research, Wylie could provide technical insight into how this political influencing happened. He later testified before British authorities as well.

  • June 6, 2018: Alexander Nix’s testimony

    The company’s CEO was summoned before British lawmakers after failing to answer their preliminary questions satisfactorily. He evaded most questions regarding Russian interference and attempted to paint himself as the victim. Many lawmakers took offense at this, and more so at his rudeness and aggression when responding.

    He also took the testimony as an opportunity to go after Christopher Wylie, calling him a jealous ex-employee.

  • 2018: Emerdata Limited and Data Propria

    Despite Cambridge Analytica being disbanded and Alexander Nix being reprimanded on a global stage, many of the firm’s employees have been able to bounce back, scarily enough, in almost identical roles.

    Alexander Nix was appointed a director (and later removed) of Emerdata Limited, a subsidiary of Cambridge Analytica’s parent company SCL.

    Cambridge Analytica’s head of product (Matt Oczkowski), chief data scientist (David Wilkinson) and two other key employees jumped ship to create Data Propria, a company rumoured to be working on Donald Trump’s 2020 presidential campaign.
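
To get a feel for the arithmetic behind the 2014 harvest described above, here is a back-of-the-envelope sketch. The respondent count is the commonly reported approximate figure; the average friend count and de-duplication share are illustrative assumptions, not numbers from the actual investigation.

```python
# Back-of-the-envelope sketch of the friends-permission multiplier.
# respondents is the commonly reported approximate install count;
# avg_friends and unique_fraction are illustrative assumptions.
respondents = 270_000      # approximate survey/app installs (reported)
avg_friends = 340          # assumed average friend count per respondent
unique_fraction = 0.95     # assumed share left after de-duplicating
                           # friends shared between respondents

raw_records = respondents * avg_friends
unique_profiles = raw_records * unique_fraction
print(f"raw friend records: {raw_records / 1e6:.0f} million")      # ~92 million
print(f"unique profiles:   ~{unique_profiles / 1e6:.0f} million")  # ~87 million
```

Under assumptions like these, a few hundred thousand consenting respondents expose on the order of 90 million profiles, which is why the friends permission, not the survey itself, was the heart of the breach.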

Bottom line

The story came at a time when public trust in Facebook was at an all-time low. Simultaneously, the world was (and still is) shifting toward authoritarianism, with many democracies slipping into authoritarian rule. To many this was extremely distressing and inexplicable; learning about Cambridge Analytica’s impact provided a gut-wrenching explanation for the phenomenon.

That Cambridge Analytica could exploit a simple Facebook feature and turn it into such a powerful tool was terrifying to comprehend, but arguably more so was Facebook’s inaction. Knowing what data Cambridge Analytica had, Facebook didn’t do nearly enough to prevent its misuse. To exploit a loophole is one thing, but to knowingly let it continue is, at best, careless, especially when your business model rides on your reputation.

The scandal gained traction because it highlighted two key issues: that such an anti-democratic tool could be created at all, and that tech companies weren’t willing to take responsibility for their own products.

The second issue is a persistent one, even more so in recent years. It is time countries held corporations accountable for their actions, and time corporations proactively put out the fires they unknowingly create. This might not seem like a fair ask, but if these companies continue to profit off of ‘social good’, it is time they acted toward it.

The first issue highlights how far technology has developed and, sadly, how laws haven’t been able to keep up. Threats to democracy have always existed, but the power of technology is far-reaching and frightening. While violence and economic fear have been the blunt instruments used to create panic, technology is the silent tool that can thwart democracies. Today, even authoritarian regimes will be smart enough to hide behind the guise of elections. Since elections confer legitimacy, many countries will continue to hold them, but if the fairness of these processes is tampered with, we will begin to live in a world of pseudo-democracies masking the authoritarian regimes that take over.

There still isn’t a clear way to fight this, but for now, vigilance is our best weapon.

LIMITATIONS OF AGI

While Artificial General Intelligence has great potential, the field is still largely unexplored, for a plethora of reasons. Firstly, it lacks skilled personnel, though this will soon change as the current wave of talent gears up for jobs in the sector.

The field also sees limited funding, with the greatest amount being $2 million given by Elon Musk to the Future of Life Institute (FLI). Investors are much like sheep: they need a signal from a leader before the pack follows. Once a big investor publicly recognizes AGI as a viable sector to invest in, money will flow into the field. While AI is seeing billions of dollars poured into it, AGI is yet to see such numbers.

Another reason the field is so untouched is partly that it is relatively new, but also that appropriate training for jobs in the sector is lacking. This stems mainly from conceptual limitations in the field: there aren’t enough scholars, and those there are tend to be busy with their own research.

Since this is a relatively new sector, the following issues limit AGI’s progress:

  1. Understanding concepts – Current AI has a hard time mapping real-life objects to concepts, and relating those concepts to the greater scheme of things.
  2. Transfer learning – When people learn something, they are usually able to apply the same concept to similar activities. This is the next step beyond the previous problem: there, identifying similarity is the challenge; here, it is mapping and reusing what was learned.
  3. One-shot learning – People can see something once and replicate it quite easily, but machines need many examples before understanding what is happening (a minimal sketch of one workaround follows this list).
  4. Curiosity and creativity – This part of AGI is truly controversial, as a curious AGI could question ideas that we hold fundamentally true. Although it may make us more progressive, it could also find reason to turn on us and lead an uprising against humans.
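
To make item 3 concrete, here is a minimal sketch of one common workaround for one-shot learning: nearest-neighbour classification in an embedding space, using a single labelled example per class. The embed function below is a placeholder assumption; a real system would use a network trained for the task.

```python
# Minimal one-shot classification sketch: one labelled example per
# class, nearest neighbour in an embedding space. embed() is a
# placeholder assumption; a real system would learn this mapping.
import numpy as np

def embed(x):
    # Identity embedding, for illustration only.
    return np.asarray(x, dtype=float)

def one_shot_classify(query, support):
    """support maps label -> one example; return the nearest label."""
    q = embed(query)
    return min(support, key=lambda label: np.linalg.norm(q - embed(support[label])))

# A single example per class is enough for this scheme to make a guess.
support = {"cat": [0.9, 0.1], "dog": [0.1, 0.9]}
print(one_shot_classify([0.8, 0.2], support))  # -> cat
```

The quality of such a classifier rests entirely on the embedding; learning embeddings that generalize from a single example is exactly where current systems struggle.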

While these are the problems AGI faces, they will take quite long to overcome. We predict that within 5 years most of these problems will be solved, and that within 10 a basic AGI product will be ready to ship to the general public.


Developed in response to a school project, Rohan, Suvana and I created PRECaRiOUS, a blog aimed at raising awareness and ultimately answering the question:

How will the development of Artificial General Intelligence (AGI) be an infringement of human rights?

A lot of that content is still relevant here, which is why I have adapted the same posts into a mini-series on this blog.


TikTok: A ticking time bomb?

One of the most popular apps of 2019, TikTok ruled the download charts in both the Android and Apple markets. With more than 1.5 billion downloads and approximately half a billion monthly active users, TikTok definitely has access to a trove of users and, with them, their data. A problem, however, is that its key demographic is minors, and with this arises the deeply sensitive question of how the data collected from these minors could be maliciously utilized.

TikTok, originally called Douyin, was initially intended for the Chinese market only (released in 2016), but ByteDance (the startup that owns TikTok) later decided to expand internationally. To amass users in the US and Europe, TikTok bought and merged with Musical.ly (in November 2017), which also had a young user base. The app’s selling feature is that it allows its users to lip-sync to famous songs. This has attracted the attention of many children, and with it, apprehensive parents and lawmakers.

Much like nearly every other app, it requires the user to agree to a privacy policy; however, since many of its users are under the age of 13, these children don’t really know what they are signing off on.
Herein lies TikTok’s core violation. By allowing thirteen-year-olds to sign off on its privacy policy, it is doing only the bare minimum to ensure informed consent. In fact, the effort to inform is so minimal (and negligent) that in the US, the FTC (Federal Trade Commission) fined TikTok for collecting the information of children under the age of 13.


The primary argument was that by not requiring more stringent parental consent or approval, TikTok failed to obtain proper informed consent (as an agreement by under-13 minors wasn’t considered valid).
Secondly, the fine was set at $5.7 million to reflect the fact that TikTok had ‘illegally’ collected the personal information of these minors, even though it was used simply to improve the recommendation algorithm (which suggests which videos to display). Here lies the second issue: by not being transparent about how it handles data, TikTok opens itself up to further criticism. In fact, esteemed professor and privacy buff David Carroll pored over TikTok’s privacy policy and found that until February 2019, part of user data was processed in China, where a different (and far less stringent) set of rules governs data use. By not clearly disclosing how data would be handled and being dodgy about key elements of data ownership, TikTok committed yet another morally questionable, arguably unethical, action.

Inadequate effort to obtain informed consent and a lack of transparency about data ownership are the ethical objections in this case study. TikTok has in some ways rectified its actions by deleting the accounts in question (and their corresponding data), removing all under-13 accounts, and promising more stringent measures to prevent this from happening in the future. However, there are still other potential ethical landmines for TikTok to navigate.


I wrote this piece as part of an online course I took titled Data Science Ethics. For those interested in exploring the moral policing of technology, this course is a great place to begin.

Google Quantum Supremacy

When early reports surfaced that Google had achieved quantum supremacy, they were met with skepticism. Even to those well-versed in quantum computing, this came as a surprise. For others, like me, the significance of the feat went over our heads, but let me assure you that this accomplishment is very important for the days to come.

What it means

Google demonstrated that its experimental quantum computer could perform a specific task that classical computers could not realistically solve. The experiment was designed to prove that quantum computers can do some tasks (useful or not) that even the fastest supercomputers can’t do, or would take centuries doing. This is what it means to establish quantum supremacy.

To do this, you need two things: capable quantum computing hardware and a problem with sufficiently large computational-complexity. Many companies have been racing to this milestone.

With IBM’s Q, Intel’s Tangle Lake and Google’s Sycamore, there were increasingly competent quantum chips able to attempt such a task. What is important to note is that these chips were improved incrementally; only after decades of research and development did these companies reach a stage where their processors had dozens of qubits (quantum bits). The first processor to the task was Google’s Sycamore, which had 54 qubits, of which 53 were functional. As of today, it is one of the most advanced quantum chips we have, capable of representing 2^53 combinations. [Reports that Google was making a 72-qubit processor were also confirmed when it named that processor Bristlecone.]

Once it had the hardware, Google took the route of sampling the probability distributions of increasingly complex random quantum circuits to establish quantum supremacy. While there are other ways of establishing quantum supremacy, this is considered the best type of experiment, as it captures the exponential scaling at the heart of quantum computing. The process was to create a random quantum circuit, run it on the Sycamore processor, check whether a classical computer could keep up, and then add more complexity to the circuit until the classical computer couldn’t.

The classical computer of choice was the Summit supercomputer (at Oak Ridge National Laboratory). It eventually failed at a quantum circuit that took Google’s chip just 200 seconds (3 min 20 sec) but was estimated to take Summit about 10,000 years. While the problem itself wasn’t especially useful, it was deliberately chosen to emphasize the limitations of classical computing and the power of quantum computing.
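
A rough way to see why a full classical simulation hits a wall is to count the memory a state-vector simulator needs: n qubits require 2^n complex amplitudes. The sketch below is purely illustrative; it assumes 8 bytes per amplitude and is not how Summit was actually benchmarked (IBM’s later 2.5-day estimate, for instance, leaned on disk storage rather than holding the full state in RAM).

```python
# Why brute-force state-vector simulation scales so badly: n qubits
# need 2**n complex amplitudes. Assumes 8 bytes per amplitude
# (complex64); an illustration, not how Summit was benchmarked.
BYTES_PER_AMPLITUDE = 8

for n in (30, 40, 53):
    amplitudes = 2 ** n
    gib = amplitudes * BYTES_PER_AMPLITUDE / 2 ** 30
    print(f"{n} qubits: 2**{n} amplitudes, ~{gib:,.0f} GiB of state")

# 30 qubits fit on a laptop (~8 GiB); 53 qubits need ~67 million GiB
# (64 PiB), far beyond any supercomputer's RAM. Each extra qubit
# doubles both the memory and the work per gate.
```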

Google confirmed the milestone on October 23, 2019 in a blog post and a research paper published in the journal Nature. Days later, IBM hit back with a blog post of its own, claiming that Google’s declarations were misleading and that IBM’s estimate for Summit to complete the same computation (using a different method) was actually just 2.5 days, not 10,000 years, which would reduce Google’s claim from quantum supremacy to mere quantum advantage. While IBM’s study hasn’t been independently verified, the work of both camps will be dissected. Nonetheless, what Google achieved is a milestone.

Why it’s important

This experiment is major for two reasons. First, even though it will take years (possibly decades) before we get quantum laptops and phones, quantum computing research will likely yield algorithmic breakthroughs in the meantime, resulting in drastic improvements to classical computing.

More importantly, this was a sign that quantum computing isn’t just a passing fad. The achievement shows just how far we have come and gives investors and VCs the confidence they need to put money into quantum computing. The experiment pierced through what was thought to be a tech bubble, one that many didn’t think could be overcome in their lifetime. With this, Google has established quantum computing as a technology that has moved past books and discussions and into the real world.

INTERVIEW

As part of our primary research, we interviewed a few AI experts from a world-famous healthcare IT firm*. They had a lot to share, and their inputs helped us debunk a few myths. They also gave us thought-out predictions on when these products will hit the market and what the implications will be.

Here are our top 3 insights that we drew from the interview:

The gap between revolutions is decreasing, with AGI the next and arguably the most important one

It took more than a century between the first and second industrial revolutions, a little more than 50 years between the second and third, and the fourth is predicted to take less than 30. This is a testament to the rapid pace of technological advancement.

Jobs won’t be lost, just face-lifted

Adding to the revolution aspect, the experts claimed that though the media portrays immense job losses in the future, even more jobs will be created. We followed up on this, and even the famous Gartner research group claims the same: even though 1.8 million jobs will be lost at the hands of AI (a precursor to AGI), 2.3 million more will be created, a net positive of 500,000 jobs. Gartner also noted that the types of jobs will be less repetitive and more intellectual, thus a revolution for occupations too.

It’s still a two-way street

For at least some time (about 25 years), machines and humans will need to help each other for mutual benefit. AGI still hasn’t reached a level where it can develop and improve itself without human help, and we in turn are becoming increasingly reliant on technology, with AGI potentially the most useful of all. So, for at least a while longer, machines and humans will peacefully coexist.


#15: Cambridge Analytica data scandal – Lucidity Project (audio articles)

I explore what the Cambridge Analytica scandal was, how it happened and why we must never let it happen again.

* The company these experts represent didn’t want their names or the company’s name published; however, it can be confirmed that they are executives and directors at this multi-billion-dollar firm.


Developed in response to a school project, Rohan, Suvana and I created PRECaRiOUS, a blog aimed at raising awareness and ultimately answering the question:

How will the development of Artificial General Intelligence (AGI) be an infringement of human rights?

A lot of that content is still relevant here, which is why I have adapted the same posts into a mini-series on this blog.


EXPERT TESTIMONY

What kind of AGI-based products do you see coming into the healthcare IT industry?

Adopting AGI systems in healthcare will take longer than in other domains/industries, due to complexity and sensitivity. AGI systems in healthcare would predominantly be used to aid decisions rather than make them automatically. I see three types of systems/products that might emerge in healthcare leveraging machine learning concepts:

1. Prescriptive Analytics with full automation

All services in healthcare that do not directly involve the patient or patient care can be fully automated using AI systems: scheduling, billing, etc.

2. Predictive Analytics with Semi Automation

Tools or apps that would help healthcare providers make the right calls when making decisions.

3. Descriptive Analytics

Reports, dashboards and KPIs [Key Performance Indicators] that help users make certain decisions based on their own interpretation.

What is your estimate of the timeline for when we can see AGI-based products? Or is it too soon to tell?

A few products already exist in the market, but wide adoption of these will take longer in healthcare. One of the big challenges is that healthcare AGI products are expected to achieve 100% accuracy in prediction or prescription, while in other industries even 95% accuracy is acceptable. Achieving 100% accuracy in healthcare is very challenging and depends on data availability and on strong business algorithms and rules that are kept up to date.

Small-scale usage of AGI products (in bits & pieces) in healthcare might start in the next 3 years (with some momentum from 2020). However, it might take at least 8 to 10 years (beyond 2025) to see noticeable usage (10 to 20% adoption). Wide adoption (up to 50%) is at least 15 to 20 years away (2030 to 2035).

What kinds of jobs in the healthcare IT industry do you think will be replaced? Will new jobs be created, or will existing jobs be modified?

I don’t foresee drastic changes in healthcare IT jobs in the next 15 years due to AI, but the need for data scientists will increase. Remember that the success of AI products depends on the availability of large amounts of the right data from transactional systems, so maintenance of transactional systems with the right functionality and quality will continue. There might be minimal impact on domain-side/end-user jobs (not on core jobs like doctors or nurses, who are primary care providers, but on supporting jobs). AI is meant to enhance efficiency and accuracy rather than eliminate the need for people.

A lot of people fear that AGI and AI will eliminate jobs. What is your opinion on this?

AI will enhance efficiency, effectiveness and experience rather than eliminate jobs. It will improve accuracy and speed in providing services, and so enhance end-user satisfaction and turnaround time rather than eliminate any role. Maybe some organizations will look at cutting certain jobs and consuming these benefits just to maintain their old SLAs [Service Level Agreements] and improve margins, without visualizing the importance of bettering the SLA, which in turn would increase margins.

What is your view on Universal Basic Income (UBI) as a solution for job replacement due to AI?

As I mentioned above, AI might not impact the need for existing jobs, and income for those jobs will continue to follow market trends. The only notable trend might be positive: data scientists’ incomes could double in a 5-to-10-year time frame.

A worry repeated by many journalistic outlets is that the development of AGI is against our human rights. Our team believes that it infringes Articles 23, 24, 27.2 and 29 of the UDHR (Universal Declaration of Human Rights). What is your opinion on this issue?

This is a complete misunderstanding, far from reality. It is due to overselling of AI by many firms and overreaction from a half-informed population. People should understand that artificial intelligence is not real intelligence. AI is meant to enhance human capability to perform better, not to replace humans. There could be minor scenarios where some minimal human intervention is replaced, but that is part of the natural progression of industrialization, which is bound to happen.

A solution suggested by some to avert an AGI crisis is to make all AGI and AI development transparent, with all experiments published, free to see and comment upon. Even OpenAI advocates this approach by posting its developments on its blog. Though this may help, we think it will affect Article 27.2, which states that everyone has the right to the protection of the moral and material interests resulting from any scientific, literary or artistic production of which they are the author.

○ Is this truly a fear for the future?
○ If it is a fear, what can be done to prevent this plagiarism?
○ Or do you think it is inevitable?

If it is made available for public consumption, then it has to be transparent. Anything that will impact human health, and might directly or indirectly cause death or disability, should go through a review process and be transparent. If it is used under the strong supervision of a practitioner, then I don’t see the need for it to be transparent to the public, though it should at least be open to monitoring bodies.

Another solution to prevent the misuse of AGI is to introduce an approval-based system where all AGI projects are either approved or rejected based on their potential harm. Though some claim this is a viable solution, others say it will drastically increase the time taken for real AGI development. Is this trade-off a feasible step we must take to ensure the protection of humans, their jobs and their rights?

I don’t see a need for new rules or regulations; existing ones should suffice. Any system or product that is to be made available for public consumption without monitoring should go through an approval process. For example, a few medicines can be sold without prescription, but they must still be approved for sale in the market. The same goes for AI products or apps. While AI is being used under the supervision of a practitioner, it might not require approval.

How do you think bias in AGI can be tackled?

○ For example, if a programmer is pro-armament and injects that opinion into the AGI, will the AGI tend to show pro-armament ideologies?
○ Is this true, or a misconception?
○ Should bias be handled case by case or addressed in a charter of some sort?

I am not sure whether a programmer can bias the model intentionally (or through their beliefs), but a programmer can definitely bias the system while sampling the data. Investigations are still ongoing into handling bias holistically, but currently it has to be handled on a case-by-case basis. A thorough review process and exhaustive testing would help in handling bias.

What is your stand on the suggestion that all firms working on AI & AGI development must be nationalized, i.e. put under the control of the government?

○ We believe that this may reduce a company’s freedom to choose what to work on, but also that it will assure safety. Would a company be willing to give up ‘creative freedom’ in return for ‘assured safety’?
○ Additionally, do you believe that this is an effective solution or will it just be a hindrance to development?

There is a need for a nationalized body to review and approve anything that will be made publicly available for everyone’s consumption. But if AI is used under the supervision of a human decision-maker who is certified for that job, then I don’t see the need for an approval process, as the certified person is allowed to decide whether or not to use it.


* The company that this expert represents doesn’t want the expert’s name to be published; however, the expert is a senior executive at a top healthcare firm.


Developed in response to a school project, Rohan, Suvana and I created PRECaRiOUS, a blog aimed at raising awareness and ultimately answering the question:

How will the development of Artificial General Intelligence (AGI) be an infringement of human rights?

A lot of that content is still relevant here, which is why I have adapted the same posts into a mini-series on this blog.


AI Ethics: A Google conspiracy theory?

A LITTLE CONTEXT

With more development and implementation of AI, many pundits and experts have been vocal about its potential dangers and have called for a standstill in development, or at least some regulation. These calls to action have even penetrated the heavy armour that Big Tech hides behind. Imagine letting a single company toy with such powerful technology.

In response, Google confirmed that it had set up an AI ethics council, formally known as the Advanced Technology External Advisory Council (ATEAC), to govern its use of AI. This was announced on March 26, 2019 by Kent Walker (Senior Vice-President, Global Affairs) on the Google blog.

Just days later, on April 4, Google had to dissolve the group, citing that the public outrage had created an environment in which the council could not function. Though this was never officially announced, Vox got the confirmation exclusively from a Google spokesperson, and the initial blog post was updated with the same response.

THE PROBLEMATIC ROSTER

There were many reasons this controversy seemed premeditated, primarily the questionable choices on the council. For better understanding, I have broken down each member’s background and technical and political leanings.

  • Alessandro Acquisti: IT & public policy professor who specialises in bias
  • Bubacarr Bah: Maths & data science professor
  • De Kai: Engineering and computer science professor
  • Dyan Gibbens: Monitoring & surveillance drone mogul
  • Joanna Bryson: AI, ethics & collaborative cognition researcher and computer science professor
  • Kay Coles James: President of the Heritage Foundation (a right-wing think tank)
  • Luciano Floridi: Privacy and information ethics expert
  • Joseph Burns: Policy expert and diplomat; US Deputy Secretary of State during the Obama administration

To summarise, the group had five men and three women: two known to have conservative views, one with liberal ties and the rest assumed to be apolitical. On paper this seems like a fair enough spread, spanning genders and political boundaries while also covering the necessary academic backgrounds; however, quite a few people found the group’s composition problematic.

The argument rests primarily on the objections raised against two specific appointees: Kay Coles James and Dyan Gibbens.

After the backlash Google received for Project Maven, hiring a surveillance technology expert like Dyan Gibbens seemed like intentionally mangling an already delicate situation. Additionally, to the surprise of some, blaring grievances were raised against Kay Coles James. As President of a conservative think tank, she drew objections not only for the questionable policies she has backed but also for transphobic tweets in which she refused to recognize transgender people.

THE FLUMMOXING RESPONSE

Unsurprisingly, a petition blew up soon after the announcement. Just days after the council was formed, a petition calling for James’s removal garnered thousands of signatures, and Google was back in the hot seat.

Google responded weirdly quickly, faster than it did with Project Maven. Interestingly, the petition called only for Kay Coles James’s removal, but Google axed the entire council.

While some suspected foul play on Google’s part, others found it odd that the council seemed designed to fail. It was a logistical nightmare: the council was commissioned to meet only 4 times in the span of a year, which wouldn’t be remotely enough for a council of eight people to weigh in on the possibly thousands of projects Google would simultaneously need approval for.

THE BOTTOM LINE

One solution could have been for Google’s executives to look inward. The company already has a deep pool of talent and experience, like Peter Norvig and Rob Worthim, that it might as well channel into governing its use of AI. This offers quite a few benefits, not least superior contextual understanding and fewer intellectual property issues. However, a glaring problem arises: conflict of interest. I’ll admit this solution isn’t perfect, but it may be a short-term workaround.

Another solution could be to establish a jointly created external board of reputable individuals like Yuval Noah Harari, Sam Harris and others of their stature. This system is self-reinforcing: these individuals would need to be objective in order to uphold the reputations that are pivotal to their careers.

Neither of these solutions is perfect, and the details definitely need to be ironed out, but it is of paramount importance to get this right. While some may question whether Google will follow through on its statements, this fiasco has at least opened up another Pandora’s box in the sector: review and oversight. We still need consensus on how to facilitate the assessment of AI, whether through an audit by an external board or through peer review. We still don’t know how to handle such sensitive intellectual property; the technology at stake can be worth billions, and personal interests may supersede those of the company and of humanity, so we must be wary as we devise a workable solution.