Featured

From Talos to GPT-3: Journey from physical to moral policing

In Greek mythology, the island of Crete is strongly associated with the ancient Greek gods. It is the backdrop to many famous myths, my favourite being that of Talos.

While there are conflicting stories as to how he was created, the popular account is that Hephaestus (god of blacksmiths and fire) was busy in his forge inventing a new defender for the island. His goal was to build a giant man whose insurmountable power could replace feeble mortal soldiers. Cast in beaming bronze, the giant possessed previously unimaginable strength and was powered by the blood of the gods: ichor. The only indication of this was a single vein that ran from his neck to his ankle.

He had a simple task: protecting Crete. Three times a day, this bronze defender would circle the boundary of the island looking for pesky invaders. When he found some, he would hurl boulders with ease, sinking their ill-fated ships. Even then, some intruders would occasionally slip past this flurry of boulders. For those, a worse death lay ahead. Heating his metal body up, he would embrace these trespassers, quite literally killing them with kindness!

He was the model guardian, never tiring and always consistent, able to replace the legions of human soldiers who had previously done a sub-par job of keeping the island safe. Internally, however, he yearned for more.

This was until other Greek heroes came into the picture. Jason, Medea and the other Argonauts had just completed yet another quest and were looking for a place to rest. Attempting to find the relief of a safe cove on Crete, they invoked the defence mechanisms of Talos. While the other Argonauts cowered in fear, Jason veered the boat away from the boulders while Medea came up with a plan. Once they cleared the first round of defence, Talos began to heat himself up. Then Medea, the witch, ventured onto the land to coax Talos. Using her honeyed voice, she offered him immortality in exchange for safe passage. Somehow this resonated deeply in his core. Accepting, Talos allowed Medea to chant the necessary invocations. This proved to be a sly distraction: Jason pulled the screw from Talos' ankle, and the ichor flowed out like molten lead, draining his power source. His only blind spot attacked, the robot collapsed with a thunderous crash.

So you might be wondering how a mini-story from the Percy Jackson series figures on a tech blog, but the links between technology, imagination and AI are much older than we might imagine. In fact, this story, which contains themes of automation and machine sentience, was supposedly written around 700 BCE!

We haven't just begun thinking about AI, then. While the headlines may currently be dominated by the release of GPT-3, I believe it is important to recognize its roots.

For those of you who may be living under a rock or simply don't care for tech news (I don't judge 🤐), OpenAI, an AI research firm with backing from Silicon Valley big shots like Sam Altman and Reid Hoffman, announced the release of GPT-3. This is considered a deeply consequential milestone, and also one that is unique in the way it's being delivered.

Short for Generative Pre-trained Transformer 3, GPT-3 is the third installment in OpenAI's increasingly impressive line of Natural Language Processing (NLP) models. Renowned for its zero-shot and few-shot learning abilities, it generates text from a limited prompt given by the user. It is especially unique because it can respond to a prompt it may have never seen before, with little to no context, while maintaining an impressive quality of results.
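To make "few-shot" concrete, here is a minimal sketch of the kind of prompt-plus-completion call beta users make. It uses OpenAI's beta-era Python client; the engine name and sampling parameters here are illustrative assumptions, not a recommendation:

```python
import openai  # beta access required; pip install openai

openai.api_key = "sk-..."  # placeholder key

# A few-shot prompt: two worked examples, then the case we want completed.
prompt = """English: Where is the library?
French: Où est la bibliothèque ?

English: The robot guarded the island.
French:"""

response = openai.Completion.create(
    engine="davinci",   # the base GPT-3 engine in the beta
    prompt=prompt,
    max_tokens=32,
    temperature=0.3,    # low temperature keeps the completion focused
    stop=["\n\n"],      # stop once the translated line is finished
)
print(response.choices[0].text.strip())
```

The model was never explicitly trained to translate here; the worked examples alone tell it what task to perform, which is exactly the few-shot behaviour described above.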

The various use cases of GPT-3 are seriously impressive, and their nuance is not easy to condense into a few lines. However, I'll give it a try:

  • Given a simple single-line prompt specifying either the topic, tone or style of author (you can even suggest a specific individual) or a combination of all three, it can generate complete written pieces. This means that the barrier to “good” and (wherever applicable) accurate creative writing is now lowered.
    • You could apply this to scripts, novels, speeches, essays, articles, emails and any other task that requires creative writing. Thus it has the potential to disrupt the fields of journalism, thought-leadership, film-making and more.
    • Now the problem with such a powerful tool is that if it is easily accessible, it could mean that the barrier to creating believable misinformation is also lowered as these programs can convincingly copy the style of famous individuals.
  • GPT-3 isn't limited to just creative writing; it can also provide unique philosophical revelations, ranging from the nature of god to the meaning of life.
  • Programming jobs aren't safe either: there are already demos which use GPT-3 to dynamically create apps from a given prompt.
    • A key feature of GPT-3 is not just its ability to build on and use its preexisting knowledge bank but also to learn on the go, i.e. meta-learning. This is yet another impressive feat.

However, I would like to reiterate that this isn't all that GPT-3 can do. These are just glimpses of what people are posting online, and keep in mind that only a select few have access to it.

(GPT-3 is currently in its private beta phase, as OpenAI wants to control the potential misuse of the API.)

However, if you would like a slightly more exhaustive list of the other applications of GPT-3, check out this article I wrote. I try to shine a light on the resources and topics that I couldn't cover in sufficient detail here.

Comprehending the scale

To understand why GPT-3 is such a big deal, here’s a graph of the different NLP programs before GPT-3 given in terms of the number of parameters the model processes:

Pre GPT-3 era
This figure was adapted from an image published in the DistilBERT paper.

Now here’s a look after:

Post GPT-3 era

The number of parameters is indicative of the power of a model, and GPT-3 boasts 175 billion of them. The next closest model, from Microsoft, is impressive in its own right but has only 17 billion: a tenth of the size. Given the lead GPT-3 has over other models, it is easy to see why this one is different and the scale at which it can disrupt jobs.

While I may sound like a broken record player (kind of ironic, given how far technology has come), I cannot stress enough the widespread implications this AI model can have. It will affect every field, and you can bet on it. Whether you're an author with oddly specific knowledge of Greek mythology or one of the programmers who made this model, GPT-3 can do much of what you can, if not better then at least at an unimaginably fast rate. It has the potential to carry out a significant portion of human jobs, and the frightening bit is that it's just getting started.

History of NLP

For some context here’s a timeline of the key developments in the field of NLP:

  • February 2019: GPT-2 (OpenAI)

    Built with 1.5 billion parameters, it was a large and powerful model in its own right, and was considered the biggest and most powerful for a while. As the researchers predicted, there were significant ethical concerns, which delayed its full public release.

  • July 2019: RoBERTa (Facebook)

    To improve data tagging, Facebook created RoBERTa, a robustly optimized retraining of Google's BERT (see below). It takes the context of a query into account instead of treating it as a bag of words, which means it produces more accurate results than older methods that effectively discarded word order.

  • October 2019: BERT (Google)

    BERT itself predates RoBERTa (its paper was published in October 2018) and is the model RoBERTa builds on; October 2019 is when Google rolled it out in Search, where it was claimed to dramatically improve the quality of results. Some reports suggest it affected 10% of all Google searches. This marked an important milestone for context awareness in NLP.

  • February 2020: Turing NLG (Microsoft)

    Dethroning the previous champion GPT-2 in size, Microsoft's predictive text model was built with 17 billion parameters.

  • June 2020: GPT-3 (OpenAI)

    Lo and behold: OpenAI's astronomically encompassing text-prediction engine, with 175 billion parameters. That is roughly a hundredfold (two orders of magnitude) improvement over its predecessor GPT-2 and a tenfold (one order of magnitude) jump over Turing NLG (see the quick arithmetic after this timeline).

  • The future

    Many machine learning scientists and AI researchers hold that once these models reach the scale of the human brain, they can finally be considered intelligent.

    It is estimated that the human brain has on the order of 100 trillion synapses, and each can loosely be considered analogous to a parameter. This highlights how far we still have to go in developing intelligent AI. It gives me, at least, a newfound appreciation for the human brain and a sense of just how nascent this field is.
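For the curious, a quick sanity check on those jumps, plus the brain comparison (using the rough 100-trillion-synapse estimate):

```latex
\[
\underbrace{\frac{175\times10^{9}}{1.5\times10^{9}} \approx 117}_{\text{vs. GPT-2},\ \sim10^{2}}
\qquad
\underbrace{\frac{175\times10^{9}}{17\times10^{9}} \approx 10}_{\text{vs. Turing NLG},\ \sim10^{1}}
\qquad
\underbrace{\frac{175\times10^{9}}{10^{14}} \approx 0.18\%}_{\text{vs. the brain}}
\]
```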

Under the hood

So you may ask: how does a machine with well under 1% of our synaptic capacity compete with our "intellectual" abilities? The answer lies in the data it is fed. 60% of the training data came from the Common Crawl dataset, an open repository whose crawler has inched into the corners of the internet, gathering information from Al Jazeera to Alex Jones. To complement this, OpenAI also trained the model on hand-picked resources from Wikipedia and historically relevant books. So while it has picked up very reputable and necessary information, it has also picked up the biases, bigotry, misogyny and other toxic content of the internet along the way. In somewhat relieving news, the researchers recognise this issue and have even reported measurements of problematic bias.
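For reference, here is the approximate training mix as reported in the GPT-3 paper (token counts rounded):

  • Common Crawl (filtered): ~410 billion tokens, 60% of the training mix
  • WebText2: ~19 billion tokens, 22%
  • Books1: ~12 billion tokens, 8%
  • Books2: ~55 billion tokens, 8%
  • English Wikipedia: ~3 billion tokens, 3%

Note how the smaller, higher-quality sources are deliberately oversampled relative to their raw size.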

To elucidate why this is a problem, I would like to look at two examples. Say you identify as non-binary and are seeking healthcare advice from a GPT-3-powered medical diagnosis chatbot. Since the model is also fed data from the spiteful corners of Reddit, it could generate a response that isn't "professional". It might prompt offensive comments about why you shouldn't exist, and even deny you proper medical advice.

Similarly, imagine you are a minority using a GPT-3-powered program to appeal a speeding ticket. There is already sufficient evidence to suggest that minorities are less likely to win such appeals. Now this same data is fed to GPT-3, and the program is "corrupted" into not advising you to take legal action because you have a lower likelihood of success, while for another individual it may suggest a different course of action. Do you see the problem with this?

While there isn't anything inherently wrong, biased or immoral about these programs themselves, their decisions are simply coloured by the data they are fed. Sadly, this means that GPT-3 and other models powered by real-world data will only accentuate the pre-existing socio-economic problems that the data unknowingly records.

There are also other concerns with models such as this. Though not as significant, they pose similarly discouraging questions about further development.

The first is cost. It is estimated to have cost $4.6 million to train GPT-3. This simply isn't sustainable, especially for smaller startups. As I have mentioned before, this barrier could mean that Big Tech extends the lead it has on AI development.

Another concern specific to GPT-3 is that it isn't distinctly different from previous models. It shares the same underlying architecture as its predecessor, GPT-2. This is like swapping out the bronze for titanium but leaving in the ichor. What I mean is that GPT-3 isn't as innovative as headlines make it out to be (even OpenAI's founder has said so); rather, it is a strong testament to the power of brute force and incremental improvement.

Bottom Line

Even if GPT-3 isn't distinctively different, its vastly larger model is able to power previously unimaginable tasks. It marks a defining moment in AI development. I am also glad that, with this, relevant questions about its future are being asked.

The lack of "innovation" is the least worrying claim. While costs are a concern, even they aren't the largest problem. For me (and many in Silicon Valley), how GPT-3 models existing socio-economic problems through tainted data is the cause for greatest concern. I worry that while we look at technology as an impartial medium, we may cast a blind eye to the unjust outcomes it may model from historically problematic data. There is still a lot of work to be done in mitigating the potential misuse of such programs. However, I think OpenAI must be given some credit for taking steps in the right direction. Not only have they delivered an impressive product, they have done so in a (seemingly) safe way. I am aligned with their vision of delivering ethical and just AI. There aren't many concrete ideas yet as to how we can practically achieve this, but for now, a conversation on the topic has begun.

While we need to make strides on that front, we must immediately grapple with the implications programs such as these will have on jobs. These technologies will leave millions without a job and many more scrambling in a new world with the promise of transformed jobs. This transition window will cause great uncertainty and significant discomfort for many. All of this points towards the need for universal basic income (UBI) in a world where most human jobs become automatable. It is imperative that we avoid the unfortunate fate of the forgotten human soldiers of Crete.

Featured

Artificial Intelligence: Time to terminate the Terminator tale?

What is it?

With outdated dystopian movies like Terminator back in the headlines for possibly predicting the future and with companies like Google already releasing Artificial Intelligence (AI) tools and bots, is it time to rethink the Terminator narrative as we move towards a world where we truly begin to understand what it means to have and use AI?

How does it work?

Traces of AI have appeared in mythology, literature, scripture, and almost everywhere else you wouldn't expect, long before they appeared in academic discussions! For centuries, different cultures put forth their own versions of AI, but it wasn't until the mid-1950s that the idea was formally theorized.

All of Artificial Intelligence rests on a single underlying assumption: that all human thought can be mechanized.

While the way humans learn isn't a one-dimensional process, almost all of it can be emulated by a machine. To make this analogy a little clearer, I shall look at intelligence (both human and machine) as a toolbox of skills.

The most basic of these is sensory learning. Even the least intelligent animals and organisms use information from sound, vision, and other senses to react appropriately to the external world. For humans it is no different, and it typically comes innately to us. However, this makes it all the harder for machines to replicate. This idea is known as Moravec's paradox, which states that, contrary to assumptions and traditional reasoning, the sensorimotor skills that come so naturally to humans require enormous computational resources for machines.

As a result, we have made strides in computer vision, robotics, tactile signaling, and similar sensory-based learning technologies only in the last 3-5 years. Now, though, these technologies are just as adept (if not better) at recording sensory input.

A slightly more advanced tool that not all ‘intelligent’ creatures possess is the ability to gather, store and analyze information. In the past, for AI, this was the biggest challenge. 

Training a machine learning model (the technology that 'teaches' AI how to behave) needs huge computational power. This was an obstacle for decades, until about 6-8 years ago, when GPUs were put to use for purposes beyond graphics processing. With the parallel processing power they offered at relatively low cost, machines were finally able to crunch far more numbers, overcoming the computational difficulties we struggled with in the past.

To make the whole process a little more natural, supporting technologies like speech recognition and Natural Language Processing were developed to convert sensory and other inputs into a serviceable format. Neural networks and big data then developed to back the machine learning models being put in place for interpreting that information.

Finally, with these technologies under our belt, we are able to build fairly reliable AI for particular context-specific use cases. This is known as narrow AI. Most current applications fall under this category. 

If we look to scale things further, we must turn to the most powerful skill in our human intelligence toolkit: our capacity for abstraction. While AI hasn't fully gotten there yet, cutting-edge research on transfer learning and meta-learning is being done to make AI useful for a broader range of tasks. With this, AI will be able to reproduce humans' extraordinary ability to generalize learning from one situation and apply it in a different context. If this is achieved, we may finally get the AI that the Terminator predicts: AGI (Artificial General Intelligence). This form of AI could essentially mimic all intellectual human activities and eventually supersede human abilities. The earliest estimates for this, if it is even possible, are at least 10 years out.
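As a taste of what today's transfer-learning research builds on, here is a minimal sketch (PyTorch is my choice here; the five-class task is a made-up placeholder) of reusing knowledge learned on one task for another:

```python
import torch.nn as nn
from torchvision import models

# Start from a network already trained on ImageNet: general-purpose visual features.
model = models.resnet18(pretrained=True)

# Freeze the feature extractor; this "knowledge" transfers as-is.
for param in model.parameters():
    param.requires_grad = False

# Swap in a fresh final layer for a hypothetical new 5-class task;
# only this small head gets trained on the new data.
model.fc = nn.Linear(model.fc.in_features, 5)
```

The point is that the early layers' general grasp of edges, textures and shapes carries over, so the new task can be learned from far less data: a narrow, engineered version of the generalization ability described above.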

Bottom Line

Our art and pop culture have made it abundantly clear that we think of AI as a distressing technology to be worried about. Whether or not this is true, wherever there is data, there will be AI, and I believe it will be the backbone of the technologies that govern our future.

While my naivety would have me think that AI will lead to a utopian world with better healthcare, self-driving cars and more, if current trends are indicative of anything, it will push the world towards more disparity. While it is undeniable that AI is an empowering technology, it is also key to note that the people profiting off of its development are the one percent. They will continue to do so, driving power, income and social inequality further apart.

Therefore, it is unlikely that a conflict between AI and humans will even occur. The more real conflict to worry about is the one between humans who have AI at their fingertips and those who don't. The Terminator narrative is rather misleading (admittedly, it was a movie released nearly 40 years ago!): the likelier reality is that AI further aggravates the dichotomy between the haves and the have-nots.

Featured

ART + AI

Introduction

The idea that AI can infiltrate the field of art is frightening, and rightfully so. While it has been no secret that AI can replace blue-collar jobs and possibly threaten white-collar jobs, the idea that it can impact the livelihood of artists isn't one that the media has foretold, nor one that dystopian movies have explored. Yet we can already see early traces of AI in art. It has slowly seeped into written literature, journalism, painting and even music.

Having said that, this isn't a novel (😉) idea. Sometime in the 90s, a music theory professor trained a program to write Bach-styled compositions. He then played both the real and computer-generated versions to his students, and they found the two indistinguishable. Since then, technology has improved so rapidly that AI can now create music of its own.

Similarly, the sphere of painting has seen leaps and bounds in development, so much so that someone bought the AI-generated 'Portrait of Edmond Belamy' for $432,500.

These definitely indicate a shift in the tides. If we don't know the difference between human-made and AI-generated art, what stops us from appreciating both? Possibly just the robots themselves, as over time they're undoubtedly going to improve. We still compare the Beatles of the 60s to the music of today; their music has stood the test of time, or at least sixty years of it. AI music, on the other hand, is on a completely different level than it was 30 years ago, and one might argue even 10 or 5 years ago. Many of today's artists get compared to the work they put out at the beginning of their careers, yet the same can't be said for AI art. Imagine how much better AI music could get. Does this push many human artists out of the door? The thought definitely has many artists quaking in their boots.

The Debate

While it may seem like the end is near for artists, in an odd occurrence of harmony, engineers and artists seem to concur otherwise. Those educated in this budding field still believe that art is one of the few fields that isn't conquerable by AI. The main arguments are as follows.

Primarily, it is repeatedly argued that art is an innately human ability that isn't reproducible by anything else, animal or machine.

Art is the result of base emotion. While in the Stone Age humans were preoccupied with hunting, over time we have gotten better and better at self-introspection and, as a result, are incrementally fine-tuning our emotional quotient. Machines can't replicate that process, as they lack emotional consciousness; thus, whatever they produce can't be classified as art. Simply put:

Art is a matter of subjectivity, whereas AI functions on objectivity. The two are mutually exclusive.

While on a philosophical level this may be true, from an appreciation standpoint humans simply can't tell the difference. Sure, an AI song has yet to top the charts, but as AI art becomes mainstream, it is bound to happen.

Some also argue that AI music is a passing fad, but the evidence so far suggests it is here to stay. Some may feel a sense of hopelessness: tasks we once considered abstract and complex are being done with ease, and tasks we considered exclusive to humans no longer are. This is extremely demoralising, especially for those whose self-worth is closely tied to the success of their art. Whether they consciously acknowledge it or not, artists sense a threat. The fact that we can't tell the difference may seem deeply insulting to artists, but it is simply a testament to how far AI has come.

Bottom line

AI is evolving at an unimaginable pace. While less than a decade ago it was only as competent as a parrot, able to imitate human art, it is now a powerful tool for collaboration. It seems imminent that in the near future AI will become potent enough to be an independent creator. Who knows what the future will hold? But as AI rewrites a new normal for what we consider 'daily life', we need art to help us explore who we are and who we want to become.

Check out this article on Medium if this topic still interests you.

(Google) Unionizing: A resurging trend?

In a previous post, I wrote about ATEAC (Google's external advisory board for AI projects) and the mysterious circumstances behind its dismantling. In somewhat related (and unprecedented) news, two employees at Google just announced the Alphabet Workers Union (AWU), their attempt at holding the company more ethically accountable.

The union was formally announced in an op-ed in the New York Times, with an invitation extended to all Alphabet employees. At the time of release, the union had only about 200 members in the US, and through this post I want to break down why this is potentially great news.

Why now?

This isn't the first time workers have mobilized against Google's morally reprehensible projects, so why is this different? Looking back, I can point to four definitive moments when Google employees mobilized to hold the company accountable for what they considered ethical lapses.

Two were in response to projects which many employees believed to be ethically questionable, code-named Dragonfly and Maven. Project Dragonfly was a search engine that engineers were allegedly building for China, one that abided by its censorship laws. The second was Project Maven, in which AI was being developed for the US military to autonomously track drone targets. Depending on how you view these issues, both projects have certain benefits. For Dragonfly, if you aren't so keen on freedom of speech and the free movement of ideas, then maybe this doesn't seem so bad; in a world where fake news was increasingly wreaking havoc on society, one could empathize with such a viewpoint. Similarly, Project Maven was seen as a way to reduce military casualties in conflict zones. To say these projects are a sign of a company sacrificing its values for profit (as many media outlets put it) would be to paint an incomplete picture. I do believe there were certain merits to these projects, but at the end of the day, I personally don't think they should have been developed: the cons outweigh the pros by a margin that no economic gain could justify. Many Google employees seemed to agree and protested with petitions and walk-outs.

The third event was when a top Google executive received an exit package valued at about $90 million despite being accused of sexual assault. This enraged workers, and rightfully so. Where Project Maven and Dragonfly had mainly engaged US employees, this time the response was global: in a coordinated effort on the 1st of November, 2018, about 20,000 Google employees engaged in a worldwide walk-out, demanding that executives be more transparent with their sexual harassment reporting and stop setting such dangerous precedents by paying out accused executives exorbitant sums of money.

The final straw was the controversial firing of Timnit Gebru. As a leading artificial intelligence researcher (and one of the few Black women in her field), her work on algorithmic bias was highly relevant to Google. A paper she was working on highlighted the risks of large language models, such as potentially dangerous biases and the ability to deceive people. For a company that has made significant investments in language models (Google Translate, at the very least), such news wouldn't give the public confidence that this development should be undertaken. As a result, in what many saw as an attempt to stifle her voice, she was fired. Other employees didn't take kindly to this, choosing to organize another protest.

With these growing concerns, I think it was inevitable that employees would formalize their coordinated mobilization and unions seemed like an obvious way to go. I think Google was expecting this too, given that they hired IRI Consulting, a company that provides anti-unionization services.

Going global

Within a week of the announcement, membership across US and Canada offices grew to 700, while efforts across the world began to pick up steam. In a subsequent announcement, Alpha Global was revealed: a union alliance between workers from 10 different countries.

While it's important to unite Google employees across the globe, holding the company accountable may prove to be a logistical nightmare. Sure, such a move will help build the employees' negotiating power, but how do you consistently enact a moral or ethical code? There is currently no universally accepted ethical handbook, which will often mean falling back on relevant legislation. And as these projects span geographic boundaries, how do you begin to enforce laws consistently across countries?

Though the exact numbers may seem low at the moment, the move has garnered sufficient media attention and will likely result in the enrollment of a significant number of employees. Additionally, Alpha Global's affiliation with UNI Global Union should prove useful in the future. Representing 20 million workers worldwide, UNI brings resources, bargaining power, and the experience to lead such a movement, at least in the short run.

If this movement gains enough traction, the union should have sufficient bargaining power to influence decisions on ethically grey projects. But is this a perfect solution? Trade unions have been notorious for pushing political agendas that often work against innovation. For Google to retain top talent from the Bay Area, it would need to heed the demands of these employees, and demographic data suggests that Bay Area employees predominantly support the Democratic Party. How does Google toe the line between retaining top talent and remaining politically neutral?

An obsolete medium?

Trade unions are a tricky subject. Often associated with creating rigid labor forces and promoting inefficient use of resources, they have in recent years become taboo, especially in competitive capitalist markets. Seen as a drag on productivity that can eventually lead to economic decline, they have been shunned by Silicon Valley and Big Tech companies in general, which consequently have some of the lowest union membership rates among all sectors in an already falling category. This is due in part to two key union-avoidance strategies: early adoption of gig workers and being economically alluring.

The latter reason should come as no surprise. For years, working for FAANG (Facebook, Amazon, Apple, Netflix, and Google) has been highly coveted. These companies attract top talent not just because they work at the cutting edge but also because they can compensate their workers handsomely. Generous pay and free perks don't necessarily keep people at their jobs, but they are often enough to avoid unions: if there are sufficient alternatives in the Big Tech job market, workers can easily switch and don't need a union to advocate for them.

As for adopting gig workers, this has been a novel move that other industries are also picking up on since the pandemic forced remote working. Essentially, by hiring temp workers, these companies build a culture of replaceability, which may stifle dissent and will definitely result in weak bargaining power for unions.

So for these reasons it was surprising to hear that Google workers, renowned for their creativity and ability to innovate, were adopting such an antiquated model. However, I found it interesting how they shuffled past this PR hurdle. For all administrative purposes, Alphabet Workers Union and Alpha Global are unions. They are, however, trying to draw a very clear distinction: the traditional economic motive is largely absent. Their goal is predominantly to raise social issues, such as military applications of AI and other ethical conundrums, not matters of pay. The main reason they have chosen to be a trade union is that, under current bureaucratic rules, this is the only way for employees to gain collective bargaining power.

Template for the future?

For so long, lawmakers have struggled to keep up with innovation, much less draft up-to-date legislation to mitigate the harms that many of these technologies could pose. The self-accountability promised by Big Tech leaders after every Congressional hearing has been a letdown so far. While hearing the same broken promises gets old, I do believe there is some merit to self-accountability. I just think these unions will provide the right fire to make these leaders act in accordance with their promises.

In general, I prefer this libertarian path because I believe it will maximize innovation while keeping the company in check through a system that lawmakers simply won't be able to emulate. With internal employees well-versed in the context of the dilemmas, I think they will be better judges of the moral implications than your average senator. I find this a neater way of solving the issue: it puts the onus on companies to regulate themselves rather than on out-of-the-loop lawmakers. Since every such company must toe the line between keeping its top talent satisfied and managing corporate interests, I think this strategy can be adopted by other tech companies facing similar ethical struggles too.

Bottom Line

Trade unions aren't a perfect solution, but on paper they seem capable of being a lot more proactive at holding Big Tech companies accountable than lawmakers are. Congressional hearings are often called only in reaction to incidents; this move shifts our attitude towards dealing with ethically grey projects proactively.

Going forward, I still think there are a few details that need to be ironed out. Consistent enactment of an ethical code, and assurance that these unions will not abuse their power, are just some of the concerns that must be clarified before this becomes a viable alternative to other forms of ethical accountability. I am, however, still optimistic that such a scheme could be the blueprint for solving Big Tech's "Don't Be Evil" crisis.

The race to trace: How Big Tech raced to develop Coronavirus tracing apps

For years, policymakers and privacy watchdogs have feared the power Big Tech wields with its vast collection of user data. Yet these tech giants were given the go-ahead to spearhead the contact tracing apps that many governments now use to identify and notify those who have come in contact with a carrier.

At a time when this market sorely needed an effective product, Apple and Google stepped up, together promising their own contact tracing API. As the companies behind iOS and Android, they sit on the movement data of millions of people. Here's what they had to say:

Through close cooperation and collaboration with developers, governments and public health providers, we hope to harness the power of technology to help countries around the world slow the spread of COVID-19 and accelerate the return of everyday life.

Apple and Google’s joint statement

Despite the avalanche of services, however, we know very little about contact tracing apps or how they could affect society. What data will they collect, and who is it shared with? How will that information be used in the future? Are there policies in place to prevent abuse?

While it’s of utmost importance to deliver effective contact tracing apps, being an especially necessary tool in curbing the spread of an airborne virus like COVID-19, there is also a very real need to address these privacy concerns.

Under the hood

Before we get into dissecting the potential privacy concerns, I just want to quickly break down the different underlying technologies that these apps use.

  • Location: Using GPS, or triangulation from nearby cell towers, these apps identify whether you've come in contact with a carrier by tracking your phone's movement and comparing it to other phones that have spent time in the same location.
  • Bluetooth: Phones swap encrypted tokens with any other nearby phones over Bluetooth, using proximity tracking to identify whether you've come in contact with someone. This is generally considered better for privacy, as it is easier to anonymize user data. If you think of GPS as storing absolute addresses, Bluetooth essentially uses relative addresses, ensuring a strong layer of anonymity.
  • DP-3T: Works like Bluetooth tracking, except that an individual phone's contact logs are stored only locally, hence the name: decentralized privacy-preserving proximity tracing. This approach takes the greatest care to address privacy concerns, though with some trade-off in building an effective app. (A sketch of the rolling-token idea behind these protocols follows this list.)
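Here is a minimal sketch of that rolling-token idea, loosely modeled on DP-3T-style protocols; the key size, rotation schedule and matching logic are simplified assumptions, not the actual specification:

```python
import hashlib
import hmac
import secrets

def ephemeral_id(daily_key: bytes, interval: int) -> bytes:
    """Derive the short-lived ID a phone broadcasts during one time interval."""
    return hmac.new(daily_key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

# Alice's phone holds a daily secret key that never leaves the device.
alice_key = secrets.token_bytes(32)

# Bob's phone logs, locally only, the IDs it hears over Bluetooth.
bob_contact_log = {ephemeral_id(alice_key, 42)}  # Bob was near Alice during interval 42

# Alice tests positive and chooses to publish her daily key.
published_keys = [alice_key]

# Bob's phone re-derives every ID those keys could have broadcast
# (say, 96 fifteen-minute intervals per day) and checks its local log.
exposed = any(
    ephemeral_id(key, i) in bob_contact_log
    for key in published_keys
    for i in range(96)
)
print("possible exposure" if exposed else "no recorded contact")
```

Notice that the server only ever sees the keys of people who report a positive test; everyone else's contact history stays on their own phone.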

What Google and Apple's joint API offers is only a set of underlying protocols inside Android and iOS. In other words, Apple and Google have done the groundwork, making sure that health apps can talk to each other across Android and iOS and get access to the features they need. It's now up to countries to develop the apps that plug into these foundations and provide the actual front-end interface for users. A crucial part of this underlying framework is access to Bluetooth signals; as mentioned above, validating swapped tokens to trace the contacts of those carrying the virus is generally seen as a pro-privacy approach.

If you want a more detailed breakdown of the different contact tracing apps, check this tracker.

If you look at how these technologies function, you can see that as an app begins to address privacy concerns, there are increasingly more trade-offs in its effectiveness. Understanding which underlying technology is used helps us understand what data is being collected and who it is shared with.

Apple and Google have also said that access will be granted only to public health authorities, and that their apps must meet specific criteria around privacy, security, and data control to qualify. This has meant that some countries and states have decided to go their own way, usually unhappy with the constraints of non-centralized storage.

Even though some may call this a misuse of influence, limiting which players can enter the market, I believe such a restriction is a positive move. Creating credible technology while ensuring people can only utilize it in a privacy-preserving manner shows responsible deployment of potentially misusable tech. I think this move deserves more praise than it's getting.

Why it’s significant

In the past, most significant inventions, like the computer and the internet, were a result of public sector involvement: funding and personnel. Governments and publicly funded bodies had a huge role to play in the development of such valuable ideas, and many of these innovations were catalyzed by national emergencies (such as COVID-19 today). Lately, however, there has been a heavy reliance on the private sector to deliver these innovations. This shift in dependency is not only somewhat novel but marks an acknowledgement of the private sector's importance.

Beyond the acknowledgment, many of these private tech companies have an ulterior motive: the accumulation of data. This brings us to our next concern: how will that information be used in the future?

In past national emergencies, there is substantial evidence of governmental overreach, often disregarding the rights citizens are granted. Such times are used as an opportunity to fast-track legislation that enables that overreach. For example, soon after 9/11, the Patriot Act was passed: a heavily criticized bill that allowed the tapping of domestic and international phones, increased surveillance of citizens, and other measures that privacy watchdogs called an "erosion" of citizens' rights.

So the idea that the powers-that-be may misuse their influence during a national emergency to profit from it later isn't surprising; it's just that the potential perpetrator is different. With privately made contact tracing apps we may have avoided governmental overreach, but what about private-company overreach? Either party can do damage with movement data, and governments have misused this power in the past, not just generally but in this specific context of movement data too. The upcoming challenge will be to stop private companies from exploiting their users the same way citizens of oppressive countries have been exploited. Knowing how the data will be handled in the future is of utmost importance if we want to prevent companies from misusing what they are currently collecting.

Here, I think there's a larger question to be asked: whose hands are you more comfortable placing your data in, the government's or private companies'? Privacy experts would argue there is a workable solution in which you own your own data; given current time constraints, I don't believe this is yet a practical alternative, though it is easily the best one. There's a certain element of weighing different political and profit motives when deciding who is going to own your data. This brings us back to whom we trust more to handle responses to national emergencies: Big Brother or Big Tech?

National look

To understand what guardrails exist to prevent misuse of data, I think it's important to understand how different countries are using contact tracing apps. It's also worth noting that Apple and Google are issuing blanket terms, applicable globally, while different governments, each with their own political intentions, are trying to bargain for different terms at a national level.

A sticking point between the tech giants and national representatives is who will process the exposure notifications. Apple and Google want to process notifications on users' phones without storing them on a central server, to preserve the maximum degree of privacy possible. Many national representatives argue that storing this data on centralized servers allows for better analysis, helping them identify additional exposures quicker and thus contain the spread of the virus more rapidly. It is quite an interesting dilemma: corporations trying to do right by their users versus countries trying to do right by their citizens.

When countries decide to go against the grain, developing individual apps, there’s a mixed bag of results.

UK

The UK initially committed to creating an independent app, but its exposure notification app hasn't seen great success. This essentially boils down to one key fact: not even a UK-government-approved body has the influence to override permissions at the OS level.

Generally, constant broadcasting of Bluetooth signals is frowned upon and even restricted by Android and iOS, because it can be used for targeted advertising. But since Google and Apple are the ones creating the API, they can configure the operating system so that only their solution has permission to broadcast constantly. By design, this gives them an inherent technical advantage, as the UK's app won't be granted such a permission.

As a result of mounting criticism that the app isn't effective, and given the EU's reluctance to work with Google and Apple on this project, the UK has had to consider shifting to a centralized-storage-based system, which in turn will be criticized for being more vulnerable to privacy loss.

Australia's COVIDSafe app suffers from similar performance issues, as the OS restricts how constantly an app can relay Bluetooth signals.

South Korea

This country was praised for its handling of the virus outbreak, and that can be attributed to how contact tracing was handled.

The government used a combination of tools like cellphone location data, CCTV footage, and credit-card records to broadly monitor citizens' activity. When someone tested positive, almost all available data was broadcast online, like a flood warning. These individuals were put on blast: last name, sex, age, district of residence, and credit-card history, with a minute-to-minute record of their comings and goings from various local businesses, all revealed. The general ideology adopted by countries like South Korea, Singapore and China is that the wider social good trumps privacy, and that this calls for overreach.

Looking at just these two countries, you can see a microcosm of the world. There are generally two schools of thought, which boil down to one question: do you sacrifice privacy for the effectiveness of the app? Whether or not you agree with South Korea's approach, evidence does suggest that it (and countries following similar strategies) sees more effective dividends in containing the virus by allowing such overreach. Ultimately, care must be taken that such governmental overreach is only for the time being and not an opportunity to entrench a long-term authoritarian regime.

Bottom Line

Google and Apple have designed a well-intentioned framework. They have done a good job of acknowledging the intrusive power of the tool and its potential for malicious abuse. However, in doing so, they have also promoted an app design that evidence suggests won't be as effective as governments would prefer.

Big Tech has been put in an unfortunate position. Countries that often criticize these companies for ethical lapses are now asking them to lower their privacy standards. They're in a lose-lose position. Beyond a credibility-building exercise, and obviously helping limit the spread of the virus, these companies don't have much to gain from helping out with contact tracing.

While drawing the line between privacy and effectiveness (essentially, the extent of overreach) will be an iterative process that involves cooperation from both parties, I am more concerned with two larger questions this situation raises. One: how far are we personally comfortable sacrificing our rights and liberties in times of crisis for the greater social good? And two: for how long will democratic countries allow technology companies to dictate the terms?

RESOURCES TO CHECK OUT

A lot of scholars and other esteemed academics are already discussing AGI and its future, and here is our attempt at amalgamating some of these resources.

1. What is Artificial General Intelligence (AGI)

Galactic Public Archives video

This video is a simple and fun introduction to the topic itself. Though it barely scratches the surface, it proves to be a great precursor to further research and understanding in the field.

2. Only humans need apply

Book

This book is the result of thorough research by Thomas Davenport and Julia Kirby; it breaks down stereotypes about AI while building a sense of urgency around the need for discussion of AI and AGI's development. The book calls for serious action, something we too are trying to advocate. It's a great read and definitely a mind-opener, filled to the brim with insightful analysis and comparisons backed by relevant data.

3. AI element

a podcast by element AI

This research firm hosts a podcast which brings relevant, experienced guests onto each episode to demystify the hype behind all aspects of AI.

AI element podcast

4. The dawn of artificial intelligence [EP.53]

Waking Up podcast with Sam Harris and Stuart Russell

Podcast host Sam Harris brings in computer science professor Stuart Russell to understand the challenges of AI while also discussing how it will affect human well-being.

Episode 53

5. The future of intelligence [EP.94]

Waking Up podcast with Sam Harris and Max Tegmark

Max Tegmark is a physics professor at MIT who talks about his book, which is itself a useful resource. They address the risks of superhuman AI, the relevance (and irrelevance) of consciousness for the future of AI, and near-term breakthroughs in AI.

Episode 94

6. The nature of consciousness [EP.96]

Waking Up podcast with Sam Harris and Thomas Metzinger

This German professor and Sam Harris talk about how WWII influenced the history of ideas, then transition to the ethics involved in building conscious AI and the way we identify with our thoughts.

Episode 96

7. AI: racing towards the brink [EP.116]

Waking Up podcast with Sam Harris and Eliezer Yudkowsky

Eliezer is a computer scientist at the Machine Intelligence Research Institute in Berkeley, California, who discusses with Sam Harris the types of AI and their potentially deceptive future. They also address the AI arms race and the ethics involved with it.

Episode 116

For a school project, Rohan, Suvana and I created PRECaRiOUS, a blog aimed at raising awareness and ultimately answering the question:

How will the development of Artificial General Intelligence (AGI) be an infringement of human rights?

A lot of that content is still relevant here, which is why I have adapted the same posts into a mini-series on this blog.


Resources to check out

As I wrap up my series on AI, here are a few articles and pages which cover these topics at a level of nuance that I couldn't do justice to in my previous posts. Some are easy reads, while others are long tutorials that can teach the technical details of machine learning to the level of skilled programmers. I hope you find them interesting or of use.

AI Basics

  1. A quick-read to differentiate between the different buzzwords that dominate this sphere of AI technology.
  2. An overview of hardware used and how the future of AI hardware will look.
  3. History of Machine learning – Will give you a deeper appreciation for how far we have come and the significance of certain developments (like GPU use).
  4. This essay does an excellent job in highlighting how AI will make strong tech companies stronger.
  5. Taken from a debate, a research scientist at DeepMind articulates a very morally conscious argument as to why robots deserve rights.
  6. Packed with economic and historical context, this video very eloquently conveys why this AI wave is different from previous ages of development. It also predicts the implications AI will have on jobs and existing power structures (as I did in this post).
The Rise of the Machines – Why Automation is Different this Time – Kurzgesagt – In a Nutshell

Using AI

  1. A Neural Network is somewhat analogous to a brain. This is a core part of the AI design process. This article is a walkthrough for novices as to how neural networks work.
  2. For those with some background knowledge, here’s a walkthrough as to how to make a simple neural network.
  3. For slightly more involved learning, this article takes you through how to implement a Recurrent Neural Network (RNN).
  4. Described as the next frontier for machine learning, transfer learning mirrors what is considered an advanced cognitive ability in humans; for machines to replicate it would be marvellous. This article covers the concept in great detail.
  5. If you don’t mind a math heavy approach to transfer learning, this Medium article looks at the same concept at a more hands-on level.
  6. This website approaches machine learning from a point of providing tutorials and working code rather than intimidating and math-heavy academic material that is typically restrictive.

Current roadblocks

  1. For a deeper understanding of how bias in AI occurs. Using an excellent analogy to dogs, it details how machine learning programs are neither inherently biased nor unbiased: they simply learn from what they are taught.
  2. This post looks at the ethical concerns of AI use with a focus on facial recognition.

A way forward

  1. An essay about regulating technology. As AI becomes a part of our everyday society, maybe we need to understand how to control it, or at least how to responsibly develop AI.
  2. In this often-quoted essay titled The Bitter Lesson, Rich Sutton details his learnings from assimilating 70 years of AI research and the common mistakes we need to avoid in further research.
  3. This AI Op-Ed was created using GPT-3. The result is a surprisingly sarcastic and piercing piece about what human intelligence is and isn’t.
  4. A very interesting video about the philosophical implications of AI with a focused look on the idea of robot rights.
Do Robots Deserve Rights? What if Machines Become Conscious? – Kurzgesagt – In a Nutshell



If we do it right, we might be able to evolve a form of work that taps into our uniquely human capabilities and restores our humanity. The ultimate paradox is that this technology may become a powerful catalyst that we need to reclaim our humanity

John Hagel

GPT-3: An encyclopedic entry

To help you wrangle with the text-predictive AI powerhouse that is GPT-3, I have tried to compile everything you might need to know about it. Feel free to add more resources in the comments.

Understanding GPT-3

GPT-3: A Hitchhiker’s Guide

Lambda labs

This beginner’s guide will take you through:

  • How the A.I. research community is reacting to GPT-3.
  • Short summaries of the best technical write-ups on GPT-3.
  • Demos by people with early beta access to the GPT-3 API.

How GPT-3 works: visualisations and animation

Jay Alammar

This article breaks down how the text prediction is done using easy-to-follow animations along with insightful explanations.

GPT-3: Language Models are Few-Shot Learners (Paper Explained)

A comprehensive paper breakdown

Yannic Kilcher

This video does a thorough review of the paper, detailing the technical aspects, trying to predict how the model is built (as this knowledge isn't publicly available), understanding data contamination, and more.

Demystifying GPT-3: Technical overview

Lambda labs

This slightly more advanced guide takes you through the nuances of the model, its limitations, and the data behind it.

Seeing GPT-3 in action

These two megathreads contain almost all the various applications of GPT-3 that I have come across:

Collection of demos and articles about the OpenAI GPT-3 API.

Awesome GPT-3

Github | Yaser Martinez Palenzuela

This is an easy-to-follow guide to the different demos, with excellent classification to help you gauge the vast power of GPT-3.

Vast use cases as well as an analysis of GPT-3’s implications.

Gwern webpage

Website | Gwern Branwen

While its long-form content may seem intimidating, it is nicely structured and well articulated.

It contains an exhaustive list of the applications of GPT-3 in the creative field, as well as the author's musings about this technology and its implications.

Since the previous two are deep rabbit holes you can spend hours on, I have listed a few key applications (all of which you can find above) that I think are must-knows if you are strapped for time. (I still highly recommend the previous two sources; they are very well curated.)

WARNING: This is only the tip of the iceberg to what you can achieve with GPT-3.

General videos

These videos go over the different use cases, sometimes explaining what is going on in the background and what it means for humans currently holding those jobs.

Why GPT-3 changes everything | Sebastian Schuchmann

This video breaks down the underlying principles used to build GPT-3 and its applications.

Crazy GPT-3 Use Cases |
Przemek Chojecki

A brief overview of some of the most impressive use cases.

OpenAI GPT-3 – Good At Almost Everything! 🤖 | Two Minute Papers

A simpler breakdown of the paper published by OpenAI, which also talks about the biases that could have been modelled in.

Philosophy
On god
On the meaning of life
Commercial applications
A no-frills search engine
Layout generator

For similarly powerful text-to-app uses, check this out: Debuild.co

Personal use
Spreadsheet automation
Resume builder
Creative writing

Blog series where a developer with access to GPT-3 details the responses the API gives to his prompts. Here are a few of his posts:

  1. An AI op-ed about human intelligence
  2. A comedy routine in the style of Jerry Seinfeld and Eddie Murphy about San Francisco.
  3. Dr. Seuss themed poems about Elon Musk (there’s a sentence I never thought I would say!)
  4. Short stories: succinct creative writing
  5. Interview – more creative writing
  6. 2020 has been a whirlwind year and someone on Twitter had the genius idea of asking GPT-3 to predict the rest of the year. Here are the results.

Another blogger (Andrew Mayne) applied GPT-3 to creative writing in a series he called OpenAI Alchemy. He converts a script into a novel, uses emojis to summarise movies (i.e. emoji storytelling) and accelerates email writing.

Potential misuse

While access has so far been limited to private beta testing, researchers, journalists and other inhabitants of Silicon Valley have already warned of its potential misuse.

Here are some of the initial reports:

What could possibly go wrong with GPT-3?

Medium | Surya Raj

A look at the bias in GPT-3 as well as how it can be misused for plagiarism.

How OpenAI’s GPT-3 Can Be Alarming For The Society?

Analytics India Magazine | Sejuti Das

It summarises the algorithmic biases, the impact on jobs, and the disinformation threat GPT-3 poses.

The way forward

This article summarises the whole concept well while also tying in what this means for the bigger picture. Here’s an excerpt:

We are still far from AI that possesses general intelligence – i.e. with the ability to read a textbook, understand what it says and apply its lessons in new contexts, in new ways, much like a human might. This said, the versatility and generalisation exhibited by GPT-3 marks a significant step towards making that scenario real.

Viraj Kulkarni | The WIRE Science

Similarly, a blog post on Haptik details a more constructive way of reaping the benefits and mitigating the risks of GPT-3.

GPT-3 is undeniably exciting and will revolutionise many industries, but care must be taken in the safe deployment of such powerful AI systems. Luckily, this aligns with the philosophies of OpenAI; however, it is still up to us to stay vigilant. The general public must make it their task to hold these companies to a fair moral standard, at least until meaningful legislative practices are put in place.

The missing pieces: Limitations of AI

While the release of GPT-3 marks a significant milestone in the development of AI, the path forward is still obscure. The technology today has certain limitations; these are some of the major ones:

Data

For prediction or decision models to be trained properly, they need data. As many people have put it, data is now one of the most sought-after commodities, displacing oil; it has become a new currency. Currently, large troves of data sit in the hands of large corporate organizations. These companies have an inherent advantage, which is unfair to the small startups that have just entered the AI development race. If nothing is done about this, it will further drive a wedge into the power dynamic between Big Tech and startups.

Bias

The ways biases can creep into the data-modeling processes that fuel AI are quite frightening, not to mention the underlying (identified or unidentified) prejudices of the creators that must also be factored in. Biased AI is much more nuanced than just tainted data. There are many stages of the deep-learning process that bias can slip through, and currently our standard design procedures simply aren't equipped to identify them.

As this MIT Technology Review article points out, the way we currently design AI algorithms isn't really meant to identify and retroactively remove biases. Since most of these algorithms are tested only for performance, a lot of unintended fluff flows through. This could be in the form of prejudiced data, a lack of social context or a debatable definition of fairness. While checking for these factors will, from a brutally objective point of view, drag down the accuracy of such algorithms, it must become standard moral practice in the near future.
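As a concrete (and deliberately toy) example of such a check, here is a sketch that compares a model's accuracy across demographic groups instead of only in aggregate; the labels, predictions and groups are all hypothetical, and real audits need far more care than this.

```python
# A toy fairness check: compare model accuracy across demographic groups.
# All data here is hypothetical; real audits need multiple metrics,
# confidence intervals and an agreed definition of fairness.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # ground-truth labels
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1])   # model predictions
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])  # group membership

for g in np.unique(group):
    mask = group == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"group {g}: accuracy = {acc:.2f} (n={mask.sum()})")

# A large gap between groups is exactly the red flag that a single
# aggregate accuracy number hides.
```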

Compute time

Even though technology has been advancing rapidly in recent years, there are still hardware limitations, like finite computational resources (RAM and GPU cycles), that we have to overcome. Here again, established companies have a significant advantage, given the costs of developing such custom and precise hardware.

Cost

Mining, storing and analyzing data will be very costly both in terms of energy and hardware use.

The estimated training cost for the GPT-3 model was $4.6 million. Another video predicted that, for a model on the scale of the human brain, training would cost roughly $2.6 billion, several hundred times more.

There are underlying assumptions about the brain, the linear scaling of costs and more that could drastically taint this estimate, but such a model is likely to be orders of magnitude costlier to create (in 2020 at least).

Also, given that skilled engineers in these fields are currently a rare commodity, hiring them will definitely dent companies' pockets, which again puts newer and smaller players at a disadvantage.

Adversarial attacks

Since AI isn't human, it isn't exactly equipped to adapt to deviations in circumstances. For example, a few strips of tape on the road can cause an autonomous vehicle to swerve into the wrong lane and crash; a human might not even register or react to the tape. While in normal conditions the autonomous vehicle may be far safer, it is these outlier cases that we need to be worried about.

It is this inability to adapt that highlights a glaring security flaw, one that is yet to be effectively addressed. While 'fooling' these models can sometimes be fun and harmless (like making one misidentify a banana as a toaster), in extreme cases (like defense applications) it could put lives at risk.
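For the curious, here is a minimal sketch of the fast gradient sign method (FGSM), one classic recipe for fooling such models: nudge every input pixel slightly in the direction that increases the model's loss. The classifier and image below are stand-ins, not any real system.

```python
# A minimal sketch of the fast gradient sign method (FGSM).
# Model and input are placeholders, not a real deployed system.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input image
y = torch.tensor([3])                             # its true label

# Forward pass, then backpropagate to get the gradient w.r.t. the input.
loss = loss_fn(model(x), y)
loss.backward()

# Perturb each pixel by epsilon in the direction of the gradient's sign.
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# x_adv looks nearly identical to x to a human, yet can flip the prediction.
print(model(x_adv).argmax(dim=1))
```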

No consensus on safety, ethics, and privacy

There is still work to be done in figuring out the limits within which we should use AI. The current limitations highlight the importance of AI safety, and it must be acted upon swiftly. Additionally, most critics of AI argue along the lines of the ethics of implementing it, not just in terms of how it makes privacy a forgotten concept but also philosophically.

We consider our intelligence inherently human and unique. Giving away that exclusivity can seem conflicting. One popular question that arises: if robots can do exactly what humans can and in essence become equal to humans, do they deserve rights? If so, how far do you go in defining these robot rights? There are no definite answers here. Given the recency of AI development, the philosophy of AI is still in its nascent stages. I am very excited to see how this sphere of AI develops.

Bottom Line

Certain facets of AI development have made entry into this field very restrictive. Given the cost, engineering and hardware needs, AI development poses significant capital requirements, creating high barriers to entry. If this problem persists, the minds behind its development are likely to be predominantly employed by Big Tech.

In the past, technological revolutions like this have allowed new players to burst onto the scene with fresh ideas. This is exactly how the companies we now refer to as Big Tech (Amazon, Google, Facebook, Apple and others) got their start. While we are only now beginning to untangle the implications of their vast power, the impact they have had on society is undeniable. It is only fair to presume that allowing new companies and minds from a new generation to spring up will lead to positive outcomes.

As I previously warned, if left alone, i.e. without timely and practical intervention, the development of AI can aggravate the dichotomy between those in power and those without. It might also accelerate the divide between those humans with AI and the unfortunate few without: rather than humans versus AI, the future might look like humans with AI versus humans without.

While that may, ironically, be the most tangible impact of AI development, I don't think it will be the most significant one. I believe the philosophical implications of AI are the ones of greatest importance. Though the idea of such a technology making us question the very basic tenets of our existence seems daunting, I think the experience will be wholly humbling. It will hopefully lead to startling discoveries whose implications transcend mere individuals and companies.

Quiet Disruption: Applications of AI

For years AI was touted as the next big technology. Expected to revolutionise the job market and effectively kill millions of human jobs, it became the poster child for job cuts. Despite this, its adoption has been increasingly well received. To tech experts this wasn't really surprising, given its vast range of uses. Here is a list of some of the most notable applications:

Predictive algorithms

Your actions on the internet create data. Many companies store that data and feed it into different machine learning models which then power predictive algorithms.

Through this, platforms like Amazon and Netflix give you suggestions on what you may want to buy or watch. This predictive power is convenient not only for users but also for those behind these platforms, who can analyze trends better and run their businesses more efficiently. Reviewing such high-volume data manually is impossible and highly error-prone, but automating it will likely help identify more trends, some too subtle for the human eye.
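As a toy illustration of the underlying idea (and only the idea; production recommenders are far more sophisticated), here is a sketch of item-to-item recommendation using cosine similarity on a made-up ratings matrix.

```python
# A toy sketch of the idea behind "you may also like" recommendations:
# items whose rating patterns across users look similar get suggested
# together. The ratings matrix below is invented.
import numpy as np

# Rows = users, columns = items; 0 means "not rated/watched".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Item-item similarity: compare columns of the ratings matrix.
n_items = ratings.shape[1]
sim = np.array([[cosine_sim(ratings[:, i], ratings[:, j])
                 for j in range(n_items)] for i in range(n_items)])

# For a user who liked item 0, suggest the most similar other item.
liked = 0
candidates = [(j, sim[liked, j]) for j in range(n_items) if j != liked]
print(max(candidates, key=lambda c: c[1]))  # -> item 1 in this toy data
```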

Virtual Assistants and chatbots

The earliest traces of a chatbot date back to the mid-1960s. Designed by MIT professor Joseph Weizenbaum, ELIZA was an attempt to demonstrate communication between humans and machines. While ELIZA famously never understood what people said to it, relying instead on simple pattern matching, the chatbot could respond convincingly enough that people were persuaded it was intelligent, despite its creator's disagreement. Since those days of having to type out our side of the conversation and hoping that a relevant answer would return, we have come quite far.
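To appreciate how shallow, yet effective, the trick was, here is a toy ELIZA-style responder: a few regular-expression rules and canned reflections, with no understanding whatsoever.

```python
# A toy ELIZA-style chatbot: simple pattern matching and canned
# reflections, with no understanding at all -- yet, as ELIZA showed,
# enough to feel surprisingly conversational.
import re

RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"my (.*)",     "Tell me more about your {0}."),
    (r"(.*)\?",      "What do you think?"),
]
DEFAULT = "Please, go on."

def respond(text):
    text = text.lower().strip()
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I am feeling anxious"))  # -> How long have you been feeling anxious?
print(respond("My mother"))             # -> Tell me more about your mother.
```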

Virtual assistants like Alexa and Siri essentially perform the role of glorified chatbots, except the conversations are a lot more natural and reliable. Depending on the amount of control you give these devices, they can go from answering questions about the weather to switching on the lights in your room to reserving an appointment at the dentist. Tools like this bring convenience to your fingertips.

Healthcare

Every step of the way, the healthcare industry can be improved with AI, from disease detection and prevention to minimizing surgical errors to even drug creation. AI has a large role to play, one that will likely raise average life expectancy, especially for those who can't afford current healthcare solutions.

With consistent logging of each visit, a thorough review of the symptoms might reveal a disease that even a seasoned doctor might miss. A deeper knowledge of similar cases around the world could also lead to quicker diagnoses and even warnings of a larger trend, like a pandemic.

This cataloged data may even make it easier to run simulations, eventually leading to more efficient drug creation.

Space exploration

Exploring the great unknown has been an eternal passion for mankind, but one whose surface we have barely scratched. While in the past there have been limitations, be they technical or physiological, we have incrementally worked around the challenges. Now, with established satellites and rovers, there is a heavy influx of data. With AI, this can be analyzed and we can begin to better understand the worlds beyond our own. AI can be used in other scenarios as well: the same underlying technology behind self-driving cars can be adapted to spot asteroids and to identify and solve on-board problems. What's more, AI can make it safer for humans to travel in space, or can replace high-risk travel altogether.

Companion tool

This is just one of the many forms of AI, and providing companionship is at least a wholesome use of the technology. Many countries have an aging population and, with it, a larger group of senior citizens. Typically they tend to live alone and have limited social interaction. As a result, they are more prone to developing dementia and depression, over and above their pre-existing medical conditions. With AI companions, they get not only a tool to communicate with but one that can monitor their health and even save them if a medical emergency arises.

Similarly, these AI companions can be adapted to work with children, not just conversing with them but also keeping them safe.

Gaming

Automated gaming created the initial public buzz around AI. While people in the realm of technology knew AI's potential even in the late 1980s, the public was skeptical until IBM's Deep Blue beat Grandmaster Garry Kasparov at chess in 1997. Through this, the world finally began to grasp the true power of AI and realize how close we were to developing useful technology with it, something that, until then, had only seemed attainable in the distant future.

The reason even early AI algorithms performed well at games was, and still is, that there is a clear objective and a set of rules all players must follow. There is minimal deviation and only a limited set of moves that can be played. Since computers have vastly more computing power than humans, they can simulate a larger set of moves and determine the best course of action. As these algorithms collect data over moves and games, they can develop better strategies and use them to beat their human opponents.
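The classic mechanism behind this is minimax search with alpha-beta pruning: simulate your possible moves, assume the opponent replies optimally, and skip branches that cannot affect the outcome. Here is a toy sketch over a hypothetical two-ply game tree:

```python
# A toy sketch of minimax search with alpha-beta pruning -- the classic
# way game-playing programs "simulate moves ahead". The game tree here
# is a hypothetical nested list whose leaves are position scores.
def minimax(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, list):        # a leaf: return its score
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, minimax(child, False, alpha, beta))
            alpha = max(alpha, best)
            if beta <= alpha:             # opponent would never allow this line
                break                     # prune the remaining branches
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, minimax(child, True, alpha, beta))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

# Two plies deep: our move, then the opponent's best reply.
tree = [[3, 5], [6, 9], [1, 2]]
print(minimax(tree, maximizing=True))  # -> 6
```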

Today, many companies have tried their hand at different, more modern games. With Google DeepMind's AlphaGo and OpenAI's Dota 2 bots, AI programs have defeated each game's best players despite having to wrangle far more variables and combinations of moves. This has been possible only due to developments in hardware and computing capabilities.

Autonomous vehicles

Self-driving cars are powered by many underlying technologies, but AI plays a predominant role in improving the entire experience. And it isn't limited to the Teslas and other self-driving cars most of the public can't afford: AI is already behind functions we use today like self-parking, lane-assist, and cruise control.

Artificial creativity

Despite the abstractness and supposed irreproducibility of art, if fed sufficient data, AI models can imitate almost all the art forms humans thrive in: music, literature, painting, acting, and more.
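To see the core idea at its simplest, here is a toy word-level Markov chain: it learns which words follow which in a tiny invented corpus, then generates text by sampling. Modern models like GPT-3 are incomparably more capable, but they share the same basic goal of predicting what comes next.

```python
# A toy sketch of statistical text generation: a word-level Markov chain
# trained on a tiny invented corpus.
import random
from collections import defaultdict

corpus = ("the sea was calm and the sky was clear and "
          "the sea met the sky at the horizon").split()

# Learn which words follow each word in the training text.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

# Generate by repeatedly sampling a plausible next word.
word, output = "the", ["the"]
for _ in range(12):
    if word not in follows:
        break
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))
```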

AI is already used for music creation and drawing. With Amper, A.I Duet and others for music production, and tools like LANDR for the post-production technicalities (mixing and mastering), AI is becoming an instrumental part of a musician's life.

Similarly, today's cutting-edge AI is being used to compose sonnets, create digital masterpieces, and even write entire movie scripts. In the future, seeing this integrated with other technologies like robotics (for painting) and deepfakes (for movies and plays) could redefine art as we know it.

(if this topic interests you, check out this other post I wrote where I cover the intersection of ART + AI in greater detail.)

Finance

Trading

Trading algorithms are now being powered by AI. Already, a firm called Nomura Securities is predicting current market trends based on the compiled trading actions of its past traders, drawing parallels to what happened in similar economic situations in the past. Nor is it the first company to adopt AI-powered models or algorithmic stock trading.

Technology already dominates the trading sector, with high-frequency trading becoming more and more popular. Over the last 10 years, humans manually buying and selling stocks have all but disappeared; algorithmic trading is the norm. While algorithmic trading consistently provides marginal gains over human trading (through a thorough analysis of stock-price variation over time, as seen below), AI trading platforms are the next wave. These will likely be able to replace the functions of a human trader and would also remove the emotional distortions from trading.

[Figures: the same index price charted at hour, minute, and millisecond granularity. Images by Sabrina Jiang © Investopedia 2020]

As the images suggest, through a more thorough investigation of stock prices, machines may find more optimal times to buy and sell stocks (within much smaller windows of time as well) that simply aren't intuitive for humans to detect. In fact, partly due to this, the average time a stock is held dropped from an agonizing eight months in 2000 to just 22 seconds in 2011. However, it isn't all rainbows and unicorns (😉): these forms of trading can malfunction. In May 2010, there was a flash crash lasting just 36 minutes in which the Dow Jones fell by a whopping 9% (before almost immediately recovering in full). While high-frequency trading algorithms cannot be blamed entirely, they played a significant role. An onslaught of regulations followed, and it's likely the same will happen once AI-powered trading algorithms become mainstream. I only hope the necessary oversight is in place before businesses are unnecessarily affected by a similar lapse.
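To ground the idea, here is a toy sketch of one of the simplest rule-based signals that trading systems automate, a moving-average crossover; the prices are invented, and this is an illustration, not investment advice.

```python
# A toy moving-average crossover signal: "buy" when the short-term
# average price rises above the long-term average, "sell" when it
# falls below. Prices are made up for illustration.
import numpy as np

prices = np.array([100, 101, 103, 102, 105, 107, 106, 104, 101, 99], float)

def moving_average(series, window):
    return np.convolve(series, np.ones(window) / window, mode="valid")

short = moving_average(prices, 3)
long_ = moving_average(prices, 5)

# Align the two series on the same (latest) dates before comparing.
short = short[-len(long_):]
signal = np.where(short > long_, "BUY", "SELL")
print(list(signal))
```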

Personal finance

Some startups and banks have also come up with personal-finance solutions that range from tasks as minuscule as reminding you when to pay a bill or suggesting actions that could help your credit score, to ones as broad as portfolio management.

One issue, however, given how sensitive finance is, is data anonymity. As studies have shown, even if data is protected, attempts can be made to match identities to data sets.

Military

Possibly the most frightening use of AI is in the military. Many governments worldwide are keen on leading this arms race. Though it is mainly governments that are developing these weapons, they have desperately sought collaboration from top AI researchers and Big Tech, both of whom have increasingly expressed ethical concerns. Critics of AI have been most worried about this use, citing it as the most dangerous for mankind.

There is sufficient evidence to back this. In one war game conducted by the Pentagon, an AI combatant was able to hold its own against five human combatants. This in itself seems like an overwhelming jump from current standards, not to mention that AI can also be used in unmanned armed drones, flight and missile assistance, weapon maintenance, autonomous weapons and so much more.

Bottom Line

Seeing that the use of AI will become more prominent in so many domains, there is no point in simply fearing it. AI is inevitable and will disrupt almost every sector it reaches.

With many of these applications, our lives will unequivocally be transformed. Better universal healthcare, safer cars, easier access to companionship and more point towards a utopian world with vastly improved standards of living. Other use cases like gaming, chatbots, space exploration and artificial creativity may seem trivial but are convenient, again contributing to an idealistic future. However, some implementations of AI, like in finance and the military, are, to put it lightly, worrisome. While both may seem lucrative in the short term as countries fight to prove their economic and defence dominance, significant red flags will arise if their development and use continue unrestricted. The risks that come with them, like the loss of personal privacy and identity fraud, shouldn't be brushed aside, yet they still pale in comparison to the destruction of entire peoples and possibly even humanity. Don't believe me? Even Stephen Hawking said so!

I think some of the most important decisions humanity makes this century will be around controlling the use of AI. I would prefer precautionary measures to reactionary ones, even if that means getting to that utopian world a little later.