Short on time? This section is basically the meme TL;DR version of my blog posts.
If you’re unsure of which post to read, then click on whichever meme interests you the most.
If you want more structured takeaways from my longer posts, look no further. Here I try to highlight ideas that should stick even if the details lose your attention, but as always, I would appreciate it if you went through the original post.
Talos (one of the first mentions of AI in literature), protector of Crete, was outwitted by the Argonaut heroes Jason and Medea, who exploited the machine's internal yearning for immortality to defeat him. Here you see AI employed in its most primitive form: physical policing.
Renowned for its zero-shot and few-shot learning power, GPT-3 can generate text from a limited prompt. This will have profound implications for a lot of jobs. Even in its limited roll-out, we have seen it do marvelous things: spout unique philosophical musings, generate mobile and desktop apps from a plain-text description, write movie scripts and novels, and so many other exciting applications.
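The zero-shot/few-shot distinction is easiest to see in the prompt itself: instead of fine-tuning the model, you show GPT-3 a handful of input/output examples in plain text and let it continue the pattern. A minimal sketch of how such a prompt is assembled (the translation task and example pairs here are hypothetical illustrations, not from the original post):

```python
def few_shot_prompt(task, examples, query):
    """Build a few-shot prompt: a task description, a few worked
    examples, then the new query for the model to complete.

    Nothing is fine-tuned; the examples live entirely in the prompt
    text, and the model is expected to continue the pattern.
    """
    lines = [task, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
    lines.append(f"Input: {query}")
    lines.append("Output:")  # the model's completion starts here
    return "\n".join(lines)

# Two demonstrations make this a "few-shot" prompt; with zero
# examples (just the task and query) it would be "zero-shot".
prompt = few_shot_prompt(
    "Translate English to French.",
    [("cheese", "fromage"), ("bread", "pain")],
    "wine",
)
print(prompt)
```

The surprising part, and the reason for the hype, is that GPT-3 often completes such patterns competently without ever having been trained on that specific task.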
The scale of GPT-3 isn't intuitive to comprehend. NLP models have come and gone, but GPT-3 marks innovation leaps and bounds ahead of its predecessors: it has 175 billion parameters, while the next closest NLP model has only 17 billion.
Here's what's under the hood and what it implies. About 60% of the training data comes from an open-source web crawl (Common Crawl), which includes everything from Al Jazeera to Alex Jones. To complement this, OpenAI also trained the model on hand-picked sources such as Wikipedia and historically significant books.
One implication of training models on real-world data is that they can only accentuate the pre-existing socio-economic problems that the data unknowingly records. Beyond the bias baked into this data, the model is also costly to train (an estimated $4.6 million).
GPT-3 is a vastly superior model, able to power previously unimaginable tasks, and it marks a defining moment in AI development. However, a lot of work remains in mitigating the potential misuse of such programs, and we also need to grapple with their implications for jobs. Delivering ethical and just AI must be the goal when wielding such powerful technology, and that calls for a shift in our thinking: we need to stop thinking of AI as a tool for physical policing (as it was in Crete) and more as a tool that itself needs to be morally policed when it is deployed.
Pop culture portrays the future state of AI as a binary possibility: either human annihilation at the hands of a sentient machine or a utopian society. Traces of AI appeared in mythology, literature, and scripture even before they emerged in academic discussion, and this has shaped public perception of the technology. While we view it as the distressing technology portrayed in the Terminator movies, the more sobering reality looks like GPT-3 (which, don't get me wrong, is still revolutionary, just not in the way media portrays AI to be).
While the way humans learn isn't a one-dimensional process, almost all of it can be emulated by a machine. If you think of intelligence (both human and machine) as a toolbox of skills, you can draw an almost perfect map of which technologies are trying to mimic which human behavior.
With all the merits that come with having a machine, it's easy to see why people are panicking about losing their jobs to AI.
A conflict between AI and humans is both an inaccurate framing and an unlikely outcome. The more real conflict to worry about is the one between humans who have AI at their fingertips and those who don't. The Terminator narrative is therefore misleading (admittedly, the movie was released nearly 40 years ago!); the more likely reality is that AI further aggravates the dichotomy between the haves and the have-nots.
While Google has mostly maintained a pristine image when it comes to its work culture, with lavish offices and great pay, a closer look reveals a different picture.
An accumulation of incidents, especially four of them (protests against Project Maven, protests against Dragonfly, sexual misconduct allegations, and controversial firings), highlights a track record of ethical lapses at Google.
With growing concern that Google needs to be held ethically accountable, two US employees at the company decided to start a union, the Alphabet Workers Union (AWU). The move was well received, and similar unionization efforts began globally.
Essentially, the union lets workers collectively bargain for greater ethical accountability from company management.
However, trade unions have been notorious for pushing political agendas that often work against innovation, so the move has been criticized by some who see trade unions as an obsolete medium.
For so long, lawmakers have struggled to keep up with innovation, much less draft up-to-date legislation to mitigate the harms that many of these technologies could cause. Internal employees, well versed in the context of these dilemmas, will be better judges of the moral implications than your average senator. I think this strategy can be adopted by other tech companies facing similar ethical struggles too, and I am optimistic that such a scheme could be the blueprint for solving Big Tech's "Don't Be Evil" crisis.
Ever since the likes of Alex Jones and Milo Yiannopoulos were kicked off their dominant platforms, debate has raged over whether tech companies should be armed with the power of deplatforming.
Often their opinions are polarizing and walk the fine line between freedom of speech and hate speech. Toeing this line creates liability for Big Tech companies, as there are now travesties happening around the world that can be tied to social media posts. Recognizing this liability, Big Tech is put in the precarious position of judging between freedom of speech and hate speech.
There is research (though limited) suggesting that deplatforming, at least in the long run, significantly reduces a person's user base. The essential takeaway from deplatforming users who post hate speech is that reducing hate in one place reduces hate elsewhere, not just on the same platform but in general. However, in a free market, as the social media market is, deplatformed personalities have other viable options for continuing their past behavior.
Deplatforming has always been sold as preventing violence and curbing the spread of socially destructive misinformation, but in truth it has always been a form of virtue signalling. In a world where everything is fueled by profit, this is yet another aspect of the product being monetized by tech companies to appease certain political parties.
Silicon Valley has been handed a very potent tool of censorship and only time will tell how they choose to deal with that power.
The world of painting has seen leaps and bounds in development, so much so that an AI-generated work, 'Portrait of Edmond Belamy', sold for $432,500. This definitely indicates a shift in the tides. If we can't tell the difference between human- and AI-generated art, what stops us from appreciating both?
In an odd moment of harmony, engineers and artists seem to concur: art is one of the few fields that isn't conquerable by AI. They argue that art is a matter of subjectivity whereas AI functions on objectivity, and that the two are mutually exclusive.
However, from an appreciation standpoint, humans just can't tell the difference. That may seem deeply insulting to artists, but it is simply a testament to how far AI has come. Less than a decade ago it was only as competent as a parrot, able merely to imitate human art; it is now a powerful tool for collaboration. It seems imminent that in the near future AI will become potent enough to be an independent creator.