While the release of GPT-3 marks a significant milestone in the development of AI, the path forward is still obscure. The technology today faces some real limitations. These are the major ones:
Prediction and decision models need data to be trained properly. As many have put it, data is now one of the most sought-after commodities, displacing oil; it has become a new currency. Currently, large troves of data sit in the hands of big corporations. These companies have an inherent advantage, which is unfair to the small startups that have only just entered the AI development race. If nothing is done about this, it will further widen the power gap between Big Tech and startups.
The ways biases can creep into the data-modeling processes that fuel AI are quite frightening, not to mention the underlying (identified or unidentified) prejudices of the creators themselves. Biased AI is much more nuanced than just tainted data. Bias can slip through at many stages of the deep-learning process, and currently our standard design procedures simply aren't equipped to identify it.
As this MIT Technology Review article points out, the way we currently design AI algorithms isn't really meant to identify and retroactively remove biases. Since most of these algorithms are tested only for their performance, a lot of unintended fluff flows through: prejudiced data, a lack of social context, and a debatable definition of fairness. While accounting for these factors may, from a brutally objective point of view, drag down the accuracy of such algorithms, doing so must become standard practice in the near future.
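To make the "tested only for performance" problem concrete, here is a minimal sketch (with made-up predictions and group labels, purely hypothetical) of a fairness check that performance-only testing would never run. It computes a demographic parity gap: the difference in positive-prediction rates between two groups. A model can have high overall accuracy and still fail this check badly.

```python
# Toy illustration with hypothetical data: a model can look fine on
# aggregate accuracy while treating two groups very differently.

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between groups A and B."""
    rate = {}
    for g in ("A", "B"):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(preds) / len(preds)
    return abs(rate["A"] - rate["B"])

# Hypothetical outputs: group A gets a positive prediction 80% of the time,
# group B only 20% of the time.
preds  = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # prints 0.60 for this toy data
```

Demographic parity is only one of several competing fairness definitions, which is exactly the "debatable definition of fairness" problem: satisfying one metric can mean violating another.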
Even though technology has been advancing rapidly in recent years, there are still hardware limitations, such as limited computational resources (RAM and GPU cycles), that we have to overcome. Here again, established companies have a significant advantage, given the costs of developing such custom and precise hardware.
Mining, storing, and analyzing data will be very costly, in terms of both energy and hardware.
The estimated training cost for the GPT-3 model was $4.6 million. Another video predicted that, for a model comparable to the brain, training would cost about a thousand times more ($2.6 billion).
There are underlying assumptions here, about the brain, about the linear scaling of costs, and others, any of which could drastically skew this estimate, but a brain-scale model is likely to be about a thousand times costlier (to create in 2020, at least).
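The back-of-the-envelope logic above can be sketched in a few lines. The $4.6 million GPT-3 figure and the rough thousandfold factor come from the estimates cited here; the linear-scaling assumption is exactly the debatable one the text flags (the video's $2.6 billion figure rests on its own, different assumptions).

```python
# Back-of-the-envelope sketch of the linear-scaling assumption:
# if training cost grows linearly with model scale, a brain-scale model
# at ~1000x GPT-3's scale would cost ~1000x as much.

GPT3_TRAINING_COST_USD = 4.6e6  # estimate cited in the text
SCALE_FACTOR = 1000             # rough brain-vs-GPT-3 factor from the text

def scaled_cost(base_cost, factor):
    # Assumes cost scales linearly -- a big, debatable assumption,
    # which is why published estimates differ.
    return base_cost * factor

print(f"${scaled_cost(GPT3_TRAINING_COST_USD, SCALE_FACTOR):,.0f}")
# prints $4,600,000,000 under pure linear scaling
```

That this naive extrapolation lands at $4.6 billion, while the cited video estimate is $2.6 billion, illustrates how sensitive these forecasts are to their assumptions.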
Also, given that skilled engineers in these fields are currently a rare commodity, hiring them will definitely dent these companies' pockets, here too putting newer and smaller companies at a disadvantage.
Since AI isn't human, it isn't exactly equipped to adapt to deviations in circumstances. For example, simply applying tape to the road can cause an autonomous vehicle to swerve into the wrong lane and crash, while a human driver might not even register or react to the tape. While in normal conditions the autonomous vehicle may be far safer, it is these outlier cases that we need to worry about.
It is this inability to adapt that highlights a glaring security flaw that is yet to be effectively addressed. While 'fooling' these data models can sometimes be fun and harmless (like tricking a classifier into mistaking a banana for a toaster), in extreme cases (like defense applications) it could put lives at risk.
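The 'fooling' mentioned above is usually done with adversarial perturbations. Here is a minimal sketch of the idea, in the style of the fast gradient sign method, against a toy logistic-regression classifier; the weights and input are entirely made up for illustration. A tiny, deliberately chosen nudge to the input flips the model's decision even though the input barely changes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, x):
    # Probability of class 1 for a toy logistic-regression model.
    return sigmoid(w @ x)

def fgsm_perturb(w, x, y, eps):
    # Gradient of the logistic loss w.r.t. the input is (p - y) * w;
    # stepping in the sign of that gradient increases the loss.
    grad_x = (predict(w, x) - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([1.5, -2.0, 0.5])   # hypothetical trained weights
x = np.array([0.2, -0.1, 0.4])   # input correctly classified as class 1
y = 1.0

x_adv = fgsm_perturb(w, x, y, eps=0.3)
print(predict(w, x))      # above 0.5: classified as class 1
print(predict(w, x_adv))  # below 0.5: the small nudge flips the decision
```

Against deep networks the same trick works with perturbations small enough to be invisible to humans, which is what makes this a security problem rather than a curiosity.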
No consensus on safety, ethics, and privacy
There is still work to be done in figuring out the limits within which we should use AI. The current limitations highlight the importance of safety in AI, and it must be acted upon swiftly. Additionally, most critics of AI argue along ethical lines, not just in terms of how it makes privacy a forgotten concept, but also philosophically.
We consider our intelligence inherently human and unique, and giving away that exclusivity can feel conflicting. One popular question that arises: if robots can do everything humans can, and in essence become equal to humans, do they deserve rights? If so, how far do you go in defining these robot rights? There are no definite answers here. Given the recency of AI development, the philosophy of AI is still in its nascent stages. I am very excited to see how this sphere of AI develops.
Certain facets of AI development have made entry into this field very restrictive. Given the cost, engineering, and hardware needs, AI development poses significant capital requirements, creating high barriers to entry. If this problem persists, the minds behind its development are likely to be predominantly employed by Big Tech.
In the past, technological revolutions like this one have allowed new players to burst onto the scene with fresh ideas. This is exactly how the companies we now refer to as Big Tech (Amazon, Google, Facebook, Apple, and others) got their start. While we are only now beginning to untangle the implications of their vast power, the impact they have had on society is undeniable. It is only fair to presume that allowing new companies and minds from a new generation to spring up will lead to positive outcomes.
As I previously warned, if left unchecked, i.e., without timely and practical intervention, the development of AI can aggravate the dichotomy between those in power and those without. It might also accelerate the divide between those humans with AI and the unfortunate few without. Rather than humans versus AI, the future might look like humans with AI versus humans without.
While that may, ironically, be the most tangible impact of AI development, I don’t think that it will be the most significant one. I believe that the philosophical implications of AI are the ones of greatest importance. Though the idea of such a technology making us question the very basic tenets of our existence seems daunting, I think that this experience will be wholly humbling. It hopefully will lead to startling discoveries whose implications transcend mere individuals and companies.