THE OPENAI CHARTER: WHAT IS IT?

Elon Musk, Peter Thiel, Sam Altman and other backers of OpenAI took up the project of raising awareness around the idea that AI and AGI should be developed safely, with rules in place to govern both their development and their subsequent use.

To further show their seriousness, the OpenAI team published an AGI charter that covers, in fair detail, the principles they believe the world should follow. The principles seem generally relevant: a fair assessment of possible future consequences, with reasonable measures proposed to guard against them.

The gist of the charter is that OpenAI wishes to harness AGI's potential and use it for the betterment of humanity. The principles are also framed so that, if enforced, everyone would be obliged to develop AGI only for humanity's benefit and keep their focus there. OpenAI also appears to have included timing-related commitments to guard against a late-stage race toward mass-scale development.

The charter is linked below:

https://openai.com/charter/


As part of a school project, Rohan, Suvana and I created PRECaRiOUS, a blog aimed at raising awareness and ultimately answering the question:

How will the development of Artificial General Intelligence (AGI) be an infringement of human rights?

A lot of that content is still relevant here, which is why I have adapted those posts into a mini-series on this blog.


Stadia Gaming: Console killer

On March 19, 2019, Google announced their first gaming product and it took the esports community by storm. It was the first product to make good on the promise of cloud gaming and, much like most other Google products, it made lofty claims. This gripped the attention of the gaming community, put Google under the spotlight and made Stadia the test case for cloud gaming.

To those of you wondering what cloud gaming is, look no further. Some of the earliest video games, like Pong and Atari Breakout, were either exclusively single-player or (at most two-player) multiplayer games. Most didn't offer an option to switch between the two, and even those that did required both players to be present at the same place and time, using expensive, complicated joysticks. As time progressed, games emerged that let people play alone or with the world, toggling back and forth. Still, a problem remained: while the technology had become considerably cheaper, graphics had become far more demanding. The need for capable hardware persisted, and so did the high costs. Cloud gaming aims to solve this.

Stadia is software (plus a controller) built on a cloud gaming model called game streaming, in which the actual game is stored, executed and rendered on a remote server and only the result is streamed to your device. While this may seem radically different from what we typically use, it is still online gaming: Stadia players can enter an online environment with other gamers regardless of which console (or PC) they use. As of now, even PlayStation's DualShock and the Xbox controller will work with the Stadia app.

To circumvent the need for consoles, Google has leveraged vastly improved broadband connections to record user inputs, transmit them to its servers, render the response and return the result, all in a matter of milliseconds. Latency has been the greatest obstruction here, and it is especially impressive that Google has managed such a feat. Sure, internet connections have gotten quicker, but Google has also had to improve the performance of its servers to accommodate such rapid actions. It is also important to note that even the best TVs have some input delay, in the range of 15-50 ms, leaving Google very little room for delay of its own.
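
To make the latency argument concrete, here is a minimal, illustrative sketch of how the pieces of a round trip might add up against a responsiveness budget. All figures below are rough assumptions for the sake of the example, not Google's published numbers.

```python
# Rough, illustrative end-to-end latency budget for game streaming.
# Every figure here is an assumption for the example, not a measured value.

latency_ms = {
    "controller_and_local_input": 5,
    "network_round_trip": 30,           # device -> data centre -> device
    "server_game_tick_and_render": 16,  # roughly one frame at 60 fps
    "video_encode_and_decode": 15,
    "tv_input_lag": 35,                 # TVs typically add 15-50 ms
}

total = sum(latency_ms.values())
budget = 120  # an assumed threshold beyond which streaming starts to feel sluggish

print(f"Estimated end-to-end latency: {total} ms")
print("Feels responsive" if total <= budget else "Feels sluggish")
```

Under these assumed numbers the round trip lands at roughly 100 ms, which illustrates why shaving even a few milliseconds off the network or the encoder matters so much.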

Convenience and concern

As hype over the release of new consoles hits every year or two, most users see past the smoke and mirrors for what it really is: a money grab. In a day and age where old consoles become deprioritised as soon as the next one comes along, gamers become slaves to a constant upgrade cycle. With a product like Stadia, server-side and over-the-air updates can keep even five-year-old hardware up to date with the newest consoles, much the same way old Teslas keep up with newer models through software updates. This makes gaming not only convenient but considerably cheaper too.

Another trend we're seeing as the gaming industry develops is that graphics keep getting better. While this means a more authentic gaming experience, it also means larger files to download and run. That puts pressure on device hardware that cannot be sustained indefinitely (hence the need to upgrade consoles). With remote servers doing the work, however, you avoid long download wait times and local storage limitations. Additionally, those servers are built to last longer, making the whole setup more durable.

Lastly, Stadia and other cloud gaming solutions make gaming portable. You no longer have to lug around a heavy console; just a controller, or even your phone, will do. And if you're keen on watching yourself play on a big screen, all you need is a pocket-sized Chromecast (Ultra).

However, since Stadia is heavily reliant on internet speeds, your experience of the very same game can differ entirely from someone else's. While Google recommends a broadband connection with speeds in the 10-35 Mbps range, initial tests suggest that the real requirement is closer to 50-75 Mbps.
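
A quick back-of-the-envelope calculation shows why the requirement climbs so steeply with resolution. The bits-per-pixel figure below is an assumed compression efficiency chosen purely for illustration, not Stadia's actual encoder setting.

```python
# Back-of-the-envelope estimate of compressed video bitrate for game streaming.
# The bits_per_pixel value is an illustrative assumption, not a real encoder parameter.

def estimated_bitrate_mbps(width: int, height: int, fps: int, bits_per_pixel: float = 0.08) -> float:
    """Rough compressed video bitrate in megabits per second."""
    return width * height * fps * bits_per_pixel / 1_000_000

for label, (w, h) in {"1080p": (1920, 1080), "4K": (3840, 2160)}.items():
    print(f"{label} @ 60 fps: ~{estimated_bitrate_mbps(w, h, 60):.0f} Mbps")
```

With these assumptions, 1080p at 60 fps needs roughly 10 Mbps while 4K needs roughly 40 Mbps, broadly in line with the recommended range, and real-world overhead explains why the practical figure ends up higher.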

Variations in broadband speed and your distance from Google's servers could cost you the extra few milliseconds you need to score the last goal or land that final kill. This added latency in do-or-die situations could affect the result of games. Consoles, on the other hand, will likely provide a more consistent experience.

The only other complaint, and one that is specific to Stadia, is its limited game catalogue, but in its defense it has only just been announced. Many game developers recognize its potential and are slowly flocking towards Google's product. Reports have also claimed that Google will develop its own line of games, known as Stadia exclusives.

Bottom Line

The success of such a product could shape the future of gaming. With centralised infrastructure, Google could provide a cheap but rich gaming experience to its users and mark the era in which consoles become fossils of the past.

No-Brainer: The effect of AGI on careers

With the progress being made toward AGI, the fear of people losing their jobs to machines has grown tremendously in the past few years. AGI may prove to be the culmination of what many people already fear: losing their jobs.

“In almost every occupation, there are at least some tasks that could be affected, but there are also many tasks in every occupation that won’t. That said, some occupations do have relatively more tasks that are likely to be affected by machine learning,”

Erik Brynjolfsson (MIT professor)

First wave: repetition

First, AGI will mostly replace repetitive, rubric-based tasks such as record keeping. Though its adoption will face some opposition, it can reach a stage where machines outperform even multiple humans at a fraction of the cost, and companies or even countries that object will be left in the dust. That day is coming soon, and if AGI becomes able to complete ever more complex tasks, it can replace at least the run-of-the-mill jobs.

To name a few jobs that we aren't likely to see headed by humans in the future:

  • Telemarketers
  • Receptionists
  • Bookkeepers
  • Couriers
  • Cashiers
  • Accountants

This represents hundreds of millions of jobs. Forms of narrow AI can already perform these tasks because they don't require much skill, just repetition or work within a set of well-defined actions, all of which can be reproduced in code rather than by human labour.
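
As a toy illustration of what 'reproducible in code' means here, consider a rule-based bookkeeping sketch. The categories, keywords and transactions below are invented purely for the example.

```python
# Toy rule-based bookkeeping: categorise transactions by keyword.
# Categories, keywords and sample transactions are invented for illustration only.

RULES = {
    "travel": ["uber", "airline", "fuel"],
    "office": ["stationery", "printer", "rent"],
    "payroll": ["salary", "bonus"],
}

def categorise(description: str) -> str:
    """Assign a transaction to the first category whose keyword appears in it."""
    text = description.lower()
    for category, keywords in RULES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "uncategorised"

for transaction in ["Uber ride to client site", "Office printer ink", "March salary run"]:
    print(f"{transaction!r} -> {categorise(transaction)}")
```

The point is not that six lines of rules replace an accountant, but that the underlying actions are well-defined enough to be fully specifiable, which is exactly what makes these roles the first wave.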

Second wave: data-backed decisions

The next sphere of jobs that AGI could revamp are those which require more than one set of actions to be performed. If an action needs to be taken based on a judgement, AGI can take over here too: with its immense data-crunching capabilities, it can process the data it captures from its surroundings to suggest or perform an action. Some examples include security guards, assistants, event planning and much more. For this set of occupations, humans make decisions based predominantly on data (and sometimes on what they feel), and one would argue that machines make data-based decisions far better than humans, which is why replacing humans with machines here looks like a no-brainer. However, a lack of trust is likely to slow this adoption, as many won't be willing to leave the fate of their security up to some robot.
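
As a minimal sketch of what such a data-backed decision might look like, here is a guard-style check that flags unusual activity from a motion sensor. The readings and the threshold rule are made up for illustration only.

```python
# Minimal illustrative "data-backed decision": flag unusual activity from sensor counts.
# The readings and the 3-sigma threshold are invented assumptions for this example.

from statistics import mean, pstdev

hourly_motion_events = [3, 4, 2, 5, 3, 4, 28]  # last value is the latest reading

baseline = hourly_motion_events[:-1]
latest = hourly_motion_events[-1]
threshold = mean(baseline) + 3 * pstdev(baseline)

if latest > threshold:
    print(f"Alert: {latest} events exceeds threshold of {threshold:.1f}; notify a human guard")
else:
    print("Activity within the normal range")
```

The decision rule is trivial, but it captures the shape of the argument: when the judgement is driven mostly by data, the data-crunching part is easy to hand to a machine, while the trust and accountability questions remain.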

Another example of a job that could be replaced, or at least made more efficient, is one we covered in our article featuring interviews with experts in healthcare and AI. Radiology is a profession frequently in the crosshairs, as numerous AI applications have proven more effective and efficient at analysing medical scan images. The job has 26 distinct tasks associated with it, and whilst analysing medical images is well suited to AI, the interpersonal tasks currently are not.

The Bottom Line

The essence of the AI/AGI revolution is automation, making our lives easier. A robotic takeover will mean the death of some jobs but also, more importantly, the birth of new opportunities. Artificial intelligence still needs human ingenuity: people are needed to create these machines, and that is a void to fill. People will also need to fall back on their skills of adapting, self-educating and surviving in order to bounce back. Whatever the future holds, with the possibility of such drastic changes to human employment, it may be time to consider solutions like UBI (Universal Basic Income) to sustain the human race.

As we wonder which jobs won't be turned upside down, it seems fairly apparent that creative jobs won't be hurt. This might mean the human race shifts toward creative occupations, a change that could prove enlightening and fulfilling.

AGI TESTS

Many fear that AGI will become the technological terror portrayed by the Terminator franchise. However, much like a student who must pass exams before being deemed eligible to graduate, a number of tests have been proposed that a would-be AGI should pass. Here are a few of them:

The Turing test ($100,000 Loebner prize interpretation)

The Turing test was proposed in Turing (1950) and has many interpretations. One specific interpretation is provided by the conditions for winning the $100,000 Loebner Prize. Since 1990, Hugh Loebner has offered $100,000 to the first AI program to pass this test at the annual Loebner Prize competition. Smaller prizes are given to the best-performing AI program each year, but no program has performed well enough to win the $100,000 prize.

The exact conditions for winning the $100,000 prize will not be defined until a program wins the $25,000 “silver” prize, which has not yet been done. However, we do know the conditions will look something like this:

A program will win $100,000 if it can fool half the judges into thinking it is human while interacting with them in a freeform conversation for 30 minutes and interpreting audio-visual input.

The coffee test

Steve Wozniak has suggested a (probably) more difficult test, the "coffee test", as a potential operational definition for AGI:

Go into an average American house and figure out how to make coffee, including identifying the coffee machine, figuring out what the buttons do, finding the coffee in the cabinet, etc.

If a robot could do that, perhaps we should consider it to have general intelligence.

The robot college student test

Goertzel suggests a more challenging operational definition, the “robot college student test”:

When a robot can enroll in a human university and take classes in the same way as humans, and get its degree, then we’ll say that we’ve created an artificial general intelligence.

The employment test

Nils Nilsson, one of AI's founding researchers, once suggested an even more demanding operational definition for "human-level AI" (what we've been calling AGI): the employment test.

Machines exhibiting true human-level intelligence should be able to do many of the things humans are able to do. Among these activities are the tasks or “jobs” at which people are employed. I suggest we replace the Turing test by something I will call the “employment test.” To pass the employment test:

AI programs must… [have] at least the potential [to completely automate] economically important jobs. To develop this operational definition more completely, one could provide a canonical list of “economically important jobs,” produce a special vocational exam for each job (e.g. both the written and driving exams required for a U.S. commercial driver’s license), and measure machines’ performance on those vocational exams.

This is a bit “unfair” because I doubt that any single human could pass such vocational exams for any long list of economically important jobs. On the other hand, it’s quite possible that many unusually skilled humans would be able to pass all or nearly all such vocational exams if they spent an entire lifetime training each skill, and an AGI — having near-perfect memory, faster thinking speed, no need for sleep, etc. — would presumably be able to train itself in all required skills much more quickly, if it possessed the kind of general intelligence we’re trying to operationally define.
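
As a toy sketch of how such a benchmark might be scored, one could aggregate per-exam results into an overall pass rate. The jobs, scores and passing threshold below are entirely hypothetical.

```python
# Hypothetical scoring for an "employment test" style benchmark.
# The jobs, exam scores and passing threshold are invented for illustration.

exam_scores = {
    "commercial_driver": {"written": 0.92, "practical": 0.88},
    "bookkeeper": {"written": 0.97, "practical": 0.95},
    "paralegal": {"written": 0.81, "practical": 0.64},
}
PASS_MARK = 0.80

def job_passed(scores: dict) -> bool:
    """A job counts as automatable only if every component exam is passed."""
    return all(score >= PASS_MARK for score in scores.values())

passed = [job for job, scores in exam_scores.items() if job_passed(scores)]
print(f"Passed {len(passed)}/{len(exam_scores)} vocational exams: {passed}")
```

A real version of the test would hinge on how the canonical list of jobs and the pass criteria are chosen, which is exactly the kind of consensus question raised below.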

Overall, these tests show how we as a community think about what makes our own functioning unique. However, we still need to reach a consensus on what defines AGI and on the boundaries within which we can experiment with it.


As part of a school project, Rohan, Suvana and I created PRECaRiOUS, a blog aimed at raising awareness and ultimately answering the question:

How will the development of Artificial General Intelligence (AGI) be an infringement of human rights?

A lot of that content is still relevant here, which is why I have adapted those posts into a mini-series on this blog.


AGI: Utopian Fantasy or Terminator Reality?

As of today, humans are Earth's smartest species, but experts predict that in a few years this tide will change. Humans will pave the way for intelligent AI and AGI, which will soon put our top position at risk. Not only could these machines take our place on the leaderboard, they could replace jobs too, as they can do them quicker, better and cheaper.

WHAT IS IT?

Artificial General Intelligence, or AGI, refers to a machine that can perform any intellectual task that the average human can. The field has been covered deeply in science fiction, literature and film, but the world is yet to see any real products emerge from the vast research being done in it.

What AGI aims for is equal or better ability across the huge range of tasks that humans can do. As of today no true AGI exists (only narrow applications of AI), and there is no realistic estimate of when AGI will come along.

WHY IS IT IMPORTANT?

Media hype and ominous warnings from prominent figures like Elon Musk, Stephen Hawking and Sam Harris have caused a frenzy among the public, and the general impression is that AGI and AI will take over jobs. While AGI could help businesses become more productive and more profitable, it could also gain control of sectors that government agencies do not want to give up, including weaponry and defense, with unpredictable consequences.

The main problem with AGI is that its consequences, good and bad, are unknown; because news agencies mainly highlight the cons, the field faces a poor public image.

As mentioned before, a great worry regarding AGI is how it could be weaponized. Sure, it would reduce casualties because fewer human soldiers would be on the field, but the ethics of the issue are deeply troubling. In an open letter published on the Future of Life Institute's website, the authors, a group of AI researchers, raised very valid points. These included how it would create a global AI arms race, which could result not only in mass-produced weapons (as the materials needed will be readily available) but also in selective killing. If these weapons fall into the wrong hands they could cause many moral dilemmas, not to mention that such use would breach nine different articles of the UDHR. In the words of the letter:

“Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

Open letter published on the Future of Life Institute's website

The worry with AGI is that it may either lack empathy and so develop aggressive attitudes toward humans, or take the ethical rules it is given too literally and interpret them in ways that were never intended.

However, AGI could also solve many of the world's problems, like poverty and world hunger. It could even extend human lifespans by allowing people to merge with the technology, a prospect known in the more aspirational corners of the academic community as the Singularity.

In summation, many aspects of AGI are considered bad only because many people fear the unknown. What does seem clear is that AGI interfering in the defense sector is a definite no-no, but nothing definite is known about how it will play out in other fields.


As part of a school project, Rohan, Suvana and I created PRECaRiOUS, a blog aimed at raising awareness and ultimately answering the question:

How will the development of Artificial General Intelligence (AGI) be an infringement of human rights?

A lot of that content is still relevant here, which is why I have adapted those posts into a mini-series on this blog.


VR, AR & MR: The Battle of Acronyms

The introduction of these new technologies has brought a confusing set of corresponding acronyms, but fret not, for they shall be explained.

What are they?

VR, or Virtual Reality, is a technology that replaces your real view with a synthesised, digital one. This simulated environment tries to recreate as many senses as possible. As of now, sight and sound have been easy and relatively cheap to simulate, but recreating the other senses has been a costlier and less fruitful task. Currently the gaming industry is the key consumer of virtual reality goods, but VR is expanding to industries that require data visualisation (architecture and weather) and to those where mistakes are nearly unpardonable (medicine and the military).

Because of its deceiving yet intangible nature, people in VR lose track of their real surroundings and of the changes occurring in them. This can pose safety and health problems, though VR gadgets now ship with warnings to help prevent such accidents.

AR, or Augmented Reality, is also a technology that manipulates our reality, except that it merges a digital view with our real one, i.e. it is a mix of both, standing in the middle with a foot in each world. Here the superimposition of images onto the real world is harder to achieve, as the system has to calibrate the orientation and position of both the viewer and the camera while also running its background processes. Because of this it is still in its development phase, though some companies have taken steps in this direction.

MR, or Mixed Reality, is a cross-breed with elements of both the real and the virtual world. This seems awfully close to augmented reality, so one might ask: what differentiates the two? At present companies rarely draw a clear distinction between them, so the terms are used almost synonymously, with augmented reality being the favoured one.

In truth the two terms are simply sub-sections of what I call Morphed Reality. One is bound to the real world (mixed reality) while the other simply isn't (augmented reality). Because of this, the synthetic elements of mixed reality react and interact in real time with real elements, whereas in augmented reality the overlaid elements interact little and act simply as a layer over reality.

What does the future hold?

As mentioned before, VR is currently used primarily in the gaming sector, but as it improves (and its cost falls) its applications could multiply. Some of them are:

  • Education: field trips to once costly and faraway lands are now available at a fraction of the price, and at the click of a button. This is slowly being implemented by museums. The possibility of creating a virtual environment also allows not just students but professionals to develop their skills without facing the consequences of failure as they would in the real world.

AR too has applications in education. When students wear AR lenses, items around the school can be made AR-triggerable, so the more curious students can walk around using the lenses to better understand how everyday objects work.

  • Robotics: machinery can be better tele-operated (controlled by a human from a distance) using VR, AR and AI (Artificial Intelligence) systems. The combination of telepresence (the sense of being there) and telerobotics (interacting with and moving the machinery) makes the possibilities nearly endless, ranging from:
    • Teleconferencing
    • Long-distance surgery
    • Interplanetary roving
  • Further gaming: arcades and VR theme parks have begun opening up and the trend is catching fire. Companies have also begun their foray into AR gaming, with Pokémon Go capturing the world's attention.
  • Tourism: VR could revolutionise the tourism sector. If tourist hotspots agree to digitise their areas (allowing them to be viewed seamlessly on a VR screen), they could realise significant savings, such as:
    • Cutbacks on air travel and the associated pollution
    • Longer sustenance and easier maintenance of heritage sites
    • No accidents, as there are no actual people on site (and thus no overcrowding either)
    • Time savings, as long hours on planes and in airports are cut

AR too has applications in tourism, with QR codes and other apps providing an additional virtual layer over tourist spots.

  • Treating mental & physical problems: as mentioned before, VR could be used to guide machines through physical procedures, and AR tools could be used in such procedures as well. Neurosurgeons, for example, could use AR to see a visual of the brain as part of their normal view, saving time and increasing the precision of the procedure.

For mental health, VR is still used only on a small scale, but its application could blossom into treating PTSD and other stress-related disorders. Exposure therapy recreates a past stress-inducing incident in order to target the source of anxiety without putting the patient in danger. Already a generally successful technique, it could become even more effective in a controlled environment like VR.

Another method in the field of medical VR is empathy VR, which essentially puts you in the shoes of a person with a particular mental or physical condition. This is done to raise awareness and to help others understand the extent of some conditions.

  • Journalism: while the Wall Street Journal has already started a small VR division for journalism and reporting, the format could become a hit as news becomes as tangible as it can get. Stories will become personal, and real problems will gain more heartfelt traction than ever before.
  • Communication: while some tout VR as the downfall of face-to-face communication, and of communication in general, others question this claim because VR preserves the boons of communication (physical collaboration and emotive expression) while adding the convenience of not having to be in the same physical space, instead joining from the comfort of one's own home. Thus many claim VR is the next logical step in the evolution of communication.
  • Archaeology & architecture: using AR, architects can model site configurations, which makes the blueprint-creation process easier. AR lenses can also give a general overview of what lies underground, simplifying an archaeologist's digging.
  • Everyday-use: AR could incorporate everyday apps into its lenses or other devices. This includes:
    • Music: A sound sensor and reader could help recognize background music thus acting like Shazam.
    • Translation: With AR tools signs in foreign languages can instantly be translated into preferred languages.
    • Video-conferencing: AR tools can help simulate face-to-face communication better than most video-calling apps.
    • Messaging: With AR, messages can be given the priority they deserve, as the more important ones can be filtered and shown directly to the user.
  • Military: other than VR simulations to sharpen skills, AR could be used as an added layer in a weapon's scope to improve shot accuracy.
  • Repair and maintenance: with AR, do-it-yourself repair and maintenance could hardly be easier. With the camera pointed at the problem, AR tools can identify areas to fix or improve, fetch the appropriate prompts that experts have prepared, and then walk the person through a fool-proof, step-by-step solution.
  • Transportation: while VR offers only a fictional take on transportation, AR could genuinely develop the sector. Besides helping with navigation through its real-time views, AR tooling could also plan an optimised route (across various modes of transport) that the person may not know, while providing an improved arrival estimate and directions all the way through; a toy sketch of such route planning follows this list.
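
As a toy sketch of the kind of route optimisation described above, here is a tiny multimodal planner built on Dijkstra's algorithm. The stops, modes and travel times are invented for illustration and are not tied to any real mapping service.

```python
# Toy multimodal route planner using Dijkstra's algorithm.
# Stops, modes and travel times (in minutes) are invented for illustration.

import heapq

# Edges: (from, to, minutes, mode)
edges = [
    ("home", "metro_station", 8, "walk"),
    ("metro_station", "city_centre", 14, "metro"),
    ("home", "city_centre", 35, "bus"),
    ("city_centre", "museum", 6, "walk"),
]

graph = {}
for origin, destination, minutes, mode in edges:
    graph.setdefault(origin, []).append((destination, minutes, mode))

def fastest_route(start, goal):
    """Return (total_minutes, [(mode, stop), ...]) for the quickest path."""
    queue = [(0, start, [])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node in seen:
            continue
        seen.add(node)
        if node == goal:
            return cost, path
        for nxt, minutes, mode in graph.get(node, []):
            heapq.heappush(queue, (cost + minutes, nxt, path + [(mode, nxt)]))
    return None

print(fastest_route("home", "museum"))
# (28, [('walk', 'metro_station'), ('metro', 'city_centre'), ('walk', 'museum')])
```

The walk-plus-metro combination beats the direct bus under these made-up timings, which is exactly the kind of cross-modal comparison an AR navigation layer could surface and then narrate turn by turn.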

What drawbacks do they have?

Despite these promising prospects, VR still has a few hurdles to overcome before becoming a truly engrossing, mainstream technology.

  1. Health problems: VR in particular has caused many known health problems, such as visual abnormalities, seizures, loss of awareness, nausea, impaired hand-eye coordination and balance, fatigue, drowsiness and lightheadedness, and experts warn that the list goes on. Seeing this, many manufacturers are pivoting to AR alternatives in order to have a longer shelf life.
  2. Not enough apps: since VR is a young industry, there aren't many popular apps for it yet. In addition, the lack of a monetisation plan gives producers less incentive to even enter the market. Current players are also quickly losing interest, so coming up with a business plan for how the sector should make money is of the utmost priority. It doesn't matter if the solution is short-term; it just has to gauge the interest of producers, and the business model can later evolve into a long-term solution.
  3. Price: since these devices are all still in their development phase, they tend to be very costly. High-end models like the Vive and Oculus Rift range from $600 to $800.

The bottom line

VR, AR & MR all show great potential, not just business-wise but also in terms of how we evolve. If their drawbacks are sorted out, the field could truly work wonders while creating a domino effect of advancement across other major industries. As they develop and enter mainstream use, the potential of the three technologies is limited only by our imagination.