What kind of AGI-based products do you see coming into the healthcare IT industry?
Adopting AGI systems in healthcare will take longer than in other domains and industries because of the complexity and sensitivity involved. AGI systems in healthcare would predominantly be used to aid decision-making rather than to make decisions automatically. I see three types of systems or products that might emerge in healthcare that would leverage machine learning concepts. They are:
1. Prescriptive Analytics with Full Automation
All services in healthcare that do not directly involve the patient or patient care, such as scheduling and billing, can be fully automated using AI systems.
2. Predictive Analytics with Semi-Automation
Tools or apps that help healthcare providers make the right calls when making decisions.
3. Descriptive Analytics
All reports, dashboards and KPIs [Key Performance Indicators] that help users make certain decisions based on their own interpretation.
What is your estimate on the timeline as to when we can see AGI-based products? Or is it too soon to tell?
A few products already exist in the market, but wide adoption will take longer in healthcare. One of the big challenges is that healthcare AGI products are expected to be 100% accurate in prediction or prescription, while in other industries even 95% accuracy is acceptable. Achieving 100% accuracy in healthcare is very challenging, and it depends on data availability and on strong business algorithms and rules that are kept up to date.
Small-scale usage of AGI products (in bits and pieces) in healthcare might start in the next 3 years (with some momentum visible from 2020). However, it might take at least 8 to 10 years (beyond 2025) to see noticeable usage (10 to 20% adoption). Wide adoption (up to 50%) is at least 15 to 20 years away (2030 to 2035).
What kind of jobs in the healthcare IT industry do you think will be replaced? Will some new jobs be created, or will a few existing jobs get modified?
I don’t foresee drastic changes to healthcare IT jobs in the next 15 years due to AI, but the need for data scientists will increase. Remember that the success of AI products depends on the availability of large volumes of the right data from transactional systems, so maintaining transactional systems with the right functionality and quality will remain necessary. AI might have minimal impact on domain-side / end-user jobs (not on core roles like doctors or nurses, who are the primary care providers, but on supporting jobs). AI is meant to enhance efficiency and accuracy rather than eliminate the need for people.
A lot of people fear that AGI and AI will eliminate jobs. What is your opinion on this?
AI would enhance efficiency, effectiveness and experience rather than eliminate roles. It would improve accuracy in providing services and speed up delivery, so it would improve end-user satisfaction and turnaround time rather than eliminate any role. Some organizations may look at cutting certain jobs to consume these benefits while maintaining old SLAs [Service Level Agreements] and improving their margins, without recognizing the value of bettering SLAs, which would in turn increase margins.
What is your view on Universal Basic Income (UBI) as a solution for job replacement due to AI?
As I mentioned above, AI might not affect the need for existing jobs, and income for those jobs would continue as per market trends. The only positive trend that might happen is that data scientists’ income could double within a 5 to 10 year timeframe.
A worry repeated by many journalistic outlets is that the development of AGI is against our human rights. Our team believes that this is an infringement of Articles 23, 24, 27.2 and 29 of the UDHR (Universal Declaration of Human Rights). What is your opinion on this issue?
This is a complete misunderstanding and far from reality. It is due to the overselling of AI by many firms and an overreaction from a population with only partial knowledge. People should understand that artificial intelligence is not real intelligence. AI is meant to enhance human capability to perform better, not to replace humans. There could be minor scenarios where it replaces some minimal human intervention, but that is part of the natural progression of industrialization, which is bound to happen.
A solution suggested by some to avert the AGI crisis is to make all AGI and AI development transparent, with all experiments published and free to see and comment upon. Even OpenAI is advocating for this change by posting all their developments on their blog. Though this may help, we think it will affect Article 27.2, which states that everyone has the right to the protection of the moral and material interests resulting from any scientific, literary or artistic production of which they are the author.
○ Is this truly a fear for the future?
○ If it is a fear, what can be done to prevent this plagiarism?
○ Or do you think it is inevitable?
If it is made available for public consumption, then it has to be transparent. Anything that is going to impact human health, and that might directly or indirectly cause death, or cause an issue that could eventually lead to death or disability, should go through a review process and be transparent. If it is used under the strong supervision of a practitioner, then I don’t see the need for it to be transparent to the public, but it should at least be open to other monitoring bodies.
Another solution to prevent the misuse of AGI is to introduce an approval-based system where all AGI projects are either approved or disapproved based on their potential for harm. Though some claim that this is a viable solution, others say that it will drastically increase the time taken for real AGI development. Is this trade-off a feasible step we must take to ensure the protection of humans, their jobs and their rights?
I don’t see a need for new rules or regulations; existing ones should suffice. Any system or product that needs to be made available for public consumption without monitoring should go through an approval process. For example, a few medicines can be made available without prescription, but they must be approved for sale in the market. The same applies to AI products or apps. While AI is being used under the supervision of a practitioner, it might not require approval.
How do you think bias in AGI can be tackled?
○ For example if a programmer is pro-armament and they inject that opinion into the AGI, will the AGI tend to show pro-armament ideologies?
○ Is this true, or a misconception?
○ Should bias be an issue handled case-to-case or addressed in a charter of some sort?
I am not sure whether a programmer can intentionally bias the model (or bias it through their own beliefs), but a programmer can definitely bias the system while sampling the data. Investigations into handling bias holistically are still ongoing, so currently it has to be handled on a case-by-case basis. A thorough review process and exhaustive testing would help in handling bias.
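The point about biasing a system through data sampling can be shown with a toy example. This is a minimal sketch in Python, assuming an entirely made-up patient population; the group labels, readmission rates and sample sizes are hypothetical, not from any real dataset:

```python
import random

random.seed(0)

# Hypothetical population: two patient groups with different
# readmission rates (all labels and rates are made up).
population = (
    [{"group": "A", "readmitted": random.random() < 0.10} for _ in range(5000)]
    + [{"group": "B", "readmitted": random.random() < 0.40} for _ in range(5000)]
)

def readmission_rate(sample):
    """Fraction of patients in the sample who were readmitted."""
    return sum(p["readmitted"] for p in sample) / len(sample)

# Unbiased sample: drawn uniformly from the whole population.
fair_sample = random.sample(population, 1000)

# Biased sample: the sampler (perhaps unknowingly) over-represents
# group A, which has the lower readmission rate.
group_a = [p for p in population if p["group"] == "A"]
group_b = [p for p in population if p["group"] == "B"]
skewed_sample = random.sample(group_a, 900) + random.sample(group_b, 100)

# The skewed sample understates the true rate, so any model or rule
# trained on it inherits that bias.
print("true rate:  ", round(readmission_rate(population), 2))
print("fair sample:", round(readmission_rate(fair_sample), 2))
print("skewed:     ", round(readmission_rate(skewed_sample), 2))
```

Even without any malicious intent, the choice of which records enter the training sample determines what a downstream model learns, which is why review and testing should cover the sampling step as well as the model itself.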
What is your stand on the suggestion that all firms working on AI & AGI development must be nationalized i.e under the control of the government?
○ We believe that this may reduce a company’s freedom to choose what to work on, but also that it will assure safety. Is the company willing to give up ‘creative freedom’ in return for ‘assured safety’?
○ Additionally, do you believe that this is an effective solution or will it just be a hindrance to development?
There is a need for a nationalized body to review and approve anything that would be made publicly available for everyone’s consumption. If AI is being used under human supervision, with a person certified in that job involved in making the decisions, then I don’t see the need for an approval process, as the certified person is allowed to decide whether or not to use it.
*The company that this expert represents doesn’t want the expert’s name to be published; however, the expert is a senior executive at a top healthcare firm.*
As part of a school project, Rohan, Suvana and I created PRECaRiOUS, a blog aimed at raising awareness and ultimately answering the question:
How will the development of Artificial General Intelligence (AGI) be an infringement of human rights?
A lot of that content is still relevant to this blog, which is why I have adapted those posts into a mini-series here.