How Artificial Superintelligence Could Shape the Future

Image courtesy of Gerd Altmann from Pixabay

The role of artificial intelligence techniques and tools in business and the global economy is a hot topic. Machine intelligence enhances the productivity of businesses and the quality of life in their communities. Hardly a day goes by without news articles reporting some remarkable development in artificial intelligence.

Artificial intelligence may come to outperform human intelligence in many intellectual tasks, such as scientific creativity, general wisdom, and social skills.

If machine brains one day come to surpass human brains in general intelligence, what comes next? With the remarkable progress made in the field of AI over the last decade, it is more important than ever to ensure that the technology we are developing has a beneficial impact on humanity. In this regard, Artificial Superintelligence [ASI] marks the beginning of a new era. “ASI’s aim is to capture enough of the computationally functional properties of the human brain to enable the resultant emulation to perform intellectual work” (Nick Bostrom).

Bostrom claims that superintelligent machines might be so much more intelligent than humans that they would no longer be mere tools. They would have their own goals, and those goals might not be compatible with human goals, or even with the continued existence of humans.

Artificial Superintelligence [ASI] is a form of intelligence more powerful and sophisticated than human intelligence. ASI would surpass human intelligence and could produce abstractions that are impossible for humans to conceive.

ASI does not exist yet, but the AI research community predicts that, in this technological race, ASI could be created sometime in the future, and that it could cause a severe global disaster, possibly even resulting in loss of human life. We must keep in mind that ASI may carry existential risk. To take this seriously, it is important to analyze ASI’s risk factors throughout AI and ASI research and development.

Many people regard ASI as a distant goal, but with the rapid pace of development in AI, it now seems much closer. Optimists predict that AGI will arrive by 2030; once AGI is reached, the jump to ASI could take anywhere from hours to years. Other AI experts foresee AGI by 2040 and superintelligence by 2060.

We should also consider and discuss the positive role Artificial Superintelligence could play across economic sectors, which is one of humanity’s greatest future challenges. Will super-human machines be good or bad for humanity? While no one can predict what superintelligence will look like, we can take measures today to increase the likelihood that the intelligent systems we build are effective and ethical, and that they elevate human goals and values. This might be the most important event in human history. AI safety must be taken seriously, as AI has the potential to become more intelligent than humans. We must apply our smartest thinking to steering AI and ASI in a positive direction.

Cubent Ltd is organising the first-ever conference on Artificial Superintelligence, “AI & ASI Expo London 2020: Plan Artificial Intelligence & Artificial Superintelligence Roadmaps for Economic Sectors’ Success”, on 23 September 2020 at the London Marriott Hotel Regents Park. The conference devotes roughly 60% of its programme to the latest and upcoming AI techniques and technology, and 40% to Artificial Superintelligence functionalities and consequences. It will also be a good networking opportunity, where the public can interact with leading AI and ASI experts.