Opinion

Artificial Intelligence and Robotics: Great expectations and daunting existential risks

PROFESSOR ARTHUR G.O. MUTAMBARA

ARTIFICIAL Intelligence (AI) is both exciting and challenging. There are opportunities and dangers. There are fascinating possibilities and existential risks.

The starting point is to understand the basic definitions of key concepts such as Robotics, AI, Machine Learning, Deep Learning, Single-Task AI, Artificial General Intelligence (AGI), Strong AI, and Generative AI (e.g. ChatGPT and Google Bard).

Artificial Intelligence refers to intelligence programmed into and demonstrated by machines. This capability is in contrast to the natural intelligence manifested by humans and other animals.

Put more concretely, AI refers to the development of computer systems that can perform tasks that ordinarily require human intelligence, such as perception, reasoning, learning, decision-making, obstacle avoidance and path planning.

Robotics is the design, construction and operation of physical machines that autonomously carry out functions ordinarily ascribed to human beings, such as obstacle avoidance, path planning, language processing, perception, reasoning, learning, and decision-making.

Put differently, robotics is the intelligent connection of perception to action in a machine.

With the advent and growth of AI, most robots are driven by AI-based automation.
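As a rough illustration of this "perception to action" idea, consider the minimal Python sketch below. The sensor reading, the safety threshold and the motor commands are all hypothetical placeholders, not any particular robot's software interface; the point is only the loop structure of sense, decide, act.

# Minimal sense-decide-act loop (illustrative only; read_distance_sensor,
# drive_forward and turn_left are hypothetical placeholder functions).

SAFE_DISTANCE_M = 0.5  # stop-and-turn threshold in metres (arbitrary choice)

def read_distance_sensor():
    # Placeholder: in a real robot this would query a range sensor.
    return 1.0

def drive_forward():
    # Placeholder: command the drive motors to move ahead.
    pass

def turn_left():
    # Placeholder: command the drive motors to rotate in place.
    pass

def control_step():
    """One cycle of the perception-to-action loop."""
    distance = read_distance_sensor()   # perception
    if distance < SAFE_DISTANCE_M:      # reasoning / decision-making
        turn_left()                     # action: avoid the obstacle
    else:
        drive_forward()                 # action: continue on the path

for _ in range(100):   # run a fixed number of control cycles
    control_step()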

Single-task AI systems are designed to perform a specific task or set of functions rather than being able to perform a wide range of tasks. They are also called traditional AI systems.

Single-task AI is typically developed using machine learning algorithms that are trained on a specific data set to perform a particular function.

Examples of single-task AI systems include speech recognition software, image recognition software, obstacle avoidance and path planning modules, and fraud detection systems.

Traditional AI systems drive most robots, drones and driverless cars. These highly specialised systems are typically designed to perform their task at a very exacting level of accuracy.

Overall, single-task AI systems are highly effective for performing specific tasks.

However, they lack the flexibility and adaptability of more general-purpose AI systems.
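A minimal sketch of the single-task idea, assuming the scikit-learn library is available: a classifier is trained on one specific data set (images of handwritten digits) and performs that one job well, but nothing else.

# Single-task AI sketch: a digit-recognition classifier (assumes scikit-learn).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()                      # one specific data set: 8x8 digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

model = SVC(gamma=0.001)                    # a classical machine learning model
model.fit(X_train, y_train)                 # trained for this one task only

print("Digit-recognition accuracy:", model.score(X_test, y_test))
# The same model cannot translate text, plan a path or detect fraud:
# it is highly effective, but only at the single task it was trained for.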

General-purpose AI systems are designed to perform a wide range of tasks.

The ultimate and most advanced form of such capability is called Artificial General Intelligence (AGI).

This is the intelligence of a machine that can learn, understand and execute any intellectual task that a human being can.

The idea is to endow a device, machine or computer system with the ability to perform multiple complex tasks, as opposed to competence at a single specific task.

Unlike narrow AI systems designed to perform specific functions, such as image recognition, language translation, obstacle avoidance and path planning, AGI can reason, learn, and adapt to new situations while accomplishing multiple complex tasks in much the same way as a human being.

The concept of AGI is sometimes associated with the idea of Strong AI, which suggests that it is possible to create machines that are not only capable of mimicking human intelligence but are actually sentient, conscious and self-aware, that is, machines that manifest intentionality, consciousness and feelings.

A special subset of AI is machine learning – the study of algorithms and statistical models that computer systems use to perform tasks effectively without explicit instructions, relying on patterns and inference.

This category of AI involves training machines to learn from data rather than explicitly programming them to perform a specific function.

In machine learning, algorithms use statistical models and mathematical techniques to automatically improve their performance on a given task based on the data they are trained on.

The goal is to develop models that can generalise well to new, unseen data and make accurate predictions or decisions.
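The following toy sketch, using only NumPy and made-up data, illustrates the machine learning idea in miniature: a simple model repeatedly adjusts its parameters to reduce its error on training data, then makes a prediction for an input it has never seen. The data, learning rate and iteration count are arbitrary illustrative choices.

# Toy machine learning example: fit y = w*x + b by gradient descent (NumPy only).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)            # training inputs (synthetic data)
y = 3.0 * x + 2.0 + rng.normal(0, 1, 100)   # noisy targets from a hidden rule

w, b = 0.0, 0.0                             # model parameters, initially wrong
learning_rate = 0.01

for _ in range(5000):
    y_pred = w * x + b                      # model's current predictions
    error = y_pred - y
    # Gradients of the mean squared error with respect to w and b.
    w -= learning_rate * 2 * np.mean(error * x)
    b -= learning_rate * 2 * np.mean(error)

print(f"Learned w={w:.2f}, b={b:.2f} (the hidden rule used 3.0 and 2.0)")
print("Prediction for unseen input x=20:", w * 20 + b)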

Deep learning is a subset of machine learning and involves training neural networks with multiple layers to learn representations of data that can be used for classification, prediction, or other tasks.

In deep learning, the neural network architecture is designed to automatically learn hierarchical representations of the data, with each layer learning more abstract features than the previous layer.

This makes deep learning particularly effective for tasks involving complex data, such as images, speech, and natural language processing.
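As a small sketch of this layered structure, assuming the PyTorch library is available (the layer sizes are illustrative choices for 28x28 greyscale images, not a prescription), the network below stacks several layers, each transforming the previous layer's output into a more abstract representation.

# Deep learning sketch: a small multi-layer neural network (assumes PyTorch).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),            # turn a 28x28 image into a 784-element vector
    nn.Linear(784, 256),     # first layer: low-level features
    nn.ReLU(),
    nn.Linear(256, 64),      # second layer: more abstract features
    nn.ReLU(),
    nn.Linear(64, 10),       # output layer: scores for 10 classes
)

fake_image = torch.randn(1, 1, 28, 28)   # a stand-in for a real training image
scores = model(fake_image)               # forward pass through all the layers
print(scores.shape)                      # torch.Size([1, 10])

# In training, a loss function and an optimiser (e.g. torch.optim.SGD) would
# adjust the weights of every layer from data, as in any machine learning
# system, but here the representation itself is learned layer by layer.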

Generative AI is a special type of AI that involves using algorithms and models to create new content or generate novel ideas.

This can include the creation of images, music, text, or even entire virtual environments.

Unlike traditional AI systems that are designed to perform specific tasks or solve specific problems, generative AI is designed to create something novel that did not exist before – new content and knowledge.

Recent popular examples of generative AI are OpenAI’s ChatGPT and Google’s Bard.
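Systems such as ChatGPT and Bard are built on very large neural networks, but the underlying generative idea can be conveyed with a deliberately tiny, self-contained sketch: learn statistics from existing text, then sample new text. This character-level toy is an illustration of the principle only, not of how ChatGPT itself works, and the training sentence is invented for the example.

# Toy generative model: learn character-pair statistics, then sample new text.
# A drastically simplified stand-in for large generative models like ChatGPT.
import random
from collections import defaultdict

training_text = "the cat sat on the mat and the dog sat on the log "

# "Training": record which character tends to follow which.
transitions = defaultdict(list)
for current_char, next_char in zip(training_text, training_text[1:]):
    transitions[current_char].append(next_char)

# "Generation": start somewhere and repeatedly sample a plausible next character.
random.seed(0)
char = "t"
generated = char
for _ in range(40):
    char = random.choice(transitions[char])
    generated += char

print(generated)   # new text assembled from the learned patterns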

Why the excitement about AI now?

Why the concerns and fears about AI now?

What is going on?

While AI has been around for a while, the early stages of the field witnessed sluggish growth.

Now there is exponential growth of AI capabilities through various levels: basic intelligence, millipede level, mouse level, monkey level, average Joe/Jane level, Albert Einstein level, and beyond Albert Einstein (i.e., Super-Intelligence). Of course, AI will not stop at human-level intelligence! It is baseless human arrogance to think otherwise.

The spectacular and unprecedented exponential growth in AI capabilities has set the world ablaze with great expectations and acute worries from ordinary folks and experts.

Prof Geoffrey Hinton, a winner of the Turing Award (considered the Nobel Prize of computing) and widely regarded as the godfather of AI for his pioneering work on the backpropagation supervised learning algorithm, last week quit his job at Google, warning about the growing dangers posed by developments in the field.

Elon Musk, CEO of SpaceX, Tesla and Twitter, has also recently expressed concerns about AI, saying: “It has the potential of civilisation destruction.”

What are the issues?

Once one highly intelligent agent is developed, it can be replicated effortlessly across the world.

Hence a virtually unlimited number of these agents can be created, which then learn together simultaneously.

This can lead to a multiplicity (large numbers or swarms) of equally Super-Intelligent Agents.

All these developments are unprecedented, and hence the excitement and fear, in equal measure.

With this exponential growth and the broad nature of AI capabilities, it is clear that AI has applications in every industry, sector and sphere of society, ranging from automated manufacturing, autonomous mining, speech recognition and natural language processing to image and video analysis, mechatronics, robotics, and autonomous weapons.

AI is thus used in various industries, such as healthcare, finance, education, and manufacturing, to improve efficiency, reduce costs, and develop new products and services.

Consequently, AI is transforming society in many ways as an effective tool for solving human challenges.

The impact is ubiquitous.

This is the source of the excitement.

As a recent phenomenon, Generative AI has taken the world by storm.

For example, ChatGPT has over 100 million users worldwide, while its website attracts roughly one billion visits a month.

The 100 million-user milestone was reached in a record-breaking two months from ChatGPT's launch in late November 2022, with the billion-visit traffic levels following by early 2023.

These are fascinating statistics demonstrating the dramatic upsurge in the AI revolution.

Of course, the AI opportunities are immense, but it is prudent to appreciate, understand and appraise the broad range of threats and risks.

These dangers include privacy violation, bias, discrimination, copyright violation, lack of accountability, insecurity, job losses, autonomous weapons, cyber-attacks, and terrorism.

In particular, a critical danger occurs when AI capabilities get into the hands of bad actors such as thieves, terrorists and warmongers.

Furthermore, there is the existential risk – concerns that AGI could eventually become so powerful that it poses an existential threat to humanity by accident or design.

What happens when we have a million Super-Intelligent Agents whose capabilities surpass those of Albert Einstein, Isaac Newton, Elon Musk, Prof Edward Witten or Prof Andrew Wiles?

How can humans control society or the world with a million such Super-Intelligent Agents?

Food for thought!

Clearly, there is a need for careful and systematic mitigation of all these dangers and risks.

First and foremost, many of these AI threats and challenges are not inherent to AI itself.

They arise from how AI is developed, deployed, and regulated. There is a lot that can be done.

For example, we must have diverse teams of AI developers and regulators, including women, Africans, Black people, and people from different cultures and languages.

We must accept the destruction of some jobs and then prepare (acquire new capabilities and skills) for the new AI jobs and AI-modified careers.

AI must be developed ethically and responsibly, with appropriate safeguards to mitigate these risks.

However, the existential risk remains an open question!

Accept that.

As a way forward, we must concentrate on developing responsible AI for solving global problems and improving the quality of life of ALL people worldwide.

Africans must be key players (not just consumers) in the development of AI.

They must proactively use AI to solve African socio-economic problems.

AGI and Strong AI remain areas of active research – inconclusive works in progress.

Whether and when such systems will be fully developed are open questions.

Let us all keep an open mind.

More specifically, ethical considerations, religious beliefs or fear of the unknown should not be allowed to deter the pursuit of AGI and Strong AI.

The research and experimentation must continue.

Once technology has been invented, it cannot be uninvented.

Once the genie is out of the bottle, nobody can put it back.

We must brace ourselves for a brave AI-driven new world fraught with both opportunities and threats.

Indeed, Artificial Intelligence and Robotics present great expectations and daunting existential risks.

About the writer: Prof Arthur G.O. Mutambara is the director and a full professor of the Institute for the Future of Knowledge (IFK) at the University of Johannesburg.

(This article is an excerpt from the upcoming Electrical Engineering book: Design and Analysis of Control Systems: Driving the Fourth Industrial Revolution by Prof Mutambara).
