Elon Musk and Big Tech Leaders Warn About the Dangers of Artificial Intelligence


The recent surge in AI technology is worrying not only ordinary citizens but also tech leaders, who have called on artificial intelligence (AI) labs to pause the training of their most powerful AI systems for at least six months, warning that the continued, ungoverned race could pose profound risks to society and humanity.

Elon Musk (CEO of Tesla) and Steve Wozniak (co-founder of Apple) are among the dozens of tech leaders, researchers, and professors who signed the open letter, published by the Future of Life Institute, asking AI labs to pause.

The letter comes barely two weeks after OpenAI announced GPT-4, a more powerful version of the technology behind the AI chatbot ChatGPT.

In early company tests, GPT-4 was shown performing a range of tasks, including drafting lawsuits, building a website from a hand-drawn sketch, and passing standardized exams.

The letter says the pause should apply to the training of AI systems more powerful than GPT-4.

The letter also proposes that AI labs and independent experts use the pause to jointly develop and implement shared safety protocols for advanced AI systems.

According to the letter, advanced AI could represent a profound change in life as we know it and should be planned for and managed with commensurate care and resources.

That level of planning and management is not happening, the letter argues. Instead, recent months have seen AI labs locked in a race to develop ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.

The letter adds that if such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

ChatGPT’s release last year set off a race among tech companies to build and deploy similar AI tools.

OpenAI, Google, and Microsoft are at the forefront of AI technology, but other companies are quickly catching up and creating similar technologies.

Numerous startup companies are now creating AI image generators and writing assistants.

Experts are increasingly worried that AI tools can produce biased responses, spread misinformation, and undermine consumer privacy.

Some of the big questions society has to answer, the letter suggests, are:

1. Should we let computers bombard us with propaganda, biases, and lies?
2. Should we let computers do our jobs, even the ones we enjoy and find fulfilling?
3. Should we risk letting computers become smarter than us, making us obsolete?
4. Should we let computers take control of our lives, to the point that we risk losing that control?
