Saturday, September 23, 2023

Artificial intelligence, experts sound the alarm and call for rules: “Reduce the risk of extinction”


James Novak
I am James Novak, a passionate and experienced news writer with the ultimate goal of delivering the most accurate and timely information to my readers. I work in the news department at a website dedicated to providing reliable and up-to-date information about technology. My articles are widely circulated, often featured on major publications, and have been read by millions of people around the world. With over four years of writing experience in various fields such as tech startups, industry trends, cybersecurity, AI/ML advances, and more, I bring an informed perspective to all topics I write on. Beyond my published work online and in print media outlets, I'm also an avid speaker at local events where I share my insights on current issues related to technology.

“Reducing the risk of extinction” from artificial intelligence “should be a priority, along with other societal risks such as pandemics and nuclear war,” reads an open letter signed by more than 350 executives and released by the nonprofit Center for AI Safety. Signatories include Sam Altman, CEO of ChatGPT creator OpenAI; Demis Hassabis, CEO of Google DeepMind; Dario Amodei of Anthropic; and Geoffrey Hinton, a so-called godfather of AI.

Artificial intelligence could lead to the extinction of humanity and should be treated as a threat and a societal risk, like pandemics and nuclear war. This is the alarm raised by industry experts in an open letter signed by more than 350 executives and distributed by the nonprofit Center for AI Safety.

The AI experts’ appeal

“Reducing the risk of extinction” from artificial intelligence “should be a priority, along with other societal risks such as pandemics and nuclear war,” the letter reads. Among the signatories are Sam Altman, CEO of ChatGPT maker OpenAI; Demis Hassabis, CEO of Google DeepMind; and Dario Amodei of Anthropic. Geoffrey Hinton, a so-called godfather of AI, also supported the appeal.

“We must set boundaries and limits”

“The extensive use of artificial intelligence is on the one hand leading to a real revolution and on the other hand causing serious problems,” said Luca Simoncini, an information technology expert, former professor of information engineering at the University of Pisa, former director of the Institute of Information Technologies of the National Research Council, and one of the signatories of the statement. He explained that the Center for AI Safety’s warning was prompted by the urgent need for regulation in a sector as pervasive as artificial intelligence. “Artificial intelligence is so ubiquitous that it has a strong impact on many areas of social life (think of the risk of producing fake news, or of the control of self-driving cars), as well as on economic, financial, political, educational and ethical aspects. Of course, no one can object if an emerging technology is used for beneficial purposes, for example in the biomedical or pharmacological field,” he said. Simoncini compared the letter to the 1955 manifesto in which Bertrand Russell and Albert Einstein denounced the risks of nuclear weapons. Artificial intelligence is different, experts say, but the point is that clear rules and awareness are needed. “These systems are fallible,” Simoncini noted, and the big companies operating in the sector “base their activities only on technological dominance; they have not considered the problem of regulation”. In the case of chatbots such as ChatGPT, he added, “their use should be understood as that of a tool, not as the replacement of human capabilities with an artificial intelligence system”. We now need to think “about the need to set boundaries and limits,” Simoncini concluded.

“May cause unexpected side effects”

Also among the signatories of the letter is physicist Roberto Battiston of the University of Trento. Rules are needed, he explained, to manage powerful algorithms like those of artificial intelligence and to prevent unexpected effects. “This type of generative artificial intelligence algorithm,” said Battiston, “has proven very powerful at communicating with people using web data and natural language, so powerful that it can generate unexpected side effects. No one today really knows what these effects, positive or negative, could be: it will take time and experimentation to create rules and regulations that allow us to manage the effectiveness of this technology while protecting ourselves against its dangers. It is not a matter of the threat of a superintelligence that could overwhelm humanity, but of the consequences of how people become accustomed to using these algorithms in their work and in the daily life of society.” Consider, for example, he added, “possible interference in electoral processes, the spread of fake news, the creation of news channels serving precise disinformation interests”. For this reason, he noted, “we must be prepared to deal with these situations.”

“The likelihood of risks from other causes, such as climate change, is greater”

Luca Trevisan, professor of computer science at Bocconi University who has long worked on artificial intelligence, is not among the signatories of the open letter. Even more than human extinction, he explained, artificial intelligence could bring less catastrophic but more concrete near-term risks that we need to recognize and manage. Commenting on the warning, he noted that “it arose from the confluence of two ideas that have been circulating in academic and philosophical circles in recent years: the first holds that we should be more concerned about risks that could lead to the extinction of humanity, and the second that the advancement of artificial intelligence could be one of them. It is a risk that, however unlikely, deserves more attention.” “In principle,” he added, “it is right to broaden the horizon of our concerns to include risks over a long time span, but purely statistically, the likelihood of risks from other causes, such as climate change, is greater.” He also warned: “Emphasizing distant and less likely risks risks diverting attention from more concrete ones,” such as the easy production of fake news and changes in the world of work that could lead to social instability. “In the short term we could see a very strong social and economic impact from these technologies, and this is something that needs to be managed,” said the expert; “we need to specify our objectives and consider that, in pursuing them, we could create unforeseen consequences.” We therefore need “rules”, and it is “necessary to steer the change”. Certainly, “any technological change can generate wealth, but will it be distributed throughout society, or will it go only to big companies? And in the case of negative impacts, who pays? These are not technological problems but political problems, and it would be good if politicians tackled them. If I had to make an appeal, I would do it in this direction,” Trevisan concluded.

Source: Sky TG24
