Google and Alphabet CEO Sundar Pichai said Sunday that lawmakers should regulate the use of artificial intelligence, suggesting the advanced technology could impact the economy and potentially create a dangerous environment worldwide.
“We need to adapt as a society for it,” Pichai told CBS during an interview on “60 Minutes.”
As nations race for AI dominance, Pichai said regulations should “align with human values, including morality,” arguing that companies should not decide on such laws.
“It’s not for a company to decide,” Pichai said. “This is why I think the development of this needs to include not just engineers but social scientists, ethicists, philosophers, and so on.”
Critics of the technology have expressed concern about how humanity and AI will co-exist, fearing that robots could soon replace more occupations and that the result could be either an enhanced civilization or one dominated by machines unaware of their own behavior. The technology has already been used to create deepfake videos, which Pichai warned could be used to spread disinformation and “cause a lot of harm.”
Google has published recommendations on AI regulation, which state that while self-regulation is vital, it is not enough on its own: “balanced, fact-based guidance from governments, academia and civil society is also needed to establish boundaries, including in the form of regulation.”
Google recently began developing several AI products, including Bard, a chat AI system currently undergoing public testing. However, Pichai said that other advanced products produced by the tech giant had been put on hold until society grows accustomed to AI systems already introduced.
Italian officials temporarily banned the AI chatbot ChatGPT earlier this month over concerns about data privacy.
The Guarantor for the Protection of Personal Data (GPDP), which oversees data privacy in Italy, banned the U.S.-based chatbot and its parent company, AI developer OpenAI, from processing Italian users’ data. The agency said OpenAI had no legal basis for collecting Italian users’ data to train the model and no age verification system to protect children from inappropriate answers.
Pichai also said the fast development of such technologies would impact every company and product and disrupt knowledge-based workers, including writers, architects, accountants, and software engineers.
“For example, you could be a radiologist; if you think about five to 10 years from now, you’re going to have an AI collaborator with you,” Pichai said. “You come in the morning, let’s say you have a hundred things to go through, it may say, ‘These are the most serious cases you need to look at first.'”
Pichai said society is unprepared for AI technology like Bard, citing a mismatch between the pace of technological evolution and the ability of societal institutions to adapt. Still, he said he is optimistic because, compared with past technologies, people have begun worrying about AI’s implications much earlier.
Former Google CEO Eric Schmidt said earlier this month that AI could hurt American politics, noting that lawmakers should rein in the technology.
Schmidt said AI has significant potential to benefit society but must first overcome its present challenges, warning that authorities need to begin regulating the technology now.
Schmidt’s remarks come after leaders in the tech industry, including Tesla and Twitter CEO Elon Musk and Apple co-founder Steve Wozniak, called for pausing the development of new AI models past the current generation.
An open letter from the Future of Life Institute noted that recent AI developments could significantly impact information channels and employment prospects across many industries, and could accelerate the timeframe in which AI can outsmart humans. The document called for a six-month moratorium on developing AI systems more powerful than GPT-4, the model OpenAI released earlier this month to power ChatGPT, while the world considers the technology’s possible ramifications.
John Rigolizzo contributed to this report.