Leading corporations and investors have poured considerable funds into developing AI tools meant for consumer products and business solutions. Musk is presently in talks with Jimmy Ba, an AI researcher at the University of Toronto, about launching a firm called X.AI, three people familiar with the matter said in a report from The New York Times released on Thursday. The entrepreneur has also hired senior researchers from DeepMind, an AI research laboratory owned by Google, to work at Twitter, the social media behemoth he acquired last year.
The move comes after Musk canceled a deal between Twitter and OpenAI, the startup behind the language-processing tool ChatGPT, under which OpenAI paid Twitter $2 million each year to license data used to build the breakthrough mass-market AI. Musk reportedly believed the company was not paying Twitter enough for the data.
Musk is also a co-founder of OpenAI, though he resigned his seat on the company’s board of directors five years ago. He has renewed his concerns about the rapid development of AI in recent weeks as ChatGPT and other tools gained footholds in the marketplace, signing an open letter with hundreds of other technology leaders that called for a six-month moratorium on developing AI systems more powerful than GPT-4 while the world considers the effects of the technology.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter asserted. “This confidence must be well justified and increase with the magnitude of a system’s potential effects.”
Other news outlets have likewise confirmed that Musk intends to launch alternative AI initiatives: he recruited Igor Babuschkin, a researcher who formerly worked at DeepMind and OpenAI, to develop an alternative to ChatGPT, which Musk characterizes as “woke” because of the system’s tendency to offer left-leaning responses, according to a report from The Information.
Musk also met with members of Congress on Wednesday to discuss the possibility of regulating AI at the federal level. Senate Majority Leader Chuck Schumer (D-NY), who took part in the discussion with Musk, recently unveiled a broad regulatory framework that would attempt to “increase transparency, responsibility, and accountability” for AI systems while “reducing the potential for misuse” or promoting “misinformation and bias.”
“That which affects safety of the public has, over time, become regulated to ensure that companies do not cut corners,” the world’s current second-richest man commented after the meeting. “AI has great power to do good and evil. Better the former.”
Despite the uncertainty inherent in the nascent technology, including the possibility of widespread unemployment in white-collar professions, studies have indicated that AI systems can drastically improve worker productivity. One recent analysis of customer support employees showed that generative AI helped workers respond to 14% more chats than colleagues who did not have access to the system. Amazon, which released several mass-market AI solutions earlier this month, likewise found that coders using its AI programming tool CodeWhisperer, which generates real-time code suggestions, completed tasks 57% faster and were 27% more likely to succeed than those who did not use it.