Co-founder of company behind ChatGPT to recruit talent in Israel for new AI venture
The co-founder of OpenAI, Ilya Sutskever, is looking for hi-tech talent in Israel for a new venture he is launching.
OpenAI, the company behind ChatGPT, is an AI research and deployment company that aims to “ensure that artificial general intelligence benefits all of humanity.”
Sutskever is about to embark on his latest venture, Safe Superintelligence Inc., together with Daniel Gross, who formerly worked on AI at Apple, and Daniel Levy, a former member of the technical staff at OpenAI. Their new venture aims to safely build “superintelligent” AI systems that are smarter than humans.
“Building safe superintelligence (SSI) is the most important technical problem of our time. We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence,” Sutskever, Gross and Levy wrote in a post on 𝕏.
“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead… Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures,” the three entrepreneurs added.
Safe Superintelligence Inc. will be based in the United States, with offices in Palo Alto, California, as well as in Tel Aviv, where they have “deep roots and the ability to recruit top technical talent,” the three wrote.
Sutskever was born in Russia, grew up in Jerusalem and moved to Canada with his family at age 16.
The executives plan to create a team of “the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.”
Last month, Sutskever left OpenAI, which he co-founded with CEO Sam Altman in 2015.
One year ago, Altman and Sutskever spoke at Tel Aviv University, where they cautioned that building machines that are smarter than humans could be dangerous.
“AI will be a very powerful technology used for amazing applications to cure diseases for example, but it could also be used to create a disease as time goes by and the capabilities will be increasing,” Sutskever said in June 2023. “It would be a big mistake to build superintelligence AI that we don’t know how to control.”
“We will need to have structures in place to control the use of the technology,” he warned.
At the time, Altman said that the Israeli tech industry would play a “huge role” in the artificial intelligence revolution in the coming years.
“There are two things I have observed that are particular about Israel: the first is talent density and the second is the relentlessness, drive, and ambition of Israeli entrepreneurs,” Altman said. “Those two things together are optimal to lead to incredible prosperity both in terms of AI research and AI applications.”
When asked why he continued to develop AI if it is potentially dangerous, Altman replied that the creation of digital superintelligence is both a moral imperative and an unstoppable reality.
“Why to build it? Number one, I do think that when we look back at the standard of living and what we tolerate for people today, it will look even worse than when we look back at how people lived 500 or 1,000 years ago,” Altman said.
“I think everyone in the future is going to have better lives than the best people of today…"
The All Israel News Staff is a team of journalists in Israel.