Everything happens so much. I’m in Seoul for the international AI summit, the six-month follow-up to last year’s Bletchley Park AI safety summit (the full sequel will be held in Paris this autumn). As you read this, the first day of the event will have just ended, though, in keeping with the lower-key approach this time round, it was just a “virtual” leaders’ meeting.
The date for this summit was set surprisingly late, and four days away from home is a big ask for a journalist with two preschoolers. But one thing was clear: there would be plenty to cover. AI’s hot summer is here.
At the first AI safety summit, held at Bletchley Park in the UK last year, an international testing framework for AI models was agreed, after some in the field had called for a six-month pause in the development of powerful systems.
There was no pause. The Bletchley declaration, signed by the UK, US, EU, China and others, welcomed the “enormous global opportunities” presented by AI, but also warned of its potential to cause “catastrophic” harm. The summit also secured commitments from major tech companies, including OpenAI, Google and Mark Zuckerberg’s Meta, to work with governments on testing their models before release.
Instead, while the UK and US established national AI safety institutes, industry development continued apace. OpenAI released GPT-4o (the “o” stands for “omni”) free online; the next day, Google previewed a new AI assistant called Project Astra, as well as updates to its Gemini model. Last month, Meta released a new version of its own AI model, Llama. And in March, Anthropic, the AI startup founded by former OpenAI staffers who disagreed with Altman’s approach, updated its Claude model.
And then, in the weekend before the summit began, it all kicked off at OpenAI. Perhaps most strikingly, the company got into a spat with Scarlett Johansson over one of the voice options available in the new edition of ChatGPT. OpenAI had approached the actor to voice its new assistant, but she declined the offer, twice; the company launched GPT-4o regardless, with a voice called Sky demonstrating its new features. The resemblance to Johansson was immediately apparent, even before chief executive Sam Altman tweeted “her” (the Spike Jonze film in which Johansson voiced a superintelligent AI) after the presentation. Despite denying the similarity, OpenAI has since pulled the Sky voice option.
But more important in the long run: the two men leading the company/nonprofit/secret villainous organisation’s “superalignment” team, which was dedicated to ensuring that efforts to build superintelligence don’t end humanity, have quit. First to depart was Ilya Sutskever, co-founder of the organisation and leader of the boardroom coup that temporarily, and ineffectually, ousted Altman. His resignation raised eyebrows, but it was hardly unexpected: you come at the king, you best not miss. Then on Friday, Sutskever’s co-lead on superalignment, Jan Leike, also left the company, and he had more to say.
A former senior OpenAI employee said the company behind ChatGPT was prioritising “shiny products” over safety, revealing that he quit after a disagreement over key aims reached “breaking point”.
Leike detailed the reasons for his resignation in a thread posted on X on Friday, in which he said safety culture had become a lower priority. “Over the past years, safety culture and processes have taken a backseat to shiny products,” he wrote.
“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote, adding that it had become “harder and harder” for his team to conduct its research. “Building smarter-than-human machines is an inherently dangerous endeavour. OpenAI is shouldering an enormous responsibility on behalf of all of humanity,” Leike wrote, adding that OpenAI “must become a safety-first AGI [artificial general intelligence] company”.
Leike’s resignation offers a rare insight into dissent within the company, which has been portrayed as almost single-minded in pursuing its (and sometimes Sam Altman’s) goals. When the charismatic chief executive was fired, nearly all staff reportedly accepted an offer from Microsoft to follow him to a new AI lab under the house of Gates, which also happens to hold the largest outside stake in OpenAI’s for-profit subsidiary. Even when a number of staff left to found Anthropic, a rival AI company that distinguishes itself by being vocal about how much it values safety, the badmouthing was kept to a minimum.
It turns out (surprise!) that this isn’t because everyone loves each other and has nothing bad to say. From Vox’s Kelsey Piper:
I have seen the extremely restrictive off-boarding agreement, containing nondisclosure and non-disparagement provisions, that former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.
If a departing employee declines to sign the document, or violates it, they can lose all the vested equity they earned during their time at the company, likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI “due to losing confidence that it would behave responsibly around the time of AGI”, has confirmed publicly that he had to surrender what would likely have been a huge sum of money in order to quit without signing the document.
Just a day later, Altman said the clawback provision “should never have been something we had in any documents”. He added: “we have never clawed back anyone’s vested equity, nor will we do that if people do not sign a separation agreement. this is on me and one of the few times i’ve been genuinely embarrassed running openai; i did not know this was happening and i should have.” (Capitalisation model’s own.)
Altman did not address the broader allegations about the strict and wide-ranging NDAs. And while he promised to fix the clawback clause, he said nothing about the other incentives, carrots and sticks, offered to employees to sign their termination paperwork.
It makes for perfect scene-setting for the summit. Altman has been a leading advocate of state and interstate regulation of AI, and now we can see why it might be needed. If OpenAI, one of the world’s largest and best-resourced AI labs, and one that claims safety is at the core of everything it does, can’t even keep its own team together, what hope is there for the rest of the industry?
Sloppy
It’s fun to watch a term of art develop before your eyes. The inbox had spam; the world of AI has slop.
“Slop” is the name for unwanted AI-generated content that is published and shared on the web for anyone to view.
Unlike a chatbot, slop isn’t interactive, and is rarely intended to actually answer readers’ questions or serve their needs.
But like spam, its overall effect is negative: the time and effort lost by users who have to wade through slop to find the content they are actually seeking far outweighs any benefit to the slop’s creator.
I’m keen to help popularise the term, for much the same reason as the developer Simon Willison, who alerted me to its emergence: having an easy way to talk about AI done badly helps preserve the ability to recognise what AI can do well.
The existence of spam implies emails you do want to receive; the existence of slop implies AI content you do want to see. For me, that’s content I generate myself, or at least expect to be AI-generated. No one cares about the dream you had last night, and no one cares about the response you got from ChatGPT. Keep it to yourself.