
This Week in AI: Can we (and could we ever) trust OpenAI? | TechCrunch

Keeping pace with an industry as fast-moving as AI is a tough job. So until an AI can do it for you, here's a handy summary of recent stories in the world of machine learning, along with notable research and experiments we haven't covered ourselves.

By the way, TechCrunch plans to launch an AI newsletter on June 5. In the meantime, we're upping the cadence of our semi-regular AI column, which used to run twice a month (or thereabouts), to weekly – so stay tuned for more editions.

This week in AI, OpenAI launched discounted plans for nonprofit and education customers and pulled back the curtain on its most recent efforts to stop bad actors from abusing its AI tools. There's not much to criticize there – at least not in this writer's opinion. But I will say that the timing of the spate of announcements seems intended to counter some of the bad press the company has received recently.

Let's start with Scarlett Johansson. OpenAI removed a voice used by its AI-powered chatbot ChatGPT after users pointed out that it sounded eerily similar to Johansson's. Johansson later released a statement saying that she had hired legal counsel to inquire about the voice and get exact details about how it was developed – and that she had refused repeated requests from OpenAI to license her voice for ChatGPT.

Now, an article in The Washington Post implies that OpenAI didn't actually set out to mimic Johansson's voice and that any resemblance was coincidental. But then why did OpenAI CEO Sam Altman reach out to Johansson and urge her to reconsider two days before a splashy demo that featured the soundalike voice? It's a bit suspect.

Then there are OpenAI's trust and safety issues.

As we reported earlier this month, OpenAI has dissolved its Superalignment team, which was responsible for developing ways to govern and steer "superintelligent" AI systems. The team was promised 20% of the company's compute resources – but it only ever (and rarely) received a fraction of that. That led (among other reasons) to the resignation of the team's two co-leads, Jan Leike and Ilya Sutskever, formerly OpenAI's chief scientist.

Nearly a dozen safety experts have left OpenAI in the past year; several, including Leike, have publicly voiced concerns that the company is prioritizing commercial projects over safety and transparency efforts. In response to the criticism, OpenAI formed a new committee to oversee safety and security decisions related to the company's projects and operations. But it staffed the committee with company insiders – including Altman – rather than outside observers. OpenAI is also reportedly considering ditching its nonprofit structure in favor of a traditional for-profit model.

Incidents like these make it hard to trust OpenAI, a company whose power and influence grow by the day (see: its deals with news publishers). Few corporations, if any, are worthy of trust. But OpenAI's market-disrupting technologies make the breaches of that trust all the more troubling.

It doesn’t help that Altman himself is no paragon of truth.

When news broke of OpenAI's aggressive tactics toward former employees – tactics that involved threatening employees with the loss of their vested equity, or blocking equity sales, if they didn't sign restrictive nondisclosure agreements – Altman apologized and claimed he had no knowledge of the policies. But, according to Vox, Altman's signature is on the incorporation documents that enacted the policies.

And if former OpenAI board member Helen Toner is to be believed – she is one of the former board members who attempted to remove Altman from his post late last year – Altman has withheld information, misrepresented things that were happening at OpenAI, and in some cases outright lied to the board. Toner says the board learned of ChatGPT's release through Twitter, not from Altman; that Altman gave wrong information about OpenAI's formal safety practices; and that Altman, displeased with an academic paper Toner co-authored that cast a critical light on OpenAI, tried to manipulate board members into pushing Toner off the board.

None of this is a good sign.

Here are some other notable AI stories from the past few days:

  • Voice cloning made easy: A new report from the Center for Countering Digital Hate finds that AI-powered voice cloning services make it incredibly easy to fake a politician's statement.
  • Google's AI Overviews struggle: AI Overviews, the AI-generated search results that Google began rolling out more broadly on Google Search earlier this month, need some work. The company admits this – but claims that it's iterating quickly. (We'll see.)
  • Paul Graham on Altman: In a series of posts on X, Paul Graham, co-founder of startup accelerator Y Combinator, brushed off claims that Altman was pressured to resign as president of Y Combinator in 2019 because of potential conflicts of interest. (Y Combinator has a small stake in OpenAI.)
  • xAI raises $6 billion: Elon Musk’s AI startup, xAI, has raised $6 billion in funding, as Musk seeks capital to compete aggressively with rivals including OpenAI, Microsoft and Alphabet.
  • Perplexity's new AI feature: With its new feature Perplexity Pages, AI startup Perplexity is aiming to help users make reports, articles or guides in a more visually appealing format, Ivan reports.
  • Favorite numbers of AI models: Devin has written about the numbers that different AI models choose when they are tasked with giving random answers. As it turns out, they have favorite numbers — which are a reflection of the data on which each was trained.
  • Mistral releases Codestral: French AI startup Mistral, backed by Microsoft and valued at $6 billion, has released its first generative AI model for coding, named Codestral. But the model can't be used commercially, thanks to Mistral's fairly restrictive license.
  • Chatbots and Privacy: Natasha writes about the EU’s ChatGPT Taskforce, and how it offers a first look at solving privacy compliance for AI chatbots.
  • ElevenLabs' sound effects generator: Voice cloning startup ElevenLabs has introduced a new tool, first teased in February, that lets users generate sound effects through prompts.
  • Interconnects for AI chips: Tech giants including Microsoft, Google and Intel – but not Arm, Nvidia or AWS – have formed an industry group, the UALink Promoter Group, to help develop next-generation AI chip interconnects.
