This Week in AI: OpenAI and publishers are partners of convenience | TechCrunch

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here's a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn't cover on our own.

By the way, TechCrunch plans to launch an AI newsletter soon. In the meantime, we're upping the cadence of our semi-regular AI column from twice a month (or thereabouts) to weekly – so stay tuned for more editions.

This week in AI, OpenAI announced that it struck a deal with News Corp, the publishing giant, to train OpenAI-developed generative AI models on articles from News Corp brands including The Wall Street Journal, Financial Times and MarketWatch. The agreement, which the companies describe as "multi-year" and "landmark," also gives OpenAI the right to display News Corp mastheads in response to certain questions within apps such as ChatGPT – presumably in cases where the answers are derived partly or entirely from News Corp publications.

It seems like a win-win for both sides, doesn't it? News Corp gets cash for its content – more than $250 million, reportedly – at a time when the media industry's outlook is even grimmer than usual. (Generative AI hasn't helped matters, threatening to greatly reduce referral traffic to publications.) Meanwhile OpenAI, which is fighting copyright holders on multiple fronts over fair use disputes, has one less costly court battle to worry about.

But the devil is in the details. Note that the News Corp deal has an expiration date – as do all of OpenAI's content licensing deals.

This in itself isn't bad faith on OpenAI's part. Perpetual licenses are a rarity in media, as all parties involved want to keep the door open to renegotiate. But it is a bit questionable in light of recent comments from OpenAI CEO Sam Altman on the decreasing importance of AI model training data.

In an appearance on the "All-In" podcast, Altman said that he "definitely [doesn't] think there will be an arms race for [training] data" because "when models get smart enough, at some point, it shouldn't be about more data – at least not for training." Elsewhere, he told James O'Donnell of MIT Technology Review that he's "optimistic" that OpenAI – and/or the broader AI industry – will "figure a way out of [needing] more and more training data."

The models aren't that "smart" yet, which is why OpenAI is reportedly experimenting with synthetic training data and scouring the far reaches of the web – and YouTube – for organic sources. But let's assume they one day no longer need much additional data to make rapid improvements. Where does that leave publishers, especially once OpenAI has dug through their entire archives?

The point I’m getting at is that publishers — and other content owners with whom OpenAI has worked — appear to be short-term partners of convenience, nothing more. Through licensing deals, OpenAI effectively neutralizes a legal threat — at least until courts determine how fair use applies in the context of AI training — and celebrates a PR victory. Publishers get much-needed capital. And work on AI that could seriously harm those publishers continues.

Here are some other notable AI stories from the past few days:

  • Spotify’s AI DJ: Spotify's AI DJ feature, which serves users personalized song selections, was the company's first step toward an AI future. Now Spotify is developing an alternative version of that DJ that will speak Spanish, Sarah writes.
  • Meta’s AI council: Meta on Wednesday announced the creation of an AI advisory council. There's one big problem: it features only white men. That seems a bit tone-deaf, given that marginalized groups are the ones most likely to suffer the consequences of AI's shortcomings.
  • FCC proposes AI disclosures: The Federal Communications Commission (FCC) has proposed requiring that political ads disclose AI-generated content – but not banning it. Devin has the full story.
  • Answer calls in your own voice: Truecaller, the widely known caller ID service, will soon let customers use its AI-powered assistant to answer phone calls in their own voice, thanks to a new partnership with Microsoft.
  • Humane considers selling: Humane, the company behind the much-hyped AI Pin, which launched last month to lukewarm reviews, is looking for a buyer. The company has reportedly priced itself between $750 million and $1 billion, and the sale process is still in its early stages.
  • TikTok turns to generative AI: TikTok is the latest tech company to incorporate generative AI into its advertising business, as the company announced on Tuesday that it is launching a new TikTok Symphony AI suite for brands. These tools will help marketers write scripts, create videos, and enhance their existing ad assets, reports Ayesha.
  • Seoul AI summit: At the AI Safety Summit held in Seoul, South Korea, government officials and AI industry executives agreed to apply elementary safety measures in the fast-moving field and to establish an international safety research network.
  • Microsoft’s AI PC: In two keynotes during its annual Build developer conference this week, Microsoft revealed a new lineup of Windows machines (and Surface laptops) it’s calling Copilot+ PCs, along with generative AI-powered features like Recall, which helps users find apps, files, and other content they’ve seen before.
  • OpenAI’s voice flap: OpenAI is removing one of the voices from ChatGPT's text-to-speech feature. Users found the voice, called Sky, to be eerily similar to Scarlett Johansson's (who has played AI characters before) – and Johansson herself released a statement saying she hired legal counsel to inquire about the Sky voice and get exact details about how it was developed.
  • UK autonomous driving law: Rules for self-driving cars in the UK are now official after they received royal assent, the final seal of approval for any legislation before it becomes law.

More Machine Learning

Some interesting AI-related research for you this week. Prolific researcher Shyam Gollakota of the University of Washington has done it again, this time with a pair of noise-canceling headphones that block everything except the person you want to hear. While wearing the headphones, you press a button while looking at that person; the system samples the sound coming from that specific direction and uses it to power an auditory exclusion engine that filters out background noise and other speakers.

The researchers, led by Gollakota and several graduate students, call the system Target Speech Hearing and presented it at a conference in Honolulu last week. Useful both as an accessibility tool and as an everyday option, it's definitely a feature you could see one of the big tech companies stealing for the next generation of high-end cans.

Chemists at EPFL are tired of doing 18 tasks in particular, so they've trained a model called ChemCrow to do them instead. Not IRL tasks like titrating and pipetting, but planning tasks like sifting through the literature and plotting reaction chains. ChemCrow doesn't exactly do all this for researchers; it acts more as a natural language interface to the whole toolset, invoking search or calculation tools as needed.

Image Credit: EPFL

The lead author of the paper describing ChemCrow said the system is "analogous to a human expert with access to a calculator and databases" – in other words, a grad student – so hopefully researchers can now work on something more important or skip the boring parts. It reminds me a little of Coscientist. As for the name, it's "because crows are known to use tools well." Pretty cool!

Disney Research roboticists are working hard to make their creations move more realistically without having to hand-animate every possible movement. A new paper they'll present at SIGGRAPH in July shows a combination of procedurally generated animation and an artist-facing interface for tweaking it, all running on a real bipedal robot (a Groot one).

The idea is that an animator can specify a style of movement – bouncy, stiff, unstable – and the engineers don't have to implement every detail, just make sure it falls within certain parameters. The movement can then be generated on the fly, with the proposed system essentially improvising it. Expect to see this at Disney World in a few years.
