
The risks of AI to music streaming services… according to Tencent Music Entertainment


MBW Explains is a series of analytical features in which we explore the context behind major music industry talking points – and suggest what might happen next.
WHAT’S HAPPENED?

When music companies talk about AI, there are usually two broad themes:

  • The benefits of AI to their business – more efficient music creation, enhanced user experience, better data collection and analysis, and so forth, and
  • The costs associated with abuse of AI – copyright infringement by unauthorized deepfakes, unlicensed training of AI on copyrighted works, etc.

In other words, it’s either “look how much money we can make from AI” or “look how much more money we could be making from AI.”

Yet the reality is that AI is a new and powerful technology, and as with other new and powerful technologies before it, it’s practically impossible to predict what its longer-term impacts might be.

When the internet was the new and hot technology in the 1990s, plenty of people saw its commercial potential, but could anyone have predicted the spread of misinformation on social media, the epidemic of depression among teenagers linked to smartphone use, or harmful trends like the Tide Pod Challenge?

So it’s worth looking at AI from a different perspective: what are we not seeing? What are the risks?

Tencent Music Entertainment, China’s largest operator of music streaming services, has taken a step towards doing just that. In its annual report for 2023, filed with the US Securities and Exchange Commission earlier this month, TME laid out a number of risks – some of them fairly vague – that it sees from its use of AI.

And use AI it does: among streaming services, TME may well be the most advanced in the world when it comes to developing and deploying AI tools.

For listeners, it has launched an AI “companion” that can “listen” to music along with you, simulating a listening party. TME’s karaoke app WeSing, and the karaoke function in its Kugou streaming app, now have an integrated “AI singing” function, which allows users to perform “duets” with virtual singers. AI has also been integrated into TME’s music search and discovery functions.

For music creators, TME last year rolled out a suite of AI music-creation tools on Venus, its music production and promotion platform, available to the nearly half a million subscribers of its Tencent Musician Platform. Among those tools are the ability to separate the individual instruments on a recorded track, the ability to automatically generate sheet music, a lyric-writing assistant, and a composition assistant.

TME has also developed PDM, short for “predictive model,” which the company says can now predict “the next hit song,” based on analyses of music and lyrics and changes to music trends around the world.
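
TME doesn’t disclose how PDM works under the hood, so anything concrete here is necessarily speculative. Purely to illustrate the general idea – scoring a track’s characteristics against what is currently performing well – below is a minimal, hypothetical sketch in Python. The features, weights and model form are invented for this example and are not drawn from TME’s filings.

# Hypothetical illustration only – this is not TME's PDM.
# Assumes each track is summarized by a few numeric features, which is one
# plausible (but invented) way a "hit prediction" model could be framed.
from dataclasses import dataclass
import math

@dataclass
class TrackFeatures:
    tempo_fit: float         # closeness of tempo to current chart norms, 0-1
    lyric_sentiment: float   # positivity of the lyrics, 0-1
    trend_similarity: float  # audio similarity to songs trending this week, 0-1

# Illustrative weights; a real system would learn these from historical chart data.
WEIGHTS = {"tempo_fit": 1.2, "lyric_sentiment": 0.8, "trend_similarity": 2.5}
BIAS = -2.0

def hit_probability(track: TrackFeatures) -> float:
    """Combine the features into a 0-1 'hit likelihood' via a logistic score."""
    z = BIAS + sum(weight * getattr(track, name) for name, weight in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

if __name__ == "__main__":
    demo = TrackFeatures(tempo_fit=0.9, lyric_sentiment=0.6, trend_similarity=0.8)
    print(f"Estimated hit probability: {hit_probability(demo):.2f}")

A production system would learn its weights from historical chart data rather than hard-coding them, and would draw on far richer audio, lyric and trend signals – but the basic shape, features in and a probability out, is the same.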

So it makes sense that TME has spent considerable time thinking about the risks involved in staking so much of its business on the development of AI. Here are the core aspects of AI where TME sees risk:


1) FLAWED ALGORITHMS

“AI algorithms may be flawed, and the data used could be incomplete or biased,” TME said in a section of its report titled Risks Related to Our Business and Industry.

This could indeed be a problem. Since AI technology “went mainstream” about a year and a half ago, numerous cases of “AI hallucinations” have been reported – meaning instances where AI simply made up false information and passed it off as factual.

In one notorious instance, a lawyer arguing a personal injury case in a New York court admitted that he had cited nonexistent legal precedents in his argument – the result of ChatGPT “hallucinating” those cases. What’s more, when questioned about these results, ChatGPT insisted that the fake cases were real.

“Certain AI applications could trigger ethical issues. Should our AI-based offerings become controversial due to their effects on human rights, privacy, employment, or other social matters, we risk reputational harm or legal repercussions.”

Tencent Music Entertainment

Such “hallucinations” could prove to be a problem for music streaming services as well. TME’s PDM could generate completely incorrect predictions about what songs are likely to be hits, or it could mislead analysts about listening habits. Other AI apps could produce undesired results for listeners and music creators. It doesn’t take a genius to see how such things could damage a streaming service’s business operations.

TME is also right to point to the possibility of bias in AI models. That AI algorithms reflect the biases, conscious or unconscious, of their developers is well documented at this point. IBM has published a rundown of the ways biases can affect the usefulness of AI technology. If a streaming service is using a biased AI model, it could result in the company missing cultural trends or overlooking entire demographic groups – to the detriment of its bottom line.
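
To make the bias point concrete, here is a deliberately simplified, hypothetical sketch of how a skewed data sample can hide a trend: a “trend detector” that only ingests listening logs from one group of users reports a different top genre than one that sees every listener. The genres, listener groups and play counts below are invented for illustration.

# Hypothetical illustration only – the groups, genres and counts are invented.
from collections import Counter

# Actual listening events across two listener groups (the ground truth).
group_a_plays = ["synth-pop"] * 600 + ["indie-rock"] * 400
group_b_plays = ["regional-folk"] * 900 + ["synth-pop"] * 100

# A biased pipeline that only ingests logs from group A (e.g. one app, one region).
biased_sample = group_a_plays

# An unbiased pipeline that sees all listeners.
full_sample = group_a_plays + group_b_plays

print("Top genre seen by the biased model:", Counter(biased_sample).most_common(1))
print("Top genre across all listeners:    ", Counter(full_sample).most_common(1))
# The biased model reports synth-pop as the leading trend and never surfaces
# regional-folk, even though it is the most-played genre overall.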


2) ‘CONTROVERSIAL PRACTICES’

“Inappropriate or controversial data practices, by us or by others, could limit the acceptance of our AI-enhanced products and content,” TME states in its annual report.

“Certain AI applications could trigger ethical issues. Should our AI-based offerings become controversial due to their effects on human rights, privacy, employment, or other social matters, we risk reputational harm or legal repercussions.”

It’s hard to say exactly what TME had in mind when it offered up this warning, but we can certainly imagine how any number of a music streaming company’s AI applications could run into “ethical issues.”

If an AI algorithm were to analyze a user’s listening habits deeply enough, it could potentially create a “model” of that user that predicts what they will want to hear so accurately it might feel like a violation of privacy – as if someone knows things about you that they shouldn’t.

The potential impact on employment is self-evident. If generative AI tools reduce the demand for session musicians, sound engineers or even producers, the job losses in the music industry could pile up. And if those unemployed masses blame tools like TME’s Venus, that’s a clear risk to the company’s image and reputation.


3) COPYRIGHT ISSUES

Copyright is the big issue when it comes to AI in the music industry. From lawsuits over unauthorized reproductions of copyrighted materials by AI chatbots, to takedown notices for AI-generated deepfakes of famous artists, the risks to rightsholders – and the companies that could run afoul of them – are coming into sharp focus.

“There are uncertainties around the ownership and intellectual property protection of AIGC [AI-generated content] products. Using AIGC tools could also lead to potential copyright infringement and other legal challenges,” TME notes in its annual report.

“If we are unable to secure the needed permissions or licenses for using AI tools — whether because we cannot identify the rights holder or for any other reason — we might infringe on others’ rights which could lead to monetary claims, fines, penalties, or less content for our users.”

“There are uncertainties around the ownership and intellectual property protection of AIGC [AI-generated content] products. Using AIGC tools could also lead to potential copyright infringement and other legal challenges.”

Tencent Music Entertainment

TME’s reference to “uncertainties around the ownership and intellectual property protection of AIGC” likely refers to the ongoing debate about whether or not AI-generated content can be copyrighted.

In the US, a court ruling in 2023 determined that a work created entirely by AI can’t be copyrighted. However, this still leaves some unresolved grey areas. How much human involvement is needed for a work to be copyrightable? If something was 50% generated by a person and 50% by AI, can it be copyrighted? What if it was 25% human work, and 75% AI? There certainly remains plenty to be determined about the extent of copyright protection for AI-generated content.

What stands out in TME’s assessment is its level of uncertainty about its own uses of AI; can it be that TME doesn’t even know if any given AI tool has used copyrighted material in its training, or can be used to violate copyright? That could potentially be the case, and if so, it might have to do not only with the complexity of AI algorithms, and the massive amounts of data they are typically trained on, but also with…


4) LEGAL AND REGULATORY UNCERTAINTY

“The regulatory and legal framework on generative AI is evolving rapidly,” TME’s report states, “and might not fully address every aspect of its research, development, and use.”

Talk about an understatement. As things stand, the legal and regulatory framework for AI is just getting started – if that – in most countries.

In the US, so far, there has been little in the way of legislation, and much of the work of determining the rules around copyright has fallen to the courts.

Judges around the country are currently hearing arguments about whether the unauthorized use of copyrighted materials to train AI models should be covered by the “fair use” exception to copyright law, whether an AI-generated summary of a book counts as copyright infringement, and other related issues.

In the European Union, the recently passed AI Act requires developers of “general purpose AI” models to keep track of and disclose what content is used in training. It further states that “any use of copyright-protected content requires the authorization of the [rights holder] concerned unless relevant copyright exceptions and limitations apply.”

Yet no one seems entirely certain what this might mean in specific cases. As legal experts have pointed out, it’s not entirely clear what is and isn’t a “general purpose AI.” Does an AI-driven music-generating tool count?

There’s also the carve-out for “relevant copyright exceptions and limitations.” EU copyright law has an exemption for “temporary acts of reproduction which are an essential part of a technological process.” Would developing an AI model fall under that exemption?

“The regulatory and legal framework on generative AI is evolving rapidly and might not fully address every aspect of its research, development, and use.”

Tencent Music Entertainment

Interestingly enough, the legal jurisdiction that’s most relevant to Tencent Music Entertainment – China – might actually have the world’s clearest and most comprehensive rules surrounding AI, at least for now.

Beginning in late 2022, Chinese regulators issued rules guiding how AI can be developed and deployed, including a requirement for AI developers to submit security assessments of their technology to the government.

Providers of AI-powered services are required to ensure that users “scientifically understand and rationally use” AI-generated content, and that they do not “use the generated content to damage the image, reputation and other legitimate rights and interests of others” or “engage in commercial hype or improper marketing.”

“Non-compliance with the AIGC Measures may subject the providers of generative-AI services to penalties, including warning, public denouncement, rectification orders and suspension of the provision of relevant services,” TME noted in its report.

“However, since these laws and regulations are still relatively new and significant uncertainties remain with respect to their interpretation and implementation, we cannot assure you whether we will be able to comply with the requirements of such laws and regulations in a timely manner or at all.”

That’s a pretty stark admission from one of China’s most prolific developers of AI in the entertainment industry.


A FINAL THOUGHT…

The final thought here might be best left to Tencent Music Entertainment itself:

“Uncertainties regarding the development and application of AI technology present a potential risk. There remains the possibility that AI technology may not progress as anticipated or deliver expected benefits, which could limit the acceptance and popularity of our AI-based offerings.”

TME seems entirely aware that new technologies don’t always work out in the way they were originally envisioned, and it’s leaving itself some breathing room for the possibility that its enthusiastic and aggressive embrace of this new technology might not yield the desired results in the longer run.

Given that TME is a leader in the application of AI technology among music streamers, that should give us pause. At a time when companies both within and outside the music industry are racing each other to be the first out the door with AI-driven innovations, we might suggest three words of advice: proceed with caution.


