IDENTITY VALLEY NEWSLETTER #2

6/13/2024

Are we in a responsible AI bubble?

Following two years of enormous hype around AI, more and more people are asking: Can AI live up to the expectations? At least in an economic sense, the answer increasingly seems to be no. Venture-capital firm Sequoia recently did the maths and estimated that the AI industry has spent $50 billion on chips but brought in only $3 billion in revenue. And prominent economist Daron Acemoglu predicts annual productivity growth from AI of just 0.06% - hardly a revolution. Unsurprisingly, some analysts are warning of an economic bubble.

However, arguably more interesting than the bubble itself is a side effect of the hype: the sudden growth of a “responsible AI” ecosystem alongside it. Today, the OECD counts over 1,000 policy initiatives worldwide for the regulation and governance of AI, many new organisations have devoted their missions specifically to the challenges of AI, and AI risk topics have dominated discussion events in recent months.

While it is hugely important to investigate and mitigate the risks of AI, this inflated attention raises the question: Are we, perhaps, focusing too much on AI problems at the expense of other pressing digital issues? Are we heading towards a responsible AI bubble?

Douglas Rushkoff, the tech and society writer, recently said something that rang very true in this regard: “Digital technologies are really good at exacerbating the problem while also camouflaging the problem.” The point is that AI is not creating completely novel problems. Rather, it is making existing challenges worse: user profiling in the attention economy, the automation of mis- and disinformation, and a lower barrier to entry for cybercrime.

And while civil society, academia, governments, and responsible businesses rally around the flag of the “new” risks posed by AI, the old problems of the digital world remain unsolved in the age of AI. The real problems are still camouflaged under the hype, yet they need to be tackled with as much fervour as AI risks currently are.

AI is mysterious and carries an air of apocalypse, so it is no wonder people find the technology captivating. But challenges in the digital space are manifold and much broader than AI alone. Going forward, we need to find a way to tackle these challenges more systematically, without focusing all our attention on, and being distracted by, the most interesting or most recent digital technology. (We have some ideas for that.)

- Ferdinand Ferroli, Director for Policy & Research

What we are reading

  • Thoughtworks predicts in its Looking Glass 2024 report that organisations will increasingly need to prepare for their technology practices to come under scrutiny and to think through the ethical ramifications of their technology choices, not just for end users but for society as a whole.

  • Could AI-generated content be dangerous for our health? The Guardian's Alex Hern asks whether we are heading towards "cognitohazards": "Something so compellingly realistic that you involuntarily treat it as reality, even if you’re told otherwise".

  • The decisions on how venture capital is spent today define the digital future. Only now, 12 years after Facebook was listed on the stock exchange, are we seeing more clearly the platform’s negative impacts on users’ mental health and even democracy. Drawing parallels to the sustainability challenge, Paul Fehlinger and Johannes Lenhard propose a "1.5°C goal for responsible tech".

  • Social technologist Glen Weyl and Audrey Tang, former digital minister of Taiwan, have initiated an open-source, community-driven book project called "Plurality". It investigates how Taiwan achieved inclusive, technology-fuelled growth that harnesses digital tools to strengthen both social unity and diversity. Many lessons to learn!

  • Finally, in this episode of the Digital Food Podcast, Ferdinand Ferroli, our Director for Policy & Research, explains why it is crucial to bring responsible technologies to the food sector and how we can create a more trustworthy digital ecosystem.

Hidden gem

“If you are a student interested in building the next generation of AI systems, don't work on LLMs” - Yann LeCun, Chief AI Scientist at Meta, on the dead end of LLMs (@ylecun on X)

Some upcoming events