Identity Valley Newsletter #9

2/19/2026

Reality, Curated.

Always-on AI can personalise what we perceive in real time; the key is designing it to broaden reality, not compress it.

by Ferdinand Ferroli

We used to worry about curated timelines. Now we should prepare ourselves for curated reality. AI assistants are no longer tools we consult only through a web interface. They are moving into everyday life through wearables, audio devices, and camera-based systems, like smart glasses, that can see and hear what we see and hear, as major AI companies race to launch assistant-first hardware. The filter bubble is no longer just online. It is becoming a layer over lived experience.

With wearable assistants, the risk is not only factual error or hallucination but something more structural: enclosure. An always-present system can gradually shape perception by highlighting, filtering, and interpreting the world in ways that mirror what it thinks we want to see. Over time, that can narrow reality into a personalised stream that feels natural because it is tailored to our reflexes. If advertising enters the model, the optimisation target can shift again, away from our longer-term interests and toward whatever drives attention, influence, or spending.

This is how the old echo chamber mutates:

In the feed era, we were at least in public, surrounded by other people, even if many were like-minded.
In the chatbot era, that public setting gave way to private conversations that are harder to see or challenge.
With wearables and augmented reality, mediation will move from what we read to what we notice in real time. Each their own reality, curated.

This raises a more basic cognitive question: can people even notice when or how AI has framed what they are seeing, and challenge it if needed? Research with blind users of AI visual assistance shows how hard it can be to detect and correct errors when AI mediates perception in the moment. Always-on AI devices make privacy, transparency and user agency even more urgent.

So the question is not whether we will use "always-on AI". We will. What we might need, sooner rather than later, are concrete protections, such as built-in exposure to different perspectives, user control over reality curation, and "reality checks" to reconnect with the real world, or even the realities of other people.

Otherwise, personalisation won’t just tailor reality; it will fragment it, weakening the shared ground we need to coordinate, deliberate, and find common ground.


Worth sharing

Reality, curated: A decade ago, aspects of the world described above were imagined in the short film "HYPER-REALITY" by Keiichi Matsuda.

Smartwatches track your heart rate, sleep and location around the clock. But who else gets that data? This article on accountability in wearable AI explores how AI-powered wearables raise urgent questions about consent, algorithmic bias and the hidden decisions shaping your health insights.

Virtual and augmented reality are quickly stepping out of sci-fi and into everyday life. This deep dive into Principles for a Responsible Virtual Future asks a simple question: what rules do we need now to make sure these new worlds stay fair, trustworthy, and human?

In a controlled debate experiment, researchers tested whether AI can change people’s minds. The result is surprising: when allowed to personalise arguments, GPT-4 often outperformed humans, raising new questions about persuasion, influence, and the future of online conversations.


Connect & Share

We are always looking to find like-minded people and expand the responsible tech community. Subscribe to this newsletter if you want to stay connected, or share it if you think someone else might be interested.

Subscribe here


Hidden gem

Humans are attention-paying machines, paying attention to machines.

- inspired by this article in the New York Times


We asked you a question

Results of the poll on family tracking: In our last newsletter, we asked whether you support or oppose tracking family members. Unsurprisingly for the readership of this newsletter, all participants voted against it. Thanks for participating!


Some upcoming events

03 March 2026, London

European Resilience Summit

https://www.europeanresiliencesummit.com/london/

30 March - 2 April 2026, Washington D.C.

IAPP (International Association of Privacy Professionals) Global Summit

https://iapp.org/conference/iapp-global-summit

19 - 22 May 2026, Brussels

CPDP (Computers, Privacy and Data Protection) Conference

https://www.cpdpconferences.org/

24 - 26 June 2026, Vienna

Digital Humanism Conference

https://dighum.wien/