
Let’s decouple the algorithm from the platform

On choice, recommendation algorithms, and the Digital Fairness Act

In the Bits of Freedom podcast of January 9, discussing the upcoming Digital Fairness Act (DFA), one segment stood out for me. Starting around 25:50, the conversation focused on recommendation algorithms — and the question of who should actually control them.

Policy advisor Lotje Beek explained to host Inge Wannet that recommendation algorithms are the systems that determine what we see when scrolling through apps like TikTok, Instagram, or Facebook. In practice, these are often profiled recommendation algorithms: highly personalized and designed to keep users engaged on the platform for as long as possible.

Bits of Freedom argues that algorithms that serve only the platform and not the user should be banned. At the same time, they advocate for user choice. Users should be able to decide for themselves how content is sorted and recommended — including algorithms developed by third parties, not just the platform itself.

This aligns closely with an idea I have been working on for some time.

Recommendation algorithms shape not only what we see, but also what we don't see. They influence information consumption, opinion formation, and ultimately democratic processes.

Today, these algorithms are almost always:

  • opaque,
  • platform-owned,
  • engagement- and ad-revenue-driven,
  • and hardly adjustable by users themselves.

What if we decouple the algorithm from the platform?

My approach is simple:
separate the recommendation logic from the platform.

This can take several forms:

  • a standalone recommendation service where users can discover personalized links to content from diverse sources;
  • or, once platforms allow it or regulations enforce it, a choice of algorithms within a platform.

Users could then choose, for example:

  • a chronological feed;
  • feeds based on popularity or interaction;
  • thematic recommendations (e.g., sustainability or culture);
  • algorithms that actively promote diversity of perspectives;
  • or very specific preferences, such as only black-and-white videos (an example mentioned in the podcast).
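In software terms, such user choice amounts to a pluggable feed algorithm: a platform exposes a registry of sorting functions and lets the user pick one. The sketch below illustrates this idea with two of the options above; the `Post` model and all names are illustrative assumptions, not part of any existing platform API.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Callable

# Hypothetical minimal model of a feed item (illustrative only).
@dataclass
class Post:
    title: str
    posted_at: datetime
    likes: int

# A feed algorithm is simply a function that orders a list of posts.
FeedAlgorithm = Callable[[list[Post]], list[Post]]

def chronological(posts: list[Post]) -> list[Post]:
    """Newest first — no profiling involved."""
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def by_popularity(posts: list[Post]) -> list[Post]:
    """Most-liked first — interaction-based, still fully transparent."""
    return sorted(posts, key=lambda p: p.likes, reverse=True)

# The platform exposes a registry; the user — not the platform — chooses.
ALGORITHMS: dict[str, FeedAlgorithm] = {
    "chronological": chronological,
    "popularity": by_popularity,
}

posts = [
    Post("A", datetime(2024, 1, 1), likes=5),
    Post("B", datetime(2024, 3, 1), likes=2),
    Post("C", datetime(2024, 2, 1), likes=9),
]

print([p.title for p in ALGORITHMS["chronological"](posts)])  # ['B', 'C', 'A']
print([p.title for p in ALGORITHMS["popularity"](posts)])     # ['C', 'A', 'B']
```

Third-party algorithms — thematic, diversity-promoting, or as niche as black-and-white videos only — would just be additional entries in the registry.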

Open source, transparent and reproducible

Together with a team from the South-Eastern Finland University of Applied Sciences, I am working on an open-source recommendation system based on collaborative filtering. It helps people discover personalized information through links shared by others with similar information consumption patterns.
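To give a sense of how transparent such logic can be: the toy sketch below implements user-based collaborative filtering over shared links using plain Jaccard similarity. It is my own minimal illustration under assumed data shapes, not the actual system's code.

```python
# Toy user-based collaborative filtering over shared links.
# Every step is plain set arithmetic: inspectable and reproducible.

def jaccard(a: set, b: set) -> float:
    """Overlap between two users' link collections, from 0 to 1."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(user: str, shared: dict[str, set], top_n: int = 3) -> list[str]:
    """Score links shared by similar users, weighted by similarity."""
    scores: dict[str, float] = {}
    for other, links in shared.items():
        if other == user:
            continue
        sim = jaccard(shared[user], links)
        if sim == 0.0:
            continue  # ignore users with no overlap at all
        for link in links - shared[user]:  # only links the user hasn't seen
            scores[link] = scores.get(link, 0.0) + sim
    return sorted(scores, key=lambda l: (-scores[l], l))[:top_n]

# Illustrative data: each user's set of shared links.
shared = {
    "ann": {"a.example", "b.example", "c.example"},
    "ben": {"a.example", "b.example", "d.example"},
    "eva": {"x.example", "y.example"},
}

print(recommend("ann", shared))  # ['d.example']
```

Because the scoring is deterministic and the similarity measure is a one-line formula, anyone can verify why a given link was recommended — the opposite of a black box.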

Key principles:

  • no use of large language models (LLMs) or other black-box AI as part of the recommendation logic;
  • fully transparent and reproducible;
  • no behavioral profiling for advertising purposes;
  • designed to serve users, not to maximize attention.

The system demonstrates that personalized recommendations are perfectly possible without manipulation, without surveillance, and without closed algorithms.

When I heard in the podcast: “Maybe there’s a listener out there who has a nice idea,” I felt immediately addressed.

Invitation for dialogue

I would love to discuss this further — with organizations like Bits of Freedom, open-source developers, designers, and policymakers working at the intersection of civic technology, democracy, and digital autonomy.

Not because this idea is “finished,” but because it can be developed further through dialogue, experimentation, and collaboration.

Anyone interested in continuing this conversation can reach me at jos.schuurmans@cluetail.com.

Jos Schuurmans

Feel free to book a call with me, send me an email, and connect with me via LinkedIn.
