Nov 23, 2022
Consumers often use recommendation systems to discover relevant content more easily when reading media or watching Video-On-Demand. However, what is shown to consumers is nowadays often determined automatically by opaque AI algorithms. The highlighting or filtering of information that comes with such recommendations may have undesired effects on consumers or even society, for example when an algorithm creates filter bubbles that implicitly discriminate along sensitive social dimensions such as race, gender, or social and cultural inclusion, or amplifies the spread of misinformation.
For this reason, Optiva Media is developing a content recommendation system for end users that can be tuned to align with societally important aspects such as inclusivity and cultural and sexual diversity, promoting patterns of responsible content consumption while still matching users’ preferences.
How do standard recommendation algorithms work?
Recommendation algorithms, particularly those based on deep learning, suffer from a lack of explainability, leaving end users with little insight into the latent factors guiding the recommendations they receive.
These algorithms typically rely on both individual user interests and collective preference patterns within a community, and they are trained on historical data collected in a specific application context. Personal recommendations are formed by using this data to build user profiles, calculate similarities, and find correlations. Because this data is captured from human interactions, the resulting recommendations are prone to replicating structural and behavioural biases.
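The profile-similarity-correlation pipeline described above can be illustrated with a minimal user-based collaborative filtering sketch. The ratings matrix, similarity measure, and function names below are illustrative assumptions for explanation, not a description of any production system:

```python
# Minimal user-based collaborative filtering sketch (illustrative only).
# Rows = users, columns = content items; 0 means "not yet watched/rated".
import numpy as np

ratings = np.array([
    [5, 4, 0, 1, 0],
    [4, 5, 1, 0, 0],
    [0, 1, 5, 4, 2],
    [1, 0, 4, 5, 0],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two users' rating vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return (a @ b) / denom if denom else 0.0

def recommend(user_idx, top_n=2):
    """Score each item for one user by summing other users' ratings,
    weighted by how similar those users' tastes are to the target's."""
    target = ratings[user_idx]
    scores = np.zeros(ratings.shape[1])
    for other_idx, other in enumerate(ratings):
        if other_idx != user_idx:
            scores += cosine_sim(target, other) * other
    unrated = np.where(target == 0)[0]              # only suggest unseen items
    ranked = unrated[np.argsort(scores[unrated])[::-1]]
    return [int(i) for i in ranked[:top_n]]

print(recommend(0))  # → [2, 4]
```

Because the scores are driven entirely by what similar users already watched, items popular among a user's "neighbours" dominate, which is exactly how historical bias in the data carries over into the recommendations.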
In the context of content recommendation within TV services, these biased and opaque recommendation technologies perpetuate social discrimination against vulnerable groups, values, and cultures by emphasizing mainstream content. These algorithms often dismiss an important part of the existing cultural offering, typically non-mainstream content, that might otherwise receive a great deal of attention from end users and society in general.
In media, the best-known example is the phenomenon of the “filter bubble” (Pariser, 2011). A filter bubble can emerge when an algorithm learns about users’ interests and opinions over time and only displays content that matches these assumed interests and opinions. Ultimately, this can lead to self-reinforcing feedback loops, which may in turn result in undesired societal effects such as opinion polarisation or the increased spread of one-sided information.
How is Optiva Media addressing the issue?
Equitable AI-based recommender systems improve the interaction and collaboration between humans and AI. They make the recommender easier to understand by providing adaptive, interactive explanations that can be presented to different stakeholders in a form suited to how each perceives and visualises the recommendations.
Optiva Media’s recommender system allows human-driven tuning and the promotion of societal values such as gender equality and cultural and political diversity. A scoring system is integrated into the recommendations to help end users understand why content is recommended and to increase societal responsibility.
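One common way such human-driven tuning can work is to blend each item's predicted relevance with a separate societal score and let an operator adjust the weight between them. The sketch below is an illustrative assumption of that general re-ranking idea; the field names, weighting scheme, and example catalogue are invented for explanation and do not describe Optiva Media's actual implementation:

```python
# Illustrative re-ranking sketch: blend relevance with a tunable societal score.
def rerank(candidates, diversity_weight=0.3):
    """Sort candidates by a weighted mix of predicted relevance and a
    societal/diversity score. diversity_weight = 0 reproduces a pure
    relevance ranking; higher values promote under-exposed content."""
    def blended(item):
        return ((1 - diversity_weight) * item["relevance"]
                + diversity_weight * item["diversity_score"])
    return sorted(candidates, key=blended, reverse=True)

# Hypothetical catalogue with pre-computed scores (both in [0, 1]).
catalogue = [
    {"title": "Mainstream Hit",    "relevance": 0.9, "diversity_score": 0.2},
    {"title": "Indie Drama",       "relevance": 0.7, "diversity_score": 0.9},
    {"title": "Local Documentary", "relevance": 0.6, "diversity_score": 0.8},
]

print([c["title"] for c in rerank(catalogue, diversity_weight=0.0)])
# → ['Mainstream Hit', 'Indie Drama', 'Local Documentary']
print([c["title"] for c in rerank(catalogue, diversity_weight=0.5)])
# → ['Indie Drama', 'Local Documentary', 'Mainstream Hit']
```

Keeping the two scores separate, rather than folding diversity into the trained model, is what makes the trade-off transparent and tunable by a human rather than hidden inside the algorithm.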
Optiva Media has applied its socially responsible recommendation system in a number of proposals at both European and national levels.
This use case is potentially applicable to TV services, media platforms, shopping portals, and other organisations wishing to promote a socially responsible corporate agenda.