Filter Bubble
A filter bubble is the intellectual isolation that can occur when websites use algorithms to selectively guess what information a user would want to see, and then present information to the user according to that guess. Websites make these assumptions based on information about the user, such as previous click behavior, browsing history, search history, and location.
For that reason, websites are more likely to present only information that agrees with the user's past activity. A filter bubble can therefore leave users with significantly less exposure to contradicting viewpoints, so that they become intellectually isolated.
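To make the mechanism concrete, the following deliberately simplified sketch (with invented click history, articles, and threshold; no real website works exactly like this) shows how content matching the inferred profile survives the filter while contradicting viewpoints are silently dropped:

```python
# A toy illustration of the filter-bubble mechanism: articles matching the
# topics a user has clicked on before are kept, everything else is dropped.
# The click history, articles, and threshold are invented for this example.
from collections import Counter

def build_profile(click_history):
    """Infer the user's interests from the topics of previously clicked items."""
    return Counter(topic for item in click_history for topic in item["topics"])

def filter_results(articles, profile, threshold=1):
    """Keep only articles whose topics overlap enough with the inferred profile."""
    return [a for a in articles
            if sum(profile[t] for t in a["topics"]) >= threshold]

clicks = [{"topics": ["natural remedies"]}, {"topics": ["vaccine side effects"]}]
articles = [
    {"title": "Vaccine safety: the scientific consensus", "topics": ["vaccine safety"]},
    {"title": "Parents report vaccine side effects", "topics": ["vaccine side effects"]},
]

profile = build_profile(clicks)
print([a["title"] for a in filter_results(articles, profile)])
# Only the article that matches past clicks is shown; the contradicting
# viewpoint never reaches the user.
```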
Personalized search results from Google and the personalized news feed from Facebook are two prominent examples of this phenomenon.
Search engines and social networks are increasingly used by the public for health-related inquiries. The information found through search engines, or presented by social network services, is typically tailored to the individual through complex algorithms that take into account comprehensive information about the person performing the search, often without that person's knowledge.
In this paper, I discuss how this technology poses challenges for both patients and clinicians, and present some ideas for mitigating these problems.
Imagine sitting in front of your computer trying to decide whether or not to vaccinate your children against common childhood diseases. You go to Google Search and search for “vaccines and children”. You get an overwhelming number of results (at the time of writing, I got about 35 million hits).
The results are sorted and presented to you, with the top ten hits on the first page. Most people will click one of the links on that first page. What is interesting is that the results are sorted not only by objective relevance, but are also heavily influenced by your search history, your social network, when you are searching, and where you are searching from.
In fact, over 200 so-called “signals” go into that simple search, making your results almost certainly different from mine.
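As a rough sketch of what such signal-based ranking might look like, the snippet below combines a handful of invented signals with invented weights. It is meant only to illustrate how identical queries can yield different orderings for different users, not how any actual engine is implemented:

```python
# A minimal sketch of signal-based ranking. The signal names and weights are
# assumptions for illustration; real engines reportedly combine hundreds of signals.

def personalized_score(result, user):
    signals = {
        "text_relevance":   result["relevance"],                              # query/document match
        "history_affinity": user["topic_affinity"].get(result["topic"], 0.0), # past searches and clicks
        "social_affinity":  user["network_clicks"].get(result["url"], 0) / 100.0,
        "locality":         1.0 if result["region"] == user["region"] else 0.0,
    }
    weights = {"text_relevance": 0.5, "history_affinity": 0.3,
               "social_affinity": 0.15, "locality": 0.05}
    return sum(weights[name] * value for name, value in signals.items())

def rank(results, user):
    """Two users issuing the same query can receive very different orderings."""
    return sorted(results, key=lambda r: personalized_score(r, user), reverse=True)
```

The point is simply that the final ordering depends on the user profile as much as on the query itself.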
In most cases, this personalized search is beneficial, since it produces results that seem relevant to us. However, as I will argue in this paper, there are serious problems with it, which in certain situations can mean the difference between life and death.
In our example, it could mean the difference between choosing to vaccinate a child, or leaving that child vulnerable to common, easily preventable diseases. The main reason why this may happen is that the technology we are using hides the complexity of the search algorithms and does not reveal the additional information on which the filtering is based.
This is a problem for at least two reasons. First, most people do not know about this filtering, and even if they do, it is inherently difficult to grasp how it influences the search results. Second, the way the algorithms work can lead to the creation of a filter bubble.
However, it is not only your own behavior that influences the results. The interests and preferences of people in your social network also feed into the algorithms, making it more likely that you will receive the kinds of results your social network in general gravitates toward.
In many cases, these filters provide relevant, high-quality results. However, they become a problem as soon as your profile contains elements that make the search results gravitate toward misinformation.
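A toy example (with invented contacts and placeholder URLs) illustrates how such a network-level signal can pull results toward whatever the user's contacts engage with, regardless of accuracy:

```python
# Invented data: the boost a page gets from the user's network is simply the
# fraction of contacts who engaged with it, with no regard for quality.

def network_boost(url, contacts):
    """Fraction of the user's contacts who clicked this URL."""
    return sum(url in c["clicked"] for c in contacts) / max(len(contacts), 1)

contacts = [
    {"clicked": {"natural-health.example/vaccines-harm"}},
    {"clicked": {"natural-health.example/vaccines-harm"}},
    {"clicked": {"health-authority.example/vaccine-safety"}},
]

for url in ("natural-health.example/vaccines-harm",
            "health-authority.example/vaccine-safety"):
    print(url, network_boost(url, contacts))
# The page most popular within this particular network receives the larger
# boost, whether or not it is accurate.
```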
The filters are to a large degree invisible, which adds to the problem. Many users are not even aware that the filtering is taking place, and even if they are, it is difficult to take control of how the filter is being applied. Granted, you can go to Google and delete your search history, or click the “Hide private results” button in the top right of the search results.
Still, the complexity of the algorithms and the lack of usable explanations about how the filters actually work make it difficult for the user to take control.
Naturally, the technology is not solely responsible for the quality of the information we find. Our prior beliefs, and the sources we turn to for information, are personal starting points that influence how we approach information gathering. However, as the search algorithms learn about our preferences and history, these personal starting points become embedded in the technology as part of the filter algorithms.
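This feedback loop can be sketched schematically: a slight initial leaning in the profile is reinforced every time the user clicks the top-ranked item. The topics, counts, and update rule below are invented purely for illustration:

```python
from collections import Counter

# A schematic feedback loop: the profile starts with a slight edge for one
# topic, the filter ranks matching items first, the user clicks the top item,
# and that click widens the edge further. All values are invented.

def rank(items, profile):
    return sorted(items, key=lambda item: profile[item["topic"]], reverse=True)

profile = Counter({"vaccine scepticism": 2, "vaccine safety": 1})
items = [{"topic": "vaccine scepticism"}, {"topic": "vaccine safety"}]

for step in range(3):
    top = rank(items, profile)[0]      # the filter's first suggestion
    profile[top["topic"]] += 1         # clicking it reinforces the profile
    print(f"step {step}: clicked '{top['topic']}', profile now {dict(profile)}")
# The initial slight preference grows at every step; the opposing topic never
# reaches the top of the ranking again.
```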
One of the aims of the Knowledge Landscapes network is to better understand how the public uses online resources to make decisions about personal health. The rise of the internet as a common medium has led to well-documented changes in how people in general inform themselves about their own health.
In short, the main problem is not the search algorithms as such; they are welcome in most cases and help us navigate vast amounts of information in a manageable way. The problem is the invisibility of the algorithms: they are hidden from explicit view on search engines and social networks, yet they directly affect the quality of the information we find when we look for it online.
Hopefully, through the Knowledge Landscapes network, we will gain an even better understanding of how the public uses online resources for managing their own health, and be able to provide ideas that can improve the quality of health-related information that reaches the public.