Bursting the Filter Bubble: Pro-Truth Librarians in a Post-Truth World

Depending on your perspective, the social media chickens have recently been either coming home to roost or learning to soar. For information professionals, these are fascinating times. While the world has been contemplating the unprecedented results of the Brexit referendum in June and the recent US Presidential election, the simmering debate around the influence of social networking sites such as Facebook on the outcomes of elections and referendums has reached boiling point over the past few weeks. In both cases, the outcome was the opposite of what multiple polls had predicted, leading to suggestions that the polling systems seriously underestimated a number of factors, including the “power of alt-right news sources and smaller conservative sites that largely rely on Facebook to reach an audience” (Solon, 2016), and failed to take into account the deep polarisation that has been evident on social media sites in particular. In the weeks since the US Presidential election, social media has been under the microscope, and there has been a flood of articles, polemics and opinion pieces signalling varying degrees of concern about the apparent blurring of the lines between social media sites and traditional news channels, and the perceived effect that this has had – and may yet have – on national and global politics. Although emotions have been running high, particularly in the wake of the bitterly fought US campaign, it is helpful to sift through the hyperbole and break down the key arguments that are shaping the discussion. What are the main issues emerging from this debate – and why do they concern us?

    • Firstly, that the number of people who consume news primarily via social media sites rather than traditional, editorialised media channels is growing rapidly, with many turning directly to sites such as Twitter, Reddit and Facebook to keep up to date with current affairs. There does appear to be some evidence for this, although traditional channels have not been entirely jettisoned – for example, the most recent annual Reuters Institute Digital News Report found that 52% of Irish consumers now get their news from social media sites (BAI, 2016), while a Pew Research Center report on news consumption across social media platforms in 2016 found that 62% of US adults get news from social media (Gottfried & Shearer, 2016). When broken down by site, the results showed that 66% of Facebook users get news on the site, while 59% of Twitter users get news on Twitter. Context is important for findings such as these: the Reuters study also confirmed that TV is still the most popular news source in Ireland, the Pew study showed that only 18% of respondents get news “often” from social media, and the demographics point to a primarily white, young and well-educated population consuming news in this way. However, caveats notwithstanding, the trend is notable and cannot be ignored.
    • Secondly, that the circulation of “fake news” items on social media sites had a disproportionate effect on the outcomes of the US election and Brexit referendum. This point has driven much of the recent media discussion, although in the wake of the US election result Facebook’s Mark Zuckerberg publicly rejected the argument, calling it a “pretty crazy idea” (Shahani, 2016). Nonetheless, shortly afterwards both Facebook and Google announced that they would be making changes to try to restrict the spread of fake news – in Facebook’s case, by banning fake news sites from using its Facebook Audience Network (Murdock, 2016). While it is difficult to measure the actual effect of fake news on voter behaviour, there is certainly a great deal of uncertainty and unease around this issue.
    • A wider perspective is the suggestion that fact-checking and “truth” in news items circulated via social media are now considered less important than content which appeals to the emotions, generates “clicks” and can be monetised. It is no coincidence that “post-truth,” which encapsulates this concept, has been declared by Oxford Dictionaries as its international word of the year (Flood, 2016), its usage having spiked during the events of 2016. Exploring the impact of this trend on journalism, Declan Lawn, writing in the Irish Times, describes the “post-factual society” not as a society where facts no longer exist, but rather “a society where they exist, but don’t matter” (Lawn, 2016). This, he argues, has had a profoundly damaging effect on journalistic practice, as sticking to the facts no longer produces the impact that it once did.
    • Alongside the concerns about misleading information and clickbait is the more general sense that users of social media are shielded from content that does not chime with their own views, while links, videos and articles that reinforce their beliefs and preferences are channelled towards them in a continual stream. This is known as the Filter Bubble effect: “The more we click, like and share stuff that resonates with our own world views, the more Facebook feeds us with similar posts” (Solon, 2016). The term “Filter Bubble” was coined in 2011 by Eli Pariser in his book of the same name. Spurred on by concerns about the potentially reductive effects of personalised search, and of predictive algorithms that customise social media content streams to satisfy user preferences (and, naturally, encourage more lucrative “clicking”), he raised a number of flags. To quote a passage from the book,

      “The new generation of Internet filters looks at the things you seem to like – the actual things you’ve done or the things people like you like – and tries to extrapolate. They are predictive engines, constantly creating and refining a theory of who you are, and what you’ll do and want next” (p.9).

      This, he argues, has fundamentally transformed the ways in which people consume information, as they are exposed less and less to ideas that oppose or challenge their own worldviews. Instead, by interacting only with content that reinforces their existing beliefs, they become trapped in a reverberating digital echo chamber that serves only to strengthen their convictions and narrow their perspective on the world. Some media reports claim that this “red” and “blue” filter bubble effect was the defining story of the US election; one piece in the Guardian newspaper in the UK even sought to investigate the effect, albeit unscientifically, by asking five conservative-leaning and five liberal-leaning US voters to restrict their social media interactions to a stream (created by the journalists for the purpose) containing items that opposed their views (Wong, Levin & Solon, 2016). Results were predictably mixed, with some participants reporting more of an effect than others. In reality, it is a difficult claim to prove, and it also raises questions about individual agency – surely people have always “clicked” on sources that fit their worldview, and avoided others, no matter the medium? The well-recognised cognitive effect of confirmation bias supports this: it refers to people’s tendency to actively seek out information that confirms what they already believe, and to avoid or reject information that conflicts with those beliefs. It seems that the speed and reach of social media have amplified this effect, and raised it to public consciousness in the wake of the election and the earlier referendum.
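      Pariser’s “predictive engine” idea can be made concrete with a deliberately crude simulation: a recommender that ranks stories by the topics a user has clicked on before, with each round’s clicks fed straight back into the history. Everything here – the “red”/“blue” topics, the scoring rule, the numbers – is invented purely for illustration; real recommendation systems are vastly more complex.

```python
# Toy simulation of a preference-reinforcing recommender ("filter bubble").
# Illustrative sketch only - not how any real platform actually works.
from collections import Counter

def recommend(history, items, k=3):
    """Rank candidate items by how often their topic appears in the click history."""
    topic_counts = Counter(topic for topic, _ in history)
    # Score each item by the user's past engagement with its topic (0 if unseen).
    ranked = sorted(items, key=lambda item: topic_counts[item[0]], reverse=True)
    return ranked[:k]

def simulate(rounds=5):
    # A pool of stories tagged with one of two hypothetical topics.
    pool = ([("red", f"red story {i}") for i in range(20)] +
            [("blue", f"blue story {i}") for i in range(20)])
    # The user starts with a single "red" click - a slight initial leaning.
    history = [("red", "seed story")]
    for _ in range(rounds):
        feed = recommend(history, pool)
        # The user clicks whatever the feed offers; those clicks feed back in.
        history.extend(feed)
    return Counter(topic for topic, _ in history)

counts = simulate()
print(counts)  # the feed converges entirely on the initial leaning
```

Even with no signal other than one initial click, the simulated feed locks onto a single topic within a few rounds – it is the feedback loop, rather than the strength of the starting preference, that does most of the narrowing.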

      Whose responsibility? 

      All of these issues have inevitably turned the spotlight towards the social media companies, and what their role should be. Do they, for example, have a moral responsibility to moderate content, to check facts, and to ensure that their users are fed a balanced diet of information? This is a tricky argument, as the companies tend not to define themselves as “media organisations” in the traditional sense, but rather as neutral technology platforms which are not bound by editorial control. The counter-argument is that they already exert some form of editorial control by setting rules and standards about what is acceptable and permitted content – the recent furore over the apparent removal of breastfeeding photographs on Facebook confirms this. And even if they are eventually defined as media organisations, where then is the boundary to be drawn between blocking or removing unacceptable content, and censorship? These are big questions, with no easy answers.

      Pro-Truth Librarians 

      However, while these issues have been thrown into sharp relief by the events leading up to November 8th, they are not exactly new to those of us who have been walking the information and digital literacy path for the past decade and more. We already know that the bedrock of the work we do is the inculcation of a healthy kind of information scepticism in our students – or “crap detection,” as it is more colloquially known. Social media has been moving the goalposts since the mid-2000s, and “new” literacies such as those identified by Howard Rheingold (2010) have emerged: attention, network awareness and critical consumption, amongst others. This is embedded in all of the frameworks and models that we use to inform our approach, most recently in the ACRL Framework: “Authority is Constructed and Contextual.”

      As an information professional and long-time teacher of information and digital literacy, I find the public debate on the potential effects of social media one of the most heart-thumping, genuinely thrilling moments in more than a decade of working with undergraduates and future library professionals – in a strange way, it feels like a coming of age. We know this stuff. We knew what was coming down the line. We understand that education, education, education is the key. But it is also exciting because it demands that we re-appraise our role, and reflect deeply on what it is we are charged to do. It asks that we confront the issue of our responsibility towards our students in light of our growing awareness of the effects of social media – or consider whether it is our responsibility at all.

      Because I am a teacher, I tend to frame these issues in terms of how I could, or should, address them in my modules. Following the US election, a story emerged about Melissa Zimdars, a communications professor in Massachusetts, who compiled a Google Doc of “False, Misleading, Clickbait-y, and Satirical ‘News’ Sources” to distribute to students in her communications module (Dreid, 2016). Perhaps unsurprisingly, the list was shared and quickly went viral, followed by widespread questioning of the criteria that determined which sites were included, as well as concerns about potential legal action from the site owners. That is one way of addressing it: a top-down approach. However, while an interesting idea, it is far from certain that maintaining a register of questionable resources can really solve this problem; for me, it is the equivalent of placing a finger in the crack in the dam. You can hold back the deluge only for so long. My instinct has always been to turn the responsibility over to the students, while equipping them with the tools to make reasonable judgements. Since introducing a revamped Digital Literacy module for undergraduates in 2012, I have increasingly been aware of a new tone creeping into my classes; often, I seem to find myself exhorting my students to Be Alert! See how you are being manipulated! Understand that YOU are the product! Know what clues to look for, and avoid pitfalls! Always check the facts! These exhortations are typically grounded in explorations of digital footprints, online reputation management and cybersecurity. I explain that, as individuals, they must decide for themselves where they stand on these issues, and what they are willing to accept. However, doing this in a balanced way is challenging; I frequently feel as if I am searching for that fine line between preaching, paranoia and common sense. I also wonder if I am somehow overstepping the mark.

      The fallout from the social media furore has also led me to look again at the concepts of critical information literacy (CIL) and critical pedagogy in library instruction, which are rooted in the broader notion of the social justice work done by librarians. CIL “aims to understand how libraries participate in systems of oppression and find ways for librarians and students to intervene upon these systems” (Tewell, 2016). Its goal is to highlight inequalities and injustices with regard to information access, to ask students to consider the ramifications of these injustices, and to explore what might be done to address them. This can be powerful and transformative practice. But like the social media issues discussed above, it also asks us to reappraise our role as teaching librarians, and to question whether this is, or should be, our responsibility.

      While I am not sure what the answer is, I apply the same reasoning as I have always done when advocating for information literacy: If not us – then who? I would be very interested in hearing other perspectives on this. It truly is an exciting time for information professionals.

       

      This was a guest post by Claire McGuinness, assistant professor in the School of Information & Communication Studies, UCD. 
      Claire has a long-held interest in information and digital literacies, new media, and the role of the teaching librarian. In this post, she examines filter bubbles, fake news and the effect of social media in the “post-truth society” and asks whether librarians have a responsibility to their users and students to point out where the line between fact and fiction has been blurred.

       

      Relevant References:

      BAI (2016, June 15). Over half of Irish consumers (52%) now get their news via social media sites. Retrieved from: http://www.bai.ie/en/over-half-of-irish-consumers-52-now-get-their-news-via-social-media-sites/

      Dreid, N. (2016, Nov 17). Meet the professor who’s trying to help you steer clear of clickbait. Chronicle of Higher Education. Retrieved from: http://www.chronicle.com/article/Meet-the-Professor-Who-s/238441

      Flood, A. (2016, Nov 15). 'Post-truth' named word of the year by Oxford Dictionaries. Guardian. Retrieved from: https://www.theguardian.com/books/2016/nov/15/post-truth-named-word-of-the-year-by-oxford-dictionaries

      Gottfried, J., & Shearer, E. (2016). News use across social media platforms 2016. Pew Research Center. Retrieved from: http://assets.pewresearch.org/wp-content/uploads/sites/13/2016/05/PJ_2016.05.26_social-media-and-news_FINAL-1.pdf

      Lawn, D. (2016, Nov 16). Journalists are helping to create a dangerous consensus. Irish Times. Retrieved from: http://www.irishtimes.com/opinion/journalists-are-helping-to-create-a-dangerous-consensus-1.2868638

      Murdock, S. (2016, Nov 15). Facebook, Google take small steps to stop spread of fake news. Huffington Post. Retrieved from: http://www.huffingtonpost.com/entry/google-facebook-fake-news-election-2016_us_582b7955e4b0aa8910bd60e3

      Pariser, E. (2011). The filter bubble: What the internet is hiding from you. New York: Penguin Press.

      Rheingold, H. (2010). Attention and other 21st-century social media literacies. Educause. Retrieved from: https://net.educause.edu/ir/library/pdf/ERM1050.pdf

      Shahani, A. (2016, Nov 11). Zuckerberg denies fake news on Facebook had impact on the election. All Tech Considered: Tech, Culture and Connection. Retrieved from: http://www.npr.org/sections/alltechconsidered/2016/11/11/501743684/zuckerberg-denies-fake-news-on-facebook-had-impact-on-the-election

      Solon, O. (2016, Nov 10). Facebook’s failure: did fake news and polarized politics get Trump elected? Guardian. Retrieved from: https://www.theguardian.com/technology/2016/nov/10/facebook-fake-news-election-conspiracy-theories

      Wong, J.C., Levin, S., & Solon, O. (2016, Nov 16). Bursting the Facebook bubble: we asked voters on the left and right to swap feeds. Guardian. Retrieved from: https://www.theguardian.com/us-news/2016/nov/16/facebook-bias-bubble-us-election-conservative-liberal-news-feed