Unless otherwise stated, lab meetings will be held at 10:30 am in room E94/1579 (245 First St. Cambridge, MA 02142, USA).
2/11 - Erez Yoeli and Molly Moore.
As most of you know, we've been building a social media platform to motivate more--and more deliberate--giving. The key feature of the platform is a single profile that intelligently displays your giving history. We teamed up with Ideas42 to run simple M-Turk experiments that test some of our assumptions, like: Do people really feel more motivated to give when their giving is displayed in such a profile? Do people think more highly of others whose giving is displayed in such a profile? Would they feel comfortable using such a website? We'll present the results of these tests, and we hope to incorporate your feedback both into the design of the platform and into figuring out how and where to publish these studies. Thanks!
2/25 - Gordon Kraft-Todd.
Moral exemplars and leaders by example exhibit dissociable mechanisms of prosocial persuasion. What characteristics make an individual effective at influencing others to contribute to public goods? Three converging lines of evidence - prestige-biased cultural transmission (Henrich & Gil-White, 2001), moral exemplars (Walker & Hennig, 2004), and elevation (Algoe & Haidt, 2009) - support one candidate: moral excellence, which we test across 3 studies (total N = 2,411). In Study 1, we cross our manipulation of leading by example (Kraft-Todd et al., 2018) with 4 different manipulations of moral excellence, and consistently find that moral excellence and leading by example are two independent routes to successfully advocating for contributions to a non-normative public good. In Study 2, we have subjects rate the advocates in our stimuli along 11 dimensions and show that trait perceptions distinguish moral excellence from leading by example. Finally, in Study 3, we use secondary dependent variables from two of the Study 1 experiments to conduct two multiple-mediation models, both of which support the account that admiration is a dissociable mechanism explaining the effectiveness of moral excellence.
3/4 - Bethany Burum.
Empathy tracks incentives. We sometimes think of empathy as a shield against harming others, but empathy is malleable, as history (e.g., the increased callousness toward Jews under the Nazis) and lab experiments (e.g., the Stanford prison experiment) both show. In a series of studies, we show that empathy tracks our incentive to help, or to avoid helping. In Study 1, subjects gave more to a family in need when their giving would be observable to their partner in a subsequent trust game and, critically, also reported feeling more empathy for the family. In Study 2, subjects gave less to a family in need when giving was more costly and, critically, also reported feeling less empathy for the family. Study 3 replicated the classic scope-insensitivity effect, whereby empathy was no higher when contemplating five suffering victims than when contemplating one. However, when there was an evolutionary motive to care about impact because the victims were family members, empathy became scope sensitive.
3/18 - Ziv Epstein.
How can social media platforms effectively fight the spread of fake news and other misinformation online? One possibility is to use newsfeed algorithms to downrank content from low-quality sources. Although this proposal is essentially algorithmic, the challenge lies not in the details of the algorithm, but in how to identify the quality of news sources - that is, the real challenge for this approach is a social science challenge. Here we assess the utility of crowdsourcing for addressing this challenge. Are laypeople good judges of news source quality? Or are they either uninformed, or motivated to “game” the crowdsourcing mechanism in order to advance their partisan agenda? To shed light on these questions, we conducted a survey experiment with a nationally representative sample of Americans. Participants were asked to rate their familiarity with, and trust in, a range of mainstream, hyper-partisan, and fake news sites. Additionally, to study the tendency of people to game the system, half of the participants were told that their responses would be shared with social media companies to help inform ranking algorithms. We find that laypeople in our sample are quite successful at discriminating between high- and low-quality content: they provide much higher trust ratings to mainstream sources than to hyper-partisan or fake news sources, and this successful discernment was unaffected by informing them that their responses would influence ranking algorithms. Our results have important implications for the deployment of decentralized, scalable approaches to fighting misinformation online.
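The downranking idea described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration only: the domain names, the trust ratings, and the weighting scheme (engagement multiplied by mean crowdsourced trust) are hypothetical, not the algorithm of any actual platform or of this study.

```python
# Illustrative sketch only: downrank newsfeed items using hypothetical
# crowdsourced trust ratings of their sources.
from statistics import mean

# Hypothetical layperson trust ratings (0-1 scale) per news domain.
CROWD_TRUST = {
    "mainstream-news.example": [0.9, 0.8, 0.85],
    "hyperpartisan.example": [0.4, 0.3, 0.5],
    "fakenews.example": [0.1, 0.2, 0.05],
}

def source_quality(domain):
    """Average crowdsourced trust for a domain; unknown domains get 0.5."""
    ratings = CROWD_TRUST.get(domain)
    return mean(ratings) if ratings else 0.5

def rank_feed(items):
    """Sort feed items by engagement weighted by source quality."""
    return sorted(
        items,
        key=lambda item: item["engagement"] * source_quality(item["source"]),
        reverse=True,
    )

feed = [
    {"id": 1, "source": "fakenews.example", "engagement": 100},
    {"id": 2, "source": "mainstream-news.example", "engagement": 60},
    {"id": 3, "source": "hyperpartisan.example", "engagement": 80},
]
ranked = rank_feed(feed)
# A highly engaging item from a low-trust source falls below less
# engaging items from higher-trust sources.
```

The point of the sketch is the one made in the abstract: the sorting step is trivial; the hard part is obtaining defensible quality scores, which is where the crowdsourced ratings come in.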
3/21 - Nadia Brashier. This meeting will be held at 1:30 pm.
Cognitive and affective “shortcuts” for truth. Every day, we encounter false claims that range from silly (e.g., We use 10% of our brains) to dangerous (e.g., Vaccines cause autism). How do we know what to believe? My research suggests that people use cognitive shortcuts to infer truth. As politicians and marketers realize, one such heuristic involves repetition. Repeated statements feel easier to process, and thus truer, than new ones. This illusory truth effect occurs even when claims contradict well-known facts (e.g., The capital of France is Madrid). Unless prompted to “fact check,” young adults neglect their knowledge. Encouragingly, older adults spontaneously “stick with what they know.” In a related line of work, affect serves as another rule of thumb. Assertions incidentally paired with angry or fearful faces seem slightly less truthful than those paired with neutral faces. Again, older adults show more discernment – this time by disregarding irrelevant emotional information. My findings suggest ways to cope in the current climate of misinformation, where falsehoods travel further and faster than the truth.
4/1 - Jon McPhetres.
A Perspective on the Relevance and Public Reception of Psychological Science. One question that arises when considering issues of generalizability and replicability is whether or not social psychological research is relevant. To examine which research internet users find interesting, I collected data from the website Reddit (r/science) and compared 'upvotes' to traditional journal metrics. Do lay audiences and social psychologists value the same research? What are the most popular topics on r/science?
(DM) Informative Fictions: A Theory of Misinformation. Why is misinformation--fake news, rumors, and conspiracy theories--so popular? Research has uncovered a number of proximate explanations, such as group-oriented biases or reduced analytic thinking, but little work has explored why such otherwise dysfunctional predilections would be so common. In this talk I will argue that misinformation can actually be functional, in the sense that it can communicate valuable information more precisely than facts alone. While common forms of misinformation are dysfunctional for communicating property information--information about the state and operation of things--they can be valuable for communicating character information--information about the motivations of social agents. In particular, narratives containing “false facts” can more effectively portray a speaker’s theory of another individual’s character, making these “fictions” useful for gathering information about leaders and other important individuals in the community. This theory of “informative fictions” is then used to derive several testable propositions regarding the conditions under which misinformation will be accepted, tolerated, or promoted. (AAA) Nudging Public Engagement Against Corruption. Corruption is one of the most challenging problems Mexico currently faces. In this project, I use a field experiment to show how social norms can be used in a short news story, presented on Twitter, to raise awareness and disseminate anti-corruption policies.
4/22 - Jillian Jordan. This meeting will be held via Skype.
In this talk, I will present new work with Nour Kteily examining the role of reputation motives in moralistic punishment, in the context of relatively more and less ambiguous transgressions.
The emergence of social media has changed the way we both communicate and gain access to information about our world. While allowing for new forms of connectivity and coverage, social media has also facilitated the proliferation of misinformation. But what are the cognitive mechanisms that underlie the spread of fake news? And how can we design interventions that counteract these falsehoods across ideological boundaries? Recent studies find that people who engage in more reasoning are better at identifying fake news, regardless of whether it aligns with their ideology. Here, we seek to establish whether this link is causal, and whether it extends to social media sharing rather than just belief. In doing so, we also aim to develop a scalable behavioral intervention to reduce misinformation. To that end, we conduct a field experiment on Twitter to test whether inducing people to reflect on the accuracy of content can indeed increase the quality of the news they subsequently share.