<table style="border:1px solid #adadad; background-color: #F3F1EC; color: #666666; padding:8px; -webkit-border-radius:4px; border-radius:4px; -moz-border-radius:4px; line-height:16px; margin-bottom:6px;" width="100%">
<tbody>
<tr>
<td><span style="font-family:Helvetica, sans-serif; font-size:20px;font-weight:bold;">PsyPost – Psychology News</span></td>
</tr>
<tr>
<td> </td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/neuroticism-linked-to-liberal-ideology-in-young-americans-but-not-older-generations/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Neuroticism linked to liberal ideology in young Americans, but not older generations</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Jan 18th 2026, 08:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>New research published in the <em><a href="https://doi.org/10.1111/issj.70025" target="_blank">International Social Science Journal</a></em> suggests that the relationship between personality and political beliefs in the United States varies significantly by age. The findings indicate that higher levels of neuroticism are associated with liberal ideology among young Americans, but this association is absent in older generations. This generational divide implies that growing up in a highly competitive historical period may play a role in shaping both the mental health and political orientations of American youth.</p>
<p>Social scientists have observed that political polarization in the United States is increasingly defined by a generational gap. Older cohorts have trended toward conservatism in recent decades. Simultaneously, younger cohorts have moved decisively toward liberal positions. While much research has focused on why older adults have shifted right, fewer studies have examined the psychological drivers behind the leftward shift of the youth.</p>
<p>Francesco Rigoli, a social scientist at City St George&#8217;s, University of London, sought to address this by examining the role of neuroticism. Neuroticism is a fundamental personality trait associated with a predisposition toward negative emotions, including anxiety, sadness, and irritability. Previous data has shown that rates of anxiety and depression have surged among young people in the United States. Rigoli proposed that this increase in mental distress might be a contributing factor to the adoption of liberal ideologies.</p>
<p>“The study was motivated by the observation that, while in recent years old Americans have moved to the conservative camp, young Americans have become progressively more liberal. Why this has occurred remains poorly understood, and my article aimed at shedding light on this question,” Rigoli told PsyPost.</p>
<p>The rationale for this investigation centers on what the author calls the “Generational Hypothesis.” This theory posits that the social environment in the United States has changed drastically since the 1970s. Older Americans spent their formative years during the post-war period, an era often characterized by greater economic stability, stronger labor unions, and a more collectivistic culture.</p>
<p>In contrast, younger Americans have matured during a “contemporary” period defined by intense competition. This era has seen a decline in social capital, increased return on higher education alongside rising debt, and greater labor market insecurity. The researcher argues that growing up in this environment increases the likelihood of developing neurotic traits. Consequently, young people with higher neuroticism may turn to liberal ideology because it often critiques hyper-competition and advocates for social safety nets that offer protection against risk.</p>
<p>To test this hypothesis, Rigoli conducted a series of three studies. The first study utilized data from the 2022 General Social Survey, a long-running and representative survey of the American population. The final sample for this analysis included 1,644 participants. The study measured political ideology using a seven-point scale where participants identified themselves as anywhere from “extremely liberal” to “extremely conservative.”</p>
<p>Neuroticism in the first study was assessed using two specific questions. These items asked respondents how often they felt nervous or were unable to control their worrying during the previous two weeks. The researcher controlled for various demographic factors, including gender, income, education, and ethnicity, to isolate the relationship between age, personality, and politics.</p>
<p>The results of the first study revealed a significant interaction between age and neuroticism. Among younger adults, higher levels of neuroticism were predictive of a more liberal ideology. Statistical analysis showed this link was significant for individuals around ages 29 and 43. However, for participants aged 57 and older, the connection disappeared entirely. The data indicated that the link between emotional instability and liberalism was present up until approximately age 47.</p>
<p>The second study aimed to replicate these findings using a more robust measure of personality. The researcher recruited 600 participants living in the United States through an online platform called Prolific. The recruitment process ensured a balanced distribution of gender and political views across different age groups.</p>
<p>In this second study, neuroticism was measured using the Big Five Inventory. This is a validated psychological scale that employs eight different statements to gauge personality, offering a more reliable assessment than the two-item measure used in the first study. Participants rated their agreement with statements regarding their tendency to be depressed or handle stress well.</p>
<p>The findings from the second study confirmed the results of the first. Higher neuroticism scores significantly predicted liberal ideology in participants aged 23 and 41. Once again, this relationship was not found in the older cohort, represented by an average age of 59. Detailed statistical analysis determined that the effect of neuroticism on ideology was significant only for individuals younger than 54 years old.</p>
<p>The researcher also checked for potential confounds in both American studies. Additional analyses examined whether gender, ethnicity, income, or education influenced the interaction between age and neuroticism. The results showed that the age-specific link between personality and politics remained consistent regardless of these demographic variables.</p>
<p>The third study sought to determine if this phenomenon was unique to the United States or a universal consequence of aging. If the link between youth, neuroticism, and liberalism were biological, it should appear in other countries. The researcher analyzed data from the World Values Survey, which included 23,368 participants from 20 different countries. These countries spanned various cultural regions, including nations in Europe, Asia, Africa, and South America.</p>
<p>The analysis of the international data produced different results from the American studies. Across the 20 countries examined, there was no consistent evidence of an interaction between age and neuroticism. In the United States, younger adults high in neuroticism tend to be liberal, while older adults high in neuroticism show no particular leaning. In the international sample, this generational difference did not exist.</p>
<p>This absence of a global effect supports the idea that the American findings are likely due to specific generational experiences rather than the aging process itself. The data suggests that the unique social and economic pressures of the contemporary United States may be driving the association between mental distress and political views among the youth.</p>
<p>“The article hypothesizes that, compared to older American cohorts, younger ones have grown up during a more competitive historical period that has led many to become more neurotic (i.e. to be more predisposed to low mood, anxiety, and irritability) and, in turn, to become more liberal. This predicts that, in the United States, neuroticism is linked with liberal ideology in young, but not old, people. This prediction is supported in two studies.” </p>
<p>“These studies show that young American liberals are more neurotic than young American conservatives. Meanwhile, among older Americans, liberals and conservatives have the same level of neuroticism. A third study found no such pattern outside the United States, suggesting that the effect observed in the United States is not due to aging but to generational experiences. Overall, these findings highlight a potential role for neuroticism in explaining why young Americans have become more liberal.”</p>
<p>Despite the consistent findings across the two American studies, there are limitations to consider. The research is correlational, meaning it cannot definitively prove that neuroticism causes young people to become liberal. It is possible that holding liberal views in a polarized society leads to higher anxiety, or that a third unmeasured factor causes both. Future research is needed to unpack the specific mechanisms at work.</p>
<p>“The next step is to investigate empirically why neuroticism is linked with ideology among young, but not old, Americans. Is it because young Americans have grown up in a more competitive age, as hypothesized in the paper? This remains to be explored empirically.”</p>
<p>“In general, these findings encourage people to reflect on the benefits and costs of living in a competitive society like the American one. It invites readers to reflect on how competition may affect people’s mental wellbeing, and in turn on how this may have implications for politics.”</p>
<p>The study, “<a href="https://doi.org/10.1111/issj.70025" target="_blank">Neuroticism Is Linked With Liberal Ideology in Young, but not Old, People in the United States</a>,” was authored by Francesco Rigoli.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/trump-supporters-and-insecure-men-more-likely-to-value-a-large-penis-according-to-new-study/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Trump supporters and insecure men more likely to value a large penis, according to new research</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Jan 18th 2026, 06:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>New research published in the journal <em><a href="https://doi.org/10.1037/men0000546" target="_blank">Psychology of Men &amp; Masculinities</a></em> provides evidence that men who feel insecure about their masculinity are more likely to place a high value on having a large penis. The findings suggest that for some men, the penis serves as a symbol of status and dominance, and the desire for a larger one is partly driven by feelings of humiliation regarding failures to meet social expectations of manhood.</p>
<p>Men’s concerns regarding penis size are often treated as a source of amusement in popular culture. Despite the comedic treatment, these preoccupations can have serious negative consequences for men’s mental health, sexual satisfaction, and romantic relationships. </p>
<p>Previous observations by historians and scientists have suggested that the human penis functions as an organ of display intended to signal status to other men. The authors of the current study sought to empirically test the psychological mechanisms underlying this phenomenon. </p>
<p>“For many years, I had noticed that men seemed to have an interest in penis size and to admire large penises. I found it curious that men seemed to have a lot more admiration for large penises than women do. I found research that suggested this is the case – men do have a greater value for penis size than women,” said study author Cindy Harmon-Jones, a senior lecturer at Western Sydney University. </p>
<p>“If the reason men value a large penis is not because big penises are desired by women, then what could account for it? Anecdotally, I observed that men who admired large penises seemed angry and hostile. But this was just a personal observation that needed to be tested empirically to see whether it was supported or not.”</p>
<p>“One of my co-authors (Eddie Harmon-Jones) subsequently found a passage in Jared Diamond’s book <em>Why Is Sex Fun?</em> where Diamond proposed that a large penis might serve as a display signal of masculinity. That was interesting but also needed to be tested empirically,” Harmon-Jones continued.</p>
<p>“That led me to symbolic self-completion theory. This theory, created by Robert Wicklund, proposed that when a person feels threatened in an important domain they will be motivated to display other symbols of success in that domain in an attempt to receive validation. For example, I had a previous paper where I found that academics who aren’t succeeding in publishing as many articles display more titles in their email signatures.”</p>
<p>“Based on that theory, I thought that men who are feeling like they are not succeeding in the masculine role might value large penises more, as an alternate symbol of masculinity.”</p>
<p>The researchers conducted four separate studies to test this hypothesis. In Study 1, the team recruited 88 heterosexual men from the United States through an online platform. The participants ranged in age from 20 to 68 years. </p>
<p>Harmon-Jones and her colleagues developed a new assessment tool called the Penis Size Value Scale. This scale measured the extent to which participants believed that larger penises are superior or more important. Participants also completed the Masculine Gender Role Stress Scale. This measure assesses the stress a man anticipates feeling in situations where he might fail to live up to masculine ideals, such as physical inadequacy, emotional inexpressiveness, or subordination to women.</p>
<p>The results of the first study showed a positive correlation between masculine gender role stress and the value placed on a large penis. Men who reported higher stress about failing to meet masculine standards were more likely to believe that penis size is significant. </p>
<p>The researchers also measured traits such as dominance, anger, and aggression. The data indicated that men with higher levels of these traits also tended to value large penises more. There was no significant relationship between valuing a large penis and the frequency of sexual activity or pornography consumption. This suggests that the value is not simply a product of sexual behavior or exposure to adult media.</p>
<p>In Study 2, the researchers sought to replicate the initial findings with a larger sample to allow for more robust statistical analysis. They recruited 201 men from the United States. The procedure remained similar to the first study, with participants completing the Penis Size Value Scale and measures of masculine stress, aggression, and dominance. The researchers performed a multiple regression analysis to determine which factors uniquely predicted the value placed on penis size.</p>
<p>The findings from the second study confirmed the results of the first. There was a clear link between feeling threatened in the masculine role and valuing a large penis. When the researchers analyzed the independent contributions of the different variables, they found that masculine gender role stress remained a significant predictor even when statistically controlling for dominance and aggression. </p>
<p>This provides evidence that insecurity about one’s standing as a man is a primary driver of these beliefs, distinct from general competitiveness or aggressive tendencies. The study also found that valuing a large penis was associated with feeling shame about one’s own penis size and engaging in behaviors to try to enlarge it.</p>
<p>“A concern about penis size might be about more than just the penis,” Harmon-Jones told PsyPost. “It might have to do with a man’s concerns about masculinity more generally.”</p>
<p>For Study 3, the research team recruited 270 men and expanded the scope of their inquiry. They added a new subscale to their assessment tool to specifically measure the desire to possess a large penis, rather than just the abstract value placed on it. They also introduced a measure of humiliation to gauge specific emotional reactions to masculine threats. Additionally, the researchers included questions regarding political and religious views to explore broader social correlates.</p>
<p>The results revealed that both the importance of and the desire for a large penis were positively correlated with masculine gender role stress. The study also found that men who felt humiliated by the prospect of failing as a man reported a stronger desire for a large penis. </p>
<p>In terms of social attitudes, valuing a large penis was correlated with higher levels of religiosity and support for Donald Trump. However, statistical models suggested that the connection to political and religious views might be partially explained by the underlying masculine role stress. The strongest predictor for desiring a larger penis remained the feeling of insecurity regarding masculinity.</p>
<p>The final study moved from correlational observation to an experimental design to test for a causal relationship. The researchers recruited 204 men and randomly assigned them to one of two conditions. In the “threat” condition, participants wrote about memories where they failed to meet masculine standards. In the “no-threat” condition, participants wrote about times they succeeded in the masculine role. Following this writing task, the researchers measured the participants’ current feelings of humiliation and their desire for a large penis.</p>
<p>The manipulation check confirmed that men in the threat condition experienced higher levels of negative emotions, particularly humiliation. Assessing the impact on penis size attitudes, the researchers found that the threat manipulation led to an increase in the desire for a large penis. </p>
<p>A path analysis revealed an indirect effect. The situational threat to masculinity increased feelings of humiliation, and these elevated feelings of humiliation subsequently increased the desire for a large penis. This supports the idea that the emotional experience of humiliation is a key mechanism linking masculine insecurity to body image concerns.</p>
<p>“Regarding my hypotheses, I was most convinced by the results of Study 4, because it’s an experiment,” Harmon-Jones said. “The first three studies are correlational, which means that there could be many alternative explanations for the results. With the fourth study, I manipulated masculine role stress. The men in the role stress condition reported a greater value for large penises. This suggested that even momentary threats to masculinity can change how important men find large penises.”</p>
<p>There are limitations to this research that should be noted. The samples across all four studies consisted entirely of heterosexual men from the United States. It is possible that men from different cultural backgrounds or sexual orientations might respond differently to masculine threats. </p>
<p>The experimental manipulation in the fourth study produced relatively small effects, which is not uncommon in brief laboratory experiments but may differ in intensity from real-world experiences of emasculation. The measure of humiliation used in the studies grouped several negative emotions together, which may obscure the specific role of distinct emotional states.</p>
<p>“This is basic research on a fairly novel topic, so I wasn’t trying to establish the practical significance with these studies,” Harmon-Jones said. “Instead, I hope it will open up the topic and that future studies can explore the practical implications as well as related ideas. I’d encourage readers not to worry too much about effect sizes, because these can vary greatly depending on the strength of manipulations and the sensitivity of measures.”</p>
<p>The findings offer a new perspective on male body image issues. The research suggests that for many men, distress about penis size is deeply intertwined with broader anxieties about their adequacy as men. Clinicians and educators might find this information useful when addressing male body image concerns. </p>
<p>“These studies aren’t primarily about whether men are satisfied or dissatisfied with the size of their own penis,” Harmon-Jones noted. “They are about how threats to masculinity affect how men feel about penis size in general, that is, their feelings about the importance or value of large penises.”</p>
<p>The study implies that the popular trope of men seeking status symbols to compensate for physical insecurities may have a psychological basis, but the dynamic also works in reverse: men who feel their masculinity is threatened may fixate on physical symbols to restore their sense of self.</p>
<p>“I hope that my paper normalizes discussing men’s sexuality and body image including how they feel about their penises and penises in general,” Harmon-Jones said. “This is a topic that’s often a source of jokes and amusement, instead of being taken seriously. Attitudes toward penis size can have serious implications for men’s relationships and mental health.”</p>
<p>The study, “<a href="https://doi.org/10.1037/men0000546" target="_blank">Men’s Value for a Large Penis Relates to Threatened Masculinity, Dominance, and Aggression: A Test of Symbolic Self-Completion Theory</a>,” was authored by Cindy Harmon-Jones, Brandon J. Schmeichel, Elizabeth Summerell, and Eddie Harmon-Jones.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/early-father-child-bonding-predicts-lower-inflammation-in-children/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Early father-child bonding predicts lower inflammation in children</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Jan 17th 2026, 20:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>A new study suggests that the way a father interacts with his infant can influence the child’s heart and metabolic health years later. Researchers found that fathers who were warm and engaged with their babies established family patterns that correlated with lower inflammation and blood sugar levels in children at age seven. These findings were published recently in the journal <a href="https://psycnet.apa.org/record/2026-94777-001?doi=1"><em>Health Psychology</em></a>.</p>
<p>Medical researchers have historically focused on the mother’s role when studying child health outcomes. Many existing studies examine how maternal stress or depression directly impacts a child’s development. Less attention has been paid to how fathers shape the broader family environment. Even fewer studies have looked at how these family dynamics impact physical biomarkers of disease risk in young children.</p>
<p>The family systems perspective suggests that a family functions as an interconnected network. Relationships between specific members ripple out to affect everyone else in the household. Alp Aytuglu, a postdoctoral scholar at The Pennsylvania State University, led a team to investigate these connections. They wanted to understand if early interactions could predict physiological signs of health in elementary school-aged children.</p>
<p>The team utilized data from the Family Foundations project. This larger research initiative followed nearly 400 families starting from pregnancy. The participating families consisted of a mother, a father, and a first-born child. Most of the participants were white, married, and had relatively high levels of education and income.</p>
<p>When the children were 10 months old, researchers visited their homes to observe family life. They filmed interactions where fathers and mothers played with their infants individually. The researchers later reviewed these videos to rate parenting behaviors. They looked for signs of sensitivity, warmth, and engagement.</p>
<p>Sensitivity was defined by how well the parent responded to the child’s cues. Engagement measured how much genuine interest the parent showed in the child’s activities. The researchers returned when the children were two years old. This time, they filmed the mother, father, and child playing together in a group.</p>
<p>These “triadic” interactions allowed the researchers to observe how the parents worked together. They were specifically looking for “coparenting” behaviors. The team focused on a specific negative dynamic known as “competitive-withdrawn” coparenting.</p>
<p>This dynamic occurs when one parent competes for the child’s attention or tries to outdo the other. It also includes instances where a parent disengages or withdraws from the interaction entirely. This pattern can signal underlying conflict or a lack of support between the parents. It often results in inconsistent caregiving or emotional tension that the child can sense.</p>
<p>Five years later, when the children were approximately seven years old, the team collected health data. They used dried blood spot samples taken via a finger prick. They analyzed these samples for four specific biological markers associated with cardiometabolic health.</p>
<p>The first two markers were C-reactive protein (CRP) and interleukin-6 (IL-6). These are indicators of systemic inflammation in the body. Chronic inflammation is a known risk factor for cardiovascular disease. The team also measured HbA1c to assess average blood sugar control over time. Finally, they measured total cholesterol levels.</p>
<p>The data analysis revealed a specific chain of events for fathers. Men who displayed higher sensitivity and warmth at 10 months were less likely to engage in competitive or withdrawn coparenting at two years old. This suggests that early bonding creates a foundation for better teamwork between parents later on.</p>
<p>This reduction in negative coparenting behavior strongly predicted better health outcomes for the children at age seven. Specifically, children in these families had lower levels of CRP and HbA1c. This suggests that a father’s early emotional investment helps stabilize the family system in a way that protects the child’s physical body.</p>
<p>The researchers ran the same analysis for mothers. They did not find the same statistical link between early maternal warmth, coparenting behaviors, and the child’s physical health markers. This does not imply that mothers do not influence their children’s health.</p>
<p>Mothers in the study generally displayed higher levels of sensitivity overall compared to fathers. This consistency might make it harder to detect statistical variations based on their behavior. It is also possible that because mothers are often the primary caregivers, their warmth is the expected “norm” in the family. In this context, the father’s behavior might act as a variable that tips the scale toward stability or stress.</p>
<p>The researchers hypothesize that stress is the biological mechanism connecting these behaviors to physical health. A competitive or withdrawn coparenting dynamic likely creates a stressful environment for the child. Chronic stress can dysregulate the hypothalamic-pituitary-adrenal (HPA) axis.</p>
<p>The HPA axis controls the body’s response to stress. When it is overactive due to family tension, it can lead to increased inflammation and metabolic issues. The findings align with the “father vulnerability hypothesis.” This theory posits that fathers may be more susceptible to marital negativity.</p>
<p>If a father feels unsupported or competitive, that negativity may spill over into the parent-child relationship more easily than it does for mothers. This makes the father a unique channel for how relational stress affects the child. The study highlights that fathers contribute to the family system in distinct ways that are measurable in the child’s blood.</p>
<p>There are limitations to this study that should be noted. The population sample was not very diverse. The families involved were primarily White, heterosexual, and affluent. The results may not apply to families with different racial backgrounds or socioeconomic statuses.</p>
<p>The study focused on families where the father and mother lived together. Family structures involving single parents, same-sex couples, or multigenerational households might operate differently. Additionally, the researchers were unable to combine mother and father data into a single statistical model due to mathematical constraints.</p>
<p>Future research will need to look at more diverse populations to see if these patterns hold true. It would also be beneficial to explore how other family subsystems, such as sibling relationships, influence these health markers. Observing nutrition and exercise habits in these families could also provide a clearer picture.</p>
<p>The findings have practical implications for family health. They suggest that supporting fathers in early parenthood could be a preventive strategy for child health issues. Interventions that help fathers engage sensitively with their infants could prevent negative coparenting dynamics from developing.</p>
<p>Programs that teach parents how to support each other as a team could lower family stress. This, in turn, could reduce the child’s risk of developing heart and metabolic problems later in life. The research emphasizes that a child’s health is shaped by the entire family unit.</p>
<p>The study, “<a href="https://doi.org/10.1037/hea0001567">Longitudinal Associations Between Father- and Mother-Child Interactions, Coparenting, and Child Cardiometabolic Health</a>,” was authored by Alp Aytuglu, Jennifer E. Graham-Engeland, Mark E. Feinberg, Samantha A. Murray-Perdue, C. Andrew Conway, and Hannah M. C. Schreier.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/learning-from-ai-summaries-leads-to-shallower-knowledge-than-web-search/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Learning from AI summaries leads to shallower knowledge than web search</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Jan 17th 2026, 18:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>A set of experiments found that people who learn about a topic from large language model summaries develop shallower knowledge than those who learn through standard web search. Participants who learned from large language models felt less invested in forming their advice, and produced advice that was sparser and less original than advice based on learning through web search. The research was published in <a href="https://doi.org/10.1093/pnasnexus/pgaf316"><em>PNAS Nexus</em></a>.</p>
<p>Large language models (LLMs) are artificial intelligence systems designed to interpret and generate human language by learning statistical patterns from vast collections of text. They are typically based on deep learning architectures, which allow them to process context and relationships between words over long passages. The most popular large language models today include those developed by OpenAI (GPT series used in ChatGPT), Google (Gemini), Anthropic (Claude), and Meta (LLaMA).</p>
<p>The development of large language models has progressed rapidly over the last decade due to advances in computing power, the availability of large datasets, and improvements in training algorithms. Early models focused mainly on simple text prediction, while modern models can perform complex reasoning, summarization, translation, and dialogue. Training usually involves two main stages: large-scale pretraining on general text and fine-tuning on more specific tasks or with human feedback.</p>
<p>These models are widely used in applications such as chatbots, virtual assistants, search engines, and automated customer support. In education and research, they assist with writing, coding, literature reviews, and data exploration. In business and industry, they are used for document analysis, marketing content generation, and decision support. Despite their usefulness, large language models sometimes produce errors, biases, or misleading information because they do not truly understand the world but rely on patterns learned from the materials used for their training.</p>
<p>Study authors Shiri Melumad and Jin Ho Yun note that many people use summaries of various materials generated by LLMs as learning tools. However, when learning from LLM summaries, users no longer need to exert the effort of gathering and distilling different informational sources on their own. The study authors hypothesized that this lower effort in assembling knowledge from LLM summaries might suppress the depth of knowledge users gain compared to learning through traditional web search, resulting in shallower knowledge. In turn, this shallower knowledge would result in less investment in giving advice based on that knowledge, and in sparser and less unique advice content. Such advice would then be seen as less informative and persuasive.</p>
<p>The study authors conducted a series of experiments to verify elements of their model. The first experiment involved 1,104 participants recruited via Prolific. They were told to imagine that a friend was seeking advice on how to plant a vegetable garden. One group of participants had to learn about this through Google search, while the other learned from ChatGPT. They would then give advice.</p>
<p>The second experiment involved 1,979 participants recruited via Prolific. It was the same as the first experiment, but the participants were limited to typing just one query. The query did not result in a typical search or response generation. Instead, participants were all given the same results formulated either as a series of linked websites or a summary of ChatGPT-style suggestions.</p>
<p>The third experiment was similar to experiment one, but the two groups of participants either used Google search or Google’s “AI Overview” (and not ChatGPT). They were to give advice about leading a healthier lifestyle. In this way, the platform was held constant. Participants in the fourth experiment rated various characteristics of the advice produced in the third study.</p>
<p>Results of these experiments showed that participants who used LLM summaries spent less time learning and reported learning fewer new things. They invested less thought and spent less time writing their advice. As a result, they felt lower ownership of the advice they produced. Overall, this supported the idea that learning from LLM summaries results in shallower learning and lower investment in acquiring knowledge and using it.</p>
<p>Participants learning from web searches and websites produced richer advice with more original content. Their advice texts were longer, more dissimilar to each other, and more semantically unique.</p>
<p>“A theory is proposed that because LLM summaries lessen the need to discover and synthesize information from original sources—steps essential for deep learning—users may develop shallower knowledge compared with learning from web links. When subsequently forming advice on the topic, this manifests in advice that is sparser, less original—and less likely to be adopted by recipients. Results from seven experiments support these predictions, showing that these differences arise even when LLM summaries are augmented by real-time web links, for example. Hence, learning from LLM syntheses (vs. web links) can, at times, limit the development of deeper, more original knowledge,” the study authors concluded.</p>
<p>The study contributes to the scientific understanding of how people learn using LLMs. However, it should be noted that the initial experiments involved hypothetical scenarios (advising a friend), though later experiments confirmed the results held even when the topics were of high personal relevance to the participants. </p>
<p>Additionally, the experiments involved paid participants—individuals likely motivated primarily by the reward for participation, which did not depend on the quality of the advice they produced. Results of studies examining real-world learning situations, where participants feel responsible for the outcomes of their learning and have a personal stake in the quality of the advice they produce, may differ.</p>
<p>The paper, “<a href="https://doi.org/10.1093/pnasnexus/pgaf316">Experimental evidence of the effects of large language models versus web search on depth of learning,</a>” was authored by Shiri Melumad and Jin Ho Yun.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/elite-army-training-reveals-genetic-markers-for-resilience/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Elite army training reveals genetic markers for resilience</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Jan 17th 2026, 16:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>A new analysis of soldiers attempting to join the U.S. Army Special Forces suggests that specific genetic variations play a role in how individuals handle extreme physical and mental pressure. The research identified distinct links between a soldier’s DNA and their cognitive performance, psychological resilience, and physiological stress response during a grueling selection course. These findings were published recently in the academic journal <em><a href="https://doi.org/10.1016/j.physbeh.2025.115150" target="_blank">Physiology & Behavior</a></em>.</p>
<p>To become a member of the elite Army Special Forces, a soldier must first pass the Special Forces Assessment and Selection course. This training program is widely recognized as one of the most difficult military evaluations in the world. Candidates must endure nearly three weeks of intense physical exertion. They face sleep deprivation and complex problem-solving exercises. The attrition rate is notoriously high. Approximately 70 percent of the soldiers who attempt the course fail to complete it. This environment creates a unique laboratory for scientists to study human endurance.</p>
<p>Researchers have sought to understand why some individuals thrive in these punishing environments while others struggle. Resilience is generally defined as the ability to adapt positively to adversity, trauma, or threats. It involves a combination of psychological stability and physiological recovery. While physical training and mental preparation are essential, biological factors also play a substantial role. Genetics help determine how the brain regulates chemicals and how the body processes stress hormones.</p>
<p>To investigate these biological underpinnings, a team of researchers led by Martha Petrovick of MIT Lincoln Laboratory and senior author Harris R. Lieberman from the U.S. Army Research Institute of Environmental Medicine launched a comprehensive study. Their goal was to determine if genetic profiles previously linked to mental health or stress responses in the general public would also manifest as resilience markers in elite soldiers. They hypothesized that specific genetic variations would correlate with success in the assessment course.</p>
<p>The study included 800 male soldiers who volunteered for the selection course. The participants were already active-duty soldiers who had completed other rigorous military training. They were young and physically fit, with an average age of 25. Before the assessment began, the researchers collected blood samples to analyze each soldier’s DNA. They specifically examined 47 different genes known to influence brain function, sleep cycles, and hormone regulation.</p>
<p>The researchers also administered a battery of standardized tests to the candidates. These assessments measured general intelligence and vocational aptitude. They also evaluated personality traits such as grit and self-reported resilience. The team collected additional blood samples to measure levels of cortisol and C-reactive protein. Cortisol is the primary hormone the body releases in response to stress. C-reactive protein is a biological marker that indicates inflammation and immune system activation.</p>
<p>The analysis revealed that several specific genetic variations were indeed associated with better cognitive performance and higher resilience scores. The researchers found that the influence of these genes often varied depending on the soldier’s ancestral background. This highlights the complexity of using genetic markers across diverse populations.</p>
<p>One of the key genes identified in the study was COMT. This gene provides instructions for making an enzyme that breaks down dopamine in the brain. Dopamine is a chemical messenger critical for motivation, executive function, and the ability to solve problems under duress. The researchers found that among White Hispanic candidates, specific variations of the COMT gene were linked to higher scores on intelligence tests. These variations are often referred to as the “warrior allele” in scientific literature because of their association with performance in competitive environments.</p>
<p>The study also highlighted the TPH2 gene. This gene is involved in producing serotonin, a neurotransmitter that helps regulate mood and emotion. Variations in this gene were associated with performance on non-verbal intelligence tasks among White Hispanic soldiers. The researchers observed that soldiers with two copies of a specific minor genetic variant performed better on these tasks than those with different genetic makeups.</p>
<p>The body’s internal clock also appeared to play a role in cognitive aptitude. The researchers examined PER3, a gene that helps regulate circadian rhythms and sleep-wake cycles. They found that certain variations in this gene correlated with scores on the Army’s vocational aptitude battery. This test helps the military assess a soldier’s potential for specific occupations. The link suggests that genetic factors influencing sleep patterns may also affect general cognitive abilities required for military tasks.</p>
<p>Beyond cognitive ability, the study examined genes linked to how individuals perceive their own ability to bounce back from stress. The researchers used a standardized survey called the Connor-Davidson Resilience Scale. In participants categorized in the “Other” demographic group, variations in the CRHR1 gene were associated with higher scores on this resilience questionnaire. The CRHR1 gene produces a receptor for a hormone that initiates the body’s stress response. This receptor is a key component of the hypothalamic-pituitary-adrenal axis, the system that controls reactions to stress and regulates digestion, the immune system, and mood.</p>
<p>Similar associations were found with the MAOB gene. This gene is responsible for an enzyme that breaks down several neurotransmitters, including dopamine and epinephrine. The study found that specific variations in MAOB were linked to higher self-reported resilience. This aligns with previous research suggesting that this gene plays a role in various aspects of mental health and the ability to recover from adverse events.</p>
<p>The researchers also looked for links between genetics and physiological markers of stress. They measured the concentration of cortisol in the soldiers’ blood. Cortisol levels typically rise during times of intense strain to help the body mobilize energy. However, chronic or poorly regulated cortisol can be detrimental. The study found that cortisol levels varied based on genetic profiles.</p>
<p>For Black participants, variations in the FKBP5 gene were linked to higher concentrations of cortisol. The FKBP5 gene acts as a regulator for the body’s stress receptors. It is part of a negative feedback loop that helps the body return to a normal state after a stressful event. Variations in this gene have previously been linked to depression and anxiety disorders in the general population. This study suggests that these same genetic mechanisms influence how healthy soldiers biologically respond to the extreme demands of Special Forces training.</p>
<p>Among White Hispanic soldiers, the CYP1A2 gene showed a similar relationship with cortisol levels. This gene encodes an enzyme in the liver that metabolizes various substances. It is perhaps best known for its role in breaking down caffeine and certain medications. The researchers found that soldiers with a specific variant of this gene had higher levels of circulating cortisol. This suggests a potential overlap between the biological pathways that process foreign substances and those that manage physiological stress.</p>
<p>It is important to note that while these genetic markers were associated with resilience traits, no single gene predicted whether a soldier would ultimately pass or fail the course. The researchers emphasized that resilience is a multifaceted trait. It is shaped by a complex combination of genetics, environmental factors, physical training, and psychological preparation.</p>
<p>The study also has limitations that must be considered. The participant pool consisted entirely of male soldiers. This means the findings may not apply to women or the broader civilian population. Additionally, the study was observational. It identified statistical links between genes and performance but cannot prove that these genes directly caused the differences. The associations were also specific to certain racial and ethnic groups, reinforcing the need for diversity in genetic research.</p>
<p>Despite these caveats, the findings offer a rare glimpse into the biology of human endurance. Most genetic studies on stress focus on individuals with psychiatric disorders or those who have experienced severe trauma. This research demonstrates that the same genetic factors are relevant in healthy, high-performing individuals. The genes that influence susceptibility to depression or anxiety also appear to shape the resilience phenotype under very difficult circumstances.</p>
<p>These findings suggest that the genetic basis for resilience is preserved even under the most severe conditions. Future research could explore how these genetic markers might be used to optimize training. Understanding these biological factors could eventually lead to new strategies to help individuals recover from extreme stress. This could have applications not just for the military, but for anyone facing challenging life experiences.</p>
<p>The study, “<a href="https://doi.org/10.1016/j.physbeh.2025.115150" target="_blank">Genetic markers of stress, resilience and success at an elite military selection course</a>,” was authored by Martha Petrovick, Jessie Hendricks, Emily K. Farina, Lauren A. Thompson, Joseph J. Knapik, Stefan M. Pasiakos, James P. McClung, and Harris R. Lieberman.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/personal-beliefs-about-illness-drive-treatment-uptake-in-untreated-depression/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Personal beliefs about illness drive treatment uptake in untreated depression</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Jan 17th 2026, 14:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>Individuals who recognize their experiences as depression, perceive that the symptoms have serious consequences, and believe that treatment will be effective are significantly more likely to seek therapy or medication, according to a new study published in <em><a href="https://mentalhealth.bmj.com/content/28/1/e301666" target="_blank">BMJ Mental Health</a></em>.</p>
<p>Depression is one of the most common mental health conditions worldwide, yet a substantial proportion of affected individuals never receive treatment. Researchers have pointed to hurdles including long waitlists, limited providers, or missed referrals.</p>
<p>However, individual-level factors have received less attention. Illness beliefs—how people perceive the nature, consequences, and treatability of their symptoms—are known to influence health behaviors in physical illnesses. Yet, these have been less studied in untreated populations with depression.</p>
<p>A team of researchers led by Matthias Klee at Heidelberg University Hospital in Germany sought to investigate whether illness beliefs can predict who follows through with treatment after an online screening.</p>
<p>Klee’s team focused on individuals who had at least moderate symptoms, had not been treated or diagnosed in the past year, and were recruited into a nationwide German trial that delivered automated feedback after web‑based depression screening.</p>
<p>To test this, the researchers analyzed data from 871 adults (average age 37.5 years; 73% female) who completed follow-up six months after screening. Participants filled out a short questionnaire about their beliefs immediately after the screening. It asked, for example, how much their symptoms were interfering with their lives, whether they thought treatment could help, whether they could imagine that they were experiencing depression, and whether this was their first time with such problems.</p>
<p>Researchers then assessed whether participants had initiated psychotherapy or antidepressant medication within six months. The results revealed that 233 participants (26.8%) had initiated treatment.</p>
<p>Three beliefs emerged as significant predictors of treatment initiation. First, individuals who believed their symptoms had a substantial impact on daily life were more likely to seek care.</p>
<p>Second, those who believed treatment could help were more likely to begin therapy or medication.</p>
<p>Third—and most powerfully—those who recognized their symptoms as depression were much more likely to take action. Statistically, each step up in believing treatment would help was linked to a meaningful increase in the chances of starting care, and identifying one’s symptoms as depression was associated with up to an approximately 57% relative increase in the predicted probability of treatment initiation.</p>
<p>These patterns held even when the analysis accounted for many other factors, including the severity of symptoms, whether a structured diagnostic interview suggested major depression, and if the individual already had a preference for seeing a general practitioner or psychotherapist.</p>
<p>“Our findings highlight the importance of illness beliefs for the patient journey mounting at the uptake of depression treatment as a final result of help-seeking. We find that illness beliefs about consequences, treatment control and illness identity predict uptake of depression treatment in a naturalistic setting,” the authors concluded.</p>
<p>However, there are some caveats. For instance, the six-month follow-up may have been too short to capture all treatment initiation, given long wait times for care.</p>
<p>The study, “<a href="https://mentalhealth.bmj.com/content/28/1/e301666" target="_blank">Do illness beliefs predict uptake of depression treatment after web‑based depression screening? A secondary analysis of the DISCOVER RCT</a>,” was authored by Matthias Klee, Franziska Sikorski, Bernd Loewe, and Sebastian Kohlmann.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/people-readily-spot-gender-and-race-bias-but-often-overlook-discrimination-based-on-attractiveness/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">People readily spot gender and race bias but often overlook discrimination based on attractiveness</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Jan 17th 2026, 12:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>People are much harsher when they see outcomes biased by gender or race than by physical attractiveness, largely because attractiveness bias often goes unnoticed, according to research published in the <a href="https://doi.org/10.1037/pspa0000459"><em>Journal of Personality and Social Psychology</em></a>.</p>
<p>Discrimination is widely recognized as unfair, but detecting it in everyday life is not always straightforward. People rarely witness explicit prejudice; instead, they often infer discrimination from patterns in outcomes, such as who gets hired, promoted, or punished. When certain groups are consistently overrepresented or underrepresented, these statistical imbalances can signal bias. Prior research shows that people readily interpret such patterns as unfair when they involve well-known forms of discrimination, such as race or gender.</p>
<p>Bastian Jaeger and colleagues were motivated by a puzzling gap in both research and public discourse: despite strong evidence that <a href="https://www.psypost.org/attractiveness-biases-attributions-of-moral-character-study-finds/">physically attractive individuals receive systematic advantages</a> in domains like hiring, pay, and legal outcomes, “lookism” attracts far less moral outrage than race- or gender-based discrimination. The authors asked whether this apparent tolerance reflects genuine acceptance of <a href="https://www.psypost.org/new-study-finds-beauty-bias-is-robust-but-irrelevant-for-accurate-predictions/">attractiveness-based bias</a>, or whether people simply fail to notice it in the first place.</p>
<p>The present research consisted of six primary studies and two supplemental studies examining how people judge the fairness of statistically biased decision outcomes. Across all studies, a total of 3,591 participants were recruited from the United States and the Netherlands, primarily through Prolific and university subject pools, with several of the samples recruited to be broadly demographically representative.</p>
<p>In most studies, participants were presented with realistic decision scenarios in which they first viewed a pool of individuals that was explicitly balanced on gender, race, and physical attractiveness. Participants were told that all individuals were equally qualified for the relevant decision, such as hiring for a job or determining guilt in a legal case, ensuring that outcome patterns, and not merit differences, were the sole basis for judgments.</p>
<p>In Study 1 and the two supplemental studies, participants evaluated hiring decisions in which selected candidates were either unbiased or strongly biased along one dimension: gender, race, or attractiveness. These studies varied whether bias type was manipulated between or within participants and also varied how clearly the contrast between selected and nonselected candidates was displayed.</p>
<p>Study 2 built directly on this design by intensifying the attractiveness manipulation, using highly attractive versus unattractive AI-generated faces to test whether more extreme attractiveness differences would elicit stronger fairness concerns. Study 3 extended the paradigm beyond hiring to a criminal sentencing context, where participants evaluated the fairness of verdicts that disproportionately convicted individuals based on race or physical attractiveness, allowing the authors to test whether the pattern generalized from rewarding to punitive decisions.</p>
<p>The remaining studies focused on attention and awareness. Study 4 measured spontaneous detection of bias by asking participants to freely describe what stood out about a hiring decision, with responses coded for references to attractiveness, gender, or race. Study 5 manipulated awareness by explicitly informing participants, via a neutral algorithmic message, that a statistical imbalance favoring attractive or White candidates had been detected, while holding the visual information constant.</p>
<p>Finally, Study 6 tested attentional constraints by comparing judgments of attractiveness-biased outcomes when candidate pools varied on gender and race versus when all candidates were White women, making attractiveness the only possible source of bias. Together, these designs allowed the authors to isolate not only how people judge biased outcomes, but also whether they notice those biases in the first place.</p>
<p>Across Studies 1 through 3 and the supplemental replications, the same overarching pattern consistently emerged. Outcomes that were biased by gender or race were judged as substantially less fair than unbiased outcomes, whereas outcomes biased by physical attractiveness were judged as only slightly less fair or, in some cases, no less fair at all.</p>
<p>This pattern held across different experimental formats, across both photographic and AI-generated stimuli, and even when attractiveness bias was extreme, such as when only highly attractive individuals were hired or only unattractive individuals were convicted. Importantly, this muted response to attractiveness bias generalized across domains, appearing both in hiring decisions and in criminal sentencing judgments.</p>
<p>Study 4 revealed a critical asymmetry in attention that helped explain these findings. When participants were asked to describe biased outcomes in their own words, the majority spontaneously identified gender discrimination in gender-biased outcomes and race discrimination in race-biased outcomes. In contrast, only a small minority mentioned attractiveness when outcomes favored attractive individuals, despite the bias being equally strong and visible. This demonstrated that attractiveness bias is far less likely to be noticed spontaneously, even when it objectively structures the outcome.</p>
<p>Studies 5 and 6 showed that this lack of awareness plays a central causal role in fairness judgments. When participants’ attention was explicitly drawn to attractiveness-based disparities, their fairness ratings dropped sharply, and this reduction was substantially larger than the effect of highlighting race bias.</p>
<p>Although race-biased outcomes remained somewhat more negatively evaluated even after awareness was equalized, the large shift in responses to attractiveness bias demonstrated that much of its apparent acceptability stems from failure to detect it. This conclusion was reinforced in Study 6, where attractiveness-biased outcomes were judged more negatively when gender and race were held constant, freeing participants’ attention to focus on attractiveness alone.</p>
<p>Together, these findings suggest that the apparent social tolerance of attractiveness discrimination stems less from moral approval and more from a systematic blind spot in how people scan outcomes for bias.</p>
<p>Participants were recruited from the United States and the Netherlands. Future research is needed to test whether these findings generalize across cultures and with culturally specific stimuli.</p>
<p>The research “<a href="https://doi.org/10.1037/pspa0000459">Social Bias Blind Spots: Attractiveness Bias Is Seemingly Tolerated Because People Fail to Notice the Bias</a>” was authored by Bastian Jaeger, Gabriele Paolacci, and Johannes Boegershausen.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/data-from-28000-people-reveals-which-conspiracy-debunking-strategies-tend-to-work-best/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Data from 28,000 people reveals which conspiracy debunking strategies tend to work best</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Jan 17th 2026, 10:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>A new comprehensive analysis suggests that while conspiracy beliefs are deeply entrenched, they are not entirely resistant to change. The findings indicate that specific intervention strategies, particularly those involving direct fact-checking and alternative explanations, can achieve modest reductions in these beliefs. This meta-analysis was published in the <em><a href="https://doi.org/10.1002/ejsp.70041" target="_blank">European Journal of Social Psychology</a></em>.</p>
<p>Conspiracy theories purport to explain complex events, from the COVID-19 pandemic to geopolitical conflicts. Believing in these narratives can have tangible negative consequences, such as vaccine hesitancy or disregard for democratic norms.</p>
<p><a href="https://lukasz-stasielowicz.com/" target="_blank">Lukasz Stasielowicz</a>, a researcher at the Department of Psychology at the University of Salzburg, undertook this study to move beyond individual experiments and provide a systematic overview of what actually works. While many scientists have tested various persuasion techniques, there has been a lack of clarity regarding which methods consistently yield results and which factors moderate their success. Stasielowicz aimed to quantify the average impact of these interventions and identify the specific characteristics that enhance their effectiveness.</p>
<p>“Conspiracy theories play a role in everyday situations, such as deciding whether to get vaccinated after reading alarming statements in a messaging app, arguing with relatives about political news during a meal, and reading social media posts questioning medical advice from health professionals,” Stasielowicz told PsyPost. “However, there is surprisingly little solid evidence on what actually helps people believe less in harmful conspiracy theories.” </p>
<p>“Many research teams have tested various interventions, but these strategies have not yet been compared to determine what works, for whom, and under what circumstances. This study aimed to systematically compare the available interventions and identify features that make interventions most effective.”</p>
<p>To accomplish this, the researcher conducted a systematic literature search using databases such as Web of Science, PsycINFO, and Google Scholar. The inclusion criteria required studies to measure conspiracy beliefs following an intervention and to employ a control group for comparison. Stasielowicz identified 56 independent samples that met these standards, representing a total of 27,996 participants. The studies were conducted primarily in Western nations, most originating from the United States and Europe.</p>
<p>The analysis utilized a Bayesian three-level meta-analytic model. This statistical approach allowed the researcher to account for the fact that single studies often report multiple outcomes, such as measuring beliefs about several different conspiracy theories simultaneously. The dataset included 273 distinct effect sizes. Stasielowicz examined a wide range of variables to explain differences in outcomes, including the design of the intervention, the demographics of the participants, and the specific nature of the conspiracy beliefs being challenged.</p>
<p>Stasielowicz found that the average intervention effect was small but positive, suggesting that conspiracy beliefs can be reduced, though typically only modestly. The overall effect size was estimated at 0.16, indicating a slight shift in belief rather than a transformative change. However, the data revealed substantial variability across studies, implying that the quality and type of intervention matter significantly.</p>
<p>“One of the most striking findings was how different the results were across studies,” Stasielowicz said. “Some interventions worked quite well, whereas others barely changed beliefs or even seemed to strengthen conspiracy thinking. Thus, designing and deploying interventions needs to be done carefully to avoid backfiring and pushing people deeper into conspiratorial rabbit holes full of implausible narratives. A bad intervention can be worse than no intervention at all.”</p>
<p>Interventions that included fact-checking of specific conspiracy claims tended to be more effective than other approaches. The analysis suggests that scrutinizing details such as dates, timelines, and numbers directly associated with a conspiracy theory yields better results than general attempts to promote skepticism. </p>
<p>Similarly, the data indicates that providing alternative explanations for the events in question can enhance the persuasive power of the message. This suggests that simply debunking a claim is often insufficient unless a factual narrative replaces the conspiratorial one.</p>
<p>The degree to which the intervention content matched the outcome measure also played a role. Stasielowicz found that effects were larger when the arguments presented in the intervention directly addressed the specific beliefs measured in the questionnaire. For example, countering myths about vaccines proved effective for reducing vaccine-related conspiracy beliefs but would likely have little impact on beliefs regarding government surveillance. This finding supports the idea that belief change is often domain-specific rather than a general shift in mindset.</p>
<p>“The main takeaway is that it is possible to reduce conspiracy beliefs, but the average impact of current interventions is modest rather than transformative,” Stasielowicz explained. “Approaches that directly fact-check specific conspiracy claims and carefully examine dates, timelines, quotations, numbers, and internal contradictions tend to be more effective than approaches such as ridicule or the teaching of general scepticism rules.”</p>
<p>“However, we must be careful. Since not every conspiracy theory is necessarily wrong, public institutions, influencers, and researchers need to avoid automatically dismissing discussions as conspiracy theorizing and instead focus on clearly documenting what is and is not supported by evidence.”</p>
<p>The characteristics of the sample population appeared to influence susceptibility to these interventions. The results showed that interventions tended to be more effective among students compared to the general population. Younger participants also appeared more willing to revise their beliefs than older individuals. </p>
<p>Additionally, the analysis provided some evidence that samples with a lower proportion of college graduates showed larger reductions in conspiracy beliefs. This might suggest that individuals with higher levels of formal education hold their non-standard beliefs more rigidly, or perhaps that the specific interventions used were better suited for general audiences.</p>
<p>The measurement tools used by researchers also affected the observed outcomes. Studies that employed longer questionnaires with multiple items to assess conspiracy beliefs tended to detect larger effects than those relying on single-item measures. This implies that nuanced changes in attitude might be missed by overly simplistic survey instruments. The analysis also indicated that interventions were slightly more effective when the baseline level of conspiracy belief was moderate to high, rather than low.</p>
<p>“As we know from everyday experiences, belief change is hard, and conspiracy beliefs are no exception,” Stasielowicz told PsyPost. “Interventions work like dimming a light rather than completely switching it on and off; they facilitate belief change rather than completely transforming people’s minds. They are unlikely to fully convince firm believers to completely reject conspiracy theories, but even slightly lowering beliefs in implausible conspiracy theories may matter at the population level for outcomes like vaccine uptake or peaceful transfer of power after elections. Still, these modest effects highlight how much room there is to improve interventions.”</p>
<p>Stasielowicz identified several limitations within the existing body of research. A significant portion of the included studies were classified as having a high risk of bias. This is largely because it is difficult to blind participants to the purpose of an intervention when they are being presented with arguments against specific conspiracy theories. If participants guess the goal of the study, they may alter their responses.</p>
<p>The meta-analysis also highlighted that many studies were underpowered, meaning they did not include enough participants to reliably detect small effects. This lack of statistical power can lead to an overestimation of effects in published literature. </p>
<p>Additionally, the vast majority of studies only measured immediate changes in belief. There is a scarcity of data regarding whether these reductions in conspiracy thinking persist over days, weeks, or months. The researcher notes that without longitudinal data, it is difficult to determine if these interventions provide lasting benefits or merely temporary shifts in opinion.</p>
<p>“A common misunderstanding is to treat the ‘average effect’ as if it describes every study, when in reality, some interventions helped more effectively than others,” Stasielowicz noted. “Many existing studies focus on short-term impact, are often underpowered, and are at high risk of bias. More high-quality, long-term studies are needed, especially on specific design features that so far have been examined only in a handful of experiments, such as <a href="https://www.science.org/doi/10.1126/science.adq1814" target="_blank">the use of LLM chatbots</a> to reduce conspiracy beliefs.”</p>
<p>“A key long-term goal is to identify which specific intervention components actually drive belief change so that future interventions can be shorter, less costly, and more tailored. By combining this meta-analysis with <a href="https://doi.org/10.1016/j.jrp.2022.104229" target="_blank">my review</a> of the characteristics of people who believe in conspiracy theories, it should be possible to test whether certain groups benefit more from particular intervention strategies and whether personalization adds value.”</p>
<p>“The article is open access, so anyone – including practitioners, journalists, and interested readers – can read the comprehensive results for free,” the researcher added. “The study offers concrete guidance on how to design the most promising interventions, for example, by tailoring messages to specific conspiracy claims.”</p>
<p>The study, “<a href="https://doi.org/10.1002/ejsp.70041" target="_blank">The Effectiveness of Interventions Addressing Conspiracy Beliefs: A Meta-Analysis</a>,” was authored by Lukasz Stasielowicz.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<p><strong>Forwarded by:<br />
Michael Reeder LCPC<br />
Baltimore, MD</strong></p>
<p><strong>This information is taken from free public RSS feeds published by each organization for the purpose of public distribution. Readers are linked back to the article content on each organization's website. This email is an unaffiliated, unofficial redistribution of this freely provided content from the publishers.</strong></p>
<p> </p>
<p><small><a href="https://blogtrottr.com/unsubscribe/565/DY9DKf">unsubscribe from this feed</a></small></p>