<table style="border:1px solid #adadad; background-color: #F3F1EC; color: #666666; padding:8px; -webkit-border-radius:4px; border-radius:4px; -moz-border-radius:4px; line-height:16px; margin-bottom:6px;" width="100%">
<tbody>
<tr>
<td><span style="font-family:Helvetica, sans-serif; font-size:20px;font-weight:bold;">PsyPost – Psychology News</span></td>
</tr>
<tr>
<td> </td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/analysis-of-45-serial-killers-sheds-new-light-on-the-dark-psychology-of-sexually-motivated-murderers/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Analysis of 45 serial killers sheds new light on the dark psychology of sexually motivated murderers</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Nov 19th 2025, 06:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>New research provides evidence that sexually motivated serial killers are often driven by a complex combination of grandiose entitlement and deep-seated emotional vulnerability. A study analyzing the confessions and interviews of 45 serial offenders suggests that while many of these individuals display arrogance and a need for admiration, they are frequently characterized by intense feelings of resentment and hypersensitivity. The findings were published in the <em><a href="https://link.springer.com/article/10.1007/s11896-025-09780-4" target="_blank">Journal of Police and Criminal Psychology</a></em>.</p>
<p>Serial killers have historically been categorized based on their apparent motives. Criminologists often classify them into distinct types, such as those driven by a desire for power, sexual gratification, or mission-oriented delusions. While these categories help law enforcement understand the crime scene, they can oversimplify the psychological makeup of the offender. </p>
<p>Recent shifts in psychological research favor looking at personality traits on a spectrum rather than placing individuals into rigid boxes. Narcissism is one of the most prominent traits associated with serial homicide, yet it is often misunderstood as purely a manifestation of an inflated ego.</p>
<p>Prior research has largely focused on the grandiose aspects of narcissism in these offenders. This includes traits like dominance, aggression, and an exaggerated sense of self-importance. Less attention has been paid to vulnerable narcissism, which involves insecurity, social withdrawal, and defensive hostility. </p>
<p>Most existing knowledge comes from individual case studies, which provide deep detail but lack the ability to generalize to the broader population of offenders. The authors of the new study sought to address this gap by systematically analyzing a larger group of sexually motivated serial killers to determine how different dimensions of narcissism manifest and interact.</p>
<p>“I was always interested in studying criminal psychology and in particular the minds of serial killers because I felt the need to understand their thought pattern. And after doing an interpretative phenomenological analysis study on five serial killers for my MSc thesis, I felt I really needed to go down that path further,” said study author Evangelia Ioannidi, a researcher at the University of Bamberg.</p>
<p>“So when I discussed this idea with my current PhD supervisor Professor Dr. Astrid Schütz, we discussed how often serial killers are described as ‘narcissistic,’ but usually in a very vague way and almost always focused on the grandiose, showy side. At the same time, newer research on narcissism clearly distinguishes between grandiose and vulnerable forms, and I felt this distinction was missing in the serial killer literature.” </p>
<p>“I wanted to know whether sexually motivated serial killers really are just ‘inflated egos,’ or whether there is also a more fragile, hypersensitive, and hostile inner world behind their crimes. This study grew out of that gap: bringing modern narcissism theory into a very extreme but heavily discussed offender group.”</p>
<p>To conduct the study, the researchers utilized the Radford FGCU serial killer database. They identified a pool of male sexually motivated serial killers who were active in the United States between 1960 and 2021. From this group, they were able to obtain complete and accessible files for 45 offenders. The data consisted of police confessions, interrogation transcripts, and interviews conducted by the FBI. These documents were acquired through Freedom of Information Act requests to various state police departments and federal agencies.</p>
<p>The research team employed a method known as qualitative content analysis to examine the offenders’ statements. They used a specific coding scheme based on two psychological models: the Narcissistic Admiration and Rivalry Concept and the Vulnerable Isolation and Enmity Concept. </p>
<p>These frameworks break narcissism down into four distinct dimensions. Grandiose admiration involves self-promotion and a striving for uniqueness. Grandiose rivalry is characterized by the devaluation of others and striving for supremacy. Vulnerable isolation involves withdrawing from social contact to protect a fragile self-esteem. Vulnerable enmity is marked by varying degrees of paranoia, aggression, and a belief that one is being treated unfairly.</p>
<p>Two independent analysts reviewed the transcripts to identify phrases and sentiments that corresponded to these four dimensions. They assessed whether each trait was present or absent in the statements of each offender. The analysis focused specifically on sections where the killers discussed their childhoods, upbringing, and the motivations behind their first two murders.</p>
<p>The analysis revealed that traits of vulnerable narcissism were slightly more prevalent than grandiose traits in this sample. Indications of vulnerable narcissism appeared in 89 percent of the statements, while grandiose narcissism was found in 87 percent. When the researchers broke these down into specific dimensions, they found that vulnerable enmity was the most common trait, appearing in 84 percent of the cases.</p>
<p>“I expected grandiose traits to dominate, because that’s how serial killers are usually portrayed,” Ioannidi told PsyPost. “Instead, the most prevalent dimension was vulnerable enmity: deep hostility, hypersensitivity, and fixation on perceived rejection or disrespect. Seeing that this internal fragility showed up even more consistently than the ‘inflated ego’ side was striking, and it reshaped how I understand the psychology driving sexually motivated serial homicide.”</p>
<p>Grandiose admiration was the second most common trait, identified in 76 percent of the statements. This indicates that alongside their hostility, many of these individuals possess a strong desire to be admired and recognized as special. Grandiose rivalry was present in 71 percent of the cases, showing a tendency to view social interactions as competitions that must be won. Vulnerable isolation was the least common of the four, though it was still observed in 58 percent of the sample.</p>
<p>A key finding of the study was the significant overlap between these traits. The researchers observed that these dimensions rarely existed in isolation. For example, there was a strong association between grandiose rivalry and vulnerable enmity. Offenders who displayed a competitive drive to dominate others were also highly likely to express feelings of being wronged or persecuted. </p>
<p>The data showed that 62 percent of the sample exhibited both of these traits simultaneously. This combination paints a picture of offenders who are not only aggressive in their pursuit of superiority but are also reactively hostile due to a fragile sense of self.</p>
<p>The study also found that grandiose admiration and grandiose rivalry frequently co-occurred. Approximately 60 percent of the offenders displayed both traits. This suggests a dynamic where the individual alternates between seeking validation through self-promotion and asserting dominance through the devaluation of others. The authors suggest that this interplay serves to maintain the offender’s inflated self-image. When the strategy of seeking admiration fails, the offender may switch to rivalry and aggression to protect their ego.</p>
<p>The results provide evidence that the psychological profile of sexually motivated serial killers is multifaceted. It challenges the popular media portrayal of these offenders as purely cold and calculating figures driven solely by a god complex. </p>
<p>“These offenders aren’t driven only by ego or the desire to feel powerful,” Ioannidi explained. “Yes, many show grandiose traits, but an equally important part is the vulnerable side — the resentment, hypersensitivity, and deep sense of being wronged. Those two sides working together help explain why their violence is so personal and fueled by control. It’s not about excusing them; it’s about understanding that the psychology behind these crimes is more complex than people usually assume.”</p>
<p>The authors note that these findings have potential implications for criminal profiling. Recognizing that a suspect may be driven by a mix of grandiosity and hypersensitivity could help investigators in interview strategies. For instance, an offender who displays high levels of vulnerable enmity might react poorly to direct confrontation but could be more responsive to an approach that acknowledges their perceived grievances. </p>
<p>However, the researchers caution that these traits are not unique to criminals. Many people in the general population possess narcissistic traits without ever engaging in violent behavior.</p>
<p>“The biggest caveat is that narcissistic traits alone do <em>not</em> make someone dangerous,” Ioannidi noted. “These traits exist everywhere in normal life, and most people who show them are not violent. My study looks at how narcissism appears <em>within</em> an already extreme group of offenders — not how to identify future criminals. It’s important that readers don’t confuse psychological traits with destiny or prediction.”</p>
<p>The study, like all research, includes some limitations. The sample size of 45, while larger than typical case studies, is still relatively small, which limits the statistical power of the analysis. The reliance on archival police records means that the data is secondary and lacks the depth of a clinical psychological evaluation. </p>
<p>The study also focused exclusively on male offenders in the United States, so the findings may not apply to female serial killers or those from different cultural backgrounds. Additionally, the researchers did not have detailed information on the victims, which prevented them from analyzing how victim characteristics might influence the expression of narcissistic traits.</p>
<p>Future research is needed to validate these findings with larger and more diverse samples. The authors recommend investigating how these narcissistic dimensions interact with other personality traits. Understanding the developmental pathways that lead to this specific combination of grandiosity and vulnerability could also aid in the creation of early intervention strategies.</p>
<p>“I want to expand the work by examining how narcissistic traits interact with other dimensions like psychopathy, sadism, and early trauma,” Ioannidi said. “Another goal is to look more closely at developmental patterns: how vulnerable and grandiose traits shift over time and how they shape behavior. I’m also interested in creating more refined, evidence-based profiling tools that help investigators without falling into stereotypes or overgeneralizations.”</p>
<p>“One thing I want to emphasize is that this study bridges two fields that rarely talk to each other: modern personality science and forensic investigation. By using updated theoretical models rather than outdated labels, we can describe these offenders more accurately and responsibly. My hope is that this approach moves the conversation beyond sensationalism and toward a clearer understanding of what actually drives such crimes.”</p>
<p>The study, “<a href="https://link.springer.com/article/10.1007/s11896-025-09780-4" target="_blank">Narcissistic Traits in Sexually Motivated Serial Killers</a>,” was authored by Evangelia Ioannidi, Iris K. Gauglitz, Nicole Sherretts, and Astrid Schuetz.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/listening-to-your-favorite-songs-modulates-your-brains-opioid-system/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Listening to your favorite songs modulates your brain’s opioid system</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Nov 18th 2025, 20:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>Listening to a favorite song can trigger a profound emotional response that rivals the feelings produced by biological necessities. A new brain imaging study reveals that music activates the same chemical system in the brain that is responsible for the pleasure associated with food and social bonding. The findings, which offer a biological explanation for why melodies can induce euphoria and physical chills, appear in the <em><a href="https://doi.org/10.1007/s00259-025-07232-z" target="_blank">European Journal of Nuclear Medicine and Molecular Imaging</a></em>.</p>
<p>The human brain contains a sophisticated network designed to regulate motivation and pleasure. This system relies heavily on mu-opioid receptors. These receptors serve as docking sites for specific chemical messengers that generate feelings of reward and well-being. This mechanism likely evolved to encourage behaviors essential for survival, such as eating and reproduction.</p>
<p>Music presents a longstanding biological puzzle for scientists because it offers no direct survival advantage. It does not provide calories or physical protection, yet it persists across all human cultures. People consistently seek out music and report that it elicits strong emotional and physical reactions.</p>
<p>Previous research suggested that music might tap into the same neural circuitry as biological rewards. Most of these earlier studies relied on functional magnetic resonance imaging, or fMRI. That technology tracks blood flow to identify active brain regions, but it cannot visualize specific chemical interactions.</p>
<p>Researchers from the Turku PET Centre and the University of Turku in Finland sought to bridge this gap in understanding. They aimed to identify the specific molecular mechanisms that govern musical pleasure. The team, led by Vesa Putkinen, investigated whether the opioid system directly mediates the enjoyment of aesthetic rewards.</p>
<p>The investigators recruited fifteen female participants for the primary component of the study. Each participant provided a playlist of music that they found intensely pleasurable. These selections included various genres, ranging from contemporary pop to hip-hop.</p>
<p>The study utilized two different brain imaging technologies to capture complementary data points. The researchers first used positron emission tomography, or PET scans. This method allows for the visualization of receptor activity at the molecular level.</p>
<p>During the PET scans, the researchers injected a radioactive tracer called [11C]carfentanil into the participants’ bloodstreams. This tracer is a synthetic opioid that binds tightly to mu-opioid receptors in the brain. The logic behind this technique relies on the concept of competition between molecules.</p>
<p>When the brain releases its own natural opioids, those molecules occupy the available receptors. This occupancy prevents the radioactive tracer from binding to the same sites. By measuring the amount of tracer that successfully attaches to receptors, the researchers can infer the level of natural opioid activity.</p>
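<p>In other words, the inference rests on a simple competition principle. As a rough, illustrative sketch (not the study's actual kinetic modeling of the PET data), the drop in tracer binding between a baseline scan and a music scan can be read as the fraction of receptors newly occupied by the brain's own opioids:</p>
<pre style="font-family:monospace; font-size:12px; background-color:#f7f7f7; padding:8px; overflow:auto; text-align:left;">
# Illustrative sketch of the receptor-competition logic described above.
# The numbers are hypothetical; the study's real analysis involves kinetic
# modeling of [11C]carfentanil time-activity curves, not a two-number formula.

def occupancy_change(bp_baseline: float, bp_music: float) -> float:
    """Fractional drop in tracer binding potential between conditions.

    A larger drop implies more endogenous opioid release, because the
    brain's own opioids occupy receptors and block the radioactive tracer.
    """
    return (bp_baseline - bp_music) / bp_baseline

# Hypothetical binding-potential values for one region (e.g., nucleus accumbens)
print(occupancy_change(bp_baseline=2.0, bp_music=1.7))  # 0.15 -> ~15% of sites newly occupied
</pre>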
<p>The participants underwent these scans under two different conditions. In one session, they listened to their self-selected pleasurable music. In the other session, they underwent a baseline scan without musical stimulation.</p>
<p>The imaging results revealed that listening to music modulated opioid receptor availability in multiple brain regions. These changes occurred in areas such as the ventral striatum and the orbitofrontal cortex. These brain structures are well-documented centers for processing emotion and assessing value.</p>
<p>The researchers also asked participants to report when they experienced “chills” during the music. These physical sensations of goosebumps or shivers are often used as a marker for intense aesthetic pleasure. The team analyzed the relationship between these subjective reports and the chemical data.</p>
<p>A specific analysis focused on the nucleus accumbens. This region is deeply involved in the brain’s reward circuit. The data showed a correlation between the number of chills experienced and the receptor activity in this area.</p>
<p>Specifically, when participants reported more chills, there was evidence of greater endogenous opioid release in the nucleus accumbens. This suggests that the subjective intensity of the pleasure is directly linked to the amount of opioids released in this region. The finding connects the abstract experience of enjoying art to a concrete molecular event.</p>
<p>Following the PET scans, the researchers employed fMRI on the same participants. This allowed them to map the hemodynamic, or blood flow, responses to the music. They compared brain activity during music listening against a control condition consisting of random tone sequences.</p>
<p>The MRI data indicated that music increased activity in networks involved in processing emotions and body sensations. Active regions included the insula and the anterior cingulate cortex. These areas help the brain interpret internal physical states and regulate emotional arousal.</p>
<p>To ensure the music was affecting the participants physiologically, the team monitored autonomic nervous system activity. They measured heart rates and tracked changes in pupil size. Both metrics increased during music listening, confirming that the auditory stimulation caused heightened physical arousal.</p>
<p>A combined analysis of the PET and MRI data offered additional insights into individual differences. The researchers looked at the baseline density of opioid receptors each person possessed. They compared this “receptor tone” to the strength of the brain’s response during the MRI sessions.</p>
<p>The analysis showed that individuals with a higher baseline concentration of opioid receptors exhibited stronger brain activity when listening to music. This effect was particularly notable in regions associated with reward and interoception. This correlation suggests that the abundance of these receptors influences how intensely a person experiences musical pleasure.</p>
<p>It implies that molecular differences between people might explain why some individuals are more responsive to music than others. Those with more available opioid receptors may be biologically primed to derive stronger emotional rewards from aesthetic stimuli. This finding connects trait-level biology to state-level emotional experience.</p>
<p>The study provides evidence that the opioid system contributes to the processing of abstract rewards. It challenges the notion that these chemical pathways are reserved solely for basic survival needs. The brain appears to repurpose its survival mechanisms to process cultural and aesthetic experiences.</p>
<p>The study does have limitations regarding its generalizability. The sample size was relatively small due to the complexity and high cost of PET imaging. Additionally, the participants were exclusively women.</p>
<p>Future investigations will need to include male participants to ensure the findings apply universally. Replicating the results with larger groups would also strengthen the conclusions. Further research might also explore how different genres of music or active participation, such as singing, might alter these chemical responses.</p>
<p>These results hint at potential therapeutic applications for music. The connection between music and the opioid system supports the use of music-based interventions. Since the opioid system is also the primary mechanism for pain regulation, this link explains why music often has analgesic effects.</p>
<p>Clinical trials could test whether music can effectively reduce reliance on pain medication. Understanding the chemical roots of aesthetic pleasure might also aid in treating mood disorders. Conditions characterized by an inability to feel pleasure, such as depression, involve these same neural systems.</p>
<p>The study, “<a href="https://doi.org/10.1007/s00259-025-07232-z" target="_blank">Pleasurable music activates cerebral μ‑opioid receptors: a combined PET‑fMRI study</a>,” was authored by Vesa Putkinen, Kerttu Seppälä, Harri Harju, Jussi Hirvonen, Henry K. Karlsson, and Lauri Nummenmaa.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/personalitys-link-to-relationship-satisfaction-is-different-for-men-and-women/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Personality’s link to relationship satisfaction is different for men and women</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Nov 18th 2025, 18:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>A study involving 3,780 adults from Australia, Denmark, and Sweden has found that the link between personality and relationship satisfaction differs between men and women. For example, extraverted men were more likely to have a romantic partner and tended to be more satisfied with their families, an effect less pronounced in women. In contrast, the link between agreeableness and family satisfaction was stronger for women. The paper was published in the <em><a href="https://doi.org/10.1016/j.jrp.2025.104674" target="_blank">Journal of Research in Personality</a></em>.</p>
<p>Personality is the consistent pattern of thoughts, emotions, and behaviors that defines how an individual interacts with the world. Currently, one of the most widely accepted frameworks for describing personality is the Big Five model.</p>
<p>The Big Five model proposes that there are five broad dimensions of personality: openness, conscientiousness, extraversion, agreeableness, and neuroticism. Openness refers to imagination, curiosity, and a preference for novelty. Conscientiousness involves organization, responsibility, and dependability.</p>
<p>Extraversion reflects sociability, energy, and a tendency to seek stimulation from others. Agreeableness represents kindness, empathy, and cooperativeness. Neuroticism describes emotional instability and the tendency to experience negative emotions like anxiety or irritability.</p>
<p>Study authors Filip Fors Connolly and Mikael Goossen hypothesized that the associations between personality and relationship outcomes would differ for men and women. They predicted that the link between agreeableness and relationship satisfaction would be stronger for women, while the connection between extraversion and relationship satisfaction would be stronger for men.</p>
<p>The authors analyzed data from an online survey conducted in 2016. The survey included 3,780 adult participants from Australia, Denmark, and Sweden, of whom about 54% were women.</p>
<p>Participants completed assessments of the Big Five personality traits (using the 20-item Mini-IPIP scale) and measures of relationship satisfaction with their friendships, family, and romantic partners (using items from the Social and Emotional Loneliness Scale for Adults). They also reported their relationship status by answering the question, “Are you in a stable relationship?”</p>
<p>Results showed that personality’s link to partnership status was moderated by gender. For men, higher extraversion was strongly associated with being in a stable relationship. In contrast, higher neuroticism and agreeableness were associated with a lower likelihood of being in a relationship for men. For women, the patterns were different. Higher neuroticism was linked to a higher likelihood of being partnered, while agreeableness had a neutral-to-slightly-positive association.</p>
<p>For those already in ongoing relationships, the link between higher neuroticism and lower relationship satisfaction was stronger in men than in women, particularly regarding romantic satisfaction. The positive association between extraversion and family satisfaction was also stronger for men. Conversely, the link between agreeableness and family satisfaction was stronger for women. The study authors report that these patterns were largely stable across the three countries.</p>
<p>“The results revealed important distinctions across relationship domains, with gender-personality interactions being most robust in relationship formation processes,” the study authors concluded. “The gender moderation effects we observed—particularly the stronger associations between extraversion and men’s partnership formation—suggest the continued relevance of gender role expectations in intimate relationships, even within relatively egalitarian societies.”</p>
<p>The study provides evidence on how personality traits and gender interact in shaping relationship experiences. However, all data used in this study came from self-reports, leaving room for reporting bias to have affected the results.</p>
<p>The paper, “<a href="https://doi.org/10.1016/j.jrp.2025.104674" target="_blank">The interplay between gender and personality in relationship outcomes: Satisfaction across domains and partnership status</a>,” was authored by Filip Fors Connolly and Mikael Goossen.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/social-reasoning-in-ai-traced-to-an-extremely-small-set-of-parameters/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Social reasoning in AI traced to an extremely small set of parameters</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Nov 18th 2025, 16:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>A new study reveals that the capacity for social reasoning in large language models, a trait similar to the human “theory of mind,” originates from an exceptionally small and specialized subset of the model’s internal parameters. Researchers found that these few parameters are deeply connected to the mechanisms that allow a model to understand word order and context. The work, published in <em><a href="https://doi.org/10.1038/s44387-025-00031-9" target="_blank">npj Artificial Intelligence</a></em>, provides a look into how complex cognitive-like abilities can emerge from the architecture of artificial intelligence.</p>
<p>Theory of mind is the ability to attribute mental states like beliefs, desires, and intentions to oneself and to others. It is what allows a person to understand that someone else might hold a false belief, for example, believing an object is in a box when it has been secretly moved to a drawer. This type of social reasoning is fundamental to human interaction. </p>
<p>In recent years, large language models have demonstrated an apparent ability to solve tasks designed to test this capacity, but the internal processes giving rise to this skill have remained largely opaque. Understanding these mechanics is a key goal for researchers working on making artificial intelligence more transparent and predictable.</p>
<p>This investigation was conducted by a team of researchers from Stanford University, Princeton University, the University of Minnesota, the University of Illinois Urbana-Champaign, and the Stevens Institute of Technology. Their work aimed to move beyond simply testing a model’s performance on social reasoning tasks. </p>
<p>Instead, they sought to identify the specific internal components responsible for this behavior, effectively looking under the hood to see how the machine performs its reasoning. The central questions were to pinpoint which of the billions of parameters in a model are most sensitive to theory-of-mind tasks and to determine how these parameters influence the model’s computational workflow.</p>
<p>To identify the parameters responsible for theory of mind, the researchers developed a novel method based on a mathematical tool that measures how much the model’s performance changes when a specific parameter is slightly altered. They first calculated this sensitivity for parameters while the model performed theory-of-mind tasks, specifically “false-belief” scenarios. </p>
<p>These tasks test if a model can recognize that an agent’s belief about the world is different from reality. For instance, a model would be presented with a story where a character places an item in one location, and then another character moves it without the first one’s knowledge. The model must correctly predict that the first character will look for the item in its original location.</p>
<p>This initial process identified a set of parameters sensitive to these social reasoning puzzles. However, the team recognized that some of these parameters might also be essential for general language processing. To isolate the ones specifically related to theory of mind, they performed a second sensitivity analysis on a general language modeling task and created a map of parameters vital for basic language functions. By subtracting this general language map from the theory-of-mind map, they were left with a very small, specialized set of parameters primarily dedicated to social reasoning.</p>
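<p>The paper's precise sensitivity measure is not reproduced here, but the selection workflow the researchers describe can be sketched roughly in Python: score every parameter on the false-belief task, score it again on ordinary language modeling, and keep only the parameters whose sensitivity is distinctive to the social-reasoning task. The gradient-squared proxy, the function interfaces, and the cutoff below are illustrative assumptions, not the authors' implementation.</p>
<pre style="font-family:monospace; font-size:12px; background-color:#f7f7f7; padding:8px; overflow:auto; text-align:left;">
import torch  # assumes a PyTorch model; interfaces below are illustrative

def sensitivity_map(model, loss_fn, batch):
    """Rough per-parameter sensitivity: squared gradient of the task loss.

    A stand-in for the paper's measure, used only to show the workflow.
    """
    model.zero_grad()
    loss_fn(model, batch).backward()
    return {name: p.grad.detach() ** 2
            for name, p in model.named_parameters() if p.grad is not None}

def tom_specific_mask(tom_scores, lang_scores, top_frac=1e-5):
    """Keep parameters far more sensitive to theory-of-mind prompts than to generic text."""
    masks = {}
    for name, tom in tom_scores.items():
        diff = tom - lang_scores[name]            # subtract the general-language map
        k = max(1, int(top_frac * diff.numel()))  # on the order of 0.001% of weights
        cutoff = diff.flatten().topk(k).values.min()
        masks[name] = diff >= cutoff              # the "ToM-sensitive" parameters
    return masks
</pre>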
<p>With these “ToM-sensitive” parameters identified, the team conducted a perturbation experiment. They altered the values of this tiny group of parameters, which constituted as little as 0.001% of the model’s total. The effect on the model’s performance was significant. </p>
<p>Across several different language models, this small change caused a substantial drop in their ability to correctly answer theory-of-mind questions. As a control, the researchers also perturbed a randomly selected group of parameters of the same size. This random alteration had almost no effect on performance, indicating that the identified ToM-sensitive parameters have a specialized function.</p>
<p>The researchers discovered that this performance degradation was not just limited to social reasoning. The models also became worse at tasks requiring contextual localization, which is the ability to understand where a piece of information is located within a longer text. This suggested a link between the model’s ability to reason about mental states and its more fundamental ability to track the position of words and concepts in a sequence. The findings pointed toward the model’s positional encoding system, the architectural component that gives it a sense of word order.</p>
<p>The investigation then turned to how these sensitive parameters interact with the model’s core architecture. Many modern language models use a technique called Rotary Position Embedding, or RoPE, to understand word order. This method encodes the position of a word by applying a rotation to its numerical representation, with different dimensions of the representation rotating at different frequencies. </p>
<p>The analysis showed that the identified ToM-sensitive parameters were not random; they were precisely aligned with what are known as dominant frequency activations. These are the specific frequencies that the model relies on most heavily to process positional information.</p>
<p>When the ToM-sensitive parameters were perturbed, these dominant frequency patterns were disrupted. This effectively damaged the model’s internal map of the text, explaining why its ability for contextual localization diminished. The effect was specific to models that use the RoPE system. </p>
<p>In a model from a different family, which uses an alternative method for positional encoding, the same kind of sparse, sensitive parameter pattern was not found. This architectural contrast confirmed that the social reasoning ability in RoPE-based models is tightly coupled with this particular mechanism for handling word order.</p>
<p>The final piece of the puzzle was to trace how this disruption in positional encoding affects the model’s attention mechanism. The attention mechanism is what allows a model to weigh the importance of different words in a text when making a prediction. Many models exhibit a phenomenon known as an “attention sink,” where a significant amount of attention is consistently directed toward the very first token in a sequence. This first token acts as a stable anchor, helping the model organize its processing of the rest of the text.</p>
<p>The researchers found that the ToM-sensitive parameters play a role in maintaining the geometric relationship between the vector for the current word being processed and the vector for the first, anchor token. Perturbing these parameters altered the angle between these two vectors, making them more orthogonal, or perpendicular. </p>
<p>This change destabilized the attention sink. As a result, the model’s attention, no longer properly anchored, began to scatter to irrelevant parts of the text, such as punctuation. This breakdown in the model’s focus directly impaired its ability to form a coherent understanding of the language, leading to the observed failures in both social reasoning and general comprehension.</p>
<p>While this work provides a mechanistic explanation for theory-of-mind-like abilities in some models, the researchers note certain limitations. The analysis was primarily focused on specific types of false-belief tasks, and future work could explore whether similar parameter patterns govern more nuanced social skills like detecting irony or social faux pas. The findings also suggest that what appears to be a sophisticated cognitive skill may emerge from more fundamental mechanisms related to language structure and context.</p>
<p>The identification of such a localized set of parameters opens up new directions for research. It could lead to more efficient ways to align model behavior with human values or ethical norms. At the same time, it highlights potential vulnerabilities; if social reasoning is concentrated in such a small area, it could be a target for adversarial attacks designed to manipulate a model’s behavior. Understanding these structural underpinnings is a step toward developing artificial intelligence systems that are more transparent, reliable, and better aligned with human social cognition.</p>
<p>The study, “<a href="https://doi.org/10.1038/s44387-025-00031-9" target="_blank">How large language models encode theory-of-mind: a study on sparse parameter patterns</a>,” was authored by Yuheng Wu, Wentao Guo, Zirui Liu, Heng Ji, Zhaozhuo Xu, and Denghui Zhang.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/singlehood-isnt-a-static-state-but-an-evolving-personal-journey-new-findings-suggest/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Singlehood isn’t a static state but an evolving personal journey, new findings suggest</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Nov 18th 2025, 14:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>A new longitudinal study published in <em><a href="https://doi.org/10.1111/pere.70038" target="_blank" rel="noopener">Personal Relationships</a></em> offers a closer look at how romantic relationship desire shapes long-term singles’ satisfaction with being single. The findings suggest satisfaction levels shift in ways that depend on age and birth cohort. While younger adults tend to become less satisfied, some older adults with unfulfilled desire show signs of adapting. The study highlights the complex and evolving nature of singlehood, pointing to the need to view it as a personal experience that changes across the life course.</p>
<p>Singlehood is becoming more common, especially in societies that emphasize personal freedom, career development, and autonomy. While some individuals embrace being single and derive satisfaction from it, others struggle with it, particularly when they want a romantic partner but remain unpartnered. Past research has shown that single people who wish to be in a relationship are often less satisfied with their singlehood, but most of this work has been based on one-time surveys.</p>
<p>The researchers aimed to move beyond snapshots in time. They wanted to understand how desire for a romantic relationship—and the absence of one—affects people across many years. They also wanted to explore whether these effects differ based on factors such as age and gender.</p>
<p>Specifically, the study introduced the concept of “prolonged desire,” referring to the experience of wanting a relationship that does not materialize over an extended period. The goal was to see whether this sustained desire leads to emotional strain, reduced life satisfaction, or eventual adaptation.</p>
<p>“Inspired by Olmstead’s concept of ‘chronic readiness’ (the idea that some people feel long-term readiness for a romantic relationship), I noticed that the long-term experience of relationship desire was rarely studied (mostly in a cross-sectional manner),” said study author <a href="https://www.linkedin.com/in/elise-t-hoen-8184b1278/" target="_blank" rel="noopener">Elise ‘t Hoen</a>, a PhD researcher at <a href="https://www.uantwerpen.be/en/projects/erc-singleton/" target="_blank" rel="noopener">the University of Antwerp</a>.</p>
<p>“I was particularly interested in how satisfaction with singlehood evolves over time, especially among those who remain single and continue to express desire for a relationship (which was a gap). Additionally, dominant social narratives still portray singles as unhappy compared to partnered individuals, also motivating this research.”</p>
<p>For their study, the researchers analyzed data from the German Family Panel (Pairfam), a large, ongoing longitudinal study tracking social and relationship experiences in Germany. They focused on a subset of 300 individuals who remained single for at least nine survey waves, spanning up to 14 years from 2008 to 2022. These participants provided yearly reports on two key measures: how much they wanted a romantic relationship and how satisfied they were with being single.</p>
<p>To measure relationship desire, participants rated the statement “I would like to have a partner” on a five-point scale. Scores of 4 or 5 were classified as expressing active desire, while scores from 1 to 3 were considered as no or low desire. Satisfaction with singlehood was rated on a scale from 0 (very dissatisfied) to 10 (very satisfied). Demographic information such as birth year and gender was also included in the analysis, allowing the researchers to compare patterns across different age groups and between men and women.</p>
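<p>For readers curious about the mechanics, the dichotomization described above amounts to a one-line recode of the yearly panel data. The sketch below uses hypothetical column names, not Pairfam's actual variable labels:</p>
<pre style="font-family:monospace; font-size:12px; background-color:#f7f7f7; padding:8px; overflow:auto; text-align:left;">
import pandas as pd

# Hypothetical wave-level records; Pairfam's real variable names and coding differ.
waves = pd.DataFrame({
    "person_id": [1, 1, 2, 2],
    "desire_1to5": [5, 4, 2, 3],              # "I would like to have a partner"
    "satisfaction_0to10": [4.0, 5.0, 8.0, 7.5],
})

waves["active_desire"] = waves["desire_1to5"] >= 4  # 4-5 = active desire, 1-3 = no/low desire
print(waves.groupby("active_desire")["satisfaction_0to10"].mean())
</pre>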
<p>At the start of the study period, individuals who reported no desire for a relationship had the highest levels of singlehood satisfaction, averaging 7.36 on the 0–10 scale. In contrast, those with a desire for a romantic partner reported satisfaction levels that were, on average, nearly two points lower. This difference remained consistent across the study period, suggesting that desire itself is a strong and stable predictor of how people feel about being single.</p>
<p>Over time, average satisfaction with singlehood showed a slight downward trend. However, once other factors like age and cohort were considered, this overall time effect was no longer statistically significant. The data revealed that it is not simply the number of years spent single that affects satisfaction, but how that experience interacts with a person’s age and desire for a partner.</p>
<p>When examining sex differences, Hoen and her colleagues found no significant overall difference between men and women in either relationship desire or singlehood satisfaction. However, when looking at trends over time, women who wanted a relationship showed a small but statistically significant decline in satisfaction, while men with similar desires did not show the same drop. This suggests that unfulfilled relationship desire may weigh more heavily on women, possibly due to societal expectations around partnering and family life.</p>
<p>Differences also emerged across age cohorts. Participants were grouped into three birth cohorts: those born between 1971–1980 (older), 1981–1990 (middle), and 1991–2000 (younger). Both the youngest and oldest cohorts reported lower relationship desire than the middle group. Among those who wanted a relationship, the middle cohort experienced the sharpest decline in singlehood satisfaction over time. The youngest cohort also saw a decline, though it was more moderate. In contrast, the oldest cohort with sustained desire showed a more stable or even slightly improving satisfaction trajectory, suggesting some degree of emotional adaptation or reframing of expectations.</p>
<p>“I had expected that people who remained single for a long time while continuing to desire a relationship would show a steady decline in satisfaction,” Hoen told PsyPost. “That wasn’t the case for the oldest cohort – their satisfaction improved slightly over time. The middle cohort, by contrast, declined sharply. This shows that duration alone doesn’t explain satisfaction, context and identity matter.”</p>
<p>Among individuals who did not express relationship desire, the oldest cohort again reported the highest satisfaction, supporting the idea that some people come to embrace or accept their single status over time. Meanwhile, the middle-aged group tended to be less satisfied, even when they did not report wanting a relationship. This pattern may reflect increased societal pressure to settle down during midlife, especially when it conflicts with individual desires or circumstances.</p>
<p>The researchers also tested whether satisfaction followed a non-linear trajectory, meaning it might dip and then rise or follow other curving patterns across time. While they found some evidence for such patterns in younger and older adults, these effects were not strong enough to draw firm conclusions. Still, the results suggest that the emotional experience of being single is not a straight line but may include periods of change, reevaluation, and growth.</p>
<p>“Singlehood is a dynamic experience, shaped by relationship desire, age, and gendered societal contexts,” Hoen said. “One of the most important findings is that wanting a relationship is a strong predictor of how satisfied someone feels about being single. However, it’s not just about how long someone is single, but about who they are when that desire persists, and how they interpret their desire, that shapes how they experience singlehood over time.”</p>
<p>But the study, like all research, includes some caveats. The sample of 300 long-term singles was a small portion (1.8 percent) of the larger Pairfam study, which means the findings may not represent all single individuals. Also, participants had to remain single across nine waves, which limits the generalizability to people with shorter periods of singlehood.</p>
<p>Future research could build on these findings by exploring how identity, social support, and cultural norms influence how people experience being single. It would also be useful to examine how satisfaction and desire interact with broader well-being outcomes such as loneliness, depression, or self-esteem. Finally, future work could investigate how people come to redefine or accept their relationship goals over time and whether doing so enhances life satisfaction.</p>
<p>“I aim to further examine how singlehood satisfaction is shaped by how individuals define their own single status, particularly in relationship to whether they wish to remain single,” Hoen explained. “This includes a deeper look at identity, not just desire.”</p>
<p>“This study shows that singlehood is a dynamic, lived experience,” she concluded. “By taking a longitudinal, within-person approach, it highlights how desire is a personal experience that evolves with age, social norms, and gendered expectations. It encourages us to think of singlehood as a process shaped by context and time.”</p>
<p>The study, “<a href="https://doi.org/10.1111/pere.70038" target="_blank" rel="noopener">Who’s Waiting Ever After? An Exploration of Relationship Desire and Satisfaction Among Long-Term Singles</a>,” was authored by Elise ’t Hoen, Elke Claessens, and Dimitri Mortelmans.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/ai-conversations-can-reduce-belief-in-conspiracies-whether-or-not-the-ai-is-recognized-as-ai/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">AI conversations can reduce belief in conspiracies, whether or not the AI is recognized as AI</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Nov 18th 2025, 12:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>Talking with an artificial intelligence chatbot can reduce belief in conspiracy theories and other questionable ideas. A new study published in <em><a href="https://doi.org/10.1093/pnasnexus/pgaf325" target="_blank">PNAS Nexus</a></em> finds that these belief changes are not dependent on whether the AI is perceived as a machine or a person, suggesting that the persuasive power lies in the quality of its arguments, not in the identity of the messenger. The research adds to a growing body of work showing that even beliefs often seen as resistant to correction may be influenced by short, targeted dialogues.</p>
<p>Beliefs in conspiracy theories and pseudoscientific ideas are often thought to be deeply rooted and difficult to change. These beliefs may fulfill emotional or psychological needs, or they may be reinforced by narratives that reject contradictory evidence. For example, someone who believes in a secret government plot might interpret any denial as further proof of the conspiracy.</p>
<p>Previous research has shown that conversations with artificial intelligence chatbots—particularly those tailored to an individual’s specific belief—can lead to meaningful reductions in belief certainty. However, it remained unclear whether these results were due to the facts presented, the way the message was framed, or the fact that the messenger was an AI.</p>
<p>One possibility is that people view AI systems as more neutral or less judgmental than human experts. That could make them more open to reconsidering their beliefs. But another possibility is that people might find arguments more credible when they come from a human source, especially when the tone is natural and conversational.</p>
<p>“We were motivated by a central question: does the identity of the source actually matter when correcting strongly held beliefs? Many frameworks—especially motivated reasoning—suggest that people filter information based on who they think is speaking. If a message comes from a source they distrust or associate with an opposing group, they may resist updating even when the evidence is strong. Conspiracy beliefs, in particular, are often tied to identity, so this issue seemed especially relevant,” explained study author Esther Boissin, a postdoctoral associate at Cornell University.</p>
<p>“Our study tests whether debunking depends on the messenger or on the quality of the counterarguments. The absence of differences between the AI and human-expert framings suggests that, in this context, people respond to the content (which is the same across framings) rather than the label of the source. This also speaks to a persistent limitation in the literature: many past debunking attempts fail not because people are unpersuadable, but because the corrections they receive are incomplete, low-quality, or too generic. High-quality, tailored reasoning appears to matter far more than who delivers it.”</p>
<p>The study involved 955 participants drawn from a larger group of over 8,000 individuals who began the online survey. Participants were asked to report a belief they held—either a conspiracy theory or another type of idea that most experts would likely reject. Examples included beliefs in secret government collaborations or ideas drawn from pseudoscience, such as unconventional theories about disease.</p>
<p>Each participant first described one of their beliefs and rated how confident they were in it on a 0 to 100 scale. They then engaged in a two-round conversation with GPT-4o, a large language model developed by OpenAI. In these conversations, the AI attempted to argue against the participant’s belief using facts and evidence.</p>
<p>Participants were randomly assigned to different experimental conditions. Some were told they were talking to an AI, while others were told they were interacting with an “expert,” a term that many interpreted as referring to a human. Additionally, in half the cases, the AI used a tone designed to sound more conversational and human-like.</p>
<p>Despite these differences, all participants saw the same underlying arguments, customized to address their specific belief. After the conversation, they rated their belief confidence again.</p>
<p>The results showed that belief in conspiracy theories decreased by about 12 percentage points on average, while belief in other unsupported ideas decreased by about 6 points. These changes were statistically significant and occurred regardless of whether the AI was described as a machine or a human. Whether the AI used a neutral or more human-like tone also had no measurable effect.</p>
<p>This pattern held even among participants who thought they were talking to a person. In the “expert” condition, 45% described the speaker as human in a follow-up question, while only 21% mentioned it might have been an AI. Still, belief reductions were similar across groups.</p>
<p>“The effects are modest at the individual level but meaningful in practice,” Boissin told PsyPost. “Shifting entrenched beliefs—even by a few points—after a short, two-round conversation is notable. These changes are comparable to, or larger than, what we typically observe in high-quality debunking interventions. From a public-communication perspective, even small shifts aggregated across many interactions can have substantial impact.”</p>
<p>“It is also worth noting that AI is becoming more present in everyday life, and concerns about its influence are rising. Our findings show that these tools can be used responsibly and for beneficial purposes, and that their effectiveness does not depend on replacing human experts. The same approach can be applied in many other contexts where people need clear, accurate, and tailored information. Of course, these same tools can also be misused, which makes it even more important to understand when and how they influence beliefs.”</p>
<p>To confirm that these findings were not due to differences in who completed the study, the researchers included participants who dropped out in a conservative analysis. They assumed that those who left early experienced no belief change. Even under this assumption, the results remained consistent.</p>
<p>The researchers also analyzed the language used by participants in their responses. People who thought they were talking to an AI tended to use slightly simpler and less varied language, which fits previous findings suggesting people may simplify their communication when interacting with machines. However, they did not significantly change the structure or logic of their arguments.</p>
<p>“A key takeaway is that people who hold conspiracy beliefs are not as resistant to change as commonly assumed,” Boissin explained. “When they receive clear, precise, and evidence-based explanations that address their belief directly, many reconsider their views—even on topics typically thought to be ‘immune’ to correction.”</p>
<p>“This also challenges the idea that people are fundamentally irrational. Our results suggest that, when the arguments are strong and grounded in evidence, people are willing to update. The issue is usually that such high-quality explanations are hard to produce in everyday settings. The AI helped because it could gather relevant information quickly and present it in a structured way, not because it was an AI. A human with the same amount of knowledge could likely have produced a similar belief reduction, but assembling this amount of information in real time is extremely difficult for a human.”</p>
<p>The study did have some limitations. The AI model used for the conversations was trained on data that is predominantly from Western, English-speaking sources. This means its argumentative style and the evidence it presents may reflect specific cultural norms, and the debunking effects might not be the same in different cultural contexts. Future research could explore the effectiveness of culturally adapted AI models.</p>
<p>“A common misunderstanding would be to conclude that AI is inherently more persuasive or uniquely suited for debunking,” Boissin said. “Our results do not support that. The AI was effective because it could generate high-quality, tailored explanations—not because people trusted it more or because it had some special persuasive power.”</p>
<p>“A remaining caveat is that, while the source label did not matter here, this does not mean that source effects never matter; it simply shows that they were not a limiting factor in this particular debunking setting.”</p>
<p>The researchers plan to continue this line of inquiry, aiming to build a more complete picture of the psychology of belief and when evidence-based dialogue is most effective.</p>
<p>“We want to understand why some people revise their beliefs while others do not, even when they receive the same information,” Boissin explained. “This includes examining individual differences—cognitive, motivational, or dispositional—that shape how people respond to counterevidence.”</p>
<p>“We are also interested in the properties of the beliefs themselves. Some beliefs may be revisable because they rest on factual misunderstandings, while others may be tied more strongly to identity or group loyalty. And beyond factual beliefs, we plan to study other types of beliefs—such as political attitudes or more ambiguous beliefs that do not have a clear ‘true’ or ‘false’—to see whether the same mechanisms apply. Understanding these differences can help clarify the cognitive processes that allow certain beliefs to emerge, solidify, or change.”</p>
<p>“More broadly, we want to map the conditions under which evidence-based dialogue works well, when it fails, and what this reveals about the psychology of belief,” Boissin continued. “As part of that effort, we plan to test more challenging scenarios—for example, situations where the AI is framed as an adversarial or low-trust source or even behaves in a way that could trigger resistance.” </p>
<p>“These conditions will allow us to assess the limits of the effect and evaluate how far our conclusions generalize beyond cooperative settings. In short, we want to understand whether people change their minds mainly because they process evidence or because they protect their identity.”</p>
<p>The study, “<a href="https://doi.org/10.1093/pnasnexus/pgaf325" target="_blank">Dialogues with large language models reduce conspiracy beliefs even when the AI is perceived as human</a>,” was authored by Esther Boissin, Thomas H. Costello, Daniel Spinoza-Martín, David G. Rand, and Gordon Pennycook.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/active-short-video-use-linked-to-altered-attention-and-brain-connectivity/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Active short video use linked to altered attention and brain connectivity</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Nov 18th 2025, 10:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>A new study published in <em><a href="https://doi.org/10.1016/j.neuropsychologia.2025.109291" target="_blank">Neuropsychologia</a></em> has found that individuals who often interact actively with short video platforms may experience subtle reductions in their ability to maintain alertness. The researchers found that high levels of active short video use—such as liking, commenting, or switching between content—were linked to decreased performance on tasks that require staying alert to sudden signals. This association appeared to be connected to how two brain systems communicate while at rest.</p>
<p>The widespread use of short videos on platforms like TikTok and Instagram Reels has transformed how people engage with media. These videos are short, fast-paced, and often embedded in personalized content streams. Many users interact with the content by commenting, sharing, or reacting in real time, behaviors the researchers refer to as “active usage.” Others watch videos without much interaction, termed “passive usage.”</p>
<p>Past studies have raised concerns that frequent social media use might weaken sustained attention or increase distractibility. But much of this research focused on general media use, not short video platforms specifically. The authors of the current study wanted to explore how different types of short video engagement might influence distinct components of attention. </p>
<p>They also aimed to understand how such behaviors might be reflected in brain connectivity. Their goal was to pinpoint whether there is a measurable “cost” to engaging frequently with short videos in an interactive way.</p>
<p>“When we designed this study, we noticed that most research on social media had focused on platforms like Facebook, which are still largely text-driven. Short-form video platforms, however, are a newer, highly visual type of social media that rely on algorithmic, personalized recommendation, so existing findings may not automatically generalize to this context,” explained study author Guanghui Zhai of Tianjin Normal University.</p>
<p>“At the same time, people in everyday life are already asking, ‘Are short videos destroying our attention?’, but it is unclear whether all usage patterns carry the same risk. Building on prior work that distinguishes active from passive social media use, we applied this idea to short videos and focused on behaviors such as liking, commenting, and sharing.” </p>
<p>“We were especially interested in whether this active style of engagement places different demands on the brain than simply watching, and in the almost unexplored question of how everyday short video habits relate to specific attention functions—particularly the alerting system—and to the underlying brain networks that support it.”</p>
<p>The research involved two related experiments. In the first, the researchers studied 319 participants who completed a detailed questionnaire on their short video usage. The questionnaire distinguished between active behaviors like commenting or liking videos and passive behaviors like simply watching without interacting. Participants also completed the Attention Network Test, a behavioral task that measures three key components of attention: alerting, orienting, and executive control.</p>
<p>The alerting component reflects the ability to maintain a state of readiness to detect sudden events. The orienting component involves shifting attention to specific locations or cues. Executive control refers to the ability to resolve conflicts between competing stimuli. During the test, participants responded to arrows presented on a screen, with varying cues and distractions, and their reaction times were recorded.</p>
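<p>For readers unfamiliar with how these components are derived, the sketch below shows the standard way Attention Network Test scores are computed from condition-mean reaction times; the exact scoring used in this study may differ slightly:</p>
<pre><code># Minimal sketch of standard Attention Network Test scoring; the exact scoring in
# this study may differ. Reaction times (ms) are hypothetical condition means
# for one participant.
mean_rt = {
    "no_cue": 560.0, "double_cue": 520.0,       # alerting conditions
    "center_cue": 530.0, "spatial_cue": 495.0,  # orienting conditions
    "incongruent": 610.0, "congruent": 510.0,   # flanker (executive) conditions
}

alerting  = mean_rt["no_cue"] - mean_rt["double_cue"]        # readiness benefit from any warning cue
orienting = mean_rt["center_cue"] - mean_rt["spatial_cue"]   # benefit of knowing where to look
executive = mean_rt["incongruent"] - mean_rt["congruent"]    # cost of resolving conflict

print(alerting, orienting, executive)  # larger alerting score = more efficient alerting
</code></pre>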
<p>The results showed that participants who engaged in high levels of active short video use tended to perform worse on the alerting component. Specifically, they showed less improvement in reaction time when given a cue to expect a target, which suggests lower readiness to respond. This pattern was not observed for passive users. </p>
<p>The orienting component showed a small effect, where passive usage was linked to slightly better performance, but only among users who engaged in low levels of active use. No clear relationship emerged between short video use and executive control.</p>
<p>“In our data, people who more frequently engaged in active short video behaviors (for example, liking and commenting while watching) tended to show lower efficiency in the ‘alerting’ part of attention—the basic readiness to detect and respond to sudden signals,” Zhai told PsyPost. “This pattern did not appear for passive watching to the same extent.” </p>
<p>To better understand these behavioral findings, the researchers conducted a second experiment using brain imaging. They invited 115 of the original participants to complete a resting-state functional MRI scan. This type of scan measures how different parts of the brain communicate when a person is not doing any specific task. </p>
<p>The researchers focused on the brain’s default mode network and executive control network. The default mode network is typically active during self-reflection and internal thoughts, while the executive control network supports goal-directed behavior and attention.</p>
<p>They analyzed the data to see whether active or passive usage predicted patterns of connectivity between brain regions. One connection in particular stood out: between the right ventral prefrontal cortex, a region involved in evaluating important signals, and the right posterior cingulate cortex, a key hub in the default mode network. Participants with higher levels of active short video use showed stronger connectivity between these two areas. This connection also statistically explained the link between active usage and lower alerting efficiency, meaning it acted as a mediator.</p>
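<p>The article does not detail the mediation model itself, but the basic logic of such an analysis can be sketched with simulated data and a simple two-path indirect effect:</p>
<pre><code># Minimal sketch of a regression-based mediation check (a*b indirect effect);
# the paper's actual mediation model and covariates may differ. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 115
active_use   = rng.normal(size=n)                         # predictor
connectivity = 0.4 * active_use + rng.normal(size=n)      # mediator
alerting     = -0.3 * connectivity + rng.normal(size=n)   # outcome

def slope(y, x):
    # Coefficient of x from an ordinary least-squares fit with an intercept.
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = slope(connectivity, active_use)   # path from active use to connectivity
# Path from connectivity to alerting, controlling for active use
XM = np.column_stack([np.ones(n), active_use, connectivity])
b = np.linalg.lstsq(XM, alerting, rcond=None)[0][2]

print("indirect effect (a*b):", a * b)  # portion of the effect carried through the mediator
</code></pre>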
<p>“One thing that surprised us was how right-lateralized the key functional network turned out to be, and how strongly it was embedded in the default mode network,” Zhai explained. “We found that the critical pathway involved a right-hemisphere circuit linking the ventral prefrontal cortex and the posterior cingulate cortex—regions that sit at the intersection of detecting salient events and managing internal, self-related processing. This pattern suggests that the ‘cost’ of very active short video use may lie in how the brain balances internal social–emotional processing with readiness to respond to external signals.”</p>
<p>“The effects we found are modest but reliable at the group level, with standardized regression coefficients in the small-to-moderate range. This means we are not talking about dramatic impairments that you would notice immediately in everyday life, but rather subtle differences that become clear when we test many people with sensitive behavioral and brain measures. In other words, heavy active use of short videos is unlikely to ‘destroy’ attention on its own, but it may nudge the alerting system in a less efficient direction, especially when combined with other demands on attention and sleep.”</p>
<p>“At the same time, many studies have also shown that social media use can enhance subjective well-being and, in some cases, help relieve anxiety and depressive symptoms, largely through opportunities for social connection,” Zhai continued. “So from a practical standpoint, our findings support a message of moderation: short-form video is not inherently harmful, but very intensive and highly interactive use may come with small attention costs that need to be balanced against its potential benefits.”</p>
<p>“For the average user, the message is not that short videos are ‘purely bad,’ but that very interactive, multitasking-style use may subtly tax the brain systems that help us stay alert.”</p>
<p>As with all research, there are limitations. Importantly, the study was correlational and cross-sectional, meaning it measured variables at one point in time. As a result, the researchers cannot say for certain whether active short video use <em>causes</em> reduced alerting, or whether people with lower alerting efficiency are more drawn to active use. The sample also consisted primarily of young adults in China, so the findings may not generalize to other age groups or cultural contexts.</p>
<p>Despite these limitations, the study contributes to a growing body of research exploring how digital media use intersects with core cognitive functions. By identifying a specific brain connection that may underlie reduced alerting among frequent active users, the researchers offer a starting point for future work aimed at understanding and potentially mitigating attention-related effects of short video engagement.</p>
<p>“Our next steps are to move beyond correlations by combining longitudinal and experimental designs,” Zhai said. “For example, we would like to track people’s short video behavior over time using objective logs, manipulate how often they engage in active versus passive use, and examine the consequences for attention, sleep, and mood.” </p>
<p>“We are also interested in testing whether simple ‘digital hygiene’ strategies—such as limiting active interactions before bedtime or during study time—can help protect the alerting system. Ultimately, we hope to provide nuanced, brain-based guidelines for healthier media habits rather than a blanket message to avoid short videos.”</p>
<p>The researchers hope their findings can inform the design of digital platforms and help users make more informed decisions about how they interact with media. For example, limiting active engagement during tasks that require high alertness—such as before driving or while studying—may help preserve attentional readiness. </p>
<p>“One message I would emphasize is that the same behavior can have both benefits and costs,” Zhai said. “Active short video use can foster social connection and a sense of being heard, but our data suggest that this comes with a small ‘sacrifice’ in the brain systems that keep us alert to the outside world.” </p>
<p>“By identifying a specific brain pathway linking active use to alerting—the functional connectivity between the right ventral prefrontal cortex and the right posterior cingulate cortex—we hope to show that these trade-offs are real, measurable, and grounded in everyday habits. Recognizing them can empower users to make more informed choices about when, how, and how much they engage with short videos.”</p>
<p>The study, “<a href="https://doi.org/10.1016/j.neuropsychologia.2025.109291" target="_blank">The sacrifice of alerting in active short video users: Evidence from executive control and default mode network functional connectivity</a>,” was authored by Guanghui Zhai, Yang Feng, Xin Ling, Jiahui Su, Yifan Liu, Yiwei Li, Yunpeng Jiang, and Xia Wu.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<p><strong>Forwarded by:<br />
Michael Reeder LCPC<br />
Baltimore, MD</strong></p>
<p><strong>This information is taken from free public RSS feeds published by each organization for the purpose of public distribution. Readers are linked back to the article content on each organization's website. This email is an unaffiliated unofficial redistribution of this freely provided content from the publishers. </strong></p>
<p> </p>