<table style="border:1px solid #adadad; background-color: #F3F1EC; color: #666666; padding:8px; -webkit-border-radius:4px; border-radius:4px; -moz-border-radius:4px; line-height:16px; margin-bottom:6px;" width="100%">
<tbody>
<tr>
<td><span style="font-family:Helvetica, sans-serif; font-size:20px;font-weight:bold;">PsyPost – Psychology News</span></td>
</tr>
<tr>
<td> </td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/neuroscientists-link-low-self-awareness-to-stronger-brain-reactions-to-moralized-issues/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Neuroscientists link low self-awareness to stronger brain reactions to moralized issues</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Apr 1st 2025, 10:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p>A new study published in the journal <em><a href="https://link.springer.com/article/10.3758/s13415-024-01243-3" target="_blank" rel="noopener">Cognitive, Affective, & Behavioral Neuroscience</a></em> reveals that people who hold strong moral convictions about political issues make decisions more quickly—but that these choices are shaped by both emotional brain responses and metacognitive ability. The research shows that moral conviction activates specific brain regions involved in emotion and cognitive control, and that people with lower self-awareness about their own decision accuracy show stronger brain responses to morally charged political issues.</p>
<p>The findings help explain why deeply moralized political beliefs can feel so non-negotiable. When people see political positions as morally right or wrong, they not only respond more quickly but also engage brain systems associated with salience, conflict monitoring, and goal-driven thinking. But this fast, confident decision-making comes with a caveat: people who are less able to distinguish between correct and incorrect judgments—a trait known as low metacognitive sensitivity—appear to rely more heavily on these moral signals in the brain. This could help explain why some individuals become more rigid or dogmatic in their political beliefs.</p>
<p>The researchers behind the study, led by <a href="https://www.linkedin.com/in/jean-decety/" target="_blank" rel="noopener">Jean Decety</a>, an Irving B. Harris Distinguished Professor at the University of Chicago and the director of the <a href="https://voices.uchicago.edu/scnl/" target="_blank" rel="noopener">Social Cognitive Neuroscience Lab</a>, sought to better understand how moralized beliefs contribute to political polarization and intolerance. Moral convictions are beliefs that people view as tied to fundamental principles of right and wrong. Unlike regular opinions, they tend to be perceived as universal, unchangeable, and non-negotiable.</p>
<p>Prior studies have shown that people with strong moral convictions are more likely to engage in collective action—but also more likely to justify prejudice or violence against ideological opponents. While the emotional and behavioral consequences of moral conviction have been studied extensively, the brain mechanisms behind these effects remain poorly understood. The new study aimed to explore how moral conviction is processed during real-time political decisions and how it interacts with people’s ability to evaluate their own judgments.</p>
<p>To investigate these questions, the researchers recruited participants from the Chicago area for a two-part study. First, 80 adults completed an online survey about their attitudes toward a range of sociopolitical issues, such as gun control or climate change. They rated how strongly they supported or opposed each issue and how morally important those views felt. The researchers then selected 49 participants to complete a functional magnetic resonance imaging (fMRI) session, of whom 44 provided usable brain data.</p>
<p>During the scan, participants viewed pairs of photographs showing protest groups advocating for or against various political causes. For each pair, they had to quickly decide which group they supported more. All issues had been previously rated in the online survey, allowing the researchers to match brain activity during the scan to each person’s level of moral conviction and support for those causes.</p>
<p>Before the scan, participants also completed a perceptual task to measure their metacognitive sensitivity. In this task, they had to judge which of two images contained more dots and then rate their confidence in their decision. By comparing accuracy with confidence, researchers could determine how well participants could tell when they were right or wrong—essentially measuring their insight into their own judgments.</p>
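<p>For readers curious how "comparing accuracy with confidence" becomes a sensitivity score, the sketch below shows one common, simplified index: the probability that confidence is higher on correct trials than on incorrect ones (an AUROC-style measure). The trial data are made up, and the study itself may have relied on a related model-based metric such as meta-d′, so treat this only as an illustration.</p>
<pre><code># Simplified metacognitive-sensitivity index: how well trial-by-trial
# confidence separates correct from incorrect judgments (0.5 = no insight,
# 1.0 = perfect insight). Illustration only; the published study may have
# used a model-based measure such as meta-d'.
import numpy as np

def metacognitive_auroc(correct, confidence):
    """Probability that a randomly chosen correct trial carries higher
    confidence than a randomly chosen incorrect trial (ties count as 0.5)."""
    correct = np.asarray(correct, dtype=bool)
    confidence = np.asarray(confidence, dtype=float)
    hits, misses = confidence[correct], confidence[~correct]
    if hits.size == 0 or misses.size == 0:
        return float("nan")
    greater = (hits[:, None] > misses[None, :]).mean()
    ties = (hits[:, None] == misses[None, :]).mean()
    return float(greater + 0.5 * ties)

# Hypothetical data: 1 = correct dot judgment, confidence rated 1-6
correct    = [1, 1, 0, 1, 0, 1, 0, 1]
confidence = [5, 6, 2, 4, 3, 6, 1, 5]
print(round(metacognitive_auroc(correct, confidence), 2))  # higher = better insight
</code></pre>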
<p>The researchers found that participants made faster decisions when choosing between protest groups involving issues they felt strongly about morally. This was true even when controlling for how much they supported the issues or how familiar they were with them. In other words, moral conviction sped up decision-making above and beyond basic preference. Brain activity mirrored these effects. When making decisions about more morally convicted issues, participants showed increased activation in the anterior insula, anterior cingulate cortex, and lateral prefrontal cortex. These brain regions are known to be involved in emotional salience, conflict monitoring, and cognitive control.</p>
<p>The lateral prefrontal cortex was especially active in these high-conviction trials. This area of the brain is often involved in setting and pursuing goals, as well as enforcing social norms. Its increased activity suggests that moral conviction might engage higher-level thinking that treats political positions not just as opinions, but as imperatives that must be acted upon. The anterior insula and anterior cingulate cortex, by contrast, likely reflect the emotional intensity and personal significance of the issues.</p>
<p>When the researchers looked at overall support for the two protest groups shown in each trial, rather than moral conviction, they saw stronger activation in brain regions involved in value assessment—especially the ventromedial prefrontal cortex and the amygdala. These areas are commonly involved in subjective valuation and emotional reactions, suggesting that agreement with an issue feels rewarding, even if the belief is not morally framed.</p>
<p>To explore how these systems work together, the researchers conducted a functional connectivity analysis. They found that the lateral prefrontal cortex showed stronger connectivity with the ventromedial prefrontal cortex during decisions involving higher moral conviction. This suggests that the brain integrates moral information into the valuation process—essentially, moral beliefs are being factored into how much a person values a given choice.</p>
<p>But one of the most striking findings came from comparing brain responses with metacognitive sensitivity. Participants with lower ability to distinguish between correct and incorrect judgments—those with poor metacognitive insight—showed stronger brain activity in response to moral conviction. This was particularly evident in the lateral prefrontal cortex and anterior cingulate cortex. These individuals also showed more activity in valuation regions like the ventromedial prefrontal cortex and the ventral striatum when moral conviction was high.</p>
<p>In contrast, support-related brain activity in these regions did not correlate with metacognitive ability. This means that people with low metacognitive sensitivity are not necessarily more supportive of political issues—but they do show a stronger neural response when their beliefs feel morally justified.</p>
<p>These findings support the idea that low metacognitive sensitivity might amplify the influence of moral conviction on both decision-making and brain function. In practical terms, people who lack insight into the accuracy of their own beliefs may be more likely to treat political issues as moral imperatives and less willing to consider alternative viewpoints. This could help explain why low metacognitive ability has been linked to greater dogmatism and political extremism in previous research.</p>
<p>The study is not without limitations. The decisions made during the scan involved simplified visual comparisons between protest groups, which may not fully capture the complexity of real-world moral reasoning. Additionally, while the study shows that moral conviction affects brain activity and decision speed, it cannot prove that these brain responses cause moral conviction or rigidity. The researchers also note that moral conviction overlaps with related concepts like attitude strength, familiarity, and emotional arousal, making it difficult to isolate its specific effects.</p>
<p>Future research could investigate how moral conviction influences decision-making in more complex social contexts, such as conversations or negotiations. It could also explore whether training or interventions to improve metacognitive sensitivity might reduce dogmatic thinking and promote more flexible political reasoning.</p>
<p>The study, “<a href="https://doi.org/10.3758/s13415-024-01243-3" target="_blank" rel="noopener">Moral conviction interacts with metacognitive ability in modulating neural activity during sociopolitical decision‑making</a>,” was authored by Qiongwen Cao, Michael S. Cohen, Akram Bakkour, Yuan Chang Leong, and Jean Decety.</p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/extraverts-with-autotelic-personality-traits-are-more-likely-to-experience-flow/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Extraverts with autotelic personality traits are more likely to experience flow</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Apr 1st 2025, 08:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p>A study of undergraduate students in California found that extraverted individuals with pronounced autotelic personality traits—a tendency to engage in activities purely for the experience—were more likely to experience flow while playing the game <em>Perfection</em> alone. The research was published in the <a href="https://doi.org/10.1111/jopy.12938"><em>Journal of Personality</em></a>.</p>
<p>Flow is a psychological state in which a person becomes fully immersed in an activity, experiencing a deep sense of engagement and fulfillment. During flow, individuals often lose awareness of time and their surroundings due to intense concentration on the task at hand. This state is typically achieved when the challenge of the activity is perfectly matched to the individual’s skill level—neither too easy to be boring nor too difficult to cause anxiety.</p>
<p>While in flow, a person tends to feel a strong sense of personal control over the situation and its outcome, which enhances both performance and enjoyment. Flow is also associated with a deep sense of joy and creative satisfaction, even if the activity doesn’t result in an immediate reward. It is commonly experienced during activities that require both skill and challenge, such as playing a musical instrument, participating in sports, or engaging in a favorite hobby.</p>
<p>Study author Dwight C. K. Tse and his colleagues set out to examine how personality traits influence the experience of flow in different social contexts. They hypothesized that individuals with pronounced autotelic personality traits would be more likely to experience flow. The researchers also expected that participants who experienced positive emotions and those with higher motivation to engage in the task would be more likely to enter a flow state. Additionally, they anticipated that the relationship between extraversion and flow would vary depending on the social context.</p>
<p>The study involved 396 undergraduate students from the Psychology Participant Pool at the University of California, Riverside. All participants had been in a romantic relationship for at least six months.</p>
<p>Participants were asked to play the board game <em>Perfection</em> under three different conditions. In the first, solitary condition, participants completed the game tasks individually in separate rooms. In the “mere-presence” condition, romantic partners completed the tasks individually but in the same room. In the interactive condition, couples worked together on the same task, and instructions explicitly encouraged them to communicate during the activity. <em>Perfection</em> is a timed puzzle game in which players must fit uniquely shaped pieces into corresponding holes on the board before time runs out and the pieces pop out.</p>
<p>After each game condition, participants completed the Short Flow State Scale to assess their experience of flow. They also completed assessments of extraversion (from the Big Five Inventory), autotelic personality (using the Autotelic Personality Questionnaire), and current emotional state (via a 24-item scale measuring various emotions).</p>
<p>An autotelic personality is characterized by an intrinsic motivation to engage in tasks for their own sake. Extraversion is a personality trait associated with sociability, energy, and a tendency to seek out stimulation and the company of others.</p>
<p>The results showed that individuals with strong autotelic personality traits experienced flow more frequently across all testing conditions. Extraversion was linked to stronger flow experiences in the two conditions where a romantic partner was present, but not in the solitary condition. However, in the solitary condition, extraversion predicted stronger flow only among individuals with high autotelic personality traits—not among those low in these traits.</p>
<p>Further analysis revealed that flow was more likely to occur in participants who experienced low-arousal positive emotions (e.g., calm, contentment) and less likely in those who experienced high-arousal negative emotions (e.g., anxiety, frustration). In other words, flow was more common in individuals who felt mild positive emotions and less common in those feeling strong negative emotions during the game.</p>
<p>The study sheds light on how personality influences the likelihood of experiencing flow. However, it’s important to note that all participants were undergraduate students, so the findings may not generalize to other demographic groups.</p>
<p>The paper, “<a href="https://doi.org/10.1111/jopy.12938">Alone but flowing: The effects of autotelic personality and extraversion on solitary flow,</a>” was authored by Dwight C. K. Tse, Ayodele Joseph, and Kate Sweeny.</p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/a-demanding-work-culture-could-be-quietly-undermining-efforts-to-raise-birth-rates/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">A demanding work culture could be quietly undermining efforts to raise birth rates</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Apr 1st 2025, 06:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p>China’s falling birth rate has become a major national concern, and a new study published in <em><a href="https://doi.org/10.1080/19485565.2024.2422850" target="_blank" rel="noopener">Biodemography and Social Biology</a></em> suggests that the country’s demanding work culture may be partly to blame. The research shows that working more than 40 hours a week significantly reduces people’s desire to have children. Overtime, night shifts, and being constantly on call make it harder for people to imagine balancing work and family life — a finding that has important implications for future population policies.</p>
<p>China’s fertility rate has continued to fall even as the government has introduced a series of policies aimed at encouraging childbirth. After decades of strict population control, including the well-known one-child policy, China has now moved to allow two, and even three, children per family.</p>
<p>However, these efforts have had limited success. The birth rate remains low, and the country is now facing the social and economic challenges of an aging population, including a shrinking workforce and increased pressure on social support systems. While financial constraints and housing costs are often cited as obstacles to starting a family, the study’s authors argue that time scarcity — particularly due to long working hours — may be just as important.</p>
<p>To investigate this issue, researchers at Nankai University and Henan University of Technology used data from the 2020 China Family Panel Studies (CFPS), a large-scale survey of over 20,000 people from across the country. The CFPS collects detailed information on demographics, employment, income, health, and family life. In this study, the researchers focused on respondents’ reported weekly working hours and their responses to a question about whether they intended to have a child within the next two years. People working more than 40 hours per week were classified as doing overtime, based on standards from both international labor guidelines and Chinese labor laws.</p>
<p>The team then analyzed how overtime affected fertility intentions, taking into account a wide range of other factors including age, sex, ethnicity, income, marital status, and region. They used both provincial and city-level data to examine regional variations, and also explored how different kinds of overtime work — such as weekend shifts, night shifts, and on-call duties — might affect people’s willingness to start or expand their families.</p>
<p>The results were clear: overtime work had a strong and statistically significant negative effect on fertility intentions. This pattern held across nearly every province and city analyzed. The more hours people worked beyond the standard 40-hour week, the less likely they were to say they planned to have children in the near future. This trend was especially pronounced for people working 40–50 hours per week, where fertility intentions dropped the most sharply. Those working more than 60 hours a week showed more varied responses, but the overall effect was still negative.</p>
<p>When the researchers broke down working hours into smaller segments, they found that moderate work schedules — especially those between 0 and 20 hours per week — were actually associated with higher fertility intentions. Between 20 and 40 hours, the effect was mixed: some people were more willing to have children as hours increased, while others were not. But once work passed the 40-hour threshold, the negative effects on fertility became much stronger.</p>
<p>The type of overtime also mattered. People who regularly worked on weekends, at night, or were expected to be reachable 24/7 were significantly less likely to plan for children. These types of schedules interfere not just with physical rest, but also with family and social life. Weekend work and night shifts disrupt routines, reduce time with partners, and can create chronic fatigue. Being constantly on-call added another layer of stress, keeping people mentally tethered to their jobs even during off hours. The authors suggest that this erosion of personal time leaves little room for planning or raising a family.</p>
<p>There were also differences based on gender and marital status. Women showed a stronger negative response to overtime than men, suggesting that long hours may be especially burdensome for women who still shoulder more of the childcare and household responsibilities. Unmarried individuals were also more affected than those who were already married, possibly because they are still in the phase of life where fertility decisions are more flexible.</p>
<p>To explore whether certain workplace conditions could ease the conflict between work and family goals, the researchers examined a few potential moderating factors. Flexible working arrangements — where employees could choose their start and end times — had a positive effect on fertility intentions. When people had more control over their schedules, they were more likely to consider having children.</p>
<p>Similarly, satisfaction with career advancement opportunities and wages was linked to stronger fertility intentions. Transparent promotion paths and fair pay may help employees feel that their career won’t be derailed by having children. Another important factor was maternity insurance, which was also associated with greater willingness to have children. These workplace benefits can reduce the financial and psychological burden of childbearing.</p>
<p>But there are limitations to consider. The study is based on cross-sectional data from a single year, so it cannot track changes in fertility intentions over time or definitively prove cause and effect. Fertility intentions do not always translate into actual births, and personal circumstances — such as relationships or health — may also influence family planning decisions. Additionally, although the CFPS is a large and representative survey, it may not capture all forms of informal or unreported work, which is still common in some parts of China.</p>
<p>The study, “<a href="https://doi.org/10.1080/19485565.2024.2422850" target="_blank" rel="noopener">Reasons for the continued decline in fertility intentions: explanations from overtime work</a>,” was authored by Jiawei Zhao, Yuxuan Li, and Wenqi Li.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/this-common-activity-is-linked-to-59-higher-odds-of-insomnia-study-finds/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">This common activity is linked to 59% higher odds of insomnia, study finds</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Mar 31st 2025, 16:45</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p>A large study of Norwegian college students has found that spending more time on screens after going to bed is tied to a higher likelihood of experiencing insomnia and getting less sleep — no matter the type of screen activity. For each additional hour spent on screens in bed, participants had 59% higher odds of reporting symptoms of insomnia and slept an average of 24 minutes less. Interestingly, students who used only social media in bed reported fewer insomnia symptoms and longer sleep duration than those who engaged in other screen-based activities.</p>
<p>The research was published in <em><a href="https://doi.org/10.3389/fpsyt.2025.1548273" target="_blank" rel="noopener">Frontiers in Psychiatry</a></em>.</p>
<p>The researchers conducted this study to better understand how screen time after going to bed impacts sleep in young adults, a group known for high levels of digital media use and frequent sleep issues. While many past studies have focused on children and teenagers, the relationship between screens and sleep among college-aged adults has received less attention. The researchers also wanted to explore whether different screen activities — such as social media, watching videos, or gaming — affect sleep differently. Some experts have speculated that social media might be particularly harmful because of its stimulating and interactive nature, but evidence has been mixed.</p>
<p>To investigate these questions, researchers used data from the 2022 Students’ Health and Wellbeing Study, a national survey of higher education students in Norway. From this dataset, the researchers focused on a sample of 45,654 full-time students aged 18 to 28. These students answered detailed questions about their bedtime screen use, types of activities they engaged in, and their sleep patterns. About 38,800 of them reported using screens in bed, making up the core sample for the main analyses.</p>
<p>Participants were asked how often they used screens in bed, how long they spent on them each night, and what types of activities they did — such as checking social media, watching movies, listening to music, gaming, or reading for school. The researchers grouped participants into three categories based on their responses: those who used only social media, those who used social media along with other activities, and those who did not use social media but engaged in other screen-based tasks.</p>
<p>To measure sleep, the researchers collected information about bedtime and wake time for both weekdays and weekends. They also asked about how long it usually took to fall asleep, whether participants woke up during the night, and how often they felt tired during the day. Participants were classified as having symptoms of insomnia if they reported trouble falling or staying asleep at least three times per week, daytime sleepiness at least three times per week, and a duration of these issues lasting at least three months.</p>
<p>The findings confirmed a consistent pattern: more screen time in bed was linked to worse sleep. On average, each additional hour spent on screens after going to bed was associated with 59% greater odds of meeting the criteria for insomnia and 24 fewer minutes of sleep per night. These associations were statistically significant and remained consistent across all screen activity groups. In other words, whether a student was watching videos, scrolling social media, or doing something else on their device, more time on screens was associated with poorer sleep.</p>
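<p>Because "higher odds" is easy to misread as "higher probability," here is a small sketch of how a per-hour odds ratio of 1.59 plays out in practice. The 30% baseline rate is an assumption for illustration only, not a figure reported in the study, and the published estimate comes from an adjusted statistical model not reproduced here.</p>
<pre><code># Rough illustration of what "59% higher odds per additional hour" implies.
# The baseline insomnia rate below is hypothetical, not taken from the paper.
odds_ratio_per_hour = 1.59

def risk_after_hours(baseline_probability, hours):
    """Convert probability to odds, multiply by OR**hours, convert back."""
    odds = baseline_probability / (1 - baseline_probability)
    odds *= odds_ratio_per_hour ** hours
    return odds / (1 + odds)

baseline = 0.30  # hypothetical share of students meeting insomnia criteria
for hours in range(4):
    print(f"{hours} extra hour(s) of screen use in bed: {risk_after_hours(baseline, hours):.0%}")
# Odds ratios compound multiplicatively, and a 59% rise in odds is not the
# same as a 59-percentage-point rise in risk.
</code></pre>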
<p>However, when the researchers looked more closely at the types of activities students engaged in, some differences emerged. Those who used only social media in bed had the best sleep outcomes overall — they were less likely to report symptoms of insomnia and had the longest sleep duration. Students who used a mix of social media and other activities fell somewhere in the middle. The group with the poorest sleep consisted of those who used screens for activities other than social media. Compared to students who only used social media, those who engaged in other activities had 35% higher odds of reporting insomnia and slept about 17 minutes less per night.</p>
<p>These differences were observed even after accounting for the amount of time spent on screens. That is, the type of screen activity was linked to sleep outcomes independent of duration. Still, the main factor that seemed to matter most was the total time spent on screens in bed. The link between screen time and sleep problems did not vary significantly across the three activity groups.</p>
<p>The researchers offer several explanations for these findings. One possibility is that screen time reduces sleep simply because it replaces time that would otherwise be spent resting. This idea, known as the displacement hypothesis, is supported by recent research showing that screen use is more likely to delay bedtime than to directly cause problems falling asleep once in bed. If screen use increased arousal or disrupted the body’s sleep rhythms through light exposure, different activities might have shown different effects — but that was not the case here.</p>
<p>Another explanation is that students who only use social media in bed might be more socially connected, which can protect against sleep problems. Previous studies have found that strong social ties are associated with better sleep. It’s also possible that students with existing sleep difficulties are more likely to use other screen-based activities, like watching movies or listening to audio, to help pass the time or relax while trying to fall asleep. In this case, screen use may be a result of poor sleep rather than the cause.</p>
<p>It’s important to note that the study’s cross-sectional design means it cannot establish causation. The researchers cannot say for sure whether screen use causes insomnia or if people with insomnia are simply more likely to use screens. Longitudinal studies or experiments tracking screen use and sleep over time would be needed to clarify the direction of these effects. It’s also possible that certain personality traits, such as being a night owl, influence both screen use and sleep patterns.</p>
<p>Despite these limitations, the study adds important insight into how screen use at bedtime is related to sleep in young adults. It challenges the idea that social media is uniquely harmful and suggests that overall screen time may be the more important factor. The results support the idea that reducing screen use in bed — regardless of the activity — could help improve sleep.</p>
<p>Lead author Gunnhild Johnsen Hjetland of the Norwegian Institute of Public Health emphasized this point: “We found no significant differences between social media and other screen activities, suggesting that screen use itself is the key factor in sleep disruption — likely due to time displacement, where screen use delays sleep by taking up time that would otherwise be spent resting.”</p>
<p>Hjetland also offered practical advice: “If you struggle with sleep and suspect that screen time may be a factor, try to reduce screen use in bed, ideally stopping at least 30 to 60 minutes before sleep. If you do use screens, consider disabling notifications to minimize disruptions during the night.”</p>
<p>The study, “<a href="https://doi.org/10.3389/fpsyt.2025.1548273" target="_blank" rel="noopener">How and when screens are used: Comparing different screen activities and sleep in Norwegian university students</a>,” was authored by Gunnhild Johnsen Hjetland, Jens Christoffer Skogen, Mari Hysing, Michael Gradisar, and Børge Sivertsen.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/home-based-smell-test-shows-promise-in-detecting-early-cognitive-decline-study-finds/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Home-based smell test shows promise in detecting early cognitive decline, study finds</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Mar 31st 2025, 14:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>A new study published in <em><a href="https://www.nature.com/articles/s41598-025-92826-8" target="_blank" rel="noopener">Scientific Reports</a></em> suggests that a simple, self-administered smell test taken at home could help identify older adults who may be at risk for Alzheimer’s disease. Researchers found that older individuals with mild cognitive impairment performed significantly worse on tests of smell identification and discrimination than their cognitively healthy peers. These differences were detectable using a new digital tool called the AROMHA Brain Health Test, which participants completed remotely using scratch-and-sniff cards and a web-based app.</p>
<p>The researchers behind the study wanted to explore whether subtle changes in a person’s sense of smell might offer a window into early brain changes that occur before memory problems become apparent. They designed the test to be cost-effective, easy to use at home, and available in both English and Spanish. By comparing results from healthy adults, individuals with subjective memory complaints, and people with diagnosed mild cognitive impairment, the team found that smell test scores declined not only with age but also with increasing levels of cognitive difficulty.</p>
<p>Mild cognitive impairment is a clinical stage between normal aging and more serious decline such as dementia. People with this condition often notice changes in memory or thinking but can still function independently in daily life. However, they are more likely to go on to develop Alzheimer’s disease than those without such symptoms. The researchers were interested in identifying noninvasive ways to detect early changes in the brain, particularly those that occur before symptoms become severe. Since certain brain areas related to smell are affected early in Alzheimer’s disease, the team focused on olfaction—our sense of smell—as a potential early indicator of cognitive decline.</p>
<p>To investigate this, the researchers developed the AROMHA Brain Health Test, a bilingual (English and Spanish) home-based test that evaluates different aspects of smell, including odor identification, memory, discrimination, and intensity. Participants used peel-and-sniff cards mailed to their homes and completed the test online using a web application. The platform guided users through a series of steps, asking them to smell each odor, select the correct name from four options, rate its intensity, and indicate their confidence in each answer. Additional tasks required them to recall previously presented odors and to decide whether pairs of smells were the same or different.</p>
<p>The study included 127 cognitively normal participants, 34 with subjective cognitive complaints, and 19 with mild cognitive impairment. Participants were recruited from 21 states and Puerto Rico through research registries and online postings. Some participants were classified based on formal cognitive testing from the Massachusetts Alzheimer’s Disease Research Center, while others were categorized based on self-reported memory concerns or a physician’s diagnosis. All participants gave informed consent and completed the test either under observation (via video call or in-person) or independently.</p>
<p>The researchers examined how participants performed on the different components of the smell test. They found that scores on odor identification, odor discrimination, and odor memory declined with age. Importantly, individuals with mild cognitive impairment scored significantly lower on odor identification and discrimination compared to those who were cognitively normal, even after accounting for age, sex, and education. These results suggest that the smell test can detect subtle cognitive differences associated with early stages of decline.</p>
<p>Interestingly, the test was equally effective whether participants were observed or completed it on their own. This supports the idea that the tool can be used reliably in a home setting without the need for supervision. The researchers also compared results between English- and Spanish-speaking participants and found no meaningful differences in performance, highlighting the test’s potential for use in diverse populations.</p>
<p>To validate the test’s ability to detect genuine smell impairments, the researchers included a small group of individuals diagnosed with anosmia, or complete loss of smell. As expected, this group performed at chance level across all measures, confirming that the test could distinguish between normal and impaired olfactory function.</p>
<p>Among the older participants, the researchers divided the group into three categories: cognitively normal, those with subjective memory concerns, and those with mild cognitive impairment. They found that those with mild impairment performed significantly worse than both of the other groups on several measures, particularly odor identification and discrimination. These differences were not explained by age alone, suggesting that cognitive impairment itself was responsible for the lower scores.</p>
<p>One feature of the test that the researchers highlighted was the inclusion of a confidence rating after each identification task. This allowed them to measure not only whether participants chose the correct answer, but also whether they were guessing. The results showed that when participants were more confident in their answers, they were more likely to be correct—but this pattern was weaker in those with cognitive impairment. This finding supports previous research suggesting that people with early cognitive decline may overestimate their abilities or be less aware of their errors.</p>
<p>While the results are promising, the study had some limitations. Not all participants underwent full neuropsychological assessments, which means that some classifications were based on self-report rather than clinical testing. The study was also cross-sectional, meaning it looked at participants at a single point in time. Future studies will need to follow participants over months or years to determine whether low scores on the smell test predict future memory decline or the development of Alzheimer’s disease.</p>
<p>Despite these limitations, the researchers see strong potential for the AROMHA Brain Health Test as a scalable, noninvasive screening tool. It could help identify individuals who may benefit from further testing or who might be eligible for clinical trials targeting early-stage Alzheimer’s. Because the test can be administered at home and in multiple languages, it may help reach populations that are often underrepresented in research.</p>
<p>“Early detection of cognitive impairment could help us identify people who are at risk of Alzheimer’s disease and intervene years before memory symptoms begin,” said study author Mark Albers, a neurologist at Massachusetts General Hospital and assistant professor at Harvard Medical School. Albers was involved in founding the company that developed the AROMHA Brain Health Test.</p>
<p>The researchers plan to continue studying the test in larger and more diverse groups. They are also interested in exploring how the results relate to brain imaging or blood-based biomarkers of disease. If validated in future studies, this type of smell test could become a valuable tool for early detection and monitoring of brain health in aging adults.</p>
<p>The study, “<a href="https://www.nature.com/articles/s41598-025-92826-8" target="_blank" rel="noopener">Early detection of cognitive impairment could help us identify people who are at risk of Alzheimer’s disease and intervene years before memory symptoms begin</a>,” was authored by Benoît Jobin, Colin Magdamo, Daniela Delphus, Andreas Runde, Sean Reineke, Alysa Alejandro Soto, Beyzanur Ergun, Sasha Mukhija, Alefiya Dhilla Albers, and Mark W. Albers.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/the-dark-side-of-dominance-victory-can-fuel-sexual-aggression-in-psychopathic-men/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">The dark side of dominance: Victory can fuel sexual aggression in psychopathic men</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Mar 31st 2025, 12:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>A new study published in <em><a href="https://doi.org/10.1002/ab.70025" target="_blank">Aggressive Behavior</a></em> provides new insight into how certain personality traits may interact with social dynamics to increase the risk of sexual aggression. The research focused on heterosexual male college students and found that men with elevated levels of callousness and unemotional traits were more likely to send sexually explicit and unwanted content to a woman after winning a competition against another man. These findings suggest that feelings of power and dominance following a win may activate sexually aggressive behavior in some men, particularly those with psychopathic tendencies.</p>
<p>The researchers designed the study to address ongoing debates in the field of sexual aggression research, where different theoretical approaches—such as feminist and evolutionary frameworks—have often been treated separately. Feminist theories focus on social power, gender inequality, and cultural norms, while evolutionary perspectives emphasize biological drives and intermale competition.</p>
<p>Both perspectives agree that status plays a role in shaping male behavior, but research has rarely examined how social context and personality traits may jointly contribute to sexual aggression. The goal of the current study was to test whether a simulated status challenge—winning or losing a competition—would influence men’s sexually aggressive behavior, especially among those with psychopathic personality traits.</p>
<p>“This study was initiated by Dr. Amy Hoffmann as part of her graduate studies at the University of South Florida’s Clinical Psychology program,” explained <a href="https://www.usf.edu/arts-sciences/departments/psychology/people/everona.aspx" target="_blank" rel="noopener">Edelyn Verona</a>, a professor of psychology, co-director of the <a href="https://www.usf.edu/arts-sciences/centers/cjrp/index.aspx" target="_blank" rel="noopener">Center for Justice Research & Policy</a> at the University of South Florida, and co-editor of the <em><a href="https://www.routledge.com/Routledge-Handbook-of-Evidence-Based-Criminal-Justice-Practices/Verona-Fox/p/book/9781032107349" target="_blank" rel="noopener">Routledge Handbook of Evidence-Based Criminal Justice Practices</a></em>.</p>
<p>“Amy was interested in pulling together constructs relevant to evolutionary theories and feminist theories of sexual violence perpetration, and merging those with the study of individual difference risk factors, especially psychopathic traits on which we had conducted some previous research. Thus, we used an experimental approach to examine how much intermale competition and status loss/win in this competition would influence the extent to which men would be willing to expose a woman to unwanted sexual content, and if psychopathic trait would moderate the relationship between winning/losing a cognitive challenge against another male and a lab proxy of sexual aggression.”</p>
<p>The research team recruited 298 heterosexual male college students, with 139 ultimately completing both parts of the study and providing valid data for analysis. The average participant was 21 years old, and the majority identified as white or Hispanic. Participants first completed a series of personality questionnaires, including a validated measure of psychopathic traits. This scale assessed two broad types of traits: interpersonal-affective traits (which include lack of empathy, manipulation, and dominance) and impulsive-antisocial traits (which include impulsivity, irresponsibility, and norm-violating behavior).</p>
<p>In the second part of the study, participants came to the lab and completed a simulated competition task. They were randomly assigned to either win or lose a cognitive challenge against another male, who was actually a trained confederate in half the cases. After the competition, participants were told they would take part in a media-sharing task with a female participant from another university lab (in reality, this “partner” was a prerecorded video of a woman who had expressed a strong dislike for sexual content).</p>
<p>Participants were given three video options—one sexually explicit, one romantic but not explicit, and one neutral—and were asked to choose and allocate time to clips they would send her. The amount of time they chose to send the sexually explicit clip, despite knowing the recipient disliked that kind of content, was used as a measure of sexually aggressive behavior.</p>
<p>The researchers found that, on average, men who won the competition sent longer durations of the unwanted sexually explicit clip than those who lost. This pattern contradicts the idea that a threat to status (such as losing) would provoke sexual aggression, at least in this context.</p>
<p>“We were expecting a status loss to be associated with more sexual aggression, with the idea that the sexual aggression would serve as an emotional repair technique,” Verona told PsyPost.</p>
<p>Instead, the data suggested that winning—and the increased feelings of power that likely accompanied it—was a more significant trigger. However, this effect was not uniform across participants. The most telling result came from the interaction between competition outcome and personality traits.</p>
<p>Specifically, men who scored higher on interpersonal-affective psychopathic traits were far more likely to engage in sexually aggressive behavior after winning the competition. For these men, the feelings of dominance and control following a win appeared to align with their callous, unemotional traits and produce a spike in aggressive sexual behavior. In contrast, men high in impulsive-antisocial traits did not show this pattern, and their behavior did not significantly change depending on whether they won or lost the competition.</p>
<p>This suggests that there may be more than one psychological route to sexually aggressive behavior. The study’s authors propose that men high in interpersonal-affective traits may pursue sexual aggression as a way of asserting control or reinforcing a sense of superiority—especially when they feel empowered by a recent win. Their actions are less about reacting to frustration or humiliation and more about taking advantage of a perceived opportunity to dominate.</p>
<p>“The results of the study were consistent with previous research in our lab which has suggested that psychopathic traits are linked to experiences of power/dominance, and those in turn are important for understanding different types of aggression (Hoffmann & Verona, 2019; Verona et al., 2022),” Verona said. “The results indicated that winning, and not losing, the intermale competition was associated with higher engagement in sexual aggression in the lab.”</p>
<p>These findings build on earlier research by the same lab, which has shown that psychopathic traits are linked to a desire for power and dominance in sexual contexts. In particular, individuals high in these traits may use sex as a way to feel in control, rather than as a form of intimacy or emotional connection. The new study adds to this by showing that situational factors—such as the outcome of a competitive encounter—can influence whether these tendencies are acted upon.</p>
<p>Interestingly, both psychopathy trait types were correlated with sexual aggression at the simplest level of analysis, but only the interpersonal-affective traits interacted with the win/loss condition to predict behavior. This points to the importance of distinguishing between different forms of psychopathy when trying to understand aggressive behavior. While impulsivity and norm-violating behavior may play a role in general aggression, it seems that calculated, unemotional dominance may be especially dangerous when combined with feelings of social superiority.</p>
<p>“Although both psychopathy factors were correlated with sending sexually explicit and unwanted content to a woman, only the interpersonal-affective traits interacted with winning to predict this sexual aggression,” Verona told PsyPost. “Thus, when presumed feelings of dominance associated with competition wins are combined with callous-unemotional traits, the risk of sexual aggression may be especially high in men.”</p>
<p>As with any study, there are some limitations. The sample consisted entirely of heterosexual, cisgender college men, so the results may not apply to other groups. The laboratory task was designed to simulate sexually aggressive behavior in a controlled environment, but it cannot fully replicate the complexity of real-world encounters. The study’s statistical power was also slightly below the ideal level for detecting some effects.</p>
<p>“The results require replication in a larger sample and using samples with higher range of scores on psychopathic traits,” Verona noted. “As with all laboratory-based experiments, improved internal control came at the cost of ecological validity, and the circumstances that lead to sexual aggression in the real world are more multifaceted and complicated.”</p>
<p>Nonetheless, the study makes a strong case for examining how situational and personality factors come together to shape sexually aggressive behavior. It challenges the idea that status threats are the only triggers for such actions, and instead suggests that feelings of power and dominance—especially when combined with traits like callousness and emotional detachment—may be just as risky.</p>
<p>The researchers emphasize that future work should continue exploring these dynamics in more diverse populations and with real-world variables, such as alcohol use or workplace hierarchies. Long-term, the goal is to develop interventions that help individuals with dominant tendencies channel their desire for status and control in healthier ways, reducing the risk of harm to others. This might involve strategies that promote empathy, encourage responsible expressions of leadership, or challenge cultural norms that equate masculinity with domination.</p>
<p>“The findings highlight the importance of attending to context, and dynamic social interactions in particular, as well as individual personality characteristics,” Verona said. “The role of dominance and attempts to maintain hierarchies seem to be important for aggression in its various manifestations, and future work hopes to build on this idea to examine potential treatment innovations that would channel desires for power in more healthy ways.”</p>
<p>The study, “<a href="https://doi.org/10.1002/ab.70025" target="_blank">Effects of Intermale Status Challenge and Psychopathic Traits on Sexual Aggression</a>,” was authored by Edelyn Verona, Amy M. Hoffmann, Stephanie R. Hruza.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<p><strong>Forwarded by:<br />
Michael Reeder LCPC<br />
Baltimore, MD</strong></p>
<p><strong>This information is taken from free public RSS feeds published by each organization for the purpose of public distribution. Readers are linked back to the article content on each organization's website. This email is an unaffiliated unofficial redistribution of this freely provided content from the publishers. </strong></p>
<p> </p>
<p><s><small><a href="#" style="color:#ffffff;"><a href='https://blogtrottr.com/unsubscribe/565/DY9DKf'>unsubscribe from this feed</a></a></small></s></p>