<table style="border:1px solid #adadad; background-color: #F3F1EC; color: #666666; padding:8px; -webkit-border-radius:4px; border-radius:4px; -moz-border-radius:4px; line-height:16px; margin-bottom:6px;" width="100%">
<tbody>
<tr>
<td><span style="font-family:Helvetica, sans-serif; font-size:20px;font-weight:bold;">PsyPost – Psychology News</span></td>
</tr>
<tr>
<td> </td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/living-in-greener-neighborhoods-is-associated-with-lower-risk-of-metabolic-syndrome/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Living in greener neighborhoods is associated with lower risk of metabolic syndrome</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Nov 23rd 2025, 08:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p>Living in neighborhoods with higher levels of green vegetation is associated with a significantly lower risk of developing metabolic syndrome. New research indicates that this protective effect is largely driven by cleaner air, higher vitamin D levels, and increased physical activity. These findings were published in the scientific journal <em><a href="https://doi.org/10.1016/j.envres.2025.122148" target="_blank" rel="noopener">Environmental Research</a></em>.</p>
<p>Metabolic syndrome is a serious health condition that affects a growing number of adults worldwide. It is not a single disease but rather a cluster of risk factors that occur together. These factors include high blood pressure, high blood sugar, excess body fat around the waist, and abnormal cholesterol or triglyceride levels.</p>
<p>When a person has three or more of these conditions, they are diagnosed with metabolic syndrome. This diagnosis indicates a much higher risk of developing heart disease, stroke, and type 2 diabetes. As global populations shift toward urban living, the prevalence of these metabolic issues has risen.</p>
<p>City environments often contribute to sedentary lifestyles and expose residents to higher levels of air pollution. Both physical inactivity and pollution are known contributors to chronic disease. Public health experts have sought to understand how modifying urban environments might mitigate these risks.</p>
<p>Previous scientific inquiries have suggested that access to nature can improve health outcomes. However, the specific mechanisms driving this relationship remain under investigation. It has been unclear whether the benefits come from exercise, stress reduction, or environmental factors like air quality.</p>
<p>“While there is growing evidence that exposure to residential greenspace is beneficial for adult health and well-being, we don’t really know how greenspace works to confer such health benefits. This study was motivated by the need to fill this research gap, with a specific focus on examining independent and multiple pathways, including biological, behavioral, environmental, and social factors, linking greenspace to adult metabolic health,” explained study author Chinonso Christian Odebeatu of The University of Queensland.</p>
<p>The researchers utilized data from the UK Biobank, a large-scale biomedical database. The analysis included 221,028 adults from across the United Kingdom. The participants had an average age of approximately 56 years at the time of recruitment.</p>
<p>To assess metabolic health, the team analyzed physical measurements and blood samples collected from the participants. They defined metabolic syndrome based on the harmonized National Cholesterol Education Program criteria. A participant was classified as having the syndrome if they met three or more specific benchmarks regarding waist circumference, triglycerides, cholesterol, blood pressure, or glucose.</p>
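<p>To make the classification rule concrete, here is a minimal Python sketch of a three-of-five check. The cutoffs shown are the commonly cited harmonized thresholds and are included for illustration only; the study’s exact operationalization may differ.</p>
<pre style="font:12px/18px Menlo, Consolas, monospace; background-color:#f6f6f6; border:1px solid #ddd; border-radius:4px; padding:8px; overflow:auto;"><code>def has_metabolic_syndrome(waist_cm, triglycerides, hdl, systolic,
                           diastolic, glucose, is_male):
    """Illustrative three-of-five rule. Cutoffs follow commonly cited
    harmonized criteria and are assumptions, not the study's exact ones.
    Lipid and glucose values are in mg/dL, pressures in mmHg."""
    criteria = [
        waist_cm >= (102 if is_male else 88),   # central obesity
        triglycerides >= 150,                   # elevated triglycerides
        hdl &lt; (40 if is_male else 50),          # reduced HDL cholesterol
        systolic >= 130 or diastolic >= 85,     # elevated blood pressure
        glucose >= 100,                         # elevated fasting glucose
    ]
    return sum(criteria) >= 3

# A participant meeting three of the five benchmarks:
print(has_metabolic_syndrome(104, 160, 45, 128, 82, 105, is_male=True))  # True
</code></pre>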
<p>The study employed two distinct methods to measure exposure to greenspace. The first method involved the Normalized Difference Vegetation Index (NDVI). This uses satellite imagery to measure the density of living, green vegetation within a 500-meter radius of a participant’s home.</p>
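<p>NDVI itself is a simple band ratio computed from near-infrared (NIR) and red reflectance measured by the satellite. A minimal sketch, with hypothetical reflectance values:</p>
<pre style="font:12px/18px Menlo, Consolas, monospace; background-color:#f6f6f6; border:1px solid #ddd; border-radius:4px; padding:8px; overflow:auto;"><code># NDVI = (NIR - Red) / (NIR + Red); values run from -1 to 1, with
# dense, healthy vegetation typically scoring roughly 0.5 and above.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

# Hypothetical reflectance values for a well-vegetated pixel:
print(ndvi(nir=0.45, red=0.10))  # ~0.64
</code></pre>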
<p>The second method utilized detailed mapping data to identify specific types of land use. This allowed the researchers to calculate the percentage of land dedicated to private residential gardens or public parks. This distinction provided a more nuanced view than satellite imagery alone.</p>
<p>The research team also gathered data on potential mediating factors. They assessed physical activity levels, sedentary behavior, and sleep patterns. They incorporated estimates of air pollution, specifically nitrogen dioxide and particulate matter, for each residential address.</p>
<p>Biomarkers such as serum vitamin D levels were measured from blood samples. Social factors were also considered, specifically self-reported feelings of loneliness. The researchers used statistical models to analyze the relationship between these environmental exposures and metabolic syndrome.</p>
<p>The results indicated a strong association between overall residential greenness and better metabolic health. Individuals living in areas with higher NDVI scores had significantly reduced odds of having metabolic syndrome. Specifically, higher greenness was linked to a 29.3 percent lower likelihood of the condition.</p>
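<p>Figures like this are conventionally derived from a logistic-regression odds ratio. Assuming the underlying odds ratio was roughly 0.71 (an inference from the reported percentage, not a number stated here), the conversion works as follows:</p>
<pre style="font:12px/18px Menlo, Consolas, monospace; background-color:#f6f6f6; border:1px solid #ddd; border-radius:4px; padding:8px; overflow:auto;"><code># Converting an odds ratio to a "percent lower odds" figure. The odds
# ratio of ~0.707 is assumed to match the reported 29.3 percent.
odds_ratio = 0.707
print(f"{(1 - odds_ratio) * 100:.1f}% lower odds")  # 29.3% lower odds
</code></pre>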
<p>This protective association was evident for several specific components of the syndrome. Higher greenness correlated with a lower risk of having a large waist circumference. It was also linked to healthier levels of triglycerides and lower systolic blood pressure.</p>
<p>The study did not find the same protective benefit for private residential gardens. The percentage of land covered by private gardens showed no significant association with metabolic syndrome in the overall analysis. This suggests that the total amount of vegetation may be more important than owning a garden.</p>
<p>A key objective of the study was to understand how greenspace provides these benefits. The researchers conducted a mediation analysis to estimate the contribution of different pathways. They found that nearly half of the protective effect of greenness could be explained by the identified mediators.</p>
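<p>In a mediation analysis, “nearly half” corresponds to the proportion of the total effect carried by the indirect, mediated paths. Schematically, with placeholder numbers rather than the study’s estimates:</p>
<pre style="font:12px/18px Menlo, Consolas, monospace; background-color:#f6f6f6; border:1px solid #ddd; border-radius:4px; padding:8px; overflow:auto;"><code># Proportion mediated = indirect effect / total effect. The effect
# sizes below are illustrative placeholders, not the study's estimates.
indirect = -0.17  # paths through air pollution, vitamin D, activity
direct = -0.18    # remaining direct path from greenness
print(indirect / (indirect + direct))  # ~0.49, i.e. "nearly half"
</code></pre>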
<p>Reduction in air pollution emerged as the most significant pathway. Green vegetation acts as a buffer against pollutants, particularly nitrogen dioxide, which is often emitted by traffic. The study suggests that by lowering exposure to these harmful gases, greenspace reduces inflammation and metabolic stress.</p>
<p>Vitamin D levels also played a substantial role. Participants in greener areas tended to have higher levels of vitamin D. This is likely because attractive outdoor environments encourage people to spend more time outside, exposing them to sunlight.</p>
<p>“It was quite surprising and interesting to see vitamin D emerge as one of the key pathways linking greenspace to adult metabolic health, an important finding that, to our knowledge, has not been previously reported,” Odebeatu told PsyPost. “This opens new avenues for research into how behavioural, environmental, and biological pathways interact to shape health outcomes.”</p>
<p>Physical activity and reduced sedentary time were also identified as contributing factors. Greener neighborhoods may provide more inviting spaces for walking and recreation. However, the impact of physical activity was less pronounced than the impact of air quality and vitamin D.</p>
<p>The researchers also observed that the benefits of greenness were not uniform across all demographic groups. The association between greenness and reduced metabolic risk was stronger for men than for women. It was also more pronounced in adults aged 51 to 60 compared to younger or older age groups.</p>
<p>Socioeconomic factors influenced the strength of the association as well. The protective effects of general greenness were strongest for individuals living in more deprived neighborhoods. This suggests that public greenspace may be particularly valuable for communities that lack other health resources.</p>
<p>Conversely, private gardens only showed a protective effect for individuals in less deprived areas. This might reflect differences in how gardens are used or maintained in different socioeconomic contexts. It implies that simply having a garden space does not guarantee health benefits for everyone.</p>
<p>“Greenspace exposure has the potential to improve multiple aspects of adult health and well-being by facilitating increased physical activity and reduced sedentary behavior (e.g., driving, watching TV), enhancing vitamin D synthesis, lowering exposure to environmental air pollution, and alleviating loneliness through social connection,” Odebeatu said. “Therefore, spending time in nearby greenspaces may be a simple, natural way to support long-term health benefits.”</p>
<p>The study has some limitations. The research used a cross-sectional design, meaning data was collected at a single point in time. This prevents the researchers from definitively proving that greenspace causes better health, only that they are related.</p>
<p>Additionally, the air pollution data was modeled for the year 2010. While pollution levels are generally stable, this single-year estimate may not perfectly reflect long-term exposure for every participant. The study also focused on residential addresses and did not account for greenspace exposure at work.</p>
<p>“We need to clarify that not all greenspaces confer the same health benefits, as factors such as type, accessibility, and quality of greenspaces all seem to play a role in how much benefit one derives from such exposure,” Odebeatu said. “Additionally, while our findings suggest greenspace can reduce metabolic risk, these effects are modest at the individual level and should be seen as one component of a broader strategy for maintaining metabolic health. Finally, as this study is observational, we cannot establish definitive causality, and unmeasured factors may also influence the observed relationships.”</p>
<p>Future research could address these gaps by following participants over many years. Longitudinal studies would help establish the timeline of how environmental changes impact metabolic health. Tracking exposure to nature throughout the day, rather than just at home, would also provide a complete picture. The researchers also recommend qualitative research to understand how people interact with their local environments.</p>
<p>“The present study provides quantitative evidence on the association and pathways between residential greenspace exposure and adults’ metabolic health,” Odebeatu explained. “The next line of research would be to incorporate qualitative research, which can provide rich insights into how people experience and interact with greenspaces.”</p>
<p>“For instance, qualitative evidence exploring how individuals perceive, use, and value different greenspace types may help explain why certain greenspaces appear to be more beneficial than others. It could also help identify barriers (e.g., safety concerns, mobility limitations, norms, etc) that quantitative data alone cannot capture.”</p>
<p>The study, “<a href="https://doi.org/10.1016/j.envres.2025.122148" target="_blank" rel="noopener">Residential greenspace indicators and metabolic syndrome in the UK Biobank Cohort: mediation through behavioural, environmental, social and biomarker pathways</a>,” was authored by Chinonso Christian Odebeatu, Darsy Darssan, Charlotte Roscoe, Simon Reid, and Nicholas J. Osborne.</p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/men-with-higher-testosterone-produce-body-odor-that-is-perceived-as-more-dominant/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Men with higher testosterone produce body odor that is perceived as more dominant</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Nov 23rd 2025, 06:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p>A new study published in <em>Evolution and Human Behavior</em> provides evidence that human body odor may act as a subtle cue for social status. The research suggests that men with higher levels of testosterone produce a scent that others perceive as more dominant. These findings indicate that chemical signaling may play a role in how humans assess social hierarchies, operating largely outside of conscious awareness.</p>
<p>Social hierarchies are a fundamental aspect of human group living. To navigate these complex social structures, individuals must assess the standing of those around them. Evolutionary psychologists generally categorize social status into two distinct strategies. The first is dominance, which relies on intimidation, force, or coercion to secure resources and compliance.</p>
<p>The second strategy is prestige. Prestige is granted freely by others to individuals who demonstrate valuable skills, knowledge, or wisdom. While both strategies lead to high social rank, they are communicated through different behaviors. Dominance is often associated with aggressive posturing and lower vocal pitch. Prestige is linked to confidence and social connection.</p>
<p>Humans are adept at reading visual and vocal cues to judge these traits. An expansive posture or a deep voice often signals high status. However, animals across the biological spectrum also rely heavily on chemical communication. From insects to mammals, organisms use scent to mark territory, find mates, and signal fighting ability.</p>
<p>Dominance signaling via scent is well-documented in other species. For example, dominant male rodents mark their territory with urine to advertise their competitive strength. Other males typically avoid these areas to prevent costly physical conflicts. The authors of the current study sought to determine if a similar biological mechanism exists in humans.</p>
<p>Testosterone is a hormone frequently associated with status-seeking behavior in men. It is linked to aggression and the drive to achieve social power. Biologically, testosterone also influences physiological processes that create body odor. It affects the apocrine sweat glands and the production of sebum, an oily substance on the skin.</p>
<p>Because testosterone drives both status-seeking behavior and the production of body odor, the researchers hypothesized that the two might be linked. They proposed that men with higher testosterone would produce a smell that others interpret as indicative of high social status. Specifically, they looked at whether this scent would signal dominance or prestige.</p>
<p>“The human sense of smell is still not well understood, and people generally don’t recognize how much information we gather through our noses, or how impactful it can be to lose this sense,” said study author <a href="https://hofermarlise.wordpress.com/" target="_blank" rel="noopener">Marlise Hofer</a>, a postdoctoral researcher at the University of Victoria.</p>
<p>“Our motivation came from two converging observations. First, research is beginning to reveal that human odor cues carry a wide range of social information (e.g., emotional states, health, kinship), and we wanted to know whether this extends to perceptions of social status or dominance.”</p>
<p>“Second, prior work suggests that hormones such as testosterone play a role in status-related behaviors (competitiveness, dominance), so we wanted to examine whether testosterone levels, or scent cues associated with testosterone, might be involved in providing subtle olfactory information shaping perceptions of social status. In doing so, we bridge biological endocrinology, chemical communication, and social status perception, domains that all matter for understanding how people navigate hierarchies, competition, and social connection.”</p>
<p>The researchers recruited 74 male participants to serve as scent donors. These men had an average age of approximately 22 years. To ensure the scents were not contaminated by outside factors, the donors followed a strict hygiene protocol.</p>
<p>For 24 hours, the donors avoided activities that could alter their natural smell. They were instructed not to smoke, drink alcohol, or eat strong-smelling foods like garlic or onions. They also refrained from using scented soaps, deodorants, or colognes. The researchers provided them with fragrance-free soap and shampoo for showering.</p>
<p>During this 24-hour period, each donor wore a clean, white cotton t-shirt. This allowed their natural body odor to accumulate on the fabric. The donors also visited the laboratory to provide saliva samples. These samples were analyzed to measure their circulating levels of testosterone.</p>
<p>After the shirts were collected, they were frozen to preserve the odors. The second phase of the study involved a large group of independent raters. The researchers recruited 797 participants to act as “smellers.” This group included both men and women.</p>
<p>The smellers were presented with the worn t-shirts, as well as unworn control shirts. They were not told who wore the shirts or provided with any information about the donors. For each shirt, the rater inhaled the scent and completed a questionnaire.</p>
<p>The questionnaire asked the smellers to rate the odor on several dimensions. They assessed the intensity of the smell and how pleasant or sexy it was. They also rated their perception of the wearer’s personality traits. Specifically, they rated how dominant and prestigious they imagined the wearer to be.</p>
<p>The items used to assess dominance focused on aggressive control. For example, raters considered whether the wearer seemed like someone who “enjoys having control over others.” The prestige items focused on respect, such as whether the wearer seemed “held in high esteem by others.”</p>
<p>The researchers analyzed the data using multilevel modeling. This statistical approach allowed them to account for the fact that each shirt was rated by multiple people. The analysis revealed a positive association between testosterone and perceived dominance.</p>
<p>Men who had higher levels of testosterone produced sweat that raters perceived as coming from a more dominant individual. This relationship held true even after the researchers controlled for the intensity of the smell. In other words, while high-testosterone sweat was stronger, the perception of dominance was not solely due to the strength of the odor.</p>
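<p>A model of this general shape can be sketched with the mixed-effects API in Python’s statsmodels library. The column names and synthetic data below are hypothetical stand-ins; the published model may include additional terms:</p>
<pre style="font:12px/18px Menlo, Consolas, monospace; background-color:#f6f6f6; border:1px solid #ddd; border-radius:4px; padding:8px; overflow:auto;"><code>import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: 74 donors, each shirt rated 10 times.
rng = np.random.default_rng(0)
donors = pd.DataFrame({"donor_id": np.arange(74),
                       "testosterone": rng.normal(100, 25, 74)})
ratings = donors.loc[donors.index.repeat(10)].reset_index(drop=True)
ratings["intensity"] = rng.normal(0, 1, len(ratings))
ratings["dominance"] = (0.01 * ratings["testosterone"]
                        + 0.2 * ratings["intensity"]
                        + rng.normal(0, 1, len(ratings)))

# A random intercept per donor accounts for repeated ratings of the
# same shirt; odor intensity enters as a fixed-effect control.
model = smf.mixedlm("dominance ~ testosterone + intensity",
                    data=ratings, groups=ratings["donor_id"])
print(model.fit().summary())
</code></pre>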
<p>The researchers also found that testosterone levels were positively correlated with scent intensity. Men with more testosterone simply smelled stronger. However, there was no significant relationship between testosterone and how pleasant the smell was. The hormonal signal appeared to convey information about power rather than attractiveness.</p>
<p>The findings regarding prestige were distinct from those regarding dominance. The researchers found no association between a donor’s testosterone levels and how prestigious they were perceived to be. This aligns with evolutionary theory suggesting that dominance is a more primal strategy rooted in physical aggression. Prestige, being a more culturally dependent trait, may not be signaled as directly through biological markers like testosterone.</p>
<p>The researchers also examined the donors’ self-views. The men completed surveys rating their own levels of dominance and prestige. The analysis showed no correlation between a man’s self-rated dominance and his testosterone levels. Furthermore, a man’s self-view did not predict how others rated his scent. This suggests that while testosterone influences body odor in a way that signals dominance to others, this signal operates independently of how a man sees himself.</p>
<p>Together, the findings provide evidence that “humans can detect and respond to subtle differences in body odor, and those differences can influence how dominant someone is perceived to be,” Hofer told PsyPost. “Scent is one more input into how we understand and respond to others, and because odor perception often operates outside conscious awareness, it’s an important but understudied part of social interaction.”</p>
<p>But it is important to note that “the effects were small. This doesn’t mean people can clearly ‘smell status,’ but rather that scent contributes one subtle cue among many. It’s an incremental insight: the effect is real but limited, and in everyday interactions context and other sensory or social cues likely play a much larger role.”</p>
<p>The researchers also noted that the sex of the smeller did not affect the results. Both male and female raters perceived the scents of high-testosterone men as more dominant.</p>
<p>“All our odor samples were from male donors, but our raters included both males and females,” Hofer said. “We were surprised that the effect of testosterone did not differ based on rater sex. Females typically have a stronger olfactory ability, so we initially expected they might show stronger effects. We also thought ratings might differ due to differing evolutionary goals related to mating or competition. However, there was no moderation by sex, suggesting that testosterone’s influence on odor perception was consistent across males and females.”</p>
<p>As with all research, there are limitations to consider. First, the sample size of 74 scent donors is relatively small for this type of research. While the number of raters was large, the number of unique scent profiles was limited. Replicating the study with a larger group of donors would help confirm the reliability of the association.</p>
<p>The study also focused on a single hormone measured at a single point in time. Testosterone levels can fluctuate throughout the day. A more comprehensive approach would involve measuring testosterone over several days to get a baseline average. Additionally, other hormones like cortisol likely play a role in social signaling.</p>
<p>Future research could investigate the interaction between testosterone and cortisol. Previous work suggests the ratio of these two hormones is a better predictor of aggressive behavior than testosterone alone. It is possible that the “scent of dominance” is most potent in men with high testosterone and low cortisol.</p>
<p>The biological mechanism behind this phenomenon also requires further study. Testosterone is known to stimulate hair growth and sebum production. It is possible that the perceived dominance is driven by these secondary factors rather than the sweat itself. For instance, more underarm hair provides more surface area for bacteria, potentially intensifying the odor.</p>
<p>Determining the exact biological roots of these signals remains a task for future science, but the results suggest that body odor provides a stream of social data. Hofer explained that she is now turning her attention to the consequences of being cut off from that information:</p>
<p>“Since COVID-19, many people have temporarily or permanently lost their sense of smell, and I am increasingly focusing on the consequences of losing this important source of social and environmental information,” Hofer said. “I would like to better understand these impacts and develop ways to support people living with olfactory dysfunction.”</p>
<p>The study, “<a href="https://doi.org/10.1016/j.evolhumbehav.2025.106752" target="_blank" rel="noopener">The role of testosterone in odor-based perceptions of social status</a>,” was authored by Marlise K. Hofer, Tianqi Peng, Jennifer C. Lay, and Frances S. Chen.</p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/assortative-mating-develops-naturally-if-mate-preferences-and-preferred-mate-traits-are-heritable/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Assortative mating develops naturally if mate preferences and preferred mate traits are heritable</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Nov 22nd 2025, 18:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p>A study conducted in Australia used a computer simulation to show how assortative mating (the tendency to choose romantic partners similar to oneself) arises spontaneously when heritable traits and heritable preferences for mates become associated across generations. The simulation demonstrated that the heritability of mate preferences and preferred traits is sufficient, on its own, to produce assortative mating. The paper was published in <em><a href="https://doi.org/10.1177/09567976251365900">Psychological Science</a></em>.</p>
<p>Assortative mating is the tendency for individuals to choose partners who are similar to themselves in important traits, such as education, height, personality, or values. It is observed in humans and many animal species, making it a widespread pattern in nature. People tend to resemble their partners more than would be expected by chance. While this similarity can make communication and cooperation easier—whereas a large mismatch in vocabulary, cognitive capacities, or interests can make communication difficult—the study suggests these benefits are not necessary for the pattern to emerge.</p>
<p>In humans, assortative mating frequently occurs regarding socioeconomic status. It can also happen for psychological traits, such as intelligence or mental health vulnerabilities. Biologists distinguish between positive assortative mating, where similar individuals pair up, and negative assortative mating, where opposites attract, although the former is much more common. Positive assortative mating tends to increase genetic similarity within families and can reduce genetic diversity in small populations, an outcome usually considered undesirable.</p>
<p>Study authors Kaitlyn T. Harper and Brendan P. Zietsch propose that when individuals are driven by heritable preferences to choose partners with certain heritable traits, associations form between individuals’ traits and corresponding preferences because offspring inherit both the trait from one parent and the mate preference from the other.</p>
<p>In simpler terms: if a person who prefers tall partners (a partly heritable preference) has a child with a tall person (a heritable trait), that child may inherit the preference for tall partners from one parent and genes that promote tallness from the other, creating a genetic correlation. Over further generations, this tendency strengthens. In this way, assortative mating (that is, pairing with individuals similar to oneself) can arise across generations without any other mechanism.</p>
<p>The study authors wanted to see whether this described mechanism is sufficient to create assortative mating (i.e., to make individuals choose partners similar to themselves). They ran a simulation of multiple generations of individuals using an agent-based model programmed in R. Agent-based modeling is a computational approach that simulates individuals interacting under a specified set of rules.</p>
<p>In this case, agents in the model (simulated individuals) used ideal preference values to evaluate the traits of potential partners. These traits were set to be either neutral regarding the organism’s chance to reproduce (fitness neutral) or were designed to create selection pressure, where different traits conferred different chances of reproduction.</p>
<p>The authors ran 10 versions of each model to assess how varying the number of preferences used in mate choice from 1 to 10 affected the strength of the associations that developed between partners’ traits. They ran each simulation for 100 generations.</p>
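<p>The authors’ model was written in R and is considerably richer, but the core mechanism can be illustrated with a stripped-down Python sketch under strong simplifying assumptions (a single trait and preference, small random candidate pools, midparent inheritance with mutation noise). This is a toy version, not the published model:</p>
<pre style="font:12px/18px Menlo, Consolas, monospace; background-color:#f6f6f6; border:1px solid #ddd; border-radius:4px; padding:8px; overflow:auto;"><code>import numpy as np

rng = np.random.default_rng(1)
N, GENERATIONS = 1000, 100

trait = rng.normal(size=N)  # heritable trait (e.g., height)
pref = rng.normal(size=N)   # heritable ideal-partner value

for _ in range(GENERATIONS):
    order = rng.permutation(N)
    choosers, candidates = order[: N // 2], order[N // 2 :]
    # Each chooser evaluates 5 random candidates and picks the one
    # whose trait is closest to the chooser's ideal preference value.
    pools = rng.choice(candidates, size=(N // 2, 5))
    dist = np.abs(trait[pools] - pref[choosers][:, None])
    mates = pools[np.arange(N // 2), dist.argmin(axis=1)]
    # Offspring inherit midparent trait and preference values plus noise;
    # inheriting both is what lets trait and preference become correlated.
    child_trait = (trait[choosers] + trait[mates]) / 2 + rng.normal(0, 0.5, N // 2)
    child_pref = (pref[choosers] + pref[mates]) / 2 + rng.normal(0, 0.5, N // 2)
    trait = np.repeat(child_trait, 2)[:N]
    pref = np.repeat(child_pref, 2)[:N]

# A positive trait-preference correlation emerges across generations,
# which in turn produces partner similarity (assortative mating).
print(np.corrcoef(trait, pref)[0, 1])
</code></pre>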
<p>Results showed that genetic correlations formed between preferences and preferred traits, as well as between partner traits, over generations. In other words, assortative mating emerged as a natural outcome, demonstrating that the heritability of preferences and preferred traits is sufficient to produce it.</p>
<p>“We have demonstrated that even in the absence of adaptiveness or complex social dynamics, assortative mating is likely to arise naturally when preferences and preferred traits are heritable, which is true for virtually every quantitative trait,” the study authors concluded.</p>
<p>The study contributes to the scientific understanding of mate selection mechanisms in living organisms. However, it should be noted that it was based entirely on a simplified computational model operating under predefined rules. Real-world results might differ.</p>
<p>The paper, “<a href="https://doi.org/10.1177/09567976251365900">Assortative Mating Is a Natural Consequence of Heritable Variation in Preferences and Preferred Traits</a>,” was authored by Kaitlyn T. Harper and Brendan P. Zietsch.</p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/musicians-possess-a-superior-internal-map-of-their-body-in-space/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Musicians possess a superior internal map of their body in space</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Nov 22nd 2025, 16:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p>Research suggests that learning to play a musical instrument does far more than provide artistic satisfaction; it appears to fundamentally alter how the brain maps the physical body in space. A new analysis indicates that trained musicians possess a superior ability to maintain their physical orientation and balance, even in the absence of visual cues. These findings were recently published in the academic journal <em><a href="https://doi.org/10.1016/j.cortex.2025.10.002" target="_blank">Cortex</a></em>.</p>
<p>Spatial cognition is the mental process that allows individuals to navigate the physical world. It relies on the brain’s continuous effort to track the body’s position relative to external objects. This internal map is constructed by combining streams of information from the eyes, the inner ear, and physical sensations from the muscles and skin.</p>
<p>This integrated information creates a body representation that solves the computational problem of locating oneself in an environment. Anchoring the body is essential for tasks ranging from simple walking to complex mental rotation of objects. While vision is a primary tool for this, the auditory system also plays a significant role in stabilization.</p>
<p>The brain utilizes sound sources as anchors to help maintain balance and direction. Previous investigations have demonstrated that even short periods of sensory training can sharpen these spatial skills. For instance, programs that pair body movements with auditory feedback have shown promise in enhancing spatial awareness.</p>
<p>Playing an instrument represents a rigorous form of long-term multisensory training. It demands the simultaneous and precise coordination of touch, hearing, and vision. Researchers hypothesized that this intense practice might permanently alter how the brain processes spatial information.</p>
<p>The investigative team sought to determine if the multisensory nature of musical practice translates into better performance on tasks unrelated to music. They specifically wanted to see if musicians could better resist body disorientation. The researchers focused on how these individuals utilized sound to stabilize their posture.</p>
<p>The study was conducted by a collaborative team of scientists from several Canadian institutions. The group included Daniel Paromov, Thomas Augereau, Maxime Maheu, and François Champoux from the Université de Montréal. They worked alongside Benoit-Antoine Bacon from the University of British Columbia and Andréanne Sharp from Université Laval.</p>
<p>To test their hypothesis, the team recruited thirty-eight participants for the experiment. Half of the group consisted of experienced musicians, while the other half had no significant musical background. The researchers ensured that both groups had similar hearing abilities and ages to rule out unrelated factors.</p>
<p>The musicians in the study had between six and twenty-eight years of experience. Their weekly practice habits varied, but all were considered active players. The control group consisted of individuals whose musical education was limited to standard elementary school classes.</p>
<p>The participants performed a standard clinical assessment known as the Fukuda-Unterberger stepping task. Subjects wore blindfolds and were asked to march in place for sixty seconds. This activity naturally causes people to drift forward and rotate without realizing it.</p>
<p>The task creates a sensory mismatch that reveals the accuracy of a person’s internal body representation. Without visual confirmation, the brain must rely solely on the vestibular system and proprioception to stay in place. Any deviation from the starting point indicates an error in spatial processing.</p>
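<p>Scoring such a task reduces to simple displacement geometry. As a hypothetical illustration (the coordinates below are invented, not the study’s protocol):</p>
<pre style="font:12px/18px Menlo, Consolas, monospace; background-color:#f6f6f6; border:1px solid #ddd; border-radius:4px; padding:8px; overflow:auto;"><code>import math

# Hypothetical scoring of one blindfolded stepping trial: how far, and
# in what direction, the participant ended up from the starting point.
start_x, start_y = 0.0, 0.0
end_x, end_y = 0.35, 0.90  # metres; drifted forward and to the right

dx, dy = end_x - start_x, end_y - start_y
displacement = math.hypot(dx, dy)           # straight-line drift
bearing = math.degrees(math.atan2(dx, dy))  # deviation from straight ahead
print(f"{displacement:.2f} m at {bearing:.0f} deg")  # 0.97 m at 21 deg
</code></pre>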
<p>The researchers measured how much the participants moved from their starting spot under several distinct conditions. In one scenario, the room was completely silent. This baseline condition tested the participants’ innate ability to maintain their position without any external help.</p>
<p>In other trials, a speaker played a speech signal from specific locations around the room. The sound originated from directly in front of the participant, or from forty-five or ninety degrees to the side. These variations tested how well the subjects could use sound to orient themselves.</p>
<p>The distinct angles were chosen to provide incremental levels of difficulty. Auditory cues originating from the side are generally harder for the human brain to localize accurately. This made the ninety-degree condition the most challenging test of auditory spatial anchoring.</p>
<p>The data revealed clear differences between the two groups in the silent condition. When marching without sound, the musicians drifted forward a significantly shorter distance than the non-musicians. This indicates a more accurate internal sense of body position that operates independently of auditory input.</p>
<p>The gap in performance widened when auditory cues were introduced. The musicians were consistently better at maintaining their orientation relative to the sound source. They utilized the audio information more effectively to anchor themselves in the physical space.</p>
<p>The location of the sound mattered less to the musicians than it did to the control group. The non-musicians struggled significantly more when the sound came from the side rather than the front. Their ability to use the sound as an anchor deteriorated as the angle increased.</p>
<p>In contrast, the musicians maintained high accuracy even when the sound originated from a forty-five-degree angle. Their performance only began to align with the control group when the sound moved to the extreme ninety-degree position. This suggests that musicians have a wider effective radius for using sound to stabilize their bodies.</p>
<p>The statistical analysis confirmed that these differences were not due to chance. The musicians demonstrated a robust advantage in minimizing deviation from the sound source. This supports the idea that their training has enhanced their binaural integration capabilities.</p>
<p>The study also looked for correlations between the amount of musical experience and the level of performance. Unexpectedly, the number of years played or the age of onset did not predict the degree of spatial improvement within the musician group. This finding suggests that the benefits of musical training on spatial cognition may be acquired relatively quickly.</p>
<p>It appears that once a certain level of proficiency is reached, the spatial benefits may hit a ceiling. This implies that even moderate amounts of musical training could confer these spatial advantages. It challenges the assumption that only decades of practice lead to cognitive changes.</p>
<p>These results suggest that musical expertise refines the brain’s ability to integrate sensory inputs. The musicians appeared to have enhanced proprioception, which is the body’s ability to sense its own movement and location. This benefit extended beyond the auditory system to include general bodily awareness.</p>
<p>The enhanced performance in silence points to improved vestibular or somatosensory processing. Musicians are known to have superior tactile perception and coordination. This study indicates those physical refinements contribute to a more stable representation of the body in the environment.</p>
<p>The findings have practical applications for medical rehabilitation. Improving spatial cognition is vital for patients who are at risk of falls, such as the elderly. Musical training could serve as a tool to strengthen the neural connections responsible for balance and orientation.</p>
<p>This type of therapy would be distinct from standard physical therapy. It would leverage the multisensory demands of music to train the brain’s navigation systems indirectly. This could be particularly useful for individuals with vestibular deficits.</p>
<p>There are limitations to the study regarding the cause of these abilities. It is not yet clear if musical training creates these improvements or if individuals with superior spatial skills are drawn to music. The current data shows a strong association but does not definitively prove causation.</p>
<p>The study did not account for the specific type of instrument played by the participants. Different instruments require different physical postures and spatial demands. Future research should investigate if pianists, for example, differ from violinists in their spatial cognitive abilities.</p>
<p>The researchers also note that the study involved a relatively small sample size. While the groups were matched effectively, larger cohorts would help validate the findings. Replicating the results with a broader demographic would strengthen the generalizability of the conclusions.</p>
<p>Future investigations could address the causality question by offering musical training to non-musicians. Tracking the progress of novices over time would isolate the specific impact of the training. This would provide a clearer picture of how quickly these spatial benefits emerge.</p>
<p>The lack of correlation with training duration remains an area for further exploration. It contradicts some previous literature that links cognitive benefits directly to the years of practice. Determining the minimum dosage of training required for these effects is a logical next step.</p>
<p>The study, “<a href="https://doi.org/10.1016/j.cortex.2025.10.002" target="_blank">Musical training shapes spatial cognition</a>,” was authored by Daniel Paromov, Thomas MD Augereau, Maxime Maheu, Benoit-Antoine Bacon, Andréanne Sharp, and François Champoux.</p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/new-research-highlights-the-role-of-family-background-and-attachment-in-shaping-infidelity-intentions/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">New research highlights the role of family background and attachment in shaping infidelity intentions</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Nov 22nd 2025, 14:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p>New research sheds light on the factors that may lead young adults to consider cheating on their romantic partners. The study suggests that a combination of family history, personal attachment style, and relationship intimacy plays a significant role in shaping infidelity intentions. These findings were published in <em><a href="https://doi.org/10.1177/10664807251384185" target="_blank">The Family Journal: Counseling and Therapy for Couples and Families</a></em>.</p>
<p>The study was conducted by <a href="http://linkedin.com/in/esra-selalmaz-9519b3176?originalSubdomain=tr" target="_blank">Esra Selalmaz</a> and <a href="https://www.gizemerdemphd.com/" target="_blank">Gizem Erdem</a>. Selalmaz is a clinical psychologist, and Erdem is an associate professor of psychology and a licensed marriage and family therapist at Koç University in Istanbul, Türkiye. They sought to investigate the predictors of infidelity among emerging adults. </p>
<p>While previous studies have linked attachment styles to cheating, the researchers aimed to create a more comprehensive picture. They incorporated family-of-origin experiences and current relationship satisfaction into their analysis. </p>
<p>“At first, we wanted to understand how parental infidelity influences the lives of adult children,” Selalmaz explained. “The literature includes many studies on its psychological effects, but there is also evidence showing that this early experience may influence adult children’s romantic relationships, including their attitudes, behaviors, and intentions regarding infidelity. Then, we wondered ‘Is parental infidelity related to higher infidelity intentions in adult children, and what other factors might be related to it?'”</p>
<p>“Infidelity is a common problem that couples bring to therapy as a presenting problem,” Erdem added. “When we work with infidelity issues in a clinical setting, typically we focus on violation of trust and attachment injuries associated with infidelity. Then, we explore whether healing the emotional bond and re-building trust are possible for the couple. It is a turning point for relationships where it either breaks the relationship or it reveals hidden issues that couples need to face and grow together – it opens the Pandora’s box in long term relationships.” </p>
<p>“In systemic therapy, we see infidelity as a symptom, rather than an incident on its own. We examine how that symptom occurs and what it hints as an underlying issue in that relationship. Does it fill a void? Inspired by that clinical work, our empirical study is exploring the multifaceted nature of infidelity; how infidelity relates to romantic attachment (our ways of connecting to romantic partners), intimacy issues, past romantic affairs, and family experiences. </p>
<p>“As a couple and family therapist, I was particularly interested in the parental infidelity aspect because I was curious whether there could be an intergenerational transmission of infidelity from parents to their adult children,” Erdem continued. “Can we really say infidelity carries over across generations? That is an interesting question to explore.”</p>
<p>For their study, the researchers recruited 280 participants for this investigation. The sample consisted of individuals between the ages of 18 and 30. All participants were unmarried and had no children. Additionally, every participant was currently in a committed romantic relationship that had lasted for at least one year. The researchers chose this duration because commitment and exclusivity are typically established within the first year of dating. Data collection occurred online through a secure survey platform.</p>
<p>Participants provided demographic information and answered questions about their relationship history. The researchers asked detailed questions regarding the participants’ parents. Specifically, respondents indicated whether their parents had ever engaged in an extramarital affair. They also reported how and when they discovered this information.</p>
<p>To measure psychological factors, the survey included the Experiences in Close Relationships-Revised Scale. This tool assesses two dimensions of attachment insecurity: anxiety and avoidance. Anxiety involves a fear of rejection and abandonment. Avoidance involves discomfort with closeness and a desire for independence.</p>
<p>The study also measured the quality of the participants’ current relationships. The researchers used the Personal Assessment of Intimacy in Relationships Scale. This measure separated intimacy into two distinct categories: emotional and sexual. Finally, the participants completed the Intentions towards Infidelity Scale. This instrument asks individuals to rate the likelihood that they would be unfaithful in various hypothetical situations.</p>
<p>The descriptive results provided insight into the family backgrounds of the participants. A significant portion of the sample reported a history of parental infidelity. Approximately half of the participants indicated that at least one of their parents had an extramarital affair. The data showed that fathers were identified as the unfaithful parent more frequently than mothers.</p>
<p>Participants typically discovered this infidelity during adolescence, around the age of 13. This is a developmental stage where individuals begin to form their own concepts of romantic relationships. The researchers noted that discovering a parent’s betrayal during this period can be particularly impactful. It may shape how young adults view trust and loyalty in their own lives.</p>
<p>The statistical analysis revealed that a personal history of cheating was the strongest predictor of future intentions. Participants who admitted to cheating on a partner in a past relationship were significantly more likely to report intentions to cheat again. This supports the notion that past behavior can be a strong indicator of future inclinations. However, having been the victim of cheating in the past did not show a significant link to current infidelity intentions.</p>
<p>“I was not surprised to see that having cheated on a prior romantic partner was associated with higher infidelity intentions, but the strength of this relationship was greater than I expected,” Selalmaz told PsyPost.</p>
<p>Family history also emerged as a significant factor. Participants who knew that a parent had been unfaithful reported higher intentions to engage in infidelity themselves. This finding supports the concept of intergenerational transmission. Observing a parent’s infidelity may normalize the behavior or disrupt the development of trust. The researchers suggest that these early experiences can create a template for future relationships.</p>
<p>Attachment styles provided further nuance to the findings. The researchers found a positive relationship between attachment avoidance and infidelity intentions. Individuals high in avoidance often suppress their needs for intimacy to maintain a sense of independence. They may view infidelity as a way to keep an emotional distance from their primary partner. It can serve as a strategy to regulate their emotions without becoming too dependent.</p>
<p>In contrast, attachment anxiety was not significantly linked to infidelity intentions in this study. Individuals with high attachment anxiety typically fear abandonment. They may cling to their partners rather than risk the relationship by seeking affection elsewhere. While some theories suggest anxious individuals might cheat to find reassurance, this specific model did not support that connection.</p>
<p>“I was surprised that attachment avoidance, but not attachment anxiety, was related to infidelity intentions,” Erdem said. “I would expect attachment insecurity would be related to infidelity in either way. Adults with attachment avoidance struggle to be emotionally present, available, and accessible for their partners.” </p>
<p>“Maybe the infidelity intention is a way of distancing themselves from the demands of a long term relationship as a relief. I would expect attachment anxiety to be related to intentions to fulfill the need to belong and accept and handle the fears of rejection, but it looks like this was not the case at least for youth in our study.”</p>
<p>The quality of the current relationship acted as a protective factor. The researchers found that higher levels of perceived emotional intimacy were associated with lower intentions to cheat. Similarly, higher levels of sexual intimacy were linked to a lower likelihood of infidelity. This implies that when emotional and sexual needs are met within the relationship, the drive to look for alternatives diminishes.</p>
<p>The study also examined gender differences. In some of the statistical models, men reported higher intentions to cheat than women. This aligns with certain cultural norms in Turkey regarding gender roles and sexuality. However, as the researchers added more variables to the analysis, the significance of gender decreased. This suggests that factors like attachment style, intimacy, and personal history may be more direct drivers of infidelity than gender alone.</p>
<p>“There are many different factors related to cheating and every person’s experience has a unique meaning,” Selalmaz said. “In daily life, these experiences are often interpreted through moralized lenses, which may oversimplify the complex dynamics behind infidelity. Therefore, people may feel confused about the meaning of infidelity (why it occurred) and how to approach it. So, our findings suggest that infidelity intentions are related to a combination of personal history and relational dynamics, not with a single factor. Understanding one’s relational patterns, past experiences, and closeness with a romantic partner may offer valuable insight into the dynamics behind infidelity intentions.”</p>
<p>“An average person can take some key messages from our findings,” Erdem told PsyPost. “First and foremost, infidelity is fairly common in committed relationships, as indicated in our descriptive findings of past sexual and emotional affairs of participants as well as their parents. In addition, we see that infidelity intentions are higher for those with a history of cheating in prior relationships and avoidant-attachment style. Intentions are also higher for those whose parents had an affair.” </p>
<p>“There is a nuance, though. We focus on intentions, not behaviors in this study. Past actions and parental infidelity experiences are important to understand one’s intentions, but they may not necessarily result in cheating behavior. That is, there is still a lot of room for growing together as a couple, to build intimacy and connection, and have a discussion around relational needs to uncover such intentions. It also shows us that past experiences can make us vulnerable to have infidelity intentions, yet intimacy (both sexual and emotional intimacy we have with the partner) are important factors that reduce the infidelity intentions.” </p>
<p>“Our findings suggest, in a way, the past is not setting up a destiny for future, but it is a warning sign to keep an eye on. And we see that intimacy is a key component in building trusting relationships. Being genuinely curious about your partner’s life experiences (including past relationships and upbringing) and being open to understanding their relational needs are vital for fulfilling romantic relationships.”</p>
<p>As with all research, there are some limitations. The study relied on self-reported data, which depends on the honesty and self-awareness of the participants. The design was cross-sectional, meaning it captured a snapshot in time rather than following couples over years. This makes it impossible to determine cause and effect with certainty. For instance, it is unclear if a lack of intimacy leads to infidelity intentions or if the desire to cheat causes partners to withdraw emotionally.</p>
<p>The sample was also composed largely of educated, middle-class individuals. The findings may not apply to other demographic groups with different socioeconomic backgrounds. Additionally, the study focused on intentions rather than actual behaviors. Thinking about cheating is distinct from taking action.</p>
<p>Erdem also emphasized that these findings represent correlations rather than destiny: “Infidelity is a complex phenomenon. Just because we found the parental infidelity to be related to infidelity intentions of young adults does not mean that they will perform what they have seen in their families. As adults, they have agency to make decisions for their lives, family-of-origin experiences are important, but we as adults have the option to choose our path in life.” </p>
<p>“This is a common misunderstanding,” she continued. “Having witnessed affairs in a family does not negate the individual agency in future relationships. We can break cycles of intergenerational transmissions. Second, all our findings are correlational. For instance, lack of emotional or sexual intimacy are related to infidelity intentions. They can go both ways. When partners lack intimacy, they may look for fulfilling those needs elsewhere or when they desire to fulfill their intimacy needs elsewhere, they may not have the same intimacy with one another. We do not know the direction of this association between intimacy and infidelity, but we know that they are related to each other.”</p>
<p>Future research could benefit from longitudinal designs that follow couples over time. This would help clarify the direction of the relationships between intimacy, attachment, and infidelity. Erdem also expressed interest in exploring how modern definitions of infidelity are evolving. This includes investigating how interactions with artificial intelligence companions might impact romantic commitment.</p>
<p>“I have been reading about the role of AI companions in romantic relationships and writing about whether we can truly get ‘romantically attached’ to those companions,” Erdem said. “I am curious if a committed partner engages with an AI partner, that would be considered cheating? Where do we draw the line of infidelity in such cases? That is something I am considering to explore in my research in the future.”</p>
<p>“I think we also need to keep in mind who discovers parental infidelity in their families and who does not. It is possible that some children never find out about their parents’ infidelity and it remains a family secret. I am curious about the subsample in our study who reported being ‘unsure’. Does it mean they suspected infidelity or does it mean it is a family secret that everyone knows but does not want to talk about? It would be ideal to have data from their parents to explore more on this issue.”</p>
<p>The study, “<a href="https://doi.org/10.1177/10664807251384185" target="_blank">Emerging Adults’ Infidelity Intentions in Romantic Relationships: The Role of Parental Infidelity, Adult Attachment Insecurity, and Intimacy</a>,” was authored by Esra Selalmaz and Gizem Erdem.</p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/large-scale-trial-finds-four-day-workweek-improves-employee-well-being-and-physical-health/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Large-scale trial finds four-day workweek improves employee well-being and physical health</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Nov 22nd 2025, 12:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>A large-scale international study suggests that reducing the workweek to four days without cutting pay leads to improvements in employee well-being, including lower burnout and better physical health. The research tracked nearly 3,000 workers, and the results indicate that these benefits are largely driven by better sleep and a stronger sense of work ability. These findings were recently published in <em><a href="https://doi.org/10.1038/s41562-025-02259-6" target="_blank">Nature Human Behaviour</a></em>.</p>
<p>The COVID-19 pandemic fundamentally altered how society views employment and labor markets. High rates of stress and burnout, combined with a surge in resignations, prompted employers to seek new ways to retain staff and maintain morale. One specific intervention gaining attention is the four-day workweek. This model differs from a compressed schedule where employees work longer days to make up for a day off. Instead, it involves working fewer hours for the same salary.</p>
<p>Past studies on working hours often relied on observation rather than intervention. It is difficult to determine if long hours cause poor health or if other factors are at play when looking at data at a single point in time. Previous trials were often small or limited to the public sector in Northern Europe. The new research aimed to test the concept in a diverse range of private companies across multiple countries.</p>
<p>“The pandemic created a perfect storm for exposing unsustainable workloads and accelerating interest in alternatives like the four-day workweek. As many companies around the globe began participating in the four-day week trials, we wanted to understand the well-being impacts of reducing the workweek to four days,” said study author <a href="https://www.wenfan.co/" target="_blank">Wen Fan</a>, an associate professor of sociology at Boston College.</p>
<p>“Prior studies tend to rely on single case studies rather than large-scale research involving hundreds of companies. Additionally, both Julie and I have long histories studying work and well-being–Julie wrote an influential book <em>The Overworked American</em>, which remains a classic today, and I have conducted extensive research on workplace flexibility and workers’ health and well-being.”</p>
<p>The research team rooted their investigation in the “job demands-resources” model. This theory posits that long working hours deplete an employee’s mental and physical energy. This depletion leads to exhaustion and distress. Conversely, time off acts as a resource that allows workers to recover from the demands of their roles. The authors wanted to see if an organizational reduction in work time could effectively break the cycle of exhaustion.</p>
<p>To test this, the researchers recruited 141 organizations to participate in a six-month trial. These companies were located in the United States, Canada, the United Kingdom, Ireland, Australia, and New Zealand. The participating organizations agreed to a model where employees worked 80 percent of their regular hours for 100 percent of their pay. To make this feasible, companies underwent a preparation period to redesign work processes and eliminate low-value tasks like unnecessary meetings.</p>
<p>The researchers collected data from 2,896 employees. They administered surveys before the trial began and again after six months. They also collected data from employees at 12 control companies that did not implement the four-day week but were interested in the concept. The surveys measured burnout, job satisfaction, mental health, and physical health. The team tracked the actual number of hours worked to see how much time was saved.</p>
<p>The sample was predominantly female, with women making up about 65 percent of the respondents. The majority of participants were white and held college degrees. Most of the companies involved were in the professional services or non-profit sectors. About 80 percent of the participating organizations had at least 10 employees.</p>
<p>Employees in the trial companies reduced their workweek by an average of about five hours. In contrast, work hours remained stable for employees in the control group. Not everyone reduced their hours by the same amount. Some workers managed to cut eight hours or more, while others saw smaller reductions. This variation allowed the researchers to examine how different levels of time reduction affected the outcomes.</p>
<p>The analysis showed that employees in the trial companies experienced improvements in all four well-being categories. Burnout scores dropped, while job satisfaction scores rose. Both mental and physical health ratings improved over the six-month period. These positive changes did not occur in the control group.</p>
<p>“We find that the four-day workweek improves workers’ well-being,” Fan told PsyPost. “This conclusion comes from comparing changes in four well-being indicators between trial companies and control companies. The control companies were those that initially expressed interest in participating but ultimately did not, for various reasons.” </p>
<p>“We found that employees in the trial companies experienced significant reductions in burnout, along with notable improvements in job satisfaction, mental health, and physical health. In contrast, none of these changes were observed among workers in the control companies.”</p>
<p>The researchers found a connection between the amount of time saved and the benefits gained. Employees who reduced their work hours the most reported the largest improvements in well-being. This pattern was particularly strong for burnout and job satisfaction. Even those with smaller reductions in hours saw some benefits compared to the control group.</p>
<p>Physical health showed the smallest improvement among the four outcomes. The authors note that physical health changes often take longer to manifest than psychological changes. Six months may not be enough time to see dramatic shifts in physical conditions. However, the improvement was still statistically significant.</p>
<p>The researchers looked for the reasons behind these improvements. They identified three key factors that explained the link between shorter hours and better health. First, employees reported better sleep quality. Second, they experienced less fatigue. Third, they reported an increase in “work ability,” which is a measure of how productive and capable a person feels in their job.</p>
<p>The finding regarding work ability is notable. It suggests that the reorganization process helped employees feel more effective. Rather than feeling rushed or stressed by having less time, workers felt they were performing better. This supports the idea of “job crafting,” where employees optimize their workflow to meet demands more efficiently.</p>
<p>The researchers also examined whether the changes happened at the individual level or the company level. They found that simply belonging to a company that reduced hours predicted better well-being. This suggests that organizational norms play a role. When a company collectively values rest and recovery, it benefits employees even if their personal hours do not drop drastically.</p>
<p>“The second major finding is about what explains these improvements,” Fan explained. “We examined various work experiences and health behaviors. We found that three factors played particularly significant roles: work ability (a proxy for workers’ self-assessed productivity), sleep problems, and fatigue. In other words, after moving to a four-day workweek, workers saw themselves as more capable, and they experienced fewer sleep problems and lower levels of fatigue, all of which contributed to improved well-being.”</p>
<p>Long-term data was also collected. The research team conducted a follow-up survey at the 12-month mark for the trial companies. They found that the benefits were not temporary: the improvements in well-being were sustained a full year after the trial began. This suggests that the positive effects were not simply due to the novelty of the new schedule.</p>
<p>“Surprisingly, we do not find differences across industries, job types, countries, or demographic characteristics such as gender, parental status, race, or education,” Fan said. “The findings are quite stable and robust, indicating that well-being benefits are broadly shared. I believe this gives us reason for cautious optimism about broader generalizability.”</p>
<p>“That said, we should be careful in making sweeping claims–after all, companies that chose to participate in the trial are not a representative sample. They likely had a higher degree of confidence that the four-day week could work for them. So more research is needed to fully evaluate the broader generalizability of the findings, especially for company outcomes.”</p>
<p>The researchers also checked for the “Hawthorne effect.” This is a phenomenon where people improve their behavior simply because they are being observed. The sustained nature of the results over twelve months argues against this. If it were just an effect of observation, the benefits likely would have faded as the novelty wore off.</p>
<p>But there are still some limitations to this research. The study was not a randomized controlled trial where companies were assigned to groups by chance. The participating companies volunteered for the trial. This means they might already be more supportive of employee well-being than the average firm. This self-selection could influence the results.</p>
<p>Additionally, the data relied on self-reports from employees. While the researchers argue that subjective well-being is best measured this way, objective health data was not collected. It is possible that employees might overstate benefits in hopes of keeping the new schedule. However, the consistency of the data across different questions suggests honest reporting.</p>
<p>The sample was also heavily weighted toward professional services and office jobs in English-speaking countries. This limits how much the findings can apply to other industries like manufacturing or retail. It is also less clear how these results would apply in countries with different labor cultures.</p>
<p>“A common misconception around the four-day workweek is how it’s defined,” Fan noted. “The model we study, for example, is not a compressed workweek, but involves genuine reductions in work hours without reductions in pay.”</p>
<p>“Another misconception is that the four-day workweek only benefits workers. While our research shows clear advantages for employees, it’s also a potential win-win solution for employers. Although we don’t report it in this paper, we’ve observed performance improvements at companies–metrics like revenue, resignation rates, and sick days have improved.”</p>
<p>“Of course, we need to be cautious in interpreting these results, as more sophisticated analyses are needed to account for factors like seasonality or secular trends. Still, around 90% of participating companies chose to continue with the 4-day model after the trial, suggesting they were satisfied with the outcomes. We should also consider the potential broader societal benefits–such as reduced commuting and environmental impact.”</p>
<p>Future research could benefit from government-sponsored trials that allow for randomization. Studies involving larger organizations could also provide opportunities to compare different teams within the same company. The authors suggest that objective measures of health and productivity would add weight to these findings.</p>
<p>“The optimal design would be a truly group-randomized trial, randomly assigning companies to treatment and control groups,” Fan told PsyPost. “However, in reality, it’s often difficult to find companies willing to be randomly assigned to different conditions. One possible solution would be government-sponsored trials where businesses receive incentives — such as tax credits — for participating. Another direction for future research is to work with very large companies where within-company randomization is possible, or where a staggered rollout could create a natural experiment.”</p>
<p>The study, “<a href="https://doi.org/10.1038/s41562-025-02259-6" target="_blank">Work time reduction via a four-day week finds improvements in workers’ well-being</a>,” was authored by Wen Fan, Juliet B. Schor, Orla Kelly, and Guolin Gu.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/family-oriented-women-rely-more-on-social-cues-when-judging-potential-partners/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Family-oriented women rely more on social cues when judging potential partners</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Nov 22nd 2025, 10:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>Women who follow slower, more family-oriented life strategies tend to rely more on others’ opinions when judging potential partners, according to new research published in <a href="https://doi.org/10.1007/s40806-025-00451-5"><em>Evolutionary Psychological Science</em></a>.</p>
<p>Mate choice copying, that is, judging a partner as more or less desirable based on what others say about him, is a well-documented phenomenon in psychology and evolutionary biology. It offers a shortcut: instead of evaluating every trait directly, people can use others’ experiences as social evidence.</p>
<p>Prior work has shown that women copy others’ mate choices more strongly in long-term romantic contexts, especially when positive or negative details about a man’s past relationship are involved. However, less is known about which women are most likely to lean on social information in the first place.</p>
<p>Alireza Nikakhtar and colleagues set out to investigate whether a woman’s underlying life history traits predict how strongly she copies others’ mate choices. Life history theory proposes that people <a href="https://www.psypost.org/openness-to-sugar-relationships-tied-to-short-term-mating-not-life-history-strategy/">vary along a continuum from “fast” to “slow” strategies</a>. Some individuals prioritize short-term opportunities and mating effort, while others invest more in parenting effort and long-term planning.</p>
<p>The authors reasoned that women who adopt slower strategies, and thus place greater weight on long-term partnerships and parenting, may be especially motivated to avoid costly mistakes when choosing a partner, making social information particularly valuable.</p>
<p>This study involved 214 Iranian women aged 18-45. Participants completed the questionnaire in Persian. Before reaching the central experimental task, they answered demographic questions (age, education, marital status, sexual orientation, number of children) and a brief set of life history-related items.</p>
<p>Participants then completed several psychological measures capturing early life stress (e.g., “My mother was always there when I needed her”), their broader life history strategy using the Mini-K scale (e.g., “I often make plans in advance”), and their levels of mating effort (e.g., “wearing flashy, expensive clothes”) and parenting effort (e.g., “good at taking care of children”). They also answered a single item about their age at menarche. Taken together, these measures provided an overview of each participant’s developmental background, reproductive strategy, and general orientation toward short-term mating versus long-term parenting.</p>
<p>Participants next moved to a vignette-based task. First, they rated the long-term attractiveness of 10 male faces, selected and standardized from the Iranian Face Database and paired with neutral descriptions. After a two-minute distraction task about seabird parenting, the same faces reappeared, now with positive or negative descriptions provided by former partners.</p>
<p>Participants rated attractiveness again. A similar procedure followed for short-term contexts using four new faces, each paired with neutral-plus-positive or neutral-plus-negative descriptions. The difference in ratings from the neutral to the positive or negative information conditions served as the measure of mate choice copying.</p>
<p>Nikakhtar and colleagues found that participants clearly responded to the social information. Positive former-partner descriptions increased attractiveness ratings, while negative ones reduced them, in both long-term and short-term contexts. These shifts were larger in long-term evaluations, consistent with a broader pattern found in past research that social information matters more for decisions involving commitment and unobservable qualities like reliability or generosity.</p>
<p>The researchers also observed that negative information tended to lead to slightly stronger shifts than positive information, though this difference did not reach statistical significance. This pattern aligns with well-known psychological tendencies where people place greater weight on potentially harmful cues than beneficial ones.</p>
<p>Importantly, several life-history-related traits predicted how strongly participants copied negative social information. Women who scored higher in parenting effort, and those who scored lower in mating effort, showed greater decreases in attractiveness ratings when a man was described negatively.</p>
<p>These effects were most pronounced in short-term negative scenarios, where both parenting effort and overall life history strategy scores predicted stronger avoidance of negatively described partners. Age at menarche, a developmental milestone sometimes linked to reproductive strategy, showed no association with mate choice copying.</p>
<p>The authors note that the sample was culturally and demographically specific, involving primarily well-educated Iranian women, which may limit the generalizability of findings.</p>
<p>The study, “<a href="https://doi.org/10.1007/s40806-025-00451-5">Do Human Life History Traits Predict Mate Choice Copying in Women?</a>” was authored by Alireza Nikakhtar, Abbas Zabihzadeh, Arash Monajem, and Mostafa Saadati.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<p><strong>Forwarded by:<br />
Michael Reeder LCPC<br />
Baltimore, MD</strong></p>
<p><strong>This information is taken from free public RSS feeds published by each organization for the purpose of public distribution. Readers are linked back to the article content on each organization's website. This email is an unaffiliated unofficial redistribution of this freely provided content from the publishers. </strong></p>
<p> </p>
<p><small><a href='https://blogtrottr.com/unsubscribe/565/DY9DKf'>unsubscribe from this feed</a></small></p>