<table style="border:1px solid #adadad; background-color: #F3F1EC; color: #666666; padding:8px; -webkit-border-radius:4px; border-radius:4px; -moz-border-radius:4px; line-height:16px; margin-bottom:6px;" width="100%">
<tbody>
<tr>
<td><span style="font-family:Helvetica, sans-serif; font-size:20px;font-weight:bold;">PsyPost – Psychology News</span></td>
</tr>
<tr>
<td> </td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/most-americans-experience-passionate-love-only-twice-in-a-lifetime-study-finds/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Most Americans experience passionate love only twice in a lifetime, study finds</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Feb 12th 2026, 08:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>Most adults in the United States experience the intense rush of passionate love only about twice throughout their lives, according to a recent large-scale survey. The study, published in the journal <em><a href="http://dx.doi.org/10.54899/ijpr.v20i1.733" target="_blank" rel="noopener">Interpersona</a></em>, suggests that while this emotional state is a staple of human romance, it remains a relatively rare occurrence for many individuals. The findings provide a new lens through which to view the frequency of deep romantic attachment across the entire adult lifespan.</p>
<p>The framework for this research relies on a classic model where love consists of three parts: passion, intimacy, and commitment. Passion is described as the physical attraction and intense longing that often defines the start of a romantic connection. Amanda N. Gesselman, a researcher at the Kinsey Institute at Indiana University, led the team of scientists who conducted this work.</p>
<p>The research team set out to quantify how often this specific type of love happens because earlier theories suggest passion is high at the start of a relationship but fades as couples become more comfortable. As a relationship matures, it often shifts toward companionate love, which is defined by deep affection and entwined lives rather than obsessive longing. Because this intense feeling is often fleeting, it might happen several times as people move through different stages of life.</p>
<p>The researchers wanted to see if social factors like age, gender, or sexual orientation influenced how often someone falls in love. Some earlier studies on university students suggested that most young people fall in love at least once by the end of high school. However, very little data existed regarding how these experiences accumulate for adults as they reach middle age or later life.</p>
<p>To find these answers, the team analyzed data from more than 10,000 single adults in the U.S. between the ages of 18 and 99. Participants were recruited to match the general demographic makeup of the country based on census data. This large group allowed the researchers to look at a wide variety of life histories and romantic backgrounds.</p>
<p>Participants were asked to provide a specific number representing how many times they had ever been passionately in love during their lives. On average, the respondents reported experiencing this intense feeling 2.05 times. This number suggests that for the average person, passionate love is a rare event that occurs only a handful of times across an entire lifetime.</p>
<p>A specific portion of the group, about 14 percent, stated they had never felt passionate love at all. About 28 percent had felt it once, while 30 percent reported two experiences. Another 17 percent had three experiences, and about 11 percent reported four or more. These figures show that while the experience is common, repeated episodes of passionate love are relatively infrequent for most people.</p>
<p>The study also looked at how these numbers varied based on the specific characteristics of the participants. Age showed a small link to the number of experiences, meaning older adults reported slightly more instances than younger ones. This result is likely because older people have had more years and more opportunities to encounter potential partners.</p>
<p>The increase with age was quite small, which suggests that people do not necessarily keep falling in love at a high rate as they get older. One reason for this might be biological, as the brain systems involved in reward and excitement are often most active during late adolescence and early adulthood. As people transition into mature adulthood, their responsibilities and self-reflection might change how they perceive or pursue new romantic passion.</p>
<p>Gender differences were present in the data, with men reporting slightly more experiences than women. This difference was specifically found among heterosexual participants, where heterosexual men reported more instances of passionate love than heterosexual women. This finding aligns with some previous research suggesting that men may be socialized to fall in love or express those feelings earlier in a relationship.</p>
<p>Among gay, lesbian, and bisexual participants, the number of experiences did not differ by gender. The researchers did not find that sexual orientation on its own created any differences in how many times a person fell in love. For example, the difference between heterosexual and bisexual participants was not statistically significant.</p>
<p>The researchers believe these results have important applications for how people view their own romantic lives. Many people feel pressure from movies, songs, and social media to constantly chase a state of high passion. Knowing that the average person only feels this a couple of times may help people feel more normal if they are not currently in a state of intense romance.</p>
<p>In a clinical or counseling setting, these findings could help people who feel they are behind in their romantic development. If someone has never been passionately in love, they are part of a group that includes more than one in ten adults. Seeing this as a common variation in human experience rather than a problem can reduce feelings of shame.</p>
<p>The researchers also noted that people might use a process called retrospective cognitive discounting. This happens when a person looks back at their past and views old relationships through a different lens based on their current feelings. An older person might look back at a past “crush” and decide it was not true passionate love, which would lower their total count.</p>
<p>This type of self-reflection might help people stay resilient after a breakup. By reinterpreting a past relationship as something other than passionate love, they might remain more open to finding a new connection in the future. This mental flexibility is part of how humans navigate the ups and downs of their romantic histories.</p>
<p>There are some limitations to the study that should be considered. Because the researchers only surveyed single people, the results might be different if they had included people who are currently married or in long-term partnerships. People who are in stable relationships might have different ways of remembering their past experiences compared to those who are currently unattached.</p>
<p>The study also relied on people remembering their entire lives accurately, which can be a challenge for older participants. Future research could follow the same group of people over many years to see how their feelings change as they happen. This would remove the need for participants to rely solely on their memories of the distant past.</p>
<p>The participants were all located in the United States, so these findings might not apply to people in other cultures. Different societies have different rules about how people meet, how they express emotion, and what they consider to be love. A global study would be needed to see if the “twice in a lifetime” average holds true in other parts of the world.</p>
<p>Additionally, the survey did not provide a specific definition of passionate love for the participants. Each person might have used their own personal standard for what counts as being passionately in love. Using a more standardized definition in future studies could help ensure that everyone is answering the question in the same way.</p>
<p>The researchers also mentioned that they did not account for individual personality traits or attachment styles. Some people are naturally more prone to falling in love quickly, while others are more cautious or reserved. These internal traits likely play a role in how many times someone experiences passion throughout their life.</p>
<p>Finally, the study did not include a large enough number of people with diverse gender identities beyond the categories of men and women. Expanding the research to include more gender-diverse individuals would provide a more complete picture of the human experience. Despite these gaps, the current study provides a foundation for understanding the frequency of one of life’s most intense emotions.</p>
<p>The study, “<a href="http://dx.doi.org/10.54899/ijpr.v20i1.733" target="_blank" rel="noopener">Twice in a lifetime: quantifying passionate love in U.S. single adults</a>,” was authored by Amanda N. Gesselman, Margaret Bennett-Brown, Jessica T. Campbell, Malia Piazza, Zoe Moscovici, Ellen M. Kaufman, Melissa Blundell Osorio, Olivia R. Adams, Simon Dubé, Jessica J. Hille, Lee Y. S. Weeks, and Justin R. Garcia.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/ai-boosts-worker-creativity-only-if-they-use-specific-thinking-strategies/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">AI boosts worker creativity only if they use specific thinking strategies</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Feb 12th 2026, 06:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>A new study published in the <em><a href="https://psycnet.apa.org/record/2026-29702-001" target="_blank" rel="noopener">Journal of Applied Psychology</a></em> suggests that generative artificial intelligence can boost creativity among employees in professional settings. But the research indicates that these tools increase innovative output only when workers use specific mental strategies to manage their own thought processes.</p>
<p>Generative artificial intelligence is a type of technology that can produce new content such as text, images, or computer code. Large language models like ChatGPT or Google’s Gemini use massive datasets to predict and generate human-like responses to various prompts. Organizations often implement these tools with the expectation that they will help employees come up with novel and useful ideas. Many leaders believe that providing access to advanced technology will automatically lead to a more innovative workforce.</p>
<p>However, recent surveys indicate that only a small portion of workers feel that these tools actually improve their creative work. The researchers conducted the new study to see if the technology truly helps and to identify which specific factors make it effective. They also wanted to see how these tools function in a real office environment where people manage multiple projects at once. Most previous studies on this topic took place in artificial settings using only one isolated task.</p>
<p>“When ChatGPT was released in November 2022, generative AI quickly became part of daily conversation. Many companies rushed to integrate generative AI tools into their workflows, often expecting that this would make employees more creative and, ultimately, give organizations a competitive advantage,” said study author <a href="https://freeman.tulane.edu/faculty-research/management/shuhua-sun" target="_blank" rel="noopener">Shuhua Sun</a>, who holds the Peter W. and Paul A. Callais Professorship in Entrepreneurship at Tulane University’s A. B. Freeman School of Business.</p>
<p>“What struck us, though, was how little direct evidence existed to support those expectations, especially in real workplaces. Early proof-of-concept studies in labs and online settings began to appear, but their results were mixed. Even more surprisingly, there were almost no randomized field experiments examining how generative AI actually affects employee creativity on the job.”</p>
<p>“At the same time, consulting firms started releasing large-scale surveys on generative AI adoption. These reports showed that only a small percentage of employees felt that using generative AI made them more creative. Taken together with the mixed lab/online findings, this raised a simple but important question for us: If generative AI is supposed to enhance creativity, why does it seem to help only some employees and not others? What are those employees doing differently?”</p>
<p>“That question shaped the core of our project. So, instead of asking simply whether generative AI boosts creativity, we wanted to understand <em>how</em> it does so and <em>for whom</em>. Driven by these questions, we developed a theory and tested it using a randomized field experiment in a real organizational setting.”</p>
<p>The researchers worked with a technology consulting firm in China to conduct their field experiment. This company was an ideal setting because consulting work requires employees to find unique solutions for many different clients. The study included a total of 250 nonmanagerial employees from departments such as technology, sales, and administration. These participants had an average age of about 30 years and most held university degrees.</p>
<p>The researchers randomly split the workers into two groups. The first group received access to ChatGPT accounts and was shown how to use the tool for their daily tasks. The second group served as a control and did not receive access to the artificial intelligence software during the study. To make sure the experiment was fair, the company told the first group that the technology was meant to assist them rather than replace them.</p>
<p>The experiment lasted for about one week. During this time, the researchers tracked how often the treated group used their new accounts. At the end of the week, the researchers collected data from several sources to measure the impact of the tool. They used surveys to ask employees about their work experiences and their thinking habits.</p>
<p>They also asked the employees’ direct supervisors to rate their creative performance. These supervisors did not know which employees were using the artificial intelligence tool. Additionally, the researchers used two external evaluators to judge specific ideas produced by the employees. These evaluators looked at how novel and useful the ideas were without knowing who wrote them.</p>
<p>The researchers looked at cognitive job resources, which are the tools and mental space people need to handle complex work. This includes having enough information and the ability to switch between hard and easy tasks. They also measured metacognitive strategies. This term describes how people actively monitor and adjust their own thinking to reach a goal.</p>
<p>A person with high metacognitive strategies might plan out their steps before starting a task. They also tend to check their own progress and change their approach if they are not making enough headway. The study suggests that the artificial intelligence tool increased the cognitive resources available to employees. The tool helped them find information quickly and allowed them to manage their mental energy more effectively.</p>
<p>The results show that the employees who had access to the technology generally received higher creativity ratings from their supervisors. The external evaluators also gave higher scores for novelty to the ideas produced by this group. The evidence suggests that the tool was most effective when workers already used strong metacognitive strategies. These workers were able to use the technology to fill specific gaps in their knowledge.</p>
<p>For employees who did not use these thinking strategies, the tool did not significantly improve their creative output. These individuals appeared to be less effective at using the technology to gain new resources. The study indicates that the tool provides the raw material for creativity, but the worker must know how to direct the process. Specifically, workers who monitored their own mental state knew when to use the tool to take a break or switch tasks.</p>
<p>This ability to switch tasks is important because it prevents a person from getting stuck on a single way of thinking. When the technology handled routine parts of a job, it gave workers more mental space to focus on complex problem solving. The researchers found that the positive effect of the technology became significant once a worker’s use of thinking strategies reached a certain level. Below that threshold, the tool did not provide a clear benefit for creativity.</p>
<p>The cognitive approach to creativity suggests that coming up with new ideas is a mental process of searching through different areas of knowledge. People must find pieces of information and then combine them in ways that have not been tried before. This process can be very demanding because people have a limited amount of time and mental energy. Researchers call this the knowledge burden.</p>
<p>It takes a lot of effort to find, process, and understand new information from different fields. If a person spends all their energy just gathering facts, they might not have enough strength left to actually be creative. Artificial intelligence can help by taking over the task of searching for and summarizing information. This allows the human worker to focus on the high level task of combining those facts into something new.</p>
<p>Metacognition is essentially thinking about one’s own thinking. It involves a person being aware of what they know and what they do not know. When a worker uses metacognitive strategies, they act like a coach for their own brain. They ask themselves if their current plan is working or if they need to try a different path.</p>
<p>The study shows that this self-awareness is what allows a person to use artificial intelligence effectively. Instead of just accepting whatever the computer says, a strategic thinker uses the tool to test specific ideas. The statistical analysis revealed that the artificial intelligence tool provided workers with more room to think. This extra mental space came from having better access to knowledge and more chances to take mental breaks.</p>
<p>The researchers used a specific method called multilevel analysis to account for the way employees were organized within departments and teams. This helps ensure that the findings are not skewed by the influence of a single department or manager. The researchers also checked to see if other factors like past job performance or self-confidence played a role. Even when they accounted for these variables, the link between thinking strategies and the effective use of artificial intelligence remained strong.</p>
<p>The data showed that the positive impact of the tool on creativity was quite large for those who managed their thinking well. For those with low scores in that area, the tool had almost no impact on their creative performance. To test creativity specifically, the researchers asked participants to solve a real problem. They had to provide suggestions for protecting employee privacy in a digital office.</p>
<p>Responses to this task had to be at least 70 Chinese characters long. It was designed to see if the participants could think of novel ways to prevent information leaks or excessive monitoring by leadership. The external raters then scored these responses based on how original and useful they were. This provided a more objective look at creativity than just asking a supervisor for their opinion.</p>
<p>“The main takeaway is that generative AI does not automatically make people more creative,” Sun told PsyPost. “Simply providing access to AI tools is not enough, and in many cases it yields little creative benefit. Our findings show that the creative value of AI depends on how people engage with it during the creative process. Individuals who actively monitor their own understanding, recognize what kind of help they need, and deliberately decide when and how to use AI are much more likely to benefit creatively.”</p>
<p>“In contrast, relying on AI in a more automatic or unreflective way tends to produce weaker creative outcomes. For the average person, the message is simple: AI helps creativity when it is used thoughtfully: Pausing to reflect on what you need, deciding when AI can be useful, and actively shaping its output iteratively are what distinguish creative gains from generic results.”</p>
<p>As with all research, there are some limitations to consider. The researchers relied on workers to report their own thinking strategies, which can sometimes be inaccurate. The study also took place in a single company within one specific country. People in different cultures might interact with artificial intelligence in different ways.</p>
<p>Future research could look at how long-term use of these tools affects human skills. There is a possibility that relying too much on technology could make people less independent over time. Researchers might also explore how team dynamics influence the way people use these tools. Some office environments might encourage better thinking habits than others.</p>
<p>It would also be helpful to see if the benefits of these tools continue to grow over several months or if they eventually level off. These questions will be important as technology continues to change the way we work. The findings suggest that simply buying new software is not enough to make a company more innovative. Organizations should also consider training their staff to be more aware of their own thinking processes.</p>
<p>Since the benefits of artificial intelligence depend on a worker’s thinking habits, generic software training might not be enough. Instead, programs might need to focus on how to analyze a task and how to monitor one’s own progress. These metacognitive skills are often overlooked in traditional professional development. The researchers note that these skills can be taught through short exercises. Some of these involve reflecting on past successes or practicing new ways to plan out a workday.</p>
<p>The study, “<a href="https://psycnet.apa.org/doi/10.1037/apl0001296" target="_blank" rel="noopener">How and for Whom Using Generative AI Affects Creativity: A Field Experiment</a>,” was authored by Shuhua Sun, Zhuyi Angelina Li, Maw-Der Foo, Jing Zhou, and Jackson G. Lu.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/scientists-asked-men-smell-hundreds-of-different-vulvar-odors-to-test-the-leaky-cue-hypothesis/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Scientists asked men to smell hundreds of different vulvar odors to test the “leaky-cue hypothesis”</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Feb 11th 2026, 21:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>A new study published in <em><a href="https://doi.org/10.1016/j.evolhumbehav.2025.106742" target="_blank" rel="noopener">Evolution and Human Behavior</a></em> suggests that modern women may not chemically signal fertility through vulvar body odor, a trait commonly observed in other primates. The findings indicate that men are unable to detect when a woman is in the fertile phase of her menstrual cycle based solely on the scent of the vulvar region. This research challenges the idea that humans have retained these specific evolutionary mating signals.</p>
<p>In the animal kingdom, particularly among non-human primates like lemurs, baboons, and chimpanzees, females often broadcast their reproductive status to males. This is frequently done through olfactory signals, specifically odors from the genital region, which change chemically during the fertile window. These scents serve as information for males, helping them identify when a female is capable of conceiving. Because humans share a deep evolutionary history with these primates, scientists have debated whether modern women retain these chemical signals.</p>
<p>A concept known as the “leaky-cue hypothesis” proposes that women might unintentionally emit subtle physiological signs of fertility. While previous research has investigated potential signals in armpit odor, voice pitch, or facial attractiveness, results have been inconsistent. </p>
<p>The specific scent of the vulvar region has remained largely unexplored using modern, rigorous methods, despite its biological potential as a source of chemical communication. To address this gap, a team led by Madita Zetzsche from the Behavioural Ecology Research Group at Leipzig University and the Max Planck Institute for Evolutionary Anthropology conducted a detailed investigation.</p>
<p>The researchers recruited 28 women to serve as odor donors. These participants were between the ages of 20 and 30, did not use hormonal contraception, and had regular menstrual cycles. To ensure the accuracy of the fertility data, the team did not rely on simple calendar counting. Instead, they used high-sensitivity urinary tests to detect luteinizing hormone and analyzed saliva samples to measure levels of estradiol and progesterone. This allowed the scientists to pinpoint the exact day of ovulation for each participant.</p>
<p>To prevent external factors from altering body odor, the donors adhered to a strict lifestyle protocol. They followed a vegetarian or vegan diet and avoided foods with strong scents, such as garlic, onion, and asparagus, as well as alcohol and tobacco. The women provided samples at ten specific points during their menstrual cycle. These points were clustered around the fertile window to capture any rapid changes in odor that might occur just before or during ovulation.</p>
<p>The study consisted of two distinct parts: a chemical analysis and a perceptual test. For the chemical analysis, the researchers collected 146 vulvar odor samples from a subset of 16 women. They used a specialized portable pump to draw air from the vulvar region into stainless steel tubes containing polymers designed to trap volatile compounds. These are the lightweight chemical molecules that evaporate into the air and create scent.</p>
<p>The team analyzed these samples using gas chromatography–mass spectrometry. This is a laboratory technique that separates a mixture into its individual chemical components and identifies them. The researchers looked for changes in the chemical profile that corresponded to the women’s conception risk and hormone levels. They specifically sought to determine if the abundance of certain chemical compounds rose or fell in a pattern that tracked the menstrual cycle.</p>
<p>The chemical analysis revealed no consistent evidence that the overall scent profile changed in a way that would allow fertility to be tracked across the menstrual cycle. While some specific statistical models suggested a potential link between the risk of conception and levels of certain substances—such as an increase in acetic acid and a decrease in a urea-related compound—these findings were not stable. When the researchers ran robustness checks, such as excluding samples from donors who had slightly violated dietary rules, the associations disappeared. The researchers concluded that there is likely a low retention of chemical fertility cues in the vulvar odor of modern women.</p>
<p>In the second part of the study, 139 men participated as odor raters. To collect the scent for this experiment, the female participants wore cotton pads in their underwear overnight for approximately 12 hours. These pads were then frozen to preserve the scent and later presented to the male participants in glass vials. The men, who were unaware of the women’s fertility status, sniffed the samples and rated them on three dimensions: attractiveness, pleasantness, and intensity.</p>
<p>The perceptual results aligned with the chemical findings. The statistical analysis showed that the men’s ratings were not influenced by the women’s fertility status. The men did not find the odor of women in their fertile window to be more attractive or pleasant than the odor collected during non-fertile days. Neither the risk of conception nor the levels of reproductive hormones predicted how the men perceived the scents.</p>
<p>These null results were consistent even when the researchers looked at the data in different ways, such as examining specific hormone levels or the temporal distance to ovulation. The study implies that if humans ever possessed the ability to signal fertility through vulvar scent, this trait has likely diminished significantly over evolutionary time.</p>
<p>The researchers suggest several reasons for why these cues might have been lost or suppressed in humans. Unlike most primates that walk on four legs, humans walk upright. This bipedalism moves the genital region away from the nose of other individuals, potentially reducing the role of genital odor in social communication. Additionally, human cultural practices, such as wearing clothing and maintaining high levels of hygiene, may have further obscured any remaining chemical signals.</p>
<p>It is also possible that social odors in humans have shifted to other parts of the body, such as the armpits, although evidence for axillary fertility cues remains mixed. The researchers noted that while they found no evidence of fertility signaling in this context, it remains possible that such cues require more intimate contact or sexual arousal to be detected, conditions that were not replicated in the laboratory.</p>
<p>Additionally, the strict dietary and behavioral controls, while necessary for scientific rigor, might not reflect real-world conditions where diet varies. The sample size for the chemical analysis was also relatively small, which can make it difficult to detect very subtle effects.</p>
<p>Future research could investigate whether these cues exist in more naturalistic settings or investigate the role of the vaginal microbiome, which differs significantly between humans and non-human primates. The high levels of Lactobacillus bacteria in humans create a more acidic environment, which might alter the chemical volatility of potential fertility signals.</p>
<p>The study, “<a href="https://doi.org/10.1016/j.evolhumbehav.2025.106742" target="_blank">Understanding olfactory fertility cues in humans: chemical analysis of women’s vulvar odour and perceptual detection of these cues by men</a>,” was authored by Madita Zetzsche, Marlen Kücklich, Brigitte M. Weiß, Julia Stern, Andrea C. Marcillo Lara, Claudia Birkemeyer, Lars Penke, and Anja Widdig.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/blue-light-exposure-may-counteract-anxiety-caused-by-chronic-vibration/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Blue light exposure may counteract anxiety caused by chronic vibration</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Feb 11th 2026, 20:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>Living in a modern environment often means enduring a constant hum of background noise and physical vibration. From the rumble of heavy traffic to the oscillation of industrial machinery, these invisible stressors can gradually erode mental well-being. </p>
<p>A new study suggests that a specific color of light might offer a simple way to counter the anxiety caused by this chronic environmental agitation. The research indicates that blue light exposure can calm the nervous system even when the physical stress of vibration continues. These findings were published in the journal <em><a href="https://doi.org/10.1016/j.physbeh.2025.115187" target="_blank">Physiology & Behavior</a></em>.</p>
<p>Anxiety disorders are among the most common mental health challenges globally. They typically arise from a complicated mix of biological traits and social pressures. Environmental factors are playing an increasingly large role in this equation. Chronic exposure to low-frequency noise and vibration is known to disrupt the body’s hormonal balance. This disruption frequently leads to psychological symptoms such as irritability, fatigue, and persistent anxiety.</p>
<p>Doctors often prescribe medication to manage these conditions once a diagnosis is clear. These drugs usually work by altering the chemical signals in the brain to inhibit anxious feelings. However, pharmaceutical interventions are not always the best first step for early-stage anxiety. There is a growing demand for therapies that are accessible and carry fewer side effects. This has led scientists to investigate light therapy as a promising alternative.</p>
<p>Light does more than allow us to see. It also regulates our internal biological clocks and influences our mood. Specialized cells in the eyes detect light and send signals directly to the brain regions that control hormones. This pathway allows light to modulate the release of neurotransmitters associated with emotional well-being.</p>
<p>Despite this general knowledge, there has been little research on how specific light wavelengths might combat anxiety caused specifically by vibration. A team of researchers decided to fill this gap using zebrafish as a model organism. Zebrafish are small, tropical freshwater fish that are widely used in neuroscience. Their brain chemistry and genetic structure share many similarities with humans.</p>
<p>The study was led by Longfei Huo and senior author Muqing Liu from the School of Information Science and Technology at Fudan University in China. They aimed to identify if light could serve as a preventative measure against vibration-induced stress. The team designed a controlled experiment to first establish which vibrations caused the most stress. They subsequently tested whether light could reverse that stress.</p>
<p>The researchers began by separating the zebrafish into different groups. Each group was exposed to a specific frequency of vibration for one hour daily. The frequencies tested were 30, 50, and 100 Hertz. To ensure consistency, the acceleration of the vibration was kept constant across all groups. This phase of the experiment lasted for one week.</p>
<p>To measure anxiety in fish, the scientists relied on established behavioral patterns. When zebrafish are comfortable, they swim freely throughout their tank. When they are anxious, they tend to sink to the bottom. They also exhibit “thigmotaxis,” which is a tendency to hug the walls of the tank rather than exploring open water.</p>
<p>The team utilized a “novel tank test” to observe these behaviors. They placed the fish in a new environment and recorded how much time they spent in the lower half. The results showed that daily exposure to vibration made the fish act more anxious. The effect was strongest in the group exposed to 100 Hertz. These fish spent significantly more time at the bottom of the tank than unexposed controls.</p>
<p>The researchers also used a “light-dark box test.” In this setup, half the tank is illuminated and the other half is dark. Anxious fish prefer to hide in the dark. The fish exposed to 100 Hertz vibration spent much more time in the dark zones compared to the control group. This confirmed that the vibration was inducing a strong anxiety-like state.</p>
<p>After establishing that 100 Hertz vibration caused the most stress, the researchers moved to the second phase of the study. They wanted to see if light color could mitigate this effect. They repeated the vibration exposure but added a light therapy component. While the fish underwent vibration, they were bathed in either red, green, blue, or white light.</p>
<p>The blue light used in the experiment had a wavelength of 455 nanometers. The red light was 654 nanometers, and the green was 512 nanometers. The light exposure lasted for two hours each day. The researchers then ran a comprehensive battery of behavioral tests to see if the light made a difference.</p>
<p>The team found that the color of the light had a profound impact on the mental state of the fish. Zebrafish exposed to the blue light showed much less anxiety than those in the other groups. In the novel tank test, the blue-light group spent less time at the bottom. They explored the upper regions of the water almost as much as fish that had never been exposed to vibration at all.</p>
<p>In contrast, the red light appeared to offer no benefit. In some metrics, the red light seemed to make the anxiety slightly worse. Fish under red light spent the longest time hiding in the dark during the light-dark box test. This suggests that the calming effect is specific to the wavelength of the light and not just the brightness.</p>
<p>The researchers also introduced two innovative testing methods to validate their results. One was a “social interaction test.” Zebrafish are social animals and usually prefer to be near others. Stress often causes them to withdraw. The researchers placed a group of fish inside a transparent cylinder within the tank. They then measured how much time the test fish spent near this cylinder.</p>
<p>Fish exposed to vibration and white light avoided the group. However, the fish treated with blue light spent a large amount of time near their peers. This indicated that their social anxiety had been alleviated. The blue light restored their natural desire to interact with others.</p>
<p>The second new method was a “pipeline swimming test.” This involved placing the fish in a tube with a gentle current. The setup allowed the scientists to easily measure swimming distance and smoothness of movement. Stressed fish tended to swim erratically or struggle against the flow. The blue-light group swam longer distances with smoother trajectories.</p>
<p>To understand the biological mechanism behind these behavioral changes, the scientists analyzed the fish’s brain chemistry. They measured the levels of three key chemicals: cortisol, norepinephrine, and serotonin. Cortisol is the primary stress hormone in both fish and humans. High levels of cortisol are a hallmark of physiological stress.</p>
<p>The analysis revealed that vibration exposure caused a spike in cortisol and norepinephrine. This hormonal surge matched the anxious behavior observed in the tanks. However, the application of blue light blocked this increase. The fish treated with blue light had cortisol levels comparable to the unstressed control group.</p>
<p>Even more striking was the effect on serotonin. Serotonin is a neurotransmitter that helps regulate mood and promotes feelings of well-being. The study found that 455 nm blue light specifically boosted serotonin levels in the fish. This suggests that blue light works by simultaneously lowering stress hormones and enhancing mood-regulating chemicals.</p>
<p>The authors propose that the blue light activates specific cells in the retina. These cells, known as intrinsically photosensitive retinal ganglion cells, contain a pigment called melanopsin. Melanopsin is highly sensitive to blue wavelengths. When activated, these cells send calming signals to the brain’s emotional centers.</p>
<p>There are some limitations to this study that must be considered. The research focused heavily on specific frequencies and wavelengths. It is possible that other combinations of light and vibration could yield different results. The study also did not investigate potential interaction effects between the light and vibration in a full factorial design.</p>
<p>Additionally, while zebrafish are a good model, they are not humans. The neural pathways are similar, but the complexity of human anxiety involves higher-level cognitive processes. Future research will need to replicate these findings in mammals. Scientists will also need to determine the optimal intensity and duration of light exposure for therapeutic use.</p>
<p>The study opens up new possibilities for managing environmental stress. It suggests that modifying our lighting environments could protect against the invisible toll of noise and vibration. For those living or working in industrial areas, blue light therapy could become a simple, non-invasive tool for mental health.</p>
<p>The study, “<a href="https://doi.org/10.1016/j.physbeh.2025.115187" target="_blank">Blue light exposure mitigates vibration noise-induced anxiety by enhancing serotonin levels</a>,” was authored by Longfei Huo, Xiaojing Miao, Yi Ren, Xuran Zhang, Qiqi Fu, Jiali Yang, and Muqing Liu.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/relatives-with-lower-paternity-uncertainty-are-perceived-as-kinder/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Relatives with lower paternity uncertainty are perceived as kinder</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Feb 11th 2026, 18:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>According to a large study published in <a href="https://doi.org/10.1177/14747049251357493"><em>Evolutionary Psychology</em></a>, people consistently perceive family members as kinder when there is greater certainty of biological relatedness.</p>
<p>Humans often assume that kindness within families is driven mainly by love, shared history, or cultural expectations. Yet evolutionary theories suggest that altruism within families may also be shaped by genetic relatedness. According to kin selection theory, people are predisposed to invest more care and support in relatives who are more likely to share their genes, because such investment indirectly promotes their own genetic success.</p>
<p>One important factor complicating this picture is <a href="https://www.psypost.org/almost-all-unmarried-pregant-women-say-that-the-fetus-resembles-the-father-study-finds/">paternity uncertainty</a>, the fact that, unlike maternity, biological fatherhood is never absolutely certain. Radim Kuba and Jaroslav Flegr examined whether this uncertainty influences how people perceive kindness among different family members.</p>
<p>Drawing on evolutionary psychology and prior findings on parental and grandparental investment, they asked whether relatives associated with higher paternity certainty (such as mothers or maternal grandmothers) are consistently seen as kinder than those associated with lower certainty (such as paternal grandfathers).</p>
<p>The researchers analyzed data from a large online survey conducted between 2016 and 2021. Participants were recruited through a Czech and Slovak Facebook-based volunteer community using a snowball sampling method, allowing the study to reach a broad internet population. Nearly 15,000 individuals began the survey, and after exclusions, 9,128 adult participants who rated at least one family member were included in the final analyses.</p>
<p>Participants completed an extensive questionnaire in which they rated the kindness of various family members, such as parents, grandparents, siblings, and step-relatives. Using a scale ranging from “strongly disagree” to “strongly agree,” they responded to statements about whether a given relative was kinder than other people. Importantly, the concept of kindness was left intentionally broad, allowing respondents to draw on lifelong experiences, including emotional support and everyday prosocial behavior.</p>
<p>The findings revealed a clear and consistent pattern: perceived kindness decreased as paternity uncertainty increased. Mothers and maternal grandmothers (relatives with no paternity uncertainty) received the highest kindness ratings, followed by fathers, maternal grandfathers, and paternal grandmothers, who carry one level of uncertainty. Paternal grandfathers, associated with two layers of uncertainty, were rated lowest among biological grandparents. These differences were statistically reliable, even though their size was modest.</p>
<p>Importantly, this pattern did not appear among step-relatives. Step-family members, who share no genetic relatedness and identical levels of paternity uncertainty, were rated similarly to one another, regardless of role. This contrast strengthens the authors’ interpretation that genetic relatedness, and not just social roles or cultural stereotypes, drives the observed differences.</p>
<p>Additional analyses showed that daughters tended to rate their biological parents as kinder than sons did, a pattern consistent with evolutionary predictions about investment through more certain maternal lines.</p>
<p>Overall, this study suggests that even in modern societies, subtle evolutionary pressures linked to genetic certainty continue to shape how people perceive kindness and altruism within their families.</p>
<p>Of note is that the voluntary, non-representative nature of the sample, particularly its relatively high level of education, may limit the generalizability of findings. Further, kindness ratings were subjective and may reflect personal relationship quality rather than purely objective behavior.</p>
<p>The research, “<a href="https://doi.org/10.1177/14747049251357493">The Evolutionary Roots of Familial Altruism: Paternity Uncertainty Shapes Patterns of Kindness</a>,” was authored by Radim Kuba and Jaroslav Flegr.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/specific-brain-training-regimen-linked-to-lower-dementia-risk-in-20-year-study/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Specific brain training regimen linked to lower dementia risk in 20-year study</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Feb 11th 2026, 16:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>A specific regimen of computer-based brain exercises focused on visual processing speed may lower the long-term risk of receiving a dementia diagnosis. A new analysis of data spanning two decades suggests that older adults who engaged in this adaptive training, provided they participated in follow-up sessions, were approximately 25 percent less likely to be diagnosed with dementia compared to a control group. These results were published in the journal <em><a href="https://doi.org/10.1002/trc2.70197" target="_blank">Alzheimer’s & Dementia: Translational Research & Clinical Interventions</a></em>.</p>
<p>The search for effective ways to prevent or delay Alzheimer’s disease and related dementias is a primary focus of modern medical research. While physical exercise and diet are frequently cited as potential protective factors, the role of specific cognitive training remains a subject of intense debate. Many commercial products promise to sharpen the mind, yet scientific evidence supporting their ability to prevent disease has been inconsistent. To address this uncertainty, researchers revisited data from a gold-standard clinical trial to see if specific interventions had lasting effects on brain health.</p>
<p>The research was led by Norma B. Coe, a professor at the Perelman School of Medicine at the University of Pennsylvania. Coe and her colleagues sought to understand if the benefits of cognitive training could be detected in medical records twenty years after the training took place. They focused on whether different types of mental exercises had varying impacts on the likelihood of a patient developing dementia as they aged into their eighties and nineties.</p>
<p>The team utilized data from the Advanced Cognitive Training for Independent and Vital Elderly study. Known as the ACTIVE study, this large-scale project began in the late 1990s. It was designed as a randomized controlled trial, which is widely considered the most rigorous method for determining cause and effect in science. The original trial enrolled nearly 3,000 healthy adults over the age of 65 living in the community.</p>
<p>Participants in the ACTIVE study were randomly assigned to one of four groups. The first group received memory training. This instruction focused on teaching strategies for remembering word lists and sequences of items. The second group received reasoning training. These sessions involved identifying patterns in number series and solving problems related to daily living. The third group received speed of processing training. The fourth group served as a control and received no training.</p>
<p>The speed of processing intervention was distinct from the other two. It involved a computer-based task designed to improve the user’s visual attention. Participants were asked to identify an object in the center of the screen while simultaneously locating a target in the periphery. As the user improved, the program became faster and the tasks became more difficult. This made the training “adaptive,” meaning it constantly pushed the participant to the limit of their ability.</p>
<p>The initial training period lasted for five to six weeks. Researchers offered a subset of participants “booster” sessions. These additional training blocks occurred one year and three years after the initial enrollment. The goal of these boosters was to reinforce the skills learned during the first phase.</p>
<p>To determine long-term outcomes, Coe and her team linked the original study data with Medicare claims records spanning from 1999 to 2019. This allowed the researchers to track the participants for up to 20 years. They looked for diagnostic codes indicating Alzheimer’s disease or other forms of dementia. By using insurance claims, the team could identify diagnoses made by doctors in real-world clinical settings, even for participants who had stopped communicating with the original study organizers.</p>
<p>The analysis included 2,021 of the original participants. The results revealed a specific and isolated benefit. Participants who underwent the speed of processing training and attended at least one booster session showed a reduced risk of diagnosed dementia. The hazard ratio was 0.75, indicating a 25 percent lower risk compared to the control group.</p>
<p>The study did not find similar benefits for the other groups. Participants who received memory training or reasoning training did not show a statistically distinct difference in dementia diagnosis rates compared to the control group. This was true even if they attended booster sessions. Additionally, individuals in the speed training group who did not attend the booster sessions showed no reduction in risk. The protective effect appeared to depend on the combination of the specific visual speed task and the reinforcement provided by the follow-up sessions.</p>
<p>The researchers propose several reasons why the speed training might have yielded different results than the memory or reasoning exercises. One hypothesis centers on the type of memory engaged. The memory and reasoning interventions relied on “declarative memory.” This involves learning explicit strategies and conscious techniques to solve problems. In contrast, the speed training engaged “procedural memory.” This type of learning becomes automatic and unconscious through repetition, similar to riding a bike.</p>
<p>Another key difference was the adaptive nature of the speed task. The computer program adjusted the difficulty in real-time. This ensured that participants were always challenged, potentially stimulating the brain more effectively than the static strategies taught in the other groups. The authors suggest that this intense, adaptive engagement of the brain’s processing systems might facilitate neuroplasticity, or the brain’s ability to rewire itself.</p>
<p>The findings align with previous, shorter-term analyses of the ACTIVE study, which had hinted at cognitive benefits for the speed training group. However, this is the first analysis to use Medicare claims to confirm a reduction in diagnosed disease over such an extended timeframe.</p>
<p>“This work conveys a clear message but also leads us to ask many new questions. We are keen to dig deeper to understand the underlying mechanisms at play here, but ultimately this is a great problem to have,” said Marilyn Albert, the corresponding study author and director of the Johns Hopkins Alzheimer’s Disease Research Center at the Johns Hopkins School of Medicine.</p>
<p>There are limitations to the study that provide context for the results. The analysis relied on administrative billing codes rather than direct neurological examinations of every participant. This means a diagnosis would only be recorded if a participant visited a doctor and the doctor coded the visit correctly. It is possible that some participants developed dementia but were never formally diagnosed.</p>
<p>The study also excluded participants who were enrolled in Medicare Advantage plans because complete claims data were not available for them. If the population in Medicare Advantage plans differs in health or socioeconomic status from those in traditional Medicare, it could influence the generalizability of the findings. Additionally, the researchers noted that individuals with higher education levels or better access to healthcare are often more likely to receive a dementia diagnosis, which could introduce bias into the claims data.</p>
<p>Despite these caveats, the results offer a potential avenue for preventative intervention. “The findings reported here suggest that moderate cognitive training could delay the onset of dementia over subsequent years,” said Richard Hodes, director of the National Institute on Aging, in a press release. “There is still more research to be done to determine about how this works, but this promising lead may move the field further into developing effective interventions to delay or prevent onset of dementia.”</p>
<p>Future research will likely focus on isolating the specific mechanisms that made the speed training effective. Scientists need to understand if the benefit comes from the visual aspect of the task, the speed component, or the adaptive difficulty. Understanding why the memory and reasoning strategies failed to prevent disease diagnosis is equally important for designing future public health programs.</p>
<p>The study also raises questions about the optimal “dose” of training. Since the benefit was only seen in those who received booster sessions, it suggests that brain training may be like physical exercise: it requires maintenance to remain effective.</p>
<p>“This study shows that simple brain training, done for just weeks, may help people stay mentally healthy for years longer,” said Jay Bhattacharya, the director of the National Institutes of Health. “That’s a powerful idea — that practical, affordable tools could help delay dementia and help older adults keep their independence and quality of life.”</p>
<p>The study, “<a href="https://doi.org/10.1002/trc2.70197" target="_blank">Impact of cognitive training on claims-based diagnosed dementia over 20 years: evidence from the ACTIVE study</a>,” was authored by Norma B. Coe, Katherine E. M. Miller, Chuxuan Sun, Elizabeth Taggert, Alden L. Gross, Richard N. Jones, Cynthia Felix, Marilyn S. Albert, George W. Rebok, Michael Marsiske, Karlene K. Ball, and Sherry L. Willis.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/childhood-trauma-scores-fail-to-predict-violent-misconduct-in-juvenile-detention/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Childhood trauma scores fail to predict violent misconduct in juvenile detention</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Feb 11th 2026, 14:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>New research published in <em><a href="https://doi.org/10.1016/j.avb.2025.102121" target="_blank">Aggression and Violent Behavior</a></em> indicates that a history of childhood trauma may not effectively predict which incarcerated youth will engage in the most frequent and violent misconduct. The study suggests that while adverse childhood experiences explain why young people enter the justice system, current factors such as mental health status and gang affiliation are stronger predictors of behavior during incarceration. </p>
<p>Psychologists and criminologists identify childhood adversity as a primary driver of delinquency. Exposure to trauma often hinders emotional regulation and impulse control. This can lead adolescents to interpret social interactions as hostile and resort to aggression. Correctional systems frequently use the Adverse Childhood Experiences score, commonly known as the ACE score, to quantify this history. The traditional ACE score is a cumulative measure of ten specific categories of abuse, neglect, and household dysfunction.</p>
<p>There is a growing consensus that the original ten-item measure may be too narrow for justice-involved youth. It fails to account for systemic issues such as poverty, community violence, and discrimination. Consequently, scholars have proposed expanded measures to capture a broader range of adversities.</p>
<p>Despite the widespread use of these scores, little research has isolated their ability to predict the behavior of the most serious offenders. Most studies examine general misconduct across all inmates. This study aimed to determine if trauma scores could identify the small fraction of youth responsible for the vast majority of violent and disruptive incidents within state facilities.</p>
<p>“While research has extensively documented that adverse childhood experiences (ACEs) increase the risk of juvenile delinquency, we knew much less about whether ACEs predict the most serious forms of institutional misconduct among already-incarcerated youth,” said study author Jessica M. Craig, an associate professor of criminal justice and director of graduate programs at the University of North Texas.</p>
<p>“We were particularly interested in whether an expanded ACEs measure—which includes experiences like witnessing community violence, homelessness, and extreme poverty beyond the traditional 10-item scale—would better predict which youth become chronic and violent misconduct offenders during incarceration. This matters because institutional misconduct can lead to longer confinement, additional legal consequences, and reduced access to rehabilitation programs.”</p>
<p>For their study, the researchers analyzed data from a cohort of 4,613 serious and violent juvenile offenders. The sample included all youth adjudicated and incarcerated in state juvenile correctional facilities in Texas between 2009 and 2013 who had completed an initial intake assessment. The participants were predominantly male. Approximately 46 percent were Hispanic and 34 percent were Black. The average age at the time of incarceration was 16 years old.</p>
<p>The researchers utilized the Positive Achievement Change Tool to derive two distinct trauma scores for each individual. The first was the traditional ACE score. This metric summed exposure to ten indicators: physical, emotional, and sexual abuse; physical and emotional neglect; household substance abuse; mental illness in the home; parental separation or divorce; domestic violence against a mother; and the incarceration of a household member.</p>
<p>The second measure was an expanded ACE score. This metric included the original ten items plus four additional variables relevant to high-risk populations. These additions included a history of foster care or shelter placements, witnessing violence in the community, experiencing homelessness, and living in a family with income below the poverty level. The average youth in the sample had a traditional ACE score of roughly 3.3 and an expanded score of nearly 4.9.</p>
<p>The study did not treat misconduct as a simple average. The researchers sought to identify chronic perpetrators. They calculated the rate of total misconduct incidents and violent misconduct incidents for each youth. They then separated the offenders into groups representing the top 10 percent and the top 1 percent of misconduct perpetrators. This allowed the analysis to focus specifically on the individuals who pose the greatest challenge to institutional safety. </p>
<p>The researchers used statistical models to test whether higher trauma scores increased the likelihood of being in these high-rate groups. These models controlled for other potential influences, including prior criminal history, offense type, age, race, and substance abuse history.</p>
<p>The analysis yielded results that challenged the assumption that past trauma dictates future institutional violence. Neither the traditional ACE score nor the expanded ACE score served as a significant predictor for membership in the top 10 percent or top 1 percent of misconduct perpetrators. This finding held true for both general rule-breaking and specific acts of violence. The addition of variables like poverty and community violence to the trauma score did not improve its predictive power regarding institutional behavior.</p>
<p>“We were surprised that even the expanded ACEs measure—which included witnessing violence, foster care placement, homelessness, and poverty—failed to predict high-rate misconduct,” Craig told PsyPost. “Given that previous research suggested the traditional 10-item ACEs scale might underestimate adversity among justice-involved youth, we expected the expanded measure to show stronger predictive power.”</p>
<p>While trauma history did not predict chronic misconduct, other personal and situational characteristics proved to be strong indicators. The most consistent predictor of violent behavior was a history of serious mental health problems. Youth with such histories had approximately 150 percent increased odds of falling into the top 1 percent of violent misconduct perpetrators compared to their peers. This effect size suggests that current psychological stability is a primary determinant of safety within the facility.</p>
<p>Age and social connections also played significant roles. The data indicated that older youth were substantially less likely to engage in chronic misconduct. Specifically, those who were older at the time of incarceration were about 50 to 60 percent less likely to be in the high-rate misconduct groups. Gang affiliation was another robust predictor. Youth with gang ties were significantly more likely to be among the most frequent violators of institutional rules. This points to the influence of peer dynamics and the prison social structure on individual behavior.</p>
<p>“These are substantively meaningful effects that have real implications for correctional programming and supervision strategies,” Craig said.</p>
<p>The study provides evidence that the factors driving entry into the justice system may differ from the factors driving behavior once inside. While childhood adversity sets a trajectory toward delinquency, the structured environment of a correctional facility introduces new variables. The researchers suggest that the “survival coping” mechanisms youth develop in response to trauma might manifest differently depending on their immediate environment and mental state.</p>
<p>“Contrary to expectations, we found that neither traditional nor expanded ACEs measures significantly predicted which youth became the most frequent perpetrators of institutional misconduct,” Craig explained. “Instead, factors like age at incarceration, gang affiliation, and mental health history were much stronger predictors.” </p>
<p>“This suggests that while childhood trauma remains critically important for understanding how youth enter the justice system, managing their behavior during incarceration may require greater focus on their current mental health needs, developmental stage, and institutional factors rather than trauma history alone.”</p>
<p>These findings imply that correctional administrators should look beyond a cumulative trauma score when assessing risk. Screening processes that emphasize current mental health conditions and gang involvement may offer more utility for preventing violence than those focusing solely on historical adversity. Effective management of high-risk populations appears to require targeted mental health interventions and strategies to disrupt gang activity.</p>
<p>There are some limitations to consider. The data came from a single state, which may limit the ability to generalize the findings to other jurisdictions with different correctional cultures or demographics. </p>
<p>The study also relied on cumulative scores that count the presence of adverse events but do not measure their severity, frequency, or timing. It is possible that specific types of trauma, such as physical abuse, have different impacts than others, such as parental divorce. A simple sum of these events might obscure specific patterns that do predict violence.</p>
<p>“It’s important to emphasize that our findings don’t diminish the significance of childhood trauma in understanding juvenile justice involvement overall,” Craig said. “ACEs remain crucial for understanding pathways into the system and should absolutely be addressed through trauma-informed programming. However, when it comes to predicting institutional violence specifically among already deeply-entrenched offenders, personal characteristics and current mental health status appear more salient than historical trauma exposure.”</p>
<p>“Future research should examine whether specific patterns or combinations of traumatic experiences—rather than cumulative scores—might better predict institutional violence. We’d also like to investigate whether trauma-informed treatment programs, when youth actually receive them during incarceration, can reduce misconduct even when trauma history alone doesn’t predict it. Additionally, examining the timing and severity of ACEs, rather than just their presence or absence, could clarify the trauma-violence relationship.”</p>
<p>The study, “<a href="https://doi.org/10.1016/j.avb.2025.102121" target="_blank">Looking back: The impact of childhood adversity on institutional misconduct among a cohort of serious and violent institutionalized delinquents</a>,” was authored by Jessica M. Craig, Haley Zettler, and Chad R. Trulson.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/study-finds-mindfulness-creates-lasting-improvements-in-visual-memory/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Study finds mindfulness creates lasting improvements in visual memory</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Feb 11th 2026, 12:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>An experimental study conducted in China found that a 5-week emotion-targeted mindfulness training improved participants’ working memory accuracy for faces displaying emotions, with the exception of faces displaying fear. The improvements continued to be present one month after the training was completed. The research was published in <a href="https://doi.org/10.1038/s41539-025-00389-0"><em>npj Science of Learning</em></a>.</p>
<p>Mindfulness is the practice of intentionally paying attention to the present moment with openness and without judgment. It involves noticing thoughts, emotions, bodily sensations, and external experiences as they arise. Mindfulness has roots in Buddhist meditation traditions but is widely used today in secular psychological and health contexts. It is commonly cultivated through practices such as meditation, breathing exercises, and mindful movement.</p>
<p>Research shows that mindfulness can reduce stress, anxiety, and depressive symptoms. It can also improve emotional regulation and increase awareness of habitual reactions. Mindfulness helps people relate differently to difficult thoughts and feelings rather than trying to suppress or avoid them. In everyday life, it can be practiced during routine activities such as eating, walking, or listening.</p>
<p>Study author Hui Kou and her colleagues wanted to explore the impact of mindfulness training on working memory for faces and the cognitive mechanisms underlying this effect. They conducted an experiment.</p>
<p>Study participants were 120 undergraduate students from a medical university in China. Ninety of them were women. Participants’ average age was 20 years. All participants were right-handed and had normal or corrected-to-normal vision.</p>
<p>Study authors randomly divided participants into a training and a control group. The training group underwent 5 weeks of mindfulness training based on mindfulness-based stress reduction (MBSR) and cognitive therapy. They had 2 hours of training per week. </p>
<p>The goal of the training was to enhance emotion perception and emotion regulation, so the contents of each weekly training focused on the topic of emotions. The control group had two lectures on mindfulness designed to concentrate on general principles of mindfulness. Each lecture lasted 60 minutes and did not include experiential practices.</p>
<p>Participants completed assessments before the training, immediately after it, and one month after it ended. These included a mindfulness measure (the Five Facet Mindfulness Questionnaire) and a cognitive test of visual working memory for faces displaying emotions. In the cognitive test, participants first viewed two faces for one second. This was followed by a two-second blank screen (delay period), after which another face appeared.</p>
<p>Participants’ task was to indicate whether that final face was among the two initially shown. There were 48 such trials in one block. There were 5 blocks in total. All faces in one block displayed the same emotion and the emotion displayed was different in each block. The emotions the faces displayed were happy, sad, angry, fearful, and neutral.</p>
<p>The results showed that the mindfulness training resulted in improved working memory accuracy for facial stimuli across all examined emotional expressions except fear. One month after the training was finished, these improvements were still present. Participants from the training group performed better than those in the control group both immediately after the training and one month later.</p>
<p>Statistical analyses indicated that, after the training, participants processed information from the faces they viewed more efficiently when making memory decisions, regardless of the emotion displayed. The stronger this increase in processing efficiency, the more accurate participants’ memory performance became.</p>
<p>“These findings demonstrate that mindfulness training induces lasting improvements in both accuracy and processing efficiency of visual working memory, independent of facial emotions, clarifying its cognitive mechanisms,” the study authors concluded.</p>
<p>The study contributes to the scientific knowledge on the effects of mindfulness training. However, it should be noted that the study used just a single working memory task, with a single type of stimuli. It remains unknown how much the findings would generalize to different working memory tasks and to stimuli that are not faces displaying emotions.</p>
<p>The paper, “<a href="https://doi.org/10.1038/s41539-025-00389-0">Mindfulness training enhances face working memory: evidence from the drift-diffusion model,</a>” was authored by Hui Kou, Wei Luo, Xiaodong Li, Jia Wu, Qianguo Xiao, and Taiyong Bi.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/high-rates-of-screen-time-linked-to-specific-differences-in-toddler-vocabulary/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">High rates of screen time linked to specific differences in toddler vocabulary</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Feb 11th 2026, 11:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>New research published in the journal <em><a href="https://doi.org/10.1111/desc.70091" target="_blank">Developmental Science</a></em> provides evidence that the amount of time toddlers spend watching videos is associated with the specific types of words they learn, distinct from the total number of words they know. The findings indicate that higher levels of digital media consumption are linked to a vocabulary containing a smaller proportion of body part words and a larger proportion of words related to people and furniture.</p>
<p>The widespread integration of digital media into family life has prompted questions about its influence on early child development. Current estimates suggest that many children under the age of two spend roughly two hours per day interacting with screens, primarily watching videos or television. </p>
<p>Previous research has often focused on the relationship between screen time and the overall size of a child’s vocabulary. These earlier studies generally established that high exposure to low-quality programming correlates with a lower total number of words spoken by the child.</p>
<p>However, language acquisition is a multifaceted process. Children do not learn all words in the same manner. The acquisition of certain types of words relies heavily on specific environmental inputs. </p>
<p>“There is no doubt that use of digital media by young children has been on the rise in the past few years, and growing evidence suggests that this has impacts on their language learning, especially during the first few years of life,” said study author <a href="https://sarahkucker.wixsite.com/smukidlab" target="_blank">Sarah C. Kucker</a>, an assistant professor of psychology at Southern Methodist University.</p>
<p>“For instance, we know that children who watch high rates of low-quality television/videos tend to have smaller vocabularies and less advanced language skills (this is work by my own lab, but also many others such as Brushe et al., 2025; Madigan et al., 2024). However, we also know that some forms of media do not have negative effects and can, in fact, be useful for language when the media is high-quality, socially-interactive, and educational in nature (work by Sundqvist as well as Jing et al., 2024).”</p>
<p>“On top of this, we know that children’s language development and specifically their vocabulary learning is not an all-or-nothing, but rather that children learn different types of words at different times and in different ways – e.g. learning words for body parts is easier when you can touch the body part when named, and names for people (mama, dada) are learned earlier than most other nouns,” Kucker continued.</p>
<p>“When we put this together it means that we shouldn’t be looking at digital media’s influence on language as just an all-or-nothing, or blanket good-or-bad, but rather take a more nuanced look. So we did just that by looking at the types of words children are learning and the association with the time they spend with digital media.”</p>
<p>For their study, the researchers recruited 388 caregivers of children aged 17 to 30 months. This age range represents a period of rapid language expansion often referred to as the vocabulary spurt. Participants were recruited through online research platforms and in-person visits to a university laboratory. The researchers combined these groups into a single dataset for analysis.</p>
<p>Caregivers completed a comprehensive survey known as the Media Assessment Questionnaire. This instrument asked parents to report the number of minutes their child spent using various forms of technology, such as television, tablets, and video chat. </p>
<p>The researchers collected data for both typical weekdays and weekends. They used these reports to calculate a weighted daily average of screen time for each child. The data revealed that video and television viewing was the most common media activity. On average, the children in the sample watched videos for approximately 110 minutes per day.</p>
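The weighted daily average mentioned above presumably combines the weekday and weekend reports in proportion to how many of each kind of day a week contains. Assuming a 5:2 weighting (the article does not spell out the exact formula), it could be computed as:

```python
def weighted_daily_average(weekday_minutes, weekend_minutes):
    # Weight the weekday report by 5 days and the weekend report by 2 days,
    # then divide by the 7 days of the week. The 5:2 weighting is an
    # assumption for illustration; the study's exact formula is not given.
    return (5 * weekday_minutes + 2 * weekend_minutes) / 7

# e.g. 105 minutes on a typical weekday, 140 minutes on a weekend day
daily = weighted_daily_average(105, 140)  # 115.0 minutes per day
```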
<p>To measure language development, caregivers completed the MacArthur-Bates Communicative Development Inventory. This is a standardized checklist containing hundreds of words commonly learned by young children. Parents marked the words their child could say. </p>
<p>This tool allowed the researchers to calculate the total size of each child’s noun vocabulary. It also enabled them to break down the vocabulary into specific semantic categories. These categories included animals, vehicles, toys, food and drink, clothing, body parts, small household items, furniture and rooms, outside things, places to go, and people.</p>
<p>The researchers also analyzed the vocabulary data through a different lens. They classified nouns based on the features that define their categories. Specifically, they looked at shape-based nouns and material-based nouns. </p>
<p>Shape-based nouns usually refer to solid objects defined by their physical form, such as “ball” or “cup.” Material-based nouns often refer to nonsolid substances or items defined by what they are made of, such as “applesauce” or “chalk.” This distinction is significant in developmental psychology because physical handling of objects is thought to help children learn these concepts.</p>
<p>The researchers found that children with higher rates of video viewing produced a smaller proportion of body part words. In a typical toddler’s vocabulary, words like “nose,” “feet,” or “ears” are often among the first learned. However, as screen time increased, the density of these words in the child’s repertoire decreased relative to other word types.</p>
<p>In contrast, the researchers found a positive association between video time and words related to people. This category includes proper names, titles like “teacher” or “grandma,” and general terms like “baby.” Children who watched more videos tended to have a vocabulary composition that was more heavily weighted toward these social labels. </p>
<p>A similar positive association was found for the category of furniture and rooms. Heavy media users were more likely to produce words such as “couch,” “TV,” or “kitchen” relative to their peers with lower media use.</p>
<p>“While we expected that children with high media use would have fewer body part words in their vocabulary, we were surprised to find that children with high media knew relatively more people words and furniture words,” Kucker told PsyPost. “We suspect this may have to do with the content of the media highlighting those terms, or perhaps the physical context in which children are using media (e.g. while sitting on a couch or when working with mom), but the tools to capture this information are currently limited.”</p>
<p>The researchers found no significant relationship between video watching and the other semantic categories measured, such as animals, toys, or food. Additionally, the researchers found no evidence that video exposure altered the balance between shape-based and material-based nouns. The proportion of words related to solid objects versus nonsolid substances remained stable regardless of screen time habits.</p>
<p>The research highlights that the impact of digital media is not uniformly negative or positive. The findings suggest that screen time changes the landscape of early learning in specific ways.</p>
<p>“Most caregivers have heard the advice to avoid screen time with their young children,” Kucker said. “However, the reality is that that is very difficult to do 100% of the time in today’s tech-based world. What this study shows is that a high amount of low-quality videos/TV is associated with lower overall vocabulary sizes in 2-year-old children, but that videos/TV may not impact all types of words equally.”</p>
<p>“For instance, children with more video/TV time have fewer names for body parts, but seem to learn most other nouns at relatively equal levels, potentially because some videos/TV do a good job teaching children some basics.”</p>
<p>“So do try to limit children’s screen time, but don’t fret about avoiding it completely,” Kucker explained. “Instead, consider the content and context for when the media is being used and why – high-quality, educational use, or those that are social (e.g. FaceTime, Zoom), may not be detrimental as long as children are still getting rich interactive play outside of the screen.”</p>
<p>As with all research, there are some limitations to consider. The data relied on caregiver reports, which can introduce memory errors or bias. </p>
<p>The study was also cross-sectional, meaning it captured a snapshot of the children’s lives rather than following them over time. It is not possible to determine causality from this data alone. For example, it is unknown if watching videos causes the change in vocabulary or if families with different communication styles rely more on media.</p>
<p>“We are currently looking at more longitudinal impacts of digital media on children’s language over time as well as individual differences across children, such as considering personality and temperament,” Kucker noted.</p>
<p>Additionally, the study focused primarily on the duration of screen time. It did not fully capture the specific content of the videos the children watched or the nature of the interactions parents had with their children during viewing. The researchers noted that educational content and co-viewing with a parent can mitigate potential negative effects.</p>
<p>“Not all media is bad!” Kucker said. “Media’s effect on children is nuanced and interacts with the rest of their experiences. I always like to tell parents that if your child watches an educational show for a few minutes so you can have a few minutes of quiet, that may be helping you to then be a better parent later which will more than offset that few minutes of media time.” </p>
<p>“Children who get rich, social experiences are often still developing in very strong ways even if they have a bit of high-quality screen time here and there. Just considering the content and context of the media is key!”</p>
<p>“We have a lot of work left still to do and understand in this area, and much of the support for this work has come from various grants and foundations, such as NIH and NSF,” Kucker added. “Without those funding avenues, this work couldn’t be done.”</p>
<p>The study, “<a href="https://doi.org/10.1111/desc.70091" target="_blank">Videos and Vocabulary – How Digital Media Use Impacts the Types of Words Children Know</a>,” was authored by Sarah C. Kucker, Rachel F. Barr, and Lynn K. Perry.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/hippocampal-neurons-shift-their-activity-backward-in-time-to-anticipate-rewards/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Hippocampal neurons shift their activity backward in time to anticipate rewards</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Feb 11th 2026, 10:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>Recent experimental findings suggest that the hippocampus, the brain region primarily associated with memory and navigation, actively reorganizes its neural patterns to anticipate future events. Researchers observed that as mice learned to navigate a complex task, the neural signals associated with a reward shifted backward in time to predict the outcome before it happened. These results were published in the journal <em><a href="https://doi.org/10.1038/s41586-025-09958-0" target="_blank" rel="noopener">Nature</a></em>.</p>
<p>The hippocampus is a seahorse-shaped structure located deep within the temporal lobes of the brain. Neuroscientists have recognized for decades that this region is essential for forming new memories. It is also responsible for creating a cognitive map. This internal representation allows an organism to visualize its environment and navigate through space.</p>
<p>Biologists have traditionally viewed the cognitive map as a relatively static record of the environment. Under this view, the hippocampus encodes features such as landmarks, borders, and the location of resources. However, survival requires more than just a record of the past. An animal must use its prior experiences to predict where food or safety will be located in the future.</p>
<p>This necessity leads to the theory of predictive coding. This theory suggests that the brain is constantly generating models of the world to estimate future outcomes. When an outcome matches the prediction, the brain learns that its model is correct. When an outcome is unexpected, the brain must update the model.</p>
<p>While this theory is widely accepted in computational neuroscience, observing the physical reorganization of cells in the hippocampus over long periods has been a technical challenge. Most neural recording technologies can only track brain activity for short durations. This limitation makes it difficult to see how internal maps evolve as learning consolidates over weeks.</p>
<p>Mohammad Yaghoubi, a researcher at McGill University, aimed to bridge this gap. Working with senior author Mark Brandon at the Douglas Research Centre, Yaghoubi designed an experiment to track specific neurons across an extended timeframe. They sought to determine if the hippocampal map restructures itself to prioritize the prediction of rewards.</p>
<p>The research team employed a sophisticated imaging technique known as calcium imaging. They injected a modified virus into the brains of mice. This virus caused neurons to express a fluorescent protein that glows when calcium enters the cell, which happens when a neuron fires.</p>
<p>The researchers then implanted a gradient refractive index lens, a tiny microscope component, above the hippocampus. This setup allowed them to attach a miniature camera, weighing only a few grams, to the head of the mouse. The camera recorded the fluorescence of hundreds of individual neurons while the animal moved freely.</p>
<p>Because this method relies on optical imaging rather than physical electrodes, it is less invasive to the tissue over time. This stability allowed Yaghoubi and his colleagues to identify and monitor the exact same neurons day after day for several weeks. They could then correlate specific cellular activity with the animal’s behavior during learning.</p>
<p>The mice were trained to perform a task known as “delayed nonmatching-to-location” inside an automated chamber. The apparatus featured a touch-sensitive screen at one end and a reward dispenser at the other. The task required the mouse to initiate a trial and then observe a sample location lighting up on the screen.</p>
<p>After a short delay, the screen displayed the original location alongside a new, novel location. To receive a reward, the mouse had to ignore the familiar spot and touch the new location. The reward was a small amount of strawberry milkshake delivered at the opposite end of the chamber. This task is cognitively demanding because it requires the animal to hold information in working memory and apply a specific rule.</p>
<p>At the beginning of the training, the researchers noted that a distinct population of hippocampal neurons fired vigorously when the mouse received the milkshake. These cells appeared to be tuned specifically to the experience of consuming the reward. The neural map at this stage was heavily focused on the outcome itself.</p>
<p>As the mice repeated the task over weeks and their performance improved, the neural patterns began to change. The researchers observed a phenomenon described as backpropagation of neural tuning. The cells that originally fired only upon receiving the reward began to fire earlier in the sequence of events.</p>
<p>“What we found was surprising,” said Brandon. “Neural activity that initially peaked at the reward gradually shifted to earlier moments, eventually appearing before mice reached the reward.”</p>
<p>By the time the mice had mastered the task, these specific neurons were firing while the animal was still approaching the reward port. In some instances, the firing shifted all the way back to the moment the mouse made the correct choice on the touchscreen. The cells had transformed from sensors of the present reward into predictors of the future reward.</p>
<p>The study also analyzed the activity of the neuronal population as a whole. In the early stages of learning, a large percentage of the recorded cells were dedicated to encoding the reward location. This resulted in an over-representation of the reward site in the mouse’s mental map.</p>
<p>As the weeks passed, the proportion of neurons tuned to the reward itself decreased. Simultaneously, the number of neurons encoding the approach and the choice period increased. The brain appeared to be efficient. Once the reward was predictable, fewer resources were needed to represent it. The cognitive effort shifted toward the actions required to obtain it.</p>
<p>This reorganization supports the idea that the hippocampus acts as a predictive device. The backward shift in timing allows the brain to signal an upcoming event based on the current context. This predictive signal likely helps guide the animal’s behavior, reinforcing the actions that lead to a positive outcome.</p>
<p>The researchers confirmed that this shift was not due to simple changes in the animal’s speed or position. They used statistical controls to ensure that the change in firing timing was a true remapping of the cognitive representation. The consistency of the findings across multiple animals suggests a fundamental biological mechanism.</p>
<p>“The hippocampus is often described as the brain’s internal model of the world,” said Brandon. “What we are seeing is that this model is not static; it is updated day by day as the brain learns from prediction errors. As outcomes become expected, hippocampal neurons start to respond earlier as they learn what will happen next.”</p>
<p>There are limitations to the study that warrant mention. The research was conducted on mice, and while the hippocampus is evolutionarily conserved, human cognition involves additional layers of complexity. Further research is necessary to confirm if identical cellular mechanisms drive predictive learning in the human brain.</p>
<p>Additionally, the study focused on a reward-based task. It remains to be seen if the hippocampus utilizes the same predictive backpropagation for negative or aversive outcomes. Future experiments will likely investigate whether the brain rewires itself similarly to predict threats or punishments.</p>
<p>The findings may have implications for understanding neurodegenerative disorders. Individuals with Alzheimer’s disease often exhibit disorientation and difficulty learning from new experiences. If the predictive coding mechanism in the hippocampus is disrupted, it could explain why patients struggle to anticipate consequences or navigate familiar environments.</p>
<p>By demonstrating that memory circuits are dynamic and predictive, this study offers a new perspective on how the brain interacts with time. The hippocampus does not merely archive the past. It actively reconstructs it to prepare for the future.</p>
<p>The study, “<a href="https://doi.org/10.1038/s41586-025-09958-0" target="_blank" rel="noopener">Predictive Coding of Reward in the Hippocampus</a>,” was authored by Mohammad Yaghoubi, Andres Nieto-Posadas, Coralie-Anne Mosser, Thomas Gisiger, Émmanuel Wilson, Sylvain Williams, and Mark P. Brandon.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<p><strong>Forwarded by:<br />
Michael Reeder LCPC<br />
Baltimore, MD</strong></p>
<p><strong>This information is taken from free public RSS feeds published by each organization for the purpose of public distribution. Readers are linked back to the article content on each organization's website. This email is an unaffiliated unofficial redistribution of this freely provided content from the publishers. </strong></p>
<p> </p>
<p><s><small><a href='https://blogtrottr.com/unsubscribe/565/DY9DKf'>unsubscribe from this feed</a></small></s></p>