<table style="border:1px solid #adadad; background-color: #F3F1EC; color: #666666; padding:8px; -webkit-border-radius:4px; border-radius:4px; -moz-border-radius:4px; line-height:16px; margin-bottom:6px;" width="100%">
<tbody>
<tr>
<td><span style="font-family:Helvetica, sans-serif; font-size:20px;font-weight:bold;">PsyPost – Psychology News</span></td>
</tr>
<tr>
<td> </td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/users-of-generative-ai-struggle-to-accurately-assess-their-own-competence/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Users of generative AI struggle to accurately assess their own competence</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Dec 29th 2025, 08:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>New research provides evidence that using artificial intelligence to complete tasks can improve a person’s performance while simultaneously distorting their ability to assess that performance accurately. The findings indicate that while users of AI tools like ChatGPT achieve higher scores on logical reasoning tests compared to those working alone, they consistently overestimate their success by a significant margin. </p>
<p>This pattern suggests that AI assistance may disconnect a user’s perceived competence from their actual results, leading to a state of inflated confidence. The study was published in the scientific journal <em><a href="https://doi.org/10.1016/j.chb.2025.108779" target="_blank">Computers in Human Behavior</a></em>.</p>
<p>Scientists and psychologists have increasingly focused on how human cognition changes when augmented by technology. As generative AI systems become common in professional and educational settings, it is essential to understand how these tools influence metacognition. Metacognition refers to the ability of an individual to monitor and regulate their own thinking processes. It allows people to know when they are likely correct and when they might be making an error.</p>
<p>Previous psychological inquiries have established that humans generally struggle with self-assessment. A well-known phenomenon called the Dunning-Kruger effect describes how individuals with lower skills tend to overestimate their competence, while highly skilled individuals often underestimate their abilities. The authors of the current paper sought to determine if this pattern persists when humans collaborate with AI. They aimed to understand if AI acts as an equalizer that fixes these biases or if it introduces new complications to how people evaluate their work.</p>
<p>To investigate these questions, the research team designed two distinct studies centered on logical reasoning tasks. In the first study, they recruited 246 participants from the United States. These individuals were asked to complete 20 logical reasoning problems taken from the Law School Admission Test (LSAT). The researchers provided participants with a specialized web interface. This interface displayed the questions on one side and a ChatGPT interaction window on the other.</p>
<p>Participants were required to interact with the AI at least once for each question. They could ask the AI to solve the problem or explain the logic. After submitting their answers, participants estimated how many of the 20 questions they believed they had answered correctly. They also rated their confidence on a specific scale for each individual decision.</p>
<p>The results of this first study showed a clear improvement in objective performance. On average, participants using ChatGPT scored approximately three points higher than a historical control group of people who took the same test without AI assistance. The AI helped users solve problems that they likely would have missed on their own.</p>
<p>Despite this improvement in scores, the participants engaged in significant overestimation. On average, the group estimated they had answered about 17 out of 20 questions correctly. In reality, their average score was closer to 13. This represents a four-point gap between perception and reality. The data suggests that the seamless assistance provided by the AI created an illusion of competence.</p>
<p>The study also analyzed the relationship between a participant’s knowledge of AI and their self-assessment. The researchers measured “AI literacy” using a tool called the Scale for the Assessment of Non-Experts’ AI Literacy. One might expect that understanding how AI works would make a user more skeptical or accurate in their judgment. The findings indicated the opposite. Participants with higher technical understanding of AI tended to be more confident in their answers but less accurate in judging their actual performance.</p>
<p>A significant theoretical contribution of this research involves the Dunning-Kruger effect. In typical scenarios without AI, the data would show a steep slope where low performers vastly overestimate themselves and high performers do not. When participants used AI, this effect vanished. The “leveling” effect of the technology meant that overestimation became uniform across the board. Low performers and high performers alike inflated their scores by similar amounts.</p>
<p>The researchers observed that the combined performance of the human and the AI did not exceed the performance of the AI alone. The AI system, when running the test by itself, achieved a higher average score than the humans using the AI. This suggests a failure of synergy. Humans occasionally accepted incorrect advice from the AI or overrode correct advice, dragging the overall performance down below the machine’s maximum potential.</p>
<p>To ensure these findings were robust, the researchers conducted a second study. This replication involved 452 participants. The researchers split this sample into two distinct groups. One group performed the task with AI assistance, while the other group worked without any technological aid.</p>
<p>In this second experiment, the researchers introduced a monetary incentive to encourage accuracy. Participants were told they would receive a financial bonus if their estimate of their score matched their actual score. The goal was to rule out the possibility that participants were simply not trying hard enough to be self-aware.</p>
<p>The results of the second study mirrored the first. The monetary incentive did not correct the overestimation bias. The group using AI continued to perform better than the unaided group but persisted in overestimating their scores. The unaided group showed the classic Dunning-Kruger pattern, where the least skilled participants showed the most bias. The AI group again showed a uniform bias, confirming that the technology fundamentally shifts how users perceive their competence.</p>
<p>The study also utilized a measurement called the “Area Under the Curve” or AUC to judge metacognitive sensitivity. This metric determines if a person is more confident when they are right than when they are wrong. Ideally, a person should feel unsure when they make a mistake. The data showed that participants had low metacognitive sensitivity. Their confidence levels were high regardless of whether they were right or wrong on a specific question.</p>
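<p>As a concrete illustration, this kind of sensitivity measure can be computed from per-question confidence ratings and correctness: it is essentially the probability that a correct answer drew higher confidence than an incorrect one. The sketch below is a minimal, generic version of such a calculation; the numbers in it are hypothetical and it is not the study’s own analysis code.</p>
<pre><code># Minimal sketch: metacognitive sensitivity as an AUC-style statistic.
# correctness[i] is 1 if question i was answered correctly, 0 otherwise;
# confidence[i] is the self-rated confidence for that question.
# The example numbers below are hypothetical, not data from the study.

def metacognitive_auc(correctness, confidence):
    """Probability that a correct answer drew higher confidence than an
    incorrect one (ties count as 0.5). 0.5 = no sensitivity, 1.0 = perfect."""
    correct_conf = [c for ok, c in zip(correctness, confidence) if ok == 1]
    wrong_conf = [c for ok, c in zip(correctness, confidence) if ok == 0]
    if not correct_conf or not wrong_conf:
        return float("nan")  # undefined if every answer was right, or every answer wrong
    wins = 0.0
    for hi in correct_conf:
        for lo in wrong_conf:
            if hi > lo:
                wins += 1.0
            elif hi == lo:
                wins += 0.5
    return wins / (len(correct_conf) * len(wrong_conf))

# Hypothetical participant: highly confident almost everywhere, right or wrong.
correctness = [1, 1, 0, 1, 0, 1, 0, 1, 1, 0]
confidence  = [90, 85, 88, 92, 90, 80, 86, 95, 89, 91]
print(metacognitive_auc(correctness, confidence))  # ~0.52, i.e. near chance
</code></pre>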
<p>Qualitative data collected from chat logs offered additional context. The researchers noted that most participants acted as passive recipients of information. They frequently copied and pasted questions into the chat and accepted the AI’s output without significant challenge or verification. Only a small fraction of users treated the AI as a collaborative partner or a tool for double-checking their own logic.</p>
<p>The researchers discussed several potential reasons for these outcomes. One possibility is the “illusion of explanatory depth.” When an AI provides a fluent, articulate, and instant explanation, it can trick the brain into thinking the information is processed and understood more deeply than it actually is. The ease of obtaining the answer reduces the cognitive struggle usually required to solve logic puzzles, which in turn dulls the internal signals that warn a person they might be wrong.</p>
<p>As with all research, there are caveats to consider. The first study used a historical comparison group rather than a simultaneous control group, though the second study corrected this. Additionally, the task was limited to LSAT logical reasoning questions. It is possible that different types of tasks, such as creative writing or coding, might yield different metacognitive patterns.</p>
<p>The study also relied on a specific version of ChatGPT. As these models evolve and become more accurate, the dynamic between human and machine could shift. The researchers also noted that the participants were required to use the AI, which might differ from a real-world scenario where a user chooses when to consult the tool.</p>
<p>Future research directions were suggested to address these gaps. The researchers recommend investigating design changes that could force users to engage more critically. For example, an interface might require a user to explain the AI’s logic back to the system before accepting an answer. Long-term studies are also needed to see if this overconfidence fades as users become more experienced with the limitations of large language models.</p>
<p>The study, “<a href="https://doi.org/10.1016/j.chb.2025.108779" target="_blank">AI makes you smarter but none the wiser: The disconnect between performance and metacognition</a>,” was authored by Daniela Fernandes, Steeven Villa, Salla Nicholls, Otso Haavisto, Daniel Buschek, Albrecht Schmidt, Thomas Kosch, Chenxinran Shen, and Robin Welsch.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/lifelong-diet-quality-predicts-cognitive-ability-and-dementia-risk-in-older-age/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Lifelong diet quality predicts cognitive ability and dementia risk in older age</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Dec 29th 2025, 06:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>A new longitudinal analysis suggests that the quality of a person’s diet throughout their entire lifespan shares a significant link with their cognitive abilities as they age. The research indicates that individuals who maintain lower quality dietary habits from childhood into adulthood may face a higher likelihood of cognitive struggles and dementia in later years. These findings were published in <em><a href="https://doi.org/10.1016/j.cdnut.2025.107619" target="_blank">Current Developments in Nutrition</a></em>.</p>
<p>Scientists have established that diet is a modifiable risk factor for dementia and cognitive decline. However, the majority of existing research focuses on dietary habits in older adults, often after cognitive issues have already begun to surface. This leaves a gap in understanding how nutrition across the entire lifespan might influence brain health.</p>
<p>The brain changes underlying conditions like Alzheimer’s disease can begin to develop decades before memory loss becomes apparent. Consequently, researchers suspect that dietary improvements made earlier in life might be more effective at preventing neurodegeneration than changes made in old age.</p>
<p>A research team led by <a href="https://www.linkedin.com/in/kelly-c-cara/" target="_blank">Kelly C. Cara</a> of the Friedman School of Nutrition Science and Policy at Tufts University sought to map the long-term relationship between what people eat and how well their brains function over time. They aimed to determine if dietary patterns established in childhood and mid-life predict cognitive outcomes decades later.</p>
<p>The researchers analyzed data from the 1946 British Birth Cohort. This is a long-running project following individuals born in England, Scotland, and Wales during a single week in March 1946. The study provides a rare opportunity to observe health trends over nearly seventy years.</p>
<p>The final analytical sample included 3,059 participants. The team assessed dietary intake at five specific ages: 4, 36, 43, 53, and between 60 and 64 years old. At age four, dietary data was collected via recalls provided by parents or caregivers. In adulthood, participants completed food diaries recording their intake over several days.</p>
<p>To evaluate diet quality, the researchers utilized the Healthy Eating Index-2020. This scoring system measures how closely a diet aligns with the Dietary Guidelines for Americans. It assigns higher scores for the consumption of adequate components like fruits, vegetables, whole grains, dairy, and proteins.</p>
<p>The index simultaneously lowers scores for high intakes of moderation components. These include refined grains, sodium, added sugars, and saturated fats. Total scores on this index range from 0 to 100, with higher numbers indicating a healthier diet.</p>
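<p>To make the scoring logic concrete, the sketch below implements a simplified index of this kind: adequacy components add points as intake rises toward a target, while moderation components lose points as intake exceeds a limit. The component names follow the article, but the point values and cutoffs are illustrative placeholders, not the official HEI-2020 scoring standards.</p>
<pre><code># Illustrative sketch of adequacy/moderation scoring in an HEI-style diet index.
# Component names follow the article; the point values and cutoffs below are
# made-up placeholders, not the official HEI-2020 scoring standards.

ADEQUACY = {   # more is better: full points once intake reaches the target
    "fruits":       (10, 0.8),   # (max points, target intake per 1,000 kcal)
    "vegetables":   (10, 1.1),
    "whole_grains": (10, 1.5),
    "dairy":        (10, 1.3),
    "proteins":     (10, 2.5),
}
MODERATION = {  # less is better: full points at or below the limit
    "refined_grains": (10, 1.8),
    "sodium":         (10, 1.1),
    "added_sugars":   (10, 6.5),
    "saturated_fats": (10, 8.0),
}

def diet_quality_score(intake):
    """Return a 0-100 diet quality score from per-component intake densities."""
    score = 0.0
    for name, (max_pts, target) in ADEQUACY.items():
        score += max_pts * min(intake.get(name, 0.0) / target, 1.0)
    for name, (max_pts, limit) in MODERATION.items():
        amount = intake.get(name, 0.0)
        # Full points at or below the limit, scaling down to zero at twice the limit.
        score += max_pts * max(0.0, min(1.0, 2.0 - amount / limit))
    max_total = sum(p for p, _ in ADEQUACY.values()) + sum(p for p, _ in MODERATION.values())
    return 100.0 * score / max_total

example_day = {"fruits": 0.5, "vegetables": 1.2, "whole_grains": 0.4,
               "dairy": 1.0, "proteins": 2.0, "refined_grains": 2.5,
               "sodium": 1.6, "added_sugars": 9.0, "saturated_fats": 11.0}
print(round(diet_quality_score(example_day), 1))
</code></pre>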
<p>Cognitive ability was measured at seven time points: ages 8, 11, 15, 43, 53, 60-64, and 68-69. The researchers used a variety of tests appropriate for each developmental stage. In childhood, assessments focused on reading comprehension, vocabulary, and arithmetic.</p>
<p>In adulthood, the testing focus shifted to functional performance. These measures included word list recalls to test memory, visual search speed tests, and reaction time assessments. To allow for comparison across these different tests and ages, the researchers converted the results into global cognitive ability percentile ranks.</p>
<p>Using a statistical method called group-based trajectory modeling, the researchers identified distinctive trends in the data. This technique groups individuals who follow similar patterns of change over time. The analysis revealed three distinct trajectories for diet quality.</p>
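<p>As a rough illustration of the idea, the sketch below clusters simulated participants by the shape of their diet-quality scores across the five assessment ages using a Gaussian mixture model. Actual group-based trajectory modeling fits polynomial trajectories within a mixture model, so this is a simplified stand-in, and the data are simulated rather than drawn from the cohort.</p>
<pre><code># Simplified stand-in for group-based trajectory modeling: cluster people by
# the shape of their diet-quality scores across the five assessment ages.
# The data below are simulated for illustration, not the cohort's data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
ages = [4, 36, 43, 53, 62]  # assessment ages (60-64 summarized as 62)

# Simulate three underlying trajectory groups with different adult diet quality.
low  = rng.normal([48, 45, 44, 43, 42], 3, size=(300, 5))
mid  = rng.normal([49, 55, 56, 57, 58], 3, size=(500, 5))
high = rng.normal([50, 65, 67, 69, 70], 3, size=(200, 5))
scores = np.vstack([low, mid, high])

model = GaussianMixture(n_components=3, random_state=0).fit(scores)
groups = model.predict(scores)

for g in range(3):
    share = (groups == g).mean()
    mean_traj = scores[groups == g].mean(axis=0).round(1)
    print(f"group {g}: {share:.0%} of sample, mean trajectory {mean_traj}")
</code></pre>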
<p>The first group, comprising about 31 percent of the sample, followed a lower diet quality trajectory. The second group, representing about 50 percent, followed a moderate quality trajectory. The third group, making up roughly 19 percent, maintained a higher quality diet trajectory throughout life.</p>
<p>Similarly, the researchers identified four distinct trajectories for cognitive ability. These ranged from consistently lower performance to consistently higher performance relative to peers. The largest portion of the sample fell into the highest cognitive trajectory.</p>
<p>Cara and her colleagues found a clear association between these dietary and cognitive paths. Participants who followed the lowest cognitive ability trajectory were most likely to belong to the lower or moderate diet quality groups. Specifically, 58 percent of the lowest cognitive group came from the lower diet trajectory.</p>
<p>In contrast, those in the highest cognitive ability trajectory were primarily composed of individuals with moderate or higher diet quality. Only a small fraction of the high cognitive performers belonged to the low diet quality group. This suggests that maintaining a high-quality diet is common among those who maintain high cognitive function.</p>
<p>The researchers examined specific dietary components that differed between the groups. Throughout adulthood, participants in the higher cognitive trajectory tended to eat more whole fruits and whole grains. They also consumed fewer refined grains compared to their peers in lower cognitive groups.</p>
<p>At ages 53 and 60-64, the high cognitive group also showed lower sodium intake. They consumed more vegetables, specifically greens and beans. These specific food choices appear to contribute to the overall difference in diet quality scores.</p>
<p>The study also investigated the risk of dementia in later life. At age 68-69, participants completed the Addenbrooke’s Cognitive Examination-III. This is a comprehensive test used in clinical settings to screen for cognitive impairment.</p>
<p>The researchers found that 9.8 percent of participants in the lower diet quality group showed indications of likely dementia, compared with 6 percent in the moderate diet quality group and just 2.4 percent in the higher diet quality group.</p>
<p>The analysis highlighted early life factors that predicted these outcomes. Higher childhood social class was a strong predictor of being in a higher cognitive trajectory. It also predicted membership in a higher diet quality trajectory.</p>
<p>Engagement in leisure activities at age 11 also played a role. Children who participated in more intellectual and social activities were more likely to follow higher cognitive trajectories later in life. Being female was associated with a greater chance of belonging to the moderate or higher diet quality groups.</p>
<p>The researchers posit several biological mechanisms that might explain these findings. Nutrients found in high-quality diets, such as fatty acids, B vitamins, and antioxidants, are essential for brain health. These compounds support the maintenance of neurons and protect against neurodegeneration.</p>
<p>While the findings provide evidence for a link between life-course diet and cognition, the study has limitations. The research is observational, meaning it cannot definitively prove that a poor diet causes lower cognitive ability. It is possible that individuals with higher cognitive abilities are simply better equipped to make healthier food choices.</p>
<p>Another limitation involves the study population. The sample consisted entirely of individuals born in Britain in 1946. This group was not racially or ethnically diverse, limiting the generalizability of the results to other populations.</p>
<p>Additionally, dietary data was self-reported. Self-reports can be subject to memory errors or social desirability bias. The study also experienced attrition over the decades, as some participants passed away or withdrew from the research.</p>
<p>The researchers note that diet quality at age four was generally similar across all groups. Differences only began to emerge and widen in adulthood. This may be due to the lingering effects of post-war food rationing in Britain, which standardized diets during that era.</p>
<p>Despite these limitations, the study offers evidence that long-term dietary habits matter. It suggests that consistent alignment with dietary guidelines from childhood through adulthood is associated with better cognitive outcomes. This supports the idea that nutritional interventions could be a viable strategy for preserving brain health.</p>
<p>Future research is needed to confirm these trends in more diverse populations. Studies that track diet and cognition starting from very early childhood in modern contexts would be particularly beneficial. Understanding these life-course relationships is essential for developing public health strategies to combat the rising rates of dementia.</p>
<p>The study, “<a href="https://doi.org/10.1016/j.cdnut.2025.107619" target="_blank">Associations between diet quality and global cognitive ability across the life course: Longitudinal analysis of the 1946 British Birth Cohort</a>,” was authored by Kelly C. Cara, Tammy M. Scott, Mei Chung, and Paul F. Jacques.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/mental-fatigue-has-psychological-triggers-%E2%88%92-new-research-suggests-challenging-goals-can-head-it-off/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Mental fatigue has psychological triggers − new research suggests challenging goals can head it off</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Dec 28th 2025, 16:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>Do you ever feel spacey, distracted and worn down toward the end of a long work-related task – especially if that task is entirely a mental one? For over a century, psychologists have been trying to determine whether mental fatigue is fundamentally similar to physical fatigue or whether it is governed by different processes.</p>
<p>Some <a href="https://doi.org/10.1016/j.actpsy.2003.11.001">researchers have argued</a> that exerting mental effort depletes a limited supply of energy – the same way physical exertion fatigues muscles. The brain consumes <a href="https://doi.org/10.1037/0022-3514.92.2.325">energy in the form of glucose</a>, which can run low.</p>
<p><a href="https://doi.org/10.1037/h0069511">Other researchers</a> see mental fatigue as more of a psychological phenomenon. Mind-wandering means the current mental effort is not being sufficiently <a href="https://doi.org/10.1016/j.brainresrev.2008.07.001">rewarded</a> – or opportunities to do other, <a href="https://doi.org/10.1017/S0140525X12003196">more enjoyable activities are being lost</a>.</p>
<p><a href="https://scholar.google.com/citations?hl=en&user=JGWPdcMAAAAJ">My</a> <a href="https://scholar.google.com/citations?hl=en&user=1fv9jBIAAAAJ">colleagues</a> and <a href="https://scholar.google.com/citations?hl=en&user=I5HWMl8AAAAJ">I</a> have been <a href="https://doi.org/10.3758/s13414-023-02803-4">trying to resolve this question</a>. Our research suggests mental fatigue is in large part a psychological phenomenon – but one that can be modified by setting goals.</p>
<h2>Vigilance is hard to sustain</h2>
<p>We began by reviewing the science related to mental fatigue.</p>
<p>Psychologists in the World War II era studied why soldiers monitoring radar were losing focus during their shifts. Psychologist Norman Mackworth designed the “<a href="https://doi.org/10.1080/17470214808416738">clock test,</a>” in which military participants were asked to watch a large “clock” on a wall for up to two hours. The second hand ticked at regular intervals. But rarely and unpredictably, it would jump twice the usual distance. The task was to detect those tiny variations.</p>
<p>Within the first 30 minutes, the subjects’ performance dropped dramatically – and then continued to decline more gradually. Psychologists named the necessary mental focus “vigilance” – and concluded it was fundamentally limited in humans.</p>
<p><a href="https://doi.org/10.1037/0033-2909.117.2.230">Decades of research</a> since has confirmed that vigilance is difficult to maintain, even over brief intervals. In studies, people report <a href="https://doi.org/10.1177/00187208211011333">feeling stressed and fatigued</a> following even a brief vigilance task. In 2021, one study even showed a <a href="https://doi.org/10.1177/00187208211011333">reduction of blood flow through the brain</a> during vigilance.</p>
<p>My colleagues and I wondered: Are all forms of mental work like vigilance? Surely, there are instances where people can engage with mental work without feeling fatigued.</p>
<h2>Setting goals</h2>
<p>We decided to study whether <a href="https://doi.org/10.1037/mot0000127">goal-setting</a> could improve mental focus and ran <a href="https://doi.org/10.3758/s13414-023-02803-4">three experiments</a> to test this idea.</p>
<p>In the first experiment, we showed 108 undergraduate students at the University of Oregon a screen with four empty white boxes against a gray background. Every one to three seconds, an X appeared in one of the four boxes. Their task was to indicate where that symbol appeared as quickly as possible. After each response, the participant was given feedback about both their accuracy and their speed, such as “Correct! Reaction time = 400 milliseconds.”</p>
<p>Periodically during the 26-minute test, we also asked participants to rank their mental state as task-focused, distracted or mind-wandering. This gave us data about how they felt, in addition to how they did.</p>
<p>We randomly gave half of them a specific goal: Keep their reaction times under 400 milliseconds while staying as accurate as possible. We gave no goal to the other half.</p>
<p><a href="https://doi.org/10.3758/s13414-023-02803-4">Our results</a> were mixed. People who were given a goal did not experience as many slow reaction times, but having goals didn’t increase their top speed. It also didn’t change how often people reported feeling distracted.</p>
<h2>Setting increasingly harder goals</h2>
<p>We decided to tweak the test for our second experiment. Again, we randomly assigned a goal to half of the 112 fresh participants and no goal to the other half. But this time, as the experiment progressed, we increased the difficulty of the goal from a 450-millisecond reaction time to 400 milliseconds and then to 350 by the final block. Setting these harder-over-time goals had a huge effect on performance.</p>
<p>Compared with the participants assigned a set goal in the first experiment, the participants assigned increasingly more difficult goals in the second experiment had faster reaction times by an average of 45 milliseconds – about a 10% improvement. Participants in the second experiment also reported fewer instances of mind-wandering and showed no slowing of reaction times throughout the experiment. In other words, they showed no signs of mental fatigue. And we didn’t have to make the task easier. In fact, we made it harder.</p>
<p>Our first two experiments were conducted online because of shutdowns related to COVID-19. Our third study – a repeat of our second study – was conducted in person. We got the same results.</p>
<p>These findings, combined with <a href="https://doi.org/10.1037/xhp0001148">other recent work</a> we’ve conducted, have changed the way my colleagues and I consider mental fatigue. It’s clear that when people strive for specific and hard-to-reach goals, they report feeling more motivated and <a href="https://doi.org/10.1037/mac0000141">they do not report feeling as drained</a> by mental work.</p>
<p>If you’re wondering how to implement these findings in your life, make simple, direct and specific goals for yourself. Mark when you complete the goals – the feedback can help you keep going. If you’re feeling particularly drained, take short breaks. Even <a href="https://doi.org/10.1016/j.cognition.2014.10.001">brief rests</a> of less than two minutes can restore capacity for mental work.</p>
<p> </p>
<p><em>This article is republished from <a href="https://theconversation.com">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/mental-fatigue-has-psychological-triggers-new-research-suggests-challenging-goals-can-head-it-off-219057">original article</a>.</em></p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/psilocybin-shows-promise-for-rapid-reduction-of-cancer-related-depression/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Psilocybin shows promise for rapid reduction of cancer-related depression</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Dec 28th 2025, 14:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>A new analysis of clinical trial data indicates that psilocybin, when administered alongside psychotherapy, may provide rapid relief for patients suffering from cancer-related anxiety and depression. The research suggests that while the reduction in anxiety symptoms persists for at least two weeks, the antidepressant effects may diminish more quickly without specific dosing strategies. This study, which synthesized data from prior randomized controlled trials, was published in <em><a href="https://doi.org/10.1177/00912174251337572" target="_blank" rel="noopener">The International Journal of Psychiatry in Medicine</a></em>.</p>
<p>Patients facing terminal illness often experience profound psychological challenges that differ from standard mood disorders. Individuals with advanced cancer frequently struggle with major depressive disorder and prolonged grief. They also face a unique form of existential distress characterized by a loss of meaning and a fear of death.</p>
<p>Treating these conditions in a palliative care setting is notoriously difficult. Standard antidepressant medications, such as SSRIs, typically require a gradual increase in dosage over several weeks. It can take six to twelve weeks for the full therapeutic effect to manifest.</p>
<p>This delay is often untenable for patients with limited life expectancy. Additionally, patients with end-stage cancer often suffer from compromised liver or kidney function due to their disease or aggressive oncology treatments. These physical impairments make it difficult for their bodies to process daily medications safely and effectively.</p>
<p>Previous research has hinted that antidepressants offer only moderate benefits in this specific population. Consequently, there is a pressing need for treatments that act quickly and are well-tolerated by physiologically fragile patients. Damian Swieczkowski, a researcher at the LUXMED Group and the Medical University of Gdansk in Poland, led a team to investigate a potential alternative.</p>
<p>The researchers focused on psilocybin, a compound found in certain mushrooms that acts on the serotonin receptors in the brain. They sought to evaluate its efficacy and speed of action specifically for patients with a diagnosis of life-threatening cancer. The team also aimed to determine which dosage levels provided the best outcomes.</p>
<p>To accomplish this, the investigators utilized a network meta-analysis to synthesize findings from separate studies. This statistical technique allows researchers to estimate how different treatments compare to one another, even if they were never tested head-to-head in the same experiment. By combining these independent results, the team could evaluate the relative effectiveness of varying psilocybin dosages against a placebo.</p>
<p>They performed a comprehensive search of major medical databases for randomized controlled trials. The researchers applied strict criteria to ensure the quality of their data. They only included studies involving adult patients with life-threatening cancer and clinically verified symptoms of depression or anxiety. The trials had to be randomized and include a placebo control group.</p>
<p>Ultimately, the analysis incorporated data from two rigorous clinical trials conducted in the United States. These studies included defined measurements of depression and anxiety at specific time points. The team looked at outcomes one day after treatment and again at the two-week mark.</p>
<p>In the included trials, psilocybin was not taken in isolation. The drug administration occurred within a structured psychotherapeutic setting. Participants underwent preparatory sessions to build trust with their therapists.</p>
<p>The dosing sessions took place in a supportive environment with trained counselors present. Afterward, patients engaged in integration sessions to discuss and contextualize their psychedelic experiences. This combination of drug and therapy is central to the treatment model.</p>
<p>The analysis revealed that psilocybin produced a rapid therapeutic effect. On the first day following treatment, patients who received psilocybin showed a reduction in depression scores compared to those who received a placebo. This immediate response stands in contrast to the slow onset of conventional antidepressants.</p>
<p>The researchers also observed a reduction in anxiety levels on the first day. This suggests that the treatment targets the intense emotional distress often felt by those with a terminal diagnosis. The magnitude of symptom relief was substantial in the immediate aftermath of the session.</p>
<p>The team then examined whether these benefits were sustained over time. At the two-week follow-up, the reduction in anxiety remained statistically distinct from the placebo group. Patients continued to report lower states of anxiety and fewer anxious personality traits.</p>
<p>However, the findings regarding depression at the two-week mark were less definitive. While the psilocybin group’s depression scores remained numerically lower than those of the placebo group, the difference was not statistically significant. This indicates that the antidepressant effect might be more transient than the anxiolytic effect.</p>
<p>The lack of statistical significance at two weeks suggests that patients’ responses to the treatment vary over time. It also suggests that the initial relief from depression might require reinforcement, potentially through additional therapeutic integration or dosing adjustments.</p>
<p>The study also provided a ranking of different dosages based on their likelihood of success. The researchers compared a moderate dose of 0.2 milligrams per kilogram of body weight against a higher dose of 0.3 milligrams per kilogram. They used a statistical calculation to determine the probability of each dose being the most effective.</p>
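<p>The sketch below illustrates the general idea behind such a ranking: given simulated draws of each treatment’s effect, one can count how often each arm comes out on top. The effect distributions used here are invented for illustration and are not values from the meta-analysis.</p>
<pre><code># Sketch of ranking treatments by "probability of being best", the kind of
# statistic used to rank doses in a network meta-analysis. The effect samples
# below are invented for illustration, not values from this study.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # simulated draws of each arm's symptom reduction

# Hypothetical effect distributions (larger = greater symptom reduction).
draws = {
    "placebo":              rng.normal(0.0, 1.0, n),
    "psilocybin 0.2 mg/kg": rng.normal(1.2, 1.0, n),
    "psilocybin 0.3 mg/kg": rng.normal(1.6, 1.0, n),
}

effects = np.column_stack(list(draws.values()))
best = effects.argmax(axis=1)  # which arm wins in each simulated draw

for i, name in enumerate(draws):
    print(f"P({name} is most effective) = {(best == i).mean():.2f}")
</code></pre>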
<p>The analysis identified the higher dose of 0.3 milligrams per kilogram as the most effective option. This dosage ranked highest for reducing both depression and anxiety scores on the first day. It also maintained the highest ranking for anxiety reduction at the two-week point.</p>
<p>The lower dose was also effective compared to the placebo but generally performed less well than the higher dose. This finding hints at a dose-response relationship, where a sufficient amount of the compound is necessary to achieve maximum clinical benefit. The results support the idea that dosing protocols in future trials should consider these higher levels.</p>
<p>The researchers proposed biological mechanisms that might explain these effects. Psilocybin is known to alter connectivity in the default mode network of the brain. This network is often overactive in depression and is associated with rumination and rigid negative thinking.</p>
<p>By temporarily disrupting this network, psilocybin may allow for increased neuroplasticity and emotional processing. This “reset” could help terminally ill patients break out of cycles of despair and existential dread. The therapy component then helps them make sense of this shift in perspective.</p>
<p>Despite the promising results, the authors highlighted several limitations that require attention. The primary constraint is the small amount of data available. The analysis relied on only two trials with a relatively small number of total participants.</p>
<p>This small sample size limits the ability to make broad generalizations. The specific finding that the higher dose was superior relies on limited direct comparisons. Larger trials are needed to confirm these dosing recommendations with greater certainty.</p>
<p>Another challenge inherent in this type of research is the issue of blinding. In a typical drug trial, participants do not know if they received the active medication or a placebo. With psychedelics, the intense psychoactive effects make it obvious to most participants which group they are in.</p>
<p>This “functional unblinding” can introduce bias. Patients who know they took the active drug may expect to feel better, which can skew their self-reported scores on mood questionnaires. Conversely, those who realize they took the placebo may feel disappointed, potentially worsening their reported symptoms.</p>
<p>The authors also noted that the trials relied on self-reported scales for depression and anxiety. While these are standard tools, they are subjective. Future studies would benefit from including assessments made by clinicians to provide a more objective measure of improvement.</p>
<p>The authors emphasized that these findings are provisional. They serve as a guide for future research rather than a final verdict on clinical practice. The results highlight the potential for psilocybin to serve as a short-term, add-on treatment for palliative care.</p>
<p>For patients who cannot wait weeks for relief, this therapy could offer a bridge to stability. It may be particularly useful for managing acute existential crises. However, the transient nature of the antidepressant effect suggests that long-term management strategies still need development.</p>
<p>Future research must focus on larger, more diverse patient populations. Researchers need to explore how different types of cancer or psychiatric diagnoses might influence the response to treatment. Establishing standardized protocols for both dosing and psychological support will be essential.</p>
<p>The study, “<a href="https://doi.org/10.1177/00912174251337572" target="_blank" rel="noopener">Psilocybin-assisted psychotherapy as a rapid-acting treatment for cancer-related depression and anxiety: Evidence from a network meta-analysis</a>,” was authored by Damian Swieczkowski, Aleksander Kwaśny, Michal Pruc, Zuzanna Gaca, Lukasz Szarpak, and Wiesław J. Cubała.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/happiness-maximization-appears-to-be-a-culturally-specific-preference/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Happiness maximization appears to be a culturally specific preference</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Dec 28th 2025, 12:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>A comprehensive new study suggests that the intense societal focus on maximizing personal happiness is not a universal human aspiration but rather a specific cultural preference. The findings indicate that the drive to pursue positive emotions above all else is largely confined to Western, Educated, Industrialized, Rich, and Democratic (WEIRD) populations. This research was published in the journal <em><a href="https://doi.org/10.1177/17456916231208367" target="_blank" rel="noopener">Perspectives on Psychological Science</a></em>.</p>
<p>Psychology has historically operated under the assumption that certain human motivations are consistent across the globe. One of the most pervasive of these assumptions is that all people strive to maximize their happiness. This belief underpins many Western therapeutic practices, self-help ideologies, and economic policies.</p>
<p>The research team sought to investigate whether this pursuit is truly fundamental to human nature or if it is a product of specific cultural environments. They aimed to determine if the prioritization of happiness varies systematically according to a country’s cultural background.</p>
<p>The study was conducted by a large international consortium of researchers led by Kuba Krys of the Polish Academy of Sciences. The team included dozens of collaborators from institutions around the world, allowing for a truly global perspective. They recognized that previous research had been heavily skewed toward data collected in North America and Western Europe. By expanding the scope of inquiry, they hoped to identify the boundaries of Western psychological models.</p>
<p>To test their hypothesis, the researchers collected data from 13,546 participants. These individuals were drawn from 61 different countries, ensuring a diverse representation of global cultures. The sample included nations from Europe, Asia, Africa, South America, and North America. The researchers used a series of standardized surveys to assess how much participants valued the pursuit of happiness.</p>
<p>The primary measure focused on the concept of happiness maximization. Participants responded to statements designed to gauge their desire to experience the highest possible levels of positive emotion. The researchers asked individuals to rate how much they agreed with the idea that happiness is the most important goal in life. They also measured how much participants believed that one should strive to be happy at every moment.</p>
<p>In addition to measuring attitudes toward happiness, the researchers calculated a “WEIRD distance” score for each country. This statistical metric represented how culturally distinct a given nation was from the United States. The United States was used as the reference point because it is often considered the archetype of a WEIRD society. The score took into account factors such as individualism, religious values, and democratic history.</p>
<p>The researchers analyzed the relationship between a country’s WEIRD distance and its average endorsement of happiness maximization. The data revealed a clear and strong pattern. Countries that were culturally similar to the United States tended to place a much higher value on maximizing happiness. In these societies, the pursuit of positive emotion was frequently viewed as a central purpose of existence.</p>
<p>In contrast, participants from countries with greater cultural distance from the United States showed significantly less interest in happiness maximization. In many non-WEIRD regions, the ideal life was not necessarily defined by constant positive affect. The data suggests that for a large portion of the world’s population, other values compete with or supersede the desire for personal happiness. These alternative values might include social harmony, family duty, or the ability to withstand hardship.</p>
<p>The study provides evidence that the concept of happiness itself may be understood differently across cultures. In WEIRD societies, happiness is often viewed as a personal achievement and a sign of a successful life. It is frequently associated with high arousal states, such as excitement and enthusiasm. Individuals in these cultures often feel pressure to cultivate and display these emotions.</p>
<p>However, the findings indicate that in many non-Western contexts, happiness is viewed with more caution. Some cultures perceive happiness as a fleeting state that is outside of one’s control. Others may view the active pursuit of happiness as selfish or potentially disruptive to social relationships. In certain cultural frameworks, a state of balance or peace is preferred over the maximization of joy.</p>
<p>The researchers found that the correlation between happiness maximization and the WEIRD index was robust even when controlling for other variables. Factors such as the age and gender of participants did not explain away the cultural differences. The pattern held true across various regions, reinforcing the idea that the drive to maximize happiness is a distinct feature of Western modernization.</p>
<p>This research challenges the universality of positive psychology interventions that focus solely on increasing happiness. It suggests that applying Western models of well-being to non-Western populations may be inappropriate or ineffective. Mental health professionals working in diverse contexts may need to reconsider whether maximizing positive emotion is always a valid therapeutic goal.</p>
<p>The study also highlights the potential downsides of the Western obsession with happiness. In societies that emphasize happiness maximization, individuals who fail to feel happy may experience a sense of personal failure. This pressure can paradoxically lead to lower levels of well-being. By contrast, cultures that do not frame happiness as a constant imperative may offer their members a buffer against this specific type of distress.</p>
<p>There are some limitations to the study that should be noted. The data relies on self-reported survey responses, which can be subject to translation issues and cultural differences in how people interpret questions. Additionally, while the sample was large and diverse, it consisted primarily of university students. Students may be more Westernized than the general population in their respective countries, which could affect the generalizability of the results.</p>
<p>Future research could address these limitations by including more representative samples from outside the university system. It would also be beneficial to use experimental methods to see how these cultural values influence behavior in real-time. Longitudinal studies could help determine if the value placed on happiness changes as non-Western countries become more economically developed.</p>
<p>The study implies that the definition of a “good life” is far more variable than previously thought. It encourages scholars and the public to broaden their understanding of human motivation beyond the Western paradigm. Recognizing that happiness maximization is a cultural artifact rather than a biological mandate allows for a more inclusive approach to understanding human well-being.</p>
<p>The study, “<a href="https://doi.org/10.1177/17456916231208367" target="_blank" rel="noopener">Happiness Maximization Is a WEIRD Way of Living</a>,” was authored by Kuba Krys, Olga Kostoula, Wijnand A. P. van Tilburg, Oriana Mosca, J. Hannah Lee, Fridanna Maricchiolo, Aleksandra Kosiarczyk, Agata Kocimska-Bortnowska, Claudio Torres, Hidefumi Hitokoto, Kongmeng Liew, Michael H. Bond, Vivian Miu-Chi Lun, Vivian L. Vignoles, John M. Zelenski, Brian W. Haas, Joonha Park, Christin-Melanie Vauclair, Anna Kwiatkowska, Marta Roczniewska, Nina Witoszek, İdil Işık, Natasza Kosakowska-Berezecka, Alejandra Domínguez-Espinosa, June Chun Yeung, Maciej Górski, Mladen Adamovic, Isabelle Albert, Vassilis Pavlopoulos, Márta Fülöp, David Sirlopu, Ayu Okvitawanli, Diana Boer, Julien Teyssier, Arina Malyonova, Alin Gavreliuc, Ursula Serdarevich, Charity S. Akotia, Lily Appoh, D. M. Arévalo Mira, Arno Baltin, Patrick Denoux, Carla Sofia Esteves, Vladimer Gamsakhurdia, Ragna B. Garðarsdóttir, David O. Igbokwe, Eric R. Igou, Natalia Kascakova, Lucie Klůzová Kračmárová, Nicole Kronberger, Pablo Eduardo Barrientos, Tamara Mohorić, Elke Murdock, Nur Fariza Mustaffa, Martin Nader, Azar Nadi, Yvette van Osch, Zoran Pavlović, Iva Poláčková Šolcová, Muhammad Rizwan, Vladyslav Romashov, Espen Røysamb, Ruta Sargautyte, Beate Schwarz, Lenka Selecká, Heyla A. Selim, Maria Stogianni, Chien-Ru Sun, Agnieszka Wojtczuk-Turek, Cai Xing, and Yukiko Uchida.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/listing-gaming-on-your-resume-might-hurt-your-job-prospects/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Listing gaming on your resume might hurt your job prospects</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Dec 28th 2025, 10:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>An experiment in Germany examining the effects of extracurricular activities listed on job applicants’ resumes found that a fictional applicant who listed gaming as an extracurricular activity tended to be rated lower in hireability than an otherwise identical applicant who listed volleyball. The research was published in the <a href="https://dx.doi.org/10.1027/1866-5888/a000376"><em>Journal of Personnel Psychology</em></a>.</p>
<p>Gaming is the activity of engaging with digital games for entertainment, competition, creativity, or social interaction across various platforms. It requires a range of skills, including problem-solving, strategic thinking, hand–eye coordination, rapid decision-making, and the ability to learn complex rule systems. Many games also demand social skills such as teamwork, communication, leadership, and conflict management, especially in multiplayer environments.</p>
<p>Employers historically tended to dismiss gaming skills as irrelevant, particularly when they were not formally certified or connected to education. However, this perception is changing, especially in fields such as IT, engineering, design, data analysis, and project management, where transferable skills from gaming are more readily acknowledged.</p>
<p>Some employers now value gaming-related competencies like systems thinking, adaptability, and collaboration under pressure. Despite this, gaming skills require careful framing in professional contexts as they still tend to be undervalued and even looked down upon by many.</p>
<p>Study author Johannes M. Basch and his colleagues wanted to explore how the gaming skills of job applicants influence their perceived hireability and resume quality evaluations during the preselection of applicants. They conducted a study in which they contrasted gaming and participation in a team sport listed as extracurricular activities on job resumes. They compared them at two different proficiency levels: neutral/average and high.</p>
<p>Study participants were 162 individuals recruited in Germany via posts on social media. Their average age was 32 years; 64% were women, and 38% held a bachelor’s degree, a master’s degree, or a PhD. Only 4% indicated having prior experience as a hiring manager.</p>
<p>Participants were randomly divided into four groups. They were instructed to adopt the perspective of a hiring manager whose task was to evaluate a fictitious applicant. They then viewed a job advertisement from a fictitious organization seeking a suitable person for the position of a customer service advisor. The advertisement explicitly listed the skills required of the applicant and outlined the responsibilities associated with the role.</p>
<p>Each participant was then given one candidate resume to read, which varied depending on their group, and afterward answered questions about the applicant’s hireability and the quality of the resume. Among other information, the resume listed jogging as an extracurricular activity and, depending on the participant’s group, either “volleyball” or “gaming” as the second activity. In the neutral/average skill level conditions, only the name of the activity was listed.</p>
<p>In the two high-proficiency skill conditions, the resume for participants in one group stated that the applicant was a diagonal attacker in the third national volleyball league and a team captain. In the other group, the applicant competed within the Prime League in the game League of Legends. The Prime League is the official German-language League of Legends league sanctioned by Riot Games, the publisher of League of Legends.</p>
<p>Results showed that the (fictional) applicant listing gaming as an extracurricular activity was rated lower in hireability compared to the applicant who listed volleyball. This was the case at both proficiency levels.</p>
<p>“This study can be seen as a first step in investigating the role of gaming skills in the preselection of candidates, with future research needed to pick up the limitations of our study and to examine whether these effects vary across different job sectors, job requirements, and organizations,” the study authors concluded.</p>
<p>The study contributes to the scientific understanding of how gaming and gamers are perceived in job-related contexts. However, the study authors note that the wording of their fictional job advertisements might have directed participants to put greater emphasis on interpersonal skills than on computer-related skills. This might have made volleyball, as a team sport that requires interpersonal communication skills, look more relevant for the job position than the study authors initially intended.</p>
<p>The paper, “<a href="https://dx.doi.org/10.1027/1866-5888/a000376">Game Over or Game Changer? The Impact of Applicants’ Gaming Skills on Their Hirability,</a>” was authored by Johannes M. Basch, Marie L. Ohlms, and Maria Hepfengraber.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<p><strong>Forwarded by:<br />
Michael Reeder LCPC<br />
Baltimore, MD</strong></p>
<p><strong>This information is taken from free public RSS feeds published by each organization for the purpose of public distribution. Readers are linked back to the article content on each organization's website. This email is an unaffiliated unofficial redistribution of this freely provided content from the publishers. </strong></p>
<p> </p>
<p><small><a href="https://blogtrottr.com/unsubscribe/565/DY9DKf">unsubscribe from this feed</a></small></p>