<table style="border:1px solid #adadad; background-color: #F3F1EC; color: #666666; padding:8px; -webkit-border-radius:4px; border-radius:4px; -moz-border-radius:4px; line-height:16px; margin-bottom:6px;" width="100%">
<tbody>
<tr>
<td><span style="font-family:Helvetica, sans-serif; font-size:20px;font-weight:bold;">PsyPost – Psychology News</span></td>
</tr>
<tr>
<td> </td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/evolution-may-have-capped-human-brain-size-to-balance-energy-costs-and-survival/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Evolution may have capped human brain size to balance energy costs and survival</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Aug 24th 2025, 10:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>Brain growth slowed down about 300,000 years ago due to energetic and climate pressures, according to a study published in <a href="https://doi.org/10.1016/j.bandc.2025.106336"><em>Brain & Cognition</em></a>.</p>
<p>One of the puzzles of human evolution is why <em>Homo sapiens</em> is the only surviving species within the Homo lineage. Larger brains have often been seen as a key advantage, enabling fire use, tool-making, and symbolic communication. However, big brains also come at a cost—they consume around 20% of our resting energy and produce considerable heat, which can be a liability in <a href="https://www.psypost.org/new-research-links-climate-change-to-shrinking-brain-size-in-modern-humans/">warmer climates</a>.</p>
<p>Study author Jeffrey M. Stibel examined this evolutionary trade-off. Earlier Homo species experienced strong selection for larger brains, which likely helped them navigate shifting environments and complex social worlds. But fossil evidence suggests that in the past 100,000 years, brain size began to plateau <a href="https://www.psypost.org/brains-are-getting-smaller-in-modern-humans/">or even shrink</a>, raising the possibility that survival depended not just on biology but also on cultural innovations.</p>
<p>“I am drawn to broad questions about human evolution and cognition—how we became who we are and what forces shaped that path. The slowdown in brain growth around 100,000 years ago was a striking pivot,” said Stibel, a trustee of the Natural History Museum and Tufts University.</p>
<p>“It raised a puzzle: if larger brains were no longer being strongly selected for, how did we manage to expand our cognitive reach? That question sits right at the crossroads of biology, culture, and survival.”</p>
<p>Stibel analyzed 800 cranial capacity measurements from across the Homo genus. These included specimens of <em>H. erectus</em>, <em>H. heidelbergensis</em>, <em>H. neanderthalensis</em>, and <em>H. sapiens</em>, among others, totaling 690 modern humans and 99 non-modern Homo individuals. Juvenile or deformed skulls were excluded to ensure reliable adult estimates.</p>
<p><strong><em><a href="https://www.psypost.org/psypost-newsletter/" target="_blank">Stay informed with the latest psychology and neuroscience research—sign up for PsyPost’s newsletter and get new discoveries delivered straight to your inbox.</a></em></strong></p>
<p>The brain size estimates were derived using a regression formula validated across 27 primate species. Fossils were grouped into 100,000-year bins, with additional analyses based on major climate phases (glacial vs. interglacial) determined by global isotope records.</p>
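<p>As a rough illustration of how this kind of binning analysis can work (a minimal sketch only, not the study’s actual code; the column names and example values are hypothetical), cranial capacity estimates could be summarized by 100,000-year interval and climate phase like so:</p>
<pre><code># Illustrative sketch only: not the study's code. Column names and example
# values ("age_years", "cranial_capacity_cc", "climate_phase") are hypothetical.
import pandas as pd

def summarize_brain_size(fossils: pd.DataFrame) -> pd.DataFrame:
    """Bin specimens into 100,000-year intervals and summarize cranial capacity."""
    binned = fossils.copy()
    # Assign each specimen to a 100,000-year bin (expressed in thousands of years).
    binned["age_bin_kyr"] = (binned["age_years"] // 100_000) * 100
    # Mean cranial capacity per time bin and climate phase (glacial vs. interglacial).
    return (
        binned.groupby(["age_bin_kyr", "climate_phase"])["cranial_capacity_cc"]
        .agg(["count", "mean", "std"])
        .reset_index()
    )

# Example with made-up specimens:
df = pd.DataFrame({
    "age_years": [950_000, 420_000, 180_000, 95_000, 40_000],
    "cranial_capacity_cc": [900, 1200, 1400, 1500, 1450],
    "climate_phase": ["glacial", "interglacial", "glacial", "interglacial", "glacial"],
})
print(summarize_brain_size(df))
</code></pre>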
<p>“Around 100,000 years ago, the evolutionary pressure for bigger brains eased in humans. Groups that didn’t adapt cognitively may have disappeared, possibly explaining why some species went extinct. In contrast, those that survived found new ways to keep improving thinking power despite smaller brains,” explained Stibel.</p>
<p>“The primary adaptation appears to be <em>cognitive offloading</em>—shifting mental effort into tools, language, and shared cultural systems. These tools expanded our minds beyond our brains, making us extraordinarily capable but also deeply dependent on the systems we’ve built.”</p>
<p>Brain size increased significantly during the Early and Middle Pleistocene, but growth slowed after about 300,000 years ago. The peak in brain mass occurred roughly 100,000 years ago, with little evidence of further directional growth afterward. Instead, the data suggest a shift toward stabilizing selection, where maintaining brain size, rather than expanding it, became advantageous.</p>
<p>Climate played a critical role. Significant differences in brain size between glacial and interglacial periods emerged only in the last 100,000 years. Brains were larger during glacial phases and smaller during interglacials, suggesting that warmer climates amplified the metabolic and thermoregulatory costs of sustaining large brains. This shift may have heightened extinction risks for some Homo populations while favoring adaptations such as increased brain efficiency and reliance on cultural “cognitive offloading.”</p>
<p>“We still need to understand the fine-grained dynamics of this cognitive shift,” Stibel said. “Did cognitive offloading make certain populations more resilient to environmental shocks? How did these adaptations spread among early humans? And today, are we crossing similar thresholds with digital technology, where our survival hinges less on individual brainpower and more on the stability of vast cultural and technological networks?”</p>
<p>“This isn’t just ancient history—it’s an ongoing evolutionary story. We are the only species capable of anticipating long-term consequences and acting to change them. By understanding the moment when our minds began living outside of our skulls, we can better prepare for the next chapters in human evolution, including the rise of artificial intelligence.”</p>
<p>The study, “<a href="https://doi.org/10.1016/j.bandc.2025.106336">Did increasing brain size place early humans at risk of extinction?</a>” was authored by Jeffrey M. Stibel.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/cannabidiol-shows-potential-to-reverse-some-neuropsychological-effects-of-social-stress/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Cannabidiol shows potential to reverse some neuropsychological effects of social stress</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Aug 24th 2025, 08:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>A new study published in <em><a href="https://doi.org/10.1016/j.neuropharm.2025.110630" target="_blank">Neuropharmacology</a></em> provides evidence that cannabidiol, a non-psychoactive compound found in cannabis, may help buffer against the lasting psychological and neurobiological impacts of social stress. Researchers in Spain and Germany found that cannabidiol reduced social avoidance and sensitivity to cocaine in male mice previously exposed to repeated social defeat during adolescence. Cannabidiol also reversed several stress-induced changes in brain gene expression related to serotonin, the endocannabinoid system, and the hypothalamic-pituitary-adrenal axis.</p>
<p>Cannabidiol, commonly known as CBD, is one of the primary compounds found in the Cannabis sativa plant. Unlike delta-9-tetrahydrocannabinol (THC), cannabidiol does not produce intoxicating effects. Over the past decade, it has been widely studied for its potential therapeutic properties, including anti-anxiety, antidepressant, anti-inflammatory, and anti-addiction effects. Importantly, cannabidiol influences multiple biological systems, including those involved in stress regulation, mood, and reward processing.</p>
<p>Stress during adolescence is known to increase vulnerability to a wide range of mental health conditions, including anxiety, depression, and substance use disorders. One form of stress, social defeat, models repeated negative social interactions such as bullying or subordination and has been shown to induce lasting behavioral and neurobiological changes in rodents.</p>
<p>“Social stress (e.g., bullying) is one of the factors that contributes to the development of mental disorders and drug use. Given that cannabidiol, a compound found in the cannabis plant, has anxiolytic and antidepressant potential, we believe it could be effective in reversing the negative consequences of stress on mental health and cocaine use,” said study author Maria A. Aguilar, a professor of psychobiology and director of <a href="https://www.uv.es/uvweb/psychobiology-department/en/research/research-groups/neurobehavioral-mechanisms-endophenotypes-addictive-behavior/presentation-1285859653957.html" target="_blank" rel="noopener">the Neurobehavioral Mechanisms and Endophenotypes of Addictive Behavior Lab</a> at the University of Valencia.</p>
<p>Previous research by the study authors demonstrated that intermittent social defeat during adolescence leads to anxiety- and depression-like behaviors in mice and increases sensitivity to the rewarding effects of drugs like cocaine. However, the biological mechanisms underlying this vulnerability—and whether it can be prevented—remain unclear. The present study sought to test whether cannabidiol could mitigate these behavioral changes and normalize stress-related alterations in brain gene expression.</p>
<p>The new study was divided into two main experiments. In both, male mice were exposed to four episodes of social defeat during late adolescence. In each episode, a test mouse (the intruder) was placed in a cage with a larger, aggressive resident mouse, leading to repeated instances of defeat and submission. This procedure aimed to model intermittent social stress, known to mimic aspects of chronic psychosocial stress in humans.</p>
<p><strong><em><a href="https://www.psypost.org/psypost-newsletter/" target="_blank">Stay informed with the latest psychology and neuroscience research—sign up for PsyPost’s newsletter and get new discoveries delivered straight to your inbox.</a></em></strong></p>
<p>Before each social defeat episode, mice were injected with either a low dose (30 mg/kg) or high dose (60 mg/kg) of cannabidiol. Control groups included mice that were not exposed to stress and mice that received no cannabidiol.</p>
<p>In the first experiment, the researchers assessed behavioral outcomes both shortly after the stress period and several weeks later. These included tests of anxiety-like behavior, social interaction, memory, and depression-like responses, as well as sensitivity to cocaine in a conditioned place preference paradigm—a method for assessing the rewarding effects of drugs.</p>
<p>In the second experiment, the researchers examined the expression of several genes in brain regions implicated in stress and reward, including the serotonin transporter, corticotrophin-releasing factor, and cannabinoid receptors.</p>
<p>Exposure to social defeat produced significant behavioral changes. Stressed mice showed increased anxiety in the elevated plus maze and reduced social interaction—two indicators commonly used to model human anxiety and social withdrawal. They also developed a stronger preference for environments associated with cocaine, suggesting enhanced vulnerability to drug reward.</p>
<p>Cannabidiol had mixed effects. It did not appear to alleviate anxiety-like behavior in the elevated plus maze. “Contrary to expectations, treatment with cannabidiol did not reverse the anxiogenic effects of social defeat,” Aguilar told PsyPost. “This could be due to the doses used in our study.”</p>
<p>But the researchers found that cannabidiol reversed the reduction in social interaction. It also blocked the enhanced preference for cocaine observed in stressed mice. This suggests that cannabidiol may reduce both social withdrawal and susceptibility to the rewarding effects of addictive drugs following stress.</p>
<p>The researchers also found that social defeat led to widespread changes in brain gene expression, many of which were normalized by cannabidiol. Specifically, social stress reduced the expression of genes involved in the serotonin system, endocannabinoid receptors, and components of the hypothalamic-pituitary-adrenal axis—the body’s central stress response system.</p>
<p>For instance, mice exposed to social defeat had lower levels of the serotonin transporter in the dorsal raphe nucleus, lower levels of CB1 and CB2 cannabinoid receptors in the hippocampus, and reduced expression of corticotrophin-releasing factor and proopiomelanocortin in the hypothalamus. These molecules are all involved in regulating mood, stress response, and emotional memory.</p>
<p>Cannabidiol tended to reverse these changes, although the effects varied depending on the dose. For example, the higher dose increased CB1 receptor expression in both stressed and non-stressed animals, while the lower dose selectively increased CB2 receptor expression in stressed mice. Similarly, cannabidiol restored serotonin transporter levels at the higher dose but not at the lower one, which paradoxically further reduced expression.</p>
<p>One exception was the glucocorticoid receptor (NR3C1), which was upregulated by stress but not altered by cannabidiol. This suggests that some components of the stress response may remain elevated even when others are normalized.</p>
<p>“Using an animal model of stress, defeat in a social encounter with a more aggressive animal, we have observed that stressed mice show depression- and anxiety-like symptoms, as well as greater sensitivity to the rewarding effects of cocaine,” Aguilar said. “The administration of cannabidiol before each social defeat reverses social avoidance, one of the main symptoms of depression in humans. Furthermore, stressed mice treated with cannabidiol are no more sensitive to cocaine, which would reduce their vulnerability to developing addiction.”</p>
<p>The findings indicate that cannabidiol may have protective effects against certain behavioral and neurobiological consequences of adolescent social stress. By reversing deficits in social interaction and preventing increased sensitivity to cocaine reward, cannabidiol shows promise as a potential intervention for stress-related risk factors in addiction and mood disorders.</p>
<p>Importantly, the study links these behavioral effects to measurable changes in the brain’s stress and reward systems. The involvement of the serotonin transporter, endocannabinoid receptors, and hypothalamic stress hormones suggests that cannabidiol interacts with multiple overlapping pathways. The fact that different doses had different effects on gene expression points to a need for careful dose optimization in future therapeutic studies.</p>
<p>“An important contribution of our study is the demonstration that exposure to social stress due to defeat induces brain alterations in the expression of genes related to the hypothalamic-pituitary-adrenal axis (the stress response system), serotonin, and receptors of the endocannabinoid system (receptors that bind both the cannabinoids produced by our brain and the cannabinoids present in the cannabis plant),” Aguilar said. “Increasing our knowledge of the neurobiological substrates of the effects of stress may contribute to the development of new drugs for its treatment.”</p>
<p>While the study provides important preclinical evidence, it also has some limitations. The behavioral and molecular analyses were conducted in separate groups of mice, making it impossible to directly link individual behavioral outcomes with specific gene expression profiles. Only male mice were used, leaving open questions about how these effects might differ in females. In addition, the researchers note that cannabidiol often produces dose-dependent and sometimes opposing effects, a finding that complicates clinical translation.</p>
<p>“As we mentioned earlier, our study only evaluated the effects of two doses of cannabidiol, so it is possible that different doses would have yielded different results,” Aguilar noted. “Another limitation is that the study was conducted on mice, so the results cannot be directly extrapolated to humans.”</p>
<p>The research team plans to test whether cannabidiol produces similar effects in female mice. They are also exploring other treatments to determine whether these can also reverse the negative consequences of adolescent social stress.</p>
<p>“The next step in our research will be to assess whether cannabidiol also reverses the effects of social stress in female mice,” Aguilar explained. “There are significant sex differences in the effects of stress as well as in the development of mental disorders and drug addiction. We also intend to evaluate the potential of other pharmacological treatments or environmental interventions, such as physical activity, to reverse the negative consequences of social stress.”</p>
<p>“Our work suggests that cannabidiol could be effective in treating stress-induced mental disorders, including drug use. However, this potential must be verified in clinical studies with humans.”</p>
<p>The study, “<a href="https://doi.org/10.1016/j.neuropharm.2025.110630" target="_blank">Cannabidiol prevents social avoidance, potentiation of cocaine reward and gene expression alterations induced by exposure to intermittent social defeat in mice</a>,” was authored by Maria Ángeles Martínez-Caballero, Daniela Navarro, Claudia Calpe-López, Abraham B. Torregrosa, Maria Pilar García-Pardo, Jorge Manzanares, and Maria Asunción Aguilar.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/top-ai-models-fail-spectacularly-when-faced-with-slightly-altered-medical-questions/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Top AI models fail spectacularly when faced with slightly altered medical questions</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Aug 24th 2025, 06:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>Artificial intelligence systems often perform impressively on standardized medical exams—but new research suggests these test scores may be misleading. A study published in <em><a href="https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2837372" target="_blank" rel="noopener">JAMA Network Open</a></em> indicates that large language models, or LLMs, might not actually “reason” through clinical questions. Instead, they seem to rely heavily on recognizing familiar answer patterns. When those patterns were slightly altered, the models’ performance dropped significantly—sometimes by more than half.</p>
<p>Large language models are a type of artificial intelligence system trained to process and generate human-like language. They are built using vast datasets that include books, scientific papers, web pages, and other text sources. By analyzing patterns in this data, these models learn how to respond to questions, summarize information, and even simulate reasoning. In recent years, several models have achieved high scores on medical exams, sparking interest in using them to support clinical decision-making.</p>
<p>But high test scores do not necessarily indicate an understanding of the underlying content. Instead, many of these models may simply be predicting the most likely answer based on statistical patterns. This raises the question: are they truly reasoning about medical scenarios, or just mimicking answers they’ve seen before? That’s what the researchers behind the new study set out to examine.</p>
<p>“I am particularly excited about bridging the gap between model building and model deployment and the right evaluation is key to that,” explained study author Suhana Bedi, a PhD student at Stanford University.</p>
<p>“We have AI models achieving near perfect accuracy on benchmarks like multiple choice based medical licensing exam questions. But this doesn’t reflect the reality of clinical practice. We found that <a href="https://jamanetwork.com/journals/jama/fullarticle/2825147">less than 5% of papers</a> evaluate LLMs on real patient data, which can be messy and fragmented.”</p>
<p>“So, <a href="https://crfm.stanford.edu/helm/medhelm/latest/">we released a benchmark suite</a> of 35 benchmarks mapped to a taxonomy of real medical and healthcare tasks that were verified by 30 clinicians. We found that most models (including reasoning models) struggled on Administrative and Clinical Decision Support tasks.”</p>
<p>“We hypothesized that this was because these tasks involved complex reasoning scenarios that couldn’t be solved through pattern matching alone, exactly the kind of clinical thinking that matters in real practice,” Bedi explained. “With everyone talking about deploying AI in hospitals, we thought this was a very important question to answer.”</p>
<p>To investigate this, the research team created a modified version of the MedQA benchmark. They selected 100 multiple-choice questions from the original test and rewrote a subset of them to replace the correct answer with “None of the other answers,” or NOTA. This subtle shift forced the models to rely on actual medical reasoning rather than simply recognizing previously seen answer formats. A practicing clinician reviewed all changes to ensure the new “None of the other answers” response was medically appropriate.</p>
<p><strong><em><a href="https://www.psypost.org/psypost-newsletter/" target="_blank" rel="noopener">Stay informed with the latest psychology and neuroscience research—sign up for PsyPost’s newsletter and get new discoveries delivered straight to your inbox.</a></em></strong></p>
<p>Sixty-eight of the questions met the criteria for this test set. Each question presented a clinical scenario and asked for the most appropriate next step in treatment or diagnosis. One example involved a newborn with an inward-turning foot—a typical case of metatarsus adductus, which usually resolves on its own. In the original version, “Reassurance” was the correct answer. In the modified version, “Reassurance” was removed and replaced with “None of the other answers,” making the task more challenging.</p>
<p>Bedi and her colleagues then evaluated six widely used artificial intelligence models, including GPT-4o, Claude 3.5 Sonnet, Gemini 2.0 Flash, and others. All models were prompted to reason through each question using a method called chain-of-thought, which encourages step-by-step explanations of their answers. This approach is intended to support more deliberate reasoning rather than simple guesswork.</p>
<p>The models were tested on both the original and modified questions, and the researchers compared their performance across these two conditions. They used statistical methods to measure the significance of any accuracy drops, with a focus on whether each model could maintain performance when familiar patterns were removed.</p>
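<p>To make the comparison concrete, here is a minimal sketch of the scoring logic, not the authors’ code: it swaps the text of a question’s correct option for “None of the other answers” and compares accuracy on the original and modified sets. The <code>ask_model</code> function is a hypothetical stand-in for however a given model would actually be queried (with chain-of-thought prompting in the study).</p>
<pre><code># Minimal sketch of the evaluation logic, not the authors' code. ask_model() is a
# hypothetical stand-in for the API call used to query each large language model.
import random

NOTA = "None of the other answers"

def make_nota_variant(question: dict) -> dict:
    """Replace the correct option's text with 'None of the other answers'."""
    variant = {**question, "options": dict(question["options"])}
    variant["options"][question["answer"]] = NOTA
    return variant  # the correct letter is unchanged; its text is now NOTA

def accuracy(questions, ask_model) -> float:
    """Fraction of questions where the model's chosen letter matches the key."""
    return sum(ask_model(q) == q["answer"] for q in questions) / len(questions)

# Example with a dummy "model" that guesses at random:
sample = [{
    "stem": "Newborn with metatarsus adductus. Most appropriate next step?",
    "options": {"A": "Reassurance", "B": "Casting", "C": "Surgery", "D": "Bracing"},
    "answer": "A",
}]
nota_sample = [make_nota_variant(q) for q in sample]
guess = lambda q: random.choice(list(q["options"]))
print(accuracy(sample, guess), accuracy(nota_sample, guess))
</code></pre>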
<p>The results suggest that none of the models passed this test unscathed. All six experienced a noticeable decline in accuracy when presented with the NOTA-modified questions. Some models, like DeepSeek-R1 and o3-mini, were more resilient than others, showing drops of around 9 to 16 percent.</p>
<p>But the more dramatic declines were seen in widely used models such as GPT-4o and Claude 3.5 Sonnet, which showed reductions of over 25 percent and 33 percent, respectively. Llama 3.3-70B had the largest drop in performance, answering nearly 40 percent more questions incorrectly when the correct answer was replaced with “None of the other answers.”</p>
<p>“What surprised us most was the consistency of the performance decline across all models, including the most advanced reasoning models like DeepSeek-R1 and o3-mini,” Bedi told PsyPost.</p>
<p>These findings suggest that current AI models tend to rely on recognizing common patterns in test formats, rather than reasoning through complex medical decisions. When familiar options are removed or altered, performance deteriorates, sometimes dramatically.</p>
<p>The researchers interpret this pattern as evidence that many AI systems may not be equipped to handle novel clinical situations—at least not yet. In real-world medicine, patients often present with overlapping symptoms, incomplete histories, or unexpected complications. If an AI system cannot handle minor shifts in question formatting, it may also struggle with these kinds of real-life variability.</p>
<p>“These AI models aren’t as reliable as their test scores suggest,” Bedi said. “When we changed the answer choices slightly, performance dropped dramatically, with some models going from 80% accuracy down to 42%. It’s like having a student who aces practice tests but fails when the questions are worded differently. For now, AI should help doctors, not replace them.”</p>
<p>While the study was relatively small, limited to 68 test questions, the consistency of the performance decline across all six models raised concern. The authors acknowledge that more research is needed, including testing larger and more diverse datasets and evaluating models using different methods, such as retrieval-augmented generation or fine-tuning on clinical data.</p>
<p>“We only tested 68 questions from one medical exam, so this isn’t the full picture of AI capabilities,” Bedi noted. “Also, we used a specific way to test reasoning; there might be other approaches that reveal different strengths or weaknesses. Real clinical deployment would likely involve more sophisticated setups than what we tested.”</p>
<p>Still, the authors suggest their results point to three major priorities moving forward: building evaluation tools that separate true reasoning from pattern recognition, improving transparency around how current systems handle novel medical problems, and developing new models that prioritize reasoning abilities.</p>
<p>“We want to build better tests that can tell the difference between AI systems that reason versus those that just memorize patterns,” Bedi said. “We’re also hoping this work pushes the field toward developing AI that’s more genuinely reliable for medical use, not just good at taking tests.”</p>
<p>“The main thing is that impressive test scores don’t automatically mean an AI system is ready for the real world. Medicine is complicated and unpredictable, and we need AI systems that can handle that complexity safely. This research is about making sure we get there responsibly.”</p>
<p>The study, “<a href="https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2837372" target="_blank" rel="noopener">Fidelity of Medical Reasoning in Large Language Models</a>,” was authored by Suhana Bedi, Yixing Jiang, Philip Chung, Sanmi Koyejo, and Nigam Shah.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/a-new-frontier-in-autism-research-predicting-risk-in-babies-as-young-as-two-months/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">A new frontier in autism research: predicting risk in babies as young as two months</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Aug 23rd 2025, 18:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>For children with autism, early intervention is critical. Therapies and education – especially in the first two years of life – can facilitate a child’s social development, reduce familial stress and ultimately improve quality of life.</p>
<p>But while we can reliably diagnose autism spectrum disorder (ASD) at 24 months, most children are diagnosed <a href="http://aut.sagepub.com/content/early/2013/06/19/1362361313480277.abstract">much later</a>. This is largely due to a lack of resources, poor adherence to screening guidelines and the fact that primary care physicians are often uncomfortable <a href="http://www.sciencedirect.com/science/article/pii/S0022347615002541">talking about</a> autism risk to parents.</p>
<p>But what if we could use a simple, routine test to screen every baby for autism? It’s not as far-fetched as it sounds. Larger-scale clinical trials for an eye-tracking device that could be used to predict autism are <a href="http://www.webmd.com/news/breaking-news/20150302/autism-early-diagnosis?page=2">slated to begin</a> this year.</p>
<p>This presents a new and unique set of ethical concerns. Technologies that predict the possibility of a neurological disorder have the weight of affecting conceptions of not just “what” these children have but “who” these children will become.</p>
<p>As a neuroethicist and an autism researcher, we believe it is time to have a conversation about these technologies and what they will mean for parents and children, and for people with autism.</p>
<h2>Why use eye-tracking to predict autism?</h2>
<p>Many researchers have <a href="http://www.ncbi.nlm.nih.gov/pubmed/24694721">found</a> that autistic children prefer to <a href="http://www.ncbi.nlm.nih.gov/pubmed/24069955">look at</a> different things than typically developing children. This is called gaze preference. In fact, gaze preference changes can be detected prior to the onset of autism. Researchers have been using eye-tracking devices to record where babies gaze when viewing videos of social scenes. And they have been using this device not to diagnose autism, but to predict autism.</p>
<p>A 2013 study using an eye-tracking device found that differences in gaze preference can be detected in infants as young as two months. When viewing videos, infants who looked at mouths more than eyes, and at objects more than people, were more likely to be diagnosed with autism later. These infants experienced a <a href="http://www.ncbi.nlm.nih.gov/pubmed/24196715">decline in attention</a> to <a href="http://www.nimh.nih.gov/news/science-news/2013/earliest-marker-for-autism-found-in-young-infants.shtml">other people’s eyes</a>.</p>
<p>The researchers behind this study are working to replicate these findings in larger samples and are heading up the development of the <a href="http://www.webmd.com/news/breaking-news/20150302/autism-early-diagnosis?page=2">eye-tracking device</a> slated for clinical trials this year. Should the trials prove successful, they will seek FDA approval for the device.</p>
<p>The device is noninvasive, relatively easy to use and portable. And it could provide a standardized, objective measure for predicting autism. In other words, it would be a pre-diagnostic tool. This means that, by identifying the possibility of autism early, eye-tracking devices could increase the chances that children will be officially diagnosed earlier. This would especially help children who tend to be diagnosed at later ages because of disparities related to <a href="http://dx.doi.org/10.2105/AJPH.2007.131243">race</a> or <a href="http://dx.doi.org/10.1177/1362361313480277">geography</a>.</p>
<p>In fact, researchers have suggested it could be used as part of a routine <a href="http://www.webmd.com/news/breaking-news/20150302/autism-early-diagnosis?page=2">well baby checkup</a> for 18- to 24-month-olds. And if the technology proves to be useful in predicting autism in infants, why wouldn’t the device one day be utilized even earlier for two- or six-month-olds? A pre-diagnostic assessment for autism could be easily built into regular checkups, instead of waiting for parents to report symptoms and get an appointment with a specialist. This could be a major leap forward for getting kids diagnosed early with ASD and started on therapy, or providing interventions even prior to the development of autistic traits.</p>
<h2>What does ‘risk’ of autism mean?</h2>
<p>Imagine your baby is assessed for pre-diagnostic autism with an eye-tracking device, and you learn that he or she is likely to be later diagnosed with autism.</p>
<p>What does that mean? How should we talk to parents about this? And bear in mind that autism is highly variable, with a wide range in both symptom profile and age of onset, which complicates how accurate such an assessment can be.</p>
<p>A positive assessment would indicate a higher likelihood of the child being diagnosed with autism. A negative one would indicate a lower likelihood. That is not the same thing as getting a diagnosis for autism in infancy. This is pre-diagnostic. A positive assessment could be used to justify an early therapeutic regimen even prior to an autism diagnosis. Early intervention can provide long-lasting improvements in the quality of life of the children, families and caregivers of children with autism. For pre-diagnosed children, the hope would be that intervening before the development of significant autistic traits would be even more beneficial.</p>
<p>The promise of having an opportunity to provide earlier intervention – perhaps earlier than ever before – and to implement this technology in routine community pediatric care requires that we consider the development of this technology very carefully.</p>
<p>For example, what exactly will parents be told upon receiving such an assessment? The word “risk” may fail to communicate the vast range of possible outcomes and instead place too much focus on negative outcomes related to an autism spectrum disorder (ASD) diagnosis. Not every child who receives a positive assessment, after all, will actually be diagnosed with autism (to be sure, even with a tool as promising as eye-tracking, there will be false positives).</p>
<p>We should be mindful of the effect a positive assessment (false positive or not) could have on a child and their family. In many cultures, for instance, a condition like autism would stigmatize an entire family.</p>
<p>In the absence of care and resources, especially for children so young, a positive assessment (even one later found to be wrong, or a false positive) could be seen as more of a sentence than an opportunity for intervention, a sentiment that could arise even within research trials.</p>
<h2>How do you treat a child “pre-diagnosed” with autism?</h2>
<p>While several research groups have raised the possibility of an objective test for toddlers using the eye-tracking device, eye-tracking has also been used in a preliminary study to predict autism in two- to six-month-olds. What if, in the future, babies are regularly assessed at younger ages, for which we do not yet have interventions? What could (and what should) a parent do in that situation?</p>
<p>There are currently no evidence-based interventions available for babies under 12 months. The next phase of studies following upcoming trials will involve testing the development of a <a href="https://clinicaltrials.gov/ct2/show/NCT01985022?term=autism+atlanta&rank=2">novel early intervention</a> for <a href="http://projectreporter.nih.gov/project_info_description.cfm?aid=8893151&icde=0">12-month-olds</a>. Other researchers are attempting to develop interventions for <a href="http://www.ucdmc.ucdavis.edu/publish/news/newsroom/9182">six-month-old infants</a>.</p>
<p>A positive assessment might motivate parents to invest unnecessarily in expensive interventions, surveillance and treatments. It could also lead to changes in the life trajectories of the child, caregivers and entire families, such as changes in their financial plans and the reallocation of time and material resources to a child’s early intervention or care.</p>
<p>Even after a false positive (an assessment for high risk that is determined to be wrong) is identified and the likelihood of getting a diagnosis of autism is determined to be quite low, caregivers may be unable to stop looking for signs of autism as a child ages.</p>
<p>There are no autism-specific medications (because we still do not know the causes of autism), though drugs are frequently used to treat children for a variety of autism-related symptoms.</p>
<p>In fact, psychotropic drugs have been prescribed to children less than two years of age, and risks of these medications on early development have yet to be determined.</p>
<p>And adherents of a growing <a href="http://www.wired.com/2013/04/neurodiversity/">neurodiversity</a> movement – an advocacy position that rejects notions that autism is unwanted and should be cured and, instead, acknowledges autism as a natural variant of human neurological development – would resist the use of “risk” in relation to ASD.</p>
<h2>Not a diagnosis, but a pre-preexisting condition</h2>
<p>Policymakers must consider the impact of the possible integration of these tools into regular pediatric practice and infant care as a new, community-wide pre-diagnostic assessment tool.</p>
<p>Predictive detection technologies such as these will present a new set of policy considerations. Will insurers pay for the test? If they do, will they pay for treatment and intervention afterwards? Because of the potential for long-term health-care savings, would there be penalties from insurers for not undergoing such an assessment? Right now, we just don’t know.</p>
<p>Keep in mind that insurers were not prohibited from denying people coverage for preexisting conditions until the Affordable Care Act (ACA) was passed. But with this test, we aren’t talking about a preexisting condition. We are talking about a predictive technology, a “pre” whose results essentially create a new category of health or illness, well before the condition even becomes a preexisting one. Think of it as a pre-preexisting condition. This situation is not addressed by the ACA.</p>
<p>The insurance implications can spread beyond childhood. How a predictive assessment will affect life insurance policies and long-term care insurance is unknown.</p>
<p>Because information about one’s brain health often feels especially identity-forming, privacy policies will need to be created to determine how pre-diagnostic information will be kept and who will have access to the results of these assessments. Will schools, future employers or insurance agencies have access to this information?</p>
<p>As eye-tracking devices head toward clinical trials, it is critical to think about and address these concerns in a public forum and alongside the development of these technologies.</p>
<p>Without such a discussion, these tools, despite their enormous potential, risk losing resources and public support to be fully developed and advanced or risk being underused or not used properly at all.</p>
<p><em>This article is republished from <a href="https://theconversation.com">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/are-we-ready-for-a-test-that-could-pre-diagnose-autism-in-babies-44821">original article</a>.</em></p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/cerebellar-prefrontal-brain-connectivity-may-shape-negative-symptoms-in-psychosis/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Cerebellar-prefrontal brain connectivity may shape negative symptoms in psychosis</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Aug 23rd 2025, 14:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>A new study has found that stronger communication between the cerebellum and the dorsolateral prefrontal cortex—a part of the brain associated with higher-order thinking—is linked to reduced severity of negative symptoms in people with psychotic disorders. These negative symptoms were also associated with poorer verbal memory. The findings suggest that targeting cerebellar-prefrontal connectivity could help inform new treatment approaches for symptoms that remain largely resistant to existing medications.</p>
<p>The research was published in <em><a href="https://doi.org/10.1016/j.bpsc.2025.07.013" target="_blank" rel="noopener">Biological Psychiatry: Cognitive Neuroscience and Neuroimaging</a></em>.</p>
<p>Psychosis is a mental health condition marked by a disconnection from reality. This can involve hallucinations, delusions, or disorganized thinking—often referred to as positive symptoms because they represent an addition to typical experience. However, many people with psychosis also experience negative symptoms, which reflect a loss or reduction in normal functions. These might include apathy, reduced emotional responsiveness, decreased speech, and social disengagement.</p>
<p>While positive symptoms often respond to antipsychotic medications, negative symptoms tend to be more persistent and less treatable. They are strongly linked to poor quality of life, difficulty in daily functioning, and long-term disability. Despite their importance, the brain mechanisms behind negative symptoms are not well understood.</p>
<p>Previous small-scale studies had suggested that reduced communication between the cerebellum—a region traditionally associated with movement—and the dorsolateral prefrontal cortex (DLPFC), which plays a role in executive functions like planning and decision-making, might be related to the severity of negative symptoms in schizophrenia. One early study even found that increasing this cerebellar-prefrontal connectivity using non-invasive brain stimulation reduced symptom severity in a small group.</p>
<p>However, these earlier findings were based on limited sample sizes. The researchers behind the current study sought to test whether this relationship holds up in a much larger and more diverse sample of people with psychosis spectrum disorders. They also wanted to explore whether cognitive performance—particularly memory—might play a role in this brain-behavior link.</p>
<p>“Negative symptoms are a significant predictor of disability for people with psychotic disorders, yet the underlying brain circuitry remains unknown,” said study author <a href="https://www.vumc.org/heatherwardlab/welcome-ward-lab" target="_blank" rel="noopener">Heather Burrell Ward</a>, director of Neuromodulation Research and the Vanderbilt Psychiatry Residency Research Track, and assistant professor at <a href="https://www.vumc.org/psychiatry/person/heather-burrell-ward-md" target="_blank" rel="noopener">Vanderbilt University Medical Center</a>.</p>
<p>“Importantly, current medications are minimally effective in treating negative symptoms. Previous work from co-author Roscoe Brady, Jr., MD, PhD observed that cerebellar-prefrontal brain connectivity was related to negative symptoms and that using noninvasive brain stimulation increased connectivity in that circuit and improved negative symptoms. These findings were exciting, but they involved small samples (n=44 and n=11), so we wanted to test if this same pattern of brain connectivity was also linked to negative symptoms in a much larger sample of people with psychotic disorders.”</p>
<p>The new study involved 260 adults diagnosed with a range of psychotic disorders, including both affective and nonaffective forms. Affective psychoses refer to conditions such as bipolar disorder with psychotic features, while nonaffective psychoses include schizophrenia and related disorders. Participants underwent brain scans using functional magnetic resonance imaging (fMRI) while in a resting state. This allowed researchers to assess how different brain regions communicate when a person is not actively engaged in a task.</p>
<p>The research team used a specific cerebellar region identified in earlier work as a starting point—or “seed”—and measured how strongly this region connected with the DLPFC. At the same time, they assessed the severity of participants’ negative symptoms using a standardized clinical rating scale. They also administered cognitive tests measuring memory, attention, verbal fluency, and processing speed.</p>
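<p>For readers unfamiliar with seed-based analyses, the core computation is straightforward: average the resting-state signal within the seed region, average it within the target region, and correlate the two time series. The sketch below is illustrative only, using simulated signals rather than the study’s preprocessing pipeline:</p>
<pre><code># Illustrative sketch of seed-based functional connectivity, not the study's pipeline.
# seed_ts and dlpfc_ts stand for preprocessed regional time series (simulated here).
import numpy as np

def seed_connectivity(seed_ts: np.ndarray, roi_ts: np.ndarray) -> float:
    """Pearson correlation between two regional time series, Fisher z-transformed."""
    r = np.corrcoef(seed_ts, roi_ts)[0, 1]
    return np.arctanh(r)  # Fisher z makes correlations better behaved for group stats

# Simulated example: two signals that share a common component.
rng = np.random.default_rng(0)
shared = rng.standard_normal(200)
seed_ts = shared + rng.standard_normal(200)
dlpfc_ts = 0.8 * shared + rng.standard_normal(200)
print(round(seed_connectivity(seed_ts, dlpfc_ts), 3))
</code></pre>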
<p>The researchers controlled for factors like age, sex, the type of scanner used, and overall scan quality. They also examined whether results differed across subtypes of psychosis or were influenced by variables such as antipsychotic medication or duration of illness.</p>
<p>The main finding was that stronger connectivity between the cerebellum and the left dorsolateral prefrontal cortex was associated with lower negative symptom severity. This relationship appeared to be specific to negative symptoms, as connectivity did not relate to positive symptoms or levels of depression. Importantly, the association held true across different types of psychotic disorders and stages of illness, and it remained significant even after controlling for potential confounds like head motion during the scan.</p>
<p>“We were excited to see that the relationship between cerebellar-prefrontal connectivity and negative symptoms did not differ by diagnosis, meaning that same brain circuit is involved in negative symptoms across the psychosis spectrum in both nonaffective psychoses (e.g., schizophrenia) and affective psychoses (e.g., bipolar disorder with psychotic features),” Ward told PsyPost.</p>
<p>The researchers also found a modest link between cerebellar-prefrontal connectivity and performance on a test of delayed verbal memory. People who showed stronger brain connectivity between these two regions tended to do better at recalling words after a delay. No other cognitive domains showed a similar relationship.</p>
<p>When the researchers explored whether verbal memory might explain part of the connection between brain connectivity and negative symptoms, they found that delayed verbal learning partially accounted for the link. In other words, people with better verbal memory tended to show stronger connectivity and fewer negative symptoms. This suggests that cognitive function may be one pathway through which cerebellar-prefrontal connectivity relates to symptom severity.</p>
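<p>The “partially accounted for” phrasing reflects a standard mediation-style decomposition: the total effect of connectivity on symptoms is split into a direct part and an indirect part carried through memory. The toy sketch below uses simulated numbers and a simple difference-in-coefficients approach; it is not the study’s actual analysis:</p>
<pre><code># Toy mediation sketch with simulated data, not the study's analysis.
# X = cerebellar-prefrontal connectivity, M = delayed verbal memory,
# Y = negative symptom severity (all simulated).
import numpy as np

def ols(y, X):
    """Ordinary least squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(1)
n = 260
x = rng.standard_normal(n)                       # connectivity
m = 0.5 * x + rng.standard_normal(n)             # memory depends partly on connectivity
y = -0.3 * x - 0.4 * m + rng.standard_normal(n)  # symptoms depend on both

c_total = ols(y, x)[1]                           # total effect of X on Y
direct = ols(y, np.column_stack([x, m]))[1]      # direct effect of X, controlling for M
indirect = c_total - direct                      # portion carried through the mediator
print(round(c_total, 2), round(direct, 2), round(indirect, 2))
</code></pre>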
<p>The study also confirmed earlier findings that negative symptoms are broadly associated with cognitive impairments. In this sample, individuals with more severe negative symptoms performed worse on nearly all areas of cognitive testing, including attention, memory, and language-related tasks.</p>
<p>The results add weight to the idea that communication between the cerebellum and prefrontal cortex plays an important role in the expression of negative symptoms across the psychosis spectrum. The findings align with prior studies and extend them by using a much larger and more diverse group of participants. The study also offers preliminary evidence that memory performance—specifically delayed verbal recall—may be a partial bridge linking brain connectivity and symptoms.</p>
<p>“We have shown that cerebellar-prefrontal connectivity is related to negative symptoms in psychotic disorders,” Ward explained. “This provides further evidence for a brain circuit that could be targeted with a variety of treatments (e.g., medication or brain stimulation) to treat the debilitating negative symptoms associated with psychotic disorders.”</p>
<p>While the study had several strengths, including its large sample size and comprehensive analysis, it also had limitations. The data were collected at a single site, and the brain scans were not optimized specifically for studying the cerebellum. This may have limited the precision of some measurements.</p>
<p>In addition, the study was cross-sectional, meaning that it captured a snapshot in time. As a result, it cannot determine whether reduced connectivity causes negative symptoms or whether the symptoms themselves affect brain communication. “Future studies should test if changes in this brain circuit over time lead to changes in negative symptom severity,” Ward said.</p>
<p>The findings support the need for clinical trials that test whether enhancing cerebellar-prefrontal connectivity can lead to symptom improvement in people with psychosis. Prior pilot work using brain stimulation techniques has shown promise, and this study provides a stronger foundation for expanding such approaches.</p>
<p>“As a psychiatrist, my long-term goal is to develop novel brain stimulation treatments for people with psychotic disorders that are highly effective and have minimal side effects,” Ward added. “This analysis was led by Sean Yarrell, MEd, who is now a graduate student in the lab of co-author (and colleague) Alexandra Moussa-Tooks, PhD at Indiana University, and Sophia Blyth, BA, who is now a graduate student in Will Pelham’s lab at UCSD.”</p>
<p>The study, “<a href="https://www.sciencedirect.com/science/article/pii/S2451902225002484" target="_blank" rel="noopener">Cerebellar-Prefrontal Connectivity Predicts Negative Symptom Severity Across the Psychosis Spectrum</a>,” was authored by Sean A. Yarrell, Sophia H. Blyth, Alexandra B. Moussa-Tooks, Baxter P. Rogers, Anna Huang, Neil D. Woodward, Stephan Heckers, Roscoe O. Brady, and Heather Burrell Ward.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/childrens-self-estimates-of-iq-become-more-accurate-with-age-but-only-to-a-point/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Children’s self-estimates of IQ become more accurate with age—but only to a point</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Aug 23rd 2025, 12:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>Children under 10 are not very accurate at judging their own intelligence, according to a large-scale study published in <em><a href="https://doi.org/10.1016/j.intell.2025.101933" target="_blank" rel="noopener">Intelligence</a></em>. Researchers in Estonia found that younger children’s self-assessments of how smart they are often have little to do with their actual performance on standardized IQ tests. But around the age of 10, this begins to change. At that point, children’s self-perceived intelligence starts to reflect their measured cognitive abilities more reliably—though the two never fully align.</p>
<p>Intelligence refers to a general ability to think abstractly, solve problems, learn quickly, and adapt to new situations. Psychologists often measure intelligence using standardized tests, such as the Raven Standard Progressive Matrices, which assess pattern recognition and reasoning ability without relying on language. While these tests are widely accepted among researchers, people’s everyday ideas about intelligence tend to be much broader. Many individuals—especially children—may associate being “smart” not only with academic performance but also with traits like being well-behaved, popular, or confident.</p>
<p>The mismatch between scientific and lay definitions of intelligence has raised questions about whether simply asking people how intelligent they think they are can serve as a useful substitute for formal testing. Prior research has found that people’s self-reported intelligence usually has only a weak relationship with their actual IQ scores. This gap has limited the usefulness of self-reported intelligence in psychological research and education, but little was known about how this relationship develops over the course of childhood.</p>
<p>The goal of the current study was to pinpoint the age at which children begin to form a meaningful understanding of their own intelligence. The researchers, <a href="https://scholar.google.com/citations?user=xaxV8_8AAAAJ&hl=en&oi=ao" target="_blank" rel="noopener">Jüri Allik</a> and Helle Pullmann, wanted to know when self-reported intelligence starts to reflect measured cognitive ability and how stable these self-assessments are over time. The work was part of a broader effort to understand how intelligence, personality, and academic performance are linked during childhood and adolescence.</p>
<p>The study also builds on a long tradition of intelligence research in Estonia. In the early 20th century, Estonian school principal Juhan Tork conducted pioneering work on children’s cognitive ability that was later suppressed during Soviet occupation. After Estonia regained independence, researchers sought to revisit and expand upon these early studies using modern methods.</p>
<p>“In 1939, Tork defended his doctoral thesis at the University of Tartu, in which he studied the intelligence of Estonian schoolchildren. After the occupation of Estonia by the Soviet Union, this work, which was far ahead of its time, was one of the first to be banned and physically burned by the new authorities,” explained Allik, a professor of experimental psychology at the University of Tartu.</p>
<p>“Tork’s work was commissioned by the Estonian Ministry of Education, as it was believed that the average intelligence level of a country with a very large number of peasant children was lower than that of developed countries. Tork’s work showed that the intelligence of Estonian children was no lower than that of the United States or Britain. When Estonia regained independence in 1991, there was suspicion that the deportation of the smartest part of the population to Siberia and emigration abroad had lowered the average mental level of the Estonian population.</p>
<p>“Helle Pullmann’s <a href="https://dspace.ut.ee/items/9a9ea0a5-1c8b-4636-964a-b5430d6682a1" target="_blank" rel="noopener">doctoral thesis</a>, which she defended in 2005, showed that the intelligence of Estonian children <a href="https://doi.org/10.1017/s0021932003006503" target="_blank" rel="noopener">is basically at the same level</a> as, for example, children in Iceland and the United Kingdom. In addition to the intelligence of Estonian schoolchildren, we also studied personality and the children’s self-perceived intelligence, as there are frequent cases in psychology where belief in one’s abilities is as important as the abilities themselves.”</p>
<p>The study used data from thousands of Estonian schoolchildren between the ages of 7 and 18, drawn from two related samples. The first group included 2,712 adolescents in Grades 6, 8, 10, and 12 who were assessed in 2001, with a follow-up conducted in 2003 that included 1,681 students from the same schools—some of whom had participated previously, while others were new. A second group consisted of 1,832 younger children in Grades 2, 3, and 4 from across Estonia, with ages ranging from 7 to 11.</p>
<p>Participants completed the Raven Standard Progressive Matrices, a nonverbal test of fluid intelligence that requires identifying patterns in visual matrices. Children also rated their own intelligence using age-appropriate self-report items. For students in Grades 1 through 4, a simplified three-point scale was used in response to the statement “I am very smart, and I understand everything immediately.” Older students (Grade 6 and above) used a 10-point scale to assess how their cognitive abilities compared to others, ranging from “Others are significantly more capable than me” to “I am significantly more capable than others.”</p>
<p>To enable comparison across different ages and formats, all responses were standardized within classrooms, and students were grouped into low, average, or high self-reported intelligence categories based on z-scores. In addition to the single-item question, younger children also answered a six-item scale assessing academic self-perception, which improved in reliability with age.</p>
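<p>For readers curious about the mechanics of this step, the sketch below shows one way such within-classroom standardization and grouping could be done. It is a minimal illustration only, not the authors’ analysis code: the column names ("classroom", "self_rating") and the ±0.5 standard-deviation cut-offs for the low/average/high categories are assumptions made for the example.</p>
<pre><code># Minimal sketch (not the authors' code) of within-classroom z-scoring and
# grouping of self-rated intelligence; column names and cut-offs are assumptions.
import pandas as pd

def standardize_and_group(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    ratings = df.groupby("classroom")["self_rating"]
    # z-score each child's self-rating relative to classmates
    df["self_rating_z"] = (df["self_rating"] - ratings.transform("mean")) / ratings.transform("std")
    # split into low / average / high; the ±0.5 SD cut-offs are illustrative only
    df["self_rating_group"] = pd.cut(
        df["self_rating_z"],
        bins=[float("-inf"), -0.5, 0.5, float("inf")],
        labels=["low", "average", "high"],
    )
    return df

example = pd.DataFrame({
    "classroom": ["A", "A", "A", "B", "B", "B"],
    "self_rating": [1, 2, 3, 4, 7, 10],
})
print(standardize_and_group(example)[["self_rating_z", "self_rating_group"]])
</code></pre>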
<p>Self-esteem was also measured using the 10-item Rosenberg Self-Esteem Scale, which was administered to students in Grades 6 through 12. This allowed the researchers to examine the relationship between self-perceived intelligence, actual cognitive performance, and broader self-concept over time.</p>
<p>The results indicate that young children are not particularly good at evaluating their own intelligence. Among children aged 7 to 9, those who rated themselves as very smart actually scored lower on IQ tests than their peers with more modest self-assessments. This suggests that self-perceived intelligence during early childhood may reflect wishful thinking, confidence, or self-esteem rather than actual cognitive ability.</p>
<p>But starting around age 10, this pattern begins to shift. From that point onward, children’s self-reported intelligence begins to track more closely with their measured IQ scores. The correlation between the two measures becomes increasingly consistent with age, peaking at a correlation of about 0.41 in 11-year-olds. This pattern suggests that by age 10, children begin to develop the cognitive and social maturity needed to make more accurate judgments about their own abilities.</p>
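<p>As a rough illustration of how an age-by-age correlation like this can be computed, the sketch below pairs each child’s standardized self-rating with a Raven score and calculates a Pearson correlation within each age group. The data and column names ("age", "self_rating_z", "raven_score") are hypothetical; the sketch mirrors the general type of analysis described, not the study’s actual code.</p>
<pre><code># Illustrative sketch: Pearson correlation between self-rated and measured
# intelligence within each age group; data and column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

def correlation_by_age(df: pd.DataFrame) -> pd.Series:
    # correlation of self-rating z-scores with Raven scores, per age
    return df.groupby("age").apply(
        lambda g: pearsonr(g["self_rating_z"], g["raven_score"])[0]
    )

ages = pd.DataFrame({
    "age": [10, 10, 10, 11, 11, 11],
    "self_rating_z": [-0.5, 0.2, 1.1, -1.0, 0.0, 0.9],
    "raven_score": [38, 41, 45, 35, 42, 47],
})
print(correlation_by_age(ages))
</code></pre>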
<p>Importantly, this shift happens before the self-report format changes from the simpler version used in the early grades to the more nuanced scale used with older students. This rules out the possibility that the change in response format is driving the improvement in accuracy.</p>
<p>“It is surprising that although we collected the data over 20 years ago, the question of at what age a child’s self-rated intelligence begins to align with their psychometrically measured intelligence had not been explored until our research,” Allik told PsyPost. “We have now established that starting from the age of 10, a correlation begins to emerge between subjective and objective measures of intelligence.”</p>
<p>However, the researchers also found that self-reported intelligence never aligns perfectly with measured IQ. Even among older students, the correlation remains modest. And somewhat surprisingly, by the final year of high school, the relationship between self-perceived and measured intelligence starts to weaken. Among 12th graders, the correlation dropped to just 0.12. The researchers suggest that as adolescents grow older, their self-assessments may become more influenced by broader self-concepts, including their social standing and emotional self-worth.</p>
<p>Supporting this idea, the researchers found that self-reported intelligence was strongly correlated with self-esteem at all ages. This link became especially pronounced in high school, where self-esteem was a much better predictor of perceived intelligence than actual IQ scores. By 12th grade, how smart students believed they were had more to do with how they felt about themselves in general than how they performed on cognitive tests.</p>
<p>“The main point is that a simple question about a person’s self-assessed intelligence cannot replace a verified and reliable IQ test that measures mental abilities,” Allik said. “In addition to the fact that one question is a very unreliable measure, the folk concept of intelligence is different from that of professional psychologists.”</p>
<p>The study provides a rich look at how children’s understanding of intelligence develops, but it has some limitations. The researchers focused on Estonian students, and while previous comparisons suggest their developmental trajectory is similar to that of children in other countries, cultural differences could still affect how children think about intelligence. Future research might explore how these patterns play out in other cultural settings or in children with different educational experiences.</p>
<p>Another open question is how children’s concepts of intelligence evolve beyond adolescence. While this study tracked students into late high school, it’s possible that their self-assessments change again in early adulthood, particularly as they gain more life experience and academic or occupational feedback.</p>
<p>“One of the goals of this project was to investigate how grades given in school depend on children’s mental abilities and personality traits,” Allik explained. “Since this is an important topic, <a href="https://doi.org/10.1016/j.paid.2006.08.001" target="_blank" rel="noopener">our article on it</a> has been noticed and cited quite a few times. According to Google Scholar, 986 times to date. After matching our data with the birth register, we discovered that within the normal range of birth weight (≥ 2500 g), every 500 g increase in birth weight was associated with an approximate increase of 0.7 points in IQ scores measured in the early grades.</p>
<p>“Also, maternal smoking during pregnancy was accompanied by <a href="https://doi.org/10.1016/j.earlhumdev.2010.06.010" target="_blank" rel="noopener">a 3.3-point deficit</a> in children’s intellectual abilities. We <a href="https://doi.org/10.1002/per.820">also found</a> that the later criminal behaviour of boys was associated with lower cognitive ability, grade point average, the lack of agreeableness and conscientiousness, as well as higher neuroticism.”</p>
<p>The study, “<a href="https://doi.org/10.1016/j.intell.2025.101933" target="_blank" rel="noopener">How accurately does self-reported intelligence reflect psychometrically measured IQ?</a>”, was published June 21, 2025.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<p><strong>Forwarded by:<br />
Michael Reeder LCPC<br />
Baltimore, MD</strong></p>
<p><strong>This information is taken from free public RSS feeds published by each organization for the purpose of public distribution. Readers are linked back to the article content on each organization's website. This email is an unaffiliated unofficial redistribution of this freely provided content from the publishers. </strong></p>
<p> </p>