<table style="border:1px solid #adadad; background-color: #F3F1EC; color: #666666; padding:8px; -webkit-border-radius:4px; border-radius:4px; -moz-border-radius:4px; line-height:16px; margin-bottom:6px;" width="100%">
<tbody>
<tr>
<td><span style="font-family:Helvetica, sans-serif; font-size:20px;font-weight:bold;">PsyPost – Psychology News Daily Digest (Unofficial)</span></td>
</tr>
<tr>
<td> </td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/brain-health-knowing-two-languages-might-be-protective-against-alzheimers-progression/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Brain health: Knowing two languages might be protective against Alzheimer’s progression</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Jan 2nd 2025, 08:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p>A new analysis of neuroimaging data has found that individuals with Alzheimer’s disease who speak only one language (monolinguals) have reduced hippocampal volume in the brain. This reduction was not observed in individuals who speak at least two languages (bilinguals). The research was published in <a href="https://doi.org/10.1017/S1366728924000221"><em>Bilingualism: Language and Cognition</em></a>.</p>
<p>Alzheimer’s disease is a progressive neurodegenerative disorder that results in cognitive decline. It is the most common cause of dementia, particularly in older adults. In the brain, it is characterized by the accumulation of abnormal protein deposits, including amyloid plaques and tau tangles, which disrupt communication between neurons. Over time, these disruptions cause neurons to lose function and die, leading to brain atrophy.</p>
<p>This atrophy is particularly notable in areas related to memory, such as the hippocampus. As the disease progresses, it impairs other cognitive functions, including reasoning, language, and problem-solving. Emotional changes, such as depression, anxiety, or apathy, often accompany cognitive decline. In advanced stages, individuals lose the ability to carry out basic daily activities and recognize loved ones. Ultimately, Alzheimer’s leads to a severe decline in overall brain function and death.</p>
<p>Study author Kristina Coulter and her colleagues sought to explore whether bilingualism (i.e., knowing two languages) might be protective against dementia. Previous research has suggested that having a cognitive reserve can provide some protection against dementia. Cognitive reserve refers to the brain’s ability to adapt and compensate for damage or age-related changes by using alternative neural pathways or strategies.</p>
<p>Cognitive reserve is shaped by lifelong experiences such as education, intellectual engagement, social interaction, physical activity, and mentally stimulating activities that enhance the brain’s resilience and adaptability. Learning a second language might be one such activity. The researchers hypothesized that, if this were the case, bilingual individuals with dementia would show observable structural brain differences. Specifically, they expected bilingual individuals to have greater cortical thickness and volume in language-related areas of the brain compared to monolinguals.</p>
<p>The researchers analyzed data from the Comprehensive Assessment of Neurodegeneration and Dementia Study (COMPASS-ND) of the Canadian Consortium on Neurodegeneration in Aging (CCNA) and the Consortium for the Early Identification of Alzheimer’s disease-Quebec (CIMA-Q). The first dataset included neuroimaging data from 356 individuals with Alzheimer’s disease, while the second dataset included data from 175 individuals with, or at risk for, various types of dementia.</p>
<p>Participants self-reported their native language and the number of languages they spoke. Individuals who reported speaking two or more languages were considered bilingual. Participants also underwent magnetic resonance imaging of their brains.</p>
<p>Among monolingual participants, 71% reported speaking English. Among bilingual participants, the most commonly spoken languages were English (38%) and French (39%). Of the bilingual participants, 68% reported speaking two languages, 22% knew three languages, and the remaining participants spoke between four and seven languages. Additionally, 11% of monolinguals and 32% of bilinguals were immigrants.</p>
<p>Brain imaging data revealed that individuals diagnosed with diseases known to lead to dementia tended to have lower gray matter volume and cortical thickness across different brain areas. Bilingual individuals did not exhibit signs of cognitive reserve in language-related regions of the brain. However, monolingual older adults with Alzheimer’s disease showed reduced hippocampal volume—a reduction that was not observed in bilingual older adults with Alzheimer’s.</p>
<p>“Although bilingualism was not associated with brain reserve in language-related areas, nor with cognitive reserve in AD-related areas [Alzheimer’s disease-related areas], bilingualism appears to confer reserve in the form of brain maintenance in AD,” the study authors concluded.</p>
<p>The study sheds light on the brain changes in dementia and their relationship to language knowledge. However, it is important to note that the study design does not allow for cause-and-effect conclusions to be drawn from the results.</p>
<p>The paper “<a href="https://doi.org/10.1017/S1366728924000221">Bilinguals show evidence of brain maintenance in Alzheimer’s disease</a>” was authored by Kristina Coulter, Natalie A. Phillips, and the CIMA-Q and COMPASS-ND groups.</p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/chatgpt-fact-checks-can-reduce-trust-in-accurate-headlines-study-finds/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">ChatGPT fact-checks can reduce trust in accurate headlines, study finds</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Jan 2nd 2025, 06:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p>A recent study published in the <em><a href="https://doi.org/10.1073/pnas.2322823121" target="_blank" rel="noopener">Proceedings of the National Academy of Sciences</a></em> investigates how large language models, such as ChatGPT, influence people’s perceptions of political news headlines. The findings reveal that while these artificial intelligence systems can accurately flag false information, their fact-checking results do not consistently help users discern between true and false news. In some cases, the use of AI fact-checks even led to decreased trust in true headlines and increased belief in dubious ones.</p>
<p>Large language models (LLMs), such as ChatGPT, are advanced artificial intelligence systems designed to process and generate human-like text. These models are trained on vast datasets that include books, articles, websites, and other forms of written communication. Through this training, they develop the ability to respond to a wide range of topics, mimic different writing styles, and perform tasks such as summarization, translation, and fact-checking.</p>
<p>The motivation behind this study stems from the growing challenge of online misinformation, which undermines trust in institutions, fosters political polarization, and distorts public understanding of critical issues like climate change and public health. Social media platforms have become hotspots for the rapid spread of false or misleading information, often outpacing the ability of traditional fact-checking organizations to address it.</p>
<p>LLMs, with their ability to analyze and respond to content quickly and at scale, have been proposed as a solution to this problem. However, while these models can provide factual corrections, little was known about how people interpret and react to their fact-checking efforts.</p>
<p>“The rapid adoption of LLMs for various applications, including fact-checking, raised pressing questions about their efficacy and unintended consequences,” said study author <a href="https://www.matthewdeverna.com/" target="_blank" rel="noopener">Matthew R. DeVerna</a>, a PhD candidate at Indiana University’s Observatory on Social Media. “While these tools demonstrate impressive capabilities, little was known about how the information they provide influences human judgment and behavior. While we are confident that such technology can be used to improve society, doing so should be done carefully, and we hope this work can help design LLM-powered systems that can improve digital spaces.”</p>
<p>The researchers designed a randomized controlled experiment involving 2,159 participants, sampled to reflect the demographics of the United States population in terms of gender, age, race, education, and political affiliation. Participants were divided into two groups: one assessed the accuracy of headlines (“belief group”), and the other indicated their willingness to share them on social media (“sharing group”).</p>
<p>Each group encountered 40 political news headlines, evenly split between true and false statements. These headlines were also balanced for partisan bias, ensuring an equal mix of content favorable to Democrats and Republicans. Participants were assigned to one of four conditions: a control group with no fact-checking information, a group shown AI-generated fact-checks by ChatGPT, a group that could choose whether to view the AI fact-checks, and a group presented with traditional human fact-checks.</p>
<p>ChatGPT’s fact-checks were generated using a standardized prompt and labeled as either “true,” “false,” or “unsure” based on the model’s response. Participants in the AI groups were informed that the fact-checking information came from ChatGPT. Those in the human fact-check group received clear, concise evaluations of the claims, supported by details about the credibility of the news source.</p>
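<p>For readers curious what such a pipeline looks like in practice, the snippet below is a minimal, hypothetical sketch of generating a true/false/unsure label for a headline with the current OpenAI Python client. The prompt wording, model name, and label-parsing rule are illustrative assumptions and are not reproduced from the study’s materials.</p>
<pre style="font-family:monospace; font-size:12px; background-color:#f6f6f6; border:1px solid #ddd; border-radius:4px; padding:8px; overflow:auto;">
# Hypothetical sketch of LLM-based headline fact-checking; the prompt, model
# name, and parsing rule are illustrative and not taken from the study.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def fact_check(headline: str) -> str:
    """Ask the model whether a headline is true, false, or uncertain."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model; the study used an earlier ChatGPT version
        messages=[{
            "role": "user",
            "content": (
                "I saw this headline. Answer with one word (true, false, or unsure): "
                + headline
            ),
        }],
    )
    answer = response.choices[0].message.content.strip().lower()
    # Map the one-word reply onto the three labels used in the experiment
    for label in ("true", "false", "unsure"):
        if answer.startswith(label):
            return label
    return "unsure"

print(fact_check("Example political headline"))
</pre>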
<p>The study found that the impact of AI-generated fact-checking on participants’ judgments and behaviors was mixed and often counterproductive. While ChatGPT accurately identified 90% of false headlines as false, it struggled with true headlines, labeling only 15% as true while expressing uncertainty about the majority. This uncertainty led to undesirable effects: participants were less likely to believe true headlines misclassified as false and more likely to believe false headlines when the AI expressed uncertainty.</p>
<p>For example, participants in the belief group who were exposed to ChatGPT fact-checks showed reduced discernment compared to the control group. They were 12.75% less likely to believe true headlines incorrectly flagged as false and 9.12% more likely to believe false headlines when ChatGPT was unsure. Similarly, in the sharing group, participants were more likely to share false headlines labeled as uncertain by the AI.</p>
<p>“The public should be aware that these models can sometimes provide inaccurate information, which can lead them to incorrect conclusions,” DeVerna told PsyPost. “It is important that they approach information from these models with caution.”</p>
<p>In contrast, traditional human fact-checks significantly improved participants’ ability to distinguish between true and false headlines. Those exposed to human fact-checking were 18.06% more likely to believe true headlines and 8.98% more likely to share them compared to the control group.</p>
<p>The option to view AI fact-checks also revealed a selection bias. Participants who chose to view AI-generated fact-checking information were more likely to share both true and false headlines, suggesting they may have already formed opinions about the headlines before consulting the AI. This behavior was influenced by participants’ attitudes toward AI, as those with positive views of AI were more likely to believe and share content after seeing the fact-checks.</p>
<p>“We provide some evidence that suggests people may be ignoring accurate fact-checking information from ChatGPT about false headlines after choosing to view that information,” DeVerna said. “We hope to better understand the mechanisms at play here in future work.”</p>
<p>As with all research, there are some limitations. First, the study relied on a specific version of ChatGPT and a limited set of headlines, which may not fully capture the complexities of real-world misinformation. The experiment’s survey setting also differs from the dynamic and fast-paced nature of social media interactions.</p>
<p>“Our study relied on a specific, now outdated, version of ChatGPT and presented participants with a single response from the model,” DeVerna noted. “Results may vary with other LLMs or in more interactive settings. Future research will focus on examining these dynamics in more realistic contexts.”</p>
<p>“We hope to better understand the risks and potential benefits of LLM-based technology for improving digital spaces and maximize their benefit for society. We are working to understand how humans interact with this technology, how to improve the model’s accuracy, and how such interventions scale to massive networks like social media platforms.”</p>
<p>The study, “<a href="https://doi.org/10.1073/pnas.2322823121" target="_blank" rel="noopener">Fact-checking information from large language models can decrease headline discernment</a>,” was authored by Matthew R. DeVerna, Harry Yaojun Yan, Kai-Cheng Yang, and Filippo Menczer.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/dark-personality-traits-and-love-styles-differ-in-partnered-and-single-individuals/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Dark personality traits and love styles differ in partnered and single individuals</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Jan 1st 2025, 20:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p>Married or cohabiting participants score lower on traits associated with the so-called “Dark Triad”—Machiavellianism, narcissism, and psychopathy—compared to their single counterparts, according to a new study published in the journal <em><a href="https://www.sciencedirect.com/science/article/pii/S2405844024162469" target="_blank" rel="noopener">Heliyon</a></em>. Additionally, partnered individuals tend to favor passionate and altruistic love styles, while singles gravitate toward more playful, obsessive, and pragmatic attitudes. These results shed light on how personality and romantic attitudes are related to relationship dynamics.</p>
<p>Romantic relationships are a cornerstone of human life, but not everyone approaches or sustains them in the same way, and the psychological factors that affect whether someone forms and maintains a committed partnership are not fully understood. By examining Dark Triad personality traits and love styles, the researchers sought to gain deeper insight into the personal attributes that shape romantic outcomes.</p>
<p>“Romantic relationships are fundamental to human experience, shaping individuals’ emotional well-being and societal structures. Our interest was piqued by the complex interplay between personality traits, particularly those in the Dark Triad, and attitudes towards love. Exploring how these traits influence relationship dynamics provides valuable insights into both typical and atypical patterns of human intimacy,” said study author Sara Veggi of the University of Turin.</p>
<p>Dark Triad personality traits are associated with self-serving, emotionally detached behaviors that could undermine long-term relationships. Machiavellianism is characterized by manipulation, cynicism, and emotional detachment, often used to achieve personal goals at the expense of others. Narcissism, on the other hand, is marked by grandiosity, a sense of superiority, and a need for admiration. Psychopathy, the third component of the Dark Triad, involves impulsivity, emotional detachment, and a disregard for the well-being of others.</p>
<p>Love styles, on the other hand, describe different attitudes and approaches to romantic relationships. Eros is characterized by passionate, intense, and physical attraction, often associated with high levels of satisfaction in relationships. Ludus, by contrast, treats love as a game, emphasizing playfulness and a lack of commitment. Storge reflects a more friendship-based love, grounded in shared experiences and mutual respect rather than passion.</p>
<p>Pragma is a practical approach to love, where partners are chosen based on rational considerations like compatibility or social status. Mania is possessive and obsessive, marked by jealousy and a deep desire for affirmation from the partner. Finally, Agape represents selfless, altruistic love, where the needs and happiness of the partner are prioritized above one’s own.</p>
<p>The researchers conducted their study using an online survey that gathered data from 1,101 participants, primarily Italian adults, between July and October 2023. Participants were required to be at least 18 years old, have a minimum primary school education, and possess sufficient proficiency in Italian. Those with severe cognitive or psychiatric disorders were excluded from the study. The final sample consisted of 615 married or cohabiting individuals and 486 single participants, with an average age of approximately 41 years.</p>
<p>Veggi and her colleagues found distinct differences between the two groups in both personality traits and love attitudes. Married or cohabiting participants scored lower on all three Dark Triad traits—Machiavellianism, narcissism, and psychopathy—compared to singles on average. These results suggest that individuals with lower levels of manipulation, emotional detachment, and self-centeredness are more likely to maintain long-term, stable relationships.</p>
<p>When it came to love attitudes, partnered individuals were characterized by higher levels of Eros and Agape, reflecting passionate and selfless approaches to love. These styles are commonly associated with satisfaction and commitment in romantic relationships. Conversely, single participants scored higher on Ludus, Mania, Pragma, and Storge. These styles suggest a more playful, obsessive, practical, or friendship-oriented approach to love, which may not always align with the dynamics of enduring partnerships.</p>
<p>“The findings reveal that personality traits and attitudes toward love significantly differ between individuals in relationships and those who are single,” Veggi told PsyPost. “This suggests that certain personality characteristics and love attitudes might predispose individuals towards relationship stability or singlehood.”</p>
<p>Using hierarchical logistic regression analysis—a statistical method that identifies how various factors predict an outcome while considering the influence of other variables—the researchers found that age and love styles were significant predictors of relationship status.</p>
<p>Older participants were more likely to be married or cohabiting, as were those who displayed higher levels of Eros and Agape. In contrast, individuals with higher levels of Mania and Storge were more likely to be single. Narcissism, surprisingly, was also a predictor of being partnered, suggesting that this trait might not always hinder relationship formation, possibly due to its association with confidence and charm in the early stages of relationships.</p>
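<p>For illustration, a hierarchical (blockwise) logistic regression of the kind described above can be sketched in a few lines of Python. The data and variable names below are synthetic and purely for demonstration; they are not the authors’ data or code.</p>
<pre style="font-family:monospace; font-size:12px; background-color:#f6f6f6; border:1px solid #ddd; border-radius:4px; padding:8px; overflow:auto;">
# Illustrative sketch of a hierarchical (blockwise) logistic regression
# predicting relationship status; all data and variable names are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age": rng.normal(41, 12, n),
    "narcissism": rng.normal(0, 1, n),
    "eros": rng.normal(0, 1, n),
    "mania": rng.normal(0, 1, n),
})
# Synthetic outcome: 1 = partnered, 0 = single
log_odds = 0.03 * (df["age"] - 41) + 0.4 * df["eros"] - 0.3 * df["mania"]
df["partnered"] = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

# Block 1: demographics only
m1 = smf.logit("partnered ~ age", data=df).fit(disp=False)
# Block 2: add personality and love-style predictors; comparing the nested
# models shows how much the added block contributes to predicting status
m2 = smf.logit("partnered ~ age + narcissism + eros + mania", data=df).fit(disp=False)
print(m1.llf, m2.llf)  # log-likelihoods of the nested models
print(m2.summary())
</pre>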
<p>The researchers also uncovered gender differences. Men scored higher on all three Dark Triad traits, aligning with previous research indicating that men are more likely to exhibit socially aversive personality characteristics. Men were also more likely than women to adopt a Ludus love style, suggesting a preference for less committed, more playful relationships. However, men also scored higher on Agape, reflecting a willingness to sacrifice for their partners, which may align with traditional protector roles.</p>
<p>“We were particularly struck by the predictive strength of narcissism, Eros, and Agape in determining relationship status, as well as the gender-specific differences in these traits,” Veggi said. “The finding that men reported higher scores in ludic and agapic love styles than women adds nuance to gendered perceptions of love and relational commitment.”</p>
<p>The study sheds light on how psychological factors influence the likelihood of being in a committed partnership versus remaining single. But as with all research, there are limitations to consider.</p>
<p>“As a cross-sectional study reliant on self-report measures, our results are subject to potential biases such as social desirability and self-perception inaccuracies. Additionally, the sample’s demographic skew (predominantly female and heterosexual) limits generalizability.”</p>
<p>Future research could address these limitations by recruiting more diverse samples and incorporating additional psychological variables, such as attachment styles or measures of relational satisfaction. Longitudinal studies would also help clarify how these traits and attitudes evolve over time and influence relationship trajectories.</p>
<p>“We aim to expand this investigation to specific populations, including perpetrators of violent and sexual crimes,” Veggi said. “By integrating measures of social cognition, emotion recognition and moral judgment, we seek to understand how these variables manifest in high-risk behaviors and offending.”</p>
<p>“This research underscores the importance of addressing personality traits and relational attitudes in both therapeutic and criminological contexts. By understanding these dynamics, we can better support individuals in building healthy relationships and mitigating harmful patterns.”</p>
<p>The study, “<a href="https://doi.org/10.1016/j.heliyon.2024.e40215" target="_blank" rel="noopener">Love actually: Is relationship status associated with dark triad personality traits and attitudes towards love?</a>,” was authored by Agata Benfante, Marialaura Di Tella, Sara Veggi, Franco Freilone, Lorys Castelli, and Georgia Zara.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/financial-dynamics-in-long-term-marriages-surprising-findings-unearthed-from-decades-worth-of-data/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Financial dynamics in long-term marriages: Surprising findings unearthed from decades’ worth of data</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Jan 1st 2025, 18:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p>Marriage is often seen as a partnership, but how do couples share financial responsibilities over the long haul? A groundbreaking study in <a href="https://www.sciencedirect.com/science/article/abs/pii/S0276562424001082"><em>Research in Social Stratification and Mobility</em></a> reveals that gender-egalitarian earnings patterns are more common than previously thought when viewed from a long-term perspective. These patterns, however, are deeply shaped by the socio-economic circumstances couples bring into their marriages.</p>
<p>The study addresses a gap in understanding how financial equality manifests within marriages over time. Previous research has largely focused on cross-sectional measures of economic homogamy—how similar spouses’ earnings are at a single point in time. However, this approach fails to capture the dynamic and evolving nature of spousal earnings trajectories.</p>
<p>Economic contributions within a marriage are not static; they fluctuate in response to life events, employment opportunities, and external circumstances. By adopting a long-term perspective, the new study sheds light on the various ways financial patterns develop within marriages and how these patterns relate to broader social inequalities.</p>
<p>“I study how couples share work and family responsibilities over the course of long-term relationships and how gender plays a role in these decisions,” said study author <a href="https://www.allisondunatchik.com/">Allison Dunatchik</a>, an assistant professor of sociology at the University of South Carolina. “We know that in recent decades, people are increasingly likely to marry someone with a similar level of education and income as themselves. But what happens after marriage? I wanted to understand how common gender egalitarian earnings patterns are in the long-term within different-gender couples and what those patterns typically look like.”</p>
<p>Dunatchik conducted her study by analyzing data from the National Longitudinal Survey of Youth 1979, a dataset tracking a nationally representative cohort of individuals born between 1957 and 1964. The survey provided detailed information on marriage, education, employment, and earnings over multiple decades, allowing Dunatchik to examine how spouses’ financial contributions evolved over the course of their marriages. Her analysis focused on 5,354 heterosexual couples in their first marriages, following them for up to 30 years.</p>
<p>“Somewhat to my surprise, I found that gender egalitarian earnings patterns were relatively common among couples when we take a long-term perspective—and are more common than when we take a short-term approach,” Dunatchik told <em>PsyPost</em>. “Overall, I found that about 60% of couples follow gender-egalitarian long-term earnings patterns—but these patterns manifest in three different ways, which are highly stratified by couples’ socio-economic status.”</p>
<p>About half of the couples in the study followed a “dual earner” pattern, meaning both spouses maintained steady and consistent earnings throughout the course of their marriage. This pattern was particularly common among couples with higher socio-economic advantages at the time of marriage, such as greater education levels and higher initial earnings.</p>
<p>Another 6% of couples exhibited “jointly mobile” patterns. In these relationships, both spouses experienced earnings that rose, fell, or fluctuated in similar ways over time. Unlike the stability seen in dual-earner couples, these patterns often reflected shared financial instability, where both partners’ incomes responded to external factors such as job market fluctuations or life events.</p>
<p>Additionally, 5% of couples followed an “alternating earner” pattern, where primary earnership shifted between spouses over time. In these relationships, one partner would step into the primary earning role as the other’s income decreased.</p>
<p>The last two gender egalitarian patterns “have been largely overlooked in previous research on couple-level earnings patterns and are most common among couples with lower levels of education and earnings,” Dunatchik explained.</p>
<p>Traditional gendered earning configurations were also evident, with 34% of couples following a “male breadwinner” pattern but only 5% a “female breadwinner” pattern. The male breadwinner arrangement was most common among couples with lower socio-economic status, while female breadwinner configurations typically arose when wives entered the marriage with higher earnings and education levels than their husbands.</p>
<p>Interestingly, Dunatchik found that over half of the wives in her sample—55%—followed a “stable earner” trajectory. These women consistently earned relatively high incomes over time, spending the vast majority of marital years employed and contributing significantly to their household finances. This pattern counters the expectation that wives’ earnings typically decline after marriage or parenthood. Instead, it suggests that many women sustain their career engagement and financial contributions.</p>
<p>On the other hand, the study revealed unexpected instability among husbands’ earnings. While 77% of husbands followed stable earning patterns, a significant minority—23%—exhibited unstable trajectories. These men were categorized into groups with declining earnings, late entry into earning, or highly variable income patterns over the course of their marriages.</p>
<p>“I was quite surprised at the proportion of wives with relatively high, stable earnings and the proportion of husbands with unstable earnings,” Dunatchik said. “Previous research has typically focused on explaining why women’s earnings decline following marriage or first birth, but for over half of women in this study, earnings really didn’t decline substantially over the life course.</p>
<p>“Similarly, we often assume that husbands maintain stable earnings when in reality about a quarter of husbands in my study had unstable earnings. These findings emphasize the need to question these implicit assumptions about men’s and women’s economic roles within marriage.”</p>
<p>Dunatchik’s analysis also highlighted how socio-economic factors at the start of a marriage influence long-term earning patterns. Couples with greater economic stability at marriage were more likely to follow the dual-earner model, benefiting from consistent and stable incomes. By contrast, less advantaged couples were more likely to experience financial precarity, even when their earnings patterns appeared egalitarian in the short term.</p>
<p>This socio-economic stratification suggests that financial equality within marriages looks very different across income levels, with lower-income couples more likely to experience instability and higher-income couples consolidating economic advantages over time.</p>
<p>“Ultimately,” Dunatchik told <em>PsyPost</em>, “I think there are a few key takeaways from these findings: 1) Gender egalitarian earnings patterns are more common than often assumed when we take a long-term perspective. 2) Gender egalitarian earnings patterns can take a variety of forms within couples, some of which are overlooked when we consider only the short term. And 3) given the rise of earnings instability in recent decades, greater attention needs to be paid to the extent to which earnings instability is concentrated within households.”</p>
<p>Dunatchik acknowledges some limitations of her study, including its focus on a single cohort of Baby Boomers. “While this provides incredibly rich, longitudinal data, it does mean that the findings may not necessarily reflect the experiences of younger cohorts,” she noted. In particular, changes in labor market dynamics, gender norms, and family structures could produce different earnings trajectories for Millennials and Generation Z.</p>
<p>“I hope to examine how these patterns have changed over time and how they vary across countries to try to understand how policy and economic contexts shape couples’ long-term earnings patterns,” Dunatchik said.</p>
<p>The study, “<a href="https://doi.org/10.1016/j.rssm.2024.100995" target="_blank" rel="noopener">His and hers earnings trajectories: Economic homogamy and long-term earnings inequality within and between different-sex couples</a>,” was published November 22, 2024.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/depressive-symptoms-might-be-transmitted-from-mother-to-child-through-early-interactions/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Depressive symptoms might be transmitted from mother to child through early interactions</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Jan 1st 2025, 16:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p>A study of mothers and their infants found that mothers with depressive symptoms tend to display fewer supportive responses to their infants’ positive emotions. In turn, infants who receive fewer supportive responses are more likely to exhibit depressive symptoms during toddlerhood. The paper was published in <a href="https://doi.org/10.1017/S0954579424001615"><em>Development and Psychopathology</em></a>.</p>
<p>Depressive symptoms refer to a range of emotional, cognitive, and physical experiences associated with depression. Emotionally, individuals may feel persistent sadness, hopelessness, or emptiness, often accompanied by a loss of interest or pleasure in activities they once enjoyed. Cognitively, depression can lead to difficulty concentrating, making decisions, or remembering details, along with feelings of worthlessness or excessive guilt. Physical symptoms include fatigue, changes in appetite or weight, sleep disturbances (insomnia or oversleeping), and unexplained aches or pains. The severity and duration of these symptoms can vary, but they typically interfere with daily functioning and quality of life.</p>
<p>Studies have established that depressive symptoms are often transmitted intergenerationally (between generations), particularly between mothers and their children. These studies demonstrate that children exposed to maternal depressive symptoms in the first year of life are more likely to develop internalizing symptoms (e.g., anxiety, depression, social withdrawal) between the ages of 2 and 19. However, the precise mechanisms underlying this transmission remained unclear.</p>
<p>Study author Gabrielle Schmitt and her colleagues sought to investigate how depressive symptoms are transmitted from mothers to toddlers. They hypothesized that mothers with elevated depressive symptoms in the first year after childbirth would display fewer supportive responses to their infants’ positive emotions. Consequently, infants receiving less support for their positive emotions would exhibit more depressive symptoms in toddlerhood.</p>
<p>The study involved 128 mothers and their infants, recruited through various means from Toronto, Canada, as part of a larger longitudinal study on early childhood development. Mothers were required to be proficient in English, at least 18 years old, and without major medical conditions. Infants needed to weigh more than 2,500 grams at birth.</p>
<p>Data were collected at three time points: in early infancy (when infants were around 6–7 months old), late infancy (around 12 months old), and toddlerhood (around 20 months old). By the third data collection point, 90 mother-child pairs (70%) remained in the study. Data collection involved Qualtrics online surveys. Mothers completed assessments of their own depressive symptoms using the Edinburgh Postnatal Depression Scale at the first and second time points and of their toddlers’ depressive symptoms using the Child Behavior Checklist for Ages 1½–5 at the third time point.</p>
<p>Additionally, researchers visited participants’ homes to video record mothers interacting with their infants for 30 minutes. Mothers were given a standardized set of toys and instructed to interact with their infants as they normally would. These interactions were divided into three 10-minute episodes: the first involved free interaction, the second involved the use of toys, and the third involved interaction without toys. Study authors used these videos to assess infants’ manifestations of positive emotions and mothers’ responses to those emotions.</p>
<p>Results showed that 22% of the mothers exhibited clinical levels of postpartum depression. On average, mothers displayed twice as many supportive responses as non-supportive responses to their infants’ positive emotions. Mothers’ responses did not vary based on the infant’s sex.</p>
<p>Overall, mothers with elevated postpartum depressive symptoms tended to display fewer supportive responses to their infants’ positive emotions. In turn, infants who received fewer supportive responses exhibited more depressive symptoms in toddlerhood.</p>
<p>The study authors tested a statistical model suggesting that the number of supportive responses an infant receives mediates the link between mothers’ and toddlers’ depressive symptoms. Results supported the possibility of such a relationship.</p>
<p>“Findings suggest that maternal socialization of infant positive affect is an important mechanism in the intergenerational transmission of depressive symptoms. These results emphasize the need for preventative interventions early in development,” the study authors concluded.</p>
<p>The study sheds light on the potential mechanism behind the transgenerational transmission of depression. However, it is important to note that both toddlers’ and mothers’ depressive symptoms were reported by the mothers themselves, leaving room for reporting bias that may have influenced the results.</p>
<p>The paper, “<a href="https://doi.org/10.1017/S0954579424001615">Intergenerational transmission of depressive symptoms: Maternal socialization of infant positive affect as a mediator</a>,” was authored by Gabrielle Schmitt, Brittany Jamieson, Danielle Lim, and Leslie Atkinson.</p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/neuroscientists-just-discovered-memory-processes-in-non-brain-cells/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Neuroscientists just discovered memory processes in non-brain cells</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Jan 1st 2025, 14:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>A new study published in <a href="https://www.nature.com/articles/s41467-024-53922-x"><em>Nature Communications</em></a> provides evidence that memory-like processes are not exclusive to brain cells but can occur in other types of human cells. Researchers demonstrated that two types of non-neural cells, when exposed to specific patterns of chemical stimuli, exhibited memory responses traditionally associated with neurons. This finding suggests that memory mechanisms may stem from fundamental cellular processes.</p>
<p>The research was conducted by scientists at the Center for Neural Science at New York University, led by <a href="https://www.nikolaykukushkin.com/" target="_blank" rel="noopener">Nikolay V. Kukushkin</a> and Thomas Carew. The team set out to investigate whether the molecular mechanisms underpinning memory formation in neurons could also be present in non-neural cells. Building on previous research that identified memory-like processes in simplified neural systems, the researchers aimed to determine if non-neural cells might exhibit similar memory traits, such as the ability to differentiate between spaced and massed stimuli.</p>
<p>“Tom’s lab has been studying for many years how something seemingly intangible, like memory and learning, can boil down to changes in just a handful of brain cells — sometimes, in a single neuron,” explained Kukushkin, a clinical associate professor of life science and author of the upcoming book <em>One Hand Clapping: The Origin Story of the Human Mind.</em></p>
<p>“So we knew that memory does not require all the complexity of the brain. It was a logical step to ask — does it require a brain at all?”</p>
<p>The research centered on the “massed-spaced effect,” a phenomenon well-documented in neuroscience and behavioral psychology. The effect demonstrates that information is retained more effectively when learning sessions are spaced out over time rather than compressed into a single intensive session. This principle, originally identified in neurons, has been observed across species and is considered a cornerstone of memory formation. The researchers hypothesized that similar dynamics might also apply to non-neural cells due to the conservation of certain signaling pathways across cell types.</p>
<p>To test this hypothesis, the scientists engineered two types of human cell lines—one derived from nerve tissue and another from kidney tissue—to include a “reporter” system that produces a glowing protein in response to memory-related activity. This protein, a form of luciferase, is controlled by a promoter dependent on the cAMP response element-binding protein (CREB), a molecule known to play a key role in memory formation in neurons. By observing the production of the glowing protein, the researchers could track when and how the cells “remembered” specific patterns of chemical stimulation.</p>
<p>The team exposed the cells to pulses of two chemicals—forskolin and TPA—that activate key memory-related signaling pathways, mimicking how neurons respond to neurotransmitters during learning. These pulses were administered in various patterns, including single intensive bursts (massed training) and multiple shorter bursts spaced over time (spaced training). The researchers then measured the levels of the glowing protein after different intervals to assess the cells’ responses.</p>
<p>Both cell types exhibited stronger and more sustained responses when exposed to spaced stimuli compared to massed stimuli, mirroring the massed-spaced effect observed in neurons. Importantly, the cells retained these memory-like responses for over 24 hours, indicating that the spacing effect influenced not just the immediate strength of the response but also its longevity. This behavior aligns with key principles of memory, such as enhanced retention and reduced forgetting with repetition over time.</p>
<p>“It’s not news that non-brain cells can retain information,” Kukushkin told PsyPost. “What’s surprising is that non-brain cells can retain information about surprisingly specific time patterns — down to minutes — for days after you have stopped doing anything with them. I don’t think any of us expected kidney cells to be so clever.”</p>
<p>The researchers further investigated the molecular underpinnings of these memory-like processes. They found that the effects were associated with the activation of CREB and extracellular signal-regulated kinase (ERK), two molecules essential for memory formation in neurons. Spaced stimulation led to stronger and more sustained activation of these molecules compared to massed stimulation. By inhibiting the activity of CREB or ERK, the researchers were able to block the memory-like responses, confirming their critical role in the observed phenomena.</p>
<p>“To the cells of our body, anything that we do regularly — eating, exercising, taking medicine — is a pattern of chemicals in time,” Kukushkin explained. “These time patterns can change any cell in literally the same ways as learning for class changes brain cells, and as with brain cells, we don’t yet fully understand which time patterns do what. But in the future, we may be able to use this cellular learning, for example, to train a muscle cell to produce a healthy hormone, or to train a cancer cell to stop dividing.”</p>
<p>The study challenges the traditional view that memory is a feature unique to the brain and its neurons. However, the experiments were conducted under highly controlled laboratory conditions, which may not fully capture the complexity of real-world cellular environments. Additionally, the study focused on a narrow set of stimuli and cell types, leaving open questions about the generalizability of these findings to other cell types and signaling contexts. Future research will need to address these limitations by exploring how different cells respond to various stimuli and whether similar memory-like processes occur in living organisms.</p>
<p>“Our study is a simple proof of principle that generic, non-neural cells use the same basic toolkit for memory formation as brain cells,” Kukushkin noted. “But we don’t yet have a broad understanding of the process: what kinds of time patterns is the cell responsive to? What exactly changes throughout the cell depending on each pattern? We are working on these questions now.”</p>
<p>The researchers also plan to investigate the broader implications of their findings, including potential applications in medicine and artificial intelligence.</p>
<p>“The long-term goal that I hope to pursue in my own lab is interpreting and predicting the behavior of any cell in response to any time pattern,” Kukushkin said. “If this were achieved, it would have enormous implications for two reasons. In neuroscience, it would help treat mental health diseases and create realistic forms of memory in AI. Outside of neuroscience, it would lead to a new approach to health and disease: cellular modification, rather than chemical blockage, which is how most drugs work today.”</p>
<p>“There has been a lot of discussion of the word ‘memory’ in the context of our paper,” he added. “Some reporters have put the word into quotation marks and conditionalized the changes in our cells as ‘metaphorical’ memory. But I would say the main point of the paper is that this is not metaphorical memory — it is literally the same process with the same evolutionary roots and the same functional use.”</p>
<p>The study, “<a href="https://www.nature.com/articles/s41467-024-53922-x" target="_blank" rel="noopener">The massed-spaced learning effect in non-neural human cells</a>,” was authored by Nikolay V. Kukushkin, Robert E. Carney, Tasnim Tabassum, and Thomas J. Carew.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/art-of-living-techniques-improve-well-being-in-therapy-for-depression/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Art of living techniques improve well-being in therapy for depression</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Jan 1st 2025, 12:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>A new study published in <a href="https://doi.org/10.3389/fpsyg.2024.1426597"><em>Frontiers in Psychology</em></a> suggests that teaching individuals with depression how to reflect on their daily lives could help improve their mental health and overall satisfaction with life – at least in the short term. Researchers found that combining traditional psychotherapy with “art of living” techniques led to significant reductions in depression and boosts in well-being compared to therapy alone.</p>
<p>Depression is a widespread mental health issue, with millions struggling to manage its debilitating effects. Despite advances in psychotherapy, relapse rates for depression remain high.</p>
<p>Therapists and researchers are increasingly recognizing the importance of not only reducing negative emotions, such as persistent sadness or fatigue, but also cultivating positive ones. Positive psychology interventions, which encourage practices like gratitude and optimism, have shown promise in past research. One such approach, referred to as the “art of living,” involves developing skills for self-reflection and leading a more mindful, intentional life.</p>
<p>Based at the Technical University Darmstadt in Germany, Elena Renée Sequeira-Nazaré and Bernhard Schmitz sought to build on those findings by testing whether structured daily reflection exercises could amplify the benefits of psychotherapy.</p>
<p>The study included 161 participants diagnosed with mild to severe depression. The participants were divided into three groups: one group (53 participants) received weekly psychotherapy sessions (50 minutes each) for four weeks; a second group (54 participants) received the same therapy plus a daily set of self-reflection questions they recorded in a journal; the third group, serving as a control (53 participants), received no treatment.</p>
<p>The reflective exercises asked participants to consider questions such as what they were grateful for that day and what they would do differently if they could relive the day. The researchers measured participants’ levels of depression, life satisfaction, and “art of living” skills before the intervention, immediately after, and three months later.</p>
<p>The results were encouraging but mixed. Both therapy groups experienced a reduction in depression symptoms over four weeks, with the group practicing daily reflections showing the greatest improvement. Participants in this group also reported enhanced life satisfaction and mastery of “art of living” skills, such as self-reflection and maintaining a positive outlook.</p>
<p>However, the benefits were short-lived. By the three-month follow-up, the differences between groups had diminished, with many participants reporting a decline in well-being. This suggests that while “art of living” exercises can be beneficial, their impact may wane without ongoing practice or reinforcement.</p>
<p>As promising as these findings are, the study has limitations. The short duration of the intervention (only four weeks) and follow-up period may have been insufficient to observe lasting effects. The study also faced practical challenges, including a high dropout rate and possible inconsistencies in how therapists implemented the interventions.</p>
<p>Sequeira-Nazaré and Schmitz concluded, “it would be interesting for future studies to extend the art of living interventions and change the questions or the art of the intervention like a video based intervention for example.”</p>
<p>The study, “<a href="https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2024.1426597/full">Learn to Be Happy—An Experimental Study in Clinical Context with Depressive Patients in Germany</a>,” was authored by Elena Renée Sequeira-Nazaré and Bernhard Schmitz.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/how-well-can-genetic-scores-predict-iq-heres-what-the-latest-research-reveals/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">How well can genetic scores predict IQ? Here’s what the latest research reveals</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Jan 1st 2025, 11:08</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p>How well can genetics predict intelligence? A large-scale study published in <a href="https://www.sciencedirect.com/science/article/abs/pii/S0160289624000655"><em>Intelligence</em></a> explores the power of polygenic scores to predict intelligence, finding that these genetic estimates moderately correlate with IQ. However, the study also highlights critical gaps, including variability across studies and challenges in pinpointing specific genetic influences.</p>
<p>Intelligence, often defined as the ability to learn, reason, and solve problems, is a key factor in predicting various life outcomes, including educational attainment, career success, and overall health. Twin and family studies have consistently shown that genetic differences account for about half of the variance in intelligence among individuals, with genetic influences becoming stronger across the lifespan. However, identifying the specific genetic mechanisms underlying these differences has proven to be a significant challenge.</p>
<p>The advent of genome-wide association studies has opened new doors for exploring the genetic architecture of intelligence. These studies analyze genetic data from large populations to identify DNA variants associated with specific traits, such as intelligence. By aggregating these variants into polygenic scores, researchers can estimate an individual’s genetic propensity for a particular trait.</p>
<p>Polygenic scores have been used to predict various characteristics, but their utility in predicting intelligence remains a topic of considerable debate. The new study aimed to address key questions about the power, reliability, and limitations of polygenic scores in the context of intelligence.</p>
<p>“Intelligence has a real bearing on people’s life outcomes. Understanding what drives people’s differences in intelligence is an important first step in identifying ways to work towards a fairer society,” explained study author Florence Oxley, a postdoctoral research associate at the University of York who is affiliated with the <a href="https://www.hungrymindlab.com/" target="_blank" rel="noopener">Hungry Mind Lab</a> and the <a href="https://senfm.york.ac.uk/home" target="_blank" rel="noopener">SENFM research project</a>.</p>
<p>“Research has shown that differences in intelligence are partly genetic and partly shaped by environmental experiences. People’s genetic tendency towards a given trait can be captured by so-called polygenic scores, which add together the DNA variants a person possesses that are linked to that trait. Polygenic scores for intelligence can predict how well someone will score on an IQ test. In our research, we explored how powerful and accurate polygenic score predictions of intelligence are across studies.”</p>
<p>The researchers conducted their study using a meta-analytic approach, pooling data from multiple independent studies to evaluate the predictive validity of polygenic scores for intelligence. The data for this meta-analysis were derived from nine independent samples, all composed of individuals of European ancestry from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) countries.</p>
<p>In total, the sample included 452,864 participants, making it one of the largest investigations into the genetic predictors of intelligence to date. The included studies assessed intelligence using standardized psychometric tests, such as the Wechsler Intelligence Scale, which measure cognitive abilities in various domains, including verbal and non-verbal reasoning.</p>
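<p>For readers unfamiliar with how results from independent samples are combined, the sketch below shows one standard meta-analytic device: an inverse-variance weighted average of correlations on the Fisher's z scale. This is a generic illustration with assumed numbers, not the specific model reported in the paper, and the per-sample correlations and sizes are hypothetical.</p>
<pre style="font-family:monospace; font-size:12px; background-color:#f6f6f6; padding:8px;">
import math

# Generic fixed-effect pooling of correlations on the Fisher's z scale.
# The correlations and sample sizes below are invented for illustration.

samples = [   # (correlation with IQ, sample size)
    (0.22, 10_000),
    (0.28, 50_000),
    (0.24, 120_000),
]

def pooled_correlation(samples):
    numerator, denominator = 0.0, 0.0
    for r, n in samples:
        z = math.atanh(r)   # Fisher's z transform of the correlation
        w = n - 3           # inverse of the sampling variance of z
        numerator += w * z
        denominator += w
    return math.tanh(numerator / denominator)  # back-transform the weighted mean

print(round(pooled_correlation(samples), 3))
</pre>
<p>Random-effects models, which allow the true effect to vary across samples, follow the same weighting idea but add a between-study variance term; the variability across samples described below suggests such variation matters here.</p>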
<p>The researchers found that polygenic scores for intelligence showed a moderate but consistent ability to predict performance on standardized intelligence tests. Across the nine independent samples analyzed, the polygenic scores correlated with IQ test results at approximately 0.25, meaning they explained about 6% of the variation in intelligence.</p>
<p>“This translates to a difference of ~4 IQ points, which, some research suggests, could be linked to differences in people’s education, employment, and income,” Oxley told PsyPost. “Genetic prediction could be really useful in research looking to disentangle how genetic and environmental factors work together in shaping us cognitively.”</p>
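<p>The arithmetic linking these figures is simple: squaring the correlation gives the share of variance explained, and, assuming the conventional IQ scale with a standard deviation of 15 points (a convention the article does not state explicitly), multiplying the correlation by that standard deviation gives one common reading of the roughly four-point figure. The short check below spells this out.</p>
<pre style="font-family:monospace; font-size:12px; background-color:#f6f6f6; padding:8px;">
# Back-of-the-envelope check of the figures quoted above.
# Assumes IQ is scaled to a standard deviation of 15 points.
r = 0.25                     # reported correlation between polygenic score and IQ
variance_explained = r ** 2  # 0.0625, i.e. about 6% of the variation
iq_sd = 15
points_per_sd = r * iq_sd    # 3.75, i.e. roughly 4 IQ points per SD of polygenic score

print(f"variance explained: {variance_explained:.1%}")
print(f"expected IQ difference per SD of polygenic score: {points_per_sd:.2f} points")
</pre>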
<p>But the researchers also uncovered significant variability in the predictive strength of polygenic scores across the included samples. Even after accounting for factors such as the type of intelligence being measured and the age range of participants, a substantial portion of this variability remained unexplained. This suggests that other unexamined factors, such as environmental influences or differences in the specific genetic variants included in the polygenic scores, may play a role.</p>
<p>“Critically, we also found that polygenic score predictions varied in strength across studies,” Oxley explained. “It was not clear from the data available why this is the case. Until we can explain why genetic predictions of intelligence are sometimes stronger and sometimes weaker, we won’t be able to use polygenic scores to help us understand people’s differences in intelligence at the individual level. More research will be needed to address this gap.”</p>
<p>The researchers also examined whether the type of intelligence being measured (e.g., verbal, non-verbal, or general intelligence) affected the predictive power of the polygenic scores. They found that verbal intelligence, which involves language-based reasoning and comprehension, was more strongly predicted by the scores compared to general or non-verbal intelligence. However, the polygenic scores were not able to differentiate strongly between other specific domains of intelligence.</p>
<p>“People’s scores on different types of intelligence tests are usually closely related,” Oxley said. “That is, people who score highly in maths tests also tend to do well on vocabulary tests. But we found that polygenic scores for intelligence didn’t predict all types of intelligence equally well. We ran a meta-regression, which revealed that polygenic scores predicted verbal intelligence significantly better than other types of intelligence – including general intelligence – with the weakest predictions seen for memory and crystallized intelligence (factual recall ability).”</p>
<p>“This could be partly because verbal intelligence is more likely to be inherited. People tend to have children with others who share traits in common with them, and verbal communication may matter more than memory or factual recall for most people. We didn’t see clear differences between predictions for other types of intelligence. This tells us that, while polygenic scores can predict people’s differences in general intelligence, they can’t yet be used to discern strengths and weaknesses in specific cognitive abilities.”</p>
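<p>The meta-regression mentioned in the quote above amounts to regressing each sample's effect size on a study-level characteristic, with samples weighted by their precision. The minimal Python sketch below shows the idea using a made-up indicator for whether the outcome was a verbal test; the numbers are hypothetical and this is not the model specification used in the study.</p>
<pre style="font-family:monospace; font-size:12px; background-color:#f6f6f6; padding:8px;">
import math

# Toy meta-regression: weighted least squares of Fisher-z effect sizes on a
# binary moderator (1 = verbal intelligence outcome). All values are invented.

studies = [   # (correlation, sample size, verbal outcome?)
    (0.30, 20_000, 1),
    (0.27, 35_000, 1),
    (0.22, 50_000, 0),
    (0.24, 80_000, 0),
]

zs = [math.atanh(r) for r, n, v in studies]   # effect sizes on the z scale
ws = [n - 3 for r, n, v in studies]           # precision weights
xs = [v for r, n, v in studies]               # moderator values

total_w = sum(ws)
x_bar = sum(w * x for w, x in zip(ws, xs)) / total_w
z_bar = sum(w * z for w, z in zip(ws, zs)) / total_w

slope = sum(w * (x - x_bar) * (z - z_bar) for w, x, z in zip(ws, xs, zs)) / \
        sum(w * (x - x_bar) ** 2 for w, x in zip(ws, xs))
intercept = z_bar - slope * x_bar

# A positive slope means verbal outcomes were predicted more strongly.
print(f"moderator effect on Fisher z: {slope:.3f}")
</pre>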
<p>The study adds to a growing body of research showing that polygenic scores can provide valuable information about the genetic underpinnings of intelligence. But as with all research, there are limitations. First, it included only individuals of European ancestry from WEIRD countries, which limits the generalizability of the results. Research has shown that polygenic scores derived from European populations are less accurate when applied to individuals from non-European backgrounds.</p>
<p>The study also could not disentangle direct genetic influences from gene-environment interplay, in which genetic predispositions and environmental experiences influence one another. For instance, individuals with a genetic propensity for higher intelligence might seek out more intellectually stimulating environments, further enhancing their cognitive abilities.</p>
<p>“An important thing to remember is that genes do not determine intelligence: They correlate and interact with environmental experiences – like reading books and going to school,” Oxley told PsyPost. “Our differences in intelligence are shaped by genes and environmental factors, working hand-in-hand. However, it wasn’t possible in our meta-analysis to explore this interplay between genes and environments.”</p>
<p>“Another issue that we couldn’t explore in our study was age and generational trends. Research suggests that polygenic scores may predict intelligence more or less strongly at different ages or in different generations. The research data that were available to us here were not detailed enough to draw conclusions about age and generations, but we hope future studies will address these questions.”</p>
<p>Future research could address these limitations by incorporating within-family designs, which control for shared environmental factors, and by including more diverse groups in genome-wide association studies. Additionally, larger sample sizes and refined polygenic scores targeting specific cognitive domains might improve predictive accuracy and help disentangle the complex interplay between genetics and environment.</p>
<p>“We hope that our meta-analysis will inspire future genetically sensitive studies of intelligence,” Oxley said. “One question that we think is important to answer is why polygenic score predictions are stronger for some domains of intelligence and weaker for others. Another is elucidating how polygenic scores and environment come together to shape intelligence.”</p>
<p>“Behavior genetics is a rapidly progressing field. As we learn more, polygenic scores may become useful tools in applied contexts, for example in helping teachers and parents to identify children who might benefit from additional support. Before we can extend the use of polygenic scores to applied contexts though, it is vital to develop guidelines that ensure that genetic tools are not misused or misinterpreted.”</p>
<p>The study, “<a href="https://doi.org/10.1016/j.intell.2024.101871" target="_blank" rel="noopener">DNA and IQ: Big deal or much ado about nothing? – A meta-analysis</a>,” was authored by Florence A.R. Oxley, Kirsty Wilding, and Sophie von Stumm.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<p><strong>Forwarded by:<br />
Michael Reeder LCPC<br />
Baltimore, MD</strong></p>
<p><strong>This information is taken from free public RSS feeds published by each organization for the purpose of public distribution. Readers are linked back to the article content on each organization's website. This email is an unaffiliated unofficial redistribution of this freely provided content from the publishers. </strong></p>
<p> </p>