<table style="border:1px solid #adadad; background-color: #F3F1EC; color: #666666; padding:8px; -webkit-border-radius:4px; border-radius:4px; -moz-border-radius:4px; line-height:16px; margin-bottom:6px;" width="100%">
<tbody>
<tr>
<td><span style="font-family:Helvetica, sans-serif; font-size:20px;font-weight:bold;">PsyPost – Psychology News</span></td>
</tr>
<tr>
<td> </td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/scientists-discover-a-key-mechanism-for-dopamine-to-regulate-brain-activity-and-movement/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Scientists discover a key mechanism for dopamine to regulate brain activity and movement</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Sep 24th 2025, 10:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p>A new study published in <em><a href="https://doi.org/10.3390/brainsci15090979" target="_blank">Brain Sciences</a></em> provides evidence that dopamine promotes movement by directly altering the excitability of key neurons in the brain’s motor circuitry. The research identifies a specific ion channel, known as the inwardly rectifying potassium channel or Kir, as a critical mechanism through which dopamine activates behavior-promoting neurons in both healthy and Parkinsonian brains. The findings help resolve long-standing confusion over how dopamine affects these neurons and suggest that targeting Kir could offer new strategies for treating Parkinson’s disease.</p>
<p>Dopamine is a chemical messenger that plays a major role in the brain’s reward, motivation, and motor systems. One of the brain regions most densely packed with dopamine signaling is the striatum, a central part of the basal ganglia network responsible for initiating and regulating movement. Within the striatum, two main types of neurons—direct pathway medium spiny neurons (also called D1-MSNs) and indirect pathway medium spiny neurons—work together to balance motion and rest. The D1-MSNs are stimulated by dopamine binding to D1-type receptors and tend to promote movement when activated.</p>
<p>The ability of dopamine to increase the activity of D1-MSNs is well-established at the behavioral level. When dopamine is depleted, as occurs in Parkinson’s disease, movement becomes severely impaired. Treatments such as L-dopa aim to replenish dopamine levels and often restore mobility. Yet, despite decades of research, the precise cellular mechanism by which dopamine increases D1-MSN activity has remained controversial. Past studies offered conflicting results, with some suggesting dopamine reduces the excitability of these neurons, which would not align with its known role in promoting movement.</p>
<p>To resolve this discrepancy, the researchers behind the new study turned to ion channels. These are microscopic gates in the neuronal membrane that control the flow of charged particles, helping regulate whether a neuron is at rest or fires an electrical signal. The Kir channel in particular is known to dampen neuronal activity by keeping the cell’s membrane potential low. If dopamine were to inhibit Kir, it would make the neuron more likely to fire. This study set out to test that hypothesis directly.</p>
<p>“I have been studying the brain dopamine system for 30 years, driven by the facts that loss of brain dopamine causes loss of motor function, i.e., Parkinson’s disease, and excessive brain dopamine activity can cause cognitive and behavioral abnormalities as demonstrated in schizophrenia and cocaine abuse. However, it was not known how dopamine produces these profound effects,” said study author Fu-Ming Zhou, a professor of pharmacology at the University of Tennessee College of Medicine.</p>
<p>“Based on our extensive past observations and experiments including failed experiments, we conducted our present study, a 5-year project that focused on medium spiny neurons in the striatum that receive intense dopamine innervation and express a high level of D1 type dopamine receptor.”</p>
<p>The research team used two different mouse models that mimic the loss of dopamine seen in Parkinson’s disease. One model lacked a gene essential for dopamine neuron development, while the other had a targeted knockout of tyrosine hydroxylase, the enzyme necessary to produce dopamine. These models allowed the researchers to examine how neurons respond to dopamine in a context where dopamine receptors are especially sensitive, due to the absence of normal dopamine levels.</p>
<p>To investigate how dopamine affects neuronal excitability, the team prepared brain slices from both normal and dopamine-depleted mice and conducted electrophysiological recordings on D1-MSNs. They applied dopamine directly to the brain tissue and measured the electrical properties of these neurons.</p>
<p>In slices from normal mice, dopamine had a modest effect. It caused a slight depolarization of D1-MSNs, meaning it made the cells slightly more excitable. Input resistance also increased, which means the neurons were more responsive to incoming signals. These effects were consistent with a moderate inhibition of Kir, supporting the idea that dopamine suppresses this potassium current to stimulate behavior-promoting neurons.</p>
<p>But in brain slices from dopamine-depleted mice, the effects of dopamine were much stronger. The D1-MSNs in these mice showed significantly greater depolarization, greater increases in input resistance, and a larger number of action potentials in response to stimulation. These hyperactive responses suggest that in the absence of normal dopamine signaling, D1-type receptors become sensitized, making the neurons far more responsive to dopamine when it is present.</p>
<figure aria-describedby="caption-attachment-228943" class="wp-caption aligncenter"><img fetchpriority="high" decoding="async" src="https://www.psypost.org/wp-content/uploads/2025/09/image.png" alt="" width="504" height="435" class="size-full wp-image-228943" srcset="https://www.psypost.org/wp-content/uploads/2025/09/image.png 504w, https://www.psypost.org/wp-content/uploads/2025/09/image-300x259.png 300w" sizes="(max-width: 504px) 100vw, 504px"><figcaption class="wp-caption-text">[Illustration created by Fu-Ming Zhou]</figcaption></figure>
<p>To directly test whether the Kir channel was involved, the researchers used barium chloride, a known Kir channel blocker. When barium was applied, the excitatory effects of dopamine were largely eliminated. This indicated that dopamine’s action on D1-MSNs is mediated through Kir inhibition. Blocking Kir alone, without dopamine, also increased neuronal excitability, further strengthening this conclusion.</p>
<p>To confirm that Kir inhibition could also influence behavior, the researchers conducted experiments in live mice. They injected barium chloride directly into one side of the striatum in dopamine-deficient mice and observed increased movement, specifically contralateral rotation. This behavioral effect was similar to what is seen when dopamine agonists are injected.</p>
<p>When the researchers combined barium chloride with a D1 receptor agonist, the combined effect was not stronger than barium alone. This suggests that both treatments act on the same pathway and that Kir inhibition plays a central role in the behavioral effects of D1 receptor activation.</p>
<p>“We found that dopamine renders these neurons more excitable, allowing them to facilitate motor and cognitive neural circuits and related behaviors,” Zhou told PsyPost. “These findings are a solid step toward understanding the neuronal mechanisms underlying dopamine’s powerful regulation of brain function. The experiments were conducted in mice, but the brain dopamine system is highly conserved among mammalian animals including humans. So these new findings are likely applicable to humans.”</p>
<p>“Data in our present study and our prior studies show clearly and consistently that the dopamine system in the striatum is the main brain dopamine system and therefore is the main target of antipsychotic drugs, anti-Parkinson drugs, Ritalin (used to treat ADHD), and cocaine. A clear implication is that L-dopa doses (L-dopa is the precursor for dopamine) should be moderate for Parkinson’s disease patients; high L-dopa doses can overdrive the brain to produce excessive/abnormal motor activity and behavior.”</p>
<p>Previous studies have reported a wide range of findings regarding how dopamine influences D1-MSN activity. Some found that dopamine reduces excitability, while others reported no consistent effect. The authors suggest that many of these discrepancies can be explained by technical limitations or differences in experimental design. For instance, some studies used mixed populations of neurons that made it difficult to distinguish D1-MSNs from other types. Others may have applied dopamine in conditions that obscured its true effects.</p>
<p>By using genetically defined mouse models and recording from specifically identified neurons, the present study offers a clearer picture. It provides evidence that dopamine does increase the excitability of D1-MSNs and that this is primarily due to its inhibition of Kir channels.</p>
<p>“My long-term goal is to delineate, reliably, the anatomy, physiology and pharmacology of the striatal dopamine system (the brain’s main dopamine system), therefore contributing reliable knowledge that can be used to develop better treatments for brain diseases,” Zhou said.</p>
<p>The study, “<a href="https://doi.org/10.3390/brainsci15090979" target="_blank">Dopaminergic Inhibition of the Inwardly Rectifying Potassium Current in Direct Pathway Medium Spiny Neurons in Normal and Parkinsonian Striatum</a>,” was authored by Qian Wang, Yuhan Wang, Francesca-Fang Liao, and Fu-Ming Zhou.</p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/texting-abbreviations-come-with-a-hidden-social-penalty-according-to-new-psychology-research/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Texting abbreviations come with a hidden social penalty, according to new psychology research</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Sep 24th 2025, 08:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p>Using texting abbreviations might save a few seconds, but a comprehensive new study suggests this efficiency could come at a social cost. The research indicates that people who use texting shortcuts are perceived as less sincere and are less likely to receive a response, primarily because their messages are seen as requiring less effort. This series of studies was published in the <em><a href="https://psycnet.apa.org/record/2025-41529-001" target="_blank">Journal of Experimental Psychology: General</a></em>.</p>
<p>As digital communication becomes the primary way many people connect, the language we use in these spaces is constantly evolving. Texting has developed its own dialect, filled with abbreviations like “ttyl” (talk to you later) or “hru?” (how are you?). While nearly all texters use these shortcuts, researchers had little understanding of their social consequences. </p>
<p>A team of researchers led by David Fang of Stanford University wanted to investigate how these common abbreviations affect interpersonal perceptions. They considered two competing possibilities: abbreviations could be seen as casual and informal, potentially making people feel closer, or they could be interpreted as a lack of investment in the conversation, harming the connection.</p>
<p>To explore this question, the researchers conducted a series of eight studies involving more than 5,000 participants. They used a variety of methods to see if the effects would appear in different situations. In an initial experiment, participants were shown hypothetical text message conversations. Some participants saw conversations where one person used full sentences, while others saw the same conversations but with common abbreviations. </p>
<p>People who read the abbreviated texts rated the sender as less sincere. They also reported being less likely to text back compared to those who read the fully written messages. The analysis showed that this difference was explained by the perception of effort; participants felt the person using abbreviations was not trying as hard in the conversation.</p>
<p>Another study aimed to see how this perception changed behavior. Participants were put in a position to reply to a message that either contained abbreviations or did not. When responding to messages with abbreviations, participants wrote shorter replies and reported putting less effort into their own messages. This finding suggests a reciprocal effect, where the perceived low effort from one person leads to a similar low-effort response from the other, potentially degrading the quality of the interaction.</p>
<p>To determine if these findings held up in the real world, the researchers analyzed actual text message histories provided by participants. Individuals were asked to submit a recent conversation and then rate their conversational partner. Consistent with the lab experiments, people whose texting partners used more abbreviations perceived them as less sincere. They also indicated they were less likely to want to continue the conversation. The perception of low effort once again appeared to be the underlying reason for these negative judgments.</p>
<p>The investigation moved into a more active, real-world setting with a field experiment on the social messaging platform Discord. The researchers sent direct messages to nearly 1,900 Discord users, asking for a recommendation for an animated show. Half of the messages were written with abbreviations, while the other half used full text. </p>
<p>The messages using full text received a significantly higher response rate. This effect was observed across several different categories of abbreviations, including shortenings like “info” for “information” and contractions like “wats” for “what’s.”</p>
<p>The team also tested for situations that might change this effect. In one experiment, they manipulated the density of abbreviations, showing participants conversations with either a small number (10% of words) or a larger number (20% of words) of shortcuts. They also varied the total length of the conversation. </p>
<p>The negative perception of abbreviations remained consistent. Even a small number of shortcuts was enough to lower ratings of sincerity and the likelihood of a response. The overall length of the conversation did not change this outcome.</p>
<p>Another experiment explored whether relationship closeness would make a difference. Participants were asked to imagine texting either a close friend or a distant acquaintance. Even when imagining a conversation with a close friend, the use of abbreviations still led to perceptions of lower sincerity and a reduced desire to text back. This suggests that the negative impression created by low-effort communication can persist even in established relationships where informality might be expected.</p>
<p>To increase the realism of their investigation, the researchers conducted an interactive speed-dating study. Participants were paired up to have a real-time text conversation with a stranger. Unbeknownst to them, one person in each pair was secretly instructed to use a list of words, which were either full words or their abbreviated versions. </p>
<p>After the five-minute chat, participants were asked if they wanted to exchange contact information to continue talking. Those who had been texting with someone using abbreviations were significantly less likely to agree to exchange contact information.</p>
<p>Finally, the researchers analyzed a large dataset of real conversations from the dating application Tinder. Examining over 200,000 conversations from 686 users, they found a clear pattern: users with a higher proportion of abbreviations in their messages tended to have shorter conversations on average. This correlation held even after accounting for other factors such as gender, education, and other elements of writing style, providing further evidence from a naturalistic dating context that abbreviations are associated with less successful interactions.</p>
<p>“Our research reveals that texting abbreviations negatively affect interpersonal communication by decreasing perceived effort, which in turn leads to lower perceived sincerity and responsiveness. Ultimately, our findings underscore the importance of considering the impact of evolving language use in the digital era on the quality and nature of interpersonal communication,” Fang and his colleagues concluded.</p>
<p>The studies have some limitations. Most of the experiments focused on brief, initial interactions rather than long-term relationships where communication norms might be different. The participants were also primarily English speakers, and it is possible that the perception of abbreviations varies across different cultures and languages. </p>
<p>Future research could explore the long-term effects of using abbreviations on relationship satisfaction and examine these dynamics in different cultural contexts or in group chat settings. Researchers could also investigate the motivations behind using abbreviations, as the sender’s intent might influence how their messages are received.</p>
<p>The study, “<a href="https://psycnet.apa.org/doi/10.1037/xge0001684" target="_blank">Shortcuts to Insincerity: Texting Abbreviations Seem Insincere and Not Worth Answering</a>,” was authored by David Fang, Yiran (Eileen) Zhang, and Sam J. Maglio.</p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/managers-who-use-ai-to-write-emails-seen-as-less-sincere-caring-and-confident/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Managers who use AI to write emails seen as less sincere, caring, and confident</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Sep 24th 2025, 06:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p>Many professionals now use artificial intelligence tools to assist with writing, but a new study suggests that managers who use AI to craft routine workplace emails risk appearing less trustworthy. While AI-assisted messages were generally seen as polished and professional, managers who relied heavily on such tools were viewed as less sincere, caring, and confident by their employees. The findings were published in the <em><a href="https://doi.org/10.1177/23294884251350599" target="_blank">International Journal of Business Communication</a></em>.</p>
<p>The release of generative artificial intelligence tools like ChatGPT sparked a surge of interest in their use for everyday writing tasks, including those in professional settings. Many workers now rely on these tools to draft emails, reports, or internal memos. Research has already shown that AI-assisted writing can enhance the clarity, correctness, and professionalism of workplace messages. But less is known about how senders of such messages are perceived.</p>
<p>The goal of the new study was to examine not the writing itself, but how readers interpret the character of someone who uses AI to compose a message. In other words, does using AI affect how trustworthy, sincere, or competent the writer appears? And does the answer change depending on whether the message was mostly written by AI or lightly assisted?</p>
<p>The research also aimed to explore how these perceptions shift depending on who is using the AI. Are people more forgiving of their own use of AI than they are of others? Do they judge managers differently than peers?</p>
<p>“I believe AI will significantly impact our interpersonal relationships. People will use AI a lot to assist with communication. This already happens in the workplace. I’d like people to be aware of the impact of AI-mediated communication,” said study author <a href="https://www.linkedin.com/in/petercardon/" target="_blank">Peter Cardon</a>, the Warren Bennis Chair in Teaching Excellence and professor of business communication at the University of Southern California.</p>
<p>The research team surveyed 1,158 full-time working professionals in the United States, each of whom spent at least half of their work time on a computer. Participants were randomly shown one of eight different scenarios describing an email message that congratulated a team on reaching its goals. The scenarios varied along two dimensions: who the message was from (either the participant or their supervisor) and how much of the message was generated by AI (ranging from low to high assistance).</p>
<p>Some messages showed just light editing by AI, while others had been mostly written by an AI tool based on a short prompt. In some cases, the original prompt given to the AI was shown to participants; in others, it was not. After reading their assigned message, participants were asked a series of questions about the perceived authorship, effectiveness, professionalism, sincerity, caring, confidence, and comfort level with the use of AI.</p>
<p>The survey included both numerical rating scales and an open-ended question asking participants to explain why they thought authorship did or did not matter in workplace communication.</p>
<p>Overall, the results indicated that while people viewed AI-assisted messages as generally professional and effective, they were less likely to trust the sender—especially when that sender was a supervisor using a high level of AI assistance.</p>
<p>In particular, participants were less likely to believe that supervisors were the true authors of messages heavily assisted by AI. While 93 percent agreed that a supervisor was the author in the low-assistance condition, only 25 percent agreed in the high-assistance condition without a visible prompt.</p>
<p>Despite this, heavily AI-assisted messages were not rated as less effective. In fact, messages with high AI involvement were sometimes seen as slightly more effective than those with less assistance. Participants often described AI as a useful tool for improving grammar, tone, and structure. Many said they didn’t mind if AI was used to polish writing, as long as the content still reflected the sender’s own ideas.</p>
<p>“Minor use of AI, primarily for making small edits to professional emails, is generally considered appropriate,” Cardon told PsyPost.</p>
<p>Still, there was a clear tension between message quality and perceptions of the sender. Supervisors who relied heavily on AI were consistently rated as less sincere, caring, and confident. Only about 40 percent of participants considered supervisors in the high-assistance conditions to be sincere, compared to over 80 percent in the low-assistance conditions.</p>
<p>“The biggest surprise was the intensity of feelings,” Cardon said. “Many respondents expressed indignation about bosses using AI for emails.”</p>
<p>The open-ended responses revealed several reasons behind this skepticism. Many participants expressed a sense of disappointment or frustration when learning that a message—especially a congratulatory one—had been largely written by AI. Some described it as “lazy,” “insincere,” or “dishonest.” Others said it felt like the manager didn’t care enough to write a personal message. This lack of effort was perceived by some as a lack of investment in the team’s success.</p>
<p>Some participants also questioned the competence of supervisors who relied heavily on AI. A number of respondents said they would expect managers to be capable of writing a simple email without outside help, and using AI for this purpose might signal a lack of leadership or communication skills.</p>
<p>The results also showed a significant perception gap between how participants viewed their own use of AI and how they judged others, particularly their supervisors. People tended to evaluate their own AI-assisted writing more favorably than that of their boss. When they imagined themselves using AI, they were more likely to see it as a helpful support tool. But when supervisors used it, especially without much transparency, the use was more likely to raise doubts about sincerity and trustworthiness.</p>
<p>Despite these concerns, most participants said they were generally comfortable with AI being used for this type of message. Even in the high-assistance conditions, a majority said they had no problem with supervisors using AI to write a congratulatory email. However, their comfort often came with caveats. Many participants emphasized that the acceptability of AI use depends on the nature of the message. Messages that are relational or emotional in tone, such as praise or support, were viewed as less appropriate for AI generation than factual updates or routine reminders.</p>
<p>Several respondents also raised longer-term concerns about the repeated use of AI in workplace communication. Some worried that overuse could lead to a loss of human connection or undermine team cohesion. Others feared that if AI becomes the default for all types of messaging, even interpersonal ones, the workplace could begin to feel impersonal or transactional.</p>
<p>“Professionals should be aware of the reputational and relational risks of overusing AI in business communication,” Cardon advised.</p>
<p>As with all research, there are limitations. The study focused on a specific type of message—an email congratulating a team—which may not generalize to all workplace communication. Responses may have differed if the message was about conflict resolution, feedback, or performance reviews. Future research could explore how perceptions vary across different genres of communication and different professional contexts.</p>
<p>The study also centered on the supervisor-subordinate relationship, where power dynamics may heighten concerns about sincerity and trust. Perceptions might differ in peer-to-peer scenarios, or when subordinates use AI to communicate upward.</p>
<p>“We’re at the early stages of mass AI use,” Cardon noted. “The tools will continue to evolve and people’s attitudes may change too.”</p>
<p>The researchers recommend additional studies on whether people feel that AI use should be disclosed, and how that disclosure might affect trust. They also suggest exploring how attitudes toward AI-assisted writing change over time as such tools become more embedded in everyday work life.</p>
<p>“We want to accurately represent people’s views, attitudes, and experiences as AI becomes more embedded in daily communication,” Cardon explained. “We hope this information empowers individuals to use AI in ways that improve their lives and their relationships. We’re all on an AI journey now. We should discuss it and use it thoughtfully and with purpose.”</p>
<p>The study, “<a href="https://doi.org/10.1177/23294884251350599" target="_blank">Professionalism and Trustworthiness in AI Assisted Workplace Writing: The Benefits and Drawbacks of Writing With AI</a>,” was authored by Peter W. Cardon and Anthony W. Coman.</p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/even-light-alcohol-drinking-raises-dementia-risk-according-to-largest-genetic-study-to-date/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Even light alcohol drinking raises dementia risk, according to largest genetic study to date</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Sep 23rd 2025, 19:27</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>A large-scale international study combining genetic data and observational records suggests that alcohol consumption of any amount may raise the risk of developing dementia. While earlier research indicated light drinking could be protective, this new study provides evidence that such findings may reflect reverse causation rather than a true beneficial effect. The authors conclude that reducing alcohol consumption could be an effective public health strategy to lower dementia rates.</p>
<p>The research was led by Anya Topiwala of the University of Oxford’s Nuffield Department of Population Health, alongside colleagues from Yale University, Harvard University, the University of Cambridge, and other institutions. Their findings were published in <em><a href="https://doi.org/10.1136/bmjebm-2025-113913" target="_blank">BMJ Evidence-Based Medicine</a></em>. </p>
<p>By analyzing data from more than half a million participants in the United States and the United Kingdom, and combining this with genetic data from over two million individuals, the researchers sought to clarify a longstanding question: does alcohol truly protect against dementia at low levels, or does any amount carry risk?</p>
<p>Past observational studies have consistently reported what is called a U-shaped or J-shaped relationship between alcohol and dementia. These studies suggested that people who drank lightly or moderately had a lower risk of dementia compared to both non-drinkers and heavy drinkers. This pattern led some to propose that small amounts of alcohol might be protective for brain health. But this idea has been controversial. </p>
<p>Critics have pointed out that many non-drinkers in these studies may have stopped drinking due to health problems, making them appear at higher risk. This could create a misleading picture, where abstainers seem worse off not because they avoid alcohol, but because they have other underlying health issues. Additionally, some studies may have lacked enough data on heavy drinkers to properly capture the full range of alcohol’s effects. To address these issues, the researchers used a combination of traditional and genetic methods, allowing for a more rigorous test of causality.</p>
<p>To investigate this relationship, the researchers drew on two major population-based cohorts: the United States Million Veteran Program and the UK Biobank. Together, these included over 559,000 adults aged 56 to 72 at baseline. During the follow-up periods, which ranged from 4 to 12 years depending on the cohort, more than 14,000 participants developed dementia and over 48,000 died.</p>
<p>Initial observational analyses suggested that both heavy drinkers and non-drinkers had higher dementia rates than light drinkers. Specifically, individuals who drank more than 40 drinks per week or had a history of alcohol use disorder showed increased risk, as did those who abstained entirely. At first glance, this pattern echoed past findings suggesting a possible benefit of light drinking.</p>
<p>However, the researchers then applied a genetic method known as Mendelian randomization. This technique uses genetic variants associated with alcohol consumption as a proxy to estimate lifetime exposure to alcohol. Unlike traditional observational methods, it is less likely to be distorted by reverse causation or unmeasured confounding factors. </p>
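<p>To make the logic of Mendelian randomization concrete, here is a minimal simulation of its simplest form, the Wald ratio estimator. All data and effect sizes below are invented for illustration and have nothing to do with the study’s actual data or methods (which were considerably more elaborate); the sketch only shows why a genetic instrument can recover a causal effect that a naive regression gets wrong when an unmeasured confounder influences both exposure and outcome.</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Purely illustrative simulated data (not the study's data).
g = rng.binomial(2, 0.3, n)              # genetic variant: the instrument
u = rng.normal(size=n)                   # unmeasured confounder
alcohol = 0.5 * g + u + rng.normal(size=n)               # exposure
outcome = 0.2 * alcohol + u + rng.normal(size=n)         # true causal effect: 0.2

# Naive regression slope is biased upward by the confounder u.
naive = np.cov(alcohol, outcome)[0, 1] / np.var(alcohol, ddof=1)

# Wald ratio: (effect of gene on outcome) / (effect of gene on exposure).
beta_gy = np.cov(g, outcome)[0, 1] / np.var(g, ddof=1)
beta_gx = np.cov(g, alcohol)[0, 1] / np.var(g, ddof=1)
mr_estimate = beta_gy / beta_gx          # recovers ~0.2 despite confounding
</code></pre>
<p>Because the variant is assigned at conception, it is independent of the confounder, so the ratio of the two gene–trait associations isolates the exposure’s causal effect. This is the intuition behind the study’s claim that its genetic analyses are less distorted by reverse causation or unmeasured confounding.</p>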
<p>These analyses told a different story. They showed a steady increase in dementia risk as alcohol intake increased, without any sign of benefit at lower levels. A genetically predicted increase in weekly alcohol consumption was linked to a 15 percent higher dementia risk. Likewise, genetic risk for alcohol use disorder was associated with a 16 percent increase in dementia risk.</p>
<p>The research team also analyzed how alcohol use changed over time in people who eventually developed dementia. Using repeated clinical alcohol screening data from the Million Veteran Program, they found that individuals who went on to develop dementia tended to reduce their alcohol intake in the years before diagnosis. </p>
<p>This suggests that lower drinking levels in people with early signs of cognitive decline may have contributed to the misleading appearance of a protective effect in past observational studies. In other words, rather than alcohol preventing dementia, it may be that the onset of dementia leads people to drink less.</p>
<p>The genetic analyses were bolstered by data from over 2.4 million individuals and considered a wide range of alcohol-related traits. This included both how much people drank and whether they exhibited problematic or dependent drinking patterns. Results were consistent across different types of alcohol exposure, and the findings held up in several sensitivity tests. Notably, the researchers found no evidence of a non-linear or U-shaped relationship in their genetic analyses. The more alcohol a person was genetically predisposed to consume, the higher their risk of dementia appeared to be.</p>
<p>By combining traditional observational data with genetic approaches, the study offers a more nuanced understanding of alcohol’s potential impact on the brain. The authors suggest that public health policies aiming to reduce the prevalence of alcohol use disorder could help lower dementia rates across populations. In their calculations, halving the number of people with alcohol use disorder could reduce dementia cases by up to 16 percent.</p>
<p>While the study has several strengths, including its large sample size, diverse ancestry representation, and use of multiple analytic methods, it is not without limitations. For example, dementia diagnoses were based on medical records, which may not always be accurate or complete. The genetic data also reflect lifelong tendencies rather than specific drinking patterns during certain life stages. In addition, the strongest findings came from participants of European ancestry, with somewhat weaker signals in individuals of African or Latin American descent, likely due to smaller sample sizes in these groups.</p>
<p>Another limitation relates to how alcohol intake was measured. In many cases, drinking behavior was self-reported, which can lead to underestimation or misclassification. There was also limited information on lifetime drinking history, which might influence risk in ways not captured by current or recent behavior alone. </p>
<p>“Authors rightly acknowledge several important limitations of the study. Self-reported alcohol use may not be accurate, particularly if people have memory problems in early stages of dementia, and the genetic markers used as predictors of both alcohol intake and dementia are not perfect,” Tara Spires-Jones, the director of the Centre for Discovery Brain Sciences at the University of Edinburgh, who was not involved in the study, told <a href="https://www.sciencemediacentre.org/expert-reaction-to-study-looking-at-the-association-between-any-amount-of-alcohol-consumption-and-risk-of-dementia/" target="_blank">the Science Media Centre</a>.</p>
<p>“Neither part of the study can conclusively prove that alcohol use directly causes dementia, but this adds to a large amount of similar data showing associations between alcohol intake and increased dementia risk, and fundamental neuroscience work has shown that alcohol is directly toxic to neurons in the brain.”</p>
<p>Despite these caveats, the study’s combination of approaches provides stronger evidence than most previous efforts. The researchers emphasize that their results challenge the popular belief that light or moderate alcohol consumption is safe—or even beneficial—for brain health. While this notion has been widely accepted for years, it may have been based on misinterpreted data. The genetic evidence in this study suggests that any level of alcohol exposure could increase the risk of dementia, and that public health messages should reflect this possibility.</p>
<p>Future research may explore whether specific types of alcohol, patterns of drinking, or interactions with other lifestyle or genetic factors influence dementia risk differently. Researchers may also investigate whether the mechanisms linking alcohol to cognitive decline involve direct neurotoxicity, vascular damage, inflammation, or other biological pathways.</p>
<p>For now, the findings support a more cautious stance toward alcohol use, particularly among middle-aged and older adults. While moderate drinking has long been portrayed as compatible with healthy aging, the evidence from this study suggests that even small amounts of alcohol might have long-term consequences for brain function.</p>
<p>The study, “<a href="https://doi.org/10.1136/bmjebm-2025-113913" target="_blank">Alcohol use and risk of dementia in diverse populations: evidence from cohort, case–control and Mendelian randomisation approaches</a>,” was authored by Anya Topiwala, Daniel F. Levey, Hang Zhou, Joseph D. Deak, Keyrun Adhikari, Klaus P. Ebmeier, Steven Bell, Stephen Burgess, Thomas E. Nichols, Michael Gaziano, Murray Stein, and Joel Gelernter.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/the-surprising-link-between-your-sleep-schedule-and-your-belly-fat-according-to-a-brain-expert/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">The surprising link between your sleep schedule and your belly fat, according to a brain expert</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Sep 23rd 2025, 14:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>You stayed up too late scrolling through your phone, answering emails or watching just one more episode. The next morning, you feel groggy and irritable. That sugary pastry or greasy breakfast sandwich suddenly looks more appealing than your usual yogurt and berries. By the afternoon, chips or candy from the break room call your name. This isn’t just about willpower. Your brain, short on rest, is nudging you toward quick, high-calorie fixes.</p>
<p>There is a reason why this cycle repeats itself so predictably. Research shows that <a href="https://doi.org/10.1038/s41574-022-00747-7">insufficient sleep disrupts hunger signals</a>, <a href="https://doi.org/10.1016/j.smrv.2021.101514">weakens self-control</a>, <a href="https://doi.org/10.1016/j.metabol.2018.02.010">impairs glucose metabolism</a> and <a href="https://doi.org/10.1002/oby.23539">increases your risk of weight gain</a>. These changes <a href="https://doi.org/10.1523/JNEUROSCI.0250-18.2018">can occur rapidly</a>, even after a single night of poor sleep, and <a href="https://doi.org/10.1111/nyas.14926">can become more harmful over time</a> if left unaddressed.</p>
<p>I am a <a href="https://scholar.google.com/citations?user=sTqquL0AAAAJ&hl=en">neurologist specializing in sleep science</a> and its impact on health.</p>
<p>Sleep deprivation affects millions. According to the Centers for Disease Control and Prevention, more than one-third of U.S. adults <a href="https://www.cdc.gov/sleep/data-research/facts-stats/adults-sleep-facts-and-stats.html">regularly get less than seven hours of sleep</a> per night. Nearly three-quarters of adolescents <a href="https://www.cdc.gov/sleep/data-research/facts-stats/high-school-students-sleep-facts-and-stats.html">fall short of the recommended 8-10 hours of sleep</a> during the school week.</p>
<p>While anyone can suffer from sleep loss, essential workers and first responders, including nurses, firefighters and emergency personnel, are <a href="https://doi.org/10.1136/bmj.i5210">especially vulnerable</a> <a href="https://doi.org/10.3390/jcm13154505">due to night shifts and rotating schedules</a>. These patterns disrupt the body’s internal clock and are linked to increased cravings, poor eating habits and elevated risks for obesity and metabolic disease. Fortunately, even a few nights of consistent, high-quality sleep can help rebalance key systems and start to reverse some of these effects.</p>
<h2>How sleep deficits disrupt hunger hormones</h2>
<p>Your body regulates hunger through a <a href="https://doi.org/10.1056/NEJMra2402679">hormonal feedback loop</a> <a href="https://doi.org/10.1016/j.peptides.2025.171367">involving two key hormones</a>.</p>
<p>Ghrelin, produced primarily in the stomach, signals that you are hungry, while leptin, which is produced in the fat cells, tells your brain that you are full. Even one night of restricted sleep <a href="https://doi.org/10.1002/oby.23616">increases the release of ghrelin and decreases leptin</a>, which leads to greater hunger and reduced satisfaction after eating. This shift is driven by changes in how the <a href="https://doi.org/10.1002/oby.23616">body regulates hunger</a> and stress. Your brain becomes less responsive to fullness signals, while at the same time <a href="https://doi.org/10.1111/obr.13051">ramping up stress hormones</a> that can increase cravings and appetite.</p>
<p>These changes are not subtle. In controlled lab studies, healthy adults reported <a href="https://doi.org/10.1111/j.1365-2869.2008.00662.x">increased hunger and stronger cravings</a> for calorie-dense foods after sleeping only four to five hours. The effect worsens with ongoing sleep deficits, which can lead to a <a href="https://doi.org/10.1111/obr.13051">chronically elevated appetite</a>.</p>
<h2>Why the brain shifts into reward mode</h2>
<p>Sleep loss changes how your brain evaluates food.</p>
<p>Imaging studies show that <a href="https://doi.org/10.1038/ncomms3259">after just one night of sleep deprivation</a>, the prefrontal cortex, which is responsible for decision-making and impulse control, <a href="https://doi.org/10.3945/ajcn.111.027383">has reduced activity</a>. At the same time, <a href="https://doi.org/10.1523/JNEUROSCI.0250-18.2018">reward-related areas such as the amygdala</a> and the nucleus accumbens, a part of the brain that drives motivation and reward-seeking, become <a href="https://doi.org/10.1093/sleep/zsx125">more reactive to tempting food cues</a>.</p>
<p>In simple terms, your brain becomes more tempted by junk food and less capable of resisting it. Participants in sleep deprivation studies not only rated high-calorie foods as more desirable but were also <a href="https://doi.org/10.3945/ajcn.111.027383">more likely to choose them</a>, regardless of how hungry they actually felt.</p>
<h2>Your metabolism slows, leading to increased fat storage</h2>
<p>Sleep is also critical for blood sugar control.</p>
<p>When you’re well rested, your body efficiently uses insulin to move sugar out of your bloodstream and into your cells for energy. But even one night of partial sleep can <a href="https://doi.org/10.1210/jc.2009-2430">reduce insulin sensitivity by up to 25%</a>, leaving more sugar circulating in your blood.</p>
<p>If your body can’t process sugar effectively, it’s more likely to convert it into fat. This contributes to weight gain, especially around the abdomen. Over time, poor sleep is associated with <a href="https://doi.org/10.1161/CIR.0000000000000444">higher risk for Type 2 diabetes and metabolic syndrome</a>, a <a href="https://www.mayoclinic.org/diseases-conditions/metabolic-syndrome/symptoms-causes/syc-20351916">group of health issues</a> such as high blood pressure, belly fat and high blood sugar that raise the risk for heart disease and diabetes.</p>
<p>On top of this, sleep loss raises cortisol, your body’s main stress hormone. <a href="https://doi.org/10.1210/jc.2013-4254">Elevated cortisol encourages fat storage</a>, especially in the abdominal region, and can further disrupt appetite regulation.</p>
<h2>Sleep is your metabolic reset button</h2>
<p>In a culture that glorifies hustle and late nights, sleep is often treated as optional. But your body doesn’t see it that way. Sleep is not downtime. It is active, essential repair. It is when your brain recalibrates hunger and reward signals, your hormones reset and your metabolism stabilizes.</p>
<p>Just one or two nights of quality sleep can begin to <a href="https://doi.org/10.1152/ajpendo.00301.2013">undo the damage from prior sleep loss</a> and restore your body’s natural balance.</p>
<p>So the next time you find yourself reaching for junk food after a short night, recognize that your biology is not failing you. It is reacting to stress and fatigue. The most effective way to restore balance isn’t a crash diet or caffeine. It’s sleep.</p>
<p>Sleep is not a luxury. It is your most powerful tool for appetite control, energy regulation and long-term health.<!-- Below is The Conversation's page counter tag. Please DO NOT REMOVE. --><img decoding="async" src="https://counter.theconversation.com/content/255726/count.gif?distributor=republish-lightbox-basic" alt="The Conversation" width="1" height="1"><!-- End of code. If you don't see any code above, please get new code from the Advanced tab after you click the republish button. The page counter does not collect any personal data. More info: https://theconversation.com/republishing-guidelines --></p>
<p> </p>
<p><em>This article is republished from <a href="https://theconversation.com">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/sleep-loss-rewires-the-brain-for-cravings-and-weight-gain-a-neurologist-explains-the-science-behind-the-cycle-255726">original article</a>.</em></p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<table style="font:13px Helvetica, sans-serif; border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; background-color:#fff; padding:8px; margin-bottom:6px; border:1px solid #adadad;" width="100%">
<tbody>
<tr>
<td><a href="https://www.psypost.org/study-shows-people-feel-happier-when-doing-everyday-activities-with-others/" style="font-family:Helvetica, sans-serif; letter-spacing:-1px;margin:0;padding:0 0 2px;font-weight: bold;font-size: 19px;line-height: 20px;color:#222;">Study shows people feel happier when doing everyday activities with others</a>
<div style="font-family:Helvetica, sans-serif; text-align:left;color:#999;font-size:11px;font-weight:bold;line-height:15px;">Sep 23rd 2025, 12:00</div>
<div style="font-family:Helvetica, sans-serif; color:#494949;text-align:justify;font-size:13px;">
<p><p>From eating and walking to reading and commuting, a recent study published in <a href="https://doi.org/10.1177/19485506251364333"><em>Social Psychological and Personality Science</em></a> has found that Americans consistently experience greater happiness when engaging in activities with company rather than alone.</p>
<p>The link between social interaction and well-being is well established in psychological research. Humans are inherently social beings, and numerous studies have shown that spending time with others tends to elevate mood and life satisfaction. However, previous research has rarely examined whether specific activities – especially those typically done alone – are more enjoyable when shared.</p>
<p>To address this gap, researchers Dunigan Folk and Elizabeth Dunn analyzed data from four waves of the American Time Use Survey (2010, 2012, 2013, and 2021), which included responses from 41,094 participants. Each participant described their previous day in detail and rated their happiness during three randomly selected activity episodes. They also indicated whether they were interacting with someone during each episode, either in person or via their phone.</p>
<p>The researchers excluded inherently social activities such as caregiving and phone calls, focusing instead on over 80 common activities ranging from household chores to leisure pursuits. Using statistical modeling, they assessed whether social interaction was associated with increased happiness across these activities.</p>
<p>Of the 297 activity-specific analyses conducted, 296 showed a positive association between social interaction and happiness. The sole exception was kitchen clean-up in 2021, which showed a slight decrease in enjoyment when done with others.</p>
<p>When paired with socializing, eating and drinking yielded the largest happiness gains, followed by travel and active leisure activities such as walking and running. Even traditionally solitary activities like reading, arts and crafts, and commuting were rated as more enjoyable when someone else was present.</p>
<p>To ensure that the results were not simply due to happier individuals choosing to socialize, the researchers conducted additional analyses controlling for participants’ prior mood. They found that social interaction still predicted increased happiness, suggesting that the effect was not merely a reflection of pre-existing emotional states.</p>
<p>Folk and Dunn posed a question: “if everything is better together, then why do people still choose to do things alone? One obvious explanation is that companionship is not always available. [Also] … people are motivated by more than just happiness, and solitude is … often necessary for many personal pursuits… preparing for that big exam may require foregoing that always fun — but always unproductive — group study session.”</p>
<p>Despite its strengths, the study has several limitations. For instance, the measure of social interaction was binary and did not capture the nature or quality of the interaction. The survey also lacked information on individual personality traits, such as introversion, which could influence how social interaction affects happiness.</p>
<p>The study, “<a href="https://doi.org/10.1177/19485506251364333">Everything Is Better Together: Analyzing the Relationship Between Socializing and Happiness in the American Time Use Survey</a>,” was authored by Dunigan Folk and Elizabeth Dunn.</p></p>
</div>
<div style="font-family:Helvetica, sans-serif; font-size:13px; text-align: center; color: #666666; padding:4px; margin-bottom:2px;"></div>
</td>
</tr>
</tbody>
</table>
<p><strong>Forwarded by:<br />
Michael Reeder LCPC<br />
Baltimore, MD</strong></p>
<p><strong>This information is taken from free public RSS feeds published by each organization for the purpose of public distribution. Readers are linked back to the article content on each organization's website. This email is an unaffiliated unofficial redistribution of this freely provided content from the publishers. </strong></p>
<p> </p>
<p><small><a href="https://blogtrottr.com/unsubscribe/565/DY9DKf">unsubscribe from this feed</a></small></p>