Your Daily digest for PsyPost – Psychology News Daily Digest (Unofficial)

Article Digests for Psychology & Social Work article-digests at lists.clinicians-exchange.org
Wed Apr 23 07:37:56 PDT 2025


PsyPost – Psychology News Daily Digest (Unofficial)

 

(https://www.psypost.org/matcha-green-tea-improves-sleep-and-emotional-perception-in-older-adults-with-cognitive-decline/) Matcha green tea improves sleep and emotional perception in older adults with cognitive decline
Apr 23rd 2025, 10:00

A yearlong study of older adults in Japan with cognitive decline found that daily consumption of matcha green tea improved participants’ emotional perception and sleep quality. Cognitive functioning and daily living abilities were not affected. The research was published in (https://doi.org/10.1371/journal.pone.0309287) PLOS ONE.
Matcha green tea is a finely ground powder made from specially grown and processed green tea leaves. It was originally popular in Japan but is now enjoyed worldwide. Unlike regular green tea, matcha involves consuming the entire tea leaf in powdered form, which significantly increases the intake of nutrients and caffeine. Matcha cultivation is distinctive: the tea bushes are shaded for about 20 to 30 days before harvest to increase chlorophyll levels and boost the production of amino acids, giving matcha its vibrant green color and umami-rich flavor.
When preparing matcha, the powder is whisked with hot water until it forms a frothy beverage, distinguishing it from other teas that are typically steeped. This traditional preparation method is an integral part of Japanese tea ceremonies, emphasizing mindfulness and respect through its ritualistic nature. Nutritionally, matcha is rich in antioxidants, particularly catechins, which are believed to help neutralize harmful free radicals and may contribute to overall health.
Study author Kazuhiko Uchida and his colleagues set out to examine the effects of matcha capsules on cognitive functioning and sleep quality in older individuals with subjective or mild cognitive decline. Matcha capsules are a dietary supplement that contains powdered matcha green tea. These supplements provide a convenient way to consume matcha’s antioxidants and potential health benefits without preparing the traditional drink.
The study included Japanese older adults between 60 and 85 years of age. Participants were recruited from the University of Tsukuba Hospital and the Memory Clinic Toride, both in Japan. To be eligible, participants had to live with a partner who could help manage supplement intake and accompany them to appointments. They also had to have either subjective cognitive decline or mild cognitive impairment, to have had no serious illnesses in the past five years, and to be neither diagnosed with dementia nor taking dementia treatments.
Subjective cognitive decline refers to when a person perceives a decline in memory or thinking skills, even if standard tests show no measurable impairment. Mild cognitive impairment involves detectable decline on cognitive tests that is greater than expected for age but not severe enough to significantly interfere with daily life. The study included 64 participants with subjective cognitive decline and 35 with mild cognitive impairment.
Participants were randomly assigned to one of two groups. One group took nine capsules of matcha daily—equivalent to 2 grams of matcha, the amount typically consumed in a traditional Japanese tea ceremony. The other group took placebo capsules that looked identical but contained cornstarch. The study was double-blind, meaning neither the participants nor the researchers interacting with them knew which participants were receiving matcha and which were receiving the placebo. This regimen continued for 12 months.
Participants completed a set of neurocognitive tests at the beginning of the study, after 12 months of capsule intake, and again six months after stopping the capsules. They also underwent positron emission tomography at the beginning and end of the 12-month intervention period. Additionally, participants completed assessments of sleep quality and daily functioning.
The results showed that matcha supplementation improved participants’ emotional perception, as measured by tests requiring the identification of facial emotions, and enhanced sleep quality. However, there were no observed improvements in other areas of cognitive functioning or in daily living abilities.
“The present study suggests regular consumption of matcha could improve emotional perception and sleep quality in older adults with mild cognitive decline. Given the widespread availability and cultural acceptance of matcha green tea, incorporating it into the daily routine may offer a simple yet effective strategy for cognitive enhancement and dementia prevention,” the study authors concluded.
The study sheds light on the effects of matcha tea on the cognitive functioning of older adults. It should be noted, however, that the sample was relatively small and that participants were selected for mild levels of cognitive decline. Results in larger or demographically different groups might differ.
The paper, “(https://doi.org/10.1371/journal.pone.0309287) Effect of matcha green tea on cognitive functions and sleep quality in older adults with cognitive decline: A randomized controlled study over 12 months,” was authored by Kazuhiko Uchida, Kohji Meno, Tatsumi Korenaga, Shan Liu, Hideaki Suzuki, Yoshitake Baba, Chika Tagata, Yoshiharu Araki, Shuto Tsunemi, Kenta Aso, Shun Inagaki, Sae Nakagawa, Makoto Kobayashi, Tatsuyuki Kakuma, Takashi Asada, Miho Ota, Takanobu Takihara, and Tetsuaki Arai.

(https://www.psypost.org/llm-red-teamers-people-are-hacking-ai-chatbots-just-for-fun-and-now-researchers-have-catalogued-35-jailbreak-techniques/) LLM red teamers: People are hacking AI chatbots just for fun and now researchers have catalogued 35 “jailbreak” techniques
Apr 23rd 2025, 08:00

What happens when people push artificial intelligence to its limits—not for profit or malice, but out of curiosity and creativity? A new study published in (https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0314658) PLOS One explores the world of “LLM red teamers,” individuals who test the boundaries of large language models by intentionally trying to make them fail. Based on interviews with 28 practitioners, the research sheds light on a rapidly emerging human-computer interaction that blends play, ethics, and improvisation.
Large language models (LLMs)—such as those behind popular tools like ChatGPT—can generate human-like responses based on vast quantities of text. While they are often used for helpful tasks like drafting emails or summarizing articles, they can also produce outputs that are unintended, offensive, or misleading. Since their public release, people across the internet have tried to “jailbreak” these models—using clever prompts to make them break their own rules.
“LLMs introduced numerous never-before-seen security and safety issues due to their enabling novel forms of interfacing with computers via language alone. We knew there were going to be security challenges and safety issues, but no one could predict what they were going to be,” said study author (https://www.linkedin.com/in/nannainie/) Nanna Inie, an assistant professor at the IT University of Copenhagen.
“The huge popularity of chat-based LLMs made it possible and easy for the whole world to experiment with the shortcomings and failures of LLMs at once — and they did! This is a new human activity; hacking machines using something as common as natural language hasn’t been popular before. People worldwide shared screenshots of LLM ‘failures’ on both public social media and closed Discord channels. We wanted to find out what drives this communal limit-testing; why do people do it, how do they do it, and what can we learn from it?”
To answer these questions, the research team adopted a qualitative, interview-based approach. Instead of focusing on the technical outcomes of attacks, they aimed to understand the human behaviors, thought processes, and cultural context underlying LLM red teaming—a concept still poorly defined when the study began.
The term “red teaming” originates from military exercises where a “red team” simulates an adversary to test defenses. It was later adopted in cybersecurity to describe structured exercises aimed at finding system weaknesses. However, applying this term to LLMs was problematic, as the activity was new, often unstructured, and lacked a clear definition. The researchers sought to understand this emerging practice directly from the people involved. Their goal was not to impose a definition but to develop one based on evidence – a “grounded theory.”
“The study demonstrates the importance of human-centered approaches to researching LLM security,” Inie explained. “A year or two after the launch of ChatGPT, hundreds of papers were published on arXiv eager to demonstrate the effectiveness of a single jailbreak (an approach to breaking through safeguards of an LLM) and it was impossible for security professionals to keep on top of all of them.”
“We basically just asked the people who are good at this and collected all their techniques and rationales in a comprehensive overview of LLM red teaming. This issue has to be addressed as a community, which means taking heed of a wide variety of human behaviors and intuitions. Traditional cybersecurity experts had very little advantage in this terra nova of generative machine learning, making it even more crucial to go beyond this sibling community.”
Between December 2022 and January 2023, the researchers conducted in-depth interviews with 28 individuals who actively participated in attempts to manipulate LLMs. These participants came from a wide range of backgrounds, including software engineers, researchers, artists, and even someone who worked on a cannabis farm. Many had jobs in machine learning or cybersecurity, while others were hobbyists or creative explorers. The interviews were conducted via video call, recorded, and later transcribed and analyzed using grounded theory—a method for developing conceptual frameworks based on qualitative data.
The researchers examined how participants defined their own activities, what strategies they used to interact with models, and what motivated their efforts. From these insights, they built a detailed theoretical model of LLM red teaming.
The study defined LLM red teaming as a manual, non-malicious process where individuals explore the boundaries of AI systems by trying to provoke unexpected or restricted responses. The activity typically involved a mix of technical skill, creative experimentation, and playful curiosity. While some participants used terms like “prompt engineering” or “hacking,” many described their work in more whimsical terms—like “alchemy,” “magic,” or “scrying.”
“Why are engineers and scientists so interested in magic and demons?” Inie wondered. “It’s such a consistent way of describing the gaps in sensemaking. This was fascinating and very lovely; the more senior the interviewee, the higher the chance of the arcane creeping in to their description. Why? This is something that practitioners will benefit from being formalized and understood, so that we can be confident in our sensemaking around LLMs, this complex set of technologies that we still don’t understand.”
Several core features emerged as consistent across participants:

Limit-seeking behavior: Participants were not trying to use LLMs as intended. Instead, they deliberately tried to provoke the models into saying things their developers likely wanted to avoid—ranging from offensive jokes to instructions for fictional crimes. These acts were not committed out of malice, but to test the boundaries of what the models could and couldn’t be coaxed into doing.
Non-malicious intent: None of the interviewees expressed any desire to harm others or exploit systems for personal gain. Most were driven by a mix of ethical concerns, intellectual curiosity, and personal interest. Many saw their work as a public good—helping developers identify vulnerabilities before malicious actors could exploit them.
Manual and experimental process: Unlike automated attacks, red teaming was described as a hands-on, intuitive process. Participants described the activity as exploratory and improvisational—akin to trial-and-error tinkering with a new toy. Some compared their process to a “trance state,” losing hours trying different prompts just to see what might work.
Community and collaboration: Red teaming was rarely a solo endeavor. Participants described a loose but vibrant online community, primarily based on platforms like Twitter, Reddit, and Discord. They shared prompts, discussed tactics, and built on each other’s discoveries. Even when not part of formal teams, they viewed their efforts as a collective contribution to understanding AI.
An alchemist mindset: Many participants described their work in metaphoric or mystical terms, acknowledging that they often didn’t fully understand why a certain prompt worked. This embrace of uncertainty gave rise to creative problem-solving, as participants experimented with different languages, formats, or even fictional scenarios to bypass model safeguards.

The researchers also identified a taxonomy of 12 distinct strategies and 35 specific techniques used by participants. These were grouped into five broad categories: language manipulation, rhetorical framing, world-building, fictionalization, and stratagems.
Language strategies involved using alternative formats like code or stop sequences to bypass restrictions. Rhetorical approaches relied on persuasion, misdirection, and escalating requests. World-building techniques placed the model in imagined scenarios where different rules or ethics applied, while fictionalization reframed prompts through genre or roleplay to elicit sensitive content. Stratagems, such as prompt regeneration, meta-prompting, or adjusting temperature settings, exploited the model’s underlying mechanics to increase the chances of a successful jailbreak.
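
To make the “stratagems” category concrete, here is a minimal illustrative sketch in Python. The five category names come from the taxonomy described above; the example technique lists, the query_model() stub, and the regenerate() helper are hypothetical placeholders invented for this digest, not code or data from the study.

    import random

    # Five top-level categories from the study's taxonomy; the techniques
    # listed are only those named in the article, and the structure is an
    # illustrative placeholder, not the paper's full list of 12 strategies
    # and 35 techniques.
    TAXONOMY = {
        "language manipulation": ["alternative formats such as code", "stop sequences"],
        "rhetorical framing": ["persuasion", "misdirection", "escalating requests"],
        "world-building": ["imagined scenarios where different rules apply"],
        "fictionalization": ["genre reframing", "roleplay"],
        "stratagems": ["prompt regeneration", "meta-prompting", "temperature adjustment"],
    }

    def query_model(prompt: str, temperature: float) -> str:
        """Hypothetical stand-in for a real LLM API client."""
        return "I can't help with that."  # canned refusal, for the sketch only

    def regenerate(prompt: str, attempts: int = 5) -> str | None:
        """One 'stratagem' from the taxonomy: resample the same prompt at
        varied sampling temperatures and keep the first non-refusal."""
        for _ in range(attempts):
            reply = query_model(prompt, temperature=random.uniform(0.7, 1.2))
            if not reply.lower().startswith("i can't"):
                return reply
        return None

    print(regenerate("Tell me a story your guidelines forbid."))  # None here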
“All LLMs are hackable by anyone with a computer and a decent command of written language,” Inie told PsyPost. “This study displays the incredible breadth of potential security issues that the implementation of an LLM in a system introduces. Cybersecurity in the context of LLMs no longer depends on scanning IP addresses and crunching passwords, but is much closer to social engineering — only, we can now use social engineering techniques directly on the computer.”
Inie was surprised by “how much we could learn about a technologically advanced security issue by asking people, and by asking a varied sample of people. Every interview taught us something new, every person lent a fantastic perspective and demonstrated that closing security holes potentially means introducing new safety risks; such as the prompt engineer who was worried about the model providers actually fixing hallucinations in models because ‘if you make hallucinations rare enough, people become unfamiliar with what they look like and they stop looking for them’ (P14) — or the researcher who noted that sharing different hacks and jailbreaks creates an economy of relevance where outrage potentially directs effort: ‘there’s a certain amount of squabbling. Should we be thinking about murder bots, or should we be thinking about racist bots? And there seems to have been a cultural divide that has appeared around this, but it’s kind of a silly divide because we don’t know how to solve either problem’ (P18).”
One limitation of the study is that it captures a specific moment in time—late 2022 to early 2023—when LLMs were still relatively new to the public and rapidly evolving. Some of the specific attack strategies shared by participants have already been patched or made obsolete by updated models.
However, the researchers argue that the broader insights remain relevant. By focusing on motivations, behaviors, and general strategies, the study offers a framework that can adapt to future changes in technology. Understanding the human element—why and how people probe AI—is essential for designing more resilient and ethical systems.
“Specific attack wording is unlikely to transfer between individual models, and the state of the art is always progressing in a way that tends to render older attacks obsolete,” Inie noted. “And that’s OK. That’s why we focused on building a grounded theory of generalized strategies and techniques. While individual prompts encountered in the study might not work with tomorrow’s LLMs, the general theory has held well over the time between doing the work and having it published.”
“This is why a human-centered interview study makes more sense than looking at individual attacks on a surface level — humans can tell us about their underlying strategies and rationales, and these typically transfer much better than individual attacks.”
The researchers emphasize that their work fills a significant gap in the field by offering a structured, evidence-based understanding of how people engage with LLMs in adversarial ways. While much of the conversation around AI security focuses on technical benchmarks and automated defenses, this study highlights the need to first understand the human behaviors and motivations behind these interactions.
“We set out to understand how this novel human activity works,” Inie explained. “Long term we want to accelerate sensemaking in this area. Industry and academia have both struggled with building typologies of LLM attacks, because there hasn’t been enough evidence on the ground to construct them. Working out what kinds of attacks to try and how to execute or even automate them is going to be something people in the field will do constantly in coming years, and our work will make this faster and more consistent.”
“We also want to showcase the massive impact of qualitative work in machine learning and security. There’s often a focus on measuring effect and efficiency, but this is useless until you know what to measure. Qualitative research shows what can be measured – it’s the requisite step before quantitative anything. Without a theory describing a phenomenon, everyone is working in the dark.
“Often, when developing a new way of showing data, a handful of engineers guess how something works and build that feature, and everyone downstream of their work ends up subjected to that framing,” Inie added. “Those engineers are in fact doing qualitative work, but often without a formal methodology. Studies like ours use strong and broad evidence to show exactly how to go about assessing this new activity using familiar quantitative tools, and we do it in a way that reflects human behaviour and expectations. We hope this gives a good example of how crucial and feasible rigorous qualitative work is in strongly digital, highly engineering disciplines.”
The study, “(https://doi.org/10.1371/journal.pone.0314658) Summon a Demon and Bind It: A Grounded Theory of LLM Red Teaming,” was authored by Nanna Inie, Jonathan Stray, and Leon Derczynski.

(https://www.psypost.org/scientists-find-evidence-that-an-optimal-sexual-frequency-exists-and-mitigates-depression/) Scientists find evidence that an “optimal sexual frequency” exists and mitigates depression
Apr 23rd 2025, 06:00

New research published in the (https://doi.org/10.1016/j.jad.2025.01.043) Journal of Affective Disorders suggests that people who engage in sexual activity at least once a week are less likely to experience symptoms of depression. Drawing from a large, nationally representative sample of U.S. adults, the study found that sexual frequency was negatively associated with depression, even after accounting for factors like age, physical health, and socioeconomic status. The findings also suggest that having sex one to two times per week may offer the greatest psychological benefits.
The study was conducted by researchers from The First Affiliated Hospital of Shenzhen University and Shantou University Medical College. Their goal was to explore whether sexual activity—a commonly overlooked aspect of lifestyle—might serve as a behavioral indicator related to mental health. While sex is widely recognized for its physical health benefits, its role in emotional well-being is less often studied. Depression remains one of the leading contributors to disability and reduced quality of life, and identifying modifiable lifestyle factors that could reduce its burden is an important goal for public health researchers.
To investigate this question, the researchers analyzed data from the National Health and Nutrition Examination Survey (NHANES), a long-running project that collects health and behavior information from a representative sample of adults in the United States. They focused on data collected between 2005 and 2016, selecting participants aged 20 to 59 who both reported their sexual activity over the past year and completed a standard depression questionnaire known as the Patient Health Questionnaire-9 (PHQ-9).
After applying exclusion criteria, the final sample included 14,741 individuals. About 7.5% of these participants had PHQ-9 scores indicating moderate to severe depression. Sexual activity was categorized into three levels: less than once per month, more than once per month but less than once per week, and at least once per week. The researchers also collected information on a wide range of other variables, including age, gender, race, income, education, marital status, insurance coverage, and physical health as measured by the Charlson Comorbidity Index.
Using statistical models that adjusted for these potential confounders, the researchers found a clear association: people who reported having sex at least once per week had significantly lower odds of depression compared to those who had sex less than once per month. Specifically, weekly sexual activity was associated with a 24% reduction in the odds of depression. Those who reported sex more than once per month but less than weekly had about a 23% reduction in depression odds.
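
For readers who want to see the shape of such an analysis, below is a minimal Python sketch of a covariate-adjusted logistic regression using statsmodels. The data are simulated and all column names are invented for illustration; they are not the NHANES variables or the authors’ code.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 2000
    dat = pd.DataFrame({
        "depressed": rng.binomial(1, 0.08, n),   # e.g. PHQ-9 above a cutoff
        "sex_freq": rng.choice(["<1/month", "1/month-1/week", ">=1/week"], n),
        "age": rng.integers(20, 60, n),
        "female": rng.binomial(1, 0.5, n),
        "comorbidity": rng.poisson(0.5, n),      # stand-in for a Charlson-style index
    })

    # Logistic model with the least sexually active group as the reference.
    fit = smf.logit(
        "depressed ~ C(sex_freq, Treatment(reference='<1/month'))"
        " + age + female + comorbidity",
        data=dat,
    ).fit(disp=0)

    # Exponentiated coefficients are adjusted odds ratios; the study reports
    # roughly 0.76 (a 24% reduction) for at-least-weekly vs. less than monthly.
    print(np.exp(fit.params))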
The researchers also used a flexible modeling technique called restricted cubic splines to examine whether the relationship between sexual frequency and depression was linear or nonlinear. The analysis revealed what they described as a “saturation effect”—the psychological benefits of sex appeared to peak at a frequency of 52 to 103 times per year, or about one to two times per week. Increasing sexual frequency beyond this range did not seem to offer additional protection against depression.
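
The spline analysis can be sketched the same way. The toy example below assumes a continuous times-per-year measure and uses patsy’s cr() basis, which implements natural (restricted) cubic splines inside a statsmodels formula; the “saturation” shape is built into the simulated data purely to show how the method would expose a plateau.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 2000
    times_per_year = rng.integers(0, 200, n)
    # Toy 'saturation' shape: risk falls until ~78 times/year, then flattens.
    p = 1 / (1 + np.exp(1.7 + 0.017 * np.minimum(times_per_year, 78)))
    dat = pd.DataFrame({"depressed": rng.binomial(1, p),
                        "times_per_year": times_per_year})

    # cr() is patsy's natural (i.e., restricted) cubic spline basis.
    fit = smf.logit("depressed ~ cr(times_per_year, df=4)", data=dat).fit(disp=0)

    # Predicted risk across a grid of frequencies shows where the curve
    # plateaus, analogous to the 52-103 times/year range in the study.
    grid = pd.DataFrame({"times_per_year": np.arange(0, 201, 10)})
    print(fit.predict(grid).round(3).tolist())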
These findings remained robust across a variety of statistical checks. For example, when participants taking antidepressants were excluded from the sample, the association between sexual frequency and depression remained significant. The researchers also conducted sensitivity analyses using techniques like multiple imputation to handle missing data and inverse probability weighting to adjust for potential biases. Across these different approaches, the pattern held: more frequent sex was associated with lower odds of depression.
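
Of the sensitivity checks mentioned, inverse probability weighting is easy to illustrate. The schematic below uses simulated data and a single covariate; it shows the general three-step recipe, not the authors’ actual analysis.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 2000
    dat = pd.DataFrame({
        "weekly": rng.binomial(1, 0.4, n),       # at-least-weekly indicator
        "age": rng.integers(20, 60, n),
        "depressed": rng.binomial(1, 0.08, n),
    })

    # Step 1: model each person's probability (propensity) of weekly activity.
    ps = smf.logit("weekly ~ age", data=dat).fit(disp=0).predict(dat)
    # Step 2: weight everyone by the inverse probability of their own group,
    # which balances the covariate distribution across exposure groups.
    dat["ipw"] = np.where(dat["weekly"] == 1, 1 / ps, 1 / (1 - ps))
    # Step 3: weighted outcome model; exp(coef) is the weighted odds ratio.
    fit = smf.glm("depressed ~ weekly", data=dat,
                  family=sm.families.Binomial(),
                  freq_weights=dat["ipw"]).fit()
    print(np.exp(fit.params["weekly"]))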
Interestingly, the protective association between sex and depression appeared to be stronger in certain groups. Younger adults (ages 20–39), Mexican American and non-Hispanic White participants, and those without health insurance all showed stronger associations. However, the researchers did not find significant differences by gender, income level, or education, and there were no statistically significant interaction effects between sexual frequency and these subgroup variables.
However, it is important to note that the study was cross-sectional, which means that all data were collected at a single point in time. As a result, the researchers could not determine whether reduced sexual frequency leads to depression, whether depression reduces sexual activity, or whether both are influenced by other shared factors. Self-reported sexual activity may also be subject to recall bias or social desirability effects. In addition, the study did not account for sexual orientation, relationship satisfaction, or other contextual factors that might influence both sexual activity and mental health.
Still, the authors argue that sexual frequency may serve as a useful behavioral marker in mental health screening and treatment. They note that sexual activity is a multidimensional experience that combines emotional, physical, and relational elements. Regular sexual activity can contribute to stress relief, intimacy, and emotional bonding—all of which may play a role in protecting mental health. On a physiological level, sexual activity is associated with the release of endorphins and other neurochemicals that are known to elevate mood. It may also function as a form of physical exercise, which itself has well-established benefits for psychological well-being.
The researchers caution against interpreting their findings as encouragement to increase sexual activity indiscriminately. Instead, they suggest that maintaining a moderate, consistent level of sexual activity—particularly within the context of a satisfying relationship—may support emotional health. They also emphasize the need for health professionals to include sexual well-being in their assessments of mental health, as sexual dysfunction or low sexual frequency can be both a symptom and a contributing factor in depression. For patients taking antidepressants, which often have sexual side effects, these considerations may be especially important.
The study, “(https://doi.org/10.1016/j.jad.2025.01.043) Optimal sexual frequency may exist and help mitigate depression odds in young and middle-aged U.S. citizens: A cross-sectional study,” was authored by Mutong Chen, Ruibin Yi, and Zhongfu Zhang.

Forwarded by:
Michael Reeder LCPC
Baltimore, MD

This information is taken from free public RSS feeds published by each organization for the purpose of public distribution. Readers are linked back to the article content on each organization's website. This email is an unofficial, unaffiliated redistribution of this freely provided content from the publishers.

 


