
Sun Jun 15 00:30:00 UTC 2025
**Summary:**
A new study analyzes the prevalence of fragile “p-values” (barely significant statistical results) in psychology research papers published between 2004 and 2024. The study shows a decline in fragile p-values since the replication crisis of the 2010s, suggesting that psychology is becoming more rigorous. Increased sample sizes appear to be the main driver of this trend, accompanied by smaller reported effect sizes. While top-ranked universities tackling complex biological and clinical studies sometimes show less robust results (likely due to resource constraints), the overall trend indicates a positive shift toward sturdier research practices in psychology. Experts recommend open-data policies, preregistration of studies, and better funding for resource-intensive research to further improve the field and rebuild public trust.
**News Article:**
**Psychology Rebounds After Replication Crisis: Study Shows Increased Rigor**
**New Delhi, June 15, 2025:** The field of psychology appears to be recovering from the “replication crisis” that shook the scientific community in the 2010s, according to a new study published in *Advances in Methods and Practices in Psychological Science*. The study, authored by Duke University postdoc Paul Bogdan, analyzed over 240,000 psychology papers published between 2004 and 2024 and found a significant decrease in fragile “p-values” – statistical results that barely meet the threshold for significance.
The replication crisis revealed that many published studies, particularly in psychology and medicine, yielded results that couldn’t be reproduced. This eroded public trust in science and raised concerns about the validity of research findings.
Bogdan’s research indicates that the field has taken steps to address these issues. The study found that the share of fragile results has fallen from 32% to 26% since the crisis began. A key driver of this improvement has been an increase in sample sizes: larger samples provide more reliable estimates of effects, leading to more robust and replicable results. The median sample size has climbed rapidly since 2015, while reported effect sizes have inched downward.
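Why do larger samples produce fewer barely-significant results? A toy simulation makes the intuition concrete. The sketch below is purely illustrative and not drawn from the study: it assumes a one-sample z-test, a true effect of 0.3 standard deviations, and a “fragile” band of .01 ≤ p < .05; none of these parameters come from Bogdan’s paper.

```python
import math
import random

def two_sided_p(z):
    """Two-sided p-value for a z statistic under the standard normal."""
    return math.erfc(abs(z) / math.sqrt(2))

def simulate_fragile_fraction(n, effect=0.3, trials=2000, seed=0):
    """Fraction of significant results (p < .05) that are 'fragile'
    (.01 <= p < .05) for a one-sample z-test with sample size n.
    The effect size and cutoffs are illustrative assumptions."""
    rng = random.Random(seed)
    fragile = significant = 0
    for _ in range(trials):
        # Sample mean of n draws from N(effect, 1); z scales with sqrt(n)
        mean = sum(rng.gauss(effect, 1) for _ in range(n)) / n
        p = two_sided_p(mean * math.sqrt(n))
        if p < 0.05:
            significant += 1
            if p >= 0.01:
                fragile += 1
    return fragile / max(significant, 1)

# For a fixed true effect, bigger samples push p-values well below .05,
# so fewer of the significant results sit in the fragile band:
small_n = simulate_fragile_fraction(30)
large_n = simulate_fragile_fraction(100)
```

Under these assumed numbers, the fragile share among significant results shrinks noticeably as the sample grows, which mirrors the pattern the article describes.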
Interestingly, the study also revealed that scientists at top-ranked universities sometimes publish slightly shakier numbers. Text mining of the papers suggested this was likely linked to biology-heavy, clinically demanding studies, which are expensive, labor-intensive, and often ethically constrained.
Experts are calling for further reforms to bolster public trust in science. Recommendations include the adoption of open-data policies, preregistration of studies (to ensure even negative results are reported), and increased government funding for resource-intensive research projects. These measures would further strengthen the integrity of scientific research and ensure its reliability.