CBT (Cognitive Behavioral Therapy) is arguably the most empirically validated form of treatment for a wide variety of people, with a wide range of disorders / problems.
pmc.ncbi.nlm.nih.gov/articles/PMC3584580/
It is also so straightforward that many people who are not in serious mental anguish / disordered thinking can actually do it on their own after a few talk sessions, by following steps and rules that fit on a few pages of paper.
positivepsychology.com/cbt-cognitive-behavioral-t…
I am not disagreeing with you though.
Much of the history of psychology as a field is built on theories that are not testable: they allow no experiment that could confirm or refute the theory's actual validity.
To add more fun to this:
Turns out the fundamental theory behind why SSRIs… work… is basically bunk.
psychologytoday.com/…/we-still-dont-know-how-anti…
Almost every single study on the efficacy of SSRIs in treating what they are prescribed to treat is either funded or conducted… by the people selling them, the drug companies.
And they almost never do long term studies, and they almost always massively downplay the severity and prevalence of side effects.
Remember when we had an opioid crisis because the Sackler family and other drug companies heavily pushed low-quality research and recommendations onto doctors?
www.bmj.com/content/378/bmj-2021-067606
Studies similar to this one have become more and more common in the last few years, and they basically suggest that SSRIs are either no more effective than placebo, or imperceptibly better (just barely outside statistical equivalence), or better than placebo only for a completely unpredictable subset of people.
Psychiatrists have been vehemently arguing about this in the last few years, usually not in public. They don't like to state it as bluntly as I just have, because if it turns out SSRIs and many other mind-altering pharmaceuticals… don't actually treat what they are intended to, that they don't actually work in the ways they tell their patients they do… but do very clearly cause negative side effects… well, that'd mean they've basically been committing medical malpractice their whole careers.
Whoops!
andros_rex@lemmy.world 1 year ago
The standard p value threshold in most psych research is 0.05, which means you accept a 1/20 risk of a Type 1 error: a 5% chance of a false positive, i.e. of declaring an effect when your results were actually due to random chance alone.
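You can see that 5% rate directly by simulation. A minimal stdlib-only Python sketch (the sample size of 30 and the |t| > 2.0 significance cutoff are just illustrative choices, approximating p < 0.05 at these degrees of freedom): both groups are drawn from the same distribution, so every "significant" result is, by construction, a Type 1 error.

```python
import random
import statistics

random.seed(0)

def two_sample_t(a, b):
    """Welch-style t statistic for two samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (ma - mb) / se

# Both groups come from the SAME distribution: the null hypothesis
# is true, so any "significant" result is a false positive.
false_positives = 0
trials = 2000
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]
    if abs(two_sample_t(a, b)) > 2.0:  # roughly p < 0.05 here
        false_positives += 1

print(false_positives / trials)  # roughly 0.05
```

Run one study, you probably won't get fooled. Run twenty, and on average one of them "finds" an effect that isn't there.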
Keep in mind that research with null results usually doesn't get published. Keep in mind that you have to publish regularly to keep your job. (And that if your results make certain people happy, they'll give you and your university more money). Keep in mind that it is super fucking easy to say "hey, optional extra credit: participate in my survey" to your 300-student Intro Psych class (usually you just have to provide an alternative assignment for ethical reasons).
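That "null results don't get published" filter isn't just annoying, it systematically inflates the effects you do see in print. A rough sketch, assuming a tiny true effect (d = 0.1), an illustrative sample size of 30, and a crude "publish only if significant" rule:

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.1  # assumed tiny real effect
N = 30             # illustrative per-group sample size

published, all_estimates = [], []
for _ in range(2000):
    a = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]  # "treatment"
    b = [random.gauss(0, 1) for _ in range(N)]            # "control"
    est = statistics.mean(a) - statistics.mean(b)
    se = (statistics.variance(a) / N + statistics.variance(b) / N) ** 0.5
    all_estimates.append(est)
    if est / se > 2.0:  # "significant", so it gets written up
        published.append(est)

print(round(statistics.mean(all_estimates), 2))  # near the true 0.1
print(round(statistics.mean(published), 2))      # much larger, roughly ~0.6
```

Only the lucky overestimates clear the significance bar, so the published literature reports an effect several times bigger than the real one, even with zero fraud involved.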
Mrs_deWinter@feddit.org 1 year ago
That wouldn’t be a problem at all if we had better science journalism. Every psychologist knows that “a study showed” means nothing. Consensus over several repeated studies is how we approximate the truth.
The psychological methodology is absolutely fine as long as you know its limitations and how to correctly apply it. In my experience that's not a problem within the field, but since a lot of people think psychology = common sense, and most people think they excel at that, a lot of laypeople overconfidently interpret scientific results, which leads to a ton of errors.
The replication crisis is mainly a problem of our publications (the papers, how impact factors are calculated, how peer review is done) and the economic reality of academia (namely how your livelihood depends on the publications!), not the methodology. The methods would be perfectly usable for valid replication studies - including falsification of bs results that are currently published en masse in pop science magazines.