A few years ago, I was completely dismissive of qualitative research. I didn’t think it could tell us much and didn’t value qualitative work or findings. Since then, however, my stance has done a complete 180-degree turn: I now appreciate qualitative work and the insights it brings. Inspired by Hans IJzerman’s tweet thread, I’ve collected various open qualitative research resources and ideas. It may be less obvious how qualitative research can benefit from open science practices than it is for quantitative research, but I think making this kind of research more open is a worthwhile endeavour. If you have any suggestions, please let me know and I will add them!
I recently attended two excellent virtual metascience conferences: the Surrey Reproducibility Society & ReproducibiliTea’s on 29 May and RIOT Science Club’s on 11 June. One of the common themes across both conferences was how the culture of academia is changing. Anne Scheel’s talk at the first conference focused on Early Career Researchers (ECRs) and how the culture shift is affecting them. At the second conference, Marcus Munafò talked more broadly about how open research practices fit into the current way of working, how they are helping to shift this culture, and how they can be valued. I agreed with the majority of what they (and all the other speakers) said. But I think there is more to be said about how ECRs are affected by this paradigm shift.
Dual process theories are everywhere in psychology. From decision making to emotions, the idea that complex cognitive phenomena can be categorised into two classes with specific common features is very alluring. Psychologists’ penchant for dichotomies (nature vs nurture, etc.) has long been noted, but this dichotomisation began in earnest with the cognitive revolution. Given that this framework holds almost universal appeal, you would presume it rests on a plethora of solid empirical evidence. But, as others have argued, the foundation is actually much weaker than many (including myself) believe.
Writing is one of the most important aspects of any academic or practising psychologist’s work life. Crafting research papers for the former and writing reports for the latter are the bread and butter of these professions. But good writing takes time to develop. Very few of us are naturally excellent writers. The rest of us have to grind away, putting in the hours to make our writing more fluid, our ideas clearer, and our prose more engaging. Although it can seem like a time investment you can’t afford, I think writing blog posts is well worth it. This blog post is primarily for those at an earlier stage in their careers but, of course, everyone can benefit from more writing practice.
I think preprints are excellent. Sharing research without waiting for journals to publish it is a great way to spread valuable work. There are many articles and papers highlighting the strengths and weaknesses of preprints, so I won’t rehash those arguments here. My positive stance towards preprints grows stronger the more disillusioned I become with traditional pre-publication peer review. But I’ve encountered a tension between two views I strongly endorse. On one hand, I think preprints are a good way to speed up publication and dissemination. On the other, I believe science needs to slow down to improve quality. The contradiction is pretty clear. So, what do I do? Do I jettison one of these views, or are they reconcilable?
Preregistration and Registered Reports (RRs) are rapidly gaining popularity as a means of asking rigorous scientific questions. I think preregistration, and especially RRs, will positively shape how psychologists engage in scientific work. But critically discussing them is vital for our collective understanding. To this end, I was invited to the Cambridge ReproducibiliTea meeting to chat about them. The slides from my talk can be found here. Together with the slides, this blog post is a summary of the points from the evening. I do not claim credit for any of these ideas. This is not an exhaustive list of pros or cons. I’m focusing more on the negatives and pragmatic thoughts about implementation as I feel the positives have been widely discussed already. Thanks to everyone who was present and especially to those who contributed.
One of the first things all psychology students are taught is levels of measurement. Every student must wrap their heads around the four different forms data can take: nominal, ordinal, interval, or ratio. These are the bedrock of a lot of students’ understanding of measurement, including mine. I didn’t realise there were questions about their validity and utility until recently. Should we still use these levels of measurement? Do they aid our understanding of measurement? Or do they need to retire?
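As a quick illustration (my own sketch, not from the post), here is how the four levels might look as data in Python; the example variables are hypothetical, and pandas is used only to show which operations each level supports.

```python
# Sketch: the four classic levels of measurement as example data.
import pandas as pd

# Nominal: unordered categories; only equality checks are meaningful.
handedness = pd.Categorical(["left", "right", "left"], ordered=False)

# Ordinal: ordered categories; comparisons are meaningful, differences are not.
severity = pd.Categorical(["mild", "severe", "moderate"],
                          categories=["mild", "moderate", "severe"],
                          ordered=True)

# Interval: equal intervals but an arbitrary zero (e.g. temperature in °C).
temperature_c = pd.Series([36.5, 37.0, 38.2])

# Ratio: equal intervals and a true zero (e.g. reaction time in seconds).
reaction_time_s = pd.Series([1.2, 0.8, 2.5])

print(severity.min(), severity.max())  # order is defined for ordinal data
```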
This is a collection of some of the best things I’ve read this year. Organised alphabetically, they cover a wide range of topics. Hopefully you’ll find them as interesting as I did! Comment with what you really enjoyed reading; I’ll check it out and may even add it to my list.
‘Most Money Advice Is Worthless When You’re Poor’ by Talia Jane. What it is like for many to live in poverty.
‘Canon Is An Abyss’ by Mike Rugnetta. Why endlessly digging into the backstory and wider context of a well known and loved story can lead to madness.
I like science Twitter. Science social media has pretty much been my education for all things related to statistics, metascience, and philosophy of science. I’ve talked with great people and been exposed to ideas I would almost certainly never have encountered otherwise. But the way it works, like all media through which scientific ideas are informally discussed, can be counterproductive. I want to look at a recent example of this and dig a little deeper into why this might have happened.
This post was precipitated by the publication of a preprint and the subsequent social media discussion. The authors posit that preregistration cannot supersede strong theory when it comes to developing scientific knowledge (among other things). This sparked a wide-ranging discussion of the paper, some of it valuable, some of it less so. Why did some people, who would likely argue they are primarily concerned with discussing ideas and promoting knowledge, engage in ways that are antithetical to this? There are many possible explanations, but I want to focus on one.
Does calling a study “underpowered” help or hinder criticism?
A common criticism of research (past and present) is that it’s “underpowered” or “has low power”. What this typically means is that the study has few participants (often between 5 and 40) and so has low statistical power for most effect sizes in psychology. But a study is only “underpowered” relative to an effect size. Power is determined by the effect size you want to detect, the size of your sample, and the alpha level. Sample sizes that are often labelled “underpowered” can actually yield high power, depending on the hypothetical effect size.
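To make this concrete, here is a minimal sketch (my own addition, not from the post) of how power changes with the assumed effect size. It assumes a two-sample t-test and uses the statsmodels Python library; the sample size and effect sizes are hypothetical.

```python
# Sketch: the same sample size yields very different power
# depending on the assumed standardised effect size (Cohen's d).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = 20  # a size commonly labelled "underpowered"

for d in (0.2, 0.5, 0.8, 1.2):  # hypothetical effect sizes
    power = analysis.power(effect_size=d, nobs1=n_per_group, alpha=0.05)
    print(f"d = {d}: power = {power:.2f}")
# Roughly: d = 0.2 gives power near 0.09, while d = 1.2 gives power near 0.96.
```

The same 20 participants per group are woefully underpowered for a small effect but more than adequate for a very large one, which is exactly why the label only makes sense relative to an assumed effect size.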