Hello, all!
I cannot speak for everyone, but I know that replication of experiments was mentioned only as an afterthought in my undergraduate psychology courses. When writing papers, we were told to construct our "Methods" sections precisely enough that anyone could follow our instructions and carry out the same experiment. After reading these articles, I see that I was essentially instructed to write my "Methods" section so that anyone could carry out a direct replication. Looking at it now, why would anyone want to conduct a direct replication? It seems too obvious and too simple in most situations. If I wanted to be certain of my results, it seems as though I would always want to conduct a conceptual replication. If I could achieve the same results in a different situation with different participants, etc., I would be thoroughly pleased!
Both articles made the point that publications often shy away from publishing direct replications of previously conducted studies. I cannot say I blame them. In some ways, that is like reinventing the wheel. In no way am I trying to discount the importance of both types of replication, but conducting a direct replication shows only a few things. First, it shows that the original experimenter wrote solid instructions! Similarly, it shows that the new experimenter can follow instructions. If successful, a direct replication shows that the original experimenter correctly interpreted his or her results, which is important for eliminating any doubt about the significance of those results.
Back to what I first said. This is my first semester of graduate school. It seems so bizarre to me that I am in a course focusing entirely on a subject that was barely mentioned in undergraduate. I had the initial urge to email these articles to my previous professors and beg that they point out the importance of replication in psychology (or all sciences, for that matter).
All in all, I found these articles very interesting. Some of the points were common sense, yet framed in ways I had never considered. Similar to what Ed mentioned, I see a flaw in replication in that a faulty experiment will continue to be a faulty experiment no matter how many times it is replicated. Looking forward to this discussion.
Monday, August 29, 2016
Sunday, August 28, 2016
Week 2 Reading Response - Lenzo
Hello all.
I apologize in advance: This post is far too long. I will cut it down next time.
I have no formal cognitive science, or really any kind of "scientific" training whatsoever, so I may be way off base in the following. I hope to learn from this, and from you.
I was curious about a methodological point, especially in the context of cognitive or other "human" sciences. My concern is what I will naively call "population norming." Crandall emphasized my concern in the section in which he quotes Heraclitus, "you cannot step in the same river twice." The notion underlying this quote is that the river is different from moment to moment, and so literally speaking, by the time you step in "it" again it is a different river. This is applied especially to human beings - we are historical, and we change in time (it seems that Schmidt acknowledges this through the term "irreversible units") - and thus we cannot test the same exact subject twice. One concern with this notion is that you can extend it to very small time-scales: within one experiment, a person is different moment to moment, and so we cannot even test the same person once (if an experiment takes more than some relevantly small amount of time). That is not my primary concern.
My concern is whether or not this concern with a change in identity of subjects can be mitigated by norming a population, or taking subjects from a wide range of varying populations: academics, professionals, white/blue collar, the unemployed, oppressed persons, persons of privilege, various genders, etc. Both articles seemed to focus on defining one population and then maintaining it in subsequent experiments, with the exception of certain kinds of conceptual replication (henceforth "CR") experiments, in which the population may be expanded. But it seems that in "social psychology" or any other kind of science in which the subject matter is explicitly human, you would want a wide range of subjects so as to make the most general claims.
Of course there are practical concerns, such as funding, time, access to populations, etc.
I was curious about Crandall's claim that CR is not meant to be exact, but I think that Schmidt explicated this well: There is an intentional change in some variable or other, rather than trying to map the original experiment exactly.
It seems that both authors address what is called a "publication bias," and recommend encouraging that various kinds of replication (for instance, CR) be more readily published. Schmidt seems to raise the point, however, that we don't actually learn very much from a failed CR experiment. Crandall nevertheless would like to see such experiments more often published; I am not sure where Schmidt stands in that regard.
What is pilot testing?
Schmidt utilizes a Dilworth quote that is seemingly inspired by Hume. Hume insists that we ought to be careful, though, in ascertaining some "ultimate cause" behind the constant conjunctions we see. In that sense, for Hume, direct replication might not be theory confirming (consistent with the articles), but - except in a potentially deflationary sense - CR might not be either.
Replication is meant, in part, to increase confidence in some experiment, methodology, or perhaps even theory. However, Schmidt seems to imply that confidence that some experimental results would be replicable is good enough. This seems worrisome.
Why is publishing "mere" replication discouraged in the social sciences but apparently not the natural sciences?
Schmidt points out that "interesting" science challenges assumptions of the audience, or of some general scientific paradigm or worldview. He also makes the claim that, though replication is demanded of these views, it is seldom delivered (this demand, by the way, seems related to Dr. Braasch's research presented last week). Thomas Kuhn argues, from a philosophy and sociology of science standpoint, that persons operating in different scientific paradigms are destined to, to some extent, talk past one another. His considerations might give us grounds for thinking that replication might not be possible between paradigms, or that what counts as replication may be different between paradigms.
Schmidt introduces the notion of "tacit knowledge." I wonder about "tacit bias." I especially wonder how much a mechanism analogous to "Gettier cases" functions in scientific research. The idea is that someone can have a justified true belief without it counting as knowledge - in my interpretation, because what justifies the belief and what makes it true can come apart. The standard example:
Smith and Jones apply for a job. Smith has good reason to believe Jones will get the job: The interviewer tells him he will hire Jones. Earlier, Smith counted the number of coins in Jones' pocket: 10. Smith forms the belief that "the man who will get the job has 10 coins in his pocket," and this belief is justified. In the end, however, Smith is the one who gets the job. And, unbeknownst to Smith, he too has 10 coins in his pocket. So, his justified belief that "the man who will get the job has 10 coins in his pocket" ends up being true. But, we seem to think, Smith did not know that.
I have yet to work it out, but it seems like replication has the potential to replicate bias, and the important point is this: Just because an experiment is badly designed, or in other words does not map the way things really are, does not mean its results cannot be replicated.
Friday, August 26, 2016
Week 2 Readings Response
I found the two articles we read to be both informative and thought provoking. I have often griped about the lack of replicability of many studies in psychology. It is my belief that the inability to directly replicate a study casts severe doubt on its findings. However, the first paper brought up a point I had never thought of before. While I still hold replicability to be just as important as I did before, I now think there are better ways of going about it than directly repeating the experiment in question. Anyway, as the articles brought up, it is impossible to ever truly directly replicate an experiment. The idea of conceptual replicability appeals greatly to me. If a concept can be repeatedly supported through various methods, it is much better supported than if it is supported through the same method each time. This implies that direct replication is much less important than I had originally considered it to be.
This is not to say I think direct replication is pointless, though. I still think it serves its function of validating a previous experiment. However, it is much better to try to test a hypothesis in a new manner rather than an old one. This benefits not just the scientific community as a whole, by either supporting or casting doubt on a concept, but also the researcher individually, by broadening their scope and possible publications.
These two papers brought up points I had never considered before and have altered my viewpoint. They raise a new question in my mind of “Why would one ever do a direct replication when a conceptual replication is just as valid and more useful?” I suppose the answer to this is that sometimes a conceptual replication is not possible, but I find it hard to think of examples of that. I’m sure there are uses for direct replication, but for my own purposes and future endeavors I will likely focus on conceptual replications.
Week 2 - Replication in the sciences (Crandall and Schmidt articles)
Hello Everyone,
I hope your semester is off to a good start.
Please read the Crandall (article 1) and Schmidt (article 2) articles on the importance of replication in the sciences and write a discussion board post by 9PM Monday, August 29th. It can be one post that speaks to both articles (no need for separate posts).
Please do so independently (before reading others' posts). Afterwards, feel free to reply to others' posts. There are plenty of interesting ideas in these articles and I am interested to see what you think of them.
We will discuss the articles on Wednesday, September 7th.
Hope you are all well!
Best,
Dr. Braasch