Sunday, October 16, 2016

Week 9 - Jeff Greene

Hello Everyone, 

Please read the Greene and Yu article and write a discussion board post by 9PM Monday, October 17th. Jeff will present on this research so your discussion posts can lead you to ask questions in the class portion, but also in directing questions towards Jeff in the public presentation. 

I am excited to discuss the ideas with you at 3PM on Wednesday, October 19th.

Hope you are all well and having a nice weekend!

Best, 
Dr. Braasch 

17 comments:


  1. To kick things off, I really enjoyed the research topic here of EC; most academics find topics like these fascinating, as we all want to better understand what we know and how we know it so that we can learn how to know more! I sometimes struggle a bit with philosophy papers, as they tend to be verbose and a bit circular, which can make it difficult to extract the message the authors are trying to deliver. With all of this being said, I felt the discussion of EC was a compelling one.

    I had a few comments on the Methods section of this paper, specifically how the researchers selected their participants and their justification for doing so. I 100% agree that choosing two groups on opposite ends of the EC spectrum would allow for the clearest comparison, but I wasn’t really clear at all on how biology and history were distinctly different when it came to knowledge of knowing. The researchers said that they selected university faculty members with little to no background in education, and that left me scratching my head, because how could that even be possible? In order to be a faculty member at a university, you MUST have some background—and most likely a strong background—in education and educational practice. Also, selecting students from a pool of teacher-nominated students would certainly appear to be stacking the deck, as these students would most likely be the most metacognitively aware and highly intellectual, would they not? I suppose that this bias may have been necessary in order to derive a useful comparison group, though. Lastly, though I believe this is unique to qualitative papers and those that feature interviews, I believe even quantitative papers could benefit from having a positionality section so that readers can better understand the theoretical background a particular author may be writing from and how it may influence their interpretations of the data.

    Within the results, the authors claim that history experts “believed the importance of knowledge usually has an inverse relationship to its certainty”. I’ve read and reread this several times and tried looking back through the paper and am still not clear what this finding is supposed to mean at all.

    From a replication perspective, I would really like to see EC measured from a quantitative approach while still maintaining the two comparison groups—though I feel that would also be beneficial to make comparisons both within and between the two groups to see how EC may change over time.


  2. I think this paper specifically was a great display of why the seminar subject of replicability is relevant and important. Very broadly, the paper showed how replicating studies can be used to challenge theory (in this case, the fact that survey reliability wasn’t being reproduced brought about this challenge) and the implications of doing so. As Greene and Yu emphasized, a measure is not of any theoretical use and tells us nothing if it does not accurately measure what it is supposed to, so of course using a measure with poor reliability and validity to make claims is not proper. I appreciate their approach of using prior EC models as a skeleton for their interview rather than basing it entirely off of these models, as it wouldn’t make much sense to use the very models they are bringing to question to dominate the construction of these interview guidelines.

    In regards to replication pertaining specifically to this paper, there are a couple of ideas I think would be interesting to see explored or implemented. To begin, they broke down the domains into history and biology for understandable and well-explained reasons, but I’m curious to what extent this is useful. For this exploratory study, it most definitely was, as the researchers were able to show there are definitely key differences between domains. However, while at the lower school levels there are only so many subjects, as you get higher there are more degrees to which these domains can be broken down. For instance, sciences can be broken down into biology, chemistry, physics, etc., and each of those could be broken down further. At what level should these be distinguished, and is there an implementable systematic way to do so?

    A few other questions came to mind given the sample they used. For these exploratory purposes, I think using the convenience sample they did seems fairly appropriate. However, how would their results differ in, say, a lower-SES community? Additionally, since teachers nominated students whom they saw as capable of explaining EC beliefs, is it possible they chose particularly bright or well-performing students? Would we see similar interview results if students were randomly chosen? While this group of middle-school students demonstrated a sophisticated grasp of knowledge, it could be that they are among the few in this age group who do. However, this finding in itself shows that future research should consider this when measuring EC. I would also like to see a within-subjects design, to see if these domain differences exist within participants or if similar ways of thinking arise across domains within the same participant.

    I particularly enjoyed this article and look forward to the discussion.

  3. I had never heard of epistemic cognition before reading this article. It’s easy to get lost in trying to understand a new concept, so while reading I focused on the research design and how the study could be replicated. What struck me was that the current researchers had a feeling that the measures used to uncover the construct in question, EC, were flawed. It takes a pretty bold researcher to eschew accepted measures! I liked that they started from scratch in their investigation and used a phenomenological research design. I would be interested in learning how these results could be used to rewrite a questionnaire about EC.

    The qualitative nature of their work makes replication fairly straightforward, because future researchers would just have to ask the same few questions and then group the results. It would be interesting to interview other professors and middle schoolers about biology and history to see if the general trends uncovered in this research hold true. Other academic domains could also be investigated. Additionally, high schoolers seem like the logical next group to interview using these questions. They could offer more insight into the development of EC.

  4. I definitely enjoyed this article. Epistemic cognition is knowledge about knowledge...which is initially a very confusing idea to grasp. Comparing biology and history, as they were doing in the article, seems a little convoluted.

    At one point in the article, the author says that maybe "simplicity" means different things in different domains. To me, that seems painfully obvious. Practically every aspect of learning will be different across areas of expertise as their necessary knowledge and skill set changes and varies. While I get the basic premise of what the authors were aiming for, I would like to see this study replicated with areas of study more similar to one another (for example, psychology and sociology, or biology and chemistry) to see if these constructs hold up within similar expertise.

    I also took some issue with their sampling method. It seems like there would be inherent bias in having teachers select students from their classes. Maybe these students are already highly functioning?

    As far as applying this research goes, I found this article terribly frustrating. I think many of us have had teachers who want us to just memorize the facts, formulas, dates, et cetera, instead of understanding underlying themes or the reasoning behind theories. If that leads us to a very basic, naive view of knowledge, why are so many classes still being taught this way?

  5. Personal epistemology, epistemic change, and epistemic cognition are newer areas of research in psychology, educational psychology, and the other behavioral sciences. Perry (1970) began this line of epistemological work with Harvard undergraduate students, and researchers have since extended it to young children to examine their beliefs about knowledge and learning. The researchers in the current EC study focus mainly on two things: “(a) instrument developers need to refine the foci or wording of the self-report items used to measure EC; (b) instrument developers need to consider different ways of measuring EC (e.g., discontinue using Likert-type items, capture student behaviors)”. As background, Greene noted that EC can be difficult to measure psychometrically, but that qualitative measures (i.e., think-alouds, interviews) could give better results. This research was conducted by interviewing university teachers of history and biology along with some middle-school children.
    In the literature review, the researchers describe several models of EC. Among these are Kuhn’s model (which suggests movement from realist, to absolutist, then multiplist, and finally evaluativist positions, describing a progression in the engagement of knowledge roughly analogous to Perry’s model) and Schommer’s (1990) model, known as the first multidimensional model of EC, which explores five independent but related dimensions (i.e., fixed ability, quick learning, simple knowledge, certain knowledge, and omniscient authority). The results of this study also revealed that current models of EC do align with participants’ thinking (e.g., biology/history teachers’ beliefs about the certainty of knowledge).
    One history professor emphasized students learning various interpretive and argumentation skills, along with historical empathy and perspective taking. The other history professor said that getting students to see that there are orders of knowledge, a hierarchy, is the biggest challenge. Throughout the results section, the teachers expressed frustration that students are more focused on the wrong kinds of knowledge (i.e., declarative) instead of more conceptual knowledge. Interviews with the biology teachers incorporated a diagram of science domain ontology, and it is surprising that it did not help clarify the discussions or the middle-school students’ understanding: both the faculty and student interviews suggested that the ontology diagram was not a helpful tool in this context. However, the models of EC (Kuhn, Perry, and Schommer) seem very consistent with what the current study’s participants stated.
    In terms of replication, the current study could be a very good example. Working in the same field, I feel I could replicate this study using different subjects and sample sizes. It also looks like replication is easier in qualitative studies than in quantitative ones.
    As a qualitative case study, this research used a small sample in which more than half of the participants were white, which is not enough to represent the bigger picture of EC. However, the think-aloud method could be a good tool for further EC research in the future.
    For a better understanding of EC, we might explore the following questions:
    1. Why is it important for students to focus on conceptual knowledge rather than declarative knowledge?
    2. How could research on EC improve the teaching-learning process in schools?
    3. What differences do we see when EC is studied through a qualitative case study instead of psychometrics?
    4. How is this study a good example of replication?
    I enjoyed reading this paper and hope some of you did too. We are excited to see Greene next. See you all in the class discussion.

  6. I believe that this week's article raises an issue that has loomed behind many of the discussions of replication thus far, namely: at what point is a failure to replicate a potential issue in the underlying theoretical framework versus an issue in the research methods utilized? To some extent, the use of conceptual replication helps address this question, as patterns of findings that persist across contexts and with different methodologies may be considered fairly sound. Yet, in virtually every study, a variety of findings are often discrepant from those in previous research, even if only findings that are non-significant when they should be significant. While this may be a small issue if the big picture remains stable, it is much more problematic if a framework fails to replicate to a greater extreme, as documented in Greene's article. For instance, a theory of academic self-efficacy may replicate well across a variety of contexts until applied to a given SES minority group in STEM classes. In this instance, one is left to guess whether differences are a product of SES, specific minority group culture or environmental pressures, the academic domain, methodological error, or various interactions among any of these variables. Even after manipulating each of these variables in different ways over time, there always remains the possibility that socio-historical or other covert factors may underlie the discrepant findings. Further, in these cases, one is left with the task of determining whether to amend existing theory with a collection of additional components or caveats, to develop a sub-theory with unique relationships and constructs that still utilizes general-theory understandings, or to develop an entirely new theoretical understanding altogether, which may use only some of the components of existing theory. Returning to self-efficacy, this topic provides one example of this dilemma.
Currently, researchers are taking a variety of approaches to understanding the construct, ranging from replicating more classic, reciprocal deterministic paradigms of self-efficacy (e.g. Bandura) across groups and domains to developing incredibly group-specific, nuanced conceptualizations that significantly alter underlying assumptions. An even more extreme example can be found in modern therapeutic theory, some of which is severing itself from cognitive behavioral assumptions, hypothesizing not only that thoughts may have little meaning but that one may not even have persistent consciousness with which to perpetually attend to labels or other such cognitive phenomena. These issues also mirror an increasing post-modern and post-post or trans-modern push that has been developing in research circles and parts of society at large, in which many groups have begun to question whether any understanding that has not been developed through a certain group's own perspective might adequately or justly apply to those groups (e.g. many current critical race theories or, for a more extreme example, post-colonial arguments). However, these issues reveal themselves to a varying extent depending on the focus of knowledge and research, as cognitive or physiological processes may be far less likely to draw these types of critique than self-perspective, belief-based, or personality-centered theories. Nonetheless, these issues still permeate all research to some extent because of the inherent need to interpret and utilize findings, which is an inherently subjective endeavor.

  7. The article for this week used a qualitative method which, admittedly, I have little experience with (in psychology, at least; cultural anthropology is almost exclusively qualitative) and which I get a bit uneasy about. In spite of my bias, however, I felt the rationale for its use was fairly sound in this instance. Certainly, results are only as valid as the instruments they rely on and, from the authors' discussion (as well as a very small amount of familiarity with Deanna Kuhn’s work), it is clear that epistemic cognition (EC) has some tools that need to be sharpened.
    I thought the argument being presented was well supported by anecdotal experience and by informal consideration of the methods used and questions asked by various disciplines. For instance, in cultural anthropology the emphasis is almost entirely on interpretation of the “other,” as well as their interpretation of their own culture. Although certain attempts have been made to be more rigid and scientific, the ultimate goal of this field is not to look for the cognitive or biological bases of culture (at least, to most individuals in the field), but to see what makes a culture unique and what that may say about both their culture and our own. In contrast, in cognitive psychology, the focus is almost exclusively on the mechanisms underlying cognition. It seeks to overcome the bias of the individual in order to get closer to the objective truths of the underlying processes. In these two fields, expertise would certainly present itself in different ways if an individual is prompted with a question like “When I study I look for specific facts.” In anthropology, this may be considered true if one is looking for facts regarding what a certain group's rituals include, just as a psychologist may consider this accurate if one is trying to scaffold off previous research. Certainly, there are specific facts which form the foundation of every discipline, as suggested by the authors; however, this question does not accurately tap into the nuance of which facts we look for and why we look for them.
    In spite of the sound logic behind this research, however, there were some methods which were certainly not optimal. One of the bigger concerns I had was regarding the use of students who were recommended by teachers. Certainly, these teachers chose the “best and the brightest,” which may highlight some boundary conditions of existing knowledge but can easily be dismissed as not being truly representative. For instance, it would not be inaccurate to state that middle schoolers, as a population, are unlikely to be expert composers or have remarkable mathematical ability, in spite of the fact that some individuals (e.g., savants) may demonstrate these skills. While the current study was certainly more of a credible challenge than one virtuoso, the students were still likely selected for their above-average ability and, as such, their EC cannot easily be generalized (though the authors acknowledge that this is not their intention). It also seemed a bit troubling that the primary analyses were done by an individual who is so well versed in the topic and verified by a graduate student who likely would have been influenced by the opinions of his advisor. While the authors did acknowledge all these concerns and more, it does highlight why these qualitative analyses are not especially common and why they are not held in the same regard as more rigorous quantitative methods. In this instance, however, I do believe the authors' point was made that these quantitative methods were inadequate for tackling these questions.

  8. I think the qualitative approach to EC is well motivated, especially with the goal of showing some shortcomings of quantitative approaches. The approach, with its emphasis on sense-making and its attempt to suspend prior theoretical commitments, sounds a bit like a phenomenological approach, which I do think is useful for discovering and reworking presuppositions. Of course, self-report may be problematic no matter which way you spin it. That doesn't, however, make it unimportant.

    I'm curious as to the meaning of "dualism," since Greene and Yu seem to relate it to "objectivism." I can see a story in which this relationship is cashed out, but, since they hope not to rely on prior theory to circumscribe their work, I would prefer to see these terms defined in the ways they plan to use them.

    It is unclear to me that ontology and epistemology need be sharply distinguished or mutually exclusive, and I do not believe that an ontological interpretation of "simplicity" and "certainty" presupposes "realism" (especially since this term itself can mean many, many things).

    It is interesting that some experiments have found "certainty" to be domain general. In my teaching experience, students seem to think there are two kinds of belief: mathematical, and ice cream. Either you have completely certain, rigorous, "scientific" truth, or you have mere preferences to which the term "truth" doesn't even apply.

    I am glad to see a reworking of the naivety/sophistication talk. These seem to be biased, loaded terms. This can be seen in the discussion of authority. No one can be an expert on everything. To get by, we sometimes have to rely on belief or knowledge that does not come from our own first-hand experience. You probably shouldn't take cooking advice from a physicist or try to launch a rocket under the direction of a sociologist, but if I need to know a bit of physical or sociological information, there are relevant authorities whose word we do take, and often to good result. Authority is not infallible, but it can be relevant. In this experiment, by the way, the professors are taken as authorities insofar as their views are treated as automatically non-naive, presumably due to their expertise.

    I would like to know more about the supposed connection between EC and theory of mind. Theory of mind was a prominent theory of subjectivity and (primarily) intersubjectivity for some time but, I believe, it is now out of vogue. I wonder if the connection is really between EC and some kind of theory of subjectivity and intersubjectivity, of which there are many alternatives to choose from.

    Though I appreciate the attempt to remove bias by avoiding teachers with "educational" or "philosophical" histories, it seems that being a teacher is itself an educational history, and definitely biases teachers' view of EC. Just look at my anecdote three paragraphs up!

    The "semi-structured" nature of the interviews performed seems like it would make them very difficult to replicate. Yet I do not think that the study is thereby invalid or not worthwhile, which I believe says something about my conception of replication.

    I think Biology Professor #2's desire for a category of forces is shared by a number of great ontologists! Of course, what is meant by forces.... my word!

  9. From the very outset it is easy to see how this article relates to the topic of our class. Concerns about the validity of a measure are never something to take lightly. If an instrument isn't measuring what you are attempting to get at, your results are meaningless (as the authors note). Attempting to evaluate EC through a different lens than self-report gives researchers a way to evaluate the claims of EC (and the models of the theory itself).

    The qualitative approach employed by the researchers is interesting. As they note, it focuses on the meaning made by participants without being closed off by theoretical assumptions. This model is interesting in that it can only be semi-exactly replicated. While the focus on getting at participant meaning is important for this specific line of work, the interviewers' freedom to abandon the protocol in pursuit of better understanding leaves some of the results to interviewer savvy. Most researchers engaging in this form of research will likely be trained well enough to achieve the same goals, but it is still important to consider how interviewer bias and performance can affect results.

  10. Not having much of a significant background in either education or philosophy, the concept of epistemic cognition was a relatively new one to me— but part of that unfamiliarity made the article more interesting to me. It was a bit difficult to push through much of the jargon surrounding the research, but my main focus was to understand the conceptual points that were trying to be made—and how that fit into this framework of replication that we have been building over the semester. I thought that the concept of replication was particularly relevant in this article, insofar as it challenged the self-report methodologies as not being wholly internally valid, and attempted to introduce a new theoretical paradigm.

    And it seems like they could expand this domain to more than just biology and history, as well. Interestingly, though, I found that perhaps the sampling relies on middle school students who are somewhat adept and might have a higher level of reasoning than a standard “novice.” I’d like to see this replicated with a larger and more diverse sample, because I think this factor of epistemic cognition might be positively skewed relative to students who do not have the same sort of background. There could also be differences among adults who are novices in an area, which might be accounted for by different stages of brain development.

  11. I can see how this paper is relevant to our class, though at times I thought it was a little esoteric. The term “epistemic cognition” was a new one to me, though, so that may be why I found it to be that way. I found it interesting that professors from different backgrounds had similar viewpoints. I wasn’t really surprised by the fact that middle schoolers tended to view rote knowledge as most important. After all, today’s education systems typically value this type of knowledge, since it lends itself to standardized testing. I was glad that this tendency was not completely standard across the group, though. It made me hope these deeper-thinking students may even be future professors or researchers.

    It was interesting how the paper took a unique approach to testing EC. I agreed with its criticisms that preexisting measures of EC are flawed, in that they may “trick,” in a way, students with relatively advanced EC into responding as “naïve.” I think that the way they point this out (agreeing to looking for facts was their example, I believe) supports the idea that school systems tend to promote less advanced EC ways of approaching problems. Students have to pay close attention to facts to do well in school, so even if students have advanced EC, they may rate low on that question. And, of course, this would be one of many times when a question could miscategorize a student. I found this fact a little bit annoying, since much of my own education has been “memorize this.” Of course, I’ve tried to go deeper and actually grasp the concepts, but the fact that most of my pre-college and many of my undergraduate classes were this way leads me to believe there is some inherent flaw in our education system.

  12. In this research, the researchers adopted an interview method to measure the degree of epistemic cognition of middle school students and university professors from two different domains: biology and history. They chose interviews in order to avoid problems with self-report methods, such as poor reliability and validity.

    Since the topic of epistemic cognition is completely new to me, and articles of this kind tend to be long and dense, it was actually difficult to understand. I also have a doubt about whether these interview methods are really distinct from self-report, because, in the end, the participants still answered with their own minds, feelings, and thoughts in response to the interview questions.
    The results on domain commonalities and differences were especially interesting to me, since they were consistent with my experience that people, especially in academia, use the same language in different ways according to their field (in this article, their domain), and this phenomenon seems to become more pronounced as they become more expert in their own field.

    The topic itself, the nature of knowledge and knowing, was quite interesting to me because it is one of the topics that led me to study language and the brain. In my case, I am interested in how the human brain organizes and builds sophisticated thought, and in the correlation between language and thought. In terms of replication, I actually think it would not be easy to replicate this concept in a different field. But if I were to try, I would like to go deeper into domain commonalities and differences and see how differently language is used across domains.

  13. The authors mention that they found ways that EC models could be altered to better match the ways that experts and novices think about knowledge and knowing, but it is challenging for me to be on board with such a conclusion when they have interviewed such a small number of people who could perhaps be anomalies. This was in the back of my mind throughout the whole article. In the discussion section I was glad to see that the authors acknowledged that they had a small sample, and they mention that the experiment’s purpose was not to make generalizations.

    One of the things that the authors propose is increasing item specificity when dealing with people in different disciplines. This sounds challenging, though, because every time you want to examine a new discipline, how do you go about developing the appropriate ways to ask about it? If you want to use a questionnaire, do you have to conduct initial qualitative interviews like those in this study in order to develop appropriate questions? While that may be the ideal, I am not sure that something like this is likely to occur. There is also the additional problem of control: items that pertain to one discipline are going to differ from items that pertain to another, which would make it more challenging to compare findings across studies that look at different disciplines.

    Because previous research has had a difficult time finding evidence of ECs before college yet this study saw it in 8th graders, it would be interesting to see this type of qualitative interview research done with children earlier than 8th grade to see if perhaps this is a better method of assessing their beliefs. It would also be interesting to see qualitative interviews done with the teachers of the middle schoolers in order to look at how the beliefs of the teachers might be affecting the beliefs of the students.

  14. It is interesting to read qualitative research, because a lot of the articles I read are quantitative. The rationale the authors provide for why such qualitative research is needed is very convincing, because one of the major goals of this article is to assess the ways in which current EC models did and did not align with novices’ and experts’ EC. Whether or not the EC instruments measure what they are supposed to measure is crucial for developing our understanding of EC. I have been concerned about the validity of self-report questionnaires since I took a course on the evaluation of measurement, because error increases as validity gets poorer. Thus I think it is very important to have studies like this article to discover whether there are limitations in a model or paradigm. In terms of improving a conceptual model or increasing the validity of an instrument, it is helpful to apply the exploratory nature of qualitative methods to discover what is not covered by the current understanding of a specific topic.

    In terms of replication, this general idea of applying qualitative research to improve the poor psychometric qualities of instruments can be replicated in many areas of study. In this case, since Hammer and Elby (2002, 2003) said that learners display different epistemic behavior based on the context or situation, I think it would be interesting to see EC examined in domains other than biology and history. In addition, I also wonder whether there are some differences based on cultural expectations or values across countries. For example, when thinking about the means of justification, teachers and textbooks may be the major sources for Chinese students. Moreover, one limitation of this article is the small convenience sample, so I think it would be helpful to include more people from different backgrounds and at different ages.

    ReplyDelete
  15. I found the article this week to be highly interesting; I personally enjoy research regarding epistemic beliefs and cognitive processes. I found it encouraging that they brought up reliability and validity right from the start, since typically these are terms we see towards the end of an article. While difficult, I did find their attempt to improve upon prior research inspiring. They identified a problem in a certain research area and tried to find a solution or improvement. I think this is something many researchers hope to achieve, but often you read more about replicating studies where the results weigh in your favor. I found the selection of participants to be the biggest issue of the study. I don’t understand how they could have selected faculty members, more specifically college professors, that have “little to no background in education”. Obviously these professors have at least a general understanding of education. Later in the paper, the researchers ask the professors “what they believed to be the important kinds of knowledge in their domain”, implying with the question itself that they have some sort of educational background. I am hoping this can be cleared up in the discussion or the presentation, as it raises serious questions about the participants in the study.

    I did find their design to be interesting, yet I wonder how big of a role time plays in these epistemic beliefs, especially in middle schoolers. Time is clearly not something we can avoid, but I would like to know how it was accounted for or considered in this study. I did find it very encouraging that middle schoolers were showing “sophisticated epistemic cognition”; I find child development fascinating and think this is a great example of moral development in the making. From what I remember, moral development takes place in many ways and begins at a very young age, but I was surprised to see middle schoolers meeting these expectations. I think it would be interesting to do this study exclusively on children in different domains at different stages of moral development to take a closer look at when and how this type of cognition begins to take hold.

    ReplyDelete
  16. This type of research seems very difficult to replicate since it is so person-dependent by the nature of it.

    However, I'm not sure what their case-study approach is going to solve in regards to the replication problem that they previously motivated. Sure, their more in-depth method will get them more qualitative data about the few participants they studied, but will that replicate at all? How does it align with the self-report instruments?

    What I think they should have done is use the standard self-report instruments for EC while also doing their interviews. This way we could learn more about how these instruments align with the more qualitative data.

    It is basically standard in an HCI paper to do a quantitative comparison (gotta get dat .05!) followed up with some qualitative analysis (what were people *really* doing during the experiment + interviews) on a subset of the participants. Is this not common in EC research (I've only read a few papers)?

    ReplyDelete
  17. I found the article this week very interesting. I think this qualitative research is fascinating, particularly in terms of conceptual replication. I do think, however, that the use of semi-structured interviews leaves much room for interviewer or researcher bias. There is always the risk that, in changing how interview questions and probes are worded, the interviewer can subconsciously bias the answers in a particular direction. I also think that the fact that teachers nominated the student participants is problematic. While I understand that, due to the nature of the interviews, it is ideal to talk to students who are well articulated, this is not necessarily representative of middle school students as a whole. This raises issues of generalizability, and in regards to replication, it's possible that these results are confounded by selection bias. I also think that there is a discrepancy between how students in the US school system are taught and the development of EC. For example, in my experience, most of my schooling in elementary, middle, and high school focused on granular, fact-based knowledge. For essay questions on a test, for instance, I was often explicitly told in advance what would be on the test. If this model of teaching is used everywhere in the school system, then it’s very possible that this is how students come to view knowledge.

    ReplyDelete