Tuesday, June 13, 2017

June 13---Bonus Post----The Special Edition of Ed Researcher

Now that you have heard about most/all of the articles in the special issue on ed. research, post a comment about what they do (or don't) have to do with what we've been talking about in class thus far...

15 comments:

  1. Kendra - I read the Gutierrez and Penuel (2014) article on conceptualizations of rigor as it relates to educational research. I appreciated this article for its linking of research to practice. It felt timely given our discussion of Pring’s assertion that theory and practice are more closely related than researchers often propose.
    Gutierrez and Penuel highlight the falsity of a “scientific culture” of research as a fix for the apparent lack of direction within educational research, due in large part to the nuances inherent in education both as a structure and as a field of inquiry. What the authors propose is that relevance, rather than “science,” should be the criterion used to assess the value of a research study. When is a study best positioned to impact a field? When it pursues insight into the pressing and relevant questions and situations experienced by practitioners; at least it seems that way to me (understanding that research could lead to exploration of circumstances that have not necessarily been expressed, but are deemed worth exploring). The development of this sort of reflexive relationship leaves room for variation in methodology and approach, due in large part to a merging of stakeholders. Proactive collaboration is a more efficient way of ensuring that research and practice are consistently moving in the same direction. Additionally, the complexities of educational systems can be considered and contextualized in a way that allows for more intentional focus on disciplines within the field of education independently, as well as in relation to one another.

    ReplyDelete
    Replies
    1. Michelle Boulanger Thompson June 19, 2017 at 8:04 PM

      This is Michelle-
      Kendra, I completely agree! I also read the Gutierrez article and identified with the authors' view, definition, and application of "rigor" as being as important as, if not more important than, the purity of science we talk about seeing in "quadrant 1" research. I also see their view of "relevance to practice" as more practical and related to the real-life problems we see in education, and I appreciated how they describe in-depth qualitative educational research as dynamic, with interventions "as contested spaces". I had not heard the term "contested spaces" previously, and it speaks to me because I see education and educational research as a dynamic, interactive, and complex "relationship" leading to transformative change. I feel like this is the missing piece that Pring speaks about, the missing link of research-to-practice. Gutierrez talks about the importance of theory being relevant to organizing practice in order to generalize the intervention beyond the research study itself. Personally, in my own experience working in K-12 schools and early intervention, I have found tying theory to practice to be both necessary and practical for change to occur. As Gutierrez states, asking "What works?" (for whom and under what conditions?) guides the research best and allows us to better define "rigor" and relevance to practice.

      Delete
  2. In his article, Phillips gets into the hard/soft debate that we've been concerned with over the last few weeks. Phillips claims that soft research (such as educational research) really isn't all that different from hard science research. His point is essentially that competent research, regardless of its area, has a lot in common, and the dissimilarities are rather minor when considered against these similarities. That said, he notes that the biggest shortcoming of educational research (particularly when compared to hard science research) is its lack of predictive power, which seems to me like a pretty big shortcoming. This lack of predictive power is a function of educational research being (to some extent) dependent on context, with all variables being relevant in a given situation.

    Part of me is inclined to like Phillips' argument about soft and hard sciences being similar, but I'm not sure I buy it. Humans and chimps share something like 98% of our DNA, but that 2% difference is really important. Even if soft and hard sciences are remarkably similar in a lot of ways, I think the differences are more remarkable than Phillips claims.

    ReplyDelete
  3. The Wieman article proposes that there is similarity between the hard sciences and educational research. It refers to research in the hard sciences as messy, a label usually attached to educational research. It also alludes to a similar construct of truth or validity when he speaks about predictive power, if I am understanding it correctly. Wieman suggests that the accuracy of the predictive power found in the results comes from the process by which the design is carried out. This is similar to Pring’s claim that objectivity is played out in the research process. However, the problem, as Phillips presents it, is that educational research rarely has predictive power and therefore is not considered quality research. This goes back to the debate over whether educational research is scientific research. The information obtained in educational research is difficult to generalize to varying populations. Because educational research is contextual and considers many factors, it is difficult to say that one result would be the same with a different population, at a different time, etc.
    These articles, in my opinion, continue the discussion of reality, truth, scientific vs. non-scientific, empirical vs. non-empirical, quantitative vs. qualitative, and sciences vs. education, but with different perspectives, possibly leaning more in favor of how to bring educational research closer to the scientific, empirical, and valued side of things while also including the practice or applicability piece.

    Marsha

    ReplyDelete
  4. Evandra - Rudolph’s article definitely addresses issues we have been discussing so far this summer. It is clear that within the policy context there is tension surrounding what type of research should be conducted within education. Rudolph discusses how choosing one type of research design or method is problematic for the field of educational research because it naturally omits other types of research that can tease out some of the social and contextual factors educational research seeks to address. In 2001, NCLB set the RCT, a method born out of medicine, as the gold standard for research. I agree with Rudolph that an attempt to move education toward a hard science research model, where we try to measure and control key variables, is bound to fail at the societal level. He further states that if we hold an experimental model up as the standard, a good deal of other types of research will be viewed as deficient by comparison. Also, to Pring’s point, this article seems to touch on some of the quant/qual debate without explicitly saying so.

    ReplyDelete
  5. Wieman's article in essence tries to make the argument that educational research is not as different from hard science research as some theorists have made it out to be. He first points out that the cutting edge of the hard sciences can be very messy and less straightforward than many education researchers may have thought, but no matter how messy or unpredictable, one can measure the research results through their predictive power. Wieman reminds us that even failed and contextual results can be predictive. I agree with Wieman that the hard sciences and the softest education research both have value and, when done well, can help direct us in research, policy, and practice. Where I disagree is in his characterization of objectivity. He claims that to keep our biases from corrupting the research we must acknowledge them and use methods to prevent them from doing so. This is the kind of thinking that Pring touts in his book - the idea being we can achieve a kind of objectivity because there is an independent reality. I argue instead that there are no tests and procedures that can fully dissolve our inherent biases. This kind of thinking leads to research that is based on unexamined, sometimes damaging values and assumptions. The best we can do is to acknowledge our positionality and be upfront about our biases (that we are aware of) and contribute our work to the larger body of knowledge knowing that it cannot be objective or comprehensive. That is not to say that we cannot try to “prevent” our biases by using tests and procedures; however, we cannot do so without a great deal of transparency and an admission that our results cannot be completely objective or universally applied.

    ReplyDelete
  6. Amy-- The article by Gutierrez and Penuel discusses the advantages of using a broader set of principles to define “rigor” in educational research, namely its significance to practice. Their viewpoint suggests a more collaborative approach between researcher and practitioner, which we have discussed in class. The authors contend that research documentation should include “…how students and teachers change and adapt interventions in interactions with each other in relation to their dynamic local contexts”. Approaching education research in this manner exhibits an appreciation for the autonomy of teachers and the flexibility required of them in the classroom setting. It seems this approach would also cut down on the fragmentation of educational research and the number of unsuccessful research projects. It would keep the research at a broader level and rely on the teacher’s knowledge and experience of their student population.
    The article also addresses the need for documented agreement among stakeholders regarding a research study’s “worthiness” and for a research plan agreed upon not just by the researchers. Gutierrez and Penuel suggest measuring the value of research according to those who would be responsible for implementing the resulting programs. There are some good suggestions throughout the article. However, it is unlikely that this method would be agreed upon by all interested parties.

    ReplyDelete
  7. Phillips tries to argue that although educational research does not fit within hard science, it is hard, if not harder, in its own right. Since researchers have race, SES, ethnicity, and home life to consider in their work, they have to be aware of and try to account for more factors than hard scientists, who try to eliminate as many variables as possible. He alludes to it, but I keep thinking that even as the hard sciences create controlled experiments with as few confounding variables as possible, the results miss the context and reality of life. When are the conditions ever going to be replicated or be the same as in the experiment? Never. It feels as though the results are presented as this great irrefutable truth, but in reality the results have a sense of falsity and limitation because the way they were determined was so controlled.

    These articles seem to fit well with what we have been debating: where does Ed research fit in the research world? We have seen and argued for inclusion and more appreciation and acceptance of Ed research as scientific and valid. I think where I am right now is that the debate seems futile, as Ed research and the hard sciences are so different. It feels like we are trying to compare apples and oranges. While they have similarities, they are extremely different, and it does no good to compare them. To increase our validity and standing, I think we need to maintain rigorous methods and procedures that fit our research focuses and keep us honest about the results we find.

    ReplyDelete
  8. Stephanie - Phillips continues the discussion of hard versus soft research. He lists several similarities and differences between the two and makes a case for educational research actually being more difficult than research in the hard sciences. He also indicates that “the issue of whose field should be taken as benchmark and whose should be a copy” (p. 9) is irrelevant. His concerns seem to be more with quality research than with the topic of the research.

    This is very fitting with our discussions up to this point and I think that he makes some interesting points to qualify his position. I agree with Eric that the lack of “predictive power” is a concern. However, his arguments regarding both the difficulty of dealing with the confounding variables and the importance of educational research seem valid. While I am not certain that these points were enough to claim that it is more difficult than scientific research, I am comfortable with the idea that they are both important and the battle for superiority isn’t necessary.

    ReplyDelete
  9. Southerland, Gadsden, and Herrington discuss what warrants "high quality" research. One criterion they noted was that quality research needs to be able to respond to the tensions between theory and practice. They also noted the different commentaries that other scholars brought to the discussion.

    One of the points they allude to is that even the hard sciences, especially "cutting edge" science, are not as precise as we want them to be. Areas like education and the hard sciences can actually have a lot of similarities, despite what others believe.

    ReplyDelete
  10. These articles definitely mirror what we have discussed in class, the hard vs. the soft sciences and the pure vs. applied sciences. We discussed how those in quadrant one may not value the research of quadrant three and that education research is always on the defensive. There seems to be an underlying, or sometimes direct, theme in these articles of hard science vs. soft science, educational research vs. scientific research, and relevant research vs. nonrelevant research which makes them very relevant to what we have discussed in class.

    In the Gutierrez article, the authors claim that in order for research to be relevant, it must be applicable to practice. Wieman attempts to validate educational research by comparing it to the hard sciences. In the Southerland article, again, the quality of educational research was in question, and once again educational research was defended by comparing it to scientific research. However, in both the Rudolph and Phillips articles, the authors acknowledge the benefits of scientific research but argue that its methods are not always appropriate ways of measurement when dealing with matters of education.

    ReplyDelete
  11. Phillips writes that the confounding variables in the hard sciences can eventually be controlled, though it may take some time. In the social sciences, however, achieving this level of control would cost us all ecological validity. The situation would be so contrived that it would no longer resemble the social world we study and attempt to improve. He writes that “learning is a phenomenon that involves real people who live in real, complex social contexts from which they cannot be abstracted in any meaningful way.”
    That is what separates the “quadrant 3” research we have been discussing. We must consider the context in which learning takes place. The confounding variables we face are vast and relevant. Instead of being controlled, these variables must be considered, making the work much more challenging.
    Paige Carter

    ReplyDelete
  12. In his article, Rudolph explains the attraction of scientific research to policymakers, who are frustrated by the apparent ineffectiveness of educational ‘fads’--which can here be understood to refer to the studies upon which successive legislation such as NCLB, ESSA, etc. draws. Policymakers (like most of the public and much of academe, too) equate 'research' with the scientific method, and understandably want to feel reassured that they're getting a return on investment where they're directing taxpayer dollars.

    As Rudolph notes, though, in ed research there are more variables than can ever be controlled, and additionally, what is considered valuable and useful in education is a moving target: “what we might count as learning today—and certainly what society deems worth learning—is not likely to be the same 20 years from now” (16).

    Rudolph seems to be arguing here for not just the validity but the importance of qualitative (but still rigorous and empirical!) research in education. As Wieman notes, we simply don't know enough about the many factors influencing learning and more exploratory/descriptive studies--of the sort that were performed and published 100+ years ago in the "mature" sciences that today serve as the paragon of rigorous research--are needed. Unfortunately, as Gutierrez and Penuel point out, "The problems that researchers initially think important to address are not likely to be the same ones that diverse education stakeholders perceive as important” (20). Because of policymakers' concern for responsibly spending taxpayer dollars and making demonstrable improvements, such exploratory/descriptive research is therefore unlikely to be funded by government sources.

    ReplyDelete
  13. Wieman, adding to Kim’s summary, makes the case that cutting-edge hard science research is similar to ed. research in that both require researchers to make decisions about numerous factors and methods for controlling them. He states that this makes the research complicated and messy, but argues that even the more mature sciences that appear so clean and controlled had to confront this dilemma at one point. Wieman describes how physics research appears clean because it has worked out its messiness over time. I find his argument problematic because he seems to believe that if we just do this ed. research thing long enough, eventually we will find a way to control for all of the factors and complications we struggle with today. I am not so sure that is likely. From our conversations in class it appears that we have multiple viewpoints and approaches to ed. research, and we are a bunch of grad students in an education program! It seems unlikely that we (the collective, societal we) would be able to come to an agreement on what variables are most important to control, even if we were able to scientifically control for all of the factors impacting education. However, I suppose if ed. research becomes scientifically based to an extreme, this is how it would be clean and have predictive power.

    Adding to Wieman’s troublesome argument is his discussion of how good researchers “recognize, with limited information, which variables are relevant…[and] measure and/or control those variables that matter” (p. 13). Who are these researchers that are determining which variables matter? And how are they deciding what variables matter over others? Aside from these basic clarifying questions, this discussion ties into our own regarding relevant research, specifically how the aims of research are often dictated by those outside of education, and how studies can be designed to “get” desired results. My worry is that certain variables, such as motivation and efficacy are more difficult to measure, and these can be factors that certain researchers do not believe matter.

    ReplyDelete
  14. Tom here--The essay by Gutierrez and Penuel felt more practical than theoretical to me. I appreciated their attempt to do more than just comment/expound on the difficult circumstances of ed research. I don't know that I entirely agree with the idea of relevance as a measuring stick for ed research. However, they got me thinking about the pros and cons of the idea. The other authors crafted excellent points, but I was most drawn to "Relevance to Practice as a Criterion for Rigor". It offered a meaningful suggestion for how to change the current state of things--which I appreciated.

    ReplyDelete
