Research in psychology


What counts as good quantitative research and what can we say about when to use quantitative and/or qualitative methods?

1. How interpretation enters into inquiry

To set the stage for discussing the scope of good quantitative research, I will briefly reconsider the role played by interpretation in the process of inquiry. In my position paper, I argued that when we try to understand psychological phenomena, we have to take as bedrock the practices in which people are engaged. These practices are concretely meaningful in a way that cannot be explained by other, supposedly more basic, terms. I also pointed out that this idea is very closely linked to the view that psychologists themselves are participants in the world of practices. Inquiry in psychology is itself practical activity. As I discussed elsewhere at some length (Westerman, 2004), practices of inquiry in our field are based in part on the ways in which we learn about things in everyday life (e.g., a teacher trying to discover the best way to teach 7-year-olds how to read) and also on the practices in which we participate in our lives in general (e.g., all the practices in the given culture in which reading plays a role). Two points follow from this view of interpretation that will provide the basis for my responses to issues raised in the commentaries. The first point is that research in psychology is irreducibly interpretive. It cannot be a transparent process of learning what human behavior is really like in a final sense, the kind of understanding an uninvolved subject might garner from a removed point of view. On this point, all three commentaries at least appear to agree with me. Stiles directly asserts his agreement with this idea, and Dawson et al. and also Stam argue against the notion that research can provide us with a "view from nowhere."

The second point that follows from what I have said so far concerns what it means to say that research is interpretive. Most often, calls for an interpretive approach to research (for example, by proponents of qualitative methods) emphasize the subjective appreciation of meanings. We see this in the fact that almost all qualitative studies are based on interviews aimed at learning about participants' subjective experiences. But this approach also appears when we go beyond interview-based research and consider efforts that emphasize the investigators' views of the phenomenon of interest, for example, themes they identify in their research.

In contrast to this focus on how we think about or experience things, my understanding of interpretation emphasizes how research irreducibly refers to how we do things as participants always already engaged in practical activities. As I discussed in my position paper, my approach centers on the role played by prereflective understanding, or a familiarity with things that is prior to any efforts aimed at thematized knowledge. In our everyday example of figuring out how to teach a class to read, the teacher's investigation takes place against the background of his or her sense of what counts as progress (e.g., reading with some indications of comprehension, unless this is a class in reading Hebrew aimed largely at preparing students to sound out words in order to recite prayers in the synagogue). This background is not primarily a matter of how the teacher thinks about things. One way to put it is that the relevant background is what comes prior to what the teacher thinks about.1 This point holds for psychological research as well. The process of inquiry is always embedded in our ways of life. Research is indexical in the sense that every aspect of what we do as investigators, including what we take as important problems to explore and what we learn from our inquiries, always refers beyond itself to our prior involvement in the world of practical activities. Although it is not clear to me what Stam meant when he said that both Yanchar and I used the term "interpretation" in two different ways, for me, the use of the term that refers to investigators' prior familiarity with practices (which may be what Stam, 2006, refers to as a "rather ordinary" use of the term) is the crucial one.

I should note that although Stiles and I agree on many points, my guiding perspective is quite different from his experiential correspondence theory of truth. Stiles focused on what seems to be a subjectivist matching notion: "A statement is true for you to the extent that your experience of the statement corresponds to your experience of the event (object, state of affairs) that it describes." He talked about good research as inquiry that effectively shares experiences. As I see it, these ideas depart markedly from a view of research as practical activity, although Stiles (footnote 2) also said he agreed with this view. For me, the key criterion of truth is pragmatic (i.e., what works, but taking this in a broad sense that includes whether something we believe we have learned contributes, not necessarily in any simple, direct way at all, to our ways of life) and research, ultimately, is not learning the way things (including my experience of things) are, but an activity that is part of doing things.

2. If not real measures, then what?

Stam argued that my view of quantitative research is problematic because such research should be based on "real" measures, that is, assessments that refer back to some concrete feature of the world, whereas what I call measurement amounts to nothing more than simply assigning numbers to things. As I noted at the outset, Dawson et al. similarly advocated the value of adhering to the classical definition of measurement, although they expressed much more optimism than Stam about the possibility of developing such strong measures.

I believe that it is not possible to develop measures that meet the criteria for real measures and that we should not aim to develop such measures. These claims follow directly from the first point above about interpretation. All research is interpretive, and this certainly includes the key research process of measurement. While it may or may not be the case that the classical notion of measurement can and should apply in some natural sciences, it does not apply in research in the human sciences. But, we can employ measurement procedures and, therefore, make use of quantification in our investigations so long as we understand what we are doing in a novel way, which could be called a different theory of measurement.

Here, the second point about interpretation comes into play. We can make use of measurement so long as we recognize that our measures are indexical, that is, interpretive in the sense that they always refer beyond themselves to our prior familiarity with practices. Such measures can be of very different kinds, which, very roughly speaking, mark out a continuum ranging from the very concrete to the obviously meaning-laden. Measures of decibel levels lie quite far to the concrete end of this continuum, the coding category "yells" moves away from that end, and global ratings of "behaves in a hostile manner" lie well to the meaning-laden end. Note, however, that because all measures are indexical, all points along this continuum are ultimately both concrete and meaningful. They are all assessments of phenomena of interest that, in varying ways, concretely specify those phenomena while at the same time reflecting the fact that the concrete specifications are never exhaustive. A measure on the concrete end of this continuum based on decibel levels might be the dependent variable in an experimental paradigm within which high-decibel verbalizations are examples of angry behavior. At the other end of the continuum, global ratings will be based on a manual that uses concrete examples to define the phenomenon of interest.

Given this theory of measurement, I think that it is misleading to say, as Stam and Dawson et al. claimed, that I call for "weak" measures rather than "strong" ones. As I see it, I am offering a different framework that incorporates many measures that might well be called strong measures (those near the concrete end of the continuum), although they do not conform to the classical definition of measurement. Stam argued that my position would lead to confusion in the field. He characterized it as calling for an arbitrary process of assigning numbers to events, asked us to imagine a world where we each developed our own measures of length or temperature, and cited the multitude of personality measures that exist as an example of how things have, in fact, already gotten out of hand. I do not find these arguments convincing. For one thing, any concern about diversity of viewpoints surely holds at least as clearly for qualitative research, which Stam supports. Moreover, while I agree with Stam that cooperation in the field is desirable, I do not believe my position works against it. My second point about interpretation is relevant here. I am not advocating inquiry that is interpretive in the sense that it is based on however an investigator happens to think about things. As I suggested in my position paper, practices of inquiry are relative to investigators' prereflective understanding, but they are not arbitrary. Interpretive inquiry does not lead to a problematic free-for-all by any means. Only certain ways of proceeding will prove to be useful for people who are participants in a shared world of practical activities (see Sugarman & Martin, 2005). Furthermore, investigators can make their procedures public and cooperate in using one set of measures when that seems useful in a given situation, even if they can never fully explicate the procedures because they always refer beyond themselves to the background of the shared world.
It is true that there is likely to be a diversity of approaches to any given issue, but this is desirable. Diversity in measures and other research procedures often is a function of differences in research goals (see Westerman, 2004, p. 137). Fundamentally, diversity in approaches is both good and necessary because investigators in psychology address issues that do not have final, determinate answers.

3. How is interpretive quantitative research helpful?

Even if employing interpretive quantitative measures does not have the downside of leading to a confusing free-for-all, we can still ask, along with Stam, whether there is something to be gained by using numbers in our investigations. As I pointed out in my position paper, I agree with researchers who embrace positivism about some of the useful features of quantitative measures and quantitative research procedures in general (e.g., they enhance our ability to investigate group differences without being unduly influenced by dramatic instances of a phenomenon). I want to mark out an additional basis for appreciating what quantitative methods have to offer.

In my position paper, I argued that quantitative research procedures can make a special contribution because they require us to concretely specify our ideas about psychological phenomena. I endorsed such measurement procedures as relational coding, which could be called soft measurement, but I also discussed how what could be considered strong measures and related quantitative procedures (e.g., coding discrete behaviors, conducting experiments) also offer useful ways to concretely specify phenomena of interest, although I argued for reconceptualizing these methods as interpretive procedures and recognizing that they do not exhaustively specify the constructs and processes under investigation. Now, I want to extend my analysis of the ways such apparently strong measures and procedures can be extremely helpful. To begin with, apparently strong procedures can be highly informative about particular situations that are of interest in connection with particular applied problems.

For example, consider Wood's (e.g., Wood & Middleton, 1975) paradigm for examining how mothers scaffold their children's attempts to learn how to build a block puzzle, which I referred to in my position paper. That paradigm includes a clearly delineated procedure for identifying the specificity of parental bids at guiding a child. Although the goal is to explore a relational process (i.e., do mothers home in and out contingently as a function of the child's moment-to-moment success), the specificity measure does not rely on relational coding. Instead, each bid is coded based on its own properties. Investigating parent-child interaction in this specific situation has been shown to have applied utility. In a study I conducted (Westerman, 1990), assessments of maternal behavior in the context of Wood's paradigm discriminated between mother-preschooler dyads with and without compliance problems. In an experimental study, Strand (2002) found that teaching mothers to home in and out when they show their children how to build Wood's puzzle leads to greater child compliance in a separate context.

Apparently strong quantitative methods also can lead to the discovery that specific, concrete forms play a role in many situations, not just the original measurement context. For example, Strand (2002) found that the specificity scale was useful when applied to a task other than Wood's block puzzle. Similarly, we might find that measures which were initially employed in particular structured observation contexts, say a measure of verbal aggression based on decibel levels or, more likely, a measure of activation in a certain part of the brain, identify specific concrete forms that play a particular role quite generally. Merleau-Ponty (1962) used the term "sediment" to refer to concrete forms of this sort. Sediment often plays a part in psychological phenomena, and apparently strong quantitative procedures can be very helpful because they enable us to learn about these aspects of practical activity.

Two qualifications are in order, however. First, even when one aspect of a phenomenon of interest typically takes a specific concrete form, we need to recognize that it is part of a larger, meaningful process. For example, even if Wood's specificity scale worked in all contexts (which is extremely unlikely), it would be crucial to appreciate the role that the specificity of maternal directives plays as part of doing something, that is, teaching a child. It is not specificity per se, but the modulation of maternal efforts as a function of the child's success at what he or she is doing that is crucial. The second qualification is that there are always limits to the ways in which specific concrete contents function in a particular manner. It is useful to discover that a certain area of the brain typically functions in a particular way as part of what a person is doing, but another area might play this role under particular circumstances, perhaps due to brain plasticity. Apparently strong quantitative studies can be helpful here, too, because they are useful for marking out the relevant limits.

Such research has other benefits that can be considered the flipside of the advantages I have mentioned so far. Studies employing apparently strong quantitative procedures can help us understand psychological phenomena in terms of richly generative principles, because quantitative measures such as discrete behavior codes provide concrete examples of meaningful constructs and quantitative procedures like experiments constitute concrete examples of meaningful processes. For example, research employing Wood's paradigm suggests the general principles that homing in and out is a crucial feature of parenting and that this process refers to modulating the specificity of parental bids. Apparently strong quantitative methods
