CURRENT RESEARCH IN SOCIAL PSYCHOLOGY


Volume 6, Number 13
Submitted: June 6, 2001
Resubmitted: June 20, 2001
Accepted: June 20, 2001
Publication date: June 21, 2001

EFFECTS OF PROTOCOL DIFFERENCES ON THE STUDY OF STATUS AND SOCIAL INFLUENCE [1]

Lisa Troyer
The University of Iowa

ABSTRACT

I discuss the role of a standardized research protocol in social psychological research. Taking the standardized experimental setting of status characteristics theory as an exemplar, I discuss the theoretical implications of protocol variations. Subtle variations may have significant effects on theoretical processes, leading to unexpected empirical results, independent of theoretical variables of interest. Results of two experiments support my argument that variations in the standardized experimental setting used in recent status characteristics theory research may have affected reported findings. I note, however, that there are theoretical and methodological justifications for adjusting a protocol and offer a strategy for such adjustments.


INTRODUCTION

Experimental methods offer many well-recognized advantages for the study of social phenomena, including the potential for high levels of control, isolation of key theoretical variables, the ability to test causal inferences, and the potential for replicability (for reviews of the features of experiments in contrast to other social science research methods, see Aronson, Ellsworth, Carlsmith, and Gonzales 1990; Blalock and Blalock 1968; Christensen 1997). When experimental methods are used extensively to conduct research within a theoretical research program, it is not uncommon for experimental procedures to become standardized across studies. In this paper, I argue that variations in implementations of the standardized protocol used in recent status characteristics theory research may affect both the realization of the theory's scope conditions and rates of social influence (independent of the theoretical variables in which researchers were interested). I present results from two experiments in which features of the protocol were systematically varied. These results indicate that seemingly subtle protocol variations may have important effects on the social influence outcomes resulting from status-organizing processes. This suggests the importance of including baseline conditions in experiments where such protocol variations are necessary, in order to facilitate the interpretation of results.

According to Wagner and Berger (1985, p. 698), "Theoretical context as well as empirical context is vital to the search for evidence of theoretical development." I use the term theoretical research program (TRP) in the sense of Wagner and Berger (1985) to refer to a system of interrelated unit theories along with the research that evaluates them. An important characteristic of TRPs is that they are built upon a core set of concepts and conceptual relations. Branches and departures from the core often occur through incremental addition, subtraction, or alteration of components of the theoretical core. Examples of TRPs in sociological and social psychological research include (but are by no means limited to) expectation states theories (e.g., Berger, Rosenholtz, and Zelditch 1980; Wagner and Berger 1985), elementary theory (e.g., Willer 1987; Willer and Markovsky 1993), affect control theory (e.g., Heise 1987; MacKinnon and Heise 1993; Smith-Lovin and Heise 1988), and distributive justice theory (e.g., Jasso 1980; Jasso 1990). When a standardized experimental setting is used to test claims derived from a TRP, core concepts are empirically instantiated in experiments through established operationalizations. Incremental advances within the TRP are frequently mirrored in experimental research by incremental adjustments to components of the standardized protocol. The protocol lends efficiency to research within a TRP, since it often involves pre-established and tested operationalizations of key variables (saving researchers development and design time), and adherence to the protocol makes it possible to assess results in light of prior research (lending interpretation to findings). Protocol variations, however, may affect more than the theoretical variables of interest. They may also have unintentional effects on other theoretical components such as scope conditions, leading to more complex effects than the researcher posited.


As a means of demonstrating these effects, I will examine how variations in the standardized experimental setting used for a substantial body of experimental research in status characteristics theory may affect social influence processes (the key dependent variable of interest in this line of research). I begin with an overview of status characteristics theory, one branch of the expectation states TRP. Then, I describe the standardized experimental setting (SES) common to status characteristics theory research. This is followed by a discussion of variations in the protocol in recent research and the theoretical implications of these variations. Finally, I present two experiments in which I systematically test how these variations may affect status-organizing processes, and consequently, social influence outcomes, through their effects on the theory's scope conditions.

STATUS CHARACTERISTICS THEORY

Status characteristics theory (SCT) is a branch of the expectation states theoretical research program (e.g., Berger, Cohen, and Zelditch 1972; Berger, Fisek, Norman, and Zelditch 1977) that focuses on how the members of a group who are initially differentiated along one or more social characteristics (like education, occupation, or race) also become differentiated in the amount of influence they exercise in the group. According to Berger and his colleagues, social attributes do not merely differentiate individuals in a group; they may also reflect beliefs about the worthiness and competency of group members. These beliefs lead to the formation of a group status hierarchy, in which hierarchical status position is linked to expectations regarding the relative competence of group members. Actors in more advantaged positions in the hierarchy (i.e., higher status actors) are expected to be more competent than actors in less advantaged hierarchical positions (i.e., lower status actors). Competency expectations are manifested in the social behavior of group members. Higher status actors (who enjoy expectations of greater competence compared to lower status actors) are granted more opportunities to contribute to the group, and their contributions are viewed more favorably than those of lower status actors. As a result, higher status actors exercise greater influence than lower status actors in groups. That is, status, competency expectations, and influence may arise from attributes that differentiate social actors, provided that the attributes have not been proven to be irrelevant to the group's task. This latter condition (i.e., lack of demonstrated irrelevance) is a key component of the theory, referred to as the "burden of proof" principle. The process whereby differentiating social attributes translate into status value, differential competency expectations, and observable differences in behaviors (e.g., influence behaviors) is referred to as a "status-organizing process."


The status-organizing process that links differentiating characteristics to social influence is theorized to hold in groups whose members are task-oriented and collectively oriented. Task orientation refers to a situation in which actors are working toward a valued outcome on a task (e.g., the correct answer to a problem). Collective orientation refers to a situation in which actors believe it is appropriate to consider the input of others as they work on a task. These conditions, task and collective orientation, are the scope conditions of SCT (e.g., Cohen 1989; Walker and Cohen 1985). When groups meet these conditions, we expect the theory's claims linking differentiating social characteristics to status, competency expectations, and social influence to be supported in empirical observations. Research has shown that a number of social attributes operate as status characteristics under the theory's scope conditions, including sex (e.g., Lockheed and Hall 1976), race (e.g., Cohen 1971), beauty (e.g., Webster and Driskell 1983), educational affiliation (e.g., Moore 1968), and physical disability (e.g., Houser 1997).

THE STANDARDIZED EXPERIMENTAL SETTING [2]

The conceptual advances within the expectation states theoretical research program in general, and SCT, in particular, can, in part, be attributed to the reliance of researchers on an experimental protocol, referred to as the "standardized experimental setting" (SES). Using the SES has facilitated the comparison of results across different studies and helped researchers advance the theory through the estimation of formal models of the status-organizing processes leading to social influence that SCT represents. The result has been an enhanced understanding of critical theoretical issues like the effects of multiple consistent status characteristics on social influence (e.g., Berger and Zelditch 1977), the effects of status inconsistency on social influence (e.g., Norman, Smith, and Berger 1988), and the effects of relevance between a status characteristic and a task outcome on social influence (e.g., Wagner, Ford, and Ford 1986).

As outlined in Cook, Cronkite, and Wagner (1974) and summarized in Berger and Zelditch (1977) and Moore (1968), the SES characterizing SCT research represents a set of standardized procedures through which researchers (1) introduce manipulations of independent variables to operationalize key theoretical variables (e.g., status characteristics), and (2) assess effects of independent variables on a dependent variable of interest (e.g., social influence), while (3) employing manipulations to ensure the realization of the theory's scope conditions (i.e., task and collective orientation of social actors). It is important to note that not every detail of the experimental setting can be held constant across all studies. As new variables and processes are introduced through theoretical elaborations, researchers must adjust the SES. Yet, results are more interpretable to the extent that cross-study comparisons can be made, and comparisons are possible to the extent that only manipulations relevant to the theoretical variables differ and all other procedures remain consistent. For example, if a researcher is interested in testing whether occupation operates as a status characteristic, then she might employ an experimental design involving two conditions. One condition might involve measuring influence behavior in peer-equal interaction (i.e., a situation in which the subject believes she is interacting with another person whose occupation is the same as hers). A second condition might involve measuring influence behavior in peer-unequal interaction (i.e., a situation in which the subject believes she is interacting with another person whose occupation is different than hers). If occupation is a status characteristic, then the researcher in this example would expect to observe more manifestations of influence in the second than in the first condition.

Further interpretation can be given to the results of this hypothetical study, however, if the researcher can compare the peer-equal and peer-unequal condition results to conditions of prior studies. That is, the researcher could compare the occupation conditions with, for instance, results of research in which educational attainment was demonstrated to operate as a status characteristic. If the rates of influence in the peer-equal and peer-unequal conditions of the occupation experiment correspond to rates of influence in the peer interaction (i.e., same educational attainment) and status differentiated interaction (i.e., different educational attainment) conditions of earlier experiments, additional support is lent to the hypothesis that occupation operates as a status characteristic. More interestingly, if influence rates for corresponding conditions across the studies vary dramatically, then additional issues are raised for theoretical consideration: why would rates vary if both attributes behave as status characteristics? Perhaps social attitudes regarding the social value of different states of occupation are not as consistent as those corresponding to different states of educational attainment. In other words, cross-experiment comparisons lend interpretation and/or information to the results of a single study, and promote theoretical growth.

It is precisely this type of theoretical growth that is the basis for the SES. The SES protocol is composed of five features: (1) standardized instructions to subjects regarding the rationale behind the experiment, (2) directions for completing the experimental task, (3) experimental procedures dictating how and when manipulations are introduced (including manipulations establishing scope conditions and introducing status characteristics), (4) a type of task (i.e., binary-choice task) that provides a context in which influence can be assessed, and (5) debriefing procedures.


The SES protocol starts by instructing subjects that they are participating in a study that is designed to test a "newly discovered skill" ("Contrast Sensitivity Ability," "Meaning Insight Ability," or "Spatial Judgment Ability"). They are advised that the skill is unrelated to known abilities, like mathematical competence or artistic ability. This is an important instruction developed to ensure that subjects will not have prior beliefs about the skill and that their behavior in the setting will reflect expectations resulting from experimental manipulations (and not theoretically irrelevant prior beliefs).

The next set of instructions introduces subjects to a "partner" with whom they will be working on a task to test the newly discovered skill. Subjects do not see their partner (an important facet of control for visual factors that may affect status), but do receive descriptive information indicating that their partner occupies a higher, lower, or equal status position relative to themselves along some attribute. For example, if the subjects are first-year undergraduate students and the theoretical status variable of interest is educational attainment, then the subjects might receive information indicating that their partner is a graduate student (higher status than themselves), a high school student (lower status than themselves), or a first-year undergraduate student (equal status to themselves). [3]

Following the introduction of the partner, the instructions describe the task to the subjects. The task involves a series of binary-choice problems, with each problem representing an experimental trial. Two tasks characterize most of the experimental research using the SES: "Contrast Sensitivity" and "Meaning Insight." For the Contrast Sensitivity task, subjects choose which of two arrays of black and white rectangles has the most white area. In reality, the arrays have nearly equal amounts of black and white. For the Meaning Insight task, subjects choose which of two words from a purportedly primitive language is closest in meaning to an English word with which they are presented (e.g., Webster 1977). Instructions advise subjects that for each problem, one alternative is correct and one is incorrect. This instruction operationalizes one of the scope conditions of status characteristics theory, task orientation. Subjects are advised that for each problem they will make an initial choice regarding the correct answer without any help from their partner. After they have made this initial choice, their partner will be told of their initial choice, and they will find out their partner's initial choice. Then subjects make a final choice. Subjects are not given information about their partner's final choice and are not told whether their final choice was correct.

In earlier experiments using the SES, subjects communicated their choices by pressing buttons on a console corresponding to the alternative they selected. They received feedback regarding their partner's initial choice (and their own) through lights on a panel indicating their own and their partner's initial choice. This was handled through an Interaction Control Machine (ICOM; for a description, see Cook, Cronkite, and Wagner 1974). In more recent experiments, subjects communicate their choices by either typing a number corresponding to an alternative or clicking a mouse on a button on the computer screen corresponding to the alternative they have chosen. They receive feedback regarding their partner's initial choice (and their own) through text displayed on their computer screen. It is important to note, however, that there are two variants of the protocol used in this setting that convey feedback in different ways. In one variant (e.g., Foschi 1996), subjects are advised which choice their partner made, along with the statement that the partner "Disagrees" or "Agrees" with the subject (depending on the trial). In the other variant (e.g., Troyer 1999), subjects are only advised which choice their partner made. As I will describe later, this relatively subtle protocol difference may have effects on the realization of scope conditions, and thus, rates of social influence.
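To make the contrast concrete, the sketch below illustrates how the two feedback variants described above might render the message a subject sees on a critical trial. This is only an illustrative reconstruction in Python; it is not drawn from any published SES implementation, and all names in it are hypothetical.

    # Illustrative sketch (not from any published SES software): how the two
    # feedback variants might compose the message shown after initial choices.

    def feedback_message(partner_choice: str, subject_choice: str,
                         labeled_variant: bool) -> str:
        """Compose the feedback text a subject sees after both initial choices.

        labeled_variant=True  -> the variant that adds an explicit
                                 "Agrees"/"Disagrees" label (e.g., Foschi 1996).
        labeled_variant=False -> the variant that reports only the partner's
                                 initial choice (e.g., Troyer 1999).
        """
        message = f"Your partner's initial choice: {partner_choice}"
        if labeled_variant:
            label = "Agrees" if partner_choice == subject_choice else "Disagrees"
            message += f"  (Your partner {label} with your initial choice.)"
        return message

    # On a critical trial the partner's programmed choice differs from the subject's:
    print(feedback_message("Pattern A", "Pattern B", labeled_variant=True))
    print(feedback_message("Pattern A", "Pattern B", labeled_variant=False))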

The feedback conveying the partner's initial choice represents a manipulation that sets the stage for an examination of influence. The experimenter manipulates the feedback to reflect that the partner has made either the same initial choice as the subject or a different initial choice than the subject. Usually, different initial choices are indicated for about 80% of the trials. These trials are referred to as "critical trials" because influence behavior can be operationalized at these points in the SES. If subjects do not alter their initial choices on the binary-choice problem, this is recorded as a "stay" response. Staying with one's own initial choice corresponds to rejection of the other's influence, while altering one's choice on the binary-choice problem corresponds to acceptance of the other's influence. Over the critical trials of the experiment, then, the researcher can calculate the proportion of trials in which subjects issue a stay response, P(S), which operationalizes influence behavior.
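The operationalization of influence just described can be summarized in a short sketch. The following Python fragment computes P(S) as the proportion of critical trials on which a subject retains her initial choice; the trial records and field names are hypothetical conveniences for illustration, not part of the SES itself.

    # A minimal sketch of the P(S) operationalization: the proportion of
    # critical trials on which the subject stays with her initial choice.

    def proportion_stay(trials):
        """Compute P(S) over critical trials.

        Each trial is a dict with keys 'critical' (bool: the partner's programmed
        initial choice differed), and 'initial' and 'final' (the subject's choices).
        """
        critical = [t for t in trials if t["critical"]]
        stays = sum(1 for t in critical if t["initial"] == t["final"])
        return stays / len(critical)

    # Example: 5 critical trials, subject stays on 3 of them -> P(S) = 0.6
    example = [
        {"critical": True, "initial": "A", "final": "A"},
        {"critical": True, "initial": "B", "final": "A"},   # yields to partner
        {"critical": True, "initial": "A", "final": "A"},
        {"critical": False, "initial": "B", "final": "B"},  # agreement trial, excluded
        {"critical": True, "initial": "B", "final": "B"},
        {"critical": True, "initial": "A", "final": "B"},   # yields to partner
    ]
    print(proportion_stay(example))  # 0.6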

Before subjects begin working on the binary-choice problems, task and collective orientation are also further manipulated. This occurs through the presentation of scores of individuals and groups, ostensibly from prior research. The scores are labeled to indicate different levels of performance on the task. For instance, in Moore (1968) a score of 32-40 (out of 40 trials) by an individual is designated as "Superior," 22-30 is designated as "Good," 12-20 is designated as "Fair," and a score of 0-10 is designated as "Poor." This reinforces subjects' task orientation by reminding them that there are successful and unsuccessful outcomes, and instills the value of task success over task failure. The subjects also receive information indicating that when individuals have more time and information, they perform better, which reinforces their collective orientation. For instance, Moore (1968, p. 54) advised subjects that, "… individuals whose performances are fair or good when they do not have additional time and information are frequently capable of attaining superior performance when they do have this additional time and information." The "additional information" is the feedback regarding the partner's initial choice. The appropriateness of using this information is emphasized by telling subjects that they should not hesitate to make a different final choice if the information they receive from their partner helps them make a correct final choice.


It is notable that the manner in which the scores are conveyed has varied across versions of the SES. Moore (1968) displayed the scores for individual performance on a chart in the room in which subjects worked. Wagner, Ford, and Ford (1986) showed subjects charts with the scores on a video monitor, accompanied by an audio commentary describing the different scores. These researchers also showed subjects charts of group scores that clearly indicated higher performance levels by groups than by individuals. Troyer and Younts (1997) and Lovaglia and Houser (1996) conveyed individual scores to subjects via a computer terminal, along with text briefly noting that groups outperform individuals (though group scores were not displayed). Since the protocol components related to individual and group performance correspond to scope manipulations, variations like these may affect the realization of critical scope variables, and hence, status-organizing processes and influence outcomes.

Ideally, as many aspects of the protocol as possible should be retained in an experiment testing status characteristics theory's arguments. The only alterations made should be ones that correspond to conceptual variables. So, for instance, in the educational attainment example described above, only the status variable introduced (in this example, educational attainment) should vary; all other procedures should correspond to the protocol. Thus comparisons between the current experiment and previous research can be made, and differences between the studies can be more reasonably interpreted in terms of the conceptual variables that are introduced and manipulated.

As I noted earlier, the benefits of using a standardized protocol may be further enhanced if baseline conditions are incorporated in the experimental design. In a baseline condition, correspondence to the SES protocol should be high and no new manipulations should be introduced. This allows researchers to examine their results in comparison with results of previous research involving baseline conditions. Such a comparison affords the opportunity to adjust procedures and/or alert the researcher to any discrepancies that may threaten the integrity of the experiment as well as analysis and interpretation of study results. The combination of a baseline condition and careful adherence to as many aspects of the SES as possible lends interpretability and efficiency to the experiment.

Not all research within SCT, however, has systematically included a baseline condition against which the setting and experimental conditions can be independently explored and interpreted. All too often, for instance, the resources available (e.g., assistants, subject pay, project timelines) make it difficult to include baseline conditions, which may not be necessary in order to test hypotheses related to the proposed theoretical variables of interest. Furthermore, research that has included baseline conditions has not always systematically investigated baseline results that deviate from results obtained in prior research testing status characteristics theory. Finally, as alluded to above, in recent years the interpretation of research findings has been further complicated by seemingly subtle variations in the protocol.


The implications of variations in the SES protocol become clearer when tools related to formal modeling of status-organizing processes are used. Because SCT has relied on a standardized research protocol over the years, researchers have amassed a large body of commensurable empirical results. Meta-analyses of these studies have led to formal models of status-organizing processes (e.g., Berger, Fisek, and Norman 1977; Fisek, Norman, and Nelson-Kilger 1992; Fox and Moore 1979). These models allow researchers to make concise predictions of rates of influence behavior in experiments using the SES. In particular, Fisek, Norman, and Nelson-Kilger (1992; hereafter, FKN) developed a model that allows researchers to generate a priori estimates of P(S) values for experimental studies using the SES. For example, according to their model, when actors are status equals, we expect P(S) values of about .64. When actors are differentiated with respect to one diffuse status characteristic (whose relevance to the task has not been explicitly established), the FKN model yields P(S) estimates of about .68 for higher status actors and .59 for lower status actors. Thus the FKN model affords researchers the opportunity to examine their results in light of prior research (given that the model that generates the estimates is derived from prior research). To date, though, systematic use of the estimates has not been evident in published results of status characteristics theory experiments using the SES. An examination of recent research in light of the FKN estimates, however, reveals departures from predicted outcomes. More specifically, recent research findings indicate lower P(S) values than those predicted by a model employing the FKN parameter estimates. Table 1 provides observed P(S) values from five recent studies as well as the estimated P(S) values based on the FKN model.

Table 1. Observed and Estimated P(S) Values for Selected Conditions from Five Recent SCT Experiments Involving a Single Diffuse Status Characteristic (Fisek et al. (1992) Estimates in Parentheses)

Study                      Status Characteristic   Peer Interaction   High-Low^a (p-o)   Low-High^b (p-o)
Foschi (1996)              Gender                  .55^c (.64)        .66 (.68)          .47 (.59)
Lovaglia & Houser (1996)   Year-in-School          .55^c (.64)        .59 (.68)          .51 (.59)
Houser A (1997)            Disability              .43 (.64)          .56 (.68)          ----^d (.59)
Houser B (1997)            Disability              .48 (.64)          .58 (.68)          ----^d (.59)
Troyer & Younts (1997)     Year-in-School          .54^c (.64)        .60 (.68)          .48 (.59)

Note. The relevance of the diffuse status characteristic to the task outcome was not explicitly established in the instructions to subjects in any of the above experiments. Houser A (1997) and Houser B (1997) are two separate experiments reported in Houser (1997). In all studies except Foschi (1996), subjects were female undergraduates. Subjects in Foschi (1996) were male (in the High-Low interaction condition) or female (in the Low-High interaction condition).
^a "High-Low" indicates study manipulations led the subject to perceive self as higher status than the partner.
^b "Low-High" indicates study manipulations led the subject to perceive self as lower status than the partner.
^c Peer interaction condition was not included; value is estimated from mirror-image conditions.
^d Condition was not included in study.
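The pattern in Table 1 can be restated compactly: in every condition of every study, the observed P(S) falls below the corresponding FKN estimate. The short Python sketch below simply encodes the table's values and computes each gap; the data structure itself is only an illustrative convenience.

    # Restating Table 1 as data: how far each observed P(S) falls below the
    # corresponding Fisek et al. (1992) a priori estimate.

    FKN_ESTIMATE = {"peer": 0.64, "high_low": 0.68, "low_high": 0.59}

    OBSERVED = {
        "Foschi (1996)":            {"peer": 0.55, "high_low": 0.66, "low_high": 0.47},
        "Lovaglia & Houser (1996)": {"peer": 0.55, "high_low": 0.59, "low_high": 0.51},
        "Houser A (1997)":          {"peer": 0.43, "high_low": 0.56, "low_high": None},
        "Houser B (1997)":          {"peer": 0.48, "high_low": 0.58, "low_high": None},
        "Troyer & Younts (1997)":   {"peer": 0.54, "high_low": 0.60, "low_high": 0.48},
    }

    for study, values in OBSERVED.items():
        gaps = {
            cond: round(FKN_ESTIMATE[cond] - obs, 2)
            for cond, obs in values.items() if obs is not None
        }
        print(study, gaps)   # every gap is positive: observed P(S) < estimated P(S)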


As indicated in this table, each condition in these studies generated lower P(S) values than the Fisek et al. model predicts. Each study, although based on the SES, included subtle protocol variations. For instance, Foschi (1996), who was interested in exploring the effects of status variables on performance standards, did not include information on scores received by actors in previous studies. As noted above, conveying scores represents a common manipulation intended to evoke task orientation and collective orientation among actors. Lovaglia and Houser (1996), Houser A and B (1997), and Troyer and Younts (1997) did include information on scores, but in an abbreviated form. Another important feature of the studies in Table 1 is that they all relied on the method of feedback in which differences between the subject's and the partner's initial choices were labeled as "disagreements." How might such variations affect status-organizing processes? In the next section, I explore the theoretical implications of these protocol variations for the status-organizing processes posited by SCT.

THEORETICAL IMPLICATIONS OF PROTOCOL DIFFERENCES FOR STATUS-ORGANIZING AND SOCIAL INFLUENCE PROCESSES

The protocol components related to individual vs. group scores and feedback representing the partner's initial choice may affect both task and collective orientation among study subjects. First, the extended discussion of scores and performance levels may generate an enhanced sense among subjects that they are working on a task that has known success and failure outcomes. By "enhanced" I mean that subjects may find the notions of success and failure on the task more salient and more enduring. As such, we might expect that subjects would search more closely for cues that will enhance performance. One such cue is status information. Thus, we might expect status differences to become more salient to the extent that instructions make task outcomes salient. For higher status subjects, this may lead to higher P(S) values, as they would less readily succumb to the influence of a lower status partner (who, from a theoretical standpoint, might lead the subject toward task failure). Note, however, that by the same logic, the P(S) values for lower status subjects should be depressed. That is, for lower status subjects the heightened task orientation should lead them to yield more readily to a higher status partner (who, from a theoretical standpoint, might lead the subject toward task success). When less emphasis is placed on scores, by corresponding logic, we would expect status differences to be less salient. The result would be a regression toward the peer-interaction value (i.e., P(S) moving toward .64). In this situation, the P(S) for higher status actors should be lower and the P(S) for lower status actors should be higher, compared to a situation in which scores receive more emphasis.


Second, we might also expect that if an explicit and detailed comparison is made between individual and group scores (as in Wagner, Ford, and Ford 1986), then an effect on collective orientation will be evident. Such a manipulation clearly emphasizes that improved performance results from considering another's input. To the extent that this feature of the protocol is emphasized, we would expect subjects to attend more closely to their partner. The result should be detectable in a heightened level of collective orientation. Collective orientation, in turn, may affect P(S) values by making the partner and the partner's potential contributions to the task more salient. As with task orientation, we would expect this to generate a higher P(S) value for higher status subjects (who will find the lower status of their partner more salient) and a lower P(S) value for lower status subjects (who will find the higher status of their partner more salient). When the contrast between individual and group performance receives less emphasis, then we would expect reduced collective orientation. Collective orientation may provide actors with a point of reference from which they gauge their own behavior. In the absence of this reference point, actors may engage in a more random pattern of responses, leading to P(S) values regressing toward .50 for both higher and lower status actors.

Third, the form that the feedback takes regarding the partner's initial choice may affect the task and collective orientation of subjects. When a subject's partner is explicitly portrayed as disagreeing, this may elicit a powerful chain of responses from the subject. Initially, it may cue an actor to the task context of the interaction (leading to a heightened task orientation). Additionally, it may cue an actor to attend more to the other from whom the disagreement is arising (leading to a heightened sense of collective orientation). As I have argued, heightened task and collective orientation are likely to increase the search for and salience of status information. Furthermore, the declaration of a disagreement may lead an actor to view the partner as possessing more task ability than otherwise. That is, a statement of disagreement may act as a status cue (e.g., Berger, Webster, Ridgeway, and Rosenholtz 1986), leading both higher and lower status subjects to view their partner as more competent than they would if such an assertion were not made. The result would be an increased tendency to accept the partner's input, leading to a reduced P(S) among both higher and lower status subjects.


EMPIRICAL INVESTIGATION OF THE EFFECTS OF PROTOCOL VARIATIONS ON TASK ORIENTATION, COLLECTIVE ORIENTATION, AND SOCIAL INFLUENCE

To assess how variations in the SES protocol affect the scope conditions and social influence outcomes described in SCT, I conducted two experiments. I modeled the protocol for my experiments after the protocol used in Wagner, Ford, and Ford (1986). In their study, Wagner, Ford, and Ford were interested in the effects of disconfirmation of established gender-based expectations. Gender was the status characteristic of interest, with females representing the lower status state and males representing the higher status state. Wagner, Ford, and Ford conducted two experiments, one involving female subjects, the other involving male subjects. In both experiments, subjects initially completed a "One-Pattern Contrast Sensitivity Task" alone. This task required that subjects indicate whether a single rectangle composed of smaller black and white rectangles contained more white or black area. After completing this task, subjects in the baseline condition of each experiment went on to complete a second "Two-Pattern Contrast Sensitivity Task," ostensibly with a partner, without receiving any information regarding their performance on the first task. In a second condition, subjects received confirming information: lower status subjects were advised that they had received lower scores on the initial task than their higher status partners, and higher status subjects were told that they had received higher scores than their lower status partners. In a third condition, subjects received disconfirming information: higher status subjects were told that they had received lower scores on the initial task than their lower status partners, and lower status subjects were told that they had received higher scores than their higher status partners. The protocol that Wagner, Ford, and Ford used included (1) extensive discussion of the performance standards (including detailed comparisons of individual vs. group scores) and (2) unlabeled feedback regarding the partner's initial choice on the joint task (i.e., there was no explicit "disagree" or "agree" label attached to the partner's initial choice). Subjects completed 25 trials of the Contrast Sensitivity Task, of which 20 were pre-programmed so that the partner's initial choice would be opposite the subject's.
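As a concrete illustration of the trial structure just described, the sketch below generates a 25-trial schedule in which 20 trials are pre-programmed as critical (disagreement) trials. The placement of the critical trials is an assumption made only for illustration; neither the text above nor the published protocol specifies which trial positions were critical.

    # A hedged sketch of a pre-programmed trial schedule: 25 binary-choice
    # trials, 20 of which are "critical" (the simulated partner's initial
    # choice is set opposite the subject's). Positions are chosen at random
    # here, which is an assumption for illustration only.
    import random

    def make_trial_schedule(n_trials=25, n_critical=20, seed=None):
        """Return a list of booleans; True marks a critical (disagreement) trial."""
        rng = random.Random(seed)
        critical_positions = set(rng.sample(range(n_trials), n_critical))
        return [i in critical_positions for i in range(n_trials)]

    schedule = make_trial_schedule(seed=42)
    print(sum(schedule), "critical trials out of", len(schedule))  # 20 out of 25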

I used features of the protocol employed in the baseline conditions of Wagner, Ford, and Ford (1986) as the baseline conditions for my study of the effects of variations in the protocol. A total of ninety second-year undergraduate students were recruited to voluntarily participate in the study for ten dollars. Like the study conducted by Wagner, Ford, and Ford, one experiment involved males (n=45) and the other involved females (n=45). As described below, there were three conditions in each study. Subjects were randomly assigned within each experiment to one of the three conditions (fifteen subjects per condition in each experiment).


In both experiments, subjects interacted in a computer-mediated environment. The dependent variables in which I am interested are P(S) values for higher and lower status actors, as well as the degree of task orientation and degree of collective orientation among higher and lower status actors. In addition to the baseline condition (BASELINE), I examined two other conditions, one in which performance standards were only briefly discussed (BRIEF NS) and one in which subjects received explicit feedback that their partner "disagreed" or "agreed" with the subject's initial choice on the Contrast Sensitivity Task (DISAGREE FB). In the latter condition of each experiment, subjects were provided with not only their partner's initial choice, but also a label indicating whether the partner "Agrees" or "Disagrees" with the subject's initial choice. In the former condition (BRIEF NS), subjects were given only the following information in text form, which they read on their computer screen:

"When individuals work alone at solving Contrast Sensitivity problems, 0 to 10 is a poor performance, 11 to 15 represents an average performance, and 16 to 25 is clearly a superior performance. Today, you will have additional time and information when working together with your partner to jointly solve Contrast Sensitivity problems. In particular, you will have information regarding your partner's initial choice, which you can review before submitting your final choice. We have found that individuals whose performances are poor or average when they do not have additional time and information are frequently capable of attaining superior performance when they do have this additional time and information."

The first sentence of this passage corresponds to the information regarding performance standards that Wagner, Ford, and Ford (1986) provided to subjects. [4] The remainder, which provides only a generalization of individual vs. group outcomes as opposed to the detailed information in Wagner, Ford, and Ford, is adapted from Moore's (1968) experiment.

The experiments I conducted also differed in another important way from the Wagner, Ford, and Ford (1986) study. Wagner, Ford, and Ford were exploring the effects of disconfirming evidence on established gender-based expectations (i.e., a situation in which gender was explicitly made relevant to task outcomes). Because of this, they provided information to subjects in the study instructions indicating that there was established evidence that males were better at solving Contrast Sensitivity problems than females:


"Second, one of the most interesting findings to emerge from previous studies of Contrast Sensitivity ability is that males generally are far more accurate at solving Contrast Sensitivity problems than are females. Contrast Sensitivity may in fact be a gender based ability. That is, whether you have high or low levels of Contrast Sensitivity may be dependent upon your sex or gender. Social scientists are not sure why males seem to have higher levels of Contrast Sensitivity, although some social scientists believe that the difference is probably due to different socialization experiences -- for example, the different kinds of academic skills that are emphasized in the educational paths that are open to men as compared to the traditional paths open to women. While we do not understand the reasons for this fully we do know that this difference does exist." (Excerpted from protocol for experiments conducted by Wagner, Ford, and Ford (1986).)

This passage was omitted from my experiments (as were any references to it in the remainder of the protocol), since I was not interested in studying established status expectations. All other components of the instructions to subjects leading up to the Contrast Sensitivity Task followed (verbatim, wherever possible) the protocol of Wagner, Ford, and Ford (1986). After subjects completed 25 trials of the Contrast Sensitivity Task, they filled out a questionnaire. Two items on the questionnaire operationalized subjects' task and collective orientation. For task orientation, subjects were asked to indicate the extent to which they cared whether they and their partner obtained the correct answers to the Contrast Sensitivity Task (8-point response scale: 1 = "I Did NOT Care if WE Had the Right Answers," 8 = "I Cared a GREAT Deal that We Had the Right Answers"). For collective orientation, subjects were asked to indicate the extent to which they paid attention to their partner's initial choice when working on the Contrast Sensitivity Task (8-point response scale: 1 = "I Paid NO ATTENTION to My Partner's Input," 8 = "I Paid CONSIDERABLE ATTENTION to My Partner's Input").

RESULTS

As indicated in Table 2, these variations in the protocol had some effects on the task and collective orientation of subjects, as well as on P(S) values. The brief discussion of performance scores (common in some more recent research) corresponded to lower levels of task orientation for both higher and lower status subjects, as I proposed. It did not, however, have the effect of reducing collective orientation, nor did it correspond to reduced P(S) levels, as I suggested.


Table 2. Mean Effects of Protocol Manipulations on Task Orientation, Collective Orientation, and P(S) of Higher and Lower Status Actors (Standard Deviations in Parentheses)

Dependent Variable        Baseline           Brief NS           Disagree FB

Experiment I: Higher Status Subjects (Males)
Task Orientation          6.600^a (0.986)    5.867^b (0.915)    6.867^a (0.990)
Collective Orientation    6.267^a (0.884)    6.067^a (0.799)    7.000^b (0.760)
P(S)                      0.650^a (0.082)    0.633^a (0.084)    0.607^a (0.088)

Experiment II: Lower Status Subjects (Females)
Task Orientation          6.667^a (0.900)    6.067^b (0.884)    6.733^a (0.961)
Collective Orientation    6.733^ab (0.884)   6.200^b (1.014)    6.867^a (0.990)
P(S)                      0.547^a (0.103)    0.513^ab (0.085)   0.460^b (0.112)

Note. Means within a row that do not share a common superscript differ at p < .05 on a one-tailed t-test. For each condition of each experiment, n = 15.

In contrast, the conditions in which subjects received feedback indicating their partner disagreed with their initial choice did not have the effects on task orientation that I had posited. They did, however, have the effects on both collective orientation and P(S) that I hypothesized. Collective orientation was greater for higher status subjects when they received feedback indicating that their partner disagreed with their initial choice, compared to when they simply viewed their partner's initial choice (which was obviously different). Yet, collective orientation was not significantly different for lower status subjects in these parallel conditions. It is interesting to point out, however, that collective orientation for lower status subjects was already high across all conditions. It may be that lower status subjects, who are already aware of their lower competence at the task, are particularly attentive to their partner as they search for information to increase their likelihood of success on the task. In contrast, higher status subjects, who are aware of their superior competence, may place less weight on their partner's input and only attend to it when they encounter unexpected input (e.g., "disagreement") from their partner.


The disagreement condition clearly yielded the expected effects on P(S) values in the experiment involving lower status subjects. When these subjects were advised that their partner disagreed, they yielded to their partner (i.e., had lower P(S)) with much greater frequency. For higher status subjects, the difference between simply being advised of the partner's initial choice and being told that the partner "disagreed" (.650 vs. .607) was marginally significant (p = .09). Overall, it seems that this relatively subtle protocol difference may indeed have an impact on subjects' susceptibility to influence.
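The comparisons reported in Table 2 and above can be approximated from the summary statistics alone (means, standard deviations, and n = 15 per condition). The Python sketch below computes a pooled-variance, one-tailed two-sample t-test; whether this is the exact test variant used in the original analysis is an assumption, so its output should be read only as landing near the reported values (e.g., roughly p = .09 for the higher status P(S) comparison).

    # A sketch of a one-tailed, pooled-variance two-sample t-test computed from
    # the summary statistics in Table 2 (n = 15 per condition). The choice of
    # the pooled-variance form is an assumption, not taken from the paper.
    from math import sqrt
    from scipy import stats

    def one_tailed_t(mean1, sd1, mean2, sd2, n1=15, n2=15):
        """Return the t statistic and one-tailed p for H1: mean1 > mean2."""
        df = n1 + n2 - 2
        pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df
        t = (mean1 - mean2) / sqrt(pooled_var * (1 / n1 + 1 / n2))
        return t, stats.t.sf(t, df)

    # Higher status subjects' P(S): BASELINE (0.650, SD 0.082) vs. DISAGREE FB (0.607, SD 0.088)
    t, p = one_tailed_t(0.650, 0.082, 0.607, 0.088)
    print(round(t, 2), round(p, 3))   # p should fall near the reported .09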

Table 2 also reveals another interesting outcome. The P(S) values predicted by FKN for higher and lower status actors differentiated on the basis of gender (a single diffuse status characteristic) are .68 for higher status males and .59 for lower status females. While the values in the baseline condition begin to approximate these predicted values, they are still lower than predicted (by about 4% for higher status actors and 7% for lower status actors). This suggests that one or both of two other factors, interaction medium and/or shifts in social and cultural attitudes over time, may have an effect on status-organizing processes (e.g., Troyer forthcoming). The studies on which the FKN estimates are based were conducted in either a face-to-face setting (subjects received instructions verbally from an experimenter in the same room) or in an audio/video-mediated setting (subjects received instructions via a closed-circuit system). In the experiments I conducted, subjects read instructions on a computer monitor. It may be that face-to-face and audio/video media generate heightened task orientation, collective orientation, and/or status salience. Also, it may be that the status effects of gender are not as powerful as they were over 15 years ago (when the experiments FKN used to generate their estimates were conducted). Insofar as status represents the cultural value of an attribute (like maleness or femaleness), it is reasonable to suggest that cultural shifts may lead to corresponding shifts in the status value of social characteristics. To date, little research has been conducted within SCT to examine how trends in attitudes may relate to status-organizing processes. This unexplained result from the experiments I conducted suggests both methodologically and theoretically interesting avenues of research to pursue.

DISCUSSION

The arguments and empirical evidence that I have offered emphasize the importance of standardization in research and how standardization can facilitate theoretical growth. As I have also noted, however, there are critical junctures in theoretical development and methodological advances that make it both necessary and prudent to adjust a standardized research protocol. A good example of how theory necessitates the adjustment of a standardized protocol is found in work by Foschi on double standards within the expectation states theoretical research program (e.g., Foschi and Foddy 1988; Foschi 1996; Foschi, Lai, and Sigerson 1994). Since Foschi examines the different performance standards that are held for higher and lower status actors, one key component of the standardized experimental setting used in SCT research must be omitted: the information on performance standards (i.e., scores ostensibly based on prior research) that subjects receive. A second example, involving the addition of components to the standardized experimental setting, is found in research by Wagner, Ford, and Ford (1986). Here, the researchers were interested in exploring the effects of (dis)confirming evidence on established gender-based expectations. To meet this requirement, these researchers added information to the standardized protocol in order to explicitly establish the path of relevance between gender states and performance expectations.


Also, advances in computer technologies can lend many advantages to experiments. For instance, computer technologies can make data collection more efficient and reliable, and can engender greater degrees of control over extraneous factors (e.g., Cohen 1988). Retaining old technologies in order to preserve the standardized protocol is neither necessary nor prudent. With careful attention to and testing of new technologies, the advantages of more efficient technologies can be realized at no theoretical cost.

In summary, features of a standardized research protocol corresponding to a theoretical research program are not unchanging. Yet, theoretical development will be more efficient if protocol changes are made incrementally and with attention to existing features of the research protocol (and the rationale behind their inclusion). Moreover, pilot testing of alterations to the protocol and the inclusion of baseline conditions wherever possible will also facilitate the interpretation of results, thereby promoting theoretical growth.

Additionally, as I have suggested here, it may be worthwhile for researchers to pay special attention to the effects that seemingly subtle protocol changes have on the scope conditions of a theory. The experiments I described indicated that seemingly subtle changes elicited significant differences in outcome variables and in the extent to which scope conditions were realized. This result corresponds to an important insight of Foschi (1997) that scope conditions are themselves theoretically interesting variables. Subtle shifts in the protocol may generate significant shifts in both the scope conditions and outcome variable (P(S)).

In conclusion, systematic theorizing and standardized research methods can contribute significantly to our understanding of social psychological processes. As new theoretical and methodological advances are made, however, it will be important to exercise caution in the adaptation of existing protocols. By carefully exploring the effects of changes to standardized protocols, net of the effects of theoretical variables of interest, we will generate clearer, more interpretable results, and perhaps shed light on new theoretical avenues to explore, as I have attempted to do here.

ENDNOTES

1. Some of the ideas represented in this paper were presented at the Conference on Theory Development and Theory Testing in Group Processes (Vancouver, British Columbia, Canada, August 1998). Conference participants, in particular Martha Foschi, Jane Sell, David G. Wagner, and Murray Webster, Jr., offered several useful insights that helped me develop this project. Additionally, I am grateful to the anonymous reviewer and editor who also contributed key suggestions that improved this paper.


2. The summary of the standardized experimental setting (SES) presented here is drawn primarily from Berger and Zelditch (1977); Cook, Cronkite, and Wagner (1974); Moore (1968); as well as discussions with Joseph Berger, David G. Wagner, and Murray Webster, Jr. I am grateful for their insights and advice. Of course, any errors or oversights in the presentation are my own.

3. Over the years, the medium through which the SES has been administered has varied from face-to-face situations (in which the experimenter delivers the instructions in-person), to an audio/video mediated situation (in which the subject receives instructions over a closed-circuit audio/video system), to recent computer-mediated situations (in which the subject reads instructions and information on a computer screen). Clearly, the medium through which an experiment is administered may have important effects on the operationalization of key variables, and thus key theoretical processes. In this paper, however, I focus on the effects of the instructions themselves and how variations in those instructions may affect theoretical processes. Nonetheless, the study of effects of experimental medium represents an important consideration for researchers (e.g., Troyer forthcoming; Troyer 1998; Troyer and Kalkhoff 1999).

4. The omitted remainder of the passage states, "Individuals can improve their scores substantially if they are given the opportunity to see another person's initial choice before having to make a final decision. In this situation, we are interested in seeing how well you can work together as a team. When people work together as partners, it has been found that a team score falling between 0 and 26 constitutes a very poor team performance; a team score of 27 to 32 is a below average performance; scores of 33 to 40 represent an average team performance; 41 to 47 points represents an above average score; and 48 to 50 out of a possible 50 is a superior team performance. As you can see from these standards, it has been demonstrated that teams working together are able to perform more effectively than two individuals working independently. For example, individuals with average ability working together might each get between 11 and 15 for a total score falling between 22 and 30. However, as the team results show, the average team score is quite a bit higher -- between 33 and 40. This is because two people working together as a team and exchanging information with each other can do better than two individuals working alone." (Excerpted from the protocol used in the experiments conducted by Wagner, Ford, and Ford (1986).)


REFERENCES

Aronson, Elliot, Phoebe C. Ellsworth, J. Merrill Carlsmith, and Marti Hope Gonzales. 1990. Methods of Research in Social Psychology. New York: McGraw-Hill.

Berger, Joseph, Bernard P. Cohen, and Morris Zelditch, Jr. 1972. "Status Characteristics and Social Interaction." American Sociological Review. 37:241-255.

Berger, Joseph, M. Hamit Fisek, and Robert Z. Norman. 1977. "Status Characteristics and Expectation States: A Graph-Theoretic Formulation." Part II in Joseph Berger, M. Hamit Fisek, Robert Z. Norman, and Morris Zelditch, Jr. (Eds.), Status Characteristics and Social Interaction. New York: Elsevier.

Berger, Joseph, M. Hamit Fisek, Robert Z. Norman, and Morris Zelditch, Jr. 1977. Status Characteristics and Social Interaction: An Expectation States Approach. New York: Elsevier.

Berger, Joseph, Susan J. Rosenholtz, and Morris Zelditch, Jr. 1980. "Status Organizing Processes." Annual Review of Sociology. 6:479-508.

Berger, Joseph, Murray Webster, Jr., Cecilia L. Ridgeway, and Susan J. Rosenholtz. 1986. "Status Cues, Expectations, and Behavior." Pp. 1-22 in Edward J. Lawler (Ed.), Advances in Group Processes, vol. 3. Greenwich, Connecticut: JAI Press.

Berger, Joseph, and Morris Zelditch, Jr. 1977. "Status Characteristics and Social Interaction: The Status-Organizing Process." Part I in Joseph Berger, M. Hamit Fisek, Robert Z. Norman, and Morris Zelditch, Jr. (Eds.), Status Characteristics and Social Interaction: An Expectation States Approach. New York: Elsevier.

Blalock, Hubert M. Jr., and Ann B. Blalock. 1968. Methodology in Social Research, 2nd Edition. New York: McGraw-Hill.


Christensen, Larry. 1997. Experimental Methodology, 7th Edition. Boston: Allyn and Bacon.

Cohen, Bernard P. 1989. Developing Sociological Knowledge: Theory and Method, 2nd Edition. Chicago: Nelson-Hall.

Cohen, Bernard P. 1988. "A New Experimental Situation Using Microcomputers." Chapter 20 in Murray Webster, Jr. and Martha Foschi (Eds.), Status Generalization: New Theory and Research. Stanford, California: Stanford University Press.

Cohen, Elizabeth G. 1971. "Interracial Interaction Disability." Urban Education. January:336-356.

Cook, Karen, Ruth Cronkite, and David Wagner. 1974. "Laboratory for Social Research Manual for Experimenters in Expectation States Theory." Stanford University Laboratory for Social Research, Stanford, California.

Fisek, M. Hamit, Robert Z. Norman, and Max Nelson-Kilger. 1992. "Status Characteristics and Expectation States Theory: A Priori Model Parameters and Test." Journal of Mathematical Sociology. 16:285-303.

Foschi, Martha. 1996. "Double Standards in the Evaluation of Men and Women." Social Psychology Quarterly. 59:237-254.

Foschi, Martha. 1997. "On Scope Conditions." Small Group Research. 28:535-555.

Foschi, Martha and Margaret Foddy. 1988. "Standards, Performances, and the Formation of Self-Other Expectations." Pp. 248-260 in Murray Webster, Jr. and Martha Foschi (Eds.), Status Generalization: New Theory and Research. Stanford, California: Stanford University Press.

Foschi, Martha, Larissa Lai, and Kirsten Sigerson. 1994. "Gender and Double Standards in the Assessment of Job Applicants." Social Psychology Quarterly. 57:326-339.

Fox, John, and James C. Moore. 1979. "Status Characteristics and Expectation States: Fitting and Testing a Recent Model." Social Psychology Quarterly. 42:126-134.


Heise, David. 1987. "Affect Control Theory: Concepts and Model." Journal of Mathematical Sociology. 13:1-33.

Houser, Jeffrey Alan. 1997. "Stigma, Spread and Status: The Impact of Physical Disability on Social Interaction." Unpublished Ph.D. Dissertation. Department of Sociology, The University of Iowa, Iowa City, Iowa.

Jasso, Guillermina. 1980. "A New Theory of Distributive Justice." American Sociological Review. 45:3-32.

Jasso, Guillermina. 1990. "Methods for the Theoretical and Empirical Analysis of Comparison Processes." Pp. 369-419 in Clifford C. Clogg (Ed.), Sociological Methodology 1990. Washington, DC: American Sociological Association.

Lockheed, Marlaine E., and Katherine P. Hall. 1976. "Conceptualizing Sex as a Status Characteristic: Applications to Leadership Training Strategies." Journal of Social Issues. 32:111-124.

Lovaglia, Michael J., and Jeffrey A. Houser. 1996. "Emotional Reactions and Status in Groups." American Sociological Review. 61:867-883.

MacKinnon, Neil J. and David R. Heise. 1993. "Affect Control Theory: Delineation and Development." Pp. 64-103 in Joseph Berger and Morris Zelditch Jr. (Eds.), Theoretical Research Programs: Studies in the Growth of Theory. Stanford, California: Stanford University Press.

Moore, James C., Jr. 1968. "Status and Influence in Small Group Interactions." Sociometry. 31:47-63.

Norman, Robert Z., Roy Smith, and Joseph Berger. 1988. "The Processing of Inconsistent Status Information." Chapter 8 in Murray Webster, Jr. and Martha Foschi (Eds.), Status Generalization: New Theory and Research. Stanford, California: Stanford University Press.

Smith-Lovin, Lynn and David Heise. 1988. Analyzing Social Interaction: Advances in Affect Control Theory. New York: Gordon and Breach.

Troyer, Lisa. Forthcoming. "The Relation between Experimental Standardization and Theoretical Development in Group Processes Research." Chapter 8 in Michael Lovaglia, Jacek Szmatka, and Kinga Wysienska (Eds.), Theory, Simulation, and Experiment. Praeger Publishing.


Troyer, Lisa. 1998. "Technology, Tactics, or Trends: Toward a Systematic Investigation of Differences in Results of Experiments in the Status Characteristics Theory Tradition." Unpublished paper presented at the Conference on Theory Development and Theory Testing in Group Processes, Vancouver, British Columbia, Canada.

Troyer, Lisa. 1999. MacSES, v. 5.0. Unpublished software manual.

Troyer, Lisa and Will Kalkhoff. 1999. "Computer Technologies in Group Processes Research: Issues & Insights." Unpublished paper presented at the Conference on Group Processes Research and Theory, University of Illinois - Chicago, Chicago, Illinois.

Troyer, Lisa, and C. Wesley Younts. 1997. "Whose Expectations Matter? The Relative Power of First- and Second-Order Expectations in Determining Social Influence." American Journal of Sociology. 103:692-732.

Wagner, David G. and Joseph Berger. 1985. "Do Sociological Theories Grow?" American Journal of Sociology. 90:697-728.

Wagner, David G., Rebecca S. Ford, and Thomas W. Ford. 1986. "Can Gender Inequalities be Reduced?" American Sociological Review. 51:47-61.

Walker, Henry A., and Bernard P. Cohen. 1985. "Scope Statements: Imperatives for Evaluating Theory." American Sociological Review. 50:288-301.

Webster Jr., Murray. 1977. "Equating Status Characteristics and Social Interaction: Two Experiments." Sociometry. 40:41-50.

Webster Jr., Murray, and James E. Driskell, Jr. 1983. "Beauty as Status." American Journal of Sociology. 89:140-165.

Willer, David. 1987. Theory and the Experimental Investigation of Social Structures. New York: Gordon and Breach.

Willer, David and Barry Markovsky. 1993. "Elementary Theory: Its Development and Research Program." Pp. 323-363 in Joseph Berger and Morris Zelditch Jr. (Eds.), Theoretical Research Programs: Studies in the Growth of Theory. Stanford, California: Stanford University Press.


AUTHOR BIOGRAPHY

Lisa Troyer (lisa-troyer@uiowa.edu), Assistant Professor of Sociology at the University of Iowa, studies problem-solving and decision-making in groups with particular emphasis on the effects of context (e.g., face-to-face vs. computer-mediated), social structure, and social expectations on group processes and outcomes.
