May/June 1999 // Critical Reading
The Difference Frenzy and Matching Buckshot with Buckshot
by Gary Brown and Mary Wack
Note: This article was originally published in The Technology Source (http://ts.mivu.org/) as: Gary Brown and Mary Wack, "The Difference Frenzy and Matching Buckshot with Buckshot," The Technology Source, May/June 1999. Available online at http://ts.mivu.org/default.asp?show=article&id=1034. The article is reprinted here with permission of the publisher.

It's hard to determine which is more puzzling: the report, or the unquestioning coverage of and response to the report's release. We're referring to the recent review of research on the effectiveness of distance learning in higher education from the Institute for Higher Education Policy (Phipps & Merisotis, 1999). The report, commissioned by the American Federation of Teachers and the National Education Association, analyzes "the most important and salient" (p. 11) works of original research on distance learning. It challenges the current research in distance education and laments what the authors identify as a serious lack of progress on the part of distance education researchers.

The report, available as a PDF file at both the NEA and AFT sites, has received attention from The Chronicle of Higher Education (Blumenstyk, 1999), NetFuture (Talbot, 1999), and The Inquirer (O'Neill, 1999), among others. We feel, nonetheless, that some critical reading is warranted.

The title of the report, "What's the Difference?" recalls another ambitious work that came out about the time the Internet started its meteoric rise: Pascarella's and Terenzini's (1991) How College Affects Students: Findings and Insights from Twenty Years of Research. In the preface of that work, the authors reveal the guiding question behind it: "Does college make a difference?" (p. xvi). Pascarella and Terenzini respond to their own question, ironically noting, after their extensive culling of more than 2,600 studies, that "the appeal of its straightforwardness notwithstanding, the question is really a naïve one" (p. xvi). "Naive" because the simplicity of the question as posed disguised the complexity of the underlying questions (listed on pp. 7-8), which could only be answered by an analysis of the research when unpacked, teased out, and answered separately—in nearly 900 pages.

Now, almost a decade later, having reviewed 40 "original studies" (p. 11) on distance education, Phipps and Merisotis (who refer to Pascarella's and Terenzini's work as "seminal") ask a similar question, asserting that "an entire body of research needs to be developed to determine if students participating in distance learning for their whole program compare favorably with students taught in the conventional classroom" (p. 24). Because they fail to unpack the real questions embedded in their assertion, their fundamental premise for evaluating the research on distance education seems overly simplistic. The underlying questions include: (a) What do we know of the outcomes of whole programs for campus-based students? (b) How valid and reliable are the data? (c) What proportion of distance students take a "whole program" rather than individual courses? (d) Are the student demographics comparable between a campus-based program and its distance counterpart?

We are troubled by the criteria of selection for the research reviewed in this report. Phipps and Merisotis lament the paucity of "true, original research dedicated to explaining or predicting phenomena related to distance learning" (p. 2). In this context, what does "true" research mean? "Original" is not explicitly defined either. It could refer to research employing the designs described on pp. 11-12 (descriptive, case study, correlational, and experimental designs). On the other hand, Phipps' and Merisotis' critique of Russell's bibliography, The No Significant Difference Phenomenon, for containing entries that "cite similar research and/or reference each other" (p. 19) raises the troubling possibility that the authors misunderstand the dialogic nature of research communities, and the cumulative nature of "original" research. Since the "40 original studies" are not listed separately, we are left to infer from the "Selected References" what kinds of studies those are. A disturbing number (to us) of the references are to papers presented at conferences and to papers published by university offices, not university presses. The point is that a considerable fraction of the references (and hence of the 40 studies?) has not passed through the ordinary processes of peer review for publication. Thus, readers of the report are unable to verify for themselves the foundation of "true, original research" on which the report builds.

However, their call for hard evidence and a conclusive comparison of differences is not uncommon. Green (in Morrison, 1999) says, "we need to be honest about the gap between aspirations and performance. And being honest requires that we acknowledge we don't yet have clear, compelling evidence about the impact of information technology on student learning and educational outcomes." Like Green, Phipps and Merisotis assume, first, that such "compelling" evidence is attainable, and second, that even amid "dizzying" technological change and shifting student populations, such comparisons with conventional education remain relevant.

To make their case, Phipps and Merisotis point to the lack of rigorous controls and random sampling techniques in distance education research. But there is something disingenuous about their critique. After their call for rigorous controls, they turn around and complain that "experimental studies in distance learning are using an agricultural-botany paradigm—that students react to different educational treatments as consistently as plants react to fertilizers" (p. 24).

The contradictions in their report are pervasive. They call for comprehensive assessment "dedicated to measuring the effectiveness of total academic programs taught using distance learning" (p. 5) and "a guiding framework" that "allows the research to be replicated [across programs] and enhance its generalizability" (p. 6). But they then say, "Further research needs to focus on how individuals learn, rather than how groups learn" (p. 5). In other words, how can outcome research for a program predict the experience of an "individual learner" if those individuals all learn differently?

Their convoluted expectations illustrate precisely why comprehensive, clear evidence is rarely attainable in the complex, messy world of teaching and learning, even after decades of educational research. Quite simply, Phipps and Merisotis call for a fantasy research paradigm in their critique. They want "randomized experiments" (p. 4) embedded in a "theoretical construct to test multiple interacting variables" (p. 6) in which "extraneous variables are controlled" (pp. 3-4) to produce results that do not yield population data, but rather are "predictive of outcomes for individual learners" (p. 6). This would be roughly equivalent to a randomized, double-blind study of the effects of multiple drugs interacting with each other and with caregivers' styles, resulting in predictions of how various drug combinations work with different individuals in order to make a uniform policy for a universal health care program.

Such an experimental design is impossible in a clinical study, and beyond the absurd in educational research. To make research on distance education carry a higher burden of "proof" than most social, scientific, and educational research invites suspicion that other, not fully articulated, issues inform the methodology of this report.

Certainly a call for better research in distance education is not unreasonable. But the notion that distance learning alone might be responsible for hoisting such an enormous weight of expectations and diverse concerns is ridiculous, to say the least. Phipps' and Merisotis' argument that a theory might emerge from such efforts and thereby yield a comparison that provides a clear picture of the difference between distance and traditional face-to-face models of teaching and learning is, well, "naive." After all, most of the criticisms in this report could be (and are) applied to conventional face-to-face models of educational practice.

Consider, for instance, Phipps' and Merisotis' argument that "the validity and reliability of the instruments used to measure student outcomes and attitudes are questionable" (p. 4). They target "teacher-produced examinations, which have not followed established methods for ensuring high levels of validity and reliability," and note that "it is rare to find a teacher-made test in the research that is based upon persuasive evidence of content or construct validity" (p. 21). And finally they ask, "in short, do the instruments—as final examinations, quizzes, questionnaires, or attitude scales—measure what they are supposed to measure?" (p. 4). Since the same alleged shortcomings apply to the commonly used types of evaluation in face-to-face education (i.e., teacher-made tests, exams, quizzes, etc.), is there some evidence that educators in distance programs have somehow slipped into exceptional laxity in their assessment practices?

The authors' attribution of veracity to one kind of evidence or to one expert, alongside the summary dismissal of other evidence as insufficiently rigorous or merely anecdotal, is disconcerting. For instance, Phipps and Merisotis condemn "the sheer weight of opinion in the literature," reminding us that it "should not be taken as conclusive…since most of it is based on anecdotal evidence offered by persons and institutions with vested interests..." (p. 22). They also say that their "analysis revealed several methodological flaws that should give pause to an objective observer..." and that "this is not necessarily surprising. Merely being published in a journal or book, for example, does not guarantee the quality of the study or that it was reported accurately..." (p. 19). Yet, they had earlier noted that "technology 'can leverage faculty time, but it cannot replace most human contact without significant quality losses,' as one expert has stated" (p. 8, italics ours).

The authors' shifting perspective is so elusive that by the time they examine the uneven retention in distance courses and conclude that "if a substantial number of students fail to complete their courses, the notion of access becomes meaningless" (p. 25), we are forced to ask: meaningless for whom?

What is most interesting, however, is the summary observation that "any discussion about enhancing the teaching/learning process through technology also has the beneficial effect of improving how students are taught on campus" (p. 8). It is indeed a hopeful assertion, but it seems odd to us that a report and discussion that so aggressively target distance programs would have such an effect. By what processes will the dismissal of positive claims of distance educators translate into altered teaching practices on campus?

We should recall what evaluators had come to recognize before the technology explosion—there is as much difference between two teachers doing, purportedly, the same thing in conventional classes as there is between two teachers doing different things (Worthen, Sanders, & Fitzpatrick, 1997). In that sense, efforts to compare distance and conventional courses and programs are problematic, especially as distance and campus programs and populations are increasingly integrated.

Since, as Phipps and Merisotis observe, "many of the results seem to indicate that technology is not nearly as important as other factors…namely pedagogy—the art of teaching" (p. 31), the pitting of face-to-face conventional instruction against technology-enhanced and distance strategies distracts from educational researchers' most pressing and persistent challenge. It is not that we lack insight into ways to assess and enhance the art of teaching; it is that educational research has, for all but a few, failed to inform teaching practice (Richardson, 1994; Robinson, 1998).

We need more and better assessment of distance learning, certainly. We need, even more, for that research to inform practice.

References

Blumenstyk, G. (1999). Studies of distance education are mostly "questionable," report says. The Chronicle of Higher Education, April 7, 1999. Retrieved April 24, 1999 from the World Wide Web: http://chronicle.com/daily/99/04/99040702t.htm?it

Mangan, K. S. (1999). On-line programs face faculty resistance, management educators say. The Chronicle of Higher Education, April 21, 1999. Retrieved April 22, 1999 from the World Wide Web: http://chronicle.com/free/99/04/99042101t.htm

Morrison, J. L. (1999). The role of technology in education today and tomorrow: An interview with Kenneth Green, part II. On The Horizon, 7(1). Retrieved April 22, 1999 from the World Wide Web: http://www.camfordpublishing.com/oth/archive/99/vol7_no1b.htm

O'Neill, J. M. (1999). College Board report urges skepticism about online courses. The Inquirer, April 7, 1999. Retrieved April 23, 1999 from the World Wide Web: http://www.phillynews.com/inquirer/99/Apr/07/national/VIRT07.htm

Pascarella, E., & Terenzini, P. T. (1991). How college affects students: Findings and insights from twenty years of research. San Francisco: Jossey-Bass.

Phipps, R., & Merisotis, J. (1999). What's the difference? A review of contemporary research on the effectiveness of distance learning in higher education. A Report from The Institute for Higher Education Policy, April 1999. Retrieved April 15, 1999 from the World Wide Web: http://www.ihep.com/PUB.htm

Richardson, V. (1994). Conducting research on practice. Educational Researcher, 23 (5), 5-10.

Robinson, V. M. (1998). Methodology and the research-practice gap. Educational Researcher, 27 (1), 17-26.

Talbot, S. (1999). How compelling is distance education? NetFuture, 88. Retrieved April 22, 1999 from the World Wide Web: http://www.oreilly.com/people/staff/stevet/netfuture/1999/Apr1699_88.html#1a

Worthen, B. R., Sanders, J. R., & Fitzpatrick, J. L. (1997). Program evaluation: Alternative approaches and practical guidelines. New York: Longman.
