September 1998 // Letters to the Editor
Finding Flashlight in the Dark:
A reply to Steve Ehrmann and Gary Brown
by Ed Neal
Note: This article was originally published in The Technology Source (http://ts.mivu.org/) as: Ed Neal, "Finding Flashlight in the Dark: A reply to Steve Ehrmann and Gary Brown," The Technology Source, September 1998. Available online at http://ts.mivu.org/default.asp?show=article&id=1034. The article is reprinted here with permission of the publisher.

The purpose of my critique of the Schutte (1997) study (Neal, 1998) was to point out that much of what passes for research in today's climate of technology hype would not meet the standards of empirical research in psychology, education, or the social sciences in general. Without these standards, such research amounts to nothing more than pseudo-study. The truth is that there is very little empirical evidence that technology improves student learning, although research has identified some things that do improve learning, such as smaller class size. Yet we are unwilling to put more money into reducing class size, even as we spend millions on technology.

My criticism of the Flashlight project stems from its failure to address one fundamental question: does the application of technology affect student learning outcomes? Both Steve Ehrmann (1998) and Gary Brown (1998) admit that Flashlight was not designed to answer this question. I must conclude, then, that in spite of the project's imprimatur from the AAHE and the support of a number of institutions, it represents an enormous expenditure of time and money in pursuit of the wrong goal. Flashlight will undoubtedly reveal some interesting information about how students use technology, but the point is (to turn Ehrmann's title on its head) that "secondary characteristics are not enough." Knowing how students use technology would be useful if one were already committed to technology as an instructional system and wanted to improve the ways students use it. It is clear that many institutions have made that commitment and have already invested so much money in technology that they feel no need for anyone to ask the questions they should have asked earlier.

In his explanation of Flashlight, Ehrmann asserts that "even when faculty can collect outcomes data from an experimental group and a comparison group (without the same technology), they still can't prove why outcomes change (or fail to change)." I find this statement difficult to understand, even in light of the example he provides. In the first place, if the experiment were designed properly, and the experimental and control groups were treated identically (except for the use of technology), why wouldn't the teacher be able to conclude that any differential outcomes were attributable to the technology (or lack of it)? Schutte failed in this regard, but it would be relatively simple to re-run the experiment in this manner.

Ehrmann's example of a teacher who used e-mail in hopes that it would improve exam performance reinforces my point. If the experiment were properly designed, the teacher could certainly ascertain whether e-mail had any effect on the outcome. To say that the teacher wouldn't "know whether the e-mail played a role" implies that the teacher didn't conduct the study properly. Ehrmann points out that Flashlight "might have revealed that students were indeed collaborating more," but for this experiment it doesn't matter how students used e-mail as long as it was the only variable that was manipulated between the experimental and control groups.

In Gary Brown's defense of the Flashlight program, he asserts that "for assessing student learning we are wise to use whatever information and whatever tools we can get." I agree wholeheartedly with that statement, but he also admits that Flashlight "does not intend to provide a direct assessment of learning outcomes." If it is not intended to assess student learning, how can it contribute to the investigation of that phenomenon?

Brown seems to have difficulty understanding how one can operationalize student learning outcomes such as critical thinking skills and application of knowledge. He says "when it comes time to spell them out in ways that folks can really assess, the call for learning outcomes seems to trail off into platitudes and other nice abstractions."

On the contrary, I help faculty members "operationalize" outcomes every day of my professional life. For example, the following sample outcome statements are from a syllabus for an undergraduate course, History 80, "Women and Gender in Latin American History." These statements appear at the beginning of each unit in the course, to guide students in learning the material for the unit. (Note: These examples are from several different units.)

  • Explain the differences between feminist and socialist analyses of women's subordination.
  • Identify the reasons why women participated or chose not to participate in revolutions.
  • Construct an argument: Does revolutionary change improve conditions for women?
  • Identify the themes women addressed in their artistic and literary works, and the reaction of society to such women.
  • Critically evaluate the differences between norms and behavior. Can women depart from the norm without directly challenging it?

I submit that this teacher is operationalizing application of knowledge and critical thinking quite well, and, at the same time, indicating how the outcomes will be evaluated. I have dozens more examples of similar course syllabi in which teachers define and test higher-order learning outcomes. It would not be hard to develop similar outcome statements for any course, and we could then measure achievement in terms of real learning targets.

Brown goes on to describe how, at Washington State, they have "substantiated a correlation between positive Flashlight findings about student learning experiences and improved grades." But without controlled research design, any correlation is meaningless. Brown later uses this correlation to suggest that "grades may have a bit more validity than I initially might have suspected." To believe that grades have any measurement validity flies in the face of logic and experience (see Milton et al., 1986). Were the students in these classes graded "on the curve"? Were test grades simply averaged or were they standardized? How much of the grade was based on "participation" in class or on a discussion board? Did the tests measure factual recall or higher-order learning? We know that grading practices vary from teacher to teacher and that individual teachers don't necessarily use the same grading practices from semester to semester. As Paul Dressel wrote, "A grade is an inadequate report of an inaccurate judgment by a biased and variable judge of the extent to which a student has achieved an undefined level of mastery of an unknown proportion of an indefinite material" (1976, p. 2).

Ehrmann and Brown seem to believe I've been unfair and inconsistent in my criticism of Schutte and Flashlight. They argue that, given the difficulties of conducting controlled research in college classrooms, we should seek other ways to investigate technology and teaching. It is always difficult to conduct such research, but that doesn't mean we shouldn't do it (and many researchers have succeeded, as one can discover in the pages of the Journal of Higher Education and the American Educational Research Journal). My suggestion is that we divert the time, energy, and funding currently being spent on projects such as Flashlight to research projects with solid empirical standards. In that way we might actually discover something useful.

References

Brown, G. (1998, July). Flashlight illuminates assessments. Technology Source. Retrieved August 27, 1998, from the World Wide Web: http://technologysource.org/?view=article&id=230#brown.

Dressel, P. (1976). Grades: One more tilt at the windmill. Bulletin. Memphis: Memphis State University, Center for the Study of Higher Education.

Ehrmann, S. (1998, July). Outcomes measurement is not enough. Technology Source. Retrieved August 27, 1998, from the World Wide Web: http://technologysource.org/?view=article&id=230#ehrmann.

Milton, O., Pollio, H. R., & Eison, J. A. (1986). Making sense of college grades. San Francisco: Jossey-Bass.

Neal, E. (1998, June). Does using technology in instruction enhance learning? Or, the artless state of comparative research. Technology Source. Retrieved August 27, 1998, from the World Wide Web: http://technologysource.org/?view=article&id=86.

Schutte, J. (1997). Virtual teaching in higher education: The new intellectual superhighway or just another traffic jam? Retrieved July 26, 1998, from the World Wide Web: http://www.csun.edu/sociology/virexp.htm.
