Plugging in to Course Evaluation
by Keith Hmieleski and Matthew V. Champagne
Note: This article was originally published in The Technology Source (http://ts.mivu.org/) as: Keith Hmieleski and Matthew V. Champagne, "Plugging in to Course Evaluation," The Technology Source, September/October 2000. Available online at http://ts.mivu.org/default.asp?show=article&id=1034. The article is reprinted here with permission of the publisher.

Though college students can order textbooks, register for courses, view grades, and apply for jobs using the World Wide Web, a recent survey by Rensselaer Polytechnic Institute's Interactive and Distance Education Assessment (IDEA) Laboratory (Hmieleski, 2000) found that nearly all colleges conduct course evaluations at the end of the term using a paper-based format. This form of student feedback does not serve students, faculty, or institutions well for the following reasons: results are delivered weeks after the term has ended; summaries are often ambiguous and fail to provide action-oriented solutions; students know that their comments will not be read for weeks (if at all); and evaluations are the basis for highly stressful decisions (e.g., raises, promotion, and tenure) rather than a tool for improving teaching and learning. In the Internet age, this "autopsy approach," determining what went wrong after a course is over, leaves vast untapped potential for improved teaching and learning via high-tech student feedback.

The Disadvantages of Traditional Course Evaluation

To prompt discussion of the challenges and benefits of transferring course evaluation to the Web, the IDEA Laboratory surveyed the nation's 200 most wired colleges as identified by ZDNet (1999). Below is a summary based on 105 responses (Hmieleski, 2000):

  • Format. Surprisingly, 98% of the "most wired" schools use primarily paper-based evaluation forms.
  • Frequency of Feedback. Of the schools requiring some form of course or faculty evaluation, all currently administer the evaluation forms solely at the end of the term (the "autopsy approach").
  • Report Latency. Faculty receive the results of their course evaluations within two weeks at 25% of the schools, within one month at 64% of the schools, and within two months at 90% of the schools.
  • Costs. Twenty-two percent of schools conducted a cost analysis of paper-based evaluation. Ten schools reported results ranging from $0.25 to $4.00 per student per year. This large range reflects the factors used to calculate costs: low estimates are usually based solely on the cost of evaluation forms, while high estimates account for many of the "hidden" costs of evaluation (e.g., labor to photocopy, count, collate, and deliver forms, and to retrieve, scan, store, and deliver results to stakeholders).
  • Return Rate. Sixty-seven percent of schools reported return rates of 70% or higher for paper-based evaluation. Schools using or pilot-testing a Web-based evaluation system reported return rates ranging from 20% to greater than 90%.
  • Faculty Support. Only 28% of respondents rated their faculty as very supportive of their school's current evaluation system. Ninety-five percent of schools reported that their faculty are involved in the development of course evaluations, typically through participation in the faculty senate or by developing evaluation questions.
  • Student Support. Thirty-one percent of schools reported that students are involved in the development of their college’s course evaluation system, typically through participation in the student senate, and 36% of schools allow their students to view the results of course evaluations, typically via the Internet and student publications.

Three Steps Toward Web-Based Course Evaluation

Of the colleges surveyed, those transitioning to the online environment are converting their paper-based evaluation forms to Web-based forms. This is an important first step but not an optimal use of the Web environment. Most of the surveyed schools are well-positioned to implement the second step in moving toward Web-based evaluation: incorporating a "feedback-and-refinement" process into their ongoing evaluation efforts. This process, particularly well-suited for high-enrollment and distance learning courses, allows frequent feedback from students. Such frequent student feedback removes obstacles to learning, increases student satisfaction, and rapidly improves course delivery. Regardless of the technology used to drive this process, the key features of feedback-and-refinement provide instructors with the following (a brief sketch follows the list):

  • immediate student feedback via automated results,
  • interpretable results that facilitate rapid adjustments,
  • organized student comments that can be quickly addressed, and
  • individual student responses rather than "class average" responses.
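
To make these features concrete, the following is a minimal sketch of how a feedback-and-refinement tool might produce per-student results and organized comments automatically. It is an illustration only, not a description of any particular system; the response fields, rating scale, and "low rating" threshold are assumptions made for the example.

    # Minimal sketch of automated, per-student feedback (illustrative only).
    # Assumes each response records a student id, 1-5 ratings, and a free comment.
    from collections import defaultdict

    responses = [
        {"student": "s01", "ratings": {"pace": 2, "clarity": 4}, "comment": "Lectures move too fast."},
        {"student": "s02", "ratings": {"pace": 5, "clarity": 5}, "comment": ""},
        {"student": "s03", "ratings": {"pace": 3, "clarity": 2}, "comment": "More worked examples would help."},
    ]

    LOW = 3  # assumed threshold below which a rating flags an obstacle to learning

    def individual_flags(responses):
        """Per-student flags rather than a single class average."""
        flags = defaultdict(list)
        for r in responses:
            for item, score in r["ratings"].items():
                if score < LOW:
                    flags[r["student"]].append(item)
        return dict(flags)

    def organized_comments(responses):
        """Non-empty comments, collected so the instructor can address them quickly."""
        return [(r["student"], r["comment"]) for r in responses if r["comment"]]

    print(individual_flags(responses))    # {'s01': ['pace'], 's03': ['clarity']}
    print(organized_comments(responses))

The point of the sketch is simply that once responses are stored electronically, flags for individual students are as easy to compute as a class mean, and the results are available to the instructor immediately.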

Schools that lead in developing Web-based course evaluations can take a third step using currently available online technology and infrastructure. This stage redefines course evaluation as a process of frequent exchange of information between students and instructors to guide course refinement. Key features of this system, illustrated in the sketch after the list, include:

  • availability throughout the term so that individual faculty can collect data to refine and focus their teaching efforts;
  • analyses of individual student responses rather than the "average" student response; and
  • unique login parameters to allow different stakeholders (e.g., instructors, content developers, students, and administrators) to view specific results.
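
The "unique login parameters" feature can likewise be illustrated with a short role-based filter. The roles, result fields, and access rules below are assumptions made for the sake of the example; an actual system would take them from institutional policy.

    # Sketch of role-based views of evaluation results (roles and rules assumed).
    course_results = {
        "item_means": {"pace": 3.2, "clarity": 4.1},
        "comments": ["Lectures move too fast.", "More worked examples would help."],
        "individual_ratings": {"s01": {"pace": 2}, "s03": {"clarity": 2}},
    }

    # Which result fields each stakeholder may view (illustrative policy).
    VIEW_POLICY = {
        "instructor": {"item_means", "comments", "individual_ratings"},
        "content_developer": {"item_means", "comments"},
        "administrator": {"item_means"},
        "student": {"item_means"},
    }

    def view_for(role, results):
        """Filter the results down to the fields the given role may see."""
        allowed = VIEW_POLICY.get(role, set())
        return {key: value for key, value in results.items() if key in allowed}

    print(view_for("administrator", course_results))  # item means only

Under this assumed policy, an administrator would see only item-level means, while the instructor also sees individual responses and comments.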

This system improves the quality of teaching, learning, and course delivery; increases the utility of course evaluations; and serves as a model to other institutions of higher education.

Advantages of Web-Based Evaluation

Many schools hesitate to convert to Web-based evaluation due to fears regarding cost, return rates, and response quality. Ironically, these same factors provide the strongest support for converting to a Web-based format.

Cost of Conversion. Although ten schools had conducted or were conducting cost analyses of Web-based evaluation at the time of the survey, none reported the results of their analyses. Kronholm, Wisher, Curnow, and Poker (1999) conducted a comprehensive study on this issue, comparing production, distribution, monitoring, scanning, and analysis preparation costs for both paper-based and Web-based evaluations of a distance learning course. According to their study, delivering a 22-item paper-based evaluation to 327 participants across 18 sites cost $568.60 (or $1.74 per student, assuming labor costs of $15 per hour). Delivering the same evaluation via the Internet cost $18.75, a savings of 97%! This savings grows rapidly as course size increases, since delivering additional evaluations via the Web adds practically no cost.
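
The arithmetic behind these figures is easy to check; the short sketch below reproduces the per-student cost and the percentage savings from the totals reported by Kronholm et al. (1999). Only the rounding is ours.

    # Reproducing the cost comparison reported by Kronholm et al. (1999).
    paper_total = 568.60   # 22-item paper evaluation, 327 participants, 18 sites
    web_total = 18.75      # same evaluation delivered via the Internet
    students = 327

    per_student_paper = paper_total / students           # about $1.74 per student
    savings = (paper_total - web_total) / paper_total    # about 0.97, i.e., 97%

    print(f"Paper cost per student: ${per_student_paper:.2f}")
    print(f"Savings from Web delivery: {savings:.0%}")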

In terms of data analysis and reporting, Kronholm et al. (1999) found that analyzing 327 paper forms takes approximately 16 hours of labor, with additional time needed to write reports for key stakeholders. Schools using feedback-and-refinement or fully Web-based evaluation systems, in which a database of responses is analyzed automatically, can analyze their data within a few seconds. Either system can deliver customized reports to individual faculty members, requiring only a few hours of setup time regardless of the number of reports.

Return Rates. Several respondents noted that return rates of Web-based evaluations are lower than those of in-class evaluations. However, if return rates become the primary goal of course evaluation, then the value of evaluation may be lost. One respondent to the IDEA Lab survey summarized the thoughts of many: "We are afraid that students would not complete surveys [outside of class, but] with paper, the instructor can hold them captive at the beginning of the last class." End-of-course evaluations usually achieve the goal of high return rates (approximately 100% of the students who show up for class that day), but this manner of evaluation results in many students simply circling the entire column labeled "agree" and leaving the comments section blank before rushing out the door. Interestingly, the same phenomenon occurs online as well. In our administrations of the feedback-and-refinement system, in which students give feedback voluntarily, return rates are indeed lower. When participation is mandatory, return rates also approach 100%, but the number of useful comments drops dramatically.

Global comparisons between return rates of paper-based and Web-based formats have been conducted, but they are usually as unproductive as attempts to determine whether distance or traditional learning is superior. In both cases, there are far too many alternative explanations for the results (Champagne, Wisher, Pawluk, & Curnow, 1999; Phipps & Merisotis, 1999). In reality, three primary factors determine return rate: faculty, students, and the instrument. If faculty are on board and eager to use the information provided by a good evaluation, if students see changes resulting from their feedback, and if both parties recognize that the instrument measures what it is supposed to measure, then return rates will be high. If these conditions do not hold (e.g., results are unknown for months, students believe that their comments will not be heard, evaluation items appear unrelated to the particular course), then return rates will be low.

Response Quality. Some respondents to the IDEA Lab survey felt that students completing course evaluations on their own time, without the urgency of rushing to their next class, would provide richly detailed comments and thoughtful responses. Others speculated that students would give unduly negative or reckless remarks because of the distractions present outside of class. Still others argued that students would give insincerely positive remarks because their responses would not be anonymous.

We have found that when using a feedback-and-refinement system, in which students give feedback at their leisure, comments tend to be more plentiful and insightful. Our recent survey of a graduate management course found that students typed roughly four times as much comment text on average (62 words per student) as students completing a paper-based version of the same evaluation form at the end of class (15.4 words per student). In addition, comments delivered through the online system were automatically sorted by category and searchable by key word, generating individual results and lists of action-oriented recommendations. Comments written on the paper-based form had to be re-typed to hide recognizable handwriting and provided no means of organizing the information for the instructor's benefit.
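
As a small illustration of what "sorted by category and searchable by key word" might involve, the sketch below groups typed comments by an assumed category label, runs a simple keyword search, and computes words per respondent in the same way as the averages quoted above. The categories and comments are invented for the example.

    # Sketch of category grouping and keyword search over typed comments
    # (the data and category labels are invented for illustration).
    from collections import defaultdict

    comments = [
        {"category": "pacing", "text": "The readings pile up before each exam."},
        {"category": "materials", "text": "Posting slides before lecture would help."},
        {"category": "pacing", "text": "More time on the case studies, please."},
    ]

    def by_category(comments):
        """Group comment text under its category label."""
        grouped = defaultdict(list)
        for c in comments:
            grouped[c["category"]].append(c["text"])
        return dict(grouped)

    def search(comments, keyword):
        """Return comments containing the keyword (case-insensitive)."""
        return [c["text"] for c in comments if keyword.lower() in c["text"].lower()]

    def words_per_respondent(comments, respondents):
        """Average typed words per respondent, as in the comparison above."""
        total_words = sum(len(c["text"].split()) for c in comments)
        return total_words / respondents

    print(by_category(comments))
    print(search(comments, "lecture"))
    print(round(words_per_respondent(comments, 3), 1))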

Conclusion

Colleges have grown accustomed to using end-of-term standardized evaluations as a basis for both improving the quality of instruction and making important faculty career decisions. This system frustrates both students and instructors, providing neither with the feedback required to make necessary changes while classes are in session. A feedback-and-refinement process serves students, faculty, and administrators better by removing obstacles to learning, providing a means to rapidly improve delivery, and cutting evaluation costs. A fully developed Web-based evaluation system serves colleges better by providing information more quickly and clearly and shifting the definition of quality instruction and improvement from "getting high scores" to "using student feedback to facilitate change." By taking these steps, schools can begin to mine the vast potential of technology-driven evaluation to improve teaching and learning.

Authors' note: Many of the statements we have made are based on feedback from college administrators, faculty, and students, as well as from our data and personal experience. We invite others to test our assumptions, share data, and make discoveries in the emerging area of Web-based evaluation.

References

America’s 100 Most Wired Colleges. (1999, May). Yahoo Internet Life. Retrieved 15 June 2000 from the World Wide Web: http://www.zdnet.com/yil/content/college/colleges99.html.

Champagne, M. V., Wisher, R. A., Pawluk, J. L., & Curnow, C. K. (1999). An assessment of distance learning evaluations. Proceedings of the 15th Annual Conference on Distance Teaching and Learning, 15, 85-90.

Hmieleski, K. H. (2000). Barriers to online evaluation. Troy, NY: Rensselaer Polytechnic Institute, Interactive and Distance Education Assessment (IDEA) Laboratory. Retrieved 15 June 2000 from the World Wide Web: http://idea.psych.rpi.edu/evaluation/report.htm.

Kronholm, E. A., Wisher, R. A., Curnow, C. K., & Poker, F. (1999). The transformation of a distance learning training enterprise to an Internet base: From advertising to evaluation. Paper presented at the Northern Arizona University NAU/Web99 Conference, Flagstaff, AZ.

Phipps, R., & Merisotis, J. (1999). What’s the difference?: A review of contemporary research on the effectiveness of distance learning in higher education. Institute for Higher Education Policy. Retrieved 15 June 2000 from the World Wide Web: http://www.ihep.com/difference.pdf.
