Learning in an Online Format versus an In-class Format: An Experimental Study


The past five years have witnessed a revolution in education, with an acceleration in the use of online technologies to assist or, in many cases, supplant traditional modes of instruction (Bjorner 1993; Velsmid 1997). Peterson's Guide reports that nearly 400 accredited colleges and universities in North America currently employ online instruction of some sort (Velsmid). In addition, Herther (1997) noted that over 150 accredited institutions offer entire bachelor's degree programs to students who rarely, if ever, visit campus.

The asynchronous nature of many online programs and their accessibility from home, office, or hotel room are obvious advantages to students (see Bjorner). Additionally, as the cost of traditional education increases, market pressures are forcing more and more institutions to consider online offerings (see Gubernick and Ebeling 1997), which do not incur the costs of dormitories, athletic programs, and the like. The Florida State University system expects online programs to save about 40% of the cost of in-class programs ("Caught" 1998). It should be noted, however, that Duke University charges a premium for its online MBA ($82,500 vs. $50,000 for its on-campus equivalent).

As online courses and programs proliferate, questions about the quality of such instruction and its comparability with traditional methods naturally arise. Gubernick and Ebeling report a study conducted by the University of Phoenix (a private, for-profit institution) showing that the standardized achievement test scores of its online graduates were 5% to 10% higher than those of graduates of competing on-campus programs at three Arizona public universities. While one may legitimately question the degree of comparability of the subject populations, these results are similar to those summarized by Vasarhelyi and Graham (1997), in which investigators at the University of Michigan concluded that computer-based instruction yielded higher average scores than traditional instruction.

To date, the most methodologically sound investigation to evaluate the effectiveness of online instruction was conducted by Gerald Schutte at Cal State, Northridge (as cited by McCollum 1997). "Schutte randomly divided his statistics class into two groups. One attended class as usual, listening to lectures, handing in homework assignments, and taking examinations. The other took an online version of the course, completing assignments on a World Wide Web site, posting questions and comments to an electronic discussion list, and meeting with their professor in an Internet chat room. After an orientation session, students in the virtual class went to Dr. Schutte's classroom only for their midterm and final exams. On both tests, Dr. Schutte found, the wired students outscored their traditional counterparts by an average of 20 percent."

The present study extends Schutte's paradigm by examining the pre- and posttest scores of students enrolled in online and in-class versions of the same courses, taught by the same instructors, across a variety of disciplines.

Methodology

Students enrolled in five different undergraduate online courses during the Fall 1997 semester participated in a test-retest study designed to measure their learning of the course material. These students were compared with students enrolled in traditional in-class courses taught by the same instructors. The course titles were Organization Behavior, Personal Finance, Managerial Accounting, Sociological Foundations of Education, and Environmental Studies. Student participation was voluntary; names were used only to match each student's pretest and posttest results.

Subjects

In total, 40 undergraduate students were enrolled in the online courses and 59 undergraduate students were enrolled in the in-class courses during the testing period.

Pretests

Instructors designed pretests to measure students' knowledge of the course content prior to the start of the course. Pretest formats differed by instructor, but all were scored on a 100-point scale. The average pretest score for online students was 40.70 (s.d. = 24.03); for in-class students it was 27.64 (s.d. = 21.62).

Posttests

Instructors designed posttests to measure students' knowledge of the course content at the end of the course. The posttest for each class was similar to the pretest for that class, and all were scored on a 100-point scale. The average posttest score for online students was 77.80 (s.d. = 18.64); for in-class students it was 77.58 (s.d. = 16.93).
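
These summary statistics are sufficient to reproduce the between-group comparisons reported in the Results section below. A pooled-variance two-sample t statistic is assumed here (the article does not name the test variant, though the reported degrees of freedom, 40 + 59 - 2 = 97, are consistent with it):

$$
t = \frac{\bar{x}_1 - \bar{x}_2}{s_p \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}},
\qquad
s_p^2 = \frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}.
$$

Plugging in the pretest figures ($\bar{x}_1 = 40.70$, $s_1 = 24.03$, $n_1 = 40$; $\bar{x}_2 = 27.64$, $s_2 = 21.62$, $n_2 = 59$) gives $s_p \approx 22.6$ and $t \approx 2.82$, matching the value reported below.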

Results

Using a matched t-test, our results indicate that posttest scores for both groups of students were significantly higher than pretest scores (t = 14.24; d.f. = 98; p < 0.0001). Comparing the pretest scores of the two groups, online students scored significantly higher than in-class students (t = 2.82; d.f. = 97; p = 0.0059). However, there were no significant differences between the posttest scores of the online and in-class students (t = 0.06; d.f. = 97; p = 0.9507).
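
Readers who wish to verify the two between-group tests can recompute them directly from the summary statistics reported above. The following Python sketch uses SciPy (our tooling choice, not part of the original study) and assumes equal-variance pooled t-tests; the matched pre/post comparison cannot be recomputed this way because it requires the individual paired scores, which are not published here.

```python
# A minimal sketch, assuming equal-variance (pooled) independent-samples
# t-tests; SciPy can compute these from summary statistics alone.
from scipy import stats

# Pretest comparison: online (n = 40) vs. in-class (n = 59).
t_pre, p_pre = stats.ttest_ind_from_stats(
    mean1=40.70, std1=24.03, nobs1=40,   # online students
    mean2=27.64, std2=21.62, nobs2=59,   # in-class students
    equal_var=True,
)
print(f"pretest:  t = {t_pre:.2f}, p = {p_pre:.4f}")   # t = 2.82, p = 0.0059

# Posttest comparison: online vs. in-class.
t_post, p_post = stats.ttest_ind_from_stats(
    mean1=77.80, std1=18.64, nobs1=40,   # online students
    mean2=77.58, std2=16.93, nobs2=59,   # in-class students
    equal_var=True,
)
print(f"posttest: t = {t_post:.2f}, p = {p_post:.4f}")  # t = 0.06, p = 0.9507
```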

Discussion

Our study demonstrates that, for our sample, the learning of online students equaled that of in-class students. Interestingly, the students who self-selected into the online courses scored higher on the pretests than the in-class students did. This result suggests that students who select online courses may be better prepared for the course material than students who select in-class courses. That preparedness did not, however, lead to greater learning, since there were no differences between the two groups' posttest scores.

When discussing an online course, it is natural for faculty and students alike to question the effectiveness of the delivery method. Students' ability to demonstrate their learning of the material is one way to measure that effectiveness, and by that measure this study provides support for the effectiveness of the online courses.

The generalizability of these findings to other online courses is limited by the small number of students enrolled in the research courses. Our university prides itself on small course enrollments; this may be better for the students, but it is not better for our research design. It is therefore recommended that continued research test the effectiveness of online instruction using additional samples.


References

Bjorner, S. 1993. "The virtual college classroom." Link-Up, 10, pp. 21-23.

"Caught in the Web: E-mail reshapes educators' roles." 1998. Sun-Sentinel, September 22, p. 6B.

Gubernick, L. and Ebeling, A. 1997. "I got my degree through E-mail." Forbes, 159, pp. 84-92.

Herther, N. 1997. "Education over the Web: Distance learning and the information professional." Online, 21, pp. 63-72.

McCollum, K. 1997. "A professor divides his class in two to test value of online instruction." Chronicle of Higher Education, 43, p. 23.

Vasarhelyi, M. and Graham, L. 1997. "Cybersmart: Education and the Internet." Management Accounting, (August), pp. 32-36.

Velsmid, D. A. 1997. "The electronic classroom." Link-Up, 14, pp. 32-33.

This article originally appeared in the 06/01/1999 issue of THE Journal.
