Designing Distance Delivery Courses
As we create distance delivery courses for the Internet, we must learn the medium’s new functional capabilities and incorporate them into this emerging methodology. Because that learning is uneven, distance delivery courses of widely varying quality abound. As educators and course developers adopt distance delivery, new mind-sets must emerge. When technology changes dramatically, well-understood and long-employed methodologies become obsolete. History shows that people persistently apply outdated methodologies after circumstances change; these anachronisms eventually disappear.
The Industrial Revolution was an earlier era of rapid changes where outdated practices persisted throughout the period. The world’s first iron bridge was constructed in Shropshire, England, in 1779 with wooden-bridge technology. It took years before the Industrial Revolution’s advances in iron use were integrated into bridge construction. Even Stonehenge, the mysterious circular formation of large stones, shows that its builders worked with outdated technology. The stones’ adjacent sides appear to have been tongued and grooved. Stone-building technology later progressed to include mortar joints that bond better.
However, delays in technology adoption are not limited to antiquity. A more recent example is automated bank-service delivery: “The first ATM was located inside a bank and was available only during banking hours. Bankers viewed this technological innovation as an automated teller. [But] real innovation did not occur until ATMs were placed outside banks and in malls, grocery stores and airports, available 24 hours a day” (Twigg 2001).
Because distance delivery is new, expectations for it remain meager. Russell (1999) noted that studies often seek to prove distance delivery is as good as traditional classes. But we must move beyond merely being as good; distance delivery should aim for superiority.
Education’s complexity makes studying distance delivery a challenge. Perhaps the most important characteristic needed for adjusting course structure and accommodations is experience. A good summary of the general notion is captured in a question posed by Webb (2001): “Have you successfully delivered this course in the traditional classroom so that standards for measurement are available to you?”
Historically, professors review course topics and select the body of knowledge. Knowledge components are assembled into a logical sequence. Students rely on assigned texts, articles, case studies, video clips, problems and other tasks. All participants work through the content sequentially and concurrently.
Reading. The time-honored approach to learning has been reading. Traditional textbooks can beneficially accompany a distance delivery class. In addition, it is becoming more practicable to read computer-displayed text. Although it is not an integral part of distance delivery, hypertext incorporated into electronically delivered material enhances its usefulness. The ability to pursue greater topical depth through an Internet connection can enhance reading, but caution is necessary to prevent this practice from becoming a distraction.
Lectures. Second to reading for imparting new knowledge is the lecture. Before books and printed material were generally available, individuals lectured to groups about the content of an assortment of books. Today’s lecturers offer their dimension to the material through video. In addition to the punctuation and diacritics employed to convey meaning around words in text, a lecture includes volume adjustments, timing, tone, emphasis and various other enhancements to restructure and shape that meaning. Spontaneity is also possible. An excellent lecture conveys substantial amounts of meaning often not found in reading.
Problems. Distance delivery experiences must foster problem-solving skills. Well-designed problems can be delivered at a distance as in traditional classes, and their complexity can extend from simple to complicated procedures. Frequently, final answers are relatively simplistic: after many steps, the result is a mere number, and automating its evaluation is confounded by rounding differences. Structuring questions in multiple-choice formats resolves this. A student’s incorrect answer becomes obvious when it matches no listed alternative, and a “none of the above” option accommodates that case. Another resolution is to calculate results by mistaken procedures: if the answers reached by erroneous sequences are included as alternatives, the student’s particular error is apparent to the system. By including explanations of what would lead to each wrong answer, students are guided to a better understanding of the problem.
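As a sketch of this distractor approach, the following Python fragment computes a finance question’s correct answer alongside the answers produced by common mistaken procedures. The function names, the future-value example and the particular mistakes are illustrative assumptions, not part of any specific courseware.

```python
def future_value(principal, rate, years):
    """Correct procedure: annual compounding."""
    return round(principal * (1 + rate) ** years, 2)

def common_mistakes(principal, rate, years):
    """Answers a student reaches by plausible erroneous procedures."""
    return {
        "used simple interest": round(principal * (1 + rate * years), 2),
        "compounded one year too few": round(principal * (1 + rate) ** (years - 1), 2),
        "treated the percent as a rate": round(principal * (1 + rate / 100) ** years, 2),
    }

def build_alternatives(principal, rate, years):
    """Map each alternative to feedback explaining how it was reached."""
    alternatives = {future_value(principal, rate, years): "correct"}
    for explanation, value in common_mistakes(principal, rate, years).items():
        alternatives[value] = explanation  # guides the student to the flaw
    alternatives["none of the above"] = "answer matches no listed alternative"
    return alternatives

choices = build_alternatives(1000, 0.05, 3)
```

Because each wrong alternative is tied to the procedure that produces it, the system can tell a student not merely that an answer is wrong, but why.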
Research. Research opportunities have dramatically changed with pervasive Internet use. Previously, there was a scarcity of available information: researchers would spend considerable library time finding, ordering and waiting for information, with typical delays of days or weeks. That paradigm has been replaced with a plethora of information. Rather than struggling in a wilderness, today’s student is burdened with vast quantities of information needing analysis and organization. Research assessments require exploring a topic and writing a report. Spelling, grammatical and style problems in these reports can be detected automatically, but in-depth assessments require human intervention. Perhaps the greatest benefit available for reports through distance delivery is the interactivity of returning electronic sticky notes or audio comments attached directly to a paper.
The immediate outcome of any course, whether online or traditional, is an evaluation of the results. Two important evaluation dimensions are reliability and validity. An assessment’s reliability relates to its consistency with other assessments of the same content. A highly reliable assessment is one that closely matches equivalent evaluations, while an assessment with low reliability provides an erratic outcome that fails to inspire confidence. Reliability can be likened to the standard deviation of outcomes measuring the same objective. An assessment’s validity relates to its ability to measure the understanding possessed about the topic. If an assessment has high validity, those with a sound understanding of a topic should score high, while those with poor topical understanding should score low.
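The standard-deviation analogy can be made concrete with a small Python sketch; the score lists below are invented for illustration.

```python
from statistics import stdev

def spread(scores):
    """Standard deviation of one student's scores on equivalent
    assessments of the same content: a proxy for (un)reliability."""
    return stdev(scores)

reliable_scores = [82, 84, 83, 85]  # consistent outcomes: small spread
erratic_scores = [60, 95, 72, 88]   # erratic outcomes: large spread
```

The smaller the spread across equivalent assessments, the more they agree, and the more confidence their outcomes inspire.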
Periodic assessments throughout a course evaluate progress. Presumably at course termination, all continuing participants obtain a final assessment. The potential for improving on this paradigm through distance delivery is great. Initial assessments identify individual strengths and weaknesses in a topical area. Then students should be individually directed toward suitable tasks, which allow strong areas to be slighted and weak areas to be emphasized. Ultimately, students work through a body of knowledge to a suitable understanding. By stressing individual needs, all course participants can reach a satisfactory level. Efficiency is gained by eliminating the time historically spent dealing with material that is already understood.
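The individualized paradigm described above can be sketched in Python: an initial assessment scores each topic, and the student is directed toward tasks for weak areas while strong areas are slighted. The mastery threshold, topic names and tasks are illustrative assumptions.

```python
MASTERY_THRESHOLD = 0.8  # assumed cutoff for "already understood"

def study_plan(initial_scores, tasks_by_topic):
    """Return tasks only for topics scored below the mastery threshold,
    so time is not spent on material already understood."""
    plan = []
    for topic, score in initial_scores.items():
        if score < MASTERY_THRESHOLD:
            plan.extend(tasks_by_topic.get(topic, []))
    return plan

scores = {"present value": 0.9, "annuities": 0.55, "bond pricing": 0.7}
tasks = {
    "present value": ["PV problem set"],
    "annuities": ["annuity reading", "annuity problems"],
    "bond pricing": ["bond pricing cases"],
}
plan = study_plan(scores, tasks)  # omits the already-mastered topic
```

Repeating the assessment after the plan is completed closes the loop: strong areas stay slighted and remaining weak areas continue to receive emphasis.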
Test characteristics. Virtually any type of written test can be administered by distance delivery. True-false, multiple-choice, multiple-answer, ordering, short-answer, problem and essay questions all fit the typical structure. True-false and multiple-choice questions are easily scored mechanically, while multiple-answer and ordering questions are slightly less amenable to machine scoring. Short-answer and problem questions whose answers are one or two words or numbers can be scored mechanically, though the difficulties of machine recognition make the challenge great. Essay questions remain beyond the scope of machine grading for the present, although pattern-recognition technology makes it possible to check essays for casual spelling and grammatical infractions.
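Mechanical scoring of the simpler question types can be sketched in Python; the normalization shown for short answers is a minimal illustration of the recognition challenge, not a complete solution.

```python
def score_objective(answer, key):
    """True-false and multiple-choice: exact match against the key."""
    return answer == key

def score_short_answer(answer, key):
    """One- or two-word answers: normalize case and surrounding
    whitespace before comparison, a first small step toward machine
    recognition of free-form responses."""
    return answer.strip().lower() == key.strip().lower()
```

Ordering and multiple-answer questions need only slightly more machinery (comparing sequences or sets), which is why the text calls them slightly less amenable rather than infeasible.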
An underappreciated dimension of education is rigor. Few people would argue that rigor is inappropriate in education. Interestingly, one must look to the fourth definition of rigor offered by Merriam-Webster to find a definition of “exactness.” As such, rigor is taken to mean “strict, particular and complete accordance with fact or a standard,” and is said to be “marked by thorough consideration or minute measurement of small factual details.” A synonym offered for “exact” is “correct.” Ensuring that students achieve a correct understanding of details and an ability to use that understanding effectively is a worthy educational outcome. A desirable feature of distance delivery is making things understandable rather than difficult; rigor should not be confused with difficulty. We must seek rigor with ease.
Cheating problems date to antiquity, and distance delivered courses introduce new opportunities for creative cheating. The presumption in any educational system is that students gain and demonstrate knowledge appropriately, so the mechanisms for cheating in distance delivery courses need to be examined. Cheating techniques include gaining prior knowledge of assessments, inappropriate collaboration, breaking into systems and plagiarism.
Advance exam-question copies. Certain exam types, regardless of their delivery mechanism, are susceptible to corruption through advance knowledge. In particular, simple fact-based questions, often presented in multiple-choice, true-false or short-answer formats, best discriminate between knowledgeable people and those lacking the knowledge only if the specific questions are unknown in advance.
It’s no secret that exams are stolen from offices, and detecting whether students possess advance exam copies is challenging. Normally, good professors teaching a traditional class interact with students enough to assess their ability without exams, so aberrations between exam results and perceived ability are readily apparent. Unfortunately, while the symptom is apparent, the resolution is often difficult, and it is inappropriate to substitute a subjective assessment for a flawed objective one. Consequently, awareness of advance question knowledge merely implies the need for future precautions.
Inappropriate collaboration. Collaboration is sometimes beneficial, but frequently inappropriate. Bowers (1964) reported that the greatest amount of dishonest behavior occurs in unsupervised homework or laboratory assignments. Distance delivery exacerbates the problem; the privacy it affords makes using a personal assistant during exams easier. The well-known case of Edward Kennedy sending a skilled Spanish speaker to take his exam is an early example of this technique (McGinniss 1993). Kennedy’s attempt to pass the course by cheating was thwarted by an observant proctor who recognized the impostor, but numerous similar instances have likely gone undetected. With unsupervised distance delivery exams, two or more students can combine efforts with little fear of detection. Answers can be copied just as on paper exams, where students within eyeshot observe answers entered on others’ papers; there, a watchful proctor helps reduce the problem. When distance delivery students must assemble in monitored facilities to limit cheating, the benefits of distance delivery also diminish. Geographic dispersion reduces copying, but communication devices still allow deception.
Breaking into systems. Internet-connected computer systems have long been recognized as vulnerable, so it is inevitable that Internet-based distance delivery courses are susceptible to tampering. Few people are adequately skilled to break into these systems, but that handful can benefit inappropriately and share their work widely and quickly.
The notion of breaking into the system is not totally new with distance delivery. Students long ago broke into locked offices and changed grades. Years ago, I detected that a student had broken into my locked office and changed a grade recorded for him in my gradebook. He was captured during a later, elaborately planned attempt at another office. It is likely that few such crimes have been detected, though most readers have heard similar stories.
Plagiarism. Reports and essays have long been abused by plagiarists. Plagiarism is presenting another’s words or ideas as one’s own without credit; exact words probably constitute most instances. Detecting plagiarism is easier when reports are delivered in a machine-readable format. Detection previously required prior awareness of the source document, but a modestly insightful professor can usually select a handful of potential sources from the bibliography and review copies to confirm the plagiarism. Another approach is to retain copies of previously submitted reports for comparison; this provides a limited database and still depends on the professor’s ability to recognize repeated patterns manually. Commercially available products such as MyDropBox.com (http://mydropbox.com) allow professors to submit papers electronically to detect copied work.
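The retained-copies approach can be sketched in Python by counting shared word sequences (n-grams) between a submission and a previously retained report; the n-gram length and flagging threshold are illustrative assumptions, and real products use far more sophisticated matching.

```python
def ngrams(text, n=5):
    """All n-word sequences in a text, lowercased for comparison."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(submission, retained, n=5):
    """Fraction of the submission's n-grams found in a retained report."""
    subs, kept = ngrams(submission, n), ngrams(retained, n)
    if not subs:
        return 0.0
    return len(subs & kept) / len(subs)

FLAG_THRESHOLD = 0.3  # assumed; a high overlap merits the professor's review
```

A flagged pair is only a signal for human review: shared quotations or boilerplate can also produce overlap, so the final judgment remains the professor’s.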
In addition, word processing programs attach hidden information to document files. The properties option reveals the author, date of initial creation, date of modification and other details about a file. When a document is saved in HTML format, this information is embedded in the file’s code; the word processor does not display it, but opening the file in an editor reveals it.
For successful distance delivery courses, the following recommendations are suitable.
- Give students in distance delivery classes the same feeling of welcome as traditional classes. For advance insight, post the syllabus on an unrestricted server so that prospective students can make informed decisions.
- Require a course reference in any e-mail subject line. Given the junk e-mail crowding mailboxes today, serious messages can often be accidentally discarded unread.
- Distance delivery also accommodates individuality and excellence. In traditional settings, professors focus on the average student, so both the weakest and the strongest students suffer. Design distance delivery courses with attention to individuality; courses delivered at a distance should excel beyond traditional courses.
- Frequently asked questions should be compiled and made available. This file logically evolves over time as new and generally relevant questions emerge. A highly visible pointer should lead confused students to it.
- A practice experience should introduce students to the course structure and critical components. Questions can ask about the information contained in the welcome, the general instruction and the frequently-asked-question files. Some questions can even be contrived to guide students into actions that produce problems. Students will then have less chance of committing a grievous error.
- Assigned page lengths confuse students because font size, spacing and margins vary. Specify a word count instead.
- While the asynchronous structure encourages individual pacing, intermediate goals benefit participants. Semester-length courses without intermediate steps allow participants to delay until the ultimate deadline, by which time the accumulated tasks become insurmountable. Intervening deadlines give participants manageable tasks: rather than cramming learning into a course’s waning days, work is distributed. Allowing early completion reduces deadline conflicts as well. Alternately, offer makeup assignments on a delayed schedule; to discourage makeups, penalize them with reduced scores or greater difficulty.
- It is inevitable that students will experience computer and connection malfunctions, so it’s unfair to penalize students for external factors. Grace periods can accommodate most corrections. Scheduling deadlines one day early allows instructors to resolve any crashes during the grace period.
- Exam completion time should be limited. A penalty can be assessed for exceeding the limit, or, more palatably, a reward can be given for timely completion. Instructors must remember that modest delays in system responses occasionally occur.
- General interest questions should be submitted to public forums, where one explanation reaches everyone. E-mail exchanges are repetitive, and reluctant students often miss explanations sent only by e-mail.
- Because quiz discrimination is lost when students take identical quizzes and collaborate, questions should be drawn randomly from a database whenever possible. Remember, however, that collaboration remains possible and that previously completed assessments can be shared with later participants. Reduce this problem by preventing assessment pages from being copied or printed.
- Mistake explanations help students learn. Immediate explanations are good, but delays until everyone has completed the assessment may be appropriate.
- Quantitative questions often allow easy transfers of answers among collaborating students. Confound them with a multiplicity of answers: by modestly changing one input parameter, each student receives a different question.
- Finally, numerical pattern recognition is difficult due to rounding errors, so it is often wise to implement quantitative questions with multiple-choice alternatives. Keep the alternatives plausible by calculating the results of likely errors, and retain a “none of the above” option to simplify that task.
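Two of the recommendations above, drawing random questions from a database and varying one input parameter per student, can be sketched in Python. The question bank, parameter choices and seeding scheme are illustrative assumptions.

```python
import random

QUESTION_BANK = [f"question {i}" for i in range(1, 21)]  # stand-in database

def draw_quiz(student_id, size=5):
    """Draw a random subset of questions, seeded per student so the
    same student always sees the same quiz on reconnection."""
    rng = random.Random(student_id)
    return rng.sample(QUESTION_BANK, size)

def interest_question(student_id):
    """Vary one input parameter (the principal) per student so that
    collaborators cannot simply transfer final answers."""
    rng = random.Random(student_id)
    principal = rng.choice([1000, 1500, 2000, 2500])
    answer = round(principal * 1.05 ** 3, 2)  # 3 years at 5 percent
    return principal, answer
```

Seeding the generator on a student identifier is one simple design choice: each student’s quiz differs from most others’, yet remains reproducible for grading and dispute resolution.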
Fortunately, distance delivery classes are easily replicable, so incremental improvements are possible. Although many things must be completed in advance and remain immutable during an offering, revisions can precede the next iteration. In the end, I have no doubt that distance delivered courses designed with due diligence can surpass traditional courses.
Bowers, W. 1964. “Student Dishonesty and Its Control in College.” New York: Bureau of Applied Social Research, Columbia University.
McGinniss, J. 1993. The Last Brother: A Biography of Edward M. Kennedy. New York: Simon & Schuster.
Russell, T. 1999. The No Significant Difference Phenomenon. Montgomery, AL: International Distance Education Certification Center.
Twigg, C. 2001. “Innovations in Online Learning: Moving Beyond No Significant Difference.” The Pew Learning and Technology Program. Online: http://www.center.rpi.edu/PewSym/mono4.html.
Webb, J. 2001. “Technology: A Tool for the Learning Environment.” Campus-Wide Information Systems 18 (2): 73-78.
Arlyn Rubash, Ph.D., is an associate professor of finance at Bradley University in Illinois. He holds a bachelor’s degree, an MBA and a Ph.D. from Pennsylvania State University. He teaches a variety of finance courses employing creative delivery techniques: he has delivered courses by traditional means, videotaped lectures, taken students to visit corporations in the United States and abroad, and incorporated many other creative devices. His recent work includes extensive applications of the Internet for course delivery.
This article originally appeared in the 11/01/2004 issue of THE Journal.