What Do Students Know and How Do We Know that They Know It?


While the need for new assessment tools is not limited to distance learning, distance learning has its own set of problems, and we need to find new ways to assess course and program quality. As the relative new kid on the block, we must find ways to validate our successful courses and programs. How, then, can a college prove that it is providing quality programs and quality courses in all formats of distance learning?


At Paducah Community College we have been offering distance learning courses with the use of televised video since 1976, with interactive TV since 1990, computer-mediated courses since 1991 and Web-based courses since 1996. We have evaluated all of the courses with the traditional measures: student evaluation of instruction, survey, grade distribution and attrition rate. We have done the same with our on-campus courses and program offerings.

Making Student Evaluation Forms Applicable to Each Course

Student evaluation of instruction is one of our most consistent and, we have traditionally thought, strongest indicators of course quality. With that in mind, we took the on-campus student evaluation of instruction form and applied it to distance learning. Some of the results were ridiculous. A question like, "Is daily attendance kept on a regular basis?" fails to fit a distance learning format. So new forms with a unique approach to evaluation had to be designed. Questions about an instructor's understanding of technology-related demands became important. We also found questions about the number of times a student was in contact with the instructor, and about the quality of those contacts, to be equally important.

Of course, certain questions are universal but many of the information assessment needs are individual and must be written for the specific college, program or course. Since assessment needs change, revision of evaluations from semester to semester and year to year may be required. Currently, we have an online student evaluation of instruction that covers our immediate need. The form can be filled out online and e-mailed to the division chair, the program coordinator, or any person designated to be the recipient.

In the early stages of our course and program offerings, the survey was a major assessment device. We wanted to know if the students felt they were getting an education equal to that of an on-campus course. We also wanted to know what level of technology training they had; what level of discussion they felt was necessary with the faculty and with other students; and whether they were given adequate information to perform the requirements of the course. We further wanted to know such things as the number of students who completed the course; why others were not able to finish; and how we could provide a more successful course or program. The survey was an ever-changing assessment. Several departments often gave surveys without knowledge of one another's efforts. This unnecessary duplication frustrated the students and embarrassed the departments. Coordination of survey and evaluation, indeed of the whole process, became essential.



A Survey is Only as Good as the Remediation of the Weaknesses It Exposes

Many of the survey results were positive, but action plans built around the weaknesses they exposed helped us create stronger programs and stronger courses. One such improvement came early in our offering of computer-mediated English composition courses. In 1991, when Paducah Community College offered its first online composition class, we used a bulletin board service (BBS). Students would phone in and connect to the BBS via modem and computer. The BBS allowed the student to read messages, converse with classmates, ask the instructor questions and turn in assignments. A single-line BBS would allow only one student online at a time. By survey, we found that a major frustration was having to read lecture material online, or, as was often the case, waiting to read lecture material because someone else had gotten there first and was tying up the line.

One solution was an off-line program that gave lecture material to the student without having to connect to the BBS. That idea eventually led to a software program, which the student receives as a registered participant of the online course. The software is available in both Windows Help and CD-ROM formats, and contains all the lecture, definition and writing assistance the student needs to work properly within the course. Many other course and program weaknesses were identified by survey. However, once the major weaknesses are remedied, regular surveying is no longer necessary.

Assessing Grades and Grades as Assessment Factors

In a number of well-documented studies, grade distribution has shown repeatedly that students do as well in a distance learning atmosphere as they do in a physical classroom. It is equally true that grade distribution does not tell us anything about what a student knows. It tells us only that a student made a high, medium or low grade. Like any indicator of academic success, grades cannot be used in isolation. They must be looked at in combination with other factors, such as task performance.
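The kind of comparison those studies make can be sketched simply: tally the letter grades in each format and compare the resulting shares. The rosters below are entirely illustrative, not the college's actual records.

```python
from collections import Counter

# Hypothetical final grades from one on-campus and one distance section
# (illustrative data only, not actual course records).
on_campus = list("AABBBCCCDF" * 3)
distance = list("AABBBCCCCF" * 3)

def distribution(grades):
    """Return the share of each letter grade, A through F."""
    counts = Counter(grades)
    total = len(grades)
    return {g: counts.get(g, 0) / total for g in "ABCDF"}

for label, grades in [("on-campus", on_campus), ("distance", distance)]:
    shares = distribution(grades)
    print(label, {g: round(s, 2) for g, s in shares.items()})
```

Even when the two distributions match closely, the sketch illustrates the article's caution: identical shares of As and Fs say nothing about what any individual student actually knows.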

While it is true that students do as well in distance learning as in physical classrooms, that holds only if those students are carefully chosen, told about the uniqueness of a particular distance learning format, and advised of the characteristics of a distance learner. Those characteristics include being a strong self-starter, being self-disciplined, knowing the technology requirements of the specific format, and being able to meet other students and the faculty in a virtual environment rather than face-to-face. Students who have a majority of these characteristics have a great chance of success in a distance learning format; those who do not have a great chance of failure. Care is taken to ensure that the registered student is the student taking the class, performing the work and receiving the grade. It has also become quite obvious that distance learning is not the answer for all students. Once concerns about security of testing information, student confidentiality and student honesty are addressed, the grades bear out, as best they can in isolation, that distance learning is a viable format for many students.



Weighing in the Attrition Rate

Attrition rate is another traditional indicator of quality, but it tells us only that students stay in a class or drop out. We must conduct a survey, an evaluation of instruction and a grade analysis to determine the cause of a high or low attrition rate. Nationally, the attrition rate in advanced math classes is high. The reason is most often either poor teaching or poor preparation. Attrition alone does not explain the situation. As with all evaluation of instruction, combining information gives us the most accurate picture of effectiveness.

Other indicators of educational growth are beginning to have relevance in teaching. Among them are entrance and exit surveys, entrance and exit skills, and employer assessment. Success in the next-level course or program is used as an indicator of achievement. A major question being asked in educational circles these days is: What can the graduate do? The question is relevant. The answer is pertinent.

Aspects of Distance Learning Provide Means of Assessment

Distance learning has, by its nature, introduced several new success indicators. Since many distance learning formats require written communication (e-mail, for example), that communication itself can be an indicator of growth and learning, through a studied approach to its progression in grammar, organization and development. The use of threaded discussion in many distance learning classes also provides a point of evaluation: an analysis of the types of questions posed, the types of responses given, the depth of the observations between teacher and student and between student and student, and the number of posted entries.

Much of the interaction between learner and learner, learner and teacher, and learner and material in the distance learning format is written. E-mail is a written and visual medium. Students ask questions, make observations and discuss topics through e-mail. The writing is almost always informal, as neither the instructor nor the student feels the need for polished, edited writing in e-mail. While spell checking is automatic in some e-mail programs, grammar, development and mechanics checkers are not applicable. So what the reader gets in e-mail correspondence is generally an indication of the writer's abilities. A studied look at the quality of these messages over the length of a course can tell us something of a writer's growth. An e-mail message written at the end of a composition class should show a broader understanding of idea development than one written at the beginning of that course. Likewise, a message written at the end of that same class should show growth in organization and grammar. One would also expect clarity of writing to improve to some degree in any course with a writing component. It has long been believed that the more one practices a task, the better one becomes at it, especially when the writer is exposed to quality models.
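A "studied look" at message quality can be given a rough quantitative footing. The sketch below computes two crude indicators of writing maturity, average sentence length and vocabulary richness; the metrics and the sample messages are illustrative assumptions, not the college's actual instrument.

```python
import re

def writing_metrics(text):
    """Two rough indicators of writing maturity:
    average sentence length and vocabulary richness (distinct-word ratio)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    avg_sentence_len = len(words) / max(len(sentences), 1)
    richness = len(set(words)) / max(len(words), 1)
    return avg_sentence_len, richness

# Hypothetical messages from the start and end of a composition course.
early = "I dont get it. What do we do. Is it due friday."
late = ("The draft still needs a stronger thesis, but reorganizing the "
        "second paragraph around a single example clarified my argument.")

print(writing_metrics(early))
print(writing_metrics(late))
```

Numbers like these cannot replace a reader's judgment of development and organization, but tracked across a semester of e-mail they can support the growth claim the article makes.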



Making Virtual Discussions Count

While e-mail may be an indicator of growth in a writing environment, the threaded discussion commonly used within a distance learning environment may tell us something about the conceptualization of ideas in other courses or programs. Threaded discussion is typically used in association with Web-based learning. A question or proposition is posted, and the student is expected to respond to it. Once a response is posted, it is filed by student name and dated. The next student may respond to the original proposition or to the student posting. All students can read and post comments. This continuous growth of ideas stops only when the students feel the proposition is exhausted.

Branching ideas may be posted by students or by the instructor. These branching ideas may reflect an understanding of the concepts presented or indicate a need for further basic information. Thus, analysis of the growth illustrated in the threaded discussion is another instrument of evaluation. As mentioned above, the depth of understanding, the application of principles and the sharing of information can be used in the evaluation of instruction.
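The structure described above, a dated, attributed posting with branching replies, is naturally a tree, and two of the indicators mentioned (depth of a branch, number of posted entries per student) fall out of it directly. This is a minimal sketch under that assumption; the class and field names are illustrative, not those of any actual courseware.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Post:
    """One posting in a threaded discussion: filed by author and date,
    with branching replies."""
    author: str
    posted: date
    text: str
    replies: list = field(default_factory=list)

    def reply(self, author, posted, text):
        child = Post(author, posted, text)
        self.replies.append(child)
        return child

def max_depth(post):
    """Length of the longest branch: one rough measure of how far
    an idea was developed before the thread was exhausted."""
    if not post.replies:
        return 1
    return 1 + max(max_depth(r) for r in post.replies)

def entry_counts(post, counts=None):
    """Posted entries per participant, another simple indicator."""
    if counts is None:
        counts = {}
    counts[post.author] = counts.get(post.author, 0) + 1
    for r in post.replies:
        entry_counts(r, counts)
    return counts

# A tiny hypothetical thread.
root = Post("instructor", date(1999, 9, 1), "Is attrition a fair quality measure?")
a = root.reply("student_a", date(1999, 9, 2), "Only with context on preparation.")
a.reply("student_b", date(1999, 9, 3), "Agreed; pair it with exit surveys.")

print(max_depth(root))     # longest branch in the thread
print(entry_counts(root))  # posts per participant
```

Counts and depths are only the mechanical part; judging the depth of understanding in each posting remains the instructor's work.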

Acknowledging Sources of True Assessment

So the answer to "What does a student know, and how do we know that the student knows it?" is more complete now than it was in the past. We have evidence of what is known from a variety of sources. Some of those sources are traditional, but many come with the territory of distance learning. We must begin to take advantage of the fact that distance learning uses writing as its primary means of communication between student and student, as well as between student and teacher.



William Wade is Distance Learning Coordinator at Paducah Community College in Paducah, Kentucky. Wade has spoken nationally on distance learning since 1994 and has presented at various conferences. His article "Distance Learning: A Kentucky Perspective" was published in the April 1997 issue of ED Journal. Wade has taught online courses in composition since August of 1991.

E-mail: [email protected]

This article originally appeared in the 10/01/1999 issue of THE Journal.