
Courseware, Assessment and Evaluation

by Dr. Sylvia Charp, Editor-in-Chief

Whether we are in K-12, higher education or training, we are always looking for better software and assessment tools to help students think, solve problems, organize, synthesize and communicate. We also want to create an environment that generates excitement about learning and a desire to learn more. We can see this happening. For example, we can observe:

- Courseware and software tools have substantially improved and are more easily accessible.
- The improving capabilities of telecommunications and multimedia systems are providing opportunities for producing material more interesting to the learner.
- The new technologies are more exciting to students and are having a positive effect on learners.
- Technology funds are no longer devoted only to equipment and technical staff; a portion is being set aside for faculty who wish to take advantage of the Web/Internet and multimedia.
- Web-based courses are being created, ranging from the simple presentation of lecture notes and exercises to whole interactive teaching packages.
- Sharing of ideas between teachers, between teacher and students, and among students is encouraged.
- Customized feedback on an individual's activities, with students commenting on and evaluating each other's work, has increased.

New Tools for Assessment

Development of new assessment techniques is expanding. Basic assessment tools are usually defined in quantitative terms: standardized tests; objective tests designed to measure outcomes of specific courses; criterion-referenced tests; and measures developed to demonstrate comprehension, recall or some other skill. Computerized tests have existed for a number of years. For example, the Graduate Record Examination (GRE), the test students take to get into graduate school, has been computerized since 1992.
Adaptive testing, in which the test itself "adapts" and changes as the taker answers each question, presenting easier or more complex questions as required, is in greater use. Performance-based testing, for "on the job" evaluation, is accepted and provides valid measurements. Use of profiles and portfolio assessment is growing. In a recent study conducted in Vermont, the Rand Corp. concluded that the effects of portfolio assessment on instruction were "substantial" and "positive." In another sign of the times, all teacher education graduates of Eastern Washington University in Cheney, Wash., leave with a diskette that states, among other things, their academic accomplishments, student teaching experiences, their educational philosophy and comments on teaching pedagogy.

Room for Improvement

However, what passes as evaluation is often limited in both scope and scale. Though monitoring and assessment techniques are often embedded within software programs, these are frequently trivial, do not involve the end user, and therefore have not been properly tested. A paper presented at the Educational Multimedia and Hypermedia conference in Boston in June 1996 by A. Bartalome, Universitat de Barcelona, and L. Sandals, University of Calgary, titled "Evaluating Educational Multimedia Programs in North America," reviewed a small sample (26 sites) of end users' involvement in the development of educational multimedia projects. The 26 programs represent the work of more than 160 people over an average period of two years. In several cases, work is in progress on new versions. More than half of the programs were related to science and technology. (Though questionnaires were sent to more than 100 projects, the authors said the survey's length and in-depth questions may have discouraged a greater response.)
Even with that caveat, the paper's conclusions are interesting:

- Educational multimedia programs were evaluated during production (65%) and at the end of production (68%). Most programs were continuously evaluated during production, but participation of the end user was not always encouraged.
- In the programs themselves, 92% include some type of activity or question (exercises, questions or problems to solve).
- Programs also included (a) a help system, (b) user control over the program, (c) a variety of levels for different users, (d) an assessment or evaluation system, and (e) a feedback summary for users. Note that 12% of the programs had fewer than three of the above "quality indicators," and the latter three (c, d, e) are usually given the least attention.

Designing and creating good software with worthwhile assessment tools that do more than report and critique responses requires a significant allocation of resources and involves implementation on a significant scale. Research in application and design must remain a key issue. Technology, however sophisticated, plays only a small part in the complex learning process, but it does provide the tools to assist us in our efforts.

This article originally appeared in the 09/01/1996 issue of THE Journal.

