A New Competitive Grant Model for Nevada
Transitioning to scientifically based research through testing and assessment.
Technology funding can have consequences beyond its original intent of providing money for equipment, professional development, and other necessary aspects of a project. It also can be a lever for changing "the way we have always done it." In Nevada, we used Enhancing Education Through Technology (EETT) funds to start breaking the cycle of "continuation grant politics." The newly designed application required projects to be implemented at the classroom level with a strong evaluation and assessment component designed to show evidence of an impact on student achievement. The push at both the state and federal levels for accountability led the Nevada Department of Education (NDE) to transition to a new emphasis on scientifically based research.
Sensing the possible disappointment from districts and consortia comfortable with their "entitlement" funding, the NDE split the EETT competitive grant funding into three sections. First, money was dedicated for a statewide technology conference (11 percent); second, for a competition similar to previous years (62 percent); and third, for a special competition developed for new scientifically based research projects (27 percent). This was the first step in moving toward a totally research-based competition.
A nonprofit public television station partnered with an urban school district to conduct the statewide conference. The regular competition attracted the usual applicants: urban districts and rural consortia. The special competition grantees included two consortia: 1) a partnership with higher education (University of Nevada, Reno) and four rural school districts (Churchill, Douglas, Lyon, and Nye counties) that focused on middle school science; and 2) a partnership with a for-profit company (Classroom Connect) and an urban school district (Clark County in Las Vegas) that focused on second-grade literacy. The following section includes reflections and preliminary assessment data on the university/rural project, Rural Science Teachers Teaching with Technology (RST3).
The RST3 program had all the earmarks of an exciting and successful project:
- Narrow grade-level grouping: middle school
- Focused content area: science
- Scientifically based research: a strict control of variables
The administrators in the project each selected two teachers to meet with researchers as a core planning group. The teachers examined the science standards and proposed topics that would provide ample opportunity for technology integration.
Critical to the project was setting up a research design that would isolate the effects of the technology integration on student achievement in middle school science with as much validity as possible. Teachers designed units of study that included assessments and an integrated technology component. Teachers also crafted the units in such a way that they could be taught with or without the technology. This allowed for selection into an experimental group (with technology) and a control group (without technology). The teachers conducted class one period with technology and another period without technology, because they felt they could hold other variables more constant for comparison purposes if they taught the units both ways.
Each participating teacher was provided with a technology package, including a laptop computer, an LCD projector, and 12 months of high-speed Internet access at home. Teachers also received 45 contact hours of professional development supported by Web-based instruction.
The teachers worked in groups to develop a total of eight units, and each teacher selected four of the units to teach to their students over two semesters. Teachers gathered demographic data such as gender, ethnicity, special-education services, and qualification for free and reduced-price lunch. Teachers also built three assessments into each unit and entered the summative scores from these assessments into a database, which grew to more than 125,000 data points. These data were submitted to the researchers with a randomized student number and no link to student names in order to preserve confidentiality. Parent permission was obtained before submitting any data, including assessment scores.
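The anonymization step described above can be illustrated with a short sketch. This is not the project's actual tooling; the field names, the `anonymize` function, and the sample records are all hypothetical, but the idea is the same: each student name is replaced with a randomized number, and the submitted records retain no link back to names.

```python
import secrets

def anonymize(records):
    """Replace each student's name with a randomized ID.

    Hypothetical sketch: records sharing a name get the same random ID
    (so one student's scores stay linked), but the name itself is dropped
    and no name-to-ID key is returned with the data.
    """
    id_map = {}
    out = []
    for rec in records:
        name = rec["name"]
        if name not in id_map:
            id_map[name] = secrets.token_hex(4)  # randomized student number
        anon = dict(rec)
        anon["student_id"] = id_map[name]
        del anon["name"]  # no link to student names in the submitted data
        out.append(anon)
    return out

# Made-up sample rows standing in for unit assessment scores
rows = [
    {"name": "Ana", "unit": 1, "score": 88},
    {"name": "Ana", "unit": 2, "score": 91},
    {"name": "Ben", "unit": 1, "score": 75},
]
print(anonymize(rows))
```

A real pipeline would also need to manage the ID map securely on the school side, since whoever holds it can re-identify students.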
The Results: Test Data
More than 3,200 students participated in the first year of the study. Students who were taught science supported by technology scored significantly higher on the end-of-unit tests. When those results were disaggregated, students in the technology group scored higher in every student category than their peers in the control group, as shown in Figure 1.
Figure 1: Students who were taught science supported by technology scored significantly higher on the end-of-unit tests than those in the control group.
Figure 2 shows a comparison of unit test scores and state Criterion Referenced Test (CRT) scores for eighth-grade science. This chart indicates a reduction in the achievement gaps on the unit tests for all student groups except American Indians. Negative numbers show how far below the mean a given student group performs compared to the mean for all students in the state. State CRT data are based on all eighth-graders in the state, and unit test data are based on the experimental group within the 3,200 Cohort I students only. Of particular interest is the group scoring above the mean on the unit test and well below the mean on the state CRT.
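The gap measure behind Figure 2 can be sketched in a few lines. The numbers below are invented for illustration only; the point is the arithmetic: each student group's mean score is compared with the mean for all students, and a negative result shows how far below that overall mean the group performs.

```python
# Hypothetical group means (the real figures are in the article's Figure 2)
overall_mean = 62.0  # mean score for all students
group_means = {
    "Hispanic": 55.5,
    "special_ed": 48.0,
}

# Negative values indicate performance below the mean for all students
gaps = {g: round(m - overall_mean, 1) for g, m in group_means.items()}
print(gaps)  # {'Hispanic': -6.5, 'special_ed': -14.0}
```

Computing the same gap on both the unit tests and the state CRT, as the researchers did, lets the two assessments be compared on a common scale even though their raw score ranges differ.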
Figure 2: A comparison of unit test scores and state CRT scores for eighth-grade science reveals that there was a reduction in the achievement gaps on the unit tests for all student groups except American Indians.
The Results: Interviews
To further investigate the effects of technology integration on student achievement, the researchers employed an additional assessment strategy: one-on-one interviews. These interviews are one of the best methods for finding out what students know and understand. A 10-question interview protocol was developed for each unit, and teachers randomly selected three students from each class to participate. A total of 404 students were interviewed, with 49 percent selected from the control group and 51 percent from the technology group. While overall results were not significant, some very powerful results emerged when the data were disaggregated by student group.
Girls and boys. The mean interview score for girls in the technology group was about the same as for girls in the control group. Boys' interview scores in the control group were significantly below the girls' scores, but boys' interview scores in the technology group were significantly higher than the girls' scores. A possible explanation for this outcome could be that when males engaged in learning science concepts through integrated technology, they were better able to visualize and make connections at a deeper level of understanding. This allowed them to verbalize their responses to the interview questions better than their peers in the non-technology group could. Girls, on the other hand, were able to verbalize their responses at about the same performance level regardless of their group membership.
Special-education students. One of our most exciting results occurred with special-education students. Interview scores for special-education students in the non-technology group were more than 15 points below the mean for non-special-education students, while the mean score for special-education students in the technology group was six points above that of non-special-education students. The interview method illuminated what conventional testing sometimes masks: special-education students often have learning disabilities related to reading and writing, so pencil-and-paper tests can be much more challenging for them. Interviews, by contrast, tend to level the playing field, allowing these students to communicate what they know without having to write down their answers. With that barrier removed, it became clear that technology made a significant difference in what these students learned, understood, and verbalized.
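The disaggregation described in the interview results amounts to grouping scores by student category and condition, then comparing cell means. The sketch below uses made-up scores, not the study's data; group labels and values are assumptions chosen only to show the shape of the analysis.

```python
from statistics import mean
from collections import defaultdict

# Hypothetical interview records: (student group, condition, score)
interviews = [
    ("special_ed", "control", 52), ("special_ed", "technology", 74),
    ("special_ed", "control", 50), ("special_ed", "technology", 72),
    ("general", "control", 68),    ("general", "technology", 70),
]

# Collect scores into (group, condition) cells, then take each cell's mean
by_cell = defaultdict(list)
for group, condition, score in interviews:
    by_cell[(group, condition)].append(score)
means = {cell: mean(vals) for cell, vals in by_cell.items()}
print(means)
```

With real data, comparing `means[("special_ed", "control")]` against `means[("special_ed", "technology")]` is what surfaces the kind of subgroup effect the overall (non-significant) result concealed.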
Reflections & Lessons Learned
Although the model had been carefully planned and organized, a number of hurdles surfaced. Because of the funding structure, the technology belonged to the school districts, but everyone agreed at the outset that the teacher participants would be stewards of the equipment as long as they remained in the district. However, one district collected all the computers from their teachers and held them for a month to load the software. This included wiping the hard drives clean and re-installing a lower-end operating system for compatibility with the district network. The district IT staff held the CDs that came with the computer, and the teachers were not given administrative rights to the laptops. This situation upset the teachers, and the university found itself caught in the middle.
Another hurdle was the lack of clarity among teachers about what technology integration means and what it looks like in the classroom, despite a great deal of time spent on modeling and facilitating best practices during professional development. This resulted in diverse levels of technology integration in the units developed by the teachers. One teacher used a PowerPoint presentation for both the technology and non-technology groups, with the only difference being that one was in color and the other was in black and white. To resolve this problem, the researchers developed a four-level rubric describing the criteria for technology integration. Each teacher evaluated the technology component in each unit they taught. Use of the rubric allowed the researchers to examine the effects of four different levels of technology integration on student achievement.
Validity of assessment was an important consideration in developing unit assessments. Researchers developed a standardized test blueprint for teacher-designed end-of-unit tests. Teachers were to create grade-level and standards-based test items at three cognitive levels: knowledge, conceptual understanding, and analysis/synthesis. The test development component raised our awareness of a general lack of assessment knowledge among many of the teachers. We set aside additional time to discuss and practice topics such as levels of the content and cognitive domains, item development, and threats to item validity. Teachers were very interested in this assessment training; one teacher even invited us to present additional assessment training for all the teachers at her school.
Cohort II teachers completed the project in May, and data analysis is currently underway. Classroom observations indicate that most teachers are using the technology as a regular part of their curriculum delivery. One teacher stated, "I'm glad I'm finished teaching my units because now I don't have to deprive the control group kids anymore. I can't imagine teaching without technology; I use it every day."
This type of research-based project will likely be expected by state and federal funding agencies in the near future. Development of the 2006-07 competitive request for application (RFA) is underway with a majority of the funding dedicated to classroom-level projects with strong testing and assessment components.
Mark Knudson, M.Ed., is the educational technologist for the Nevada Department of Education and the department's education liaison to the Nevada Commission on Educational Technology. He is also on the Board of Directors of the State Educational Technology Directors Association (SETDA). He manages Nevada's state and federal technology funding, including the E-Rate program. E-mail: firstname.lastname@example.org
Pamela Cantrell directs the Raggio Research Center for STEM (Science, Technology, Engineering & Mathematics) Education in the College of Education, University of Nevada, Reno. She is also an assistant professor of science education. She works directly with teachers in numerous professional development projects designed to improve science education and increase student achievement. E-mail: email@example.com
This article originally appeared in the 07/01/2005 issue of THE Journal.