UC Davis and Digital Promise Report on How To Run Better Ed Tech Pilots
By Dian Schaffhauser
12/10/15
Before the education technology purchase comes the pilot. But how schools and districts run their pilots varies widely, and school administrators and IT leaders may have different ideas about the pilot process, what's worth measuring and the overall goals.
The University of California, Davis School of Education and Digital Promise recently ran a research project to better understand how districts conduct their pilots of ed tech products and the challenges they face in doing so. Digital Promise is a congressionally authorized nonprofit focused on uncovering innovation in education. The results of the pilot research are contained in a lengthy report and online resources that offer best practices and guidance to help districts make optimal procurement decisions.
Researchers enlisted six school districts from across the country, including districts in Pennsylvania, Idaho and Washington, D.C., to participate in the "Pilot-to-Purchase Project."
Data collection for the study was done through interviews, focus groups and surveys among district and school administrators, teachers and students. The districts also provided documentation of their pilot processes. In addition, Digital Promise collected survey data from 1,200 students in grades 4-12 to evaluate student participation in school technology decisions and to understand how they respond to ed tech.
One common finding across districts was the need for "positive communication and relationships between all involved stakeholders." That includes getting feedback from students and teachers, two groups who are rarely part of the formal pilot process.
According to the researchers, principals frequently ask teachers for their opinions about products informally. A more effective approach is to create "formal opportunities" for teachers to give feedback through surveys, interviews, focus groups or team meetings. Districts also need to make sure teachers receive support during the pilot "to enable a strong implementation." As the director of technology in the Pennsylvania district explained, "I think every district needs to make sure they give complete support to anyone piloting a new product, meaning that when they have an issue, it should be addressed immediately. That helps the project go smoothly so you know if you're actually evaluating what you think you are."
Student feedback carried varying weight in technology decision-making. As one student from the Idaho district put it, "Our opinions are kind of key in it because we are the ones using it." The survey found that non-white students reported being more excited about and more motivated by the ed tech products they were piloting than white students did. Higher-performing students were more likely to report that the products made them better problem solvers.
In addition, students who experienced technical problems reported that the ed tech was less beneficial to them, while those who considered their teachers more knowledgeable about the program found it more beneficial. The researchers advised schools to pull together "a representative group of students with a variety of backgrounds" when gathering input from pilot programs.
While all of the districts studied used data during the pilot to gauge product effectiveness, the types of data emphasized and the kinds of analysis performed varied greatly from district to district. Some districts conducted quantitative analyses of the impact on student learning; others sought qualitative feedback from users about product effectiveness. As an example, the report profiled a pilot in D.C. in which the district worked with the developer of the product under consideration to generate weekly customized usage reports by school. The district also worked with the school system's own office of data and strategy, which analyzed the developer's data alongside Scholastic Reading Inventory (SRI) scores to identify teacher and student usage and changes in student Lexile scores. A summer analysis of spring pilot data determined that more usage would be required to "detect" differences in learning outcomes, so the district continued its pilot into the fall.
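To make the D.C. example concrete, here is a minimal sketch of that kind of usage-versus-growth analysis, assuming hypothetical CSV exports: a usage.csv of weekly per-student minutes drawn from the developer's reports, and an sri.csv of pre- and post-pilot Lexile scores. The file names, column names and usage threshold are illustrative assumptions, not details from the report.

```python
# Hypothetical sketch: join product-usage data with SRI scores to
# compare Lexile growth for high- versus low-usage students.
# File names, columns and the 300-minute threshold are assumptions,
# not details taken from the Digital Promise report.
import pandas as pd

usage = pd.read_csv("usage.csv")  # columns: student_id, school, week, minutes
sri = pd.read_csv("sri.csv")      # columns: student_id, lexile_pre, lexile_post

# Total usage per student across the pilot period.
totals = usage.groupby("student_id", as_index=False)["minutes"].sum()

df = totals.merge(sri, on="student_id")
df["lexile_gain"] = df["lexile_post"] - df["lexile_pre"]

# Split students at an assumed usage threshold and compare average gains.
df["high_usage"] = df["minutes"] >= 300
print(df.groupby("high_usage")["lexile_gain"].agg(["count", "mean"]))
```

A district's data office would likely add controls for school and grade, but this join-and-compare pattern is the core of the question the D.C. analysis asked: whether students who used the product more showed larger Lexile gains.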
The report also recommended:
- Designing a structured pilot process;
- Setting specific pilot goals;
- Matching the pilot to timelines connected to academic, budgeting and purchasing calendars; and
- Collaborating with developers and researchers to help them understand the district and to provide a forum for giving feedback and receiving support along the way.
The report, "Pilot-to-Purchase Project," is  available on  the Digital Promise Web site. Digital Promise has also published a checklist and a goals  worksheet to help districts plan their pilots.