How do I use all this data?
An eight-step checklist and questions for making use of various kinds of education data.
It’s just after spring break. The annual spring cram session for end-of-grade or end-of-course testing begins. The pressure this year, thanks to the increasingly strict standards of No Child Left Behind, is greater than ever. Here are a few tips — derived from a thorough review of the latest research on data-driven decision making and our personal work with more than 1,000 school executives over the past year — to help you through crunch time.
School executives are drowning in data, yet many struggle to find useful ways to make sense of it all. Victoria Bernhardt suggests there are four major types of data: achievement data, demographic data, perception data, and school process data. Most school executives focus on cross-referencing achievement and demographic data to analyze how different subgroups compare in achievement. Typically, school executives have a stronger working knowledge of achievement and demographic data than of the other types. Yet the two remaining sources — perception data and school process data — reveal rich information that can help schools succeed.
Perception data include parent satisfaction surveys, student satisfaction surveys, and North Carolina’s Teacher Working Conditions Survey. School process data take the form of information on “how we are teaching a particular content area,” discipline data, IEP goals, or LEP goals. Achievement and demographic data provide end results, whereas perception and process data focus on the internal workings of a school. To help busy school executives determine what to focus upon, we offer the following eight-step checklist.
The eight-step checklist
- Prioritize questions. What questions need to be answered? Are they actionable (i.e., can you do something with the answer or is it just nice to know)? A word about priorities is in order here. Remember Pareto’s principle: 80 percent of your results will come from 20 percent of your focus.
- End results and metrics. What will success look like? If, for instance, you want to improve the math achievement of students who are LEP, then you need to know your desired end result and the measurements you will need to determine whether you are successful. Making these determinations before beginning your analysis gives you a goal to work toward. Your question, projected end result, and metrics will form the foundation for your analysis.
- Assumptions and benchmarks. What are you taking for granted? Continuing with your question about low-achieving LEP students, you decide to focus upon improving attendance for these students. The assumption is that if you are able to keep the students in school, they will achieve at a higher level. Explicitly writing down your assumptions at the beginning allows you to review and analyze them as you move through data from the entire school year. The second part of this step is benchmarking (the current level of performance). If you wish to improve attendance for low-performing LEP students, your current benchmark might be fourteen days’ attendance in each twenty-day period. That benchmark is what you will focus upon, looking for growth during each twenty-day period. You also want to set the baseline for your end result (for instance, math achievement). You want to create intermediate benchmarks (for instance, monthly benchmarks for both attendance and math achievement) as well as an “end” benchmark.
- Plan and understand implications. How will you get the results you want? As you consider aligning resources (time, money, people, and space), there will likely be some tradeoffs you must make. These tradeoffs may involve any aspect of the school — students, teachers/staff, parents/community, operations, finances, and curriculum/instruction. Take some time as you develop your intermediate benchmarks to think through the implications of what you decide to do. For example, if you stipulate that the ESL teacher will monitor the attendance of the LEP students, one implication will be that the ESL teacher will not be able to do some of the other tasks and responsibilities that he/she did earlier. Either find a way to give him/her additional resources (stipend, clerical assistance, etc.) or decide whether the ESL teacher is the best person to monitor attendance at all. You may find, for example, that this task could be completed by a clerical assistant.
- Implement and monitor the plan. Once you have developed the plan, implement the action items that constitute it. Now that you know what will be done, by whom, when, and with what metrics, a critical piece of your leadership is monitoring what happens.
- Analyze and interpret. Are you on or off course? By monitoring intermediate benchmarks, you can halt your plan if you are completely off target or tailor it if you need to change your focus. Referring again to the above example, you may find that attendance for LEP students is improving but their math achievement is stagnant. You learn this by checking the intermediate benchmarks and the assumptions that go along with your plan. Now you must make modifications. Have you devoted enough time to monitoring? Are you using the wrong metrics?
- Autopsy. What did you learn? Sometimes also referred to as a plus/delta or post mortem, this step usually happens at the end of the project. Conducting an autopsy with those who are involved in the plan allows you to determine what has and has not worked in terms of time, morale, emotional investment or disengagement, and so on. These “soft” measures, in conjunction with the “hard” numbers that you are analyzing, help give you a complete picture of your analysis.
- Repeat the steps. This is an ongoing process, a discipline you must cultivate both for yourself and for your faculty. As accountability measures increase, school executives must hone this method of problem analysis, plan execution, monitoring, and interpretation.
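For executives (or the staff who support them) who track attendance benchmarks in a simple script rather than by hand, the intermediate-benchmark logic from steps 3 and 6 above can be sketched as follows. This is only an illustration: the fourteen-of-twenty-days target comes from the attendance example in step 3, while the function names and the sample record are hypothetical assumptions, not a prescribed tool.

```python
# Sketch: checking intermediate attendance benchmarks per twenty-day period.
# The 14-of-20-days target is taken from the step 3 example; everything
# else (names, sample data) is hypothetical.

PERIOD_LENGTH = 20          # school days per monitoring period
ATTENDANCE_BENCHMARK = 14   # minimum days present per period (the baseline)

def days_present_per_period(record, period_length=PERIOD_LENGTH):
    """Group a day-by-day attendance record (True = present) into
    twenty-day periods and count the days present in each."""
    return [
        sum(record[i:i + period_length])
        for i in range(0, len(record), period_length)
    ]

def flag_off_track(record, benchmark=ATTENDANCE_BENCHMARK):
    """Return the (0-indexed) periods in which a student fell below
    the attendance benchmark."""
    return [period for period, days in enumerate(days_present_per_period(record))
            if days < benchmark]

# Hypothetical 40-day record for one student: present 13 days in the
# first period, 16 in the second.
record = [True] * 13 + [False] * 7 + [True] * 16 + [False] * 4
print(days_present_per_period(record))  # [13, 16]
print(flag_off_track(record))           # [0] -- below benchmark in period 1
```

Flagging each period separately, rather than waiting for an end-of-year total, is what makes the monitoring in step 6 possible: a student who misses the intermediate benchmark in one period surfaces immediately, so the plan can be tailored or halted before the end result is fixed.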
Questioning the data
Two publicly available data sources are the Department of Public Instruction’s website on the ABCs and the North Carolina Report Card, maintained by the Governor’s office. Dr. Ken Jenkins has outlined a list of questions for instructional improvement that you can use in analyzing data from these two sources.
Some questions you can use when looking at your ABCs data include:
- Where are your widest achievement gaps?
- How persistent have these gaps been?
- Are there dramatic differences from one year to the next? If so, what might explain the differences?
- Are there gender differences worth noting?
- Is there any relationship you can determine between the population of free and/or reduced-price lunch students and general student achievement?
- For high schools, are there differences between major curriculum areas (math, science, social sciences, etc.) worth noting?
- What are the bright spots in these data?
Sample questions for the North Carolina Report Card include:
- Does class size in your school relate to student success?
- Are there grade level differences that might shed light on any achievement problems you might have?
- Looking at your average attendance rate, who are the kids who are absent? Are there any persistent patterns or relationships to be explored here?
- What are the general qualifications of the teachers teaching the poorest performing students? Will they meet NCLB standards for being “highly qualified”?
- What are the teaching assignments for the teachers who are best prepared to teach?
- Are there sufficient learning resources (books, library resources, instructional technology, etc.) to elevate student learning?
- What are your bright spots? What might explain those?