SUMMARIZING MY DATA SETS
MONITORING STUDENT PROGRESS
Throughout my action research, I aimed to consistently and accurately measure and monitor my students’ reading comprehension abilities. I used three specific data collection methods to measure my students’ growth throughout my capstone journey: Guided Reading Benchmarks, Comprehension Summative Assessments, and Running Records/Anecdotal Notes.
DATA SET ONE: GUIDED READING BENCHMARK
Using the Fountas & Pinnell Benchmark Assessment System, I tested my students’ reading levels before and after my action research. This benchmark assessment measured their reading level using an alphabetical system (A-Z). For example, students at a level A are reading significantly less complicated texts than students at a level J. Moreover, the benchmark assessment measured students’ accuracy, fluency, and comprehension when reading a specific leveled text. During guided reading, the goal was for students to be reading texts at their instructional level. These texts were neither too easy nor too difficult for the reader.
To determine each student’s instructional level, I assessed my students with the benchmark system. My students were given a leveled book to read that they had never seen before. As they read the text to me, I checked how accurately they were reading the words as well as assessing their ability to read smoothly and with expression. When they finished reading, I asked them a set of questions to check their understanding of what they read. Their responses to the questions were then scored between zero and seven or zero and ten, depending on the level of the text. The link below is an example of a benchmark recording form.
I chose to implement the Fountas & Pinnell Benchmark Assessment System because it was supported by my school and district. This leveling system was widely used across the grade levels in my school, allowing student progress to be tracked from grade to grade. When concerning scores arose, I could reach out to my students’ previous kindergarten teachers, my first grade team, my literacy coach, and special education teachers to discuss the data. Since all of these educators understood how to analyze the benchmark assessment, collaboration with them was straightforward. Moreover, because this benchmark assessment was used throughout the entire district, it was easy for me to communicate with my CADRE Associate when it came time to make important decisions for my students.
During the course of my study, I used these benchmark reading levels to place my students into appropriate guided reading groups. Students reading at a similar level were placed into groups together, as these students had similar needs. In doing so, students could read texts that were neither too easy nor too difficult for them. My students in Group 1 (reading below grade level expectations) could then experience success. The texts they were reading, though below grade level norms, were of the appropriate difficulty for them as readers. They could more easily decode the words on the page so that we could spend more time understanding the text. In contrast, students in Group 4 were reading well above first grade expectations. For this group of students, I found texts that challenged them, as on-level texts no longer held their interest. With each student in a group of peers at similar reading levels, all of my students could learn within their instructional level.
DATA SET TWO: CSA
Comprehension Common Summative Assessments (CSAs) were tests provided by my school district. CSAs were administered to first grade students at least once every reading unit. The students were asked to read two different passages. The passages increased in difficulty with each subsequent unit and were intended to be readable by students who met grade level expectations. Once the students had independently read the material, they had to answer a set of questions about the content. Some questions were multiple choice, some were fill-in-the-blank, and some were short answer. A student could earn a total of 16 points on each CSA. Below is an example of a first grade Comprehension CSA in my district.
Comprehension CSAs were selected as a data point for three main reasons: they targeted comprehension, they connected to state standards, and they were an appropriate length and level for first grade students. First, this data collection method was chosen because CSAs are specifically designed to measure students’ ability to understand what they read. This form of assessment related directly to the purpose of my study. Additionally, CSAs provided explicit connections between each question and state reading standards. By the end of first grade, the goal was for students to meet all of the Nebraska reading standards. As I graded my students’ CSAs, I used the provided answer key to link the questions that they answered incorrectly to the state reading standard that they were not fully meeting. By making these connections on their pre-CSA, I was able to plan my targeted questions during my action research around the individual needs of my students. Finally, CSAs were selected as a data collection method because of their appropriate length and reading level. If I wanted a true measure of my students’ comprehension abilities, I needed to test their understanding with text that was appropriate for their age range. Passages that were too long or too difficult could have hindered my students. Lengthy texts might have decreased their reading stamina. Difficult texts might have caused them to read less accurately and, thus, not understand as much of the vocabulary. Because CSAs were designed for first graders in my school district, they were a suitable length and level for my students.
These Comprehension CSAs helped me as I monitored student progress throughout my action research. As I looked over the pre-CSAs, I was able to see which specific comprehension skills my students were missing, based on the state standards. When it came time to plan my questions, I used the pre-CSA data to target the needs of my students. For example, if Student A incorrectly answered questions about making predictions and making connections, I knew that she needed further practice on those skills. As I planned questions for Student A, I started by emphasizing the prediction skill. When Student A began to answer prediction questions correctly, I moved on and focused her questions primarily on making connections. I emphasized one skill at a time until success was achieved.
DATA SET THREE: RUNNING RECORDS & ANECDOTAL NOTES
I wanted to gauge my students’ comprehension abilities at their instructional level each time we met for guided reading. This required me to informally assess them daily throughout the duration of my study. Applying the research of Fountas and Pinnell, I devised within, about, and beyond questions for each of my guided reading groups based on the texts we were reading as well as the whole group comprehension target skill. I administered these questions on both Day 1 and Day 2 of the guided reading model.
During every Day 1 of our guided reading model, I asked each of the five students in my reading groups one question. I graded their individual responses using a comprehension scoring key. A score of 0-3 reflected an unsatisfactory understanding of the text, a score of 4 reflected a limited understanding, a score of 5 reflected a satisfactory understanding, and a score of 6-7 reflected an excellent understanding. Below is a link to a Day 1 and Day 2 guided reading lesson plan template.
During a Day 2 model of guided reading, I assessed my students individually by administering a running record, an informal assessment in which a student reread a text from the day prior. Similar to a benchmark, as they read, I noted the student’s accuracy, fluency, and comprehension skills. After assessing my students’ accuracy and fluency as they read a familiar text, I spent ample time checking for understanding. I asked them a within the text question, a beyond the text question, and an about the text question. Using the same scoring key as above, I graded their responses based on their comprehension of the text. Below is a link to a running record form.
At the end of every week, I averaged each student’s comprehension scores from both their Day 1 responses and their running record responses. I used this average to determine an overall weekly comprehension score between zero and seven. It was my goal for students to score an average between five and seven each week, as these scores demonstrated a satisfactory to excellent understanding of the text.
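For readers who track scores in a spreadsheet or script, the weekly calculation above can be sketched in code. This is a hypothetical illustration only; the function names and the sample scores are invented, and the band cutoffs simply restate the scoring key described earlier (0-3 unsatisfactory, 4 limited, 5 satisfactory, 6-7 excellent).

```python
def comprehension_band(score):
    """Map a 0-7 comprehension score to its descriptive band
    from the scoring key described above."""
    if score <= 3:
        return "unsatisfactory"
    if score == 4:
        return "limited"
    if score == 5:
        return "satisfactory"
    return "excellent"  # 6 or 7

def weekly_average(day1_scores, running_record_scores):
    """Average all of a student's Day 1 and running record scores
    for the week into one overall 0-7 comprehension score."""
    all_scores = day1_scores + running_record_scores
    return sum(all_scores) / len(all_scores)

# Illustrative week for one student (made-up values):
avg = weekly_average([5, 6], [6, 7, 5])   # 5.8
met_goal = 5 <= avg <= 7                  # goal: satisfactory to excellent
```

In this made-up example, the student's weekly average of 5.8 falls within the five-to-seven goal range.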
This method of collecting data was chosen for my study because of its ongoing benefits for my students. Running records and anecdotal notes gave me information that singular tests could not. This form of assessment provided me with data every day, whereas benchmarks occurred far less frequently. The data collected from running records and anecdotal notes allowed me to make quick, necessary decisions about my instruction to aid my students in their growth.
During my study, these weekly running records and anecdotal notes were crucial in guiding my instruction. I used my students' comprehension scores to give them immediate feedback on their understanding of the text. For example, if Student C struggled to correctly answer a question about putting a story in chronological order, I could immediately reteach sequence of events to that student. If I noticed that an entire group needed improvement in a specific target skill, I could devote time during their group to revisit that particular skill.