DATA ANALYSIS
CRITICALLY EXAMINING MY DATA
As I analyzed my data, it became apparent that my purpose statement was supported: the use of targeted questions during guided reading did increase student achievement in reading comprehension. All 20 of my students showed growth when comparing their comprehension abilities from before and after my study.
DATA SET ONE: GUIDED READING BENCHMARKS
The graph compares my students’ reading levels before (pre) and after (post) my action research. All 20 of my students showed growth from their pre-benchmark to their post-benchmark. One student grew two levels, nine students grew three levels, nine students grew four levels, and one student grew five levels. Prior to starting my action research, my students were reading between level B and level I. After my action research, my students were reading between level D and level L.
In addition to increasing their reading comprehension, my goal was for all of my students to be reading on or above grade level expectations (level H) by the end of my action research. At the start of my study, nine of my students (45% of my class) were reading on or above grade level. By the end of my study, 17 students (85% of my class) were reading on or above grade level.
The increase in guided reading levels also demonstrated my students’ growth in reading comprehension. To move up a guided reading level during a benchmark, students must score at least satisfactory on the comprehension portion of the assessment. As my students’ reading comprehension scores and abilities advanced, their guided reading levels likewise increased.
As I critically examined the results from my benchmark assessments, the gap between my lowest reader and my highest reader was evident. My students ranged between a level B and a level I prior to my study. After my action research, this range grew to level D through level L, meaning the gap widened over the course of the study. Although all of my students showed improvement, they grew at a variety of paces. All students learned differently. Some students adapted to the new questioning routines quickly, while others were initially more timid about answering questions every day. Over the course of my study, the apprehensive students began to gain confidence, as answering questions every day became the new norm.
It was evident that my study greatly impacted my students. Though all of my students showed growth, some students showed more growth than others. One student (Student 1C) grew two levels, nine students grew three levels, nine students grew four levels, and one student (Student 2B) grew five levels. Student 1C and Student 2B stood out to me in particular as they had the least and the most growth out of all of the students in my classroom.
Student 1C was in Group 1 for guided reading. Before my study began, she was reading at a level B. At the end of my study, she had progressed to a level D. She remained below grade level expectations, but she did advance two levels. Throughout my action research, Student 1C was receiving additional interventions and support. Furthermore, just as my study began, Student 1C started attending an additional guided reading group. Not only was she meeting with me for guided reading every day, but she was also meeting with another teacher for a second small group reading time. Although she grew only two levels during my action research, this was the most she had grown in any six-week period since the beginning of the school year. Prior to my study, Student 1C had stayed at a consistent level B for two quarters, so advancing two levels was an accomplishment for her. Perhaps if I had incorporated targeted questioning from the beginning of the year, she would have been reading at a higher level by the end of the third quarter.
Student 2B was in Group 2 for guided reading. Before my action research began, he was reading at a level D, which was below grade level expectations. By the end of my study, he was reading at a level I, surpassing grade level norms. Student 2B had a very supportive home life. In talking to his mother about my action plan at the beginning of the third quarter, she agreed to ask Student 2B comprehension questions at home as he read to her every weeknight. The additional encouragement and practice at home helped advance Student 2B’s comprehension abilities at school.
DATA SET TWO: CSA
The graph above compares my students’ scores on the Comprehension Common Summative Assessment (CSA) given before my action research (pre) with the CSA given at the end of my action research (post). The y-axis reflects the students’ scores out of the total 16 points. The x-axis reflects the five students (labeled A-E) in each of my four guided reading groups. Out of the 20 students in my classroom, 17 students showed growth or received the same score on both assessments. Three students (Group 2 Student A, Group 3 Student A, and Group 4 Student C) scored one point lower on their post-CSA than on their pre-CSA. The goal for my students was to show growth on the post-CSA, and the majority of my students reached this goal.
As I critically examined the data from the CSAs, the substantial growth of the Group 1 students was most apparent. Out of my four guided reading groups, these students showed the most improvement. All five of the Group 1 students advanced their scores from the pre-CSA to the post-CSA. On their pre-CSA, their scores were below grade level norms. However, on their post-CSA, their scores met or surpassed grade level expectations. Student 1A grew six points, Student 1B grew two points, Student 1C grew seven points, Student 1D grew six points, and Student 1E grew seven points.
There are several possible reasons for this increase in scores. First, the students in Group 1 became accustomed to independently responding to questions. Prior to starting my study, I did not ask each student a question every day; throughout my action research, I asked each of my students at least one targeted question daily, and this became a new routine during our guided reading time. Second, as my study continued, their confidence increased. They knew that I was going to call on them after they read, which compelled them to truly focus on the text.
This data set further demonstrated how impactful the action research was on my students. In particular, the students in Group 1 and Group 2 showed more growth than I anticipated. By answering questions about a new text every day in their guided reading groups, they were prepared to independently answer questions on the post-CSA. Three students (Group 2 Student A, Group 3 Student A, and Group 4 Student C), however, did not show growth; each scored one point lower than their pre-test score.
Student A in Group 2 (Student 2A) did not show growth. Student 2A scored a 15 on the pre-CSA and a 14 on the post-CSA. Though these scores fall within grade level norms, he still did not advance. This could be because he was absent on the day we took the post-CSA. Instead of taking the post-CSA per our normal classroom routine, Student 2A had to take his assessment during an alternate time. He had a lot of anxiety when it came time to take a test. It was possible that this modification in routine caused him additional stress as he was testing.
Student A in Group 3 (Student 3A) also dropped one point from pre-CSA to post-CSA. He scored a 16 (100%) on the pre-CSA and a 15 on the post-CSA, incorrectly answering the very first question on the assessment. Student 3A was on a behavior plan. Minutes before taking the post-CSA, his behaviors escalated. As per his plan, I asked him to take a brief break to calm down and get in the right mindset for learning. When he returned from his break, he had calmed somewhat, but he was not entirely focused. As he began the assessment, he continued to settle and channel his energy toward the test. Because of his behaviors prior to taking the test, he could have missed the first question because he was not quite in a calm, ready-to-learn state when he began the assessment.
Finally, Student C in Group 4 (Student 4C) did not display growth from pre to post-CSA. This student missed a question about synonyms. Throughout the school year, Student 4C had been confusing plural nouns and synonyms. For example, if he had been asked to provide a synonym for the word book, he would respond with books. Instead, he should have responded with text or story. On the post-CSA, the question he missed asked him to write a word that means the same as friend. Instead of writing the words buddy or pal, he wrote the word friends. Though this was consistent with his past errors, it showed me that I still needed to work with him on identifying synonyms.
DATA SET THREE: RUNNING RECORDS & ANECDOTAL NOTES
The graph above displays the weekly comprehension scores for two of my guided reading groups, Group 1 and Group 2. The x-axis displays each week of my action research and the y-axis displays the students’ scores from zero to seven. Each student in Group 1 and Group 2 can be identified by a specific color. No students scored a zero. If a student’s color does not appear (such as Student E in Group 2 during Week 1), it means they were not present for their guided reading group that particular week.
Similar to the other data sets, the students in Group 1 and Group 2 showed the most improvement on their weekly running record and anecdotal note scores. Between both groups, only one student demonstrated satisfactory understanding of the text during the first week of my study. During the final week of my research, all 10 students scored satisfactory or higher. Throughout the weeks in between, some students’ scores fluctuated. Nonetheless, they all made gains over time.
As I analyzed this data set, it was clear that the majority of the students in Group 1 experienced a slight dip in scores during Week 3. During this week, the comprehension target skill was identifying the author's purpose. Students were asked to read a text and determine why the author wrote it. Instead of reading fiction stories that week, I had selected some informational texts for this group to read. The switch from fantasy, fable, and realistic fiction stories to informational texts proved to be a challenge for most of the students in Group 1.
Furthermore, during my examination of this set, the data showed that my students who were regularly absent did not finish with scores as high as their peers’. Student C in Group 2 (Student 2C) was absent for the entirety of Week 2. When he returned to school for Week 3, he showed growth for one week and then consistently scored fives for Weeks 4, 5, and 6. Similarly, Student E in Group 1 (Student 1E) was absent during Week 3. During this week, he was asked only one question and received a score of 2, as he had little practice on our comprehension target skill. For the remaining weeks, however, he was present for reading groups and steadily grew his comprehension skills.
The graphs above highlight some of my students from Group 1 and Group 2. The graph on the left displays the regularly absent students (Student 2C and Student 1E). The graph on the right displays two students from Group 1 and Group 2 (Student 1B and Student 2D) who demonstrated tremendous growth throughout my study. Student 1B and Student 2D were present for every day of the action research.
ANALYZING MY PRACTICES & INTERACTIONS
If I wanted my students to be successful at the end of my action research, I needed to ensure that my practices and interactions with them were effective. Based on the data, which demonstrated my students’ growth throughout this study, my practices and interactions were effective.
PRACTICES
Each week, I took the time to purposefully plan the questions that I wanted to ask each of my 20 first grade students. This level of thoughtful, precise preparation provided my students with the opportunity to respond to questions that were appropriate for their reading level. These questions were also tied to my students’ areas for improvement. This allowed my students to receive additional practice in their areas of weakness, which ultimately promoted their overall growth.
INTERACTIONS
The achievement of my students was also dependent upon my interactions with them. It was inevitable that during my study, students would incorrectly answer questions. How I chose to respond to these incorrect answers was crucial. Negative interactions with a student could have potentially caused that student to fear answering questions for the remainder of my study. Thus, I chose to respond to students’ wrong answers in a positive, supportive manner. I taught my students to be brave and consistently reminded them that mistakes (and incorrect answers) could help us learn and grow. My analysis of the anecdotal notes showed that creating this positive classroom community improved my students’ confidence when it came to answering questions.
TRIANGULATING THE DATA
In looking at all three data sets, a common thread emerged: my students showed growth in reading comprehension. This confirmed that my hypothesis was true; the use of targeted questions during guided reading increased student achievement in reading comprehension. The first data set, a pre and post Fountas and Pinnell Benchmark Assessment, revealed that my students were able to understand more complex texts. All 20 of my students increased their guided reading levels over the course of my study. The second data set, the Comprehension Common Summative Assessments, revealed my students' ability to independently read a text and then demonstrate their understanding by answering questions. The third and final data set, the Weekly Running Records and Anecdotal Notes, displayed my students’ comprehension achievements on a weekly basis. Each week, they were asked to answer multiple questions about the texts we were reading. In doing so, they began to focus on the texts, read for understanding, and articulate their responses to questions in both oral and written form. In this data set, all students also displayed growth. Because all three of my data sets demonstrated student growth in reading comprehension, my purpose statement stands true.
Not only did these data points confirm my purpose statement, but the data from the Guided Reading Benchmark Assessments was also corroborated by the CSA scores and the running records and anecdotal notes. The three different data points displayed the same strengths and areas for improvement for each of my students. Throughout my study, I used data from running records and anecdotal notes to identify the comprehension skills that my students lacked, and I then provided additional practice in those skills. With this additional practice, my students began to show growth in the targeted areas. Furthermore, these teaching points were shown to be effective when the students were assessed at the end of my action research: on their concluding benchmark assessments and CSAs, my students improved in their areas of weakness. The weekly running records and daily anecdotal notes allowed me to monitor my students’ progress between the other two data points and confirmed the success they demonstrated there.