Using Common Formative Assessments to Check for Understanding

Authors: Douglas Fisher and Nancy Frey
Date: 2014
Source: Checking for Understanding: Formative Assessment Techniques for Your Classroom
Publisher: Association for Supervision and Curriculum Development

When teachers in course-alike groups or grade-level teams meet on a regular basis to look at student work, checking for understanding becomes a systemwide process. Like the authors of Collaborative Analysis of Student Work (Langer, Colton, & Goff, 2003) and Common Formative Assessments (Ainsworth & Viegut, 2006), in this chapter we explore the ways that teacher teams can use assessment information to guide their instructional interventions. We also describe ways in which teachers can use common formative assessments to increase their expectations, tune the curriculum, and inform instruction.

Using Data to Improve Student Achievement

There are a number of strategies that can be used to improve student achievement and close the achievement gap, including hiring veteran teachers, purchasing new curricula, providing after-school tutoring, and so on. These are all likely to have positive effects on the achievement of students who are performing at less than acceptable levels. Our experience, however, suggests that it is the teacher and what the teacher does that make the difference for students (Frey & Fisher, 2006). We know that access to professional development differentiates between teachers who have the knowledge and skills to meet the increasing demands of our diverse student population and those who do not (Joyce & Showers, 2002). We also know that not all professional development is created equal (National Staff Development Council, 2001). Teachers deserve professional development that is engaging, based on current research evidence, aligned with standards, and that provides opportunities for peer engagement.


Understanding this, we have developed and implemented a protocol for examining and aligning content standards, creating common assessments, scoring student work by consensus, and planning changes based on the information gathered in this process. Let’s explore the protocol first and then look at its results in checking for understanding and in closing the achievement gap.

A Protocol for Using Common Assessments

A number of recursive steps can be used to align curriculum, instruction, and assessment such that student learning becomes the focus of professional development and teachers can check for understanding at the grade or department level. A record-keeping tool for this process can be found in Figure 7.1.

Step 1: Pacing Guides

The first step in the process involves gathering teachers with common courses (e.g., 3rd grade, 7th grade English, U.S. history, algebra) to meet and decide on a timeline for the sequence of content instruction. The group of teachers will need access to their content standards to ensure that each standard is addressed in a meaningful way. While this sounds easy, it can be the most difficult part of the protocol. Some teachers may resist standards-aligned instruction; others may have their favorite units or teaching order. Still others may be unfamiliar with their content standards and the expectations of their specific grade level. It is hard to imagine a way to close the achievement gap if students do not have access to instruction that is aligned with the grade-level standards.

Step 2: Instructional Materials and Arrangements

Once pacing guides have been agreed upon, teachers must select instructional materials, strategies, approaches, and arrangements. In many states the materials themselves are selected for teachers, but teachers know that they can use those materials in a variety of ways. In discussions during this step in the protocol, teachers share their evidence-based and effective instructional approaches with one another. In addition, the team may request assistance from a consultant who has more information about instructional strategies and approaches. In this way, the work of the consultant is contextualized in the work of the teacher teams.

FIGURE 7.1 Tools for Implementing the Common Assessment Protocol. Adapted by R. Elwardi and L. Mongrue from Smaller learning communities: Implementing and deepening practice, by D. Oxley, 2005, Portland, OR: Northwest Regional Educational Laboratory.

Step 3: Common Assessments

At predetermined points in the school year, but no less than every six weeks, students should participate in a common assessment of their learning. While there are a number of commercially available tests and assessments, our experience suggests that when groups of teachers create their own common assessments, scores rise faster. Creating an assessment, even an imperfect one, allows groups of teachers to talk about the standards, how the standards might be assessed, where students are performing currently, and what learning needs to take place for students to demonstrate proficiency. In other words, creating common assessments provides teachers with an opportunity to “begin with the end in mind” (Covey, 2004). In addition, common assessments provide students with test format practice, which has been documented to increase performance (Langer, 2001). When students understand the genre of the test, they are likely to do better.

Step 4: Consensus Scoring and Item Analysis

Once all of the students have participated in the common assessment and the results have been tabulated, teachers should meet to discuss the results. The results are presented for the grade or course, not for individual teachers. The results are also disaggregated by significant subpopulations, such as students with disabilities, students who are English language learners, or specific ethnic/racial groups. This allows teachers to identify and discuss achievement gaps and plan interventions.

When considering a specific item, teachers note the number or percentage of students who answered correctly and hypothesize why the students who answered incorrectly did so. They question one another about students’ understandings and misunderstandings and theorize about future instruction, pacing, instructional materials, assessments, and planning.
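To make this tabulation concrete, here is a minimal sketch, in Python, of how a team might compute per-item percent correct and then disaggregate it for a subgroup; the data layout, the sample records, and the subgroup labels are hypothetical illustrations, not part of the authors’ protocol.

```python
# Minimal sketch (hypothetical data layout): per-item percent correct,
# reported for the whole grade level and disaggregated by subgroup.
from collections import defaultdict

# Each record: (student_id, subgroup, item_number, chosen_answer)
responses = [
    ("s01", "English learners", 1, "B"),
    ("s01", "English learners", 2, "C"),
    ("s02", "General education", 1, "A"),
    ("s02", "General education", 2, "C"),
    # ... one row per student per item
]
answer_key = {1: "B", 2: "C"}

def percent_correct(records, key):
    """Return {item_number: percent of records answering that item correctly}."""
    totals, correct = defaultdict(int), defaultdict(int)
    for _, _, item, answer in records:
        totals[item] += 1
        if answer == key[item]:
            correct[item] += 1
    return {item: round(100 * correct[item] / totals[item], 1) for item in totals}

# Results for the grade or course as a whole (not by individual teacher)
print("All students:    ", percent_correct(responses, answer_key))

# Disaggregated results for one significant subpopulation
subgroup = [r for r in responses if r[1] == "English learners"]
print("English learners:", percent_correct(subgroup, answer_key))
```

Item-level percentages like these are only the starting point for the teachers’ hypotheses about why students answered incorrectly; the numbers themselves do not explain the misunderstanding.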


Step 5: Revising Pacing Guides, Reviewing Assessments, Reteaching, and Forming Intervention Groups

As teachers review student work, they note changes that need to be made in the pacing guides, review standards for clarification of the content, and plan for reteaching opportunities. Teachers also discuss the implications that specific instructional materials have for students’ learning and make recommendations about changes to those materials. In some schools, teachers request the assessment data for their own students so that they can compare them with the school, department, or grade average. This final step provides an opportunity for the protocol to cycle again; the assessment data inform instruction, curriculum, and future assessments. Along the way, gaps in student performance are identified and plans are developed to address them, whether the gaps are between ethnic/racial groups or between the students and the state content standards. Teachers may choose to meet with certain groups of students on a temporary basis, providing instruction on the missing subject knowledge or skills. In high-performing schools, gaps in student knowledge are often addressed in after-school programs such as the federally funded 21st Century Community Learning Centers. Thus, common assessments become the link between the school day and the after-school interventions.

The Protocol in Action

The protocol was used by a group of five teachers who all teach the same course. These teachers meet regularly to discuss their content standards and the ways in which those standards can be assessed. They regularly administer a common assessment that includes 10 to 12 questions. They also use writing prompts and interviews to explore students’ thinking about the content. On a recent common assessment, the following question was used:

For what purpose did Parliament vote during the Restoration?

A. To restore Puritan religion in England

B. To restore the monarchy in England

C. To restore Charles I to power

D. To restore the idea of the divine right of kings


In terms of responses, 37.5 percent of the students chose A, 7.5 percent chose B (the correct answer), 17.5 percent chose C, and 37.5 percent chose D. While we might debate the relative merit of the question or the importance of this point in the overall understanding of history, the teachers noted that this is the type of question that confuses students on the state assessment and that questions of this type are commonly asked on those assessments.

Having acknowledged this result, the conversation that the teachers had about this one question illustrates the power of this process. One of the teachers explained, “Restoration is when they brought the king back. I never really discussed the fact that Parliament voted on this. I really focus on the timeline, not so much why. Using the timeline, my students know that Oliver Cromwell ruined arts and literature and that Charles II restored them. I think that I missed one of the keys here, that Parliament restored the monarchy and ended the military dictatorship.”

Another teacher focused on students’ seeming lack of test-taking skills. He said, “Our students should have been able to delete several items right away. Charles I was beheaded, so C can’t be right. Also, the divine right of kings is a belief system, not something that Parliament could or could not restore. They should have crossed those two choices off right away. We have to go back and review some common test-taking skills.”

Maria Grant is a science teacher who regularly facilitates conversations with her colleagues about student work. A sample question from a recent biology common formative assessment can be found in Figure 7.2. Based on student responses to this item, the teachers had the following conversation.

Mr. Simms encouragingly reported, “The greatest percentage of students did choose the correct answer.” Ms. Jackson quickly curbed the group’s enthusiasm by noting, “Fifty-four percent of the students didn’t choose the right answer.” She added, “Seventeen percent chose answer A. This might mean that students don’t understand how to determine percentages. I think that we should all do a quick review of some basic skills. Who can develop a quick review for us all to use?”

Mr. Simms offered to develop the review and then added, “Though I covered the main concepts of Mendelian genetics, it seems that students didn’t really understand how expressed traits are passed from parent to offspring.” Mrs. Rodriguez agreed, “Yes, and 11 percent chose answer B. The students who chose this answer don’t seem to understand the concept of a dominant allele. Maybe I need to focus more on vocabulary instruction for this group of students. We had the key terms, but they don’t seem to know how to use them. In addition to the math review, I think we should find out the specific students who missed this and get to them during small-group time.”

FIGURE 7.2 Sample Biology Question and Results

Ms. Jackson also noted, “I think we need to work on test-taking skills. Our students should have been able to eliminate answers A and B right away because each shows a parent with blue eyes, and the question states that both parents have brown eyes.” Mr. Simms added, “Twenty-six percent of our students chose answer D. Maybe they thought that since three out of four alleles are B, there’s a correlation to the 31 out of 40 total species with brown eyes as described in the question. I think I need to review how to use Punnett squares.”
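As a quick reference for that review, a Punnett square for one plausible cross is sketched below; the genotypes (two brown-eyed, heterozygous Bb parents) are an assumption chosen for illustration, since the actual item appears only in Figure 7.2.

```latex
% Punnett square for an assumed Bb x Bb cross, with brown (B) dominant over blue (b)
\[
\begin{array}{c|cc}
  & B  & b  \\ \hline
B & BB & Bb \\
b & Bb & bb
\end{array}
\]
```

Any offspring with at least one dominant B allele shows brown eyes, so this cross predicts a 3:1 ratio of brown-eyed to blue-eyed offspring, which is the kind of reasoning a Punnett-square review would rehearse.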

Ms. Grant asked the group if they thought that sharing the item analysis with students might also facilitate their thinking about the content. As she said, “What if we showed all of the students this item analysis and asked them to work in small groups to determine why specific answers were wrong? Wouldn’t that help them understand the test as a genre and get them test format practice?” Mr. Simms agreed, noting that this would also be teaching biology and not simply test practice.


Christine Johnson is a teacher and the facilitator of the course-alike conversations in history. The history department has piloted a metacognitive task in combination with the content knowledge task. For each question that students answer, they also indicate one of the following four choices:

  • I knew it
  • I figured it out
  • I guessed at it
  • I don’t care

For example, during a discussion the group started their conversation about a question that troubled a number of students (see Figure 7.3). As Mr. Jacobs said, “Let’s start with question 3. Only 61 percent of the students got it right and only 38 percent of them self-reported that they knew it. According to the same self-assessment, an additional 36 percent had ‘figured it out’ and 24 percent indicated that they ‘guessed at it.’ It’s interesting that only 3 kids (of 241) didn’t care about this question. I know that I taught this. But most of the wrong answers were still based on democracy, but not the right type of democracy. I think this could be a quick fix. We need to make sure that students really have a sense of the difference between direct and representative democracy. I have an idea for a simulation that could really solidify this for students.” Mr. Jacobs proceeded to describe his idea for a simulation and the teachers agreed to reteach this idea.


FIGURE 7.3 Sample History Questions and Results


From the students’ self-assessments, the teachers found a correlation between answering correctly and confidently reporting that they “knew it.” Accuracy was also high among students who reported “figuring it out.” The teachers were pleased to see that the students were using test-taking strategies such as eliminating choices and using context clues.

Mrs. Johnson then turned the group’s attention to question 10, saying, “Here we go again. Our students still don’t have a sense of the cardinal points. We keep asking them questions that require them to use map skills, but they are getting them wrong. Look here, just over 50 percent correct. We have to focus on interpreting maps every day. It’s not just about using this for history and geography. This is a life skill.” Ms. Vasquez confessed, “I don’t really know how to teach this. I’ve shown my students the map and the directions. I don’t know what to do differently so that they learn this.” Mrs. Johnson suggested that Ms. Vasquez visit another teacher’s class and observe. As Mrs. Johnson said, “I’ll cover your class so that you can go observe Mr. Applegate. Is that okay? Then we can talk further about reteaching the concept of cardinal points. Does anyone else need help with this? Only half of our students are getting this!” Mrs. Johnson also suggested that the group consider revising the pacing guide to allow for more time to teach map skills. The group continued to analyze the results and in the process identified a small group of students who had missed all of the items related to government structures; the teachers suspected that these students lacked background knowledge and would benefit from instruction to build it. Mr. Applegate met with them during the school’s after-school program, where students who need additional intervention are tutored.

To check for their students’ understanding using this protocol, a group of 3rd grade teachers analyzed individual items on a common assessment. First, they correlated the items with content standards and identified items aligned with key standards that fewer than 60 percent of students answered correctly. Next, they identified items aligned with nonkey standards that had fewer than 60 percent correct responses. There were four key standards and seven nonkey standards associated with items that fewer than 60 percent of students got correct. The teachers then checked to see how many questions were asked for each standard and considered each question on the test for discussion.
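A compact sketch of that screening step appears below; the item numbers, the item-to-standard mapping, and the nonkey example are hypothetical placeholders (only the two MG 1.3 percentages come from the discussion that follows).

```python
# Hypothetical screening sketch: flag standards whose items fell below 60% correct,
# separated into key and nonkey standards as the 3rd grade team did.
items = [
    # (item_number, standard, is_key_standard, percent_correct)
    (4,  "MG 1.3", True,  76.6),   # perimeter item with labeled sides
    (11, "MG 1.3", True,  28.9),   # perimeter item shown as an unlabeled grid
    (7,  "NS 2.1", False, 55.0),   # hypothetical nonkey-standard item
]

THRESHOLD = 60.0

def low_performing(items, key_only):
    """Standards with at least one item answered correctly by fewer than 60% of students."""
    return sorted({standard for _, standard, is_key, pct in items
                   if is_key == key_only and pct < THRESHOLD})

print("Key standards to discuss:   ", low_performing(items, key_only=True))
print("Nonkey standards to discuss:", low_performing(items, key_only=False))
```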


Using the standards analysis, the teachers discovered that the key standard with the lowest percent correct was in the area of measurement and geometry. Standard MG 1.3 reads, “Find the perimeter of a polygon with integer sides.” There were two items on the assessment that addressed this standard: one item showed a rectangle with the length and width labeled, and the second item showed a rectangle with squares filled in and no measurements given. Although 76.6 percent of the students selected the correct answer for the first item, only 28.9 percent answered the second item correctly. Even more puzzling was the fact that 48.1 percent of the students chose the same incorrect response to the second item. Figure 7.4 shows this second item.

The teachers determined that the question was valid and simply stated. The next step was to look at the distractors. It soon became apparent to the teachers that the students who chose C (48.1 percent) were most likely trying to find the area by counting the squares or multiplying 5 by 6 and chose the answer 29 because it was closest to the area (30 sq ft). Another suggestion was that when the students saw the grid with all the little squares, they immediately thought of area since that is how they usually see the area questions presented in the text. The teachers were still confused as to why the students had a difficult time finding the perimeter. After much discussion, the group came to the consensus that they really needed to work on teaching perimeter in various ways, especially when a grid is given with no values.
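The arithmetic behind that hypothesis can be made explicit. Assuming the grid in Figure 7.4 shows a 5-unit-by-6-unit rectangle, as the teachers’ discussion implies:

```latex
% Perimeter the item asks for vs. the area many students appear to have computed
\[
\text{Perimeter} = 2(5 + 6) = 22 \text{ units}
\qquad\qquad
\text{Area} = 5 \times 6 = 30 \text{ square units}
\]
```

A student who counted or multiplied the squares would land near 30, which makes 29, the most popular distractor, a predictable choice.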

In a similar fashion, a group of 5th grade teachers analyzed common math assessment items and also spent a great deal of time unpacking the curriculum and revising the pacing guide. The 5th grade teachers found that there were five key standards in which fewer than 60 percent of students selected the correct choice; of those five, four were in the area of measurement and geometry. This was of great concern to the teachers because it was clearly a weak area. Let’s consider an item representing key standard MG 1.2: “Construct a cube and rectangular box from two-dimensional patterns and use these patterns to compute the surface area for these objects.” See Figure 7.5.

FIGURE 7.4 Common Assessment Item of Concern, 3rd Grade

The interesting thing about this problem is that 22.7 percent of the students chose answer A, 23.8 percent chose C, and 42.7 percent chose D (the correct response). The teachers were at a loss to explain how the students came up with a response of 16 units, but they guessed that some students chose 4 units because they added (or multiplied) the 2 units and 2 units that were on the illustration. The group felt that the question was valid but wondered if there was too much information given, confusing the students. Determining what information is needed to solve a problem was definitely a strategy that needed emphasis. The 5th grade teachers also agreed that they needed to do more work with surface area in general.
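As a reference point for that discussion, the general surface-area relationships targeted by MG 1.2 (not the specific figures in the item, which appear only in Figure 7.5) are:

```latex
% Surface area of a cube with edge s and of a rectangular box with dimensions l, w, h
\[
SA_{\text{cube}} = 6s^{2}
\qquad\qquad
SA_{\text{box}} = 2(lw + lh + wh)
\]
```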

FIGURE 7.5 Common Assessment Item of Concern, 5th Grade

In the fall of each school year, an all-day meeting is held for each grade level to discuss only mathematics. At that meeting, data regarding common assessments are distributed to the teachers and time is taken to evaluate test items and to work on strategies for teaching difficult concepts. If teachers did not have all the data on each item and were not given the time to compare the data and examine the items in question, they would never really know what their students understood or how they could better instruct their students. The teachers at all grade levels have expressed how much they value the time to meet and discuss their grade-level content.


Tips for Success

Creating systems for teachers to engage with their peers and administrators in systematically looking at student work, supported with collaboratively developed pacing guides and common assessments, can help close the achievement gap that has persisted for decades. We do not need to change teachers to get the results our students deserve. Instead, we need to focus our professional development on ensuring that teachers understand their grade-level and content-specific standards, how those standards are assessed, and what to do when students do not perform well.

Some tips to consider as you use common formative assessments to check for understanding include the following:

  • Dig deeply into, and develop a sophisticated understanding of, the content standards so that appropriate common formative assessments can be developed.
  • Develop common formative assessments in collaboration with your peers. Some of the best professional development occurs when groups of teachers attempt to create assessment items from the standards.
  • Analyze student responses collaboratively. In doing so, you’ll likely gain greater understanding of students’ thinking, which you can use later in your instruction.
  • Use this as a recursive and continuous process rather than a single event. This type of checking for understanding should become part of the regular operation of the grade level, department, and school.
