A Beautiful Garden is Measured As it is Nurtured (Week 9)

Week 9 Blog for EDET 637 Differentiating Instruction through Technology
with Dr. Lee Graham

by Aleta May

Essential question: How can I use both formative and summative assessment to enhance (or at least not interfere with) intrinsic motivation?

Criterion-referenced measurement was introduced by Robert Glaser in 1963 as a strategy, not a test, that is useful as a teaching tool to focus student learning. Glaser had earlier used norm-referenced tests to compare World War II trainees to each other, against a norm group of trainees over time. In the 1950s Glaser had been influenced by behaviorist B. F. Skinner, and promoted programmed instruction that was “designed to present information in small steps, provide immediate feedback, and require learners to correctly complete one step before moving on to the next” (Popham, 2014, p. 62). The issue arose when student scores were so uniformly high that interpreting these tests as normed (comparing students to each other) was not useful without a range of scores. However, if the aim is to measure individual students’ achievement over time as learning unfolds, having scores clustered at one end of a scale is fine, because the focus is on instruction and whether the student is responding to it well or not. Criterion- and norm-referenced measures are interpretations of students’ skills and of what they have or have not learned. “Any kind of test—from multiple choice to essays to oral examinations—can be standardized if uniform scoring and administration are used” (Bond, 1996, p. 3). Keep in mind that a “criterion identified a behavior domain, such as a cognitive skill or a body of knowledge” (Popham, 2014, p. 64). To summarize, norm-referenced and criterion-referenced measurement should be used as tools for interpreting learning (and teaching), not treated as different types of tests.

The point of measuring learning with a criterion-referenced test (tool) is to determine whether content that was pre-determined to be important has been learned; thus the measure needs to match the content. In math, “. . . a CRT score might describe which arithmetic operations a student can perform or the level of reading difficulty he or she can comprehend” (Bond, 1996, p. 2). The AIMSweb measure our school uses is given at grade level. At the beginning of a school year this seems reasonable, but when it is clear that a student is nowhere near that level, the student’s progress measures need to be adapted to his or her instructional needs and carefully monitored for progress as that student works specifically on those skills. What I have just described is formative assessment. Otherwise we are simply measuring the curriculum, and whether that child has met that curriculum goal, over and over. If a student is reading at a primer level and takes the 3rd grade measure again and again that year, there may be some growth in word calling shown, but comprehension is not even in the picture. An advantage of the computerized MAP test is that it adapts to the level of the student, adjusting as far in either direction as it needs to, starting from his or her grade level. Of course the hope here is that the test questions have relevance to the lives of students.
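The idea behind such an adaptive measure can be pictured with a toy sketch (this is my own illustration, not the actual MAP algorithm; the function name and the one-step-per-item rule are invented for the example):

```python
def adaptive_level(start_level, answers):
    """Toy staircase model of an adaptive test: the difficulty level
    moves up after each correct answer and down after each incorrect
    one, so the test settles near the student's actual level no matter
    which grade level it starts from."""
    level = start_level
    for correct in answers:
        level += 1 if correct else -1
    return level

# A 3rd grader who misses most items is soon probed at lower levels:
print(adaptive_level(3, [False, False, True, False]))  # prints 1
```

Real adaptive tests use item response theory rather than a simple staircase, but the principle is the same: which items a student sees depends on how that student has answered so far, rather than on grade level alone.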

What we are really looking for is authentic assessment. In my UbD unit, I will be teaching fraction concepts to middle school students who missed the concrete foundation for these abstract concepts. This will lead them into Pre-Algebra next school year as well, so it will be framed as Pre-Algebra now to make it age appropriate. There are many ways to assess students’ learning. For example, when students learn to use visual ways to represent their understanding of a process during learning, they create a product teachers can formatively evaluate for student progress. Students may use a diagram to solve a math word problem, and a teacher may interview a student to explain the problem-solving process used on a visual organizer. By using this information, the teacher is using diagnostic assessment to analyze the student’s understanding (Poch, van Garderen, & Scheuermann, 2015). The teacher can individually interview students to identify their conceptual understanding: “What is it?—Comprehension of what relationships a diagram can represent and how a diagram can be used when solving a problem” (Poch et al., 2015, p. 156). The teacher can also ask specific questions to determine procedural fluency in the word problem-solving process (for example, whether students are getting a little too absorbed in the diagram and not making connections back to the problem). By asking specific questions, the teacher can observe for strategic competence and for adaptive reasoning, the ability to explain what was done in the diagram.

Since we are moving more and more each year toward a learner-centered approach, our assessment of students will need to move in that direction as well. Progress is being made, but my question is whether that progress is keeping up with the change in instructional pedagogy. “. . . teachers must plan their lessons and provide opportunities for individual and group work, lot of students’ activities, two way questioning, discussions, role-plays, presentations to properly facilitate learners to achieve desired learning outcomes” (Ilyas, Qazi, & Rawat, 2014, p. 19). Fractions are an example of a subject that must be taught in a way that grounds abstract concepts in meaningful, relevant, concrete hands-on learning to help students remember what they are learning.

One approach to teaching math in a way where students can be assessed (and can assess their own learning) along the way is called Solve It! This cognitive strategy has four phases:
I. Translation phase: “read the problem for understanding; and paraphrase”
II. Integration phase: “visualize” and represent from prior knowledge
III. Planning phase: “hypothesize about problem solutions” and “estimate the answer”
IV. Execution phase: compute the answer and check the work
Students’ knowledge is built through this process, and their development (with age considerations) is assessed as they learn how to demonstrate learning in these phases (Krawec, Huang, Montague, Kressler, & Melia de Alba, 2012, pp. 81-82).

It is important to remember that “before we assess our students’ learning; it is about instruction . . . Info In” (Lewin & Shoemaker, 2011, p. 8). Lewin and Shoemaker propose a four-step approach: first, prepare the student by making connections between what students already know and the new learning; second, “First Dare,” implying that taking risks to try out new understanding is a good way to learn; third, “Repair,” using fix-up strategies, such as making needed corrections to a product based on new understanding; and fourth, share learning with an authentic audience rather than just the teacher.

There is a place for summative assessment. As with plants, “it is interesting and important to compare and analyse measurements but, in itself, this does not affect the growth of the plants. Formative assessment, on the other hand, is the equivalent of feeding and watering the plants appropriate to their needs - directly affecting their growth” (Ministry of Education, p. 1).
I recommend watching these video clips as a way to communicate authentic assessment to staff members:
Sharing assessment data with students

Working with students to develop their next learning steps

Developing students’ ownership of their learning

An overarching theme is that we need to keep in mind establishing learning goals that are age appropriate and allow students to take control of and manage their own learning over time.


Bond, L. A. (1996). Norm- and criterion-referenced testing. ERIC/AE Digest. Retrieved from http://www.ericdigests.org/1998-1/norm.htm

Ilyas, B. M., Qazi, W., & Rawat, K. J. (2014). Effect of teaching fractions through constructivist approach on learning outcomes of public sector primary schools teachers. Bulletin of Education and Research, 36(1), 15-35.

Popham, W. J. (2014). Criterion-referenced measurement: Half a century wasted? Educational Leadership, 71(6), 62-68. Retrieved on 3-21-16 from Egan Library.

Krawec, J., Huang, J., Montague, M., Kressler, B., & Melia de Alba, A. (2012). The effects of cognitive strategy instruction on knowledge of math problem-solving processes of middle school students with learning disabilities. Learning Disability Quarterly, 36(2), 80-92.

Lewin, L., & Shoemaker, B. J. (2011). Great performances: Creating classroom-based assessment tasks (2nd ed.). Alexandria, VA: Association for Supervision & Curriculum Development (ASCD). Retrieved on 3-21-16 from ProQuest ebrary: http://egandb.uas.alaska.edu:2081/lib/uasoutheast/reader.action?ppg=106&docID=10488667&tm=1428975832182 (chapters 1-4)

Ministry of Education, Te Tahuhu O Te Matauranga. Formative and summative assessment.

Poch, A. L., van Garderen, D., & Scheuermann, A. M. (2015). Students’ understanding of diagrams for solving word problems: A framework for assessing diagram proficiency. Teaching Exceptional Children, 47(3), 153-162.
