EdPsych Modules

MODULE 28

Performance Assessment

OUTLINE

A Broader View of Assessment

n Performance Assessment

n Authentic Assessment

Developing Performance Assessments

n Presentations

n Projects

n Portfolios

Evaluating Performance Assessments

n Checklists

n Rating Scales

n Rubrics

Advantages and Disadvantages of Performance Assessment

Summary

Key Concepts

Case Studies: Reflect and Evaluate

LEARNING GOALS

1. Define performance assessment and provide examples of the formative and summative uses of performance assessment.

2. Define authentic assessment and identify its essential characteristics.

3. Describe the three major types of performance assessment and provide a rationale for using each type.

4. Describe the three methods of systematically evaluating students’ performances.

5. Discuss the general advantages and disadvantages of performance assessments.
boh7850x_CL8Mod28.p498-513.indd 498 11/19/08 9:58:43 AM

A BROADER VIEW OF ASSESSMENT

Since the implementation of the federal No Child Left Behind Act of 2001, educators have been required to use standardized tests for accountability purposes, but they also recognize that narrow test formats and inappropriate uses of standardized testing negatively affect the quality of instruction and student learning (Resnick & Resnick, 1992; Shepard, 2006). Dissatisfaction with the limitations of testing has led national policymakers, individuals responsible for state- and district-level assessments, and teachers interested in better uses of assessment in their own classrooms to consider assessment alternatives that give students the opportunity to show what they can “do,” as well as what they know. Current trends in assessment are moving toward (McMillan, 2007; National Research Council, 2001):

n using multiple forms of assessment,

n assessing a broader range of abilities and talents,

n assessment as an integral part of instruction, and

n assessment tasks that are relevant to real life or represent tasks common to a particular discipline.

In this module, we’ll examine the ways performance assessment in the classroom can expand teachers’ view of what students know and can do and allow them to assess students in a multidimensional way.

Performance Assessment

Performance assessment is any form of assessment that requires students to carry out an activity (process) or develop a product in order to demonstrate skill or knowledge (Airasian, 2005; Perlman, 2002). It requires students to actually demonstrate proficiency rather than simply answer questions about proficiency, and it asks students to perform, create, produce, or do something that involves the use of higher-level problem-solving skills (Gronlund, 2006). Performance assessments can be completed individually or as part of a group, and they may have oral and written components.

Formative versus summative uses. Like traditional forms of assessment, performance assessments can have both formative and summative uses. Consider these performance assessments:

1. A band director listens to each flute player’s performance and provides suggestions for improvement.
2. A PE teacher watches a student shoot a free throw and then offers suggestions on physical stance and hand and arm movements.

3. An industrial technology teacher observes students as they use a drill press to determine whether they are operating the machinery safely.

These formative assessments are used to plan for instruction and to monitor progress during instruction throughout the grading period. The purpose of the assessment is to improve student performance by providing feedback in the moment.

Teachers also can use performance assessments as a summative assessment to assess achievement at the end of an instruction period. Consider these performance assessments again:

1. The band director listens to each flute player perform in order to assign chairs in band for the next nine weeks.
2. The PE teacher watches a student playing basketball in order to rate the adequacy of the student’s skill and participation.

3. The industrial technology teacher observes students using the drill press in order to grade them in the use of safety goggles.

Process versus product. In each example, the teacher is evaluating the students’ skill and determining how well they have met performance objectives. The examples refer to the assessment of processes or behaviors, yet performance assessment can also include the assessment of tangible products that students create. Most processes lead to products, so teachers might assess both as part of a single assignment. In fact, multiple processes as part of, say, a lab experiment or a research paper might lead to single or multiple products (e.g., in the lab, a finished chemical solution plus a lab report).

During a formative performance assessment, the coach provides feedback in the moment to help a student improve her free throw shooting.

>><<

Formative and summative assessment: See page 469.

Matching performance assessment to instructional objectives. As a teacher, you will want to select the assessment format that provides the most direct evaluation of the particular skill or learning outcome being measured (Gronlund, 2006). Before choosing to use performance assessment, you should clearly identify the purpose of the instructional activity (Moskal, 2003). If the purpose is to assess the student’s ability to perform a skill, then having a student actually play a selection on the flute, for example, provides much richer, more meaningful information about the student’s ability to perform that skill than simply having the student answer multiple-choice questions about flute playing.

Authentic Assessment

Authentic assessments present students with problem-solving tasks that allow them to use their knowledge and skills in a meaningful way (Nitko & Brookhart, 2007). In order to prepare students for challenges and tasks that they will face in their careers and personal lives, teachers need to give them opportunities to practice problem-solving skills related to important, real-life skills and contexts (Hambleton, 1996; Popham, 2005). Solving important problems may require locating and using resources, consulting or collaborating with other people, and integrating basic skills with higher-level thinking and creativity (Popham, 2005; Wolf, Bixby, Glenn, & Gardner, 1991). Authentic tasks (Powers, 2005):

n present messy, poorly defined problems similar to the roles and challenges that students will encounter in the real world;

n simulate ways students should use combinations of knowledge, skills, and abilities in the real world;

n require the development of complete and well-justified responses, performances, or products; and

n may have multiple correct solutions (although the tasks clearly specify standards and criteria for determining the possible range of correct answers).

In today’s technology-rich learning environments, authentic assessments can include adaptive computer scenarios that present a student with a situation and then ask the student questions or require a decision. Because these presentations can be dynamic, changing depending on the student’s response, each student may encounter a slightly different scenario (Nitko & Brookhart, 2007). Computer simulations can provide greater economy and consistency than real-life scenarios, and they offer the advantage of computerized scoring of student responses (Jones, 1994). Research indicates that, in some cases, computer-based simulations of “hands-on” activities are just as effective as activities in which students manipulate real objects (Triona & Klahr, 2003). Skills reported to improve through computer simulations include reading (Willing, 1988), problem solving (Jiang & Potter, 1994; Rivers & Vockell, 1987), science process skills such as measurement and data interpretation (Geban, Askar, & Ozkan, 1992; Huppert, Lomask, & Lazarowitz, 2002), 3-D visualization (Barnea & Dori, 1999), mineral identification (Kelly, 1997/1998), abstract thinking (Berlin & White, 1986), creativity (Michael, 2001), and algebra skills involving the ability to relate equations to real-life situations (Verzoni, 1995).
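The adaptive scenarios described above can be modeled as a small branching structure in which the student’s response selects the next prompt. The sketch below is purely illustrative: the scenario content, node names, and `next_node` function are invented for this example and are not drawn from any assessment system named in the text.

```python
# Hypothetical branching scenario: the path a student sees depends on
# each response, so two students may encounter different prompts.
SCENARIO = {
    "start": {
        "prompt": "A classroom plant is wilting. What do you check first?",
        "options": {"soil moisture": "dry_soil", "sunlight": "light_check"},
    },
    "dry_soil": {
        "prompt": "The soil is completely dry. What is your next step?",
        "options": {"water thoroughly": "end", "repot the plant": "end"},
    },
    "light_check": {
        "prompt": "The plant sits in direct sun all day. What do you conclude?",
        "options": {"possible heat stress": "end", "it needs more light": "end"},
    },
    "end": {"prompt": "Scenario complete.", "options": {}},
}

def next_node(current, response):
    """Branch to the next scenario node based on the student's response."""
    return SCENARIO[current]["options"][response]

# Different first responses lead to different follow-up prompts:
# next_node("start", "soil moisture") -> "dry_soil"
# next_node("start", "sunlight")      -> "light_check"
```

Because each response is recorded as the student moves through the branches, the same structure that drives the presentation can also support computerized scoring.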

Performance assessment and authentic assessment are not necessarily synonymous (McMillan, 2003). It is possible to assign a performance task that is not authentic because, although it requires that the student perform a skill, that skill is not grounded in a meaningful, real-world context. For example, a student might be asked to go to the board and demonstrate how to solve a math problem, but if the math exercise is not tied to the solution of a complex real-world problem, it is not considered an authentic assessment.

>><<

Problem solving: See page 248.

Simulations Can Be Efficient and Effective. Computer-based simulations allow students to build a wide range of skills.

>><<

Technology and assessment: See page 472.

In your own words, how would you describe the difference between performance assessment and authentic assessment? How is performance assessment used in your college courses? How might you use it in your own teaching?

DEVELOPING PERFORMANCE ASSESSMENTS

After you have decided what knowledge or skills need to be assessed and have concluded that performance assessment best suits your purpose, it is time to consider which type of performance assessment is most appropriate. We’ll examine the basic facets of three types of performance tasks: presentations, projects, and portfolios. Each of these performance tasks has its own unique characteristics, but first we’ll consider some performance assessment guidelines that apply across multiple formats:

1. The selected performance should reflect a valued activity. The type of assessment you select sends a message to students about what you value and most want them to learn. For example, if you incorporate a large number of cooperative learning activities in the classroom, you are communicating the importance of interdependence and learning to work as a team.
2. The completion of performance assessments should provide a valuable learning experience. Performance assessments require more time to administer than other forms of assessment. The investment of this classroom time should result in a higher payoff that includes an increase both in the teacher’s understanding of what students know and can do and in the students’ knowledge of the intended content.

3. The statement of goals and objectives should be clearly aligned with the measurable outcomes of the performance activity. Figure 28.1 provides examples of performance activities and products that demonstrate the different levels of cognitive objectives in the taxonomy developed by Benjamin Bloom and his colleagues. Bloom’s taxonomy presents six categories of cognitive skills (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956). Think of these six categories as a comprehensive way of considering different cognitive goals that need to be met when planning for instruction.

Presentations

Several common forms of performance assessment involve a presentation of one kind or another, including demonstrations, experiments, oral presentations, and exhibitions.

Demonstrations require students to show that they can use knowledge or skills to complete a well-defined, complex task (Nitko & Brookhart, 2007). A demonstration is usually a closed-response task, meaning that there is one correct way or a best way to complete the task. Typically, a demonstration is not as long or involved as a project. Demonstrations might include preschoolers tying a shoelace, elementary school students showing the proper way to line up for a fire drill, middle school students using a microscope to view slides, and high school students driving a car.

In an experiment, a student plans, conducts, and interprets the results of research. Experiments allow teachers to assess whether a student can use inquiry skills and methods such as making estimates or predictions, gathering and analyzing data, drawing conclusions, stating assumptions, and presenting findings. Experiments can be used with students at all grade levels. Preschoolers might test whether certain objects sink or float, elementary school students might test different growing conditions for plants, middle school students might predict the series of steps needed to create an electrical circuit, and high school students might estimate the type of reaction that will occur when certain chemicals are mixed.

Oral presentations might include interviews, speeches, skits, debates, or other dramatizations in which students are required to verbalize their knowledge and use their oral communication skills. Written work such as a list of interview questions, the draft of a speech, note cards to be used in a debate, or the script of a skit often is submitted along with an oral presentation. As with other forms of performance assessment, oral presentations can be done individually or as a group.

An exhibition is a public performance that serves as the culmination of a series of performances in a particular area, usually a graduation-level exercise or final class project. Exhibitions demonstrate what has been learned over the course of a unit or program of study and may require a combination of reading, writing, questioning, speaking, and listening. Exhibitions can yield an authentic measure of students’ abilities to engage in inquiry and skillful expression, and they can motivate and engage students by involving them in a public challenge. Preschoolers might exhibit their fingerpaintings or block structures, elementary school students might exhibit Young Authors stories they have written, middle school students might exhibit their Science Fair projects, and high school students might exhibit and race vehicles they have designed and built in an engineering class.

>><<

Bloom’s taxonomy and learning objectives: See page 360.

Figure 28.1: Cognitive Categories. Direct performance activities and products demonstrate each of the six cognitive objectives presented in Bloom’s taxonomy. (Two categories, Remember and Understand, are grouped together in this diagram.) Source: Growing up gifted: Developing the potential of children at home and at school, by B. Clark, 2002, Upper Saddle River, NJ: Merrill-Prentice Hall.

Projects

A project is an activity, usually completed over an extended period of time, that results in a student product of some kind, such as a model, a functional object (e.g., a map or diorama), a substantial report, or a collection of related artifacts (Banks, 2005). Projects can be completed individually or as a group.




In addition to assessing academic learning goals, the group project can be used to assess how well students work together cooperatively. Research on cooperative learning suggests that students achieve the most when an element of both group goals and individual accountability is present (Johnson & Johnson, 2005; Slavin, 1988). The group succeeds (group goals) only when each member contributes to the project as a whole (individual accountability). For example, a teacher might assign a project in which students work in groups to visually represent the main themes in the novel A Tale of Two Cities. The teacher would evaluate a single product for each group and give the group a grade based on these criteria: identification of main ideas, organization, aesthetics, and originality.

The process of working on a project can be a worthwhile educational experience, but a project’s usefulness as a form of assessment depends on how well the project task has been designed. The effective use of projects as a form of assessment requires that these four conditions be met (Nitko & Brookhart, 2007):

1. The project must focus on one or more important learning goals that are clearly communicated in advance via written instructions or a rubric that outlines grading criteria. Well-designed project tasks require students to apply a wide range of abilities and knowledge.
2. Each student must have equal access to the resources needed to create an excellent final product. If you know that students vary widely in their access to resources, such as computers, you should limit the resources they are allowed to use.

3. Long-term project work will be more successful if you keep students on track by setting intermediate deadlines, requiring regular progress reports, and helping students overcome any obstacles that might threaten to derail their work.
4. Each student must do his or her own work. If students are working on a project as a group, individual roles and responsibilities should be clearly defined.

Portfolios

Interest in portfolio assessment has increased dramatically in recent years (Burke, 2006; Butler & McMunn, 2006). A portfolio is a systematic collection of student work (Popham, 2005). Portfolios can include a wide variety of items: writing samples, artwork, graphs, diagrams, photographs, audio tapes or videotapes, teacher comments, peer comments, work in progress, revisions, and student self-analyses—anything that represents what the student has learned in the area being taught and assessed (Knotek, 2005; Wolf et al., 1991). Well-designed portfolios can capture the complexity and range of a student’s work. The process of selecting items for inclusion and reviewing what has been included involves critical analysis and self-reflection on the part of the student and the teacher, as both consider how best to portray what the student has learned. Older students might include written reflections about the items selected for inclusion in the portfolio. Because portfolios may include multiple samples of a student’s work collected over an extended period of time, they are an excellent tool for demonstrating progress (Berryman & Russell, 2001).

Teachers can use process portfolios or best work portfolios. Process portfolios contain work from different stages to show a student’s progress or achievement over time (Gronlund, 2006; Knotek, 2005). They are sometimes called growth portfolios or developmental portfolios. Best work portfolios include a carefully selected combination of materials that showcase examples of a student’s best work and serve as a final summative assessment (Johnson & Johnson, 2002). Effective use of either type of portfolio requires adherence to these guidelines:

1. Establish the purpose of the portfolio. Is the portfolio to be used to demonstrate progress or growth over time, or is it intended to showcase best work?
2. Involve the student in decisions about what to include. Many teachers allow students to have a say in what goes into their portfolios (Weasmer & Woods, 2001). If students are allowed to choose the items to be included, have them write a reflective statement telling why each piece was selected (Airasian, 2005).

>><<

Individual and group accountability as related to learning: See page 377.

>><<

How teachers can assign roles and responsibilities: See page 358.

The Solar System. Creating a model of the solar system integrates many skills within a single project.

Set Precise Criteria for Evaluation. Students should be involved in decisions about what to include in their portfolios.

Student Reflection: Sample Self-assessment

Student Name: Date:

The attached portfolio item is (e.g., first draft, poetry, concept map).

This piece of work demonstrates that I can:

n take risks

n persevere

n collaborate

n support ideas with evidence or reasons

n organize related ideas

n write using a variety of sentence structures

n use effective spelling strategies

n self-edit

n use a writing process

n participate in a discussion

n other:

Please notice:

Now I am planning to:

Student Signature:

3. Review the contents of the portfolio with the student. It is important to meet with each student on a regular basis to discuss the current state of the portfolio, review progress, and plan future work to be included (McMillan, 2007; Weldin & Tumarkin, 1999).

4. Set precise criteria for evaluation. Clear and systematic criteria make the process of developing the portfolio less mysterious and make grading much more efficient (Burke, 2006; Gronlund, 2006). The criteria should allow evaluation of how well the portfolio as a whole represents the student’s level of achievement (Airasian, 2005).

As a student, what is your reaction when you are assigned performance tasks? How might this influence when and how you use performance assessments as a teacher? What issues will you consider in developing performance assessments for your own students?

EVALUATING PERFORMANCE ASSESSMENTS

Once the performance task has been selected, the teacher must decide how to evaluate the assessment. Whether the assessment involves a product, a performance, or both, evaluation should be done systematically so that all students are assessed in a fair and consistent manner. Performance assessments involve a subjective evaluation of a student’s performance and therefore can be subject to inconsistencies. For example, when evaluating a student presentation, one teacher might think a student “sometimes” used good eye contact while another might think the student “seldom” used good eye contact. Both teachers observed the same behavior but attached a different value to what they saw. Determining the reliability, or consistency, of the scoring of performance assessments involves inter-rater reliability, or the degree of consensus or similarity of ratings given by two independent raters. Like standardized achievement tests and classroom tests that use objective items (e.g., multiple choice and true/false), performance assessments must show evidence of reliability for the score or grade to be meaningful. However, reliability is more difficult to achieve with performance assessments than with more traditional forms of assessment.
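The module defines inter-rater reliability qualitatively. As a hedged illustration (not part of the text), the sketch below computes two common agreement measures for a pair of raters: simple percent agreement and Cohen’s kappa, a chance-corrected statistic the module does not name. The teacher ratings are invented for the example.

```python
from collections import Counter

def percent_agreement(rater1, rater2):
    """Proportion of performances that two raters scored identically."""
    matches = sum(a == b for a, b in zip(rater1, rater2))
    return matches / len(rater1)

def cohens_kappa(rater1, rater2):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(rater1)
    p_o = percent_agreement(rater1, rater2)
    counts1, counts2 = Counter(rater1), Counter(rater2)
    # Expected chance agreement from each rater's marginal rating frequencies.
    p_e = sum((counts1[c] / n) * (counts2[c] / n)
              for c in set(rater1) | set(rater2))
    return (p_o - p_e) / (1 - p_e)

# Two teachers independently rate the same five presentations on a 1-5 scale.
teacher_a = [4, 3, 5, 2, 4]
teacher_b = [4, 3, 4, 2, 4]
# percent_agreement(teacher_a, teacher_b) -> 0.8
```

A kappa well below raw agreement signals that much of the apparent consensus could arise by chance, which is one reason explicit scoring criteria matter.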

Developing a scoring system—such as a checklist, rating scale, or rubric—can help teachers improve the reliability of performance assessment scores. These scoring systems provide preset criteria for evaluating student performance, making grading simpler and more transparent (Kubiszyn & Borich, 2003). They clarify what students are expected to know and do, and they make explicit how various types of knowledge and subskills in the performance are to be evaluated and weighted. The more explicit a scoring system, the more likely a teacher will be consistent in scoring across students or across class periods, increasing reliability. Figure 28.2 presents an overview of the types of scoring instruments commonly used in performance assessment.

Checklists

The use of a checklist, the least complex form of scoring system, is appropriate when you are looking for specific elements in a product or performance and all elements are weighted the same. Checklists provide a quick and easy way to assess based on a specified list of criteria, such as behaviors or characteristics that can be marked as Present/Absent, Complete/Incomplete, or Yes/No. Working from a prepared checklist, you mark off each item as it occurs and assign a score based on the total number of items checked. However, you give no recognition to variation in quality, and you assign no higher or lower values for how well a particular skill is executed. Checklists are especially useful for recording information during the observation of student behaviors. For example, a checklist for evaluating oral presentation skills might indicate whether the student:

——— maintains eye contact with the audience

——— speaks loudly enough to be heard in all parts of the room

——— enunciates clearly

——— uses gestures appropriately

——— speaks for the allotted time
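The checklist logic described above (equally weighted criteria marked present or absent, then totaled) can be sketched as follows. The criteria reuse the oral-presentation items just listed; the data structure and function name are illustrative, not from the text.

```python
# Hypothetical checklist built from the oral-presentation criteria above.
ORAL_PRESENTATION_CHECKLIST = [
    "maintains eye contact with the audience",
    "speaks loudly enough to be heard in all parts of the room",
    "enunciates clearly",
    "uses gestures appropriately",
    "speaks for the allotted time",
]

def checklist_score(observed):
    """All items weigh the same: the score is simply the count of items present."""
    return sum(1 for item in ORAL_PRESENTATION_CHECKLIST
               if observed.get(item, False))

# A student observed meeting three of the five criteria scores 3 out of 5.
observed = {
    "maintains eye contact with the audience": True,
    "enunciates clearly": True,
    "speaks for the allotted time": True,
}
```

Note what the structure cannot express: there is no way to record that eye contact was excellent rather than merely present, which is exactly the limitation the text attributes to checklists.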

Rating Scales

Rating scales offer a way to attach an indication of quality to the various elements of a process or product. For example, you might rate the performance of a skill on a scale of one to ten, with ten being the best score. Graphic rating scales allow the rater to mark a point on a line or a continuum that reflects degrees of performance (e.g., never, seldom, sometimes, usually, always). Numeric rating scales quantify results. You might circle 1 to indicate that a certain behavior never occurs, 2 to indicate that it seldom occurs, 3 to indicate that it sometimes occurs, 4 to indicate that it usually occurs, and 5 to indicate that it always occurs. This approach works best when it is set up so that the highest value is assigned to the desired response. Descriptive rating scales provide a description rather than simply a number or a single term for each level of performance. For example, you might rate a student’s organizational skills on a project by using the following descriptors:

n Exemplary: Ideas and information are very well organized.

n Proficient: Some flaws in organization interfere with understanding of the project.

n Deficient: The project is haphazard, with no apparent organization.

>><<

The reliability of classroom tests: See page 470.

>><<

Standardized tests: See page 524.



Figure 28.2: Tools for Scoring. By developing a scoring system, teachers can improve the reliability of performance assessment scores while clarifying what students are expected to know and do.



Scoring instruments for performance assessments



















Checklists Rating scales













Graphic




























Rubrics












Numeric
























Analytic rubrics












formance

Assessment

Module 28:

Per






Holistic rubrics






Graphic Rating Scale for Evaluating a Student’s Performance During Group Work. A graphic rating scale can be used to reflect degrees of performance along a continuum.






Performance During Group Work (rated Unsatisfactory, Fair, Satisfactory, Good, or Outstanding)

Participation: Was present at all group meetings and made a significant contribution to the workload

Focus: Stayed on task and encouraged others to do so

Attitude: Exhibited enthusiasm and supported the efforts of group members

Dependability: Was conscientious, thorough, reliable, accurate

Cooperation: Was willing and able to work with others to produce desired goals






In addition to evaluating the achievement of learning objectives, rating scales can be used to evaluate student behaviors such as time on task, level of motivation, or degree of contribution to a group project.

Rubrics

A rubric is a means of scoring a performance assessment in which multiple criteria are being assessed and the quality of the product or performance is important. Rubrics are especially appropriate for evaluating complex tasks or activities that integrate content from more than one area. Rubrics improve scoring consistency and also improve validity by clarifying the standards of achievement teachers use to evaluate students’ work and communicate students’ performance to parents.

A holistic rubric, illustrated in Table 28.1, requires the teacher to score the overall process or product as a whole, without judging the component parts separately (Nitko, 2001). Teachers using this assessment method may rely on a rubric that lists features of A work, B work, and so on, but they do not assign a specific number of points to each feature. Instead, they determine which description best fits the paper or project and grade it accordingly. Although holistic rubrics can be easier to create and score, making them faster to use, they provide less feedback to students than is possible with an analytic rubric.

With an analytic rubric, like the one in Table 28.2, the teacher scores separate, individual parts of the product or performance first and then sums the individual scores to obtain a total score (Moskal, 2000; Nitko, 2001). Analytic grading assigns separate scores to different criteria. For example, ideas might be worth 10 points, organization 10 points, sentence structure 10 points, and so on. This format allows the teacher to provide more detailed feedback about the strengths and weaknesses of a student’s product or performance. Detailed feedback can improve student learning because it breaks down students’ performance into specific components of the task and identifies students’ progress toward meeting learning goals. Sometimes, however, it may be difficult or even inappropriate to define the parts of a student’s product or performance as separate and distinct. In such cases, when the overall impression or quality of the work is paramount, holistic grading may be a better choice.
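To make analytic scoring concrete, here is a minimal sketch in which separate criterion scores are summed into a total, as the text describes. The criteria are hypothetical (they echo the writing example above), and the four levels follow the analytic rubric template in Table 28.2.

```python
# Levels follow the analytic rubric template: 1 = Beginning ... 4 = Exemplary.
LEVELS = {1: "Beginning", 2: "Developing", 3: "Accomplished", 4: "Exemplary"}
# Hypothetical criteria for a writing task.
CRITERIA = ["ideas", "organization", "sentence structure"]

def analytic_total(scores):
    """Score each criterion separately, then sum the parts into a total."""
    assert set(scores) == set(CRITERIA), "every criterion must be scored"
    assert all(level in LEVELS for level in scores.values())
    return sum(scores.values())

student = {"ideas": 4, "organization": 3, "sentence structure": 2}
# analytic_total(student) -> 9 out of a possible 12
```

A holistic rubric, by contrast, would map the whole product to a single level rather than summing parts, trading the per-criterion feedback for speed of scoring.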

Rubrics also can be classified as either generic or task-specific. A generic rubric provides a standard format that the teacher uses repeatedly throughout the year to evaluate a set of assignments. It contains scoring guidelines that can be applied to many different tasks (e.g., writing, science lab work, or math problem solving). Generic rubrics are useful for both teachers and students. They are an efficient tool for teachers because the same general format can be used multiple times. Repeated use of a generic rubric also encourages students to improve their performance from one task to the



boh7850x_CL8Mod28.p498-513.indd 506 boh7850x_CL8Mod28.p498-513.indd 506 11/19/08 9:59:03 AM

11/19/08 9:59:03 AM

module twenty-eight performance assessment 507



TABLE 28.1  Template for Holistic Rubrics

Score  Description
5      Demonstrates complete understanding of the problem. All requirements of task are included in response.
4      Demonstrates considerable understanding of the problem. All requirements of task are included.
3      Demonstrates partial understanding of the problem. Most requirements of task are included.
2      Demonstrates little understanding of the problem. Many requirements of task are missing.
1      Demonstrates no understanding of the problem.
0      No response/task not attempted.

Source: Mertler, 2001.

















TABLE 28.2  Template for Analytic Rubrics

For each criterion (Criteria #1 through Criteria #4), the rubric provides a description at each of four performance levels, plus a Score column:

Beginning (1): Description reflecting beginning level of performance
Developing (2): Description reflecting movement toward mastery level of performance
Accomplished (3): Description reflecting achievement of mastery level of performance
Exemplary (4): Description reflecting highest level of performance

Source: Mertler, 2001.





next because the criteria are clear and consistent. The use of a generic rubric (in an analytic format) throughout the year leads to increased student achievement (Khattri, Reeve, & Adamson, 1997). A task-specific rubric modifies the generic framework to match the specific learning goals of a particular task. In certain situations, an assignment is not part of a series of similar tasks, or a particular




assignment has a unique set of learning objectives. In these cases, a task-specific rubric is the more appropriate choice.

The design and scoring recommendations that follow apply to both analytic and holistic scoring rubrics (Arter & McTighe, 2001; Moskal, 2003). The design of effective scoring rubrics can be conceptualized as a series of three basic steps, shown as the first three of seven steps in Figure 28.3 (Mertler, 2001; Montgomery, 2001; Tombari & Borich, 1999).

1. Determine the criteria to be evaluated.

The criteria within a scoring rubric should be clearly aligned with the requirements of the task and the stated goals and objectives.
These criteria should be expressed in terms of observable behaviors or product characteristics. Rubrics should be written in specific and clear language that the students understand and should provide students with a clear description of what is expected, before they proceed with the assessment activity. If the language in a scoring rubric is too complex for students, the benefit of rubrics is lost.

2. Determine the number of performance levels. The scale used for a scoring rubric should reflect clear differences between student achievement levels. A scoring rubric that has fewer categories and clear distinctions between them is preferable to a scoring rubric that has many categories that may overlap or be difficult to interpret. The number of points assigned in each category or level should clearly reflect the value of the activity. On an analytic scoring rubric, if elements are weighted differently (e.g., weighting spelling and grammar less than content in an essay), there should be a clear reason for these differences.

3. Define expectations clearly, beginning with the highest level of performance, and proceed with a description of each subsequent level. This step may involve brainstorming characteristics that describe each attribute being assessed on the rubric, as well as the criteria for different levels of performance. The separation between score levels should be clear. Consider the following descriptions of a holistic five-point scale provided for a written assignment at the middle school level:

n A score of 5 represents outstanding work. An essay in this category is very well-organized and coherent, clearly explains key ideas, and is free of errors in spelling, grammar, and punctuation.

n A score of 4 represents strong work. An essay in this category is generally well-organized and coherent, explains key ideas, and is free of errors in spelling, grammar, and punctuation.

n A score of 3 represents competent work. An essay in this category is adequately organized and developed, explains some key ideas, but may display some errors in spelling, grammar, and punctuation.

n A score of 2 represents insufficient work. An essay in this category has one or more of the following weaknesses: inadequate organization or development, inadequate explanation of key ideas, little or no detail, or a pattern of errors in spelling, grammar, and punctuation.

n A score of 1 represents seriously flawed work. An essay in this category contains serious or persistent problems in writing style and in mechanics, clarity, and organization of ideas.

Figure 28.3 breaks down the process into further steps that differ depending on whether the rubric is holistic or analytic.



Designing Scoring Rubrics: Step-by-Step Procedure

Step 1: Re-examine the learning objectives to be addressed by the task.

Step 2: Identify specific observable attributes that you want to see (as well as those you don't want to see) your students demonstrate in their product, process, or performance.

Step 3: Brainstorm characteristics that describe each attribute.

For holistic rubrics...

Step 4a: Write thorough narrative descriptions for excellent work and poor work incorporating each attribute into the description.

Step 5a: Complete the rubric by describing other levels on the continuum that ranges from excellent to poor work for the collective attributes.

For analytic rubrics...

Step 4b: Write thorough narrative descriptions for excellent work and poor work for each individual attribute.

Step 5b: Complete the rubric by describing other levels on the continuum that ranges from excellent to poor work for each attribute.

Step 6: Collect samples of student work that exemplify each level.

Step 7: Revise the rubric, as necessary.


Figure 28.3: Steps for Designing Rubrics. The first three steps in this flow chart form the basis for creating an effective scoring rubric. Steps 4 and 5 address differences in developing holistic and analytic rubrics.


Learning objectives: See page 360.





After the rubric has been designed, teachers must consider how to use it effectively. Mathematics education professor Barbara M. Moskal (2003) offers these recommendations for scoring, interpreting, and using the results of performance assessments:

1. The connection between the score or grade and the scoring rubric should be immediately apparent. If an analytic rubric is used, then the report should contain the scores assigned to each analytic level. If a summary score or grade is provided, then an explanation of how the summary score or grade was determined should be included. Both students and parents should be able to understand how the final grade or score is linked to the scoring criteria.
2. The results of the performance assessment should be used to improve instruction and the assessment process. How can information gleaned from student responses be used to improve future classroom instruction? What did the teacher learn? How can the performance assessment and scoring rubric be improved for future instruction? Teachers should use the information acquired through classroom assessment to improve future instruction and assessment.
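Moskal's first recommendation can be made concrete with a short sketch. This hypothetical Python snippet (the function name, criteria, and grade cutoffs are assumptions for illustration, not drawn from the module) produces a report in which each analytic score appears alongside the summary grade, so students and parents can trace the grade back to the scoring criteria:

```python
# Hypothetical sketch: report each analytic criterion score plus a
# transparent summary grade, so the grade-to-rubric link is apparent.
def score_report(scores, max_per_criterion=10):
    total = sum(scores.values())
    possible = max_per_criterion * len(scores)
    percent = 100 * total / possible
    # Assumed grade cutoffs; a real classroom would publish its own scale.
    grade = next(g for g, cutoff in [("A", 90), ("B", 80), ("C", 70),
                                     ("D", 60), ("F", 0)] if percent >= cutoff)
    lines = [f"{criterion}: {pts}/{max_per_criterion}"
             for criterion, pts in scores.items()]
    lines.append(f"Total: {total}/{possible} ({percent:.0f}%) -> {grade}")
    return "\n".join(lines)
```

Because every criterion score is listed before the summary line, the report answers the question a rubric is supposed to answer: not just what grade the work earned, but why.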

What is the most memorable performance assessment you have completed as a student? Why was it memorable? Evaluate its value as a learning experience for you. Evaluate its value as a representation of your knowledge or skills.


ADVANTAGES AND DISADVANTAGES OF PERFORMANCE ASSESSMENT

When making the decision about whether to use a more traditional assessment (usually a test) or some type of performance assessment, teachers need to weigh the advantages and disadvantages of each approach. Performance assessments can offer several advantages over other forms of assessment (Linn & Gronlund, 1995; Oosterhoff, 1999; Rudner & Schafer, 2002) that can benefit students, teachers, and parents alike:

n Performance tasks allow students to use prior knowledge to build new knowledge structures, engage in active learning through inquiry and exploration, and construct meaning for themselves. Performance assessments can be designed to give students an opportunity to engage in self-assessment.

n Performance tasks can give teachers an opportunity to assess the processes students use, as well as their final products. These tasks assess students’ ability to do things, not simply their ability to talk about or answer questions about how to do things, so in some cases they may provide a more valid assessment of students’ skills.

n Performance tasks give parents an opportunity to see their children’s strengths in areas that a traditional testing format might not capture. In some cases, these tasks offer parents an opportunity to share their own interests, hobbies, and experiences with their children as parent and child discuss possible options and gather necessary resources.

Despite their advantages, performance assessments are not ideal for every situation (Miller & Seraphine, 1993; Nitko & Brookhart, 2007). From a practical standpoint, completion of performance assessments may take a great deal of students’ time, and the teacher must be sure that the assignment is meaningful enough to warrant the time invested. High-quality performance assessments are difficult to design, and poorly designed performance tasks may not provide a valid assessment of what students have learned and can do. A student’s performance on one task may tell very little about what that student can do in other areas. For example, in art class a student may be instructed to glaze a pot. The student’s performance on this task provides information about how well the student can apply glazing techniques but reveals nothing about whether the student is able to successfully throw a pot on the wheel or whether the student understands the rich history of pottery as an art form. Be aware that effective scoring rubrics for performance assessments are difficult to create and that the scores may have lower reliability than other measures. Grading performance assessments can be very time-consuming, making them a less practical alternative.

Validity, a characteristic of high-quality assessment, must be carefully considered when making decisions about what form of assessment to use. Validity in the classroom context is primarily a measure of how well interpretations from assessments contribute to instructional decision making and











Validity applies to standardized tests: See page 534.

Validity applies to classroom tests: See page 487.





move students toward increasing levels of competence (Brookhart, 2003; Moss, 2003). To better ensure the validity of performance assessments, the teacher should make sure the assessment (Nitko &
Brookhart, 2007):

n includes content that is representative and relevant,

n represents thinking processes and skills,

n shows evidence of consistency with other assessments, and

n is part of multiple assessments across the course of a grading period.

Generally, performance assessment for summative purposes fills in the gaps left by other, more objective methods rather than being used as the sole assessment tool (Hanna & Dettmer, 2004). During the assessment process, teachers look for patterns, check for contradictory evidence, and compare the developing picture of a student’s abilities to certain learning goals or standards of competence (Shepard, 2006). The inclusion of performance assessments facilitates this process by allowing the teacher to see student competencies that may not have been captured via traditional assessments. The combined use of traditional and performance assessments increases validity by presenting a more accurate and complete picture of what a student knows and is capable of doing.

How might you use performance assessments to the greatest advantage in your own teaching? In what situations might you decide not to use performance assessments?








What Has Been Learned? Exhibitions allow students to share their work in a public format.














Summary






Define performance assessment and provide examples of the formative and summative uses of performance assessment. Performance assessment is any assessment that requires students to carry out an activity or develop a product in order to demonstrate skill or knowledge. Formative uses of performance assessment provide feedback in the moment in order to help a student improve. Summative uses of performance assessment help the teacher evaluate students’ progress, as well as the effectiveness of instructional methods, at the end of a unit or grading period.

Define authentic assessment and identify its essential characteristics. Authentic assessment measures important abilities using procedures that simulate the application of those abilities to real-world intellectual problems, roles, or situations. Authentic assessments present students with tasks that are complex and require students to use a combination of different types of knowledge and skills. Authentic tasks may be messy, challenging students to deal with poorly defined problems similar to the roles and issues students will encounter in the real world. The problems may have multiple correct solutions; however, standards and criteria for assessing the possible range of correct answers, performances, or products should be clearly specified.

Describe the three major types of performance assessment and provide a rationale for using each type. Projects, portfolios, and presentations are three commonly used forms of performance assessment. A project is a long-term activity that results in a student product. Well-designed project tasks allow students to apply and integrate a wide range of abilities and knowledge and, if designed as group work, can give students the opportunity to develop skill in working cooperatively. A portfolio is a systematic collection of work that can capture the complexity and range of a student’s work. Because portfolios may include multiple samples of a student’s work collected over an extended period of time, they are an excellent tool for demonstrating progress. Presentations can take many different forms, can be used to demonstrate what has been learned over the course of a unit or program of study, and may require a combination of reading, writing, questioning, speaking, and listening.

Describe the three methods of systematically evaluating performance assessments. Checklists, rating scales, and rubrics provide preset criteria for evaluating student performance, thereby making grading simpler, more transparent, and more consistent. Developing one of these scoring systems in the course of designing a performance assessment helps teachers define expectations of what students need to know and to do. If given to students at the very beginning of an assignment, these scoring systems allow students to better understand the criteria for success. Checklists are the simplest system, because the teacher simply marks whether a particular behavior or skill is present or absent. Rating scales, which can be designed in graphic, numeric, or descriptive formats, allow the teacher to indicate the level or quality of a skill performed. Rubrics provide the greatest level of detail by specifying the criteria for each level of achievement.

Discuss the general advantages and disadvantages of performance assessments. The advantages of performance assessments include their consistency with modern learning theory (building on prior knowledge, active engagement, construction of meaning); the integration of knowledge, skills, and abilities; the ability to assess the processes students use, as well as their final products; and the ability to assess what students can do, not simply what they know. Unfortunately, high-quality performance assessments can be difficult to design and time-consuming to implement and grade. Performance assessments are less objective and thus have lower reliability than other measures and, if not well designed, may also have poor validity.








Key Concepts

analytic rubric, authentic assessments, best work portfolios, checklist, demonstrations, descriptive rating scales, exhibition, experiment, formative assessments, generic rubric, graphic rating scales, holistic rubric, inter-rater reliability, normative rating scales, oral presentations, performance assessment, portfolio, process portfolios, project, rating scales, rubric, summative assessment, task-specific rubric










Case Studies: Reflect and Evaluate

Early Childhood: “The Zoo”

These questions refer to the case study on page 458.

1. How do Vivian and Sanjay use performance assessment in their classroom?

2. Why might performance assessment be a particularly good choice for this classroom environment?

3. Would the creation of the zoo be considered an authentic task? Why or why not?

4. If Vivian and Sanjay were using portfolios as a form of assessment, what possible artifacts could be included? Try to think of at least five items that would be appropriate to include.

5. How could Vivian and Sanjay involve the students in the assessment process?

6. How could Sanjay have used checklists or rating scales to gather information about the students in the preschool session? What advantages would this method offer?

Elementary School: “Writing Wizards”

These questions refer to the case study on page 460.

1. Brigita uses many different performance tasks in her classroom to engage students in writing. What concerns about validity should she keep in mind when using performance assessments? Give specific examples.

2. What concerns about fairness should Brigita keep in mind when using performance assessments?

3. Which elements of Brigita’s assessment of student writing could be considered authentic assessment? Why?

4. Brigita developed an evaluation form (rubric) for invited guests to use to comment on the Young Authors submissions. What is the advantage of providing a structured rubric with preset criteria as opposed to simply asking for open-ended feedback from the evaluators?

5. If Brigita wants to be able to provide a score when evaluating each piece of student writing, what kind of scoring system should she use? Describe how this system works.

6. Based on what you read in the module, provide at least two suggestions for how Brigita could use computers in the assessment process.

7. If Brigita were using portfolios as a form of assessment, what possible artifacts could be included based on what was mentioned in this case?

Middle School: “Assessment: Cafeteria Style”

These questions refer to the case study on page 462.

1. Why might Ida’s project option be classified as a type of performance assessment?

2. What advantages does performance assessment offer the students? The teacher?

3. What disadvantages does performance assessment present for the students? For the teacher?

4. Why is it important to use a rubric when scoring student projects?

5. What was the purpose of providing students with a copy of the rubric in advance?

6. Is the project option a form of authentic assessment? Would your answer vary depending on what each student chooses to do for his or her project? Explain.























High School: “Innovative Assessment Strategies”

These questions refer to the case study on page 464.

1. What concerns might critics of the Oregon Senior Project in English raise about reliability, validity, fairness, and practicality?

2. How might the concerns in question 1 be addressed?

3. The third phase of the Senior Project in Oregon is a formal presentation before a panel of teachers and community members. What are the advantages of having students exhibit their work publicly?

4. Is the assessment approach used by the math teacher in California a type of authentic assessment? Why or why not?

5. What kinds of information does the assessment approach referred to in question 4 provide for the teacher?

6. What are some learning objectives that could be met with the oral history project used by the teacher in Rhode Island?

7. What guidelines could Joe share with his teachers to help them decide when to use some type of performance assessment and when to continue to use traditional tests?

















