The first article in this series presents Assessment Vocabulary, while the second discusses Differentiating Performance. Because assessment
is such a hot topic, many authors, organizations, and institutions are
currently publishing articles and research reports on this subject. Trouble
arises, however, with the vocabulary used in these publications, as many
words seem to be used differently in different reports. Because of this plethora
of confusing terminology, we have found it expedient to define our terms.
The meanings we associate with these words have developed from our own
assessment research with districts, schools, and teachers, as well as from
our work with the SouthEastern Regional Vision for Education laboratory.
However, we, as authors, must emphasize that we do NOT aver that our definitions
are the only true and correct ones! We simply understand that, if people
are going to discuss assessment, they must have a vocabulary in common.
For example, when we work with teachers in the field, we quickly
find that "portfolio" can mean many things to many people.
Some see a portfolio as a folder or notebook containing all of a student's
work. Others see a portfolio as a collection of a student's best
work. We contend that a portfolio is a purposeful collection of student
work and that the purpose should be determined prior to collection. Such
purposes may entail demonstrating growth, proficiency, or experience.
These differing definitions for the simple word "portfolio"
illustrate the importance of defining terms. In this chapter, we hope
to negotiate a shared vocabulary.
This article will provide insight
into the following topics:
Summative vs. Formative Evaluation
Reliability & Validity
The first word which
must be defined is "assessment" itself. Assessment is the act
of collecting information about individuals or groups of individuals in
order to better understand them. The twin purposes of assessment are to
provide feedback to students and to serve as a diagnostic tool for instruction.
In essence, assessment answers the following questions:
Did the students achieve the intended standards?
If the student did not achieve the intended standards, will the
feedback he/she received help improve the student's performance?
Was the instruction effective?
If the instruction was NOT effective, how can I, the teacher, improve
my instruction to meet the needs of all students?
The results of the assessment are shared with both the students and the
teacher. In this manner, should the assessment indicate a need for improvement,
students can explore new study strategies and teachers can search out
and implement new instructional techniques.
In researching assessment, it is not unusual to experience some confusion
between the terms "assessment" and "evaluation,"
as these terms seem to be used interchangeably by some authors. To our
minds, however, the two terms are not synonymous. Evaluation is a judgment
regarding the quality or worth of the assessment results. This judgment
is based upon multiple sources of assessment information. If each classroom
assessment is a snapshot of what students know and are able to do, these
snapshots can be collected into the photo album of evaluation.
However, the evaluative process goes beyond just collecting information;
evaluation is concerned with making judgments based upon the collection.
To continue the snapshot/photo album analogy, let's suppose that
this particular photo album belongs to a professional photographer. When
she applies for a job, the photographer brings along her photo album (a portfolio
of her best work). She has performed a personal assessment of each snapshot
within the album and has made a decision about whether or not to include
each one. Then, based upon the multiple examples of the photographer's
work present in the album, the prospective employer judges the overall proficiency
of the photographer. The photographer assessed her ongoing work, and the
prospective employer evaluated her worth as a photographer based upon multiple
examples of her work.
Evaluation is a summative process, whereas assessment is a formative one. The purpose
of formative assessments is to provide feedback to students as they progress
toward a goal. If this feedback is of a high quality, improvement in student
performance can result. Summative processes have more of a gate-keeping
function. For example, student applications to college are evaluated using
a summative process and students are either accepted or rejected via this
process. In the classroom, teachers use formative assessment on a daily
basis, and then use the more summative evaluation to recommend
report card marks at the end of a grading period. Unfortunately, the final
evaluation, the grade, can only be as good as the assessment
information which was collected. As the computer aphorism has it, "Garbage
in, garbage out." If a teacher is producing poor assessment snapshots,
the grade given will be of little use in determining what the students
know or are able to do. The following vignette illustrates this point:
As classroom teachers, we sometimes reflect on our own poor performances
as assessors and wish we could go back now and alleviate the
harm we unknowingly did our students.
For example, one year Nancy taught a young football star named
Len. Len did not pass the first quarter of chemistry and was
in danger of failing the second quarter. At this time, Nancy
was using the multiple-choice, pull-out tests that
accompanied the textbook to assess her students' understanding
of chemical concepts.
However, during the second quarter of work, Nancy decided that too
many of her students were failing her class and, consequently,
coming to view chemistry as a hateful subject. She decided
that she needed to give her students an opportunity to view
and appreciate chemistry as a lifelong learning experience.
She wanted to make chemistry enjoyable for her students!
During the study of the gas laws, she decided to change her assessment
strategy. Instead of taking a test, the students' assessment
experience would be a presentation of one of the gas laws. In
this presentation, each student had to show fellow students the
following: an authentic use of the gas law, a real-world application
of the law through a demonstration, and an explanation by example
of the formula for determining some relevant factor using the law.
Not all of Nancy's students were supportive of this change
in assessment. The gifted students, in particular, were opposed
to the change, since they had been very successful under the
old system. However, Nancy soon realized that many of the students,
including Len, were excited about the possibility of viewing
chemistry as it related to their real world.
After several days of facilitated classroom work, the day for
the actual assessment was close at hand. Nancy asked her students
to help in creating a rubric for scoring their presentations.
The students responded to this session with motivation and with
the attitude, "This is do-able, and I know exactly how I
will be assessed."
The day for the presentations arrived and Len came to class
carrying a bicycle tire, pump, soda pop cans, and posters. Len
stood in front of the class and demonstrated how his gas law
related to their work and how to calculate the pressure differences
using the formulas from their text. He made a tremendous mess
in the classroom with the soda pop, and for weeks his classmates
teased him about the sticky pop on their chairs! However, he
responded well to such teasing and became more confident and
comfortable within the chemistry class.
Len's teacher, Nancy, also learned a few things that presentation
day. The most depressing realization was that it had taken almost
two quarters of the school year to recognize the strengths and
talents of this young man. From his presentation, it was obvious
that Len understood some areas of chemistry even better than
his teacher! However, the assessments made to date had not revealed these talents.
Nancy changed her assessment and instructional strategies in
the classroom to meet the needs of all her multi-talented students.
She increased her
own assessment capacity through the use of multiple types of
assessments, rubrics, and student involvement in assessment
through peer- and self-assessment activities.
What happened to Len? He not only passed the second quarter
of chemistry, but passed for the year and went on to college.
The above vignette
illustrates how poor formative assessments can result in poor evaluations
of students. Nancy was using the same type of assessment over and over
in her classroom: a multiple-choice test. Obviously, Len's qualities
as a future chemist were not revealed through this type of assessment
process. It was only through changing her assessment methods that Nancy
came to realize Len's grasp of chemical concepts. This led her to
further exploration of alternative assessment techniques (and eventually
to co-author this text).
Reliability and Validity
This vignette also
illustrates a further use of evaluation. Evaluation can be applied to
the assessment process, itself, to determine if the assessments we have
made are relevant, reliable, and valid. If they are relevant, then the
assessments are tied to our classroom instruction. It would be highly
irrelevant to assess skills we have neither attempted to teach nor included
in our curriculum. That does not mean, however, that all assessments made
by all teachers are always relevant. For example, Debbie (a prospective
teacher and student in one of Susan's assessment classes) reports
the following experience:
I went to my daughter's school for Back to School Night,
the night parents can meet all the teachers. I was particularly
anxious to meet my daughter's science teacher, Ms. Steeple,
as my daughter was reporting academic difficulty in this class.
In fact, my daughter had told me to expect a failing grade in science
on her report card. During Back to School Night, Ms.
Steeple explained her grading practices and revealed that many of
her students currently had low marks in science. By way of explanation,
Ms. Steeple produced her analysis of this phenomenon. According
to her, most of the low cumulative grades could be attributed to
the low scores earned on the pre-test for the current unit. When
I questioned Ms. Steeple about WHY she would count scores
earned on a PRE-test, she was unable to answer my question. In fact,
she seemed to believe that ALL work should count. That
night, when I got home, I talked with my daughter about her grade
on this pre-test, and she showed me the huge red "53" scrawled
across the top of the paper. We celebrated with a trip to Dairy
Queen, after I explained to my daughter that she ALREADY KNEW 53%
of the material Ms. Steeple had not yet taught!
Debbie's story of her daughter's experience in science class is not, alas,
unique or uncommon. It is, however, a perfect example of irrelevant assessment.
Reliability involves the consistency of scores across evaluators, over
time, or across different versions of a test. An assessment is reliable
when the same answers receive the same score no matter when the assessment
occurs or who does the scoring, or when students receive the same
scores no matter which version of the test they take. To be valid, an
assessment must measure what it is intended to measure, rather than extraneous
features. An example of an invalid assessment of the ability to use a
microscope correctly would be to give a pencil and paper test on the parts
of the microscope. A more valid assessment would be to hand the student
a slide and have him/her focus the slide under low and high power.
Evaluation, then, is the method we use to rate the design of our assessments.
This process involves examining all the assessments we have made through
the lenses of the above-listed criteria. Evaluation is done periodically,
at specified times, whereas assessment is ongoing and continuous.
While the above explanations
hopefully clarify differences between evaluation and assessment, there
still remain pockets of confusion to address. For example, in the current
literature, "assessment" is not usually a stand-alone word. Many times
it is modified by a preceding adjective, as in "traditional assessment,"
"alternative assessment," or "authentic assessment."
Therefore, as we attempt to define assessment, we must also differentiate
among these terms. It is, perhaps, easiest to elucidate their meanings
through a process of comparison. We will begin with traditional assessment.
Traditional assessment is any type of assessment in which students
choose a response from a given list. Such assessments include the standard
true/false quiz or multiple-choice test so familiar to students. However,
matching exercises also fall under this category, as do fill-in-the-blank
activities, if students are given a "word bank" from which to
choose answers. In traditional assessments, then, students are expected
to recognize that one particular choice best answers the question asked.
In contrast to traditional
assessment, alternative assessment includes any assessment in which
students create a response to a question. Here, again, we find some stock
classroom activities, such as short-answer and essay questions. In both of
these exercises, students are called upon to respond to a question using
their own ideas, in their own words. Of course, these are not the only
activities which require student creativity in the classroom. Also included
within this category are musical recitals, theme papers, drama performances,
and student-made posters, art projects, and models, among many others.
It should be evident, then, that when we ask teachers to consider alternative
assessment, we are not asking them to invent new ways of assessing students.
The above list illustrates that many alternative assessments are already
in use in classrooms around the country. We simply hope to encourage MORE
teachers to use these types of assessment MORE often.
Therefore, the method used to answer questions is the primary difference
between traditional and alternative assessment. The critical criterion
for differentiating between these two types of assessment, then, is to
determine whether a set of possible answers is given to students. The definition
of authentic assessment, however, is not quite so clear cut.
Authentic assessment tasks are ones that elicit demonstrations of knowledge
and skills in ways that resemble real life as closely as possible.
When students participate in politically-oriented debates, write for the
school newspaper, conduct student government, club, or research group
meetings, or perform scientific research, they are engaging in "real-life"
tasks. Therefore, such activities would fall under the category
of authentic assessment. Students appear to learn best when they see the
importance for learning and when the learning environment is familiar
to them. Authentic scenarios can provide this environment and relevance
to students. To implement such authentic assessment, the teacher must
strive to assess students as they would be assessed in the work place.
Professions where advancement is based solely upon the results of periodic
pencil-and-paper tests are rare!
One further term,
prevalent in recent literature, is worth including in this chapter. This
is the term "quality assessment." When we are clear in our expectations
of students and share those expectations up front, we are practicing quality
assessment. Within quality assessment, it is also necessary to provide
good feedback to students, use assessment data to improve instruction,
and utilize a variety of assessment methods. One key to understanding
quality assessment is to view assessment as an ongoing, student-participatory
activity, not just as something the teacher does to students!
Before leaving these vocabulary words, a few examples may prove useful.
The following sample assessments might occur in a high school chemistry
class, at the end of a unit on acids and bases:
Which of the following is a property of an acid? It:
a) turns red litmus paper to blue
b) releases hydroxide ions in solution
c) tastes sour
d) feels slippery
The correct answer is c.

Explain the difference between an acid and a base.

Your mother took a TUMS tablet last night for acid indigestion.
Why? Trace the TUMS through her system, describing the
correct chemical reactions. Why did she burp? (SouthEastern
Regional Vision for Education, 1998)
A similar set of
assessments might occur in an elementary classroom, in which the instruction
is focused on handling money:
A student wishes to buy three apples. If each apple costs 11 cents,
how much money must she spend?
a) 31 cents
b) 22 cents
c) 33 cents

Noel has $1.00 to spend on candy. She wants to buy a lollipop for
herself and one for each of the other 10 players on her softball
team. Will Noel have enough money to buy these lollipops?

(Oral instructions) Jesse, take a $5.00 bill from your practice
(play) money to the classroom "store." Choose one
of the items in the store (nothing in the store costs over
$3.00) and pay the storekeeper. Noel, as the storekeeper,
you are responsible for giving Jesse his correct change.
These examples simply
re-emphasize the meanings of traditional, alternative, and authentic assessment.
They clearly show that traditional assessment allows students to choose
from a list, while alternative assessment requires the creation of a response.
Authentic assessment tasks are tied to everyday occurrences in the students' lives.
By defining these terms, we hope to clarify the meanings of these words
for the readers of this book. We must emphasize, however, that we do not
wish to imply that one type of assessment is "good" while another
is "bad," or that one type of assessment stimulates higher-order
thinking than another. There is a place for all types of assessment in
the classroom, and we have seen many challenging examples of all three
assessment types. One example of a thought-provoking multiple-choice question
is contributed by George Dawson, from Florida State University. This question,
used in a science class studying the flow of energy in systems, reads:
In winter, seven sailors are shipwrecked on a barren arctic island
which has water but neither soil nor vegetation. A crate of corn
flakes and one containing seven hens are also cast ashore with
them. In order to survive as long as possible, the sailors should:
a) feed the corn flakes to the hens as long as they last and then
kill and eat the hens.
b) kill and eat the hens and then eat the corn flakes.
c) feed the corn flakes to the hens and then eat the eggs which
the hens lay.
d) eat the corn flakes, giving none to the hens, and then eat
the hens when they die of starvation.
So as not to quash
any discussions, or heated debates, about this question, we decline to
provide the "correct" answer. However, we will
suggest that responders to the question consider the efficiency of conserving
energy through a food chain.
At the beginning of this chapter, in the definition of assessment, we used
another term which may have caused some confusion. We stated that assessment
helps determine if students have achieved some standard. This word, "standard,"
and its frequent companion, "benchmark," are used within the
current literature in a dizzying array of combinations and usages. From
an overview of many research reports and publications, we have compiled
a short list of terms which are used equivalently with standards and benchmarks.
Thus, we feel the need to clarify our meanings of these terms.
Standards are, simply, statements of what should be taught. They
establish a level of achievement, quality of performance, or degree of
proficiency expected from students. Standard statements are generally
rather broad in scope (see examples below). Benchmarks, on the
other hand, are used to explicate the standards. Benchmarks explain
what students must do to meet the standards; they focus on explicit behaviors
or particular products. State departments of education usually establish
these standards and benchmarks for courses taught in their public schools.
However, there are also national standards and benchmarks for many subject
areas, such as science, foreign language, English language arts, history, the arts,
health, civics, economics, geography, physical education, mathematics,
and social studies (Marzano & Kendall, 1996).
The following examples from diverse curricula may serve to help differentiate
between standards and benchmarks. The terms used below are authentic to
the source. Please consult the above table for clarification.
From the Sunshine State Standards in the area of reading, under the K-2
Language Arts curriculum (Florida Department of Education, 1996):
Standard: The student uses the reading process effectively.
Benchmark: Predicts what a passage is about based on its title and illustrations.

From North Carolina's Health Occupations curriculum for grades 9-12,
in the Biomedical Technology course (North Carolina Department
of Public Instruction, 1995):
Competency: Select a career path based on personal qualifications.
Objective: Develop a personal career path.

From the National Center for History in the Schools' History Standards Project
(Kendall & Marzano, 1997) for grades 5-6:
Standard: Understands the historical perspective.
Benchmark: Evaluates historical fiction according to the accuracy of its content
and the author's interpretations.
In the above examples,
it is clear that the first statement, whether designated as a standard
or a competency, is a broad view of what students should know and be able
to do. "Using the reading process effectively" could encompass
many skills. The second statements above help clarify which skills or
activities may be needed in order to ascertain if students have achieved
the written standard. Continuing with the elementary example, one of the
activities needed to determine if students can use the reading process
effectively is encompassed by making predictions about a reading passage.
In this manner, the benchmark provides greater specificity than the standard
about what students should know and be able to do. So, a benchmark lists
a particular activity that students must engage in or recommends the
creation of a particular product. These activities and products are then
used to move students toward the achievement of the overall standard.
Since the benchmark breaks the standard into smaller increments, it is
usual to find several benchmarks written for each standard. By completing
all the incremental units of the benchmarks, the students construct the
whole, which is the standard.
This chapter, then,
has served as an introduction to several assessment terms. We chose the
terms to include here based upon those used most prevalently in current
literature. From reading this text, we hope you have a clearer understanding
of these terms. We hope you have
profited from the examples given in this text.
References

Florida Department of Education. (1996). Sunshine state standards.
Tallahassee, FL: Florida DOE.

Kendall, J.S., & Marzano, R.J. (1997). Content knowledge: A compendium
of standards and benchmarks for K-12 education. Alexandria, VA:
Association for Supervision and Curriculum Development.

Marzano, R.J., & Kendall, J.S. (1996). A comprehensive guide to designing
standards-based districts, schools, and classrooms. Alexandria, VA:
Association for Supervision and Curriculum Development.

North Carolina Department of Public Instruction. (1995). Course blueprint:
Biomedical technology. Raleigh, NC: NCDPI.

SouthEastern Regional Vision for Education. (1998). Improving classroom
assessment: A toolkit for professional developers. Tallahassee, FL: SERVE.
So far, we have defined many terms. In doing so, we have classified assessment
into three categories: traditional, alternative, and authentic. Yet one
further term necessitates clarification. That term is "performance-based
assessment." This article will provide insight into the following topics:
Methods of Performance-Based Assessment
Logs, Journals, Notebooks, Portfolios & Projects
When we think of a performance, we may envision a musical recital, a concert,
or a play. However, a broader view is needed if we wish to understand performance-based
assessment. In performance-based assessment, student performances may
include all of the above examples, but are not limited to the arts. Driving
a car in driver's education class, making a speech in public speaking
class, constructing a birdhouse in industrial arts class: all can
be construed as performances.
In our work with teachers, we have found some confusion between "true"
performances and classroom activities. Many times, teachers
have students perform very enjoyable activities which do little to forward
the curriculum for the course. A true performance, conversely, demonstrates
student mastery of a portion of the curriculum. Therefore, in a true
performance, the instruction is linked to the curriculum, which is linked
to the assessment. This relationship is shown in the following graphic:
We have found this differentiation between activities and performances to be one
of the harder concepts to convey to teachers. This may be due to the fact
that teachers love a particular activity and wish to include it simply
because it is so enjoyable. One example that springs to mind is the chemistry
teacher who enjoyed making peanut brittle with her students at Christmas.
The scientific-sounding title of this activity was "Partial Degradation
of a Six-Carbon Sugar, Utilizing Protein Inclusions." Although it
sounded scientific, very little learning occurred and the activity was
not effective in forwarding the curriculum. Therefore, the construction
of the product of this activity (the peanut brittle) could not be classified
as a true performance. We have no quarrel with a teacher who wishes to
include such an enjoyable activity in her classroom repertoire, but we
do ask that only true performances be assessed.
Methods of Performance-Based Assessment
To assess such performances, we use prearranged criteria. The criteria are shared
with the students before the performance and are derived from the learning
outcomes or standards we wish to advance through our instruction. (The
development of such criteria is discussed later, in Chapter Seven of
this book, under the heading of Rubrics.) Therefore, performance-based
assessment is defined as the direct, systematic observation of actual
student performances according to pre-established performance criteria.
Several methods can be employed to obtain these direct, systematic
observations of student performances. Basically, teachers may observe
students during the performance, ask students questions both before and
after performances, and examine student work (McColskey & O'Sullivan,
1995). The observations of students may be formal or informal. Formal
observations tend to be more structured than informal ones. The teacher using
a formal method may collect data using an observation instrument, such as a
rubric for the performance. She/he is watching one particular student
or one particular group and comparing the behavior she/he sees to some
criteria. In an informal observation, no one student or group may be singled
out for observation, and the teacher may not have predetermined criteria
in mind. However, she/he may still collect data about students in the
form of anecdotal teacher notes or narratives.
For example, informal observations are made every day in the classroom.
It is through informal observations that teachers learn about their students.
They may learn that Jackie tends to wander away from her group or that
Matt never volunteers information. In contrast, formal observations are
more likely to occur only on special occasions, when the stage has been
set, so to speak. Formal observations occur most often during culminating
events, as at the end of a unit or project, and are often termed summative assessments.
Questioning students is another method teachers use to assess performance. These questions
can be oral or written, with one right answer or many possible answers.
The student may be asked to participate in an interview, write a paper,
write essays, complete a survey, give an oral report, perform self-assessments,
perform peer assessments or simply respond out loud to short-answer questions
in class. Through acute questioning techniques, a teacher can assess what
students know and are able to do. The responses to such questions, then,
can be classified as performances and questioning, without a doubt, can
be a performance assessment.
Although responding to questioning can be a performance, other classroom
activities are more commonly classified in this area. These activities
include demonstrations of skills and knowledge, role-playing activities,
construction of models, projects, exhibitions, and compilation of journals
or portfolios. Each of these activities has its own definition and its
own set of assessment criteria.
For clarity, the following graphic classifies types of assessments under
two headings. The "Selected Response Items" are those which
are more traditional in nature and require students to choose answers
from a given list. On the right side of the graphic, "Performance-Based
Assessments" are broken into three sub-categories. These categories
include Constructed Responses, Products, and Performances. (See the following table.)
Assessment Approaches and Methods
While the approach to assessment may originate from either these traditional
methods or from the performance-based methods, this portion of Chapter
Five concentrates on the tools of performance assessment. These tools,
Constructed Responses, Products, and Performances are used
in student assessment to determine what students know and are able to
do. The tools involve real world skills and are interdisciplinary in nature,
in that they can be used for assessment purposes in all subjects. The
use of such tools constitutes one further example of how assessment can
be integrated with instruction. These tools enable students to become
more skillful in lifelong tasks and to demonstrate their understandings
of content-related concepts.
In the above table, the Constructed Responses heading includes such
tools as short-answer sentences/paragraphs, diagrams/illustrations, graphic
organizers, graphs/tables, and matrices. A brief definition and sample
are given below for each of these tools. Then, an overview of the assessment
utility of each tool is elaborated.
Short answer sentences/paragraphs may be used to answer either
open response or open-ended questions. Open response questions are currently
more commonly used within school settings. Such open response questions
call for student-constructed responses, but generally possess a correct
answer. For example, "Who invented the electric light bulb?"
is an open response question. Students do not choose their
answer to this question from a list, but must construct a response. However,
those students who answer with a name other than Thomas A. Edison will
most likely find they have responded incorrectly!
Open response questions are frequently used to check comprehension or
to explore the recall of information by students. However, such questions
have utility beyond this scope. Essay questions which encourage students
to synthesize information or to evaluate solutions would be examples of
the use of higher-order thinking skills in open response questions. For
example, a business management teacher, distributing information from
various banking institutions, might ask students to choose the best account
for a specified business. Students would review the needs of the business
in order to choose the correct account.
In contrast to open response questions, open-ended questions have more
than one right answer. Students are expected to justify their answers,
based upon concepts learned in class. For example, "What two physical
characteristics would be most suitable for supporting and sustaining life
in animals living within the arctic tundra?" is an open-ended question.
Like the open response question, the open-ended question fits under the
Constructed Response category, in that students do not select
their answers from a given list. However, this sample question demonstrates
that there are many possible avenues for responses. To answer the question,
students may focus on protection from cold (fur, feathers, fat), adaptive
feeding mechanisms (claws, teeth, jaws, beaks), or locomotion devices
(hooves, webbed appendages, wings). As long as students actually discuss
physical characteristics (and not such adaptive behaviors as hibernation)
and relate such characteristics to conditions within the arctic tundra,
a variety of responses can be correct.
Open-ended questions encourage original, imaginative, and creative student
thought. The nature of the question usually demands such critical thinking
skills as analysis, synthesis, or evaluation. In fact, the most common
use of open-ended questions in assessment is to uncover the thinking processes
involved in student decision making.
Like open-ended or open response short answer questions, diagrams and
illustrations are examples of student-constructed responses. These use
pictorial displays to uncover student knowledge. The uses of diagrams/illustrations
for assessment purposes are varied. Students may diagram the set-up of
equipment for a chemistry experiment, or draw a blueprint for a house.
They may create computer-assisted drawings, showing intricate details
of engineering projects. They may draw a picture of an animal to answer
the "arctic tundra" question above. When any of these techniques
are used for assessment, however, the criteria for judging each should
be clearly communicated to students via a rubric distributed prior to
the creation of the diagram or illustration.
Graphic organizers, according to Burke (1994, p. 118), are "mental
maps that represent key skills like sequencing, comparing and contrasting,
and classifying and involve students in active thinking." The most
common types of graphic organizers used in schools are webs and concept
maps. Such tools help students to see the connections between and
the differences among concepts. The following simple concept map may illustrate
this use of graphic organizers:
Webs are simply more complex concept maps, in which there are many cross-linkages
between a variety of concepts. The above simple concept map could be converted
to a web by adding more concepts. For example, other animal classifications,
such as reptiles, birds, and fish, could be added. There would be many cross-linkages
between such groups, which would help students to visualize the differences
and similarities among animals in the animal kingdom. Venn Diagrams,
such as the one introduced in chapter one (depicted here, again) are also
often categorized as graphic organizers. Like concept maps and webs, Venn
Diagrams show the relationship between ideas. However, Venn Diagrams are
uniquely adept at illustrating the overlap between concepts. One example
of a Venn Diagram appeared as the "Big Picture" graphic at the beginning
of this chapter. In this Venn Diagram, we see that where curriculum, instruction,
and assessment overlap, we achieve "Quality Education."
Flow charts, another type of graphic organizer, may be used to encourage
students to sequence events. Students may be asked to transform the written
directions for a scientific experiment into a flow chart before beginning
the procedure. Through the use of this technique, the teacher assures
that students read the procedure and understand the steps. The following
illustrates part of the experimental procedure for a Geiger counter lab.
The flow chart clearly shows the sequence of events to follow. It also
helps the student to organize data collection points.
Flow charts are also commonly used in order to classify objects into categories.
For example, many field guides use flow charts to help differentiate between
species of animals (birds, insects, snakes, etc.). The simple flow chart
below shows the differentiation between a square and a rectangle:
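The yes/no decision logic of such a classification flow chart can also be sketched in code. The following is a minimal illustration of our own (the function and variable names are assumptions, not from the text), walking the same square-versus-rectangle decisions:

```python
def classify_quadrilateral(sides):
    """Walk a simple yes/no flow chart to tell a square from a rectangle.

    `sides` lists four side lengths for a shape assumed to have four
    right angles (the condition the flow chart starts from).
    """
    if len(sides) != 4:
        return "not a quadrilateral"
    # Flow-chart question 1: are opposite sides equal in length?
    if sides[0] != sides[2] or sides[1] != sides[3]:
        return "neither square nor rectangle"
    # Flow-chart question 2: are ALL four sides equal?
    if len(set(sides)) == 1:
        return "square"       # yes -> square
    return "rectangle"        # no  -> rectangle

print(classify_quadrilateral([3, 3, 3, 3]))  # square
print(classify_quadrilateral([3, 5, 3, 5]))  # rectangle
```

Each `if` corresponds to one diamond in the flow chart, which is why such charts translate so directly into step-by-step procedures.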
Such flow charts may use "yes" or "no" questions to direct
users toward a classification. Computer manuals often include these types
of flow charts in their troubleshooting sections, to encourage
users toward a step-by-step analysis of a problem.
No matter the form of the graphic organizer (map, web, Venn diagram, flow
chart) used for assessment, all such organizers explicate student understanding
of concepts. Graphic organizers are particularly helpful in diagnosing
student preconceptions or misconceptions.
Like the other forms of constructed responses previously discussed, graphs
and tables help make the thinking of the students evident to the teacher.
Students within an elementary classroom might be asked to construct a
graph to answer the question, "What shoe type are most of the students
in this classroom wearing today?" The graph would then show the relative
numbers of sneakers, dress shoes, boots, etc. worn that day. To aid
younger elementary students in constructing the graph, stickers of shoes
could be placed on a grid. Such a graph could be used to assess counting
skills. Color recognition could also be tested in this manner, with a
simple change in the original question.
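The tallying behind such a classroom graph can be sketched briefly. The survey data below is hypothetical (the text supplies no actual counts); each asterisk stands in for one sticker on the grid:

```python
from collections import Counter

# Hypothetical class survey data -- the chapter does not give real counts.
shoes_today = ["sneakers", "sneakers", "boots", "dress shoes",
               "sneakers", "boots", "sneakers"]

tally = Counter(shoes_today)
for shoe_type, count in tally.most_common():
    # One asterisk per student, mimicking one sticker per square on the grid.
    print(f"{shoe_type:12s} {'*' * count} ({count})")
```

The same tally-then-display pattern works for the color-recognition variation mentioned above; only the survey question changes.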
Another use of graphing might be to display data collected in a science
experiment. One first grade teacher shared such an activity with the authors.
She began a science activity by removing the lid from an electric popcorn
popper. She then placed the popper on a tablecloth spread on the floor
of the classroom. Then, she placed popcorn in the uncovered popper and
turned it on. When the corn popped, some of it landed in the popper, some
landed on the tablecloth, and some landed in the far corners of the room!
She asked students to count the pieces of popcorn in each location and
then graph this information. The graph helped students visualize the relative
numbers of kernels landing in the differing locations. After students
constructed their graphs, the teacher conducted a whole class discussion
on the findings. By using the graphed data, the teacher was able to elicit
descriptions of the popcorn based upon energy. For example, the "high
energy" popcorn landed farther away from the popper than "low
energy" kernels. In this one assessment activity involving the construction
of a graph, then, the teacher was able to teach graphing skills, reinforce
counting skills, and introduce the science concept of energy.
Tables, like graphs, are often used to help students discern patterns.
A science teacher may ask students to record weights and volumes of pure
substances (carbon, tin, lead, zinc) in a table next to the given density
of the substance. From this data, the teacher may ask that students hypothesize
the mathematical relationship among the three quantities. Similarly, a law
enforcement class studying forensics may collect measurements of footprints,
strides, and height. This class would then be asked to derive a relationship
between foot size and height or stride length and height. In this manner,
organizing the data into tables would help students see the patterns embedded
in the data. Therefore, the assessment activity of creating the table
would forward the teaching of a new concept.
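The relationship the science students are expected to uncover is density = mass / volume. The short sketch below uses illustrative masses and volumes of our own (the text gives no measurements) alongside handbook densities, to show how the tabled data reveals the pattern:

```python
# Each row: substance, mass (g), volume (cm^3), handbook density (g/cm^3).
# The masses and volumes are illustrative values, not data from the text.
samples = [
    ("carbon", 4.52,  2.0, 2.26),
    ("tin",    14.62, 2.0, 7.31),
    ("lead",   22.68, 2.0, 11.34),
    ("zinc",   14.28, 2.0, 7.14),
]

for name, mass, volume, given_density in samples:
    computed = mass / volume
    # The pattern students should spot: mass / volume matches the density.
    print(f"{name:6s} mass/volume = {computed:5.2f}  given density = {given_density:5.2f}")
```

The forensics example works the same way: tabulating foot length next to height invites students to hypothesize a proportional relationship between the two columns.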
Tables may also be utilized to organize data into easily reviewed units.
This particular use of tables is often evident in math classes in which
students are solving word problems. For example, a teacher may require
students to construct simple tables like the one below, utilized for a
rate times time equals distance problem:
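A minimal sketch of such a rate-time-distance table follows, using an example word problem of our own (the original figure is not reproduced here):

```python
# Hypothetical word problem: a car travels 55 mph for 3 hours;
# a cyclist travels 12 mph for 2 hours. Organize rate * time = distance.
rows = [
    ("car",     55, 3),   # (traveler, rate in mph, time in hours)
    ("cyclist", 12, 2),
]

print(f"{'traveler':10s}{'rate':>6s}{'time':>6s}{'distance':>10s}")
for traveler, rate, time in rows:
    distance = rate * time          # the relationship the table encodes
    print(f"{traveler:10s}{rate:6d}{time:6d}{distance:10d}")
```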
Such tables help
the student organize the data given in the problem, and help separate
the relevant information from the irrelevant. A table like this one can
be an assessment tool because it offers valuable feedback to the teacher
on student understanding of the problem solving process. Similarly, a
table can clarify information that appears confusing when written in paragraph
form. The sample paragraph below illustrates such a convoluted, bewildering
mass of data:
Sesame Street students took three tests during the first nine weeks of school.
Big Bird made a 52 on the first test, but improved to a 98 on the
last test. Kermit scored an 87 on the second, "Animal Kingdom," test,
but only received a 72 on the last exam, "Energy." On the "Magnets"
test, Cookie Monster received a score of 78, but Kermit made a 48.
Big Bird and Cookie Monster both received the same grade on the
second test, which was 5 points higher than Kermit's score
on this test. On the final test, Cookie Monster made an 88. Which
student showed continuous improvement in test scores?
Until a table of the data is constructed, it is hard to see that Big Bird made
the continuous improvement. However, this fact is easily perceived by
studying the table below:
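Since the table itself appears as a figure not reproduced here, the same check can be sketched programmatically. The scores below are reconstructed from the paragraph's clues (the second-test scores for Big Bird and Cookie Monster are Kermit's 87 plus 5, i.e., 92):

```python
# Scores reconstructed from the word problem above.
tests = ["Magnets", "Animal Kingdom", "Energy"]
scores = {
    "Big Bird":       [52, 92, 98],
    "Kermit":         [48, 87, 72],
    "Cookie Monster": [78, 92, 88],
}

for student, marks in scores.items():
    # Continuous improvement means each score beats the one before it.
    improved = all(a < b for a, b in zip(marks, marks[1:]))
    print(f"{student:15s} {marks}  continuous improvement: {improved}")
```

Laid out this way, Big Bird's steady climb (52, 92, 98) is immediately visible, which is exactly the clarity the table provides.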
A teacher might
ask students to construct a table like the one above in order to assess
reading skills or charting skills. When the data is even more complex,
however, or when detailed comparisons of data sets are needed, a matrix
takes the place of a simple table (see the Bloom's Taxonomy and
Multiple Intelligences matrix below).
The matrix above plots levels of Bloom's Taxonomy versus Gardner's
Multiple Intelligences. Teachers wishing to incorporate questions and
assignments which support both of these educational theories may use the
matrix to plan instruction.
Matrices, like tables, graphic organizers, illustrations, diagrams, and
short answer questions, can all serve as assessment tools in the classroom.
Through the use of such tools, the teacher integrates the instruction
with assessment, thereby advancing the study of the content within the
assessment activity. All of the tools discussed so far fall under the
"Constructed Responses" sub-category. We now move from this division to
tools which are classified as "Products." While many products
from this list are familiar to teachers, several warrant detailed analysis.
Within this portion of the chapter, we will distinguish the characteristics
and uses of logs, journals, notebooks, portfolios, and projects.
Logs, Journals, Notebooks, Portfolios & Projects
A log provides documentary evidence of events and may also show the
progression of such events. One example of a log is a patient's hospital
chart. The ward clerk "logs" medications, laboratory testing
results, and therapy in this chart in order to establish a record of the
patient's medical treatment. Police stations utilize logs to protect
evidence from contamination, loss, or misuse. Business people "log
on" to protected computer sites in order to obtain or distribute
sensitive or private information. In schools, logs may be used in assessment
to verify student actions. For example, students may be asked to keep
scientific logs while running science experiments. The addition of growth
factors to plants, as well as the recorded heights of the plants at specified
intervals, are examples of data included in such logs. A detailed log can
also help convince a teacher that, indeed, the student performed certain
actions. In addition, it can reveal the exact nature of those actions.
Because of their documentary properties, logs are frequently utilized to
support student assertions or conclusions.
Journals are similar to logs, in that they provide a record of
the progression of events. Generally, however, journals do not have the
legalistic, evidentiary purpose of a log. The Diary of Anne Frank is one
example of a journal. While this diary documents events, it flavors those
events with the opinions, feelings, and perceptions of the author. This
"flavoring" of data with the consciousness of the author is
what makes journals so useful to teachers. In the "Good Morning,
Miss Toliver" video (Foundation for Advancements in Science &
Education, 1993), Ms. Toliver asks her math students to record "what
you learned today." By reading the journals, Ms. Toliver can ascertain
not only which concepts were conveyed to students, but the level of understanding
of the concepts achieved by her students. She can also uncover any frustrations
that the students are experiencing. Tobias (1993) encourages the use of
math journals with students endeavoring to overcome math anxiety. When
confronted with a difficult math problem, students are asked to write
down their feelings and any information they think might be pertinent
to the problem. The students' writings help Tobias diagnose misconceptions
and knowledge gaps, while helping direct the students' thoughts toward
possible solution avenues. One of the authors uses an electronic Dialogic
Journal with students enrolled in World Wide Web-based classes.
Such students may never meet in person, but are expected to collaborate
in preparing on-line conferences, participate in on-line discussion groups,
and produce solutions to contextual problems conveyed to them over the
Web. The Dialogic Journal is, in essence, a "chat room" for a small group
of students. In the journal, students may discuss class assignments, difficulties
with the Web technology, and concerns and issues related to the class.
The teacher uses the captured discussions to assess the classroom environment
experienced by these "virtual" students.
Notebooks, like logs and journals, are composed of written documents.
However, a notebook is similar to a file folder, in that it commonly holds
a collection of all information pertinent to a topic. A cookbook is one
example of a notebook. Recipes within the cookbook are divided into such
categories as breads, vegetables, entrees, and desserts. Other sections may also
be present, covering such topics as weights and measures, planning meals,
presentation of dishes, or table settings. However, all the information
present in the book is related to preparing food. A good cookbook will
contain all the information necessary to help the reader prepare and serve
delicious, nutritious meals and thereby become a successful cook.
Students sometimes create their own "cookbooks" within the school
setting by voluntarily compiling notebooks for their classes. A science
notebook, for example, may contain lecture notes, copies of science exams,
lab reports, and completed homework assignments. This science notebook
is a file folder for all the science information presented in the class.
The student collects this information in hopes of utilizing the notebook
as a study guide for upcoming exams. The completeness of the information
is of primary importance to such a student, if he/she wishes to become
a successful science student. A missing piece might result in a gap in
knowledge, which could adversely affect the students exam score.
Occasionally, teachers will require students to keep a notebook.
The purposes of this assignment are varied, but may include teaching organizational
skills to students, verifying that ungraded assignments are being completed
by students, providing parents with a collection of all student work,
and producing documentary evidence to justify a grade. In such teacher-required
notebooks, the teacher usually sets the parameters for the collection
of data. In fact, he/she may provide handouts to students, specifying
assignments to be collected and placed in the notebook.
Notebooks may be helpful for assessment purposes, as they contain the
totality of work produced by a student. Such data can be analyzed to track
student performance over time, determine particular content areas/concepts
in which the student experiences difficulty, or serve as a basis for student
portfolios.
Portfolios, like notebooks, are collections of student work. However,
a portfolio is defined as "a purposeful, integrated collection of
student work showing effort, progress, or a degree of proficiency." Portfolios
are often defined by the purpose underlying the collection of artifacts.
For example, before the current boom in portfolio use in the classroom,
artists and models assembled portfolios. Such portfolios could be classified
as Best Work portfolios, as the owner of the portfolio included only those
drawings or photographs which best displayed his/her talent. Scrapbooks
and photo albums constitute another class of portfolios in common usage.
The artifacts in these portfolios are a collection of mementos, so this
class of portfolio is known as the Memorabilia portfolio. Almost everyone
has assembled this type of portfolio, at one time or another. If doubtful
of this fact, empty your wallet or purse onto a tabletop. Then, divide
the contents into useful articles and mementos. You will be surprised
at the number of items you carry on a daily basis that have no practical
use in everyday life!
Growth portfolios are a third category of portfolios. In such portfolios,
the emphasis is on change. The elementary writing portfolio below demonstrates
how the writing of an elementary student changed from second grade to
third. Note that in the second grade sample, on the left, the student
has self-assessed (in red) and corrected one capitalization error and
one spelling error (but missed correcting the spelling of "pregnant").
In the third grade sample, on the right, the student has moved from writing
about family events to addressing the President of the United States!
The writing is now in cursive, and longer sentences (albeit run-on sentences)
appear.
Another type of portfolio is the Skills Portfolio. Here, the owner of the
portfolio assembles documentation to verify that he/she is proficient
at a particular skill or set of skills. For example, many states require
beginning teachers to assemble a Skills Portfolio, documenting evidence
of effective teaching. Veteran teachers applying for National Board Certification
compile similar portfolios to demonstrate advanced competencies in teaching
skills. In some high schools, students are encouraged to collect work
samples that demonstrate employability skills. These student portfolios
serve the same purpose as a resume, in that the student provides the potential
employer with a copy of the portfolio. In this manner, the prospective
employer can ascertain what the applicant knows and is able to do.
Each of the four types of portfolios, Best Work, Memorabilia, Growth,
and Skills, has utility as assessment tools. Asking students to assemble
Best Work portfolios encourages self-assessment and builds self-esteem.
The Best Work portfolios can then be displayed at parent conferences,
during Open House Night, or in the school's display case or media center.
Memorabilia portfolios appear at first to have no function in the school
setting. However, the Health Occupations Students of America (HOSA) club
requires such a portfolio, documenting club activities for those chapters
wishing to be awarded "Chapter of the Year" honors. Personal
Memorabilia portfolios may also be assembled by students in art classes,
or used in language arts courses to stimulate writing.
The elementary writing portfolio shown above is a classic example
of the use of a Growth Portfolio in a classroom. At the end of second
grade, the portfolio was passed to the third grade teacher. By reviewing
the portfolios, the third grade teacher could assess the writing skills
of the students, and then plan her writing curriculum for the year. Of
course, this writing Growth portfolio also makes a cherished keepsake
for a parent! One of the authors also uses a Growth portfolio as the primary
assessment tool for a directed individual study course she teaches. This
course, for licensed health professionals, encourages the student to learn
new skills within his/her own health field.
The portfolio begins with a letter from the student's supervisor,
noting that the knowledge in the portfolio is new to the student and not
currently a part of his/her routine daily duties. Then, the student writes
a five-page research paper on a new medical procedure, treatment, or process.
In addition to this research, the student must create an assessment instrument
for his/her own job performance and then use this instrument to perform
a self-assessment. Two other artifacts in the Growth portfolio highlight
the potential of the student. One requirement is a resource list of healthcare
professionals within the student's licensure area, two of whom must
be at the national level (such as officers of national professional organizations).
The other requirement is for students to create a researched list of potential
job opportunities related to the health care field, for which the student
will qualify once he/she receives his/her degree. Once the portfolio is assembled,
it not only documents new knowledge the student has acquired within one
semester, but also serves as a planning tool for the student's future.
The Skills portfolio,
of course, has a multitude of uses in the classroom. Long's (1997)
students designed their own report card and then documented their
grades through the use of a Skills portfolio. The "report card"
used a Likert scale:
1 = Most of the time (seldom needs help)
2 = Sometimes (may need help to get started)
3 = Beginning to do (needs help to complete work)
4 = Does not complete work (Long, 1997, p. 37)
To support these
descriptors, in math, for example, students collected artifacts to demonstrate
the "Uses basic operations" skill (Long, 1997, p. 37). The students
then assessed their own work, and assigned one of the above descriptors
to this skill.
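One way to picture Long's student-designed report card is as a simple data record. The sketch below is our own illustration (the skill-to-level assignment is hypothetical, not from Long):

```python
# Long's (1997) four descriptor levels, as quoted above.
descriptors = {
    1: "Most of the time (seldom needs help)",
    2: "Sometimes (may need help to get started)",
    3: "Beginning to do (needs help to complete work)",
    4: "Does not complete work",
}

# A hypothetical student self-assessment: each skill maps to a chosen level.
report_card = {"Uses basic operations": 2}

for skill, level in report_card.items():
    print(f"{skill}: {level} = {descriptors[level]}")
```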
Butler describes the design of the Skills portfolio used in her tenth
grade chemistry class in the following manner:
"The first portion
of the portfolio was based on the Florida Department of Education Course
Student Performance Standards. I distributed these standards evenly
into four Learning Activity sheets for students. Students were given
one Learning Activity sheet for each of our four nine-week grading periods.
The sheets were used as a basis for student "proofs." "Proofs" constituted
documentation that the knowledge described in the Learning Activity
had actually been achieved by the student" (Butler, 1997).
Thus, students had to "prove," or collect evidence to document, the
successful attainment of a particular skill. Sample skills from this
chemistry-related portfolio included "Explain the organization of
the Periodic Table of Elements," "Determine atomic number and mass number
when the number of protons, neutrons, and electrons is specified," and
"Solve stoichiometric problems" (Florida Department of Education, 1991).
A sample of student
work can be viewed below:
When using any type of portfolio in the classroom, however, it is important
to plan the purpose for use, the design, and the assessment of the portfolio.
The following list of design questions is recommended by the authors:
1) Is the purpose
of the portfolio instructional in nature (to support learning), or is
the portfolio assembled only for assessment purposes? Such questions
will help determine the types of artifacts the students will collect.
A portfolio assembled solely for summative assessment purposes may contain
"best work" pieces, rather than a continuum of student work.
2) What goal am
I trying to attain by using portfolios in my classroom? For example,
am I trying to promote self-assessment? Student reflection on their
own learning? Problem solving? Particular skills? Higher order thinking?
The goal for the portfolio will determine the design of the portfolio.
For example, if you are promoting self-esteem, a Best Work portfolio
appears appropriate. If, on the other hand, you wish your students to
prove their proficiency at content-related skills, a Skills portfolio
is indicated. A portfolio promoting student reflection would contain
many subjective, journal-like artifacts, whereas a problem-solving portfolio
might contain more objectively focused work.
3) What types of
artifacts will be collected? Will only written work be accepted, or
will videotapes, posters, computer disks, etc. be acceptable? How many
artifacts are necessary for documentation of a skill, goal, or purpose?
The decisions you make here will impact the size of the portfolio and
its physical characteristics. If you envision a notebook, then perhaps
only written work can be accepted. If you are planning an electronic
portfolio, all data may be stored on a disk. The decisions you make
here will be influenced by the storage capacity of your classroom! We
recommend the use of single entries to "prove" a skill, rather
than multiple entries. Surely if the student was successful once, he/she
can be so again! The number of artifacts also defines the type of tool.
If ALL student work is collected, the tool is a notebook, not a portfolio.
4) How will artifacts
be selected? Will the students select them or will the teacher select
them? How often will the portfolio be updated (artifacts added)? Must
the students keep copies of all potential portfolio artifacts, or will
the teacher maintain a file for this purpose? What are the criteria
for selecting artifacts (how will you or the students decide if a particular
artifact documents a skill, purpose or goal)? For example, if you wish
the portfolio to promote self-assessment, the students should choose
the artifacts. Older students may keep their own working files, while
younger ones may need help with this process, as they have not yet developed
the necessary organizational skills.
5) How will you
orient students to the use of the portfolio? We recommend that you start
slowly, giving students plenty of support and practice. A structured
portfolio, in which expectations are explicated to students, is recommended
over a more open, unstructured design. Remember that change is difficult,
and be prepared for some student resistance to this new procedure in
your class. Perseverance and consistency are two key factors in the
success of portfolio implementation.
6) How will the
portfolio be assessed? A scoring guide or rubric is essential for this
task, and this guide should be shared with students before data collection
begins. Constructing the guide before assigning the work will prevent
student frustration, enhance the matching of the purpose to the artifacts,
and ease the assessment task for the teacher. It is much simpler to
assess an assignment if the plan for assessing is written beforehand.
In this manner, you do not put off tackling that "huge mound
of papers," dreading the moment when you must confront the task!
Once these design
questions are addressed, the portfolio can be planned and implemented.
Although it may seem to be a monumental undertaking, the research indicates
that portfolio assessment reaps huge benefits for the students and for
the teacher. The following quotes demonstrate some of these rewards:
"As I watch my students'
portfolios fill up with entries each year, I become more knowledgeable
about my students as individual learners (and people). This knowledge
helps me effectively re-teach difficult math concepts and hone in on
particular areas of weakness" (Clarkson, 1997).
"As I reflect on my first year of portfolio activities, I am pleased
with the increased commitment to learning I have begun to foster in
my students. I have always felt that students need to feel safe, secure,
and appreciated for their uniqueness before they take risks to try new
ideas. Portfolios put this belief to the test" (Long, 1997).
"Perhaps the thing I like best about portfolios is the time I gain for
planning, instruction, and interacting with students. I am freed from
collecting and grading papers all the time, because I don't look
at every piece of work my students complete" (Williams, 1997).
From comments such
as these, it is easy to see the advantages of well-planned portfolio implementation.
Like portfolios, projects fit under the "product" category of
performance-based assessment. Projects can be defined as a compendium
of complex assignments, each directed toward a common goal. Projects,
like all performance-based assessments, should be designed and selected
to teach core curriculum content standards and should be scored using
a rubric shared with students up front. Students
should be given some choice as to the activities they will perform or
the roles they will assume within the project. In addition, students should
be required to meet interim deadlines for the project (which will aid
the procrastinating student), to participate in planning the project (aid
for the disorganized student) and to reflect on project activities (aid
for the surface learner).
Of course, projects
at different grade levels will vary in level of difficulty. The following
examples may help in further clarifying the word, "project."
Elementary School Level
Students study the origin of holidays and participate in meaningful
activities, such as planting a tree for Arbor Day, visiting a veterans'
hospital on Memorial Day, etc.
Students contribute articles to the "Daily News," a compilation
of student essays about class activities which is distributed to parents.
Students study the systems of the body and make life-size posters showing
the location of major body organs.
Students plan and design an appropriate backyard play area for a pet.
Middle School Level
Students design and build model racecars to test the effect of tire
sizes, gear ratios, and body design.
Students compete in science competitions in which they design and perform
experiments to answer a research question.
Students plan, write, and produce a video based on a historical event
or upon literature.
History students produce a museum exhibit by collecting folk songs surrounding
a particular time period or historical event (Civil War, pioneer life, etc.).
High School Level
History students plan and raise funds for a Roman Tour of Great
Trade and industry students design, build, and sell a house.
Science students reclaim an endangered estuary through clean-up efforts
and then turn the estuary into a "living classroom" for elementary students.
Language Arts students write a Canterbury-like tale concerning modern
American teenagers hanging out at the local fast food restaurant.
As evidenced by the
above lists, projects can be the result of both cooperative work and individual
effort. No matter which type of project is implemented in the classroom,
however, Lewin and Shoemaker warn of some common project pitfalls.
The following are their descriptors of these negative phenomena:
The Razzle Dazzle:
The performance has a lot of flash but no substance.
The Parasite: The parents pick the topic. The student may do
the work but has no interest or ownership in the project. Moms or Dads,
however, get to live out their dreams/interests/fantasies through their
children.
The Scaffolding: The student picks a project of personal interest, but may
not do any of the actual work. It is difficult to determine how much
scaffolding (shoring up) by others (usually parents) occurred.
The Fizzle: Not enough guidance or direction is provided. The
task is assigned, and students are expected to miraculously produce
a fantastic project in six weeks. They rarely do.
The Celebration: This category results from an erroneous belief
that performances should be showcases (festivals, parties, or
other gala events) without evaluation. Everyone should be honored
no matter the quality of the work.
The Uneven Playing
Field: Some students draw from many resources (e.g. parents, computers,
libraries, and art supplies) in creating their projects, while other
students draw from few or no resources.
Near Death: Teachers, near exhaustion, walk around school with
glazed-over eyes mumbling, "Why did I do this to myself? I will
never do this again!" (Lewin & Shoemaker, 1998, p. 104)
It is the responsibility
of the teacher to minimize such project pitfalls and to maximize the learning
experience for the students. This must be done in the design phase of
the project assessment. Criteria for quality design will be elucidated
in Chapter Nine of this text.
The final category of Performance-Based Assessments on our graphic brings
us back to where this chapter started, with performances. The performances
listed here include musical recitals, dances, dramatic readings, enactments,
etc., all of which we have no trouble recognizing as performances.
However, in order to make such performances work as performance-based
assessments, they must conform to the definition of performance-based
assessments, in that they must be judged according to pre-established
performance criteria. Of course, like all quality assessments, they
should also help forward the instruction of key curricular concepts.
One example of a performance not listed on the graphic is the writing
and then the oral defense of a doctoral dissertation. At first, it seems
to make little sense to include such an example in a text designed for
teachers of primary, middle school, and secondary school students. However,
a performance not unlike that of the doctoral candidate is rising in prominence
among high school students. This performance is the Senior Project.
SERVE defines the Senior Project program as a performance assessment
and requirement (basics to honors) for 12th graders with these three components:
- A research paper on a topic of the student's choice
- A project related
to the paper
- A presentation
to a panel comprised of community members and school staff (SERVE,
1999, p. 2).
The research paper serves to demonstrate the acquisition of new knowledge
through the use of researching, writing, interviewing, or other complex
skills. The second phase of the work, the project, is an extension of
the research paper in which students work with a mentoring adult in a
field related to the topic. Finally, students present their work to a
panel of judges in a question and answer format. Through such activities,
students demonstrate complex knowledge and skills. These skills include:
- Gathering information through researching and reading,
- Communicating information by writing, speaking, and using visual exhibits
and verbal and nonverbal expression,
- Using numbers, graphs, charts, drawings, and problem-solving techniques
gained from math and science, and
- Using current systems of technology (Public Schools of North Carolina, 1999).
As shown by the above listing of components and skills, the Senior Project program
incorporates activities from all three categories of Performance-Based
Assessments (constructed responses, products, and performances). Benefits
garnered from such Senior Project assessment accrue to students, faculties,
and communities. Students benefit in that they are given opportunities
to present their best work, acquire new skills, gain self-confidence,
and focus on career goals. Benefits to faculties include opportunities
to collaborate with colleagues in interdisciplinary studies, chances to
raise expectations and standards for students, and ways to connect the
curriculum to world-of-work applications. Benefits to the community include
greater community involvement in school activities, publicity for community
problems and challenges, and a supply of new community members, well prepared
for the world of work, as students graduate.
The Senior Project performance, however, like all performance-based assessments,
does not succeed unless carefully designed and planned. All such assessments
must be based upon curriculum standards, must have clear expectations
which are shared with students, and must be judged by prearranged quality
criteria. The intent of this chapter has been to acquaint the novice assessor
with the many examples of performance-based assessments and to aid the
novice in differentiating one type of assessment from another. However,
future chapters will delve into planning for the quality design of performance-based
assessments. Conveying expectations and writing rubrics for these assessments
will be covered in Chapter Seven, while other design considerations are
found in Chapter Nine.
References
Burke, K. (1994). The mindful school: How to assess authentic learning.
Palatine, IL: IRI/Skylight Training and Publishing.
Butler, S.M. (1997). Using science portfolios in a tenth-grade chemistry
classroom. In Barton, J. & Collins, A. (1997). Portfolio assessment:
Handbook for educators. Menlo Park, CA: Addison-Wesley Publishing Company.
Clarkson, J. (1997). Using math portfolios in first-, second-, and
third-grade classrooms. In Barton, J. & Collins, A. (1997). Portfolio
assessment: Handbook for educators. Menlo Park, CA: Addison-Wesley
Publishing Company.
Florida Department of Education. (1991). Course student performance
standards, Chemistry I Honors. Curriculum framework. Tallahassee,
FL: Department of Education.
Foundation for Advancements in Science & Education. (Producer).
(1993). Good morning, Miss Toliver [Film]. (Available from FASE Productions,
4801 Wilshire Blvd., Suite 215, Los Angeles, CA 90010, 800-404-3273)
Lewin, L. & Shoemaker, B.J. (1998). Great performances: Creating
classroom-based assessment tasks. Arlington, VA: Association for Supervision
and Curriculum Development.
Long, D. (1997). Using language arts portfolios in a fourth- and fifth-grade
classroom. In Barton, J. & Collins, A. (1997). Portfolio assessment:
Handbook for educators. Menlo Park, CA: Addison-Wesley Publishing Company.
McColskey, W. & O'Sullivan, R. (1995). How to assess student performance
in science: Going beyond multiple-choice tests. Greensboro, NC: SERVE.
Public Schools of North Carolina. (1999). Classroom assessment: Linking
instruction and assessment. Raleigh, NC: State Board of Education, North
Carolina Department of Public Instruction.
SERVE. (1999). Senior project: Student work for the real world [Brochure
to accompany videotape]. Greensboro, NC: SERVE.
Tobias, S. (1993). Overcoming math anxiety. New York: W.W. Norton.
Williams, A. (1997). Using science portfolios in a sixth-grade classroom.
In Barton, J. & Collins, A. (1997). Portfolio assessment: Handbook
for educators. Menlo Park, CA: Addison-Wesley Publishing Company.
©2001 Dr. Susan Butler for Science Junction, NC State University.
All rights reserved.
Last updated 10/09/01