Student Ratings Frequently Asked Questions

- Why evaluate teaching?
- Why use student ratings?
- Why administer student ratings online?
- What are the challenges of administering student ratings online?
- Who has provided input into the development of the online student-rating system?
- How were items on the student rating form developed?
- Why are the two global items ("rate the instructor", "rate the course") included on the form?
- Why do the average ratings for global items ("rate the instructor", "rate the course") sometimes differ from average ratings for other items on the rating form?
- Will instructors be able to add items to the online rating form?
- Can online student ratings be used for mid-course student feedback?
- How can the online student ratings be used to improve teaching?
- How will students know when and how to complete the online rating forms?
- How will confidentiality of student responses be maintained?
- How have students responded to using the online student-rating system?
- Are other universities using online student ratings?
- Where can I learn more about student ratings of instruction?

Why evaluate teaching?

In general, the evaluation of teaching serves two broad purposes:

- Teaching
evaluations provide an opportunity for faculty to receive feedback on their teaching.
Through feedback from students, peers, and supervisors, faculty can better understand
teaching strengths and weaknesses and gain ideas on how teaching can be improved.
- Teaching
evaluations provide important information for the evaluation of faculty and courses.
Teaching evaluation results are used for decisions regarding faculty rank and
status and for decisions about courses, programs, and faculty assignments.
There
are three primary data sources for collecting data in the above areas: students,
peers, and supervisors. Each of these groups is in a unique position and has strengths
and weaknesses in relation to data they can provide for teaching evaluations.
- Students
are in the best position to report on the day-to-day functioning and activities
of a course and provide feedback on their own learning experience in a course
(Chism, 1999; Theall & Franklin,
2001).
- Peers
are in the best position to provide feedback on course content and design and
an instructor's subject matter expertise (Chism, 1999; Hutchings,
1994; Johnson & Ryan, 2000).
- Supervisors
(e.g., dept. chairs, deans, university administrators) are in the best position
to synthesize and confirm student and peer feedback and evaluate instructor performance
in light of department, college, and university goals (Chism,
1999; Diamond, 1994).
Obviously,
there is some overlap in the data received from these three sources (as there
should be), but each has its own unique contribution in providing data and perspective
in the evaluation of teaching.

Why use student ratings?

Those
who doubt the value of student ratings often lack an understanding of the overall
evaluation process and the appropriate contributions of each of the primary data
sources: students, peers, and supervisors (see "Why
evaluate teaching?"). Students should not be expected to act as primary
evaluators of course content or the overall contribution of a faculty member in
a department or college. On the other hand, neither peers nor supervisors are
in a good position to know what goes on day-to-day in the classroom and how the
course is experienced by students. Common sense, as well as research, reveals
that students are the most valid and reliable source for this type of information
(McKeachie & Kaplan, 1996; Theall
& Franklin, 2001).
Data
from students can be gathered in a number of ways including individual interviews,
focus groups, measures of student learning (assignments, exams), and student ratings
of instruction. Of these methods,
student ratings are usually preferred. Student
ratings are more feasible (and typically more reliable) than individual interviews
or focus groups. In principle,
measures of student learning may be the best measure of teaching, but there is
uncertainty regarding how to use test scores to evaluate teaching. As Scriven
(1983) points out, there are a number of factors besides teaching that affect
examination performance, and exams are often poorly written, especially as measures
of the total learning that takes place in a course. Some researchers are working toward a
greater emphasis on student learning in evaluating teaching (Barr
& Tagg, 1995; North, 1999), but much thought
and research are still needed.
Student
ratings are the most researched method for gaining feedback on teaching. There
are over 1,500 published articles dealing with research on student ratings of
instruction (Cashin, 1995; McKeachie &
Kaplan, 1996). This research
shows that student ratings are generally a reliable and valid method for gathering
data on teaching (Cashin, 1995; Marsh & Roche,
1997; Ory & Ryan, 2001; Theall &
Franklin, 2001)--much more so than any other teaching evaluation method (McKeachie
& Kaplan, 1996; Scriven, 1983). However,
student ratings are certainly not a perfect measure of teaching. To help substantiate
and extend data from student ratings, the teaching evaluation process should include
the triangulation of results from student ratings, peer review, and supervisor
evaluations (Johnson & Ryan, 2000; Kahn,
1993; Marsh & Dunkin, 1992; Wagenaar,
1995).

Why administer student ratings online?

Flexibility
and customization--The online student rating system will provide evaluation
forms and reports that are tailored to specific needs. Instructors will be able
to choose items to include on the form that are specific to the courses they teach.
Faculty members and administrators will receive reports that are more complete,
easier to interpret, and customized to their individual needs.
More
helpful feedback for instructors--Online reporting will provide more complete,
in-depth reports that are easy to interpret. It will also allow reports to include
links to online resources on specific areas of teaching and learning.
Quicker
feedback to professors--The online system will allow professors to view student-rating
results as soon as grades are reported. This will provide timely feedback that
can be used in preparation for the following semester.
Anonymity of
student comments--Because student comments on the rating forms are typed,
professors cannot identify a student's response by his or her handwriting. This
helps students feel more comfortable and open in their responses.
Longer
and more thoughtful student responses--Because forms are completed outside
of class, students don't feel pressured to complete the forms quickly. In addition,
students can easily type their comments rather than write them by hand. Research
shows that when forms are completed online, the number, length, and thoughtfulness
of student comments are greatly increased.
Widespread evaluation--Online
administration of the student-rating form will provide students the opportunity
to rate all of their courses each semester. It will also provide faculty members
with student feedback on every course they teach.
Class-time savings--When
student ratings are done online, class time is not needed to complete rating forms.
Cost
reduction--With online administration there is no need for paper forms; thus,
the costs of producing, distributing, and processing these forms are eliminated.
The costs of setting up and maintaining the online rating system will be considerably
less than continuing to operate the current paper-pencil system.
Efficiency
and accuracy--Online questionnaire administration and data processing produce
fewer errors because automation eliminates manual steps
such as collecting forms, scanning, and distributing reports.

What are the challenges of administering student ratings online?

Research
has shown that the primary challenges to online administration of student ratings
are gaining faculty and student support, providing adequate computer access, and
ensuring adequate response rates. Research at BYU and elsewhere has also pointed
to methods for meeting these challenges.

Student
and faculty support is gained by providing an online rating system that is
valuable and easy to use. Support is also gained by informing students and faculty
about the system, its merits, and how to use it. Steps are being taken to address
each of these areas. Student
access to computers at BYU is steadily increasing. In 1999, about 70% of the
BYU student body had access to a computer at home-either their own, a family members',
or a roommates' computer. Access to computers and the Internet in the dorms is
now close to 100%. On campus, there are about 1000 computers with Internet access
in general computer labs. Additional computers and Internet access are available
in department labs. There are kiosks where students can access RouteY. There are
also ports for students to use their personal laptops for Internet access. Continuing
analysis of computer availability and student needs is part of the process of
implementing online student ratings. Response
rate can be a challenge because students must take time outside of class to
complete online rating forms. Response rates for recent BYU pilots ranged from
30% to 62%. In these pilots, several strategies for increasing response rates
were tested. It was clear that some strategies must be employed to increase response
rates; with no strategies, the response rates were low. With full implementation
of the online rating system and measures taken to increase response rates, the
response rates are expected to be near or above 70%. This is similar to the response
rates we have experienced using the paper-pencil rating system. (Over the past
year, the overall response rate for the paper-pencil student-rating system was
72%.) Some strategies to
increase response rates have been identified in BYU pilot studies: - Response
rates increase when completing the rating form is a class assignment. This is
true regardless of whether or not actual points are given for completing the rating
forms.
- Response
rates increase when students know about the student rating system and how to use
it. A number of strategies are being considered to inform students about the online
rating system and its use.
- Student-rating
responses increase when students understand how rating results are used. Various
methods are being explored to help students understand the different uses of student-rating
results and that student responses do make a difference. Along with educating
students regarding the use of rating results, it is important that faculty members
and administrators make the best possible use of these results.

Who has provided input into the development of the online student-rating system?

Faculty--During
Winter Semester 2002, all faculty at BYU were sent email messages directing them
to a website with information on the proposed online student rating system. This
website included a copy of the new rating form, a list of Frequently Asked Questions
(FAQs), and an opportunity to provide feedback on the new rating form and online
student rating system. Fifty-seven faculty members responded. These responses
were analyzed and used in the development of the online student ratings.

Faculty Advisory Council--The Faculty Advisory Council has provided ongoing input to
the development of online student ratings at BYU. This council approved an early
version of the BYU rating form and has continued to provide periodic feedback
since that time. In Winter 2002, the Faculty Advisory Council helped in revising
the online rating form.

Department Chairs--During Winter Semester 2002, all BYU department chairs were invited
to meet with AAVP Richard Williams to discuss and provide feedback on online student
ratings. Sessions were held on multiple days to accommodate individual schedules.
Chairs received a description of how the form was developed and articles summarizing
the national research on student ratings of instruction. Department chairs have
also given feedback on online ratings in the Department Chair Seminars.

Deans and Associate Deans--In Deans Council, BYU deans provided recommendations and
approved current plans for the implementation of online student ratings. Associate
deans have discussed online student ratings and given recommendations in the University
Faculty Development Council and the University Learning, Teaching, and Curriculum
Council.

Students--
During Winter Semester 2002, all students at BYU were sent email messages directing
them to a website with information on the proposed online student rating system.
This website included a copy of the new rating form, a list of Frequently Asked
Questions (FAQs), and an opportunity to provide feedback on the new rating form
and online student rating system. Six hundred forty students responded. All responses
were analyzed and used in further revision of the online rating form and system.
In addition, students participating in online-student-rating pilots were asked
to give feedback. During the Fall 2000 pilot, over 1,800 students responded to
a questionnaire sent to pilot participants. In addition, 40 students participated
in student focus groups. Student feedback was analyzed and used in developing
the online student ratings.

BYU Student Association and Student Advisory Council--The BYU Student Association
(BYUSA) and the Student Advisory Council (SAC) have reviewed and given feedback
on the online student ratings. Representatives from the SAC were members of the
original Lee Hendricks student-ratings committee in 1996. Over the past year and
a half, BYUSA and SAC representatives have met in a series of meetings to discuss
implementation of online student ratings. They have contributed many ideas to
the current rating system and expressed their support of it.

How were items on the student rating form developed?

In
1995, President Rex Lee commissioned a committee to begin work on a new BYU student
rating form. The committee was chaired by Lee Hendrix of the Statistics Department
and consisted of faculty, students, and administrators. Additional efforts built
on the work of this committee. An in-depth analysis was conducted on the research
on teaching and learning, research on student ratings of instruction, and specific
BYU needs. From this analysis, essential categories of student rating items were
identified. Within each category, items were chosen that seemed to best represent
the category and align with BYU needs. Categories that were most important to
teaching and learning (as indicated by the research) were given more items.
Research was conducted on the form, including inter-item correlations and factor
analyses. Versions of the form were reviewed and approved two separate times by
the Faculty Advisory Council. Outside experts were consulted on the content and
layout of the form. Finally, the form was beta-tested with students to examine
their interpretations and perceptions. Throughout this process, the online student-rating
form was revised according to feedback and research results.
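As a concrete illustration of the item research described above, the sketch below shows how inter-item correlations and an internal-consistency estimate (Cronbach's alpha) can be computed in Python. The ratings matrix is hypothetical sample data, not BYU results, and this is an illustrative sketch rather than a description of the actual analyses performed.

    import numpy as np

    # Hypothetical ratings matrix: rows = students, columns = rating-form items
    # (illustrative data only, on an 8-point scale).
    ratings = np.array([
        [7, 6, 7, 5],
        [5, 5, 6, 4],
        [8, 7, 8, 6],
        [6, 6, 5, 5],
        [4, 3, 4, 3],
    ])

    # Inter-item correlation matrix: each column is treated as a variable.
    # High off-diagonal values suggest items measuring overlapping aspects.
    inter_item_corr = np.corrcoef(ratings, rowvar=False)
    print(inter_item_corr.round(2))

    # Cronbach's alpha, a standard internal-consistency estimate:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = ratings.shape[1]
    item_var_sum = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    alpha = (k / (k - 1)) * (1 - item_var_sum / total_var)
    print(f"Cronbach's alpha: {alpha:.2f}")
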
Why
are the two global items ("rate the instructor", "rate the course")
included on the form?

Research
shows that responses to overall items (e.g., rate the course, rate the instructor)
generally have a higher correlation to measures of student learning than do individual
items or groups of individual items on rating forms (Marsh,
1994; Theall, Scannell, & Franklin, 2000). This
has been replicated in numerous research studies and in meta-analyses of multiple
studies (Ali & Sell, 1998; Koon &
Murray, 1995; Zong, 2000).
Why
do the average ratings for global items ("rate the instructor", "rate
the course") sometimes differ from average ratings for other items on the
rating form?

Differences between global ratings and the average of individual item ratings on the form occur
for a number of reasons:

- The
global items on rating forms are intended to be normative (i.e., "compared
to other courses you have taken"). The specific items are less normative
in that they focus on specific aspects of a course or actions of an instructor.
Therefore, the global and specific items are asking for two different types of
ratings.
- Even
though the number of points is the same on the global and specific item rating
scales, these points are labeled differently. A Likert scale asking for agreement
or disagreement to a given statement (on individual items) is not the same as
rating an instructor or course as good or poor (on global items).
- The
individual items on the rating form are a sampling of important areas of teaching;
it is impossible to include all important areas of teaching on a short student-rating
form. When students provide an overall course or instructor rating, they may consider
aspects of teaching and learning that are not represented in the individual items
on the form. Therefore, results of overall items and averages of specific rating
items are usually different. This phenomenon is observed on rating forms across
the country. (For more information on the validity of global items, see "Why
are the two global items included on the form?")
- An
average of the scores for all individual items on a rating form does not take
into account that some individual items are more important than others to the
overall quality of the course or instructor. To determine an appropriate average
of individual items, a weighting scheme for individual item scores would be needed
(see the sketch following this list).
If a weighting scheme were developed, it would have to be adjusted for individual
courses because the most important aspects of teaching are not necessarily the
same for every course. Determining weighting schemes for individual courses would
be a very difficult process. Of course, all discussion about a weighting scheme
is based on the assumption that all important aspects of teaching are represented
in the individual items on the rating form, which is not possible on a rating
form of reasonable length.
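To make the weighting problem concrete, here is a minimal Python sketch using hypothetical items, scores, and weights (none of these appear on the actual BYU form) of how an unweighted item average and a weighted one can diverge:

    # Hypothetical item scores for one course (not actual BYU items).
    items = {
        "organized the course effectively": 6.0,
        "explained concepts clearly": 7.5,
        "graded fairly": 5.0,
        "used class time well": 6.5,
    }

    # Unweighted average: treats every item as equally important.
    unweighted = sum(items.values()) / len(items)

    # Weighted average: requires judging how much each item matters for
    # this particular course; appropriate weights would differ by course.
    weights = {
        "organized the course effectively": 0.2,
        "explained concepts clearly": 0.4,  # weighted most, e.g., in a lecture course
        "graded fairly": 0.2,
        "used class time well": 0.2,
    }
    weighted = sum(score * weights[name] for name, score in items.items())

    print(f"Unweighted mean: {unweighted:.2f}")  # 6.25
    print(f"Weighted mean:   {weighted:.2f}")    # 6.50
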

Will instructors be able to add items to the online rating form?

Yes.
Plans are underway to allow instructors to choose items from an item pool or construct
their own items to add to the online rating form. This will allow instructors
to include items that are tailored to the specific goals and contexts of their
individual courses. These items will be very easy for instructors to add to the
online rating forms.

Can online student ratings be used for mid-course student feedback?

The
proposed online rating form will only be used for end-of-course evaluations. The
form is designed to elicit general feedback from students about the course as
a whole. However, plans are underway to develop a mid-course student-rating form
that instructors can use anytime during the semester. This form will be very flexible,
allowing instructors to select or write the items that appear on the form. Data
from mid-semester ratings will be separate from the end-of-course rating system
in that the results will only be available to the requesting faculty member and
will only be used for formative purposes (i.e., to give the instructor feedback
to improve the course, not for faculty rank and status decisions).

How can the online student ratings be used to improve teaching?

The
new online student rating system is designed to promote the improvement of teaching.
Some teaching improvement features are already built into the system; others are
planned for, but not yet implemented: - One
of the most important reasons for moving to an online student rating system is
the flexibility it provides in selecting rating items. This flexibility is important
to improving teaching:
- The
online system will allow instructors to choose or develop items to add to the
rating form. These items may be tailored to the goals, needs, and context of each
course. Results from these items will be used solely by instructors for improving
the course.
- In
cases where course formats are unique (e.g., internships, fieldwork), faculty
may choose to use a shortened version of the online form. This short form provides
additional room to add items that are tailored to the unique characteristics and
improvement needs of each course.
- Online
reporting includes features that support the improvement of teaching:
- Reports
are available as soon as grades are submitted. This allows instructors time to
look over student rating results and make changes to their courses for the following
semester.
- In
comparison to paper reports, online reports provide more complete, detailed, and
understandable information on each course.
- In
the future, online reports will include links to resources for improving teaching
and learning. Links and resources will be provided for every item/topic on the
rating form.
- Research at BYU and across the nation has shown that students are much more likely to supply
written comments when ratings are online.
  - Over six times more students commented in a recent BYU pilot.
  - In addition, student comments entered online tend to be much longer and more detailed
than comments on paper forms.
  - Faculty members often report that students' written comments are the most useful
feedback for improving a course.
- The
items on the online rating form focus on areas research has shown are important
to teaching and learning in higher education. As instructors review student rating
reports, they will gain a better understanding of important areas of their teaching
and student learning.
- Unlike
the old paper system, the online student rating system provides the opportunity
for every student to rate every course every semester.
- All
instructors will receive feedback on their teaching.
- This
increase in student rating data will provide the opportunity for more discussion
among department faculty regarding teaching and course improvement.
- The
online student-rating reports are provided to each instructor; in addition, the
reports are accessible to their respective chairs and deans, including all written
comments. This provides a rich source of data for use in annual stewardship interviews
and for other department and college formative evaluation efforts.
Student
rating results may be combined with other information sources and methods (e.g.,
peer reviews, student/class interviews, mid-course student feedback) to give a
more complete and accurate picture of teaching. It
is important to keep track of feedback on teaching over time to better understand
patterns and the influence of contextual variables in teaching. Research
has shown that teaching improvement is greatly enhanced when instructors discuss
student-rating results with a colleague or faculty development consultant (Brinko,
1993; Hoyt & Pallett, 1999; McKeachie & Kaplan, 1996).

How will students know when and how to complete the online rating forms?

Students
will be sent a series of email messages reminding them to complete the online
forms and telling them how to do so. Instructors will be notified when the online
forms are available so they can remind their students in class. In addition, other
methods (e.g., newspaper articles, posters, student orientation) will be used
to notify and instruct students in completing the forms.

How will confidentiality of student responses be maintained?

Only
group data for the entire class will be reported to faculty members. No identifying
information will be linked to data on the reports (unless individual students
choose to identify themselves in their open-ended comments). Identification of
respondents will be encrypted when the data are stored.
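The FAQ does not detail the storage mechanics, but one common approach, shown here as a hypothetical Python sketch (the key handling and field names are assumptions, not a description of BYU's system), is to store a keyed hash of the student ID in place of the ID itself, so responses remain linked only to a pseudonym rather than to an identifiable student:

    import hmac
    import hashlib

    # Hypothetical server-side secret; in practice this would be kept in
    # protected configuration, never in the data store itself.
    SECRET_KEY = b"server-side-secret"

    def pseudonymize(student_id: str) -> str:
        """Return a keyed hash of the student ID, so a stored response
        cannot be traced back to the student without the secret key."""
        return hmac.new(SECRET_KEY, student_id.encode(), hashlib.sha256).hexdigest()

    # The stored record keeps only the pseudonym; reports aggregate the
    # numeric ratings and pass along comments with no identifying fields.
    record = {
        "respondent": pseudonymize("student-12345"),
        "course": "CS 101",
        "ratings": [7, 6, 8],
        "comment": "More examples in lecture would help.",
    }
    print(record["respondent"][:16] + "...")
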

How have students responded to using the online student-rating system?

In
general, BYU students are very positive about using the online student-rating
system. In
a recent pilot, many students said they liked the online system because it was
efficient, convenient, and easy to use. They also mentioned other advantages:

- saves class time
- anonymity of responses
- not rushed; more time to consider answers
- typing responses is easier and takes less time
- students are more apt to write comments online
- students who miss class can still respond
- saves paper
- more space to write comments
- all instructors can be evaluated, even those who don't pass out paper forms
- typed comments are easier for faculty to read
- no one needs to take the forms to the Testing Center

Are other universities using online student ratings?

Yes.
A number of institutions are using online rating systems for part or all of their
courses. Below is a list of some of these universities and websites describing
their online student-rating systems:

Universities using online student ratings campus-wide:

Georgia Institute of Technology; Atlanta, Georgia
https://intranet.gatech.edu/cfprod/cios/student_general_help.html
Includes the reasoning behind using an online student-rating system and how to ensure
anonymity of students' responses. Formative and summative rating forms are provided.
Northwestern University; Evanston, Illinois
http://www.it.northwestern.edu/itcom/monitor/sep00/ctec.html
Includes an article from the university's newspaper explaining why it switched to
an online student-rating system.
Polytechnic University; New York, New York
http://survey.poly.edu/Ceval/CevalSp.shtml
Displays a sample online course-evaluation form. Responses are selected from a drop-down
list for each item.
Universities using online student ratings in select colleges or departments or as part of a pilot program:

Indiana University, School of Education; Bloomington, Indiana
http://ic.educ.indiana.edu/
Includes a site for students, faculty, and administrators to access the online course-evaluation system.

University of Chicago, Graduate School of Business; Chicago, Illinois
http://gsbwww.uchicago.edu/curriculum/courses/eval.html
Describes the benefits of having student-rating data stored online.
University of Florida; Gainesville, Florida
http://medinfo.ufl.edu/omi/docs/neweval.html
Provides hypothetical examples of the online faculty evaluation. (Follow the Hypothetical
Course Evaluation link: https://medinfo.ufl.edu/cgi-bin/eval.cgi?dir=demo;form=course)
University of Illinois; Champaign, Illinois
http://www.news.uiuc.edu:16080/ii/01/1004/1004onlinesystem.html
Identifies the benefits of online student ratings, including flexibility and data-manipulation features.
University of Washington; Seattle, Washington
http://depts.washington.edu/oeaias/
Describes the online student-rating system; includes links to a sample form, a sample report,
and a demonstration/tutorial of how to use the system.

Where can I learn more about student ratings of instruction?

General Websites:

Student Ratings of Teaching: The Research Revisited. William E. Cashin.
http://www.idea.ksu.edu/papers/Idea_Paper_32.pdf
Student Ratings of College Teaching: What Research Has To Say. Lucy C. Jacobs.
http://www.indiana.edu/~best/pdf_docs/student_ratings.pdf
Research on Student Ratings in a Nutshell.
http://www.usafa.af.mil/dfe/research_on_student_ratings_in_a_nutshell.doc
Ratings Myths and Research Evidence.
http://www.nea.org/he/advo99/advo0199/feature.html
Embracing Student Evaluations of Teaching: A Case Study. Timothy J. Gallagher, Kent State University.
http://dept.kent.edu/fpdc/pdf_files/gallagher.PDF
Student Ratings Offer Useful Input to Teacher Evaluations. ERIC/AE Digest. Michael Scriven.
http://www.ericfacility.net/databases/ERIC_Digests/ed398240.html
How professors can use student course ratings to improve teaching and to prepare for tenure/promotion/merit decisions: Research-based suggestions.
http://www.education.mcgill.ca/cutl/files/ratinfo1.pdf

Test your assumptions about Student Evaluations: A Quick Quiz.
http://cea.curtin.edu.au/seeq/quiz.html
Web Sites Related to Online Student Ratings:

Online Student Ratings: Research and Possibilities. Trav Johnson.
http://www.oir.uiuc.edu/dme/conference/trav.htm

Student Feedback on Teaching: Online! On Target? Rick Cummings and Christina Ballantyne.
http://wwwtlc1.murdoch.edu.au/evaluation/pubs/confs/aes99.html
References:
Aleamoni,
L. (1999). Student rating myths versus research facts from 1924 to 1998. Journal
of Personnel Evaluation in Education, 13(2), 153-166. Ali,
D.L., & Sell, Y. (1998). Issues regarding the reliability, validity and
utility of student ratings of instruction: A survey of research findings.
Retrieved August 23, 2001, from the University of Calgary Web site: http://www.ucalgary.ca/UofC/departments/VPA/usri/appendix4.html Ballantyne,
C. (2000, November). Why survey online: A practical look at issues in the use
of the Internet for surveys in higher education. Paper presented at the
annual conference of the American Evaluation Association, Honolulu, HI. Barr,
R.B. & Tagg, J. (1995, November/December). From teaching to learning: A new
paradigm for undergraduate education. Change, 13-25. Bernstein,
D. (1995, August 21). Establishing effective instruction through peer review
of teaching. A distillation of a FIPSE proposal. Braskamp,
L.A., & Ory, J.C. (1994). Establishing the credibility of evidence. In Assessing
faculty work: Enhancing individual and institutional performance. pp. 95-104.
San Francisco: Jossey-Bass. Brinko,
K.T. (1993, September-October). The practice of giving feedback to improve teaching: What
is effective? Journal of Higher Education, 64(5), 574-593. Cashin,
W.E. (1995, September). Student ratings of teaching: The research revisited.
IDEA paper no. 32 from the Center for Faculty Evaluation and Development at Kansas
State University. Chism,
N.V. (1999). Peer review of teaching. Bolton, MA: Anker Publishing. Diamond,
R.M. (1994). Documenting and assessing faculty work. In Serving On Promotion
and Tenure Committees: A Faculty Guide, Syracuse University. Bolton: Anker
Publishing Company, Inc. (pp. 13-21). Hoyt,
D.P. & Pallett, W.H. (1999, November). Appraising teaching effectiveness:
Beyond student ratings. IDEA paper no. 36 from the Center for Faculty Evaluation
and Development at Kansas State University.
Hutchings,
P. (Ed.) (1994, November). Peer review of teaching: From idea to prototype.
AAHE Bulletin. Retrieved May 28, 2002, from the AAHE Web site: http://www.aahe.org/teaching/nov94bull...May_18.htm
Johnson,
T.D. & Ryan, K.E. (2000, Fall). A comprehensive approach to the evaluation
of college teaching. New Directions for Teaching and Learning, 83,
109-123. San Francisco: Jossey-Bass. Kahn,
S. (1993). Better teaching through better evaluation: A guide for faculty and
institutions. To Improve the Academy, 12, 111-127. Koon,
J. & Murray, H.G. (1995). Using multiple outcomes to validate student ratings
of overall teacher effectiveness. Journal of Higher Education, 66 (1),
61-81.
Marsh, H.W. (1994). Weighting for the right
criteria in the instructional development and effectiveness assessment (IDEA)
system: global and specific ratings of teaching effectiveness and their relation
to course objectives. Journal of Educational Psychology, 86 (4), 631-648.
Marsh,
H.W. & Dunkin, M.J. (1992). Students' evaluations of university teaching:
A multidimensional approach. In J.C. Smart (Ed.), Higher education: Handbook
of theory and research (Vol. 8, pp. 143-233). New York: Agathon Press. Marsh,
H.W. & Roche, L.A. (1997). Making students' evaluations of teaching effectiveness
effective. American Psychologist, 52 (11), 1187-1197. McKeachie,
W.J. & Kaplan, M. (1996, February). Persistent problems in evaluating
college teaching. AAHE Bulletin, pp. 5-8. North,
J.D. (1999). Administrative courage to evaluate the complexities of teaching.
In P. Seldin (Ed.), Changing practices in evaluating teaching. Bolton, MA: Anker
Publishing. Ory,
J.C. & Ryan, K.E. (2001). How do student ratings measure up to a new validity
framework? In M. Theall, P.C. Abrami, & L.A. Mets (Eds.), The
student ratings debate: Are they valid? How can we best use them? New Directions
for Institutional Research, no. 109, San Francisco: Jossey-Bass. Sanders,
W.L. (2000). Value-added assessment from student achievement data: Opportunities
and hurdles. Jason Millman Award Speech. CREATE National Evaluation Institute,
San Jose, CA, July 21. Scriven,
M. (1983). Summative teacher evaluation. In J. Millman (Ed.), Handbook of teacher
evaluation. Thousand Oaks, CA: Sage. Theall,
M. & Franklin, J. (2001). Looking for bias in all the wrong places: A search
for truth or a witch hunt in student ratings of instruction? New Directions
for Institutional Research, no. 109, 45-56. Theall,
M., Scannell, N. & Franklin, J. (2000, Spring). The eye of the beholder: Individual
opinion and controversy about student ratings. Instructional Evaluation and
Faculty Development. Retrieved September 23, 2002, from http://www.umanitoba.ca/academic_support/uts/sigfted/iefdi/spring00/matrix.htm Wagenaar,
T.C. (1995). Student evaluation of teaching: Some cautions
and suggestions. Teaching Sociology, 23, 64-68. Zong,
S. (2000). The meaning of expected grade and the meaning of overall ratings of
instruction: A validation study of student evaluation
of teaching with hierarchical linear models. Dissertation Abstracts International,
61(11), 5950B. (UMI No. 9995461)