                              September 6, 1996
 
    TO:     Mr. Larry Wilson, Chairman, and Members,
            Performance Indicator Task Force on Academics
 
    FROM:   Dr. Gail M. Morrison
 
            Agenda and Materials for Task Force Meeting
                        September 10, 1996
 
    As previously agreed upon, our next meeting will be held on Tuesday, September
 10, 1996, here in the Commission's main conference room, from 1:30 p.m. to
 approximately 5:30 p.m.
 
    A suggested agenda is as follows:  
 
    1.  Consideration of Minutes of September 3, 1996
 
    2.  Consideration of Draft Performance Indicators from September 3
 
    3.  Consideration of "Best Practices" documents for Performance Review
        and Post-Tenure Review
 
    4.  Review of Brief Summary of Student Evaluation Information
 
    5.  Discussion of New Performance Indicators/Observations on Background
        of Indicators
 
        2(F) community and public service activities for which no extra
            compensation is paid
    
        3(C) ratio of full-time faculty as compared to other full-time
            employees
 
        3(D) accreditation of degree-granting programs
 
        3(E) institutional emphasis on quality of teacher education and
            reform
 
    Please let me know if you require additional information for the next Task
 Force meeting.  I look forward to seeing you then.
 
 cc:    Members, Advisory Committee on Academic Programs
    Fred R. Sheheen
    Associate Commissioners

MINUTES
ACADEMICS TASK FORCE
PERFORMANCE INDICATORS
September 3, 1996
CHE Conference Room, 1:30 P.M.

Task Force Members Present: Mr. Stephen Avery, Dr. John Britton, Ms. Juanita Bulloch, Mr. Frank Gilbert, Dr. Martha Herbert, Mr. Douglas McKay, Dr. John Stockwell, Mr. Larry Wilson (Chairman).

Task Force Resource Persons Present: Dr. Wanda Hayes (USC-Aiken); Dr. Joe Prus (Winthrop University); Dr. Gail Morrison, CHE Associate Commissioner for Academic Affairs; Dr. Mike Smith, CHE Associate Commissioner for Special Projects; Dr. Lynn Kelley, as recorder (CHE-Academic Affairs).

Also present: Mr. Fred Sheheen, CHE Commissioner; faculty and staff representatives of the institutions of higher education; and state government representatives.

Mr. Wilson opened the meeting by stating that Dr. Layton McCurdy had not been able to attend meetings of the Task Force because he had had surgery.

The following procedural matters were discussed:

1. The Task Force agreed to post minutes on the Internet; if minutes have not been approved, that will be noted. [Note: These minutes will be posted following approval.]

2. Dr. Mike Smith distributed a paper from the Chair of the Council of Faculty Chairs which included items of concern from that group.

Mr. Wilson then requested a motion for approval of the minutes. Several changes were suggested which the Task Force accepted. Dr. Britton then moved (seconded, McKay) that the minutes be approved as amended. The motion was approved unanimously.

Review of Performance Indicators From Prior Meeting

The Task Force turned its attention to (1) Mission Focus (B) Curricula offered to achieve mission. After considerable discussion and an affirmation that Act 359 provided for everything contained in the draft, Mr. Wilson asked for a motion to accept. Dr. Britton moved (seconded, Mr. Gilbert) to accept the draft language of the Performance Indicator on (1) Mission Focus (B) Curricula Offered to Achieve Mission. The motion passed unanimously.

The Task Force next examined the draft for the performance indicator on (2) Quality of Faculty (A) Academic and Other Credentials of Professors and Instructors. To meet concerns expressed about the unique position of technical program faculty in the technical colleges concerning "b," the Task Force agreed to include a footnote to this Performance Indicator to make such a nuance clear. Concerns were raised about the meaning, interpretation, and possible use of "b." After considerable discussion, the Task Force agreed that this "b" was necessary to provide a qualitative dimension to this performance indicator which "a" could not provide. Dr. Stockwell then moved (seconded, Avery) that the Performance Indicator (2) Quality of Faculty (A) Academic and Other Credentials of Professors and Instructors be adopted. The motion was approved unanimously.

Discussion of the Second Set of Performance Indicators

Mr. Wilson thanked the staff for providing a summary of legislative intent for Act 359, based upon the November 1995-February 1996 discussions of the blue ribbon committee on higher education reform. Copies of this summary were distributed to the Task Force members and members of the audience (copy attached).

The Task Force next moved to consider performance indicator (2)(B): Performance review system for faculty to include student and peer evaluations. Introductory remarks from Dr. Morrison showed that there is no statewide standard for performance review, although all institutions have this as a regular process.
Discussion which followed showed that the Task Force members and technical support personnel support inclusion of tenured faculty in this performance indicator and support inclusion of flexibility in it so that it can be tailored to individual institutions' missions and departmental goals and objectives. There was consensus that it should be mission-driven, comprehensive in scope, and systematic in process, and that it be linked with faculty development and faculty reward systems. Concerns were also expressed that if the indicator were to be met with a yes/no format, administrators might find it difficult to provide low ratings to faculty. However, these concerns were alleviated by the agreement of the Task Force that the process must contain student evaluations as well as peer and administrative components. This performance indicator is to answer the question, "Is the appropriate process in place to be able to render a judgment?" rather than the question, "What is the judgment?"

The Task Force discussed the difference between this performance indicator and a similar section in Act 255. The Act 255 element focuses on institutional assessment, whereas this indicator is focused on the quality of individual faculty members as classroom instructors. Discussion also ensued about the performance indicator formulated by the Statewide Committee on Planning. It was pointed out that the group had not formally voted on the document or endorsed it.

The Task Force agreed with the expressed comment that if the Task Force were to set a genuine standard, the faculty at the institutions would meet it. They agreed that such an indicator was important, given the perception of the General Assembly that there is not much evaluation of faculty, especially those who are tenured. The Task Force then requested staff to gather standards/criteria from the literature of evaluation and to develop a draft of this performance indicator for the next meeting in which a "best practices" scenario might be included. The Task Force requested that this material be sent out so that it could be received prior to the next meeting.

The Task Force next considered item (2)(C) Post-Tenure Review for Tenured Faculty. Mr. Wilson asked if processes were in place that sufficiently answered the question of whether continuation of a person's tenure was justified. Dr. Morrison and Dr. Mike Smith pointed out that the technical colleges do not have tenure, so this indicator will not apply to them. Dr. Morrison responded to questions about the extent to which post-tenure review exists by saying that there is not as much post-tenure review as there are other types of review of faculty; that post-tenure review policies and practices differ widely among institutions; and that the debate in the 1996 General Assembly over the abolition of all tenure was causing a rethinking of the issue of post-tenure review on the public campuses.

There was much discussion about the possibility and likelihood of revocation of tenure. From that discussion agreement emerged that while those who are tenured are not immune to being asked to leave, the tenure process is designed so that persons who are likely to be noncontributors or underachievers move on prior to the tenure decision.
Those who are not contributing and have become tenured are able to be removed either through "show cause" (in effect, a kind of impeachment proceeding) or, more often, through lack of salary increases, the addition of undesired work schedules, providing tips on job searches in other states, and so forth. Salary raises are not automatic in any institution for faculty members. A discussion was held on whether any expectations were held for faculty contributions after tenure was achieved and whether faculty who are tenured have different responsibilities from those who are not in terms of faculty governance. Staff and institutional representatives present answered affirmatively on both these questions. For example, the Task Force agreed that capturing the wise use of sabbatical time in this performance indicator is important.

The Task Force also developed a consensus that this performance indicator should be crafted in such a manner as to answer whether an institution is using best practices for post-tenure review processes. This would permit the indicator to be evaluated as a yes/no proposition, and the benchmarking group could benchmark it at whatever percent it might find reasonable. Several institutional representatives suggested that the Task Force consider a statewide five-year time period for post-tenure review. In response to the positive things said about maintenance of tenure in South Carolina's public institutions, Mr. Wilson noted that a public relations campaign is needed so that the relationship of tenure to the recruitment, retention, and creativity of excellent faculty members is understood more fully by critical publics.

A brief recess was held in the proceedings beginning at 3:30 and ending at 3:45 P.M.

The Task Force then turned its attention to (2)(D) Compensation of the Faculty. After listening to considerable discussion on the issue of recruiting faculty from national pools and hearing Mr. Sheheen recount that Act 359 stipulates that South Carolina is to be not just a national but a global leader in higher education, the Task Force decided the goal of the compensation indicator should be to have compensation reach national norms.

Dr. Stockwell agreed with Mr. Sheheen's assessment and suggested that the same logic should be applied to other issues. He said we cannot look at South Carolina's institutions at any level as common denominators. For example, USC-Spartanburg sees itself as one of 50 "metropolitan universities" nationwide; Clemson is one of approximately the same number of land grant universities; USC-Columbia compares itself to the American Association of Universities group; and so forth. So USC-Spartanburg, although a "comprehensive four-year university," should not be compared with The Citadel and Winthrop, which are also in this category, but rather with other national metropolitan universities, of which there are six that are highly regarded. Similar national norming groups are what should constitute the benchmark groups to be used for assessing all the public institutions in South Carolina under this legislation, he said. Mr. Wilson agreed and stated that unless there was some objection this part of the minutes should be placed in bold type so that the benchmarking committees do not lose sight of its importance. There was tacit agreement with the sentiment that South Carolina does not compare favorably with North Carolina in higher education salaries.
While the Task Force listened to institutional representatives discuss a variety of reasons for low faculty salaries, the Task Force agreed that through Act 359 the General Assembly has, with this performance indicator, stipulated one way institutions might strive to provide high faculty salaries if they wish to be funded under it. Discussion centered around the idea that an upward trend line over a five-year period would allow an institution to be rewarded more through this indicator. Likewise, the Task Force agreed that this performance indicator could be crafted so that one mean average could be calculated for the entire faculty of each institution, even though data would be collected by discipline and professorial rank. The Task Force further agreed that this single measure should be the weighted mean of all the standard deviations.

Discussion was held about whether "compensation" should include benefits. It was pointed out that while benefits packages in South Carolina are only approximately 17 percent (compared with a national average of 23-30 percent), they are nevertheless tied automatically to salary by law, and institutions have no control over them. Therefore, at best this measure would be redundant and, at worst, might be difficult to collect. Finally, the Task Force decided to look solely at salary.

The Task Force focused next on the performance indicator (2)(E) Quality of Faculty: Availability Outside the Classroom. It was agreed after discussion that this indicator is both one of "user-friendliness" and intimately related to an evaluation of the faculty advising system on our campuses. Although the technical college faculty members are required to be in their offices eight hours per week, there were reported to be real problems with this arrangement and similar ones on four-year campuses. Faculty and administrators present from the institutions argued that the development of e-mail, the Internet, voice mail, distance education, and other sophisticated electronic equipment, coupled with the growth of large numbers of nonresidential/nontraditional students, makes the enforced-office-hours solution both archaic and unresponsive to student needs. Mr. Sheheen stated that his view was that the General Assembly wanted to see real faculty members meeting with real students in real time at least as a part of this indicator. Dr. Stockwell and others stated that the real issue was student satisfaction with faculty availability outside classroom hours, whatever form that contact might take.

Much questioning followed concerning the amount, timing, and type of surveying and the student populations to be surveyed in order to assure an accurate measuring of this performance indicator. As a result of the discussion, the Task Force agreed that it was important that the survey instrument involve many students representing different levels, including freshmen, where retention is a concern. The issue of nonfaculty advisors doing academic advising was raised by a few institutional representatives present.
Mr. Wilson then proposed that the following measure be given approval by the Task Force for this indicator: the percent of teaching faculty for whom students report satisfaction with their availability outside the classroom, and the percent of students reporting satisfaction with the availability of their faculty advisors. The membership of the Task Force requested that the staff develop this idea in preparation for the next meeting, including preparation of definitions for "availability outside the classroom" and "faculty advisor."

Finally, at the request of Mr. Avery, the Task Force again discussed the performance indicator (2)(B) concerning the relative weight to be attached to the evaluations of students, administrators, and peers. Two competing concerns were expressed in the discussion which followed: 1) that student input concerning faculty performance not be lost; and 2) that the empowerment of administrators to move faculty in particular directions for the good of the institution not be impeded or disregarded, as might happen if student evaluations were weighted too heavily. Task Force members agreed that the weighting of student evaluations should be significant but did not reach consensus on what that weighting should be in more specific terms. Dr. Morrison offered to prepare some information on student evaluations for the next meeting in line with the "best practices" research to be conducted on performance indicators. Mr. Wilson closed the discussion by observing that in his view the inclusion of "best practices" would bring both these positions together in the final determination of the performance indicator and its benchmarking.

Dr. Smith then distributed a chart to show in summary fashion what some other states are doing on performance indicators. Mr. Wilson thanked him and Dr. Morrison for their assistance. He asked if there were further comments. Hearing none, he declared the meeting adjourned at 5:26 P.M.
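[Editor's illustration] The single compensation measure agreed to above ("the weighted mean of all the standard deviations") is described only in words in the minutes. The sketch below, in Python with entirely hypothetical salary figures, shows one possible reading: each rank-and-discipline cell's deviation from the national mean is expressed in standard-deviation units and then averaged, weighted by the number of faculty in the cell. The interpretation and the data are assumptions, not the Task Force's adopted method.

# Illustrative sketch only -- hypothetical figures, assumed interpretation of the
# "weighted mean of all the standard deviations" compensation measure.

# Each cell: (rank, discipline, institution mean salary, national mean salary,
#             national standard deviation, number of faculty in the cell)
cells = [
    ("Professor", "English",     52000, 58000, 6000, 14),
    ("Professor", "Engineering", 71000, 74000, 8000,  9),
    ("Assistant", "Business",    44000, 47000, 5000, 11),
]

def standardized_deviation(inst_mean, natl_mean, natl_sd):
    """Deviation of the institutional mean from the national mean, in SD units."""
    return (inst_mean - natl_mean) / natl_sd

# Weight each cell's standardized deviation by the number of faculty it contains,
# then average, yielding one figure for the whole institution.
weighted_sum = sum(standardized_deviation(m, nm, sd) * n
                   for _, _, m, nm, sd, n in cells)
total_faculty = sum(n for *_, n in cells)
single_measure = weighted_sum / total_faculty

print(f"Weighted mean standardized deviation: {single_measure:+.2f} SD from national averages")

Under this reading, a value of zero would indicate parity with national averages, a positive value salaries above them, and a negative value salaries below them.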
DRAFT PERFORMANCE INDICATORS
ACADEMICS TASK FORCE
SEPTEMBER 3, 1996

(2) Quality of Faculty (B) Performance review system for faculty to include student and peer evaluations

The quality of the faculty as evidenced by the institution's performance review system is to be measured as: the percent of the criteria stipulated in the "Best Practices for a Performance Review System for Faculty" document (Attachment 1) which are incorporated into the institution's own performance review system.

(2) Quality of Faculty (C) Post-tenure review for tenured faculty

The quality of the faculty as evidenced by the institution's post-tenure review system for tenured faculty is to be measured as: the percent of the criteria stipulated in the "Best Practices for Post-Tenure Review" document (Attachment 2) which are incorporated into the institution's own post-tenure review system.

(2) Quality of Faculty (D) Compensation

The quality of the faculty as indicated by compensation is to be measured as: the average deviation (expressed in standardized units) of faculty salaries by rank, discipline, and type of institution from national averages.

(2) Quality of Faculty (E) Availability of faculty to students outside the classroom

The quality of the faculty as indicated by their availability to students outside the classroom is to be measured as:

a. the percent of instructional faculty who receive a mean rating of "satisfied" or above on related question(s) on anonymous student evaluations which are submitted for all courses;

b. the percent of students who report satisfaction with the availability of academic advisors outside the classroom as shown by a mean rating of "satisfied" or above on an anonymous evaluation instrument completed at a minimum during the fall term by a representative sample of freshmen, sophomores, juniors, and seniors.

Definitions: "Availability outside the classroom" includes personal contact between faculty and students during office hours and other scheduled appointments as well as contact through e-mail, the Internet, telephone, correspondence, and other media. "Faculty advisors" refers to those faculty or staff who advise students with respect to their course schedules and degree requirements as well as to those institutional personnel who interact with students in such contexts as sponsorship of student clubs, honor societies, publications, research projects, and other extracurricular activities in which there is faculty participation.

Finally, for your information, I have included some brief information about student evaluations (Attachment 3).
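[Editor's illustration] Draft indicator (2)(E) describes its two percentages in prose. The sketch below shows one way they might be tallied; the rating scale, the cutoff taken to correspond to "satisfied" (4 on an assumed 5-point scale), and all data are hypothetical, since the actual survey instrument and definitions were left to later development by the staff.

# Illustrative sketch only -- hypothetical ratings and an assumed 5-point scale
# for the two availability measures in draft indicator (2)(E).

SATISFIED = 4  # assumed scale point corresponding to "satisfied"

# Part a: anonymous course-evaluation ratings on the availability question,
# grouped by instructional faculty member.
faculty_ratings = {
    "faculty_001": [5, 4, 4, 3, 5],
    "faculty_002": [3, 3, 4, 2],
    "faculty_003": [4, 5, 5, 4],
}

def mean(xs):
    return sum(xs) / len(xs)

satisfied_faculty = [f for f, ratings in faculty_ratings.items()
                     if mean(ratings) >= SATISFIED]
pct_faculty = 100 * len(satisfied_faculty) / len(faculty_ratings)

# Part b: fall-term survey of a representative sample of freshmen, sophomores,
# juniors, and seniors on satisfaction with faculty-advisor availability.
advisor_ratings = [5, 4, 2, 4, 3, 5, 4, 4]
pct_students = 100 * sum(1 for r in advisor_ratings if r >= SATISFIED) / len(advisor_ratings)

print(f"(a) {pct_faculty:.0f}% of instructional faculty rated 'satisfied' or above")
print(f"(b) {pct_students:.0f}% of sampled students satisfied with advisor availability")

The two printed percentages correspond to measures (a) and (b) of the draft indicator and would presumably be reported side by side for benchmarking.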
Attachments: 3

Agenda Item 4

Student Evaluations of Faculty Teaching

Numerous investigations have been undertaken concerning the validity of student evaluation of faculty teaching. In their 1993 article, Marsh and Bailey provided an overview of this research and determined that such evaluations are: (1) reliable and stable; (2) primarily a function of the instructor who teaches a course rather than of the course that is taught; (3) relatively valid against a variety of indicators of effective teaching; (4) relatively unaffected by a variety of variables hypothesized as potential biases to the ratings; and (5) seen to be useful by faculty as feedback about their teaching, by students for use in course selection, by administrators for use in personnel decisions, and by researchers.

Braskamp and Ory (1994) provide further data which validate the usefulness of student evaluations of faculty teaching. Their study provides a table of factors influencing student evaluation of faculty or course. These are summarized below:

Administration
  Student anonymity:      Signed ratings are more positive than anonymous ones
  Instructor presence:    Ratings more positive if instructor stays in class
  Directions:             Ratings more positive if stated to be used in promotion
  Timing:                 Ratings lower if given during final exam rather than during class

Nature of Course
  Required/elective:      Ratings higher in elective than required courses
  Course level:           Ratings higher in higher-level courses
  Class size:             Smaller classes tend to receive higher ratings, but low correlation
                          between class size and student rating
  Discipline:             In descending order of ratings: arts & humanities, biological &
                          social sciences, business, computer science, math, engineering,
                          & physical sciences

Instructor
  Rank:                   Ratings higher for professors than teaching assistants
  Gender:                 No significant relationship, but ratings slightly favor women
  Personality:            Warmth/enthusiasm generally related to overall rating of teacher competence
  Years teaching:         Rank, age, and years teaching unrelated
  Research productivity:  Positively correlated, but minimal correlation

Students
  Expected grade:         Students expecting a high grade give higher ratings than those
                          expecting a low grade
  Major or minor:         Majors rate more positively than non-majors
  Gender:                 Student gender not related to overall evaluation, but students
                          tend to rate same-sex faculty slightly higher
  Personality:            No relation with student personality

Instrumentation
  Item placement:         No relationship
  Number of scale points: 6-point scales have slightly more varied responses and higher
                          reliability than 5-point scales
  Negative wording:       No relationship
  Labeling of scale points: Labeling only end-points gives slightly higher average ratings
Agenda Item 5

OBSERVATIONS ON BACKGROUND OF ACADEMICS TASK FORCE INDICATORS

1. Community and public service activities of faculty for which no extra compensation is paid

The study committee recognized that faculty have a variety of responsibilities on campus for which they are compensated in their basic salary. The committee also recognized that faculty accept consulting jobs in government agencies and private industry for which they receive compensation. But the committee felt that an additional mark of desirable faculty would be citizenship and civic contributions for which faculty receive no compensation, not unlike the expectations held for outstanding citizens in other professional and vocational fields.

2. Ratio of full-time faculty as compared to other full-time employees

This is again designed to be a measure of the relative emphasis on the academic mission of the institution. Are there more administrative and support personnel at the expense of investment in faculty? The desirable end was perceived to be emphasis on faculty personnel to the maximum feasible extent, which would benefit the instructional program and, ultimately, students.

3. Accreditation of degree-granting programs

The General Assembly has long required the Commission to report on the number and percentage of programs eligible for national accreditation which achieve national accreditation. National accreditation is seen as a basic hallmark of the quality of programs, recognized by higher education peers in the country, by the business community, and by the public. The Commission has required some programs to achieve national accreditation to ensure minimum quality.

4. Institutional emphasis on quality of teacher education and reform

One of the primary missions of colleges and universities is to prepare teachers for the public school system in South Carolina. Therefore, the quality of the public schools, of tremendous concern to the General Assembly, is directly linked to the quality of teacher education programs. Additionally, a number of changes have been proposed in teacher education programs by national study groups, and some changes have been mandated by the General Assembly. This indicator is designed to ensure priority attention to this function of higher education institutions and also to ensure that teacher education programs are constantly updated to meet the new educational needs of the public schools.

5. Transferability of credits to and from the institution

It is fair to say that perceived confusion about the transfer of credits among public institutions has been, for decades, one of the most prominent irritants in the General Assembly. Students, and their parents, become extremely irate when courses do not transfer easily and when students have to repeat identical or similar courses because they do not transfer among institutions. Because of the prominence of the problem in the past, the General Assembly recently granted substantial new mandatory authority to the Commission on this subject. A comprehensive transfer policy has been adopted by the Commission, the provisions of which are mandatory for public institutions in the state.