                              September 13, 1996
 
 
 MEMORANDUM 
 
 
 TO:        Mr. Larry Wilson, Chairman, and Members,
        Performance Indicator Task Force on Academics
 
 FROM:  Dr. Gail M. Morrison
 
 RE:        Agenda and Materials for Task Force Meeting
            September 18, 1996
            1:30 p.m.
 
    As agreed upon previously by members of the Task Force, our next meeting will
 be held on Wednesday, September 18, 1996, here in the Commission's main conference
 room, beginning at 1:30 p.m.
 
    A suggested agenda is listed below:
 
    1.  Consideration of the Minutes of September 10, 1996
 
    2.  Consideration of Three New Indicators: (8)A; (9)A; (9)B
        a.  Observations on Legislative Background
        b.  Staff suggestions
 
    3.  Consideration of Indicators Drafted from September 10
        Discussions: (2)F; (3)C; (3)D; (3)E
        
    4.  Review of Best Practices Documents for Indicators (2)B and (2)C
 
    5.  Consideration of Standardized Questions to be Asked in Support
        of Indicator (2)E
 
    6.  Review and Approval of All Indicators 
 
    7.  Discussion of Suggestions and Observations from Task Force to
        Steering Committee, CHE, and General Assembly
 
      Because this is the Task Force's last meeting, I have suggested that the last
 three new indicators be discussed first.
 
Approved as Amended by Task Force, September 10, 1996

                                  MINUTES
                 ACADEMICS TASK FORCE PERFORMANCE INDICATORS
                             September 3, 1996
                        CHE Conference Room, 1:30 P.M.

Task Force Members Present: Mr. Stephen Avery, Dr. John Britton, Ms. Juanita Bulloch, Mr. Frank Gilbert, Dr. Martha Herbert, Mr. Douglas McKay, Dr. John Stockwell, Mr. Larry Wilson (Chairman).

Task Force Resource Persons Present: Dr. Wanda Hayes (USC-Aiken); Dr. Joe Prus (Winthrop University); Dr. Gail Morrison, CHE Associate Commissioner for Academic Affairs; Dr. Mike Smith, CHE Associate Commissioner for Special Projects; Dr. Lynn Kelley, as recorder (CHE-Academic Affairs).

Also present: Mr. Fred Sheheen, CHE Commissioner; faculty and staff representatives of the institutions of higher education; and state government representatives.

Mr. Wilson opened the meeting by stating that Dr. Layton McCurdy had been unable to attend these meetings because of recent surgery. He then raised the following procedural matters:

   1. He asked the group about posting the minutes of the Task Force on the Internet. After discussion, the Task Force agreed to do this with the proviso that a disclaimer of "Draft minutes awaiting approval" appear before each set until they have been submitted and approved by the Task Force.

   2. Mike Smith distributed a paper from Dr. Jack Parsons, Chair of the Council of Faculty Chairs, listing several items which that group wished to have the Task Force consider in its deliberations.

Mr. Wilson then requested a motion for approval of the minutes. Dr. Britton suggested several changes, which the Task Force accepted. Dr. Prus suggested a change, which the Task Force accepted. Dr. Britton then moved (seconded, McKay) that the minutes be approved as amended. Mr. 
Avery stated that the minutes of the Task Force were principally for the purpose of serving as a distillation of the thinking of the Task Force and that their distribution to many audiences might have the unintended consequence of unnerving persons not in attendance. He stated that he had this concern, although he certainly wanted an open process. Others pointed out that minutes of a public agency and its subgroups are necessarily available to the general public upon request.

Drafted Performance Indicators

Mr. Wilson asked the Task Force next to focus on the drafted performance indicators which the staff of the Commission had been requested to prepare for the meeting of September 3. He began by directing the group's attention to (1) Mission Focus, (B) Curricula offered to achieve mission. Dr. Stockwell noted that "curricula that achieve the mission" of institutions seemed to be incorporated to some extent under the drafted performance indicator in points (2) A-E in a positive sense and in (2) F in a negative sense. Mr. Avery stated that the word "offered" reflects the title of the draft as developed by the staff and questioned whether this was found in the legislation. Dr. Morrison stated that it was found in Act 359 of 1996. Mr. Wilson asked if there were further comment. Hearing none, Dr. Britton moved (seconded, Mr. Gilbert) to accept the draft language of the Performance Indicator on (1) Mission Focus (B) Curricula Offered to Achieve Mission. The motion passed unanimously.

Mr. Wilson then requested that the Task Force examine the draft for the performance indicator on (2) Quality of Faculty (A) Academic and Other Credentials of Professors and Instructors. Dr. Morrison explained that two different dimensions are captured in parts "a" and "b". Part "a" is expected to include 100 percent of all institutions since it is necessary, essentially, for accreditation. Part "b" was intended to reward institutions that exceed this baseline. 
For example, while the arts and sciences faculty in the technical colleges need only have a master's degree with 18 hours in the discipline which they teach, those institutions whose faculty greatly exceed this standard would be rewarded. Dr. Britton expressed concern that the benchmark committee might not pick up on the nuance that for technically-based associate degree faculty members the "b" standard might not be applicable as a terminal degree. Dr. Stockwell suggested that a footnote be appended to this Performance Indicator to make such a nuance clear. The Task Force agreed to this suggestion. Further discussion was held about the use of the SACS criteria. At the conclusion of this discussion the group reaffirmed the usefulness of the SACS criteria for this performance indicator. Dr. Garrison (USC, Associate Provost) stated that the University would like to see a category called "research dollars developed by faculty" as a criterion, at least for the research institutions, as part of faculty credentials. Mr. Wilson responded that the performance indicators have to be similar kinds of phenomena similarly measured across the state if they are to achieve their purpose. Mr. Wilson asked for discussion of "b", stating that in his opinion its meaning and intent needed clarification. Mr. Avery stated that it appeared to him that it was for those who exceeded the degree categories as stated in the SACS criteria. Mr. Gilbert expressed some concern that the percentages of large institutions' faculties, as opposed to the smaller ones, might work in favor of large institutions receiving more funding under this performance indicator. Dr. Melford Wilson (Winthrop, Interim Vice President for Academic Affairs) stated that he felt "b" was going to create problems, since each faculty member would consider that he/she exceeds SACS criteria. Mr. 
Wilson said that without having to recall everything that was discussed at the last meeting about this indicator, it was clear to him that for part "a" all institutions necessarily will meet the criterion, and that for part "b" the intent is to provide a qualitative dimension to an institution's efforts. He said this effort to incorporate quality in the performance indicators is critical, since it reflects the intent of the General Assembly as it sought to move away from student enrollment-driven funding. Dr. Stockwell then moved (seconded, Avery) that the Performance Indicator (2) Quality of Faculty (A) Academic and Other Credentials of Professors and Instructors be adopted. The motion was approved unanimously.

FIRST DISCUSSION OF THE SECOND SET OF PERFORMANCE INDICATORS

Mr. Wilson thanked Mr. Sheheen for putting together a summarized staff understanding of the intent of the General Assembly, based upon the November 1995-February 1996 discussions of the blue ribbon committee on higher education reform. Copies of this summary were distributed to the Task Force members.

Mr. Wilson then asked that the Task Force turn its attention to Performance Indicator (2)(B): Performance review system for faculty to include student and peer evaluations. He asked Dr. Morrison if there is anything statewide which is formally established and satisfactory already in existence on this issue. She responded that while all institutions have something in this regard and all have some committees to handle performance review, no standardized review system exists. She said that there is a separate performance indicator for promotion and tenure and, therefore, she suggested that the Task Force might wish to limit its attention on this performance indicator to nontenured faculty only. In the discussion that followed, Mr. McKay stated that he was not persuaded that tenured faculty should be left out of this performance indicator. Dr. 
Prus suggested that this performance indicator's operational definition might be so constructed that flexibility is built into it to allow for the promotion and tenure review and to allow the process to differ from department to department within the institutions. Dr. Stockwell stated that all institutions do evaluations, especially for promotion and tenure, but also as an annual performance review. He said that the promotion and tenure evaluations are done in a very systematic manner. Mr. Wilson asked what would happen with this performance indicator if two institutions of substantially different quality both say that they have, for example, a high percentage of faculty who have very positive performance reviews. Dr. Stockwell responded that whatever definition might be given to this performance indicator, it needs to be one which assures quality by assuring that systematic processes exist and that it has implications for faculty development. Mr. Avery asked how the Task Force would deal with the fact that the four different types of institutions would not permit a single form to work. Dr. Garrison responded by stating that she agreed with Dr. Stockwell's assessment of what elements need to be contained within this performance indicator's operational definition. She added that any evaluation system should pass the test of two questions, to wit: Is the form that is in place a comprehensive one? Is it used in a manner that has implications for faculty development and personnel administration? She said that a single form for all types of institutions would not be desirable given the diversity of their missions. Mr. Wilson asked her if that meant that in her opinion the performance indicator under review should be considered met on a "yes" or "no" basis. Dr. Morrison stated that to employ language such as "to include at a minimum" would include the notion of administrative review of faculty, not just student reviews and peer reviews. Dr. 
Britton stated that if this performance indicator is to be met in a yes/no fashion, he thought it would be very difficult for deans not to give positive reviews to their faculty almost universally. Mr. Sheheen agreed that quality in faculty review was difficult to achieve and to agree upon. Mr. Wilson said that if there were a combination of student, peer, and administrative review and all had to be answered affirmatively before a "yes" response could be given to this question, it would be a more difficult bar to surpass. Mr. McKay stated that if students rate courses, over time this process will build a good deal of evidence of quality in a faculty member. David Fleming (Clemson) stated that faculty will try to count more positive responses in order to get higher funding through this performance indicator. Mr. Wilson said that he did not perceive this indicator being used in that manner. He said that the Task Force was trying to frame the operational definition in such a manner that it answered the question "Is the appropriate process in place to be able to render a judgment?"; the question is not "What is the judgment?" Dr. Herbert read from the SACS Criteria that faculty evaluation must be conducted periodically, must have definite criteria, must be consistent with the goals of the institution, and must show that it is used for improvement of the institutional program. Mr. Wilson said that it is important that these processes involve peers and students. Dr. Prus asked who would judge the rigor of the process. For example, he asked, how is one to judge the fact that there are many or few drop-outs at an institution? While some might argue that many drop-outs demonstrate the rigor of the institution, others might argue that few drop-outs show the institution is doing a superior job at retention. The existence of a system of evaluation will not get at the answer to this kind of question. Mr. 
Gilbert said that it is important that the Task Force and the benchmarking group note the importance of arriving at measurable criteria for all types of institutions for this performance indicator. Dr. Reed Johnson (Francis Marion) stated that CHE is concentrating on peer evaluations for individual faculty members, but that it is important not to lose sight of faculty members' work outside the classroom. He said that from Act 255 there were five questions on general education which had been used to look at weaknesses in the institutions. Dr. Morrison stated that Act 255 was focused on institutional effectiveness; the current Task Force is focused on individuals within institutions. Mr. Sheheen commented that the SCHEA Network had been in the process of developing some common questions through a survey of institutional documents for evaluation of faculty. He said that a working committee of the planning committee had developed this idea. Dr. Harry Matthews (USC-Columbia, Director of Institutional Research) said that this kind of process would have to be undertaken fully if the Task Force decided to move in this direction. Mr. Sheheen commented that the SCHEA committee had already adopted this and sent it forth, but Dr. Matthews said that it had been sent forth as a matter of information without any vote or any consensus on the document. Dr. Britton commented that it was of interest to him that the document was, however, sent forth by the committee. Dr. Mike Smith commented that the Task Force could stipulate the elements to be included for a good peer, student, and administrative evaluation and, therefore, evaluate how good a job each of the institutions is doing in these arenas. Dr. Stockwell stated that in his opinion the evaluation had to be mission-driven and had to show clear linkages between this mission and faculty rewards and development systems. Mr. 
Wilson agreed and said that the ultimate objective of this process within an institution was to develop the potential of its faculty. Dr. Morrison asked Dr. Stockwell if he might provide any insights on how to measure these items; he indicated that he would want to think about this some more. Mr. Gilbert stated that the process needs to be one which demonstrates quality exists because a good process of evaluation is in place. He said that it needs to offer a kind of checklist to be sure that certain criteria exist and are measurable. So, in effect, he said, we would be evaluating the evaluation system. Dr. Stockwell stated that there are characteristics which are common to all good evaluation systems. Mr. Gilbert agreed and said that we needed to put the criteria for these in place. Dr. Morrison said that the staff could review the literature on evaluation systems and submit it to the Task Force to allow others to know what exists currently. Mr. Wilson said that this process shows that evaluation of the teaching faculty clearly goes beyond the tenure process. Dr. Prus stated that evaluation for this performance indicator needs to contain certain elements: it must outline general ideas to be reviewed along with how and when the evaluation is to take place; it cannot mandate the entire document, although certain common elements can be in all such reviews; and it needs to report the percentage of the faculty who are found to be satisfactory. Dr. Britton stated that the dilemma was whether faculties just need to meet minimum standards or whether the Task Force needs to prescribe a whole series of things which must be met. He said his point was that he felt sure that if the Task Force were to set a genuine standard, the faculty at the institutions would meet it, and that this was fine with him. Dr. Morrison stated that it was a common perception in the General Assembly that faculty are not evaluated. 
This perception, she said, is particularly held by members of the General Assembly to be true about the teaching faculty. Mr. Wilson asked if Dr. Morrison could get standards/criteria for what exists in the literature of evaluation. She affirmed that she would. He then asked if CHE could develop a draft of this performance indicator for the next meeting in which a "best practices" scenario might be included. He said that this would truly, in his opinion, suggest that such a system might lead to developing the faculty to a higher level. He asked if there was Task Force consensus on this. The members affirmed that there was. Dr. Britton asked if Dr. Morrison might send out this draft in order for it to be received prior to the next meeting. She responded affirmatively.

Mr. Wilson then asked that the Task Force consider item (2)(C) Post-Tenure Review for Tenured Faculty. Mr. Wilson asked if processes were in place that sufficiently answered the question whether continuation of tenured status was justified. Dr. Morrison and Dr. Mike Smith pointed out that the technical colleges do not even have tenure. Mr. Wilson asked if post-tenure review was ubiquitous in our public institutions which have tenure. He also asked for a summary of how this process works. Dr. Morrison responded by saying that there is not as much post-tenure review as there is other types of review of faculty in the system. Some institutions do this in a pro forma manner, whereas others do it with great care and rigorous analysis. She said that she felt certain that, in spite of the great diversity which exists in the public institutions, the heated debate in the 1996 General Assembly over the abolition of all tenure was causing a great deal of rethinking of the issue of post-tenure review on the public campuses. Mr. Wilson asked if tenure is ever revoked. Dr. Morrison stated that it is, but that such a decision is rarely taken. Dr. 
Prus stated that his own institution's post-tenure reviews took place every three years, but that his annual performance review had not changed at all pre- or post-tenure. Dr. Melford Wilson (Winthrop, Interim Vice President for Academic Affairs) stated that it is a grave misunderstanding to believe that "once tenured, always on the payroll." He said that there are many ways by which tenured faculty members can be, and are, disinvited by an institution. He provided several examples, including: no salary increases, difficult work schedules, tips on job searches in other states, and so forth. He also said that salary raises in academic work were not automatic as they are in state government. He stated, however, that in his experience revocation of tenure was a rarity. Gary Docker (SC Chapter, AAUP) stated that he is aware of seven cases which arose with the expressed purpose of revoking tenure. Yet in all of these, he said, the revocation did not formally occur, because lawyers for both sides worked out arrangements for "resignations" and the departure of the individuals in question. Dr. Garrison stated that it could be considered an example of system failure if there were to be large numbers of revocations of tenure, because the process itself is designed to assure that such revocations are unlikely by the very fact of selecting persons of quality to the tenured professoriate. She suggested that perhaps the CHE staff might be asked to put together a set of "best practices" criteria in post-tenure review, based upon the work of CHE staff from 1995. Mr. Wilson asked why (2)(B) and (2)(C) are not the same indicator if done properly. Dr. Morrison stated that the General Assembly wrote the law to distinguish these two, and that many faculty are not on tenure lines. Dr. Britton asked if there are any expectations for faculty professional behavior once the faculty are tenured. Dr. 
Morrison stated that there are such expectations, which are related to where one is in his/her career path in the life cycle. Mr. Wilson asked if there are differences in expectations of persons who have the same salary, academic degrees, teaching fields, etc., except that one is tenured and one is not. Dr. Morrison stated that there are different expectations for such persons in terms of committee service, expected publications, and a host of other areas. Dr. Britton asked if pay raises for tenured faculty were automatic. Dr. Morrison stated that faculty members did not automatically receive pay raises, whether or not they might be tenured. Mr. Sheheen responded to Mr. Wilson's suggestion about the relationship between (2)(B) and (2)(C) by saying that the use of sabbatical time is a big issue that distinguishes the two. Dr. Frederike Wiedemann (Lander, Vice President for Academic Affairs) stated that it takes seven years and tenure to be eligible even to apply for a sabbatical. Mr. McKay said that the former was a measure which focused on quality for all the faculty and the latter was a measure which focused on quality at the research and comprehensive institutions only. In response to some questions, Dr. Wilson stated that seven years is the standard time period for a junior faculty member to receive tenure. Post-tenure review cycles, on the other hand, vary considerably from institution to institution. Mr. Wilson asked, then, if it was possible to frame this question in such a manner that the performance indicator could answer the question "Is an institution using best practices for post-tenure review processes?" Dr. Morrison said that if the issue could be framed in such a way, the answer to it would be either "yes" or "no". Mr. Wilson said that would be fine for the Task Force's analysis; then the benchmarking group could benchmark it at whatever percent it might want as reasonable. Ms. 
Bulloch said the only problem she foresaw with this formulation of the issue was that some institutions might be doing the post-tenure review but not applying it to improve the persons or the process, or to revoke the tenure when necessary. Mr. Wilson said that those institutions doing the best job would functionally receive the most funding. Dr. Wilson stated that a major part of post-tenure review is development. In the pre-tenure period the question is "Should we retain this person?"; whereas, Dr. Wilson said, in post-tenure review the question becomes "How shall we best develop this person?" Mr. Avery asked if the CHE staff might suggest a time period for mandatory post-tenure review. Dr. Wilson stated that nationally there appeared to be movement toward a five-year period. Dr. Garrison agreed. Mr. McKay asked if tenure revocation occurred only for academic failings. Gary Docker replied that it did not. Rather, he said, it required a "show cause" (i.e., a list of charges, similar to impeachment). Mr. Avery asked what positive might be said about the maintenance of tenure. Dr. Stockwell stated that the traditional argument for tenure is the maintenance of academic freedom to allow for the expression of points of view, research conclusions, and so forth that are politically not acceptable. A scholar must be free to express his/her scholarship without concern for what is "popular" in the greater society. Mr. Wilson asked whether the bill to dismantle tenure, had it passed in the General Assembly last year, might have created any disadvantages for South Carolina. Dr. Morrison, Mr. Sheheen, and Dr. Melford Wilson all declared that it would have been disastrous for the retention and recruitment of top-flight academicians, since these persons are recruited on a national basis to come here. Dr. Stockwell stated that tenure encourages creativity in the faculties. 
He said that American colleges and research universities are considered a magnet that attracts many of the world's brightest students. Mr. Wilson said that we need a public relations campaign to get this idea of the relationship of tenure and creativity understood. Mr. Sheheen said that a staff paper by Dr. Loope last year was sent to the General Assembly and made this point among others.

A brief recess was held from 3:30 to 3:45 P.M. Upon return of the members, Mr. Wilson asked that the Task Force turn its attention to (2)(D) Compensation of the Faculty. Mr. Wilson stated that it is clearly important to have competitive compensation. This part is easy to understand, but, he said, the more difficult part to grasp is at what point compensation becomes counterproductive. That is, he said, the question is: What is the goal of compensation: at the mean, below the mean, or above the mean? Additional questions are: At what point at the upper limit does compensation become counterproductive? How do we award this? Mr. Sheheen stated that five years ago the staff of the Commission had said in a report that we should use national data in South Carolina for benchmarking our salaries, since recruitment of faculty is done by national market standards, rather than state or regional ones. Furthermore, he stated, the law (Act 359 of 1996) stipulates that South Carolina is to be not just a national but a global leader in higher education, and this cannot be done unless we compete nationally for faculty at the very least. Mr. Wilson asked if there were any disagreements with Mr. Sheheen's assessment. There were none. Therefore, the Task Force decided that salary numbers and percentages should reflect national norms. Dr. Stockwell agreed with Mr. Sheheen's assessment and suggested that the same logic should be applied to other issues. He said we cannot look at South Carolina's institutions at any level as common denominators. 
He said, for example, that USC-Spartanburg sees itself as one of 50 "metropolitan universities" nationwide; Clemson is one of approximately the same number of land grant universities; USC-Columbia compares itself to the AAU group; and so forth. So, USC-Spartanburg, although a "comprehensive four-year university", should not be compared with The Citadel and Winthrop, which are also in this category, but rather with other national metropolitan universities, of which there are several that are highly competitive. Similar national norming groups are what should constitute the benchmark groups to be used for assessing all the public institutions in South Carolina under this legislation, he said. Mr. Wilson agreed and stated that unless there was some objection this part of the minutes should be placed in bold type so that the benchmark committees do not lose sight of its importance. Dr. Prus stated that we can get means and standard deviations for national data on faculty compensation. Mr. Fleming (Clemson) said that we can get data for the past five years by rank, discipline, and type of institution. Mr. Wilson said this is encouraging, but asked how to measure success on the performance indicator. Mr. Sheheen said this should be done by standard deviation from the mean. Mr. Wilson inquired at what point in the standard deviations success was to be capped, and Mr. Sheheen asked why we would want to stop. Mr. Wilson asked if, for example, 100 or 120 percent would be considered "tops". Mr. Fleming said this question could be left to the benchmarking group, but Mr. Wilson replied that they were to be limited to the question of deciding the weight of this performance indicator; he said it was this Task Force's responsibility to decide upon the measure itself. Mr. Docker asked how the Commission on Higher Education felt about the relational aspects of faculty compensation to administrator compensation as part of this performance indicator. 
He said that, for example, a scenario of 3 percent increases for faculty and 6 percent increases for administrators needed to be examined. Someone suggested that such out-of-sync increases between faculty and administrators probably did not occur. Mr. McKay asked what would happen under the developing operational definition if an institution, in an effort to get rid of an entire faculty department that was not producing, provided the Philosophy Department with low raises. He said he felt the performance indicator might discriminate against such rational organizational behavior by punishing the institution for not being at or above the national norm for Philosophy. Mr. Wilson said that it would be his hope/belief that in such a scenario those not being rewarded would move on and other, more productive philosophy professors would come in and be paid at or above national averages. Dr. Britton pointed out that although North Carolina does not have the best scores in elementary and secondary schools, its institutions of higher education are the envy of South Carolina. We must compete at least with them, he said. Dr. Wilson said that faculty get paid less because institutions have less funding to go around. Mr. Sheheen said that this is not his view: institutions were free to choose where to put their funding once they get it. Dr. Wilson said that while that might be the case up to a point, the fact that 80-year-old buildings require heavy maintenance reduces the applicability of such an argument in the real world. Mr. Wilson responded by stating that, nevertheless, the General Assembly has stipulated with this performance indicator that one way by which institutions will be rewarded is by the amount of dollars they are providing to their faculties in compensation. If the five-year trend changes upwards, an institution will be rewarded more through this indicator. Dr. 
Wiedemann stated that there are many reasons why institutions might have lower salaries than some others. For example, she stated, an institution might have a large number of persons tenured with only a master's degree. In order to encourage them to move on or to complete their terminal degree, the institution might make salary adjustment dependent upon the degree having been completed. Mr. Wilson thanked Dr. Wiedemann for bringing this point up for understanding. He agreed with her that a small space should be accorded institutions so that they might tersely explain their reasons for low salaries. Mr. Gilbert said that he felt outcomes tests (e.g., EEE) should be tied to faculty performance in being considered for salary increments. Mr. Wilson said that this was a good point: if the objective is to get compensation to the national level, then student performance (as a result of faculty intervention) should be expected to meet that same level. Dr. Wilson suggested that we should be looking at the direction in which all faculty compensation at an institution is moving. Mr. Wilson asked if this implied that the measure on this performance indicator would be one mean average for the entire faculty. Dr. Morrison stated that this would be valid. Mr. Wilson suggested that this single measure be the weighted mean of all the standard deviations. The Task Force agreed. Dr. Morrison asked if salary should be the measure of compensation or just the primary measure. She suggested that benefits also might be considered part of this mix. Mr. Wilson asked if there were any national studies on this. Dr. Morrison stated that IPEDS collects such data. After considerable discussion, however, the Task Force decided not to include this. The Task Force reasoned that while benefits packages in South Carolina are only approximately 17 percent (and average between 23 and 30 percent nationally), they are nevertheless tied automatically to salary by law. 
Therefore, at best this measure would be redundant; at worst, it might be difficult to collect. Mr. Gilbert suggested that housing is a benefit at some of our public institutions. Mr. Sheheen agreed that this is the case and that it is unfortunate because it is also illegal, according to the Legislative Audit Council. Dr. Reed Johnson stated that benefits data for South Carolina should be included precisely because they point out the tremendous gap between South Carolina and the rest of the nation. However, Mr. Sheheen and Mr. Wilson argued that the inclusion of benefits data in this performance indicator cannot change institutional behaviors. Mr. Wilson next requested that the Task Force focus its attention on performance indicator (2)(E) Quality of Faculty: Availability Outside the Classroom. Mr. Wilson said that he understood that this was added as a "user-friendly" indicator. Mr. Sheheen said it was more than this: it is related intimately to an evaluation of the faculty advising system on our campuses. Dr. Holderfield said that at the technical colleges, each instructor is required to hold eight office hours per week. However, he said, there are real problems with this, since not being in the office is counted as a negative, and many students are not aware when faculty are there. He also said that e-mail, voice mail, and similar tools had substantially changed the out-of-class availability of faculty to students in ways that simply were not possible even ten years ago. Dr. Stockwell said the issue cannot be measured simplistically by the "eight hour" rule of office hours. He said the real issue is student satisfaction. Mr. McKay asked if it were not true that the standard work week for all state employees (including faculty) is either 37.5 or 40 hours, depending upon the discretion of the agency. There was general agreement that this is the expectation. Mr. 
Fleming said that he would not be in favor of restricting faculty to their offices in some lock-step manner to meet this indicator. Instead, he said, there are examples of creative faculty members, like one he knows who has promised students that he will respond to any inquiry--phone, e-mail, person-to-person--within four hours of receiving it. That instructor received very high ratings from students. Dr. Wanda Hayes stated that commuter populations pose issues for faculty-student interchange that did not exist when virtually all student populations were residential. The faculty member can be in the office now for eight office hours, but the students cannot be there. Thus, the effectiveness of eight disciplined office hours is dissipated and made inconsequential to whatever communication needs students might have, she said. Mr. Wilson said he understood from these remarks that some variability needs to be built into the measure of this performance indicator. Ms. Bulloch stated again that, from what she had heard, e-mail, phone usage, and similar channels were important. Mr. Sheheen stated, however, that the General Assembly had made it plain last year that they want to see live instructors meeting with live students. He said they will not be pleased with arguments extolling the virtues of virtual communication by internet, voice mail, and other electronic mechanisms. Dr. Garrison disagreed with Mr. Sheheen's assessment. She stated that she, too, sat through many meetings of the General Assembly and that in her view the legislators were considering what worked when they were in college, but were not aware of some of the new choices. She said that some measures must be focused on student satisfaction. Mr. Sheheen said that he thought the proposals from the statewide planning group and the tech system were intriguing to consider. 
He said that the percentage of time in a work week devoted to out-of-class availability of the faculty to the students seemed workable as a basis for measurement. Dr. Reed Johnson asked if faculty time spent with student groups might be counted herein. Mr. Wilson said that this was part of (2)(F), not the current indicator under consideration. Dr. Morrison agreed, stating that Dr. Johnson was referring to items which properly were more "community" in orientation, as opposed to faculty advising in nature. Mr. McKay and Dr. Britton stated that this performance indicator needed at least to stress to some degree the need for human-to-human contact between the faculty and students. Dr. Betsy Brown (Winthrop) said that in her opinion the issue needed to be focused around the question of student satisfaction. She gave as an example the fact that she teaches distance learning classes in which she is never in personal contact with her students. Dr. Prus stated that the problem of individual student advising, student clubs, and like groups would never be captured on student satisfaction surveys which are related solely to course evaluations. Mr. Avery said that another dimension to be considered was how often whatever measure was agreed to would be sampled. He asked if there was any agreement on the following idea: to measure all advising by having seniors complete a survey; and to measure course evaluation satisfaction each year for every course by all students. Mr. Gilbert said that just to ask seniors about advising was to lose the richness of response that would be forthcoming from those who drop out before becoming seniors. Mr. Sheheen said that legislation on mandatory advising came about because a student complained to a member of the General Assembly that she dropped out of college because her advisor did not provide her with appropriate advising. Dr. 
Wiedemann asked if the issue was the advisor or advising availability, since some institutions use systems in which persons work solely or mainly as academic advisors. Mr. Fleming said the costs of evaluating this amount of material were going to be very high, in both money and time. He agreed with Dr. Morrison that those elements of advising that were related to courses taken could be attached as part of student course evaluations; but those which were programmatic, rather than course-based, could not. The Task Force discussed the timing for measuring this indicator and concluded that it should be done with some frequency. Dr. Prus suggested that a random selection of students be considered for each year. Mr. Gilbert echoed that idea by stating that he felt a random sample of seniors and all freshmen and sophomores should be evaluated each year. He placed more stress on the importance of the advising function for freshmen and sophomores, because of the importance of advising to retention. Dr. Melford Wilson agreed that retention was an important result of good advising. Mr. Wilson suggested a measure of the percent of faculty whose students reported availability outside the classroom and the percent of freshmen advised in classes who reported availability. Dr. Stockwell said that he would not want to see this limited to freshmen, but rather felt the focus should be on the percentage of students expressing satisfaction. After considerable discussion, Dr. Prus suggested that the measure might be the percent of the faculty who are considered available, measured by surveying fall-term freshmen, sophomores, juniors, and seniors on both advising and out-of-class contact. Ms. Bulloch said that we had gotten off track by discussing evaluation of faculty and the availability of instructors. All the Task Force needs to do, she said, is to measure availability. 
Some discussion was held then on the uses to which non-faculty advisors were put and how this was to be considered in availability. Mr. Sheheen commented that this was a different model, not to be considered under this indicator. Dr. Stockwell agreed. Mr. Sheheen asked if the institutional effectiveness report on advising might be melded into the measure for this indicator. Dr. Prus agreed it could be. Mr. Sheheen pointed out that in his opinion the SCHEA task force did a good job of defining advising. He said that in the past individual institutions have defined advising differently, and Mr. Wilson responded that this is irrelevant to the performance indicator's definition. Mr. Wilson then proposed that the following measure be given approval by the Task Force for this indicator: the percent of students reporting satisfaction with teaching faculty's availability outside the classroom and the percent of students reporting satisfaction with availability of their faculty advisors (the definition for which group shall be left to the CHE staff). The membership of the Task Force agreed and requested the staff to flesh this idea out in preparation for the next meeting. Mr. Avery then requested the revisiting of (2)(B). He said that he wanted to see the weighting of administrative, peer, and student evaluation done in such a manner that would guarantee that student evaluation was not "lost" under the others. He proposed that a one-third weighting be used for each of these. Mr. Wilson asked who was more important in evaluations--peers, customers, or superiors? Dr. Stockwell said that irrespective of the answer to that question, he would not want to lose the sense of empowerment which deans and department chairpersons needed to know was theirs when he asked them to move faculty resources in particular directions. Dr. Hayes stated that sometimes student ratings were known to drop for reasons wholly extraneous to the quality of teaching. Dr. 
Garrison said that required courses always have lower evaluations than elective courses. Mr. Wilson stated that his wife's taking courses showed him that student evaluations were done under conditions that ranged from the classroom to taverns to cake and ice cream parties. Mr. Gilbert suggested that the Commission's staff be requested to look into the literature that might exist on such weighting in other states. Dr. Morrison affirmed that she would have her staff undertake this. Mr. Wilson said that the use of "best practices" gives us some range and leeway in making these decisions. Dr. Smith then distributed a chart to show in summary fashion what some other states are doing on performance indicators. Mr. Wilson thanked him and Dr. Morrison for their assistance. He asked if there were further comments. Hearing none, he declared the meeting adjourned at 5:06 p.m.

Agenda Item 2.a.
OBSERVATIONS ON BACKGROUND OF ACADEMICS TASK FORCE INDICATORS

1. Transferability of credits to and from the institution

It is fair to say that perceived confusion about transfer of credits among public institutions has been, for decades, one of the most prominent irritants in the General Assembly. Students and their parents become extremely irate when courses do not transfer easily and when students have to repeat identical or similar courses because credits do not transfer among institutions. Because of the prominence of the problem in the past, the General Assembly recently granted substantial new mandatory authority to the Commission on this subject. A comprehensive transfer policy has been adopted by the Commission, the provisions of which are mandatory for public institutions in the state.

2. Financial support for reform in teacher education

This general discussion would relate to the Task Force's prior review of "institutional emphasis on quality teacher education and teacher reform." 
Since this new item is under the research category, it would seem to address institutional efforts to join in reform movements such as the Holmes Group, the Goodlad Project, and Professional Development Schools. Ancillary to that would be efforts to gain grant funds to participate in such efforts and to improve teacher education programs.

3. Amount of public and private sector grants

This is the most definitive indicator currently linking funding to performance in the higher education system. The Commission collects detailed data on public and private research grants from each institution, and the State matches each dollar reported with an incentive match of 25 percent.

DRAFT PERFORMANCE INDICATORS
ACADEMICS TASK FORCE
September 10, 1996

Please find listed below draft proposals for the four performance indicators discussed on September 10.

(2) Quality of Faculty (F) Community and public service activities of faculty for which no extra compensation is paid

This indicator is to be measured as: the percent of full-time faculty participating in each of three criteria (which can each be weighted and benchmarked by sector), defined as follows:

1. Service to one's profession;
2. Service to community/public using one's professional skills/knowledge base; and
3. Other service to or in the community

Note: "Community or public service activities" are to be defined as actions taken or processes presented to audiences primarily not affiliated with the institution as students, faculty, or administrators. For Criteria #1 and #2, these actions and processes must be related to the professional skills and knowledge base of the faculty members used in their work as members of the faculty; for Criterion #3, only general skills and knowledge applied as a citizen are involved. The Task Force recommends that less weight be assigned to this indicator by the "Sector Committees" because of the difficulties inherent in defining these activities and collecting appropriate data. 
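The (2)(F) measure above is a set of three simple percentages over the full-time faculty roster. As a hedged illustration only (this sketch is not part of the Task Force's materials; the criterion field names and the roster data are invented for the example), an institution's tabulation might look like the following:

```python
# Illustrative sketch of tabulating indicator (2)(F): the percent of
# full-time faculty participating in each of the three service criteria.
# The criterion names and the roster below are hypothetical.

def participation_percents(roster):
    """Return the percent of full-time faculty participating in each
    criterion, keyed by criterion name."""
    total = len(roster)
    criteria = ("service_to_profession",
                "professional_service_to_community",
                "other_community_service")
    return {
        c: round(100.0 * sum(1 for member in roster if member[c]) / total, 1)
        for c in criteria
    }

# Four hypothetical full-time faculty members with participation flags.
roster = [
    {"service_to_profession": True,  "professional_service_to_community": False,
     "other_community_service": True},
    {"service_to_profession": True,  "professional_service_to_community": True,
     "other_community_service": False},
    {"service_to_profession": False, "professional_service_to_community": False,
     "other_community_service": True},
    {"service_to_profession": False, "professional_service_to_community": True,
     "other_community_service": True},
]
print(participation_percents(roster))
```

Sector weighting and benchmarking, as the Note anticipates, would then be applied to these three percentages by the "Sector Committees."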
(3) Instructional Quality (C) Ratio of full-time faculty as compared to other full-time employees

This indicator is to be measured as: the total number of all full-time faculty members paid from unrestricted Educational and General Funds as a percent of the total number of all full-time employees paid from unrestricted Educational and General Funds.

Note: The Task Force concluded that the measure should rely on full-time faculty, not FTE faculty, and that "faculty" includes all persons holding the rank of faculty, including non-instructional faculty such as librarians.

(3) Instructional Quality (D) Accreditation of degree-granting programs

This indicator is to be measured as: the number of programs listed in the Inventory of Academic Degree Programs holding accreditation from a recognized accrediting agency as a percent of the total number of programs listed in the Inventory of Academic Degree Programs for which accreditation is available.

Note: The CHE will develop a list of recognized accrediting agencies, which may include those endorsed by the U.S. Department of Education and/or those affiliated with the Commission on Recognition of Postsecondary Accreditation (CORPA), successor to the Council on Postsecondary Accreditation (COPA), or any successor body.

(3) Instructional Quality (E) Institutional emphasis on quality teacher education and reform

In the absence of context for this indicator and given its potential scope, the following general quality and reform principles are suggested as definitional parameters for what constitutes "quality teacher education and reform":

Quality and Reform Principles

1. Promotion and enhancement of rigorous learning in the academic discipline for both pre-service and in-service teachers as a means of ensuring that teachers possess an in-depth knowledge of the subject matter content critical for successful student performance;

2. In-depth understanding and widespread use of instructional technologies and other pedagogical innovations among pre-service and in-service teachers as a means of ensuring familiarity with the most effective formats for disseminating critical knowledge to students;

3. Increased exposure to observational classroom experience, clinical experiences, and quality student teaching as a means of ensuring a strong experiential base in instructional methods and classroom management prior to full-time employment as a teacher;

4. Demonstrated collaboration between higher education and the PK-12 sector in the training of pre-service teachers and in the development of continuous improvement processes for in-service teachers; this collaboration should strive to bring state-of-the-art pedagogical and content knowledge to schools and to ensure that teacher education faculty maintain a first-hand knowledge of the needs of teachers in the classroom environment; and

5. Demonstrated commitment to enrollment and graduation of minority teachers as a means of motivating, understanding, and providing role models for minority students in the classroom.

MEASUREMENT

The instructional quality of an institution as defined by its institutional emphasis on quality teacher education and reform is to be measured as:

1. Attainment of successful initial accreditation by the National Council for Accreditation of Teacher Education (NCATE) and continued success in maintaining NCATE accreditation, benchmarked in accord with the number of weaknesses identified for the unit;

2. Percentage of eligible programs approved by the specialized professional associations through the NCATE folio review process, benchmarked in accord with the number of weaknesses identified for specific programs;

3. Percentage of school superintendents reporting satisfaction with school personnel prepared by the institution; with professional development programs; and with other services offered by the institution in the school districts, etc., as obtained under (7) Graduates Achievements (C) Employer feedback on graduates (Planning/Institutional Effectiveness Task Force);

4. The deviation, expressed in standard units, from a student pass rate of 100% on a) the professional knowledge exam of the National Teachers Examination (NTE) and b) the specialty area exams of the NTE (see data collected under (7) Graduates Achievements (D) Scores of graduates on post-undergraduate professional, graduate, or employment-related examinations and certification tests);

5. The extent to which the teacher education program is responsive to State needs as measured by a) the increase in the number of students who graduate from teacher education programs designated as subject matter "critical shortage" areas as these are defined by the State Board of Education, calculated from a baseline year; b) the decrease in the number of students (excluding minority students) who graduate from teacher education programs designated as subject matter oversupply areas as these are defined by the State Board of Education, calculated from a baseline year; c) the percentage of minority students enrolled in the institution's teacher education programs, benchmarked against an appropriate percentage; and d) the number of teacher education programs that fulfill "special needs" directly linked to institutional mission and to State needs not covered by a), b), and c) above;

6. The percent of institutional faculty in education-related disciplines participating in structured activities (e.g., as with Professional Development Schools) other than regularly taught courses with PK-12 for the primary purpose of improved quality of education and related services at the PK-12 level;

7. The number of PK-12 certified personnel participating in institutionally sponsored activities and/or courses designed to improve quality of education and related services at the PK-12 level; and

8. The number of teacher education students receiving the clinical field-based components of their program in Professional Development Schools (PDS) as these are defined in the existing CHE-PDS criteria.

Agenda Item 5
Standardized Question

At its meeting on September 10, the Task Force requested that Drs. Hayes and Prus develop standardized questions that are to be used by the colleges and universities in compiling measurement data for Performance Indicator (2) Quality of Faculty (E) Availability of faculty to students outside the classroom. Please find attached the two questions developed by Drs. Hayes and Prus at your request.

The staff would point out that the Task Force defined academic advisor more broadly than the second evaluation question apparently does. The Task Force definition includes as academic advisors those faculty who interact with students as advisors in student organizations, honor societies, student research projects conducted outside a particular course, and so on. The standardized question appears to be directed at advisors who assist students with course scheduling. Thus, the staff suggests that a third question be adopted for inclusion in the same survey instrument as the academic advisor question, as follows:

Please indicate your satisfaction with the availability of faculty advisors for student organizations, student honor societies, student publications, or comparable activities, excluding your faculty course scheduling advisor and your course instructors. Choose one response from the scale below. (In selecting your rating, consider the advisor's availability via established office hours, appointments, and other opportunities for face-to-face interaction as well as via telephone, e-mail, and other means. 
If you do not participate in activities which bring you into contact with faculty advisors, do not complete this item.)

We would suggest using for this third question the same four-point scale (very dissatisfied; dissatisfied; satisfied; very satisfied) as well as the "Recommendations for Administration of the Evaluation" proposed by Drs. Hayes and Prus for the academic advisor question.

Attachments (2)

Agenda Item 6
Review and Approval of All Indicators

We will send to you separately or distribute to you on Wednesday the indicators approved to date as displayed in the required format. In order to get the mailout to you in a timely manner, we had only enough time to complete our work on the substance of the indicators; we will return to formatting them as requested on Monday.

Agenda Item 7
Discussion of Suggestions and Observations from Task Force to Steering Committee, etc.

This item is an oral item. No written materials are included in this mailout.

Agenda Item 4
Best Practices for a Performance Review System and Best Practices for Post-tenure Review

Several institutions responded to our request for feedback concerning the performance indicators involving performance review and post-tenure review of faculty. We have tried to incorporate into the two indicators as many of the comments received as possible. These include a further definition of external evaluations as dependent upon the role and function of the faculty member, such as external review for assessment of research. For those not conducting research, however, an external review may be performed by someone external to the department. Most of the substantive comments focused on external peer evaluation, which represents a new dimension to faculty review at some institutions. Institutional feedback ranged from total elimination (one written suggestion to this effect), to recognition that the practice is not currently used, to various recommendations for its use in certain defined circumstances. 
Notably, though, the literature review concerning faculty assessment practices indicated that external peer evaluation is a widely accepted practice. The staff analysis of the literature is the basis upon which we included the external peer review component. Deletion of this factor would weaken the effectiveness of the performance review and post-tenure review system. Additionally, the State Technical College system commented regarding its inability to apply some of the practices, including external peer review, to review of its faculty. However, many of these practices are already incorporated into the Technical College review system. Perhaps others, such as use of student evaluations in faculty performance review, should be incorporated. The Commission staff contend that Tech's primary concern about how the various criteria for performance review apply to its system can be resolved at the benchmarking stage, where the possibility exists for decreased weighting or no weighting for performance review practices inconsistent with the technical college mission. Furthermore, staff hold that the State Technical College system's argument that the criteria contained in practice nine of indicator 2B should not be weighted according to mission is invalid. The entire performance indicator process is predicated on the notion of establishing a system of priorities and measuring institutional performance on these priorities within sector-specific parameters. With this as a given, the proper forum for discussing weighting for the criteria in practice nine of indicator 2B is the sector committee. We hope that the changes in the performance indicators reflect the concerns and opinions expressed during the last Academics Task Force meeting as well as material submitted to the staff by Friday noon.

Attachment 1
BEST PRACTICES FOR A PERFORMANCE REVIEW SYSTEM FOR FACULTY

1. 
The performance review system must meet the "Criteria and Procedures for Evaluation" (4.8.10) of the Southern Association of Colleges and Schools which stipulate that: (1) an institution must conduct periodic evaluations of the performance of individual faculty members; (2) the evaluation must include a statement of the criteria against which the performance of each faculty member will be measured; (3) the criteria must be consistent with the purpose and goals of the institution and be made known to all concerned; and (4) the institution must demonstrate that it uses the results of this evaluation for improvement of the faculty and its educational program. 2. The performance review system should be both formative (designed to be a supportive process that promotes self-improvement) and summative (assesses and judges performance). 3. The performance review system process and criteria should be explained to new hires. 4. All faculty, including tenured faculty at all ranks, are reviewed annually and receive a written performance evaluation. In this way, for those institutions with a tenure system, the performance review system should not pose a threat to the tenure system but extends and enlarges it. 5. The performance review system should have been developed jointly by the faculty and administrators of an institution. 6. The performance review system should allow for discipline-specific components. 7. The performance review system should provide opportunities for reflection, feedback, and professional growth whose goal is to enhance instruction at the institution. 8. The performance review system should include written performance evaluation data from four sources: a. Annually, instruction and course evaluation forms completed anonymously by students through a standardized institutional process and submitted for each course taught; b. Annually, administrative evaluation which includes assessments from the department chair and/or dean; c. 
Annually for untenured faculty and at least every three years for tenured faculty, internal peer evaluations, i.e., evaluation of faculty by their peers within the institution of higher education; d. At least every three years for untenured faculty and every five years for tenured faculty, input from peers external to the department and/or institution as appropriate to the role and function of each faculty member. Evaluators external to the institution include national peers in the same field of expertise from other institutions of higher education, professional organizations and societies, federal agencies, etc. Specialized national accreditations and the CHE program reviews, which include external reviewers' assessments, should be incorporated into the external peer review component.

9. The performance review system must include, at a minimum, the following criteria, which are weighted in accordance with each institution's mission:

- instruction/teaching
- advisement and mentoring of students
- graduate student supervision
- supervision of other students (teaching assistants, independent study students)
- course/curriculum development
- research/creative activities
- publications
- service to department
- service to institution
- service to community
- participation in professional organizations/associations
- honors, awards, and recognitions
- self-evaluation
- participation in faculty development activities/programs

10. The results of each performance review, including post-tenure review, must be used by the institution as part of its faculty reward system and faculty development system, and the system should include a plan for development when deficiencies are indicated in the review. Specifically: a. when an instructor (in the Tech system) or untenured faculty member receives an overall rating of unsatisfactory on the annual performance review, the faculty member may be subject to nonreappointment; b. 
when an instructor (in the Tech system) or tenured faculty member receives an overall rating of unsatisfactory on the annual performance review, the faculty member is immediately subject to a development process, developed by the specific unit, whose goal is to restore satisfactory performance. The development process will include a written plan with performance goals in deficient areas, with appropriate student and peer evaluation of performance. c. when an instructor (in the Tech system) or a tenured faculty member fails to make substantial progress towards the performance goals at the time of the next annual review or fails to meet the performance goals specified in the development plan within a specified period, that faculty member will be subject to dismissal (in the Tech system) or revocation of tenure for habitual neglect of duty under the terms of the senior institution's faculty manual.

11. The institution should develop an appeals/grievance procedure for those faculty who do not agree with the results of the performance evaluation and/or the resulting recommendations or requirements for improvement.

Attachment 2
BEST PRACTICES FOR POST-TENURE REVIEW

1. A post-tenure review system should incorporate all the indicators identified in the "Best Practices for a Performance Review System for Faculty" document.

2. The post-tenure review should be as rigorous and comprehensive in scope as an initial tenure review.

3. The post-tenure review should incorporate annual performance reviews accumulated since the initial tenure review or since the last post-tenure review.

4. Whereas the focus of an initial tenure review tends to be on past performance, equal emphasis should be given to future development and potential contributions in the post-tenure review.

5. 
Statewide, each tenured faculty member will have a post-tenure review conducted at pre-established, published intervals of no more than five years, unless the faculty member is participating in a development/improvement process, in which case the review may be conducted more frequently.

6. If reviews for promotion (e.g., a tenured associate professor is reviewed for promotion to tenured full professor) fall within the appropriate time interval and encompass all the indicators in this document and in the "Best Practices for a Performance Review System for Faculty" document, they may constitute a post-tenure review.

7. The post-tenure review must include evaluations from peers external to the department and/or institution as appropriate to the role and function of each faculty member (usually to evaluate the quality of research), as well as internal peer evaluations, student evaluations, and administrative evaluations.

8. The post-tenure review must provide detailed information about the outcomes of any sabbatical leave awarded during the five-year post-tenure review period.

9. The institution must identify the means by which the post-tenure review is linked with faculty reward systems, including merit raises and promotion.

10. The institution must display a commitment to provide funds to reward high achievers on post-tenure reviews as well as to provide assistance to faculty members needing improvement.

11. If a faculty member receives an unfavorable post-tenure review, the faculty member is immediately subject to a development process as described in the "Best Practices for a Performance Review System for Faculty" document, as outlined in 10(b) and 10(c) of that document.

12. The institution should develop an appeals/grievance procedure for those faculty who do not agree with the results of the post-tenure review evaluation and/or the resulting recommendations or requirements for improvement.