Minutes of the
Research Universities Sector Committee
December 11, 1996   10:00 a.m.- 12:00 p.m. 

Members Present                         Members Absent
                    
Mr. Larry Wilson, Chair                 Mr. Winfred Greene
Mr. William Dauksch                     Mr. Thomas Marschel
Dr. Layton McCurdy                      Dr. Ronald Thurston
Dr. Walton Owens, Jr.                   Ms. Patricia McAbee
Dr. Marcia Welsh                        
Dr. Janis Bellack

Staff Assisting Committee

Mr. David Fleming, Clemson University
Mr. Harry Matthews, USC-Columbia
Dr. Gail Morrison, CHE
Dr. Nancy Healy-Williams, CHE


Mr. Wilson called the meeting to order and indicated that there were three items
on the agenda:
     
     1) Report from the December 10, 1996 Steering Committee Meeting;
     2) Request from USC to reconsider two items; and
     3) Comments on funding formula.

Steering Committee Meeting

Mr. Wilson stated that he presented the Research Sector's recommendations
and rationale to the Steering Committee.  Much discussion was prompted by the
recommendations.  In particular, concern was raised that if the standards are
raised (i.e., SAT scores), then S.C. students will have limited access to the
research universities.  Mr. Wilson countered this by noting that many of our
brightest students leave the state to pursue degrees in higher education.  Also,
the applicant pool in S.C. is small and the research universities may need to
attract more out-of-state students.  Mr. Wilson sees these students as a benefit
to the economic development of the State.

It was reiterated that these increased standards are needed to close the
gap between the state's research institutions and other great institutions in the
nation.  It will be up to the institutions to raise these standards and to
demonstrate that they are performing at high levels of quality.  Universities will
be held responsible for ensuring high quality and should not have to be
regulated to do so.

Each member was invited to attend the December 19, 1996 meeting of the
Steering Committee (10:00 a.m. at the Commission).  A strong showing will
ensure that the proposals of the Research Sector Committee can be fully
explained and the extent of support communicated.

Performance Indicator 2A Academic and Other Credentials of Professors
and Instructors

Changes in this indicator were discussed by the committee, which agreed upon
the following:


Benchmark for Performance Indicator 2A(a) is 0%.  Part 2A(b) is 100%
funding for attaining 80% of aspirational peer group credentials in the
initial year.  For each subsequent year, the institution must gain two
percentage points to receive 100% funding.  If two-point progress is not
attained, the institution receives only the percentage attained that year,
i.e., 80% funding for 80% of aspirational peer group credentials.  Academic
and other credentials of faculty will be evaluated during regular CHE program
reviews by reviewers (selected from the aspirational peer group).  Generally,
CHE reviews 20% of programs on an annual basis.  Assessment will be
based on 10 criteria, which are as follows:
1.   Refereed articles, books, technical reports
2.   Performances and artistic works
3.   Presentations in national and international forums
4.   Service on national review panels, editorial boards, advisory boards
5.   Number of doctoral students supervised
6.   Number of masters students supervised
7.   Number of patents, copyrights, licenses
8.   National awards
9.   Documented instructional achievement
10.  Other appropriate indicators of high quality as defined by the peer
     review team.     

Benchmark 2A reweighted: 8%
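
The 2A(b) funding rule above can be restated as a short sketch.  This is
illustrative only, not an official CHE formula; the function name and the
treatment of an initial year below the 80% threshold are assumptions.

```python
# Hypothetical sketch of the Performance Indicator 2A(b) funding rule.
# Thresholds restate the minutes; the function itself is illustrative.

def credential_funding(pct_of_peer_credentials, prior_year_pct=None):
    """Return the funding percentage under indicator 2A(b).

    pct_of_peer_credentials: attainment as a percentage of aspirational
        peer group credentials (e.g. 80 for 80%).
    prior_year_pct: last year's attainment, or None in the initial year.
    """
    if prior_year_pct is None:
        # Initial year: 100% funding for attaining the 80% threshold.
        # (Below-threshold treatment is an assumption: percentage attained.)
        return 100 if pct_of_peer_credentials >= 80 else pct_of_peer_credentials
    # Subsequent years: a two-point annual gain earns full funding.
    if pct_of_peer_credentials >= prior_year_pct + 2:
        return 100
    # Otherwise the institution receives only the percentage attained.
    return pct_of_peer_credentials

print(credential_funding(80))      # initial year at threshold -> 100
print(credential_funding(81, 80))  # one-point gain only -> 81
```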

Benchmark 6A SAT and ACT Scores of the Student Body

Discussion occurred regarding Performance Indicator 6A.  The committee
clarified how this indicator is linked to Performance Indicator 2D Compensation. 
Specifically, the 110% cap on compensation in relation to SAT scores is
intended to avoid paying faculty at the aspirational level while SAT scores
fall below the aspirational peer midpoint average.  The capping at 110%
works as follows: if an institution achieves 80% of the aspirational peer midpoint
average on SAT scores and 100% of the average aspirational peer compensation,
the institution would receive 88% of funding in the compensation category
(110% of 80%) because SAT scores are lagging behind.  If the SAT average is 90% of
the aspirational peer average, then funding in the compensation category would be
99% (110% of 90%); if the SAT average is 100%, then it would be possible to
receive 110% in compensation funding (110% of 100%).
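
The cap arithmetic above can be checked with a minimal sketch.  It assumes,
as in the examples, that compensation is at 100% of the aspirational peer
average, so the funding factor is simply 110% of the SAT ratio; the function
name is illustrative.

```python
# Worked check of the 110% compensation-cap examples in the minutes.
# Assumes pay at 100% of the aspirational peer average.

def compensation_funding(sat_pct_of_peer_midpoint):
    """Funding % in the compensation category: 110% of the SAT ratio."""
    return round(1.10 * sat_pct_of_peer_midpoint, 2)

print(compensation_funding(80))   # -> 88.0
print(compensation_funding(90))   # -> 99.0
print(compensation_funding(100))  # -> 110.0
```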

Benchmark for Performance Indicator 6A:
The ACT scores will be converted to SAT scores for USC and Clemson. 
The difference between the midpoint score in the 25th-75th percentile SAT
range for the institution and the average SAT midpoint score in the 25th-75th
percentile SAT range for the aspirational peer institutions will be determined. 
The institution will have ten years to eliminate this gap.  Funding will be 100% if
the institution meets the minimum amount per year as determined by
dividing the gap by ten years (i.e., if the gap is 120 points, then the annual
gain must be 12 points).  The aspirational peer group average will change
annually; thus, the gap will be adjusted annually.  However, if the planned
annual increase is not met, the benchmark becomes:

        Midpoint score for institution
     ------------------------------------  =  % of funding
     Average midpoint score for peers
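
The ten-year gap-closing schedule above can be sketched as follows.  The
function and variable names are illustrative, and the on-schedule test
(comparing the current gap to the initial gap less the required cumulative
gain) is an assumption about how the rule would be applied.

```python
# Illustrative sketch of the indicator 6A ten-year gap schedule.

def sat_benchmark_funding(inst_midpoint, peer_avg_midpoint,
                          initial_gap, year):
    """Return funding % under the ten-year gap-closing benchmark.

    initial_gap: gap at the baseline (peer average minus institution);
        dividing by ten gives the required annual gain.
    year: years elapsed since the baseline (1..10).
    """
    required_gain = initial_gap / 10          # e.g. 120-point gap -> 12/yr
    current_gap = peer_avg_midpoint - inst_midpoint
    if current_gap <= initial_gap - required_gain * year:
        return 100.0                          # on schedule: full funding
    # Off schedule: institution midpoint / peer average midpoint.
    return round(100.0 * inst_midpoint / peer_avg_midpoint, 1)

# A 120-point gap requires a 12-point annual gain:
print(sat_benchmark_funding(1092, 1200, 120, 1))  # on pace -> 100.0
print(sat_benchmark_funding(1085, 1200, 120, 1))  # behind -> 90.4
```

The MUSC benchmark that follows applies the same schedule, with the
weighted GRE/PCAT/MCAT/DAT average in place of the SAT midpoint.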


For MUSC, the difference between the average score on the GRE, PCAT, MCAT, 
and DAT examinations for the institution and the average score on the GRE, 
PCAT, MCAT, and DAT examinations for the aspirational peer institutions will be 
determined.  Each test is to be weighted as 25%.  The institution will have 
ten years to eliminate this gap.  Funding will be at 100% if the institution 
meets the minimum amount per year as determined by dividing the gap in 
the scores by ten years (i.e., if the gap is 120 points, then the annual 
gain must be 12 points).  The aspirational peer group average will change
annually; thus, the gap will be adjusted annually.  However, if the planned 
annual increase is not met, the benchmark becomes:

        Midpoint score for institution
     ------------------------------------  =  % of funding
     Average midpoint score for peers


Funding Discussion

The final discussion concerned how the distribution of funds would need to take
into consideration in some way the size of an institution as well as its program
mix and the costs of the programs, all of which vary enormously among
institutions.  The focus of the discussion was on how to calculate cost per
student credit hour.  Questions raised included how PSA would be handled for
Clemson and whether the medical universities should be separated from other
programs.  After extended discussion, the Committee agreed that the conclusion
reached at the meeting on November 20, 1996, was the most acceptable
method.  At that meeting, the committee agreed that the way to measure costs,
which would encompass such variables as size and program mix, would be to
calculate the average cost per student credit hour (State appropriations divided
by student credit hours generated).
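
The agreed cost measure reduces to a single division.  The figures in the
example below are made-up placeholders, not actual appropriations data.

```python
# Minimal sketch of the agreed measure: average cost per student credit
# hour, i.e. State appropriations divided by credit hours generated.

def cost_per_credit_hour(state_appropriations, credit_hours_generated):
    return state_appropriations / credit_hours_generated

# Placeholder figures for illustration only:
print(cost_per_credit_hour(150_000_000, 600_000))  # -> 250.0
```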

The meeting adjourned at 12:00 p.m. with a reminder to the members to please
attend the December 19, 1996, Steering Committee Meeting beginning at
10:00 a.m. at the Commission.