Files in this item

Downs_Holly.pdf (application/pdf, 32 MB)
Title: Discerning quality evaluation in online graduate degree programs in agricultural sciences and engineering
Author(s): Downs, Holly A.
Director of Research: DeStefano, Lizanne
Doctoral Committee Chair(s): DeStefano, Lizanne
Doctoral Committee Member(s): Burbules, Nicholas C.; Greene, Jennifer C.; Schwandt, Thomas A.
Department / Program: Educational Psychology
Discipline: Educational Psychology
Degree Granting Institution: University of Illinois at Urbana-Champaign
Keyword(s): Online Learning; Quality Indicators; STEM Graduate Degree; Science, Technology, Engineering, and Mathematics (STEM)
Abstract: Enormous demand for online degrees in higher education has increased the pressure on universities to launch web courses and degrees quickly and, at times, without properly attending to the quality of these ventures. There is scarce research defining which quality indicators are used to assess cyberlearning environments, how different stakeholders view the relative importance of these quality indicators in online graduate degree programs from fields like science and engineering that have a historical preference for formal program accreditation, and what practices are used in evaluating completely online graduate degree programs in higher education. This mixed-methods study examined current practices in three established online degree programs in agriculture and engineering at the University of Illinois, identifying quality indicators and evaluation practices used with cyberlearning environments in these fields and comparing myriad stakeholder views regarding the value of these practices. Data collection used a mixed-methods approach, combining surveys (n = 107) and interviews (n = 27) with program administrators, faculty, and students, as well as a document review from the different programs. While most of the evaluation occurring in the programs is informal, analysis of the surveys, interviews, and documents collected from the programs revealed four key themes related to current evaluation practice, including the use of: (a) informal feedback from students and faculty, (b) student satisfaction surveys (i.e., ICES student feedback and department-created satisfaction surveys), (c) student grades and performance information, and (d) the Committee on Extended Education and External Degrees (CEEED) process.
There were several challenges reported in using these strategies to evaluate quality, including a lack of structured collection and reporting mechanisms, differing levels of implementation in traditional and online courses, varying availability of data and of student quality information, a lack of fidelity in information delivery and access, and changing survey forms. Also evident from the study was that these evaluation practices are being implemented at varying levels, which were categorized on a four-stage continuum of evaluation. The programs in this study are at, or beginning to move out of, the first evaluation stage of preservation, meaning that the administrators have an evaluation system that is focused on efficiency and on collecting student satisfaction ratings. In this evaluation stage, small improvements are made periodically in hopes of getting more efficiency out of the current system, but little is done to explore quality beyond student satisfaction. Thus, the evaluation is incomplete, as it overlooks important issues like student learning outcomes, the teaching and learning process, faculty support, and course structure, among others. A factor analysis was conducted to explore the dimensionality of the 72 items related to quality evaluation and science, technology, engineering, and mathematics (STEM) program accreditation, drawn from the 2009 National Research Council Study on the Quality of Traditional Programs, the Institute for Higher Education Policy (IHEP) Online Benchmark Study, and the ABET Criterion Three Items for STEM programs, which resulted in 12 common quality indicators for determining the program quality of online STEM programs.
These 12 quality indicators were: (a) diversity of students and faculty, (b) professional and scholarly productivity of faculty, (c) presence, accessibility, and articulation of evaluation activities, learning outcomes, and support information, (d) student knowledge of current practice, ethics, impact, and professional conduct in STEM, (e) student production of STEM capstone research projects, (f) customer service provided by the program, (g) student training in conducting scholarly research and access to university resources, (h) interaction of students with each other and with faculty, (i) comparable achievement profiles between entering online and traditional students, (j) faculty preparation to transition from traditional to online environments, (k) student persistence to degree completion, and (l) student success beyond graduation. Differences between stakeholders revealed that online students placed a statistically significantly higher emphasis than faculty on the presence, accessibility, and articulation of evaluation activities, learning outcomes, and support information. Interviews revealed that the online students considered themselves to be "consumers" of the degree program, thereby increasing the need to identify clearly defined "outcomes" or "competencies" that online students should be able to produce or demonstrate as a result of participation in the courses and in the degree program overall. The study concludes with implications for how online programs and evaluators can use the quality indicators to identify the strengths and weaknesses of their current evaluation systems and to define and develop their own evaluation procedures for richer understanding within and between institutions and departments.
Issue Date: 2011-08-25
Rights Information: Copyright 2011 Holly A. Downs
Date Available in IDEALS: 2011-08-25
Date Deposited: 2011-08