Development and Validation of an Instrument for Measuring Instructor Satisfaction in Massive Open Online Courses (MOOCs)
Abstract
To support the pursuit of quality in MOOCs, this study follows the Sloan
Consortium online education quality framework. This framework includes five
pillars of quality online education that need to be assessed on an ongoing basis:
learning effectiveness, cost effectiveness, access, faculty satisfaction, and student
satisfaction. At present, there are no measures for any of these five pillars. The
purpose of this study was to develop and validate a measure for one pillar, faculty
satisfaction: the MOOC Instructor Satisfaction Measure (MISM).
This was a quantitative study that used instrument development methods
to develop the MISM and examine its validity and reliability, and survey methods
to gather data from study participants. This study used a combination of instrument
development steps mentioned in five frameworks in the literature. Those steps were
divided into three phases: Phase I: Purpose and Construct Definition; Phase II: Item
Generation; Phase III: Instrument Validation.
The purpose of the first phase was to specify the purpose of the instrument and
to define the construct, determining clearly what I was measuring. In this phase, I confirmed that there was no existing instrument that adequately served the
purpose of measuring instructor satisfaction in MOOCs. I concluded this phase by
specifying six dimensions of instructor satisfaction and defining each dimension:
student-related, instructor-related, system-related, instruction-related,
support-related, and feedback-related.
The second phase involved item generation. First, I generated an item pool
and used a Kano survey to review the items. Then, I selected and developed a scaling
technique for use with these items. A panel of five experts, each of whom had
taught more than two MOOCs, reviewed the initial pool for content and face
validity. The results from that expert review were used to revise the initial items. In
the pilot study of the MISM, I administered the revised MISM to a sample of
MOOC instructors who had used edX, Coursera, and FutureLearn, and examined
their responses (n=29) for psychometric properties. At the end of this second phase,
I revised the items based on these analyses of pilot data.
In the final phase, I administered the MISM to a larger sample of MOOC
instructors drawn from that population (n=84) and examined its reliability and
validity. Results from the Maximum Likelihood method of Exploratory Factor
Analysis were used to answer the research question about which dimensions or
factors were most useful in assessing MOOC instructor satisfaction. Although
eight factors had eigenvalues greater than 1.0, the best model was found to be the
nine-factor solution. The total variance explained by nine factors was 73.57%. The
chi-square for goodness of fit was 102.8 (p = 0.299). Those nine factors were: intrinsic rewards, extrinsic rewards, resources, platform, interactions, percentage
of students, time, administration, and technical support.
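The eigenvalue-based retention decision described above (Kaiser criterion: keep factors whose eigenvalue exceeds 1.0) can be sketched with numpy. The study's analysis software is not named here, and the function name and synthetic data below are illustrative only, not the study's data:

```python
import numpy as np

def kaiser_retained(responses: np.ndarray) -> tuple[np.ndarray, int]:
    """Return the sorted eigenvalues of the inter-item correlation
    matrix and the number of factors the Kaiser criterion
    (eigenvalue > 1.0) would retain."""
    corr = np.corrcoef(responses, rowvar=False)        # items as columns
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending order
    return eigvals, int(np.sum(eigvals > 1.0))

# Illustrative data: 84 respondents x 30 Likert-style items
# (randomly generated, not the study's responses).
rng = np.random.default_rng(0)
data = rng.integers(1, 6, size=(84, 30)).astype(float)

eigvals, n_retained = kaiser_retained(data)

# Percent of total variance explained by the retained factors,
# analogous to the 73.57% reported for the nine-factor solution.
pct_explained = 100.0 * eigvals[:n_retained].sum() / eigvals.sum()
```

Because the correlation matrix has ones on its diagonal, the eigenvalues always sum to the number of items, which is why the proportion of variance explained is the retained eigenvalues divided by the item count.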
The reliability of the MISM was measured using Cronbach’s alpha. Those
results were used to answer the second question, which asked about the extent to
which the estimates of the MISM validity and reliability fell within an acceptable
range. In the final administration of the MISM, the alpha value was 0.845, which is
considered high and indicates acceptable reliability.
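The reliability estimate above follows the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal numpy sketch (the function name and example data are illustrative, not the study's):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Sanity check: four identical copies of the same item are perfectly
# internally consistent, so alpha should be exactly 1.0.
base = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
perfect = np.column_stack([base, base, base, base])
alpha = cronbach_alpha(perfect)
```

Values of alpha above roughly 0.8, such as the 0.845 reported here, are conventionally read as good internal consistency.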
The procedures and results from each of the 13 steps used to build the MISM were
documented. The generalizability, implications, and recommendations for research,
framed relative to the study's delimitations and limitations, were offered.