Programme Evaluation

Indirect programme evaluation (recommended structure IWIO surveys)

Evaluations

Although evaluations can serve many purposes, the most widely used form is the assessment of student satisfaction, usually done after a course or after a specific element of a course, such as a lecture or practical. The questions in such evaluations usually cover a broad range of domains, such as logistics, workload and study facilities. However, if a course coordinator decides to use the principles of CoAl, he will probably want to know in more detail how the students perceived the new approach. The traditional evaluation forms may then seem too superficial; when the teaching is focussed on a deep approach, the evaluations should of course not remain at the surface. We searched the literature on student evaluations to find out which kinds of evaluations could help the course coordinator to evaluate and improve his new course.

 

Evaluating the study process

The aim is to challenge the students, so that they become eager to investigate the topic and to go the extra mile to really understand what they have to learn. Obviously, the aim is also to prevent students from merely memorizing the topics and/or passing the exam with minimal effort.

John Biggs himself developed and investigated the Study Process Questionnaire, which he later modified into the R-SPQ-2F. This questionnaire has two main scales, Deep Approach (DA) and Surface Approach (SA), and it measures whether students are motivated and use deep study approaches to reach understanding.

According to this scale, the approach a student uses is determined by motivation and strategy, so besides the main overall scores for deep and surface approach, the scale has four subscales, Deep Motive (DM), Deep Strategy (DS), Surface Motive (SM), and Surface Strategy (SS). A total score for the main scales and for the subscales can be calculated by summing the items. The scale has been used in several studies since it was developed, and it has also been translated and validated in different languages, including Dutch.
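
To make the scoring concrete, the sketch below sums item scores into the subscale and main-scale totals. It assumes the commonly reported layout of the R-SPQ-2F (20 items answered on a 1-5 Likert scale, with the subscales made up of every fourth item); the item-to-subscale mapping is shown for illustration only and should be checked against the published scoring key before use.

```python
# Minimal scoring sketch for the R-SPQ-2F (illustrative item mapping; verify
# against the published scoring key before using it for real evaluations).

SUBSCALES = {
    "DM": [1, 5, 9, 13, 17],   # Deep Motive
    "DS": [2, 6, 10, 14, 18],  # Deep Strategy
    "SM": [3, 7, 11, 15, 19],  # Surface Motive
    "SS": [4, 8, 12, 16, 20],  # Surface Strategy
}

def score_rspq2f(answers):
    """answers: dict mapping item number (1-20) to a 1-5 Likert response."""
    scores = {name: sum(answers[i] for i in items)
              for name, items in SUBSCALES.items()}
    # The main scales are simply the sums of their two subscales.
    scores["DA"] = scores["DM"] + scores["DS"]  # Deep Approach
    scores["SA"] = scores["SM"] + scores["SS"]  # Surface Approach
    return scores

# Example: a student answering 4 on every deep item and 2 on every surface item.
deep_items = SUBSCALES["DM"] + SUBSCALES["DS"]
example = {i: 4 if i in deep_items else 2 for i in range(1, 21)}
print(score_rspq2f(example))  # DA = 40, SA = 20
```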

Clarity of intended learning outcomes, teaching/learning activities and assessment methods

A curriculum that is developed according to the principles of CoAl should have very specific Intended Learning Outcomes (ILOs), Teaching and Learning Activities (TLAs) and Assessment methods (Ass.). This may create the desire among teachers to evaluate how much the students appreciated these specific elements and the alignment between them. In his book, Biggs gives the following examples of possible questions a course coordinator could ask: Were the ILOs clear? Did the TLAs help the students to achieve the ILOs? Which did not? Did the assessment methods address the ILOs? Were the grading rubrics understood? Did the ILOs help students plan their learning? Did they see the assessment methods as fairly assessing what they had learned?

Based on the work of Biggs (2001), a questionnaire was designed and evaluated by Wong, Kwong and Thadani (2014). In line with what Biggs (2001) suggests, Wong and colleagues state that effective learning will only take place if students are clear about:

  1. What they are to learn and how that learning is manifested (ILOs).
  2. What they are supposed to do when learning appropriately (TLAs).
  3. What the requirements and standards of assessment are (Ass.).

Following these three constructs, Wong and colleagues (2014) designed and evaluated the Learning Experience Inventory in Courses (LEI-C).

Some examples of validated student evaluation questionnaires

Generic student evaluation questionnaires have been used for many years, long before CoAl was developed, and most universities therefore have a long-standing routine of systematically evaluating their programmes. Quite often such evaluations use self-designed questionnaires, often developed by an independent office at the university. Besides this pragmatic approach, there is also a large body of evidence on student evaluations; see, for example, the very useful review of the available evidence written by Richardson. The most widely used and investigated student evaluation questionnaires are the Students’ Evaluations of Educational Quality (SEEQ) questionnaire and the Course Experience Questionnaire (CEQ). Both have been used extensively in research and in everyday practice.


SEEQ
The SEEQ has 35 items on which students are asked to rate their teacher or course unit, using a five-point scale from ‘very poor’ to ‘very good’. The statements are intended to reflect nine aspects of effective teaching, each illustrated below with an example item:

  1. Learning (I have found the course intellectually challenging and stimulating)
  2. Enthusiasm (Instructor was enthusiastic about teaching the course)
  3. Organisation (Course materials were well prepared and carefully explained)
  4. Group interaction (Students were invited to share their ideas and knowledge)
  5. Individual rapport (Instructor made students feel welcome in seeking help in or outside of class)
  6. Breadth (Instructor adequately discussed current developments in the field)
  7. Examinations (methods of evaluating student work were fair and appropriate)
  8. Assignments (required readings/texts were valuable)
  9. Overall (compared with other courses I have had, I would say this course is (very poor-very good))
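
As a sketch of how such ratings could be summarised, the snippet below computes a mean score per domain, coding responses from 1 (‘very poor’) to 5 (‘very good’). The item numbers assigned to each domain are placeholders for illustration, not the official SEEQ key.

```python
from statistics import mean

# Illustrative grouping of SEEQ-style items into domains; the item numbers are
# placeholders, not the official 35-item key.
DOMAIN_ITEMS = {
    "Learning": [1, 2, 3, 4],
    "Enthusiasm": [5, 6, 7, 8],
    # ... remaining domains, up to "Overall"
}

def domain_means(responses):
    """responses: list of dicts, one per student, mapping item number to a 1-5 rating."""
    return {
        domain: mean(r[i] for r in responses for i in items if i in r)
        for domain, items in DOMAIN_ITEMS.items()
    }

# Two students' (partial) response sets.
students = [
    {1: 4, 2: 5, 3: 4, 4: 4, 5: 3, 6: 4, 7: 4, 8: 5},
    {1: 5, 2: 4, 3: 4, 4: 3, 5: 4, 6: 4, 7: 5, 8: 4},
]
print(domain_means(students))  # e.g. {'Learning': 4.125, 'Enthusiasm': 4.125}
```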

An interesting finding from the research done on the SEEQ is that its scores seem to be strongly associated with the teacher giving the course and not so much with the course itself: scores are stable when one teacher is evaluated across several courses, but unstable when the same course is given by different teachers. Looking at the items of the SEEQ, this makes sense, because aspects such as enthusiasm and organisation are directly related to the teacher.

Although the SEEQ was not specifically designed to evaluate the alignment of teaching and assessment, some of its questions are indicative of good alignment and of deep approaches to studying. One could, for example, expect students to feel more intellectually challenged after the introduction of a CoAl curriculum, and comparing this to other courses or to previous years could be interesting and helpful.

 

CEQ
The other frequently used and investigated general student evaluation questionnaire is the CEQ, which has been widely used in Australian universities for many years. The initial CEQ consisted of 30 items reflecting five domains, each illustrated below with an example item:

I. Good teaching: Teaching staff here normally gives helpful feedback on how you are going.

II. Clear goals and standards: You usually have a clear idea of where you’re going and what’s expected of you in this course.

III. Appropriate workload: The sheer volume of work to be got through in this course means you cannot comprehend it all thoroughly.

IV. Appropriate assessment: Staff here seems more interested in testing what we have memorized than what we have understood.

V. Emphasis on independence: Students here are given a lot of choice in the work they have to do.

Many studies have been done with the CEQ, resulting in different versions such as a 23-item and a 36-item version. In the light of constructive alignment, the addition of a Generic Skills domain, consisting of six questions concerned with problem solving, analytic skills, teamwork, communication and work planning, is interesting. The domains ‘Clear goals and standards’ and ‘Appropriate assessment’ are also very relevant when evaluating the alignment of a programme. An interesting finding from the research done on the CEQ is that its scores seem to be highly related to deep approaches to studying, which makes sense given the domains mentioned above. It seems, then, that quite a few elements of the CEQ are well aligned with the intentions of CoAl.

 

How to move forward?

The available questionnaires for evaluating deep learning approaches (R-SPQ-2F) and the clarity of alignment (LEI-C) seem very useful when evaluating curricula with a focus on CoAl. In addition, at the end of a module or course, a course coordinator will also want to evaluate general issues such as the enthusiasm of the teacher or the efficiency of organisational aspects. The advantage of using these already investigated questionnaires is that they are validated and that benchmarks are available from previous studies.

However, using both the R-SPQ-2F and the LEI-C on top of a questionnaire such as the Students’ Evaluations of Educational Quality (SEEQ) or the Course Experience Questionnaire (CEQ) would probably make the end-of-course evaluation too burdensome for the students. As a consequence, the response rate would drop, or items would be skipped or not properly considered, which would affect the validity of the questionnaires. Since using all the questionnaires is not an option, in our view there are two alternatives: choose the validated questionnaire that best fits the needs (without being perfect), or combine the good items of the different questionnaires into a new questionnaire tailored to the course’s or university’s needs.

When it comes to the first option, the CEQ would probably be a good choice. Questions like ‘You usually have a clear idea of where you’re going and what’s expected of you in this course’ come very close to the idea of having a clear and aligned course plan, which was the basis of the LEI-C. The same goes for questions like ‘The staff made it clear right from the start what they expected from students’. The desire to create a deep-approach study environment also comes back in the CEQ, with questions such as ‘I found my studies intellectually stimulating’ or ‘I was generally given enough time to understand the things I had to learn’. So the CEQ offers good opportunities for evaluating constructively aligned courses, with the added benefit of a validated questionnaire and available benchmarks.

The other option is to combine the good elements of the different questionnaires. We made an attempt to do so, and the result is shown here. Of course, such an attempt is subjective and does not come close to the rigorous standards we usually demand when designing new questionnaires. However, as stated before, current practice in designing evaluation questionnaires is largely one of internal (university) development of ‘own’ questionnaires, so this attempt should be seen as an informed suggestion of good items to consider.