Discussion

First and foremost, one can argue that a questionnaire is simply a questionnaire; that the development of a questionnaire is 'just' a group effort; that the people in such groups are constrained by time; and that this is a good enough excuse. However, this overlooks the fact that at least sixteen thousand participants have taken the time to fill in this questionnaire. If we care about our participants, we should also care about the usefulness of the questionnaire we send to each of them.

RQ1: What is the history of the ELIXIR evaluation questions?

Although this is not formally published, the current survey (or the surveys it is based on) may have been developed according to evidence-based best practices, documented only in unpublished and more informal sources. However, it seems unlikely that references to the literature would be used in informal communication, yet subsequently omitted when a formal academic paper is written.

To us, it seems more likely that too little time was allocated to researching the academic literature, resulting in the papers mentioned in this paper being missed.

RQ2: How does the academic literature relate to the ELIXIR evaluation questions?

The majority of the ELIXIR mandatory evaluation questions in the section assessing course quality have little connection to the academic literature. For three out of five questions (i.e. questions 5, 6 and 9), we found no papers on their effectiveness.

To us, these questions simply seem to be in the wrong section. Why these questions were placed in the section called 'quality metrics', rather than in a (new) section with a more fitting name, is unknown.

Of the two questions that do seem to be in the correct section, however, only one ('Would you recommend the course?') is supported by the literature, and by a single paper, whereas the other ('How satisfied are you with the course?') has strong evidence against its effectiveness.

Moreover, these two questions seem to measure the same thing: course satisfaction (on its own), and recommending a course (because the learner is satisfied with it). It is unknown why two such similar questions are both in the evaluation, and it would be interesting to see how strongly the answers to these two questions correlate.

Epilogue

We know that a teacher reflecting on his/her work is one of the best ways to increase his/her teaching quality. Or: 'student ratings can only become a tool for enhancement when they feed reflective conversations about improving the learning process and when these conversations are informed by the scholarship of teaching and learning' [Roxå et al., 2021]. The other best way for teachers to improve is to do peer observations. Note that neither practice needs an evaluation.

If we really care about teaching quality, shouldn't we encourage doing the things that actually work?

References

  • [Roxå et al., 2021] Roxå, Torgny, et al. "Reconceptualizing student ratings of teaching to support quality discourse on student learning: a systems perspective." Higher Education (2021): 1–21.