Mind the Gap: Using R/exams to Ease the Transition into STEM Studies


Ideas and experiences from an award-winning bridging course in mathematics at Universität Innsbruck, whose teaching and examination culture is guided by learning outcomes and which makes extensive use of R/exams.


Guest post by Pia Tscholl & Lisa Schlosser (Universität Innsbruck).

Motivation

Universities in German-speaking countries face above-average dropout rates in STEM (Science, Technology, Engineering, Mathematics) subjects. These high dropout rates are worrying because the demand for qualified workers in these fields exceeds the available supply. Deficient mathematical knowledge in particular plays a major role in dropout from STEM fields. For this reason, the majority of German-speaking universities offer mathematical remedial or bridging courses at the beginning of STEM degree programs. However, researchers criticize that many remedial courses do not offer audience-appropriate assessments to efficiently diagnose mathematical deficits. Echoing this critique, a team at Universität Innsbruck has developed a mathematical self-assessment, implemented via R/exams and made available to participants of the mathematical remedial course via the university's learning management system OpenOlat. The implementation with R/exams offers the advantage of large test sets with (potentially) randomized tasks that can be easily modified and evaluated automatically. Due to the automated evaluation, the self-assessment can be used in courses with a large number of participants. Moreover, the created assessment can easily be exported to other formats (PDF, Moodle, …).

Creating task lists by content area

The self-assessment covers the following content areas (with abbreviations based on the German titles):

  • AK: General mathematical competencies.
  • ZF: Numbers and functions.
  • GU: Equations and inequalities.
  • KV: Coordinates and vector geometry.
  • GT: Geometry and trigonometry.
  • DR: Differential calculus.
  • IR: Integral calculus.
  • SW: Statistics and probability.

A separate test set with a certain number of R/exams tasks is created for each content area. All tasks are saved in the same folder and named as follows: “ContentArea_AscendingNumber.Rmd”, e.g. the first task of the content area ZF is named ZF_001.Rmd, the second ZF_002.Rmd and so on. Consistent naming makes it possible to quickly create task lists by content area:

```r
testset_ZF <- list.files(path = "wd/tasks", pattern = "^ZF_[0-9]+\\.Rmd$")
```

where wd is your current working directory (or the path where your task folder is located) and tasks is the name of the folder in which the tasks are stored. The argument pattern is a regular expression matching file names that start with ZF (or another content area abbreviation), followed by _, some digits, and the extension .Rmd. The resulting testset_ZF contains the list of ZF tasks to be exported to OpenOlat.
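
Since all eight content areas follow the same naming scheme, the task lists can also be built in one pass. A minimal sketch, assuming the folder layout described above (the path "wd/tasks" is a placeholder for your own setup):

```r
## Build one task list per content area in a single pass
## (the path "wd/tasks" is a placeholder for your own task folder)
areas <- c("AK", "ZF", "GU", "KV", "GT", "DR", "IR", "SW")
testsets <- lapply(areas, function(area) {
  list.files(path = "wd/tasks", pattern = paste0("^", area, "_[0-9]+\\.Rmd$"))
})
names(testsets) <- areas
testsets$ZF  # e.g., "ZF_001.Rmd" "ZF_002.Rmd" ...
```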

Example: Defining a task

As an example, the code for a specific randomized task is presented below. It asks the participants to compare two randomly selected fractions and decide which one is greater or whether they are equal. In addition to the code shown below (in R/Markdown format), it can also be downloaded in both R/Markdown and R/LaTeX format as ZF_071.Rmd or ZF_071.Rnw, respectively.

```{r, include = FALSE}
## Generate two random fractions based on numbers from 1 to 10
f1 <- sample(1:10, 2, replace = FALSE)
f2 <- sample(1:10, 2, replace = FALSE)
a <- f1[1] 
b <- f1[2] 
c <- f2[1] 
d <- f2[2]

## Save random fractions a/b and c/d as character
## (don't forget escaping \\ within the math mode)
fr1 <- paste0("$\\frac{", a, "}{", b, "}$")
fr2 <- paste0("$\\frac{", c, "}{", d, "}$")

## Possible answers
answers <- c(
  paste(fr1, "is greater than", fr2),
  paste(fr2, "is greater than", fr1),
  "Both fractions are equal"
)

## Correct solution
sol <- c(0, 0, 0)
if(a/b > c/d) {
  sol[1] <- 1
} else if(c/d > a/b) {
  sol[2] <- 1
} else {
  sol[3] <- 1
}

## Explanation
k <- answers[as.logical(sol)]
eq <- c(" > ", " < ", " = ")[as.logical(sol)]
explanation <- paste0(k, " since $", a, " \\cdot ", d, eq, b, " \\cdot ", c, "$.")
```

Question
========
Which of these numbers is greater: `r fr1` or `r fr2`?

```{r, echo = FALSE, results = "asis"}
answerlist(answers, markup = "markdown")
```

Solution
========
`r explanation`

Meta-information
================
exname: Comparing fractions
extype: schoice
exsolution: `r paste(sol, collapse = "")`
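
Before bundling such an exercise into a test set, it is worth checking it locally. A small sketch using standard R/exams tools: exams2html() renders random replications of the exercise in the browser, and stresstest_exercise() runs many replications to catch random number combinations that might break the code.

```r
library("exams")

## Render three random replications of the exercise to HTML for a quick check
exams2html("ZF_071.Rmd", n = 3)

## Run many replications to detect errors for rare random number combinations
stress <- stresstest_exercise("ZF_071.Rmd", n = 100)
```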

One random version of the exercise is shown below as rendered in an OpenOlat test after entering an incorrect answer:

[Screenshot: question displayed in OpenOlat with the implemented explanation given after saving the answer]

Further meta-information could be added to the exercise via the exextra tags, if needed. For num questions, the tag extol defines the tolerance range for numerical solutions. Exercise templates for different task types (num, mchoice, schoice, cloze, …) are provided on the R/exams web page at https://www.R-exams.org/templates/.
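
For illustration, the meta-information of a hypothetical num exercise could look as follows (the name and values are made up; extol defines the accepted tolerance around the numeric solution):

Meta-information
================
exname: Decimal value
extype: num
exsolution: 0.33
extol: 0.005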

The elegant aspect of randomized tasks is that students can take the same test multiple times for practice or improved self-assessment. If implemented with foresight, as in our example, solutions and explanations adapt automatically - so the many task variations create no additional work for teachers.

The next section will explain how to export such randomized tasks from R/exams to OpenOlat.

Randomized export to OpenOlat

As explained in the previous section, consistent file naming makes it easy to create a list of test files, hereinafter referred to as testset_ZF. The following code exemplifies how such a test set, consisting of several randomized tasks, can be exported from R/exams to OpenOlat.

library("exams")             # Load R/exams package

seed <- 6020                 # Select seed, so randomization can be repeated
set.seed(seed)               # Set seed

export_ZF <- exams2openolat( # Function for exporting tasks to OpenOlat
  edir = "wd/tasks",         # Directory where task files are stored
  file = testset_ZF,         # File names of the tasks
  n = 20,                    # Number of randomized versions of the test set
  name = paste0("ZF_Testset_seed", seed), # Remember seed also in file name
  stitle = "ZF",             # Section title 
  ititle = "Aufgaben",       # Item title
  solutionswitch = TRUE      # Display explanation right after saving the answer
)

Further arguments, such as cutvalue (threshold for passing), navigation (disabling switching between tasks), and duration (maximum processing time), can tailor the test to the needs of the course (see this blog post for a practical illustration).
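
A hedged sketch of such a stricter configuration (the values are illustrative only):

```r
## Illustrative values only; see ?exams2openolat for details
exams2openolat(
  file = testset_ZF,
  edir = "wd/tasks",
  n = 20,
  name = "ZF_Testset_strict",
  cutvalue = 12,           # minimum number of points required to pass
  navigation = "linear",   # participants cannot return to earlier tasks
  duration = 60            # maximum processing time in minutes
)
```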

The exams2openolat() function creates a ZIP file with the chosen name in the current working directory.

Now you can insert a new Test element in your OpenOlat course. In the editor mode, the created ZIP file can be imported into the Test-Konfiguration element. At this point, the work is already done - yay! Each time the test is called up, OpenOlat selects one of the randomized versions of the test set. If desired, further settings can be configured in the OpenOlat editor mode, such as displaying the test results after the test has been completed (Testkonfiguration > Report), a winners' podium for the three best (anonymous) participants (HighScore > Siegertreppchen), or an automated submission confirmation by e-mail (E-Mail Bestätigung).

It is noteworthy - since this question comes up often among our students - that decimal answers to num questions in the OpenOlat export can be entered with either a dot or a comma as the decimal separator.

Results and feedback

Once a class or group has finished the self-assessment, various tools provide information on the results. For example, an anonymous ranking showing the points achieved by the three best-performing participants and a histogram of all point totals can provide a first overview of the results.
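
If the results are additionally downloaded from OpenOlat (e.g., as a spreadsheet export), such an overview can also be reproduced in R. A small sketch with hypothetical file and column names:

```r
## Hypothetical file/column names; adapt to the actual OpenOlat export
res <- read.csv2("ZF_results.csv")
hist(res$points,
     main = "Self-assessment ZF",
     xlab = "Points achieved")
```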

Moreover, each participant is informed about their performance after finishing the assessment for one of the content areas. In particular, the total points achieved, the time needed to finish, and the number of answered questions are listed. Additionally, each question, the given answer, and the expected answer can be reviewed by the participant.

[Screenshot: (anonymous) winners' ranking in OpenOlat]

[Screenshot: feedback overview after finishing the self-assessment in OpenOlat]

Experience and outlook

A problem that still arises with num tasks is the incorporation of fractions. While fractions can be displayed properly in the task text using mathematical notation, there is still no convenient way to enter them as solutions when completing the test set. As of now, fractions can only be submitted as decimal numbers, which can be tedious and error-prone. One possibility would be to implement the numerator and denominator as two num solution elements of a cloze task in R/exams. However, this has the disadvantage that reduced or expanded variants of the respective fraction are not recognized as solutions. In that case, the fully reduced fraction would have to be requested as the solution, which in turn is again error-prone. Furthermore, if only the numerator or only the denominator is entered correctly (and the solution is therefore wrong overall), 50% of the points are still awarded.
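
For reference, a hypothetical sketch of such a cloze variant, asking for the numerator and denominator of the fully reduced fraction as two num elements (and thus exhibiting exactly the partial-credit drawback just described):

Question
========
Reduce the fraction $\frac{6}{8}$ as far as possible.

Answerlist
----------
* Numerator
* Denominator

Meta-information
================
exname: Reduced fraction
extype: cloze
exclozetype: num|num
exsolution: 3|4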

Additionally, it would be desirable to provide students with the solutions, including the implemented explanations, after completing the test. Currently, explanations can only be displayed immediately after saving an answer, not as final feedback.

However, we received primarily positive feedback from students and colleagues regarding the implementation of our mathematics self-assessment using R/exams in OpenOlat.

The concept for this mathematics bridging course even won the Ars Docendi 2023, the Austrian state prize for excellent teaching. The photo below shows the entire team of Tobias Hell, Elisabeth Hell, Pia Tscholl, and Lisa Schlosser (left to right) being presented with the award by Secretary of Education Martin Polaschek (Photo: Martin Lusser/BMBWF).

[Photo: Dipl.-Ing. Tobias Hell, BSc PhD; Mag.a Elisabeth Hell; Pia Tscholl, MEd; Lisa Schlosser, PhD; BM Martin Polaschek (Martin Lusser/BMBWF)]

One advantage of this digitalized self-assessment - in addition to those already mentioned - is the ability to collect data in large quantities without additional effort. For example, a full PhD position has been established to evaluate the data collected through the presented self-assessment.
