
GitHub Actions

GitHub Actions is a continuous integration (CI) platform offered by GitHub. It lets you define so-called workflows that are executed in an environment managed by GitHub. If the students submit their solutions in GitHub repositories, we can use such a workflow to define our assessment tasks.

Why GitHub Actions? Mainly because using GitHub enables us to cover multiple steps of the homework submission-evaluation process.

Don't want to or cannot use GitHub?

Check out the alternative option here.

Running the evaluation

The prerequisites for running the evaluation in GitHub are the following.

  1. You need a GitHub organization with GitHub Education benefits, which provide free minutes for executing Actions in the cloud.
  2. Have the students submit their work in GitHub.
  3. Include a workflow definition in the starter code repository specifying the assessment process.

The workflow definition is a YAML file placed in the starter code repository. When the student's repository is created, its contents, along with the workflow definition, are copied from the starter repository. Here are two sample workflow definitions.

Example 1

The first example is from the presented sample at https://github.com/akosdudas/ahk-sample-startercode/blob/master/.github/workflows/evaluate.yml.

name: Evaluation

on:
  pull_request:
    types: [opened, synchronize, ready_for_review, labeled]

jobs:
  evaluate:
    runs-on: ubuntu-latest

    timeout-minutes: 3

    if: github.event.pull_request.draft == false

    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          fetch-depth: 1

      - name: Check neptun.txt
        uses: akosdudas/ahk-action-neptuncheck@v1

      - name: Prepare .NET SDK
        uses: actions/setup-dotnet@v1
        with:
          dotnet-version: "2.1.607"

      - name: Build sln
        run: dotnet build

      - name: Evaluate
        uses: docker://ghcr.io/akosdudas/ahk-sample:evaluator-v1

      - name: Publish results
        uses: docker://ghcr.io/akosdudas/ahk-publish-results-pr:v1
        with:
          GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
          AHK_IMAGEEXT: ".png"
          AHK_RESULTFILE: "result.txt"

This workflow definition is triggered by pull requests; this is what you see on lines 4-5. If the student marks the pull request as a draft, the evaluation is skipped (line 13), allowing the student to make changes to the code without running the assessment.

Since the evaluator application is containerized, the Ubuntu virtual environment is used for the CI execution (line 9). (GitHub also has Windows and macOS environments.)

Specifying a timeout (line 11) prevents long-running workflows, which would only eat up the minutes available in GitHub; the evaluation should not take more than a minute, including obtaining the source code and pulling container images.

The assessment starts on line 16 with the checkout of the source code (to get the student's code). Next, a preliminary check verifies the existence of a text file containing the student's identifier; see the reason here. If the student forgot to upload this file, the assessment does not proceed.
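The referenced akosdudas/ahk-action-neptuncheck action performs this check for you. If you would rather not depend on it, a roughly equivalent inline shell step is sketched below; it assumes the identifier file is named neptun.txt in the repository root, and the real action may enforce stricter rules.

# Hedged sketch: an inline alternative to the neptun.txt check.
# Assumes the identifier file is neptun.txt in the repository root;
# the akosdudas/ahk-action-neptuncheck action may apply stricter rules.
- name: Check neptun.txt (inline variant)
  run: |
    if [ ! -s neptun.txt ]; then
      echo "::error::neptun.txt is missing or empty; add your student identifier to it"
      exit 1
    fi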

The next step builds the source code to produce the executables (lines 24-30). If the build fails, the student is automatically notified.

And now comes the actual assessment. Since the evaluator application is containerized, the workflow only needs to trigger the execution of this container. The working directory with the source code and the built binaries is mounted into the container, so the application can access them.
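For a docker:// step, GitHub runs the image with the job workspace mounted inside the container (at /github/workspace) and set as the working directory. As a rough local approximation, not the exact invocation GitHub uses, you could try the evaluator on a checked-out solution like this:

# Hedged sketch: approximating the "Evaluate" step on a local machine.
# GitHub mounts the job workspace into docker:// containers at /github/workspace;
# the exact flags differ, and the evaluator may expect additional inputs.
docker pull ghcr.io/akosdudas/ahk-sample:evaluator-v1
docker run --rm \
  -v "$PWD":/github/workspace \
  -w /github/workspace \
  ghcr.io/akosdudas/ahk-sample:evaluator-v1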

The final step is publishing the results of the assessment (line 35). The evaluator application writes the results to a text file, and the custom containerized application reads this file and posts its contents into the pull request thread (see example here). (This custom step also allows us to save the outcome of the automated evaluation to a database, which reduces the manual labor on the part of the instructors; see here for further details.)
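The publishing container is specific to this course, but the idea generalizes: read a result file and post it as a pull request comment using the GITHUB_TOKEN. The sketch below shows a comparable step using the gh CLI preinstalled on GitHub-hosted runners; it is not the actual implementation of ahk-publish-results-pr and assumes the evaluator left its summary in result.txt.

# Hedged sketch: posting the evaluation summary as a pull request comment.
# Not what ahk-publish-results-pr does internally, only a comparable idea;
# assumes the evaluator wrote its summary to result.txt in the workspace.
- name: Publish results (sketch)
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: gh pr comment ${{ github.event.pull_request.number }} --body-file result.txt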

Example 2

The second example simplifies the process by eliminating both the pull request and the container.

name: Evaluation

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest

    timeout-minutes: 3

    steps:
      - name: Checkout
        uses: actions/checkout@v1

      - name: Check neptun.txt
        uses: akosdudas/ahk-action-neptuncheck@v1

      - name: Prepare .NET SDK
        uses: actions/setup-dotnet@v1
        with:
          dotnet-version: "5.0.200"

      - name: Build .NET code
        run: dotnet build

      - name: Run .NET unit tests
        run: dotnet test

If you distribute the evaluator code as unit tests, you can run them with a single command (line 27) after preparing the appropriate SDK and building the code (lines 18-24).

There are no pull requests here either. The trigger for the evaluation is a push (line 3), so every time the student pushes code to the GitHub repository, this workflow executes. The results are limited to the console output of the run on GitHub's web interface.
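If reading the raw console output is inconvenient, one option is to attach the test results to the workflow run as a downloadable artifact. The sketch below uses the standard actions/upload-artifact action; the sample repository does not do this, and the file names are illustrative.

# Hedged sketch: saving the test results as a downloadable artifact.
# Not part of the sample workflow; names and paths are illustrative.
- name: Run .NET unit tests
  run: dotnet test --logger trx --results-directory TestResults

- name: Upload test results
  if: always()
  uses: actions/upload-artifact@v2
  with:
    name: test-results
    path: TestResults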

Tips

A few tips to keep in mind when using this approach.

Keep the workflow file universal and straightforward. The workflow definition file is copied into every student's repository, and you cannot change it afterward. The file must be "universal," that is, anything you might need to change in the evaluation process should not be part of this file. Containerizing the evaluator application is an excellent way of ensuring that the workflow file needs no change; any update to the assessment process happens in the container image, which you can update.
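In practice this means the workflow references the evaluator image by a tag you control, as in the first example; re-publishing the image under the same tag changes the assessment logic without touching the workflow file already copied into student repositories.

# The workflow always references the same, instructor-controlled tag.
# Re-pushing the image under this tag updates the assessment logic
# without modifying the workflow file in the students' repositories.
- name: Evaluate
  uses: docker://ghcr.io/akosdudas/ahk-sample:evaluator-v1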

Limit the number of evaluations. Be explicit about how many executions students are allowed. The assessment takes time and consumes the minutes you have in GitHub. The students should not use this online assessment to verify their work every step of the way. The automated evaluation should be the last step in the process, after they have checked their work locally. But do not limit it to a single run either; the point of this method is to let the student correct the code if something does not work as expected.
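The limit itself is usually a course policy rather than something the workflow enforces. You can, however, keep accidental bursts of pushes from wasting minutes by cancelling superseded runs. The snippet below uses standard GitHub Actions concurrency settings; it is not part of the sample workflows.

# Hedged sketch: cancel an in-progress evaluation when the student pushes again,
# so rapid consecutive pushes do not each consume a full run's worth of minutes.
# Standard Actions syntax, but not part of the sample workflows.
concurrency:
  group: evaluation-${{ github.ref }}
  cancel-in-progress: true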

Prepare for occasional outages. There are many moving parts in this process, and sometimes things will break: GitHub Actions has occasional outages; pulling container images fails now and then; the execution times out from time to time; the .NET SDK download fails randomly; and so on. These things happen. Either prepare to help students through transient errors (e.g., by manually re-running the failed workflow), or let them know how to do this on their own.
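Re-running a failed workflow is a couple of clicks on the repository's Actions tab (the "Re-run jobs" button on the failed run). For those who prefer the command line, a sketch with the gh CLI:

# Hedged sketch: re-running a failed workflow run from the command line.
# The run identifier comes from `gh run list`; the web UI's "Re-run jobs"
# button does the same thing.
gh run list --limit 5     # find the identifier of the failed run
gh run rerun <run-id>     # re-run it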

Set up a self-hosted runner if the free CI minutes are not enough or you need specialized software/hardware. GitHub allows you to use self-hosted runners instead of the ones provided in the cloud. You can prepare your own customized environment on these runners. Before going down this path, however, consider that you will need stable infrastructure and monitoring to operate them.
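Switching a job to a self-hosted runner is a one-line change in the workflow; registering the runner machine with the organization or repository and keeping it maintained happens outside the file. A sketch, assuming a runner registered with the default self-hosted and linux labels:

# Hedged sketch: targeting a self-hosted runner instead of ubuntu-latest.
# The labels must match the ones assigned when the runner was registered.
jobs:
  evaluate:
    runs-on: [self-hosted, linux]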
