Autograding Jupyter Notebook assignments with Otter - quickstart

Otter uses raw notebook cells, together with the first comment line in each, to delimit blocks, i.e. where questions, tests, or configurations begin and end. Example:

# BEGIN SOLUTION
def my_func(a, b):
    # BEGIN SOLUTION
    c = (a + b) / 2
    # END SOLUTION
    return c
# END SOLUTION

In the example above, the outer # BEGIN SOLUTION / # END SOLUTION pair starts and ends a solution block, as each marker leads a raw cell. Inside my_func, the inner # BEGIN SOLUTION and # END SOLUTION markers hide a specific part of the code from the notebooks generated for students (those lines still belong to the same block, since they are not in separate raw cells).
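For illustration, and assuming Otter's default behavior of replacing hidden solution lines with a placeholder, the student notebook would contain something along these lines (a sketch, not Otter's verbatim output):

```python
# Sketch of the student version of my_func: the lines between the inner
# BEGIN/END SOLUTION markers are stripped and replaced with a placeholder.
def my_func(a, b):
    ...
    return c
```

Students then fill in the body themselves before the `return` statement.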

The first cell should be a raw cell containing a YAML-formatted configuration block, introduced with # ASSIGNMENT CONFIG. It specifies how the assignment should be generated, how tests are run, and so on. More details are available in the Otter documentation and on this wiki. Example:

# ASSIGNMENT CONFIG
requirements: requirements.txt
solutions_pdf: true
export_cell:
    instructions: "These are some submission instructions."
generate: 
    pdf: true
    filtering: true
    pagebreaks: true
    zips: false

Question descriptions are written in ordinary Markdown cells; no additional syntax is required.

Use # BEGIN QUESTION and # END QUESTION to define a question block. Within it, use # BEGIN SOLUTION and # END SOLUTION to mark the answer, and # BEGIN TESTS and # END TESTS to mark the tests. Example:

# BEGIN QUESTION
name: q1
points: 2
Question 1. Write a function called sieve that takes in a positive integer n and returns a set of the prime numbers less than or equal to n. Use the Sieve of Eratosthenes to find the primes.
# BEGIN SOLUTION
def sieve(n):
    """
    Generate a set of prime numbers less than or equal to a positive integer.
    """
    # BEGIN SOLUTION
    is_prime = [True for _ in range(n + 1)]
    p = 2
    while p ** 2 <= n:
        if is_prime[p]:
            for i in range(p ** 2, n + 1, p):
                is_prime[i] = False
        p += 1

    is_prime[0] = False
    is_prime[1] = False

    return set(i for i in range(n + 1) if is_prime[i])
    # END SOLUTION
# END SOLUTION
# BEGIN TESTS
def test_low_primes(sieve):
    assert sieve(1) == set()
    assert sieve(2) == {2}
    assert sieve(3) == {2, 3}

test_low_primes(sieve)  # IGNORE
# HIDDEN
def test_higher_primes(sieve):
    assert sieve(20) == {2, 3, 5, 7, 11, 13, 17, 19}
    assert sieve(100) == {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97}

test_higher_primes(sieve)  # IGNORE
# END TESTS
# END QUESTION

As seen above, you can specify a name and point value for a question after declaring its block; the same can be done for tests. See the Otter documentation for details.

Generate assignment

Once you have finished writing the assignment, you can generate the student version, along with the autograder zip, PDF, and other files, with:

otter assign <loc-to-notebook> <loc-to-save>

Where the first argument is the filepath to your Notebook and the second is the location where you want the generated files to be saved.

Open a terminal (in Jupyter: File → New Launcher → Other → Terminal), go to the directory where you saved assignment_grader.py, and run:

python assignment_grader.py  -a <path-to-autograder-zip> -s <path-to-student-submissions> -c <where-to-save-csv> -tp <whether-to-generate-feedback-pdfs> [0,1] -p <where-to-save-pdfs>

Each argument is optional; the script looks into (or creates) default locations for all of them. Feedback PDFs are generated by default (-tp 1).
Example:

python assignment_grader.py  -a ml_hw1/dist/autograder/autograder.zip -s ml_hw1/submissions -c ml_hw1/results -tp 1 -p ml_hw1/test_feedback