
A mini framework to implement auto-evaluated exercises in Jupyter notebooks

Project description

nbautoeval

nbautoeval is a very lightweight Python framework for creating auto-evaluated exercises inside a Jupyter (Python) notebook.

Two flavours of exercises are supported at this point:

  • code-oriented: given a text that describes the expectations, students are invited to write their own code, and can then see its outcome on teacher-defined data samples, compared with the results obtained through a teacher-provided solution, with visual (green/red) feedback
  • quizzes: a separate module allows creating quizzes

At this point, due to a lack of knowledge/documentation about open/edx (read: the version running at FUN), there is no available code for exporting the results as grades or anything similar (hence the autoeval name).

There are, however, provisions in the code to accumulate statistics on all attempted corrections, as a way to provide feedback to teachers.

Try it on mybinder

Click the badge below to see a few sample demos on mybinder.org - it's all in the demo-notebooks subdir.

NOTE: the demo notebooks ship in .py format and require jupytext to be installed before you can open them in Jupyter.

Binder

History

This was initially embedded in a MOOC on Python 2 that ran for the first time on the French FUN platform in Fall 2014. It was then duplicated into a MOOC on bioinformatics in Spring 2016, where it was named nbautoeval for the first time, but still embedded in a larger git module.

The current git repo was created in June 2016 from that basis, with the intention of being used as a git subtree from these two repos, and possibly others, since a few people have shown interest.

Installation

pip install nbautoeval

Overview

code-oriented

The following types of exercises are currently supported:

  • ExerciseFunction: the student is asked to write a function
  • ExerciseRegexp: the student is asked to write a regular expression
  • ExerciseGenerator: the student is asked to write a generator function
  • ExerciseClass: tests will happen on a class implementation

A teacher who wishes to implement an exercise needs to write two parts:

  • One Python file that defines an instance of an exercise class; in a nutshell this typically involves

    • providing one solution (let's say a function) written in Python
    • providing a set of input data
    • plus optionally various tweaks for rendering results
  • One notebook that imports this exercise object, and can then take advantage of it to write Jupyter cells that typically

    • invoke example() on the exercise object to show examples of the expected output
    • invite the student to write their own code
    • invoke correction() on the exercise object to display the outcome.
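To make the workflow concrete, here is a minimal, self-contained sketch of the idea behind correction() - running the student's function and the teacher's solution on fixed input samples and comparing the outputs. This is purely illustrative; it re-implements the concept, not nbautoeval's actual code or API, and all names (teacher_double, naive_correction, etc.) are made up:

```python
# Illustrative sketch only: mimics the idea behind correction(),
# not nbautoeval's actual implementation or API.

def teacher_double(n):
    """Teacher-provided solution."""
    return 2 * n

def student_double(n):
    """What a student might submit."""
    return n + n

input_samples = [0, 1, -3, 10]      # teacher-defined data samples

def naive_correction(student, teacher, inputs):
    """Return (all_green, rows): one (input, expected, obtained, ok) per sample."""
    rows = [(x, teacher(x), student(x), teacher(x) == student(x))
            for x in inputs]
    return all(ok for *_, ok in rows), rows

all_green, rows = naive_correction(student_double, teacher_double, input_samples)
# each row can then be rendered green (ok) or red (mismatch)
```

In the real framework, the table of rows is what gets rendered in the notebook with the green/red visual feedback.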

quizzes

Here again there are two parts at work:

  • The recommended way is to define quizzes in YAML format:

    • one YAML file can contain several quizzes - see examples in the yaml/ subdir
    • each quiz contains a set of questions
    • grouping questions into quizzes essentially makes sense with respect to the maximal number of attempts
    • almost all the pieces can be written in markdown (currently we use myst_parser)
  • Then one invokes run_yaml_quiz() from a notebook to display the quiz:

    • this function takes 2 arguments: one to help locate the YAML file,
    • and one to spot the quiz inside the YAML file
    • run with debug=True to pinpoint errors in the source
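Conceptually, the quiz side boils down to bounded attempts over a set of recorded choices. The following self-contained sketch illustrates that idea only - it is unrelated to nbautoeval's actual classes, and MiniQuiz and its fields are invented for the example:

```python
# Illustrative sketch: the concept of a quiz with a maximal number of
# attempts and preserved choices; this is NOT nbautoeval code.

class MiniQuiz:
    def __init__(self, correct_options, max_attempts=2):
        self.correct_options = set(correct_options)
        self.max_attempts = max_attempts
        self.attempts = []               # previous choices are preserved

    def submit(self, chosen_options):
        """Record one attempt; return True if the choice set is exactly right."""
        if len(self.attempts) >= self.max_attempts:
            raise RuntimeError("no attempts left")
        self.attempts.append(set(chosen_options))
        return set(chosen_options) == self.correct_options

quiz = MiniQuiz(correct_options={"def"}, max_attempts=2)
first = quiz.submit({"fun"})     # wrong choice
second = quiz.submit({"def"})    # correct on the second attempt
```

Grouping several questions into one quiz lets a single attempt budget cover them all, which is the rationale mentioned above.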

results and storage

Regardless of their type, all tests have an exoname that is used to store information about that specific test; for quizzes it is recommended to use a name different from the quiz name passed to run_yaml_quiz(), so that students can't guess it too easily.

Results are stored in two separate locations:

  • ~/.nbautoeval.trace contains one JSON line per attempt (correction or submit)
  • ~/.nbautoeval.storage, for quizzes only, preserves previous choices and the number of attempts
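Because the trace file is plain JSON lines, a teacher can aggregate it with nothing but the standard library. The field names in this sketch are hypothetical - inspect a real ~/.nbautoeval.trace for the actual schema:

```python
# Illustrative only: each line of ~/.nbautoeval.trace is one JSON
# record per attempt; the field names below ("exoname", "action",
# "success") are made up for the example, not the file's real schema.
import io
import json

fake_trace = io.StringIO(
    '{"exoname": "exo-double", "action": "correction", "success": true}\n'
    '{"exoname": "quiz-internal-name", "action": "submit", "success": false}\n'
)

records = [json.loads(line) for line in fake_trace]
failures = [r["exoname"] for r in records if not r["success"]]
```

This kind of one-liner aggregation is what the statistics provisions mentioned earlier make possible.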

Known issues

see https://github.com/parmentelat/nbautoeval/issues
