micropython/tests

Contents of this directory:

    basics/
    cmdline/
    cpydiff/
    esp32/
    extmod/
    feature_check/
    float/
    import/
    inlineasm/
    internal_bench/
    io/
    jni/
    micropython/
    misc/
    multi_bluetooth/
    multi_net/
    net_hosted/
    net_inet/
    perf_bench/
    pyb/
    pybnative/
    qemu-arm/
    stress/
    thread/
    unicode/
    unix/
    wipy/
    README
    run-internalbench.py
    run-multitests.py
    run-natmodtests.py
    run-perfbench.py
    run-tests
    run-tests-exp.py
    run-tests-exp.sh

README

This directory contains tests for various functionality areas of MicroPython.
To run all stable tests, run the "run-tests" script in this directory.

Tests of capabilities not supported on all platforms should be written
to check for the capability being present.  If it is not, the test
should merely output 'SKIP' followed by the line terminator, and call
sys.exit() to raise SystemExit, instead of attempting to test the
missing capability.  The testing framework (run-tests in this
directory, test_main.c in the qemu-arm port) recognizes this as a
skipped test.
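
For example, a capability check at the top of a test could look like the
following minimal sketch (the uasyncio import is just an illustrative
capability, not something this README prescribes):

    import sys

    try:
        import uasyncio
    except ImportError:
        # Capability missing on this port/build: report a skipped test.
        print("SKIP")
        sys.exit()

    # ... the actual uasyncio test body would follow here ...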

There are a few features for which this mechanism cannot be used to
condition a test, for example when the feature affects whether the test
file itself will compile.  For these, the run-tests script uses small
scripts in the feature_check directory to check whether each such
feature is present, and skips the relevant tests if not.
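
For illustration, a feature-check script can be as simple as exercising
the construct in question; if the interpreter cannot compile or run it,
run-tests concludes the feature is absent.  The file name and contents
below are hypothetical:

    # feature_check/float_probe.py (hypothetical): on builds without
    # float support the literal below fails to compile, so run-tests
    # would skip the tests that need floats.
    print(1.5)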

Tests are generally verified by running the test both in MicroPython and
in CPython and comparing the two outputs.  If the outputs differ, the
test fails, and the MicroPython output is saved in a .out file and the
CPython output in a .exp file for inspection.  For tests that cannot be
run in CPython, for example because they use the machine module, a .exp
file can be provided next to the test's .py file.  A convenient way to
generate it is to run the test, let it fail (because CPython cannot
produce the expected output), and then copy the .out file, but only
after checking its contents manually.
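
As a hypothetical illustration (file names and contents are invented
here, not taken from the test suite), a test relying on the
MicroPython-only machine module would ship with a hand-checked
expected-output file:

    # misc/machine_import.py -- CPython cannot run this, so a manually
    # verified expected output is provided alongside the test.
    import machine
    print("machine imported")

    # misc/machine_import.py.exp would then contain the single line:
    # machine imported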

When creating new tests, anything that relies on float support should go in the
float/ subdirectory.  Anything that relies on import x, where x is not a built-in
module, should go in the import/ subdirectory.