micropython/tests
Damien George 514bf1a191 extmod/uasyncio: Fix race with cancelled task waiting on finished task.
This commit fixes a problem with a race between cancellation of task A
and completion of task B, when A waits on B.  If task B completes just
before task A is cancelled, then the cancellation of A does not work.
Instead, the CancelledError meant to cancel A gets passed through to B
(that's expected behaviour) but B handles it as a "Task exception wasn't
retrieved" scenario, printing out such a message.  This happens because
finished tasks point their "coro" attribute to themselves to indicate
they are done, and implement the throw() method, but that method
inadvertently catches the CancelledError.  The correct behaviour is for
B to bounce that CancelledError back out.
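
As an illustration of the mechanism (a simplified sketch, not the actual
uasyncio source), the old done-marker scheme looked roughly like this:

    # Simplified sketch, illustrative only.  A finished task marked
    # itself done by pointing its "coro" attribute at itself, and
    # supplied a throw() method so the scheduler could still treat it
    # like a coroutine.
    class FinishedTask:
        def __init__(self):
            self.coro = self  # "coro is self" means "task is done"

        def throw(self, exc):
            # The CancelledError intended for the waiting task lands
            # here and is swallowed: it is recorded as an unretrieved
            # exception instead of propagating back out to the waiter.
            print("Task exception wasn't retrieved:", repr(exc))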

This bug is mainly seen when wait_for() is used, and in that context the
symptoms are:
- occurs when using wait_for(T, S), if the task T being waited on finishes
  at exactly the same time as the wait-for timeout S expires
- task T will have run to completion
- the "Task exception wasn't retrieved message" is printed with
  "<class 'CancelledError'>" as the error (ie no traceback)
- the wait_for(T, S) call never returns (it's never put back on the
  uasyncio run queue) and all tasks waiting on this are blocked forever
  from running
- uasyncio otherwise continues to function and other tasks continue to be
  scheduled as normal
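
The following is a minimal reproduction sketch of that scenario (the
timeout value is illustrative; the failure only triggers when T finishes
in the same scheduler pass as the timeout expiry, so it can take many
runs to observe):

    import uasyncio as asyncio

    async def task_t():
        # Finishes at (approximately) the same time as the timeout.
        await asyncio.sleep_ms(100)

    async def main():
        try:
            await asyncio.wait_for_ms(task_t(), 100)
        except asyncio.TimeoutError:
            print("timeout")
        # With the bug present, when the race is hit this line is never
        # reached; the "Task exception wasn't retrieved" message is
        # printed instead.
        print("done")

    asyncio.run(main())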

The fix here renames the "waiting" attribute of Task to "state" and uses
it to indicate whether a task is: running and not awaited on, running
and awaited on, finished and not awaited on, or finished and awaited on.
This means the task no longer needs to point "coro" to itself to
indicate that it is finished, and it also allows removal of the throw()
method.
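
Conceptually the new attribute distinguishes four states (the named
constants below are purely illustrative; the actual patch encodes the
states with sentinel values on the single "state" attribute):

    # Illustrative only: the four logical states of a Task after the fix.
    STATE_RUNNING          = 0  # running, no task awaits it
    STATE_RUNNING_AWAITED  = 1  # running, other task(s) await it
    STATE_FINISHED         = 2  # finished, not awaited on
    STATE_FINISHED_AWAITED = 3  # finished, awaited on

    def is_done(state):
        # "finished" is now explicit, so a task no longer has to point
        # "coro" at itself, and throw() can go: a CancelledError thrown
        # at a finished task simply propagates back out to the waiter.
        return state in (STATE_FINISHED, STATE_FINISHED_AWAITED)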

A benefit of this is that "Task exception wasn't retrieved" messages can
once again print the name of the coroutine function.

Fixes issue #7386.

Signed-off-by: Damien George <damien@micropython.org>
2021-06-16 13:02:37 +10:00
basics/
cmdline/
cpydiff/              tests/cpydiff: Add test for array constructor with overflowing value. 2021-06-13 10:30:14 +10:00
esp32/
extmod/               extmod/uasyncio: Fix race with cancelled task waiting on finished task. 2021-06-16 13:02:37 +10:00
feature_check/
float/
import/               py/builtinimport: Change relative import's ValueError to ImportError. 2021-05-30 19:35:03 +10:00
inlineasm/
internal_bench/
io/
jni/
micropython/
misc/
multi_bluetooth/      tests/multi_bluetooth/ble_gap_advertise.py: Allow to work without set. 2021-06-06 21:58:07 +10:00
multi_net/            extmod/uasyncio: Add readinto() method to Stream class. 2021-06-15 13:13:35 +10:00
net_hosted/
net_inet/
perf_bench/
pyb/
pybnative/
qemu-arm/
stress/
thread/
unicode/
unix/                 tests/unix: Add ffi test for integer types. 2021-06-06 22:52:25 +10:00
wipy/
README
run-internalbench.py
run-multitests.py     tests/run-multitests.py: Allow to work without sys.stdout on target. 2021-06-06 21:58:07 +10:00
run-natmodtests.py
run-perfbench.py
run-tests-exp.py
run-tests-exp.sh
run-tests.py

README

This directory contains tests for various functionality areas of MicroPython.
To run all stable tests, run the "run-tests.py" script in this directory.

Tests of capabilities not supported on all platforms should be written
to check for the capability being present. If it is not, the test
should merely output 'SKIP' followed by the line terminator, and call
sys.exit() to raise SystemExit, instead of attempting to test the
missing capability. The testing framework (run-tests.py in this
directory, test_main.c in qemu-arm) recognizes this as a skipped test.
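
For example, a test that depends on an optional module typically begins
with a check like the following (a representative sketch of the
convention; the module chosen here is arbitrary):

    import sys

    try:
        import uasyncio
    except ImportError:
        print("SKIP")
        sys.exit()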

There are a few features for which this mechanism cannot be used to
condition a test. The run-tests.py script uses small scripts in the
feature_check directory to check whether each such feature is present,
and skips the relevant tests if not.
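
A feature-check script is simply a tiny program that exercises the
feature in question; run-tests.py runs it on the target and decides from
the outcome whether to skip the dependent tests. A hypothetical example
(the file name and output are made up for illustration) checking for
async/await support might be:

    # feature_check/async_check.py (hypothetical)
    async def _probe():
        pass

    print("async ok")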

Tests are generally verified by running the test both in MicroPython and
in CPython and comparing the outputs. If the outputs differ, the test
fails and they are saved in a .out and a .exp file respectively.
For tests that cannot be run in CPython, for example because they use
the machine module, a .exp file can be provided next to the test's .py
file. A convenient way to generate that is to run the test, let it fail
(because CPython cannot run it) and then copy the .out file to the
corresponding .exp file (but not before checking it manually!).
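
As an illustration (the file name and module use are hypothetical), such
a test and its hand-checked expected output might look like:

    # misc/machine_basic.py -- CPython cannot run this because it has no
    # "machine" module, so a manually verified misc/machine_basic.py.exp
    # is stored next to it with the expected output.
    import machine

    print("machine module imported")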

When creating new tests, anything that relies on float support should go in the
float/ subdirectory.  Anything that relies on import x, where x is not a built-in
module, should go in the import/ subdirectory.