==============
pytest recipes
==============

.. highlight:: bash

Skipping tests based on markers
===============================
A few packages use `custom pytest markers`_ to indicate e.g. tests
requiring Internet access.  These markers can be used to conveniently
disable whole test groups, e.g.::

    python_test() {
        epytest -m 'not network' dask
    }
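
Marker expressions can also combine multiple markers.  For example,
a sketch assuming hypothetical ``network`` and ``slow`` markers::

    python_test() {
        epytest -m 'not network and not slow'
    }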


Skipping tests based on paths/names
===================================
There are two primary methods of skipping tests based on path (and name)
in pytest: using ``--ignore`` and ``--deselect``.

``--ignore`` causes pytest to entirely ignore a file or a directory
when collecting tests.  It can only skip whole files but, since
the ignored files are never imported, it also sidesteps missing
dependencies and other failures occurring while importing the test
file.

``--deselect`` causes pytest to skip the specified test or tests.
It can be used to deselect individual tests or even particular
parametrized variants of a test.

Both options can be combined to get the test suite to pass without
having to alter the test files.  They are preferable to patching out
problematic tests, especially when the tests are installed as part
of the package.  They can also easily be applied conditionally,
depending on the Python interpreter.

The modern versions of the eclasses provide two control variables,
``EPYTEST_IGNORE`` and ``EPYTEST_DESELECT``, that can be used to list
test files or tests to be ignored or deselected, respectively.  These
variables can be used in global scope to avoid redefining
``python_test()``.  However, combining them with additional conditions
requires using the local scope.

::

    python_test() {
        local EPYTEST_IGNORE=(
            # ignore whole file with missing dep
            tests/test_client.py
        )
        local EPYTEST_DESELECT=(
            # deselect a single test
            'tests/utils/test_general.py::test_filename'
            # deselect a parametrized test based on first param
            'tests/test_transport.py::test_transport_works[eventlet'
        )
        [[ ${EPYTHON} == python3.6 ]] && EPYTEST_DESELECT+=(
            # deselect a test for py3.6 only
            'tests/utils/test_contextvars.py::test_leaks[greenlet]'
        )
        epytest
    }
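
If no conditions are needed, the same variables can instead be declared
in global scope, e.g. (a sketch with a hypothetical test path)::

    EPYTEST_DESELECT=(
        'tests/utils/test_general.py::test_filename'
    )

    distutils_enable_tests pytest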


Avoiding the dependency on pytest-runner
========================================
pytest-runner_ is a package providing a ``pytest`` command
to setuptools.  While it might be convenient for upstream development,
there is no reason to use it in Gentoo packages, as it has no advantage
over calling pytest directly.

Some packages declare the dependency on ``pytest-runner``
in ``setup_requires``.  As a result, the dependency is enforced
whenever ``setup.py`` is run, even if the user has no intention
of running tests.  If this is the case, the dependency must be
stripped.

The recommended method of stripping it is to use sed::

    python_prepare_all() {
        sed -i -e '/pytest-runner/d' setup.py || die
        distutils-r1_python_prepare_all
    }


.. index:: PYTEST_DISABLE_PLUGIN_AUTOLOAD
.. index:: PYTEST_PLUGINS

Disabling plugin autoloading
============================
Normally, when running a test suite, pytest loads all plugins installed
on the system.  This is often convenient for upstreams, as it makes it
possible to use the features provided by the plugins (such as ``async``
test function support, or fixtures) without having to enable them
explicitly.  However, there are also cases when additional plugins
could make the test suite fail or become very slow (especially if
pytest is called recursively).

The modern recommendation for these cases is to disable plugin
autoloading by setting the ``PYTEST_DISABLE_PLUGIN_AUTOLOAD``
environment variable, and then explicitly enable specific plugins
as necessary.

.. Note::

   Previously we used to recommend explicitly disabling problematic
   plugins via ``-p no:<plugin>``.  However, it is rarely obvious
   which plugin is causing the problems, and it is entirely possible
   that another plugin will cause issues in the future, so an opt-in
   approach is usually faster and more reliable.

The easier way to enable plugins is to use the ``-p`` option, listing
specific plugins.  The option can be passed multiple times,
and accepts a plugin name as specified in the package's
``entry_points.txt`` file::

    python_test() {
        local -x PYTEST_DISABLE_PLUGIN_AUTOLOAD=1
        epytest -p asyncio -p tornado
    }

However, this approach does not work when the test suite calls pytest
recursively (e.g. you are testing a pytest plugin).  In this case,
the ``PYTEST_PLUGINS`` environment variable can be used instead.  It
takes a comma-separated list of plugin *module names*::

    python_test() {
        local -x PYTEST_DISABLE_PLUGIN_AUTOLOAD=1
        local -x PYTEST_PLUGINS=xdist.plugin,xdist.looponfail,pytest_forked

        epytest
    }

Please note that failing to enable all the required plugins may cause
some of the tests to be skipped implicitly (especially if the test
suite uses ``async`` functions and no async plugin is loaded).  Look
at skip messages and warnings to make sure everything works
as intended.


.. index:: EPYTEST_XDIST

Using pytest-xdist to run tests in parallel
===========================================
pytest-xdist_ is a plugin that makes it possible to run multiple tests
in parallel.  This is especially useful for programs with large test
suites that take significant time to run single-threaded.

Using pytest-xdist is recommended if the package in question supports it
(i.e. it does not cause semi-random test failures) and its test suite
takes significant time.  This is done by setting ``EPYTEST_XDIST``
to a non-empty value prior to calling ``distutils_enable_tests``.
It ensures that an appropriate dependency is added, and that ``epytest``
adds the necessary command-line options.

.. code-block::

    EPYTEST_XDIST=1
    distutils_enable_tests pytest

Please note that some upstreams use pytest-xdist even if there is no
real gain from doing so.  If the package's tests take a short time
to finish, please avoid the dependency and strip it if necessary.

Not all test suites support pytest-xdist.  In particular, it requires
that the tests are written not to collide with one another.  Sometimes,
xdist may also cause individual tests to become unstable.  In some
cases, it is possible to work around this using the same solution
as when `dealing with flaky tests`_.

When only a few tests are broken or unstable because of pytest-xdist,
it is possible to use it and deselect the problematic tests.  It is up
to the maintainer's discretion to decide whether this is justified.
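
For example (a sketch; the test path and name are hypothetical)::

    EPYTEST_XDIST=1
    distutils_enable_tests pytest

    python_test() {
        local EPYTEST_DESELECT=(
            # this test races against itself when run in parallel
            'tests/test_cache.py::test_concurrent_access'
        )
        epytest
    }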


Dealing with flaky tests
========================
A flaky test is a test that sometimes passes and sometimes fails
with a false positive result.  Often tests are flaky because of overly
tight timing requirements or race conditions.  While it is generally
preferable to fix the underlying issue (e.g. by increasing timeouts),
that is not always easy.

Sometimes upstreams use packages such as ``dev-python/flaky``
or ``dev-python/pytest-rerunfailures`` to mark tests as flaky and have
them rerun a few times automatically.  If upstream does not do that,
it is also possible to force similar behavior locally in the ebuild::

    python_test() {
        # plugins make tests slower, and more fragile
        local -x PYTEST_DISABLE_PLUGIN_AUTOLOAD=1
        # some tests are very fragile to timing
        epytest -p rerunfailures --reruns=10 --reruns-delay=2
    }

Note that the snippet above also disables plugin autoloading to speed
the tests up and therefore reduce their flakiness.  Sometimes forcing
explicit reruns also makes it possible to use xdist on packages that
otherwise fail randomly with it.


.. index:: EPYTEST_TIMEOUT

Using pytest-timeout to prevent deadlocks (hangs)
=================================================
The pytest-timeout_ plugin adds an option to terminate a test if its
runtime exceeds the specified limit.  Some packages decorate specific
tests with timeouts; however, it is also possible to set a baseline
timeout for all tests.

A timeout causes the test run to fail, and therefore it is generally
unnecessary for test suites that are working correctly.
If individual tests are known to suffer from unfixable hangs, it is
preferable to deselect them.  However, setting a general timeout is
recommended when a package is particularly fragile or has suffered
deadlocks in the past.  A proactive setting can prevent it from hanging
and blocking arch testing machines.

The plugin can be enabled by setting ``EPYTEST_TIMEOUT`` to the timeout
in seconds, prior to calling ``distutils_enable_tests``.  This ensures
that an appropriate dependency is added, and that ``epytest`` adds
the necessary command-line options.

.. code-block::

    EPYTEST_TIMEOUT=1800
    distutils_enable_tests pytest

The timeout applies to every test separately, i.e. the above example
will cause a single test to time out after 30 minutes.  If multiple
tests hang, the total run time grows accordingly.

When deciding on a timeout value, please take into consideration
that the tests may be run on low-performance hardware or on a busy
system, and choose an appropriately high value.

.. Note::

   ``EPYTEST_TIMEOUT`` can also be set by the user in ``make.conf``
   or in the calling environment.  This can be used as a general
   protection against hanging test suites.  However, please note that
   the variable does not control dependencies, and therefore the user
   may need to install ``dev-python/pytest-timeout`` explicitly.
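
   For example, a user could add the following to ``make.conf``
   (the value is illustrative)::

       EPYTEST_TIMEOUT=3600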


Avoiding dependencies on other pytest plugins
=============================================
There are a number of pytest plugins that have little value to Gentoo
users.  These include plugins for test coverage
(``dev-python/pytest-cov``), coding style (``dev-python/pytest-flake8``),
and more.  Generally, packages should avoid using these plugins.

.. Warning::

   As of 2022-01-24, ``epytest`` disables a few undesirable plugins
   by default.  As a result, developers have a good chance
   of experiencing failures due to hardcoded pytest options first,
   even if they have the relevant plugins installed.

   If your package *really* needs to use the specific plugin, you need
   to pass ``-p <plugin>`` explicitly to reenable it.
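
   For instance, to reenable coverage testing (a sketch, assuming
   ``pytest_cov`` as the entry point name used
   by ``dev-python/pytest-cov``)::

       python_test() {
           epytest -p pytest_cov
       }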

In some cases, upstream packages only list them as dependencies
but do not use them automatically.  In other cases, you will need
to strip options enabling them from ``pytest.ini`` or ``setup.cfg``.

::

    src_prepare() {
        sed -i -e 's:--cov=wheel::' setup.cfg || die
        distutils-r1_src_prepare
    }


TypeError: _make_test_flaky() got an unexpected keyword argument 'reruns'
=========================================================================
If you see a test error resembling the following::

    TypeError: _make_test_flaky() got an unexpected keyword argument 'reruns'

This means that the tests are being run via the flaky_ plugin, while
the package in question expects pytest-rerunfailures_.  This is
because both plugins utilize the same ``@pytest.mark.flaky`` marker
but support different sets of arguments.

To resolve the problem, explicitly disable the ``flaky`` plugin and make
sure to depend on ``dev-python/pytest-rerunfailures``::

    BDEPEND="
        test? (
             dev-python/pytest-rerunfailures[${PYTHON_USEDEP}]
        )"

    python_test() {
        epytest -p no:flaky
    }


ImportPathMismatchError
=======================
An ``ImportPathMismatchError`` generally indicates that the same Python
module (or one that appears to be the same) has been loaded twice
using different paths, e.g.::

    E   _pytest.pathlib.ImportPathMismatchError: ('path', '/usr/lib/pypy3.7/site-packages/path', PosixPath('/tmp/portage/dev-python/jaraco-path-3.3.1/work/jaraco.path-3.3.1/jaraco/path.py'))

These problems are usually caused by pytest's test discovery getting
confused by namespace packages.  In this case, the ``jaraco`` directory
is a Python 3-style namespace package, but pytest is treating it as
a potential test directory.  Therefore, instead of loading it
as ``jaraco.path`` relative to the top directory, it loads it
as ``path`` relative to the ``jaraco`` directory.

The simplest way to resolve this problem is to restrict the test
discovery to the actual test directories, e.g.::

    python_test() {
        epytest test
    }

or::

    python_test() {
        epytest --ignore jaraco
    }


Failures due to missing files in temporary directories
======================================================
As of 2024-01-05, ``epytest`` overrides pytest's default temporary
directory retention policy.  With this override, directories from
successful tests are removed immediately, and the temporary directories
from the previous test run are replaced by the subsequent run.
This usually reduces the disk space requirements of test suites,
but in rare cases it can cause tests to fail.

If you notice test failures combined with indications that a file was
not found, especially within pytest's temporary directories, check
whether overriding the retention policy helps, e.g.::

    python_test() {
        epytest -o tmp_path_retention_policy=all
    }


fixture '...' not found
=======================
Most of the time, a missing fixture indicates that some pytest plugin
is not installed.  In rare cases, it can signify an incompatible pytest
version or a packaging issue.

The following table maps common fixture names to their respective
plugins.

=================================== ====================================
Fixture name                        Package
=================================== ====================================
event_loop                          dev-python/pytest-asyncio
freezer                             dev-python/pytest-freezegun
httpbin                             dev-python/pytest-httpbin
loop                                dev-python/pytest-aiohttp
mocker                              dev-python/pytest-mock
=================================== ====================================
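
For example, if tests fail with ``fixture 'mocker' not found``,
the matching plugin needs to be added as a test dependency (a minimal
sketch)::

    BDEPEND="
        test? (
            dev-python/pytest-mock[${PYTHON_USEDEP}]
        )"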


Warnings
========
pytest captures all warnings from the test suite by default, and prints
a summary of them at the end of the test suite run::

    =============================== warnings summary ===============================
    asgiref/sync.py:135: 1 warning
    tests/test_local.py: 5 warnings
    tests/test_sync.py: 12 warnings
    tests/test_sync_contextvars.py: 1 warning
      /tmp/asgiref/asgiref/sync.py:135: DeprecationWarning: There is no current event loop
        self.main_event_loop = asyncio.get_event_loop()
    [...]

However, some projects go further and use the ``filterwarnings`` option
to make (some) warnings fatal::

    ==================================== ERRORS ====================================
    _____________________ ERROR collecting tests/test_sync.py ______________________
    tests/test_sync.py:577: in <module>
        class ASGITest(TestCase):
    tests/test_sync.py:583: in ASGITest
        async def test_wrapped_case_is_collected(self):
    asgiref/sync.py:135: in __init__
        self.main_event_loop = asyncio.get_event_loop()
    E   DeprecationWarning: There is no current event loop
    =========================== short test summary info ============================
    ERROR tests/test_sync.py - DeprecationWarning: There is no current event loop
    !!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
    =============================== 1 error in 0.13s ===============================

Unfortunately, this frequently means that warnings coming from
a dependency trigger test failures in other packages.  Since making
warnings fatal is relatively common in the Python world, it is
recommended to:

1. Fix warnings in Python packages whenever possible, even if they
   are not fatal to the package itself.

2. Do not enable new Python implementations if they trigger any new
   warnings in the package.

If the warnings come from issues in the package's test suite rather than
the installed code, it is acceptable to make them non-fatal.  This can
be done either by removing the ``filterwarnings`` key from
``setup.cfg`` or by adding an ignore entry.  For example, the following
setting ignores ``DeprecationWarning`` in the ``test`` directory::

    filterwarnings =
        error
        ignore::DeprecationWarning:test
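
If patching the configuration file is inconvenient, a similar effect
can usually be achieved via pytest's ``-W`` option (a sketch;
the filter is illustrative)::

    python_test() {
        epytest -W 'ignore::DeprecationWarning'
    }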


.. _custom pytest markers:
   https://docs.pytest.org/en/stable/example/markers.html
.. _pytest-runner: https://pypi.org/project/pytest-runner/
.. _pytest-xdist: https://pypi.org/project/pytest-xdist/
.. _pytest-timeout: https://pypi.org/project/pytest-timeout/
.. _flaky: https://github.com/box/flaky/
.. _pytest-rerunfailures:
   https://github.com/pytest-dev/pytest-rerunfailures/