```shell
# make the test output verbose
pytest tests/ -v
# stop testing whenever you get to a test that fails
pytest tests/ -x
# run only a single test
pytest tests/test_base.py::test_initialization
# run only tests tagged with a particular word
pytest tests/ -m specialword
# print out all the output of tests to the console
pytest tests/ -s
# run all the tests, but run the last failures first
pytest tests/ --ff
# see which tests will be run with the given options and config
pytest tests/ --collect-only
# show local variables in tracebacks
pytest tests/ --showlocals
```
Python's standard library includes `tempfile`, which is a platform-agnostic way of creating temporary files and directories. Pytest has `tmp_path`, which you can insert as an argument into your test function to get a convenient location you can use to your heart's content. (There are also several other options with Pytest.)
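A minimal sketch of `tmp_path` in action (the file name and contents here are just illustrative):

```python
def test_write_and_read_back(tmp_path):
    # tmp_path is a pathlib.Path pointing at a fresh temporary directory
    config_file = tmp_path / "config.yaml"
    config_file.write_text("pipeline: test")
    assert config_file.read_text() == "pipeline: test"
```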
Then other libraries you're using may have specific testing capabilities. We use `click` for our CLI functionality, and there's a useful convenience pattern for running commands from a temporary directory:

```python
from click.testing import CliRunner

def test_something():
    runner = CliRunner()
    with runner.isolated_filesystem():
        # do something here in your new temporary directory
        ...
```
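Fleshed out a little, assuming a toy `hello` command defined inline just for the test, the pattern looks like this:

```python
import click
from click.testing import CliRunner

@click.command()
@click.argument("name")
def hello(name):
    """Toy command, defined here purely for illustration."""
    click.echo(f"Hello, {name}!")

def test_hello():
    runner = CliRunner()
    # isolated_filesystem() runs the body inside a throwaway directory
    with runner.isolated_filesystem():
        result = runner.invoke(hello, ["World"])
        assert result.exit_code == 0
        assert "Hello, World!" in result.output
```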
If you want to run the same test over a whole series of inputs, you can use Pytest's `parametrize` functionality:

```python
import pytest

@pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+4", 6), ("6*9", 42)])
def test_eval(test_input, expected):
    assert eval(test_input) == expected
```
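Incidentally, the third case above really does fail, since `eval("6*9")` is 54. If a parameter set like that is a known failure, you can mark just that case with `pytest.param`:

```python
import pytest

@pytest.mark.parametrize(
    "test_input,expected",
    [
        ("3+5", 8),
        ("2+4", 6),
        # the known-bad case is marked so the suite stays green
        pytest.param("6*9", 42, marks=pytest.mark.xfail),
    ],
)
def test_eval(test_input, expected):
    assert eval(test_input) == expected
```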
More generally, if you have a whole test that you know currently fails, you can mark it with `xfail`:

```python
import pytest

@pytest.mark.xfail()
def test_something() -> None:
    # whatever code you have here doesn't work
    ...
```
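`xfail` also takes optional arguments; a small sketch (the reason string here is made up):

```python
import pytest

@pytest.mark.xfail(reason="known bug in number parsing", strict=True)
def test_known_breakage() -> None:
    # with strict=True, an unexpected PASS fails the suite,
    # so you notice the moment the bug actually gets fixed
    assert "1,000".isdigit()
```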
The `mark` method in general is a great way of creating custom ways to run your tests. You could, for example, use a `@pytest.mark.no_async_call_required` decorator to distinguish between tests that take a bit longer to run and tests that are more or less instantaneous.
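A sketch of how such a custom marker might be wired up (registering it keeps pytest from warning about unknown marks):

```python
# conftest.py
def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "no_async_call_required: tests that make no slow async calls",
    )


# tests/test_fast.py
import pytest

@pytest.mark.no_async_call_required
def test_instantaneous():
    assert sum([1, 2, 3]) == 6
```

Running `pytest tests/ -m no_async_call_required` then picks out just the marked tests.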
Another tool worth knowing about is hypothesis. Perhaps you want to be sure that a function can handle any `datetime` value without any problem. Instead of trying to come up with a list of different possible edge cases yourself, hypothesis will run a whole series of generated values to check that this is actually the case. As the docs state:

> "It works by generating arbitrary data matching your specification and checking that your guarantee still holds in that case. If it finds an example where it doesn’t, it takes that example and cuts it down to size, simplifying it until it finds a much smaller example that still causes the problem. It then saves that example for later, so that once it has found a problem with your code it will not forget it in the future." (source)
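A small property-based sketch in that spirit (the round-trip property is our own example, not taken from the hypothesis docs):

```python
from datetime import datetime

from hypothesis import given
from hypothesis import strategies as st

@given(st.datetimes())
def test_isoformat_round_trip(value: datetime) -> None:
    # property: serializing to an ISO string and parsing it back
    # should reproduce the original datetime exactly
    assert datetime.fromisoformat(value.isoformat()) == value
```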
Hypothesis also ships with ready-made strategies for common types such as `datetime`.

We use `tox` to run the test suite across multiple Python versions. If you use `pyenv` as your overall Python version manager, you may have to use something like the following command to make sure that all the various Python versions are available to `tox`:

```shell
pyenv local zenml-dev-3.8.6 3.6.9 3.7.11 3.8.11 3.9.6
```
With that in place, running the whole matrix is as simple as calling `tox`.

Pytest also has a `--pdb` flag which you can pass in along with your CLI command. This will deposit you inside a `pdb` debugging environment at exactly the moment your test fails. Super useful that we get all this convenience functionality out of the box with Pytest.
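For instance, with a failing test like this one (invented for illustration), `pytest tests/ --pdb` drops you into the debugger right at the failure, with all the local variables available for inspection:

```python
def test_lookup():
    values = {"a": 1}
    # run `pytest --pdb` and you land in pdb here on failure,
    # where you can poke at `values` interactively
    assert values.get("b") == 2
```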
We also use `pre-commit` hooks that kick into action whenever you try to commit code. (Check out our `pyproject.toml` configuration and our `scripts/` directory to see how we handle this!) This ensures a level of consistency throughout our codebase, making sure that all our functions have docstrings, for example, or enforcing a standard order for `import` statements.

Some of this (the `mypy` hook, for example) starts to verge into what feels like testing territory. By ensuring that functions all have type annotations, you are sometimes doing more than just enforcing a particular coding style. When you add `mypy` into your development workflow, you get up close and personal with exactly how different types are passed around in your codebase.
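As a tiny illustration of the kind of thing `mypy` surfaces (the function is invented, not from our codebase):

```python
from typing import Dict, Optional

def get_pipeline_name(config: Dict[str, str]) -> str:
    # mypy flags this line: .get() returns Optional[str], not str,
    # so a None could leak out where callers expect a string
    return config.get("name")

def get_pipeline_name_fixed(config: Dict[str, str]) -> str:
    # handling the None case explicitly satisfies the checker
    name: Optional[str] = config.get("name")
    return name if name is not None else "default"
```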