

# User Guide
This guide will lead you to your first CSV-file-parametrized pytest test. It starts with designing your test, preparing
your data, and writing the test method, and ends with executing your new test.
## The Scenario
Let's say you have to test this method:
```{eval-rst}
.. literalinclude:: ../../tests/test_docs_example.py
:language: python
:lines: 10,12,18-23,37-41
```
Parts of the code are from a more complex example written for
[a German blog post](https://juergen.rocks/blog/articles/data-driven-tests-mit-pytest-csv-params.html). The example code
is part of the source code and can be found under `tests/test_blog_example.py`. It is documented as
{mod}`~tests.test_blog_example`.
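To keep the walkthrough concrete, here is a rough stand-in for such a method. The signature, the container sizes, and the error behaviour are assumptions made for this guide, not the project's actual code:

```python
class NoContainerFitsError(ValueError):
    """Raised when no container can hold the items (assumed behaviour)."""


def get_smallest_possible_container(number_of_items: int, item_volume: int) -> str:
    """Pick the smallest container whose capacity fits all items."""
    # Hypothetical containers, smallest first, with an assumed capacity each
    containers = {"small": 100, "medium": 500, "large": 2000}
    needed = number_of_items * item_volume
    for name, capacity in containers.items():
        if needed <= capacity:
            return name
    raise NoContainerFitsError(f"no container holds a volume of {needed}")
```

The important points for the test design are the return value and the exception that can be raised when nothing fits.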
## Prepare your data
Your test data resides in a CSV file. CSV files can have different formats when it comes to:
- Field separators and delimiters
- Quoting
- Line termination
The class {class}`pytest_csv_params.dialect.CsvParamsDefaultDialect` defines a default CSV format that should fit most
requirements:
```{eval-rst}
.. literalinclude:: ../../pytest_csv_params/dialect.py
:language: python
:lines: 5-6,8,18-
```
You can derive your own CSV format class from there (or from {class}`csv.Dialect`) if your files look different.
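For example, a hypothetical dialect for semicolon-separated files with Windows line endings could be derived like this (a sketch; pick the attributes your files actually need):

```python
import csv


class SemicolonDialect(csv.Dialect):
    """Hypothetical dialect for semicolon-separated CSV files."""

    delimiter = ";"
    quotechar = '"'
    doublequote = True
    skipinitialspace = True
    lineterminator = "\r\n"
    quoting = csv.QUOTE_MINIMAL
```

No instance is needed; the class itself can be used as the dialect when reading your CSV files.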
Your test data for the method above could look like this:
```{eval-rst}
.. literalinclude:: ../../tests/assets/doc-example.csv
:language: text
:emphasize-lines: 1
```
- The first line is a header line that names the individual columns
- The column names are not suitable as argument names
- The value in the dimensions column needs to be transformed in order to get tested
- There is a column that tells if an exception is to be expected, and the last two lines expect one
## Design and write the test
The test must call the `get_smallest_possible_container` method with the right parameters. The CSV file has all the
information, but maybe not in the right format. We'll take care of that in a second.
The test may expect an exception; that should also be considered.
The parameters of the test method should reflect the input parameters for the method under test, and the expectations.
So let's build it:
```{eval-rst}
.. literalinclude:: ../../tests/test_docs_example.py
:language: python
:lines: 14-15,75-81,91-
:emphasize-lines: 4-8
```
- The test now gets all parameters needed to execute `get_smallest_possible_container`, as well as the
  expectations
- Based on the expectation for an exception, the test goes in two different directions
Now it's time for getting stuff from the CSV file.
## Add the parameters from the CSV file
Here comes the {meth}`~pytest_csv_params.decorator.csv_params` decorator. But let's take it one step at a time.
```{eval-rst}
.. literalinclude:: ../../tests/test_docs_example.py
:language: python
:lines: 14,16-17,58-81
:emphasize-lines: 5,6,8,16,18
```
- With the parameter `data_file` you point to your CSV file
- With the parameter `id_col` you name the column of the CSV file that contains the test case ID; the test case ID is
shown in the execution logs
- With the `header_renames` dictionary you define how a column is represented as an argument name for your test method;
  the highlighted example transforms "Number of items" into `number_of_items`
- With the `data_casts` dictionary you define how data needs to be transformed to be usable in the test; you can use
  `lambda`s or method references; all values from the CSV arrive as `str`
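Because every CSV value arrives as `str`, a `data_casts` mapping simply pairs argument names with callables. A minimal sketch with assumed column names:

```python
# Hypothetical data_casts mapping; the keys must match the (renamed)
# argument names, and each callable receives the raw string value
data_casts = {
    "number_of_items": int,
    "expect_raises": lambda value: value.lower() == "true",
}

# The casts are applied during test collection, for example:
assert data_casts["number_of_items"]("42") == 42
assert data_casts["expect_raises"]("True") is True
```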
All possible parameters are explained under [Configuration](config), or more technically, in the source documentation of
{meth}`pytest_csv_params.decorator.csv_params`.
The `data_casts` method `get_dimensions` looks like this:
```{eval-rst}
.. literalinclude:: ../../tests/test_docs_example.py
:language: python
:lines: 44,52-55
:emphasize-lines: 4
```
The method is called during the test collection phase. If it raises a {class}`ValueError`, the run ends in an error.
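As a rough stand-in for such a cast (assumed behaviour, not the project's actual implementation), it might split a dimension string into integers:

```python
def get_dimensions(value: str) -> tuple:
    """Hypothetical cast: turn a string like "10x20x30" into (10, 20, 30).

    int() raises ValueError for malformed parts, which would end the
    collection phase in an error rather than failing a single test.
    """
    return tuple(int(part) for part in value.lower().split("x"))
```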
## Execute the test
There is nothing special to do now. Just run your tests as always. Your run should look like this:
```text
tests/test.py::test_get_smallest_possible_container[Small Container 1] PASSED [ 12%]
tests/test.py::test_get_smallest_possible_container[Small Container 2] PASSED [ 25%]
tests/test.py::test_get_smallest_possible_container[Small Container 3] PASSED [ 37%]
tests/test.py::test_get_smallest_possible_container[Medium Container] PASSED [ 50%]
tests/test.py::test_get_smallest_possible_container[Large Container 1] PASSED [ 62%]
tests/test.py::test_get_smallest_possible_container[Large Container 2] PASSED [ 75%]
tests/test.py::test_get_smallest_possible_container[Not fitting 1] PASSED [ 87%]
tests/test.py::test_get_smallest_possible_container[Not fitting 2] PASSED [100%]
```
## Analyse test failures
- Is it a failure for all test data elements or just for a few?
- When only some tests fail, the test case ID should tell you where to look