Replacement for the --resultlog option, focused on simplicity and extensibility.
Usage
Install pytest-reportlog as a test requirement in your test environment.
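For example, with pip:

$ pip install pytest-reportlog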
The --report-log=FILE option writes a report log to FILE as the test session executes.
Each line of the report log contains a self-contained JSON object corresponding to a testing event, such as a collection or a test result report. The file is guaranteed to be flushed after writing each line, so systems can read and process events in real time.
Each JSON object contains a special key, $report_type, which uniquely identifies the kind of report object. For forward compatibility, consumers of the file should ignore reports they don't recognize, as well as unknown properties/keys in JSON objects they do recognize, as future pytest versions might enrich the objects with more properties/keys.
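For instance, a consumer might read the file line by line and dispatch on $report_type, skipping anything unfamiliar. A minimal sketch follows; the handler table and function name are illustrative, not part of the plugin:

import json

# Illustrative handlers keyed by $report_type; any report type not
# listed here is silently ignored, per the compatibility advice above.
HANDLERS = {
    "SessionStart": lambda e: print("pytest", e["pytest_version"], "started"),
    "SessionFinish": lambda e: print("exit status:", e["exitstatus"]),
}

def process_report_log(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            event = json.loads(line)
            handler = HANDLERS.get(event["$report_type"])
            if handler is not None:
                # Handlers read only the keys they need, so extra keys
                # added by future pytest versions are harmless.
                handler(event)

process_report_log("log.json")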
Example
Consider this file:
# content of test_report_example.py
def test_ok():
    assert 5 + 5 == 10

def test_fail():
    assert 4 + 4 == 1
$ pytest test_report_example.py -q --report-log=log.json
.F                                                                   [100%]
================================= FAILURES =================================
________________________________ test_fail _________________________________

    def test_fail():
>       assert 4 + 4 == 1
E       assert (4 + 4) == 1

test_report_example.py:8: AssertionError
------------------- generated report log file: log.json --------------------
1 failed, 1 passed in 0.12s
The generated log.json will contain a JSON object per line:
$ cat log.json
{"pytest_version": "5.2.2", "$report_type": "SessionStart"}
{"nodeid": "", "outcome": "passed", "longrepr": null, "result": null, "sections": [], "$report_type": "CollectReport"}
{"nodeid": "test_report_example.py", "outcome": "passed", "longrepr": null, "result": null, "sections": [], "$report_type": "CollectReport"}
{"nodeid": "test_report_example.py::test_ok", "location": ["test_report_example.py", 0, "test_ok"], "keywords": {"test_ok": 1, "pytest-reportlog": 1, "test_report_example.py": 1}, "outcome": "passed", "longrepr": null, "when": "setup", "user_properties": [], "sections": [], "duration": 0.0, "$report_type": "TestReport"}
{"nodeid": "test_report_example.py::test_ok", "location": ["test_report_example.py", 0, "test_ok"], "keywords": {"test_ok": 1, "pytest-reportlog": 1, "test_report_example.py": 1}, "outcome": "passed", "longrepr": null, "when": "call", "user_properties": [], "sections": [], "duration": 0.0, "$report_type": "TestReport"}
{"nodeid": "test_report_example.py::test_ok", "location": ["test_report_example.py", 0, "test_ok"], "keywords": {"test_ok": 1, "pytest-reportlog": 1, "test_report_example.py": 1}, "outcome": "passed", "longrepr": null, "when": "teardown", "user_properties": [], "sections": [], "duration": 0.00099945068359375, "$report_type": "TestReport"}
{"nodeid": "test_report_example.py::test_fail", "location": ["test_report_example.py", 4, "test_fail"], "keywords": {"test_fail": 1, "pytest-reportlog": 1, "test_report_example.py": 1}, "outcome": "passed", "longrepr": null, "when": "setup", "user_properties": [], "sections": [], "duration": 0.0, "$report_type": "TestReport"}
{"nodeid": "test_report_example.py::test_fail", "location": ["test_report_example.py", 4, "test_fail"], "keywords": {"test_fail": 1, "pytest-reportlog": 1, "test_report_example.py": 1}, "outcome": "failed", "longrepr": {"reprcrash": {"path": "D:\\projects\\pytest-reportlog\\test_report_example.py", "lineno": 6, "message": "assert (4 + 4) == 1"}, "reprtraceback": {"reprentries": [{"type": "ReprEntry", "data": {"lines": ["    def test_fail():", ">       assert 4 + 4 == 1", "E       assert (4 + 4) == 1"], "reprfuncargs": {"args": []}, "reprlocals": null, "reprfileloc": {"path": "test_report_example.py", "lineno": 6, "message": "AssertionError"}, "style": "long"}}], "extraline": null, "style": "long"}, "sections": [], "chain": [[{"reprentries": [{"type": "ReprEntry", "data": {"lines": ["    def test_fail():", ">       assert 4 + 4 == 1", "E       assert (4 + 4) == 1"], "reprfuncargs": {"args": []}, "reprlocals": null, "reprfileloc": {"path": "test_report_example.py", "lineno": 6, "message": "AssertionError"}, "style": "long"}}], "extraline": null, "style": "long"}, {"path": "D:\\projects\\pytest-reportlog\\test_report_example.py", "lineno": 6, "message": "assert (4 + 4) == 1"}, null]]}, "when": "call", "user_properties": [], "sections": [], "duration": 0.0009992122650146484, "$report_type": "TestReport"}
{"nodeid": "test_report_example.py::test_fail", "location": ["test_report_example.py", 4, "test_fail"], "keywords": {"test_fail": 1, "pytest-reportlog": 1, "test_report_example.py": 1}, "outcome": "passed", "longrepr": null, "when": "teardown", "user_properties": [], "sections": [], "duration": 0.0, "$report_type": "TestReport"}
{"exitstatus": 1, "$report_type": "SessionFinish"}
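As a concrete example of consuming these objects, the sketch below (the function name is illustrative) collects the nodeid and crash message of every failing test from a log like the one above. For failed TestReport objects in this log, longrepr is a JSON object whose reprcrash key holds the failure summary:

import json

def failed_tests(path="log.json"):
    failures = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            event = json.loads(line)
            # Only the "call" phase of a TestReport carries the actual
            # test outcome; setup/teardown reports are skipped.
            if (
                event.get("$report_type") == "TestReport"
                and event.get("when") == "call"
                and event.get("outcome") == "failed"
            ):
                crash = (event.get("longrepr") or {}).get("reprcrash") or {}
                failures.append((event["nodeid"], crash.get("message")))
    return failures

# For the session above this prints:
# [('test_report_example.py::test_fail', 'assert (4 + 4) == 1')]
print(failed_tests())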
record_property
The record_property fixture allows logging additional information for a test, just like with the JUnitXML format. Consider this test file:
def test_function(record_property):
    record_property("price", 12.34)
    record_property("fruit", "banana")
    assert True
This information will be recorded in the report JSON objects under the user_properties key as follows:
..., "user_properties": [["price", 12.34], ["fruit", "banana"]], ...
Note that this nested list construct is just the JSON representation of a list of tuples (name-value pairs).
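Since JSON has no tuple type, a consumer can turn the pairs back into a mapping with a plain dict() call. A minimal sketch, assuming a log.json produced from the test file above:

import json

with open("log.json", encoding="utf-8") as f:
    for line in f:
        event = json.loads(line)
        if event.get("$report_type") == "TestReport" and event.get("when") == "call":
            # dict() accepts the list of [name, value] pairs directly.
            props = dict(event.get("user_properties", []))
            if props:
                # e.g. test_function {'price': 12.34, 'fruit': 'banana'}
                print(event["nodeid"], props)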
License
Distributed under the terms of the MIT license.