Pytest Misc
For cleanup (usually not needed), the --cache-clear option allows removing all cross-session cache contents ahead of a test run.
Other plugins may access the config.cache object to set/get json encodable values
between pytest invocations.
Note
This plugin is enabled by default, but can be disabled if needed: see Deactivating /
unregistering a plugin by name (the internal name for this plugin is cacheprovider).
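For example, using pytest's standard mechanism for deactivating a plugin by name:
$ pytest -p no:cacheprovider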
# content of test_50.py
import pytest


@pytest.mark.parametrize("i", range(50))
def test_num(i):
    if i in (17, 25):
        pytest.fail("bad luck")
If you run this for the first time you will see two failures:
$ pytest -q
.................F.......F........................                      [100%]
================================= FAILURES =================================
_______________________________ test_num[17] _______________________________

i = 17

    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
>           pytest.fail("bad luck")
E           Failed: bad luck

test_50.py:7: Failed
_______________________________ test_num[25] _______________________________

i = 25

    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
>           pytest.fail("bad luck")
E           Failed: bad luck

test_50.py:7: Failed
========================= short test summary info ==========================
FAILED test_50.py::test_num[17] - Failed: bad luck
FAILED test_50.py::test_num[25] - Failed: bad luck
2 failed, 48 passed in 0.12s
$ pytest --lf
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-7.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 2 items
run-last-failure: rerun previous 2 failures

test_50.py FF                                                            [100%]

================================= FAILURES =================================
_______________________________ test_num[17] _______________________________

i = 17

    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
>           pytest.fail("bad luck")
E           Failed: bad luck

test_50.py:7: Failed
_______________________________ test_num[25] _______________________________

i = 25

    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
>           pytest.fail("bad luck")
E           Failed: bad luck

test_50.py:7: Failed
========================= short test summary info ==========================
FAILED test_50.py::test_num[17] - Failed: bad luck
FAILED test_50.py::test_num[25] - Failed: bad luck
============================ 2 failed in 0.12s =============================
You have run only the two failing tests from the last run, while the 48 passing tests have
not been run (“deselected”).
Now, if you run with the --ff option, all tests will be run, but the previous failures will be executed first (as can be seen from the series of FF and dots):
$ pytest --ff
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-7.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 50 items
run-last-failure: rerun previous 2 failures first

test_50.py FF................................................            [100%]

================================= FAILURES =================================
_______________________________ test_num[17] _______________________________

i = 17

    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
>           pytest.fail("bad luck")
E           Failed: bad luck

test_50.py:7: Failed
_______________________________ test_num[25] _______________________________

i = 25

    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
>           pytest.fail("bad luck")
E           Failed: bad luck

test_50.py:7: Failed
========================= short test summary info ==========================
FAILED test_50.py::test_num[17] - Failed: bad luck
FAILED test_50.py::test_num[25] - Failed: bad luck
======================= 2 failed, 48 passed in 0.12s =======================
The new --nf, --new-first option runs new tests first, followed by the rest of the tests; in both groups, tests are also sorted by file modification time, with more recent files coming first.
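For example, to run the newest tests first:
$ pytest --nf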
# content of test_caching.py
import pytest
import time


def expensive_computation():
    print("running expensive computation...")


@pytest.fixture
def mydata(request):
    val = request.config.cache.get("example/value", None)
    if val is None:
        expensive_computation()
        val = 42
        request.config.cache.set("example/value", val)
    return val


def test_function(mydata):
    assert mydata == 23
If you run this command for the first time, you can see the print statement:
$ pytest -q
F                                                                        [100%]
================================= FAILURES =================================
______________________________ test_function _______________________________

mydata = 42

    def test_function(mydata):
>       assert mydata == 23
E       assert 42 == 23

test_caching.py:20: AssertionError
-------------------------- Captured stdout setup ---------------------------
running expensive computation...
========================= short test summary info ==========================
FAILED test_caching.py::test_function - assert 42 == 23
1 failed in 0.12s
If you run it a second time, the value will be retrieved from the cache and nothing will be
printed:
$ pytest -q
F                                                                        [100%]
================================= FAILURES =================================
______________________________ test_function _______________________________

mydata = 42

    def test_function(mydata):
>       assert mydata == 23
E       assert 42 == 23

test_caching.py:20: AssertionError
========================= short test summary info ==========================
FAILED test_caching.py::test_function - assert 42 == 23
1 failed in 0.12s
You can inspect the cache content with the --cache-show option:
$ pytest --cache-show
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-7.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
cachedir: /home/sweet/project/.pytest_cache
--------------------------- cache values for '*' ---------------------------
cache/lastfailed contains:
  {'test_caching.py::test_function': True}
cache/nodeids contains:
  ['test_caching.py::test_function']
cache/stepwise contains:
  []
example/value contains:
  42
To clear all cache contents ahead of a run:
$ pytest --cache-clear
Stepwise
As an alternative to --lf -x, especially for cases where you expect a large part of the test suite to fail, --sw, --stepwise allows you to fix the failures one at a time. The test suite will run until the first failure and then stop. At the next invocation, tests will continue from the last failing test and then run until the next failing test. You may use the --stepwise-skip option to ignore one failing test and stop the test execution on the second failing test instead. This is useful if you get stuck on a failing test and just want to ignore it until later. Providing --stepwise-skip will also enable --stepwise implicitly.
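For example:
$ pytest --sw
$ pytest --sw --stepwise-skip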
pytest-failed-screenshot
For UI automation test cases using Selenium and Appium, screenshots are saved when tests fail and are attached to the report when Allure is used.
Helium is supported, but the webdriver process cannot be killed within a test case; handle it in a fixture teardown instead (see the demo below).
Install
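A minimal install sketch, assuming the plugin is published on PyPI under the same name as the GitHub repository:
pip install pytest-failed-screenshot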
Usage
Demo
The Selenium and Appium driver instances must be passed in via a fixture.
import pytest
from selenium import webdriver


@pytest.fixture()
def init_driver():
    driver = webdriver.Chrome()
    yield driver
    driver.close()
    driver.quit()


def test_login_success(init_driver):
    init_driver.get("https://github.com/fungaegis/pytest-failed-screenshot")
    assert False
# helium demo
import pytest
from helium import start_chrome, kill_browser


@pytest.fixture()
def init_helium():
    yield None
    kill_browser()


@pytest.mark.usefixtures("init_helium")
def test_helium_demo():
    start_chrome("https://github.com/fungaegis/pytest-failed-screenshot")
    assert False
Event listeners are a set of functions in the Selenium Python bindings that wait for an event to occur; that event may be a user clicking or finding an element, changing a value, or an exception. The listeners are programmed to react to an input or signal.
There is a function to react when an event is about to happen, and there is a function to react after the event.
In a simple way: if person A tries to slap person B, person B tries to escape the hit (a reaction before the event).
Even though the Selenium Python bindings provide listeners, they don't define what should happen when an event occurs. The user must provide all the details of what should happen when an event occurs. For example, the user should implement a method for what should happen before and after a click.
EventFiringWebDriver is a wrapper around a regular Selenium WebDriver instance: it provides all of the driver's methods and fires the listener's callbacks around them. The listener implementation is attached when the EventFiringWebDriver is constructed.
AbstractEventListener provides the pre- and post-event listener methods, so to use these listeners we have to implement them by inheriting from AbstractEventListener.
before_change_value_of :
before_change_value_of will be invoked before we try to change the value of an element; this method accepts the target element, the driver, and the text.
after_change_value_of :
after_change_value_of will be invoked after the target element's value has changed; this method accepts the target element, the driver, and the text.
before_click :
before_click will be invoked before an element is clicked with the click() method in Selenium; this method accepts the web element and the driver as parameters.
after_click :
after_click will be invoked after the click() method's operation has completed.
before_find :
before_find will be invoked before an element is located with the find_element method; this method accepts the locator strategy, the locator value, and the driver as parameters.
after_find :
after_find will be invoked after the element has been found; it accepts the same parameters as before_find.
before_navigate_back :
before_navigate_back will be invoked before the back() method is executed; this method accepts the driver as a parameter.
after_navigate_back :
after_navigate_back will be invoked after the back() method has executed.
before_navigate_forward :
before_navigate_forward will be invoked before the forward() method is executed.
after_navigate_forward :
after_navigate_forward will be invoked after the forward() method has executed.
before_navigate_refresh :
before_navigate_refresh will be executed before the refresh() method; this method accepts the driver as a parameter.
after_navigate_refresh :
after_navigate_refresh will be executed after the page has been refreshed with the refresh() method.
before_navigate_to :
before_navigate_to will be executed before navigating to any webpage (e.g. with the get() method); this method accepts the URL string and the driver as parameters.
before_execute_script :
before_execute_script will be executed before any JavaScript code is executed with execute_script.
after_execute_script :
after_execute_script will be executed after the JavaScript code has executed.
on_exception :
on_exception will be executed whenever an exception is raised, irrespective of whether the user handles the exception or not.
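A minimal example that implements before_navigate_to and after_navigate_to and wraps a Chrome driver with EventFiringWebDriver: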
import unittest

from selenium import webdriver
from selenium.webdriver.support.events import EventFiringWebDriver, AbstractEventListener


class MyListener(AbstractEventListener):
    def before_navigate_to(self, url, driver):
        print("Before navigate to %s" % url)

    def after_navigate_to(self, url, driver):
        print("After navigate to %s" % url)


class Test(unittest.TestCase):
    def test_logging_file(self):
        driver_plain = webdriver.Chrome(executable_path=r'D:\PATH\chromedriver.exe')
        edriver = EventFiringWebDriver(driver_plain, MyListener())
        edriver.get("https://google.com")


if __name__ == "__main__":
    unittest.main()
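A second example with the same setup, which also performs a find, a send_keys, and a click through the wrapped driver: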
import unittest

from selenium import webdriver
from selenium.webdriver.support.events import EventFiringWebDriver, AbstractEventListener


class MyListener(AbstractEventListener):
    def before_navigate_to(self, url, driver):
        print("Before navigating to ", url)


class Test(unittest.TestCase):
    def test_logging_file(self):
        driver_plain = webdriver.Chrome(executable_path=r'D:\PATH\chromedriver.exe')
        edriver = EventFiringWebDriver(driver_plain, MyListener())
        edriver.get("https://google.com")
        edriver.find_element_by_name("q").send_keys("Sendkeys with listener")
        edriver.find_element_by_xpath("//input[contains(@value,'Search')]").click()
        edriver.close()


if __name__ == "__main__":
    unittest.main()
pytest-order - a pytest plugin to order test execution
pytest-order works with Python 3.7 - 3.12, with pytest versions >= 5.0.0 for all versions up to Python 3.9, and with pytest >= 6.2.4 for Python >= 3.10. pytest-order runs on Linux, macOS and Windows.
Documentation
Apart from this overview, the following information is available:
Features
Overview
Have you ever wanted to easily run one of your tests before any others
run? Or run some tests last? Or run this one test before that other
test? Or make sure that this group of tests runs after this other group
of tests?
Install with:
pip install pytest-order
This defines the order marker that you can use in your code with
different attributes.
import pytest


@pytest.mark.order(2)
def test_foo():
    assert True


@pytest.mark.order(1)
def test_bar():
    assert True
When the file is run, test_bar (order 1) executes before test_foo (order 2); the abbreviated session output shows the plugin is active:
plugins: order
collected 2 items
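The order marker also accepts relative attributes. A minimal sketch based on pytest-order's documented before/after keyword arguments (test names below are illustrative):

import pytest


def test_third():
    assert True


@pytest.mark.order(after="test_third")
def test_fourth():
    # runs only after test_third has run
    assert True


@pytest.mark.order(before="test_third")
def test_second():
    # runs before test_third
    assert True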
Contributing
Contributions are very welcome. Tests can be run with tox; please ensure the coverage at least stays the same before you submit a pull request.