
Grouping Tests in PyTest: Substrings and Markers

As promised in one of my previous blogs, here I discuss a powerful PyTest feature: grouping tests. Grouping tests is an effective way to run a test suite because it lets us execute or skip selected tests based on criteria. If you are new to PyTest, refer to Getting started with PyTest before reading this.


Before diving in, a quick environment check:

  • Python version used: Python 3.7 in PyCharm

  • Install PyTest: pip install pytest

  • Confirm the installation: pytest -h, which will display the help

We can perform grouping in two ways:

  • Matching Substring

  • Applying Markers


Matching Substring:


We can group a subset of tests simply by mentioning a substring common to test names across different files. For this, naming the tests accordingly is key.


For example, we have two modules, test_numbers_one.py and test_numbers_two.py, with multiple tests whose names contain the substrings "arith" and "compare".


test_numbers_one.py

def test_arithadd():
    a = 6
    b = 3
    assert a + b == 9, "Expected: {}, Result: {}".format(9, a + b)

def test_arithsubtract():
    a = 6
    b = 3
    assert a - b == 3, "Expected: {}, Result: {}".format(3, a - b)

def test_compare_small(numbers):
    assert numbers[0] < numbers[1],\
        "{} not less than {}".format(numbers[0],numbers[1])

def test_compare_greater(numbers):
    assert numbers[0] > numbers[1],\
        "{} not greater than {}".format(numbers[0],numbers[1])

test_numbers_two.py

def test_arith_multiplytest():
    a = 6
    b = 3
    assert a * b == 18, "Expected: {}, Result: {}".format(18, a * b)

def test_arith_divide():
    a = 6
    b = 3
    assert a / b == 2, "Expected: {}, Result: {}".format(2, a / b)

def test_compare_equalto(numbers):
    assert numbers[0] == numbers[1],\
        "{} not equal to {}".format(numbers[0],numbers[1])

To execute tests with a specific substring, use the -k option followed by the substring: pytest -k arith -v. Here "arith" is the substring.
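Note that -k also accepts a boolean expression over substrings, which comes in handy; for instance, with the tests above:

pytest -k "arith and not divide" -v

would select only the tests whose names contain "arith" but not "divide" (three of the four arith tests).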


When executed below, pytest collected 7 items / 3 deselected / 4 selected, i.e. the four tests that have "arith" as a substring.


C:\Users\PyTest_Project\UserApp>pytest -k arith -v
==================== test session starts ====================
platform win32 -- Python 3.7.7, pytest-5.4.3, py-1.8.2, pluggy-0.13.1 -- c:\programs\python\python37\python.exe
cachedir: .pytest_cache
metadata: {'Python': '3.7.7', 'Platform': 'Windows-10-10.0.19041-SP0', 'Packages': {'pytest': '5.4.3', 'py': '1.8.2', 'pluggy': '0.13.1'}, 'Plugins': {'django': '3.9.0', 'forked': '1.1.3', 'html': '2.1.1', 'metadata': '1.10.0', 'xdist': '1.32.0'}, 'JAVA_HOME': 'C:\\Program Files\\Java\\jdk1.8.0_191'}
rootdir: C:\Users\PyTest_Project\UserApp, inifile: pytest.ini
plugins: django-3.9.0, forked-1.1.3, html-2.1.1, metadata-1.10.0, xdist-1.32.0
collected 7 items / 3 deselected / 4 selected

test_numbers_one.py::test_arithadd PASSED                   [ 25%]
test_numbers_one.py::test_arithsubtract PASSED              [ 50%]
test_numbers_two.py::test_arith_multiplytest PASSED         [ 75%]
test_numbers_two.py::test_arith_divide PASSED               [100%]
=============== 4 passed, 3 deselected in 0.11s ===============

Now let's try the "compare" substring.


The compare tests use a fixture named numbers to initialize the values. You can place the fixture in the same file or in conftest.py. For more details on fixtures, refer to this.


Fixture used:

import pytest

@pytest.fixture
def numbers():
    a = 9
    b = 3
    return a, b
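For reference, the project layout assumed in the runs below keeps this fixture in conftest.py, so that both test modules can use it without importing anything:

UserApp/
├── conftest.py           # holds the numbers fixture shown above
├── pytest.ini
├── test_numbers_one.py
└── test_numbers_two.py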

When executed, pytest collected 7 items / 4 deselected / 3 selected, i.e. the three tests that have "compare" as a substring. Here, two tests failed for obvious reasons: 9 is not less than 3, and 9 is not equal to 3.


C:\Users\PyTest_Project\UserApp>pytest -k compare -v
==================== test session starts ====================
platform win32 -- Python 3.7.7, pytest-5.4.3, py-1.8.2, pluggy-0.13.1 -- c:\programs\python\python37\python.exe
cachedir: .pytest_cache
metadata: {'Python': '3.7.7', 'Platform': 'Windows-10-10.0.19041-SP0', 'Packages': {'pytest': '5.4.3', 'py': '1.8.2', 'pluggy': '0.13.1'}, 'Plugins': {'django': '3.9.0', 'forked': '1.1.3', 'html': '2.1.1', 'metadata': '1.10.0', 'xdist': '1.32.0'}, 'JAVA_HOME': 'C:\\Program Files\\Java\\jdk1.8.0_191'}
rootdir: C:\Users\PyTest_Project\UserApp, inifile: pytest.ini
plugins: django-3.9.0, forked-1.1.3, html-2.1.1, metadata-1.10.0, xdist-1.32.0
collected 7 items / 4 deselected / 3 selected

test_numbers_one.py::test_compare_small FAILED              [ 33%]
test_numbers_one.py::test_compare_greater PASSED            [ 66%]
test_numbers_two.py::test_compare_equalto FAILED            [100%]
========================== FAILURES ==========================
_____________________ test_compare_small _____________________
numbers = (9, 3)

    def test_compare_small(numbers):
>       assert numbers[0] < numbers[1],\
            "{} not less than {}".format(numbers[0],numbers[1])
E       AssertionError: 9 not less than 3
E       assert 9 < 3

test_numbers_one.py:12: AssertionError
____________________ test_compare_equalto ____________________
numbers = (9, 3)

    def test_compare_equalto(numbers):
>       assert numbers[0] == numbers[1],\
            "{} not equal to {}".format(numbers[0],numbers[1])
E       AssertionError: 9 not equal to 3
E       assert 9 == 3
E         +9
E         -3

test_numbers_two.py:12: AssertionError
==================== short test summary info ====================
FAILED test_numbers_one.py::test_compare_small - AssertionError: 9 not less than 3
FAILED test_numbers_two.py::test_compare_equalto - AssertionError: 9 not equal to 3
=============== 2 failed, 1 passed, 4 deselected in 0.13s ===============

Applying Markers


Markers are nothing but a way of labeling tests as a specific type so that we can execute them exclusively.

To use markers, first import pytest in the test file and add the marker just above the test declaration.

The syntax to set a marker:

@pytest.mark.<marker-name>

There are two types of markers:

  • Built-in Markers

  • Custom Markers

Built-in Markers

As the name suggests, we will first learn how to use some of the built-in PyTest markers.


skip

When a test is marked 'skip', pytest skips its execution entirely. For example, this marker can be used when a test does not apply to a particular version.

The syntax is given below:

@pytest.mark.skip
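The marker also takes an optional reason, and its conditional variant skipif suits the version scenario mentioned above. A small sketch (the test names here are only illustrative):

import sys

import pytest

@pytest.mark.skip(reason="feature removed in this release")
def test_old_feature():
    assert True

# Skipped only when the condition is true at collection time
@pytest.mark.skipif(sys.version_info < (3, 8), reason="requires Python 3.8+")
def test_new_feature():
    assert True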

For example, in the file below some tests are marked as skip; pytest will not execute them and will report them as skipped when the file is run.


test_xfail_skip.py

import pytest

@pytest.fixture
def numbers():
    a = 9
    b = 3
    return a, b

@pytest.mark.skip
def test_smaller(numbers):
    assert numbers[0] < numbers[1],\
        "{} not less than {}".format(numbers[0],numbers[1])

def test_greater(numbers):
    assert numbers[0] > numbers[1],\
        "{} not greater than {}".format(numbers[0],numbers[1])

@pytest.mark.skip
def test_equalto(numbers):
    assert numbers[0] == numbers[1],\
        "{} not equal to {}".format(numbers[0],numbers[1])

@pytest.mark.skip
def test_smaller_equal(numbers):
    assert numbers[0] <= numbers[1],\
        "{} not less than {}".format(numbers[0],numbers[1])

def test_greater_equal(numbers):
    assert numbers[0] >= numbers[1],\
        "{} not less than {}".format(numbers[0],numbers[1])

When executed, the output clearly shows the tests that ran as PASSED or FAILED, and the skipped tests as SKIPPED.

C:\Users\PyTest_Project\PyTestFramework>pytest test_xfail_skip.py -v
==================== test session starts ====================
platform win32 -- Python 3.7.7, pytest-5.4.3, py-1.8.2, pluggy-0.13.1 -- c:\programs\python\python37\python.exe
cachedir: .pytest_cache
metadata: {'Python': '3.7.7', 'Platform': 'Windows-10-10.0.19041-SP0', 'Packages': {'pytest': '5.4.3', 'py': '1.8.2', 'pluggy': '0.13.1'}, 'Plugins': {'django': '3.9.0', 'forked': '1.1.3', 'html': '2.1.1', 'metadata': '1.10.0', 'xdist': '1.32.0'}, 'JAVA_HOME': 'C:\\Program Files\\Java\\jdk1.8.0_191'}
rootdir: C:\Users\PyTest_Project, inifile: pytest.ini
plugins: django-3.9.0, forked-1.1.3, html-2.1.1, metadata-1.10.0, xdist-1.32.0
collected 5 items

test_xfail_skip.py::test_smaller SKIPPED                    [ 20%]
test_xfail_skip.py::test_greater PASSED                     [ 40%]
test_xfail_skip.py::test_equalto SKIPPED                    [ 60%]
test_xfail_skip.py::test_smaller_equal SKIPPED              [ 80%]
test_xfail_skip.py::test_greater_equal PASSED               [100%]
=============== 2 passed, 3 skipped in 0.05s ===============

xfail


When a test is marked 'xfail', the test is executed, but its outcome does not count against the overall result: a failing test is reported as XFAIL and a passing one as XPASS, rather than as a plain failure.

For example, this marker can be used in TDD, when some functionality is not yet implemented by the developer.


The syntax is given below:

@pytest.mark.xfail
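Like skip, xfail accepts an optional reason, and additionally a strict flag; with strict=True an unexpected pass (XPASS) is reported as a failure instead. A small sketch (names are illustrative):

import pytest

@pytest.mark.xfail(reason="feature not implemented yet")
def test_pending_feature():
    assert False  # reported as XFAIL, not FAILED

# strict=True turns an unexpected pass into a hard failure
@pytest.mark.xfail(strict=True, reason="known bug, fix pending")
def test_known_bug():
    assert True  # passes, so reported as FAILED under strict (XPASS otherwise)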

In the example file below some tests are marked as xfail; pytest executes them and reports them as XPASS if they pass and XFAIL if they fail, without showing the failure details.


test_xfail_skip.py

import pytest

@pytest.fixture
def numbers():
    a = 9
    b = 3
    return a, b

@pytest.mark.xfail
def test_smaller(numbers):
    assert numbers[0] < numbers[1],\
        "{} not less than {}".format(numbers[0],numbers[1])

def test_greater(numbers):
    assert numbers[0] > numbers[1],\
        "{} not greater than {}".format(numbers[0],numbers[1])

@pytest.mark.xfail
def test_equalto(numbers):
    assert numbers[0] == numbers[1],\
        "{} not equal to {}".format(numbers[0],numbers[1])


def test_smaller_equal(numbers):
    assert numbers[0] <= numbers[1],\
        "{} not less than {}".format(numbers[0],numbers[1])

@pytest.mark.xfail
def test_greater_equal(numbers):
    assert numbers[0] >= numbers[1],\
        "{} not less than {}".format(numbers[0],numbers[1])

When executed,

  • tests that passed are shown as PASSED

  • tests that failed are shown as FAILED

  • tests xfailed, meaning marked as xfail and failed, are shown as XFAIL (also observe that, unlike for failed tests, the failure details are not shown)

  • tests xpassed, meaning marked as xfail and passed, are shown as XPASS

C:\Users\PyTest_Project\PyTestFramework>pytest test_xfail_skip.py -v
==================== test session starts ====================
platform win32 -- Python 3.7.7, pytest-5.4.3, py-1.8.2, pluggy-0.13.1 -- c:\python\python37\python.exe
cachedir: .pytest_cache
metadata: {'Python': '3.7.7', 'Platform': 'Windows-10-10.0.19041-SP0', 'Packages': {'pytest': '5.4.3', 'py': '1.8.2', 'pluggy': '0.13.1'}, 'Plugins': {'django': '3.9.0', 'forked': '1.1.3', 'html': '2.1.1', 'metadata': '1.10.0', 'xdist': '1.32.0'}, 'JAVA_HOME': 'C:\\Program Files\\Java\\jdk1.8.0_191'}
rootdir: C:\Users\PyTest_Project, inifile: pytest.ini
plugins: django-3.9.0, forked-1.1.3, html-2.1.1, metadata-1.10.0, xdist-1.32.0
collected 5 items

test_xfail_skip.py::test_smaller XFAIL                      [ 20%]
test_xfail_skip.py::test_greater PASSED                     [ 40%]
test_xfail_skip.py::test_equalto XFAIL                      [ 60%]
test_xfail_skip.py::test_smaller_equal FAILED               [ 80%]
test_xfail_skip.py::test_greater_equal XPASS                [100%]
========================== FAILURES ==========================
_____________________ test_smaller_equal _____________________
numbers = (9, 3)

    def test_smaller_equal(numbers):
>       assert numbers[0] <= numbers[1],\
            "{} not less than or equal to {}".format(numbers[0],numbers[1])
E       AssertionError: 9 not less than or equal to 3
E       assert 9 <= 3

test_xfail_skip.py:25: AssertionError
==================== short test summary info ====================
FAILED test_xfail_skip.py::test_smaller_equal - AssertionError: 9 not less than or equal to 3
==================== 1 failed, 1 passed, 2 xfailed, 1 xpassed in 0.19s ====================
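Tip: to get a one-line summary entry for each skipped, xfailed, and xpassed test, you can add pytest's -r report option, e.g. pytest -rsxX test_xfail_skip.py (the characters s, x, and X select skipped, xfailed, and xpassed respectively).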

Custom Markers


The good news is that we can customize marker names to suit our needs.

The rules and syntax remain the same:

import pytest
@pytest.mark.<marker-name>
def test_me():
    pass

Below is an example where the custom markers "regression" and "sanity" are created and used.

To execute the tests with a specific marker, use the -m option as below:

pytest -m <marker-name> 

Before you use a custom marker, it has to be registered. The method of registering custom markers will be discussed in my next blog.
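As a quick preview (a minimal sketch only; the details are for that blog), registering usually means listing the markers in pytest.ini, which also stops pytest from warning about unknown marks:

# pytest.ini
[pytest]
markers =
    regression: marks a test as part of the regression suite
    sanity: marks a test as part of the sanity suite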


test_userapp.py

import pytest

@pytest.mark.regression
def test_database_insert_record(database_connection):
    if (database_connection):
        print("Insertion Successful. ")
        assert True

@pytest.mark.sanity
def test_database_update_record(database_connection):
    if (database_connection):
        print("Update Successful. ")
        assert True

@pytest.mark.sanity
def test_database_delete_record(database_connection):
    if (database_connection):
        print("Deletion Successful. ")
        assert True

@pytest.mark.regression
@pytest.mark.sanity
def test_ui_login(ui_open):
    if (ui_open):
        print("Login Successful")
        assert True

@pytest.mark.sanity
def test_ui_view_details(ui_open):
    if (ui_open):
        print("View details Successful")
        assert True

@pytest.mark.regression
def test_ui_log_out(ui_open):
    if (ui_open):
        print("Log Out Successful")
        assert True
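The fixtures database_connection and ui_open are not shown here; a hypothetical conftest.py stub like the one below (my assumption, simply returning a truthy value) is enough to make the module runnable:

# conftest.py (hypothetical stubs for this example)
import pytest

@pytest.fixture
def database_connection():
    # A real suite would create and yield an actual DB connection here
    return True

@pytest.fixture
def ui_open():
    # A real suite would launch a UI/browser session here
    return True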

Now execute the tests marked "regression": pytest test_userapp.py -m regression -v


Observe that pytest collected 6 items / 3 deselected / 3 selected

C:\Users\PyTest_Project\UserApp>pytest test_userapp.py -m regression -v
==================== test session starts ====================
platform win32 -- Python 3.7.7, pytest-5.4.3, py-1.8.2, pluggy-0.13.1 -- c:\programs\python\python37\python.exe
cachedir: .pytest_cache
metadata: {'Python': '3.7.7', 'Platform': 'Windows-10-10.0.19041-SP0', 'Packages': {'pytest': '5.4.3', 'py': '1.8.2', 'pluggy': '0.13.1'}, 'Plugins': {'django': '3.9.0', 'forked': '1.1.3', 'html': '2.1.1', 'metadata': '1.10.0', 'xdist': '1.32.0'}, 'JAVA_HOME': 'C:\\Program Files\\Java\\jdk1.8.0_191'}
rootdir: C:\Users\PyTest_Project\UserApp, inifile: pytest.ini
plugins: django-3.9.0, forked-1.1.3, html-2.1.1, metadata-1.10.0, xdist-1.32.0
collected 6 items / 3 deselected / 3 selected

test_userapp.py::test_database_insert_record PASSED         [ 33%]
test_userapp.py::test_ui_login PASSED                       [ 66%]
test_userapp.py::test_ui_log_out PASSED                     [100%]
=============== 3 passed, 3 deselected in 0.04s ===============

Now execute the tests marked "sanity": pytest test_userapp.py -m sanity -v


Observe that pytest collected 6 items / 2 deselected / 4 selected

C:\Users\PyTest_Project\UserApp>pytest test_userapp.py -m sanity -v
==================== test session starts ====================
platform win32 -- Python 3.7.7, pytest-5.4.3, py-1.8.2, pluggy-0.13.1 -- c:\programs\python\python37\python.exe
cachedir: .pytest_cache
metadata: {'Python': '3.7.7', 'Platform': 'Windows-10-10.0.19041-SP0', 'Packages': {'pytest': '5.4.3', 'py': '1.8.2', 'pluggy': '0.13.1'}, 'Plugins': {'django': '3.9.0', 'forked': '1.1.3', 'html': '2.1.1', 'metadata': '1.10.0', 'xdist': '1.32.0'}, 'JAVA_HOME': 'C:\\Program Files\\Java\\jdk1.8.0_191'}
rootdir: C:\Users\PyTest_Project\UserApp, inifile: pytest.ini
plugins: django-3.9.0, forked-1.1.3, html-2.1.1, metadata-1.10.0, xdist-1.32.0
collected 6 items / 2 deselected / 4 selected

test_userapp.py::test_database_update_record PASSED         [ 25%]
test_userapp.py::test_database_delete_record PASSED         [ 50%]
test_userapp.py::test_ui_login PASSED                       [ 75%]
test_userapp.py::test_ui_view_details PASSED                [100%]
=============== 4 passed, 2 deselected in 0.04s ===============

Also, note that a test can carry multiple markers, and such a test is executed when any one of its markers is selected. For example, the test below ran both when the regression marker and when the sanity marker was chosen.

If you missed it, scroll up and check the two runs above!

@pytest.mark.regression
@pytest.mark.sanity
def test_ui_login(ui_open):
    if (ui_open):
        print("Login Successful")
        assert True

Another cool option is that we can exclude a group of tests carrying a specific marker by using "not <marker-name>" while executing.

pytest -m "not <marker-name>" test_userapp.py -v

Let's exclude the regression tests while executing the tests in test_userapp.py: pytest -m "not regression" test_userapp.py -v


Observe that pytest excluded the test_ui_login test as well (it carries the regression marker) and collected 6 items / 3 deselected / 3 selected.

C:\Users\PyTest_Project\UserApp>pytest -m "not regression" test_userapp.py -v
==================== test session starts ====================
platform win32 -- Python 3.7.7, pytest-5.4.3, py-1.8.2, pluggy-0.13.1 -- c:\programs\python\python37\python.exe
cachedir: .pytest_cache
metadata: {'Python': '3.7.7', 'Platform': 'Windows-10-10.0.19041-SP0', 'Packages': {'pytest': '5.4.3', 'py': '1.8.2', 'pluggy': '0.13.1'}, 'Plugins': {'django': '3.9.0', 'forked': '1.1.3', 'html': '2.1.1', 'metadata': '1.10.0', 'xdist': '1.32.0'}, 'JAVA_HOME': 'C:\\Program Files\\Java\\jdk1.8.0_191'}
rootdir: C:\Users\PyTest_Project\UserApp, inifile: pytest.ini
plugins: django-3.9.0, forked-1.1.3, html-2.1.1, metadata-1.10.0, xdist-1.32.0
collected 6 items / 3 deselected / 3 selected

test_userapp.py::test_database_update_record PASSED         [ 33%]
test_userapp.py::test_database_delete_record PASSED         [ 66%]
test_userapp.py::test_ui_view_details PASSED                [100%]
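Incidentally, -m accepts the same kind of boolean expression as -k, so markers can be combined; e.g. pytest -m "sanity and not regression" test_userapp.py -v runs the sanity tests that are not also regression tests.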

To view all the available markers (custom and built-in), use the following command:

pytest --markers


Conclusion


I hope this article helped you with grouping test cases in multiple ways, giving you the convenience of executing a subset of tests when needed.
