This tutorial takes a deep dive into the advanced features of PyTest, one of the most powerful testing frameworks in Python. We build a complete mini-project from scratch that demonstrates fixtures, markers, plugins, parameterization, and custom configuration. The tutorial focuses on showing how PyTest can evolve from a simple test runner into a robust, extensible system suited to real-world applications. By the end, you will know not only how to write tests but also how to control and customize PyTest's behavior to fit any project's needs. Finally, we generate a JSON summary report that shows how easily PyTest integrates into modern CI and analytics pipelines.
📊 **Environment Setup and Project Structure**: The tutorial first walks through setting up the PyTest development environment: importing the required Python libraries, installing the latest version of PyTest, and creating a clean project structure. This lays a solid foundation for the test code that follows and keeps the project components organized, making development and testing more efficient and maintainable.
⚙️ **PyTest Configuration and a Custom Plugin**: Through the `pytest.ini` file, the tutorial shows how to configure PyTest's default options, test paths, and custom markers to control exactly how tests are discovered and filtered. The custom plugin in `conftest.py` is also explained in detail: it tracks test results (passed, failed, skipped), adds a custom command-line option (`--runslow`), and provides session-scoped fixtures, greatly increasing PyTest's flexibility and extensibility.
🧪 **Core Module and Test Case Design**: The tutorial builds a calculation module with basic math functions (addition, division, moving average) and a `Vector` class, together with the corresponding test cases. These tests make deliberate use of PyTest's parameterization (`@pytest.mark.parametrize`), expected failures (`@pytest.mark.xfail`), and equality comparisons for custom objects, thoroughly verifying the core business logic.
🌐 **I/O Utilities and Mocked API Tests**: To cover a wider range of scenarios, the tutorial also implements JSON read/write utilities and a mocked API function. The corresponding tests use the `@pytest.mark.io` and `@pytest.mark.api` markers together with built-in PyTest tools such as `tmp_path`, `capsys` (output capture), and `monkeypatch` (environment patching) to exercise file I/O and external API calls, enabling reliable integration-style tests even without a real service.
⏳ **Advanced Fixtures and Test Control**: Using the `@pytest.mark.slow` marker and the `event_log` and `fake_clock` fixtures, the tutorial demonstrates how to manage long-running tests. The `fake_clock` fixture patches time via `monkeypatch`, allowing precise control over how time passes inside a test, while `event_log` records events during the test. This makes time-sensitive code easy to test and also shows how custom fixtures can manage and share test state, and how command-line options such as `--runslow` control which tests run.
In this tutorial, we explore the advanced capabilities of PyTest, one of the most powerful testing frameworks in Python. We build a complete mini-project from scratch that demonstrates fixtures, markers, plugins, parameterization, and custom configuration. We focus on showing how PyTest can evolve from a simple test runner into a robust, extensible system for real-world applications. By the end, we understand not just how to write tests, but how to control and customize PyTest's behavior to fit any project's needs.
```python
import sys, subprocess, os, textwrap, pathlib, json

subprocess.run([sys.executable, "-m", "pip", "install", "-q", "pytest>=8.0"], check=True)

root = pathlib.Path("pytest_advanced_tutorial").absolute()
if root.exists():
    import shutil
    shutil.rmtree(root)
(root / "calc").mkdir(parents=True)
(root / "app").mkdir()
(root / "tests").mkdir()
```
We begin by setting up our environment, importing essential Python libraries for file handling and subprocess execution. We install the latest version of PyTest to ensure compatibility and then create a clean project structure with folders for our main code, application modules, and tests. This gives us a solid foundation to organize everything neatly before writing any test logic.
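As a quick sanity check (a minimal sketch, assuming the folders above were just created in the current working directory), we can confirm the installed PyTest version and list the empty project tree before adding any files:

```python
# Sanity check: report the pytest version and list the freshly created folders.
import pathlib
import subprocess
import sys

root = pathlib.Path("pytest_advanced_tutorial").absolute()
subprocess.run([sys.executable, "-m", "pytest", "--version"], check=True)
for path in sorted(root.rglob("*")):
    print(path.relative_to(root))
```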
(root / "pytest.ini").write_text(textwrap.dedent("""[pytest]addopts = -q -ra --maxfail=1 -m "not slow"testpaths = testsmarkers = slow: slow tests (use --runslow to run) io: tests hitting the file system api: tests patching external calls""").strip()+"\n")(root / "conftest.py").write_text(textwrap.dedent(r'''import os, time, pytest, jsondef pytest_addoption(parser): parser.addoption("--runslow", action="store_true", help="run slow tests")def pytest_configure(config): config.addinivalue_line("markers", "slow: slow tests") config._summary = {"passed":0,"failed":0,"skipped":0,"slow_ran":0}def pytest_collection_modifyitems(config, items): if config.getoption("--runslow"): return skip = pytest.mark.skip(reason="need --runslow to run") for item in items: if "slow" in item.keywords: item.add_marker(skip)def pytest_runtest_logreport(report): cfg = report.config._summary if report.when=="call": if report.passed: cfg["passed"]+=1 elif report.failed: cfg["failed"]+=1 elif report.skipped: cfg["skipped"]+=1 if "slow" in report.keywords and report.passed: cfg["slow_ran"]+=1def pytest_terminal_summary(terminalreporter, exitstatus, config): s=config._summary terminalreporter.write_sep("=", "SESSION SUMMARY (custom plugin)") terminalreporter.write_line(f"Passed: {s['passed']} | Failed: {s['failed']} | Skipped: {s['skipped']}") terminalreporter.write_line(f"Slow tests run: {s['slow_ran']}") terminalreporter.write_line("PyTest finished successfully " if s["failed"]==0 else "Some tests failed ")@pytest.fixture(scope="session")def settings(): return {"env":"prod","max_retries":2}@pytest.fixture(scope="function")def event_log(): logs=[]; yield logs; print("\\nEVENT LOG:", logs)@pytest.fixturedef temp_json_file(tmp_path): p=tmp_path/"data.json"; p.write_text('{"msg":"hi"}'); return p@pytest.fixturedef fake_clock(monkeypatch): t={"now":1000.0}; monkeypatch.setattr(time,"time",lambda: t["now"]); return t'''))
We now create our PyTest configuration and plugin files. In `pytest.ini`, we define markers, default options, and test paths to control how tests are discovered and filtered. In `conftest.py`, we implement a custom plugin that tracks passed, failed, and skipped tests, adds a `--runslow` option, and provides fixtures for reusable test resources. This helps us extend PyTest's core behavior while keeping our setup clean and modular.
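Note that the session-scoped `settings` fixture defined in `conftest.py` is not exercised by the tests we add later; as an illustration, a hypothetical extra file such as `tests/test_settings.py` (not part of the generated project) could consume it like this:

```python
# Hypothetical tests/test_settings.py: consumes the session-scoped `settings`
# fixture from conftest.py and tags the test with the custom `api` marker.
import pytest

@pytest.mark.api
def test_settings_defaults(settings):
    # The same dict instance is shared by every test in the session.
    assert settings["env"] == "prod"
    assert settings["max_retries"] >= 1
```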
(root/"calc"/"__init__.py").write_text(textwrap.dedent('''from .vector import Vectordef add(a,b): return a+bdef div(a,b): if b==0: raise ZeroDivisionError("division by zero") return a/bdef moving_avg(xs,k): if k<=0 or k>len(xs): raise ValueError("bad window") out=[]; s=sum(xs[:k]); out.append(s/k) for i in range(k,len(xs)): s+=xs[i]-xs[i-k]; out.append(s/k) return out'''))(root/"calc"/"vector.py").write_text(textwrap.dedent('''class Vector: __slots__=("x","y","z") def __init__(self,x=0,y=0,z=0): self.x,self.y,self.z=float(x),float(y),float(z) def __add__(self,o): return Vector(self.x+o.x,self.y+o.y,self.z+o.z) def __sub__(self,o): return Vector(self.x-o.x,self.y-o.y,self.z-o.z) def __mul__(self,s): return Vector(self.x*s,self.y*s,self.z*s) __rmul__=__mul__ def norm(self): return (self.x**2+self.y**2+self.z**2)**0.5 def __eq__(self,o): return abs(self.x-o.x)<1e-9 and abs(self.y-o.y)<1e-9 and abs(self.z-o.z)<1e-9 def __repr__(self): return f"Vector({self.x:.2f},{self.y:.2f},{self.z:.2f})"'''))
We now build the core calculation module for our project. In the `calc` package, we define simple mathematical utilities, including addition, division with error handling, and a moving-average function, to demonstrate logic testing. Alongside this, we create a `Vector` class that supports arithmetic operations, equality checks, and norm computation, which makes it a natural candidate for testing custom objects and comparisons with PyTest.
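For floating-point results such as `Vector.norm()`, `pytest.approx` is the idiomatic comparison helper; the following sketch, a hypothetical extra test that is not among the files generated in this tutorial, shows how it complements the tolerance-based `__eq__` we defined:

```python
# Hypothetical extra test: compare floating-point norms with pytest.approx
# rather than exact equality.
import pytest
from calc.vector import Vector

def test_norm_with_approx():
    v = Vector(3, 4, 0)
    assert v.norm() == pytest.approx(5.0)
    # __rmul__ lets us scale from the left; the norm scales linearly.
    assert (2 * v).norm() == pytest.approx(10.0, rel=1e-9)
```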
(root/"app"/"io_utils.py").write_text(textwrap.dedent('''import json, pathlib, timedef save_json(path,obj): path=pathlib.Path(path); path.write_text(json.dumps(obj)); return pathdef load_json(path): return json.loads(pathlib.Path(path).read_text())def timed_operation(fn,*a,**kw): t0=time.time(); out=fn(*a,**kw); t1=time.time(); return out,t1-t0'''))(root/"app"/"api.py").write_text(textwrap.dedent('''import os, time, randomdef fetch_username(uid): if os.environ.get("API_MODE")=="offline": return f"cached_{uid}" time.sleep(0.001); return f"user_{uid}_{random.randint(100,999)}"'''))(root/"tests"/"test_calc.py").write_text(textwrap.dedent('''import pytest, mathfrom calc import add,div,moving_avgfrom calc.vector import Vector@pytest.mark.parametrize("a,b,exp",[(1,2,3),(0,0,0),(-1,1,0)])def test_add(a,b,exp): assert add(a,b)==exp@pytest.mark.parametrize("a,b,exp",[(6,3,2),(8,2,4)])def test_div(a,b,exp): assert div(a,b)==exp@pytest.mark.xfail(raises=ZeroDivisionError)def test_div_zero(): div(1,0)def test_avg(): assert moving_avg([1,2,3,4,5],3)==[2,3,4]def test_vector_ops(): v=Vector(1,2,3)+Vector(4,5,6); assert v==Vector(5,7,9)'''))(root/"tests"/"test_io_api.py").write_text(textwrap.dedent('''import pytest, osfrom app.io_utils import save_json,load_json,timed_operationfrom app.api import fetch_username@pytest.mark.iodef test_io(temp_json_file,tmp_path): d={"x":5}; p=tmp_path/"a.json"; save_json(p,d); assert load_json(p)==d assert load_json(temp_json_file)=={"msg":"hi"}def test_timed(capsys): val,dt=timed_operation(lambda x:x*3,7); print("dt=",dt); out=capsys.readouterr().out assert "dt=" in out and val==21@pytest.mark.apidef test_api(monkeypatch): monkeypatch.setenv("API_MODE","offline") assert fetch_username(9)=="cached_9"'''))(root/"tests"/"test_slow.py").write_text(textwrap.dedent('''import time, pytest@pytest.mark.slowdef test_slow(event_log,fake_clock): event_log.append(f"start@{fake_clock['now']}") fake_clock["now"]+=3.0 event_log.append(f"end@{fake_clock['now']}") assert len(event_log)==2'''))
We add lightweight app utilities for JSON I/O and a mocked API to exercise real-world behaviors without external services. We write focused tests that use parametrization, xfail, markers, `tmp_path`, `capsys`, and `monkeypatch` to validate logic and side effects. We include a slow test wired to our `event_log` and `fake_clock` fixtures to demonstrate controlled timing and shared fixture state.
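Because the I/O and API tests carry the custom `io` and `api` markers declared in `pytest.ini`, they can also be run in isolation with PyTest's `-m` marker expressions; a minimal sketch, assuming the project root created above, looks like this:

```python
# Run only the marker-selected subsets of the suite using -m expressions.
import pathlib
import subprocess
import sys

root = pathlib.Path("pytest_advanced_tutorial").absolute()
subprocess.run([sys.executable, "-m", "pytest", str(root), "-m", "io"], check=False)
subprocess.run([sys.executable, "-m", "pytest", str(root), "-m", "api"], check=False)
```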
print(" Project created at:", root)print("\n RUN #1 (default, skips @slow)\n")r1=subprocess.run([sys.executable,"-m","pytest",str(root)],text=True)print("\n RUN #2 (--runslow)\n")r2=subprocess.run([sys.executable,"-m","pytest",str(root),"--runslow"],text=True)summary_file=root/"summary.json"summary={ "total_tests":sum("test_" in str(p) for p in root.rglob("test_*.py")), "runs": ["default","--runslow"], "results": ["success" if r1.returncode==0 else "fail", "success" if r2.returncode==0 else "fail"], "contains_slow_tests": True, "example_event_log":["start@1000.0","end@1003.0"]}summary_file.write_text(json.dumps(summary,indent=2))print("\n FINAL SUMMARY")print(json.dumps(summary,indent=2))print("\n Tutorial completed — all tests & summary generated successfully.")
We now run our test suite twice: first with the default configuration that skips slow tests, and then again with the `--runslow` flag to include them. After both runs, we generate a JSON summary containing test outcomes, the total number of test files, and a sample event log. This final summary gives us a clear snapshot of our project's testing health, confirming that both runs completed successfully from start to finish.
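As a minimal sketch of how this report could feed a CI gate, a follow-up step might load `summary.json` and fail the build whenever either run did not succeed:

```python
# Minimal CI-style gate: read the generated summary and exit nonzero on failure.
import json
import pathlib
import sys

summary_path = pathlib.Path("pytest_advanced_tutorial") / "summary.json"
summary = json.loads(summary_path.read_text())
failed_runs = [run for run, result in zip(summary["runs"], summary["results"]) if result != "success"]
if failed_runs:
    print("Failing pytest runs:", failed_runs)
    sys.exit(1)
print("All pytest runs succeeded:", summary["runs"])
```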
In conclusion, we see how PyTest helps us test smarter, not harder. We design a plugin that tracks results, use fixtures to manage state, and control slow tests with a custom command-line option, all while keeping the workflow clean and modular. We finish with a detailed JSON summary that demonstrates how easily PyTest can integrate with modern CI and analytics pipelines. With this foundation, we can confidently extend PyTest further, adding coverage, benchmarking, or even parallel execution for large-scale, professional-grade testing.
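As a pointer for those extensions, the sketch below assumes the third-party `pytest-cov` and `pytest-xdist` plugins, which this tutorial does not install; coverage measurement and parallel execution then bolt onto the same suite without changing any test code:

```python
# Optional follow-up: install pytest-cov and pytest-xdist, then rerun the suite
# with coverage for the calc/app packages and tests spread across CPU cores.
import pathlib
import subprocess
import sys

root = pathlib.Path("pytest_advanced_tutorial").absolute()
subprocess.run([sys.executable, "-m", "pip", "install", "-q", "pytest-cov", "pytest-xdist"], check=True)
subprocess.run(
    [sys.executable, "-m", "pytest", str(root), "--runslow", "--cov=calc", "--cov=app", "-n", "auto"],
    check=False,
)
```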
The post A Coding Implementation of Advanced PyTest to Build Customized and Automated Testing with Plugins, Fixtures, and JSON Reporting appeared first on MarkTechPost.