Blog about software - ordep.dev, October 2

First of all, why testing?

Testing takes time, just like structural analysis takes time. Both activities ensure the quality of the end product. It’s time for software developers to take up the mantle of responsibility for what they produce. Testing alone isn’t sufficient, but it is necessary. Testing is the engineering rigor of software development. – Neal Ford

Software engineering is much more than coding. It is “an engineering discipline that is concerned with all aspects of software production”. As professional Software Engineers, we’re responsible for the code we write. We should not release untested code, and, what I think is even more important, we should not ask our peers to perform code reviews without delivering tests that support the produced code. It’s our job to reduce the number of bugs found after the development phase. No one is perfect, and neither is our code, but what matters most is a software craftsmanship attitude of making sure that our code works properly.

Types of Software Projects

Before jumping into testing culture and best practices, let’s reflect on the types of software projects that most of us end up working on during our careers.

1. Projects without tests

How can we build projects without any kind of automated tests? Are we talking about a lack of professionalism, a lack of skills or knowledge, or a work culture based on pressure, where there is no time to test software? It can be just one of them, or it can be all of them. It’s a really hard question to answer.

The point here is that we’re talking about a team without a testing culture, one that delivers features without any kind of testing effort to support the development phase or to assure a certain level of software quality.

2. Projects with wrongly designed tests

Testing is hard. There are several pitfalls in testing that often lead to false positives, which in turn lead to fewer developers running the test suites. I will focus on two specific pitfalls: coupling and performance.

We tend to assume that exactly what an implementation does is precisely what we want to test for. Our tests are often coupled to the implementation, where sometimes the observed behaviour is incidental and has no bearing on the desired functionality. Changing the implementation to match the actual specification may then cause tests to fail. When we have correct implementations and failing tests, we start to develop uncertainty and doubt about our test suite.

Another pitfall in testing, caused by bad design, is when we don’t run the test suite because of the length of time it takes. If we are working against a deadline, naturally, people will start cutting corners. And by cutting corners, I mean releasing without running the test suite. One viable solution is to split the test suite into two or more profiles. I tend to write both unit (fast) and integration (slow) tests. I want to run the unit tests while I’m writing code, and if the feedback is immediate, I’ll start to develop a reliable relationship with my test suite.
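One common way to implement this split (a sketch using pytest markers; the article doesn’t prescribe a tool, and the function under test here is hypothetical) is to mark the slow tests so the fast profile can exclude them:

```python
# Splitting a suite into fast (unit) and slow (integration) profiles
# with a pytest marker. parse_price is a made-up function under test.

import time

import pytest


def parse_price(raw: str) -> float:
    """Parse a price string like ' $19.99 ' into a float."""
    return float(raw.strip().lstrip("$"))


def test_parse_price_strips_currency_symbol():
    # Unit test: no I/O, runs in microseconds.
    assert parse_price(" $19.99 ") == 19.99


@pytest.mark.integration
def test_price_feed_end_to_end():
    # Integration test: would talk to slow external resources,
    # simulated here with a sleep. Excluded from the fast profile.
    time.sleep(0.1)
    assert parse_price("$5.00") == 5.0
```

With the `integration` marker registered in `pytest.ini`, the fast profile is `pytest -m "not integration"` and the full suite is plain `pytest`, so running the unit tests on every change stays cheap.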

We must treat testing code as we treat production code. If we have wrongly designed test suites that are slow to run and provide false positives, we’re just wasting our time.

Types of Testing

1. Testing after the implementation

I don’t like this approach, but let’s face it: it contributes to high code coverage and project quality, and it can be used as living documentation, if well designed. The problem with this approach is that we tend to design the test cases to succeed, since we write them after implementing the feature. Also, most of the time these tests cover many things at once, so they tend to get too big, too complex, and harder to maintain.
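To illustrate that last point, here is a sketch (with a hypothetical signup function, not from the article) of one over-broad after-the-fact test next to the focused tests it should be split into:

```python
# A made-up function under test: register a user by email.
def register(email: str) -> dict:
    if "@" not in email:
        raise ValueError("invalid email")
    return {"email": email.lower(), "active": True}


# Too big: asserts several behaviours at once, so one failure
# hides which behaviour actually broke.
def test_register_everything():
    user = register("Ana@Example.com")
    assert user["email"] == "ana@example.com"
    assert user["active"] is True


# Focused: one observable behaviour per test, one reason to fail.
def test_register_lowercases_email():
    assert register("Ana@Example.com")["email"] == "ana@example.com"


def test_register_marks_user_active():
    assert register("a@b.com")["active"] is True
```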

2. Tests that support development (TDD)

Actually, I don’t know how to code without writing a test first. It makes us reason about our design choices, it forces us to change implementation code only to make a test pass, it contributes to high code coverage, and it ensures that our code compiles all the time. We can write these types of tests when we’re implementing a brand new feature, or when we’re refactoring some existing code.

Refactoring doesn’t change the current behaviour. So, if we’re missing a test that covers the piece of code we’re refactoring, writing one should be the first thing to do. Once the test is green, we can start refactoring. If, after we change the implementation, the test fails, we’re not refactoring. We’re rewriting the existing code into something different.
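The red-green loop described here can be sketched in a few lines (the `slugify` function is a hypothetical example, not something from the article):

```python
import re


# Step 1 (red): write the test first. Before slugify exists,
# running this test fails, which is the point.
def test_slugify_lowercases_and_joins_words():
    assert slugify("Hello World") == "hello-world"


# Step 2 (green): write just enough implementation to make the
# test pass.
def slugify(title: str) -> str:
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)


# Step 3 (refactor): with the test green, reshape the code freely;
# if the test ever goes red, we've changed behaviour, not refactored.
```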

Test-Driven Development brings lots of advantages to the team and to the project. Unfortunately, most people can’t understand that.

3. Tests that reproduce reported bugs

This is one of my favorite testing techniques. Unfortunately, the common scenario when a bug is reported is to open the IDE, pick some critical lines of code, and drop breakpoint after breakpoint. When all of them are set, we’re ready to debug.

After hours of debugging, the bug is fixed. Sometimes, the bug is fixed without changing any line of code. In these situations, what was the contribution to the codebase? None. No changes. No tests.

The optimal workflow is to write a test that reproduces the reported bug. We should start with a broader test scenario. Is the test green? Then start narrowing the scenario until the bug is reproduced. With this approach, we make sure that every change we make to the source code is covered by a test that was failing. If we can’t reproduce the bug, at least, at the end of the day, the codebase is covered with more real-life scenarios.
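A sketch of this narrowing workflow (the reported bug and the `cart_total` function are hypothetical): imagine the report is “totals are wrong when the cart contains a refunded line”, and an earlier version dropped negative amounts.

```python
# Hypothetical fixed implementation. The buggy version filtered out
# negative amounts; the fix keeps them in the sum.
def cart_total(items: list[float]) -> float:
    return sum(amount for amount in items)


def test_cart_total_broad_scenario():
    # Broad scenario from the report: green, so the bug is not here.
    assert cart_total([10.0, 2.5]) == 12.5


def test_cart_total_with_refund_line():
    # Narrowed scenario that reproduced the bug before the fix;
    # it now stays in the suite as a regression test.
    assert cart_total([10.0, -3.0]) == 7.0
```

Even if the narrowing never turns red, both tests remain as real-life scenarios covering the code.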

It’s our job to build a testing culture. But how?

One of the rules for succeeding with a testing culture is to have everyone on board with accepting that testing is part of our job. If someone can write new features, it’s their responsibility to write tests that cover those changes. If we can’t have everyone on board with this simple responsibility, I’ll have to admit that it will be almost impossible to change anyone’s mind.

1. Every change must have tests

Changes that will end up in production must be covered with tests. It’s as simple as that. We can’t afford to have critical bugs in testing or production environments being reported by Quality Assurance Analysts, Product Owners, or even customers. We just can’t. It’s an embarrassing and costly situation.

2. Bug fixing must have tests

Sometimes we spend hours trying to reproduce reported bugs and reach the end of the day with nothing. Our search was inconclusive, and we close the report without a single change to the codebase. The next month, the same bug is reported, and another developer spends another day searching for a bug that can’t be reproduced. If we write at least one test to cover the reported scenario, we can make sure that our program, for that given scenario, does not contain that bug.

3. Testing code must be reviewed

We should treat testing code as production code. Why? We like to show our problem-solving and software design skills to our peers, right? So why should we write not-so-beautiful testing code? I will not dwell too much on code reviews (that’s a topic for another article), but it is really important to be critical of the tests’ code quality, all while making sure that we’re asserting the critical points and, most of all, contributing to a blazing fast test suite. A well-designed test case or suite is much easier to read, understand, and evolve.

4. Test suites must be consistent

If we want to build a solid testing culture on our team, we must share the same values, the same practices, and the same tools. I’m not talking about code formatting. That topic should be covered by a static analysis tool and a style checking policy shared across all team projects.

Every software engineer has their own habits of writing and designing code, and their own favorite tooling. Writing testing code is no exception. Some people like to use fluent assertions, others not so much. The key point here is, as a team, to reach a consensus, choose the appropriate tools for the job, and guarantee that everyone is on board and happy to use them.
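As a concrete (illustrative) example of the kind of choice a team needs to settle, here are two real assertion styles in Python that express the same check; the `normalize` function is made up for the example:

```python
import unittest


# A made-up function under test: dedupe and normalize tag names.
def normalize(tags: list[str]) -> list[str]:
    return sorted({t.strip().lower() for t in tags})


# Style A: bare asserts, as pytest runs them with rich failure diffs.
def test_normalize_plain_assert():
    assert normalize(["Go ", "go", "Rust"]) == ["go", "rust"]


# Style B: unittest's method-based assertions.
class NormalizeTest(unittest.TestCase):
    def test_normalize(self):
        self.assertEqual(normalize(["Go ", "go", "Rust"]), ["go", "rust"])
```

Neither style is wrong; what hurts is mixing both across a codebase, so the team should pick one and enforce it.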

Regarding coding practices, we should use the same ones we use for production code.

All of these choices must be consistent, and we can enforce them through code reviews and proper automated static analysis and style checking tools.

So, what’s the point?

I hope that, after reading this, you’ll be able to ensure that everyone writes meaningful and valuable tests for every single change in your codebase.

Remember that testing itself takes time, and creating a testing culture is nodifferent. Testing alone isn’t sufficient, but it is necessary.

Be a software craftsman, evangelize your best practices!
