Some premises about my relationship with unit testing:
I like test-driven development.
I like driving out individual object design with small, isolated (e.g. no database) unit tests.
I think of these unit tests as a design aid, full stop. Any help they provide with preventing regressions is gravy.
I treat unit tests as disposable. Once they have served their purpose as design aids, I will only keep them around so long as they aren't getting in the way.
These four premises are strongly interconnected. Take away #1 (test-first), and the tests are no longer a design aid and none of the other points hold. Take away #2 (isolation) and we get into integration-test territory, where tests have a much better chance of real regression-prevention value. Take away #3 (design focus) and the other points lose their justification. Take away #4 (disposability) and I spend all my time updating tests broken by code changes.
This makes it easy for me to find myself at cross purposes with others in discussions about unit testing, because often they come into the conversation not sharing my premises. For instance, if your focus in unit testing is on preventing regressions, you might well decide isolated unit tests are more trouble than they are worth. I can't really argue with that.
A recent conversation on this topic inspired the thought that, from the driving-design perspective, maybe unit tests are really just a crutch for languages that don't have a sufficiently strong REPL experience. While I don't think this is strictly true, the perspective shines a useful light on what we're really trying to accomplish with unit tests. I almost think that what I really want out of unit tests is the ability to fiddle with live objects in a REPL until they do what I want, and then save that REPL session directly as a test for convenience in flagging API changes.
That conversation also spawned the idea of immutable unit tests: unit tests that can only be deleted, never updated. A bit like TCR. I wonder if this would place some helpful incentives on the test-writing process.
The other day we dealt with code coverage and gnarly conditionals. I promised to show you a way to test them properly.
THERE IS NONE.
Ha, what a bad joke. But the real answer might not be better, depending on your point of view.
What you have to do is create a table.
(A || B) && C
This is our conditional.
| m | A | B | C | RESULT | MC/DC |
|---|---|---|---|--------|-------|
| 0 | T | T | T | T      |       |
| 1 | T | F | T | T      | x     |
| 2 | F | F | T | F      | x     |
| 3 | F | T | T | T      | x     |
| 4 | T | T | F | F      |       |
| 5 | T | F | F | F      | x     |
| 6 | F | T | F | F      |       |
| 7 | F | F | F | F      |       |

# m is the test case
# A, B, C are the atomic parts of the conditional
# RESULT is the result of evaluating the conditional
# MC/DC marks the cases worth testing
For three terms in a conditional there are 8 possible cases (2^3). You don't need to test every one. You have to find those cases where switching a single term (A, B, or C) flips the RESULT. You take those cases and write tests for them. You can ignore the rest, as they don't bring you any new information. For our example these are the test cases 1, 2, 3, and 5; I marked them with an x.
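Sketched as executable tests (Ruby here; the `decision` method and its name are my own invention for illustration), the four marked cases might look like this:

```ruby
# The decision from the table above: (A || B) && C
def decision(a, b, c)
  (a || b) && c
end

# Cases 1 and 2 differ only in A, and the result flips with it:
raise unless decision(true,  false, true)  == true   # case 1
raise unless decision(false, false, true)  == false  # case 2
# Cases 2 and 3 differ only in B, and the result flips with it:
raise unless decision(false, true,  true)  == true   # case 3
# Cases 1 and 5 differ only in C, and the result flips with it:
raise unless decision(true,  false, false) == false  # case 5

puts "all four MC/DC cases pass"
```

Each pair of cases shows that one term, varied on its own, changes the outcome.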
The general rule of thumb is that you can solve this with n + 1 test cases where n is the number of terms.
This technique is called Modified Condition/Decision Coverage, or MC/DC for short. I like this name; it's easy to remember 🤘.
It gets harder when one term of the conditional is used more than once (a coupled term). Depending on the decision statement in the code, it may not be possible to vary the value of a coupled term such that it alone causes the decision outcome to change. You can deal with this by applying MC/DC only to uncoupled atomic conditions, or you analyse every case where coupling occurs one by one. Then you know which test cases to use.
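To make coupling concrete, here is a small Ruby sketch (the decision itself is invented for illustration):

```ruby
# A coupled decision: the atomic term `a` occurs twice.
def coupled_decision(a, b, c)
  (a && b) || (!a && c)
end

# With b and c both true, toggling `a` changes both sub-terms at once
# and the overall result does not flip, so this pair of cases cannot
# demonstrate that `a` independently affects the outcome. Such pairs
# have to be analysed by hand.
raise unless coupled_decision(true,  true, true) == true
raise unless coupled_decision(false, true, true) == true
```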
You'll have to do this in the aerospace software industry or wherever else you deal with safety-critical systems.
If you read this far: Congratulations. This is one of the more in-depth topics of testing software. You deserve a 👏 for learning about these things!
During the last week, I had two discussions about code coverage. Code coverage is the metric of how many lines of code are covered by your automated test suite. Many test frameworks have built-in ways to measure it; other times you have to install a tool yourself. When you run your tests, you then see which lines are not covered by a test, meaning that no test was run in which that line of code was executed.
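As a concrete sketch of the mechanics (Ruby's standard library ships a Coverage module; the file name and code under test are invented here):

```ruby
require "coverage"

# Coverage measurement has to be started before the code under
# test is loaded.
Coverage.start

# A tiny throwaway file standing in for real application code.
path = File.expand_path("greeter.rb")
File.write(path, <<~RUBY)
  def greet(name)
    if name.empty?
      "Hello, stranger!"
    else
      "Hello, " + name
    end
  end
RUBY

require path
greet("Ada")  # our only "test" exercises just the else branch

# One hit count per line; nil marks lines with nothing executable.
lines = Coverage.result.select { |file, _| file.end_with?("greeter.rb") }.values.first
puts lines.inspect  # the 0 on line 3 reveals the untested branch
```

Dedicated tools like SimpleCov wrap this same mechanism in friendlier reports.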
When you reach 100% code coverage, what then? Are you done? Could you guarantee that there are absolutely no bugs in your code?
If you are tempted to say "yes", or even "maybe?", then let me tell you: you are wrong.
Consider this piece of code.
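The original snippet didn't survive into this text; a minimal Ruby reconstruction of the kind of method described (the method and the evaluated name are my invention) might be:

```ruby
def alert_crew(emergency)
  # Line coverage marks this entire line as executed as soon as the
  # trailing `if` guard is evaluated, even when `emergency` is false
  # and the eval itself never runs.
  eval("send_alert_to_all_stations") if emergency
end

alert_crew(false)  # the line now counts as covered; the eval never ran
```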
If you write a unit test for this method, the line starting with eval will be interpreted, because of the if emergency at its end. The line is thus counted as covered. But the code on that line before the guard is never executed, let alone tested.
Admittedly, this is a very trivial example that I made up. In reality, there are some more profound things to consider.
If you have complex conditionals you might need a logic table where you compare all possible combinations of the atomic parts of the conditional.
You cannot possibly evaluate this in your head and know whether you checked every possible, sensible combination. Yet once you cover that line you are at 100% coverage and can go home, right?
[…] we do have formal rules that we should obey when writing code. A team has rules, and new team members need to learn them before trying to write any code.
That's what I wrote yesterday. It's my email so I can write whatever I think is correct. You'll let me know through your answers if I am wrong.
My friend Tino answered on Friday and asked whether a university degree or certificates might function as a driver's license. And that is true to a certain degree. I am getting a new certificate these days as well. I hope to complete the exam on Wednesday (ISTQB Advanced Level — Technical Test Analyst).
The obvious difference to a driver's license? I am not legally required to obtain one before I can start writing code. Tino also said that he'd find it interesting to be (self-)tested on current web standards and best practices. I do believe these tests are valuable. If I get around to creating one, I'll let you know.
Back to the beginning of the email: why do rules matter to a team? Developers have their own style of writing code. Even if there are rules and certain regulations you have to follow, developers still find ways to write code in their unique style. And that's a good thing; it would be boring otherwise.
Still, this style has to obey the rules. Here's why:
The code won't get too complex, because your static analysis tools tell you when the cyclomatic complexity is too high.
Classes and modules will have low coupling and high cohesion, which leads to code that is easier to test and more reusable.
If your tooling reports an error when explanatory comments are missing, you can make sure that developers take some extra time to make the code easier to understand. Other rules, like naming conventions for variables, methods, classes, and modules, serve the same goal.
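For Ruby projects, one way to encode such rules is a RuboCop configuration; this fragment is only a sketch, and the thresholds are arbitrary choices of mine:

```yaml
# .rubocop.yml -- example rules; pick thresholds that fit your team
Metrics/CyclomaticComplexity:
  Max: 7            # flag methods that have grown too branchy
Style/Documentation:
  Enabled: true     # require a top-level comment on classes and modules
Naming/MethodName:
  EnforcedStyle: snake_case
```

Run in CI, a file like this turns the team's rules into automatic feedback instead of review-time debates.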
In short: rules help your team write code that is maintainable and carries little technical debt. This reduces the total cost of ownership. If you only look at the cost of writing the code and delivering the software, stricter rules might make it more expensive. Over the complete lifecycle of a software product, though, the total cost will be lower because of better maintainability and fewer defects.
This is already getting long. See you tomorrow with even more thoughts on this topic.
Sorry for not writing yesterday. My schedule did not allow it. I refrained from posting anything since it wouldn't have been of sufficient quality for you.
On Monday I tried something new. I recorded the content of the newsletter as a video and posted it to YouTube and LinkedIn. While it didn't exactly blow up, I am happy and excited about these changes. I want to try to stick with this.
Today I switched things up: I first recorded the video, which I've linked below, and then wrote the newsletter.
I wrote to you about test management yesterday. My goal was to give you an idea of why test management (TM) might be something your projects could benefit from. What I did not tell you: you won't get there just by using a piece of software. There is a lot more to TM than meets the eye. But I won't use your precious inbox for that.
You don't need test management. You develop web applications. Your job is not rocket science. It's demanding, and you are doing a fantastic job shipping features and making customers happy. Who needs test management to do that, right?