I am working on my first React component library for my current client. Doing lots of things for the first time. Running into lots of roadblocks and things don’t really work like I expect them to. This might be simple, but it’s not easy.

This morning I took some detours on my way to my client by bike. I rode through some parks, chose small streets and enjoyed myself on my @8barbikes adventure bike. I'm very much in love.

From my friend Ian, I learned the value of putting a clear; in front of your test runs in the terminal. I wasn't aware of the difference it makes when you run your tests many times.

Inspecting changes locally before pushing

When you work on a branch, you will run into the situation that you would like to push your changes to the remote repository. CI will then pick up your changes and run the linting and code-quality checks on them. Afterwards, you will see whether you improved the quality. But perhaps some new violations crept into the code? Happens to all of us!

I usually like to check whether any new issues would come up on CI before I push; this lets me fix them first. The checks I run locally depend on the kind of work that I do. Lately that's a lot of Ruby on Rails again — which is great. I love that framework and the language. To grade my code, I use Rubocop, Reek and Flay. If you run their respective commands on your repository, they will check the whole code base. That might be fine if you didn't have any issues before. But these days I join teams that work on legacy projects, so it is rare that there are no problems with the code. If I ran the commands just so, I would get a long list and couldn't possibly spot the issues that I introduced with my changes. Luckily, there is Git and some “command-line fu” that can help us here:

git fetch && git diff-tree -r --no-commit-id --name-only master@\{u\} HEAD | xargs ls -1 2>/dev/null | grep '\.rb$' | xargs rubocop

This command fetches the current state from the remote and diffs your branch against the upstream master branch. It then runs Rubocop on the changed Ruby files. In my ~/.aliases.local file I added three aliases, one for each linter.

# Code Quality
alias rubocop-changes="git fetch && git diff-tree -r --no-commit-id --name-only master@\{u\} HEAD | xargs ls -1 2>/dev/null | grep '\.rb$' | xargs rubocop"
alias reek-changes="git fetch && git diff-tree -r --no-commit-id --name-only master@\{u\} HEAD | xargs ls -1 2>/dev/null | grep '\.rb$' | xargs reek"
alias flay-changes="git fetch && git diff-tree -r --no-commit-id --name-only master@\{u\} HEAD | xargs ls -1 2>/dev/null | grep '\.rb$' | xargs flay"

I am still working on a way to call just one command and have all three linters run. That doesn't work yet, probably for exit-code reasons: when one linter finds issues, the chain stops.
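One approach that should work is a small shell function instead of a fourth alias: run each linter in turn, remember whether any of them failed, and only report the failure at the end. This is just a sketch (quality-changes is a made-up name, and I assume the same shell setup as the aliases above):

# Runs all three linters over the changed files; reports failure
# at the end instead of stopping at the first linter that complains.
quality-changes() {
  local failed=0
  git fetch
  for linter in rubocop reek flay; do
    git diff-tree -r --no-commit-id --name-only 'master@{u}' HEAD \
      | xargs ls -1 2>/dev/null | grep '\.rb$' | xargs "$linter" || failed=1
  done
  return $failed
}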

These simple commands offer a convenient way to find local issues and correct them before pushing to CI.

Avdi Grimm's view on deleting tests

A few weeks ago I wrote about deleting your tests. Yesterday I received the weekly email from Avdi Grimm, where he touches on this subject.

Some premises about my relationship with unit testing:

  1. I like test-driven development.
  2. I like driving out individual object design with small, isolated (e.g. no database) unit tests.
  3. I think of these unit tests as a design aid, full stop. Any help they provide with preventing regressions is gravy.
  4. I treat unit tests as disposable. Once they have served their purpose as design aids, I will only keep them around so long as they aren't getting in the way.

These four premises are strongly interconnected. Take away #1 (test first), and tests are no longer a design aid and none of the other points are valid. Take away #2 (isolation) and we get into integration test territory where there's a much higher possibility of tests having regression-prevention value. Take away #3 (design focus) and the other points lose their justification. Take away #4 (disposability) and I spend all my time updating tests broken by code changes.

This makes it easy for me to find myself at cross purposes with others in discussions about unit testing, because often they come into the conversation not sharing my premises. For instance, if your focus in unit testing is on preventing regressions, you might well decide isolated unit tests are more trouble than they are worth. I can’t really argue with that.

A recent conversation on this topic inspired the thought that, from the driving-design perspective, maybe unit tests are really just a crutch for languages that don't have a sufficiently strong REPL experience. While I don't think this is strictly true, I think the perspective shines a useful light on what we're really trying to accomplish with unit tests. I almost think that what I really want out of unit tests is the ability to fiddle with live objects in a REPL until they do what I want, and then save that REPL session directly to a test for convenience in flagging API changes.

That conversation also spawned the idea of immutable unit tests: unit tests that can only be deleted, never updated. A bit like TCR. I wonder if this would place some helpful incentives on the test-writing process.

You should subscribe to his newsletter, if you haven’t yet.

My submission to Euruko was rejected. ☹ That's a pity; I would have loved to visit Rotterdam. I still have 7 other proposals that might come through. Fingers crossed 🤞

External forces

I am occupied with learning these days. Learning on my own about visualizations of data, among other topics. But also learning about learning. For that, I read what other people think about learning. There are many things I have to learn about this whole topic. One thought I saw repeatedly was about external forces, or limiting factors.

Let me elaborate on what I mean by that: There are people who can motivate themselves more easily than others can. They reach their goals, or at least try very hard. Others give up more easily when they face some resistance. As always, there are people in the middle between these extremes. You know best which group you belong to. 💪

What has this to do with software quality? I am getting there… 😉

I am wondering how external forces could help improve quality. If you need to reach your goal and you don’t belong to the group of highly self-motivated people there are options like hiring a coach. Athletes do that all the time. I pay for a “virtual” coach that guides my running efforts.

How could you hire a “virtual” coach for your coding efforts, for reaching your targets on your software quality metrics? You could hire me or other “real” coaches, of course. But that doesn’t scale too well and might be too expensive.

Again, for some people it is easy enough to use static analysis or linting — a kind of coach in its own right — and follow their guidelines. Yet there are still people who ignore the warnings or guidelines imposed upon them by the tools. Reasons may be a hard deadline or too much workload. How could we offer external forces, limiting factors that help them, guide them, towards doing the right thing?

One solution I can think of is to have a robot not accept your code when it is below standard or ignores guidelines. A robot could be anything that measures and grades your code and reports back to your team. Some tools already offer this, for example GitLab: if you want to merge code that decreases the overall quality metrics, you are not allowed to do so. So that would be one option.

Another idea: If you try to commit or merge such code, you need to consult with another developer about it. Once you have worked on it together, the other dev enters her secret key to remove the lock on the merge. This forces you to pair on code more often.

When it comes to teaching, there is this saying that the “glass has to be empty (enough).” You cannot pour water into it when it's already filled. Said ideas 👆 probably won't work for a team that isn't aiming for learning and improving.

I will continue to think.

You probably know AC/DC. But have you heard of MC/DC?

If you need to test a complex conditional, you should take some time to learn about this one.

holgerfrohloff.de/newslette…

Today I am working on a new visualization of fragmentation of ownership for Git repositories. The more fragmented a repo is, the more developers work on it together. This leads to no clear ownership and makes it easier for defects to creep into the code.

Complex conditionals

The other day we dealt with code coverage and gnarly conditionals. I promised to offer a way to test them properly.

THERE IS NONE.

Ha, what a bad joke. But the real answer might not be better, depending on your point of view. What you have to do is create a table.

(A || B) && C

This is our conditional.

| m | A | B | C | RESULT |
|---|---|---|---|--------|
| 0 | T | T | T |   T    |
| 1 | T | F | T |   T    | x
| 2 | F | F | T |   F    | x
| 3 | F | T | T |   T    | x
| 4 | T | T | F |   F    |
| 5 | T | F | F |   F    | x
| 6 | F | T | F |   F    |
| 7 | F | F | F |   F    |

# m is the test case
# A, B, C are the atomic parts of the conditional
# RESULT is the result of the evaluation of the conditional

For three terms in a conditional, you can have 8 different cases (2^3). You don't need to test every case. You have to find those cases where switching one term (A, B or C) changes the RESULT. You take those cases and write tests for them. You can ignore the rest as they don't bring you any new information. For our example these are the test cases 1, 2, 3 and 5: cases 1 and 2 differ only in A, cases 2 and 3 only in B, and cases 1 and 5 only in C, and each flip changes the RESULT. I marked them with an x.

The general rule of thumb is that you can solve this with n + 1 test cases where n is the number of terms.
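To make this tangible, here is how those four cases could look as Ruby tests, using Minitest. The decision method is a made-up stand-in for the conditional under test:

require "minitest/autorun"

# Made-up stand-in for the decision under test: (A || B) && C
def decision(a, b, c)
  (a || b) && c
end

class DecisionTest < Minitest::Test
  def test_case_1 # T F T => T (flipping A against case 2)
    assert decision(true, false, true)
  end

  def test_case_2 # F F T => F (flipping B against case 3)
    refute decision(false, false, true)
  end

  def test_case_3 # F T T => T
    assert decision(false, true, true)
  end

  def test_case_5 # T F F => F (flipping C against case 1)
    refute decision(true, false, false)
  end
end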

This technique is called Modified Condition/Decision Coverage or short MC/DC. I like this name, it’s easy to remember 🤘.

It gets harder when one term of the conditional is used more than once (coupled). Another thing to take note of: depending on the decision statement in the code, it may not be possible to vary the value of the coupled term such that it alone causes the decision outcome to change. You can deal with this by only testing uncoupled atomic decisions. Or you analyse every case where coupling occurs one by one. Then you know which ones to use.

You’ll have to do this in the aerospace software industry or where you deal with safety-critical systems.

If you read this far: Congratulations. This is one of the more in-depth topics of testing software. You deserve a 👏 for learning about these things!

I am reading the new edition of Refactoring by Martin Fowler these days. I am preparing a new workshop on Software Design and Architecture. I bet there are some ideas in there that will inspire me to create some specific exercises. #refactoring

I am now using Micro.blog for my microposts. I will crosspost my newsletters there. And whatever I think makes sense. I will figure it out by just trying. 😎

Code coverage can be misleading

During the last week, I had two discussions about code coverage. Code coverage is the metric of how many lines of code are exercised by your automated test suite. Many test frameworks have built-in ways to measure this. Other times you have to install a separate tool manually. When you run your tests, you then see how many lines are not covered by a test. That means that no test was run in which this line of code was evaluated, executed or interpreted.
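In Ruby, for example, the usual manually installed tool is the simplecov gem. A minimal sketch of the setup (the file paths are made up for illustration):

# test/test_helper.rb
require "simplecov"
SimpleCov.start # must run before any application code is loaded

require "minitest/autorun"
require_relative "../lib/my_app" # made-up path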

When you reach 100% code coverage, what then? Are you done? Could you guarantee that there are absolutely no bugs in your code?

If you are tempted to say “yes”, or even “maybe?”, then let me tell you that you are wrong.

Consider this piece of code.
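The snippet is a minimal made-up sketch: the name check_in, the emergency flag and the eval'd string are only there for illustration.

def check_in(emergency = false)
  status = "all good"
  # The guard clause sits on the same line as the eval.
  eval("status = 'PANIC'") if emergency
  status
end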

If you write a unit test for this method that never passes emergency, the eval line will still be interpreted, because of the if emergency at the end. The line is thus covered. But the code inside the eval is not actually covered or tested.

Admittedly, this is a very trivial example that I made up. In reality, there are some more profound things to consider.

If you have complex conditionals you might need a logic table where you compare all possible combinations of the atomic parts of the conditional.
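A single made-up line like this one, for example, already hides eight possible input combinations (2^3):

publish(post) if (user.admin? || user.editor?) && !post.locked?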

You cannot possibly evaluate this in your head and know whether you checked for every possible, sensible combination. Yet when you cover that line you are at 100% coverage and can go home, right?

So what do you do? Let’s look at this tomorrow.

Quick wins, part 4: YAGNI

We refactored some code yesterday, to move knowledge about an implementation of a function back to where it belonged: Into the function’s class and into the function itself.

Today I want to talk about another topic that often comes up when you are refactoring, or plainly “changing code.”

It happened the other day, during a workshop I was doing on software testing. The participant wanted to apply his new knowledge and write tests for a given JavaScript class. He had written the code a few weeks before we did the workshop. During the hands-on part of the workshop, he wanted to add tests, to make sure everything worked and to make sure that he understood what I had been talking about.

We were lucky insofar as something happened that usually happens next: He noticed that his code was not easily testable. The design of his class made it harder for him than he would have liked. We talked about the problem, and he noticed the source of it. His class had too many responsibilities. He extracted the code in question into a new service and could then mock that new service when testing his original class. This was good. He was ecstatic. He made progress!

Having gained so much momentum, he went overboard: During his test design, he tried to be too clever and wrote an elaborate test setup which was to be reused between different test runs. It was supposed to be a reusable, parameterizable do-it-all function that could set up the tests just right. With no duplication. In short, it was so much code with so much logic that it would have warranted its own tests. And the worst thing? It didn't work and made trouble for writing his tests.

I was a bit thankful because that gave me the opportunity to tell him:

Premature optimization is the root of all evil.

Perhaps you’ve heard that saying already. Donald Knuth coined this phrase. There’s more to it, but that could be discussed in another email.

Back to my tester. After talking about the problems his function gave him and the difficulty in getting it right he settled for the simple solution: Write your tests. Accept duplication. Keep it simple, use copy & paste if it makes you faster and is more convenient. Write the tests you need and keep them green. And after all that, and only then, refactor your tests to remove duplication where applicable. Don’t try to write the perfect code from the start. Let the design evolve with the help of your tests. Don’t be afraid to make baby steps and don’t expect to have perfect code after the first try.
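To illustrate, here is a small Minitest sketch of what “accept duplication” can look like. Cart and Item are made up so the example runs on its own:

require "minitest/autorun"

# Made-up classes so the sketch is self-contained.
class Item
  attr_reader :price
  def initialize(price)
    @price = price
  end
end

class Cart
  def initialize
    @items = []
  end

  def add(item)
    @items << item
  end

  def total
    @items.sum(&:price)
  end
end

class CartTest < Minitest::Test
  def test_total_with_one_item
    cart = Cart.new
    cart.add(Item.new(10))
    assert_equal 10, cart.total
  end

  def test_total_with_two_items
    # Setup duplicated on purpose: each test stays simple and readable.
    # Remove the duplication later, once the tests are green.
    cart = Cart.new
    cart.add(Item.new(10))
    cart.add(Item.new(5))
    assert_equal 15, cart.total
  end
end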

I hope you liked this series. Perhaps you could take something away from it. If you have questions, let me know.

Tomorrow I’ll be off; it’s a holiday where I live, and I’ll use it to spend time with my family. See you on Monday. 👋

Quick wins, part 3: Keep it local

Yesterday I closed with this idea:

Spot places where knowledge about something does not belong.

What do I mean by that? Sometimes I come across some code that does not read right. I will use a pseudo code example to illustrate:

class Foo
  def initialize(bar_service)
    @bar_service = bar_service
  end

  def quux
    if @bar_service.greeting == "hello"
      @bar_service.greet("goodbye")
    else
      @bar_service.greet("hello")
    end
  end
end

class BarService
  attr_accessor :greeting
  def greet(message)
    @greeting = message
  end
end

What bothers me with this code? The method quux has too much knowledge about how the @bar_service works. Foo.quux knows that the @bar_service has an instance variable called greeting and at least one specific value it might have ("hello"). It also knows two values that the greet() method might be called with. Now it happens that this knowledge about how greeting and greet work is also spread into other parts of the application. What happens if you need to change something about the greet() method? You have to find every place in your application that uses it and update it to reflect the new changes. This isn't good.

There are places like this inside many applications. It takes some practice to spot them, but it becomes easier over time. For this example I suggest moving all knowledge about how greeting and greet work into the BarService. Start with the conditional, like this:

class Foo
  def initialize(bar_service)
    @bar_service = bar_service
  end

  def quux
    @bar_service.greet
  end
end

class BarService
  attr_accessor :greeting
  def greet
    if @greeting == "hello"
      @greeting = "goodbye"
    else
      @greeting = "hello"
    end
  end
end

Now we are free to change the internals of the greet method. We could add a third option or change it completely. The class Foo does not need to change at all. It continues to call greet as if nothing has happened.

One of my overarching topics is testing. A refactoring like this should be covered with tests. Not only do you need tests for the Foo class, but also for how BarService.greet works. And for every part of the app that interacts with either.

Tomorrow we’ll look at another way to do a refactoring.

Quick wins, part 2: Method names revisited

Yesterday, we had the first part of this series on quick wins and simple steps to improve your code quality. It was about naming — specifically variables and method names.

Two things were not 100% right in these examples.

The first error

You might have noticed that my loop examples were written in Ruby code. Yet the method name doSomething was written in camelCase. This is unusual for Ruby code where developers tend to use snake_case for method names. I did not lint that email. Hence no robot told me about my error. I believe it is a good example of the benefit of automatic linting. This error would have been found. If you read the code and were put off by this naming scheme, you even experienced why conventions and rules are necessary: Because coding by the rules helps developers to focus on the semantics of the code, not the syntax.

The second problem

Do you remember that I wrote about JavaScript loops and gave the classic for loop example? I complained about the i variable and said that it should be called iterator. Perhaps you did not like this idea and my example? Let me take a step back for a second. When naming variables and methods, you have to make sure they “speak.” The names should indicate their meaning and make it easier for another developer to understand the semantics of your code. Yet, when you are fully aware of the problem domain your code deals with, it can happen that you try to be too specific. If you are, you tend to use long, verbose names for variables and methods. An example could be this:
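Something along these lines; the name is made up:

# An intentionally over-specific, made-up method name:
def verify_and_fix_meta_tags_of_latest_published_article
  # ...
end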

A common value for the allowed length of method names is 30 chars. The above example could be broken into two methods, like this:
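A sketch of that split, sticking with the made-up example:

def verify_meta_tags
  # ...
end

def fix_meta_tags
  # ...
end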

The next possible quick win might be to think about the existence of these methods inside the Article class. Verifying and fixing meta tags surely does not need to be done inside this class. If you want to follow the Single Responsibility Principle, you should make sure that the Article class does not have a reason to change, if you decide to change something about verifying and fixing meta tags. Rather the Article class might change when you decide that an article should have a mandatory sub-headline.

Back to the iterator. This name might itself count as too specific. Only when i refers to a variable that is declared and initialized outside the scope of the loop does it make sense to choose a different name. Or doesn't it? As always, it depends. The classic for loop is taught in almost every book on JavaScript, and it is easily identifiable for what it is. But there might be reasons to deviate from that, as I indicated above.

Conclusion

To summarize:

  1. Choose good names. 😎
  2. Spot places where knowledge about something does not belong.

Tomorrow we’ll look at another example for 2 and a practical idea of what to do about it.

Quick wins and simple steps for improving the quality of your code

Good software needs good code. If you want to achieve a high quality in what you ship, you need to care for the quality down to each method you write.

I want to use this week to write a small series on techniques and ideas for increasing your code quality. When I look at code, it is often possible to find spots where a simple change can be made. In some cases it's even an easy tweak. Some of these examples will come from actual code that I worked on. Others will be created by me for this series. You won't see any code from my clients, of course. The only thing I take from them is the inspiration. And money. 🤣

Naming

A good place to start is variable names. If you have a call to .map() or .each(), take a look at what you are iterating over. Is it a list of Book objects? Then you should call each item that you are iterating what it is: book.

# this is not good
items.map do |i|
  i.doSomething
end

# this is better
list_of_books.map do |book|
  book.doSomething
end

This would take care of the naming of some variables.

In classic JavaScript loops, you often see a variable called i:

for (i = 0; i < array.length; i++) {
  // something happens here
}

Well, what's this i anyway? If it's an iterator, why not call it that? Even worse, when you sometimes combine i with a j and a k: for (i = 1, j = 0, k = 150; i <= 5; i++, j += 30, k -= 30) { /* do work */ } (This is copied from a SO answer.)

I bet you a non-trivial amount of money that you won’t be able to tell me without looking it up what these variables refer to 9 months after you wrote code like that.

Will it take a small amount of extra time to come up with a proper name and use that instead? Probably. Will this extra time be saved every time a(nother) human reads that code? Hell yes!

A possible next step would be to change something about the doSomething() method. What the hell does it do? Why doesn’t it tell us already from its name? In this case? Because that’s just pseudo-code for you 😜 But please make sure that you use proper and valid names for your methods and variables.

Power Laws

My work as a consultant offers me the opportunity to accompany a team for a certain amount of time. I join them, we work together, and then I leave again. Our time together gets extended, sometimes. This model has the benefit that I get to know a lot of people and teams — and how they work.

Do you know the Pareto principle? It’s also called the power law distribution or the 80/20 rule. A simple explanation would be that 20 percent of the people own 80 percent of the wealth of the whole world, which makes it relatable and understandable. Only that it’s wrong. By now 10% hold 90% of the wealth already. And it’s getting worse. I don’t have any sources on this right now, and I won’t go looking. Because it was only meant as an image of how this works.

Back to my clients…

There is a similar 80/20 distribution to be found here. 80% of software development teams make the same mistakes over and over again. It starts with a new green-field project. A year of development work passes. A lot of code gets written. And after that year the team is frustrated with its software again and can't find a way out. This is the result of bad practice.

If you find yourself in this situation, there are ways out of it. With a lot of intrinsic motivation and the ability to learn from mistakes and external sources, you might be able to turn the ship around and sail into the sunset, happily. But there are a lot of rocks under the water that might wreck your boat. An experienced navigator for these waters could prove beneficial.

In my opinion, a good start is to look to industry standards and follow them, along with common best practices. Find and learn the rules of how other teams work. They might seem strange; you might not understand or like them. But let me tell you something: The way you worked up until now didn't work and brought you into this mess. Doing things as you've always done them won't help you a bit.

So, don’t be smart. Find rules. Follow the rules. Stick to them and don’t deviate. Re-evaluate in 6 months. It will hurt. It might not be fun. But it will get better.

If you need a pointer, let me know by replying.

Refactoring without a care

Before I get to today’s topic, I would like to say thank you, to you. My little poem yesterday seemed to resonate with you. At first, I planned to write about it and its meaning today. But your responses indicated that it spoke to you. And I wouldn’t want to ruin this with my ramblings about it. So I’ll just finish with: I enjoyed this very much.

Lately, I spent some more time on Twitter. I don’t know how to use Twitter well (enough). I always have trouble with creating threads or topics. But I (re)discovered some very interesting people, sharing their ideas in long threads.

A few days ago I came across GeePaw Hill (@GeePawHill). I believe it was because I followed a few tweets by Kent Beck. GeePaw Hill had this thread where he encouraged people to refactor without caring for the application domain, only for the code. You can read the thread here. He even elaborated some more on his blog.

I find the idea fascinating and will continue to think about that.

A neverending story

Back then; I did it; I liked it much; Found it a necessary touch;

Never it challenged, I; then saw; A source without it; didn’t look too raw;

Since then it flows without it well Some purists call it living hell.