So today I read this interesting conversation by @rknightuk. The topic isn’t “finished,” and he might post updates. If he remembers, he might add them to the conversation. But maybe he won’t. (I wouldn’t blame him!)

That brings me to the question: how can I make sure I don’t miss an update? Checking in daily is one option, but it’s unrealistic for me. More generally: if you want to see posts from someone you don’t follow or subscribe to, how can you make sure you get updates on things you might care about?

This is just me thinking out loud, I don’t know the answer, yet.

Maybe there could be a service that monitors their feed for certain keywords and pings you once they post something containing them? There isn’t even an option to subscribe to a conversation, since a conversation doesn’t have an RSS feed associated with it. One way would be a service that subscribes to his RSS feed and monitors it for designated keywords. If it finds one, it creates a post mentioning me.

What comes to mind is a cron job running on a server. But that sucks; using cron doesn’t feel right to me today. Another idea was a GitHub Actions workflow. But that always needs a trigger to get started. So what could be a trigger?

goes away to read GitHub Actions docs

Turns out: you can schedule a workflow on GitHub Actions using cron syntax. Awesome! I mean, it’s still somehow cron, but it’s more accessible than some bash thingy on some server.
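For reference, the schedule trigger looks like this in a workflow file (the file name, the daily schedule, and the script path are my own example choices, not anything prescribed):

```yaml
# .github/workflows/keyword-watch.yml (hypothetical name)
on:
  schedule:
    # Standard cron syntax: this example runs once a day at 08:00 UTC.
    - cron: "0 8 * * *"

jobs:
  check-feed:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ruby scripts/check_feed.rb  # hypothetical keyword-checking script
```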

So what is left is figuring out what the GitHub Actions workflow should run. It could be a JavaScript or Ruby script inside the repository that the workflow belongs to.

  • This script needs read/write access to a micro.blog (or other fediverse) account that can send @mentions to me.
  • The script needs to parse the feed (in this example micro.blog/rknightuk…) for keywords.
  • I need an option to provide the keywords. They could be hard-wired into the script for starters.
  • If a keyword is found, it creates a new post mentioning me and containing the link.
  • Since this could be high-volume if the keywords are too broad, it shouldn’t mention/ping the creator of the post.
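The steps above can be sketched in a few lines of Ruby. This is only a sketch of the idea, not an existing tool: the `Item` struct stands in for a parsed RSS entry, and the keywords and mention handle are made-up placeholders.

```ruby
# Sketch: scan feed items for keywords and build mention posts.
# In the real script you would parse the actual RSS feed (e.g. with the
# `rss` gem) and map its entries into something like Item.
Item = Struct.new(:title, :body, :link)

KEYWORDS = ["conversation", "update"].freeze # hard-wired for starters

# Select every item whose title or body contains any keyword.
def matching_items(items, keywords)
  items.select do |item|
    text = "#{item.title} #{item.body}".downcase
    keywords.any? { |kw| text.include?(kw.downcase) }
  end
end

# Build the post text. It mentions me, not the original author,
# to avoid pinging them on every broad keyword hit.
def mention_post(item, me: "@holger")
  "#{me} keyword match: #{item.title} #{item.link}"
end

items = [
  Item.new("An update on the thread", "more details", "https://example.com/1"),
  Item.new("Unrelated post", "nothing here", "https://example.com/2"),
]
puts matching_items(items, KEYWORDS).map { |i| mention_post(i) }
```

Posting the resulting text to an account would be the only part that needs API credentials; everything else is plain feed parsing.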

It doesn’t sound too complicated right now. I’ll leave that for later and will come back to this idea!

Edit: this sounds like a solution that probably already exists. If you know of something like this, let me know. Maybe you could even use Google Alerts or something similar!

Today I found this video, Happiness by Steve Cutts, and it fits wonderfully with the article “No other love” by @Buddenbohm@mastodon.social from January 12.


I wrote a little tool yesterday to help me normalize the markdown files from my Gatsby site so I could import them into Micro.blog properly. In particular, I wanted redirects set up automatically for all the old posts. My site had simple slugs like holgerfrohloff.de/power-laws. Micro.blog sets up posts using their dates => https://holgerfrohloff.de/2019/03/01/power-laws.html

For that to work, I needed the permalink added to the frontmatter. That wasn’t the case for the majority of posts. I didn’t want to make the change manually, so I wrote a script thingy to do it for me: https://github.com/5minpause/postsnormalizer
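The core of the idea can be sketched like this. Note this is my simplified reconstruction, not the actual postsnormalizer code; the frontmatter layout with `slug:` and `date:` fields is an assumption.

```ruby
# Sketch: inject a `permalink` into a post's YAML frontmatter so the old
# Gatsby-style slug keeps working after Micro.blog files the post under
# /YYYY/MM/DD/. Assumes frontmatter delimited by --- lines.
require "yaml"
require "date"

def add_permalink(markdown)
  # Split into frontmatter and body at the --- delimiters.
  front, body = markdown.split(/^---\s*$\n/, 3)[1..2]
  meta = YAML.safe_load(front, permitted_classes: [Date])
  meta["permalink"] ||= "/#{meta["slug"]}/" # keep the old short URL alive
  "---\n#{meta.to_yaml.sub(/\A---\n/, "")}---\n#{body}"
end

post = <<~MD
  ---
  slug: power-laws
  date: 2019-03-01
  title: Power Laws
  ---
  Body text.
MD

puts add_permalink(post)
```

Run over every file in the posts directory, this adds the `permalink` key only where it is missing, which is exactly the batch edit I didn’t want to do by hand.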

So maybe this could be helpful if you want to migrate a GatsbyJS site to Micro.blog. /cc @manton


I finally made the migration to Micro.blog with my personal site https://holgerfrohloff.de All the posts and newsletters I’ve written are migrated. They are still being published, so they are not styled yet. I have to migrate all the pages as well, and then the overall design of the site. I want it to look a little different and also showcase some of the functionality that Micro.blog offers me now. Overall I am happy to be even more a part of the IndieWeb now.

Photo by Mathew Schwartz on Unsplash



Jeena writes about the challenge of maintaining custom blog software.

I've neglected my Rails application which runs my own website for many, many years, mostly because it's a lot of work to keep upgrading Rails, especially when it comes to major version changes. Dependencies break and disappear, APIs break, you need to rewrite a lot of code because new concepts and data structures get introduced, etc. Anyway, now I need to upgrade from Rails 4.2, which was released 8 years ago, to 7.0, which was released last year.

I still like to have written my own website because this way I can have exactly the functionality which I want, not someone else. But it's a lot of work.

I feel the challenge. I started rewriting this very site in Rails to be able to include Webmentions and support other IndieWeb features. But sadly, I haven’t found enough time recently to work on the project.
So cheers to Jeena, and hopefully he’ll be able to upgrade to Rails 7. Once I’ve finished some more client projects, maybe I’ll be able to take the time to work on this site more.



GitHub Codespaces is a great feature by GitHub that lets me work on an iPad way more easily than before. It’s also cool for teams to quickly set up a new developer, or to switch between branches.

I use it mostly for Ruby on Rails development. Sometimes something happens that prevents me from continuing my work, and I had trouble finding out how best to solve it. Here’s what happens: I start my codespace and want to spin up a Rails server to inspect the website. Once I type bin/rails s, the server starts and I get the notification to open the site in a new browser tab. When I navigate there, I see this screen:

It’s a message from nginx telling me there is a 502 Bad Gateway error when I try to access my codespace.

The solution that always helps me is to unpublish the port and re-publish it again. Here’s how that looks in the interface:

That should solve your issue and you can quickly access the website.




I joined my last client, Edeka, in April 2020. Since then I haven’t been freelancing. My old WordPress site ran on quite expensive hosting, which I could justify as a business expense. But since that income is gone now, I have to switch to something cheaper.

This site now runs as a GitHub Pages site, powered by Gatsby, which I love quite a lot. And now I am free to experiment with this site more again.



Tomorrow I will help a new team member onboard at my client. I am the acting tech lead for the client right now, so it’s my job to make sure that everything is in place for this new member of the team.

The former tech lead onboarded me in September (not so long ago). He had prepared a list of tasks we had to do together. He even created a whole new Trello board just for the onboarding. Did he do it because it was so laborious? No! In fact, the onboarding mostly consisted of joining the services and SaaS applications we use and getting familiar with all the processes for running the show.

But it did help to be able to cross items off the list. We were always aware of which services I still had to join. We also knew what was missing until I was at 100%. That was certainly helpful.

When I am onboarded at a new client, I generally send them my onboarding survey beforehand. In it I ask questions that are relevant to the process and to me, and that come up with every client. Because they get these questions before I arrive, they tend to be better prepared.

I am really looking forward to having the new developer join the team, tomorrow. I bet he’ll do great. And to help him along, I’ll just follow my onboarding sheet. 😉

Do you use something similar?



A couple of days ago I talked about the Akimbo podcast by Seth Godin. It was an episode about opportunity cost and how that relates to lifelong learning as a software developer.

This morning I was on the train to a dear friend who happens to help me with my retirement planning. She helps me choose equity funds and related things. I decided to take the train instead of the bike, which is what I usually use to get around Berlin. Taking the train meant that I could write down a few words that had been moving around in my brain. It also meant I could listen to music or a podcast. It just so happened that the latest episode from Seth was relevant to our topics here again. Its title is the same as this email’s subject:

Why is software so bad?

When I saw the title appear in the episode list in my podcast app of choice (Overcast on iOS), I just had to listen to it. In it, Seth compares the evolution of the development of cars to the stagnating evolution of software. I believe you will get a lot out of this episode. If for nothing else, you will at least get one or two ideas for a software startup you could pursue.

Here’s the link to the episode: www.akimbo.link/blog/s-5-…

See you next week.



Today I heard a recent episode of the Akimbo podcast by Seth Godin. The topic was opportunity cost. As a quick recap, here’s what Wikipedia has to say about opportunity costs:

The opportunity cost, or alternative cost, of making a particular choice is the value of the most valuable choice out of those that were not taken. When an option is chosen from alternatives, the opportunity cost is the “cost” incurred by not enjoying the benefit associated with the best alternative choice.

So… about that new JavaScript framework. Or about the recent addition to React named Suspense… have you already taken the time to read up on it? Do you know what it does? It is supposed to change how we render things onto the page and fetch the necessary data to display.

Or did you know that in Ruby 3.0 the handling of positional and keyword arguments will change from the way it was handled in Ruby 2.7 and before? How will your code have to change?
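A small illustration of that change (the `connect` method is a toy example of mine, not from any real codebase):

```ruby
# In Ruby 2.7 and earlier, a trailing Hash argument was automatically
# converted to keyword arguments (2.7 printed a deprecation warning).
# In Ruby 3.0, positional and keyword arguments are fully separated.
def connect(host, port: 5432)
  "#{host}:#{port}"
end

opts = { port: 3000 }

# connect("db", opts)  # Ruby <= 2.6: worked; Ruby 3.0: ArgumentError
puts connect("db", **opts) # works everywhere: splat the hash explicitly
```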

And that language Rust… and Elixir/Erlang. What else haven’t you learned yet? All the other developers competing for the best jobs probably already know at least something about those technologies, right? But I guess you did watch the latest Netflix series.

Do you feel left behind already? What a shitty feeling. I know I do. About those technologies I wrote about above… everything I know about them is what you can read in this email! I haven’t taken a deep dive into any of them. I don’t even know GraphQL. (I wanted to add a “yet” but realized how phony that would have been. I won’t learn GraphQL until I need it on some project!)

So here’s my take on all this FOMO: ignore it. Learn fundamentals. Learn the basic building blocks of how the stuff works—and has worked for the last 40 years. It pays to know how a request is handled on the web, generally. It’s fun to look at the complete request cycle of the Rails framework. But you don’t need it. If you can build a basic REST API you’ll be fine. And if you come across a use case that requires GraphQL, you will be quick enough to adopt it and learn it as you go.

Don’t get freaked out by the latest trends and what the cool kids think they need to learn. They are usually wrong, and have been before.

Learn the basics, practice those “soft” skills like communication and articulating concepts in a way that people understand, practice your storytelling, and call your mom regularly. That will help you way more than getting freaked out by the opportunity costs and all the things you might have to learn.


This will be the last email that you’ll get from me sent by Drip. My account will be terminated tomorrow.

I just spent around an hour saving screenshots of my automations and text files with the contents of all the emails that I send to my subscribers. What a hassle. And then I’ll have to recreate everything once I’ve found a new home for it all.

I would like to move to something like ConvertKit, but they are not GDPR compliant, so that’s definitely not a solution for me. The next best thing looks like Mailjet, but I already know they don’t support everything I’d like to have. ☹

I am still toying with the idea of writing something myself. But I don’t have time for that.


I spent last week doing my first sabbatical week, inspired by this guy. That’s the reason I didn’t code anything at all for clients.

I worked on my pilot course a bit, but mostly on my website holgerfrohloff.de. I silently launched it yesterday because I figured that I’d already waited too long. I know it’s not finished and that not everything works as it’s supposed to. The reasons for that are mostly a) time and b) WordPress/PHP. I am still figuring out how WordPress and PHP work. And it turns out I have a really hard time doing it. Even simple changes to the theme I bought and to WordPress’ logic take me ages—if I am able to make them at all. It sucks.

But I am glad that I launched. Now, day by day, things will improve. I already know of things that do not work, but if you come across anything, please let me know. You have my email address 😉


I used the week to spend quality time with my family, play some video games, and read great books. But I also made time for high-intensity running workouts. It was an awesome week. I am already looking forward to the next one. Since I will be at my new client by then, I’ll have to see how and when it will be possible.

One work-related thing that I want to leave you with is this tweet conversation I had with the creator of the Lucky framework, Paul C. Smith: https://twitter.com/5minpause/status/1159464579249950721 He responded to my question “When is a class too big?”

See you next week, I hope (if I figure out how to send emails with Mailjet or something else by then).


Please excuse the sensational headline. I am referring to classes in your software and to their design. But also to tests; more on that below…


This past week I mainly tried to pair with my fellow developers to impart some knowledge to them. We worked together on integration tests with Nightwatch.js, which is something like Capybara from the Ruby world. A question that came up was: “When do we stop testing?”

I have to share some details about our setup: we created a React application for the frontend and use the APIs provided by the backend developers. Those APIs are sometimes flaky, and generally it would be a bad idea to access them during tests, because they would introduce a dependency that is outside of your control. Do you want your tests to fail because somebody changed something inside the APIs? And would you like your tests to succeed regardless of whether the API was accessible?

The answer to both questions is “yes”. The first type of test is a system test, where you do a complete blackbox-style test of your system for (mostly) functional requirements. If the API changes, you would want to know. And you would like your tests to fail and notify you of that change. Well, you would want the backend dev to notify you beforehand, but still…

The second type of test I was describing is an integration test. You want to find out whether the code you wrote accesses the API in the required way and whether your software behaves as specified for the API’s responses. But you don’t need the actual API for that. You should use a mock API that replaces the real API and behaves just the same. And this mock API should be under your control.
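Our tests are written with Nightwatch.js, but the pattern is language-agnostic. Here is a minimal Ruby sketch of the idea; `MockWidgetApi`, `WidgetClient`, and the canned responses are made up for illustration, not our actual code:

```ruby
# Integration test against a mock API: the "API" is an in-memory object
# under our control that returns canned responses. The test then checks
# our client's behavior, not the real backend's availability.
class MockWidgetApi
  attr_reader :requests

  def initialize(responses)
    @responses = responses # path => canned response
    @requests  = []
  end

  def put(path, body)
    @requests << [:put, path, body] # record the call for later assertions
    @responses.fetch(path)          # canned success or failure response
  end
end

# The code under test: sends the update and reports success or failure.
class WidgetClient
  def initialize(api)
    @api = api
  end

  def update_widget(id, attributes)
    response = @api.put("/widgets/#{id}", attributes)
    response[:status] == 200
  end
end

api    = MockWidgetApi.new({ "/widgets/123" => { status: 200 } })
client = WidgetClient.new(api)

puts client.update_widget("123", { category: "tools" }) # happy path: prints true
```

Because the mock records every request, the test can also assert that the client called the API in the required way, which is exactly the point of an integration test.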

So much for the details. We wrote integration tests against the mocked API. We sent a request to get some data, let’s call it a widget, specified by its ID.

GET /widgets/123-567-890 => widget.json

We received JSON and displayed the widget in a form, so the user is able to edit it.

In this test, we edit only a widget’s category. We send a PUT request with the whole object, even though we only update the category (a PATCH would be enough, but this is what the API specifies…). We send the request to our mocked API. Since this was the “happy path” test, we received a response indicating success. 🎉

A bit more background:

  • Back in our UI, where we display the widget, we do not display the category.
  • When we get a successful response for the request, we update the internal state of the app (we don’t use Redux). So we “save” the category.
  • When we open the form to edit the widget, we fetch data from the API first and display it for editing.

Now the question was: “Where do we stop testing?” Since we mock the successful response anyway, do we test that we update the internal state with the widget? Even though we don’t use or display that state (the updated category) anywhere?!

I have to admit, I had to think about this for a while. I wasn’t sure what the best way would be. One answer could have been to display the edited category somewhere accessible to the test (perhaps as a data attribute). But where’s the business value in that?

A better answer turned out to be:

  • For the happy path, test for a successful response and be done
  • Add tests for the failure cases of unsuccessful responses

And the test and its questions revealed another thing: we do not need to store the updated category right now. The person who wrote that code did it with a possible future requirement in mind. But it wasn’t and isn’t necessary right now. And since they didn’t use TDD to create the code and its design, they didn’t notice.

So that’s one more story for “tests help increase the quality of code and design”.

This is already long and I wanted to share something on the size of classes and when to refactor them. I guess this has to wait for the next issue.


This topic from last week resonated with you. Thanks for the many responses. I will take you up on your offers and put you in a virtual crowd using Zoom the next time I do something like this.


Fixed gear bikes seem to be a favorite for many readers. I hadn’t anticipated that you would have experience riding these. I was glad I shared it.

On a completely unrelated note: in the future I will factor into my selection of clients whether they have a shower in their office. Having access to one helps me tremendously in combining my training plans with my working plans. Just yesterday I was able to do an intense interval running session on my way to my client because I knew I could shower afterwards. The alternative would have been to do it in the evening. But I’d rather spend the evening with my family instead. So this really increases my happiness.

Another personal endeavor of mine is to decrease my reliance on external/third-party solutions for syncing and storing data. This means things like iCloud and Dropbox, but also other proprietary solutions inside apps that I use, e.g. DayOne.app.

If I write my thoughts for years and years, I do not want to rely on some business to be able to access it. Same with my own business content. I mostly write code and articles, but also thoughts on strategy and lots of other things. I plan my business for the long run, which means I want to be able to access these things forever.

That’s why I am beginning to turn away from iCloud and friends. I don’t use Ulysses for writing anymore, but Joplin. I run my own cloud service (Nextcloud) and sync all my encrypted plain text files to the cloud and between my devices. That way I can triple-back-up everything. Joplin is FOSS. I can save the source code in its current state (you bet I already did!) and will be able to access my notes for as long as I have a computer.

So I started migrating my data from the different apps I used to Joplin. This is an ongoing task, and I have also begun to write an importer to do it automatically. I have thousands of entries and notes in DayOne and other apps. It’s quite a challenge. But this makes sense to me.



On Tuesday I did something new for the first time, and I feel like I failed miserably. I hosted a webinar. Back at the beginning of the year, I sent out proposals to a bunch of conferences with the goal of giving a talk. A conference by NAGW (National Association of Government Web Professionals) accepted my submission. I was supposed to give a talk in Utah in September. Due to calendar issues I had to cancel that appearance. So they asked me whether I was up for giving the talk as a webinar to their audience, which I happily agreed to.

Turns out I need people with faces in front of me! Once the time came to start the webinar, I was so excited/anxious that I could barely catch my breath. It almost felt like a small panic attack. I don’t know how it came across to the listeners (it was audio-only), but I felt like I could hardly talk. That made me speak so unbelievably fast that I went through the whole material in half the time it usually takes me. And I bet I forgot a whole lot of things that I should have said.

I do know that I am excited and nervous every time I start a talk in front of a crowd of people. But usually this nervousness fades after 3-5 minutes, once I have scanned the room and felt the (positive) reactions of the crowd. After all, your listeners want you to “succeed” and give a great talk. But I had nothing of that during the webinar. I had no reassurance that people actually understood what I was talking about. In any case, that felt bad.

I learned something from the experience. My first reaction was to never give a webinar again. The second was that I need a different setup next time. I need to interact with people, and for that I will try to use the chat tool that every webinar software offers. I don’t know how well that works, but I will do what almost always works for me: be frank and open, tell the audience about my anxiousness, and let their interaction in the chat help me through. I think that’s at least worth a try before abandoning webinars altogether.

August has started and it’s the last month at my current client. This means that I will begin to do a lot more pairing with the permanent developers, and less writing features/code myself. This way knowledge transfer is increased and the company and the people get more out of it. This will be more demanding than the months before, but it’s worth it.

If you’ve been part of this newsletter community for some time, you might remember that I told you about an online course on testing that I was about to create and release. I am still doing this, but I got stuck. It is the first course that I am creating, and with no experience I hit some roadblocks that I couldn’t solve on my own. Thus, I joined an online community/course on course building, how meta 😉. With this help I am absolutely sure that I can solve the issues and create an interesting pilot course. I will reach out to you again once I am looking for participants.


I bought yet another bike. 😝 I do love my bikes, and I found a great bargain on eBay: a steel frame, only one year old, for half the original price. So I bought it. It’s a fixed-gear bike, which means you cannot coast and have to keep your legs moving at all times. Riding it home yesterday evening, I noticed what an incredibly meditative experience it is. As a beginner on a fixed-gear bike, you have to be 100% aware of what you’re doing at all times. There was no space left in my head to think about anything but moving my legs and riding this bike (“don’t stop pedaling”). Because if you stop pedaling for even one second, the pedal will still rotate upwards, and your leg will move with it and might catapult you out of the saddle. It’s demanding.

In other news, our older daughter will start school in about one week. We are incredibly excited and a tiny bit afraid 😬.


This is getting long already, so I’ll wrap up soon. Last time I thought about doing scales in my professional development and how that correlates to doing interval training in sports. But that was wrong. Doing scales is not at all like intervals; it’s more like easy runs/long runs. You don’t do anything technically demanding, but you are doing groundwork, fundamentals, basics — whatever you call it. I had a lengthy discussion about this with my mastermind group. They were the ones who pointed out this difference to me.

See you next week.


I am writing again. Mostly for myself. As a journal. In the future I’ll try things a bit differently around here: I want to share what I am thinking about, how I work and what I am working on (as far as it makes sense in regard to my clients and NDAs).


I secured a new client for autumn and the first months of 2020. I’ll be joining them as interim tech lead. I am excited. This will let me work a lot with Rails again. The last months were JS/TS only…

Something that I was proud of during the last week was a caching solution I created. Let me describe it briefly:

  • We are in a TypeScript project
  • There are external APIs we are calling to populate our app with data.
  • We display the same value in multiple places on a page or after navigating somewhere else.
  • We do not have a global state management like Redux etc. (Why is a discussion for another time 😉)
  • Every display of the value resulted in a fetch to the same API resource.

What I built was a cache-aside solution where I cache the (pending) Promise for the first request. All subsequent requests receive this Promise as their return value, without fetching again. Once the Promise resolves, the value appears where it’s supposed to appear, and every request happens just once. This is possible, and makes sense, because the values change very infrequently, and using the “old” value until the page is reloaded works just fine. We don’t yet need things like ETags and If-Modified-Since, but perhaps that will come in the future. Anyway, this was cool, and it was challenging to find a proper solution. Caching the Promise itself was an idea I developed together with my co-worker, Franz.
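The original is TypeScript, but the idea can be sketched in Ruby, with a Thread standing in for the pending Promise. This is my simplified reconstruction, not the production code:

```ruby
# Cache-aside for in-flight requests: the first caller for a key stores the
# pending "promise" (here a Thread); every later caller gets the same one
# back instead of triggering a second fetch. Thread#value joins the thread
# and returns the block's result, so all callers see the same response.
class RequestCache
  def initialize
    @pending = {}
    @mutex   = Mutex.new
  end

  def fetch(key, &fetcher)
    thread = @mutex.synchronize { @pending[key] ||= Thread.new(&fetcher) }
    thread.value
  end
end

cache = RequestCache.new
calls = 0
fetch = -> { calls += 1; "widget-data" } # stand-in for the real API call

3.times { cache.fetch("/widgets/123", &fetch) }
puts calls # prints 1 — the underlying fetch ran only once
```

In JavaScript the runtime’s event loop gives you this sharing almost for free (you store the Promise itself in a plain object); the mutex here is only needed because Ruby threads can race on the hash.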

I have been redesigning my website these last months. It always takes longer because nobody pays me for it 😂. It’s progressing nicely, thanks to the help of my designer. She’s great. I am really looking forward to sharing it with you. Also, I need to get back into the rhythm of writing articles. I have a lot of topics planned and see my traffic increasing steadily.

I did not renew my subscription to Drip, the email marketing service I used this past year. They were too expensive anyway. You will still receive this mail through Drip, but future mails might be sent by something else. I’ll keep you posted and hope to complete the migration without any hassle for you.


I am in the middle of a training program for my next 10 kilometer race. It’s quite intense: I am running 4-5 times a week. To make time for that I started doing run-commutes. This means I run to my client in the morning, take a shower there, and run home in the evening/afternoon. This is demanding but works out quite nicely. The shortest distance to my current client is about 6 km, so it’s fine to run it twice.

I have a height-adjustable desk at my home office now and it is awesome. I love the flexibility. I actually stand most of the time. But after lunch, for example, I tend to sit down more. Ha, just wanted to share. If you are on the fence about getting one, you should!

Thinking / Reading

A quote that still makes me think:

Recently, one of my favorite questions to bug people with has been “What is it you do to train that is comparable to a pianist practicing scales?” If you don’t know the answer to that one, maybe you are doing something wrong or not doing enough. Or maybe you are (optimally?) not very ambitious?

It’s from this article: https://marginalrevolution.com/marginalrevolution/2019/07/how-i-practice-at-what-i-do.html

My current answer is the side project I am working on (nothing to share publicly here yet). But I do know that I am not stretching myself with that project. Compared to my run training, it is like a relaxed long run: important for basic fitness and health, but nothing that increases my ability by much. The caching solution I wrote about certainly stretched me, and I learned new things there. But that was work, not practice. And it wasn’t deliberate, but a happy “accident”. I should be able to plan for these training sessions.

So this is something I am thinking about.



When you work on your branch, you eventually want to push your changes to the remote repository. CI will then pick up your changes and run the linting and code quality checks on them. Afterwards, you will see whether you improved the quality. But perhaps some new violations crept into the code? Happens to all of us!

I usually like to check locally whether any new issues would come up on CI. That lets me fix them before I push. The checks I run locally depend on the kind of work that I do. Lately that’s a lot of Ruby on Rails again — which is great. I love that framework and the language. To grade my code, I use Rubocop, Reek and Flay. If you run their respective commands on your repository, they will check the whole code base. That might be OK if you didn’t have any issues before. But since, these days, I join teams that work on legacy projects, it is rare that there are no problems with the code. If I just run the commands as-is, I get a long list and couldn’t possibly spot the issues that I introduced through my changes. Luckily, there is Git and some “command line foo” that can help us here:

git fetch && git diff-tree -r --no-commit-id --name-only master@\{u\} HEAD | xargs ls -1 2>/dev/null | grep '\.rb$' | xargs rubocop

This command fetches the current state from the remote and diffs your branch against the master branch. It then runs Rubocop on the changed files (the ls -1 step drops files that no longer exist). In my ~/.aliases.local file I added three lines, one for each of the three linters.

# Code Quality
alias rubocop-changes="git fetch && git diff-tree -r --no-commit-id --name-only master@\{u\} HEAD | xargs ls -1 2>/dev/null | grep '\.rb$' | xargs rubocop"
alias reek-changes="git fetch && git diff-tree -r --no-commit-id --name-only master@\{u\} HEAD | xargs ls -1 2>/dev/null | grep '\.rb$' | xargs reek"
alias flay-changes="git fetch && git diff-tree -r --no-commit-id --name-only master@\{u\} HEAD | xargs ls -1 2>/dev/null | grep '\.rb$' | xargs flay"

I am still working on a way to call just one command and have all three linters run. That doesn’t work yet, probably for exit-code reasons: as soon as one linter finds issues and exits non-zero, a chained command stops there.

These simple commands offer a convenient way to find local issues and correct them before pushing to CI.


A few weeks ago I wrote about deleting your tests. Yesterday I received the weekly email from Avdi Grimm, where he touches on this subject.

Some premises about my relationship with unit testing:

1. I like test-driven development.
2. I like driving out individual object design with small, isolated (e.g. no database) unit tests.
3. I think of these unit tests as a design aid, full stop. Any help they provide with preventing regressions is gravy.
4. I treat unit tests as disposable. Once they have served their purpose as design aids, I will only keep them around so long as they aren’t getting in the way.

These four premises are strongly interconnected. Take away #1 (test first), and tests are no longer a design aid and none of the other points are valid. Take away #2 (isolation) and we get into integration test territory, where there’s a much higher possibility of tests having regression-prevention value. Take away #3 (design focus) and the other points lose their justification. Take away #4 (disposability) and I spend all my time updating tests broken by code changes.

This makes it easy for me to find myself at cross purposes with others in discussions about unit testing, because often they come into the conversation not sharing my premises. For instance, if your focus in unit testing is on preventing regressions, you might well decide isolated unit tests are more trouble than they are worth. I can’t really argue with that.

A recent conversation on this topic inspired the thought that, from the driving-design perspective, maybe unit tests are really just a crutch for languages that don’t have a sufficiently strong REPL experience. While I don’t think this is strictly true, I think the perspective shines a useful light on what we’re really trying to accomplish with unit tests. I almost think that what I really want out of unit tests is the ability to fiddle with live objects in a REPL until they do what I want, and then save that REPL session directly to a test for convenience in flagging API changes.

That conversation also spawned the idea of immutable unit tests: unit tests that can only be deleted, never updated. A bit like TCR. I wonder if this would place some helpful incentives on the test-writing process.

You should subscribe to his newsletter, if you haven’t yet.


I am occupied with learning these days. Learning on my own about visualizations of data among other topics. But also learning about learning. For that I read what other people think about learning. There are many things I have to learn about this whole topic. One thought I saw repeatedly, was about external forces, or limiting factors.

Let me elaborate what I mean by that: There are people that can motivate themselves more easily than others can. They reach their goals or at least try very hard. Others give up more easily when they face some resistance. As always, there are people in the middle between these extremes. You know best which group you belong to. 💪

What has this to do with software quality? I am getting there… 😉

I am wondering how external forces could help improve quality. If you need to reach your goal and you don’t belong to the group of highly self-motivated people there are options like hiring a coach. Athletes do that all the time. I pay for a “virtual” coach that guides my running efforts.

How could you hire a “virtual” coach for your coding efforts, for reaching your targets on your software quality metrics? You could hire me or other “real” coaches, of course. But that doesn’t scale too well and might be too expensive.

Again, for some people it is easy enough to use static analysis or linting — a kind of coach in its own right — and follow their guidelines. Yet there are still people who ignore the warnings or guidelines imposed upon them by the tools. Reasons may be a hard deadline or too much workload. How could we offer external forces, limiting factors, that help and guide them towards doing the right thing?

One solution I can think of is to have a robot reject your code when it is below standard or ignores guidelines. A robot could be anything that measures and grades your code and reports back to your team. Some tools already offer this, for example GitLab: if you want to merge code that decreases the overall quality metrics, you are not allowed to do so. That would be one option.
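As a toy illustration of such a robot (the names and thresholds are made up, not GitLab’s actual API), the gate boils down to comparing metrics before and after a change:

```ruby
# Hypothetical merge gate: block the merge if quality metrics degrade.
def allow_merge?(before, after)
  after[:coverage] >= before[:coverage] &&
    after[:complexity] <= before[:complexity]
end

main_branch   = { coverage: 92.0, complexity: 14 }
merge_request = { coverage: 90.1, complexity: 16 }

puts allow_merge?(main_branch, merge_request) ? "merge allowed" : "merge blocked"
# -> merge blocked
```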

Another idea: If you try to commit or merge such code, you need to consult with another developer about the code. Once you worked on it together, the other dev has to enter her secret key, to remove the lock on the merge. This forces you to pair on code more often.

When it comes to teaching, there is the saying that the “glass has to be empty (enough).” You cannot pour water into it when it’s already filled. Said ideas 👆 probably won’t work for a team that isn’t aiming for learning and improving.

I will continue to think.


The other day we dealt with code coverage and gnarly conditionals. I promised to offer a way to be able to test them properly.


Ha, what a bad joke. But the real answer might not be better, depending on your point of view. What you have to do is create a table.

(A || B) && C

This is our conditional.

| m | A | B | C | RESULT |   |
|---|---|---|---|--------|---|
| 0 | T | T | T | T      |   |
| 1 | T | F | T | T      | x |
| 2 | F | F | T | F      | x |
| 3 | F | T | T | T      | x |
| 4 | T | T | F | F      |   |
| 5 | T | F | F | F      | x |
| 6 | F | T | F | F      |   |
| 7 | F | F | F | F      |   |

  • m is the test case
  • A, B, C are the atomic parts of the conditional
  • RESULT is the result of the evaluation of the conditional

For three terms in a conditional, there are 8 possible cases (2^3). You don’t need to test every case. You have to find those cases where switching a single term (A, B, or C) changes the RESULT. You take those cases and write tests for them. You can ignore the rest as they don’t bring you any new information. For our example these are the test cases 1, 2, 3, and 5. I marked them with an x.

The general rule of thumb is that you can solve this with n + 1 test cases where n is the number of terms.
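Translated into code, the four selected cases could look like this (a minimal sketch with a hypothetical decision method and bare assertions instead of a test framework):

```ruby
# The conditional under test.
def decision(a, b, c)
  (a || b) && c
end

# The four MC/DC cases: each pair noted below differs in exactly one
# term, and that flip changes the RESULT, proving the term matters on
# its own.
mcdc_cases = [
  { m: 1, args: [true,  false, true],  expected: true  }, # vs 2 flips A, vs 5 flips C
  { m: 2, args: [false, false, true],  expected: false }, # vs 3 flips B
  { m: 3, args: [false, true,  true],  expected: true  },
  { m: 5, args: [true,  false, false], expected: false },
]

mcdc_cases.each do |tc|
  actual = decision(*tc[:args])
  raise "case #{tc[:m]} failed" unless actual == tc[:expected]
end
puts "all 4 MC/DC cases pass"
```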

This technique is called Modified Condition/Decision Coverage or short MC/DC. I like this name, it’s easy to remember 🤘.

It gets harder to do, when one term of the conditional is used more than once (coupled). Another thing to take note of is that depending on the decision statement in the code, it may not be possible to vary the value of the coupled term such that it alone causes the decision outcome to change. You can deal with this by only testing uncoupled atomic decisions. Or you analyse every case where coupling occurs one-by-one. Then you know which ones to use.

You’ll have to do this in the aerospace software industry or where you deal with safety-critical systems.

If you read this far: Congratulations. This is one of the more in-depth topics of testing software. You deserve a 👏 for learning about these things!


During the last week, I had two discussions about code coverage. Code coverage is the metric of how many lines of code are covered by your automated test suite. Many test frameworks have built-in ways to measure this. Other times you have to install another tool manually. When you run your tests you then see how many lines are not covered by a test. That means that no test was run where this line of code was evaluated or executed or interpreted.

When you reach 100% code coverage, what then? Are you done? Could you guarantee that there are absolutely no bugs in your code?

If you are tempted to say Yes, or “maybe?”, then let me tell you that you are wrong.

Consider this piece of code.
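The original snippet isn’t preserved here; a hypothetical Ruby reconstruction with the same shape would be:

```ruby
# Hypothetical reconstruction: the trailing `if` guard sits on the same
# line as the eval, so the LINE counts as covered whenever the method
# runs -- even when the eval itself never does.
def sound_the_alarm(emergency)
  eval("Alarm.trigger!") if emergency
end

# A test that never passes `true` still covers the line...
sound_the_alarm(false)
# ...although the eval'd code never ran. (If it had, it would raise:
# there is no Alarm class defined anywhere.)
```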

If you write a unit test for this method, the line with the eval... will be interpreted because of the if emergency at the end. The line is thus counted as covered. But the code itself is never executed, let alone tested.

Admittedly, this is a very trivial example that I made up. In reality, there are some more profound things to consider.

If you have complex conditionals you might need a logic table where you compare all possible combinations of the atomic parts of the conditional.

You cannot possibly evaluate this in your head and know whether you checked for every possible, sensible combination. Yet when you cover that line you are at 100% coverage and can go home, right?

So what do you do? Let’s look at this tomorrow.


We refactored some code yesterday, to move knowledge about an implementation of a function back to where it belonged: Into the function’s class and into the function itself.

Today I want to talk about another topic that often comes up when you are refactoring, or plainly “changing code.”

It happened the other day, during a workshop I was doing on software testing. The participant wanted to apply his new knowledge and write tests for a given JavaScript class. He had written the code a few weeks before we did the workshop. During the hands-on part of the workshop, he wanted to add tests, to make sure everything worked and to make sure that he understood what I had been talking about.

We were lucky insofar as the thing happened that usually happens next: he noticed that his code was not easily testable. The design of his class made it harder for him than he would have liked. We talked about the problem, and he noticed the source of it: his class had too many responsibilities. He extracted the code in question into a new service and could then mock that service when testing his original class. This was good. He was ecstatic. He made progress!

Having gained so much momentum, he went overboard: During his test-design, he wanted to be too clever and tried to write an elaborate test setup which was to be reused between different test runs. It was supposed to be a reusable, parameterizable do-it-all function that could setup the tests just right. With no duplication. In short, it was so much code with so much logic that it would’ve warranted its own tests. And the worst thing? It didn’t work and made trouble for writing his tests.

I was a bit thankful because that gave me the opportunity to tell him:

Premature optimization is the root of all evil.

Perhaps you’ve heard that saying already. Donald Knuth coined this phrase. There’s more to it, but that could be discussed in another email.

Back to my tester. After talking about the problems his function gave him and the difficulty of getting it right, he settled for the simple solution: Write your tests. Accept duplication. Keep it simple, use copy & paste if it makes you faster and is more convenient. Write the tests you need and keep them green. And after all that, and only then, refactor your tests to remove duplication where applicable. Don’t try to write the perfect code from the start. Let the design evolve with the help of your tests. Don’t be afraid to take baby steps, and don’t expect to have perfect code after the first try.

I hope you liked this series. Perhaps you could take something away from it. If you have questions, let me know.

Tomorrow I’ll be off; it’s a holiday where I live, and I’ll use it to spend time with my family. See you on Monday. 👋


Yesterday I closed with this idea:

Spot places where knowledge about something does not belong.

What do I mean by that? Sometimes I come across some code that does not read right. I will use a pseudo code example to illustrate:

class Foo
  def initialize(bar_service)
    @bar_service = bar_service
  end

  def quux
    if @bar_service.greeting == "hello"
      @bar_service.greet("goodbye")
    else
      @bar_service.greet("hello")
    end
  end
end

class BarService
  attr_accessor :greeting

  def greet(message)
    @greeting = message
  end
end

What bothers me with this code? The method quux has too much knowledge about how the @bar_service works. Foo.quux knows that the @bar_service has an instance variable called greeting and at least one specific value it might have ("hello"). It also knows two values that the greet() method might be called with. Now it happens that this knowledge about how greeting and greet work is also spread into other parts of the application. What happens if you need to change something about the greet() method? You have to find every place in your application and update it to reflect the new changes. This isn’t good.

There are places like this inside many applications. You might not spot them right away, but with some practice it becomes easier. For this example I suggest moving all knowledge about how greeting and greet work into the BarService. Start with the conditional, like this:

class Foo
  def initialize(bar_service)
    @bar_service = bar_service
  end

  def quux
    @bar_service.greet
  end
end

class BarService
  attr_accessor :greeting

  def greet
    if @greeting == "hello"
      @greeting = "goodbye"
    else
      @greeting = "hello"
    end
  end
end

Now we are free to change the internals of the greet method. We could add a third option or change it completely. The class Foo does not need to change at all. It continues to call greet as if nothing has happened.

One of my overarching topics is testing. A refactoring like this should be covered with tests. Not only do you need tests for the Foo class, but also for how BarService.greet works. And for every part of the app that interacts with either.
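A minimal sketch of such tests for the refactored BarService (bare assertions here; a real suite would use Minitest or RSpec):

```ruby
# The refactored service from above.
class BarService
  attr_accessor :greeting

  def greet
    if @greeting == "hello"
      @greeting = "goodbye"
    else
      @greeting = "hello"
    end
  end
end

# Pin down the behaviour Foo (and everyone else) relies on.
service = BarService.new

service.greet
raise "fresh service should greet with hello" unless service.greeting == "hello"

service.greet
raise "hello should flip to goodbye" unless service.greeting == "goodbye"

service.greet
raise "goodbye should flip back to hello" unless service.greeting == "hello"
```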

Tomorrow we’ll look at another way to do a refactoring.


Yesterday, we had the first part of this series on quick wins and simple steps to improve your code quality. It was about naming — specifically variables and method names.

Two things were not 100% right in these examples.

The first error

You might have noticed that my loop examples were written in Ruby code. Yet the method name doSomething was written in camelCase. This is unusual for Ruby code where developers tend to use snake_case for method names. I did not lint that email. Hence no robot told me about my error. I believe it is a good example of the benefit of automatic linting. This error would have been found. If you read the code and were put off by this naming scheme, you even experienced why conventions and rules are necessary: Because coding by the rules helps developers to focus on the semantics of the code, not the syntax.

The second problem

Do you remember that I wrote about JavaScript loops and gave the classic for loop example? I complained about the i variable and that it should be called iterator. Perhaps you did not like this idea and my example? Let me take a step back for a second. When naming variables and method names, you have to make sure they “speak.” The names should indicate their meaning and make it easier for another developer to understand the semantics of your code. Yet, when you are fully aware of the problem domain you code deals with, it can happen that you try to be too specific. If you are, you tend to use long, verbose names for variables and methods. An example could be this:
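The example itself didn’t survive into this post; a hypothetical reconstruction, based on the surrounding discussion, might look like this:

```ruby
# Too specific: one name trying to say everything at once,
# well past a typical 30-character limit.
class Article
  def verify_and_fix_meta_tags_for_search_engines
    # ...
  end
end

# Broken into two shorter, focused methods:
class Article
  def verify_meta_tags
    # ...
  end

  def fix_meta_tags
    # ...
  end
end
```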

A common value for the allowed length of method names is 30 chars. The above example could be broken into two methods.

The next possible quick win might be to think about the existence of these methods inside the Article class. Verifying and fixing meta tags surely does not need to be done inside this class. If you want to follow the Single Responsibility Principle, you should make sure that the Article class does not have a reason to change, if you decide to change something about verifying and fixing meta tags. Rather the Article class might change when you decide that an article should have a mandatory sub-headline.

Back to the iterator. This name could count as too specific. Only when i refers to a variable that is declared and initialized outside of the scope of the loop does it make sense to choose a different name. Or does it? As always, it depends. The classic for loop is taught in almost every book on JavaScript, and it is easily identifiable for what it is. But there might be reasons to deviate from that, as I indicated above.


To summarize:

  1. Choose good names. 😎
  2. Spot places where knowledge about something does not belong.

Tomorrow we’ll look at another example of point 2 and a practical idea of what to do about it.


Good software needs good code. If you want to achieve a high quality in what you ship, you need to care for the quality down to each method you write.

I want to use this week to write a small series on techniques and ideas about how to increase your code quality. When I look at code, it is often possible to find spots in the code where a simple change can be made. In some cases it’s even an easy tweak. Some of these examples will come from the actual code that I worked on. Others will be created by me, for this series. You won’t see any code from my clients, of course. The only thing I take from them is the inspiration. And money. 🤣


A good place to start is to look at variable names. If you have a call to .map() or .each(), then take a look at what you are iterating over. Is it a list of Book objects? Then you should call each item that you are iterating what it is: book.

# this is not good
items.map do |i|
  i.doSomething
end

# this is better
list_of_books.map do |book|
  book.doSomething
end

This would take care of the naming of some variables.

In classic JavaScript loops, you often see a variable called i:

for (i = 0; i < array.length; i++) {
  // something happens here
}
Well, what’s this i anyway? If it’s an iterator, why not call it that? Even worse is when you combine i with a j and a k: for (i = 1, j = 0, k = 150; i <= 5; i++, j += 30, k -= 30) { /* do work */ } (This is copied from a Stack Overflow answer.)

I bet you a non-trivial amount of money that you won’t be able to tell me without looking it up what these variables refer to 9 months after you wrote code like that.

Will it take a small amount of extra time to come up with a proper name and use that instead? Probably. Will this extra time be saved every time a(nother) human reads that code? Hell yes!

A possible next step would be to change something about the doSomething() method. What the hell does it do? Why doesn’t it tell us already from its name? In this case? Because that’s just pseudo-code for you 😜 But please make sure that you use proper and valid names for your methods and variables.


My work as a consultant offers me the opportunity to accompany a team for a certain amount of time. I join them, we work together, and then I leave again. Our time together gets extended, sometimes. This model has the benefit that I get to know a lot of people and teams — and how they work.

Do you know the Pareto principle? It’s also called the power law distribution or the 80/20 rule. A simple explanation would be that 20 percent of the people own 80 percent of the wealth of the whole world, which makes it relatable and understandable. Only that it’s wrong. By now 10% hold 90% of the wealth already. And it’s getting worse. I don’t have any sources on this right now, and I won’t go looking. Because it was only meant as an image of how this works.

Back to my clients…

There is a similar 80/20 distribution to be found: 80% of software development teams make the same mistakes over and over again. It starts with a new green-field project. A year of development work passes. A lot of code gets written. And after a year the team is frustrated with their software again and doesn’t find a way out. This is bad practice.

If you find yourself in this situation, there are ways out of it. With a lot of intrinsic motivation and the ability to learn from mistakes and external sources, you might be able to turn the ship around and sail into the sunset, happily. But there are a lot of rocks under the water that might wreck your boat. An experienced navigator for these waters could prove beneficial.

In my opinion, a good start is to look to industry standards and follow those, as well as common best practices. Find and learn the rules of how other teams work. They might seem strange; you might not understand or like them. But let me tell you something: The way you worked up until now didn’t work and brought you into this mess. Doing things as you’ve always done them won’t help you a bit.

So, don’t be smart. Find rules. Follow the rules. Stick to them and don’t deviate. Re-evaluate in 6 months. It will hurt. It might not be fun. But it will get better.

If you need a pointer, let me know by replying.


Before I get to today’s topic, I would like to say thank you, to you. My little poem yesterday seemed to resonate with you. At first, I planned to write about it and its meaning today. But your responses indicated that it spoke to you. And I wouldn’t want to ruin this with my ramblings about it. So I’ll just finish with: I enjoyed this very much.

Lately, I spent some more time on Twitter. I don’t know how to use Twitter well (enough). I always have trouble with creating threads or topics. But I (re)discovered some very interesting people, sharing their ideas in long threads.

A few days ago I came across GeePaw Hill (@GeePawHill). I believe it was because I followed a few tweets by Kent Beck. GeePaw Hill had this thread where he encouraged people to refactor without caring for the application domain, only for the code. You can read the thread here. He even elaborated some more on his blog.

I find the idea fascinating and will continue to think about that.


Back then; I did it; I liked it much; Found it a necessary touch;

Never it challenged, I; then saw; A source without it; didn’t look too raw;

Since then it flows without it well Some purists call it living hell.


[…] we do have formal rules that we should obey when writing code. A team has rules, and new team members need to learn them before trying to write any code.

That’s what I wrote yesterday. It’s my email so I can write whatever I think is correct. You’ll let me know through your answers if I am wrong.

My friend Tino answered on Friday and asked whether a university degree or certificates might function as a driver’s license. And that is true to a certain degree. I am getting a new certificate these days as well. I hope to complete the exam on Wednesday (ISTQB Advanced Level — Technical Test Analyst).

The obvious difference to a driver’s license? I am not legally required to obtain one before I can start writing code. Tino also said that he’d find it interesting to be (self)tested in current web-standards and best practices. I do believe these tests are valuable. If I come around to create one, I’ll let you know.

Back to the beginning of the email. Why do rules matter to a team? Developers have their style for writing code. Even if there are rules and certain regulations you have to follow, developers still find ways to write code in their unique style. And that’s a good thing. It would be boring otherwise. Still, this style has to obey the rules. Here’s why:

  • The code won’t be too complex. Because your static analysis tools tell you if your cyclomatic complexity metric is too high.
  • Classes and modules will have low coupling and high cohesion. This leads to code that’s more easily testable and has higher reusability than other code.
  • If your tooling raises an error when explanatory comments are missing, you can make sure that your developers take some extra time to make the code easier to understand. Other rules, like variable and method/class/module naming conventions, have the same goal.
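For a Ruby team, rules like these could be encoded in a RuboCop configuration; the cop names are real, the thresholds below are made-up examples to be tuned per team:

```yaml
# .rubocop.yml -- example thresholds, adjust to your team's standard
Metrics/CyclomaticComplexity:
  Max: 7          # flag methods that branch too much
Metrics/MethodLength:
  Max: 15         # keep methods small and cohesive
Naming/MethodName:
  EnforcedStyle: snake_case
```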

In short: Rules help your team write code that is maintainable and has low technical debt. This reduces the total cost of ownership. If you only look at the cost of writing the code and delivering the software, the costs might be higher if you follow stricter rules. Over the complete lifecycle of a software product, though, the total costs will be lower because of better maintainability and a lower number of defects.

This is already getting long. See you tomorrow with even more thoughts on this topic.


We don’t have rules of the road for software development. You don’t have to stop at every red light or keep your speed below a specific limit.

Well, yes. We do have rules. If you write your whole program in only one file, someone will tell you that this is bad. At least I hope that’s the case! If you only use variable names like x or y, your coworkers will flag this during code review. Perhaps you already have static analysis tools that tell you before your coworkers do?

While we do not have a driver’s license, we do have formal rules that we should obey when writing code. A team has rules, and new team members need to learn them before trying to write any code. Otherwise, it could feel like driving on the wrong side of the road: Driving on the right side of the road feels natural to you if you’ve never done it any differently. But it can have dramatic consequences if everyone else expects you to drive on the left side.


Every country I know has an obligatory driving license before you are allowed to drive a car by yourself.

No country I know has an obligatory coding license before you are allowed to code by yourself.

The more time that has passed since getting the driving license, the more reckless and careless drivers become. What I mean is that new drivers are careful: with the rules, with their passengers, with other cars and people. The longer they drive, the more they bend the rules, pass yellow or red lights, or speed just a tiny bit. Everyone does it, why shouldn’t they do it as well? No one is looking anyway…

I won’t argue for a coding license. It would be fruitless anyway, and it would be too hard to establish a standard. But there are so many simple (not easy) wins you could have with the proper knowledge and attitude. So many fewer legacy systems and so many more successful projects.

When was the last time you compared your skills in coding with other people and had the possibility to spot places you could improve? How do you score your ability anyway? How do you decide what to learn or focus on?

Do you decide these things before you start to code on a project, or do you only find out in hindsight, when issues arise or new features begin to get harder to realize? Do you keep a list of problems you identified and make sure you avoid them in the future?

I’ll take the next week to look into this more.


I came across a very interesting article written by Herb Caudill, on different perspectives on rewriting software. He highlights 6 different stories of how a rewrite went. You can read about Basecamp, Netscape Navigator, Gmail, and others.

It is a long article, over 30 minutes according to Medium.com. And I am also sorry for linking to Medium. I don’t like them, and I resent sending them traffic. But this story might be worth it.

Here you go: Lessons from 6 software rewrite stories

NB: I do have experience with software rewrites myself. I took part in two endeavors. One was a client application. The legacy app was written in Rails 2 (I believe, it might have been Rails 3) and heavily patched. This made maintenance and feature development quite expensive. We rewrote the software but kept quite close to the original in functionality. The rewrite enabled us to use modern gems and solutions we had created in-house for other clients. This created synergies. It was an ambitious project. In the end, I think it didn’t make too much sense, financially. But I can’t be sure about that one.

Another project where I helped on a rewrite concerned frameworks for iOS applications. We had customers that wanted to publish iPad magazines on the App Store. To make it easier for themselves, previous developers had written a custom publishing framework. This framework was reused on every project. It was extendable, reusable and efficient. But it was also difficult to handle and very limiting with regards to layout and design of the magazines. Which was a problem for the clients. So they set out to rewrite this. I joined the company while the project was still in progress. I left 2 years later. The project was still ongoing. The rewritten framework was used in every client project, alongside the older framework. For some features, you had to use the old one, because they weren’t yet supported on the new framework. For other use-cases, you had to use the new framework. Especially for certain pages in the magazine, with new layouts. Yeah, it sucked.


Matthias Berth is a German expert on software delivery and software quality. He politely disagreed with me on the idea that you should delete all your tests.

I decided to call this day “Video Wednesday” and record an answer as a video. I just posted it on LinkedIn, and thought you might like to watch it there.

It even has subtitles 😉


How can I get my coworkers to write better code?

We closed with this question yesterday. If you want to be able to motivate your coworkers to write better code, you have to know where they stand right now. I already wrote a few articles on this topic. Follow these links, and you’ll get a good idea of what to measure, how, and why. You’re welcome.

After reading and measuring and talking with your coworkers, you are still left with the idea of motivating them.

I am good with code and perhaps with words (you decide). Motivation is a “people topic.” This is psychology. I do know a good book, a classic that you could read: Peopleware: Productive Projects and Teams. The contents of the book will help you understand how to build a great team, how to motivate people, and how to understand their goals. You could adapt this knowledge and help them write better code. Other ideas:

  • Invite me to give a workshop on software design and architecture, testing or how to write better code

  • Try to use gamification. This could mean that you publicize the code quality metrics and make it a game to increase the score. The weekly winner gets a prize (a half-day off work?). Better: Make it a team effort. Let them all be winners, because it was probably not one person that created all the code in the first place, right?

  • Send them to workshops, conferences or use learning sessions to help them understand how a better/different way to write code would help them with their job.

I sincerely believe that engineers, programmers, developers, coders, and hackers (which one are you?) take pride in their work. They always try to do the best they can. I haven’t yet met a single person who deliberately wrote shitty code. Perhaps it was a byproduct of too little knowledge or experience. But never was it their intent. If you help them level up, they will get better. And your code and products will too.


81% of respondents who were satisfied with their code review process were also satisfied with the overall quality of their software. Respondents who were not satisfied with their code review process were half as likely to be satisfied in their overall software quality, with only 40% respectively.

This is from a research study done by SmartBear.

This image tells us that the majority thinks code review is the way to go to increase code quality. This might be true, or it might not. The quality of a review depends 100% on the knowledge, the ability to communicate, and the time a reviewer takes to dive into the code. Static analysis is way down in second-to-last place. Unit testing is right behind code reviews.

I could argue either way. My problem with this study is the term code quality. Code quality to me means that you talk about things like coupling/cohesion, readability, and maintainability, adherence to standards, low bug count. Quality metrics your code exhibits. Code review is not the best tool to increase this metric. Robots and static analysis are. You need impartial tools that hold you to a strict standard. People don’t do this. They are lazy. If you talk about software quality, on the other hand, that’s where you need people. Thinking about and discussing software architecture and design, debating about usability. Fine-tuning the visual design of a product. This is where reviews are the go-to tool for the job.

I guess this distinction was unclear to most participants. That’s a shame.

You can read the study here. I don’t link to their signup form for the study. They ask details like your phone number before you can download it. That’s bad practice, hence the direct link. I’d be happy to hear your thoughts on this. Do you make a distinction between code and software?


81% of respondents who were satisfied with their code review process were also satisfied with the overall quality of their software. Respondents who were not satisfied with their code review process were half as likely to be satisfied in their overall software quality, with only 40% respectively.

This is from a research study done by SmartBear.

This image tells us that the majority things code review is the way to go to increase code quality. This might be true, or it might not. The quality of a review depends 100% on the knowledge, ability to communicate and the time a reviewer takes to dive into the code. Static analysis is way down in second to the last place. Unit testing is right behind code reviews.

I could argue either way. My problem with this study is the term code quality. Code quality to me means that you talk about things like coupling/cohesion, readability, and maintainability, adherence to standards, low bug count. Quality metrics your code exhibits. Code review is not the best tool to increase this metric. Robots and static analysis are. You need impartial tools that hold you to a strict standard. People don’t do this. They are lazy. If you talk about software quality, on the other hand, that’s where you need people. Thinking about and discussing software architecture and design, debating about usability. Fine-tuning the visual design of a product. This is where reviews are the go-to tool for the job.

I guess this distinction was unclear to most participants. That’s a shame.

You can read the study here. I won't link to their signup form, because they ask for details like your phone number before you can download the study. That's bad practice, hence the direct link. I'd be happy to hear your thoughts on this. Do you make a distinction between code and software?


List of programming principles

This is a nice list of principles you could (or should?) follow in your programming.


Disclaimer: I forked the repository from the original source. I want to preserve it for you since I don’t know what will happen to the original.

While things like YAGNI and KISS are rather well known, the list also contains less familiar ideas that are put quite well:


  • Encapsulate What Changes
  • Orthogonality
  • Inversion of Control

To be clear: This list is nothing new. I do like the way they put it together, the idea that it’s growing and the further resources they link to.
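To give one concrete taste of the less famous entries: Inversion of Control, in its dependency-injection flavor, fits in a few lines. The class and method names below are made up for illustration.

```python
class CsvSource:
    """A concrete data source. Could be a database, an API, a file."""
    def rows(self):
        return [["a", 1], ["b", 2]]

class Report:
    # The data source is injected by the caller. Report no longer
    # controls which concrete source it uses -- that control is
    # "inverted" and handed to whoever constructs the Report.
    def __init__(self, source):
        self.source = source

    def total(self):
        return sum(value for _, value in self.source.rows())

report = Report(CsvSource())
print(report.total())  # 3
```

Swapping in a database-backed source later means changing one constructor call, not the `Report` class, which is the whole point of the principle.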

Have a look!



Chad clicked the button and created his pull request. He had worked on his feature for a really long time; it must have been three weeks. Finally it was time to integrate his changes into the master branch so the new version of the app could go live. This was the first big feature that was his responsibility. His team lead, Janet, had given him the ticket for the task, and he had set out to write his code. “When it finally goes live,” he thought, “the churn of our users should drastically go down!”

Primarily, he had found many places in the app where users performed actions. Until now, these actions weren’t registered anywhere, so nothing and nobody tracked them. There was no record of what the user did or didn’t do inside the app. Marketing had alerted management that too many users did not renew their accounts, or outright canceled, and management in turn asked the developers to do something about it. Together the team decided to record all user actions in an activity log per user. This way they could calculate which users were inactive in the app and reach out to them before they abandoned the application, or so they hoped. Developing the feature let Chad take a thorough look at the whole application; after all, he had to integrate his code in all kinds of places. And so he did. Today he was finally ready to publish the code and begin the merge. To integrate his changes into the master branch so they could get deployed, he had to create a pull request. “Since my code is well written and worked when I tested it, this merge shouldn’t take too long,” was his conviction. He assigned the pull request to his team lead and to another backend developer he had talked to during lunch a few days earlier. Chad had used the lunch to tell her about his progress on the feature, and she seemed interested. Another set of eyes shouldn’t hurt.

When Sarah, the backend developer on the team, started work the next day, she saw the notification for the pull request in her email inbox. She decided to take a look. After all, Sarah didn’t like it herself when pull requests lingered for too long. So she dove right in. The first thing she noticed was a warning: the pull request couldn’t be merged because it had conflicts with master. She figured it was something minor, because it happened rather often, and decided to ignore it. Chad would just do a quick rebase, and everything should be fine. No need to worry. She proceeded to look at the changes: 4100 lines were added, and 2521 were removed. Many files were changed as well, because Chad had copied his code into all the different places. When she looked at the diff, things seemed strange: the code in all these different parts of the app didn’t look the same. Sometimes Chad’s code style fit its surroundings, but most of the time it didn’t. At least his own code looked the same everywhere — because he made sure to pay attention when copy-and-pasting his changes, as he later said proudly.

Anyway, Sarah didn’t want to spend too long on this review. Since Chad’s code looked the same everywhere, she reviewed two files, and what she saw made sense to her. Sarah remarked that it was somehow hard to follow what was happening because he didn’t follow her style in the parts that she had written. She especially disliked the variable names, but she would give his pull request a thumbs-up if he changed them in her part of the code.

Janet took a look the next evening, since she had had lots of meetings to attend. The first thing she did was ask Chad what this pull request was all about: he hadn’t written any description for it. She also remarked that the conflicts with master were due to this feature taking so long. Many parts of the app had already evolved past their weeks-old state, so there was a lot of work to update everything. She wanted him to proceed quickly, since this feature had already taken too long. Marketing was eager to get something into their hands, because users were leaving the app left and right.

It took Chad another two days to update his code and bring everything in line with the changes from master. Eventually the conflicts were gone, and Sarah and Janet thumbs-upped the pull request to be merged into master. It went live after weeks of development with only a very superficial review. Marketing began to receive logs of who was inactive in the app and where active users spent most of their time. They also received angry calls from customers: the app was slower to respond than usual, and it sometimes just crashed — in places that used to work before.

It turned out that Chad’s changes were not tested, neither automatically nor manually. The changes also blocked the main thread of the application with a synchronous network call that sometimes took very long (depending on the user’s network) and sometimes just hung, crashing the app. Since there was no style guide for the developers, the code looked different everywhere you looked, and no linters enforced any style at all. The copy-and-paste changes that Chad had integrated made for a tedious find-and-replace once the team had to fix the bugs. And it was all over the app. So fixing things took another two weeks, with no end in sight.
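The main-thread bug in this story is a classic, and the usual fix is small. Here is a sketch using Python's `threading` module; `slow_network_call`, its delay, and the callback shape are stand-ins invented for this example, not the story's actual code.

```python
import threading
import time

def slow_network_call():
    # Stand-in for the real request, which "sometimes took very long".
    time.sleep(0.2)
    return {"status": "ok"}

# Broken variant (what Chad shipped): calling slow_network_call()
# directly on the main thread freezes the whole app for the duration.

# Fixed variant: run the call on a worker thread and deliver the
# result through a callback, so the main thread stays responsive.
def log_activity_async(on_done):
    def worker():
        on_done(slow_network_call())
    thread = threading.Thread(target=worker, daemon=True)
    thread.start()
    return thread  # returned only so a demo/test can join it

results = []
thread = log_activity_async(results.append)
print("main thread is free immediately")  # runs before the call finishes
thread.join()  # only so this demo can show the delivered result
print(results)  # [{'status': 'ok'}]
```

In a real app the worker would also need a timeout and error handling; a hung request should fail the log entry, never the app.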

In the end, management canceled the whole feature, because it just didn’t work for most of the users. They replaced it by integrating their web analytics engine instead: a radically simpler approach, but one that worked right away.