If you buy a piece of furniture, like a chair, you can compare it with all the other chairs in the showroom. You can sit on them and compare how they feel. You get a feel for the wood, whether it's cheap or high quality.
How do you do that with code? Once the software is released, you can compare and evaluate it. But before that?
Consider the case that you are looking for an OSS library for your project. You find three contenders (often there are more, I know!). How do you know which one is best? If they offer demos, you'll probably compare those, right? I know I would. I did it just yesterday. But if two are more or less equally good at fulfilling your use case, what do you do next? Do you compare the source code? Good idea, I'd say. Feasible if the libs are small. Impractical for bigger ones.
Alright, I believe what most of us do—perhaps even as a first step—is evaluate the library's popularity. If more devs use library A, it's probably better, right? Perhaps it's written by someone you've heard of before? Even better! More credibility for this lib. But… did you look at the actual code? Do you have any (hard) metric for its quality?
There are tools you could use, like Code Climate and others. But they might only tell you half the story.
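To make the idea of a "hard metric" concrete, here is a toy sketch in Python. It counts branching nodes per function as a rough proxy for cyclomatic complexity—far cruder than what tools like Code Climate actually compute, and the sample source below is entirely hypothetical—but it illustrates why such numbers only tell half the story: a low branch count says nothing about naming, design, or fitness for your use case.

```python
import ast

# Crude proxy for cyclomatic complexity: count branching AST nodes
# inside each top-level function of a module's source.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

def branch_count(source: str) -> dict:
    """Return a {function_name: branch_node_count} map for the given source."""
    tree = ast.parse(source)
    counts = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            counts[node.name] = sum(
                isinstance(child, BRANCH_NODES) for child in ast.walk(node)
            )
    return counts

# Hypothetical sample module to score.
sample = """
def simple(x):
    return x + 1

def tangled(x):
    if x > 0:
        for i in range(x):
            if i % 2:
                x -= 1
    return x
"""

print(branch_count(sample))  # → {'simple': 0, 'tangled': 3}
```

Both functions might be "correct," and the metric flags one as more complex—but it cannot tell you whether that complexity is essential or accidental. That judgment is exactly what the numbers leave out.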
To get to the point: we do not have a reliable, universal way to judge code quality—and, in essence, our work. The only thing that comes close is peer review and your colleagues' judgment.
What do you think? How do you measure the quality of (your) work?