Software Principles 4/5: Efficiency


Principle #3: Efficiency

“The Only Way to Go Fast Is to Go Well”

In Part 1 of my blog series on Four Principles of Software Development, I proposed the P.E.E.R. acronym (People, Effectiveness, Efficiency, and Relevancy) to describe four principles that set a senior developer apart and define how software ought to be done. The third of these principles is efficiency. Efficiency is all about one thing: speed. The speed at which you can build, test, deploy, and modify software is one of the most important factors in software development. The motivation behind so many technical decisions, whether to refactor code, add automated tests, restructure a team, hire new developers, or adopt another technology or framework, often distills down to one thing: increasing development speed. As a developer, an important part of the job is to always be on the lookout for ways to increase efficiency.

But how can we go faster? Is it by cutting corners, skipping tests, forgoing code reviews, and never stopping to refactor? To an inexperienced developer, all of those things can seem like great ways to be more efficient, but in reality, those strategies will often only slow the team down in the long run by producing a fragile, brittle code base that cannot be easily modified. When it comes to efficiency, Robert Martin (aka Uncle Bob), author of Clean Code and founder of cleancoders.com, argues that it’s often better to have well-structured code that operates incorrectly than to have poorly structured code that operates correctly. This is because well-structured code can easily be changed to be correct, whereas poorly structured code cannot easily be modified as business requirements change. Thus, Uncle Bob concludes with his famous catchphrase: “The only way to go fast is to go well.”

I think there are four major components to building software efficiently:

  1. Code Architecture
  2. Clean Code
  3. Development Pipeline
  4. Automated Testing

Code Architecture

Uncle Bob states that “nothing is better at slowing you down [than coupling]”. This coupling could be database code coupled to business code, business code coupled to UI code, or UI code coupled to infrastructure code. In all of these cases, it is best to separate your code into layers or boundaries so that you can change one layer or area without making significant modifications to another. This can be achieved with techniques such as Clean Architecture and the SOLID principles introduced by Uncle Bob. This philosophy keeps your program flexible, which is what every piece of software needs. A good test of whether your code is architecturally sound is to think about swapping something out. Could you change the database structure? Could you change the entire database to a different vendor? Could you change the third-party vendors you integrate with? Could you change the UI framework you’re using? Now, I realize that not all of these will be feasible for every software project; the very nature of some projects might couple you to some of these technologies. But the idea is that good architecture will decouple these different areas of the project as much as possible. That’s what architecture is all about: decoupling code to help you go fast.
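To make this concrete, here is a minimal sketch of that kind of decoupling using dependency inversion. The names (Order, OrderRepository, PostgresOrderRepository, OrderService) are purely illustrative, not from any particular project:

```typescript
// The business layer depends only on this abstraction, not on any vendor.
interface OrderRepository {
  save(order: Order): Promise<void>;
}

interface Order {
  id: string;
  total: number;
}

// Swapping Postgres for another database only touches this class;
// the business rules behind the boundary are left alone.
class PostgresOrderRepository implements OrderRepository {
  async save(order: Order): Promise<void> {
    // SQL and connection details live here, behind the boundary.
  }
}

// Business logic: no SQL, no vendor types, easy to test and easy to change.
class OrderService {
  constructor(private readonly orders: OrderRepository) {}

  async placeOrder(order: Order): Promise<void> {
    if (order.total <= 0) {
      throw new Error("Order total must be positive");
    }
    await this.orders.save(order);
  }
}
```

Because OrderService only knows about the OrderRepository interface, a change of database vendor (or a fake repository in tests) never forces a change to the business rules.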

Clean Code

Another way to ensure you’re able to go fast is to write clean code. While software architecture usually refers to the structure, coupling, and cohesion of a system’s layers, clean code dives into the details of exactly how each layer should be written. It covers commit messages, naming, comments, code style, tech debt, refactoring, strategic techniques, and more. There are many books and articles written on this subject (namely, Clean Code by Uncle Bob), but the point is that each of these areas can be studied, and each has established best practices that keep your code easy to understand, easy to test, easy to deliver, and, perhaps most importantly, easy to modify. Again, the whole idea behind clean code is to help you go fast.
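As a small, hypothetical illustration of what “clean” buys you, compare a cryptic function with the same logic written using intention-revealing names:

```typescript
// Before: cryptic names and a magic number hide the business rule.
function chk(u: { a: number }): boolean {
  return u.a >= 18;
}

// After: the same logic reads like the rule it encodes.
const LEGAL_ADULT_AGE = 18;

interface User {
  age: number;
}

function isLegalAdult(user: User): boolean {
  return user.age >= LEGAL_ADULT_AGE;
}
```

The behavior is identical, but the second version can be read, tested, and modified without deciphering it first.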

Development Pipeline

A development pipeline refers to the processes and tools that take an idea or a feature from conception to deployment. This is one of the more advanced aspects of efficiency. It is often overlooked by small and large teams alike, but the rewards in speed can pay dividends if a team invests the time to create and maintain a good pipeline. Development pipelines usually consist of at least five major stages: design, implementation, verification, deployment, and monitoring. An ideal pipeline also supports parallelism: multiple features can occupy each stage at the same time, and features can overtake one another rather than being forced to keep the same order as they pass through the pipeline.

Design

The development pipeline often starts with some sort of requirements-gathering phase. I’ve already covered this in Part 3 of this blog series, so I won’t go into it here. After requirements, there is often a discovery, research, and/or design phase. These phases are typically led by product designers, but it’s critically important for developers to be involved (at least in a small capacity) as early as possible. I’m not saying that a whole team of developers should be present in every aspect of design, but it goes a long way to involve at least one developer for periodic check-ins. This helps ensure that developers and designers understand each other’s motives and can sync up and make changes quickly.

Implementation

Typically, the feature is implemented with code after design, but the design phase doesn’t have to be perfectly static. The feature can bounce back and forth between design and development, and sometimes the two can even merge. I’ve had several projects work well when developers and designers paired up and solved problems in real time. The point is that the value of good collaboration between developers and designers can’t be overstated.

Verification

After implementation, there is typically a verification phase. For the development team, this often includes code reviews. Code reviews are an important way to share knowledge across a team and to ensure that the code being checked in follows the standards and guidelines the team has agreed on. The larger a project is and the more developers it has, the more important it is to automate as many of these developer verifications as possible. Ideally, if there is an agreed-upon convention or guideline, it should be enforced automatically: formatting the code, checking standards and conventions with static analysis and linting tools, measuring test coverage, and so on. This practice is part of Continuous Integration, and the main idea is that as many of these verification checks as possible should be caught by an automated system instead of manual developer effort.
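As one possible sketch of what this automation can look like, a CI job might run a script like the one below and block the merge if any check fails. The specific tools named here (prettier, eslint, jest) are assumptions for illustration, not requirements; substitute whatever your project actually uses.

```typescript
// Hypothetical CI verification script: every check that can be automated
// runs here, so reviewers can focus on design rather than style nits.
import { execSync } from "child_process";

const checks: Array<[string, string]> = [
  ["formatting", "npx prettier --check ."],
  ["linting", "npx eslint ."],
  ["tests and coverage", "npx jest --coverage"],
];

for (const [name, command] of checks) {
  try {
    console.log(`Running ${name}...`);
    execSync(command, { stdio: "inherit" });
  } catch {
    console.error(`The ${name} check failed; blocking the merge.`);
    process.exit(1);
  }
}

console.log("All automated verification checks passed.");
```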

For other teams and stakeholders, such as design, product, and security, this phase might include hands-on testing or some kind of audit. As developers, our job is to make this as easy and asynchronous as possible for everybody involved. There are lots of techniques for this. On some projects I’ve worked on, a PR would automatically build, deploy to a test server, and notify the appropriate people to provide approvals. Other times, when the feature scenarios were too complicated, I’ve recorded screen-capture videos of myself demonstrating and explaining the feature. We can also familiarize ourselves with the audits other teams perform to see whether any of them can be automated.

Deployment

After all verifications are complete, the code must be deployed. The complexity of this step varies greatly depending on the nature of the project; I’ve seen some very simple deployments and some that are incredibly complex. Regardless of how complex the process is, every effort should be made to automate it and achieve what is referred to as continuous deployment. With tools like containerization, virtualization, deployment services, and many more, there is rarely a good excuse for not practicing continuous deployment.

Monitoring

Just because our code is deployed doesn’t mean that our pipeline is finished. We still need to monitor our application to collect things like usage data, crash reports, performance metrics, and user feedback. Again, as much of this as possible should be automated so that it can be fed back into the beginning of the pipeline.

Conclusion

A mature pipeline that satisfies the needs of all teams and stakeholders involved can have tremendous implications for development efficiency. In short, having a good pipeline streamlines the entire development process and helps you go fast.

Automated Testing

Automated testing is a very popular topic in software development, and there is no shortage of opinions and controversies surrounding it. Depending on the project, striving for 100% test coverage can be a great goal, but before we write automated tests (typically unit tests or integration tests), we have to ask ourselves why we are writing tests in the first place. For anybody who has written tests, it can sure feel like it takes longer, sometimes much, much longer, to complete a task when we have to write tests for it. I can definitely relate to that, and it is often true, but usually only in the short term. Many of the benefits of writing tests show up in the long term: years down the road, when you no longer remember all the details of how the code works, and for the many other developers who may one day need to modify it. When that day comes, the time it took you to write tests will likely pale in comparison to the time you’ve saved that future developer.

So, in my opinion, the reason we write tests is to help us go fast. And if they don’t help us go fast, then we should stop writing them. In fact, we should not only stop writing them but we should delete any tests that are not helping us go fast.

My opinions on testing have largely been influenced by the work of Vladimir Khorikov, especially his book Unit Testing: Principles, Practices, and Patterns. In it, he lays out four pillars of a good unit test: protection against regressions, resistance to refactoring, fast feedback, and maintainability (full explanations of the pillars are beyond the scope of this post, but I strongly recommend looking up Khorikov’s work). Each time we write a test, we should try to optimize its score on each of these four pillars. How to do that will depend heavily on the nature of the project and the team. Often, we can find a way to write tests that score highly on all four pillars. But if we find ourselves in a situation where we can’t, then we should not write the test, as it will likely do more harm than good.
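To make that a little more concrete, here is a minimal sketch of a test written with those pillars in mind. It uses Jest-style syntax and an illustrative ShoppingCart class that isn’t from Khorikov’s book: it asserts on observable behavior through the public API (resistance to refactoring), exercises real logic (protection against regressions), touches no database or network (fast feedback), and stays short (maintainability).

```typescript
class ShoppingCart {
  private items: { price: number; quantity: number }[] = [];

  add(price: number, quantity: number): void {
    this.items.push({ price, quantity });
  }

  total(): number {
    return this.items.reduce(
      (sum, item) => sum + item.price * item.quantity,
      0
    );
  }
}

test("cart total reflects every item added", () => {
  const cart = new ShoppingCart();
  cart.add(5, 2);  // two items at 5
  cart.add(10, 1); // one item at 10

  // Assert on what a caller can observe, not on the private items array,
  // so the test survives internal refactoring.
  expect(cart.total()).toBe(20);
});
```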

The value of testing is real and that value translates to speed. Uncle Bob said it best:

“But without a doubt the most important benefit of a good test suite is confidence. A well designed test suite with a high degree of coverage eliminates, or at least strongly mitigates the fear of change. And when you aren’t afraid to change your code, you will clean it. And if you clean it, it won’t rot. And if it doesn’t rot, then the software team can go fast.”

Robert Martin - The Clean Code Blog

Series

  1. Four Principles of Software Development
  2. Software Principle #1: People
  3. Software Principle #2: Effectiveness
  4. Software Principle #3: Efficiency
  5. Software Principle #4: Relevancy