
When Not to Test Code?

Fred Lackey

2022-01-28, 6:47 AM

I know. I know. Settle down.

No, I’m not advocating that you ignore writing tests. I’m simply saying there is a time and a place for everything and, if you want to actually ship a product instead of just talking about shipping one, you need to know when to execute. And, that includes knowing when to add tests.

This conversation came up when talking to a non-technical project manager this morning. So, my apologies if it’s written in that voice.

Also, this is my opinion. At the time of writing, I’ve been getting paid to write code for… wait for it… 40 years… yes, forty years. And, during that time, I’ve been either the sole developer or a primary developer on many successful software products. So, I know a thing or three about this stuff. But, even with all of that, I am not saying this is the answer. It is simply my opinion.

What is a test?

An “automated test” is a set of rules and logic crafted by a developer to verify that their software keeps working as intended in the future. These tests run automatically and return a “pass” or “fail” result based on the rules the developer crafted. They can be very useful but are a very slippery slope. Here’s why…
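
To make that concrete, here is a minimal sketch of a single automated test in TypeScript, using a Jest-style runner. The `isValidUsPostalCode` function is a hypothetical example for illustration, not code from any real product:

```typescript
import { test, expect } from "@jest/globals";

// A hypothetical function under test.
function isValidUsPostalCode(code: string): boolean {
  return /^\d{5}(-\d{4})?$/.test(code);
}

// The runner executes this rule automatically and reports "pass" or "fail."
test("validates US postal codes", () => {
  expect(isValidUsPostalCode("30301")).toBe(true);   // a normal ZIP code
  expect(isValidUsPostalCode("3O3O1")).toBe(false);  // letters snuck in
});
```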

Test-Driven Development (Don’t let people do this… period.)

Roughly two decades ago, someone came up with the idea of writing a massive amount of test rules up front (usually in lieu of having solid architecture documents) and then forcing their team to write code to satisfy those rules. The general concept is that a coding task is considered “complete” once all of its tests pass. The failure people rarely discussed is that you generally had so many tests, and so much specialized code written to support those tests, that your development time took twice as long. What’s more, your developers now had to become proficient in both the application’s logic and the logic within the tests. Sadly, every so often new developers will discover this technique and advocate for it without really knowing what they’re in for.
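
For anyone who hasn’t seen it firsthand, the workflow looks roughly like this. A simplified sketch with hypothetical names; the test comes first, against code that does not exist yet:

```typescript
import { test, expect } from "@jest/globals";

// Step 1 ("red"): write the test FIRST, before any implementation exists.
test("sums the line-item amounts on an invoice", () => {
  expect(sumLineItems([10, 20, 12.5])).toBe(42.5);
});

// Step 2 ("green"): only now write the code, and only enough of it
// to make the test above pass.
function sumLineItems(amounts: number[]): number {
  return amounts.reduce((total, amount) => total + amount, 0);
}
```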

Procrastination

There’s always a coding task that nobody wants to touch. It may be incredibly boring or too much of a “brain burner” to be excited about. Just like someone pushing their morning workout to some imaginary “sweet spot” after work, developers sometimes lie to themselves about the importance of working on tests instead of actually getting work done. And, just like pushing off that morning fitness routine, this type of decision-making is seldom intentional.

Premature Test-ification

Generally speaking, any software module or application has three phases: discovery & planning, development, and maintenance. Even when developers are actively coding, the majority of their time is usually spent up front in the discovery & planning stage. This may include learning how to work with an upstream API, fleshing out actual source code patterns for the team to use, or testing a concept they think may work. The real development… the meaningful development… the permanent code… doesn’t actually start to be written until all of those boxes are ticked. And, until that moment, the codebase is a bit of a “moving target.” Any tests written during these early phases are basically disposable since there is a massive chance they will no longer be correct and will need to be rewritten. Essentially, they are what is considered “throwaway” code at that point.

Anti-Patterns / Lack of Standards

One of the things that annoys most developers is having to take over someone else’s code when it is written in a drastically different style from their own. Part of the benefit of using tests is to help enforce coding conventions, standards, and styles. Unfortunately, going back to the “premature test-ification” concept, in many cases those differences are acceptable and even encouraged. Startups, new development teams, and brand-new software projects are all perfect examples of this. More often than not, in those scenarios, the primary focus is delivering that first prototype or initial product. Additionally, in those environments, you usually have a mix of skillsets. And, while that mix of skillsets will normalize over time, those scenarios generally offer little opportunity for downtime as it is; increasing the team’s burden by trying to normalize disconnected skillsets often leads to a dramatic decrease in productivity or, even worse, an increase in the product’s error rate.

Where do tests fit in the source code?

In general, the three common types of tests are meant to SEAL a codebase, help guarantee binary compatibility between components, and protect the product from unintended code changes. More specifically…

Unit Tests

Every software product is made up of a large number of smaller “modules” or “components” that work together to accomplish a larger task. For example, your email application probably has a “Contact” component for collecting info on the people you send messages to. That Contact component would then have an “Address” component for storing the person’s mailing address. And, that Address component might have a “Postal Code” component to handle the validation of the various postal code formats around the world. Once they’re complete, each of those smaller components or modules generally has its own small set of tests that exercises only its tiny chunk of logic, as in the sketch below.
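
Here’s a minimal sketch of what that might look like for the hypothetical “Postal Code” component, again in Jest-style TypeScript. The function name and validation rules are illustrative assumptions, not code from a real mail client:

```typescript
import { test, expect } from "@jest/globals";

// Hypothetical "Postal Code" component from the email-app example above.
export function normalizeUsPostalCode(raw: string): string {
  const trimmed = raw.trim();
  if (!/^\d{5}(-\d{4})?$/.test(trimmed)) {
    throw new Error(`Invalid US postal code: ${raw}`);
  }
  return trimmed;
}

// Unit tests exercise ONLY this tiny chunk of logic -- no Contact,
// no Address, no database, no network.
test("trims surrounding whitespace", () => {
  expect(normalizeUsPostalCode(" 30301 ")).toBe("30301");
});

test("accepts ZIP+4 format", () => {
  expect(normalizeUsPostalCode("30301-1234")).toBe("30301-1234");
});

test("rejects malformed codes", () => {
  expect(() => normalizeUsPostalCode("3O3O1")).toThrow();
});
```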

UI Tests

This one’s usually the most complex and ambiguous type of test to complete. Should a software solution have a user interface of any kind, whether it’s web-based or installed on a user’s workstation, it is common to “seal” the user interface project with a set of UI tests. These tests generally use a “headless” mechanism that quietly launches the user interface and programmatically mimics the actions a user would normally perform. From clicking buttons to scrolling to typing in text boxes, anything a user would do, the test mechanism performs those steps and reports back whether the expected outcome matches the actual outcome.
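
Here is a small sketch of that idea using Playwright, one common headless UI-testing tool. The URL, field labels, and button names are hypothetical placeholders for the email-app example:

```typescript
import { test, expect } from "@playwright/test";

// Playwright drives a real browser invisibly ("headless") and mimics
// the clicks and keystrokes a real user would perform.
test("user can compose and send a message", async ({ page }) => {
  await page.goto("https://app.example.com/mail");

  await page.getByRole("button", { name: "Compose" }).click();
  await page.getByLabel("To").fill("friend@example.com");
  await page.getByLabel("Subject").fill("Hello");
  await page.getByRole("button", { name: "Send" }).click();

  // Report pass/fail based on the outcome the user would expect to see.
  await expect(page.getByText("Message sent")).toBeVisible();
});
```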

End-To-End Tests (Integration Testing)

One of the final types of tests is an overarching test meant to ensure that all of the pieces designed to work together are actually capable of interacting as expected. For example, the act of a user clicking on part of the user interface probably sends data to a server and, more than likely, that server talks to other servers. An end-to-end test is meant to ensure that an event taking place at one end of the technology stack causes the expected chain of events at the other.
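
Sticking with Playwright and the same hypothetical email app, an end-to-end test might drive the UI and then ask the back-end API directly whether the rest of the chain actually fired. Every URL and endpoint here is an illustrative assumption:

```typescript
import { test, expect, request } from "@playwright/test";

// End-to-end: trigger an event at one end of the stack (the UI) and
// verify the chain it should set off at the other end (the server).
test("sending mail creates a record on the server", async ({ page }) => {
  await page.goto("https://app.example.com/mail");
  await page.getByRole("button", { name: "Compose" }).click();
  await page.getByLabel("To").fill("friend@example.com");
  await page.getByRole("button", { name: "Send" }).click();

  // Ask the back-end API directly to confirm the message actually
  // arrived there, rather than trusting what the UI reports.
  const api = await request.newContext({ baseURL: "https://api.example.com" });
  const response = await api.get("/v1/messages?to=friend@example.com");
  expect(response.ok()).toBeTruthy();
  const messages = await response.json();
  expect(messages.length).toBeGreaterThan(0);
});
```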

So when should tests be added?

In short, the initial addition of tests should proceed when all of the following are true:

a. Design and prototyping stages are 100% complete for the module, component, or feature in question. Remember, tests are meant to ensure binary compatibility as the codebase matures;

b. Your team is fluent in the proposed testing library or technology. Give them time to ramp up and prove their skills if they are not; and,

c. The project’s lifecycle is comfortable enough, and deadlines are far enough away, that the act of adding tests will not jeopardize the project.

Go ahead. Flame me.

I get it. You’ve read this and you’re pissed. That’s cool. Again, it’s just an opinion. I’m not saying I’m right or that I know everything. All I know is that this logic has served me well in the past four decades of professional software development. Take it or leave it.

Feel free to comment if you have questions or think I’m a nut job.

Posted in Programming
© 2024 Fred Lackey, All Rights Reserved.