Hi all,
Writing automated tests is a bit controversial - not because there’s anything wrong with the tests themselves, but because any time you spend writing test code is time you don’t spend writing product code. I get it - tests can take a long time to write, and they don’t *directly* provide value to the product. Thus, anyone focused on maximizing the velocity of new feature development naturally doesn’t want to spend half of their time writing test code.
So today I want to make the case for automated tests. I’m not going to dive into the details of the different types of tests (e.g. unit, functional, integration), or how to write “good” tests - instead I want to focus on the benefits of building automated tests, and the risks involved in skipping them.
I should also note - just because you write automated tests doesn’t mean they’re useful. You can write ineffective tests, or test the wrong types of things. For simplicity, this post assumes that you’re writing effective tests (which isn’t a given).
I’ll dive more into that down below, along with a few great resources, what I’ve been up to recently, and this week’s question.
- Kyle
Making a Case For Automated Testing
There’s a lot I could say on this topic - but I want to keep this short and to the point. Here are three reasons why you need automated tests.
Easier refactoring
You’re working on a large, multi-year project. Due to early design decisions and a change in business requirements, you need to perform a large refactor. This involves ripping apart your entire codebase, changing it to fit a new model, putting it back together…and making sure everything still works.
On a project of this scale, there’s a decent chance that you’ll make some mistakes during the refactor. You’ll catch some of these during manual testing - but unless you’re really thorough, you probably won’t find all of the problems. A well-written set of test cases provides peace of mind that your refactored code works…or quickly points out your bugs before they get discovered in production.
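To make this concrete, here’s a minimal sketch of the kind of behavior-level test I mean. The pricing example, the names, and the use of pytest are all mine for illustration - not the project I’m describing above:

```python
# Hypothetical pricing module plus its tests, runnable with pytest.
from dataclasses import dataclass


@dataclass
class Order:
    unit_price: float
    quantity: int
    discount: float = 0.0  # fraction, e.g. 0.10 for 10% off


def calculate_order_total(order: Order) -> float:
    """The externally visible behavior; the internals are free to change."""
    subtotal = order.unit_price * order.quantity
    return round(subtotal * (1 - order.discount), 2)


# These tests pin down behavior, not implementation details, so a refactor
# (new data model, different rounding strategy, etc.) either passes cleanly
# or fails loudly before it ever reaches production.
def test_total_without_discount():
    assert calculate_order_total(Order(unit_price=10.0, quantity=3)) == 30.0


def test_total_with_discount():
    assert calculate_order_total(Order(unit_price=10.0, quantity=3, discount=0.10)) == 27.0
```

Because the tests only assert on what the code is supposed to do, you can rip apart how it does it and the suite tells you right away whether anything broke.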
Save time on manual validation and catch bugs immediately
In college, assignments are small enough in scope that automated testing can feel unnecessary - you write the code, spend ten minutes manually validating that a few examples work correctly, hit submit, and never think about that code again.
In industry, your code has a much longer lifespan, and is much more complex.
If I were to manually test the project I work on at my job, it would probably take me a full day. Given that new changes are pushed to the codebase multiple times per day, thoroughly testing every one of those builds by hand is unrealistic.
But, that might be okay - we ship on a monthly cadence, so I could spend one day per month before a release validating that everything is working…right?
That’s a really bad idea.
Automated tests, assuming the use of continuous integration, catch bugs as soon as they sneak into the codebase. Every time I push code, I immediately know if I broke something, which means it gets resolved before my change is checked in. If you’re only testing once per month, pinpointing when/where bugs are coming from gets a lot more challenging.
Once you find the bug, you also need to fix it ASAP - assuming you’re about to ship, you don’t want to release unstable code. You need to either 1) work really fast to address the bug, 2) delay the release, or 3) back out the changes that appear to be problematic.
OK - you made some changes, and you think you resolved the bug…but just to be sure, do you spend another full day validating that you didn’t destabilize anything else when you applied the fix? After all, the change was made quickly at the last minute, and without knowing when the bug first showed up, how do you know you didn’t just address a side effect of the real issue?
Manual validation for large-scale projects takes a long time. It’s obviously still a good idea to try out your code before you push it - but a well-written set of test cases makes this process much less time-intensive and increases your confidence.
Reusability
Adding the first substantial tests to your project can be daunting. There might be a lot of infrastructure that needs to be built, which is going to chew into development time - or, depending on your experience with testing, you might not even know how to get started. That’s the bad news.
The good news is that once you build out that infrastructure and figure out how to write your first tests, the tests you add after that will be substantially easier to write and maintain. Most of my tests build on infrastructure that’s shared by dozens of other tests - it abstracts away the behind-the-scenes pieces that aren’t relevant to the behavior I’m checking, which makes the whole process a lot less painful.
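Here’s a rough sketch of what I mean by shared infrastructure - again using pytest and entirely made-up names. A fixture like this gets written once (typically in a shared conftest.py) and then every new test just describes the behavior it cares about:

```python
# Shared test infrastructure (hypothetical names throughout).
import pytest


class FakePaymentGateway:
    """Stands in for the real payment service so tests stay fast and offline."""

    def __init__(self):
        self.charges = []

    def charge(self, amount):
        self.charges.append(amount)
        return {"status": "approved", "amount": amount}


@pytest.fixture
def gateway():
    # In a real suite this fixture would live in conftest.py and be
    # reused by dozens of tests without being re-declared.
    return FakePaymentGateway()


# With the plumbing handled by the fixture, a new test is only a few lines:
def test_successful_charge_is_recorded(gateway):
    result = gateway.charge(25.00)
    assert result["status"] == "approved"
    assert gateway.charges == [25.00]
```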
Initially, manual testing might be faster than putting together those first tests…but in the long run, adding new tests becomes a lot quicker, whereas the time spent on manual testing scales linearly with every release. That trade-off might make sense for small or throwaway projects, but being able to add coverage for a new feature in a few minutes is a much better option than spending a few minutes doing manual validation…every single time you update or release your code.
Things Worth Reading
From The Archive
In Case You Missed It
(🐔Video🐔) I wrote “Hello, World!” in 50 programming languages…and it was painful.
(Post) Designing Your First Feature
This Week’s Question
I want to hear from you! Leave a comment on this story or email me at support@kylekeirstead.com with your thoughts/ideas.
Did you learn how to write test code in college? If so, was it just theory-based, or did you get hands-on experience?