As Mike Wacker of Google has discussed, end-to-end tests are known to be flaky, slow to run, and hard to diagnose when they fail: isolating the root cause of a failure is difficult. Part of these problems stems from anti-patterns appearing in such tests, and addressing those anti-patterns can make your tests more reliable, more useful in isolating root causes, and cheaper to maintain.
The following list of eight anti-patterns comes from my test automation experience. Some I found in legacy test suites that my teams and I inherited. Others were committed by candidates I interviewed for testing positions. A few come from fellow developers who helped us with test automation.
This usually happens when you start small and think small, without a long-term perspective in mind. Let’s imagine you’re testing authentication in your system with a sample user:
String testUser = "mgawinecki@tokyo.jp";
When you come back to the test code a month later, you might ask yourself why you wanted to test with this particular user. Is it because you wanted to test with an inactive user? Or maybe with a Japanese-speaking user? Hard to guess. And when the test suite grows to several hundred test cases, maintaining hardcoded test data becomes a nightmare. How do you handle it? The same way you handle the magic number anti-pattern: give the value an intention-revealing name.
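For instance, borrowing the magic number remedy directly (a sketch; the constant name and the reason it encodes are illustrative assumptions, not from the original test):

```java
public class TestUsers {
    // Hypothetical constant: the name now records *why* this sample user
    // was chosen, so a reader a month later does not have to guess.
    public static final String INACTIVE_JAPANESE_USER = "mgawinecki@tokyo.jp";
}
```

The literal itself is unchanged; only its intent has been made explicit, and there is now a single place to update when the test data changes.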
Imagine the same checks must run against both Firefox and Chrome, or against a local and then a pre-production test environment. There is no way to do that if you have hardcoded references to the browser type, server host or databases. The solution is to make your tests environment-agnostic and provide the configuration at runtime, e.g., by reading it from a configuration file. As a bonus, updating the configuration will no longer require modifying multiple files.
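A minimal sketch of that idea using plain java.util.Properties (the property keys and defaults here are assumptions for illustration):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class TestConfig {
    private final Properties props = new Properties();

    // Load environment-specific settings (browser type, server host, ...)
    // from a file chosen at runtime, keeping the tests environment-agnostic.
    public TestConfig(String path) throws IOException {
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
    }

    public String browser()    { return props.getProperty("browser", "chrome"); }
    public String serverHost() { return props.getProperty("server.host", "localhost"); }
}
```

The same test code can then run against Firefox or Chrome, local or pre-production, just by pointing it at a different file, for example via a system property like `-Dconfig=preprod.properties`.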
Taking the environment state for granted is often overly optimistic. Unlike in unit tests, an end-to-end setup gives you little control over the state of the test environment, particularly when it is shared with other teams. When a test starts failing, it might be because of a newly introduced bug, or because the environment is not in the state your test needs. For instance, the user you test with has been locked out by another team, or the flight schedule has changed and you can no longer use a connection from London to Los Angeles in your tests. There are a number of ways to handle such issues:
Each of these solutions can be applied manually before each test run, but with a large number of tests and a dynamic environment this simply does not scale. The alternative is to automate one of them. The last one is usually the easiest to implement and saves execution time. It does not guarantee test data for your test, but it will skip (not fail!) the test immediately when it is clear the test will provide no useful feedback. JUnit’s assumeThat construct is intuitive here:
assumeThat(testUser, existsInSystem());
Some people are aware that the environment state may change, so they make their test verify different things depending on that state:
if (existsInSystem(testUser)) {
    // test for existing user
    ...
} else {
    // test for non-existing user
    ...
}
However, this is a shortsighted workaround, as it makes your test non-deterministic: you can never be sure which path will be verified on the next run. In the extreme case, if the environment is always in the same state, only one execution path will ever be tested. In general, there is no reason to have a single test method if you are testing two different outcomes.
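The deterministic alternative is to split the conditional into two tests, each declaring the precondition it needs and skipping (not failing) when that precondition does not hold. A framework-free sketch of the idea (the helper names and the skip mechanism are illustrative; in JUnit you would use assumeThat instead):

```java
import java.util.Set;

public class SplitTests {
    // Thrown to mark a test as skipped, mimicking JUnit's assumption mechanism.
    static class SkippedTest extends RuntimeException {}

    static void assumeTrue(boolean condition) {
        if (!condition) throw new SkippedTest(); // skip, don't fail
    }

    // Each test now checks exactly one outcome, deterministically.
    static String testExistingUserCanLogIn(Set<String> users, String user) {
        assumeTrue(users.contains(user));
        return "verified existing-user path for " + user;
    }

    static String testUnknownUserIsRejected(Set<String> users, String user) {
        assumeTrue(!users.contains(user));
        return "verified unknown-user path for " + user;
    }
}
```

Whichever state the environment happens to be in, one test runs and the other reports itself as skipped, so you always know exactly which path was verified.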
When assertions in your tests fail with almost no clue why
Expected: true
Actual: false
it is hard to isolate the root cause of the failure. This happens when you use simple assertions like assertTrue or assertEquals. A better solution is to use custom matchers in combination with custom messages:
assertThat("Account with debit is missing", accounts, contains(expectedAccountWithDebit));
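The difference is easy to reproduce even without Hamcrest; a minimal sketch of a message-carrying assertion (the helper name is an invention for illustration), whose failure names the missing item and shows the actual data instead of the bare "Expected: true / Actual: false":

```java
import java.util.List;

public class Assertions {
    // Fails with a domain message plus the offending values, so the
    // failure output reads like a diagnosis rather than a boolean mismatch.
    static <T> void assertContains(String reason, List<T> actual, T expected) {
        if (!actual.contains(expected)) {
            throw new AssertionError(
                reason + ": expected item " + expected + " in " + actual);
        }
    }
}
```

A failed run then reports something like "Account with debit is missing: expected item ... in [...]", which is usually enough to start isolating the root cause from the log alone.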
Once a test fails, you will need to understand what happened before it did. However, if you don’t want to know that, follow this anti-pattern:
Jokes aside, the goal of addressing this issue is to make the problem you found reproducible at the smallest possible cost. Running the same test and debugging both the test and the system under test again and again is usually expensive, and can be ineffective for intermittent bugs.
Tests that mix details of the system’s business logic with the steps of the test scenario are hard to read and maintain. The solution is to separate what the test is testing from how it is doing it. In software development this separation of concerns is known as encapsulation. I have found a number of ways to achieve it in test automation:
One of them is a custom matcher, such as a UserExistsMatcher. I found a good introduction to the latter two approaches in the article Writing Clean Tests – Replace Assertions with a Domain-Specific Language.
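As a rough illustration of encapsulating scenario steps behind business-level methods (every name here is invented for the sketch, not taken from the article), the test body ends up reading like the scenario it verifies while the mechanical details stay hidden inside the helpers:

```java
public class LoginScenario {
    private final StringBuilder log = new StringBuilder();

    // Business-level steps; the low-level clicking, typing and HTTP calls
    // would live inside these methods in a real suite.
    LoginScenario givenRegisteredUser(String email) {
        log.append("create user ").append(email).append("; ");
        return this;
    }

    LoginScenario whenLoggingInWith(String email, String password) {
        log.append("login as ").append(email).append("; ");
        return this;
    }

    LoginScenario thenDashboardIsShown() {
        log.append("verify dashboard");
        return this;
    }

    String steps() { return log.toString(); }
}
```

A test then chains the steps, e.g. `new LoginScenario().givenRegisteredUser("x@y.z").whenLoggingInWith("x@y.z", "pw").thenDashboardIsShown()`, and changes to the UI or API affect only the helper bodies, not every test.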
Waiting in your test for 4 seconds
Thread.sleep(4*1000);
because your production system usually takes 4 seconds to go out over the network, fetch some data, and come back over the network with the result: this is baaad. It is bad because it makes your test fragile to network congestion: the test will start failing when network latency increases. What you actually intended was to wait until “a response is returned” or “an object appears”, using explicit and implicit waits (in Selenium) or active polling in general (e.g., with the Awaitility library).
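In plain Java the active-polling idea looks roughly like this (a sketch for illustration; in real suites you would reach for Selenium’s waits or Awaitility rather than hand-rolling it):

```java
import java.util.function.BooleanSupplier;

public class PollingWait {
    // Polls the condition until it holds or the timeout elapses, instead of
    // sleeping for one fixed, latency-sensitive interval.
    static boolean until(BooleanSupplier condition, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                return false; // timed out: let the caller fail with a clear message
            }
            try {
                Thread.sleep(50); // short poll interval, not the whole wait
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true; // condition met: proceed immediately, no fixed delay
    }
}
```

When the system responds in 1 second the test proceeds after 1 second; when the network is slow the test still passes, up to a timeout that is an upper bound rather than a guess at the typical latency.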
Obviously, there are exceptions where explicit sleeping is fine.
The anti-patterns presented here demonstrate that writing system tests is a slightly different beast from writing unit tests. Sure, some anti-patterns, like a Wet Floor, can happen in both worlds. However, in this post I have focused on anti-patterns specific to end-to-end tests. I have no funny names for them yet, so if you come up with any, let me know.
I’m waiting for your feedback! Do you agree or disagree with some of these anti-patterns? Or have you encountered others?