Test frameworks provide scaffolding for building automated tests: a domain-specific vocabulary to describe your business scenarios, loggers to generate test results in a standardized format, or "glue" to talk to various services. Such ready-made elements of a framework initially speed up scripting automated tests.
One test framework may work fine for multiple teams testing similar products and working in a similar way. However, in big organizations, different teams develop and test significantly different things and work in their own ways. The development culture at my current company is no different. In such cases, teams feel tempted to add more and more functionality to the already bloated framework, which results in an anti-pattern called Frankenstein's Framework or Wunder Framework. Such frameworks illustrate needless complexity and immobility — they become hard to use and maintain.
I learned that the hard way. However, I did not want to "throw the baby out with the bathwater". I found routines common to different testing teams, yet each team was still automating them from scratch. For instance:
Teams can use one or more libraries, but it is up to each team which ones to pick.
A test framework does multiple things such as generating test data, mocking external systems, logging debug information, and handling interaction with Web page UI. A test library should do only one of those things and not the others. For instance, a mocking library would support mocking system X.
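As a sketch of what "doing only one thing" can look like, consider a tiny stubbing library for a hypothetical system X. All class and method names below are illustrative, not a real API: the library exposes only stubbing and lookup, and leaves data generation, logging, and UI interaction to other libraries.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical single-purpose library for stubbing "system X" responses.
// It does one thing: map request paths to canned response bodies.
public class SystemXMock {

    private final Map<String, String> stubs = new HashMap<>();

    // Register a canned response body for a request path.
    public void stubResponse(String path, String body) {
        stubs.put(path, body);
    }

    // Return the stubbed response, failing loudly for unstubbed paths
    // so a broken test setup surfaces immediately.
    public String handle(String path) {
        String body = stubs.get(path);
        if (body == null) {
            throw new IllegalStateException("No stub registered for " + path);
        }
        return body;
    }

    public static void main(String[] args) {
        SystemXMock mock = new SystemXMock();
        mock.stubResponse("/users/42", "{\"name\": \"Alice\"}");
        System.out.println(mock.handle("/users/42"));
    }
}
```

Because the library's surface is this narrow, a team can learn it in minutes and its maintainer can change it without regression-testing unrelated concerns.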
The concept of tools doing only one thing comes from the Unix philosophy on how to build software. It has been present in the software engineering industry for more than 40 years and stands in opposition to building systems as monoliths. Its benefits have been widely discussed but only now, as I’m writing this post, they finally seem obvious to me. It has taken me much time to understand why moving from monolith frameworks to test libraries is worth the effort.
With a large system, it's hard to find one single person who knows, on their own, how all of the pieces such as authentication, UI navigation, report generation, and database population work together. A similar problem occurs with frameworks for testing such systems: it's nigh impossible for any one person to have the breadth and depth of experience required to understand how all parts of the framework work. Splitting code from the framework into common libraries lets testers specialize in their particular strengths. For instance, my team had enough expertise in the authentication layer of our flagship product to develop an authentication library for testing purposes. Another team focused on building a library for testing UI reports related to traffic billing, their domain of expertise. Each team focused on one thing and did it well!
With a library that does only one thing, fixing bugs and adding new features is easier. Rather than working through a complex monolithic test framework and worrying about extensive regression testing, a maintainer of the library can focus on a single small set of functionalities. I once contributed a feature I needed in my tests to the test framework. It took significant time to release a new version with my changes: making sure the changes didn't impact other users of the framework took days, and thus we released new versions infrequently. With a library, developing and releasing is much faster.
Another benefit of libraries is the possibility to compose functionalities together. Imagine you would like to send an HTTP request to a protected resource requiring authentication. The request fails with an error and your colleagues want to reproduce the problem with the curl command-line tool. The whole functionality can be achieved by composing three different libraries: REST-assured (for sending HTTP requests), an internal authentication library (for authentication and signing HTTP requests with session tokens), and a curl logger (for printing curl commands).
// Authentication library
Session session = new RestAssuredAuthClient(baseUri).authenticate(user, password);

RestAssured
    .given()
        // curl-logger library
        .config(CurlLoggingRestAssuredConfigFactory.createConfig())
        // Authentication library
        .filter(new RestAssuredSigningFilter(session))
        .formParam("startDate", "2018-09-05")
    .post("/results")
    .then()
        .statusCode(201);
If a library should do only one thing, then you are probably asking yourself what these single things could be in test automation. Here are a few ideas that come to my mind:
Note that test libraries can be specific to your product or can have a wider audience. For instance, the Selenium library provides a base for interacting with any Web UI, while the Luna Portal reporting library, built on top of Selenium, provides routines for interacting with the UI of our specific system.
How do you find potential areas for a testing library in your current project? I have learned that this is an organic process. Usually, when a project starts, I do not have enough knowledge about the domain and the internals of the system. Automated test scripts grow slowly and are frequently refactored as my initial assumptions about the system and the domain often turn out to be wrong.
It is not a bad thing if both test scripts and the routines used by those scripts live in the same repository – it eases frequent refactoring. I try to follow the rule of three: when the same code is used three times or more, I extract it into a separate procedure or helper class. It takes time to understand whether those classes are stable enough to be factored out into a separate test library.
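The rule of three in practice might look like the sketch below. The payload-building fragment is hypothetical; the point is that once the same snippet appears in a third test, it moves into a shared helper that can later graduate into a library.

```java
// Hypothetical helper extracted after the same payload-building code
// appeared in three different test scripts. Names are illustrative.
public class TestUserFactory {

    // The previously duplicated fragment: build a user payload
    // with sensible defaults, varying only the name.
    public static String defaultUserJson(String name) {
        return "{\"name\": \"" + name + "\", \"role\": \"tester\", \"active\": true}";
    }

    public static void main(String[] args) {
        // Call sites that used to each carry their own copy
        // now share a single definition.
        System.out.println(defaultUserJson("alice"));
        System.out.println(defaultUserJson("bob"));
    }
}
```

Keeping such helpers next to the tests first, and promoting them to a library only once they stop churning, avoids publishing an API you will immediately have to break.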
We have built a number of such libraries in one of our teams responsible for building the authentication layer. The initial goal was to ease scripting regression tests for authentication, but ultimately the libraries proved useful for other teams. Two have been open sourced and are used by both customers building applications on top of the company infrastructure and by the online community testing REST services unrelated to my current employer.
The whole endeavour of introducing test libraries was a joint effort of my team at Akamai. Many thanks to, in no particular order: Anatoly Maiegov, Mariusz Jędraczka, Krzysztof Głowiński, Martin Meyer, Antonio di Maio, Bartłomiej Szczepanik, Patrizio Rullo, and Chema del Barco.