Some myths about integration testing

The code of a well-decomposed application is easy to test through unit tests. However, unit tests don't guarantee that these pieces of code will work correctly once combined. For that kind of check we need integration tests. The tests nobody likes...

In this article we'll first try to figure out why some programmers prefer to test code integration manually rather than in an automated way. Then, for each accusation, we'll try to find a solid counter-argument. Let's begin.

Integration tests take too much time

It's probably the most frequently made accusation. Some programmers don't like integration tests because they slow down the continuous integration (CI) process. And it's almost true, because integration tests are supposed to connect to a database and run in an environment resembling production, and all these operations consume a lot of time.

Counter-argument: you don't need to execute integration tests with every build launched automatically by Jenkins on a code change. The Surefire plugin in Maven lets you configure tests for different profiles. So you can define one profile for unit testing and another for integration testing: the first executed by Jenkins after every change to the code base, the second executed less often, for example every 3 hours. A sample configuration excluding integration tests from the default run looks like:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.18.1</version>
  <configuration>
    <excludes>
      <exclude>**/*IntegrationTest.java</exclude>
    </excludes>
  </configuration>
</plugin>
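To complete the picture, the integration tests can then be attached to a dedicated Maven profile that re-includes the excluded classes. This is only a sketch: the profile id and the `*IntegrationTest` class-name convention are assumptions, not a requirement of Surefire.

```xml
<profiles>
  <profile>
    <!-- hypothetical profile id; activate it with: mvn test -Pintegration-tests -->
    <id>integration-tests</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-surefire-plugin</artifactId>
          <version>2.18.1</version>
          <configuration>
            <!-- run only the classes excluded from the default build -->
            <includes>
              <include>**/*IntegrationTest.java</include>
            </includes>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
```

A scheduled Jenkins job could then run `mvn test -Pintegration-tests` every 3 hours, while the per-commit job keeps using the default profile.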

Even if executing tests every 3 hours isn't really "continuous", it provides a supplementary level of testing, possibly easier to maintain than mocked unit tests.

Integration tests take time to write

Sometimes we, programmers, complain about the lack of time planned for writing tests. That's why another argument against integration testing is this lack of time. And it's also almost true. Well-written unit tests should cover most of the problematic situations. However, they cover the application in a unitary way, so they aren't able to check whether two small methods work well together and produce the expected results.

Counter-argument: sure, today we'll spend 3 more hours defining appropriate integration tests for one feature, but in the future we'll gain 3 days on fixing unexpected bugs or refactoring old legacy code. The hours-to-days gain is a little exaggerated, but it helps to keep in mind that tests generally exist to make programmers' lives easier.

Moreover, integration tests help to "memorize" some application bugs. Imagine a situation where the following code produces a problem while the two separate methods don't:

// methods that work correctly in isolation
public List<String> getData() {
  // ...
}

public List<Person> consumeData(List<String> names) {
  // ...
}

// problematic integration case
public void integrateNewPeople() {
  List<Person> people = consumeData(getData());
  // do other stuff with the constructed people objects
}

Without integration tests, detecting potential bugs across several combined methods is much harder. Additionally, bugs reported by real users can easily be turned into integration test cases, keeping a trace of them and ensuring that further changes to the code won't reintroduce fixed errors.
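To make the idea concrete, here is a minimal, self-contained sketch (the class name, the sample data and the parsing bug are all hypothetical) where each method behaves correctly on its own, but their combination fails:

```java
import java.util.ArrayList;
import java.util.List;

class PeopleIntegration {

  // Simple value object, assumed for illustration only
  static class Person {
    final String firstName;
    final String lastName;

    Person(String firstName, String lastName) {
      this.firstName = firstName;
      this.lastName = lastName;
    }
  }

  // Works fine in isolation: returns raw "first last" strings
  public List<String> getData() {
    List<String> names = new ArrayList<>();
    names.add("John Smith");
    names.add("Madonna"); // single-token name: harmless here, fatal later
    return names;
  }

  // Also fine in isolation, as long as every input contains a space
  public List<Person> consumeData(List<String> names) {
    List<Person> people = new ArrayList<>();
    for (String name : names) {
      String[] parts = name.split(" ");
      people.add(new Person(parts[0], parts[1])); // fails for "Madonna"
    }
    return people;
  }

  // The combination breaks: only a test exercising both methods together
  // catches the ArrayIndexOutOfBoundsException
  public List<Person> integrateNewPeople() {
    return consumeData(getData());
  }
}
```

A unit test of `consumeData` with well-formed inputs and a unit test of `getData` would both pass; only the integration path exposes the incompatibility between the data produced and the data expected.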

But it's the project manager who defines integration tests

Another argument against integration tests I've met is: "but it's the Project Manager/human testers who make the integration tests". This argument is valid because it's not useful to do the same thing twice. However, it becomes less convincing when the project team changes very often. Imagine that the PM or testers change every 3-6 months and that the newcomers know neither the project domain nor the integration testing tools used by their predecessors.

Counter-argument: in this case the programmatic form of integration tests can be useful. A programmer will potentially read coded test cases faster than a colleague from another domain who lacks a technical background or doesn't pick up new technologies quickly. Consequently, he'll understand the project domain better and in more detail than somebody reading only some test definitions.

Integration tests kept in the code also help to document the project better and make the code easier to understand for new programmers working on it.

Defining integration tests is slow

Sometimes we hear that writing real integration tests is slow. "Real" means here tests based on, for example, SQL queries executed against an embedded database, or files in a specific format used to generate a sample dataset. It's quicker to mock objects programmatically and launch the tests. But that's subjective. Is the code below really clearer than SQL queries written once and loaded into the test database before the integration test build?

DataProviderDAO provider = Mockito.mock(DataProviderDAO.class);

@Test
public void checkIfPersonExists() {
  Mockito.when(provider.getPerson("First name", "Last name")).thenReturn(true);

  assertThat(personService.exists("First name", "Last name")).isTrue();
}

@Test
public void checkIfPersonNotExists() {
  Mockito.when(provider.getPerson("First name", "Last name")).thenReturn(false);

  assertThat(personService.exists("First name", "Last name")).isFalse();
}

Counter-argument: compare it with predefined queries (or an appropriate DSL executing this kind of queries):

INSERT INTO person (first_name, last_name) VALUES ('First name', 'Last name');

The test cases become simpler:

@Test
public void checkIfPersonExists() {
  assertThat(personService.exists("First name", "Last name")).isTrue();
}

@Test
public void checkIfPersonNotExists() {
  assertThat(personService.exists("First nameX", "Last nameX")).isFalse();
}

Reusing a dataset defined as SQL queries demands less effort than reusing mocked objects (at least in the quantity of code to write). Of course, the preference for clearly defined data (in SQL, CSV... formats) or for object mocking is a subjective criterion, and it's a little difficult to argue for one over the other.

This article tried to debunk some myths about integration tests. However, it shouldn't be read as a glorification of this kind of testing. Every application should first be covered by well-defined unit tests and only after that enriched with integration test cases. This article only tries to show that integration tests can complement unit tests very well and that some of the arguments against using them are debatable.

