Dockerising your legacy integration tests

by Thomas Zeeman, October 25, 2018

As most of you will know, every now and then you can expect the question ‘Can you help fix something on this ancient project?’ According to the Universal Software Ageing table, that means anything from last month to much, much older. I got such a question recently, being in between projects, and this project fell squarely into the ‘much older’ category. As luck would have it, there was still a previous developer around, for some definition of around that is. (Side note: if you’re not that lucky, I suggest you look into this other thing called Software Archaeology.)

So, having had the ultra-quick introduction (‘there’s the git repo, good luck’), I started looking at what was needed to get this thing running on my local machine and earn that much-coveted ‘Works on My Machine Certification’. Luckily it was already using Java 8 and Maven, so I could immediately start firing off:

mvn clean verify

Obviously, that didn’t go too well out of the box. Someone had found it necessary not only to run some integration tests against a database but also to have that database be a regular MySQL instead of some in-memory one like H2 or HSQLDB.
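Concretely, the test configuration pointed at a local MySQL instance with a dedicated user and schema. The values below are a hypothetical reconstruction (the real names don’t matter), but they give an idea of what the tests expected to find:

# hypothetical datasource settings the integration tests relied on
jdbc.url=jdbc:mysql://localhost:3306/legacydb
jdbc.username=myuser
jdbc.password=MyPassword01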

Now, how do we get past this hurdle? In the old days, we would have installed a local copy of MySQL, manually configured it with the user and database necessary for the project and then run the tests again. But even a grey-beard developer like myself learns some new tricks every now and then, and for the last few years there has been this thing called Docker that helps us run stuff if and when we need it.

Integration testing with MySQL – take 1

So, we now know we need MySQL and want to use Docker. Let’s get started there.

docker container run mysql

Ok. If that were all, there wouldn’t be much of a point to this blog now, would there? Indeed, the above is not all. Our tests require the database server to be a specific version, available at a certain address, with a certain user and a certain database. And since we don’t want to keep a console open every time, let’s run it in the background:

docker container run -e MYSQL_ROOT_PASSWORD=MyRootPassword01 \
  -e MYSQL_USER=myuser \
  -e MYSQL_PASSWORD=MyPassword01 \
  -e MYSQL_DATABASE=legacydb \
  -p 3306:3306 -d mysql:5.7
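Not strictly necessary, but if you want to check that the container came up with the expected user and schema, something along these lines should do (fill in the container id from the listing):

docker container ls
docker container exec -it <container-id> mysql -umyuser -pMyPassword01 legacydb

Then it’s time for another mvn clean verify.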

Ok. At least the tests want to start now, but for some reason they complain that the tables generated by Flyway (for an introduction to that, see a write-up by another colleague) cannot be found. Long story short, we hit that particular quirk where MySQL is running on Linux (in the container) and we’re using Hibernate to talk to it: MySQL created the tables in all lower case and Hibernate asks for the camel-cased name. This is usually solved by hacking the /etc/mysql/my.cnf file on the server, but this is Docker and we don’t want to create our own custom image just for that. Luckily, the official MySQL Docker image allows you to pass in command arguments, so using

docker container run -e MYSQL_ROOT_PASSWORD=MyRootPassword01 \
  -e MYSQL_USER=myuser \
  -e MYSQL_PASSWORD=MyPassword01 \
  -e MYSQL_DATABASE=legacydb \
  -p 3306:3306 -d mysql:5.7 \
  --lower_case_table_names=1

we finally manage to get those pesky tests to pass. Does that mean we’re done yet? Of course not. Although this has gone from a laborious set of documented steps to a significantly shorter one, it is still documentation, and we all know how well that gets read…

Integration testing with MySQL – take 2

The question then becomes whether there is a way to automate all of this a little further. Perhaps as part of the build, so we neither have to write the documentation nor have to do anything but run the mvn clean verify command.

As it happens, there are several Maven plugins that can help you there. Having encountered it before, I chose to use the docker-maven-plugin from fabric8.io.

Having gone through one iteration already, it should be no surprise that the direct translation of the above docker command ends up looking something like this in our Maven pom.xml:

<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.20.0</version>
  <configuration>
    <dockerHost>unix:///var/run/docker.sock</dockerHost>
    <verbose>true</verbose>
    <images>
      <image>
        <name>mysql:5.7</name>
        <run>
          <ports>
            <port>3306:3306</port>
          </ports>
          <env>
            <MYSQL_DATABASE>legacydb</MYSQL_DATABASE>
            <MYSQL_USER>myuser</MYSQL_USER>
            <MYSQL_PASSWORD>MyPassword01</MYSQL_PASSWORD>
            <MYSQL_ROOT_PASSWORD>MyRootPassword01</MYSQL_ROOT_PASSWORD>
          </env>
          <cmd>
            <arg>--lower_case_table_names=1</arg>
          </cmd>
          <wait>
            <time>10000</time>
          </wait>
        </run>
      </image>
    </images>
  </configuration>
</plugin>
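(A side note on that wait block: a fixed ten-second sleep is a bit of a gamble on slower machines. The plugin can also wait for a log pattern, with the time acting as a timeout, so a variation like the one below might be more robust. Do check your own container logs first though; MySQL 5.7 prints ‘ready for connections’ more than once during start-up, so treat this as a sketch rather than a drop-in replacement.)

<wait>
  <!-- wait until the server reports it is ready, give up after 30s -->
  <log>ready for connections</log>
  <time>30000</time>
</wait>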

This creates the same container and can be run when needed by including it in our Maven command:

mvn clean docker:run verify

Hm… I don’t want to change that! I was finally getting some Fingerspitzengefühl for mvn clean verify and now you want me to add yet another goal to that line? Can’t we automate that too?

Yes, we can! With the following execution block in our docker-maven-plugin:

<executions>
  <execution>
    <id>start</id>
    <phase>pre-integration-test</phase>
    <goals>
      <goal>start</goal>
    </goals>
  </execution>
  <execution>
    <id>stop</id>
    <phase>post-integration-test</phase>
    <goals>
      <goal>stop</goal>
    </goals>
  </execution>
</executions>

This makes sure that we start the Docker container before the integration tests are started, and stop it again once the integration tests are done. You did properly separate the integration tests from the regular unit tests, didn’t you?

So, that would be a slight alteration to your default maven surefire plugin configuration then:

<excludes>
  <exclude>**/*IntegrationTest.java</exclude>
  <exclude>**/*IntegrationTests.java</exclude>
  <exclude>**/*IT.java</exclude>
</excludes>

That should cover the typical naming strategies.
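For completeness, those excludes live inside the maven-surefire-plugin declaration; something along these lines, where the version shown is just an example matching the failsafe version used below:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.22.0</version>
  <configuration>
    <excludes>
      <exclude>**/*IntegrationTest.java</exclude>
      <exclude>**/*IntegrationTests.java</exclude>
      <exclude>**/*IT.java</exclude>
    </excludes>
  </configuration>
</plugin>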

And while we’re at it: we do want our container to be taken down after the tests have run, even if there was a failure. That’s what the maven-failsafe-plugin is for:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>2.22.0</version>
  <configuration>
    <encoding>UTF-8</encoding>
    <includes>
      <include>**/*IntegrationTest.java</include>
      <include>**/*IntegrationTests.java</include>
      <include>**/*IT.java</include>
    </includes>
  </configuration>
  <executions>
    <execution>
      <id>integration-test</id>
      <goals>
        <!-- integration-test runs the tests; verify only fails the build afterwards,
             so post-integration-test (and the container stop) always gets a chance to run -->
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>

And now we’re back to our mvn clean verify command again…
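For reference, the relevant slice of the default Maven lifecycle now looks roughly like this:

test                  -> surefire runs the unit tests (integration tests excluded)
pre-integration-test  -> docker-maven-plugin starts the MySQL container
integration-test      -> failsafe runs the *IntegrationTest / *IT classes
post-integration-test -> docker-maven-plugin stops the container again
verify                -> failsafe fails the build if any integration test failed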

Integration testing with MySQL – take 3

According to the Rule of Three, there has to be a Part 3. Rejoice then, because there is!

After take two, we had our database automatically handled via Maven and we could leave it at that, but what if I would like to run a test from my IDE? Or if I were to develop a new test or debug an existing one? Do I keep running it via Maven? Nooooooooo.

Ok. Halloween is not here yet, so hold off on the dramatics. Still, there is that snag of the solution only working via Maven… can we work around that too?

Yes, we can! Using an even newer-fangled thingy called Testcontainers instead of the fabric8.io plugin. It allows us to use a container in our test code as if it were just another object.
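To get there, the Testcontainers MySQL module needs to be on the test classpath first; something along these lines, where the version is simply an assumption based on what was current around the time of writing:

<dependency>
  <groupId>org.testcontainers</groupId>
  <artifactId>mysql</artifactId>
  <version>1.9.1</version>
  <scope>test</scope>
</dependency>

With that on the classpath, a test can declare the container like this: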

public class SomeIntegrationTest {

    // a MySQL container that is started once for this test class
    @ClassRule
    public static MySQLContainer mysql = new MySQLContainer()
            .withConfigurationOverride("mysql_overrides");

    // do test stuff
}

Unlike in the previous two takes, we do have to add a custom .cnf file to work around the table name issue. The mysql_overrides parameter refers to a directory on the classpath, in which we can put a file called override.cnf with the following content:

[mysqld]
lower_case_table_names = 1
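In a standard Maven layout that boils down to:

src/test/resources/
└── mysql_overrides/
    └── override.cnf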

Now, it so happens that our integration tests use Spring. Mind you, that would be the plain Spring Framework, not that recent whatchamacallit upstart Spring Boot. So we could completely rewrite our tests and ditch all of that existing work, or we could integrate this testcontainers.org magic into our existing tests. Like this, for instance:

@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration(initializers = { SomeIntegrationTest.Initializer.class })
public class SomeIntegrationTest {

    @ClassRule
    public static MySQLContainer mysql = new MySQLContainer()
            .withConfigurationOverride("mysql_overrides");

    // do test stuff

    static class Initializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {

        @Override
        public void initialize(ConfigurableApplicationContext context) {
            // expose the container's connection details as Spring properties
            Properties properties = new Properties() {{
                put("jdbc.url", mysql.getJdbcUrl());
                put("jdbc.username", mysql.getUsername());
                put("jdbc.password", mysql.getPassword());
            }};

            context.getEnvironment().getPropertySources().addLast(
                    new PropertiesPropertySource("test-containers-mysql", properties));
        }
    }
}
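As an aside: the jdbc.* property names are of course specific to this project. Purely for illustration, assume the existing Spring configuration builds its DataSource from them along these lines (a hypothetical reconstruction, using Spring’s own DriverManagerDataSource):

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.Environment;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
public class TestDataSourceConfig {

    @Bean
    public DataSource dataSource(Environment env) {
        // picks up jdbc.url/jdbc.username/jdbc.password from the environment,
        // which now point at the Testcontainers-managed MySQL instance
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setUrl(env.getProperty("jdbc.url"));
        ds.setUsername(env.getProperty("jdbc.username"));
        ds.setPassword(env.getProperty("jdbc.password"));
        return ds;
    }
}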

Since we have several more tests like this, it sounds like we want to abstract that part away into some superclass. Something to the tune of:

public abstract class AbstractIT {

    @ClassRule
    public static MySQLContainer mysql = new MySQLContainer()
            .withConfigurationOverride("mysql_overrides");

    public static class Initializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {

        @Override
        public void initialize(ConfigurableApplicationContext context) {
            Properties properties = new Properties() {{
                put("jdbc.url", mysql.getJdbcUrl());
                put("jdbc.username", mysql.getUsername());
                put("jdbc.password", mysql.getPassword());
            }};

            context.getEnvironment().getPropertySources().addLast(
                    new PropertiesPropertySource("test-containers-mysql", properties));
        }
    }
}

Which allows us to reduce our original test class to this again:

@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration(initializers = { AbstractIT.Initializer.class })
public class SomeIntegrationTest extends AbstractIT {

    // do test stuff
}

So far so good. Now we can refactor all the integration tests and we’re done. Right! Right?

Oops. We got a failure… how did that happen?

Here too, we’re running into a ‘known issue’. Luckily, someone also posted a workaround, so we end up with the following abstract class instead:

public abstract class AbstractIT {

    private static MySQLContainer mysql = new MySQLContainer()
            .withConfigurationOverride("mysql_overrides");

    // start the container once, before the Spring context (and its Initializer) is created
    static {
        mysql.start();
    }

    ...
}

Using this, we can continue using the existing tests and don’t have to introduce all kinds of maven plugins. Just modify all existing test code…

Summary

Now, where does this leave us?

Whatever step we take, we have to have Docker installed. That used to be a thing, but many systems are fully supported now so I’ll assume that is not an issue anymore. Besides, just having to support Docker on your machine instead of a gazillion databases and other server software in as many different versions sounds like an improvement in its own right.

Step 1 is obviously the least invasive and quickest to implement. All you have to do is get the right docker run command and you’re done. But then you have to remember that command, hand it to your successor somehow and not forget to get the container running again after a reboot… You get the point, I guess.
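If you do go that route, a restart policy at least takes care of the reboot part, assuming the Docker daemon itself starts on boot; something like this, added to the command from take 1:

# same environment, port and image arguments as in take 1
docker container run --restart unless-stopped -e MYSQL_ROOT_PASSWORD=MyRootPassword01 \
  -e MYSQL_USER=myuser -e MYSQL_PASSWORD=MyPassword01 -e MYSQL_DATABASE=legacydb \
  -p 3306:3306 -d mysql:5.7 --lower_case_table_names=1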

Step 2 gives you some more automation and should be compatible with CI environments as well. It also requires only changes to the build configuration. There are some drawbacks though. I’ve already mentioned the fact that you’re now dependent on running the integration tests with Maven all the time. Observant readers may also have noticed that the plugin configuration contained a line starting with unix. If you’re working on Windows, that would have to change, and if you’re working in an environment like mine, where people can choose their own tools (within limits, though I am surprised we haven’t got someone running OpenBSD or some such on their development machine), a single hard-coded dockerHost value quickly becomes a nuisance.
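One way to soften that, sketched here as an assumption rather than something we actually did, is to hide the value behind a Maven property with a sensible default, so the odd one out can override it on the command line or in their settings.xml:

<properties>
  <!-- default for Linux/macOS; override with -Ddocker.socket=... where needed -->
  <docker.socket>unix:///var/run/docker.sock</docker.socket>
</properties>

with the plugin configuration then using <dockerHost>${docker.socket}</dockerHost> instead of the hard-coded value.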

Step 3 arguably gives you the most power. It uses Docker and is independent of any build tool, so it works in your favourite IDE as well. It does come at a cost though: it requires code changes, which, depending on the age and state of the project, may not be that easy.
I also noted during my experiments that the build times would vary quite a bit. Whereas the Maven plugin solution would clock in at 2 minutes sharp every time for my project, the testcontainers.org solution would usually be a bit slower and vary between 1:56 and 2:13 in my, granted, highly non-scientific experiments. YMMV.

So, what am I going to do? Integrate the testcontainers.org solution into our project as that seems the most useful for us. And for the future, I’ve got a few more options to choose from.
