Testing, and the amount of effort put into testing a system or even a single piece of code, is one of the most important factors determining the quality of the resulting software. Here I want to share some insight into what has worked well for me over the past few years of managing, leading, and participating in software development teams.
Before even diving into the details, I want to clarify two things.
The first thing to clarify is the definition of "integration test". I know there are plenty of naming conventions and similar codices for the different levels of testing and what they are called, but many of them are fuzzy about what counts as a unit and what does not. So I decided to simply describe what I mean by the term integration test, and you can call it whatever you like.
Integration testing for me (especially when talking about microservices) is when all internally developed modules (or services) are up and running against each other and against (probably mocked) external systems/modules. This means you are basically testing all the parts of the system you have developed, including the interactions and compatibility between them.
The second, even more contentious question is what you should test. There are people who swear by a unit test per method, and others who call them just a waste of time and effort.
In my opinion, integration testing is a very important level of testing. Honestly, whenever I have control over a project, it is the most important kind of test and gets most of the testing resources dedicated to it. Unit tests are normally limited to methods and classes containing significantly complicated logic, plus regression tests that are introduced as the project goes on and bugs are discovered.
There are a bunch of reasons why such integration tests are so important to me, but I will just list the following, as this is not really the main scope of this note:
considering the amount of effort, they cover large portions of code, compared to unit tests, each of which covers only a few lines
unlike (most) unit tests, the conditions they assert on the running code are directly derived from real use cases and are not just there to give developers a satisfying feeling that everything works
they are very high level and normally map to requirements and user stories, which are invaluable and essential to test
they make sure that the system works as a whole, which is valuable information in the era of microservices
Whether those reasons convince you to put the highest priority on integration testing, or you just run such tests as a last step before checking in your code, they need to be run. Nowadays, with the dramatically lower price of computation thanks to cloud and similar shared infrastructure, continuously testing your code is a must-have. I'm not going to try to convince you here, but I will point out some important features that I always consider while building a CI pipeline.
Choose a CI system that gives you as much as possible out of the box without getting in your way. Also pay attention to whether it can build different branches (if you follow some well-known workflow) independently and with different pipelines. It's even better if you can script your pipeline as part of your code and check it in. That's what DevOps was supposed to be: right beside development!
Once you have felt the power of a script-based CI, you will never go back to building your pipeline in a GUI again.
From the beginning (and even now), many CI systems shared the build environment between builds. This can have its benefits, but in the end you will either mess things up between the different versions and environments you need to test with and against, or you will simply skip testing at some point.
So find or adopt a CI system that isolates builds, as many do using containers. GitLab CI, which uses Docker images, is my favorite here.
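As a sketch of what scripting the pipeline next to your code can look like, here is a minimal `.gitlab-ci.yml` (the image tag and the script name are illustrative assumptions, not from a real project):

```yaml
# .gitlab-ci.yml -- lives in the repository, versioned with the code
image: maven:3-eclipse-temurin-17   # each job runs in a fresh container, so builds stay isolated

stages:
  - build
  - integration-test

build:
  stage: build
  script:
    - mvn package -DskipTests

integration-test:
  stage: integration-test
  script:
    # hypothetical script that boots the containers and runs the tests
    - ./run-integration-tests.sh
  only:
    - branches                      # every branch gets its own independent pipeline
```

Because the file is checked in, every branch carries its own pipeline definition, and changes to the pipeline go through the same review as changes to the code.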
With all of the above said, there is still one more thing to cover: how do you actually run your integration tests?
There are frameworks and tools out there that let you boot your application context and then run your integration tests inside that same context. Examples include Arquillian, mostly for Java EE environments, and Spring's test framework for Spring-based applications. However, I have a general objection to running integration tests in this manner. I believe that when you are testing, you should be as close to the production environment as possible. What I don't like about most of these frameworks is that they tamper with the setup and the processes.
So why do that when we can just run the app in a better setup? My usual setup is to let each module (internal or external) run in a separate container, link the containers where necessary, and let them run. Then I run the integration tests against the running system. The tests send inputs, monitor outputs, and check the results: real black-box testing that keeps the tests independent of the modules' internals.
In such a setup, external modules may or may not be mocked, depending on your needs.
Let's look at an example. Suppose a system with two microservices: a user management service and a notification service. They are loosely coupled through a messaging system (say, Kafka), and the user management service also needs a MongoDB instance to run. The notification service needs an SMTP server to fulfil its job.
In this scenario, my integration test setup looks like this:
a MongoDB container
a Kafka container
a container for the user management service, linked to MongoDB and Kafka
an SMTP mock container such as MailDev
a container for the notification service, linked to Kafka and the SMTP mock
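The five containers above can be captured in a single Compose file, as a sketch. The service image names (`user-management:test`, `notification:test`), environment variable names, and the exact Kafka image are assumptions; only the MongoDB and MailDev images and ports are standard:

```yaml
# docker-compose.yml -- one container per module, wired together on the Compose network
version: "3.8"
services:
  mongodb:
    image: mongo:6

  kafka:
    image: bitnami/kafka:latest    # broker configuration omitted for brevity

  user-management:
    image: user-management:test    # hypothetical image built earlier in the CI pipeline
    depends_on: [mongodb, kafka]
    environment:
      - MONGO_URL=mongodb://mongodb:27017/users
      - KAFKA_BROKERS=kafka:9092

  maildev:
    image: maildev/maildev         # mock SMTP on port 1025, web UI/REST API on 1080
    ports:
      - "1080:1080"

  notification:
    image: notification:test       # hypothetical image
    depends_on: [kafka, maildev]
    environment:
      - KAFKA_BROKERS=kafka:9092
      - SMTP_HOST=maildev
      - SMTP_PORT=1025
```

A `docker compose up -d` in the test stage brings the whole system up; the tests then talk to it only through the published ports.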
Once this setup is up and running, the integration tests run. They send input either as events or as API calls, and then check the results by inspecting the fired events, the sent emails, the API responses, or a combination of these.
I have put a lot of thought into this process, and I would like to hear any feedback on it.