Automated integration testing is a pig, but it’s an important pig – and it’s important for precisely the reasons that make it so difficult.
The wider the scope of your automated test – i.e. the wider the chasm that your integration testing pig has to fly across (rather like the one in the above photo) – the more problems you’ll encounter; but these are also “real-world” problems that your code will face, both during development and after it’s released. So an integration test that breaks – and they will break frequently, e.g. when a dependent system changes unexpectedly – isn’t just a drag: it’s an early warning that the system is about to break “for real”. Think of an integration test as your network canary.
Integration tests are also important, and worth the pain of setting them up and keeping them working, because without them you’ve just got unit tests. By themselves, unit tests are too myopic: they don’t assert that a complete operation works, from the moment the user clicks “Go” all the way through to the results displayed on the user’s screen. An end-to-end test confirms that all the pieces fit together as expected. Integration is potentially problematic in any project – which is why it’s so important to test.
One of the themes in this chapter is that you can use a conceptual design (i.e. a robustness diagram - a picture of a use case, showing controllers, entities and boundary objects) to identify what kinds of tests to write. For example:

- a controller "talking" to an entity class suggests that you'd write a normal unit test;
- a controller talking to an external system interface suggests that you'd write a "controller-level" integration test;
- a controller talking to another controller suggests that you'd write an "algorithm-level" unit test;

and so on.
It's kind of a systematic approach to driving tests from your design...
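To make that mapping a little more concrete, here's a minimal sketch in Python. All the names here (Order, OrderValidator, PaymentGateway, CheckoutController, the order-validation scenario itself) are hypothetical, invented purely for illustration, and the "external" payment gateway is faked so the sketch runs self-contained; in a real controller-level integration test, the gateway would talk to the live system.

```python
class Order:
    """Entity: plain domain data, no outside dependencies."""
    def __init__(self, total):
        self.total = total

class OrderValidator:
    """Controller that only talks to an entity -> plain unit test."""
    def is_valid(self, order):
        return order.total > 0

class PaymentGateway:
    """Boundary to an external system. Faked here so the sketch is
    self-contained; a real integration test would hit the live service."""
    def charge(self, order):
        return {"status": "ok", "amount": order.total}

class CheckoutController:
    """Controller that talks to an external system interface
    -> controller-level integration test."""
    def __init__(self, gateway):
        self.gateway = gateway

    def checkout(self, order):
        return self.gateway.charge(order)["status"] == "ok"

# Unit test: controller "talking" to an entity class.
def test_validator_unit():
    assert OrderValidator().is_valid(Order(10))
    assert not OrderValidator().is_valid(Order(0))

# Controller-level integration test: controller talking to an
# external system interface (live in practice, faked here).
def test_checkout_integration():
    assert CheckoutController(PaymentGateway()).checkout(Order(10))

test_validator_unit()
test_checkout_integration()
```

Notice that the diagram tells you which kind of test to write before you've written a line of production code: the validator's test needs nothing beyond the entity, while the checkout test only earns the name "integration" once the fake gateway is swapped for the real one.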