I would like to know how you approach integration testing of your plugins. Does it always come down to writing isolated unit tests that assert a function's output matches the expected result? Or are there better ways of evaluating the behaviour of the whole system end to end?
Let’s imagine our plugin generates urban layouts based on an input curve and simulates daylighting conditions of the resulting geometry. We could break the process down into a few steps:
- Subdivide the input curve
- Generate geometry
- Run daylight simulation
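To make the question concrete, here is a minimal sketch of how I picture the three steps composed into a single callable that an integration test could exercise end to end. All function and parameter names are hypothetical stand-ins, not the real plugin API:

```python
# Hypothetical stand-ins for the three pipeline stages; the real
# implementations would call into the plugin / simulation engine.

def subdivide_curve(curve):
    """Split an input curve (here just a list of vertices) into cells."""
    # Stand-in: pair up consecutive vertices as "cells".
    return list(zip(curve, curve[1:]))

def generate_geometry(cells, rules):
    """Produce one building per cell according to the generation rules."""
    return [{"cell": cell, "use": rules.get("use", "residential")} for cell in cells]

def simulate_daylight(buildings, weather):
    """Return a daylight score per building (placeholder logic)."""
    return [len(weather) for _ in buildings]

def run_pipeline(curve, rules, weather):
    """Run all three steps sequentially, as one integration test would."""
    cells = subdivide_curve(curve)
    buildings = generate_geometry(cells, rules)
    return simulate_daylight(buildings, weather)
```

The point is that a single entry point like `run_pipeline` lets a test feed in one data-set permutation and assert on the final result, rather than on each stage in isolation.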
Now, I would like to perform all these steps sequentially with various permutations of data sets:
- concave/convex curves; planar/non-planar curves; some vertices below/above 0 elevation etc.
- various rules for geometry generation (only high-rise; only residential; high/low density; mixed use etc.)
- run the simulation with a few curated weather files including purposely introduced errors in the data set
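The permutations above could be driven by a small harness that takes the cartesian product of the data-set axes and records which combinations fail instead of stopping at the first error. This is only a sketch under assumed names (the curve coordinates, rule dictionaries, and `.epw` file names are all made up for illustration):

```python
import itertools

# Hypothetical data-set axes mirroring the permutations listed above.
CURVES = {
    "convex_planar": [(0, 0, 0), (4, 0, 0), (4, 4, 0), (0, 4, 0)],
    "below_zero":    [(0, 0, -1), (4, 0, 0), (4, 4, 2), (0, 4, 0)],
}
RULES = [
    {"use": "high-rise"},
    {"use": "residential", "density": "low"},
    {"use": "mixed"},
]
WEATHER_FILES = ["city_a.epw", "city_b.epw", "corrupt_header.epw"]  # last one deliberately broken

def run_all_permutations(pipeline):
    """Run the pipeline over the full cartesian product; collect failures."""
    failures = []
    for (name, curve), rules, weather in itertools.product(
        CURVES.items(), RULES, WEATHER_FILES
    ):
        try:
            pipeline(curve, rules, weather)
        except Exception as exc:  # record and continue: we want the full failure map
            failures.append((name, rules["use"], weather, repr(exc)))
    return failures
```

A test framework like pytest could express the same thing declaratively with `@pytest.mark.parametrize`, but even a plain loop like this gives a full pass/fail matrix per run.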
This could allow me to spot edge cases that individual unit tests miss. It might be that the curve subdivision step always succeeds but produces very small cells that trip up my geometry generation logic, or that the resulting building geometry can't be handled by the simulation module.
Currently I am performing these steps manually, but I would love to hear about your experience with automating similar integration tests and how best to approach it.