Here’s how to add some juice to your automation by leveraging the tools you’ve already created…
Most automation I’ve seen just validates that the response is as expected. But what if it isn’t?
Do you just report PASS / FAIL?
Do you just report “Foo was ABC, expected XYZ”?
How about going deeper? A person is going to need to investigate, so give them more information to work with.
When something fails – kick off a troubleshooting diagnostic using the automation tools you already have.
Query databases or call APIs to show the state at the time of the test failure.
The state may have changed by the time a person gets around to investigating the issue, so it helps to know what things looked like at the time the test ran.
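For instance, here is a minimal Python sketch of capturing the database state at the moment a check fails. The table, column names, and sqlite driver are placeholders for this example; substitute the query you would otherwise run by hand during an investigation.

```python
import datetime
import sqlite3  # placeholder driver; swap in whatever your system actually uses

def capture_db_state(order_id):
    """Snapshot the rows relevant to a failed check, right when it fails.

    The table, columns, and parameter are made up for the example; use the
    query you would normally run by hand while investigating.
    """
    query = "SELECT status, updated_at FROM orders WHERE order_id = ?"
    conn = sqlite3.connect("example.db")
    try:
        rows = conn.execute(query, (order_id,)).fetchall()
    finally:
        conn.close()
    return {
        "captured_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        # Inline the parameter so the query is copy & paste ready.
        "query": query.replace("?", repr(order_id)),
        "rows": rows,
    }
```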
Here are a few things you could add to the report to make it more helpful:
- FAIL – Foo was ABC, expected XYZ <= this is the usual
- <the date & time>
- SELECT foo FROM bar WHERE …
- API request & response – to provide supporting data
Format this information in a way a person can copy & paste to repeat the query or API call.
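To make that concrete, here is a small sketch that turns the captured details into a plain-text report. It reuses the snapshot shape from the sketch above; the field names and the api_call dict are illustrative, not from any particular framework.

```python
def format_failure_report(check, actual, expected, snapshot, api_call):
    """Build a plain-text failure report that pastes cleanly into a bug tracker."""
    lines = [
        f"FAIL - {check}: was {actual!r}, expected {expected!r}",
        f"Time of failure: {snapshot['captured_at']}",
        "",
        "Database state at failure (copy & paste to re-run):",
        f"  {snapshot['query']}",
        f"  rows: {snapshot['rows']}",
        "",
        "Supporting API call (copy & paste to repeat):",
        f"  {api_call['request']}",
        f"  response {api_call['status']}: {api_call['body']}",
    ]
    return "\n".join(lines)

# Example usage (values are made up):
# report = format_failure_report(
#     check="order status", actual="ABC", expected="XYZ",
#     snapshot=capture_db_state("o-123"),
#     api_call={"request": "GET /orders/o-123", "status": 500,
#               "body": "Internal Server Error"},
# )
```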
In a recent automation framework I created, I have a custom assertion method. When a test case makes an assertion that fails, it kicks off a troubleshooting module which invokes some GET & LIST APIs, searches log files for “Internal Server Error”, and so on. I try to provide as much information as possible to make investigations easier & faster. You can also copy & paste all of this into a bug tracking system so the developer has as much as possible to work with.
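To give a flavour of the idea, here is a rough sketch of that pattern rather than the actual framework: a custom assertion that, on a mismatch, runs a troubleshooting routine (read-only GET calls, a scan of log files for “Internal Server Error”) before raising. The requests library, the context dict, and the URL and log-file lists are just placeholders for this example.

```python
import logging
import re

import requests  # assumed HTTP client; any client would do

log = logging.getLogger("troubleshooting")

def assert_equal(actual, expected, context):
    """Custom assertion: on a mismatch, gather diagnostics before raising."""
    if actual == expected:
        return
    troubleshoot(context)
    raise AssertionError(f"was {actual!r}, expected {expected!r}")

def troubleshoot(context):
    # Hit read-only GET/LIST endpoints to record the server-side state
    # as it was when the test failed.
    for url in context.get("get_urls", []):
        resp = requests.get(url, timeout=10)
        log.error("GET %s -> %s %s", url, resp.status_code, resp.text[:500])

    # Scan application logs for tell-tale failures such as "Internal Server Error".
    for path in context.get("log_files", []):
        with open(path, errors="replace") as fh:
            for line_no, line in enumerate(fh, start=1):
                if re.search(r"Internal Server Error", line):
                    log.error("%s:%d: %s", path, line_no, line.rstrip())
```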