Saturday, June 23, 2012

Challenging claims, part 1 – Test Automation


Yesterday I was reading this http://qatestlab.com/knowledge-center/qa-testing-materials/can-automated-testing-replace-the-smart-software-tester/ article and noticed it contains many claims I don’t (fully) agree with. Before I started writing this post, I chose to read a few more of their articles to get a better idea of what they are saying – maybe I had missed something. The more I read, the more convinced I am that this is a company selling fake testing (putting “QA” in the company’s name doesn’t help with this feeling at all). I hope I am wrong about that and there is some misunderstanding somewhere. In the meantime, however, I will challenge some claims from their blog, starting with the article quoted above.

“Beginning the automated testing before the software can support the testing will only make more work through additional maintenance and rework.”

Firstly, sentences like this are usually of no value to testing, in my opinion. To me, it’s like saying “don’t write code too early” or “wasting money on irrelevant documentation is useless”. Nonetheless, there is also a mistake here. Test automation can be written before any product code exists, and it can still remain relatively free of maintenance and rework. If you don’t believe me, ask a TDD evangelist.
(Yes, I am assuming here that the author meant “beginning to write the test automation” when he wrote “beginning the automated testing”. I am also assuming he means test automation scripts when he writes about test automation. I wish to be corrected if this is not what he meant.)
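To make the TDD point concrete, here is a minimal test-first sketch. It is a hypothetical example of my own, not from the article: the test is (conceptually) written first, against a `slugify` function that does not exist yet, and the implementation is then written just to make the test pass.

```python
import unittest

def slugify(title):
    # Minimal implementation written *after* the test below,
    # just enough to make it pass: lowercase the title and
    # join the words with hyphens.
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # This test was (conceptually) written before slugify() existed.
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Challenging Claims"), "challenging-claims")

    def test_already_lowercase(self):
        self.assertEqual(slugify("hello"), "hello")

if __name__ == "__main__":
    unittest.main()
```

As long as the test describes behaviour ("a title becomes a lowercase hyphenated slug") rather than implementation details, it needs little rework even though it was written before the product code.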

“Eventually, automated testing takes the daily monotonous work of conducting the same action over and over away from software testers.”

I have heard this multiple times. It’s usually said by testers who execute test cases someone else told them to run. Sometimes it’s said by testers who, for example, have to create a lot of user accounts at the start of each sprint.
Testing is not monotonous. It’s monotonous only when done wrong (in my opinion). Think of it like sex. It most likely gets boring if you only do the missionary position under the blanket on Saturday evenings with the lights off. Maybe not my best analogy, but surely you see the connection?
What I would like to ask in this case is: why am I doing the same things over and over again, could I stop doing them, what other means are there, and so on. I would also like to ask the author what he considers “automated testing” in this context.

“Automation will repeat test after test for days on end, never failing to conduct them in exactly the equal way.”

I’ve never seen this happen in real life. I’ve never heard of it happening in real life. Automation fails because of source code changes, infrastructure updates, porting to different systems, timing errors… It even fails for the same reasons as the code it’s supposed to test! There are false positives, false negatives, blockages, crashes and so on. And just to mention it, in the universe I see around me, it’s pretty much impossible to conduct a test in exactly the same way it was done previously. Sounds philosophical? Good, it means you’ve started thinking about it!
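Timing errors are a good illustration of why "exactly the equal way" is a fantasy. The sketch below is a hypothetical example of my own (the names `fetch_data` and `timed_check` are made up): a check with a fixed timeout whose outcome depends on wall-clock time, so the identical test can pass on an idle machine and fail on a loaded one.

```python
import time

def fetch_data():
    # Stand-in for a real operation whose duration is never constant:
    # network calls, disk I/O, a busy CI machine...
    time.sleep(0.01)
    return "data"

def timed_check(timeout=0.5):
    start = time.monotonic()
    result = fetch_data()
    elapsed = time.monotonic() - start
    # This assertion depends on measured duration, not just on the
    # code under test; under heavy load the same test may fail even
    # though nothing in the product changed.
    return result == "data" and elapsed < timeout

print(timed_check())
```

Every run of this "same" test differs in elapsed time, scheduling, and system state, which is exactly the point: repetition in automation is approximate, never literal.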

“Automated testing never gets tired or burnt out or forgets to do a step.”

Indeed, automation doesn’t get tired or forget, but it does fail. We don’t call it the computer getting tired; we call it, for example, a memory leak or runaway CPU usage. We don’t call it forgetting to do a step; we call it missing a step, having a step that was changed, getting a timeout on a step… Testing is (in most cases I am aware of) not an endless struggle to repeat the same things while hoping something will (not) break at some point.

“Automated testing can just confirm that the software is as good today as it was yesterday.”

Automated testing cannot confirm that. Just like testers don’t assure quality. This was the initial claim that led me to believe they are selling fake testing. If you really claim you or your automation can confirm this, you are either lying or ignorant. Good test automation can "provide some confidence that nothing really big and obvious broke", as Matt Heusser wrote.

4 comments:

  1. Very nice debunking! The thing is, based on all the claims in the original post on qatestlab, you could keep writing posts!
    In fact, I am now tempted to take all the quotes you picked and give different reasons for them to be false :)

    Thanks for the inspiration for some future posts of my own!

    1. Thanks a lot, Martijn!

      I like that idea a lot. The quotes are wrong in so many ways and I'd love to see someone with good test automation skills discuss them. Please ping me on Twitter when you have published the post!


      Best regards,
      Jari

  2. Thank you for the quote, which I think is a good one. If I had a few hundred more characters (it was over twitter), I would have added that, while nothing big might break with the software you are compiling and releasing, there could always be something else that breaks.

    Someone changes the configuration on your web server so now Apache won't run, or updates a 3rd-party OS component with the same result, or downloads the newest DB driver and, while the software ran in your test environment, it doesn't run in prod. Or the network connection goes down, or someone breaks a firewall rule and port 80 no longer serves traffic ... there are all kinds of black swans that traditional, linear, GUI-driving automated testing won't catch.

    But, for 140 chars, I think my quote is decent. Thank you! :-)

    1. Hi Matthew,

      Thanks for your insight! It's very good to notice that something completely different can break. A good tester has an open mindset.

      I have seen many cases where a change somewhere affects a feature in a completely different place. For example, once a field was added to a web site's sign-up form, which broke the reporting functionality because there was all of a sudden a new column in the DB. (A change in the client UI caused a bug on the admin side.) Nothing else was found to have broken, but nobody expected the reporting to fail either.

      Definitely, 140 characters is a limitation we live with, but somehow we still make it work. Twitter is great!


      Best regards,
      Jari
