In the previous post (http://jarilaakso.blogspot.com/2012/02/illusions-on-testing-part-i-chimera.html) I noted five different chimeras of testing. They were:
1. Pre-scripting testing work
2. Using terminology (test vs. check)
3. Women are better on average
4. Comprehending exploratory testing
5. Writing claims without arguments
As some of you already speculated, I am going to put some of the blame on ISTQB and the like for these problems. I am not saying it's all because of them; I actually blame the testers themselves, as they are responsible for their own answers, but it's quite clear how "certification" shows up in what they wrote. I don't want to turn my blog into a rampage against any kind of "authority"; I am only pointing out, as an example, what kinds of problems certifications can cause.
When we look at the problems with pre-scripting testing work, as in creating test cases, we can quickly see that the testers only managed to cover very shallow cases. There were some really good ideas too, but the problem I see is that the testers weren't able to write down some genuinely high-risk cases, things I would call "basic tests". On the other hand, I am pretty sure many of the testers would have performed these tests in practice; maybe not on the first "round", but eventually, given time. Possible reasons why this problem occurred: hasty work, poor documentation skills, and a lack of imagination.
The second problem I noted was the failure to differentiate testing from checking. I've talked with quite a few people about this lately, and pretty much everyone seemed to think the distinction isn't necessary. I know this might sound like trolling, but I claim that everyone serious about testing understands the difference and uses the terms accordingly. I don't really care how non-testers use "test" as long as testers use it properly; all professions have their own terminology (jargon) for a reason. An additional rationale for differentiating the words is, for example, to avoid anyone thinking that testing can be automated. A few reasons for the problem: not being serious about testing, using ISTQB terminology, and working with customers who use a certain kind of terminology.
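To make the distinction concrete, here is a minimal sketch of what I mean by a check; the little shopping-cart function and its expected value are made-up examples of mine, not anything from the exercise. A check is an algorithmic comparison of an observed result against an expected one, and a machine can run it without any human judgment:

# Hypothetical function under test and a "check" against it.
def total_price(items):
    return sum(price for _, price in items)

def check_total_price():
    cart = [("book", 10.0), ("pen", 2.5)]
    # Pass/fail is decided by the machine; no human judgment is involved.
    assert total_price(cart) == 12.5

check_total_price()

Testing is everything around such checks: deciding what is worth checking in the first place, noticing the surprises no assertion ever asked about, and judging whether the product actually serves its users. That is the part that cannot be automated.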
On the third point I illustrated how women did better than men in the exercise. It's true that some of the best answers came from men, but overall women had better results. (As a side note, I remember reading quite a few surveys in various domains where the same pattern repeats.) There are many possible reasons for this result, and I think one big part is that women generally analyze situations more than men do. The men seemed to lack confidence (they searched online for answers and even misquoted them) and gave short answers, which could imply they were either lazy or wanted to spend more time on hands-on activities.
The fourth chimera was about exploratory testing. Everyone appears to mention it in their CV, but few understand what it is. It is seen as something done after all test cases have been executed, when there is time to look for other issues; it is widely described as an experience-based technique; and it is commonly assumed to have no means of reporting what has been done. My blame list of reasons: ISTQB, lack of professional aspiration, and the culture of the company the testers work in.
I saved the best part for last. There seems to be a strong culture of making claims without justification. If the claims made sense (i.e., if I could agree with a claim without hesitation and could see how the tester came to the conclusion), I could accept some of them. But in this case there were a lot of claims I simply can't accept, most likely not even with strong argumentation, which would be the minimum requirement for me to consider a statement valid. (Before sending the problems to the testers, I mentioned that there are no correct answers, but that they need to be able to convince me their answer is a good one.) The main reason behind claims without explanations: lack of critical thinking.
What do you think are the reasons for these issues? Perhaps you would like to share another general problem you have seen in testers' thinking?