This document summarises a talk on roles in software quality. It maps roles such as development, checking, and exploration onto roles in ice hockey: attack, defence, and goalie. It emphasises that exploration is as important for finding issues and ensuring software quality as a goalie is for a hockey team's defence. The slides are presented by Sami Söderblom and include his background and contact details.
Sami Söderblom - Building The All-Star Lineup of Quality
3. Me?
Sami Söderblom
+358 41 538 2001
sami.soderblom@sogeti.com
sami.soderblom@gmail.com
35 yrs old, married, Hontai Yoshin Ryu
jujutsu, photography, disc golf
Working steadily since the age of 13
10 yrs of testing, test/quality mgmt, process
development, training, coaching, etc.
Testing experience in domains such as video surveillance,
advertising, insurance, banking, telecom, video gaming, retail sales, freight
logistics, public sector, human resources, electric networks…
Company experience in Fortum, Finnet, The Finnish Consumers’ Association,
Telia Mobile, Siemens, Mirasys, Blyk, Tapiola, Itella, Nordea, The Finnish
National Board of Customs, Fingrid, Teosto…
Steering group member in Finnish Association of Software Testing
11. Algorithmic vs. heuristic thinking
[Word cloud contrasting the two mindsets. Algorithmic thinking: defined problems, defined solutions, techniques, rules, determinism, metrics, algorithms, reading a map, "slow thinking", expected results, automation, quantification. Heuristic thinking: "fast thinking", drawing a map, rules of thumb, gut feelings, exploration, heuristics, decisions, associations, intuition, creativity, arbitrary choices, educated guesses.]
12. The Microcosmos of Quality
[Diagram: tasting a dish. Repeated tasting (heuristics) pinpoints the problem; salt and pepper (algorithms) fix it.]
19. Drive
”You focus on failure, so your clients can focus on success.” –Lessons Learned
in Software Testing by Kaner, Bach and Pettichord
Development
Checking
Exploration
25. Roles in ice hockey
Attack
Attack
Defence
Attack
Defence
Goalie
26. Roles in software development
Development
Development
Checking
Development
Checking
Exploration
27. The All-Star Lineup of Quality
[Diagram: QUALITY at the centre of Development, Checking and Exploration, connected through Design, Execution, Learning and Steering.]
"Testing is a quest within a vast, complex, changing space. We seek bugs. It is not the process of demonstrating that the product CAN work, but exploring if it WILL." –James Bach
#7: Familiar:
Does the software behave as intended under the conditions it's supposed to be able to handle?
Explicit requirements
Planned testing
Algorithms
Unfamiliar:
Are there any other risks?
Implicit requirements, needs, wants, etc.
Exploratory testing
Heuristics
The familiar you can expect and prepare for; perhaps you can even prevent its problems. Defined problems, defined solutions. But the unfamiliar doesn't work that way. When moving into the unfamiliar you have to expose yourself to situations where observations can be made. There aren't specific places where bugs hide, but there are areas where they might. Controlling the chaos.
#8: In plan-driven testing, the planning and preparation phase takes more time than the testing itself.
The rework caused by changes wrecks the test-planning schedule of upcoming iterations. At the same time the big picture of testing is lost: how much work has been done, how much is left, and so on.
Teams often drift into polishing test cases to perfection instead of asking whether all the necessary functionality is covered by the tests.
It is easy to lull yourself into thinking that as long as all the written tests have been run, everything is fine…
Checking is mostly about proving; exploration is about exposing yourself to new information.
Without development there is no checking and no exploration. All three create the conditions for all three to operate. For example, if there is no checking, exploration turns into checking, because the problems that slip through, possibly serious ones, consume all the attention. Checking does not come first, however; it too is built through exploration.
Development and checking are, for the most part, very technical, quantified activities. Testing as a whole, however, binds a human need to the software, so it is influenced by psychology (behaviour, learning, perception, etc.), anthropology, even philosophy. Testing aims to learn and to discover, aided of course by all the means of information technology.
As an example of the quality aspect, take the boxer Eeva Wahlström. When coaching a boxer, the development phase teaches her to punch and to move, and sends her running and lifting weights. In the checking phase you look at what has been achieved and drill reactions to specific events (a straight punch and its dodge/block). In the testing phase you bring in a sparring partner who may do anything, and spar the quality into shape. Eeva sparred against men so that the women she faced in the ring would be easy pickings.
A tester's professional skill often rests on the generic models offered by the likes of ISTQB, TMap, and so on. It makes no sense for a human to repeat generic test cases (boundary values, equivalence classes, etc.) time after time. They should be built into the checking machinery, which usually means automation. What the industry has learned about testing should be put to use in development and checking! Create a process for this that also covers maintenance! Actual testing, however, is always fundamentally brain-driven (a sapient activity).
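A minimal sketch of what folding generic test cases into automated checking can look like. The function under test (`validate_age`) and its valid range are hypothetical examples, not from the slides; the point is only that boundary values and equivalence classes are mechanical enough to hand over to a machine:

```python
def validate_age(age: int) -> bool:
    """Hypothetical system under test: accepts ages 0..120 inclusive."""
    return 0 <= age <= 120

# Boundary values around both edges of the valid equivalence class.
BOUNDARY_CASES = [
    (-1, False),   # just below the lower bound
    (0, True),     # lower bound
    (1, True),     # just above the lower bound
    (119, True),   # just below the upper bound
    (120, True),   # upper bound
    (121, False),  # just above the upper bound
]

def run_boundary_checks():
    """Return the cases where the system disagrees with the expectation."""
    return [(value, expected) for value, expected in BOUNDARY_CASES
            if validate_age(value) != expected]

print(run_boundary_checks())  # an empty list means every check passed
```

Once such cases live in the checking machinery, a human never repeats them by hand; exploration can then spend its time where no defined case exists yet.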
If testers are tied to checking, they experience the work as a burden. That is not a good starting point for testing. If testing is done only by following scripts and checking, you never get to chart the human strengths and qualities that find bugs. Sami would never have been able to play to his strengths and fix his weaknesses if that change had not happened. The same applies to everyone. Freedom and responsibility have brought forth brilliant testers.
The bugs that surface in checking don't require professional testers and their intuition. Professional testers can, however, coordinate that activity and act in a coaching role, for example.
Taxi driver analogy:
There are good ones and bad ones. The bad ones lean on GPS as a kind of substitute for expertise. Of course professionals can use GPS too (e.g. a traffic radar). The ultimate goal, however, is to remove the resource problem by making it possible for anyone to become a taxi driver.
If you don't concentrate on hunting bugs and on the means of finding them, you can never become good at it.
Remember the emotional side as well. Development, and checking staffed by developers, are activities that aim at an end result, at getting something done. The creative process breeds an emotional bond to one's own output, and after that it can no longer be evaluated impartially. One's own baby is never ugly, and so on. A tester does not submit to this but keeps a cool head, continuously evaluates not only the end result but also the road to it, and produces information to support decision-making.
Remember also the "information objective", i.e. what information the stakeholders want. Sometimes stakeholders may only be interested in proving things, or in making testing a scapegoat. Then you serve that need, even though it is a misuse of valuable resources. But if testing is meant to examine quality and the added value the system delivers to end users, circling the familiar is not enough. Exploration is needed.
#10: Questions like "How about the nominal range?" and "Do you mean that a test case set to the 'Passed' state cannot be set to 'Failed' by anyone?" are heuristics that point out the fallacy of leading with algorithms in testing.
#12: Deep Blue and Garry Kasparov both play chess at about the same level of expertise. Deep Blue calculates every possible move, every algorithm, and uses the most effective one. Mr. Kasparov uses intuition and heuristics that fit the context, and discards "useless" moves in an instant. Weigh the results of a decision against the effort it requires: rapid cognition and snap judgments often give results as good as thorough deliberation, or even better.
To read:
Blink by Malcolm Gladwell
Thinking, Fast and Slow by Daniel Kahneman
#13: After applying spices, etc., tasting is not the same as before. The context changes, so the approach is different and the same heuristic won't work.
Heuristics try to find the problem, a pattern in something that can be fixed with specific means. Algorithms help to make sure that the problem won't happen again within this context.
#14: Milk: Spellcheck doesn’t find anything wrong with this sentence. Common sense however states otherwise.
Dancer: Is that a good photograph? What makes it so? Basic knowledge about aesthetics, composition, light usage, etc.
Mobile phone: Which is the best? Opinions of those who matter.
Nestori the cat: Some might think it's the ugliest cat they've seen. For Sami it's the most beautiful cat. Why? An emotional bond, which easily forms towards something you've invested your time in.
And so on. Everything affects testing and how its results are interpreted.
#15: All very valid questions that sprout more questions that sprout more questions, and so on. Expanding test coverage.
#17: This is what your mind map looks like after a heuristic barrage. Stars indicate how well you think you've tested a certain area of interest:
- One star. Superficial inspection. Doesn't require any testing expertise, but that doesn't mean bugs are not found. In fact, the most serious ones are often the easiest to find.
- Two stars. Tested as well as is possible within these time and resource constraints.
- Three stars. Additional testing wouldn't bring any added value. The tester has poured their heart into this and has no ammo left.
#20: Development and verification drive for completion. Testing drives to seek out the risks that prevent that completion. Remember also the emotional bond that grows towards your own creation. A tester does not have that.
Remember also that when you're driving for completion, you base your work mainly on stated requirements, explicit wants, a definition of done. But quality is so much more than that: needs can be anything, ranging from clearly stated explicit ones to highly challenging implicit ones; they can take all kinds of forms, come from different directions and stakeholders, and ultimately they can be something that simply cannot be put into words.
Driving for completion can also mean losing yourself in the implementation. When people are too closely involved in the development process, they can become blind to the problems and risks that may persist in the end result or in the process itself. They may develop an emotional bond to the product or to the way of working, and eventually an inability to say: "Our baby is ugly." The absence of defocus is also a risk here: the objects under observation are often so small and/or so removed from their context that it's impossible to map their impact on the big picture. This often leads to indifference towards quality viewpoints such as usability, performance, testability, etc.
#21: When an orchestra plays Beethoven's 5th, it is not a big issue if an individual musician plays a few notes awry. It's the high points that matter: the big drum hits, the 4/4 sequence sync, the ends of the movements and of course the end of the piece.
#22: As in Beethoven's 5th, the high points matter. Set a calendar and follow progress. You can set control points to prevent the risks of waterfall nonsense.
If there are regulations bound by law or standards, they can be verified. Note, however, that the risks that might threaten them are found better by testing. Neither is a safety net, though, because the area to cover is infinite and constantly changing.
#23: Anyone can test, but make sure there's someone who takes responsibility for it. And pair them up with professional testers. Do pair testing, hold testing workshops and testing-related events, run playful bug hunts (remember bug triage), etc. Replace those idle moments of Facebook and Angry Birds with something fun and productive, namely testing.
People who just do something are resources, they play positions. Those who take responsibility have roles.
#24: You do via POSITION, you take responsibility via ROLE.
#25: You develop products via the POSITION of a tester. You develop testing via the ROLE of a tester. If these are mixed with verification and coding, neither kind of development will take place.
#28: Verification is proving something. Exploration and testing are about exposing yourself to new information.
You need all three to build quality. All three create the conditions for all three to operate. It must be understood, however, that they are vastly different areas of expertise. Development and verification lend themselves to algorithmic approaches, whereas testing has a more heuristic nature. Testing binds a human need to the software, and is thus influenced by things beyond quantification. If testers are bound to verification activities, they experience it as a burden. If testing is done via scripts and algorithms, a tester can never use their ability to find things. You don't need the skills of professional testers to find those rare bugs that surface in verification. And if you don't concentrate on bugs and the skills that help you find them, you can never be a good tester.
What the industry has learned about testing should be taken into use in development and verification. Eventually a heuristic becomes an algorithm, and if possible and sensible, it should be automated.
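How a heuristic hardens into an algorithm can be sketched in code. The scenario below is hypothetical (the data, the `find_user` function, and the whitespace bug are illustrations, not from the slides): an exploratory observation, once understood, is frozen into a deterministic, automatable regression check:

```python
# Hypothetical finding from an exploratory session: user lookups fail
# when the input carries stray whitespace or mixed case.
USERS = {"sami": "tester"}

def find_user(name: str):
    # The fix that came out of exploration: normalise the input first.
    return USERS.get(name.strip().lower())

def regression_checks() -> bool:
    """The one-off exploratory observation, now a repeatable check."""
    cases = ["sami", " sami", "sami ", "SAMI", "\tsami\n"]
    return all(find_user(c) == "tester" for c in cases)

print(regression_checks())  # True once the normalisation is in place
```

The exploratory act (noticing that odd inputs misbehave) needed a thinking human; the resulting check no longer does, so it belongs in the verification machinery.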
Boxer analogy:
Testing is like the sparring partner of a boxer. You can use algorithms to improve movement, punching, physical strength, etc., but only with the help of a heuristic training opponent can the boxer prepare for the real opponent (a reference to the production release).
Taxi driver analogy:
There are good ones and bad ones. The bad ones have GPS, a replacement for expertise. Of course the good ones use GPS too, but they lead with thinking and expertise. Behind it all is the drive to remove the resource problem: when you have GPS, anyone can be a taxi driver. That threat lurks in testing too.