This document summarizes the software testing process: determining the test methodology, planning the tests, test design, and test implementation. It covers choosing the appropriate quality standard and testing strategy based on the potential damage from failures, and outlines the planning questions: what to test, which sources to use for test cases, who performs the tests, where they are performed, and when they are terminated. Rating systems for prioritizing modules, integrations, and applications by damage severity and risk are also presented.
1. OHT 10.1
1
The testing process
Determining the test methodology phase
Planning the tests
Test design
Test implementation
Test case design
Test case data components
Test case sources
Automated testing
The process of automated testing
Types of automated testing
Advantages and disadvantages of automated testing
Alpha and beta site testing programs
Chapter 10: Software Testing - Implementation
2. OHT 10.2
2
Introduction
We need to discuss testing effectiveness and
testing efficiency.
Equivalently, we want to reduce the number of undetected
errors while testing with fewer resources over less time.
We need to design testing procedures that are effective in
detecting errors and that do so as efficiently as possible.
We will also look at automated testing.
3. OHT 10.3
3
Introduction: Topics to Cover
Much of this set of slides focuses on testing at
various levels:
Unit tests
Integration tests
System tests.
4. OHT 10.4
4
Ultimate Desired Outcomes for the
Chapter
Today:
Describe the process of planning and designing tests
Discuss the sources for test cases, with their advantages
and disadvantages
Next:
List the main types of automated software tests
Discuss the advantages and disadvantages of automated
computerized testing as compared to manual testing
Explain alpha and beta site test implementation and
discuss their advantages and disadvantages.
5. OHT 10.5
5
10.1 The Testing Process
Testing is done throughout the development process.
Testing is divided into phases, beginning in the
design phase and ending at the customer's site.
The testing process is illustrated on the next slide.
The two fundamental decisions that must be made
before planning for testing can occur are:
What is the required software quality standard, and
What is the software testing strategy.
6. OHT 10.6
6
Determining the test
methodology
Planning the tests
Designing the tests
Performing the tests
(implementation)
7. OHT 10.7
7
Determining the Appropriate
Software Quality Standard
Different standards are required for different software applications,
e.g. safety-critical software or aircraft instrumentation - critical.
In other cases, a medium-level quality standard is sufficient.
So, the expected damage resulting from failed software determines
the required standard of software quality.
Examples of damage to customers and users, as well as to
developers, are shown on the next two slides:
8. OHT 10.8
8
Damage to customers and users:
1. Endangers the safety of human beings
2. Affects an essential organizational function with no system
replacement capability available
3. Affects functioning of firmware, causing malfunction of an entire
system
4. Affects an essential organizational function but a replacement is
available
5. Affects proper functioning of software packages for business
applications
6. Affects proper functioning of software packages for a private
customer
7. Affects functioning of a firmware application but without affecting the
entire system.
8. Inconveniences the user but does not prevent accomplishment of the
system's capabilities
9. OHT 10.9
9
Damage to the developer:
1. Financial losses
* Damages paid for physical injuries
(aircraft or auto instrumentation, health equipment - lawsuits!)
* Damages paid to organizations for malfunctioning software
(companies have many lawyers on staff!)
* Purchase cost reimbursed to customers
* High maintenance expenses for repair of failed systems
2. Non-quantitative damages
* Expected to affect future sales
* Substantially reduced current sales
10. OHT 10.10
10
Determining Software Testing Strategy
Big bang or incremental? So, do we want the
testing strategy to be big bang or incremental?
(In the past, major testing was done at the end.)
If incremental, top down or bottom up?
Which parts of the testing plan should be done
using white box testing?
Black box?
Which parts of the test plan should be done using
an automated test model?
11. OHT 10.11
11
Planning the Tests
We need to undertake:
Unit tests
Integration tests, and
System Tests.
Unit tests deal with small units: modules,
functions, objects, classes;
Integration tests deal with units constituting a
subsystem or other major hunks of capability, and
System tests refer to the entire software package
or system.
These are often done by different constituencies!
12. OHT 10.12
12
Lots of Questions
So we first need to consider five basic issues:
What to test
Which sources do we use for test cases
Who is to perform the tests
Where to perform the tests, and
When to terminate the tests.
Questions with not so obvious answers!
13. OHT 10.13
13
What to Test
We would like to test everything.
Not very practical.
We cannot undertake exhaustive testing -
the number of possible paths is effectively infinite.
Consider:
Do we totally test modules that are 98% reused?
Do we really need to test things that have been
repeatedly tested with only slight changes?
What about modules written by newcomers?
What about sensitive modules that pose a lot of risk?
14. OHT 10.14
14
So, which modules need to be unit tested?
Which integrations should be tested?
Low-priority applications that were covered in unit
testing may not need to be included in the
system tests.
Lots of planning is needed, as testing IS a very
expensive undertaking!
15. OHT 10.15
15
Rating Units, Integrations, and
Applications
We need to rate these issues to determine their
priority in the testing plan.
Rate based on two factors:
1. Damage severity level - the severity of the results if the
module / application fails.
How much damage is done?
Will it destroy our business? Our reputation?
2. Software risk level - the probability of failure.
Factors affecting risk are on the next slide:
16. OHT 10.16
16
Module/Application Issues
1. Magnitude (size)
2. Complexity and difficulty
3. Percentage of original software (vs. percentage of
reused software)
Programmer Issues
4. Professional qualifications
5. Experience with the module's specific subject matter.
6. Availability of professional support (backup by
knowledgeable and experienced staff).
7. Acquaintance with the programmer and the ability to
evaluate his/her capabilities.
17. OHT 10.17
17
Computations
It is essential to calculate risk levels.
We must budget our testing because of its high cost,
but we must temper our testing with cost/risk!
This helps determine what to test and to what extent.
Use a combined rating that brings together damage severity
(how serious would the damage be?) and risk (how probable is failure?).
Sample calculations, where A is damage severity and B is risk:
C = A + B
C = k*A + m*B
C = A * B // the most familiar of these
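
A minimal sketch of these three combination formulas in Python; the weights
k = 7 and m = 2 and the sample values mirror the table two slides ahead, but
any project-specific weights would work:

    # Combined rating C of damage severity (A) and failure risk (B).
    def additive(a, b):
        return a + b

    def weighted(a, b, k=7, m=2):
        # k and m are illustrative weights, not prescribed values.
        return k * a + m * b

    def multiplicative(a, b):
        return a * b

    a, b = 4, 3  # e.g., damage severity 4, risk level 3 on a 1-5 scale
    print(additive(a, b), weighted(a, b), multiplicative(a, b))  # 7 34 12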
18. OHT 10.18
18
If we are including units, integrations, and applications in a
test plan, we need to know how many resources are
needed.
Thus, we need to prioritize.
The higher the rating and priority, the more testing
resources should be allocated.
Consider:
Some tests cover code with a high percentage of reuse.
Some applications are developed by new employees.
We typically use a 5-point scale, where 5 is high.
(Variations include a 1-5 severity level and a probability of
0.0 to 1.0 of occurrence.)
Results for the Super Teacher application appear on the next slide.
19. OHT 10.19
19
Combined rating method
Application                                          A    B    A+B      7A+2B    A*B
1. Input of test results                             3    2    5 (4)    25 (5)    6 (4)
2. Interface for input and output of pupils' data
   to and from other teachers                        4    4    8 (1)    36 (1)   16 (1)
3. Preparation of lists of low achievers             2    2    4 (5-6)  18 (7)    4 (5-6)
4. Printing letters to parents of low achievers      1    2    3 (7-8)  11 (8)    2 (8)
5. Preparation of reports for the school principal   3    3    6 (3)    27 (3)    9 (3)
6. Display of a pupil's achievements profile         4    3    7 (2)    34 (2)   12 (2)
7. Printing of a pupil's term report card            3    1    3 (7-8)  23 (6)    3 (7)
8. Printing of a pupil's year-end report card        4    1    4 (5-6)  26 (4)    4 (5-6)

A = damage severity factor; B = software risk (failure probability) factor.
Priority ranks appear in parentheses.
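
As a cross-check on the table, a short Python sketch that recomputes the
A*B column and the priority ranks (highest rating first); the application
names are abbreviated for the example:

    # Recompute the A * B ratings and their priority order from the table.
    ratings = {
        "test results input": (3, 2),
        "teachers interface": (4, 4),
        "low-achiever lists": (2, 2),
        "parent letters": (1, 2),
        "principal reports": (3, 3),
        "achievement profile": (4, 3),
        "term report card": (3, 1),
        "year-end report card": (4, 1),
    }
    combined = {name: a * b for name, (a, b) in ratings.items()}
    # Equal ratings share a rank range in the table (e.g., 5-6).
    for rank, (name, c) in enumerate(
            sorted(combined.items(), key=lambda kv: -kv[1]), start=1):
        print(rank, name, c)  # 1 teachers interface 16, 2 achievement profile 12, ...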
20. OHT 10.20
20
1. Which Sources Should be Used for
Test Cases?
Do we use live test cases or synthetic test cases?
All three levels of tests must consider this.
Do we use live data or contrived (dummy) data?
What do you think?
Also need to consider single / combined tests and
the number of tests.
How about if the testing is top down? Bottom up?
What sources do you think might be needed then??
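
As a small illustration of the live-versus-synthetic distinction, here is a
hypothetical pair of unit tests; the grade_average function and both data
records are invented for the example:

    # Hypothetical example: synthetic vs. live test data.
    def grade_average(grades):
        return sum(grades) / len(grades)

    def test_synthetic_boundary():
        # Synthetic (contrived) data: crafted to hit a boundary case.
        assert grade_average([100]) == 100

    def test_live_record():
        # Live data: a record captured from real use, replayed verbatim.
        assert abs(grade_average([87, 92, 78]) - 85.667) < 0.001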
21. OHT 10.21
21
Who Performs the Tests?
Unit testing - done by the programmer and/or the
development team.
Integration testing - done by the development team or
a testing unit.
System testing - usually done by an independent
testing team (internal, or external consultants).
In small companies, two development teams can
swap roles and test each other's products.
Can always outsource testing too.
22. OHT 10.22
22
Where to Perform the Tests?
Typically at the software developer's site.
For system tests, test at the developer's site or the
customer's site (the target site).
If outsourced, testing can be done at the consultant's
site.
23. OHT 10.23
23
When are Tests Terminated?
This is always the $64,000 question!!
Decision normally applies to system tests.
Five typical alternatives:
1. The Completed Implementation Route
Test until everything is error free (good luck!).
All testing, plus regression testing;
disregards budget and timetable constraints.
Applies to the perfection approach.
24. OHT 10.24
24
2. The Mathematical Models Application Route
Here, modeling is used to estimate the percentage of
undetected errors based on the rate of error detection.
When the detection rate falls below a certain level, stop.
Disadvantage: the mathematical model may not fully represent
the project's characteristics,
so testing may be cut short or extended too far.
Advantage: a well-defined stopping point.
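
A minimal sketch of such a stopping rule, assuming the simplest model -
stop when the detection rate (errors found per testing hour) falls below a
chosen threshold; all numbers are illustrative:

    # Stop testing once the error-detection rate drops below a threshold.
    errors_per_week = [14, 9, 6, 4, 2, 1]  # errors detected each week (illustrative)
    hours_per_week = 40
    THRESHOLD = 0.1                        # errors per testing hour

    for week, errors in enumerate(errors_per_week, start=1):
        rate = errors / hours_per_week
        if rate < THRESHOLD:
            print(f"Stop after week {week}: rate {rate:.3f} is below threshold")
            break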
25. OHT 10.25
25
3. The Error Seeding Route
Here, we seed known errors prior to testing.
The underlying assumption is that the percentage of discovered
seeded errors corresponds to the percentage of real
errors detected.
Stop once the residual percentage of undetected seeded
errors reaches a predefined level considered acceptable for
passing the system.
Disadvantages: extra workload for the testers, and the method
relies on the past experience of those seeding the errors;
moreover, seeding cannot accurately estimate the
residual rate of undetected errors in unfamiliar systems.
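
The estimate behind this route can be sketched in a few lines (Mills'
error-seeding model); the counts below are illustrative:

    # If testers find the same fraction of seeded and real errors, then
    # total real errors ~= real_found * seeded_total / seeded_found.
    seeded_total = 50
    seeded_found = 40   # 80% of the seeded errors were detected
    real_found = 120

    estimated_real_total = real_found * seeded_total / seeded_found
    residual_seeded_pct = 100 * (seeded_total - seeded_found) / seeded_total
    print(estimated_real_total, residual_seeded_pct)  # 150.0 20.0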
26. OHT 10.26
26
4. The Dual Independent Testing Teams Route
Here, two teams implement the testing process
independently, then
compare their lists of detected errors and
calculate the number of errors left undetected.
Lots of statistics here!
High costs - when is this justified?
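
The statistic usually applied here is a capture-recapture estimate
(Lincoln-Petersen); a minimal sketch with illustrative counts:

    # Estimate total errors from two independent teams' overlapping findings.
    team_a = 60          # errors found by team A
    team_b = 50          # errors found by team B
    found_by_both = 30   # errors appearing on both lists

    estimated_total = team_a * team_b / found_by_both
    found_overall = team_a + team_b - found_by_both
    print(estimated_total, estimated_total - found_overall)  # 100.0, 20.0 undetected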
27. OHT 10.27
27
5. Termination After Resources Have Petered Out
This means we stop when the budget or the time for testing has run out.
Very common in industry!
28. OHT 10.28
28
Test Plan
Lastly, system testing is documented in a software
test plan.
Common formats are available.
29. OHT 10.29
29
Test Design and Software Test Plan
Products of Test Design
Detailed design and procedures for each test
The input database / files for testing.
There are standard software test plan (STP)
templates.
30. OHT 10.30
30
1 Scope of the tests
1.1 The software package to be tested (name, version and revision)
1.2 The documents that provide the basis for the planned tests
2 Testing environment
2.1 Sites
2.2 Required hardware and firmware configuration
2.3 Participating organizations
2.4 Manpower requirements
2.5 Preparation and training required of the test team
31. OHT 10.31
31
3 Tests details (for each test)
3.1 Test identification
3.2 Test objective
3.3 Cross-reference to the relevant design document and the requirement
document
3.4 Test class
3.5 Test level (unit, integration or system tests)
3.6 Test case requirements
3.7 Special requirements (e.g., measurements of response times, security
requirements)
3.8 Data to be recorded
4 Test schedule (for each test or test group) including time
estimates for:
4.1 Preparation
4.2 Testing
4.3 Error correction
4.4 Regression tests
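
As one hypothetical way to keep such a plan machine-readable, the per-test
details of section 3 can be modeled as a small data structure; the field
names below simply mirror the template above and are not a standard API:

    # Hypothetical sketch of one "Test details" entry (STP section 3).
    from dataclasses import dataclass, field

    @dataclass
    class TestDetails:
        identification: str                 # 3.1
        objective: str                      # 3.2
        design_and_requirement_refs: str    # 3.3
        test_class: str                     # 3.4
        level: str                          # 3.5: "unit", "integration" or "system"
        case_requirements: list = field(default_factory=list)     # 3.6
        special_requirements: list = field(default_factory=list)  # 3.7
        data_to_record: list = field(default_factory=list)        # 3.8

    t = TestDetails("T-01", "Verify report card totals",
                    "DD 4.2 / REQ 12", "functional", "system")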
32. OHT 10.32
32
1 Scope of the tests
1.1 The software package to be tested (name, version and revision)
1.2 The documents providing the basis for the designed tests
(name and version for each document)
2 Test environment (for each test)
2.1 Test identification (the test details are documented in the STP)
2.2 Detailed description of the operating system and hardware
configuration and the required switch settings for the tests
2.3 Instructions for software loading
33. OHT 10.33
33
3. Testing process
3.1 Instructions for input, detailing every step of the input process
3.2 Data to be recorded during the tests
4. Test cases (for each case)
4.1 Test case identification details
4.2 Input data and system settings
4.3 Expected intermediate results (if applicable)
4.4 Expected results (numerical, message, activation of equipment, etc.)
5. Actions to be taken in case of program failure/cessation
6. Procedures to be applied according to the test results summary
34. OHT 10.34
34
Test Implementation
Really, this is just running the tests, correcting the
detected errors, and running regression tests.
Testing is done when the outcomes satisfy the
developers.
When are these tests run? (time of day / date?)
35. OHT 10.35
35
Regression Testing
Need not test everything.
Typically re-test only those artifacts directly changed
and those providing inputs and outputs to these
changed artifacts (modules).
Very often new errors are introduced when changes
are made.
There's always risk in not testing everything, but
these decisions must be made.
Results of testing are documented in a test report.
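
A minimal sketch of that selection rule, assuming the module dependency map
is known: retest the changed modules plus their direct suppliers and
clients; the map below is illustrative:

    # Retest changed modules plus modules feeding them or fed by them.
    depends_on = {                  # module -> modules it takes input from
        "report": ["grades", "format"],
        "grades": ["db"],
        "format": [],
        "db": [],
    }
    changed = {"grades"}

    retest = set(changed)
    for module, inputs in depends_on.items():
        if module in changed:
            retest.update(inputs)   # suppliers of a changed module
        if changed & set(inputs):
            retest.add(module)      # clients of a changed module
    print(sorted(retest))  # ['db', 'grades', 'report']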
37. OHT 10.37
37
1. Test identification, site, schedule and participation
1.1 The tested software identification (name, version and revision)
1.2 The documents providing the basis for the tests (name and
version for each document)
1.3 Test site
1.4 Initiation and concluding times for each testing session
1.5 Test team members
1.6 Other participants
1.7 Hours invested in performing the tests
2. Test environment
2.1 Hardware and firmware configurations
2.2 Preparations and training prior to testing
38. OHT 10.38
38
3. Test results
3.1 Test identification
3.2 Test case results (for each test case individually)
4. Summary tables for total number of errors, their
distribution and types
4.1 Summary of current tests
4.2 Comparison with previous results (for regression test summaries)
5. Special events and testers' proposals
5.1 Special events and unpredicted responses of the software during testing
5.2 Problems encountered during testing.
5.3 Proposals for changes in the test environment, including test preparations
5.4 Proposals for changes or corrections in test procedures and test case files