We present a set of automatic testing tools constructed for SA Forum-based HA middleware. We demonstrate the robustness testing approach by comparing the benchmarking results of three HA middleware implementations.
1. Budapest University of Technology and Economics
Department of Measurement and Information Systems
Comparing Robustness of AIS-Based
Middleware Implementations
Zoltán Micskei, István Majzik
Budapest University of
Technology and Economics
Francis Tam
Nokia Research Center
Nokia Group
International Service Availability Symposium (ISAS) 2007
3. Robustness
The degree to which a system operates
correctly in the presence of
o exceptional inputs or
o stressful environmental conditions.
[IEEE Std 610.12-1990]
4. Robustness testing
Functional testing
o Conformance, expected output included
o Valid inputs, plus some invalid ones
Robustness testing
o Try to break the system
o Large amounts of invalid input
5. Goal
Test and compare the robustness of HA MW
o Based on the common interface
o Several fault types and modes
o Automatic test generation
6. Fault model: Primary sources
Layers (diagram): Custom Application, AIS implementation, Operating System, Hardware
Primary fault sources reaching the middleware: API calls, OS calls
7. Fault model: Secondary sources
Layers (diagram): Custom Application, AIS implementation, Operating System, Hardware
Secondary sources: external components, operators via the human interface, HW failures
These reach the middleware indirectly, through the same API calls and OS calls
10. Type specific testing
Goal: test the whole interface
Exceptional cases enumerated per function (diagram):
o saAmfInitialize: handle invalid, handle closed
o saAmfPmStart: handle invalid, handle closed, monitoring started, component not registered
o saComponentNameGet: handle invalid, handle closed, component not registered, pointer null
11. Type specific testing
Goal: test the whole interface
Exceptional values defined once per parameter type (diagram):
o SaAmfHandleT: handle invalid, handle closed
o SaAmfName: name invalid, component not registered
12. Type specific testing
For each function:
o Fill a template with the parameters
o Invalid and valid values
Middleware specific:
o State-based calls
o Complex setup code for type values
o Running tests as SA-aware components
14. Mutation-based testing
Goal: test complex scenarios using multiple functions
How?
o Write complex tests
o Mutate existing code by injecting typical robustness faults
Sources to mutate:
o SAFtest
o Functional tests in openais
16. OS call wrapper
Goal: test environmental conditions
Provide workload
Intercept system calls and
o delay them,
o change their return values.
Support in the OS:
o e.g. strace and LD_PRELOAD in Linux
17. Testing results
Three middleware implementations:
o openais version 0.80.1 and trunk
o Fujitsu Siemens SAFE4TRY
Test execution environment:
o Configuration file, restarting the MW, logging
Results:
o Differences in headers
o Test program aborts
o Middleware crashes
18. Type specific
                    openais-0.80.1  openais-trunk  SAFE4TRY
success                      24568          26019     29663
segmentation fault            1100           1468         0
timeout                        467           2178         2
SAFE4TRY seems to be more robust to these kinds of inputs.
For 6 functions in openais the middleware itself crashed.
In openais 0.69, segmentation faults numbered 8001 out of 13460 tests.
20. OS call wrapper
                     openais-0.80.1  openais-trunk  SAFE4TRY
No failure observed               6              5         5
Application failed                0              2         1
Middleware failed                 3              2         3
Observations:
o All are vulnerable to system call failures
o Some calls cause failures in all implementations: e.g. socket
o Some depend on the system: e.g. bind
21. Future work - Obtaining metrics
Large amount of output
Metric: number of failed tests for a function vs. robustness faults in the function
Help:
o Assigning expected error codes
o Data mining tools / decision trees
22. Lessons learnt
Simple tests can find robustness failures
Different methods find different failures
There are problems even with the headers
Existing applications are not up-to-date
o LDAP DN format, component name get
Middleware implementations differ heavily
o How to start and stop; configuration files
For complex scenarios and OS call failures,
o detailed workload and complex test setup are needed
Robustness is improving