Budapest University of Technology and Economics
Department of Measurement and Information Systems
Comparing Robustness of AIS-Based
Middleware Implementations
Zoltán Micskei, István Majzik (Budapest University of Technology and Economics)
Francis Tam (Nokia Research Center, Nokia Group)
International Service Availability Symposium (ISAS) 2007
Motivation
Comparisons of HA middleware implementations mostly target performance. However:
[Diagram: an Application with Component 1 ... Component N running on top of the SAF AIS interface, sending invalid input through the API]
A faulty application could crash even the HA middleware!
Robustness
• The degree to which a system operates correctly in the presence of
  o exceptional inputs or
  o stressful environmental conditions.
  [IEEE Std 610.12-1990]
Robustness testing
Functional testing
o Conformance: expected outputs included
o Valid inputs, plus some of the invalid ones
Robustness testing
o Try to break the system
o Large amounts of invalid input (see the sketch below)
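As a minimal illustration (not from the original slides), a single robustness test case might call an AIS function with a deliberately invalid argument and only check that the library fails cleanly. The function and error codes come from the SA Forum AMF API; the test itself is a hypothetical sketch.

```c
/* Hypothetical robustness test case: pass an invalid (NULL) output pointer
 * to saAmfInitialize and expect an error code rather than a crash.
 * Requires the SA Forum AIS/AMF header saAmf.h. */
#include <stdio.h>
#include <saAmf.h>

int main(void)
{
    SaVersionT version = { 'B', 1, 1 };

    /* invalid input: the handle output pointer is NULL */
    SaAisErrorT rc = saAmfInitialize(NULL, NULL, &version);

    printf("saAmfInitialize(NULL, ...) returned %d\n", (int)rc);

    /* a robust implementation returns an error such as
     * SA_AIS_ERR_INVALID_PARAM instead of dereferencing the NULL pointer */
    return (rc == SA_AIS_OK) ? 1 : 0;
}
```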
Goal
• Test and compare the robustness of HA middleware (MW)
  o Based on their common interface
• Several fault types and modes → automatic test generation
Fault model: Primary sources
[Diagram: Custom Application, AIS implementation, Operating System and Hardware layers; faults reach the AIS implementation through the API calls and the OS calls]
Fault model: Secondary sources
[Diagram: the same stack extended with secondary fault sources: External Components, Operators acting through the Human Interface, and HW failures, in addition to the API calls and OS calls]
Our testing tools
[Architecture diagram: the TBTS-TG (type specific) and MBST-TG (mutation) test generators and a Workload exercise the HA Middleware, with an OS call wrapper between the middleware and the Operating system and Hardware]
Testing tools
[The testing tools architecture diagram repeated; the following slides detail TBTS-TG, the type specific test generator]
Type specific testing
• Goal: test the whole interface
[Diagram: the exceptional cases (handle invalid, handle closed, monitoring started, component not registered, pointer null) are listed separately under each function: saAmfInitialize, saAmfPmStart, saComponentNameGet, so the same cases recur under several functions]
Type specific testing
• Goal: test the whole interface
[Diagram: the exceptional cases are instead attached to the parameter types (SaAmfHandleT: handle invalid, handle closed; SaAmfName: name invalid, component not registered) and reused by every function that takes these types, such as saAmfInitialize, saAmfPmStart and saComponentNameGet; see the sketch below]
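The per-type idea can be pictured as a small catalogue of exceptional values kept for each parameter type and reused for every function taking that type. The concrete values below are illustrative guesses, not the actual value sets of TBTS-TG; the name type is called SaNameT in the AIS headers.

```c
/* Sketch: exceptional values defined once per AIS parameter type.
 * The values themselves are illustrative examples only. */
#include <string.h>
#include <saAmf.h>

/* SaAmfHandleT: null handle, out-of-range value, random garbage */
static const SaAmfHandleT invalid_handles[] = {
    0,
    (SaAmfHandleT)-1,
    (SaAmfHandleT)0xDEADBEEF
};

/* SaNameT: empty name, oversized length field, name of an unregistered component */
static SaNameT invalid_names[3];

static void init_invalid_names(void)
{
    memset(invalid_names, 0, sizeof(invalid_names));

    invalid_names[0].length = 0;                       /* empty name       */
    invalid_names[1].length = SA_MAX_NAME_LENGTH + 1;  /* length too large */

    const char *unregistered = "safComp=NotRegistered";
    memcpy(invalid_names[2].value, unregistered, strlen(unregistered));
    invalid_names[2].length = (SaUint16T)strlen(unregistered);
}
```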
Type specific testing
• For each function:
  o Fill a template with its parameters
  o Combine invalid and valid values (see the sketch below)
• Middleware specific parts:
  o State-based calls
  o Complex setup code for the type values
  o Running the tests as SA-aware components
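A hedged sketch of what one instantiated template might look like: the generated test iterates over valid and invalid candidate values for each parameter of a function and logs the outcome of every combination. The API names (saAmfInitialize, saAmfComponentNameGet) are from the AIS specification; the candidate values and the driver loop are illustrative, not actual TBTS-TG output.

```c
/* Sketch of a generated, type-specific test for saAmfComponentNameGet:
 * every parameter is drawn from a small set of valid and invalid values,
 * and each combination is exercised and logged. Illustrative only. */
#include <stdio.h>
#include <saAmf.h>

int main(void)
{
    /* setup code: obtain one valid handle for the "valid" cases */
    SaAmfHandleT valid_handle = 0;
    SaVersionT version = { 'B', 1, 1 };
    (void)saAmfInitialize(&valid_handle, NULL, &version);

    SaNameT name_buffer;
    SaAmfHandleT handle_values[] = { valid_handle, 0, (SaAmfHandleT)-1 };
    SaNameT     *name_values[]   = { &name_buffer, NULL };

    for (unsigned h = 0; h < sizeof(handle_values) / sizeof(handle_values[0]); h++) {
        for (unsigned n = 0; n < sizeof(name_values) / sizeof(name_values[0]); n++) {
            SaAisErrorT rc = saAmfComponentNameGet(handle_values[h],
                                                   name_values[n]);
            /* the raw return code is logged; crashes and hangs are detected
             * by the surrounding test execution environment */
            printf("handle case %u, name case %u: rc = %d\n", h, n, (int)rc);
        }
    }
    return 0;
}
```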
Testing tools
[The testing tools architecture diagram repeated; the following slides detail MBST-TG, the mutation-based test generator]
Mutation-based testing
• Goal: test complex scenarios using multiple functions
• How?
  o Write a complex test, or
  o Mutate existing code by injecting typical robustness faults (see the sketch below)
• Sources to mutate:
  o SAFtest
  o Functional tests in openais
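To make the mutation idea concrete, here is a hypothetical before/after pair: a fragment of a functional test and one mutant with a typical robustness fault injected. The API names are from the AIS spec; the test body is made up and is not actual SAFtest or openais code.

```c
/* Sketch: one functional test fragment and an automatically injected
 * robustness mutant (hypothetical example, not real SAFtest/openais code). */
#include <saAmf.h>

/* original call as it appears in the functional test */
SaAisErrorT functional_register(SaAmfHandleT handle, const SaNameT *compName)
{
    return saAmfComponentRegister(handle, compName, NULL);
}

/* mutant: the valid handle is replaced with an uninitialized one,
 * a typical robustness fault a mutation-based generator would inject */
SaAisErrorT mutant_register(SaAmfHandleT handle, const SaNameT *compName)
{
    (void)handle;                                   /* valid handle discarded */
    return saAmfComponentRegister((SaAmfHandleT)0, compName, NULL);
}
```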
Testing tools
[The testing tools architecture diagram repeated; the following slides detail the OS call wrapper]
OS call wrapper
• Goal: test stressful environmental conditions
• Provide a workload
• Intercept system calls and
  o delay them, or
  o change their return value
• Uses OS support:
  o e.g. strace and LD_PRELOAD on Linux (see the sketch below)
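A minimal sketch of the LD_PRELOAD interception mentioned above: a shared library that overrides socket() and forces it to fail on demand. The FAIL_SOCKET environment variable and the chosen errno are made-up details; the dlsym(RTLD_NEXT, ...) technique itself is standard on Linux.

```c
/* failsocket.c - minimal LD_PRELOAD interposer that makes socket() fail
 * when the (made-up) FAIL_SOCKET environment variable is set.
 *
 * Build: gcc -shared -fPIC -o failsocket.so failsocket.c -ldl
 * Use:   LD_PRELOAD=./failsocket.so FAIL_SOCKET=1 <middleware under test>
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <errno.h>
#include <stdlib.h>

int socket(int domain, int type, int protocol)
{
    /* look up the real socket() implementation once */
    static int (*real_socket)(int, int, int);
    if (!real_socket)
        real_socket = (int (*)(int, int, int))dlsym(RTLD_NEXT, "socket");

    /* injected environmental fault: report failure instead of calling the OS */
    if (getenv("FAIL_SOCKET")) {
        errno = EMFILE;               /* "too many open files" */
        return -1;
    }
    return real_socket(domain, type, protocol);
}
```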
Testing results
• Three middleware implementations
  o openais version 0.80.1 and trunk
  o Fujitsu Siemens SAFE4TRY
• Test execution environment
  o Configuration file, restarting the MW, logging (a harness sketch follows below)
• Observed results:
  o Differences in the headers
  o Test program aborts
  o Middleware crashes
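The outcome classes used in the tables below (success, segmentation fault, timeout) can be illustrated with a small fork-and-wait harness. The classify() helper and the 10-second timeout are illustrative assumptions, not the actual test execution environment.

```c
/* Sketch: run one generated test case in a child process and classify its
 * outcome as success, segmentation fault or timeout (illustrative harness). */
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static const char *classify(const char *test_program, unsigned timeout_s)
{
    pid_t pid = fork();
    if (pid == 0) {
        alarm(timeout_s);                 /* SIGALRM kills the child on timeout */
        execl(test_program, test_program, (char *)NULL);
        _exit(127);                       /* exec failed */
    }

    int status = 0;
    waitpid(pid, &status, 0);

    if (WIFSIGNALED(status)) {
        if (WTERMSIG(status) == SIGSEGV) return "segmentation fault";
        if (WTERMSIG(status) == SIGALRM) return "timeout";
        return "aborted";
    }
    return WEXITSTATUS(status) == 0 ? "success" : "failed";
}

int main(int argc, char **argv)
{
    if (argc > 1)
        printf("%s: %s\n", argv[1], classify(argv[1], 10));
    return 0;
}
```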
Type specific
                      openais-0.80.1   openais-trunk   SAFE4TRY
  success                      24568           26019      29663
  segmentation fault            1100            1468          0
  timeout                        467            2178          2

• SAFE4TRY seems to be more robust to these kinds of inputs
• For 6 functions in openais the middleware itself crashed
• In openais 0.69, segmentation faults occurred in 8001 out of 13460 tests
Mutation based
Example from the observed failures:
OS call wrapper
                        openais-0.80.1   openais-trunk   SAFE4TRY
  No failure observed                6               5          5
  Application failed                 0               2          1
  Middleware failed                  3               2          3

• Observations:
  o All are vulnerable to system call failures
  o Some calls cause a failure in every implementation: e.g. socket
  o Some depend on the system: e.g. bind
Future work - Obtaining metrics
• Large amount of raw output
• Number of failed tests for a function → robustness faults in the function
• What can help:
  o Assigning expected error codes to the test cases (see the sketch below)
  o Data mining tools / decision trees
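One way to assign expected error codes, sketched below: a table that maps each invalid test case to the AIS error code the specification prescribes, so that deviations can be counted automatically. The table entries and the observed values are made-up examples.

```c
/* Sketch: compare observed return codes against expected AIS error codes to
 * condense the raw test output into a robustness metric (example data only). */
#include <stdio.h>
#include <saAis.h>

struct expectation {
    const char *test_case;
    SaAisErrorT expected;
    SaAisErrorT observed;   /* would be filled in from the test logs */
};

int main(void)
{
    struct expectation results[] = {
        { "saAmfComponentNameGet, invalid handle",
          SA_AIS_ERR_BAD_HANDLE,    SA_AIS_ERR_BAD_HANDLE },
        { "saAmfComponentNameGet, NULL name pointer",
          SA_AIS_ERR_INVALID_PARAM, SA_AIS_ERR_LIBRARY },
    };

    unsigned deviations = 0;
    for (unsigned i = 0; i < sizeof(results) / sizeof(results[0]); i++) {
        if (results[i].observed != results[i].expected) {
            printf("suspect robustness fault: %s\n", results[i].test_case);
            deviations++;
        }
    }
    printf("%u deviation(s) from the expected error codes\n", deviations);
    return 0;
}
```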
Lessons learnt
• Simple tests can find robustness failures
• Different methods find different failures
• There are problems even with the headers
• Existing applications are not up-to-date
  o LDAP DN format, component name get
• The middleware implementations differ heavily
  o How to start and stop them; configuration files
• For complex scenarios and OS call failures
  o A detailed workload and complex test setup are needed
• Robustness improving