Fool Me Twice:
Exploring and Exploiting Error Tolerance in Physics-Based Animation
THOMAS Y. YEH
GLENN REINMAN
SANJAY J. PATEL
PETROS FALOUTSOS
SIGGRAPH 2010
What is it about?
• The error tolerance of human perception offers a range of opportunities to trade numerical accuracy for performance in physics-based simulation.
• A methodology to identify the maximum error tolerance of physics simulation.
• Application of this methodology in the evaluation of four case studies.
Introduction
• Physics-based animation (PBA)?
• Physically based animation is an area of interest within computer graphics concerned with the simulation of physically plausible behaviors at interactive rates.
Fight Night Champion, EA Sports
• VIDEO of physics-based animation:
• https://www.youtube.com/watch?v=hhbhC_wfrEM
Challenges?
• The benefits of PBA come at a considerable computational cost, which grows prohibitively with the number and complexity of objects and interactions in the virtual world.
• It becomes extremely challenging to simulate such complex worlds in real time.
Then how do games and other PBA applications look so good?
• Fortunately, there is a tremendous amount of parallelism in the physical simulation of complex scenes.
• Exploiting this parallelism for performance is an active area of research, both in terms of software techniques and hardware accelerators.
• Prior work has addressed this problem:
• PhysX [AGEIA],
• GPUs [Havok],
• the Cell [Hofstee 2005], and
• ParallAX
Another avenue to help improve PBA performance?
• Perceptual error tolerance
• There is a fundamental difference between accuracy and believability in interactive entertainment.
• The results of PBA do not need to be absolutely accurate, but do need to appear correct (i.e., believable) to human users.
• It has been demonstrated that there is a surprisingly large degree of error tolerance in our perceptual ability (independent of a viewer’s understanding of physics).
• The top row is the baseline, and the bottom row is the simulation with 7-bit mantissa floating-point computation in Narrowphase and LCP. The results are different but both are visually correct.
• This perceptual error tolerance can be exploited by a wide spectrum of techniques, ranging from high-level software techniques down to low-level hardware optimizations.
• At the application level, Level of Detail (LOD) simulation can be used to handle distant objects with simpler models.
• At the physics engine library level, one option is to use approximate algorithms optimized for speed rather than accuracy.
• At the compiler level, dependencies among parallel tasks could be broken to reduce synchronization overhead.
• At the hardware design level, floating-point precision reduction can be leveraged to reduce area, reduce energy, or improve performance for physics accelerators.
Three main contributions in this paper:
• A methodology to evaluate physical simulation errors in complex dynamic scenes.
• Identification of the maximum error that can be injected into each phase of the low-level numerical PBA computation.
• Exploration of software timestep tuning, iteration-count tuning, fast estimation with error control, and hardware precision reduction to exploit error tolerance for performance.
Computational Phases of a Typical Physics Engine
• PBA requires the numerical solution of the differential equations of motion of all objects in a scene.
• The timestep is one of the most important simulation parameters, and it largely defines the accuracy of the simulation.
• For interactive applications, the timestep needs to be in the range of 0.01 to 0.03 simulated seconds or smaller, so that the simulation can keep up with display rates.
• Broad-phase:
• This is the first step of Collision Detection (CD).
• Using approximate bounding volumes, it efficiently culls away pairs of objects that cannot possibly collide.
• It does not have to be serialized, but the most useful algorithms are those that update a spatial representation of the dynamic objects in a scene.
• Updating these spatial structures (hash tables, kd-trees, sweep-and-prune axes) is not easily mapped to parallel architectures.
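Below is a minimal sketch, in Python, of the broad-phase idea: axis-aligned bounding box (AABB) culling with a sweep-and-prune pass along one axis. The names and data layout are illustrative rather than any engine's actual API, and a real engine would maintain the sorted axis incrementally instead of re-sorting every step.

```python
from dataclasses import dataclass

@dataclass
class AABB:
    obj_id: int
    min_x: float; max_x: float
    min_y: float; max_y: float
    min_z: float; max_z: float

def overlaps(a: AABB, b: AABB) -> bool:
    # Boxes overlap only if their intervals overlap on every axis.
    return (a.min_x <= b.max_x and b.min_x <= a.max_x and
            a.min_y <= b.max_y and b.min_y <= a.max_y and
            a.min_z <= b.max_z and b.min_z <= a.max_z)

def sweep_and_prune(boxes):
    # Sort by min_x; once the next box starts past this box's max_x,
    # no later box can overlap it on x, so the inner scan stops early.
    boxes = sorted(boxes, key=lambda b: b.min_x)
    candidates = []
    for i, a in enumerate(boxes):
        for b in boxes[i + 1:]:
            if b.min_x > a.max_x:
                break
            if overlaps(a, b):
                candidates.append((a.obj_id, b.obj_id))
    return candidates  # pairs handed to the narrow phase
```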
• Narrow-phase:
• This is the second step of CD, which determines the contact points between each pair of colliding objects.
• Each pair’s computational load depends on the geometric properties of the objects involved.
• The overall performance is affected by the broad phase’s ability to minimize the number of pairs considered in this phase.
• This phase exhibits massive Fine-Grain (FG) parallelism, since object pairs are independent of each other.
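As a concrete example of what the narrow phase computes, here is a hedged sketch for the simplest shape pair, two spheres; the Contact record and function names are illustrative, and real engines dispatch to a specialized routine like this for each shape-pair type.

```python
import math
from dataclasses import dataclass

@dataclass
class Contact:
    point: tuple        # contact point on sphere a's surface
    normal: tuple       # unit normal pointing from sphere a to sphere b
    penetration: float  # overlap depth; larger means deeper interpenetration

def sphere_sphere(ca, ra, cb, rb):
    # ca, cb: center coordinates (x, y, z); ra, rb: radii.
    dx, dy, dz = cb[0] - ca[0], cb[1] - ca[1], cb[2] - ca[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist >= ra + rb or dist == 0.0:
        return None  # not colliding (or degenerate: coincident centers)
    n = (dx / dist, dy / dist, dz / dist)
    point = (ca[0] + n[0] * ra, ca[1] + n[1] * ra, ca[2] + n[2] * ra)
    return Contact(point, n, (ra + rb) - dist)
```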
• Island Creation:
• After generating the contact joints linking interacting objects together, the engine serially steps through the list of all objects to create islands (connected components) of interacting objects.
• This phase is serializing in the sense that it must be completed before the next phase can begin. The full topology of the contacts isn’t known until the last pair is examined by the algorithm, and only then can the constraint solvers begin.
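One standard way to form these connected components is union-find over the contact pairs; the sketch below is illustrative (the paper does not specify the engine's exact algorithm), but it captures why the phase is serial: every pair must be processed before any island is final.

```python
def build_islands(num_objects, contact_pairs):
    # Union-find over objects; each contact merges two sets.
    parent = list(range(num_objects))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving for near-O(1) finds
            i = parent[i]
        return i

    for a, b in contact_pairs:     # serial pass over all contacts
        parent[find(a)] = find(b)

    islands = {}
    for obj in range(num_objects):
        islands.setdefault(find(obj), []).append(obj)
    return list(islands.values())  # each island can then be solved independently
```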
• Simulation step:
• For each island, given the applied forces and torques, the engine computes the resulting accelerations and integrates them to compute each object’s new position and velocity.
• This phase exhibits both Coarse-Grain (CG) and Fine-Grain (FG) parallelism. Each island is independent, and the constraint solver for each island contains independent iterations of work.
• We further split this component into two phases:
• Island processing: which includes constraint setup and integration (CG).
• LCP: which includes the solving of constraint equations (FG).
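To make the integration half of island processing concrete, here is a minimal semi-implicit Euler sketch, a common choice in real-time engines; the body representation is an assumption for illustration, and the constraint (LCP) solve that would precede it is omitted.

```python
def integrate(bodies, dt=1.0 / 60.0):
    # bodies: list of dicts with "mass", "force", "vel", "pos" (3-tuples).
    for b in bodies:
        # Acceleration from applied force (torque/rotation omitted for brevity).
        ax, ay, az = (f / b["mass"] for f in b["force"])
        # Semi-implicit Euler: update velocity first, then position with
        # the *new* velocity, which is more stable than explicit Euler.
        vx, vy, vz = b["vel"]
        b["vel"] = (vx + ax * dt, vy + ay * dt, vz + az * dt)
        b["pos"] = tuple(p + v * dt for p, v in zip(b["pos"], b["vel"]))
```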
Simulation Accuracy and Stability
• The discrete approximation of the equations of motion introduces errors in the results of any nontrivial physics-based simulation.
• Three kinds of errors, in order of increasing importance:
• Imperceptible. These are errors that cannot be perceived by an average human observer.
• Visible but bounded. These are errors that are visible but remain bounded.
• Catastrophic. These errors make the simulation unstable, which results in numerical explosion. In this case, the simulation often reaches a state from which it cannot recover gracefully.
• This behaviour is due to the constraint reordering that the iterative constraint solver employs to reduce bias.
• Constraint reordering: a technique that improves the stability of the numerical constraint solver.
Two simulation runs with the same initial conditions but different constraint ordering. The results are different but both are visually correct.
Visible But Bounded Error
Perceptual Believability
• Six main categories of perceptual adaptive techniques have been proposed in the graphics community:
• interactive graphics
• image fidelity
• animation
• virtual environments
• visualization
• non-photorealistic rendering
• Study on the visual tolerance of ballistic motion for character animation:
• Errors in horizontal velocity were found to be more detectable than errors in vertical velocity.
• Added accelerations were easier to detect than added decelerations.
Simulation Believability
• Three physics engines:
• ODE
• Newton
• Novodex
• The tests involved friction, gyroscopic forces, bounce, constraints, accuracy, scalability, stability, and energy conservation.
• All tests show significant differences between the three engines, and the choice of engine produces different simulation results with the same initial conditions.
• Even without any error injection, there is no single correct simulation for real-time PBA in games, as the algorithms are optimized for speed rather than accuracy.
METHODOLOGY
To come up with the metrics and the methodology
to evaluate believability.
Experimental Setup
• The scene is composed of:
• A building enclosed on all four sides by brick walls, with one opening.
• The wall sections framing the opening are unstable.
• Ten articulated human figures, with mass and joints, are stationed within the enclosed area.
• A cannon shoots fast (88 m/s) cannonballs at the building, and two cars crash into opposing walls. Assuming time starts at 0 sec, one cannonball is shot every 0.04 sec until 0.4 sec.
• The cars are accelerated to roughly 100 miles/hr (44 m/sec) at time 0.12 sec to crash into the walls.
• No forces are injected after 0.4 sec. Because we want to measure the maximum and average errors, we target the time period with the most interaction (the first 55 frames).
Error Sampling Methodology
• To evaluate the numerical error tolerance of PBA, we inject errors at a per-instruction granularity.
• Errors are injected into Floating-Point (FP) add, subtract, and multiply instructions, as these make up the majority of FP operations for this workload.
• The goal is to show how believable the simulation is for a particular magnitude of allowed error.
Method
• Randomly determine the amount of error injected at each instruction, but vary the absolute magnitude of allowed error across different runs.
• This allowed error bound is expressed as a maximum percentage change from the correct value, in either the positive or negative direction. For example, an error bound of 1% means that the correct value of an FP computation could change by any amount in the range from −1% to 1%.
• To avoid biasing, a random percentage within the preselected bounds is applied to the result to compute the absolute injected error.
• For each configuration, we average the results from 100 different simulations (each with a different random seed) to ensure that the results converge.
• 100 simulations are enough for convergence: comparing against results from only 50 simulations yields identical results.
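A minimal sketch of this injection scheme, assuming a wrapper around each FP operation (the paper injects at instruction granularity inside the simulator, so the wrapper functions here are purely illustrative):

```python
import random

def inject(value: float, bound_pct: float, rng: random.Random) -> float:
    # Draw a random signed percentage uniformly in [-bound_pct, +bound_pct]
    # so the injected error is unbiased in either direction.
    pct = rng.uniform(-bound_pct, bound_pct)
    return value * (1.0 + pct / 100.0)

def fp_add(a, b, bound_pct, rng):
    return inject(a + b, bound_pct, rng)

def fp_mul(a, b, bound_pct, rng):
    return inject(a * b, bound_pct, rng)

# Per configuration: average the metrics over 100 seeded runs, e.g.
# rngs = [random.Random(seed) for seed in range(100)]
```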
Error Metric
• To determine, through numerical analysis, when the behaviour of a simulation with error is still believable.
Simulation worlds. CD = Collision Detection. IP = Island Processing. E = Error-injected. All worlds are created with the same initial state and the same set of injected forces.
• Error-injected: a world where random errors within the preselected range are injected for every FP +/−/∗ instruction.
• Baseline: the deterministic simulation with no error injection.
• Synched: a world where the state of every object and contact is copied from the Error-injected world after each simulation step’s collision detection.
• The island processing computation of Synched contains no error injection, so it uses the collisions detected by Error-injected but performs correct island processing.
• The Synched world is created to isolate the effects of errors in island processing.
7 numerical metrics:
• Energy Difference. Difference in total energy between the Baseline and Error-injected worlds: due to energy conservation, the total energy in these two worlds should match.
• Penetration Depth. Distance from the object’s surface to the contact point created by collision detection. This is measured within the simulation world.
• Constraint Violation. Distance between an object’s position and where the object is supposed to be based on statically defined joints (a car’s suspension or human limbs).
• Linear Velocity Magnitude. Difference in linear velocity magnitude for the same object between the Error-injected and Synched worlds.
• Angular Velocity Magnitude. Difference in angular velocity magnitude for the same object between the Error-injected and Synched worlds.
• Linear Velocity Angle. Angle between the linear velocity vectors of the same object in the Error-injected and Synched worlds.
• Gap Distance. Distance between two objects that are found to be colliding, but are not actually touching.
If penetration is equally large in the Baseline and Error-injected worlds, then the injected error has not made things worse.
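Two of these metrics are easy to make concrete. The sketch below computes a simplified Energy Difference (kinetic plus gravitational potential only; rotational energy is omitted) and the Linear Velocity Angle between worlds; the body representation is an assumption for illustration.

```python
import math

def total_energy(bodies, g=9.81):
    # bodies: list of (mass, (vx, vy, vz), height) tuples.
    e = 0.0
    for m, v, h in bodies:
        e += 0.5 * m * (v[0]**2 + v[1]**2 + v[2]**2)  # kinetic
        e += m * g * h                                 # gravitational potential
    return e

def energy_difference(baseline, injected):
    return abs(total_energy(baseline) - total_energy(injected))

def linear_velocity_angle(v_err, v_sync):
    # Angle (degrees) between the same object's velocity in the
    # Error-injected and Synched worlds.
    dot = sum(a * b for a, b in zip(v_err, v_sync))
    na = math.sqrt(sum(a * a for a in v_err))
    nb = math.sqrt(sum(b * b for b in v_sync))
    if na == 0.0 or nb == 0.0:
        return 0.0  # angle undefined for a zero vector; treat as no deviation
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))
```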
Numerical Error Tolerance
• Using the error metrics:
• Inject errors into the four phases and measure the response.
• From the response, determine how accuracy can be traded off with performance.
• How are the different phases affected by errors?
• Broad phase: may omit actually colliding pairs, leading to missed collisions, or send the narrow phase more pairs than necessary, degrading performance.
• Narrow phase: missed collisions, different contact points, additional object pairs mistakenly taken as colliding, wrong angular components at contact points.
• Island phase: errors in the equations of motion drastically alter motion, and introduce errors in energy, momentum, etc.
Numerical error analysis
• Experiments with increasing degrees of injected error, from 0.0001% to 100%.
• Each metric’s maximal error is plotted against the injected error for each phase alone and then for all phases.
Numerical error analysis
Broad phase: highest tolerance; increasingly large injected errors do not result in simulation blow-up.
Island processing: lowest tolerance, most sensitive.
Both of these are serial phases.
The parallel phases, narrow phase and LCP, show average sensitivity.
The X-axis shows the maximum possible injected error.
Note: Extremely large numbers and infinity are converted to the max value of each Y-axis scale for better visualization.
Numerical error analysis
• The energy change and max penetration both increase rapidly with injected error.
Numerical error analysis
• The max constraint error and max linear velocity error both increase rapidly with injected error.
Numerical error analysis
• Gap error: remains consistently small, since producing it would require the broad phase to pass a non-colliding pair to the narrow phase and the narrow phase to mistakenly find the objects touching.
• Max linear velocity: these errors typically last only a single frame and are not readily visible.
Acceptable Error Threshold
• How to find an acceptable error threshold?
• Instead of using fixed thresholds, find the knee in the curve where simulation differences start to diverge towards instability.
• These points correspond to catastrophic errors, as confirmed by visual inspection.
• Table I: maximum % error tolerated by each computation phase (using 100 samples), based on finding the earliest knee, over all error metric curves, where the simulation blows up.
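One simple, hypothetical way to automate the knee search: scan a metric curve sampled at increasing error bounds and flag the first point where the metric jumps by more than a chosen factor. The jump factor here is an assumption for illustration; the paper's knees were confirmed by visual inspection.

```python
def find_knee(error_bounds, metric_values, jump_factor=10.0):
    # error_bounds and metric_values are parallel lists, sorted by bound.
    for i in range(1, len(metric_values)):
        prev, cur = metric_values[i - 1], metric_values[i]
        if prev > 0.0 and cur / prev > jump_factor:
            return error_bounds[i - 1]  # last bound before divergence
    return error_bounds[-1]             # no knee in the sampled range

# e.g. find_knee([0.0001, 0.001, 0.01, 0.1, 1.0],
#                [0.010, 0.012, 0.015, 0.5, float("inf")])  # -> 0.01
```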
Threshold Evaluation-1
• Visually investigate the thresholds.
• Watch each error-injected scene in real time to assess believability, including the presence of jitter and unrealistic deflections.
• Experiments with error rates above the thresholds showed clear visual errors, such as bricks moving on their own, human figures flying apart, etc.
Threshold Evaluation-2
• Compare errors to a very simple scenario with clear error thresholds (previous work).
• This scenario has two colliding spheres with no gravity.
• Errors were injected with error bounds from the previous section’s values.
• No perceptible errors were seen.
• However, the scenario was too simple, and the thresholds may not generalize to more complex animations (as pointed out in previous works).
• The first column shows perceptual metric values for 0.1% error injection, and the second column shows thresholds from prior work.
Threshold Evaluation-3
• In complex scenarios, enabling reordering of constraints makes the perceptual metrics exceed the thresholds from O’Sullivan et al. [2003].
• Changes in the order of constraints (which depends on initialization order) result in simulation differences in ODE, such that prior work’s thresholds are not useful.
• The first column shows perceptual metric values for 0.1% error injection, and the second column shows thresholds from prior work.
• Observations:
The magnitude of error is much larger in a complex scene than in a simple one.
Despite some large variations in velocity and angle of deflection, energy is relatively unaffected.
The magnitude of error suggests that prior work’s thresholds are not useful.
• Penetration can be controlled by smaller timesteps, although for the same timestep, error injection does not make it worse than the baseline case.
Threshold Evaluation-4
• Some metrics, penetration and gap, depend on the timestep.
• Reducing the timestep to 0.001 seconds reduced penetration and gap to 0.001 m and 0.009 m respectively.
• This confirms that the high errors in these metrics are a function of the timestep and not of the error injection.
Case Studies: Simulation Timestep
• The simulation timestep largely defines accuracy.
• Energy difference is used as the measure.
• Baseline: 60 FPS, 20 iterations for the LCP solver.
• Observations:
Stabilizes at 34 FPS.
30 FPS is also acceptable but lies in the instability region.
• However, for a gaming application 60 FPS is advisable: it accommodates different user inputs and keeps the timing of rendering and physics synchronized.
Case Studies: Iteration Count
• Iteration count: another important parameter.
• Study for 1 to 30 iterations at 60 and 33 FPS.
• Observations:
60 FPS: stable from 30 down to 11 iterations, suggesting the default 20-iteration count is conservative.
33 FPS: unstable even at 30 iterations, suggesting that iteration-count scaling cannot be used to compensate for timestep errors.
Case Studies: Fast Estimation with Error Control
• FEEC: an optimization technique that uses two threads: a) a precise thread and b) an estimation thread.
• The estimation thread produces results before the precise thread, for AI or rendering purposes, by using a lower iteration count.
• Increases hardware utilization.
• Observations:
Stable from 20 iterations down to 1.
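A hedged sketch of the FEEC idea using two worker threads: the estimation result becomes available early for AI or rendering, while the precise result is what the simulation commits. solve_island is a placeholder supplied by the caller; the real system's thread coordination is more involved than this.

```python
from concurrent.futures import ThreadPoolExecutor

def feec_step(island, solve_island, est_iters=1, precise_iters=20):
    # Run a cheap estimate and the full solve concurrently.
    with ThreadPoolExecutor(max_workers=2) as pool:
        fast = pool.submit(solve_island, island, est_iters)
        full = pool.submit(solve_island, island, precise_iters)
        estimate = fast.result()  # ready early: feed AI / rendering
        precise = full.result()   # authoritative state for the next step
    return estimate, precise
```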
Case Studies: Precision Reduction
• Goal: reduce the size of floating-point hardware.
• Using the error tolerance thresholds, we estimate the number of bits that can be removed from the hardware.
• We then compare with simulation results.
• Reduction methodology:
Reduce the mantissa, not the exponent.
Reduce both input values and then the result.
Rounding modes: round-to-nearest and truncation.
Maximum error: 2^-x with truncation and 2^-(x+1) with rounding, for an x-bit mantissa.
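A sketch of this reduction for float32, assuming we emulate an x-bit mantissa in software with bit masking; reduced_add reduces both inputs and then the result, as described above.

```python
import struct

def reduce_mantissa(value: float, x: int, mode: str = "truncate") -> float:
    # Reinterpret the float32 bits as an unsigned int.
    bits = struct.unpack("<I", struct.pack("<f", value))[0]
    drop = 23 - x            # float32 carries a 23-bit stored mantissa
    if drop <= 0:
        return value         # nothing to drop
    if mode == "nearest":
        # Add half an ULP so the truncation below rounds to nearest.
        bits = (bits + (1 << (drop - 1))) & 0xFFFFFFFF
    bits &= ~((1 << drop) - 1)  # clear the dropped mantissa bits
    return struct.unpack("<f", struct.pack("<I", bits))[0]

def reduced_add(a: float, b: float, x: int, mode: str = "truncate") -> float:
    # Reduce both input values, then the result.
    return reduce_mantissa(
        reduce_mantissa(a, x, mode) + reduce_mantissa(b, x, mode), x, mode)
```

With x = 7, this mirrors the 7-bit mantissa configuration shown earlier to be visually correct.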
Case Studies: Precision Reduction
• We see that the actual simulation is more tolerant than the stricter, numerically derived thresholds.
• This further increases confidence in our numerical error tolerance thresholds.
Thank You