- •Preface
- •Biological Vision Systems
- •Visual Representations from Paintings to Photographs
- •Computer Vision
- •The Limitations of Standard 2D Images
- •3D Imaging, Analysis and Applications
- •Book Objective and Content
- •Acknowledgements
- •Contents
- •Contributors
- •2.1 Introduction
- •Chapter Outline
- •2.2 An Overview of Passive 3D Imaging Systems
- •2.2.1 Multiple View Approaches
- •2.2.2 Single View Approaches
- •2.3 Camera Modeling
- •2.3.1 Homogeneous Coordinates
- •2.3.2 Perspective Projection Camera Model
- •2.3.2.1 Camera Modeling: The Coordinate Transformation
- •2.3.2.2 Camera Modeling: Perspective Projection
- •2.3.2.3 Camera Modeling: Image Sampling
- •2.3.2.4 Camera Modeling: Concatenating the Projective Mappings
- •2.3.3 Radial Distortion
- •2.4 Camera Calibration
- •2.4.1 Estimation of a Scene-to-Image Planar Homography
- •2.4.2 Basic Calibration
- •2.4.3 Refined Calibration
- •2.4.4 Calibration of a Stereo Rig
- •2.5 Two-View Geometry
- •2.5.1 Epipolar Geometry
- •2.5.2 Essential and Fundamental Matrices
- •2.5.3 The Fundamental Matrix for Pure Translation
- •2.5.4 Computation of the Fundamental Matrix
- •2.5.5 Two Views Separated by a Pure Rotation
- •2.5.6 Two Views of a Planar Scene
- •2.6 Rectification
- •2.6.1 Rectification with Calibration Information
- •2.6.2 Rectification Without Calibration Information
- •2.7 Finding Correspondences
- •2.7.1 Correlation-Based Methods
- •2.7.2 Feature-Based Methods
- •2.8 3D Reconstruction
- •2.8.1 Stereo
- •2.8.1.1 Dense Stereo Matching
- •2.8.1.2 Triangulation
- •2.8.2 Structure from Motion
- •2.9 Passive Multiple-View 3D Imaging Systems
- •2.9.1 Stereo Cameras
- •2.9.2 3D Modeling
- •2.9.3 Mobile Robot Localization and Mapping
- •2.10 Passive Versus Active 3D Imaging Systems
- •2.11 Concluding Remarks
- •2.12 Further Reading
- •2.13 Questions
- •2.14 Exercises
- •References
- •3.1 Introduction
- •3.1.1 Historical Context
- •3.1.2 Basic Measurement Principles
- •3.1.3 Active Triangulation-Based Methods
- •3.1.4 Chapter Outline
- •3.2 Spot Scanners
- •3.2.1 Spot Position Detection
- •3.3 Stripe Scanners
- •3.3.1 Camera Model
- •3.3.2 Sheet-of-Light Projector Model
- •3.3.3 Triangulation for Stripe Scanners
- •3.4 Area-Based Structured Light Systems
- •3.4.1 Gray Code Methods
- •3.4.1.1 Decoding of Binary Fringe-Based Codes
- •3.4.1.2 Advantage of the Gray Code
- •3.4.2 Phase Shift Methods
- •3.4.2.1 Removing the Phase Ambiguity
- •3.4.3 Triangulation for a Structured Light System
- •3.5 System Calibration
- •3.6 Measurement Uncertainty
- •3.6.1 Uncertainty Related to the Phase Shift Algorithm
- •3.6.2 Uncertainty Related to Intrinsic Parameters
- •3.6.3 Uncertainty Related to Extrinsic Parameters
- •3.6.4 Uncertainty as a Design Tool
- •3.7 Experimental Characterization of 3D Imaging Systems
- •3.7.1 Low-Level Characterization
- •3.7.2 System-Level Characterization
- •3.7.3 Characterization of Errors Caused by Surface Properties
- •3.7.4 Application-Based Characterization
- •3.8 Selected Advanced Topics
- •3.8.1 Thin Lens Equation
- •3.8.2 Depth of Field
- •3.8.3 Scheimpflug Condition
- •3.8.4 Speckle and Uncertainty
- •3.8.5 Laser Depth of Field
- •3.8.6 Lateral Resolution
- •3.9 Research Challenges
- •3.10 Concluding Remarks
- •3.11 Further Reading
- •3.12 Questions
- •3.13 Exercises
- •References
- •4.1 Introduction
- •Chapter Outline
- •4.2 Representation of 3D Data
- •4.2.1 Raw Data
- •4.2.1.1 Point Cloud
- •4.2.1.2 Structured Point Cloud
- •4.2.1.3 Depth Maps and Range Images
- •4.2.1.4 Needle map
- •4.2.1.5 Polygon Soup
- •4.2.2 Surface Representations
- •4.2.2.1 Triangular Mesh
- •4.2.2.2 Quadrilateral Mesh
- •4.2.2.3 Subdivision Surfaces
- •4.2.2.4 Morphable Model
- •4.2.2.5 Implicit Surface
- •4.2.2.6 Parametric Surface
- •4.2.2.7 Comparison of Surface Representations
- •4.2.3 Solid-Based Representations
- •4.2.3.1 Voxels
- •4.2.3.3 Binary Space Partitioning
- •4.2.3.4 Constructive Solid Geometry
- •4.2.3.5 Boundary Representations
- •4.2.4 Summary of Solid-Based Representations
- •4.3 Polygon Meshes
- •4.3.1 Mesh Storage
- •4.3.2 Mesh Data Structures
- •4.3.2.1 Halfedge Structure
- •4.4 Subdivision Surfaces
- •4.4.1 Doo-Sabin Scheme
- •4.4.2 Catmull-Clark Scheme
- •4.4.3 Loop Scheme
- •4.5 Local Differential Properties
- •4.5.1 Surface Normals
- •4.5.2 Differential Coordinates and the Mesh Laplacian
- •4.6 Compression and Levels of Detail
- •4.6.1 Mesh Simplification
- •4.6.1.1 Edge Collapse
- •4.6.1.2 Quadric Error Metric
- •4.6.2 QEM Simplification Summary
- •4.6.3 Surface Simplification Results
- •4.7 Visualization
- •4.8 Research Challenges
- •4.9 Concluding Remarks
- •4.10 Further Reading
- •4.11 Questions
- •4.12 Exercises
- •References
- •1.1 Introduction
- •Chapter Outline
- •1.2 A Historical Perspective on 3D Imaging
- •1.2.1 Image Formation and Image Capture
- •1.2.2 Binocular Perception of Depth
- •1.2.3 Stereoscopic Displays
- •1.3 The Development of Computer Vision
- •1.3.1 Further Reading in Computer Vision
- •1.4 Acquisition Techniques for 3D Imaging
- •1.4.1 Passive 3D Imaging
- •1.4.2 Active 3D Imaging
- •1.4.3 Passive Stereo Versus Active Stereo Imaging
- •1.5 Twelve Milestones in 3D Imaging and Shape Analysis
- •1.5.1 Active 3D Imaging: An Early Optical Triangulation System
- •1.5.2 Passive 3D Imaging: An Early Stereo System
- •1.5.3 Passive 3D Imaging: The Essential Matrix
- •1.5.4 Model Fitting: The RANSAC Approach to Feature Correspondence Analysis
- •1.5.5 Active 3D Imaging: Advances in Scanning Geometries
- •1.5.6 3D Registration: Rigid Transformation Estimation from 3D Correspondences
- •1.5.7 3D Registration: Iterative Closest Points
- •1.5.9 3D Local Shape Descriptors: Spin Images
- •1.5.10 Passive 3D Imaging: Flexible Camera Calibration
- •1.5.11 3D Shape Matching: Heat Kernel Signatures
- •1.6 Applications of 3D Imaging
- •1.7 Book Outline
- •1.7.1 Part I: 3D Imaging and Shape Representation
- •1.7.2 Part II: 3D Shape Analysis and Processing
- •1.7.3 Part III: 3D Imaging Applications
- •References
- •5.1 Introduction
- •5.1.1 Applications
- •5.1.2 Chapter Outline
- •5.2 Mathematical Background
- •5.2.1 Differential Geometry
- •5.2.2 Curvature of Two-Dimensional Surfaces
- •5.2.3 Discrete Differential Geometry
- •5.2.4 Diffusion Geometry
- •5.2.5 Discrete Diffusion Geometry
- •5.3 Feature Detectors
- •5.3.1 A Taxonomy
- •5.3.2 Harris 3D
- •5.3.3 Mesh DOG
- •5.3.4 Salient Features
- •5.3.5 Heat Kernel Features
- •5.3.6 Topological Features
- •5.3.7 Maximally Stable Components
- •5.3.8 Benchmarks
- •5.4 Feature Descriptors
- •5.4.1 A Taxonomy
- •5.4.2 Curvature-Based Descriptors (HK and SC)
- •5.4.3 Spin Images
- •5.4.4 Shape Context
- •5.4.5 Integral Volume Descriptor
- •5.4.6 Mesh Histogram of Gradients (HOG)
- •5.4.7 Heat Kernel Signature (HKS)
- •5.4.8 Scale-Invariant Heat Kernel Signature (SI-HKS)
- •5.4.9 Color Heat Kernel Signature (CHKS)
- •5.4.10 Volumetric Heat Kernel Signature (VHKS)
- •5.5 Research Challenges
- •5.6 Conclusions
- •5.7 Further Reading
- •5.8 Questions
- •5.9 Exercises
- •References
- •6.1 Introduction
- •Chapter Outline
- •6.2 Registration of Two Views
- •6.2.1 Problem Statement
- •6.2.2 The Iterative Closest Points (ICP) Algorithm
- •6.2.3 ICP Extensions
- •6.2.3.1 Techniques for Pre-alignment
- •Global Approaches
- •Local Approaches
- •6.2.3.2 Techniques for Improving Speed
- •Subsampling
- •Closest Point Computation
- •Distance Formulation
- •6.2.3.3 Techniques for Improving Accuracy
- •Outlier Rejection
- •Additional Information
- •Probabilistic Methods
- •6.3 Advanced Techniques
- •6.3.1 Registration of More than Two Views
- •Reducing Error Accumulation
- •Automating Registration
- •6.3.2 Registration in Cluttered Scenes
- •Point Signatures
- •Matching Methods
- •6.3.3 Deformable Registration
- •Methods Based on General Optimization Techniques
- •Probabilistic Methods
- •6.3.4 Machine Learning Techniques
- •Improving the Matching
- •Object Detection
- •6.4 Quantitative Performance Evaluation
- •6.5 Case Study 1: Pairwise Alignment with Outlier Rejection
- •6.6 Case Study 2: ICP with Levenberg-Marquardt
- •6.6.1 The LM-ICP Method
- •6.6.2 Computing the Derivatives
- •6.6.3 The Case of Quaternions
- •6.6.4 Summary of the LM-ICP Algorithm
- •6.6.5 Results and Discussion
- •6.7 Case Study 3: Deformable ICP with Levenberg-Marquardt
- •6.7.1 Surface Representation
- •6.7.2 Cost Function
- •Data Term: Global Surface Attraction
- •Data Term: Boundary Attraction
- •Penalty Term: Spatial Smoothness
- •Penalty Term: Temporal Smoothness
- •6.7.3 Minimization Procedure
- •6.7.4 Summary of the Algorithm
- •6.7.5 Experiments
- •6.8 Research Challenges
- •6.9 Concluding Remarks
- •6.10 Further Reading
- •6.11 Questions
- •6.12 Exercises
- •References
- •7.1 Introduction
- •7.1.1 Retrieval and Recognition Evaluation
- •7.1.2 Chapter Outline
- •7.2 Literature Review
- •7.3 3D Shape Retrieval Techniques
- •7.3.1 Depth-Buffer Descriptor
- •7.3.1.1 Computing the 2D Projections
- •7.3.1.2 Obtaining the Feature Vector
- •7.3.1.3 Evaluation
- •7.3.1.4 Complexity Analysis
- •7.3.2 Spin Images for Object Recognition
- •7.3.2.1 Matching
- •7.3.2.2 Evaluation
- •7.3.2.3 Complexity Analysis
- •7.3.3 Salient Spectral Geometric Features
- •7.3.3.1 Feature Points Detection
- •7.3.3.2 Local Descriptors
- •7.3.3.3 Shape Matching
- •7.3.3.4 Evaluation
- •7.3.3.5 Complexity Analysis
- •7.3.4 Heat Kernel Signatures
- •7.3.4.1 Evaluation
- •7.3.4.2 Complexity Analysis
- •7.4 Research Challenges
- •7.5 Concluding Remarks
- •7.6 Further Reading
- •7.7 Questions
- •7.8 Exercises
- •References
- •8.1 Introduction
- •Chapter Outline
- •8.2 3D Face Scan Representation and Visualization
- •8.3 3D Face Datasets
- •8.3.1 FRGC v2 3D Face Dataset
- •8.3.2 The Bosphorus Dataset
- •8.4 3D Face Recognition Evaluation
- •8.4.1 Face Verification
- •8.4.2 Face Identification
- •8.5 Processing Stages in 3D Face Recognition
- •8.5.1 Face Detection and Segmentation
- •8.5.2 Removal of Spikes
- •8.5.3 Filling of Holes and Missing Data
- •8.5.4 Removal of Noise
- •8.5.5 Fiducial Point Localization and Pose Correction
- •8.5.6 Spatial Resampling
- •8.5.7 Feature Extraction on Facial Surfaces
- •8.5.8 Classifiers for 3D Face Matching
- •8.6 ICP-Based 3D Face Recognition
- •8.6.1 ICP Outline
- •8.6.2 A Critical Discussion of ICP
- •8.6.3 A Typical ICP-Based 3D Face Recognition Implementation
- •8.6.4 ICP Variants and Other Surface Registration Approaches
- •8.7 PCA-Based 3D Face Recognition
- •8.7.1 PCA System Training
- •8.7.2 PCA Training Using Singular Value Decomposition
- •8.7.3 PCA Testing
- •8.7.4 PCA Performance
- •8.8 LDA-Based 3D Face Recognition
- •8.8.1 Two-Class LDA
- •8.8.2 LDA with More than Two Classes
- •8.8.3 LDA in High Dimensional 3D Face Spaces
- •8.8.4 LDA Performance
- •8.9 Normals and Curvature in 3D Face Recognition
- •8.9.1 Computing Curvature on a 3D Face Scan
- •8.10 Recent Techniques in 3D Face Recognition
- •8.10.1 3D Face Recognition Using Annotated Face Models (AFM)
- •8.10.2 Local Feature-Based 3D Face Recognition
- •8.10.2.1 Keypoint Detection and Local Feature Matching
- •8.10.2.2 Other Local Feature-Based Methods
- •8.10.3 Expression Modeling for Invariant 3D Face Recognition
- •8.10.3.1 Other Expression Modeling Approaches
- •8.11 Research Challenges
- •8.12 Concluding Remarks
- •8.13 Further Reading
- •8.14 Questions
- •8.15 Exercises
- •References
- •9.1 Introduction
- •Chapter Outline
- •9.2 DEM Generation from Stereoscopic Imagery
- •9.2.1 Stereoscopic DEM Generation: Literature Review
- •9.2.2 Accuracy Evaluation of DEMs
- •9.2.3 An Example of DEM Generation from SPOT-5 Imagery
- •9.3 DEM Generation from InSAR
- •9.3.1 Techniques for DEM Generation from InSAR
- •9.3.1.1 Basic Principle of InSAR in Elevation Measurement
- •9.3.1.2 Processing Stages of DEM Generation from InSAR
- •The Branch-Cut Method of Phase Unwrapping
- •The Least Squares (LS) Method of Phase Unwrapping
- •9.3.2 Accuracy Analysis of DEMs Generated from InSAR
- •9.3.3 Examples of DEM Generation from InSAR
- •9.4 DEM Generation from LIDAR
- •9.4.1 LIDAR Data Acquisition
- •9.4.2 Accuracy, Error Types and Countermeasures
- •9.4.3 LIDAR Interpolation
- •9.4.4 LIDAR Filtering
- •9.4.5 DTM from Statistical Properties of the Point Cloud
- •9.5 Research Challenges
- •9.6 Concluding Remarks
- •9.7 Further Reading
- •9.8 Questions
- •9.9 Exercises
- •References
- •10.1 Introduction
- •10.1.1 Allometric Modeling of Biomass
- •10.1.2 Chapter Outline
- •10.2 Aerial Photo Mensuration
- •10.2.1 Principles of Aerial Photogrammetry
- •10.2.1.1 Geometric Basis of Photogrammetric Measurement
- •10.2.1.2 Ground Control and Direct Georeferencing
- •10.2.2 Tree Height Measurement Using Forest Photogrammetry
- •10.2.2.2 Automated Methods in Forest Photogrammetry
- •10.3 Airborne Laser Scanning
- •10.3.1 Principles of Airborne Laser Scanning
- •10.3.1.1 Lidar-Based Measurement of Terrain and Canopy Surfaces
- •10.3.2 Individual Tree-Level Measurement Using Lidar
- •10.3.2.1 Automated Individual Tree Measurement Using Lidar
- •10.3.3 Area-Based Approach to Estimating Biomass with Lidar
- •10.4 Future Developments
- •10.5 Concluding Remarks
- •10.6 Further Reading
- •10.7 Questions
- •References
- •11.1 Introduction
- •Chapter Outline
- •11.2 Volumetric Data Acquisition
- •11.2.1 Computed Tomography
- •11.2.1.1 Characteristics of 3D CT Data
- •11.2.2 Positron Emission Tomography (PET)
- •11.2.2.1 Characteristics of 3D PET Data
- •Relaxation
- •11.2.3.1 Characteristics of the 3D MRI Data
- •Image Quality and Artifacts
- •11.2.4 Summary
- •11.3 Surface Extraction and Volumetric Visualization
- •11.3.1 Surface Extraction
- •Example: Curvatures and Geometric Tools
- •11.3.2 Volume Rendering
- •11.3.3 Summary
- •11.4 Volumetric Image Registration
- •11.4.1 A Hierarchy of Transformations
- •11.4.1.1 Rigid Body Transformation
- •11.4.1.2 Similarity Transformations and Anisotropic Scaling
- •11.4.1.3 Affine Transformations
- •11.4.1.4 Perspective Transformations
- •11.4.1.5 Non-rigid Transformations
- •11.4.2 Points and Features Used for the Registration
- •11.4.2.1 Landmark Features
- •11.4.2.2 Surface-Based Registration
- •11.4.2.3 Intensity-Based Registration
- •11.4.3 Registration Optimization
- •11.4.3.1 Estimation of Registration Errors
- •11.4.4 Summary
- •11.5 Segmentation
- •11.5.1 Semi-automatic Methods
- •11.5.1.1 Thresholding
- •11.5.1.2 Region Growing
- •11.5.1.3 Deformable Models
- •Snakes
- •Balloons
- •11.5.2 Fully Automatic Methods
- •11.5.2.1 Atlas-Based Segmentation
- •11.5.2.2 Statistical Shape Modeling and Analysis
- •11.5.3 Summary
- •11.6 Diffusion Imaging: An Illustration of a Full Pipeline
- •11.6.1 From Scalar Images to Tensors
- •11.6.2 From Tensor Image to Information
- •11.6.3 Summary
- •11.7 Applications
- •11.7.1 Diagnosis and Morphometry
- •11.7.2 Simulation and Training
- •11.7.3 Surgical Planning and Guidance
- •11.7.4 Summary
- •11.8 Concluding Remarks
- •11.9 Research Challenges
- •11.10 Further Reading
- •Data Acquisition
- •Surface Extraction
- •Volume Registration
- •Segmentation
- •Diffusion Imaging
- •Software
- •11.11 Questions
- •11.12 Exercises
- •References
- •Index
116 |
M.-A. Drouin and J.-A. Beraldin |
A sheet-of-light system such as the one illustrated in Fig. 3.3 (right) can be calibrated similarly by replacing the tables t_x2(x1, y1) = x2 and t_Z(x1, y1) = Z by t_x2(α, y2) = x2 and t_Z(α, y2) = Z, where α is the angle controlling the orientation of the laser plane, y2 is a row of the camera and x2 is the measured laser peak position for camera row y2. Systems that use a Gray code with sub-pixel localization of the fringe transitions can be calibrated similarly. Note that the tables t_x2 and t_Z can be large, while the values inside them vary smoothly. It is therefore possible to fit a non-uniform rational B-spline (NURBS) surface or a polynomial surface over those tables in order to reduce the memory requirement. Moreover, [25] describes steps that reduce the sensitivity to noise of a non-parametric calibration procedure.
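The table-compression idea above can be sketched as follows: a dense per-pixel calibration table is replaced by a low-order polynomial surface fitted by least squares. The bivariate quadratic basis and the synthetic table are assumptions made for illustration, not the procedure of [25].

```python
import numpy as np

# Sketch of the table-compression idea above: replace a dense per-pixel
# calibration table t(x1, y1) with a low-order polynomial surface fitted
# by least squares. The quadratic basis and the synthetic table are
# assumptions for illustration, not the procedure of [25].
def quad_basis(x1, y1):
    # Bivariate quadratic basis: 1, x, y, x^2, x*y, y^2.
    return np.column_stack([np.ones_like(x1), x1, y1, x1**2, x1 * y1, y1**2])

def fit_poly_surface(x1, y1, t):
    coeffs, *_ = np.linalg.lstsq(quad_basis(x1, y1), t, rcond=None)
    return coeffs

# Synthetic, smoothly varying stand-in for the table t(x1, y1) = Z.
gx, gy = np.meshgrid(np.linspace(0.0, 1.0, 64), np.linspace(0.0, 1.0, 64))
x1, y1 = gx.ravel(), gy.ravel()
t = 1000.0 + 40.0 * x1 - 25.0 * y1 + 8.0 * x1 * y1  # mm

coeffs = fit_poly_surface(x1, y1, t)  # 6 numbers instead of 64*64 entries
err = np.abs(quad_basis(x1, y1) @ coeffs - t)
assert err.max() < 1e-6  # the fit reproduces this smooth table
```

A NURBS surface would be used the same way, trading the six polynomial coefficients for a small grid of control points.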
3.6 Measurement Uncertainty
In this section, we examine the uncertainty associated with 3D points measured by an active triangulation scanner. This section contains advanced material and may be omitted on first reading. Some errors are systematic in nature while others are random. Systematic errors may be implementation dependent and an experimental protocol is proposed to detect them in Sect. 3.7. In the remainder of this section, random errors are discussed. This study is performed for area-based scanners that use phase shift. An experimental approach for modeling random errors for the Gray code method will be presented in Sect. 3.7. Moreover, because the description requires advanced knowledge of the image formation process, the discussion of random errors for laser-based scanners is postponed until Sect. 3.8.
In the remainder of this section, we examine how the noise in the images of the camera influences the position of 3D points. First, the error propagation from image intensity to pixel coordinate is presented for the phase shift approach described in Sect. 3.4.2. Then, this error on the pixel coordinate is propagated through the intrinsic and extrinsic parameters. Finally, the error-propagation chain is used as a design tool.
3.6.1 Uncertainty Related to the Phase Shift Algorithm
In order to perform the error propagation from the noisy images to the phase value associated with a pixel [x1, y1]^T, we only consider the B1(x1, y1) and B2(x1, y1) elements of the vector X(x1, y1) in Eq. (3.23). Thus, Eq. (3.23) becomes
$$[B_1(x_1, y_1),\; B_2(x_1, y_1)]^T = \bar{M}\, I(x_1, y_1) \qquad (3.33)$$
where $\bar{M}$ consists of the last two rows of the matrix $(M^T M)^{-1} M^T$ used in Eq. (3.23). First, assuming that the noise is spatially independent, the joint probability density function p(B1(x1, y1), B2(x1, y1)) must be computed. Finally, the probability density function for the phase error p(Δφ) is obtained by changing the coordinate system from Cartesian to polar coordinates and integrating over the magnitude. Assuming that the noise contaminating the intensity measurement in the images is a zero-mean Gaussian noise, p(B1(x1, y1), B2(x1, y1)) is a zero-mean multivariate Gaussian distribution [27, 28]. Using Eq. (3.33), the covariance matrix $\Sigma_B$ associated with this distribution can be computed as
$$\Sigma_B = \bar{M}\, \Sigma_I\, \bar{M}^T \qquad (3.34)$$
where ΣI is the covariance matrix of the zero-mean Gaussian noise contaminating the intensity measured in the camera images [27, 28].
We give the details for the case θ_i = 2πi/N when the noise on each intensity measurement is independent with zero mean and variance σ². One may verify that
$$\Sigma_B = \sigma^2 \begin{bmatrix} 2/N & 0 \\ 0 & 2/N \end{bmatrix}. \qquad (3.35)$$
This is a special case of the work presented in [53] (see also [52]).
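Eq. (3.35) is easy to verify numerically: simulating the N-pattern phase-shift measurement with i.i.d. Gaussian intensity noise and estimating B1 and B2 by least squares reproduces the predicted covariance σ²·diag(2/N, 2/N). The signal parameters below are illustrative values, not taken from the chapter.

```python
import numpy as np

# Monte Carlo check of Eq. (3.35). For N phase shifts theta_i = 2*pi*i/N
# and i.i.d. zero-mean Gaussian intensity noise of variance sigma^2, the
# least-squares estimates of B1 = B*cos(phi) and B2 = B*sin(phi) should
# be uncorrelated with variance 2*sigma^2/N each.
rng = np.random.default_rng(0)
N, sigma, A, B, phi = 8, 2.0, 120.0, 80.0, 0.7
theta = 2 * np.pi * np.arange(N) / N

# Linear model I_i = A + B1*cos(theta_i) + B2*sin(theta_i).
M = np.column_stack([np.ones(N), np.cos(theta), np.sin(theta)])
solver = np.linalg.inv(M.T @ M) @ M.T  # least-squares operator, as in Eq. (3.23)

trials = 200_000
I_clean = A + B * np.cos(theta - phi)
I_noisy = I_clean + rng.normal(0.0, sigma, size=(trials, N))
X = I_noisy @ solver.T  # rows are [A_hat, B1_hat, B2_hat]

cov = np.cov(X[:, 1:], rowvar=False)
predicted = 2 * sigma**2 / N
assert np.allclose(np.diag(cov), predicted, rtol=0.05)  # variances match
assert abs(cov[0, 1]) < 0.05 * predicted                # uncorrelated
```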
Henceforth, the following notation will be used: quantities obtained from measurement carry a hat symbol to differentiate them from the unknown real quantities. As an example, $B(x_1, y_1)$ is the real unknown value while $\hat{B}(x_1, y_1)$ is the value computed from the noisy images. The probability density function is

$$p\big(\hat{B}_1(x_1, y_1), \hat{B}_2(x_1, y_1)\big) = \frac{N}{4\pi\sigma^2}\, e^{-\gamma(x_1, y_1)} \qquad (3.36)$$

where

$$\gamma(x_1, y_1) = \frac{N\big((B_1(x_1, y_1) - \hat{B}_1(x_1, y_1))^2 + (B_2(x_1, y_1) - \hat{B}_2(x_1, y_1))^2\big)}{4\sigma^2}. \qquad (3.37)$$

Now changing to a polar coordinate system using $\hat{B}_1 = r\cos(\phi + \Delta\phi)$ and $\hat{B}_2 = r\sin(\phi + \Delta\phi)$ and $B_1 = B\cos\phi$ and $B_2 = B\sin\phi$ and integrating over r in the domain [0, ∞] we obtain the probability density function

$$p(\Delta\phi) = \frac{e^{-\frac{B^2 N}{4\sigma^2}}\Big(2\sigma + e^{\frac{B^2 N \cos^2\Delta\phi}{4\sigma^2}}\, B\sqrt{N\pi}\,\cos\Delta\phi\,\big(1 + \operatorname{erf}\big(\tfrac{B\sqrt{N}\cos\Delta\phi}{2\sigma}\big)\big)\Big)}{4\pi\sigma} \qquad (3.38)$$

which is independent of φ and where

$$\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2}\, dt. \qquad (3.39)$$
When σ is small and B is large, p(Δφ) can be approximated by the probability density function of a zero-mean Gaussian distribution of variance 2σ²/(B²N) (see [53] for details). Assuming that the spatial period of the pattern is ω, the positional error on x2 is a zero-mean Gaussian noise with variance
$$\sigma_{x_2}^2 = \frac{\omega^2 \sigma^2}{2\pi^2 B^2 N}. \qquad (3.40)$$
The uncertainty interval can be reduced by reducing either the spatial period of the pattern or the variance σ², or by increasing either the number of patterns used or the intensity ratio (i.e. B) of the projection system. Note that even if B is unknown, it can be estimated by projecting a white image and a black image; however, this is only valid when the projector and camera are in focus (see Sect. 3.8).
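The trade-offs above follow directly from Eq. (3.40); a minimal sketch, with illustrative parameter values:

```python
import math

# Eq. (3.40) as a function: the standard deviation of the positional
# error on x2. The numeric values below are illustrative assumptions.
def sigma_x2(omega, sigma, B, N):
    return math.sqrt(omega**2 * sigma**2 / (2 * math.pi**2 * B**2 * N))

base = sigma_x2(omega=32.0, sigma=2.0, B=80.0, N=4)
# Halving the spatial period halves the uncertainty; so does using four
# times as many patterns, or doubling the intensity ratio B.
assert math.isclose(sigma_x2(16.0, 2.0, 80.0, 4), base / 2)
assert math.isclose(sigma_x2(32.0, 2.0, 80.0, 16), base / 2)
assert math.isclose(sigma_x2(32.0, 2.0, 160.0, 4), base / 2)
```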
3.6.2 Uncertainty Related to Intrinsic Parameters
When performing triangulation using Eq. (3.31), the pixel coordinates of the camera are known and noise is only present on the measured pixel coordinates of the projector. Thus, the intrinsic parameters of the camera do not directly influence the uncertainty on the position of the 3D point. The error propagation from the pixel coordinates to the normalized view coordinates for the projector can easily be computed. The transformation in Eq. (3.12) is linear and the variance associated with x2 is
$$\sigma_{x_2'}^2 = \frac{\sigma_{x_2}^2\, s_{x_2}^2}{d^2} \qquad (3.41)$$
where s_x2 and d are intrinsic parameters of the projector and σ²_x2 is computed using Eq. (3.40). According to Eq. (3.41), as the distance d increases, or s_x2 is reduced, the variance is reduced. However, in a real system, the resolution may not be limited by the pixel size but by the optical resolution (see Sect. 3.8), and increasing d may be the only effective way of reducing the uncertainty. As will be explained in Sect. 3.8, when d is increased while keeping the standoff distance constant, the focal length must be increased; otherwise, the image will be blurred. Note that when d is increased, the field of view is also reduced. The intersection of the fields of view of the camera and projector defines the reconstruction volume of a system. Figure 3.9 illustrates the reconstruction volumes of two systems that differ only by the focal length of the projector (i.e. the value of d also varies). Thus, there is a trade-off between the size of the reconstruction volume and the magnitude of the uncertainty.

Fig. 3.9 The reconstruction volume of two systems where only the focal length of the projector is different (50 mm at left and 100 mm at right). The red lines define the plane in focus in the camera. Figure courtesy of NRC Canada
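The scaling behaviour of Eq. (3.41) can be captured in a few lines; the numeric values are illustrative assumptions, not parameters of a real projector.

```python
import math

# Eq. (3.41) in miniature: the variance of the normalized projector
# coordinate is the pixel-coordinate variance scaled by (s_x2 / d)^2.
def normalized_variance(var_x2, s_x2, d):
    return var_x2 * s_x2**2 / d**2

v = normalized_variance(var_x2=0.01, s_x2=0.012, d=12.0)
# Doubling d (e.g. a longer focal length, see Sect. 3.8) divides the
# variance by four, at the cost of a smaller reconstruction volume.
assert math.isclose(normalized_variance(0.01, 0.012, 24.0), v / 4)
```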
3.6.3 Uncertainty Related to Extrinsic Parameters
Because the transformation of Eq. (3.31) from normalized image coordinates to 3D points is non-linear, we introduce a first-order approximation using Taylor's expansion. The solution close to $\hat{x}_2$ can be approximated by
$$Q_{(x_1,y_1)}(\hat{x}_2 + \Delta x_2) \approx Q_{(x_1,y_1)}(\hat{x}_2) + \frac{d}{dx_2} Q_{(x_1,y_1)}(\hat{x}_2)\, \Delta x_2 \qquad (3.42)$$

where

$$\frac{d}{dx_2} Q_{(x_1,y_1)}(\hat{x}_2) = \frac{-r_{33}T_x + r_{13}T_z - r_{31}T_x x_1 + r_{11}T_z x_1 - r_{32}T_x y_1 + r_{12}T_z y_1}{(r_{13} + r_{11}x_1 - r_{33}\hat{x}_2 - r_{31}x_1\hat{x}_2 + r_{12}y_1 - r_{32}\hat{x}_2 y_1)^2} \begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix}. \qquad (3.43)$$
Since a first-order approximation is used, the covariance matrix associated with a 3D point can be computed in the same way as $\Sigma_B$ in Eq. (3.35) [27, 28]. Explicitly, the covariance matrix associated with a 3D point is
$$\Sigma = \begin{bmatrix} x_1^2 & x_1 y_1 & x_1 \\ x_1 y_1 & y_1^2 & y_1 \\ x_1 & y_1 & 1 \end{bmatrix} \frac{(-r_{33}T_x + r_{13}T_z - r_{31}T_x x_1 + r_{11}T_z x_1 - r_{32}T_x y_1 + r_{12}T_z y_1)^2}{(r_{13} + r_{11}x_1 - r_{33}\hat{x}_2 - r_{31}x_1\hat{x}_2 + r_{12}y_1 - r_{32}\hat{x}_2 y_1)^4}\, \sigma_{x_2}^2 \qquad (3.44)$$

where $\sigma_{x_2}^2$ is computed using Eq. (3.40) and Eq. (3.41).
The covariance matrix can be used to compute a confidence region, which is the multi-variable equivalent of the confidence interval.⁴ The uncertainty over the range

⁴A confidence interval is an interval within which we are (1 − α)·100 % confident that a point measured in the presence of Gaussian noise (of known mean and variance) will lie (we use α = 0.05).
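Going from the covariance matrix of Eq. (3.44) to a confidence region can be sketched as follows; the covariance values are an illustrative stand-in, not numbers derived from a real scanner.

```python
import numpy as np

# Sketch: turning a 3x3 point covariance such as Eq. (3.44) into a 95%
# confidence ellipsoid. The covariance below is an illustrative stand-in.
Sigma = np.array([[0.040, 0.010, 0.005],
                  [0.010, 0.090, 0.002],
                  [0.005, 0.002, 0.250]])  # mm^2

# Chi-square quantile for 3 degrees of freedom at alpha = 0.05
# (chi2_{0.95,3} ~= 7.815); scipy.stats.chi2.ppf(0.95, 3) gives the same.
CHI2_95_3DOF = 7.815

# Principal axes of the ellipsoid are the eigenvectors of Sigma; the
# semi-axis lengths are sqrt(quantile * eigenvalue).
eigvals, eigvecs = np.linalg.eigh(Sigma)
semi_axes = np.sqrt(CHI2_95_3DOF * eigvals)  # mm

def in_confidence_region(p, mean, cov=Sigma, q=CHI2_95_3DOF):
    """True iff p lies inside the (1 - alpha) confidence ellipsoid."""
    d = p - mean
    return float(d @ np.linalg.solve(cov, d)) <= q

assert np.all(semi_axes > 0)
assert in_confidence_region(np.array([0.1, 0.0, 0.0]), np.zeros(3))
assert not in_confidence_region(np.array([2.0, 0.0, 0.0]), np.zeros(3))
```

The membership test is simply the squared Mahalanobis distance compared against the chi-square quantile.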