J Math Imaging Vis (2012) 44:480–490
DOI 10.1007/s10851-012-0339-x

Zoom Dependent Lens Distortion Mathematical Models

Luis Alvarez · Luis Gómez · Pedro Henríquez

Published online: 11 May 2012
© Springer Science+Business Media, LLC 2012

L. Alvarez · P. Henríquez
CTIM, Departamento de Informática y Sistemas, Universidad de Las Palmas de Gran Canaria, Campus de Tafira, 35017, Las Palmas, Spain
L. Alvarez, e-mail: [email protected]
P. Henríquez, e-mail: [email protected]

L. Gómez
CTIM, Departamento de Ingeniería Electrónica y Automática, Universidad de Las Palmas de Gran Canaria, Campus de Tafira, 35017, Las Palmas, Spain
e-mail: [email protected]

Abstract  We propose new mathematical models to study how lens distortion varies when the zoom setting of a zoom lens is modified. The new models are based on a polynomial approximation that accounts for the variation of the radial distortion parameters across the range of zoom lens settings, and on the minimization of a global error energy measuring the distance between sequences of distorted aligned points and straight lines after lens distortion correction. To validate the performance of the method we present experimental results on calibration pattern images and on sport event scenarios using broadcast video cameras. We obtain, experimentally, that using just a second order polynomial approximation of the zoom variation of the lens distortion parameters, the quality of the lens distortion correction is as good as the one obtained frame by frame using an independent lens distortion model for each frame.

Keywords  Camera calibration · Distortion model · Zoom dependent model · Radial distortion · 3D scenarios

1 Introduction

It is well known that camera lenses induce image distortion. The magnitude of such distortion depends on factors such as lens quality or lens zoom. One important consequence of lens distortion is that the projections of 3D straight lines in the image are curves (no longer straight lines). Usually, the lens distortion models used in computer vision depend on radial functions of image pixel coordinates, and they can be estimated using image information alone. The basic standard lens distortion model used in computer vision (see for instance [1–3]) is given by the following expression:

  \hat{x} \equiv \tilde{L}(x) = x_c + L(r)\,(x - x_c),   (1)

where x = (x, y) is the original (distorted) image point, \hat{x} = (\hat{x}, \hat{y}) is the corrected (undistorted) point, x_c = (x_c, y_c) is the center of the camera distortion model, usually near the image center, r = \sqrt{(x - x_c)^2 + (y - y_c)^2}, and L(r) is the function which defines the shape of the distortion model. L(r) is usually approximated by the polynomial

  L(r) = 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 + \cdots,   (2)

where the vector k = (k_1, k_2, \ldots, k_{N_k})^T represents the radial distortion parameters. The complexity of the model is given by the polynomial degree we use to approximate L(r) (i.e., the dimension of k). Non-radial terms to account for tangential or decentering effects can also be included in the models [3–6], although for standard camera lenses they are usually neglected. The distortion models given by (1) are well known, simple, and can be estimated using image information alone. In particular, Alvarez, Gómez and Sendra [7] propose an algebraic method to compute lens distortion models by correcting the line distortion induced by the lens distortion.
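As a concrete illustration (not part of the original paper), the following minimal Python sketch applies the radial model (1)–(2) to a single point, assuming a two-coefficient polynomial L(r) = 1 + k_1 r^2 + k_2 r^4; the point, center and coefficient values are hypothetical placeholders.

```python
import numpy as np

def correct_point(x, xc, k):
    """Radial correction x_hat = xc + L(r) * (x - xc), Eqs. (1)-(2),
    with L(r) = 1 + k1*r^2 + k2*r^4 + ... for coefficients k = (k1, k2, ...)."""
    x, xc = np.asarray(x, float), np.asarray(xc, float)
    r = np.linalg.norm(x - xc)
    L = 1.0 + sum(ki * r ** (2 * (i + 1)) for i, ki in enumerate(k))
    return xc + L * (x - xc)

# Hypothetical example: distortion center near the image center, two coefficients.
x_hat = correct_point(x=[100.0, 80.0], xc=[2144.0, 1424.0], k=[2.0e-9, 1.5e-17])
```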
Apart from the above, which normally applies to monofocal cameras, the zoom lens settings must be taken into account to correctly calibrate a zoom lens camera used in a real scenario. This has been specially addressed within the scope of close-range photogrammetry measurement in recent years [8–10]. By modifying the focus and the zoom values, a zoom camera can be adjusted to several fields of view, depths of field and even lighting conditions. Applications of zoom lenses are widely found in 3D scene depth reconstruction [11], telepresence [12], robot navigation [13, 14] or visual tracking [15, 16], among others.

This paper is organized as follows: Sect. 2 deals with the most relevant related works devoted to lens distortion and zoom lens camera models. In Sect. 3 we present some fundamental aspects of zoom lens geometry. The proposed lens distortion model and the experimental setup are discussed in Sect. 4. Experimental results are shown in Sect. 5, followed by some conclusions in Sect. 6.

2 Related Works

Zoom-dependent camera calibration is traditionally devoted to modelling the variation of the camera parameters (the matrix of camera intrinsic parameters and the rotation and translation matrices) between predefined minimum and maximum zoom settings. See [17] for a zoom model accounting for the intrinsic variation only, or [18] for a model regarding both intrinsic and extrinsic parameter variation. To calibrate a zoom lens camera system within a given zoom range, a number of lens settings and the related calibration data are usually stored in a look-up table [19]. Thus, for each lens setting, a considerable number of measurements, requiring a huge amount of time, have to be made. Then, the collected data are processed using a least-squares method (Levenberg-Marquardt or another optimization method) and, by applying a convenient model, the result is the matrix of camera intrinsic parameters expressed as a function of the zoom setting [17, 20, 21]. Because radial lens distortion varies with both zoom and focus, the effect of lens distortion is usually included as part of the intrinsic parameters and it is estimated during the calibration procedure by iteratively undistorting the images generated by the camera [21, 22].

For a detailed analysis of the effect of radial lens distortion for consumer-grade cameras see [9], where it is concluded that:

– the variation of the radial distortion is non-linear with the zoom,
– the radial distortion reaches a maximum at the shortest focal length, even in cases where zero crossings occur.

Besides, the authors of [9] show that, for medium-accuracy digital close-range photogrammetry applications, the variation of the first radial distortion coefficient along the zoom field can be modelled by:

  k_{1,c_i} = d_0 + d_1 c_i^{d_2},   (3)

where c_i is the principal distance, the d_i are empirical coefficients and, for the cameras analyzed in [9], d_2 ranges from around −0.2 to −4.1. These results are for a focus setting spanning from 5 to 21 mm.

In [23] the authors discuss a method for automatically correcting the radial lens distortion in a zoom lens video camera system. The method uses 1-parameter lens distortion models (i.e. only k_1 is considered) and two different local models to account for the barrel and pincushion distortion.
After sampling some images (video frames) with different focal lengths, the authors use the POVIS hardware system to estimate the focal length and k_1 for each frame; then a least-squares method is applied to fit a quadratic polynomial for the first radial distortion coefficient k_1, having as variable the inverse of the focal length f:

  k_1(1/f) = c_0 + c_1 (1/f) + c_2 (1/f)^2,   (4)

where the {c_i} are the polynomial coefficients. It can be noted that, to build a zoom dependent lens distortion model for a set of m images, it is required to estimate the frame by frame lens distortion model by minimizing an appropriate energy function accounting for the deviation between distorted points and corrected (undistorted) ones.

3 Zoom Lens Geometry

We assume that, after lens distortion correction, camera image formation follows the pinhole projection model, which is widely used in computer vision. In Fig. 1 we illustrate the basic pinhole model, where f is the effective focal distance and d is the distance of a scene point to the camera projection plane. Using trigonometric relations we obtain:

  \frac{r_c}{f} = \frac{R}{d - f}.   (5)

Fig. 1  Pinhole projection model. f is the effective focal distance

In the case of a real zoom lens, the effective focal distance f depends on two adjustable lens control parameters: (1) the zoom lens setting and (2) the in-focus distance parameter, that is, the distance from the projection image plane to the points in the scene where the lens is focused. In Fig. 2 we illustrate the basic thin lens model, where we can appreciate the variation of the effective focal distance with respect to the focused distance. The zoom lens setting is the most significant parameter in the effective focal distance value f. The maximum zoom lens setting interval is usually provided by the manufacturer. For instance, in the numerical experiments we use a NIKKOR AF-S 18-200 lens with maximum zoom lens setting interval [18, 200]. This maximum interval is obtained by an adequate combination of zoom lens and in-focus distance settings; if we fix the in-focus distance setting, the interval of effective focal distance is smaller. As we will see in the numerical experiments, such an interval is about [20.56, 127.36] for an in-focus distance of 1185 mm.

Fig. 2  Basic thin lens model. f_∞ is the focal distance for points situated at infinity. f is the effective focal distance when the lens is focused at distance d

4 Proposed Lens Distortion Model

We start by introducing some basic concepts. We use the general approach of estimating L(r) by imposing the requirement that, after lens distortion correction, the projections of 3D lines in the image have to be 2D straight lines. This approach has been used in [1, 7] to minimize the following objective error function, which is expressed in terms of the distance of the primitive points to the associated line after lens distortion correction:

  E(\{k_i\}) = \sum_{l}^{N_l} \sum_{p}^{N_p(l)} \frac{\mathrm{dist}^2(\tilde{L}(x_{l,p}), S_l)}{N_l \cdot N_p(l)},   (6)

where N_l is the number of line primitives detected in the image, N_p(l) is the number of extracted points associated to a line primitive, x_{l,p} is a primitive point associated to the line S_l, \tilde{L}(\cdot) is the lens distortion model given by (1), and E(\{k_i\}) is the average of the squared distances of the primitive line points to a straight line after lens distortion correction. This error function is widely applied and the minimization can be carried out through any (gradient-like) optimization method.

The main goal of this paper is to model the variation of the lens distortion model parameters with respect to the effective focal distance f. First we observe that, using (5), we obtain

  R = d \, r_c \, \frac{1}{f} - r_c,   (7)

so, in particular, the variation of R is linear with respect to 1/f. We expect that, for the in-focus plane, the lens distortion magnitude depends on R and, therefore, the natural choice to model the variation of the lens distortion parameter k_i is a function of 1/f, that is,

  k_i(f) \equiv P_i(1/f),   (8)

where k_i(f) represents the lens distortion parameter k_i for the effective focal length f. In fact, in [23], the authors divide the focal distance interval into two regions (pincushion and barrel areas) and in each area a different polynomial approximation in the variable 1/f is used to model the zoom variation. Probably, as they use a single-parameter lens distortion model, dividing the focal length interval is required to improve the accuracy. As we use more complex lens distortion models, we can deal with the whole focal distance range without separating the interval into several regions. In what follows we assume that P_i(\cdot) is approximated by a polynomial function, that is:

  P_i(x) \equiv a_0^i + a_1^i x + a_2^i x^2 + \cdots + a_{N_i}^i x^{N_i}.   (9)

Therefore the lens distortion model depends on \{a_n^i\} and we denote by

  L_{\{a_n^i\}}(f, r) \equiv 1 + P_1(1/f) r^2 + P_2(1/f) r^4 + \cdots   (10)

the radial lens distortion model for the effective focal distance f, and by

  \hat{x} = \tilde{L}_{\{a_n^i\}}(f, x)   (11)

the lens distortion correction of the point x using the above lens distortion model.
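For illustration only, the sketch below evaluates the zoom dependent model of Eqs. (8)–(11): each coefficient k_i is obtained from its polynomial P_i evaluated at 1/f, and the point is then corrected as in (1). The coefficient arrays and the sample point are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def zoom_distortion_coeffs(a, f):
    """Evaluate k_i(f) = P_i(1/f) (Eqs. (8)-(9)). Each a[i] lists [a0^i, a1^i, a2^i, ...]
    in increasing degree; np.polyval expects the highest degree first, hence the reversal."""
    return [np.polyval(ai[::-1], 1.0 / f) for ai in a]

def correct_point_zoom(x, xc, a, f):
    """Correct point x with the zoom dependent model of Eqs. (10)-(11)."""
    x, xc = np.asarray(x, float), np.asarray(xc, float)
    k = zoom_distortion_coeffs(a, f)
    r = np.linalg.norm(x - xc)
    L = 1.0 + sum(ki * r ** (2 * (i + 1)) for i, ki in enumerate(k))
    return xc + L * (x - xc)

# Hypothetical second-order polynomials for k1 and k2 (placeholder coefficients).
a = [np.array([2.0e-9, -7.0e-7, 2.0e-5]), np.array([2.0e-17, 7.0e-15, -6.0e-13])]
x_hat = correct_point_zoom([100.0, 80.0], [2144.0, 1424.0], a, f=60.0)
```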
We propose to estimate the polynomial coefficients \{a_n^i\} by minimizing the error function:

  E_G(\{a_n^i\}) = \sum_{m}^{M} \sum_{l}^{N_l(m)} \sum_{p}^{N_p(l,m)} \frac{\mathrm{dist}^2(\tilde{L}_{\{a_n^i\}}(f_m, x_{m,l,p}), S_{m,l})}{M \cdot N_l(m) \cdot N_p(l, m)},   (12)

where M is the number of images, f_m is the effective focal distance associated to image m, N_l(m) is the number of line primitives detected in image m, N_p(l, m) is the number of extracted points associated to a particular line primitive, and x_{m,l,p} is a primitive point associated to the line S_{m,l}. We observe that E_G(\{a_n^i\}) is the average of the frame distortion error given by (6) when the distortion coefficients are estimated using the polynomial models.

Concerning the variation of the distortion center with respect to the effective focal length, we do not assume any model because we do not expect a significant variation. As we will see in the numerical experiments, the influence of the distortion center variation is negligible, so we assume that the lens distortion center is the image center.

In what follows, we call the lens distortion model estimated independently for each frame within the zoom range of interest the frame by frame model. For an n-degree polynomial in the Taylor expansion of (1), the frame by frame model for m images is the set of radial distortion coefficients provided by minimizing (6), expressed as

  k = \{k_1^p, k_2^p, \ldots, k_n^p\}, \quad p = 1, 2, \ldots, m.   (13)
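The following sketch illustrates one possible implementation of the global energy (12), under stated assumptions: the paper minimizes (12) with a simple gradient method with per-variable step lengths, whereas this sketch hands the energy to a general-purpose SciPy optimizer and realizes dist(·, S_{m,l}) by refitting each line to the corrected points via total least squares. The variable names (`frames`, `a0`) and the second-order, two-coefficient setup are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def correct_points(pts, xc, k):
    """Apply Eq. (1) with L(r) = 1 + k1 r^2 + k2 r^4 to an (N, 2) array of points."""
    d = pts - xc
    r2 = np.sum(d * d, axis=1)
    L = 1.0 + k[0] * r2 + k[1] * r2 ** 2
    return xc + L[:, None] * d

def line_residuals(pts):
    """Squared perpendicular distances of points to their total-least-squares line."""
    c = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(c, full_matrices=False)
    return (c @ vt[-1]) ** 2          # projections onto the line normal

def global_energy(a_flat, frames, xc):
    """Eq. (12). a_flat = [a0^1, a1^1, a2^1, a0^2, a1^2, a2^2];
    frames is a list of (f_m, [primitive point arrays of shape (N, 2)])."""
    a1, a2 = a_flat[:3], a_flat[3:]
    total = 0.0
    for f_m, primitives in frames:
        inv_f = 1.0 / f_m
        k = (np.polyval(a1[::-1], inv_f), np.polyval(a2[::-1], inv_f))
        frame_err = 0.0
        for pts in primitives:
            frame_err += line_residuals(correct_points(pts, xc, k)).mean() / len(primitives)
        total += frame_err / len(frames)
    return total

# 'frames', 'xc' and the initial coefficients 'a0' (Sect. 4.1) are assumed available, e.g.:
# result = minimize(global_energy, a0, args=(frames, xc), method="Nelder-Mead")
```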
4.1 Experimental Setup

To validate the proposed model we have built a calibration pattern (see Fig. 3) composed of a collection of 31 × 23 white strips. The dimensions of the calibration pattern are 1330 × 1010 mm. The camera is fixed in front of the calibration pattern and we take a number of photos by changing the zoom lens setting of the camera, covering its whole value interval. For each image we estimate the edge border lines of the white strips (using, for instance, the method proposed in [24]), which provide the distorted lines we use to validate our model.

Fig. 3  Calibration pattern composed of a collection of 31 × 23 white strips

Moreover, for each image, the effective focal distance is estimated using the expression

  f = \frac{d \cdot r_c}{R + r_c},   (14)

obtained from (5), where d is the distance from the camera projection plane to the calibration pattern, r_c is the distance between two consecutive strips in the image and R is the distance between two consecutive strips in the calibration pattern. Note that small errors in the value of the distance-to-target d in expression (14) would somehow be compensated during the global minimization and, therefore, would not affect the efficiency of the model.

To summarize, the procedure we use to validate the proposed approach using the calibration pattern can be divided into the following steps:

1. We take a collection of photos of the calibration pattern for a fixed in-focus distance, covering its whole zoom lens setting value interval.
2. We extract distorted lines in the images.
3. For each image we compute the effective focal distance using (14).
4. We estimate the zoom lens distortion polynomial coefficient model by minimizing expression (12).
5. We analyze the lens distortion error obtained using: (i) the proposed zoom dependent lens distortion model for the whole zoom lens setting interval, (ii) the lens distortion model obtained independently for each image by minimizing the energy error (6), and (iii) the original lens distortion error without using any lens distortion correction.

Expression (12) is minimized by a simple gradient method applying an appropriate step length to account for the differences in magnitude of the variables (note that they range from about 10^{-5} to 10^{-17}). We estimate the initial solution as follows:

1. We select some images and calculate the distortion coefficients for the frame by frame model.
2. We fit the quadratic polynomials of (9) to the distortion coefficients from the frame by frame model using least squares.

From the experiments carried out, a reduced number of estimations through the frame by frame model is enough. The results shown in the next sections were calculated using only three images for the frame by frame model: the image corresponding to the maximum focus f_max, the image corresponding to the minimum focus f_min, and an image captured with the focus (f_min + f_max)/2.
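As a small illustration of step 3 and of the initial-solution procedure above, the sketch below computes the effective focal distance with (14) and fits the quadratic polynomials of (9) to three frame by frame estimates by least squares. All numeric values are hypothetical placeholders; units of d, R and r_c must be consistent with f (e.g., mm).

```python
import numpy as np

def effective_focal_distance(d, r_c, R):
    """Eq. (14): f = d * r_c / (R + r_c)."""
    return d * r_c / (R + r_c)

# Hypothetical frame by frame estimates at f_min, (f_min + f_max)/2 and f_max.
f_samples  = np.array([20.56, 73.96, 127.36])          # effective focal distances (mm)
k1_samples = np.array([2.1e-5, 1.1e-7, 2.5e-9])        # placeholder k1 values
k2_samples = np.array([-6.0e-13, 1.0e-15, 1.8e-17])    # placeholder k2 values

# Initial solution: quadratic least-squares fit of each coefficient against 1/f
# (np.polyfit returns the coefficients with the highest degree first).
x = 1.0 / f_samples
a1_init = np.polyfit(x, k1_samples, deg=2)
a2_init = np.polyfit(x, k2_samples, deg=2)
```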
5 Numerical Experiments

We have performed numerical experiments in two different scenarios. First, we check the accuracy of the proposed model using the calibration pattern introduced above. Then, we apply the model to a real scenario: a video sequence of a soccer match with a significant zoom lens variation. In Fig. 4 we show video frames of both scenarios.

Fig. 4  Example of image frames of the test video sequences used in the numerical experiments: calibration pattern (left), real soccer match (right)

In the calibration pattern experiment, we use a Nikon D90 camera with a NIKKOR AF-S 18-200 mm lens and a CCD geometry of 23.7 × 15.6 mm. The resolution of the captured images is 4288 × 2848 pixels. In the soccer video sequence, we deal with a professional HD TV video camera with a frame resolution of 1920 × 1080 pixels (the class of video camera typically used in broadcasting sport events). In this case, the camera manufacturer and zoom lens range are unknown. We estimate, for each frame, the effective focal distance taking into account the real dimensions of the soccer court (which are "a priori" known) and the size of the projected soccer court in the image. This size can be computed using the homography from the image soccer court to the real soccer court model. Such a homography can be estimated using the approach proposed in the classical Zhang calibration method [26]. Note that expression (14) is not used in this case. We remark the significant differences between the selected test scenarios, which point out the wide possibilities of the proposed methodology.
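To make the court-homography idea above concrete, here is a hedged sketch assuming OpenCV is available and that correspondences between the metric court model and the image have already been selected; the point coordinates, court dimensions and helper names are illustrative only and are not taken from the paper.

```python
import numpy as np
import cv2

# Hypothetical correspondences: court model points (metres) and their image projections (pixels).
model_pts = np.array([[0.0, 0.0], [0.0, 68.0], [52.5, 68.0], [52.5, 0.0]], dtype=np.float32)
image_pts = np.array([[212.0, 844.0], [655.0, 130.0], [1610.0, 168.0], [1745.0, 955.0]],
                     dtype=np.float32)

# Homography mapping court model coordinates to image coordinates.
H, _ = cv2.findHomography(model_pts, image_pts, method=0)

def project(H, p):
    """Apply the homography to a 2D model point (homogeneous coordinates)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Projected length (in pixels) of a court segment of known real length, which plays the
# role of the projected court size when estimating the effective focal distance per frame.
p0, p1 = project(H, (52.5, 0.0)), project(H, (52.5, 68.0))
projected_length_px = np.linalg.norm(p1 - p0)
```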
For the calibration pattern, in Fig. 5 we present the image primitives used to calculate the lens distortion models to be embedded in the proposed zoom model. The selected primitives are shown for two cases corresponding to the minimum zoom (zoom lens setting = 18 mm, which corresponds to an effective focal distance f = 20.55 mm) and to the maximum zoom (zoom lens setting = 200 mm, with an effective focal distance f = 127.35 mm). We notice that there is a significant difference between the effective focal distance range and the range provided by the lens manufacturer. The reason is that, on the one hand, the effective focal length depends on the "in focus" distance and, on the other hand, the effective focal length is obtained using (14), where some elements such as d and R are estimated by hand and thus without high precision. In any case, the focal length range is not very relevant in our work because a variation in the focal length range is compensated in the estimation of the coefficients of the lens distortion model, so we do not expect a significant difference in the lens distortion correction quality.

Fig. 5  On the left, primitives used in the numerical experiments for the geometric pattern: top (f = 20.55 mm) and bottom (f = 127.35 mm). On the right, images and primitives with the distortion removed by applying the proposed zoom model. We can appreciate the variation of the lens distortion models with respect to f by looking at the curvature of the image boundary: in the case of f = 127.35 mm, the lens distortion correction is significantly smaller than in the case of f = 20.55 mm

The image primitives consist of a set of edge points belonging, respectively, to the horizontal and vertical white stripes within the pattern. This can be performed using any edge detector (see for instance [24] or [25] for further details). For instance, the number of primitives extracted was, for the case f = 20.55 mm, 303,623 points and 103 lines and, for the case f = 127.35 mm, 52,433 points and 18 lines. The total number of primitive points extracted in the 50 images we used in the experiment was 8,166,660.

For the case of the soccer video sequence, the primitives selected to account for the radial distortion model can be seen in Fig. 6. In this case, the total number of available primitive points is smaller than for the calibration pattern and corresponds to the line centers of the white strips appearing on the soccer terrain (sideline, halfway line, goal line and the ones belonging to the goal box and to the penalty box), as can be appreciated in the figure. Note that these primitives may not always be available (visible); thus, calibrating this kind of images is a challenging problem because there are only a few visible primitives to perform the calibration. We represent the cases for two zoom settings, f = 45.16 mm and f = 156.55 mm, which correspond to the effective focal distance extrema of the video sequence frames. The number of primitives extracted was, for the case f = 45.16 mm, 1060 points and 13 lines and, for the case f = 156.55 mm, 957 points and 8 lines. The total number of primitive points extracted in the 55 images we used in the experiment was 55,447.

Fig. 6  On the left, primitives used in the numerical experiments for the soccer video sequence: top (f = 45.16 mm) and bottom (f = 156.55 mm). On the right, images and primitives with the distortion removed by applying the proposed zoom model. We can appreciate the variation of the lens distortion models with respect to f by looking at the curvature of the image boundary: in the case of f = 156.55 mm, the lens distortion correction is significantly smaller than in the case of f = 45.16 mm. The zoom focus is unknown and has been estimated (see Sect. 5)

We first evaluated the performance of the proposed zoom lens distortion model on the calibration geometric pattern and, after a detailed evaluation, we applied the zoom lens distortion model to the soccer video sequence.

5.1 Results for the Calibration Pattern

We note that the calibration pattern can be seen as an ideal zoom experiment with a dense distribution of line primitives, which allows us to analyze the lens distortion model behavior accurately. We first evaluated the influence of the lens distortion center variation through the focal distance range (f = 20.55 mm to f = 127.35 mm). The center of radial distortion is also minimized when estimating the k_1 and k_2 distortion coefficients for the frame by frame model. We used the algebraic method [7] to estimate these coefficients and, by means of the steepest descent algorithm, we improved the solution by minimizing the RMS distance function, as also explained in [7].

Fig. 7  Displacement of the center of distortion for the geometric pattern

Fig. 8  Relative error improvement percentage when optimizing the lens distortion center

According to the results obtained, we can conclude that the variation of the lens distortion center can be neglected for two reasons. First, as shown in Fig. 7, the displacement of the center of distortion for the distortion model (13) is very small (with a maximum norm of ≈ 4 pixels). Second, as shown in Fig. 8, the relative improvement percentage of the energy error (6) when the lens distortion center is optimized, compared with when it is not optimized (the distortion center is the image center), is very small (with a maximum percentage of 1.5 %). Therefore we can conclude that the influence of the lens distortion center variation is negligible and, in what follows, we consider that the lens distortion center is the image center.

The variation of the estimated distortion coefficients along the zoom field using the frame by frame model can be seen in Fig. 9 (represented as a function of the inverse of the focal distance). We also represent the estimated second order polynomial approximation. We observe that the polynomials fit the distortion parameter distribution accurately (especially in the case of k_1, which is the more important parameter).

Fig. 9  Variation of the estimated distortion coefficients k_1 (top) and k_2 (bottom) with the inverse of the focal distance, and the estimated second order polynomial approximation. The polynomials fit the distortion parameter distribution accurately (especially in the case of k_1, which is the more important parameter). Moreover, the variations of k_1 and k_2 with respect to the polynomials move in opposite directions, so we expect a compensation effect in terms of lens distortion model correction
Moreover, we can observe that, in general, at the focal distances where k_1 deviates from its polynomial approximation, the deviation of k_2 with respect to its polynomial moves in the opposite direction, so we expect a compensation effect in terms of lens distortion model correction.

Figure 10 shows the performance of the proposed zoom model. In this figure we present, for each frame: (i) the original lens distortion error (6) without using any lens distortion correction, (ii) the energy error (6) of the lens distortion model obtained independently for each image by minimizing (6), and (iii) the energy error (6) computed using the proposed polynomial zoom dependent lens distortion model. We observe that the quality of the distortion correction obtained using the proposed zoom model is as good as the one obtained independently frame by frame.

Fig. 10  Distance function for the geometric pattern estimated by the three models (dashed line: original error function, solid line: proposed quadratic zoom model, dotted line: frame by frame model (13))

In Table 1 we summarize the RMS values for the three models presented in Fig. 10. From these results, it can be appreciated that the relative RMS value difference between the proposed zoom dependent model and the model estimated independently frame by frame is just around 1.43 % (for the results in mm). Because the calibration pattern stands in a front-parallel position with respect to the camera projection plane, the pixel to mm unit conversion is trivial using (5).

Table 1  Summary of results for the geometric pattern (RMS values)

  Distortion model comparison                    Pixels    Millimeters
  Residue without using lens distortion model    4.2991    2.0421
  Residue from frame by frame model              1.8192    0.8482
  Residue from proposed zoom polynomial model    1.8241    0.8604

In Fig. 11 the maximum distortion error is shown. This error has been calculated for an image pixel located at a corner point (the top corner has been selected). Note that the maximum error spans around 200 pixels to correct the radial distortion due to the zoom.

Fig. 11  Maximum distortion error for the pattern estimated using the proposed model. The error is calculated as ‖x̂ − x_c‖ − ‖x − x_c‖, where x̂ is the corrected (undistorted) point, x is a point located at a corner of the image, and x_c is the center of distortion

In this experiment, the optimized lens distortion zoom model coefficients are given by the polynomials:

  k_1(f) = 2.26 × 10^{-9} − 7.19 × 10^{-7} (1/f) + 2.14 × 10^{-5} (1/f)^2,
  k_2(f) = 1.74 × 10^{-17} + 7.36 × 10^{-15} (1/f) − 6.55 × 10^{-13} (1/f)^2.
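As a rough, illustrative cross-check of Fig. 11 (not the authors' code), the sketch below evaluates the reported pattern polynomials and computes ‖x̂ − x_c‖ − ‖x − x_c‖ at a corner pixel. The image size and the image-center assumption follow the experiment description; the exact figure depends on which corner is used.

```python
import numpy as np

# Optimized zoom model coefficients reported above (pattern experiment), lowest degree first.
a1 = np.array([2.26e-9, -7.19e-7, 2.14e-5])
a2 = np.array([1.74e-17, 7.36e-15, -6.55e-13])

xc = np.array([4288.0, 2848.0]) / 2.0        # distortion center = image center (Sect. 4)
corner = np.array([0.0, 0.0])                # a corner pixel of the 4288 x 2848 image

def max_distortion_error(f):
    """Magnitude of ||x_hat - x_c|| - ||x - x_c|| at the image corner for focal distance f (mm)."""
    k1 = np.polyval(a1[::-1], 1.0 / f)
    k2 = np.polyval(a2[::-1], 1.0 / f)
    r = np.linalg.norm(corner - xc)
    L = 1.0 + k1 * r**2 + k2 * r**4
    return abs(L * r - r)

errors = {f: max_distortion_error(f) for f in np.linspace(20.55, 127.35, 5)}
```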
5.2 Results for the Soccer Video Sequence

The soccer video sequence we use was taken by a broadcast video camera and has been provided to us by the MEDIAPRODUCCION S.L. company. The video sequence is in HD resolution (1920 × 1080 pixels) and lasts 28 seconds (841 frames). The zoom setting ranges from 45.16 mm to 156.55 mm. To estimate the proposed zoom dependent polynomial model we have selected 55 frames covering the whole range of effective focal distances. We have obtained the following polynomial models for the lens distortion coefficients:

  k_1(f) = 2.65 × 10^{-8} − 8.88 × 10^{-6} (1/f) + 2.86 × 10^{-4} (1/f)^2,
  k_2(f) = 1.99 × 10^{-14} + 3.07 × 10^{-12} (1/f) − 1.04 × 10^{-10} (1/f)^2.

In Fig. 12 the performance of the proposed zoom model is illustrated. As in the calibration pattern experiment, we present a comparison of the lens distortion error measures for (i) the original lens distortion error (6) without using any lens distortion correction, (ii) the lens distortion model obtained independently for each image by minimizing the energy error (6), and (iii) the energy error (6) computed using the proposed polynomial zoom dependent lens distortion model. We observe that the quality of the distortion correction obtained using the proposed zoom model is as good as the one obtained independently frame by frame.

Fig. 12  Distance function for the soccer video sequence estimated by the three models (dashed line: original error function, solid line: proposed quadratic zoom model, dotted line: frame by frame model (13))

In Table 2 we summarize the RMS values for the three models presented in Fig. 12. The single zoom model has also been included for comparison. From these results, it can be appreciated that the relative RMS value difference between the proposed zoom dependent model and the model estimated independently frame by frame is just around 1.33 %. These results are expressed only in pixels because, as the camera is not in a front-parallel position with respect to the view, we cannot associate a single real length measure (meters) to the pixel size.

Table 2  Summary of results for the soccer video image set (RMS values)

  Distortion model comparison                    Pixels
  Residue without using lens distortion model    1.0601
  Residue from frame by frame model              0.5478
  Residue from proposed zoom polynomial model    0.5551

In Fig. 13 the maximum distortion error is shown. As can be seen, it varies by around 20 pixels with the zoom to correct the radial distortion.

Fig. 13  Maximum distortion error for the soccer sequence estimated using the proposed model. The error is calculated as ‖x̂ − x_c‖ − ‖x − x_c‖, where x̂ is the corrected (undistorted) point, x is a point located at a corner of the image, and x_c is the center of distortion

One very important advantage of the proposed model is that, using the obtained polynomials, we can estimate the lens distortion model for any effective focal distance f. In particular, we can obtain lens distortion models for the whole video sequence (841 frames) although just 55 frames have been used to estimate the polynomials. To illustrate the application of the proposal to the whole video sequence, we have created a video where the lens distortion is corrected for each frame using the proposed zoom dependent polynomial model (see the demo video at http://www.ctim.es/demo101/).
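Below is a minimal sketch of how the fitted soccer polynomials can be evaluated for any frame of the sequence, assuming a per-frame effective focal distance (estimated via the court homography) is available. It corrects point coordinates only; the image resampling used for the demo video is outside its scope, and the variable names are hypothetical.

```python
import numpy as np

# Polynomial models fitted for the soccer sequence (lowest degree first), from Sect. 5.2.
a1 = np.array([2.65e-8, -8.88e-6, 2.86e-4])
a2 = np.array([1.99e-14, 3.07e-12, -1.04e-10])
xc = np.array([960.0, 540.0])                  # image center of the 1920 x 1080 frames

def correct_frame_points(points, f):
    """Correct an (N, 2) array of distorted points for a frame with effective focal distance f."""
    inv_f = 1.0 / f
    k1 = np.polyval(a1[::-1], inv_f)
    k2 = np.polyval(a2[::-1], inv_f)
    d = points - xc
    r2 = np.sum(d * d, axis=1)
    L = 1.0 + k1 * r2 + k2 * r2**2
    return xc + L[:, None] * d

# 'focal_per_frame' (one effective focal distance per frame) and 'points_per_frame'
# are assumed to be available for the 841 frames, e.g.:
# corrected = [correct_frame_points(p, f) for p, f in zip(points_per_frame, focal_per_frame)]
```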
6 Conclusions

New mathematical models to study the variation of lens distortion models for a zoom camera have been discussed. Such models are based on a polynomial approximation accounting for the variation of the radial distortion parameters through the zoom range, and on the minimization of a global error energy measuring the distance between sequences of distorted aligned points and straight lines after lens distortion correction. We have obtained that, using a second order polynomial approximation, the quality of the lens distortion correction is as good as for the frame by frame approach. This is remarkable because, using only 6 parameters (3 for the polynomial associated to the first lens distortion coefficient k_1 and 3 for the second coefficient k_2), we can estimate the lens distortion model for any effective focal distance of the zoom lens.

The proposed model has been applied to estimate the zoom dependent lens distortion model for a calibration pattern and for a real soccer video sequence filmed with a professional video camera. The results for both cases show the potential of the proposed model.

Acknowledgements  This research has been partially supported by the MICINN project reference MTM2010-17615 (Ministerio de Ciencia e Innovación, Spain). We thank the MEDIAPRODUCCION S.L. company, which kindly provided us with the real soccer video sequence used in the numerical experiments.

References

1. Devernay, F., Faugeras, O.: Straight lines have to be straight. Mach. Vis. Appl. 13(1), 14–24 (2001)
2. Faugeras, O., Luong, Q.-T., Papadopoulo, T.: The Geometry of Multiple Images. MIT Press, Cambridge (2001)
3. Tsai, R.Y.: A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 3(4), 323–344 (1987)
4. Faugeras, O.: Three-Dimensional Computer Vision. MIT Press, Cambridge (1993)
5. Weng, J., Cohen, P., Herniou, M.: Camera calibration with distortion models and accuracy evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 14(10), 965–980 (1992)
6. Light, D.L.: The new camera calibration system at the US Geological Survey. Photogramm. Eng. Remote Sens. 58(2), 185–188 (1992)
7. Alvarez, L., Gomez, L., Sendra, R.: An algebraic approach to lens distortion by line rectification. J. Math. Imaging Vis. 35(1), 36–50 (2009)
8. Fraser, C.S., Shortis, M.R.: Variation of distortion within the photographic field. Photogramm. Eng. Remote Sens. 58(6), 851–855 (1992)
9. Fraser, C., Al-Ajlouni, S.: Zoom-dependent camera calibration in digital close-range photogrammetry. Photogramm. Eng. Remote Sens. 72(9), 1017–1026 (2006)
10. Bräuer-Burchardt, C., Heinze, M., Munkelt, C., Kühmstedt, P., Notni, G.: Distance dependent lens distortion variation in 3D measuring systems using fringe projection. In: BMVC 2006, pp. 327–336 (2006)
11. Irani, M., Anandan, P.: A unified approach to moving object detection in 2D and 3D scenes. IEEE Trans. Pattern Anal. Mach. Intell. 20(6), 577–589 (1998)
12. Hampapur, A., Brown, L., Connell, J., et al.: Smart video surveillance: exploring the concept of multiscale spatiotemporal tracking. IEEE Signal Process. Mag. 22(2), 38–51 (2005)
13. Martinez, E., Torras, C.: Contour-based 3D motion recovery while zooming. Robot. Auton. Syst. 44(3–4), 219–227 (2003)
14. Fahn, C., Lo, C.: A high-definition human face tracking system using the fusion of omni-directional and PTZ cameras mounted on a mobile robot. In: 5th IEEE Conference on Industrial Electronics and Applications (ICIEA), Taichung, pp. 6–11 (2010)
15. Fayman, J., Sudarsky, O., Rivlin, E.: Zoom tracking. In: Proceedings of the International Conference on Robotics and Automation, Leuven, Belgium, pp. 2783–2788 (1998)
16. Peddigari, V., Kehtarnavaz, N.: A relational approach to zoom tracking for digital still cameras. IEEE Trans. Consum. Electron. 51(4), 1051–1059 (2005)
17. Ergum, B.: Photogrammetric observing the variation of intrinsic parameters for zoom lenses. Sci. Res. Essays 5(5), 461–467 (2010)
18. Wilson, R., Shafer, S.: A perspective projection camera model for zoom lenses. In: Proc. Second Conference on Optical 3-D Measurement Techniques, Switzerland, October (1993)
19. Tarabanis, K., Tsai, R., Goodman, D.: Modeling of a computer-controlled zoom lens. In: Proceedings of the IEEE International Conference on Robotics and Automation, vol. 2, pp. 1545–1551 (1992)
20. Li, M., Lavest, J.: Some aspects of zoom lens camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 18(11), 1105–1110 (1996)
21. Atienza, R., Zelinsky, A.: A practical zoom camera calibration technique: an application on active vision for human-robot interaction. In: Proceedings of the Australian Conference on Robotics and Automation, Sydney, Australia, pp. 85–90 (2001)
22. Benhimane, S., Malis, E.: Self-calibration of the distortion of a zooming camera by matching points at different resolutions. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, pp. 2307–2312 (2004)
23. Kim, D., Shin, H., Oh, J., Sohn, K.: Automatic radial distortion correction in zoom lens video camera. J. Electron. Imaging 19(4), 43010–43017 (2010)
24. Alvarez, L., Esclarín, J., Trujillo, A.: A model based edge location with subpixel precision. In: Proceedings of IWCVIA 03: International Workshop on Computer Vision and Image Analysis, Las Palmas de Gran Canaria, Spain, pp. 29–32 (2003)
25. Alemán-Flores, M., Alvarez, L., Henríquez, P., Mazorra, L.: Morphological thick line center detection. In: Proceedings of ICIAR 2010. LNCS, vol. 6111, pp. 71–80. Springer, Berlin (2010)
26. Zhang, Z.: A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000)

Luis Alvarez received an M.Sc. in applied mathematics in 1985 and a Ph.D. in mathematics in 1988, both from Complutense University (Madrid, Spain). Between 1991 and 1992 he worked as a post-doctoral researcher at the CEREMADE laboratory in the computer vision research group directed by Prof. Jean-Michel Morel. Since 2000 he has been a full professor at the University of Las Palmas de Gran Canaria (ULPGC), where he created the research group Análisis Matemático de Imágenes (AMI). He is an expert in computer vision and applied mathematics. His main research interests are the applications of mathematical analysis to computer vision, including problems such as multiscale analysis, mathematical morphology, optic flow estimation, stereo vision, shape representation, medical imaging, synthetic image generation and camera calibration.

Luis Gómez received an M.Sc. in Physics in 1988 (UNED, Madrid, Spain) and a Ph.D. in Telecommunication Engineering in 1992 (University of Las Palmas de Gran Canaria, ULPGC, Spain). Since 1994 he has been an assistant professor at the University of Las Palmas de Gran Canaria (ULPGC). His main research interests are the applications of optimization to engineering problems, such as ultrasound imaging and camera calibration. He works at the CTIM (Centro de Tecnologías de la Imagen, ULPGC) group directed by Professor Luis Álvarez.

Pedro Henríquez received an M.Sc. in Computer Science in 2008 from the University of Las Palmas de Gran Canaria (ULPGC, Spain). Since 2008 he has been a Ph.D. student at the University of Las Palmas de Gran Canaria (ULPGC). He works at the CTIM (Centro de Tecnologías de la Imagen, ULPGC) group directed by Professor Luis Álvarez. His main research interests are video processing, lens distortion model estimation, camera calibration, feature detection and tracking.