We introduce a generalized camera calibration model that determines the camera parameters without requiring perfect rectangular road-lane markings, thus overcoming a limitation of state-of-the-art calibration models. The advantage of the new model is that it can cope with situations where the road-lane markings do not form a perfect rectangle, making calibration from trapezoidal patterns or parallelograms possible. The model requires only four reference points, together with the lane width and the lengths of the left and right lane markings, to determine the camera parameters. Real-world surveying experiments show that the new model is effective in defining the 2D-to-3D transformation (and vice versa) when there is no rectangular pattern on the road, and that it also copes with trapezoidal patterns, near-parallelograms, and imperfect rectangles. This development greatly increases the flexibility and generality of traditional camera calibration models.
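Once four image–ground correspondences are available, the core of any such 2D-to-3D road-plane mapping can be expressed as a homography. The sketch below is a generic direct linear transform (DLT) estimate from four point pairs, not the paper's calibration model; all function names are illustrative.

```python
import numpy as np

def homography_from_points(img_pts, ground_pts):
    """Estimate the 3x3 homography mapping image points to ground-plane
    points from four correspondences via the direct linear transform."""
    A = []
    for (u, v), (x, y) in zip(img_pts, ground_pts):
        A.append([-u, -v, -1, 0, 0, 0, x * u, x * v, x])
        A.append([0, 0, 0, -u, -v, -1, y * u, y * v, y])
    # The homography entries span the null space of A.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def image_to_ground(H, u, v):
    """Map an image pixel (u, v) to ground-plane coordinates."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

For example, mapping the unit square in the image to a 3.5 m by 10 m lane patch on the ground sends the image centre (0.5, 0.5) to the patch centre (1.75, 5.0).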
This paper proposes a knowledge-based methodology for determining the resolvability of N occluded vehicles seen in a monocular image sequence. The resolvability of each vehicle is determined in three steps: first, deriving the relationship between the camera position and the number of vertices of a projected cuboid in the image; second, finding the directions of the edges of the projected cuboid in the image; and third, modeling the maximum number of occluded cuboid edges beyond which the occluded cuboid is irresolvable. The proposed methodology has been tested rigorously on a number of real-world monocular traffic image sequences involving multiple vehicle occlusions, and is found to successfully determine the number of occluded vehicles as well as the resolvability of each vehicle. We believe the proposed methodology will form the foundation of a more accurate traffic flow estimation and recognition system.
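The first step relies on a standard geometric fact: a convex cuboid shows one, two, or three faces depending on the camera position, so its projected outline has four or six vertices. The sketch below illustrates that fact with a back-face visibility test; it is an assumption-level illustration, not the paper's derivation.

```python
import numpy as np

# Outward unit normals and centre points of the six faces of an
# axis-aligned unit cuboid centred at the origin.
FACES = [(np.array([1, 0, 0]), np.array([0.5, 0, 0])),
         (np.array([-1, 0, 0]), np.array([-0.5, 0, 0])),
         (np.array([0, 1, 0]), np.array([0, 0.5, 0])),
         (np.array([0, -1, 0]), np.array([0, -0.5, 0])),
         (np.array([0, 0, 1]), np.array([0, 0, 0.5])),
         (np.array([0, 0, -1]), np.array([0, 0, -0.5]))]

def visible_faces(camera_pos):
    """Count faces whose outward normal points towards the camera."""
    cam = np.asarray(camera_pos, dtype=float)
    return sum(1 for n, c in FACES if np.dot(n, cam - c) > 0)

def silhouette_vertices(camera_pos):
    """Vertex count of the projected cuboid outline: 4 when exactly one
    face is visible, 6 when two or three faces are visible."""
    return 4 if visible_faces(camera_pos) == 1 else 6
```

A camera looking straight at one face sees 4 outline vertices; a camera placed off all three axes sees three faces and 6 vertices.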
This paper presents a novel algorithm for handling occlusion in visual traffic surveillance (VTS) by geometrically splitting the model that has been fitted onto the composite binary vehicle mask of two occluded vehicles. The proposed algorithm consists of a critical-point detection step, a critical-point clustering step, and a model partition step using the vanishing point of the road. The critical-point detection step detects the major critical points on the contour of the binary vehicle mask. The critical-point clustering step selects the best of the detected critical points as reference points for the model partition. The model partition step partitions the model by exploiting the vanishing point of the road and the selected critical points. The proposed algorithm was tested on a number of real traffic image sequences and successfully partitions the model fitted onto two occluded vehicles. To evaluate accuracy, the dimensions of each individual vehicle were estimated from the partitioned model; the estimation accuracies in vehicle width, length, and height are 95.5%, 93.4%, and 97.7%, respectively.
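Critical points on a mask contour are typically points of sharp boundary curvature. As a minimal stand-in for the detection step (the paper's actual detector and threshold are not specified here), the sketch below flags contour points where the polygon turns by more than an illustrative angle threshold.

```python
import numpy as np

def critical_points(contour, angle_thresh_deg=40.0):
    """Flag indices of a closed contour where the boundary turns sharply.
    The 40-degree threshold is illustrative, not the paper's value."""
    pts = np.asarray(contour, dtype=float)
    n = len(pts)
    critical = []
    for i in range(n):
        prev_v = pts[i] - pts[i - 1]          # incoming edge
        next_v = pts[(i + 1) % n] - pts[i]    # outgoing edge
        cos_a = np.dot(prev_v, next_v) / (
            np.linalg.norm(prev_v) * np.linalg.norm(next_v))
        turn = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        if turn > angle_thresh_deg:
            critical.append(i)
    return critical
```

On a square contour sampled at corners and edge midpoints, only the four 90-degree corners are flagged; midpoints, where the boundary is straight, are not.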
In modern traffic surveillance, computer vision methods are often employed to detect vehicles of interest because of the rich information content of an image. In this paper, we propose an efficient method for extracting the boundary of vehicles free from their moving cast shadows and reflective regions. The extraction method is based on the hypothesis that regions of similar texture are less discriminative, irrespective of intensity differences between the vehicle body and the cast shadow or reflection on the vehicle. In this novel algorithm, a unified likelihood map based on the texture, luminance, and chrominance of each pixel is first constructed. A foreground mask is then obtained by applying morphological operations. Vehicles can be successfully extracted, and different vehicle components can be efficiently distinguished by the related autocorrelation index within the vehicle mask.
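The pipeline above fuses per-pixel cues into one likelihood map, thresholds it, and cleans the result morphologically. The sketch below shows one way to do this with NumPy only; the product fusion rule, the 0.5 threshold, and the 3x3 opening are assumptions for illustration, not the paper's specific choices.

```python
import numpy as np

def _shifted_combine(mask, op_all):
    """Combine a boolean mask with its eight 3x3-neighbourhood shifts.
    op_all=True gives erosion (logical AND), False gives dilation (OR)."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask) if op_all else np.zeros_like(mask)
    h, w = mask.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            shifted = p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            out = (out & shifted) if op_all else (out | shifted)
    return out

def foreground_mask(texture_lik, luma_lik, chroma_lik, thresh=0.5):
    """Fuse per-pixel likelihood maps (each in [0, 1]) into a unified map
    by a simple product rule, threshold it, and apply a morphological
    opening (erosion then dilation) to remove isolated noise pixels."""
    unified = texture_lik * luma_lik * chroma_lik
    mask = unified > thresh
    return _shifted_combine(_shifted_combine(mask, True), False)
```

Opening removes speckle smaller than the 3x3 structuring element while preserving larger foreground blobs such as a vehicle region.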