Open Access
28 November 2017 Detection scheme for a partially occluded pedestrian based on occluded depth in lidar–radar sensor fusion
Seong Kyung Kwon, Eugin Hyun, Jin-Hee Lee, Jonghun Lee, Sang Hyuk Son
Abstract
Object detection is a critical technology for the safety of pedestrians and drivers in autonomous vehicles. Above all, occluded pedestrian detection is still a challenging topic. We propose a new detection scheme for occluded pedestrian detection by means of lidar–radar sensor fusion. In the proposed method, the lidar and radar regions of interest (RoIs) are selected based on the respective sensor measurements. The occluded depth is a new means to determine whether an occluded target exists or not. The occluded depth is the region projected outward by extending the longitudinal distance while maintaining the angle formed by the two outermost end points of the lidar RoI. The occlusion RoI is the overlapped region made by superimposing the radar RoI and the occluded depth. An object within the occlusion RoI is detected from the radar measurement information, and the occluded object is estimated as a pedestrian based on the human Doppler distribution. Various experiments on detecting a partially occluded pedestrian are performed in both indoor and outdoor environments. According to the experimental results, the proposed sensor fusion scheme has much better detection performance than the same system without the proposed method.

1.

Introduction

In recent years, research on intelligent vehicles has attracted considerable interest.1,2 In addition, various techniques have been applied to develop autonomous driving technologies, such as advanced driver assistance systems, advanced smart cruise control, lane keeping assist systems, and autonomous emergency braking systems. In spite of such technological advancements, pedestrian accident rates continue to increase annually.

Technologies for detecting objects, including pedestrians, have been continuously developed using camera, lidar, and radar. Lidar is able to measure a precise distance between the sensor and an object, and the sensor has a wide field of view. On the other hand, lidar-based detection techniques have relatively low recognition performance compared with camera-based techniques, which exploit the shape of an object. Radar makes use of the Doppler shift to extract distance and velocity. However, it is difficult for a radar-based algorithm to recognize objects because they appear as many scattered signals, and a long processing time is required. Research on overcoming these sensor limitations has therefore been actively carried out.

Sensor fusion techniques have been proposed to overcome the limitations of single sensors and to improve the detection performance. In the case of the lidar–camera fusion technique, lidar-based signal processing has been used to extract the region of interest (RoI) of an object, and camera-based image processing has been applied to reduce the overall processing time.3 In other words, image processing is finally performed only in the region allocated from lidar. Conventional lidar–radar fusion techniques have mainly been used to detect the presence of vehicles and motorcycles.4,5 Velocity information on moving vehicles is extracted from radar; the shapes and types of objects are estimated using the width, length, height, and position measured with lidar. However, pedestrian detection has not yet been investigated in conventional lidar–radar fusion techniques because pedestrians have low reflectivity and fewer scattering points while moving. In particular, when a pedestrian is partially obscured by another object, there is tremendous difficulty in detecting the occluded pedestrian.6 This is because it is difficult to extract a pedestrian’s shape or feature information. To solve this problem, various studies using camera-based sensor fusion techniques have been proposed for detecting occluded objects.7–12 These sensor fusion techniques use a camera to detect and classify objects by determining the similarity between the object extracted from the sensor and the training data used for machine learning. However, to perform machine learning with this approach, various features of pedestrians are required. Camera-based techniques extract features, such as faces, arms, and legs, that are only partially visible on occluded pedestrians. To classify objects, similarity estimation and tracking techniques are performed over several frames. During this processing, there is the possibility of missing pedestrians due to a lack of feature data.

To improve the detection performance, we propose, in this paper, a new occluded pedestrian detection scheme using lidar–radar sensor fusion. In our proposed method, we introduce two new concepts: occluded depth and occlusion RoI. The occluded depth is a special region used to determine whether an occluded target exists. A moving object inside the occluded depth is measured only by radar, not by lidar. To detect occluded pedestrians, we use various RoIs, namely the radar RoI, lidar RoI, fusion RoI, and occlusion RoI. First, radar RoIs are created according to the range and azimuthal angle of target objects measured by radar. Similarly, lidar RoIs are assigned using object information obtained from lidar, such as the width, length, height, position, and curve features. Fusion RoIs are made by superimposing radar RoIs and lidar RoIs. The occlusion RoI is created by overlapping the radar RoI with the occluded depth to detect obstructed objects. Therefore, by taking advantage of the position and Doppler information of the object within the occlusion RoIs, we can determine whether there are moving objects or not. Finally, the radar human Doppler pattern is used to determine whether the occluded object is a pedestrian.

This paper is organized as follows: Sec. 2 reviews previous object detection technologies; Sec. 3 proposes the occluded-depth-based lidar–radar sensor fusion. Section 4 presents experimental results verifying the detection performance of the proposed method, taking real road environments into consideration. Finally, Sec. 5 summarizes the concluding remarks.

2.

Related Work

Conventional object detection technologies often employ a camera sensor because it easily extracts various features, such as color, contour, and image pattern. In image processing, the sliding window method is used for object detection and feature extraction over the whole image.2 However, these techniques have drawbacks, such as a heavy computational burden and high sensitivity to the environment. In this paper, we use lidar and radar instead of the camera in order to detect occluded as well as unoccluded pedestrians. This section reviews the conventional sensor fusion technologies for occluded object detection.

2.1.

Lidar–Camera Sensor Fusion

Generally, lidar–camera sensor fusion has been used for object detection. Image processing using a camera distinguishes objects based on the color information of an image. Various kinds of color information are therefore used for object detection, but the detection performance is strongly affected by temporal changes of light. Also, since image processing is performed on a pixel-by-pixel basis in every image, it requires complicated calculations. A typical example is the histogram of oriented gradients (HoG). HoG is a technique for calculating gradients over portions of an image; based on the calculated gradients, the object is segmented using the resulting histograms.3 Lidar processing for object detection estimates the contour based on object information, such as the width, length, height, and position. However, it is difficult for the lidar-based detection method to obtain features such as the color of an object. To compensate for the shortcomings of the individual sensors, lidar–camera sensor fusion is used. Through sensor fusion, the computational complexity of image processing is reduced and the detection performance is improved. Lidar–camera sensor fusion is usually composed of three steps: calibration, feature extraction, and classification.3,13

The first step is a calibration process that matches the coordinates of the lidar and the camera. In the calibration step, the distance between an object and the two sensors is measured, and the position difference is compensated. The second step is to extract object information using lidar. In the case of 2-D lidar, the width and thickness of an object are obtained, whereas 3-D lidar additionally extracts its height. The final step is object classification using the camera. The image obtained from the camera is classified based on the position of the object. The classification is performed using machine learning algorithms, such as the support vector machine and AdaBoost.2,3 AdaBoost (adaptive boosting) builds the final strong classifier by combining the weighted results of multiple weak classifiers.3
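For illustration only, the short sketch below trains an AdaBoost classifier on synthetic feature vectors using scikit-learn; it is not the implementation of Refs. 2 and 3, and the 64-dimensional feature layout and class statistics are assumed purely for the example.

```python
# Illustrative sketch of the classification step in lidar-camera fusion,
# assuming each candidate RoI has already been reduced to a fixed-length
# feature vector (e.g., an HoG-like descriptor). Synthetic data only.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
# Toy data: 200 pedestrian and 200 non-pedestrian feature vectors (64-D).
X_ped = rng.normal(loc=1.0, scale=0.5, size=(200, 64))
X_bg = rng.normal(loc=0.0, scale=0.5, size=(200, 64))
X = np.vstack([X_ped, X_bg])
y = np.hstack([np.ones(200), np.zeros(200)])

# AdaBoost builds a strong classifier from many weighted weak classifiers
# (decision stumps by default), as described in the text.
clf = AdaBoostClassifier(n_estimators=50)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```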

Another class of techniques applied in lidar–camera sensor fusion is occlusion reasoning. For partially occluded objects, it is difficult to accurately extract the features of a target. Typical occlusion reasoning methods are edge-contour-based reasoning and frame comparison reasoning. Edge-contour-based reasoning infers an occluded object from its edges and contours.8–10 Frame comparison reasoning uses a sequence of images input continuously over time. This method determines the occlusion by analyzing the continuous image data; the occluded object is inferred by comparing the current and previous frames. It is very effective when a short, momentary occlusion happens. However, it is difficult to handle a prolonged occlusion because the method is limited by buffer size.11,12,14 These fusion methods still have a reliability problem due to sensitivity to light change. When a sensor is highly sensitive to light change, an image of an object can be created even in low-light conditions, but the image loses contrast and appears blurry because of increased noise. In a real driving environment, this tends to cause sensors to miss obstacles. In addition, it is difficult for lidar–camera sensor fusion to simultaneously detect objects and estimate their speeds in a single frame. Therefore, to overcome these limitations, lidar–radar sensor fusion has been proposed to detect occluded objects and their movement.

2.2.

Lidar–Radar Sensor Fusion

Lidar–radar sensor fusion is more robust to environmental change than camera-based fusion because it uses laser and radio frequency signals.4,5,15–19 To the best of our knowledge, conventional lidar–radar fusion has mainly focused on detecting moving vehicles, and pedestrian detection has not yet been investigated. In previous methods, a vehicle is estimated by obtaining its width, length, and shape from lidar and then acquiring its velocity from radar.4

A typical occlusion detection method using lidar–radar sensor fusion is the model-based method.20 A vehicle measured by lidar shows an L-shape formed by its front and side. However, for vehicles outside the line of sight (LoS), i.e., partially obscured by other obstacles, the lidar measurement data do not form a perfect L-shape. The imperfect shape is restored to an L-shape by means of the Ramer algorithm.20,21 However, such methods have limitations in detecting a pedestrian because a human has an arbitrary shape.
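The Ramer (Douglas–Peucker) polyline simplification underlying this model-based compensation can be sketched as follows; this is a generic textbook version with an illustrative tolerance, not the implementation of Refs. 20 and 21.

```python
import numpy as np

def ramer_douglas_peucker(points, eps):
    """Ramer (Douglas-Peucker) simplification: keep the point farthest from
    the start-end chord if its distance exceeds eps, then recurse on both halves."""
    pts = np.asarray(points, dtype=float)
    start, end = pts[0], pts[-1]
    chord = end - start
    norm = np.linalg.norm(chord)
    if norm == 0:
        dists = np.linalg.norm(pts - start, axis=1)
    else:
        # Perpendicular distance of each point to the start-end chord.
        dists = np.abs(chord[0] * (pts[:, 1] - start[1])
                       - chord[1] * (pts[:, 0] - start[0])) / norm
    idx = int(np.argmax(dists))
    if dists[idx] > eps:
        left = ramer_douglas_peucker(pts[: idx + 1], eps)
        right = ramer_douglas_peucker(pts[idx:], eps)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])

# An L-shaped lidar trace reduces to its three corner points.
trace = [[0, 0], [0, 0.5], [0, 1.0], [0.5, 1.0], [1.0, 1.0]]
print(ramer_douglas_peucker(trace, eps=0.05))
```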

In this paper, we propose a new method for partially occluded pedestrian detection using lidar–radar sensor fusion. In the proposed method, we introduce the new concepts of occluded depth and occlusion RoI for occluded pedestrian detection. The procedure of the proposed detection scheme is explained in the next section.

3.

New Detection Scheme for Partially Occluded Pedestrian

Our proposed method consists of object detection, sensor fusion, and pedestrian detection. Before object detection, calibration is performed to match the coordinate systems of the lidar and radar and to compensate for the difference in their installation positions.

Figure 1 shows the schematic procedure of our proposed pedestrian detection. In the object detection step, the radar measures the parameter information of objects, such as range, velocity, and angle, through R&V calculation. In the case of lidar, the many scattering data are separated into several groups by clustering before features such as the width, length, height, position, and curve are extracted. In the sensor fusion step, lidar and radar RoIs are selected from the outputs of the sensors. The radar RoI is made using the detected range, velocity, and azimuthal angle, which are the radar measurement outputs for moving objects; the lidar RoIs are created using object information obtained from lidar, such as the width, length, height, position, and curve features. The occluded depth is a new means to discover an obscured area hidden by obstacles. An object within the occluded depth is detected using the radar measurement information, and the occluded object is estimated as a pedestrian by means of the radar human Doppler distribution. In addition, to reduce the amount of data to be computed, the proposed method constructs a precise fusion RoI by superimposing the RoIs of the two sensors. The occlusion RoI is the area where the occluded depth and the radar RoI overlap, meaning that a hidden object may exist within this zone. Finally, a pedestrian is detected utilizing the human Doppler pattern from radar and/or the human fitting curve from lidar.

Fig. 1

Our proposed scheme for human detection using lidar and radar sensor fusion.

OE_56_11_113112_f001.png

3.1.

Object Detection

The radar measures the range and velocity information of an object.15 The lidar measures the distance, angle, and height of an object, and its measurement data are represented as many scattering points. Through the clustering process, the scattering points are combined into several groups whose number is strongly dependent on the number of objects. In general, there are many methods for clustering lidar scattering data, such as distance-based clustering, standard deviation clustering, and K-means. Distance-based clustering is a popular method that groups scattered points into the same cluster if the distance between them is within a threshold value; the threshold is applied to the vector norm of the point-to-point distance. Standard deviation clustering computes the standard deviation of the object points obtained from lidar and generates a cluster if a threshold is not exceeded. This method is suitable for clustering objects in similar locations but, in contrast to distance-based clustering, it is not suitable for separating adjacent objects of different types. K-means is a well-known clustering technique, but its clustering performance is strongly dependent on the initial values. In this paper, we used a simple distance-based clustering method.
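As a minimal sketch of distance-based clustering, assuming 2-D scan points ordered by scan angle; the 0.3-m threshold is illustrative only, not the value used in the experiments.

```python
import numpy as np

def distance_cluster(points, threshold=0.3):
    """Group consecutive lidar scan points whose Euclidean distance
    (vector norm) to the previous point is below a threshold [m]."""
    clusters, current = [], [points[0]]
    for p in points[1:]:
        if np.linalg.norm(p - current[-1]) < threshold:
            current.append(p)
        else:
            clusters.append(np.array(current))
            current = [p]
    clusters.append(np.array(current))
    return clusters

# Example: two well-separated groups of scan points in the x-y plane.
scan = np.array([[0.0, 2.0], [0.05, 2.02], [0.1, 2.05],
                 [1.5, 3.0], [1.55, 3.02]])
print([len(c) for c in distance_cluster(scan)])  # -> [3, 2]
```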

Figure 2 shows the human lidar features for various human postures. As can be seen in Fig. 2, the lidar data have unique features forming a streamlined shape. To characterize the human lidar features, the lidar data are approximately fitted with quadratic and higher-order polynomial functions. The human slope features obtained with quadratic polynomial curve fitting are uniquely distinguished from those of other objects. In this experiment, we use a low-resolution 2-D lidar (RPLIDAR produced by Robopeak). This human characteristic is clearly represented by a quadratic curve, even though the low-resolution lidar has fewer data points than a high-resolution lidar. The quadratic curve approximated from the lidar data is called the human fitting curve. We extend this technique to the high-resolution lidar to extract the human slope features.

Fig. 2

The fitting curves versus lidar measurement data according to several human postures.

OE_56_11_113112_f002.png
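A minimal sketch of the quadratic fitting step described above, assuming the cluster points are expressed as (x, y) coordinates in the scan plane; taking the slope feature as the maximum absolute slope of the fitted curve over the cluster is our reading of Fig. 2, and the sample points are synthetic.

```python
import numpy as np

def human_fitting_curve(cluster_xy):
    """Fit a quadratic polynomial y = a*x^2 + b*x + c to a lidar cluster and
    return the coefficients and the maximum absolute slope |dy/dx| over the
    cluster extent (used here as the 'slope' lidar human feature)."""
    x, y = cluster_xy[:, 0], cluster_xy[:, 1]
    coeffs = np.polyfit(x, y, deg=2)               # [a, b, c]
    slope = np.polyder(np.poly1d(coeffs))          # derivative 2*a*x + b
    max_slope = np.max(np.abs(slope(np.linspace(x.min(), x.max(), 50))))
    return coeffs, max_slope

# Example: a slightly curved ("streamlined") point cluster.
pts = np.array([[-0.2, 2.00], [-0.1, 2.06], [0.0, 2.08],
                [0.1, 2.06], [0.2, 2.00]])
coeffs, max_slope = human_fitting_curve(pts)
print(coeffs, max_slope)
```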

3.2.

Sensor Fusion

This subsection describes the various processes of RoI generation and the sensor fusion scheme. The radar RoI is determined by the range and azimuthal angle of the radar output. In addition, the radar can measure a walking human even in a partially occluded environment. During walking, the human Doppler distribution has unique repetitive Doppler and micro-Doppler patterns because one leg is fixed while the other takes a step forward.22 Owing to this radar Doppler pattern, a human is distinguishable from other obstacles; this human radar Doppler pattern is called the radar human feature. The lidar RoI is set based on the measured width, length, height, and slope of the fitting curve. Referring to Ref. 1, the thresholds for human width and length are set to 1.2 m, and the height thresholds range from 0.8 to 2 m. In addition, the slope threshold of the human fitting curve applied in this paper is set to 10. The slope of the fitting curve is taken as its maximum slope, as shown in Fig. 2. The width, length, height, and slope used to generate the RoI are defined as the lidar human features.
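A sketch of the lidar human-feature gate with the thresholds quoted above (width and length up to 1.2 m, height between 0.8 and 2 m, fitting-curve slope up to 10); the per-cluster inputs are assumed to have been computed beforehand, and the function name is a placeholder.

```python
# Hedged sketch of the lidar human-feature gate described in the text.
# A cluster is kept as a lidar RoI candidate only if its width, length,
# height, and fitting-curve slope satisfy the quoted thresholds.
def is_lidar_human_candidate(width, length, height, max_slope,
                             wl_max=1.2, h_min=0.8, h_max=2.0, slope_max=10.0):
    return (width <= wl_max and length <= wl_max
            and h_min <= height <= h_max
            and max_slope <= slope_max)

print(is_lidar_human_candidate(0.5, 0.4, 1.7, 3.2))  # plausible pedestrian -> True
print(is_lidar_human_candidate(1.8, 4.5, 1.5, 1.0))  # car-sized cluster -> False
```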

Generally, because lidar detects only LoS objects, lidar-based detection algorithms have difficulty in determining whether objects are partially occluded by obstacles. To capture the possibility of objects being hidden behind other obstacles, we define the new concept of occluded depth. The occluded depth is the area behind an object measured by the lidar; this area is not observed by the lidar, but an object may exist within it. The occluded depth spans from the outermost points of the object measured by lidar to the maximum detection distance of the lidar.

Figure 3 shows the creation of the radar RoI and the lidar RoI, how the occluded depth is obtained from the lidar measurement, how the occlusion RoI is generated from the occluded depth and the radar RoI, and how the fusion RoI is created. We assume that the relative position difference between the radar and the lidar has been compensated for by the calibration processing. Figure 3(a) shows the creation of the radar RoI. As shown in Fig. 3(a), the radar RoI is an area, detected by radar, surrounding a target of interest. In other words, the radar RoI is the rectangular area M bounded by the four edge points m1 = (−r1 sin(θr/2), r1 cos(θr/2)), m2 = (r1 sin(θr/2), r1 cos(θr/2)), m3 = (−r1 sin(θr/2), r2 cos(θr/2)), and m4 = (r1 sin(θr/2), r2 cos(θr/2)), where θr is the radar angular resolution, r1 = dr − Δr/2, and r2 = dr + Δr/2. Here, dr is the distance between the radar and the target of interest, and Δr is the radar range resolution, equivalent to c/2B, in which c is the velocity of light and B is the bandwidth. The radar RoI is therefore strongly dependent on the radar range resolution and angular resolution. Figure 3(b) shows the generation of the lidar RoI. Like the radar RoI, the lidar RoI is the rectangular area G bounded by four edge points surrounding the target of interest, that is, g1 = (l1 sin(θl/2) + Δw/2, l1 cos(θl/2)), g2 = (−l1 sin(θl/2) − Δw/2, l1 cos(θl/2)), g3 = (l2 sin(θl/2) + Δw/2, l2 cos(θl/2)), and g4 = (−l2 sin(θl/2) − Δw/2, l2 cos(θl/2)), where θl is the horizontal angle of the target of interest measured by the lidar, dl is the distance between the lidar and the surface of the target, l1 = dl − Δl/2, and l2 = dl + Δl/2. Δw and Δl are predetermined width and length boundaries, respectively.1 Figure 3(c) shows how the occluded depth is generated from the lidar measurement. As shown in Fig. 3(c), the occluded depth is the rectangular area P consisting of the four edge points p1 = (−dl sin(θl/2), dl cos(θl/2)), p2 = (dl sin(θl/2), dl cos(θl/2)), p3 = (−dmax sin(θl/2), dmax cos(θl/2)), and p4 = (dmax sin(θl/2), dmax cos(θl/2)), where dmax is the lidar maximum detection distance. An occluded depth is generated behind every object detected by the lidar. Figure 3(d) shows how the occlusion RoI is created from the occluded depth and the radar RoI. We can identify the possibility of the existence of an occluded object by utilizing the occluded depth and the radar RoI, because the radar measures objects within the occluded depth whereas the lidar cannot. Thus, the occlusion RoI is a new region generated by overlapping the occluded depth and the radar RoI, which is necessary to find any occluded objects. As shown in Fig. 3(d), the occlusion RoI is the overlapping region O consisting of the four edge points o1 = (−r3 sin(θl/2), r3 cos(θl/2)), o2 = (r3 sin(θl/2), r3 cos(θl/2)), o3 = (−r4 sin(θl/2), r4 cos(θl/2)), and o4 = (r4 sin(θl/2), r4 cos(θl/2)), where θr > θl, i.e., the radar angular resolution θr is greater than the lidar-measured horizontal angle θl. Note that the equations for the four edge points of the occlusion RoI are the same as those for the radar RoI, except that θl is used instead of θr. Figure 3(e) shows the creation of the fusion RoI, the overlapping region produced by combining the radar RoI and the lidar RoI. As shown in Fig. 3(e), the fusion RoI is the overlapping region bounded by the four edge points F1 to F4. In other words, the fusion RoI F is simply given by F = M ∩ G, where M and G are the radar and lidar RoIs, respectively.

Fig. 3

Procedure of RoI generation: (a) radar RoI, (b) lidar RoI, (c) occluded depth, (d) occlusion RoI, and (e) fusion RoI.

OE_56_11_113112_f003.png
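The RoI geometry above can be summarized numerically as a minimal sketch, using the reconstructed corner-point expressions in a local frame whose y-axis points along the target's line of sight; the axis-aligned bounding-box intersection used here to illustrate the overlap of two regions is a simplification, and all parameter values are examples only.

```python
import numpy as np

def radar_roi(d_r, delta_r, theta_r):
    """Corner points m1..m4 of the radar RoI for a target at range d_r [m],
    range resolution delta_r [m], and angular resolution theta_r [rad]."""
    r1, r2 = d_r - delta_r / 2, d_r + delta_r / 2
    s, c = np.sin(theta_r / 2), np.cos(theta_r / 2)
    return np.array([[-r1 * s, r1 * c], [r1 * s, r1 * c],
                     [-r1 * s, r2 * c], [r1 * s, r2 * c]])

def occluded_depth(d_l, theta_l, d_max):
    """Corner points p1..p4 of the occluded depth behind a lidar object at
    range d_l subtending horizontal angle theta_l, out to the lidar max range."""
    s, c = np.sin(theta_l / 2), np.cos(theta_l / 2)
    return np.array([[-d_l * s, d_l * c], [d_l * s, d_l * c],
                     [-d_max * s, d_max * c], [d_max * s, d_max * c]])

def rect_intersection(a, b):
    """Axis-aligned bounding-box intersection of two corner-point sets
    (a simplification used only to illustrate the fusion/occlusion RoIs)."""
    lo = np.maximum(a.min(axis=0), b.min(axis=0))
    hi = np.minimum(a.max(axis=0), b.max(axis=0))
    return None if np.any(lo >= hi) else (lo, hi)

# Example: radar detects a mover at 8 m behind a 6-m lidar object.
R = radar_roi(d_r=8.0, delta_r=0.6, theta_r=np.radians(10))
P = occluded_depth(d_l=6.0, theta_l=np.radians(8), d_max=100.0)
print(rect_intersection(R, P))   # non-empty -> an occlusion RoI exists
```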

3.3.

Pedestrian Detection

Occluded object detection is the processing step that identifies any object located in the occluded depth. Occluded objects are detected by utilizing the radar measurement results, such as the measured range, velocity, and Doppler pattern. In this paper, a hidden object within the occlusion RoI is identified as a pedestrian by using the radar human feature. In the pedestrian detection step, for an occluded person, only the radar human feature is used. However, when an object lies outside an occlusion RoI, a person is detected by using both the radar human feature and the lidar human features, i.e., the width, length, height, slope of the fitting curve, and Doppler.
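In outline, the decision logic reads as below, assuming the radar human (Doppler) test and the lidar human-feature test have already been evaluated per RoI; the function and argument names are placeholders, not the authors' API.

```python
# Outline of the pedestrian decision step, per our reading of the text.
# Inside an occlusion RoI only the radar human (Doppler) feature is
# available; inside a fusion RoI the radar and lidar human features
# are used together. Inputs are precomputed booleans.
def classify_roi(in_occlusion_roi, radar_human_feature, lidar_human_feature=False):
    if in_occlusion_roi:
        # Occluded object: lidar cannot see it, so rely on the radar Doppler pattern.
        return "pedestrian" if radar_human_feature else "unknown object"
    # LoS object inside a fusion RoI: both sensors' features must agree.
    return "pedestrian" if (radar_human_feature and lidar_human_feature) else "non-pedestrian"

print(classify_roi(True, radar_human_feature=True))                              # occluded pedestrian
print(classify_roi(False, radar_human_feature=True, lidar_human_feature=True))   # LoS pedestrian
```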

Figure 4 shows the step-by-step procedure of lidar and radar sensor fusion for pedestrian detection based on the occluded depth. Figure 4(a) is a snapshot picture showing a situation in which a pedestrian is obscured by another pedestrian. Figure 4(b) shows the lidar measurement data; the lidar measures only the front one of the two pedestrians. Figures 4(c) and 4(d) show the radar RoI and the lidar RoI, respectively. Figures 4(e) and 4(f) show the occluded depth, and the fusion and occlusion RoIs, respectively. As mentioned above, the fusion RoI is generated by superimposing the lidar RoI and the radar RoI, and the occlusion RoI is built by overlapping the occluded depth and the radar RoI. The fusion RoI and the occlusion RoI are the zones specified for non-occluded and occluded object detection, respectively. Figure 4(g) is the final pedestrian detection result, showing the occluded pedestrian represented by the occlusion RoI as well as the LoS pedestrian in the fusion RoI. The occluded pedestrian is detected from the radar Doppler pattern within the occlusion RoI, and the LoS pedestrian is detected by using the radar human feature and the lidar human features within the fusion RoI.

Fig. 4

Procedure of lidar and radar sensor fusion scheme based on occluded depth: (a) a snapshot picture of an occlusion situation, (b) the lidar measurement data, (c) the radar RoI, (d) the lidar RoI, (e) the generated occluded depth, (f) the fusion RoI and occlusion RoI, and (g) the pedestrian detection results.

OE_56_11_113112_f004.png

4.

Experiment

To verify the performance of our proposed algorithm, various experiments were performed in both indoor and real road environments. Both open and occluded situations are included in the experiments.

4.1.

Experimental Setup

We used a Velodyne VLP-16 lidar and a 24-GHz frequency-modulated continuous-wave (FMCW) radar23,24 in the experiments. Table 1 shows their key specifications. The radar was attached at the bumper position of a vehicle, and the lidar was installed at a height of about 2 m from the ground, a configuration similar to that of an autonomous vehicle. Experiments were performed with obstacle distances of about 6, 8, and 10 m. Multiple-target experiments were conducted in which two moving people approach and then separate from each other. The logging board and software developed in Ref. 25 were used to collect the radar measurement data.

Table 1

The specifications of the lidar and radar used in the experiments.

Specification | Lidar | Radar
Type | 903-nm laser | FMCW
Number of channels | 16 | 1
Maximum range (m) | 100 | 15 (human)
Field of view (horizontal) | 360 deg | 30 deg
Field of view (vertical) | 30 deg (+15 deg to −15 deg) | —
Center frequency (GHz) | — | 24
Radial velocity (km/h) | — | +200 to −200
Bandwidth (MHz) | — | 250
Sampling rate (MHz) | — | 0.35

The radar used fast-ramp-based FMCW modulation.15 The signal reflected from an object was sampled by an analog-to-digital converter. The sampled signal of each ramp was transformed into a range-frequency spectrum through a range-FFT. Generally, the signal reflected from a pedestrian can be masked by strong surrounding clutter because a pedestrian has relatively low reflectivity. Thus, moving target indication was applied to remove clutter components and extract only moving objects. A range-Doppler map was then built through Doppler-FFT processing. The object was finally extracted by finding peaks with an adaptive threshold based on the cell-averaging constant false-alarm rate (CA-CFAR) in the 2-D range-Doppler map. The obtained distance and velocity constitute the object information for the radar RoI, and the Doppler pattern of a pedestrian was obtained from the Doppler-FFT processing.
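A compact numerical sketch of this chain is given below: range-FFT per ramp, a simple two-pulse canceller standing in for the moving-target-indication filter, Doppler-FFT, and peak picking, with the CA-CFAR detector replaced by a plain maximum for brevity. The ramp/sample counts, target parameters, and clutter amplitude are illustrative only, not the values of the 24-GHz radar used here.

```python
import numpy as np

# Illustrative fast-ramp FMCW processing chain (not the authors' code).
# A synthetic beat-signal cube of shape (ramps, samples) stands in for
# the ADC output; a single moving point target plus static clutter.
n_ramps, n_samples = 64, 128
fb, fd = 20.0 / n_samples, 5.0 / n_ramps     # normalized beat and Doppler frequencies
t = np.arange(n_samples)
ramps = np.arange(n_ramps)[:, None]
cube = np.exp(2j * np.pi * (fb * t[None, :] + fd * ramps))   # moving target echo
cube += 0.8 * np.exp(2j * np.pi * (10.0 / n_samples) * t)    # static clutter

# 1) Range-FFT along each ramp -> range-frequency spectrum.
range_fft = np.fft.fft(cube, axis=1)

# 2) Moving target indication: a two-pulse canceller across ramps removes
#    returns whose phase does not change ramp-to-ramp (static clutter).
mti = range_fft[1:, :] - range_fft[:-1, :]

# 3) Doppler-FFT across ramps -> 2-D range-Doppler map.
rd_map = np.fft.fftshift(np.fft.fft(mti, axis=0), axes=0)

# 4) Peak detection (CA-CFAR in the paper; a simple argmax here).
dop_bin, rng_bin = np.unravel_index(np.argmax(np.abs(rd_map)), rd_map.shape)
print("detected range bin:", rng_bin, "Doppler bin:", dop_bin - rd_map.shape[0] // 2)
```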

4.2.

Experimental Results

Many experiments were conducted in various indoor and outdoor situations. In the experiments, the targets are both unoccluded and partially occluded pedestrians. Figure 5 summarizes the indoor experimental scenarios and some typical examples of the final detection results. In indoor scenario (1), the pedestrian moves without any occlusion. In scenario (2), pedestrians are partially hidden by obstacles. In scenario (3), pedestrians are temporarily hidden as they move out of the LoS region. In the experimental results, the lidar RoI is denoted by L_RoI, the radar RoI by R_RoI, the occluded depth by Occ_D, the fusion RoI by F_RoI, and the occlusion RoI by O_RoI. The symbol #P means that a person has been detected.

Fig. 5

Indoor experimental environment.

OE_56_11_113112_f005.png

Figure 6 shows an example of the pedestrian detection results in an indoor open environment. In an open environment, as shown in Fig. 6, a pedestrian was clearly measured by both the lidar and the radar simultaneously. Figure 6(a) is an actual picture of the experiment, and Fig. 6(b) shows the lidar measurement data. The horizontal boundary of the radar RoI was determined by the azimuthal coverage of the radar, and the vertical boundary was set to the predetermined threshold used to identify a pedestrian in Ref. 1. In Fig. 6(c), the radar RoI and the lidar raw-data cluster are represented by the dash-dot rectangle and black points, respectively. Since only this cluster lies inside the radar RoI, the object in this cluster is the moving one. The lidar RoI, indicated by the dotted rectangle, was made by clustering the measured data and applying the human width, length, height, and slope thresholds. To obtain the precise position of the object, the fusion RoI was extracted by overlapping the radar RoI and the lidar RoI, as shown in Fig. 6(e); the fusion RoI is represented by the solid-line rectangle in Fig. 6(f). The occluded depth is the area filled between the two lines in Fig. 6(g). Figure 6(h) shows the final detection result in the situation without any barrier. To confirm a pedestrian within the fusion RoI, the radar-measured Doppler pattern was used. Figures 6(i) and 6(j) show the radar-measured range and Doppler spectra in the frequency domain, respectively. Figure 6(j) is the velocity distribution of the pedestrian measured by radar; the measured instantaneous velocity has a broad distribution around 5 km/h due to the pedestrian's movement. Figures 6(k) and 6(l) show the radar-measured range and Doppler variations over time for a moving pedestrian, respectively. The pedestrian moved back and forth repeatedly in front of the sensors. As depicted in Fig. 6(l), the radar Doppler distribution is caused by the leg and arm sway during human movement. Generally, a moving pedestrian produces repetitive Doppler and micro-Doppler patterns because, as the pedestrian walks, one leg is fixed while the other swings.

Fig. 6

The experimental results in an indoor scenario: (a) experimental environment, (b) the lidar measurement data, (c) the radar RoI, (d) the lidar RoI, (e) the RoI overlay for extraction of the fusion RoI, (f) the Fusion RoI, (g) the occluded depth, (h) the final pedestrian detection result, (i) the radar range spectrum, (j) the radar velocity spectrum, (k) the radar range variations of a pedestrian with time, and (l) the radar Doppler variations of a pedestrian with time.

OE_56_11_113112_f006.png

Figure 7 summarizes the outdoor experimental situations and some typical examples of the final detection results. In the experiments, both single-target and multitarget cases were taken into consideration. Various occlusion scenarios were also included, covering momentary and partial occlusion in both temporal and spatial events. Outdoor scenarios (1), (2), and (3) have the same configuration as the indoor scenarios. Outdoor scenarios (4) and (5) consider multiple targets and partial occlusion situations.

Fig. 7

Outdoor experiment situations and some final detection results.

OE_56_11_113112_f007.png

Figure 8 shows the pedestrian detection result in outdoor scenario (2) with front occlusion. Figure 8(a) is a snapshot picture showing an occluded pedestrian situation outdoors; here, a pedestrian is partially occluded.

Fig. 8

The experimental detection results for the front occlusion scenario: (a) a snapshot picture showing a partially occluded pedestrian in an outdoor experimental environment, (b) the LIDAR measurement data, (c) the occluded depth and the RADAR RoI, (d) the partially occluded pedestrian detection result, (e) the measured RADAR range spectrum, (f) the measured RADAR Doppler spectrum, (g) the RADAR range variations of a pedestrian with time, and (h) the RADAR Doppler variations of a pedestrian with time.

OE_56_11_113112_f008.png

Figure 8(b) shows the measured lidar data, in which the pedestrian does not appear because it is occluded by the front obstacle. Figure 8(c) shows the radar RoI and the occluded depth; the occluded pedestrian can be detected by overlapping the radar RoI and the occluded depth. Figure 8(d) is the occluded pedestrian detection result using the radar Doppler distribution. Figure 8(e) is the radar-measured range spectrum corresponding to the obstacles, trees, and a pedestrian. Figure 8(f) is the radar-measured Doppler spectrum; a broad peak is observed at a velocity of about 5 km/h, and this peak corresponds to the occluded pedestrian. Figures 8(g) and 8(h) show the radar range and Doppler variations over time while the pedestrian moves under partial occlusion, respectively.

Figure 9 shows one of the detection results for an occluded pedestrian among the many outdoor experiments. Figure 9(a) is an actual experimental picture, in which a pedestrian is partially occluded by the front barrier. Figure 9(b) shows the lidar measurement data; although a pedestrian is present behind the obstacle, the pedestrian is not detected by the lidar. Figure 9(c) shows the occluded depth and the radar RoI. In the proposed algorithm, the occluded depth and the radar RoI overlay each other, and the occlusion RoI is generated by superimposing the radar RoI and the occluded depth. Figure 9(d) shows the detection result for the occluded pedestrian obtained by using the occlusion RoI. Figures 9(e) and 9(f) are the radar-measured range and Doppler spectra, respectively. Two peaks corresponding to the obstacles and the pedestrian are clearly separated in the range spectrum, and a broadened peak at a velocity of about 5 km/h is observed in the Doppler spectrum. Figure 9(g) is another picture from the same experiment, and Fig. 9(h) shows the corresponding lidar measurement data; here, a pedestrian is detected close to the adjacent obstacle. Figure 9(i) shows the estimated radar and lidar RoIs, and Fig. 9(j) shows the fusion RoI and the final pedestrian detection result. The pedestrian and the neighboring obstacles are separated by using the radar velocity information within the fusion RoI. Figures 9(k) and 9(l) show the measured range and Doppler spectra of the radar output for these objects, respectively. The pedestrian and the neighboring obstacles are merged in the range spectrum because they are very close to each other. However, the pedestrian is clearly distinguishable from the stationary obstacles in the Doppler spectrum. Figures 9(m) and 9(n) show the radar range and Doppler measurements over time for the pedestrian movement in this experiment, respectively.

Fig. 9

The experimental detection results for a single occluded pedestrian: (a and g) snapshot pictures showing the outdoor experimental environment, (b and h) the lidar measurement raw data, (c) the occluded depth and the radar RoI, (d) the occlusion RoI and occluded pedestrian detection result, (e) the radar measured range spectrum for the pedestrian detection, (f) the radar measured Doppler spectrum for the pedestrian detection, (i) the predicted fusion RoI, (j) the pedestrian detection result within the fusion RoI, (k) the radar measured range spectrum, (l) the radar measured Doppler spectrum, (m) the radar range variations of a pedestrian with time, and (n) the radar Doppler variations of a pedestrian with time.

OE_56_11_113112_f009.png

To verify the detection performance in complex outdoor environments, we also performed multiple-target experiments. Figure 10 shows the multitarget experimental results in which a partial occlusion occurs. The experiment was carried out on a real pavement. Figures 10(a) and 10(h) show actual experimental pictures of situations in which two pedestrians are moving. Figure 10(b) shows the lidar measurement raw data. In this situation, no occlusion occurred, and both pedestrians and the surrounding objects were measured in the raw data. In Fig. 10(c), R_RoI stands for the radar RoI. Because the two pedestrians moved very close together, they were located inside one unified radar RoI, marked by a dash-dot square; the lidar RoIs are shown as two dotted boxes. Figure 10(d) shows the fusion RoIs generated from the respective detection results of the two pedestrians. Figure 10(e) shows the range profile of the radar output for the two pedestrians; many clutter returns are present besides the two pedestrians.

Fig. 10

The experimental detection results in real complex environments: (a and h) a snapshot picture, (b and i) the lidar measurement raw data, (c) the predicted fusion RoI, (d) the fusion RoIs and pedestrian detection results, (e) the measured radar range profile, (f) the measured radar Doppler profile for pedestrian #1, (g) the measured radar Doppler profile for pedestrian #2, (j) the occluded depth and the radar RoI, (k) the partially occluded pedestrian detection result, (l) the measured radar range profile, (m) the measured radar Doppler profile, (n) the radar range variations of two pedestrians with time, and (o) the radar Doppler variations of two pedestrians with time.

OE_56_11_113112_f010.png

Figures 10(f) and 10(g) show the Doppler profiles of the radar outputs for pedestrian #1 and pedestrian #2, respectively. According to these radar Doppler profiles, the pedestrians are distinguished from clutter. Figure 10(h) is a snapshot picture in which one person is partially obscured by the other in the same scenario. Figure 10(i) shows the lidar-measured data; the unoccluded pedestrian is clearly represented by many scattering points, whereas the partially occluded pedestrian has only sparse points. Figure 10(j) shows the radar RoI and the occluded depth, marked by the dash-dot square and the gray line, respectively. In addition, the lidar RoI and the occlusion RoI are indicated by the dotted box and the blue-line box, respectively. Figure 10(k) is the final pedestrian detection result. Here, the radar human Doppler pattern and the human fitting curve were utilized to identify a pedestrian. Figure 10(l) shows the radar range profile for the two pedestrians in close proximity. As shown in Fig. 10(l), when two pedestrians are very close, they cannot be separated using the radar range profile alone. However, our proposed algorithm distinguishes the two pedestrians by the Doppler difference in the radar Doppler profile. Figures 10(n) and 10(o) show the radar range and Doppler variations over time for the two pedestrians, respectively.

The various indoor and outdoor experiments show that our proposed scheme is very effective in detecting a partially occluded pedestrian. Table 2 summarizes the pedestrian detection rates analyzed from the experiments. In previous studies, lidar–radar sensor fusion methods for detecting occluded pedestrians have not been investigated. Therefore, to verify the effectiveness of the proposed sensor fusion method at detecting partially occluded pedestrians, we compare the detection rates with and without the proposed method. Without our proposed method, the indoor and outdoor detection rates are about 45% and 53%, respectively, because occluded pedestrians are not detected at all. However, with our proposed method, detection rates of about 89% or more are obtained both indoors and outdoors because occluded pedestrians are detected by using the occluded depth and the radar human feature. According to the experimental results, our proposed sensor fusion scheme substantially improves the detection of a partially occluded pedestrian in both temporal and spatial occlusion events. Even in real road environments surrounded by various obstacles, such as trees and light poles, our proposed method achieves a much higher detection rate of about 89%.

Table 2

Detection results of indoor and outdoor scenarios.

Scenario | Total frames | Detection frames (without proposal) | Detection rate | Detection frames (with proposal) | Detection rate
Indoor experiment | 2194 | 1000 | 45.6% | 2088 | 95.2%
Outdoor experiment | 2140 | 1135 | 53.0% | 1910 | 89.3%

5.

Conclusions

We propose a new scheme to detect a partially occluded pedestrian by using the occluded depth in lidar–radar sensor fusion. Generally, lidar and camera have been used for pedestrian detection. However, a camera is very sensitive to light intensity and environmental changes. In addition, occlusion detection in image processing imposes a heavy computational burden and high processing complexity.

In this paper, we introduce the new concepts of occluded depth and occlusion RoI to determine whether an occluded object exists. When an object is hidden by an obstacle, the lidar has difficulty measuring it, whereas the radar can still measure the occluded object. In particular, a moving object is well detected by using the radar Doppler pattern. Therefore, a partially occluded pedestrian is finally detected by using the radar human Doppler pattern within the occlusion RoI through lidar and radar sensor fusion.

To verify the partial occlusion detection performance of our proposed method, various experiments were performed in both indoor and real road environments. According to the experimental results, our proposed sensor fusion scheme has much better detection performance than the same system without it. This sensor fusion scheme will be very useful in the autonomous vehicle field because a hidden pedestrian can be detected before a collision happens. The occluded pedestrian detection scheme can be applied to prevent accidents in active safety systems of autonomous vehicles, such as autonomous emergency braking.

Disclosures

The authors declare no conflict of interest.

Acknowledgments

This research was supported in part by Global Research Laboratory Program (2013K1A1A2A02078326) through NRF, DGIST Research and Development Program (17-IT-01 and CPS Global Center) funded by the Ministry of Science, ICT & Future Planning, and Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korean government (MSIP) (Grant No. B0101-15-0557, Resilient Cyber-Physical Systems Research).

References

1. 

K. Kidono et al., “Pedestrian recognition using high-definition LIDAR,” in Intelligent Vehicles Symp. IV, 405 –410 (2011). http://dx.doi.org/10.1109/IVS.2011.5940433 Google Scholar

2. 

C. Premebida et al., “A lidar and vision-based approach for pedestrian and vehicle detection and tracking,” in Intelligent Transportation Systems Conf. (ITSC), 1044 –1049 (2007). http://dx.doi.org/10.1109/ITSC.2007.4357637 Google Scholar

3. 

C. Premebida, O. Ludwig and U. Nunes, “LIDAR and vision-based pedestrian detection system,” J. Field Rob., 26 (9), 696 –711 (2009). http://dx.doi.org/10.1002/rob.v26:9 Google Scholar

4. 

D. Gohring et al., “Radar/lidar sensor fusion for car-following on highways,” in Int. Conf. on Automation, Robotics and Applications (ICARA), 407 –412 (2011). http://dx.doi.org/10.1109/ICARA.2011.6144918 Google Scholar

5. 

C. Blanc, L. Trassoudaine and J. Gallice, “EKF and particle filter track-to-track fusion—a quantitative comparison from radar, lidar obstacle tracks,” in Int. Conf. on Information Fusion, (2005). http://dx.doi.org/10.1109/ICIF.2005.1592007 Google Scholar

6. 

K. Dan, D. Gray and H. Tao, “A viewpoint invariant approach for crowd counting,” in Int. Conf. Pattern Recognition (ICPR), 1187 –1190 (2006). http://dx.doi.org/10.1109/ICPR.2006.197 Google Scholar

7. 

H. Cheng, N. Zheng and J. Qin, “Pedestrian detection using sparse Gabor filter and support vector machine,” in Intelligent Vehicles Symp. (IV), 583 –587 (2005). http://dx.doi.org/10.1109/IVS.2005.1505166 Google Scholar

8. 

A. Yilmaz, L. Xin and M. Shah, “Contour-based object tracking with occlusion handling in video acquired using mobile cameras,” IEEE Trans. Pattern Anal. Mach. Intell., 26 1531 –1536 (2004). http://dx.doi.org/10.1109/TPAMI.2004.96 ITPIDJ 0162-8828 Google Scholar

9. 

M. Z. Zia, M. Stark and K. Schindler, “Explicit occlusion modeling for 3D object class representations,” in IEEE Conf. on Computer Vision and Pattern Recognition, 3326 –3333 (2013). http://dx.doi.org/10.1109/CVPR.2013.427 Google Scholar

10. 

N. Payet and S. Todorovic, “From contours to 3D object detection and pose estimation,” in Int. Conf. on Computer Vision, 983 –990 (2011). http://dx.doi.org/10.1109/ICCV.2011.6126342 Google Scholar

11. 

Y. Zhou and H. Tao, “A background layer model for object tracking through occlusion,” in Int. Conf. on Computer Vision, 1079 –1085 (2003). http://dx.doi.org/10.1109/ICCV.2003.1238469 Google Scholar

12. 

S.-P. Lin, Y.-H. Chen and B.-F. Wu, “A real-time multiple-vehicle detection and tracking system with prior occlusion detection and resolution, and prior queue detection and resolution,” in Int. Conf. on Pattern Recognition (ICPR), 828 –831 (2006). http://dx.doi.org/10.1109/ICPR.2006.159 Google Scholar

13. 

C. Premebida, O. Ludwig and U. Nunes, “Exploiting lidar-based features on pedestrian detection in urban scenarios,” in Int. Conf. on Intelligent Transportation Systems Conf. (ITSC), 1 –6 (2009). http://dx.doi.org/10.1109/ITSC.2009.5309697 Google Scholar

14. 

A. Faro, D. Giordano and C. Spampinato, “Adaptive background modeling integrated with luminosity sensors and occlusion processing for reliable vehicle detection,” IEEE Trans. Intell. Transp. Syst., 12 1398 –1412 (2011). http://dx.doi.org/10.1109/TITS.2011.2159266 Google Scholar

15. 

E. Hyun, Y. S. Jin and J. H. Lee, “A pedestrian detection scheme using a coherent phase difference method based on 2D range-Doppler FMCW radar,” Sensors, 16 (1), 124 (2016). http://dx.doi.org/10.3390/s16010124 SNSRES 0746-9462 Google Scholar

16. 

J. Han et al., “Enhanced road boundary and obstacle detection using a downward-looking lidar sensor,” IEEE Trans. Veh. Technol., 61 971 –985 (2012). http://dx.doi.org/10.1109/TVT.2012.2182785 Google Scholar

17. 

J. Holilinger, B. Kutscher and R. Close, “Fusion of lidar and radar for detection of partially obscured objects,” Proc. SPIE, 9468 946806 (2015). http://dx.doi.org/10.1117/12.2177050 PSISDG 0277-786X Google Scholar

18. 

K. Na et al., “Fusion of multiple 2D lidar and radar for object detection and tracking in all directions,” in Int. Conf. on Connected Vehicles and Expo (ICCVE), 1058 –1059 (2014). http://dx.doi.org/10.1109/ICCVE.2014.7297512 Google Scholar

19. 

C. Blanc, L. Trassoudaine and Y. Le Guilloux, “Track to track fusion method applied to road obstacle detection,” in Int. Conf. on Information Fusion, (2004). Google Scholar

20. 

F. Nashashibi and A. Bargeton, “Laser-based vehicles tracking and classification using occlusion reasoning and confidence estimation,” in Intelligent Vehicles Symp. (IV), 847 –852 (2008). http://dx.doi.org/10.1109/IVS.2008.4621244 Google Scholar

21. 

A. Börcs et al., “A model-based approach for fast vehicle detection in continuously streamed urban LIDAR point clouds,” in Asian Conf. on Computer Vision, 413 –425 (2014). Google Scholar

22. 

Y. Kim, S. Ha and J. Kwon, “Human detection using Doppler radar based on physical characteristics of targets,” IEEE Geosci. Remote Sens. Lett., 12 (2), 289 –293 (2015). http://dx.doi.org/10.1109/LGRS.2014.2336231 Google Scholar

23. 

H. Rohling, S. Heuel and H. Ritter, “Pedestrian detection procedure integrated into an 24 GHz automotive radar,” in IEEE Radar Conf., (2010). http://dx.doi.org/10.1109/RADAR.2010.5494432 Google Scholar

24. 

E. Hyun, Y. Jin and J. Lee, “Development of 24 GHz FMCW level measurement radar system,” in IEEE Radar Conf., 796 –799 (2014). http://dx.doi.org/10.1109/RADAR.2014.6875698 Google Scholar

25. 

Y. Jin et al., “Implementation of the real time data logging system for automotive radar development,” in Int. Symp. on Embedded Technology (ISET), (2016). Google Scholar

Biography

Seong Kyung Kwon received his BS degree in electronic engineering from Keimyung University, Korea, in 2015, and his MS degree in information and communication engineering from Daegu Gyeongbuk Institute of Science & Technology (DGIST), Korea, in 2017. Since 2017, he has been a PhD student at DGIST. His research interests are autonomous vehicles, cyber physical systems, and sensor fusion.

Eugin Hyun received his BS, MS, and PhD degrees in electronic engineering from Yeungnam University, Korea, in 1999, 2011, and 2005, respectively. In 2005, he joined DGIST, Daegu, Korea, where he is a principal researcher. From 2007 to 2013, he was also an adjunct professor in the Department of Electronic Engineering, Yeungnam University, Korea. His primary research interests are FMCW/UWB radar signal processing (detection, tracking, and classification), the design and implementation of digital signal processors, radar architecture design, and application-based radar system development (automotive, smart surveillance, motion indication, ICT convergence, and IoT).

Jin-Hee Lee is a senior researcher in the Department of Future Automotive Technology Research Center of Daegu Gyeongbuk Institute of Science & Technology (DGIST), Korea. From 2015 to 2016, she joined CPS Global Center of DGIST. She received her BS degree in computer science from Korea National Open University, and MS and PhD degrees in computer and information engineering from Inha University of Korea. Her research interests include cyber-physical systems, human–computer interaction, and ubiquitous computing.

Jonghun Lee received his BS degree in electronics engineering from Sungkyunkwan University, Korea, in 1996, and his MS and PhD degrees in electrical and electronics and computer science from Sungkyunkwan University, Korea, in 1998 and 2002, respectively. From 2002 to 2005, he was a senior research engineer at Samsung Electronics Company. Since 2005, he has been a principal researcher and adjunct professor at Daegu Gyeongbuk Institute of Science & Technology (DGIST), Korea. He has also been an adjunct professor in the Graduate School of Information and Communication at Yeungnam University, Korea. His primary research interests are detection, tracking, and recognition for radar (FMCW and UWB radar), radar-based sensor fusion, and radar signal processing. He is an IEEE senior member.

Sang Hyuk Son is the president of Daegu Gyeongbuk Institute of Science & Technology (DGIST). He has been a professor in the Computer Science Department at the University of Virginia and a WCU chair professor at Sogang University. He received his BS degree in electronics engineering from Seoul National University, his MS degree from KAIST, and his PhD in computer science from the University of Maryland, College Park. His research interests include cyber physical systems, real-time and embedded systems, and wireless sensor networks. He is an IEEE fellow and a fellow of the Korean Academy of Science and Technology.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Seong Kyung Kwon, Eugin Hyun, Jin-Hee Lee, Jonghun Lee, and Sang Hyuk Son "Detection scheme for a partially occluded pedestrian based on occluded depth in lidar–radar sensor fusion," Optical Engineering 56(11), 113112 (28 November 2017). https://doi.org/10.1117/1.OE.56.11.113112
Received: 1 June 2017; Accepted: 6 November 2017; Published: 28 November 2017
KEYWORDS: Radar, LIDAR, Sensor fusion, Doppler effect, Environmental sensing, Sensors, Cameras
