Open Access
20 September 2021

Visualizing veins from color images under varying illuminations for medical applications
Ru Jia, Chaoying Tang, Biao Wang
Abstract

Significance: Effective vein visualization is critically important for several clinical procedures, such as venous blood sampling and intravenous injection. Existing technologies using infrared devices or ultrasound rely on professional equipment and are not suitable for daily medical care. A regression-based vein visualization method is proposed.

Aim: We visualize veins from conventional RGB images to assist in venipuncture procedures as well as in the clinical diagnosis of venous insufficiencies.

Approach: The RGB images taken by digital cameras are first transformed to spectral reflectance images using Wiener estimation. Multiple regression analysis is then applied to derive the relationship between spectral reflectance and the concentrations of pigments. Monte Carlo simulation is adopted to get prior information. Finally, vein patterns are visualized from the spatial distribution of pigments. To minimize the effect of illumination on skin color, light correction and shading removal operations are performed in advance.

Results: Experimental results from inner forearms of 60 subjects show the effectiveness of the regression-based method. Subjective and objective evaluations demonstrate that the clarity and completeness of vein patterns can be improved by light correction and shading removal.

Conclusions: Vein patterns can be successfully visualized from RGB images without any professional equipment. The proposed method can assist in venipuncture procedures. It also shows promising potential for use in the clinical diagnosis and treatment of venous insufficiencies.

1. Introduction

Venipuncture is one of the most common clinical procedures. In general, it is used for venous blood sampling or intravenous injection, and hands and forearms are the main venipuncture sites. In clinical treatment, when trying to eliminate varicose veins and spider veins, clinicians also look for puncture sites to inject a sclerosant medication.1 At present, the most common way to locate veins is still visual inspection or palpation, which depends significantly on the clinician's experience. For patients with thick subcutaneous fat, narrow veins, dark skin tone, or excessive body hair, the success rate of venipuncture may decrease. Moreover, in some highly contagious disease contexts, such as COVID-19, clinicians must wear medical goggles and surgical gloves, which makes the operation more difficult. Venipuncture failure increases the suffering of patients both physically and psychologically. Therefore, an effective vein visualization device is needed. Existing technologies include infrared/near-infrared imaging,1,2 transillumination imaging,3 multispectral imaging,4,5 and ultrasound imaging.6 The first three mainly use the difference in absorption properties between venous blood and other tissues to visualize veins, whereas ultrasound imaging uses sound waves to detect vein structures and venous blood flow. However, these technologies rely on professional equipment that is costly and unsuitable for daily medical care or telemedicine. In addition, some equipment requires direct skin contact, which rules out patients with fragile skin and is not hygienic from a public health perspective. Therefore, a simple, effective, and contactless vein visualization technology for daily medical treatment is needed.

In this paper, we propose a regression-based method to visualize veins from color skin images taken by conventional digital cameras. No other professional equipment is required. We start with an analysis of light propagation in skin. Skin is a multilayered, inhomogeneous tissue; when light enters it, it is scattered, reflected, or absorbed, and the reflected part is captured by human eyes or a camera to form the color we see. Based on this, we invert the light–tissue interaction and color formation process to obtain skin properties from skin color. In this way, veins can be visualized from color skin images. We evaluate the proposed method on a dataset of 60 subjects and demonstrate that it performs better than state-of-the-art methods both qualitatively and quantitatively. The remainder of this paper is organized as follows. Section 2 briefly reviews related work in vein visualization. Section 3 describes the proposed regression-based method. Section 4 reports the experimental results. Section 5 offers the conclusion.

2. Related Works

Recently, several technologies for visualizing veins from color skin images have been proposed. Tang et al.7 proposed a vein visualization method based on image mapping. They extracted information from pairs of synchronized color and near-infrared (NIR) images and used a neural network to map RGB values to NIR intensities. However, the model is learned entirely from a dataset, so it is only a numerical solution; when the lighting condition changes, it may give unreliable results. Tang et al.8 also proposed a vein visualization method based on optics and skin biophysics. They modeled the skin color formation process using Kubelka–Munk theory and then used a neural network to fit the inverse process. Vein patterns are derived from the distributions of the biophysical parameters given by the inverse model. The method does not rely on synchronized color and NIR images as a training set, but the inverse process is still based on neural network approximation. In addition, all deep learning-based vein visualization methods9,10 suffer from the "black box" problem, which makes it difficult to improve the algorithm theoretically. Watanabe and Tanaka11 visualized veins by emphasizing the saturation of a color image. The algorithm is based purely on image enhancement, so it shows no result for veins that are invisible in the color skin image. Song et al.12 proposed a vein visualization method based on Wiener estimation using smartphone cameras. Reflectance images were reconstructed from conventional RGB images, and the 620-nm reflectance image was chosen to visualize veins. However, the reflectance image at 620 nm is not clear enough to show vein patterns because visible light cannot reach the same penetration depth as near-infrared light, so a postprocessing step was employed to enhance contrast. Moreover, their method requires calibration for each camera device and illumination, which is impractical for widespread use. Sharma and Hefeeda13 also visualized veins from reconstructed spectral images. They used a deep learning method to map RGB images to hyperspectral images in the range of 820 to 920 nm. Their method achieved good results, but training the model needs hyperspectral images, which are expensive to obtain.

3. Methodology

In an RGB image, veins are almost invisible to the naked eye because the pixels have very similar intensity values to those of other skin tissues. However, the biophysical parameters of veins and generic tissue are significantly different, which makes it possible to uncover vein patterns from their spatial distribution. This is the key idea of the regression-based method.

The color of the skin mainly depends on the skin structure and various pigments in the skin.14 Melanin and hemoglobin are the two main pigments. The properties of the environmental illumination and the camera are also key factors in the color formation process. Mathematically, the color formation process can be expressed as follows:8

Eq. (1)

$[R, G, B] = f(E(\lambda), S(\lambda), C_m, C_b),$
where $E(\lambda)$ represents the illuminant and $S(\lambda)$ represents the spectral response functions of the camera. Here, $C_m$ and $C_b$ are the concentrations of melanin and blood, respectively. The color formation process is a well-posed problem, i.e., given specific values of the biophysical properties, illuminant, and camera model, the RGB values of a pixel can be uniquely determined. For example, Zoller and Kienle15 developed software that can generate the image of a blood vessel in skin according to specific input parameters. On the contrary, the inverse process $f^{-1}$ is an ill-posed problem, which is more complicated and can lead to multiple solutions. Therefore, a priori information should be imposed on the model to obtain the most probable solution.

The proposed regression-based method first preprocesses the input images to minimize illumination influence and remove shading effects. Diffuse reflectance spectral images are then reconstructed from the preprocessed images using a human skin reflectance database as a priori information. Finally, multiple regression analysis is applied to the diffuse reflectance spectral images to derive the spatial distribution of melanin and blood based on the Lambert–Beer law. The Monte Carlo (MC) method is adopted in advance to simulate light propagation in skin and obtain the diffuse reflectance for varying skin parameters. The spatial distribution of blood explicitly reflects vein patterns because veins contain a much higher concentration of blood than other skin components, as the pipeline sketch below illustrates.
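To make the flow of the method concrete, the following Python sketch chains the four stages in order. It is a minimal outline, not the authors' implementation; all function names are hypothetical placeholders for the steps detailed in Secs. 3.1–3.3.

```python
def visualize_veins(rgb_image, skin_mask):
    """Outline of the proposed pipeline (hypothetical function names)."""
    corrected = light_correction(rgb_image, skin_mask)       # Sec. 3.1.1
    shading_free = remove_shading(corrected, skin_mask)      # Sec. 3.1.2
    spectra = reconstruct_spectra_wiener(shading_free)       # Sec. 3.2
    C_m, C_b = inverse_regression_model(spectra)             # Sec. 3.3
    return C_b   # vein patterns appear in the blood distribution
```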

3.1. Color Skin Image Preprocessing

3.1.1. Light correction

As shown in Eq. (1), skin color is easily influenced by illumination variations during color formation. However, in the real world, illumination conditions are usually unpredictable and uncontrolled,16 which introduces errors into the later estimation of biophysical parameters in Sec. 3.3.2. Therefore, light correction is critical for widening the practical application of the vein visualization method.

In this section, an adaptive gamma correction method17 is applied to the color skin images. The main aim of the method is to automatically compute the best restoration $\gamma^*$ that maximizes the entropy of the transformed image, i.e., after correction, the image can be assumed to contain the most information. Unlike natural images, which usually have rich color diversity, color images of skin have very similar RGB values, and color is the most important information in the biophysical parameter estimation process. Therefore, to attenuate the color distortion caused by uneven light, we calculate $\gamma^*$ from the gray-scale image and apply it to each channel of the RGB image, instead of performing the correction only on the V channel.

The best restoration γ* can be computed as17

Eq. (2)

$\gamma^* = -\left[ \frac{1}{N} \sum_{m \in \Omega} \ln(u_m) \right]^{-1},$
where $u_m$ is the gray-scale value of pixel $m$ in the input image, $\Omega$ denotes the skin area of the image, and $N$ is the number of valid pixels in $\Omega$.

Then, the gamma correction is performed using γ* from Eq. (2) on each channel as

Eq. (3)

$R', G', B' = R^{\gamma^*}, G^{\gamma^*}, B^{\gamma^*}.$

Figure 1 shows two original images and the images after light correction with their corresponding $\gamma^*$ values. The uneven illumination is mitigated, and the skin color becomes closer to its original state.

Fig. 1

Color skin images before and after light correction. (a) and (c) Two color skin images; (b) and (d) corresponding light correction results with (b) γ*=0.694 and (d) γ*=0.758.

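As an illustration, a minimal NumPy sketch of Eqs. (2) and (3) might look as follows. It assumes the RGB image is normalized to [0, 1] and that a binary skin mask defines $\Omega$; the function name and the clipping constant are our own choices, not part of the original method.

```python
import numpy as np

def light_correction(rgb, skin_mask, eps=1e-6):
    """Adaptive gamma correction: gamma* from the gray-scale image (Eq. (2)),
    then the same gamma applied to each RGB channel (Eq. (3))."""
    gray = rgb.mean(axis=2)                    # gray-scale image u
    u = np.clip(gray[skin_mask], eps, 1.0)     # valid pixels in the skin area
    gamma = -1.0 / np.mean(np.log(u))          # Eq. (2): entropy-maximizing gamma*
    return np.clip(rgb, eps, 1.0) ** gamma     # Eq. (3): per-channel correction
```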

3.1.2. Shading removal

Light correction only improves the holistic lighting condition. However, arm skin is a curved surface: when illuminated by a directional light, the incident angle varies across the skin, which results in a shading effect. This section analyzes the mechanism of the color formation process in detail and then proposes an algorithm to remove shading from skin.

The color of an RGB image is given as

Eq. (4)

$I_i(x, y) = \int_0^{+\infty} S_i(\lambda) E(\lambda) r(x, y, \lambda) w_d(x, y)\, d\lambda,$
where $I_i(x,y)$ $(i = R, G, B)$ represents the skin image intensity at pixel $(x,y)$ after light correction, $S_i(\lambda)$ are the spectral response functions of the camera, $E(\lambda)$ is the illuminant, and $r(x,y,\lambda)$ represents the diffuse reflectance of skin at pixel $(x,y)$ and wavelength $\lambda$. Some papers consider human skin as a specular + diffuse model (e.g., Ref. 18); however, most of the images in our dataset have no highlights on the skin. Therefore, we model skin as a completely diffuse (i.e., Lambertian) surface and consider only diffuse reflectance in our study. $w_d(x,y)$ represents the shading effect caused by the curved surface; it equals the dot product of the surface normal and the lighting direction and is independent of wavelength.

In computer graphics, it is assumed that the spectral response function can be characterized by a Dirac delta function $S_i(\lambda) = S(\lambda_i)\,\delta(\lambda - \lambda_i)$ with $\int_0^{+\infty} S_i(\lambda)\, d\lambda = S(\lambda_i)$.19 Under this assumption, the integral representation Eq. (4) can be rewritten in multiplicative form,

Eq. (5)

$I_i(x, y) = S(\lambda_i) E(\lambda_i) r(x, y, \lambda_i) w_d(x, y),$
where $\lambda_i$ is the wavelength corresponding to the maximum value of the spectral response function. Then, we take the logarithm of Eq. (5) to obtain the additive form. Before taking the logarithm, $I_i(x,y)$ should be scaled to [0, 255] to avoid negative intensity:

Eq. (6)

$\ln I_i(x, y) = \ln S(\lambda_i) + \ln E(\lambda_i) + \ln r(x, y, \lambda_i) + \ln w_d(x, y).$

It can be seen that only the last two terms depend on the position of the image pixel. Of the two, shading $w_d(x,y)$ is usually a low-frequency variable that changes smoothly over the skin area, whereas reflectance $r(x,y,\lambda_i)$ is a high-frequency variable because reflectance depends on the concentrations of pigments, which distribute inhomogeneously in the skin.18 A bilateral filter is a nonlinear filter that can preserve edges and reduce noise in images. In this study, it is applied iteratively so that the high-frequency reflectance is gradually smoothed out while the low-frequency illumination and shading effects remain. The performance of the bilateral filter depends on the spatial standard deviation $\sigma_1$ and the intensity standard deviation $\sigma_2$. Inspired by Ref. 18, in our experiment, we choose $\sigma_1$ and $\sigma_2$ as

Eq. (7)

$\sigma_1 = 0.05 \times \min(R_x, R_y),$

Eq. (8)

$\sigma_2 = 0.05 \times \max(I_{\mathrm{remain}}),$
where $R_x$ and $R_y$ are the width and height of the image, respectively, and $I_{\mathrm{remain}}$ is the input image of the bilateral filter in each iteration. The value 0.05 was chosen for both the spatial and intensity coefficients; experiments on our 60 subjects indicate that this combination achieves the best decomposition results.

After bilateral filtering, the low-frequency component is defined as $\ln(I_{\mathrm{base},i})$ and the high-frequency component as $\ln(I_{\mathrm{detail},i})$, with $\ln(I_{\mathrm{base},i}) + \ln(I_{\mathrm{detail},i}) = \ln(I_i)$. It should be noted that vein pattern information will finally be extracted from the pigment distribution, so it is embodied in the detail image. Therefore, to eliminate the shading effect, we keep the detail layer and add the global mean of the illuminant layer as a base color to obtain the corrected image, i.e.,

Eq. (9)

$\ln(I_{\mathrm{correct},i}(x, y)) = \ln(I_{\mathrm{detail},i}(x, y)) + \frac{1}{N} \sum_{(x, y) \in \Omega} \ln(I_{\mathrm{base},i}(x, y)) \quad (i = R, G, B).$

The process of shading removal is shown in Fig. 2.

Fig. 2

Shading removal process.

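The decomposition can be sketched with OpenCV's bilateral filter, as below. This is an illustrative implementation under our own assumptions: the channel is scaled to [1, 255] so the logarithm stays non-negative, and the number of iterations (here five) is a placeholder, since the text does not state how many passes are used.

```python
import numpy as np
import cv2

def remove_shading(channel, skin_mask, n_iter=5):
    """Separate one light-corrected channel into base (shading) and detail
    (reflectance) layers in the log domain, then apply Eq. (9)."""
    log_img = np.log(np.clip(channel.astype(np.float32), 1, 255))
    sigma_1 = 0.05 * min(channel.shape[:2])            # Eq. (7): spatial std
    base = log_img.copy()
    for _ in range(n_iter):                            # iterative bilateral filtering
        sigma_2 = 0.05 * float(base.max())             # Eq. (8): intensity std
        base = cv2.bilateralFilter(base, -1, sigma_2, sigma_1)
    detail = log_img - base                            # high-frequency reflectance
    corrected = detail + base[skin_mask].mean()        # Eq. (9): mean base color
    return np.exp(corrected)
```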

3.2. Spectral Image Reconstruction

In this section, Wiener estimation is performed to reconstruct the spectral reflectance images from an RGB image. After shading removal, Eq. (4) can be rewritten as

Eq. (10)

$I_i(x, y) = \bar{w}_d \int_0^{+\infty} S_i(\lambda) E(\lambda) r(x, y, \lambda)\, d\lambda \quad (i = R, G, B),$
where $\bar{w}_d$ is a constant coefficient that can be folded into the illuminant term. Equation (10) is then discretized in the wavelength range of 400 to 700 nm at an interval of 10 nm and rewritten in vector notation as

Eq. (11)

$\mathbf{I} = \mathbf{S} \mathbf{E} \mathbf{r},$
where $\mathbf{S}$ is a $3 \times 31$ matrix whose rows are the camera spectral response functions of the three channels, $\mathbf{E}$ is a $31 \times 31$ diagonal matrix representing the spectrum of the illuminant, $\mathbf{r}$ is a $31 \times 1$ vector representing the reflectance spectrum of a pixel in an image, and $\mathbf{I} = [R, G, B]^T$ is the corresponding color of the pixel.

The Wiener estimation of r is given as

Eq. (12)

$\tilde{\mathbf{r}} = \mathbf{W} \mathbf{I}.$

The Wiener estimation matrix $\mathbf{W}$ is calculated by minimizing the squared error $\langle \| \mathbf{r} - \tilde{\mathbf{r}} \|^2 \rangle$, where $\langle \cdot \rangle$ denotes the ensemble average. $\mathbf{W}$ is derived as20

Eq. (13)

$\mathbf{W} = \langle \mathbf{r} \mathbf{I}^T \rangle \langle \mathbf{I} \mathbf{I}^T \rangle^{-1} = \langle \mathbf{r} \mathbf{r}^T \rangle \mathbf{F}^T (\mathbf{F} \langle \mathbf{r} \mathbf{r}^T \rangle \mathbf{F}^T)^{-1},$
where $\mathbf{F} = \mathbf{S}\mathbf{E}$. We assume that $\mathbf{S}$ and $\mathbf{E}$ are known; in this study, the D65 illuminant and a JAI AD-080-GE camera are used. $\langle \mathbf{r} \mathbf{r}^T \rangle$ is the autocorrelation matrix, which must be obtained from prior knowledge. We use a skin reflectance database21 consisting of 4392 spectral reflectance measurements as prior knowledge. The database covers nine body areas of 482 subjects from three ethnic groups (Caucasian, Chinese, and Kurdish), measured with a Minolta CM-2600d spectrophotometer. We extract the reflectance in the range of 400 to 700 nm and average over the 4392 samples to obtain $\langle \mathbf{r} \mathbf{r}^T \rangle$ in Eq. (13).

Finally, Eq. (12) is applied to each pixel of the preprocessed arm skin image to obtain the spectral reflectance images. The result is shown in Fig. 3. The reconstructed diffuse reflectance spectra of two pixels are shown in Fig. 4. Their shapes are consistent with the diffuse reflectance spectra of human skin. The diffuse reflectance of a vein is lower than that of generic skin because blood absorbs more light, especially in the red wavelength range.

Fig. 3

Spectral reflectance images in the range of 400 to 700 nm at an interval of 20 nm.


Fig. 4

Reconstructed spectral reflectance of two pixels. (a) A skin area marked with a skin pixel (red square) and a vein pixel (blue square); (b) their diffuse reflectance spectra reconstructed from Wiener estimation.

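In matrix form, Eqs. (11)–(13) reduce to a few lines of linear algebra. The sketch below assumes the camera responses S (3 × 31), the illuminant spectrum E (31,), and a database of reflectance samples are already loaded; it is a schematic of the estimation, not the authors' code.

```python
import numpy as np

def wiener_matrix(S, E, r_samples):
    """Wiener estimation matrix W of Eq. (13).
    S: (3, 31) camera responses; E: (31,) illuminant spectrum;
    r_samples: (n, 31) reflectance database (prior knowledge)."""
    F = S @ np.diag(E)                                  # F = SE
    Rrr = r_samples.T @ r_samples / len(r_samples)      # autocorrelation <r r^T>
    return Rrr @ F.T @ np.linalg.inv(F @ Rrr @ F.T)     # Eq. (13)

def reconstruct_spectra(W, rgb_pixels):
    """Eq. (12): apply r~ = W I to an (n, 3) array of pixel colors."""
    return rgb_pixels @ W.T                             # (n, 31) reflectance spectra
```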

3.3. Vein Pattern Visualization

In this section, vein patterns are finally visualized from the distribution of blood. As mentioned earlier, estimating biophysical parameters from spectral reflectance images is an ill-posed problem, i.e., an analytical model is difficult to find, and there are many possible solutions. MC simulation is used as the forward model to construct a biophysical-parameters-to-spectral-reflectance dataset as prior information. Then, the relationship between the absorbance spectrum and the pigment concentrations is derived from the dataset based on the Lambert–Beer law.

3.3.1. Forward model for light transport in skin

Before MC simulation, we formulate a general model describing the skin structure and define the optical properties of each layer. In this study, we model skin as a three-layer structure: epidermis, dermis, and hypodermis. The optical properties required in MC simulation are the absorption coefficient $\mu_a(\lambda)$ (cm$^{-1}$), scattering coefficient $\mu_s(\lambda)$ (cm$^{-1}$), anisotropy factor $g(\lambda)$, refractive index $n$, and thickness $d$ (cm).

The absorption coefficients $\mu_a(\lambda)$ of the epidermis and the dermis mainly depend on the concentrations of melanin in the epidermis and hemoglobin in the dermis. We choose the $\mu_a(\lambda)$ published by Donner and Jensen22 for the epidermis and dermis, and the $\mu_a(\lambda)$ published by Atencio et al.23 for the hypodermis. The optical properties in Ref. 22 are for human skin and those in Ref. 23 are for neonatal forehead skin. The values of $\mu_s(\lambda)$ are also chosen from Refs. 22 and 23. The anisotropy factor $g(\lambda)$ is taken from Ref. 24 for the epidermis and dermis and from Ref. 23 for the hypodermis. The refractive index $n$ is set to 1.37 for the epidermis and dermis and 1.44 for the hypodermis.23 The thickness values $d$ are set to 0.006 cm,25 0.09 cm,23 and 0.03 cm,23 respectively. The optical properties in Refs. 24 and 25 are also for human skin.

To cover the color variation of skin as broadly as possible, reasonably wide ranges of $C_m$ and $C_b$ are chosen, i.e., $C_m$ from 1.3% to 43%26 and $C_b$ from 0.1% to 7%. Both ranges are uniformly divided into 50 points, resulting in 2500 $(C_m, C_b)$–$r(\lambda)$ data pairs; a sketch of this grid is given below. The simulated skin spectral reflectance is shown in Fig. 5. In this study, we utilize the GPU-accelerated MC simulation tool CUDAMCML,27 which accelerates the simulations by about three orders of magnitude compared with running sequentially on a CPU. The calculation time for one spectrum from 400 to 700 nm at an interval of 10 nm is ∼11 s using an NVIDIA GeForce GT 710 card.

Fig. 5

2500 spectral reflectance curves obtained from MC simulations.

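The parameter grid that drives the forward simulations is straightforward to construct; a sketch under the ranges quoted above is shown below. The actual reflectance values r(λ) come from CUDAMCML runs, which this snippet does not reproduce.

```python
import numpy as np

# 50 x 50 grid of pigment concentrations (ranges from the text);
# each (C_m, C_b) pair is fed to the MC forward model to produce r(lambda).
Cm_grid = np.linspace(0.013, 0.43, 50)   # melanin volume fraction, 1.3% to 43%
Cb_grid = np.linspace(0.001, 0.07, 50)   # blood volume fraction, 0.1% to 7%
pairs = [(cm, cb) for cm in Cm_grid for cb in Cb_grid]   # 2500 data pairs
```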

3.3.2. Inverse model based on multiple regression analysis

Using the forward model and the dataset established in Sec. 3.3.1, we now derive the inverse model. First, the diffuse reflectance spectrum $r(\lambda)$ is transformed into the absorbance spectrum $A(\lambda)$ by28

Eq. (14)

$A(\lambda) = -\log_{10}(r(\lambda)).$

According to the modified Lambert–Beer law, the absorbance spectrum A(λ) can be expressed as

Eq. (15)

$A(\lambda) = C_m l_{\mathrm{epi}} \varepsilon_m(\lambda) + C_{ob} l_{\mathrm{der}} \varepsilon_{ob}(\lambda) + C_{db} l_{\mathrm{der}} \varepsilon_{db}(\lambda) + D(\lambda),$
where $C_m$, $C_{ob}$, and $C_{db}$ are the molar concentrations of melanin, oxygenated blood, and deoxygenated blood, respectively, and $C_b = C_{ob} + C_{db}$. $l_{\mathrm{epi}}$ and $l_{\mathrm{der}}$ denote the mean path lengths in the epidermis and dermis, respectively. $\varepsilon(\lambda)$ denotes the molar extinction coefficients of the three pigments.29 $D(\lambda)$ accounts for the absorbance of other minor components and for scattering loss.

Second, we regard the absorbance spectrum as the response variable and the extinction coefficients as the predictor variables and transform Eq. (15) into a multiple regression model,28

Eq. (16)

$A(\lambda) = a_m \varepsilon_m(\lambda) + a_{ob} \varepsilon_{ob}(\lambda) + a_{db} \varepsilon_{db}(\lambda) + a_0,$
where $a_m$, $a_{ob}$, and $a_{db}$ are the regression coefficients describing the contributions of each $\varepsilon(\lambda)$ to $A(\lambda)$; they are closely related to $C_m$, $C_{ob}$, and $C_{db}$, respectively.28 However, $a_m$ depends not only on $C_m$ but is also influenced by $C_{ob}$ and $C_{db}$: although the mean path length in the epidermis $l_{\mathrm{epi}}$ is mainly determined by melanin in the epidermis, it is also affected by pigments in the dermis owing to the complexity of light–tissue interaction. The same holds for $a_{ob}$ and $a_{db}$. This indicates that $a_m, a_{ob}, a_{db}, a_0$ and $C_m, C_{ob}, C_{db}$ are interdependent.28 Therefore, another multiple regression model is used to establish the relationship between $C_m, C_b$ and $a_m, a_{tb}, a_0$:

Eq. (17)

$C_m = \mathbf{a} \cdot \mathbf{b}_m,$

Eq. (18)

$C_b = \mathbf{a} \cdot \mathbf{b}_{tb},$

Eq. (19)

$\mathbf{a} = [1, a_m, a_{tb}, a_0, a_m^3, a_{tb}^3, a_0^3, a_m a_{tb} a_0, a_m^2 a_{tb}, a_m^2 a_0, a_{tb}^2 a_m, a_{tb}^2 a_0, a_0^2 a_m, a_0^2 a_{tb}],$
where $a_{tb} = a_{ob} + a_{db}$ and $\mathbf{a}$ is a $1 \times 14$ vector containing $a_m$, $a_{tb}$, $a_0$, and their third-order terms. $\mathbf{b}_m$ and $\mathbf{b}_{tb}$ are $1 \times 14$ vectors that are derived in advance from the MC dataset using Eqs. (16)–(19). It should be noted that the multiple regression analysis is performed only in the 500- to 600-nm range at an interval of 10 nm rather than over 400 to 700 nm, because the spectral features of oxyhemoglobin and deoxyhemoglobin differ more significantly in this range than in the whole visible range and thus lead to a better separation. For the 2500 absorbance spectra derived from the MC simulations, the mean $R^2$ statistic is 0.977 ± 0.010 in 500 to 600 nm, whereas it is only 0.905 ± 0.069 in 400 to 700 nm.

Once $\mathbf{b}_m$ and $\mathbf{b}_{tb}$ are obtained, we can perform the multiple regression analysis of Eq. (16) on each pixel of the spectral reflectance images in 500 to 600 nm at an interval of 20 nm and obtain the vector $\mathbf{a}$ for each pixel. We then use Eqs. (17) and (18) to obtain the spatial distributions of $C_m$ and $C_b$, in which vein patterns can be observed.
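The two regression stages can be sketched with ordinary least squares, as below. The sketch assumes the extinction coefficient curves are sampled at the same wavelengths as the absorbance spectrum; fitting b_m and b_tb on the 2500 MC samples is indicated only in the comments, and the helper names are our own.

```python
import numpy as np

def fit_pixel_coefficients(A, eps_m, eps_ob, eps_db):
    """First regression, Eq. (16): absorbance A(lambda) against the molar
    extinction coefficients sampled in the 500- to 600-nm range."""
    X = np.column_stack([eps_m, eps_ob, eps_db, np.ones_like(A)])
    (a_m, a_ob, a_db, a_0), *_ = np.linalg.lstsq(X, A, rcond=None)
    return a_m, a_ob + a_db, a_0                  # a_tb = a_ob + a_db

def feature_vector(a_m, a_tb, a_0):
    """The 1 x 14 vector a of Eq. (19): linear, cubic, and cross terms."""
    return np.array([1, a_m, a_tb, a_0,
                     a_m**3, a_tb**3, a_0**3,
                     a_m * a_tb * a_0,
                     a_m**2 * a_tb, a_m**2 * a_0,
                     a_tb**2 * a_m, a_tb**2 * a_0,
                     a_0**2 * a_m, a_0**2 * a_tb])

# Second regression, Eqs. (17) and (18): b_m and b_tb are fitted once by least
# squares on the 2500 MC samples, then reused per pixel:
#   C_m = feature_vector(a_m, a_tb, a_0) @ b_m
#   C_b = feature_vector(a_m, a_tb, a_0) @ b_tb
```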

4. Experimental Results

To evaluate the proposed regression-based method, we collected synchronized RGB/NIR images of the inner arms of 60 subjects. The subjects are all Chinese. There were no protocols or eligibility criteria for recruiting subjects; we invited as many subjects as possible to construct our skin image database. A JAI AD-080-GE industrial camera was used to capture the images. The JAI AD-080-GE is a 2-CCD camera providing simultaneous RGB/NIR images: when light enters the lens, a prism separates it into the visible part of the spectrum (400 to 700 nm) and the near-infrared part (750 to 1000 nm). A daylight source and an NIR light source were used to illuminate the skin. During collection, one RGB image and one NIR image were captured simultaneously from the inner arm of each subject. The RGB images are the test images, and the NIR images serve as ground truth for comparison. D65, the most commonly used daylight illuminant, is used in this paper. After collection, the skin area was segmented from the original images based on color. In the remainder of this section, we first validate the regression-based method on our dataset. Then, we evaluate the effect of light correction and shading removal on the regression-based vein visualization method. Three state-of-the-art methods are used for comparison: Watanabe's image enhancement method,11 Song's Wiener estimation method,12 and Tang's optical method.8 Finally, we test the regression-based method on spider vein images.

To objectively evaluate the proposed method, we extracted vein patterns from both the NIR images and the visualized images and compared them pixel by pixel, using the same extraction process as in Refs. 10 and 30. First, a filter bank composed of the real parts of 16 Gabor filters with different scales and orientations was used to locate the veins. Then, the vein information images were enhanced and binarized to obtain the final vein patterns. In the extracted vein patterns, vein pixels are labeled 1 and background pixels 0, as shown in Fig. 6. Using the vein patterns extracted from the NIR images as ground truth, four metrics were calculated to measure the algorithm's performance: accuracy, precision, recall, and F1 score. Mathematically, they are expressed as

Eq. (20)

$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN},$

Eq. (21)

$\mathrm{Precision} = \frac{TP}{TP + FP},$

Eq. (22)

$\mathrm{Recall} = \frac{TP}{TP + FN},$

Eq. (23)

$F1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}.$

Fig. 6

Vein visualization and extraction results.


The confusion matrix in our case is defined as given in Table 1.
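Given binary vein maps from the NIR ground truth and a visualized image, the four metrics follow directly from the confusion matrix. A minimal sketch (our own helper, assuming 0/1 arrays of equal shape):

```python
import numpy as np

def vein_metrics(pred, truth):
    """Accuracy, precision, recall, and F1 score per Eqs. (20)-(23)."""
    tp = np.sum((pred == 1) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```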

4.1. Validation of the Vein Visualization Method

Figure 6 shows four examples of the experimental results. The first row shows four color skin images from our dataset. Note that the skin boundaries are not exactly smooth because some background pixels near the boundary have values similar to the skin pixels and may be misclassified as skin during segmentation. The rough boundary has no impact on the vein visualization results, because the veins gather in the middle part of the arm. The second and third rows show the vein visualization and extraction results for Fig. 6(a): the second row shows the corresponding NIR image and the visualization results of the image enhancement method, the Wiener estimation method, the optical method, and the regression-based method, respectively; the third row shows the corresponding vein patterns extracted from the second row. The remainder of Fig. 6 shows the vein visualization and extraction results for Figs. 6(b)–6(d). The objective evaluation of the four examples and the means over all 60 images are given in Table 2. The image enhancement method is the simplest of the four; it visualizes veins only by emphasizing the saturation of the whole image and then extracting the R channel. The visualized results of this method therefore usually contain less noise, which leads to higher precision, since noise in the visualized images is easily mistaken for small veins in the extracted vein patterns. On the other hand, the vein lines from the image enhancement method are relatively dim, making the extracted vein patterns less complete and leading to lower recall [e.g., in Fig. 6(b), the binarized vein patterns in the second column are less complete than those in the first column]. For hairy skin [e.g., Fig. 6(c)], the image enhancement method performs poorly. The recall of the optical method is the highest in all the examples because the vein patterns obtained from that method are the most distinct; however, the noise is also heavier in these images, resulting in low precision. The Wiener estimation method and the proposed method achieve a better trade-off between precision and recall. For the 60 arm images in our dataset, the Wiener estimation method performs better than the proposed method. Therefore, in the next section, we apply light correction and shading removal to the original skin images to improve the performance.

Table 1

Confusion matrix.

                                                          Vein patterns extracted from visualized images (predicted)
                                                          Vein    Background
Vein patterns extracted from NIR images    Vein           TP      FN
(ground truth)                             Background     FP      TN

Table 2

Objective evaluations of vein visualization methods.

Images       Metrics     Image enhancement   Wiener estimation   Optical   Regression-based
Fig. 6(a)    Accuracy    0.9920              0.9928              0.9916    0.9925
             Precision   0.7859              0.7830              0.7056    0.7478
             Recall      0.7798              0.8423              0.9345    0.8939
             F1 score    0.7828              0.8116              0.8041    0.8143
Fig. 6(b)    Accuracy    0.9874              0.9874              0.9850    0.9876
             Precision   0.7640              0.7170              0.6034    0.6819
             Recall      0.5881              0.6769              0.8670    0.7802
             F1 score    0.6646              0.6963              0.7116    0.7277
Fig. 6(c)    Accuracy    0.9665              0.9707              0.9711    0.9751
             Precision   0.2881              0.3377              0.3554    0.3895
             Recall      0.5947              0.6651              0.7644    0.6951
             F1 score    0.3881              0.4480              0.4852    0.4992
Fig. 6(d)    Accuracy    0.9802              0.9774              0.9723    0.9799
             Precision   0.5594              0.5035              0.4385    0.5460
             Recall      0.6189              0.7264              0.7628    0.7014
             F1 score    0.5877              0.5947              0.5569    0.6140
60 images    Accuracy    0.9796              0.9797              0.9778    0.9765
             Precision   0.4499              0.4645              0.4307    0.4099
             Recall      0.4723              0.5313              0.7102    0.5954
             F1 score    0.4502              0.4835              0.5291    0.4752
Note: The highest mean values are shown in bold.

Since Wiener estimation is used in both the regression-based method and the Wiener estimation method, to further validate the effectiveness of the regression-based method, we compared the results of the two methods on other skin areas. Figure 7(a) shows the skin image of a left upper arm. Figure 7(d) shows the skin image of a thigh, with a highlight above the knee; these areas contain more fat. Figure 7(g) shows the skin image of a front calf, which contains more muscle. Figures 7(b), 7(e), and 7(h) show the results of the Wiener estimation method, and Figs. 7(c), 7(f), and 7(i) show the results of the regression-based method. The results of the Wiener estimation method can hardly show the veins or cannot show them at all, whereas the regression-based method produces better visualization results. Moreover, the regression-based method is less sensitive to light variation than the Wiener estimation method, because the Wiener estimation method uses only reflectance images at specific wavelengths, whereas the regression-based method is based on an accurate optical model. Therefore, the proposed method is robust across different parts of the skin.

Fig. 7

Vein visualization results in other skin areas. (a) The skin image of a left upper arm; (d) the skin image of a thigh; (g) the skin image of a front calf; (b), (e), and (h) the visualization results of the Wiener estimation method; and (c), (f), and (i) the visualization results of the regression-based method.


4.2. Evaluation of the Light Correction and Shading Removal Algorithms

In this section, we evaluate the effect of light correction and shading removal on the performance of the regression-based vein visualization method. First, we compare the vein visualization results before and after light correction. Figure 8 shows some experimental results. The first column of Fig. 8 shows three sets of original skin images and the images after light correction. The second and third columns are the visualized results of the original skin images and the light-corrected images, respectively, with their extracted vein patterns in the row below. The fourth column is the corresponding NIR images and their extracted vein patterns. The objective evaluation of the three examples and the means over all 60 images are shown in Table 3. As Fig. 8 shows, in some skin areas, vein patterns fail to be visualized from the original image due to the poor lighting condition, whereas they are clearer in the corrected images. From the extracted vein patterns, we can see that the noise has been greatly reduced. For example, in Fig. 8(d), the vein patterns are blurred with noise; after light correction, the noise is greatly reduced and the vein patterns become clearer and more complete, as shown in Fig. 8(e). The objective evaluation also confirms that the clarity and completeness of the vein patterns are improved by light correction.

Fig. 8

Vein visualization results before and after light correction. (a) The visualized result of original image 1; (b) the visualized result of light corrected image 1; (c) the corresponding NIR image; (d)–(f) the extracted vein patterns from (a)–(c), respectively. (g)–(l) and (m)–(r) Two sets of results for examples 2 and 3, respectively.


Table 3

Objective evaluation of light correction process.

Images                      Accuracy   Precision   Recall   F1 score
Original 1                  0.9711     0.3329      0.6527   0.4409
Light corrected 1           0.9845     0.5425      0.7093   0.6148
Original 2                  0.9613     0.1673      0.6054   0.2622
Light corrected 2           0.9720     0.2293      0.6208   0.3349
Original 3                  0.9793     0.1896      0.4327   0.2636
Light corrected 3           0.9931     0.6011      0.5753   0.5879
60 original images          0.9765     0.4099      0.5954   0.4752
60 light corrected images   0.9791     0.4498      0.6178   0.5136
Note: The highest mean values are shown in bold.

Second, we compare the vein visualization results before and after shading removal. Figure 9 shows some experimental results. The first column of Fig. 9 shows three sets of original skin images (after light correction) and their shading-removed results. The second and third columns show the visualized results of the original skin images and the shading-removed images, respectively, with their extracted vein patterns in the row below. The fourth column shows the corresponding NIR images and their extracted vein patterns. The objective evaluation of the three examples is shown in Table 4. As Fig. 9 shows, after shading removal, the intensities of generic skin become more uniform, making the veins more distinct from the skin. Moreover, the visualized results in shadowed areas of skin often contain a lot of noise, whereas in the shading-removed images, the noise is reduced and the vein patterns are clearer [e.g., the bottom left part of the skin in Figs. 9(g) and 9(h), and the skin near the lower boundary of the arm in Figs. 9(m) and 9(n)]. The four objective evaluation metrics are also improved by the shading removal process.

Fig. 9

Vein visualization results before and after shading removal. (a) The visualized result of original image 1 (the image is after light correction); (b) the visualized result of shading removed image 1; (c) the corresponding NIR image; (d)–(f) the extracted vein patterns from (a)–(c), respectively. (g)–(l) and (m)–(r) Two sets of results for examples 2 and 3, respectively.


Table 4

Objective evaluation of shading removal process.

Images                      Accuracy   Precision   Recall   F1 score
Original 1                  0.9825     0.5555      0.7833   0.6500
Shading removed 1           0.9855     0.6093      0.8394   0.7061
Original 2                  0.9720     0.2293      0.6208   0.3349
Shading removed 2           0.9788     0.3092      0.7060   0.4301
Original 3                  0.9793     0.4641      0.8041   0.5885
Shading removed 3           0.9831     0.5270      0.8227   0.6424
60 original images          0.9791     0.4498      0.6178   0.5136
60 shading removed images   0.9803     0.4718      0.6532   0.5414
Note: The highest mean values are shown in bold.

To obtain an objective evaluation of the whole algorithm, the mean values and boxplots of the four metrics are shown in Table 5 and Fig. 10. It should be noted that the results compared here are obtained using the whole algorithm (the vein visualization process plus the light correction and shading removal processes). The results show that the regression-based method has the highest accuracy, precision, and F1 score. However, its recall is lower than that of the optical method, which indicates that further study is required to enhance vein patterns and reduce noise.

Table 5

Objective evaluation of the whole algorithm (60 images).

Methods             Accuracy   Precision   Recall   F1 score
Image enhancement   0.9796     0.4499      0.4723   0.4502
Wiener estimation   0.9797     0.4645      0.5313   0.4835
Optical             0.9778     0.4307      0.7102   0.5291
Regression-based    0.9803     0.4718      0.6532   0.5414
Note: The highest mean values are shown in bold.

Fig. 10

Boxplots of (a) accuracy, (b) precision, (c) recall, and (d) F1 score for the state-of-the-art methods and the regression-based method (60 images). IE stands for the image enhancement method, WE for the Wiener estimation method, and RB for the regression-based method.


4.3. Application for Spider Vein Treatment

Spider veins are small, damaged veins that appear on the surface of the legs or face. They are usually caused by the valves inside certain veins becoming weakened or damaged.31 These veins are called "feeder veins." When the valves inside the feeder veins stop working normally, blood may pool inside the veins and cause continuous venous hypertension, which makes the capillaries connected to the feeder veins become enlarged and bulge. In the legs, spider veins are usually an early symptom of varicose veins. Treatments for spider veins include sclerotherapy, phlebectomy, and laser treatment. Sclerotherapy involves injecting a medicine called a sclerosant into the affected veins, making them shrink. Phlebectomy is a minimally invasive surgery to remove some large, damaged veins, whereas laser treatment is a noninvasive procedure that uses a focused beam of light to destroy smaller veins. All these treatments need to locate the affected veins first, and they all need to combine the treatment of the feeder veins with that of the spider veins; otherwise, the spider veins may reappear after some time even though they disappear initially.32

Unlike spider veins, feeder veins often lie beneath the surface and are invisible to the naked eye. If they can be visualized from color images, this will greatly benefit the subsequent treatment. We applied the regression-based method to some spider vein images collected from the internet,33–36 and the results are shown in Fig. 11. In Fig. 11, the visualized veins join the surface spider vein clusters and are larger, which accords with the definition of feeder veins. Therefore, we believe that the proposed regression-based method can successfully visualize feeder veins from spider vein images and can assist in the treatment of spider veins.

Fig. 11

Vein visualization results on spider vein images. (a)–(d) Four pairs of color skin images and their corresponding vein visualization results.


5. Conclusion

We propose a vein visualization method based on color images. The proposed method achieves clear vein visualization results without any professional equipment. Compared with existing methods, this approach is more accurate and does not require huge amounts of training data. Based on the difference in optical properties between venous blood and generic tissue, we derive biophysical parameters from the spectral reflectance images reconstructed by Wiener estimation. MC simulation is adopted to obtain prior information, and vein patterns are visualized from the distribution of blood. The effects of illumination and body surface on skin color are minimized through image preprocessing. Experimental results indicate that the proposed method can visualize veins clearly and correctly and that it has the potential to provide the locations of veins in the treatment of spider veins. The method operates on a per-pixel basis; in the future, we will consider utilizing the structure of veins and combining neighboring pixels to improve the visualization results and reduce noise.

Disclosures

The authors have no relevant financial interests in the manuscript and no other potential conflicts of interest to disclose.

Acknowledgments

This work was supported by the Key Research and Development Programs of Jiangsu Province (BE2018720) and the Open Project of Engineering Center of Ministry of Education (NJ2020004).

References

1. I. Kundu and B. Anthony, "Imaging the superficial vascular structure for mapping and identification," Proc. SPIE 10600, 106000X (2018). https://doi.org/10.1117/12.2296020

2. R. K. Miyake et al., "Vein imaging: a new method of near infrared imaging, where a processed image is projected onto the skin for the enhancement of vein treatment," Dermatologic Surg. 32(8), 1031–1038 (2006). https://doi.org/10.1111/j.1524-4725.2006.32226.x

3. P. Dryden and K. Haselby, "Vein locator," (2005).

4. M. Peng et al., "A methodology for palm vein image enhancement and visualization," in IEEE Int. Conf. Online Anal. and Comput. Sci., 57–60 (2016). https://doi.org/10.1109/ICOACS.2016.7563048

5. J. Hsieh et al., "3D multispectral light propagation model for subcutaneous veins imaging," Proc. SPIE 6913, 69130D (2008). https://doi.org/10.1117/12.772825

6. G. Reusz and A. Csomos, "The role of ultrasound guidance for vascular access," Curr. Opin. Anaesthesiol. 28(6), 710–716 (2015). https://doi.org/10.1097/ACO.0000000000000245

7. C. Tang et al., "Visualizing vein patterns from color skin images based on image mapping for forensics analysis," in Proc. Int. Conf. Pattern Recognit., 2387–2390 (2012).

8. C. Tang et al., "Uncovering vein patterns from color skin images for forensic analysis," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., 665–672 (2011). https://doi.org/10.1109/CVPR.2011.5995531

9. G. Ma et al., "Uncovering vein pattern using generative adversarial network," Proc. SPIE 11179, 111793R (2019). https://doi.org/10.1117/12.2539601

10. C. Tang et al., "Visualizing veins from color skin images using convolutional neural networks," J. Innov. Opt. Health Sci. 13(4), 2050020 (2020). https://doi.org/10.1142/S1793545820500200

11. T. Watanabe and T. Tanaka, "Vein authentication using color information and image matching with high performance on natural light," in ICCAS-SICE, 3625–3629 (2009).

12. J. H. Song et al., "Vein visualization using a smart phone with multispectral Wiener estimation for point-of-care applications," IEEE J. Biomed. Health Inf. 19(2), 773–778 (2015). https://doi.org/10.1109/JBHI.2014.2313145

13. N. Sharma and M. Hefeeda, "Hyperspectral reconstruction from RGB images for vein visualization," in Proc. 2020 Multimed. Syst. Conf., 77–87 (2020).

14. N. Tsumura et al., "Independent-component analysis of skin color image," J. Opt. Soc. Am. A 16(9), 2169–2176 (1999). https://doi.org/10.1364/JOSAA.16.002169

15. C. Zoller and A. Kienle, "Fast and precise image generation of blood vessels embedded in skin," J. Biomed. Opt. 24(1), 015002 (2019). https://doi.org/10.1117/1.JBO.24.1.015002

16. W. Zhang et al., "Improving shadow suppression for illumination robust face recognition," IEEE Trans. Pattern Anal. Mach. Intell. 41(3), 611–624 (2019). https://doi.org/10.1109/TPAMI.2018.2803179

17. Y. Lee et al., "Blind inverse gamma correction with maximized differential entropy," 1–12 (2020).

18. Z. Liu and J. Zerubia, "Skin image illumination modeling and chromophore identification for melanoma diagnosis," Phys. Med. Biol. 60(9), 3415–3431 (2015). https://doi.org/10.1088/0031-9155/60/9/3415

19. N. Tsumura et al., "Image-based skin color and texture analysis/synthesis by extracting hemoglobin and melanin information in the skin," ACM Trans. Graph. 22(3), 770–779 (2003). https://doi.org/10.1145/882262.882344

20. P. Stigell, K. Miyata, and M. Hauta-Kasari, "Wiener estimation method in estimating of spectral reflectance from RGB images," Pattern Recognit. Image Anal. 17(2), 233–242 (2007). https://doi.org/10.1134/S1054661807020101

21. K. Xiao et al., "Improved method for skin reflectance reconstruction from camera images," Opt. Express 24(13), 14934–14950 (2016). https://doi.org/10.1364/OE.24.014934

22. C. Donner and H. W. Jensen, "A spectral shading model for human skin," in ACM SIGGRAPH Sketches, 147-es (2006).

23. J. D. Atencio et al., Applications of Monte Carlo Methods in Biology, Medicine and Other Fields of Science, 297, IntechOpen (2011).

24. M. J. C. van Gemert et al., "Skin optics," IEEE Trans. Biomed. Eng. 36(12), 1146–1154 (1989). https://doi.org/10.1109/10.42108

25. I. Nishidate et al., "Estimation of melanin and hemoglobin using spectral reflectance images reconstructed from a digital RGB image by the Wiener estimation method," Sensors 13(6), 7902–7915 (2013). https://doi.org/10.3390/s130607902

26. A. Krishnaswamy and G. V. Baranoski, "A study on skin optics," 1–17, Canada (2004).

27. E. Alerstam et al., "CUDAMCML user manual and implementation notes," http://www.atomic.physics.lu.se/fileadmin/atomfysik/Biophotonics/Software/CUDAMCML.pdf

28. I. Nishidate et al., "Estimation of melanin and hemoglobin in skin tissue using multiple regression analysis aided by Monte Carlo simulation," J. Biomed. Opt. 9(4), 700–710 (2004). https://doi.org/10.1117/1.1756918

29. R. Abdlaty, "Hyperspectral imaging and data analysis of skin erythema post radiation therapy treatment," (2016).

30. C. Tang et al., "Using multiple models to uncover blood vessel patterns in color images for forensic analysis," Inf. Fusion 32, 26–39 (2016). https://doi.org/10.1016/j.inffus.2015.08.004

31. J. Berry, "Treatment and prevention of spider veins," (2019). https://www.medicalnewstoday.com/articles/324276 (accessed August 2021).

32. Center for Vein Restoration, "Getting to the root of the problem," (2017). https://www.centerforvein.com/blog/getting-to-the-root-of-the-problem (accessed August 2021).

34. "Spider vein image," (2018). https://legacyclinic.com/wp-content/uploads/2018/04/Spider-Veins-671005188.jpg (accessed August 2021).

Biography

Ru Jia is an MEng student at Nanjing University of Aeronautics and Astronautics under Professor Chaoying Tang. Her research involves visualizing veins from RGB images using optical models.

Chaoying Tang received her BEng and MEng degrees in automation from Nanjing University of Aeronautics and Astronautics, China. She received her PhD from the School of Computer Science and Engineering, Nanyang Technological University, Singapore, in 2013. Currently, she is an associate professor with the Department of Automatic Control, College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, China. Her research interests include image processing, pattern recognition, and biometrics.

Biao Wang received his BEng degree in aeroengine control, his MEng degree in aeroengine, and his PhD in guidance, navigation and control from Nanjing University of Aeronautics and Astronautics (NUAA) in 1997, 2000, and 2004, respectively. Currently, he is an associate professor of NUAA. His research interests include flight control and visual guidance for unmanned aerial vehicles.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Ru Jia, Chaoying Tang, and Biao Wang "Visualizing veins from color images under varying illuminations for medical applications," Journal of Biomedical Optics 26(9), 096006 (20 September 2021). https://doi.org/10.1117/1.JBO.26.9.096006
Received: 26 April 2021; Accepted: 26 July 2021; Published: 20 September 2021
KEYWORDS: Veins, Visualization, Skin, RGB color model, Reflectivity, Blood, Near infrared
