With the spread of devices with a wide variety of screen sizes, the demand for content-aware image resizing is increasing. Seam carving (SC) is one such method: it resizes an image without losing its overall impression by repeatedly removing connected paths of visually unimportant pixels (hereinafter referred to as "seams"). Applying SC to video, however, causes the most salient regions to shift, because the seams differ from frame to frame. Previous methods address this problem, but they require a huge amount of processing time. The proposed method reduces both the misalignment and the processing time through region segmentation and seam reuse.
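The seam search at the heart of SC can be sketched as a small dynamic program. This is a minimal illustration assuming a precomputed per-pixel energy map; the paper's energy function, region segmentation, and seam-reuse logic are not reproduced here.

```python
import numpy as np

def find_vertical_seam(energy):
    """Return one column index per row for the minimum-energy vertical seam.

    energy: 2-D array where high values mark visually important pixels.
    Standard dynamic-programming formulation of seam carving.
    """
    h, w = energy.shape
    cost = energy.astype(float).copy()
    # Each pixel's cumulative cost is its energy plus the cheapest of the
    # three pixels above it (8-connected seam).
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(x - 1, 0), min(x + 2, w)
            cost[y, x] += cost[y - 1, lo:hi].min()
    # Backtrack from the cheapest bottom-row pixel.
    seam = [int(np.argmin(cost[-1]))]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(x - 1, 0), min(x + 2, w)
        seam.append(lo + int(np.argmin(cost[y, lo:hi])))
    return seam[::-1]  # top-to-bottom column indices
```

Removing the returned seam from every row shrinks the image by one column; reusing the same seam across consecutive frames is what suppresses the frame-to-frame shift the abstract describes.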
With the recent development of 3D computer graphics technology, the processing speed required of hardware has been increasing. One solution is to reduce the rendering load by omitting parts of the drawing process. Level-of-detail and occlusion culling are typical examples, but neither can reduce the rendering load for nearby objects. The proposed method omits the rendering of nearby objects that lie in shadow and receive no light, reducing the rendering load while suppressing any loss of visual impression.
KEYWORDS: Clouds, Sensors, 3D modeling, Calibration, Autoregressive models, 3D metrology, Measurement devices, Time metrology, RGB color model, Data modeling
To choose well-fitting shoes, it is important to know the size of one's feet, and to make shoes that fit, foot measurements must be taken with a 3D measuring device. However, current foot measurement devices suffer from four problems: long measuring time, measuring only one foot at a time, too few measurement points, and high cost. We therefore propose a method for automatic 3D foot-shape acquisition that solves these problems. The proposed method uses four depth sensors and AR markers to capture the shape of both feet simultaneously, and it achieves low cost by reducing the number of sensors required.
In recent years, signs and advertisements that use optical illusions have appeared. Optical illusions not only attract attention but also leave a lasting impression. Characters, too, are ubiquitous in advertisements, and designing characters to be impressive is called lettering. We therefore expect that incorporating an illusion into the lettering process can generate more impressive characters. However, accounting for an illusion while lettering takes a great deal of experience and time. In this paper, we incorporate the illusion of impossible shapes into the lettering process to automatically generate more memorable characters. An impossible shape is a figure that can be visually recognized as the projection of a three-dimensional object but cannot exist in reality. In the proposed method, strokes and contours are first extracted from line-drawing characters; a projected image is then created from them; finally, an impossible shape is generated by applying the optical illusion to the projected image.
In recent years, research has been conducted on reflecting real space onto virtual objects. However, conventional methods cannot achieve accurate reflections because, for example, the direction and angle of the reflected scene do not match. Moreover, because the light source is not estimated, the color of the virtual object stands out from its surroundings and no shadows are cast. In this study, we propose a method that uses light source estimation to integrate virtual objects into real space without visual discomfort.
Recently, with the development of smartphones and other small cameras, people who are not professional photographers have ever more opportunities to take photos. In general, photos with good composition are more impressive. Although there have been many studies on photographic composition, none have focused on the accuracy of composition determination and evaluation. In this study, we propose a method to determine and evaluate the composition of a photo. The proposed method judges composition in terms of both visual and structural features. We believe our method will assist in the selection of photos.
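As one illustration of the kind of structural feature such a method might use, the sketch below scores how closely a detected subject sits to a rule-of-thirds power point. The scoring function and normalization are our own assumptions for illustration, not the paper's actual features.

```python
import numpy as np

def rule_of_thirds_score(subject_xy, image_wh):
    """Score in (0, 1]: 1.0 means the subject lies exactly on one of the
    four rule-of-thirds power points (intersections of the thirds lines)."""
    w, h = image_wh
    x, y = subject_xy
    points = [(w * i / 3, h * j / 3) for i in (1, 2) for j in (1, 2)]
    d = min(np.hypot(x - px, y - py) for px, py in points)
    return 1.0 - d / np.hypot(w, h)  # normalize by the image diagonal
```

A subject centered in the frame scores lower than one on a thirds intersection, matching the usual compositional guideline.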
The widespread use of 3D printers has made it easy to create 3D objects at home. However, if a 3D model is created without considering the position of its center of gravity, the printed object may not be able to stand on its own. In such cases, the 3D model must be adjusted in 3D modeling software so that it can stand, which requires considerable knowledge of and experience with such software. The purpose of this research is to calculate the density of the printed object and adjust the center of gravity of a 3D model that cannot stand on its own. This eliminates the need to re-edit the 3D model, and because no auxiliary parts are added to prop the model up, its visual impression is unchanged.
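The centroid computation that underlies such an adjustment can be sketched for a closed, uniform-density triangle mesh by summing signed tetrahedra. The stability check against the base's bounding box is a simplified stand-in of our own; the research adjusts density to move the centroid rather than merely testing it.

```python
import numpy as np

def mesh_center_of_gravity(vertices, faces):
    """Centroid of a closed triangle mesh with consistent outward winding.

    Each face forms a signed tetrahedron with the origin; the
    volume-weighted sum of tetrahedron centroids gives the body centroid.
    """
    v = np.asarray(vertices, float)
    total_vol = 0.0
    moment = np.zeros(3)
    for a, b, c in faces:
        p, q, r = v[a], v[b], v[c]
        vol = np.dot(p, np.cross(q, r)) / 6.0  # signed tetrahedron volume
        total_vol += vol
        moment += vol * (p + q + r) / 4.0      # tet centroid = (p+q+r+0)/4
    return moment / total_vol

def stands_on_its_own(vertices, faces, eps=1e-9):
    """Crude stability proxy (y is up): the horizontal centroid must lie
    inside the bounding box of the vertices touching the ground plane."""
    v = np.asarray(vertices, float)
    cog = mesh_center_of_gravity(vertices, faces)
    base = v[np.isclose(v[:, 1], v[:, 1].min())]
    return (base[:, 0].min() - eps <= cog[0] <= base[:, 0].max() + eps and
            base[:, 2].min() - eps <= cog[2] <= base[:, 2].max() + eps)
```

A proper test would use the convex hull of the contact points rather than a bounding box, but the bounding box keeps the sketch short.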
Recently, the number of photos people take in unusual situations, such as travel and ceremonies, has increased. Photo sets taken in such situations contain many similar images as well as low-quality images, such as under- or overexposed and blurred shots. As a result, summarizing them into a photo album is time-consuming. We therefore propose a method of automatic photo selection for photo albums. The proposed method automatically avoids selecting low-quality images as candidates. In addition, similar images are grouped, and only the highest-quality image in each group is kept as a candidate. The method then calculates a score for each candidate image and selects the final images according to the scores and scenes.
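A common way to filter out blurred candidates is the variance-of-Laplacian sharpness measure, sketched below with plain numpy. The threshold value and the keep-the-sharpest rule are illustrative assumptions, not the paper's actual quality criteria.

```python
import numpy as np

def laplacian_variance(gray):
    """Sharpness score: variance of the 3x3 Laplacian response.
    Low values indicate a blurred (or featureless) image."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    # Correlate by summing shifted, weighted views (valid region only).
    for i in range(3):
        for j in range(3):
            out += k[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return out.var()

def select_sharpest(images, threshold=100.0):
    """Drop images scoring below the blur threshold, then return the index
    of the sharpest survivor (a stand-in for per-group candidate picking)."""
    scored = [(laplacian_variance(im), i) for i, im in enumerate(images)]
    kept = [(s, i) for s, i in scored if s >= threshold]
    return max(kept)[1] if kept else None
```

In practice the same score can serve both roles in the abstract: rejecting low-quality images outright and ranking near-duplicates within a group.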
With the digitization of content production, CG is now widely used for movie backgrounds, game maps, and illustrations. Such background CG is generally produced piece by piece with CG software. However, creating background CG with conventional software requires deciding the positions of buildings and roads, resizing each CG model to be placed, and creating a texture for each model. This demands specialized knowledge and skill and takes a great deal of time and effort. We therefore propose an automatic background CG generation system that uses two types of deep learning.
Digital cameras are used in various scenarios; however, the sharpness of images captured outdoors may be reduced by bad weather such as fog and haze, so obtaining a clear image requires haze removal. In this study, we propose a haze removal method that separates the sky and foreground regions and applies different processing to the sky region, because its characteristics differ from those of the foreground; this improves the visibility of the image. We assume that the sky is a bright region that varies little across the image, extract multiple sky-region candidates, and merge them according to color distance. Next, we estimate the atmospheric light and the transmittance of the haze. Atmospheric light is the light scattered across the entire image, and transmittance is the fraction of scene light that reaches the camera unscattered. The sky region determines the brightness of the atmospheric light, and the impression of the entire image determines its color. The transmittance is estimated from dark-channel features and morphological operations. The conventional method uses a fixed patch size, because of which a smooth transmission map may not be generated; our method generates a smooth transmission map for any image by varying the patch size. Finally, the haze is removed using the estimated atmospheric light and transmittance. Because the resulting image is dark, brightness correction is applied to obtain a clean, haze-free image.
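The dark-channel and haze-model steps can be sketched as follows. This is a minimal version of the standard dark-channel-prior pipeline with a fixed patch and given atmospheric light; the sky-region separation, morphological refinement, adaptive patch size, and brightness correction described above are not reproduced.

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel minimum over RGB followed by a local min filter;
    haze-free regions tend to have values near zero."""
    mins = img.min(axis=2)
    h, w = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def estimate_transmission(img, airlight, omega=0.95, patch=3):
    """t(x) = 1 - omega * dark_channel(I / A): hazier pixels transmit less."""
    return 1.0 - omega * dark_channel(img / airlight, patch)

def dehaze(img, airlight, omega=0.95, t_min=0.1, patch=3):
    """Invert the haze model I = J*t + A*(1 - t) to recover the scene J."""
    t = np.clip(estimate_transmission(img, airlight, omega, patch), t_min, 1.0)
    return (img - airlight) / t[..., None] + airlight
```

The lower bound `t_min` prevents division blow-up in dense haze; it is one reason the raw dehazed output tends to be dark and needs the brightness correction the abstract mentions.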
In recent years, improvements in computer performance have made it possible to represent dense mesh models in computer graphics. However, manipulating a dense mesh model can be costly. To reduce the computational cost of manipulation, a dense model is often manipulated through a coarse bounding cage that encloses it; however, generating such cages is usually tedious and time-consuming. In this paper, we propose a method of automatic cage generation based on variational remeshing. We first evaluate features of the original triangle model, such as curvature and dihedral angles, and then voxelize it. We extract and triangulate the outer faces of the voxels and transfer the features of the original model onto them. Finally, we apply variational remeshing to this triangular mesh; variational remeshing minimizes, by global relaxation until convergence, an energy function whose minimum corresponds to a good mesh. Experimental results demonstrate that our method is effective.
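The voxelization and outer-face extraction steps can be sketched on a point-sampled surface. This is a simplification of our own: the paper voxelizes the full model and triangulates the exposed faces into a cage, whereas here we only build the occupancy grid and count the exposed faces.

```python
import numpy as np

def voxelize_points(points, voxel_size):
    """Occupancy grid from surface sample points (watertight-mesh
    voxelization would additionally fill the interior)."""
    pts = np.asarray(points, float)
    origin = pts.min(axis=0)
    idx = np.floor((pts - origin) / voxel_size).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[tuple(idx.T)] = True
    return grid, origin

def outer_face_count(grid):
    """Count exposed voxel faces (not shared with an occupied neighbor);
    these are the faces that would be triangulated into the cage."""
    padded = np.pad(grid, 1)  # False border so every outer face is exposed
    count = 0
    for axis in range(3):
        for step in (1, -1):
            shifted = np.roll(padded, step, axis=axis)
            # Occupied here, unoccupied on the neighboring side: exposed.
            count += int((padded & ~shifted).sum())
    return count
```

Two adjacent voxels share two faces, so they expose 2 × 6 − 2 = 10 faces; the cage mesh is then obtained by triangulating those quads and relaxing the result.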