One step in producing 2-dimensional (2D) character animation is to segment an illustration into parts such as the face outline or hair. This is a simple but time-consuming process that involves cutting out the parts to be animated from an illustration. In this study, we focus on automating this part segmentation for 2D character animation. To improve accuracy, we combine U-net and pix2pix. In our accuracy evaluation, the proposed method achieved an F-measure of 81.1%, outperforming U-net and pix2pix by 15.5% and 0.9%, respectively.
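The F-measure reported above is the harmonic mean of precision and recall. A minimal sketch of how it can be computed for binary part-segmentation masks, assuming NumPy arrays; the function name `f_measure` and the toy masks are illustrative, not taken from the paper:

```python
import numpy as np

def f_measure(pred: np.ndarray, gt: np.ndarray) -> float:
    """F-measure (harmonic mean of precision and recall) for binary masks."""
    tp = np.logical_and(pred, gt).sum()          # true-positive pixels
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / gt.sum() if gt.sum() else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy 4x4 predicted vs. ground-truth masks for one animation part.
pred = np.array([[1,1,0,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]], dtype=bool)
gt   = np.array([[1,1,0,0],[1,0,0,0],[0,0,0,0],[0,0,0,0]], dtype=bool)
print(round(f_measure(pred, gt), 3))  # → 0.857
```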
A facial caricature is an art form that captures and exaggerates the facial features of a person. Caricatures are commonly used to convey humor and sarcasm, and as gifts, souvenirs, and social networking avatars. Previous attempts to generate facial caricatures have focused only on transferring the image into an artistic style and do not exaggerate facial features. We focus on the exaggeration of facial features and propose a facial feature exaggeration system for generating portraits. Features are detected by comparing facial landmarks, found by a facial landmark detector, with those of the mean face, and are exaggerated by transforming the landmarks further away from the mean face. We also experiment with a new exaggeration method that incorporates normalization. The results show that features can be exaggerated without collapsing the facial shape.
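The core exaggeration step described above, i.e., moving landmarks further from their mean-face counterparts, can be sketched in a few lines. This is a minimal illustration assuming N×2 NumPy landmark arrays; the function name `exaggerate` and the scaling parameter `alpha` are assumptions, not names from the paper:

```python
import numpy as np

def exaggerate(landmarks: np.ndarray, mean_face: np.ndarray,
               alpha: float = 0.5) -> np.ndarray:
    """Scale each landmark's offset from the mean face by (1 + alpha),
    pushing distinctive features further from the average face."""
    return mean_face + (1.0 + alpha) * (landmarks - mean_face)

# A landmark 2 units right of its mean-face position moves to 3 units right.
mean_face = np.array([[10.0, 10.0]])
landmarks = np.array([[12.0, 10.0]])
print(exaggerate(landmarks, mean_face))  # → [[13. 10.]]
```

With `alpha = 0`, the landmarks are unchanged; larger values produce stronger caricature-like distortion.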
Keyword searches are generally used when searching for illustrations of anime characters. However, keyword searches require that the illustrations be tagged first. The illustration information that a tag can express is limited, making it difficult to search for a specific illustration. We focus on character attributes that are difficult to express with tags and propose a new search method that uses vector representations of the degree of character attributes. We first created a character illustration dataset limited to the hair-length attribute and then trained a convolutional neural network (CNN) to extract features. We obtained vector representations of the character attributes from the CNN and confirmed that they can be used for new kinds of searches.
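Once attribute vectors have been extracted by a CNN, searching reduces to nearest-neighbor lookup. A minimal sketch using cosine similarity, assuming the features are already available as NumPy arrays; the names `search` and `gallery` are illustrative, not from the paper:

```python
import numpy as np

def search(query_vec: np.ndarray, gallery: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k gallery vectors most similar to the query,
    ranked by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return np.argsort(-(g @ q))[:k]

# Toy 2-D "attribute" vectors, e.g. encoding degree of hair length.
gallery = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
print(search(np.array([1.0, 0.0]), gallery, k=2))  # → [0 2]
```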
We propose an efficient system to support referencing external information, i.e., information that is not displayed in the original work but is related to it. We employ an annotation system to display this information to the user. We are developing Wappen (Web-based Annotation Appending and Sharing Framework), which lets users attach annotations to any scene on a primary terminal and reference prepared scene annotations on a secondary terminal while viewing the scene on the primary terminal. Experimental results indicate that the time required to obtain the desired information by viewing annotations on two terminals is less than with a single terminal. A subjective evaluation also rated the two-terminal setup higher than the single-terminal one.
Nowadays, social networking sites (SNS) are used for posting and viewing beautiful photographs. Although many visitors take photographs in theme parks and post them on SNS, finding appropriate photo spots via SNS is not easy because of the enormous number of posted images. This study proposes recommendation algorithms that help visitors find appropriate photo spots. As a test case, we chose Tokyo Disneyland (TDL), whose visitors post their photographs on Twitter. Characteristically, such posts merge several photographs into a single image called a collage, which reveals a visitor's excursion history. Based on these histories, we apply a collaborative filtering algorithm that recommends photo spots to visitors. Before designing a photo spot recommendation system for theme parks, we must understand the intentions and preferences of theme park visitors; to acquire this knowledge, we conducted a questionnaire survey. The results suggested that male subjects prefer scenic and pose-friendly photo spots, whereas female subjects tend to choose spots that render them "instagrammable," "beautiful," and "cute." Based on these findings, we categorized photo spots inside TDL and created a prototype recommendation system.
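A minimal sketch of the collaborative-filtering step, assuming excursion histories are encoded as a binary visitor-by-spot matrix; item-based cosine similarity is one common formulation, and the names here (`recommend`, `visits`) are illustrative rather than taken from the paper:

```python
import numpy as np

def recommend(visits: np.ndarray, user: int, k: int = 2) -> list:
    """Item-based collaborative filtering: score unvisited photo spots
    by their similarity to the spots the user has already visited."""
    norms = np.linalg.norm(visits, axis=0, keepdims=True)
    norms[norms == 0] = 1.0
    unit = visits / norms                 # normalize each spot column
    sim = unit.T @ unit                   # spot-to-spot cosine similarity
    scores = sim @ visits[user]           # aggregate over the user's visited spots
    scores[visits[user] > 0] = -np.inf    # never re-recommend visited spots
    return [int(i) for i in np.argsort(-scores)[:k]]

# Rows: visitors; columns: photo spots (1 = appeared in that visitor's collage).
visits = np.array([[1, 1, 0, 0],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [1, 0, 1, 0]], dtype=float)
print(recommend(visits, user=0))  # → [2, 3]
```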