Virtual Try-on with Image Using Deep Learning
Apparel shopping traditionally meant visiting shops or markets, but online shopping has become increasingly popular in recent years and has changed the way people buy clothing, with online retail expanding rapidly.
According to a Statista survey published in April 2017, 40% of consumers report purchasing items online at least several times per month, and more than 20% said they shop online each week. This increase in sales brings with it the problem of product returns: return rates of more than 80% have been reported for online transactions, leading to heavy business losses.
This project aims to develop an image-based virtual try-on system using deep learning that gives customers an easy way to try on clothes without physically wearing them. Virtual try-on systems are extremely popular among people who shop for clothing online because they show how a garment looks on the wearer before purchase. The proposed solution thus gives customers a try-before-you-buy capability.
Human parsing in the proposed solution supported the development of an image-based virtual try-on network that produces images structurally consistent with the originals.
The study focused on identifying multiple poses, shapes, and styles in the reference model image, concentrating primarily on the upper half of the model image with top-wear to train the model.
The project follows the CRISP-DM methodology. The proposed virtual try-on image generator overcomes the occlusion problem by using two-dimensional images of a person and a top-wear garment. Deep convolutional neural networks and large datasets have made human parsing and panoptic segmentation much easier to study. A Deep Convolutional Generative Adversarial Network (DCGAN) is trained with real spatio-temporal images, namely temporal-slice images, and noise vectors; the discriminator was trained independently of the generator on temporal frame images.
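The adversarial training sketched above, in which the generator maps noise vectors to images while the discriminator distinguishes real from generated frames, follows the standard GAN minimax objective (shown here in its generic form, not the project's exact loss):

```latex
\min_{G}\max_{D}\;
\mathbb{E}_{x \sim p_{\text{data}}}\bigl[\log D(x)\bigr]
+ \mathbb{E}_{z \sim p_{z}}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```

Here $G(z)$ is the generated image for noise vector $z$, and $D(\cdot)$ is the discriminator's probability that its input is a real image.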
Full-body pose estimation is performed with CNNs using both regression and classification approaches. Segmentation is used in fashion parsing to classify apparel such as tops, pants, and dresses and extract the desired information.
For garment replacement in virtual try-on, the OpenPose model was selected because it provides an appropriate ordering of keypoints. The Fréchet inception distance (FID), a metric widely used to evaluate generative models including generative adversarial networks (GANs), was used for evaluation.
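FID compares the Gaussian statistics (mean and covariance) of deep-feature embeddings of real and generated images. A minimal NumPy sketch of the underlying Fréchet distance follows; the Inception-v3 feature extraction is omitted, and the activation matrix below is synthetic stand-in data, not the project's actual features:

```python
import numpy as np

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2).

    For FID, mu/sigma would be the mean and covariance of Inception-v3
    activations over real and generated try-on images; here they are
    plain NumPy statistics of synthetic data.
    """
    diff = mu1 - mu2
    # Square root of the covariance product via eigendecomposition
    # (library implementations typically use scipy.linalg.sqrtm).
    vals, vecs = np.linalg.eig(sigma1 @ sigma2)
    sqrt_prod = (vecs * np.sqrt(vals.astype(complex))) @ np.linalg.inv(vecs)
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * sqrt_prod).real)

rng = np.random.default_rng(0)
acts = rng.normal(size=(500, 8))  # stand-in for Inception activations
mu, sigma = acts.mean(axis=0), np.cov(acts, rowvar=False)
# Identical distributions give a (numerically) zero distance.
print(abs(frechet_distance(mu, sigma, mu, sigma)) < 1e-6)
```

A lower FID indicates that the generated try-on images are statistically closer to real photographs in feature space.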
An AR-based virtual try-on lets shoppers see how apparel would appear on their face or body through the device's front-facing camera. Using pose estimation, the reference garment is modified to fit the pose of the user image based on the skeleton pose. The trained model performs well on black-and-white garments; for colored garments, the method preserves the desired pose of the model's arms and hands.
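Fitting the reference garment to the skeleton pose can be sketched as a least-squares transform between matched keypoints. The sketch below uses a simple affine map and hypothetical coordinates; a real pipeline would take OpenPose joint detections and typically apply a non-rigid warp (e.g. thin-plate splines) rather than a pure affine transform:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src keypoints to dst.

    src, dst: (N, 2) arrays of matching keypoints, e.g. shoulder/elbow/hip
    positions on the flat garment image and on the detected skeleton pose.
    Returns a 2x3 matrix A such that dst ~= [x, y, 1] @ A.T.
    """
    homog = np.hstack([src, np.ones((len(src), 1))])   # (N, 3)
    A, *_ = np.linalg.lstsq(homog, dst, rcond=None)    # (3, 2)
    return A.T                                         # (2, 3)

def apply_affine(A, pts):
    """Apply a 2x3 affine matrix to an (N, 2) array of points."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    return homog @ A.T

# Hypothetical garment corner keypoints and the matching joints a pose
# estimator might detect on the shopper's photo.
garment = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 150.0], [100.0, 150.0]])
body = np.array([[20.0, 30.0], [110.0, 40.0], [10.0, 170.0], [105.0, 180.0]])
A = fit_affine(garment, body)
print(np.round(apply_affine(A, garment), 1))
```

Every garment pixel (or mesh vertex) can then be mapped through the fitted transform so the garment follows the user's detected pose.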
Keywords: Augmented Reality, Virtual Reality, E-commerce, GAN, U-Net++, LCR-Net++, VITON-GAN, StyleGAN, Online Shopping, CNN, Deep Learning.
MBA Business Analytics