The Covid-19 pandemic has spurred consumers to replace fitting rooms with augmented reality (AR) on their smartphones and computers. The pandemic is reshaping the way we shop: stores are reopening, but they are being reorganized to minimize contact. Fitting rooms are taped off, sample counters are closed, and merchandise testers are put away.
The virtual fitting room market offers solutions for accessories, watches, glasses, hats, clothes, and more. Let's review how some of these solutions work under the hood.
A good example of virtual watch try-on is the AR Try On Watches app, which allows users to try on various watch models. The solution is based on AR technology that uses specific markers printed on a band, which the user wears on the wrist in place of a watch to start the virtual try-on. The computer vision algorithm processes only the markers visible in the frame and identifies the camera's position relative to them. Then, to render the 3D object correctly, the virtual camera is placed at that same location.
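To make the marker approach more concrete, here is a minimal sketch of marker-based pose estimation using OpenCV's ArUco module. The AR Try On Watches app does not disclose its implementation, so the marker dictionary, marker size, and camera intrinsics below are illustrative assumptions rather than values from the actual product.

```python
# Minimal sketch: detect printed markers in a frame and recover the camera pose,
# which tells us where to place the virtual camera that renders the 3D watch.
# Uses the cv2.aruco API as in opencv-contrib-python < 4.7.
import cv2
import numpy as np

# Camera intrinsics would normally come from a calibration step (placeholder values).
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector_params = cv2.aruco.DetectorParameters_create()

# In a real app this would be a live camera frame showing the band on the wrist.
frame = np.zeros((480, 640, 3), dtype=np.uint8)

corners, ids, _ = cv2.aruco.detectMarkers(frame, aruco_dict, parameters=detector_params)

if ids is not None:
    # Estimate each visible marker's pose relative to the camera
    # (a marker side length of 2 cm is an assumption).
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, 0.02, camera_matrix, dist_coeffs)
    # rvecs/tvecs define where the virtual camera must be placed so that the
    # rendered 3D watch lines up with the band in the real frame.
    print("camera-to-marker translation (m):", tvecs[0].ravel())
else:
    print("no markers visible in this frame")
```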
Overall, the technology has its limits (for instance, not everyone has a printer at hand to print out the AR band). But if it matches the business use case, it wouldn't be that difficult to build a product of production-ready quality. The most important part would probably be creating proper 3D objects to render.
For shoe try-on, a model has to detect foot key points, which normally requires a large amount of manually labelled data. However, the labelling effort can be reduced by using synthetic data, which means rendering photorealistic 3D human foot models with key points and training a model on that data, or by using photogrammetry, which reconstructs a 3D scene from multiple 2D views.
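As an illustration of why synthetic rendering removes the manual labelling step, consider the following minimal sketch: if the 3D foot model's key-point coordinates and the virtual camera used for rendering are known, the 2D labels follow from a simple projection. The key-point coordinates and camera parameters below are placeholder values, not data from any real pipeline.

```python
# Minimal sketch: ground-truth 2D key-point labels come "for free" when a 3D
# foot model is rendered with a known virtual camera, because the known 3D
# points can simply be projected into the image.
import cv2
import numpy as np

# 3D key points defined on the foot mesh, in metres (toe, heel, ankle as examples).
foot_keypoints_3d = np.array([[0.00, 0.00, 0.25],
                              [0.00, 0.00, 0.00],
                              [0.00, 0.08, 0.02]], dtype=np.float32)

# Virtual camera pose used for this particular rendered view.
rvec = np.array([0.1, -0.3, 0.0], dtype=np.float32)   # rotation (Rodrigues vector)
tvec = np.array([0.0, -0.05, 0.6], dtype=np.float32)  # translation

camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]], dtype=np.float32)
dist_coeffs = np.zeros(5, dtype=np.float32)

# These pixel coordinates are stored alongside the rendered image as labels.
keypoints_2d, _ = cv2.projectPoints(foot_keypoints_3d, rvec, tvec,
                                    camera_matrix, dist_coeffs)
print(keypoints_2d.reshape(-1, 2))
```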
This kind of solution is much more complicated. To enter the market with a ready-to-use product, you need to gather a large enough foot key point dataset (using synthetic data, photogrammetry, or a mix of both), train a custom pose estimation model that combines high enough accuracy with fast inference, test its robustness in various conditions, and create a 3D foot model. We consider it a medium-complexity project in terms of technology.
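For a sense of what "training a custom pose estimation model" could look like at its simplest, here is a hedged sketch of a foot key-point regressor in PyTorch. The backbone, number of key points, and hyperparameters are assumptions for illustration, not a description of any production model.

```python
# Minimal sketch: regress 2D foot key points from images, assuming a dataset of
# (image, key-point) pairs produced from synthetic data or photogrammetry.
# Requires torchvision >= 0.13 for the `weights=` argument.
import torch
import torch.nn as nn
from torchvision import models

NUM_KEYPOINTS = 10  # assumed number of foot key points

# A lightweight backbone keeps inference fast enough for a mobile AR use case.
backbone = models.mobilenet_v3_small(weights=None)
backbone.classifier = nn.Linear(576, NUM_KEYPOINTS * 2)  # (x, y) per key point

optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-3)
criterion = nn.MSELoss()

def train_step(images, keypoints):
    """images: (B, 3, H, W) tensor; keypoints: (B, NUM_KEYPOINTS * 2) normalized coords."""
    optimizer.zero_grad()
    pred = backbone(images)
    loss = criterion(pred, keypoints)
    loss.backward()
    optimizer.step()
    return loss.item()

# One dummy step with random data just to show the shapes involved.
loss = train_step(torch.rand(4, 3, 224, 224), torch.rand(4, NUM_KEYPOINTS * 2))
print(f"training loss: {loss:.4f}")
```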
Compared to shoes, masks, glasses, and watches, virtual try-on of 3D clothes remains a challenge. The reason is that clothes deform to take the shape of a person's body. Thus, for a correct AR experience, a deep learning model should identify not only the basic key points at the human body's joints but also the body shape in 3D.
Looking at one of the most recent deep learning models, DensePose, which aims to map the pixels of an RGB image of a person to the 3D surface of the human body, we can see that it is still not quite suitable for augmented reality. DensePose's inference speed is not appropriate for real-time apps, and its body mesh detections are not accurate enough for fitting 3D clothing items. To improve the results, more annotated data would have to be gathered, which is a time- and resource-consuming task.
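A quick way to see whether a model clears the real-time bar is to time its inference on camera-sized frames. The sketch below uses a generic torchvision segmentation network as a stand-in, since DensePose itself is distributed as part of the detectron2 project and is heavier still; the frame size and FPS target are assumptions.

```python
# Minimal sketch: benchmark a dense per-pixel model on camera-sized frames to
# judge real-time suitability. The network here is a stand-in, not DensePose.
import time
import torch
from torchvision.models.segmentation import deeplabv3_mobilenet_v3_large

model = deeplabv3_mobilenet_v3_large(weights=None).eval()
frame = torch.rand(1, 3, 480, 640)  # one camera-sized RGB frame

with torch.no_grad():
    model(frame)  # warm-up run
    start = time.perf_counter()
    n_frames = 20
    for _ in range(n_frames):
        model(frame)
    elapsed = time.perf_counter() - start

fps = n_frames / elapsed
print(f"~{fps:.1f} FPS on this hardware; smooth AR try-on typically needs 25-30+ FPS")
```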
In conclusion, I'd say that current virtual fitting rooms work well for items associated with separate body parts such as the head, face, feet, and arms. But for items where the whole body needs to be detected, estimated, and modified, virtual fitting is still in its infancy. However, AI is evolving in leaps and bounds, so the best strategy is to stay tuned and keep experimenting.