Inspiration

As more clothing stores move online, customers have fewer opportunities to try on clothes. This can make them anxious about purchasing outfits, since they cannot be sure an item will suit them or even fit, particularly when the same size number seems to mean something different in every shop. With Stylelify, users can customise a to-scale avatar that tries on each outfit for them.

What It Does

The user personalises a to-scale 3D avatar, or uploads an image of themselves, and a text prompt then describes the clothes to add. The sizes are accurate, allowing the user to see what the clothing would look like on them.

How We Built It

We used the Stable Diffusion WebUI API, writing a Python client that uploads an image of the model alongside a mask image highlighting where the clothes will be worn, together with a text prompt describing the style the avatar should wear.
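The sketch below shows roughly what that request looks like, assuming a local AUTOMATIC1111 Stable Diffusion WebUI instance launched with the --api flag; the file names and prompt are illustrative:

```python
import base64

import requests

WEBUI_URL = "http://127.0.0.1:7860"  # local WebUI started with --api


def encode_image(path: str) -> str:
    """Read an image file and return it as a base64 string."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


# Inpainting request: the mask marks where the clothes should be drawn.
payload = {
    "init_images": [encode_image("avatar.png")],  # image of the model
    "mask": encode_image("clothing_mask.png"),    # white = repaint this region
    "prompt": "a red knitted jumper, studio lighting",
    "denoising_strength": 0.75,
    "steps": 30,
}

response = requests.post(f"{WEBUI_URL}/sdapi/v1/img2img", json=payload)
response.raise_for_status()

# The WebUI returns generated images as base64 strings.
with open("avatar_dressed.png", "wb") as f:
    f.write(base64.b64decode(response.json()["images"][0]))
```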

Challenges We Ran Into

Searching for an appropriate pre-trained AI model to automate many of the rendering tasks was difficult, as many of the higher-quality models required a subscription or an enterprise licence. Smaller models often still involved subscriptions or paid tokens, and the project's cost constraints did not allow us to explore them further.

Accomplishments That We're Proud Of

We survived the 24 hours (well... most of us, anyway). The team also managed to get in contact with the research team at Meta, who are in the process of publishing a revolutionary paper on 2D-to-3D image generation that retains physics and logical mesh deformations, even in real time in AR systems. Although legal restrictions kept us from accessing their source code or API, we were directed to other open-source projects exploring similar concepts. These projects led us to consider how we preferred to display 3D entities, whether as solid meshes or as Gaussian Splatting objects.

What We Learned

Our team had not worked with APIs for interacting with AI models before, so it was very interesting to explore how different repositories and libraries fit together. Many used similar libraries, such as PyTorch and OpenAI's tooling. Combining multiple technologies, such as having Blender, Python and JavaScript files interact meaningfully, was initially daunting but turned out to be more straightforward than expected; see the sketch below.
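As a small example of that glue code, here is a minimal sketch of driving Blender headlessly from Python; render_avatar.py is a hypothetical Blender-side script, and blender is assumed to be on the PATH:

```python
import subprocess

# Launch Blender without its GUI and run a script inside it; the "--"
# separates Blender's own arguments from those forwarded to the script.
# render_avatar.py is a hypothetical script that loads the avatar mesh
# and applies the generated clothing texture.
subprocess.run(
    [
        "blender", "--background",
        "--python", "render_avatar.py",
        "--", "--texture", "avatar_dressed.png",
    ],
    check=True,
)
```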

What's Next for Stylelify

We believe our Chrome Extension has the potential to change the way people shop online. With fewer physical stores and more e-commerce sites, people are missing the confidence that comes from trying on clothes. With Meta's imminent research, 2D images of clothing, which may contain models, backgrounds and other possible distractions for the AI model, could be used to animate to-scale avatars of users wearing their favourite brands more easily than ever. This would also streamline the user experience by handing more of the workflow to AI: an image-to-text model could describe the garment and pass that description to a text-to-image model, which would transpose it onto the avatar, whether 2D or 3D.
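A rough sketch of that two-stage pipeline, assuming the open-source BLIP captioning model from Hugging Face transformers for the image-to-text step (the model choice and file name are illustrative); the resulting caption would become the prompt in the img2img request shown under "How We Built It":

```python
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

MODEL_ID = "Salesforce/blip-image-captioning-base"

# Stage 1: image-to-text — caption a shop photo of the garment.
processor = BlipProcessor.from_pretrained(MODEL_ID)
model = BlipForConditionalGeneration.from_pretrained(MODEL_ID)

garment = Image.open("shop_listing.jpg").convert("RGB")
inputs = processor(images=garment, return_tensors="pt")
caption_ids = model.generate(**inputs, max_new_tokens=30)
caption = processor.decode(caption_ids[0], skip_special_tokens=True)

# Stage 2: text-to-image — reuse the caption as the inpainting prompt
# that transposes the described garment onto the user's avatar.
print(caption)  # e.g. "a red knitted jumper on a white background"
```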

Link to Slideshow Presentation with Video Demos

We highly recommend you check this out just for the videos! Google Slides Presentation

Built With

blender, javascript, python, stable-diffusion