Abstract
Objective: Existing fashion outfit matching methods lack effective feature representations of the local details in clothing images, which makes it difficult to model the local compatibility between different garments, limits the completeness of compatibility learning, and lowers matching accuracy. We therefore propose a fashion outfit matching method based on global-local feature optimization. Method: First, different convolutional networks extract the image and text features of fashion items as global features, while a local feature extraction network built on top of the convolutional network extracts local features of the fashion images. Next, a global-local compatibility learning module is constructed from a graph network and a self-attention mechanism; it learns the interactions among the global features and among the local features of different items, defines weights for the items, and models both global and local compatibility. Finally, an outfit matching optimization model fuses the global and local compatibility of all items in an outfit, computes a matching score, and outputs the correct matching result. Result: The method is compared with other methods on the public Polyvore dataset. Experiments show that the local features extracted by the local feature extraction network represent local garment information effectively, the global-local compatibility learning module models the global and local compatibility of fashion items completely, and the matching optimization model combines the two kinds of compatibility optimally, raising the fill-in-the-blank (FITB) matching accuracy to 86.89%. Conclusion: The proposed global-local feature optimization method effectively improves the accuracy of fashion outfit matching and meets the needs of everyday styling.

Objective: Fashion clothing matching has become an active topic in clothing-related fashion research. Matching studies must learn the complex matching relationships, i.e., fashion compatibility, among the different fashion items in an outfit. Fashion items carry rich local designs, and matching relationships also hold among those local designs. Most existing research focuses on items' global features (visual and textual) for global compatibility learning but ignores local feature extraction for local compatibility, which lowers the performance and accuracy of fashion matching. We therefore develop a fashion matching method based on global-local feature optimization. It aims to extract local features of fashion images to represent local information, to model the local compatibility of fashion items, and to improve matching accuracy by incorporating both global and local compatibility. Method: First, two different convolutional neural networks (CNNs) extract the global features of fashion items from the input fashion images and texts. To extract CNN-based local features of the fashion images, a multi-branch local feature extraction network is designed.
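A minimal sketch of such a multi-branch local feature extractor is shown below. The 1x1 convolution, the simplified whole-tensor batch normalization, and all dimensions (512 input channels, 4 branches, 128-dimensional outputs) are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def branch(feat_map, W, gamma=1.0, beta=0.0, eps=1e-5):
    """One branch: 1x1 convolution -> batch norm -> ReLU -> average pool.
    feat_map: (C, H, W_sp) backbone feature map; W: (D, C) conv weights.
    BN is simplified to whole-tensor normalization for illustration."""
    C, H, W_sp = feat_map.shape
    x = W @ feat_map.reshape(C, -1)                              # 1x1 conv = channel mixing
    x = gamma * (x - x.mean()) / np.sqrt(x.var() + eps) + beta   # simplified BN
    x = np.maximum(x, 0.0)                                       # ReLU
    return x.mean(axis=1)                                        # pooled local feature (D,)

rng = np.random.default_rng(0)
feat_map = rng.standard_normal((512, 7, 7))      # hypothetical backbone output
n_branches, D = 4, 128                           # each branch yields one local feature
local_feats = np.stack([branch(feat_map, rng.standard_normal((D, 512)) * 0.05)
                        for _ in range(n_branches)])
print(local_feats.shape)                         # (4, 128)
```

Each branch applies its own weights to the shared backbone feature map, so different branches attend to different local patterns of the same garment image.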
Each branch of the local feature extraction network consists of 1) a convolution layer, 2) a batch normalization (BN) layer, and 3) a rectified linear unit (ReLU) activation. One branch extracts one local feature from a fashion image, so different branches extract different local features of the same image. Second, a global-local compatibility learning module is constructed from a graph neural network (GNN) and a self-attention mechanism, which models both global and local compatibility. The GNN models the interactions among global features and among local features separately. The self-attention mechanism defines weight information for the different fashion items and integrates it into the modeling, yielding each item's global and local compatibility. Finally, a fashion clothing matching optimization model is built to produce optimized matching results. It integrates the global compatibility and the local compatibility of all fashion items in an outfit into outfit-level global and local compatibility, and trade-off parameters adjust the influence of each on the final matching. A matching score is then computed for each candidate; different matching schemes receive different scores, and the scheme with the highest score is output as the optimized matching result. Result: The proposed method is validated on the public Polyvore dataset, which contains fashion item images and textual descriptions. The details are presented as follows. The local features extracted by our local feature extraction network represent the items' local information effectively without attribute-label supervision.
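The global-local compatibility learning module described above can be sketched as follows. The single message-passing step over a fully connected outfit graph, the dot-product self-attention, the cosine-similarity scoring, and all dimensions are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def compatibility(features, W_msg):
    """Sketch: one GNN message-passing step over an outfit graph, then
    self-attention item weights, then a weighted mean of each item's
    average pairwise similarity as the compatibility score."""
    n = features.shape[0]
    # each item aggregates the other items' features (fully connected graph)
    agg = (features.sum(axis=0, keepdims=True) - features) / (n - 1)
    h = np.tanh((features + agg) @ W_msg)            # updated node states
    h = h / np.linalg.norm(h, axis=1, keepdims=True)
    attn = softmax((h @ h.T).mean(axis=1))           # self-attention item weights
    sim = h @ h.T                                    # pairwise cosine similarities
    per_item = (sim.sum(axis=1) - 1.0) / (n - 1)     # avg similarity to other items
    return float(attn @ per_item)                    # weighted compatibility score

rng = np.random.default_rng(1)
feats = rng.standard_normal((5, 64))                 # 5 items, hypothetical dims
score = compatibility(feats, rng.standard_normal((64, 64)) * 0.1)
```

The same scoring applies to global features and to local features separately, which is how the module produces the two kinds of compatibility.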
Our global-local compatibility learning module learns each item's global compatibility and local compatibility simultaneously while accounting for the weights of the different items, so it models global and local compatibility completely. The fill-in-the-blank (FITB) accuracy of fashion matching improves to 86.89%. Conclusion: We develop a fashion clothing matching method based on global-local feature optimization. First, a local feature extraction network extracts local features of fashion images while the items' global features are extracted. Next, after a graph network models the global and local matching relationships among fashion items, a self-attention mechanism weights the different items, which models the items' global and local compatibility completely. Finally, our matching optimization model fuses the items' global compatibility and local compatibility into outfit-level compatibility, and parameters adjust the influence of the two kinds of compatibility on the matching result. The convergence speed of our method is still slow, and the optimization model only combines the outfit's global and local compatibility linearly, whereas in practice the relationship between the two is more complex. To improve matching accuracy further, future work will address convergence speed and the clothing matching optimization.
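The final fusion step can be illustrated with a small sketch. The linear combination matches the abstract's statement that the optimization model combines the two compatibilities linearly; the parameter name `lam` and the candidate scores are hypothetical:

```python
def matching_score(c_global, c_local, lam=0.5):
    """Fuse outfit-level global and local compatibility with a
    trade-off parameter lam in [0, 1] (linear combination, as stated
    in the abstract; lam and the inputs here are illustrative)."""
    return lam * c_global + (1.0 - lam) * c_local

# hypothetical candidate outfits: (global compatibility, local compatibility)
candidates = {"outfit_a": (0.9, 0.6), "outfit_b": (0.7, 0.95)}
best = max(candidates, key=lambda k: matching_score(*candidates[k], lam=0.5))
print(best)  # prints "outfit_b": 0.825 fused score beats outfit_a's 0.75
```

Raising `lam` shifts the decision toward global compatibility; lowering it emphasizes local detail compatibility, which is the trade-off the optimization model tunes.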