News Release

An approach for unsupervised domain adaptation based on integrated autoencoder

Peer-Reviewed Publication

Higher Education Press

Image: The processing flow of the proposed method.

Credit: Yi ZHU, Xindong WU, Jipeng QIANG, Yunhao YUAN, Yun LI.

Unsupervised domain adaptation has attracted a great deal of attention and research over the past decades. Among deep learning methods, autoencoder-based approaches have achieved sound performance because they require no labels and converge quickly. However, existing autoencoder-based methods merely connect in series the features generated by different autoencoders, which hinders discriminative representation learning and fails to capture the true cross-domain features.
To address these problems, a research team led by Yi ZHU published their research on 15 October 2023 in Frontiers of Computer Science, co-published by Higher Education Press and Springer Nature.
The team proposed a novel representation learning method based on integrated autoencoders for unsupervised domain adaptation. A sparse autoencoder is introduced to combine the inter- and inner-domain features, minimizing deviations between different domains and improving the performance of unsupervised domain adaptation. Extensive experiments on three benchmark data sets clearly validate the effectiveness of the proposed method against several state-of-the-art baselines.
In the research, the team obtains the inter- and inner-domain features with two different autoencoders. Higher-level, more abstract representations are extracted to capture different characteristics of the original input data in the source and target domains. A whitening layer is introduced to process features during inter-domain representation learning. A sparse autoencoder then combines the inter- and inner-domain features, minimizing deviations between domains and improving the performance of unsupervised domain adaptation.
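For readers who want a concrete picture, the snippet below is a minimal NumPy sketch of one common whitening scheme (ZCA whitening) applied to a batch of encoded features. It is an illustration only: the paper's actual whitening layer may be implemented differently, and the function name here is a hypothetical choice.

```python
# Minimal sketch of ZCA whitening for a batch of encoded features.
# Illustrative only: the paper's whitening layer may differ, and the
# name zca_whiten is a hypothetical choice for this example.
import numpy as np

def zca_whiten(features: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Decorrelate feature dimensions so their covariance is (near) identity.

    features: (n_samples, n_features) matrix of encoded representations.
    eps: small constant that keeps the inverse square root stable.
    """
    centered = features - features.mean(axis=0)
    cov = centered.T @ centered / (centered.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)        # symmetric eigendecomposition
    # ZCA transform: rotate into the eigenbasis, rescale, rotate back.
    transform = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return centered @ transform
```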
Firstly, the marginalized AutoEncoder with Maximum Mean Discrepancy (mAEMMD) is introduced to map the original input data into a latent feature space, generating inter-domain representations between the source and target domains simultaneously. Secondly, a Convolutional AutoEncoder (CAE) is utilized to obtain inner-domain representations while keeping the relative locations of features, thereby preserving the spatial information of the input data in the source and target domains. Thirdly, after higher-level features are obtained by these two autoencoders, a sparse autoencoder is applied to combine the inter- and inner-domain representations, and the resulting feature representations are used to minimize deviations between the domains.
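The following PyTorch sketch shows how these three stages can fit together in a single training step. Every module and name below is an illustrative assumption rather than the authors' code: fully connected autoencoders stand in for the mAEMMD and the CAE, a linear-kernel MMD approximates the discrepancy term, and an L1 penalty approximates the sparsity constraint.

```python
# Illustrative sketch of the three-stage pipeline; not the authors' code.
import torch
import torch.nn as nn

def mmd_loss(source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Squared Maximum Mean Discrepancy with a linear kernel."""
    delta = source.mean(dim=0) - target.mean(dim=0)
    return delta.dot(delta)

class AutoEncoder(nn.Module):
    """Fully connected autoencoder; a stand-in here for both the mAEMMD
    encoder and a (flattened) convolutional autoencoder."""
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.decoder = nn.Linear(hid_dim, in_dim)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

# Dummy mini-batches from the source and target domains (32 samples, 256-d).
src, tgt = torch.randn(32, 256), torch.randn(32, 256)

inter = AutoEncoder(256, 128)  # stage 1: inter-domain features (mAEMMD stand-in)
inner = AutoEncoder(256, 128)  # stage 2: inner-domain features (CAE stand-in)
fuse = AutoEncoder(256, 64)    # stage 3: sparse autoencoder combining both

z_src, rec_src = inter(src)
z_tgt, rec_tgt = inter(tgt)
h_src, _ = inner(src)
h_tgt, _ = inner(tgt)

# Concatenate inter- and inner-domain features, then fuse them.
cat_src = torch.cat([z_src, h_src], dim=1)
cat_tgt = torch.cat([z_tgt, h_tgt], dim=1)
f_src, rec_cat_src = fuse(cat_src)
f_tgt, _ = fuse(cat_tgt)

recon = (nn.functional.mse_loss(rec_src, src)
         + nn.functional.mse_loss(rec_tgt, tgt)
         + nn.functional.mse_loss(rec_cat_src, cat_src))
sparsity = f_src.abs().mean()  # L1 penalty approximating the sparsity constraint
loss = recon + mmd_loss(f_src, f_tgt) + 0.1 * sparsity
loss.backward()                # gradients flow through all three stages
```

In the actual method, the reconstruction, discrepancy, and sparsity terms would be weighted and tuned; the 0.1 coefficient above is arbitrary.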
Future work can focus on learning representations of graph data, in which instance relationships are represented with an adjacency matrix, and on exploring heterogeneous graph data relationships with autoencoder networks based on convolutional operations.
 

