Representation learning (RL) methods learn latent embeddings of objects in which information is preserved by pairwise distances. Since certain distance functions are invariant to certain linear transformations, different embeddings may encode exactly the same information. In dynamic systems, a temporal difference between embeddings may therefore reflect either genuine change (instability) in the system or misalignment of the embeddings caused by such arbitrary transformations. This study focuses on the embedding alignment problem: distinguishing structural changes inherent to a system from arbitrary changes introduced by representation learning methods, and quantifying the magnitude of each. To avoid confusion arising from naming conventions in the literature, we note that embedding alignment problems are distinct from graph matching/network alignment problems. Although the embedding alignment issue has been acknowledged in the representation learning literature, its measurement and empirical analysis have not received sufficient attention. In this work, we investigate embedding alignment and its components, propose novel metrics to measure alignment and stability, and demonstrate their suitability through synthetic experiments. Real-world experiments show that both static and dynamic RL methods are prone to producing misaligned embeddings, and that such misalignment degrades the performance of dynamic network inference tasks. Ensuring alignment improves prediction accuracy by up to 90% for static and up to 40% for dynamic RL methods.
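As a minimal illustration of the misalignment issue described above (a sketch only, not the metrics proposed in the paper), the following Python snippet shows that a random orthogonal transformation leaves all pairwise Euclidean distances intact while changing the embedding coordinates, and that an orthogonal Procrustes step removes the arbitrary rotation. The array sizes and the use of NumPy/SciPy here are assumptions made for the example.

```python
# Sketch: distance-preserving transformations cause embedding misalignment,
# which a Procrustes alignment can remove. Not the paper's proposed metrics.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))           # embedding of 100 objects in 16 dims

# Apply a random orthogonal transformation (rotation/reflection).
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))
Y = X @ Q

# Same information: all pairwise Euclidean distances are preserved.
print(np.allclose(pdist(X), pdist(Y)))   # True

# Misalignment: the coordinates themselves differ substantially ...
print(np.linalg.norm(X - Y))             # large

# ... until Y is re-aligned onto X via orthogonal Procrustes.
R, _ = orthogonal_procrustes(Y, X)
print(np.linalg.norm(X - Y @ R))         # ~0 (up to numerical error)
```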