Modality translation grants diagnostic value to wearable devices by translating signals collected from low-power sensors into highly interpretable counterparts that are more familiar to healthcare providers. For instance, bio-impedance (Bio-Z) is a conveniently collected modality for measuring physiological parameters but is not highly interpretable; translating it to a well-known modality such as the electrocardiogram (ECG) therefore improves the usability of Bio-Z in wearables. Deep learning solutions are well suited for this task given the complex relationships between modalities generated by distinct physiological processes. However, current algorithms usually train a single model for all users, which ignores cross-user variations, and retraining for new users typically requires collecting abundant labeled data, which is challenging in healthcare applications. In this paper, we build a modality translation framework that translates Bio-Z to ECG by learning personalized user information without training several independent architectures. Furthermore, our framework can adapt to new users at test time using very few samples. We design a meta-learning framework that contains shared and user-specific parameters to account for user differences while learning from the similarity among user signals. In this model, a meta-learner approximated by a neural network learns how to learn user-specific parameters and can efficiently update them at test time. Our experiments show that the proposed model reduces the percentage root-mean-square difference (PRD) by 41% compared to training a single model for all users and by 36% compared to training independent models for each user. When adapting to new users, our model outperforms fine-tuning a pre-trained model through back-propagation by 40% using as few as two new samples at test time.
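To make the shared/user-specific split concrete, the following is a minimal illustrative sketch (not the paper's actual architecture) of a Bio-Z-to-ECG translator in PyTorch: a shared backbone is modulated by a small set of user-specific parameters, and a hypothetical meta-learner network produces those parameters from a few support samples of a new user; all module names, layer sizes, and signal lengths are assumptions for illustration.

```python
import torch
import torch.nn as nn


class SharedTranslator(nn.Module):
    """Shared Bio-Z -> ECG backbone whose features are modulated by
    user-specific scale/shift parameters (illustrative sketch)."""

    def __init__(self, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=7, padding=3), nn.ReLU(),
        )
        self.head = nn.Conv1d(hidden, 1, kernel_size=1)

    def forward(self, bioz, user_scale, user_shift):
        # user_scale / user_shift are the user-specific parameters
        h = self.backbone(bioz)
        h = h * user_scale.unsqueeze(-1) + user_shift.unsqueeze(-1)
        return self.head(h)


class MetaLearner(nn.Module):
    """Maps a few (Bio-Z, ECG) support pairs from one user to that user's
    personalization parameters, so a new user needs only a handful of samples."""

    def __init__(self, hidden=64):
        super().__init__()
        self.hidden = hidden
        self.encoder = nn.Sequential(
            nn.Conv1d(2, hidden, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(hidden, 2 * hidden),
        )

    def forward(self, support_bioz, support_ecg):
        # Encode each support pair, then average over the support set
        pairs = torch.cat([support_bioz, support_ecg], dim=1)  # (S, 2, T)
        stats = self.encoder(pairs).mean(dim=0)                # (2 * hidden,)
        scale, shift = stats[: self.hidden], stats[self.hidden:]
        return 1.0 + scale, shift


# Usage: adapt to a new user from two support samples, then translate queries.
translator, meta = SharedTranslator(), MetaLearner()
support_bioz, support_ecg = torch.randn(2, 1, 512), torch.randn(2, 1, 512)
user_scale, user_shift = meta(support_bioz, support_ecg)
query_bioz = torch.randn(8, 1, 512)
pred_ecg = translator(query_bioz, user_scale, user_shift)  # (8, 1, 512)
```

In this sketch, test-time adaptation is a single forward pass of the meta-learner over the support pairs rather than gradient-based fine-tuning, which mirrors the claim that user-specific parameters can be updated efficiently from very few samples.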