Self-supervised models have demonstrated remarkable performance in speech processing by learning latent representations from large amounts of unlabeled data. Adapting these models to low-resource languages yields promising results, but the computational cost of fine-tuning all model parameters is prohibitively high. Adapters offer a solution by introducing lightweight bottleneck structures into pre-trained models, enabling efficient parameter adaptation for downstream tasks. However, randomly initialized adapters often underperform in extremely low-resource scenarios. To address this issue, we explore the Meta-Adapter for self-supervised models and analyze its limitations, including poor learning of language-specific knowledge and meta-overfitting. To mitigate these problems, we propose the Meta-Adaptable-Adapter (MAA), a new meta-learning algorithm that adapts to low-resource languages quickly and effectively. MAA learns task-specific adapters for feature extraction and task-independent adapters for feature combination. Experiments on three datasets covering 31 low-resource languages from seven language families show that MAA outperforms other adapter methods, demonstrating better generalization and extensibility.
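To make the adapter idea concrete, the following is a minimal NumPy sketch of a bottleneck adapter layer with a residual connection. The hidden size, bottleneck width, ReLU activation, and zero-initialized up-projection are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

class BottleneckAdapter:
    """Illustrative bottleneck adapter: down-project, nonlinearity,
    up-project, then add a residual connection."""

    def __init__(self, d_model, d_bottleneck, rng=None):
        rng = rng or np.random.default_rng(0)
        # Down-projection compresses features to a small bottleneck.
        self.w_down = rng.normal(0.0, 0.02, (d_model, d_bottleneck))
        self.b_down = np.zeros(d_bottleneck)
        # Up-projection is zero-initialized so the adapter starts as identity
        # (a common choice; assumed here for illustration).
        self.w_up = np.zeros((d_bottleneck, d_model))
        self.b_up = np.zeros(d_model)

    def __call__(self, h):
        # Residual form: h + up(relu(down(h)))
        return h + relu(h @ self.w_down + self.b_down) @ self.w_up + self.b_up

h = np.ones((4, 768))                       # (sequence length, hidden size)
adapter = BottleneckAdapter(d_model=768, d_bottleneck=64)
out = adapter(h)
print(out.shape)        # (4, 768)
print(np.allclose(out, h))  # True: identity at initialization
```

With these example sizes, the adapter adds only about 2 x 768 x 64 (roughly 100K) trainable parameters per layer, which is why training adapters is far cheaper than fine-tuning all parameters of the pre-trained model.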