Coordination among multiple autonomous, distributed cognitive agents is one of the most challenging and ubiquitous problems in Distributed AI in general, and in collaborative multi-agent systems in particular. A particularly prominent coordination problem is that of group, team, or coalition formation. Most approaches to this problem in the literature assume fixed interactions among the autonomous agents involved in the coalition formation process. Moreover, the prior research in which agents are actually able to learn and adapt from their past interactions focuses mainly on reinforcement learning at the individual agent level. We argue that, in many important applications and contexts, complex large-scale collaborative multi-agent systems need to learn and adapt at multiple organizational, hierarchical, and logical levels. In particular, the agents need to learn both at the level of individual agents and at the system or agent-ensemble level, and then to integrate these different sources of learned knowledge and behavior, in order to solve complex tasks effectively in the dynamic, partially observable, and noisy environments typical of multi-agent settings. In this paper, we describe a conceptual framework for learning how to coordinate effectively at three qualitatively distinct levels: those of (i) individual agents, (ii) small groups of agents, and (iii) very large agent ensembles (or alternatively, depending on the nature of the multi-agent system, the system or central-control level). We briefly illustrate the applicability and usefulness of the proposed framework on an important practical coordination problem: the distributed coordination of a large ensemble of unmanned vehicles on a complex multi-task mission.
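To make the three-level structure concrete, the following is a minimal sketch of one way the levels could be composed. It assumes tabular Q-learning at the individual-agent level and simple vote-based aggregation at the group and ensemble levels; all class and method names (`IndividualAgent`, `Group`, `Ensemble`) are illustrative assumptions for this sketch, not the framework's actual design, which is developed in the body of the paper.

```python
# Illustrative sketch (hypothetical): learning and integration at three levels.
# Level (i): each agent learns action values from its own experience.
# Level (ii): a small group integrates its members' learned policies by voting.
# Level (iii): the ensemble (or central controller) combines group decisions.
from collections import defaultdict


class IndividualAgent:
    """Level (i): tabular Q-learning over (state, action) pairs (an assumed
    choice; the framework itself is agnostic to the learning algorithm)."""

    def __init__(self, actions, alpha=0.5, gamma=0.9):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.actions = list(actions)
        self.alpha, self.gamma = alpha, gamma

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update toward the TD target.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

    def best_action(self, state):
        return max(self.actions, key=lambda a: self.q[(state, a)])


class Group:
    """Level (ii): integrates individually learned behavior by majority vote
    over the members' greedy actions."""

    def __init__(self, members):
        self.members = members

    def recommend(self, state):
        votes = [m.best_action(state) for m in self.members]
        return max(set(votes), key=votes.count)


class Ensemble:
    """Level (iii): combines group-level recommendations, again by vote."""

    def __init__(self, groups):
        self.groups = groups

    def decide(self, state):
        votes = [g.recommend(state) for g in self.groups]
        return max(set(votes), key=votes.count)
```

For example, if every agent's experience rewards action `"b"` over `"a"` in some state, the individual Q-tables, the group votes, and the ensemble decision all converge on `"b"`, illustrating how knowledge learned at the lowest level propagates upward through the integration layers.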