An elastic stream computing system is expected to process dynamic and volatile data streams with low latency and high throughput in a timely manner. Effective management of a stream application, by scaling the workload of each computing node in or out at runtime, is considered one of the keys to achieving elastic computing. Much existing work attempts to build an elastic stream computing system from a single perspective or at a single level, which limits the achievable performance improvement. To address the problems caused by single-level management, in this paper we propose and implement a multi-level collaborative framework (called Mc-Stream) for elastic stream computing systems. This paper introduces our solution from the following aspects: (1) Extensive experiments show that system performance is affected by multiple factors located at different levels; a multi-level collaborative optimization strategy can coordinate these factors and optimize performance to a greater extent. (2) A system model is constructed to explain the multi-level collaborative framework, comprising a topology model, a data model, and a grouping model. The process of the multi-level collaborative framework is formalized, including optimizing the number of instances, determining the data stream load ratio among instances, and deploying instances. (3) System performance is optimized at multiple levels (user level, instance level, scheduling level, and resource level). It is further improved by components for lightweight instance management, available-resource-aware data stream redirection, fast and effective scheduling management, and asynchronous runtime redeployment without state loss. (4) Mc-Stream is implemented on top of the Apache Storm platform and evaluated with real-world stream applications on metrics such as system latency, throughput, and resource utilization.
Experimental results show the significant improvements made by Mc-Stream: compared with existing state-of-the-art scheduling strategies, it reduces average system latency by 32%, increases average system throughput by 26%, and raises average resource utilization by 34%.