Abstractive multi-document summarization (MDS) aims to summarize and paraphrase the salient information across multiple documents. To handle the long inputs that multiple documents produce, most previous work extracts salient sentence-level information from the input documents and then summarizes the extracted content. However, the aspects covered by the documents are neglected. This limited ability to discover aspect-specific content hampers key information seeking and undermines the comprehensiveness of the generated summaries. To address this issue, we propose a novel Supervised Aspect-Learning Abstractive Summarization framework (SALAS) with a new aspect information loss (AILoss) that learns aspect information to heuristically supervise the generation process. Specifically, SALAS adopts three probes to capture aspect information, which serves both as constraints on the objective function and as supplementary information to be expressed in the representations. Through AILoss, aspect information is explicitly discovered and exploited to facilitate generating comprehensive summaries. We conduct extensive experiments on three public datasets. The results demonstrate that SALAS outperforms previous state-of-the-art (SOTA) baselines, achieving new SOTA performance on all three MDS datasets.
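As a rough illustration of how such an objective could be composed (the weighting term $\lambda$ and the per-probe loss decomposition below are assumptions for exposition, not the paper's exact formulation), the training loss can be sketched as the standard generation loss augmented by the aspect information loss aggregated over the three probes:

$$
\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{gen}} + \lambda \, \mathcal{L}_{\text{AIL}},
\qquad
\mathcal{L}_{\text{AIL}} = \sum_{k=1}^{3} \mathcal{L}_{\text{probe}}^{(k)},
$$

where $\mathcal{L}_{\text{gen}}$ denotes the negative log-likelihood of the reference summary and each $\mathcal{L}_{\text{probe}}^{(k)}$ supervises one probe's aspect prediction on the encoder representations.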