Falling down is a serious health problem and has become one of the major causes of accidental death among the elderly living alone. In recent years, many efforts have been devoted to fall recognition based on wearable sensors or standard vision sensors. However, prior methods carry the risk of privacy leakage, and almost all of them operate on trimmed video clips, so they cannot localize when falls occur in long videos. For these reasons, this article proposes a fall temporal localization framework based on bioinspired vision sensors. Bioinspired vision sensors, such as the dynamic and active-pixel vision sensor (DAVIS) camera used in this work, respond to per-pixel brightness changes, and each pixel works independently and asynchronously, in contrast to standard vision sensors. This property gives the sensor a very high dynamic range and inherent privacy preservation. The temporal localization framework follows the proven proposal-and-classification paradigm. First, to better represent the event data, an adaptive temporal window conversion mechanism is developed in place of the typical constant temporal window mechanism. Second, for efficient and high-recall proposal generation, the event temporal density is used as the actionness score and a 1D watershed algorithm is applied to generate proposals, rather than the traditional sliding-window scheme. In addition, we combine temporal and spatial attention mechanisms with our feature extraction network to temporally model the falls. Finally, to evaluate the performance of our framework, 30 volunteers are recruited for simulated fall experiments. The experimental results show that our framework achieves precise temporal localization of falls and state-of-the-art performance.
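The abstract does not detail the adaptive temporal window conversion, so the following is only a minimal, hypothetical sketch in Python. It assumes the common event-camera practice of accumulating a fixed number of events per frame, so that the effective window length shrinks when the event rate (motion) is high and stretches when the scene is static; the function name, field layout, and `events_per_frame` parameter are illustrative, not taken from the paper.

```python
import numpy as np

def events_to_frames(events, sensor_size=(260, 346), events_per_frame=5000):
    """Convert an asynchronous event stream into 2D count images.

    Hypothetical sketch of an adaptive temporal window: each frame gathers a
    fixed number of events instead of a fixed time slice, so the window
    duration adapts to the event rate.

    events: NumPy structured array with fields 't', 'x', 'y' (and optionally
    polarity 'p'), sorted by timestamp.
    Returns a list of (frame, t_start, t_end) tuples.
    """
    frames = []
    for start in range(0, len(events), events_per_frame):
        chunk = events[start:start + events_per_frame]
        if len(chunk) == 0:
            break
        frame = np.zeros(sensor_size, dtype=np.float32)
        # Accumulate events into a per-pixel count image.
        np.add.at(frame, (chunk['y'], chunk['x']), 1.0)
        frames.append((frame, chunk['t'][0], chunk['t'][-1]))
    return frames
```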
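Similarly, the proposal-generation step can be illustrated with a small sketch. The function below (hypothetical names, bin sizes, and thresholds, not taken from the paper) bins event timestamps into a 1D temporal density curve, uses it as the actionness score, and groups consecutive above-threshold bins into candidate intervals at several thresholds, in the spirit of a watershed-style temporal actionness grouping rather than sliding windows.

```python
import numpy as np

def temporal_proposals(event_times, duration, bin_size=0.5,
                       thresholds=(0.2, 0.4, 0.6, 0.8), min_bins=2):
    """Generate temporal proposals from event density (hypothetical sketch).

    1. Bin event timestamps to obtain a 1D density curve (actionness score).
    2. For each threshold (fraction of the peak density), merge consecutive
       bins above the threshold into candidate [start, end] intervals.
    """
    n_bins = int(np.ceil(duration / bin_size))
    density, edges = np.histogram(event_times, bins=n_bins, range=(0, duration))
    density = density / max(density.max(), 1)  # normalize to [0, 1]

    proposals = set()
    for thr in thresholds:
        above = density >= thr
        start = None
        # Append a sentinel False so runs ending at the last bin are closed.
        for i, flag in enumerate(np.append(above, False)):
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                if i - start >= min_bins:
                    proposals.add((edges[start], edges[i]))
                start = None
    return sorted(proposals)
```

In the proposal-and-classification paradigm described above, such candidate intervals would then be scored by the attention-augmented feature extraction network to decide whether each one contains a fall.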