Older adults are highly susceptible to falls, which pose a significant threat to their health. However, falls are preventable and are not an inevitable part of aging. Many fall detection systems have been developed to mitigate this risk, but traditional systems based on wearable devices or image recognition have notable drawbacks, such as poor usability and privacy concerns. Recently, WiFi-based fall detection systems have been proposed to address these problems, yet they commonly suffer from reduced accuracy because the system is trained in one environment, where the signals are collected, and deployed in another. Our proposed solution extracts only the features of the signal changes caused by a specific human action. To implement this, we use Channel State Information (CSI) to train Convolutional Neural Networks (CNNs) that classify the action. We designed a prototype to evaluate the performance of the proposed method. Our simulation results show average accuracies of 93.2% in the same environment and 90.3% in a different environment.
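To illustrate the classification stage, below is a minimal sketch of a CNN that labels windows of CSI amplitude as fall or non-fall activity. The network name (`CSIFallNet`), the input size (30 subcarriers by 200 packets), the two-class label set, and the use of PyTorch are illustrative assumptions rather than the architecture used in this work; the earlier step of isolating the action-induced signal change is assumed to happen during preprocessing.

```python
# Minimal sketch (assumed architecture, not the paper's): a small CNN that
# classifies preprocessed CSI amplitude windows into fall vs. non-fall.
import torch
import torch.nn as nn

class CSIFallNet(nn.Module):
    def __init__(self, num_subcarriers=30, window_len=200, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            # Treat the CSI window as a 1-channel "image" of shape
            # (num_subcarriers, window_len).
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Two 2x poolings shrink each spatial dimension by a factor of 4.
        self.classifier = nn.Linear(
            32 * (num_subcarriers // 4) * (window_len // 4), num_classes
        )

    def forward(self, x):
        # x: (batch, 1, num_subcarriers, window_len) CSI amplitude window
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# Usage: classify a single preprocessed CSI window.
model = CSIFallNet()
window = torch.randn(1, 1, 30, 200)  # placeholder for a real CSI window
logits = model(window)
pred = logits.argmax(dim=1)          # 0 = no fall, 1 = fall (assumed labels)
```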