Falls are a serious concern among the elderly because they are a major cause of harm to physical and mental health. Despite their potential for harm, falls can be prevented with proper care and monitoring. The motivation for this research is therefore to implement an algorithmic solution that leverages Machine Learning to detect falls in the elderly. Many existing studies on fall detection rely on a single data source: wearable, environmental, or vision. Such single-source approaches suffer from low fall detection rates and high false alarm rates, whereas the literature indicates that combining two data sources can yield higher accuracy and fewer false alarms. The purpose of this study is to contribute to the fields of Machine Learning and Fall Detection by investigating optimal ways to apply common machine and deep learning algorithms trained on multimodal fall data. In addition, the study proposes a multimodal approach in which two separate classifiers, one based on Machine Learning and one on Deep Learning, are trained and then combined into an overall system through sensor fusion in the form of a majority voting approach. Each trained model outputs an array of three class percentages; the percentages belonging to the same class are averaged across the two arrays, and the class with the highest averaged percentage is taken as the classification result. The resulting system achieved 97% accuracy, with the best performance obtained by the Convolutional Neural Network. These results exceed those reported by other state-of-the-art research in the field.
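
The fusion rule described above amounts to averaging the per-class scores of the two classifiers and selecting the class with the highest fused score. The following is a minimal sketch of that step only; the function and variable names, the class labels, and the example probability values are illustrative assumptions, not part of the original system.

```python
import numpy as np

def fuse_predictions(probs_a, probs_b):
    """Average the per-class scores from two classifiers and return the
    index of the class with the highest fused score (assumed fusion rule)."""
    probs_a = np.asarray(probs_a, dtype=float)
    probs_b = np.asarray(probs_b, dtype=float)
    fused = (probs_a + probs_b) / 2.0  # element-wise average per class
    return int(np.argmax(fused)), fused

# Hypothetical outputs of the two classifiers for one sample
# (three classes; the label ordering here is assumed for illustration)
ml_probs = [0.10, 0.25, 0.65]   # e.g. classical ML classifier
dl_probs = [0.05, 0.15, 0.80]   # e.g. deep learning classifier
label, fused = fuse_predictions(ml_probs, dl_probs)
print(label, fused)             # -> 2 [0.075 0.2   0.725]
```

Averaging class scores in this way is commonly referred to as soft voting; the paper describes it under the umbrella of a majority voting approach.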