Year: 2022
Author: Takuma K. et al.
Algorithms Used: Logistic Regression, SVM, Random Forest, XGBoost, LightGBM, and CatBoost, with SMOTE, SMOTE-Tomek Links, and SMOTE-ENN used to resample the data.
Results: Model performance was evaluated with accuracy, recall, precision, F1-score, and ROC-AUC. Boosting algorithms outperformed the traditional classification algorithms and Random Forest, and XGBoost combined with SMOTE-Tomek Links achieved the highest F1-score. These results suggest that boosting algorithms performed better than traditional classification algorithms on ROC-AUC, although the hybrid resampling methods did not necessarily improve model performance.
Identified Gap: The available studies used only undersampling and oversampling techniques to handle data imbalance; hybrid ensemble learning classifiers were not effectively used to improve model accuracy. This study uses SMOTE, SMOTE-ENN, and SMOTE-Tomek Links to create new datasets, builds ensemble models on each, and evaluates every model's performance across these datasets to see how customer churn datasets behave under hybrid ensemble techniques.

Year: 2022
Author: Wagh et al.
Algorithms Used: Random Forest and Decision Tree, with upsampling and ENN used to resample the data.
Results: The random forest classifier produces better results than the decision tree classifier, predicting churn with an overall accuracy of 99%; the confusion-matrix metrics show a precision of 99%, a recall of 99%, and an accuracy of 99.09%.
Identified Gap: The available studies used only undersampling and oversampling techniques to handle data imbalance; hybrid ensemble learning classifiers were not effectively used to improve model accuracy. This study uses SMOTE, SMOTE-ENN, and SMOTE-Tomek Links to create new datasets, builds ensemble models on each, and evaluates every model's performance across these datasets to see how customer churn datasets behave under hybrid ensemble techniques.

Year: 2022
Author: Fujo S. W. et al.
Algorithms Used: Deep-BP-ANN.
Results: To address the class imbalance, Random Oversampling was used to balance both datasets. In predicting customer churn, their Deep-BP-ANN outperformed the ML techniques XGBoost, Logistic Regression, Naïve Bayes, and KNN.

Year: 2022
Author: Makurumidze L. et al.
Algorithms Used: Gradient Boosting, Random Forest, AdaBoost, and Decision Tree.
Results: Gradient Boosting and Random Forest classifiers performed best of the four on a publicly available bank dataset; Random Forest was then chosen for implementation.
Identified Gap: The available studies used only undersampling and oversampling techniques to handle data imbalance; hybrid ensemble learning classifiers were not effectively used to improve model accuracy. This study uses SMOTE, SMOTE-ENN, and SMOTE-Tomek Links to create new datasets, builds ensemble models on each, and evaluates every model's performance across these datasets to see how customer churn datasets behave under hybrid ensemble techniques.
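The resampling-plus-ensemble comparison described in the Identified Gap column can be illustrated with a minimal sketch. This is not code from any of the studies above: it assumes the imbalanced-learn, scikit-learn, and xgboost packages, and placeholder feature/label arrays X and y from a churn dataset.

```python
# Minimal sketch (assumed setup, not the authors' code): compare SMOTE,
# SMOTE-ENN, and SMOTE-Tomek resampling across several classifiers on an
# imbalanced churn dataset, scoring F1 and ROC-AUC on a held-out test set.
from imblearn.over_sampling import SMOTE
from imblearn.combine import SMOTEENN, SMOTETomek
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, roc_auc_score
from xgboost import XGBClassifier  # assumes the xgboost package is installed

def compare_resampling(X, y):
    """Fit each classifier on each resampled training set and report scores."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42)

    resamplers = {
        "SMOTE": SMOTE(random_state=42),
        "SMOTE-ENN": SMOTEENN(random_state=42),
        "SMOTE-Tomek": SMOTETomek(random_state=42),
    }
    models = {
        "LogisticRegression": LogisticRegression(max_iter=1000),
        "RandomForest": RandomForestClassifier(random_state=42),
        "XGBoost": XGBClassifier(eval_metric="logloss", random_state=42),
    }

    for r_name, resampler in resamplers.items():
        # Resample only the training split; the test split stays untouched.
        X_res, y_res = resampler.fit_resample(X_train, y_train)
        for m_name, model in models.items():
            model.fit(X_res, y_res)
            pred = model.predict(X_test)
            proba = model.predict_proba(X_test)[:, 1]
            print(f"{r_name} + {m_name}: "
                  f"F1={f1_score(y_test, pred):.3f}, "
                  f"ROC-AUC={roc_auc_score(y_test, proba):.3f}")
```

Note that resampling is applied only to the training split: resampling before the train/test split would leak synthetic copies of test points into training and inflate the reported scores.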