Table 2. Five-fold cross-validation results compared with other ML models using all features

Metric, mean (SD)
Model              Accuracy         Specificity      Sensitivity      Balanced accuracy  AUROC
XGB                0.6429 (0.0050)  0.6421 (0.0052)  0.7048 (0.0151)  0.6734 (0.0051)    0.7298 (0.0051)
GBM                0.6590 (0.0065)  0.6585 (0.0067)  0.7007 (0.0093)  0.6796 (0.0051)    0.7388 (0.0050)
LGBM               0.6542 (0.0066)  0.6535 (0.0066)  0.7069 (0.0086)  0.6802 (0.0055)    0.7401 (0.0055)
Random forest      0.6232 (0.0208)  0.6220 (0.0212)  0.7191 (0.0099)  0.6706 (0.0064)    0.7284 (0.0043)
AdaBoost           0.6434 (0.0032)  0.6427 (0.0033)  0.6982 (0.0152)  0.6705 (0.0077)    0.7259 (0.0065)
LR                 0.6460 (0.0017)  0.6448 (0.0019)  0.7363 (0.0175)  0.6906 (0.0082)    0.7538 (0.0091)
GBM + LGBM         0.6565 (0.0052)  0.6559 (0.0053)  0.7043 (0.0080)  0.6801 (0.0050)    0.7395 (0.0052)
GBM + LR           0.6952 (0.0038)  0.6953 (0.0039)  0.6857 (0.0178)  0.6905 (0.0090)    0.7529 (0.0079)
LGBM + LR          0.6823 (0.0038)  0.6820 (0.0039)  0.6996 (0.0164)  0.6908 (0.0081)    0.7530 (0.0080)
GBM + LGBM + LR    0.6892 (0.0046)  0.6893 (0.0047)  0.6879 (0.0152)  0.6886 (0.0076)    0.7503 (0.0072)
Abbreviations: SD, standard deviation; AUROC, area under the receiver operating characteristic curve; XGB, XGBoost; GBM, gradient boosting machine; LGBM, light gradient-boosting machine; AdaBoost, adaptive boosting; LR, logistic regression.
Bold values indicate the best-performing model.
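The mean (SD) metrics in Table 2 can be reproduced with standard five-fold cross-validation. A minimal sketch, assuming scikit-learn and an illustrative synthetic dataset (not the paper's data or its tuned models):

```python
# Hedged sketch: five-fold CV mean (SD) metrics in the style of Table 2.
# The dataset and the LR hyperparameters below are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_validate

# Synthetic binary-classification data stands in for the study cohort.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Stratified five-fold split, as is typical for imbalanced clinical labels.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_validate(
    LogisticRegression(max_iter=1000), X, y, cv=cv,
    scoring=["accuracy", "balanced_accuracy", "roc_auc"],
)

# Report each metric as mean (SD) across the five folds.
for name in ("accuracy", "balanced_accuracy", "roc_auc"):
    vals = scores[f"test_{name}"]
    print(f"{name}: {vals.mean():.4f} ({vals.std(ddof=1):.4f})")
```

Sensitivity and specificity (also in Table 2) would need a custom scorer per fold; the three built-in scorers above illustrate the aggregation. The ensemble rows (e.g., GBM + LR) would average or stack the constituent models' predicted probabilities before scoring.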