Abstract

Measuring toxicity is an important step in drug development. However, the experimental methods currently used to estimate drug toxicity are expensive and require considerable time and effort, making them unsuitable for large-scale evaluation of drug toxicity. As a consequence, there is a high demand for computational models that can predict drug toxicity risks. In this paper, we used a dataset of 553 drugs that are biotransformed in the liver. The data cover four toxic effects, namely, mutagenic, tumorigenic, irritant, and reproductive effects, and each drug is represented by 31 chemical descriptors. This paper proposes two models for predicting drug toxicity risks. The proposed models consist of three phases. In the first phase, the most discriminative features are selected using rough set-based methods to reduce the classification time and improve the classification performance. In the second phase, three different sampling algorithms, namely, Random Under-Sampling, Random Over-Sampling, and the Synthetic Minority Oversampling Technique (SMOTE), are used to obtain balanced data. In the third phase, the first proposed model employs the Neutrosophic Rule-based Classification System (NRCS), and the second model uses Genetic NRCS (GNRCS), to classify an unknown drug as toxic or non-toxic. The experimental results showed that the proposed models obtained high sensitivity (89–93%), specificity (91–97%), and GM (90–94%) for all toxic effects. Overall, these results indicate that the proposed models could be used for the prediction of drug toxicity in the early stages of drug development.
