
NanoTox: Development of a Parsimonious In Silico Model for Toxicity Assessment of Metal-Oxide Nanoparticles Using Physicochemical Features.

Nilesh Anantha Subramanian, Ashok Palaniappan.

Abstract

Metal-oxide nanoparticles find widespread applications in mundane life today, and cost-effective evaluation of their cytotoxicity and ecotoxicity is essential for sustainable progress. Machine learning models use existing experimental data and learn quantitative feature-toxicity relationships to yield predictive models. In this work, we adopted a principled approach to this problem by formulating a novel feature space based on intrinsic and extrinsic physicochemical properties, including periodic table properties but exclusive of in vitro characteristics such as cell line, cell type, and assay method. An optimal hypothesis space was developed by applying variance inflation analysis to the correlation structure of the features. Consequent to a stratified train-test split, the training dataset was balanced for the toxic outcomes and a mapping was then achieved from the normalized feature space to the toxicity class using various hyperparameter-tuned machine learning models, namely, logistic regression, random forest, support vector machines, and neural networks. Evaluation on an unseen test set yielded >96% balanced accuracy for the random forest, and neural network with one-hidden-layer models. The obtained cytotoxicity models are parsimonious, with intelligible inputs, and an embedded applicability check. Interpretability investigations of the models identified the key predictor variables of metal-oxide nanoparticle cytotoxicity. Our models could be applied on new, untested oxides, using a majority-voting ensemble classifier, NanoTox, that incorporates the best of the above models. NanoTox is the first open-source nanotoxicology pipeline, freely available under the GNU General Public License (https://github.com/NanoTox).
© 2021 The Authors. Published by American Chemical Society.


Year:  2021        PMID: 34056326      PMCID: PMC8154018          DOI: 10.1021/acsomega.1c01076

Source DB:  PubMed          Journal:  ACS Omega        ISSN: 2470-1343


Introduction

Nanotechnology has delivered the promise of “plenty of room at the bottom” with transformative applications for human welfare.[1] The distinctive properties of nanoscale materials have been indispensable in industrial and medical applications, including the delivery of biologically active molecules and development of biosensors for human health and disease.[2] Engineered metal-oxide nanoparticles are characterized by a concentration of sharp edges and lend themselves to a variety of uses (e.g., ref (3)). However, there is a potential caveat to nanobiotechnology: the differential nanoscale behavior of nanomaterials might also result in emergent toxic side effects in the biological domain and ecological realm.[4−7] These hazards are related to the capacity of nanomaterials to engender free radicals in the cellular milieu, which inflict damaging oxidative stress. Such events could trigger inflammatory responses, which could balloon out of control, leading to apoptosis and cytotoxicity[8−11] as well as genotoxicity.[12] The mundane use of nanoparticles has necessitated vigorous safety assessment of toxicity, in the interests of sustainable progress.[13−16] Such methods could also help discern safe-by-design principles that could guide adjustments to the nanoparticle formulation and thereby mitigate adverse effects at the source. 
Intelligent and alternative testing strategies could accelerate rational design of nanoparticles for optimal functionality and minimal toxicity.[17−20] Various computational methods have been applied to predicting the toxicity of engineered nanomaterials,[21−31] but with the accumulation of high-quality data, machine learning methods have shown the most promise.[32] Such techniques provide a noninvasive “instantaneous” readout of nanoparticle toxicity[33−35] and originate from the evolution of quantitative structure–activity relationship (QSAR) models.[36] Machine learning models of nanoparticle toxicity have tended to be either generalized[37] or tissue-specific[38,39] and are built from experimental toxicity data that have been scored, standardized, and curated into databases like the safe and sustainable nanotechnology db (S2NANO).[40−42] Three considerations motivated our work. First, earlier studies have tended to neglect systematic multicollinearity among the predictor variables, which leads to confounding and data snooping. Second, gross imbalance between the numbers of nontoxic and toxic instances usually exists, which could lead to overfitting to the “nontoxic” class.[43] Third, we were motivated to develop a model that would be agnostic of in vitro characteristics, such as cell line, cell type, and assay method. A truly general model of nanoparticle cytotoxicity, independent of in vitro factors, would lead to significantly broader interpretability and wider applicability.[44] Our study also departs from the notion that tissue-specific models are superior to generalized models[39] and demonstrates that model interpretability is best achieved using a minimal nonredundant feature space, consistent with Occam’s parsimony. We have deployed insights from our study into a majority-voting ensemble classifier, with a view to increasing reliability.
Finally, the end-to-end pipeline of our work, including the ensemble classifier, is made freely available as a user-friendly open-source nanosafety prediction system, NanoTox, under GNU GPL (https://github.com/NanoTox). All implementations were carried out in R (www.r-project.org).

Methods

Problem and Dataset

In vitro parameters such as cell type, cell line, cell origin, cell species, and type of assay could be extraneous to modeling the intrinsic hazard posed by a nanoparticle to cellular viability and the environment. This motivated us to formulate the problem in a feature space devoid of biological predictors. The machine learning task is stated as: given a certain nanoparticle at a certain dose for a certain duration, would its administration prove cytotoxic? To address this problem, we used a hybrid dataset building on the physicochemical descriptors and toxicity data found in Choi et al.’s study.[36] All in vitro features were removed from the dataset, as noted above. Extrinsic physicochemical properties, namely, dosage and exposure duration, were retained.[45] The periodic table properties of metal-oxide nanoparticles published in Kar et al.[46] were used to augment the dataset. Only complete cases were considered in the process of matching the two datasets. This process yielded a final dataset of 19 features of five metal-oxide nanoparticles: Al2O3, CuO, Fe2O3, TiO2, and ZnO (Table 1). Cytotoxicity was used as the outcome variable, encoded as “1” (true) if measured cell viability was <50% with respect to the control, and “0” (false) otherwise. The novel dataset is available on NanoTox.
Table 1

Physicochemical Features of MeO Nanoparticles Considered in Our Study

s. no. | type of feature | feature | shorthand
1 | intrinsic physicochemical properties | core size | CoreSize
2 | intrinsic physicochemical properties | hydrodynamic size | HydroSize
3 | intrinsic physicochemical properties | surface charge | SurfCharge
4 | intrinsic physicochemical properties | surface area | SurfArea
5 | intrinsic physicochemical properties | conduction band energy | Ec
6 | intrinsic physicochemical properties | valence band energy | Ev
7 | intrinsic physicochemical properties | standard enthalpy of formation | Hsf
8 | intrinsic physicochemical properties | Mulliken electronegativity | MeO
9 | intrinsic physicochemical properties | enthalpy of formation of cation | enthalpy
10 | intrinsic physicochemical properties | polarization ratio | ratio
11 | periodic table properties | Pauling electronegativity | Eneg
12 | periodic table properties | summation of electronegativity | esum
13 | periodic table properties | molecular weight | MW
14 | periodic table properties | number of oxygen atoms | NOxygen
15 | periodic table properties | number of metal atoms | NMetal
16 | periodic table properties | ratio of esum to NOxygen | esumbyo
17 | periodic table properties | oxidation state | ox
18 | extrinsic physicochemical properties | exposure time | Time
19 | extrinsic physicochemical properties | dosage | Dose

Elimination of Multicollinearity

A nonredundant feature space would translate into an optimal hypothesis space. A simple inspection of the properties in Table 1 suggested the existence of correlated features. Correlated features adversely impact model performance and complicate model interpretation, and multicollinearity is an even deeper problem in the pursuit of a nonredundant feature space.[47] The dataset was randomly split into a 70:30 train/test ratio stratified on the outcome variable,[48] and the training set alone was used for the feature selection process, to prevent any data leakage from the test set. The existence of highly correlated (Pearson’s ρ ≥ 0.9) variables was ascertained. To address multicollinearity, we used a systematic variance inflation factor (vif) analysis. Each independent variable was regressed on all of the other independent variables in turn, and the goodness of fit of each of these models (fraction of variance explained, R^2) was estimated. The vif score for each independent variable was then calculated as vif = 1/(1 − R^2). In each iteration of the vif analysis, the variable in the current set with the largest vif score was eliminated. This process was continued until a set of variables whose vif scores were all <5.0 was obtained. Note that a vif score of 1.0 is possible only when a variable is perfectly independent of all other variables (all pairwise Pearson’s ρ identically zero).
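The iterative procedure above can be sketched in a few lines. The paper's pipeline is implemented in R; this NumPy stand-in (with illustrative function names and toy data) is a sketch, not the original code.

```python
import numpy as np

def vif(X, i):
    """vif of column i: regress it on the remaining columns; vif = 1/(1 - R^2)."""
    y = X[:, i]
    A = np.column_stack([np.delete(X, i, axis=1), np.ones(len(y))])  # add intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r2 = 1.0 - (y - A @ coef).var() / y.var()
    return 1.0 / (1.0 - r2)

def eliminate_multicollinearity(X, names, threshold=5.0):
    """Repeatedly drop the highest-vif feature until all vif scores are < threshold."""
    names = list(names)
    while X.shape[1] > 1:
        vifs = [vif(X, i) for i in range(X.shape[1])]
        worst = int(np.argmax(vifs))
        if vifs[worst] < threshold:
            break
        X = np.delete(X, worst, axis=1)
        names.pop(worst)
    return X, names
```

On the study's data, this procedure reduced the 19 features to nine.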

Feature Transformation

The feature space could be vulnerable to heteroscedastic effects, given the varying scales of the variables, so it is necessary to preprocess the data to prevent features with large variances from swamping the rest. Positively skewed features were stabilized using log transformations. Ec values, which are negative, were first offset by +6.17 and then log-transformed. Dosage spanned many orders of magnitude and was log10-transformed. Exposure time spanned 2 orders of magnitude, so we performed a log2 transformation. Surface charge, whose values can be either positive or negative, was standardized (i.e., z-transformed). All of the other features were log-transformed (to the base e).
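The per-feature scheme above can be summarized in code. This Python sketch is a stand-in for the paper's R preprocessing; the dict-of-arrays input format is an assumption, column names follow Table 1, and NOxygen is left untransformed per Table 4.

```python
import numpy as np

def normalize_features(feats):
    """feats: dict mapping feature name -> array of raw values (assumed format)."""
    out = {}
    out["Ec"] = np.log(np.asarray(feats["Ec"]) + 6.17)    # offset negatives, then ln
    out["Dose"] = np.log10(np.asarray(feats["Dose"]))     # spans many orders of magnitude
    out["Time"] = np.log2(np.asarray(feats["Time"]))      # spans ~2 orders of magnitude
    sc = np.asarray(feats["SurfCharge"], dtype=float)
    out["SurfCharge"] = (sc - sc.mean()) / sc.std()       # z-score: values may be +/-
    for col in ("CoreSize", "HydroSize", "SurfArea", "Eneg"):
        out[col] = np.log(np.asarray(feats[col]))         # natural log for the rest
    out["NOxygen"] = np.asarray(feats["NOxygen"])         # small integer range, untouched
    return out
```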

Class Rebalancing

The cost of missing a toxic instance is manifold higher than the cost of missing a nontoxic instance, and the imbalance between toxic vs nontoxic instances could exacerbate this problem. In such situations, where the essential problem is to learn the minority outcome class effectively, resampling techniques could be useful.[49] We addressed the class skew problem using Synthetic Minority Over-Sampling TEchnique (SMOTE).[50] SMOTE synthesizes new minority samples from the existing ones, without influencing the instances of the majority class, thereby increasing the number of “toxic” instances relative to the number of nontoxic instances. Balancing the dataset thus would normalize the learning bias arising from unequal representation of the outcome classes.
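The core of SMOTE fits in a few lines: each synthetic sample is a random interpolation between a minority-class instance and one of its k nearest minority neighbors. The paper used an existing R implementation; this Python sketch is illustrative only.

```python
import numpy as np

def smote(X_min, n_synthetic, k=5, seed=None):
    """Generate n_synthetic new samples from the minority-class matrix X_min."""
    rng = np.random.default_rng(seed)
    synth = []
    for _ in range(n_synthetic):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)  # distances to every minority sample
        neighbors = np.argsort(d)[1:k + 1]            # skip self at position 0
        j = rng.choice(neighbors)
        lam = rng.random()                            # interpolation weight in [0, 1)
        synth.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synth)
```

Because every synthetic point lies on a segment between two real minority points, the oversampled set never leaves the convex hull of the observed toxic instances.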

Predictive Modeling

The overall workflow of our approach is summarized in Figure 1. The normalized training dataset was balanced using SMOTE, and a variety of classification algorithms were tried and tested, namely, logistic regression,[51] random forests,[52] SVMs,[53] and neural networks.[54,55] Table 2 shows the classifiers and their hyperparameters considered in our work. The optimal values of the hyperparameters were found using 10-fold internal cross-validation.[56] The performance of each optimized model was evaluated on the normalized and unseen test set. To penalize false positives and false negatives equally, we used balanced accuracy, the arithmetic mean of sensitivity and specificity, as the objective measure of performance.
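Balanced accuracy weighs both error types equally; a minimal sketch (Python, illustrative):

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity (toxic recall) and specificity (nontoxic recall)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return (tp / (tp + fn) + tn / (tn + fp)) / 2
```

For instance, perfect sensitivity with eight false positives among 121 nontoxic test instances gives (1 + 113/121)/2 ≈ 0.9669, the 96.69% reported in the Results.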
Figure 1

Workflow of the study up to predictive modeling. Preprocessing refers to both normalization and class balancing. Only the training dataset was used for feature selection; the test set was kept invisible during the model development process.

Table 2

Classifiers Used in Our Study and Their Respective Hyperparameters^a

no. | classifier | type/basis | package/function | hyperparameters | optimization
1 | logistic regression | algebraic | glm | threshold (= 0.5) | n/a
2 | random forest | rule-based | randomForest | (1) #trees (= 500); (2) mtry | caret::train
3 | support vector machine | geometric | e1071 | (1) kernel (linear, radial, polynomial); (2) cost; (3) γ; (4) degree | e1071::tune
4 | neural networks | connectionist | RSNNS | (1) #hidden layers (= 1, 2); (2) size of each hidden layer; (3) decay rate | caret::train, caret::mlpML

^a mtry represents the number of features used for each split in the random forest model.


Applicability Domain

The specification of the applicability boundaries of machine learning models increases their reliability and utility.[44] The applicability domain defines the perimeter of model generalization to new instances and safeguards against application to atypical data. We used a Euclidean nearest-neighbor approach to define the applicability domain (AD) of the machine learning models.[57] For each instance in the training set, its distances to all of the other training instances were found, and its nearest neighbors were defined as the k smallest values from this set, where k is an integer parameter set to the square root of the number of instances in the training set. The mean distance of each instance to its k nearest neighbors was computed, and this process was repeated for all instances to yield the sampling distribution of these mean distances. The mean and standard deviation of this sampling distribution were designated μ and σ, respectively. The applicability domain threshold is then defined as μ + zσ, where z is an empirical parameter (related to the z-distribution) characterizing the width of belief in the model, here set to 1.96.
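The AD computation above might be sketched as follows (a Python stand-in for the paper's R code; function names are illustrative):

```python
import numpy as np

def ad_threshold(X_train, z=1.96):
    """Threshold = mu + z*sigma over the distribution of mean k-NN distances."""
    n = len(X_train)
    k = int(round(np.sqrt(n)))              # k = sqrt(#training instances)
    means = []
    for i in range(n):
        d = np.sort(np.linalg.norm(X_train - X_train[i], axis=1))
        means.append(d[1:k + 1].mean())     # drop self-distance, keep k nearest
    means = np.array(means)
    return means.mean() + z * means.std(), k

def in_domain(x_new, X_train, threshold, k):
    """A new instance is typical if its mean k-NN distance is within the threshold."""
    d = np.sort(np.linalg.norm(X_train - x_new, axis=1))[:k]
    return d.mean() <= threshold
```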

Results

Our dataset consisted of 483 instances of the five metal-oxide nanoparticles with 19 features and one outcome variable. Correlogram plots identified the existence of high correlation among these 19 variables (Figure 2) and especially among the periodic table properties (Figure S1). Three clusters of high correlation were revealed: one cluster of enthalpy, Hsf, ratio, ox, NOxygen, and esumbyo; a second cluster of Ec and Ev; and a third cluster of esum, NMetal, and MW. Based on the vif analysis, we obtained a feature space of just nine uncorrelated, nonredundant variables (Table 3). The highest vif of any variable in this feature space was <2.02, indicating little residual multicollinearity (Figure 3). This optimal feature space included two periodic table properties (Eneg, NOxygen), five other intrinsic physicochemical properties (CoreSize, HydroSize, SurfArea, SurfCharge, Ec), and both the extrinsic physicochemical properties (Dose, Time). This final dataset of 483 instances with nine features and one outcome variable is available at NanoTox.
Figure 2

Correlogram of the 19 features. The correlation between a row feature and a column feature is shown by a dot in the corresponding cell. The size of the dot represents the magnitude of the correlation, and color represents the sign of the correlation—blue: positive; red: negative. White indicates a value near 0, i.e., independence.

Table 3

Vif Scores for the Features in the Final Reduced Set^a

s. no. | feature | variance inflation factor
1 | CoreSize | 1.65
2 | HydroSize | 1.24
3 | SurfCharge | 1.85
4 | SurfArea | 1.58
5 | Ec | 1.50
6 | Time | 1.19
7 | Dose | 1.21
8 | Eneg | 2.02
9 | NOxygen | 1.60

^a The maximum vif score is ∼2.0, corresponding to a maximum R^2 of ∼0.5 (cf. vif = 1/(1 − R^2)).

Figure 3

Optimal hypothesis space. A correlogram of the optimized feature space shows that no subset of variables in this set would yield multicollinearity.

The nine features were normalized, producing acceptable skew values for HydroSize, SurfArea, Ec, and Time (Table 4). The normalized dataset was partitioned using a random 70:30 split stratified on the outcome variable, providing a training dataset of 339 instances (with 55 toxic instances) and an independent test dataset of 144 instances (with 23 toxic instances). The training dataset (and not the test dataset) was balanced for the minority toxic instances using SMOTE resampling, yielding 165 toxic and 220 nontoxic instances, for a training dataset of 385 instances. This normalized and balanced dataset was used to train the various classifiers. The optimal hyperparameters of each classifier were determined using the R e1071 package for the SVMs (Figure S2) and the R caret package for the neural networks, both one layer (Figure 4) and two layers (Figure S3). The full set of model-wise optimal hyperparameters can be found in Table S1. The trained, optimized classifiers were then evaluated on the unseen test dataset. All of the models, except the SVM with polynomial kernel, achieved perfect sensitivity to the toxic instances; i.e., all cytotoxic nanoparticles were classified correctly. The models were not perfectly specific to the nontoxic instances, however. On this count, the random forest and one-layer neural network models outperformed all of the others.
Each produced eight false positives, resulting in a balanced accuracy of 96.69%. Bootstrapping the test set 500 times yielded standard errors of ∼0.0189 for both the random forest and one-layer neural network models, indicating performance robustness. All of the classifiers achieved balanced accuracy >90%. Table 5 summarizes the performance of all of the models on the test set. Five nontoxic instances were classified incorrectly by all of the models; these refractory instances constitute a challenge to perfect learning. One of them was only marginally viable (viability 0.52), indicating a possible source of refractoriness.
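The robustness estimate above can be reproduced with a simple bootstrap: resample the test set with replacement and take the standard deviation of the balanced-accuracy replicates as the standard error. This Python sketch (with an inline balanced-accuracy helper) is a stand-in for the paper's R analysis.

```python
import numpy as np

def bal_acc(t, p):
    """Balanced accuracy on 0/1 label arrays."""
    sens = ((t == 1) & (p == 1)).sum() / max((t == 1).sum(), 1)
    spec = ((t == 0) & (p == 0)).sum() / max((t == 0).sum(), 1)
    return (sens + spec) / 2

def bootstrap_se(y_true, y_pred, metric=bal_acc, n_boot=500, seed=None):
    """Standard error of a metric under resampling of the test set."""
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), size=len(y_true))  # sample with replacement
        scores.append(metric(y_true[idx], y_pred[idx]))
    return float(np.std(scores))
```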
Table 4

Dataset Normalization^a

feature | skewness before | type of normalization | skewness after | range (min–max)
CoreSize | 0.92 | log | −0.2 | 2.01–4.82
HydroSize | 1.76 | log | 0.14 | 4.30–7.52
SurfCharge | 0.45 | z-score | 0.45 | −1.62 to +1.98
SurfArea | 2.14 | log | −0.23 | 1.95–5.35
Ec | 2.68 | log, with offset | −0.23 | 0.00–1.54
Time | 1.36 | rescale, log | −0.48 | 0.00–4.58
Dose | 1.74 | log10 | −1.5 | −5.00 to +2.48
Eneg | 1.46 | log | 1.26 | 0.43–0.64
NOxygen | 0.66 | none | 0.66 | 1–3

^a Log-transformation was performed to the base e. Skewness was controlled, and the ranges of all predictors were brought into the same order of magnitude.

Figure 4

Hyperparameter tuning for the one-layer neural network model. The cross-validation accuracy is sensitive to the choice of the hyperparameter set.

Table 5

Performance of the Various Models^a

id | classifier | accuracy (train) | balanced accuracy (train) | cross-valid accuracy | accuracy (test) | balanced accuracy (test)
model_1 | logistic regression | 0.94 | 0.94 | 0.93 | 0.91 | 0.95
model_2 | random forest | 0.98 | 0.98 | 0.94 | 0.94 | 0.97
model_3a | SVM-linear | 0.94 | 0.95 | 1.00 | 0.90 | 0.94
model_3b | SVM-radial | 0.94 | 0.94 | 1.00 | 0.86 | 0.92
model_3c | SVM-poly | 0.98 | 0.98 | 1.00 | 0.84 | 0.85
model_4a | neural network (1 layer) | 0.96 | 0.96 | 0.94 | 0.94 | 0.97
model_4b | neural network (2 layers) | 0.96 | 0.95 | 0.95 | 0.91 | 0.95

^a Models with balanced accuracy >94% (model_1, model_2, model_3a, model_4a, and model_4b) are those deployed in the ensemble classifier.


Deployment

The applicability domain was calculated with the normalized training data, prior to SMOTE balancing. Substituting k = 19 and z = 1.96 into the threshold formula μ + zσ yielded an AD threshold of 2.23. About 95% of the test instances (137/144) were located within the AD radius. Notably, the misclassified instances did not coincide with these outliers. We have provided a workflow, deployment.R (available at NanoTox), for prediction on new, untested oxides. The prediction is executed by a majority-voting ensemble classifier,[58] since combining the predictions of the best models on the test set improved performance to just five false positives (∼98% balanced accuracy). Any new instance supplied by the user is preprocessed (normalized), and its “typicality” is determined by calculating its distances to the instances in the original training data and finding the mean, D, of the 19 closest distances. If D is greater than the AD threshold, the instance is flagged as atypical before the ensemble model is invoked. Predictions are first obtained from the top two models, the random forest and the one-layer neural network, and a consensus prediction is sought. In the absence of a consensus, an ensemble of the top five classifiers, all with balanced accuracy >94% (Table 5), is used, and the majority prediction of this ensemble is the predicted cytotoxicity of the given instance. deployment.R automates this pipeline for a batch of new, untested oxides of any size. Furthermore, the RDS images of all of the models trained in our study are provided on NanoTox, for the interested scientist.
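The decision logic of deployment.R can be summarized as follows. This Python sketch is illustrative only: the model names follow Table 5, and each model is represented as a callable returning 0 (nontoxic) or 1 (toxic).

```python
def nanotox_predict(x, models):
    """Consensus of the top two models, else majority vote of the top five."""
    top_two = ["random_forest", "neural_net_1L"]
    votes = [models[m](x) for m in top_two]
    if votes[0] == votes[1]:                 # the two best models agree
        return votes[0]
    top_five = top_two + ["logistic_regression", "svm_linear", "neural_net_2L"]
    votes = [models[m](x) for m in top_five]
    return int(sum(votes) >= 3)              # majority of the five
```

In deployment.R, this vote is preceded by the applicability check on D described above.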

Discussion

The results are encouraging since the test set constitutes an independent validation dataset. It is clear that SMOTE balancing made a difference in the ability of the classifiers to detect the under-represented toxic instances. Filtering based on the applicability domain and the use of an ensemble classification strategy further mitigate model uncertainty, given the “no free lunch” theorem.[59] Benchmarking our results against Choi et al.,[37] the best model in each classifier family from our work outperformed the corresponding best model of their work (Table 6). The overall best models in our work (random forest and one-layer neural network) yielded a balanced accuracy of ∼97%, compared to 93% for their best overall model (“neural networks”). All five models from this work with balanced accuracy >93% are deployed in an ensemble classifier to further mitigate uncertainty in prediction.
Table 6

Benchmarking^a

model | Choi et al.^b balanced accuracy (%) | present work balanced accuracy (%)
logistic regression | 92 | 94.63
random forest | 91 | 96.69
SVM | 91 | (a) 94.21; (b) 91.74; (c) 85.21
neural networks | 93 | (a) 96.69; (b) 94.63

^a SVM (a), (b), and (c) correspond to the linear, radial, and polynomial kernels. Neural networks (a) and (b) refer to one and two hidden layer(s), respectively. No information regarding model hyperparameters was available in Choi et al. The best-performing models from our work were the random forest and the one-layer neural network.

^b Ref (37).

Measures of variable importance are central to mechanistic insights.[60] Variable importance was assessed using the varImp function of caret for the logistic regression model (Figure S4a), the one-layer neural network model (Figure S4b), and the random forest model (Figure 5a). Dose emerges as the consensus key attribute for prediction; however, there are subtle ranking differences among the models. NOxygen is a key attribute in both the random forest and one-layer neural network models, but not in the logistic regression model. Time emerges as another consensus key attribute in all of the models. Logistic regression provides not only the effect size (coefficients) of the individual variables but also an estimate of their significance, in terms of the p-values of the coefficients (Table S2). The sign of the coefficient of each variable indicates the class outcome to which the respective variable contributes. Notably, the two periodic table properties (Eneg, NOxygen) and the quantum chemical property, Ec, show large effect sizes but poor significance, while all of the other variables remain highly significant. Relative importance plots of the neural network models add a direction representing the favored binary outcome[61,62] and concur with these findings (Figures 5b and S5). Dose emerges as the key variable determining nanoparticle toxicity, with Time, HydroSize, and Eneg the other variables influencing the toxic prediction. NOxygen emerges as the key predictor influencing the nontoxic prediction, with SurfArea, Ec, and CoreSize the other predictors in this category. The numeric variable importance scores are given in Tables S3 and S4.
Figure 5

(a) Normalized variable importance for the random forest model computed with caret. Dose is by far the attribute with the greatest effect on toxicity in the random forest model. (b) Relative importance plot for the one-layer neural network. Positive values correspond to the “true” (i.e., toxic) class, and negative values correspond to the nontoxic class. Dose and NOxygen exert the greatest importance on the outcome class, though in opposite directions.

NeuralNetTools was used to visualize the best-performing one-layer neural network model, with the individual connections weighted by their importance[63] (Figure 6). The two-layer neural network model was also visualized (Figure S6). Consensus among the models is necessary for explainable AI,[64] and in this direction, we performed a Lek sensitivity analysis with the one-layer neural network model.[65] How does the response variable change with changes in a given explanatory variable, given the context of the other explanatory variables? On investigating the effect of one explanatory variable, all of the other explanatory variables are clustered into a specified number of lakes of like members. While the unevaluated explanatory variables are held constant at the centroid of one lake cluster, the explanatory variable of interest is swept from its minimum to its maximum in 100 quantile steps, with the response variable predicted at each step, yielding a sensitivity curve. This process is iterated for each lake of the unevaluated explanatory variables, yielding the sensitivity profile of the response variable with respect to the explanatory variable of interest. We set the number of clusters to 10, to visualize a sufficient number of response curves for each explanatory variable. In this way, sensitivity profiles of the response variable were obtained for each predictor (Figure 7).
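The Lek procedure described above reduces to a quantile sweep per cluster centroid. This Python sketch is a stand-in for the NeuralNetTools implementation; the function signature and names are illustrative.

```python
import numpy as np

def lek_profile(predict, X, feature_idx, centroids, steps=100):
    """One response curve per centroid ('lake') of the unevaluated variables."""
    qs = np.quantile(X[:, feature_idx], np.linspace(0, 1, steps))
    curves = []
    for c in centroids:
        grid = np.tile(c, (steps, 1))        # hold unevaluated variables at the centroid
        grid[:, feature_idx] = qs            # sweep only the variable of interest
        curves.append(predict(grid))
    return qs, np.array(curves)
```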
Figure 6

Schematic of the trained neural network one-layer model, with the weights of the connections indicated by the linewidth. Black lines indicate positive weights, and gray lines indicate negative weights. Two bias units are seen, one for the hidden layer and the other for the output layer.

Figure 7

(a) Lek sensitivity analysis of attributes with a positive effect on the outcome class. The steep effect of Dose is evident, with the location of the tipping point moving slightly with the cluster of the unevaluated variables. Increasing exposure time and HydroSize are also seen to tip the outcome to toxicity. (b) Lek sensitivity analysis of attributes with a relatively consistent negative effect on the outcome class: CoreSize, Ec, NOxygen, and SurfArea. The number of lakes of the unevaluated variables is set to 10 in both cases.

The two input variables that decisively differentiate the outcome are Dose and NOxygen. Dose appears to exert a nearly thresholding effect on the toxic class. The consistent sigmoidal effect seen in the “dose–response” curve, independent of the lake of unevaluated explanatory variables, echoes the maxim attributed to Paracelsus, “The dose makes a thing poison.” The attributes influencing toxicity also included (i) Time, with a pronounced effect depending on the lakes of the unevaluated variables, and (ii) HydroSize, with a steady nonlinear effect on toxicity that is also sensitive to the context of the unevaluated explanatory variables. The response profile for Eneg is almost flat at all lakes, indicating little to no effect in changing the outcome. The interpretation of the response with respect to SurfCharge remained obscure. NOxygen emerged as the attribute with the clearest inverse effect on toxicity, with a response profile displaying a tipping point to the nontoxic class at most, but not all, of the centroids. Other attributes seen to dial down the toxicity include SurfArea, CoreSize, and Ec.
These observations of effect size should be tempered by significance analysis toward a complete understanding. In summary, the ML models of our work are represented by a purely numeric feature space of just nine predictors, and it is possible to consider them in their entirety, similar to the interpretability of a classical QSAR model. The models conform to the Findable, Accessible, Interoperable, Reusable (FAIR) principles and are presented in a unified ensemble prediction engine, NanoTox (https://github.com/NanoTox). In the interest of reproducible research, all the scripts necessary to replicate, apply, and extend our analysis are available at NanoTox. Our methods may be extendable to other classes of engineered nanomaterials requiring urgent, sustainable, and rapid hazard estimation prior to induction in practical uses.[66−69]

Conclusions

We have optimized the problem formulation of cytotoxicity modeling of nanoparticles using a principled approach agnostic of in vitro characteristics. The feature space was trimmed for multicollinearity, tunable hyperparameters were optimized, and the training data were corrected for class imbalance. These steps led to an optimal hypothesis space, improving the performance of the generated ML models to >96% balanced accuracy. The benefits of a parsimonious approach to modeling nanoparticle toxicity include enhanced model interpretability and generalizability. We have embedded our models into a majority-voting ensemble classifier that achieves ∼98% balanced accuracy. Our entire workflow is available as a free open-source resource for use and enhancement by the scientific community, toward proactive, noninvasive testing and design of nanoparticles for varied applications.
