Research - (2020) Volume 11, Issue 4
Received: 10-Sep-2020; Published: 22-Sep-2020; DOI: 10.37421/2150-3494.2020.11.215
Citation: Faranak Khojasteh, Mahmoud Reza Sohrabi, Morteza Khosravi and Mehran Davallo. “Multi Linear Regression and Artificial Neural Network Modeling Performance for Predicting Coating Rate: Nano-Graphene Coated Cotton as a Case Study.” Chem Sci J 11 (2020). doi: 10.37421/CSJ.2020.11.215
Copyright: © 2020 Khojasteh F, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
The main purpose of this paper is to evaluate the performance of multilinear regression (MLR) and artificial neural network (ANN) models for predicting a coating process. The efficiency of coating nano-graphene particles onto a cotton surface was analyzed as a case study. A Taguchi L27 orthogonal array was selected as the experimental design, and the Taguchi results were evaluated using both S/N (signal-to-noise) ratios and ANOVA (analysis of variance). The outcomes of the Taguchi design served as inputs for both the MLR and ANN models. The parameters of the MLR model and the network architecture of the ANN model were tuned. Comparing MLR performance with the ANN method, the ANOVA test and data analysis showed that, at the 99.9% confidence level, the ANN predicts the process of coating graphene onto cotton better than the MLR model.
Artificial neural network • Cotton • Graphene • Multi Linear Regression • Network architecture
Graphene has a variety of applications when coated on different materials, such as fibers [1], metal meshes [2], textiles [3], membranes [4], foams [5] and gauze [6]. Among these substrates, fibers offer the greatest flexibility at the lowest cost. Cotton is a suitable choice for transferring graphene onto a 3D framework [7]. Cotton fibers are non-toxic, lightweight and eco-friendly [8]. Pretreatment of the fibers helps the nanoparticles penetrate the surface easily: NaOH treatment of kapok/cotton fabric improves adhesion characteristics by creating surface roughness [9]. The effect of NaOH solution concentration on the cellulose in fibers and wood has been examined experimentally [10-13]. A reducing agent assists the coating of cotton by graphene; common reducing agents for this purpose include HI, hydrazine derivatives, Al and vitamin C [1]. A catalyst can also accelerate the reduction of GO; for example, AlCl3 and CaCl2 have been used as catalysts [14,15].
Optimization of the factors affecting the coating of cotton by graphene can be carried out with a suitable statistical method for analyzing and predicting chemical data. Chemometrics is widely applied to modeling across the sciences. To obtain sufficient input data for an appropriate model, the output of a Taguchi experimental design was used; the Taguchi method plays an important role in design methodology [16,17]. A good prediction model can be very powerful in providing a low-cost way to predict the rate and quality of the coating. To this end, two statistical tools, MLR and ANN, are compared; both receive inputs (measured data) and produce an output (the response variable) [18,19].
The main intention of this study is to compare the predictive abilities of MLR and ANN models against the actual results. A well-adapted prediction model can provide an inexpensive way to predict the coating rate.
Chemicals and software
The natural cotton was obtained from a local store. All chemical materials were obtained from Merck, Germany. GO was prepared by the Hummers method [20]. MINITAB was used to create the Taguchi design and the MLR model; the ANN calculations were carried out in MATLAB.
Procedure
A piece of cotton (about 1 g) was first soaked in 4% NaOH solution for one hour. Next, it was washed with 10% acetic acid solution and distilled water (until pH = 7). The resulting cotton was placed in a GO dispersion containing NaBH4 as the reducing agent and CaCl2 as a catalyst. The mixture was then stirred at room temperature for one hour to obtain RGO. Finally, the graphene-coated cotton (GCC) was rinsed with distilled water and dried at 50-60°C [9,14,21].
Graphene oxide concentration, reducing agent amount, catalyst amount and contact time were used as the experimental factors. Table 1 lists the process parameters and their levels used in the Taguchi design. Table 2 shows the 27 experiments obtained from the Taguchi method, instead of the 3^13 = 1,594,323 experiments that would be required by a one-factor-at-a-time method. Table 3 presents the ranking of the factors according to their delta values (the difference between the highest and lowest S/N ratio). For the S/N ratio analysis, the "larger-the-better" criterion was chosen, because the goal of the experiment is to maximize the response (a short sketch of this calculation follows Table 4). Table 4 shows that GO concentration, with a 34.657% contribution, and contact time, with a 32.859% contribution, have the greatest effect among the factors.
Process parameter | GO concentration (g/l) | Reducing agent amount (g) | Catalyst amount (g) | Contact time (min) |
---|---|---|---|---|
Level 1 (L1) | 0.025 | 0.500 | 0.01 | 30 |
Level 2 (L2) | 0.050 | 0.570 | 0.02 | 60 |
Level 3 (L3) | 0.075 | 0.687 | 0.03 | 90 |
Run No. | GO concentration (g/l) | Reducing agent amount (g) | Catalyst amount (g) | Contact time (min) | Coating (%) | S/N ratio |
---|---|---|---|---|---|---|
1 | 0.025 | 0.500 | 0.01 | 30 | 94.20 | 39.48 |
2 | 0.050 | 0.570 | 0.01 | 60 | 59.50 | 35.49 |
3 | 0.075 | 0.687 | 0.01 | 90 | 97.50 | 39.78 |
4 | 0.025 | 0.570 | 0.02 | 90 | 77.60 | 37.79 |
5 | 0.050 | 0.687 | 0.02 | 30 | 82.50 | 38.33 |
6 | 0.075 | 0.500 | 0.02 | 60 | 89.20 | 39.00 |
7 | 0.025 | 0.687 | 0.03 | 60 | 80.10 | 38.07 |
8 | 0.050 | 0.500 | 0.03 | 90 | 91.90 | 39.26 |
9 | 0.075 | 0.570 | 0.03 | 30 | 98.10 | 39.83 |
10 | 0.025 | 0.500 | 0.01 | 30 | 94.20 | 39.48 |
11 | 0.050 | 0.570 | 0.01 | 60 | 59.50 | 35.49 |
12 | 0.075 | 0.687 | 0.01 | 90 | 97.50 | 39.78 |
13 | 0.025 | 0.570 | 0.02 | 90 | 77.60 | 37.79 |
14 | 0.050 | 0.687 | 0.02 | 30 | 82.50 | 38.33 |
15 | 0.075 | 0.500 | 0.02 | 60 | 89.20 | 39.00 |
16 | 0.025 | 0.687 | 0.03 | 60 | 80.10 | 38.07 |
17 | 0.050 | 0.500 | 0.03 | 90 | 91.90 | 39.26 |
18 | 0.075 | 0.570 | 0.03 | 30 | 98.10 | 39.83 |
19 | 0.025 | 0.500 | 0.01 | 30 | 94.20 | 39.48 |
20 | 0.050 | 0.570 | 0.01 | 60 | 59.50 | 35.49 |
21 | 0.075 | 0.687 | 0.01 | 90 | 97.50 | 39.78 |
22 | 0.025 | 0.570 | 0.02 | 90 | 77.60 | 37.79 |
23 | 0.050 | 0.687 | 0.02 | 30 | 82.50 | 38.33 |
24 | 0.075 | 0.500 | 0.02 | 60 | 89.20 | 39.00 |
25 | 0.025 | 0.687 | 0.03 | 60 | 80.10 | 38.07 |
26 | 0.050 | 0.500 | 0.03 | 90 | 91.90 | 39.26 |
27 | 0.075 | 0.570 | 0.03 | 30 | 98.10 | 39.83 |
Level | GO (g/l) | NaBH4 (g) | CaCl2 (g) | Time (min) |
---|---|---|---|---|
1 | 38.25 | 38.45 | 39.25 | 39.21 |
2 | 38.37 | 37.69 | 37.70 | 37.52 |
3 | 39.05 | 39.54 | 38.72 | 38.94 |
Delta | 0.80 | 1.85 | 1.55 | 1.69 |
Rank | 4 | 1 | 3 | 2 |
Factor | DOF (f) | Sum of squares (S) | Variance (V) | F-ratio (F) | Pure sum (S') | Contribution C (%) |
---|---|---|---|---|---|---|
Catalyst (gr) | 2 | 1.124 | 0.562 | 1.74 | 1.124 | 7.473 |
GO (g/l) | 2 | 5.214 | 2.607 | 0.88 | 5.214 | 34.657 |
Reagent reduction (gr) | 2 | 3.763 | 1.881 | 0.24 | 3.763 | 25.009 |
Contact time | 2 | 4.944 | 2.472 | 1.50 | 4.944 | 32.859 |
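As a check on the S/N column of Table 2, the following minimal Python sketch (an illustrative reconstruction, not the authors' MINITAB workflow) computes the larger-the-better S/N ratio for the nine distinct runs; with a single replicate per run the formula reduces to 20·log10(y).

```python
import numpy as np

# Coating percentages for the nine distinct runs in Table 2.
coating = np.array([94.20, 59.50, 97.50, 77.60, 82.50, 89.20, 80.10, 91.90, 98.10])

# Larger-the-better S/N ratio: S/N = -10*log10(mean(1/y_i^2));
# with one observation per run this is simply 20*log10(y).
sn = 20 * np.log10(coating)
print(np.round(sn, 2))  # matches the Table 2 S/N column to within rounding
```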
MLR modeling
MLR is a mathematical modeling method based on least squares, and it is easy to use. A multiple linear regression equation expresses a linear relationship between a response variable (Y), two or more predictor variables (X1, X2, ..., Xk), an estimated intercept (b0) and the regression coefficients (b1, b2, ..., bk) [22-24]. The MLR equation is:
Ŷ = b0 + b1X1 + b2X2 + ... + bkXk (1)
The multiple linear regression model used in this study is:

Ŷ = b0 + b1[GO] + b2[NaBH4] + b3[CaCl2] + b4Δt (2)
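As an illustration of Eq. (2), the sketch below fits the intercept and four coefficients by ordinary least squares on the nine distinct Taguchi runs of Table 2; this is a hypothetical Python reconstruction, not the authors' MINITAB model.

```python
import numpy as np

# Predictor matrix: columns are [GO], [NaBH4], [CaCl2], Δt (rows 1-9 of Table 2).
X = np.array([
    [0.025, 0.500, 0.01, 30],
    [0.050, 0.570, 0.01, 60],
    [0.075, 0.687, 0.01, 90],
    [0.025, 0.570, 0.02, 90],
    [0.050, 0.687, 0.02, 30],
    [0.075, 0.500, 0.02, 60],
    [0.025, 0.687, 0.03, 60],
    [0.050, 0.500, 0.03, 90],
    [0.075, 0.570, 0.03, 30],
])
y = np.array([94.20, 59.50, 97.50, 77.60, 82.50, 89.20, 80.10, 91.90, 98.10])

# Prepend a column of ones so b0 is fitted alongside b1..b4.
A = np.column_stack([np.ones(len(y)), X])
b, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ b  # MLR predictions Ŷ = b0 + b1[GO] + b2[NaBH4] + b3[CaCl2] + b4Δt
```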
ANN modeling
An ANN is inspired by biological neurons: it consists of a set of neurons connected to one another by axon-like links. Every neuron carries weights associated with many inputs and produces a single output. In addition to the input and output layers, an ANN typically contains hidden layers, where the relationship between the inputs and the output is determined by the synaptic weights. ANNs are powerful tools for prediction and system identification [25-28], and they can approximate both linear and nonlinear functions. Feedforward backpropagation (FFBP) and radial basis function (RBF) networks are examples of network types [29].
The feedforward (FF) neural network is an uncomplicated architecture, and backpropagation (BP) is a widely used form of ANN training. The behavior of the network depends on the weights of the individual neurons, which are adjusted during training via the backpropagation procedure: by exposing the network to a particular set of data, the weights and biases are corrected to produce the desired output [27,30]. Four training algorithms are used in this study: Levenberg-Marquardt backpropagation (LM), scaled conjugate gradient backpropagation (SCG), gradient descent with momentum backpropagation (GDM) and resilient backpropagation (RP) [31]. Statistical results for the coating rate using these four learning algorithms are shown in Table 5; the LM learning algorithm was the best and fastest. The activation functions and the architecture of the network were determined by trial and error. Table 6 presents the suitable architecture (4-6-1-1) for this model, and Figure 1 depicts the network model. That is, the network consists of four neurons in the input layer (GO concentration, reducing agent amount, catalyst dosage and contact time), six neurons in the hidden layer, and one neuron in each of the output and final layers, giving the predicted response (coating rate). The hidden layer uses a tangent sigmoid activation function while the output layer is linear; Figure 2 shows that the network uses the two transfer functions tansig and purelin [32]. The equation for the ANN model is:
Learning algorithm | Number of neurons | Training R2 | Training MSE | Testing R2 | Testing MSE |
---|---|---|---|---|---|
LM | 1 | 0.81975 | 0.002747 | 0.97178 | 0.007428 |
LM | 2 | 0.98828 | 0.000431 | 0.97859 | 0.001309 |
LM | 3 | 0.95122 | 0.000046 | 0.92059 | 0.003086 |
LM | 4 | 0.97549 | 0.000064 | 0.92745 | 0.002218 |
LM | 5 | 0.99801 | 0.000016 | 0.99998 | 0.000015 |
LM | 6 | 1 | 0.000003 | 1 | 0.000003 |
LM | 7 | 0.79234 | 0.003545 | 1 | 0.000991 |
SCG | 1 | 0.85457 | 0.007567 | 0.99993 | 0.04802 |
SCG | 2 | 0.91824 | 0.003868 | 0.82094 | 0.001058 |
SCG | 3 | 0.94835 | 0.003029 | 0.60908 | 0.01164 |
SCG | 4 | 0.96592 | 0.003187 | 0.91385 | 0.002366 |
SCG | 5 | 0.99738 | 0.003438 | 0.98414 | 0.007274 |
SCG | 6 | 0.95234 | 0.003402 | 0.97123 | 0.008501 |
SCG | 7 | 0.99872 | 0.001678 | 0.99991 | 0.003804 |
SCG | 8 | 1 | 0.0002975 | 1 | 0.0003658 |
SCG | 9 | 1 | 0.001525 | 1 | 0.001525 |
SCG | 10 | 0.96556 | 0.005002 | 0.86025 | 0.003658 |
RP | 1 | 0.84605 | 0.004304 | 0.94882 | 0.01384 |
RP | 2 | 0.98185 | 0.01018 | 0.8454 | 0.002079 |
RP | 3 | 0.97057 | 0.003616 | 0.99751 | 0.004348 |
RP | 4 | 0.94445 | 0.001244 | 0.98089 | 0.000358 |
RP | 5 | 0.99838 | 0.004373 | 0.98212 | 0.003575 |
RP | 6 | 1 | 0.002666 | 1 | 0.002666 |
RP | 7 | 0.99924 | 0.002224 | 0.96972 | 0.006697 |
GDM | 1 | 0.46625 | 0.0115 | 0.93695 | 0.03062 |
GDM | 2 | 0.27910 | 0.03542 | 0.98011 | 0.06121 |
GDM | 3 | -0.032757 | 0.06208 | 0.029816 | 0.04931 |
GDM | 4 | -0.31799 | 0.1227 | -0.82368 | 0.05235 |
GDM | 5 | 0.53999 | 0.03063 | -0.66922 | 0.05697 |
GDM | 6 | 0.87229 | 0.0609 | 0.98804 | 0.02774 |
GDM | 7 | 0.69184 | 0.05538 | 0.86077 | 0.05538 |
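Tables 5 and 6 report R2 and MSE for the training and testing sets; the short sketch below shows the standard definitions assumed here (a minimal illustration, not the authors' MATLAB code). Note that R2 = 1 − SSres/SStot can be negative when a model fits worse than the mean of the data, as seen in some GDM rows.

```python
import numpy as np

def r2_mse(y_true, y_pred):
    """Coefficient of determination and mean squared error, as tabulated."""
    resid = y_true - y_pred
    mse = np.mean(resid**2)
    ss_tot = np.sum((y_true - y_true.mean())**2)
    r2 = 1.0 - np.sum(resid**2) / ss_tot
    return r2, mse

# Example with hypothetical predictions:
print(r2_mse(np.array([94.2, 59.5, 97.5]), np.array([93.8, 61.0, 96.9])))
```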
Learning algorithm | Network architecture | Training R2 | Training MSE | Testing R2 | Testing MSE |
---|---|---|---|---|---|
LM | 4-6-1-1 | 1 | 0.000003 | 1 | 0.000003 |
SCG | 4-8-1-1 | 1 | 0.0002975 | 1 | 0.0003658 |
RP | 4-6-1-1 | 1 | 0.002666 | 1 | 0.002666 |
GDM | 4-6-1-1 | 0.87229 | 0.0609 | 0.98804 | 0.02774 |
ANN output = Ŷ_ANN = purelin[w2 × tansig(w1 × {x(1); x(2); x(3); x(4)} + b1) + b2] (3)
where w1 and b1 are the weights and biases of the hidden layer, w2 and b2 are the weight and bias of the output layer, and x(1), x(2), x(3) and x(4) represent the inputs.
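A minimal Python sketch of the forward pass in Eq. (3) is given below. The weight values here are random placeholders chosen only to illustrate the 4-6-1 shapes, not the trained network; MATLAB's tansig is the hyperbolic tangent sigmoid and purelin is the identity function.

```python
import numpy as np

def tansig(n):
    # MATLAB's tansig(n) = 2/(1+exp(-2n)) - 1, which equals tanh(n).
    return np.tanh(n)

def ann_predict(x, w1, b1, w2, b2):
    """Forward pass of Eq. (3): purelin(w2·tansig(w1·x + b1) + b2)."""
    hidden = tansig(w1 @ x + b1)   # six hidden neurons
    return w2 @ hidden + b2        # purelin output = identity

# Placeholder weights with the 4-6-1 shapes (not the trained values):
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(6, 4)), rng.normal(size=6)
w2, b2 = rng.normal(size=(1, 6)), rng.normal(size=1)
x = np.array([0.05, 0.57, 0.02, 60.0])  # [GO, NaBH4, CaCl2, Δt]
print(ann_predict(x, w1, b1, w2, b2))
```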
In the present study, the predictive capabilities of MLR and ANN were evaluated for the graphene coating rate. The results of the Taguchi experimental design served as the input for these models. The Taguchi results were checked by selecting the best run through examination of the S/N ratios and ANOVA; the results show that GO concentration and contact time strongly affect the coating rate. For the MLR and ANN models, the best parameters were chosen in the software and then optimized. The results revealed that the relationship between the inputs and the output was not linear; therefore, the ANN model gives excellent predictive performance compared with the MLR model for the rate of graphene coating on cotton.