Research
Pages: 1 - 6
Deogratias Nurwaha and Xinhou Wang
This study presents the application of Artificial Intelligence (AI) techniques to predict the morphology of nanofibers
produced by a needleless electrospinning method. An electrospinning setup with two straight, parallel copper wire
electrodes was used to produce the nanofibers. Using the digital image processing software ImageJ, the Mean
Nanofiber Diameter (MFD) and the Nanofiber Diameter Standard Deviation (NFSD) were measured and recorded. Adaptive
Neuro-Fuzzy Inference Systems (ANFIS), Support Vector Machines (SVMs) and Gene Expression Programming
(GEP) methods were used to predict the electrospun nanofiber morphology. Predicted and experimental results
were compared. It was found that the SVM model has better predictive power than both the ANFIS and the GEP
models; however, the results provided by GEP and ANFIS are also acceptable. The relative importance of the
process parameters as contributors to nanofiber morphology was also investigated. The dependence of nanofiber
morphology on the individual processing parameters was found to range from strong to weak.
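As an illustration of the kind of regression model used in studies of this type, the following is a minimal sketch of a support vector regressor mapping electrospinning process parameters to mean fiber diameter. The feature names, parameter values, and data are hypothetical placeholders rather than data from the study, and scikit-learn's SVR stands in for whatever SVM implementation the authors used.

```python
# Minimal sketch: SVM regression from electrospinning process parameters to
# mean fiber diameter (MFD). Feature names and values are hypothetical
# placeholders, not data from the study.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical process parameters:
# [applied voltage (kV), polymer concentration (wt%), working distance (cm)]
X = np.array([
    [30.0, 8.0, 12.0],
    [35.0, 8.0, 12.0],
    [40.0, 10.0, 15.0],
    [45.0, 10.0, 15.0],
    [50.0, 12.0, 18.0],
])
# Hypothetical measured mean fiber diameters (nm)
y = np.array([310.0, 290.0, 260.0, 240.0, 220.0])

# Scale inputs and fit an RBF-kernel support vector regressor
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=5.0))
model.fit(X, y)

# Predict the MFD for an unseen parameter setting
print(model.predict([[42.0, 10.0, 14.0]]))
```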
Research Article
Pages: 1 - 15
Ameer Khan* and Zartashia Gull
In this paper, the performance of a multi-swarm Harris Hawks Optimization (HHO) algorithm is reported on the set of benchmark functions provided by CEC2005. To the best of our knowledge, no multi-swarming technique has previously been applied to Harris Hawks Optimization in the literature. In this algorithm, the population is divided into a number of swarms. All swarms follow the original HHO algorithm; however, agents from different swarms are regrouped frequently. Regrouping is performed using a random permutation, and after each iteration the global best among all swarms is taken as the rabbit, i.e. the best solution found so far. Results show significant improvements on the benchmark functions.
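The regrouping idea can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the population is logically partitioned into swarms, the global best across all swarms is kept as the rabbit after each iteration, and agents are periodically shuffled across swarms with a random permutation. The per-swarm HHO exploration/exploitation update is replaced by a simple placeholder step, and all names (sphere, n_swarms, swarm_size) are illustrative.

```python
# Sketch of multi-swarm regrouping via random permutation, with the global
# best across swarms kept as the "rabbit". The real HHO position update is
# replaced by a placeholder step; the sphere function is a stand-in objective.
import numpy as np

def sphere(x):
    # Simple benchmark objective (sphere function)
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
n_swarms, swarm_size, dim = 4, 10, 5
n_agents = n_swarms * swarm_size

# One population, logically partitioned into equal-size swarms
positions = rng.uniform(-100, 100, size=(n_agents, dim))
rabbit, rabbit_fit = None, np.inf

for it in range(100):
    # Placeholder for the per-swarm HHO update: each agent takes a small
    # random step, biased toward the rabbit once one is known.
    for s in range(n_swarms):
        block = slice(s * swarm_size, (s + 1) * swarm_size)
        step = rng.normal(scale=1.0, size=(swarm_size, dim))
        if rabbit is not None:
            positions[block] += 0.1 * (rabbit - positions[block]) + step
        else:
            positions[block] += step

    # Global best ("rabbit") among all swarms after the iteration
    fits = np.array([sphere(p) for p in positions])
    if fits.min() < rabbit_fit:
        rabbit_fit = fits.min()
        rabbit = positions[fits.argmin()].copy()

    # Regrouping: shuffle agents across swarms with a random permutation
    if it % 10 == 0:
        positions = positions[rng.permutation(n_agents)]

print(rabbit_fit)
```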
Research Article
Pages: 1 - 6
Najam Ul Qadir*, Md. Rafiul Hassan and Khalid Akhtar
Artificial Neural Networks (ANNs) have generally been observed to learn with a relatively higher rate of convergence, resulting in an improved training performance, if the input variables are preprocessed before being used to train the network. The foremost objectives of data preprocessing include size reduction of the input space, smoothing of the input-output relationship, data normalization, noise reduction, and feature extraction. The most commonly used technique for input space reduction is Principal Component Analysis (PCA), while two of the most commonly used data normalization approaches are min-max normalization (rescaling) and z-score normalization, also known as standardization. However, selecting the most appropriate preprocessing method for a given dataset is not a trivial task, especially if the dataset contains an unusually large number of training patterns. This study presents a first attempt at combining PCA with each of the two aforementioned normalization approaches for analyzing the network performance based on the Levenberg-Marquardt (LM) training algorithm, utilizing exact formulations of both the gradient vector and the Hessian matrix. The network weights have been initialized using a linear least squares method. The training procedure has been conducted for each of the proposed modifications of the LM algorithm on four different types of datasets, and the training performance, in terms of the average convergence rate and a proposed performance metric, has been compared with the Neural Network Toolbox in MATLAB® (R2017a).
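As a rough illustration of the two preprocessing pipelines compared here, the sketch below builds a z-score + PCA pipeline and a min-max + PCA pipeline in front of a small feed-forward network. It uses scikit-learn with a generic public dataset and an L-BFGS trainer rather than the authors' exact Levenberg-Marquardt implementation or the MATLAB toolbox comparison; the dataset, PCA variance threshold, and hyperparameters are all placeholder assumptions.

```python
# Sketch of the two preprocessing pipelines discussed above: z-score
# standardization + PCA and min-max rescaling + PCA, each feeding a small
# feed-forward network. A generic dataset and an L-BFGS solver are used as
# stand-ins, not the authors' LM implementation in MATLAB.
from sklearn.datasets import load_diabetes
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipelines = {
    "z-score + PCA": make_pipeline(
        StandardScaler(),
        PCA(n_components=0.95),  # retain 95% of the variance
        MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                     max_iter=2000, random_state=0),
    ),
    "min-max + PCA": make_pipeline(
        MinMaxScaler(),
        PCA(n_components=0.95),
        MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                     max_iter=2000, random_state=0),
    ),
}

for name, pipe in pipelines.items():
    pipe.fit(X_train, y_train)
    print(name, "R^2 on test set:", round(pipe.score(X_test, y_test), 3))
```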
Mini Review
Pages: 1 - 2
There are tasks that nature manages to perform with ease but that algorithms designed by people cannot accomplish. Such tasks arise in complicated and variable environments. Mathematicians have therefore turned their attention to nature and, by analogy with it, have created the theories of fuzzy logic, neural networks, and evolutionary algorithms. These have various areas of application, such as optimization in management, risk management, decision-making, the stock market, technology control, simulation, prediction, cluster analysis, and other kinds of business applications. The theories mentioned above are used above all for optimization. Typical examples include the optimization of technological decision-making processes (minimum losses and costs, maximum profit), the optimization of capital decisions, portfolio optimization, the optimization of product prices and production volumes, the solution of traveling-salesman problems, and so on. Business systems are among the most complicated dynamic systems, and applications in business have specific features in comparison with applications in engineering. The processes are focused on maximizing income or profit, or on minimizing costs. For that reason such applications, both successful and unsuccessful, are not published very often, because of secrecy in the highly competitive environment among firms and companies; the processes are focused on private corporate efforts at making money. Soft computing methods help businesses to target the right customers and can thus lead to higher profits and to success in the competitive struggle. Various methods are used in business: classical ones and methods based on soft computing. Decision-making processes in business are very complicated; many variables are hard to measure and are characterized by imprecision, uncertainty, vagueness, semi-truth, approximation, non-linearity, and so on. Under these conditions, soft computing methods such as fuzzy logic, neural networks, and evolutionary algorithms are appropriate. In general, it can be said that the field of applications of soft computing methods in business covers a wide area.
Editor Note
Pages: 1 - 1
Global Journal of Technology and Optimization received 664 citations, as per the Google Scholar report.