Short Communication - (2022) Volume 13, Issue 6
Received: 03-Jun-2022, Manuscript No. GJTO-22-71128;
Editor assigned: 05-Jun-2022, Pre QC No. P-71128;
Reviewed: 10-Jun-2022, QC No. Q-71128;
Revised: 15-Jun-2022, Manuscript No. R-71128;
Published: 20-Jun-2022, DOI: 10.37421/2229-8711.2022.13.303
Citation: Garba, Amin. "Computational Intelligence Strategies Throughout Bioengineering." Glob J Tech Optim 13 (2022): 303.
Copyright: © 2022 Garba A. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Soft computing is a group of approximate computing methods that can represent and analyse extremely complicated problems for which more traditional approaches have been unable to provide economical, analytical, or comprehensive solutions. Over the past three decades, soft computing has been intensively researched and applied in engineering computing and scientific research. Researchers and engineers in agricultural and biological engineering have used Fuzzy Logic (FL), Artificial Neural Networks (ANNs), Genetic Algorithms (GAs), Decision Trees (DTs), and Support Vector Machines (SVMs) to study soil and water regimes related to crop growth, examine the operation of food processing, and support decision-making in precision farming. Applications of soft computing in agriculture are described here, including their principles and methods, in the context of soil and water. With its tolerance for imperfection, uncertainty, partial truth, and approximation, soft computing can achieve tractability and robustness while offering low-cost solutions. As a result, soft computing can now solve problems that more traditional approaches could not address efficiently, analytically, or comprehensively. Among soft computing methods, FL appears to have been the first to define the core concepts, and the strategies that came later were influenced by those fundamentals. A number of findings and techniques released by the Parallel Distributed Processing (PDP) research group served as a major stimulus for much of the subsequent research on and use of ANNs, and also gave tremendous impetus to the study of brain mechanisms and structure [1,2].
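To make one of the techniques named above more concrete, the sketch below shows a minimal genetic algorithm in Python that evolves a small vector of real-valued parameters toward a hypothetical optimum. The fitness function, target values, and population settings are illustrative assumptions, not material from the referenced studies.

```python
import random

# Minimal genetic algorithm sketch: evolves a vector of real-valued
# parameters (here a hypothetical crop/irrigation setting) toward a target.
# The target, fitness function, and GA settings are illustrative assumptions.

TARGET = [0.3, 0.6, 0.2]                 # hypothetical optimal settings
POP_SIZE, GENS, MUT_RATE = 30, 50, 0.1

def fitness(ind):
    # Negative squared error to the target: higher is better.
    return -sum((a - b) ** 2 for a, b in zip(ind, TARGET))

def mutate(ind):
    # Perturb each gene with small Gaussian noise at rate MUT_RATE.
    return [g + random.gauss(0, 0.05) if random.random() < MUT_RATE else g
            for g in ind]

def crossover(p1, p2):
    # Single-point crossover between two parents.
    point = random.randint(1, len(p1) - 1)
    return p1[:point] + p2[point:]

population = [[random.random() for _ in TARGET] for _ in range(POP_SIZE)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]          # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best individual:", [round(g, 3) for g in best])
```

Selection here is simple truncation; practical GAs in agricultural optimisation would typically use richer encodings, constraints, and tuned operators.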
Researchers and engineers have created techniques based on FL, ANNs, GAs, Bayesian Inference (BI), DTs, and SVMs in agricultural and biological engineering to investigate soil and water regimes related to crop growth, examine the operation of food processing, and support decision-making in precision farming. They have also applied fusion techniques combining FL, ANNs, and GAs to solve such problems. However, we have not come across any instances of fusion between soft and hard computing, which could hold considerable potential as a research topic. This article provides a brief history of soft computing techniques, and applications of soft computing in agricultural and biological engineering are discussed together with their principles and methodologies. Although some earlier research has suggested that logic and human thinking often interact, humans exhibit an exceptional capacity for reasoning and decision-making when faced with uncertainty, imprecision, inadequate information, limited knowledge, and class bias. FL supports formalising and automating this capability. FL replicates the logic of human control and can be incorporated into a wide range of products, from small handheld devices to large computerised process control systems. To process incoming data as a human operator would, it employs a verbose yet highly descriptive language. It frequently works with little or no adjustment when first implemented and is quite robust and tolerant of imprecision in operator and data input, as illustrated by the sketch below [3,4].
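The sketch below gives a minimal fuzzy controller in Python that maps a soil-moisture reading to an irrigation duration using two linguistic terms ("dry", "wet") and weighted-average defuzzification. The membership functions, rule outputs, and variable names are hypothetical choices for illustration, not values from the article.

```python
# Minimal fuzzy-logic sketch: two rules map soil moisture (%) to an
# irrigation duration (minutes). All shapes and rule outputs are assumed.

def tri(x, a, b, c):
    """Triangular membership function with peak at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def irrigation_minutes(moisture_pct):
    # Rule 1: IF soil is dry THEN irrigate long  (60 min)
    # Rule 2: IF soil is wet THEN irrigate short (10 min)
    dry = tri(moisture_pct, -1, 0, 50)      # full membership near 0 %
    wet = tri(moisture_pct, 30, 100, 101)   # full membership near 100 %
    weights = [(dry, 60.0), (wet, 10.0)]
    total = sum(w for w, _ in weights)
    if total == 0:
        return 0.0
    # Weighted-average defuzzification of the fired rules.
    return sum(w * out for w, out in weights) / total

for m in (5, 40, 90):
    print(f"moisture {m}% -> irrigate {irrigation_minutes(m):.1f} min")
```

Even this toy controller shows the FL traits described above: the rules read like operator instructions, and a noisy moisture reading degrades the output gracefully rather than breaking the logic.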
ANNs offer a means to simulate biological neurons in order to tackle difficult problems in a way similar to how the human brain does. Interest in researching the mechanism and structure of the brain has been growing for a very long time, notably since the middle of the previous century. New computational models, known as connectionist systems or ANNs, based on this biological background have been developed as a result of the growing academic interest in addressing complicated problems such as pattern recognition, rapid information processing, and adaptation. On the basis of the model of a neuron, researchers in the field examined the potential and capabilities of connecting several fundamental elements, and later became interested in the adaptation laws that apply to brain systems. An ANN's architecture, training/learning process, and implementation are a condensed representation of the functions and structure of the human brain. The human brain processes information to solve problems via a network of interconnected processing units called neurons, each of which functions independently, autonomously, and asynchronously. The enormous processing capability inherent in biological neural networks motivated the study of that structure itself as a model for organising and creating man-made computing structures. Compared with traditional data processing techniques, ANNs offer a model-free, adaptive, parallel-processing, and resilient solution that is fault and failure tolerant, can learn, can handle imprecise and fuzzy information, and generalises well. An ANN can map inputs to outputs without making any underlying assumptions about how the data are distributed [1,5].
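As a minimal sketch of the mechanics described above, the following Python code trains a small feed-forward network with one hidden layer by backpropagation on the XOR problem. The layer sizes, learning rate, and iteration count are illustrative assumptions; real agricultural models would use far larger data sets and architectures.

```python
import numpy as np

# Minimal feed-forward ANN sketch: one hidden layer, sigmoid activations,
# trained by backpropagation on XOR. All hyperparameters are assumed.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                 # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)               # forward pass, output layer
    grad_out = (out - y) * out * (1 - out)   # error times sigmoid derivative
    grad_h = (grad_out @ W2.T) * h * (1 - h) # backpropagated hidden gradient
    W2 -= 0.5 * h.T @ grad_out               # gradient-descent updates
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

print(np.round(out, 2).ravel())   # should approach [0, 1, 1, 0]
```

The point of the example is the model-free, adaptive behaviour noted above: the network learns the input-output mapping from examples alone, with no assumption about how the data are distributed.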
Probabilistic computing is a soft computing method for performing probabilistic reasoning. The goal of probabilistic reasoning is to combine belief with probability theory's ability to handle uncertainty when drawing conclusions. Traditional inference models cannot incorporate prior knowledge into their computations, yet there are situations in which prior information can aid the inference process. Bayesian Inference (BI) is a statistical inference method that takes prior information and probability distributions into account. In the BI process, evidence or observations are used to update the probability that a hypothesis is correct. Traditionally, binary hypothesis testing is used to determine statistically which of two hypotheses is true. Bayesian Networks have been regarded as aids for making decisions in complex situations across a wide range of areas. Bayesian Networks are graphical representations of probability: each model describes the characteristics of a set of variables and their probabilistic dependencies. In these graphical, probabilistic models, the state of the parent node predicts the state of the child node, enabling the structured depiction of a cognitive process based on a link-and-node structure [2,4].
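The parent-child updating described above can be illustrated with a minimal two-node network: a hypothetical "rain" parent node and a "wet soil" child node, where observing wet soil updates the belief that it rained via Bayes' rule. The prior and conditional probabilities below are assumed values for illustration only.

```python
# Minimal Bayesian-inference sketch for a two-node network (rain -> wet soil).
# All probabilities are illustrative assumptions, not data from the article.

p_rain = 0.20                  # prior P(rain)
p_wet_given_rain = 0.90        # P(wet soil | rain)
p_wet_given_no_rain = 0.15     # P(wet soil | no rain)

# Total probability of the evidence, P(wet soil).
p_wet = p_wet_given_rain * p_rain + p_wet_given_no_rain * (1 - p_rain)

# Posterior P(rain | wet soil) by Bayes' rule.
posterior = p_wet_given_rain * p_rain / p_wet
print(f"P(rain | wet soil) = {posterior:.3f}")   # = 0.600 with these numbers
```

Observing the child node's state raises the belief in the parent from the 0.20 prior to a 0.60 posterior, which is exactly the kind of evidence-driven update BI performs in larger networks.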
Bioinformatics data must be used to obtain genotypes and to collect samples of patients and controls. It therefore makes sense that significant effort should be put into making the analytic methods applied to the acquired data as effective as possible. A biologist may not be able to interpret the results of soft computing approaches on their own; for instance, someone knowledgeable in parallel computing would be needed to help with a problem that must be solved across several processors. For a programme to be intuitive to a biologist, it must be simple to use and must produce output that is easy to view and navigate. Therefore, we may conclude that biologists must work with soft computing techniques in order to handle bioinformatics challenges more effectively.
Acknowledgement: None.
Conflict of Interest: The authors reported no potential conflict of interest.