Value Added Abstracts
Pages: 1 - 1
ABANIKANNDA, Mutahir Oluwafemi
Higher education can be better achieved and managed by employing new technologies in education. Artificial intelligence is one of the numerous technologies that could be employed for the management of higher education, and learners can be better managed using it. Artificial intelligence affords an opportunity for a better analysis of every student in a class who is a slow learner or struggles to understand the topics discussed by the lecturer. Such analysis gives a clear idea of a student's understanding of each topic. If a student is lagging in some areas or is unable to understand a few topics, artificial intelligence analysis would present this report to the lecturer so that appropriate action can be taken. The analysis would also recommend topics to the student, with basic examples or in a simplified manner, so that the student's skill in the particular area where he or she is uncomfortable can be improved. Technology-managed learning is a veritable tool for learning in higher education across the world due to the digital revolution, and many academic advantages have been attributed to the use of technology to manage the learning process. The educational system of any country is the foundation of its sustainable development, and higher education contributes to national development through high-level manpower training. In artificial intelligence, computers are allowed to perform tasks that are ordinarily performed by the human brain. Using artificial intelligence as a tool for technology-managed learning, given its potential educational advantages in improving the quality of learning, would provide an appropriate analysis of individual students' understanding of topics and concepts, from which course instructors can obtain feedback for improving student skills in desired areas of knowledge, thereby enhancing sustainable development. This paper therefore considers the roles that artificial intelligence could play in providing technology-managed learning to learners in higher education.
Value Added Abstracts
Pages: 2 - 2
Marcela del Pilar Roa Avella
Predicting crime is concerned with the search for, and examination of, predisposing factors of criminal activity in people or places. Currently, there exist A.I. tools that attempt to predict levels of crime risk, as well as facial recognition software. All of these aim to anticipate criminal decisions by tracking trigger factors. Amongst the most recognised algorithms are COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), PROMETEA, Big Brother Watch, PredPol, the Harm Assessment Risk Tool (HART) and Actuarial Risk Assessment Instruments (ARAIs).
The aim of this paper is to determine the impact of these tools on human rights through a descriptive investigation using deductive analysis. The results demonstrate that the incorporation of A.I. tools for predicting crimes does not guarantee the avoidance of discrimination or bias, owing to the human intervention involved in selecting the data that feeds the algorithm. Moreover, the so-called black box prevents reverse engineering to understand the software's intelligent process in the decision-making and deliberation over the factors being analysed, which constitutes a violation of human rights.
Value Added Abstracts
Pages: 3 - 3
Manoj CR
As autonomous vehicles (AVs) come closer to becoming reality, it becomes mandatory to characterise the performance of the sensors used in AVs before they reach production deployment. This validation process requires large amounts of ground truth data with precise information about the real-world position and pose of the objects around the vehicle. LIDAR is a preferred sensor for ground truthing due to its ability to estimate the depth of objects precisely. In order to reduce the manual effort in ground truthing, we propose in this paper a novel deep learning network for automated object detection and pose estimation on LIDAR point cloud data, for automated ground truthing. The algorithm is based on a multi-layer deep 3D convolutional neural network with a custom loss function and activation layers. Typically, the volume of point cloud input data is itself huge, and the computational complexity makes training and inference highly time-intensive. The architecture of the deep learning network is designed so that the activation functions of particular layers reflect the sensitivity of the feature being extracted at that layer. This saved computation time without compromising the accuracy of feature extraction. The loss function was designed to use regression to achieve efficient localization of objects. Although the architecture of the network improved the speed of training and inference, further improvement was required for efficient training turnaround time and real-time inference; this was achieved through efficient utilisation of the NVIDIA GPUs on which the automated labelling algorithms were deployed. The above approaches ensured real-time execution of the inference engine on Velodyne HDL 64 and 128 LIDAR, which was essential for real-time labelling and ground truthing.
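As a concrete illustration of this kind of approach, here is a minimal sketch (not the authors' network) of a small 3D CNN over a voxelized LIDAR frame with a regression head and a smooth-L1 localization loss; the voxel grid size, layer widths, and the 7-value box-plus-yaw target are illustrative assumptions, since the paper's custom loss and activations are not specified.

```python
# Minimal sketch: 3D CNN over a voxelized point cloud, regressing a
# 7-value pose vector (box centre x/y/z, size l/w/h, yaw). All sizes
# are assumptions for illustration, not the paper's architecture.
import torch
import torch.nn as nn

class VoxelNet3D(nn.Module):
    def __init__(self, grid=(32, 32, 32)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
        )
        flat = 32 * (grid[0] // 4) * (grid[1] // 4) * (grid[2] // 4)
        self.head = nn.Linear(flat, 7)          # pose regression head

    def forward(self, voxels):                  # voxels: (B, 1, D, H, W)
        return self.head(self.features(voxels).flatten(1))

model = VoxelNet3D()
loss_fn = nn.SmoothL1Loss()                     # regression-based localization
voxels = torch.rand(2, 1, 32, 32, 32)           # stand-in voxelized frames
target = torch.rand(2, 7)                       # stand-in ground-truth poses
loss = loss_fn(model(voxels), target)
loss.backward()
```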
Value Added Abstracts
Pages: 4 - 4
Leandro Romualdo da Silva
Today, many business areas use artificial intelligence because of the growth in computational power and the large amount of data generated in recent years, which is very representative of the current moment in technology.
Many machine learning models are already applied in companies, but with AI and RPA we can now do more for them: AI and RPA make companies more competitive by automating internal processes using computer vision and NLP.
New tools for NLP and computer vision can help accelerate the development of new solutions, bringing agility to business areas such as accounting, tax, finance, legal and others.
I am going to talk about how some companies have become more competitive using AI and RPA in diverse areas.
Value Added Abstracts
Pages: 5 - 5
Lewis LaBrie
There are three major issues facing the mass adoption of electric vehicles: range anxiety, charge rate and energy storage lifespan. Can artificial intelligence help navigate these obstacles and create more efficient and longer-lasting energy storage devices? Known correlations exist between energy storage efficiency and ambient and storage device temperatures, but what about the unknown factors? We will discuss these factors, how better data and AI can increase efficiency and optimize charge rate, and the future of AI in e-mobility.
Value Added Abstracts
Pages: 6 - 6
Łukasz Augustyniak
Natural Language Processing (NLP) projects often fail in their conception (e.g., lack of data, wrong data, wrong annotations), in their delivery, or in their business usefulness. It is a good idea to identify the proper research problem, what the client is looking for, what our limitations are, and finally, how we would solve the problem. Applied NLP is still a rapidly evolving and changing area. I want to present the experience from several NLP projects spanning social media analysis, news data extraction, call center analytics, legal document annotation and many more. What are the best practices for data acquisition, creating the first models, packing everything into pipelines, and serving these models? In addition, I show how we may communicate our work beyond the client deliverables to guide other applied NLP practitioners, steering them towards success and away from many NLP pitfalls.
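As an illustration of the "first model, then pipeline, then serve" workflow mentioned above, here is a minimal sketch using scikit-learn; the toy texts, labels, and artifact file name are placeholders, not project data.

```python
# Minimal sketch: pack preprocessing and model into one pipeline so that
# training and serving use exactly the same object.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import joblib

texts = ["great support call", "refund never arrived"]   # toy examples
labels = [1, 0]                                           # 1 = positive

pipe = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression()),
])
pipe.fit(texts, labels)

# Serving: the fitted pipeline is the hand-off artifact.
joblib.dump(pipe, "sentiment_pipeline.joblib")
served = joblib.load("sentiment_pipeline.joblib")
print(served.predict(["the agent resolved my issue"]))
```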
Value Added Abstracts
Pages: 7 - 7
Amel Mhamdi
Standard French means French pronounced without any regional accent and whose syntax, morphology and spelling are described in dictionaries, grammar books and textbooks. Although standard French is taught at school and used in written communication, administrative documentation and national radio/TV channels, most regional and local radio/TV channels, as well as specific shows promoting tourist areas, typically carry regional French accents. In this work, we focus on the problem of speech recognition services for the French media. Our goal is to build useful resources to extend the language models used in speech recognition APIs. Over the past few years, APIs for automatic speech recognition (ASR) have been playing an increasingly crucial role in a variety of media applications. In this paper, we present a lightweight open source framework, STT-FTV, that allows users to easily connect 4 different ASR tools to be trained with different French regional accents. The speech corpus has been collected and transcribed from various TV and radio contents specifically chosen to represent the major differences from standard French. The STT-FTV system offers a relative reduction of the Word Error Rate in the context of regional accent transcription.
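Since the headline metric here is Word Error Rate, the following minimal sketch shows how WER is computed: word-level edit distance divided by the number of reference words. The French sentences are illustrative only.

```python
# Minimal sketch: Word Error Rate via word-level edit distance.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("il fait beau aujourd'hui", "il fait beau aujourd hui"))
```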
Value Added Abstracts
Pages: 8 - 8
Elisha Marandure
This project is going to address the problem of poor harvests in Zimbabwe and the world at large by creating a machine that carries out a number of functions which will go a long way towards alleviating possible poor harvests. Forecasting is key to agricultural activities, and this machine will therefore be able to forecast weather patterns, e.g. rainfall patterns, so that farmers know when to plant their crops. Furthermore, the proposed machine will be able to test soil pH as well as fertility levels and thus guide farmers on corrective measures so that good yields are realized. One other quality of this machine is its adaptability to different environments with different weather patterns, while at the same time adapting to the system and giving correct information useful for agricultural activities and ensuring a good harvest. The proposed machine will also act as a security mechanism that can detect the fingerprints and footprints of intruders who might want to steal from the fields; to achieve this it will use an alarm system.
The design of the machine resembles agricultural equipment, and a tractor design has been chosen: all the microchip controllers and sensors will be mounted on a four-wheeled tractor model.
Value Added Abstracts
Pages: 9 - 9
Dr. Huiyu Zhou
The study of mouse social behaviours has been increasingly undertaken in neuroscience research. However, automated quantification of mouse behaviours from the videos of interacting mice is still a challenging problem, where object tracking plays a key role in locating mice in their living spaces. In this talk, we propose a novel method to continuously detect and track several mice and individual parts without requiring any specific tagging. We evaluate our proposed approach against several baselines on our new datasets, where the results show that our method outperforms the other state-of-the-art approaches in terms of accuracy.
Value Added Abstracts
Pages: 10 - 10
Hind LBAHY
Automated ML is a set of methods that allow non-experts to use machine learning easily, to improve its efficiency, and to dig deeper in their research. Machine learning has recently seen great success, and an ever-growing number of fields rely on its performance. Nevertheless, this success rests on human machine-learning experts performing some tasks manually. Moreover, because these tasks are complex, especially for non-ML experts, machine learning applications have grown rapidly, and the demand for ML methods and processes that are handy and simple to use is becoming ever more important. The design of an effective ML system requires expert knowledge to improve algorithms and optimize results. So how can we use, for example, K-fold cross-validation to look for an optimal tuning parameter? And how can this process be made more efficient? The idea of the machine learning pipeline is based on the automation of machine learning workflows: a given model is trained through several steps. One may remark that the word 'pipeline' suggests a one-way flow of data, which is not the case for ML pipelines; they rely on the repetition of steps and the continuous improvement of learning, which makes them highly cyclical and iterative, yielding successful algorithms and models with good accuracy. The recently developed Tree-based Pipeline Optimization Tool (TPOT) allows the optimization and automatic design of machine learning pipelines for a given problem without human participation. So how can we optimize machine learning pipelines using TPOT and a version of genetic programming, an automated method for creating computer programs from a high-level statement of a problem? To see how to optimize the automatic design of machine learning pipelines, we put these algorithms into practical application to manage the information system incidents of one of the insurance leaders in the world (see the sketch below).
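A minimal usage sketch of the TPOT library follows, assuming its classic scikit-learn-style interface; the digits dataset stands in for the insurance incident data, which is not public.

```python
# Minimal sketch: TPOT searches pipelines with genetic programming,
# scoring every candidate with 5-fold cross-validation (cv=5), which is
# one answer to the tuning question posed above.
from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

tpot = TPOTClassifier(generations=5, population_size=20, cv=5,
                      random_state=42, verbosity=2)
tpot.fit(X_tr, y_tr)                 # evolve pipelines generation by generation
print(tpot.score(X_te, y_te))        # held-out accuracy of the best pipeline
tpot.export("best_pipeline.py")      # emit the winning pipeline as code
```

Exporting the winner as plain scikit-learn code is the step that makes the evolved pipeline auditable and deployable without TPOT itself.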
Value Added Abstracts
Pages: 11 - 11
Ing. Aneta Zemankova
Artificial intelligence currently represents one of the fastest growing fields. Due to its innovative character, this field is constantly changing, with the biggest companies investing enormous amounts of capital to achieve wide use of artificial intelligence in audit and accounting. One of the trending artificial intelligence technologies is blockchain. The main goal of the paper is to analyse blockchain technology and its implications for audit, as well as to evaluate smart contracts and smart audit procedures. Even though blockchain has been mostly associated with digital currencies such as Bitcoin, it is extremely suitable in areas where transferring the value of assets between parties is complicated and expensive, often requiring various central authorities. Therefore, financial services, including audit, might benefit from blockchain to a great extent. The essential results of the paper include an overview of the audit tasks that will be changed by blockchain in the near future. Blockchain changes the way audit evidence is gathered and used, as well as confirmations of outstanding receivables and payables. Sampling, currently a widely used audit tool, will most likely be eliminated due to blockchain's ability to provide real-time verification of all transactions. Furthermore, all the aspects of blockchain contribute to creating a new generation of auditing, based on continuous assurance. Finally, the practical result of this paper is a summary of the Big4 companies' latest artificial intelligence tools and innovations in the field of blockchain.
Value Added Abstracts
Pages: 12 - 12
Jayaraj P B
Conventional drug discovery methods rely primarily on in-vitro experiments conducted with a target molecule and a very large set of small molecules in order to choose the right ligand. With the exploration space for the right ligand being very large, this approach is highly time consuming and requires high capital for facilitation. Virtual screening, a computational technique for evaluating a large group of molecules to identify lead molecules, can be used to speed up the drug discovery process. Ligand-based drug design works by building a conceptual model of the target protein, and ligand-based virtual screening uses this model to evaluate and separate active molecules for a target protein. A class of machine learning algorithms called classification algorithms can be used to build this model. In this abstract, three different machine learning approaches to virtual screening are described. The first method uses an efficient virtual screening technique based on a Random Forest (RF) classifier. The second technique applies an SVM classifier to virtual screening. The third method demonstrates the applicability of the Self-Organizing Map (SOM) as a classifier for screening ligand molecules, which is the first of its kind in this area as per the literature. The talk ends by comparing the pros and cons of the three techniques. The GPU parallelisation of these methods will also be explained in detail.
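To make the classification framing concrete, here is a minimal sketch of the Random Forest variant; the random bit vectors stand in for real molecular fingerprints (e.g. ECFP), and the activity labels are simulated, not screening data.

```python
# Minimal sketch: ligand-based virtual screening as binary classification
# on fingerprint bit vectors, then ranking the library by predicted
# probability of activity.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 1024))   # stand-in 1024-bit fingerprints
y = rng.integers(0, 2, size=1000)           # 1 = active against the target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X_tr, y_tr)

scores = clf.predict_proba(X_te)[:, 1]      # activity scores for ranking
print("AUC:", roc_auc_score(y_te, scores))
```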
Value Added Abstracts
Pages: 13 - 13
José Eduardo García-Mendiola
There are currently several models of the analogy process that have been implemented in many computer systems. Analogy is a process that we use daily in basic thinking and learning tasks, and it is manifested through memories, expectations, and, in general, associations by similarities and differences. Using analogies is as spontaneous and familiar as it is implicitly effective in our continuous, everyday thinking processes. However, arriving at a complete description of the analogy process as humans use it is not an easy task. In this sense, the simulation of the analogy process by computational programs generally accounts only weakly for its complexity. Phenomenology can help us unveil the presuppositions analogy operates with, by identifying and discovering the most original forms of the constitution of objects regarding their genetic origin in passive (or precognitive) and active synthesis. Thus, genetic phenomenology distinguishes between active genesis on the one hand and passive genesis on the other. In the former, the subject participates actively in the constitution of objects, ranging from tools and utensils of daily living to systems of thought, artistic creations, mathematical theorems, or scientific theories. Every active genesis presupposes in the subject a kind of passivity that affects it beforehand. This "passivity" does not mean inactivity; rather, it refers to the fact of being involuntarily affected by a variety of habits, for example dispositions, thinking patterns, motivations, emotions, memories, traditions, and paradigms. The phenomenology of association considers the constitutive role of associative syntheses, which would be the analogical channels or basic elements of the generation of thinking by analogy.
Value Added Abstracts
Pages: 14 - 14
JOSE LUIS DEL MORAL BARILARI
The author holds a Ph.D. in Civil Law with International Mention and, after 8 years of hard research, claims to have discovered a philosophical-mathematical language in which the entire area of the Humanities, and Law in particular, was structured by the Greek mathematician-philosophers. It is a binary language that combines beings with ageres (actions) and allows the complete construction of any act, whether legal or not, and with it the complete computerization of any legal process (whether judicial or extrajudicial). Being a mathematical-philosophical language, it overcomes the barriers of any language and any legislation in the world. Its application would allow, in less than 6 months, the creation of an educational product that would teach the entire Law Degree in approximately 3 months. On the other hand, the algorithmic development of the structure could take place in less than 1 year with no more than 5 developers.
Value Added Abstracts
Pages: 15 - 15
Joseph Dimos
With recent developments in computational law, which comprise updates in AI that apply to belief change and the refining of judgment in critical cases, a revised AI toolkit is needed. This endeavor encompasses a Bayesian method for inferring causal connections from data sources that have notable features. Defining a belief revision operator that is evaluated in conjunction with differences in the temporality of events can thus underpin the structure of rules in legal conventions. This talk introduces soundness upon a belief base that corresponds to derived time parameters. One direction is the application of AGM belief revision, adjoint to a characterization of the axiom of choice, for representing differing states of a belief base. Denoting a case to include the efforts at PennSONG that align with the normative structure of belief change is a means to draw on Alchourrón's 'hierarchies of regulations and their logic'. An extension to Governatori's work on annulments and defeasible logic is also considered, binding abrogation with the construction of rule formats. Throughout, there will be an algorithmic presentation that attributes belief change operators to a classification schema in the setting of discourse analysis (see the sketch below).
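Since the operators are only named here, the following is a deliberately simplified sketch of base revision in the AGM spirit via the Levi identity (revise = contract the negation, then expand); the set-of-signed-literals representation and the legal literals are illustrative assumptions, far weaker than full AGM.

```python
# Minimal sketch: belief bases as sets of signed literals, revision via
# the Levi identity. Real AGM operates on logically closed theories with
# rationality postulates; this toy version only drops a direct conflict.
def negate(lit: str) -> str:
    return lit[1:] if lit.startswith("~") else "~" + lit

def contract(base: set, lit: str) -> set:
    return {b for b in base if b != lit}        # give up the offending belief

def revise(base: set, new: str) -> set:
    return contract(base, negate(new)) | {new}  # Levi identity

base = {"contract_valid", "~penalty_due"}
# New evidence arrives (e.g., a later regulation): a penalty is due.
print(revise(base, "penalty_due"))   # {'contract_valid', 'penalty_due'}
```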
Value Added Abstracts
Pages: 16 - 16
Kacper Radzikowski
Automatic speech recognition (ASR) has been an object of extensive research since the second half of the previous century. ASR systems achieve high accuracy rates, however, only when used to recognize the speech of native speakers. The score drops when the ASR system is used with a non-native speaker of the language to be recognized, as the pronunciation is affected by the patterns of the mother tongue. Traditional approaches to developing speech recognition classifiers are based on supervised learning, relying on the existence of large labeled datasets. For non-native speech, such datasets do not always exist, and even if they do, the number of samples is not always high enough to train accurate classifiers. We dealt with the problem of non-native speech in our previous research using a different approach, dual supervised learning [1]. This time, we tackle the problem of the non-native accent using the style transfer methodology. We designed a pipeline for modifying non-native speech so that it resembles native speech to a higher extent, adjusting style transfer to the domain of speech and sound to create an algorithm for real-time accent modification. Such an approach could allow a non-native speaker's voice to be modified on the fly, so that the ASR system can recognize the speech with higher accuracy. Our methodology could potentially be used as a wrapper for an existing ASR system, reducing the necessity of training new algorithms for non-native speech.
Reference: 1. Radzikowski, K., Wang, L., Yoshie, O., Nowak, R.: Dual supervised learning for non-native speech recognition. EURASIP Journal on Audio, Speech and Music Processing 2019:3, 1-10 (2019), doi:10.1186/s13636-018-0146-4
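To make the transferred methodology concrete, here is a minimal sketch of style transfer carried over to audio: optimizing a log-mel spectrogram to keep the non-native utterance's content features while matching the Gram-matrix statistics of a native reference. The toy encoder, loss weights, and random tensors are assumptions, not the authors' pipeline.

```python
# Minimal sketch: spectrogram style transfer. The "content" is a
# non-native utterance, the "style" a native reference; we optimize the
# spectrogram itself against content and Gram-matrix style losses.
import torch
import torch.nn as nn

encoder = nn.Sequential(                     # toy 1-D conv feature extractor
    nn.Conv1d(80, 128, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv1d(128, 128, kernel_size=3, padding=1), nn.ReLU(),
)

def gram(features):                          # (B, C, T) -> (B, C, C)
    return features @ features.transpose(1, 2) / features.shape[-1]

content = torch.rand(1, 80, 200)             # non-native log-mel spectrogram
style = torch.rand(1, 80, 200)               # native-speaker reference
output = content.clone().requires_grad_(True)

opt = torch.optim.Adam([output], lr=0.01)
for _ in range(100):
    opt.zero_grad()
    f_out, f_con, f_sty = encoder(output), encoder(content), encoder(style)
    loss = nn.functional.mse_loss(f_out, f_con) \
         + 10.0 * nn.functional.mse_loss(gram(f_out), gram(f_sty))
    loss.backward()
    opt.step()
```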
Value Added Abstracts
Pages: 17 - 17
Kelvin Ogba Dafiaghor
Artificial intelligence is the development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making and translation between languages. It is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans, and it is the technology that drives most modern technology, from self-driving cars to even our Snapchat filters. Artificial intelligence may be the best tool to solve global poverty: it can be used to bridge the gap between the haves and the have-nots, between the developed and the underdeveloped world. The purpose of this study is to present a systematic view of why humans need to embrace artificial intelligence fully; the importance and various applications of artificial intelligence; how artificial intelligence is better than all inventions before it; the role of artificial intelligence in fighting the COVID-19 pandemic; and the need for ethical artificial intelligence. This study identifies where humans have gotten to in terms of narrow and strong artificial intelligence. Having carried out inquiries into the fears humans hold about accepting the full possibilities of artificial intelligence, this study answers two important questions: firstly, will artificial intelligence bring about the end of the human race? Secondly, will human resources be lost in the labour market as a result of smarter, more efficient artificial intelligence? It is hoped this study will inform humans on why artificial intelligence is the single most important tool humanity has conceived.
Value Added Abstracts
Pages: 18 - 18
Ken Perez
The United States spends more in aggregate and on a per-capita basis on healthcare than any other country. Spending on medications accounts for over 10% of U.S. healthcare expenditures and has been identified as a key driver of the nation’s higher spending on healthcare. In addition, there are several issues with and shortcomings of the medication management system, including non-adherence and adverse drug events, which often result in emergency department visits and hospitalizations, and increasing administrative burdens placed on clinicians, which cause pharmacists to spend the majority of their time on non-clinical activities. A private sector initiative, the Autonomous Pharmacy Advisory Board, which involves several leading hospitals and health systems from across the U.S., is applying automation, analytics, machine learning and AI to improve operational and clinical outcomes, ensure regulatory compliance, and advance population health. The aforementioned technologies are being used by various companies and healthcare provider organizations to address four significant medication management issues: 1) drug shortages; 2) the opioid crisis; 3) drug diversion; and 4) medication adherence. Practical real-world use cases will be highlighted, along with suggested vectors for future development.
Value Added Abstracts
Pages: 19 - 19
Sander De Bock
Despite industrial (r)evolutions, workers are still exposed to factors that increase their risk of musculoskeletal disorders. Exoskeletons aim to support the worker and reduce musculoskeletal stress. While multiple industrial exoskeletons have reached the market and several have been tested in laboratory conditions, an in-situ comparison of exoskeletons is lacking. In this study two industrial exoskeletons were evaluated: the Laevo and the BackX, both supporting the lower back and hip. Twelve workers completed a test in which the physical work was evaluated without an exoskeleton; subsequently, the test was repeated with the two exoskeleton devices. Each test consisted of two parts: one with isolated movements (e.g. squatting), and one in which the normal working routine was executed. During each trial heart rate was recorded, together with electromyographic (EMG) data of the right lumbar erector spinae, quadriceps vastus medialis, biceps femoris and trapezius descendens. Session rating of perceived exertion (sRPE) scores were gathered after each set of isolated movements and after the in-situ testing. Furthermore, questionnaires regarding the devices' usability, workload and heat were administered. Non-parametric Friedman tests compared the different trials (see the sketch below). Currently, the data of 7 out of 12 subjects have been processed. No differences were found in heart rate, nor in activity of the erector spinae. The sRPE data revealed a trend towards reduced perceived exertion during squatting with one exoskeleton (p = 0.073). During movements in which the hip joint was more stationary, sRPE scores were higher when wearing an exoskeleton (p = 0.007). Exoskeletons reduce perceived exertion while performing the specific movements they were designed for; a higher effort was perceived while wearing the exoskeletons during overhead work.
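A minimal sketch of the non-parametric Friedman test used to compare the three repeated-measures conditions (no exoskeleton, Laevo, BackX) follows; the sRPE scores below are fabricated placeholders, not the study's data.

```python
# Minimal sketch: Friedman test across three within-subject conditions.
from scipy.stats import friedmanchisquare

srpe_none  = [6, 7, 5, 8, 6, 7, 6]   # one sRPE score per subject (n = 7)
srpe_laevo = [5, 6, 5, 7, 5, 6, 5]
srpe_backx = [5, 6, 4, 7, 6, 6, 5]

stat, p = friedmanchisquare(srpe_none, srpe_laevo, srpe_backx)
print(f"Friedman chi-square = {stat:.3f}, p = {p:.3f}")
```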
Value Added Abstracts
Pages: 20 - 20
JAMIL ASAAD
The objective of this research was to develop two computational intelligence (CI) techniques, namely an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS), and furthermore to develop a mathematical model using Design Expert software, for modeling and predicting performance parameters of the Massey Ferguson (MF-285) tractor under various field conditions. In this study an MF-285 tractor was instrumented with a low-cost, precise data logging system to record and monitor the parameters affecting tractor performance, such as forward speed and instantaneous fuel flow rate, during field operation. A moldboard plow was used as the tillage tool during the experiments, under various tillage depths, engine speeds, forward speeds, tire inflation pressures, moisture contents and cone indexes. The acquired data were used to develop accurate models for drawbar pull, rolling resistance, slippage, temporal fuel consumption (TFC), area-specific fuel consumption (AFC), specific fuel consumption (SFC), drawbar power, axle power, net traction ratio, tractive efficiency and power loss.
The results showed that all developed models (ANN, ANFIS and mathematical) had satisfactory performance in predicting the aforementioned tractor parameters under various field conditions. For drawbar pull, the ANN technique achieved the optimum model with a 6-8-1 topology and the Levenberg-Marquardt learning method, with an MSE of 0.000515 and R2 of 0.997. The ANFIS method produced the best model for rolling resistance, with an MSE of 0.00541 and R2 of 0.979. The best model for anticipating slippage was achieved by an ANN with a 6-8-1 topology and Bayesian regularization, with an MSE of 9.3621e-08 and R2 of 0.9999. For drawbar power, the best result was obtained by an ANN with a 6-7-1 topology and the Bayesian regularization training algorithm, with an R2 of 0.995 and MSE of 0.00024.
The results also showed that a 6-7-1 structured ANN with the Levenberg-Marquardt training algorithm gave a good prediction of tractive efficiency, with an R2 of 0.989 and MSE of 0.001327. The ANN model also outperformed the other models in predicting axle power (Levenberg-Marquardt with a 6-7-1 structure, MSE of 0.0001683 and R2 of 0.996), power loss (Bayesian regularization with a 6-9-1 topology, MSE of 0.0001032 and R2 of 0.985) and net traction ratio (Levenberg-Marquardt with a 6-9-1 structure, MSE of 0.0006814 and R2 of 0.994).
The performance in predicting fuel consumption (TFC, AFC and SFC) was acceptable. The ANN model with a 6-7-1 structure and the Levenberg-Marquardt training algorithm had the best performance for TFC prediction, with an R2 of 0.969 and MSE of 0.13427. A 6-8-1 topology with the Levenberg-Marquardt training algorithm showed the best predictive power for AFC, with an R2 of 0.885 and MSE of 0.01348. The ANFIS method achieved the best model for predicting SFC, with an MSE of 0.01475 and R2 of 0.9454.
The obtained results confirmed that the ANN, ANFIS and mathematical models are able to learn the relationships between the input variables and the performance parameters of the tractor very well.
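For readers who want to reproduce the modelling idea, here is a minimal sketch of the 6-8-1 ANN topology with scikit-learn's MLPRegressor; note that scikit-learn trains with Adam or L-BFGS rather than the Levenberg-Marquardt or Bayesian regularization algorithms used in the study, and the six inputs and target below are simulated, not field data.

```python
# Minimal sketch: a 6-input, 8-hidden-unit, 1-output regressor for a
# tractor performance parameter (e.g. drawbar pull), evaluated with the
# same R2/MSE indices reported above.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Six simulated inputs: tillage depth, engine speed, forward speed,
# tire pressure, moisture content, cone index (normalized ranges).
X = rng.uniform(0, 1, size=(90, 6))
y = X @ np.array([0.5, 0.2, 0.8, -0.3, 0.1, 0.6]) + rng.normal(0, 0.05, 90)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                   random_state=1))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R2:", r2_score(y_te, pred), "MSE:", mean_squared_error(y_te, pred))
```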
Value Added Abstracts
Pages: 21 - 21
Dr. Roya Choupani
During the past few years, brain tumor segmentation in Magnetic Resonance Imaging (MRI) has become an emergent research area in the field of medical imaging systems, and accurate detection of brain tumors plays an important role in their diagnosis. My students and I developed a program which analyses the MR images of a patient, recognizes the tumor using image processing, and detects its location. Detection of the required area is a sensitive and critical subject in segmenting medical images. Accuracy and fast computation time are two important criteria for these segmentation algorithms, which give different results depending on the data sets and the anatomic structures in the images. We implemented several of these algorithms and combined them. Our first tests showed that sensitivity and computation time (3 seconds) are good for thresholding with small datasets. The tests also showed that classification-based segmentation algorithms (K-NN, SVM, Bayes) segment tumors accurately and produce good results for large data sets, but undesirable behaviour can occur when a class is under-represented in the training data. In addition, clustering algorithms (K-means, fuzzy) are very simple and fast and produce good results for noise-free images, but for noisy images they lead to serious inaccuracy in the segmentation; this could be solved by applying accurate pre-processing algorithms before segmentation. More tests with this program will continue with new implementations.
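Of the methods compared above, the clustering approach is the easiest to sketch: here K-means groups pixel intensities and the brightest cluster is taken as the tumor mask. The synthetic image with a bright blob is a stand-in for a real MRI slice.

```python
# Minimal sketch: intensity-based tumor segmentation with K-means.
import numpy as np
from sklearn.cluster import KMeans

img = np.random.normal(0.3, 0.05, size=(128, 128))   # background "tissue"
img[40:60, 50:75] += 0.5                              # bright "tumor" region

# Cluster pixel intensities into 3 classes, keep the brightest one.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(img.reshape(-1, 1)).reshape(img.shape)
tumor_class = np.argmax(kmeans.cluster_centers_)
mask = labels == tumor_class

ys, xs = np.nonzero(mask)                             # tumor location
print("bounding box:", ys.min(), ys.max(), xs.min(), xs.max())
```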
Value Added Abstracts
Pages: 22 - 22
Williams Bello
Currently, disease management, symptom classification and diagnosis are performed manually and are time consuming, since it takes a long time for an infected animal to be diagnosed manually, especially for animals remote from a veterinary clinic. These challenges motivate the development of detection tools that operate automatically using deep learning approaches such as convolutional neural networks, which have received great acceptance in the literature. This paper seeks to improve on deep learning methods for managing animal diseases by using a model based on a pre-trained deep belief network, built by stacking restricted Boltzmann machines trained with contrastive divergence, to classify 4000 animal blood images into parasite and non-parasite classes. Features are extracted from the images and used to initialize the visible variables of the deep belief network for training. Finally, the deep belief network is fine-tuned discriminatively to compute the probability of class labels by employing back-propagation. The method's accuracy and precision, at 96.30% and 95.20% respectively, are higher than those of the previous state-of-the-art methods. We believe this paper is among the most recent applications of deep belief networks to detecting trypanosomiasis parasites in animal blood images.
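A minimal sketch of the stacked-RBM idea follows, using scikit-learn's BernoulliRBM (trained with a contrastive-divergence variant) feeding a logistic-regression classifier; unlike the paper's network, this pipeline does not backpropagate through the RBM layers, and the random bit vectors stand in for binarized blood-image features.

```python
# Minimal sketch: two stacked RBMs as unsupervised feature layers, with
# a discriminative classifier on top.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(4000, 256)).astype(float)  # stand-in features
y = rng.integers(0, 2, size=4000)                       # parasite vs. not

dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=10,
                          random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=10,
                          random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
dbn.fit(X, y)
print("train accuracy:", dbn.score(X, y))
```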
Value Added Abstracts
Pages: 23 - 23
Johan von Solms
Since the financial crisis, the role and responsibility of Treasury has grown significantly in terms of scope and strategic importance. There are an ever-increasing number of requirements from Regulators, Senior Management and Shareholders that Treasury needs to deliver on. Regulatory demands are becoming more onerous in terms of granularity and frequency; while the CEO/CFO increasingly requires real-time attestation that the balance sheet is efficiently optimised and protected against risks.
To meet these demands, Treasury needs to revolutionise its time-consuming operational processes and become more strategically orientated. Leveraging digital technologies associated with the 4th Industrial Revolution can enable the transition towards a 'Smart Treasury of the Future'. One technology that shows a lot of promise is Artificial Intelligence (AI).
AI has been around for the last couple of decades. However, recent growth in processing power and the explosion of data available to 'learn from' mean innovative analytical tools are becoming far more useful and effective. This is very relevant for a Treasury that needs to analyse large amounts of liquidity, capital and risk data to inform strategic decisions around the scarce resources of the bank. AI can offer many benefits: improving the predictability of Financial Planning and Forecasting; providing a holistic real-time view of the intraday cashflows required for Liquidity Management; and deriving smart hedging decisions for Interest Rate and FX risks.
AI also brings new challenges, but banks that invest in next-generation technologies will be rewarded with an improved ability to manage risks, higher operational efficiency, reduced operating costs, and a competitive advantage.
Value Added Abstracts
Pages: 24 - 24
Santi Esteva
At the highest level of invention or discovery, we find that many of humanity's advances have been achieved through the eccentricities of their authors, who tried to imagine or create something different from what can be generated by deduction, imitation or reasoning, which are already elaborate forms of intelligence. In this work it is proposed to add one last layer after the abstraction of information from the process data and after the first level of intelligence (such as a neural network, any classification method, reasoning, etc.): a new layer that analyses this information to search for matches with the proposed eccentricities. This can produce a result that can be seen as a higher level of intelligence if it accords with the hypothesis. The premises and viability of the proposed hypothesis are also analyzed.
Value Added Abstracts
Pages: 25 - 25
Somesh Devagekar
The quality of uni-directional tape in the production process is affected by environmental conditions such as temperature and production speed. Machine vision algorithms applied to the scanned images are deployed in this context to detect and classify tape damages during the manufacturing procedure. We perform a comparative study among well-known feature descriptors for fault candidate generation, then propose our own features for fault detection using various machine learning techniques. The empirical results demonstrate the high performance of the proposed system and show a preference for random forest as the classifier and Canny edges as the feature generator.
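A minimal sketch of the preferred combination, Canny edges as the fault-candidate generator with a random forest classifier, is shown below; the synthetic patches and the simple edge-profile descriptors are illustrative assumptions, not the proposed features.

```python
# Minimal sketch: Canny edge maps summarized into fixed-length feature
# vectors, classified by a random forest as damaged vs. intact tape.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def edge_features(patch: np.ndarray) -> np.ndarray:
    edges = cv2.Canny(patch, 100, 200)          # candidate damage contours
    # Simple descriptors: overall edge density plus row/column profiles.
    return np.concatenate([[edges.mean()],
                           edges.mean(axis=0), edges.mean(axis=1)])

rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(200, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 2, size=200)           # 1 = damaged tape (simulated)

X = np.stack([edge_features(p) for p in patches])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```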
Value Added Abstracts
Pages: 26 - 26
Dr Steve Lockey
The rise of Artificial Intelligence (AI) in our society is becoming ubiquitous and undoubtedly holds much promise. However, AI has also been implicated in high-profile breaches of trust or ethical standards, and concerns have been raised over the use of AI in initiatives and technologies that could be inimical to society. Public trust and perceptions of AI trustworthiness underpin AI systems' social licence to operate, and a myriad of company, industry, governmental and intergovernmental reports have set out principles for ethical and trustworthy AI. To guide the responsible stewardship of AI into our society, a firm foundation of research on trust in AI is required to enable evidence-based policy and practice. However, in order to inform and guide future research, it is imperative to first take stock and understand what is already known about human trust in AI. As such, we undertake a review of 100 papers examining the relationship between trust and AI. We found a fragmented, disjointed and siloed literature with an empirical emphasis on experimentation and surveys relating to specific AI technologies. While findings suggest some convergence on the importance of explainability as a determinant of trust in AI technologies, there are still gaps between conceptual arguments and what has been examined empirically. We urge future research to take a more holistic approach and investigate how trust in different referents impacts attitudinal and behavioural intentions. Doing so will facilitate a more nuanced understanding of what it means to develop trustworthy AI.
Value Added Abstracts
Pages: 27 - 27
Tomasz Bąk
The way a mouse, keyboard, tablet or phone is used can be a great source of information about user behaviour. Data can be gathered from a spectrum of sensors built into these devices and then transformed into features which characterize the user, such as manner of typing, cursor speed or mouse movement curve. This presentation will focus on how to use those features to build a behavioural model which is unique and adjusted to each of thousands of customers. This problem has some important additional aspects. Several different models can be built for one user, so some aggregation methods have to be provided. Some of the data has to be anonymized. Continuous authentication is expected by the business. These factors define important limitations for productionizing the models; they will also be discussed.
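A minimal sketch of turning raw events into such behavioural features (cursor speed, movement curvature) follows; the (timestamp, x, y) stream is fabricated for illustration.

```python
# Minimal sketch: per-segment speed and direction change from raw mouse
# samples, aggregated into a small per-user feature vector.
import numpy as np

events = np.array([                 # (t seconds, x px, y px)
    [0.00, 100, 100], [0.05, 110, 104], [0.10, 124, 110],
    [0.15, 140, 118], [0.20, 158, 128],
])
t, x, y = events[:, 0], events[:, 1], events[:, 2]

dx, dy, dt = np.diff(x), np.diff(y), np.diff(t)
speed = np.hypot(dx, dy) / dt                   # px per second per segment
heading = np.arctan2(dy, dx)
curvature = np.diff(heading)                    # change of direction

# Aggregating such vectors over many sessions yields the per-customer
# behavioural model described above.
features = [speed.mean(), speed.std(), np.abs(curvature).mean()]
print(features)
```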
Value Added Abstracts
Pages: 28 - 28
CHAOUACHI Wassim
Structured products are becoming more and more important in the world of investment banking, and more and more investors, including individual investors, are incorporating this type of asset into their portfolios. There are various types of structured products, suitable for different investor profiles.
The objective of this article is to introduce new pricing methods, other than Monte Carlo methods, to speed up the computation time for some structured products called exotic products. We will show how we reduced computation time from 371 days to 2.11 seconds while keeping very accurate precision. First, we introduce the financial products we price and describe the environment. Second, since machine learning techniques had never been used to price products at HSBC, one part of the project was a proof of concept on vanilla products, to see whether such techniques could be applied to more complex products such as exotics. Third, we introduce a new deep learning model for non-linear interpolation to price exotic products, autocallables, first on a single underlying and then on multiple underlyings. The last part of this project was the back-testing of our model over recent months.
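To illustrate the surrogate-pricing idea without HSBC's proprietary autocallable pricer, here is a minimal sketch in which a neural network learns prices from a slow reference pricer and then prices near-instantly; Black-Scholes vanilla calls stand in for the Monte Carlo exotic pricer, and the parameter ranges and network size are assumptions.

```python
# Minimal sketch: train a neural surrogate on a "slow" reference pricer,
# then use it for near-instant non-linear interpolation of prices.
import numpy as np
from scipy.stats import norm
from sklearn.neural_network import MLPRegressor

def bs_call(S, K, T, r, sigma):                 # reference pricer
    d1 = (np.log(S / K) + (r + sigma**2 / 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(0)
n = 20000
X = np.column_stack([rng.uniform(50, 150, n),   # spot
                     rng.uniform(0.1, 2.0, n),  # maturity (years)
                     rng.uniform(0.1, 0.5, n)]) # volatility
y = bs_call(X[:, 0], 100.0, X[:, 1], 0.01, X[:, 2])

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=200,
                         random_state=0).fit(X, y)
# Surrogate vs. reference price for one parameter point:
print(surrogate.predict([[100.0, 1.0, 0.2]]),
      bs_call(100.0, 100.0, 1.0, 0.01, 0.2))
```

The design point is that the expensive pricer is called only offline to build the training grid; in production, evaluation reduces to a forward pass.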
Value Added Abstracts
Pages: 29 - 29
Yassir Karroute
We strive for the future of the IT industry. We stay abreast of the most advanced and promising technologies that are expected to become increasingly commonplace in the next few years. Our experts design and develop tailor-made solutions for highly challenging technical environments and applications. Our key competencies include IoT along with Big Data, Machine Learning, Deep Learning and Blockchain. We strongly believe that one size no longer fits all! Instead, we support the design and development of fully customized, yet enterprise-class solutions that fit directly into disruptive business strategies.
Value Added Abstracts
Pages: 30 - 30
Isham Alzoub
Land leveling is one of the most important steps in soil preparation and cultivation. Although land leveling with machines requires a considerable amount of energy, it delivers a suitable surface slope with minimal deterioration of the soil and damage to plants and other organisms in the soil. In recent years, researchers have tried to reduce fossil fuel consumption and its deleterious side effects using new techniques such as Artificial Neural Networks (ANN), the Imperialist Competitive Algorithm combined with ANN (ICA-ANN), regression, the Adaptive Neuro-Fuzzy Inference System (ANFIS), and Sensitivity Analysis, which can lead to a noticeable improvement in the environment. In this research, the effects of various soil properties on energy consumption were investigated, including embankment volume, soil compressibility factor, specific gravity, moisture content, slope, sand percentage, and soil swelling index. The study consisted of 90 samples collected from 3 different regions; the grid size was set to 20 m by 20 m on farmland in the Karaj province of Iran. The aim of this work was to determine the best model among linear regression, ANFIS and Sensitivity Analysis for predicting the energy consumption of land leveling. According to the results of the Sensitivity Analysis, only three parameters had a significant effect on fuel consumption: density, soil compressibility factor, and embankment volume index. According to the results of the regression, only three parameters had a significant effect on energy consumption: slope, cut-fill volume (V), and soil swelling index (SSI). The use of an adaptive neuro-fuzzy inference system for the prediction of labor energy, fuel energy, total machinery cost, and total machinery energy was successfully demonstrated. In comparison with ANN, all ICA-ANN models had higher prediction accuracy, with higher R2 values and lower RMSE values. The performance of the multivariate ICA-ANN, regression, ANN, Sensitivity Analysis and ANFIS models was evaluated using statistical indices (RMSE, R2). The RMSE and R2 values derived by the ICA-ANN model were 0.0146 and 0.9987 for labor energy, 0.0322 and 0.9975 for fuel energy, 0.0248 and 0.9963 for total machinery cost, and 0.0161 and 0.9987 for total machinery energy, respectively. For the multivariate regression model these values were 0.1394 and 0.9008 for labor energy, 0.1514 and 0.8913 for fuel energy, 0.1492 and 0.9128 for total machinery cost (TMC), and 0.1378 and 0.9103 for total machinery energy. For the ANN model they were 0.0159 and 0.9990 for labor energy, 0.0206 and 0.9983 for fuel energy, 0.0287 and 0.9966 for total machinery cost, and 0.0157 and 0.9990 for total machinery energy. For the Sensitivity Analysis model they were 0.1899 and 0.8631 for labor energy, 0.8562 and 0.0206 for fuel energy, 0.1946 and 0.8581 for total machinery cost, and 0.1892 and 0.8437 for total machinery energy. For the ANFIS model they were 0.0159 and 0.9990 for labor energy, 0.0206 and 0.9983 for fuel energy, 0.0287 and 0.9966 for total machinery cost, and 0.0157 and 0.9990 for total machinery energy. The results showed that the ICA-ANN model with seven neurons in the hidden layer had better performance.
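As one way to reproduce the sensitivity-analysis step described above, the following minimal sketch ranks input properties by permutation importance on a fitted model; the 90 simulated samples and the dominance of the first two properties are assumptions, not the field data.

```python
# Minimal sketch: which soil properties most affect predicted energy
# consumption, via permutation importance on a fitted model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
names = ["embankment volume", "compressibility", "specific gravity",
         "moisture", "slope", "sand percent", "swelling index"]
X = rng.uniform(0, 1, size=(90, 7))
# Simulated energy: driven mostly by the first two properties.
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 0.1, 90)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, imp in sorted(zip(names, result.importances_mean),
                        key=lambda p: -p[1]):
    print(f"{name:20s} {imp:.3f}")
```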
Editorial
Pages: 31 - 31
Editor
We are pleased to invite you to the "International Conference on Humanoid Robotics, Artificial Intelligence and Automation", following the successful completion of the Artificial Intelligence Congress series. The congress is planned to take place in the beautiful city of Las Vegas, USA, on March 26-27, 2021. The Humanoid 2020 gathering will give you an excellent research experience and tremendous ideas. This conference will also present advanced research and advanced techniques for automation and artificial intelligence and its related fields. Humanoid 2020 will concentrate on the theme of innovations and advancements in artificial intelligence. We are certain that you will appreciate the Scientific Program of this upcoming conference. The purpose of the Artificial Intelligence Conference is to present cutting-edge research to help people understand how procedures have progressed and how the field has developed recently. Artificial intelligence is an area of computer science that emphasizes the creation of intelligent machines that work and react like people. It examines how the human brain thinks, learns, decides, and works while attempting to solve a problem, and then uses the results of this investigation, since in the real world the available information often has some unfavourable properties. In the modern world, artificial intelligence can be used in multiple ways to control robots, sensors, actuators and so on. An automation system is a system that controls and monitors a building's operation; such systems can be set up in a few typical ways. In this segment, a general development framework for a building with complex requirements due to its use, such as a consulting room, will be described. The conference will encourage a Young Researchers' Forum, enabling scientists and researchers at an early stage of their careers to discuss their results widely, so as to enrich and develop their ideas. The 'Best Poster Award' is meant to encourage students to take an active part in the international science platform and to sharpen their skills and knowledge base.
Rising Venture Capital Investments Coupled With Increasing R&D Activities in the Autonomous Vehicles Market to Drive the Artificial Intelligence Market in the US Through 2021: According to the TechSci Research report, "United States Artificial Intelligence Market, By Application, By Region, By End User Competition Forecast & Opportunities, 2011-2021", the artificial intelligence market in the US is projected to grow at a CAGR of 75% during 2016-2021 on account of growing adoption of artificial intelligence technology in consumer electronic devices, research and development activities in the healthcare industry, unmanned aerial vehicles, autonomous cars, etc. This research evaluates enterprise robotics in the United States, including companies, technologies, and solutions across industry verticals and applications. The report includes forecasts by industry vertical/application for 2017 through 2021.