Abdelmoty M Ahmed
Al Azhar University, Egypt
Posters & Accepted Abstracts: Adv Robot Autom
Intelligent machine translation systems (MTSs) that translate from or to sign languages (SLs) are important in a world showing continuously growing interest in removing the communication obstacles faced by individuals with special needs. Such systems can greatly assist communication between hearing-impaired communities and other communities. For the hearing-impaired, these systems are the counterpart of the speech recognition systems that hearing people use to interact with machines in a more natural way. Few researchers have tried to develop MTSs from and to visual sign language, and very few of these attempts addressed Arabic sign language (ArSL). None of them succeeded in developing a reliable industrial product, especially for translation from ArSL to Arabic text. This paper presents the first system that translates ArSL to Arabic text using deep machine learning techniques. Our system is based on three main subsystems: an Arabic video sign understanding subsystem, a mapping (pattern matching) subsystem between ArSL and Arabic text, and an Arabic text generation (transformation) subsystem. The second and third subsystems are deep modules whose design is based on the Convolutional Neural Network (CNN) algorithm. Additionally, this paper gives a first prototype application of Deep Neural Networks (DNNs) in Machine Translation (MT) that attempts to improve standard MTSs. The experimental results show the effectiveness of the proposed approaches: the system achieves more than 95% accuracy in the translation and recognition of Arabic video signs. Our contribution centers on the recognition subsystem, which uses CNNs, a hybrid of support vector machines (SVMs) with CNNs, and GPU acceleration. Instead of constructing complex handcrafted features, CNNs automate the process of feature construction.
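The abstract does not specify the network architecture, so the following is only a minimal NumPy sketch of the idea named above: a convolutional layer learns features from a video frame automatically (rather than handcrafting them), and a classifier head maps those features to the 50 gesture classes. The filter count, frame size, and random weights are illustrative assumptions, not the authors' actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernels):
    """Valid 2-D convolution of a single-channel image with a bank of kernels."""
    kh, kw = kernels.shape[1:]
    h, w = x.shape
    out = np.empty((kernels.shape[0], h - kh + 1, w - kw + 1))
    for f, k in enumerate(kernels):
        for i in range(h - kh + 1):
            for j in range(w - kw + 1):
                out[f, i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(frame, kernels, W, b):
    """CNN forward pass: conv -> ReLU -> global average pool -> linear softmax."""
    feat = np.maximum(conv2d(frame, kernels), 0.0)  # learned features, not handcrafted
    pooled = feat.mean(axis=(1, 2))                 # one descriptor value per filter
    return softmax(W @ pooled + b)                  # probabilities over gesture classes

NUM_CLASSES = 50   # 50 ArSL gestures, as reported in the abstract
NUM_FILTERS = 8    # illustrative; the real filter count is not given

frame = rng.random((32, 32))                         # stand-in for one video frame
kernels = rng.standard_normal((NUM_FILTERS, 3, 3)) * 0.1
W = rng.standard_normal((NUM_CLASSES, NUM_FILTERS)) * 0.1
b = np.zeros(NUM_CLASSES)

probs = forward(frame, kernels, W, b)
print(probs.shape)
```

In the hybrid SVM-with-CNN variant mentioned above, the pooled feature vector would be fed to an SVM instead of the softmax head; GPU acceleration would replace these Python loops with batched tensor operations.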
We are able to recognize 50 Arabic gestures with high accuracy. The predictive model generalizes to users and surroundings not seen during training, with a cross-validation accuracy of 91.7%.

abd2005moty@yahoo.com
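The 91.7% figure above is a subject-independent cross-validation result: every evaluation fold contains only users absent from training. The abstract does not describe the exact protocol, so this is a sketch assuming a leave-one-user-out split, with a trivial nearest-centroid classifier standing in for the actual CNN recognizer.

```python
import numpy as np

rng = np.random.default_rng(1)

def nearest_centroid_fit(X, y):
    """Stand-in classifier: one centroid per gesture class."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(model, X):
    classes, centroids = model
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

def leave_one_user_out(X, y, users):
    """Each fold holds out every sample of one user, so the mean accuracy
    reflects generalization to signers never seen during training."""
    accs = []
    for u in np.unique(users):
        test = users == u
        model = nearest_centroid_fit(X[~test], y[~test])
        pred = nearest_centroid_predict(model, X[test])
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))

# Synthetic demo: 3 users, 2 gesture classes, 10 samples per class per user.
users = np.repeat(np.arange(3), 20)
y = np.tile(np.repeat(np.arange(2), 10), 3)
means = np.array([[0.0, 0.0], [10.0, 10.0]])
X = means[y] + rng.normal(scale=0.5, size=(60, 2))

acc = leave_one_user_out(X, y, users)
print(round(acc, 3))
```

Replacing the nearest-centroid stand-in with the trained CNN (or CNN+SVM) recognizer at each fold yields the protocol the reported accuracy appears to describe.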