ISSN: 2252-8776
Int J Inf & Commun Technol, Vol. 12, No. 1, April 2023: 23-31
[2] E. I. Asonye, E. Emma-Asonye, and M. Edward, “Deaf in Nigeria: A preliminary survey of isolated deaf communities,” SAGE
Open, vol. 8, no. 2, pp. 1–11, Apr. 2018, doi: 10.1177/2158244018786538.
[3] R. Brooks, “A guide to the different types of sign language around the world,” K-International, 2018. https://www.k-international.com/blog/different-types-of-sign-language-around-the-world/ (accessed Jan. 16, 2022).
[4] M. J. Napier, J. Fitzgerald, and E. Pacquette, British Sign Language for Dummies. Chichester: John Wiley & Sons, 2008.
[5] J. Zheng et al., “An improved sign language translation model with explainable adaptations for processing long sign sentences,”
Computational Intelligence and Neuroscience, pp. 1–11, Oct. 2020, doi: 10.1155/2020/8816125.
[6] W. Gao, G. Fang, D. Zhao, and Y. Chen, “A Chinese sign language recognition system based on SOFM/SRN/HMM,” Pattern
Recognition, vol. 37, no. 12, pp. 2389–2402, 2004, doi: 10.1016/S0031-3203(04)00165-7.
[7] M. C. Hu, H. T. Liu, J. W. Li, C. W. Yeh, T. Y. Pan, and L. Y. Lo, “Sign language recognition in complex background scene
based on adaptive skin colour modelling and support vector machine,” International Journal of Big Data Intelligence, vol. 5, no.
1–2, pp. 21–30, 2018, doi: 10.1504/IJBDI.2018.10008140.
[8] H. Wang, X. Chai, X. Hong, G. Zhao, and X. Chen, “Isolated sign language recognition with grassmann covariance matrices,”
ACM Transactions on Accessible Computing, vol. 8, no. 4, pp. 1–21, May 2016, doi: 10.1145/2897735.
[9] Z. Liang, S. Liao, and B. Hu, “3D convolutional neural networks for dynamic sign language recognition,” The Computer Journal,
vol. 61, no. 11, pp. 1724–1736, Nov. 2018, doi: 10.1093/comjnl/bxy049.
[10] S. Yang and Q. Zhu, “Video-based Chinese sign language recognition using convolutional neural network,” in 2017 IEEE 9th
International Conference on Communication Software and Networks, May 2017, pp. 929–934, doi: 10.1109/ICCSN.2017.8230247.
[11] D. Guo, W. Zhou, H. Li, and M. Wang, “Hierarchical LSTM for sign language translation,” in 32nd AAAI Conference on
Artificial Intelligence, AAAI 2018, 2018, pp. 6845–6852, doi: 10.1609/aaai.v32i1.12235.
[12] T. Liu, W. Zhou, and H. Li, “Sign language recognition with long short-term memory,” in 2016 IEEE International Conference
on Image Processing (ICIP), Sep. 2016, pp. 2871–2875, doi: 10.1109/ICIP.2016.7532884.
[13] J. Pu, W. Zhou, and H. Li, “Iterative alignment network for continuous sign language recognition,” in 2019 IEEE/CVF
Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2019, pp. 4160–4169, doi: 10.1109/CVPR.2019.00429.
[14] S. Yang and Q. Zhu, “Continuous Chinese sign language recognition with CNN-LSTM,” in Ninth International Conference on
Digital Image Processing (ICDIP 2017), Jul. 2017, pp. 83–89, doi: 10.1117/12.2281671.
[15] J. Huang, W. Zhou, Q. Zhang, H. Li, and W. Li, “Video-based sign language recognition without temporal segmentation,” in
Proceedings of the AAAI Conference on Artificial Intelligence, Apr. 2018, pp. 2257–2264, doi: 10.1609/aaai.v32i1.11903.
[16] C. Mao, S. Huang, X. Li, and Z. Ye, “Chinese sign language recognition with sequence to sequence learning,” in Communications
in Computer and Information Science, vol. 771, Singapore: Springer, 2017, pp. 180–191, doi: 10.1007/978-981-10-7299-4_15.
[17] N. C. Camgoz, S. Hadfield, O. Koller, H. Ney, and R. Bowden, “Neural sign language translation,” in 2018 IEEE/CVF
Conference on Computer Vision and Pattern Recognition, Jun. 2018, pp. 7784–7793, doi: 10.1109/CVPR.2018.00812.
[18] S.-K. Ko, C. J. Kim, H. Jung, and C. Cho, “Neural sign language translation based on human key point estimation,” Applied
Sciences, vol. 9, no. 13, pp. 1–19, Jul. 2019, doi: 10.3390/app9132683.
[19] O. Koller, N. C. Camgoz, H. Ney, and R. Bowden, “Weakly supervised learning with multi-stream CNN-LSTM-HMMS to
discover sequential parallelism in sign language videos,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.
42, no. 9, pp. 2306–2320, Sep. 2020, doi: 10.1109/TPAMI.2019.2911077.
[20] S. Wang, D. Guo, W. Zhou, Z.-J. Zha, and M. Wang, “Connectionist temporal fusion for sign language translation,” in Proceedings of
the 26th ACM international conference on Multimedia, Oct. 2018, pp. 1483–1491, doi: 10.1145/3240508.3240671.
[21] A. M. Abdullah, J. K. P. Sarwar, and M. A. Fakir, “Flex sensor based hand glove for deaf and mute people,” International
Journal of Computer Networks and Communications Security, vol. 5, no. 2, pp. 38–48, 2017.
[22] V. V. Shelke, V. V. Khaire, P. E. Kadlag, and K. T. V. Reddy, “Communication aid for deaf and dumb people,” International Research Journal of Engineering and Technology, vol. 6, no. 11, pp. 1930–1933, 2019.
[23] J. Wu, L. Sun, and R. Jafari, “A wearable system for recognizing American sign language in real time using IMU and surface EMG
sensors,” IEEE Journal of Biomedical and Health Informatics, vol. 20, no. 5, pp. 1281–1290, 2016, doi: 10.1109/JBHI.2016.2598302.
[24] K.-W. Kim, M.-S. Lee, B.-R. Soon, M.-H. Ryu, and J.-N. Kim, “Recognition of sign language with an inertial sensor-based data
glove,” Technology and Health Care, vol. 24, no. 1, pp. 223–230, Dec. 2015, doi: 10.3233/THC-151078.
[25] J. Galka, M. Masior, M. Zaborski, and K. Barczewska, “Inertial motion sensing glove for sign language gesture acquisition and
recognition,” IEEE Sensors Journal, vol. 16, no. 16, pp. 6310–6316, Aug. 2016, doi: 10.1109/JSEN.2016.2583542.
[26] D. Lu, Y. Yu, and H. Liu, “Gesture recognition using data glove: An extreme learning machine method,” in 2016 IEEE International
Conference on Robotics and Biomimetics (ROBIO), Dec. 2016, pp. 1349–1354, doi: 10.1109/ROBIO.2016.7866514.
[27] K. Nithyakalyani, S. Ramkumar, and K. Manikandan, “Design and implementation of sign language translator using microtouch
sensor,” International Journal of Scientific and Technology Research, vol. 9, no. 1, pp. 1784–1786, 2020.
[28] M. Abirami, P. A. Emi, N. V. Devi, and P. Padmapriya, “Sign language communication using micro touch sensor,” International Journal
of Pure and Applied Mathematics, vol. 119, no. 15, pp. 507–514, 2018.
[29] L. Sousa, J. M. F. Rodrigues, J. Monteiro, P. J. S. Cardoso, and R. Lam, “GyGSLA: A portable glove system for learning sign
language alphabet,” in Universal Access in Human-Computer Interaction, 2016, pp. 159–170, doi: 10.1007/978-3-319-40238-3_16.
[30] N. Caporusso, L. Biasi, G. Cinquepalmi, G. Trotta, A. Brunetti, and V. Bevilacqua, “A wearable device supporting multiple touch- and gesture-based languages for the deaf-blind,” in Advances in Intelligent Systems and Computing, 2018, pp. 32–41, doi: 10.1007/978-3-319-60639-2_4.
[31] S. P. Dawane and H. G. A. Sayyed, “Hand gesture recognition for deaf and dumb people using GSM module,” International Journal
of Science and Research, vol. 6, no. 5, pp. 2226–2230, 2017.
[32] S. P. More and A. Sattar, “Hand gesture recognition system using image processing,” in 2016 International Conference on
Electrical, Electronics, and Optimization Techniques (ICEEOT), Mar. 2016, pp. 671–675, doi: 10.1109/ICEEOT.2016.7754766.
[33] S. V. Matiwade and M. R. Dixit, “Electronic support system for deaf and dumb to interpret sign language of communication,” International Journal of Innovative Research in Science, Engineering and Technology, vol. 5, no. 5, pp. 8683–8689, 2016.
[34] D. Abdulla, S. Abdulla, R. Manaf, and A. H. Jarndal, “Design and implementation of a sign-to-speech/text system for deaf and dumb people,” in 2016 5th International Conference on Electronic Devices, Systems and Applications (ICEDSA), Dec. 2016, pp. 1–4, doi: 10.1109/ICEDSA.2016.7818467.
[35] N. M. Kakoty and M. D. Sharma, “Recognition of sign language alphabets and numbers based on hand kinematics using a data
glove,” Procedia Computer Science, vol. 133, pp. 55–62, 2018, doi: 10.1016/j.procs.2018.07.008.
[36] A. Pino and G. Kouroupetroglou, “ITHACA: An open source framework for building component-based augmentative and alternative communication applications,” ACM Transactions on Accessible Computing, vol. 2, no. 4, pp. 1–30, Jun. 2010, doi: 10.1145/1786774.1786775.
[37] P. N. Huu, H. N. Thi Thu, and Q. T. Minh, “Proposing a recognition system of gestures using MobilenetV2 combining single shot detector
network for smart-home applications,” Journal of Electrical and Computer Engineering, pp. 1–18, Feb. 2021, doi: 10.1155/2021/6610461.