Mobile Phone-Based Audio Announcement Detection and Recognition for People with Hearing Impairment

Bibliographic Details
Title: Mobile Phone-Based Audio Announcement Detection and Recognition for People with Hearing Impairment
Authors: Yong Ruan, Yueliang Qian, Xiangdong Wang
Source: Advances in Multimedia, Vol 2018 (2018)
Publisher Information: Hindawi Limited, 2018.
Publication Year: 2018
Collection: LCC:Electronic computers. Computer science
Subject Terms: Electronic computers. Computer science, QA75.5-76.95
More Details: Automatic audio announcement systems are widely used in public places such as transportation vehicles and facilities, hospitals, and banks. However, these systems cannot be used by people with hearing impairment, which causes great inconvenience in their daily lives. In this paper, an approach to audio announcement detection and recognition for people with hearing impairment based on smartphones is proposed, and a mobile phone application (app) is developed, taking the bank as the main application scenario. Using the app, users can register alerts for their numbers, and the system then begins to detect audio announcements using the microphone of the smartphone. For each audio announcement detected, the speech within it is recognized and the text is displayed on the screen of the phone. When the number entered by the user is announced, an alert is given by vibration. For audio announcement detection, a method based on audio segment classification and postprocessing is proposed, using an SVM classifier trained on audio announcements and environmental noise collected in banks. For announcement speech recognition, an ASR engine is developed using a GMM-HMM-based acoustic model and a finite state transducer (FST) based grammar. The acoustic model is trained on announcement speech collected in banks, and the grammar is defined manually according to the patterns used by the automatic announcement systems. Experimental results show that character error rates (CERs) of around 5% can be achieved for the announcement speech, which demonstrates the feasibility of the proposed method and system.
Document Type: article
File Description: electronic resource
Language: English
ISSN: 1687-5680
1687-5699
Relation: https://doaj.org/toc/1687-5680; https://doaj.org/toc/1687-5699
DOI: 10.1155/2018/8786308
Access URL: https://doaj.org/article/acdd3fb3e55d4d569db426fb0b9f1092
Accession Number: edsdoj.3fb3e55d4d569db426fb0b9f1092
Database: Directory of Open Access Journals
Text:
  Availability: 1
  Value:

Mobile Phone-Based Audio Announcement Detection and Recognition for People with Hearing Impairment

Abstract

Automatic audio announcement systems are widely used in public places such as transportation vehicles and facilities, hospitals, and banks. However, these systems cannot be used by people with hearing impairment, which causes great inconvenience in their daily lives. In this paper, an approach to audio announcement detection and recognition for people with hearing impairment based on smartphones is proposed, and a mobile phone application (app) is developed, taking the bank as the main application scenario. Using the app, users can register alerts for their numbers, and the system then begins to detect audio announcements using the microphone of the smartphone. For each audio announcement detected, the speech within it is recognized and the text is displayed on the screen of the phone. When the number entered by the user is announced, an alert is given by vibration. For audio announcement detection, a method based on audio segment classification and postprocessing is proposed, using an SVM classifier trained on audio announcements and environmental noise collected in banks. For announcement speech recognition, an ASR engine is developed using a GMM-HMM-based acoustic model and a finite state transducer (FST) based grammar. The acoustic model is trained on announcement speech collected in banks, and the grammar is defined manually according to the patterns used by the automatic announcement systems. Experimental results show that character error rates (CERs) of around 5% can be achieved for the announcement speech, which demonstrates the feasibility of the proposed method and system.

1. Introduction

Voice is one of the most natural and important communication methods for human beings [1]. It plays an increasingly important role and brings great convenience to our daily lives. For instance, automatic audio announcement systems are widely used in public places such as transportation vehicles and facilities, hospitals, and banks. In China, for example, each customer in a bank receives a number on arrival and waits until the audio announcement containing that number tells him or her which counter to go to. However, these systems cannot be used by people with hearing impairment, which obviously brings great inconvenience to their lives. Although the audio announcements are sometimes accompanied by text announcements displayed on screens, this is still inconvenient, since hearing-impaired people must constantly watch the screens, which is tiring and may lead to missed information.

To make the announcements available to hearing-impaired people as well, wireless sensor networks or short message systems have been adopted to provide reminders through channels other than voice [2]. The disadvantage of these systems is that additional infrastructure must be deployed in the application scenarios, which is often difficult and expensive.

In this paper, we propose a mobile phone-based solution that avoids deploying additional systems in public places. An application (app) installed on the mobile phone automatically detects and recognizes audio announcements and reminds the user by vibration and text on the mobile phone. There are two major challenges in the system: detecting audio announcements in audio collected in real-world public places, and recognizing the speech in the detected announcements.

The detection of audio announcements can be viewed as a task of content-based audio classification, which is inherently a difficult pattern recognition problem. There are two main issues in audio classification: the selection of audio features and the choice of classifiers [3]. For feature selection, Pfeiffer et al. [4] proposed a theoretical framework that uses a series of perceptual features for automatic audio content analysis. Li et al. [5] studied the effect of a total of 143 audio features, showing that cepstrum-based features such as Mel-frequency cepstral coefficients (MFCCs) and linear predictive cepstral coefficients (LPCCs) are more effective than short-term and spectral features for audio classification. Feature-level fusion [6] and adaptive approaches [7] are also useful in audio classification. As for classifiers used in audio classification [8], Lu et al. [9] used a support vector machine (SVM) to classify audio into five categories: silence, music, background sound, the voice of a pure speaker, and the voice of a speaker under noise or music; a similar SVM-based approach combined with the cuckoo algorithm has also been proposed [10].

Automatic speech recognition (ASR) techniques have been studied for decades and used in many real-world applications. Earlier ASR systems used Hidden Markov Models (HMMs) as acoustic models. In recent years, the application of Deep Neural Networks (DNNs) [11] has significantly improved the accuracy of speech recognition. Neural networks such as Convolutional Neural Networks (CNNs) [12] and Recurrent Neural Networks (RNNs) [13, 14] have been used and proved promising [15], and end-to-end speech recognition methods [16, 17] using Connectionist Temporal Classification (CTC) [18] and attention [19, 20] have also been proposed to further improve performance. Although DNN-based approaches are reported to outperform HMM-based systems, very large amounts of training data are needed to train DNNs. For applications on mobile phones, there are also many open cloud services for speech recognition. For example, in China, Baidu, iFLYTEK, Unisound, and other companies provide convenient remote speech recognition interfaces that can be called by apps on mobile phones. However, these services mainly target speech in general domains and low-noise environments and may yield poor performance for speech in special scenarios such as far-field audio announcements in noisy public places [21, 22].

In this paper, an approach to audio announcement detection and recognition for people with hearing impairment based on smartphones is proposed, and an Android app is developed, taking the bank as the main application scenario. Using the app, users can register alerts for their numbers, and the system then begins to detect audio announcements using the microphone of the smartphone. For each audio announcement detected, the speech within it is recognized and the text is displayed on the screen of the phone. When the number entered by the user is announced, an alert is given by vibration. For audio announcement detection, a method based on audio segment classification and postprocessing is proposed, using an SVM classifier trained on audio announcements and environmental noise collected in banks. For announcement speech recognition, an ASR engine is developed using a GMM-HMM-based acoustic model and a finite state transducer (FST) based grammar. The acoustic model is trained on announcement speech collected in banks, and the grammar is defined manually according to the patterns used by the automatic announcement systems. Experimental results show that character error rates (CERs) of around 5% can be achieved for the announcement speech, which demonstrates the feasibility of the proposed method and system.

2. System Overview

In this paper, a method of audio announcement detection and recognition is proposed for people with hearing impairment. Based on the method, we developed a mobile phone-based system, i.e., an Android app that can remind hearing-impaired people of audio announcements containing specified keywords, e.g., numbers. The current version of the app mainly targets the bank scenario, although the proposed method can be used in almost all public places with audio announcements.

The framework of the proposed system is shown in Figure 1. The user interface receives user input (a keyword, e.g., a number) and starts monitoring for the keyword. During monitoring, status information is displayed, and when the keyword is detected, the user interface notifies the user by vibration. Keyword monitoring is achieved by real-time detection and recognition of audio announcements in the ambient audio collected by the mobile phone. For each audio announcement detected, keyword matching is performed between the text of the audio announcement and the keyword entered by the user to decide whether the keyword has been detected.
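To make this flow concrete, the following minimal Python sketch shows the monitoring loop from buffered microphone audio to the vibration alert. The buffer handling and the function names detect_announcement, recognize_announcement, and notify_user are assumptions for illustration only; the actual system is an Android app whose detection and recognition modules are described in Sections 3 and 4.

    # Minimal sketch of the keyword monitoring loop (assumed names and parameters).
    from collections import deque

    SAMPLE_RATE = 16000               # 16 kHz mono PCM, as used in the experiments
    WINDOW_SECONDS = 6                # announcements in banks are within 6 seconds

    def monitor(audio_stream, keyword, detect_announcement,
                recognize_announcement, notify_user):
        """Scan buffered audio, recognize detected announcements, and alert the
        user by vibration when the keyword (e.g., a queue number) is announced."""
        buffer = deque(maxlen=WINDOW_SECONDS * SAMPLE_RATE)
        for chunk in audio_stream:                      # fixed-size chunks of samples
            buffer.extend(chunk)
            if len(buffer) < buffer.maxlen:
                continue                                # wait until the buffer is full
            segment = list(buffer)
            if detect_announcement(segment):            # SVM-based detector (Section 3)
                text = recognize_announcement(segment)  # GMM-HMM ASR engine (Section 4)
                print("Announcement:", text)            # displayed on screen in the app
                if keyword in text:
                    notify_user()                       # trigger the vibration alert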
A more detailed procedure of the proposed system is shown in the flowchart in Figure 2 and illustrated by the screenshots of the app in Figure 3. After launching the app, the user sets the keyword to be monitored, usually the number assigned on arrival at the bank, as shown in Figure 3(a). The mobile phone then starts to continuously collect audio data and store it in a buffer pool. At the same time, the system begins detecting audio announcements using the data in the buffer; each time, a fixed amount of audio data is processed. If an audio announcement is detected, the audio data containing the announcement speech is passed to the announcement recognition module, a speech recognition engine trained specifically for audio announcements. The module transforms the audio into text and matches the text against the user-specified keyword. If the match succeeds, vibration is triggered to notify the user, as shown in Figure 3(c). Since a failed match may be caused by speech recognition errors, the recognition result of each detected announcement is also displayed on the mobile phone screen during monitoring, as shown in Figure 3(b). With this auxiliary information, the user can better understand the current situation and may correct speech recognition errors using context knowledge.

The core of the system is the automatic detection and recognition of audio announcements. Machine learning based methods are proposed for both tasks and are detailed in the following sections. The methods for data collection and keyword matching are relatively simple and are not described further.

3. Audio Announcement Detection

The procedure of audio announcement detection is shown in Figure 4.

The audio data collected and stored in the buffer are first divided into frames with a length of 25 ms and no overlap. Preprocessing, including pre-emphasis and Hamming windowing, is then performed, and audio features are extracted for each frame. The features adopted are 13 MFCC coefficients, short-time energy, and zero-crossing rate, so a 15-dimensional feature vector is extracted for each frame.

A classification-based scheme is used for audio announcement detection. Instead of classifying each frame, classification is performed on segments of 0.5 seconds, since segments are more distinctive between audio announcements and background noise and segment-level classification is more efficient. Every 20 frames, corresponding to an audio segment of 0.5 seconds, are combined into a group, and the frame features are concatenated into a 300-dimensional super-vector, which is then fed into the classifier.

As for the classifier, a Support Vector Machine (SVM) with an RBF kernel is adopted due to its wide use in content-based audio classification. An SVM model is trained on audio data collected in banks. Audio announcements are manually segmented from the audio and used as positive samples, while the remaining audio without announcements is treated as negative samples. Since the audio announcements are relatively few, 1/5 of the negative samples are randomly selected for training. During the training stage, 8-fold cross-validation is performed to tune the parameters.
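A minimal sketch of this feature extraction and classification step is given below. The paper does not specify an implementation; the use of librosa and scikit-learn, as well as the variable names, are assumptions for illustration.

    # Sketch of the 15-dimensional frame features, the 300-dimensional segment
    # super-vectors, and the SVM classifier (library choices are assumed).
    import numpy as np
    import librosa
    from sklearn.svm import SVC

    SR = 16000
    FRAME = int(0.025 * SR)          # 25 ms frames, no frame shift
    GROUP = 20                       # 20 frames = one 0.5 s segment

    def frame_features(y):
        """13 MFCCs + short-time energy + zero-crossing rate per frame."""
        y = librosa.effects.preemphasis(y)
        mfcc = librosa.feature.mfcc(y=y, sr=SR, n_mfcc=13, n_fft=FRAME,
                                    hop_length=FRAME, window="hamming", center=False)
        zcr = librosa.feature.zero_crossing_rate(y, frame_length=FRAME,
                                                 hop_length=FRAME, center=False)
        frames = librosa.util.frame(y, frame_length=FRAME, hop_length=FRAME)
        energy = (frames ** 2).sum(axis=0, keepdims=True)
        n = min(mfcc.shape[1], energy.shape[1], zcr.shape[1])
        return np.vstack([mfcc[:, :n], energy[:, :n], zcr[:, :n]]).T   # (n_frames, 15)

    def segment_vectors(y):
        """Concatenate every 20 frame vectors into one 300-dimensional super-vector."""
        feats = frame_features(y)
        n_seg = len(feats) // GROUP
        return feats[:n_seg * GROUP].reshape(n_seg, GROUP * feats.shape[1])

    # Training with manually segmented announcements (positive) and background
    # noise (negative); probability estimates are used by the postprocessing step.
    # clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
    # seg_probs = clf.predict_proba(segment_vectors(audio))[:, 1]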
By classifying the audio every 0.5 seconds with the SVM, the probability of each 0.5-second segment being part of an audio announcement is obtained. Postprocessing is then performed to decide the starting and ending times of the audio announcement based on these probabilities. In our observation, all the audio announcements in banks are within 6 seconds, so a sliding window of 6 seconds is adopted. Each time, 12 adjacent groups of frames are classified, yielding 12 probabilities p_i, i = 1, ..., 12. The average probability p is then calculated as

    p = (1/12) * sum_{i=1}^{12} p_i                                   (1)

If p is greater than a predetermined threshold, the 6-second segment is judged to be an audio announcement.

Figure 5 shows an example of the postprocessing. The abscissa indicates time (seconds) and the ordinate indicates the probability value; each point is the probability of the corresponding 0.5-second audio segment being an audio announcement. The red lines indicate the predicted starting and ending times of the audio announcement, and the green lines indicate the ground truth labelled by a human.
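The averaging and thresholding of equation (1) can be sketched as follows; the threshold value of 0.5 is an assumption for illustration, as the paper only states that a predetermined threshold is used.

    # Sketch of the postprocessing: average the probabilities of 12 adjacent
    # 0.5-second segments (a 6-second window) and compare with a threshold.
    import numpy as np

    WINDOW = 12              # 12 x 0.5 s = 6 s sliding window
    THRESHOLD = 0.5          # assumed value; the paper uses a predetermined threshold

    def detect_announcements(seg_probs, threshold=THRESHOLD):
        """Return (start, end) times in seconds of windows judged as announcements."""
        seg_probs = np.asarray(seg_probs)
        hits = []
        for start in range(len(seg_probs) - WINDOW + 1):
            p = seg_probs[start:start + WINDOW].mean()       # equation (1)
            if p > threshold:
                hits.append((0.5 * start, 0.5 * (start + WINDOW)))
        return hits    # overlapping windows can be merged into one detection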
4. Audio Announcement Recognition

After the audio announcements are obtained, the speech within them needs to be recognized to yield the text used for keyword matching. Because of the high noise and, especially, the long collection distance of audio recorded in public places such as banks, general-purpose speech recognition systems, which focus on speech collected by close-talking microphones, cannot be used. We tried several cloud speech recognition services, but very poor recognition results were obtained (experimental results are given in the following section), and some engines even returned null for most of the speech. Therefore, we built a speech recognition engine for audio announcements in banks ourselves, using the open-source toolkit Kaldi.

The main challenge in building a speech recognition engine for audio announcements is the lack of data. We therefore collected 27 hours of audio data in 5 banks, containing 995 audio announcements in total. Although this amount is still small compared with other speech corpora, the speech in the audio announcements has a limited vocabulary and follows an almost fixed pattern of expression. Therefore, a continuous speech recognition system with a small vocabulary and a simple grammar can be built from the collected data.

Due to the small amount of training data, models based on deep neural networks are not suitable, and the traditional GMM-HMM model is adopted. During training, the data are divided into frames with a 25 ms frame length and a 10 ms frame shift. A 12-dimensional MFCC feature is extracted for each frame and, together with the short-time energy, forms a 13-dimensional vector; after first-order and second-order differentiation, a 39-dimensional feature vector is obtained. The dictionary uses 48 initials and vowels as phonemes. Both mono-phone and tri-phone models are trained, and the tri-phone training uses decision-tree clustering for state tying to reduce the number of states. The Baum-Welch algorithm is used for training, and the Viterbi algorithm with beam search is used for decoding.

For scenarios such as banks, hospitals, and transportation vehicles and facilities, audio announcements are mostly generated automatically and follow fixed patterns. For example, in Chinese banks, audio announcements are usually of the form "Customer A0017, please go to counter 8". Therefore, a grammar is defined according to the announcement pattern in banks and used as the language model in the speech recognition system. The grammar is represented by a finite state transducer, as shown in Figure 6, and is defined using the OpenFst tools in Kaldi.
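As a rough illustration of such a grammar, the sketch below writes an acceptor in the OpenFst text format for a simplified English version of the pattern ("customer <letter><digits> please go to counter <digits>"). The vocabulary, state layout, and file names are assumptions for illustration; the actual grammar models Chinese bank announcements and is shown in Figure 6.

    # Illustrative sketch: generate an OpenFst text-format grammar G.txt for a
    # simplified announcement pattern (assumed vocabulary, not the paper's grammar).
    letters = [chr(c) for c in range(ord("A"), ord("Z") + 1)]
    digits = [str(d) for d in range(10)]

    arcs = [(0, 1, "customer")]                   # (source, destination, word)
    arcs += [(1, 2, w) for w in letters]          # ticket prefix letter, e.g. "A"
    arcs += [(2, 2, w) for w in digits]           # ticket digits (self-loop)
    arcs += [(2, 3, w) for w in digits]           # last ticket digit
    arcs += [(3, 4, "please"), (4, 5, "go"), (5, 6, "to"), (6, 7, "counter")]
    arcs += [(7, 8, w) for w in digits]           # first counter digit
    arcs += [(8, 8, w) for w in digits]           # further counter digits
    final_state = 8

    with open("G.txt", "w") as f:                 # AT&T text format: src dst in out
        for src, dst, word in arcs:
            f.write(f"{src} {dst} {word} {word}\n")
        f.write(f"{final_state}\n")
    # The text FST can then be compiled with OpenFst, e.g.
    #   fstcompile --isymbols=words.txt --osymbols=words.txt G.txt G.fst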
5. Experiments

5.1. Experiment Setup

The experimental data are audio recordings collected by mobile phones in 5 banks belonging to 3 different companies. The data are stored as 16 kHz, 16-bit, mono PCM WAV files. Information about the data is given in Table 1.

Table 1: Information of the experimental data.

    Bank          Duration (hours)   Number of announcements
    Zhichunlu            10                  276
    Shuangyushu           4                   97
    Kexueyuan             7                  426
    Xili                  4                  157
    Haidian               2                   39
    Total                27                  995

For audio announcement detection, the task is to detect the audio announcements in the audio and give their time boundaries. In this experiment, the input is the audio files collected in banks, and the output is the starting and ending times of the audio announcements in all the files; a file may contain multiple announcements or none. The training set is formed by all audio from the first 4 banks, and the test set contains the audio from the fifth bank. The test data consist of 50 audio files with announcements and 10 files without. The starting and ending times of the announcements in the test data are labelled manually as the ground truth. A detection is considered correct if the deviation between the ground truth and the detected time is less than a given threshold. We compute the recall rate, precision rate, and F1 value to evaluate the overall detection performance.
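A minimal sketch of this evaluation is shown below; matching each predicted announcement to at most one ground-truth announcement is an assumption for illustration, since the paper does not detail the matching procedure.

    # Sketch of the detection metric: a prediction is correct if both its start
    # and end deviate from a ground-truth announcement by less than the tolerance.
    def evaluate_detection(predicted, reference, tolerance=1.0):
        """predicted, reference: lists of (start, end) times in seconds."""
        matched, true_positives = set(), 0
        for ps, pe in predicted:
            for i, (rs, re) in enumerate(reference):
                if i not in matched and abs(ps - rs) < tolerance and abs(pe - re) < tolerance:
                    matched.add(i)
                    true_positives += 1
                    break
        precision = true_positives / len(predicted) if predicted else 0.0
        recall = true_positives / len(reference) if reference else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return precision, recall, f1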
For speech recognition, two experiments are conducted with different datasets to analyse the effect of the data collection sites. The first training set consists of 770 audio files, each containing one audio announcement, collected in four banks, while the corresponding test set contains 132 audio files collected in a different (fifth) bank. The second training set consists of 802 audio files collected in all five banks, and the corresponding test set contains 100 audio files also collected in the five banks. The training and test sets never contain the same audio, even audio collected on the same day in the same bank. The character error rate (CER) and the sentence error rate (SER) are used to measure recognition accuracy. To demonstrate the infeasibility of current general-purpose speech recognition engines, three cloud Mandarin speech recognition services, namely Baidu, iFLYTEK, and Unisound, are also used to recognize both test sets.
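For reference, the character error rate can be computed as the character-level edit distance between the recognized text and the reference transcript divided by the number of reference characters, as in the minimal sketch below (illustrative only, not the paper's scoring script).

    # Sketch of the CER metric: character-level Levenshtein distance divided by
    # the number of reference characters.  SER is the fraction of announcements
    # whose recognized text differs from the reference in any character.
    def cer(reference: str, hypothesis: str) -> float:
        ref, hyp = list(reference), list(hypothesis)
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i                                  # deletions
        for j in range(len(hyp) + 1):
            dp[0][j] = j                                  # insertions
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                               dp[i][j - 1] + 1,          # insertion
                               dp[i - 1][j - 1] + cost)   # substitution or match
        return dp[len(ref)][len(hyp)] / max(len(ref), 1)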
5.2. Experimental Results

For audio announcement detection, experimental results are shown in Table 2, where the tolerance is the threshold that both the starting and ending time deviations must be below. An F1 value of 0.895 and a high recall rate (94%) are achieved with a 1-second tolerance. The speech recognition module does not require accurate time boundaries and can handle speech with background noise, so the 1-second tolerance is sufficient for speech recognition. An advantage of the proposed method is its high recall: in the intended application scenarios, missing an audio announcement makes the system useless, whereas the user can tolerate some false alarms to a degree.

Table 2: Experimental results of audio announcement detection.

    Tolerance     Precision rate   Recall rate   F1
    0.5 s         0.822            0.740         0.778
    1.0 s         0.855            0.940         0.895

The experimental results of audio announcement recognition are shown in Table 3. The mono-phone model outperforms the tri-phone model, which may be due to the small amount of training data. As for the two datasets, the performances do not differ much, and the CER on the first dataset is even lower, which indicates that the system is robust enough to be used in banks from which no training data were collected. For the mono-phone model, a CER of about 5% is achieved, which shows the feasibility of the proposed method and system. For the general-purpose speech recognition services, the CERs are very high and the SERs are near 100%, because those engines mainly target speech collected by close-talking microphones and cannot deal with far-field speech collected by a single mobile phone microphone.

Table 3: Results of audio announcement recognition. The first dataset uses training data from 4 banks and test data from the fifth bank; the second dataset uses training and test data from all 5 banks.

                  First dataset          Second dataset
                  CER (%)   SER (%)      CER (%)   SER (%)
    Mono-phone      4.11     34.09         5.12     33.33
    Tri-phone       5.41     40.15         5.41     40.40
    Baidu          85.54    100.00        73.79    100.00
    iFLYTEK        54.67     98.34        38.86     95.41
    Unisound       95.10    100.00        88.93    100.00

6. Conclusions

This article describes a novel system running on a mobile phone that helps people with hearing impairment avoid missing audio announcements that concern them in public places. An approach to audio announcement detection and recognition is proposed and an Android app is developed, taking the bank as the main application scenario. For audio announcement detection, a method based on audio segment classification and postprocessing is proposed, using an SVM classifier trained on audio announcements and environmental noise collected in banks. For announcement speech recognition, an ASR engine is developed using a GMM-HMM-based acoustic model and an FST-based grammar. Experimental results show that character error rates (CERs) of around 5% can be achieved for the announcement speech, which demonstrates the feasibility of the proposed method and system. Future work includes extending the system to more public places and improving the keyword matching module.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work is partly supported by the Beijing Natural Science Foundation (4172058).

References

1. Juang B.-H., Furui S., "Automatic recognition and understanding of spoken language - a first step toward natural human-machine communication," Proceedings of the IEEE, vol. 88, no. 8, pp. 1142-1165, 2000. doi:10.1109/5.880077
2. Chander D., Sireesha M. V., "Passenger bus alert system for easy navigation of blind," 2004.
3. Lu L., Zhang H.-J., Jiang H., "Content analysis for audio classification and segmentation," IEEE Transactions on Audio, Speech and Language Processing, vol. 10, no. 7, pp. 504-516, 2002. doi:10.1109/tsa.2002.804546
4. Pfeiffer S., Fischer S., Effelsberg W., "Automatic audio content analysis," Proceedings of the 4th ACM International Multimedia Conference, pp. 21-30, November 1996.
5. Li D., Sethi I. K., Dimitrova N., McGee T., "Classification of general audio data for content-based retrieval," Pattern Recognition Letters, vol. 22, no. 5, pp. 533-544, 2001. doi:10.1016/S0167-8655(00)00119-7
6. Palo H. K., Mohanty M. N., Chandra M., "Efficient feature combination techniques for emotional speech classification," International Journal of Speech Technology, vol. 19, no. 1, pp. 135-150, 2016. doi:10.1007/s10772-016-9333-9
7. Khaldi K., Boudraa A.-O., Turki M., "Voiced/unvoiced speech classification-based adaptive filtering of decomposed empirical modes for speech enhancement," IET Signal Processing, vol. 10, no. 1, pp. 69-80, 2016. doi:10.1049/iet-spr.2013.0425
8. Baghel S., Prasanna S. R. M., Guha P., "Classification of multi speaker shouted speech and single speaker normal speech," Proceedings of the 2017 IEEE Region 10 Conference (TENCON 2017), Malaysia, pp. 2388-2392, November 2017.
9. Lu L., Zhang H.-J., Li S. Z., "Content-based audio classification and segmentation by using support vector machines," Multimedia Systems, vol. 8, no. 6, pp. 482-492, 2003. doi:10.1007/s00530-002-0065-0
10. Shi W., Fan X., "Speech classification based on cuckoo algorithm and support vector machines," Proceedings of the 2nd IEEE International Conference on Computational Intelligence and Applications (ICCIA 2017), China, pp. 98-102, September 2017.
11. Yu D., Deng L., Automatic Speech Recognition, Springer London Limited, 2016.
12. Palaz D., Magimai-Doss M., Collobert R., "Analysis of CNN-based speech recognition system using raw speech as input," Proceedings of INTERSPEECH 2015, Germany, pp. 11-15, September 2015.
13. Sak H., Senior A., Rao K., Irsoy O., Graves A., Beaufays F., Schalkwyk J., "Learning acoustic frame labeling for speech recognition with recurrent neural networks," Proceedings of ICASSP 2015, Australia, pp. 4280-4284, April 2015.
14. Soltau H., Liao H., Sak H., "Neural speech recognizer: acoustic-to-word LSTM model for large vocabulary speech recognition," Proceedings of Interspeech 2017, pp. 3707-3711, 2017. doi:10.21437/Interspeech.2017-1566
15. Tebelskis J., Speech Recognition Using Neural Networks, Siemens AG, 1995.
16. Amodei D., Ananthanarayanan S., Anubhai R., et al., "Deep Speech 2: end-to-end speech recognition in English and Mandarin," Proceedings of the International Conference on Machine Learning, pp. 173-182, 2016.
17. Bahdanau D., Chorowski J., Serdyuk D., Brakel P., Bengio Y., "End-to-end attention-based large vocabulary speech recognition," Proceedings of ICASSP 2016, China, pp. 4945-4949, March 2016.
18. Graves A., Fernández S., Gomez F., Schmidhuber J., "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks," Proceedings of the 23rd International Conference on Machine Learning (ICML 2006), Pittsburgh, PA, USA, pp. 369-376, June 2006. doi:10.1145/1143844.1143891
19. Bahdanau D., Cho K., Bengio Y., "Neural machine translation by jointly learning to align and translate," International Conference on Learning Representations, 2015.
20. Chan W., Jaitly N., Le Q., Vinyals O., "Listen, attend and spell: a neural network for large vocabulary conversational speech recognition," Proceedings of ICASSP 2016, China, pp. 4960-4964, March 2016.
21. Morgan N., Bourlard H., "Continuous speech recognition," IEEE Signal Processing Magazine, vol. 12, no. 3, pp. 24-42, 1995. doi:10.1109/79.382443
22. Leiner B. M. J., Noise-Robust Speech Recognition, 2003.

Figure 1: Framework of the proposed system.
Figure 2: Flowchart of the proposed system.
Figure 3: Screenshots of the app.
Figure 4: The procedure of audio announcement detection.
Figure 5: Example of postprocessing in audio announcement detection.
Figure 6: Finite state transducer of the grammar used.

By Yong Ruan, Yueliang Qian, and Xiangdong Wang
CustomLinks:
  – Url: https://resolver.ebsco.com/c/xy5jbn/result?sid=EBSCO:edsdoj&genre=article&issn=16875680&ISBN=&volume=2018&issue=&date=20180101&spage=&pages=&title=Advances in Multimedia&atitle=Mobile%20Phone-Based%20Audio%20Announcement%20Detection%20and%20Recognition%20for%20People%20with%20Hearing%20Impairment&aulast=Yong%20Ruan&id=DOI:10.1155/2018/8786308
    Name: Full Text Finder (for New FTF UI) (s8985755)
    Category: fullText
    Text: Find It @ SCU Libraries
    MouseOverText: Find It @ SCU Libraries
  – Url: https://doaj.org/article/acdd3fb3e55d4d569db426fb0b9f1092
    Name: EDS - DOAJ (s8985755)
    Category: fullText
    Text: View record from DOAJ
    MouseOverText: View record from DOAJ
Header DbId: edsdoj
DbLabel: Directory of Open Access Journals
An: edsdoj.3fb3e55d4d569db426fb0b9f1092
RelevancyScore: 882
AccessLevel: 3
PubType: Academic Journal
PubTypeId: academicJournal
PreciseRelevancyScore: 881.885437011719
IllustrationInfo
Items – Name: Title
  Label: Title
  Group: Ti
  Data: Mobile Phone-Based Audio Announcement Detection and Recognition for People with Hearing Impairment
– Name: Author
  Label: Authors
  Group: Au
  Data: Yong Ruan; Yueliang Qian; Xiangdong Wang
– Name: TitleSource
  Label: Source
  Group: Src
  Data: Advances in Multimedia, Vol 2018 (2018)
– Name: Publisher
  Label: Publisher Information
  Group: PubInfo
  Data: Hindawi Limited, 2018.
– Name: DatePubCY
  Label: Publication Year
  Group: Date
  Data: 2018
– Name: Subset
  Label: Collection
  Group: HoldingsInfo
  Data: LCC:Electronic computers. Computer science
– Name: Subject
  Label: Subject Terms
  Group: Su
  Data: Electronic computers. Computer science; QA75.5-76.95
– Name: Abstract
  Label: Description
  Group: Ab
  Data: Automatic audio announcement systems are widely used in public places such as transportation vehicles and facilities, hospitals, and banks. However, these systems cannot be used by people with hearing impairment, which causes great inconvenience in their daily lives. In this paper, an approach to audio announcement detection and recognition for people with hearing impairment based on smartphones is proposed, and a mobile phone application (app) is developed, taking the bank as the main application scenario. Using the app, users can register alerts for their numbers, and the system then begins to detect audio announcements using the microphone of the smartphone. For each audio announcement detected, the speech within it is recognized and the text is displayed on the screen of the phone. When the number entered by the user is announced, an alert is given by vibration. For audio announcement detection, a method based on audio segment classification and postprocessing is proposed, using an SVM classifier trained on audio announcements and environmental noise collected in banks. For announcement speech recognition, an ASR engine is developed using a GMM-HMM-based acoustic model and a finite state transducer (FST) based grammar. The acoustic model is trained on announcement speech collected in banks, and the grammar is defined manually according to the patterns used by the automatic announcement systems. Experimental results show that character error rates (CERs) of around 5% can be achieved for the announcement speech, which demonstrates the feasibility of the proposed method and system.
– Name: TypeDocument
  Label: Document Type
  Group: TypDoc
  Data: article
– Name: Format
  Label: File Description
  Group: SrcInfo
  Data: electronic resource
– Name: Language
  Label: Language
  Group: Lang
  Data: English
– Name: ISSN
  Label: ISSN
  Group: ISSN
  Data: 1687-5680; 1687-5699
– Name: NoteTitleSource
  Label: Relation
  Group: SrcInfo
  Data: https://doaj.org/toc/1687-5680; https://doaj.org/toc/1687-5699
– Name: DOI
  Label: DOI
  Group: ID
  Data: 10.1155/2018/8786308
– Name: URL
  Label: Access URL
  Group: URL
  Data: https://doaj.org/article/acdd3fb3e55d4d569db426fb0b9f1092
– Name: AN
  Label: Accession Number
  Group: ID
  Data: edsdoj.3fb3e55d4d569db426fb0b9f1092
PLink https://login.libproxy.scu.edu/login?url=https://search.ebscohost.com/login.aspx?direct=true&site=eds-live&scope=site&db=edsdoj&AN=edsdoj.3fb3e55d4d569db426fb0b9f1092
RecordInfo BibRecord:
  BibEntity:
    Identifiers:
      – Type: doi
        Value: 10.1155/2018/8786308
    Languages:
      – Text: English
    Subjects:
      – SubjectFull: Electronic computers. Computer science
        Type: general
      – SubjectFull: QA75.5-76.95
        Type: general
    Titles:
      – TitleFull: Mobile Phone-Based Audio Announcement Detection and Recognition for People with Hearing Impairment
        Type: main
  BibRelationships:
    HasContributorRelationships:
      – PersonEntity:
          Name:
            NameFull: Yong Ruan
      – PersonEntity:
          Name:
            NameFull: Yueliang Qian
      – PersonEntity:
          Name:
            NameFull: Xiangdong Wang
    IsPartOfRelationships:
      – BibEntity:
          Dates:
            – D: 01
              M: 01
              Type: published
              Y: 2018
          Identifiers:
            – Type: issn-print
              Value: 16875680
            – Type: issn-electronic
              Value: 16875699
          Numbering:
            – Type: volume
              Value: 2018
          Titles:
            – TitleFull: Advances in Multimedia
              Type: main
ResultId 1