Using the Web Speech API

Let’s look at the JavaScript in a bit more detail.
Chrome support
As mentioned earlier, Chrome currently supports speech recognition with prefixed properties, therefore at the start of our code we include these lines to feed the right objects to Chrome, and any future implementations that might support the features without a prefix:
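Something along these lines, assuming the standard pattern of falling back from the unprefixed constructors to Chrome's webkit-prefixed ones:

```js
// Use the unprefixed constructors where available, otherwise fall back
// to the webkit-prefixed versions that Chrome exposes.
const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;
const SpeechGrammarList =
  window.SpeechGrammarList || window.webkitSpeechGrammarList;
const SpeechRecognitionEvent =
  window.SpeechRecognitionEvent || window.webkitSpeechRecognitionEvent;
```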
The grammar
The next part of our code defines the grammar we want our app to recognize. The following variable is defined to hold our grammar:
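A sketch of that variable; the color list here is a short sample, not the demo's full set of CSS color keywords:

```js
// A few CSS color keywords to recognize (the real demo lists many more).
const colors = ["aqua", "azure", "beige", "bisque", "black", "blue"];

// Build a JSGF grammar string with a single public <color> rule whose
// alternative values are the colors above, separated by pipe characters.
const grammar = `#JSGF V1.0; public <color> = ${colors.join(" | ")};`;
```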
The grammar format used is JSpeech Grammar Format (JSGF) — you can find a lot more about it in the JSGF specification. However, for now let's just run through it quickly:
- The lines are separated by semicolons, just like in JavaScript.
- The first line — #JSGF V1.0; — states the format and version used. This always needs to be included first.
- The second line indicates a type of term that we want to recognize. public declares that it is a public rule, the string in angle brackets defines the recognized name for this term (color), and the list of items that follow the equals sign are the alternative values that will be recognized and accepted as appropriate values for the term. Note how each is separated by a pipe character.
- You can have as many terms defined as you want on separate lines following the above structure, and include fairly complex grammar definitions. For this basic demo, we are just keeping things simple.
Plugging the grammar into our speech recognition
The next thing to do is define a speech recognition instance to control the recognition for our application. This is done using the SpeechRecognition() constructor. We also create a new speech grammar list to contain our grammar, using the SpeechGrammarList() constructor.
We add our grammar to the list using the SpeechGrammarList.addFromString() method. This accepts as parameters the string we want to add, plus optionally a weight value that specifies the importance of this grammar in relation to other grammars available in the list (this can be from 0 to 1 inclusive). The added grammar is available in the list as a SpeechGrammar object instance.
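A sketch of those steps, reusing the grammar string defined above:

```js
const recognition = new SpeechRecognition();
const speechRecognitionList = new SpeechGrammarList();

// Add our grammar with a weight of 1 (the maximum importance).
speechRecognitionList.addFromString(grammar, 1);
```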
We then add the SpeechGrammarList to the speech recognition instance by setting it to the value of the SpeechRecognition.grammars property. We also set a few other properties of the recognition instance before we move on (see the sketch after this list):
- SpeechRecognition.continuous: Controls whether continuous results are captured (true), or just a single result each time recognition is started (false).
- SpeechRecognition.lang: Sets the language of the recognition. Setting this is good practice, and therefore recommended.
- SpeechRecognition.interimResults: Defines whether the speech recognition system should return interim results, or just final results. Final results are good enough for this simple demo.
- SpeechRecognition.maxAlternatives: Sets the number of alternative potential matches that should be returned per result. This can sometimes be useful, say if a result is not completely clear and you want to display a list of alternatives for the user to choose the correct one from. It is not needed for this simple demo, so we are just specifying one (which is actually the default anyway).
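Taken together, that configuration might look like this:

```js
recognition.grammars = speechRecognitionList;
recognition.continuous = false; // capture a single result per start
recognition.lang = "en-US";
recognition.interimResults = false; // only return final results
recognition.maxAlternatives = 1; // one alternative per result (the default)
```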
Starting the speech recognition
After grabbing references to the output <div> and the <html> element (so we can output diagnostic messages and update the app background color later on), we implement an onclick handler so that when the screen is tapped/clicked, the speech recognition service will start. This is achieved by calling SpeechRecognition.start(). The forEach() method is used to output colored indicators showing what colors to try saying.
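A sketch of that setup; the .output and .hints class names are assumptions about the demo's markup:

```js
const diagnostic = document.querySelector(".output");
const bg = document.querySelector("html");
const hints = document.querySelector(".hints");

// Build a colored <span> indicator for each color the grammar accepts.
let colorHTML = "";
colors.forEach((color) => {
  colorHTML += `<span style="background-color:${color};"> ${color} </span>`;
});
hints.innerHTML = `Tap or click then say a color to change the background color of the app. Try ${colorHTML}.`;

// Start the recognition service when the screen is tapped/clicked.
document.body.onclick = () => {
  recognition.start();
  console.log("Ready to receive a color command.");
};
```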
Receiving and handling results
Once the speech recognition is started, there are many event handlers that can be used to retrieve results and other pieces of surrounding information (see the SpeechRecognition events). The most common one you'll probably use is the result event, which is fired once a successful result is received:
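A handler along these lines, where diagnostic and bg are the references grabbed earlier:

```js
recognition.onresult = (event) => {
  const color = event.results[0][0].transcript;
  diagnostic.textContent = `Result received: ${color}.`;
  bg.style.backgroundColor = color;
};
```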
The second line here is a bit complex-looking, so let's explain it step by step. The SpeechRecognitionEvent.results property returns a SpeechRecognitionResultList object containing SpeechRecognitionResult objects. It has a getter so it can be accessed like an array — so the first [0] returns the SpeechRecognitionResult at position 0. Each SpeechRecognitionResult object contains SpeechRecognitionAlternative objects that contain individual recognized words. These also have getters so they can be accessed like arrays — the second [0] therefore returns the SpeechRecognitionAlternative at position 0. We then return its transcript property to get a string containing the individual recognized result, set the background color to that color, and report the color recognized as a diagnostic message in the UI.
We also use the speechend event to stop the speech recognition service from running (using SpeechRecognition.stop()) once a single word has been recognized and it has finished being spoken:
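For example:

```js
recognition.onspeechend = () => {
  recognition.stop();
};
```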
Handling errors and unrecognized speech
The last two handlers cover cases where speech was recognized that wasn't in the defined grammar, or where an error occurred. The nomatch event is supposed to handle the first case, although note that at the moment it doesn't seem to fire correctly; it just returns whatever was recognized anyway:
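For example, reporting a fallback message in the diagnostic area:

```js
recognition.onnomatch = (event) => {
  diagnostic.textContent = "I didn't recognize that color.";
};
```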
The error event handles cases where there is an actual error with the recognition — the SpeechRecognitionErrorEvent.error property contains the actual error returned:
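For example:

```js
recognition.onerror = (event) => {
  diagnostic.textContent = `Error occurred in recognition: ${event.error}`;
};
```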