How to Use iOS Speech Recognition in Offline Mode

Is there a way to use iOS speech recognition in offline mode?

I am afraid that there is no way to do it (however, please make sure to check the update at the end of the answer).

As mentioned in the Speech framework official documentation:

Best Practices for a Great User Experience:

Be prepared to handle the failures that can be caused by reaching speech recognition limits. Because speech recognition is a network-based service, limits are enforced so that the service can remain freely available to all apps.


From an end user's perspective, trying to get Siri's help without a network connection displays a screen similar to:

[Sample image]

Also, when trying to send a message, for example, you'll notice that the mic button is disabled if the device is not connected to a network.

[Sample image]

Natively, iOS itself won't enable this feature until it detects a network connection, and I assume the same applies to third-party developers using the Speech framework.
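
If it helps, here is a minimal sketch of what that guard looks like with the Speech framework; it assumes speech-recognition authorization has already been granted and uses en-US purely as an example locale:

    import Speech

    // Minimal availability/failure guard. Assumes speech-recognition authorization
    // has already been requested and granted; "en-US" is only an example locale.
    func transcribe(fileAt url: URL) {
        guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
              recognizer.isAvailable else {
            // Typically the case when a server-based recognizer has no network connection.
            print("Speech recognition is currently unavailable.")
            return
        }

        let request = SFSpeechURLRecognitionRequest(url: url)
        // In a real app, keep a reference to the returned task if you need to cancel it.
        _ = recognizer.recognitionTask(with: request) { result, error in
            if let error = error {
                // "Be prepared to handle the failures" - e.g. recognition limits were reached.
                print("Recognition failed: \(error.localizedDescription)")
                return
            }
            if let result = result, result.isFinal {
                print(result.bestTranscription.formattedString)
            }
        }
    }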


UPDATE:

After watching the Speech Recognition API session (especially the part from 03:00 to 03:25), I came up with the following:

The Speech Recognition API usually requires an internet connection, but some newer devices do support this feature all the time; you might want to check whether the given language is available or not.

Adapted from the SFSpeechRecognizer documentation:

Note that a supported speech recognizer is not the same as an available speech recognizer; for example, the recognizers for some locales may require an Internet connection. You can use the supportedLocales() method to get a list of supported locales and the isAvailable property to find out if the recognizer for a specific locale is available.
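
To illustrate the quoted note, a rough sketch of that check might look like the following (zh-CN is only an example locale, not a recommendation):

    import Speech

    // Supported vs. available, as described above; "zh-CN" is only an example locale.
    let supportedLocales = SFSpeechRecognizer.supportedLocales()
    print("Supported locales: \(supportedLocales.map { $0.identifier }.sorted())")

    let locale = Locale(identifier: "zh-CN")
    if let recognizer = SFSpeechRecognizer(locale: locale) {
        // isAvailable can be false even for a supported locale,
        // e.g. when its recognizer needs a network connection that is not there.
        print("\(locale.identifier) is supported; available right now: \(recognizer.isAvailable)")
    } else {
        print("\(locale.identifier) is not supported for speech recognition.")
    }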



Further Reading:

These topics might be related:

  • Which iOS devices support offline speech recognition?
  • How to Enable Offline Dictation on Your iPhone?
  • Will Siri ever work offline?

How to use Offline Speech Recognition inside the iOS SDK?

There are many; here are the SDKs that I have used earlier (both are free and work offline):

  1. OpenEars

  2. Flite

Does SFSpeechRecognizer have a limit if supportsOnDeviceRecognition is true and offline mode is available?

According to this presentation at WWDC 2019, on-device recognition has no limits:

https://developer.apple.com/videos/play/wwdc2019/256/

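As a rough sketch of the API side on iOS 13 or later (the locale is an example, and the caller supplies the audio file URL), you can check supportsOnDeviceRecognition and then pin the request to the device with requiresOnDeviceRecognition:

    import Speech

    // iOS 13+ only; "en-US" is an example locale.
    @available(iOS 13.0, *)
    func transcribeOnDevice(fileAt url: URL) {
        guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
              recognizer.supportsOnDeviceRecognition else {
            print("On-device recognition is not supported for this locale/device.")
            return
        }

        let request = SFSpeechURLRecognitionRequest(url: url)
        // Keep the request entirely on the device: it works offline and,
        // per the session above, is not subject to the server-side limits.
        request.requiresOnDeviceRecognition = true

        _ = recognizer.recognitionTask(with: request) { result, _ in
            if let result = result, result.isFinal {
                print(result.bestTranscription.formattedString)
            }
        }
    }

If supportsOnDeviceRecognition is false, you can simply leave requiresOnDeviceRecognition unset and fall back to the server-based (network) recognizer.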

Chinese text-to-speech translation in offline mode on iPhone

I prefer OpenEars. It's an offline iOS framework for iPhone voice recognition and speech synthesis (TTS).

To support the Chinese language, just replace its default English acoustic model and grammar files (.languagemodel and .dic) with Chinese acoustic model and grammar files.

Regarding different languages, please note the comments below:

Speech recognition engines require two types of files to recognize speech. They require an acoustic model, which is created by taking audio recordings of speech and their transcriptions (taken from a speech corpus) and 'compiling' them into statistical representations of the sounds that make up each word (through a process called 'training'). They also require a language model or grammar file. A language model is a file containing the probabilities of sequences of words. A grammar is a much smaller file containing sets of predefined combinations of words. Language models are used for dictation applications, whereas grammars are used in desktop command and control or telephony interactive voice response (IVR) type applications.

So search for Chinese acoustic model and grammar files, download them, and replace OpenEars' (or any other speech engine's) default acoustic model and grammar files with them. That's it.

The downloaded acoustic model should contain the following files:

mdef
feat.params
mixture_weights
means
noisedict
transition_matrices
variances

So find these files in OpenEars and replace them with the downloaded Chinese acoustic model. Don't forget to change the grammar files as well.

Or create your own acoustic model using the CMUSphinx tutorial.

You can download different acoustic models from the CMUSphinx website.

For Chinese, please do a Google search to find one.
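
As a small sanity check (the folder name AcousticModelChinese.bundle is purely hypothetical; only the file list above comes from the acoustic model itself), you can verify with plain Foundation that the swapped-in model files actually shipped inside your app bundle:

    import Foundation

    // "AcousticModelChinese.bundle" is a hypothetical folder name; adjust it to however
    // you named the Chinese model folder you added to the app bundle.
    let modelFolder = "AcousticModelChinese.bundle"

    // The files a CMUSphinx-style acoustic model is expected to contain (the list above).
    let requiredFiles = ["mdef", "feat.params", "mixture_weights",
                         "means", "noisedict", "transition_matrices", "variances"]

    if let modelURL = Bundle.main.resourceURL?.appendingPathComponent(modelFolder) {
        let missing = requiredFiles.filter {
            !FileManager.default.fileExists(atPath: modelURL.appendingPathComponent($0).path)
        }
        if missing.isEmpty {
            print("Chinese acoustic model looks complete at \(modelURL.path)")
            // Hand this folder (plus your Chinese .languagemodel and .dic files)
            // to the speech engine's start-listening call.
        } else {
            print("Missing acoustic model files: \(missing)")
        }
    }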

SFSpeechRecognitionRequest requiresOnDeviceRecognition on Xamarin.iOS: is it available?

There is no such property in the documentation, but if you write the code, the property is available:

    var request = new SFSpeechUrlRecognitionRequest(new NSUrl("path", false));
    request.RequiresOnDeviceRecognition = true;

iOS speech-to-text conversion

OpenEars supports free speech recognition and text-to-speech functionality in offline mode.

They have the FliteController class (see its class reference), which controls speech synthesis (TTS) in OpenEars.

They have done an excellent job in the speech recognition area.

However, please note that it will detect only the words that you list in the vocabulary files. It is good to work in offline mode to get better performance.

@Halle: Correct me if I'm wrong.

There is a paid option, Dragon Dictation, which works as an online engine.

Or use VocalKit: a shim for speech recognition on the iPhone.

I would like to point out that none of them is as accurate as Siri (a Siri SDK is not available yet).


