From: Chandan Sharma
Subject: [Linphone-developers] Use Audio for speech during active Linphone SIP call
Date: Mon, 13 Oct 2014 20:42:31 +0800
Hi,

I am using a Linphone SIP solution on Android. My requirement is to keep a SIP call active and use Google speech recognition at the same time. While the call is active, we want the microphone to be free for speech recognition; it is acceptable for the call to carry no audio during that time. To achieve this, I have done the following with the Linphone SDK:
1) Did not request audio focus while in the call.
2) Muted the mic.
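For reference, the two steps above look roughly like this in my code. This is only a sketch: mLinphoneCore and mFocusListener are hypothetical fields (an already-initialized LinphoneCore and the OnAudioFocusChangeListener used for the call), and muteMic() is the legacy Linphone Java API:

// Sketch, assuming the legacy Linphone Java SDK (org.linphone.core.LinphoneCore)
// and an Activity context. mLinphoneCore and mFocusListener are hypothetical
// fields initialized elsewhere in the app.
AudioManager am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);

// 1) Give up the audio focus held for the call, so the recognizer can take it.
am.abandonAudioFocus(mFocusListener);

// 2) Mute the microphone for the SIP call.
mLinphoneCore.muteMic(true);

// ... run speech recognition here ...

// Later, restore audio to the call:
mLinphoneCore.muteMic(false);
am.requestAudioFocus(mFocusListener,
        AudioManager.STREAM_VOICE_CALL,
        AudioManager.AUDIOFOCUS_GAIN_TRANSIENT);

Note that muting the mic in Linphone may not release the underlying AudioRecord instance; the "AudioRecord not initialized" error in the log below suggests the call's audio capture still holds the recorder.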
However, when we start speech recognition with the following code, Google speech recognition crashes.

My impression is that the SIP call still holds the audio hardware, even though I have tried to release it. Please let me know how to ensure the SIP call does not hold the audio device while speech recognition is running. I also need to be able to restore audio to the call later, when I want it back.
Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
// intent.putExtra(RecognizerIntent.EXTRA_PROMPT,
//         "Please speak slowly and enunciate clearly.");
try {
    startActivityForResult(intent, requestCode);
} catch (ActivityNotFoundException e) {
    // No speech recognition service available on the device.
    e.printStackTrace();
}
10-16 00:51:26.530: E/GoogleRecognitionService(18416): onError
10-16 00:51:26.530: E/GoogleRecognitionService(18416): com.google.android.voicesearch.speechservice.connection.AudioRecognizeException: Error thrown by AMR encoder
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at com.google.android.voicesearch.speechservice.NetworkRecognitionEngine.startSendLoop(NetworkRecognitionEngine.java:153)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at com.google.android.voicesearch.speechservice.NetworkRecognitionEngine.internalStartRecognition(NetworkRecognitionEngine.java:129)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at com.google.android.voicesearch.speechservice.NetworkRecognitionEngine.startRecognition(NetworkRecognitionEngine.java:84)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at com.google.android.voicesearch.speechservice.RecognitionEngineWrapper.startRecognition(RecognitionEngineWrapper.java:50)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at com.google.android.voicesearch.speechservice.RecognitionEngineRetrier.startRecognition(RecognitionEngineRetrier.java:93)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at java.lang.reflect.Method.invokeNative(Native Method)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at java.lang.reflect.Method.invoke(Method.java:511)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at com.google.android.voicesearch.util.ThreadChanger$1$1.run(ThreadChanger.java:63)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:442)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at java.util.concurrent.FutureTask.run(FutureTask.java:137)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:150)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:264)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1076)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:569)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at java.lang.Thread.run(Thread.java:856)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): Caused by: java.io.IOException: AudioRecord not initialized
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at com.google.android.voicesearch.endpointer.MicrophoneInputStream.ensureStarted(MicrophoneInputStream.java:78)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at com.google.android.voicesearch.endpointer.MicrophoneInputStream.read(MicrophoneInputStream.java:127)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at com.google.android.voicesearch.endpointer.BluetoothAwareMicrophoneInputStream.read(BluetoothAwareMicrophoneInputStream.java:82)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at com.google.common.io.ByteStreams.read(ByteStreams.java:806)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at com.google.android.voicesearch.audio.reader.Tee.fillBuffers(Tee.java:260)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at com.google.android.voicesearch.audio.reader.Tee.get(Tee.java:165)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at com.google.android.voicesearch.audio.reader.Tee$TeeInputStream.read(Tee.java:372)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at java.io.InputStream.read(InputStream.java:163)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at com.google.android.voicesearch.speechservice.AudioSource.captureLoop(AudioSource.java:108)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at com.google.android.voicesearch.speechservice.AudioSource.access$000(AudioSource.java:26)
10-16 00:51:26.530: E/GoogleRecognitionService(18416): at com.google.android.voicesearch.speechservice.AudioSource$1.run(AudioSource.java:76)
10-16 00:51:26.531: D/AudioTrack(18058): start 0x5e27edc0
10-16 00:51:26.533: I/AudioService(394): AudioFocus abandonAudioFocus() from address@hidden@4275dec0
10-16 00:51:26.533: I/AudioService(394): removeFocusStackEntry() removing top of stack
10-16 00:51:26.533: I/AudioService(394): notifyTopOfAudioFocusStack: clientId = address@hidden
10-16 00:51:26.534: I/AudioService(394): Clear remote control display