Glass Voice Command Nearest Match from Given List

The Google GDK doesn't support this feature yet. However, the necessary pieces are already available in the system libraries, and you can use them until the GDK supports this natively.
What you have to do:

  1. Pull the GlassVoice.apk from your Glass: adb pull /system/app/GlassVoice.apk

  2. Use dex2jar to convert this APK into a JAR file.

  3. Add the JAR file to your build path.

Now you can use this library like this:

import android.app.Activity;
import android.os.Bundle;
import android.util.Log;
// VoiceInputHelper, VoiceConfig, VoiceCommand, VoiceListener,
// FormattingLogger, and FormattingLoggers come from the GlassVoice JAR
// you created in the steps above.

public class VoiceActivity extends Activity {

    private VoiceInputHelper mVoiceInputHelper;
    private VoiceConfig mVoiceConfig;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.voice_activity);

        // The phrases you want the recognizer to match against.
        String[] items = {"red", "green", "blue", "orange"};
        mVoiceConfig = new VoiceConfig("MyVoiceConfig", items);
        mVoiceInputHelper = new VoiceInputHelper(this, new MyVoiceListener(mVoiceConfig),
                VoiceInputHelper.newUserActivityObserver(this));
    }

    @Override
    protected void onResume() {
        super.onResume();
        mVoiceInputHelper.addVoiceServiceListener();
    }

    @Override
    protected void onPause() {
        super.onPause();
        mVoiceInputHelper.removeVoiceServiceListener();
    }

    public class MyVoiceListener implements VoiceListener {
        protected final VoiceConfig voiceConfig;

        public MyVoiceListener(VoiceConfig voiceConfig) {
            this.voiceConfig = voiceConfig;
        }

        @Override
        public void onVoiceServiceConnected() {
            mVoiceInputHelper.setVoiceConfig(mVoiceConfig, false);
        }

        @Override
        public void onVoiceServiceDisconnected() {
        }

        @Override
        public VoiceConfig onVoiceCommand(VoiceCommand vc) {
            String recognizedStr = vc.getLiteral();
            Log.i("VoiceActivity", "Recognized text: " + recognizedStr);
            return voiceConfig;
        }

        @Override
        public FormattingLogger getLogger() {
            return FormattingLoggers.getContextLogger();
        }

        @Override
        public boolean isRunning() {
            return true;
        }

        @Override
        public boolean onResampledAudioData(byte[] arg0, int arg1, int arg2) {
            return false;
        }

        @Override
        public boolean onVoiceAmplitudeChanged(double arg0) {
            return false;
        }

        @Override
        public void onVoiceConfigChanged(VoiceConfig arg0, boolean arg1) {
        }
    }
}
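The listener above only logs the recognized literal. To actually pick the nearest match from your phrase list, you can compare the literal against the items yourself. A minimal sketch, assuming a simple Levenshtein edit-distance comparison (the NearestCommand class is my own illustration, not part of the GlassVoice library):

```java
import java.util.Arrays;
import java.util.List;

public class NearestCommand {

    /** Classic dynamic-programming Levenshtein edit distance. */
    static int editDistance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        }
        return d[a.length()][b.length()];
    }

    /** Returns the phrase from 'items' closest to the recognized literal. */
    static String nearestMatch(String recognized, List<String> items) {
        String best = null;
        int bestDist = Integer.MAX_VALUE;
        for (String item : items) {
            int dist = editDistance(recognized.toLowerCase(), item.toLowerCase());
            if (dist < bestDist) {
                bestDist = dist;
                best = item;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<String> items = Arrays.asList("red", "green", "blue", "orange");
        // A slightly misrecognized utterance still maps to the intended item.
        System.out.println(nearestMatch("gren", items));  // green
    }
}
```

You would call nearestMatch(vc.getLiteral(), ...) inside onVoiceCommand instead of just logging the literal.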

How to Navigate a Google Glass GDK Immersion Application using Voice Command only?

I'm writing out the entire code in detail, since it took me such a long time to get this working; perhaps it'll save someone else valuable time.

This code is an implementation of contextual voice commands as described on Google Developers: Contextual voice commands

ContextualMenuActivity.java

package com.drace.contextualvoicecommands;

import android.app.Activity;
import android.os.Bundle;
import android.view.Menu;
import android.view.MenuItem;
import com.drace.contextualvoicecommands.R;
import com.google.android.glass.view.WindowUtils;

public class ContextualMenuActivity extends Activity {

    @Override
    protected void onCreate(Bundle bundle) {
        super.onCreate(bundle);

        // Requests a voice menu on this activity. As with any other
        // window feature, be sure to request this before
        // setContentView() is called.
        getWindow().requestFeature(WindowUtils.FEATURE_VOICE_COMMANDS);
        setContentView(R.layout.activity_main);
    }

    @Override
    public boolean onCreatePanelMenu(int featureId, Menu menu) {
        if (featureId == WindowUtils.FEATURE_VOICE_COMMANDS) {
            getMenuInflater().inflate(R.menu.main, menu);
            return true;
        }
        // Pass through to super to set up the touch menu.
        return super.onCreatePanelMenu(featureId, menu);
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        getMenuInflater().inflate(R.menu.main, menu);
        return true;
    }

    @Override
    public boolean onMenuItemSelected(int featureId, MenuItem item) {
        if (featureId == WindowUtils.FEATURE_VOICE_COMMANDS) {
            switch (item.getItemId()) {
                case R.id.dogs_menu_item:
                    // handle top-level dogs menu item
                    break;
                case R.id.cats_menu_item:
                    // handle top-level cats menu item
                    break;
                case R.id.lab_menu_item:
                    // handle second-level labrador menu item
                    break;
                case R.id.golden_menu_item:
                    // handle second-level golden menu item
                    break;
                case R.id.calico_menu_item:
                    // handle second-level calico menu item
                    break;
                case R.id.cheshire_menu_item:
                    // handle second-level cheshire menu item
                    break;
                default:
                    return true;
            }
            return true;
        }
        // Good practice to pass through to super if not handled.
        return super.onMenuItemSelected(featureId, item);
    }
}

activity_main.xml (layout)

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent" >

    <TextView
        android:id="@+id/coming_soon"
        android:layout_alignParentTop="true"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/voice_command_test"
        android:textSize="22sp"
        android:layout_marginRight="40px"
        android:layout_marginTop="30px"
        android:layout_marginLeft="210px" />
</RelativeLayout>

strings.xml

<resources>
    <string name="app_name">Contextual voice commands</string>
    <string name="voice_start_command">Voice commands</string>
    <string name="voice_command_test">Say "Okay, Glass"</string>
    <string name="show_me_dogs">Dogs</string>
    <string name="labrador">labrador</string>
    <string name="golden">golden</string>
    <string name="show_me_cats">Cats</string>
    <string name="cheshire">cheshire</string>
    <string name="calico">calico</string>
</resources>

AndroidManifest.xml

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.drace.contextualvoicecommands"
    android:versionCode="1"
    android:versionName="1.0" >

    <uses-sdk
        android:minSdkVersion="19"
        android:targetSdkVersion="19" />

    <uses-permission android:name="com.google.android.glass.permission.DEVELOPMENT" />

    <application
        android:allowBackup="true"
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name" >

        <activity
            android:name="com.drace.contextualvoicecommands.ContextualMenuActivity"
            android:label="@string/app_name" >
            <intent-filter>
                <action android:name="com.google.android.glass.action.VOICE_TRIGGER" />
            </intent-filter>

            <meta-data
                android:name="com.google.android.glass.VoiceTrigger"
                android:resource="@xml/voice_trigger_start" />
        </activity>

    </application>
</manifest>

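The manifest above references res/xml/voice_trigger_start.xml, which isn't shown. Using the unlisted-keyword form (which is why the DEVELOPMENT permission is declared), it would look something like this, reusing the voice_start_command string already defined in strings.xml:

```xml
<?xml version="1.0" encoding="utf-8"?>
<trigger keyword="@string/voice_start_command" />
```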
It's been tested and works well under Google Glass XE22!

Google Glass voice command send activity to background

If I understand your question correctly, you want to customize the contextual voice command by removing the gray overlay above your activity.

If so: I wanted to do the same thing, but as of XE19 this is not possible yet.

I asked a question and got a response with a custom solution, if you are interested:
Custom Voice Input

EDIT: I also found another solution that customizes Google's voice input, but it will break when XE is upgraded: Another Custom Voice Input

Detecting whether Glassware was launched via voice command or the touch menu

The GDK does not yet provide a way to do this. If you would like to see this feature added, please file an enhancement request in our issue tracker!

Where to request for new Glass voice trigger command?

There is a link on the Glassware checklist to the form for suggesting new voice commands.

Why is my voice command missing from the ok glass menu in XE16?

Answering my own question, since this seems to be impacting a lot of developers.

Voice commands changed a bit in XE16. Unlisted voice commands, like the one specified in your configuration, now require an additional permission. Add this to your manifest:

<uses-permission android:name="com.google.android.glass.permission.DEVELOPMENT" />

When you're ready to release your Glassware, you must use a built-in static voice command. XML for this kind of command would look more like this:

<?xml version="1.0" encoding="utf-8"?>
<trigger command="START_A_RUN" />

Where START_A_RUN is one of the items from this list. If none of the listed commands are appropriate for your Glassware, you should request the addition of a voice command. This can take some time, so it's best to do this as early as possible.


