

Raspberry Pi with Google AIY - voice recognition

Google and The MagPi (the official Raspberry Pi magazine) have released a project in the magazine using the Google voice API. This is known as AIY and was released with issue 57 of the magazine. The finished project is a cardboard box with a button switch on top. It includes an audio card and a microphone, but the thing that makes it all work is the software, which connects to the Google Cloud service for voice recognition, adding voice control to a Raspberry Pi and whatever program you write for it.

Google AIY Cardboard voice recognition for the Raspberry Pi

The instructions for getting started are provided at: AIYProjects with Google.
It explains the setup for a Raspberry Pi 3, but I used a Raspberry Pi 1 B+. The one thing I had to do differently was to use a different computer to set up the cloud configuration through the web page, as on the B+ it was much too slow.

You will need to follow all the instructions down to section 3.3, which is where it refers you to the action.py file. This is where you can customize the commands to run code on the Raspberry Pi.

Shutdown

As the box is designed to run without a keyboard or monitor, one of the first things that I did was to add a shutdown command, which, as its name suggests, shuts the Raspberry Pi down cleanly. This is done by adding the following code snippets to the action.py file, based on an example provided in the file.

The first part is to add a line to the actor instructions.

actor.add_keyword(_('shutdown'), Shutdown(say))

The command above instructs the Raspberry Pi to call the Shutdown command included in the action.py file (it actually creates an instance of a Shutdown object and calls its run method).

This is then implemented using the following code:

class Shutdown(object):
    """Shut the Raspberry Pi down cleanly when the keyword is heard."""

    def __init__(self, say):
        self.say = say
        self.shell_command = "sudo"
        self.shell_args = "shutdown -h now"

    def run(self, voice_command):
        self.say('Shutting down now')
        # Uses the subprocess module, which action.py already imports
        subprocess.call(self.shell_command+" "+self.shell_args, shell=True)

Now run src/main.py and you should be able to press the button (or clap, if you have set that up) and say "shutdown". You could create your own version that uses other words such as "power off" or "activate self destruct" if you prefer, as in the sketch below.
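For example, extra trigger phrases can be registered against the same Shutdown action. This is just a minimal sketch using the actor object and Shutdown class from action.py shown above; the keyword strings are only illustrations.

# Register alternative trigger phrases for the same Shutdown action.
# The phrases below are only examples - use whatever wording you prefer.
actor.add_keyword(_('power off'), Shutdown(say))
actor.add_keyword(_('activate self destruct'), Shutdown(say))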

Controlling NeoPixels

The other thing that I have used the AIY project for is to control NeoPixels. This is very much a work in progress at the moment, but I was already in the process of converting my NeoPixel GUI application to a client-server model using Python Bottle and a web control.

You can see this in action in the video below:

The NeoPixel code is on GitHub: NeoPixel GUI code on GitHub. I am working in a separate client-server branch, but it is in a state of change at the moment, so you will probably be better off waiting for it to be merged into the master branch at a later stage.
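To give an idea of the server side, here is a minimal sketch of a Bottle application exposing the kind of URLs that the voice commands request further down this page (status, alloff, allon and sequence). This is not the actual NeoPixel GUI code; the route names simply follow the URLs used below, and set_all and run_sequence are hypothetical placeholders for the real NeoPixel calls.

from bottle import Bottle, request, run

app = Bottle()

# Hypothetical stand-ins for the real NeoPixel handling code
def set_all(colour):
    print("Set all pixels to", colour)

def run_sequence(seq):
    print("Run sequence", seq)

@app.route('/status')
def status():
    return "OK"

@app.route('/alloff')
def all_off():
    set_all('black')
    return "OK"

@app.route('/allon')
def all_on():
    colour = request.query.get('colour', 'white')
    set_all(colour)
    return "OK"

@app.route('/sequence')
def sequence():
    seq = request.query.get('seq', 'rainbow')
    run_sequence(seq)
    return "OK"

# Port 80 matches the URLs used below (running on port 80 needs root)
run(app, host='0.0.0.0', port=80)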

I have then added individual entries for each of the commands to the action.py file. This uses wget to request the relevant URL (future versions will probably use the Python webbrowser module or something similar instead):

actor.add_keyword(_('lights'), Lights(say,_('lights')))

and

class Lights(object):

    def __init__(self, say, keyword):
        self.say = say
        self.keyword = keyword

    def run(self, voice_command):
        print("Keyword " + voice_command)
        # Base URL of the NeoPixel server - change to match your own setup
        getcmd = "wget http://192.168.0.106/"
        # Default to a status request if no recognised word is heard
        command = getcmd + "status"
        if "off" in voice_command:
            command = getcmd + "alloff"
        elif "on" in voice_command:
            command = getcmd + "allon?colour=white"
        elif "red" in voice_command:
            command = getcmd + "allon?colour=red"
        elif "green" in voice_command:
            command = getcmd + "allon?colour=green"
        elif "blue" in voice_command:
            command = getcmd + "allon?colour=blue"
        elif "rainbow" in voice_command:
            command = getcmd + "sequence?seq=rainbow"
        elif "chaser" in voice_command:
            command = getcmd + "sequence?seq=chaser"
        elif "disco" in voice_command:
            command = getcmd + "sequence?seq=theatreChaseRainbow"
        # Hard coded colours at the moment
        subprocess.check_output(command, shell=True).strip()
        self.say(voice_command)

Note that this is a quick hack (wget leaves temporary files in the local folder). I will be looking at cleaning up this code in the future, but I'm currently working on the NeoPixel side first. One possible cleaner approach is sketched below.
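As a possible alternative, the request could be made directly from Python rather than shelling out to wget, which avoids the temporary files. This is only a sketch, assuming the same URLs as above; it uses urllib.request rather than the webbrowser module mentioned earlier.

from urllib.request import urlopen

def send_light_command(path):
    # Request the URL directly instead of calling wget, so no files are written
    # The address is an example - change it to match your NeoPixel server
    url = "http://192.168.0.106/" + path
    with urlopen(url, timeout=5) as response:
        return response.read().decode().strip()

# Example: turn all the lights red
send_light_command("allon?colour=red")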

Summary

The Google / Raspberry Pi AIY kit has only been available (via The MagPi) for a few days and so this is just an initial look at what it is capable of. I'll be looking at improving this code in a future version and perhaps adding voice control to even more projects.