In this lesson, you will create a Scratch project that uses the TM2Scratch extension to recognize rock, paper, and scissors hand gestures using a Google Teachable Machine Image model.
Note: You will need a webcam or a camera on your computer to do this project.
If you did not create your own image model in the previous lesson, you can use the following link to an image model that has been trained to recognize rock, paper, and scissors hand gestures.
First, go to stretch3.github.io to create a new Scratch project.
Stretch3 is an experimental version of Scratch 3 that offers additional extensions, including TM2Scratch, which we will need to use our AI image model.
Now, let's add the TM2Scratch extension to your project.
Click on the 'Extensions' button at the bottom-left corner of the screen, and then click on 'TM2Scratch'. This extension allows you to use Google Teachable Machine Image models in your Scratch projects.
When you add this extension, it will automatically try to use your computer's camera. If asked, click 'Allow' to give it permission to use your camera. You should then see what the camera is showing in the stage area.
Scratch Extensions make it possible to connect Scratch projects with external hardware (such as LEGO WeDo or micro:bit), sources of information on the web (such as Google Translate and Amazon Text to Speech), or blocks allowing for more advanced functionality.
When an extension is enabled, its blocks appear in a location with the same name as the extension.
To load an extension, click the icon in the bottom-left corner of the screen and select an extension.
Now, let's set up the image model that our project will use. Add the following code to the Cat sprite:
when green flag clicked
turn video [on v] :: extension
Label once every (1 v) seconds :: extension
image classification model URL [link to your model] :: extension
Put the link to your image model into the 'image classification model URL' block.
Here's what the code does:
turn video [on v]: This block turns on the video from your webcam or computer camera.
Label once every (1 v) seconds: This block tells the program to recognize the hand gesture every second.
image classification model URL [link to your model]: This block sets the URL of the image model that will be used to recognize the hand gestures.
Tick the checkbox next to the 'image label' block in the TM2Scratch extension category. This displays the block's value on the stage, so you can see the label the model is currently assigning to your webcam image.
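Once the model is labelling the camera feed, you can make your sprite react to each gesture by comparing the 'image label' value to your class names. The sketch below (written in the same block notation as above) is just one possible example: the labels 'rock', 'paper', and 'scissors' are assumptions that must exactly match the class names you gave your Teachable Machine model, and the spoken messages are placeholders you can change.

when green flag clicked
forever
  if <(image label :: extension) = [rock]> then
    say [You played rock!] for (2) seconds
  end
  if <(image label :: extension) = [paper]> then
    say [You played paper!] for (2) seconds
  end
  if <(image label :: extension) = [scissors]> then
    say [You played scissors!] for (2) seconds
  end
end

If the sprite never speaks, check that the label names in the blocks match your model's class names exactly, including capitalization.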