Launch (8 items)
Machine learning model that can sort images
- Lecture 1.1
- Lecture 1.2
- Lecture 1.3
- Lecture 1.4
- Lecture 1.5
- Lecture 1.6
- Quiz 1.1
- Lecture 1.7
Natural Language Processing (7 items)
Machine learning model that can recognise natural language commands
- Lecture 2.1
- Lecture 2.2
- Lecture 2.3
- Lecture 2.4
- Lecture 2.5
- Quiz 2.1
- Lecture 2.6
Recommendation Systems (6 items)
Machine learning model that can recommend the reading age of a book based on data about the book
- Lecture 3.1
- Lecture 3.2
- Lecture 3.3
- Lecture 3.4
- Lecture 3.5
- Quiz 3.1
Decisions and Ethics (4 items)
Presentation or report summarising key points
- Lecture 4.1
- Lecture 4.2
- Lecture 4.3
- Lecture 4.4
Machine Learning Algorithms (12 items)
In this session we will be looking at some of the algorithms that make machine learning possible.
- Lecture 5.1
- Lecture 5.2
- Lecture 5.3
- Lecture 5.4
- Lecture 5.5
- Lecture 5.6
- Lecture 5.7
- Lecture 5.8
- Lecture 5.9
- Lecture 5.10
- Lecture 5.11
- Lecture 5.12
(Optional) Python and Orange (3 items)
Data visualisations using Orange; Python code for importing data and running machine learning algorithms (decision trees and kNN)
- Lecture 6.1
- Lecture 6.2
- Lecture 6.3
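As a taste of the ideas behind the decision trees covered in this optional session, the core logic can be sketched in plain Python. Orange builds such trees from data automatically; the hand-written split rules and reading-age bands below are purely illustrative, not the course's actual model.

```python
# A hand-built one-split-at-a-time "decision stump" chain: the simplest
# form of decision tree. Illustrative features and thresholds only.
def predict_reading_band(word_count, has_pictures):
    """Guess a reading-age band from two made-up book features."""
    if has_pictures and word_count < 5000:
        return "5-8"        # short picture book
    elif word_count < 40000:
        return "9-12"       # middle-grade length
    else:
        return "13+"        # long novel

print(predict_reading_band(3000, True))    # "5-8"
print(predict_reading_band(60000, False))  # "13+"
```

A learned tree works the same way, except the algorithm chooses the features and thresholds that best split the training data.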
[LAB] How to use ML2Scratch (step by step)
Setup
- Open https://stretch3.github.io.
- Open the “Choose an Extension” window and select “ML2Scratch”.
- When Chrome asks for permission to access the camera, click “Allow”.
- Check the checkboxes beside the “label”, “counts of label 1”, “counts of label 2” and “counts of label 3” blocks.
Training
- Show a “rock” hand sign to the camera and click the “train label 1” block. This trains the machine to recognise the “rock” sign as label 1.
- Keep clicking the block until you have captured about 20 images. The number of images captured is displayed in the “counts of label 1” field in the Stage window.
- Show a “paper” hand sign to the camera and keep clicking the “train label 2” block until “counts of label 2” reaches 20.
- Show a “scissors” hand sign to the camera and keep clicking the “train label 3” block until “counts of label 3” reaches 20.
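Conceptually, this kind of training just stores labelled examples and then classifies new frames by similarity, in the style of k-nearest-neighbours. A minimal pure-Python sketch, assuming camera frames have already been reduced to small feature vectors (the 2-D points below are stand-ins, not real image features):

```python
import math
from collections import Counter

class TinyKNN:
    """Minimal nearest-neighbour classifier: 'training' only stores
    labelled feature vectors, like clicking a 'train label N' block."""
    def __init__(self, k=3):
        self.k = k
        self.examples = []  # list of (feature_vector, label) pairs

    def train(self, features, label):
        self.examples.append((features, label))

    def classify(self, features):
        # Sort stored examples by distance and vote among the k nearest.
        dists = sorted(
            (math.dist(features, f), label) for f, label in self.examples
        )
        nearest = [label for _, label in dists[:self.k]]
        return Counter(nearest).most_common(1)[0][0]

# Hypothetical feature vectors standing in for captured frames.
knn = TinyKNN(k=3)
for f in [(0.0, 0.1), (0.1, 0.0), (0.05, 0.05)]:
    knn.train(f, 1)   # "rock" examples -> label 1
for f in [(1.0, 0.9), (0.9, 1.0), (0.95, 0.95)]:
    knn.train(f, 2)   # "paper" examples -> label 2

print(knn.classify((0.02, 0.08)))  # a rock-like frame -> 1
```

Capturing around 20 images per label, as the steps above suggest, gives the classifier enough varied examples for the nearest-neighbour vote to be reliable.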
Recognition
- After training, the recognition result appears in the “label” field in the Stage area. If you show “rock”, “label” should read “1”; if you show “paper”, “2”; and if you show “scissors”, “3”.
- You can then use the “when received label #” blocks to build a program that reacts to each sign.
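The “when received label #” hat blocks are essentially per-label event handlers. The dispatch pattern they implement can be sketched like this (the actions are invented placeholders for whatever your sprites do):

```python
# Hypothetical handlers mimicking Scratch's "when received label #" blocks:
# each recognised label triggers its own action.
handlers = {
    1: lambda: "rock: sprite plays a sound",
    2: lambda: "paper: sprite changes costume",
    3: lambda: "scissors: sprite moves 10 steps",
}

def on_label(label):
    """Run the handler registered for this label, if any."""
    action = handlers.get(label)
    return action() if action else "unknown label"

print(on_label(2))  # "paper: sprite changes costume"
```

In Scratch the dispatch happens automatically whenever the “label” value changes; here it is made explicit as a dictionary lookup.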
Switching between images to be learned/classified
You can switch which image is used for learning/classification.
By default, Scratch’s Stage image is used. If a webcam image is showing on the Stage, the webcam image is learned/classified; if the “Turn off video” block hides the webcam image and a game or animation screen is shown instead, that screen is used for learning/classification.
If you want to learn/classify only the webcam’s image, you can switch to using the webcam image directly, regardless of what the Stage shows. If you want to move a character with gestures in front of the camera, this is usually the more accurate approach.
Download/Upload
With ML2Scratch, you can download and save the trained model to your PC using the “download learning data” block.
Click the block, specify the download destination, and press “Save”. The learning data is saved as a file named <numerical string>.json.
The project itself is not saved automatically as on the normal Scratch site, so select “File” > “Save to your computer” to save it to your PC as a .sb3 file.
To reopen a saved project, choose “File” > “Load from your computer” and select the saved .sb3 file. After that, upload the learning data.
Saved learning data can be uploaded using the “upload learning data” block.
When you click it, a window called “upload learning data” opens; click the “Select file” button, select the training data file (<numerical sequence>.json), and confirm the upload.
Be aware that any learning data already learned will be overwritten at this point.
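The download/upload cycle amounts to serialising the stored examples to a JSON file and reading them back, with the upload replacing whatever was learned before. A minimal sketch, assuming a simple label-to-examples structure (ML2Scratch's real file format may differ, and the filename is an invented stand-in for the <numerical string> above):

```python
import json

# Hypothetical in-memory learning data: label -> list of feature vectors.
learning_data = {
    "1": [[0.0, 0.1], [0.1, 0.0]],   # "rock" examples
    "2": [[1.0, 0.9], [0.9, 1.0]],   # "paper" examples
}

def download(data, path):
    """Analogue of the 'download learning data' block: save to a .json file."""
    with open(path, "w") as f:
        json.dump(data, f)

def upload(path):
    """Analogue of the 'upload learning data' block: the loaded data
    replaces (overwrites) whatever was learned before."""
    with open(path) as f:
        return json.load(f)

download(learning_data, "1234567890.json")  # invented filename
restored = upload("1234567890.json")
print(restored == learning_data)  # True
```

This is why the overwrite warning matters: like `upload` here, loading a file discards the current in-memory learning data rather than merging with it.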