Training a Neural Network

Session A: Training a Neural Network

Objectives:

  • Learn the steps to construct a vanilla neural network and train a classification model with ml5.js.
  • Understand Neural Network architecture
    • What is a Perceptron?
    • What is a multi-layered perceptron?
    • Activation Functions
  • Understand the terminology of the training process
    • Training
    • Learning Rate
    • Epochs
    • Batch size
    • Loss
  • Understand the difference between training and inference
  • Slides
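The training terms above (learning rate, epochs, batch size, loss) can be made concrete with a toy gradient-descent loop. The sketch below is plain JavaScript, not ml5.js — ml5.js runs an equivalent loop internally when you call `train()` — and fits a tiny linear model so each term is visible in code:

```javascript
// Toy linear model y = w * x + b trained with mini-batch gradient descent.
// Illustrates: learning rate, epochs, batch size, and mean-squared-error loss.
const data = Array.from({ length: 100 }, (_, i) => {
  const x = i / 100;
  return { x, y: 2 * x + 1 }; // training should recover w ≈ 2, b ≈ 1
});

const learningRate = 0.1; // how big a step each weight update takes
const epochs = 200;       // full passes over the training data
const batchSize = 10;     // examples averaged per weight update

let w = 0, b = 0;

for (let epoch = 0; epoch < epochs; epoch++) {
  for (let start = 0; start < data.length; start += batchSize) {
    const batch = data.slice(start, start + batchSize);
    let gradW = 0, gradB = 0;
    for (const { x, y } of batch) {
      const error = (w * x + b) - y; // prediction minus target
      gradW += (2 * error * x) / batch.length;
      gradB += (2 * error) / batch.length;
    }
    w -= learningRate * gradW; // step downhill on the loss surface
    b -= learningRate * gradB;
  }
}

// Loss: mean squared error over the whole dataset after training.
const loss = data.reduce((s, { x, y }) => s + (w * x + b - y) ** 2, 0) / data.length;
console.log(w.toFixed(2), b.toFixed(2), loss.toFixed(4));
```

A larger learning rate converges faster but can overshoot and diverge; more epochs or a different batch size trade training time against how closely the loss approaches zero.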

Background: Perceptron and Artificial Neural Networks
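A perceptron — the single-unit ancestor of the networks in this session — fits in a few lines of plain JavaScript: a weighted sum of inputs plus a bias, passed through a step activation function. This is a hypothetical illustration (names like `makePerceptron` and `trainStep` are ours, not ml5.js API):

```javascript
// A single perceptron with a step activation, trained by the perceptron
// learning rule: nudge weights in proportion to the prediction error.
function makePerceptron(numInputs, learningRate = 0.1) {
  const weights = new Array(numInputs).fill(0);
  let bias = 0;

  // Step activation: 1 if the weighted sum crosses the threshold, else 0.
  const predict = (inputs) => {
    const sum = inputs.reduce((s, x, i) => s + x * weights[i], bias);
    return sum > 0 ? 1 : 0;
  };

  // One update of the perceptron learning rule.
  const trainStep = (inputs, target) => {
    const error = target - predict(inputs); // -1, 0, or 1
    inputs.forEach((x, i) => (weights[i] += learningRate * error * x));
    bias += learningRate * error;
  };

  return { predict, trainStep };
}

// Train on logical AND — linearly separable, so one perceptron suffices.
const andData = [
  { inputs: [0, 0], target: 0 },
  { inputs: [0, 1], target: 0 },
  { inputs: [1, 0], target: 0 },
  { inputs: [1, 1], target: 1 },
];
const p = makePerceptron(2);
for (let epoch = 0; epoch < 20; epoch++) {
  andData.forEach(({ inputs, target }) => p.trainStep(inputs, target));
}
console.log(andData.map(({ inputs }) => p.predict(inputs))); // [0, 0, 0, 1]
```

A single perceptron can only separate classes with a straight line, so it fails on XOR — that limitation is what motivates the multi-layered perceptron and nonlinear activation functions listed in the objectives.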

p5.js Oscillators

Related projects that map gesture to sound

Machine Learning for Human Creative Practice

ml5.js: Train Your Own Neural Network
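The ml5.js training flow covered in the tutorial above follows a fixed sequence: create the network, add data, normalize, train, then classify (inference). A minimal browser sketch along those lines is below — option names and callback signatures vary between ml5.js versions, so treat this as a sketch to check against the ml5.js reference rather than copy-paste code:

```javascript
// p5.js + ml5.js: click to collect (mouseX, mouseY) -> label examples,
// press 't' to train, then move the mouse to classify in real time.
let nn;
let trained = false;

function setup() {
  createCanvas(400, 400);
  // Step 1: create the neural network for a classification task.
  nn = ml5.neuralNetwork({ task: 'classification', debug: true });
}

function mousePressed() {
  // Step 2: collect real-time data; label by which half was clicked.
  const label = mouseX < width / 2 ? 'left' : 'right';
  nn.addData([mouseX, mouseY], [label]);
}

function keyPressed() {
  if (key === 't') {
    // Steps 3-4: scale inputs into a normalized range, then train.
    nn.normalizeData();
    nn.train({ epochs: 50, batchSize: 12 }, () => {
      trained = true; // training finished; safe to run inference
    });
  }
}

function draw() {
  background(220);
  if (trained) {
    // Step 5: inference — classify the live mouse position every frame.
    nn.classify([mouseX, mouseY], (results) => {
      // results[0] holds the top label and its confidence.
      text(results[0].label, 10, 20);
    });
  }
}
```

Note the split between the training phase (steps 1–4, done once) and inference (step 5, run continuously) — the same distinction named in the session objectives.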

Session B: Real-time Data

Objectives:

  • Revisit and examine the concepts of classification and regression as applied to real-time interaction.
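The classification/regression distinction can be made concrete with two toy models over the same real-time input: a classifier snaps the input to a discrete label, while a regressor outputs a continuous value. A hypothetical plain-JavaScript illustration (not ml5.js — nearest-neighbor and linear interpolation stand in for trained models):

```javascript
// Same input (a mouse x position), two kinds of output.
const examples = [
  { x: 50,  label: 'left',  freq: 220 },
  { x: 350, label: 'right', freq: 440 },
];

// Classification: return the discrete label of the nearest training example.
function classify(x) {
  const nearest = examples.reduce((a, b) =>
    Math.abs(a.x - x) < Math.abs(b.x - x) ? a : b);
  return nearest.label;
}

// Regression: return a continuous value by interpolating between examples.
function regress(x) {
  const [a, b] = examples;
  const t = (x - a.x) / (b.x - a.x);
  return a.freq + t * (b.freq - a.freq);
}

console.log(classify(120)); // 'left' — a discrete category
console.log(regress(120));  // a frequency somewhere between 220 and 440
```

For real-time interaction the choice matters: classification suits triggering discrete states (which sound plays), regression suits continuous control (pitch, volume, or an oscillator frequency that follows a gesture).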

Pose Data as inputs to Neural Network

Pixel Data as inputs to Neural Network

Face Data

Hand Data

Project References

Assignment

  1. Watch Machine Learning for Human Creative Practice, Dr. Rebecca Fiebrink at Eyeo 2018. Write a response to the following question posed by Dr. Fiebrink:
    • How can machine learning support people's existing creative practices? Expand people's creative capabilities?
  2. Dream up and design the inputs and outputs of a real-time machine learning system for interaction and audio/visual performance. This could be an idea well beyond the scope of what you can do in a weekly exercise.
  3. OpenAI API chat p5 sketch: https://editor.p5js.org/yining/sketches/cnlmIOoL9
  4. Create your own p5+ml5 sketch that trains a model with real-time interactive data. This can be a prototype of the aforementioned idea or a simple exercise where you run this week's code examples with your own data. Here are some exercise suggestions:
    • Can you invent a more elegant and intuitive interaction for collecting real-time data than clicking buttons?
    • What other real-time inputs might you consider beyond mouse position, image pixels, or face/pose/hand tracking? Could you use real-time sensor data?
    • What other real-time outputs might you consider beyond color or sound modulation? Could the output be a physical computing device? Multiple outputs like R,G,B values?
    • Improve the handPose example we built in class https://editor.p5js.org/yining/sketches/dX-aN-8E7
      • https://editor.p5js.org/yining/sketches/qi3iwsxg6
      • Can you add more keypoints from the hand to the data collection? (All the keypoints?)
      • Can you add more classification categories?
      • Can you create an interface for training and for showing the results of the model's predictions?
      • Can you turn this into a regression model?
  5. Complete a blog post with your response, real-time ML system, and documentation of your code exercise and link from the homework wiki.