Intention

This is a repository of all the things I would like to convert into projects, preferably using both R and Python. Really it's somewhere for me to store all my notes, links, blogs, tweets, and hallway whispers that I would like to revisit at some point. Step 1: organize notes.

Ambition

Fun videos to see

https://www.youtube.com/channel/UCYO_jab_esuFRV4b17AJtAw/videos

Piping to learn

  • Containers
    + Docker
    + S3 cluster
    + ECR
  • Amazon Athena Optimization
  • Amazon SageMaker
    + How to productionize ML models
    + End-to-End ML Platform
    + Zero setup
    + Flexible model training
    + TensorFlow
    + MXNet
    + Gluon
    + Scales out with demand; pay per use, billed by the second.
    + UX: SageMaker console + Jupyter notebooks
    + Use SageMaker's hosted Notebook Instances
    + or Apache Spark through EMR and the SageMaker Spark SDK
    + or SageMaker's Console for a point and click experience
    + or your own device (EC2, laptop, etc.)
    + Training/Hosting
    + Custom models via Docker/ECR
    + Low latency, high throughput, and high reliability
    + Zero downtime deployment and A/B testing
    + Trained model artifact is uploaded to S3
    + Built-in algorithms (see the training/deployment sketch after this list):
    + XGBoost, Factorization Machines (FM), Linear Learner, and DeepAR time-series forecasting for supervised learning
    + K-means, PCA, and Word2Vec for clustering and pre-processing
    + Image Classification
    + Native TensorFlow and MXNet support
    + Build your own algorithm:
    1. Pick your preferred framework
    + scikit-learn
    + R
    + PyTorch
    + Java
    2. ... upload to Docker .. then...
    + Hyperparameter optimization (automatic model tuning); see the tuning sketch after this list
    + Spark SDK reads from S3
    + SageMaker instances are typically smaller and not typically used for data munging; the data is assumed to have been munged already
    + See the SageMaker Jupyter notebook repository
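
A minimal sketch (not from these notes) of the train-then-host flow above, assuming the SageMaker Python SDK v2 and the built-in XGBoost algorithm; the role ARN, bucket, and S3 paths are placeholders.

```python
# Sketch: train a built-in SageMaker algorithm (XGBoost) and deploy it.
# Role ARN, bucket, and data paths below are placeholders, not real values.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder
bucket = "my-example-bucket"                                    # placeholder

# Built-in algorithms ship as Docker images in ECR; look up the image URI.
image_uri = sagemaker.image_uris.retrieve(
    framework="xgboost", region=session.boto_region_name, version="1.5-1"
)

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path=f"s3://{bucket}/output",  # trained model artifact lands in S3
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="reg:squarederror", num_round=100)

# Training data is expected to be munged already and sitting in S3 (CSV here).
estimator.fit({"train": TrainingInput(f"s3://{bucket}/train/", content_type="text/csv")})

# Hosting: put the trained artifact behind a real-time endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```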
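
And a sketch of automatic model tuning (hyperparameter optimization) on the same estimator, again assuming the SageMaker Python SDK v2; the objective metric and parameter ranges are illustrative.

```python
# Sketch: hyperparameter optimization (automatic model tuning) for the
# XGBoost estimator above; ranges and objective metric are illustrative.
from sagemaker.tuner import ContinuousParameter, HyperparameterTuner, IntegerParameter

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:rmse",  # emitted by built-in XGBoost
    objective_type="Minimize",
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(3, 10),
    },
    max_jobs=10,
    max_parallel_jobs=2,
)

tuner.fit({
    "train": TrainingInput(f"s3://{bucket}/train/", content_type="text/csv"),
    "validation": TrainingInput(f"s3://{bucket}/validation/", content_type="text/csv"),
})

# Deploy the best model found by the tuning job.
predictor = tuner.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```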

Regularization notes:

  • L1 is still a shrinkage penalty; it just shrinks coefficients exactly to zero in a finite amount of time, whereas L2 shrinks them toward zero only asymptotically.
  • L2 is often preferred over L1 for a few reasons. L2 is a pure shrinkage penalty, while L1 will set coefficients exactly to zero; L2 "moderates" your parameter values, whereas L1 eliminates them. Also, L2 is differentiable everywhere, which makes gradient-based optimization (e.g., backpropagation) well-behaved. A short comparison sketch follows below.
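
A short sketch (not from these notes) illustrating the contrast, using scikit-learn's Lasso (L1) and Ridge (L2) on synthetic data; the alpha value and data shape are arbitrary.

```python
# Sketch: L1 (Lasso) drives some coefficients exactly to zero,
# while L2 (Ridge) only shrinks them toward zero.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# Synthetic regression data where only a few features are informative.
X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)  # L1 penalty
ridge = Ridge(alpha=1.0).fit(X, y)  # L2 penalty

print("L1 coefficients set exactly to zero:", int(np.sum(lasso.coef_ == 0)))  # typically > 0
print("L2 coefficients set exactly to zero:", int(np.sum(ridge.coef_ == 0)))  # typically 0
```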

Neural Nets notes:
