
A Server For Every Signal

Maslo Companion Server


Overview

A Companion’s existence gives us a reason to express ourselves, go on adventures, and learn who we are. Companions are similar to Assistants like Siri and Alexa, but with more emphasis on personal growth and development.

Maslo Companion Server is a self-contained signal processing and machine learning server deployable to major cloud providers, private data centers, or your local computer.

With Companion Server, developers can pass unstructured signals for the computer to observe. The server then returns insights about human interactions that can be incorporated into enhanced products. Note: the original server used lower-level code with the Wolfram Engine, Python, and TensorFlow, but it has been simplified and slimmed down for maintenance and portability reasons.

The companion server can easily be attached to other systems by passing in images or text, and receiving a JSON response.
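
For example, here is a minimal sketch of calling the server from Node.js. It assumes the form-data and node-fetch packages are installed and the server is reachable on host port 41960 (the mapping used in the Docker example below); the field names follow the curl example later in this README.

const fs = require('fs');
const FormData = require('form-data');
const fetch = require('node-fetch');

async function analyzeImage(filePath) {
  // Build a multipart request with the image and the models to run
  const form = new FormData();
  form.append('media', fs.createReadStream(filePath));
  form.append('type', 'image/jpeg');
  form.append('originMediaID', 'example-id-123');
  form.append('modelsToCall', JSON.stringify({ imageMeta: 1, imageObjects: 1, faces: 1 }));

  const res = await fetch('http://localhost:41960/analyzeMedia', {
    method: 'POST',
    headers: form.getHeaders(),
    body: form,
  });
  return res.json(); // insights come back as JSON
}

analyzeImage('/path/to/your/file.jpeg').then(console.log).catch(console.error);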

Use Cases

  • Observing and understanding context in data
  • Tagging large corpora of content
  • Reorganizing photos
  • Analyzing company data
  • Making social media image filters
  • Learning and more... there are so many possibilities

Getting Started

Developing from local container

You can run docker-compose up --build (within the /apis folder), which will start a container you can develop against locally.

It runs nodemon to start the server, so any change should be reflected immediately.

Make sure you are using the right port (you can change it in the docker-compose.yml file). By default it maps host port 41690 to container port 8080.

Containers and Installations

Official Maslo Builds

https://hub.docker.com/r/heymaslo/maslocompanionserver/tags

Simple info on packaging up docker and node. See here for basic concepts: https://nodejs.org/en/docs/guides/nodejs-docker-webapp/

Important notes

Some of the node_modules have bugs that affect TensorFlow when running local models. One example is this DeepLab issue: tensorflow/tfjs#3723

Until that bug is fixed, it is important to use the EXACT node modules contained in this image. Additionally, if you want a full blackbox experience with no outside internet access, you must load all models from LOCAL FILE storage or a LOCAL/internal URL. The code shows a couple of ways to do this. It's not that fun to track it all down.
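
As a hedged sketch of one way to keep model loading fully local with tfjs-node (the model path here is a placeholder, not the server's actual layout):

const tf = require('@tensorflow/tfjs-node'); // registers the file:// IO handler

async function loadLocalModel() {
  // Loads a converted TensorFlow.js graph model straight from disk,
  // so no outside internet access is needed at runtime.
  return tf.loadGraphModel('file:///models/my-model/model.json');
}

loadLocalModel().then(() => console.log('model loaded from local storage'));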

Getting a Docker Container

Basic image build of app/models:

docker build -t [yourdockerhub]/maslocompanionserver .

Tag build for dockerhub push:

docker tag [yourdockerhub]/maslocompanionserver:latest [yourdockerhub]/maslocompanionserver:[tag.release.minorreleasenumber]

Push to Docker Hub:

docker push [yourdockerhub]/maslocompanionserver:1.0.8

Run from Docker Hub:

docker run [yourdockerhub]/maslocompanionserver:1.0.8

To bind port 8080 of the container to TCP port 41960 (or any other port of the host machine), run the command below. Make sure the desired port is available.

docker run -p 41960:8080 un1crom/maslocompanionserver:1.0.8

Pull from Docker Hub:

docker pull un1crom/maslocompanionserver:1.0.8

To stop and start Docker containers, see the Docker documentation for "run", "start", "rm", etc.

To test your new Maslo Companion Server, send it a request like the one below.

REMEMBER: container port 8080 is forwarded to whichever host port you mapped (41960 in the run example above), so point your request there.

curl --location --request POST 'localhost:41960/analyzeMedia' \
--form 'media=@/path/to/your/file.jpeg' \
--form 'type=image/jpeg' \
--form 'originMediaID=sdfsadfasdf' \
--form 'modelsToCall={"imageMeta": 1,"imageSceneOut": 1,"imageObjects": 1,"imageTox": 1,"imagePose": 1,"faces": 1,"photoManipulation": 1}'

Private Clouds/Data Centers

Kubernetes

For Kubernetes orchestration, standard approaches should work. This is a simple, stateless Node.js/Express container.
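
For orientation, the shape of what gets deployed is roughly a single Express process listening on port 8080 (matching the containerPort below). This is an illustrative sketch, not the actual server code, and the /healthz probe endpoint is hypothetical:

const express = require('express');
const app = express();

// Hypothetical liveness/readiness endpoint for Kubernetes probes
app.get('/healthz', (req, res) => res.sendStatus(200));

app.listen(8080, () => console.log('companion server listening on 8080'));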

YAML manifests for a kube/minikube spin-up for testing:

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: maslocompanionserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: maslocompanionserver
  template:
    metadata:
      labels:
        app: maslocompanionserver
    spec:
      containers:
        - name: app
          image: heymaslo/maslocompanionserver:1.0.8
          ports:
            - containerPort: 8080
          imagePullPolicy: Always
      imagePullSecrets:
        - name: regcred

Service

apiVersion: v1
kind: Service
metadata:
  name: maslocompanionserver
spec:
  type: NodePort
  selector:
    app: maslocompanionserver
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 8080
        

Public Clouds

Smoke Test

The smoke-test directory includes JavaScript for iterating through a directory of images and writing out the resulting JSON to a local file. Please open index.js and adjust the paths to your image directory, JSON output file, and any other parameters you'd like to adjust. Then run from the command line: node index.js
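
If you want to see the shape of that loop before opening the file, here is a rough sketch of the idea. The paths, port, and form fields are assumptions based on the curl example above, not the contents of the real index.js:

const fs = require('fs');
const path = require('path');
const FormData = require('form-data');
const fetch = require('node-fetch');

const IMAGE_DIR = '/path/to/your/images';   // adjust to your image directory
const OUTPUT_FILE = '/path/to/output.json'; // adjust to your JSON output file

async function run() {
  const results = [];
  for (const file of fs.readdirSync(IMAGE_DIR)) {
    // Post each image to the running companion server and collect the JSON
    const form = new FormData();
    form.append('media', fs.createReadStream(path.join(IMAGE_DIR, file)));
    form.append('type', 'image/jpeg');
    form.append('originMediaID', file);
    form.append('modelsToCall', JSON.stringify({ imageMeta: 1, imageObjects: 1 }));
    const res = await fetch('http://localhost:41960/analyzeMedia', {
      method: 'POST',
      headers: form.getHeaders(),
      body: form,
    });
    results.push({ file, analysis: await res.json() });
  }
  fs.writeFileSync(OUTPUT_FILE, JSON.stringify(results, null, 2));
}

run().catch(console.error);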

API

Head to the docs folder or https://heymaslo.github.io/companion-server/ for full documentation on the APIs.

NodeJS, ExpressJS, and Tensorflow

The package.json has all the details on what's in play. Please note there are sometimes bugs in NPM modules.

What Are The Algos?

Machine Learning models of various kinds. All implemented in nodejs/javascript.

Very Important Note About Machine Learning and Bias and Training Data and Assumptions

ALL DATA IS BIASED. Biased by the era in which it was produced, biased by the systems used to produce it, and limited by the measurement technology involved. More problematically, ALL DATA IS BIASED by our mostly arbitrary ways of reducing and categorizing the world. All data is built upon assumptions about how the world could possibly be categorized, divided, generalized, or specified. While this can be useful and sometimes moves us towards understanding and scientific fact, that is very rare.

Developers rarely write anything from scratch, so they carry forward bias from the past wherever they don't have the time or attention to change it. It is no different with this server and its originating code, datasets, and models. It is a server, and it takes on the capabilities of the context it is put to use in. The machine learning models included in this server are public and built from research/public datasets. All of it is inspectable and changeable. And you should inspect it and change it.

Transparency and engagement is the only possible ethical stance for machine learning and big data systems.

The Libraries and Algos

Companion Server Currently Activates (as of August 6):

ML Models

Maslo Trained Models

Maslo.ai trained some models directly

  • FerFace - a very basic model for classifying faces, trained using many of the available FER datasets floating around: https://github.com/microsoft/FERPlus etc.
  • PhotoManipulation - a very basic image classification model to classify photos manipulated by Instagram, Snapchat, Photoshop, etc.
  • Era of Photos - a large dataset for predicting what year/decade an image was created (using qualities of the image, without metadata)
  • Day or Night - simple classification of images as day or night; this could definitely be ramped up

Non ML signal processing

Tested Models included or possibly going to be included

Mostly Tensorflow

Various Model Transfers and Serving Mechanisms

Some AutoML models loaded into tensorflow

ONNX

Wolfram and MXNet:

Models and Datasets in addition to the Libraries and Algos

Photo Manipulation Data

Era of Photos

Images

Cities

Images with Questions

Full licensed dataset

Gender Coded Word Lists

Recognition:

Facial Expression Data: R Vemulapalli, A Agarwala, “A Compact Embedding for Facial Expression Similarity”, CoRR, abs/1811.11283, 2018.

Some very fun shadow detection/time of day stuff:

Algos for Image Lines:

Intents

Tinder Data:

Get More Data:

These demos are always useful to understand some of the models:

Libraries & things worth knowing about

Note: to speed things up dramatically, install the Node backend, which binds to the TensorFlow C++ library, by running npm i @tensorflow/tfjs-node (or npm i @tensorflow/tfjs-node-gpu if you have CUDA). Then call require('@tensorflow/tfjs-node') (-gpu suffix for CUDA) at the start of your program. Visit https://github.com/tensorflow/tfjs-node for more details.
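
As a quick, hedged illustration of that setup (nothing project-specific assumed):

const tf = require('@tensorflow/tfjs-node'); // use '@tensorflow/tfjs-node-gpu' if CUDA is available
console.log(tf.getBackend()); // 'tensorflow' indicates the native C++ backend is active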

Mildly interesting data

Interesting image javascript

Coming Soon

A lot. Generative things. Hyperobject things.

Audio

We will be adding audio processing to this. It requires a slightly different approach than some of these algorithms, due to the strange world of audio codecs and issues like speech-to-text. All very solvable, and we have solved them in different ways, but not without more horsepower than a little old Node.js server.

Video

Video is just Lots of Pictures (moving pictures aka "frames") and Audio.

Biomarkers and IoT

Generally pretty easy.

Story telling and Generative Art

We are already connecting this server up to Story Writing APIs, chatbots and more.

Contributing

When contributing to this repository, please first discuss the change you wish to make via an issue.

  1. Ensure any install or build dependencies are removed before the end of the layer when doing a build.
  2. Update the README.md with details of changes to the interface, including new environment variables, exposed ports, useful file locations, and container parameters.
  3. You may merge the Pull Request in once you have the sign-off of two other developers, or if you do not have permission to do that, you may request the second reviewer to merge it for you.