This is the code from a small project I worked on during the summer of 2017, exploring the generation of semantic segmentation training data from 3D renders. You can read about it here.
- Download the NYU V2 labelled dataset and save it somewhere.
- Run `cd src`, then `pip3 install -r requirements.txt`, then `cd ..`.
- Run `python3 src/convert_nyu_v2.py <PATH TO NYU DATASET>`. This will populate `model/nyu_data` with the baseline data in the correct format.
- Install `model/caffe-segnet-cudnn5` by following the instructions that are somewhere online :)
- Create a symlink from `/SegNet` to the `model/` folder. Alternatively, you can go through all files and replace instances of `/SegNet` with the path to the model folder. Sorry, I know this isn't ideal, but it's how caffe-segnet wanted things to be installed.
- Run `/SegNet/caffe-segnet-cudnn5/build/tools/caffe train -gpu <GPU_ID> -solver <SOLVER_PROTOTXT>`, where `SOLVER_PROTOTXT` is either `/SegNet/models/nyu_segnet_solver.prototxt` or `/SegNet/models/combined_segnet_solver.prototxt`, depending on whether or not you want to exclude the synthetic images I've generated already.
- Wait for a long time, depending on how good your GPU is. You may need to fiddle with the batch size to get it to run more effectively; look at the caffe-segnet tutorial for help here.
- Run `python3 src/compute_bn_statistics.py` to generate your final `.caffemodel` file. I think that you can find info on this in the caffe-segnet tutorial.
- Run `python3 src/score_model.py` on the images you want to test your model with.
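For context on what scoring typically involves: this is not the actual implementation of `src/score_model.py` (I haven't inspected it), but SegNet-style evaluation scripts usually report global accuracy, per-class accuracy, and mean intersection-over-union from a confusion matrix. A minimal sketch of those standard metrics:

```python
# Sketch of standard semantic-segmentation metrics. This is an
# illustration of what a scoring script typically computes, NOT the
# actual code from src/score_model.py in this repo.
import numpy as np

def segmentation_metrics(pred, gt, num_classes):
    """pred, gt: integer label arrays of the same shape."""
    # Confusion matrix: rows = ground-truth class, cols = predicted class.
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(gt.ravel(), pred.ravel()):
        cm[t, p] += 1
    tp = np.diag(cm).astype(float)          # correctly classified pixels
    global_acc = tp.sum() / cm.sum()        # overall pixel accuracy
    class_acc = tp / np.maximum(cm.sum(axis=1), 1)
    union = cm.sum(axis=1) + cm.sum(axis=0) - tp
    iou = tp / np.maximum(union, 1)         # per-class intersection / union
    return global_acc, class_acc.mean(), iou.mean()

# Tiny 2x2 example with two classes.
pred = np.array([[0, 1], [1, 1]])
gt   = np.array([[0, 1], [0, 1]])
g, c, m = segmentation_metrics(pred, gt, num_classes=2)
# g = 0.75, c = 0.75, m = 7/12
```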
If you just want to see some synthetic examples I've already generated, look in `model/syn_data/images`. Otherwise, you can follow these steps to make your own.
- Fire up Blender and model and light a scene of your choosing (Cycles rendering engine only).
- Change your Blender scripts directory to point to `blender_scripts/`.
- Select relevant objects and use the `Label Selected (syntrain)` addon to apply a label to them.
- Open `src/setup_blender_nodes.py` in your script editor and run it. This will set up the compositor nodes to output the labels and images from your scene to a folder named `render/` in the same directory as your `.blend` file. If you make changes to labels after running this script, you need to delete the created compositor nodes and run it again. Again, not ideal, but it doesn't take that long.
- Render your scene and check the `render/` folder for output.
- Note that if you plan on using these images to train this model, you'll need to update the relevant `train.txt` file and recalculate the class weights using `python3 src/calculate_class_weighting.py`.
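As background on the class-weighting step: I haven't verified what `src/calculate_class_weighting.py` does internally, but the SegNet paper uses median-frequency balancing (Eigen & Fergus), where each class weight is the median class frequency divided by that class's frequency, so rare classes get larger weights. A hedged sketch of that technique (the function name and details here are illustrative, not this repo's code):

```python
# Sketch of median-frequency class balancing, the weighting scheme used
# by SegNet. Illustrative only -- NOT the actual implementation of
# src/calculate_class_weighting.py.
import numpy as np

def median_freq_weights(label_images, num_classes):
    pixel_counts = np.zeros(num_classes)  # total pixels of each class
    image_counts = np.zeros(num_classes)  # total pixels in images where class appears
    for labels in label_images:
        for c in np.unique(labels):
            pixel_counts[c] += np.sum(labels == c)
            image_counts[c] += labels.size
    freq = pixel_counts / np.maximum(image_counts, 1)
    nonzero = freq[freq > 0]
    # weight(c) = median_freq / freq(c): rare classes get larger weights,
    # classes never observed get weight 0.
    return np.where(freq > 0, np.median(nonzero) / np.maximum(freq, 1e-12), 0.0)

# One tiny label map where class 0 covers 3/4 of the pixels.
labels = [np.array([[0, 0], [0, 1]])]
w = median_freq_weights(labels, num_classes=2)
# w[0] = 2/3, w[1] = 2.0 -- the rarer class is upweighted
```

These weights would then go into the loss layer's per-class weighting so that small but important classes are not swamped by walls and floors.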
- https://www.blendswap.com/blends/view/88906
- https://www.blendswap.com/blends/view/72366
- https://www.blendswap.com/blends/view/17385
- https://www.blendswap.com/blends/view/85400
- https://www.blendswap.com/blends/view/42851