This repository has been archived by the owner on Aug 5, 2022. It is now read-only.

Releases: intel/caffe

Caffe_v1.1.6

22 May 08:29
  1. Optimize the inference performance of the first-layer INT8 convolution
  2. Support multi-instance inference with weight sharing
  3. Add Windows support for single-node training and inference
  4. Fix bugs in FC INT8, LARS, and the calibration tool

Caffe_v1.1.6

13 May 06:13
Pre-release
  1. Optimize the inference performance of the first-layer INT8 convolution
  2. Support multi-instance inference with weight sharing
  3. Add Windows support for single-node training and inference
  4. Fix bugs in FC INT8, LARS, SSD detection, and the calibration tool

Caffe_v1.1.5

04 Mar 06:02
  1. Support memory optimization for inference
  2. Enable INT8 InnerProduct and its calibration support
  3. Release the full INT8 model of ResNet-50 v1.0
  4. Fix in-place concat for INT8 inference with batch size 1

Caffe_v1.1.4

02 Feb 09:13
  1. Enabled single-node VNET training and inference
  2. Enhanced full convolution calibration to support models with customized data layer
  3. Enabled inference benchmarking scripts with multi-instance inference support
  4. Supported INT8 accuracy test in docker image

Caffe_v1.1.3

11 Dec 07:32
  1. Upgraded to MKLDNN v0.17
  2. Supported INT8 convolution with signed input
  3. Added support for more 3D layers

Caffe_v1.1.2a

08 Nov 07:10

Features:

  1. Support multi-node inference

Caffe_v1.1.2

29 Sep 03:26
  • Features
  1. INT8 inference
    Inference speed improved with the upgraded MKL-DNN library.
    In-place concat reduces latency at batch size 1. Concat input scales are unified for better performance; support added in the calibration tool as well

  2. FP32 inference
    ~3X performance improvement on the DetectionOutput layer
    Added MKL-DNN 3D convolution support

  3. Multi-node training
    SSD-VGG16 multi-node training is supported

  4. New models
    Support training of R-FCN object detection model
    Support training of Yolo-V2 object detection model
    Support inference of SSD-MobileNet object detection model
    Added the SSD-VGG16 multi-node model that converges to SOTA

  5. Build improvement
    Fixed compiler warnings using GCC7+ version

  6. Misc
    MKLML upgraded to mklml_lnx_2019.0.20180710
    MKL-DNN upgraded to v0.16+ (4e333787e0d66a1dca1218e99a891d493dbc8ef1)

  • Known issues
  1. INT8 inference accuracy drops for convolutions whose output channel count is not divisible by 16
  2. FP32 training cannot reach SOTA accuracy with Winograd convolution
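The concat scale unification mentioned under INT8 inference can be sketched with NumPy (illustrative helper names, not Intel Caffe code): when several INT8 tensors feed one concat, their quantization scales must be unified so the output is a single INT8 tensor with one scale, avoiding per-input requantization at runtime.

```python
import numpy as np

def quantize(x, scale):
    """Symmetric INT8 quantization: FP32 value -> int8 using the given scale."""
    return np.clip(np.round(x * scale), -128, 127).astype(np.int8)

def unify_concat_scales(a_fp32, b_fp32):
    """Quantize both concat inputs with one shared scale (the smaller of the
    two per-input scales, so neither input saturates)."""
    scale_a = 127.0 / np.abs(a_fp32).max()
    scale_b = 127.0 / np.abs(b_fp32).max()
    shared = min(scale_a, scale_b)          # coarser scale covers both ranges
    return quantize(a_fp32, shared), quantize(b_fp32, shared), shared

a = np.array([0.5, -1.0, 0.25], dtype=np.float32)
b = np.array([2.0, -0.5, 1.5], dtype=np.float32)
qa, qb, s = unify_concat_scales(a, b)
out = np.concatenate([qa, qb])              # one INT8 tensor, one scale
```

With a shared scale the concat is a plain memory copy (or, in-place, no copy at all), which is where the batch-size-1 latency win comes from.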

Caffe_v1.1.1a

09 Apr 23:34
  • Features
  1. Update batch sizes in the benchmark scripts
  • Bug fixes
  1. Fix the docker image build target cpu-ubuntu

Caffe_v1.1.1

27 Mar 00:16
  • Features
  1. INT8 inference
    Inference speed improved with upgraded MKL-DNN library.
    Accuracy improved with channel-wise scaling factor. Support added in calibration tool as well.
  2. Multi-node training
    Better training scalability on 10GbE with prioritized communication in gradient all-reduce.
    Support Python binding for multi-node training in pycaffe.
    Default build now includes multi-node training feature.
  3. Layer performance optimization: dilated convolution and softmax
  4. Auxiliary scripts
    Added a script to parse the training log and plot loss trends (tools/extra/caffe_log_parser.py and tools/extra/plot_loss_trends.py).
    Added a script to identify the batch size for optimal throughput given a model (scripts/obtain_optimal_batch_size.py).
    Improved benchmark scripts to support Inception-V3 and VGG-16
  5. New models
    Support inference of R-FCN object detection model.
    Added the Inception-V3 multi-node model that converges to SOTA.
  6. Build improvement
    Merged PR#167 "Extended cmake install package script for MKL"
    Fixed all ICC/GCC compiler warnings and enabled warning as error.
    Added build options to turn off each inference model optimization.
    Do not try to download MKL-DNN when there is no network connection.
  • Misc
  1. MLSL upgraded to 2018-Preview
  2. MKL-DNN upgraded to 464c268e544bae26f9b85a2acb9122c766a4c396
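The channel-wise scaling factors mentioned under INT8 inference can be sketched with NumPy (a minimal illustration, not the calibration tool's actual code): instead of one scale for the whole weight tensor, each output channel gets its own scale, which lowers quantization error when channel magnitudes differ widely.

```python
import numpy as np

def per_channel_scales(weights):
    """weights: (out_channels, in_channels, kh, kw) FP32 conv weights.
    Returns one symmetric INT8 scale per output channel."""
    flat = np.abs(weights).reshape(weights.shape[0], -1)
    return 127.0 / flat.max(axis=1)

def per_tensor_scale(weights):
    """A single scale for the whole tensor, for comparison."""
    return 127.0 / np.abs(weights).max()

def quant_error(w, scales):
    """Mean absolute quantize/dequantize error under per-channel scales."""
    s = np.asarray(scales).reshape(-1, 1, 1, 1)
    q = np.clip(np.round(w * s), -128, 127)
    return np.abs(q / s - w).mean()

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3, 3, 3)).astype(np.float32)
w[0] *= 0.01                                 # one low-magnitude channel

err_tensor = quant_error(w, np.full(4, per_tensor_scale(w)))
err_channel = quant_error(w, per_channel_scales(w))
# per-channel scaling yields lower mean quantization error
```

The low-magnitude channel is where the accuracy improvement shows: under a single tensor-wide scale its values collapse toward zero, while its own channel scale preserves them.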

Caffe_v1.1.0

13 Jan 07:14
  • Features
  1. Support INT8 inference. A calibration tool is provided to transform FP32 models to INT8 models
  2. Support convolution and element-wise sum fusion, boosting inference performance (e.g. ResNet-50)
  3. Support SSD training and inference with pure MKLDNN engine
  4. Enhance MSRA weight filler with scale parameter
  5. Support performance collection on single node in the same way as multi-node
  6. Set CPU_ONLY as default in CMake configuration
  • Bug fixes
  1. Fix correctness issue on layers with various engines
  2. Sync sampling bug fix 96175b from Wei Liu’s SSD branch
  3. Fix multi-node crash issue running from pycaffe
  4. Correct link library of MLSL for multi-node
  5. Fix build issue of weight quantization
  • Misc
  1. Upgrade MKLML to 2018.0.1.20171227 and MKLDNN to v0.12
  2. Update models for multi-node training
  3. Enhance installation and benchmarking scripts
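The FP32-to-INT8 transformation that a calibration tool performs boils down to choosing quantization scales from sample data; a minimal sketch under that assumption (illustrative helper names, not the tool's API):

```python
import numpy as np

def calibrate_scale(activations):
    """Pick a symmetric INT8 scale from FP32 calibration batches:
    map the observed max magnitude to 127."""
    return 127.0 / max(np.abs(a).max() for a in activations)

def quantize(x, scale):
    return np.clip(np.round(x * scale), -128, 127).astype(np.int8)

def dequantize(q, scale):
    return q.astype(np.float32) / scale

# "Calibrate" over a few sample batches, then quantize a fresh batch.
batches = [np.array([0.1, -2.0, 1.5], dtype=np.float32),
           np.array([0.7, -0.3, 4.0], dtype=np.float32)]
scale = calibrate_scale(batches)            # 127 / 4.0 = 31.75
x = np.array([1.0, -2.5, 3.9], dtype=np.float32)
roundtrip = dequantize(quantize(x, scale), scale)
max_err = np.abs(roundtrip - x).max()       # bounded by 0.5 / scale
```

In practice the tool records per-blob ranges while running the FP32 model over a calibration set and writes the resulting scales into the deployed INT8 model definition.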