Any Roadmap Available? #162
Comments
Yes, something like the OpenCV weekly meetings, the Torch roadmap at torch/torch7#326, or the GitLab direction page could be really helpful for the community to plan PR activity better and avoid overlapping too much with the Google TensorFlow team's internal work.
Fixes tensorflow#162. Change: 112102777
@vrv Thank you. Could you add a "we are working on..." section exposing what the Google team already has in progress? It seems that you don't use WIP PRs for internal work, so keeping a section like that updated would help contributors avoid starting work on things already allocated internally.
These are all things we are actively working on. We don't yet have a more aspirational, longer-term timeline; when we do, we will publish that as well. It may include things we know we want but are not yet working on. Where there are things that are useful for the community to take on, we have marked them with the contributions welcome tag. For instance, many issues relating to performance improvements are marked this way.
@martinwicke OK, thank you for the clarification. But, for example, #22 is tagged contributions welcome even though it is also on the roadmap. Is it tagged contributions welcome because it is long term, or have you already started designing/coding it? I'm asking because the point of my previous comment was to clarify, for a contributor, what makes sense to start in their own fork so as not to conflict later with emerging internal code aimed at the same target.
In the case of #22, we are coordinating with the people on that thread.
* Add -Wno-c++11-narrowing to ComputeCpp device compiler flags to avoid build errors on 32-bit targets.
* Added SYCL support to DeviceSpec.parse_from_string, fixing a regression in running the ResNet sample from the TensorFlow models repository with SYCL.
* Bumped Eigen version.
* [OpenCL] Adds option to disable SYCL vectorization (tensorflow#161). Adds an option to the configure script to disable SYCL vectorization. This also rewrites and cleans up the computecpp.tpl build script, though the actual behaviour has not changed.
* [OpenCL] Fixes Variable Resource op for SYCL (tensorflow#162). Recent changes to the VariableResource ops were broken for SYCL. This fixes the errors introduced by those changes.
* [OpenCL] Alignment fixed in Eigen. We no longer need the alignment workaround, as the underlying problem is fixed in Eigen.
* [OpenCL] Adds Eigen changes for new RC.
* [OpenCL] Adds support for SYCL devices to nn_ops_test.
* [OpenCL] Fixes multiple registrations of same op. The registration of `ReadVariableOp` does not depend on the datatype, so we were registering more than one of the same op.
* [OpenCL] Adds naive forward pass Conv2D kernel. Provides a very naive, unoptimised forward convolution SYCL kernel.
* [OpenCL] Adds naive backprop for SYCL Conv2D. Adds both filter and input backprop.
* [OpenCL] Fixes multiple registrations of same op (tensorflow#163). The registration of `ReadVariableOp` does not depend on the datatype, so we were registering more than one of the same op.
* [ACL] Adding ARM Compute Library.
* [ACL] Adds gemm code.
* [ACL] Adds ARM_NO_EXCEPTIONS.
* [ACL] Don't register half for ARM.
* [ACL] Adds linking to OpenCL.
* Tidied up formatting of ACL integration.
* Bug fixes to ARM Compute Library GEMM integration into matmul, from Duncan McBain.
* Fixed typos in configure.py help messages.
* Reverted formatting and logging changes that aren't related to ACL.
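The duplicate-registration bug mentioned above (a dtype-independent `ReadVariableOp` registered once per dtype) can be pictured with a toy registry. This is an illustrative Python sketch, not TensorFlow's actual C++ kernel-registration machinery; `OpRegistry` and its methods are invented for the example:

```python
class OpRegistry:
    """Toy kernel registry: at most one kernel per (op_name, device) key."""

    def __init__(self):
        self._kernels = {}

    def register(self, op_name, device, kernel):
        key = (op_name, device)
        if key in self._kernels:
            raise ValueError(f"duplicate registration for {key}")
        self._kernels[key] = kernel


registry = OpRegistry()

# Buggy pattern: the kernel does not depend on the dtype, so registering
# it inside a per-dtype loop registers the same (op, device) key twice.
try:
    for dtype in ("float32", "float64"):
        registry.register("ReadVariableOp", "SYCL", kernel=object())
except ValueError as e:
    print(e)  # duplicate registration for ('ReadVariableOp', 'SYCL')

# Fix: register the dtype-independent kernel exactly once.
fixed = OpRegistry()
fixed.register("ReadVariableOp", "SYCL", kernel=object())
```

The fix in the branch amounts to the second pattern: the registration is hoisted out of the per-datatype expansion so the op is registered only once per device.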
This PR is a stepping stone towards supporting generic multi-store source loop nests in affine loop fusion. It extends the algorithm to support fusion of multi-store loop nests that:

1. have only one store that writes to a function-local live out, and
2. have remaining stores that are involved in loop nest self dependences or no dependences within the function.

Closes #162

COPYBARA_INTEGRATE_REVIEW=tensorflow/mlir#162 from dcaballe:dcaballe/multi-output-fusion 7fb7dec6fe8b45f5ce176f018bfe37b256420c45
PiperOrigin-RevId: 273773907
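The two fusion criteria can be pictured with plain Python loops standing in for affine loop nests. This is only a sketch with invented arrays and values, not MLIR code: `live` plays the role of the single function-local live-out store, and `scratch` is only written and re-read inside the source nest (a self dependence):

```python
n = 10
live = [0.0] * n     # function-local live out: read by the later loop nest
scratch = [0.0] * n  # written and read only inside the source nest (self dependence)

# Source loop nest: two stores, but only the store to `live` is live out.
for i in range(n):
    scratch[i] = float(i)       # store involved only in a self dependence
    live[i] = scratch[i] * 2.0  # the single live-out store

# Destination loop nest: consumes `live`.
out = [live[i] + 1.0 for i in range(n)]

# After fusion, the two nests collapse into one loop with the same result.
fused = []
for i in range(n):
    s = float(i)
    fused.append(s * 2.0 + 1.0)

print(out == fused)  # True
```

Because the store to `scratch` has no uses outside the source nest, fusing the nests cannot change any value observed elsewhere in the function, which is why the algorithm can safely admit this multi-store source nest.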
As stated on http://tensorflow.org,
I would therefore like to know if a roadmap exists.