
Any Roadmap Available? #162

Closed
stephenbaidu opened this issue Nov 12, 2015 · 6 comments
Labels
type:docs-bug Document issues

Comments

@stephenbaidu

As stated on http://tensorflow.org,

TensorFlow is not complete; it is intended to be built upon and extended.

I would therefore like to know if a roadmap exists.

@teamdandelion teamdandelion added the type:docs-bug Document issues label Nov 12, 2015
@bhack
Contributor

bhack commented Jan 9, 2016

Yes, something like the OpenCV weekly meetings, the Torch roadmap at torch/torch7#326, or the GitLab direction page could be really helpful: it would let the community plan PR activity better and avoid overlapping too much with the Google TensorFlow team's internal work.

@bhack
Contributor

bhack commented Jan 11, 2016

@vrv closed this as completed in cbdf278 on Jan 14, 2016
kentonl pushed a commit to kentonl/tensorflow that referenced this issue Jan 14, 2016
Fixes tensorflow#162.
Change: 112102777
@bhack
Contributor

bhack commented Jan 14, 2016

@vrv Thank you. Could you add a "what we are working on" section that lists what the Google team already has in progress? Since you don't seem to use WIP PRs for internal work, keeping a section like that up to date would help contributors avoid starting on things that are already being handled internally.

@martinwicke
Member

These are all things we are actively working on. We don't yet have a more aspirational, longer-term timeline; when we do, we will publish that as well. That may include things we know we want but are not yet working on.

Where there are things that are useful for the community to take on, we have marked them with the contributions welcome tag. For instance, many issues relating to performance improvements are marked this way.

@bhack
Contributor

bhack commented Jan 14, 2016

@martinwicke OK, thank you for the clarification. But, for example, #22 is tagged contributions welcome yet it is also on the roadmap. Is it tagged contributions welcome because it is "long term", or have you already started designing or coding it? I'm asking because the point of my previous comment was to clarify, for a contributor, what makes sense to start in their own fork without later conflicting with internal code being developed toward the same target.

@martinwicke
Member

In the case of #22, we are coordinating with the people on that thread (and others). It hasn't gotten beyond initial planning yet. If you are considering contributing to anything, do comment on the associated issue to find out the current state of affairs and to avoid duplicating effort. The issues will be updated with major developments, but the actual development usually happens outside the issue thread.


lukeiwanski pushed a commit to codeplaysoftware/tensorflow that referenced this issue Oct 26, 2017
* Add -Wno-c++11-narrowing to ComputeCpp device compiler flags to avoid build errors on 32-bit targets.

* Added SYCL support to DeviceSpec.parse_from_string - fixes a regression in running the Resnet sample from the TensorFlow models repository with SYCL.

* Bumped Eigen version.

* [OpenCL] Adds option to disable SYCL vectorization (tensorflow#161)

Adds an option to the configure script to disable SYCL vectorization.
This also rewrites and cleans up the computecpp.tpl build script, though
the actual behaviour has not changed.

* [OpenCL] Fixes Variable Resource op for SYCL (tensorflow#162)

Recent changes to the VariableResource ops were broken for SYCL. This
fixes the errors introduced by those changes.

* [OpenCL] Alignment fixed in Eigen

Don't need to use the alignment workaround any more, as the underlying
problem is fixed in Eigen.

* [OpenCL] Adds Eigen changes for new RC

* [OpenCL] Adds support for SYCL devices to nn_ops_test

* [OpenCL] Fixes multiple registrations of same op

The registration of `ReadVariableOp` does not depend on the datatype, so
we were registering more than one of the same op (see the sketch after
this commit list).

* [OpenCL] Adds naive forward pass Conv2D kernel

Provides a very naive unoptimised forward convolution SYCL kernel.

* [OpenCL] Adds naive backprop for SYCL Conv2D

Adds both filter and input backprop

* [OpenCL] Fixes multiple registrations of same op (tensorflow#163)

The registration of `ReadVariableOp` does not depend on the datatype, so
we were registering more than one of the same op.

* [ACL] Adding ARM Compute Library

* [ACL] Adds gemm code

* [ACL] Adds ARM_NO_EXCEPTIONS

* [ACL] Don't register half for ARM

* [ACL] Adds linking to OpenCL

* Tidied up formatting of ACL integration.

* Bug fixes to ARM Compute Library GEMM integration into matmul, from Duncan McBain.

* Fixed typos in configure.py help messages.

* Reverted formatting and logging changes that aren't related to ACL.
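
The `ReadVariableOp` fix mentioned above is easy to picture. Below is a minimal C++ sketch of the pattern, under the assumption that the buggy code expanded a per-dtype registration macro; the macro name and exact shape are invented for illustration and this is not the actual TensorFlow source.

```cpp
// Assumes the TensorFlow kernel-registration headers, e.g.
// tensorflow/core/framework/op_kernel.h.

// Before (assumed shape of the bug): the registration was expanded once
// per dtype T, but nothing in it depends on T, so the same
// ("ReadVariableOp", DEVICE_SYCL) pair was registered repeatedly.
//
// #define REGISTER_SYCL_READ_VARIABLE(T)                                \
//   REGISTER_KERNEL_BUILDER(Name("ReadVariableOp").Device(DEVICE_SYCL), \
//                           ReadVariableOp);
// TF_CALL_GPU_NUMBER_TYPES(REGISTER_SYCL_READ_VARIABLE);

// After: register the dtype-independent kernel exactly once.
REGISTER_KERNEL_BUILDER(Name("ReadVariableOp").Device(DEVICE_SYCL),
                        ReadVariableOp);
```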
tensorflow-copybara pushed a commit that referenced this issue Oct 9, 2019
This PR is a stepping stone towards supporting generic multi-store
source loop nests in affine loop fusion. It extends the algorithm to
support fusion of multi-store loop nests that:
 1. have only one store that writes to a function-local live out, and
 2. the remaining stores are involved in loop nest self dependences
    or no dependences within the function.

Closes #162

COPYBARA_INTEGRATE_REVIEW=tensorflow/mlir#162 from dcaballe:dcaballe/multi-output-fusion 7fb7dec6fe8b45f5ce176f018bfe37b256420c45
PiperOrigin-RevId: 273773907
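
To make the fusion precondition concrete, here is a hedged sketch using plain C++ loops standing in for MLIR affine loop nests; the buffer names are invented for this example.

```cpp
constexpr int N = 1024;
float a[N], b[N], tmp[N], out[N], c[N];

void fusion_candidate() {
  // Source nest with two stores: the stores to 'tmp' only participate in
  // self-dependences inside this nest, while 'out' receives the single
  // store to a function-local live-out (conditions 1 and 2 above).
  for (int i = 0; i < N; ++i) {
    tmp[i] = a[i] * 2.0f;
    out[i] = tmp[i] + b[i];
  }
  // Destination nest: consumes the live-out value.
  for (int i = 0; i < N; ++i) {
    c[i] = out[i] * out[i];
  }
  // The extended algorithm may fuse the two nests into one, conceptually:
  //   for (i) { tmp[i] = a[i]*2; out[i] = tmp[i]+b[i]; c[i] = out[i]*out[i]; }
}
```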