
Error in creating the final binary using AOT compilation for CPU backend #13482

Closed
martozzz opened this issue Oct 4, 2017 · 10 comments
Labels
stat:awaiting response (Status - Awaiting response from author), type:support (Support issues)

Comments

@martozzz

martozzz commented Oct 4, 2017

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (use command below): v1.3.0-rc1-2665-g242b0f1
  • Python version: Python3
  • Bazel version (if compiling from source): 0.6.0
  • CUDA/cuDNN version: No
  • GPU model and memory: No
  • Exact command to reproduce:
    bazel build //tensorflow/compiler/aot/tests:my_binary

Describe the problem

I simply followed the tutorial here: https://www.tensorflow.org/performance/xla/tfcompile

According to Steps 1 and 2, I compiled the subgraph and generated the header (test_graph_tfmatmul.h) and object (test_graph_tfmatmul.o) files using tfcompile;

According to Step 3, I used the example code (named my_code.cc) to invoke the subgraph;

According to Step 4, I added the code snippet cc_binary to the existing BUILD file (//tensorflow/compiler/aot/tests/BUILD), and tried to create the final binary with the command:

bazel build //tensorflow/compiler/aot/tests:my_binary

but I got the following error:

undeclared inclusion(s) in rule '//tensorflow/compiler/aot/tests:my_binary': this rule is missing dependency declarations for the following files included by 'tensorflow/compiler/aot/tests/my_code.cc': '/home/tensorFlow_src/tensorflow/tensorflow/compiler/aot/tests/test_graph_tfmatmul.h'

Source code / logs

my_code.cc (exactly the same as in the tutorial):

#define EIGEN_USE_THREADS
#define EIGEN_USE_CUSTOM_THREAD_POOL

#include <iostream>
#include "third_party/eigen3/unsupported/Eigen/CXX11/Tensor"
#include "tensorflow/compiler/aot/tests/test_graph_tfmatmul.h" // generated

int main(int argc, char** argv) {
  Eigen::ThreadPool tp(2);  // Size the thread pool as appropriate.
  Eigen::ThreadPoolDevice device(&tp, tp.NumThreads());

  foo::bar::MatMulComp matmul;
  matmul.set_thread_pool(&device);

  // Set up args and run the computation.
  const float args[12] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12};
  std::copy(args + 0, args + 6, matmul.arg0_data());
  std::copy(args + 6, args + 12, matmul.arg1_data());
  matmul.Run();

  // Check result
  if (matmul.result0(0, 0) == 58) {
    std::cout << "Success" << std::endl;
  } else {
    std::cout << "Failed. Expected value 58 at 0,0. Got:"
              << matmul.result0(0, 0) << std::endl;
  }

  return 0;
}

cc_binary in BUILD file:

cc_binary(
    name = "my_binary",
    srcs = ["my_code.cc"],
    deps = [
        "//tensorflow/compiler/aot/tests:test_graph_tfmatmul",
        "//third_party/eigen3",
    ],
    linkopts = ["-lpthread",]
)
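
(For reference, the ":test_graph_tfmatmul" dependency above assumes a matching tf_library rule in the same BUILD file; the tutorial's version, which also appears in full later in this thread, looks like this:)

# tf_library rule from the tutorial that the cc_binary dep refers to.
tf_library(
    name = "test_graph_tfmatmul",
    cpp_class = "foo::bar::MatMulComp",
    graph = "test_graph_tfmatmul.pb",
    config = "test_graph_tfmatmul.config.pbtxt",
)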
@asimshankar
Contributor

Could you share the complete contents of the BUILD file and the full error message? From the snippet you've shared, it seems that it is having trouble building the target //tensorflow/compiler/aot/tests:tfcompile_test, which suggests some unexpected change to the BUILD file.

Also, please fill in all information asked for in the issue template. In this particular case, details about the exact version of the source code you're using to build would be helpful.

FYI @tatatodd

@asimshankar added the stat:awaiting response and type:support labels on Oct 4, 2017
@martozzz
Author

martozzz commented Oct 4, 2017

@asimshankar Sorry for the misleading message earlier. The full error message is:

ERROR: /home/tensorFlow_src/tensorflow/tensorflow/compiler/aot/tests/BUILD:128:1: undeclared inclusion(s) in rule '//tensorflow/compiler/aot/tests:my_binary':
this rule is missing dependency declarations for the following files included by 'tensorflow/compiler/aot/tests/my_code.cc':
  '/home/tensorFlow_src/tensorflow/tensorflow/compiler/aot/tests/test_graph_tfmatmul.h'
Target //tensorflow/compiler/aot/tests:my_binary failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 1.467s, Critical Path: 1.14s
FAILED: Build did NOT complete successfully

The complete BUILD file:

licenses(["notice"])  # Apache 2.0

package(
    default_visibility = ["//visibility:private"],
)

load("//tensorflow/compiler/aot:tfcompile.bzl", "tf_library")
load("//tensorflow:tensorflow.bzl", "tf_cc_test")

test_suite(
    name = "all_tests",
    tags = ["manual"],
    tests = [
        ":test_graph_tfadd_test",
        ":test_graph_tfadd_with_ckpt_saver_test",
        ":test_graph_tfadd_with_ckpt_test",
        ":test_graph_tffunction_test",
        ":test_graph_tfgather_test",
        ":test_graph_tfmatmul_test",
        ":test_graph_tfmatmulandadd_test",
        ":test_graph_tfsplits_test",
        ":tfcompile_test",
    ],
)

py_binary(
    name = "make_test_graphs",
    testonly = 1,
    srcs = ["make_test_graphs.py"],
    srcs_version = "PY2AND3",
    deps = [
        "//tensorflow/core:protos_all_py",
        "//tensorflow/python",  # TODO(b/34059704): remove when fixed
        "//tensorflow/python:array_ops",
        "//tensorflow/python:client",
        "//tensorflow/python:framework_for_generated_wrappers",
        "//tensorflow/python:math_ops",
        "//tensorflow/python:platform",
        "//tensorflow/python:session",
        "//tensorflow/python:training",
        "//tensorflow/python:variables",
    ],
)

genrule(
    name = "gen_test_graphs",
    testonly = 1,
    outs = [
        "test_graph_tfadd.pb",
        "test_graph_tfadd_with_ckpt.ckpt",
        "test_graph_tfadd_with_ckpt.pb",
        "test_graph_tfadd_with_ckpt_saver.ckpt",
        "test_graph_tfadd_with_ckpt_saver.pb",
        "test_graph_tfadd_with_ckpt_saver.saver",
        "test_graph_tffunction.pb",
        "test_graph_tfgather.pb",
        "test_graph_tfmatmul.pb",
        "test_graph_tfmatmulandadd.pb",
        "test_graph_tfsplits.pb",
    ],
    cmd = "$(location :make_test_graphs) --out_dir $(@D)",
    tags = ["manual"],
    tools = [":make_test_graphs"],
)

tf_library(
    name = "test_graph_tfadd",
    testonly = 1,
    config = "test_graph_tfadd.config.pbtxt",
    cpp_class = "AddComp",
    graph = "test_graph_tfadd.pb",
    # This serves as a test for the list of minimal deps included even when
    # include_standard_runtime_deps is False.  If this target fails to
    # compile but the others in this directory succeed, you may need to
    # expand the "required by all tf_library targets" list in tfcompile.bzl.
    include_standard_runtime_deps = False,
    tags = ["manual"],
)

tf_library(
    name = "test_graph_tfadd_with_ckpt",
    testonly = 1,
    config = "test_graph_tfadd_with_ckpt.config.pbtxt",
    cpp_class = "AddWithCkptComp",
    freeze_checkpoint = "test_graph_tfadd_with_ckpt.ckpt",
    graph = "test_graph_tfadd_with_ckpt.pb",
    tags = ["manual"],
)

tf_library(
    name = "test_graph_tfadd_with_ckpt_saver",
    testonly = 1,
    config = "test_graph_tfadd_with_ckpt.config.pbtxt",
    cpp_class = "AddWithCkptSaverComp",
    freeze_checkpoint = "test_graph_tfadd_with_ckpt_saver.ckpt",
    freeze_saver = "test_graph_tfadd_with_ckpt_saver.saver",
    graph = "test_graph_tfadd_with_ckpt_saver.pb",
    tags = ["manual"],
)

tf_library(
    name = "test_graph_tffunction",
    testonly = 1,
    config = "test_graph_tffunction.config.pbtxt",
    cpp_class = "FunctionComp",
    graph = "test_graph_tffunction.pb",
    tags = ["manual"],
)

tf_library(
    name = "test_graph_tfgather",
    testonly = 1,
    config = "test_graph_tfgather.config.pbtxt",
    cpp_class = "GatherComp",
    graph = "test_graph_tfgather.pb",
    tags = ["manual"],
)

tf_library(
    name = "test_graph_tfmatmul",
    #testonly = 1,
    config = "test_graph_tfmatmul.config.pbtxt",
    cpp_class = "foo::bar::MatMulComp",
    graph = "test_graph_tfmatmul.pb",
    #tags = ["manual"],
)

cc_binary(
    name = "my_binary",
    srcs = ["my_code.cc"],  # include test_graph_tfmatmul.h to access the generated header
    deps = [
        "//tensorflow/compiler/aot/tests:test_graph_tfmatmul",  # link in the generated object file
        "//third_party/eigen3",
    ],
    linkopts = [
        "-lpthread",
    ]
)

tf_library(
    name = "test_graph_tfmatmulandadd",
    testonly = 1,
    config = "test_graph_tfmatmulandadd.config.pbtxt",
    cpp_class = "MatMulAndAddComp",
    graph = "test_graph_tfmatmulandadd.pb",
    tags = ["manual"],
)

tf_library(
    name = "test_graph_tfsplits",
    testonly = 1,
    config = "test_graph_tfsplits.config.pbtxt",
    cpp_class = "SplitsComp",
    graph = "test_graph_tfsplits.pb",
    tags = ["manual"],
)

tf_cc_test(
    name = "tfcompile_test",
    srcs = ["tfcompile_test.cc"],
    tags = ["manual"],
    deps = [
        ":test_graph_tfadd",
        ":test_graph_tfadd_with_ckpt",
        ":test_graph_tfadd_with_ckpt_saver",
        ":test_graph_tffunction",
        ":test_graph_tfgather",
        ":test_graph_tfmatmul",
        ":test_graph_tfmatmulandadd",
        ":test_graph_tfsplits",
        "//tensorflow/core:test",
        "//tensorflow/core:test_main",
        "//third_party/eigen3",
    ],
)

#-----------------------------------------------------------------------------

filegroup(
    name = "all_files",
    srcs = glob(
        ["**/*"],
        exclude = [
            "**/METADATA",
            "**/OWNERS",
        ],
    ),
    visibility = ["//tensorflow:__subpackages__"],
)

The exact source version:
v1.3.0-rc1-2665-g242b0f1

@aselle removed the stat:awaiting response label on Oct 4, 2017
@martozzz
Author

martozzz commented Oct 5, 2017

@asimshankar @aselle FYI, I tried the latest source (version: v1.3.0-rc1-3000-g840dcae) this morning, and the error still exists. I simply followed the tutorial (https://www.tensorflow.org/performance/xla/tfcompile) without changing anything. In addition, the same error was also reported in #11726, where @ankitachandak initially hit the same build error:

ERROR: /local/mnt/workspace/ankitac/virtual/tensorflow/tensorflow/compiler/aot/tests/aot_project/BUILD:12:1: undeclared inclusion(s) in rule '//tensorflow/compiler/aot/tests/aot_project:test_graph_binary':
this rule is missing dependency declarations for the following files included by 'tensorflow/compiler/aot/tests/aot_project/test_graph.cc':
  '/local/mnt/workspace/ankitac/virtual/tensorflow/tensorflow/compiler/aot/tests/aot_project/test_graph_tfmatmul.h'.
Target //tensorflow/compiler/aot/tests/aot_project:test_graph_binary failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 5.092s, Critical Path: 1.34s

May I have your advice on this issue? Thanks.

@tatatodd
Contributor

@MartinZZZ What happens when you try to build the existing tfcompile_test?

blaze build //tensorflow/compiler/aot/tests:tfcompile_test

That corresponds to the following source file:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/aot/tests/tfcompile_test.cc

@tatatodd added the stat:awaiting response label on Oct 11, 2017
@martozzz
Author

@tatatodd Thanks for your suggestion.
I just built the existing tfcompile_test with the following command:

bazel build //tensorflow/compiler/aot/tests:tfcompile_test

and the error occurred as well:

ERROR: /home/tensorFlow_src/tensorflow/tensorflow/compiler/aot/tests/BUILD:147:1: undeclared inclusion(s) in rule '//tensorflow/compiler/aot/tests:tfcompile_test':
this rule is missing dependency declarations for the following files included by 'tensorflow/compiler/aot/tests/tfcompile_test.cc':
  '/home/tensorFlow_src/tensorflow/tensorflow/compiler/aot/tests/test_graph_tfmatmul.h'
Target //tensorflow/compiler/aot/tests:tfcompile_test failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 1433.945s, Critical Path: 50.08s
FAILED: Build did NOT complete successfully

Here is the BUILD file:

licenses(["notice"])  # Apache 2.0

package(
    default_visibility = ["//visibility:private"],
)

load("//tensorflow/compiler/aot:tfcompile.bzl", "tf_library")
load("//tensorflow:tensorflow.bzl", "tf_cc_test")
load("//tensorflow:tensorflow.bzl", "tf_cc_binary")

# Optional runtime utilities for use by code generated by tfcompile.
cc_library(
    name = "runtime",
    srcs = ["runtime.cc"],
    hdrs = ["runtime.h"],
    visibility = ["//visibility:public"],
    deps = [
        "//tensorflow/core:framework_lite",
    ],
)

tf_cc_test(
    name = "runtime_test",
    srcs = ["runtime_test.cc"],
    deps = [
        ":runtime",
        "//tensorflow/compiler/tf2xla:xla_local_runtime_context",
        "//tensorflow/core:framework",
        "//tensorflow/core:test",
        "//tensorflow/core:test_main",
    ],
)

# Don't depend on this directly; this is only used for the benchmark test
# generated by tf_library.
cc_library(
    name = "tf_library_test_main",
    testonly = 1,
    visibility = ["//visibility:public"],
    deps = ["//tensorflow/core:test_main"],
)

cc_library(
    name = "tfcompile_lib",
    srcs = [
        "codegen.cc",
        "compile.cc",
        "flags.cc",
    ],
    hdrs = [
        "codegen.h",
        "compile.h",
        "flags.h",
    ],
    deps = [
        ":runtime",  # needed by codegen to print aligned_buffer_bytes
        "//tensorflow/compiler/tf2xla",
        "//tensorflow/compiler/tf2xla:common",
        "//tensorflow/compiler/tf2xla:tf2xla_proto",
        "//tensorflow/compiler/tf2xla:tf2xla_util",
        "//tensorflow/compiler/tf2xla:xla_compiler",
        "//tensorflow/compiler/tf2xla/kernels:xla_cpu_only_ops",
        "//tensorflow/compiler/tf2xla/kernels:xla_ops",
        "//tensorflow/compiler/xla:shape_util",
        "//tensorflow/compiler/xla:statusor",
        "//tensorflow/compiler/xla:util",
        "//tensorflow/compiler/xla:xla_data_proto",
        "//tensorflow/compiler/xla/client:client_library",
        "//tensorflow/compiler/xla/client:compile_only_client",
        "//tensorflow/compiler/xla/service:compiler",
        "//tensorflow/compiler/xla/service/cpu:cpu_compiler",
        "//tensorflow/core:core_cpu",
        "//tensorflow/core:core_cpu_internal",
        "//tensorflow/core:framework",
        "//tensorflow/core:framework_internal",
        "//tensorflow/core:lib",
        "//tensorflow/core:protos_all_cc",
    ],
)

tf_cc_test(
    name = "codegen_test",
    srcs = ["codegen_test.cc"],
    data = ["codegen_test_h.golden"],
    deps = [
        ":tfcompile_lib",
        "//tensorflow/compiler/xla:shape_util",
        "//tensorflow/core:lib",
        "//tensorflow/core:test",
        "//tensorflow/core:test_main",
    ],
)

tf_cc_binary(
    name = "tfcompile",
    visibility = ["//visibility:public"],
    deps = [":tfcompile_main"],
)

cc_library(
    name = "tfcompile_main",
    srcs = ["tfcompile_main.cc"],
    visibility = ["//visibility:public"],
    deps = [
        ":tfcompile_lib",
        "//tensorflow/compiler/tf2xla:tf2xla_proto",
        "//tensorflow/compiler/tf2xla:tf2xla_util",
        "//tensorflow/compiler/xla/legacy_flags:debug_options_flags",
        "//tensorflow/compiler/xla/service:compiler",
        "//tensorflow/core:core_cpu",
        "//tensorflow/core:core_cpu_internal",
        "//tensorflow/core:framework",
        "//tensorflow/core:framework_internal",
        "//tensorflow/core:lib",
        "//tensorflow/core:protos_all_cc",
    ],
)

# NOTE: Most end-to-end tests are in the "tests" subdirectory, to ensure that
# tfcompile.bzl correctly handles usage from outside of the package that it is
# defined in.

# A simple test of tf_library from a text protobuf, mostly to enable the
# benchmark_test.
tf_library(
    name = "test_graph_tfadd",
    testonly = 1,
    config = "test_graph_tfadd.config.pbtxt",
    cpp_class = "AddComp",
    graph = "test_graph_tfadd.pbtxt",
    tags = ["manual"],
)

# A test of tf_library that includes a graph with an unknown op, but where
# the compilation works because the unknown op is not needed for the fetches.
tf_library(
    name = "test_graph_tfunknownop",
    testonly = 1,
    config = "test_graph_tfunknownop.config.pbtxt",
    cpp_class = "UnknownOpAddComp",
    graph = "test_graph_tfunknownop.pbtxt",
    tags = ["manual"],
)

# A test of tf_library that includes a graph with an unknown op, but where
# the compilation works because the op between the unknown op and the
# fetches is a feed.
tf_library(
    name = "test_graph_tfunknownop2",
    testonly = 1,
    config = "test_graph_tfunknownop2.config.pbtxt",
    cpp_class = "UnknownOpAddComp",
    graph = "test_graph_tfunknownop.pbtxt",
    tags = ["manual"],
)

# A test of tf_library that includes a graph with an unknown op, but where
# the compilation works because the unknown op is fed.
tf_library(
    name = "test_graph_tfunknownop3",
    testonly = 1,
    config = "test_graph_tfunknownop3.config.pbtxt",
    cpp_class = "UnknownOpAddComp",
    graph = "test_graph_tfunknownop.pbtxt",
    tags = ["manual"],
)

# Utility library for benchmark binaries, used by the *_benchmark rules that are
# added by the tfcompile bazel macro.
cc_library(
    name = "benchmark",
    srcs = ["benchmark.cc"],
    hdrs = ["benchmark.h"],
    visibility = ["//visibility:public"],
    deps = [
        # The purpose of the benchmark library is to support building an aot
        # binary with minimal dependencies, to demonstrate small binary sizes.
        #
        # KEEP THE DEPENDENCIES MINIMAL.
        "//tensorflow/core:framework_lite",
    ],
)

cc_library(
    name = "benchmark_extra_android",
    tags = [
        "manual",
        "notap",
    ],
    visibility = ["//visibility:public"],
)

tf_cc_test(
    name = "benchmark_test",
    srcs = ["benchmark_test.cc"],
    tags = ["manual"],
    deps = [
        ":benchmark",
        ":test_graph_tfadd",
        "//tensorflow/core:test",
        "//tensorflow/core:test_main",
    ],
)

test_suite(
    name = "all_tests",
    tags = ["manual"],
    tests = [
        ":benchmark_test",
        ":codegen_test",
        ":runtime_test",
        ":test_graph_tfadd_test",
        ":test_graph_tfunknownop2_test",
        ":test_graph_tfunknownop3_test",
        ":test_graph_tfunknownop_test",
        "//tensorflow/compiler/aot/tests:all_tests",
    ],
)

exports_files([
    "benchmark_main.template",  # used by tf_library(...,gen_benchmark=True)
    "test.cc",  # used by tf_library(...,gen_test=True)
])

# -----------------------------------------------------------------------------

filegroup(
    name = "all_files",
    srcs = glob(
        ["**/*"],
        exclude = [
            "**/METADATA",
            "**/OWNERS",
        ],
    ),
    visibility = ["//tensorflow:__subpackages__"],
)

The current TensorFlow version is v1.3.0-rc1-3000-g840dcae

May I have your advice?

@aselle removed the stat:awaiting response label on Oct 11, 2017
@tatatodd
Contributor

@MartinZZZ Can you try again, starting with a clean copy of the sources (without any of your changes), and running:

bazel build //tensorflow/compiler/aot/tests:tfcompile_test

If that still fails, there's either some issue with the way your bazel is set up, or something is going wrong elsewhere.

@tatatodd added the stat:awaiting response label on Oct 11, 2017
@martozzz
Author

@tatatodd Thanks for your suggestion. I tried a clean copy of the sources and built tfcompile_test successfully.

But the error still occurs when building tfmatmul according to the tutorial. Here is the detailed procedure; I hope to have your advice (the following four steps correspond to the four steps in the tutorial):

  • Step 1: Configure the subgraph. The config file already exists as test_graph_tfmatmul.config.pbtxt in the directory //tensorflow/compiler/aot/tests;

  • Step 2.1: Generate the graph file test_graph_tfmatmul.pb:

python3 ./make_test_graphs.py --out_dir=./

  • Step 2.2: Compile the graph using tfcompile:

~/tensorFlow_src/tensorflow/bazel-bin/tensorflow/compiler/aot/tfcompile --graph="./test_graph_tfmatmul.pb" --config="./test_graph_tfmatmul.config.pbtxt" --entry_point="test_graph_tfmatmul" --cpp_class="foo::bar::MatMulComp" --out_object="test_graph_tfmatmul.o" --out_header="test_graph_tfmatmul.h" --target_features="+avx2"

  • Step 3: Create a file named my_code.cc:

#define EIGEN_USE_THREADS
#define EIGEN_USE_CUSTOM_THREAD_POOL

#include <iostream>
#include "third_party/eigen3/unsupported/Eigen/CXX11/Tensor"
#include "tensorflow/compiler/aot/tests/test_graph_tfmatmul.h" // generated

int main(int argc, char** argv) {
    Eigen::ThreadPool tp(2);  // Size the thread pool as appropriate.
    Eigen::ThreadPoolDevice device(&tp, tp.NumThreads());

    foo::bar::MatMulComp matmul;
    matmul.set_thread_pool(&device);

    // Set up args and run the computation.
    const float args[12] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12};
    std::copy(args + 0, args + 6, matmul.arg0_data());
    std::copy(args + 6, args + 12, matmul.arg1_data());
    matmul.Run();

    // Check result
    if (matmul.result0(0, 0) == 58) {
        std::cout << "Success" << std::endl;
    } else {
        std::cout << "Failed. Expected value 58 at 0,0. Got:"
                    << matmul.result0(0, 0) << std::endl;
    }

    return 0;
}

  • Step 4.1: Create the BUILD file:

# Example of linking your binary
# Also see //third_party/tensorflow/compiler/aot/tests/BUILD
load("//tensorflow/compiler/aot:tfcompile.bzl", "tf_library")

# The same tf_library call from step 2 above.
tf_library(
    name = "test_graph_tfmatmul",
    cpp_class = "foo::bar::MatMulComp",
    graph = "test_graph_tfmatmul.pb",
    config = "test_graph_tfmatmul.config.pbtxt",
)

# The executable code generated by tf_library can then be linked into your code.
cc_binary(
    name = "my_binary",
    srcs = [
        "my_code.cc",  # include test_graph_tfmatmul.h to access the generated header
    ],
    deps = [
        ":test_graph_tfmatmul",  # link in the generated object file
        "//third_party/eigen3",
    ],
    linkopts = [
        "-lpthread",
    ]
)

  • Step 4.2: Create the final binary:

bazel build --config=opt --copt=-mavx2 --copt=-mfma --config=mkl --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" //tensorflow/compiler/aot/tests:my_binary

Finally, the build fails with:

ERROR: /home/tensorFlow_clean/tensorflow/tensorflow/compiler/aot/tests/BUILD:14:1: undeclared inclusion(s) in rule '//tensorflow/compiler/aot/tests:my_binary':
this rule is missing dependency declarations for the following files included by 'tensorflow/compiler/aot/tests/my_code.cc':
  '/home/tensorFlow_clean/tensorflow/tensorflow/compiler/aot/tests/test_graph_tfmatmul.h'
Target //tensorflow/compiler/aot/tests:my_binary failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 7.339s, Critical Path: 6.69s
FAILED: Build did NOT complete successfully

@aselle removed the stat:awaiting response label on Oct 11, 2017
@tatatodd
Contributor

@MartinZZZ If tfcompile_test builds successfully from clean sources, your edits must be causing the problem.

I think the problem is your steps 2.1 and 2.2. These steps conflict with the test_graph_tfmatmul.pb, test_graph_tfmatmul.h, and test_graph_tfmatmul.o files that already exist.

In particular, your step 2.2 doesn't actually exist in the tutorial. Step 2 of the tutorial is saying that tf_library will run tfcompile for you to generate the header and object files.

Start over from clean sources, skip your steps 2.1 and 2.2, and see if that helps.
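
Put together, the intended flow would look roughly like this (a sketch based on the comments above; it assumes the tf_library and cc_binary rules from step 4.1 are in //tensorflow/compiler/aot/tests/BUILD and that no hand-generated test_graph_tfmatmul files are left in the source tree):

# Build the cc_binary; Bazel runs tfcompile via the tf_library rule and
# produces test_graph_tfmatmul.h / test_graph_tfmatmul.o as build outputs.
bazel build //tensorflow/compiler/aot/tests:my_binary

# Run the resulting binary; per my_code.cc above it should print "Success".
./bazel-bin/tensorflow/compiler/aot/tests/my_binary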

@tatatodd added the stat:awaiting response label on Oct 11, 2017
@martozzz
Author

martozzz commented Oct 11, 2017

@tatatodd I skipped Step 2.2 and it worked. Thanks so much for your help :)

By the way, it no longer complains about the unused SIMD instructions mentioned in issue #13500 either (same procedure as in this issue). Does that mean the TensorFlow binary is now compiled to use these instructions?
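
(For reference, target features can also be requested through the tf_library rule instead of a hand-run tfcompile; a rough sketch, assuming the tfcompile_flags attribute is available in this version of tfcompile.bzl:)

# Hypothetical sketch: request AVX2 codegen from tfcompile via tf_library.
# The tfcompile_flags attribute is an assumption, not confirmed in this thread.
tf_library(
    name = "test_graph_tfmatmul",
    config = "test_graph_tfmatmul.config.pbtxt",
    cpp_class = "foo::bar::MatMulComp",
    graph = "test_graph_tfmatmul.pb",
    tfcompile_flags = "--target_features=+avx2",
)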

@tatatodd
Contributor

@MartinZZZ great, I'm glad we figured out the problem! :)

I'll comment on #13500 separately on that issue.
