
[BUG] Out-of-Bounds Read in DecodeBmpOp class (tensorflow/core/kernels/decode_bmp_op.cc) #14959

Closed
zhangbo5891001 opened this issue Nov 29, 2017 · 3 comments · Fixed by #14967
Labels
stat:contribution welcome Status - Contributions welcome

Comments

@zhangbo5891001

zhangbo5891001 commented Nov 29, 2017


System information

  • The following is the output of tf_env_collect.sh:
    == cat /etc/issue ===============================================
    Linux ubuntu 4.4.0-31-generic #50~14.04.1-Ubuntu SMP Wed Jul 13 01:07:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
    VERSION="14.04.5 LTS, Trusty Tahr"
    VERSION_ID="14.04"

== are we in docker =============================================
No

== compiler =====================================================
c++ (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

== uname -a =====================================================
Linux ubuntu 4.4.0-31-generic #50~14.04.1-Ubuntu SMP Wed Jul 13 01:07:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

== check pips ===================================================
numpy (1.13.3)
protobuf (3.4.0)
tensorflow (1.4.0)
tensorflow-tensorboard (0.4.0rc2)

== check for virtualenv =========================================
True

== tensorflow import ============================================
tf.VERSION = 1.4.0
tf.GIT_VERSION = v1.4.0-rc1-11-g130a514
tf.COMPILER_VERSION = v1.4.0-rc1-11-g130a514
Sanity check: array([1], dtype=int32)

== env ==========================================================
LD_LIBRARY_PATH is unset
DYLD_LIBRARY_PATH is unset

== nvidia-smi ===================================================
tensorflow/tools/tf_env_collect.sh: line 105: nvidia-smi: command not found

== cuda libs ===================================================

Describe the problem

The DecodeBmpOp class, defined in tensorflow/core/kernels/decode_bmp_op.cc, is the decoder for BMP files. When handling a BMP file, the decoder does not validate the metadata in the BMP header, such as header_size, width, and height. This causes an out-of-bounds read in DecodeBmpOp::Decode or DecodeBmpOp::Compute. Given a malicious BMP file, a program using this API will crash.
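A fix needs to validate those untrusted header fields against the amount of data actually read before allocating the output buffer and walking the pixel rows. The helper below is only a sketch of that idea, with a hypothetical name and signature; it is not the actual patch in #14967:

    #include <cstdint>
    #include <cstdlib>

    // Hypothetical check: accept the header only if the declared dimensions are
    // consistent with the number of bytes actually read from the file.
    bool BmpHeaderLooksValid(int32_t header_size, int32_t width, int32_t height,
                             int32_t channels, int64_t input_size) {
      if (header_size < 0 || header_size > input_size) return false;
      if (width <= 0 || height == 0 || channels <= 0) return false;
      // BMP rows are padded to a multiple of 4 bytes; height may be negative
      // for top-down bitmaps, so use its magnitude.
      const int64_t row_size = (static_cast<int64_t>(channels) * width + 3) / 4 * 4;
      const int64_t required = static_cast<int64_t>(header_size) + row_size * std::llabs(height);
      return required <= input_size;
    }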

Source code / logs

  • Here is the crash call stack of the program:
    The BMP file in the attachment causes the crash.

Program received signal SIGSEGV, Segmentation fault.
0x0000000004200bc0 in tensorflow::DecodeBmpOp::Decode (this=this@entry=0x602400032a40, input=input@entry=0x601c000214ce "33", output=0x7fffcefcf800 "", width=width@entry=2336, height=height@entry=61727, channels=channels@entry=3, top_down=top_down@entry=false) at tensorflow/core/kernels/decode_bmp_op.cc:122
Program received signal SIGSEGV (fault address 0x601c19caaa10)
pwndbg> bt
#0 0x0000000004200bc0 in tensorflow::DecodeBmpOp::Decode (this=this@entry=0x602400032a40, input=input@entry=0x601c000214ce "33", output=0x7fffcefcf800 "", width=width@entry=2336, height=height@entry=61727, channels=channels@entry=3, top_down=top_down@entry=false) at tensorflow/core/kernels/decode_bmp_op.cc:122
#1 0x0000000004202d3b in tensorflow::DecodeBmpOp::Compute (this=0x602400032a40, context=) at tensorflow/core/kernels/decode_bmp_op.cc:88
#2 0x00007ffff3ed8880 in tensorflow::ThreadPoolDevice::Compute (this=, op_kernel=0x602400032a40, context=0x7fffffff8320) at tensorflow/core/common_runtime/threadpool_device.cc:59
#3 0x00007ffff3d47110 in tensorflow::(anonymous namespace)::ExecutorState::Process (this=, tagged_node=..., scheduled_usec=) at tensorflow/core/common_runtime/executor.cc:1652
#4 0x00007ffff3d4cc0c in operator() (__closure=) at tensorflow/core/common_runtime/executor.cc:2055
#5 std::_Function_handler<void(), tensorflow::(anonymous namespace)::ExecutorState::ScheduleReady(const TaggedNodeSeq&, tensorflow::(anonymous namespace)::ExecutorState::TaggedNodeReadyQueue*)::__lambda3>::_M_invoke(const std::_Any_data &) (__functor=...) at /usr/include/c++/4.8/functional:2071
#6 0x00007ffff3db2351 in operator() (this=0x7fffffff8790) at /usr/include/c++/4.8/functional:2471
#7 operator() (__closure=, c=...) at tensorflow/core/common_runtime/graph_runner.cc:146
#8 std::_Function_handler<void(std::function<void()>), tensorflow::GraphRunner::Run(tensorflow::Graph*, tensorflow::FunctionLibraryRuntime*, const NamedTensorList&, const std::vector<std::basic_string >&, std::vectortensorflow::Tensor*)::__lambda2>::_M_invoke(const std::_Any_data &, std::function<void()>) (__functor=..., __args#0=...) at /usr/include/c++/4.8/functional:2071
#9 0x00007ffff3ce0258 in operator() (__args#0=..., this=) at /usr/include/c++/4.8/functional:2471
#10 tensorflow::(anonymous namespace)::ExecutorState::ScheduleReady (this=0x6026000fd1a0, ready=..., inline_ready=0x0) at tensorflow/core/common_runtime/executor.cc:2055
#11 0x00007ffff3d00a0c in ScheduleReady (inline_ready=0x0, ready=..., this=) at tensorflow/core/common_runtime/executor.cc:2046
#12 RunAsync (done=, this=) at tensorflow/core/common_runtime/executor.cc:1439
#13 tensorflow::(anonymous namespace)::ExecutorImpl::RunAsync (this=this@entry=0x602400032f40, args=..., done=...) at tensorflow/core/common_runtime/executor.cc:2564
#14 0x00007ffff3db999f in Run (args=..., this=0x602400032f40) at ./tensorflow/core/common_runtime/executor.h:117
#15 tensorflow::GraphRunner::Run (this=this@entry=0x6004002d08f0, graph=graph@entry=0x603a00003140, function_library=function_library@entry=0x602400033800, inputs=std::vector of length 0, capacity 0, output_names=std::vector of length 1, capacity 1 = {...}, outputs=outputs@entry=0x7fffffffa810) at tensorflow/core/common_runtime/graph_runner.cc:174
#16 0x00007ffff3c4f36d in tensorflow::ConstantFold (opts=..., function_library=function_library@entry=0x602400033800, env=env@entry=0x60060000e140, partition_device=partition_device@entry=0x6024000395c0, graph=graph@entry=0x603a00003480, was_mutated=was_mutated@entry=0x7fffffffb260) at tensorflow/core/common_runtime/constant_folding.cc:603
#17 0x00007ffff3db02bd in tensorflow::GraphOptimizer::Optimize (this=this@entry=0x7fffffffbe90, runtime=runtime@entry=0x602400033800, env=0x60060000e140, device=0x6024000395c0, graph=graph@entry=0x60060038d2f0, shape_map=shape_map@entry=0x0) at tensorflow/core/common_runtime/graph_optimizer.cc:66
#18 0x000000000ad04984 in tensorflow::DirectSession::GetOrCreateExecutors (this=this@entry=0x604000007080, inputs=..., outputs=..., target_nodes=..., executors_and_keys=executors_and_keys@entry=0x7fffffffc4f0, run_state_args=run_state_args@entry=0x7fffffffc730) at tensorflow/core/common_runtime/direct_session.cc:1208
#19 0x000000000ad0d0f7 in tensorflow::DirectSession::Run (this=, run_options=..., inputs=std::vector of length 0, capacity 0, output_names=std::vector of length 1, capacity 1 = {...}, target_nodes=std::vector of length 0, capacity 0, outputs=0x0, run_metadata=0x0) at tensorflow/core/common_runtime/direct_session.cc:472
#20 0x000000000ad6fa95 in tensorflow::ClientSession::Run (this=this@entry=0x7fffffffdc30, run_options=..., inputs=std::unordered_map with 0 elements, fetch_outputs=std::vector of length 1, capacity 1 = {...}, run_outputs=std::vector of length 0, capacity 0, outputs=outputs@entry=0x0, run_metadata=run_metadata@entry=0x0) at tensorflow/cc/client/client_session.cc:127
#21 0x000000000ad74b6d in Run (outputs=0x0, run_outputs=std::vector of length 0, capacity 0, fetch_outputs=std::vector of length 1, capacity 1 = {...}, inputs=std::unordered_map with 0 elements, this=0x7fffffffdc30) at tensorflow/cc/client/client_session.cc:90
#22 tensorflow::ClientSession::Run (this=this@entry=0x7fffffffdc30, fetch_outputs=std::vector of length 1, capacity 1 = {...}, outputs=outputs@entry=0x0) at tensorflow/cc/client/client_session.cc:76
#23 0x000000000048042a in main (argc=1, argc@entry=2, argv=argv@entry=0x7fffffffe2e8) at tensorflow/examples/decode_image/main.cc:99
#24 0x00007ffff03c1f45 in __libc_start_main (main=0x47e4b0 <main(int, char**)>, argc=2, argv=0x7fffffffe2e8, init=, fini=, rtld_fini=, stack_end=0x7fffffffe2d8) at libc-start.c:287
#25 0x0000000000737eca in _start ()
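
The faulting frame makes the cause concrete: the attached file's header declares width=2336, height=61727, and channels=3, so the decoder expects roughly 432 MB of pixel data, while the attached file supplies far less, and the row loop reads far past the end of the input buffer. A quick back-of-the-envelope check (a sketch, assuming the usual 4-byte BMP row padding) illustrates the mismatch:

    #include <cstdint>
    #include <cstdio>

    int main() {
      // Dimensions taken from the crashing Decode frame above.
      const int64_t width = 2336, height = 61727, channels = 3;
      // BMP rows are padded up to a multiple of 4 bytes.
      const int64_t row_size = (channels * width + 3) / 4 * 4;  // 7008 bytes
      const int64_t pixel_bytes = row_size * height;            // ~432 MB
      std::printf("row_size=%lld bytes, pixel data=%lld bytes\n",
                  (long long)row_size, (long long)pixel_bytes);
      return 0;
    }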

  • Source code of the C++ program:
    #include <fstream>
    #include <utility>
    #include <vector>

#include "tensorflow/cc/ops/const_op.h"
#include "tensorflow/cc/ops/image_ops.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/graph/default_device.h"
#include "tensorflow/core/graph/graph_def_builder.h"
#include "tensorflow/core/lib/core/errors.h"
#include "tensorflow/core/lib/core/stringpiece.h"
#include "tensorflow/core/lib/core/threadpool.h"
#include "tensorflow/core/lib/io/path.h"
#include "tensorflow/core/lib/strings/stringprintf.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/platform/init_main.h"
#include "tensorflow/core/platform/logging.h"
#include "tensorflow/core/platform/types.h"
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/util/command_line_flags.h"
#include "tensorflow/cc/client/client_session.h"

// These are all common classes it's handy to reference with no namespace.
using tensorflow::Flag;
using tensorflow::Tensor;
using tensorflow::Status;
using tensorflow::string;
using tensorflow::int32;

int main(int argc, char* argv[]) {
  string image = "tensorflow/examples/label_image/data/grace_hopper.jpg";
  std::vector<Flag> flag_list = {
      Flag("image", &image, "image to be processed"),
  };
  string usage = tensorflow::Flags::Usage(argv[0], flag_list);
  const bool parse_result = tensorflow::Flags::Parse(&argc, argv, flag_list);
  if (!parse_result) {
    LOG(ERROR) << usage;
    return -1;
  }

  // We need to call this to set up global state for TensorFlow.
  tensorflow::port::InitMain(argv[0], &argc, &argv);
  if (argc > 1) {
    LOG(ERROR) << "Unknown argument " << argv[1] << "\n" << usage;
    return -1;
  }

  using namespace ::tensorflow::ops;

  auto root = tensorflow::Scope::NewRootScope();
  tensorflow::Output file_reader = ReadFile(root.WithOpName("input_image"), image);
  tensorflow::Output gif_reader = DecodeGif(root.WithOpName("gif_reader"), file_reader);
  tensorflow::Output bmp_reader = DecodeBmp(root.WithOpName("bmp_reader"), file_reader);
  tensorflow::Output jpeg_reader = DecodeJpeg(root.WithOpName("jpeg_reader"), file_reader);
  tensorflow::Output png_reader = DecodePng(root.WithOpName("png_reader"), file_reader);

  std::vector<tensorflow::Tensor> outputs;
  tensorflow::ClientSession session(root);

  session.Run({gif_reader}, nullptr);
  session.Run({bmp_reader}, nullptr);
  session.Run({jpeg_reader}, nullptr);
  session.Run({png_reader}, nullptr);

  return 0;
}

  • Source code of the Python program:
    import argparse
    import tensorflow as tf

    if __name__ == "__main__":
        file_name = "tensorflow/examples/label_image/data/grace_hopper.jpg"
        parser = argparse.ArgumentParser()
        parser.add_argument("--image", help="image to be processed")
        args = parser.parse_args()
        if args.image:
            file_name = args.image
        file_reader = tf.read_file(file_name, "file_reader")
        image_reader = tf.image.decode_bmp(file_reader, name='bmp_reader')
        sess = tf.Session()
        sess.run(image_reader)

evil.zip

@yongtang
Member

Added a PR #14967 to fix the crash.

@tatatodd tatatodd added the stat:contribution welcome Status - Contributions welcome label Nov 30, 2017
@tatatodd
Contributor

Thanks @yongtang! I'll close this out once your PR gets merged.

@Deydeq

Deydeq commented Feb 23, 2020

Duplicate of #
