
tensorflow / build

Build-related tools for TensorFlow

License: Apache License 2.0


build's Introduction


Documentation

TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML-powered applications.

TensorFlow was originally developed by researchers and engineers working within the Machine Intelligence team at Google Brain to conduct research in machine learning and neural networks. However, the framework is versatile enough to be used in other areas as well.

TensorFlow provides stable Python and C++ APIs, as well as a non-guaranteed backward compatible API for other languages.

Keep up-to-date with release announcements and security updates by subscribing to [email protected]. See all the mailing lists.

Install

See the TensorFlow install guide for the pip package, and for how to enable GPU support, use a Docker container, and build from source.

To install the current release, which includes support for CUDA-enabled GPU cards (Ubuntu and Windows):

$ pip install tensorflow

Other devices (DirectX and macOS Metal) are supported via device plugins.

A smaller CPU-only package is also available:

$ pip install tensorflow-cpu

To update TensorFlow to the latest version, add the --upgrade flag to the above commands.

Nightly binaries are available for testing using the tf-nightly and tf-nightly-cpu packages on PyPI.
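The choice among the package variants named above can be sketched as follows (the selection logic is a hypothetical illustration, and the pip command itself needs network access, so it is only echoed):

```shell
# Pick a package name based on the variants described above.
WANT_GPU=1    # 0 => CPU-only package
NIGHTLY=0     # 1 => tf-nightly packages from PyPI

PKG="tensorflow"
if [ "$WANT_GPU" -eq 0 ]; then PKG="tensorflow-cpu"; fi
if [ "$NIGHTLY" -eq 1 ]; then
  if [ "$WANT_GPU" -eq 1 ]; then PKG="tf-nightly"; else PKG="tf-nightly-cpu"; fi
fi

echo "pip install --upgrade $PKG"
```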

Try your first TensorFlow program

$ python
>>> import tensorflow as tf
>>> tf.add(1, 2).numpy()
3
>>> hello = tf.constant('Hello, TensorFlow!')
>>> hello.numpy()
b'Hello, TensorFlow!'

For more examples, see the TensorFlow tutorials.

Contribution guidelines

If you want to contribute to TensorFlow, be sure to review the contribution guidelines. This project adheres to TensorFlow's code of conduct. By participating, you are expected to uphold this code.

We use GitHub issues for tracking requests and bugs; please see the TensorFlow Forum for general questions and discussion, and direct specific questions to Stack Overflow.

The TensorFlow project strives to abide by generally accepted best practices in open-source software development.

Patching guidelines

Follow these steps to patch a specific version of TensorFlow, for example, to apply fixes to bugs or security vulnerabilities:

  • Clone the TensorFlow repo and switch to the corresponding branch for your desired TensorFlow version, for example, branch r2.8 for version 2.8.
  • Apply (that is, cherry-pick) the desired changes and resolve any code conflicts.
  • Run TensorFlow tests and ensure they pass.
  • Build the TensorFlow pip package from source.
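The steps above can be sketched as a shell sequence (the version and commit SHA are hypothetical placeholders, and the commands are echoed rather than executed):

```shell
# Dry-run sketch of the patching steps; drop the "echo" prefixes to run them.
TF_VERSION="2.8"
BRANCH="r${TF_VERSION}"   # branch rX.Y corresponds to version X.Y

echo "git clone https://github.com/tensorflow/tensorflow.git"
echo "git -C tensorflow checkout ${BRANCH}"
echo "git -C tensorflow cherry-pick <commit-sha-of-fix>"   # resolve any conflicts
echo "bazel test //tensorflow/..."                         # run the relevant tests
echo "bazel build //tensorflow/tools/pip_package:build_pip_package"
```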

Continuous build status

You can find more community-supported platforms and configurations in the TensorFlow SIG Build community builds table.

Official Builds

Build Type Status Artifacts
Linux CPU Status PyPI
Linux GPU Status PyPI
Linux XLA Status TBA
macOS Status PyPI
Windows CPU Status PyPI
Windows GPU Status PyPI
Android Status Download
Raspberry Pi 0 and 1 Status Py3
Raspberry Pi 2 and 3 Status Py3
Libtensorflow macOS CPU Status Temporarily Unavailable Nightly Binary Official GCS
Libtensorflow Linux CPU Status Temporarily Unavailable Nightly Binary Official GCS
Libtensorflow Linux GPU Status Temporarily Unavailable Nightly Binary Official GCS
Libtensorflow Windows CPU Status Temporarily Unavailable Nightly Binary Official GCS
Libtensorflow Windows GPU Status Temporarily Unavailable Nightly Binary Official GCS

Resources

Learn more about the TensorFlow community and how to contribute.

Courses

License

Apache License 2.0

build's People

Contributors

angerson, avditvs, beckerhe, bhack, chandrasekhard2, cramasam, dependabot[bot], deven-amd, hrw, itaiz-google, izuk, jayfurmanek, joycebrum, justkw, learning-to-play, michaelhudgins, mihaimaruseac, nitins17, npanpaliya, perfinion, pranve, rsanthanam-amd, rsdmse, sachinprasadhs, sampathweb, sanadani, shwetaoj, sub-mod, wamuir, wdirons

build's Issues

raspberry pi documentation wrong

Should tensorflow/tools/ci_build/ci_build.sh PI-PYTHON39 tensorflow/tools/ci_build/pi/build_raspberry_pi.sh AARCH64 instead be tensorflow/tools/ci_build/ci_build.sh PI -PYTHON39 tensorflow/tools/ci_build/pi/build_raspberry_pi.sh AARCH64?

Addition of Arm Software Developers DockerHub to "TensorFlow Containers" section of the README.

The "TensorFlow Containers" section of the Build README currently lists Arm Neoverse-N1 containers hosted on Linaro's Dockerhub.

Arm now has a DockerHub presence and has begun making TensorFlow images available at https://hub.docker.com/r/armswdev/tensorflow-arm-neoverse-n1; currently, builds for TF 2.3 with and without oneDNN are included. The intention is to update this repository at a regular cadence.

Could you please add a new row to this section of the README with:
Owner = "Arm"
Type = "TensorFlow AArch64 Neoverse-N1 CPU stable"
Status = "Static"
Artifacts = https://hub.docker.com/r/armswdev/tensorflow-arm-neoverse-n1

latest-jupyter (2.11.1) - error: command 'x86_64-linux-gnu-gcc' failed: No such file or directory

My build stopped working after the latest release.

Dockerfile:

FROM tensorflow/tensorflow:latest-jupyter
RUN pip install scipy
RUN pip install tensorflow-text
RUN pip install tf-models-official
RUN pip install tensorflow-addons

This is the error that I get at the step of installing tf-models-official:

#0 124.3 Building wheels for collected packages: kaggle, pycocotools, seqeval, promise
#0 124.3   Building wheel for kaggle (setup.py): started
#0 124.5   Building wheel for kaggle (setup.py): finished with status 'done'
#0 124.5   Created wheel for kaggle: filename=kaggle-1.5.13-py3-none-any.whl size=77717 sha256=a4837ea18a23a8198b0deff4ddf3aa0aca8d28e4224087ea8d5224816262d99b
#0 124.5   Stored in directory: /root/.cache/pip/wheels/e6/8e/67/e07554a720a493dc6b39b30488590ba92ed45448ad0134d253
#0 124.6   Building wheel for pycocotools (pyproject.toml): started
#0 125.3   Building wheel for pycocotools (pyproject.toml): finished with status 'error'
#0 125.3   error: subprocess-exited-with-error
#0 125.3   
#0 125.3   × Building wheel for pycocotools (pyproject.toml) did not run successfully.
#0 125.3   │ exit code: 1
#0 125.3   ╰─> [20 lines of output]
#0 125.3       running bdist_wheel
#0 125.3       running build
#0 125.3       running build_py
#0 125.3       creating build
#0 125.3       creating build/lib.linux-x86_64-cpython-38
#0 125.3       creating build/lib.linux-x86_64-cpython-38/pycocotools
#0 125.3       copying pycocotools/__init__.py -> build/lib.linux-x86_64-cpython-38/pycocotools
#0 125.3       copying pycocotools/cocoeval.py -> build/lib.linux-x86_64-cpython-38/pycocotools
#0 125.3       copying pycocotools/mask.py -> build/lib.linux-x86_64-cpython-38/pycocotools
#0 125.3       copying pycocotools/coco.py -> build/lib.linux-x86_64-cpython-38/pycocotools
#0 125.3       running build_ext
#0 125.3       cythoning pycocotools/_mask.pyx to pycocotools/_mask.c
#0 125.3       building 'pycocotools._mask' extension
#0 125.3       creating build/temp.linux-x86_64-cpython-38
#0 125.3       creating build/temp.linux-x86_64-cpython-38/common
#0 125.3       creating build/temp.linux-x86_64-cpython-38/pycocotools
#0 125.3       x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -I/tmp/pip-build-env-_j3itgx7/overlay/lib/python3.8/site-packages/numpy/core/include -I./common -I/usr/include/python3.8 -c ./common/maskApi.c -o build/temp.linux-x86_64-cpython-38/./common/maskApi.o -Wno-cpp -Wno-unused-function -std=c99
#0 125.3       /tmp/pip-build-env-_j3itgx7/overlay/lib/python3.8/site-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /tmp/pip-install-bnls790y/pycocotools_3ce5931933fa4ef99b1f32ec26256829/pycocotools/_mask.pyx
#0 125.3         tree = Parsing.p_module(s, pxd, full_module_name)
#0 125.3       error: command 'x86_64-linux-gnu-gcc' failed: No such file or directory
#0 125.3       [end of output]
#0 125.3   
#0 125.3   note: This error originates from a subprocess, and is likely not a problem with pip.
#0 125.3   ERROR: Failed building wheel for pycocotools
#0 125.3   Building wheel for seqeval (setup.py): started
#0 125.7   Building wheel for seqeval (setup.py): finished with status 'done'
#0 125.7   Created wheel for seqeval: filename=seqeval-1.2.2-py3-none-any.whl size=16165 sha256=56b1200949f6d3c324e5a91a1703db2b9effa2190b17179bac51bff412025d85
#0 125.7   Stored in directory: /root/.cache/pip/wheels/ad/5c/ba/05fa33fa5855777b7d686e843ec07452f22a66a138e290e732
#0 125.7   Building wheel for promise (setup.py): started
#0 126.0   Building wheel for promise (setup.py): finished with status 'done'
#0 126.0   Created wheel for promise: filename=promise-2.3-py3-none-any.whl size=21484 sha256=f2167a7e7e14777805719f7546799e6a47b07c7a97504e9069729228b8bade82
#0 126.0   Stored in directory: /root/.cache/pip/wheels/54/aa/01/724885182f93150035a2a91bce34a12877e8067a97baaf5dc8
#0 126.0 Successfully built kaggle seqeval promise
#0 126.0 Failed to build pycocotools
#0 126.0 ERROR: Could not build wheels for pycocotools, which is required to install pyproject.toml-based projects

It works if I pin to 2.11.0.
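A hedged workaround sketch for the Dockerfile above, on the assumption that the failure comes from gcc being absent in the latest-jupyter (2.11.1) image: either pin the base tag as the reporter did, or install a C toolchain before the pip installs.

```dockerfile
# Option A: pin the base image to the last known-good release.
FROM tensorflow/tensorflow:2.11.0-jupyter

# Option B (with latest-jupyter instead): install the C toolchain that
# pycocotools needs to compile its extension module.
# RUN apt-get update && apt-get install -y --no-install-recommends \
#         gcc python3-dev && rm -rf /var/lib/apt/lists/*

RUN pip install scipy tensorflow-text tf-models-official tensorflow-addons
```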

Proposal: Making better usage of this repository

@perfinion and I have been stuck on making the most of this tensorflow/build repo, so I put together a little policy proposal that should help our goals:

  1. Showcase and emphasize community build-related projects and resources
  2. Require minimal upkeep and decision-making from the repo owners

With those in mind, consider these new rules defining what goes into the repo:

Repo Guidelines

  1. The repo's README is a list of items that can be one of two things:
    1. Link to an external repository with a brief description
    2. Description of a subfolder in the repo.
  2. Subfolders can be either:
    1. A small piece of code, a guide, or documentation (Why not a wiki? GitHub wikis don't have PRs or ownership.) -- anything, but it should be small.
      • The README's header includes the same subfolder description and attribution as in the root README.
      • The attribution matches up with an entry in the root CODEOWNERS file for the directory.
      • CI: If CI is important we'd prefer a separate repo be used, but small things like GitHub actions should be OK. We'll have to see some examples before making a decision on this.
    2. A collection of subfolders with a common theme, e.g. a bunch of independent Docker projects.
      • All root README descriptions will be moved into this folder's README.
      • This isn't necessary until many similar folders are added.

With this, the repo can hold anything, in a simple structure, with clear ownership, like Windows Subsystem for Linux guides or the Dockerfiles we currently have or SIG Build-official guides for resource usage (once we get around to them...). We'll have to rearrange a couple of things, but nothing major. It'll be a nice way to formalize a showcase & guide.

I think we can use this to similarly extract more community stuff from the main tensorflow/tensorflow repo, which will let us feature more community projects. We'll leave the Community Builds table on the main README until we see a reason to migrate it (like if this plan works really well).

What are your thoughts? Leave a comment and/or join us in Dec. 1st's meeting (see http://bit.ly/tf-sig-build-notes).

Add the OpenSSF Scorecard Github Action

Hi! I'm here on behalf of Google, working to improve the supply-chain security of open-source projects. I would like to suggest the adoption of the Scorecard in your repository.

The Scorecard system combines dozens of automated checks to let maintainers better understand their project's supply-chain security posture. It is developed by the OpenSSF, a non-profit foundation dedicated to improving the security of the open-source community.

Considering that TensorFlow is a community group and receives contributions from the community itself, the Scorecard could help guarantee that the repository and the contribution process are safe from malicious sabotage.

With that in mind, and to make the process easier to replicate in GitHub projects, the OpenSSF has also developed the Scorecard GitHub Action, which adds the results of its checks to the project's security dashboard, as well as suggestions on how to solve any issues (see examples below). This Action has been adopted by 1600+ projects already.

For more about the Scorecard GitHub Action, see Reducing security risk in open source software with GitHub Actions and OpenSSF Scorecards V4.

Would you be interested in a PR which adds this Action? Optionally, it can also publish your results to the OpenSSF REST API, which allows a badge with the project's score to be added to its README.

Code scanning dashboard with multiple alerts, including Code-Review and Token-Permissions

Detail of a Token-Permissions alert, indicating the specific file and remediation steps

Dashboard improvement ideas

(old and completed ideas have been removed from this list)

  • Add toggle to use shapes instead of squares
  • Add pending status support for Kokoro jobs (blocked by Kokoro-side, can't do much here)
  • Maybe? If a day only has one job, just squash the status dot into one single day badge
  • Normalize job titles
  • Long-term: Add PR presubmit statuses
  • Long-term: Make usable for other repositories

Container Road Map

Road Map for Docker Containers

This is the same roadmap document that I'm using internally, with the internal bits taken out.

I am forcing these containers to get continuous support by using them for TF's internal CI: if they don't work, then our tests don't work. While I'm getting that ready during Q4 and Q1, I'm explicitly avoiding features that the TF team is not going to use; those would be dead on arrival unless we set up more testing for them, which I don't have the cycles to consider yet.

TF Nightly Milestone - Q4 Q1

Goal: Replicable container build of our tf-nightly Ubuntu packages

  • Containers can build tf-nightly package
  • SIG Build repository explains how to build tf-nightly package in Containers
  • Documentation exists on how to make changes to the containers
  • Suite of Kokoro jobs exists that publishes 80%-identical-to-now tf-nightly via containers
  • TF-nightly is officially built with the new containers
  • Documentation exists on how to use and debug containers

Release Test Milestone - Q4 Q1

Goal: Replicable container builds of our release tests, supporting each release

  • Containers can run same-as-now Nightly release tests
  • SIG Build repository explains how to run release tests as we do
  • Suite of CI jobs exists that matches current rel/nightly jobs
  • Existing release jobs replaced (but reversible if needed) by Container-based equivalent
  • Containers may be maintained and updated separately for TF release branches
  • Containers used for nightly/release libtensorflow and ci_sanity (now "code_check") jobs

CI & RBE Milestone - Q4 Q1/Q2

Goal: The main tests and our RBE tests use the same Docker container, updated in one place

  • Containers support internal presubmit/continuous build behavior
  • Containers are used in internal buildcop-monitored, DevInfra-owned presubmit/continuous jobs
  • Containers can be used in RBE
  • Containers are used as RBE environment for internal buildcop-monitored, DevInfra-owned jobs
  • DevInfra's GitHub-side presubmit tests use the containers
  • Containers are published on gcr.io
  • There is an easy way to verify if a change to the containers will not break the whole internal test suite

Forward Planning Milestone - Q2

Goal: Establish clear plan for any future work related to these containers. This is internal team planning stuff so I've removed it.

Downstream & OSS Milestone - Q2/Q3

Goal: Downstream users and custom-op developers use the same containers as our CI

  • SIG Addons / SIG IO use these Containers (or derivative) instead of old custom-op ones
  • Custom-op documentation migrated to SIG Build repository
  • Resolve: what to do about inconvenient default packages for e.g. SIG Addons (keras-nightly, etc.)
  • Resolve: what to do about inconveniently large image sizes for e.g. GPU content not needed
  • Docker-related documentation on tensorflow.org replaced with these containers
  • "devel" containers deprecated in favor of SIG Build containers

Non-existent archive

I am getting the following issue while building TF in the Docker image.

Command:

sudo docker exec tf bazel build --jobs 26 //tensorflow/tools/pip_package:build_pip_package

Log:
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
INFO: Options provided by the client:
  Inherited 'common' options: --isatty=0 --terminal_columns=80
INFO: Reading rc options for 'build' from /tf/tensorflow/.bazelrc:
  Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'build' from /tf/tensorflow/.bazelrc:
  'build' options: --define framework_shared_object=true --java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --host_java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true --experimental_cc_shared_library
INFO: Reading rc options for 'build' from /tf/tensorflow/.tf_configure.bazelrc:
  'build' options: --action_env PYTHON_BIN_PATH=/usr/bin/python3 --action_env PYTHON_LIB_PATH=/usr/lib/python3/dist-packages --python_path=/usr/bin/python3
INFO: Reading rc options for 'build' from /tf/tensorflow/.bazelrc:
  'build' options: --deleted_packages=tensorflow/compiler/mlir/tfrt,tensorflow/compiler/mlir/tfrt/benchmarks,tensorflow/compiler/mlir/tfrt/jit/python_binding,tensorflow/compiler/mlir/tfrt/jit/transforms,tensorflow/compiler/mlir/tfrt/python_tests,tensorflow/compiler/mlir/tfrt/tests,tensorflow/compiler/mlir/tfrt/tests/analysis,tensorflow/compiler/mlir/tfrt/tests/jit,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_tfrt,tensorflow/compiler/mlir/tfrt/tests/tf_to_corert,tensorflow/compiler/mlir/tfrt/tests/tf_to_tfrt_data,tensorflow/compiler/mlir/tfrt/tests/saved_model,tensorflow/compiler/mlir/tfrt/transforms/lhlo_gpu_to_tfrt_gpu,tensorflow/core/runtime_fallback,tensorflow/core/runtime_fallback/conversion,tensorflow/core/runtime_fallback/kernel,tensorflow/core/runtime_fallback/opdefs,tensorflow/core/runtime_fallback/runtime,tensorflow/core/runtime_fallback/util,tensorflow/core/tfrt/common,tensorflow/core/tfrt/eager,tensorflow/core/tfrt/eager/backends/cpu,tensorflow/core/tfrt/eager/backends/gpu,tensorflow/core/tfrt/eager/core_runtime,tensorflow/core/tfrt/eager/cpp_tests/core_runtime,tensorflow/core/tfrt/fallback,tensorflow/core/tfrt/gpu,tensorflow/core/tfrt/run_handler_thread_pool,tensorflow/core/tfrt/runtime,tensorflow/core/tfrt/saved_model,tensorflow/core/tfrt/saved_model/tests,tensorflow/core/tfrt/tpu,tensorflow/core/tfrt/utils
INFO: Found applicable config definition build:short_logs in file /tf/tensorflow/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /tf/tensorflow/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:linux in file /tf/tensorflow/.bazelrc: --copt=-w --host_copt=-w --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++14 --host_cxxopt=-std=c++14 --config=dynamic_kernels --distinct_host_configuration=false --experimental_guard_against_concurrent_changes
INFO: Found applicable config definition build:dynamic_kernels in file /tf/tensorflow/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
Loading: 0 packages loaded
WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/tensorflow/runtime/archive/3e404ce1177fd189b28f0a84e75a43619dad01bd.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
Loading: 0 packages loaded

I have tried to download the archive with wget, but no luck: error 404 (both locally and in the Docker container).
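One way to diagnose such failures is to strip the TensorFlow mirror prefix and try the upstream URL directly, since the mirror simply caches upstream archives. A small stdlib-only helper, assuming the standard mirror.tensorflow.org prefix seen in the warning above:

```python
MIRROR_PREFIX = "https://storage.googleapis.com/mirror.tensorflow.org/"

def upstream_url(url: str) -> str:
    """Strip the TensorFlow mirror prefix to recover the upstream URL."""
    if url.startswith(MIRROR_PREFIX):
        return "https://" + url[len(MIRROR_PREFIX):]
    return url

print(upstream_url(
    "https://storage.googleapis.com/mirror.tensorflow.org/"
    "github.com/tensorflow/runtime/archive/3e404ce1177fd189b28f0a84e75a43619dad01bd.tar.gz"
))
# → https://github.com/tensorflow/runtime/archive/3e404ce1177fd189b28f0a84e75a43619dad01bd.tar.gz
```

If the upstream URL also returns 404 (as reported here), the referenced commit archive is genuinely gone and the pinned dependency itself needs updating.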

A bug with protobuf when following the Golang install guide

I am following the Golang install guide; I want to use TensorFlow in Golang.
When I try to run "go generate" in "~/gopath/src/github.com/tensorflow/tensorflow/tensorflow/go/op",
there are some errors about protobuf:

google/protobuf/any.proto: File not found.
google/protobuf/duration.proto: File not found.
google/protobuf/wrappers.proto: File not found.
tensorflow/stream_executor/dnn.proto: Import "google/protobuf/wrappers.proto" was not found or had errors.
tensorflow/stream_executor/dnn.proto:130:3: "google.protobuf.UInt64Value" is not defined.
tensorflow/core/protobuf/autotuning.proto: Import "google/protobuf/any.proto" was not found or had errors.
tensorflow/core/protobuf/autotuning.proto: Import "google/protobuf/duration.proto" was not found or had errors.
tensorflow/core/protobuf/autotuning.proto: Import "tensorflow/stream_executor/dnn.proto" was not found or had errors.
tensorflow/core/protobuf/autotuning.proto:55:7: "stream_executor.dnn.AlgorithmProto" is not defined.
tensorflow/core/protobuf/autotuning.proto:77:3: "google.protobuf.Duration" is not defined.
tensorflow/core/protobuf/autotuning.proto:85:5: "stream_executor.dnn.AlgorithmProto" is not defined.
tensorflow/core/protobuf/autotuning.proto:92:3: "google.protobuf.Any" is not defined.
../genop/generate.go:19: running "bash": exit status 1
generate.go:17: running "go": exit status 1

How can I solve this bug?
I am looking forward to your help.
Best,
lisa
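Errors like "google/protobuf/any.proto: File not found" usually mean protoc cannot see its own bundled well-known types. One plausible fix (the include path below is a hypothetical example; adjust it to your protoc installation, and the command is echoed rather than run):

```shell
# The google/protobuf/*.proto files ship with the protoc installation;
# point protoc at that include directory in addition to the repo root.
PROTOC_INCLUDE="/usr/local/include"   # should contain google/protobuf/any.proto
echo "protoc -I ${PROTOC_INCLUDE} -I . tensorflow/core/protobuf/autotuning.proto --go_out=."
```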

Devel images without source

Can we push devel images to Docker Hub without setting the CHECKOUT_TF_SRC=1 build arg?
Generally, to develop on SIGs you don't need a full TensorFlow source checkout, and we could recover 500+ MB just by skipping the source-code layer.
/cc @gunan
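Such a build could be sketched as follows, assuming the Dockerfile honors a CHECKOUT_TF_SRC build arg as described above (the docker command is echoed, since building needs the actual Dockerfile context; the image tag is a hypothetical placeholder):

```shell
# Build a devel image with the source-checkout layer skipped.
BUILD_ARG="CHECKOUT_TF_SRC=0"
echo "docker build --build-arg ${BUILD_ARG} -t tensorflow-devel-nosrc ."
```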

HTTP ERROR 404 Not Found

I get this error when I open the TensorFlow Builds page (Linux AMD ROCm GPU Stable: TF 2.x). Also, I believe the image used in the README didn't load.

URI: /job/tensorflow/job/release-rocmfork-r25-rocm-enhanced/job/rocm-4.2.0-python3x-whls/

golang: Cannot run in arm64 macOS M1; x86_64 architecture warnings

I was able to build TensorFlow from source, but now I cannot build a Go app that uses the TF library on a Mac with an M1 processor:


bazel build --config opt  //tensorflow/tools/lib_package:libtensorflow -c opt --macos_sdk_version=12.1 --config=opt //tensorflow:libtensorflow.so
sudo cp bazel-bin-tensorflow/libtensorflow* /usr/local/lib/.

make build
mkdir -p dist
# github.com/tensorflow/tensorflow/tensorflow/go
ld: warning: ignoring file /usr/local/lib/libtensorflow.so, building for macOS-x86_64 but attempting to link with file built for macOS-arm64
Undefined symbols for architecture x86_64:
  "_TFE_ContextListDevices", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TFE_ContextListDevices in _x003.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TFE_ContextListDevices)
  "_TFE_ContextOptionsSetAsync", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TFE_ContextOptionsSetAsync in _x003.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TFE_ContextOptionsSetAsync)
  "_TFE_ContextOptionsSetConfig", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TFE_ContextOptionsSetConfig in _x003.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TFE_ContextOptionsSetConfig)
  "_TFE_DeleteContext", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TFE_DeleteContext in _x003.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TFE_DeleteContextOptions, __cgo_5439f5b57bf2_Cfunc_TFE_DeleteContext )
  "_TFE_DeleteContextOptions", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TFE_DeleteContextOptions in _x003.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TFE_DeleteContextOptions)
  "_TFE_DeleteTensorHandle", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TFE_DeleteTensorHandle in _x012.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TFE_DeleteTensorHandle)
  "_TFE_NewContext", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TFE_NewContext in _x003.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TFE_NewContextOptions, __cgo_5439f5b57bf2_Cfunc_TFE_NewContext )
  "_TFE_NewContextOptions", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TFE_NewContextOptions in _x003.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TFE_NewContextOptions)
  "_TFE_NewTensorHandle", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TFE_NewTensorHandle in _x012.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TFE_NewTensorHandle)
  "_TFE_TensorHandleCopyToDevice", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TFE_TensorHandleCopyToDevice in _x012.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TFE_TensorHandleCopyToDevice)
  "_TFE_TensorHandleDataType", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TFE_TensorHandleDataType in _x012.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TFE_TensorHandleDataType)
  "_TFE_TensorHandleDeviceName", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TFE_TensorHandleDeviceName in _x012.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TFE_TensorHandleDeviceName)
  "_TFE_TensorHandleDim", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TFE_TensorHandleDim in _x012.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TFE_TensorHandleDim)
  "_TFE_TensorHandleNumDims", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TFE_TensorHandleNumDims in _x012.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TFE_TensorHandleNumDims)
  "_TFE_TensorHandleResolve", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TFE_TensorHandleResolve in _x012.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TFE_TensorHandleResolve)
  "_TF_AddControlInput", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_AddControlInput in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_AddControlInput)
  "_TF_AddGradientsWithPrefix", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_AddGradientsWithPrefix in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_AddGradientsWithPrefix)
  "_TF_AddInput", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_AddInput in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_AddInputList, __cgo_5439f5b57bf2_Cfunc_TF_AddInput )
  "_TF_AddInputList", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_AddInputList in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_AddInputList)
  "_TF_AllocateTensor", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_AllocateTensor in _x011.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_AllocateTensor)
  "_TF_CloseSession", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_CloseSession in _x008.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_CloseSession)
  "_TF_DeleteBuffer", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_DeleteBuffer in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_DeleteBuffer)
  "_TF_DeleteDeviceList", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_DeleteDeviceList in _x003.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_DeleteDeviceList)
  "_TF_DeleteGraph", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_DeleteGraph in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_DeleteGraph)
  "_TF_DeleteImportGraphDefOptions", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_DeleteImportGraphDefOptions in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_DeleteImportGraphDefOptions)
  "_TF_DeleteLibraryHandle", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_DeleteLibraryHandle in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_DeleteLibraryHandle)
  "_TF_DeletePRunHandle", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_DeletePRunHandle in _x008.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_DeletePRunHandle)
  "_TF_DeleteSession", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_DeleteSession in _x008.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_DeleteSessionOptions, __cgo_5439f5b57bf2_Cfunc_TF_DeleteSession )
  "_TF_DeleteSessionOptions", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_DeleteSessionOptions in _x008.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_DeleteSessionOptions)
  "_TF_DeleteStatus", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_DeleteStatus in _x010.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_DeleteStatus)
  "_TF_DeleteTensor", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_DeleteTensor in _x011.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_DeleteTensor)
  "_TF_DeviceListCount", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_DeviceListCount in _x008.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_DeviceListCount)
  "_TF_DeviceListMemoryBytes", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_DeviceListMemoryBytes in _x008.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_DeviceListMemoryBytes)
  "_TF_DeviceListName", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_DeviceListName in _x008.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_DeviceListName)
  "_TF_DeviceListType", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_DeviceListType in _x008.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_DeviceListType)
  "_TF_Dim", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_Dim in _x011.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_Dim)
  "_TF_FinishOperation", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_FinishOperation in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_FinishOperation)
  "_TF_GetCode", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_GetCode in _x010.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_GetCode)
  "_TF_GraphGetTensorNumDims", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_GraphGetTensorNumDims in _x006.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_GraphGetTensorNumDims)
  "_TF_GraphGetTensorShape", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_GraphGetTensorShape in _x006.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_GraphGetTensorShape)
  "_TF_GraphImportGraphDef", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_GraphImportGraphDef in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_GraphImportGraphDef)
  "_TF_GraphNextOperation", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_GraphNextOperation in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_GraphNextOperation)
  "_TF_GraphOperationByName", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_GraphOperationByName in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_GraphOperationByName)
  "_TF_GraphToGraphDef", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_GraphToGraphDef in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_GraphToGraphDef)
  "_TF_ImportGraphDefOptionsAddInputMapping", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_ImportGraphDefOptionsAddInputMapping in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_ImportGraphDefOptionsAddInputMapping)
  "_TF_ImportGraphDefOptionsSetDefaultDevice", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_ImportGraphDefOptionsSetDefaultDevice in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_ImportGraphDefOptionsSetDefaultDevice)
  "_TF_ImportGraphDefOptionsSetPrefix", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_ImportGraphDefOptionsSetPrefix in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_ImportGraphDefOptionsSetPrefix)
  "_TF_LoadLibrary", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_LoadLibrary in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_LoadLibrary)
  "_TF_LoadSessionFromSavedModel", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_LoadSessionFromSavedModel in _x007.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_LoadSessionFromSavedModel)
  "_TF_Message", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_Message in _x010.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_Message)
  "_TF_NewBuffer", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_NewBuffer in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_NewBuffer)
  "_TF_NewGraph", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_NewGraph in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_NewGraph)
  "_TF_NewImportGraphDefOptions", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_NewImportGraphDefOptions in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_NewImportGraphDefOptions)
  "_TF_NewOperation", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_NewOperation in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_NewOperation)
  "_TF_NewSession", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_NewSession in _x008.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_NewSession, __cgo_5439f5b57bf2_Cfunc_TF_NewSessionOptions )
  "_TF_NewSessionOptions", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_NewSessionOptions in _x008.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_NewSessionOptions)
  "_TF_NewStatus", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_NewStatus in _x010.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_NewStatus)
  "_TF_NumDims", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_NumDims in _x011.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_NumDims)
  "_TF_OperationDevice", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationDevice in _x006.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationDevice)
  "_TF_OperationGetAttrBool", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrBool in _x002.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrBoolList, __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrBool )
  "_TF_OperationGetAttrBoolList", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrBoolList in _x002.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrBoolList)
  "_TF_OperationGetAttrFloat", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrFloat in _x002.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrFloatList, __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrFloat )
  "_TF_OperationGetAttrFloatList", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrFloatList in _x002.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrFloatList)
  "_TF_OperationGetAttrInt", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrInt in _x002.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrIntList, __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrInt )
  "_TF_OperationGetAttrIntList", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrIntList in _x002.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrIntList)
  "_TF_OperationGetAttrMetadata", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrMetadata in _x002.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrMetadata)
  "_TF_OperationGetAttrShape", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrShape in _x002.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrShapeList, __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrShape )
  "_TF_OperationGetAttrShapeList", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrShapeList in _x002.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrShapeList)
  "_TF_OperationGetAttrString", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrString in _x002.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrString, __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrStringList )
  "_TF_OperationGetAttrStringList", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrStringList in _x002.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrStringList)
  "_TF_OperationGetAttrTensor", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrTensor in _x002.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrTensorList, __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrTensor )
  "_TF_OperationGetAttrTensorList", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrTensorList in _x002.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrTensorList)
  "_TF_OperationGetAttrType", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrType in _x002.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrType, __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrTypeList )
  "_TF_OperationGetAttrTypeList", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrTypeList in _x002.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationGetAttrTypeList)
  "_TF_OperationInput", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationInput in _x006.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationInputType, __cgo_5439f5b57bf2_Cfunc_TF_OperationInput )
  "_TF_OperationInputType", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationInputType in _x006.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationInputType)
  "_TF_OperationName", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationName in _x006.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationName)
  "_TF_OperationNumInputs", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationNumInputs in _x006.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationNumInputs)
  "_TF_OperationNumOutputs", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationNumOutputs in _x006.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationNumOutputs)
  "_TF_OperationOpType", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationOpType in _x006.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationOpType)
  "_TF_OperationOutputConsumers", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationOutputConsumers in _x006.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationOutputConsumers)
  "_TF_OperationOutputListLength", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationOutputListLength in _x006.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationOutputListLength)
  "_TF_OperationOutputNumConsumers", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationOutputNumConsumers in _x006.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationOutputNumConsumers)
  "_TF_OperationOutputType", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_OperationOutputType in _x006.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_OperationOutputType)
  "_TF_SessionListDevices", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_SessionListDevices in _x008.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_SessionListDevices)
  "_TF_SessionPRun", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_SessionPRun in _x008.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_SessionPRun, __cgo_5439f5b57bf2_Cfunc_TF_SessionPRunSetup )
  "_TF_SessionPRunSetup", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_SessionPRunSetup in _x008.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_SessionPRunSetup)
  "_TF_SessionRun", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_SessionRun in _x008.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_SessionRun)
  "_TF_SetAttrBool", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_SetAttrBool in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_SetAttrBoolList, __cgo_5439f5b57bf2_Cfunc_TF_SetAttrBool )
  "_TF_SetAttrBoolList", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_SetAttrBoolList in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_SetAttrBoolList)
  "_TF_SetAttrFloat", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_SetAttrFloat in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_SetAttrFloat, __cgo_5439f5b57bf2_Cfunc_TF_SetAttrFloatList )
  "_TF_SetAttrFloatList", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_SetAttrFloatList in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_SetAttrFloatList)
  "_TF_SetAttrInt", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_SetAttrInt in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_SetAttrInt, __cgo_5439f5b57bf2_Cfunc_TF_SetAttrIntList )
  "_TF_SetAttrIntList", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_SetAttrIntList in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_SetAttrIntList)
  "_TF_SetAttrShape", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_SetAttrShape in _x004.o
     (maybe you meant: _TF_SetAttrShapeList_Helper, __cgo_5439f5b57bf2_Cfunc_TF_SetAttrShapeList_Helper , __cgo_5439f5b57bf2_Cfunc_TF_SetAttrShapeList , __cgo_5439f5b57bf2_Cfunc_TF_SetAttrShape )
  "_TF_SetAttrShapeList", referenced from:
      _TF_SetAttrShapeList_Helper in _x004.o
      __cgo_5439f5b57bf2_Cfunc_TF_SetAttrShapeList in _x004.o
      __cgo_5439f5b57bf2_Cfunc_TF_SetAttrShapeList_Helper in _x004.o
     (maybe you meant: _TF_SetAttrShapeList_Helper, __cgo_5439f5b57bf2_Cfunc_TF_SetAttrShapeList_Helper , __cgo_5439f5b57bf2_Cfunc_TF_SetAttrShapeList )
  "_TF_SetAttrString", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_SetAttrString in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_SetAttrStringList, __cgo_5439f5b57bf2_Cfunc_TF_SetAttrString )
  "_TF_SetAttrStringList", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_SetAttrStringList in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_SetAttrStringList)
  "_TF_SetAttrTensor", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_SetAttrTensor in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_SetAttrTensor, __cgo_5439f5b57bf2_Cfunc_TF_SetAttrTensorList )
  "_TF_SetAttrTensorList", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_SetAttrTensorList in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_SetAttrTensorList)
  "_TF_SetAttrType", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_SetAttrType in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_SetAttrTypeList, __cgo_5439f5b57bf2_Cfunc_TF_SetAttrType )
  "_TF_SetAttrTypeList", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_SetAttrTypeList in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_SetAttrTypeList)
  "_TF_SetConfig", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_SetConfig in _x008.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_SetConfig)
  "_TF_SetDevice", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_SetDevice in _x004.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_SetDevice)
  "_TF_SetTarget", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_SetTarget in _x008.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_SetTarget)
  "_TF_TensorBitcastFrom", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_TensorBitcastFrom in _x011.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_TensorBitcastFrom)
  "_TF_TensorByteSize", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_TensorByteSize in _x011.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_TensorByteSize)
  "_TF_TensorData", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_TensorData in _x011.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_TensorData)
  "_TF_TensorType", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_TensorType in _x011.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_TensorType)
  "_TF_Version", referenced from:
      __cgo_5439f5b57bf2_Cfunc_TF_Version in _x013.o
     (maybe you meant: __cgo_5439f5b57bf2_Cfunc_TF_Version)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [build/binaries] Error 2
The linker is complaining because the generated .so file was not built for x86_64.
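A quick way to confirm this, assuming libtensorflow was installed under /usr/local/lib (adjust the path to your setup), is to ask `file` which architecture the library was actually built for:

```shell
# Print the architecture(s) a shared library was built for.
# The libtensorflow path below is an example; adjust to your install.
file /usr/local/lib/libtensorflow.so
# On macOS, `lipo` lists the slices in a universal binary:
# lipo -info /usr/local/lib/libtensorflow.dylib
```

If the output does not mention x86_64, the library was built for a different architecture and the Go link step will fail exactly as above.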


Process for building Tensorflow that maximises use of system libraries?

Hi there, I raised a question about this on the Discord server, but was advised to come here and ask instead.

Are there any instructions or guidelines for building a smaller, lighter TensorFlow shared library (C bindings) and Python module that maximises the use of standard system libraries, e.g. on Ubuntu 18.04, 20.04, CentOS, or MSYS2?

With 29 dependencies listed in THIRD_PARTY_TC_C_LICENSES, it seems this 320 MB shared library could be much smaller if it used system libraries instead of statically linked copies, and given a decent TensorFlow test suite it should not be too hard to verify that the library still functions correctly.

I note https://github.com/tensorflow/build/tree/master/official_build_environments ('They should have as few external dependencies as possible'). Between the Docker images and the statically linked shared libraries, the TensorFlow community seems to have a strong bias toward self-contained builds with less risk of platform-specific divergence. I just wonder whether, as the code's user base grows, there might be a case for more standard OS packaging of this library (e.g. .deb, .rpm), and of the associated Python bindings, which in the standard install presumably contain a separate copy of the shared library?
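For what it's worth, the Bazel build does expose a knob for this: the `TF_SYSTEM_LIBS` environment variable, read at configure time, selects bundled dependencies to replace with system copies. A hedged sketch (the library names are only illustrative; they must match entries under third_party/systemlibs, and the system packages must already be installed):

```shell
# Sketch: swap selected bundled dependencies for system libraries.
# Names are illustrative; they must match third_party/systemlibs entries.
export TF_SYSTEM_LIBS="zlib,curl,giflib,png"
./configure
bazel build //tensorflow/tools/lib_package:libtensorflow
```

This is the mechanism distro packagers already use, so it may be the closest thing to an officially supported path for a lighter build.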

Add arm64 third-party CI

System information

TensorFlow version (you are using): 2.0+ and master branch
Are you willing to contribute it (Yes/No): Yes
Describe the feature and the current behavior/state.
Currently, TensorFlow has an official build CI only on x86, and third-party build CIs on x86 and ppc64; there is no CI for arm64. Adding an arm64 CI would help the community discover arm64 problems early.

OpenLab provides a public CI system for open source projects[1]. It now supports the arm64 architecture, and TensorFlow nightly build jobs for 2.0+ and the master branch have been added there as well[2]. The TensorFlow build runs daily at 18:00 UTC.

Just as TensorFlow does for existing third-party CIs, we can simply add a new badge in README.md linking to the OpenLab arm64 third-party CI.

As you can see on that page[2], TensorFlow 2.0, 2.1, and 2.2 (master) build well on aarch64. However, some AWS libraries are strongly tied to the x86 architecture, so on the master branch I skip that part of the build. You can see a build summary in [3], download the built whl packages there, and find the detailed logs in [4].

So adding an arm64 build CI would be useful for the community.

1: https://openlabtesting.org
2: http://status.openlabtesting.org/builds?project=tensorflow%2Ftensorflow
3: http://status.openlabtesting.org/build/c816e5c9d6cc4519b933414fc6044d28
4: https://logs.openlabtesting.org/logs/periodic-18/github.com/tensorflow/tensorflow/master/tensorflow-arm64-build-daily-v2.1.0/c816e5c/

Additional context
The test is currently CPU-only, based on Ubuntu 18.04 and Python 3.6. More configurations can be added in the future.

I'm from the OpenLab community. I'll keep looking after the TensorFlow arm64 CI and do my best to fix any arm64 failures.

Here is an example[1] of what we did in the pytorch community; see the 'Linux (aarch64) CPU' badge in its README.md.
For another example[2], in the greenplum-db community, see the 'Zuul Regression Test On Arm' badge in its README.md.

1: https://github.com/pytorch/pytorch/blob/master/README.md
2: https://github.com/greenplum-db/gpdb

Will this change the current API? How?
No.
Who will benefit from this feature?
arm64 users and developers.

  • Notes

There is some other discussion of the same problem in the tensorflow repo; see tensorflow/tensorflow#40463 and https://groups.google.com/a/tensorflow.org/forum/#!topic/build/zTbmc0T6jAw

For now, @AshokBhat is working on fixing the aws-lab libraries for ARM support. I have already built the master branch after tensorflow/tensorflow#40700 in my local repo, downgrading numpy via pip3 install numpy==1.18.0 and building with:
bazel clean --expunge ; bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package --local_ram_resources=10240 --local_cpu_resources=7 --verbose_failures

Thanks to everyone, and to @AshokBhat for the kind help.

How easy it is to externalize the visibility of the benchmarks we already run continuously?

This is a pin so we don't lose the benchmarking subthread started at tensorflow/tensorflow#33945 (comment).

Quoting @zongweiz

We have [TF2 benchmarks](https://github.com/tensorflow/models/tree/master/official/benchmark) from every official model garden model, and they run continuously internally using the PerfZero framework. We should be able to select a set of benchmarks and hook it up with GitHub CI (Jenkins). Added @sganeshb to this thread for information and we will follow up.
Another thought is to start CI tests using a selected set of tf_benchmarks.

/cc @alextp @naveenjha123 @sganeshb

compile_commands.json

Migrating from https://github.com/tensorflow/addons/issues/1894
It would be nice if we could generate/distribute compile_commands.json so that IDEs can interact better with our C++ code.

Neither Bazel itself nor the Bazel team's official VS Code plugin has in-tree support for generating compile_commands.json:
bazelbuild/bazel#258
bazelbuild/vscode-bazel#179

I've tried a popular community workaround:
https://github.com/grailbio/bazel-compilation-database

The problem is that we use --action_env flags in the .bazelrc generated by python configure.py, and those arguments are not supported by other commands such as bazel query (bazelbuild/bazel#10226). This breaks both the official Bazel VS Code plugin, which needs to run query commands, and the bazel-compilation-database workaround, because neither can retrieve our environment variables.
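For reference, the bazel-compilation-database workaround is invoked roughly as follows (the entry-point name is taken from that project's README at the time and may have changed; the checkout path is hypothetical). It fails for TensorFlow for exactly the --action_env reason described above:

```shell
# Hedged sketch of the grailbio workaround; it breaks for TensorFlow
# because our .tf_configure.bazelrc relies on --action_env flags that
# query/aquery-based tools cannot see.
git clone https://github.com/grailbio/bazel-compilation-database.git /tmp/bcd
cd /path/to/tensorflow   # hypothetical checkout location
/tmp/bcd/generate.sh     # writes compile_commands.json in the workspace
```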

Enhancement: Add signed SLSA attestations to nightly docker builds

Hi! I'm reaching out on behalf of the Open Source Security Foundation (openssf.org). We work on improving supply-chain security of critical open source projects. We believe we can help improve the tamper resistance of TF's nightly docker builds, with just a few lines of code. Starting with nightly docker builds is relatively low-risk, and allows some testing of the verification flow before moving to the more stable images.

These docker builds are built in this repository via GitHub Actions. Adding a provenance attestation during the build process allows a cryptographically verifiable guarantee that the image was built in this repository. This provenance is a file with metadata that lets users know that the container images were built from your repository’s workflow and not altered by anyone.

The container provenance generator workflow uploads the attestation to the registry using Sigstore's attestation specification. This is projected to go GA in the next few weeks (the API may change in future releases of the generator), and we would love to have TF's nightly builds as an early adopter.

Generating these attestations is also a step toward adopting Supply-chain Levels for Software Artifacts (SLSA). SLSA is a security framework to improve transparency and authenticity of the build / release process. It’s designed and used by a consortium of companies including Google, Intel, Chainguard, Citi, and Datadog, under the umbrella of the Open Source Security Foundation. By generating these attestations in the future for stable docker builds, your project will reach SLSA Level 3 for provenance. You can even add a SLSA badge to your repository so users know that you take security seriously.

If you're interested in reading more, check out this recent blog post, which describes how some of the recent supply-chain attacks would have been prevented using SLSA provenance.

Happy to answer any question you might have.

I'll follow up with a PR and link here to show the changes needed to add provenance to the docker workflow.

cc @ianlewis @mihaimaruseac @laurentsimon

Updating ppc64el wheel to latest version (currently: 2.2.0)

Hi,

Expected Behavior

The ppc64el build at https://powerci.osuosl.org/job/TensorFlow2_PPC64LE_GPU_Release_Build/ should provide the latest stable version.

Actual Behavior

The version provided is 2.2.0, while the latest version is 2.3.1.

Details

I am using the ppc64el wheel linked above on a POWER8 system: it works fine, but it's still at 2.2.0.

The main issue I have is installation time, because the wheel depends on a specific version of scipy. When installing the tensorflow wheel, pip needs to build that version of scipy, which takes about 20 minutes.
The scipy dependency is not actually required and was removed in 2.3.1: tensorflow/tensorflow#41867

So, a 2.3.1 ppc64el wheel would make installation significantly faster.

Building a newer wheel

I used the script https://github.com/tensorflow/build/blob/master/ci_environments/ppc64le/gpu_build.sh together with the ibmcom/tensorflow-ppc64le:gpu-devel-manylinux2014 docker image and I managed to build tensorflow 2.3.1 with it.

There was one issue with some versions of python in the container:

/opt/python/cp37-cp37/bin/python3.7: error while loading shared libraries: libcrypt.so.2: cannot open shared object file: No such file or directory

I could fix that by adding /usr/local/lib/ to $LD_LIBRARY_PATH (most notably in the .tf_configure.bazelrc file generated by the script).
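That workaround, as a one-liner before running configure/build inside the container (assuming libcrypt.so.2 actually lives in /usr/local/lib):

```shell
# Make /usr/local/lib visible to the dynamic loader; preserves any
# existing LD_LIBRARY_PATH instead of clobbering it.
export LD_LIBRARY_PATH=/usr/local/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
```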

Tidy up the Community Builds Table

The Community Builds table has been migrated, but it could still use some cleanup. See my comment here for the specifics of each entry:

  • RedHat builds: fix broken links (@sub-mod)
  • AMD builds: fix stale links and split the 1.x / 2.x row(s) into two rows each (@deven-amd, last comment)
  • Openlab builds: verify owner and recency, and split 1.x / 2.x rows. Stable last ran for TF 2.1. (@bzhaoopenstack)
  • Intel builds: verify owner and build status and split 1.x / 2.x rows. (@gaurides @claynerobison)

@sub-mod and @deven-amd were both looking at their entries, but I haven't heard back from Openlab or Intel yet, so I'll reach out to them if needed.

Folder structure within the Project

Expected Behavior

It would probably be a good idea to create a directory structure so that contributors from different vendors (IBM, RedHat, Intel, etc.) can put their build scripts in the proper location.

Actual Behavior

Currently no directory structure exists, so it is a bit confusing where to add files.

Proposal

Create directories for the different platforms, common scripts, and cpu/gpu variants:

  • common_scripts
  • x86_64
    • cpu
      • Dockerfile
      • build_scripts
    • gpu
    • mkl
  • ppc64le
    • cpu
    • gpu

@perfinion @ewilderj @wdirons

issue encountered while running `go run hello_tf.go`

While running go run hello_tf.go, I get:

build command-line-arguments: cannot load github.com/tensorflow/tensorflow/tensorflow/go/stream_executor: module github.com/tensorflow/tensorflow@latest found (v2.6.0+incompatible, replaced by /go/src/github.com/tensorflow/tensorflow), but does not contain package github.com/tensorflow/tensorflow/tensorflow/go/stream_executor

Bazel build doesn't check cache actions for source code build

I followed the official documentation and compiled the source code successfully on my own PC, but each time I add a VLOG statement or some other small code change and rebuild, Bazel doesn't seem to use any cached actions, resulting in very long compilation times. Most of the time is spent recompiling source and dependency files, such as LLVM and files under tensorflow/compiler/xla, that I haven't changed at all.

I also tried the Docker image build method mentioned in the docs above; in the container provided there, Bazel can use the cache to do incremental builds.

So how do I check or adjust the Bazel configuration on my PC to get incremental builds and speed up compilation?
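One thing worth checking: Bazel's local disk cache is off by default, and enabling it lets action results survive clean builds and server restarts. A minimal sketch (the cache path is an arbitrary choice):

```shell
# Enable a persistent local action cache for all bazel invocations.
echo 'build --disk_cache=~/.cache/bazel-disk-cache' >> ~/.bazelrc
```

Note also that re-running ./configure rewrites .tf_configure.bazelrc; if any --action_env value changes between builds, previously cached actions are invalidated, which looks exactly like a cold build.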

Provide Bazel cache for TensorFlow builds

Providing a TensorFlow build cache could be very helpful to external developers, and lower the barrier to entry of contributing to TF.

Some ideas for this we've discussed before are:

  • Offer Bazel RBE resources on behalf of SIG Build. This service is in alpha on GCP.
  • Provide a read-only build cache in a GCP bucket.
  • Provide devel_cache Docker images containing a build cache (these could be very large)
  • Provide code-and-cache volumes for the docker devel images.
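The read-only cache idea would amount to a two-line .bazelrc fragment on the contributor side (the bucket URL here is hypothetical); these flags make Bazel fetch cached results without ever uploading local ones:

```shell
# Consume a shared cache without uploading local results to it.
echo 'build --remote_cache=https://storage.googleapis.com/sig-build-cache-example' >> .bazelrc
echo 'build --remote_upload_local_results=false' >> .bazelrc
```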

See also:

Tensorflow Win Docker image

Can we publish a small Wine+TensorFlow Docker image on our Docker Hub account?

It could really help with minor local Windows debugging, instead of waiting for Kokoro-label maintainer permission and slow CI build cycles, especially for Python-only bugs.

E.g. follow my recent triage case at tensorflow/tensorflow#48086 (comment)

P.S. I know that TensorFlow still doesn't run from the source tree, so we need some ugly hacks to patch/edit the installed Python wheel files directly. But it is better than nothing and better than "bruteforcing" the CI.

/cc @theadactyl @joanafilipa @mihaimaruseac

LD_LIBRARY_PATH may be incorrect in Docker

The Docker image has the environment variable LD_LIBRARY_PATH set, but the path doesn't exist:

tf-docker /tf/tensorflow > echo $LD_LIBRARY_PATH
/usr/local/nvidia/lib:/usr/local/nvidia/lib64

tf-docker /tf/tensorflow > ls /usr/local/
bin/       cuda-11.1/ etc/       include/   libexec/   sbin/      src/
cuda/      cuda-11.2/ games/     lib/       man/       share/

But this directory doesn't exist in the Docker image. It doesn't cause any issue, but it would be nice to fix.
From what I read, replacing "nvidia" with "cuda" gives the right path, and that path does exist in the image:

/usr/local/cuda/lib64 seems to have the right .so files.
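Until the image itself is fixed, the variable can be overridden at container start (the image tag is an example):

```shell
# Override the stale LD_LIBRARY_PATH when launching the container.
docker run --gpus=all -it \
  -e LD_LIBRARY_PATH=/usr/local/cuda/lib64 \
  tensorflow/tensorflow:latest-gpu bash
```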

Reference issues:

Tensorflow nightly commit

Can we publish somewhere the commit that is used each day for the nightly wheels and on which the nightly tests are executed?
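In the meantime, the nightly wheels already embed the commit in `tf.version.GIT_VERSION`, a git-describe string whose trailing `g<hash>` names the exact commit. A sketch of recovering it (the version string below is illustrative):

```shell
# On a real install, obtain the string with:
#   python -c 'import tensorflow as tf; print(tf.version.GIT_VERSION)'
git_version="v2.5.0-123-gabc1234"        # illustrative value
commit="${git_version##*-g}"             # strip through the final "-g"
echo "$commit"                           # prints abc1234
```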

Python 3.5 EOL

Python 3.5 is EOL as of 13th September.
We have already discussed this in the Gitter channel; this is just a reminder.
Many users could also be confused by the deprecation warnings, e.g. when building the Raspberry Pi wheel.

unable to locate package python3-virtualenv

It seems that python3-virtualenv cannot be installed because the package doesn't exist (Raspberry Pi Docker image).
It is in the Ubuntu repository, though.

I added RUN apt search virtualenv to the Dockerfile; it seems python-virtualenv should be used instead?

Sorting...
Full Text Search...
dh-virtualenv/trusty 0.6-1 all
  wrap and build python packages using virtualenv

python-tox/trusty-updates 1.6.0-1ubuntu1 all
  virtualenv-based automation of test activities

python-virtualenv/trusty-updates 1.11.4-1ubuntu1 all
  Python virtual environment creator

virtualenv-clone/trusty 0.2.4-1 all
  script for cloning a non-relocatable virtualenv

virtualenvwrapper/trusty 4.1.1-1 all
  extension to virtualenv for managing multiple virtual Python environments

After fixing install_deb_packages, the install aborted instead of installing the packages (as if it had answered n instead of y).
This was because I had it in a separate command; adding -y fixed that.

The pip installation script is completely broken, too:

  1. It uses python3.4
  2. The wrong get-pip script is downloaded.

Docker exec ctrl+c suggestion

  • If you interrupt a docker exec command with ctrl-c, you will get your
    shell back but the command will continue to run. You cannot reattach to it.
    If you have any suggestions for handling this, let us know.

We could run a tmux or byobu session in the container, making it easy to reattach to the session without leaving a detached process behind after a ctrl-c.
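Concretely, under the assumption that tmux is installed in the image and the container is named tf (both hypothetical here), `tmux new-session -A` gives an attach-or-create entry point:

```shell
# Start (or reattach to) a tmux session inside the running container;
# a ctrl-c on the host side leaves the session alive for reattachment.
docker exec -it tf tmux new-session -A -s build
```

Running the same command again after an interrupt reattaches to the surviving session instead of spawning a new detached process.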

GPU passthrough problem: Failed to initialize NVML: Driver/library version mismatch

With nvidia-docker2 installed, nvidia-smi is supposed to work when passing --gpus=all to a container (CUDA 11.x needs ldconfig as well):

$ docker run --gpus=all --rm -it nvidia/cuda:11.2.1-runtime-ubuntu20.04 bash -c "ldconfig && nvidia-smi"

This is supposed to be working in the SIG Build containers as well, but they get a driver/library version mismatch error instead:

$ docker run --gpus=all --rm -it tensorflow/build:latest-python3.9 bash -c "ldconfig && nvidia-smi"
Failed to initialize NVML: Driver/library version mismatch

I'm not sure what is causing this. The Dockerfiles start from the CUDA base packages and install a bunch more packages on top, which has worked in the past (on the old Dockerfiles, for example). Maybe something kernel-module-related is missing? I thought I remembered this working when I set it up, but I don't have any evidence to prove it.

This is blocking my work on migrating TensorFlow's official CI to use Docker.
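For what it's worth, this error usually means the NVML library visible inside the container does not match the host's kernel driver, so comparing the two versions is a first diagnostic step. A hedged sketch (the image tag is from above; the helper names are made up):

```shell
# NVML loads libnvidia-ml in userspace but talks to the host kernel module;
# if their versions differ it reports "Driver/library version mismatch".
# These helpers read both sides so they can be compared.
host_driver_version() {
  # e.g. "470.57.02", from the host's loaded kernel module
  sed -n 's/.*Kernel Module *\([0-9.]*\).*/\1/p' /proc/driver/nvidia/version
}
container_driver_version() {
  docker run --gpus=all --rm tensorflow/build:latest-python3.9 \
    sh -c 'ldconfig && nvidia-smi --query-gpu=driver_version --format=csv,noheader'
}
same_version() { [ "$1" = "$2" ]; }   # exit 0 iff the two versions match
```

If the versions differ, a likely culprit is a driver library baked into the image shadowing the one the NVIDIA container toolkit injects from the host.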

fatal: ambiguous argument '835d7da'

I want to install TensorFlow v2.5.0, following the guide below: https://github.com/tensorflow/build/tree/86ca46079c158b7a54fc0fadf824e14e02918a16/golang_install_guide.
The only difference is the --depth value.

1. git clone --branch v2.5.0 --depth=1000 https://github.com/tensorflow/tensorflow.git /root/go/src/github.com/tensorflow/tensorflow
2. cd /root/go/src/github.com/tensorflow/tensorflow
3. git format-patch -1 835d7da --stdout | git apply

I got an error:

fatal: ambiguous argument '835d7da': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
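A likely explanation: a --depth=1000 clone only contains the most recent 1000 commits behind v2.5.0, so 835d7da may simply not be in the truncated history. A minimal local reproduction (hypothetical temp-dir layout) showing that `git fetch --unshallow` makes such a commit resolvable again:

```shell
set -e
tmp=$(mktemp -d)

# Build a tiny repo with two commits, then shallow-clone only the tip.
git init -q "$tmp/src"
git -C "$tmp/src" -c user.email=a@b -c user.name=t commit -q --allow-empty -m one
git -C "$tmp/src" -c user.email=a@b -c user.name=t commit -q --allow-empty -m two
git clone -q --depth=1 "file://$tmp/src" "$tmp/shallow"

old=$(git -C "$tmp/src" rev-parse HEAD~1)

# The parent commit is outside the shallow history...
before=$( git -C "$tmp/shallow" cat-file -e "$old" 2>/dev/null && echo yes || echo no )
# ...until the full history is fetched.
git -C "$tmp/shallow" fetch -q --unshallow
after=$( git -C "$tmp/shallow" cat-file -e "$old" && echo yes || echo no )
echo "before=$before after=$after"
```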
