
jetson-tx2-pytorch's People

Contributors

andrewadare


jetson-tx2-pytorch's Issues

c++: error: unrecognized command line option ‘-mavx2’

[ 15%] Building CXX object src/ATen/CMakeFiles/ATen.dir/native/cpu/ReduceOpsKernel.cpp.AVX.cpp.o
c++: error: unrecognized command line option ‘-mavx2’
src/ATen/CMakeFiles/ATen.dir/build.make:81862: recipe for target 'src/ATen/CMakeFiles/ATen.dir/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp.o' failed
make[2]: *** [src/ATen/CMakeFiles/ATen.dir/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
c++: error: unrecognized command line option ‘-mavx’
src/ATen/CMakeFiles/ATen.dir/build.make:81886: recipe for target 'src/ATen/CMakeFiles/ATen.dir/native/cpu/ReduceOpsKernel.cpp.AVX.cpp.o' failed
make[2]: *** [src/ATen/CMakeFiles/ATen.dir/native/cpu/ReduceOpsKernel.cpp.AVX.cpp.o] Error 1
CMakeFiles/Makefile2:233: recipe for target 'src/ATen/CMakeFiles/ATen.dir/all' failed
make[1]: *** [src/ATen/CMakeFiles/ATen.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error

Has anyone encountered this issue? How can it be solved?
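For reference: -mavx and -mavx2 are x86-only GCC options, so the aarch64 compiler on a TX2 will always reject them; the ATen build is handing x86 SIMD flags to an ARM toolchain. Below is a minimal, purely illustrative C++ check (not the actual ATen fix) that prints which SIMD feature macros the local compiler defines; built with plain g++ on a TX2 it should report NEON only, which is consistent with -mavx2 being unrecognized.

// simd_check.cpp -- report which SIMD feature macros this compiler defines.
// On the TX2 (aarch64) only __ARM_NEON should be set; __AVX__/__AVX2__ are x86-only.
#include <iostream>

int main() {
#if defined(__AVX2__)
    std::cout << "AVX2 available\n";
#elif defined(__AVX__)
    std::cout << "AVX available\n";
#endif
#if defined(__ARM_NEON)
    std::cout << "ARM NEON available\n";
#endif
#if !defined(__AVX__) && !defined(__ARM_NEON)
    std::cout << "no AVX or NEON macros defined\n";
#endif
    return 0;
}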

parse error in template argument list

Hi,

When I run 'python setup.py install --user', it outputs the following error:
[ 85%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen_cuda.dir/native/cuda/ATen_cuda_generated_TensorFactories.cu.o
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:41:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SoftMaxForward<2, ::at::cuda::type , ::at::acc_type<double, (bool)1> , ::at::native::operator ::LogSoftMaxForwardEpilogue>( _ZN2at4cuda4typeIdEE *&__cuda_0,_ZN2at4cuda4typeIdEE *&__cuda_1,int &__cuda_2){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462619cunn_SoftMaxForwardILi2EddNS1_25LogSoftMaxForwardEpilogueEEEvPT0_S5_i( (_ZN2at4cuda4typeIdEE &)__cuda_0,(_ZN2at4cuda4typeIdEE &)__cuda_1,(int &)__cuda_2);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:41:17: error: template-id ‘__wrapper__device_stub_cunn_SoftMaxForward<2, at::cuda::type, at::acc_type<double, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SoftMaxForward(_ZN2at4cuda4typeIdEE
&, _ZN2at4cuda4typeIdEE
&, int&)’ does not match any template declaration
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:46:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SoftMaxForward<2, ::at::cuda::type , ::at::acc_type<float, (bool)1> , ::at::native::operator ::LogSoftMaxForwardEpilogue>( _ZN2at4cuda4typeIfEE *&__cuda_0,_ZN2at4cuda4typeIfEE *&__cuda_1,int &__cuda_2){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462619cunn_SoftMaxForwardILi2EffNS1_25LogSoftMaxForwardEpilogueEEEvPT0_S5_i( (_ZN2at4cuda4typeIfEE &)__cuda_0,(_ZN2at4cuda4typeIfEE &)__cuda_1,(int &)__cuda_2);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:46:17: error: template-id ‘__wrapper__device_stub_cunn_SoftMaxForward<2, at::cuda::type, at::acc_type<float, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SoftMaxForward(_ZN2at4cuda4typeIfEE
&, _ZN2at4cuda4typeIfEE
&, int&)’ does not match any template declaration
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:51:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SoftMaxForward<2, ::at::cuda::type< ::at::Half> , ::at::acc_type< ::__half, (bool)1> , ::at::native::operator ::LogSoftMaxForwardEpilogue>( _ZN2at4cuda4typeINS_4HalfEEE *&__cuda_0,_ZN2at4cuda4typeINS_4HalfEEE *&__cuda_1,int &__cuda_2){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462619cunn_SoftMaxForwardILi2E6__halffNS1_25LogSoftMaxForwardEpilogueEEEvPT0_S6_i( (_ZN2at4cuda4typeINS_4HalfEEE &)__cuda_0,(_ZN2at4cuda4typeINS_4HalfEEE &)__cuda_1,(int &)__cuda_2);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:51:17: error: template-id ‘__wrapper__device_stub_cunn_SoftMaxForward<2, at::cuda::typeat::Half, at::acc_type<__half, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SoftMaxForward(_ZN2at4cuda4typeINS_4HalfEEE
&, _ZN2at4cuda4typeINS_4HalfEEE
&, int&)’ does not match any template declaration
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:56:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SpatialSoftMaxForward< ::at::cuda::type , ::at::acc_type<double, (bool)1> , ::at::native::operator ::LogSoftMaxForwardEpilogue>( _ZN2at4cuda4typeIdEE *&__cuda_0,_ZN2at4cuda4typeIdEE *&__cuda_1,::uint32_t &__cuda_2,::uint32_t &__cuda_3,::uint32_t &__cuda_4){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462626cunn_SpatialSoftMaxForwardIddNS1_25LogSoftMaxForwardEpilogueEEEvPT_S5_jjj( (_ZN2at4cuda4typeIdEE &)__cuda_0,(_ZN2at4cuda4typeIdEE &)__cuda_1,(::uint32_t &)__cuda_2,(::uint32_t &)__cuda_3,(::uint32_t &)__cuda_4);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:56:17: error: template-id ‘__wrapper__device_stub_cunn_SpatialSoftMaxForward<at::cuda::type, at::acc_type<double, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SpatialSoftMaxForward(_ZN2at4cuda4typeIdEE
&, _ZN2at4cuda4typeIdEE
&, uint32_t&, uint32_t&, uint32_t&)’ does not match any template declaration
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:61:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SpatialSoftMaxForward< ::at::cuda::type , ::at::acc_type<float, (bool)1> , ::at::native::operator ::LogSoftMaxForwardEpilogue>( _ZN2at4cuda4typeIfEE *&__cuda_0,_ZN2at4cuda4typeIfEE *&__cuda_1,::uint32_t &__cuda_2,::uint32_t &__cuda_3,::uint32_t &__cuda_4){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462626cunn_SpatialSoftMaxForwardIffNS1_25LogSoftMaxForwardEpilogueEEEvPT_S5_jjj( (_ZN2at4cuda4typeIfEE &)__cuda_0,(_ZN2at4cuda4typeIfEE &)__cuda_1,(::uint32_t &)__cuda_2,(::uint32_t &)__cuda_3,(::uint32_t &)__cuda_4);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:61:17: error: template-id ‘__wrapper__device_stub_cunn_SpatialSoftMaxForward<at::cuda::type, at::acc_type<float, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SpatialSoftMaxForward(_ZN2at4cuda4typeIfEE
&, _ZN2at4cuda4typeIfEE
&, uint32_t&, uint32_t&, uint32_t&)’ does not match any template declaration
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:66:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SpatialSoftMaxForward< ::at::cuda::type< ::at::Half> , ::at::acc_type< ::__half, (bool)1> , ::at::native::operator ::LogSoftMaxForwardEpilogue>( _ZN2at4cuda4typeINS_4HalfEEE *&__cuda_0,_ZN2at4cuda4typeINS_4HalfEEE *&__cuda_1,::uint32_t &__cuda_2,::uint32_t &__cuda_3,::uint32_t &__cuda_4){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462626cunn_SpatialSoftMaxForwardI6__halffNS1_25LogSoftMaxForwardEpilogueEEEvPT_S6_jjj( (_ZN2at4cuda4typeINS_4HalfEEE &)__cuda_0,(_ZN2at4cuda4typeINS_4HalfEEE &)__cuda_1,(::uint32_t &)__cuda_2,(::uint32_t &)__cuda_3,(::uint32_t &)__cuda_4);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:66:17: error: template-id ‘__wrapper__device_stub_cunn_SpatialSoftMaxForward<at::cuda::typeat::Half, at::acc_type<__half, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SpatialSoftMaxForward(_ZN2at4cuda4typeINS_4HalfEEE
&, _ZN2at4cuda4typeINS_4HalfEEE
&, uint32_t&, uint32_t&, uint32_t&)’ does not match any template declaration
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:71:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SoftMaxBackward<2, ::at::cuda::type , ::at::acc_type<double, (bool)1> , ::at::native::operator ::LogSoftMaxBackwardEpilogue>( _ZN2at4cuda4typeIdEE *&__cuda_0,_ZN2at4cuda4typeIdEE *&__cuda_1,_ZN2at4cuda4typeIdEE *&__cuda_2,int &__cuda_3){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462620cunn_SoftMaxBackwardILi2EddNS1_26LogSoftMaxBackwardEpilogueEEEvPT0_S5_S5_i( (_ZN2at4cuda4typeIdEE &)__cuda_0,(_ZN2at4cuda4typeIdEE &)__cuda_1,(_ZN2at4cuda4typeIdEE &)__cuda_2,(int &)__cuda_3);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:71:17: error: template-id ‘__wrapper__device_stub_cunn_SoftMaxBackward<2, at::cuda::type, at::acc_type<double, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SoftMaxBackward(_ZN2at4cuda4typeIdEE
&, _ZN2at4cuda4typeIdEE
&, _ZN2at4cuda4typeIdEE
&, int&)’ does not match any template declaration
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:76:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SoftMaxBackward<2, ::at::cuda::type , ::at::acc_type<float, (bool)1> , ::at::native::operator ::LogSoftMaxBackwardEpilogue>( _ZN2at4cuda4typeIfEE *&__cuda_0,_ZN2at4cuda4typeIfEE *&__cuda_1,_ZN2at4cuda4typeIfEE *&__cuda_2,int &__cuda_3){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462620cunn_SoftMaxBackwardILi2EffNS1_26LogSoftMaxBackwardEpilogueEEEvPT0_S5_S5_i( (_ZN2at4cuda4typeIfEE &)__cuda_0,(_ZN2at4cuda4typeIfEE &)__cuda_1,(_ZN2at4cuda4typeIfEE &)__cuda_2,(int &)__cuda_3);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:76:17: error: template-id ‘__wrapper__device_stub_cunn_SoftMaxBackward<2, at::cuda::type, at::acc_type<float, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SoftMaxBackward(_ZN2at4cuda4typeIfEE
&, _ZN2at4cuda4typeIfEE
&, _ZN2at4cuda4typeIfEE
&, int&)’ does not match any template declaration
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:81:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SoftMaxBackward<2, ::at::cuda::type< ::at::Half> , ::at::acc_type< ::__half, (bool)1> , ::at::native::operator ::LogSoftMaxBackwardEpilogue>( _ZN2at4cuda4typeINS_4HalfEEE *&__cuda_0,_ZN2at4cuda4typeINS_4HalfEEE *&__cuda_1,_ZN2at4cuda4typeINS_4HalfEEE *&__cuda_2,int &__cuda_3){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462620cunn_SoftMaxBackwardILi2E6__halffNS1_26LogSoftMaxBackwardEpilogueEEEvPT0_S6_S6_i( (_ZN2at4cuda4typeINS_4HalfEEE &)__cuda_0,(_ZN2at4cuda4typeINS_4HalfEEE &)__cuda_1,(_ZN2at4cuda4typeINS_4HalfEEE &)__cuda_2,(int &)__cuda_3);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:81:17: error: template-id ‘__wrapper__device_stub_cunn_SoftMaxBackward<2, at::cuda::typeat::Half, at::acc_type<__half, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SoftMaxBackward(_ZN2at4cuda4typeINS_4HalfEEE
&, _ZN2at4cuda4typeINS_4HalfEEE
&, _ZN2at4cuda4typeINS_4HalfEEE
&, int&)’ does not match any template declaration
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:86:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SpatialSoftMaxBackward< ::at::cuda::type , ::at::acc_type<double, (bool)1> , ::at::native::operator ::LogSoftMaxBackwardEpilogue>( _ZN2at4cuda4typeIdEE *&__cuda_0,_ZN2at4cuda4typeIdEE *&__cuda_1,_ZN2at4cuda4typeIdEE *&__cuda_2,::uint32_t &__cuda_3,::uint32_t &__cuda_4,::uint32_t &__cuda_5){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462627cunn_SpatialSoftMaxBackwardIddNS1_26LogSoftMaxBackwardEpilogueEEEvPT_S5_S5_jjj( (_ZN2at4cuda4typeIdEE &)__cuda_0,(_ZN2at4cuda4typeIdEE &)__cuda_1,(_ZN2at4cuda4typeIdEE &)__cuda_2,(::uint32_t &)__cuda_3,(::uint32_t &)__cuda_4,(::uint32_t &)__cuda_5);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:86:17: error: template-id ‘__wrapper__device_stub_cunn_SpatialSoftMaxBackward<at::cuda::type, at::acc_type<double, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SpatialSoftMaxBackward(_ZN2at4cuda4typeIdEE
&, _ZN2at4cuda4typeIdEE
&, _ZN2at4cuda4typeIdEE
&, uint32_t&, uint32_t&, uint32_t&)’ does not match any template declaration
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:91:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SpatialSoftMaxBackward< ::at::cuda::type , ::at::acc_type<float, (bool)1> , ::at::native::operator ::LogSoftMaxBackwardEpilogue>( _ZN2at4cuda4typeIfEE *&__cuda_0,_ZN2at4cuda4typeIfEE *&__cuda_1,_ZN2at4cuda4typeIfEE *&__cuda_2,::uint32_t &__cuda_3,::uint32_t &__cuda_4,::uint32_t &__cuda_5){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462627cunn_SpatialSoftMaxBackwardIffNS1_26LogSoftMaxBackwardEpilogueEEEvPT_S5_S5_jjj( (_ZN2at4cuda4typeIfEE &)__cuda_0,(_ZN2at4cuda4typeIfEE &)__cuda_1,(_ZN2at4cuda4typeIfEE &)__cuda_2,(::uint32_t &)__cuda_3,(::uint32_t &)__cuda_4,(::uint32_t &)__cuda_5);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:91:17: error: template-id ‘__wrapper__device_stub_cunn_SpatialSoftMaxBackward<at::cuda::type, at::acc_type<float, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SpatialSoftMaxBackward(_ZN2at4cuda4typeIfEE
&, _ZN2at4cuda4typeIfEE
&, _ZN2at4cuda4typeIfEE
&, uint32_t&, uint32_t&, uint32_t&)’ does not match any template declaration
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:96:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SpatialSoftMaxBackward< ::at::cuda::type< ::at::Half> , ::at::acc_type< ::__half, (bool)1> , ::at::native::operator ::LogSoftMaxBackwardEpilogue>( _ZN2at4cuda4typeINS_4HalfEEE *&__cuda_0,_ZN2at4cuda4typeINS_4HalfEEE *&__cuda_1,_ZN2at4cuda4typeINS_4HalfEEE *&__cuda_2,::uint32_t &__cuda_3,::uint32_t &__cuda_4,::uint32_t &__cuda_5){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462627cunn_SpatialSoftMaxBackwardI6__halffNS1_26LogSoftMaxBackwardEpilogueEEEvPT_S6_S6_jjj( (_ZN2at4cuda4typeINS_4HalfEEE &)__cuda_0,(_ZN2at4cuda4typeINS_4HalfEEE &)__cuda_1,(_ZN2at4cuda4typeINS_4HalfEEE &)__cuda_2,(::uint32_t &)__cuda_3,(::uint32_t &)__cuda_4,(::uint32_t &)__cuda_5);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:96:17: error: template-id ‘__wrapper__device_stub_cunn_SpatialSoftMaxBackward<at::cuda::typeat::Half, at::acc_type<__half, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SpatialSoftMaxBackward(_ZN2at4cuda4typeINS_4HalfEEE
&, _ZN2at4cuda4typeINS_4HalfEEE
&, _ZN2at4cuda4typeINS_4HalfEEE
&, uint32_t&, uint32_t&, uint32_t&)’ does not match any template declaration
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:101:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SoftMaxForward<2, ::at::cuda::type , ::at::acc_type<double, (bool)1> , ::at::native::operator ::SoftMaxForwardEpilogue>( _ZN2at4cuda4typeIdEE *&__cuda_0,_ZN2at4cuda4typeIdEE *&__cuda_1,int &__cuda_2){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462619cunn_SoftMaxForwardILi2EddNS1_22SoftMaxForwardEpilogueEEEvPT0_S5_i( (_ZN2at4cuda4typeIdEE &)__cuda_0,(_ZN2at4cuda4typeIdEE &)__cuda_1,(int &)__cuda_2);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:101:17: error: template-id ‘__wrapper__device_stub_cunn_SoftMaxForward<2, at::cuda::type, at::acc_type<double, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SoftMaxForward(_ZN2at4cuda4typeIdEE
&, _ZN2at4cuda4typeIdEE
&, int&)’ does not match any template declaration
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:106:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SoftMaxForward<2, ::at::cuda::type , ::at::acc_type<float, (bool)1> , ::at::native::operator ::SoftMaxForwardEpilogue>( _ZN2at4cuda4typeIfEE *&__cuda_0,_ZN2at4cuda4typeIfEE *&__cuda_1,int &__cuda_2){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462619cunn_SoftMaxForwardILi2EffNS1_22SoftMaxForwardEpilogueEEEvPT0_S5_i( (_ZN2at4cuda4typeIfEE &)__cuda_0,(_ZN2at4cuda4typeIfEE &)__cuda_1,(int &)__cuda_2);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:106:17: error: template-id ‘__wrapper__device_stub_cunn_SoftMaxForward<2, at::cuda::type, at::acc_type<float, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SoftMaxForward(_ZN2at4cuda4typeIfEE
&, _ZN2at4cuda4typeIfEE
&, int&)’ does not match any template declaration
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:111:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SoftMaxForward<2, ::at::cuda::type< ::at::Half> , ::at::acc_type< ::__half, (bool)1> , ::at::native::operator ::SoftMaxForwardEpilogue>( _ZN2at4cuda4typeINS_4HalfEEE *&__cuda_0,_ZN2at4cuda4typeINS_4HalfEEE *&__cuda_1,int &__cuda_2){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462619cunn_SoftMaxForwardILi2E6__halffNS1_22SoftMaxForwardEpilogueEEEvPT0_S6_i( (_ZN2at4cuda4typeINS_4HalfEEE &)__cuda_0,(_ZN2at4cuda4typeINS_4HalfEEE &)__cuda_1,(int &)__cuda_2);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:111:17: error: template-id ‘__wrapper__device_stub_cunn_SoftMaxForward<2, at::cuda::typeat::Half, at::acc_type<__half, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SoftMaxForward(_ZN2at4cuda4typeINS_4HalfEEE
&, _ZN2at4cuda4typeINS_4HalfEEE
&, int&)’ does not match any template declaration
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:116:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SpatialSoftMaxForward< ::at::cuda::type , ::at::acc_type<double, (bool)1> , ::at::native::operator ::SoftMaxForwardEpilogue>( _ZN2at4cuda4typeIdEE *&__cuda_0,_ZN2at4cuda4typeIdEE *&__cuda_1,::uint32_t &__cuda_2,::uint32_t &__cuda_3,::uint32_t &__cuda_4){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462626cunn_SpatialSoftMaxForwardIddNS1_22SoftMaxForwardEpilogueEEEvPT_S5_jjj( (_ZN2at4cuda4typeIdEE &)__cuda_0,(_ZN2at4cuda4typeIdEE &)__cuda_1,(::uint32_t &)__cuda_2,(::uint32_t &)__cuda_3,(::uint32_t &)__cuda_4);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:116:17: error: template-id ‘__wrapper__device_stub_cunn_SpatialSoftMaxForward<at::cuda::type, at::acc_type<double, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SpatialSoftMaxForward(_ZN2at4cuda4typeIdEE
&, _ZN2at4cuda4typeIdEE
&, uint32_t&, uint32_t&, uint32_t&)’ does not match any template declaration
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:121:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SpatialSoftMaxForward< ::at::cuda::type , ::at::acc_type<float, (bool)1> , ::at::native::operator ::SoftMaxForwardEpilogue>( _ZN2at4cuda4typeIfEE *&__cuda_0,_ZN2at4cuda4typeIfEE *&__cuda_1,::uint32_t &__cuda_2,::uint32_t &__cuda_3,::uint32_t &__cuda_4){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462626cunn_SpatialSoftMaxForwardIffNS1_22SoftMaxForwardEpilogueEEEvPT_S5_jjj( (_ZN2at4cuda4typeIfEE &)__cuda_0,(_ZN2at4cuda4typeIfEE &)__cuda_1,(::uint32_t &)__cuda_2,(::uint32_t &)__cuda_3,(::uint32_t &)__cuda_4);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:121:17: error: template-id ‘__wrapper__device_stub_cunn_SpatialSoftMaxForward<at::cuda::type, at::acc_type<float, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SpatialSoftMaxForward(_ZN2at4cuda4typeIfEE
&, _ZN2at4cuda4typeIfEE
&, uint32_t&, uint32_t&, uint32_t&)’ does not match any template declaration
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:126:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SpatialSoftMaxForward< ::at::cuda::type< ::at::Half> , ::at::acc_type< ::__half, (bool)1> , ::at::native::operator ::SoftMaxForwardEpilogue>( _ZN2at4cuda4typeINS_4HalfEEE *&__cuda_0,_ZN2at4cuda4typeINS_4HalfEEE *&__cuda_1,::uint32_t &__cuda_2,::uint32_t &__cuda_3,::uint32_t &__cuda_4){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462626cunn_SpatialSoftMaxForwardI6__halffNS1_22SoftMaxForwardEpilogueEEEvPT_S6_jjj( (_ZN2at4cuda4typeINS_4HalfEEE &)__cuda_0,(_ZN2at4cuda4typeINS_4HalfEEE &)__cuda_1,(::uint32_t &)__cuda_2,(::uint32_t &)__cuda_3,(::uint32_t &)__cuda_4);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:126:17: error: template-id ‘__wrapper__device_stub_cunn_SpatialSoftMaxForward<at::cuda::typeat::Half, at::acc_type<__half, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SpatialSoftMaxForward(_ZN2at4cuda4typeINS_4HalfEEE
&, _ZN2at4cuda4typeINS_4HalfEEE
&, uint32_t&, uint32_t&, uint32_t&)’ does not match any template declaration
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:131:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SoftMaxBackward<2, ::at::cuda::type , ::at::acc_type<double, (bool)1> , ::at::native::operator ::SoftMaxBackwardEpilogue>( _ZN2at4cuda4typeIdEE *&__cuda_0,_ZN2at4cuda4typeIdEE *&__cuda_1,_ZN2at4cuda4typeIdEE *&__cuda_2,int &__cuda_3){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462620cunn_SoftMaxBackwardILi2EddNS1_23SoftMaxBackwardEpilogueEEEvPT0_S5_S5_i( (_ZN2at4cuda4typeIdEE &)__cuda_0,(_ZN2at4cuda4typeIdEE &)__cuda_1,(_ZN2at4cuda4typeIdEE &)__cuda_2,(int &)__cuda_3);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:131:17: error: template-id ‘__wrapper__device_stub_cunn_SoftMaxBackward<2, at::cuda::type, at::acc_type<double, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SoftMaxBackward(_ZN2at4cuda4typeIdEE
&, _ZN2at4cuda4typeIdEE
&, _ZN2at4cuda4typeIdEE
&, int&)’ does not match any template declaration
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:136:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SoftMaxBackward<2, ::at::cuda::type , ::at::acc_type<float, (bool)1> , ::at::native::operator ::SoftMaxBackwardEpilogue>( _ZN2at4cuda4typeIfEE *&__cuda_0,_ZN2at4cuda4typeIfEE *&__cuda_1,_ZN2at4cuda4typeIfEE *&__cuda_2,int &__cuda_3){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462620cunn_SoftMaxBackwardILi2EffNS1_23SoftMaxBackwardEpilogueEEEvPT0_S5_S5_i( (_ZN2at4cuda4typeIfEE &)__cuda_0,(_ZN2at4cuda4typeIfEE &)__cuda_1,(_ZN2at4cuda4typeIfEE &)__cuda_2,(int &)__cuda_3);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:136:17: error: template-id ‘__wrapper__device_stub_cunn_SoftMaxBackward<2, at::cuda::type, at::acc_type<float, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SoftMaxBackward(_ZN2at4cuda4typeIfEE
&, _ZN2at4cuda4typeIfEE
&, _ZN2at4cuda4typeIfEE
&, int&)’ does not match any template declaration
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:141:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SoftMaxBackward<2, ::at::cuda::type< ::at::Half> , ::at::acc_type< ::__half, (bool)1> , ::at::native::operator ::SoftMaxBackwardEpilogue>( _ZN2at4cuda4typeINS_4HalfEEE *&__cuda_0,_ZN2at4cuda4typeINS_4HalfEEE *&__cuda_1,_ZN2at4cuda4typeINS_4HalfEEE *&__cuda_2,int &__cuda_3){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462620cunn_SoftMaxBackwardILi2E6__halffNS1_23SoftMaxBackwardEpilogueEEEvPT0_S6_S6_i( (_ZN2at4cuda4typeINS_4HalfEEE &)__cuda_0,(_ZN2at4cuda4typeINS_4HalfEEE &)__cuda_1,(_ZN2at4cuda4typeINS_4HalfEEE &)__cuda_2,(int &)__cuda_3);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:141:17: error: template-id ‘__wrapper__device_stub_cunn_SoftMaxBackward<2, at::cuda::typeat::Half, at::acc_type<__half, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SoftMaxBackward(_ZN2at4cuda4typeINS_4HalfEEE
&, _ZN2at4cuda4typeINS_4HalfEEE
&, _ZN2at4cuda4typeINS_4HalfEEE
&, int&)’ does not match any template declaration
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:146:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SpatialSoftMaxBackward< ::at::cuda::type , ::at::acc_type<double, (bool)1> , ::at::native::operator ::SoftMaxBackwardEpilogue>( _ZN2at4cuda4typeIdEE *&__cuda_0,_ZN2at4cuda4typeIdEE *&__cuda_1,_ZN2at4cuda4typeIdEE *&__cuda_2,::uint32_t &__cuda_3,::uint32_t &__cuda_4,::uint32_t &__cuda_5){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462627cunn_SpatialSoftMaxBackwardIddNS1_23SoftMaxBackwardEpilogueEEEvPT_S5_S5_jjj( (_ZN2at4cuda4typeIdEE &)__cuda_0,(_ZN2at4cuda4typeIdEE &)__cuda_1,(_ZN2at4cuda4typeIdEE &)__cuda_2,(::uint32_t &)__cuda_3,(::uint32_t &)__cuda_4,(::uint32_t &)__cuda_5);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:146:17: error: template-id ‘__wrapper__device_stub_cunn_SpatialSoftMaxBackward<at::cuda::type, at::acc_type<double, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SpatialSoftMaxBackward(_ZN2at4cuda4typeIdEE
&, _ZN2at4cuda4typeIdEE
&, _ZN2at4cuda4typeIdEE
&, uint32_t&, uint32_t&, uint32_t&)’ does not match any template declaration
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:151:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SpatialSoftMaxBackward< ::at::cuda::type , ::at::acc_type<float, (bool)1> , ::at::native::operator ::SoftMaxBackwardEpilogue>( _ZN2at4cuda4typeIfEE *&__cuda_0,_ZN2at4cuda4typeIfEE *&__cuda_1,_ZN2at4cuda4typeIfEE *&__cuda_2,::uint32_t &__cuda_3,::uint32_t &__cuda_4,::uint32_t &__cuda_5){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462627cunn_SpatialSoftMaxBackwardIffNS1_23SoftMaxBackwardEpilogueEEEvPT_S5_S5_jjj( (_ZN2at4cuda4typeIfEE &)__cuda_0,(_ZN2at4cuda4typeIfEE &)__cuda_1,(_ZN2at4cuda4typeIfEE &)__cuda_2,(::uint32_t &)__cuda_3,(::uint32_t &)__cuda_4,(::uint32_t &)__cuda_5);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:151:17: error: template-id ‘__wrapper__device_stub_cunn_SpatialSoftMaxBackward<at::cuda::type, at::acc_type<float, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SpatialSoftMaxBackward(_ZN2at4cuda4typeIfEE
&, _ZN2at4cuda4typeIfEE
&, _ZN2at4cuda4typeIfEE
&, uint32_t&, uint32_t&, uint32_t&)’ does not match any template declaration
In file included from tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:1:0:
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:156:17: error: parse error in template argument list
template<> __specialization_static void __wrapper__device_stub_cunn_SpatialSoftMaxBackward< ::at::cuda::type< ::at::Half> , ::at::acc_type< ::__half, (bool)1> , ::at::native::operator ::SoftMaxBackwardEpilogue>( _ZN2at4cuda4typeINS_4HalfEEE *&__cuda_0,_ZN2at4cuda4typeINS_4HalfEEE *&__cuda_1,_ZN2at4cuda4typeINS_4HalfEEE *&__cuda_2,::uint32_t &__cuda_3,::uint32_t &__cuda_4,::uint32_t &__cuda_5){__device_stub__ZN2at6native66_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a462627cunn_SpatialSoftMaxBackwardI6__halffNS1_23SoftMaxBackwardEpilogueEEEvPT_S6_S6_jjj( (_ZN2at4cuda4typeINS_4HalfEEE &)__cuda_0,(_ZN2at4cuda4typeINS_4HalfEEE &)__cuda_1,(_ZN2at4cuda4typeINS_4HalfEEE &)__cuda_2,(::uint32_t &)__cuda_3,(::uint32_t &)__cuda_4,(::uint32_t &)__cuda_5);}}}}
^
/tmp/tmpxft_00007024_00000000-4_SoftMax.cudafe1.stub.c:156:17: error: template-id ‘__wrapper__device_stub_cunn_SpatialSoftMaxBackward<at::cuda::typeat::Half, at::acc_type<__half, true>, >’ for ‘void at::native::_GLOBAL__N__42_tmpxft_00007024_00000000_7_SoftMax_cpp1_ii_826a4626::__wrapper__device_stub_cunn_SpatialSoftMaxBackward(_ZN2at4cuda4typeINS_4HalfEEE
&, _ZN2at4cuda4typeINS_4HalfEEE
&, _ZN2at4cuda4typeINS_4HalfEEE
&, uint32_t&, uint32_t&, uint32_t&)’ does not match any template declaration
cc1plus: warning: unrecognized command line option ‘-Wno-absolute-value’
CMake Error at ATen_cuda_generated_SoftMax.cu.o.Release.cmake:279 (message):
Error generating file
/home/ubuntu/pytorch0.3/aten/build/src/ATen/CMakeFiles/ATen_cuda.dir/native/cuda/./ATen_cuda_generated_SoftMax.cu.o

src/ATen/CMakeFiles/ATen_cuda.dir/build.make:1169: recipe for target 'src/ATen/CMakeFiles/ATen_cuda.dir/native/cuda/ATen_cuda_generated_SoftMax.cu.o' failed
make[2]: *** [src/ATen/CMakeFiles/ATen_cuda.dir/native/cuda/ATen_cuda_generated_SoftMax.cu.o] Error 1
make[2]: *** Waiting for unfinished jobs....
CMakeFiles/Makefile2:209: recipe for target 'src/ATen/CMakeFiles/ATen_cuda.dir/all' failed
make[1]: *** [src/ATen/CMakeFiles/ATen_cuda.dir/all] Error 2
Makefile:129: recipe for target 'all' failed

How can this be fixed?
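For context (an assumption on my part, not something confirmed in this thread): parse errors inside nvcc-generated *.cudafe1.stub.c files are commonly reported when the host g++ and the CUDA toolkit are not a supported pairing, so pinning down the exact versions in play is a reasonable first step. A minimal diagnostic sketch in C++ that prints the host compiler version and, when compiled as a .cu file with nvcc, the nvcc version:

// versions.cu (also compiles as plain C++) -- print host compiler and nvcc versions.
// This is only a diagnostic; it does not fix the stub-file parse error above.
#include <cstdio>

int main() {
#if defined(__GNUC__)
    std::printf("host compiler: gcc %d.%d.%d\n", __GNUC__, __GNUC_MINOR__, __GNUC_PATCHLEVEL__);
#endif
#if defined(__CUDACC_VER_MAJOR__)
    std::printf("nvcc: %d.%d\n", __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__);
#else
    std::printf("not compiled with nvcc\n");
#endif
    return 0;
}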

Install Error: "aarch64-linux-gnu-gcc: internal compiler error"

I ran into a compiler error while building with CMake 3.5.1. I might try upgrading CMake to 3.7...

(cv40py35) nvidia@tegra-ubuntu:~/pytorch$ cmake --version
cmake version 3.5.1

Error:
"aarch64-linux-gnu-gcc: internal compiler error (program cc1plus)"

This could just be a memory issue, since I tried building without adding swap space (on a USB drive)...
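That memory hypothesis is easy to sanity-check before re-running the build (the full build log follows below): an "internal compiler error" from cc1plus during a large parallel build is often a sign that the compiler ran out of memory or was killed for lack of it. A minimal sketch, Linux-only, that prints available memory and free swap from /proc/meminfo:

// meminfo_check.cpp -- print MemAvailable and SwapFree before a big parallel build,
// to see whether cc1plus is likely to run out of memory. Linux-only sketch.
#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream meminfo("/proc/meminfo");
    std::string line;
    while (std::getline(meminfo, line)) {
        if (line.rfind("MemAvailable:", 0) == 0 || line.rfind("SwapFree:", 0) == 0) {
            std::cout << line << '\n';
        }
    }
    return 0;
}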


-- Build files have been written to: /home/nvidia/pytorch/torch/lib/build/THPP
Scanning dependencies of target THPP
[ 11%] Building CXX object CMakeFiles/THPP.dir/Traits.cpp.o
[ 22%] Building CXX object CMakeFiles/THPP.dir/storages/THCStorage.cpp.o
[ 33%] Building CXX object CMakeFiles/THPP.dir/tensors/THCTensor.cpp.o
[ 44%] Building CXX object CMakeFiles/THPP.dir/storages/THStorage.cpp.o
[ 55%] Building CXX object CMakeFiles/THPP.dir/tensors/THTensor.cpp.o
[ 66%] Building CXX object CMakeFiles/THPP.dir/tensors/THCSTensor.cpp.o
[ 77%] Building CXX object CMakeFiles/THPP.dir/tensors/THSTensor.cpp.o
[ 88%] Building CXX object CMakeFiles/THPP.dir/TraitsCuda.cpp.o
[100%] Linking CXX shared library libTHPP.so
[100%] Built target THPP
Install the project...
-- Install configuration: "Release"
-- Installing: /home/nvidia/pytorch/torch/lib/tmp_install/lib/libTHPP.so.1
-- Installing: /home/nvidia/pytorch/torch/lib/tmp_install/lib/libTHPP.so
-- Installing: /home/nvidia/pytorch/torch/lib/tmp_install/include/THPP/THPP.h
-- Installing: /home/nvidia/pytorch/torch/lib/tmp_install/include/THPP/TraitsCuda.hpp
-- Installing: /home/nvidia/pytorch/torch/lib/tmp_install/include/THPP/Storage.hpp
-- Installing: /home/nvidia/pytorch/torch/lib/tmp_install/include/THPP/Tensor.hpp
-- Installing: /home/nvidia/pytorch/torch/lib/tmp_install/include/THPP/Traits.hpp
-- Installing: /home/nvidia/pytorch/torch/lib/tmp_install/include/THPP/storages/generic/THCStorage.hpp
-- Installing: /home/nvidia/pytorch/torch/lib/tmp_install/include/THPP/storages/generic/THStorage.hpp
-- Installing: /home/nvidia/pytorch/torch/lib/tmp_install/include/THPP/storages/THCStorage.hpp
-- Installing: /home/nvidia/pytorch/torch/lib/tmp_install/include/THPP/storages/THStorage.hpp
-- Installing: /home/nvidia/pytorch/torch/lib/tmp_install/include/THPP/tensors/generic/THCSTensor.hpp
-- Installing: /home/nvidia/pytorch/torch/lib/tmp_install/include/THPP/tensors/generic/THTensor.hpp
-- Installing: /home/nvidia/pytorch/torch/lib/tmp_install/include/THPP/tensors/generic/THSTensor.hpp
-- Installing: /home/nvidia/pytorch/torch/lib/tmp_install/include/THPP/tensors/generic/THCTensor.hpp
-- Installing: /home/nvidia/pytorch/torch/lib/tmp_install/include/THPP/tensors/THCSTensor.hpp
-- Installing: /home/nvidia/pytorch/torch/lib/tmp_install/include/THPP/tensors/THTensor.hpp
-- Installing: /home/nvidia/pytorch/torch/lib/tmp_install/include/THPP/tensors/THSTensor.hpp
-- Installing: /home/nvidia/pytorch/torch/lib/tmp_install/include/THPP/tensors/THCTensor.hpp
-- Installing: /home/nvidia/pytorch/torch/lib/tmp_install/include/THPP/Type.hpp
-- The C compiler identification is GNU 5.4.0
-- The CXX compiler identification is GNU 5.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for clock_gettime in rt
-- Looking for clock_gettime in rt - found
-- Configuring done
-- Generating done
CMake Warning:
Manually-specified variables were not used by the project:

CUDA_NVCC_FLAGS
NO_CUDA
THCS_LIBRARIES
THCUNN_SO_VERSION
THC_LIBRARIES
THC_SO_VERSION
THD_SO_VERSION
THNN_SO_VERSION
THPP_LIBRARIES
THS_LIBRARIES
TH_INCLUDE_PATH
TH_LIB_PATH
TH_SO_VERSION
Torch_FOUND

-- Build files have been written to: /home/nvidia/pytorch/torch/lib/build/libshm
Scanning dependencies of target torch_shm_manager
Scanning dependencies of target shm
[ 25%] Building CXX object CMakeFiles/torch_shm_manager.dir/manager.cpp.o
[ 50%] Building CXX object CMakeFiles/shm.dir/core.cpp.o
/home/nvidia/pytorch/torch/lib/libshm/manager.cpp: In function 'void print_init_message(const char*)':
/home/nvidia/pytorch/torch/lib/libshm/manager.cpp:58:37: warning: ignoring return value of 'ssize_t write(int, const void*, size_t)', declared with attribute warn_unused_result [-Wunused-result]
write(1, message, strlen(message));
^
/home/nvidia/pytorch/torch/lib/libshm/manager.cpp:59:20: warning: ignoring return value of 'ssize_t write(int, const void*, size_t)', declared with attribute warn_unused_result [-Wunused-result]
write(1, "\n", 1);
^
[ 75%] Linking CXX shared library libshm.so
[ 75%] Built target shm
[100%] Linking CXX executable torch_shm_manager
CMakeFiles/torch_shm_manager.dir/manager.cpp.o: In function `main':
manager.cpp:(.text.startup+0x3c): warning: the use of `tmpnam' is dangerous, better use `mkstemp'
[100%] Built target torch_shm_manager
Install the project...
-- Install configuration: "Release"
-- Installing: /home/nvidia/pytorch/torch/lib/tmp_install/lib/libshm.so
-- Installing: /home/nvidia/pytorch/torch/lib/tmp_install/include/libshm.h
-- Installing: /home/nvidia/pytorch/torch/lib/tmp_install/bin/torch_shm_manager
running build
running build_py
-- Building version 0.1.10+ac9245a
creating build
creating build/lib.linux-aarch64-3.5
creating build/lib.linux-aarch64-3.5/torch
copying torch/_torch_docs.py -> build/lib.linux-aarch64-3.5/torch
copying torch/_utils.py -> build/lib.linux-aarch64-3.5/torch
copying torch/storage.py -> build/lib.linux-aarch64-3.5/torch
copying torch/serialization.py -> build/lib.linux-aarch64-3.5/torch
copying torch/tensor.py -> build/lib.linux-aarch64-3.5/torch
copying torch/version.py -> build/lib.linux-aarch64-3.5/torch
copying torch/functional.py -> build/lib.linux-aarch64-3.5/torch
copying torch/_tensor_str.py -> build/lib.linux-aarch64-3.5/torch
copying torch/_tensor_docs.py -> build/lib.linux-aarch64-3.5/torch
copying torch/init.py -> build/lib.linux-aarch64-3.5/torch
creating build/lib.linux-aarch64-3.5/tools
copying tools/init.py -> build/lib.linux-aarch64-3.5/tools
creating build/lib.linux-aarch64-3.5/torch/legacy
copying torch/legacy/init.py -> build/lib.linux-aarch64-3.5/torch/legacy
creating build/lib.linux-aarch64-3.5/torch/utils
copying torch/utils/model_zoo.py -> build/lib.linux-aarch64-3.5/torch/utils
copying torch/utils/hooks.py -> build/lib.linux-aarch64-3.5/torch/utils
copying torch/utils/init.py -> build/lib.linux-aarch64-3.5/torch/utils
creating build/lib.linux-aarch64-3.5/torch/backends
copying torch/backends/init.py -> build/lib.linux-aarch64-3.5/torch/backends
creating build/lib.linux-aarch64-3.5/torch/multiprocessing
copying torch/multiprocessing/pool.py -> build/lib.linux-aarch64-3.5/torch/multiprocessing
copying torch/multiprocessing/init.py -> build/lib.linux-aarch64-3.5/torch/multiprocessing
copying torch/multiprocessing/reductions.py -> build/lib.linux-aarch64-3.5/torch/multiprocessing
copying torch/multiprocessing/queue.py -> build/lib.linux-aarch64-3.5/torch/multiprocessing
creating build/lib.linux-aarch64-3.5/torch/sparse
copying torch/sparse/init.py -> build/lib.linux-aarch64-3.5/torch/sparse
creating build/lib.linux-aarch64-3.5/torch/distributed
copying torch/distributed/remote_types.py -> build/lib.linux-aarch64-3.5/torch/distributed
copying torch/distributed/init.py -> build/lib.linux-aarch64-3.5/torch/distributed
copying torch/distributed/collectives.py -> build/lib.linux-aarch64-3.5/torch/distributed
creating build/lib.linux-aarch64-3.5/torch/_thnn
copying torch/_thnn/utils.py -> build/lib.linux-aarch64-3.5/torch/_thnn
copying torch/_thnn/init.py -> build/lib.linux-aarch64-3.5/torch/_thnn
creating build/lib.linux-aarch64-3.5/torch/autograd
copying torch/autograd/gradcheck.py -> build/lib.linux-aarch64-3.5/torch/autograd
copying torch/autograd/stochastic_function.py -> build/lib.linux-aarch64-3.5/torch/autograd
copying torch/autograd/function.py -> build/lib.linux-aarch64-3.5/torch/autograd
copying torch/autograd/engine.py -> build/lib.linux-aarch64-3.5/torch/autograd
copying torch/autograd/init.py -> build/lib.linux-aarch64-3.5/torch/autograd
copying torch/autograd/variable.py -> build/lib.linux-aarch64-3.5/torch/autograd
creating build/lib.linux-aarch64-3.5/torch/nn
copying torch/nn/parameter.py -> build/lib.linux-aarch64-3.5/torch/nn
copying torch/nn/functional.py -> build/lib.linux-aarch64-3.5/torch/nn
copying torch/nn/init.py -> build/lib.linux-aarch64-3.5/torch/nn
copying torch/nn/init.py -> build/lib.linux-aarch64-3.5/torch/nn
creating build/lib.linux-aarch64-3.5/torch/cuda
copying torch/cuda/sparse.py -> build/lib.linux-aarch64-3.5/torch/cuda
copying torch/cuda/nccl.py -> build/lib.linux-aarch64-3.5/torch/cuda
copying torch/cuda/streams.py -> build/lib.linux-aarch64-3.5/torch/cuda
copying torch/cuda/random.py -> build/lib.linux-aarch64-3.5/torch/cuda
copying torch/cuda/comm.py -> build/lib.linux-aarch64-3.5/torch/cuda
copying torch/cuda/init.py -> build/lib.linux-aarch64-3.5/torch/cuda
creating build/lib.linux-aarch64-3.5/torch/optim
copying torch/optim/adam.py -> build/lib.linux-aarch64-3.5/torch/optim
copying torch/optim/adagrad.py -> build/lib.linux-aarch64-3.5/torch/optim
copying torch/optim/adadelta.py -> build/lib.linux-aarch64-3.5/torch/optim
copying torch/optim/rmsprop.py -> build/lib.linux-aarch64-3.5/torch/optim
copying torch/optim/sgd.py -> build/lib.linux-aarch64-3.5/torch/optim
copying torch/optim/asgd.py -> build/lib.linux-aarch64-3.5/torch/optim
copying torch/optim/optimizer.py -> build/lib.linux-aarch64-3.5/torch/optim
copying torch/optim/init.py -> build/lib.linux-aarch64-3.5/torch/optim
copying torch/optim/adamax.py -> build/lib.linux-aarch64-3.5/torch/optim
copying torch/optim/lbfgs.py -> build/lib.linux-aarch64-3.5/torch/optim
copying torch/optim/rprop.py -> build/lib.linux-aarch64-3.5/torch/optim
creating build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/LeakyReLU.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Sum.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Concat.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Bilinear.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/CosineDistance.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/AddConstant.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/DistKLDivCriterion.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/DotProduct.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/WeightedEuclidean.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/MarginRankingCriterion.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Add.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/L1HingeEmbeddingCriterion.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/GradientReversal.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialAdaptiveMaxPooling.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/VolumetricFullConvolution.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Cosine.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Log.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SoftShrink.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/MaskedSelect.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/CMul.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialConvolution.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/L1Cost.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Clamp.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/BatchNormalization.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/utils.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/MM.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Unsqueeze.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Sequential.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/PartialLinear.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SelectTable.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/MulConstant.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialReflectionPadding.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Reshape.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Min.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Tanh.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Criterion.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/HardTanh.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/VolumetricAveragePooling.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/LookupTable.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialLPPooling.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SoftMarginCriterion.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialMaxPooling.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialReplicationPadding.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Sqrt.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SplitTable.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Replicate.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/CSubTable.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialSubtractiveNormalization.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Squeeze.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Index.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Narrow.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/MultiLabelMarginCriterion.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Abs.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/ParallelTable.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/VolumetricConvolution.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialUpSamplingNearest.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialCrossMapLRN.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/DepthConcat.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/VolumetricDropout.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialDivisiveNormalization.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/MSECriterion.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/HardShrink.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Threshold.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/CosineEmbeddingCriterion.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Module.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Identity.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/MarginCriterion.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Mean.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/WeightedMSECriterion.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/FlattenTable.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Contiguous.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Transpose.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/CrossEntropyCriterion.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialClassNLLCriterion.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SoftMax.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/CMulTable.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialSubSampling.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Dropout.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialFractionalMaxPooling.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/CDivTable.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/TemporalConvolution.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/JoinTable.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/NarrowTable.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/VolumetricBatchNormalization.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialDilatedConvolution.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/CAddTable.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/MultiLabelSoftMarginCriterion.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialFullConvolution.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/LogSoftMax.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/MV.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/MixtureTable.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/PReLU.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialConvolutionLocal.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Container.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialAveragePooling.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/RReLU.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Linear.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Exp.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Padding.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Max.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Select.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialBatchNormalization.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/AbsCriterion.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/LogSigmoid.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/CriterionTable.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Square.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SmoothL1Criterion.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialDropout.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Euclidean.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/View.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/ClassNLLCriterion.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/MultiMarginCriterion.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/TemporalMaxPooling.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SoftSign.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/ReLU6.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialSoftMax.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/VolumetricMaxPooling.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Power.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialFullConvolutionMap.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Copy.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/init.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/ReLU.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Mul.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/MultiCriterion.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/VolumetricMaxUnpooling.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/PairwiseDistance.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialMaxUnpooling.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Parallel.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialConvolutionMap.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SoftPlus.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/HingeEmbeddingCriterion.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/TanhShrink.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/BCECriterion.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Normalize.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/VolumetricReplicationPadding.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialContrastiveNormalization.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SpatialZeroPadding.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/Sigmoid.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/ConcatTable.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/TemporalSubSampling.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/L1Penalty.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/ClassSimplexCriterion.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/ParallelCriterion.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/ELU.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
copying torch/legacy/nn/SoftMin.py -> build/lib.linux-aarch64-3.5/torch/legacy/nn
creating build/lib.linux-aarch64-3.5/torch/legacy/optim
copying torch/legacy/optim/adam.py -> build/lib.linux-aarch64-3.5/torch/legacy/optim
copying torch/legacy/optim/nag.py -> build/lib.linux-aarch64-3.5/torch/legacy/optim
copying torch/legacy/optim/adagrad.py -> build/lib.linux-aarch64-3.5/torch/legacy/optim
copying torch/legacy/optim/adadelta.py -> build/lib.linux-aarch64-3.5/torch/legacy/optim
copying torch/legacy/optim/rmsprop.py -> build/lib.linux-aarch64-3.5/torch/legacy/optim
copying torch/legacy/optim/sgd.py -> build/lib.linux-aarch64-3.5/torch/legacy/optim
copying torch/legacy/optim/asgd.py -> build/lib.linux-aarch64-3.5/torch/legacy/optim
copying torch/legacy/optim/cg.py -> build/lib.linux-aarch64-3.5/torch/legacy/optim
copying torch/legacy/optim/__init__.py -> build/lib.linux-aarch64-3.5/torch/legacy/optim
copying torch/legacy/optim/adamax.py -> build/lib.linux-aarch64-3.5/torch/legacy/optim
copying torch/legacy/optim/lbfgs.py -> build/lib.linux-aarch64-3.5/torch/legacy/optim
copying torch/legacy/optim/rprop.py -> build/lib.linux-aarch64-3.5/torch/legacy/optim
creating build/lib.linux-aarch64-3.5/torch/utils/ffi
copying torch/utils/ffi/__init__.py -> build/lib.linux-aarch64-3.5/torch/utils/ffi
creating build/lib.linux-aarch64-3.5/torch/utils/trainer
copying torch/utils/trainer/trainer.py -> build/lib.linux-aarch64-3.5/torch/utils/trainer
copying torch/utils/trainer/__init__.py -> build/lib.linux-aarch64-3.5/torch/utils/trainer
creating build/lib.linux-aarch64-3.5/torch/utils/serialization
copying torch/utils/serialization/read_lua_file.py -> build/lib.linux-aarch64-3.5/torch/utils/serialization
copying torch/utils/serialization/__init__.py -> build/lib.linux-aarch64-3.5/torch/utils/serialization
creating build/lib.linux-aarch64-3.5/torch/utils/data
copying torch/utils/data/dataloader.py -> build/lib.linux-aarch64-3.5/torch/utils/data
copying torch/utils/data/dataset.py -> build/lib.linux-aarch64-3.5/torch/utils/data
copying torch/utils/data/__init__.py -> build/lib.linux-aarch64-3.5/torch/utils/data
copying torch/utils/data/sampler.py -> build/lib.linux-aarch64-3.5/torch/utils/data
creating build/lib.linux-aarch64-3.5/torch/utils/trainer/plugins
copying torch/utils/trainer/plugins/progress.py -> build/lib.linux-aarch64-3.5/torch/utils/trainer/plugins
copying torch/utils/trainer/plugins/accuracy.py -> build/lib.linux-aarch64-3.5/torch/utils/trainer/plugins
copying torch/utils/trainer/plugins/plugin.py -> build/lib.linux-aarch64-3.5/torch/utils/trainer/plugins
copying torch/utils/trainer/plugins/monitor.py -> build/lib.linux-aarch64-3.5/torch/utils/trainer/plugins
copying torch/utils/trainer/plugins/logger.py -> build/lib.linux-aarch64-3.5/torch/utils/trainer/plugins
copying torch/utils/trainer/plugins/loss.py -> build/lib.linux-aarch64-3.5/torch/utils/trainer/plugins
copying torch/utils/trainer/plugins/time.py -> build/lib.linux-aarch64-3.5/torch/utils/trainer/plugins
copying torch/utils/trainer/plugins/__init__.py -> build/lib.linux-aarch64-3.5/torch/utils/trainer/plugins
creating build/lib.linux-aarch64-3.5/torch/backends/cudnn
copying torch/backends/cudnn/rnn.py -> build/lib.linux-aarch64-3.5/torch/backends/cudnn
copying torch/backends/cudnn/__init__.py -> build/lib.linux-aarch64-3.5/torch/backends/cudnn
creating build/lib.linux-aarch64-3.5/torch/autograd/_functions
copying torch/autograd/_functions/stochastic.py -> build/lib.linux-aarch64-3.5/torch/autograd/_functions
copying torch/autograd/_functions/reduce.py -> build/lib.linux-aarch64-3.5/torch/autograd/_functions
copying torch/autograd/_functions/tensor.py -> build/lib.linux-aarch64-3.5/torch/autograd/_functions
copying torch/autograd/_functions/blas.py -> build/lib.linux-aarch64-3.5/torch/autograd/_functions
copying torch/autograd/_functions/basic_ops.py -> build/lib.linux-aarch64-3.5/torch/autograd/_functions
copying torch/autograd/_functions/pointwise.py -> build/lib.linux-aarch64-3.5/torch/autograd/_functions
copying torch/autograd/_functions/linalg.py -> build/lib.linux-aarch64-3.5/torch/autograd/_functions
copying torch/autograd/_functions/__init__.py -> build/lib.linux-aarch64-3.5/torch/autograd/_functions
copying torch/autograd/_functions/compare.py -> build/lib.linux-aarch64-3.5/torch/autograd/_functions
creating build/lib.linux-aarch64-3.5/torch/nn/utils
copying torch/nn/utils/clip_grad.py -> build/lib.linux-aarch64-3.5/torch/nn/utils
copying torch/nn/utils/rnn.py -> build/lib.linux-aarch64-3.5/torch/nn/utils
copying torch/nn/utils/__init__.py -> build/lib.linux-aarch64-3.5/torch/nn/utils
creating build/lib.linux-aarch64-3.5/torch/nn/backends
copying torch/nn/backends/backend.py -> build/lib.linux-aarch64-3.5/torch/nn/backends
copying torch/nn/backends/thnn.py -> build/lib.linux-aarch64-3.5/torch/nn/backends
copying torch/nn/backends/__init__.py -> build/lib.linux-aarch64-3.5/torch/nn/backends
creating build/lib.linux-aarch64-3.5/torch/nn/_functions
copying torch/nn/_functions/conv.py -> build/lib.linux-aarch64-3.5/torch/nn/_functions
copying torch/nn/_functions/padding.py -> build/lib.linux-aarch64-3.5/torch/nn/_functions
copying torch/nn/_functions/dropout.py -> build/lib.linux-aarch64-3.5/torch/nn/_functions
copying torch/nn/_functions/loss.py -> build/lib.linux-aarch64-3.5/torch/nn/_functions
copying torch/nn/_functions/rnn.py -> build/lib.linux-aarch64-3.5/torch/nn/_functions
copying torch/nn/_functions/batchnorm.py -> build/lib.linux-aarch64-3.5/torch/nn/_functions
copying torch/nn/_functions/linear.py -> build/lib.linux-aarch64-3.5/torch/nn/_functions
copying torch/nn/_functions/__init__.py -> build/lib.linux-aarch64-3.5/torch/nn/_functions
copying torch/nn/_functions/activation.py -> build/lib.linux-aarch64-3.5/torch/nn/_functions
creating build/lib.linux-aarch64-3.5/torch/nn/parallel
copying torch/nn/parallel/data_parallel.py -> build/lib.linux-aarch64-3.5/torch/nn/parallel
copying torch/nn/parallel/replicate.py -> build/lib.linux-aarch64-3.5/torch/nn/parallel
copying torch/nn/parallel/parallel_apply.py -> build/lib.linux-aarch64-3.5/torch/nn/parallel
copying torch/nn/parallel/scatter_gather.py -> build/lib.linux-aarch64-3.5/torch/nn/parallel
copying torch/nn/parallel/_functions.py -> build/lib.linux-aarch64-3.5/torch/nn/parallel
copying torch/nn/parallel/__init__.py -> build/lib.linux-aarch64-3.5/torch/nn/parallel
creating build/lib.linux-aarch64-3.5/torch/nn/modules
copying torch/nn/modules/container.py -> build/lib.linux-aarch64-3.5/torch/nn/modules
copying torch/nn/modules/sparse.py -> build/lib.linux-aarch64-3.5/torch/nn/modules
copying torch/nn/modules/conv.py -> build/lib.linux-aarch64-3.5/torch/nn/modules
copying torch/nn/modules/utils.py -> build/lib.linux-aarch64-3.5/torch/nn/modules
copying torch/nn/modules/module.py -> build/lib.linux-aarch64-3.5/torch/nn/modules
copying torch/nn/modules/padding.py -> build/lib.linux-aarch64-3.5/torch/nn/modules
copying torch/nn/modules/upsampling.py -> build/lib.linux-aarch64-3.5/torch/nn/modules
copying torch/nn/modules/pooling.py -> build/lib.linux-aarch64-3.5/torch/nn/modules
copying torch/nn/modules/distance.py -> build/lib.linux-aarch64-3.5/torch/nn/modules
copying torch/nn/modules/pixelshuffle.py -> build/lib.linux-aarch64-3.5/torch/nn/modules
copying torch/nn/modules/dropout.py -> build/lib.linux-aarch64-3.5/torch/nn/modules
copying torch/nn/modules/normalization.py -> build/lib.linux-aarch64-3.5/torch/nn/modules
copying torch/nn/modules/loss.py -> build/lib.linux-aarch64-3.5/torch/nn/modules
copying torch/nn/modules/rnn.py -> build/lib.linux-aarch64-3.5/torch/nn/modules
copying torch/nn/modules/batchnorm.py -> build/lib.linux-aarch64-3.5/torch/nn/modules
copying torch/nn/modules/linear.py -> build/lib.linux-aarch64-3.5/torch/nn/modules
copying torch/nn/modules/__init__.py -> build/lib.linux-aarch64-3.5/torch/nn/modules
copying torch/nn/modules/activation.py -> build/lib.linux-aarch64-3.5/torch/nn/modules
creating build/lib.linux-aarch64-3.5/torch/nn/_functions/thnn
copying torch/nn/_functions/thnn/sparse.py -> build/lib.linux-aarch64-3.5/torch/nn/_functions/thnn
copying torch/nn/_functions/thnn/auto.py -> build/lib.linux-aarch64-3.5/torch/nn/_functions/thnn
copying torch/nn/_functions/thnn/upsampling.py -> build/lib.linux-aarch64-3.5/torch/nn/_functions/thnn
copying torch/nn/_functions/thnn/pooling.py -> build/lib.linux-aarch64-3.5/torch/nn/_functions/thnn
copying torch/nn/_functions/thnn/normalization.py -> build/lib.linux-aarch64-3.5/torch/nn/_functions/thnn
copying torch/nn/_functions/thnn/loss.py -> build/lib.linux-aarch64-3.5/torch/nn/_functions/thnn
copying torch/nn/_functions/thnn/__init__.py -> build/lib.linux-aarch64-3.5/torch/nn/_functions/thnn
copying torch/nn/_functions/thnn/activation.py -> build/lib.linux-aarch64-3.5/torch/nn/_functions/thnn
creating build/lib.linux-aarch64-3.5/torch/lib
copying torch/lib/libshm.so -> build/lib.linux-aarch64-3.5/torch/lib
copying torch/lib/libTHNN.so.1 -> build/lib.linux-aarch64-3.5/torch/lib
copying torch/lib/libTHCUNN.so.1 -> build/lib.linux-aarch64-3.5/torch/lib
copying torch/lib/libTHS.so.1 -> build/lib.linux-aarch64-3.5/torch/lib
copying torch/lib/libTH.so.1 -> build/lib.linux-aarch64-3.5/torch/lib
copying torch/lib/libnccl.so -> build/lib.linux-aarch64-3.5/torch/lib
copying torch/lib/libTHC.so.1 -> build/lib.linux-aarch64-3.5/torch/lib
copying torch/lib/libTHCS.so.1 -> build/lib.linux-aarch64-3.5/torch/lib
copying torch/lib/libTHPP.so.1 -> build/lib.linux-aarch64-3.5/torch/lib
copying torch/lib/torch_shm_manager -> build/lib.linux-aarch64-3.5/torch/lib
copying torch/lib/THCUNN.h -> build/lib.linux-aarch64-3.5/torch/lib
copying torch/lib/THNN.h -> build/lib.linux-aarch64-3.5/torch/lib
creating build/lib.linux-aarch64-3.5/torch/lib/include
creating build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/THFilePrivate.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/THRandom.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/THStorage.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/TH.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/THTensorMacros.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/THMath.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/THDiskFile.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/THGeneral.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/THAtomic.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/THFile.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/THVector.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/THTensor.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/THTensorApply.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/THGenerateHalfType.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/THGenerateAllTypes.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/THHalf.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/THMemoryFile.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/THTensorDimApply.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/THBlas.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/THAllocator.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/THGenerateIntTypes.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/THLapack.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/THGenerateFloatTypes.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
copying torch/lib/include/TH/THLogAdd.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH
creating build/lib.linux-aarch64-3.5/torch/lib/include/TH/generic
copying torch/lib/include/TH/generic/THTensorMath.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH/generic
copying torch/lib/include/TH/generic/THStorage.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH/generic
copying torch/lib/include/TH/generic/THTensorLapack.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH/generic
copying torch/lib/include/TH/generic/THTensorCopy.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH/generic
copying torch/lib/include/TH/generic/THVector.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH/generic
copying torch/lib/include/TH/generic/THTensor.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH/generic
copying torch/lib/include/TH/generic/THTensorRandom.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH/generic
copying torch/lib/include/TH/generic/THStorageCopy.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH/generic
copying torch/lib/include/TH/generic/THTensorConv.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH/generic
copying torch/lib/include/TH/generic/THBlas.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH/generic
copying torch/lib/include/TH/generic/THLapack.h -> build/lib.linux-aarch64-3.5/torch/lib/include/TH/generic
creating build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCGenerateFloatType.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCAllocator.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCSleep.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCGenerateByteType.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCGeneral.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCGenerateHalfType.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCTensorCopy.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCTensorConv.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCStream.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCCachingAllocator.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCCachingHostAllocator.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCStorage.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCGenerateShortType.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCGenerateLongType.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCHalf.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCTensorTopK.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCGenerateAllTypes.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCGenerateFloatTypes.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THC.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCThreadLocal.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCTensorRandom.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCGenerateIntType.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCStorageCopy.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCGenerateCharType.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCTensor.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCBlas.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCTensorMath.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
copying torch/lib/include/THC/THCGenerateDoubleType.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC
creating build/lib.linux-aarch64-3.5/torch/lib/include/THC/generic
copying torch/lib/include/THC/generic/THCTensorMathReduce.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC/generic
copying torch/lib/include/THC/generic/THCTensorMasked.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC/generic
copying torch/lib/include/THC/generic/THCTensorCopy.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC/generic
copying torch/lib/include/THC/generic/THCTensorMathCompareT.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC/generic
copying torch/lib/include/THC/generic/THCTensorScatterGather.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC/generic
copying torch/lib/include/THC/generic/THCStorage.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC/generic
copying torch/lib/include/THC/generic/THCTensorMathCompare.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC/generic
copying torch/lib/include/THC/generic/THCTensorIndex.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC/generic
copying torch/lib/include/THC/generic/THCTensorMathPointwise.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC/generic
copying torch/lib/include/THC/generic/THCTensorMathBlas.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC/generic
copying torch/lib/include/THC/generic/THCTensorRandom.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC/generic
copying torch/lib/include/THC/generic/THCStorageCopy.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC/generic
copying torch/lib/include/THC/generic/THCTensorSort.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC/generic
copying torch/lib/include/THC/generic/THCTensorMathScan.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC/generic
copying torch/lib/include/THC/generic/THCTensorMathPairwise.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC/generic
copying torch/lib/include/THC/generic/THCTensor.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC/generic
copying torch/lib/include/THC/generic/THCTensorMathMagma.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC/generic
copying torch/lib/include/THC/generic/THCTensorMath.h -> build/lib.linux-aarch64-3.5/torch/lib/include/THC/generic
running build_ext
-- Building with NumPy bindings
-- Detected cuDNN at /usr/lib/aarch64-linux-gnu, /usr/include
-- Detected CUDA at /usr/local/cuda-8.0
building 'torch._C' extension
creating build/temp.linux-aarch64-3.5
creating build/temp.linux-aarch64-3.5/torch
creating build/temp.linux-aarch64-3.5/torch/csrc
creating build/temp.linux-aarch64-3.5/torch/csrc/utils
creating build/temp.linux-aarch64-3.5/torch/csrc/autograd
creating build/temp.linux-aarch64-3.5/torch/csrc/autograd/functions
creating build/temp.linux-aarch64-3.5/torch/csrc/nn
creating build/temp.linux-aarch64-3.5/torch/csrc/cuda
creating build/temp.linux-aarch64-3.5/torch/csrc/cudnn
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/PtrWrapper.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/PtrWrapper.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/Size.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/Size.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/Storage.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/Storage.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/utils.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/utils.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/Module.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/Module.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/Exceptions.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/Exceptions.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/Tensor.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/Tensor.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/utils/object_ptr.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/utils/object_ptr.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/Generator.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/Generator.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/allocators.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/allocators.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/serialization.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/serialization.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/autograd/function.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/autograd/function.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/autograd/variable.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/autograd/variable.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/DynamicTypes.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/DynamicTypes.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/autograd/init.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/autograd/init.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/autograd/grad_buffer.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/autograd/grad_buffer.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/autograd/engine.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/autograd/engine.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/byte_order.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/byte_order.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/autograd/python_function.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/autograd/python_function.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/autograd/python_engine.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/autograd/python_engine.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/autograd/functions/batch_normalization.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/autograd/functions/batch_normalization.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/nn/THNN_generic.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/nn/THNN_generic.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/autograd/functions/init.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/autograd/functions/init.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/autograd/python_cpp_function.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/autograd/python_cpp_function.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/cuda/Stream.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/cuda/Stream.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/autograd/python_variable.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/autograd/python_variable.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/cuda/Tensor.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/cuda/Tensor.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/cuda/utils.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/cuda/utils.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/cuda/Module.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/cuda/Module.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/cuda/serialization.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/cuda/serialization.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/cuda/Storage.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/cuda/Storage.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/cudnn/BatchNorm.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/cudnn/BatchNorm.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/cudnn/Conv.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/cudnn/Conv.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
aarch64-linux-gnu-gcc: internal compiler error: Killed (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
See file:///usr/share/doc/gcc-5/README.Bugs for instructions.
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/nvidia/pytorch -I/home/nvidia/pytorch/torch/csrc -I/home/nvidia/pytorch/torch/lib/tmp_install/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/TH -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THPP -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THNN -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/local/cuda-8.0/include -I/home/nvidia/pytorch/torch/lib/tmp_install/include/THCUNN -I/usr/include -I/usr/include/python3.5m -c torch/csrc/cudnn/Handles.cpp -o build/temp.linux-aarch64-3.5/torch/csrc/cudnn/Handles.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda-8.0/lib64 -DWITH_CUDNN
error: command 'aarch64-linux-gnu-gcc' failed with exit status 4
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
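
The "internal compiler error: Killed (program cc1plus)" line usually means the kernel's out-of-memory killer stopped the compiler rather than a genuine GCC bug; the heavier translation units being compiled at that point (e.g. torch/csrc/cudnn/Conv.cpp just above) can exhaust the TX2's RAM when several of them build in parallel. A minimal workaround sketch, assuming a stock JetPack rootfs with no swap configured; the 4 GB size and the MAX_JOBS variable are assumptions, and this PyTorch revision may ignore MAX_JOBS, in which case the swap file alone is often enough:

# create and enable a temporary swap file (4 GB is an assumption; adjust to taste)
sudo dd if=/dev/zero of=/swapfile bs=1M count=4096
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# retry the build with reduced parallelism; MAX_JOBS may not be honored by this revision
cd ~/pytorch && MAX_JOBS=2 python3 setup.py install --user

Removing the swap file afterwards (sudo swapoff /swapfile && sudo rm /swapfile) avoids unnecessary wear on the eMMC.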

AttributeError: module 'torch' has no attribute '__version__'

I was able to build pytorch using your instructions, but when testing the installation I ran into a missing attribute error (see below):

Nvidia TX1
Ubuntu 16
Python 3.5
Cmake 3.7.2

torch==0.1.10+ac9245a
torchvision==0.2.2.post3

(cv40py35) nvidia@tegra-ubuntu:~$ which cmake
/usr/local/bin/cmake
(cv40py35) nvidia@tegra-ubuntu:~$ cmake --version
cmake version 3.7.2
Cuda 8
Cudnn 5

(cv40py35) nvidia@tegra-ubuntu:~$ python3 -c 'import sys; print("\n".join(sys.path))'

/home/nvidia/.virtualenvs/cv40py35/lib/python35.zip
/home/nvidia/.virtualenvs/cv40py35/lib/python3.5
/home/nvidia/.virtualenvs/cv40py35/lib/python3.5/plat-aarch64-linux-gnu
/home/nvidia/.virtualenvs/cv40py35/lib/python3.5/lib-dynload
/usr/lib/python3.5
/usr/lib/python3.5/plat-aarch64-linux-gnu
/home/nvidia/.virtualenvs/cv40py35/lib/python3.5/site-packages


(cv40py35) nvidia@tegra-ubuntu:/~$ python
Python 3.5.2 (default, Nov 23 2017, 16:37:01)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.

import torch
print(torch.__version__)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'torch' has no attribute '__version__'

pytorch-install.log
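
Two hedged checks that are not part of the original report. First, if python is launched from inside the ~/pytorch checkout, "import torch" resolves to the un-built source directory instead of the installed package, which can produce exactly this kind of missing-attribute error; run the test from another directory and print where the module was loaded from. Second, a build pinned to 0.1.10+ac9245a is old enough that torch.__version__ may simply not be defined in that revision, so a small CUDA tensor operation is a more reliable smoke test than the version string:

# run from outside the source tree so the installed package is the one imported
cd ~
python3 -c "import torch; print(torch.__file__)"
# exercise the CUDA build directly instead of relying on __version__
python3 -c "import torch; x = torch.rand(2, 3).cuda(); print(x + x)"

If torch.__file__ still points into /home/nvidia/pytorch, the source checkout is shadowing the installed package.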

CMake Error at ATen_generated_THCTensorMode.cu.o.Release.cmake:279 (message): Error generating file

Hi, I met an error as follows:
[ 18%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorScatterGather.cu.o
[ 18%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorTopK.cu.o
[ 18%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorSort.cu.o
[ 18%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorTypeUtils.cu.o
[ 19%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCSortUtils.cu.o
[ 19%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorMode.cu.o
[ 19%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorSortByte.cu.o
[ 19%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathCompareTByte.cu.o
Killed
CMake Error at ATen_generated_THCTensorMode.cu.o.Release.cmake:279 (message):
Error generating file
/home/nvidia/pytorch/aten/build/src/ATen/CMakeFiles/ATen.dir/__/THC/./ATen_generated_THCTensorMode.cu.o

src/ATen/CMakeFiles/ATen.dir/build.make:210: recipe for target 'src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorMode.cu.o' failed
make[2]: *** [src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorMode.cu.o] Error 1
make[2]: *** Waiting for unfinished jobs....
CMakeFiles/Makefile2:233: recipe for target 'src/ATen/CMakeFiles/ATen.dir/all' failed
make[1]: *** [src/ATen/CMakeFiles/ATen.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2

Have you met this error? Waiting for your reply, thanks!
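
A hedged reading of the bare "Killed" line above the CMake message: the kernel's out-of-memory killer is terminating one of the parallel nvcc jobs while ATen compiles, the same failure mode as the cc1plus crash in the log further up. One way to confirm it and work around it (the dmesg pattern and the -j1 retry are assumptions about your setup, not something taken from this report):

# check whether the OOM killer fired during the failed build
dmesg | grep -i -E "out of memory|killed process"
# add swap as sketched above, then retry just the ATen step with a single job
cd ~/pytorch/aten/build && make -j1
# once ATen builds, re-run the top-level install
cd ~/pytorch && python3 setup.py install --user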
