paucarre / tiefvision
End-to-end deep learning image-similarity search engine
License: Apache License 2.0
When running the file
luajit split-encoder-classifier.lua
luajit: split-encoder-classifier.lua:15: module 'image' not found:
no field package.preload['image']
no file 'nil'
no file '/home/sparksdrobe/project/tiefvision/src/torch/image.lua'
no file './image.so'
no file '/usr/local/lib/lua/5.1/image.so'
no file '/dsvm/tools/torch/lib/lua/5.1/image.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
stack traceback:
[C]: in function 'require'
split-encoder-classifier.lua:15: in main chunk
[C]: at 0x00405d50
I'm getting the above error
Thanks
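The Torch 'image' package is a separate rock and is not always present in every Torch install. A likely fix, sketched below, assuming luarocks is on your PATH and writes into the same Torch tree that your luajit binary reads from:

```shell
# Install the missing 'image' rock, then confirm it loads.
luarocks install image
luajit -e "require('image'); print('image ok')"
```

If the rock installs but the module is still not found, luarocks is probably targeting a different Torch tree than luajit searches; compare the paths in the error message with the output of "luarocks path".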
We tried to install the code base on a Linux machine provisioned in the Azure cloud. The instructions in the developer guide are incomplete; some of the issues I hit are listed below. If someone has a better developer guide, or another way to set up the code base, please share it here.
Open-ended statements like the one below: change what, exactly, in application.conf?
"TiefVision can be configured to use another database, like PostgreSQL. Just change slick.dbs.bounding_box.** in $TIEFVISION_HOME/src/scala/tiefvision-web/conf/application.conf."
Transfer-learning issue; refer to the link:
#71
In the Bounding Box Regression section, when I run the query
SELECT COUNT(*) FROM BOUNDING_BOX;
it always fails with:
syntax error near unexpected token `('
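The message "syntax error near unexpected token `('" comes from bash, not from H2: it is what the shell prints when a SQL statement is typed directly at the command prompt. The query has to run inside a SQL client, e.g. the H2 web console on port 8082, or H2's command-line shell. A sketch, assuming the default H2 jar location, database URL, and credentials used elsewhere in this thread:

```shell
# Run the query through H2's SQL shell instead of typing it into bash.
java -cp /opt/h2.jar org.h2.tools.Shell \
  -url "jdbc:h2:~/tiefvision" -user sa -password "" \
  -sql "SELECT COUNT(*) FROM BOUNDING_BOX;"
```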
Hi,
I assume the GPU is only required for training, not at runtime. Is this correct?
Thanks
When I try to run similarity-db.lua in the 9-similarity-db folder, I get the below error:
luajit: similarity-db.lua:39: attempt to call method 'double' (a nil value)
stack traceback:
similarity-db.lua:39: in function 'similarityDb'
similarity-db.lua:75: in main chunk
[C]: at 0x00405d50
From what I looked up, "a nil value" means the function does not exist. The closest related issue I came across was karpathy/char-rnn#2, where the problem was resolved by rebuilding Torch. I have rebuilt it several times, using both LuaJIT and Lua as described in http://torch.ch/docs/getting-started.html.
Please let me know if I am missing something here. Thanks in advance!
The input of the autoencoder can be either the last max-pooling layer or the layer just before the classifier's output.
I tried to install inn with
luarocks install inn
but I got an error:
Installing https://raw.githubusercontent.com/torch/rocks/master/inn-1.0-0.rockspec...
Using https://raw.githubusercontent.com/torch/rocks/master/inn-1.0-0.rockspec... switching to 'build' mode
Cloning into 'imagine-nn'...
remote: Counting objects: 27, done.
remote: Compressing objects: 100% (25/25), done.
remote: Total 27 (delta 1), reused 9 (delta 0), pack-reused 0
Receiving objects: 100% (27/27), 15.26 KiB | 0 bytes/s, done.
Resolving deltas: 100% (1/1), done.
Checking connectivity... done.
cmake -E make_directory build;
cd build;
cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH="/home/stylabs/torch/install/bin/.." -DCMAKE_INSTALL_PREFIX="/home/stylabs/torch/install/lib/luarocks/rocks/inn/1.0-0";
make
-- The C compiler identification is GNU 5.4.0
-- The CXX compiler identification is GNU 5.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Torch7 in /home/stylabs/torch/install
-- Found CUDA: /usr (found suitable version "7.5", minimum required is "6.5")
-- Compiling for CUDA architecture: 3.5
-- Configuring done
-- Generating done
-- Build files have been written to: /tmp/luarocks_inn-1.0-0-1407/imagine-nn/build
[ 33%] Building NVCC (Device) object CMakeFiles/inn.dir/inn_generated_ROIPooling.cu.o
/usr/include/string.h: In function ‘void* __mempcpy_inline(void*, const void*, size_t)’:
/usr/include/string.h:652:42: error: ‘memcpy’ was not declared in this scope
return (char *) memcpy (__dest, __src, __n) + __n;
^
CMake Error at inn_generated_ROIPooling.cu.o.cmake:267 (message):
Error generating file
/tmp/luarocks_inn-1.0-0-1407/imagine-nn/build/CMakeFiles/inn.dir//./inn_generated_ROIPooling.cu.o
CMakeFiles/inn.dir/build.make:70: recipe for target 'CMakeFiles/inn.dir/inn_generated_ROIPooling.cu.o' failed
make[2]: *** [CMakeFiles/inn.dir/inn_generated_ROIPooling.cu.o] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/inn.dir/all' failed
make[1]: *** [CMakeFiles/inn.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2
Error: Build error: Failed building.
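The "'memcpy' was not declared in this scope" failure inside /usr/include/string.h is a known incompatibility between CUDA 7.5's nvcc and newer glibc/GCC (here GCC 5.4). A commonly reported workaround, sketched below, is to define _FORCE_INLINES during the CUDA compilation; TORCH_NVCC_FLAGS is the variable Torch's own CUDA packages honor, and whether the inn rock's CMake picks it up is an assumption here. If it does not, adding the same flag to CUDA_NVCC_FLAGS in imagine-nn's CMakeLists.txt has the same effect.

```shell
# Work around the CUDA 7.5 / newer-glibc conflict by force-inlining
# glibc's string functions when nvcc compiles the .cu files.
export TORCH_NVCC_FLAGS="-D_FORCE_INLINES"
luarocks install inn
```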
Hi,
I tried to follow the steps in developer.md and encountered a problem after running:
cd $TIEFVISION_HOME/src/torch/1-split-encoder-classifier
luajit split-encoder-classifier.lua
This is the error I got:
THCudaCheck FAIL file=/home/dim/torch/extra/cutorch/lib/THC/THCGeneral.c line=66 error=35 : CUDA driver version is insufficient for CUDA runtime version
luajit: cuda runtime error (35) : CUDA driver version is insufficient for CUDA runtime version at /home/dim/torch/extra/cutorch/lib/THC/THCGeneral.c:66
stack traceback:
[C]: at 0x7f8d34a44720
[C]: in function 'require'
/home/dim/torch/install/share/lua/5.1/cutorch/init.lua:2: in main chunk
[C]: in function 'require'
/home/dim/torch/install/share/lua/5.1/cunn/init.lua:3: in main chunk
[C]: in function 'require'
/home/dim/torch/install/share/lua/5.1/inn/ffi.lua:6: in main chunk
[C]: in function 'require'
/home/dim/torch/install/share/lua/5.1/inn/init.lua:3: in main chunk
[C]: in function 'require'
split-encoder-classifier.lua:16: in main chunk
[C]: at 0x00405d50
PS: I'm using CUDA 8.0.
Thanks,
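CUDA error 35 means the installed NVIDIA driver is older than the CUDA runtime that cutorch was compiled against; with CUDA 8.0 the driver must be new enough to support that toolkit. A quick sketch to compare the two sides:

```shell
# Driver side: the version reported by the kernel module.
nvidia-smi
# Runtime side: the CUDA toolkit version Torch's packages were built with.
nvcc --version
```

If the driver is too old, upgrade it; if CUDA was upgraded after Torch was built, rebuilding cutorch/cunn against the new toolkit is the usual remedy.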
I think the reason the Similarity Editor web page fails to open with the error below is that I failed to generate the SIMILARITY database.
(When checking my H2 database, I can find correct data in the BOUNDING_BOX table, but no data in the SIMILARITY table.)
Have I forgotten a step needed to generate the SIMILARITY database in H2?
--- older edition
I have finished everything except Supervised Image Similarity (Deep Rank). But when I open the Similarity Editor for the first step, I get the error in the following picture. Have I forgotten any necessary steps?
[error] - application -
! @760kb02li - Internal server error, for (GET) [/similarity_editor] ->
play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[IllegalArgumentException: bound must be positive]]
at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:265) ~[play_2.11-2.4.6.jar:2.4.6]
at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:191) ~[play_2.11-2.4.6.jar:2.4.6]
at play.core.server.Server$class.logExceptionAndGetResult$1(Server.scala:50) [play-server_2.11-2.4.6.jar:2.4.6]
at play.core.server.Server$$anonfun$getHandlerFor$4.apply(Server.scala:59) [play-server_2.11-2.4.6.jar:2.4.6]
at play.core.server.Server$$anonfun$getHandlerFor$4.apply(Server.scala:57) [play-server_2.11-2.4.6.jar:2.4.6]
at scala.util.Either$RightProjection.flatMap(Either.scala:522) [scala-library.jar:na]
at play.core.server.Server$class.getHandlerFor(Server.scala:57) [play-server_2.11-2.4.6.jar:2.4.6]
at play.core.server.NettyServer.getHandlerFor(NettyServer.scala:33) [play-netty-server_2.11-2.4.6.jar:2.4.6]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$8.apply(PlayDefaultUpstreamHandler.scala:132) [play-netty-server_2.11-2.4.6.jar:2.4.6]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$8.apply(PlayDefaultUpstreamHandler.scala:132) [play-netty-server_2.11-2.4.6.jar:2.4.6]
at scala.util.Either.fold(Either.scala:99) [scala-library.jar:na]
at play.core.server.netty.PlayDefaultUpstreamHandler.messageReceived(PlayDefaultUpstreamHandler.scala:120) [play-netty-server_2.11-2.4.6.jar:2.4.6]
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) [netty.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [netty.jar:na]
at com.typesafe.netty.http.pipelining.HttpPipeliningHandler.messageReceived(HttpPipeliningHandler.java:62) [netty-http-pipelining.jar:na]
at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88) [netty.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [netty.jar:na]
at org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108) [netty.jar:na]
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) [netty.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [netty.jar:na]
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) [netty.jar:na]
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459) [netty.jar:na]
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536) [netty.jar:na]
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435) [netty.jar:na]
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) [netty.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) [netty.jar:na]
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) [netty.jar:na]
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) [netty.jar:na]
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) [netty.jar:na]
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) [netty.jar:na]
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) [netty.jar:na]
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) [netty.jar:na]
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) [netty.jar:na]
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [netty.jar:na]
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) [netty.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_144]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_144]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]
Caused by: java.lang.IllegalArgumentException: bound must be positive
at java.util.Random.nextInt(Random.java:388) ~[na:1.8.0_144]
at scala.util.Random.nextInt(Random.scala:66) ~[scala-library.jar:na]
at core.ImageProcessing$.randomSmilarityImage(ImageProcessing.scala:24) ~[na:na]
at controllers.Application.similarityEditor(Application.scala:63) ~[na:na]
at router.Routes$$anonfun$routes$1$$anonfun$applyOrElse$12$$anonfun$apply$12.apply(Routes.scala:542) ~[na:na]
at router.Routes$$anonfun$routes$1$$anonfun$applyOrElse$12$$anonfun$apply$12.apply(Routes.scala:542) ~[na:na]
at play.core.routing.HandlerInvokerFactory$$anon$12$$anon$13.call(HandlerInvoker.scala:89) ~[play_2.11-2.4.6.jar:2.4.6]
at play.core.routing.TaggingInvoker.call(HandlerInvoker.scala:32) ~[play_2.11-2.4.6.jar:2.4.6]
at router.Routes$$anonfun$routes$1$$anonfun$applyOrElse$12.apply(Routes.scala:542) ~[na:na]
at router.Routes$$anonfun$routes$1$$anonfun$applyOrElse$12.apply(Routes.scala:542) ~[na:na]
at play.core.routing.GeneratedRouter.call(GeneratedRouter.scala:95) ~[play_2.11-2.4.6.jar:2.4.6]
at router.Routes$$anonfun$routes$1.applyOrElse(Routes.scala:541) ~[na:na]
at router.Routes$$anonfun$routes$1.applyOrElse(Routes.scala:471) ~[na:na]
at scala.PartialFunction$Lifted.apply(PartialFunction.scala:223) ~[scala-library.jar:na]
at scala.PartialFunction$Lifted.apply(PartialFunction.scala:219) ~[scala-library.jar:na]
at play.api.routing.Router$class.handlerFor(Router.scala:38) ~[play_2.11-2.4.6.jar:2.4.6]
at play.core.routing.GeneratedRouter.handlerFor(GeneratedRouter.scala:86) ~[play_2.11-2.4.6.jar:2.4.6]
at play.api.http.DefaultHttpRequestHandler.routeRequest(HttpRequestHandler.scala:170) ~[play_2.11-2.4.6.jar:2.4.6]
at play.api.http.JavaCompatibleHttpRequestHandler.routeRequest(HttpRequestHandler.scala:201) ~[play_2.11-2.4.6.jar:2.4.6]
at play.api.GlobalSettings$$anonfun$onRouteRequest$1.apply(GlobalSettings.scala:166) ~[play_2.11-2.4.6.jar:2.4.6]
at play.api.GlobalSettings$$anonfun$onRouteRequest$1.apply(GlobalSettings.scala:165) ~[play_2.11-2.4.6.jar:2.4.6]
at scala.Option.flatMap(Option.scala:171) ~[scala-library.jar:na]
at play.api.GlobalSettings$class.onRouteRequest(GlobalSettings.scala:165) ~[play_2.11-2.4.6.jar:2.4.6]
at play.api.DefaultGlobal$.onRouteRequest(GlobalSettings.scala:212) ~[play_2.11-2.4.6.jar:2.4.6]
at play.api.GlobalSettings$class.onRequestReceived(GlobalSettings.scala:110) ~[play_2.11-2.4.6.jar:2.4.6]
at play.api.DefaultGlobal$.onRequestReceived(GlobalSettings.scala:212) ~[play_2.11-2.4.6.jar:2.4.6]
at play.api.http.GlobalSettingsHttpRequestHandler.handlerForRequest(HttpRequestHandler.scala:183) ~[play_2.11-2.4.6.jar:2.4.6]
at play.core.server.Server$$anonfun$sendHandler$1$1.apply(Server.scala:38) ~[play-server_2.11-2.4.6.jar:2.4.6]
at play.core.server.Server$$anonfun$sendHandler$1$1.apply(Server.scala:37) ~[play-server_2.11-2.4.6.jar:2.4.6]
at scala.util.Success$$anonfun$map$1.apply(Try.scala:236) ~[scala-library.jar:na]
at scala.util.Try$.apply(Try.scala:191) ~[scala-library.jar:na]
at scala.util.Success.map(Try.scala:236) ~[scala-library.jar:na]
at play.core.server.Server$class.sendHandler$1(Server.scala:37) [play-server_2.11-2.4.6.jar:2.4.6]
at play.core.server.Server$$anonfun$getHandlerFor$4.apply(Server.scala:58) [play-server_2.11-2.4.6.jar:2.4.6]
... 38 common frames omitted
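"bound must be positive" is thrown by java.util.Random.nextInt(n) when n is 0, and the stack trace points at randomSmilarityImage, so the editor most likely found zero candidate rows, which is consistent with the empty SIMILARITY table. The count can be confirmed from a SQL client; a sketch, assuming the default H2 jar location, URL, and credentials used in this project:

```shell
# A count of 0 here would explain the nextInt(0) crash.
java -cp /opt/h2.jar org.h2.tools.Shell \
  -url "jdbc:h2:~/tiefvision" -user sa -password "" \
  -sql "SELECT COUNT(*) FROM SIMILARITY;"
```

If the table is indeed empty, re-running the developer guide's similarity-db generation steps before opening the editor is the likely remedy.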
Check the Torch source code and detect parts that lack embedded unit tests.
I am following the developer manual. It says that to start the web server you should run "activator run", but there is no activator command...
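"activator" was Lightbend's launcher for Play projects and has since been retired; a plain sbt launcher runs the same project. A sketch, assuming sbt is installed and the web app lives where other reports in this thread place it:

```shell
cd "$TIEFVISION_HOME/src/scala/tiefvision-web"
sbt run   # Play serves on http://localhost:9000 by default
```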
I am trying to start the default database server using $TIEFVISION_HOME/src/h2/service start
It gives me the following output:
sudo docker build -t paucarre/tiefvision-db src/h2
Sending build context to Docker daemon 6.144kB
Step 1/6 : FROM openjdk:8-alpine
---> a8bd10541772
Step 2/6 : ENV H2_VERSION 1.4.192
---> Using cache
---> 460b2748551c
Step 3/6 : COPY create-tables.sql /root/create-tables.sql
---> Using cache
---> 93cdd30bba8d
Step 4/6 : RUN mkdir /opt && wget http://repo1.maven.org/maven2/com/h2database/h2/$H2_VERSION/h2-$H2_VERSION.jar -O /opt/h2.jar -q
---> Using cache
---> 1018cb636893
Step 5/6 : EXPOSE 8082 9092
---> Using cache
---> 6028d83bd5d3
Step 6/6 : ENTRYPOINT java -cp /opt/h2.jar org.h2.tools.RunScript -url jdbc:h2:~/tiefvision -user sa -script /root/create-tables.sql && java -cp /opt/h2.jar org.h2.tools.Server -tcp -tcpAllowOthers -web -webAllowOthers
---> Using cache
---> 983d0439189e
Successfully built 983d0439189e
Successfully tagged paucarre/tiefvision-db:latest
But when I try to open http://localhost:9000/bounding_box it gives me the following error:
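The output above only shows the Docker image being built and tagged, not a container being started, so nothing is listening when the web app tries to reach the database. A sketch, assuming the ports exposed in the Dockerfile shown above (8082 for the H2 web console, 9092 for TCP):

```shell
# Start the freshly built image in the background, publishing both ports.
docker run -d -p 8082:8082 -p 9092:9092 paucarre/tiefvision-db
```

Whether this resolves the /bounding_box error depends on what that error actually says, which was not included in the report.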
Create two endpoints, or a single parametrized endpoint, that allow switching dynamically between Deep Rank and unsupervised search.
How do I store and search over millions of images?
I came across Pau's talk at O'Reilly, which mentioned that Gilt built the entire TiefVision framework using TensorFlow. Is there a chance it has been open-sourced?
I get this error when trying to access the web UI:
RuntimeException: java.lang.NoClassDefFoundError: Could not initialize class core.ImageProcessing$
Hello,
really great work!
Have you tried to accomplish image-similarity search based on local features?
Searching for parts of an object, for example, in a collection of images would be a good use case.
best,
Nelson
Hello, thank you for the great work. I have a dataset of 70k images, and the similarity-database creation phase is very slow: it would take almost 323 days to finish. It creates a feature map for each image in the dataset and compares it against the feature maps extracted from all other images.
Is there a faster way, or am I doing something wrong?
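The exhaustive pass described above is quadratic in the dataset size: 70,000 images mean roughly n·(n−1)/2 pairwise comparisons, which is where the 323-day estimate comes from. A quick back-of-the-envelope check:

```shell
# Number of unordered pairs among 70,000 images.
awk 'BEGIN { n = 70000; printf "%.0f\n", n * (n - 1) / 2 }'
# prints 2449965000
```

The usual escape from this quadratic cost, not something TiefVision is stated to provide, is an approximate-nearest-neighbor index over the feature maps instead of exhaustive comparison.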
I am trying to use this endpoint to start the training. I have already created the bounding boxes for my whole dataset, and the data in the H2 database looks correct: http://imgur.com/a/sCzAO
When I use the endpoint I get this error: http://imgur.com/a/io5nY
This is the console output:
[error] - application -
! @73e8ifnao - Internal server error, for (GET) [/generate_bounding_box_train_and_test_files] ->
play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[NullPointerException: null]]
at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:265) ~[play_2.11-2.4.6.jar:2.4.6]
at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:191) ~[play_2.11-2.4.6.jar:2.4.6]
at play.api.GlobalSettings$class.onError(GlobalSettings.scala:179) [play_2.11-2.4.6.jar:2.4.6]
at play.api.DefaultGlobal$.onError(GlobalSettings.scala:212) [play_2.11-2.4.6.jar:2.4.6]
at play.api.http.GlobalSettingsHttpErrorHandler.onServerError(HttpErrorHandler.scala:94) [play_2.11-2.4.6.jar:2.4.6]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$9$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:151) [play-netty-server_2.11-2.4.6.jar:2.4.6]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$9$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:148) [play-netty-server_2.11-2.4.6.jar:2.4.6]
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36) [scala-library-2.11.6.jar:na]
at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:215) [scala-library-2.11.6.jar:na]
at scala.util.Try$.apply(Try.scala:191) [scala-library-2.11.6.jar:na]
at scala.util.Failure.recover(Try.scala:215) [scala-library-2.11.6.jar:na]
at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324) [scala-library-2.11.6.jar:na]
at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324) [scala-library-2.11.6.jar:na]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) [scala-library-2.11.6.jar:na]
at play.api.libs.iteratee.Execution$trampoline$.executeScheduled(Execution.scala:109) [play-iteratees_2.11-2.4.6.jar:2.4.6]
at play.api.libs.iteratee.Execution$trampoline$.execute(Execution.scala:71) [play-iteratees_2.11-2.4.6.jar:2.4.6]
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40) [scala-library-2.11.6.jar:na]
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248) [scala-library-2.11.6.jar:na]
at scala.concurrent.Promise$class.complete(Promise.scala:55) [scala-library-2.11.6.jar:na]
at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153) [scala-library-2.11.6.jar:na]
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235) [scala-library-2.11.6.jar:na]
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235) [scala-library-2.11.6.jar:na]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) [scala-library-2.11.6.jar:na]
at scala.concurrent.impl.ExecutionContextImpl$AdaptedForkJoinTask.exec(ExecutionContextImpl.scala:121) [scala-library-2.11.6.jar:na]
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [scala-library-2.11.6.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [scala-library-2.11.6.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [scala-library-2.11.6.jar:na]
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [scala-library-2.11.6.jar:na]
Caused by: java.lang.NullPointerException: null
at scala.collection.mutable.ArrayOps$ofRef$.length$extension(ArrayOps.scala:192) ~[scala-library-2.11.6.jar:na]
at scala.collection.mutable.ArrayOps$ofRef.length(ArrayOps.scala:192) ~[scala-library-2.11.6.jar:na]
at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66) ~[scala-library-2.11.6.jar:na]
at scala.collection.mutable.ArrayOps$ofRef.foldLeft(ArrayOps.scala:186) ~[scala-library-2.11.6.jar:na]
at core.DatabaseProcessing$.cropsGenerated(DatabaseProcessing.scala:256) ~[classes/:na]
at core.DatabaseProcessing$$anonfun$generateBoundingBoxDatabaseImages$1$$anonfun$apply$18.apply(DatabaseProcessing.scala:225) ~[classes/:na]
at core.DatabaseProcessing$$anonfun$generateBoundingBoxDatabaseImages$1$$anonfun$apply$18.apply(DatabaseProcessing.scala:224) ~[classes/:na]
at scala.collection.Iterator$class.foreach(Iterator.scala:750) ~[scala-library-2.11.6.jar:na]
at scala.collection.AbstractIterator.foreach(Iterator.scala:1202) ~[scala-library-2.11.6.jar:na]
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) ~[scala-library-2.11.6.jar:na]
at scala.collection.AbstractIterable.foreach(Iterable.scala:54) ~[scala-library-2.11.6.jar:na]
at core.DatabaseProcessing$$anonfun$generateBoundingBoxDatabaseImages$1.apply(DatabaseProcessing.scala:224) ~[classes/:na]
at core.DatabaseProcessing$$anonfun$generateBoundingBoxDatabaseImages$1.apply(DatabaseProcessing.scala:223) ~[classes/:na]
at scala.util.Success$$anonfun$map$1.apply(Try.scala:236) ~[scala-library-2.11.6.jar:na]
at scala.util.Try$.apply(Try.scala:191) [scala-library-2.11.6.jar:na]
at scala.util.Success.map(Try.scala:236) ~[scala-library-2.11.6.jar:na]
... 8 common frames omitted
If I create the folder <$TIEFVISION_HOME>/resources/bounding-boxes/crops, I get the following error:
http://imgur.com/a/DaKo6
It seems that the cropped-image dataset is not being generated. I followed the developer guide carefully but had no luck. It is hard to tell whether I am missing something or this is a bug.
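The NullPointerException starts inside cropsGenerated at an array-length check, which is the signature of Java's File.listFiles returning null for a directory that does not exist. Since creating the crops folder alone only moved the failure, other output directories the generator writes to may also need to exist; a hedged sketch (which folders are actually required is an assumption):

```shell
# Hypothetical workaround: pre-create the crops output directory
# so directory listings return an empty array instead of null.
mkdir -p "$TIEFVISION_HOME/resources/bounding-boxes/crops"
```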
For all the scala and torch code, create the folders automatically if they don't exist.
Hi,
While following the steps in developer.md, I got stuck at the Transfer Learning step.
It could not find the nin_imagenet.caffemodel required by split-encoder-classifier.lua.
Please let me know if there is a tool to download the dataset and model files required to reproduce your results.
Thanks,
Prince
Hi, bboxlib.lua loads several models via:
local function loadLocator(index)
return torch.load(tiefvision_commons.modelPath('locatorconv-' .. index .. '.model'))
end
local function loadClassifier()
return torch.load(tiefvision_commons.modelPath('classifier.model'))
end
local function loadEncoder()
return torch.load(tiefvision_commons.modelPath('encoder.model'))
end
Do you have pre-trained models available for download?
thanks,
Hey guys, I ran into a problem when setting up the database.
When I run $TIEFVISION_HOME/src/h2/service start
I got this error
docker: Error response from daemon: pull access denied for paucarre/tiefvision-db, repository does not exist or may require 'docker login'.
Can anyone help me?
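The pull fails because no image is published on Docker Hub under that name; it is meant to be built locally from the Dockerfile in the repo (other reports in this thread show "service start" doing exactly that). A sketch:

```shell
# Build the database image locally under the tag the service script expects.
docker build -t paucarre/tiefvision-db "$TIEFVISION_HOME/src/h2"
```

After this, $TIEFVISION_HOME/src/h2/service start should find the image locally instead of pulling.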
There was an error during the encoding of this image: http://i.imgur.com/fOVWBs4.jpg
The error occurs at this line: bgrImage[{ 1, {}, {} }] = scaledImage[{ 3, {}, {} }]
Traceback:
luajit: bad argument #2 to '?' (index out of bound at /root/torch/pkg/torch/generic/Tensor.c:971)
stack traceback:
[C]: at 0x7fb3878b6850
[C]: in function '__index'
./0-tiefvision-commons/tiefvision_commons.lua:82: in function 'preprocess'
./0-tiefvision-commons/tiefvision_commons.lua:73: in function 'loadImage'
./8-similarity-db-cnn/similarity_db_lib.lua:26: in function 'encodeImage'
8-similarity-db-cnn/generate-similarity-db.lua:30: in function 'createDb'
8-similarity-db-cnn/generate-similarity-db.lua:66: in main chunk
I want to use TiefVision, but I cannot find the training database or the nin_imagenet.caffemodel mentioned in 1-split-encoder-classifier.lua. Help, please!
Add more unit tests to validate that the Scala side works properly and does what it's supposed to do.
I ran encode-training-and-test-images.lua to get the encodings from the data for classification, and I got this error.
The problem is located at this line of code, where the script tries to allocate memory for the embeddings.
For my dataset, this line would allocate 2.6 GB of VRAM. I have a GTX 970M with 3 GB, but by the time this line is evaluated the encoder model is already in memory, occupying about 700 MB of VRAM, which is why I get the error.
A quick and dirty fix is to move the allocation to RAM by changing this line of code to something like this:
local inputs = torch.Tensor(batches, batch_size, 384, 11, 11)
And also changing #L33 to this:
inputs[batch][batch_el] = inputs[batch][batch_el]:set(encodedInput:double())
I'm just opening this issue for reference in case someone else has the same problem.
A good new feature would be to check the amount of VRAM available against the memory to be allocated whenever such an allocation happens, and to handle it accordingly.
I ran luajit split-encoder-classifier.lua and got a "loadcaffe not found" error:
luajit: split-encoder-classifier.lua:14: module 'loadcaffe' not found:
no field package.preload['loadcaffe']
no file '/home/stylabs/.luarocks/share/lua/5.1/loadcaffe.lua'
no file '/home/stylabs/.luarocks/share/lua/5.1/loadcaffe/init.lua'
no file '/home/stylabs/torch/install/share/lua/5.1/loadcaffe.lua'
no file '/home/stylabs/torch/install/share/lua/5.1/loadcaffe/init.lua'
no file './loadcaffe.lua'
no file '/home/stylabs/torch/install/share/luajit-2.1.0-beta1/loadcaffe.lua'
no file '/usr/local/share/lua/5.1/loadcaffe.lua'
no file '/usr/local/share/lua/5.1/loadcaffe/init.lua'
no file '/home/stylabs/tiefvision/src/torch/loadcaffe.lua'
no file '/home/stylabs/torch/install/lib/loadcaffe.so'
no file '/home/stylabs/.luarocks/lib/lua/5.1/loadcaffe.so'
no file '/home/stylabs/torch/install/lib/lua/5.1/loadcaffe.so'
no file './loadcaffe.so'
no file '/usr/local/lib/lua/5.1/loadcaffe.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
stack traceback:
[C]: in function 'require'
split-encoder-classifier.lua:14: in main chunk
[C]: at 0x00405d50
What is the error, and how do I solve it?
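loadcaffe is a separate rock and needs the Protocol Buffers development headers to build. A sketch for Debian/Ubuntu (the package names below are the usual ones; verify them for your distribution):

```shell
# Install protobuf headers, build the rock, then confirm it loads.
sudo apt-get install -y libprotobuf-dev protobuf-compiler
luarocks install loadcaffe
luajit -e "require('loadcaffe'); print('loadcaffe ok')"
```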
I've trained on my own set of images. I was wondering whether it is possible to find a match using an external image, such as a picture taken with a phone camera.
For request 'GET /save_bounding_box?name=026.jpg&left=150.39996337890625&right=74.39996337890625&top=166.8000030517578&bottom=28.800003051757812&width=218&height=292' [Cannot parse parameter left as Int: For input string: "150.39996337890625"]
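The route declares left/right/top/bottom as Int, but the editor sends fractional pixel coordinates, so Play's parameter binder rejects the request. Two options: change the route parameters to Double, or round the coordinates before sending. A sketch of the second option, with the values from the failing request rounded (hypothetical, just to illustrate a request shape the Int binder accepts):

```shell
curl "http://localhost:9000/save_bounding_box?name=026.jpg&left=150&right=74&top=167&bottom=29&width=218&height=292"
```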
Write documentation that explains how to use TiefVision end to end.