
boxcars's People

Contributors

jakubsochor, meher-chinmaya


boxcars's Issues

Trained Model fails in evaluation and single image classification

Hi Jakub,

I trained a new ResNet50 model, and the training results looked quite good.
(screenshot of training results omitted)
However, when I evaluate it with
python3 train_eval.py --eval path-to-model.h5
the results on the 'test' part are bad.
(screenshot of evaluation results omitted)

Also, when I try to use the model to predict the class of a single image, the results are bad as well.
Here is my code for single-image prediction (some lines are quoted from your work).
Where did I go wrong? Thanks!

from keras.models import load_model
import cv2
import numpy as np

path_to_model = ....  # path to the trained .h5 model
model = load_model(path_to_model)

# Load and preprocess a single vehicle patch.
image = cv2.imread("027491_007.png")
image = cv2.resize(image, (224, 224))
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = (image.astype(np.float32) - 116) / 128.

# Add the batch dimension expected by model.predict.
x = np.empty([1, 224, 224, 3], dtype=np.float32)
x[0, ...] = image

predict = model.predict(x)
pred_idx = np.argmax(predict)
print(predict, pred_idx)
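
For reference, one possible sanity check (my own sketch, not code from this repository) is to run the same preprocessing over a handful of dataset samples and compare the predictions with the annotated labels, using the BoxCarsDataset accessors that appear in other issues (dataset.X, dataset.Y, dataset.get_image). Whether the first image ids belong to the test split, and whether get_image returns a BGR patch like cv2.imread does, are assumptions here:

import cv2
import numpy as np

correct = 0
sample_ids = range(100)  # hypothetical subset of image ids, not necessarily the test split
for image_id in sample_ids:
    vehicle_id, instance_id = dataset.X[image_id]
    patch = dataset.get_image(vehicle_id, instance_id)   # assumed to be a BGR patch
    patch = cv2.resize(patch, (224, 224))
    patch = cv2.cvtColor(patch, cv2.COLOR_BGR2RGB)
    patch = (patch.astype(np.float32) - 116) / 128.
    pred = model.predict(patch[np.newaxis, ...])
    correct += int(np.argmax(pred) == dataset.Y[image_id])
print("accuracy on sample:", correct / len(sample_ids))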

Lots of wrongly labeled images in the BoxCars116k training dataset

So far I have found the following images to be wrongly labeled. Could you please tell me how to automatically filter out this sort of outlier in the dataset? (One possible approach is sketched at the end of this issue.)

wrong_labeled_image_ids = [520, 687, 8698, 1410, 4734, 5667, 33492]

# Load image
wrong_labeled_image_id = wrong_labeled_image_ids[0]
vehicle_id, instance_id = dataset.X[wrong_labeled_image_id]
class_ids = [dataset.Y[wrong_labeled_image_id]]
class_ids = np.array(class_ids, dtype=np.int32)

# Get corresponding data
vehicle, instance, bb3d, bb2d = dataset.get_vehicle_instance_data(vehicle_id, instance_id)
image = dataset.get_image(vehicle_id, instance_id)

When we map bb2d and bb3d onto the original image, we get the result shown in the attached screenshot (omitted here).

Some more wrongly labeled images (attached screenshots omitted):
Image ID: 4734
Image ID: 1410
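
For reference, one common way to flag label problems automatically (my own sketch, not part of this repository) is to train on the labels as given and then surface samples where the classifier confidently disagrees with the annotation. The variables model, x and y below are hypothetical: a trained Keras classifier, the preprocessed images, and the annotated integer class ids.

import numpy as np

probs = model.predict(x)                          # shape (N, num_classes)
pred = np.argmax(probs, axis=1)
confidence = probs[np.arange(len(pred)), pred]

# Flag samples whose confident prediction disagrees with the annotated label;
# the 0.9 threshold is arbitrary and worth tuning.
suspect = np.where((pred != y) & (confidence > 0.9))[0]
print("possible label errors:", suspect)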

How to create the pickle files for my own dataset?

Hi @JakubSochor, thanks for your research on 3D bounding box estimation.
My problem is the same as the title: I would like to create those pickle files for my own dataset, but I have no idea what their format is.
Can you give me some tips on how to do this, especially for dataset.pkl and atlas.pkl?

Thanks in advance
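
For reference, a quick way to discover the structure of the existing files before building your own (a generic sketch, assuming the files are plain, uncompressed pickles; adjust if the repository stores them differently):

import pickle

with open("dataset.pkl", "rb") as f:
    data = pickle.load(f, encoding="latin-1")   # encoding needed for pickles written by Python 2

print(type(data))
if isinstance(data, dict):
    print(list(data.keys()))                    # inspect the top-level keys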

Training on Custom data

Hi @JakubSochor, thanks for sharing the code and the wonderful work. I have a few queries below:

1. How do I generate the 3D bounding boxes for custom data? Is the source code for that available?
2. Can I train on a custom dataset such as bikes? If so, should I create a dataset similar to BoxCars116k?
3. Can we use this method to train on two object types, such as a car and a bike, in the same image?
4. What is the inference time on CPU and GPU? (A rough way to measure it is sketched below.)

Thanks in advance
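
A rough way to measure inference time yourself (a minimal sketch; model is assumed to be the Keras model loaded from your trained .h5 file, and the numbers will of course depend on hardware and batch size):

import time
import numpy as np

x = np.random.rand(1, 224, 224, 3).astype(np.float32)   # dummy input patch
model.predict(x)                                         # warm-up call

n = 50
start = time.perf_counter()
for _ in range(n):
    model.predict(x)
elapsed = (time.perf_counter() - start) / n
print("mean inference time: %.1f ms" % (elapsed * 1000))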

Is there any full image corresponding to the patch?

Thank you for your interesting and practical research.
I wonder whether there is a full image corresponding to each vehicle patch, or whether I can transform the 2D/3D coordinates back to the full image; this would help me validate that the full image really does help 3D bounding box estimation. If I have missed this information, please let me know; I would appreciate it very much.

requirements.txt file

Many versions are incompatible:
Failed to build numpy scipy
ERROR: yellowbrick 0.9.1 has requirement numpy>=1.13.0, but you'll have numpy 1.12.0 which is incompatible.
ERROR: yellowbrick 0.9.1 has requirement scipy>=1.0.0, but you'll have scipy 0.18.1 which is incompatible.
ERROR: xarray 0.15.1 has requirement numpy>=1.15, but you'll have numpy 1.12.0 which is incompatible.
ERROR: umap-learn 0.5.1 has requirement numpy>=1.17, but you'll have numpy 1.12.0 which is incompatible.
ERROR: umap-learn 0.5.1 has requirement scipy>=1.0, but you'll have scipy 0.18.1 which is incompatible.
ERROR: tifffile 2021.3.17 has requirement numpy>=1.15.1, but you'll have numpy 1.12.0 which is incompatible.
ERROR: textgenrnn 1.4.1 has requirement keras>=2.1.5, but you'll have keras 1.2.2 which is incompatible.
ERROR: tensorflow 2.4.1 has requirement h5py~=2.10.0, but you'll have h5py 2.6.0 which is incompatible.
ERROR: tensorflow 2.4.1 has requirement numpy~=1.19.2, but you'll have numpy 1.12.0 which is incompatible.
ERROR: tensorflow 2.4.1 has requirement protobuf>=3.9.2, but you'll have protobuf 3.2.0 which is incompatible.
ERROR: tensorflow 2.4.1 has requirement six~=1.15.0, but you'll have six 1.10.0 which is incompatible.
ERROR: tensorflow-probability 0.12.1 has requirement numpy>=1.13.3, but you'll have numpy 1.12.0 which is incompatible.
ERROR: tensorflow-metadata 0.28.0 has requirement protobuf<4,>=3.7, but you'll have protobuf 3.2.0 which is incompatible.
ERROR: tensorflow-hub 0.11.0 has requirement protobuf>=3.8.0, but you'll have protobuf 3.2.0 which is incompatible.
ERROR: tensorflow-datasets 4.0.1 has requirement protobuf>=3.6.1, but you'll have protobuf 3.2.0 which is incompatible.
ERROR: tensorboard 2.4.1 has requirement protobuf>=3.6.0, but you'll have protobuf 3.2.0 which is incompatible.
ERROR: spacy 2.2.4 has requirement numpy>=1.15.0, but you'll have numpy 1.12.0 which is incompatible.
ERROR: seaborn 0.11.1 has requirement numpy>=1.15, but you'll have numpy 1.12.0 which is incompatible.
ERROR: seaborn 0.11.1 has requirement scipy>=1.0, but you'll have scipy 0.18.1 which is incompatible.
ERROR: scikit-image 0.16.2 has requirement scipy>=0.19.0, but you'll have scipy 0.18.1 which is incompatible.
ERROR: pywavelets 1.1.1 has requirement numpy>=1.13.3, but you'll have numpy 1.12.0 which is incompatible.
ERROR: pysndfile 1.3.8 has requirement numpy>=1.13.0, but you'll have numpy 1.12.0 which is incompatible.
ERROR: pynndescent 0.5.2 has requirement scipy>=1.0, but you'll have scipy 0.18.1 which is incompatible.
ERROR: pymc3 3.7 has requirement h5py>=2.7.0, but you'll have h5py 2.6.0 which is incompatible.
ERROR: pymc3 3.7 has requirement numpy>=1.13.0, but you'll have numpy 1.12.0 which is incompatible.
ERROR: pymc3 3.7 has requirement theano>=1.0.4, but you'll have theano 0.8.2 which is incompatible.
ERROR: pyerfa 1.7.2 has requirement numpy>=1.16, but you'll have numpy 1.12.0 which is incompatible.
ERROR: pyarrow 3.0.0 has requirement numpy>=1.16.6, but you'll have numpy 1.12.0 which is incompatible.
ERROR: plotnine 0.6.0 has requirement numpy>=1.16.0, but you'll have numpy 1.12.0 which is incompatible.
ERROR: plotnine 0.6.0 has requirement scipy>=1.2.0, but you'll have scipy 0.18.1 which is incompatible.
ERROR: pandas 1.1.5 has requirement numpy>=1.15.4, but you'll have numpy 1.12.0 which is incompatible.
ERROR: opencv-python 3.4.2.17 has requirement numpy>=1.14.5, but you'll have numpy 1.12.0 which is incompatible.
ERROR: opencv-contrib-python 4.1.2.30 has requirement numpy>=1.14.5, but you'll have numpy 1.12.0 which is incompatible.
ERROR: numba 0.51.2 has requirement numpy>=1.15, but you'll have numpy 1.12.0 which is incompatible.
ERROR: librosa 0.8.0 has requirement numpy>=1.15.0, but you'll have numpy 1.12.0 which is incompatible.
ERROR: librosa 0.8.0 has requirement scipy>=1.0.0, but you'll have scipy 0.18.1 which is incompatible.
ERROR: kapre 0.1.3.1 has requirement keras>=2.0.0, but you'll have keras 1.2.2 which is incompatible.
ERROR: jaxlib 0.1.62+cuda110 has requirement numpy>=1.16, but you'll have numpy 1.12.0 which is incompatible.
ERROR: imgaug 0.2.9 has requirement numpy>=1.15.0, but you'll have numpy 1.12.0 which is incompatible.
ERROR: googleapis-common-protos 1.53.0 has requirement protobuf>=3.12.0, but you'll have protobuf 3.2.0 which is incompatible.
ERROR: google-colab 1.0.0 has requirement six~=1.15.0, but you'll have six 1.10.0 which is incompatible.
ERROR: google-cloud-bigquery 1.21.0 has requirement protobuf>=3.6.0, but you'll have protobuf 3.2.0 which is incompatible.
ERROR: google-api-python-client 1.12.8 has requirement six<2dev,>=1.13.0, but you'll have six 1.10.0 which is incompatible.
ERROR: google-api-core 1.26.1 has requirement protobuf>=3.12.0, but you'll have protobuf 3.2.0 which is incompatible.
ERROR: google-api-core 1.26.1 has requirement six>=1.13.0, but you'll have six 1.10.0 which is incompatible.
ERROR: fbprophet 0.7.1 has requirement numpy>=1.15.4, but you'll have numpy 1.12.0 which is incompatible.
ERROR: fastai 1.0.61 has requirement numpy>=1.15, but you'll have numpy 1.12.0 which is incompatible.
ERROR: fancyimpute 0.4.3 has requirement keras>=2.0.0, but you'll have keras 1.2.2 which is incompatible.
ERROR: dm-tree 0.1.5 has requirement six>=1.12.0, but you'll have six 1.10.0 which is incompatible.
ERROR: datascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.
ERROR: cvxpy 1.0.31 has requirement numpy>=1.15, but you'll have numpy 1.12.0 which is incompatible.
ERROR: cvxpy 1.0.31 has requirement scipy>=1.1.0, but you'll have scipy 0.18.1 which is incompatible.
ERROR: blis 0.4.1 has requirement numpy>=1.15.0, but you'll have numpy 1.12.0 which is incompatible.
ERROR: astropy 4.2 has requirement numpy>=1.17, but you'll have numpy 1.12.0 which is incompatible.
ERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you'll have imgaug 0.2.9 which is incompatible.
ERROR: tensorflow-gpu 2.4.1 has requirement h5py~=2.10.0, but you'll have h5py 2.6.0 which is incompatible.
ERROR: tensorflow-gpu 2.4.1 has requirement numpy~=1.19.2, but you'll have numpy 1.12.0 which is incompatible.
ERROR: tensorflow-gpu 2.4.1 has requirement protobuf>=3.9.2, but you'll have protobuf 3.2.0 which is incompatible.
ERROR: tensorflow-gpu 2.4.1 has requirement six~=1.15.0, but you'll have six 1.10.0 which is incompatible.

About how to estimate the 3DBB

I am confused about how the 3D bounding box is estimated from the vehicle contour and the directions towards the vanishing points. I also wonder what the input to the CNN that estimates the directions towards the vanishing points is: the whole image with several cars, or a cropped image containing only one car?
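
For reference, one ingredient of such a construction (my own sketch, not the repository's code) is finding, for a given vanishing point, the two lines through it that are tangent to the vehicle contour. With the contour given as pixel coordinates, these correspond to the extreme angles of the rays from the VP through the contour points:

import numpy as np

def tangent_angles(vp, contour):
    # vp: (2,) vanishing point, contour: (N, 2) contour points, both in pixels.
    # Assumes the contour does not straddle the +/-pi angle discontinuity as
    # seen from the VP (true when the VP lies well outside the vehicle region).
    d = contour - vp
    angles = np.arctan2(d[:, 1], d[:, 0])
    return angles.min(), angles.max()   # the two tangent (supporting) rays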

Help in understanding dataset.pkl

Hello,

I'm trying to get the directions of the VPs so that I can construct a 3D bounding box. So, as described in Fig. 7 of the paper, I made the model predict the directions of the vanishing points as a set of three angles.

{'098': {'pp': array([ 427.5,  240.5]),
   'vp1': array([ 948.281 ,   92.7385]),
   'vp3': array([  455.78606968,  5066.38864576]),
   'focal': 835.6732001667876,
   'vp2': array([-952.23 ,  103.878])}

This is an example of one camera's properties taken from dataset.pkl. Based on the paper, I expected the dataset to store a set of three angles, one per VP, but instead each VP comes with a pair of numbers, which confuses me.

  1. What are those numbers?
  2. How do I proceed from here to construct the 3DBB?

Can someone help me out in understanding this?

Thanks in advance!
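
For reference, assuming the vp1/vp2/vp3 entries are pixel coordinates of the vanishing points in the image plane (see also the issue about VP coordinates further down), the per-vehicle angles described in the paper can be derived from them: for any reference point in the image, for example the centre of a vehicle's 2D bounding box, the direction towards a VP is simply the angle of the connecting vector. A minimal sketch using the values quoted above, with a hypothetical reference point:

import numpy as np

vps = {
    "vp1": np.array([948.281, 92.7385]),
    "vp2": np.array([-952.23, 103.878]),
    "vp3": np.array([455.78606968, 5066.38864576]),
}

center = np.array([500.0, 300.0])   # hypothetical vehicle centre in pixels

for name, vp in vps.items():
    d = vp - center
    angle = np.degrees(np.arctan2(d[1], d[0]))
    print(name, "direction from centre: %.1f deg" % angle)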

How to show the detection result on an image?

Hello, I followed the README.md and obtained final_model.h5, but if I want to run detection on a single image using final_model.h5, how do I show the results of the detection, i.e. the 3D bounding box drawn on the image (Fig. 5 in your paper)? Thank you.
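
For reference, if the eight projected corners of a 3D bounding box are available in pixel coordinates (for example the bb3d array returned by dataset.get_vehicle_instance_data in other issues), drawing them is straightforward with OpenCV. A minimal sketch; the assumption that the first four points form one face and the last four the opposite face is mine and may need adjusting to the dataset's actual corner ordering:

import cv2
import numpy as np

def draw_3d_bb(image, bb3d, color=(0, 255, 0), thickness=2):
    pts = np.asarray(bb3d, dtype=np.int32)
    front, rear = pts[:4], pts[4:]                        # assumed corner ordering
    cv2.polylines(image, [front.reshape(-1, 1, 2)], True, color, thickness)
    cv2.polylines(image, [rear.reshape(-1, 1, 2)], True, color, thickness)
    for a, b in zip(front, rear):                         # edges connecting the two faces
        cv2.line(image, tuple(map(int, a)), tuple(map(int, b)), color, thickness)
    return image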

Where to find the network that estimates the 3D bounding box?

Hello, thank you for this interesting research. So far I have been playing with this repository and I was able to reproduce the results in the paper.

I am most interested in the part where you estimate the 3D bounding boxes. However, in the paper and in the repository I couldn't find any clue about how to use that part independently. Could you please summarize it, or list some links that would help me understand the 3D bounding box estimation part?

Thank you in advance,

Link to trained model is down

Hi, the link for downloading the trained model doesn't work anymore. Could you please provide an alternative download link?

Confusion about contour detection

Following the paper Object Contour Detection with a Fully Convolutional Encoder-Decoder Network, I trained a network on the PASCAL VOC dataset. Unfortunately, the network always outputs a black image.
Could you share your contour detection code with me?

Also, is your traffic research website down? I can't open it.

Any help will be appreciated!

The coordinates of the vanishing points

In dataset.pkl, every camera has 2D coordinates for three VPs, and I wonder whether those coordinates are the VPs' positions in the image plane.
To validate this, I used equation (5) from the article 'Fully Automatic Roadside Camera Calibration for Traffic Surveillance': U = (u_x, u_y, f), V = (v_x, v_y, f), P = (p_x, p_y, 0), W = (U − P) × (V − P).
The VPs I used are those of camera videnska: vp1 = [2018.41228263, 77.64316627], vp2 = [63.71104018, 64.09228882], pp = [350.5, 250.5] and f = 667.9198918. The result I get for vp3 is array([-716.16197463, 48.24208282, 667.9198918]), which is very different from the position [334.37621358, 2675.97844949] given in dataset.pkl. It therefore seems that the VP coordinates in dataset.pkl are not the VPs' positions in the image plane, so I want to ask: what coordinate system are the VPs in dataset.pkl given in?
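
For reference, the value computed above is a 3D direction rather than an image point. Under the assumption that dataset.pkl stores the VPs in image-plane pixel coordinates, projecting that direction back through the principal point lands close to the stored vp3, which suggests the coordinates are indeed image-plane positions. A minimal sketch using the values quoted in this issue:

import numpy as np

vp1 = np.array([2018.41228263, 77.64316627])
vp2 = np.array([63.71104018, 64.09228882])
pp = np.array([350.5, 250.5])
f = 667.9198918

# Lift each image-plane VP to a 3D ray direction through the camera centre.
d1 = np.append(vp1 - pp, f)
d2 = np.append(vp2 - pp, f)

# The third direction is orthogonal to both.
d3 = np.cross(d1, d2)

# Project the direction back onto the image plane.
vp3 = pp + f * d3[:2] / d3[2]
print(vp3)   # roughly [333.7, 2669.6], close to the [334.38, 2675.98] stored in dataset.pkl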

Use trained model for single image prediction

Hi JakubSochor, thanks for the interesting research and this repo. It was easy to set up and run the training. I used a model (fine-tuned ResNet50) that I trained on the BoxCars116k dataset on my computer to predict on a single image:

from keras.models import load_model
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
from keras.applications.resnet50 import preprocess_input
from keras.applications.resnet50 import decode_predictions

path_to_model = "/home/hung/source/python/read_model/models/final_model.h5"
model = load_model(path_to_model)

image = load_img("car.png", target_size=(224, 224))
image = img_to_array(image)
image = image.reshape(1, image.shape[0], image.shape[1], image.shape[2])
image = preprocess_input(image)

predict = model.predict(image)
label = decode_predictions(predict)
print(label)

But I get the following error: ValueError: decode_predictions expects a batch of predictions (i.e. a 2D array of shape (samples, 1000)). Found array with shape: (1, 107).
I understand this happens because the number of classes is different from the original ImageNet classes. Could you show me a way to make the prediction? Sorry, I am new to Keras and do not understand your code yet.
Thank you.
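
For reference, keras.applications.resnet50.decode_predictions is hard-wired to the 1000 ImageNet classes, so it cannot decode a 107-class BoxCars output. A minimal sketch of the usual alternative: take the argmax of the softmax vector and look it up in your own list of class names (the class_names list below is hypothetical and would need to hold the 107 labels in the order used during training):

import numpy as np

probs = predict[0]                     # predict has shape (1, 107)
pred_idx = int(np.argmax(probs))
print(pred_idx, probs[pred_idx])       # class index and its softmax score
# print(class_names[pred_idx])         # if a list of the 107 class names is available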

Bounding Box Estimator

Are you planning to provide the 3D Bounding Box estimator model? It would be interesting to try that model for training purposes and in other applications.

Link to dataset is down

I get an error on two different internet connections while trying to access this link to download the dataset. Is this a temporary issue?

What is the coordinate system of the image?

I see that in your article you set the principal point to be the middle of the image, and in dataset.pkl every camera's PP coordinate is [427.5, 240.5], so the image coordinate system confuses me.

This is just a car patch classification example

I think this repo has nothing to do with 3D box prediction; it is just a simple and naive classification of car image patches.

It makes no sense to do such a job and claim that it can do 3D box predictions...
