
tfyolo's Issues

Weights for yolov5 & training loss

❔Looking for weights for yolov5 and how to minimize training loss

Are there existing weights which can be used?
I have also seen that you linked the weights from YOLOv4; do you have any advice on how to use them with this model?

Additionally, what would you consider a "good" loss?
I currently have a loss of about 200, which seems a bit high and yields results which are not that good.
How can I improve my loss?
Thank you for your help!

detect wrong

After training for 100 epochs on the VOC dataset, I ran detection on the training samples, but it finds nothing.
The loss is about 500.

Error when importing from YoloV5

❔Question

I exported a saved_model from YoloV5 and tried to use it with this repo. Using detect.py with the exported model, I got the following error:

InvalidArgumentError: cannot compute Pack as input #3(zero-based) was expected to be a float tensor but is a int32 tensor [Op:Pack] name: packed

at the following line in detect.py:
pred_bbox = [tf.reshape(x, (tf.shape(x)[0], -1, tf.shape(x)[-1])) for x in pred_bbox]

Has anyone had any success moving a model from the other YOLOv5 to this one?

EDIT: I had included NMS within the exported model, which isn't compatible with the code here.
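
A plausible cause, given the error message: one of the tensors being combined has a different dtype from the others, e.g. int32 class indices or detection counts produced by the built-in NMS. The following is only a minimal sketch with dummy tensors standing in for the model outputs; casting everything to float32 before reshaping removes the symptom, although exporting without NMS, as the edit above notes, is the cleaner fix.

import tensorflow as tf

# Dummy model outputs: two float heads plus one int32 output (as built-in NMS would produce).
pred_bbox = [tf.zeros((1, 13, 13, 85)), tf.zeros((1, 26, 26, 85)),
             tf.constant([[1, 2, 3]], dtype=tf.int32)]
pred_bbox = [tf.cast(x, tf.float32) for x in pred_bbox]   # unify dtypes first
pred_bbox = [tf.reshape(x, (tf.shape(x)[0], -1, tf.shape(x)[-1])) for x in pred_bbox]
print([x.dtype for x in pred_bbox])                        # all float32 now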

LICENSE?

❔Question

Hello!

Thank you for the project!
I'd be happy to know: do you plan to attach some kind of LICENSE to it? MIT, perhaps?

Thank you,
Ben


Learning rate scheduler not updated in train_step

My TensorFlow version is 2.1.0. I found that when step() of the learning rate scheduler is called inside the training step, the lr is not actually updated (the scheduler works fine when tested on its own). I guess it has something to do with the distributed strategy's run process. The problem is fixed by moving the learning rate update to the main loop instead of inside the training step function.

https://github.com/LongxingTan/Yolov5/blob/88acfd988decc4cc78335cfb6eb50f1975294c1f/yolo/train.py#L122
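
A minimal sketch of the pattern described above. The scheduler class and loop are illustrative stand-ins, not the repo's actual API; the point is that the schedule is advanced and assigned in the outer Python loop, so it is not frozen inside the tf.function / strategy.run graph.

import tensorflow as tf

# Illustrative warm-up schedule; any object with a step() method works the same way.
class WarmupScheduler:
    def __init__(self, base_lr=1e-3, warmup_steps=500):
        self.base_lr, self.warmup_steps, self.global_step = base_lr, warmup_steps, 0

    def step(self):
        self.global_step += 1
        return self.base_lr * min(1.0, self.global_step / self.warmup_steps)

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
scheduler = WarmupScheduler()

for it in range(10):                      # stand-in for the training loop
    lr = scheduler.step()                 # advance the schedule in eager code
    optimizer.learning_rate = lr          # push the new value into the optimizer
    # strategy.run(train_step, args=(batch,))  # the distributed train step would go here
    print(it, float(optimizer.learning_rate))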

Accuracy alignment

❔Question

Hello, has the model you open-sourced been checked for accuracy alignment against the original author's open-source model?


TF version performance

❔Question

Dear @LongxingTan,
Thanks for your tf2 version of yolov5.
I am curious how your trained model's performance compares to the PyTorch version (e.g. on COCO or VOC)?


Training on a custom dataset

❔Question

When training with my own dataset, images of different sizes end up with different dimensions and the concatenation fails. Is the mosaic augmentation code in this repo correct?


ValueError: Dimension 1 in both shapes must be equal, but are 13 and 14. Shapes are [8,13,13] and [8,14,14]. for '{{node yolo/concat/concat}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32](yolo/conv_39/mul, yolo/upsample/resize/ResizeNearestNeighbor, yolo/concat/concat/axis)' with input shapes: [8,13,13,128], [8,14,14,128], [] and with computed input tensors: input[2] = <-1>.
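
A likely explanation for the 13-vs-14 concat error above (an assumption, not verified against this exact config): the input size is not a multiple of the network's largest stride, so the skip-connection feature map and the upsampled deep feature map end up with different spatial sizes. A minimal sketch of the arithmetic:

import math

# With "same"-padded stride-2 convolutions, a 416x416 input gives a 13x13 map at
# stride 32 but a 7x7 map at stride 64; upsampling 7 -> 14 then cannot be
# concatenated with 13. Sizes divisible by 64 (448, 640, ...) avoid the mismatch.
for img_size in (416, 448, 640):
    skip = math.ceil(img_size / 32)      # skip-connection feature map
    deep = math.ceil(skip / 2)           # stride-64 feature map
    up = deep * 2                        # after 2x nearest-neighbour upsampling
    print(img_size, skip, up, 'ok' if skip == up else 'mismatch -> concat error')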

[Errno 2] No such file or directory: '../data/voc2012/VOCdevkit/VOC2012/train.txt'

When executing python train.py, the following error shows up: [Errno 2] No such file or directory: '../data/voc2012/VOCdevkit/VOC2012/train.txt'

I tried to look for the .txt file in the given directory, but I couldn't find it. Did anyone else have this same issue?

I also checked the read_data.py script, but I can't seem to find where this file is supposed to come from. How can I resolve this issue?

To be clear, here's the entire error:

Traceback (most recent call last):
File "/home/jovyan/Yolov5/yolo/train.py", line 153, in
DataReader = DataReader(params['train_annotations_dir'], img_size=params['img_size'], transforms=transforms,
File "/home/jovyan/Yolov5/yolo/dataset/read_data.py", line 21, in init
self.annotations = self.load_annotations(annotations_dir)
File "/home/jovyan/Yolov5/yolo/dataset/read_data.py", line 65, in load_annotations
with open(annotations_dir, 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: '../data/voc2012/VOCdevkit/VOC2012/train.txt'
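
train.txt is not part of the raw VOC download; it has to be generated from the Pascal VOC annotations first. Below is only a minimal sketch, assuming (not verified against this repo's read_data.py) the common one-line-per-image format "image_path xmin,ymin,xmax,ymax,class_id ...", and a hypothetical class-name file path:

import os
import xml.etree.ElementTree as ET

VOC_ROOT = '../data/voc2012/VOCdevkit/VOC2012'
CLASSES = open('../data/voc2012/voc.names').read().splitlines()   # hypothetical names file

split_ids = open(os.path.join(VOC_ROOT, 'ImageSets', 'Main', 'train.txt')).read().split()
with open(os.path.join(VOC_ROOT, 'train.txt'), 'w') as out:
    for image_id in split_ids:
        root = ET.parse(os.path.join(VOC_ROOT, 'Annotations', image_id + '.xml')).getroot()
        img_path = os.path.join(VOC_ROOT, 'JPEGImages', image_id + '.jpg')
        boxes = []
        for obj in root.iter('object'):
            name = obj.find('name').text
            if name not in CLASSES:
                continue
            b = obj.find('bndbox')
            coords = [b.find(k).text for k in ('xmin', 'ymin', 'xmax', 'ymax')]
            boxes.append(','.join(coords + [str(CLASSES.index(name))]))
        if boxes:
            out.write(img_path + ' ' + ' '.join(boxes) + '\n')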

Use of different yolov5 models

❔Question

Is it enough to change the "yolo-m-mish.yaml" part when I want to use a different YOLO structure (e.g. yolo-s-mish.yaml or yolo-l-mish.yaml), in this line of config.py:
parser.add_argument('--yaml_dir', type=str, default='configs/yolo-m-mish.yaml', help='model.yaml path')
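
Assuming train.py builds its configuration from this same parser, the model variant can also be switched per run from the command line instead of editing config.py, e.g.:

python train.py --yaml_dir configs/yolo-s-mish.yaml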


Create Anchors result in nan iou

Hi,

Thanks for the simple and yet great implementation of yolov5. I prepared my data following the instructions in prepare_data.py. You mentioned that we should create anchors using create_anchor.py. However, when I run create_anchor.py it runs for a long time and the iou is as follows:

the iou is [0.5387414 nan nan nan nan nan nan nan nan]

If I run the detect.py script without running create_anchor.py, it results in no box detections for the test image. Would you please help me figure out what I am doing wrong?

I am using my own dataset, the input format is the same as you described, and I have also changed the .yaml file to the correct number of classes. When I train, my loss is around 200 after 30 epochs, and I am not sure whether that is OK or not.
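
For what it's worth, nan anchors usually mean that some k-means clusters ended up with no boxes assigned to them, e.g. because widths/heights are zero or mis-parsed, or because there are far fewer distinct box shapes than requested anchors. A minimal sketch of that failure mode (this is only an illustration, not the repo's create_anchor.py):

import numpy as np

# Only one "kind" of box, but two centroids: the second centroid attracts nothing,
# its update is the mean of an empty set (nan), and every later iou against it is nan.
wh = np.array([[30., 40.], [32., 44.], [31., 42.]])
centroids = np.array([[31., 42.], [300., 400.]])
assign = np.argmin(((wh[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
new_centroids = np.array([wh[assign == k].mean(axis=0) for k in range(len(centroids))])
print(new_centroids)   # second row is [nan nan] -> nan iou from then on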

operation in Yolov5/yolo/dataset/augment_data.py

In Yolov5/yolo/dataset/augment_data.py, line 103:
M = T @ S @ R @ P @ C # order of operations (right to left) is IMPORTANT
What operation is this line doing?

And can I save the trained model and use it later?

Thanks
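
On the first question: @ is Python's matrix-multiplication operator, so M = T @ S @ R @ P @ C composes the individual augmentation matrices (presumably centering, perspective, rotation, scale/shear and translation, going by the variable names) into one homogeneous transform. Because points are later warped as M @ point, the right-most matrix acts first and the left-most acts last, which is why the ordering matters. A minimal sketch with made-up values:

import numpy as np

# Illustrative 3x3 homogeneous transforms; the real ones in augment_data.py are
# built from random rotation, scale, shear, perspective and translation values.
C = np.eye(3); C[0, 2], C[1, 2] = -320, -320    # move origin to the image centre
P = np.eye(3)                                    # perspective (identity here)
R = np.eye(3)                                    # rotation / shear (identity here)
S = np.eye(3); S[0, 0] = S[1, 1] = 1.1           # scale
T = np.eye(3); T[0, 2], T[1, 2] = 320, 320       # translate back

M = T @ S @ R @ P @ C                            # applied right to left: C first, T last
point = np.array([100.0, 200.0, 1.0])            # homogeneous pixel coordinate
print(M @ point)                                 # where that pixel lands after augmentation

On the second question: assuming the repo's network is a standard tf.keras.Model, it can be saved after training with model.save('saved_model_dir') and reloaded later with tf.keras.models.load_model('saved_model_dir').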

loss nan

Thank you for providing a useful repository.

I run this train code on TF2.4.

python train.py --train_annotations_dir ../data/voc/voc_train.txt --test_annotations_dir ../data/voc/voc_test.txt --class_name_dir ../data/voc/voc.names --multi_gpus 2

After 5k iterations, the loss becomes nan...

Could you please share the training parameters and results you obtained with this repo?

Could you provide the pre-trained weights

❔Question

Which weights work with your code? I have been looking around the internet and couldn't find TensorFlow (2.x) or Keras weights or a saved model. It would be nice if you could provide them; otherwise there is no way to check whether your repo works or not. Thanks.


Loss too large

Hello, after running train.py as described in the README, the loss is fairly large (around 90) and the test mAP is very low. Do you know what might cause this, or how to fix it?
