
multiyolov5's People

Contributors

ab-101, aehogan, alexstoken, alexwang1900, anon-artist, ayushexel, borda, dependabot-preview[bot], developer0hye, dlawrences, edurenye, glenn-jocher, kinoute, laughing-q, lorenzomammana, lornatang, nanocode012, olehb, ownmarc, taoxiesz, tkianai, tommao23, toretak, uyzhang, wang-xinyu, wanghaoyang0106, williemaddox, youngjinshin, yuriribeiro, yxnong


multiyolov5's Issues

python convert2Yolo/example.py

❔Question

Additional context

Hello, when preparing the data I could not find an example.py file under the convert2Yolo folder.

Question about converting the .pt model to ONNX

❔Question

Using the export script under models, the seg head results are inconsistent.

Additional context

I am using the yolov5m_citybdd.yaml config file.
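
If it helps to narrow this down, below is a hedged sketch (not the repository's export script) of exporting the model and comparing the PyTorch and onnxruntime seg outputs element-wise. The checkpoint path, input size, and output ordering are assumptions that need to be adapted to yolov5m_citybdd.yaml and your weights.

import numpy as np
import torch
import onnxruntime as ort

# hypothetical checkpoint path; load the trained multi-head model in eval mode
model = torch.load('weights/best.pt', map_location='cpu')['model'].float().eval()
x = torch.zeros(1, 3, 832, 832)

with torch.no_grad():
    torch_out = model(x)  # assumed to return (det, seg) or similar; adapt to the real structure

torch.onnx.export(model, x, 'model.onnx', opset_version=12, input_names=['images'])

sess = ort.InferenceSession('model.onnx', providers=['CPUExecutionProvider'])
onnx_outs = sess.run(None, {'images': x.numpy()})

# compare the seg head numerically; a large max abs diff localizes the mismatch
seg_torch = torch_out[-1] if isinstance(torch_out, (list, tuple)) else torch_out
print('seg max abs diff:', np.abs(seg_torch.numpy() - onnx_outs[-1]).max())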

Model

Hi,
Thank you for sharing.
Where can I download the pre-trained model? I couldn't find it.

Thanks in advance

mAP is always 0 during training

❔Question

train: Scanning '/home/wx/dataset/multiyolov5/fod/detdata/labels/train.cache' images and labels... 11 found, 0 missing, 2 empty, 0 corrupted: 100%|██████████| 11/11 [01:19<?, ?it/s]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.40it/s]
39/999 5.31G 0.1613 0.03633 0 0.1977 0.02364 9 832: 100%|██████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.40it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100%|████████████████████████████████████████████████████| 2/2 [00:00<00:00, 6.99it/s]
all 10 0 0 0 0 0

Additional context

From the log it looks like the labels were not loaded!
My task has 1 detection class, and 3 semantic segmentation classes including the background.
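
For what it's worth, "11 found, 0 missing, 2 empty" in the scan above means very few labelled images, and mAP can legitimately stay at 0 early on with so little data. Below is a quick hedged sanity check that the detection label files are non-empty and in YOLO format; the path is taken from the log above and the single-class assumption from your description.

import glob, os

label_dir = '/home/wx/dataset/multiyolov5/fod/detdata/labels/train'  # path taken from the scan log above
files = sorted(glob.glob(os.path.join(label_dir, '*.txt')))
n_empty = 0
for path in files:
    with open(path) as f:
        rows = [line.split() for line in f if line.strip()]
    if not rows:
        n_empty += 1
        continue
    for row in rows:
        cls, *box = row
        assert cls == '0', f'{path}: class id {cls} is out of range for a single detection class'
        assert len(box) == 4 and all(0.0 <= float(v) <= 1.0 for v in box), f'{path}: box is not normalized xywh'
print(f'{len(files)} label files checked, {n_empty} empty')
# note: yolov5 caches label scans; after fixing labels, delete train.cache so the dataset is rescanned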

Segloss/Giou not showing

During training, TensorBoard only shows the detection losses; nothing related to segmentation is logged. Why?
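
If the segmentation loss is computed but never passed to the writer, it will not show up in TensorBoard. Below is a minimal sketch of logging an extra scalar next to the detection losses; the variable names and tag are hypothetical, not the repository's.

import torch
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter('runs/exp')                      # hypothetical log directory
for global_step in range(3):                            # stands in for the training loop
    seg_loss = torch.tensor(0.5 / (global_step + 1))    # stands in for the computed segmentation loss
    writer.add_scalar('train/seg_loss', seg_loss.item(), global_step)
writer.close()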

To everyone who has recently been using the multiyolov5 repository

Starting tomorrow I will be focusing on preparing for the graduate entrance exam, so there will be no structural or functional updates to this repository in the near term.
If I get into graduate school, I will continue to work on this repository to:

  1. Follow up on subsequent yolov5 updates (P6 looks quite suitable for segmentation)
  2. Try new segmentation heads
  3. Integrate more tasks, such as MOT and depth estimation
  4. Refactor the interfaces (my graduation project was rushed, so the interfaces are admittedly poor and the code is a patchwork of heavily modified pieces)
  5. Add support for multi-GPU training and deployment.

If you run into an obvious bug in the near term, please point it out in an issue and I will do my best to fix it in my spare time. However, I will no longer be answering questions such as how to train on custom data. Training on your own data is a fairly specialized problem: each dataset calls for its own approach (both architecture and data augmentation), and some generic augmentation techniques may not suit your data (for example, random horizontal flips when recognizing left/right arrows on traffic lights, or yolov5's mosaic augmentation on certain datasets). So if you want to train your own model, it is best to be familiar with the original yolov5 code and the code in this repository.

Question about the relative weighting of the losses

I tried a dynamic method here, and it converges faster and more stably. At the start of training I used the fixed weights, i.e. the author's values; once training reaches a fixed number of batches (64*n) I switch to dynamic weighting. The dynamic design is inspired by focal loss. An early-stopping style trigger can also be used: once one head has converged and its loss drops below a set value, switch to the dynamic weights. There is one problem, though: on my own data the training fluctuates too much and does not converge, so I still recommend switching after a fixed number of epochs.
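
Below is a hedged sketch of the scheme described above, not the repository's implementation: keep fixed per-head weights for a warm-up number of batches, then switch to a focal-style dynamic weighting once one head's loss falls below a threshold. All names, weights, and thresholds are illustrative.

import torch

def combine_losses(det_loss, seg_loss, step, warmup_steps=64 * 100,
                   det_w=1.0, seg_w=0.5, switch_thresh=0.05, gamma=2.0):
    # warm-up phase, or no head has converged yet: use the fixed weights
    if step < warmup_steps or min(det_loss.item(), seg_loss.item()) > switch_thresh:
        return det_w * det_loss + seg_w * seg_loss
    # dynamic phase: the head with the larger (normalized) loss gets more weight,
    # in the spirit of focal loss
    raw = torch.stack([det_loss.detach(), seg_loss.detach()])
    w = (raw / raw.sum()).pow(gamma)
    w = w / w.sum()
    return w[0] * det_loss + w[1] * seg_loss

# example: total = combine_losses(torch.tensor(0.03), torch.tensor(0.40), step=20000)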

How to train with multiple GPUs?

When I run the script below I get the following error. How should I change the code?
python -m torch.distributed.launch --nproc_per_node 6 --master_port 1234 train.py --data xxx.yaml --cfg yolov5s_city_seg.yaml --batch-size 60 --epochs 200 --weights ./yolov5s.pt --workers 32 --label-smoothing 0.1 --img-size 832 --noautoanchor --device 2,3,4,5,6,7

RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel; (2) making sure all forward function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue
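
The message itself points at the usual fix: with a multi-head model, some parameters may not contribute to the loss in every iteration, so DDP needs find_unused_parameters=True. Below is a minimal, self-contained illustration of where the flag goes (single process, gloo backend, a toy model); in the repository it would be passed wherever train.py wraps the model in DistributedDataParallel.

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

os.environ.setdefault('MASTER_ADDR', '127.0.0.1')
os.environ.setdefault('MASTER_PORT', '29500')
dist.init_process_group('gloo', rank=0, world_size=1)

model = torch.nn.Linear(8, 2)                    # toy stand-in for the multi-head model
model = DDP(model, find_unused_parameters=True)  # tolerate parameters unused in a given loss
out = model(torch.randn(4, 8))
out.sum().backward()
dist.destroy_process_group()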

Has anyone else run into RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR?

RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
You can try to repro this exception using the following code snippet. If that doesn't trigger the error, please include your original repro script when reporting this issue.

import torch
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.allow_tf32 = True
data = torch.randn([8, 128, 52, 104], dtype=torch.half, device='cuda', requires_grad=True)
net = torch.nn.Conv2d(128, 2, kernel_size=[1, 1], padding=[0, 0], stride=[1, 1], dilation=[1, 1], groups=1)
net = net.cuda().half()
out = net(data)
out.backward(torch.randn_like(out))
torch.cuda.synchronize()

ConvolutionParams
data_type = CUDNN_DATA_HALF
padding = [0, 0, 0]
stride = [1, 1, 0]
dilation = [1, 1, 0]
groups = 1
deterministic = false
allow_tf32 = true
input: TensorDescriptor 000001F1E879F590
type = CUDNN_DATA_HALF
nbDims = 4
dimA = 8, 128, 52, 104,
strideA = 692224, 5408, 104, 1,
output: TensorDescriptor 000001F1E879F670
type = CUDNN_DATA_HALF
nbDims = 4
dimA = 8, 2, 52, 104,
strideA = 10816, 5408, 104, 1,
weight: FilterDescriptor 000001F1A0BDA0D0
type = CUDNN_DATA_HALF
tensor_format = CUDNN_TENSOR_NCHW
nbDims = 4
dimA = 2, 128, 1, 1,
Pointer addresses:
input: 000000078F464000
output: 0000000765FC1E00
weight: 0000000765DFFE00
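
I am not certain of the root cause, but CUDNN_STATUS_INTERNAL_ERROR on an fp16 convolution backward is often environmental (cuDNN/driver mismatch, exhausted GPU memory, or a bad autotuned algorithm). A hedged first debugging step is to rerun the snippet above with benchmarking disabled and in fp32, to see whether the failure is tied to the autotuner or to half precision:

import torch

torch.backends.cudnn.benchmark = False  # skip the cuDNN autotuner
data = torch.randn([8, 128, 52, 104], device='cuda', requires_grad=True)  # fp32 instead of half
net = torch.nn.Conv2d(128, 2, kernel_size=1).cuda()
out = net(data)
out.backward(torch.randn_like(out))
torch.cuda.synchronize()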

There is no Baidu Netdisk link for lab.pt

❔Question

Hi, is there a Baidu Netdisk link for the network weights trained with the deeplabv3+ segmentation head? My machine is not powerful enough to train the network with the deeplab segmentation head myself.

Additional context

A question about input size

My data augmentation is the same as yolov5's, and the input size is also the same, yet the segmentation only converged to an mIoU of 57. What could be the reason?

Dataset

❔Question

Hi, can this project be used with my own modified yolov5 model, or only with the officially provided one? Also, to train this project, must the dataset contain both detection labels and segmentation labels?

Additional context

Problem when serializing the model for deployment

❔Question

Thank you for your work, the results are excellent!
However, I ran into a problem when serializing your model so that it can be called from C++.
The error occurs when calling torch.jit.trace():
ts = torch.jit.trace(model, out)
ts.save("pspv5s.torchscript.pt")
It reports: List inputs to traced functions must have consistent element type. Found Tuple[Tensor, List[Tensor]] and Tensor

Could I ask for your help?

Additional context
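
Two things may be worth checking here. First, torch.jit.trace expects its second argument to be the example input tensor(s), and ts = torch.jit.trace(model, out) looks like it passes the model's output instead. Second, tracing wants a forward that returns tensors (or plain tuples of tensors) with a fixed structure, whereas the error mentions a mixed Tuple[Tensor, List[Tensor]]. Below is a hedged sketch of tracing through a small wrapper that flattens the outputs; the dummy model and the assumed output structure stand in for the real multiyolov5 model.

import torch

class TraceWrapper(torch.nn.Module):
    # hypothetical wrapper: return a flat tuple of tensors so trace sees a fixed structure
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        det, seg = self.model(x)  # assumed output structure; adapt to the real one
        det = det[0] if isinstance(det, (list, tuple)) else det
        return det, seg

class DummyMultiHead(torch.nn.Module):
    # toy stand-in for the real detection + segmentation model
    def forward(self, x):
        det = torch.zeros(x.shape[0], 10, 85)
        seg = torch.zeros(x.shape[0], 19, x.shape[2] // 8, x.shape[3] // 8)
        return det, seg

example_input = torch.zeros(1, 3, 64, 64)  # an input tensor, not the model's output
ts = torch.jit.trace(TraceWrapper(DummyMultiHead()).eval(), example_input)
ts.save('traced_example.pt')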

Problem encountered during training, error shown below

❔Question

Hello, when I run train_custom.py for training I get the following error:

File "train_custom.py", line 641, in <module>
    train(hyp, opt, device, tb_writer)
File "train_custom.py", line 463, in train
    f.write(s + '%10.4g' * 7 % results + '\n')  # append metrics, val_loss
UnboundLocalError: local variable 's' referenced before assignment
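
For context, this error pattern means the name s is only assigned inside a loop (or branch) that never executed before the f.write(...) line ran; with very little data, an empty dataloader is a plausible trigger. Below is a minimal illustration and a defensive default, not the repository's actual code.

def write_results(batches):
    s = ''  # defensive default; without it, an empty `batches` leaves `s` unbound
    for b in batches:
        s = f'metrics for batch {b}'  # normally assigned once per batch
    return s + ' | final results'     # corresponds to the failing f.write(s + ...) line

print(write_results([]))              # works only because of the default above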

Additional context

GPU memory?

❔Question

I'd like to ask: at minimum, how much GPU memory is needed to reproduce the project's results?

Also, which dataset are images 2-4 in the README from?

Additional context

How to reproduce the demo results?

❔Question

Additional context

Hello, I am running inference with the weight file you provided, but the detected classes are only vehicles and pedestrians; the building class shown in the demo does not appear.
How can I obtain the results shown in the demo?

nan when run detection

Before submitting a bug report, please be aware that your issue must be reproducible with all of the following, otherwise it is non-actionable, and we can not help you:

If this is a custom dataset/training question you must include your train*.jpg, test*.jpg and results.png figures, or we can not help you. You can generate these with utils.plot_results().

🐛 Bug

A clear and concise description of what the bug is.

To Reproduce (REQUIRED)

Input:

import torch

a = torch.tensor([5])
c = a / 0

Output:

Traceback (most recent call last):
  File "/Users/glennjocher/opt/anaconda3/envs/env1/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3331, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-5-be04c762b799>", line 5, in <module>
    c = a / 0
RuntimeError: ZeroDivisionError

Expected behavior

A clear and concise description of what you expected to happen.

Environment

If applicable, add screenshots to help explain your problem.

  • OS: [e.g. Ubuntu]
  • GPU [e.g. 2080 Ti]

Additional context

Add any other context about the problem here.

IndexError: index 483 is out of bounds for axis 0 with size 19

Thank you for open-sourcing your project! I ran into the following problem when running the code.

🐛 Bug

File "/home/wx/Projects/multiyolov5/detect.py", line 197, in detect
mask = label2image(seg.max(axis=0)[1].cpu().numpy(), Cityscapes_COLORMAP)[:, :, ::-1]
File "/home/wx/Projects/multiyolov5/detect.py", line 72, in label2image
return colormap[X, :]
IndexError: index 483 is out of bounds for axis 0 with size 19

To Reproduce (REQUIRED)

Input:

def label2image(pred, COLORMAP=Cityscapes_COLORMAP):
    colormap = np.array(COLORMAP, dtype='uint8')
    X = pred.astype('int32')
    return colormap[X, :]

Output:

Traceback (most recent call last):
  File "/home/wx/Projects/multiyolov5/detect.py", line 279, in <module>
    detect()
  File "/home/wx/Projects/multiyolov5/detect.py", line 197, in detect
    mask = label2image(seg.max(axis=0)[1].cpu().numpy(), Cityscapes_COLORMAP)[:, :, ::-1]
  File "/home/wx/Projects/multiyolov5/detect.py", line 72, in label2image
    return colormap[X, :]
IndexError: index 483 is out of bounds for axis 0 with size 19

Expected behavior

A clear and concise description of what you expected to happen.

Environment

If applicable, add screenshots to help explain your problem.

  • OS: Ubuntu 16.04
  • GPU: GeForce RTX 2070 (7979 MB), torch 1.7.1+cu101, CUDA:0
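
The colormap in label2image is indexed by class id, so every value in pred must be smaller than len(COLORMAP) (19 for Cityscapes). An index like 483 usually means the argmax was taken over a non-class axis or the seg tensor does not have the expected layout, rather than a colormap problem. Below is a hedged, defensive variant of the function that makes the mismatch explicit; this is a sketch, not a patch to detect.py.

import numpy as np

def label2image_checked(pred, colormap):
    # pred must contain integer class ids; colormap maps each id to an RGB triple
    colormap = np.array(colormap, dtype='uint8')
    X = pred.astype('int32')
    n = len(colormap)
    if X.min() < 0 or X.max() >= n:
        raise ValueError(
            f'class ids span [{X.min()}, {X.max()}] but the colormap has only {n} entries; '
            'check that the segmentation output was argmaxed over the class dimension '
            'and that the colormap matches the number of trained classes')
    return colormap[X, :]

# example with a toy 19-entry colormap and valid ids
toy_colormap = [[i, i, i] for i in range(19)]
print(label2image_checked(np.zeros((4, 4), dtype=np.int64), toy_colormap).shape)  # (4, 4, 3)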

A question about reproducing the project

Hello, is the project still being updated? Following your approach, I tried to reproduce it on YOLOv9 and got the error ValueError: Expected more than 1 value per channel when training. Have you run into a similar problem? The full error is below:
Traceback (most recent call last):
  File "E:\paper3 multi task learning\yolov9-multiv5\multi_train_dual.py", line 716, in <module>
    main(opt)
  File "E:\paper3 multi task learning\yolov9-multiv5\multi_train_dual.py", line 610, in main
    train(opt.hyp, opt, device, callbacks)
  File "E:\paper3 multi task learning\yolov9-multiv5\multi_train_dual.py", line 121, in train
    model = Model(cfg, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device)  # create
  File "E:\paper3 multi task learning\yolov9-multiv5\models\yolo.py", line 618, in __init__
    m.stride = torch.tensor([s / x.shape[-2] for x in forward(torch.zeros(1, ch, s, s))])  # forward
  File "E:\paper3 multi task learning\yolov9-multiv5\models\yolo.py", line 617, in <lambda>
    forward = lambda x: self.forward(x)[0]
  File "E:\paper3 multi task learning\yolov9-multiv5\models\yolo.py", line 632, in forward
    return self._forward_once(x, profile, visualize)  # single-scale inference, train
  File "E:\paper3 multi task learning\yolov9-multiv5\models\yolo.py", line 531, in _forward_once
    x = m(x)  # run
  File "E:\software\anaconda\envs\multiyolo\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\software\anaconda\envs\multiyolo\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\paper3 multi task learning\yolov9-multiv5\models\yolo.py", line 65, in forward
    return self.out(feat)
  File "E:\software\anaconda\envs\multiyolo\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\software\anaconda\envs\multiyolo\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\software\anaconda\envs\multiyolo\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
    input = module(input)
  File "E:\software\anaconda\envs\multiyolo\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\software\anaconda\envs\multiyolo\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\paper3 multi task learning\yolov9-multiv5\models\common.py", line 1288, in forward
    feat1 = F.interpolate(self.conv1(self.pool1(x)), (h, w), mode='bilinear', align_corners=True)
  File "E:\software\anaconda\envs\multiyolo\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\software\anaconda\envs\multiyolo\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\paper3 multi task learning\yolov9-multiv5\models\common.py", line 54, in forward
    return self.act(self.bn(self.conv(x)))
  File "E:\software\anaconda\envs\multiyolo\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\software\anaconda\envs\multiyolo\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\software\anaconda\envs\multiyolo\lib\site-packages\torch\nn\modules\batchnorm.py", line 175, in forward
    return F.batch_norm(
  File "E:\software\anaconda\envs\multiyolo\lib\site-packages\torch\nn\functional.py", line 2480, in batch_norm
    _verify_batch_size(input.size())
  File "E:\software\anaconda\envs\multiyolo\lib\site-packages\torch\nn\functional.py", line 2448, in _verify_batch_size
    raise ValueError(f"Expected more than 1 value per channel when training, got input size {size}")
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 64, 1, 1])
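
From the traceback, the failure happens in the stride-probe forward (torch.zeros(1, ch, s, s) in yolo.py) when it reaches a global-pooling branch of the segmentation head: the pooled feature map is 1x1 with batch size 1, and BatchNorm in training mode cannot normalize a single value per channel. Below is a small self-contained demonstration of the mechanism, plus one possible (hedged, untested against your code) workaround.

import torch
import torch.nn as nn

bn = nn.BatchNorm2d(64)
x = torch.randn(1, 64, 1, 1)  # one value per channel, as in the traceback
try:
    bn(x)                      # training mode: raises the same ValueError
except ValueError as e:
    print(e)

bn.eval()
print(bn(x).shape)             # eval mode uses running statistics and succeeds

# Possible workaround (an assumption, adapt to the actual code): put the model in
# eval() around the dummy stride-probe forward in yolo.py, or probe with a larger
# dummy spatial size so the pooled feature map is bigger than 1x1.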
