
lanenet's Introduction

LaneNet lane detection in PyTorch

LaneNet is a segmentation-based lane detection algorithm, described in [1] "Towards end-to-end lane detection: an instance segmentation approach". The key idea of instance segmentation comes from [2] "Semantic instance segmentation with a discriminative loss function". This repository contains a re-implementation in PyTorch.

News

  • The codebase will be updated in the coming days. There are currently many bugs in this repo. Sorry for the late response.

Data preparation

CULane

The dataset is available from CULane. Please download and unzip the files into one folder, referred to below as CULane_path. Then set CULane_path in config.py.

CULane_path
├── driver_100_30frame
├── driver_161_90frame
├── driver_182_30frame
├── driver_193_90frame
├── driver_23_30frame
├── driver_37_30frame
├── laneseg_label_w16
├── laneseg_label_w16_test
└── list

Note: an absolute path is encouraged.
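For example, in config.py (a minimal sketch; the exact variable name and layout of config.py in this repo may differ):

    # config.py -- point the dataset root at the unzipped CULane folder (absolute path)
    CULane_path = "/data/CULane"   # hypothetical location; adjust to your machine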

Tusimple

The dataset is available here. Please download and unzip the files into one folder, referred to below as Tusimple_path. Then set Tusimple_path in config.py.

Tusimple_path
├── clips
├── label_data_0313.json
├── label_data_0531.json
├── label_data_0601.json
└── test_label.json

Note: seg_label images and gt.txt, in the CULane dataset format, will be generated the first time a Tusimple dataset object is instantiated. This may take some time.
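As a rough illustration (the dataset class name and constructor arguments below are assumptions; check this repo's dataset module for the real signature), the one-time label generation is triggered simply by constructing the dataset object:

    from dataset import Tusimple                             # hypothetical import path
    train_set = Tusimple(Tusimple_path, image_set="train")   # hypothetical arguments
    # On the first construction, seg_label images and gt.txt are written under
    # Tusimple_path; later runs reuse them, so only the first run is slow.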

Demo Test

For a single-image demo test:

python demo_test.py -i demo/demo.jpg 
                    -w path/to/weight
                    -b 1.5
                    [--visualize / -v]

An untested model can be downloaded [here]. (It will be uploaded soon.)

Train

  1. Specify an experiment directory, e.g. experiments/exp0. Assign the path to the variable exp_dir in train.py.

  2. Modify the hyperparameters in experiments/exp0/cfg.json.

  3. Start training:

    python train.py [-r]
  4. Monitor training with TensorBoard:

    tensorboard --logdir='experiments/exp0' > experiments/exp0/board.log 2>&1 &
    

Note

  • My model was trained with torch.nn.DataParallel. Modify this according to your hardware configuration (see the sketch below).
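For reference, a minimal sketch (not this repo's exact code; the import path and checkpoint key are assumptions) of loading DataParallel-trained weights on a single GPU:

    import torch
    from model import LaneNet                      # assumed import path

    net = LaneNet()                                # constructor arguments omitted
    save_dict = torch.load("path/to/weight", map_location="cpu")
    state_dict = save_dict.get("net", save_dict)   # the checkpoint key is an assumption

    # Weights saved from a torch.nn.DataParallel model are prefixed with "module.";
    # strip the prefix so they load into a plain single-GPU model.
    state_dict = {k.replace("module.", "", 1): v for k, v in state_dict.items()}
    net.load_state_dict(state_dict)

    # To reproduce the author's multi-GPU setup instead:
    # net = torch.nn.DataParallel(net).cuda()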

Reference

[1]. Neven, Davy, et al. "Towards end-to-end lane detection: an instance segmentation approach." 2018 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2018.

[2]. De Brabandere, Bert, Davy Neven, and Luc Van Gool. "Semantic instance segmentation with a discriminative loss function." arXiv preprint arXiv:1708.02551 (2017).

lanenet's People

Contributors

harryhan618


lanenet's Issues

tiny suggestion, move loss computation outside forward()

Thanks for sharing!
I read both your implementation of LaneNet and your SCNN; both are very clear and elegant.
A small suggestion for LaneNet: it might be better to move the loss computation outside forward(), since the loss is not needed at inference time.
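A minimal sketch of this suggestion (a toy stand-in model, not the repo's architecture): forward() returns only the raw outputs, and the training loop decides whether to build the loss, so inference pays no loss-related cost.

    import torch
    import torch.nn as nn

    class TinyLaneNet(nn.Module):
        def __init__(self, embed_dim=4):
            super().__init__()
            self.backbone = nn.Conv2d(3, 16, 3, padding=1)   # stand-in backbone
            self.binary_head = nn.Conv2d(16, 2, 1)           # background / lane logits
            self.embed_head = nn.Conv2d(16, embed_dim, 1)    # instance embeddings

        def forward(self, img):
            feat = torch.relu(self.backbone(img))
            return self.binary_head(feat), self.embed_head(feat)   # no loss here

    net = TinyLaneNet()
    img = torch.randn(1, 3, 288, 512)

    # Training step: losses are computed outside forward().
    binary_logits, embeddings = net(img)
    seg_gt = torch.zeros(1, 288, 512, dtype=torch.long)
    seg_loss = nn.functional.cross_entropy(binary_logits, seg_gt)
    # total_loss = seg_loss + discriminative_loss(embeddings, instance_gt)

    # Inference: no loss is ever constructed.
    with torch.no_grad():
        binary_logits, embeddings = net(img)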

H-Net

There is no H-Net implementation.

I want to ask: why didn't you use the regularization loss in the discriminative loss?

I saw in the second paper that the discriminative loss has three components: a variance term, a distance term, and a regularization term. You define reg_loss in your code but do not use it, because you said it was not used in the original paper. So the loss you use is the segmentation loss from the first paper plus the variance and distance terms from the second paper, excluding reg_loss. Why?

The loss function you use:
loss = a * seg_loss + b * var_loss + c * dist_loss

Why not:
loss = a * seg_loss + b * var_loss + c * dist_loss + d * reg_loss
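For context, the regularization term in [2] simply pulls each cluster (lane) mean embedding towards the origin, with a small weight. A minimal sketch, not this repo's code:

    import torch

    def reg_loss(cluster_means):
        """cluster_means: tensor of shape (num_lanes, embed_dim)."""
        return torch.mean(torch.norm(cluster_means, p=2, dim=1))

    # The full objective in the question would then read
    #   loss = a * seg_loss + b * var_loss + c * dist_loss + d * reg_loss(mu)
    # with d kept small (0.001 in [2]).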

Evaluation metrics

Hello, do you know how to evaluate model performance on the Tusimple dataset?

About demo_test.py

Hi! I'm trying to test an image with demo_test.py, and the following error occurs:

    Traceback (most recent call last):
      File "demo_test.py", line 67, in <module>
        main()
      File "demo_test.py", line 31, in main
        x = transform(img)[0]
      File "C:\Users\xuyun\Desktop\LaneNet-master-20190926\utils\transforms\transforms.py", line 43, in __call__
        sample = t(sample)
      File "C:\Users\xuyun\Desktop\LaneNet-master-20190926\utils\transforms\transforms.py", line 70, in __call__
        img = sample.get('img')
    AttributeError: 'numpy.ndarray' object has no attribute 'get'

Any help would be amazing.
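Not a verified fix, but the last frame of the traceback calls sample.get('img'), which suggests the composed transforms expect a dict sample keyed by 'img' rather than a bare numpy array. Something along these lines may avoid the AttributeError (the exact return structure of this repo's transforms may differ):

    sample = transform({'img': img})   # wrap the image in the dict the transforms expect
    x = sample['img']                  # assumption: the transformed image stays under 'img'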

A lesson about DDP and computation graph

This is a lesson I learned about DistributedDataParallel and the forward & backward computation graph. I debugged this issue for a really long time before finally finding it, so I would like to record it here.
The lesson is: when calculating the loss, even in edge cases, always keep the model output involved in the loss computation graph rather than skipping it.

Here's the case. When I calculate a loss that I implemented myself, say discriminative_loss in this repo, I used to skip it if there were no positive samples. That is to say:

loss_all = 0
if len(torch.unique(seg_gt)) == 1:  # only the background class is present, so I don't calculate the loss
  continue                          # skip this sample inside the training loop
else:
  loss_some_task = calculate_loss(pred, seg_gt)
  loss_all = loss_all + loss_some_task

Thus in this case, when a sample has no positive pixels, pred is not involved in the computation graph used to calculate the loss.
OK, here is the tricky part. DDP expects the model output to be either involved in the computation graph or not, consistently across all processes. So a special case happens when pred is involved in the loss computation graph in process #1 but not in process #2. This makes both processes get stuck.

So always let the output get involved in the loss computation graph. I would revise the code above like this:

loss_all = 0
if len(torch.unique(seg_gt)) == 1:  # only the background class is present
  # Touch pred so it still enters the graph, but contribute exactly zero loss.
  _nonsense_sum = pred.sum()
  _nonsense_zero = torch.zeros_like(_nonsense_sum)
  loss_all = loss_all + _nonsense_sum * _nonsense_zero
else:
  loss_some_task = calculate_loss(pred, seg_gt)
  loss_all = loss_all + loss_some_task

How to train on the Cityscapes dataset

Hi! How do I load and train on the Cityscapes dataset? Cityscapes has no bounding boxes, so how do you train on it for detection? Thanks.

about dist_loss

While reading the code, I found that dist_loss may have a mistake:
dist_loss = dist_loss + torch.sum(F.relu(dist - self.delta_d)**2) / (num_lanes * (num_lanes-1)) / 2

I think it should be
dist_loss = dist_loss + torch.sum(F.relu(self.delta_d - dist)**2) / (num_lanes * (num_lanes-1)) / 2

But I am not sure. Thank you!
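For reference, the distance term in [2] is a hinge that only penalizes cluster centers while they are closer than the margin, i.e. max(0, 2*delta_d - ||mu_a - mu_b||)^2, so the hinge direction in the proposed fix matches the paper (though the paper uses a margin of 2*delta_d). A minimal sketch for a single pair of lane centers:

    import torch
    import torch.nn.functional as F

    # mu_a, mu_b: mean embeddings of two lanes; delta_d: the distance margin
    def pair_dist_term(mu_a, mu_b, delta_d):
        return F.relu(2 * delta_d - torch.norm(mu_a - mu_b)) ** 2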

x in demo_test.py

Hello,

Thank you for the code. I wanted to understand the use of lines 31-32 in demo_test.py, where x is declared. Was it supposed to be the argument to net?


'cfg.json' and weights file

Hello, I want to run your code, but I can't find the cfg.json and weights files. If you could provide them, I would greatly appreciate it.

Do you plan to reproduce FastDraw?

Thank you for your great work. A new lane detection paper has been posted; in my opinion, "FastDraw: Addressing the Long Tail of Lane Detection by Adapting a Sequential Prediction Network" is a very good paper. Are you going to reproduce it?

Tusimple evaluation resizing

Hi,

Are you evaluating your model on Tusimple? The Tusimple evaluation takes as input h-samples and lane points at a size of 720 x 1280; however, if the network is trained at a smaller size, there are two ways to go about it:

a) resize the network output image to 720 x 1280.

b) resize the points to the smaller size and generate the y-sample/lane-point pairs there.

Which one do you think is best?
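Either way involves mapping between resolutions. A minimal sketch of scaling predicted lane points from the network's input resolution back to the 1280 x 720 evaluation frame (the network resolution below is an illustrative assumption):

    def rescale_points(points, net_w=512, net_h=288, out_w=1280, out_h=720):
        """points: iterable of (x, y) pixel coordinates at the network resolution."""
        sx, sy = out_w / net_w, out_h / net_h
        return [(x * sx, y * sy) for x, y in points]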

Curvature

Hello, is it possible to also get the curvature values while doing the curve fitting, in real time?
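In general, yes: for a second-order polynomial x = a*y^2 + b*y + c fitted to one lane's pixel coordinates, the curvature at a given y is |2a| / (1 + (2a*y + b)^2)^1.5, and evaluating it adds negligible cost per frame. A minimal sketch in pixel units (converting to metres would need a pixel-to-metre scale for your camera, which is not covered here):

    import numpy as np

    def lane_curvature(xs, ys, y_eval):
        """Fit x = a*y^2 + b*y + c to one lane and return curvature (1/pixel) at y_eval."""
        a, b, c = np.polyfit(ys, xs, 2)   # coefficients, highest order first
        dxdy = 2 * a * y_eval + b         # first derivative at y_eval
        return abs(2 * a) / (1 + dxdy ** 2) ** 1.5

    # Example: curvature near the bottom of a 720-pixel-high image
    # kappa = lane_curvature(xs, ys, y_eval=710)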

about quantitative evaluation

When will you publish your test code?
I used the post-processing code from MaybeShewill-CV/lanenet-lane-detection, and I got very low accuracy.
