
fuzailpalnak / building-footprint-segmentation

119 stars · 3 watchers · 32 forks · 561 KB

Building footprint segmentation from satellite and aerial imagery

Home Page: https://fuzailpalnak-buildingextraction-appbuilding-extraction-s-ov1rp9.streamlitapp.com/

License: Apache License 2.0

Python 100.00%
semantic-segmentation satellite-imagery gis satellite-imagery-segmentation building-footprint-segmentation deep-learning pytorch building-footprints

building-footprint-segmentation's Introduction

Hi there 👋

[GitHub stats]

building-footprint-segmentation's People

Contributors

fuzailpalnak


building-footprint-segmentation's Issues

ttAugment installation error - conflict in numpy versions

Hi,
When I try to install ttAugment, it throws the error "pip subprocess to install build dependencies did not run successfully". So I tried installing the libraries individually with the exact versions used in the notebook PredictionWithAugmentations.ipynb, but there is a conflict between the versions of image-fragment 0.2.2 and opencv-python 4.5.5.62. How can I resolve this and install ttAugment?
[Screenshot: 2023-06-03 223802]
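One possible workaround (an assumption on my side, not a confirmed fix): install ttAugment without its pinned build dependencies and then install the two conflicting libraries separately, letting pip pick compatible versions:

    pip install ttAugment --no-deps
    pip install image-fragment opencv-python

Another issue below reports that removing ttAugment's version pins lets it install on Colab, which suggests the pins rather than the code are the problem.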

cannot install with pip

Hi, I get the following error when I try pip install building-footprint-segmentation==0.2.1, with or without the version pin (when I don't specify the version, the error doesn't show one either):

Could not find a version that satisfies the requirement building-footprint-segmentation==0.2.1 (from versions: ) No matching distribution found for building-footprint-segmentation==0.2.1
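One thing worth checking (my guess, not something confirmed by the author): an empty "from versions:" list from PyPI often means the published wheel does not support your Python version. If so, installing straight from the repository may work:

    pip install git+https://github.com/fuzailpalnak/building-footprint-segmentation.git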

How to get outline of the mask as polygon

Hi Fuzail,

How are you? Hope you are doing well.
Could you help me understand how I can extract the polygons of the detected buildings?
Any help is appreciated.

Thanks in advance,
San
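A minimal sketch of one way to do this (not part of this repository): vectorize the predicted binary mask with OpenCV contours and turn each contour into a Shapely polygon. The mask file name is illustrative.

    import cv2
    from shapely.geometry import Polygon

    # Hypothetical predicted mask: building pixels > 0, background = 0
    mask = cv2.imread("prediction_mask.png", cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

    # RETR_EXTERNAL keeps only the outer outline of each building
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    polygons = [Polygon(c.squeeze(axis=1)) for c in contours if len(c) >= 3]
    print(f"Extracted {len(polygons)} building polygons")

From there, GeoPandas or Fiona can write the polygons to a Shapefile or GeoJSON; if the source tile is georeferenced, you would also apply its affine transform to the pixel coordinates first.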

Use weight file

Hello my friend!

Your work is amazing! Congratulations.

I need your help. I would like to use the pre-trained weights. Also, how can I use the code to run it on some test images and verify that it performs the segmentation correctly?

Thanks so much!
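A minimal sketch of loading the weights and running a single test image, assuming a standard PyTorch workflow; the import path, checkpoint layout and file names below are assumptions on my side, so please check the repo README and example notebooks for the exact API.

    import cv2
    import numpy as np
    import torch

    # Import path assumed from the package layout; the model class may differ
    from building_footprint_segmentation.seg.binary.models import ReFineNet

    model = ReFineNet()
    checkpoint = torch.load("refine.pth", map_location="cpu")  # hypothetical weights file
    model.load_state_dict(checkpoint)  # or checkpoint["model"] if it is a training dict
    model.eval()

    image = cv2.imread("test_tile.png")  # hypothetical test image
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    tensor = torch.from_numpy(image.transpose(2, 0, 1)).unsqueeze(0)  # NCHW batch of 1

    with torch.no_grad():
        prediction = torch.sigmoid(model(tensor))  # assuming the model returns logits
    binary_mask = (prediction.squeeze().numpy() > 0.5).astype(np.uint8) * 255
    cv2.imwrite("predicted_mask.png", binary_mask)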

import error with latest albumentation versions

Hello, I tried running your example, but there's an error. The first one I ran into was

ImportError: cannot import name 'from_dict' from 'albumentations' (/usr/local/lib/python3.7/dist-packages/albumentations/core/__init__.py)

I suspect that in one of the albumentations updates the locations of some methods were moved around. There may be other import errors; I haven't gotten past this one yet, but you may be quicker at finding them since you're more familiar with your code.
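A hedged workaround, since I have not checked every release: in newer albumentations versions from_dict is importable from the serialization module, so either importing it from there or pinning an older albumentations should get past this error.

    # Fallback import for newer albumentations releases (assumption, check your version)
    try:
        from albumentations import from_dict
    except ImportError:
        from albumentations.core.serialization import from_dict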

Save best model with F1 score

Hi,
I'm using your scripts to train the model on a custom dataset.
Analyzing your code, I noticed that you use the validation loss to save the best model. I would like to save it based on the F1 metric instead.
Could you please help me change the script to do this?

Thank you,
Francesca
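A minimal sketch of the requested change (not the trainer's exact code; variable and key names are illustrative): keep track of the best validation F1 and write a checkpoint whenever it improves.

    import torch

    best_f1 = 0.0

    def maybe_save_best(model, optimizer, epoch, valid_metric, path="best_f1.pt"):
        """Save a checkpoint whenever the validation F1 improves."""
        global best_f1
        current_f1 = valid_metric["f1"]  # assumes the metric dict exposes an "f1" key
        if current_f1 > best_f1:
            best_f1 = current_f1
            torch.save(
                {
                    "model": model.state_dict(),
                    "optimizer": optimizer.state_dict(),
                    "epoch": epoch,
                    "f1": current_f1,
                },
                path,
            )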

Error when running example with MFRN and Massachusetts dataset

Hi Fuzail,

Thanks so much for publishing your code!!
I have a question: I'm trying to run your example code with the MFRN model and the Massachusetts dataset in the required folder structure, and I left all the other settings as is.
I get the following error:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-25-b54aed7e715d> in <module>
----> 1 trainer.train(start_epoch=0, end_epoch=1)

~/amrefenv/lib/python3.6/site-packages/building_footprint_segmentation/trainer.py in train(self, start_epoch, end_epoch, step, bst_vld_loss)
    179                     to_new_line_data=True,
    180                 )
--> 181                 raise ex
    182 
    183         one_liner.one_line(

~/amrefenv/lib/python3.6/site-packages/building_footprint_segmentation/trainer.py in train(self, start_epoch, end_epoch, step, bst_vld_loss)
     93 
     94                 train_loss, train_metric, step, progress_bar = self.state_train(
---> 95                     step, progress_bar
     96                 )
     97                 progress_bar.close()

~/amrefenv/lib/python3.6/site-packages/building_footprint_segmentation/trainer.py in state_train(self, step, progress_bar)
    203             train_data = gpu_variable(train_data)
    204 
--> 205             prediction = self.model(train_data["images"])
    206             calculated_loss = self.criterion(train_data["ground_truth"], prediction)
    207             self.optimizer.zero_grad()

~/amrefenv/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1100         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1101                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102             return forward_call(*input, **kwargs)
   1103         # Do not call functions when jit is used
   1104         full_backward_hooks, non_full_backward_hooks = [], []

~/amrefenv/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py in forward(self, *inputs, **kwargs)
    164 
    165             if len(self.device_ids) == 1:
--> 166                 return self.module(*inputs[0], **kwargs[0])
    167             replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
    168             outputs = self.parallel_apply(replicas, inputs, kwargs)

~/amrefenv/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1100         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1101                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102             return forward_call(*input, **kwargs)
   1103         # Do not call functions when jit is used
   1104         full_backward_hooks, non_full_backward_hooks = [], []

~/amrefenv/lib/python3.6/site-packages/building_footprint_segmentation/seg/binary/models/mfrn.py in forward(self, input_feature)
    287 
    288         transition_up_1 = self.mfrn.decoder.decodertransition1(
--> 289             bottle_neck, skip_connections.pop()
    290         )
    291         dense_layer_6 = self.mfrn.decoder.decoderdenseblock1(transition_up_1)

~/amrefenv/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1100         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1101                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102             return forward_call(*input, **kwargs)
   1103         # Do not call functions when jit is used
   1104         full_backward_hooks, non_full_backward_hooks = [], []

~/amrefenv/lib/python3.6/site-packages/building_footprint_segmentation/seg/binary/models/mfrn.py in forward(self, x, skip)
    114     def forward(self, x, skip):
    115         out = self.Transpose(x)
--> 116         out = torch.cat([out, skip], 1)
    117         return out
    118 

RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 92 but got size 93 for tensor number 1 in the list.

Do you know what the issue could be?
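A hedged guess at the cause, not a confirmed answer: the decoder concatenates upsampled features with encoder skip connections, so the input height and width need to be divisible by the network's total downsampling factor; an odd-sized tile ends up one pixel off (92 vs 93) after upsampling. Cropping or padding the tiles to a multiple of 32 (the exact factor depends on the MFRN depth) before the forward pass should avoid the mismatch. A minimal sketch:

    import torch.nn.functional as F

    def pad_to_multiple(images, multiple=32):
        """Zero-pad an NCHW batch so that H and W are divisible by `multiple`."""
        _, _, h, w = images.shape
        pad_h = (multiple - h % multiple) % multiple
        pad_w = (multiple - w % multiple) % multiple
        return F.pad(images, (0, pad_w, 0, pad_h))  # pad right and bottom only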

Trained model always gives precision: 0.00000, f1: 0.00000, recall: 0.00000, iou: 0.00000 on validation

Hello Fuzail,

Thanks first of all. I tried training DLinkNet34 with the Massachusetts Buildings Dataset. The training metrics look correct: accuracy: 0.82643, precision: 0.94444, f1: 0.85087, recall: 0.77669, iou: 0.77669; but the validation metrics are: accuracy: 0.85126, precision: 0.00000, f1: 0.00000, recall: 0.00000, iou: 0.00000. Something must be wrong, but I can't solve it. Can you give me some advice?
Thanks again, I'm looking forward to your reply!
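A quick diagnostic that may help (not from the repo): when the training metrics look fine but every validation precision/recall/F1/IoU is exactly zero, the first thing to check is usually the validation labels themselves, e.g. masks that are read as all background or that are not binarized the same way as the training labels. The folder path below is illustrative.

    import glob

    import cv2
    import numpy as np

    for path in glob.glob("dataset/val/labels/*")[:5]:  # hypothetical label folder
        mask = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        print(path, "unique values:", np.unique(mask), "building pixels:", int((mask > 0).sum()))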

Need to swap axes again (or not swap before) to restore mask correctly, and change version reqs for ttAugment

In this line in the main example notebook, you need to change to

transformation.restore_fragment(prediction_binary.swapaxes(1, 2)) 

so that the restored image correctly aligns with the original input - you can see in the current outputs that otherwise the axes for each image fragment are swapped.

Also, ttAugment now fails to install on Colab because it requires old versions of Python, numpy, etc. It seems that nothing breaks when these version requirements (including the Python version) are removed, and this allows it to work on Colab again.

Otherwise handy package(s), thanks for putting it together!

How to organize training datasets to start training?

I got

ValueError: num_samples should be a positive integer value, but got num_samples=0

with:

Loader:

    root: ../datasets/
    image_normalizer: divide_by_255
    label_normalizer: binary_label
    batch_size: 2

in config.yml, and my datasets (images & labels) under the folder datasets/.
Looking forward to your reply!
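For what it's worth, num_samples=0 from the PyTorch sampler usually just means the loader found no files under the configured root. With root: ../datasets/, the training script appears to expect per-split subfolders rather than images directly under datasets/, something along these lines (the exact folder names are an assumption; check the repo README):

    datasets/
    ├── train/
    │   ├── images/
    │   └── labels/
    └── val/
        ├── images/
        └── labels/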

Best Model info

Hi Fuzail,
could you please share some information about the best model you got? I would like to know:

  • epoch_end
  • epoch in which you got the best model
  • metrics values on training and validation sets
  • hyperparameter values (learning rate, optimizer, regularizer (L2))
  • augmentation you performed (just horizontal flip?)
  • the normalization you applied on images and labels (is it "divide_by_255"?)

Thank you,

Teresa

No detection after training with Massachusetts Buildings Dataset

Hello,

I'm having a problem where I get no detections at all after training with the Massachusetts Buildings Dataset.

Just to be sure, I should be using the chk_pth.pt as a dict, right?

By the way, should the pretrained weight file work "well" out of the box? All I got is something like this:
[Result image: dst]

Thanks for your work and support!

GeoTiff as output

I used the pretrained model to extract building footprints from a custom image. What I get is a JPEG or TIFF image; how can I georeference it? Is it possible to get a GeoTIFF as output?
Thank you.
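A minimal sketch of one way to do this with rasterio (not part of this repo), assuming the source image is itself a georeferenced GeoTIFF whose transform and CRS can simply be copied onto the predicted mask; file names are illustrative.

    import cv2
    import rasterio

    mask = cv2.imread("prediction_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical predicted mask

    with rasterio.open("source_tile.tif") as src:  # the original georeferenced input
        profile = src.profile.copy()

    # Keep the source transform/CRS, but write a single uint8 band
    profile.update(count=1, dtype="uint8")
    with rasterio.open("prediction_geotiff.tif", "w", **profile) as dst:
        dst.write(mask, 1)  # mask must have the same height/width as the source tile

If the prediction was made on a plain JPEG/PNG with no georeferencing at all, you would instead need the tile's affine transform and CRS from wherever the tile was cut.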

How to test the model?

Hi,
why do you test your model using data augmentation as in PredictionWithAugmentations.py?
How did you choose which manipulations to use?
By combining these outputs (e.g. crop and resize results), how do you get the final estimate for the whole tile?
Thank you :)
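For context, a minimal sketch of the idea behind test-time augmentation (this is the general pattern, not the repo's ttAugment implementation): run the model on a few deterministic transforms of the tile, undo each transform on the prediction, and average the results to get the final estimate.

    import torch

    def tta_predict(model, image_tensor):
        """Average sigmoid predictions over identity and horizontal/vertical flips (NCHW input)."""
        transforms = [
            (lambda x: x, lambda x: x),                                                  # identity
            (lambda x: torch.flip(x, dims=[-1]), lambda x: torch.flip(x, dims=[-1])),    # horizontal flip
            (lambda x: torch.flip(x, dims=[-2]), lambda x: torch.flip(x, dims=[-2])),    # vertical flip
        ]
        with torch.no_grad():
            preds = [undo(torch.sigmoid(model(apply(image_tensor)))) for apply, undo in transforms]
        return torch.stack(preds).mean(dim=0)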

Transfer Learning

Hi! I'm using the weights of the model you trained on Inria to retrain a model on other aerial images. Which layers of the RefineNet should I retrain?
Thank you!
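A hedged sketch of one common recipe, not a recommendation from the author: keep the pretrained encoder frozen and retrain only the decoder/head on the new aerial imagery, optionally unfreezing the encoder later with a smaller learning rate. The "decoder" prefix below is an assumption; print model.named_parameters() to see the real layer names in RefineNet.

    import torch

    # `model` is the RefineNet loaded with the Inria-trained weights
    for name, param in model.named_parameters():
        param.requires_grad = name.startswith("decoder")  # train decoder, freeze the rest

    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4
    )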

Fine-tuning the pretrained model on my own dataset gives 0 precision, recall, F1, IoU after a few epochs

Hi, I am following your tips from the closed issue #42 Best Model:
load the entire weights file you shared in the repo, freeze the decoder part, then start training on my own dataset.

However, after a few epochs, the accuracy reached a pretty high value, but precision, F1, recall, and IoU dropped to zero, in both the training and validation metrics.

I tried to use my fine-tuned model to make inferences; apparently it predicts all the pixels of a test image as non-building.

I suspect it is because of the imbalanced dataset I have: the buildings are rather sparse. But I checked the Inria dataset you used, and it is not always dense with buildings either.

I am wondering what tricks you used to tackle this, or do you have any suggestions?

Thanks in advance! And great work you have done!
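A hedged suggestion rather than the author's confirmed fix: with very sparse buildings a plain per-pixel loss can collapse into predicting everything as background, which matches the all-non-building inference described above. Weighting the positive class, or mixing in a soft Dice term, is a common counter-measure; the weight value below is only a starting point.

    import torch
    import torch.nn as nn

    # pos_weight > 1 penalizes missed building pixels more than false alarms;
    # a rough starting point is (background pixels / building pixels) over the training set.
    criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([20.0]))

    def soft_dice_loss(logits, targets, eps=1e-6):
        """Soft Dice loss on sigmoid probabilities; `targets` is a 0/1 mask tensor."""
        probs = torch.sigmoid(logits)
        intersection = (probs * targets).sum()
        return 1.0 - (2.0 * intersection + eps) / (probs.sum() + targets.sum() + eps)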
