wangzhecheng / deepsolar
Nationwide household-level solar panel identification with deep learning
License: MIT License
I followed the README.md and ran
python train_classification.py --fine_tune=False
and got this error:
Traceback (most recent call last):
File "train_classification.py", line 17, in <module>
from inception import inception_model as inception
ImportError: No module named inception
So how can I make the inception module available?
P.S.
I have googled the error (see this and this), but found little of help, so I am asking here.
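(Not an official answer, just a guess from the import path: `inception.inception_model` looks like the archived Inception v3 code from the tensorflow/models repository, which is not a pip package. A minimal sketch of the workaround, assuming you have cloned that repository; the directory layout varies by revision, so `MODELS_DIR` is a placeholder you must adjust.)

```python
import sys

# Hypothetical workaround: put the directory that CONTAINS the `inception`
# package on the import path, so `from inception import inception_model`
# can resolve. The path below is an assumption, not a verified layout.
MODELS_DIR = "/path/to/tensorflow-models/inception"

sys.path.insert(0, MODELS_DIR)
print(MODELS_DIR in sys.path)  # True once the path is registered
# from inception import inception_model as inception  # should now import
```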
I was looking closely at the example image on your homepage and noticed that a few of the circles showing where solar panels are supposed to be are slightly off. It's odd, because they aren't false positives: there are solar panels nearby, the circles just aren't quite in the right place. I'm not sure whether this image was made by a human or a computer, though.
How did you generate the data list? I got an error (SPI_eval/eval_set_meta.csv does not exist), which means I have to create it manually, but what should it look like?
Hi Wang ZheCheng,
I find I have access denied to the two links:
https://s3-us-west-1.amazonaws.com/roofsolar/SPI_train.tar.gz
https://s3-us-west-1.amazonaws.com/roofsolar/SPI_eval.tar.gz
Is there something I need to be doing to get access?
Access to https://s3-us-west-1.amazonaws.com/roofsolar/SPI_val.tar.gz is fine.
Kind regards
Originally posted by @james-skipper115 in #4 (comment)
@wangzhecheng Thanks for the summary statistics. I was hoping to find a data dictionary somewhere? It looks like a join of your team's dataset against some census dataset I should probably be familiar with, but I am not.
Thanks!
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimension 3 in both shapes must be equal, but are 96 and 32. Shapes are [1,35,35,96] and [1,35,35,32].
From merging shape 2 with other shapes. for 'inception_v3/mixed_35x35x256a/concat/concat_dim' (op: 'Pack') with input shapes: [1,35,35,64], [1,35,35,64], [1,35,35,96], [1,35,35,32].
Please help in resolving this issue.
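(For context on what the message means: TF's 'Pack' op, like stacking in NumPy, requires every input tensor to have exactly the same shape; only a concatenation may differ, and only along the concatenation axis itself. A small NumPy sketch of the same constraint, using the shapes from the error message:)

```python
import numpy as np

a = np.zeros((1, 35, 35, 96))   # shapes taken from the error message
b = np.zeros((1, 35, 35, 32))

# Stacking (TF's 'Pack') requires identical shapes, so this fails:
try:
    np.stack([a, b])
except ValueError as err:
    print("stack failed:", err)

# Concatenating along the channel axis works; only axis 3 may differ:
c = np.concatenate([a, b], axis=3)
print(c.shape)  # (1, 35, 35, 128)
```

So the graph being built expects a 32-channel branch where a 96-channel tensor arrives (or vice versa), which usually points to a mismatch between the code building the network and the checkpoint or flags being used.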
@wangzhecheng thanks for your work!
I ran the test_segmentation.py and got this error:
......
Processing 5999/87502...
Processing 6000/87501...
Processing 6001/87500...
Traceback (most recent call last):
File "test_segmentation.py", line 223, in <module>
test()
File "test_segmentation.py", line 176, in test
stats[region_type][2] += 1
KeyError: 'r '
Also, in the path /DeepSolar-master/segmentation_results/TP, the CAM pictures are not grayscale images; they are identical to the original pictures.
Could you give me some help? Thanks.
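(Not a verified fix, but the KeyError: 'r ' has a trailing space in the key, which suggests the region label read from the metadata carries stray whitespace, so the lookup misses the 'r' entry. A defensive `.strip()` before indexing would sidestep that; `stats` and the label below are stand-ins for the variables in test_segmentation.py, which I have not inspected.)

```python
# Minimal sketch with hypothetical stand-in variables.
stats = {'r': [0, 0, 0], 'c': [0, 0, 0]}   # per-region counters
region_type = 'r '                          # label with a stray trailing space

key = region_type.strip()                   # 'r ' -> 'r'
if key in stats:
    stats[key][2] += 1

print(stats['r'])  # [0, 0, 1]
```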
@wangzhecheng I see that there are aggregate statistics publicly available, but is there a plan to release the raw panel location & size data?
Thanks for sharing the code, in any case!
Can you provide a license for the repo so we know how and where we can we use the code? Thank you.
Apparently, it is necessary to download the Inception Library. Unfortunately, the link in your Readme does not work anymore. Do you know where to download Inception v3?
I would like to use the trained models separately from the source code. The download URLs in the README file only contain checkpoint files, which can't be used separately from the original code, but I need the models in SavedModel format. Could you please provide download links for the classification and segmentation models in SavedModel format?
Dear Wang Zhecheng,
I would like to run DeepSolar for Central Europe, especially Germany.
The 5 GB of https://s3-us-west-1.amazonaws.com/roofsolar/SPI_eval.tar.gz is too much to download just to try it out.
Could you please tell me how the satellite images must be formatted? Is a TMS (tiled map service) sufficient?
Can I train the segmentation branch without training the classification branch? I only require the segmentation part of the script. When I try to do so with --two_layers=False I get the following error:
python3 train_segmentation.py --two_layers=False
Training set built. Size: 366467
Traceback (most recent call last):
File "/home/user/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1659, in _create_c_op
c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Shape must be rank 4 but is rank 2 for 'conv_aux_1/Conv2D' (op: 'Conv2D') with input shapes: [64,2], [3,3,288,512].
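(One possible reading of the error, unverified against the source: conv_aux_1 has [3, 3, 288, 512] filters, so it expects a rank-4 feature map of shape [batch, height, width, 288], but with --two_layers=False it is being fed a rank-2 tensor of logits ([64, 2]), i.e. the auxiliary head gets wired to the wrong tensor. The shapes alone show the mismatch:)

```python
# Shapes taken from the error message; function and variable names are
# illustrative only, not from the DeepSolar code.
conv_filter_shape = (3, 3, 288, 512)   # [kh, kw, in_channels, out_channels]
actual_input_shape = (64, 2)           # rank 2: logits, not a feature map

def conv2d_input_ok(input_shape, filter_shape):
    # A Conv2D input must be rank 4, and its channel dim must match the
    # filter's in_channels.
    return len(input_shape) == 4 and input_shape[-1] == filter_shape[2]

print(conv2d_input_ok(actual_input_shape, conv_filter_shape))   # False
print(conv2d_input_ok((64, 35, 35, 288), conv_filter_shape))    # True
```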
My package versions:
numpy 1.16.3
tensorflow 1.13.1 (required me to change some paths when creating the saver in train_classification.py)
scikit-image 0.15.0 (compatible with numpy 1.16.x unlike issue #7 )
pandas 0.23.4
I know these aren't the versions listed in requirements.txt; I'm currently changing the files to be compatible with Python 3.6. That may be the cause of the error, although I doubt it, since I have already updated everything for the new dependency versions. And I don't understand why my input matrix would have fewer elements just as a result of changing versions.
Changes I have made to make it compatible, e.g. from
open('train_set_list', 'r')
to
open('train_set_list.pickle', 'rb')
and from
# Create a saver.
saver = tf.train.Saver(tf.all_variables())
to
# Create a saver.
code_to_checkpoint_variable_map = {var.op.name: var for var in tf.all_variables()}
for code_variable_name, checkpoint_variable_name in {
    "CrossEntropyLoss/value/avg": "tower_0/tower_0/CrossEntropyLoss/value/avg",
    "aux_loss/value/avg": "tower_0/tower_0/aux_loss/value/avg",
    "total_loss/avg": "tower_0/tower_0/total_loss/avg",
}.items():
    code_to_checkpoint_variable_map[checkpoint_variable_name] = code_to_checkpoint_variable_map[code_variable_name]
    del code_to_checkpoint_variable_map[code_variable_name]
saver = tf.train.Saver(code_to_checkpoint_variable_map)
and from
for count in xrange(args)
to
for count in range(args)
Any help or advice would be much appreciated; if I get this working, I'll be happy to post a Python 3 compatible version of this on GitHub.
What's the image resolution?
Hi,
I am trying to run it locally, but I seem to get some version mismatch of numpy vs skimage.
I get this, when testing classification :
File "DeepSolar/.venv3/lib/python3.5/site-packages/skimage/util/arraycrop.py", line 8, in <module>
from numpy.lib.arraypad import _validate_lengths
ImportError: cannot import name '_validate_lengths'
requirements.txt has floating versions (and is also missing pandas).
Could you please provide the exact version of each dependency?
Also, which version of Python are you using, 2.7 or 3.5? I guess it's 3.5?
Thanks in advance for your help.
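(My own diagnosis, not the maintainer's: `_validate_lengths` was a private numpy helper that old scikit-image releases imported directly; newer numpy versions removed it, so the combination breaks. Pinning in either direction should resolve the import error:)

```
# requirements-style pins (pick one direction, not both):
numpy==1.15.4            # old enough to still ship _validate_lengths
# or
scikit-image>=0.14.2     # new enough to no longer import the private helper
```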
I am running the model on the test set and on a personal dataset, following the instructions, and I find that the FN rate is very high: while the precision is close to what is reported in the paper, the recall is not nearly the same (10% and 40% for residential and commercial areas respectively).
Anyone know what the issue is?
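(As a sanity check on why the two metrics can diverge like this: a high false-negative count drags recall down while leaving precision untouched, since precision never sees the FNs. Illustrative counts only, not numbers from the paper:)

```python
def precision_recall(tp, fp, fn):
    """Standard definitions: precision = TP/(TP+FP), recall = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Many false negatives, almost no false positives: precision stays high,
# recall collapses, matching the pattern described above.
p, r = precision_recall(tp=40, fp=4, fn=60)
print(round(p, 2), round(r, 2))  # 0.91 0.4
```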
Hi Wang ZheCheng,
Could you please let me know where I can find training and testing data? The access is denied to the two links:
https://s3-us-west-1.amazonaws.com/roofsolar/SPI_train.tar.gz
https://s3-us-west-1.amazonaws.com/roofsolar/SPI_eval.tar.gz
Thank you for the nice description and scripts.