brandonsmithj / mdn
Mixture Density Network for water constituent estimation
License: GNU General Public License v3.0
Hello, I'm trying to run the code. I tried both MSI data and HICO data, but it always says that the weights are not found, even though I can see that the repo includes zip files for the weights. Can you please provide more information on this, or tell me where I can find the weights?
I've also tried to run training on MSI data. I struggled to work out the structure of the data directory:
data/loc/MSI/Rrs.csv, data/loc/MSI/Rrs_wvl.csv, and data/loc/Chl.csv. Training started, but the logs don't provide any information on how many epochs have completed, when it will finish, or the accuracy. Could you provide more details about the training process?
Thank you
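For reference, the layout described above (one folder per sensor holding the Rrs spectra and wavelengths, with the target product at the location level) can be scaffolded with a short script. The exact file names the training code expects are an assumption taken from this post, so verify them against the repository before training:

```python
from pathlib import Path

# Assumed layout, based on the paths listed in the post above; not
# confirmed against the repository's documentation.
base = Path("data/loc")
for rel in ("MSI/Rrs.csv", "MSI/Rrs_wvl.csv", "Chl.csv"):
    target = base / rel
    target.parent.mkdir(parents=True, exist_ok=True)  # create loc/ and MSI/
    target.touch()  # placeholder; fill with real Rrs / Chl values
```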
I also had trouble downloading the weights, as noted in the previous posting. I downloaded them separately from the links in your response. I unzipped each of these in the Weights folder, and they generated additional zip files, which I unzipped as well. In two cases the second zip file unzipped into the base directory, while the first created a separate subdirectory, e.g.:
[ndetenbe@atmos2 OLI]$ ls
70c22252dff6bc608d1d8e15b1d2d9e62cdaceecf7217f268192964a4c4c1871
70c22252dff6bc608d1d8e15b1d2d9e62cdaceecf7217f268192964a4c4c1871.zip
Round_0
Round_1
Round_2
Round_3
Round_4
Round_5
Round_6
Round_7
Round_8
Round_9
c42180fb52344bc80726538ecdfb07569b29cc802f27ab3c2789ab41302d701e.zip
config
I just want to make sure this structure is correct. If not, I can generate a subdirectory myself and unzip within it.
Hi Brandon, great tool, and I'm excited to see the work accompanying the recent RSE publication made public!
I tried to set up this code on my local Linux machine (a cloud Ubuntu 20.04 VM), but have an issue with the setup. I used conda to create an environment: requirements_cz.txt
When I use the simple example with random data that you provided on the introduction page, I run my test script: python3 -m test.py
and received the following error:
File "<stdin>", line 1, in <module>
File "../MDN/product_estimation.py", line 174, in image_estimates
estimate = function(im_data, sensor=sensor, **kwargs) if im_data.size else np.zeros((0, 1))
File "../MDN/product_estimation.py", line 138, in apply_model
preds, idxs = get_estimates(args, x_test=x_test)
File "../MDN/product_estimation.py", line 45, in get_estimates
model_path = generate_config(args, create=x_train is not None)
File "../MDN/utils.py", line 348, in generate_config
uncompress(folder) # Unzip the archive if necessary
File "../MDN/utils.py", line 107, in uncompress
with zipfile.ZipFile(path.with_suffix('.zip'), 'r') as zf:
File "/home/user/anaconda3/envs/MDN/lib/python3.8/zipfile.py", line 1269, in __init__
self._RealGetContents()
File "/home/user/anaconda3/envs/MDN/lib/python3.8/zipfile.py", line 1336, in _RealGetContents
raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file
any clue?
Chui Zeng
My test.py content is as below:
from MDN import image_estimates, get_tile_data, get_sensor_bands
sensor = "OLCI" # "<OLI, MSI, OLCI, or HICO>"
# Or, with just random data:
import numpy as np
random_data = np.random.rand(3, 3, len(get_sensor_bands(sensor)))
chla, idxs = image_estimates(random_data, sensor=sensor)
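One common cause of a BadZipFile error like the one in the traceback above is that the repository was cloned without Git LFS installed, so the ".zip" weight files on disk are actually small LFS pointer text files rather than real archives. A quick diagnostic (a generic sketch, not part of the MDN package):

```python
import zipfile

def looks_like_lfs_pointer(path):
    """Return True if the file is a Git LFS pointer rather than real content."""
    with open(path, "rb") as f:
        head = f.read(64)
    # LFS pointer files always begin with this version line.
    return head.startswith(b"version https://git-lfs.github.com/spec")

def check_weights_archive(path):
    """Classify a downloaded weights file: LFS pointer, bad zip, or ok."""
    if looks_like_lfs_pointer(path):
        return "LFS pointer: run 'git lfs pull' to fetch the real file"
    if not zipfile.is_zipfile(path):
        return "not a zip file"
    return "ok"
```

If the check reports an LFS pointer, running git lfs install followed by git lfs pull inside the clone should replace the pointers with the actual archives.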
Hi Brandon,
What methods are there to precompute the weights for something like this? Is it possible to just pass the targets through an MDN as an input initially, to get good starting weights for the priors/means/variances?
Also for full covariance, does care need to be given to which targets you regress together?
James
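For context on what those mixture parameters look like in practice, here is a generic MDN output head in NumPy (a sketch of the standard formulation, not the code from this repository): the network's raw output vector is split into mixing coefficients (priors), means, and log-standard-deviations, so any warm-start scheme would need to initialize the layer producing these logits.

```python
import numpy as np

def mdn_head(raw, n_components, n_targets):
    """Split a network's raw output into mixture parameters.

    raw: (batch, n_components * (1 + 2 * n_targets)) array of logits.
    Returns (priors, means, sigmas).
    """
    k, t = n_components, n_targets
    pi_logits = raw[:, :k]
    means     = raw[:, k:k + k * t].reshape(-1, k, t)
    log_sigma = raw[:, k + k * t:].reshape(-1, k, t)

    # Softmax so the mixing coefficients (priors) sum to 1 per sample
    e = np.exp(pi_logits - pi_logits.max(axis=1, keepdims=True))
    priors = e / e.sum(axis=1, keepdims=True)

    # Exponentiate so the standard deviations stay positive
    sigmas = np.exp(log_sigma)
    return priors, means, sigmas

# Example: batch of 4, 5 mixture components, 3 regression targets
raw = np.random.randn(4, 5 * (1 + 2 * 3))
priors, means, sigmas = mdn_head(raw, n_components=5, n_targets=3)
```

Under a full-covariance formulation, the log_sigma block would instead parameterize the (lower-triangular) Cholesky factor of each component's covariance, which is where the choice of which targets to regress together starts to matter.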
Very nice job! I'm excited to see this kind of work, and thank you for your generous code. Such a complete work is very helpful for a beginner like me.
As a beginner, I find it very difficult to organize the data. Could you share the original data from your paper, such as "path/to/my/tile.nc", along with the training and test data?
Thanks!
I recently downloaded and debugged your project, and found that I have some problems running it. What are the input and output of the program, and how can I make it work normally? If possible, could you leave your email address to facilitate our communication?
Thank you for making this code available! Your paper about chla retrievals was very interesting and I'm looking forward to understanding the model more.
Was wondering if you have considered adding a setup.py file for easy use? (related to #2 )
It would make pip install git+https://github.com/BrandonSmithJ/MDN possible.
I used this workaround (https://stackoverflow.com/questions/9714635/how-to-install-python-module-without-setup-py/9714750#9714750) in the meantime, but a setup.py would perhaps be helpful for future use.
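A minimal setup.py for this purpose might look as follows; note that the version number and dependency list here are placeholders, not taken from the repository:

```python
# setup.py -- minimal packaging sketch; metadata values are placeholders.
from setuptools import setup, find_packages

setup(
    name="MDN",
    version="0.0.1",  # placeholder version
    description="Mixture Density Network for water constituent estimation",
    license="GPLv3",
    packages=find_packages(),
    install_requires=[],  # fill in from the repository's requirements file
)
```

With a file like this at the repo root, pip install git+https://github.com/BrandonSmithJ/MDN should become possible.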
Hi,
First, thanks for developing this package. It is extremely useful. I am using it in some training notebooks that I am developing to support pan-sensor ocean colour applications at EUMETSAT. However, I have a problem. Everything else I have developed to date is licensed under MIT. I have gone extensively out of my way to make sure that this is the case because, since GPL "contaminates" everything that it touches, the organisation is not very keen to use it. This is the only package for which I have not been able to find a work-around (developing alternative approaches to MDNs being far beyond me!). I am wondering if you would consider releasing this under dual GPL / MIT licences?
I have no intention of limiting the distribution of this code, everything I do is open source anyway. But this does, of course, open up the prospect of using the MIT license to use the code in a proprietary setting for anyone who would want to do so.
I am open to alternative suggestions, should you have them!
Cheers,
Ben
I have provided my Rrs_data.csv as follows. At the end it gives me a CSV file containing chla values. If I want to predict aph or other water constituents and visualize them, what commands or steps do I need to follow?
$ python3 -m MDN --sensor HICO --plot_loss /home/phoenix/anaconda3/envs/MDN/lib/python3.9/site-packages/MDN/Rrs_data.csv
Generating estimates for 1331 data points ((1331, 54))
Input Rrs
Shape: (1331, 54)
N Valid: 1331
Minimum: [-20.81, -21.06, -22.10, -11.29, -8.22, -2.54, -5.29, -3.16, -9.94, -2.60, -2.62, -2.79, -1.53, -2.74, -1.83, -1.63, -1.29, -0.83, -0.69, -0.68, -0.63, -0.78, -0.78, -0.48, -0.35, -0.41, -0.55, -0.60, -0.61, -0.69, -0.30, -0.56, -0.33, -0.42, -0.29, -0.33, -0.39, -0.32, -0.31, -0.26, -0.35, -0.32, -0.30, -0.25, -0.19, -0.21, -0.21, -0.27, -0.28, -0.26, -0.18, -0.24, -0.20, -0.17]
Maximum: [136.21, 145.81, 25.79, 12.17, 13.88, 40.83, 10.26, 3.87, 22.40, 3.46, 2.89, 3.11, 2.26, 3.30, 7.02, 3.00, 1.58, 2.92, 1.49, 2.69, 2.16, 1.62, 1.40, 0.62, 0.85, 0.47, 0.70, 0.70, 0.61, 0.70, 0.35, 0.79, 0.60, 0.81, 0.61, 0.80, 0.64, 0.55, 0.47, 0.35, 0.39, 0.40, 0.54, 0.27, 0.39, 0.24, 0.29, 0.40, 0.36, 0.30, 0.34, 0.24, 0.52, 0.31]
100%|██████████████████████████████████| 10/10 [00:01<00:00, 5.67it/s]
MDN Estimates
Shape: (1331, 1)
N Valid: [1331]
Minimum: [ 0.00]
Maximum: [ 62.72]
Saving estimates at location "/home/phoenix/anaconda3/envs/MDN/lib/python3.9/site-packages/MDN/MDN_Rrs_data.csv"
I ran the MDN model directly, and the output chla has no correlation with my field data. I therefore think I may have to calibrate the model using my field data for the lake I am interested in.
Would you please provide a little example of how to train the model with my own data?
Thank you for sharing the code and thank you very much for your help.
Dear Brandon,
We are having difficulty downloading the entire repository; more specifically, we cannot download the model weights because the repo has reached its data quota (cf. the log extracts below).
I may also have made a mistake somewhere.
But if not, is there any other place where we can download the weights?
Thank you very much in advance for your help,
Guillaume
(extracted from the log generated just now):
""
git-lfs/3.2.0 (GitHub; linux amd64; go 1.18.2)
git version 2.25.1
$ git-lfs filter-process
Error downloading object: Weights/HICO/45313342cb628c8cf45b6e2e29f4dc9a780ee1d403bdb98461e28fcb13ad9ce3.zip (c3b0e04): Smudge error: Error downloading Weights/HICO/45313342cb628c8cf45b6e2e29f4dc9a780ee1d403bdb98461e28fcb13ad9ce3.zip (c3b0e0457ebacd53d842b0541fc60233f70eb75c45b6842333f1918eddc38f07): batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.
[...]
""
Are there plans to add the MDN package to PyPI for easier use?
Hello, how should I train the MDN model with my own data set ?
Hi Brandon, I am using your code to train on my own data, and I can't find the 'generate_estimates' function in the 'product_estimation.py' file. Could you fix this issue? Thanks.