clarkcga / multi-temporal-crop-classification-baseline

Baseline model for crop type segmentation as part of the HLS FM downstream task evaluations
I've had some headaches evaluating on another dataset, since the implementation of `_generate_matrix` here assumes nodata is encoded as zero. In my dataset, however, 0 is a valid class and 255 is nodata.
I suggest either documenting this encoding expectation, or allowing a nodata value to be passed in, similar to here and here.
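To illustrate the request, here is a minimal sketch of a confusion-matrix helper that takes the nodata value as a parameter rather than hardcoding 0. The function name and signature are illustrative, not the repo's actual API:

```python
import numpy as np

def generate_matrix(gt, pred, num_classes, nodata_value=255):
    """Accumulate a confusion matrix, ignoring pixels equal to nodata_value."""
    mask = gt != nodata_value                  # keep only valid ground-truth pixels
    gt, pred = gt[mask], pred[mask]
    idx = num_classes * gt.astype(int) + pred  # flatten (gt, pred) pairs to a single index
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

# e.g. with nodata encoded as 255, class 0 pixels are still counted
cm = generate_matrix(np.array([0, 1, 255, 1]), np.array([0, 1, 0, 0]),
                     num_classes=2, nodata_value=255)
```

With `nodata_value=0` this reduces to the current behaviour, so adding the parameter would be backwards compatible.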
Added this cell, might be useful to include:
# load a previously trained model
import torch

checkpoint_path = "../output6/Unet_ep100/chkpt/Unet_final_state.pth"
checkpoint = torch.load(checkpoint_path, map_location=torch.device('cpu'))  # or map to a GPU device
# Remove the 'module.' prefix if present (for nn.DataParallel compatibility)
new_state_dict = {k.replace('module.', '', 1): v for k, v in checkpoint.items()}
model.load_state_dict(new_state_dict)
From the following config I assume a fair bit of experimentation was performed to arrive at these parameters. Are you able to shed light on the experiments that were run? I am seeking to compare Prithvi and U-Net both when typical defaults are used and when optimised.
Many thanks
```yaml
use_skipAtt: false
train_dropout_rate: 0.15
optimizer: sam
LR: 0.011
LR_policy: PolynomialLR
criterion:
  name: TverskyFocalLoss
  weight:
    - 0.0182553
    - 0.03123664
    - 0.02590038
    - 0.03026126
    - 0.04142966
    - 0.04371284
    - 0.15352935
    - 0.07286951
    - 0.10277024
    - 0.10736637
    - 0.1447082
    - 0.17132445
    - 0.0566358
  ignore_index: 0
  gamma: 0.9
```
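One observation: the thirteen class weights above sum to exactly 1.0, which is consistent with normalized inverse-frequency weighting. A hedged sketch of how such weights are commonly derived (the pixel counts below are illustrative, not from the actual dataset):

```python
import numpy as np

def inverse_frequency_weights(pixel_counts):
    """Weight each class by the inverse of its pixel count, normalized to sum to 1."""
    inv = 1.0 / np.asarray(pixel_counts, dtype=float)
    return inv / inv.sum()

# rarer classes (smaller counts) receive larger weights
w = inverse_frequency_weights([500, 300, 100, 50])
```

Whether the repo's weights were actually computed this way, or tuned further, is exactly what the question above asks.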
Hi,
the default_config references a dataset:
train_dataset_name: chips_filtered_13_classes_complete
Can you confirm this is the exact dataset from HF, and not a modified version (as implied by the reference to filtering)?
Thanks
Issues I found:
- Config.yaml: we should make it clear where the different options are used (i.e. in which specific scripts). Sam also mentioned we might need to make updates to this.
- Update the README with the new repo name, in the 'clone' command of Step 1.
- Remove the test code near the end of the main notebook (Sam noted this).

I think these are the main issues. I also mentioned to Sam that it might help to have examples for the commands in the README (i.e. examples with specific file paths instead of `<file_path>`), but I'm not sure if that's typically done.
Setting `gpu_devices=[0, 1, 2, 3]` and calling `compiled_model.fit`, I receive:
RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu
Debugging
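This error usually means the model was wrapped in `nn.DataParallel` while its parameters were still on the CPU: DataParallel requires the module's parameters and buffers to already live on `device_ids[0]` (here cuda:0) before `forward` runs. A minimal sketch of the likely fix, using a stand-in module since the repo's model construction isn't shown here:

```python
import torch
import torch.nn as nn

gpu_devices = [0, 1, 2, 3]
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Stand-in for the real model; the key step is .to(device) BEFORE training starts,
# so that DataParallel finds all parameters on device_ids[0].
model = nn.Linear(8, 2).to(device)
if torch.cuda.is_available():
    model = nn.DataParallel(model, device_ids=gpu_devices)
# compiled_model.fit(...) should now run without the device-mismatch error
```

If the training wrapper constructs DataParallel internally, check that it moves the model to cuda:0 before (or immediately after) wrapping; loading a checkpoint with `map_location='cpu'` and never moving the model back is a common way to hit this.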