- Link to TRUBA VPN connection setup and major SLURM functions is here.
- Link to runtime script definitions and options is here.
- Use .env_template to configure input/output paths.
- TRUBA Python environment installation instructions are here.
- Data
sentinel_traj_nn's Introduction
sentinel_traj_nn's Issues
Resolve label saving issues
Some labels are rendered correctly, but others come out completely empty even though they are not empty in the raw label TIFF file.
Create running scripts for TRUBA for new datasets and codes
Add learning rate controls and additional optimizers to model_repository.py
Optimizers and their parameters are hard-coded in model_repository.py.
- Add Adam optimizer.
- Add SGD parameters (defined in the input file but not piped to the model).
- Add dice loss.
- Add binary_crossentropy loss.
- Run a runtime test.
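A minimal sketch of the config-driven selection this issue asks for. The config keys (`optimizer`, `learning_rate`, `momentum`, `nesterov`) and the function name are assumptions, not the repository's actual schema; the returned pair would then be handed to the matching `tf.keras.optimizers` constructor in model_repository.py.

```python
def optimizer_spec(config):
    """Map a model-config dict to an optimizer name plus keyword arguments,
    instead of hard coding them in model_repository.py.
    Example config: {"optimizer": "sgd", "learning_rate": 0.01, "momentum": 0.9}.
    """
    name = config.get("optimizer", "adam").lower()
    kwargs = {"learning_rate": config.get("learning_rate", 1e-3)}
    if name == "sgd":
        # Pipe the SGD parameters from the input file through to the model.
        kwargs["momentum"] = config.get("momentum", 0.0)
        kwargs["nesterov"] = config.get("nesterov", False)
    elif name != "adam":
        raise ValueError("Unsupported optimizer: %s" % name)
    return name, kwargs
```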
Review: https://medium.com/beyondminds/advances-in-generative-adversarial-networks-7bad57028032
Resolve folder naming bug: folder name contains colons
Colons in the folder name cause data copy issues. Replace them with underscores.
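A one-line sketch of the proposed fix (the function name is hypothetical): sanitize the generated folder name before creating it, so timestamp-derived colons never reach the filesystem.

```python
def sanitize_folder_name(name):
    """Replace colons, which break copy tools on some filesystems, with
    underscores. E.g. 'run_2021-06-01 12:30:05' -> 'run_2021-06-01 12_30_05'."""
    return name.replace(":", "_")
```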
Data generator reshape sections review
Move model selection to model config json
Model selection is currently fed through the command-line script. Move it to the model config JSON files.
Implement attention mechanisms
Implement a custom mIoU for binary classification.
Add validation set size as parameter.
The current setup uses all validation data at once for the validation run. Add a limit on the validation dataset size as a model configuration parameter.
Add checkpoint callback.
Review: https://medium.com/the-downlinq/sar-101-an-introduction-to-synthetic-aperture-radar-2f0b6246c4a0
Test run SRCNN+MSI on Truba
Apply test runs on TRUBA to determine the relationship between batch size, patch size, epoch count, GPU count, CPU count and model depth.
- Run models
- Evaluate results
- Create speed-up efficiency graphs (? - mentioned in the TIK in June)
Create Istanbul trajectory dataset
- Archive local dataset to portable disc storage
- Review Montreal data to use the same resolution options
- Run trajectory rasterization scripts for IBB data.
Improve output analysis jupyter notebook
- Check if the model is completed
- Get the model type
- Read model parameters from model config and print
- Read input files from model input config and print
- Get the model size as all/trainable parameters
- Print model structure
- Analyze metric outputs
- Add option for trajectory output visualisation
- Analyze runtime log data
Run models with reduced IoU threshold - determine an optimum threshold.
Truba test run for processing time estimation.
Review: https://medium.com/the-downlinq/sar-201-an-introduction-to-synthetic-aperture-radar-part-2-895beb0b4c0a
Work for Hopfield networks
Output file names collide when jobs start running at the same time
- Add the SLURM job ID to the output job name for the error log, output log, runtime log and project folder
- Add the same timestamp to the log files and the project folder
- Add copy functionality for the runtime, error and output logs to the project folder
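The three bullets above could be wired through a single run identifier; a sketch, assuming the function names and log-name pattern (only `SLURM_JOB_ID` is a real SLURM environment variable):

```python
import os
import time

def run_identifier():
    """Combine the SLURM job ID (unique per job) with one shared timestamp so
    jobs that start in the same second still get distinct names."""
    job_id = os.environ.get("SLURM_JOB_ID", "local")
    stamp = time.strftime("%Y%m%d_%H%M%S")
    return "{}_{}".format(job_id, stamp)

def log_paths(base_dir, run_id):
    """Derive the error/output/runtime log names and the project folder from
    the same run id, so they can later be copied together."""
    return {
        "error": os.path.join(base_dir, "error_{}.log".format(run_id)),
        "output": os.path.join(base_dir, "output_{}.log".format(run_id)),
        "runtime": os.path.join(base_dir, "runtime_{}.log".format(run_id)),
        "project": os.path.join(base_dir, "project_{}".format(run_id)),
    }
```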
Add final classification metrics over test data.
Implement dice loss
Available in: https://github.com/Paulymorphous/skeyenet/blob/master/Src/loss_functions.py
Implement for binary classification.
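For reference, a NumPy illustration of the soft Dice formula this issue asks for; the training code itself would use the tf.keras version (e.g. the one in the skeyenet loss_functions.py linked above), but the arithmetic is the same:

```python
import numpy as np

def dice_loss(y_true, y_pred, smooth=1.0):
    """Soft Dice loss for binary masks: 1 - (2*|A∩B| + s) / (|A| + |B| + s).
    `smooth` avoids division by zero on empty masks."""
    y_true = np.asarray(y_true, dtype=np.float32).ravel()
    y_pred = np.asarray(y_pred, dtype=np.float32).ravel()
    intersection = np.sum(y_true * y_pred)
    return 1.0 - (2.0 * intersection + smooth) / (
        np.sum(y_true) + np.sum(y_pred) + smooth)
```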
Update the loss function to average by batch size
Apply UNET only and SRCNN to MSI batch only
Resolve command line running issues before Truba run: https://bic-berkeley.github.io/psych-214-fall-2016/sys_path.html
Implement late SRCNN-Unet
The current SRCNN-UNET model has the SRCNN structure right after the input layers. Move the SRCNN structure between the last convolution layer (the current output) and new output layers: keep the current output layers as a transition layer (keep everything as is) and append the SRCNN structure.
before:
output_layer = Conv2D(2, (1, 1), padding="same", activation="sigmoid")(up_conv1)
self.model = tf.keras.Model(inputs=input_layers, outputs=output_layer)
after:
transition_layer = Conv2D(2, (1, 1), padding="same", activation="sigmoid")(up_conv1)
srcnn = Conv2D(64, (9, 9), activation="relu", padding="same")(transition_layer)
srcnn = Conv2D(32, (1, 1), activation="relu", padding="same")(srcnn)
output_layer = Conv2D(1, (5, 5), activation="relu", padding="same")(srcnn)
self.model = tf.keras.Model(inputs=input_layers, outputs=output_layer)
Regenerate local testing environment
- Input paths
- Output paths
- Pycharm tuning
- Runtime tests for confirmation
Review: https://medium.com/@sh.tsang/review-mdesnet-multichannel-densely-convolutional-network-super-resolution-cd785ab09300
Create IGARSS attendance schedule
Add threshold based normalization for trajectory_count to raster_standardize.py
The current trajectory_count option uses min-max normalization. Add a new option, traj_count_treshold, to normalize values to the [0, threshold] interval.
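One reading of the requested option, as a sketch (the function name and clip-then-scale behavior are assumptions about what raster_standardize.py should do): counts are capped at the threshold and scaled, so a handful of extreme trajectory counts no longer flattens the rest of the raster the way plain min-max does.

```python
import numpy as np

def traj_count_threshold(band, threshold):
    """Clip trajectory counts at `threshold`, then scale to [0, 1]."""
    clipped = np.clip(np.asarray(band, dtype=np.float32), 0, threshold)
    return clipped / float(threshold)
```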
Improve MTL datasets
- Create nearest-neighbor based images
- Create label data with "DEFLATE" compression (similar to Istanbul data)
Add residual unit to SRCNN+MSI study
Implement additional loss functions
- Binary crossentropy loss + dice loss
- Dice loss (with threshold)
Also check other loss functions available in the literature.
Improve data augmentation: the empty areas created after rotation are causing classification issues.
Out of range error in Istanbul data generation
Getting the following error:
ERROR 5: /home/nagellette/Desktop/final_data/ist_data/T35TPF_20191109T090201_B02_10m_clipped.tif, band 1: Access window out of range in RasterIO(). Requested
(58495,31975) of size 362x362 on raster of 35687x31533.
Also getting the following error, most probably because the one above creates an empty image. Check and test this one too after resolving the one above:
TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
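One way to guard the generator against the out-of-range read, sketched with hypothetical names (the clamping arithmetic is generic; the actual fix would feed the clamped window to GDAL's ReadAsArray/RasterIO):

```python
def clamp_window(x_off, y_off, x_size, y_size, raster_w, raster_h):
    """Clamp a requested read window to the raster extent, so a request like
    (58495, 31975) of size 362x362 on a 35687x31533 raster never reaches
    RasterIO. Returns None when the window lies fully outside the raster."""
    x0, y0 = max(0, x_off), max(0, y_off)
    x1 = min(x_off + x_size, raster_w)
    y1 = min(y_off + y_size, raster_h)
    if x1 <= x0 or y1 <= y0:
        return None  # nothing to read; skip the patch instead of erroring
    return x0, y0, x1 - x0, y1 - y0
```

Skipping (or padding) the `None` case should also remove the downstream `isnan` TypeError, since no empty image is produced.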
MSI + SAR Fusion
Save prediction examples before normalization and correct output file name
- The current implementation shows preprocessed (normalized) pixel values. This causes false colors when rendered in matplotlib.
- The band file names are confusing: the current code names the files from 0 to n using a "B" index for Sentinel images. Instead, derive the original index from the band name, e.g. "B2", "B8".
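A sketch of the band-name extraction (the function name and the assumption that the designator is always underscore-delimited, as in the Sentinel-2 file names quoted elsewhere in these issues, are mine):

```python
import re

def band_index(filename):
    """Extract the Sentinel band designator (e.g. 'B02', 'B8A') from a name
    like 'T35TPF_20191109T090201_B02_10m_clipped.tif', so prediction outputs
    can be named by the original band instead of a 0..n counter."""
    match = re.search(r"_(B\d{1,2}A?)_", filename)
    return match.group(1) if match else None
```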
Label data review
Check "sentinel_msi" standardization in raster_standardize.py
The current method uses 10000 as the upper threshold for pixel values. Cubic interpolation inflates pixel values, which may push them above this threshold. Check whether the upper boundary can be handled in a flexible and logical manner.
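One flexible alternative to the fixed 10000 cap, as a sketch (the parameter names are illustrative, not raster_standardize.py's actual signature): derive the upper bound from a high percentile, so cubic-interpolated overshoots are clipped consistently per scene.

```python
import numpy as np

def msi_standardize(band, upper=None, percentile=99.9):
    """Scale MSI values to [0, 1]. With upper=None the bound is taken from a
    high percentile of the band instead of a hard-coded 10000."""
    band = np.asarray(band, dtype=np.float32)
    if upper is None:
        upper = np.percentile(band, percentile)
    return np.clip(band, 0, upper) / upper
```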
Create 2.5m resolution training data
Required after label data review.
Implement "light" variants of Unet&ResUnet
- LightUnet
- LightResUnet
Check the behaviour of the mean_iou metric
Convert the current implementation into the TensorFlow one with a threshold option.
Implement custom mIoU metric
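A NumPy sketch of what the custom metric should compute (a reference for testing; the Keras-side implementation would subclass `tf.keras.metrics.Metric` or wrap `MeanIoU` after thresholding):

```python
import numpy as np

def binary_mean_iou(y_true, y_pred, threshold=0.5):
    """mIoU for binary segmentation: threshold the predicted probabilities,
    compute IoU for foreground and background, and average the two."""
    pred = np.asarray(y_pred) >= threshold
    true = np.asarray(y_true).astype(bool)
    ious = []
    for cls in (True, False):  # foreground, then background
        inter = np.sum((pred == cls) & (true == cls))
        union = np.sum((pred == cls) | (true == cls))
        ious.append(inter / union if union else 1.0)
    return float(np.mean(ious))
```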
Add additional metrics
The current accuracy metrics look suspicious. Add the following additional metrics to be saved in the callback:
- BinaryAccuracy with threshold 0.5
- Recall with threshold 0.5 - https://datascience.stackexchange.com/questions/80890/keras-p-r-metrics-at-different-thresholds-during-training
- Precision with threshold 0.5 - same as recall
- Convert metrics list calls to a separate function in utils (sentinel_traj_nn/models/model_repository.py, line 142 at 0f1531c)
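As a cross-check for the suspicious accuracy numbers, this is what the three thresholded metrics compute, in NumPy (the callback itself would register `tf.keras.metrics.BinaryAccuracy`, `Precision` and `Recall` with their threshold argument set to 0.5; the function name here is mine):

```python
import numpy as np

def thresholded_scores(y_true, y_pred, threshold=0.5):
    """Binary accuracy, precision and recall after thresholding probabilities."""
    pred = (np.asarray(y_pred) >= threshold).astype(int)
    true = np.asarray(y_true).astype(int)
    tp = int(np.sum((pred == 1) & (true == 1)))
    fp = int(np.sum((pred == 1) & (true == 0)))
    fn = int(np.sum((pred == 0) & (true == 1)))
    return {
        "accuracy": float(np.mean(pred == true)),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }
```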
Review: https://www.linkedin.com/posts/spacenet-llc_spacenet-6-dataset-release-activity-6632767483931615232-bntS
Upgrade Tensorflow version on local+TRUBA
- Test TRUBA support for the latest version of TensorFlow.
- Upgrade the TRUBA runtime environment to the new TensorFlow version if the previous test succeeds.
- Upgrade the local coding environment to the latest version of TensorFlow.
Implement Additional Semantic Segmentation Models
- ResUnet: https://medium.com/@nishanksingla/unet-with-resblock-for-semantic-segmentation-dd1766b4ff66
- D-LinkNet: https://github.com/zlkanata/DeepGlobe-Road-Extraction-Challenge/blob/master/networks/dinknet.py & https://towardsdatascience.com/understand-and-implement-resnet-50-with-tensorflow-2-0-1190b9b52691
- Segnet vs. Unet
- FuNet: https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwiLnsn1l6vuAhUvyoUKHQszDGQQFjAAegQIBRAC&url=https%3A%2F%2Fwww.mdpi.com%2F2220-9964%2F10%2F1%2F39%2Fpdf&usg=AOvVaw1k5uu8zg7QGzDHXm_KqFLa
- DeeplabV3+
- Mask R-CNN: https://ai.googleblog.com/2018/03/semantic-image-segmentation-with.html & https://github.com/matterport/Mask_RCNN
- Add others
Create callback for runtime analysis
Convert the output save from a single-file implementation to a folder with multiple files.
Add input file size verifier
The input file size is currently read from the "label" file dimensions, and the rest of the code uses that value. This causes issues when input files accidentally have different sizes. Add a verifier function to check, compare and report the equality of raster dimensions.
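A sketch of the verifier (the function name and the `name -> (width, height)` input shape are assumptions): compare every raster against the label raster's dimensions and report the mismatches instead of failing later.

```python
def verify_raster_dimensions(sizes):
    """Check that every input raster matches the label raster's dimensions.
    `sizes` maps file name -> (width, height); returns mismatching names."""
    if "label" not in sizes:
        raise ValueError("Label raster missing from inputs.")
    expected = sizes["label"]
    return [name for name, dims in sizes.items() if dims != expected]
```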