deep-subspace-clustering-networks's Issues
diag(Coef) = 0?
Hi,
I couldn't find the line that forces the diagonal of the self-expressive matrix to be zero, which is needed to avoid the trivial solution.
Could you please point me to it?
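For what it's worth, the usual fix subtracts the diagonal from the coefficient matrix before the self-expressive product. A minimal NumPy sketch of the idea (my own illustration, not the repo's TensorFlow code):

```python
import numpy as np

# Hypothetical small coefficient matrix; in the repo this is the
# trainable N x N variable `Coef`.
C = np.arange(9.0).reshape(3, 3)

# Zero the diagonal so no point reconstructs itself,
# ruling out the trivial solution C = I.
C_no_diag = C - np.diag(np.diag(C))
print(C_no_diag)
```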
Parameters of the comparative experiment "EDSC"
Hello, when convenient, could you share the parameters used for "Efficient Dense Subspace Clustering" (your other paper, also one of the compared methods) on COIL20? I can't reproduce its results.
How to fix the N² problem?
Hi, nice job! After reading your paper, though, I have the same question as NIPS Reviewer 2:
"One of the drawbacks of the proposed approach is the fact that the number of weights between the middle layers of the network is $N^2$, given $N$ data points. Thus, it seems that the method will not be scalable to very large datasets."
Any idea how to fix this? Thanks in advance.
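For a rough sense of the scale involved (my own back-of-the-envelope arithmetic, not from the paper):

```python
# The self-expressive layer is an N x N coefficient matrix, so its
# parameter count grows quadratically in the number of data points N.
def self_expressive_params(n_points: int) -> int:
    return n_points ** 2

# ORL has 400 images and COIL100 has 7200; 100k points is hypothetical.
for n in (400, 7200, 100_000):
    p = self_expressive_params(n)
    print(f"N={n}: {p} weights, ~{p * 4 / 1e9:.2f} GB as float32")
```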
In this code, is diag(C) not equal to 0?
Need Help
Feature selection / extraction
Can feature selection be done with this model to know what features are the most relevant for clustering?
How to run the code on Hopkins155?
How do you downsample the images to 48×42, as mentioned in the paper, for the Hopkins155 dataset? Would it be possible to upload the .mat file for Hopkins155?
Some questions about experiment parameter settings
Hello, Professor Ji,
What do the post-clustering parameters alpha, dim_subspace and r0 mean, and how should they be set for each dataset?
Thank you very much for your attention!
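I'm not the author, but in case it helps other readers: post-processing in SSC-style methods typically turns C into a symmetric affinity matrix before spectral clustering, and alpha, dim_subspace and r0 are knobs of the repo's particular variant of that step. A generic sketch of the affinity construction only (my assumption about the family of methods, not the repo's exact routine):

```python
import numpy as np

# Generic SSC-style affinity construction (a sketch, not the repo's code):
# symmetrize |C| so it can feed a spectral clustering step.
def affinity_from_coef(C: np.ndarray) -> np.ndarray:
    A = np.abs(C)
    return 0.5 * (A + A.T)

C = np.array([[0.0, 0.9, -0.1],
              [0.8, 0.0,  0.0],
              [0.0, 0.1,  0.0]])
A = affinity_from_coef(C)
print(A)  # symmetric and nonnegative
```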
When does pre-training end?
Applying Deep Subspace Clustering to High-Dimensional Data
Having read through the paper, I see that the autoencoder is built primarily with convolutional kernels, since the paper works with images.
I would appreciate your input on finding subspaces for data in the following format:
N individual data points, each a 1-D vector of size X (> 100), i.e. N separate data points, each an array of length greater than 100.
The goal is to cluster these N data points.
Your input would be very helpful.
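For concreteness, here is how I picture the 1-D setup described above: a toy NumPy forward pass with dense layers in place of the paper's convolutions, keeping the self-expressive layer between encoder and decoder (all sizes hypothetical; this is a sketch, not the repo's code):

```python
import numpy as np

rng = np.random.default_rng(0)
N, X, latent = 64, 128, 16          # hypothetical: N points of length X

W_enc = rng.standard_normal((X, latent)) * 0.1   # dense encoder weights
W_dec = rng.standard_normal((latent, X)) * 0.1   # dense decoder weights
C = np.zeros((N, N))                # self-expressive coefficients (trainable)

data = rng.standard_normal((N, X))
z = np.maximum(data @ W_enc, 0.0)   # ReLU encoder
z_c = C @ z                         # self-expressive layer between enc/dec
x_hat = z_c @ W_dec                 # decoder reconstruction
print(z.shape, z_c.shape, x_hat.shape)
```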
ZC or CZ?
Hi,
In your paper, the self-expressiveness property is defined via ||Z - ZC||. However, in your code, e.g.
z_c = tf.matmul(Coef,z)
this seems to compute CZ rather than ZC, right?
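If it helps, I believe the two are consistent up to a transpose: the paper stacks data points as columns of Z, while the code stacks them as rows of z, so the code's Coef plays the role of the paper's C transposed. A quick NumPy check (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 5, 3
z = rng.standard_normal((N, d))   # code convention: rows are data points
C = rng.standard_normal((N, N))

Z = z.T                           # paper convention: columns are data points
# The code's C @ z is the paper's Z @ C.T, transposed back to row form.
assert np.allclose(C @ z, (Z @ C.T).T)
print("conventions agree up to a transpose")
```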
How can I get the specific subspaces?
I want to analyze the subspaces obtained by clustering. Any guidance would be greatly appreciated!
Can't find a .ckpt file when calling saver.restore
I hit a read error when calling restore() in the main function; the error message is:
Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ~/Documents/DeepSubspace/Deep-subspace-clustering-networks-master/pretrain-model-COIL100/model50.ckpt
To my understanding, self.saver.restore(self.sess, self.model_path) loads the predefined model from that directory, but when I look into the model path there is no file ending in .ckpt, which is the file type the saver expects when loading the variables. Did you perhaps forget to upload the pretrained model files?
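In case it's relevant: to my understanding, a TF1 Saver checkpoint is not a single .ckpt file but a shared prefix for several files, and saver.restore expects that prefix even though no file literally named model50.ckpt exists on disk. A small sketch of the filenames I'd expect to see (paths taken from the error message above, suffixes assumed):

```python
# A TF1 Saver typically writes a few files per checkpoint, plus a
# `checkpoint` index file in the directory; ".ckpt" is only a prefix.
prefix = "pretrain-model-COIL100/model50.ckpt"
expected = [prefix + s for s in (".index", ".meta", ".data-00000-of-00001")]
for path in expected:
    print(path)
```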
Not able to reproduce results; when to stop pre-training for the ORL dataset?
Hi,
Can you give an idea of around which epoch you stopped pre-training for the ORL dataset? Even after following your approach (early stopping at good visualizations), there is a large variation in the results compared to restoring the pre-trained weights you provided.
Replicability of your research
Greetings. First of all, let me say that I read your paper with interest.
Thanks for sharing the code! I'd like to ask a few questions about it.
- By running the code as provided (without changes), I can almost perfectly reproduce your published results (e.g., 14% error on ORL, as in Figure 5.a), but only if I select the maximum accuracy among 100 random runs; over those runs I get a mean accuracy of 84.5% with a standard deviation of 1.2% on the same benchmark, using your hyper-parameters. Can we conclude that you followed a similar procedure to obtain the results reported in the paper?
- In the README.md you correctly mention the diagonal constraint on the self-expressive matrix C (diag(C) = 0), which is essential when using L1 regularization. To implement it, I used the snippet you provide there, tf.matmul((C-tf.diag(tf.diag_part(C))),Z), substituting it (with variable names adapted) for line 48 of DSC-Net-L2-ORL.py, and I replaced line 59 of the same file with self.reg_losses = tf.reduce_sum(tf.math.abs(self.Coef)). Unfortunately, the L1 results then differ drastically from the ones tabulated in the paper. Do you have an updated version of the code (perhaps already implementing the diag constraint + L1) that I could check against?
- I am also encountering an out-of-memory error when running the code on the COIL100 dataset. Could you share the specs of the machine you used for the experiments (mainly in terms of RAM)?
Thank you very much for your attention.
Looking forward to your reply.
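For reference, the combined change I tried can be sketched like this in NumPy (illustrative only; the actual edits were in the repo's TensorFlow code, and lam is a placeholder for the regularization weight):

```python
import numpy as np

# Illustrative NumPy version of the two edits described above:
# (1) zero diag(C) before the self-expressive product, (2) L1 regularizer.
def l1_selfexpressive_loss(z, C, lam=1.0):
    C0 = C - np.diag(np.diag(C))         # enforce diag(C) = 0
    z_c = C0 @ z                         # self-expressive reconstruction
    fit = 0.5 * np.sum((z - z_c) ** 2)   # self-expressiveness term
    reg = np.sum(np.abs(C0))             # ||C||_1 regularizer
    return fit + lam * reg

z = np.eye(2)
C = np.array([[0.5, 1.0], [1.0, 0.5]])
print(l1_selfexpressive_loss(z, C))
```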
Could you release your pretrained model for COIL20?
If you have time, could you release your pretrained model for COIL20? Thanks a lot!
I cannot reproduce the ORL results from your paper. Is the code the newest version?
image_size
Sorry, I have another question.
Section 4.1 of the paper says: "Following the experimental setup of [10], we down-sampled the original face images from 192 × 168 to 42 × 42 pixels"; however, the image size is 48 × 42 pixels in the code?