As the title says, I'm struggling to understand **why I am getting an empty `p_micro_cluster_centers` when running Test.py with a smaller `n_samples`**. Any help will be appreciated.
![image](https://user-images.githubusercontent.com/26676136/68437524-05af6f80-01a0-11ea-90ff-05f1e5b8c547.png)
![image](https://user-images.githubusercontent.com/26676136/68437571-211a7a80-01a0-11ea-9866-d3ed9b3665e1.png)
I don't know why, but there are no issues of this kind if the data set has many more elements. I tried the algorithm with `n_samples = 500` and it worked fine.
IMPORTANT: there seems to be a problem in the `_partial_fit` method when it generates `p_micro_clusters`. I think it has to do with the `if` statement, but I do not know what to do about it:
```python
def _partial_fit(self, sample, weight):
    self._merging(sample, weight)
    # Every tp steps, prune micro-clusters.
    if self.t % self.tp == 0:
        # Keep only potential micro-clusters whose weight
        # reaches the threshold beta * mu.
        self.p_micro_clusters = [p_micro_cluster for p_micro_cluster
                                 in self.p_micro_clusters if
                                 p_micro_cluster.weight() >= self.beta *
                                 self.mu]
        # Compute the lower weight limit Xi for each outlier micro-cluster.
        Xis = [((self._decay_function(self.t - o_micro_cluster.creation_time
                                      + self.tp) - 1) /
                (self._decay_function(self.tp) - 1)) for o_micro_cluster in
               self.o_micro_clusters]
        # Keep only outlier micro-clusters whose weight reaches its Xi.
        self.o_micro_clusters = [o_micro_cluster for Xi, o_micro_cluster in
                                 zip(Xis, self.o_micro_clusters) if
                                 o_micro_cluster.weight() >= Xi]
    self.t += 1
```
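If it helps narrow this down, here is a minimal sketch (using a hypothetical `FakeCluster` stand-in, not the library's real micro-cluster class) of how that pruning step can empty `p_micro_clusters`: with few samples, each potential micro-cluster has absorbed only a handful of points, so its weight may never reach `beta * mu` and every cluster gets dropped.

```python
# Hypothetical stand-in for a micro-cluster; only weight() matters here.
class FakeCluster:
    def __init__(self, w):
        self._w = w

    def weight(self):
        return self._w


beta, mu = 0.5, 3.0  # pruning threshold is beta * mu = 1.5 (illustrative values)

# Few samples: each cluster has absorbed few points, weights stay below
# the threshold, so the list comprehension prunes every cluster.
few_samples = [FakeCluster(1.0), FakeCluster(1.2)]
kept_few = [c for c in few_samples if c.weight() >= beta * mu]
print(kept_few)  # -> [] : p_micro_clusters ends up empty

# Many samples: weights exceed the threshold and the clusters survive.
many_samples = [FakeCluster(4.0), FakeCluster(2.5)]
kept_many = [c for c in many_samples if c.weight() >= beta * mu]
print(len(kept_many))  # -> 2
```

That would be consistent with the 500-sample run working fine: more points per cluster means higher weights, so the `>= beta * mu` check passes.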
Thanks in advance!