Comments (2)
Hi! Thanks for raising this issue - this would indeed be a good feature to have. It isn't currently possible with the ready-made ensemble classes, but we can fix that with some tweaks. Basically, you need to set the `multiplier` argument in the `Learner` class when assigning columns. Currently this defaults to `1`, and the argument is not exposed at the ensemble level (which it probably should be).
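To make the role of the multiplier concrete, here is a conceptual sketch in plain NumPy (not mlens internals): each learner is allotted `multiplier` columns in the layer's prediction array, so a learner producing `num_targets` outputs needs `multiplier = num_targets`.

```python
import numpy as np

# Conceptual sketch only: each of k learners writes ``multiplier``
# columns into the layer's prediction array.
n_samples, n_learners, multiplier = 5, 3, 2  # multiplier = num_targets
P = np.empty((n_samples, n_learners * multiplier))
for j in range(n_learners):
    preds = np.full((n_samples, multiplier), float(j))  # stand-in for est.predict(X)
    P[:, j * multiplier:(j + 1) * multiplier] = preds

print(P.shape)  # (5, 6)
```

With the default `multiplier=1`, a multi-output learner would try to squeeze `num_targets` prediction columns into a single column, which is why the override is needed.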
For the time being, you need to set up a custom class:
```python
from mlens.parallel import Learner, Transformer, Pipeline, Group, Layer, make_group
from mlens.ensemble import BaseEnsemble
from mlens.index import FoldIndex, FullIndex
from mlens.utils import check_instances
from mlens.ensemble.base import check_kwargs
from sklearn.utils import check_random_state  # used in _build_layer below


# First, your new Learner. ``num_targets`` will be your multi-output
# dimensionality.
class MultiLearner(Learner):

    def __init__(self, estimator, num_targets, **kwargs):
        super(MultiLearner, self).__init__(estimator, **kwargs)
        self.num_targets = num_targets

    def _get_multiplier(self, X, y):
        return self.num_targets


def make_multi_group(indexer, estimators, preprocessing,
                     learner_kwargs=None, transformer_kwargs=None, name=None):
    preprocessing, estimators = check_instances(estimators, preprocessing)

    if learner_kwargs is None:
        learner_kwargs = {}
    if transformer_kwargs is None:
        transformer_kwargs = {}

    transformers = [Transformer(estimator=Pipeline(tr, return_y=True),
                                name=case_name, **transformer_kwargs)
                    for case_name, tr in preprocessing]

    # We use your new MultiLearner class here
    learners = [MultiLearner(estimator=est, preprocess=case_name,
                             name=learner_name, **learner_kwargs)
                for case_name, learner_name, est in estimators]

    group = Group(indexer=indexer, learners=learners,
                  transformers=transformers, name=name)
    return group


# Change the make_group function in the base ensemble class
class MultiBaseEnsemble(BaseEnsemble):

    def _build_layer(self, estimators, indexer, preprocessing, **kwargs):
        check_kwargs(kwargs, ['backend', 'n_jobs'])
        verbose = kwargs.pop('verbose', max(self._backend.verbose - 1, 0))
        dtype = kwargs.pop('dtype', self._backend.dtype)
        propagate = kwargs.pop('propagate_features', None)
        shuffle = kwargs.pop('shuffle', self.shuffle)
        random_state = kwargs.pop('random_state', self.random_state)
        rs = kwargs.pop('raise_on_exception', self.raise_on_exception)

        if random_state:
            random_state = check_random_state(random_state).randint(0, 10000)

        kwargs['verbose'] = max(verbose - 1, 0)
        kwargs['scorer'] = kwargs.pop('scorer', self.scorer)

        # We use your make_multi_group function from above; anything left in
        # ``kwargs`` (e.g. ``num_targets``) is forwarded to MultiLearner.
        group = make_multi_group(indexer, estimators, preprocessing,
                                 learner_kwargs=kwargs)

        name = "layer-%i" % (len(self._backend.stack) + 1)  # Start count at 1
        lyr = Layer(
            name=name, dtype=dtype, shuffle=shuffle,
            random_state=random_state, verbose=verbose,
            raise_on_exception=rs, propagate_features=propagate)
        lyr.push(group)
        return lyr


# Finally, build the SuperLearner (or similar)
class MultiSuperLearner(MultiBaseEnsemble):

    def __init__(
            self, folds=2, shuffle=False, random_state=None, scorer=None,
            raise_on_exception=True, array_check=None, verbose=False, n_jobs=-1,
            backend='threading', model_selection=False, sample_size=20, layers=None):
        super(MultiSuperLearner, self).__init__(
            shuffle=shuffle, random_state=random_state, scorer=scorer,
            raise_on_exception=raise_on_exception, verbose=verbose,
            n_jobs=n_jobs, layers=layers, backend=backend,
            array_check=array_check, model_selection=model_selection,
            sample_size=sample_size)

        self.__initialized__ = 0  # Unlock parameter setting
        self.folds = folds
        self.__initialized__ = 1  # Protect against param resets

    def add_meta(self, estimator, **kwargs):
        return self.add(estimators=estimator, meta=True, **kwargs)

    def add(self, estimators, num_targets=1, preprocessing=None,
            proba=False, meta=False, propagate_features=None, **kwargs):
        c = kwargs.pop('folds', self.folds)
        if meta:
            idx = FullIndex()
        else:
            idx = FoldIndex(c, raise_on_exception=self.raise_on_exception)
        return super(MultiSuperLearner, self).add(
            estimators=estimators, num_targets=num_targets, indexer=idx,
            preprocessing=preprocessing, proba=proba,
            propagate_features=propagate_features, **kwargs)
```
You can now use this class as you normally would:
```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=1000, n_features=10, n_informative=5,
                       n_targets=2, random_state=1, noise=0.5)

ensemble = MultiSuperLearner()
ensemble.add([LinearRegression()], num_targets=2)
ensemble.add_meta(LinearRegression(), num_targets=2)
ensemble.fit(X, y)
```
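If you want to sanity-check the output, the same 2-fold stacking scheme can be emulated with plain scikit-learn (a sketch of the idea only, not of mlens internals): generate out-of-fold base predictions with `cross_val_predict` and fit the meta-learner on them.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

X, y = make_regression(n_samples=1000, n_features=10, n_informative=5,
                       n_targets=2, random_state=1, noise=0.5)

# Out-of-fold predictions from the base model (2-fold, as in the
# SuperLearner above); shape is (n_samples, n_targets).
Z = cross_val_predict(LinearRegression(), X, y, cv=2)
meta = LinearRegression().fit(Z, y)  # meta-learner on stacked predictions

print(Z.shape)  # (1000, 2)
```

The meta-layer input here has as many columns per base learner as there are targets, which is exactly what the `multiplier` override above arranges inside mlens.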
Thank you very much for the detailed clarification. I have reused this code on my own data (after normalization and PCA). In my case `num_targets` is the number of PCA components, for example `num_targets=7`.
It runs and gives me answers, but I still have some questions:
1. In order to see the score, I added `scorer=rmse` - is this the correct way?
```python
return super(MultiSuperLearner, self).add(
    scorer=rmse, estimators=estimators, num_targets=num_targets,
    indexer=idx, preprocessing=preprocessing, proba=proba,
    propagate_features=propagate_features, **kwargs)
```
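For reference, a scorer here is a plain callable taking `(y_true, y_pred)`; a minimal RMSE of that shape could look like the sketch below (an assumption about your `rmse` - your own definition may differ):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

def rmse(y_true, y_pred):
    """Root mean squared error with the (y_true, y_pred) signature."""
    return np.sqrt(mean_squared_error(y_true, y_pred))

print(rmse(np.array([0.0, 0.0]), np.array([3.0, 4.0])))  # 3.5355...
```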
2. Is the default `num_targets=1` in the signature correct, or should it equal the actual number of targets?
`def add(self, estimators, num_targets=1, preprocessing=None, ...)`
3. Using SuperLearner or MultiSuperLearner, we should end up with the best of all the models in terms of score, yes? But it does not give the minimum RMSE. Here are my two outputs using two different inputs:
4. Could you please also let me know where/how we can set `ft-m`/`ft-s`? In the aforementioned example these values are not correct.
Again, thanks for your time and kindness.