
Comments (2)

DagonArises commented on May 24, 2024

Please use 'val_accuracy' in place of 'val_acc', i.e. change history['val_acc'] ---> history['val_accuracy'].
@xxs980 I moved the issue to this page.
That rename works for the author's code, but when I applied it to my NLP model, 'val_accuracy' raised an error:

124/124 [==============================] - 129s 1s/step - loss: 0.1728 - val_loss: 0.1612

  0%|          | 0/5 [04:24<?, ?trial/s, best loss=?]
job exception: 'val_accuracy'
  0%|          | 0/5 [04:24<?, ?trial/s, best loss=?]
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-7-8f507f068e69> in <module>()
     20     best_run, best_model= optim.minimize(model= create_model_gru, data= data,
     21                                         algo= tpe.suggest, max_evals= 5, trials= Trials(),
---> 22                                         eval_space= True, notebook_name= 'Deep learning GridSearch')
     23 
     24     print("Evaluation of best performing model:")

~/anaconda3/lib/python3.6/site-packages/hyperas/optim.py in minimize(model, data, algo, max_evals, trials, functions, rseed, notebook_name, verbose, eval_space, return_space, keep_temp)
     67                                      notebook_name=notebook_name,
     68                                      verbose=verbose,
---> 69                                      keep_temp=keep_temp)
     70 
     71     best_model = None

~/anaconda3/lib/python3.6/site-packages/hyperas/optim.py in base_minimizer(model, data, functions, algo, max_evals, trials, rseed, full_model_string, notebook_name, verbose, stack, keep_temp)
    137              trials=trials,
    138              rstate=np.random.RandomState(rseed),
--> 139              return_argmin=True),
    140         get_space()
    141     )

~/anaconda3/lib/python3.6/site-packages/hyperopt/fmin.py in fmin(fn, space, algo, max_evals, timeout, loss_threshold, trials, rstate, allow_trials_fmin, pass_expr_memo_ctrl, catch_eval_exceptions, verbose, return_argmin, points_to_evaluate, max_queue_len, show_progressbar, early_stop_fn, trials_save_file)
    520             show_progressbar=show_progressbar,
    521             early_stop_fn=early_stop_fn,
--> 522             trials_save_file=trials_save_file,
    523         )
    524 

~/anaconda3/lib/python3.6/site-packages/hyperopt/base.py in fmin(self, fn, space, algo, max_evals, timeout, loss_threshold, max_queue_len, rstate, verbose, pass_expr_memo_ctrl, catch_eval_exceptions, return_argmin, show_progressbar, early_stop_fn, trials_save_file)
    697             show_progressbar=show_progressbar,
    698             early_stop_fn=early_stop_fn,
--> 699             trials_save_file=trials_save_file,
    700         )
    701 

~/anaconda3/lib/python3.6/site-packages/hyperopt/fmin.py in fmin(fn, space, algo, max_evals, timeout, loss_threshold, trials, rstate, allow_trials_fmin, pass_expr_memo_ctrl, catch_eval_exceptions, verbose, return_argmin, points_to_evaluate, max_queue_len, show_progressbar, early_stop_fn, trials_save_file)
    551 
    552     # next line is where the fmin is actually executed
--> 553     rval.exhaust()
    554 
    555     if return_argmin:

~/anaconda3/lib/python3.6/site-packages/hyperopt/fmin.py in exhaust(self)
    354     def exhaust(self):
    355         n_done = len(self.trials)
--> 356         self.run(self.max_evals - n_done, block_until_done=self.asynchronous)
    357         self.trials.refresh()
    358         return self

~/anaconda3/lib/python3.6/site-packages/hyperopt/fmin.py in run(self, N, block_until_done)
    290                 else:
    291                     # -- loop over trials and do the jobs directly
--> 292                     self.serial_evaluate()
    293 
    294                 self.trials.refresh()

~/anaconda3/lib/python3.6/site-packages/hyperopt/fmin.py in serial_evaluate(self, N)
    168                 ctrl = base.Ctrl(self.trials, current_trial=trial)
    169                 try:
--> 170                     result = self.domain.evaluate(spec, ctrl)
    171                 except Exception as e:
    172                     logger.error("job exception: %s" % str(e))

~/anaconda3/lib/python3.6/site-packages/hyperopt/base.py in evaluate(self, config, ctrl, attach_attachments)
    905                 print_node_on_error=self.rec_eval_print_node_on_error,
    906             )
--> 907             rval = self.fn(pyll_rval)
    908 
    909         if isinstance(rval, (float, int, np.number)):

~/Desktop/honours thesis/thesis/thesis proposal/NLP tutorial/temp_model.py in keras_fmin_fnct(space)

KeyError: 'val_accuracy'

My edited code:

# Imports used by the code below (pandas, numpy, scikit-learn, Keras, hyperas/hyperopt)
import numpy as np
import pandas as pd
from numpy import asarray, zeros
from sklearn.model_selection import train_test_split
from keras.preprocessing import text, sequence
from keras.models import Sequential
from keras.layers import Embedding, Dropout, GRU, Dense
from hyperopt import Trials, STATUS_OK, tpe
from hyperas import optim
from hyperas.distributions import choice

def data():
    # Load
    train= pd.read_csv('Toxic comment data/jigsaw-toxic-comment-train.csv')
    train.drop(['severe_toxic','obscene','threat','insult','identity_hate'],axis=1,inplace=True)
    train= train.iloc[:12000,:]
    
    # Tokenization
    xtr, xte, ytr, yte= train_test_split(train['comment_text'].values, 
                                        train['toxic'].values,
                                        stratify= train['toxic'].values,
                                        random_state= 42, test_size= 0.2, shuffle= True)
    tok= text.Tokenizer(num_words= None)
    tok.fit_on_texts(list(xtr)+ list(xte))
    input_dim= len(tok.word_index)+1
    input_length= train['comment_text'].apply(lambda x: len(str(x).split())).max()
    xtr_seq= tok.texts_to_sequences(xtr); xte_seq= tok.texts_to_sequences(xte)
    xtr_pad= sequence.pad_sequences(xtr_seq, maxlen= input_length)
    xte_pad= sequence.pad_sequences(xte_seq, maxlen= input_length)
    
    # Load GloVe embeddings
    embedding_dict=dict()
    f= open('GloVe/glove.6B.100d.txt')
    output_dim= 100
    for line in f:
        values= line.split()
        word= values[0]; coefs= asarray(values[1:], dtype= 'float32')
        embedding_dict[word]= coefs
    f.close()
    Emat= zeros((input_dim, output_dim))
    for word, i in tok.word_index.items():
        embedding_vector= embedding_dict.get(word)
        if embedding_vector is not None:
            Emat[i]= embedding_vector
    print('Shape of input:', xtr_pad.shape)
    print('Shape of embedding weight:', Emat.shape)
    
    return xtr_pad, ytr, xte_pad, yte, input_dim, input_length, Emat

def create_model_gru(xtr_pad, ytr, xte_pad, yte, input_dim, input_length, Emat):
    model= Sequential()
    model.add(Embedding(input_dim, 100, input_length= input_length, weights= [Emat], trainable= False))
    model.add(Dropout({{choice([0.2, 0.3, 0.5])}}))
    model.add(GRU({{choice([100, 150, 200])}}))
    model.add(Dense(1, activation= 'sigmoid'))
    
    model.compile(loss='binary_crossentropy', optimizer= {{choice(['rmsprop', 'adam'])}})
    results= model.fit(xtr_pad, ytr, epochs= 2, verbose= 1, validation_split=0.1, 
                      batch_size= 70)
    
    val_accuracy = np.amax(results.history['val_accuracy']) 
    print('Best validation acc of epoch:', val_accuracy)
    return {'loss': -val_accuracy, 'status': STATUS_OK, 'model': model}
    

if __name__ == '__main__':
    xtr_pad, ytr, xte_pad, yte, input_dim, input_length, Emat= data()
    best_run, best_model= optim.minimize(model= create_model_gru, data= data,
                                        algo= tpe.suggest, max_evals= 5, trials= Trials(),
                                        eval_space= True, notebook_name= 'Deep learning GridSearch')
    
    print("Evalutation of best performing model:")
    print(best_model.evaluate(xte_pad, yte))
    print("Best performing model chosen hyper-parameters:")
    print(best_run)
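
One quick way to see why the lookup fails (a minimal debugging sketch, not part of the original post) is to print the keys Keras actually recorded inside create_model_gru before indexing results.history:

results = model.fit(xtr_pad, ytr, epochs=2, verbose=1,
                    validation_split=0.1, batch_size=70)
# History.history is a plain dict; its keys depend on the Keras version and on
# the metrics passed to compile(). With the compile() call above it only
# contains the losses, e.g. ['loss', 'val_loss'].
print(list(results.history.keys()))

The progress bar in the traceback above only reports loss and val_loss, which is consistent with the KeyError on 'val_accuracy'.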


DagonArises commented on May 24, 2024

At the moment, returning val_loss works, but strangely val_accuracy raises the error shown above.

val_loss = np.amin(results.history['val_loss']) 
return {'loss': val_loss, 'status': STATUS_OK, 'model': model}
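
A likely cause (an assumption based on standard Keras behaviour, not confirmed in this thread): fit() only records the metrics that were passed to compile(). The compile call above omits metrics, so results.history holds only 'loss' and 'val_loss', and neither 'val_acc' nor 'val_accuracy' exists. A minimal sketch of the compile/fit/return section of create_model_gru with an accuracy metric added (the exact key may be 'val_acc' on older Keras releases):

model.compile(loss='binary_crossentropy',
              optimizer={{choice(['rmsprop', 'adam'])}},
              metrics=['accuracy'])              # ask Keras to track accuracy
results = model.fit(xtr_pad, ytr, epochs=2, verbose=1,
                    validation_split=0.1, batch_size=70)
# 'val_accuracy' is only present because 'accuracy' was compiled in above;
# hyperas minimises the returned loss, hence the negative sign.
val_accuracy = np.amax(results.history['val_accuracy'])
return {'loss': -val_accuracy, 'status': STATUS_OK, 'model': model}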

