Keras: anomaly in training time

I am using Keras with the TensorFlow backend on 2 GPUs. I load my data in batches (batch size = 64) with a generator (keras.utils.Sequence), so I am using the fit_generator method, providing it with my training and validation data and step counts.
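
(The model code itself isn't shown; below is a minimal sketch of the usual two-GPU wrapping in Keras 2.x, assuming keras.utils.multi_gpu_model is used — the loss and optimizer are placeholders, not the actual settings.)

from keras.utils import multi_gpu_model

# Sketch only: the wrapping and compile settings below are placeholder
# assumptions, not the actual model code from the post.
parallel_model = multi_gpu_model(model, gpus=2)
parallel_model.compile(loss='categorical_crossentropy',
                       optimizer='adam',
                       metrics=['acc'])
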
I noticed a strange behaviour from the 2nd epoch onwards: the first 3 steps of each epoch complete in just 8-9 seconds each, then the steps start taking far longer than they should. The logs are the following:



Epoch 00001: val_acc improved from -inf to 0.46875, saving model to data/subs_best_model.h5
Epoch 2/32
1/29 [>.............................] - ETA: 8s - loss: 1.0664 - acc: 0.5000
2/29 [=>............................] - ETA: 8s - loss: 1.1384 - acc: 0.4531
3/29 [==>...........................] - ETA: 9s - loss: 1.0915 - acc: 0.5052
4/29 [===>..........................] - ETA: 42:03 - loss: 1.1064 - acc: 0.5117
5/29 [====>.........................] - ETA: 56:02 - loss: 1.1173 - acc: 0.4969
6/29 [=====>........................] - ETA: 1:03:13 - loss: 1.0964 - acc: 0.4974
7/29 [======>.......................] - ETA: 1:06:45 - loss: 1.0740 - acc: 0.5067
8/29 [=======>......................] - ETA: 1:08:35 - loss: 1.0592 - acc: 0.5195
9/29 [========>.....................] - ETA: 1:08:53 - loss: 1.0580 - acc: 0.5191


Since the progress bar's ETA is just the mean step time so far times the steps remaining, the jump from ~9 s to 42:03 at step 4 means that single step alone took several minutes. Do you know what could cause this anomaly/strange behaviour?



EDIT:



My DataGenerator is inspired by this implementation.
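
(The comments below ask for the generator code; as a stand-in, here is a minimal sketch of the Sequence pattern that implementation follows — the data-loading and label logic below are placeholder assumptions, not the actual generator.)

import numpy as np
import keras

class DataGenerator(keras.utils.Sequence):
    # Minimal sketch: the real generator loads samples from disk;
    # the np.empty buffer and random labels here are placeholders.
    def __init__(self, list_IDs, batch_size, dim, labels_dim, n_classes):
        self.list_IDs = list_IDs
        self.batch_size = batch_size
        self.dim = dim                # e.g. (batch_size, 1, SAMPLES)
        self.labels_dim = labels_dim  # e.g. (batch_size,)
        self.n_classes = n_classes

    def __len__(self):
        # Number of batches per epoch; must stay constant so that
        # steps_per_epoch and the progress bar agree
        return len(self.list_IDs) // self.batch_size

    def __getitem__(self, index):
        # Each call should return a batch of the same fixed size
        ids = self.list_IDs[index * self.batch_size:(index + 1) * self.batch_size]
        X = np.empty(self.dim)        # placeholder for loading samples for `ids`
        y = np.random.randint(0, self.n_classes, self.labels_dim)
        return X, keras.utils.to_categorical(y, num_classes=self.n_classes)
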



The code I use to call fit_generator is as follows:



params = {'batch_size': TrainConfig.BATCH_SIZE,
          'dim': (TrainConfig.BATCH_SIZE, 1, TrainConfig.SAMPLES),
          'labels_dim': (TrainConfig.BATCH_SIZE,),
          'n_classes': TrainConfig.OUTPUT_DIM}

training_generator = DataGenerator(train_set, **params)
validation_generator = DataGenerator(val_set, **params)

# One step per full batch; any partial final batch is dropped.
# batch_size is assumed to equal TrainConfig.BATCH_SIZE (= 64).
training_steps_per_epoch = len(train_set) // batch_size
validation_steps_per_epoch = len(val_set) // batch_size

history = model.fit_generator(generator=training_generator,
                              verbose=1,
                              use_multiprocessing=False,
                              workers=1,
                              steps_per_epoch=training_steps_per_epoch,
                              epochs=epochs,
                              validation_data=validation_generator,
                              validation_steps=validation_steps_per_epoch,
                              callbacks=callbacks)
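
(One way to pin down where the time goes — a diagnostic sketch, not part of the original code; BatchTimer is a hypothetical helper — is to time every batch with a Keras callback:)

import time
import keras

class BatchTimer(keras.callbacks.Callback):
    # Hypothetical diagnostic helper: prints the wall-clock duration of
    # every training batch, to localize where the slowdown happens.
    def on_batch_begin(self, batch, logs=None):
        self._t0 = time.time()

    def on_batch_end(self, batch, logs=None):
        print('batch %d: %.2fs' % (batch, time.time() - self._t0))

# e.g. pass it alongside the existing callbacks:
# callbacks = callbacks + [BatchTimer()]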









python tensorflow keras multi-gpu

asked Nov 28 '18 at 17:00 by user3426270 · edited Nov 28 '18 at 17:16

Comments:

  • check your generator. It might be generating batches of different sizes after each call.
    – Gerges, Nov 28 '18 at 17:03

  • Please post the relevant code (at least your fit_generator).
    – desertnaut, Nov 28 '18 at 17:08

  • Thanks, I just edited above.
    – user3426270, Nov 28 '18 at 17:17

  • You should include the code of the generator.
    – Matias Valdenegro, Nov 28 '18 at 19:54
