keras load_model raises an error when executed a second time
I'm making a website, and sometimes it calls a Keras neural network.
So I have a function that looks like this:
def network(campaign):
    from keras.models import load_model
    model = load_model("sunshade/neural_network/model.h5")  # the line that fails the second time I call it
    # loading some data
    label = model.predict(images, batch_size=128, verbose=1)
    # some unrelated code...
This code works fine the first time I execute it, but when I try to run it a second time, it fails with this error:
Exception in thread Thread-31:
Traceback (most recent call last):
File "/usr/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 930, in _run
allow_operation=False)
File "/usr/lib/python3.4/dist-packages/tensorflow/python/framework/ops.py", line 2414, in as_graph_element
return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
File "/usr/lib/python3.4/dist-packages/tensorflow/python/framework/ops.py", line 2493, in _as_graph_element_locked
raise ValueError("Tensor %s is not an element of this graph." % obj)
ValueError: Tensor Tensor("Placeholder_3:0", shape=(32,), dtype=float32) is not an element of this graph.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib64/python3.4/threading.py", line 920, in _bootstrap_inner
self.run()
File "/usr/lib64/python3.4/threading.py", line 868, in run
self._target(*self._args, **self._kwargs)
File "/home/ec2-user/SpyNet/poc/sunshadeDetector/sunshade/models.py", line 46, in launch_network
network(self)
File "/home/ec2-user/SpyNet/poc/sunshadeDetector/sunshade/neural_network/network.py", line 27, in network
model = load_model("sunshade/neural_network/model.h5")
File "/usr/local/lib64/python3.4/site-packages/keras/models.py", line 236, in load_model
topology.load_weights_from_hdf5_group(f['model_weights'], model.layers)
File "/usr/local/lib64/python3.4/site-packages/keras/engine/topology.py", line 3048, in load_weights_from_hdf5_group
K.batch_set_value(weight_value_tuples)
File "/usr/local/lib64/python3.4/site-packages/keras/backend/tensorflow_backend.py", line 2188, in batch_set_value
get_session().run(assign_ops, feed_dict=feed_dict)
File "/usr/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 778, in run
run_metadata_ptr)
File "/usr/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 933, in _run
+ e.args[0])
TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("Placeholder_3:0", shape=(32,), dtype=float32) is not an element of this graph.
By the way, I use Django for the website part, but I don't think it's related.
There must be some kind of thing that needs to be closed or re-initialized... I tried to use tf.Session() and tf.reset_default_graph(), but I still get errors.
So for now I have to restart my Django server each time I want to use this function.
Do you have any idea? In the worst-case scenario, maybe I can make the model a singleton, so that I don't have to reload it each time...
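(For reference, a minimal sketch of that load-once idea, with illustrative names; in a threaded server it may still need to be combined with the graph handling discussed in the answer below:)

from keras.models import load_model

_model = None  # module-level cache so the model is only loaded once per process

def get_model():
    global _model
    if _model is None:
        _model = load_model("sunshade/neural_network/model.h5")
    return _model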
python tensorflow neural-network keras conv-neural-network
asked Aug 18 '17 at 12:08
SuperMouette
My approach has been to load the model in the main thread (using Flask, falcon) and give the routes this model (like you mentioned in the last sentence). Otherwise try this issue: github.com/fchollet/keras/issues/2397
– orsonady
Aug 18 '17 at 16:26
I've encountered the same error before, and the method described in the link provided by @putonspectacles fixed it. Specifically, I used with K.get_session().graph.as_default() as g: model = load_model(...).
– Yu-Yang
Aug 18 '17 at 18:02
Okay, I tested that. With the singleton approach, or with K.get_session()..., my webserver doesn't crash any more. But it's not exactly what I want... What I really want is a way to train my network on the machine while the webserver is up (it's okay that it can't use the network while it's training). Right now I can't, because Keras/TensorFlow seems to still hold GPU memory after its job is finished, so I don't have enough memory to train. It's not a major issue, but I'm looking for a way to completely "stop" Keras/TensorFlow.
– SuperMouette
Aug 21 '17 at 8:50
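(Regarding that follow-up about GPU memory: a common workaround, sketched here as an assumption rather than something from this thread, is to run the Keras/TensorFlow work in a short-lived child process, so the GPU memory is released when that process exits. Function names are illustrative.)

from multiprocessing import Process, Queue

def _predict_worker(model_path, images, out_queue):
    # Import inside the child so TensorFlow is only initialized in this process
    from keras.models import load_model
    model = load_model(model_path)
    out_queue.put(model.predict(images, batch_size=128, verbose=1))

def predict_in_subprocess(images):
    out_queue = Queue()
    p = Process(target=_predict_worker,
                args=("sunshade/neural_network/model.h5", images, out_queue))
    p.start()
    label = out_queue.get()   # fetch the result before joining
    p.join()                  # GPU memory is freed once the child process exits
    return label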
1 Answer
You can load the model inside the graph of the Keras backend session, so that loading and prediction both use the same graph:
from keras.models import load_model
import keras

def network(campaign):
    # load and run the model inside the graph tied to the Keras backend session
    with keras.backend.get_session().graph.as_default():
        model = load_model("sunshade/neural_network/model.h5")
        label = model.predict(images, batch_size=128, verbose=1)
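(A possible refinement of the same idea, sketched under the assumption that the model is kept alive between requests, e.g. at module level: hold a reference to the graph and re-enter it for every predict call, since both loading and prediction must happen in the same graph.)

import keras
from keras.models import load_model

graph = keras.backend.get_session().graph
with graph.as_default():
    model = load_model("sunshade/neural_network/model.h5")

def network(campaign):
    # images loaded elsewhere, as in the question;
    # predict() must run in the same graph the model was loaded into
    with graph.as_default():
        return model.predict(images, batch_size=128, verbose=1)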
answered Nov 23 '18 at 16:41
Rajat Jain