tensorflow - same program allocates different GPU memory on different computers



























Ubuntu 16.04, Python 2.7.12, TensorFlow 1.10.1 (GPU version), CUDA 9.0, cuDNN 7.2



I have built and trained a CNN model, and now I am using a while loop to repeatedly let my model make predictions.



In order to limit the memory usage, I am using the following code to create my classifier:



import tensorflow as tf

session_config = tf.ConfigProto(log_device_placement=False)
session_config.gpu_options.allow_growth = True
run_config = tf.estimator.RunConfig().replace(session_config=session_config)

classifier = tf.estimator.Estimator(
    model_fn=my_model_fn,
    model_dir=my_trained_model_dir,
    config=run_config,
    params={},
)



And I call classifier.predict(my_input_fn) in a while loop to repeatedly make predictions.
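
The loop itself is nothing special; roughly like the sketch below, where handle_prediction is a hypothetical stand-in for whatever I do with each result:

while True:
    # classifier.predict returns a generator; iterate it to get predictions.
    for prediction in classifier.predict(input_fn=my_input_fn):
        handle_prediction(prediction)  # hypothetical downstream handler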



Issue:



I am running my code on two computers, both with the same software environment listed above.



However, the two computers have different GPUs:



Computer A: GTX 1050 (2 GB)

Computer B: GTX 1070 (8 GB)



My code works well on both computers.



However, when I use nvidia-smi to check GPU memory allocation, I find that my code allocates about 1.4 GB of GPU memory on Computer A, but 3.6 GB on Computer B.



So, why does this happen?



I thought session_config.gpu_options.allow_growth = True tells the program to allocate only as much memory as it needs. Computer A shows that 1.4 GB is enough, so why does the same code allocate 3.6 GB on Computer B?
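
For reference, the only other memory control I know of is a hard cap via per_process_gpu_memory_fraction. A minimal sketch of how I would set it, reusing the same my_model_fn and my_trained_model_dir placeholders as above; the fraction is relative to the card's total memory, so the number below is only an example:

import tensorflow as tf

# Cap TensorFlow's GPU memory pool instead of letting it grow on demand.
# 0.2 of an 8 GB card is roughly 1.6 GB; adjust per machine.
session_config = tf.ConfigProto(log_device_placement=False)
session_config.gpu_options.per_process_gpu_memory_fraction = 0.2
run_config = tf.estimator.RunConfig().replace(session_config=session_config)

classifier = tf.estimator.Estimator(
    model_fn=my_model_fn,            # same placeholder model_fn as above
    model_dir=my_trained_model_dir,  # same placeholder model directory
    config=run_config,
)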










python tensorflow






edited Nov 28 '18 at 5:32 by talonmies
asked Nov 28 '18 at 0:47 by user10253771

  • It might be worth it to add a cuda tag to this since it seems likely to me that the cuda or cudnn backend would have things to say about memory allocation based on the GPU you have available. I don't know much about the details of that unfortunately.

    – enumaris
    Nov 28 '18 at 0:59



















1 Answer
It may be that 1.4 GB is actually not enough, and some of the required memory is being swapped into main memory. Video card drivers do that.
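
A rough way to check this (a sketch, not tested against your model; it assumes the contrib op tf.contrib.memory_stats.MaxBytesInUse that ships with TF 1.x) is to ask TensorFlow's own allocator for its peak usage and compare that with the number nvidia-smi reports for the process; nvidia-smi additionally counts the CUDA context, cuDNN workspaces and so on, on top of TensorFlow's pool:

import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True

with tf.device('/gpu:0'):
    # Stand-in workload; in the real case, run your predictions here instead.
    a = tf.random_normal([4096, 4096])
    b = tf.matmul(a, a)
    # Peak bytes TensorFlow's allocator has used on this device.
    peak_bytes = tf.contrib.memory_stats.MaxBytesInUse()

with tf.Session(config=config) as sess:
    sess.run(b)
    print('TF peak GPU memory: %.2f GB' % (sess.run(peak_bytes) / 1024.0 ** 3))

If TensorFlow itself reports far less than 3.6 GB on the 1070, the gap comes from outside its allocator, which nvidia-smi still counts against the process.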






answered Nov 28 '18 at 1:41 by pqnet