Random forest survival analysis crashes












I'm trying to run rfsrc() on a data frame of 6,500 records with 59 variables:

rfsrc_test <- rfsrc(Surv(TIME, DIED) ~ ., data=test, nsplit=10, na.action = "na.impute")

It works when I run it on 1,500 records, but it crashes on the entire dataset.

R crashes without any specific error message; sometimes it gives an "exceptional processing error".

Any thoughts on how to debug this? I skimmed the data for weird rows without any luck.










      r random-forest survival-analysis






asked Nov 28 '18 at 16:37 by XPeriment
edited Mar 5 at 16:26 by llrs
























2 Answers
We do not know the size of each record, nor the complexity of the variables.

I have encountered similar situations when I have hit the RAM ceiling. R is not designed for massive data sets; parallel processing could help, but R is not really designed for that either, so the next suggestion is to buy more RAM.

My approach would be to reduce the number of variables until you can process all 6,500 records (to make sure it is just the data set size). Then I would pre-screen the fitness of each variable, e.g. with a GLM, and keep the variables that explain a large amount of the variation while minimising the residuals. Then I would rerun the survival analysis on the reduced set of variables.

– Michael G., answered Nov 28 '18 at 17:17
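A minimal sketch of that pre-screening idea, assuming the data frame and column names from the question (test, TIME, DIED). It uses a univariate Cox model per predictor rather than the GLM mentioned above, and the cut-off of 20 variables is arbitrary:

library(survival)
library(randomForestSRC)

# Screen each predictor with a univariate Cox model and keep the strongest
# ones before growing the full survival forest.
predictors <- setdiff(names(test), c("TIME", "DIED"))

pvals <- sapply(predictors, function(v) {
  fit <- try(coxph(Surv(TIME, DIED) ~ ., data = test[, c("TIME", "DIED", v)]),
             silent = TRUE)
  if (inherits(fit, "try-error")) NA_real_ else summary(fit)$logtest[["pvalue"]]
})

keep <- head(names(sort(pvals)), 20)   # 20 smallest p-values; cut-off is arbitrary

rfsrc_reduced <- rfsrc(Surv(TIME, DIED) ~ .,
                       data = test[, c("TIME", "DIED", keep)],
                       nsplit = 10, na.action = "na.impute")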






• Thank you. Each record contains 59 variables (11 numerical, the rest categorical); the last two are TIME (numerical) and DIED (logical). I did think about RAM, so I ran it on my PC, which has double the RAM of the laptop, but it didn't help.
  – XPeriment, Nov 30 '18 at 8:25











• The classic sign of a RAM bottleneck is that the calculation progressively slows down until it crashes or grinds to a standstill. If it is crashing quickly, this may not be the answer. Simply doubling the RAM may not be sufficient; however, if you can process more records (e.g. 2,500 instead of 1,500), that suggests RAM is the issue. A rough way to probe this is sketched below.
  – Michael G., Nov 30 '18 at 12:22
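As a rough probe of that (not from the original thread, and again assuming the test data frame from the question): fit on progressively larger random subsets and report memory with gc() after each fit. If the largest size you can fit grows when you add RAM, memory pressure is the likely cause.

library(survival)
library(randomForestSRC)

# Fit on increasingly large random subsets and print memory usage after each.
# The subset sizes are arbitrary; stop at whichever size reproduces the crash.
set.seed(1)
for (n in c(1500, 2500, 3500, 4500, 5500, 6500)) {
  sub <- test[sample(nrow(test), n), ]
  cat("rows:", n, "\n")
  fit <- rfsrc(Surv(TIME, DIED) ~ ., data = sub,
               nsplit = 10, na.action = "na.impute")
  print(gc())                  # the "max used" column shows peak memory so far
  rm(fit, sub)
}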





















One thing you can check is the time variable: how many different values does it take? The survival forest saves a cumulative hazard function for each node, so if the number of unique time points in the dataset is large, the CHFs grow large as well. I had to round my time variable, and this significantly reduced the run time.

– BSnider, answered Nov 29 '18 at 16:32
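A small sketch of that check, assuming the test data frame from the question. Rounding days to weeks is only an example; the ntime argument in the second call is an assumption about current versions of randomForestSRC (it caps the grid of time points used for the ensemble estimates), so check the documentation of your installed version:

library(survival)
library(randomForestSRC)

# How many distinct survival times does each node's CHF have to cover?
length(unique(test$TIME))

# Option 1 (what this answer describes): coarsen the time variable, days -> weeks.
test_wk <- test
test_wk$TIME <- ceiling(test_wk$TIME / 7)
rfsrc_wk <- rfsrc(Surv(TIME, DIED) ~ ., data = test_wk,
                  nsplit = 10, na.action = "na.impute")

# Option 2 (assumed available): keep TIME as-is but limit ensemble
# calculations to a grid of at most ntime time points.
rfsrc_grid <- rfsrc(Surv(TIME, DIED) ~ ., data = test, ntime = 150,
                    nsplit = 10, na.action = "na.impute")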






• Oh wow, I didn't know about that. It's any number between 1 and 3,000 days, so almost 2,000–3,000 different values. Should I bin those? Doesn't that take the whole sting out of SRC?
  – XPeriment, Nov 30 '18 at 8:23












