Autovacuum on template0 database on AWS RDS PostgreSQL 9.4
Can we run VACUUM on PostgreSQL's template0 database? If yes, do we really need to, or should we?



I am running PG 9.4 and we go through around a billion transactions every 4-5 days.



When I run:



    SELECT datname, age(datfrozenxid) FROM pg_database;

       datname   |    age
    -------------+------------
     template0   | 1370404693
     rdsadmin    | 1285536968
     template1   |  122255271
     postgres    |  122303483
     bt_prod     |  212084472
    (5 rows)


As you can see, template0 and rdsadmin are perilously close to the 2.1 billion wraparound mark, and we don't want PostgreSQL to go down because of these databases.
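To see which tables inside a given database are holding its datfrozenxid back, a per-table variant of the same check can help. This is a sketch using the standard pg_class catalog; run it while connected to the database being investigated (note that template0 normally has datallowconn = false, so its age only drops when autovacuum freezes it in the background):

```sql
-- Per-table frozen-XID age, oldest first.
-- Tables approaching autovacuum_freeze_max_age are the ones autovacuum
-- must freeze before the database-level age can decrease.
SELECT c.relname,
       age(c.relfrozenxid) AS xid_age,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
WHERE c.relkind IN ('r', 't', 'm')  -- ordinary, TOAST, materialized views
ORDER BY age(c.relfrozenxid) DESC
LIMIT 20;
```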
  • What is your autovacuum_freeze_max_age setting? Is autovacuum set to on? Any log messages?

    – Laurenz Albe
    Nov 25 '18 at 14:46











  • autovacuum_freeze_max_age: 200000000. Yes, autovacuum is turned on. FYI: I have around 60K tables (PG 9.4 partitions), some of them more than 20 GB.

    – Nerve
    Nov 26 '18 at 11:44













  • With that many tables, you should increase autovacuum_max_workers to at least 10. Check pg_stat_activity for autovacuum workers that have been running for a long time (old query_start). Set log_autovacuum_min_duration to 0 to get log messages. I suspect that autovacuum may just not be able to keep up. Tell me your findings.

    – Laurenz Albe
    Nov 26 '18 at 16:05
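The pg_stat_activity check suggested above could look like the following sketch (column names as of PG 9.4, where the boolean `waiting` column still exists; it was replaced by `wait_event` in 9.6):

```sql
-- Find autovacuum workers and how long they have been on their current table.
-- Autovacuum activity shows up in the query column as 'autovacuum: VACUUM ...'.
SELECT pid,
       now() - query_start AS running_for,
       waiting,   -- true if the worker is blocked waiting for a lock
       query
FROM pg_stat_activity
WHERE query LIKE 'autovacuum:%'
ORDER BY query_start;
```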











  • There were two issues: 1. All 6 workers had been stuck for a long time (~16 hours). They were deadlocked on one table each and were not doing anything, so I killed them manually. 2. Several settings were not ideal for our workload. I raised autovacuum_max_workers to 10, gave more RAM to work_mem and maintenance_work_mem, and increased autovacuum_vacuum_cost_limit to 2500 because I had unused IOPS. I'll write a detailed answer once I get some time, but I still don't understand the exact reason why the workers were deadlocked.

    – Nerve
    Nov 28 '18 at 8:58
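For reference, the settings changes described in the comment above would look roughly like this on a self-managed server. On RDS the same parameters are changed through the DB parameter group rather than ALTER SYSTEM, and the maintenance_work_mem value below is an assumed example, not a value from the comment:

```sql
ALTER SYSTEM SET autovacuum_max_workers = 10;          -- takes effect after a restart
ALTER SYSTEM SET maintenance_work_mem = '1GB';         -- example value (assumption)
ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 2500;  -- value from the comment
SELECT pg_reload_conf();                               -- applies reloadable settings
```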











  • Great that you could improve the situation. The workers shouldn't get stuck, but now it's too late to examine the workers with strace or gdb to see what's going on.

    – Laurenz Albe
    Nov 28 '18 at 9:57
postgresql amazon-rds postgresql-9.4
edited Nov 24 '18 at 11:51
Pandya M. Nandan
asked Nov 24 '18 at 1:38
Nerve