Using S3 to group SQS messages instead of FIFO queues

We plan to use an SQS standard queue for messaging. The messages are sets of records that need to be received in order at the consumer end. There are a few reasons why we plan not to use FIFO queues:




  1. FIFO queues have a few restrictions, as noted here, and are recommended only for a few select use cases.

  2. We have multiple producers that push such messages to the queue (each producer is independent of the others), so we are likely to hit the 300 messages per second limit of FIFO queues.


Given this, we are evaluating the SQS Extended Client Library, which uses S3 to store message payloads. We would combine all the linked records into one message and post it to SQS as a single request. I have a few questions (a rough sketch of the sending side follows them below):




  1. What are the limitations or side effects of using S3 for persisting message payloads? The one I am aware of is S3 cost, which I assume won't be large given that our messages don't exceed a few MB at most.

  2. Are there real-world examples of using this approach instead of FIFO queues for grouping messages?
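
For reference, sending through the extended client would look roughly like the sketch below. This is only a minimal example using the Amazon SQS Extended Client Library for Java (v1.x API); the bucket name, queue URL, and payload are placeholders, not part of our actual setup.

    import com.amazon.sqs.javamessaging.AmazonSQSExtendedClient;
    import com.amazon.sqs.javamessaging.ExtendedClientConfiguration;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.sqs.AmazonSQS;
    import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
    import com.amazonaws.services.sqs.model.SendMessageRequest;

    public class GroupedRecordsSender {
        public static void main(String[] args) {
            // Placeholder names, not real resources.
            String payloadBucket = "example-sqs-payload-bucket";
            String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/example-standard-queue";

            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

            // Enable S3-backed payloads; by default only messages above the 256 KB
            // SQS limit are offloaded to the bucket.
            ExtendedClientConfiguration config = new ExtendedClientConfiguration()
                    .withLargePayloadSupportEnabled(s3, payloadBucket);

            AmazonSQS sqsExtended = new AmazonSQSExtendedClient(
                    AmazonSQSClientBuilder.defaultClient(), config);

            // All linked records serialized into a single message body (a JSON array
            // is used here purely for illustration).
            String combinedRecords = "[{\"recordId\": 1}, {\"recordId\": 2}, {\"recordId\": 3}]";
            sqsExtended.sendMessage(new SendMessageRequest(queueUrl, combinedRecords));
        }
    }
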










amazon-sqs

asked Nov 26 '18 at 5:28
Andy Dufresne

  • I don't understand how using S3 for message payloads replaces FIFO functionality - they are not related. If your messages are small (and will fit within the SQS message size limit), consider plain SQS. Another alternative to consider for your use case is AWS Kinesis, though it's hard to tell which is better for you without more details.

    – Krease
    Nov 27 '18 at 0:10

  • I am referring to SQS + S3 for messaging. Since I want to transfer a set of records in order, and they won't fit within the 256 KB SQS limit, I was considering the extended S3 support. FIFO also provides ordered delivery, but it needs to be evaluated thoroughly. Hence the question.

    – Andy Dufresne
    Nov 27 '18 at 4:41

1 Answer
S3 introduces additional latency (at each end of the queue), depending on the payload size, whether the message publishers/consumers are in AWS or hosted elsewhere, and how much bandwidth an individual server instance has available to it. (Off the cuff, I’d guess it’s >200 ms for a 1 MB payload.) The cost will be pretty insignificant, especially if you set up appropriate bucket lifecycle policies to archive or delete old data. Don’t forget that S3 is strongly consistent on the initial create, but only eventually consistent for any updates to an object. If possible, don’t update an object once you have created it.
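
For illustration, a lifecycle rule along the lines suggested above could be set on the payload bucket like this. It is a sketch using the AWS SDK for Java v1; the bucket name is a placeholder, and the 15-day expiry is an assumption chosen to outlast the 14-day maximum SQS message retention period.

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
    import com.amazonaws.services.s3.model.lifecycle.LifecycleFilter;
    import com.amazonaws.services.s3.model.lifecycle.LifecyclePrefixPredicate;

    public class PayloadBucketLifecycle {
        public static void main(String[] args) {
            // Placeholder bucket name.
            String payloadBucket = "example-sqs-payload-bucket";

            BucketLifecycleConfiguration.Rule expireOldPayloads = new BucketLifecycleConfiguration.Rule()
                    .withId("expire-old-sqs-payloads")
                    // An empty prefix applies the rule to every object in the bucket.
                    .withFilter(new LifecycleFilter(new LifecyclePrefixPredicate("")))
                    // 15 days: longer than the maximum SQS retention period, so payloads
                    // are not removed before their messages expire.
                    .withExpirationInDays(15)
                    .withStatus(BucketLifecycleConfiguration.ENABLED);

            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
            s3.setBucketLifecycleConfiguration(payloadBucket,
                    new BucketLifecycleConfiguration().withRules(expireOldPayloads));
        }
    }
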



I don’t have any real world examples, but I’ll let you know if I find one.



You will probably find it simpler to implement what you need using some sort of database, as was suggested in the article that you linked (which explained the limitations of FIFO queues). Make sure your decision isn't driven by premature optimization.






answered Nov 26 '18 at 7:32
Matthew Pope

  • A couple of follow-up questions: 1. Would we have to deal with S3 directly in order to worry about updates to the object? I am assuming SQS would handle it; we would send all the data (records) at once to SQS. 2. Do you see any advantages of using SQS in such a scenario over using S3 directly, i.e. the producer writes all the records to an S3 file and the consumer reads it once it's done?

    – Andy Dufresne
    Nov 26 '18 at 13:56

  • 1. If you use the extended SQS client library, then you don't need to worry about interacting with S3 directly. 2. Yes: the SQS message still tells the consumer where to find the file. If you skip SQS and just use S3, you need to figure out a way to communicate the location of the S3 object.

    – Matthew Pope
    Nov 26 '18 at 14:36
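
To illustrate that point, the consuming side built on the same extended client might look roughly like this. It is a sketch only: the queue URL and bucket name are placeholders and would have to match the producer's configuration.

    import com.amazon.sqs.javamessaging.AmazonSQSExtendedClient;
    import com.amazon.sqs.javamessaging.ExtendedClientConfiguration;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.sqs.AmazonSQS;
    import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
    import com.amazonaws.services.sqs.model.Message;
    import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

    public class GroupedRecordsConsumer {
        public static void main(String[] args) {
            // Placeholder names; the bucket must be the one the producer writes payloads to.
            String payloadBucket = "example-sqs-payload-bucket";
            String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/example-standard-queue";

            AmazonSQS sqsExtended = new AmazonSQSExtendedClient(
                    AmazonSQSClientBuilder.defaultClient(),
                    new ExtendedClientConfiguration().withLargePayloadSupportEnabled(
                            AmazonS3ClientBuilder.defaultClient(), payloadBucket));

            // getBody() returns the full payload; when the message carries an S3 pointer,
            // the extended client downloads the object behind the scenes.
            for (Message message : sqsExtended.receiveMessage(
                    new ReceiveMessageRequest(queueUrl).withWaitTimeSeconds(20)).getMessages()) {
                System.out.println("Received payload of " + message.getBody().length() + " characters");

                // Deleting through the extended client also removes the payload object from S3.
                sqsExtended.deleteMessage(queueUrl, message.getReceiptHandle());
            }
        }
    }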


















