zmq cpp performance compared to the latency reported by `qperf`



























I'm trying to figure out why qperf reports a one-way latency of x while a ZMQ REQ/REP round trip comes out at something like 4x. Is there any particular socket tweaking I have to do? If I just open a plain TCP socket and set TCP_NODELAY (which ZMQ sets by default), I get latency very close (for 1 KB buffers) to the number reported by qperf. ZMQ, however, lags behind these numbers by about 4-5 times.

The ZMQ server



zmq::context_t context;
zmq::socket_t socket(context, ZMQ_REP);

socket.bind("tcp://*:5555");

while (true) {
    zmq::message_t request;

    // Wait for the next request from the client
    socket.recv(&request);

    // Send the reply back to the client
    zmq::message_t reply(5);
    memcpy(reply.data(), "World", 5);
    socket.send(reply);
}


The ZMQ client



zmq::context_t context;
zmq::socket_t socket(context, ZMQ_REQ);

std::cout << "Connecting to hello world server…" << std::endl;
socket.connect("tcp://my.host:5555");

const size_t cycles = 100'000;
double throughput = 0;

zmq::message_t reply;

auto start = std::chrono::high_resolution_clock::now();
std::vector<uint8_t> buff(MessageSize, 0); // MessageSize is defined elsewhere (1 KiB in the runs described above)
for (auto i = 0ul; i < cycles; ++i) {
    zmq::message_t request(MessageSize);
    memcpy(request.data(), buff.data(), MessageSize);
    throughput += request.size();
    socket.send(request);
    // Get the reply.
    socket.recv(&reply);
}
auto end = std::chrono::high_resolution_clock::now();
auto us = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
std::cout << "Latency: " << us / cycles << "us." << std::endl;
std::cout << "Throughput: " << std::fixed << throughput / us * 1'000'000 / 1024 / 1024 << "MiB/s." << std::endl;


Both are essentially the ZMQ examples provided here: http://zguide.zeromq.org/cpp:hwclient
Some background: Linux (Ubuntu), GCC 7.3, static library provided by vcpkg and built locally; it looks like they pull master from GitHub.
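
For reference, this is a minimal sketch of the kind of raw-socket ping-pong I compare against (not the exact code; it assumes the peer simply echoes each MessageSize-byte message back, and the address and port below are placeholders):

// Hypothetical raw TCP ping-pong client, used only for comparison with qperf/ZMQ.
// Assumptions: the server echoes each MessageSize-byte message back unchanged,
// listens on port 5555, and 192.0.2.1 is a placeholder address.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

#include <chrono>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    const size_t MessageSize = 1024;   // "1 KB buffers" as described above
    const size_t cycles = 100'000;

    int fd = ::socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return 1;
    int one = 1;
    ::setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)); // same option ZMQ enables by default

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5555);
    inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr); // placeholder server address
    if (::connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0) return 1;

    std::vector<uint8_t> buf(MessageSize, 0);
    auto start = std::chrono::high_resolution_clock::now();
    for (size_t i = 0; i < cycles; ++i) {
        // Write the full message, then block until the echoed copy comes back.
        size_t sent = 0;
        while (sent < MessageSize) {
            ssize_t n = ::send(fd, buf.data() + sent, MessageSize - sent, 0);
            if (n <= 0) return 1;
            sent += static_cast<size_t>(n);
        }
        size_t got = 0;
        while (got < MessageSize) {
            ssize_t n = ::recv(fd, buf.data() + got, MessageSize - got, 0);
            if (n <= 0) return 1;
            got += static_cast<size_t>(n);
        }
    }
    auto end = std::chrono::high_resolution_clock::now();
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
    std::cout << "Round trip: " << us / cycles << "us." << std::endl;
    ::close(fd);
}

With a loop like this the per-iteration time comes out very close to the number qperf reports, whereas the ZMQ client above reports 4-5x that.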










Tags: c++ zeromq






asked Nov 27 '18 at 18:53









kreuzerkrieg

  • I think that's just par for the course with ZeroMQ. The library introduces additional latency beyond what's inherent in the low-level network I/O.

    – Peter Ruderman
    Nov 27 '18 at 19:05

  • And what about their claim that it is as fast as TCP can be? Moreover, these are REQ/REP sockets, not some fancy XPUB or DEALER/ROUTER where I could imagine what introduces the additional latency (for the additional feature set those sockets provide).

    – kreuzerkrieg
    Nov 27 '18 at 19:40

  • I'm not involved with the library, so I can only comment from personal experience. That being said, I've found ZMQ to be dramatically over-hyped. I believe that when they claim "high speed" (which has not been my experience), they mean throughput.

    – Peter Ruderman
    Nov 27 '18 at 19:44

  • Did you find another library that does better in a latency-sensitive environment?

    – kreuzerkrieg
    Nov 27 '18 at 19:51

  • No. Latency wasn't a particular concern for us when using ZMQ (and the rest of our code is based on hand-written socket code).

    – Peter Ruderman
    Nov 27 '18 at 19:54


















