Hacker News | jdennison's comments

After rerunning the tests with `docker run --net=host`, I see a small shift in the rates across all the clients.

Msgs/s (host network / bridge network):

confluent_kafka_consumer : 277573.293164 / 261407.908007 = 1.061×

pykafka_consumer : 33433.342585 / 33976.938217 = 0.984×

pykafka_consumer_rdkafka : 164311.503412 / 172008.742201 = 0.955×

python_kafka_consumer : 37667.971237 / 38622.727894 = 0.975×
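The quoted figures are the host-network throughput divided by the bridge-network throughput; a quick sketch of the arithmetic, using the numbers above:

```python
# Host-net vs. bridge-net msgs/s for each client, as quoted above.
rates = {
    "confluent_kafka_consumer": (277573.293164, 261407.908007),
    "pykafka_consumer": (33433.342585, 33976.938217),
    "pykafka_consumer_rdkafka": (164311.503412, 172008.742201),
    "python_kafka_consumer": (37667.971237, 38622.727894),
}

# Ratio > 1 means the client ran faster with --net=host.
for name, (host_net, bridge) in rates.items():
    print(f"{name}: {host_net / bridge:.3f}x")
```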

So yes, the Docker network layer adds overhead, but the bias is consistent across all clients.


There is obvious value in having a pure-Python implementation of a Kafka client: many deployments don't want C extensions, or want to use PyPy. However, as Python's SciPy stack has shown, the right Python API wrapping C code can have a vibrant community and the speed to boot.
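Libraries that take the "Python API over a C core" route usually ship a pure-Python fallback so they still work where the extension can't be built (e.g. on PyPy). A minimal sketch of that import-fallback pattern — the extension module name here is hypothetical, but pykafka does something similar, delegating to librdkafka when its C extension is installed:

```python
def pick_backend():
    """Prefer the compiled backend, fall back to pure Python."""
    try:
        import _rdkafka_ext  # hypothetical C extension module
        return "c-extension"
    except ImportError:
        # No compiled extension available (e.g. PyPy, or no build toolchain);
        # use the slower but portable pure-Python implementation.
        return "pure-python"

print(pick_backend())
```

This is why the benchmark above can run `pykafka_consumer` and `pykafka_consumer_rdkafka` as two separate clients: same API, different backend.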


Original author here. The Docker network point is a good one; I'll give it a try with host networking.

There is still value in comparing different clients under the same network constraints. Yes, it is a contrived setup (noted in the post), but at least it is the same contrived setup for each test.
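For reference, the host-network rerun amounts to swapping Docker's default bridge network for the host's network stack — a sketch, with hypothetical image and script names:

```shell
# Default bridge networking: traffic traverses Docker's NAT/veth layer.
docker run --rm kafka-bench python consumer_bench.py

# Host networking: the container shares the host's network namespace,
# bypassing the bridge and its overhead entirely.
docker run --rm --net=host kafka-bench python consumer_bench.py
```

Everything else about the benchmark stays the same, so any throughput difference between the two runs is attributable to the network layer.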

