Increasing performance of the trade engine and/or mapping a third-party order-matching engine to the existing stack

Hi, I have created this topic to figure out, with the help of the community, how one can combine the RubyKube stack with a high-TPS order-matching engine: something that can reach a high enough TPS for 100+ tokens. Presumably 10k TPS would be suitable for a startup.

Below is a list of order-matching engines:

Option 1: ViaBTC Exchange Server, which also has a Ruby wrapper.

Option 2: Liquibook

I have currently only gone through these two systems; however, I am sure there are more.

What would be the best way to go about it? Ideally, I would want to do it in such a manner that I continue to receive updates to RubyKube's business logic and additional features.

What is the UML flow for the trading engine? Are there any examples that can be shared with the community to help students, developers, academics, etc.?

Are there any other solutions or tips & tricks to optimize and squeeze additional performance from the RubyKube stack alone?

Looking forward to a positive response from the community.


Based on RubyKube webinar 2.2, as explained by Yaroslav Savchuk, the team has added a new type of RabbitMQ message that allows users to send messages outside of Peatio, and updates balances in the database in a new way to meet this requirement. I hope the Openware team can provide additional information to help developers successfully integrate RubyKube with a third-party trade engine. It would also be great if you could explain with an example.


@sids2000 Hello!

First of all, thanks for your question!
Yes, you are right, there are plenty of order-matching engines.

Actually, the Peatio matching engine should be okay for you; you just need to scale the matching daemon. For example, you could run a single matching daemon per market or per group of markets. Since the matching engine itself is not the bottleneck, you can reach around 3k TPS with a single daemon.
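To illustrate the idea of one daemon per market (or per group of markets), here is a minimal sketch. The names (`MARKETS`, `daemon_for`, `markets_for`) are illustrative assumptions, not actual Peatio configuration keys; the point is only that each daemon can deterministically pick its own subset of markets without coordination.

```ruby
require "zlib"

# Hypothetical market list and daemon count; in a real deployment these
# would come from your exchange configuration.
MARKETS = %w[btcusd ethusd ethbtc xrpusd ltcusd].freeze
DAEMON_COUNT = 2

# Deterministically assign each market to a daemon slot, so every
# process computes the same partitioning independently.
def daemon_for(market, daemon_count = DAEMON_COUNT)
  Zlib.crc32(market) % daemon_count
end

# A daemon started with id N would subscribe only to its own markets.
def markets_for(daemon_id, markets = MARKETS)
  markets.select { |m| daemon_for(m) == daemon_id }
end
```

With this kind of split, adding throughput is a matter of raising `DAEMON_COUNT` and restarting the daemons, since every market still lands on exactly one daemon.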

I think you could try to integrate a third-party engine by turning off our trading daemons and replacing them with one of the examples you've listed. Frankly speaking, I don't know how these matching engines store orders, trades, and account updates, but it feels like you will need to export all the records they produce into the Peatio DB using the existing model structure.
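As a sketch of that export step, the function below maps a hypothetical third-party trade record onto a Peatio-like row. Both the source field names (`deal_price`, `maker_order`, ...) and the target column names are assumptions for illustration; you would replace them with the real schemas of your engine and your Peatio version.

```ruby
require "bigdecimal"
require "bigdecimal/util"

# Convert one trade record emitted by a (hypothetical) third-party
# engine into a hash shaped like a Peatio trade row.
def to_peatio_trade(engine_trade)
  {
    market_id:      engine_trade.fetch("market"),
    price:          engine_trade.fetch("deal_price").to_d,   # keep decimals exact
    amount:         engine_trade.fetch("deal_amount").to_d,
    maker_order_id: engine_trade.fetch("maker_order"),
    taker_order_id: engine_trade.fetch("taker_order"),
    created_at:     Time.at(engine_trade.fetch("timestamp")).utc
  }
end
```

Using `BigDecimal` rather than floats matters here: balances and trade amounts must round-trip exactly between the engine and the database.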


Thank you very much for your response.

What sort of hardware would I need to reach 3k TPS? When I use a system with 16 GB RAM and 8 cores, I barely get 100 TPS, whereas ViaBTC can reach 10k TPS on half the resources, though without the business logic. This shows that RubyKube can be very resource-intensive. (I used RubyKube 2.0 in my test.)

In terms of the third-party engine, as @rohit has mentioned, can you please elaborate on the new RabbitMQ messages for third-party engines? Is it possible to recommend an engine that one could integrate with the open-source stack? I have noticed that the engines I chose would not be ideal, as they are already set up to use Kafka, whereas I need an engine that runs with RabbitMQ.


@sids2000 Actually, it's more about scaling the daemons correctly.
As far as I remember, we use 60 GB and 16 cores in our k8s deployments,
which can guarantee a stable 1.5k TPS; for 3k you would need to experiment with resources.

Our third-party engine uses in-memory balances, which means it needs to receive messages about deposit and withdrawal creation to update balances. But that is what we needed for our third-party engine; others may need more messages.
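To make the in-memory-balance idea concrete, here is a minimal sketch of how such an engine might apply deposit and withdrawal events arriving from the broker. The message shape (`type`/`member`/`currency`/`amount`) is an assumption for illustration, not Peatio's actual payload format.

```ruby
require "bigdecimal"
require "bigdecimal/util"

# Minimal in-memory balance store keyed by [member, currency].
class InMemoryBalances
  def initialize
    @balances = Hash.new { |h, k| h[k] = BigDecimal("0") }
  end

  def balance(member, currency)
    @balances[[member, currency]]
  end

  # Apply one balance event, e.g.:
  #   { "type" => "deposit", "member" => 1,
  #     "currency" => "btc", "amount" => "0.5" }
  def apply(event)
    key    = [event.fetch("member"), event.fetch("currency")]
    amount = event.fetch("amount").to_d
    case event.fetch("type")
    when "deposit"
      @balances[key] += amount
    when "withdraw"
      raise "insufficient balance" if @balances[key] < amount
      @balances[key] -= amount
    else
      raise "unknown event type: #{event['type']}"
    end
  end
end
```

The important property is that the engine never touches the database on the hot path; it only consumes balance events and keeps state in memory, which is where the speed comes from.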

I advise you to code a bridge between RabbitMQ and Kafka: Peatio publishes messages to RabbitMQ, and you read them, transform them into the format your engine requires, and publish them to Kafka or another broker.
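The core of such a bridge is the transform step, sketched below. The consume/produce wiring (for example with the `bunny` and `ruby-kafka` gems) is omitted, and both the incoming Peatio-style payload and the outgoing topic scheme are assumptions for illustration.

```ruby
require "json"

# Turn a (hypothetical) Peatio-style order message read from RabbitMQ
# into the envelope a Kafka-based engine might expect:
# topic per market, market as partition key, JSON value.
def bridge_transform(rmq_payload)
  order = JSON.parse(rmq_payload)
  {
    topic: "orders.#{order.fetch('market')}",  # assumed topic naming scheme
    key:   order.fetch("market"),              # keeps one market on one partition
    value: JSON.generate(
      "id"     => order.fetch("id"),
      "side"   => order.fetch("side"),
      "price"  => order.fetch("price"),
      "volume" => order.fetch("volume")
    )
  }
end
```

Keying by market is deliberate: Kafka preserves ordering only within a partition, and a matching engine needs the orders of one market to arrive in sequence.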

Personally, I advise using this one, because it's the most popular.

Also, I want to note that integrating a third-party engine is not an easy task and requires a deep understanding of both Peatio and the trading engine.
