Let’s talk apples-to-apples for blockchain TPS.
We’ve all heard about the scalability of various distributed ledger technology (DLT) platforms, and we see a lot of projects claiming high-speed blockchains, but it’s really unclear exactly how these numbers are determined. In fact, even the definition of “Transactions per Second” (TPS) can become very muddled.
In a brief survey of various systems and their claims of TPS, we discovered that there is really zero agreement on even the definition of this seemingly critical number.
A lot of us in this space come from SaaS (especially web) backgrounds, and TPS is fairly straightforward there: how many requests per second your app handles from the front end with a “reasonable” latency. We’ve all come to have some sort of vague definition of reasonable, often around the 200ms mark.
Enter blockchain, and all of a sudden we are talking latencies in the minutes (and sometimes hours). Some systems might call this “time to finality” (if they even have finality). Additionally, not all transactions are created equal. Transaction cost (gas) and complexity have an effect on how long it will take your transaction to process.
Determining TPS on a DLT system should really be more analogous to Queries per Second (QPS) on databases. This, it turns out, has always been difficult because the application matters. See: https://www.quora.com/What-is-the-typical-read-and-write-QPS-that-a-MySQL-database-server-can-handle-nowadays and https://www.cockroachlabs.com/blog/2-dot-0-perf-strides/ . There is a standard called TPC-C for measuring somewhat real-world performance of these systems. As an industry, we’re going to need a similar standard to simulate the participants in a DLT system.
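To make the measurement question concrete, here is a minimal sketch (in Python, against a purely hypothetical client with submit_tx and is_final methods, not any real platform’s API) of what it could look like to count only transactions that actually reach finality within a latency budget, rather than everything a node accepts into its mempool:

```python
import time

def measure_tps(client, payloads, latency_budget_s=60.0, poll_interval_s=0.5):
    """Submit payloads and count only those finalized within the budget.

    `client` is a hypothetical object exposing submit_tx() and is_final();
    swap in whatever SDK your platform actually provides.
    """
    start = time.time()
    pending = {client.submit_tx(p) for p in payloads}
    finalized = 0

    # Poll until the latency budget (measured from the start of submission)
    # runs out or everything has reached finality.
    while pending and time.time() - start < latency_budget_s:
        for tx_id in list(pending):
            if client.is_final(tx_id):
                pending.discard(tx_id)
                finalized += 1
        time.sleep(poll_interval_s)

    elapsed = time.time() - start
    return finalized / elapsed  # throughput of transactions that met the budget
```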
What numbers are out there now?
It’s difficult to find legitimate numbers with a documented procedure.
Ethereum and Bitcoin are well studied and the numbers are relatively straightforward. Bitcoin produces a block about every 10 minutes and Ethereum produces a block about every 10–20 seconds. Generously, Bitcoin has about 2200 transactions per block, which translates to roughly 4 transactions per second. Ethereum is harder to pin down because its transactions are more application-like and it uses a gas limit rather than a block size. However, you can see from lots of sites (e.g. https://bitinfocharts.com/ethereum/ ) that transactions per second hover around 7 over a 24-hour period.
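For reference, the back-of-the-envelope arithmetic behind those figures looks like this (the Ethereum transactions-per-block number below is an assumption chosen to be consistent with the ~7 TPS seen on explorers, since Ethereum’s block capacity is set by the gas limit rather than a fixed count):

```python
# Bitcoin: ~2200 transactions per ~10-minute block (generous estimate)
btc_tps = 2200 / (10 * 60)
print(round(btc_tps, 1))  # ~3.7, i.e. roughly 4 TPS

# Ethereum: ~100 transactions per ~15-second block is an assumed figure,
# roughly consistent with the ~7 TPS observed over a 24-hour period
eth_tps = 100 / 15
print(round(eth_tps, 1))  # ~6.7, i.e. roughly 7 TPS
```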
Moving outside of these two systems, it gets even murkier to find performance numbers. For example, Hashgraph is known for its speed (among other things), but the whitepaper showing over 10k TPS doesn’t include digital signatures:
It is important to note that these tests are purely for achieving consensus on transaction order and timestamps. They do not include the time to process transactions. For example, if every transaction is digitally signed, then these results suggest that a great deal of processing power might be needed to verify hundreds of thousands of digital signatures per second. It is possible that GPU implementations could be helpful.
EOS is also known as a high-speed system (one which operates with only 21 block producers). The claims are all over the map, from 1,600 TPS to 5,000 TPS. Yet a well-documented EOS research paper shows only around 35–50 TPS in any real-world scenario, and block explorers show 30–60 TPS. It’s also unclear exactly what counts as a “transaction” in these claims: compute? RAM? Storage?
Stellar doesn’t use global consensus and is another system said to handle a lot of TPS, in the thousands. It’s unclear what a transaction means in that scenario, or how many nodes are in the system. The explorer shows peaks of about 30–40 TPS on Stellar today.
These systems are obviously not without their merit; many of them are doing incredible things within the industry. But performance is a tricky topic. You can’t claim speed and efficiency and not be transparent about what that actually means. Doing so creates an idealized world, not an actual one.
What should we measure?
Any numbers coming from projects should expose a few key pieces of data (sketched as a simple structure after this list):
- What kind of transactions? (do they match your application?)
- How many nodes were involved? (is this realistic for a worldwide network?)
- Latency (how long until a transaction is finalized, by whatever definition of finality the system uses)
- Simulation basics (e.g.: if there were only a few nodes, were they situated right next to each other on the network? Was there a consensus system involved?)
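Purely as an illustration, a published benchmark could carry those details as structured data along the lines of the sketch below (the field names are hypothetical, not a proposed standard):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TpsBenchmarkReport:
    tx_type: str              # e.g. simple payment vs. smart-contract call
    node_count: int           # nodes actually participating
    regions: List[str]        # geographic spread of those nodes
    consensus_enabled: bool   # was a real consensus protocol running?
    median_finality_s: float  # latency until a transaction is considered final
    sustained_tps: float      # throughput measured under the conditions above
```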
The important numbers are going to depend on your application. For purely financial/payment systems, a small payment transaction and raw throughput are probably enough. When we are developing real-world applications, it gets a lot more complex and we should start to look at industry standards like TPC-C.
As a developer, you’re going to have to think about your user. Transactions per second is only one piece of the puzzle; latency is also a huge issue for user adoption. In Bitcoin and Ethereum you send a transaction, it sits in the mempool, eventually gets mined, and then you have to wait at least 6 blocks or so in Bitcoin and 25 blocks or so in Ethereum to have a reasonable assurance of finality. That means roughly 60 minutes in Bitcoin and 6 minutes in Ethereum. Both of those numbers assume that your transaction is mined immediately, which, of course, it usually is not. This site will give you a sense of confirmation time (time to first block) for Bitcoin. I found it difficult to find any reasonable source of confirmation time in Ethereum. However, it looks like at any given time there are about 30k unconfirmed transactions.
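Worked out explicitly, and again assuming your transaction lands in the very next block:

```python
# Bitcoin: ~6 confirmations at ~10 minutes per block
print(6 * 10)        # ~60 minutes to reasonable finality

# Ethereum: ~25 confirmations at ~15 seconds per block
print(25 * 15 / 60)  # ~6 minutes to reasonable finality
```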
Where do we go from here?
First, it’s really unhelpful to show a “best case scenario” for your DLT project. It’s the equivalent of showing how fast your SQL database can execute “SELECT 1;”. It’s not a useful measure. As developers of DLT systems we need to start showing users realistic tradeoffs in latency, TPS, complexity, transaction cost, etc. If we don’t, users will start to think everyone is just lying and won’t have any way to make a realistic determination of whether a certain system is right for them.
At Tupelo we are modeling our testnet on AWS across 8 regions, using public IPs and a varying number of nodes. We’re running transactions that would be smart-contract calls on other systems. It’s a start, and we’re always looking to improve. What we’re trying to prove to ourselves and our builders is “is this the right solution?”, not just “how fast can we possibly make this?” It would be great if all of us building these systems could start thinking about actual developer adoption and production systems.
Find our most recent numbers here: https://docs.quorumcontrol.com/platform_performance.html