Narwhal and Tusk: A DAG-based Mempool and Efficient BFT Consensus

We propose separating the task of reliable transaction dissemination from transaction ordering to enable high-performance Byzantine fault-tolerant quorum-based consensus. We design and evaluate a mempool protocol, Narwhal, specializing in high-throughput reliable dissemination and storage of causal histories of transactions. Narwhal tolerates an asynchronous network and maintains high performance despite failures. Narwhal is designed to easily scale out using multiple workers at each validator, and we demonstrate that there is no foreseeable limit to the throughput it can achieve. Composing Narwhal with a partially synchronous consensus protocol (Narwhal-HotStuff) yields significantly better throughput even in the presence of faults or intermittent loss of liveness due to asynchrony; however, loss of liveness can result in higher latency. To achieve good overall performance when faults occur, we design Tusk, a zero-message-overhead asynchronous consensus protocol, to work with Narwhal, and we demonstrate its high performance under a variety of configurations and faults. In summary, on a WAN, Narwhal-HotStuff achieves over 130,000 tx/sec at less than 2 seconds of latency, compared with 1,800 tx/sec at 1 second of latency for HotStuff. Additional workers increase throughput linearly to 600,000 tx/sec without any increase in latency. Tusk achieves 160,000 tx/sec at about 3 seconds of latency. Under faults, both protocols maintain high throughput, but Narwhal-HotStuff suffers from increased latency.
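To make the abstract's central idea concrete — the mempool layer certifies availability of blocks so that consensus only has to order small certificate digests — here is a minimal, std-only Rust sketch of a Narwhal-style round-based DAG. All names in it (Header, Certificate, quorum_threshold, the toy digest function) are illustrative assumptions made for this sketch, not the paper's definitions or any real implementation's API.

```rust
// Sketch of the round-based DAG Narwhal builds: each validator's block
// ("header") in round r references certificates for a quorum of round r-1
// blocks, so availability and causal history are guaranteed by the mempool
// layer, while consensus only orders a few certificate digests.
// Everything below is a hypothetical illustration, not the paper's API.

use std::collections::HashMap;

type ValidatorId = u64;
type Digest = u64; // stand-in for a cryptographic hash

/// A block of batched transactions proposed by one validator in one round.
#[derive(Clone, Debug)]
struct Header {
    author: ValidatorId,
    round: u64,
    payload: Vec<String>, // transaction batches (digests in practice)
    parents: Vec<Digest>, // certificates of round - 1 blocks
}

/// Proof that a quorum of validators stored and signed a header: once a
/// certificate exists, the block (and its causal history) is retrievable.
#[derive(Clone, Debug)]
struct Certificate {
    header_digest: Digest,
    round: u64,
    signers: Vec<ValidatorId>, // at least 2f+1 distinct signers
}

fn quorum_threshold(n: usize) -> usize {
    // With n = 3f+1 validators, a quorum is 2f+1.
    n - (n - 1) / 3
}

fn digest_of(h: &Header) -> Digest {
    // Toy stand-in for hashing; a real system uses a collision-resistant hash.
    h.author ^ (h.round << 32) ^ h.parents.iter().sum::<u64>()
}

fn main() {
    let n = 4; // 3f+1 with f = 1
    let q = quorum_threshold(n);
    let mut dag: HashMap<Digest, Header> = HashMap::new();
    let mut round_certs: Vec<Certificate> = Vec::new(); // certificates of round 0

    // Round 0: every validator proposes a genesis header with no parents.
    for v in 0..n as u64 {
        let h = Header {
            author: v,
            round: 0,
            payload: vec![format!("batch-{v}")],
            parents: vec![],
        };
        let d = digest_of(&h);
        dag.insert(d, h);
        round_certs.push(Certificate {
            header_digest: d,
            round: 0,
            signers: (0..q as u64).collect(),
        });
    }

    // Round 1: a header must reference a quorum (2f+1) of round-0
    // certificates; this is what weaves individual proposals into one DAG.
    let parents: Vec<Digest> = round_certs.iter().take(q).map(|c| c.header_digest).collect();
    let h1 = Header { author: 0, round: 1, payload: vec!["batch-0b".into()], parents };
    println!("round-1 header references {} certified parents", h1.parents.len());
    println!(
        "DAG holds {} round-0 headers; committing one round-1 certificate would commit them all",
        dag.len()
    );

    // Consensus (HotStuff or Tusk) then orders certificates, not transactions:
    // committing one certificate implicitly commits its whole causal history.
}
```

The property this sketch illustrates is the one the reported throughput rests on: a certificate proves that a quorum of validators stores the block, so committing a single certificate digest implicitly commits its entire causal history of transaction batches, and dissemination bandwidth (which scales with added workers) rather than the consensus critical path becomes the bottleneck.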

Bibliographic Details
Main Authors: Danezis, George; Kokoris-Kogias, Eleftherios; Sonnino, Alberto; Spiegelman, Alexander
Format: Report
Language: English
Published: arXiv 2021
Subjects: Cryptography and Security (cs.CR); Distributed, Parallel, and Cluster Computing (cs.DC)
Online Access: https://dx.doi.org/10.48550/arxiv.2105.11827
https://arxiv.org/abs/2105.11827
Rights: arXiv.org perpetual, non-exclusive license (http://arxiv.org/licenses/nonexclusive-distrib/1.0/)