* Initialize the rayon threadpool with a new config for CPU-bound threads
* Verify proofs and signatures on the rayon thread pool
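For context, a dedicated rayon pool for CPU-bound verification might be configured roughly like this; the thread name and sizing policy are assumptions for illustration, not Zebra's exact settings:

```rust
use std::thread::available_parallelism;

/// A minimal sketch: build a rayon pool sized to the available CPU cores,
/// reserved for CPU-bound work like proof and signature verification.
fn init_verification_pool() -> rayon::ThreadPool {
    rayon::ThreadPoolBuilder::new()
        .thread_name(|index| format!("verifier-{index}"))
        .num_threads(available_parallelism().map(|n| n.get()).unwrap_or(1))
        .build()
        .expect("failed to build the verification thread pool")
}
```

Async tasks can then move verification onto the pool with `spawn_fifo` and send the result back over a oneshot channel, keeping the tokio executor free for I/O.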
* Only spawn one concurrent batch per verifier, for now
* Allow tower-batch to queue multiple batches
* Fix up a potentially incorrect comment
* Rename some variables for concurrent batches
* Spawn multiple batches concurrently, without any limits
* Simplify batch worker loop using OptionFuture
* Clear pending batches once they finish
* Stop accepting new items when we're at the concurrent batch limit
* Fail queued requests on drop
* Move pending_items and the batch timer into the worker struct
* Add worker fields to batch trace logs
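The `OptionFuture` simplification above gives the worker loop roughly this shape; the item type, batch trigger, and timings below are placeholders, not the real tower-batch code:

```rust
use futures::future::OptionFuture;
use std::time::Duration;
use tokio::sync::mpsc;
use tokio::time::{sleep_until, Instant};

type Item = u64; // placeholder for the real batch item type

struct Worker {
    rx: mpsc::UnboundedReceiver<Item>,
    pending_items: Vec<Item>,
    /// Set while a partial batch is waiting for more items.
    batch_deadline: Option<Instant>,
}

impl Worker {
    async fn run(mut self) {
        loop {
            // With no deadline, the OptionFuture resolves to `None`, which
            // fails the `Some(())` pattern and disables the timer branch.
            let timer: OptionFuture<_> = self.batch_deadline.map(sleep_until).into();

            tokio::select! {
                Some(()) = timer => self.flush(),
                maybe_item = self.rx.recv() => match maybe_item {
                    Some(item) => {
                        if self.pending_items.is_empty() {
                            // First item of a new batch: start the batch timer.
                            self.batch_deadline =
                                Some(Instant::now() + Duration::from_millis(100));
                        }
                        self.pending_items.push(item);
                    }
                    // All senders dropped: flush what we have and exit.
                    None => return self.flush(),
                },
            }
        }
    }

    fn flush(&mut self) {
        self.batch_deadline = None;
        let _batch = std::mem::take(&mut self.pending_items);
        // ...spawn verification of `_batch` here...
    }
}
```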
* Run docker tests on PR series
* During full verification, process 20 blocks concurrently
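The concurrent full-verification change can be pictured with `futures::StreamExt::buffered`; `verify_block` here is a stand-in, not the real verifier call:

```rust
use futures::stream::{self, StreamExt, TryStreamExt};

#[derive(Debug)]
struct VerifyError;

// Stand-in for a call into the real block verifier.
async fn verify_block(height: u32) -> Result<(), VerifyError> {
    let _ = height;
    Ok(())
}

/// Verify a height range with up to 20 verification futures in flight.
/// `buffered` (unlike `buffer_unordered`) yields results in input order,
/// so blocks can still be committed by height.
async fn verify_range(heights: std::ops::Range<u32>) -> Result<(), VerifyError> {
    stream::iter(heights)
        .map(verify_block)
        .buffered(20)
        .try_for_each(|()| async { Ok(()) })
        .await
}
```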
* Remove an outdated comment about yielding to other tasks
* cancel background database tasks in `FinalizedState` destructor
* use `shutdown_timeout()`
* Log info-level messages while waiting for background tasks to shut down
* Cancel background tasks during debug_stop_at_height shutdown
This commit moves the database shutdown code into a common function.
* Create a constant for the tokio timeout
* Add a test script for Zebra shutdown errors
* Increase the shutdown timeout to 20 seconds for slower machines
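The shutdown path above boils down to tokio's `shutdown_timeout`; the constant name and wiring here are illustrative:

```rust
use std::time::Duration;
use tokio::runtime::Runtime;

/// Generous timeout so slower machines can finish shutting down cleanly.
const SHUTDOWN_TIMEOUT: Duration = Duration::from_secs(20);

fn shutdown(runtime: Runtime) {
    // Unlike simply dropping the runtime, `shutdown_timeout` stops waiting
    // for lingering blocking tasks once the deadline passes, so shutdown
    // cannot hang indefinitely.
    runtime.shutdown_timeout(SHUTDOWN_TIMEOUT);
}
```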
* add title to building zebra
* use imported duration
Co-authored-by: teor <teor@riseup.net>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
* Add a copy-state command, which copies blocks between two state services
* Check blocks were written correctly
* Add extra logging to debug shutdown
* Add a block height limit argument
* Let the target state start from any height
Co-authored-by: Deirdre Connolly <deirdre@zfnd.org>
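A hypothetical shape for the copy loop, with `BlockStore` standing in for requests to the two state services (the real command drives `tower::Service`s instead):

```rust
type Block = Vec<u8>; // placeholder for the real block type
type BoxError = Box<dyn std::error::Error + Send + Sync>;

/// Stand-in for the source and target state services.
trait BlockStore {
    async fn read_block(&self, height: u32) -> Result<Option<Block>, BoxError>;
    async fn commit_block(&mut self, block: Block) -> Result<(), BoxError>;
}

/// Copy blocks from `source` to `target`, starting at `height` and stopping
/// at `limit` or at the source tip, whichever comes first. The read-back
/// check that verifies each write is left out here.
async fn copy_blocks(
    source: &impl BlockStore,
    target: &mut impl BlockStore,
    mut height: u32,
    limit: Option<u32>,
) -> Result<(), BoxError> {
    while limit.map_or(true, |limit| height <= limit) {
        match source.read_block(height).await? {
            Some(block) => target.commit_block(block).await?,
            None => break, // reached the source tip
        }
        height += 1;
    }
    Ok(())
}
```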
* Start network before verifiers
This makes the Groth16 download task start as late as possible.
* Explain why the Groth16 download must happen first
* Speed up Zebra shutdown: skip waiting for the tokio runtime
This change is mostly mechanical, with the exception of the changes to the
`tower-batch` middleware. This middleware was adapted from `tower::buffer`,
and the `tower::buffer` code was changed to implement its own bounded queue,
because Tokio 0.3 removed the `mpsc::Sender::poll_send` method. See
ddc64e8d4d
for more context on the Tower changes. To keep `tower-batch` matching Tower as
closely as possible, so that it can eventually be upstreamed, those changes
were copied from `tower::Buffer` into `tower-batch`.
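The bounded-queue idea can be sketched as an unbounded channel paired with a semaphore, one way to replace `poll_send`-style backpressure. This is an assumption about the shape, not the actual Tower code; the real middleware acquires the permit in `poll_ready` rather than in an async `send`:

```rust
use std::sync::Arc;
use tokio::sync::{mpsc, OwnedSemaphorePermit, Semaphore};

/// Each queued request carries a semaphore permit; the permit count bounds
/// the queue, and dropping a permit (when the worker finishes the item)
/// frees a slot for the next sender.
struct BoundedSender<T> {
    tx: mpsc::UnboundedSender<(T, OwnedSemaphorePermit)>,
    semaphore: Arc<Semaphore>,
}

impl<T> BoundedSender<T> {
    async fn send(&self, item: T) -> Result<(), T> {
        // Waiting here provides the backpressure `poll_send` used to give.
        let permit = match self.semaphore.clone().acquire_owned().await {
            Ok(permit) => permit,
            Err(_closed) => return Err(item),
        };
        match self.tx.send((item, permit)) {
            Ok(()) => Ok(()),
            Err(mpsc::error::SendError((item, _permit))) => Err(item),
        }
    }
}
```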
* Set up tracing-flame for profiling zebrad
* start work on conditional flamegraph generation
* review time!
* update comments
* Update Cargo.toml
* disable default features for inferno
* reorganize
* missing one trait
* Apply suggestions from code review
* graceful shutdown!
* remove special case handling on ctrlc for cleanup
* rename signal fn to better represent its responsibility
* remove unused global hook for flushing flamegraph
* move tracing logic to the right file
* just copy linkerd's signal handling logic
* update book
* make zebrad app drop on shutdown normally
* Update zebrad/src/components/tokio.rs
Co-authored-by: teor <teor@riseup.net>
* Update zebrad/src/application.rs
Co-authored-by: teor <teor@riseup.net>
* Apply suggestions from code review
Co-authored-by: teor <teor@riseup.net>
* cleanup a little
* ooh yea there's an API for that
* set up env-filter for the backup subscriber
* document env filter
* document return codes
* forgot to save
* Update book/src/applications/zebrad.md
Co-authored-by: teor <teor@riseup.net>
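The conditional flamegraph setup looks roughly like this; the config plumbing is assumed, but `FlameLayer::with_file` and the layering are real tracing-flame and tracing-subscriber APIs:

```rust
use std::{fs::File, io::BufWriter, path::PathBuf};
use tracing_flame::{FlameLayer, FlushGuard};
use tracing_subscriber::{prelude::*, EnvFilter};

/// Install tracing, adding a flame layer only when a flamegraph path is
/// configured. The returned guard must live until shutdown: dropping it
/// flushes the folded stack samples to disk.
fn init_tracing(flamegraph: Option<PathBuf>) -> Option<FlushGuard<BufWriter<File>>> {
    let filter =
        EnvFilter::try_from_default_env().unwrap_or_else(|_| EnvFilter::new("info"));
    let fmt = tracing_subscriber::fmt::layer();
    let registry = tracing_subscriber::registry().with(filter).with(fmt);

    match flamegraph {
        Some(path) => {
            let (flame, guard) =
                FlameLayer::with_file(path).expect("failed to open flamegraph output file");
            registry.with(flame).init();
            Some(guard)
        }
        None => {
            registry.init();
            None
        }
    }
}
```

Holding the guard until graceful shutdown is what makes the shutdown work above matter: the samples only reach disk when the guard is dropped or explicitly flushed.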
The components are accessed by a lock on the application state. When a command
called `block_on` to enter an async context, it obtained a write lock on the
entire application state. This meant that if the application state was
accessed later in an async context, a deadlock would occur. Instead, the
`TokioComponent` now holds an `Option<Runtime>`, so that before calling
`block_on`, the caller can `.take()` the runtime and release the lock. Since we
only ever enter an async context once, it's not a problem that the component is
then missing its runtime: once we are inside a task, we can access the runtime
directly.
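In code, that pattern looks roughly like this; the lock and state types are simplified stand-ins for the Abscissa application state:

```rust
use std::sync::RwLock;
use tokio::runtime::Runtime;

/// Simplified stand-in for the component that owns the runtime.
struct TokioComponent {
    rt: Option<Runtime>,
}

fn enter_async(state: &RwLock<TokioComponent>) {
    // Take the runtime out while briefly holding the write lock; the lock is
    // released at the end of this statement, *before* we block.
    let rt = state
        .write()
        .expect("lock poisoned")
        .rt
        .take()
        .expect("enter_async is only called once");

    rt.block_on(async {
        // Tasks here can lock `state` again without deadlocking, because
        // the write lock above has already been released.
    });
}
```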
* Add a TracingConfig and some components
Co-authored-by: Deirdre Connolly <deirdre@zfnd.org>
* Restructure, use dependency injection, initialize tracing
* Start a placeholder loop in start command
* Add hyper alpha.1, bump tokio to alpha.4
* Hello world endpoint using async/await from hyper 0.13 alpha
Also cleaned up some linter messages.
Co-authored-by: Henry de Valence <hdevalence@hdevalence.ca>
* Update to tracing_subscriber 0.1
* fmt
* add rust-toolchain
* Remove hyper::Version import
* wip: start filter_handler impl
* Add .rustfmt.toml
* rustfmt
* Tidy up .rustfmt.toml
* Add filter reloading handling.
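Filter reloading uses tracing-subscriber's `reload` layer; the endpoint handler that triggers the reload is omitted here:

```rust
use tracing_subscriber::{prelude::*, reload, EnvFilter};

fn main() {
    // Wrap the filter in a reload layer and keep the handle.
    let (filter, handle) = reload::Layer::new(EnvFilter::new("info"));

    tracing_subscriber::registry()
        .with(filter)
        .with(tracing_subscriber::fmt::layer())
        .init();

    // Later, e.g. from the filter endpoint's request handler:
    handle
        .reload(EnvFilter::new("debug"))
        .expect("failed to reload the tracing filter");
}
```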
* bump toolchain
* Remove generated hello world acceptance tests.
These tests exercise the autogenerated binary and serve as examples of how to
test the behaviour of abscissa binaries. Since we no longer print "Hello
World", they fail; we don't yet have replacement behaviour to write tests for,
so they're removed for now.
* Clean up config file handling with Option::and_then.
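The `and_then` cleanup has this general shape; `parse` and the config type are placeholders, not the real zebrad deserializer:

```rust
use std::path::PathBuf;

#[derive(Default)]
struct Config; // placeholder for the real config type

// Placeholder for the real parser, e.g. a TOML deserializer.
fn parse(contents: &str) -> Option<Config> {
    let _ = contents;
    Some(Config)
}

/// Chain the fallible steps with `and_then`, falling back to the default
/// config if there is no path, the file is unreadable, or parsing fails.
fn load_config(path: Option<PathBuf>) -> Config {
    path.and_then(|path| std::fs::read_to_string(path).ok())
        .and_then(|contents| parse(&contents))
        .unwrap_or_default()
}
```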