Temporary fix so that Zebra's default logs support a typical workflow:
1. Developer or user runs Zebra with the default config
2. They send the logs to a terminal
3. When they see a bug, they copy-paste the last few log lines into a
bug report
This is the same change that was merged in #1373 and reverted in #1375.
We'll create a consistent logging design for Zebra in ticket #1381.
* Make `debug_stop_at_height` and `ephemeral` work together
* if `debug_stop_at_height` and `ephemeral` are set, delete the database
files after reaching the stop height
* drop or flush the database before `debug_stop_at_height` exits Zebra
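As a rough sketch of how those options could interact at shutdown (the `Config` struct, the `maybe_stop` function, and the `path` argument are illustrative names, not Zebra's actual API):

```rust
// Hypothetical sketch; the `Config` fields mirror the options named
// above, but the function and `path` argument are illustrative.
struct Config {
    debug_stop_at_height: Option<u32>,
    ephemeral: bool,
}

fn maybe_stop<Db>(config: &Config, height: u32, db: Db, path: &std::path::Path) {
    if let Some(stop) = config.debug_stop_at_height {
        if height >= stop {
            // Flush and drop the database handle before exiting Zebra.
            drop(db);
            if config.ephemeral {
                // Delete the database files after reaching the stop height.
                let _ = std::fs::remove_dir_all(path);
            }
            std::process::exit(0);
        }
    }
}
```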
This commit changes the state system and database format to track the
provenance of UTXOs, in addition to the outputs themselves.
Specifically, it tracks the following additional metadata:
- the height at which the UTXO was created;
- whether the UTXO was created by a coinbase transaction.
This metadata will allow us to:
- check the coinbase maturity consensus rule;
- check the coinbase inputs => no transparent outputs rule;
- implement lookup of transactions by UTXO (using the height to find the
block and then scanning the block) for a future RPC mechanism.
Closes #1342
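A hedged sketch of what such a tracked record might look like; the type and field names here are assumptions, not Zebra's exact definitions:

```rust
// Illustrative sketch of a UTXO record carrying provenance metadata.
// `Output` stands in for the transparent output type; names are assumptions.
struct Output {
    value: u64,
    lock_script: Vec<u8>,
}

struct Utxo {
    /// The transparent output itself.
    output: Output,
    /// The height of the block that created this output, used to check
    /// the coinbase maturity rule and to look up the creating block
    /// (then scan it for the transaction).
    height: u32,
    /// Whether the output comes from a coinbase transaction, used to
    /// enforce coinbase-specific consensus rules.
    from_coinbase: bool,
}
```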
This provides useful and not too noisy output at INFO level. We emit an
info-level message on every block commit instead of trying to emit one
message every N blocks, because this is useful both for initial block
sync and for continuous state updates on new blocks.
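For example, a per-commit event using the `tracing` crate might look like this; the exact fields recorded are an assumption:

```rust
use tracing::info;

// Hypothetical commit notification; the `height` and `hash` fields are
// assumptions about what a block-commit event would record.
fn log_commit(height: u32, hash: &str) {
    info!(height, hash, "committed block to state");
}
```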
This change introduces two new types:
- `PreparedBlock`, representing a block which has undergone semantic
validation and has been prepared for contextual validation;
- `FinalizedBlock`, representing a block which is ready to be finalized
immediately;
and changes the `Request::CommitBlock` and `Request::CommitFinalizedBlock`
variants to use these types instead of their previous fields.
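In rough outline, the two bundles might look like the following; the fields shown are assumptions based on the description above, not the exact definitions:

```rust
// Rough sketch of the two bundle types; the exact fields are assumptions.
use std::sync::Arc;

struct Block; // placeholder for the real block type
struct Hash([u8; 32]);

/// A block that passed semantic validation and is ready for
/// contextual validation.
struct PreparedBlock {
    block: Arc<Block>,
    /// Precomputed hash, so the state never recomputes it.
    hash: Hash,
    height: u32,
}

/// A block that can be committed to the finalized state immediately.
struct FinalizedBlock {
    block: Arc<Block>,
    hash: Hash,
    height: u32,
}
```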
This change solves the problem of passing data between semantic
validation and contextual validation, and cleans up the state code by
allowing it to pass around a bundle of data. Previously, the state code
just passed around an `Arc<Block>`, which forced it to needlessly
recompute block hashes and other data, and was incompatible with the
already-known but not-yet-implemented data transfer requirements, namely
passing in the Sprout and Sapling anchors computed during contextual
validation.
This commit propagates the `PreparedBlock` and `FinalizedBlock` types
through the state code but only uses their data opportunistically, e.g.,
changing `.hash()` computations to use the precomputed hash. In the
future, these structures can be extended to pass data through the
verification pipeline for reuse as appropriate. For instance, these
changes allow the Sprout and Sapling anchors to be propagated through
the state.
The behavior of a request for a UTXO from a previous block depends on
whether that block has already been submitted to the state:
* if it has, the state should be able to find it and answer immediately.
* if it has not, the state should see it in a later request.
However, the previous code only checked committed blocks, not queued
blocks, so if the block containing the UTXO had already arrived but had
not been committed, it would never be scanned.
This patch fixes the problem but is a bad solution, duplicating
computation between the block verifier and the state. A better fix
follows in the next commit.
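The shape of the fix, in sketch form (all types and names are illustrative):

```rust
use std::collections::HashMap;

// Illustrative sketch: answer a UTXO request from committed state first,
// then fall back to scanning queued (not yet committed) blocks.
// All types and method names here are assumptions.
type OutPoint = ([u8; 32], u32);

struct Utxo;

struct State {
    committed: HashMap<OutPoint, Utxo>,
    queued: HashMap<OutPoint, Utxo>,
}

impl State {
    fn lookup_utxo(&self, outpoint: &OutPoint) -> Option<&Utxo> {
        // Previously only `committed` was checked, so a UTXO created by a
        // queued-but-uncommitted block was never found.
        self.committed
            .get(outpoint)
            .or_else(|| self.queued.get(outpoint))
    }
}
```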
Make tracing messages more concise by omitting information already
contained in a parent span and by shortening messages. This makes them
easier to read.
Previously, this function was instrumented with a span containing the
parent hash that was passed into the function. But it doesn't make sense
to consider the work done by the function as happening in the context of
the supplied parent hash (as distinct from the context of the hash of
the newly arrived block, which is already contained in an outer span),
so the span added noise without conveying extra context.
Instead, use events that occur within the context of the existing spans.
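Concretely, the pattern is to drop the per-call span and emit events that attach to whatever spans are already active, along these lines (the event fields are assumptions):

```rust
use tracing::debug;

// Instead of wrapping the function in a span keyed on the parent hash,
// emit an event that attaches to the caller's existing spans.
fn handle_block(parent_hash: &str) {
    debug!(%parent_hash, "checking parent of newly arrived block");
    // ... the actual work happens here, inside the outer spans ...
}
```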
Here the span is added to the body of the `Service::call`
implementation, not to the futures it returns, because the state service
does all of its work synchronously in `call` rather than in those
futures.
The service itself is skipped as a span field. We could either include
or exclude the request: including it would be useful, but the request
body can be very large. Instead, we make two spans, one at info level
and one at trace level, and filter that way.
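A sketch of that layering with the `tracing` crate; the span names and fields are assumptions:

```rust
use tracing::{info_span, trace_span};

// Hypothetical request type; only its debug representation matters here.
#[derive(Debug)]
struct Request { /* ... */ }

fn call(req: Request) {
    // Coarse span: always cheap, never includes the (possibly large) body.
    let outer = info_span!("state_request");
    let _outer = outer.enter();

    // Fine-grained span: carries the full request, enabled only when
    // trace-level output is wanted.
    let inner = trace_span!("state_request_body", request = ?req);
    let _inner = inner.enter();

    // ... synchronous work happens here, inside both spans ...
}
```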
This change is mostly mechanical, with the exception of the changes to the
`tower-batch` middleware. This middleware was adapted from `tower::buffer`,
and the `tower::buffer` code was changed to implement its own bounded queue,
because Tokio 0.3 removed the `mpsc::Sender::poll_send` method. See
ddc64e8d4d
for more context on the Tower changes. To match Tower as closely as possible
in order to be able to upstream `tower-batch`, those changes are copied from
`tower::Buffer` to `tower-batch`.
Some systems have a very small /dev/shm, for example, see:
https://github.com/docker-library/postgres/issues/416
So we should just use the temporary directory on all operating systems.
Also:
* use TempDir to generate the temporary path
* delete the code that we copied from sled
* prefix the temporary path with the state version and network
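With the `tempfile` crate, the prefixed path could be generated like this; the exact prefix format is an assumption:

```rust
use tempfile::Builder;

// Hedged sketch: create a temporary directory whose name is prefixed
// with the state version and network. The prefix string is illustrative.
fn ephemeral_db_path() -> std::io::Result<tempfile::TempDir> {
    Builder::new()
        .prefix("zebra-state-v1-mainnet-")
        .tempdir()
}
```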
## Motivation
Prior to this PR we've been using `sled` as our database for storing persistent chain data on disk between boots. We picked `sled` over `rocksdb` to minimize our C++ dependencies, despite it being a less mature codebase. The theory was that if it worked well enough, we'd prefer to have a pure-Rust codebase, but if we ever ran into problems, we knew we could easily swap it out for `rocksdb`.
Well, we ran into problems. Sled's memory usage was particularly high, and it seemed to be leaking memory. On top of that, its write performance was poor, causing us to become bottlenecked on `sled` instead of the network.
## Solution
This PR replaces `sled` with `rocksdb`. We've seen a 10x improvement in memory usage out of the box, no more leaking, and much better write performance. With this change, writing chain data to disk is no longer a limiting factor in how quickly we can sync the chain.
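For illustration, opening the new database might look roughly like this; the column family names and options are placeholders, not the real state layout:

```rust
use rocksdb::{Options, DB};

// Hedged sketch of opening the on-disk state; the path handling and
// column family names are placeholders, not Zebra's real layout.
fn open_state_db(path: &std::path::Path) -> Result<DB, rocksdb::Error> {
    let mut opts = Options::default();
    opts.create_if_missing(true);
    opts.create_missing_column_families(true);
    DB::open_cf(&opts, path, ["blocks", "utxos"])
}
```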
The code in this pull request has:
- [x] Documentation Comments
- [x] Unit Tests and Property Tests
## Review
@hdevalence
This change explicitly documents cancellation contracts for our Tower services,
and tries to correct a bug in the implementation of the CheckpointVerifier,
which duplicated information from the state service without ensuring that
it was kept in sync.
This change has two benefits:
* reduces conflicts with the sled refactor and any replacement
* allows the function to be called independently for testing
`check_contextual_validity` mistakenly used the new block's hash to try
to get the parent block from the state. This caused a panic, because the
new block isn't in the state yet.
Use `StateService::chain` to get the parent block, because we'll be
using `chain` for difficulty adjustment contextual verification anyway.
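In sketch form (the types and the `chain` signature are illustrative, not Zebra's exact API):

```rust
// Illustrative fix: walk the chain starting from the parent hash,
// not from the new block's own hash.
struct Hash([u8; 32]);
struct Block { previous_block_hash: Hash }
struct StateService { /* ... */ }

impl StateService {
    /// Iterate over blocks from the given hash back toward genesis
    /// (illustrative stand-in for `StateService::chain`).
    fn chain(&self, _from: &Hash) -> impl Iterator<Item = Block> + '_ {
        std::iter::empty()
    }

    fn check_contextual_validity(&self, block: &Block) -> Result<(), &'static str> {
        // Wrong: starting from the new block's own hash fails, because
        // the new block is not in the state yet.
        // Right: start from the parent hash via `chain`.
        let parent = self
            .chain(&block.previous_block_hash)
            .next()
            .ok_or("parent block not found")?;
        let _ = parent;
        // ... difficulty adjustment and other contextual checks ...
        Ok(())
    }
}
```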