* Add `Transaction::spent_outpoints` getter method
Returns an iterator over the UTXO `OutPoint`s spent by the transaction.
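As a rough sketch (using simplified stand-ins for Zebra's `transparent` types, since the real signatures may differ):
```rust
/// Simplified stand-ins for Zebra's `transparent::OutPoint` and `transparent::Input`.
#[derive(Clone, Copy, Debug)]
pub struct OutPoint {
    pub hash: [u8; 32],
    pub index: u32,
}

pub enum Input {
    /// Spends a previously created transparent output.
    PrevOut { outpoint: OutPoint },
    /// Coinbase inputs create new coins and spend no previous output.
    Coinbase,
}

pub struct Transaction {
    pub inputs: Vec<Input>,
}

impl Transaction {
    /// Returns an iterator over the `OutPoint`s spent by this transaction's
    /// transparent inputs, skipping coinbase inputs.
    pub fn spent_outpoints(&self) -> impl Iterator<Item = OutPoint> + '_ {
        self.inputs.iter().filter_map(|input| match input {
            Input::PrevOut { outpoint } => Some(*outpoint),
            Input::Coinbase => None,
        })
    }
}
```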
* Add `mempool::Error::Conflict` variant
An error representing that a transaction was rejected because it
conflicts with another transaction that's already in the mempool.
* Reject conflicting mempool transactions
Reject including a transaction in the mempool if it spends outputs
already spent by, or reveals nullifiers already revealed by, another
transaction in the mempool.
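Conceptually, the spend-conflict check boils down to a set-membership test over spent outpoints and revealed nullifiers. A minimal sketch, with illustrative types rather than Zebra's actual storage API:
```rust
use std::collections::HashSet;

/// Illustrative stand-ins for transparent outpoints and shielded nullifiers.
type OutPoint = ([u8; 32], u32);
type Nullifier = [u8; 32];

enum Error {
    /// The transaction spends an output or reveals a nullifier that another
    /// mempool transaction already spends or reveals.
    Conflict,
}

#[derive(Default)]
struct MempoolStorage {
    spent_outpoints: HashSet<OutPoint>,
    revealed_nullifiers: HashSet<Nullifier>,
}

impl MempoolStorage {
    /// Reject a transaction that has a spend conflict with the mempool,
    /// otherwise record its spends and nullifiers.
    fn insert(&mut self, spends: &[OutPoint], nullifiers: &[Nullifier]) -> Result<(), Error> {
        let conflict = spends.iter().any(|o| self.spent_outpoints.contains(o))
            || nullifiers.iter().any(|n| self.revealed_nullifiers.contains(n));

        if conflict {
            return Err(Error::Conflict);
        }

        self.spent_outpoints.extend(spends.iter().copied());
        self.revealed_nullifiers.extend(nullifiers.iter().copied());
        Ok(())
    }
}
```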
* Fix typo in documentation
Remove the `r` that was incorrectly added.
Co-authored-by: teor <teor@riseup.net>
* Specify that the conflict is a spend conflict
Make the situation clearer, because there are other types of conflict.
Co-authored-by: teor <teor@riseup.net>
* Clarify that the outpoints are from inputs
Otherwise it could be confused with the transaction's own outputs
represented as `OutPoint` references.
Co-authored-by: teor <teor@riseup.net>
* Create `storage::tests::vectors` module
Refactor to follow the convention used for other tests.
* Add an `AtLeastOne::first_mut` method
A getter that returns a mutable reference, allowing the first element to be changed.
* Add an `AtLeastOne::push` method
Allow appending elements to the collection.
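A rough sketch of how `first_mut` and `push` might look on a `Vec`-backed `AtLeastOne` (an assumption for illustration; the real layout may differ):
```rust
/// A collection that is guaranteed to contain at least one element.
pub struct AtLeastOne<T> {
    /// Invariant: `inner` is never empty.
    inner: Vec<T>,
}

impl<T> AtLeastOne<T> {
    /// Returns a mutable reference to the first element,
    /// so callers can change it in place.
    pub fn first_mut(&mut self) -> &mut T {
        &mut self.inner[0]
    }

    /// Appends an element to the collection.
    /// Pushing can never violate the "at least one" invariant.
    pub fn push(&mut self, element: T) {
        self.inner.push(element);
    }
}
```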
* Derive `Arbitrary` for `FieldNotPresent`
This is just to make the code that generates arbitrary anchors a bit
simpler.
* Test if conflicting transactions are rejected
Generate two transactions (either V4 or V5) and insert a conflicting
spend, which can be either a transparent UTXO, or a nullifier for one of
the shielded pools. Check that any attempt to insert both transactions
causes one to be accepted and the other to be rejected.
* Delete a TODO comment that we decided not to do
Co-authored-by: teor <teor@riseup.net>
* Create initial `mempool::Crawler` type
The mempool crawler is responsible for periodically asking peers for
transactions to insert into the local mempool. This initial
implementation will periodically ask for transactions, but won't do
anything with them yet.
Also, the crawler is currently always enabled; this should be fixed so
that it doesn't crawl while Zebra is still syncing the chain.
* Add a timeout to peer responses
Prevent the crawler from getting stuck when a peer takes too long to respond.
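A minimal sketch of the crawler loop with a per-request timeout (the constants and the peer request function are placeholders, not Zebra's actual values or API):
```rust
use std::time::Duration;
use tokio::time::{interval, timeout};

/// Placeholder values; Zebra's real crawl interval and timeout may differ.
const CRAWL_INTERVAL: Duration = Duration::from_secs(75);
const PEER_RESPONSE_TIMEOUT: Duration = Duration::from_secs(6);

/// Placeholder for asking a peer for mempool transaction IDs.
async fn request_transactions_from_peer() -> Result<Vec<[u8; 32]>, &'static str> {
    Ok(Vec::new())
}

/// Periodically ask peers for transactions, giving up on responses that
/// take too long instead of letting the crawler get stuck.
async fn crawl_mempool() {
    let mut ticker = interval(CRAWL_INTERVAL);

    loop {
        ticker.tick().await;

        match timeout(PEER_RESPONSE_TIMEOUT, request_transactions_from_peer()).await {
            Ok(Ok(transaction_ids)) => {
                // TODO: download and verify the advertised transactions.
                tracing::debug!(count = transaction_ids.len(), "crawled transaction IDs");
            }
            Ok(Err(error)) => tracing::debug!(?error, "peer returned an error"),
            Err(_elapsed) => tracing::debug!("peer response timed out"),
        }
    }
}
```
The crawler task would then be spawned with something like `tokio::spawn(crawl_mempool())` when Zebra starts.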
* Run the mempool crawler in Zebra
Spawn a task for the crawler when Zebra starts.
* Test if the crawler is sending requests
Create a mock for the `PeerSet` service to intercept requests and verify
that the transaction requests are sent periodically.
* Use `full` Tokio features when testing
Make it simpler to select the features for test builds.
Co-authored-by: teor <teor@riseup.net>
* Link to the issue for crawler activation
Make it easy to navigate from the `TODO` comment to the current project
planning.
Co-authored-by: teor <teor@riseup.net>
* Link to the issue for downloading transactions
Make it easy to navigate from the `TODO` comment to the current project
planning.
Co-authored-by: teor <teor@riseup.net>
* Minimal recent sync lengths implementation
Also includes metrics and logging, to make diagnosing bugs easier.
* Add logging to check what happens when Zebra reaches the chain tip
* Add tests for recent sync lengths
- initially empty
- pruned to correct length
- newest entries go first
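A minimal sketch of the kind of newest-first, length-bounded buffer these tests describe (the names and the length limit are placeholders):
```rust
use std::collections::VecDeque;

/// Placeholder for the maximum number of recent sync lengths to keep.
const MAX_RECENT_LENGTHS: usize = 4;

#[derive(Default)]
struct RecentSyncLengths {
    /// Newest entries go first.
    lengths: VecDeque<usize>,
}

impl RecentSyncLengths {
    /// Record a new sync length, pruning the oldest entries so the
    /// buffer never grows past `MAX_RECENT_LENGTHS`.
    fn push(&mut self, length: usize) {
        self.lengths.push_front(length);
        self.lengths.truncate(MAX_RECENT_LENGTHS);
    }
}

#[test]
fn newest_entries_go_first_and_old_entries_are_pruned() {
    let mut recent = RecentSyncLengths::default();
    for length in [1, 2, 3, 4, 5] {
        recent.push(length);
    }
    assert_eq!(recent.lengths, VecDeque::from(vec![5, 4, 3, 2]));
}
```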
* Drop a redundant `/` from a Cargo.lock URL
This seems to be a nightly or beta Rust change,
but hopefully stable just accepts it.
* Use metrics histograms to avoid overwriting values
* Add detailed syncer monitoring dashboard
* Increase the recent sync length to 4
This length makes it easier to distinguish between temporary and
sustained errors/syncs.
Co-authored-by: Janito Vaqueiro Ferreira Filho <janito.vff@gmail.com>
* Update versions for zebra v1.0.0-alpha.12 release
* Update Cargo.lock
* Update the release checklist with the latest version changes, to help keep track of them for future releases
* Remove reference to the fact that tower-fallback was not updated
* Gossip dynamically allocated listener ports to peers
Previously, Zebra would either gossip port `0`, which is invalid, or skip
gossiping its own dynamically allocated listener port.
* Improve "no configured peers" warning
And downgrade from error to warning, because inbound-only nodes are a
valid use case.
* Move random_known_port to zebra-test
* Add tests for dynamic local listener ports and the AddressBook
Co-authored-by: Janito Vaqueiro Ferreira Filho <janito.vff@gmail.com>
- Add a custom semver match for `zebrad` versions
- Prefer "line contains string" matches, so tests ignore minor changes
- Escape regex meta-characters when a literal string match is intended
- Rename test functions so they are more precise
- Rewrite match internals to remove duplicate code and enable custom matches
- Document match functions
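For example, literal matches can be escaped with `regex::escape`, while the version check uses a small semver pattern. This is a sketch; the real test helpers may be structured differently:
```rust
use regex::Regex;

/// Match a log line that contains `expected` as a literal string,
/// escaping any regex meta-characters it happens to contain.
fn line_contains(line: &str, expected: &str) -> bool {
    let pattern = Regex::new(&regex::escape(expected)).expect("escaped pattern is valid");
    pattern.is_match(line)
}

/// Match a `zebrad` semver version like `1.0.0-alpha.12`, ignoring the
/// rest of the line so minor output changes don't break the test.
fn is_zebrad_version_line(line: &str) -> bool {
    let pattern =
        Regex::new(r"zebrad \d+\.\d+\.\d+(-[A-Za-z0-9.]+)?").expect("version pattern is valid");
    pattern.is_match(line)
}
```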
* build(deps): bump vergen from 3.2.0 to 5.1.1
* fix hardcoded version for Tracing struct
* add additional metadata
* remove extra allocations for metadata
* Remove zebrad code version from release checklist
The zebrad code automatically uses the crate version now.
* Sort panic metadata into rough categories
Co-authored-by: teor <teor@riseup.net>
Zebra's latest alpha checkpoints on Canopy activation, continues our work on NU5, and fixes a security issue.
Some notable changes include:
## Added
- Log address book metrics when PeerSet or CandidateSet don't have many peers (#1906)
- Document test coverage workflow (#1919)
- Add a final job to CI, so we can easily require all the CI jobs to pass (#1927)
## Changed
- Zebra has moved its mandatory checkpoint from Sapling to Canopy (#1898, #1926)
- This is a breaking change for users that depend on the exact height of the mandatory checkpoint.
## Fixed
- tower-batch: wake waiting workers on close to avoid hangs (#1908)
- Assert that pre-Canopy blocks use checkpointing (#1909)
- Fix CI disk space usage by disabling incremental compilation in coverage builds (#1923)
## Security
- Stop relying on unchecked length fields when preallocating vectors (#1925)
* Bump versions where appropriate
Tested with cargo install --locked --path etc
* Remove fixed panics from 'Known Issues'
* Change to alpha release series in the README
Co-authored-by: teor <teor@riseup.net>
When `cargo run` is run in the workspace directory, it can see two
executables:
- `zebrad`
- `zebra_checkpoints`
Adding `default-run = "zebrad"` to `zebrad/Cargo.toml` makes the
workspace run `zebrad` by default. (Even though it's redundant for the
`zebrad` crate itself.)
Because the new version of the prometheus exporter launches its own
single-threaded runtime on a dedicated worker thread, there's no need
for the tokio and hyper versions it uses internally to align with the
versions used in other crates. So we don't need to use our fork with
tokio 0.3, and can just use the published alpha. Advancing to a later
alpha may fix the missing-metrics issues.
As we approach our alpha release we've decided we want to plan ahead for the user bug reports we will eventually receive. One of the bigger issues we foresee is determining exactly what version of the software users are running, and particularly how easy it may or may not be for users to accidentally discard this information when reporting bugs.
To defend against this, we've decided to include the exact git sha for any given build in the compiled artifact. This information will then be re-exported as a span early in the application startup process, so that all logs and error messages should include the sha as their very first span. We've also added this sha as issue metadata for `color-eyre`'s GitHub issue URL auto-generation feature, which should make sure that the sha is easily available in bug reports we receive, even in the absence of logs.
Co-authored-by: teor <teor@riseup.net>
The metrics code becomes much simpler because the current version of the
metrics crate builds its own single-threaded runtime on a dedicated worker
thread, so no dependency on the main Zebra Tokio runtime is required.
This change is mostly mechanical, with the exception of the changes to the
`tower-batch` middleware. This middleware was adapted from `tower::buffer`,
and the `tower::buffer` code was changed to implement its own bounded queue,
because Tokio 0.3 removed the `mpsc::Sender::poll_send` method. See
ddc64e8d4d
for more context on the Tower changes. To match Tower as closely as possible
in order to be able to upstream `tower-batch`, those changes are copied from
`tower::Buffer` to `tower-batch`.
The hedge middleware implements hedged requests, as described in _The
Tail At Scale_. The idea is that we auto-tune our retry logic according
to the actual network conditions, pre-emptively retrying requests that
exceed some latency percentile. This would hopefully solve the problem
where our timeouts are too long on mainnet and too slow on testnet.
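A simplified sketch of the hedging idea: race the original request against a second attempt that only starts once the request has been outstanding longer than the latency percentile (the `fetch` function and the fixed percentile here are illustrative, not the middleware's real interface):
```rust
use std::time::Duration;
use tokio::time::sleep;

/// Illustrative request; in the real middleware this is the wrapped service call.
async fn fetch(attempt: usize) -> String {
    sleep(Duration::from_millis(if attempt == 0 { 300 } else { 50 })).await;
    format!("response from attempt {attempt}")
}

/// Race the original request against a hedge that only starts once the request
/// has been outstanding for longer than the observed latency percentile.
async fn hedged_fetch(p95: Duration) -> String {
    let first = fetch(0);
    tokio::pin!(first);

    tokio::select! {
        response = &mut first => response,
        _ = sleep(p95) => {
            // The first attempt is slow: start a second one and take whichever wins.
            tokio::select! {
                response = &mut first => response,
                response = fetch(1) => response,
            }
        }
    }
}

#[tokio::main]
async fn main() {
    let response = hedged_fetch(Duration::from_millis(100)).await;
    println!("{response}");
}
```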
This makes two changes relative to the existing download code:
1. It uses a oneshot to attempt to cancel the download task after it
has started;
2. It encapsulates the download creation and cancellation logic into a
Downloads struct.
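The cancellation part can be sketched with a `oneshot` channel raced against the download future (the `download_and_verify` placeholder stands in for the real download logic):
```rust
use tokio::sync::oneshot;

/// Placeholder for the real download-and-verify future.
async fn download_and_verify(hash: [u8; 32]) -> Result<(), &'static str> {
    let _ = hash;
    Ok(())
}

/// A handle that can cancel an in-flight download after it has started.
struct CancelHandle(oneshot::Sender<()>);

impl CancelHandle {
    fn cancel(self) {
        // Ignore the error if the download already finished.
        let _ = self.0.send(());
    }
}

/// Spawn a download task and return a handle that can cancel it.
fn spawn_download(hash: [u8; 32]) -> CancelHandle {
    let (cancel_tx, mut cancel_rx) = oneshot::channel();

    tokio::spawn(async move {
        tokio::select! {
            // Cancelled (or the handle was dropped): stop the download.
            _ = &mut cancel_rx => tracing::debug!("download cancelled"),
            result = download_and_verify(hash) => {
                tracing::debug!(?result, "download finished");
            }
        }
    });

    CancelHandle(cancel_tx)
}
```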
* Split tracing component code into modules.
* Repatriate Tracing and simplify config handling.
We upstreamed our Tracing component, expecting not to have to exert fine
control over the tracing settings. But this turned out not to be the case, and
now that we want to do other things (flamegraphs, journalctl, opentelemetry,
etc), we end up with really awkward code (as in the current flamegraph
handling).
This also makes use of the changes to `init()` to load the config early to pass
configuration data into the components, which avoids the need for the
refactoring in #775.
Finally, we restore support for the `-v` flag when the filter is unset. Closes #831.
* Disable tracing and metrics endpoints by default.
Closes #660.
* Switch back to upstream Abscissa.
* Integrate flamegraph support into the new Tracing component.
* Pass -v in acceptance tests to get info-level output.
* Clean up acceptance test code.
* Setup tracing-flame for use profiling zebrad
* start work on conditional flamegraph generation
* review time!
* update comments
* Update Cargo.toml
* disable default features for inferno
* reorganize
* missing one trait
* Apply suggestions from code review
* graceful shutdown!
* remove special case handling on ctrlc for cleanup
* rename signal fn to better represent its responsibility
* remove unused global hook for flushing flamegraph
* move tracing logic to the right file
* just copy linkerd's signal handling logic
* update book
* make zebrad app drop on shutdown normally
* Update zebrad/src/components/tokio.rs
Co-authored-by: teor <teor@riseup.net>
* Update zebrad/src/application.rs
Co-authored-by: teor <teor@riseup.net>
* Apply suggestions from code review
Co-authored-by: teor <teor@riseup.net>
* cleanup a little
* ooh yea there's an API for that
* setup env-filter for backup subscriber
* document env filter
* document return codes
* forgot to save
* Update book/src/applications/zebrad.md
Co-authored-by: teor <teor@riseup.net>
* add zebrad acceptance tests
* add custom command test helpers that work with kill
* add and use info event for start and seed commands
* combine conflicting tests into one test case
Co-authored-by: Jane Lusby <jane@zfnd.org>
We had a brief discussion on Discord and it seemed like we had consensus on the
following versioning policy:
* zebrad: match major version to NU version, so we will start by releasing
zebrad 3.0.0;
* zebra-* libraries: start by matching zebrad's version, then increment major
versions of each library as we need to make breaking changes (potentially
faster than the zebrad version, always respecting semver but making no
guarantees about the longevity of major releases).
This commit sets all of the crate versions to 3.0.0-alpha.0 -- the -alpha.0
marks it as a prerelease not subject to perfect adherence to compatibility
guarantees.
Prior to this change, the service returned by `zebra_network::init` would spawn background tasks that could silently fail, causing unexpected errors in the zebra_network service.
This change modifies the `PeerSet` that backs `zebra_network::init` to store all of the `JoinHandle`s for each background task it depends on. The `PeerSet` then checks this set of futures to see if any of them have exited with an error or a panic, and if they have it returns the error as part of `poll_ready`.
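Roughly, the check amounts to storing the `JoinHandle`s in a `FuturesUnordered` and inspecting any that have already finished each time the service is polled. A standalone sketch of that check (simplified; the real code runs inside `poll_ready`):
```rust
use futures::stream::{FuturesUnordered, StreamExt};
use tokio::task::JoinHandle;

type BoxError = Box<dyn std::error::Error + Send + Sync>;

/// If any background task has already exited, surface its error.
async fn check_background_tasks(
    handles: &mut FuturesUnordered<JoinHandle<Result<(), BoxError>>>,
) -> Result<(), BoxError> {
    match futures::poll!(handles.next()) {
        // A task returned an error: propagate it to the caller.
        std::task::Poll::Ready(Some(Ok(Err(error)))) => Err(error),
        // A task panicked or was aborted.
        std::task::Poll::Ready(Some(Err(join_error))) => Err(join_error.into()),
        // Background tasks are expected to run forever, so a clean exit is also an error.
        std::task::Poll::Ready(Some(Ok(Ok(())))) => {
            Err("background task exited unexpectedly".into())
        }
        // No task has finished yet (or there are no tasks): the service is healthy.
        _ => Ok(()),
    }
}

#[tokio::main]
async fn main() {
    let mut handles = FuturesUnordered::new();
    handles.push(tokio::spawn(async {
        Err::<(), BoxError>("peer task failed".into())
    }));

    // Give the background task a chance to run before checking it.
    tokio::time::sleep(std::time::Duration::from_millis(10)).await;
    println!("poll_ready result: {:?}", check_background_tasks(&mut handles).await);
}
```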
* rename zebra-storage to zebra-state
* Setup initial skeleton for zebra-state
* add test
* Apply suggestions from code review
Co-authored-by: Henry de Valence <hdevalence@hdevalence.ca>
* move shared test vectors to a common crate
Co-authored-by: Jane Lusby <jane@zfnd.org>
Co-authored-by: Henry de Valence <hdevalence@hdevalence.ca>
* Don't expose submodules of zebra_network::peer.
* PeerSet, PeerDiscover stubs.
Co-authored-by: Deirdre Connolly <deirdre@zfnd.org>
* Initial work on PeerSet.
This is adapted from the MIT-licensed tower-balance implementation.
* Use PeerSet in the connect stub.
* Fix authorship, license information.
I *thought* I had done a sed pass over the Cargo defaults when doing
repository initialization, but I guess I missed it or something.
Anyways, fixed now.
Add a tower-based peer implementation.
Tower provides middleware for request-response oriented protocols, while Bitcoin/Zcash just send messages which could be interpreted either as requests or responses, depending on context. To bridge this mismatch we define our own internal request/response protocol, and implement a per-peer event loop that scans incoming messages and interprets them either as requests from the remote peer to our node, or as responses to requests we made previously. This is performed by the `PeerService` task, and a corresponding `PeerClient: tower::Service` can send it requests. These tasks are themselves created by a `PeerConnector: tower::Service` which dials a remote peer and performs a handshake.
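The internal protocol can be pictured as a pair of enums, with the per-peer task deciding whether each incoming wire message answers a pending request or is a new request from the remote peer. The variants below are illustrative, not the full set:
```rust
/// A few illustrative internal requests our node can make of a peer.
pub enum Request {
    /// Ask the peer for addresses of other peers.
    Peers,
    /// Ask the peer for specific blocks, by hash.
    BlocksByHash(Vec<[u8; 32]>),
}

/// The corresponding internal responses.
pub enum Response {
    /// The request succeeded, and there is no data to return.
    Nil,
    /// A list of peer addresses.
    Peers(Vec<std::net::SocketAddr>),
    /// The requested blocks, as raw bytes in this sketch.
    Blocks(Vec<Vec<u8>>),
}
```
The per-peer task translates between these internal types and raw Bitcoin/Zcash wire messages, so `PeerClient` can stay a plain `tower::Service`.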
This avoids some crate selection conflicts, but makes some futures
extension traits fall out of order? This seems to be an issue with
`pin-project` resolved in the git branch of `hyper` (but not yet
released).
* Add a TracingConfig and some components
Co-authored-by: Deirdre Connolly <deirdre@zfnd.org>
* Restructure, use dependency injection, initialize tracing
* Start a placeholder loop in start command
* Add hyper alpha.1, bump tokio to alpha.4
* Hello world endpoint using async/await from hyper 0.13 alpha
Also cleaned up some linter messages.
Co-authored-by: Henry de Valence <hdevalence@hdevalence.ca>
* Update to tracing_subscriber 0.1
* fmt
* add rust-toolchain
* Remove hyper::Version import
* wip: start filter_handler impl
* Add .rustfmt.toml
* rustfmt
* Tidy up .rustfmt.toml
* Add filter reloading handling.
* bump toolchain
* Remove generated hello world acceptance tests.
These test the behaviour of the autogenerated binary and work as examples of
how to test the behaviour of abscissa binaries. Since we don't print "Hello
World" any more, they fail, but we don't yet have replacement behaviour to add
tests for, so they're removed for now.
* Clean up config file handling with Option::and_then.