Returning `impl IntoIterator` means that the caller will always be
forced to call `.into_iter()`, and returning `impl Iterator` still
allows them to call `.into_iter()` because it becomes the identity
function.
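
For example (a minimal sketch, not Zebra code), any `impl Iterator` return value still works with `.into_iter()`:

```rust
// Minimal sketch, not Zebra code: every `Iterator` also implements
// `IntoIterator`, with `into_iter()` returning the iterator unchanged.
fn even_numbers(limit: u32) -> impl Iterator<Item = u32> {
    (0..limit).filter(|n| n % 2 == 0)
}

fn main() {
    // Usable directly as an iterator...
    let direct: Vec<u32> = even_numbers(10).collect();
    // ...and `.into_iter()` still compiles, as the identity conversion.
    let via_into_iter: Vec<u32> = even_numbers(10).into_iter().collect();
    assert_eq!(direct, via_into_iter);
}
```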
Zebra assumes that deserialized times can always be serialized again.
But this assumption is wrong because:
- sanitization can modify times
- gossiped `MetaAddr` validation can modify times
* Refactor: Split CandidateSet::update into separate functions
* Security: Apply a timeout to the entire CandidateSet::update
* Security: Stop using very large fanout limits during initialization
Previously, Zebra used the number of resolved peer addresses, so it was
possible for all peers to fail, and for Zebra to hang on the first
update. Zebra could also send a fanout for each initial peer, regardless
of whether that peer's connection was successful.
Also:
- wait for at least one successful peer before trying an update
- warn if there are no successful initial peers
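
A rough sketch of the new startup check (illustrative names, not the actual `CandidateSet` code):

```rust
// Illustrative sketch only: gate the first candidate set update on at
// least one successful initial peer, instead of on resolved addresses.
fn first_update_allowed(successful_initial_peers: usize) -> bool {
    if successful_initial_peers == 0 {
        eprintln!("warning: no initial peer connections succeeded yet");
        return false;
    }
    true
}

fn main() {
    assert!(!first_update_allowed(0)); // all initial peers failed: don't hang on update
    assert!(first_update_allowed(1));  // one success is enough to try an update
}
```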
When peers ask for peer addresses, add our local listener address to the
set of addresses, sanitize, then truncate. Sanitization shuffles the
addresses, so if there are lots of addresses in the address book, our
address will only be sent to some peers.
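
A sketch of that response flow (the constant and function names here are illustrative, not Zebra's API):

```rust
use rand::seq::SliceRandom;
use std::net::SocketAddr;

// Illustrative constant: cap the number of addresses in one response.
const MAX_ADDRS_IN_RESPONSE: usize = 1000;

/// Sketch only: add our listener address, shuffle while sanitizing,
/// then truncate, so a large address book only reveals our listener
/// address to some of the peers that ask.
fn peer_addrs_response(mut addrs: Vec<SocketAddr>, local_listener: SocketAddr) -> Vec<SocketAddr> {
    addrs.push(local_listener);
    addrs.shuffle(&mut rand::thread_rng());
    addrs.truncate(MAX_ADDRS_IN_RESPONSE);
    addrs
}
```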
Add canonical addresses from inbound connections to the address book,
so that Zebra can use them for reconnection attempts.
Use the newly added `NeverAttemptedAlternate` state for these addresses,
so we try gossiped addresses first, then canonical addresses. This avoids
duplicate connections to inbound peers.
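
Sketched as an ordering between the never-attempted states (variant names follow this changelog; the real enum has more states and logic):

```rust
// Illustrative sketch: with a derived ordering, gossiped addresses sort
// before alternate (canonical) addresses, so they are attempted first,
// and inbound peers don't get duplicate outbound connections.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
enum PeerAddrState {
    NeverAttemptedGossiped,
    NeverAttemptedAlternate,
    // ...attempted, responded, and failed states...
}

fn main() {
    assert!(PeerAddrState::NeverAttemptedGossiped < PeerAddrState::NeverAttemptedAlternate);
}
```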
If there is a small number of initial peers, and they are slow, the
initial candidate set update can appear to hang. To avoid this issue,
limit the initial candidate set fanout to the number of initial peers.
Once the initial peers have sent us more peer addresses, there is no need
to limit the fanouts for future updates.
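
In other words (a sketch with illustrative names):

```rust
/// Illustrative sketch: the initial fanout is capped by the number of
/// initial peers, so a handful of slow peers can't make the first
/// candidate set update appear to hang.
fn initial_fanout_limit(initial_peer_count: usize, default_fanout: usize) -> usize {
    default_fanout.min(initial_peer_count)
}

fn main() {
    // Two initial peers, default fanout of three: only two requests go out.
    assert_eq!(initial_fanout_limit(2, 3), 2);
    // Later updates know about many peers, so the default fanout applies.
    assert_eq!(initial_fanout_limit(500, 3), 3);
}
```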
Reported by Niklas Long of Equilibrium.
* Security: panic if an internally generated time is out of range
If Zebra has a bug where it generates blocks, transactions, or meta
addresses with bad times, panic. This avoids sending bad data onto the
network.
(Previously, Zebra would truncate some of these times, silently
corrupting the underlying data.)
Make it clear that deserialization of these objects is infallible.
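
A minimal sketch of the new behaviour (the wire format and function here are simplified, not Zebra's serialization code):

```rust
use chrono::{DateTime, TimeZone, Utc};

/// Sketch only: internally generated times must fit the wire format,
/// so an out-of-range value is a Zebra bug, and we panic rather than
/// silently truncating the time and corrupting the data we send.
fn serialize_time(time: DateTime<Utc>) -> u32 {
    u32::try_from(time.timestamp())
        .expect("internally generated time is out of range: this is a bug")
}

fn main() {
    let time = Utc.timestamp_opt(1_600_000_000, 0).single().expect("in-range timestamp");
    assert_eq!(serialize_time(time), 1_600_000_000);
}
```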
* Instrument the crawl task
When we created the crawl task, we forgot to instrument it with the
global span. This fix makes sure that the git and network span appears on
crawl logs.
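
Roughly (a sketch using the `tracing` crate, not the exact Zebra code):

```rust
use tracing::{info, Instrument};

async fn crawl() {
    // Inherits the fields of the span attached below.
    info!("crawling for more peers");
}

fn spawn_crawl_task() -> tokio::task::JoinHandle<()> {
    // Without `.in_current_span()`, the spawned task drops the caller's
    // span, so the global git and network fields are missing from its logs.
    tokio::spawn(crawl().in_current_span())
}
```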
* Instrument the connector
* Improve handshake instrumentation
Make some spans debug level, so there are not too many spans.
* Add the address to initial peer connection errors
- stop putting inbound addresses in the address book
- drop address book entries that can't be used for outbound connections
- distinguish between temporary inbound and permanent outbound peer
addresses
- also create variants to handle proxy connections
(but don't use them yet)
- avoid tracking connection state for isolated connections
- document security constraints for the address book and peer set
* Security: stop panicking on out-of-range version timestamps
Instead, return a deserialization error, and close the connection.
This issue was reported by Equilibrium.
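
A sketch of the replacement behaviour (error handling simplified; not Zebra's actual error type):

```rust
use chrono::{DateTime, TimeZone, Utc};

/// Sketch only: an out-of-range timestamp in a remote `version` message
/// becomes a deserialization error instead of a panic, and the caller
/// closes the connection.
fn parse_version_time(seconds: i64) -> Result<DateTime<Utc>, String> {
    Utc.timestamp_opt(seconds, 0)
        .single()
        .ok_or_else(|| format!("version timestamp {} is out of range", seconds))
}

fn main() {
    assert!(parse_version_time(1_600_000_000).is_ok());
    assert!(parse_version_time(i64::MAX).is_err());
}
```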
* Disable clippy warnings about comparing a newly created struct
In Sapling, we compare canonical JubJub bytes with a supplied byte array.
Since we need to perform calculations to get it into canonical form, we
need to create a newly owned object.
* Clippy: use assert rather than assert_eq on a bool
* Allow a listen address without a port in the config (see the sketch below)
* update comments
* remove unused alias
* use Network::default_port
* Move tests and use toml instead of json
* change error message
* Make match more readable
Co-authored-by: teor <teor@riseup.net>
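
A sketch of the resulting listen address handling (parsing simplified; the port numbers are examples):

```rust
use std::net::{IpAddr, SocketAddr};

/// Sketch only: accept a listen address with or without a port, and
/// fall back to the network's default port when the port is omitted.
fn parse_listen_addr(value: &str, default_port: u16) -> Result<SocketAddr, std::net::AddrParseError> {
    if let Ok(addr) = value.parse::<SocketAddr>() {
        return Ok(addr);
    }
    // No port given: parse as a bare IP and add the default port.
    let ip: IpAddr = value.parse()?;
    Ok(SocketAddr::new(ip, default_port))
}

fn main() {
    // e.g. with a Mainnet default port of 8233:
    assert_eq!(
        parse_listen_addr("0.0.0.0", 8233).unwrap(),
        "0.0.0.0:8233".parse().unwrap()
    );
    assert_eq!(
        parse_listen_addr("127.0.0.1:18233", 8233).unwrap(),
        "127.0.0.1:18233".parse().unwrap()
    );
}
```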
* Add functions for serializing and deserializing split arrays
In Transaction::V5, Zcash splits some types into multiple arrays, with a
single prefix count before the first array.
Add utility functions for serializing and deserializing the subsequent
arrays, with a parameter for the original array's length.
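
A simplified sketch of the pattern (byte items and a one-byte count, rather than Zebra's real types and compactsize helpers):

```rust
/// Sketch only: the first array carries the count prefix; later arrays
/// are written and read with that "external" count and no prefix of
/// their own.
fn serialize_with_count(items: &[u8], out: &mut Vec<u8>) {
    out.push(items.len() as u8); // simplified one-byte count prefix
    out.extend_from_slice(items);
}

fn serialize_external_count(items: &[u8], out: &mut Vec<u8>) {
    // No count prefix: readers reuse the first array's count.
    out.extend_from_slice(items);
}

fn deserialize_external_count(count: usize, input: &[u8]) -> (&[u8], &[u8]) {
    // Read exactly `count` items, returning the rest of the input.
    input.split_at(count)
}

fn main() {
    let mut out = Vec::new();
    serialize_with_count(&[1, 2, 3], &mut out);     // count + first array
    serialize_external_count(&[4, 5, 6], &mut out); // second array, no count
    assert_eq!(out, vec![3, 1, 2, 3, 4, 5, 6]);

    let count = out[0] as usize;
    let (first, rest) = deserialize_external_count(count, &out[1..]);
    let (second, _) = deserialize_external_count(count, rest);
    assert_eq!((first, second), (&[1u8, 2, 3][..], &[4u8, 5, 6][..]));
}
```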
* Use zcash_deserialize_bytes_external_count in zebra-network
* Move some preallocate proptests to their own file
And fix the test module structure so it is consistent with the rest of
zebra-chain.
* Add a convenience alias zcash_serialize_external_count
* Explain why u64::MAX items will never be reached
Zebra avoids having a majority of addresses from a single peer by asking
3 peers for new addresses.
Also update a bunch of security comments and related documentation.
* Make proptest dependencies consistent between chain and network
* Implement Arbitrary for InventoryHash and use it in tests
* Impl Arbitrary for MetaAddr and use it in tests
Also test some extreme times in MetaAddr sanitization.
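
A self-contained sketch of the pattern with a toy type (not Zebra's `MetaAddr` or its real sanitization rules):

```rust
use proptest::prelude::*;

// Toy stand-in for a meta address: only the field the sketch needs.
#[derive(Clone, Debug, PartialEq)]
struct ToyAddr {
    last_seen_seconds: i64,
}

impl Arbitrary for ToyAddr {
    type Parameters = ();
    type Strategy = proptest::strategy::BoxedStrategy<Self>;

    fn arbitrary_with(_args: Self::Parameters) -> Self::Strategy {
        // Generate the full i64 range, so extreme times are covered.
        any::<i64>()
            .prop_map(|last_seen_seconds| ToyAddr { last_seen_seconds })
            .boxed()
    }
}

// Toy sanitization: clamp times into the u32 wire-format range.
fn sanitize(addr: &ToyAddr) -> ToyAddr {
    ToyAddr {
        last_seen_seconds: addr.last_seen_seconds.clamp(0, u32::MAX as i64),
    }
}

proptest! {
    #[test]
    fn sanitized_times_fit_the_wire_format(addr in any::<ToyAddr>()) {
        let sanitized = sanitize(&addr);
        prop_assert!(u32::try_from(sanitized.last_seen_seconds).is_ok());
    }
}
```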