Co-authored-by: Janito Vaqueiro Ferreira Filho <janito.vff@gmail.com>
Dimitris Apostolou 2021-11-12 21:30:22 +02:00 committed by GitHub
parent d321e8f0cf
commit afb8b3d477
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
33 changed files with 48 additions and 48 deletions


@@ -51,7 +51,7 @@ spend.
This means that script verification requires access to data about previous
UTXOs, in order to determine the conditions under which those UTXOs can be
-spent. In Zebra, we aim to run operations asychronously and out-of-order to
+spent. In Zebra, we aim to run operations asynchronously and out-of-order to
the greatest extent possible. For instance, we may begin verification of a
block before all of its ancestors have been verified or even downloaded. So
we need to design a mechanism that allows script verification to declare its
@@ -126,7 +126,7 @@ The request does not resolve until:
- the output is spendable at `height` with `spend_restriction`.
The new `Utxo` type adds a coinbase flag and height to `transparent::Output`s
-that we look up in the state, or get from newly commited blocks:
+that we look up in the state, or get from newly committed blocks:
```rust
enum Response::SpendableUtxo(Utxo)


@@ -447,10 +447,10 @@ Construct a new chain starting with `block`.
### The `QueuedBlocks` type
-The queued blocks type represents the non-finalized blocks that were commited
+The queued blocks type represents the non-finalized blocks that were committed
before their parent blocks were. It is responsible for tracking which blocks
-are queued by their parent so they can be commited immediately after the
-parent is commited. It also tracks blocks by their height so they can be
+are queued by their parent so they can be committed immediately after the
+parent is committed. It also tracks blocks by their height so they can be
discarded if they ever end up below the reorg limit.
`NonFinalizedState` is defined by the following structure and API:
@@ -529,7 +529,7 @@ The state service uses the following entry points:
## Committing non-finalized blocks
-New `non-finalized` blocks are commited as follows:
+New `non-finalized` blocks are committed as follows:
### `pub(super) fn queue_and_commit_non_finalized_blocks(&mut self, new: Arc<Block>) -> tokio::sync::oneshot::Receiver<block::Hash>`
@@ -559,7 +559,7 @@ New `non-finalized` blocks are commited as follows:
5. Else iteratively attempt to process queued blocks by their parent hash
starting with `block.header.previous_block_hash`
-6. While there are recently commited parent hashes to process
+6. While there are recently committed parent hashes to process
- Dequeue all blocks waiting on `parent` with `let queued_children =
self.queued_blocks.dequeue_children(parent);`
- for each queued `block`
@@ -574,8 +574,8 @@ New `non-finalized` blocks are commited as follows:
- Else add the new block to an existing non-finalized chain or new fork
with `self.mem.commit_block(block);`
- Send `Ok(hash)` over the associated channel to indicate the block
-was successfully commited
-- Add `block.hash` to the set of recently commited parent hashes to
+was successfully committed
+- Add `block.hash` to the set of recently committed parent hashes to
process
7. While the length of the non-finalized portion of the best chain is greater


@@ -654,10 +654,10 @@ after dividing by `AveragingWindowTimespan`. But as long as there is no overflow
this is [equivalent to the single truncation of the final result] in the Zcash
specification. However, Zebra should follow the order of operations in `zcashd`,
and use repeated divisions, because that can't overflow. See the relevant
-[comment in the zcashd souce code].
+[comment in the zcashd source code].
[equivalent to the single truncation of the final result]: https://math.stackexchange.com/questions/147771/rewriting-repeated-integer-division-with-multiplication
-[comment in the zcashd souce code]: https://github.com/zcash/zcash/pull/4860/files
+[comment in the zcashd source code]: https://github.com/zcash/zcash/pull/4860/files
## Module Structure
[module-structure]: #module-structure
@@ -748,7 +748,7 @@ would be a security issue.
- Testing
- What related issues do you consider out of scope for this RFC that could be addressed in the future independently of the solution that comes out of this RFC?
-- Monitoring and maintainence
+- Monitoring and maintenance
# Future possibilities
[future-possibilities]: #future-possibilities


@@ -174,7 +174,7 @@ same process) of the normal node operation.
In the case of the client component that needs to do blockchain scanning and
trial decryption, every valid block with non-coinbase transactions will need to
-be checked and its transactions trial-decrypted with registerd incoming viewing
+be checked and its transactions trial-decrypted with registered incoming viewing
keys to see if any notes have been received by the key's owner and if any notes
have already been spent elsewhere.
@@ -318,7 +318,7 @@ Supporting a wallet assumes risk. Effort required to implement wallet functiona
- initial release could support mandatory sweeps, and future releases could support legacy keys
- split `Client` component into subprocess
-- this helps somewhat but the benefit is reduced by our prexisting memory safety, thanks to Rust
+- this helps somewhat but the benefit is reduced by our preexisting memory safety, thanks to Rust
- not meaningful without other isolation (need to restrict `zebrad` from accessing viewing keys on disk, etc)
- could use [cap-std](https://blog.sunfishcode.online/introducing-cap-std/)
to restrict filesystem and network access for zebra-client.


@@ -143,7 +143,7 @@ let peers = address_book.lock().unwrap().clone();
let mut peers = peers.sanitized();
```
-## Avoiding Deadlocks when Aquiring Buffer or Service Readiness
+## Avoiding Deadlocks when Acquiring Buffer or Service Readiness
[readiness-deadlock-avoidance]: #readiness-deadlock-avoidance
To avoid deadlocks, readiness and locks must be acquired in a consistent order.


@@ -39,7 +39,7 @@ a given shielded payment address.
**nullifier set**: The set of unique `Nullifier`s revealed by any `Transaction`s
within a `Block`. `Nullifier`s are enforced to be unique within a valid block chain
-by commiting to previous treestates in `Spend` descriptions, in order to prevent
+by committing to previous treestates in `Spend` descriptions, in order to prevent
double-spends.
**note commitments**: Pedersen commitment to the values consisting a `Note`. One
@@ -174,7 +174,7 @@ To finalize the block, the Sprout and Sapling treestates are the ones resulting
from the last transaction in the block, and determines the Sprout and Sapling
anchors that will be associated with this block as we commit it to our finalized
state. The Sprout and Sapling nullifiers revealed in the block will be merged
-with the exising ones in our finalized state (ie, it should strictly grow over
+with the existing ones in our finalized state (ie, it should strictly grow over
time).
## State Management


@@ -32,7 +32,7 @@
- Calls `ContextualCheckBlockHeader`, defined at: https://github.com/zcash/zcash/blob/ab2b7c0969391d8a57d90d008665da02f3f618e7/src/main.cpp#L3900
- Does checks given a pointer to the previous block
- Check Equihash solution is valid
-- In our code we compute the equihash solution on the block alone, we will need to also do a step to check that its block height is the appopriate N+1 re: the previous block
+- In our code we compute the equihash solution on the block alone, we will need to also do a step to check that its block height is the appropriate N+1 re: the previous block
- Check proof of work
- Check timestamp against prev
- Check future timestamp soft fork rule introduced in v2.1.1-1.


@@ -205,7 +205,7 @@ https://zips.z.cash/zip-0207
https://zips.z.cash/zip-0214
-- `funding_stream(height, newtork) -> Result<Amount<NonNegative>, Error>` - Funding stream portion for this block.
+- `funding_stream(height, network) -> Result<Amount<NonNegative>, Error>` - Funding stream portion for this block.
- `funding_stream_address(height, network) -> Result<FundingStreamAddress, Error>` - Address of the funding stream receiver at this block. The funding streams addresses can be transparent `zebra_chain::transparent:Address::PayToScriptHash` or `zebra_chain::sapling:Address` addresses.
## Consensus rules


@@ -28,7 +28,7 @@ enum Item {
/// Generates an iterator of random [Item]s
///
-/// Each [Item] has a unique [SigningKey], randomly choosen [SigType] variant,
+/// Each [Item] has a unique [SigningKey], randomly chosen [SigType] variant,
/// and signature over the empty message, "".
fn sigs_with_distinct_keys() -> impl Iterator<Item = Item> {
std::iter::repeat_with(|| {


@@ -105,7 +105,7 @@ impl Block {
/// # Consensus rule:
///
/// The nConsensusBranchId field MUST match the consensus branch ID used for
-/// SIGHASH transaction hashes, as specifed in [ZIP-244] ([7.1]).
+/// SIGHASH transaction hashes, as specified in [ZIP-244] ([7.1]).
///
/// [ZIP-244]: https://zips.z.cash/zip-0244
/// [7.1]: https://zips.z.cash/protocol/nu5.pdf#txnencodingandconsensus


@@ -149,7 +149,7 @@ impl LedgerState {
/// Returns a strategy for creating `LedgerState`s with features from
/// `network_upgrade_override`.
///
-/// These featues ignore the actual tip height and network.
+/// These features ignore the actual tip height and network.
pub fn network_upgrade_strategy(
network_upgrade_override: NetworkUpgrade,
transaction_version_override: impl Into<Option<u32>>,


@@ -230,7 +230,7 @@ pub enum CommitmentError {
actual: [u8; 32],
},
-#[error("invalid chain history activation reserved block committment: expected all zeroes, actual: {actual:?}")]
+#[error("invalid chain history activation reserved block commitment: expected all zeroes, actual: {actual:?}")]
InvalidChainHistoryActivationReserved { actual: [u8; 32] },
#[error("invalid chain history root: expected {expected:?}, actual: {actual:?}")]


@@ -791,7 +791,7 @@ impl From<FullViewingKey> for DiversifierKey {
/// that cannot be distinguished (without knowledge of the
/// spending key) from one with a random diversifier...'
///
-/// Derived as specied in section [4.2.3] of the spec, and [ZIP-32].
+/// Derived as specified in section [4.2.3] of the spec, and [ZIP-32].
///
/// [4.2.3]: https://zips.z.cash/protocol/nu5.pdf#orchardkeycomponents
/// [ZIP-32]: https://zips.z.cash/zip-0032#orchard-diversifier-derivation


@@ -43,7 +43,7 @@ fn gen_128_bits<R: RngCore + CryptoRng>(mut rng: R) -> [u64; 4] {
/// API in an async context
///
/// The different enum variants are for the different signature types which use
-/// different Pallas basepoints for computation: SpendAuth and Binding sigantures.
+/// different Pallas basepoints for computation: SpendAuth and Binding signatures.
#[derive(Clone, Debug)]
enum Inner {
/// A RedPallas signature using the SpendAuth generator group element.


@@ -261,7 +261,7 @@ where
///
/// Getting the binding signature validating key from the Spend and Output
/// description value commitments and the balancing value implicitly checks
-/// that the balancing value is consistent with the value transfered in the
+/// that the balancing value is consistent with the value transferred in the
/// Spend and Output descriptions but also proves that the signer knew the
/// randomness used for the Spend and Output value commitments, which
/// prevents replays of Output descriptions.


@@ -123,7 +123,7 @@ impl<P: ZkSnarkProof> ZcashDeserialize for JoinSplit<P> {
/// The size of a joinsplit, excluding the ZkProof
///
/// Excluding the ZkProof, a Joinsplit consists of an 8 byte vpub_old, an 8 byte vpub_new, a 32 byte anchor,
-/// two 32 byte nullifiers, two 32 byte committments, a 32 byte epheremral key, a 32 byte random seed
+/// two 32 byte nullifiers, two 32 byte commitments, a 32 byte ephemeral key, a 32 byte random seed
/// two 32 byte vmacs, and two 601 byte encrypted ciphertexts.
const JOINSPLIT_SIZE_WITHOUT_ZKPROOF: u64 =
8 + 8 + 32 + (32 * 2) + (32 * 2) + 32 + 32 + (32 * 2) + (601 * 2);


@@ -193,7 +193,7 @@ pub fn time_is_valid_at(
/// # Consensus rules:
///
/// - The nConsensusBranchId field MUST match the consensus branch ID used for
-/// SIGHASH transaction hashes, as specifed in [ZIP-244] ([7.1]).
+/// SIGHASH transaction hashes, as specified in [ZIP-244] ([7.1]).
/// - A SHA-256d hash in internal byte order. The merkle root is derived from the
/// hashes of all transactions included in this block, ensuring that none of
/// those transactions can be modified without modifying the header. [7.6]


@@ -601,7 +601,7 @@ where
.expect("the current checkpoint range has continuous Vec<QueuedBlock>s");
assert!(
!qblocks.is_empty(),
-"the current checkpoint range has continous Blocks"
+"the current checkpoint range has continuous Blocks"
);
// Check interim checkpoints
@@ -1007,7 +1007,7 @@ where
result.expect("commit_finalized_block should not panic")
};
if result.is_err() {
-// If there was an error comitting the block, then this verifier
+// If there was an error committing the block, then this verifier
// will be out of sync with the state. In that case, reset
// its progress back to the state tip.
let tip = match state_service


@@ -673,7 +673,7 @@ async fn checkpoint_drop_cancel() -> Result<(), Report> {
// Parse all the blocks
let mut checkpoint_data = Vec::new();
for b in &[
-// Continous blocks are verified
+// Continuous blocks are verified
&zebra_test::vectors::BLOCK_MAINNET_GENESIS_BYTES[..],
&zebra_test::vectors::BLOCK_MAINNET_1_BYTES[..],
// Other blocks can't verify, so they are rejected on drop


@@ -183,7 +183,7 @@ lazy_static! {
/// represented as a network upgrade.
///
/// The minimum protocol version is used to check the protocol versions of our
-/// peers during the initial block download. After the intial block download,
+/// peers during the initial block download. After the initial block download,
/// we use the current block height to select the minimum network protocol
/// version.
///


@@ -98,10 +98,10 @@ impl Handler {
Message::Tx(transaction),
) => {
// assumptions:
-// - the transaction messages are sent in a single continous batch
+// - the transaction messages are sent in a single continuous batch
// - missing transaction hashes are included in a `NotFound` message
if pending_ids.remove(&transaction.id) {
-// we are in the middle of the continous transaction messages
+// we are in the middle of the continuous transaction messages
transactions.push(transaction);
if pending_ids.is_empty() {
Handler::Finished(Ok(Response::Transactions(transactions)))


@@ -14,7 +14,7 @@ use crate::peer_set::set::CancelClientWork;
/// A Future that becomes satisfied when an `S`-typed service is ready.
///
-/// May fail due to cancelation, i.e. if the service is removed from discovery.
+/// May fail due to cancellation, i.e. if the service is removed from discovery.
#[pin_project]
#[derive(Debug)]
pub(super) struct UnreadyService<K, S, Req> {


@@ -245,7 +245,7 @@ impl Codec {
Message::Addr(addrs) => {
assert!(
addrs.len() <= constants::MAX_ADDRS_IN_MESSAGE,
-"unexpectely large Addr message: greater than MAX_ADDRS_IN_MESSAGE addresses"
+"unexpectedly large Addr message: greater than MAX_ADDRS_IN_MESSAGE addresses"
);
// Regardless of the way we received the address,


@@ -166,7 +166,7 @@ impl CachedFfiTransaction {
// point, while the transaction verifier is spawning all of the script verifier
// futures. The service readiness check requires this await between each task
// spawn. Each `script` future needs a copy of the
-// `Arc<CachedFfiTransaction>` so that it can simultaniously verify inputs
+// `Arc<CachedFfiTransaction>` so that it can simultaneously verify inputs
// without cloning the c++ allocated type unnecessarily.
//
// ## Explanation


@@ -304,7 +304,7 @@ impl FinalizedState {
// Check the block commitment. For Nu5-onward, the block hash commits only
// to non-authorizing data (see ZIP-244). This checks the authorizing data
-// commitment, making sure the entire block contents were commited to.
+// commitment, making sure the entire block contents were committed to.
// The test is done here (and not during semantic validation) because it needs
// the history tree root. While it _is_ checked during contextual validation,
// that is not called by the checkpoint verifier, and keeping a history tree there


@@ -34,7 +34,7 @@ use self::chain::Chain;
use super::{check, finalized_state::FinalizedState};
-/// The state of the chains in memory, incuding queued blocks.
+/// The state of the chains in memory, including queued blocks.
#[derive(Debug, Clone)]
pub struct NonFinalizedState {
/// Verified, non-finalized chains, in ascending order.
@@ -419,7 +419,7 @@ impl NonFinalizedState {
.transpose()
})
.expect(
-"commit_block is only called with blocks that are ready to be commited",
+"commit_block is only called with blocks that are ready to be committed",
)?,
)),
}


@@ -383,7 +383,7 @@ impl<T> TestChild<T> {
pub struct TestOutput<T> {
#[allow(dead_code)]
-// this just keeps the test dir around from `TestChild` so it doesnt get
+// this just keeps the test dir around from `TestChild` so it doesn't get
// deleted during `wait_with_output`
dir: Option<T>,
pub cmd: String,


@@ -170,7 +170,7 @@ where
/// An entry point for starting the [`MockServiceBuilder`].
///
-/// This `impl` block exists for ergonomic reasons. The generic type paramaters don't matter,
+/// This `impl` block exists for ergonomic reasons. The generic type parameters don't matter,
/// because they are actually set by [`MockServiceBuilder::finish`].
impl MockService<(), (), ()> {
/// Create a [`MockServiceBuilder`] to help with the creation of a [`MockService`].


@@ -1,6 +1,6 @@
# Zebra Utilities
-This crate contains tools for zebra mantainers.
+This crate contains tools for zebra maintainers.
## Programs


@@ -486,7 +486,7 @@ async fn mempool_transaction_expiration() -> Result<(), crate::BoxError> {
.await
.respond(Response::Nil);
-// Add all the rest of the continous blocks we have to test tx2 will never expire.
+// Add all the rest of the continuous blocks we have to test tx2 will never expire.
let more_blocks: Vec<Arc<Block>> = vec![
zebra_test::vectors::BLOCK_MAINNET_4_BYTES
.zcash_deserialize_into()


@@ -162,7 +162,7 @@ pub enum Response {
/// The state of the mempool.
///
-/// Indicates wether it is enabled or disabled and, if enabled, contains
+/// Indicates whether it is enabled or disabled and, if enabled, contains
/// the necessary data to run it.
#[allow(clippy::large_enum_variant)]
enum ActiveState {


@@ -100,7 +100,7 @@ proptest! {
let awoke = match timeout(EVENT_TIMEOUT, wake_events.acquire()).await {
Ok(permit) => {
-permit.expect("Sempahore closed prematurely").forget();
+permit.expect("Semaphore closed prematurely").forget();
true
}
Err(_) => false,
@@ -127,7 +127,7 @@ proptest! {
wake_events: Arc<Semaphore>,
) -> Result<(), TestCaseError> {
loop {
-update_events.acquire().await.expect("Sempahore closed prematurely").forget();
+update_events.acquire().await.expect("Semaphore closed prematurely").forget();
// The refactor suggested by clippy is harder to read and understand.
#[allow(clippy::question_mark)]


@@ -923,7 +923,7 @@ fn sync_until(
child.expect_stdout_line_matches(stop_regex)?;
-// make sure there is never a mempool if we don't explicity enable it
+// make sure there is never a mempool if we don't explicitly enable it
if enable_mempool_at_height.is_none() {
// if there is no matching line, the `expect_stdout_line_matches` error kills the `zebrad` child.
// the error is delayed until the test timeout, or until the child reaches the stop height and exits.