Commit 027bdc8465 by Henry de Valence, 2019-10-22 19:06:08 -07:00: Rework initial crawler logic.

This splits out the connection handling code into a try_connect closure, which
could be refactored into a Service of its own.

On creation, when we are likely to have very few peers, launch many concurrent
connections to the first few candidates in the initial candidate set, before
continuing to grow the peer set according to demand signals.
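As a rough illustration of the startup behavior this commit describes, here is a minimal Rust sketch, not Zebra's actual code. The names `Candidate`, `Peer`, and `INITIAL_FANOUT` are hypothetical, `try_connect` is written as a free async fn rather than a closure, and the use of tokio plus `futures::stream::FuturesUnordered` is an assumption made for this example.

```rust
// Sketch only: illustrates "dial many initial candidates concurrently,
// then grow the peer set on demand". Names and types are hypothetical.
use futures::stream::{FuturesUnordered, StreamExt};
use std::net::SocketAddr;

// Hypothetical: how many candidates to dial concurrently at startup.
const INITIAL_FANOUT: usize = 8;

#[derive(Debug, Clone)]
struct Candidate {
    addr: SocketAddr,
}

#[derive(Debug)]
struct Peer {
    addr: SocketAddr,
}

// Connection handling pulled out on its own, so it could later be wrapped
// in a tower Service as the commit message suggests. Real code would also
// perform the Zcash network handshake here.
async fn try_connect(candidate: Candidate) -> Result<Peer, std::io::Error> {
    tokio::net::TcpStream::connect(candidate.addr).await?;
    Ok(Peer {
        addr: candidate.addr,
    })
}

#[tokio::main]
async fn main() {
    // Hypothetical initial candidate set (e.g. from DNS seeders or config).
    let initial_candidates: Vec<Candidate> = vec![Candidate {
        addr: "127.0.0.1:8233".parse().unwrap(),
    }];

    // On creation the peer set is empty, so dial the first few candidates
    // concurrently instead of one at a time.
    let mut handshakes: FuturesUnordered<_> = initial_candidates
        .into_iter()
        .take(INITIAL_FANOUT)
        .map(try_connect)
        .collect();

    let mut peers = Vec::new();
    while let Some(result) = handshakes.next().await {
        match result {
            Ok(peer) => {
                println!("connected to initial peer {}", peer.addr);
                peers.push(peer);
            }
            Err(e) => eprintln!("initial connection failed: {}", e),
        }
    }

    // After this initial burst, the crawler would keep growing the peer set
    // in response to demand signals (not shown in this sketch).
    println!("connected to {} initial peers", peers.len());
}
```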
| Path | Last commit | Date |
| --- | --- | --- |
| .github/workflows | Continuous integration (#2) | 2019-09-05 13:08:48 -04:00 |
| design | Update architecture diagram. (#60) | 2019-10-09 17:46:59 -04:00 |
| zebra-chain | Update zebra-chain/Cargo.toml | 2019-10-17 09:33:10 -07:00 |
| zebra-client | Fix authorship, license information. (#55) | 2019-10-08 09:25:59 -07:00 |
| zebra-consensus | Fix authorship, license information. (#55) | 2019-10-08 09:25:59 -07:00 |
| zebra-network | Rework initial crawler logic. | 2019-10-22 19:06:08 -07:00 |
| zebra-rpc | Fix authorship, license information. (#55) | 2019-10-08 09:25:59 -07:00 |
| zebra-script | Fix authorship, license information. (#55) | 2019-10-08 09:25:59 -07:00 |
| zebra-storage | Fix authorship, license information. (#55) | 2019-10-08 09:25:59 -07:00 |
| zebrad | Rework initial crawler logic. | 2019-10-22 19:06:08 -07:00 |
| .gitignore | Create workspace skeleton based on design.md | 2019-08-29 14:46:54 -07:00 |
| .rustfmt.toml | Tracing endpoint (#3) | 2019-09-09 13:05:42 -07:00 |
| Cargo.toml | Beginning of peerset implementation. (#62) | 2019-10-10 18:15:24 -07:00 |
| Dockerfile | Continuous integration (#2) | 2019-09-05 13:08:48 -04:00 |
| cloudbuild.yaml | Continuous integration (#2) | 2019-09-05 13:08:48 -04:00 |
| rust-toolchain | Tracing endpoint (#3) | 2019-09-09 13:05:42 -07:00 |