Just prior to the Interchain Conversations, a gregarious gang of cross-blockchain interoperation aficionados from Tendermint Inc, the Interchain Foundation, and Agoric gathered together in Berlin for an intense two-day work session focused on the IBC protocol. The session consisted of specification review, multi-color protocol whiteboarding, and vigorous design debates, and resulted in many useful conclusions as to both the desired form of IBC v1 and compelling directions in which to take the protocol in the future, to which I shall attempt to do justice in summary here.

Day One
Day one started out with a recap of the protocol architecture designed thus far, as outlined in the IBC architecture document. We discussed the host state machine requirements (ICS 23, 24), client (ICS 2), connection (ICS 3), and channel (ICS 4) abstractions, relayer requirements (ICS 18), and higher-level module-facing interfaces (ICS 25 & 26), and drew out the whole dataflow system on the whiteboard.

[Image: IBC protocol architecture]
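To make the layering concrete, here is a deliberately simplified sketch of how the abstractions stack: channels sit on connections, which sit on clients. The struct names and fields are illustrative inventions for this post, not the actual ICS 2/3/4 definitions.

```go
package main

import "fmt"

// Client (ICS 2): tracks the consensus state of a counterparty chain
// and can verify commitment proofs against it. Fields are hypothetical.
type Client struct {
	ID           string
	LatestHeight uint64
}

// Connection (ICS 3): a pairing of two clients, one on each chain.
type Connection struct {
	ID       string
	ClientID string // client verifying the counterparty chain
}

// Channel (ICS 4): a module-to-module packet conduit over a connection.
type Channel struct {
	ID           string
	ConnectionID string
	PortID       string // module binding on this chain
}

// NewStack wires a client, connection, and channel together, mirroring
// the dependency order drawn on the whiteboard.
func NewStack(clientID, connID, chanID, portID string) (Client, Connection, Channel) {
	c := Client{ID: clientID}
	conn := Connection{ID: connID, ClientID: c.ID}
	ch := Channel{ID: chanID, ConnectionID: conn.ID, PortID: portID}
	return c, conn, ch
}

func main() {
	c, conn, ch := NewStack("client-0", "connection-0", "channel-0", "transfer")
	fmt.Println(c.ID, conn.ClientID, ch.ConnectionID)
}
```

The point of the layering is separation of concerns: clients handle consensus verification, connections handle authentication between chains, and channels handle packet delivery semantics for individual modules.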
Once we had a collective understanding of how each individual IBC component worked, we examined what would happen if one of them failed. We categorized failures into five distinct groups, and considered how best to architect consensus equivocation detection, packet acknowledgements & timeouts, and connection/channel closure & recovery into the protocol in order to provide applications with clear ways of handling different failure cases.

[Image: Categories of failure]

[Image: Handling packet & channel failures]
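The acknowledgement/timeout mechanism can be sketched as a small state decision: a packet is either delivered (a receipt was proven before its timeout height), timed out (the timeout height passed with no receipt, so the sender can prove non-receipt and safely release escrowed state), or still pending. This is a hypothetical illustration of the idea; the field names and `Resolve` function are not from the ICS 4 specification.

```go
package main

import "fmt"

// Packet carries a timeout height on the destination chain, after which
// it can no longer be received. Fields are illustrative.
type Packet struct {
	Sequence      uint64
	TimeoutHeight uint64
}

type Outcome int

const (
	Pending   Outcome = iota // no receipt yet, timeout not reached
	Delivered                // receipt proven before the timeout height
	TimedOut                 // timeout height reached with no receipt
)

// Resolve decides the fate of a packet given the destination chain's
// current height and whether a receipt was recorded before the timeout.
func Resolve(p Packet, destHeight uint64, received bool) Outcome {
	switch {
	case received:
		return Delivered
	case destHeight >= p.TimeoutHeight:
		// The sending chain can accept a proof-of-absence here and
		// refund/unwind whatever state the packet had escrowed.
		return TimedOut
	default:
		return Pending
	}
}

func main() {
	p := Packet{Sequence: 1, TimeoutHeight: 100}
	fmt.Println(Resolve(p, 50, true), Resolve(p, 100, false), Resolve(p, 50, false))
}
```

Giving applications exactly these three terminal/non-terminal outcomes is what lets them handle failure cases cleanly: every sent packet eventually resolves to delivered-with-acknowledgement or timed-out, never silently lost.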
The day concluded with a discussion of two advanced inter-blockchain communication feature-sets: multi-hop routing and directed-acyclic-graph cross-chain partial packet ordering. Three kinds of multi-hop routing, with various trust requirements and storage / execution / latency cost tradeoffs, were outlined: application-layer multihop, validity-proxy multihop, and routing-layer multihop.
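As one illustration of the routing-layer flavor, a packet could carry its remaining route and have each intermediate chain pop its own hop and forward the rest. This is a toy sketch of the concept only; the `RoutedPacket` type and `Forward` function are invented for this post and do not reflect a settled protocol design.

```go
package main

import "fmt"

// RoutedPacket carries the chains still to traverse, destination last.
// Purely illustrative; not an ICS data structure.
type RoutedPacket struct {
	Route []string
	Data  []byte
}

// Forward is called on each chain in turn. It returns the next chain to
// relay to and the packet with that hop consumed, or ok=false once the
// packet has arrived at its destination.
func Forward(p RoutedPacket) (next string, rest RoutedPacket, ok bool) {
	if len(p.Route) == 0 {
		return "", p, false // arrived: deliver Data to the local module
	}
	return p.Route[0], RoutedPacket{Route: p.Route[1:], Data: p.Data}, true
}

func main() {
	p := RoutedPacket{Route: []string{"hub", "zone-b"}, Data: []byte("hello")}
	for {
		next, rest, ok := Forward(p)
		if !ok {
			fmt.Println("delivered:", string(p.Data))
			return
		}
		fmt.Println("relay to:", next)
		p = rest
	}
}
```

The tradeoffs mentioned above show up even in this toy: routing-layer multihop keeps intermediate state small (the route lives in the packet), at the cost of every hop executing forwarding logic and the end-to-end latency compounding per hop.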
We decided to prioritize shipping...