Harmony is a blockchain designed to provide fast, secure, and scalable infrastructure. This article covers the current state of Harmony's architecture and the plan for where it is headed. We will also discuss solutions to existing and potential hurdles for the network.
Bootnode
Bootnodes are responsible for bootstrapping new nodes into the network. When a new node joins, a bootnode provides it with a set of existing peers to sync with, giving the new node the data it needs to participate in the network.
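To make this concrete, here is a minimal sketch in Go that dials a bootnode over libp2p, the peer-to-peer stack Harmony's network layer is built on. It takes the bootnode multiaddress as a command-line argument and only performs the initial connection; it is an illustration of the first step of bootstrapping, not Harmony's actual node code.

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/multiformats/go-multiaddr"
)

// Usage: dial <bootnode-multiaddr>
// e.g. a multiaddress of the form /ip4/<ip>/tcp/<port>/p2p/<peer-id>.
func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: dial <bootnode-multiaddr>")
	}

	// Start a bare libp2p host with default options.
	host, err := libp2p.New()
	if err != nil {
		log.Fatal(err)
	}
	defer host.Close()

	// Parse the bootnode multiaddress into a peer ID plus dial addresses.
	addr, err := multiaddr.NewMultiaddr(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	info, err := peer.AddrInfoFromP2pAddr(addr)
	if err != nil {
		log.Fatal(err)
	}

	// Connect to the bootnode with a timeout.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := host.Connect(ctx, *info); err != nil {
		log.Fatal(err)
	}
	log.Printf("connected to bootnode %s", info.ID)
}
```

In the real network, the newly connected node would then ask for a set of peers and begin syncing from them.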
Harmony currently hosts 5 internal bootnodes for Mainnet. They are relatively cheap to run compared to our other services, as they are lightweight instances. Despite their simplicity, bootnodes cannot be terminated in our current state: they are a critical service, and without them new nodes would not be able to join the Harmony network. Harmony plans to fully externalize all services; once a detailed plan is set in stone, we will proceed with retiring the bootnodes.
Full Node
Full nodes are the entry point to the Harmony network. They serve RPC requests, such as querying blockchain data or submitting transactions. Last year, Harmony transitioned from the original RPC nodes to elastic RPC clusters to scale with RPC demand. That transition was reasonable at the time, as Harmony was receiving far more requests in previous years. Now that traffic has decreased, we have transitioned back to the legacy RPC nodes.
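For a sense of what these requests look like, the snippet below queries the latest block number over JSON-RPC. We use the public endpoint https://api.harmony.one for illustration; any full node's RPC port serves the same interface.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// Query the latest block number from a Harmony RPC endpoint, the
// simplest kind of request a full node serves.
func main() {
	payload, _ := json.Marshal(map[string]interface{}{
		"jsonrpc": "2.0",
		"id":      1,
		"method":  "eth_blockNumber",
		"params":  []interface{}{},
	})

	resp, err := http.Post("https://api.harmony.one", "application/json", bytes.NewReader(payload))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var out struct {
		Result string `json:"result"` // hex-encoded block number
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	fmt.Println("latest block:", out.Result)
}
```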
The motive for this transition is to avoid wasted resources and unnecessary costs. We have fully terminated our ERPC clusters and set up the nodes required to properly handle RPC traffic. Harmony currently has 3 full nodes serving requests, and they are well utilized: average peak CPU usage over the past week was above 80% on each node.
Harmony will continuously monitor the health and traffic of the full nodes. If traffic grows or shrinks, the team will scale the nodes accordingly, keeping the network robust without wasting capacity.
Internal Validator Node
The leader node is responsible for constructing new blocks and broadcasting them to validators. Validator nodes verify the validity of blocks to ensure that blocks from malicious leaders do not get appended to the chain. Currently, leaders are chosen from the 20 internal validators hosted by Harmony (5 validators per shard). Each validator holds 5 BLS keys, for a total of 25 BLS keys per shard. These keys are the actual identities that get selected as the leader.
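The sketch below illustrates this point: the 25 BLS keys of a shard, not the 5 machines holding them, are the identities that leadership rotates over. The round-robin selection here is only a toy, not Harmony's actual election logic.

```go
package main

import "fmt"

// Toy illustration: leadership is assigned to BLS keys, not hosts.
// This simple round-robin is NOT Harmony's real election algorithm.
func main() {
	// 5 internal validators per shard, 5 BLS keys each = 25 keys per shard.
	var keys []string
	for v := 0; v < 5; v++ {
		for k := 0; k < 5; k++ {
			keys = append(keys, fmt.Sprintf("bls-v%d-k%d", v, k))
		}
	}

	// Rotate leadership across the keys by view number.
	for view := 0; view < 3; view++ {
		fmt.Printf("view %d -> leader %s\n", view, keys[view%len(keys)])
	}
}
```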
Harmony’s plan is to fully externalize the voting slots as our end goal is a fully decentralized network. The plan is as follows:
| Voting Power (Slots) | Internal | External |
| --- | --- | --- |
| Current | 49% (100) | 51% (900) |
| After 1st governance vote | 26% (48) | 74% (952) |
| After 2nd governance vote | 0% (0) | 100% (1000) |
The governance vote for the first shutdown of internal nodes, HIP-29, is currently underway. If it is approved, 2 nodes will be terminated per shard, leaving 3 nodes per shard. Each remaining node will also give up one slot, leaving 4 slots per node and 12 slots per shard. Once the change is implemented, external voting power will increase as shown in the table above.
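As a sanity check, the slot arithmetic behind the table works out as follows:

```go
package main

import "fmt"

func main() {
	const shards = 4

	// Today: 5 internal validators per shard, 5 BLS keys (slots) each.
	current := shards * 5 * 5
	// After HIP-29: 3 validators per shard, 4 slots each.
	afterHIP29 := shards * 3 * 4

	fmt.Println(current, afterHIP29) // 100 48, matching the table above
}
```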
One concern we need to address is increased centralization. Growing the external voting power may also concentrate influence in the network: a small number of participants with large stakes could have outsized sway over the network's decisions. Along with this, there are other concerns Harmony has to confront and solve.
Full decentralization also means an increased dependency on the external validator community. Harmony needs a swift mechanism for working with the community to prevent network failure; without prompt action from external validators, the network may sustain more prolonged outages.
The successive governance votes will give Harmony the opportunity to cooperate with the community on finding a robust solution for a decentralized, secure network.
DNS Node
DNS nodes are responsible for syncing other nodes. As explained in the previous section, bootnodes bootstrap new nodes by pointing them at existing nodes; those existing nodes are the DNS nodes. Together with bootnodes and snapshot nodes, they make syncing a new node comparatively fast. However, maintaining the DNS nodes is very costly due to their high bandwidth and large storage requirements.
Harmony will continue to maintain the DNS nodes, as their functionality is critical to our network. However, we plan to provide that functionality in a different form. The protocol team is currently developing a feature called state sync, which will let full nodes on the network efficiently synchronize their state to new nodes. Eventually, full nodes running state sync will take over the current role of the DNS nodes.
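Since state sync is still under development, the sketch below only illustrates the general idea with hypothetical types: a new node fetches chunks of recent state from any full node and verifies them against a trusted state root, instead of replaying history from a DNS node. None of these names reflect Harmony's actual implementation.

```go
package main

import "fmt"

// StateChunk is one verifiable piece of the state trie (hypothetical).
type StateChunk struct {
	Root  string   // state root this chunk belongs to
	Keys  []string // account keys contained in the chunk
	Proof []byte   // Merkle proof tying the chunk to Root
}

// FullNode is the peer-facing surface a syncing node relies on (hypothetical).
type FullNode interface {
	LatestStateRoot() string                             // root of a recent finalized block
	GetStateChunk(root string, i int) (StateChunk, bool) // i-th chunk under root
}

// syncState pulls chunks from a peer until the full state is assembled.
func syncState(peer FullNode) []StateChunk {
	root := peer.LatestStateRoot()
	var chunks []StateChunk
	for i := 0; ; i++ {
		chunk, ok := peer.GetStateChunk(root, i)
		if !ok {
			break
		}
		// A real implementation would verify chunk.Proof against root here.
		chunks = append(chunks, chunk)
	}
	return chunks
}

// mockNode is an in-memory stand-in for a real peer.
type mockNode struct{ chunks []StateChunk }

func (m mockNode) LatestStateRoot() string { return "0xroot" }
func (m mockNode) GetStateChunk(root string, i int) (StateChunk, bool) {
	if i >= len(m.chunks) {
		return StateChunk{}, false
	}
	return m.chunks[i], true
}

func main() {
	peer := mockNode{chunks: []StateChunk{{Root: "0xroot"}}}
	fmt.Println("synced", len(syncState(peer)), "chunk(s)")
}
```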
The protocol team plans to launch state sync during Q1 of 2023. Once the feature is live, we will proceed with terminating the DNS nodes, which will simplify our architecture and save infrastructure costs.
Snapshot Node
Snapshot nodes store backups of the blockchain database and make them available in decentralized storage. The service is used alongside the DNS nodes to speed up syncing new nodes: a fresh node can restore from a recent snapshot instead of replaying the entire chain.
Harmony operates 2 snapshot nodes for shard 0 and 1 node for each of the other shards, for a total of 5 nodes. Once state sync is fully functional, full nodes will work alongside snapshot nodes to efficiently exchange state data.
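One operational detail worth noting: before restoring from a snapshot, an operator would typically verify the downloaded archive against a published checksum. The sketch below shows that step; the file name is a placeholder, not a real Harmony artifact.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

// Compute the SHA-256 of a downloaded snapshot archive so it can be
// compared against a published checksum before restoring.
func main() {
	f, err := os.Open("harmony_s0_snapshot.tar.gz") // hypothetical file name
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	fmt.Println("sha256:", hex.EncodeToString(h.Sum(nil)))
}
```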
Archival Node
Archival nodes contain the entire history of the Harmony blockchain. Currently, a single archival node holds 22TB of data, and syncing a fresh archival node takes approximately 2 to 3 weeks (!). Due to this immense size, it is crucial for us to maintain a clean copy.
Harmony currently maintains 3 archival nodes; the redundancy prevents a single point of failure. At Harmony's current scale, 3 archival nodes are sufficient to handle traffic and keep the network robust. If demand grows, Harmony will scale the archival nodes to meet the network's needs.
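The kind of request that sets archival nodes apart is reading state at an old block, which pruned full nodes cannot serve. The snippet below illustrates such a query; the address and block number are placeholders.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// Read an account balance at a historical block over JSON-RPC. Serving
// deep-historical state like this generally requires an archival node.
func main() {
	payload, _ := json.Marshal(map[string]interface{}{
		"jsonrpc": "2.0",
		"id":      1,
		"method":  "eth_getBalance",
		// Placeholder address, queried at block 0x1000 (historical state).
		"params": []string{"0x0000000000000000000000000000000000000000", "0x1000"},
	})

	resp, err := http.Post("https://api.harmony.one", "application/json", bytes.NewReader(payload))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var out map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out["result"]) // hex-encoded balance, or an error if the state is pruned
}
```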