💸

Feb 2023 Cloud Cost Status

Feb 01 (Wed)

Redundancy for Archival Nodes

As of today, we have 4 full archival nodes.

| Label | Provider | Location | Monthly Cost |
| --- | --- | --- | --- |
| arch-s0-01 | Latitude | LA2 | $654 |
| arch-s0-02 | Latitude | LON | $665* |
| arch-node-s0-hel-01 | Hetzner | HEL1-DC1 | $590 |
| arch-node-s0-hel-02 | Hetzner | HEL1-DC6 | $584 |

Total number of nodes: 4
Total monthly cost: $2,493
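
A quick sanity check on the total, using the per-node figures from the table above (the asterisked arch-s0-02 price is still under negotiation):

```python
# Per-node monthly costs (USD) from the table above.
# arch-s0-02 is marked with * because its price is still being negotiated.
node_costs = {
    "arch-s0-01": 654,
    "arch-s0-02": 665,
    "arch-node-s0-hel-01": 590,
    "arch-node-s0-hel-02": 584,
}

total = sum(node_costs.values())
print(f"{len(node_costs)} archival nodes, ${total}/month")  # 4 archival nodes, $2493/month
```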

Proof of Price

Latitude

arch-s0-01

image

arch-s0-02

  • Latitude had server problems, so this price is currently being negotiated by Soph.
image

Hetzner

arch-node-s0-hel-01

image

arch-node-s0-hel-02

image

The main reason for archival node redundancy is to avoid a single point of failure in the network. When a node goes down, re-syncing it from a healthy node takes at least 2 weeks, and the node acting as the sync source cannot serve traffic during that time. So if we had only 3 nodes and 1 went down, only 1 node would be serving traffic for at least 2 weeks.
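
A minimal sketch of that availability argument, assuming each failed node is re-synced from exactly one healthy peer and that the peer serving as the sync source is out of rotation for the duration:

```python
def nodes_serving_traffic(total_nodes: int, failed_nodes: int = 1) -> int:
    """Archival nodes left serving traffic while a failure is being recovered.

    Assumes each failed node is re-synced from one healthy peer (a ~2-week
    process) and that the sync-source peer cannot serve traffic meanwhile.
    """
    healthy = total_nodes - failed_nodes
    sync_sources = min(failed_nodes, healthy)  # one healthy peer per re-sync
    return max(healthy - sync_sources, 0)

print(nodes_serving_traffic(3))  # 1 -> a single node carries all traffic for ~2 weeks
print(nodes_serving_traffic(4))  # 2 -> redundancy keeps two nodes serving
```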

Note: The $2500 cost for 4 archival nodes is included as part of the $15k projection for July 2023.

ERPC Termination

In order to fully terminate the ERPC nodes, we need to fulfill two criteria:

  1. Full RPC nodes
  2. Archival RPC nodes

Archival Nodes

In January 2023, we migrated 2 of our archival nodes from AWS to Latitude in order to save costs. We initially synced arch-s0-01 (the LA archival) and were planning to sync arch-s0-02 (the London archival) in late January. However, Latitude had server problems and we ended up having to re-sync both nodes (a full sync takes at least 2 weeks per node). With 2 archivals down, we had to use 1 Hetzner node to serve customers and another Hetzner node to fully sync an archival on Latitude.

Currently, our archival node status looks as follows:

  • 1 Hetzner node serving traffic
  • 1 Hetzner node acting as the sync source for a Latitude node
  • 1 Latitude node serving traffic
  • 1 Latitude node being synced from Hetzner

Since we started syncing the last Latitude node on Jan 30, we expect the last archival node to be up and running around mid-February.
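
The mid-February estimate is simple date arithmetic on the 2-week minimum sync time mentioned above:

```python
from datetime import date, timedelta

sync_start = date(2023, 1, 30)          # last Latitude archival started syncing
min_sync_duration = timedelta(weeks=2)  # a full archival sync takes at least 2 weeks

print(sync_start + min_sync_duration)   # 2023-02-13 -> around mid-February at the earliest
```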

Full RPC Nodes

Currently, the DevOps team is all hands on deck spinning up the full RPC nodes; we already have 3 RPC nodes running in AWS (Oregon).

image

Our initial goal was to redirect traffic to them and test them as soon as possible, to fully ensure that 3 nodes can handle the amount of traffic we are currently receiving. However, we ran into several problems:

  • The Latitude problems previously mentioned in the “Archival Nodes” section
  • Node updates for the upcoming hard fork

Many of our team members are currently working with partners and the community to ensure their nodes are up to date for the upcoming hard fork. The hard fork is scheduled for Feb 7 (1 week away at the time of writing), so all hands are on deck for upgrade support. We believe this is the priority (explained in the next section), and we are shifting our main focus to the hard fork and the syncing of the archival nodes. We will return our focus to the full RPC nodes in the remaining weeks of February and plan to shut down all ERPC nodes by Feb 24.

Reserved Instances for AWS Dev

As of today (Feb 1), our monthly cost for AWS Dev looks as follows:

image

We paid approximately $20k on the very first day of February. This comes from “Elastic Compute Cloud” and “Savings Plans for AWS Compute usage”, both of which are savings plans we bought up front last year (in June 2022 and April 2022, respectively). Until April 2023 we will have a fixed cost of approximately $20k per month, and from May to June 2023 a fixed cost of approximately $9k per month.
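
A rough sketch of the remaining committed spend implied by that schedule, assuming the ~$20k/month figure holds for February through April and ~$9k/month for May and June:

```python
# Approximate fixed AWS Dev commitment per month (USD), per the schedule above.
committed = {
    "Feb 2023": 20_000,
    "Mar 2023": 20_000,
    "Apr 2023": 20_000,
    "May 2023": 9_000,
    "Jun 2023": 9_000,
}

print(f"Remaining committed spend through June 2023: ~${sum(committed.values()):,}")
# Remaining committed spend through June 2023: ~$78,000
```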

Despite having already paid that $20k, our estimated cost for Feb 2023 is $30k. With the ERPCs being terminated by the end of February, we will see a larger cost cut in March 2023.

image

Feb 03 (Fri)

ERPC Termination and Resale of the Reserved Instances

Harmony’s goal is to fully terminate the ERPC nodes by Feb 24, 2023. Once the ERPC nodes are fully terminated, our goal is to sell the Reserved Instances to save costs even further. If you are wondering why the ERPC termination cannot happen right away, please refer to the “ERPC Termination” section above (Feb 01).

However, we do have to note that fully terminating the ERPC nodes will not cut our cost to $0. We still have to account for the full RPC nodes and archival nodes that replace them. We are planning to reduce redundancy, but we cannot simply get rid of all our systems all at once.

Another point to note is that there were major events in January. The team was occupied with the migrations from AWS to other cloud providers and, unfortunately, the mainnet incident on shards 1 and 3. We are now working on the upcoming v2023.1.0 hard fork upgrade as well as testing and redirecting traffic to the legacy full RPC nodes.

These are some of the reasons why we cannot get rid of all our systems abruptly. However, we are working to reduce our machines and costs as quickly and as robustly as possible without disrupting the network. We will provide more updates on the ERPC nodes.

What is a Compute Savings Plan?

Last April (2022), the Harmony team purchased Savings Plans in order to reduce costs. Considering the traffic and requests Harmony was handling during last year’s bull run, it made sense to purchase the plan up front. In fact, we were able to fully utilize the Savings Plan and save money.

image

However, if we look solely at the month of January, we are not saving at all. In fact, we are losing money because the instances are not being fully utilized.

image

It may seem like we are wasting instances and money by not spinning up instances in AWS. However, AWS charges for more than just instances: data transfer (I/O), storage, and many other features all carry extra fees. Combined, these extra charges alone are still costlier than the total cost of hosting the same instances with other providers (AWS’s service fees without instance cost > another provider’s total cost). Since the Savings Plan is a fixed cost until April, we cannot do anything about that part. What we can do is migrate services away from AWS to cut costs and resell our Reserved Instances.
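
To make that comparison concrete, here is a minimal sketch with purely illustrative placeholder numbers (not actual invoice figures); the point is only that AWS’s non-instance charges alone can exceed another provider’s all-in price.

```python
# Purely illustrative placeholder figures (USD/month), NOT actual invoice numbers.
aws_charges = {
    "instances (covered by the fixed Savings Plan)": 20_000,
    "data transfer (I/O)": 6_000,
    "storage": 3_000,
    "other services": 1_000,
}
other_provider_all_in = 2_500  # e.g. bare-metal hosting with everything included

aws_non_instance = sum(
    cost for item, cost in aws_charges.items() if "instances" not in item
)
print(f"AWS fees excluding instances: ${aws_non_instance:,}")       # $10,000
print(f"Other provider, all-in:       ${other_provider_all_in:,}")  # $2,500
# The claim in the text: aws_non_instance > other_provider_all_in, which is
# why migrating services off AWS still cuts cost even while the Savings Plan
# sits partially unused.
```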

Storj and Google Cloud Platform

  • Storage space for mainnet snapshots (migrated from S3)
  • Does Storj contain any archival data? No; the 22 TB of archival data is far too expensive to store.
    • Just today (Feb 02), OKX reached out wanting to build a full explorer node:
      • Syncing from scratch is very slow
      • We store all the snapshot files in Storj
      • Anyone (partners, validators, and internal teams) can download a snapshot and spin up a node (see the sketch after this list)
    • With a snapshot: 2-3 days (1 day to sync & 1 day to update); without a snapshot: ~2 months
    • Snapshots shouldn’t be more than 1 week old
    • A full node on shard 0 is 2.2 TB (mainnet.min)
  • GCP is our next target once all the big costs have been migrated or terminated
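
As a hedged sketch of how a partner or validator might pull a mainnet snapshot from Storj’s S3-compatible gateway: the bucket, object key, and credentials below are placeholders for illustration, not the actual published locations.

```python
# Sketch only: the bucket, key, and credentials are placeholders,
# not Harmony's actual published snapshot locations.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://gateway.storjshare.io",  # Storj S3-compatible gateway
    aws_access_key_id="<access-key>",
    aws_secret_access_key="<secret-key>",
)

# Download the latest shard 0 snapshot (~2.2 TB for mainnet.min), then start
# the node from it instead of syncing from genesis (2-3 days vs ~2 months).
s3.download_file(
    Bucket="mainnet-snapshots",           # placeholder bucket name
    Key="shard0/mainnet.min.latest.tar",  # placeholder object key
    Filename="mainnet.min.latest.tar",
)
```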

Binance

  • Why does the Binance node cost $1,000 per month?
    • It is used by Binance for wallet transactions
      • Withdrawing and depositing ONE
    • It holds 4 TB of data and is not covered by a Compute Savings Plan or a Reserved Instance, hence the $1,000
  • What is our projection for the Binance node?
    • As of now, the plan is to migrate it out of AWS to cut the cost from $1,000 to ~$100

Feb 06 (Mon)