How to start a local Sui network with multiple nodes and validators on one machine

Hi all,
Sorry for the dumb question. I've been stuck on this for quite some time… how can I start a local Sui network, with multiple nodes/validators, all on the same machine? I know you can start a local Sui network using `sui-test-validator`, but that only has one node and one validator…

What I've done so far is generally follow this guide: sui/blob/main/crates/sui/genesis.md

  • Run `sui genesis-ceremony init` on my machine.
  • Inside each Docker container, install the sui binary and generate validator, account, worker, and network keys using `sui keytool generate`. Then run `sui genesis-ceremony add-validator` to add the validator info.
  • Back on the physical machine, sign using the previously generated validator keys in the common workspace (the `sui genesis-ceremony sign-and-verify` command doesn't work inside containers for some reason) and finalize the ceremony. In the end I got a genesis.blob, and I copied this along with `fullnode.yaml` and `validator.yaml` (using the previously generated keys as the protocol, network, and worker keys). The overall flow looked roughly like the sketch below.
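To make the steps concrete, here's the rough sequence I followed (a sketch only: the key-scheme argument and exact subcommand flags are my assumptions and vary by Sui version, so check genesis.md and each command's `--help`):

```bash
# On the host: initialize the shared ceremony workspace
sui genesis-ceremony init

# Inside each container: generate the key pairs
# (which scheme each of the validator/account/network/worker keys
#  needs is an assumption here; genesis.md spells it out)
sui keytool generate ed25519

# Inside each container: register the validator in the shared workspace,
# passing the name/key/network-address flags your version expects
sui genesis-ceremony add-validator

# Back on the host: sign with each validator key, then finalize,
# which produces genesis.blob
# (command name as I used it above; some versions name this differently)
sui genesis-ceremony sign-and-verify
sui genesis-ceremony finalize
```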

In the end, I was able to start sui-node inside the containers and send JSON-RPC requests to port 9000 of the fullnode, but here's where I got stuck:

  • How can I start a local faucet to get some test coins? I tried running `cargo run --release --bin sui-faucet -- --write-ahead-log /tmp/sui_faucet.wal` inside a Docker container, but it's missing the client.yaml file; if I run it on the physical machine I get this panic: `called Result::unwrap() on an Err value: Wallet("Networking or low-level protocol error: HTTP error: channel closed")` panic.file="crates/sui-faucet/src/main.rs" panic.line=81 panic.column=10
  • Port-forwarding confusion: since both the fullnode and the validator need ports 8080 and 8084, and nodes/validators/faucet need port 9184 for metrics, there has to be some port forwarding from the Docker containers to the physical machine. But if I just forward port 8080 to a random host port (see the sketch below), wouldn't the client on the physical machine stop working properly, since it sends traffic to port 8080 instead of the forwarded port?
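Concretely, this is the kind of mapping I mean (the image name, container name, and config path are placeholders for whatever your containers use):

```bash
# Forward the container's required ports to arbitrary free host ports
# (image name, container name, and config path are placeholders)
docker run -d --name sui-validator-0 \
  -p 18080:8080 -p 18084:8084 -p 19184:9184 \
  sui-local \
  sui-node --config-path /opt/sui/validator.yaml

# ...but the client on the host still dials the 8080 address recorded
# at genesis, not 18080, which is exactly my confusion.
```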

Has anyone tried this before, or is there perhaps a better way of doing this? Thanks a lot for any insights!

Regarding the faucet and `sui start`, here's a bit more info.

  • The `sui-test-validator` binary will start a network with 4 validators and a fullnode.
    This also sets up a gas faucet at http://127.0.0.1:9123/gas
  • The best way to run a local network and make sure the active address gets gas is to use `sui genesis -f` and then `sui start`. The `sui genesis` command (as opposed to `genesis-ceremony`) is intended for starting a localnet and furnishes the active address with a bunch of SUI.
  • If you need to start a gas faucet when using `sui start`, then use the `sui-faucet --write-ahead-log gas_service.log` command, as in the sketch after this list.
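Putting that together, a typical localnet session looks roughly like this (the curl request shape is the standard one for the sui-test-validator faucet above; for a standalone sui-faucet, substitute the port its startup log reports, and the recipient address is a placeholder):

```bash
# Regenerate localnet configs, funding the active address with SUI
sui genesis -f

# Start the local network from the generated configs in ~/.sui/sui_config
sui start

# In another terminal: start the gas faucet
# (it needs the client.yaml wallet config from ~/.sui/sui_config)
cargo run --release --bin sui-faucet -- --write-ahead-log gas_service.log

# Request gas for an address; 9123 is the sui-test-validator faucet port
# mentioned above, so use your faucet's actual port if it differs
curl -X POST http://127.0.0.1:9123/gas \
  -H 'Content-Type: application/json' \
  -d '{"FixedAmountRequest": {"recipient": "<YOUR_SUI_ADDRESS>"}}'
```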

I hope to unify these commands to make the experience less confusing.

Let me know if you have other questions or issues.


Thank you so much for the reply! But ultimately I'd like to run more than 4 Sui nodes, all on different machines, so `sui-test-validator` doesn't seem to suffice. For `sui genesis`, IIRC it generates 4 validator configs and 1 fullnode config in .sui/sui_config. Is there a way to make it generate more configs with different IPs, so the nodes can live on different machines (I'm guessing --benchmark)? And if there's a way to generate the configs and I manually place them on different machines, how would `sui start` work, since it doesn't seem to run if .sui/sui_config isn't exactly the way `sui genesis` generated it? Thank you again!

Update: I'm able to start the network by running `sui genesis --benchmark-ips`, then `scp`ing .sui/sui_config to the different VMs and running `cargo run --bin sui-node` individually on each VM, roughly as in the sketch below. (Not sure if this is the right way to do it; `sui start` doesn't seem to work even if I remove the validator configs with IP addresses of other VMs.) Things are now mostly alright, except when I try to request coins from the faucet or run `sui client execute-signed-tx`, I get an RPC call failed: ErrorObject { code: ServerError(-32050), message: "Transaction timed out before reaching finality", data: None } error. I have 3 validators on 3 different VMs and one fullnode running on one of the validator VMs.
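For anyone else attempting this, the sequence that got my nodes up looks roughly like the following (the --benchmark-ips argument format and the generated config file names are from memory and vary by version, and the VM addresses are placeholders):

```bash
# On one machine: generate configs whose validator addresses point at the VMs
# (check `sui genesis --help` for the exact --benchmark-ips argument format)
sui genesis -f --benchmark-ips 10.0.0.1 10.0.0.2 10.0.0.3

# Copy the whole config directory to each VM (placeholder addresses)
for vm in 10.0.0.1 10.0.0.2 10.0.0.3; do
  scp -r ~/.sui/sui_config "$vm":~/.sui/
done

# On each VM: start the validator whose generated config carries that
# VM's network address (the config file names vary by Sui version)
cargo run --release --bin sui-node -- --config-path ~/.sui/sui_config/validator.yaml

# On the VM that also hosts the fullnode:
cargo run --release --bin sui-node -- --config-path ~/.sui/sui_config/fullnode.yaml
```

Since finality needs signatures from a quorum of validator stake, I'm still double-checking that all three validators can actually reach each other on their consensus ports; a single unreachable validator could explain the timeout.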
