Dev Container common steps not working

I am trying to run Rivet in the Dev Container. However, booting the cluster with `nix-shell --run "bolt init dev --yes"` fails with the errors shown in the attached screenshots.
19 Replies
Nathan • 16mo ago
hey! what os are you running?
Kuk! (OP) • 16mo ago
win 10
Kuk! (OP) • 16mo ago
FYI, I have skipped "Step 5: Setup dev tunnel (optional)" and I do not have a public IP. However, that should not be necessary for booting the cluster, right? I've tried checking out version 24.5.1 but it fails the same way. @Nathan should I file a bug?
Nathan • 16mo ago
hey! i plan on looking at this later this weekend. i'm pretty tied up today, apologies for the delay
Kuk! (OP) • 16mo ago
alright, the issue was that the workspace/repo was not trusted by VS Code, so the `git rev-parse HEAD` command failed. now I am stuck almost at the end of `bolt init dev`:
Finished `dev` profile [unoptimized] target(s) in 2m 13s

Building (individual)
00:00:00 [=======================] 21/21

Generating specs
⠂ 00:00:01 [ ] 0/21 api-internal-monolith
thread 'main' panicked at /tmp/nix-build-bolt.drv-0/source/lib/bolt/core/src/dep/terraform/output.rs:196:58:
invalid terraform output: Error("missing field `traefik_tunnel_external_ip`", line: 0, column: 0)
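The panic above is a deserialization failure: bolt reads the Terraform outputs and expects a `traefik_tunnel_external_ip` field that isn't there. A rough analog in Python (the internals are an assumption; only the error message is taken from the log above):

```python
import json

# Rough analog of the failure above (an assumption about bolt's internals):
# bolt parses the Terraform outputs into a struct that requires the
# traefik_tunnel_external_ip field. With no game servers configured, the
# output lacks that key, so deserialization fails with "missing field".
tf_output = json.loads('{"some_other_output": {"value": "x"}}')

required = "traefik_tunnel_external_ip"
if required not in tf_output:
    print(f"missing field `{required}`")
```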
Nathan • 16mo ago
ah, good catch. just looked at the config; looks like we recently broke the config for running without game servers. (if you run `(cd infra/tf/k8s_infra/ && terraform output)`, you can validate that `traefik_tunnel_external_ip = null`.) to fix, try adding this to the end of the `namespaces/dev.toml` file (feel free to configure hardware as needed):
[rivet.provisioning]
job_server_provision_margin = 1

[rivet.provisioning.cluster]
name_id = "rivet"

[rivet.provisioning.cluster.datacenters.atl]
datacenter_id = "db01f693-a4a4-4e31-933b-d1b81357bb58"
display_name = "Linode Atlanta"
provider = "linode"
provider_datacenter_name = "us-southeast"
build_delivery_method = "traffic_server"

[rivet.provisioning.cluster.datacenters.atl.pools.job]
desired_count = 1
max_count = 1
drain_timeout = 198000000

[[rivet.provisioning.cluster.datacenters.atl.pools.job.hardware]]
name = "g6-standard-1"

[rivet.provisioning.cluster.datacenters.atl.pools.gg]
desired_count = 1
max_count = 1
drain_timeout = 198000000

[[rivet.provisioning.cluster.datacenters.atl.pools.gg.hardware]]
name = "g6-standard-1"

[rivet.provisioning.cluster.datacenters.atl.pools.ats]
desired_count = 1
max_count = 1
drain_timeout = 198000000

[[rivet.provisioning.cluster.datacenters.atl.pools.ats.hardware]]
name = "g6-standard-1"
then run `bolt infra up` to re-apply the new config. if it helps unstick you, this is the dev config that most rivet employees work off of, but it's fully loaded:
[cluster]
id = "XXXX"

[cluster.single_node]
# TODO: This is problematic, needs to separate container port with public port
api_http_port = 80
tunnel_port = 6000

[cluster.single_node.dev_tunnel]

[dns.domain]
main = "xxxx"
cdn = "xxxx"
job = "xxxx"

[dns.cloudflare]
account_id = "xxxx"

[dns.cloudflare.access.groups]
engineering = "xxxx"

[dns.cloudflare.access.services]
grafana = "xxxx"

[rivet.cdn]
cache_size_gb = 10

[kubernetes.k3d]
use_local_repo = true # NOTE: This might not work in dev containers

[kubernetes]
dashboard_enabled = true

[rivet.upload.nsfw_check]

[email.sendgrid]

[clickhouse.kubernetes]

[cockroachdb.kubernetes]

[prometheus]

[rivet.test]
load_tests = true

[rivet.api]
error_verbose = true
hub_origin = "http://localhost:5080"

[rivet.provisioning]
[rivet.provisioning.cluster]
name_id = "rivet"

[rivet.provisioning.cluster.datacenters.atl]
datacenter_id = "db01f693-a4a4-4e31-933b-d1b81357bb58"
display_name = "Linode Atlanta"
provider = "linode"
provider_datacenter_name = "us-southeast"
build_delivery_method = "traffic_server"

[rivet.provisioning.cluster.datacenters.atl.pools.job]
desired_count = 1
max_count = 1
drain_timeout = 198000000

[[rivet.provisioning.cluster.datacenters.atl.pools.job.hardware]]
name = "g6-standard-1"

[rivet.provisioning.cluster.datacenters.atl.pools.gg]
desired_count = 1
max_count = 1
drain_timeout = 198000000

[[rivet.provisioning.cluster.datacenters.atl.pools.gg.hardware]]
name = "g6-standard-1"

[rivet.provisioning.cluster.datacenters.atl.pools.ats]
desired_count = 1
max_count = 1
drain_timeout = 198000000

[[rivet.provisioning.cluster.datacenters.atl.pools.ats.hardware]]
name = "g6-standard-1"

[s3.minio]

[s3.cors]
allowed_origins = ["http://localhost:5080"]

[captcha.turnstile]
site_key_main = "1x00000000000000000000AA"
site_key_cdn = "1x00000000000000000000AA"
Kuk! (OP) • 16mo ago
I will try it and let you know. btw, I am not sure I understand the public IP requirement for the dev container. in my head, the dev workflow is: start the local cluster listening on 127.0.0.1, have it spin up the game server listening on localhost, and then connect to it with my dev (Unreal) game client. so why would I need an external IP?
Nathan • 16mo ago
ah, self-hosting rivet is intended for hosting public servers, so we assume you have a public ip. if you're just trying to give it a spin locally for testing, what you're describing is supposed to be possible, but you ran into a bug that's preventing that without provisioning. i think you can just specify this without any servers to get past the bug:
[rivet.provisioning]
[rivet.provisioning.cluster]
name_id = "rivet"
without any datacenters and it might work. it'll require a linode api key, but you can probably just pass in some random string since it won't use it.
Kuk! (OP) • 16mo ago
so it should look like this?
id = "2f88f90e-c922-48c5-91e7-3d1b07f4dd97"

[cluster.single_node]
public_ip = "127.0.0.1"
api_http_port = 80
#api_http_port = 8080

[s3.minio]

[rivet.provisioning]
[rivet.provisioning.cluster]
name_id = "rivet"
if yes, then it complains with `must have dns configured to provision servers`. I've added:
[dns.domain]
main = "localhost"
cdn = "localhost"
job = "localhost"

[dns.cloudflare]
account_id = "xxxx"
then I've run `bolt init dev --yes`
Nathan • 16mo ago
yep, it requires you to configure cloudflare with dns. i don't think there's any way around it without fixing the bug i mentioned earlier.
Kuk! (OP) • 16mo ago
damn. maybe I can just use an older version? but nobody knows when the bug got introduced
Nathan • 16mo ago
frankly, not sure how far back you'd have to go. people who are self-hosting right now are using it to run public game servers, so we don't run through this path that frequently. we're planning on making this dead simple to set up later this year (it should be just a `docker-compose up` command), but can't promise when
Kuk! (OP) • 16mo ago
yep 😉 ok, thanks for the help anyway
Nathan • 16mo ago
GitHub
missing field `traefik_tunnel_external_ip` when attempting to setu...
Symptoms The new rivet.provision config broke support for running Rivet without edge servers. Currently fails because traefik_tunnel_external_ip tries to return null but the tf/output.rs config does no...
Nathan • 15mo ago
hey @Kuk! it's been a long haul stamping out all sorts of bugs, but we just submitted v2.1.0 to the asset store. you can download it here while we wait for it to get approved – https://releases.rivet.gg/plugin-godot/v2.1.0/rivet-plugin-godot.zip let us know if you run into issues, we'll be quick on the turnaround now that the plugin core is solid.
Kuk! (OP) • 15mo ago
the thing is, I wanted to use unreal 😉
Nathan • 15mo ago
ah, i dropped this link in the wrong thread 😶‍🌫️ easier self-hosting & unreal is coming in the next couple months
Kuk! (OP) • 15mo ago
cool, I will give it a go. for now I have gone with nakama + agones for fleet management. I am looking forward to trying out how it would work with rivet.
