JupyterLab how-to
I used
Not sure if it creates a conda env, but I can't install other packages.
Also, I want to use my GPU in that Coder workspace.
I use Kubernetes (k3s) and my pods are ready.
I checked for docs/articles but found none. Having some for this would be great.
Thank you

15 Replies
GitHub: [Feature]: RDNA 2 Support · Issue #154 · ROCm/gpu-operator
sample pod with AMD GPU
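For reference, a minimal pod along those lines is sketched below; the amd.com/gpu resource name, pod name, and image are assumptions and depend on which device plugin / gpu-operator you deployed:

```sh
# assumes the AMD device plugin exposes the GPU as "amd.com/gpu"
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: rocm-test
spec:
  restartPolicy: Never
  containers:
    - name: rocm-test
      image: rocm/dev-ubuntu-22.04:latest
      command: ["sleep", "infinity"]
      resources:
        limits:
          amd.com/gpu: 1
EOF
# once it's Running, the device plugin should have mounted the GPU devices
kubectl exec rocm-test -- ls -l /dev/kfd /dev/dri
```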
it doesn't seem to create a conda env, no
so either you'll have to set up a virtualenv yourself, or use !sudo apt install python3-<package> in your notebook instead
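For the virtualenv route, a minimal sketch of what you could run in a terminal or notebook cell (paths and names here are just examples):

```sh
# create a venv in the workspace home and register it as a Jupyter kernel
python3 -m venv ~/venvs/work
~/venvs/work/bin/pip install ipykernel
~/venvs/work/bin/python -m ipykernel install --user --name work
# then select the "work" kernel in JupyterLab and pip-install packages into that venv
```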
okay, what about GPU passthrough? Or let's say GPU scheduling, in the Kubernetes world 🙂 I have a Proxmox background.
After checking the scripts, I see pipx does the install.

Not sure, but it could be a driver issue..

Usually the expectation is that the environment (workspace) is pre-configured with GPU access and any other tools, e.g. PyTorch etc.
The module only starts (and optionally installs) JupyterLab in the workspace
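Roughly what that install-and-start boils down to; this is just a sketch, not the module's actual script, and the exact jupyter-lab flags it passes will differ:

```sh
# install JupyterLab into its own pipx-managed venv, then launch it
pipx install jupyterlab
~/.local/bin/jupyter-lab --no-browser --ServerApp.ip=0.0.0.0 --ServerApp.token=''
```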
(@emircanerkul )
couldn't get it working, @Phorcys. I already installed the module and also specified the GPU labels like I did in my other (non-Coder) pods, but I'm still getting that error.
On the second run I only see CPUs, not the GPU.
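One thing worth checking is whether the node actually advertises the GPU as an allocatable resource (the node name is a placeholder):

```sh
# if the device plugin is working, something like "amd.com/gpu: 1" should show up
kubectl get node <node-name> -o jsonpath='{.status.allocatable}'
kubectl describe node <node-name> | grep -iA10 allocatable
```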
would you be able to share your base image?
Sure; here's all the tf code.
codercom/enterprise-base:ubuntu
hey, sorry for the late reply
have you installed the AMD GPU drivers on your Kubernetes node(s)?
if not, please install them and check with rocm-smi
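For example, on the node itself (assuming a standard ROCm/amdgpu install):

```sh
rocm-smi                      # should list the card
lsmod | grep amdgpu           # is the kernel driver loaded?
ls -l /dev/kfd /dev/dri/      # the device nodes containers will need
```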
if yes, then i believe the container also needs to be compatible, so maybe try using the rocm/dev-ubuntu-22.04:latest base image instead
No worries, and thank you; it's just side research, not an urgent thing. I already installed all the AMD bits and thought that labeling would make everything work automatically (Kubernetes node feature discovery should handle it, I thought), but apparently that's not enough. Yeah, I'll probably need to merge
codercom/enterprise-base:ubuntu
with rocm/dev-ubuntu-22.04:latest
because just replacing it didn't work.
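A rough sketch of that merge, assuming the main things the workspace needs from the Coder base image are a non-root coder user and passwordless sudo (the real enterprise-base image does more than this; package and group names may need adjusting):

```sh
cat > Dockerfile <<'EOF'
FROM rocm/dev-ubuntu-22.04:latest

# basics a Coder workspace expects: sudo plus a few common tools
RUN apt-get update && \
    apt-get install -y --no-install-recommends sudo curl git python3 python3-venv pipx && \
    rm -rf /var/lib/apt/lists/*

# non-root user in the GPU-related groups; groupadd -f in case they already exist
RUN groupadd -f render && groupadd -f video && \
    useradd coder --create-home --shell /bin/bash --groups sudo,video,render && \
    echo "coder ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/coder

USER coder
WORKDIR /home/coder
EOF
```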
Learning a lot more today.
- Enabled the k3s registry mirror: https://docs.k3s.io/installation/registry-mirror
- Forked your Coder images (Docker) repo
- Based it on FROM rocm/dev-ubuntu-24.04:latest
- Built it with Docker, saved the image, and imported it via k3s ctr images import (rough commands sketched below): https://www.geekandi.com/2023/02/17/import-docker-image-into-k3s/
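The build/save/import step, roughly (the image tag is just an example):

```sh
docker build -t coder-rocm:dev .
docker save coder-rocm:dev -o coder-rocm.tar
sudo k3s ctr images import coder-rocm.tar
sudo k3s ctr images ls | grep coder-rocm
```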
Tested it and everything looks good, but JupyterLab + TensorFlow still gives the same error. I'll try my luck with https://hub.docker.com/r/rocm/tensorflow/tags
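As a quick smoke test of the rocm/tensorflow image outside of Coder, assuming Docker is available directly on the GPU node (the tag and device flags are the usual ROCm-container ones, not something specific to this setup):

```sh
docker run --rm --device=/dev/kfd --device=/dev/dri --group-add video \
  rocm/tensorflow:latest \
  python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```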
k exec coder-dd716dea-c5a1-4351-87c6-9d4c1efa7aa2-797b7674db-z4xnh -n coder -- sudo rocminfo
I built it with TensorFlow (22 GB image size 🤯) but I'm still getting the same issue. I think it might be related to user/group access.
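If it is a user/group issue, it's quick to confirm from inside the workspace pod (pod name shortened to a placeholder):

```sh
# the workspace user typically needs access to /dev/kfd and /dev/dri/renderD*
kubectl exec -n coder <pod-name> -- id
kubectl exec -n coder <pod-name> -- ls -l /dev/kfd /dev/dri/
```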
https://discord.com/channels/747933592273027093/1370758361108447272
Of course… things don't always go well: https://github.com/pypa/pipx/issues/1635
Here is the PR: https://github.com/coder/images/pull/296 (it's quite a niche area and could be a waste of build credits, but in case you want to add it; *but I want to test it more first)