How to continue training from a checkpoint in shivamshrirao repo with Colab?
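For reference, a minimal sketch of the approach most often suggested for this fork: point --pretrained_model_name_or_path at a previously saved weights folder and launch train_dreambooth.py again. Everything below is a placeholder, not from an actual run, and the numbered step folders under OUTPUT_DIR assume the earlier run saved intermediate weights with --save_interval.

    # Sketch only: all paths, prompts and step counts are placeholders.
    export OUTPUT_DIR="/content/drive/MyDrive/stable_diffusion_weights/my_model"
    export RESUME_FROM="$OUTPUT_DIR/800"   # step folder written by the earlier run

    accelerate launch train_dreambooth.py \
      --pretrained_model_name_or_path="$RESUME_FROM" \
      --instance_data_dir="/content/data/my_subject" \
      --output_dir="$OUTPUT_DIR" \
      --instance_prompt="photo of sks my_subject" \
      --resolution=512 \
      --train_batch_size=1 \
      --max_train_steps=1600 \
      --save_interval=400

As far as I understand, loading the saved folder as the pretrained model continues from those weights but with a fresh optimizer and step counter, so it is not a true resume; please correct me if that is wrong.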



Should I add --skip-torch-cuda-test? GPU type on RunPod: NVIDIA GeForce RTX 3090 (also an NVIDIA RTX A5000).


################################################################
Launching launch.py...
################################################################
Using TCMalloc: libtcmalloc.so.4
Python 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0]
Version: v1.3.1
Commit hash: b6af0a3809ea869fb180633f9affcae4b199ffcf
Traceback (most recent call last):
  File "/workspace/stable-diffusion-webui/launch.py", line 38, in <module>
    main()
  File "/workspace/stable-diffusion-webui/launch.py", line 29, in main
    prepare_environment()
  File "/workspace/stable-diffusion-webui/modules/launch_utils.py", line 257, in prepare_environment
    raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.108.03    Driver Version: 510.108.03    CUDA Version: 11.8   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:45:00.0 Off |                  N/A |
|  0%   27C    P8    32W / 350W |      1MiB / 24576MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
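For what it is worth, a small sketch of what I have been looking at around this error, assuming the stock AUTOMATIC1111 layout where COMMANDLINE_ARGS lives in webui-user.sh. The pip line is only one commonly tried fix for a torch build that does not match the container's CUDA 11.8, not a confirmed solution.

    # 1) Check whether the installed torch build can see the GPU at all
    python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"

    # 2) To only bypass the startup check (generation may then fall back to CPU),
    #    add the flag to COMMANDLINE_ARGS in webui-user.sh
    export COMMANDLINE_ARGS="--skip-torch-cuda-test"

    # 3) Commonly suggested alternative (assumption, not verified here): reinstall a
    #    CUDA 11.8 torch build inside the webui venv so the check passes on its own
    pip install --force-reinstall torch torchvision --index-url https://download.pytorch.org/whl/cu118

If the torch check in step 1 already prints True, then I assume something else is wrong with how the pod exposes the GPU to the webui process.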