Launch PyTorch Docker Application
This guide shows how to run the PyTorch Docker application. Sign in to AAC if you have not already.
Select application
Click Applications, then choose the PyTorch application and the version you want.
Note
In this example we use Pytorch_2_4_0_Rocm6_3_0 (Jupyterlab_4_2_5).
New workload
Click New Workload at the top right.
Select team
If you have more than one team, select one in the pop-up and click Launch. If you have only one team, this step is skipped.
Select input files
Upload any files the workload needs via Upload files, then click Next.
Select resources
Set the number of GPUs (max 8) and the Maximum allowed runtime; the runtime cannot be changed after launch. Click Next.
Select compute
Select the cluster and queue for the job, then click Next.
Review workload submission
Review the configuration and click Run Workload.
When the queue is available, the status changes to Running. Click the running workload to open it.
Monitor workload
Use the SYSLOG, STDOUT, and STDERR tabs to view logs and output.
When interactive endpoints are enabled, you can connect via JupyterLab or SSH. For JupyterLab, select jupyterlab and click Connect to open ML Studio. The password is in the STDOUT tab and in the Interactive endpoints panel under Secret key.
You will see the JupyterLab interface for Python development.
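Once JupyterLab is open, a quick sanity check in a notebook cell confirms that the workload actually sees the GPUs you requested. This is a hedged sketch assuming the image's bundled ROCm build of PyTorch; on ROCm, AMD GPUs are exposed through the standard torch.cuda API.

```python
# Sanity check for a new notebook cell: confirm the ROCm build of PyTorch
# can see the GPUs allocated to this workload. PyTorch exposes AMD GPUs
# through the torch.cuda namespace on ROCm builds as well.
import torch

print(torch.__version__)          # e.g. a 2.4.0+rocm build in this image
print(torch.cuda.is_available())  # True when the GPUs are visible
print(torch.cuda.device_count())  # should match the GPU count you selected
```

If device_count does not match the number of GPUs you selected under Select resources, check the workload's SYSLOG and STDERR tabs before continuing.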
When you are done with JupyterLab, close it. For SSH, select ssh and click Connect.
Copy the shell command (Copy shell command) and the password (Copy to clipboard). The username is aac; for example: ssh -o StrictHostKeyChecking=no -p 7000 aac@aac1.amd.com.
You can also use the built-in Service terminal.
Click Finish Workload when done.
After the workload finishes, download logs from the STDOUT tab via Download Logs.