What's a Plexus Application

Applications in Plexus are containerized software packages that you can deploy and run on AMD clusters. They package dependencies, libraries, and code so that execution is consistent across different environments.

What is a containerized application?

A containerized application is software packaged with its dependencies, libraries, and configuration into a self-contained unit (a container). Containers provide isolation, portability, and reproducibility so applications run the same way regardless of the underlying infrastructure.

Plexus supports containerized applications to ensure:

  • Reproducibility: Applications run the same way every time, regardless of where they execute
  • Dependency Management: All required libraries and tools are bundled within the container
  • Isolation: Applications run in isolated environments, preventing conflicts between different software versions
  • Portability: Containers can run on any compatible compute resource without modification

Container technologies supported

Plexus supports two primary container technologies:

Docker containers

Docker is the most widely used containerization platform. Docker containers are built from Docker images stored in container registries.

Key characteristics:

  • Supports both batch and interactive workloads
  • Enables multi-container applications (microservices)
  • Allows port exposure for web-based applications
  • Supports multi-node distributed applications
  • Compatible with AMD and NVIDIA GPU acceleration

Use cases:

  • Interactive applications (JupyterLab, SSH access)
  • Web services and APIs
  • Machine learning training and inference
  • HPC applications
  • Microservices architectures

Learn how to create Docker applications →
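As a minimal sketch, a Docker application bundles its code and dependencies in an image built from a Dockerfile. The base image tag (`rocm/pytorch:latest`), the `requirements.txt`, and the `train.py` script below are illustrative assumptions, not Plexus requirements:

```dockerfile
# Illustrative base image with ROCm-enabled PyTorch (assumed tag)
FROM rocm/pytorch:latest

# Bundle application code and dependencies inside the image
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY train.py .

# Default command executed when the container starts
CMD ["python", "train.py"]
```

Once pushed to a registry (Docker Hub or a private registry), an image like this can be imported as a Plexus application.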

Singularity containers

Singularity (now Apptainer) is designed specifically for HPC and scientific computing environments. It provides enhanced security and seamless integration with traditional HPC workflows.

Key characteristics:

  • Optimized for HPC and batch workloads
  • Enhanced security model suitable for shared clusters
  • Efficient handling of large-scale parallel applications
  • Direct access to GPU resources
  • Compatible with Docker images

Use cases:

  • HPC simulations and modeling
  • Large-scale scientific computations
  • Batch processing workloads
  • GPU-accelerated computing

Learn how to create Singularity applications →
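Because Singularity can bootstrap directly from Docker images, a definition file is often only a few lines. A minimal sketch, where the base image and the script path are illustrative assumptions:

```
Bootstrap: docker
From: rocm/pytorch:latest

%post
    # Install extra dependencies on top of the Docker base image
    pip install --no-cache-dir numpy

%runscript
    # Command executed when the container runs (assumed script path)
    exec python /app/train.py
```

Building from this definition file produces a single image file (`.sif`) that batch schedulers can stage and execute like any other file.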


Application types

Standard applications

Standard applications run a single container with a defined set of resources. They can be configured for batch or interactive execution.

Configuration options:

  • CPU and memory allocation
  • GPU requirements
  • Storage mount points
  • Environment variables
  • Pre-run and post-run scripts

Examples:

  • PyTorch training jobs
  • Data processing pipelines
  • Computational simulations

View standard application examples →

Interactive service applications

Interactive applications expose network ports and allow users to connect via web browsers or SSH. These applications continue running until explicitly stopped.

Features: - Port exposure for web interfaces - SSH access for terminal interaction - Real-time monitoring and interaction - Persistent sessions

Common interactive applications: - JupyterLab: Interactive notebooks for data science and development - SSH-enabled containers: Full terminal access to running containers - Web applications: Custom web services and APIs

Create a JupyterLab application →
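For a JupyterLab container, the start command typically binds the server to all interfaces so the exposed port is reachable from outside the container. A sketch of such a start command (the port number is an assumption and must match the port the application exposes):

```shell
# Bind to 0.0.0.0 so the exposed container port is reachable externally
jupyter lab --ip=0.0.0.0 --port=8888 --no-browser
```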

Microservices applications

Microservices applications consist of multiple containers working together within the same isolated environment. Containers communicate through localhost networking and can expose public endpoints.

Architecture:

  • Multiple containers in a single workload
  • Internal communication via localhost
  • Dynamic endpoint configuration
  • Shared namespace and networking

Use cases:

  • Frontend and backend service pairs
  • Multi-tier web applications
  • Complex application stacks with multiple components

Create microservices applications →
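The localhost networking model can be sketched with a toy frontend/backend pair. Both "services" run in one Python process here purely for illustration; in a real microservices workload each would be its own container, but the call pattern (one container reaching another at `localhost:<port>`) is the same:

```python
# Sketch: a backend service and a frontend client sharing localhost,
# as two co-scheduled containers in a microservices workload would.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class BackendHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # "Backend container": answers API requests on an internal port
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

def call_backend(port):
    # "Frontend container": reaches the backend via shared localhost networking
    with urllib.request.urlopen(f"http://localhost:{port}/health") as resp:
        return json.loads(resp.read())

server = HTTPServer(("localhost", 0), BackendHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
result = call_backend(server.server_port)
print(result["status"])  # -> ok
```

Because the containers share a network namespace, no service discovery is needed for internal traffic; only endpoints meant for users are exposed publicly.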

Multi-node applications

Multi-node applications distribute computation across multiple compute nodes, enabling large-scale parallel processing and distributed training.

Capabilities:

  • Distributed computing across multiple nodes
  • Support for PyTorch Distributed Data Parallel (DDP)
  • Support for TensorFlow MultiWorkerMirroredStrategy
  • MPI-based parallel applications
  • Automatic network configuration

Environment variables automatically configured:

  • Node rank and world size
  • Master/worker addresses and ports
  • GPU configuration per node
  • Backend communication endpoints

Use cases:

  • Large-scale deep learning training
  • Distributed PyTorch and TensorFlow models
  • MPI-based HPC simulations
  • Multi-GPU parallel computations

Create multi-node applications →
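Application code typically consumes the injected rendezvous settings by reading environment variables. A minimal sketch, assuming the `torch.distributed` naming convention (`MASTER_ADDR`, `MASTER_PORT`, `RANK`, `WORLD_SIZE`, `LOCAL_RANK`) with single-node defaults for local testing; the exact variable names a platform sets may differ:

```python
import os

def ddp_config(environ=os.environ):
    # Collect the rendezvous settings a distributed job needs. Names follow
    # the torch.distributed convention (an assumption here); defaults make
    # the same script runnable as a single local process.
    return {
        "master_addr": environ.get("MASTER_ADDR", "127.0.0.1"),
        "master_port": int(environ.get("MASTER_PORT", "29500")),
        "rank": int(environ.get("RANK", "0")),
        "world_size": int(environ.get("WORLD_SIZE", "1")),
        "local_rank": int(environ.get("LOCAL_RANK", "0")),
    }

# Example: what a worker on the second node of a two-node job might see
cfg = ddp_config({"MASTER_ADDR": "10.0.0.1", "MASTER_PORT": "29500",
                  "RANK": "1", "WORLD_SIZE": "2"})
print(cfg["rank"], "of", cfg["world_size"])  # -> 1 of 2
```

Reading configuration this way keeps the application code identical across single-node and multi-node launches; only the injected environment changes.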


Pre-built vs. custom applications

Pre-built applications

Plexus provides a curated catalog of pre-configured applications optimized for AMD hardware, including:

  • AI/ML Frameworks: PyTorch, TensorFlow
  • HPC Benchmarks: HPL, HPCG, LAMMPS
  • Scientific Applications: GROMACS, CP2K, NAMD
  • Development Tools: ROCm-enabled Ubuntu environments

These applications are ready to use and require no configuration beyond workload submission parameters.

Custom applications

If you have Developer or Admin permissions, you can create custom applications by:

  • Importing existing containers from Docker Hub or private registries
  • Building new containers from definition files (Singularity)
  • Configuring custom environments with specific dependencies
  • Sharing applications with team members or all platform users

Custom applications let you run specialized software, proprietary tools, or domain-specific workflows.


Application catalog and discovery

The Applications page is a catalog where you can:

  • Browse available applications organized by category
  • Search by name, framework, or keyword
  • Filter by container type (Docker/Singularity)
  • View detailed information including versions, dependencies, and resource requirements
  • Launch workloads directly from application details

Application categories include:

  • AI and Machine Learning
  • High-Performance Computing
  • Data Analytics
  • Visualization
  • Development Tools


Getting started

For application users

  1. Browse the application catalog.
  2. Select an application that fits your needs.
  3. Review resource requirements.
  4. Launch a workload with your desired configuration.

How to run a workload →

For application developers

  1. Ensure you have Developer or Admin permissions.
  2. Choose your container technology (Docker or Singularity).
  3. Follow the creation guide for your chosen technology.
  4. Configure general settings, containers, and scripts.
  5. Test and share your application.

Create applications:

  • Docker applications
  • Singularity applications