Official One-Click Local LLM Deployment for 2019 Mac Pro (7,1) Dual W6900X

I am a professional user of the 2019 Mac Pro (7,1) with dual AMD Radeon Pro W6900X MPX modules (32GB VRAM each). This hardware is designed for high-performance compute, but it is currently crippled for modern local LLM/AI workloads under Linux due to Apple's EFI/PCIe routing restrictions.

Core Issue:

  • rocminfo reports "No HIP GPUs available" when attempting to use ROCm/amdgpu on Linux
  • Apple's custom EFI firmware blocks full initialization of professional GPU compute assets
  • The dual W6900X GPUs have 64GB combined VRAM and high-bandwidth Infinity Fabric Link, but cannot be fully utilized for local AI inference/training
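For anyone hitting the same wall, the symptoms above can be narrowed down with a few standard commands. This is only a diagnostic sketch: it assumes a Linux install with pciutils, and checks the standard sysfs path and ROCm binary; nothing here is Mac Pro-specific.

```shell
#!/bin/sh
# Diagnostic sketch for "No HIP GPUs available" on a Linux box with AMD cards.

# 1. Are the cards visible on the PCI bus at all?
if command -v lspci >/dev/null 2>&1; then
  lspci -nn | grep -Ei 'vga|display' || echo "no display controllers listed"
fi

# 2. Has the amdgpu kernel driver bound to any device?
if [ -d /sys/bus/pci/drivers/amdgpu ]; then
  echo "amdgpu driver loaded"
else
  echo "amdgpu driver not loaded"
fi

# 3. Does the ROCm runtime see any compute agents?
if command -v rocminfo >/dev/null 2>&1; then
  echo "rocminfo found; gfx agents (if any):"
  rocminfo | grep -i 'gfx' || echo "no gfx agents visible"
else
  echo "rocminfo not installed (install the ROCm packages first)"
fi
```

If step 1 shows the cards but step 3 shows no agents, the problem is in driver/runtime initialization rather than the hardware itself.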

My Specific Request:

Apple should provide an official, one-click deployable application that enables full utilization of dual W6900X GPUs for local large language model (LLM) inference and training under Linux.

This application must:

  1. Fully initialize both W6900X GPUs via HIP/ROCm, establishing valid compute contexts
  2. Bypass artificial EFI/PCIe routing restrictions that block access to professional GPU resources
  3. Provide a stable, user-friendly one-click deployment experience (similar to NVIDIA's AI Enterprise or AMD's ROCm Hub)

Why This Matters:

The 2019 Mac Pro is Apple's flagship professional workstation, marketed for compute-intensive workloads. Its high-cost W6900X GPUs should not be locked down for modern AI/LLM use cases. An official one-click deployment solution would demonstrate Apple's commitment to professional AI and unlock significant value for professional users.

I look forward to Apple's response and a clear roadmap for enabling this critical capability.

#MacPro #Linux #ROCm #LocalLLM #W6900X #CoreML

ROCm does not support the W6900X, based on the information at https://rocm.docs.amd.com/projects/install-on-linux/en/latest/reference/system-requirements.html. You will likely need to use Vulkan or request that AMD add support.
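As a concrete example of the Vulkan route: llama.cpp's Vulkan backend runs on the generic Mesa RADV driver and does not require ROCm at all. A rough sketch follows; the cmake option and binary name are taken from recent llama.cpp versions, and `model.gguf` is a placeholder for whatever local GGUF model you use.

```shell
# Build llama.cpp with its Vulkan backend (needs git, cmake, and the
# Vulkan headers/loader; on Linux, Mesa's RADV driver provides Vulkan
# for AMD cards without any ROCm dependency).
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release

# Run inference, offloading as many layers as fit to the GPU (-ngl 99).
# model.gguf is a placeholder path, not a file shipped with the repo.
./build/bin/llama-cli -m model.gguf -ngl 99 -p "Hello from the Mac Pro"
```

This won't use both cards as one pool the way Infinity Fabric Link would, but it is a working path to GPU-accelerated inference on unsupported-by-ROCm hardware.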

The W6800X and W6800X Duo should work fine, as their architectures are the same as the W6800.

Infinity Fabric Link does not work on Linux. There is already an ongoing bug report for the AMDGPU kernel driver: https://gitlab.freedesktop.org/drm/amd/-/work_items/3793. I doubt AMD will fix this anytime soon.

You will also likely need a distribution provided by the T2 Linux project, or at least its patched kernel.
