Solutions

Ubilink adopts the NVIDIA SuperPOD architecture, operating a computing power center with 128 H100 GPU servers (ESC N8-E11), equivalent to 1,024 GPU cards, that delivers up to 45.82 PetaFLOPS of computing performance. This allows us to handle the most challenging Artificial Intelligence (AI) and High-Performance Computing (HPC) workloads with exceptional efficiency and speed. In addition, we have built a high-efficiency parallel file system for AI-HPC that supports NVIDIA GPUDirect RDMA technology, significantly improving I/O performance and meeting the massive I/O throughput and low-latency demands of AI computing tasks.

Looking ahead, Ubilink plans to further expand the scale of its computing power center. Beyond the existing Tucheng Minquan data center, space and power have already been planned for a future expansion to 256 GPU servers, and we are actively evaluating a new-generation AI computing base in Hsinchu or Taichung to provide our clients with outstanding computing power services.
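As an illustration of how workloads typically use a multi-GPU H100 cluster like this, the sketch below shows a minimal PyTorch DistributedDataParallel training loop with the NCCL backend, which can take advantage of GPUDirect RDMA where the fabric supports it. The model, tensor sizes, and launch command are hypothetical placeholders, not part of Ubilink's service definition.

```python
# Minimal sketch: a distributed training step with PyTorch DDP over NCCL.
# Launch with torchrun, e.g.:  torchrun --nproc_per_node=8 train_sketch.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main() -> None:
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model and optimizer; a real workload would supply its own model and data.
    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    ddp_model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(ddp_model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(32, 1024, device=local_rank)
        loss = ddp_model(x).square().mean()
        optimizer.zero_grad()
        loss.backward()  # gradients are all-reduced across GPUs via NCCL
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```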
The computing power prices are as follows; a short cost-estimation example is provided after the table. We can also provide a customized quote based on the client's specifications and required timeframe.
| GPU Model | GPU Count | vRAM | vCPU | RAM (GB) | Storage | Price per hour (USD) |
| --- | --- | --- | --- | --- | --- | --- |
| H100 (SXM5) | 8 | 80 GB × 8 (HBM3) | 128 | 2048 | 1 TB | 40.00 |
| H100 PCIe | | 80 GB HBM2e | 48 | 2048 | Optional purchase | 4.76 |
| L40S | | 48 GB GDDR6 | 48 | 2048 | Optional purchase | 1.28 |
| ADA A5000 | | 32 GB GDDR6 | 32 | 128 | Optional purchase | 1.00 |
| ADA A4500 | | 24 GB GDDR6 | 32 | 128 | Optional purchase | 0.80 |
| ADA A4000 | | 20 GB GDDR6 | 24 | 128 | Optional purchase | 0.70 |
| RTX A4000 | | 16 GB GDDR6 | 24 | 128 | Optional purchase | 0.61 |
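The sketch below shows how a charge could be estimated from the hourly rates above. It assumes the H100 (SXM5) rate covers the full 8-GPU configuration listed in that row; the function name, the "hours" and "units" inputs, and the example job are hypothetical, and actual billing terms should be confirmed with our sales team.

```python
# Illustrative cost estimate from the published hourly rates (USD).
HOURLY_RATE_USD = {
    "H100 (SXM5) 8-GPU node": 40.00,  # assumed to be the price of the full node
    "H100 PCIe": 4.76,
    "L40S": 1.28,
    "ADA A5000": 1.00,
    "ADA A4500": 0.80,
    "ADA A4000": 0.70,
    "RTX A4000": 0.61,
}


def estimate_cost(sku: str, hours: float, units: int = 1) -> float:
    """Estimated charge in USD for renting `units` of `sku` for `hours`."""
    return HOURLY_RATE_USD[sku] * hours * units


# Example: one H100 (SXM5) node for a 72-hour training run.
print(f"US${estimate_cost('H100 (SXM5) 8-GPU node', 72):,.2f}")  # US$2,880.00
```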
 
Contact Ubilink
  • Please send your requirements to the email address below, and we will arrange a dedicated representative to assist you.
  • Business Contact: sales@ubilink.ai