Datasheet

1U rackmount with Ampere processors for AI and software development

The Nucleus 1U with Ampere processors is a compact, short-depth 1U server with substantial NVMe storage and NVIDIA GPU support. It is designed to deliver high-density, high-throughput, and energy-efficient performance for cloud-native and AI inference workloads in space- and power-constrained environments such as edge, telco/5G, and on-premises deployments.

  • High-density, high-throughput, and energy-efficient performance for cloud-native and AI inference workloads
  • 1U rack height, short 28” depth
  • Power-efficient AmpereOne or Ampere Altra
  • Front and rear full-height PCIe expansion
  • NVMe and removable SSD storage

Leveraging the power of Ampere Processors

  • Exceptional Core Density and Throughput

    With up to 192 custom, single-threaded cores in a socket, the AmpereOne® maximizes the number of independent compute threads available per server and per rack. This is ideal for workloads that can be highly parallelized, such as running many independent virtual machines (VMs) or containers.

  • Superior Power Efficiency (Performance per Watt)

    Ampere’s Arm-based architecture delivers high performance while consuming significantly less power than comparable x86 processors. This is crucial for deployments where power and cooling are the biggest constraints, resulting in Total Cost of Ownership (TCO) reduction.

  • Predictable, Low-Latency Performance

    By utilizing single-threaded cores, the AmpereOne architecture helps avoid the “noisy neighbor” problem common in multi-tenant cloud environments. This ensures consistent throughput and latency.

  • Cost-Effectiveness (Performance per Dollar)

    Ampere processors are competitively priced, offering a strong value proposition in terms of performance and efficiency for cloud and scale-out workloads.

Use Cases


High-Density AI Inference and Edge AI

Running multiple Large Language Models (LLMs), deep learning models, or computer vision applications for real-time AI inference.
  1. Ampere processors handle the pre- and post-processing, orchestrate a large number of inference sessions, and run smaller, high-throughput models (e.g., on-CPU inference). Their high core count supports a massive number of concurrent inference agents or sessions.
  2. NVIDIA GPUs provide specialized acceleration for the computationally intensive parts of the AI models, delivering high tokens-per-second throughput for generative AI.
  3. NVMe storage provides fast access to the large datasets, model weights, and logs required for AI workloads, allowing models to be swapped in and out of memory quickly.
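The CPU-side orchestration in step 1 relies on assigning each inference session its own dedicated cores. As a minimal sketch (the worker function is hypothetical; the pinning call is Linux's standard `os.sched_setaffinity`):

```python
import os

def partition_cores(total_cores, workers):
    """Split core IDs 0..total_cores-1 into equal groups, one per worker."""
    per = total_cores // workers
    return [list(range(i * per, (i + 1) * per)) for i in range(workers)]

def run_session(worker_id, cores):
    """Hypothetical inference worker: pin itself to its dedicated cores,
    then serve requests (e.g. submitted via a ProcessPoolExecutor)."""
    os.sched_setaffinity(0, cores)  # Linux-only: restrict this process to its cores
    # ... load model weights from NVMe, loop over inference requests ...
    return worker_id

if __name__ == "__main__":
    # 192 AmpereOne cores -> 48 sessions with 4 dedicated cores each
    groups = partition_cores(total_cores=192, workers=48)
    print(groups[0], groups[-1])  # [0, 1, 2, 3] [188, 189, 190, 191]
```

Because the cores are single-threaded, each group maps to physically independent execution units, which is what gives each session its isolation.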

High-Density Cloud-Native Microservices and Container Hosting

Hosting a vast number of containers, microservices, and virtual machines for cloud service providers (CSPs) and large enterprises. This includes web serving, caching services (like Redis, Memcached), and API gateways.
  1. The 192 single-threaded cores of AmpereOne CPUs are perfect for assigning one or a few dedicated cores to each VM or container, maximizing density and maintaining performance isolation and security.
  2. The short-depth 1U form factor is ideal for maximizing compute power in limited rack space, particularly in co-location facilities or modular/edge data centers.
  3. Optional 100G networking provides the high-throughput, low-latency connectivity required for a distributed cloud environment.
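The one-container-per-core-group pattern in point 1 can be expressed with Docker's `--cpuset-cpus` flag; a small helper (illustrative only) generates the per-container core ranges:

```python
def cpuset_args(total_cores, cores_per_container):
    """Yield Docker-style --cpuset-cpus ranges giving each container
    its own dedicated slice of the CPU: '0-3', '4-7', ..."""
    for start in range(0, total_cores, cores_per_container):
        end = min(start + cores_per_container, total_cores) - 1
        yield f"{start}-{end}" if end > start else str(start)

if __name__ == "__main__":
    ranges = list(cpuset_args(192, 4))
    print(len(ranges), ranges[0], ranges[-1])  # 48 0-3 188-191
    # each container would then be launched as e.g.:
    #   docker run --cpuset-cpus=0-3 ...
```

With 192 cores at 4 cores per container, a single 1U node hosts 48 fully isolated containers with no core sharing.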

Telecommunication (Telco) Edge and 5G Infrastructure

Running virtualized network functions (VNFs) and containerized network functions (CNFs) for 5G core and radio access networks (RAN), as well as AI Mobile Edge Computing (MEC) applications.
  1. The short-depth (28”) rackmount form factor is critical for deployment in space-constrained offices, roadside cabinets, and edge locations.
  2. 1+1 PSUs provide the redundancy needed for mission-critical telecom services.
  3. Ampere processors offer the high, predictable core count necessary to handle massive, concurrent signaling and data traffic processing with guaranteed performance characteristics.

Arm Development

Lightning-fast software development and testing for devices with embedded arm64 SoCs, including Snapdragon, Raspberry Pi, Jetson, i.MX, and Oryon. Developers can focus on creating efficient, reliable, and performance-optimized software that manages the limited resources and real-time requirements of these specialized systems.
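The advantage of a native arm64 build host is that target test suites run directly, with no cross-toolchain or QEMU emulation. A build script or CI job can gate on this with a small check like the following (the function name is illustrative):

```python
import platform

# Linux reports 'aarch64'; macOS reports 'arm64'
ARM64_MACHINES = {"aarch64", "arm64"}

def is_native_arm64(machine=None):
    """True when arm64 builds and tests can run natively on this host,
    with no emulation or cross-compilation required."""
    m = machine if machine is not None else platform.machine()
    return m.lower() in ARM64_MACHINES

if __name__ == "__main__":
    if is_native_arm64():
        print("native arm64 host: run target test suites directly")
    else:
        print("non-arm64 host: cross-compile or emulate")
```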

Modular, Scalable AI Cluster

As a complete solution, the AI cluster rack of Nucleus 1U with Ampere offers 2x 100G NVIDIA BlueField DPUs per 1U server and includes dual universal aggregation / core switches from Extreme Networks.

System Specs

Available CPUs
  • AmpereOne® processor 192 cores
  • Ampere® Altra® processor 128 cores
Memory

Up to 2TB

Rear I/O PCIe Expansion

Rear-access PCIe 4.0/5.0 full-height, single-width expansion card, including 2x 100G NIC or NVIDIA BlueField DPU and SuperNIC options for high-performance network clusters or other I/O

AI Development Configuration

Front-access PCIe 4.0/5.0 full-height, dual-width slot for NVIDIA Blackwell Pro series (Server Edition and Workstation Edition), H200 NVL, L40S, and L4 GPUs

Standard Storage

NVMe M.2 SSD up to 8TB, plus dual front-access, lockable, removable U.2 NVMe SSDs up to 122TB each, for 252TB total system capacity

Operating Systems

  • Enterprise: Red Hat Enterprise Linux (RHEL) 9 series, SUSE Linux Enterprise Server (SLES), Oracle Linux
  • Community: NixOS, Ubuntu Server LTS (22.04, 24.04), Debian, Fedora Server, AlmaLinux, Rocky Linux, CentOS Stream
  • Virtualization: SUSE Virtualization / Harvester 1.5.0

Power

1+1 AC 1300W, 110/220V

Physical
  • 1U height, 28.25 in (71.75 cm) depth
  • Rackmount slide options for 26” to 32” rack depth cabinet mounting
Warranty

1 year parts and labor; 2nd and 3rd year warranties available

Resources

  • Nucleus 1U with Ampere Datasheet (PDF)
  • Modular Scalable AI Cluster Datasheet (PDF)

Speak to a Sales Engineer

Additional Services
