
High-Performance Ampere-Based Tower Workstation
The Edge XTP tower workstation is a professional-grade platform powered by the Ampere family of high-performance, scalable, power-efficient processors.
Designed for demanding data-intensive, edge and cloud applications, the Edge XTP supports multiple full-size GPU cards for enhanced graphics or AI workflows as well as a variety of storage options including PCI Express and SATA-based SSDs. Together with innovative Ampere CPUs, the Edge XTP allows you to create the ideal high-performance system for your needs.

Key Features
POWERFUL, EFFICIENT PROCESSING
With Ampere CPUs, the Edge XTP is at the forefront of sustainable computing, driving the future of energy-efficient high-performance computing.
Performance Tuned
Optimized for popular developer applications, then configured and integrated for your requirements
Multi-GPU Support
Leverage one or more full-size, workstation-class GPU cards for optimized performance in graphics, AI, and machine learning workflows
Massive Storage
Multiple storage options including PCI Express based or SATA-based SSDs
Application Support
We work directly with our customers every day to ensure that our computers meet their unique requirements
Leveraging the power of AmpereOne

AmpereOne processors feature outstanding performance and power efficiency. They offer linearly scalable throughput, making them well suited to machine learning and other data-intensive tasks, such as the development of AI-based cloud services (recommender engines, predictive analytics, natural language processing, and computer vision applications).
- High Core Count and Parallelism: AmpereOne processors can have up to 192 cores, providing massive parallelism for AI inference workloads like natural language processing, image recognition, and generative AI. This architecture is designed to handle high-density computing and can deliver consistent, low-latency performance.
- Energy Efficiency: Ampere’s architecture is known for its high performance per watt, which is crucial for managing the operational costs and environmental impact of running AI workloads continuously in cloud environments.
- Scalability: AmpereOne is built for dense containerized deployments. Applications scale linearly as additional single-threaded cores are utilized. This makes AmpereOne an ideal platform for cloud native applications sensitive to latency variability or needing strict SLAs. Services integrated with AI ranging from visual analytics and natural language processing to classical machine learning pipelines are workloads that benefit from this level of consistency, elasticity and determinism.
- Incredible Speed: With 128 lanes of PCIe Gen5, AmpereOne can support up to eight x16 devices attached to a single socket. This provides a high-speed interface for data transfer to and from the GPU.
- Built-In AI Inference: Ampere helps customers achieve superior performance for AI workloads by integrating optimized inference layers directly into popular AI frameworks like PyTorch, TensorFlow, and ONNX Runtime. This allows for seamless deployment without requiring code changes or model conversions.
These features lead to highly predictable performance. Compute-intensive tasks can be handled with fewer systems, lowering total cost of ownership (TCO). The AmpereOne architecture drives faster, more predictable AI services that scale with demand, helping data centers maximize efficiency and reduce infrastructure costs.
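As an illustrative sketch (not vendor code), the linear-scaling claim above rests on independent, single-threaded tasks being fanned out across many cores. The pattern below uses only the Python standard library; the workload function is a stand-in for a real inference request, and `os.cpu_count()` reports however many cores the host exposes (up to 192 on AmpereOne).

```python
import os
from concurrent.futures import ProcessPoolExecutor

def busy_work(n: int) -> int:
    """A small CPU-bound task standing in for one inference request."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_batch(requests: int, work: int = 100_000) -> list[int]:
    """Fan a batch of independent requests out across all available cores.

    Because the tasks share no state, aggregate throughput grows roughly
    in proportion to core count -- the property the text attributes to
    AmpereOne's single-threaded core design.
    """
    workers = os.cpu_count() or 1
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(busy_work, [work] * requests))

if __name__ == "__main__":
    results = run_batch(8)
    print(len(results))  # 8 results, one per simulated request
```

On a real deployment the same fan-out shape applies whether the unit of work is a process, a container, or a pod; the per-core isolation is what keeps tail latency predictable.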
NVIDIA GPUs for AI

NVIDIA GPUs are versatile and powerful tools for AI development, striking a balance between high-end performance and cost-effectiveness. They are particularly well-suited for a wide range of AI workloads, from training and inference to graphics-intensive applications.
NVIDIA L4 and NVIDIA L40S GPUs
- AI and Deep Learning Capabilities: The L4 and L40S GPUs are built on the Ada Lovelace architecture and feature 4th-generation Tensor Cores, which are specifically designed to accelerate AI training and inference.
- Large Memory Capacity: With 48 GB of GDDR6 memory, the L40S can handle very large datasets and complex models without memory bottlenecks. This is especially important for training large language models and other data-intensive tasks.
- Scalable, Efficient Performance: This architecture delivers groundbreaking performance for deep learning and inference applications, and it is optimized for 24/7 enterprise data center operations, ensuring reliability and uptime while maintaining energy efficiency and lower total cost of ownership.
NVIDIA RTX PRO™ 4500 and 6000 Blackwell GPUs
Designed for professionals who demand the best, NVIDIA RTX PRO Blackwell is the most powerful professional RTX GPU. It delivers next-level AI and neural rendering with 5th Gen Tensor Cores, 4th Gen RT Cores, and Blackwell’s advanced CUDA architecture, accelerating data workflows like never before.
NVIDIA H200 NVL Tensor Core GPU
The NVIDIA H200 NVL is a cutting-edge data center GPU built on the Hopper architecture. The H200 is a powerful solution with 141 GB of HBM3e memory. This card is particularly effective at accelerating inference for massive large language models and other memory-bound tasks, offering faster processing, lower latency, and improved energy efficiency per token, thereby lowering the total cost of ownership for large-scale enterprise AI deployments.
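A rough back-of-envelope sketch shows why that memory capacity matters for memory-bound LLM inference. The model size and byte-per-parameter figures below are illustrative assumptions, not vendor numbers; only the 141 GB capacity comes from the text, and the estimate ignores KV cache and activation overhead.

```python
def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate GPU memory needed just to hold model weights,
    ignoring KV cache and activation overhead."""
    return params_billion * 1e9 * bytes_per_param / 1e9

HBM3E_GB = 141  # H200 NVL capacity quoted in the text

# A hypothetical 70B-parameter model:
fp16 = weights_gb(70, 2)  # 140.0 GB -- just fits on a single H200 NVL
int8 = weights_gb(70, 1)  # 70.0 GB -- leaves headroom for the KV cache

print(fp16 <= HBM3E_GB, int8 <= HBM3E_GB)  # True True
```

Fitting the whole model in one GPU's memory avoids cross-device transfers during decode, which is where the lower latency and better energy per token come from.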
Solution, Engineering, and Integration Services
NextComputing offers services that ensure solution success and fast time to market or deployment. Outsource the processes that are not core to your business and let NextComputing handle them, so you can focus on what you do best.
See our Services section for a complete look at how we can build, brand, validate, and maintain the perfect appliance for you or your customers.

Reduce Costs
Contain soft costs with an appliance solution validated by us
Generate New Products
Quickly deploy a variety of turnkey solutions based on a common architecture
Save Time
Let us handle configuration management for updates or branching out to new products
Extend Your Brand
Put your logo on the system to create your own branded product
And Much More!
System Specs
| Available CPUs | NextComputing systems support the entire Ampere family of processors, including the powerful AmpereOne and Ampere Altra |
|---|---|
| Memory | Up to 2TB of system memory |
| PCI Expansion | Up to 6 full-length PCI Express x16 slots depending on processor configuration. Ask your NextComputing Sales Engineer. |
| Storage | |
| RAID | Options for software RAID 0/1/5, or via add-on PCIe hardware RAID controller |
| Operating Systems | Rocky Linux, Red Hat Enterprise Linux, Ubuntu |
| Power | |
| Physical | 9.5" (240mm) x 20.5" (520mm) x 20.1" (510mm) |
| Warranty | 3 years parts and labor |


