Artificial Intelligence


NextComputing’s AI PC product line offers powerful and versatile computers that are configured to handle even the most demanding AI workloads with ease. Whether you're a researcher, developer, or entrepreneur, our AI PCs are the powerful, professional-grade tools you need to bring your AI projects to life.

Here are just a few of the things you can do with a NextComputing AI PC:

  • Train and deploy complex AI models.
  • Analyze large amounts of data.
  • Develop new AI applications.
  • Generate new and original content from a wide variety of inputs.

The NextComputing Advantage

Any computer suitable for developing AI should have certain features and capabilities to handle the computational demands of AI workloads. Here are some key aspects that set NextComputing apart from other solutions:
  1. Processing Power (CPU/GPU) - AI development often involves complex mathematical computations, and having a powerful processor is crucial. Graphics Processing Units (GPUs) are especially important for training machine learning models as they excel at parallel processing. NextComputing systems are available with the latest AMD Ryzen Threadripper PRO and Intel Xeon processors as well as NVIDIA GeForce RTX GPUs for performance and parallel processing at the cutting edge.
  2. Memory - Sufficient RAM is necessary to handle large datasets and complex algorithms. AI tasks, especially deep learning, can be memory-intensive, so our solutions ship with 256GB of DDR5 RAM for smooth operation.
  3. Storage - AI projects often involve working with large datasets. With 8TB-62TB of high-performance PCIe SSD storage and options for even further expansion, our systems deliver fast data access without sacrificing capacity.
  4. Portability - The demands of resource-intensive tasks like model training are far beyond most laptops, but NextComputing systems offer extreme performance in high-density systems that can be packed up and redeployed quickly and securely, wherever you want to work.

The Future of AI - Advanced by AMD


AMD Ryzen™ AI will bring amazing experiences and innovative solutions to consumers and commercial audiences. These processors feature built-in AI-centric features to accelerate your workflow.

AMD Ryzen AI includes the first dedicated AI engine on an x86 processor. This specialized engine runs AI applications directly on your PC while relieving some of the burden on the CPU and GPU. Innovations like this make AMD Ryzen the ideal tool for AI tasks such as model training, threat detection, coding assistance, and data modeling.

The Power of Personal AI with AMD Ryzen™ AI

AI is everywhere. It is transforming the way consumers create, interact, work and use technology every day. With AMD Ryzen AI, PC users can tap into the exciting possibilities that AI brings and experience the power of personal AI at their fingertips.

AMD Ryzen AI at a Glance

AMD Ryzen AI is a combination of powerful AI accelerators that help unlock new levels of performance and efficiency for running AI workloads.

AMD Ryzen AI includes:

  • NPU: a dedicated AI engine optimized for efficient AI processing
  • GPU: AMD Radeon™ graphics for high-bandwidth AI performance
  • CPU: AMD Ryzen processor cores designed for responsive AI acceleration

Together, they can perform up to 39 trillion operations per second (TOPS), the most you can get on a consumer Windows x86 processor today!
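To put a TOPS figure in perspective, here is a hedged back-of-envelope sketch. It assumes the common rule of thumb of roughly 2 operations per model parameter per generated token for transformer inference, and uses a hypothetical 7-billion-parameter model purely as an illustration; real throughput is far lower once memory bandwidth, precision, and utilization are accounted for.

```python
# Back-of-envelope: theoretical token throughput for a given TOPS budget.
# Assumes ~2 operations per parameter per generated token (a common
# transformer-inference rule of thumb). This is an idealized ceiling,
# not a benchmark.

def peak_tokens_per_second(tops: float, params_billions: float) -> float:
    ops_per_token = 2 * params_billions * 1e9  # ~2 ops per parameter per token
    return tops * 1e12 / ops_per_token

# A hypothetical 7B-parameter model on a 39 TOPS budget:
print(round(peak_tokens_per_second(39, 7)))  # ~2786 tokens/s, theoretical peak
```

In practice, sustained throughput is governed more by memory bandwidth than raw TOPS, which is why the NPU/GPU/CPU split matters.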

NextComputing workstations with AMD Ryzen AI are optimized for a world of AI-driven experiences, including developing your own instance of a GPT-based large language model (LLM) chatbot, creating an AI coding assistant, and so much more!

View Our AMD Solutions

AI Everywhere with Intel


5th Gen Intel® Xeon® Scalable processors deliver performance increases and benefits across key workloads, such as artificial intelligence, high-performance computing, networking, storage, database and security.

With AI acceleration in every core, 5th Gen Xeon processors address demanding end-to-end AI workloads before customers need to add discrete accelerators — including up to 42% higher inference performance over previous generations and less than 100 millisecond latency on large language models (LLMs) under 20 billion parameters.

View Our Intel Solutions

GPU Acceleration

Many AI frameworks and libraries leverage GPU acceleration for faster training of deep learning models. A computer with a powerful GPU or multiple GPUs is beneficial for speeding up these computations. AI PCs from NextComputing support multiple professional-grade GPUs from NVIDIA featuring the newest advancements in AI performance.

Supported graphics cards include the recently announced GeForce RTX™ SUPER desktop GPUs and accelerated software tools for supercharged AI performance.

NVIDIA RTX GPUs — capable of running a broad range of applications at the highest performance — unlock the full potential of generative AI on PCs. Tensor Cores in these GPUs dramatically speed AI performance across the most demanding applications. Workstations with RTX GPUs can run NVIDIA AI Enterprise software, including TensorRT and NVIDIA RAPIDS™ for simplified, secure generative AI and data science development.

For Data Science use cases, NextComputing AI PCs can also be built with GPUs based on NVIDIA’s Ampere architecture. The NVIDIA A800 40GB Active GPU accelerates data science, AI, and HPC workflows with 432 third-generation Tensor Cores to maximize AI performance and ultra-fast, efficient inference capabilities. With third-generation NVIDIA NVLink technology, the A800 40GB Active offers scalable performance for heavy AI workloads, doubling the effective memory footprint and enabling GPU-to-GPU data transfers at up to 400 gigabytes per second (GB/s) of bidirectional bandwidth. Bundled with NVIDIA AI Enterprise, this board makes for an AI-ready development platform, delivering workstations ideally suited to the needs of skilled AI developers and data scientists.
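As a quick, hedged bit of arithmetic on what that bandwidth figure implies: moving a full 40GB working set between two linked GPUs at the quoted 400 GB/s is, in the idealized case, a fraction of a second. This treats the quoted bidirectional peak as usable throughput, ignoring protocol overhead and the direction split, so it is an upper bound rather than a measured result.

```python
# Idealized transfer-time arithmetic for the quoted NVLink figure.
# Real transfers see protocol overhead and per-direction limits;
# this is a ceiling, not a benchmark.

gpu_memory_gb = 40        # per-GPU memory on the A800 40GB Active
nvlink_gb_per_s = 400     # quoted bidirectional NVLink bandwidth

transfer_seconds = gpu_memory_gb / nvlink_gb_per_s
print(transfer_seconds)   # 0.1 -> a full-memory transfer in ~100 ms, idealized
```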

The Right Tools

To help developers quickly create, test and customize pretrained generative AI models and Large Multimodal Models (LMMs) using PC-class performance and memory footprint, NVIDIA recently announced NVIDIA AI Workbench. AI Workbench offers streamlined access to popular repositories like Hugging Face, GitHub and NVIDIA NGC™, along with a simplified user interface that enables developers to easily reproduce, collaborate on and migrate projects.

NVIDIA TensorRT-LLM (TRT-LLM), an open-source library that accelerates and optimizes inference performance of the latest large language models (LLMs), now supports more pre-optimized models for PCs. Accelerated by TRT-LLM, Chat with RTX, a recently released NVIDIA tech demo, allows AI enthusiasts to interact with their notes, documents and other content.

Support for Language Models


NextComputing can integrate various open-source language models, including BERT.

BERT, or Bidirectional Encoder Representations from Transformers, is a powerful natural language processing model that has been used for a variety of tasks, including text classification, question answering, and natural language inference.
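For a flavor of how BERT is pretrained, here is a toy sketch of its masked-language-model objective: roughly 15% of input tokens are hidden and the model must predict them from bidirectional context. This is purely illustrative; real BERT operates on WordPiece subword tokens and mixes in random and unchanged replacements, and the fixed seed here exists only to make the demo repeatable.

```python
import random

# Toy illustration of BERT-style masked-language-model pretraining:
# randomly hide ~15% of tokens and record which ones the model would
# have to predict from the surrounding context.

def mask_tokens(tokens, mask_rate=0.15, seed=1):
    rng = random.Random(seed)        # fixed seed for a repeatable demo
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            targets[i] = tok         # ground truth the model must recover
            masked.append("[MASK]")
        else:
            masked.append(tok)
    return masked, targets

tokens = "the quick brown fox jumps over the lazy dog".split()
masked, targets = mask_tokens(tokens)
print(" ".join(masked))  # [MASK] quick brown fox jumps over the lazy [MASK]
print(targets)           # {0: 'the', 8: 'dog'}
```

Training a model to fill those blanks over billions of sentences is what makes BERT useful for the downstream tasks listed above.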

Are you a researcher, developer, or entrepreneur who wants to train BERT models? If so, you need a powerful PC that can handle the task. But with so many options on the market, how do you choose the right one? That’s where NextComputing can help. We work with you to produce a purpose-built AI PC that is perfect for training BERT and other transformer models. Our AI PCs are powerful, affordable, and easy to use, with features that make them ideal for BERT training, such as:

  • Large amounts of memory: BERT models require a lot of memory to train. Our PCs have plenty of memory to handle even the largest models.
  • Fast processors and GPUs: BERT training can be time-consuming. Our PCs pair fast processors with GPU acceleration to speed up the training process.
  • User-friendly software: Our PCs come with user-friendly software that makes it easy to train BERT models.
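The memory point above can be made concrete with a hedged back-of-envelope estimate. A common fp32 rule of thumb is that weights, gradients, and the two Adam optimizer moments each cost 4 bytes per parameter, i.e. about 16 bytes per parameter before activations and batch data; BERT-large's ~340M parameters are used here as the illustrative case.

```python
# Back-of-envelope training-memory estimate for a transformer.
# Rule of thumb (fp32 + Adam): weights + gradients + two optimizer
# moments ~= 16 bytes per parameter, excluding activations and batches.

def training_memory_gb(params_millions: float, bytes_per_param: int = 16) -> float:
    return params_millions * 1e6 * bytes_per_param / 1e9

# BERT-large, ~340M parameters:
print(round(training_memory_gb(340), 1))  # ~5.4 GB before activations
```

Activations, optimizer workspace, and large batch sizes multiply this figure quickly, which is why high-RAM, high-VRAM systems pay off for training.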

If you're looking for a powerful, affordable, and easy-to-use AI PC for training BERT models, look no further. We offer a variety of models to choose from, so you can find the perfect one for your needs. Contact us to review your requirements and start training BERT models like a pro!

Featured Systems

NextStation-TR

  • Designed for best-in-class components - Built to use AMD motherboards that leverage the extreme power of the latest AMD Ryzen Threadripper, AMD Ryzen Threadripper PRO, and AMD EPYC processors
  • Versatile form factor - Stackable design and removable front and rear side rack ears for use as either a rack mount or desktop system. Durable, compact, and deployable aluminum alloy chassis with active front-to-back cooling.
  • Modular design - Built for optimal airflow volume and direction for best cooling/thermal performance with high-end PCI Express cards
  • XL version available - An extended version of the NextStation-TR is available with a deeper chassis designed to support NVIDIA RTX Ampere series GPUs for server/GPU co-processing
Learn More

Edge XT

  • Unprecedented Processing Power: Workstation-class AMD Ryzen Threadripper, Threadripper PRO, AMD EPYC, or Intel Xeon processors. Configurations include high CPU core counts for 3D rendering of images and animations, video encoding, and Elastic/Kibana data visualization use cases, as well as fast CPU clock speeds for 3D modeling.
  • Performance tuned: Optimized for popular creative and developer applications, then configured and integrated for your requirements
  • Multi-GPU support: Leverage one or more full-size workstation-class GPU cards for graphics or AI workflows and machine learning optimized performance
  • Massive storage: Multiple storage options including PCI Express based or SATA-based SSDs
AMD-Based Configurations
Intel-Based Configurations

System Specs

Available CPUs
  • AMD Ryzen Threadripper PRO 7955WX 16-Core 4.5 GHz (5.3 GHz Max Boost) Socket sTR5 350W Workstation Processor
  • For custom configurations, multiple models are available from AMD Ryzen Threadripper PRO series
  • 5th Gen Intel Xeon Scalable Processor

Memory

256GB DDR5 6000MHz (2 x 128GB)

Available GPUs

  • NVIDIA RTX A6000 48GB GDDR6 PCI Express 4.0
  • 1-2 NVIDIA A800 40GB Active high performance GPUs for deep learning, artificial intelligence, data science, simulation, and visualization
  • 1-5 NVIDIA A2 Tensor Core GPUs: The NVIDIA A2 is powered by the NVIDIA Ampere Architecture. It provides revolutionary precision performance to accelerate deep learning and machine learning training, as well as inference, video transcoding, AI audio and video effects, rendering, data analytics, virtual workstation, virtual desktop, and many other workloads. As part of NVIDIA AI, the A2 supports all AI frameworks and network models, delivering dramatic performance and efficiency that maximizes the utility of at-scale deployments

Storage

8TB-62TB PCIe high-performance SSDs with options for additional expansion

Operating Systems

Windows 11 Professional

Ready to Get Started?

View our catalog of products

Contact us to discuss your requirements