NextComputing’s AI PC product line offers powerful and versatile computers that are configured to handle even the most demanding AI workloads with ease. Whether you're a researcher, developer, or entrepreneur, our AI PCs are the powerful, professional-grade tools you need to bring your AI projects to life.
Here are just a few of the things you can do with a NextComputing AI PC:
- Train and deploy complex AI models.
- Analyze large amounts of data.
- Develop new AI applications.
- Generate new and original content from a wide variety of inputs.
The NextComputing Advantage
- Processing Power (CPU/GPU) - AI development often involves complex mathematical computations, and having a powerful processor is crucial. Graphics Processing Units (GPUs) are especially important for training machine learning models as they excel at parallel processing. NextComputing systems are available with the latest AMD Ryzen Threadripper PRO and Intel Xeon processors as well as NVIDIA GeForce RTX GPUs for performance and parallel processing at the cutting edge.
- Memory - Sufficient RAM is necessary to handle large datasets and complex algorithms. AI tasks, especially deep learning, can be memory-intensive, so our systems ship with 256GB of DDR5 RAM to keep even the largest workloads running smoothly.
- Storage - AI projects may involve working with large datasets, so our systems offer 8TB-62TB of high-performance PCIe SSD storage, with options for even further expansion, delivering fast data access and strong overall system performance.
- Portability - The demands of resource-intensive tasks like model training are far beyond any laptop, but NextComputing systems offer extreme performance in high-density systems that can be packed up and redeployed quickly and securely, wherever you want to work.
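To loosely illustrate the parallel-processing point above: the core operation in neural-network training is the matrix multiply, where every output element can be computed independently, which is exactly the work GPUs spread across thousands of cores. A minimal NumPy sketch of one dense layer's forward pass (CPU-only here; the batch and layer sizes are illustrative assumptions, not from any specific model):

```python
import numpy as np

# One dense layer's forward pass is a single large matrix multiply.
# Each of the 64 * 3072 output elements is an independent dot product,
# which is why this maps so well onto a GPU's parallel cores.
batch, features, hidden = 64, 768, 3072  # illustrative sizes

x = np.random.rand(batch, features).astype(np.float32)   # input activations
w = np.random.rand(features, hidden).astype(np.float32)  # layer weights

y = x @ w  # (64, 768) @ (768, 3072) -> (64, 3072)
print(y.shape)
```

Training repeats operations like this billions of times, which is why a workstation-class GPU shortens training from days to hours compared with a CPU alone.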
The Future of AI - Advanced by AMD
AMD Ryzen™ AI will bring amazing experiences and innovative solutions to consumers and commercial audiences. These processors feature built-in AI-centric features to accelerate your workflow.
AMD Ryzen includes the world’s first x86 dedicated AI engine. This specialized engine enables AI applications directly on your PC while relieving some of the burden on the CPU and GPU. Innovations like this make AMD Ryzen the ideal tool for AI tasks such as model training, threat detection, coding assistance, and data modeling.
AI Everywhere with Intel
5th Gen Intel® Xeon® Scalable processors deliver performance increases and benefits across key workloads, such as artificial intelligence, high-performance computing, networking, storage, database and security.
With AI acceleration in every core, 5th Gen Xeon processors address demanding end-to-end AI workloads before customers need to add discrete accelerators — including up to 42% higher inference performance over previous generations and less than 100 millisecond latency on large language models (LLMs) under 20 billion parameters.
Supported graphics cards include the recently announced GeForce RTX™ SUPER desktop GPUs, paired with accelerated software tools for supercharged AI performance.
NVIDIA RTX GPUs — capable of running a broad range of applications at the highest performance — unlock the full potential of generative AI on PCs. Tensor Cores in these GPUs dramatically speed AI performance across the most demanding applications. Workstations with RTX GPUs can run NVIDIA AI Enterprise software, including TensorRT and NVIDIA RAPIDS™ for simplified, secure generative AI and data science development.
The Right Tools
To help developers quickly create, test and customize pretrained generative AI models and Large Multimodal Models (LMMs) using PC-class performance and memory footprint, NVIDIA recently announced NVIDIA AI Workbench. AI Workbench offers streamlined access to popular repositories like Hugging Face, GitHub and NVIDIA NGC™, along with a simplified user interface that enables developers to easily reproduce, collaborate on and migrate projects.
NVIDIA TensorRT-LLM (TRT-LLM), an open-source library that accelerates and optimizes inference performance of the latest large language models (LLMs), now supports more pre-optimized models for PCs. Accelerated by TRT-LLM, Chat with RTX, an NVIDIA tech demo also releasing this month, allows AI enthusiasts to interact with their notes, documents and other content.
Support for LMMs
NextComputing can integrate various open-source models, including Large Multimodal Models (LMMs) and language models such as BERT.
BERT, or Bidirectional Encoder Representations from Transformers, is a powerful natural language processing model that has been used for a variety of tasks, including text classification, question answering, and natural language inference.
Are you a researcher, developer, or entrepreneur who wants to train BERT models? If so, you need a powerful PC that can handle the task. But with so many options on the market, how do you choose the right one? That’s where NextComputing can help. We work with you to produce a purpose-built AI PC that is perfect for training BERT and other large models. Our AI PCs are powerful, affordable, and easy to use, with features that make them ideal for BERT training, such as:
- Large amounts of memory: BERT models require a lot of memory to train. Our PCs have plenty of memory to handle even the largest models.
- Fast processors and GPUs: BERT training can be time-consuming. Our PCs have fast processors and GPUs that dramatically speed up the training process.
- User-friendly software: Our PCs come with user-friendly software that makes it easy to train BERT models.
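To put the memory point above in rough numbers, here is a back-of-the-envelope sketch in Python. It assumes FP32 training with the Adam optimizer (weights, gradients, and two optimizer moment buffers per parameter) and uses BERT-large's roughly 340 million parameters; activation memory, which grows with batch size, is deliberately excluded, so the real footprint is higher:

```python
# Rough training-memory estimate for BERT-large (~340M parameters).
# Assumptions: FP32 training with Adam, which keeps per parameter
#   4 B weights + 4 B gradients + 8 B optimizer moments = 16 B.
# Activations are excluded and would add several GiB at typical batch sizes.
params = 340_000_000

bytes_per_param = 4 + 4 + 8  # weights + gradients + Adam moment buffers
total_gib = params * bytes_per_param / 2**30

print(f"~{total_gib:.1f} GiB before activations")
```

Even under these simplifying assumptions, a single BERT-large training run consumes several gigabytes of memory before a single activation is stored, which is why generous RAM and GPU memory matter.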
If you're looking for a powerful, affordable, and easy-to-use AI PC for training BERT models, look no further. We offer a variety of models to choose from, so you can find the perfect one for your needs. Contact us to review your requirements and start training BERT models like a pro!
- Designed for best-in-class components - Built to use AMD motherboards that leverage the extreme power of the latest AMD Ryzen Threadripper, AMD Ryzen Threadripper PRO, and AMD EPYC processors
- Versatile form factor - Stackable design and removable front and rear side rack ears for use as either a rack mount or desktop system. Durable, compact, and deployable aluminum alloy chassis with active front-to-back cooling.
- Modular design - Built for optimal airflow volume and direction for best cooling/thermal performance with high-end PCI Express cards
- XL version available - An extended version of the NextStation-TR is available with a deeper chassis designed to support NVIDIA RTX Ampere series GPUs for server/GPU co-processing
- Unprecedented Processing Power: Workstation-class AMD Ryzen Threadripper, Threadripper PRO, AMD EPYC, or Intel Xeon processors. Configurations include high CPU core count for 3D rendering of images and animations, encoding videos, and Elastic/Kibana data visualization use cases and fast CPU clock speeds for 3D modeling use cases.
- Performance tuned: Optimized for popular creative and developer applications, then configured and integrated for your requirements
- Multi-GPU support: Leverage one or more full-size workstation-class GPU cards for graphics, AI workflows, and machine-learning-optimized performance
- Massive storage: Multiple storage options including PCI Express based or SATA-based SSDs
- Memory: 256GB DDR5 6000MHz (2 x 128GB)
- Storage: 8TB-62TB PCIe high-performance SSDs with options for additional expansion
- Operating system: Windows 11 Professional