
Articles from GigaIO

GigaIO’s Power-efficient Interconnect Technology Achieves Breakthrough AI Performance: 2x Faster Training and Fine-Tuning with 83.5x Lower Latency
GigaIO, a pioneer in scalable edge-to-core AI platforms for all accelerators that are easy to deploy and manage, has unveiled compelling AI training, fine-tuning, and inference benchmarks that demonstrate the performance, cost, and power efficiency of GigaIO’s AI fabric compared with RDMA over Converged Ethernet (RoCE). Key results include 2x faster training and fine-tuning and 83x better time to first token for inference, showing how smarter interconnects can have a transformative impact on AI infrastructure.
By GigaIO · Via Business Wire · April 29, 2025
GigaIO Announces General Availability for Gryf, the World’s First Portable AI Supercomputer
GigaIO, a pioneer in scalable edge-to-core AI platforms for all accelerators that are easy to deploy and manage, today announced the general availability of Gryf™, the world’s first suitcase-sized AI supercomputer. Co-designed by GigaIO and SourceCode, Gryf delivers datacenter-class computing power directly to edge operations, enabling real-time intelligence and analytics in field conditions where this was previously impossible. The platform has already secured significant orders from the U.S. Department of Defense and the intelligence community, validating its game-changing capabilities for mission-critical applications in challenging environments.
By GigaIO · Via Business Wire · April 25, 2025
GigaIO’s Award-winning SuperNODE Technology to Power the AI Infrastructure of TensorWave’s “TensorNODE” Deployment with AMD Instinct MI300X Accelerators at Scale
GigaIO, the award-winning provider of open workload-defined infrastructure for AI and accelerated computing, today announced the largest order yet for its flagship SuperNODE™, which will utilize tens of thousands of the AMD Instinct MI300X accelerators launched yesterday. GigaIO’s novel infrastructure will form the backbone of a bare-metal specialized AI cloud code-named “TensorNODE,” to be built by cloud provider TensorWave to provide access to AMD data center GPUs, especially for use with LLMs.
By GigaIO · Via Business Wire · December 7, 2023
GigaIO Introduces the First Ever 32 GPU Single-Node Supercomputer for Next-Gen AI and Technical Computing
GigaIO, the leading provider of workload-defined infrastructure for AI and technical computing workflows, recently announced that it successfully connected 32 AMD Instinct MI210 accelerators to a single server utilizing the company’s transformative FabreX ultra-low-latency PCIe memory fabric. Available today, the 32-GPU engineered solution, called SuperNODE, offers a simplified system capable of scaling multiple accelerator technologies such as GPUs and FPGAs without the latency, cost, and power overhead required for multi-CPU systems.
By GigaIO · Via Business Wire · July 13, 2023
GigaIO Announces Series of Composability Appliances Powered by AMD, First Edition Purpose-Built for Higher Education and Launched at ISC
GigaIO, provider of the world’s only open rack-scale computing platform for advanced scale workflows, today announced the launch of a new composability appliance. The GigaIO Composability Appliance: University Edition, powered by AMD, is a flexible environment for heterogeneous compute designed for Higher Education that can easily accommodate the different workloads required for teaching, faculty research, and graduate-student research. Future iterations of the appliance will bring the benefits of composability to Manufacturing and Life Science users over the coming year.
By GigaIO · Via Business Wire · May 31, 2022
GigaIO FabreX for Composable Infrastructure Now Supported Natively in NVIDIA Bright Cluster Manager 9.2
GigaIO, provider of the world’s only open rack-scale computing platform for advanced scale workflows, today announced that GigaIO FabreX™ for composable infrastructure is now natively supported in NVIDIA Bright Cluster Manager 9.2. The integration, led by NVIDIA in collaboration with GigaIO, ensures customers can build easy-to-manage, platform-independent compute clusters that scale in minutes to handle the most demanding AI and HPC workloads.
By GigaIO · Via Business Wire · May 19, 2022
Esteemed HPC Guru Dr. James Cuff Comes Out of Retirement to Help GigaIO Team Build Impossible Configurations Through Composability
GigaIO, creator of next-gen data center rack-scale architecture for Artificial Intelligence (AI) and High Performance Computing (HPC) solutions, today announced the hire of Dr. James Cuff as Chief of Scientific Computing and Partnerships. Dr. Cuff will be tasked with helping design composable architectures that function at scale and provide the foundation for the world’s most complex and challenging workloads. In this role, he will support and extend GigaIO’s deeply technical, sophisticated scientific computing platforms and services in order to deliver truly open, at-scale solutions for both communities and partners.
By GigaIO · Via Business Wire · March 29, 2022
GigaIO Awarded Lonestar6 Contract in TACC’s First Bid for Composable Disaggregated Infrastructure
GigaIO, creator of next-gen data center rack-scale architecture for Artificial Intelligence (AI) and High Performance Computing (HPC) solutions, today announced that production has begun on its CDI testbed in the Lonestar6 system at The Texas Advanced Computing Center (TACC) at The University of Texas at Austin. Lonestar6 is a 600-node system utilizing Milan-based AMD servers from Dell Technologies and A100 GPUs from NVIDIA, and is the first platform at TACC to incorporate Composable Disaggregated Infrastructure (CDI) in order to benefit from decentralized server infrastructure.
By GigaIO · Via Business Wire · March 22, 2022
GigaIO Raises $14.7 Million in Oversubscribed Series B Funding
GigaIO, the creator of next-generation data center rack-scale architecture for artificial intelligence (AI) and high-performance computing (HPC) solutions, today announced the completion of a Series B round of funding totaling $14.7 million. Impact Venture Capital led the funding round, which was oversubscribed by 50% and included participation from Mark IV Capital, Lagomaj Capital, SK Hynix, and Four Palms Ventures.
By GigaIO · Via Business Wire · September 16, 2021
GigaIO Bolsters Executive Sales Leadership with Two Key New Hires to Execute Growth Strategy
GigaIO, the creator of next-generation data center rack-scale architecture for artificial intelligence (AI) and high-performance computing (HPC) solutions, announced today that it has expanded its leadership team, hiring Eric Oberhofer as Vice President of North American Sales and Matt Demas as Chief Technical Officer, Global Sales.
By GigaIO · Via Business Wire · August 24, 2021
GigaIO Is Selected to Bring Composability to Bold New National Research Platform
Building on the successes of past collaborations with the San Diego Supercomputer Center (SDSC) at UC San Diego, GigaIO, the creator of next-generation data center rack-scale architecture for artificial intelligence (AI) and high-performance computing (HPC) solutions, is proud to announce that its low-latency universal dynamic fabric, FabreX™, was selected as the technology of choice for the new Prototype National Research Platform (NRP). This National Science Foundation-funded cyberinfrastructure ecosystem is an innovative, all-in-one system of computing resources, research and education networks, edge computing devices, and other instruments, designed as a testbed to expedite science and enable transformative discoveries.
By GigaIO · Via Business Wire · July 15, 2021
GigaIO Introduces New Scalability for AI Workloads with FabreX™ 2.2 for Dynamically Configured Rack-scale Architectures
GigaIO, the creators of next-generation data center rack-scale architecture for AI and High-Performance Computing solutions, today announced FabreX release 2.2, the industry’s first native PCI Express (PCIe) Gen4 universal dynamic fabric, which supports NVMe-oF, GDR, MPI, and TCP/IP. This new release introduces an industry first in scalability over a PCIe fabric for AI workloads by enabling the creation of composable GigaPods™ and GigaClusters™ with cascaded and interlinked switches. In addition, FabreX 2.2 delivers performance improvements of up to 30% across all server-to-server communications through new and improved DMA implementations.
By GigaIO · Via Business Wire · April 20, 2021