NVIDIA made its first bet on a Korean startup last week — and it was not for a chip. The company is Point2 Technology, a twelve-person outfit from KAIST, Korea's leading science university, that makes waveguide interconnect: the wiring that moves data between GPUs inside a data center. The investment was $76 million through NVentures, NVIDIA's venture arm, alongside Maverick Silicon and UMC Capital. It was announced across Korean and English-language tech press on April 23rd.
The financing is modest by AI infrastructure standards. A Series B extension for a minority stake is not a strategic lockup. But the direction is the story. NVIDIA is buying into the interconnect layer of its own supply chain — upstream of its GPU business — in a market where the bottleneck has quietly shifted from compute to connectivity.
Interconnect is the plumbing of a data center: the wiring between GPUs, between racks, between memory and compute. For the first decade of the deep learning era, the interesting engineering lived in the chips. The wires were commodity. That is changing.
The case for why connectivity now matters more than compute comes from several directions simultaneously. NAND Research, an independent semiconductor analyst, watched NVIDIA's GTC conference in March and came away with a blunt summary: the Rubin era is here, and the constraint is no longer how fast GPUs can compute — it is how fast they can talk to each other. DataGravity, a supply chain research firm, put a dollar figure on the structural shift: $4.15 billion in optical and interconnect acquisitions closed in the twelve months ending this quarter, including NVIDIA's $900 million purchase of fabric startup Enfabrica and Marvell's acquisition of Celestial AI for roughly $3.25 billion. NVIDIA separately invested $2 billion apiece in Lumentum and Coherent in March, specifically to lock up laser supply for optical transceivers. Electro-absorption modulated lasers — the light-emitting component inside every 800G and 1.6T optical transceiver — are undershipping demand by roughly 30 percent, according to DataGravity.
Point2 sits in a specific gap in this landscape. Copper cables are the incumbent for short-reach interconnect inside a rack. They draw almost no power and add near-zero latency. But at the signaling rates AI infrastructure now demands, passive copper has a reach problem. At 200G PAM4 — the modulation standard used in current AI interconnects, where four electrical levels encode two bits per symbol — copper cables lose signal integrity beyond roughly one meter. Thicker cables extend that reach but add weight and routing complexity that make cooling difficult inside a dense rack. Active copper cables, which use signal-conditioning chips at each end, stretch to roughly three meters at 100G PAM4 but add 10 watts of power per end and introduce 20 to 100 nanoseconds of latency. Optical fiber has effectively unlimited reach but burns 30 watts per connector at these speeds — roughly three times what an equivalent copper solution draws. Jensen Huang raised the optical power problem at NVIDIA's 2024 GTC: running optics across the NVLink interconnect backbone would require 20 kilowatts of transceivers alone.
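The PAM4 mechanics above are worth making concrete. A toy encoder shows why the scheme is attractive and why it is hard on copper: two bits per symbol halves the symbol rate for a given bit rate, but the four closely spaced amplitude levels are far less tolerant of the attenuation that builds up over a long cable. The Gray-coded level mapping below is illustrative, not a reproduction of any vendor's implementation.

```python
# Toy PAM4 encoder: four amplitude levels carry two bits per symbol,
# so a PAM4 link signals at half the symbol rate of NRZ (one bit per
# symbol) for the same bit rate. Gray coding keeps adjacent levels
# one bit apart, limiting the damage from a single level error.

GRAY_TO_LEVEL = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Map an even-length bit sequence to PAM4 amplitude levels."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits in pairs"
    return [GRAY_TO_LEVEL[(bits[i], bits[i + 1])]
            for i in range(0, len(bits), 2)]

bits = [1, 0, 0, 1, 1, 1, 0, 0]
symbols = pam4_encode(bits)
print(symbols)        # 8 bits become 4 symbols: [3, -1, 1, -3]

# At 200G PAM4, 200 Gb/s / 2 bits per symbol = 100 Gbaud on the wire.
baud_ghz = 200 / 2
print(baud_ghz)       # -> 100.0
```

The halved symbol rate is the whole point: at a given bit rate, PAM4 needs half the channel bandwidth of NRZ, at the cost of a third of the voltage margin between levels.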
e-Tube is Point2's answer to this mid-reach gap. The technology modulates digital data into radio frequency signals, couples them into a plastic waveguide — a rectangular channel roughly 3.5 millimeters wide and 0.6 millimeters thick — and decodes them on the other end using a mixed-signal system-on-chip, which handles both analog radio signals and digital processing in a single package. It requires no digital signal processing, the stage that makes optical and active copper solutions power-hungry. The 2020 proof-of-concept was published in Nature. A collaboration with Keysight, announced in January, is validating multi-terabit performance for AI cluster interconnects. Foxconn Interconnect Technology is separately partnered with Point2 to develop the technology for 1.6 terabit and 3.2 terabit speeds.
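A back-of-envelope check makes the channel dimensions plausible: a dielectric channel guides a signal well when its cross-section is comparable to the wavelength inside the material. The dielectric constant below is an assumed, typical value for polymers — Point2 does not publish the material — so this is a rough sketch, not a reconstruction of the e-Tube design.

```python
# Rough sizing check: wavelength inside a polymer channel at mm-wave
# carrier frequencies, compared with the ~3.5 mm e-Tube channel width
# quoted in the article. EPS_R = 2.3 is an assumed, representative
# polymer permittivity, not a published Point2 figure.

C = 3.0e8      # speed of light in vacuum, m/s
EPS_R = 2.3    # assumed relative permittivity of the plastic

def guided_wavelength_mm(freq_hz):
    """Free-space wavelength shortened by sqrt(eps_r), in millimeters."""
    return C / (freq_hz * EPS_R ** 0.5) * 1e3

for f_ghz in (60, 100, 140):
    wl = guided_wavelength_mm(f_ghz * 1e9)
    print(f"{f_ghz} GHz carrier -> ~{wl:.1f} mm inside the dielectric")
```

At carriers in the tens-to-low-hundreds of gigahertz, the in-material wavelength lands in the 1 to 3 millimeter range — the same scale as the quoted channel width, which is why a thin plastic ribbon can confine these signals at all.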
The published claims are specific: e-Tube delivers ten times the reach of passive copper at comparable cost, consumes one-third the power of optical interconnects, and achieves one-thousandth the latency. Those numbers come from Point2's own announcement and the KAIST paper. They have not been independently validated at scale in a production AI cluster. The Keysight collaboration is a test and measurement relationship, not a deployment announcement. No hyperscaler has publicly named Point2 as a supplier.
There is a technical caveat worth knowing about. Viks, a semiconductor industry newsletter, published a detailed analysis of e-Tube last year and identified the microstrip-to-waveguide transition as a potential bandwidth scaling bottleneck. The electromagnetic coupling structure that guides the radio signal from the chip into the plastic waveguide limits usable bandwidth to roughly 50 to 70 gigahertz in the current implementation — which may be insufficient at 200G PAM4 and above. Whether that is a fundamental constraint or a solvable engineering problem is an open question.
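The arithmetic behind that concern is simple to check. PAM4 carries two bits per symbol, and the minimum (Nyquist) bandwidth of a baseband signal is half its symbol rate — so 200G PAM4 needs at least 50 gigahertz before any margin for pulse shaping and channel roll-off, which is the entire low end of the 50-to-70-gigahertz window Viks identifies. A quick sketch:

```python
# Why a 50-70 GHz transition bandwidth is marginal for 200G PAM4:
# PAM4 carries 2 bits per symbol, and the Nyquist (minimum) bandwidth
# of a baseband signal is half its symbol rate. Real links need
# headroom above Nyquist for pulse shaping and channel roll-off.

def nyquist_bw_ghz(bit_rate_gbps, bits_per_symbol=2):
    baud = bit_rate_gbps / bits_per_symbol   # symbol rate in GBd
    return baud / 2                          # minimum bandwidth in GHz

for rate in (100, 200, 400):
    print(f"{rate}G PAM4 -> {nyquist_bw_ghz(rate):.0f} GHz Nyquist bandwidth")
# 200G PAM4 needs >= 50 GHz, consuming the entire low end of the
# quoted 50-70 GHz window before any shaping margin is applied.
```

This is why the transition matters: a per-lane rate of 400G PAM4 would need 100 gigahertz of usable bandwidth, well outside the quoted window regardless of where in the 50-to-70-gigahertz range the current implementation actually lands.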
NVIDIA's investment is consistent with a broader pattern of acquiring or investing in upstream interconnect technology rather than relying solely on merchant silicon. The company already owns Mellanox for InfiniBand and has built Spectrum-X for Ethernet. NVentures' stake in Point2 is a different kind of bet: not on a chip that competes with NVIDIA's products, but on a physical layer technology that could become essential to how AI clusters are wired if copper runs out of headroom and optics runs too hot.
The 2026 BloombergNEF Pioneer award, cited in Point2's announcement, is an outside-industry validation signal. BloombergNEF makes money on research subscriptions and infrastructure analysis; it has no obvious commercial interest in promoting a specific interconnect startup. That gives the citation more weight than a press release quote from a venture partner.
What this is not: a confirmed deployment. A $76 million Series B extension is not proof that e-Tube is inside the next generation of NVIDIA's reference architecture. The investment tells you NVIDIA thinks the problem is worth owning a small option on, not that the problem is solved. The gap between a KAIST physics paper and a production AI interconnect is years and several engineering pivots.
What it is: evidence that the bottleneck conversation in AI infrastructure has moved from chips to wires, and that the wiring inside a data center is now a place where strategic capital is flowing. Point2 is not a story about a company that built something. It is a story about a problem that is becoming too expensive to ignore.