
Wafer/Chip Level Testing

Wafer/chip-level testing is where photonic devices stop being design concepts and start being manufacturable hardware. At this stage, the job is not only to capture spectral data, but to do it fast enough for real engineering throughput, with enough visibility to separate true device behavior from coupling, reflection, or routing artifacts. Santec supports that workflow with a software-led test environment built around swept characterization, multi-port measurement, in-situ OFDR analysis, and scalable switching architecture for both development and production use.


Wafer/chip-level testing: process metrology before packaging

A process engineer once spent three weeks chasing what looked like a yield problem in a splitter tree layout, only to find that the variation was almost entirely in the probe coupling, not the devices. Every die that looked weak on first test looked fine on retest with a re-seated fiber. The root cause was never in the wafer. It was in a workflow that had no way to distinguish coupling instability from actual device performance.

That kind of ambiguity is expensive at wafer level in a way it simply is not at later stages. A packaged component that fails can be rejected. At wafer level, a false negative sends weak dies into packaging, while a false positive sends good dies to scrap. Either way, the feedback loop between test, design, and fabrication slows down, and in photonics development that loop is already one of the more painful parts of the process.

A credible wafer/chip-level test flow has to do three things at once: characterize optical performance with enough fidelity to support engineering decisions, localize anomalies without leaving the measurement environment, and scale across die counts and port configurations without requiring the station to be rebuilt every time the DUT changes.

Spectral characterization that preserves engineering value

The basic requirement is clean spectral measurement of the structures that actually determine whether a die is worth packaging: waveguides, splitters, filters, AWGs, ring-based devices, and routing elements. At wafer level, speed matters, but not at the cost of resolution. A sweep fast enough to support practical screening but coarse enough to miss a 30 pm ripple inside a passband is not a faster test; it is a less informative one.

This is the specific tension the TSL-570 is built around, combining sweep speeds up to 200 nm/s with 0.1 pm resolution and wavelength accuracy that holds up across a full production shift. For wafer teams that want throughput without converting the result into a rough yes/no screen, that combination is the relevant specification, not peak sweep speed in isolation.
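The speed/resolution tension can be made concrete with a back-of-envelope calculation. The sketch below (illustrative only; the factor-of-two Nyquist-style criterion is our simplifying assumption) estimates the sampling rate a swept measurement needs to resolve a 30 pm ripple at a 200 nm/s sweep, using the figures quoted above.

```python
# Back-of-envelope check: what sampling density does a swept measurement
# need to resolve fine spectral structure at a given sweep speed?
# Figures match those quoted in the text (200 nm/s, 30 pm ripple);
# the "two points per feature" criterion is a simplifying assumption.

def min_samples_per_second(sweep_speed_nm_s: float, feature_pm: float) -> float:
    """Sample rate needed to place at least 2 points per spectral feature."""
    step_nm = (feature_pm / 1000.0) / 2.0   # max wavelength step, Nyquist-style
    return sweep_speed_nm_s / step_nm

# Resolving a 30 pm passband ripple during a 200 nm/s sweep:
rate = min_samples_per_second(200.0, 30.0)
print(f"{rate:.0f} samples/s")   # ~13,333 samples/s

# Points across a 60 nm sweep segment sampled at a 0.1 pm step:
points = 60.0 / (0.1 / 1000.0)
print(f"{points:.0f} points")    # 600,000 points
```

The point of the arithmetic is that a screening-speed sweep is only informative if the acquisition keeps pace with it; slowing the sweep or coarsening the step changes what the test can see, not just how long it takes.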

Multi-port acquisition as a throughput problem, not a hardware problem

Many wafer-level PIC structures produce multiple outputs, and measuring them serially under shifting conditions is one of the more reliable ways to generate data that cannot be interpreted. Splitter networks, AWGs, and wavelength-routing structures create a measurement load that becomes impractical port-by-port, and the issue is not just time. It is that serial measurements made under slightly different conditions tell you about drift as much as they tell you about the device.

The MPM-220 addresses this directly: simultaneous acquisition across up to 20 ports, up to one million logging points per port, and real-time referenced insertion loss when paired with a Santec tunable laser. The reference integrity piece matters here as much as the port count, because throughput at wafer level is often limited by how efficiently the system holds measurement conditions stable across all outputs, not by how fast the laser sweeps.
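Why same-instant referencing matters can be shown in a few lines. This sketch (a hypothetical helper, not an instrument API; the power values are invented) computes insertion loss for each port against a reference captured in the same snapshot, and checks that a common source-power drift cancels out.

```python
import math

def referenced_il_db(port_w: float, ref_w: float) -> float:
    # Hypothetical helper, not an instrument API: insertion loss in dB of one
    # DUT output, normalized to reference power captured at the same instant.
    return -10.0 * math.log10(port_w / ref_w)

# One simultaneous snapshot: reference tap plus several output ports (watts).
ref = 1.00e-3
ports = [0.48e-3, 0.47e-3, 0.12e-3]        # e.g. splitter-tree outputs
il = [referenced_il_db(p, ref) for p in ports]

# A 5% source-power drift scales reference and ports identically, so the
# referenced IL is unchanged -- serial, unreferenced readings would not be.
drift = 0.95
il_drifted = [referenced_il_db(p * drift, ref * drift) for p in ports]
assert all(abs(a - b) < 1e-12 for a, b in zip(il, il_drifted))
```

Serial port-by-port measurement breaks the cancellation: each port sees a different instantaneous source power, so drift shows up as apparent port-to-port variation.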


In-situ diagnosis when a spectrum is not enough

A spectral anomaly tells you something is wrong. It does not tell you where. In a wafer-level workflow, that distinction has real consequences: a resonance distortion, an unexplained insertion loss step, or a passband asymmetry could originate from the intended photonic structure, a localized waveguide defect, a coupling artifact, or a propagation discontinuity somewhere along the optical path. Treating all of these as equivalent and iterating on design is expensive.

The SPA-110 is Santec's OFDR-based analyzer for exactly this layer of diagnosis, with 5 µm sampling resolution, a 30 m measurement range, and up to 160 nm sweep range for simultaneous ORL and WDL characterization. A feedthrough port allows integration of an external power meter for active silicon photonics alignment. In practice, that means a team can move from a suspicious spectrum to a spatially resolved root cause without changing stations or measurement philosophy.
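The quoted sampling resolution follows from standard OFDR physics: two-point resolution scales inversely with the optical frequency span of the sweep. The sketch below uses the 160 nm span from the text; the 1550 nm center wavelength and the group index n_g = 1.47 (typical for single-mode fiber) are our assumptions for illustration.

```python
# OFDR two-point spatial resolution: dz = c / (2 * n_g * dF),
# where dF = c * d_lambda / lambda^2 is the optical frequency span.
# 160 nm span is from the text; 1550 nm center and n_g = 1.47 are assumed.

C = 299_792_458.0  # speed of light, m/s

def ofdr_resolution_um(span_nm: float, center_nm: float, n_group: float) -> float:
    span_hz = C * (span_nm * 1e-9) / (center_nm * 1e-9) ** 2
    return C / (2.0 * n_group * span_hz) * 1e6

print(f"{ofdr_resolution_um(160.0, 1550.0, 1.47):.1f} um")  # ~5.1 um
```

A 160 nm sweep around 1550 nm corresponds to roughly 20 THz of frequency span, which is what puts micron-scale structures inside a PIC within reach of the reflectometry data.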

Coupling stability as the hidden variable in correlation

The single most underappreciated source of poor wafer-level correlation is not instrument accuracy. It is what happens at the optical interface before the signal reaches the instrument. Alignment drift between probe and grating coupler, inconsistent fiber seating, and reference conditions that shift between measurements can make one die appear worse than another even when the underlying structures are identical.

This is why both the STS system's real-time power referencing and the MPM-220's optical reference port are engineering decisions that matter beyond their headline specifications. The SPA-110's support for external power meter integration during active SiPh alignment follows the same logic. The philosophy behind all three is the same: reduce ambiguity at the interface before drawing conclusions about the device.
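The retest lesson from the splitter-tree anecdote can be expressed as a simple screening rule. This is a minimal sketch under our own assumptions (the 0.3 dB tolerance and the readings are invented), not a recommended production criterion: a die whose loss moves materially after the probe is re-seated is telling you about the interface, not the device.

```python
# Sketch of the retest heuristic from the splitter-tree anecdote: if a die's
# measured loss changes materially after the probe is re-seated, attribute
# the variation to coupling, not the device. Threshold is illustrative only.

def classify(first_db: float, retest_db: float, tol_db: float = 0.3) -> str:
    # Hypothetical rule: trust stable readings; unstable readings flag
    # the optical interface rather than the die.
    if abs(first_db - retest_db) > tol_db:
        return "coupling-unstable: re-seat and re-reference before judging die"
    return "stable: attribute result to device"

print(classify(6.8, 6.1))    # flags the coupling, not the die
print(classify(3.6, 3.55))   # stable enough to trust
```

A rule like this only works if the referencing described above has already removed source drift from the comparison; otherwise drift and coupling instability remain confounded.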

Scaling without rebuilding the station

A wafer test setup that has to be substantially reconfigured when the DUT changes from a single passive structure to a routing-heavy die, or when the program moves from engineering debug to volume screening, is a liability rather than an asset. Scalability depends on routing architecture, switching repeatability, and the ability to expand channel count without breaking the measurement logic that was already validated.

The OSX-100 is designed for this kind of growth, with configurable paths from 1x2 to 1x400, ±0.005 dB repeatability, insertion loss below 0.5 dB, and crosstalk below -80 dB. USB and Ethernet control allow it to fit into automated test sequences without requiring a separate integration project each time the test matrix expands.
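The quoted switch figures can be placed in an error budget. The worst-case framing below is our own (power addition of the leaked signal; no instrument API involved), using the -80 dB crosstalk and ±0.005 dB repeatability numbers from the text.

```python
import math

# Error-budget sketch for a switched measurement path, using the switch
# figures quoted in the text (+-0.005 dB repeatability, -80 dB crosstalk).
# The worst-case power-addition assumption is ours, for illustration.

def crosstalk_error_db(crosstalk_db: float) -> float:
    """Worst-case level error when a leak at crosstalk_db adds in power."""
    leak = 10.0 ** (crosstalk_db / 10.0)
    return 10.0 * math.log10(1.0 + leak)

xt = crosstalk_error_db(-80.0)
print(f"crosstalk contribution:    {xt:.2e} dB")  # ~4.3e-08 dB
print("repeatability contribution: +-0.005 dB")
# At -80 dB, crosstalk sits several orders of magnitude below the path
# repeatability, so switching does not dominate the IL uncertainty budget.
```

The practical takeaway is that at this crosstalk level, repeatability, not channel isolation, is the figure that bounds how finely die-to-die differences can be resolved through the switch.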

Software as the difference between data and decisions

Fast hardware does not produce a useful wafer-level workflow without a software layer that makes setup practical, keeps measurement recipes consistent, and returns analysis fast enough that engineers can act while the wafer is still on station. The STS platform includes custom software with DLL-based automation and documented coverage of system configuration, basic operations, data analysis, optical return loss measurement, and high-resolution characterization. The intent is a measurement environment where source, detection, routing, and analysis operate as one workflow, not as a collection of instruments that happen to share a bench.
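Recipe consistency is the part of that software layer that is easy to underestimate. As a minimal sketch only (the field names and structure are hypothetical and are not the STS software's actual API), a frozen recipe object is one way automation keeps sweep parameters identical across dies and makes point counts auditable.

```python
from dataclasses import dataclass

# Illustrative only: a minimal "measurement recipe" of the kind a
# software-led workflow keeps consistent across dies. Field names and
# structure are hypothetical, not the STS software's actual interface.

@dataclass(frozen=True)   # frozen: a recipe cannot drift mid-run
class SweepRecipe:
    start_nm: float
    stop_nm: float
    step_pm: float
    power_dbm: float
    ports: int

    def points_per_port(self) -> int:
        """Logging points implied by the sweep span and step."""
        return int(round((self.stop_nm - self.start_nm) / (self.step_pm * 1e-3))) + 1

recipe = SweepRecipe(start_nm=1500.0, stop_nm=1580.0, step_pm=10.0,
                     power_dbm=0.0, ports=20)
print(recipe.points_per_port())  # 8001
```

Whatever form the recipe takes, the design choice is the same one the text describes: parameters are declared once and reused, so a die-to-die comparison is never contaminated by a silently edited setup.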