
Quantum Photonics

Quantum photonics puts unusually high demands on optical control. What matters is not only reaching the target wavelength once, but keeping frequency, power, and spectral conditions under control long enough to trust the experiment. This page focuses on the practical instrument layer behind that stability: tunable sources, frequency locking, filtering, and optical power monitoring for quantum photonics setups that need to stay measurable, repeatable, and software-driven.


Why component inspection & testing demands a system, not a single measurement

A filter that passed incoming inspection once caused weeks of intermittent margin violations in a DWDM line system, not because it was defective, but because the test that cleared it used a 50 pm step size and never saw the 80 pm ripple sitting inside the passband. The optical trace looked clean. The component was not.

That kind of failure is more common than it should be, and almost always traceable to a test workflow that was designed to generate a number rather than characterize a component. Modern passive and multi-port photonic components fail quietly. The signal is a spectral ripple that only shows up with adequate sampling, a PDL penalty that's invisible until you hit the right polarization state, a back-reflection that only couples on certain port combinations. Catching these requires a workflow built around system-level thinking, not a single sweep and a pass/fail threshold.

Define what you are actually measuring before you touch an instrument

The most common source of "component variation" in a lab is not the component. It's an undefined reference plane.

Before selecting instruments or writing a test procedure, you need answers to three questions:

  • What is the acceptance metric: insertion loss at a wavelength, average IL over a band, passband ripple, PDL, channel uniformity?
  • Where exactly is the reference plane: connector face, pigtail end, or including an adapter?
  • What conditions are held constant: launch state, connector seating, temperature, dwell time between connections?

Skipping this step means two engineers on the same device will produce numbers that disagree, and neither is wrong. The measurement is real; the measurement definition was absent. In high-volume production or multi-site supplier qualification, that ambiguity compounds fast.
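One way to keep two engineers (or two sites) from diverging is to make the measurement definition a shared, versionable artifact rather than tribal knowledge. A minimal sketch, with illustrative field names and values that are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class MeasurementDefinition:
    """Shared test-method record answering the three questions above.

    Field names and allowed values here are illustrative.
    """
    acceptance_metric: str       # e.g. "avg_IL_over_band_dB", "passband_ripple_dB"
    band_nm: tuple               # (start, stop) wavelength band the metric covers
    reference_plane: str         # "connector_face" | "pigtail_end" | "with_adapter"
    held_conditions: dict = field(default_factory=dict)  # launch state, dwell, etc.

# Example: average insertion loss over the C-band, referenced at the connector face
spec = MeasurementDefinition(
    acceptance_metric="avg_IL_over_band_dB",
    band_nm=(1529.0, 1561.0),
    reference_plane="connector_face",
    held_conditions={"launch_state": "fixed", "dwell_s": 30},
)
```

Because the record is frozen, a station cannot silently "adjust" the definition mid-run; changing it means issuing a new version.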

Drift is not a calibration problem, it's a design problem

Swept-wavelength IL measurements are sensitive to slow changes that have nothing to do with the component: source power drift over a long sweep, connector micro-movement between reference and measurement, temperature-driven changes in fiber birefringence. Many labs treat this as a reason to recalibrate frequently. That's the wrong model.

The right model is to build real-time power normalization into the measurement itself, so that drift is continuously referenced out rather than periodically corrected. A system that normalizes power in real time during the sweep is measuring what you think it's measuring. A system that doesn't is measuring the component plus everything that moved since the last calibration, and in a production environment that interval is never short enough.

This is why swept test architectures that include a dedicated reference port or normalization channel behave fundamentally differently from those that don't, even when the tunable source and detector hardware look similar on paper.
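The arithmetic behind a reference channel is simple: because DUT and reference ports are sampled simultaneously, any common-mode source drift cancels in their ratio. A minimal sketch, with made-up numbers, assuming linear-power (mW) detector readings:

```python
import math

def normalized_il_db(p_dut_mw, p_ref_mw, ref_ratio):
    """Per-point insertion loss with real-time reference normalization.

    p_dut_mw, p_ref_mw -- simultaneous DUT and reference-port samples per
                          sweep point.
    ref_ratio          -- (p_dut / p_ref) recorded per point during the
                          through calibration; source drift between
                          calibration and measurement cancels in the ratio.
    """
    return [-10.0 * math.log10((d / r) / c)
            for d, r, c in zip(p_dut_mw, p_ref_mw, ref_ratio)]

# A 3 dB component measured after the source has drifted down 10%:
# the drift appears identically on both ports, so the IL is unaffected.
il = normalized_il_db(p_dut_mw=[0.45], p_ref_mw=[1.8], ref_ratio=[0.5])
```

Without the reference channel, the same 10% drift would appear as a spurious ~0.46 dB loss error attributed to the component.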

PDL will show you the worst-case behavior your IL measurement missed

A component that measures fine for insertion loss at a single polarization state will sometimes show a completely different picture under a full polarization sweep. Isolators, thin-film filters, and anything involving a physical anisotropy in the optical path can look compliant on a standard IL measurement and still carry a polarization-dependent loss that creates system penalty in the field.

If PDL matters for your application, and in most coherent and high-density WDM scenarios it does, the test workflow must specify exactly how polarization states are generated, how many states are sampled or whether a continuous scramble is used, and how results are reduced from a set of measurements to a reportable number. Max-minus-min over a Mueller matrix sweep and a statistical envelope over random scrambling are not equivalent, and they don't produce the same spec margin. Choosing between them is an engineering decision that should be made once per product family and documented, not re-evaluated per lot.
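The difference between the two reduction methods is easy to see in code. A sketch of both, assuming transmitted-power samples in mW (the Mueller-set variant shown here uses only the max/min reduction step, not the full four-state matrix solve):

```python
import math

def pdl_max_min_db(powers_mw):
    """PDL as max-minus-min over a deterministic polarization-state set."""
    return 10.0 * math.log10(max(powers_mw) / min(powers_mw))

def pdl_envelope_db(powers_mw, coverage=0.99):
    """Statistical PDL envelope from random-scrambling samples.

    Reports the spread between the lower and upper `coverage` quantiles,
    deliberately excluding extreme outliers the scramble happened to hit.
    """
    s = sorted(powers_mw)
    lo = s[int((1 - coverage) / 2 * (len(s) - 1))]
    hi = s[int((1 + coverage) / 2 * (len(s) - 1))]
    return 10.0 * math.log10(hi / lo)
```

The envelope result is always less than or equal to the max-minus-min result on the same data, which is exactly why the two methods yield different spec margins and must not be mixed across stations.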

Seed + amplifier + linear cavity SHG system

Common wavelengths include 798/399 nm for ytterbium, 840/420 nm and 960/480 nm for rubidium, 922/461 nm for strontium and 1020/510 nm for caesium. Overall doubling efficiency is typically 50 to 60% for mid-power amplified systems (0.5 to 5 watts), and can be optimised for low-power direct ECDLs (e.g. 100 mW) to achieve higher efficiencies (up to 70%).
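The quoted overall efficiency relates fundamental input to doubled output directly. A trivial sketch of the budget arithmetic, using the efficiency ranges above (the specific numbers in the example are illustrative):

```python
def shg_output_mw(fundamental_mw, doubling_efficiency):
    """Frequency-doubled output power for a given overall doubling efficiency.

    doubling_efficiency is the end-to-end figure (0.5-0.6 typical for
    mid-power amplified systems per the ranges quoted above).
    """
    return fundamental_mw * doubling_efficiency

# e.g. a 2 W amplified 922 nm system at 55% overall efficiency
out_461nm = shg_output_mw(2000.0, 0.55)
```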

Spectral sampling is the variable nobody specifies until it causes a problem

When two test stations measure the same component and produce different insertion loss results, it is almost always a sampling problem before it's anything else. A 100 pm step size is common because it's fast and produces clean-looking traces. It also hides features that matter: a narrow notch at the edge of a passband, periodic ripple from an etalon effect in a connector, a channel non-uniformity that only resolves below 25 pm.

The fix is straightforward but requires discipline: write the sampling plan (step size, sweep speed, averaging) into the method specification and treat it as part of the test, not a tuning parameter. Once that's done, disagreements between stations become traceable. Without it, you're comparing measurements that are fundamentally unlike.
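A sampling plan can be validated mechanically before any sweep runs. A minimal sketch; the five-points-per-feature rule of thumb here is an assumption, not a standard, and should be set per product family:

```python
def max_step_pm(narrowest_feature_pm, points_per_feature=5):
    """Largest wavelength step that still resolves the narrowest expected
    spectral feature with at least `points_per_feature` samples across it.
    The default of 5 points per feature is an illustrative rule of thumb.
    """
    return narrowest_feature_pm / points_per_feature

# The 80 pm ripple from the opening example needs a step of at most 16 pm
# at 5 points per period; the 50 pm step that cleared the filter never
# had a chance of seeing it.
step_limit = max_step_pm(80.0)
```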

Multi-port devices require comparable measurements, not just many measurements

For arrayed waveguide gratings, WSS-class devices, couplers, and multi-channel splitters, the challenge is not accessing many ports, it's ensuring that the measurements across those ports are comparable enough to support real manufacturing decisions. Channel-to-channel uniformity numbers mean nothing if the measurement uncertainty on any individual channel is comparable to the variation you're trying to detect.

This requires either truly simultaneous multi-port acquisition or a switching architecture whose port-to-port repeatability is characterized, stable, and factored into the spec margin. A reference port concept, where one port is continuously monitored as a normalization anchor across all channel measurements, is one of the most effective ways to maintain comparability in a switching-based architecture. Without something equivalent, lot-to-lot decisions end up absorbing test noise that was never accounted for.
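The "uncertainty comparable to the variation" trap can be made an explicit gate rather than a silent failure. A sketch of the check, where the 3x margin is an assumed rule of thumb, not a standard:

```python
def uniformity_is_decidable(channel_il_db, per_channel_uncert_db, margin=3.0):
    """True only when the channel-to-channel IL spread exceeds the
    per-channel measurement uncertainty by `margin`x, i.e. the reported
    uniformity number reflects the device rather than test noise.
    The margin of 3.0 is an illustrative default.
    """
    spread = max(channel_il_db) - min(channel_il_db)
    return spread >= margin * per_channel_uncert_db

# 0.4 dB spread against 0.05 dB per-channel uncertainty: decidable.
ok = uniformity_is_decidable([3.0, 3.1, 3.4], per_channel_uncert_db=0.05)
```

When this check fails, the correct action is to tighten the measurement (better switch repeatability, a reference-port anchor), not to widen the uniformity spec.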

Back-reflection screening belongs in the flow, not in the failure investigation

In systems that are increasingly sensitive to return loss, a back-reflection problem often presents first as something else: instability, apparent passband ripple, unexpected gain variations. The reflection is rarely the first hypothesis, and by the time it's confirmed, significant time has been spent chasing other explanations.

The more practical approach is to include a fast reflection screen alongside insertion loss characterization as a standard step in incoming inspection or pre-assembly test. For multi-channel devices especially, per-port return loss variation can be as meaningful as per-port IL variation, and it takes only marginally more time to measure both together than to measure one alone.
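A per-port reflection screen reduces to one ratio and one threshold per port. A minimal sketch assuming linear-power (mW) readings; the 45 dB screen threshold and port names are illustrative:

```python
import math

def return_loss_db(p_incident_mw, p_reflected_mw):
    """Return loss in dB; larger numbers mean less reflection."""
    return 10.0 * math.log10(p_incident_mw / p_reflected_mw)

def screen_ports(reflected_by_port_mw, p_incident_mw, rl_min_db=45.0):
    """Flag ports whose return loss falls below the screen threshold.

    rl_min_db of 45 dB is an illustrative threshold; set it from the
    system-level return loss budget.
    """
    return [port for port, p_r in reflected_by_port_mw.items()
            if return_loss_db(p_incident_mw, p_r) < rl_min_db]

# ch2 reflects 10x more power than ch1 and fails the 45 dB screen.
flagged = screen_ports({"ch1": 1e-5, "ch2": 1e-4}, p_incident_mw=1.0)
```

Running this alongside the IL sweep turns the back-reflection from a late-stage hypothesis into a routine incoming-inspection data point.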

When the optical data doesn't explain the result, look at the package

A clean optical trace does not prove the assembly is correct. Packaged components introduce failure modes that have no direct optical-trace signature until they've already propagated into a larger problem: lens standoff errors that shift the working distance, bonding offsets that introduce a slow walk-off across temperature, surface condition issues on AR-coated optics that create a back-reflection only visible at specific angles.

When a component is persistently out-of-family and retesting the optical path doesn't resolve the question, the fastest path to an answer is usually direct measurement of the package geometry, not another optical sweep. High-speed 3D surface profiling at the package level lets a process engineer see whether the physical build matches the intended geometry in minutes, without destructive analysis. In R&D, this closes iteration loops. In production, it turns a mystery reject into a traceable root cause.