Coincidentally, I have recently been watching/reading a bunch of videos/articles on ASML and their technology.
Can some experts/knowledgeable folks here explain the technology in ELI5 (and above) terms? As I understand it, a laser (what are its characteristics?) is fired at tin droplets in a vacuum chamber, causing them to emit light in the "extreme UV" wavelength range, which is then focused using a set of Zeiss mirrors to do the actual photolithography. Wikipedia (https://en.wikipedia.org/wiki/Extreme_ultraviolet_lithograph...) is well over my head. What I am unable to bridge is how this EUV wavelength maps to transistor sizes (in nanometers) via High-NA/Hyper-NA technology.
From https://www.laserfocusworld.com/blogs/article/14039015/how-d...
A major limitation comes from the laws of optics. German physicist Ernst Abbe found that the resolution d of a microscope is (roughly) limited by the wavelength λ of the light used for illumination:
d = λ / (n sin α) ...(1)
where n is the refractive index of the medium between the lens and the object and α is the half-angle of the objective's cone of light. For lithography, substituting numerical aperture (NA) for n sin(α) and adding a factor k to the formula (because lithographic resolution can be strongly tweaked with illumination tricks), the minimum feasible structure, or critical dimension (CD), is:
CD = kλ/NA ...(2)
This formula, which governs all lithographic imaging processes, makes obvious why the wavelength is such a crucial parameter. As a result, engineers have been looking for light sources with ever-shorter wavelengths to produce ever-smaller features.
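To make equations (1) and (2) concrete, here is a quick back-of-the-envelope calculation comparing deep-UV immersion lithography with EUV. The numbers are ballpark figures for illustration, not any vendor's spec:

```python
import math

def abbe_resolution(wavelength_nm, n, half_angle_deg):
    """Abbe limit d = λ / (n·sin α), eq. (1)."""
    return wavelength_nm / (n * math.sin(math.radians(half_angle_deg)))

def critical_dimension(k, wavelength_nm, na):
    """Lithographic critical dimension CD = k·λ / NA, eq. (2)."""
    return k * wavelength_nm / na

# ArF immersion DUV: 193 nm light; water immersion pushes NA to ~1.35
duv = critical_dimension(k=0.4, wavelength_nm=193.0, na=1.35)

# EUV: 13.5 nm light; current scanners have NA = 0.33
euv = critical_dimension(k=0.4, wavelength_nm=13.5, na=0.33)

print(f"DUV immersion CD ~ {duv:.1f} nm")  # ~57 nm
print(f"EUV (NA 0.33) CD ~ {euv:.1f} nm")  # ~16 nm
```

Dropping the wavelength from 193 nm to 13.5 nm buys roughly a 14x improvement in the diffraction limit, which is why EUV was worth the enormous engineering effort despite the awkward light source.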
Diffraction sets a limit on how small the features patterned with photolithography can be. When the wavelength is larger than the feature size, diffraction causes the light to spread out and blur the edges of what you're trying to pattern. The Rayleigh criterion shows how the ability to separate features depends on the numerical aperture of the system and the wavelength used. The explanation under 'Resolution in projection systems' in the Wikipedia article on photolithography is better than what the EUV article offers: https://en.wikipedia.org/wiki/Photolithography
Going further into the UV makes the wavelength, and thus the feature size, smaller and smaller. But producing light that far into the UV that is controllable enough for photolithography to work is the difficult part.
Thank You! The section "Resolution in projection systems" in the above Wikipedia link (https://en.wikipedia.org/wiki/Photolithography#Resolution_in...) contains the essential info. It gives the exact same equation I linked in my comment and adds further details, to wit:
The minimum feature size that a projection system can print is given approximately by:
CD = k1 · λ / NA
where CD is the minimum feature size (also called the critical dimension, target design rule, or "half-pitch"), λ is the wavelength of light used, and NA is the numerical aperture of the lens as seen from the wafer.
k1 (commonly called k1 factor) is a coefficient that encapsulates process-related factors and typically equals 0.4 for production. (k1 is actually a function of process factors such as the angle of incident light on a reticle and the incident light intensity distribution. It is fixed per process.) The minimum feature size can be reduced by decreasing this coefficient through computational lithography.
According to this equation, minimum feature sizes can be decreased by decreasing the wavelength, and increasing the numerical aperture (to achieve a tighter focused beam and a smaller spot size).
Thus increasing the NA from High-NA (0.55) to Hyper-NA (0.75) results in a smaller "feature size", i.e. fewer nanometers.
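Plugging the numbers from the quoted text into CD = k1 · λ / NA makes the NA progression concrete (λ = 13.5 nm for EUV, k1 = 0.4; the NA values are the ones discussed in this thread, with 0.33 being today's standard EUV optics):

```python
wavelength_nm = 13.5  # EUV
k1 = 0.4              # typical production k1 per the quoted text

# NA values: 0.33 (standard EUV), 0.55 (High-NA), 0.75 (Hyper-NA)
cds = {na: k1 * wavelength_nm / na for na in (0.33, 0.55, 0.75)}

for na, cd in cds.items():
    print(f"NA = {na:.2f} -> CD ~ {cd:.1f} nm")
# NA = 0.33 -> CD ~ 16.4 nm
# NA = 0.55 -> CD ~ 9.8 nm
# NA = 0.75 -> CD ~ 7.2 nm
```

So the jump from High-NA to Hyper-NA shrinks the printable half-pitch from roughly 10 nm to roughly 7 nm without touching the light source at all.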
I need to read some more, but I think I now get the basic physics concepts involved.
Not an expert, but I've always understood it as (LI5):
You may have heard that light behaves like a wave: higher frequencies mean shorter wavelengths, which means the light can 'reach' smaller features without affecting the rest of the surroundings. Photolithography is done through a mask, so this effectively means you get a crisper image projected onto the wafer.
If you go too far into or past UV, the light becomes hard to deal with in terms of heat and optics, from my understanding. Which is why we keep getting new $prefix-UV rather than something like X-rays (which lie beyond UV).
Absolutely Brilliant! The animation of diffraction physics and mapping them to the terms in the given formula is just superlative. Scientific explanation through video at its best! Not too complicated and not too dumbed down but just the essence in a very understandable way. I now need to consult my Physics books for a Wave Optics brush up :-)
Thanks for pointing me to this. This truly deserves all the views/recognition from HN/larger Internet.
The waves emanating from that point would be spherically symmetrical (think a 360deg "field of view"[0], whereas most lenses are <<90deg).
Now, since optical paths are two-way, this also implies that forming a perfect point image requires perfectly spherically symmetric wavefronts[1] converging on that point, causing all the waves to perfectly cancel each other out everywhere except at the image point.
If you take away a slice of the wavefronts (i.e. block light with an aperture), the cancellation is no longer balanced, producing stray excitations in places that should be silent. (Think of it like squeezing a beer can with your hand, causing it to spurt out of the sides.)
The larger the slice of wavefronts you are missing, the greater the imbalance. The resulting artifacts are oscillations whose size is on the order of the wavelength.
Basically, high NA means trying to capture as complete of the total wavefronts as possible to minimize the imbalance, and short wavelength means trying to keep the size of whatever artifact you do end up getting to be as small as possible.
[0] In air quotes because FOV != NA. The main distinction is that FOV refers to the span of principal directions (i.e. how many points you can see), whereas NA means, given a point object, how complete a portion of its total wavefront you are capturing (i.e. how bright any given point is).
[1] Up to a 2pi phase differential. If your signal is CW, then a multiple of 2pi is indistinguishable from being in phase; think Shannon limit. This is why lenses work despite having path differentials: all that matters is that the light is back in phase for the given wavelength, even if shifted by multiple cycles.
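The wavefront-slice intuition above can be sketched numerically. In the far-field (Fraunhofer) approximation, the field at focus is the Fourier transform of the aperture, so capturing a wider slice of the wavefront (higher NA) focuses to a tighter spot. This is a deliberately crude 1-D toy model, not ASML's actual optics:

```python
import numpy as np

# 1-D toy model: the field at focus is the Fourier transform of the
# aperture, i.e. of the slice of wavefront the optics actually captures.
N = 4096
x = np.linspace(-1, 1, N)

def focal_spot_width(aperture_half_width):
    """Width (in FFT-bin counts) of the focused spot for a given aperture."""
    aperture = (np.abs(x) <= aperture_half_width).astype(float)
    field = np.fft.fftshift(np.fft.fft(aperture))
    intensity = np.abs(field) ** 2
    intensity /= intensity.max()
    # count samples above half maximum -> full width at half maximum
    return int(np.sum(intensity >= 0.5))

narrow = focal_spot_width(0.2)  # small aperture ~ low NA
wide = focal_spot_width(0.8)    # large aperture ~ high NA
print(narrow, wide)  # the wider aperture gives a much tighter spot
```

The missing parts of the wavefront show up as the sinc-like side lobes around the main spot: the smaller the aperture, the wider both the central spot and those artifacts become, which is exactly the "imbalance" described above.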