Hello, my name is Seokho Yoon, and I'm a researcher leading the Next-Generation Image Sensor project at Samsung Advanced Institute of Technology.
First of all, I'm very pleased to present our latest results for the first time at IEDM.
We all know it's not a good situation to hold a conference because of COVID-19, but I think we are very fortunate to have the option to attend online.
The topic I would like to present to you today is highly efficient color separation and focusing in sub-micron CMOS image sensors.
Here is the outline of my presentation today.
First, I'll talk about the pixel scaling downtrend, which is the motivation for this study.
And next, I'll explain the concept and design principle of the metaphotonic color routing structure proposed in the paper.
Then I'll show the design and verification results of the MPCR integrated into an image sensor with a pixel size of 0.8 micron.
In addition, I'll also show our preliminary results for small pixel applications below 0.7 micron.
And finally, I'll conclude my talk today.
Pixel scaling downtrend has continued for the past 20 years along with the development of smartphone technology.
Recently, with the advent of AR and metaverse, miniaturization of image sensors has become more important than ever.
As the technology advances from 5.6 micron pixel in 2000 to 0.64 micron pixel in 2021, the pixel area has decreased by more than 70 times and the number of pixels has increased by more than 300 times.
So far, we have experienced several structural innovations.
First, the BSI technology replaced FSI.
BSI can greatly increase the amount of light received by reducing the scattering loss in metal wires.
The next innovation was deep trench isolation, which reduces both the electrical and optical crosstalk in the photodetector layer.
The introduction of a low-refractive-index fence layer for color filter separation also improves optical efficiency and reduces crosstalk.
Currently, the technology appears to be advancing only incrementally, mainly by refining these existing structures.
However, as the pixel size becomes extremely small, it is more important to improve SNR by receiving sufficient light even in low light conditions.
To this end, one can choose pixel binning to increase the effective pixel size or use a broadband color filter, such as a white color filter or a combination of CMY complementary color filters.
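To see why the amount of collected light dominates at this pixel scale, here is a minimal shot-noise sketch comparing a single small pixel with a 2x2 binned super-pixel; the photoelectron count and read noise are assumed illustrative numbers, not measurements from any of our sensors.

```python
import math

def snr(signal_e, read_noise_e):
    """Shot-noise plus read-noise SNR for a pixel collecting `signal_e` photoelectrons."""
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

# Hypothetical low-light condition (illustrative numbers only):
# a small pixel collecting 100 photoelectrons with 2 e- read noise.
single = snr(100, 2.0)

# 2x2 binning sums four pixels: ~4x the signal; if binning happens after readout,
# the combined read noise grows roughly by sqrt(4).
binned = snr(4 * 100, 2.0 * math.sqrt(4))

print(f"single pixel SNR : {single:.1f}")   # ~9.8
print(f"2x2 binned SNR   : {binned:.1f}")   # ~19.6, but at 1/4 the spatial resolution
```

Binning roughly doubles the SNR in this toy example, but only by giving up three quarters of the spatial resolution, which is what motivates recovering the lost light instead.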
Instead of taking this route, we decided to recover the light that is otherwise lost at the color filter layer.
We started studying metaphotonic structures, which can engineer light propagation to separate colors and focus the light at the same time.
Here we introduce the metaphotonic color routing structure.
From now on, I will refer to it as MPCR.
As shown in the schematic at the bottom left, in a conventional image sensor, when blue light is incident, only the light falling on the blue pixel is transmitted through its color filter.
The light entering the green and red pixels is lost to absorption in their color filters.
So only about one third of the incident light can be used.
However, if the direction of the light absorbed by other pixels is changed and redirected, the amount of light received can be greatly increased.
Placing a large virtual lens that covers the green and red pixels will collect the blue light that is lost from the neighboring pixels.
The concept can be extended for all RGB color bands, and for these, virtual lens phase distributions as shown on the right are required.
For blue and red, a large-scale lens array is necessary to cover the neighboring pixels, and for green, an array of lenses arranged diagonally is required to cover the adjacent pixels.
If this phase distribution can be realized simultaneously for all RGB color bands, both color separation and focusing are possible at the same time, even in a Bayer array.
The simulated field distributions shown below confirm that RGB light can be separated and focused under the phase conditions described above.
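As a minimal sketch of these target phase distributions, the code below builds an ideal wavelength-dependent lens phase, phi(x, y) = (2*pi/lambda)*(f - sqrt((x-cx)^2 + (y-cy)^2 + f^2)), centered on each color's pixel position in a 2x2 Bayer cell; the pitch, focal distance, and wavelengths are assumed example values, not the actual design parameters.

```python
import numpy as np

PITCH = 0.8e-6          # assumed pixel pitch [m]
FOCAL = 2.0e-6          # assumed focal distance to the photodiode plane [m]
WAVELENGTHS = {"B": 450e-9, "G": 540e-9, "R": 630e-9}

def lens_phase(x, y, cx, cy, wavelength, f=FOCAL):
    """Ideal hyperbolic lens phase that focuses a normally incident plane wave to (cx, cy) at depth f."""
    r2 = (x - cx) ** 2 + (y - cy) ** 2
    return (2 * np.pi / wavelength) * (f - np.sqrt(r2 + f ** 2))

# Sample a 2x2 Bayer unit cell (G at (0,0), B at (1,0), R at (0,1) in pitch units).
n = 64
x, y = np.meshgrid(np.linspace(0, 2 * PITCH, n), np.linspace(0, 2 * PITCH, n))

centers = {"B": (1.5 * PITCH, 0.5 * PITCH),
           "G": (0.5 * PITCH, 0.5 * PITCH),
           "R": (0.5 * PITCH, 1.5 * PITCH)}

# One target phase map per color band; a real design would tile these (and their
# periodic copies) over the sensor and then look for a single nanostructure that
# approximates all three maps simultaneously.
target_phase = {c: np.mod(lens_phase(x, y, *centers[c], WAVELENGTHS[c]), 2 * np.pi)
                for c in WAVELENGTHS}

for c, phi in target_phase.items():
    print(c, "wrapped phase range (rad):", float(phi.min()), "to", float(phi.max()))
```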
With the conventional microlens array, however, it is impossible to achieve such wavelength-dependent phase properties.
On the other hand, if the metaphotonic technology is applied, the wavelength-dependent phase distribution can be realized by utilizing the dispersion properties of higher effective index nanostructures.
MPCR is an artificially engineered nanostructure composed of high-refractive-index nanoposts embedded in a low-index medium.
The titanium dioxide posts act as waveguides that impose a phase delay, so the desired lens phase distribution can be realized by properly selecting the post dimensions at each lattice point.
Light coupled to the nanopost travels much slower than light passing through its surroundings.
The induced phase difference changes as a function of wavelength, thus allowing wavelength-dependent color separation from interference and diffraction phenomena.
As shown in the figure at the bottom left, when green light at 540 nm is incident on the post, it is transmitted straight through to the region below the post, but red light at 630 nm is split to the left and right of the post by destructive interference.
The phase delay of the post originates from the mode index of the waveguide, and one can calculate the mode index as a function of the post dimensions.
As shown on the right, the design library can be acquired based on this information.
In the early design stage of MPCR, we also used this library to check the initial settings of the post dimensions to satisfy the phase conditions described above.
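A minimal sketch of how a post could be chosen from such a design library: assuming the phase accumulated in a post of height h is roughly 2*pi*n_eff*h/lambda, and given a purely illustrative, made-up table of effective index versus post diameter, one picks the diameter whose delays best match the target phases at the three design wavelengths. Neither the table values nor the post height below are the actual library data.

```python
import numpy as np

POST_HEIGHT = 0.6e-6  # assumed TiO2 post height [m]

# Placeholder dispersion library: effective index of the fundamental waveguide mode
# versus post diameter, tabulated per wavelength (illustrative values, not a real
# TiO2/SiO2 mode solution).
DIAMETERS = np.array([80, 100, 120, 140, 160, 180]) * 1e-9
N_EFF = {
    450e-9: np.array([1.55, 1.65, 1.78, 1.92, 2.05, 2.15]),
    540e-9: np.array([1.50, 1.58, 1.69, 1.82, 1.95, 2.06]),
    630e-9: np.array([1.47, 1.53, 1.62, 1.73, 1.86, 1.97]),
}

def post_phase(n_eff, wavelength, height=POST_HEIGHT):
    """Phase accumulated by light guided through a post with the given mode index."""
    return 2 * np.pi * n_eff * height / wavelength

def pick_diameter(target):
    """Choose the library diameter whose phase delays best match `target`,
    a dict {wavelength: desired phase in rad}, by least squares over wavelengths."""
    cost = np.zeros_like(DIAMETERS)
    for wl, phi_t in target.items():
        phi = post_phase(N_EFF[wl], wl)
        diff = np.angle(np.exp(1j * (phi - phi_t)))  # compare phases modulo 2*pi
        cost += diff ** 2
    return DIAMETERS[np.argmin(cost)]

# Example lattice point needing ~pi delay in blue, ~pi/2 in green, ~0 in red.
best = pick_diameter({450e-9: np.pi, 540e-9: np.pi / 2, 630e-9: 0.0})
print(f"selected post diameter: {best * 1e9:.0f} nm")
```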
Using the described design rules, we successfully designed an MPCR structure to be applied to a 0.8-micron Bayer sensor.
As shown in the schematic on the left, the optimized MPCR structure is spaced apart from the color filter layer to secure the color separation distance.
As can be seen from the finite-difference time-domain (FDTD) field-evolution simulation in the middle, when green light is incident on the designed MPCR structure, the light falling on the red pixel is redirected and transmitted to the green pixel, so that optical efficiency enhancement and light focusing are achieved simultaneously.
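The sketch below shows the kind of field-evolution check described here, using the open-source Meep FDTD package in a simplified 2D setting: two hypothetical TiO2 posts embedded in SiO2 above a two-pixel cell, illuminated with green light, with the transmitted intensity compared between the two half-pixels. All dimensions, indices, and positions are illustrative assumptions and do not reproduce the actual 0.8-micron MPCR design or our simulation setup.

```python
import meep as mp
import numpy as np

# Meep length unit = 1 um.  2D cell: 1.6 um wide (two 0.8 um pixels), 4 um tall.
sx, sy = 1.6, 4.0
cell = mp.Vector3(sx, sy, 0)

sio2 = mp.Medium(index=1.45)   # low-index surrounding medium
tio2 = mp.Medium(index=2.4)    # high-index nanoposts

# Two illustrative TiO2 posts (widths chosen arbitrarily) in the upper part of the cell.
geometry = [
    mp.Block(material=tio2, center=mp.Vector3(-0.4, 0.8), size=mp.Vector3(0.12, 0.6, mp.inf)),
    mp.Block(material=tio2, center=mp.Vector3(+0.4, 0.8), size=mp.Vector3(0.16, 0.6, mp.inf)),
]

# Plane wave at a green wavelength of 0.54 um, launched above the posts.
fcen = 1 / 0.54
sources = [mp.Source(mp.ContinuousSource(frequency=fcen),
                     component=mp.Ez,
                     center=mp.Vector3(0, 1.4),
                     size=mp.Vector3(sx, 0))]

sim = mp.Simulation(cell_size=cell,
                    resolution=100,
                    geometry=geometry,
                    sources=sources,
                    default_material=sio2,
                    boundary_layers=[mp.PML(0.4, direction=mp.Y)],
                    k_point=mp.Vector3(),          # periodic in x, PML only in y
                    force_complex_fields=True)

sim.run(until=200)  # run long enough to reach an approximate steady state

# Sample |Ez|^2 along a line near the "photodiode plane" and compare how much
# intensity lands on the left and right 0.8 um halves of the cell.
ez = sim.get_array(center=mp.Vector3(0, -1.2), size=mp.Vector3(sx, 0), component=mp.Ez)
intensity = np.abs(ez) ** 2
half = len(intensity) // 2
print("left-pixel share :", intensity[:half].sum() / intensity.sum())
print("right-pixel share:", intensity[half:].sum() / intensity.sum())
```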
The advantage of this color separation and redirection is that the same single structure works equally well for the other colors.
Optical efficiency enhancement effect can also be confirmed by the increase in quantum efficiency curves on the right.
Compared to the conventional sensor, indicated by the dotted lines, it is clearly shown that a QE improvement of about 30% is possible in all RGB channels with the MPCR sensor.
In order to confirm the characteristics of the designed MPCR structure, we developed a new mass-production-compatible fabrication process.
To realize the maximum refractive index contrast, an air-clad structure would be preferred, but a buried structure is more suitable for mass production because of its mechanical stability.
As shown in the process flow on the left, after forming the color filter layer, a spacer layer is deposited and planarized.
Next, the first MPCR layer is formed by patterning holes in the deposited SiO2 layer, gap-filling them with a titanium dioxide ALD process, and planarizing with a CMP process.
This sequence is performed twice to form the final bilayer MPCR structure.
For patterning, either KrF or ArF photolithography can be applied, depending on the dimensions of the optimized design.
The fabrication example shown on the right is a cross-sectional image of the MPCR structure integrated by KrF patterning on our actual 0.8-micron-pixel image sensor.
As shown in the image, the double-layered titanium dioxide posts are well formed and successfully embedded in the SiO2 medium.
To confirm the optical efficiency enhancement, we measured the quantum efficiency of the fabricated MPCR sensor package and compared it with a reference sensor.
As can be seen from the graph on the left, the QE improved by about 20% on average across all RGB channels, as expected from the simulation results.
In particular, the QE improvement in the blue channel is remarkable.
As a result, the raw RGB channels are better balanced, so images with proper white balance can be captured.
The raw images on the right, taken before ISP processing, show that while the reference sensor produces a greenish hue due to its dominant green channel signal, the MPCR sensor yields a brighter and better white-balanced image.
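As a small illustration of why better-balanced raw channels help, the sketch below applies simple gray-world white-balance gains to two made-up sets of raw channel means (illustrative numbers only, not our measured data); the larger the gain a channel needs, the more its noise is amplified after white balancing.

```python
def wb_gains(r_mean, g_mean, b_mean):
    """Gray-world-style white-balance gains that equalize channel means to green."""
    return {"R": g_mean / r_mean, "G": 1.0, "B": g_mean / b_mean}

# Hypothetical raw channel means for a white scene (illustrative only).
reference = {"R": 60, "G": 100, "B": 45}   # greenish raw image, weak blue channel
mpcr_like = {"R": 80, "G": 115, "B": 75}   # brighter and more balanced channels

for name, ch in (("reference", reference), ("mpcr-like", mpcr_like)):
    gains = wb_gains(ch["R"], ch["G"], ch["B"])
    print(name, {k: round(v, 2) for k, v in gains.items()})
# The reference needs a ~2.2x blue gain, while the more balanced channels need only
# ~1.5x, so less noise amplification is required to reach a neutral white.
```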
For MPCR to be applied in real camera modules, each pixel must maintain its optical properties at the chief ray angle (CRA) imposed by the module lens.
Most optical nanostructures reported so far are very sensitive to the angle of incidence, so their properties deteriorate greatly when the incidence angle deviates from the optimum.
To maintain the angular characteristics of MPCR, we developed an appropriate design method and applied it to the fabricated sensor.
The module lens used in the camera test has an F-number of 2.4 and a maximum CRA of 30 degrees.
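In conventional sensors, CRA is commonly handled by shifting each pixel's microlens toward the optical center as a function of image height; the toy sketch below computes such a geometric shift from an assumed optical stack height. This is only an illustration of the angular design problem, not necessarily the compensation method used in our MPCR design.

```python
import math

STACK_HEIGHT = 2.0e-6   # assumed distance from the routing layer to the photodiode [m]
MAX_CRA_DEG = 30.0      # maximum chief ray angle of the F/2.4 module lens

def focus_shift(image_height_ratio, max_cra_deg=MAX_CRA_DEG, stack=STACK_HEIGHT):
    """Lateral shift of the focusing profile toward the optical axis, assuming the
    chief ray angle grows roughly linearly with normalized image height (0..1)."""
    cra = math.radians(max_cra_deg * image_height_ratio)
    return stack * math.tan(cra)

for h in (0.0, 0.5, 1.0):
    print(f"image height {h:.1f} -> shift {focus_shift(h) * 1e9:.0f} nm")
# Near the sensor edge the required shift (~1.2 um for these assumed numbers) exceeds
# one 0.8 um pixel, which is why angle-of-incidence rules must be built into the design.
```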
The luminance shading images shown at the bottom left were taken of a white surface light source with the reference and MPCR camera modules.
They confirm that a proper design rule for the angle of incidence was reflected in the MPCR module: the luminance shading of the MPCR sensor is maintained over the entire sensor area and is almost equivalent to that of the existing sensor.
In addition, we also tested the resolution of MPCR.
As shown on the right, the fabricated MPCR sensor module has comparable star chart image quality and MTF characteristics.
If necessary, we expect the resolution can be further improved by applying a restoration algorithm dedicated to MPCR sensors.
The advantage of MPCR technology is that if the target phase distribution is satisfied, it can be applied not only to large pixels but also to fine pixels of 0.8 micron or even less.
We carried out designs for pixel sizes of 0.7, 0.64, 0.56, and 0.5 micron, and first confirmed through simulation that RGB color routing and focusing are possible even in this small-pixel regime.
In order to confirm the actual implementation possibility, each design was fabricated on a glass substrate and imaged with a microscope.
And RGB routing characteristics by MPCR could be confirmed at the focal plane.
The lower images, captured with a CCD, show the color separation characteristics of the fabricated MPCR structures.
Clear RGB routing can be confirmed for the 0.7-micron and 0.56-micron designs, and even at 0.5 micron the color routing operation is clearly demonstrated.
Recently, we have confirmed that the design rules work down to 0.45 micron.
We are currently working on design and verification to further enhance the performance of MPCR, and we are looking forward to commercialization.
Finally, I would like to summarize today's presentation.
First, we developed and introduced MPCR design rules and a new mass production fabrication process.
By applying this design and fabrication method, we were able to demonstrate the optical efficiency enhancement by integrating MPCR into a 0.8 micron pixel image sensor for the first time.
In addition, we not only simulated that MPCR can operate below 0.8 micron, but also demonstrated color separation with fabrication and imaging.
We expect that MPCR can be utilized to implement high-efficiency ultrafine pixel image sensors.
Thank you for your attention.