Characterisation, Measurement & Analysis
+44(0)1582 764334


  • How a microfabrication researcher uses SEM to verify nanoscale structures

    Microfabrication, the creation of microscale structures and features, is an essential technique for producing next-generation semiconductors, processors and the ‘lab-on-a-chip’ microfluidic systems found in chemical analysis systems that fit in the palm of your hand.

    Until now, microfabrication has relied on masking techniques such as lithography, which limit the variety of structures that can be produced. However, new research into microscale 3D printing systems is allowing complex 3D shapes to be assembled at smaller scales than ever before.

    The future beckons

    Tommaso Baldacchini is a microfabrication researcher for the Technology and Applications Center (TAC) at Newport Corporation. In his research into laser-assisted nanofabrication, a Phenom Pro scanning electron microscope is an essential tool.

    Newport Corporation’s TAC labs are very similar to those you would see at a university, and the team cooperates closely with academic customers. TAC conducts experiments for academic partners and fabricates micro-devices and components for use in other areas of academic research.

    The current microfabrication landscape

    To date, microfabrication has been dominated by traditional machining and photolithographic processes, which are planar techniques. According to Tommaso Baldacchini, photolithography can produce extremely fine structures with high throughput, but the process is limited to two dimensions. As Baldacchini put it: “This means that fabricators are missing out on one entire dimension.” Other limitations include:

    • The instrumentation needed to produce these structures is expensive
    • A clean-room environment is often required
    • The choice of substrates and materials is largely limited to silicon and other semiconductors

    Baldacchini mentions: “There is definitely a need to break the barriers of these limitations to produce new micro and nano devices.”

    Breaking down barriers to nanofabrication

    Fabricating nanostructures presents a number of challenges. These depend mostly on the specific technique used to fabricate the structure and on the features of the structure itself, such as its size, shape and surface area.

    Laser-assisted nanofabrication (Journal of Laser Applications 24, 042007 (2012)) provides a whole raft of unique capabilities for building nano- and microstructures. Laser irradiation of material surfaces can cause several effects, including localised heating, melting, ablation, decomposition and photochemical reactions, and this leads to the realisation of various complex nanostructures in materials such as graphene, carbon nanotubes and even polymers and ceramics.


    When characterising structures, it is crucial to have a tool that can examine fabricated structures with nanoscale precision. Researchers need to look at a structure’s topology and uniformity to make sure the ‘build’ quality is up to scratch. It is also important to be able to characterise the new material by determining its surface, and even its internal, composition.

    A scanning electron microscope (SEM) is the ideal tool for this type of work, providing the ability to zoom in by tens of thousands of times and view small-scale and nanoscale sample features. Baldacchini said: “A scanning electron microscope is an invaluable tool to characterise products. We can view changes in the sample’s surface when it is ablated, or we can use SEM to study the topology of a sample we have produced using additive manufacturing.”

    Innovative techniques

    TAC has developed a high-resolution, nanoscale 3D printing technique called two-photon polymerisation. Two-photon polymerisation allows the creation of genuinely three-dimensional polymeric structures, often tens of microns in size with nanoscale features. SEM is frequently used for structure characterisation, as a means of verifying the nanoscale structure that has been built. In addition, Baldacchini’s research has involved applying nonlinear optical microscopy, such as CARS microscopy, to investigate the chemical and mechanical properties of the microstructures created by two-photon polymerisation.

    “One of the tools that we developed in the TAC for aiding laser microfabrication is called the Laser µFAB. It is a complete system that enables customers to connect their own laser to the machine and perform different types of laser micromachining.”

    The system is provided with software that enables customers to import a two-dimensional drawing and reproduce it through the motion of the stages with respect to the stationary laser, allowing users to create the three-dimensional objects they want to produce.
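    The drawing-to-motion workflow described above can be sketched in a few lines of Python. Everything here (the function name and the command tuples) is hypothetical pseudocode for illustration only, not the actual Laser µFAB software interface:

```python
def drawing_to_stage_moves(polyline):
    """Turn a 2-D drawing, given as a list of (x, y) points in micrometres,
    into a naive command list: travel to the start with the laser shutter
    closed, then trace each segment with the shutter open.
    Hypothetical sketch only, not the Laser uFAB API."""
    if not polyline:
        return []
    commands = [("shutter", "close"), ("move", polyline[0]), ("shutter", "open")]
    for point in polyline[1:]:
        commands.append(("move", point))
    commands.append(("shutter", "close"))
    return commands

# A 10 um square traced by the stages under the stationary laser:
square = [(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)]
moves = drawing_to_stage_moves(square)
```

    A real system would interpolate each segment at the desired feed rate; the sketch only captures the ordering of shutter and stage commands.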

    Characterisation with a SEM

    According to Baldacchini at Newport Corporation, then, a scanning electron microscope proves to be an invaluable tool for characterising products and verifying nanoscale structures.

    If you would like to learn even more about how TAC utilises SEM to verify nanoscale structures, you can click here to download the detailed Case Study.

    Topics: 3D Printing, Electronics

    About the author:
    Jake Wilkinson is an editor for AZoNetwork, a collection of online science publishing platforms. Jake spends his time writing and interviewing experts on a broad range of topics covering materials science, nanoscience, optics, and clean technology.

    For further information, application support, demo or quotation requests, please contact us on 01582 764334 or click here to email.

    Lambda Photometrics is a leading UK Distributor of Characterisation, Measurement and Analysis solutions with particular expertise in Electronic/Scientific and Analytical Instrumentation, Laser and Light based products, Optics, Electro-optic Testing, Spectroscopy, Machine Vision, Optical Metrology, Fibre Optics and Microscopy.

  • FEG vs. Tungsten source in a scanning electron microscope (SEM): what’s the difference?

    After a few years of operating a transmission electron microscope (TEM) during my postgraduate studies, in 2006 I started my career in electron microscopy as an SEM operator for a biological and medical research centre in York (United Kingdom). Although I had never operated an SEM before, I found it relatively easy to switch from TEM to SEM.

    My first SEM, equipped with a tungsten source, was getting too old and difficult to maintain. For that reason, it was replaced by two brand-new SEMs: the first equipped with a tungsten source and the second with a Field Emission Gun (FEG). The tungsten system was considered the ‘workhorse’ and was used by many co-workers and researchers.

    However, challenging specimens (such as nanoparticles and beam-sensitive specimens) could be difficult to image on the tungsten system because of its limited resolution, whereas with the FEG source those difficult specimens were much easier to image. The FEG system allowed us to see things that we couldn’t resolve with a tungsten system. It was like exploring and discovering a completely new world. Ever since that day, I’ve been in love with the FEG source.

    In this blog, I would like to make you enthusiastic too and explain why I prefer using an FEG source in an SEM system. You can learn what the main differences are between a tungsten thermionic emitter and a field emission source, and find out how an FEG source could enhance your research.

    Thermionic emission sources vs. field emission sources

    • Thermionic emission sources
      Typically, thermionic filaments are made of tungsten (W) in the form of a V-shaped wire. They are resistively heated until the electrons overcome the minimum energy needed to escape the material (the work function) and are released, hence the term thermionic.
    • Field emission sources
      For a field emission source, a fine, sharp, single crystal tungsten tip is employed. An FEG emitter gives a more coherent beam and its brightness is much higher than the tungsten filament. Electrons are emitted from a smaller area of the FEG source, giving a source size of a few nanometers, compared to around 50 μm for the tungsten filament. This leads to greatly improved image quality with the FEG source. In addition, the lifetime of an FEG source is considerably longer than for a tungsten filament (roughly 10,000 hours vs 100-500 hours), although a better vacuum is required for the FEG, 10-8 Pa (10-10 torr), compared with 10-3 Pa (10-5 torr) for tungsten, as shown in Figure 1.

    There are two types of FEG sources: Cold and Schottky FEGs

    For a so-called cold field emission source, heating of the filament is not required: it operates at room temperature. However, this type of source is prone to contamination and requires the most stringent vacuum conditions (10⁻⁸ Pa, 10⁻¹⁰ torr). Regular, rapid heating (‘flashing’) is required to remove this contamination. The spread of electron energies is very small for a cold field emitter (0.3 eV) and the source size is around 5 nm.

    Other field emission sources, known as thermal and Schottky sources, operate at lower field strengths. The Schottky source is heated, and zirconium dioxide is dispensed onto the tungsten tip to lower its work function further. The Schottky source is slightly larger, 20–30 nm, with a small energy spread (about 1 eV).

    It starts with sample preparation

    When switching from a tungsten to an FEG emitter, it is worth noting that specimen preparation becomes extremely critical for obtaining high-resolution, high-magnification images of any specimen.

    In general, samples are mounted rigidly on a specimen holder or stub using a carbon ‘conductive’ adhesive tab. These carbon tabs are only partially conductive, or not conductive at all, and can lead to charging artefacts. Hence, carbon tabs might be suitable for a tungsten system, but they are inappropriate for an FEG system.

    For high-resolution imaging on an FEG system, I always try to avoid using carbon stickers. Specimens such as nanoparticles or fine powders should be prepared directly onto an aluminum pin stub, for example.

    For conventional imaging in the SEM, specimens must be electrically conductive, at least at the surface, and electrically grounded to prevent the accumulation of electrostatic charge (e.g. using silver paint, or aluminum or copper tape).

    Non-conducting materials are usually coated with an ultra-thin layer of electrically conducting material, such as gold, gold/palladium alloy, platinum, platinum/palladium, iridium, tungsten or chromium. I recommend using the metals and thicknesses below for tungsten and FEG sources:

    • Metals:
      Au, Au/Pd (Tungsten source)
      Pt, Pd/Pt, Ir, W (FEG source)
    • Thickness:
      5–10 nm for low magnification
      2–3 nm for high resolution (the thinner, the better)

    Tungsten source vs. FEG source: imaging differences

    FEG sources produce an electron beam that is smaller in diameter, more coherent and up to three orders of magnitude higher in current density, or brightness, than could ever be achieved with a tungsten source.

    The result of using an FEG source in scanning electron microscopy (SEM) is a significantly improved signal-to-noise ratio and spatial resolution, compared with thermionic devices.

    Field emission sources are ideal for high resolution and low-voltage imaging in SEM. Therefore, focusing and working at higher magnification become easy for any operator.

    Topics: FEG

    About the author:
    Kay Mam - In 2006 I started my career in electron microscopy as an SEM operator for a biological and medical research centre in York (United Kingdom). With an FEG source, difficult specimens are easier to image. The FEG system allowed me to see things we couldn’t resolve before; it was like exploring and discovering a completely new world. Ever since that day, I’ve been in love with the FEG source. In 2016, I joined the Phenom Desktop SEM Application Team, working on a desktop SEM with an FEG source.


  • SEM: types of electrons, their detection and the information they provide

    Electron microscopes are very versatile instruments that can provide different types of information, depending on the user’s needs. In this blog we will describe the different types of electrons that are produced in an SEM, how they are detected, and the type of information they can provide.

    As the name implies, electron microscopes employ an electron beam for imaging. In Fig. 1, you can see the various signals that can result from the interaction between electrons and matter. These different signals carry different, useful information about the sample, and it is the microscope operator’s choice which signal to capture.

    For example, in transmission electron microscopy (TEM), as the name suggests, the transmitted electrons are detected, giving information on the sample’s inner structure. In a scanning electron microscope (SEM), two types of signal are typically detected: the backscattered electrons (BSE) and the secondary electrons (SE).

    Type of electrons in SEM

    In SEM, two types of electrons are primarily detected:
    • backscattered electrons (BSE)
    • secondary electrons (SE)

    Backscattered electrons are reflected back after elastic interactions between the beam and the sample. Secondary electrons, however, originate from the atoms of the sample: they are a result of inelastic interactions between the electron beam and the sample.

    BSE come from deeper regions of the sample, while SE originate from surface regions. Therefore, BSE and SE carry different types of information. BSE images show high sensitivity to differences in atomic number: the higher the atomic number, the brighter the material appears in the image. SE imaging can provide more detailed surface information.

    Figure 1: Electron — matter interactions: the different types of signals which are generated.

    Backscattered-electron (BSE) imaging

    These electrons originate from a broad region within the interaction volume. They are the result of elastic collisions between beam electrons and sample atoms, which change the electrons’ trajectories. Think of the electron-atom collision in terms of the so-called “billiard-ball” model, in which small particles (electrons) collide with larger particles (atoms). Heavy atoms are much stronger scatterers of electrons than light atoms, and therefore produce a higher signal (Fig. 2). The number of backscattered electrons reaching the detector increases with the sample’s atomic number (Z). This dependence of the BSE signal on atomic number helps us differentiate between phases, providing imaging that carries information on the sample’s composition. Moreover, BSE images can also provide valuable information on the crystallography, topography and magnetic field of the sample.
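    As a rough numerical illustration (an addition for this text, not from the original article), the backscatter coefficient η can be estimated from the atomic number using Reuter’s empirical polynomial fit for normal beam incidence. The sketch below compares aluminium and copper, the two materials shown in the Al/Cu example:

```python
def backscatter_coefficient(z):
    """Approximate backscatter coefficient (eta) versus atomic number Z,
    using Reuter's empirical polynomial fit for normal beam incidence."""
    return -0.0254 + 1.6e-2 * z - 1.86e-4 * z**2 + 8.3e-7 * z**3

# Aluminium (Z = 13) vs copper (Z = 29):
eta_al = backscatter_coefficient(13)   # ~0.15
eta_cu = backscatter_coefficient(29)   # ~0.30
# Copper backscatters roughly twice as many electrons as aluminium,
# so it appears brighter in the BSE image.
```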

    Figure 2: a) SEM image of an Al/Cu sample, b), c) Simplified illustration of the interaction between electron beam with aluminum and copper. Copper atoms (higher Z) scatter more electrons back towards the detector than the lighter aluminum atoms and therefore appear brighter in the SEM image.

    The most common BSE detectors are solid-state detectors, which typically contain p-n junctions. Their working principle is based on the generation of electron-hole pairs by backscattered electrons that escape the sample and are absorbed by the detector; the number of these pairs depends on the energy of the absorbed electrons. The p-n junction is connected to two electrodes, one of which attracts the electrons and the other the holes, thereby generating an electrical current that depends on the number and energy of the absorbed backscattered electrons.
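    To make the working principle concrete: in silicon it takes, on average, about 3.6 eV of deposited energy to create one electron-hole pair (a standard figure for silicon detectors, not stated in the article), so a single absorbed backscattered electron generates thousands of pairs:

```python
PAIR_ENERGY_EV = 3.6  # mean energy per electron-hole pair in silicon

def electron_hole_pairs(bse_energy_kev):
    """Approximate number of electron-hole pairs created in a silicon
    detector by one absorbed backscattered electron of the given energy."""
    return bse_energy_kev * 1_000 / PAIR_ENERGY_EV

pairs = electron_hole_pairs(20.0)  # a 20 keV electron -> ~5,600 pairs
```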

    BSE detectors are placed above the sample, concentric with the electron beam in a “doughnut” arrangement, to maximise the collection of backscattered electrons; they consist of symmetrically divided quadrants. When all quadrants are enabled, the contrast of the image reflects the atomic number (Z) of the elements present. By enabling only specific quadrants of the detector, topographical information can instead be retrieved from the image.

    Figure 3: Typical position of the backscattered and secondary electron detectors.

    Secondary electrons (SE)

    In contrast, secondary electrons originate from the surface or near-surface regions of the sample. They are the result of inelastic interactions between the primary electron beam and the sample, and have lower energy than the backscattered electrons. Secondary electrons are very useful for inspecting the topography of the sample’s surface, as you can see in Fig. 4:

    Figure 4: a) Full BSD, b) Topography BSD and c) SED image of a leaf.

    The Everhart-Thornley detector is the device most frequently used to detect SE. It consists of a scintillator inside a Faraday cage, which is positively biased to attract the low-energy SE. The collected electrons are accelerated onto the scintillator, which converts them into light that is passed to a photomultiplier for amplification. The SE detector is placed at the side of the specimen chamber, at an angle, to increase the efficiency of detecting secondary electrons.

    These two types of electrons are the signals most commonly used for SEM imaging. Not all SEM users require the same type of information, so the capability of having multiple detectors makes the SEM a very versatile tool that can provide valuable solutions for many different applications.

    Topics: Electrons, Scanning Electron Microscope

    About the author:
    Antonis Nanakoudis is Application Engineer at Thermo Fisher Scientific, the world leader in serving science. Antonis is extremely motivated by the capabilities of the Phenom desktop SEM on various applications and is constantly looking to explore the opportunities that it offers for innovative characterisation methods.


  • How next-generation composite materials are manufactured and analysed

    The technical specifications of next-generation materials are taking our technology to a completely new level, allowing us to create products with outstanding properties that were impossible to achieve in the past. These materials are the result of a huge drive toward innovation in material science and could only be achieved because of the invention of the first composite materials and their introduction into the industrial landscape.

    In this article, I describe how these next-generation materials are being developed — and equally important: how their chemical composition is analysed, and their performance is measured.

    How beneficial properties of composite materials are created and preserved

    Certain materials have outstanding properties that make them a perfect fit for a specific application. Sometimes, unfortunately, the environment degrades these materials to such an extent that they cannot easily be used: they require continuous repair and replacement, compromising all the advantages that come from their use.

    By creating multiple layers, or applying a coating, such delicate materials can be shielded and used, with all the benefits that they bring.


    Figure 1: Glass sheet coated with different materials. The multiple layers add specific properties to the product.

    For example, introducing nanofibres into a slab can dramatically improve its resistance to tension, flexion or torsion. These materials normally feature a matrix (the external part of the material, directly exposed to the stress) that is supported by a network of fibres. When stress is applied to the material, it is transferred to the fibres. The fibres can easily handle the applied force, responding with an elastic deformation, and as soon as the stress is removed they bring the material back to its original state.
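    This load-sharing can be made quantitative with the textbook rule of mixtures (an illustration added here, not part of the original article): for loading parallel to the fibres, the composite modulus is a volume-weighted average of the fibre and matrix moduli.

```python
def composite_modulus(e_fibre, e_matrix, fibre_fraction):
    """Rule-of-mixtures upper-bound estimate of a composite's elastic
    modulus for loading parallel to the fibres (perfect bonding assumed):
    E_c = V_f * E_f + (1 - V_f) * E_m."""
    return fibre_fraction * e_fibre + (1 - fibre_fraction) * e_matrix

# Illustrative values only: ~70 GPa glass fibre in a ~3 GPa polymer matrix
e_c = composite_modulus(70.0, 3.0, 0.30)  # ~23 GPa
```

    Even a modest fibre fraction raises the stiffness several-fold, which is why the fibre-matrix bond (and the coatings discussed below) matters so much.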

    This stress-transfer process is what led to the creation of self-healing materials. A typical example is the plastic cover of some smartphones that, when scratched, can recover in a matter of minutes. If the scratch is not too deep, it will completely disappear and the ‘brand-new’ feeling of the phone will last longer.

    The crafting of these materials requires high-level engineering and is the result of a large investment in research. In particular, scientists have focused their attention on how to transfer the stress from the matrix to the fibres without the latter slipping inside the structure. Several solutions have been investigated, such as creating a complex fibrous skeleton or coating the fibres with a material that improves the shear-stress transmission at the fibre-matrix interface.

    Figure 2 & 3: Different kinds of fibre weaving offer different resistance to stress. The appropriate weaving technique is chosen according to the application.

    How next-generation composite materials are analysed and measured

    As these investigations were performed on nano-scaled materials, electron microscopes were employed for the analysis and measurements. With a desktop scanning electron microscope (SEM) it is possible to evaluate the diameter of the fibres and monitor how they change along the structure. At the same time, it is also possible to analyse the quality and chemical composition of the coating locally, in order to verify that the adhesion of the fibre to the matrix is optimised. This can be done with energy-dispersive X-ray spectroscopy (EDS).

    Composite materials are not a recent invention, by the way. Ancient populations inhabiting the European continent were already mixing different types of materials for decorative or practical uses. One example is the discovery of archaeological grave goods in the imperial and royal tombs in Speyer Cathedral in Speyer, Germany, which showed that textile fibres had been mixed with golden threads.

    Within the KUR project “Conservation and restoration of mobile cultural assets” in Germany, electron microscopy has been used successfully to perform numerous analyses of the tombs’ contents. Download the free case study to discover how a desktop SEM was used to investigate fibre and leather details without damaging the samples or requiring additional sample preparation:

    Topics: Fibres imaging & analysis, Materials Science, EDX/EDS Analysis

    About the author:
    Luigi Raspolini is an Application Engineer at Thermo Fisher Scientific, the world leader in serving science. Luigi is constantly looking for new approaches to materials characterisation, surface roughness measurements and composition analysis. He is passionate about improving user experiences and demonstrating the best way to image every kind of sample.


  • How to spot astigmatism in Scanning Electron Microscopy (SEM) images

    You may have heard of astigmatism as a medical condition that causes visual impairment in up to 40% of adults [1], but how does this apply to electron microscopy? First, let’s consider what the word astigmatism actually means: it is derived from the negative prefix ‘a-’ (without) + ‘stigmat-’ (mark, or point, in Ancient Greek) + ‘-ism’ (condition). In a perfect optical system, a lens has only one focal point and is stigmatic. When a lens has more than one focal point, we refer to it as astigmatic. This happens when the lens is elongated in either the sagittal (y-axis) or tangential (x-axis) plane, resulting in two focal points (foci).

    In electron microscopy, astigmatism arises due to imperfections in the lens system. At high magnification, the imperfections become more apparent, hampering the quality of your images. As a result, round objects might appear elliptical or out of focus. Fortunately, electron microscopes have a component called a stigmator, which is used to rectify the problem.

    In this blog, I want to show you what astigmatism looks like. Metal spheres, such as the tin spheres imaged below, are an ideal sample for spotting it.

    How to spot astigmatism in a scanning electron microscopy image

    The image below shows tin spheres, deposited on carbon, at low magnification (1,500×, field of view 179 µm). There are no obviously visible distortions in the image, but what happens if we increase the magnification?

    Figure 1. At low magnification, astigmatism is not overly apparent.

    When we increase the magnification to 50,000× (field of view 5.37 µm), we notice that the tin spheres seem out of focus:

    Figure 2. At high magnification, astigmatism becomes apparent.

    Knowing that we should be able to see more detail when using a SEM, perhaps the image is simply out of focus? Let’s see what happens when we adjust the focus:

    Figure 3. When out of focus, round objects become elongated on astigmatic images.

    Figure 3 shows that when the image is astigmatic and out of focus, elongation of round objects occurs. Notice how the smaller spheres appear almost egg-shaped. In Figure 2, we see that an astigmatic, in-focus image simply appears to be out of focus.

    How can I tell if my SEM image is astigmatic?

    In the above examples (Figure 2 and 3), we see that when an image is astigmatic and out of focus, it becomes elongated. When astigmatic images are over- or under-focused, they become elongated in perpendicular directions. This means that you can test whether your image is astigmatic by over- and under-focusing it.
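    The over/under-focus test can be illustrated with a toy geometric model (my own sketch, with arbitrary units and made-up focal positions): an astigmatic lens has separate focal planes for the x- and y-directions, so the spot elongates one way below both planes and the perpendicular way above them.

```python
def spot_size(z, f_x=0.0, f_y=2.0):
    """Toy model of an astigmatic lens: the beam comes to a focus in the
    x-plane at z = f_x and in the y-plane at z = f_y, and the spot's width
    and height grow linearly with distance from each focal plane."""
    return abs(z - f_x), abs(z - f_y)

under_focus = spot_size(-2.0)  # (2.0, 4.0): elongated along y
over_focus = spot_size(4.0)    # (4.0, 2.0): elongated along x, i.e. the
                               # perpendicular direction
best_focus = spot_size(1.0)    # (1.0, 1.0): round, but still blurred;
                               # only the stigmator can fix this
```

    The model also shows why stigmation matters: at the compromise focus midway between the two focal planes the spot is round but never sharp, exactly as in the in-focus astigmatic image above.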

    In the three images below, we show the process of testing whether an image is astigmatic or not. In Figure 4, the lab operator has obtained an image and wants to know if it is astigmatic.

    Figure 4. To the untrained eye, this astigmatic image might just appear to be out of focus.

    Not convinced that the best-quality image has been obtained, the microscope operator over- and under-focuses the image:

    Figure 5. After over- and under-focusing the image, it is clear to the lab operator that the image was astigmatic, due to the visible elongation in perpendicular directions.

    The microscope operator now knows that the microscope’s stigmator needs to be used to improve the quality of the image. After adjusting the stigmator in both X and Y directions, the operator again tests the image, and sees the following:

    Figure 6. When an image is stigmatic, no elongation occurs when (A) under-focused or (C) over-focused. (B) When stigmatic and in focus the image is crisp.


    The operator is now ready to acquire beautiful images for a publication or report.

    In summary, please consider the table below to see what your image will look like depending on how well it is stigmated and/or focused:

    Table 1. SEM image quality depending on stigmation and focus.

    1. Hashemi H et al. (2018) Global and regional estimates of prevalence of refractive errors: Systematic review and meta-analysis. Journal of Current Ophthalmology 30(1): 3–22.

    Topics: Scanning Electron Microscope

    About the author:
    Willem van Zyl is Application Specialist at Thermo Fisher Scientific, the world leader in serving science. He is excited by analytical instruments that are accessible and user-friendly, and truly believes that a SEM image is worth a kazillion words.


  • Measure Latency in Optical Networks with Picosecond Accuracy

    In optical networks where action on a message or signal is time-critical, latency becomes a key design element. Latency in communications networks comprises the networking and processing of messages, as well as the transmission delay through the physical fibre. Measuring and optimising this optical transmission delay can be critical when diagnosing latency issues in a data centre or maintaining quality control in the production of precision fibre links. Fortunately, the Luna OBR 4600 can measure this latency with picosecond accuracy.

    Specifically, latency is the time delay for a light signal to travel, or propagate, through an optical transmission medium. The latency t is related to the length of an optical fibre by:

    t = n L / c

    where L is the length, c is the speed of light in a vacuum and n is the index of refraction of the optical fibre.

    Because the Luna OBR can measure loss and reflections in short fibre networks with ultra-high resolution (10 µm sampling resolution) and no dead zones, it is straightforward to extract the exact length, or latency, of a segment of fibre or waveguide by analysing the time delay between reflection events. In fact, the OBR 4600 can measure latency or length this way with an accuracy of better than 0.0034% of the total length (or latency). For a 30 m optical fibre, for example, this corresponds to an overall length-measurement accuracy of better than 1 mm, equivalent to a latency-measurement accuracy of about 5 ps for standard fibre. Note that this is the absolute accuracy; the actual measurement resolution is much higher.
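    These numbers can be checked directly from t = nL/c. The sketch below assumes a group index of about 1.468 for standard single-mode fibre (a typical value, not a Luna specification):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def fibre_latency(length_m, n=1.468):
    """One-way propagation delay t = n * L / c through an optical fibre."""
    return n * length_m / C

def latency_to_length(delay_s, n=1.468):
    """Invert the relation: fibre length corresponding to a given delay."""
    return delay_s * C / n

t_30m = fibre_latency(30.0)          # ~147 ns for a 30 m fibre
dt_1mm = fibre_latency(0.001)        # a 1 mm length error is ~4.9 ps
d_95ps = latency_to_length(95e-12)   # 95 ps corresponds to ~19 mm
```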

    The example illustrates a typical application: measuring differences in the length or latency of two fibre segments, each approximately 50 m long. An OBR 4600 scans both segments, and the latency of each segment is indicated by the distance between the two reflections at its beginning and end connectors. In this example, the difference in latency is found to be 95 ps which, for this fibre, is equivalent to a difference of about 19.3 mm in length.

    Measuring length and latency is only one application of the versatile OBR reflectometer. For an overview of the OBR and common applications for ultra high resolution optical reflectometry, download Luna’s OBR white paper below.

    Fibre Optic Test & Measurement with Optical Backscatter Reflectometry (OBR)

    Optical communications technology is rapidly evolving to meet the ever-growing demand for ubiquitous connectivity and higher data rates. As signalling rates increase and modulation schemes become more complex, guaranteeing a high-fidelity optical transmission medium is becoming even more critical.

    Additionally, modern networks are relying more on photonic integrated circuits (PICs) based on silicon photonics or other developing technologies, introducing additional variables into the design and deployment of robust high bandwidth optical systems.

    Measurement and full characterisation of loss along the light path is a fundamental tool in the design and optimisation of these components and fibre optic networks.

    While different types of reflectometers are available to measure return loss, insertion loss, and event location for different types of optical systems, Optical Backscatter Reflectometry (OBR) is a very high resolution, polarisation-diverse implementation of optical reflectometry that dramatically improves sensitivity and resolution.

    See what you’ve been missing with traditional optical instrumentation in this white paper.

    Topics include:

    • Reflectance And Return Loss
    • Measuring Return Loss
    • Optical Backscatter Reflectometry (OBR)
    • Luna OBR Reflectometers

    Click here to download the white paper.

    For more information please email or call 01582 764334.

    Lambda Photometrics is a leading UK Distributor of Characterisation, Measurement and Analysis solutions with particular expertise in Electronic/Scientific and Analytical Instrumentation, Laser and Light based products, Optics, Electro-optic Testing, Spectroscopy, Machine Vision, Optical Metrology, Fibre Optics and Microscopy.

  • SEM automation – the future of scanning electron microscopy

    Kai van Beek, Director of Market Development at Thermo Fisher Scientific™, analyses the market and how the company's products fit customers' current and future needs. Together with his team, he defines the roadmap for product development. For almost 20 years, he has been working with automated scanning electron microscopy (SEM) solutions. In this interview, Kai looks back over many years of SEM experience and talks about current and future automated SEM products, the demands of the market and his personal vision regarding the automation of SEM.

    Thermo Scientific™ Phenom Desktop Electron Microscopy Solutions offer innovative SEM products such as automated solutions. For whom do you develop such products?
    There are lots of people who, during their daily work, need to take measurements with SEM systems. Often these people have to look at hundreds of images or repeat the same measurement again and again. We want to make these people more productive.

    And who are these people?
    Very different types: from a scientist to an industrial researcher to a production line worker. In many cases, they have to examine lots of data extracted from the SEM images.

    Any examples?
    Gunshot residue analysis is a good example. When a firearm is discharged, a plume of distinct particles is emitted and these particles can settle on hands and clothing. Once a sample is extracted from the hands or clothing, a forensic scientist needs to analyse thousands of particles on the sample, looking for those few particles that are distinctly from the discharge of a firearm.

    Another good example is the cleanliness of production processes. During the manufacturing of an assembly, dirt and dust are introduced from the production hall and the manufacturing steps, for example, the drilling of a hole. A production engineer will collect samples and analyse thousands of particles to determine if the production process is still clean enough to produce good products. I want people to be able to answer such questions in a fast and easy manner.

    Figure 1: Gunshot residue analysis with a SEM

    That means in the field of material analysis, automated SEM solutions can make the work much faster?
    Yes, of course – and moreover an automated analysis avoids human bias. When you sit in front of a system and look at a SEM image, your eye always gravitates to the characteristics that stand out. But there might be 50 other features that you ignore, although they could tell you something too. Moreover, especially in industry, you have multiple users – so it is difficult to obtain comparable results without automation.

    Just to be clear: this means time-saving and elimination of human bias are the main advantages of automated SEM systems compared to non-automated ones?
    Well, there are basically three main reasons why automation of SEM is important. Eliminating human bias is one. Another is statistical characterisation: only automated systems are able to image a huge number of different particles or spots within a reasonable time. And the third reason is the "needle in a haystack" problem: I want to find a specific feature within a large number of other features. The last two reasons I just mentioned save lots of time.

    Regarding commercial products, it seems that automated solutions and applications for scanning electron microscopes are a rather new phenomenon. Is that true?
    Not necessarily. I would say it is actually quite old – it probably goes back to the 1970s. As soon as people had a SEM, they realised that automation would make things easier. A simple example: if you want to make not just one image of your sample but 100, obviously you want to automate that process.

    So could we say that researchers, in particular, have been automating their imaging and analysis ever since they started working with SEM?
    Yes, but also those working in fields outside of research sought to automate their SEM. Take gunshot residue (GSR) analysis, a SEM application that was automated very early. The operator had to look at the sample and search for the gunshot residue particles in order to determine if a firearm had been used in a crime. That is the so-called "needle in the haystack" problem – one goes crazy trying to find these particles.

    But isn't it true that automated SEM systems are considered a new trend?
    I agree. These older automated SEMs required well-trained operators to run the system. What has changed is that there are now systems like the Phenom desktop SEM that are very easy both to operate and to automate.

    So, that means that modern SEM systems are smaller, faster and easier to operate compared to older models, right?
    …and, therefore, the system is cheaper to operate. Analysing and storing large amounts of data is also much easier today. Together, all these things have made the technique, and especially automated solutions, more accessible.

    What was really necessary to develop automated SEM solutions for a broader audience?
    In particular, the stability of the system. Obviously, with automated SEMs the operator leaves the system working alone. In order to obtain good images and to be able to trust the results, parameters such as focus and contrast must remain stable across different images. Sample preparation can also now be done very routinely. A longer-lived electron source is important too: conventional sources used to burn out after 100 hours, so if your system runs many hours per day, you have to replace the filament every week. Our SEMs, for example, use a long-life electron source that can run continuously for long periods of time.

    You mentioned "stability of the system". What technical requirements does this condition place on the system?
    Basically, the detector and the electron source have to work in a very stable and reliable fashion. For example, earlier EDX detectors, which give us chemical information about the sample, were unstable. That has changed. Besides that, the system now is much more compact. It is possible to put it almost anywhere and immediately start collecting data.

    When you leave the system unattended, how can you be sure that the data is reliable?
    This is a very critical issue. Operators must be sure that they can trust their results – for instance when you release a product to the market or investigate a customer sample. For that reason, we have so-called reference samples that provide a fingerprint of how the system behaves during automation.

    How can I imagine that? Does the Thermo Scientific™ Phenom product range use different reference samples for testing the instrument?
    Yes, depending on the purpose of the SEM there are different reference samples – like for the automotive industry or for gunshot residue. The reference samples are representative of the samples the customers will use. And before the system leaves the factory, we test it with the particular reference sample in the way the customer will use it.

    At present, what are the most used applications for automated SEM systems?
    I would say gunshot residue since it is one of the oldest applications. In the automotive industry, it is technically challenging to apply automated SEM. However, the automotive industry already uses our products to monitor the assembly process of certain parts, and also to microscopically inspect the final product. Another application is for imaging microscopic fibres – there are many different types of fibres and they hold many things together. Finally, analysis of minerals is another very important field.

    Figure 2: An example of automatic detection of fibres with SEM

    When it comes to the Phenom product range, what kind of automated SEM products in particular are you offering?
    Our products should enable our customers to make the world healthier, cleaner and safer. For dedicated markets, we have specific solutions such as the Thermo Scientific™ Phenom AsbestoMetric Software, which enables operators to detect asbestos fibres automatically. These fibres are hazardous, and the software allows for a quick risk assessment. And as already mentioned, for forensic purposes we offer a gunshot residue desktop SEM, the Thermo Scientific™ Phenom Perception GSR Desktop SEM. Moreover, we have dedicated solutions for additive manufacturing and the automotive industry.

    And for scientists in the lab?
    For example, there are automated scripts for imaging and data analysis and moreover, we allow our customers to make their own scripts for automating their workflow. The users of our systems are very creative. They have their own good ideas and they do not want to wait for us to implement them. And we encourage them to start realising their ideas.

    Are automated SEM systems still more suitable and important for big companies with large production lines, rather than for small companies or research labs?
    It does not depend on the size of the company or the lab. It really depends on the question you want to answer. Among our customers, there are big as well as small companies – both asking for automated solutions. However, that was different in the past when systems were much larger and more difficult to operate. At that time, only large companies were able to afford such systems, including the trained experts to operate them.

    Soon there will be an ISO standard in place specifying, among others, the qualification of the SEM for quantitative measurements. Will this standard drive automation?
    In my opinion, these standards normalise the best practices. Say we introduce a new product for automation. A couple of early adopters, who see the value, will buy it. But not everybody is like this; some people and companies prefer to wait a little bit. For these people, the ISO standards are helpful since they describe the best practices. Besides, the ISO standards help everyone by having a common language. Especially in industry, you always have this sort of communication between a supplier and a customer. Let’s say the automotive industry buys steel from a steel plant and now the quality of the steel can be tested according to a standard. That means the ISO standard makes it simpler to meet the expectations of both partners. And probably that will drive the development of automated SEM systems.

    Are there any new automated SEM applications on the horizon?
    The big one that we are seeing is nanoparticles. Plastic nanoparticles, for instance, are showing up everywhere. Our customers want to start monitoring this. Where are these nanoparticles, what are they made of and where do they come from? The interest can, for example, be driven by health or environmental concerns. Another field – which evolves because data storing is so cheap and easy – is large area mapping.

    For what reason?
    It can be used to test the uniformity of a material, which is becoming more and more important. The SEM data can then be combined with information from other sources. Let's assume I take a picture with my cell phone of a part that has failed. Afterwards, I make SEM images to get microscopic information and, in addition, I generate some chemical information, and so on. There are a whole bunch of different data sources. I literally build up a "picture" of this part containing all the information I have collected. Eventually, I might be able to understand why it has failed. This kind of correlated data will be more and more important in the future.

    Are there any new markets with potential demand for innovative SEM solutions?
    Electric vehicles are one market we can think of. This development will change the requirements for the car industry. Electric vehicles come with a whole bunch of electronic parts and devices. All these electronics, and the batteries themselves, must be controlled and microscopically imaged.

    What is your vision regarding automation in SEM?
    From a personal point of view, the value of an automated SEM is really to show small features and the value of the chemistry is to differentiate between features.

    I think the next level of automation is the analysis of the interface of materials or multiple layers of materials. This is only happening in academia, and not in an automated manner. Take steel, for example: you could analyse the interface between the steel and the inclusion. Or think of airborne particles. They can be made of plastic, and this plastic can have a coating. If you swallow such a particle, it makes a big difference if it is coated with a certain chemical or not.

    Last but not least, a very personal question. What motivates you during work?
    What personally drives me is making products that people can really use. It gives me great pleasure to see people work productively when using our tools. I want them to obtain the best possible results at that time. They should gain insights by using our products which they previously did not get. And if they are happy, I am happy.

    Topics: Scanning Electron Microscope Automation

    About the author:
    Rose Helweg is the Sr Digital Marketing Specialist for Thermo Scientific™ Phenom Desktop SEM. She is driven to unveil the hidden beauty of the nanoworld and by the performance and versatility of the Phenom Desktop SEM product range. She is dedicated to developing new relevant stories about high-tech innovation and the interesting world of electron microscopy.

    For further information, application support, demo or quotation requests  please contact us on 01582 764334 or click here to email.


  • Optical Reflectometers – How Do They Compare?

    Measuring the return loss along a fibre optic network, or within a photonic integrated circuit, is a common and very important technique when characterising a network’s or device’s ability to efficiently propagate optical signals. Reflectometry is a general method of measuring this return loss and consists of launching a probe signal into the device or network, measuring the reflected light and calculating the ratio between the two.
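    The ratio of reflected to launched power is conventionally expressed in decibels. A minimal sketch of the calculation follows; the power levels are made-up values for illustration, not measurements:

```python
import math

# Return loss: the ratio of reflected to launched power, in dB.
p_launched_mw = 1.0      # probe signal power launched into the device (assumed)
p_reflected_mw = 1e-4    # total reflected power measured back at the source (assumed)

return_loss_db = -10 * math.log10(p_reflected_mw / p_launched_mw)
print(f"return loss: {return_loss_db:.1f} dB")   # return loss: 40.0 dB
```

    A larger return loss in dB means less light is reflected back towards the source.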

    Spatially-resolved reflectometers can map the return loss along the length of the optical path, identifying and locating problems or issues in the optical path. There are three established technologies available for spatially-resolved reflectometry:

    • Optical Time-Domain Reflectometry (OTDR)
    • Optical Low-Coherence Reflectometry (OLCR)
    • Optical Frequency-Domain Reflectometry (OFDR)

    The OTDR is currently the most widely used type of reflectometer when working with optical fibre. OTDRs work by launching optical pulses into the optical fibre and measuring the travel time and strength of the reflected and backscattered light. These measurements are used to create a trace or profile of the returned signal versus length. OTDRs are particularly useful for testing long fibre optic networks, with ranges reaching hundreds of kilometres. The spatial resolution (the smallest distance over which the instrument can resolve two distinct reflection events) is typically 1 or 2 metres. All OTDRs, even specialised 'high-resolution' versions, suffer from dead zones – the distance after a reflection in which the OTDR cannot detect or measure a second reflection event. These dead zones are most prevalent at the connector to the OTDR and at any other strong reflectors.

    OLCR is an interferometer-based measurement that uses a wideband, low-coherence light source and a tunable optical delay line to characterise optical reflections in a component. While an OLCR measurement can achieve high spatial resolution, down to tens of micrometres, the overall measurement range is limited, often to only tens of centimetres. The usefulness of OLCR is therefore limited to inspecting individual components, such as fibre optic connectors.

    Finally, OFDR is an interferometer-based measurement that utilises a wavelength-swept laser source. Interference fringes generated as the laser sweeps are detected and processed using the Fourier transform, yielding a map of reflections as a function of length. OFDR is well suited to applications that require a combination of high speed, sensitivity and resolution over short and intermediate lengths.

    Luna's Optical Backscatter Reflectometers (OBRs) are a special implementation of OFDR, adding polarisation diversity and optical optimisation to achieve unmatched spatial resolution. An OBR can quickly scan a 30-metre fibre with a sampling resolution of 10 micrometres, or a 2-kilometre network with 1-millimetre resolution. This graphic summarises the landscape of these established technologies for optical reflectometry. By mapping the measurement range and spatial resolution of the most common technologies, the plot illustrates the unique application coverage of OBR.
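    The OFDR principle – fringes from a swept laser, Fourier-transformed into a reflection map – can be sketched numerically. The sweep width, group index and reflector position below are illustrative assumptions, not parameters of any Luna instrument:

```python
import numpy as np

# Illustrative OFDR sketch: locate a single reflector from swept-laser fringes.
c = 299_792_458        # speed of light in vacuum, m/s
n = 1.468              # assumed group index of standard fibre
z_true = 0.30          # assumed reflector position along the fibre, m

# The laser sweeps 5 THz of optical frequency; a reflector at round-trip
# delay tau produces interference fringes cos(2*pi*f*tau) at the detector.
freq = np.linspace(0.0, 5e12, 2**16)       # swept optical frequency axis, Hz
tau = 2 * n * z_true / c                   # round-trip delay, s
fringes = np.cos(2 * np.pi * freq * tau)

# The Fourier transform maps fringe (beat) frequency -> delay -> distance.
spectrum = np.abs(np.fft.rfft(fringes))
delays = np.fft.rfftfreq(len(freq), d=freq[1] - freq[0])   # delay axis, s
z_est = delays[np.argmax(spectrum[1:]) + 1] * c / (2 * n)  # skip the DC bin

print(f"reflector located at {z_est:.4f} m")
```

    The delay-bin spacing is the reciprocal of the total sweep width, which for a 5 THz sweep corresponds to roughly 20 µm in fibre – illustrating why wide-sweep OFDR achieves such fine spatial resolution.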

  • What is the Value of Shortwave Infrared?

    Sensing in the shortwave infrared (SWIR) range (wavelengths from 0.9 to 1.7 microns) has been made practical by the development of Indium Gallium Arsenide (InGaAs) sensors. Sensors Unlimited, Inc., part of UTC Aerospace Systems, is the pioneer of this technology and a clear leader in advancing the capability of SWIR sensors. Founded in 1991 to create lattice-matched InGaAs structures, Sensors Unlimited, Inc. quickly grew as the telecom industry recognised the exceptional capabilities of this remarkable material.


    Click here to read the complete article.


    To speak with a sales/applications engineer please call 01582 764334 or click here to email


  • Vibration-Tolerant Interferometry

    QPSI™ Technology Shrugs Off Vibration from Common Sources

    When image stabilisation became available on digital cameras, it vastly reduced the number of photos ruined by camera shake. The new technology eliminated the effects of common hand tremors, greatly improving image quality in many photo situations.

    Animated comparison of a PSI measurement with fringe print-through due to vibration, and the same surface measured with QPSI™ technology – free of noisy print-through.

    In precision interferometric metrology, a similar problem – environmental vibration – has ruined countless measurements, like the one in the animation above. Vibration can significantly affect measurement results and spatial frequency analysis, and it is difficult to make high quality optics if you cannot measure them reliably. Solving the vibration problem can be costly, requiring the purchase of a vibration isolation system or a special dynamic interferometer.

    ZYGO's QPSI™ technology is truly a breakthrough for many optical facilities because it eliminates problems due to common sources of vibration, providing reliable data the first time you measure. QPSI measurements require no special setup or calibration, and cycle times are typically within a second or two of standard PSI measurements.

    Key Features:

    • Eliminates ripple and fringe print-through due to vibration
    • High-precision measurement; same as phase-shifting interferometry (PSI)
    • Requires no calibration, and no changes to your setup
    • Easily enabled/disabled with a mouse click

    QPSI is available exclusively from ZYGO on Verifire™, Verifire™ HD, Verifire™ XL, and also on DynaFiz® interferometer systems that have the PMR option installed (phase measuring receptacle). These systems are easy-to-use, on-axis, common-path Fizeau interferometers – the industry standard for reliable metrology – making them the logical choice for most surface form measurements.

    QPSI™ Simplifies Production Metrology
    A ZYGO interferometer with QPSI technology is capable of producing reliable high-precision measurements in the presence of environmental vibration from common sources such as motors, pumps, blowers, and personnel. Unless your facility is free of these sources, your business will likely benefit from QPSI technology.

    While QPSI can completely solve many common vibration issues, environments that have extreme vibration and/or air turbulence may require the additional capability of DynaPhase® dynamic acquisition, which is included by default with ZYGO's DynaFiz® interferometer. DynaPhase® is also available as an option on most new Verifire systems from 2018 onwards.
    We can help determine the best solution for your particular situation.

    Click here to read further information on DynaPhase® Dynamic Acquisition for Extreme Environments: confidence in metrology, no matter the conditions.

    For advice and a demonstration please call 01582 764334 or click here to email.
