Characterisation, Measurement & Analysis
+44(0)1582 764334


  • SEM automation – the future of scanning electron microscopy

    Kai van Beek, Director of Market Development at Thermo Fisher Scientific™, analyses the market and how the company's products fit customers' current and future needs. Together with his team, he defines the roadmap for product development. For almost 20 years, he has been working with automated scanning electron microscopy (SEM) solutions. In this interview, Kai looks back over many years of SEM experience and talks about current and future automated SEM products, the demands of the market and his personal vision regarding the automation of SEM.

    Thermo Scientific™ Phenom Desktop Electron Microscopy Solutions offer innovative SEM products such as automated solutions. For whom do you develop such products?
    There are lots of people who, during their daily work life, need to perform measurements with SEM systems. Often these people have to look at hundreds of images or repeat the same measurement again and again. We want to make these people more productive.

    And who are these people?
    Very different types: from a scientist to an industrial researcher to a production line worker. In many cases, they have to examine lots of data extracted from the SEM images.

    Any examples?
    Gunshot residue analysis is a good example. When a firearm is discharged, a plume of distinct particles is emitted and these particles can settle on hands and clothing. Once a sample is extracted from the hands or clothing, a forensic scientist needs to analyse thousands of particles on the sample, looking for those few particles that are distinctly from the discharge of a firearm.

    Another good example is the cleanliness of production processes. During the manufacturing of an assembly, dirt and dust is introduced from the production hall and the manufacturing steps, for example, the drilling of a hole. A production engineer will collect samples and analyse thousands of particles to determine if the production process is still clean enough to produce good products. I want people to be able to answer such questions in a fast and easy manner.

    Figure 1: Gunshot residue analysis with a SEM

    That means in the field of material analysis, automated SEM solutions can make the work much faster?
    Yes, of course – and moreover an automated analysis avoids human bias. When you sit in front of a system and you look at a SEM image, your eye always gravitates to those characteristics that stand out. But there might be 50 other features, which you ignore, although they could tell you something, too. And moreover, especially in industry, you have multiple users – so, it is difficult to obtain comparable results without automation.

    Just to be clear: this means time-saving and elimination of human bias are the main advantages of automated SEM systems compared to non-automated ones?
    Well, there are basically three main reasons why automation of SEM is important. Eliminating human bias is one. Another one is statistical characterisation: only automated systems are able to image a huge number of different particles or spots within a reasonable time. And the third reason is the “needle in a haystack” problem: I want to find a specific feature within a large number of other features. And the last two reasons I just mentioned save lots of time.

    Regarding commercial products, it seems that automated solutions and applications for scanning electron microscopes are a rather new phenomenon. Is that true?
    Not necessarily. I would say it is actually quite old – it probably goes back to the 1970s. As soon as people had a SEM, they realised that automation would make things easier. A simple example: if you do not want to make only one image of your sample but 100. Obviously, one wants to automate that process.

    So could we say that researchers, in particular, have been automating their imaging and analysis ever since they started working with SEM?
    Yes, but those working in fields outside of research also sought to automate their SEMs. Gunshot residue (GSR) analysis, for example, is a SEM application that was automated very early. The operator had to look at the sample and search for the gunshot residue particles in order to determine whether a firearm had been used in a crime. That is the so-called “needle in the haystack” problem: one goes crazy trying to find these particles.

    But isn't it true that automated SEM systems are considered a new trend?
    I agree. Those older automated SEMs required well-trained operators to run the system. What has changed is that there are now systems like the Phenom desktop SEM that are very easy to operate – and just as easy to automate.

    So, that means that modern SEM systems are smaller, faster and easier to operate compared to older models, right?
    …and, therefore, the system is cheaper to operate. And also analysing and storing a big amount of data is much easier today. Together, all these things have made the technique, and especially automated solutions, more accessible.

    What was really necessary to develop automated SEM solutions for a broader audience?
    In particular, the stability of the system. Obviously, in automated SEMs the operator leaves the system working alone. In order to obtain good images and to be able to trust the results, several parameters such as ‘focus’ or ‘contrast’ must remain stable for different images. Also, nowadays the preparation of the sample can be done very routinely. Longer life of the electron source is also important. Conventional sources used to burn out after 100 hours. If your system runs many hours per day, then you have to replace the filament every week. Our SEMs, for example, use a long-life electron source that can run continuously for long periods of time.

    You mentioned "stability of the system". What technical requirements does this condition place on the system?
    Basically, the detector and the electron source have to work in a very stable and reliable fashion. For example, earlier EDX detectors, which give us chemical information about the sample, were unstable. That has changed. Besides that, the system now is much more compact. It is possible to put it almost anywhere and immediately start collecting data.

    When you leave the system unattended, how can you be sure that the data is reliable?
    This is a very critical issue. Operators must be sure that they can trust their results – for instance when you release a product to the market or investigate a customer sample. For that reason, we have so-called reference samples that provide a fingerprint of how the system behaves during automation.

    How can I imagine that? Does the Thermo Scientific™ Phenom product range use different reference samples for testing the instrument?
    Yes, depending on the purpose of the SEM there are different reference samples – like for the automotive industry or for gunshot residue. The reference samples are representative of the samples the customers will use. And before the system leaves the factory, we test it with the particular reference sample in the way the customer will use it.

    At present, what are the most used applications for automated SEM systems?
    I would say gunshot residue since it is one of the oldest applications. In the automotive industry, it is technically challenging to apply automated SEM. However, the automotive industry already uses our products to monitor the assembly process of certain parts, and also to microscopically inspect the final product. Another application is for imaging microscopic fibres – there are many different types of fibres and they hold many things together. Finally, analysis of minerals is another very important field.

    Figure 2: An example of automatic detection of fibres with SEM

    When it comes to the Phenom product range, what kind of automated SEM products in particular are you offering?
    Our products should enable our customers to make the world healthier, cleaner and safer. For dedicated markets, we have specific solutions such as the Thermo Scientific™ Phenom AsbestoMetric Software, which enables operators to detect asbestos fibres automatically. These fibres are hazardous, and the software allows for a quick risk assessment. And as already mentioned, for forensic purposes we offer a gunshot residue desktop SEM, the Thermo Scientific™ Phenom Perception GSR Desktop SEM. Moreover, we have dedicated solutions for additive manufacturing and the automotive industry.

    And for scientists in the lab?
    For example, there are automated scripts for imaging and data analysis and moreover, we allow our customers to make their own scripts for automating their workflow. The users of our systems are very creative. They have their own good ideas and they do not want to wait for us to implement them. And we encourage them to start realising their ideas.

    Are automated SEM systems still more suitable and important for big companies with large production lines, rather than for small companies or research labs?
    It does not depend on the size of the company or the lab. It really depends on the question you want to answer. Among our customers, there are big as well as small companies – both asking for automated solutions. However, that was different in the past when systems were much larger and more difficult to operate. At that time, only large companies were able to afford such systems, including the trained experts to operate them.

    Soon there will be an ISO standard in place specifying, among others, the qualification of the SEM for quantitative measurements. Will this standard drive automation?
    In my opinion, these standards normalise the best practices. Say we introduce a new product for automation. A couple of early adopters, who see the value, will buy it. But not everybody is like this; some people and companies prefer to wait a little bit. For these people, the ISO standards are helpful since they describe the best practices. Besides, the ISO standards help everyone by providing a common language. Especially in industry, you always have this sort of communication between a supplier and a customer. Let’s say the automotive industry buys steel from a steel plant; now the quality of the steel can be tested according to a standard. That means the ISO standard makes it simpler to meet the expectations of both partners. And that will probably drive the development of automated SEM systems.

    Are there any new automated SEM applications on the horizon?
    The big one that we are seeing is nanoparticles. Plastic nanoparticles, for instance, are showing up everywhere. Our customers want to start monitoring this. Where are these nanoparticles, what are they made of and where do they come from? The interest can, for example, be driven by health or environmental concerns. Another field – which evolves because data storing is so cheap and easy – is large area mapping.

    For what reason?
    It can be to test the uniformity of material, which is becoming more and more important. And the SEM data can then be combined with information from other sources. Let’s assume I take a picture with my cell phone of a part that has failed. Afterwards, I make SEM images to get microscopic information and in addition, I generate some chemical information, and so on. There are a whole bunch of different data sources. I literally build up a “picture” of this part containing all the information I have collected. Eventually, I might be able to understand why it has failed. This kind of correlated data will become more and more important in the future.

    Are there any new markets with potential demand for innovative SEM solutions?
    Electric vehicles is one market we can think of. This development will change the requirements for the car industry. Electric vehicles come with a whole bunch of electronic parts and devices. All these electronics and the batteries themselves must be controlled and microscopically imaged.

    What is your vision regarding automation in SEM?
    From a personal point of view, the value of an automated SEM is really to show small features and the value of the chemistry is to differentiate between features.

    I think the next level of automation is the analysis of the interface of materials or multiple layers of materials. This is only happening in academia, and not in an automated manner. Take steel, for example: you could analyse the interface between the steel and the inclusion. Or think of airborne particles. They can be made of plastic, and this plastic can have a coating. If you swallow such a particle, it makes a big difference if it is coated with a certain chemical or not.

    Last but not least, a very personal question. What motivates you during work?
    What personally drives me is to make products that people can really use. It gives me great pleasure to see people work productively when using our tools. I want them to obtain the best possible results at that time. They should gain insights by using our products that they previously could not. And if they are happy, I am happy.

    Topics: Scanning Electron Microscope Automation

    About the author:
    Rose Helweg is the Sr Digital Marketing Specialist for Thermo Scientific™ Phenom Desktop SEM. She is driven to unveil the hidden beauty of the nanoworld and by the performance and versatility of the Phenom Desktop SEM product range. She is dedicated to developing new relevant stories about high-tech innovation and the interesting world of electron microscopy.

    For further information, application support, demo or quotation requests please contact us on 01582 764334 or click here to email.

    Lambda Photometrics is a leading UK Distributor of Characterisation, Measurement and Analysis solutions with particular expertise in Electronic/Scientific and Analytical Instrumentation, Laser and Light based products, Optics, Electro-optic Testing, Spectroscopy, Machine Vision, Optical Metrology, Fibre Optics and Microscopy.

  • Optical Reflectometers – How Do They Compare?

    Measuring the return loss along a fibre optic network, or within a photonic integrated circuit, is a common and very important technique when characterising a network’s or device’s ability to efficiently propagate optical signals. Reflectometry is a general method of measuring this return loss and consists of launching a probe signal into the device or network, measuring the reflected light and calculating the ratio between the two.
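    As a concrete illustration of that ratio, return loss is conventionally expressed in decibels. A minimal Python sketch (the function name and power values here are illustrative, not taken from any particular instrument):

```python
import math

def return_loss_db(p_launched_w: float, p_reflected_w: float) -> float:
    """Return loss in dB: the ratio of launched to reflected optical power.

    A larger value means less light is reflected back towards the source.
    """
    return 10.0 * math.log10(p_launched_w / p_reflected_w)

# A connector reflecting 1 microwatt of a 1 milliwatt probe signal:
print(round(return_loss_db(1e-3, 1e-6), 3))  # → 30.0 (dB)
```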

    Spatially-resolved reflectometers can map the return loss along the length of the optical path, identifying and locating problems or issues in the optical path. There are three established technologies available for spatially-resolved reflectometry:

    • Optical Time-Domain Reflectometry (OTDR)
    • Optical Low-Coherence Reflectometry (OLCR)
    • Optical Frequency-Domain Reflectometry (OFDR)

    The OTDR is currently the most widely used type of reflectometer when working with optical fibre. OTDRs work by launching optical pulses into the fibre and measuring the travel time and strength of the reflected and backscattered light. These measurements are used to create a trace or profile of the returned signal versus length. OTDRs are particularly useful for testing long fibre optic networks, with ranges reaching hundreds of kilometres. The spatial resolution (the smallest distance over which the instrument can resolve two distinct reflection events) is typically 1 or 2 metres. All OTDRs, even specialised ‘high-resolution’ versions, suffer from dead zones – the distance after a reflection in which the OTDR cannot detect or measure a second reflection event. These dead zones are most prevalent at the connector to the OTDR and at any other strong reflectors.

    OLCR is an interferometer-based measurement that uses a wideband, low-coherence light source and a tunable optical delay line to characterise optical reflections in a component. While an OLCR measurement can achieve high spatial resolution, down to tens of micrometres, the overall measurement range is limited, often to only tens of centimetres. The usefulness of OLCR is therefore limited to inspecting individual components, such as fibre optic connectors.

    Finally, OFDR is an interferometer-based measurement that utilises a wavelength-swept laser source. Interference fringes generated as the laser sweeps are detected and processed using the Fourier transform, yielding a map of reflections as a function of length. OFDR is well suited for applications that require a combination of high speed, sensitivity and resolution over short and intermediate lengths.

    Luna’s Optical Backscatter Reflectometers (OBRs) are a special implementation of OFDR, adding polarisation diversity and optical optimisation to achieve unmatched spatial resolution. An OBR can quickly scan a 30-metre fibre with a sampling resolution of 10 micrometres, or a 2-kilometre network with 1-millimetre resolution.

    This graphic summarises the landscape of these established technologies for optical reflectometry. By mapping the measurement range and spatial resolution of the most common technologies, the plot illustrates the unique application coverage of OBR.
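    The OFDR principle described above can be sketched in a few lines of Python with NumPy. This is a deliberately idealised illustration – a single reflector, noiseless fringes, and assumed sweep and fibre parameters – not how any particular instrument processes its data:

```python
import numpy as np

# Idealised OFDR: as the laser sweeps in optical frequency, a reflector with
# round-trip delay tau produces interference fringes cos(2*pi*f*tau); a
# Fourier transform of the fringe record maps each reflection onto a delay
# (and hence distance) axis.
c = 3.0e8                    # speed of light in vacuum, m/s
n_group = 1.5                # assumed group index of the fibre
distance_true = 2.0          # assumed one-way distance to the reflector, m
tau = 2 * n_group * distance_true / c   # round-trip delay, s

f = np.linspace(0.0, 1e11, 8192)        # assumed 100 GHz optical-frequency sweep
fringes = np.cos(2 * np.pi * f * tau)   # detected interference fringes

spectrum = np.abs(np.fft.rfft(fringes))
delays = np.fft.rfftfreq(f.size, d=f[1] - f[0])  # delay axis, s
distances = delays * c / (2 * n_group)           # delay -> one-way distance

peak_m = distances[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(round(peak_m, 2))                          # → 2.0 (reflector located)
```

Real instruments add sweep calibration, windowing and polarisation handling; here the Fourier peak simply lands at the reflector's distance.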

  • What is the Value of Shortwave Infrared?

    Sensing in the shortwave infrared (SWIR) range (wavelengths from 0.9 to 1.7 microns) has been made practical by the development of Indium Gallium Arsenide (InGaAs) sensors. Sensors Unlimited, Inc., part of UTC Aerospace Systems, is the pioneer in this technology and clear leader in advancing the capability of SWIR sensors. Founded in 1991 to create lattice-matched InGaAs structures, Sensors Unlimited, Inc. quickly grew as the telecom industry recognised the exceptional capabilities of this remarkable material.


    Click here to read the complete article.


    To speak with a sales/applications engineer please call 01582 764334 or click here to email


  • Vibration-Tolerant Interferometry

    QPSI™ Technology Shrugs Off Vibration from Common Sources

    When image stabilisation became available on digital cameras, it vastly reduced the number of photos ruined by camera shake. The new technology eliminated the effects of common hand tremors, greatly improving image quality in many photo situations.

    Animated comparison of a PSI measurement with fringe print-through due to vibration, and the same surface measured with QPSI™ technology – free of noisy print-through.

    In precision interferometric metrology, a similar problem – environmental vibration – has ruined countless measurements, like the one in the animation shown at right. Vibration can significantly affect measurement results and spatial frequency analysis, and it is difficult to make high quality optics if you cannot measure them reliably. Solving the vibration problem can be costly, requiring the purchase of a vibration isolation system or a special dynamic interferometer.

    ZYGO's QPSI™ technology is truly a breakthrough for many optical facilities because it eliminates problems due to common sources of vibration, providing reliable data the first time you measure. QPSI measurements require no special setup or calibration, and cycle times are typically within a second or two of standard PSI measurements.

    Key Features:

    • Eliminates ripple and fringe print-through due to vibration
    • High-precision measurement; same as phase-shifting interferometry (PSI)
    • Requires no calibration, and no changes to your setup
    • Easily enabled/disabled with a mouse click

    QPSI is available exclusively from ZYGO on Verifire™, Verifire™ HD, Verifire™ XL, and also on DynaFiz® interferometer systems that have the PMR option installed (phase measuring receptacle). These systems are easy-to-use, on-axis, common-path Fizeau interferometers – the industry standard for reliable metrology – making them the logical choice for most surface form measurements.

    QPSI™ Simplifies Production Metrology
    A ZYGO interferometer with QPSI technology is capable of producing reliable high-precision measurements in the presence of environmental vibration from common sources such as motors, pumps, blowers, and personnel. Unless your facility is free of these sources, your business will likely benefit from QPSI technology.

    While QPSI can completely solve many common vibration issues, environments that have extreme vibration and/or air turbulence may require the additional capability of DynaPhase® dynamic acquisition, which is included by default with ZYGO's DynaFiz® interferometer. DynaPhase® is also available as an option on most new Verifire systems from 2018 onwards.
    We can help determine the best solution for your particular situation.

    Click here to read further information on DynaPhase® Dynamic Acquisition for Extreme Environments – confidence in metrology, no matter the conditions.

    For advice or a demonstration, please call 01582 764334 or click here to email.

  • Dynamic Capability comes to (nearly) all new Zygo Verifire Interferometers

    DynaPhase® Dynamic Acquisition for Extreme Environments
    Confidence in metrology, no matter the conditions

    Fizeau Interferometry has become a trusted standard for precise metrology of optical components and systems. Traditionally, these instruments were required to be installed in lab environments, where conditions were carefully controlled, to ensure high precision measurements were not compromised. However, today a growing number of applications demand easy, cost-effective solutions for the use of interferometry in environments where metrology has been difficult or impossible in the past.

    Often, optical systems must be tested in locations that simulate their end-use environment. These environments can present challenges due to factors like large vibration and air turbulence, which can negatively affect or prevent the ability to acquire reliable optical measurements. Many of these challenges are addressed with less than optimal solutions, often suffering from drawbacks and issues related to usability, speed, reliability and precision.

    ZYGO's patented DynaPhase® data acquisition technology offers many differentiated benefits, without the limitations associated with alternative methods. Key attributes of DynaPhase include:

    • Highest vibration tolerance in a Fizeau interferometer, enabled by the ZYGO-manufactured high-power laser* and fast acquisition speeds
    • Patented in-situ calibration enables the highest-precision, lowest-uncertainty measurements and excellent correlation to temporal phase-shifting interferometry (PSI)
    • Simple setup and calibration compared to alternative approaches
    • Cost-effective solution; available on nearly all ZYGO laser interferometers

    Comparison of Measurement Techniques Using an Identical Measurement Cavity

    DynaPhase offers the versatility and performance to address a wide range of challenging optical testing environments and applications, including:

    • Cryogenic and vacuum chamber testing
    • Telescope components and complex optical systems
    • Large tower, workstations and complex or unstable test stands

    DynaPhase is available on nearly all ZYGO laser interferometers. Features vary by model and give users the flexibility to use capabilities that enhance efficiency in Production Mode, enable fast system alignment with LivePhase, or reveal temporal changes in data with Movie Mode.

    Get the most from your metrology investment with the unique capabilities and unmatched versatility of DynaPhase, now available on the entire interferometer line from ZYGO.
    Complete range of vibration-tolerant metrology – check out ZYGO's patented QPSI vibration-tolerant temporal phase-shifting data acquisition, which enables metrology in the presence of common shop floor vibrations without the need for calibration.

    DynaPhase is inherent in the DynaFiz interferometer – and available as an optional extra software module on the new Verifire (1200 x 1200 pixel camera), Verifire HD and HDx systems. This means you can have DynaPhase capability on the entry-level Verifire interferometer, which is well specified to start with as it also includes QPSI technology.

    Click here to read further information on QPSI™ Technology Shrugs Off Vibration from Common Sources.

    To speak with a Sales & Applications Engineer please call 01582 764334 or click here to email.

  • Nano Mechanical Imaging

    The nano mechanical imaging (NMI) mode is an extension of the contact mode. The static force acting on the cantilever is used to produce a topography image of the sample. Simultaneously, force curves are produced at each pixel and used to extract quantitative material-property data such as adhesion, deformation and dissipation.

    Click here to read the complete article.


    To speak with a sales/applications engineer please call 01582 764334 or click here to email


  • EMC Pre-Compliance Testing

    Electronic products can emit unwanted electromagnetic radiation, or electromagnetic interference (EMI). Regulatory agencies create standards that define the allowable limits of EMI over specific frequency ranges.
    Testing designs and products for compliance to these standards can be difficult and expensive, but there are tools and techniques that can help to minimise the cost of testing and help to enable designs to pass compliance testing quickly.
    One of the most often used techniques for EMI testing is near field probing. In this technique, a spectrum analyser is used to measure electromagnetic radiation from a device-under-test using magnetic (H) field and electric (E) field probes.
    The application note describes some common techniques used to identify problem areas using near field probes – download it here.
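    One routine step when comparing such probe measurements against emission limits is unit conversion: EMI limits are usually quoted in dBµV, while spectrum analysers often read out in dBm. A small sketch of the standard conversion for a 50-ohm system (a well-known identity, not something specific to the application note):

```python
import math

def dbm_to_dbuv(p_dbm: float, impedance_ohms: float = 50.0) -> float:
    """Convert a spectrum analyser power reading (dBm) to voltage (dBµV).

    From P = V^2 / R: dBµV = dBm + 10*log10(R) + 90, which gives the
    familiar "+107 dB" offset for a 50-ohm system.
    """
    return p_dbm + 10.0 * math.log10(impedance_ohms) + 90.0

print(round(dbm_to_dbuv(-60.0), 1))  # → 47.0 dBµV for a -60 dBm reading
```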

  • The Phenom Process Automation: mixing backscattered and secondary electron images using a Python script

    When the primary beam interacts with the sample, backscattered electrons (BSEs) and secondary electrons (SEs) are generated. Images of the sample, obtained by detecting the emitted signals, carry information on the composition (from BSE signals) and on the topography (from SE signals). How are BSEs and SEs formed and why do they carry specific information? Moreover, is it possible to get both compositional and topographical information in one image? And how flexible is this solution? In this blog, I will answer these questions and introduce a script that allows users to mix their own images.

    When the primary beam hits the sample surface, secondary electrons and backscattered electrons are emitted and can be detected to form images. Secondary electrons are generated from inelastic scattering events of the primary electrons with electrons in the atoms of the sample, as shown on the left of Figure 1. SEs are electrons with low energy (typically less than 50 eV) that can be easily absorbed. This is the reason why only the secondary electrons coming from a very thin top layer of the sample can be collected by the detector.

    On the other hand, backscattered electrons are formed from elastic scattering events, where the trajectories of primary electrons are deviated by the interaction with the nuclei of the atoms in the sample, as shown on the right in Figure 1. BSEs typically have high energy and can emerge from deep inside the sample.

    Figure 1: Formation of secondary electrons (on the left) and backscattered electrons (on the right). SEs are formed from inelastic scattering events, while BSEs are formed from elastic scattering events.

    Secondary electron images contain information on the topography of the sample. As shown on the left of Figure 2, the beam is scanned over a surface that has a protrusion. When the beam is located on the slope of this protrusion, the interaction volume touches the sidewall, causing more secondary electrons to escape the surface. When the beam is located on the flat area, fewer secondary electrons can escape. This means that more secondary electrons will be emitted at edges and slopes, causing brighter contrast than on flat areas and providing information on the morphology of the sample.

    On the other hand, the backscattered electron yield depends on the material, as shown on the right of Figure 2. If the beam hits silicon atoms, which have atomic number Z = 14, fewer backscattered electrons will be formed than in the case of gold, which has atomic number Z = 79. The reason for this is that gold atoms have bigger nuclei, which have a stronger effect on the primary electrons’ trajectories and so deflect them more. Backscattered electron images therefore provide information on material differences within the sample.

    Figure 2: On the left, more secondary electrons can escape the sample surface on edges and slopes than in flat areas. On the right, the yield of backscattered electrons depends on the atomic number of the material, more BSEs are generated in gold (Z = 79) than in silicon (Z = 14).

    To collect the secondary electrons, the Everhart-Thornley detector (ETD) is typically used. Because SEs have low energy, a grid at high potential is placed in front of the detector to attract the secondary electrons. On the other hand, BSEs are often collected by a solid-state detector placed above the sample. The images obtained by the ETD detector and the BSD detector contain information on the morphology and the composition of the sample respectively.

    For some applications, however, it is convenient to have information on both the topography and the composition, in one image. This can be done by simply adding the signal coming from the two detectors.

    Mixing BSE and SE images

    When an image is acquired, the beam scans the sample surface pixel by pixel. In each pixel, the signal is collected by the detector and translated into a value. If images are acquired in 8 bits, the range of pixel values varies from 0 to 255. If images are acquired in 16 bits, the values of each pixel can vary from 0 to 65,535.

    The value of the pixel depends on how many secondary electrons or backscattered electrons are emitted and the higher the value of the pixel, the brighter the pixel appears in the image. This means that in the case of the sample shown in Figure 2 on the left, the edges will appear brighter in the image because more SEs are emitted and therefore the pixels in that position will have a higher value.
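    These pixel-value ranges are easy to verify, and the snippet below also shows one common convention for rescaling 16-bit data into the 8-bit range (a generic NumPy illustration; the 1/257 scaling is not necessarily what the Phenom software does internally):

```python
import numpy as np

# The two common SEM image bit depths and their pixel-value ranges
assert np.iinfo(np.uint8).max == 255        # 8-bit:  0..255
assert np.iinfo(np.uint16).max == 65535     # 16-bit: 0..65535

# Rescaling 16-bit pixel values into the 8-bit range (65535 / 257 == 255)
img16 = np.array([[0, 32768, 65535]], dtype=np.uint16)
img8 = (img16 / 257).astype(np.uint8)
print(img8)  # → [[  0 127 255]]
```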

    Mixing backscattered electron and secondary electron images means that the two images are summed together. In practice, each pixel in the SE image is summed with the corresponding pixel in the BSE image, using the formula:

    mixed = ratio × SE + (1 − ratio) × BSE

    where ratio determines how much SE and BSE information the mixed image will carry. For equal topographic and compositional information, the ratio is 0.5.
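    This weighted sum is straightforward to express with NumPy. The sketch below is a generic illustration of the mixing step for 8-bit images, not the actual PPI script:

```python
import numpy as np

def mix_images(se: np.ndarray, bse: np.ndarray, ratio: float = 0.5) -> np.ndarray:
    """Pixel-wise weighted sum of an SE and a BSE image (8-bit).

    ratio = 1.0 keeps only topographic (SE) information,
    ratio = 0.0 keeps only compositional (BSE) information.
    """
    if se.shape != bse.shape:
        raise ValueError("SE and BSE images must have the same shape")
    mixed = ratio * se.astype(np.float64) + (1.0 - ratio) * bse.astype(np.float64)
    # Round and clip back into the 8-bit pixel range before converting
    return np.clip(np.rint(mixed), 0, 255).astype(np.uint8)

se = np.array([[200, 50]], dtype=np.uint8)    # bright edge, flat area
bse = np.array([[100, 150]], dtype=np.uint8)  # silicon, silver
print(mix_images(se, bse, ratio=0.5))         # → [[150 100]]
```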

    On the left of Figure 3 you can see the SE (top) and BSE (bottom) images of a solar cell, where the white area is silver and the dark area is silicon. In the SE image, the topography of the sample is clear: the granular structure of the silver strip can be easily noticed, as well as the bumpy silicon surface. Of course, the ETD detector picks up some BSE signal as well, which is why there is a difference in contrast between the two materials.

    In the BSE image, the topography of the sample is less visible. However, the material contrast is enhanced, and some dirt particles on the silver strip also become visible. On the right of Figure 3, the mixed image is shown. In this case, we used a ratio of 0.5, meaning that each pixel value contains 50% topographic and 50% compositional information. Not only are all the particles with different material contrast visible, but so are the surface roughness of the strip and the silicon area.

    Figure 3: An example of mixing images. On the top left, the SE image and on the bottom, the BSE image of a solar cell, where the silver stripe (bright area) can be distinguished from the silicon (dark area). While the SE image carries information on the topography, in the BSE image the material contrast is dominant. On the right is the resulting mixed image using a ratio of 0.5.

    The mixed imaging script

    Being able to generate and save mixed backscattered and secondary electron images is valuable in many applications. Equally important is being able to set the SE:BSE ratio, in order to obtain images that provide the most useful information to the user.

    Using the Phenom Programming Interface (PPI), we developed a script that can acquire BSE and SE images directly from the Phenom SEM and mix them together, as shown in Figure 4. It is also possible to load BSE and SE images that were previously saved with the Phenom and generate and save the mixed image offline.

    Figure 4: User interface of the mixed images script, developed with PPI.

    Are you an experienced programmer interested in knowing more about the Phenom Programming Interface and its functionalities? Click here for further information.

    If you are not familiar with programming, but would still like an automated solution for your workflow, then we can help by developing the solution for you.

    Topics: Scanning Electron Microscope Software, Automation, PPI, Automated SEM Workflows

    About the author:

    Marijke Scotuzzi is an Application Engineer at Phenom-World, the world’s no 1 supplier of desktop scanning electron microscopes. Marijke has a keen interest in microscopy and is driven by the performance and the versatility of the Phenom SEM. She is dedicated to developing new applications and to improving the system capabilities, with the main focus on imaging techniques.

  • SEM technology: the role of the electron beam voltage in electron microscopy analysis

    When conducting electron microscopy (EM) analysis, there are a few important parameters that must be taken into account to produce the best possible results and to image the feature of interest. A crucial role is played by the accelerating voltage applied to the source to generate the electron beam. Historically, the trend has always been to increase the voltage to improve the resolution of the system.

    It is only in recent years that scanning electron microscope (SEM) producers have started to focus on improving the resolution at lower voltages. A major role in this has been the expanding field of application of EM to the life sciences - especially after the introduction of the Nobel prize-winning cryo-SEM technique. This blog will focus on the effects of the voltage on the results of electron microscopy analysis.

    The electron beam voltage shapes the interaction volume

    The accelerating voltage determines the energy of the electrons, and therefore the kind of interaction the beam will have with the sample. As a general guideline, a higher voltage corresponds to deeper penetration beneath the surface of the sample, also known as a larger interaction volume.

    This means that the electrons will propagate wider and deeper within the sample and generate signals in different parts of the affected volume. The chemical composition of the sample also has an impact on the size of the interaction volume: light elements have fewer electrons, which scatter the beam less strongly. The beam can therefore penetrate deeper into the sample than it would in a heavier element.

    When analysing the outcoming signals, different results can be obtained. In desktop instruments, three kinds of signals are normally detected: backscattered electrons (BSE), secondary electrons (SE), and X-rays. As digging into the different nature of the signal is not the main focus of this article, more info can be found in this blog.

    The effect of voltage in SEM imaging

    The effect of voltage on BSE and SE imaging is comparable: low voltages enable the surface of the sample to be imaged, while high voltages provide more information on the layers beneath the surface. This can be seen in the images below, where low voltages make surface contamination clearly distinguishable, while higher voltages reveal the structure of the surface underneath the contamination layer.

    Figure 1: BSE images of tin balls at 5 kV (left) and at 15 kV (right). With the lower voltage, the carbon contamination on top of the sample becomes visible. When the voltage is increased, the deeper penetration enables imaging of the tin ball surface underneath the carbon spots.

    The nature of the sample is also hugely important in the choice of the appropriate voltage. Biological samples, several polymers, and many other (mostly organic) samples are extremely sensitive to the high energy content of the electrons. Such sensitivity is further enhanced by the fact that the SEM operates in vacuum. This is the leading reason why the focus of SEM developers is moving towards improving resolution at lower voltages, providing important results even with the most delicate samples.

    The main difficulty that is encountered in this process is the physics principle behind the imaging technique: in a similar way to photography, there are in fact several kinds of distortion and aberration that can affect the quality of the final output. With higher voltages, the chromatic aberrations become less relevant, which is the main reason why the previous trend with SEM was to turn towards the highest possible voltage to improve imaging resolution.

    The generation of X-rays in a SEM

    When it comes to X-ray generation, the story is totally different: a higher voltage is responsible for a higher production of X-rays. The X-rays can be captured and processed by an EDS (energy dispersive spectroscopy) detector to perform compositional analysis on the sample.

    The technique consists of forcing the ejection of an electron in the target sample by means of the interaction with the electrons from the electron beam (primary electrons).

    A charge vacancy (hole) can be generated in one of the inner shells of an atom; it is then filled by an electron with a higher energy content from an outer shell of the same atom. In this process the electron releases part of its energy in the form of an X-ray. The energy of the X-ray can finally be correlated to the atomic number of the atom through Moseley's law, returning the composition of the sample.
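    As a rough illustration of Moseley's law (a back-of-the-envelope estimate, not the calibration used by any EDS software), the Kα line energy can be approximated from the atomic number alone:

    ```python
    RYDBERG_EV = 13.6  # hydrogen ground-state energy in eV

    def kalpha_energy_kev(z):
        """Approximate K-alpha X-ray energy (keV) for atomic number z.

        Moseley's law with screening constant 1:
        E = 13.6 eV * (z - 1)^2 * (1/1^2 - 1/2^2)
        """
        return RYDBERG_EV * (z - 1) ** 2 * 0.75 / 1000.0

    print(round(kalpha_energy_kev(29), 2))  # copper: close to the measured 8.05 keV
    ```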

    The key factors in X-ray production are the following:

    • Overvoltage: the ratio between the energy of the incoming beam and the energy necessary to ionize the targeted atom
    • Interaction volume: defines the spatial resolution of the analysis

    The ideal analysis requires a minimum overvoltage value of 1.5, which means that by increasing the electron beam voltage, the maximum number of detectable elements increases. On the other hand, a high voltage corresponds with higher chances of sample damage and, even more importantly, a larger interaction volume.
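    The overvoltage criterion can be turned into a quick check (a sketch; `is_detectable` is an illustrative name, not a PPI function):

    ```python
    def is_detectable(beam_kv, line_energy_kev, min_overvoltage=1.5):
        """True if the beam can efficiently excite a given X-ray line.

        Overvoltage = beam energy / line energy; 1.5 is the minimum
        recommended in the text above.
        """
        return beam_kv / line_energy_kev >= min_overvoltage

    # Copper K-alpha (~8.05 keV): excitable at 15 kV, but not at 10 kV
    print(is_detectable(15, 8.05), is_detectable(10, 8.05))  # True False
    ```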

    This not only means that the integrity of the sample could be compromised, but also that the X-rays are generated in a much larger volume. In the case of multilayers, particles, and generally non-isotropic materials, a larger interaction volume will generate signals coming from portions of the sample with a different composition, compromising the quality of the results.

    Figure 2: Example of an EDS spectrum collected at 15kV. The peaks highlight the presence of an element and a complex algorithm is applied to convert the signal coming from the detector into chemical composition.

    Typical recommended voltage values for the analysis range between 10 and 20 kV, balancing the two effects. Choosing the ideal value depends on an additional aspect of EDS analysis known as ‘peak overlap’. X-rays generated by electrons moving between different shells of different elements can have comparable energies.

    This requires either more advanced deconvolution algorithms to separate the peaks and normalise the results, or the use of higher-energy lines (coming from one of the two elements with overlapping peaks). While the former is implemented in most EDS software, the latter is not always possible: exciting the higher-energy line of a very common element such as lead would require a voltage higher than 100 kV.

    More about EDS in SEM

    If you would like to take an even deeper dive into EDS in scanning electron microscopy, take a look at the Phenom ProX desktop SEM. Among other things, it demonstrates how you can control a fully-integrated EDS detector with a dedicated software package — and avoid the need to switch between external software packages or computers.

    Topics: Scanning Electron Microscope Software, Electrons, EDX/EDS Analysis

    About the author:

    Luigi Raspolini is an Application Engineer at Phenom-World, the world’s no 1 supplier of desktop scanning electron microscopes. Luigi is constantly looking for new approaches to materials characterisation, surface roughness measurements and composition analysis. He is passionate about improving user experiences and demonstrating the best way to image every kind of sample.

  • SEM automation guidelines for small script development: image analysis

    Scripts are small automated software tools that can help a scanning electron microscope (SEM) user with their work. In my previous blog I wrote about how SEM images can be acquired with the Phenom Programming Interface (PPI) using a small script. In this blog I will explain how to extract physical properties from those SEM images.

    Workflows for SEM automation

    Typically, SEM workflows consist of the same steps, see Figure 1. The four steps that can be automated using PPI are:

    1. Image acquisition
    2. Image analysis
    3. Evaluation
    4. Reporting

    In the image acquisition step (1), images are automatically acquired using PPI and the Phenom SEM. In the analysis step (2), the physical properties are extracted from the images. The images are evaluated based on these physical properties in the evaluation step (3). The final automated step (4) is reporting the results back to the user.

    Figure 1: Scanning Electron Microscopy workflow

    SEM image analysis

    This blog will focus on step 2: the analysis step. The analysis is the image processing that takes place to obtain physical quantities from an SEM image. This could be for example:

    • Thresholding BSD images to quantify compositional differences in the image
    • Segmenting the image to find particles
    • Measuring the working distance to detect a height step in your sample

    We will continue with the script from the previous blog. In that blog, an automation script was written to acquire images from a copper-aluminum stub. I also explained how the information obtained with the BSD detector gives compositional contrast.

    In this blog we will use those images and information to segment them into a copper and aluminum part. The script to obtain the images can be found in code snippet 1.

    Code snippet 1: PPI script to acquire an image of the copper and aluminum part of the stub

    Now I will explain how the images can be segmented based on contrast by doing the following steps:

    • Acquire image with only aluminum
    • Acquire image with only copper
    • Determine (automatically) the gray levels associated with the metals
    • Acquire a new image that contains an area with both copper and aluminum
    • Segment the new image into a copper and aluminum part

    Firstly, two images are acquired, each containing only one of the elements. From these images the associated gray levels can be determined. To show how this works, the histograms of the two images are plotted in one graph, see Figure 2. The aluminum peak lies around 32,000 and the copper peak around 36,000. The Gaussian tails of the two peaks overlap at around 34,000. Since the thresholding must be unambiguous, the threshold ranges for the two metals should not overlap.

    Figure 2: Histograms of the images

    Determining the appropriate values can be done automatically using NumPy. We first determine the mean, which should be close to the peak position; the spread follows from the standard deviation. This can be done by adding the code in snippet 2 to the script from the previous blog (code snippet 4 contains the complete code, including the code from the previous blog).

    Code snippet 2: Script to obtain average and standard deviation of the images
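    The snippet itself appears as an image in the original post; its NumPy core might look like the following sketch (`image_stats` and `thresholds` are illustrative names):

    ```python
    import numpy as np

    def image_stats(img):
        """Return the mean gray level and its standard deviation."""
        return float(img.mean()), float(img.std())

    def thresholds(img, n_sigma=2):
        """Threshold range mu +/- n_sigma * sigma, as used in the text."""
        mu, sigma = image_stats(img)
        return mu - n_sigma * sigma, mu + n_sigma * sigma
    ```

    Applied to the aluminum-only and copper-only images, this yields two non-overlapping threshold ranges around the two histogram peaks.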

    The output of this part of the script is:

    These values are very close to the values we got just by looking at the histogram. A reasonable threshold is the mean plus or minus twice the standard deviation.


    The information about the gray values obtained in the previous part can be used to segment a new image, which contains both a copper and an aluminum area. To segment the images, OpenCV will be used. The threshold range will be μ ± 2σ.

    To do this we must first import OpenCV into Python as cv2. We obtain the new image (with a longer exposure time to reduce noise) at a new location. Then we blur the image to reduce the noise levels further (thereby making the peaks sharper) and segment based on the contrast levels. In the final step we calculate the relative areas of the metal parts. The code to do this is as follows:

    Code snippet 3: Code to segment the image

    The code in snippet 3 first moves the stage to a position where both the copper and the aluminum part are visible. The image is acquired using the phenom.SemAcquireImage method. In Figure 3, the acquired image is shown on the left-hand side. Using OpenCV, the image is blurred with a circular Gaussian kernel with a diameter of 9 pixels. This reduces the noise in the image and improves the segmentation.

    This blurred image is segmented using the cv2.inRange method, which yields a binary image in which white marks the segmented part. The two images on the right-hand side of Figure 3 are the segmented results. From these images, the relative percentages of copper and aluminum can be calculated by summing the mask pixels and printing the result. The extra division by 255 after the sum is needed because these are 8-bit images, in which white pixels have the value 255. The area of copper in this image is 44% and that of aluminum 37%; the remaining 19% is other material.
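    The area calculation can be reproduced with plain NumPy (a sketch equivalent in spirit to the cv2.inRange call; `segment_fraction` is an illustrative name):

    ```python
    import numpy as np

    def segment_fraction(img, lo, hi):
        """Segment img into [lo, hi]; return the mask and its area fraction.

        The mask is 255 inside the range and 0 outside, like cv2.inRange,
        hence the division by 255 before dividing by the pixel count.
        """
        mask = np.where((img >= lo) & (img <= hi), 255, 0).astype(np.uint8)
        fraction = mask.sum() / 255 / mask.size
        return mask, fraction
    ```

    For an image where half the pixels fall inside the threshold range, the returned fraction is 0.5.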

    Figure 3: Image to segment, and the segmented parts

    The segmentation results conclude this blog post. The complete code including all the extra plotting is shown in code snippet 4. In the next blog I will explain how the physical properties that we have obtained in this blog can be used to evaluate your sample.

    Code snippet 4: Complete code including all plotting

    I know that computing the threshold from the standard deviation in this way can yield slightly skewed results, because the detector noise follows a Poisson rather than a Gaussian distribution. For the sake of simplicity I assume this method to be close enough; for samples that are not heavily contaminated, the results will be completely acceptable.

    Topics: Scanning Electron Microscope Automation, Automation, PPI, Automated SEM Workflows

    About the author:

    Wouter Arts is Application Software Engineer at Phenom-World, the world’s no 1 supplier of desktop scanning electron microscopes. He is interested in finding new smart methods to convert images to physical properties using the Phenom. In addition, he develops scripts to help companies in using the Phenom for automated processes.
