Characterisation, Measurement & Analysis
+44(0)1582 764334

Technical Notes

  • Particle analysis with the Phenom ParticleX AM Desktop SEM

    A multiscale desktop SEM solution for additive manufacturing

    Timely and accurate quality control is a prerequisite for modern additive manufacturing, as excessive or unknown variation in the metal feed powder can lead to non-uniform layering, increased defects, poor surface finish and even catastrophic failures. The Phenom ParticleX AM Desktop SEM is a versatile solution for high-quality analysis, giving you the ability to carry out quick verification and classification of materials. With the Phenom ParticleX AM Desktop SEM, your production is supported with fast, accurate and trusted data.

    CLICK HERE to download the full Application Note.

    For further information, application support, demo or quotation requests, please contact us on 01582 764334 or click here to email.

    Lambda Photometrics is a leading UK Distributor of Characterisation, Measurement and Analysis solutions with particular expertise in Electronic/Scientific and Analytical Instrumentation, Laser and Light based products, Optics, Electro-optic Testing, Spectroscopy, Machine Vision, Optical Metrology, Fibre Optics and Microscopy.

  • SEM: types of electrons, their detection and the information they provide

    Electron microscopes are very versatile instruments, which can provide different types of information depending on the user’s needs. In this blog we will describe the different types of electrons that are produced in a SEM, how they are detected and the type of information that they can provide.

    As the name implies, electron microscopes employ an electron beam for imaging. In Fig. 1, you can see the various products of the interaction between electrons and matter. All of these signals carry different, useful information about the sample, and it is up to the microscope operator to choose which signal to capture.

    For example, in transmission electron microscopy (TEM), as the name suggests, the transmitted electrons are detected, giving information on the sample’s inner structure. In a scanning electron microscope (SEM), two types of signal are typically detected: backscattered electrons (BSE) and secondary electrons (SE).

    Type of electrons in SEM

    In SEM, two types of electrons are primarily detected:
    • backscattered electrons (BSE)
    • secondary electrons (SE)

    Backscattered electrons are reflected back after elastic interactions between the beam and the sample. Secondary electrons, however, originate from the atoms of the sample: they are a result of inelastic interactions between the electron beam and the sample.

    BSE come from deeper regions of the sample, while SE originate from surface regions. Therefore, BSE and SE carry different types of information. BSE images show high sensitivity to differences in atomic number: the higher the atomic number, the brighter the material appears in the image. SE imaging can provide more detailed surface information.

    Figure 1: Electron-matter interactions: the different types of signals which are generated.

    Backscattered-electron (BSE) imaging

    Backscattered electrons originate from a broad region within the interaction volume. They are the result of elastic collisions between beam electrons and sample atoms, which change the electrons’ trajectory. Think of the electron-atom collision in terms of the so-called “billiard-ball” model, in which small particles (electrons) collide with larger particles (atoms). Heavy atoms are much stronger scatterers of electrons than light atoms, and therefore produce a higher signal (Fig. 2). The number of backscattered electrons reaching the detector increases with the atomic number (Z) of the atoms in the sample. This dependence on atomic number helps us differentiate between different phases, providing imaging that carries information about the sample’s composition. Moreover, BSE images can also provide valuable information on crystallography, topography and the magnetic field of the sample.

    Figure 2: a) SEM image of an Al/Cu sample; b), c) simplified illustration of the interaction of the electron beam with aluminium and copper. Copper atoms (higher Z) scatter more electrons back towards the detector than the lighter aluminium atoms and therefore appear brighter in the SEM image.
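    The Z-dependence described above can be made concrete with Reuter’s empirical fit for the backscatter coefficient η(Z). The short sketch below (not tied to any particular instrument, and assuming normal beam incidence) reproduces the Al/Cu contrast of Figure 2: copper backscatters roughly twice as large a fraction of the beam as aluminium.

```python
# Backscatter coefficient (fraction of beam electrons scattered back)
# as a function of atomic number Z, using Reuter's empirical fit.
def backscatter_coefficient(z: int) -> float:
    return -0.0254 + 0.016 * z - 1.86e-4 * z**2 + 8.3e-7 * z**3

# Aluminium (Z = 13) vs copper (Z = 29): copper sends back about twice
# as many electrons, so it appears brighter in the BSE image.
eta_al = backscatter_coefficient(13)   # ~0.15
eta_cu = backscatter_coefficient(29)   # ~0.30
print(f"Al: {eta_al:.3f}  Cu: {eta_cu:.3f}")
```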

    The most common BSE detectors are solid-state detectors, which typically contain p-n junctions. Their working principle is based on the generation of electron-hole pairs by backscattered electrons that escape the sample and are absorbed by the detector. The number of these pairs depends on the energy of the backscattered electrons. The p-n junction is connected to two electrodes, one of which attracts the electrons and the other the holes, generating an electrical current that also depends on the number of absorbed backscattered electrons.

    The BSE detectors are placed above the sample, concentric with the electron beam in a “doughnut” arrangement, to maximise the collection of the backscattered electrons, and they consist of symmetrically divided parts. When all parts are enabled, the contrast of the image reflects the atomic number Z of the elements present. By enabling only specific quadrants of the detector, on the other hand, topographical information can be retrieved from the image.
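    The quadrant logic can be sketched as follows. This is a schematic illustration of the principle just described, not the instrument’s actual signal processing: summing all quadrants cancels directional shading and leaves Z-contrast, while subtracting opposite halves cancels Z-contrast and leaves shading, as if the sample were lit from one side.

```python
# Schematic combination of the four quadrant signals (a, b, c, d) of a
# segmented BSE detector. Names and values are illustrative only.
def composition_signal(a, b, c, d):
    # All quadrants summed: topographic shading cancels, Z-contrast remains.
    return a + b + c + d

def topography_signal(a, b, c, d):
    # Opposite halves subtracted: Z-contrast cancels, shading remains.
    return (a + b) - (c + d)

# A facet tilted towards quadrants a and b sends them more electrons:
print(composition_signal(0.9, 0.8, 0.4, 0.3))
print(topography_signal(0.9, 0.8, 0.4, 0.3))
```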

    Figure 3: Typical position of the backscattered and secondary electron detectors.

    Secondary electrons (SE)

    In contrast, secondary electrons originate from the surface or the near-surface regions of the sample. They are a result of inelastic interactions between the primary electron beam and the sample and have lower energy than the backscattered electrons. Secondary electrons are very useful for the inspection of the topography of the sample’s surface, as you can see in Fig. 4:

    Figure 4: a) Full BSD, b) Topography BSD and c) SED image of a leaf.

    The Everhart-Thornley detector is the most frequently used device for the detection of SE. It consists of a scintillator inside a Faraday cage, which is positively biased to attract the SE. The electrons are accelerated onto the scintillator, which converts them into light that is passed to a photomultiplier for amplification. The SE detector is placed at the side of the electron chamber, at an angle, to increase its collection efficiency for secondary electrons.

    These two types of electrons are the signals most commonly used by SEM operators for imaging. Not all SEM users require the same type of information, so the ability to fit multiple detectors makes SEM a very versatile tool that can provide valuable solutions for many different applications.

    Topics: Electrons, Scanning Electron Microscope

    About the author:
    Antonis Nanakoudis is Application Engineer at Thermo Fisher Scientific, the world leader in serving science. Antonis is extremely motivated by the capabilities of the Phenom desktop SEM on various applications and is constantly looking to explore the opportunities that it offers for innovative characterisation methods.


  • How next-generation composite materials are manufactured and analysed

    The technical specifications of next-generation materials are taking our technology to a completely new level, allowing us to create products with outstanding properties that were impossible to achieve in the past. These materials are the result of a huge drive toward innovation in material science and could only be achieved because of the invention of the first composite materials and their introduction into the industrial landscape.

    In this article, I describe how these next-generation materials are being developed — and equally important: how their chemical composition is analysed, and their performance is measured.

    How beneficial properties of composite materials are created and preserved

    Certain materials have outstanding properties that make them a perfect fit for a specific application. Sometimes, unfortunately, the environment affects these materials to such an extent that they cannot easily be used on their own: they require continuous repair and replacement, compromising all the advantages that come from their use.

    By creating multiple layers, or applying a coating, such delicate materials can be shielded and used, with all the benefits that they bring.


    Figure 1: Glass sheet coated with different materials. The multiple layers add specific properties to the product.

    For example, introducing nanofibres into a slab can dramatically improve its resistance to tension, bending or torsion. These materials normally feature a matrix (the external part of the material, directly exposed to the stress) that is supported by a network of fibres. When stress is applied to the material, it is transferred to the fibres. The fibres can easily handle the applied force, responding with an elastic deformation. As soon as the stress is removed, the fibres bring the material back to its original state.
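    The stiffening effect of fibres can be roughly quantified with the classical rule of mixtures for stiffness along the fibre direction. This sketch is not from the article; the moduli and fibre fraction are assumed values, typical of glass fibre in an epoxy matrix.

```python
# Rule of mixtures for the longitudinal stiffness of a fibre composite:
# E_c = Vf * E_fibre + (1 - Vf) * E_matrix. All values are illustrative.
def composite_modulus(vf: float, e_fibre: float, e_matrix: float) -> float:
    return vf * e_fibre + (1.0 - vf) * e_matrix

E_MATRIX = 3.0    # GPa, typical epoxy (assumed)
E_FIBRE = 70.0    # GPa, typical glass fibre (assumed)

e_c = composite_modulus(0.30, E_FIBRE, E_MATRIX)  # 30% fibre by volume
print(f"Composite modulus: {e_c:.1f} GPa")        # ~8x the bare matrix
```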

    This stress-transfer process is what led to the creation of self-healing materials. A typical example is the plastic covers of some smartphones that, when scratched, can recover from the condition in a matter of minutes. If the scratch is not too deep, it will completely disappear and the ‘brand-new’ feeling of the phone will last longer.

    The crafting of these materials requires high-level engineering and is the result of big investments in research. In particular, scientists have focused their attention on how to transfer the stress from the matrix to the fibres without the latter slipping inside the structure. Several different solutions were investigated, such as creating a complex fibrous skeleton or coating the fibres with a material that improves shear stress transmission at the fibre-matrix interface.

    Figure 2 & 3: Different kinds of fibre weaving offer different resistance to stress. The appropriate weaving technique is chosen according to the application.

    How next-generation composite materials are analysed and measured

    As these investigations were performed on nano-scaled materials, electron microscopes were employed for the analysis and measurements. With a desktop scanning electron microscope (SEM), it is in fact possible to evaluate the diameter of the fibres and monitor how they change along the structure. At the same time, it is also possible to locally analyse the quality and chemical composition of the coating, in order to verify that the adhesion of the fibre to the matrix is optimised. This can be done with energy-dispersive X-ray spectroscopy (EDS).

    Composite materials are not a recent invention, by the way. Ancient populations inhabiting the European continent were already mixing different types of materials for decorative or practical uses. One example comes from the archaeological grave goods in the imperial and royal tombs in Speyer Cathedral in Speyer, Germany, where textile fibres were found mixed with golden threads.

    Within the KUR-Project “Conservation and restoration of mobile cultural assets” in Germany, electron microscopy has been successfully used to perform numerous analyses of the tombs’ contents. Download the free case study to discover how a desktop SEM was used to investigate fibre and leather details without damaging the samples or requiring additional sample preparation:

    Topics: Fibres imaging & analysis, Materials Science, EDX/EDS Analysis

    About the author:
    Luigi Raspolini is an Application Engineer at Thermo Fisher Scientific, the world leader in serving science. Luigi is constantly looking for new approaches to materials characterisation, surface roughness measurements and composition analysis. He is passionate about improving user experiences and demonstrating the best way to image every kind of sample.


  • How to spot astigmatism in Scanning Electron Microscopy (SEM) images

    You may have heard of astigmatism as a medical condition that causes visual impairment in up to 40% of adults [1], but how is this applicable to electron microscopy? First of all, let’s talk about what the word astigmatism, in fact, means: It is derived from the negative prefix ‘a’ (without) + ‘stigmat-’ (mark, or point, in Ancient Greek) + ‘ism’ (condition). In a perfect optical system, a lens has only one focal point, and is stigmatic. When the lens has more than one focal point, however, we refer to the lens as being astigmatic. This happens when the lens is elongated in either the sagittal (y-axis) or tangential (x-axis) plane, resulting in two focal points (= foci).

    In electron microscopy, astigmatism arises due to imperfections in the lens system. At high magnification, the imperfections become more apparent, hampering the quality of your images. As a result, round objects might appear elliptical or out of focus. Fortunately, electron microscopes have a component called a stigmator, which is used to rectify the problem.

    In this blog, I want to show you what astigmatism looks like. Metal spheres, such as the tin spheres imaged below, are an ideal sample for demonstrating it.

    How to spot astigmatism in a scanning electron microscopy image

    The image below shows tin spheres, deposited on carbon, at low magnification (1500 ×, field of view 179 µm). There are no obviously visible distortions in the image, but what happens if we increase the magnification?

    Figure 1. At low magnification, astigmatism is not overly apparent.

    When increasing the magnification to 50000 × (field of view 5.37 µm), we notice that the tin spheres seem out of focus:
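    As an aside, the two image settings quoted above are mutually consistent if magnification is quoted relative to a fixed reference width, so that magnification × field of view is constant (about 268.5 mm here). A minimal sketch, deriving that constant from the low-magnification image:

```python
# Field of view scales inversely with magnification: mag * fov = constant.
# The constant (an assumed fixed reference display width) is derived from
# the low-magnification image: 1500x with a 179 um field of view.
REFERENCE_WIDTH_M = 1500 * 179e-6   # ~0.2685 m

def field_of_view(magnification: float) -> float:
    return REFERENCE_WIDTH_M / magnification

print(field_of_view(50_000) * 1e6)  # ~5.37 micrometres, matching the text
```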

    Figure 2. At high magnification, astigmatism becomes apparent.

    Knowing that we should be able to see more detail when using a SEM, perhaps the image is simply out of focus? Let’s see what happens when we adjust the focus:

    Figure 3. When out of focus, round objects become elongated on astigmatic images.

    Figure 3 shows that when the image is astigmatic and out of focus, elongation of round objects occurs. Notice how the smaller spheres appear almost egg-shaped. In Figure 2, we see that an astigmatic, in-focus image simply appears to be out of focus.

    How can I tell if my SEM image is astigmatic?

    In the above examples (Figure 2 and 3), we see that when an image is astigmatic and out of focus, it becomes elongated. When astigmatic images are over- or under-focused, they become elongated in perpendicular directions. This means that you can test whether your image is astigmatic by over- and under-focusing it.
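    The over- and under-focus test can be illustrated with a toy model in which the x and y planes of an astigmatic lens have two different focal points. All numbers below are illustrative, not measured values: the blur width in each direction simply grows with distance from that direction’s focal plane, so the elongation flips by 90° as you pass through focus.

```python
# Toy model of an astigmatic lens with two focal planes, f_x and f_y.
# The spot's blur width in each direction grows with the distance from
# that direction's focal plane (arbitrary units, illustrative only).
def spot_shape(z: float, f_x: float = 10.0, f_y: float = 12.0):
    w_x = abs(z - f_x)   # blur width in x
    w_y = abs(z - f_y)   # blur width in y
    return w_x, w_y

under = spot_shape(9.0)   # under-focused: elongated in y
best = spot_shape(11.0)   # "best" focus between the foci: symmetric, still blurred
over = spot_shape(13.0)   # over-focused: elongated in x (perpendicular to 'under')
print(under, best, over)
```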

    In the three images below, we show the process of testing whether an image is astigmatic or not. In Figure 4, the lab operator has obtained an image and wants to know if it is astigmatic.

    Figure 4. To the untrained eye, this astigmatic image might just appear to be out of focus.

    Not convinced that the best-quality image has been obtained, the microscope operator over- and under-focuses the image:

    Figure 5. After over- and under-focusing the image, it is clear to the lab operator that the image was astigmatic, due to the visible elongation in perpendicular directions.

    The microscope operator now knows that the microscope’s stigmator needs to be used to improve the quality of the image. After adjusting the stigmator in both X and Y directions, the operator again tests the image, and sees the following:

    Figure 6. When an image is stigmatic, no elongation occurs when (A) under-focused or (C) over-focused. (B) When stigmatic and in focus the image is crisp.


    The operator is now ready to acquire beautiful images for a publication or report.

    In summary, the table below shows what your image will look like depending on how well it is stigmated and focused:

    Table 1. SEM image quality depending on stigmation and focus.

                     In focus               Over- or under-focused
    Stigmatic        Crisp image            Blurred, no elongation
    Astigmatic       Appears out of focus   Elongated (in perpendicular directions for over vs. under)

    1. Hashemi H et al. (2018) Global and regional estimates of prevalence of refractive errors: Systematic review and meta-analysis. Journal of Current Ophthalmology 30(1): 3–22.

    Topics: Scanning Electron Microscope

    About the author:
    Willem van Zyl is Application Specialist at Thermo Fisher Scientific, the world leader in serving science. He is excited by analytical instruments that are accessible and user-friendly, and truly believes that a SEM image is worth a kazillion words.


  • Measure Latency in Optical Networks with Picosecond Accuracy

    In optical networks where action on a message or signal is time-critical, latency becomes a critical design element. Latency in communications networks comprises the networking and processing of messages as well as the transmission delay through the physical fibre. Measuring and optimising this optical transmission delay can be critical when diagnosing latency issues in a data centre or maintaining quality control in the production of precision fibre links. Fortunately, the Luna OBR 4600 can measure this latency with picosecond accuracy.

    Specifically, latency is the time delay for a light signal to travel, or propagate, through an optical transmission medium. The latency t is related to the length of an optical fibre by the equation:

        t = n × L / c

    where L is the length, c is the speed of light in a vacuum and n is the index of refraction of the optical fibre.

    Because the Luna OBR can measure loss and reflections in short fibre networks with ultra-high resolution (sampling resolution of 10 µm) and no dead zones, it is straightforward to extract the exact length or latency of a segment of fibre or waveguide by analysing the time delay between reflection events. In fact, the OBR 4600 can measure latency or length this way with an accuracy of <0.0034% of the total length (or latency). For a 30 m optical fibre, for example, this corresponds to an overall length measurement accuracy of better than 1 mm, which is equivalent to a latency measurement accuracy of about 5 ps for standard fibre. Note that this is the absolute accuracy; actual measurement resolution will be much higher.

    The example illustrates a typical application: measuring the difference in length or latency between two fibre segments, each approximately 50 m long. An OBR 4600 scans both segments, and the latency of each segment is indicated by the distance between the two reflections at its beginning and end connectors. In this example, the difference in latency is found to be 95 ps which, for this fibre, is equivalent to a difference of about 19.3 mm in length.
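    The figures in this passage can be checked against the latency relation t = n × L / c. The sketch below assumes a group index of n = 1.468 for standard single-mode fibre; the exact index depends on the fibre, so the results agree with the quoted values only approximately.

```python
# Fibre latency t = n * L / c, and the length equivalent of a latency
# difference. n = 1.468 is an assumed group index for standard fibre.
C = 299_792_458.0          # speed of light in vacuum, m/s
N_FIBRE = 1.468            # assumed group index of the fibre

def latency(length_m: float) -> float:
    return N_FIBRE * length_m / C

def length_from_latency(delta_t_s: float) -> float:
    return C * delta_t_s / N_FIBRE

t_30m = latency(30.0)                  # ~147 ns for a 30 m fibre
accuracy = 0.0034e-2 * t_30m           # 0.0034% of total latency: ~5 ps
dl_95ps = length_from_latency(95e-12)  # ~19.4 mm, close to the quoted 19.3 mm
print(f"{t_30m * 1e9:.1f} ns, {accuracy * 1e12:.1f} ps, {dl_95ps * 1e3:.1f} mm")
```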

    Measuring length and latency is only one application of the versatile OBR reflectometer. For an overview of the OBR and common applications for ultra high resolution optical reflectometry, download Luna’s OBR white paper below.

    Fibre Optic Test & Measurement with Optical Backscatter Reflectometry (OBR)

    Optical communications technology is rapidly evolving to meet the ever-growing demand for ubiquitous connectivity and higher data rates. As signalling rates increase and modulation schemes become more complex, guaranteeing a high-fidelity optical transmission medium is becoming even more critical.

    Additionally, modern networks are relying more on photonic integrated circuits (PICs) based on silicon photonics or other developing technologies, introducing additional variables into the design and deployment of robust high bandwidth optical systems.

    Measurement and full characterisation of loss along the light path is a fundamental tool in the design and optimisation of these components and fibre optic networks.

    While different types of reflectometers are available to measure return loss, insertion loss, and event location for different types of optical systems, Optical Backscatter Reflectometry (OBR) is a very high resolution, polarisation-diverse implementation of optical reflectometry that dramatically improves sensitivity and resolution.

    See what you’ve been missing with traditional optical instrumentation in this white paper.

    Topics include:

    • Reflectance and Return Loss
    • Measuring Return Loss
    • Optical Backscatter Reflectometry (OBR)
    • Luna OBR Reflectometers

    Click here to download the white paper.

    For more information please email or call 01582 764334.


  • Everything is nano these days:

    To improve the world of nanotechnology, we make extremely fast SEM imaging and analysis accessible to everyone

    Imaging with a Scanning Electron Microscope (SEM) is a powerful tool for any materials scientist, though historically, accessing the technique was an issue. SEM involved using large, expensive systems that were only available to large research institutions. Even then, access was often difficult, due to long waiting lists and because their complex operation required in-depth training.

    That all changed with the introduction of the Phenom desktop SEM, which made the world of SEM available to a wide audience of researchers. Since then, we have continued to improve the desktop SEM experience, pushing the limits of where the technique can be used and creating electron microscopes that have a real impact on how people work.

    Read on to find out more about 10 years of Phenom desktop SEM history and our latest addition - the Thermo Scientific Phenom Pharos Field Emission Gun desktop scanning electron microscope.

    A bit of history

    Back when it all started in 2006 with the launch of the first Phenom generation, a couple of breakthrough innovations were introduced. Karl Kersten, Head of Applications at Thermo Fisher Scientific: “We were the first to introduce an integrated optical microscope within a SEM. We also offered SEM imaging that was faster than anything else that was – and still is – available. We did not want anybody to have to wait for their results.”

    “We found there was a gap between light and electron microscopy and there was a real need for something to bridge that gap. Then we discovered our customers didn’t just want fast imaging; they were equally keen on having chemical information on their samples.”

    “That’s why we developed the Phenom ProX, which has an integrated EDS system. And that was when we really started to become popular. Our customers could now determine a sample’s nanoscale structure, as well as its elemental composition. The Phenom ProX enabled users to be very productive as it provided rapid, integrated measurements, all controlled using just one mouse.”

    2017: Phenom generation five

    We’ve come a long way since then. In 2017, generation five was introduced, still with the same goal: to have more and more people benefit from our approach. We are strongly motivated by helping scientists and analysts achieve more in nanotechnology: to obtain results and develop new insights faster and more affordably. That’s why we keep investing time and effort to develop high-quality electron microscope solutions that are functionally rich, yet easy to use.

    And so the Phenom desktop SEM has seen further improvements: since 2017 we have also offered a secondary electron detector, the vacuum has been improved, and so has the resolution. Our fifth generation of desktop SEMs is down to 8 nm resolution. Compared with our first generation, at 30 nm, that is almost four times better, which means users can take SEM images rapidly on an even finer scale.

    Our fifth generation of SEMs also has a newly designed, far more analytical objective lens: the take-off angle of the EDS system has been increased, we’ve created space to include a secondary electron detector and we’ve improved the electron source lifetime. As the Phenom desktop SEM’s performance approaches that of a large, floor-standing SEM, it is becoming a viable option for more and more researchers, who no longer have to wait for their results.

    The time it takes to image a specimen, the time it takes to get results, is tremendously short. We’re essentially providing immediate measurement. For some users, their waiting time goes from a few weeks, due to the outsourcing of the SEM measurements, to just a few minutes. If you work in quality control of, for example, pharmaceutical research, or process control in manufacturing, this huge time saving creates a lot of value.

    Platform for automation

    Added value comes from the SEM’s versatility: it offers backscatter, secondary electron and EDS detection. Switching between the different imaging modes is easy: using one software application and a single mouse, you simply click between modes. There is no need to choose spot sizes and aperture sizes for each image. You put the sample in at one height, and all measurements can be taken rapidly, with no need to change the working distance when switching from one detector to another.

    It is important to note that these benefits are not just due to the desktop SEM form factor. A lot of the advantages come from our specially designed sample holders and loading systems, from the standard holders to the eucentric sample holders used with the Phenom XL – another platform that has opened up the world of SEM to everyone who can benefit from it, enabling users to place their SEM right by the production line to provide results in real time.

    With the Phenom XL, we have set a new benchmark in compact desktop SEM performance, analysing large samples of up to 100 x 100 mm thanks to its large sample chamber. We see it being used a lot in materials science and earth sciences, but we also have clients in industrial manufacturing who want to look at larger areas. And it still offers the same ease-of-use and fast time-to-image features as other Phenom systems.

    The system’s proprietary mechanisms ensure a rapid vent and load cycle and high throughput. A new, compact motorised stage allows users to scan the entire sample area. In addition, a single-shot optical navigation camera in the Phenom XL enables users to shift to different locations on the sample at the click of a button, all in a matter of seconds.

    The desktop SEM can also be used in space-constrained areas, analyse multiple samples and run reliable analyses overnight. For example, our latest introduction, AsbestoMetric, can process 100 images in approximately 40 minutes, which means the software can analyse 9 filters in 6 hours. And you can do another imaging run overnight.

    Another example is our dedicated automation scripts - small automated software tools that can help SEM users work more efficiently. You can use the Phenom SEM with the Phenom programmable interface (PPI) to automate acquiring, analysing and evaluating images, and creating reports.

    Automation saves valuable operator time

    For other systems, a huge time cost is replacing samples. Normally you have to vent the system, mount the new sample carefully using gloves, pump the sample chamber down and turn on the high voltage for the electron beam. That is at least seven minutes’ worth of waiting time. In a Phenom, you can insert the sample, take the measurement and remove the sample within one minute, saving six minutes with every measurement. The value of this is undeniable, most certainly in a production environment.

    Which industries benefit?

    Education is an obvious example. We work closely with the University of Cambridge, who are big fans of Phenom SEMs, and the students get a lot from working with them. The simplicity and ease of use mean that even first-year students can use the Phenom from the start, without any risk of breaking it!

    Other examples are industries that benefit from mobile measurements. For example, in the event of demolition work, some construction companies have installed the Phenom in a vehicle. Thanks to AsbestoMetric, they can immediately determine if there is asbestos on site and if the situation is safe, or not. Having multiple imaging and analysis capabilities in a mobile unit is of great help to many who work in the field.

    Improved image resolution helps end users

    Many of the applications that our instruments are used for can now be applied to other settings. For example, we have software applications like ParticleMetric and PoroMetric; if you have a higher resolution system, you can measure smaller pores and examine smaller particles. Everything is nano these days, everything is getting smaller.

    Improving the resolution also means our automation programs can be used for more applications. If you look at nanoparticles in the air, such as diesel or other air pollution, better resolution gives you a more accurate picture.

    The better resolution also makes desktop SEMs a viable option for more researchers and analysts to perform their measurements. A smaller system – if well designed, like the Phenom P-series – is also mechanically more stable. That is an important reason why we make the mobile SEM units we discussed earlier. It also enables us to position the Phenom in locations with lots of vibrations, such as high-rise buildings or even ships.

    5th generation materials analysis

    The Phenom can image a huge variety of different types of sample, besides the ‘regular’ samples that any SEM can image. Because of the Phenom’s relatively small size, it can be placed into sealed environments, like a glove-box with a protective atmosphere, enabling highly-reactive specimens to be imaged, for example in battery research.

    The Phenom can also be placed in isolated, dangerous environments. For example, in the nuclear industry Phenoms are placed behind 12 cm of lead glass for radioactive sample analysis. It’s especially important here that you don’t have to change the filament every few weeks!

    2018: raising the bar again

    In 2018, the Thermo Scientific Phenom Pharos once again pushed the boundaries in the field of nanoscale imaging. It is the first desktop SEM solution that includes a field emission gun (FEG) source. Karl: “With the FEG source we extended our imaging range into the single-digit nanometre range, offering a system with the same ease of use and making FEG accessible to everyone. It is easy to operate, from the initial installation to actual usage, thanks to its intuitive design. The advanced hardware design and detectors enable a fast time to image and easy, foolproof handling.”

    Every interaction is easy and intuitive, starting with the ordering process: one code provides a fully-functional FEG SEM with a backscattered electron detector (BSD). Options include a secondary electron detector (SED) and/or energy-dispersive X-ray detector (EDX), along with sophisticated analytical software.

    The advanced detectors can acquire high-quality images with magnifications of up to one million times. Thanks to the column design, high-resolution imaging (<3 nm, SE) is done at the same working distance as analytical work. Image acquisition takes seven seconds or less. This easy operation and intuitive UI enable a very high throughput on a FEG SEM, making the benefits of FEG accessible to everyone.

    The impact on the SEM world

    Phenom desktop SEMs continue to break boundaries in the field of nanoscale imaging. Year on year we are growing the market. We are highly ambitious and constantly working on more detectors, better resolution, and better signals. We are demonstrating that fewer and fewer people actually need a large SEM system to perform the measurements they need for their research.

    Their size and portability are also having a huge impact, as with the demolition response teams I mentioned earlier. They’re also being used on naval ships to assess aircraft components and determine whether helicopters are ready for operation. The range of applications the Phenom can be used in expands with every new tweak we make. We’re really proud to be introducing the benefits of SEM to so many people.

    Click here to learn more about Phenom SEM.

    Topics: Scanning Electron Microscope

    About the author:

    Karl Kersten is head of the Application team at Thermo Fisher Scientific, the world leader in serving science. He is passionate about the Thermo Fisher Scientific product and likes converting customer requirements into product or feature specifications so customers can achieve their goals.

  • SEM automation guidelines for small script development: simulation and reporting

    Scripts are small automated software tools that can help a scanning electron microscope (SEM) user work more efficiently. In my previous blogs, I have explained how we can use the Phenom SEM with the Phenom programmable interface (PPI) to automate the process of acquiring, analysing and evaluating images. In this blog, I will add the Phenom PPI simulator to that and explain how you can generate and export reports using PPI.

    First, I’ll explain how to create a Phenom SEM Simulator in PPI and how to use it to acquire images. The Simulator mimics the behaviour of the Phenom and is a great tool for developing code without needing access to a Phenom SEM.

    After that, I will demonstrate how you can analyse these images using an external module and how you can generate a report using PPI.

    The Phenom PPI Simulator

    The Simulator can be created by calling the Phenom class in PPI and passing empty strings for the Phenom ID, username, and password. In code it looks like this:
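    A minimal sketch of that call (the PyPhenom import name follows the PPI documentation; with a real Phenom ID, username and password the same call connects to the instrument instead):

```python
import PyPhenom as ppi

# Passing empty strings for the Phenom ID, username and password
# creates the offline Simulator instead of connecting to a microscope
phenom = ppi.Phenom('', '', '')
```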

    Acquiring images works in exactly the same way as I explained in my first blog on guidelines for script making.

    We create an instance of the ScanParams class and fill it with the desired settings. In this case, we want to acquire an image with a resolution of 1024x1024, using the BSD detector, 16 frames to average, 8-bit image depth, and a scale of 1. The image is then obtained using phenom.SemAcquireImage() and displayed in a matplotlib figure. The code for this is:
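    A sketch of that acquisition, assuming a phenom object created as in the previous step; the ScanParams field names follow common PPI examples, so check the PPI specification sheet for your version:

```python
import matplotlib.pyplot as plt
import numpy as np
import PyPhenom as ppi

phenom = ppi.Phenom('', '', '')              # the Simulator from the previous step

scanParams = ppi.ScanParams()
scanParams.size = ppi.Size(1024, 1024)       # resolution of 1024x1024
scanParams.detector = ppi.DetectorMode.All   # BSD, all quadrants
scanParams.nFrames = 16                      # 16 frames to average
scanParams.hdr = False                       # 8-bit image depth
scanParams.scale = 1.0                       # scale of 1

acq = phenom.SemAcquireImage(scanParams)
plt.imshow(np.asarray(acq.image), cmap='gray')   # display in a matplotlib figure
plt.show()
```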

    The resulting image from the Simulator consists of repeating diagonal gradients, as shown in Figure 1.

    Figure 1: Phenom Simulator image

    Analyse using an external module 

    In my previous blogs on script development and automated image analysis, I showed how an image can be analysed and evaluated using external libraries. In this blog, we will use an external module to determine the peak-to-peak distance along a circular cross section of the image. To determine this distance we will import a great little module called detect_peaks.

    Using the detect_peaks module we can find local peaks based on their characteristics. Importing an external module is as easy as downloading the .py file, putting it in the root directory of your script and adding the following line to your import statements:
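    With detect_peaks.py placed next to your script, the import statement is simply:

```python
from detect_peaks import detect_peaks
```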

    We extract a circular path because it is a little more exciting than using just a straight line, where all the peaks would be equidistant and the results would be rather dull. To create a circle with points spaced 1 degree apart, a radius of 300 pixels, and centred in the middle of the acquired image:
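    A sketch of that construction with NumPy (the variable names are my own; 512 is the centre of the 1024x1024 image acquired earlier):

```python
import numpy as np

radius = 300                                  # pixels
angles = np.deg2rad(np.arange(0, 360))        # one point per degree
# circle centred on the 1024x1024 image, coordinates forced to integers
x = (512 + radius * np.cos(angles)).astype(np.uint16)
y = (512 + radius * np.sin(angles)).astype(np.uint16)
```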

    In this script we force the numbers to be integers; otherwise we cannot use them to extract a cross-section. This is done with astype(np.uint16): the numbers are now unsigned 16-bit integers (i.e. from 0 to 65,535).

    Extracting the circle and peaks can now be easily done by:
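    Assuming the acquired image is in a NumPy array image and the circle coordinates are in x and y as above (the mpd and mph values below are placeholders to tune for your data), this is two lines:

```python
values = image[y, x]                           # grey values along the circular path
peaks = detect_peaks(values, mpd=5, mph=100)   # indices of the local maxima
```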

    The mpd parameter in detect_peaks is the minimum distance between peaks and mph is the minimum peak height.

    To plot the results, we create a new image. In the left-hand plot, we will show the acquired image with a circle indicating where we took the cross section and red crosses to show where the peaks were found. In the right-hand image, we will plot the value of the cross section with red crosses where the peaks were found. We will add titles and labels to the plot and save it to a jpeg file in order to be able to use it in the report later on.
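    A self-contained sketch of that figure, using a synthetic diagonal-gradient image and a simple local-maximum rule as stand-ins for the real acquisition and detect_peaks (PNG output here to avoid an extra dependency; the original saved a JPEG):

```python
import matplotlib
matplotlib.use('Agg')                      # render off-screen
import matplotlib.pyplot as plt
import numpy as np

# Synthetic stand-in for the Simulator image (repeating diagonal gradients)
image = np.fromfunction(lambda i, j: (i + j) % 128, (1024, 1024))

angles = np.deg2rad(np.arange(0, 360))
x = (512 + 300 * np.cos(angles)).astype(np.uint16)
y = (512 + 300 * np.sin(angles)).astype(np.uint16)
values = image[y, x]                       # cross section along the circle
# simple stand-in for detect_peaks: strict local maxima
peaks = np.where((values[1:-1] > values[:-2]) & (values[1:-1] > values[2:]))[0] + 1

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
ax1.imshow(image, cmap='gray')
ax1.plot(x, y, 'b-')                       # where the cross section was taken
ax1.plot(x[peaks], y[peaks], 'rx')         # peak positions on the image
ax1.set_title('Acquired image')
ax2.plot(values, 'b-')
ax2.plot(peaks, values[peaks], 'rx')       # peaks on the profile
ax2.set_title('Circular cross section')
ax2.set_xlabel('Angle (degrees)')
ax2.set_ylabel('Grey value')
fig.savefig('cross_section.png')           # saved for use in the report
```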

    The resulting image will be displayed in the report we will generate in the next section.

    PPI reporting

    To present the results of a script to the user in a clear way, PPI has its own PDF-reporting tool, based on libHaru. Creating a PDF is fairly easy once you know which steps to take. The first step is to create a document in Python:

    In the document, we need to create a page. This is done by:

    All positioning of text and objects is done with reference to the bottom-left corner of the page. Positions are given in points and the default resolution is 72 dpi, so the default size for A4 is 595x842 points. The size of the paper is saved into the height and width variables.
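    Sketched together, the document and page creation might look like this; only pdf.SaveToFile() and the page.* methods quoted in this post are taken from PPI, so treat the constructor and accessor names below as assumptions and check the PPI specification sheet:

```python
import PyPhenom as ppi

pdf = ppi.Pdf()              # hypothetical constructor name
page = pdf.AddPage()         # hypothetical; adds an A4 page to the document

width = page.GetWidth()      # 595 points for A4 at the default 72 dpi
height = page.GetHeight()    # 842 points
```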

    In this document, I will show how you can make headings, write large sections of text, make tables, and include figures. We start by adding text. I added three different types of text, first a big and bold header, then a smaller italic header, and a section with a large string that runs over multiple lines.

    To create text we begin with page.BeginText(). After that, we set the font with page.SetFontAndSize(). Then we position the text at the top left of the document, with a margin of 50 points (about 2 cm on the page), using page.MoveTextPos(). To insert text we add a line with page.ShowText(). To move to the next line we only have to set the relative movement over the page with page.MoveTextPos().
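    Those steps, assuming the pdf and page objects and the height variable from the previous section (the font handles and the text itself are assumptions; the page methods are the ones named above):

```python
page.BeginText()
page.SetFontAndSize(pdf.GetFont('Helvetica-Bold'), 16)     # big, bold header
page.MoveTextPos(50, height - 50)          # top left, 50-point margin
page.ShowText('Cross-section report')      # hypothetical title text
page.MoveTextPos(0, -25)                   # relative move to the next line
page.SetFontAndSize(pdf.GetFont('Helvetica-Oblique'), 12)  # smaller italic header
page.ShowText('Peak-to-peak analysis')
page.EndText()
```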

    The first time you call page.MoveTextPos(), the starting point is the bottom-left corner of the document; each subsequent call is a move relative to the current position. If you have a long text, it is a hassle to find where every line break should be. To find the line breaks automatically, a text box can be made; it handles the line breaks and the alignment of the text for you.

    It can be called with page.TextRect(), and the following attributes are passed: a PPI rectangle giving the absolute position of the text box, the string with the text that should be printed in the report, and the alignment. You can also see that I have changed the font three times to be able to distinguish between headers and sub-headers and normal text.

    To make a table, normal text is used but displayed in a structured manner. First a header is made using text spaced in a regular horizontal pattern. Below this header we want a single line. However, drawing is not allowed between page.BeginText() and page.EndText(), so we have to close the text section, draw the line, and then reopen it.

    To draw the line we move the position to the start location and use the page.LineTo() to define the line. The real drawing is done by page.Stroke(). After that we iterate over the items we want to put in the table and put them all in the right column, with the same spacing. The table ends with a double underlining. The code to do this is:
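    A sketch of the header row and its underline, again assuming the page object and height variable from before (the column spacing and labels are made up; MoveTo is the libHaru name for positioning the drawing pen and is an assumption here):

```python
# Header row: one column every 120 points
page.BeginText()
page.MoveTextPos(50, height - 150)       # start of the header row
for label in ['Peak #', 'Angle (deg)', 'Grey value']:
    page.ShowText(label)
    page.MoveTextPos(120, 0)             # relative move to the next column
page.EndText()

# Drawing is not allowed between BeginText and EndText, so draw the rule here
page.MoveTo(50, height - 155)            # move the pen to the start location
page.LineTo(410, height - 155)           # define the line under the header
page.Stroke()                            # the actual drawing happens here
```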

    To save the PDF, pdf.SaveToFile() is used. To open it, the subprocess library can be used:
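    For example (the viewer launch depends on your platform):

```python
import subprocess
import sys

pdf.SaveToFile('report.pdf')

# Open the result with the platform's default PDF viewer
if sys.platform.startswith('win'):
    subprocess.Popen(['cmd', '/c', 'start', '', 'report.pdf'])
else:
    subprocess.Popen(['xdg-open', 'report.pdf'])   # 'open' on macOS
```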

    The resulting PDF is:

    This blog concludes my series of blogs with guidelines for small script development, I hope you have enjoyed it. If you would like to learn more about PPI and automation you can download the PPI specification sheet below:

    Click here to learn more about SEM automation and the Phenom Programming Interface.

    Topics: Scanning Electron Microscope Software, Automated SEM Workflows, Automation, PPI

    About the author:

    Wouter Arts is Application Software Engineer at Thermo Fisher Scientific, the world leader in serving science. He is interested in finding new smart methods to convert images to physical properties using the Phenom desktop SEM. In addition, he develops scripts to help companies in using the Phenom desktop SEM for automated processes.

  • What is an FFT Spectrum Analyser?

    FFT Spectrum Analysers, such as the SRS SR760, SR770, SR780 and SR785, take a time-varying input signal, like you would see on an oscilloscope trace, and compute its frequency spectrum. Fourier's theorem states that any waveform in the time domain can be represented by the weighted sum of sines and cosines. The FFT spectrum analyser samples the input signal, computes the magnitude of its sine and cosine components, and displays the spectrum of these measured frequency components.
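    As a quick illustration of the principle (not tied to any particular analyser), NumPy's FFT recovers the components of a two-tone test signal:

```python
import numpy as np

fs = 1000                                  # sample rate (Hz)
t = np.arange(0, 1, 1 / fs)                # one second of samples
# test signal: a 50 Hz tone plus a half-amplitude 120 Hz tone
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(signal)             # complex sine/cosine components
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
magnitude = 2 * np.abs(spectrum) / len(signal)   # amplitude per component

peak = freqs[np.argmax(magnitude)]         # 50.0 Hz, the strongest component
```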

    Click here to download the full Application Note.


    If you would like more information, to arrange a demonstration or receive a quotation please contact us via email or call us on 01582 764334.

  • SEM automation guidelines for small script development: evaluation

    Scripts are small automated software tools that can help a scanning electron microscope (SEM) user work more efficiently. In my previous two blogs, I wrote about image acquisition and analysis with the Phenom Programming Interface (PPI). In this blog I will explain how we can use the physical properties we obtained in the last blog in the evaluation step.

    SEM automation workflows

    SEM workflows typically consist of the same steps; see Figure 1. The four steps that can be automated using PPI are:

    1. Image acquisition
    2. Analysis
    3. Evaluation
    4. Reporting

    In the image acquisition step (1), images are automatically made using PPI and the Phenom SEM (read this blog for more information on this step). In the analysis step (2), the physical properties are extracted from the image (see this blog). The images are evaluated based on these physical properties in the evaluation step (3). The final automated step (4) is reporting the results back to the user.

    Figure 1: Scanning Electron Microscopy workflow

    Image evaluation

    In the evaluation step, the physical quantities are evaluated and categorized. This can be done by:

    • Counting particles based on their morphology
    • Determining the coverage on a sample
    • Basing actions on physical properties of the sample

    In this blog we will base an action on the physical properties found in an image: determining where the center of the copper-aluminum stub is.

    To do this we will assume that the copper insert is perfectly round. The script will start at a location pstart within the copper part of the stub. From here it will move in both the positive and negative x and y directions to find a set of four edge points of the copper insert. Because of the circular symmetry of the stub, the arithmetic average of the x positions of the two x-direction edge points and of the y positions of the two y-direction edge points will yield the center of the stub. In Figure 2 all the points are shown.
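    The averaging step is simple arithmetic; a sketch with made-up edge coordinates (in µm):

```python
# Hypothetical edge points (x, y) found in the +x, -x, +y and -y directions
p_x_pos, p_x_neg = (1200.0, 510.0), (-800.0, 490.0)
p_y_pos, p_y_neg = (205.0, 1500.0), (195.0, -1100.0)

center_x = (p_x_pos[0] + p_x_neg[0]) / 2   # average of the two x edges
center_y = (p_y_pos[1] + p_y_neg[1]) / 2   # average of the two y edges
```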


    Figure 2: Definitions of the locations on the stub

    To find the edges, the stage is moved. In every step the image is segmented using the techniques explained in the previous blog. When less than 50% of the image consists of the copper part, the edge is located. The exact position of the edge point is then defined as the center of mass of the area that is neither copper nor aluminum.

    Figure 3: Definitions of the locations on the stub

    Code snippet 1 shows an example of how this can be done. First the stage is moved to its starting point with the Phenom.MoveTo method. This position is then retrieved from the Phenom using the phenom.GetStageModeAndPosition command. After that, the step size is defined: a step of 250 µm is chosen, which is equal to half the image field width. Four vectors, one per direction, are defined to find the four edges. These vectors are combined into an iterable list so that we can loop over them in the for loop.

    In the for loop, the stage is first moved to an initial guess of the location of the center. Then a while loop is started in which the stage moves in one direction with the step size. At every step the image is segmented and we check whether the copper area is smaller than 50% of the image. If it is, the edge has been found and the center location of the edge is determined using the ndimage.measurements.center_of_mass method.

    The resulting center of mass is expressed in pixels and is converted to metric units using the metadata available in the Phenom acquisition objects. The centers of mass are stored in a list, from which the x and y edge locations are determined. From these locations the arithmetic averages are easily computed, and the stage is moved to its new, improved center location.
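    Since the original snippet was published as an image, here is a hedged reconstruction of the loop described above. Only MoveTo, GetStageModeAndPosition and ndimage.measurements.center_of_mass are named in the text; the segmentation and unit-conversion helpers and the position field names are my assumptions:

```python
from scipy import ndimage

step = 250e-6                              # 250 µm, half the image field width
directions = [(step, 0), (-step, 0), (0, step), (0, -step)]

edge_points = []
for dx, dy in directions:
    phenom.MoveTo(x_start, y_start)        # back to the initial centre guess
    while True:
        pos = phenom.GetStageModeAndPosition().position
        copper, neither = acquire_and_segment(phenom)   # assumed helper
        if copper.mean() < 0.5:            # less than 50% copper: edge reached
            cy, cx = ndimage.measurements.center_of_mass(neither)
            edge_points.append(to_stage_units(pos, cx, cy))  # assumed helper
            break
        phenom.MoveTo(pos.x + dx, pos.y + dy)   # one more step outwards

# average the x edges and the y edges to get the improved centre
center_x = (edge_points[0][0] + edge_points[1][0]) / 2
center_y = (edge_points[2][1] + edge_points[3][1]) / 2
phenom.MoveTo(center_x, center_y)
```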

    Code snippet 1: Code to find and to move to the center of the stub

    In Figure 4, the initial guess of the center location is shown on the left-hand side and the improved center location on the right-hand side. Iterating this process a few times could improve the center location even further, because the symmetry improves towards the center of the stub.

    Figure 4: Initial guess (left) and improved center location (right)


    In code snippet 2, the complete code is shown, including the code from my two previous blogs.

    Code snippet 2: Complete code

    Click here to learn more about SEM automation and the Phenom Programming Interface

    Topics: Scanning Electron Microscope Automation, Industrial Manufacturing, Automation, PPI, Automated SEM Workflows

    About the author:

    Wouter Arts is Application Software Engineer at Thermo Fisher Scientific, the world leader in serving science. He is interested in finding new smart methods to convert images to physical properties using the Phenom desktop SEM. In addition, he develops scripts to help companies in using the Phenom desktop SEM for automated processes.

  • Buying a scanning electron microscope: how to select the right SEM

    You want to buy a new scanning electron microscope (SEM) because you know you need more SEM capability. Maybe you have a traditional floor model SEM, but it is slow and complicated to operate. Maybe you are using an outside service and the turn-around time is unacceptably long.

    You have made your case that your company could significantly improve their business performance and you could do your job better if SEM imaging and analysis were easier, faster and more accessible. Can a desktop SEM do what you need? This article provides the answers and helps you to select the right SEM.

    Floor model SEM vs. Desktop SEM

    The choice between a desktop SEM and a larger, floor model system is almost always primarily an economic one: desktops are much less expensive. But there are other factors that also argue in favor of a desktop solution, even when cost is not the primary consideration.

    Scanning electron microscopes: pricing & affordability

    Let’s deal first with SEM pricing. Desktop SEMs are typically priced at a fraction of their floor model relatives. And there are certainly situations in which the additional cost of the larger systems is justifiable, for example when the resolution requirements are beyond those achievable with a desktop SEM system.

    However, today’s desktop SEMs can deliver resolutions smaller than 10 nm, enough for 80-90% of all SEM applications. So your first question has to be: is it enough for yours?

    Beyond the initial acquisition, there are significant additional costs for a floor model scanning electron microscope system:

    • facilities – typically at least a dedicated room (perhaps including specialized foundations and environmental isolation)
    • additional space and equipment for sample preparation
    • personnel – a dedicated operator, trained in instrument operation and sample preparation

    It is worth noting that while the cost of the equipment and facility are primarily fixed costs of acquisition, the operator is an ongoing expense that will persist for the lifetime of the instrument.

    Clearly, a desktop SEM solution — less costly to acquire and with no requirement for a dedicated facility or operator — is the less expensive choice, as long as its capabilities satisfy the requirements of the application.

    Other decision factors when selecting and buying a scanning electron microscope

    • Microscope speed
      Desktop SEM systems require minimal sample preparation, and their relaxed vacuum requirements and small evacuated volume allow the system to present an image much more quickly than a typical floor model system. Moreover, desktop SEMs are usually operated by the consumer of the information, eliminating the time required for a dedicated operator to perform the analysis, prepare a report and communicate the result. In addition to faster answers, there is considerable intangible value in the immediacy of the analysis and the user’s ability to direct the investigation in real-time response to observations. Finally, in some applications, such as inspection, longer delays carry a tangible cost by putting more work-in-progress at risk.
    • Microscope applications
      Is the application routine and well defined? If it is, and a desktop SEM can provide the required information, why spend more? Concerns about future requirements exceeding desktop capability should be evaluated in terms of the certainty and timing of those requirements and the availability of outside resources for more demanding applications. Even where future requirements will exceed desktop capability, the initial investment in a desktop SEM can continue to deliver a return as that system is used to supplement a future floor model system, perhaps in a screening capacity, or by continuing to perform routine analyses while the floor model system is applied to more demanding work. A desktop system may also serve as a step-wise approach to the justification of a larger system, establishing the value of SEM while allowing an experience-based evaluation of the need for, and cost of, more advanced capability from an outside provider.
    • Microscope users
      How many individuals will be using the system? Are the users trained? If not, how much time are they willing to invest in training? Desktop SEMs are simple to operate and require little or no sample preparation; obtaining an image can be as easy as pushing a couple of buttons. More advanced procedures can be accessed by users with specific needs who are willing to invest a little time in training. In general, the requirements for operator training are much lower with a desktop system, and the system itself is much more robust: it is harder to break, and the potential repair cost is much lower.

    Buying a scanning electron microscope: take-aways

    Now a short recap. The primary decision factors when selecting a SEM are:

    • Pricing
    • Speed
    • Applications
    • Users

    The question to ask yourself while going over these factors is: does a desktop SEM meet my application requirements?

    From experience we can say that it will, in most scenarios. If a desktop SEM is indeed suitable for your application, you’re looking at an investment that’s significantly lower compared to a floor model SEM.

    Remember, desktop systems are typically priced at a fraction of their floor model relatives.

    As I stated earlier there are situations in which the additional cost of larger systems is justifiable. This is the case when the resolution requirements are beyond those achievable in a desktop system.

    However, today’s desktop SEMs can deliver resolutions less than 10 nm — enough for 80%-90% of all SEM applications. So the question will often be: is it enough for yours?

    If that’s a difficult question to answer, or if you’re still in doubt about which SEM to choose, we have an e-guide available that should be of help: how to choose a SEM.

    This guide takes an even deeper dive into the selection process of a SEM, and will help you select the right model for your process and applications.

    Topics: Research Productivity, Scanning Electron Microscope, Pricing

    About the author:

    Karl Kersten is head of the Application team at Thermo Fisher Scientific, the world leader in serving science. He is passionate about the Thermo Fisher Scientific product and likes converting customer requirements into product or feature specifications so customers can achieve their goals.
