Characterisation, Measurement & Analysis
+44(0)1582 764334

News

  • Introducing the Latest Innovation in Fibre-optic MPO/MTP Polarity Testing Solutions

    We are pleased to announce the OP415 Polarity Analyser from OptoTest - the latest innovation in polarity testing solutions.

    This polarity tester was designed to test 24-fibre MTP/MPO cable assemblies efficiently, but is easily configured to test 8-fibre and 12-fibre cables. It is pre-loaded with 12-fibre and 24-fibre polarity types A, B, and C, plus the ability to create and store custom fibre mappings and channel configurations. Additionally, the OP415 can learn polarity types from existing cables and store them for future use.

    Most customers will be interested in automatic testing, but the OP415 Polarity Analyser also has a manual mode to step through a cable channel by channel - a useful feature for troubleshooting or routing fibres during ribbonizing. Bright red laser sources on each channel provide visual fault detection for ribbon cables.

    The full colour touchscreen display graphically shows if fibres are routed incorrectly or are not connected, and can even display a power level for each channel to detect poor connections. On-board data storage allows users to save results for later analysis.

    Based in Camarillo, California, OptoTest strives to be at the forefront of the fibre optics industry with solid fundamental measurement technologies for optical power, insertion loss, return loss, and launch condition. The company maintains a tradition of breakthrough products and innovative solutions for the testing and analysis of fibre optics components and systems. Lambda Photometrics are proud to represent OptoTest in the UK and welcome the opportunity to share our fibre testing experience with potential customers.

    If you would like more information, to arrange a demonstration or receive a quotation for the OP415 Polarity Analyser, please contact us via email, our website or call us on 01582 764334.

  • The Phenom Process Automation: mixing backscattered and secondary electron images using a Python script

    When the primary beam interacts with the sample, backscattered electrons (BSEs) and secondary electrons (SEs) are generated. Images of the sample, obtained by detecting the emitted signals, carry information on the composition (from the BSE signal) and on the topography (from the SE signal). How are BSEs and SEs formed, and why do they carry specific information? Moreover, is it possible to get both compositional and topographical information in one image? And how flexible is this solution? In this blog, I will answer these questions and introduce a script that allows users to mix their own images.

    When the primary beam hits the sample surface, secondary electrons and backscattered electrons are emitted and can be detected to form images. Secondary electrons are generated from inelastic scattering events of the primary electrons with electrons in the atoms of the sample, as shown on the left of Figure 1. SEs are electrons with low energy (typically less than 50 eV) that can be easily absorbed. This is the reason why only the secondary electrons coming from a very thin top layer of the sample can be collected by the detector.

    On the other hand, backscattered electrons are formed from elastic scattering events, where the trajectories of primary electrons are deviated by the interaction with the nuclei of the atoms in the sample, as shown on the right in Figure 1. BSEs typically have high energy and can emerge from deep inside the sample.

    Figure 1: Formation of secondary electrons (on the left) and backscattered electrons (on the right). SEs are formed from inelastic scattering events, while BSEs are formed from elastic scattering events.

    Secondary electron images contain information on the topography of the sample. As shown on the left of Figure 2, the beam is scanned over a surface that has a protrusion. When the beam is located on the slope of this protrusion, the interaction volume touches the sidewall, causing more secondary electrons to escape the surface. When the beam is located on the flat area, fewer secondary electrons can escape. This means that more secondary electrons are emitted at edges and slopes, causing brighter contrast than on flat areas and providing information on the morphology of the sample.

    On the other hand, the backscattered electron yield depends on the material, as shown on the right of Figure 2. If the beam hits silicon atoms, which have atomic number Z = 14, fewer backscattered electrons will be formed than in the case of gold, which has atomic number Z = 79. The reason for this is that gold atoms have bigger nuclei, which have a stronger effect on the primary electrons’ trajectories and so deflect them more. Backscattered electron images therefore provide information on material differences in the sample.

    Figure 2: On the left, more secondary electrons can escape the sample surface at edges and slopes than in flat areas. On the right, the yield of backscattered electrons depends on the atomic number of the material: more BSEs are generated in gold (Z = 79) than in silicon (Z = 14).

    To collect the secondary electrons, an Everhart-Thornley detector (ETD) is typically used. Because SEs have low energy, a grid at high potential is placed in front of the detector to attract the secondary electrons. BSEs, on the other hand, are often collected by a solid-state backscattered electron detector (BSD) placed above the sample. The images obtained by the ETD and the BSD contain information on the morphology and the composition of the sample, respectively.

    For some applications, however, it is convenient to have information on both the topography and the composition, in one image. This can be done by simply adding the signal coming from the two detectors.

    Mixing BSE and SE images

    When an image is acquired, the beam scans the sample surface pixel by pixel. In each pixel, the signal is collected by the detector and translated into a value. If images are acquired in 8 bits, the range of pixel values varies from 0 to 255. If images are acquired in 16 bits, the values of each pixel can vary from 0 to 65,535.
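
These ranges follow directly from the bit depth; a quick NumPy check (illustrative only):

```python
import numpy as np

# Pixel value ranges for the two bit depths mentioned above.
assert np.iinfo(np.uint8).max == 255      # 8-bit images: 0 to 255
assert np.iinfo(np.uint16).max == 65535   # 16-bit images: 0 to 65,535
```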

    The value of the pixel depends on how many secondary electrons or backscattered electrons are emitted and the higher the value of the pixel, the brighter the pixel appears in the image. This means that in the case of the sample shown in Figure 2 on the left, the edges will appear brighter in the image because more SEs are emitted and therefore the pixels in that position will have a higher value.

    Mixing backscattered electron and secondary electron images means that the two images are summed together. In practice, each pixel in the SE image is summed with the corresponding pixel in the BSE image, using the formula:

        pixel_mixed = ratio × pixel_SE + (1 − ratio) × pixel_BSE

    where ratio determines how much SE and BSE information the mixed image will carry. For equal topographic and compositional information, the ratio is equal to 0.5.
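
As a minimal sketch of this mixing formula in NumPy (array names and sample values are illustrative, not the Phenom implementation):

```python
import numpy as np

def mix_images(se, bse, ratio=0.5):
    """Mix SE and BSE images pixel by pixel.

    ratio is the fraction of SE information in the result;
    (1 - ratio) is the fraction of BSE information.
    """
    se = se.astype(np.float64)
    bse = bse.astype(np.float64)
    return ratio * se + (1.0 - ratio) * bse

# Two tiny 8-bit "images": with ratio 0.5 the result is the average.
se = np.array([[0, 255]], dtype=np.uint8)
bse = np.array([[255, 255]], dtype=np.uint8)
mixed = mix_images(se, bse, ratio=0.5)  # [[127.5, 255.0]]
```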

    On the left of Figure 3 you can see SE (top) and the BSE (bottom) images of a solar cell, where the white area is silver and the dark area is silicon. In the SE image, the topography of the sample is clear: the granular structure of the silver strip can be easily noticed, as well as the bumpy silicon surface. Of course the ETD detector picks up some BSE signal as well, which is the reason why there is a difference in contrast between the two materials.

    In the BSE image, the topography of the sample is less visible. However, the material contrast is enhanced, and the image also shows some dirt particles on the silver strip. On the right of Figure 3, the mixed image is shown. In this case, we used a ratio of 0.5, meaning that each pixel value contains 50% topographic information and 50% compositional information. Not only are all the particles with different material contrast visible, but also the surface roughness of the strip and the silicon area.

    Figure 3: An example of mixing images. On the top left is the SE image and on the bottom left the BSE image of a solar cell, where the silver strip (bright area) can be distinguished from the silicon (dark area). While the SE image carries information on the topography, in the BSE image the material contrast is dominant. On the right is the resulting mixed image using a ratio of 0.5.

    The mixed imaging script

    Being able to generate and save mixed backscattered and secondary electron images is valuable in many applications. Equally important is being able to set the SE:BSE ratio, so that the resulting images provide the most useful information to the user.

    Using the Phenom Programming Interface (PPI), we developed a script that can acquire BSE and SE images directly from the Phenom SEM and mix them together, as shown in Figure 4. It is also possible to load BSE and SE images that were previously saved with the Phenom and generate and save the mixed image offline.

    Figure 4: User interface of the mixed images script, developed with PPI.

    Are you an experienced programmer interested in knowing more about the Phenom Programming Interface and its functionalities? Click here for further information.

    If you are not familiar with programming, but would still like an automated solution for your workflow, then we can help by developing the solution for you.

    Topics: Scanning Electron Microscope Software, Automation, PPI, Automated SEM Workflows

    About the author:

    Marijke Scotuzzi is an Application Engineer at Phenom-World, the world’s no 1 supplier of desktop scanning electron microscopes. Marijke has a keen interest in microscopy and is driven by the performance and the versatility of the Phenom SEM. She is dedicated to developing new applications and to improving the system capabilities, with the main focus on imaging techniques.

  • SEM technology: the role of the electron beam voltage in electron microscopy analysis

    When conducting electron microscopy (EM) analysis, there are a few important parameters that must be taken into account to produce the best possible results, and to image the feature of interest. One of the crucial roles is played by the voltage (or tension) applied to the source electrodes to generate the electron beam. Historically, the trend has always been to increase the voltage to improve the resolution of the system.

    It is only in recent years that scanning electron microscope (SEM) producers have started to focus on improving the resolution at lower voltages. A major role in this has been the expanding field of application of EM to the life sciences - especially after the introduction of the Nobel prize-winning cryo-SEM technique. This blog will focus on the effects of the voltage on the results of electron microscopy analysis.

    The electron beam voltage shapes the interaction volume

    The voltage is an indication of the electrons’ energy: it therefore determines what kind of interaction the beam will have with the sample. As a general guideline, a higher voltage corresponds to deeper penetration beneath the surface of the sample, also known as a bigger interaction volume.

    This means that the electrons will have a larger and deeper propagation within the sample and generate signals in different parts of the affected volume. The chemical composition of the sample also has an impact on the size of the interaction volume: light elements have fewer shells, and the electrons’ energy content is lower. This limits the interactions with the electrons from the electron beam, which can therefore penetrate deeper into the sample, compared to a heavier element.

    When analysing the emitted signals, different results can be obtained. In desktop instruments, three kinds of signals are normally detected: backscattered electrons (BSE), secondary electrons (SE), and X-rays. As the different nature of these signals is not the main focus of this article, more information can be found in this blog.

    The effect of voltage in SEM imaging

    The effect of voltage on BSE and SE imaging is comparable: low voltages enable the surface of the sample to be imaged; high voltages provide more information on the layer beneath the surface. This can be visualised in the images below, where low voltages make surface contamination clearly distinguishable, while higher voltages reveal the structure of the surface underneath the contamination layer.

    Figure 1: BSE images of tin balls at 5kV (left) and at 15kV (right). With the lower voltage, the carbon contamination on top of the sample becomes visible. When the voltage is increased, the deeper penetration enables the imaging of the tin ball surface underneath the carbon spots.

    The nature of the sample is also hugely important in the choice of the appropriate voltage. Biological samples, several polymers, and many other (mostly organic) samples are extremely sensitive to the high energy content of the electrons. Such sensitivity is further enhanced by the fact that the SEM operates in vacuum. This is the leading reason why the focus of SEM developers is moving towards increasing the resolution value at lower voltages, providing important results even with the most delicate samples.

    The main difficulty that is encountered in this process is the physics principle behind the imaging technique: in a similar way to photography, there are in fact several kinds of distortion and aberration that can affect the quality of the final output. With higher voltages, the chromatic aberrations become less relevant, which is the main reason why the previous trend with SEM was to turn towards the highest possible voltage to improve imaging resolution.

    The generation of X-rays in a SEM

    When it comes to X-ray generation, the story is totally different: a higher voltage is responsible for a higher production of X-rays. The X-rays can be captured and processed by an EDS (energy dispersive spectroscopy) detector to perform compositional analysis on the sample.

    The technique consists of forcing the ejection of an electron in the target sample by means of the interaction with the electrons from the electron beam (primary electrons).

    A charge vacancy (hole) can be generated in the inner shells of an atom, and it is filled by an electron with a higher energy content from an outer shell of the same atom. This process requires the electron to release part of its energy in the form of an X-ray. The energy of the X-ray can finally be correlated to the atomic number of the atom through Moseley’s law, returning the composition of the sample.
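
For reference, Moseley’s law states that the square root of the frequency ν of the characteristic X-ray line is proportional to the atomic number Z (k1 and k2 are constants that depend on the line series):

```latex
\sqrt{\nu} = k_1 \, (Z - k_2)
```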

    The key factors in X-ray production are the following:

    • Overvoltage: the ratio between the energy of the incoming beam and the energy necessary to ionize the targeted atom
    • Interaction volume: defines the spatial resolution of the analysis

    The ideal analysis requires a minimum overvoltage value of 1.5, which means that by increasing the electron beam voltage, the maximum number of detectable elements increases. On the other hand, a high voltage corresponds with higher chances of sample damage and, even more importantly, a larger interaction volume.
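
As an illustration of the overvoltage criterion (the helper functions below are hypothetical, not part of any EDS software; energies are in keV):

```python
def overvoltage(beam_energy_kev, critical_ionization_energy_kev):
    """Ratio between the beam energy and the energy needed to ionize the atom."""
    return beam_energy_kev / critical_ionization_energy_kev

def max_detectable_line_kev(beam_energy_kev, min_overvoltage=1.5):
    """Highest characteristic line energy satisfying the minimum overvoltage."""
    return beam_energy_kev / min_overvoltage

# A 15 kV beam satisfies the 1.5 overvoltage rule for lines up to 10 keV;
# raising the voltage therefore increases the number of detectable elements.
print(max_detectable_line_kev(15.0))  # 10.0
```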

    This means not only that the sample’s integrity could be compromised, but also that X-rays are generated in a much larger volume. In the case of multilayers, particles, and generally non-isotropic materials, a larger interaction volume will generate signals from portions of the sample with a different composition, compromising the quality of the results.

    Figure 2: Example of an EDS spectrum collected at 15kV. The peaks highlight the presence of an element and a complex algorithm is applied to convert the signal coming from the detector into chemical composition.

    Typical recommended voltage values for the analysis range between 10 and 20 kV, balancing the two effects. Choosing the ideal value depends on an additional aspect of EDS analysis that is known as ‘peak overlap’. X-rays generated by electrons moving between different shells of different elements can have comparable energy contents.

    This requires either more advanced integration processes to deconvolute the peaks and normalize the results, or the use of higher-energy lines (from one of the two elements with overlapping peaks). While the former is already implemented in most EDS software, the latter is not always possible: the higher-energy line of a very common element such as lead would require a voltage higher than 100 kV.

    More about EDS in SEM

    If you would like to take an even deeper dive into EDS in scanning electron microscopy, take a look at the Phenom ProX desktop SEM. Among other things, it demonstrates how you can control a fully-integrated EDS detector with a dedicated software package — and avoid the need to switch between external software packages or computers.

    Topics: Scanning Electron Microscope Software, Electrons, EDX/EDS Analysis

    About the author:

    Luigi Raspolini is an Application Engineer at Phenom-World, the world’s no 1 supplier of desktop scanning electron microscopes. Luigi is constantly looking for new approaches to materials characterisation, surface roughness measurements and composition analysis. He is passionate about improving user experiences and demonstrating the best way to image every kind of sample.

  • SEM automation guidelines for small script development: image analysis

    Scripts are small automated software tools that can help a scanning electron microscope (SEM) user with their work. In my previous blog I wrote about how SEM images can be acquired with the Phenom Programming Interface (PPI) using a small script. In this blog I will explain how to extract physical properties from those SEM images.

    Workflows for SEM automation

    SEM workflows typically consist of the same steps; see Figure 1. The four steps that can be automated using PPI are:

    1. Image acquisition
    2. Analysis
    3. Evaluation
    4. Reporting

    In the image acquisition step (1), images are automatically made using PPI and the Phenom SEM. In the analysis step (2), the physical properties are extracted from the image. The images are evaluated based on these physical properties in the evaluation step (3). The final automated step (4) is reporting the results back to the user.

    Figure 1: Scanning Electron Microscopy workflow

    SEM image analysis

    This blog will focus on step 2: the analysis step. The analysis is the image processing that takes place to obtain physical quantities from an SEM image. This could be for example:

    • Thresholding BSD images to quantify compositional differences in the image
    • Segmenting the image to find particles
    • Measuring the working distance to detect a height step in your sample

    We will continue with the script from the previous blog. In that blog, an automation script was written to acquire images from a copper-aluminum stub. I also explained how the information obtained with the BSD detector gives compositional contrast.

    In this blog we will use those images and information to segment them into a copper and aluminum part. The script to obtain the images can be found in code snippet 1.

    Code snippet 1: PPI script to acquire an image of the copper and aluminum part of the stub

    Now I will explain how the images can be segmented based on contrast by doing the following steps:

    • Acquire image with only aluminum
    • Acquire image with only copper
    • Determine (automatically) the gray levels associated with the metals
    • Acquire a new image that contains an area with both copper and aluminum
    • Segment the new image into a copper and aluminum part

    Firstly, two images are acquired, each containing only one of the elements. From these images the associated gray levels can be determined. To show how this works, the histograms of the two images are plotted in one graph; see Figure 2. The aluminum peak can be seen around 32,000 and the copper peak around 36,000. The Gaussian spreads of the two peaks overlap at around 34,000. The thresholds should be unambiguous; therefore the threshold bands must not overlap.

    Figure 2: Histograms of the images

    Determining the appropriate values can be done automatically using NumPy. We first determine the mean, which should be close to the peaks. The spread can be obtained from the standard deviation. This can be done by adding the code in snippet 2 to the script from the previous blog (in code snippet 4 you can find the complete code, including the code from the previous blog).

    Code snippet 2: Script to obtain average and standard deviation of the images
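
The snippet itself is not reproduced here, but the calculation it describes is straightforward with NumPy. In this sketch, synthetic arrays stand in for the aluminum-only and copper-only images (peak positions match the histogram in Figure 2; all names and values are illustrative):

```python
import numpy as np

# Synthetic 16-bit stand-ins for the single-element images.
rng = np.random.default_rng(0)
aluminum = rng.normal(32000, 500, size=(256, 256))
copper = rng.normal(36000, 500, size=(256, 256))

for name, img in (("aluminum", aluminum), ("copper", copper)):
    mu, sigma = img.mean(), img.std()
    # Threshold band used later for segmentation: mean +/- 2 * std.
    lo, hi = mu - 2 * sigma, mu + 2 * sigma
    print(f"{name}: mean={mu:.0f}, std={sigma:.0f}, band=({lo:.0f}, {hi:.0f})")
```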

    The output of this part of the script is:

    These values are very close to the values we got just by looking at the histogram. A reasonable threshold is the mean plus or minus twice the standard deviation.

    Segmentation

    The information about the gray values that was obtained in the previous part can be used to segment a new image. This new image has both an area with copper and an area with aluminum. To segment the images, OpenCV will be used. The threshold value will be: μ±2σ.

    To do this we must first import OpenCV into Python as cv2. We obtain the new image (with a longer exposure time to reduce noise) at a new location. Then we blur the image to reduce the noise levels further (thereby making the peaks sharper) and segment based on the contrast levels. In the final step we calculate the relative areas of the metal parts. The code to do this is as follows:

    Code snippet 3: Code to segment the image

    The code in snippet 3 first moves the stage to a position where both the copper and the aluminum parts are visible. The image is acquired using the phenom.SemAcquireImage method. In Figure 3, the acquired image is shown on the left-hand side. Using OpenCV, the image is blurred with a circular Gaussian kernel with a diameter of 9 pixels. This reduces the noise in the image to improve the segmentation.

    This blurred image is segmented using the cv2.inRange method. This method yields a binary image, with white being the segmented part of the image. The two images on the right-hand side of Figure 3 are the segmented results. From the resulting images, the relative percentages of copper and aluminum can be calculated and printed. The extra division by 255 after the sum is needed because these are 8-bit images, so the white pixels have the value 255. The area of copper in this image is 44% and aluminum is 37%; the remaining 19% is other material.
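
A self-contained sketch of this segmentation logic (NumPy stands in for cv2.inRange here so the example runs without OpenCV; the image and threshold values are illustrative):

```python
import numpy as np

# Illustrative 16-bit "image": left half copper-like, right half aluminum-like.
image = np.empty((100, 100), dtype=np.float64)
image[:, :50] = 36000  # copper gray level
image[:, 50:] = 32000  # aluminum gray level

def in_range(img, lo, hi):
    """NumPy equivalent of cv2.inRange: 255 inside the band, 0 outside."""
    return np.where((img >= lo) & (img <= hi), 255, 0).astype(np.uint8)

copper_mask = in_range(image, 35000, 37000)
aluminum_mask = in_range(image, 31000, 33000)

# Relative area: divide the sum by 255 (the white value of the 8-bit mask)
# and by the total number of pixels.
copper_area = copper_mask.sum() / 255 / image.size      # 0.5
aluminum_area = aluminum_mask.sum() / 255 / image.size  # 0.5
```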

    Figure 3: Image to segment, and the segmented parts

    The segmentation results conclude this blog post. The complete code including all the extra plotting is shown in code snippet 4. In the next blog I will explain how the physical properties that we have obtained in this blog can be used to evaluate your sample.

    Code snippet 4: Complete code including all plotting

    I know that computing the standard deviation this way can yield slightly skewed results, because the detector signal follows a Poisson rather than a Gaussian distribution. For the sake of simplicity I assume this method to be close enough, and in fact, for samples that are not heavily contaminated, the results will be completely acceptable.

    Topics: Scanning Electron Microscope Automation, Automation, PPI, Automated SEM Workflows

    About the author:

    Wouter Arts is Application Software Engineer at Phenom-World, the world’s no 1 supplier of desktop scanning electron microscopes. He is interested in finding new smart methods to convert images to physical properties using the Phenom. In addition, he develops scripts to help companies in using the Phenom for automated processes.

  • Sample degradation during SEM analysis: what causes it and how to slow down the process

    When using a scanning electron microscope (SEM), the electron beam can, over time, permanently alter or degrade the sample that is being observed. Sample degradation is an unwanted effect as it can alter — or even destroy — the details you want to see, and consequently change your results and conclusions. In this blog, I will explain what can cause sample degradation, and how you can slow down the process.

    In a SEM, a focused electron beam is used to scan the surface of your sample to create an image. Electrons are generated by an electron source and accelerated through the column by an electric field. The accelerating voltage varies from 1 kV to 30 kV, with typical beam currents in the nano-ampere range.

    Accelerated electrons interact within the sample and, when analysing beam sensitive materials, this interaction can damage and degrade the sample. The degradation can be seen in the form of cracks on the surface, or it can appear that the material is melting or boiling. The speed at which the degradation becomes visible varies with accelerating voltage, beam current and magnification level.

    Sample degradation of different kinds of non-conductive materials

    In samples where the material appears to be melting or boiling, you might assume that the material is being heated by the electron beam. However, simulations show that the melting point of materials can only be reached in extreme cases: samples with a very low heat transfer coefficient, a high beam current or a high magnification level. Degradation can also set in at low electron beam currents and low magnification levels, but it will simply occur over a longer time frame.

    What is causing the sample degradation?

    Depending on the accelerating voltage, electrons from the electron beam can interact with electrons in the atoms of the sample. If a valence electron — an electron that can participate in the formation of a chemical bond — happens to be knocked out of the atom, it will leave an electron hole. This hole must be filled by another electron within 100 femtoseconds (i.e. the typical time period of an atomic vibration), or the bond will be broken.

    In conductive materials, this is not a problem as the electron hole is filled within 1 femtosecond (fs). But for non-conductive materials it can take up to several microseconds to fill up the electron hole, potentially breaking the bond and chemically altering the material and its morphology.

    How to slow sample degradation down

    The speed at which the degradation becomes visible varies according to the material. With some samples, you might not see it at all. If samples do degrade and this interferes with your results, then here are a few tips to slow degradation down:

    • Coat the samples with a conductive (gold) layer to slow down the degradation. The thicker the layer, the better the effect. But be careful not to cover up details with the (conductive) gold layer.
    • Lower the beam current and acceleration voltage.
    • Limit viewing time by adjusting your image settings such as focus and contrast on a non-important part of your sample. When these are correct, move to the area of interest, immediately take a picture and move away again.

    Topics: Sample Preparation, Sample Degradation

    About the author:
    Karl Kersten is head of the Application team at Phenom-World, the world’s no 1 supplier of desktop scanning electron microscopes. He is passionate about the Phenom product and likes converting customer requirements into product or feature specifications so customers can achieve their goals.

  • Customer Testimonial: Closely monitored

    Palletways has developed technology which captures data on every pallet and vehicle entering its hub, using the same high-definition cameras used to inspect the Notre Dame Cathedral in Paris. Rob Gittins, Palletways UK Managing Director, explains the benefits of its award-winning archway scanning system.

    Click here to read the complete article published on www.shdlogistics.com

    To speak with a sales/applications engineer please call 01582 764334 or click here to email

     

    Lambda Photometrics is a leading UK Distributor of Characterisation, Measurement and Analysis solutions with particular expertise in Electronic/Scientific and Analytical Instrumentation, Laser and Light based products, Optics, Electro-optic Testing, Spectroscopy, Machine Vision, Optical Metrology, Fibre Optics and Microscopy.

  • Future Photonics Hub

    The Future Photonics Hub. An astonishing range of innovative ideas emerges when scientists and engineers come together to think about manufacturing.

    Thursday 20 September 2018.

    University of Southampton, Highfield Campus, Southampton, SO17 1BJ.

    Click here for more information

  • Using the SMARTTECH 3D scanner for production of additional parts and accessories

    Enduro-Tech, founded in 2012, manufactures accessories for enduro motorcycles; within a year it became one of the best-known brands in the motorcycle accessories market. Its success is due to the company’s extensive knowledge of the industry and its use of the newest technologies, allowing it to both design and manufacture parts. Enduro-Tech uses a SMARTTECH 3D scanner to obtain the geometry of the motorcycle engine for which it is planning to manufacture a dedicated casing.

    During off-road rides, motorcycles of this type are exposed to repeated impacts from obstacles on rough terrain. The high speed and off-road environment often surprise a rider with an unexpected obstacle, which hits the engine. Constant impacts to the exhaust pipe and the engine can severely damage a motorcycle. Enduro-Tech, which specialises in the design and manufacture of dedicated engine casings, was founded to meet the high market demand for such casings.

    Thanks to SMARTTECH 3D scanners it is possible to obtain the geometry of the engine and then to design the casing in such a way that not only does it perfectly fit the engine but is also lighter and – which is very important for a lot of customers – it perfectly matches the aggressive look of the motorcycle. The entire design process is carried out on a computer giving the engineers responsible for the project complete freedom. The measured surface consists of glossy metal elements. That’s why the underside of the engine is firstly covered with a non-invasive anti-glare solution, which allows for a clear capture of the geometry by an optical measuring device.

    Click here to download the full article.

    For further information, application support, demo or quotation requests, please contact us on 01582 764334 or click here to email.

  • Guidelines for small script development: image acquisition

    Scripts are small software tools that help a scanning electron microscope (SEM) operator in their daily work. They can be used to automate a repetitive task, to scan large areas quickly, or to obtain higher repeatability between measurements. To do this, a software script must be developed. In this blog we will give guidelines on how to develop a small script.

    SEM workflows

    Typically, the workflow for SEM measurements consists of the following steps:

    • Sample prep and loading
    • Image acquisition
    • Analysis
    • Evaluation
    • Reporting
    • Conclusion/Action

    This workflow is illustrated in Figure 1. Steps 1 to 4 can be automated using scripts. Sample preparation has been extensively covered in our previous blogs on sample preparation techniques and on how sputter coating assists your SEM imaging.

    Image acquisition, step 1, is mostly self-descriptive and concerns all steps that are necessary to acquire a high-quality image. It starts with moving the sample to the right position under the microscope and setting image parameters. The task is completed by acquiring and saving the image.

    Image analysis concerns the processing of images to obtain the physical quantities from the images. This could, for example, be:

    • Thresholding BSD images to quantify compositional differences in the image
    • Segmenting the image to find particles
    • Measuring the working distance to detect a height step in your sample

    Figure 1: Scanning Electron Microscopy workflow
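    As an illustration of the thresholding step, the short sketch below uses NumPy on a synthetic image (no microscope is attached here, so the array is a stand-in for an acquired BSD image) to measure the area fraction of a bright, higher-Z phase:

    ```python
    import numpy as np

    # Stand-in for an acquired 8-bit BSD image: a dark matrix (gray value 60)
    # containing a brighter, higher-Z inclusion (gray value 200).
    image = np.full((256, 256), 60, dtype=np.uint8)
    image[100:150, 100:150] = 200

    # Threshold halfway between the two phases and compute the coverage
    # of the bright phase as a fraction of the image area.
    threshold = 128
    bright_fraction = float(np.mean(image > threshold))

    print(f"Bright-phase coverage: {bright_fraction:.1%}")  # 50*50 / 256*256, about 3.8%
    ```

    The same two lines of thresholding logic apply unchanged to a real acquired image array.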

    In the evaluation step, the physical quantities are evaluated and categorized. This can be done by:

    • Counting particles based on their morphology
    • Determining the coverage on a sample
    • Counting the number of defects on a sample

    In the reporting step, a report is made (automatically) that contains all relevant information to make a well-informed decision. The report could be:

    • The acquired images that contain useful information
    • Histograms showing information about the sample such as coverage or size distributions
    • A complete PDF report with tables, graphs and images

    If a script is made in an effective way, all these steps can be performed with a single click of a button, and a report will roll out once acquisition and processing are finished. Then all the operator has to do is check the report and decide what action is appropriate.

    In this blog we will focus on the first step of script development and explain how to acquire images efficiently and with the most suitable quality. In future blogs we will explore the other steps in more detail.

    Acquiring images with the Phenom through PPI
    In my previous blog, I have already shown how easy it is to capture an image with the Phenom through PPI. This time I’ll show the full potential of the image acquisition methods in PPI, and how easy it is to get these images in Python.

    Image acquisition is a method of the Phenom class. To acquire an image with the preferred settings we have to create an object containing the image acquisition parameters. This class is called: ScanParams. The scan parameters class contains the following attributes:

    • size: The dimensions (resolution) of the image to scan.
    • detector: The detector configuration.
    • nFrames: The number of frames to average for signal to noise improvement.
    • hdr: The option to use the High Dynamic Range mode, to create 16-bit images.
    • scale: The scale of the acquisition within the field of view.

    The detector is a separate class (called: detectorMode) that needs to be provided to the scan parameters. This class has the following options:

    • All: Use all BSD detector segments
    • NorthSouth: Subtract the bottom two BSD segments from the top two (Topo A)
    • EastWest: Subtract the left two BSD segments from the right two (Topo B)
    • A: Select only BSD segment A
    • B: Select only BSD segment B
    • C: Select only BSD segment C
    • D: Select only BSD segment D
    • Sed: Select SE detector

    This might all seem a bit intimidating, but it is very easy to use in Python with PPI. To show this, I created a little code snippet that will acquire an image using the Topo A mode of the Phenom. The other settings will be: resolution: 1024x1024 pixels; number of frames: 16; high dynamic range mode: off (an 8-bit image will be acquired); and the image is not scaled. The code is:

    Figure 2: Acquiring images in PPI
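    The code in Figure 2 is shown as an image, so here is a sketch reconstructing its logic from the class and method names given in the text (ScanParams, detectorMode, SemAcquireImage). The attribute values follow the settings above, but the exact PPI connection arguments are not given in the text, so a small stub stands in for the connected Phenom object:

    ```python
    # Minimal stand-ins for the PPI classes described in the text, so the
    # sketch runs without a microscope. Real code would instead import PPI
    # and connect to the instrument.
    class ScanParams:
        pass

    class detectorMode:
        # Topo A: subtract the bottom two BSD segments from the top two.
        NorthSouth = "NorthSouth"

    class StubPhenom:
        def SemAcquireImage(self, scan_params):
            # A real Phenom returns image data; the stub echoes the request.
            return {"requested": vars(scan_params)}

    phenom = StubPhenom()  # real code: the connected Phenom object

    # Fill out the scan parameters described in the text.
    scanParams = ScanParams()
    scanParams.size = (1024, 1024)                 # resolution in pixels
    scanParams.detector = detectorMode.NorthSouth  # Topo A mode
    scanParams.nFrames = 16                        # frames averaged for noise
    scanParams.hdr = False                         # 8-bit image, HDR off
    scanParams.scale = 1.0                         # acquisition is not scaled

    acq = phenom.SemAcquireImage(scanParams)
    ```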

    In the first line, PPI is loaded into Python. After that the Phenom object is created and the connection to the Phenom is set up. The scanParams are initialized and filled out according to the settings described in the previous paragraph. Finally, the image is acquired by calling the phenom.SemAcquireImage method with the scan parameters.

    Developing a real-life small script
    To illustrate the process, we will show how a real-life script is developed. This script will image a copper-aluminum sample and find the boundary between these elements. The script will first determine the characterising parameters of copper and aluminum, then use these characteristics to move the sample to a position in which both aluminum and copper are visible. The copper-aluminum stub is used in the calibration of energy-dispersive X-ray spectroscopy; Figure 3 shows an image of the stub.

    Figure 3: The copper aluminium sample

    First, it is important to know how to differentiate between copper and aluminum in a SEM. In this blog it is explained that the gray value of a BSD image is directly correlated to the atomic number (Z) of the atoms in the sample. This can be used to differentiate between copper and aluminum: copper has the higher atomic number and will therefore appear brighter in the image.

    The contrast information is best obtained using 16-bit images. In 16-bit images the entire range of the ADC (analog-to-digital converter) is used instead of a sub-selection of this information. This is better than using 8-bit images because no auto contrast/brightness needs to be applied between the images. Therefore, no information is lost, and images can be directly compared.
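    A quick numerical illustration of why 16-bit images compare directly while independently stretched 8-bit images do not (a NumPy sketch; the small uniform arrays are hypothetical stand-ins for BSD images of two different materials):

    ```python
    import numpy as np

    # Two 16-bit "images" of regions with genuinely different gray levels.
    a16 = np.array([[20000, 21000], [20000, 21000]], dtype=np.uint16)
    b16 = np.array([[30000, 31000], [30000, 31000]], dtype=np.uint16)

    # In 16 bits the full ADC range is kept, so the images compare directly:
    print(b16.mean() - a16.mean())  # 10000.0

    def auto_contrast_8bit(img):
        """Stretch an image over the full 0..255 range, as a per-image
        auto contrast/brightness step for 8-bit capture would do."""
        arr = img.astype(np.int64)
        lo, hi = arr.min(), arr.max()
        span = max(hi - lo, 1)
        return ((arr - lo) * 255 // span).astype(np.uint8)

    # After independent auto contrast, both images map to the same 8-bit
    # values and the compositional difference between them is gone:
    print(auto_contrast_8bit(b16).mean() - auto_contrast_8bit(a16).mean())  # 0.0
    ```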

    Figure 3 shows that the centre of the stub is copper and its outer edge is aluminum. This geometrical information can be used in the process. For this example, we position the sample in the centre stub position in the Phenom XL, or in the normal stub position in the P-series. To determine the gray level of the copper part we take an image at the centre of the stub. The gray level of the aluminum is determined by moving the stage 0.5 cm to the right. To ensure that there is no overlap, the horizontal field of view is set to 500 µm.

    The images that have been acquired are plotted to validate that they are correct. Matplotlib, a versatile plotting tool commonly used in Python, is used for the plotting. More information on Matplotlib can be found here.

    To acquire the images, move the stage and plot using the following code:

    Figure 4: PPI script to acquire an image of the copper and aluminum part of the stub
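    The script in Figure 4 is again shown as an image; the sketch below reconstructs the flow described in the text. phenom.SetHFW is named in the text, but the stage-move call name and units are assumptions, and a stub replaces the real PPI connection and the Matplotlib plotting so the sketch runs without hardware:

    ```python
    class StubPhenom:
        """Stand-in for a connected Phenom; tracks stage position and HFW."""
        def __init__(self):
            self.x, self.y = 0.0, 0.0   # stage position in metres (assumed units)
            self.hfw = None

        def MoveBy(self, dx, dy):       # relative stage move (assumed name)
            self.x += dx
            self.y += dy

        def SetHFW(self, hfw):          # horizontal field width, named in the text
            self.hfw = hfw

        def SemAcquireImage(self, size=(256, 256), n_frames=1):
            # Low-quality settings are enough here: the mean gray value is
            # barely affected by resolution or frame averaging.
            return {"at": (self.x, self.y), "size": size, "nFrames": n_frames}

    phenom = StubPhenom()                  # real code: connect to the Phenom
    phenom.SetHFW(500e-6)                  # 500 µm field of view, no overlap
    img_copper = phenom.SemAcquireImage()  # centre of the stub: copper
    phenom.MoveBy(5e-3, 0.0)               # 0.5 cm to the right: aluminum
    img_aluminum = phenom.SemAcquireImage()
    # Real code would now plot both images with matplotlib.pyplot (plt).
    ```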

    The script in Figure 4 first loads PPI and Matplotlib into Python and sets up the connection to the Phenom. It then moves the sample to the central location, assuming that the sample is loaded into the Phenom and is in focus (this can, of course, also be automated, but I'll leave that as an exercise for the reader). The horizontal field of view is set to 500 µm with phenom.SetHFW.

    The image settings can be set to a relatively low quality because the average of the gray value is barely influenced by it. A resolution of 256x256 is plenty for statistics (there are still more than 65,000 data points!). The BSD-detector is used with the ppi.detectorMode.All. A short integration time is also acceptable as the noise in the image is dominated by white noise. White noise will only increase the width of the gray-value spectrum (or in other words increase the standard deviation) but it will not change the average. I chose a very aggressive approach here by taking just 1 frame. After the first image is saved, the stage is moved by 0.5 centimeters to the right and a second image is acquired. These images are plotted using the plt commands, which call the Matplotlib package for plotting.
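    The claim that white noise broadens the gray-value distribution without shifting its average can be checked in a few lines (a NumPy sketch with synthetic data; 256x256 matches the resolution used in the script):

    ```python
    import numpy as np

    rng = np.random.default_rng(seed=42)

    # A noise-free "image" with a uniform gray value of 120.
    clean = np.full((256, 256), 120.0)

    # Add zero-mean white noise, as a single-frame acquisition would contain.
    noisy = clean + rng.normal(loc=0.0, scale=10.0, size=clean.shape)

    # The standard deviation grows to roughly the noise level, while the
    # average stays at about 120: fine for measuring mean gray values.
    print(noisy.std())   # close to 10
    print(noisy.mean())  # close to 120
    ```

    Averaging more frames would shrink the standard deviation, but the mean, which is all this script needs, is already reliable from one frame.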

    These speed tweaks are often important to consider: in many well-written scripts, the acquisition time dominates the execution time. For example, had we used the acquisition parameters shown in Figure 2, the acquisition would have been a factor of 20 slower - from about 0.2 seconds per image to roughly 4 seconds. For scripts that acquire many images, it is especially important that the right set of image parameters is chosen.

    Figure 5: Acquired images of the copper and aluminum parts of the stub; plotted with Matplotlib

    To discover more about SEM and see if it fits your research requirements, you can take a look at our desktop SEM comparison sheet. It will give you a quick overview of the capabilities and specifications of several Phenom scanning electron microscopes.

    Click here to download our comparison sheet.

     

    Take a closer look at our SEMs and you will realise that they have a multitude of interesting specifications worth investigating, like their advanced light and electron optical magnification, resolution and digital zoom.

    Topics: Automation, PPI, Automated SEM Workflows

    About the author:
    Wouter Arts is Application Software Engineer at Phenom-World, the world’s no 1 supplier of desktop scanning electron microscopes. He is interested in finding new smart methods to convert images to physical properties using the Phenom. In addition, he develops scripts to help companies in using the Phenom for automated processes.

  • Battery research with a SEM: inspecting one layer at a time

    Batteries revolutionised the world of electronics by enabling us to carry an energy reserve in our pockets. Miniaturisation and efficiency are the two key words when it comes to new developments in this field, pushing the battery materials' properties to their limits. Let's take a look at how researchers characterise materials and gather relevant information about batteries using scanning electron microscopy (SEM).

    The structure of a battery consists of three main components: two electrodes made of different materials and an insulating membrane between them. The different chemical compositions of the electrodes make them available for chemical interaction, and during the reduction-oxidation processes that subsequently take place, energy is released. The chemical energy stored in the electrodes is thereby converted into electrical energy and can be employed to power our electronic devices.

    To go from the original battery concept (which would fit on a table) to the small and long-lasting battery of a smartwatch, some improvements were made. These mostly affected the materials used for the battery construction, rather than the working principle, which remained conceptually unchanged.

    Engineering batteries: what matters?
    When designing a new battery structure, the specifications of the product that will be powered by it are crucial to achieving a good match in terms of size and capacity. Some parameters are commonly found in the battery research and development process:

    • Nominal voltage: This is an index of what voltage the battery can supply. A car and a watch require different amounts of energy and these values are obtained using different types of electrodes.
    • Self-discharge rate: Batteries cannot keep their charge forever and sometimes they just lose it. This can be tolerable for some applications, but can become extremely annoying if, for example, it happens with the battery of a remote controller, which requires very small amounts of energy with long time intervals in between. Temperature typically plays a dominant role in this context (ever wondered why your phone battery dies faster when it’s cold?).
    • Charging cycles: If the battery can be recharged, it is very likely that this must be done quite often. Charge and discharge cycles will damage the battery components (specifically the electrodes), and the total amount of energy that can be accumulated will decrease over time. Optimising the material shape and composition helps to produce batteries that can withstand thousands of charging cycles and lose less than 10% of their nominal capacity.
    • Energy density: As its name suggests, this defines the amount of energy that can be accumulated per volume unit. This is improved not just by engineering the composition of the electrodes, but also their shape, to optimise the use of space with regard to the available reaction surface. In addition, the components’ size has been drastically reduced.
    • Safety: Reducing the size of components raises an important safety issue: the proper insulation of the electrodes. It is not a mystery that batteries can explode (you probably recall how some smartphone producers have actually struggled with this issue). This can happen, for example, when the insulating membranes that separate the electrodes break due to a mechanical stress (in other words, if the battery is bent too much).

    Improving battery quality with SEM
    All these parameters have, as mentioned, a strong dependency on the material composition and structure. They can be monitored, provided that appropriate analysis instrumentation is available.

    Figure 1: left and right: SEM images of raw powders used in the production of cathodes. SEMs are ideal tools for investigating small particles in the range of micrometers or nanometers.

    SEM gives you the opportunity to improve battery research by enabling you to magnify your sample hundreds of thousands of times, making features of a few nanometers clearly visible. In this way it is possible to measure the cross section of layers, as well as the size of the small features on the electrode’s surface that improve the contact surface.

    In addition, it is possible to apply both thermal and mechanical stress to a membrane and observe its behaviour on a microscopic level, thus allowing battery researchers to understand the cause of a possible rupture.

    Energy-dispersive X-ray microanalysis is often combined with SEM to locally identify the chemical composition of the sample accurately and with outstanding, sub-micron, spatial resolution. And the analysis takes only a few seconds!

    Figure 2: An example of how EDS can be used to trace how the sample composition changes along a line. Spot analysis, line scan or area map can be used to monitor the distribution of different phases in a specific region of the sample.


    Topics: Research Productivity, Sample Preparation, Scanning Electron Microscope, Batteries

    About the author:
    Luigi Raspolini is an Application Engineer at Phenom-World, the world’s no 1 supplier of desktop scanning electron microscopes. Luigi is constantly looking for new approaches to materials characterization, surface roughness measurements and composition analysis. He is passionate about improving user experiences and demonstrating the best way to image every kind of sample.
