Characterisation, Measurement & Analysis
+44(0)1582 764334

Knowledge Base

Welcome to the Lambda Knowledge Base

  • Transmission Electron Microscopy (TEM) Sample preparation

    Sample preparation in Electron Microscopy is one of the most important factors in obtaining high quality data.

    A key phrase to remember is if you put “rubbish in” you’ll get “rubbish out”.

    Therefore, how can high quality TEM samples actually be produced? This short article will try to answer this question by covering the basic steps of sample preparation for TEM.

    The defining characteristic of a high quality specimen is that it is thin, very thin: ideally, a thickness close to the mean free path of the electrons that travel through the sample. For instance, high-resolution imaging needs samples around 100 angstroms (10 nm) thick, electron energy loss spectroscopy (EELS) needs between 100 and 500 angstroms, and diffraction contrast needs around 300 to 500 nanometres.
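    As a quick reference, the target thicknesses quoted above can be captured in a small lookup. This is purely an illustrative sketch; the ranges are simply the article's figures converted to nanometres:

```python
# Target specimen thickness ranges quoted above, in nanometres
# (100 angstrom = 10 nm). Purely illustrative.
TARGET_THICKNESS_NM = {
    "high-resolution imaging": (0, 10),     # ~100 angstrom
    "EELS":                    (10, 50),    # 100-500 angstrom
    "diffraction contrast":    (300, 500),  # 300-500 nm
}

def thin_enough(technique: str, thickness_nm: float) -> bool:
    """True if the specimen is at or below the upper bound for a technique."""
    _, upper = TARGET_THICKNESS_NM[technique]
    return thickness_nm <= upper

print(thin_enough("EELS", 30))                     # a 30 nm foil suits EELS
print(thin_enough("high-resolution imaging", 80))  # far too thick
```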

    But how do you achieve the appropriate specimen thickness? 

    A sample will normally start as bulk material. Using the Fischione Model 130 specimen punch, a 3 mm disc can be sectioned from the bulk material. This disc will be in the region of 500 microns thick or more, so its thickness needs to be reduced further. This can be achieved with the Fischione Model 160 specimen grinder, which can reduce the thickness to roughly 20 microns. A high-precision Fischione Model 200 dimple grinder can then take the thickness down to a few microns.
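    The mechanical thinning sequence above can be summarised as a short list of stages, each with the approximate thickness it leaves behind (the figures quoted in the text):

```python
# Mechanical preparation stages and the approximate specimen thickness
# after each one, in micrometres (values as quoted in the text).
THINNING_STAGES = [
    ("Fischione Model 130 specimen punch (3 mm disc)", 500),
    ("Fischione Model 160 specimen grinder",            20),
    ("Fischione Model 200 dimple grinder",               3),
]

for tool, thickness_um in THINNING_STAGES:
    print(f"{tool}: ~{thickness_um} um")
```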

     

    The final step to prepare an ideal sample for TEM analysis is Ion Milling. Once the sample thickness is approximately 5 microns or less, the Fischione 1051 TEM Ion Mill can be used to reduce the thickness of the specimen further, potentially until it perforates.

    During this procedure the user needs to observe the specimen closely and wait for the coloured fringes to appear. For example, when silicon is very thin (< 1 µm), changes in the colour of its fringes will directly correlate with changes in the specimen thickness. By milling at higher kV until coloured fringes are first observed and then milling at lower kV until perforation, one can accurately endpoint the milling process.
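    The endpointing procedure can be sketched as a simple two-stage loop. Here `observe` and `set_kv` are hypothetical stand-ins for the operator watching the specimen and adjusting the ion-mill voltage, and the default kV values are placeholders rather than recommended settings:

```python
def mill_to_perforation(observe, set_kv, high_kv=5.0, low_kv=2.0):
    """Mill at higher kV until coloured fringes appear, then finish
    gently at lower kV until the specimen perforates."""
    set_kv(high_kv)
    while observe() != "fringes":
        pass  # keep milling at the higher voltage
    set_kv(low_kv)
    while observe() != "perforated":
        pass  # approach perforation slowly
    return "perforated"

# Simulated run: the specimen thins, shows fringes, then perforates.
states = iter(["bulk", "bulk", "fringes", "fringes", "perforated"])
voltages = []
print(mill_to_perforation(lambda: next(states), voltages.append))  # perforated
print(voltages)  # [5.0, 2.0]
```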

    After a final plasma clean, ideally using the Fischione 1070, the specimen will be ready for analysis in the microscope.

    This article covers the basic principles required to achieve electron transparency, but not all samples are considered equal. Please visit our website or contact us for more in-depth preparation steps on 01582 764334 or click here to email.

    Lambda Photometrics is a leading UK Distributor of Characterisation, Measurement and Analysis solutions with particular expertise in Electronic/Scientific and Analytical Instrumentation, Laser and Light based products, Optics, Electro-optic Testing, Spectroscopy, Machine Vision, Optical Metrology, Fibre Optics and Microscopy.

  • What is additive manufacturing technology? How does the process work?

    Additive manufacturing is a relatively new manufacturing approach that has attracted the attention of many people and industries around the world due to its promising potential. In this blog we will describe what additive manufacturing (AM) technology is and how it works; in a follow-up blog we will explain how SEM analysis can assist in improving the quality of AM processes.

    What is additive manufacturing?
    Additive manufacturing, also known as 3D printing or rapid prototyping, according to its ASTM standard is the “process of joining materials to make objects from 3D model data, usually layer upon layer, as opposed to subtractive manufacturing methodologies, such as traditional machining”. Today, the term “additive manufacturing” is mostly used in industry markets, while 3D printing mostly refers to the consumer market.

    Benefits of additive manufacturing technology
    Look again at the definition of AM in the ASTM standard and you’ll see that its major benefit has already been revealed. Conventional subtractive manufacturing uses processes that remove material from a larger piece to form the final 3D object, while AM processes add material only where it is needed.

    The latter, combined with material reutilisation, reduces the material waste involved in the creation of a 3D object, lowering its environmental footprint.

    The other major advantage of AM over conventional manufacturing processes is the design freedom it brings. In principle, anything that can be designed with CAD products can be produced with additive manufacturing.

    That of course enables customisation, providing designers with the opportunity to offer specific solutions for every application. AM also enables a more extensive variety of highly complex structures to be created. It opens up new opportunities to innovate by adding new designs, or changing and/or revising versions of a product in a way that was not possible before. For example, new, more light-weight structures are being created to substitute bulkier products since AM allows parts to be designed with material present only where it needs to be. An example of this can be seen in Fig 1.

    Fig 1: Titanium 3D-printed limbs, designed by William Root.

    Moreover, AM offers shorter production cycles, requires no special manufacturing tools other than the AM machine itself, and reduces labour time and energy costs.

    Limitations of additive manufacturing technology

    Of course, AM also has its limitations, mainly because it is still under development and therefore evolving. First of all, AM has so far not proved well suited to mass production, and it shows certain limitations regarding scaling, material size and material choice.

    It has also been shown that in certain cases, post-processing of products is required to realise the correct surface finish and dimensional accuracy.

    However, AM has captured the interest of many people and industries that are constantly working on finding solutions to these limitations and improving the process and the quality of the products designed using it.

    Another factor that has had a negative effect on the adoption of AM is the potential loss of manufacturing jobs. Of course, this is always the case with new technologies, and hopefully people will adapt and develop the new skills essential for the new jobs that it will create.

    Additive Manufacturing: Areas of application
    Because of its great potential, AM has proven beneficial for a large variety of applications. In some areas, AM products are already used in low-volume production, while in others, research is still ongoing to optimise the processes.

    As a first step, additive manufacturing can be applied to producing models and prototypes during the development stage of a product, and later to pilot series for specific applications, up to low-volume production of certain products.

    As a first application field, researchers are applying additive manufacturing processes for medical and dental applications. These include medical and surgical implants, prosthetics, bio-manufactured parts, and even pills.

    It is evident that the main advantage of AM for such applications is its versatility and customisation possibilities, allowing for tailored solutions within every use case.

    Several AM designs are currently used in automotive (e.g. motor parts and cooling ducts), aerospace (e.g. turbine blades and fuel-system parts) and tooling applications. You can see examples of these products in Fig 2.

    Fig 2: Examples of AM products in a) Aerospace, b) Automotive and c) Medical applications.

    Of course, there are many more applications in which 3D printing has been applied and/or will be applied in the future. Designs are currently used in education and research, construction, art and jewellery, sensors, and even apparel and clothing.

    Obviously, as more people get involved in the research and development as well as quality control of AM products, new application areas will pop up and AM processes will become common practice for a variety of applications and products.

    Additive Manufacturing & SEM

    As with every emerging technology, quality control of the entire process is an important task. Material characterisation (e.g. particles) and quality control of the finished product, and everything in between, are all essential to ensure the quality of the manufacturing process.

    In a follow-up blog, we will describe how scanning electron microscopy (SEM) is a powerful tool for material characterisation and quality control in additive manufacturing processes.

    Until then, we'd like to point you to an interesting video on exactly that topic. It explains how Additive Industries, the world’s first dedicated equipment manufacturer for industrial metal additive manufacturing systems, uses SEM to obtain fast results in additive manufacturing.

    Topics: 3D Printing, Materials Science, R&D, Additive Manufacturing

    About the author:
    Antonis Nanakoudis is Application Engineer at Thermo Fisher Scientific, the world leader in serving science. Antonis is extremely motivated by the capabilities of the Phenom desktop SEM on various applications and is constantly looking to explore the opportunities that it offers for innovative characterisation methods.

    For further information, application support, demo or quotation requests  please contact us on 01582 764334 or click here to email.


  • OptoTest equipment provides the most accurate Return Loss measurements in the industry.

    In this presentation you will learn more about the techniques making that possible, plus some useful tips to keep in mind to ensure optimal performance.

    Click here to download the full presentation.

     


  • How to Save Time When Testing Multiple Cable Types

    Changing out reference cables during the insertion loss and return loss testing process to accommodate new DUT types can cause downtime for a manufacturing line and drastically reduce the efficiency of the cable production process.

    With single-channel DUTs and reference cables, the issue may seem inconsequential. However, in the modern fibre production line every minute counts; any method that speeds up each test, minimises the time between tests, or prevents downtime is a way to make your cables more profitable.

    Figure 1: An OP940 insertion and return loss meter with multiple reference cables: MTP, LC Duplex, SC, and FC connectors.

    One way to minimise this downtime is to connect multiple different reference cables to your multi-channel insertion loss and return loss test set in advance.

    Having reference cables that match the different connector types you commonly test, or that will be tested that day at that test station, will minimise the time it takes to start testing new DUT types.

    Additionally, if you have a reference cable that matches each connector type, you can perform connector-level insertion loss testing on many types of hybrid cables without a complicated test setup.

    Ideally, the setup includes enough channels to accommodate all the different reference cable types at once, as shown in Figure 1. Discuss your testing needs with your sales engineer, who can recommend an ideal test setup and channel count.
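    As a back-of-the-envelope check on channel count, you can total the channels each parked reference cable occupies. The connector list and per-cable channel counts below are illustrative assumptions (e.g. treating an MTP as 12 fibres), not OptoTest specifications:

```python
# Hypothetical channels occupied by one reference cable of each type.
CHANNELS_PER_CABLE = {"MTP (12-fibre)": 12, "LC Duplex": 2, "SC": 1, "FC": 1}

def channels_needed(connector_types):
    """Channels required to keep one reference cable of each type attached."""
    return sum(CHANNELS_PER_CABLE[c] for c in connector_types)

print(channels_needed(["MTP (12-fibre)", "LC Duplex", "SC", "FC"]))  # 16
```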

    Figure 2: Connector end-faces before (top) and after (bottom) cleaning.

    Finally, having reference cables that are only disconnected from time to time means more repeatable IL/RL results, and it prevents damage to high-quality reference cables and the interfaces of the test equipment.

    This will reduce overall downtime and cost associated with troubleshooting and repairing the damage to those connections.


  • How a microfabrication researcher uses SEM as a technique to verify nanoscale structures

    Microfabrication, the creation of microscale structures and features, is an essential technique for the creation of next-generation semiconductors, processors and the ‘lab-on-a-chip’ microfluidic systems found in chemical analysis systems that can fit in the palm of your hand.

    Until now, microfabrication has relied on masking techniques such as lithography, which limits the variety of structures that can be produced. However, new research into microscale 3D printing systems is allowing complex 3D shapes to be assembled at scales smaller than ever achieved before.

    The future beckons

    Tommaso Baldacchini is a microfabrication researcher for the Technology and Applications Center (TAC) at Newport Corporation. In his research into laser-assisted nanofabrication, a Phenom Pro scanning electron microscope is an essential tool.

    Newport Corporation’s TAC labs are very similar to those that you would see at a University, and they closely cooperate with academic customers. TAC conducts experiments for academic partners and fabricates micro-devices and components for use in other academic research areas.

    The current microfabrication landscape

    Up to now, microfabrication has been dominated by traditional machining and photolithographic processes, which are planar techniques. According to Tommaso Baldacchini, photolithography can produce extremely fine structures with high throughput, but the process is limited to two dimensions. Baldacchini said “this means that fabricators are missing out on one entire dimension.” Other limitations include:

    • The expense of the instrumentation for producing these structures
    • The clean-room environment that is often required
    • The limited choice of substrates and materials, largely restricted to silicon and other semiconductors

    Baldacchini mentions: “There is definitely a need to break the barriers of these limitations to produce new micro and nano devices.”

    Breaking down barriers to nanofabrication

    A number of challenges are presented when fabricating nanostructures. These depend mostly on the specific technique used to fabricate the structure and the features of the structure itself — such as its size, shape and surface area.

    Laser assisted nanofabrication (Journal of Laser Applications 24, 042007 (2012)) provides a whole raft of unique abilities for building nano- and microstructures. Laser irradiation projected on material surfaces can cause several effects, including localised heating, melting, ablation, decomposition and photochemical reaction — and leads to the realisation of various complex nanostructures with materials such as graphene, carbon nanotubes and even polymers and ceramics.

    Characterisation

    When characterising structures, it is crucial to have a tool that allows precise measurements, so that fabricated structures can be examined at nanoscale precision. There is a need to examine the structures’ topology and uniformity to make sure the ‘build’ quality is up to scratch. It is also important to be able to characterise the new material by determining its surface composition, and even its internal composition.

    A scanning electron microscope (SEM) is the ideal tool for this type of work, providing magnification high enough to view small-scale and nanoscale sample features. Baldacchini said: “A scanning electron microscope is an invaluable tool to characterise products. We can view changes in the sample’s surface when it is ablated, or we can use SEM to study the topology of a sample we have produced using additive manufacturing.”

    Innovative techniques

    TAC has developed a high-resolution, nanoscale 3D printing technique called two-photon polymerisation, which allows the creation of complex 3D polymeric structures that are often tens of microns across yet carry nanoscale features. SEM is frequently used for structure characterisation, as a means of verifying the nanoscale structure that has been built. In addition, Baldacchini’s research has involved applying nonlinear optical microscopy, such as CARS microscopy, to investigate the chemical and mechanical properties of the microstructures created by two-photon polymerisation.

    “One of the tools that we developed in the TAC for aiding laser microfabrication is called the Laser µFAB. It is a complete system that enables customers to connect their own laser to the machine and perform different types of laser micromachining.”

    The system is provided with software that enables customers to import a two-dimensional drawing and reproduce the drawing using the motion of the stages with respect to the stationary laser. This allows users to create any three-dimensional objects they want to produce.

    Characterisation with a SEM

    So, according to Baldacchini at Newport Corporation, a scanning electron microscope proves to be an invaluable tool to characterise products and verify nanoscale structures.

    If you would like to learn even more about how TAC utilises SEM to verify nanoscale structures, you can click here to download the detailed Case Study.

    Topics: 3D Printing, Electronics

    About the author:
    Jake Wilkinson is an editor for AZoNetwork, a collection of online science publishing platforms. Jake spends his time writing and interviewing experts on a broad range of topics covering materials science, nanoscience, optics, and clean technology.


  • FEG vs. Tungsten source in a scanning electron microscope (SEM): what’s the difference?

    After a few years of operating a transmission electron microscope (TEM) during my postgraduate studies, in 2006 I started my career in electron microscopy as an SEM operator for a biological and medical research centre in York (United Kingdom). Despite never having operated an SEM before, I found it relatively easy to switch from TEM to SEM.

    My first SEM, equipped with a tungsten source, was getting too old and difficult to maintain. For that reason, it was replaced by two brand-new SEMs: the first equipped with a tungsten source and the second with a field emission gun (FEG). The tungsten system was considered the ‘workhorse’ and was used by many co-workers and researchers.

    However, challenging specimens (such as nanoparticles and beam-sensitive materials) could be difficult to image on the tungsten system due to its limited resolution, whereas with the FEG source those difficult specimens were much easier to image. The FEG system allowed us to see things that we couldn’t resolve with a tungsten system. It was like exploring and discovering a completely new world. Ever since that day, I’ve been in love with the FEG source.

    In this blog, I would like to make you enthusiastic too and explain why I prefer using an FEG source in an SEM system. You can learn what the main differences are between a tungsten thermionic emitter and a field emission source, and find out how an FEG source could enhance your research.

    Thermionic emission sources vs. field emission sources

    • Thermionic emission sources
      Typically, thermionic filaments are made of tungsten (W) in the form of a V-shaped wire. They are resistively heated until the electrons overcome the minimum energy needed to escape the material and are released (hence the term thermionic).
    • Field emission sources
      For a field emission source, a fine, sharp, single-crystal tungsten tip is employed. An FEG emitter gives a more coherent beam, and its brightness is much higher than that of a tungsten filament. Electrons are emitted from a smaller area of the FEG source, giving a source size of a few nanometres, compared with around 50 μm for the tungsten filament. This leads to greatly improved image quality with the FEG source. In addition, the lifetime of an FEG source is considerably longer than that of a tungsten filament (roughly 10,000 hours vs 100-500 hours), although a better vacuum is required for the FEG: 10⁻⁸ Pa (10⁻¹⁰ torr), compared with 10⁻³ Pa (10⁻⁵ torr) for tungsten, as shown in Figure 1.

    There are two types of FEG sources: Cold and Schottky FEGs

    For a so-called cold emission source, heating of the filament is not required as it operates at room temperature. However, this type of filament is prone to contamination and requires more stringent vacuum conditions (10⁻⁸ Pa, 10⁻¹⁰ torr). Regular and rapid heating (‘flashing’) is required in order to remove contamination. The spread of electron energies is very small for a cold field emitter (0.3 eV) and the source size is around 5 nm.

    Other field emission sources, known as thermal or Schottky sources, operate with lower field strengths. The Schottky source is also heated, and a reservoir continuously dispenses zirconium dioxide onto the tungsten tip to further lower its work function. The Schottky source is slightly larger, 20-30 nm, with a small energy spread (about 1 eV).
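    Pulling the figures above together gives a rough comparison of the three source types. These are order-of-magnitude values from the text (the tungsten lifetime is taken as the middle of the quoted 100-500 h range, and `None` marks figures the text does not quote):

```python
# Approximate electron-source parameters quoted above (illustrative).
SOURCES = {
    "tungsten":     {"size_nm": 50_000, "lifetime_h": 300,    "vacuum_pa": 1e-3, "spread_ev": None},
    "cold FEG":     {"size_nm": 5,      "lifetime_h": 10_000, "vacuum_pa": 1e-8, "spread_ev": 0.3},
    "Schottky FEG": {"size_nm": 25,     "lifetime_h": 10_000, "vacuum_pa": None, "spread_ev": 1.0},
}

# The cold FEG emitting area is four orders of magnitude smaller than tungsten:
ratio = SOURCES["tungsten"]["size_nm"] / SOURCES["cold FEG"]["size_nm"]
print(f"source size ratio: {ratio:.0f}x")  # 10000x
```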

    It starts with sample preparation

    When switching from a tungsten to an FEG emitter, it is worth mentioning that specimen preparation becomes extremely critical to obtaining high-resolution, high-magnification images of any specimen.

    In general, samples are mounted rigidly on a specimen holder or stub using a carbon ‘conductive’ adhesive. These carbon tabs are in fact only partially conductive, or non-conductive, and can lead to charging artefacts. Hence, carbon tabs might be suitable for a tungsten system, but they become inappropriate for an FEG system.

    For high-resolution imaging on an FEG system, I always try to avoid using the carbon sticker. Specimens such as nanoparticles or fine powder should be prepared directly onto an aluminum pin stub for example.

    For conventional imaging in the SEM, specimens must be electrically conductive, at least at the surface, and electrically grounded to prevent the accumulation of electrostatic charge (e.g. using silver paint, aluminum or copper tape).

    Non-conducting materials are usually coated with an ultra-thin layer of electrically conducting material, such as gold, gold/palladium alloy, platinum, platinum/palladium, iridium, tungsten or chromium. I recommend the following metals and thicknesses for tungsten and FEG sources:

    • Metals:
      Au, Au/Pd (Tungsten source)
      Pt, Pd/Pt, Ir, W (FEG source)
    • Thickness:
      5-10 nm for low magnification
      2-3 nm for high resolution, the thinner the better
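    Those recommendations can be expressed as a tiny lookup helper. This is only a sketch of the rule of thumb above, nothing more:

```python
# Coating metals suggested above for each source type, plus the
# thickness rule of thumb (in nm).
COATING_METALS = {
    "tungsten": ["Au", "Au/Pd"],
    "FEG": ["Pt", "Pd/Pt", "Ir", "W"],
}

def coating_thickness_nm(high_resolution: bool) -> tuple:
    """(min, max) sputter-coat thickness in nm; thinner for high resolution."""
    return (2, 3) if high_resolution else (5, 10)

print(COATING_METALS["FEG"], coating_thickness_nm(high_resolution=True))
```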

    Tungsten source vs. FEG source: imaging differences

    FEG sources have an electron beam that is smaller in diameter, more coherent and with up to three orders of magnitude greater current density or brightness than could ever be achieved with a tungsten source.

    The result of using an FEG source in scanning electron microscopy (SEM) is a significantly improved signal-to-noise ratio and spatial resolution, compared with thermionic devices.

    Field emission sources are ideal for high resolution and low-voltage imaging in SEM. Therefore, focusing and working at higher magnification become easy for any operator.

    Topics: FEG

    About the author:
    Kay Mam - In 2006 I started my career in electron microscopy as an SEM operator for a biological and medical research centre in York (United Kingdom). With an FEG source, difficult specimens are easier to image. The FEG system allowed me to see things I couldn’t before; it was like exploring and discovering a completely new world. Ever since that day, I’ve been in love with the FEG source. In 2016, I joined the Phenom Desktop SEM Application Team, working on a desktop SEM with an FEG source.


  • SEM: types of electrons, their detection and the information they provide

    Electron microscopes are very versatile instruments, which can provide different types of information depending on the user’s needs. In this blog we will describe the different types of electrons that are produced in an SEM, how they are detected, and the type of information that they can provide.

    As the name implies, electron microscopes employ an electron beam for imaging. In Fig. 1, you can see the various signals that can result from the interaction between electrons and matter. All these different types of signal carry different useful information about the sample, and it is up to the microscope operator to choose which signal to capture.

    For example, in transmission electron microscopy (TEM), as the name suggests, the transmitted electrons are detected, giving information on the sample’s inner structure. In the case of a scanning electron microscope (SEM), two types of signal are typically detected: the backscattered electrons (BSE) and the secondary electrons (SE).

    Type of electrons in SEM

    In SEM, two types of electrons are primarily detected:
    • backscattered electrons (BSE)
    • secondary electrons (SE)

    Backscattered electrons are reflected back after elastic interactions between the beam and the sample. Secondary electrons, however, originate from the atoms of the sample: they are a result of inelastic interactions between the electron beam and the sample.

    BSE come from deeper regions of the sample, while SE originate from surface regions. Therefore, BSE and SE carry different types of information. BSE images show high sensitivity to differences in atomic number: the higher the atomic number, the brighter the material appears in the image. SE imaging can provide more detailed surface information.

    Figure 1: Electron — matter interactions: the different types of signals which are generated.

    Backscattered-electron (BSE) imaging

    These electrons originate from a broad region within the interaction volume. They are the result of elastic collisions between electrons and atoms, which change the electrons’ trajectory. Think of the electron-atom collision as the so-called “billiard-ball” model, where small particles (electrons) collide with larger particles (atoms). Heavy atoms are much stronger scatterers of electrons than light atoms, and therefore produce a higher signal (Fig. 2). The number of backscattered electrons reaching the detector rises with the sample’s atomic number (Z). This dependence of the BSE yield on atomic number helps us differentiate between different phases, providing imaging that carries information on the sample’s composition. Moreover, BSE images can also provide valuable information on crystallography, topography and the magnetic field of the sample.

    Figure 2: a) SEM image of an Al/Cu sample, b), c) Simplified illustration of the interaction between electron beam with aluminum and copper. Copper atoms (higher Z) scatter more electrons back towards the detector than the lighter aluminum atoms and therefore appear brighter in the SEM image.
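    The Al/Cu contrast in Figure 2 can be estimated numerically. The polynomial below is Reuter’s empirical fit for the backscatter coefficient η versus atomic number at normal incidence; it is a standard textbook approximation, not something from this article:

```python
def backscatter_coefficient(z: int) -> float:
    """Reuter's empirical fit for the BSE yield (eta) vs atomic number Z."""
    return -0.0254 + 0.016 * z - 1.86e-4 * z**2 + 8.3e-7 * z**3

eta_al = backscatter_coefficient(13)  # aluminium
eta_cu = backscatter_coefficient(29)  # copper
print(f"eta(Al) = {eta_al:.2f}, eta(Cu) = {eta_cu:.2f}")
# Copper backscatters roughly twice as many electrons as aluminium,
# hence its brighter contrast in the BSE image.
```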

    The most common BSE detectors are solid-state detectors, which typically contain p-n junctions. The working principle is based on the generation of electron-hole pairs by the backscattered electrons that escape the sample and are absorbed by the detector. The number of these pairs depends on the energy of the backscattered electrons. The p-n junction is connected to two electrodes, one of which attracts the electrons and the other the holes, generating an electrical current whose magnitude depends on the number of absorbed backscattered electrons.
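    To get a feel for the magnitudes involved: silicon needs roughly 3.6 eV of deposited energy per electron-hole pair (a standard figure for Si detectors, not quoted in the article), so a single backscattered electron creates thousands of pairs:

```python
E_PER_PAIR_EV = 3.6  # approximate energy per electron-hole pair in silicon

def eh_pairs(bse_energy_kev: float) -> int:
    """Rough number of e-h pairs one BSE deposits in a Si detector."""
    return int(bse_energy_kev * 1000 / E_PER_PAIR_EV)

print(eh_pairs(10))  # a 10 keV backscattered electron -> roughly 2800 pairs
```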

    The BSE detectors are placed above the sample, concentrically to the electron beam in a “doughnut” arrangement, in order to maximize the collection of the backscattered electrons and they consist of symmetrically divided parts. When all parts are enabled, the contrast of the image depicts the atomic number Z of the element. On the other hand, by enabling only specific quadrants of the detector, topographical information from the image can be retrieved.

    Figure 3: Typical position of the backscattered and secondary electron detectors.

    Secondary electrons (SE)

    In contrast, secondary electrons originate from the surface or the near-surface regions of the sample. They are a result of inelastic interactions between the primary electron beam and the sample and have lower energy than the backscattered electrons. Secondary electrons are very useful for the inspection of the topography of the sample’s surface, as you can see in Fig. 4:

    Figure 4: a) Full BSD, b) Topography BSD and c) SED image of a leaf.

    The Everhart-Thornley detector is the most frequently used device for the detection of SE. It consists of a scintillator inside a Faraday cage, which is positively charged and attracts the SE. The scintillator is then used to accelerate the electrons and convert them into light before reaching a photomultiplier for amplification. The SE detector is placed at the side of the electron chamber, at an angle, in order to increase the efficiency of detecting secondary electrons.

    These two types of electrons are the signals most commonly used for SEM imaging. Not all SEM users require the same type of information, so the capability of having multiple detectors makes SEM a very versatile tool that can provide valuable solutions for many different applications.

    Topics: Electrons, Scanning Electron Microscope

    About the author:
    Antonis Nanakoudis is Application Engineer at Thermo Fisher Scientific, the world leader in serving science. Antonis is extremely motivated by the capabilities of the Phenom desktop SEM on various applications and is constantly looking to explore the opportunities that it offers for innovative characterisation methods.


  • How next-generation composite materials are manufactured and analysed

    The technical specifications of next-generation materials are taking our technology to a completely new level, allowing us to create products with outstanding properties that were impossible to achieve in the past. These materials are the result of a huge drive toward innovation in material science and could only be achieved because of the invention of the first composite materials and their introduction into the industrial landscape.

    In this article, I describe how these next-generation materials are being developed — and equally important: how their chemical composition is analysed, and their performance is measured.

    How beneficial properties of composite materials are created and preserved

    Certain materials have outstanding properties that are a perfect fit for a specific application. Sometimes, unfortunately, the environment affects these materials to such an extent that they cannot easily be used: they require continuous replacement and repair, which compromises all the advantages their use would bring.

    By creating multiple layers, or applying a coating, such delicate materials can be shielded and used, with all the benefits that they bring.

     

    Figure 1: Glass sheet coated with different materials. The multiple layers add specific properties to the product.

    For example, introducing nanofibres into a slab can dramatically improve its resistance to tensile, flexural or torsional stress. These materials normally feature a matrix (the external part of the material, directly exposed to the stress) supported by a network of fibres. When stress is applied to the material, it is transferred to the fibres, which can easily handle the applied force, responding with an elastic deformation. As soon as the stress is removed, the fibres bring the material back to its original state.
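    The load-sharing between matrix and fibres can be put into rough numbers with the rule of mixtures, which estimates the longitudinal stiffness of a fibre-reinforced composite from the stiffness and volume fraction of each constituent. This is a minimal sketch with illustrative textbook values, not figures from this article:

```python
# Rule-of-mixtures estimate of a composite's longitudinal Young's modulus.
# E_c = Vf * E_f + (1 - Vf) * E_m  (Voigt upper bound: equal strain in fibre
# and matrix, i.e. perfect stress transfer across the fibre-matrix interface)

def composite_modulus(e_fibre, e_matrix, v_fibre):
    """Longitudinal modulus (GPa) for fibre volume fraction v_fibre."""
    return v_fibre * e_fibre + (1.0 - v_fibre) * e_matrix

# Illustrative values: carbon fibre ~230 GPa, epoxy matrix ~3 GPa
e_c = composite_modulus(230.0, 3.0, 0.6)
print(f"Estimated composite modulus: {e_c:.0f} GPa")  # ~139 GPa
```

    The estimate shows why stress transfer matters: the composite inherits most of its stiffness from the fibres, but only if the interface actually transmits the load.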

    This stress-transfer process is what led to the creation of self-healing materials. A typical example is the plastic cover of some smartphones that, when scratched, can recover in a matter of minutes. If the scratch is not too deep, it will disappear completely and the ‘brand-new’ feeling of the phone will last longer.

    The crafting of these materials requires high-level engineering and is the result of a large investment in research. In particular, scientists have focused their attention on how to transfer the stress from the matrix to the fibres without the latter slipping inside the structure. Several solutions were investigated, such as creating a complex fibrous skeleton or coating the fibres with a material that improves shear-stress transmission at the fibre-matrix interface.

    Figure 2 & 3: Different kinds of fibre weaving offer different resistance to stress. The appropriate weaving technique is chosen according to the application.

    How next-generation composite materials are analysed and measured

    As these investigations are performed on nano-scaled materials, electron microscopes are employed for the analysis and measurements. With a desktop scanning electron microscope (SEM), it is possible to evaluate the diameter of the fibres and monitor how they change along the structure. At the same time, it is also possible to analyse locally the quality and chemical composition of the coating in order to verify that the adhesion of the fibre to the matrix is optimised. This can be done with energy-dispersive X-ray spectroscopy (EDS).

    Composite materials are not a recent invention, by the way. Ancient populations inhabiting the European continent were already mixing different types of materials for decorative or practical uses. One example is the discovery of archaeological grave goods in the imperial and royal tombs of Speyer Cathedral in Speyer, Germany, where textile fibres were found mixed with golden threads.

    Within the KUR-Project “Conservation and restoration of mobile cultural assets” in Germany, electron microscopy has been used successfully to perform numerous analyses of the tombs’ contents. Download this free case study to discover how a desktop SEM was used to investigate fibre and leather details without damaging the samples or performing additional sample preparation:

    Topics: Fibres imaging & analysis, Materials Science, EDX/EDS Analysis

    About the author:
    Luigi Raspolini is an Application Engineer at Thermo Fisher Scientific, the world leader in serving science. Luigi is constantly looking for new approaches to materials characterisation, surface roughness measurements and composition analysis. He is passionate about improving user experiences and demonstrating the best way to image every kind of sample.


  • How to spot astigmatism in Scanning Electron Microscopy (SEM) images

    You may have heard of astigmatism as a medical condition that causes visual impairment in up to 40% of adults [1], but how is this applicable to electron microscopy? First of all, let’s talk about what the word astigmatism actually means: it is derived from the negative prefix ‘a’ (without) + ‘stigmat-’ (mark, or point, in Ancient Greek) + ‘ism’ (condition). In a perfect optical system, a lens has only one focal point and is stigmatic. When the lens has more than one focal point, however, we refer to it as astigmatic. This happens when the lens is elongated in either the sagittal (y-axis) or tangential (x-axis) plane, resulting in two focal points (foci).

    In electron microscopy, astigmatism arises due to imperfections in the lens system. At high magnification, the imperfections become more apparent, hampering the quality of your images. As a result, round objects might appear elliptical or out of focus. Fortunately, electron microscopes have a component called a stigmator, which is used to rectify the problem.

    In this blog, I want to show you what astigmatism looks like, using tin spheres on carbon, an ideal sample for spotting it.

    How to spot astigmatism in a scanning electron microscopy image

    The image below shows tin spheres deposited on carbon at low magnification (1,500×, field of view 179 µm). There are no obviously visible distortions in the image, but what happens if we increase the magnification?

    Figure 1. At low magnification, astigmatism is not overly apparent.

    When we increase the magnification to 50,000× (field of view 5.37 µm), we notice that the tin spheres seem out of focus:

    Figure 2. At high magnification, astigmatism becomes apparent.

    Knowing that we should be able to see more detail when using a SEM, perhaps the image is simply out of focus? Let’s see what happens when we adjust the focus:

    Figure 3. When out of focus, round objects become elongated on astigmatic images.

    Figure 3 shows that when the image is astigmatic and out of focus, elongation of round objects occurs. Notice how the smaller spheres appear almost egg-shaped. In Figure 2, we see that an astigmatic, in-focus image simply appears to be out of focus.

    How can I tell if my SEM image is astigmatic?

    In the examples above (Figures 2 and 3), we saw that an astigmatic image becomes elongated when it is out of focus. When an astigmatic image is over- or under-focused, the elongation occurs in perpendicular directions. This means you can test whether your image is astigmatic by over- and under-focusing it.
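    The perpendicular elongation can be illustrated with a toy numerical model (a hypothetical simulation, not microscope data): a point object is blurred by Gaussians whose widths depend on the distance from the current focus setting to two separate focal planes, one for each axis, mimicking an astigmatic lens:

```python
import numpy as np

def gaussian_blur_1d(img, sigma, axis):
    # Separable Gaussian blur along one axis via direct convolution
    if sigma <= 0:
        return img
    radius = int(4 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), axis, img)

def astigmatic_image(defocus, fx=-2.0, fy=2.0, size=101):
    # A point-like object imaged by an astigmatic lens: the x and y
    # focal planes sit at different defocus values (fx != fy)
    img = np.zeros((size, size))
    img[size // 2, size // 2] = 1.0
    img = gaussian_blur_1d(img, abs(defocus - fx), axis=1)  # x blur
    img = gaussian_blur_1d(img, abs(defocus - fy), axis=0)  # y blur
    return img

def elongation(img):
    # Ratio of the spot's second moments along x and y: 1.0 means round
    ys, xs = np.indices(img.shape)
    w = img / img.sum()
    cx, cy = (w * xs).sum(), (w * ys).sum()
    sx = np.sqrt((w * (xs - cx) ** 2).sum())
    sy = np.sqrt((w * (ys - cy) ** 2).sum())
    return sx / sy

# Under-focus gives a ratio < 1 (stretched along y), over-focus > 1
# (stretched along x); at the compromise focus the spot is round.
print(elongation(astigmatic_image(-6)), elongation(astigmatic_image(6)))
```

    The perpendicular stretching at the two extremes is exactly what the operator looks for when sweeping through focus.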

    In the three images below, we show the process of testing whether an image is astigmatic or not. In Figure 4, the lab operator has obtained an image and wants to know if it is astigmatic.

    Figure 4. To the untrained eye, this astigmatic image might just appear to be out of focus.

    Not convinced that the best-quality image has been obtained, the microscope operator over- and under-focuses the image:

    Figure 5. After over- and under-focusing the image, it is clear to the lab operator that the image was astigmatic, due to the visible elongation in perpendicular directions.

    The microscope operator now knows that the microscope’s stigmator needs to be used to improve the quality of the image. After adjusting the stigmator in both X and Y directions, the operator again tests the image, and sees the following:

    Figure 6. When an image is stigmatic, no elongation occurs when (A) under-focused or (C) over-focused. (B) When stigmatic and in focus the image is crisp.

    The operator is now ready to acquire beautiful images for a publication or report.

    In summary, please consider the table below to see what your image will look like depending on how well it is stigmated and/or focused:

    Table 1. SEM image quality depending on stigmation and focus.

    1. Hashemi H et al. (2018) Global and regional estimates of prevalence of refractive errors: Systematic review and meta-analysis. Journal of Current Ophthalmology 30(1): 3–22.

    Topics: Scanning Electron Microscope

    About the author:
    Willem van Zyl is Application Specialist at Thermo Fisher Scientific, the world leader in serving science. He is excited by analytical instruments that are accessible and user-friendly, and truly believes that a SEM image is worth a kazillion words.


  • Measure Latency in Optical Networks with Picosecond Accuracy

    In optical networks where acting on a message or signal is time critical, latency becomes a critical design element. Latency in communications networks comprises the networking and processing of messages as well as the transmission delay through the physical fibre. Measuring and optimising this optical transmission delay can be critical when diagnosing latency issues in a data centre or maintaining quality control in the production of precision fibre links. Fortunately, the Luna OBR 4600 can measure this latency with picosecond accuracy.

    Specifically, latency is the time it takes a light signal to travel, or propagate, through an optical transmission medium. The latency t is related to the length of an optical fibre by the equation:

        t = nL / c

    where L is the length, c is the speed of light in a vacuum and n is the index of refraction of the optical fibre.

    Because the Luna OBR can measure loss and reflections in short fibre networks with ultra-high resolution (sampling resolution of 10 µm) and no dead zones, it is straightforward to extract the exact length or latency of a segment of fibre or waveguide by analysing the time delay between reflection events. In fact, the OBR 4600 can measure latency or length this way with an accuracy of <0.0034% of the total length (or latency). For a 30 m optical fibre, for example, this corresponds to an overall length measurement accuracy of better than 1 mm, which is equivalent to a latency measurement accuracy of about 5 ps for standard fibre. Note that this is the absolute accuracy; actual measurement resolution will be much higher.

    The example illustrates a typical application: measuring the difference in length or latency between two fibre segments, each approximately 50 m long. An OBR 4600 scans both segments, and the latency of each is indicated by the distance between the two reflections at the segment’s beginning and end connectors. In this example, the difference in latency is found to be 95 ps. For this fibre, this is equivalent to a difference of about 19.3 mm in length.
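    The latency-length relationship t = nL/c is simple enough to check numerically. In this sketch, n = 1.468 is an assumed group index for standard single-mode fibre, so the exact figures depend on the fibre used (which is why the conversion of 95 ps lands near, but not exactly on, the 19.3 mm quoted above):

```python
# Latency <-> length conversion for optical fibre, using t = n * L / c.
C = 299_792_458.0   # speed of light in vacuum, m/s
N_FIBRE = 1.468     # assumed group index of standard single-mode fibre

def latency(length_m, n=N_FIBRE):
    """Propagation delay in seconds for a fibre of the given length (m)."""
    return n * length_m / C

def length(latency_s, n=N_FIBRE):
    """Fibre length in metres corresponding to a given delay (s)."""
    return latency_s * C / n

print(f"30 m of fibre: {latency(30.0) * 1e9:.1f} ns")    # ~146.9 ns
print(f"1 mm of fibre: {latency(1e-3) * 1e12:.1f} ps")   # ~4.9 ps
print(f"95 ps of delay: {length(95e-12) * 1e3:.1f} mm")  # ~19.4 mm
```

    The 1 mm case confirms the accuracy claim above: a length error of 1 mm in standard fibre corresponds to roughly 5 ps of latency.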

    Measuring length and latency is only one application of the versatile OBR reflectometer. For an overview of the OBR and common applications for ultra high resolution optical reflectometry, download Luna’s OBR white paper below.

    Fibre Optic Test & Measurement with Optical Backscatter Reflectometry (OBR)

    Optical communications technology is rapidly evolving to meet the ever-growing demand for ubiquitous connectivity and higher data rates. As signalling rates increase and modulation schemes become more complex, guaranteeing a high-fidelity optical transmission medium is becoming even more critical.

    Additionally, modern networks are relying more on photonic integrated circuits (PICs) based on silicon photonics or other developing technologies, introducing additional variables into the design and deployment of robust high bandwidth optical systems.

    Measurement and full characterisation of loss along the light path is a fundamental tool in the design and optimisation of these components and fibre optic networks.

    While different types of reflectometers are available to measure return loss, insertion loss, and event location for different types of optical systems, Optical Backscatter Reflectometry (OBR) is a very high resolution, polarisation-diverse implementation of optical reflectometry that dramatically improves sensitivity and resolution.

    See what you’ve been missing with traditional optical instrumentation in this white paper.

    Topics include:

    • Reflectance And Return Loss
    • Measuring Return Loss
    • Optical Backscatter Reflectometry (OBR)
    • Luna OBR Reflectometers

    Click here to download the white paper.

    For more information please email or call 01582 764334.

