Characterisation, Measurement & Analysis
+44(0)1582 764334


  • Vibration-Tolerant Interferometry

    QPSI™ Technology Shrugs Off Vibration from Common Sources

    When image stabilization became available on digital cameras, it vastly reduced the number of photos ruined by camera shake. The new technology eliminated the effects of common hand tremors, greatly improving image quality in many photo situations.

    Animated comparison of a PSI measurement with fringe print-through due to vibration, and the same surface measured with QPSI™ technology – free of noisy print-through.

    In precision interferometric metrology, a similar problem, environmental vibration, has ruined countless measurements, like the one in the animation shown at right. Vibration can significantly affect measurement results and spatial frequency analysis, and it is difficult to make high quality optics if you cannot measure them reliably. Solving the vibration problem can be costly, requiring the purchase of a vibration isolation system or a special dynamic interferometer.

    ZYGO's QPSI™ technology is truly a breakthrough for many optical facilities because it eliminates problems due to common sources of vibration, providing reliable data the first time you measure. QPSI measurements require no special setup or calibration, and cycle times are typically within a second or two of standard PSI measurements.

    Key Features:

    • Eliminates ripple and fringe print-through due to vibration
    • High-precision measurement; same as phase-shifting interferometry (PSI)
    • Requires no calibration, and no changes to your setup
    • Easily enabled/disabled with a mouse click

    QPSI is available exclusively from ZYGO on Verifire™, Verifire™ HD, Verifire™ XL, and also on DynaFiz® interferometer systems that have the PMR option installed (phase measuring receptacle). These systems are easy-to-use, on-axis, common-path Fizeau interferometers – the industry standard for reliable metrology – making them the logical choice for most surface form measurements.

    QPSI™ Simplifies Production Metrology
    A ZYGO interferometer with QPSI technology is capable of producing reliable high-precision measurements in the presence of environmental vibration from common sources such as motors, pumps, blowers, and personnel. Unless your facility is free of these sources, your business will likely benefit from QPSI technology.

    While QPSI can completely solve many common vibration issues, environments that have extreme vibration and/or air turbulence may require the additional capability of DynaPhase® dynamic acquisition, which is included by default with ZYGO's DynaFiz® interferometer. DynaPhase® is also available as an option on most new Verifire systems from 2018 onwards.
    We can help determine the best solution for your particular situation.

    Click here to read further information on DynaPhase® Dynamic Acquisition for Extreme Environments: confidence in metrology, no matter the conditions.

    For advice or to arrange a demonstration, please call 01582 764334 or click here to email.

  • Dynamic Capability comes to (nearly) all new Zygo Verifire Interferometers

    DynaPhase® Dynamic Acquisition for Extreme Environments
    Confidence in metrology, no matter the conditions

    Fizeau Interferometry has become a trusted standard for precise metrology of optical components and systems. Traditionally, these instruments were required to be installed in lab environments, where conditions were carefully controlled, to ensure high precision measurements were not compromised. However, today a growing number of applications demand easy, cost-effective solutions for the use of interferometry in environments where metrology has been difficult or impossible in the past.

    Often, optical systems must be tested in locations that simulate their end-use environment. These environments can present challenges, such as strong vibration and air turbulence, which can degrade or even prevent reliable optical measurements. Many of these challenges are addressed with less-than-optimal solutions that suffer from drawbacks in usability, speed, reliability and precision.

    ZYGO's patented DynaPhase® data acquisition technology offers many differentiated benefits, without the limitations associated with alternative methods. Key attributes of DynaPhase include:

    • Highest vibration tolerance in a Fizeau interferometer, enabled by the ZYGO-manufactured high-power laser* and fast acquisition speeds
    • Patented in-situ calibration enables the highest precision, lowest measurement uncertainty measurements, and excellent correlation to temporal phase shifting interferometry (PSI)
    • Simple setup and calibration compared to alternative approaches
    • Cost-effective solution; available on nearly all ZYGO laser interferometers

    Comparison of Measurement Techniques Using an Identical Measurement Cavity

    DynaPhase offers the versatility and performance to address a wide range of challenging optical testing environments and applications, including:

    • Cryogenic and vacuum chamber testing
    • Telescope components and complex optical systems
    • Large towers, workstations, and complex or unstable test stands

    DynaPhase is available on nearly all ZYGO laser interferometers. Features vary by model, giving users the flexibility to use capabilities that enhance efficiency in Production Mode, enable fast system alignment with LivePhase, or reveal temporal changes in data with Movie Mode.

    Get the most from your metrology investment with the unique capabilities and unmatched versatility of DynaPhase, now available on the entire interferometer line from ZYGO.
    For a complete range of vibration-tolerant metrology, check out ZYGO's patented QPSI vibration-tolerant temporal phase-shifting data acquisition, which enables metrology in the presence of common shop-floor vibrations without the need for calibration.

    DynaPhase is inherent in the DynaFiz interferometer, and available as an optional software module on the new Verifire (1200 x 1200 pixel camera), Verifire HD and HDx systems. This means you can have DynaPhase capability on the entry-level Verifire interferometer, which is well specified to start with as it also includes QPSI technology.

    Click here to read further information in "QPSI™ Technology Shrugs Off Vibration from Common Sources".

    To speak with a Sales & Applications Engineer please call 01582 764334 or click here to email.

  • SEM automation guidelines for small script development: simulation and reporting

    Scripts are small automated software tools that can help a scanning electron microscope (SEM) user work more efficiently. In my previous blogs, I have explained how we can use the Phenom SEM with the Phenom programmable interface (PPI) to automate the process of acquiring, analysing and evaluating images. In this blog, I will add the Phenom PPI simulator to that and explain how you can generate and export reports using PPI.

    First, I’ll explain how to create a Phenom SEM Simulator in PPI and how to use it to acquire images. The Simulator mimics the behaviour of the Phenom and is a great tool for developing code without needing access to a Phenom SEM.

    After that, I will demonstrate how you can analyse these images using an external module and how you can generate a report using PPI.

    The Phenom PPI Simulator

    The Simulator can be created by calling the Phenom class in PPI and passing empty strings for the Phenom ID, username, and password. In code it looks like this:
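    The code itself did not survive in this copy of the post. As a minimal sketch of the call pattern described above: the package and constructor names follow the blog's description, but since PPI is a proprietary SDK, a stand-in Phenom class is stubbed here so the snippet runs anywhere.

```python
# Stand-in for the proprietary PPI SDK, stubbed so this sketch runs anywhere.
# With the real SDK installed, the blog's pattern would be something like:
#     import PyPhenom as ppi
#     phenom = ppi.Phenom('', '', '')
class Phenom:
    def __init__(self, phenom_id, username, password):
        # Empty ID and credentials select the offline Simulator
        # instead of connecting to real hardware.
        self.simulated = (phenom_id == '' and username == '' and password == '')

phenom = Phenom('', '', '')   # empty strings -> Simulator
```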

    Acquiring images works in exactly the same way as I explained in my first blog on guidelines for script making.

    We create a class of ScanParams and fill it with the desired settings. In this case, we want to acquire an image with a resolution of 1024x1024, using the BSD detector, 16 frames to average, 8-bit image depth, and a scale of 1. The image is then obtained using phenom.SemAcquireImage(). The image is displayed in a matplotlib figure. The code for this is:
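    The snippet is missing from this copy. The sketch below mirrors the settings listed above; because PPI is proprietary, the ScanParams fields and a stand-in for phenom.SemAcquireImage() are stubbed (the stub returns the diagonal-gradient pattern the Simulator produces), so the example runs without the SDK.

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')          # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

class ScanParams:
    """Stand-in for PPI's ScanParams; field names are assumptions."""
    def __init__(self, size=1024, detector='BSD', nFrames=16, bit_depth=8, scale=1.0):
        self.size, self.detector = size, detector
        self.nFrames, self.bit_depth, self.scale = nFrames, bit_depth, scale

def sem_acquire_image(params):
    # Stand-in for phenom.SemAcquireImage(params): returns the repeating
    # diagonal gradients the Simulator produces.
    x = np.arange(params.size)
    return ((x[None, :] + x[:, None]) % 256).astype(np.uint8)

params = ScanParams(size=1024, detector='BSD', nFrames=16, bit_depth=8, scale=1.0)
acq = sem_acquire_image(params)

plt.imshow(acq, cmap='gray')
plt.savefig('simulator_image.png')
```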

    The resulting image from the Simulator shows repeating diagonal gradients, as seen in Figure 1.

    Figure 1: Phenom Simulator image

    Analyse using an external module 

    In my previous blogs on script development and automated image analysis, I have shown how an image can be analysed and evaluated using external libraries. In this blog, we will use an external module to determine the peak-to-peak distance on a circular cross-section of the image. To determine this distance we will import a great little module called detect_peaks.

    Using the detect_peaks module we can find local peaks based on their characteristics. Importing an external module is as easy as downloading the .py file, putting it in the root directory of your script, and adding the corresponding line (for this module, `from detect_peaks import detect_peaks`) to your import statements.

    We extract a circular path because it is a little more exciting than using just a straight line, where all the peaks would be equidistant, and the results would be rather dull. To create a circle with points spread by 1 degree, a radius of 300 pixels, and positioned in the middle of the acquired image:

    In this script we force the numbers to remain integers; otherwise we cannot use them to index pixels when extracting a cross-section. This is done with astype(np.uint16): the numbers become unsigned 16-bit integers (i.e. from 0 to 65,535).
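    A sketch of that circle construction with NumPy (the 1024 x 1024 image size is assumed from the acquisition settings above):

```python
import numpy as np

# Points on a circle: one point per degree, radius 300 px,
# centred in an assumed 1024 x 1024 image.
angles = np.deg2rad(np.arange(0, 360, 1))
cx = cy = 1024 // 2
xs = (cx + 300 * np.cos(angles)).astype(np.uint16)  # cast to unsigned 16-bit
ys = (cy + 300 * np.sin(angles)).astype(np.uint16)  # integers for pixel indexing
```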

    Extracting the circle and peaks can now be easily done by:

    The mpd parameter in detect_peaks is the minimum peak distance (the minimum spacing between detected peaks) and mph is the minimum peak height.
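    The extraction code is not preserved in this copy. The sketch below shows the idea using scipy.signal.find_peaks as a stand-in for the blog's detect_peaks module (its `distance` argument plays the role of mpd and `height` the role of mph); a synthetic diagonal-gradient image stands in for the Simulator output.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic stand-in for the Simulator image (repeating diagonal gradients).
size = 1024
x = np.arange(size)
image = ((x[None, :] + x[:, None]) % 256).astype(np.uint8)

# Circular path, one point per degree, radius 300 px, centred in the image.
angles = np.deg2rad(np.arange(0, 360, 1))
xs = (size // 2 + 300 * np.cos(angles)).astype(np.uint16)
ys = (size // 2 + 300 * np.sin(angles)).astype(np.uint16)

profile = image[ys, xs]                                  # circular cross-section
peaks, _ = find_peaks(profile, distance=10, height=128)  # ~ mpd=10, mph=128
```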

    To plot the results, we create a new image. In the left-hand plot, we will show the acquired image with a circle indicating where we took the cross section and red crosses to show where the peaks were found. In the right-hand image, we will plot the value of the cross section with red crosses where the peaks were found. We will add titles and labels to the plot and save it to a jpeg file in order to be able to use it in the report later on.
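    A sketch of that two-panel figure with matplotlib; synthetic data and placeholder peak indices stand in for the real measurement, and the output file name is illustrative.

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')                   # headless backend; no display needed
import matplotlib.pyplot as plt

# Synthetic stand-ins for the acquired image, the circle and the peaks.
size = 1024
x = np.arange(size)
image = ((x[None, :] + x[:, None]) % 256).astype(np.uint8)
angles = np.deg2rad(np.arange(0, 360, 1))
xs = (size // 2 + 300 * np.cos(angles)).astype(np.uint16)
ys = (size // 2 + 300 * np.sin(angles)).astype(np.uint16)
profile = image[ys, xs]
peaks = np.array([20, 110, 200, 290])   # placeholder peak indices

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.imshow(image, cmap='gray')
ax1.plot(xs, ys, 'b-')                  # circle where the cross-section was taken
ax1.plot(xs[peaks], ys[peaks], 'rx')    # red crosses at the peaks
ax1.set_title('Acquired image')
ax2.plot(profile)
ax2.plot(peaks, profile[peaks], 'rx')
ax2.set_title('Circular cross-section')
ax2.set_xlabel('angle (degrees)')
ax2.set_ylabel('grey value')
fig.savefig('cross_section.png')        # the blog saves a jpeg for the report
```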

    The resulting image will be displayed in the report we will generate in the next section.

    PPI reporting

    To report results effectively to a user of a script, PPI has its own PDF reporting tool, based on libHaru. Creating a PDF is fairly easy once you know which steps to take. The first step is to create a document in Python:

    In the document, we need to create a page. This is done by:

    All positioning of text and objects is done with reference to the bottom left corner of the page. The positions are given in points, and at the default resolution of 72 dpi the default A4 page size is 595 x 842 points. The size of the paper is saved into the height and width variables.
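    The arithmetic behind those defaults (A4 is 210 x 297 mm, and a point is 1/72 inch):

```python
MM_PER_INCH = 25.4
POINTS_PER_INCH = 72

# A4 is 210 mm x 297 mm; converting to points gives the 595 x 842 default.
width = round(210 / MM_PER_INCH * POINTS_PER_INCH)    # 595
height = round(297 / MM_PER_INCH * POINTS_PER_INCH)   # 842
```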

    In this document, I will show how you can make headings, write large sections of text, make tables, and include figures. We start by adding text. I added three different types of text, first a big and bold header, then a smaller italic header, and a section with a large string that runs over multiple lines.

    To create text we begin with page.BeginText(). After that, we set the font with page.SetFontAndSize(). Then we position the text to the top left of the document with a margin of 50 pixels (about 2 cm in the document) with page.MoveTextPos(). To insert text we add a line with page.ShowText(). To move to the next line we only have to set the relative movement over the page with page.MoveTextPos().

    The first time you call page.MoveTextPos() the starting point is the bottom left corner of the document, and the second time it is a relative change to the new position. Typically, if you have a long text it is a hassle to find where every line break should be. To automatically find where a line break should be a text box can be made. This text box automatically does the line breaks and alignments of the text for you.

    It can be called with page.TextRect(), and the following attributes are passed: a PPI rectangle giving the absolute position of the text box, the string with the text that should be printed in the report, and the alignment. You can also see that I have changed the font three times to be able to distinguish between headers and sub-headers and normal text.

    To make a table, normal text is used but is displayed in a structured manner. First a header is made using text spaced in a regular horizontal pattern. Below this header we want a single line. However, drawing is not allowed between page.BeginText() and page.EndText() parts, so we have to close the text part to draw and then reopen it again.

    To draw the line we move the position to the start location and use the page.LineTo() to define the line. The real drawing is done by page.Stroke(). After that we iterate over the items we want to put in the table and put them all in the right column, with the same spacing. The table ends with a double underlining. The code to do this is:

    To save the PDF, pdf.SaveToFile() is used. To open the PDF, the subprocess library can be used:
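    The snippet did not survive in this copy; a sketch of the opening step with subprocess (pdf.SaveToFile() itself is PPI's own call, the file name is illustrative, and the viewer command is platform dependent):

```python
import subprocess
import sys

pdf_path = 'report.pdf'   # illustrative file name, as written by pdf.SaveToFile()

# Pick a platform-appropriate command for the default PDF viewer.
if sys.platform.startswith('win'):
    opener = ['cmd', '/c', 'start', '', pdf_path]
elif sys.platform == 'darwin':
    opener = ['open', pdf_path]
else:
    opener = ['xdg-open', pdf_path]

# subprocess.run(opener)  # uncomment to actually launch the viewer
```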

    The resulting PDF is:

    This blog concludes my series of blogs with guidelines for small script development, I hope you have enjoyed it. If you would like to learn more about PPI and automation you can download the PPI specification sheet below:

    Click here to learn more about SEM automation and the Phenom Programming Interface.

    Topics: Scanning Electron Microscope Software, Automated SEM Workflows, Automation, PPI

    About the author:

    Wouter Arts is Application Software Engineer at Thermo Fisher Scientific, the world leader in serving science. He is interested in finding new smart methods to convert images to physical properties using the Phenom desktop SEM. In addition, he develops scripts to help companies in using the Phenom desktop SEM for automated processes.

  • Keysight’s Truevolt Digital Multimeters (DMMs)

    Keysight’s Truevolt Digital Multimeters (DMMs) offer a full range of measurement capabilities and price points with higher levels of accuracy, speed, and resolution.

    Get more insight quickly
    Truevolt DMMs' graphical capabilities, such as trend and histogram charts, offer more insight quickly. The higher-end models also provide a data logging mode for easier trend analysis and a digitizing mode for capturing transients.

    Measure low-power devices
    The ability to measure very low currents (a 1 µA range with pA resolution) allows you to make measurements on very low-power devices.

    Maintain calibrated measurements
    Auto calibration allows you to compensate for temperature drift so you can maintain measurement accuracy throughout your workday.

    Key specifications       | 34460A                | 34461A                | 34465A                               | 34470A
    Digits of resolution     |                       |                       |                                      |
    Basic DCV accuracy       | 75 ppm                | 35 ppm                | 30 ppm                               | 16 ppm
    Max reading rate         | 300 rdgs/s            | 1,000 rdgs/s          | 5,000 rdgs/s std; 50,000 rdgs/s opt  | 5,000 rdgs/s std; 50,000 rdgs/s opt
    Memory                   | 1,000 rdgs            | 10,000 rdgs           | 50,000 rdgs std; 2 million rdgs opt  | 50,000 rdgs std; 2 million rdgs opt
    DCV                      | 100 mV to 1,000 V     | 100 mV to 1,000 V     | 100 mV to 1,000 V                    | 100 mV to 1,000 V
    ACV (RMS)                | 100 mV to 750 V       | 100 mV to 750 V       | 100 mV to 750 V                      | 100 mV to 750 V
    DCI                      | 100 µA to 3 A         | 100 µA to 10 A        | 1 µA to 10 A                         | 1 µA to 10 A
    ACI                      | 100 µA to 3 A         | 100 µA to 10 A        | 100 µA to 10 A                       | 100 µA to 10 A
    2- and 4-wire resistance | 100 Ω to 100 MΩ       | 100 Ω to 100 MΩ       | 100 Ω to 1,000 MΩ                    | 100 Ω to 1,000 MΩ
    Continuity, diode        | Y, 5 V                | Y, 5 V                | Y, 5 V                               | Y, 5 V
    Frequency, period        | 3 Hz to 300 kHz       | 3 Hz to 300 kHz       | 3 Hz to 300 kHz                      | 3 Hz to 300 kHz
    Temperature              | RTD/PT100, thermistor | RTD/PT100, thermistor | RTD/PT100, thermistor, thermocouples | RTD/PT100, thermistor, thermocouples
    Capacitance              | 1.0 nF to 100.0 µF    | 1.0 nF to 100.0 µF    | 1.0 nF to 100.0 µF                   | 1.0 nF to 100.0 µF
    Dual line display        | Yes                   | Yes                   | Yes                                  | Yes
    Display                  | Color, graphical      | Color, graphical      | Color, graphical                     | Color, graphical
    Statistical graphics     | Histogram, bar chart  | Histogram, bar chart, trend chart | Histogram, bar chart, trend chart | Histogram, bar chart, trend chart
    Rear input terminals     | No                    | Yes                   | Yes                                  | Yes
    I/O interface:
    USB                      | Yes                   | Yes                   | Yes                                  | Yes
    LAN/LXI Core             | Optional              | Yes                   | Yes                                  | Yes
    GPIB                     | Optional              | Optional              | Optional                             | Optional
  • What is an FFT Spectrum Analyser?

    FFT Spectrum Analysers, such as the SRS SR760, SR770, SR780 and SR785, take a time-varying input signal, like you would see on an oscilloscope trace, and compute its frequency spectrum. Fourier's theorem states that any waveform in the time domain can be represented by the weighted sum of sines and cosines. The FFT spectrum analyser samples the input signal, computes the magnitude of its sine and cosine components, and displays the spectrum of these measured frequency components.
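    As a small illustration of what the instrument computes (the sample rate and test frequencies here are arbitrary choices for the example), the same operation in NumPy:

```python
import numpy as np

# Sample a time-domain signal, take the FFT, and read off the magnitude
# of each frequency bin -- the essence of an FFT spectrum analyser.
fs = 1024                          # sample rate, Hz
t = np.arange(fs) / fs             # one second of samples
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(signal)) / (fs / 2)   # normalised magnitude
freqs = np.fft.rfftfreq(fs, d=1 / fs)

# The two sine components appear as peaks at 50 Hz (amplitude 1.0)
# and 120 Hz (amplitude 0.5).
```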

    Click here to download the full Application Note.


    If you would like more information, to arrange a demonstration or receive a quotation please contact us via email or call us on 01582 764334.

  • Nano Mechanical Imaging

    The nano mechanical imaging (NMI) mode is an extension of the contact mode. The static force acting on the cantilever is used to produce a topography image of the sample. Simultaneously, force curves are produced at each pixel and used to extract quantitative material-property data such as adhesion, deformation, and dissipation.

    Click here to read the complete article.


    To speak with a sales/applications engineer please call 01582 764334 or click here to email

    Lambda Photometrics is a leading UK Distributor of Characterisation, Measurement and Analysis solutions with particular expertise in Electronic/Scientific and Analytical Instrumentation, Laser and Light based products, Optics, Electro-optic Testing, Spectroscopy, Machine Vision, Optical Metrology, Fibre Optics and Microscopy.

  • New Baumer LX Cameras now with integrated JPEG compression

    We are pleased to announce that we are now delivering an on-board image compression video camera, which is able to transmit high-quality images and reduce the data output in real-time.

    Baumer is supplementing the hugely popular LX series with 2, 4 and 25 megapixel cameras with integrated JPEG image compression and frame rates of up to 140 fps. With the GigE cameras, your savings are continual: from bandwidth through CPU load to storage space – this simplifies the system structure design and reduces integration costs.

    Why not give us a call now on 01582 764334 if you would like a free trial of the camera or click here to email.

    For further information on our Machine Vision camera series, click here.


  • LED Lighting Techniques

    The future belongs to LED technology. The long lifetime and high energy efficiency of these devices form the main reason for changing over to this technology for illumination requirements in Machine Vision.

    Depending on what the requirements are in terms of price, performance and flexibility, the user can find the best solution in the market using this state-of-the-art technology.

    The user has a number of options available to them for target illumination.

    Click here for a useful guide to help you choose the right lighting for your Machine Vision application.

    Alternatively why not contact our Machine Vision specialists on 01582 764334 or click here to email.


  • SEM automation guidelines for small script development: evaluation

    Scripts are small automated software tools that can help a scanning electron microscope (SEM) user work more efficiently. In my previous two blogs, I wrote about image acquisition and analysis with the Phenom Programming Interface (PPI). In this blog I will explain how we can use the physical properties we obtained in the last blog in the evaluation step.

    SEM automation workflows

    SEM workflows typically consist of the same steps; see Figure 1. The four steps that can be automated using PPI are:

    1. Image acquisition
    2. Analysis
    3. Evaluation
    4. Reporting

    In the image acquisition step (1), images are automatically made using PPI and the Phenom SEM (read this blog for more information on this step). In the analysis step (2), the physical properties are extracted from the image (see this blog). The images are evaluated based on these physical properties in the evaluation step (3). The final automated step (4) is reporting the results back to the user.

    Figure 1: Scanning Electron Microscopy workflow

    Image evaluation

    In the evaluation step, the physical quantities are evaluated and categorized. This can be done by:

    • Counting particles based on their morphology
    • Determining the coverage on a sample
    • Basing actions on physical properties of the sample

    In this blog we will base an action on the physical properties in an image: determining where the center of the copper-aluminum stub is.

    To do this we will assume that the copper insert is perfectly round. The script will start at a location p_start within the copper part of the stub. From there it will move in the positive and negative x and y directions to find a set of four edge points of the copper insert: p_left, p_right, p_bottom and p_top. Because of the circular symmetry of the stub, the arithmetic average of the x positions of p_left and p_right, and of the y positions of p_bottom and p_top, will yield the center p_center of the stub. In Figure 2 all the points are shown.


    Figure 2: Definitions of the locations on the stub
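    The averaging step can be sketched as follows; the edge-point coordinates here are made-up numbers purely for illustration, standing in for the values the segmentation procedure finds.

```python
import numpy as np

# Illustrative edge points (x, y) in metres; real values come from the
# edge-detection procedure on the stage.
p_left   = np.array([1.10e-3,  0.02e-3])
p_right  = np.array([5.90e-3, -0.01e-3])
p_top    = np.array([3.52e-3,  2.41e-3])
p_bottom = np.array([3.48e-3, -2.39e-3])

# Circular symmetry: x-centre from the left/right pair,
# y-centre from the top/bottom pair.
center = np.array([(p_left[0] + p_right[0]) / 2,
                   (p_top[1] + p_bottom[1]) / 2])
```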

    To find the edges, the stage is moved. In every step the image is segmented using the techniques explained in the previous blog. When less than 50% of the image consists of the copper part, the edge is located. The exact position of the edge point is then defined as the center of mass of the area that is neither copper nor aluminum.

    Figure 3: Definitions of the locations on the stub
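    A hedged sketch of one such edge test: the synthetic frame and intensity thresholds below are illustrative stand-ins for the real segmentation, and the blog reaches the same function via ndimage.measurements.center_of_mass.

```python
import numpy as np
from scipy.ndimage import center_of_mass

# Synthetic frame: a bright copper region, a darker aluminium region,
# and a boundary band that is neither (thresholds are illustrative).
frame = np.zeros((512, 512), dtype=np.uint8)
frame[:, :200] = 200                  # copper (bright)
frame[:, 220:] = 80                   # aluminium (darker)

copper    = frame > 150
aluminium = (frame > 50) & ~copper
neither   = ~(copper | aluminium)     # the boundary band between the two

edge_found = copper.mean() < 0.5      # copper fills < 50% of the image
cy, cx = center_of_mass(neither)      # edge position: centre of mass of the band
```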

    Code snippet 1 shows an example of how this can be done. First the stage is brought to its original starting point with the Phenom.MoveTo method. This position is retrieved from the Phenom using the phenom.GetStageModeAndPosition command. After that, the step size is defined: a step of 250 µm is chosen, which is equal to half the image field width. Four vectors, one per direction, are defined to find the four edges. These vectors are combined into an iterable list so the for loop can iterate over them.

    In the for loop, the stage is first moved to an initial guess of the location of the center. Then a while loop is started in which the stage moves in one direction with the step size. At every step the image is segmented and checked to see whether the copper area is smaller than 50%. If it is, the edge has been found and the center location of the edge is determined using the ndimage.measurements.center_of_mass method.

    The resulting center of mass is expressed in pixels and is converted to metric units using the metadata available in the Phenom acquisition objects. The centers of mass are stored in a list, and from this list the edge x and y locations are determined. From these, the arithmetic averages are easily computed, and the stage is moved to its new, improved center location.

    Code snippet 1: Code to find and to move to the center of the stub

    In Figure 4, the initial guess of the location of the center is shown on the left-hand side and the improved center location is shown on the right-hand side. Iterating this process a few times could improve the center location even further, because the symmetry improves towards the center of the stub.

    Figure 4: Definitions of the locations on the stub


    In code snippet 2, the complete code is shown, including the code from my two previous blogs.

    Code snippet 2: Complete code

    Click here to learn more about SEM automation and the Phenom Programming Interface

    Topics: Scanning Electron Microscope Automation, Industrial Manufacturing, Automation, PPI, Automated SEM Workflows

    About the author:

    Wouter Arts is Application Software Engineer at Thermo Fisher Scientific, the world leader in serving science. He is interested in finding new smart methods to convert images to physical properties using the Phenom desktop SEM. In addition, he develops scripts to help companies in using the Phenom desktop SEM for automated processes.

  • Buying a scanning electron microscope: how to select the right SEM

    You want to buy a new scanning electron microscope (SEM) because you know you need more SEM capability. Maybe you have a traditional floor model SEM, but it is slow and complicated to operate. Maybe you are using an outside service and the turn-around time is unacceptably long.

    You have made your case that your company could significantly improve their business performance and you could do your job better if SEM imaging and analysis were easier, faster and more accessible. Can a desktop SEM do what you need? This article provides the answers and helps you to select the right SEM.

    Floor model SEM vs. Desktop SEM

    The choice between a desktop SEM and a larger, floor model system is almost always primarily an economic one: desktops are much less expensive. But there are other factors that also argue in favor of a desktop solution, even when cost is not the primary consideration.

    Scanning electron microscopes: pricing & affordability

    Let’s deal first with SEM pricing. Desktop SEMs are typically priced at a fraction of their floor model relatives. And there are certainly situations in which the additional cost of the larger systems is justifiable, for example when the resolution requirements are beyond those achievable with a desktop SEM system.

    However, today’s desktop SEMs can deliver resolutions smaller than 10 nm, enough for 80-90% of all SEM applications. So your first question has to be: is that enough for yours?

    Beyond the initial acquisition, there are significant additional costs for a floor model scanning electron microscope system:

    • facilities – typically at least a dedicated room (perhaps including specialized foundations and environmental isolation), plus additional space and equipment for sample preparation
    • personnel – a dedicated operator, trained in instrument operation and sample preparation

    It is worth noting that while the cost of the equipment and facility are primarily fixed costs of acquisition, the operator is an ongoing expense that will persist for the lifetime of the instrument.

    Clearly, a desktop SEM solution — less costly to acquire and with no requirement for a dedicated facility or operator — is the less expensive choice, as long as its capabilities satisfy the requirements of the application.

    Other decision factors when selecting and buying a scanning electron microscope

    • Microscope speed
      Desktop SEM systems require minimal sample preparation, and their relaxed vacuum requirements and small evacuated volume allow the system to present an image much more quickly than a typical floor model system. Moreover, desktop SEMs are usually operated by the consumer of the information, eliminating the time required for a dedicated operator to perform the analysis, prepare a report and communicate the result. In addition to faster answers, there is considerable intangible value in the immediacy of the analysis and the user’s ability to direct the investigation in real-time response to observations. Finally, in some applications, such as inspection, longer delays carry a tangible cost by putting more work-in-progress at risk.
    • Microscope applications
      Is the application routine and well defined? If it is, and a desktop SEM can provide the required information, why spend more? Concerns about future requirements exceeding desktop capability should be evaluated in terms of the certainty and timing of the potential requirements and the availability of outside resources for more demanding applications. Even where future requirements will exceed desktop capability, the initial investment in a desktop SEM can continue to deliver a return as that system is used to supplement a future floor model system, perhaps in a screening capacity, or to continue performing routine analyses while the floor model system is applied to more demanding work. A desktop system may also serve as a step-wise approach to the justification of a larger system, establishing the value of SEM while allowing an experience-based evaluation of the need for, and cost of, more advanced capability from an outside provider.
    • Microscope users
      How many individuals will be using the system? Are the users trained? If not, how much time are they willing to invest in training? Desktop SEMs are simple to operate and require little or no sample preparation. Obtaining an image can be as easy as pushing a couple of buttons. More advanced procedures can be accessed by users with specific needs who are willing to invest a little time in training. In general, the requirements for operator training are much lower with a desktop system, and the system itself is much more robust: it is harder to break, and the potential repair cost is much lower.

    Buying a scanning electron microscope: take-aways

    Now a short recap. The primary decision factors when selecting a SEM are:

    • Pricing
    • Speed
    • Applications
    • Users

    The question to ask yourself while going over these factors is: does a desktop SEM meet my application requirements?

    From experience we can say that it will, in most scenarios. If a desktop SEM is indeed suitable for your application, you’re looking at an investment that’s significantly lower compared to a floor model SEM.

    Remember, desktop systems are typically priced at a fraction of their floor model relatives.

    As I stated earlier, there are situations in which the additional cost of larger systems is justifiable. This is the case when the resolution requirements are beyond those achievable in a desktop system.

    However, today’s desktop SEMs can deliver resolutions less than 10 nm — enough for 80%-90% of all SEM applications. So the question will often be: is it enough for yours?

    If that’s a difficult question to answer, or if you’re still in doubt about which SEM to choose, we have an e-guide available that should help: how to choose a SEM.

    This guide takes an even deeper dive into the selection process of a SEM, and will help you select the right model for your process and applications.

    Topics: Research Productivity, Scanning Electron Microscope, Pricing

    About the author:

    Karl Kersten is head of the Application team at Thermo Fisher Scientific, the world leader in serving science. He is passionate about the Thermo Fisher Scientific product and likes converting customer requirements into product or feature specifications so customers can achieve their goals.
