By Paul Sirois, MS, PAS - Lab Manager
Near infrared reflectance spectroscopy is the predominant form of analysis employed in today's forage analysis industry. Over the last 30 years, NIR analyses in our lab have nearly doubled and now account for 76% of all samples analyzed. Accuracy, turnaround time and cost have all played a role in the increasing use of NIR technology. In fact, it is so popular that many of our customers are disappointed when they learn that NIR is not an available option for a particular type of feed. Despite its popularity, many people are still unclear on how this technology works. This is the first in a series of articles to help improve your knowledge of NIR analyses.
How is NIR analysis defined?
Near infrared reflectance spectroscopy (NIR) is a non-consumptive instrumental method for fast, accurate and precise evaluation of the chemical composition and associated feeding value attributes of forages and other feedstuffs provided the proper procedures are followed. Source: USDA Agriculture Handbook No. 643
What is the theory behind NIR?
Wavelengths of light in the near infrared region of the spectrum can be associated with different nutrient components. Analyzing changes in the amount of light reflected at these wavelengths enables us to determine nutrient value.
What advancements have increased the power of NIR technology?
The physics or spectral properties of NIR have always existed. Advancements in microcomputers and software have enabled us to capture and maximize the interpretation of this information.
What is the NIR spectrum?
The portion of the electromagnetic spectrum used for NIR analysis is 1100 - 2500 nm. For comparison, the visible portion of the spectrum or the light that we can see lies between 400 - 700 nm.
What are spectra?
The spectrum of a sample is a tracing of the amount of light reflected at each wavelength across the NIR region. It can be thought of as the fingerprint of the sample. See Figure 1.
Figure 1. A comparison of NIR Spectra
Why can't all samples be automatically analyzed by NIR?
NIR is a calibration dependent technology, i.e., a calibration has to be developed for a particular feed or nutrient before routine analyses can be performed.
How do you determine which feeds or nutrients to calibrate?
We take a look at the market and demand for a particular sample type. If there is a relatively high demand for wet chemistry analyses for a sample type, we consider it a good candidate for calibration development. Likewise, we look at the demand for individual nutrients and their use in ration balancing. For example, the introduction of organic matter (OM) based fiber and fiber digestibility measures in the CNCPS led us to develop calibrations for aNDFom and uNDFom at 30, 120 and 240 hours. The CNCPS forms the backbone of several popular ration programs, and these values were needed to drive the new software.
How are calibrations built and developed?
There are several different approaches, but they all involve the following:
- Collect a group of samples that represents the diversity of quality in the population to be analyzed (reference samples)
- Scan the reference samples and store their spectra
- Perform traditional wet chemistry analyses to determine the nutrient values of the samples (reference chemistry)
- Store the sample spectra and wet chemistry values together in a database
- Use calibration software to evaluate differences in the amount of light reflected at individual wavelengths and relate them to differences in nutrient concentration based on the reference chemistry
- Strong correlations between reflected light and nutrient concentration at specific wavelengths make it possible to predict the nutrient concentration of future samples
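The steps above can be sketched in miniature. The following Python example uses entirely synthetic numbers (no real lab data or lab software) and ordinary least-squares regression to show the core idea: reflectance values at a set of wavelengths are related to known reference chemistry, and the fitted relationship then predicts the nutrient value of a new sample from its spectrum alone. Commercial NIR calibrations use more sophisticated chemometric methods, such as partial least squares, on full spectra.

```python
import numpy as np

# Toy illustration with synthetic data, invented for demonstration only.
rng = np.random.default_rng(0)

n_samples, n_wavelengths = 50, 10                  # reference samples x wavelengths
spectra = rng.random((n_samples, n_wavelengths))   # stored reflectance spectra

# Pretend "reference chemistry": the nutrient value is a hidden linear
# combination of a few wavelengths plus a little measurement noise.
true_coefs = np.zeros(n_wavelengths)
true_coefs[[2, 5, 7]] = [8.0, -3.0, 5.0]
chemistry = spectra @ true_coefs + 12.0 + rng.normal(0, 0.1, n_samples)

# "Calibration": fit reflectance -> nutrient concentration (with intercept).
X = np.column_stack([np.ones(n_samples), spectra])
coefs, *_ = np.linalg.lstsq(X, chemistry, rcond=None)

# Predict a future sample from its spectrum alone, no wet chemistry needed.
new_spectrum = rng.random(n_wavelengths)
predicted = coefs[0] + new_spectrum @ coefs[1:]
print(round(float(predicted), 2))
```

The quality of the fit, and therefore of the predictions, depends directly on the accuracy of the reference chemistry, which is why the reference methods matter so much in the factors discussed below.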
How many wavelengths are used in a typical calibration?
In the early days of NIR, generally 2 to 7 wavelengths that showed the best correlation with the nutrient in question were selected. Today, with the increased power of microcomputers and software, every other wavelength, about 700 in total, is utilized. This enhances our ability to detect changes in spectra and relate them to changes in nutrient concentration.
Does one calibration cover all of the nutrients in a feed?
No, all nutrients require their own calibration for a given feed. For example, if you were interested in analyzing corn distillers grains and corn gluten feed for CP, ADF, NDF and fat, this would require eight individual calibrations.
What factors are important for developing a good calibration?
- Organic (carbon-based) compounds, such as protein, fiber, fat and starch, offer the strongest chance for success.
- Good reference chemistry - the accuracy and precision of the wet chemistry reference methods will have a direct influence on the development of sound calibrations.
- A robust dataset - the population of reference samples used for calibration should be representative of the population as a whole. For example, to develop a crude protein (CP) calibration for a mixed population of hays, a good dataset would start with wheat straw samples at 2% CP and range up to 28% CP in immature alfalfa. A dataset built from hays at 8-22% CP would produce a good calibration, but it may struggle with samples outside of this range.
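The range limitation in the hay example can be made concrete with a minimal sketch. The function name and the workflow are hypothetical, invented for illustration, but the idea is how any lab might flag an NIR prediction that falls outside the CP range spanned by its reference samples, since predictions beyond that range amount to extrapolation:

```python
# Hypothetical guard (names and policy invented for demonstration):
# flag predictions outside the range covered by the calibration's
# reference samples, e.g. hays spanning 8-22% CP.
CALIBRATION_CP_RANGE = (8.0, 22.0)  # % CP covered by the reference set

def cp_prediction_status(predicted_cp, cp_range=CALIBRATION_CP_RANGE):
    low, high = cp_range
    if low <= predicted_cp <= high:
        return "ok"
    return "outside calibration range: confirm by wet chemistry"

print(cp_prediction_status(18.5))   # prints "ok"
print(cp_prediction_status(26.0))   # prints the wet-chemistry warning
```

A wider reference set, such as the 2-28% CP range described above, shrinks the region where this warning would fire.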
Is the number of samples in a calibration important?
The robustness of the dataset, as described above, is more critical than sheer numbers. However, large numbers of samples can be an asset. For example, our TMR calibration consists of over 7,000 samples collected from across the world, which helps cover the wide diversity of TMRs received and analyzed on a daily basis.
Next month, more on maximizing the power of NIR to analyze your samples.