Food fraud is the manipulation of a commodity or product, whether intentional or not, in a way that is not disclosed to the consumer. Typically, a high-end, expensive product is diluted with, or replaced by, a cheaper, lower-end one. The practice is on the rise as premium ingredients become more expensive while remaining in high demand; economically motivated food fraud is estimated at more than $10 billion annually in the U.S. alone. Beyond the economic harm, manipulating or misstating the ingredients in a food product can have health consequences for some consumers, such as when undeclared allergens are present. Laboratories need to test for food authenticity because consumers want confidence in the products they buy.
One approach to food authenticity testing is to monitor the molecular composition of the foodstuff with liquid or gas chromatography coupled with mass spectrometry (LC/MS or GC/MS). Traditionally, authenticity testing has been performed by searching for one or more adulterants or impurities, which are then quantified to determine fraud. That approach only works, however, if the adulterants are known a priori, and fraudsters can always find new adulterants to add. An increasingly common alternative is to profile the small molecules, or features, in a commodity with high-resolution MS and use many of those features together to indicate whether a product is adulterated. A statistical model is built from authentic food samples; when a new sample is tested, its features are compared with the model and the sample is classified into a group. Because this method does not rely on information about specific adulterants, and does not even need to identify the features, it is nearly impossible for fraudsters to defeat.
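To make the profile-and-classify idea concrete, here is a minimal sketch in Python using scikit-learn. The synthetic feature table, the two honey-variety groups, and the PCA-plus-logistic-regression pipeline are all illustrative assumptions, not a prescribed workflow; in practice, the feature table would come from your peak-finding software.

```python
# Minimal sketch: classify samples from an MS feature table.
# All data below is synthetic; a real table would come from
# LC/MS or GC/MS feature extraction software.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic feature intensities for two groups of authentic samples,
# e.g., two honey varieties: 40 samples x 500 features each.
group_a = rng.lognormal(mean=3.0, sigma=1.0, size=(40, 500))
group_b = rng.lognormal(mean=3.0, sigma=1.0, size=(40, 500))
group_b[:, :25] *= 2.5  # pretend a subset of features separates the groups

X = np.vstack([group_a, group_b])
y = np.array([0] * 40 + [1] * 40)  # 0 = variety A, 1 = variety B

# Scale, reduce dimensionality, then classify. The PCA step guards
# against overfitting when features vastly outnumber samples.
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),
    LogisticRegression(max_iter=1000),
)
model.fit(X, y)

# Classify a new, unknown sample against the model.
unknown = rng.lognormal(mean=3.0, sigma=1.0, size=(1, 500))
print("Predicted group:", model.predict(unknown)[0])
print("Class probabilities:", model.predict_proba(unknown)[0])
```

The dimensionality-reduction step matters because feature counts typically dwarf sample counts in this kind of profiling, and a model fit directly to hundreds or thousands of raw features will tend to overfit.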
Although profiling, model building, and sample classification may sound complicated, they are becoming increasingly routine, with user-friendly workflows and software available for laboratories to begin this type of testing. Before you start your analysis, however, there are some concepts and best practices you should understand.
Samples
Well-defined and verified samples of food products, grouped by type, are critical for building statistical models. Include as many individual samples in each group as possible to capture enough sources of variability and to reduce potential non-measurement biases in the model. For example, for each type of honey in the model, provide different lots from different production sites. This reduces bias toward a specific lot or a specific manufacturing line and keeps the model focused on the features that actually separate the honey types. You should also plan to acquire additional authentic and adulterated samples that are withheld when the model is created and used later to test, or validate, the model.
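As a rough illustration of the withholding step, the sketch below splits a synthetic feature table into training and held-out portions and scores the model only on samples it never saw at creation. The data, the 25% split fraction, and the pipeline are assumptions for demonstration, not a recommended study design.

```python
# Sketch: withhold a portion of samples to validate the model later.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
X = rng.lognormal(3.0, 1.0, size=(80, 500))  # synthetic feature table
y = np.array([0] * 40 + [1] * 40)            # two sample groups
X[y == 1, :25] *= 2.5                        # group-separating features

# Withhold 25% of samples, stratified so both groups are represented.
X_train, X_held, y_train, y_held = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=1)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

# The held-out samples played no part in model creation; good
# performance here gives confidence that the model generalizes.
print(classification_report(y_held, model.predict(X_held)))
```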
Samples must be extracted in a manner that is reproducible for the endogenous metabolites of interest. Keep protocols simple where possible. For example, liquid extraction of a homogenized sample with an organic solvent is a good starting point, as it extracts the compounds of interest in few steps and avoids introducing contamination and error. The complexity of some samples, however, may require additional preparation. If a liquid extract still contains too much matrix for routine analysis, try altering the pH or temperature of the extraction to produce a cleaner extract before turning to solid-phase extraction (SPE); SPE protocols may inadvertently remove analytes of interest or introduce too much sample-handling variation for a robust model to be built.
Instrument Platform
Although other platforms can be suitable for authenticity testing, when beginning research for a model, consider a high-resolution instrument such as a quadrupole time-of-flight (Q-TOF) to ensure enough resolution to differentiate analytes and to increase the specificity of the model. A high-resolution instrument also enables untargeted models, which are harder to cheat than targeted ones. A Q-TOF offers an extended dynamic range, which is important when analyzing complex samples across a wide range of concentrations in a heavy matrix: it allows you to detect small amounts of analytes coeluting with highly abundant ones. Try to avoid instruments that rely on ion trapping, as their limited dynamic range and ion capacity can leave critical analytes undetected in complex food matrices. Ultimately, in complex food matrices, a Q-TOF will generate the most reliable and robust data for model building and subsequent authenticity screening.
Quality Control
External and internal standards should be used to monitor instrument performance and to help troubleshoot any acquisition issues that arise. These standards are not intended for peak-area correction, but rather for monitoring peak area and retention time reproducibility. During method development, mass accuracy, area counts, and retention time should be tracked and shown to be stable. Incoming data that do not meet quality standards may need to be discarded. If reliable quality characteristics are not achieved at first, reevaluate sample preparation, acquisition parameters, or instrument maintenance until data acquisition is stable.
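One simple way to automate this kind of monitoring is to check each injection's internal standards against tolerance windows. The sketch below is a hypothetical illustration: the standard name, retention time, peak area, and tolerance values are invented for the example, not recommended limits.

```python
# Sketch: flag retention time and peak area drift for internal
# standards. All names and tolerances here are illustrative.
EXPECTED = {
    # standard: (expected RT in min, RT tolerance in min, nominal peak area)
    "caffeine-d9": (4.21, 0.10, 1.2e6),
}
AREA_TOLERANCE = 0.20  # flag if peak area drifts more than +/-20%

def check_injection(standard, rt_observed, area_observed):
    """Return a list of QC problems for one internal standard."""
    rt_expected, rt_tol, area_nominal = EXPECTED[standard]
    problems = []
    if abs(rt_observed - rt_expected) > rt_tol:
        problems.append(
            f"RT drift: {rt_observed:.2f} vs {rt_expected:.2f} min")
    if abs(area_observed - area_nominal) / area_nominal > AREA_TOLERANCE:
        problems.append(
            f"Area drift: {area_observed:.3g} vs {area_nominal:.3g}")
    return problems

# Example: an injection whose internal standard eluted late and low.
for issue in check_injection("caffeine-d9", 4.38, 0.9e6):
    print("QC flag:", issue)
```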