The project brings together algorithms intended to emulate the functions of the human nose. Working with MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Givaudan flavor scientists aim to analyze taste-test results more accurately, in the hope of revealing more reliable indicators of consumer preferences.
Using taste-test data provided by Givaudan, a team of CSAIL researchers led by Una-May O’Reilly used a genetic programming tool to analyze and interpret the data. The method, which uses evolutionary computing, has different mathematical models ‘compete’ with each other to fit the available data; the best-fitting models are then combined to produce models that are more accurate still.
Professor Lee Spector, editor-in-chief of the journal Genetic Programming and Evolvable Machines – where a research paper from the project appears – explained that “people have been playing with these [evolutionary] techniques for decades.”
“One of the reasons that they haven’t made a big splash until recently is that people haven’t really figured out, I think, where they can pay off big.”
Taste preference, Spector said, “is a pretty brilliant area in which to apply the evolutionary methods — and it looks as though they’re working, also, so that’s exciting.”
The design of flavors and aromas in foods and drinks is big business, with leading flavor companies spending tens of millions of dollars every year on research and development – including a lot of consumer testing.
However, making sense of consumer taste-test results can be difficult: taste preferences vary so widely that no clear consensus may emerge, while the nose itself can begin to lose sensitivity after testing around 40 samples.
To avoid this problem, which often leaves flavor houses working with incomplete and sometimes contradictory data, the new project aims to use an evolutionary algorithm to analyze patterns in existing results and accurately predict future data.
The Swiss flavor giant provided CSAIL’s research team with data from 69 subjects who had evaluated 36 different combinations of seven basic flavors – assigning each a score according to its olfactory appeal.
To help interpret the results, O’Reilly and her colleagues used computer programs to randomly generate mathematical functions that predict scores from the concentrations of the different flavors.
They explained that each function is assessed according to two criteria: accuracy and simplicity.
“A function that, for example, predicts a subject’s preferences fairly accurately using a single factor — say, concentration of butter — could prove more useful than one that yields a slightly more accurate prediction but requires a complicated mathematical manipulation of all seven variables,” they explained.
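The two-criteria assessment described above resembles the parsimony pressure commonly used in genetic programming, where a complexity penalty is added to the prediction error so that simpler expressions can outrank marginally more accurate but convoluted ones. A minimal sketch of that idea, with illustrative names and weights (not the authors’ actual scoring):

```python
def fitness(func, samples, complexity, parsimony=0.1):
    """Score a candidate function: lower is better.

    func       -- callable mapping a 7-element concentration vector to a score
    samples    -- list of (concentrations, observed_score) pairs
    complexity -- e.g. node count of the candidate's expression tree
    parsimony  -- illustrative weight penalizing complicated expressions
    """
    # Mean squared prediction error over the taste-test samples
    mse = sum((func(x) - y) ** 2 for x, y in samples) / len(samples)
    # Penalize complexity so a simple single-factor model can beat a
    # slightly more accurate model that manipulates all seven variables
    return mse + parsimony * complexity

# A hypothetical single-factor model: score depends only on the first
# flavor's concentration (say, butter)
simple = lambda x: 2.0 * x[0]
samples = [((1.0, 0, 0, 0, 0, 0, 0), 2.1),
           ((2.0, 0, 0, 0, 0, 0, 0), 3.9)]
print(fitness(simple, samples, complexity=3))
```

With the same prediction error, a candidate with a larger expression tree receives a strictly worse (higher) score, which is how the simplicity criterion enters the competition.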
Once the computer has assessed all of the candidate functions, those that give poor predictions are removed, said the researchers. The program then takes parts of the surviving functions and randomly recombines them to produce “a new generation of functions”.
O’Reilly explained that the whole process is repeated about 30 times, until the algorithms converge on a set of functions that match the preferences of a single subject.
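The assess-discard-recombine loop over roughly 30 generations can be sketched as follows. Genetic programming normally evolves expression trees; to keep the example short, this sketch evolves linear weight vectors over the seven flavor concentrations, with the same select-and-recombine structure. All names and parameters are illustrative, not the authors’ implementation.

```python
import random

random.seed(0)  # make the sketch reproducible

NUM_FLAVORS = 7

def predict(weights, conc):
    """A candidate 'function': a weighted sum of flavor concentrations."""
    return sum(w * c for w, c in zip(weights, conc))

def error(weights, samples):
    """Total squared error of a candidate over the taste-test samples."""
    return sum((predict(weights, x) - y) ** 2 for x, y in samples)

def crossover(a, b):
    """Randomly recombine parts of two surviving candidates."""
    cut = random.randint(1, NUM_FLAVORS - 1)
    return a[:cut] + b[cut:]

def evolve(samples, pop_size=50, generations=30):
    # Start from a population of randomly generated candidates
    pop = [[random.uniform(-1, 1) for _ in range(NUM_FLAVORS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Assess every candidate and discard the poor predictors
        pop.sort(key=lambda w: error(w, samples))
        survivors = pop[:pop_size // 2]
        # Breed "a new generation of functions" from parts of the survivors
        children = [crossover(random.choice(survivors),
                              random.choice(survivors))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=lambda w: error(w, samples))

# Toy data for one hypothetical subject whose preference depends
# only on the first flavor's concentration
samples = [([i / 10.0] + [0.0] * 6, 0.8 * i / 10.0) for i in range(1, 11)]
best = evolve(samples)
print(error(best, samples))
```

Because the best candidates always survive into the next generation, the error of the leading function can only fall as the loop repeats, which is the convergence O’Reilly describes.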
The team said they have not yet been able to conduct studies to determine whether the new algorithms correctly predict testers’ responses to new flavors – but said a further computer model has suggested such tests will be successful.
Source: Genetic Programming and Evolvable Machines
Published online ahead of print, doi: 10.1007/s10710-011-9153-2
“Knowledge mining sensory evaluation data: genetic programming, statistical techniques, and swarm optimization”
Authors: K. Veeramachaneni, E. Vladislavleva, U-M. O’Reilly