How AI can get your novel food approved faster

Pulling together a novel food dossier for assessment is a time-consuming process. But AI can make it faster. (Image: Getty/LightFieldStudios)

AI helps build your novel foods dossier faster, so you can submit to EFSA sooner


Quick bites

  • AI slashes R&D time for novel foods
  • Regulators want transparency, not mystery
  • LLMs decode proteins and predict digestion
  • Humans still call the final shots

In food, artificial intelligence is improving efficiencies across the board. It can help gauge eating trends, power R&D for new product development, and even communicate with consumers.

But what if your product isn’t yet on the market, because it’s a novel food? Can AI help propel those businesses towards commercialisation?

AI powers formulation and product development

Yes, it can. When working with a novel ingredient, it can be time-intensive and expensive to test nutrition, texture, shelf life, and sensory qualities. That’s where AI steps in.

More and more companies are using AI to simulate and predict how ingredient combinations will perform across these factors, saving both time and resources. Something that previously could have taken weeks can now take minutes. “This can significantly reduce lab work and accelerate R&D timelines,” explains regulatory consultant Stephen O’Rourke.
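To make that concrete, here is a minimal, purely illustrative sketch of the kind of in-silico formulation screen described above. The target profile, nutrient grid, and scoring function are all invented for demonstration; in practice the scoring function would be a trained, validated predictor rather than this toy stand-in.

```python
from itertools import product

def predicted_quality(protein_pct, fat_pct, fibre_pct):
    """Toy surrogate model: score a blend by its distance from a
    hypothetical target profile (higher is better, 0 is a perfect match).
    A real screen would use a model trained on lab measurements."""
    target = {"protein": 20, "fat": 10, "fibre": 6}
    return -(abs(protein_pct - target["protein"])
             + abs(fat_pct - target["fat"])
             + abs(fibre_pct - target["fibre"]))

def screen_formulations(grid):
    """Rank every candidate blend by the surrogate model's score,
    instead of running a lab trial for each one."""
    scored = [((p, f, fb), predicted_quality(p, f, fb))
              for p, f, fb in grid]
    return sorted(scored, key=lambda item: item[1], reverse=True)

# Hypothetical candidate grid: protein, fat and fibre percentages
candidates = product(range(15, 26, 5), range(5, 16, 5), range(2, 9, 2))
best_blend, best_score = screen_formulations(candidates)[0]
```

The point of the sketch is the shape of the workflow, not the model: thousands of candidate blends can be scored in milliseconds, and only the top-ranked few go on to physical lab testing.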

This is what lawyer Katia Merten-Lentz, founder and partner of Food Law Science and Partners, is seeing, too. AI can not only optimise formulations for taste and texture, but also improve production processes, she explains. The technology can even predict consumer preferences.

Once a novel food is ready for commercialisation, it must obtain pre-market authorisation from a food safety body, such as the European Food Safety Authority (EFSA). Any such agency will need to see that AI models were validated against observed data.

Regulators also require “full transparency” on what data went into the AI model, how it was trained, and how predictions were verified, explains O’Rourke. If an AI model’s decision-making processes are opaque, the dossier’s likely to be rejected.

That’s not to say that food safety bodies like EFSA are anti-AI. Far from it, says O’Rourke: “There’s sometimes a perception that EFSA discourages AI use, but in reality, they’re open to it – as long as data sources are transparent and models are properly justified."

What your novel food does – and how the body handles it

When submitting a novel food dossier, businesses need to demonstrate they understand the functionality of the product.

If it’s a protein, which many recent novel food submissions have been – whether in cultivated meat or precision fermentation-derived dairy – large language models (LLMs) can take “heavy, complex data”, analyse amino acid profiles, and reveal functional properties of the new protein, explains O’Rourke.



If fed with enough high-quality data on specific proteins, LLMs can also synthesise information to predict how consumers might digest the novel food – an especially critical factor when introducing unfamiliar ingredients.
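As a rough illustration of the sort of amino acid profiling mentioned above (not a tool cited in the article), the sketch below summarises a protein sequence's composition. The sequence fragment and the simple essential-residue tally are hypothetical; real dossier work relies on validated analytical methods and experimental data.

```python
from collections import Counter

# One-letter codes for the nine essential amino acids
ESSENTIAL = set("HILKMFTWV")

def amino_acid_profile(sequence):
    """Return the fraction of each residue in a protein sequence."""
    counts = Counter(sequence.upper())
    total = sum(counts.values())
    return {aa: n / total for aa, n in counts.items()}

def essential_fraction(sequence):
    """Fraction of residues that are essential amino acids - a crude
    first-pass nutritional indicator, not a regulatory measure."""
    profile = amino_acid_profile(sequence)
    return sum(frac for aa, frac in profile.items() if aa in ESSENTIAL)

seq = "MKTFFVLLLA"  # hypothetical 10-residue fragment for demonstration
```

Summaries like this are the easy first step; the article's point is that LLM pipelines can pre-digest much larger and messier bodies of protein data for human analysts to interpret.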

Lawyer Merten-Lentz also sees opportunity for AI optimisation when working with alternative proteins – particularly when it comes to protein selection. “AI can analyse all proteins faster than anybody else,” she explains. The technology can help makers of plant-based meat or dairy analogues find the exact protein they’re looking for: one that adds to the nutrition, colour, flavour, or even texture of such an alternative. “This process is often combined with precision fermentation, which brings a wide perspective of tailor-made products,” says Merten-Lentz.

Another emerging area is AI-generated exposure assessments. Some companies are already using AI tools to simulate dietary intake scenarios across different populations, helping to speed up safety evaluations in regulatory dossiers. However, regulators still expect full traceability on the data sources, assumptions, and methodology behind these models – transparency remains critical, says O’Rourke.
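To show the shape of such an exposure assessment (again, an illustrative sketch, not a regulatory tool), the example below runs a small Monte Carlo simulation of daily exposure for assumed population groups. The intake distributions, concentration, and group names are all invented; a real dossier would have to document and justify every such assumption, as O’Rourke notes.

```python
import random

def simulate_exposure(mean_intake_g, sd_g, conc_mg_per_g,
                      n=10_000, seed=42):
    """Estimate mean and 95th-percentile daily exposure (mg) by
    sampling daily intake from an assumed normal distribution,
    truncated at zero."""
    rng = random.Random(seed)
    exposures = sorted(
        max(0.0, rng.gauss(mean_intake_g, sd_g)) * conc_mg_per_g
        for _ in range(n)
    )
    mean = sum(exposures) / n
    p95 = exposures[int(0.95 * n)]
    return mean, p95

# Assumed population groups: (mean intake g/day, standard deviation)
groups = {"adults": (30, 10), "adolescents": (20, 8)}
results = {name: simulate_exposure(m, s, conc_mg_per_g=2.5)
           for name, (m, s) in groups.items()}
```

Because the simulation is seeded and every input is explicit, the run is fully reproducible – exactly the kind of traceability regulators expect from model-based exposure estimates.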

“LLMs don’t replace lab experts, but they can speed up early data synthesis – freeing up time for new projects,” says O’Rourke.

With AI in the mix, do humans still matter in novel foods applications?

AI’s true value emerges when humans acknowledge their own limits in processing vast amounts of information.

Humans are still very much needed, but in a different way, explains O’Rourke. “It’s not about replacing people in the lab, because they’re always needed. AI complements their work.”

Both laboratory and regulatory experts are required to do a final check on findings produced by AI. That’s the final step, we’re told. Analytical minds are the only ones capable of working out whether AI-generated data is usable, or whether hallucinations have crept in. Humans need to “refine” the output, says O’Rourke, and ultimately, make sure it’s up to scratch.

AI warning: Protect sensitive data

When working with AI and LLMs, safeguarding sensitive or proprietary information is critical. “Sensitive data should never be uploaded to cloud-based LLMs,” warns regulatory consultant Stephen O’Rourke.


If using AI to process confidential information is unavoidable, companies should opt for models hosted on secure, internal systems. This approach helps ensure proprietary data remains protected. Still, O’Rourke emphasises that even then, such use should be limited to select confidential projects.


“Confidentiality, data integrity, and compliance remain paramount when AI tools are used,” he adds.

Of course, the verification stage is still time-consuming – it could take half a day. But that person has just saved an entire week’s worth of time analysing data that AI can process in a couple of days. “The human is then freed up to start a new project,” explains O’Rourke.

Merten-Lentz is very much in agreement, stressing that AI is not accurate enough to replace regulatory and scientific specialists. “While AI can be used to accelerate food innovation, never forget that strategy should still be decided by humans. In the end, products are chosen ‘emotionally’ by consumers, which AI – at least for the time being – cannot understand.”