Robotic Chef Learns to Recreate Recipes From Viewing Food Videos

A woman cutting herbs.
Credit: Canva

Scientists have trained a robotic chef to watch cooking videos, learn from them, and recreate the recipes.

The scientists from the University of Cambridge programmed their robot chef with a ‘cookbook’ of eight basic salad recipes. After watching a video of a human demonstrating one of the recipes, the robot chef could identify which recipe was being prepared and make it.

In addition, the videos helped the robot incrementally add to its cookbook; by the end of the experiment, it had added a ninth recipe. The results, reported in the journal IEEE Access, show that video content can be an important and rich data source for automated food production and could allow easier and cheaper deployment of robot chefs.

Robot chefs have featured in science fiction for decades; however, cooking is a challenging problem for a robot. Several commercial companies have built prototype robot chefs, although none is currently commercially available, and they lag well behind their human counterparts in skill.

Human chefs often learn new recipes through observation, whether by watching another person cook or watching a video on YouTube, but programming a robot to make a range of dishes is costly and time-consuming.

Training a robotic chef

According to Grzegorz Sochacki from Cambridge’s Department of Engineering, the paper’s first author, the team wanted to see whether a robot chef could learn in the same incremental way humans do: by identifying the ingredients and how they are combined in the dish.

Sochacki, a Ph.D. candidate in Professor Fumiya Iida’s Bio-Inspired Robotics Laboratory, and his colleagues devised eight simple salad recipes and recorded themselves making them. They then used a publicly available neural network to train their robotic chef. The neural network had already been trained to identify a range of objects, including the fruits and vegetables used in the eight salad recipes (broccoli, carrot, apple, banana, and orange).

Using computer vision techniques, the robot analyzed each frame of video. It could recognize different objects and features, such as a knife and the ingredients, as well as the human demonstrator’s arms, hands, and face. Both the recipes and the videos were converted into vectors, and the robot performed mathematical operations on the vectors to determine the similarity between a demonstration and a recipe.
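The paper's exact vector encoding is not described here, but the idea can be sketched as follows: represent each recipe as a vector of observed ingredient and action counts, then compare a demonstration against every cookbook entry with cosine similarity. All names and feature choices below are illustrative assumptions, not the authors' implementation.

```python
import math

# Hypothetical fixed feature order: the fruits and vegetables from the
# study, plus one example action. The real system derives features from
# a neural network's detections; this is only a minimal sketch.
FEATURES = ["broccoli", "carrot", "apple", "banana", "orange", "chop"]

def to_vector(counts):
    """Map {feature: count} observations onto the fixed feature order."""
    return [counts.get(f, 0) for f in FEATURES]

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.hypot(*a) * math.hypot(*b)
    return dot / norm if norm else 0.0

def closest_recipe(demo, cookbook):
    """Return the cookbook entry most similar to the demonstration."""
    demo_vec = to_vector(demo)
    return max(cookbook,
               key=lambda name: cosine_similarity(demo_vec,
                                                  to_vector(cookbook[name])))

cookbook = {
    "carrot-apple salad": {"carrot": 2, "apple": 2, "chop": 4},
    "fruit salad": {"banana": 1, "orange": 1, "apple": 1, "chop": 3},
}

# A double portion scales the vector but keeps its direction, so it
# still matches the same recipe rather than counting as a new one.
demo = {"carrot": 4, "apple": 4, "chop": 8}
print(closest_recipe(demo, cookbook))  # carrot-apple salad
```

Because cosine similarity ignores vector magnitude, a doubled portion of a recipe produces the same match, which mirrors the robot's ability to treat portion changes as variations rather than new recipes.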

By correctly identifying the ingredients and the actions of the human chef, the robot could determine which recipe was being prepared. For example, if the human demonstrator held a knife in one hand and a carrot in the other, the robot could infer that the carrot was about to be chopped.

Machine learning at work

Of the 16 videos it watched, the robotic chef recognized the correct recipe 93% of the time, even though it detected only 83% of the human chef’s actions. The robot was also able to recognize that small variations in a recipe, such as a double portion or normal human error, were variations rather than new recipes. It likewise correctly recognized the demonstration of a new, ninth salad, added it to its cookbook, and made it.

According to Sochacki, the robot can detect an impressive amount of nuance. The recipes are simple, just chopped fruits and vegetables, but the robot was remarkably good at recognizing, for example, that two chopped apples and two chopped carrots are the same recipe as three chopped apples and three chopped carrots.

The videos used to train the robotic chef are unlike the food videos made by some social media influencers, which are full of fast cuts and visual effects and move rapidly back and forth between the person preparing the food and the food itself. For instance, the robot would struggle to identify a carrot if the human demonstrator’s hand was wrapped around it; for the robot to recognize the carrot, the demonstrator had to hold it up so the robot could see the whole vegetable.

Race against time

According to Sochacki, their robot isn’t suited to the sorts of food videos that go viral on social media, which are too difficult to follow. However, as robot chefs get better and faster at identifying ingredients in food videos, they might be able to use sites like YouTube to learn a whole range of recipes.

The study was supported in part by Beko plc and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI).


Read the original article on Science Daily.

Read more: Scientists Have Developed the First Modular Body, a Living Being That Is Not Alive.