Facebook uses first-person videos to train AI in human routines

The challenge of computer vision is to make a machine see and understand the world the way a human being does. Facebook has several initiatives in this area, but Ego4D is its most recent project built around this premise, created specifically to assemble a vast set of training data for its models.


The idea of Ego4D is to build an AI capable of interpreting more than 2,200 hours of footage recorded by 1,024 participants from universities in nine countries. According to Facebook research scientist Kristen Grauman, this is the largest data collection explicitly created for this purpose: each video was shot from the wearer's point of view, giving the machine the same visual conditions a human being experiences.


The footage captures common everyday experiences, such as social interactions, handling objects with the hands, and tasks like cooking and tidying up the house. The goal is to feed this data into social media algorithms in order to deliver better computer vision experiences from the viewer's perspective.

The developers also defined five benchmark challenges for building smarter, more useful AI assistants:

  • Episodic memory: “what happened and when?” Used to remember things like where you left your keys or a call you still need to return.
  • Prediction: “what should I do next?” Can guide or warn about something that might happen, such as adding too much salt to a recipe or forgetting an umbrella on a cloudy day.
  • Hand and object manipulation: “what am I doing?” The machine must be able to identify an activity and offer guidance: how to play the drums, or a more efficient way to cut vegetables.
  • Daily record: “who said what and when?” The AI should be able to recall what a class was about or what your brother said earlier.
  • Social interaction: “who is interacting with whom?” Recognizing people not only by appearance but also by voice, facial expressions, and other traits.

AI Expansion

Facebook intends to make the dataset available in November to researchers interested in partnering on Ego4D. If the initial tests succeed, the company is likely to extend the experiment beyond academia, to companies that specialize in training machines to understand what human beings do in their daily routines.

AI project can enhance smart glasses features (Image: Disclosure/Facebook)

With the arrival of Ray-Ban glasses equipped with Facebook cameras, this project only tends to grow, since the captured data will need to be interpreted to deliver a useful experience to the wearer. Of course, this is sensitive terrain, as people's privacy is at stake, but companies will need to keep an eye on this technology if they want to stay relevant in the future.

Source: Facebook
