Facebook To Help Wikipedia Verify Article Sources Using Artificial Intelligence

Meta has built a new AI tool for the free multilingual online encyclopedia Wikipedia that will concurrently scan thousands of citations to help assess and validate the material they contain.

The Demand For Citations

Wikipedia relies on a database of more than 4 million citations. Users request citations as proof of the claims an article makes. For instance, a Wikipedia article states that President Obama visited Europe before going to Kenya to meet his paternal relatives for the first time. Citations and hyperlinks are required to demonstrate that such material is accurate and comes from a reliable source.

Although hyperlinks don’t often provide full explanations, they are still useful for bolstering the content. The issue is that hyperlinks frequently point to irrelevant pages that lack the important information, leaving readers to either give up on the topic or abandon it for another.


Meta Begins Developing An AI Tool

Wikipedia has a page dedicated to Joe Hipp, the first Native American heavyweight boxer to challenge for a WBA title. The source cited for this claim, however, mentioned neither Joe Hipp nor boxing, let alone his being the first Native American boxer to challenge for the WBA title.

The Joe Hipp example shows that Wikipedia can lead people to accept a claim even when its citation is flawed, and misinformation could spread across the world this way. Because of this, Facebook's parent company Meta put Meta AI, the social media giant's research and development lab, to work on the problem in collaboration with the Wikimedia Foundation. They say the result is the first machine learning model to automatically scan a large number of citations simultaneously.

This will save time, since examining each citation by hand would take far too long.
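The scanning step can be pictured as a batch check over (claim, cited source) pairs. The sketch below is purely illustrative: simple word overlap stands in for Meta's learned verification model, and the threshold and example data are invented.

```python
def support_score(claim: str, source_text: str) -> float:
    """Jaccard word overlap: a crude stand-in for a learned relevance model."""
    c = set(claim.lower().split())
    s = set(source_text.lower().split())
    return len(c & s) / len(c | s) if c | s else 0.0

def flag_unsupported(citations, threshold=0.2):
    """Scan every (claim, source) pair and return the claims whose
    cited source does not appear to support them."""
    return [claim for claim, source in citations
            if support_score(claim, source) < threshold]

citations = [
    ("Joe Hipp was the first Native American to challenge for the WBA heavyweight title",
     "The tribe is known for its rich cultural traditions and annual gatherings."),
    ("Obama visited Europe before traveling to Kenya",
     "Obama visited Europe and then traveled to Kenya to meet relatives."),
]
print(flag_unsupported(citations))  # the Joe Hipp claim is flagged
```

A real system would run a check like this across millions of citations in parallel, which is exactly the hand-examination work the model is meant to eliminate.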

Meta AI Efforts

Fabio Petroni, research tech lead manager of the Meta AI team, told Digital Trends:

I think we were driven by curiosity at the end of the day. We wanted to see what was the limit of this technology. We were absolutely not sure if this AI could do anything meaningful in this context. No one had ever tried to do something similar before.

He further clarified how this tool will work:

With these models, what we have done is to build an index of all these web pages by chunking them into passages and providing an accurate representation for each passage that is not representing word-by-word the passage, but the meaning of the passage. That means that two chunks of text with similar meanings will be represented in a very close position in the resulting n-dimensional space where all these passages are stored.
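Petroni's description (chunk pages into passages, embed each passage by its meaning, and store similar passages close together in an n-dimensional space) can be sketched roughly as follows. Everything here is a toy stand-in: a hashed bag-of-words vector replaces the learned dense encoder Meta actually uses, and the passages and query are invented.

```python
import hashlib
import math
import re
from collections import Counter

DIM = 64  # dimensionality of the toy embedding space

def _slot(word: str) -> int:
    # Deterministic hash so the same word always lands in the same dimension.
    return int(hashlib.md5(word.encode()).hexdigest(), 16) % DIM

def embed(passage: str) -> list[float]:
    """Map a passage to a unit vector by hashing its words.
    (A crude stand-in for a learned dense encoder.)"""
    vec = [0.0] * DIM
    for word, count in Counter(re.findall(r"[a-z0-9]+", passage.lower())).items():
        vec[_slot(word)] += count
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# Build the index: one vector per passage, so passages with overlapping
# vocabulary (our proxy for "similar meaning") end up close together.
passages = [
    "Joe Hipp was the first Native American boxer to challenge for the WBA title.",
    "Barack Obama visited Europe before traveling to Kenya.",
    "Wikipedia maintains a database of millions of citations.",
]
index = [(p, embed(p)) for p in passages]

def nearest(query: str) -> str:
    """Return the indexed passage closest to the query in embedding space."""
    qv = embed(query)
    return max(index, key=lambda item: cosine(qv, item[1]))[0]
```

A query such as `nearest("first Native American to challenge for the WBA boxing title")` retrieves the Joe Hipp passage, since its vector lies closest to the query's in the 64-dimensional space. A production index of this kind would hold millions of vectors and use approximate nearest-neighbor search rather than the linear scan shown here.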

According to Petroni, the team is still working to bring the tool to a usable state:

What we have built is a proof of concept. It’s not really usable at the moment. In order for this to be usable, you need to have a fresh index that indexes much more data than what we currently have. It needs to update constantly, with new information coming every day.

The team also hopes the tool will eventually support multimedia in addition to text, which would make it useful on platforms that handle photos and videos.