How a new algorithm could protect platform users from trolls and fakes.
European scientists have created a machine learning algorithm that can predict malicious user behavior on the social network X (formerly Twitter). The study, published on July 12 in an IEEE journal, marks significant progress in countering the spread of false information and the manipulation of public opinion online.
A team of scientists led by Rubén Sánchez-Corcuera, a professor of engineering at the University of Deusto in Spain, has developed a model that is 40% more effective than existing approaches. The algorithm is based on the JODIE (Jointly Optimizing Dynamics and Interactions for Embeddings) model, which estimates the probability of different user-interaction scenarios on social networks. The researchers extended it with a recurrent neural network that accounts for a user's past actions and the time intervals between them.
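The core idea of the recurrent extension can be sketched in a few lines: each user's embedding is updated after every interaction as a function of the previous embedding, the interaction's features, and the elapsed time. This is only an illustrative toy, not the authors' implementation; all weights and function names here are hypothetical.

```python
import math

def update_embedding(prev_emb, interaction_feat, delta_t,
                     w_e=0.5, w_x=0.3, w_t=0.1):
    """Hypothetical JODIE-style recurrent update: the new user embedding
    mixes the previous embedding, features of the latest interaction,
    and the (log-compressed) time since the last action. All weights
    are illustrative, not taken from the paper."""
    return [
        math.tanh(w_e * e + w_x * x + w_t * math.log1p(delta_t))
        for e, x in zip(prev_emb, interaction_feat)
    ]

# Toy trajectory: three interactions with growing gaps between them.
emb = [0.0, 0.0]
for feat, dt in [([1.0, -0.5], 1.0), ([0.2, 0.8], 10.0), ([-1.0, 0.3], 120.0)]:
    emb = update_embedding(emb, feat, dt)

print(len(emb))  # embedding dimensionality is preserved across updates
```

Because the time gap enters the update directly, two users with identical interaction histories but different pacing end up with different embeddings, which is what lets the model exploit temporal behavior.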
The model was tested on three real datasets comprising millions of tweets from users in China, Iran and Russia. The first set contained 936 accounts associated with the Chinese government; these accounts sought to stir up political unrest during the 2019 Hong Kong protests. The second set included 1,666 Twitter accounts linked to the Iranian government, which posted biased tweets in support of Iran's diplomatic and strategic interests in 2019. The third set, consisting of accounts of Russian users, contained 1,152 entries.
The test results pleased the developers. For example, after analyzing just 40% of the interactions in the Iranian dataset, the AI accurately identified 75% of the users who later violated the platform's policies. Professor Sánchez-Corcuera emphasizes the algorithm's importance in creating a safer and more constructive online environment. In his view, reducing hatred and manipulation on social networks can reduce social polarization, which would benefit not only digital platforms but also people's overall well-being.
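The reported result is essentially an early-detection recall: the fraction of eventual policy violators the model flags after observing only a prefix of the interaction stream. A minimal sketch of that metric, with entirely made-up user IDs:

```python
def prefix_recall(flagged_users, true_violators):
    """Recall over eventual violators, given the set of users the model
    flagged after seeing only part of the interaction stream.
    (Illustrative metric; the data below is hypothetical.)"""
    hits = flagged_users & true_violators
    return len(hits) / len(true_violators)

# Toy example: 4 eventual violators, 3 of them flagged early.
violators = {"u1", "u2", "u3", "u4"}
flagged = {"u1", "u2", "u3", "u9"}
print(prefix_recall(flagged, violators))  # 0.75
```

Note that recall alone says nothing about false positives (here "u9" is flagged but never violates), which is one reason deploying such a model requires care.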
According to the scientists, their approach is mostly applicable to text-based social networks such as X; platforms focused on multimedia content, such as TikTok or Instagram, will require different methods. The model relies on temporal features, which makes it especially effective when the audience is constantly growing and users from different countries and backgrounds have equal access to the network.
The study also raises important ethical questions about the balance between security and freedom of speech online. Deploying such algorithms will require careful consideration of potential risks and clear rules for their application.