Biodata processing instead of labeling

Users are not yet able to protect themselves from deepfakes and other falsifications generated by neural networks: that was the conclusion reached by most participants in the plenary session «Shield and Sword of the Digital World». «Offensive AI technologies are superior to defensive ones, since defense, as a rule, relies on people», explained Luka Safonov, an information security expert and Technical Director of «Weblock». «AI can generate so many scenarios that a person is simply unable to keep track of them all», he believes.
Stanislav Kazarin, Vice-Governor of St. Petersburg, noted that to counter fakes the state, among other measures, encourages users to switch to Russian platforms, where the process «can somehow be controlled». On foreign platforms, he said, users are not protected from fakes produced, for example, by Western intelligence agencies.
During the discussion, the participants weighed the idea of labeling content produced with AI. In particular, Luka Safonov argued that any such markers would come to inspire trust among users, yet the markers themselves can be forged, so a fake label would only lend false content credibility. Igor Bederov, Head of the Information and Analytical Research Department at «T.Hunter», also opposed labeling. Instead, he suggested requiring users’ consent to the processing of their biodata, such as the voice and other unique human features that make it possible to fake a person’s likeness.