Microsoft disclosed that Russian online efforts to influence the upcoming U.S. presidential election have picked up in the last 45 days, though at a slower pace than in previous elections.
Researchers at the tech giant revealed that Russia-linked accounts have been spreading divisive content targeting American audiences, particularly criticizing U.S. support for Ukraine in its conflict with Russia.
The findings were disclosed in a statement released on Wednesday by Clint Watts, General Manager of Microsoft's Threat Analysis Center.
The Kremlin, however, has said it will not interfere in the November U.S. election and has denied allegations that it orchestrated campaigns to sway the 2016 and 2020 presidential elections.
While the observed Russian activity is less intense than in previous election cycles, Microsoft researchers warned that it could escalate in the coming months.
"Messaging regarding Ukraine - via traditional media and social media - picked up steam over the last two months with a mix of covert and overt campaigns from at least 70 Russia-affiliated activity sets we track," Microsoft stated.
The most notable among these campaigns is linked to Russia's Presidential Administration, while another aims to spread disinformation online across multiple languages.
The disinformation typically starts with content posted by an apparent whistleblower or citizen journalist on a video channel, which is then disseminated through a network of websites, including DC Weekly, Miami Chronicle, and The Intel Drop.
Microsoft highlighted a "notable uptick" in hacking activities by a Russian group dubbed Star Blizzard, or Cold River, targeting Western think tanks.
The group's current focus on U.S. political figures and policy circles suggests a potential escalation leading up to the November election.
American political observers have raised concerns about the malicious use of artificial intelligence (AI) by foreign adversaries.
However, Microsoft noted that simpler digital forgeries remain more prevalent than deepfakes, and that audio manipulations have had a greater impact than video manipulations.
"Rarely have nation-states' employments of generative AI-enabled content achieved much reach across social media, and in only a few cases have we seen any genuine audience deception from such content," the researchers concluded.