
The cybersecurity landscape in 2025 will see the emergence of new threats in the cryptocurrency and AI sectors

The cybersecurity landscape in 2024 was marked by supply chain vulnerabilities and the misuse of generative artificial intelligence (AI) in scams. While these threats will persist into 2025, new challenges are emerging, including an increase in cryptocurrency theft, the use of generative AI for disinformation and difficulties for regulatory agencies — especially in the United States — in establishing cybersecurity rules amid a shifting political and legal environment.

What next

As cryptocurrency prices rise, cybercriminals and some nation-states will actively target wallets and blockchains using stolen credentials and software exploits. Generative AI will play an increasingly prominent role in information operations and influence campaigns worldwide, enabling threat actors to scale cyber operations more quickly and effectively. These developments will coincide with changes in political administrations and legal rulings that complicate the development of new cybersecurity rules and requirements.


Analysis

As 2025 approaches, several trends in technology and cybersecurity offer clues to the direction of the coming year’s threat landscape and regulatory priorities.

Cryptocurrency

One trend gathering momentum is the theft of cryptocurrency, both directly from users’ wallets and via extortion schemes such as ransomware.

Cryptocurrency is often criminals’ preferred mechanism for stealing funds because it is difficult to trace and easy to move across international borders. The growing value of many popular cryptocurrencies has also driven a surge in heists that aim to hijack users’ wallets and transfer the funds stored in them to other accounts (see INTERNATIONAL: Crypto traders celebrate Trump victory – November 6, 2024). Attackers typically steal user credentials or wallet passwords, often through phishing or malware, or exploit software vulnerabilities in wallet management tools or in the systems cryptocurrency exchanges use to facilitate transactions.

In many instances, victims cannot recoup stolen funds because cryptocurrency accounts are not protected in the same manner as traditional bank accounts and therefore carry no regulated fraud protections. Moreover, victims can struggle to prove that funds were transferred against their will or without their knowledge, not least because it is often impossible to establish definitively who owns the wallets receiving them.

Policing these incidents and holding perpetrators accountable also remains a challenge for many law enforcement agencies. With limited technical expertise and the difficulty of tracking cryptocurrency transactions across borders, agencies often struggle to investigate and prosecute cybercriminals effectively without international cooperation (see GLOBAL: Cryptocurrency ownership – June 7, 2024).

Since some countries have chosen not to regulate the cryptocurrency industry or to scrutinise its transactions closely, cybercriminals can launder stolen funds through these safe havens, making it even harder for investigators to follow a digital forensic trail of their activities.


Given this limited regulatory landscape, cryptocurrency wallet hijacks are growing in popularity even as other cryptocurrency-enabled forms of cybercrime, such as ransomware, appear to be holding steady or declining.

Generative AI

In addition to rising rates of crypto wallet hijacking, 2025 will see increased use of generative AI, not only for scams and fraud but also for more targeted and effective influence operations (see INT: 2024 polls will reveal disinformation risks of AI – January 16, 2024).

In mid-2024, Microsoft reported that China was using generative AI tools to spread disinformation across social media platforms in many countries:

  • In the United States, AI-driven influence campaigns have been used to propagate misinformation, including conspiracy theories about government involvement in causing natural disasters, such as the wildfires in Hawaii.
  • In Japan, disinformation campaigns centred on false narratives about how the country managed nuclear waste after the Fukushima disaster.
  • In Taiwan, Chinese influence operations focused on deceiving viewers into believing that public figures had endorsed certain political candidates during the country’s elections.

In all of these cases, generative AI played a significant role in creating convincing content, such as fake audio recordings of political endorsements, news segments anchored by AI-generated television presenters, and memes promoting particular political messages and conspiracy theories.

New players

Notably, China was not the only state making use of AI to power its information operations in 2024. Russia and Iran were also accused by US officials of creating deepfakes to mislead viewers in the United States and other countries and interfere with the 2024 US elections (see INT: GenAI will test major 2024 national elections – December 14, 2023).


As generative AI tools become more powerful and better able to replicate human work convincingly, their application to influence operations is likely to become more widespread.

This will challenge organisations and regulators seeking to combat misinformation and signpost authentic information, whether by labelling or banning AI-generated content or by introducing tactics such as watermarks and warning labels that help viewers identify content created by generative AI tools.
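To illustrate the watermarking idea, the sketch below implements a deliberately simplified, hypothetical scheme rather than any deployed standard: an invisible "AI" label is appended to generated text as zero-width Unicode characters, which a checker can later detect. The function names and the label are illustrative assumptions.

    # Hypothetical, simplified text watermark: encode a label as invisible
    # zero-width Unicode characters appended to AI-generated text.
    ZWNJ = "\u200c"  # zero-width non-joiner, used here as bit 0
    ZWJ = "\u200d"   # zero-width joiner, used here as bit 1
    LABEL = "AI"     # illustrative label to embed

    def embed_watermark(text: str) -> str:
        """Append LABEL to the text as an invisible bit sequence."""
        bits = "".join(f"{ord(ch):08b}" for ch in LABEL)
        return text + "".join(ZWJ if b == "1" else ZWNJ for b in bits)

    def is_watermarked(text: str) -> bool:
        """Collect any zero-width bits in the text and decode them."""
        bits = "".join("1" if ch == ZWJ else "0"
                       for ch in text if ch in (ZWNJ, ZWJ))
        if not bits or len(bits) % 8:
            return False
        decoded = "".join(chr(int(bits[i:i + 8], 2))
                          for i in range(0, len(bits), 8))
        return decoded == LABEL

    tagged = embed_watermark("This statement was drafted by a language model.")
    print(is_watermarked(tagged))                          # True
    print(is_watermarked("Ordinary human-written text."))  # False

A marker this trivial is easily stripped by copy-editing; real proposals instead bias the model's token sampling statistically or attach cryptographically signed provenance metadata (as in the C2PA standard for media) so that labels survive ordinary handling.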

Regulatory landscape

The regulatory landscape in cyberspace will remain challenging to navigate, particularly in the United States, where Donald Trump is set to begin his second term. Washington’s shift will follow the Supreme Court’s 2024 ruling overturning the Chevron doctrine, under which courts had deferred to regulatory agencies’ reasonable interpretations of ambiguous statutes, a deference that underpinned those agencies’ authority to establish specialised rules.

Both of these shifts will lead to less regulation governing cybersecurity in the private sector, offering companies and critical infrastructure operators more leeway to determine their own security postures.

Less regulation governing cybersecurity in the private sector is expected

This could result in organisations increasingly turning to insurers to determine necessary steps for securing networks and data, rather than relying on regulatory bodies for guidance. Additionally, if cybersecurity measures are not mandated by sector-specific regulations, this might lead to a reduction in overall investment in cybersecurity (see INT: Evolution in cybercrime needs security upgrades – May 13, 2024).

At the same time, this rollback of cybersecurity regulations will be accompanied by an effort to harmonise cybersecurity rules across the US federal government through initiatives such as the proposed Streamlining Federal Cybersecurity Regulations Act. This parallel push is intended to eliminate or reduce duplicative and overlapping cybersecurity regulations across sectors and to provide a clearer, simpler set of baseline security requirements for critical infrastructure that individual sectors can build upon.

While it is not clear that this structure will be finalised in 2025, it is likely that regulators will make progress towards establishing that baseline and reaching consensus over which security controls organisations must have in place to protect themselves.


Authored by:

Sarah Fowler
Senior Analyst, International Economy
Tatia Bolkvadze
Technology Analyst
