WASHINGTON (AP) — For Washington insiders, seeing and hearing is no longer believing, thanks to a spate of recent incidents involving deepfakes impersonating top officials in President Donald Trump’s administration.

Digital fakes are coming for corporate America, too, as criminal gangs and hackers associated with adversaries including North Korea use synthetic video and audio to impersonate CEOs and low-level job candidates to gain access to critical systems or business secrets.

Thanks to advances in artificial intelligence, creating realistic deepfakes is easier than ever, causing security problems for governments, businesses and private individuals and making trust the most valuable currency of the digital age.

Analysts say responding to the challenge will require laws, better digital literacy and technical solutions that fight AI with more AI.

“As humans, we are remarkably susceptible to deception,” said Vijay Balasubramaniyan, CEO and founder of the tech firm Pindrop Security. Yet he believes solutions to the challenge of deepfakes may be within reach: “We are going to fight back.”

The greater availability and sophistication of these AI tools mean deepfakes are increasingly used for corporate espionage and garden-variety fraud.

“The financial industry is right in the crosshairs,” said Jennifer Ewbank, a former deputy director of the CIA who worked on cybersecurity and digital threats. “Even individuals who know each other have been convinced to transfer vast sums of money.”

In the context of corporate espionage, deepfakes can be used to impersonate CEOs asking employees to hand over passwords or routing numbers.

Deepfakes also can allow scammers to apply for jobs — and even do them — under an assumed or fake identity. For some, this is a way to access sensitive networks, steal secrets or install ransomware. Others simply want the work and may hold several similar jobs at different companies at the same time.

Within three years, as many as 1 in 4 job applications is expected to be fake, according to research from Adaptive Security, a cybersecurity company.

“We’ve entered an era where anyone with a laptop and access to an open-source model can convincingly impersonate a real person,” said Brian Long, Adaptive’s CEO. “It’s no longer about hacking systems — it’s about hacking trust.”

Researchers, public policy analysts and technology companies now are investigating the best ways of addressing the economic, political and social challenges posed by deepfakes.

New regulations could require tech companies to do more to identify, label and potentially remove deepfakes on their platforms. Lawmakers also could impose greater penalties on those who use digital technology to deceive others, if they can be caught.

Greater investments in digital literacy also could boost people’s immunity to online deception by teaching them ways to spot fake media and avoid falling prey to scammers.

The best tool for catching AI-generated fakes may be another AI program, one trained to sniff out the tiny flaws in deepfakes that would go unnoticed by a person.
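At its core, such a detector is just a classifier trained on examples of genuine and synthetic media. The minimal sketch below, built on the open-source librosa and scikit-learn libraries, illustrates the idea for audio: it extracts spectral features from labeled clips and fits a simple model to separate real speech from generated speech. The folder names and data are hypothetical placeholders, and this is an illustration of the general technique, not any vendor's actual detection pipeline.

```python
# Minimal sketch of AI-vs-AI deepfake detection for audio.
# Assumes two folders of labeled WAV clips, "real/" and "fake/"
# -- hypothetical placeholder data, not a real product's pipeline.
import glob
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def features(path):
    # MFCCs summarize the short-term spectrum; synthetic speech often
    # leaves subtle spectral artifacts that even simple features expose.
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)  # average over time -> fixed-size vector

real_paths = glob.glob("real/*.wav")
fake_paths = glob.glob("fake/*.wav")
X = np.array([features(p) for p in real_paths + fake_paths])
y = np.array([0] * len(real_paths) + [1] * len(fake_paths))

# Hold out a test set so the reported accuracy reflects unseen clips.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["real", "fake"]))
```

Production detectors use far richer features and deep neural networks, but the principle is the same: a model learns the statistical fingerprints of generated media that human eyes and ears miss.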