With the use of deepfakes rising at an alarming rate, we’ve been taking on the challenge of this ever-growing threat to privacy and security.
Last year we took part in the industry-wide Deepfake Detection Challenge to develop innovative and practical solutions for detecting fake media. Our innovations were successfully shortlisted from hundreds of ideas and are now informing new deepfake detection techniques for law enforcement.
Our expertise in this area is also growing through the development of a new cyber security software tool, the Digital Authenticity Verification Environment (DAVE), which aims to help organisations counter the impacts of synthetically created media.
Deepfakes explained
Deepfakes are videos, pictures or audio clips made with artificial intelligence and designed to be perceived by the general public as authentic. Advances in generative AI and society’s willingness to trust digital identities mean the use of deepfakes has spread quickly and widely in recent years.
Cyber criminals are actively using AI-generated deepfakes for convincing phishing scams or identity theft operations. As it is now harder than ever to determine the authenticity of media, the world has become vulnerable to identity misrepresentation and convincing disinformation attacks.
Seeing is believing with our deepfake solution
As part of our commitment to doing things that matter and making a difference in society, we’re helping the government and industry detect synthetic media.
Our response to the 2024 Deepfake Detection Challenge was an initial ensemble of detection capabilities. This comprised a trained Explainable AI (xAI) image classifier and a novel technique we call Narrative Variance Analysis. This examines the content of a video in the context of external grounding sources, allowing us to verify information authenticity and determine the likelihood of media being a deepfake.
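To illustrate the ensemble idea described above, here is a minimal sketch of how scores from two component detectors might be combined into a single likelihood of a video being a deepfake. All names, weights and thresholds are illustrative assumptions for this sketch, not the actual design of our tooling.

```python
# Hypothetical sketch: combining two detector scores into one verdict.
# Both inputs are assumed to be in the range 0.0-1.0, higher meaning
# "more likely synthetic". The weights and threshold are placeholders.

from dataclasses import dataclass


@dataclass
class DetectionResult:
    classifier_score: float    # e.g. output of an XAI image classifier
    narrative_variance: float  # e.g. disagreement with external grounding sources


def ensemble_score(result: DetectionResult,
                   classifier_weight: float = 0.6,
                   narrative_weight: float = 0.4) -> float:
    """Weighted combination of component scores into one likelihood value."""
    return (classifier_weight * result.classifier_score
            + narrative_weight * result.narrative_variance)


def is_likely_deepfake(result: DetectionResult, threshold: float = 0.5) -> bool:
    """Flag media whose combined score crosses the decision threshold."""
    return ensemble_score(result) >= threshold


# A clip with strong visual artefacts whose claims also diverge from
# grounding sources scores highly on both components and is flagged.
suspect = DetectionResult(classifier_score=0.9, narrative_variance=0.7)
print(is_likely_deepfake(suspect))  # True for these illustrative inputs
```

In practice an operational system would calibrate the weights and threshold against labelled data rather than fixing them by hand; the sketch only shows how independent signals can be fused into one decision.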
Such is the potential of these capabilities that we’re now developing these techniques further for operational use, working with the Accelerated Capability Environment (ACE), the Home Office and the Department for Science, Innovation and Technology (DSIT).
We’re exploring how tooling can be best deployed in the short and long term for law enforcement, taking into account their current workflows and the work needed for the tools to remain effective as technology evolves.
We’re also expanding our expertise in this area with our new cyber security software tool, DAVE, and contributing technical recommendations to research on deepfake threats during elections. Alongside this, we’ve set up an enterprise advisory service on how synthetic detection capabilities can be deployed across an organisation, taking into account different modalities and use-cases.
Today, deepfakes pose a significant risk to personal, societal and national security. With our innovation, expertise and practical tools in synthetic media and generative AI, we’re helping organisations mitigate the severe implications of this new technology.
Find out more
To find out more, get in touch with the team today.