About TrustTrace
TrustTrace: Combating AI misconduct, with a focus on emerging tech risks such as deepfakes and fake WhatsApp calls. A multidisciplinary team ensures responsible and ethical AI deployment.
Built By
Shah and Anchor Kutchhi Engineering College, Cyber Security Department
Powered By
CyberPeace Foundation
How It Works
A diverse dataset comprising authentic and deepfake videos was collected to train and evaluate the models.
Video frames were extracted, and data augmentation techniques were applied to enhance model generalization.
Convolutional Neural Networks were used to capture spatial dependencies within frames.
The ResNeXt architecture was implemented to leverage its aggregated parallel transformation paths for improved feature extraction.
EfficientNet was chosen for its efficiency in balancing model size and performance.
The models were trained on the prepared dataset, optimizing for accuracy and minimizing false positives.
In addition, an audio deepfake detection model takes raw audio files and analyzes them by converting them into spectrograms.
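The data augmentation step above can be illustrated with a minimal sketch. This is not TrustTrace's actual pipeline code; the function name and the specific augmentations (horizontal flip and brightness jitter) are illustrative assumptions, applied here to a dummy frame array standing in for an extracted video frame.

```python
import numpy as np

def augment_frame(frame: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply simple augmentations: random horizontal flip and brightness jitter."""
    out = frame.astype(np.float32)
    if rng.random() < 0.5:
        out = out[:, ::-1, :]        # flip left-right
    out *= rng.uniform(0.8, 1.2)     # scale brightness by a random factor
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
frame = np.full((64, 64, 3), 128, dtype=np.uint8)  # stand-in for an extracted frame
aug = augment_frame(frame, rng)
print(aug.shape, aug.dtype)
```

In practice, transformations like these expose the model to pose and lighting variations it would otherwise only see in a much larger dataset, which is what improves generalization.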
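The audio step can be sketched similarly: a spectrogram is a short-time Fourier transform of the waveform, giving a time-frequency image a model can classify. This is a hand-rolled illustrative sketch (window size, hop length, and the Hann window are assumptions, not TrustTrace's configuration), demonstrated on a synthetic 440 Hz tone.

```python
import numpy as np

def spectrogram(audio: np.ndarray, n_fft: int = 256, hop: int = 128) -> np.ndarray:
    """Magnitude spectrogram: windowed frames -> FFT -> absolute value."""
    window = np.hanning(n_fft)
    frames = [audio[i:i + n_fft] * window
              for i in range(0, len(audio) - n_fft + 1, hop)]
    stft = np.fft.rfft(np.stack(frames), axis=1)
    return np.abs(stft).T  # shape: (n_fft // 2 + 1, n_frames)

sr = 8000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t)  # one second of a 440 Hz tone
spec = spectrogram(audio)
print(spec.shape)
```

The resulting 2-D array can then be fed to an image-style CNN, which is a common way to reuse visual architectures for audio deepfake detection.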
Sponsored By
Testimonials
How industry leaders and users feel about TrustTrace
Who is TrustTrace?
TrustTrace is a closed-source research project leading innovation in detecting audio-visual deepfakes from a variety of generators with high accuracy.
We're a funded project investing internally and externally in research and development of models capable of detecting realistic deepfakes with high consistency, and in making these models available to the general public for safety and general use.
Trusted Deepfake Detector
By combining multiple sources of data with diverse, more adaptable models, our solution accounts for both new and old deepfake generators.
Domain Leading Innovator
We promote and fund innovation towards high efficiency and high consistency machine learning models for deepfake detection.
F.A.Q
-
What is the TrustTrace Project?
The TrustTrace Project is an initiative dedicated to tracing and eradicating AI misconduct. Our focus is on investigating AI-driven misconduct, with a particular emphasis on addressing threats related to deepfake technology, fake WhatsApp calls, and other emerging challenges in the AI landscape.
-
What is AI-driven misconduct, and why is it a concern?
AI-driven misconduct refers to the malicious use of artificial intelligence technologies, leading to harmful outcomes. This can include the creation and dissemination of deepfake content, fake WhatsApp calls, and other deceptive practices. The concern lies in the potential for privacy breaches, misinformation, and other negative consequences affecting individuals and society.
-
What if a video is identified as a deepfake?
If the uploaded video is identified as a deepfake, you can take action yourself by reporting the case to the government website mentioned below:
-
How does the TrustTrace Project approach the investigation of AI-driven misconduct?
TrustTrace Project employs a comprehensive approach involving continuous identification of threats, in-depth research and analysis, technology development, collaboration with partners, rigorous testing, education and outreach initiatives, and continuous monitoring and adaptation to address the dynamic nature of AI threats.
-
What specific technologies does the TrustTrace Project develop to combat AI misconduct?
TrustTrace Project actively develops cutting-edge technologies, including advanced deepfake detection algorithms and protective measures against fake WhatsApp calls. Our goal is to create robust tools that can effectively detect and mitigate the impact of AI-driven misconduct.
-
Who collaborates with TrustTrace Project, and how can organizations get involved?
TrustTrace Project collaborates with industry leaders, government agencies, and advocacy groups. If your organization is interested in collaborating with the TrustTrace Project, please contact our collaboration team at collaborate@trusttraceproject.com.
-
How can individuals stay informed about TrustTrace Project's initiatives?
Individuals can stay informed about TrustTrace Project by subscribing to our newsletter and following us on social media platforms such as Twitter, LinkedIn, and Facebook. Regular updates, research findings, and news related to AI ethics and misconduct will be shared through these channels.
Advisory Board
A panel of industry and domain experts serving in the best interests of the people.
Dr. Bhavesh Patel
Principal, SAKEC
Major Vineet Kumar
CEO, CyberPeace Foundation
The Team
The developers and machine learning engineers behind TrustTrace and its reliability.
Dr. Nilakshi Jain
Principal Investigator
Dr. Shwetambari Borade
Co-Principal Investigator
Mustansir Sazid Godhrawala
Senior Research Fellow
Yash Nagare
Junior Research Fellow
Shubham Kolaskar
Junior Research Fellow
Pratham Shah
Junior Research Fellow
Jayan Shah
Junior Research Fellow
Newsletter
Learn about the latest changes in the deepfake detection and generation domain by subscribing to our newsletter.
Contact Us
TrustTrace Project Headquarters