USC’s cutting-edge biometrics research receives a government technology-transfer extension

Cyberattacks are becoming increasingly sophisticated as institutions of all kinds turn to biometrics to verify user identities. Researchers at the USC Information Sciences Institute are on the front lines, developing systems to protect against security breaches and hacking attempts.

Illustration by Rami Al-Zayat on Unsplash.

In an age when we rely on biometric authentication such as fingerprint and iris identification to perform day-to-day tasks, theft of biometric data can put anyone at significant risk. From realistic-looking masks that hijack facial recognition systems to mimicked fingerprint and iris patterns, these spoofing attacks take many forms and have become increasingly prevalent in our digital world.
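To make the threat concrete, here is a minimal, hypothetical sketch in Python of how a biometric system might check a captured sample for liveness before accepting it. The function, thresholds, and score names are illustrative assumptions, not the USC team's implementation.

```python
# Hypothetical sketch of a presentation-attack (spoofing) check in a
# biometric authentication pipeline. Thresholds and names are illustrative.
from dataclasses import dataclass


@dataclass
class AuthResult:
    accepted: bool
    reason: str


def authenticate(match_score: float, liveness_score: float,
                 match_threshold: float = 0.8,
                 liveness_threshold: float = 0.5) -> AuthResult:
    """Accept a sample only if it matches the enrolled user AND appears to
    come from a live person rather than a mask, print, or replica."""
    if liveness_score < liveness_threshold:
        return AuthResult(False, "rejected: likely spoofing attack")
    if match_score < match_threshold:
        return AuthResult(False, "rejected: does not match enrolled identity")
    return AuthResult(True, "accepted: live sample matching enrolled identity")


# Example: a high match score alone is not enough if liveness looks suspicious.
print(authenticate(match_score=0.95, liveness_score=0.2))
```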

With additional government funding, researchers in the Vision, Image, Speech and Text Analytics (VISTA) group at the USC Information Sciences Institute (ISI) are leading the effort to identify and prevent spoofing attacks on biometric systems. They have dubbed the effort “Biometric Authentication with Timeless Learner,” or the BATL research project.

It is one of several ongoing projects at USC that build AI-based tools for the public good.

“My group is dedicated to the ethical applications of artificial intelligence,” said ISI research director Wael AbdAlmageed, who leads the team, whose results have recently been published in journals such as IEEE Sensors. “We want to use artificial intelligence ethically.”

AbdAlmageed, a research professor at the USC Viterbi School of Engineering, leads the five-member VISTA team at ISI. The team’s research and software development work spans a variety of efforts, from computer vision to voice recognition tools. Together they have also developed an algorithm that accurately flags deepfakes, the AI-generated videos that spread disinformation and misinformation across media and social platforms.

Other members of the team are Mohamed Hussein, a research lead in the VISTA group; Hengameh Mirzaalian, a machine-learning and computer vision scientist; Leonidas Spinoulas, a research scientist; and Joe Mathai, a research programmer at ISI. The VISTA team collaborated with Sébastien Marcel and his colleagues at the Idiap Research Institute in Switzerland, a non-profit organization focused on biometrics research and machine learning.

When hackers attempt to access critical personal and financial data, the VISTA group’s complex and ever-evolving machine-learning and AI algorithms work to detect these spoofing attacks. The research is sponsored by the Intelligence Advanced Research Projects Activity (IARPA) through its Odin program, which invests in cutting-edge research for the intelligence community. IARPA is part of the US Office of the Director of National Intelligence and is based in Washington, D.C.

Since the proposal’s initial conception five years ago, the USC team has made unprecedented progress in the biometrics research community by turning an abstract idea into a patented, working product. The team’s achievements have attracted the attention of the research community and beyond: within the past year, the team received a research extension from the government to transfer its technology to several federal agencies.

This level of recognition puts USC ISI, based in Marina del Rey, at the forefront of biometrics research and expands the project’s impact beyond the research community.

Trailblazing progress

Many important improvements have been made to the biometric system since last year.

In work described as “trailblazing” by Dr. Lars Ericson, the Odin program manager at IARPA, the VISTA team at USC ISI developed a first-of-its-kind algorithm that explains the decisions made by biometric anti-spoofing systems in natural language. For security analysts, this means an easier, more accessible understanding of the reasoning behind whether something is flagged as a spoofing attempt. The research will be presented at the 2021 IEEE International Conference on Automatic Face and Gesture Recognition.
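As a rough illustration of the idea, not the team's published method, a detector might pair its spoof score with a short plain-language rationale drawn from the evidence it weighed. Everything below, including the cue names and scores, is an assumption for illustration.

```python
# Illustrative sketch: turning anti-spoofing evidence into a short
# natural-language rationale for an analyst. Cue names are hypothetical.
def explain_decision(spoof_score, evidence, threshold=0.5):
    """Return a one-sentence explanation of why a sample was flagged
    (or not), based on the cues that contributed most to the score."""
    verdict = "a spoofing attempt" if spoof_score >= threshold else "a bona fide sample"
    top_cues = sorted(evidence.items(), key=lambda kv: kv[1], reverse=True)[:2]
    cue_text = " and ".join(name.replace("_", " ") for name, _ in top_cues)
    return (f"Flagged as {verdict} (score {spoof_score:.2f}) "
            f"mainly because of {cue_text}.")


# Example: the analyst sees a sentence instead of a bare score.
print(explain_decision(0.87, {"unnatural_skin_texture": 0.6,
                              "missing_thermal_signature": 0.9,
                              "iris_reflection": 0.1}))
```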

In an age when technology continues to advance, it is inevitable that new, never-before-seen spoofing attacks will emerge.

“The main challenge was the ability to identify unknown spoofing attacks and learn them consistently,” AbdAlmaged said.

The team has developed new machine-learning algorithms that boost the system’s adaptability and security against spoofing attacks, ensuring that the system continuously learns how to detect new ones. A minimal sketch of that idea appears below.
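The sketch below assumes a generic incremental classifier (scikit-learn's SGDClassifier) that is updated as newly confirmed attack samples arrive; it is an illustration of continual learning in general, not the BATL algorithm.

```python
# Hypothetical continual-learning loop for spoof detection (illustrative only).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()

# Initial training on known bona fide (0) and known spoof (1) samples.
X_init = rng.normal(size=(200, 16))
y_init = rng.integers(0, 2, size=200)
model.partial_fit(X_init, y_init, classes=[0, 1])


def update_with_new_attacks(model, X_new, y_new):
    """Fold newly labeled attack samples into the model without retraining
    from scratch, so detection keeps pace with never-before-seen spoofs."""
    model.partial_fit(X_new, y_new)
    return model


# As analysts confirm a new attack type, its samples are streamed in.
X_attack = rng.normal(loc=2.0, size=(20, 16))
model = update_with_new_attacks(model, X_attack, np.ones(20, dtype=int))
```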

The team has also implemented more robust and sophisticated AI models that can adapt to radically new environments, along with a compact, lightweight sensor that can be easily manufactured and deployed.

Widespread use of biometrics

Biometrics technology has countless promising uses. For example, Japan used facial recognition technology at its Olympic Games to help prevent the spread of the coronavirus. It marked the first time the Olympics had ever used the technology, showing that biometrics are helpful not only for security but also as a large-scale method of protecting public health.

While biometrics research extends far beyond the laboratory, VISTA’s work is an example of how to deploy biometric data and technology on a large scale.

Despite this significant progress, the use of biometric technology outside strict regulatory limits remains contentious.

“Generally speaking, I am not in favor of using facial recognition without very clear and transparent rules about how the technology will be used,” AbdAlmageed said.

AbdAlmageed considers it important to keep his research ahead of new developments and security concerns. By developing new and improved technology, he can advise on the ethical use of machine learning and provide new tools to help doctors diagnose conditions that affect facial features. Through his work, he also aims to mitigate cyberattacks and slow the spread of misinformation that disrupts elections and harms public health.

“AI is not mature enough to be used in the world without safeguards. The way we use it in our lab, whether it [is for] detecting deepfakes or helping doctors, we really know the limits,” AbdAlmageed said.

Source: USC

