“People have had to rely on complex codes and calculations to predict spin qubit coherence times. But now people can calculate the prediction themselves instantly. This opens up opportunities for researchers to discover the next generation of qubit materials themselves.” – Shun Kanai, Tohoku University

The elegant formula allows scientists to estimate a material’s coherence time in an instant—as opposed to the hours or weeks it takes to calculate a precise value.

The team, which includes scientists from the US Department of Energy’s (DOE) Argonne National Laboratory, the University of Chicago, Tohoku University in Japan and Ajou University in Korea, published their results in April in the Proceedings of the National Academy of Sciences.

Their work is supported by the Center for Novel Pathways to Quantum Coherence in Materials, an Energy Frontier Research Center funded by the US Department of Energy, and by Q-NEXT, a DOE National Quantum Information Science Research Center led by Argonne.

The team’s equation applies to a special class of materials – those that can be used in devices called spin qubits.

“People have had to rely on complex codes and calculations to predict spin qubit coherence times. But now people can calculate the prediction immediately,” said co-author Shun Kanai of Tohoku University. “This opens up opportunities for researchers to discover the next generation of qubit materials themselves.”

Qubits are the fundamental units of quantum information, the quantum version of classical computer bits. They come in a variety of forms, including a type known as a spin qubit. A spin qubit stores data in a material’s spin—a quantum property inherent in all atomic and subatomic matter, such as electrons, atoms, and groups of atoms.

Scientists hope that quantum technologies will help improve our everyday lives. We may be able to send information over quantum communication networks that are impervious to hackers, or use quantum simulations to accelerate drug discovery.

Realizing these capabilities will depend on having qubits that are stable enough to store, process and send information – in other words, qubits that maintain coherence for a sufficiently long time.

While the research team’s equation gives only a rough estimate of a material’s coherence time, it turns out to be fairly close to the true value. And what the equation lacks in precision, it makes up for in convenience. Only five numbers are needed to obtain the solution – the values of five particular properties of the material in question. Plug them in, and voila! You have your coherence time.
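To make that workflow concrete, here is a minimal sketch of what such a screening step might look like in code. The property names and the scaling used below are illustrative placeholders only, not the team’s published five-parameter expression (which is given in the PNAS paper); the sketch simply shows how an instant, formula-based estimate replaces hours of simulation per material.

```python
# Illustrative placeholder only -- NOT the published formula. It mimics the
# "plug in a handful of tabulated material properties, get an instant
# coherence estimate" workflow using a generic nuclear-spin-bath scaling:
# denser, more abundant, more strongly magnetic nuclear spins dephase a
# spin qubit faster.

def estimate_relative_coherence(nuclear_spin_density, gyromagnetic_ratio,
                                natural_abundance, spin_quantum_number,
                                correction_factor=1.0):
    """Return a rough, relative coherence figure of merit (arbitrary units)."""
    bath_strength = (natural_abundance * nuclear_spin_density
                     * gyromagnetic_ratio ** 2
                     * spin_quantum_number * (spin_quantum_number + 1))
    return correction_factor / bath_strength if bath_strength else float("inf")

# Screening candidate host materials then becomes a loop over a table of
# tabulated properties rather than a week of cluster simulations.
```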

Diamond and silicon carbide are currently the best-established materials for hosting spin qubits. Now scientists can explore other candidates without having to calculate whether a material is worth a deep dive.

“The equation is like a lens. It tells you, ‘Look here, look at this material – it looks promising,’” said Giulia Galli, a University of Chicago professor and senior scientist at Argonne, study co-author and Q-NEXT collaborator. “We are after new qubit platforms, new materials. Identifying mathematical relationships like this one points to new materials worth trying.”

With this equation in hand, the researchers plan to boost the accuracy of their model.

They will also work with researchers who can synthesize the materials with the most promising predicted coherence times, testing whether they perform as well as the equation predicts. (The team has already notched one success: a scientist outside the team reported that a material called calcium tungstate exhibits a relatively long coherence time, as predicted by the team’s formula.)

“Our results help us advance current quantum information technology, but that’s not all,” said Tohoku University Professor Hideo Ohno, currently the university’s president and a paper co-author. “It will unlock new possibilities by bridging quantum technology with a variety of conventional systems, allowing us to make even greater progress with the materials we are already familiar with. We are pushing more than one scientific frontier.”

This work was supported by the Center for Novel Pathways to Quantum Coherence in Materials, an Energy Frontier Research Center funded by the US Department of Energy, Office of Science, Basic Energy Sciences, in collaboration with the US Department of Energy Office of Science National Quantum Information Science Research Centers.

Q-NEXT is a US Department of Energy National Quantum Information Science Research Center led by Argonne National Laboratory. Q-NEXT brings together world-class researchers from national laboratories, universities and US technology companies with the single goal of developing the science and technology to control and distribute quantum information. Q-NEXT collaborators and institutions will create two national foundries for quantum materials and devices, develop networks of sensors and secure communications systems, establish simulation and network testbeds, and train the next-generation quantum-ready workforce to ensure continued American scientific and economic leadership in this rapidly growing field. For more information, visit https://www.q-next.org.

Read More

Overcoming biological laws

The tensile strength of spider silk fibers comes from protein segments that are tightly packed and zipped together. Because spider silk proteins are secreted from silk gland cells, they must be devoid of long stretches of hydrophobic residues, as such segments would become trapped in membranes inside the cell. Yet such hydrophobic residues could mediate tight interactions in protein zippers, an attractive feature for generating artificial solid silk.

Producing the proteins in bacteria can bypass the natural rules that spiders must follow, because the recombinant proteins are not secreted and therefore do not become trapped in cell membranes. Based on these insights, the researchers designed spider silk proteins predicted to form more robust zippers, and successfully produced a panel of these in bacteria.

Biomimetic spinning of these engineered spider silk proteins resulted in increased tensile strength, and the two fiber types displayed stiffness comparable to native dragline silk. Bioreactor expression and purification gave a protein yield of ~9 g/L, in line with the requirements for economically viable industrial bulk-scale production. The protein from a 1 L bacterial culture would be enough to spin a fiber 18 km long, as the rough calculation below illustrates.
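As a rough consistency check of those last two figures (the numbers below are back-of-envelope estimates, not values from the paper’s methods), the implied fiber dimensions can be worked out in a few lines:

```python
import math

# ~9 g of protein from a 1 L culture spun into an 18 km fiber.
# The silk density (~1.3 g/cm^3) is a typical literature value, assumed here.
protein_mass_g = 9.0
fiber_length_m = 18_000.0
silk_density_g_per_cm3 = 1.3

linear_density_mg_per_m = protein_mass_g * 1000 / fiber_length_m        # 0.5 mg/m (~0.5 tex)
cross_section_cm2 = (protein_mass_g / (fiber_length_m * 100)) / silk_density_g_per_cm3
diameter_um = 2 * math.sqrt(cross_section_cm2 / math.pi) * 1e4          # roughly 22 micrometers

print(f"{linear_density_mg_per_m:.2f} mg/m, ~{diameter_um:.0f} um diameter")
```

A diameter in the tens of micrometers is in the range typical of artificially spun silk fibers, so the quoted yield and fiber length are mutually consistent.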

Source: Karolinska Institutet

Read More

The MIT AI Hardware Program is a new academia and industry collaboration to define and develop translational technologies in hardware and software for the AI and quantum age. A partnership between the MIT School of Engineering and the MIT Schwarzman College of Computing, involving the Microsystems Technology Laboratories and programs and units in the college, the cross-disciplinary effort aims to innovate technologies that will deliver enhanced energy efficiency for cloud and edge computing systems.

Caption: MIT has announced the launch of the MIT AI Hardware Program, which includes five inaugural companies to advance transformational AI technologies for the next decade.

“A sharp focus on AI hardware manufacturing, research and design is critical to meeting the demands of the world’s evolving devices, architectures and systems,” says Anantha Chandrakasan, dean of the MIT School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “Knowledge sharing between industry and academia is essential to the future of high-performance computing.”

Based on use-inspired research involving materials, devices, circuits, algorithms and software, the MIT AI Hardware Program convenes researchers from MIT and industry to facilitate the transition of fundamental knowledge to real-world technical solutions. The program spans materials, devices, architectures and algorithms that enable energy-efficient and sustainable high-performance computing.

“As AI systems become more sophisticated, new solutions are needed to enable more advanced applications and deliver greater performance,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “Our goal is to devise real-world technological solutions and lead the development of technologies for AI in hardware and software.”

The program’s inaugural members are companies from a wide range of industries, including chip making, semiconductor manufacturing equipment, AI and computing services, and information systems R&D organizations. The companies represent a diverse ecosystem nationally and internationally, and will work with MIT faculty and students to help shape a vibrant future for our planet through cutting-edge AI hardware research.

The five inaugural members of the MIT AI Hardware Program are:

  • Amazon, a global technology company whose hardware inventions include the Kindle, Amazon Echo, Fire TV and Astro;
  • Analog Devices, a global leader in the design and manufacture of analog, mixed-signal and DSP integrated circuits;
  • ASML, an innovation leader in the semiconductor industry, provides hardware, software and services to mass-produce patterns on silicon through lithography;
  • NTT Research, a subsidiary of NTT, conducts fundamental research to elevate reality in game-changing ways that improve lives and brighten our global future; and
  • TSMC, the world’s leading dedicated semiconductor foundry.

The MIT AI Hardware Program will create a roadmap for transformative AI hardware technologies. Leveraging MIT.nano, the most advanced university nanofabrication facility anywhere, the program will foster a unique environment for AI hardware research.

“We are all in awe of the seemingly superhuman capabilities of today’s AI systems. But this comes at a rapidly increasing and unsustainable energy cost,” says Jesús del Alamo, the Donner Professor in MIT’s Department of Electrical Engineering and Computer Science. “Continued progress in AI will require new and vastly more energy-efficient systems. This, in turn, will demand innovations across the entire abstraction stack, from materials and devices to systems and software. The program is in a unique position to contribute to this quest.”

The program will give priority to the following topics:

  • analog neural networks;
  • new roadmap CMOS designs;
  • heterogeneous integration for AI systems;
  • monolithic 3D AI systems;
  • analog non-volatile memory devices;
  • software-hardware co-design;
  • intelligence at the edge;
  • intelligent sensors;
  • energy-efficient AI;
  • intelligent internet of things (IIoT);
  • neuromorphic computing;
  • AI edge security;
  • quantum AI;
  • wireless technologies;
  • hybrid-cloud computing; and
  • high-performance computation.

“We live in an era where paradigm-shifting pursuits in hardware, systems communications and computing have become imperative to finding sustainable solutions – solutions that we are proud to offer to the world and generations to come,” says Aude Oliva, senior research scientist at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and director of strategic industry engagement at the MIT Schwarzman College of Computing.

Written by the School of Engineering

Source: Massachusetts Institute of Technology


Read More


Recently, remarkable progress has been made in the field of text-to-video retrieval. However, current systems are primarily designed for very short videos, while most real-world videos capture complex human actions that can last several minutes or even hours.

Image credit: Yan-Bo Lin, Jie Lei, Mohit Bansal, Gedas Bertasius


A scientific paper published on arXiv.org addresses this limitation by proposing an efficient audio-visual text-to-video retrieval system focused on long-range videos.

The researchers note that most of the relevant visual information can be captured in just a few video frames, while temporal dynamics can be compactly encoded in the audio stream. Therefore, instead of processing many densely extracted frames from a long video, the proposed framework operates on sparsely sampled video frames together with dense audio.

The researchers demonstrate that, compared with long-range video-only approaches, the novel framework delivers better video retrieval results at a lower computational cost.

We introduce an audio-visual method for long-range text-to-video retrieval. In contrast to previous approaches designed for short video retrieval (e.g., 5–15 seconds in duration), our approach aims to retrieve minute-long videos that capture complex human actions. One challenge of standard video-only approaches is the large computational cost associated with processing hundreds of densely extracted frames from such long videos. To address this problem, we propose to replace parts of the video with compact audio cues that concisely summarize dynamic audio events and are cheap to process. Our method, named ECLIPSE (Efficient CLIP with Sound Encoding), adapts the popular CLIP model to an audio-visual video setting by adding an integrated audiovisual transformer block that captures complementary cues from the video and audio streams. In addition to being 2.92x faster and 2.34x more memory-efficient than long-range video-only approaches, our method also achieves superior text-to-video retrieval accuracy on several diverse long-range video datasets such as ActivityNet, QVHighlights, YouCook2, DiDeMo and Charades.
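The core idea, replacing most of the expensive frame processing with a cheap audio summary, can be sketched as follows. This is a conceptual illustration with stand-in encoder functions, not the authors’ ECLIPSE code or its exact fusion mechanism.

```python
import numpy as np

# Conceptual sketch of audio-augmented long-range retrieval: embed a few
# sparsely sampled frames plus the audio track, fuse them into one video
# embedding, and rank videos by cosine similarity to the text query.
# The encode_* functions are stand-ins for CLIP-style image/text encoders
# and an audio encoder (here: random vectors, for runnability only).
rng = np.random.default_rng(0)

def encode_frames(frames):  return rng.standard_normal((len(frames), 512))
def encode_audio(audio):    return rng.standard_normal(512)
def encode_text(query):     return rng.standard_normal(512)

def video_embedding(frames, audio, num_sparse_frames=8):
    # Sparse, evenly spaced frames keep the visual cost low for long videos.
    idx = np.linspace(0, len(frames) - 1, num_sparse_frames).astype(int)
    visual = encode_frames([frames[i] for i in idx]).mean(axis=0)
    # The dense audio stream cheaply summarizes the temporal dynamics.
    fused = visual + encode_audio(audio)
    return fused / np.linalg.norm(fused)

def retrieve(query, videos):
    """videos: list of (frames, audio) pairs; returns index of the best match."""
    q = encode_text(query)
    q = q / np.linalg.norm(q)
    scores = [float(q @ video_embedding(frames, audio)) for frames, audio in videos]
    return int(np.argmax(scores))
```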

Research article: Lin, Y.-B., Lei, J., Bansal, M., and Bertasius, G., “ECLIPSE: Efficient Long-Range Video Retrieval Using Sight and Sound”, 2022. Paper link: https://arxiv.org/abs/2204.02874
Project Page: https://yanbo.ml/project_page/eclipse/


Read More

Modern AI systems can create realistic images and art from descriptions in natural language.

Previously, two families of approaches had been applied to the problem of text-conditional image generation: contrastive models such as CLIP and diffusion models. Recently, OpenAI proposed a new method for this task: DALL·E 2.

Example of a generated image. Credit: DALL·E 2


This new method produces more realistic and accurate images with 4x greater resolution than its predecessor, DALL·E. The novel system combines the two previous approaches: a diffusion decoder is trained to invert the CLIP image encoder.
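Schematically, the generation pipeline described for DALL·E 2 (unCLIP) can be outlined as below. The object names are stand-ins for illustration; this shows the data flow, not OpenAI’s implementation.

```python
# Outline of the unCLIP-style data flow, with stand-in objects: a frozen CLIP
# text encoder, a prior that maps text embeddings to CLIP image embeddings,
# and a diffusion decoder trained to invert the CLIP image encoder.

def generate_image(prompt, clip, prior, diffusion_decoder):
    text_embedding = clip.encode_text(prompt)         # frozen CLIP text encoder
    image_embedding = prior.sample(text_embedding)    # text embedding -> image embedding
    return diffusion_decoder.sample(image_embedding)  # image embedding -> pixels
```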

In addition to creating original, realistic images and art from text descriptions, DALL·E 2 can perform realistic edits, such as adding elements to or removing elements from existing images. It can also take an image as input and create different variations of it inspired by the original. In addition to empowering people to express themselves creatively, the research also helps humans better understand how advanced AI systems see and understand our world.

Link: https://openai.com/dall-e-2/


Read More

When artificial intelligence is tasked with visual recognition of objects and faces, it assigns specific components of its network to face recognition – just like the human brain.

The human brain seems to care a lot about faces. It dedicates a specific area to recognizing them, and the neurons there are so good at their job that most of us can readily recognize thousands of individuals. With artificial intelligence, computers can now recognize faces with comparable efficiency – and neuroscientists at MIT’s McGovern Institute for Brain Research have found that a computational network trained to recognize faces and other objects discovers a surprisingly brain-like strategy to sort them all out.

Visualization of the preferred stimuli for example face-ranked filters. Whereas filters in the early layers (e.g., Conv5) were maximally activated by simple features, filters responded to features that appear to be parts of a face (e.g., nose and eyes) in mid-level layers (e.g., Conv9) and appear to represent faces in a more holistic way in late convolutional layers. Credits: Courtesy of the Kanwisher Lab.


The finding, reported in Science Advances, suggests that the millions of years of evolution that shaped circuits in the human brain have optimized our system for facial recognition.

“The human brain’s solution is to separate the processing of faces from the processing of objects,” explains Katharina Dobs, who led the study as a postdoc in the lab of McGovern investigator Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience. The artificial network she trained did the same. “And that’s the same solution we envisage for any system trained to recognize faces and classify objects,” she adds.

“These two completely different systems have figured out what a good solution, if not the good solution, is. And that feels very profound,” says Kanwisher.

Functionally specific brain regions

More than 20 years ago, Kanwisher and her colleagues discovered a small spot in the brain’s temporal lobe that responds specifically to faces. This region, which they named the fusiform face area, is one of several brain regions Kanwisher and others have found that are devoted to specific functions, such as detecting written words, perceiving vocal songs, and understanding language.

Kanwisher says that as she has investigated how the human brain is organized, she has always been curious about the reasons for that organization. Does the brain really need special machinery for facial recognition and other functions? “‘Why’ questions are hard in science,” she says. But with a sophisticated type of machine learning called a deep neural network, her team could at least find out how a different system would handle a similar task.

Dobs, now a research group leader at Justus Liebig University Giessen in Germany, assembled hundreds of thousands of images with which to train a deep neural network in face and object recognition. The collection included the faces of more than 1,700 different people and hundreds of different kinds of objects, from chairs to cheeseburgers. All of these were presented to the network with no clues about which was which. “We never told the system that some of them are faces and some are objects, so it’s just one huge task,” Dobs says. “It needs to recognize a face and recognize a bike or a pen.”

As the program learned to recognize objects and faces, it organized itself into an information-processing network consisting of units dedicated exclusively to facial recognition. Like the brain, this specialization occurred during the later stages of image processing. In both the brain and artificial networks, the early stages of facial recognition involve the more general vision processing machinery, and the final stages rely on face-dedicated components.
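One simple way to look for this kind of specialization in a trained network, shown here as a sketch of the general idea rather than the exact analysis used in the study, is to compare each unit’s responses to faces versus objects:

```python
import numpy as np

def face_selectivity(face_activations, object_activations):
    """Per-unit selectivity index in [-1, 1] (assumes non-negative, e.g. ReLU, activations).

    face_activations, object_activations: arrays of shape (n_images, n_units)
    recorded from one layer of the trained network.
    """
    mean_face = face_activations.mean(axis=0)
    mean_object = object_activations.mean(axis=0)
    return (mean_face - mean_object) / (mean_face + mean_object + 1e-8)

# Units with an index near +1 respond almost exclusively to faces; in the study,
# such face-dedicated units emerged mainly in the network's later layers.
```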

It is unknown how face-processing machinery arises in the developing brain. Still, based on their findings, Kanwisher and Dobs say that a network does not require an innate face-processing mechanism to gain that expertise. “We haven’t built anything face-ish into our network,” says Kanwisher. “The networks managed to segregate themselves without any face-specific nudges.”

Kanwisher says it was thrilling to see the deep neural network separate itself into different parts for face and object recognition. “That’s what we’ve been seeing in the brain for twenty-some years,” she says. “Why do we have a separate system for facial recognition in the brain? This tells me it is because that is what an optimized solution looks like.”

Now, she is eager to use deep neural nets to ask similar questions about why other brain functions are organized the way they are. “We have a new way of asking why the brain is organized the way it is,” she says. “Will the structure we see in the human brain be generated automatically by training networks to perform comparable tasks?”

Written by Jennifer Michalowski

Source: Massachusetts Institute of Technology


Read More

Researchers led by the University of Cambridge analyzed more than 12,000 research papers on breast cancer cell biology. After narrowing the set down to 74 papers of high scientific interest, less than one-third – 22 papers – were found to be reproducible. In two cases, Eve was able to make serendipitous discoveries.

The results, reported in the journal Royal Society Interface, demonstrate that it is possible to use robotics and artificial intelligence to help address the reproducibility crisis.

A breast cancer cell close-up. Image credit: NCI


An experiment is reproducible when another scientist can achieve similar results in a different laboratory under similar conditions. But more than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have been unable to reproduce some of their own: this is the reproducibility crisis.

“Good science depends on reproducible results; otherwise, the results are meaningless,” said Professor Ross King from Cambridge’s Department of Chemical Engineering and Biotechnology, who led the research. “This is particularly important in biomedicine: if I’m a patient and I read about a promising new potential treatment, but the results aren’t reproducible, how do I know what to believe? The result could be that people lose faith in science.”

Several years ago, King developed the robotic scientist Eve, a computer/robot system that uses artificial intelligence (AI) techniques to perform scientific experiments.

“One of the big benefits of using machines to do science is that they are more precise and record more precise details than a human can,” King said. “This makes them suitable for attempting to reproduce scientific results.”

As part of a project funded by DARPA, King and colleagues from the UK, US and Sweden designed a system that uses a combination of AI and robotics to help address the reproducibility crisis: computers read and understand scientific papers, and Eve attempts to reproduce the experiments.

For the current paper, the team focused on cancer research. “The cancer literature is enormous, but no one ever does the same thing twice, making reproducibility a huge issue,” said King, who also holds a position at Chalmers University of Technology in Sweden. “Given the huge amount of money spent on cancer research and the sheer number of people affected by cancer around the world, it’s an area where we urgently need to improve reproducibility.”

From an initial set of more than 12,000 published scientific papers, the researchers used automated text-mining techniques to extract statements related to changes in gene expression in response to drug treatment in breast cancer. From this set, 74 papers were selected.

Two different human teams used Eve and two breast cancer cell lines to attempt to reproduce the 74 results. Statistically significant evidence for repeatability was found for 43 papers, meaning the results were repeatable under identical conditions; significant evidence for reproducibility or robustness was found for 22 papers, meaning the results were repeatable by different scientists under similar conditions. In two cases, the automation made serendipitous discoveries.
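As an illustration of the kind of check involved (not the paper’s exact statistical protocol), a single extracted claim, say that a drug up-regulates a particular gene, could be tested against replicate measurements like this:

```python
from scipy import stats

def replicates_reported_change(control, treated, reported_direction, alpha=0.05):
    """Illustrative repeatability check, not the study's exact protocol.

    control, treated: replicate expression measurements for one gene;
    reported_direction: 'up' or 'down', as stated in the original paper.
    """
    t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
    observed_direction = "up" if t_stat > 0 else "down"
    return (p_value < alpha) and (observed_direction == reported_direction)

# Example: the original paper reported up-regulation of the gene after treatment.
print(replicates_reported_change([1.0, 1.1, 0.9], [1.8, 2.1, 1.9], "up"))  # True
```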

While only 22 of the 74 papers were found to be reproducible in this experiment, the researchers say this does not mean that the remaining papers are not scientifically reproducible or robust. “There are many reasons why a particular result may not be reproducible in another laboratory,” said King. “Cell lines can sometimes change their behavior in different laboratories under different conditions. The most important difference we found was that it matters who does the experiment, because every individual is different.”

King says the work shows that automated and semi-automated techniques could be an important tool in helping to address the reproducibility crisis, and that reproducibility should become a standard part of the scientific process.

“It’s quite shocking how big an issue reproducibility is in science, and the way a lot of science is done is in need of a complete overhaul,” said King. “We think machines have an important role to play in helping to fix it.”

Source: University of Cambridge


Read More

Cyber attacks are becoming increasingly sophisticated as institutions of all kinds turn to biometrics to verify user identities. Researchers at the USC Information Sciences Institute are on the front lines, developing systems to protect against security breaches and hacking attempts.

Illustration by Rami Al-Zayat on Unsplash.

In an age when we rely on biometric authentication processes such as fingerprint and iris identification to perform day-to-day tasks, theft of biometric data can put anyone at significant risk. From realistic-looking masks that fool facial recognition systems to mimicked fingerprint and iris patterns, these spoofing attacks can take many forms and have become increasingly prevalent in our digital world.

With additional government funding, researchers in the Vision, Image, Speech and Text Analytics (VISTA) group at the USC Information Sciences Institute (ISI) are leading the effort to identify and prevent spoofing attacks on biometric systems. They have dubbed the effort “Biometric Authentication with Timeless Learner” – the BATL research project.

It is one of several ongoing projects at USC that build AI-based tools for the public good.

“My group is dedicated to the ethical applications of artificial intelligence,” said ISI research director Wael AbdAlmageed, who leads the team, whose results have recently been published in journals such as IEEE Sensors. “We want to use artificial intelligence ethically.”

AbdAlmageed, a research professor at the USC Viterbi School of Engineering, leads the five-member VISTA team at ISI. The team’s research and software development efforts span everything from computer vision to voice recognition tools. Together they have also developed an algorithm that accurately flags deepfakes – AI-generated videos that spread disinformation and misinformation on media and social platforms.

The other members of the team are Mohamed Hussein, a research lead and member of the VISTA group; Hengameh Mirzaalian, a machine-learning and computer vision scientist; Leonidas Spinoulas, a research scientist; and Joe Mathai, a research programmer at ISI. The VISTA team collaborated with Sebastien Marcel and his colleagues at the Idiap Research Institute in Switzerland, a non-profit organization focused on biometrics research and machine learning.

When hackers attempt to access critical personal and financial data, the VISTA group’s complex and ever-evolving machine learning and AI algorithms work to detect these spoofing attacks. The research is sponsored by the Intelligence Advanced Research Projects Activity (IARPA) through its Odin program, which invests in cutting-edge research for the intelligence community. IARPA sits under the US Office of the Director of National Intelligence and is located in Washington, DC.

Since the proposal’s initial conception five years ago, the USC team has made unprecedented progress in the biometrics research community, turning an abstract idea into a patented, working product. Its achievements have attracted the attention of the research community and beyond: within the past year, the team received a research extension from the government to transfer its technology to several federal agencies.

This level of recognition puts USC ISI, based in Marina del Rey, at the forefront of biometrics research and extends the project’s impact beyond the research community.

Trailblazing progress

Many important improvements have been made in the biometrics model since last year.

Described as “trailblazing” by Dr. Lars Ericson, the Odin program manager at IARPA, the VISTA team at USC’s ISI developed a first-of-its-kind algorithm that interprets the decisions made by biometric anti-spoofing systems using natural language. For security analysts, this means an easier and more accessible understanding of the reasoning behind whether something is tagged as a spoofing attempt. The research will be presented at the 2021 IEEE International Conference on Automatic Face and Gesture Recognition.

In an age when technology continues to advance, it is inevitable that new never-seen-before spoofing attacks will emerge.

“The main challenge was the ability to identify unknown spoofing attacks and to learn them continuously,” AbdAlmageed said.

Improvements have been made to create new machine-learning algorithms to boost the system’s adaptability and security against spoofing attacks. This will ensure that the system continuously learns how to detect new spoofing attacks.
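One common strategy for catching never-seen-before spoofs, sketched below as an illustration rather than the BATL system’s actual algorithm, is to model only genuine (bona fide) samples and flag anything that falls outside that distribution:

```python
from sklearn.ensemble import IsolationForest

def fit_bona_fide_detector(bona_fide_embeddings):
    """Fit an anomaly detector on embeddings of genuine biometric samples only."""
    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(bona_fide_embeddings)
    return detector

def is_suspicious(detector, embedding):
    # IsolationForest.predict returns -1 for outliers (possible spoofs), +1 for inliers.
    return detector.predict([embedding])[0] == -1
```

Because the detector never depends on examples of known attacks, presentation attacks of a type absent from the training data can still register as outliers.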

In addition, the team has also implemented more robust and sophisticated AI models that can adapt to radically new environments and a compact and lightweight sensor that can be easily manufactured and deployed.

Widespread use of biometrics

Biometrics technology has countless promising uses. For example, Japan used facial recognition technology at its Olympic Games to prevent the spread of the coronavirus. It marked the first time the Olympics had ever used the technology, showing that biometrics are helpful not only for security purposes but also as a large-scale method of ensuring public health.

While biometrics research extends far beyond the laboratory, VISTA’s work is an example of how biometric data and technology can be deployed on a large scale.

Despite this significant progress, biometric technology outside strict regulatory limits remains questionable.

“Generally speaking, I am not in favor of using facial recognition without very clear and transparent rules about how the technology will be used,” AbdAlmageed said.

AbdAlmageed considers it important to keep his research ahead of developments and security concerns. By developing new and improved technology, AbdAlmageed can advise on the ethical use of machine learning and provide new tools to help doctors diagnose conditions that affect facial features. Through his work, he can also help mitigate cyber attacks and slow the spread of the misinformation that disrupts elections and harms public health.

“AI is not mature enough to be used in the world without safeguards. The way we use it in our lab, whether it’s [for] deepfakes or to help doctors, we really know the limits,” AbdAlmageed said.

Source: USC


Read More