Physicists Christian Bauer, Marat Freytsis and Benjamin Nachman of Lawrence Berkeley National Laboratory have used an IBM Q quantum computer, accessed through the Oak Ridge Leadership Computing Facility’s Quantum Computing User Program, to capture part of the calculation of two protons colliding. The calculation can show the probability that an outgoing particle will emit additional particles.

In the team’s recent paper, published in Physical Review Letters, the researchers describe how they used a method called effective field theory to break down their full theory into components. They then developed a quantum algorithm that allows some of these components to be computed on a quantum computer while leaving the remaining computations to classical computers.

Searching for the smallest distance scales with particle colliders often requires detailed calculations of the spectra of the outgoing particles (the smallest filled green circles). Image credits: Benjamin Nachman, Berkeley Lab

“For a theory that is closer to nature, we showed how it would work in principle. Then we took a very simplified version of that theory and did an explicit calculation on a quantum computer,” Nachman said.

The Berkeley Lab team aims to uncover insights about nature’s smallest building blocks by observing high-energy particle collisions in laboratory environments, such as the Large Hadron Collider in Geneva, Switzerland. The team is tracing what happens in these collisions, using calculations to compare predictions with actual collision debris.

“One of the difficulties with these types of calculations is that we want to describe a large range of energies,” Nachman said. “We want to describe everything from the highest-energy processes down to the lowest-energy processes of the particles that fly into our detector.”

Solving these types of calculations with a quantum computer alone would require many more qubits than the quantum computing resources available today. The team can tackle these problems on classical systems using approximations, but the approximations ignore important quantum effects. The team therefore aimed to separate the computation into parts suited either to classical systems or to quantum computers.
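The splitting described above can be illustrated with a minimal, purely hypothetical sketch (it is not the published algorithm): one small piece of a calculation is estimated with a two-qubit circuit on a simulator standing in for IBM Q hardware, and the result is combined with a classically computed factor. It assumes the qiskit and qiskit-aer packages are installed; the circuit, the classical factor and the final product are all invented for illustration.

```python
# Minimal illustration of a hybrid quantum-classical workflow (not the
# published algorithm): a tiny quantum circuit supplies one piece of a
# calculation, a classical factor supplies the rest.
# Assumes: pip install qiskit qiskit-aer
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

# Toy 2-qubit circuit standing in for the "quantum part" of the theory.
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.measure_all()

sim = AerSimulator()
counts = sim.run(transpile(qc, sim), shots=4096).result().get_counts()

# Probability of the |11> outcome, estimated from measurement counts.
p_quantum = counts.get("11", 0) / sum(counts.values())

# Hypothetical classically computed factor (e.g. a piece of the calculation
# handled by the effective field theory on a classical computer); the value
# here is made up.
classical_factor = 0.73

emission_probability = classical_factor * p_quantum
print(f"combined estimate: {emission_probability:.3f}")
```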

The team conducted experiments on the IBM Q through OLCF’s QCUP program at the US Department of Energy’s Oak Ridge National Laboratory to verify that their quantum algorithms reproduced the expected results at small scales, which can still be calculated and confirmed on classical computers.

“This is an absolutely significant performance problem,” Nachman said. “For us, it’s important that we don’t just describe the properties of these particles theoretically, but actually implement a version of the calculation on a quantum computer. When you move to a quantum computer, there are a lot of challenges that don’t exist on paper. Our algorithm scales, so when we get more quantum resources, we’ll be able to do calculations that we couldn’t do classically.”

The team also aims to make today’s quantum computers usable for the kind of science they hope to do. Quantum computers are noisy, and this noise introduces errors into calculations. The team therefore also deployed error-mitigation techniques that they had developed in previous work.
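One widely used error-mitigation idea, zero-noise extrapolation, can be sketched in a few lines. It is offered only as an illustration of the general concept, not necessarily the technique the Berkeley Lab team developed, and the measured values below are made up.

```python
# Sketch of zero-noise extrapolation, one common error-mitigation idea
# (not necessarily the Berkeley Lab team's own technique): an observable is
# measured at artificially amplified noise levels and the results are
# extrapolated back to the zero-noise limit.
import numpy as np

# Hypothetical measured expectation values at noise scale factors 1x, 2x, 3x.
noise_scales = np.array([1.0, 2.0, 3.0])
measured = np.array([0.81, 0.69, 0.58])  # made-up numbers for illustration

# Fit a low-order polynomial and evaluate it at zero noise.
coeffs = np.polyfit(noise_scales, measured, deg=1)
zero_noise_estimate = np.polyval(coeffs, 0.0)
print(f"mitigated estimate: {zero_noise_estimate:.3f}")
```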

Next, the team hopes to add more dimensions to their problem, dividing their space into a larger number of smaller pieces and increasing the size of the problem they tackle. Eventually, they hope to perform calculations on quantum computers that are not possible with classical computers.

“The quantum computers available through ORNL’s IBM Q agreement have about 100 qubits, so we should be able to reach larger system sizes,” Nachman said.

The researchers hope to relax their approximations and move toward physics problems closer to nature so that their calculations amount to more than a proof of concept.

The team performed the IBM Q calculations with funding from the DOE Office of Science’s Office of High Energy Physics as part of the Quantum Information Science Enabled Discovery (QuantISED) program.

Source: ORNL



With the aim of promoting diversity and inclusion in artificial intelligence, the MIT Stephen A. Schwarzman College of Computing is launching Break Through Tech AI, a new program to bridge the talent gap for women and underrepresented genders in AI positions in industry.

Break Through Tech AI will provide skills-based training, industry-relevant portfolios and mentorship to undergraduate students in the Greater Boston area to position them more competitively for careers in data science, machine learning and artificial intelligence. The free, 18-month program will also offer each student a stipend for participation, lowering the barrier for those usually unable to engage in an unpaid, extracurricular educational opportunity.

Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and Henry Ellis Warren Professor of Electrical Engineering and Computer Science, says, “Helping students from diverse backgrounds succeed in fields such as data science, machine learning and artificial intelligence is critical to the future of our society. We look forward to working with students in the Greater Boston area to provide them with the skills and mentorship to help them find careers in this competitive and growing industry.”

The college is collaborating with Break Through Tech, a national initiative launched by Cornell Tech in 2016 to increase the number of women and underrepresented groups earning degrees in computing, to host and administer the program locally. In addition to Boston, the inaugural artificial intelligence and machine learning program will be offered in two other metropolitan areas: one hosted by Cornell Tech in New York and the other hosted by the UCLA Samueli School of Engineering in Los Angeles.

“Break Through Tech’s success in diversifying those pursuing computer science degrees and careers has changed lives and the industry,” says Judith Spitz, executive director of Break Through Tech. “With our new partners, we can apply our model to drive inclusion and diversity in artificial intelligence.”

The new program will begin this summer at MIT with an eight-week, skills-based online course and in-person lab experience that teaches industry-relevant tools for building real-world AI solutions. Students will learn how to analyze datasets and use a number of common machine learning libraries to build, train and apply their own ML models in a business context.

Following the summer course, students will be matched with machine-learning challenge projects, for which they will convene monthly at MIT, work in teams to build solutions, and collaborate with an industry advisor or mentor throughout the academic year, resulting in a portfolio of resume-quality work. Participants will also be connected with young professionals in the field to help them build their networks, build their portfolios, practice for interviews and develop workplace skills.

“By leveraging the college’s strong partnership with industry, Break Through Tech AI will provide students with unique opportunities that will enhance their portfolios in machine learning and AI,” says Asu Ozdaglar, deputy dean of academics at the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science. Ozdaglar, who will be the MIT faculty director of Break Through Tech AI, says: “The college is committed to making computing inclusive and accessible to all. We are thrilled to host this program at MIT for the Greater Boston area and to do what we can to help increase diversity in computing fields.”

Break Through Tech AI is part of the MIT Schwarzman College of Computing’s focus on advancing diversity, equity and inclusion in computing. The college aims to improve and create programs and activities that broaden participation in computing classes and degree programs, increase the diversity of top faculty candidates in computing fields, and ensure that faculty search and graduate admissions processes reach a diverse slate of candidates.

“By engaging in activities like Break Through Tech AI that work to improve the climate for underrepresented groups, we are taking a significant step toward creating a more welcoming environment where all members can innovate and thrive,” says Alana Anderson, assistant dean for diversity, equity and inclusion for the Schwarzman College of Computing.


Training AI systems that can perform simple mathematical reasoning is an important task, as numbers are ubiquitous in textual data.


Mathematical logic – abstract image. Image credit: Pxhere, CC0 Public Domain

A recent paper on arXiv.org presents a multi-task benchmark consisting of eight different tasks, each of which at its core requires an understanding of simple arithmetic. The tasks may also require commonsense reasoning or reading comprehension to be combined with basic arithmetic skills.

The researchers showed that this is a challenging benchmark even for state-of-the-art large-scale language models, which yield poor scores even after fine-tuning. Furthermore, a memory-augmented neural model is proposed to demonstrate the usefulness of such a multi-task meta-dataset. Compared with task-specific training, the model improves by 3.4% on average when trained on all tasks combined.

Given the ubiquitous nature of numbers in text, reasoning with numbers to perform simple calculations is an important skill of AI systems. While many datasets and models have been developed for this purpose, state-of-the-art AI systems are brittle; they fail to perform the underlying mathematical reasoning when it appears in a slightly different scenario. Drawing inspiration from GLUE, which was proposed in the context of natural language understanding, we propose NumGLUE, a multi-task benchmark that evaluates the performance of AI systems on eight different tasks, each of which at its core requires simple arithmetic understanding. We show that this benchmark is far from being solved, with neural models, including state-of-the-art large-scale language models, performing significantly worse than humans (lower by 46.4%). Further, NumGLUE promotes knowledge sharing across tasks, especially those with limited training data, as evidenced by the improved performance (an average gain of 3.4% per task) when a model is jointly trained on all the tasks as opposed to task-specific modeling. Finally, we hope that NumGLUE will encourage systems that perform robust and general arithmetic reasoning within language, a first step towards being able to perform more complex mathematical reasoning.

Research article: Mishra, S., et al., “NumGLUE: A Suite of Fundamental Yet Challenging Mathematical Reasoning Tasks”, 2022. Link: https://arxiv.org/abs/2204.05660
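As a rough illustration of how a benchmark of this kind is typically scored, the sketch below runs exact-match evaluation over a couple of arithmetic word-problem items. The item format, the questions and the predict() stub are hypothetical; they are not taken from the NumGLUE release or its official evaluation code.

```python
# Illustrative exact-match scoring over arithmetic word-problem items.
# The items and the predict() stub are hypothetical stand-ins, not the
# NumGLUE data format or an official evaluation script.
items = [
    {"question": "John had 5 apples and bought 7 more. How many apples does he have?",
     "answer": "12"},
    {"question": "A train travels 60 km per hour for 2 hours. How far does it go (in km)?",
     "answer": "120"},
]

def predict(question: str) -> str:
    """Stand-in for a language model; a real system would generate this answer."""
    return "12"

# Exact match: the predicted string must equal the gold answer.
correct = sum(predict(item["question"]) == item["answer"] for item in items)
print(f"exact-match accuracy: {correct / len(items):.2f}")
```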



Recently, large language models (LLMs) have shown that it is possible to achieve impressive results without large-scale task-specific data collection or model parameter updating.


Image Credits: Google AI Blog

To further increase understanding of the capabilities that emerge with few-shot learning, Google Research has proposed Pathways, an approach toward a single model that can generalize across domains and tasks while being highly efficient.

A recent paper introduces the Pathways Language Model (PaLM), which marks progress toward this goal. PaLM is a 540-billion-parameter, dense decoder-only Transformer model trained with the Pathways system.


Image Credits: Google AI Blog

The model demonstrates strong capabilities in language understanding and generation across diverse domains, including reasoning and code-related tasks. For example, it can distinguish cause and effect and even guess a film from emoji.
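PaLM itself is not called here, but the few-shot setup behind such results can be illustrated with a plain prompt string: a handful of worked examples followed by a new query, with no updates to the model’s parameters. The examples below are invented for illustration and no actual model is queried.

```python
# Illustration of a few-shot prompt of the kind used to elicit reasoning
# (e.g. cause and effect) from large language models; the examples are
# invented and no model call is made here.
few_shot_prompt = """\
Q: I dropped the glass, so it broke. What is the cause?
A: Dropping the glass.

Q: The road was icy, so the car skidded. What is the cause?
A: The icy road.

Q: The alarm did not ring, so she overslept. What is the cause?
A:"""

# A real system would send `few_shot_prompt` to a large language model and
# read back the completion, e.g. "The alarm not ringing."
print(few_shot_prompt)
```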

Source Link: https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html