## The world’s first LED lights developed from rice bran

Milling of rice to separate the grain from the husk produces approximately 100 million tons of rice bran waste globally each year. Scientists searching for a scalable method for making quantum dots have developed a way to recycle rice bran to make the first silicon quantum dot (QD) LED light. Their new method turns agricultural waste into state-of-the-art light-emitting diodes in a low-cost, eco-friendly way.

The research team from Hiroshima University’s Natural Science Center for Basic Research and Development published their findings on January 28, 2022, in ACS Sustainable Chemistry & Engineering.

“Since typical QDs often include toxic substances such as cadmium, lead, or other heavy metals, environmental concerns are often discussed when using nanomaterials. Our proposed process and fabrication method for QDs minimizes these concerns,” said Ken-ichi Saito, lead study author and professor of chemistry at Hiroshima University.

Since porous silicon (Si) was discovered in the 1950s, scientists have explored its use in lithium-ion batteries, luminescent materials, biomedical sensors, and drug delivery systems. Non-toxic and abundant in nature, Si possesses photoluminescence properties, which result from its microscopic (quantum-sized) dot structures that act as semiconductors.

Aware of the environmental concerns surrounding current quantum dots, the researchers set out to find a new method for making quantum dots with a positive environmental impact. It turns out that uncooked rice bran is an excellent source of high-purity silica (SiO2) and value-added carbon powder.

The team used a combination of milling, heat treatment, and chemical etching to process rice bran silica. First, they milled the rice bran and extracted silica (SiO2) powder by burning off the organic compounds of the ground bran. Second, they heated the resulting silica powder in an electric furnace to obtain Si powder via a reduction reaction. Third, the pure Si powder was reduced to 3 nanometers in size by chemical etching. Finally, its surface was chemically functionalized for high chemical stability and high dispersibility in solvent, producing 3 nm crystalline SiQDs that luminesce in the orange-red range with a luminescence efficiency of more than 20%.

“This is the first research to develop an LED from waste rice bran,” Saito said, adding that the non-toxic quality of silicon makes these QDs an attractive alternative to the semiconductor quantum dots available today.

“The current method is a great way to develop environmentally friendly quantum dot LEDs from natural products,” he said.

The LEDs were assembled as a series of physical layers. An indium tin oxide (ITO) glass substrate served as the LED anode; it is a good conductor of electricity while being sufficiently transparent to let light out. Additional layers, including the SiQD layer, were spin-coated onto the ITO glass. The material was capped with an aluminum film cathode.

The chemical synthesis method developed by the team allowed them to evaluate the optical and optoelectronic properties of the SiQD light-emitting diodes, including the structure, synthesis yields, and properties of the SiO2 and Si powders and the SiQDs.

“By synthesizing high-yield SiQDs from rice husks and dispersing them in organic solvents, it is possible that one day these processes could be applied on a large scale, like other high-yield chemical processes,” Saito said.

The team’s next steps include developing higher-efficiency luminescence in the SiQDs and LEDs. They will also explore the possibility of producing SiQD LEDs in colors other than the orange-red ones they have just made. Looking ahead, the scientists suggest that the method they developed could be applied to other plants that contain SiO2, such as sugar cane, bamboo, wheat, barley, or grasses. These natural products and their wastes have the potential to be converted into non-toxic optoelectronic devices. Ultimately, the scientists would like to see the commercialization of this eco-friendly approach to making luminescent devices out of rice bran waste.

Other members of the Hiroshima University research team include Honoka Ueda, Shiho Terada and Taisei Ono.


Shiho Terada et al, Orange-Red Si Quantum Dot LEDs from Recycled Rice Bran, ACS Sustainable Chemistry & Engineering (2022). DOI: 10.1021/acssuschemeng.1c04985

Provided by Hiroshima University

## Why did I fail? A cause-based method for finding explanations for robot failures

Robots that can explain their failures will earn greater trust and transparency, and the explanations can help correct their behavior. A recent paper published on arXiv.org proposes a method for generating explanations of failures based on a causal model that gives robots a partial understanding of their environment.

Robotic grippers. Image credits: Ars Electronica, CC BY-NC-ND 2.0 via Flickr

Researchers use Bayesian networks to tackle the problem of knowledge acquisition. A new method is proposed to generate explanations for performance failures based on learned causal knowledge. It is based on the comparison of the variable parametrization associated with a failed operation with its closest parametrization that can lead to a successful execution.

The researchers demonstrate how causal Bayesian networks can be learned from simulations and provide real-world experiments showing that causal models are transferable from simulations to reality without any retraining.

Robot failure is inevitable in human-centered environments. The ability of robots to explain such failures is therefore paramount for interacting with humans and increasing trust and transparency. To acquire this skill, the main challenges addressed in this paper are I) obtaining enough data to learn a cause-effect model of the environment and II) generating causal explanations based on that model. We address I) by learning a causal Bayesian network from simulation data. Regarding II), we propose a new method that enables robots to generate contrastive explanations of task failures. The explanation is based on contrasting the failure state with the closest state that would have allowed successful execution, which is found through breadth-first search and based on success predictions from the learned causal model. We assess the sim2real transferability of the causal model in a cube stacking scenario. Based on real-world experiments with two differently embodied robots, we obtain a sim2real prediction accuracy of 70% without any optimization or retraining. Our method thus allows real robots to give failure explanations such as, ‘The upper cube was dropped too high and the lower cube too far to the right.’
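The paper’s contrastive mechanism can be sketched as a breadth-first search over discretized task parameters. In the sketch below, `predicts_success` is a hypothetical toy standing in for the learned causal Bayesian network, and the parameter names (drop height, horizontal offset) are illustrative only:

```python
from collections import deque

# Toy stand-in for the learned causal model's success predictor:
# stacking succeeds when the drop height is low and the offset small.
def predicts_success(state):
    height, offset = state
    return height <= 2 and abs(offset) <= 1

def neighbors(state):
    """Single-step changes to the discretized task parameters."""
    h, o = state
    return [(h + dh, o + do)
            for dh in (-1, 0, 1) for do in (-1, 0, 1)
            if (dh, do) != (0, 0)]

def closest_success(failed_state):
    """Breadth-first search for the nearest parametrization that the
    causal model predicts would have succeeded."""
    seen = {failed_state}
    queue = deque([failed_state])
    while queue:
        state = queue.popleft()
        if predicts_success(state):
            return state
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

failed = (4, 3)               # e.g. dropped too high, too far right
fix = closest_success(failed)
# Contrastive explanation: report which parameters differed from the
# closest successful setting.
diffs = [name for name, a, b in
         [("drop height", failed[0], fix[0]),
          ("offset", failed[1], fix[1])]
         if a != b]
print(f"Failure because {', '.join(diffs)} differed from the "
      f"closest successful setting {fix}.")
```

The BFS guarantees the returned state is a minimal change from the failed one, which is what makes the resulting explanation contrastive rather than merely descriptive.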

Research Article: Diehl, M. and Ramirez-Amaro, K., “Why did I fail? A cause-based method for finding explanations for robot failures”, 2022. Link: https://arxiv.org/abs/2204.04483

## MIT.nano Immersion Lab Gaming Program awards third annual seed grants | MIT News

MIT.nano has announced its next round of seed grants to support hardware and software research related to sensors, 3D/4D interaction and analysis, augmented and virtual reality (AR/VR), and gaming. The grants are awarded through the MIT.nano Immersion Lab Gaming Program, a four-year collaboration between MIT.nano and NCSOFT, a digital entertainment company and founding member of the MIT.nano Consortium.

“We are delighted to be able to continue to support research at the intersection of the physical and the digital thanks to this collaboration with NCSOFT,” says MIT.nano Associate Director Brian W. Anthony, who is also a principal research scientist in mechanical engineering and the Institute for Medical Engineering and Science. “These projects are just a few examples of the ways researchers at MIT are exploring how new technologies can change how humans interact with the world and with each other.”

The MIT.nano Immersion Lab is a two-story immersive space dedicated to observing, understanding, and interacting with big data and synthetic environments. Equipped with hardware and software tools for motion capture, photogrammetry, and 4D experiences, and supported by expert technical staff, this open-access facility is available for use by any MIT student, faculty member, or researcher, as well as outside users.

This year, three projects have been selected to receive seed grants:

Ian Condry: Innovations in Spatial Audio and Immersive Sound

Japanese culture and media studies professor Ian Condry is exploring spatial sound research and technology for video gaming. Specifically, Condry and co-investigator Philip Tan, research scientist and creative director at the MIT Game Lab, hope to develop software to link the “roar of the crowd” to online gameplay and e-sports, so that players and spectators can hear and participate in the sound.

Condry and Tan will use the MIT Spatial Sound Lab’s object-based mixing technology, combined with the Immersion Lab’s tracking and playback capabilities, to collect data and compare different approaches to immersive audio. Both see the project as likely to encompass fully immersive “in real life” gaming experiences with 360-degree video, or mixed gaming, in which online and in-person players can attend the same event and interact with the players.

Robert Houp: Immersive athlete-training techniques and data-driven coaching support in fencing

Seeking to improve the athlete training, practice, and coaching experience to maximize learning while minimizing injury risk, MIT assistant coach Robert Houp aims to advance fencing pedagogy through extended reality (XR) technology and biomechanical data.

Houp, who is working with MIT.nano Immersion Lab staff, says preliminary data suggest that technology-assisted self-motion exercises can make a fencer’s movements more compact, and that reactive techniques can be improved by practicing in an immersive environment. He spoke about data-driven coaching support and athlete training at the MIT.nano IMMERSED seminar in September 2021.

With this seed grant, Houp plans to develop an immersive training system for self-paced athlete learning and biofeedback systems to support coaches, conduct scientific studies to track athletes’ progress, and advance current understanding of opponent interactions. He envisions the work having implications for athletics, biomechanics and physical therapy, and the use of XR technology for training could expand to other sports.

Jeehwan Kim: The next generation human/computer interface for advanced AR/VR gaming

The most widely used user interaction methods for AR/VR gaming are gaze and motion tracking. However, according to Jeehwan Kim, associate professor of mechanical engineering, current state-of-the-art devices fail to deliver a truly immersive AR/VR experience due to their limitations in size, power consumption, and reliability.

Kim, who is also an associate professor of materials science and engineering, has proposed a microLED/pupillary dilation (PD)-based gaze tracker and an electronic, skin-based, controller-free motion tracker for next-generation AR/VR human-computer interfaces. Kim’s gaze tracker is more compact and consumes less energy than traditional trackers. It can be integrated into see-through displays and used to develop compact AR glasses. The e-skin motion tracker conforms closely to human skin and accurately detects human motion, which Kim says will facilitate more natural human interaction with AR/VR.

This is the third year of seed grant awards from the MIT.nano Immersion Lab Gaming Program. In the program’s first two calls for proposals in 2019 and 2020, 12 projects from five departments were awarded $1.5 million in joint research funding. The collaborative proposal selection process by MIT.nano and NCSOFT ensures that awarded projects develop industrially significant advances, and that MIT researchers stay in contact with technical partners at NCSOFT during the seed grant period.

## Blood cancer cells and the immune system are the best enemies

The combination of single-cell techniques and machine learning uncovers an association between cancer cells and the immune system.

A blood test procedure. Image credits: Max Pixel, CC0 Public Domain

Researchers from the University of Helsinki and Aalto University have shown that the body’s immune system attacks itself in a rare type of blood cancer. The discovery could lead to improved treatments and a more complete understanding of the immune system’s role in other cancers.

Current treatment methods for large granular lymphocyte (LGL) leukemia, a rare type of blood cancer, are based on the understanding that cancer cells attack the body’s tissues. Prior research has therefore focused on these rogue cells in the hope of better understanding the disease.

Single-cell technologies allow analysis of individual cells and comparison of normal cells with tumor cells (purple). Image credit: Claudeau Cotta / Aalto University

‘Our research group showed ten years ago that LGL cancer cells usually have mutations in the STAT3 gene, which is now used worldwide in diagnosing the disease,’ says Professor of Translational Hematology Satu Mustajoki from the University of Helsinki.

Although rarely fatal, the blood cancer causes a number of chronic symptoms, including an increased risk of infection, anemia, and joint pain. The challenge so far has been that patients show a mixed response to treatment.
‘Current treatment methods target cancer cells and their vulnerabilities,’ explains Jani Huhtanen from the University of Helsinki and Aalto. ‘It is impossible to predict which patients will respond to treatment, because in some patients the number of activated cancer cells is reduced yet symptoms persist, while for others it is the opposite.’

Satu Mustajoki’s research group took a step back from conventional thinking and examined the role of other cells in the immune system. They used the latest single-cell techniques combined with machine learning models developed at Aalto University. This enabled the group to uncover a harmful interaction between the body’s immune system and blood cancer cells.

‘In these patients, the immune system becomes more active and signals the tumor cells to grow, providing them with a favorable environment,’ says Dipaburn Bhattacharyya, a doctoral researcher at the University of Helsinki.

The research group demonstrated that in this type of leukemia, it is not only the cancer cells that differ between patients, but also the immune system as a whole. The discovery could have important implications for current treatment methods.

‘Our research may explain the observed discrepancy between LGL cancer cell counts and symptoms,’ Huhtanen elaborated. ‘The immune system is cooperating with the cancer cells. Therefore, future treatments should target the entire immune system, not just the cancer cells, to improve patients’ quality of life.’

‘The single-cell technique opens up completely new avenues for research,’ says Tiina Kelkka, docent of immunology at the University of Helsinki. These technologies can quantify key receptor proteins in immune cells, helping researchers better understand the role of the immune system in LGL leukemia and other diseases. These receptors determine which cancer cells or pathogens a cell can fight, but advanced machine learning tools are needed to analyze the data.
‘Several different machine learning-based computational techniques were needed in this study. The latest statistical machine learning and artificial intelligence methods have proven effective in single-cell data analysis,’ says Harri Lahdesmaki, professor of computational biology and machine learning at Aalto University.

The machine learning component also includes an open-source model developed by Aalto’s Computational Systems Biology Group, which was also used to study the SARS-CoV-2 coronavirus in 2021.

‘This is the most interesting aspect of medical research, which is undergoing a significant computational transition,’ explains Huhtanen, who is working on a doctoral thesis at the University of Helsinki and Aalto’s Department of Computer Science. ‘These computational methods allow us to approach medical data without any preconceptions and see where it takes us.’

The research group now aims to investigate the role of the immune system in other types of cancer, which remain among the most serious health problems.

Source: Aalto University

## Cyber security researchers help keep the internet secure

The Internet is the backbone of our lives, supporting everything from doing business to communicating with loved ones to managing home appliances. Cars, medical equipment, agricultural equipment and security systems depend on it. Even currency, once known as “cold, hard cash”, is now traded in purely virtual form by more than 100 million people globally.

It is easy to assume that this connectivity is secure and reliable, but the online world is subject to many dangers. The growing field of cyber security aims to protect systems, and us, from cybercriminals: from state entities to small groups of saboteurs to lone-wolf, modern-day crooks who can wreak havoc from their living rooms.

Cyber security is a growing emphasis at the University of Oregon’s Department of Computer and Information Sciences.
Department faculty at the UO Center for Cyber Security and Privacy, together with colleagues in philosophy, law, business and other fields, conduct research that helps thwart threats to Internet traffic, cryptocurrencies, social media networks, infrastructure security, and more.

### denying deniers

Lei Xiao, an assistant professor in the Department of Computer Science, focuses on how to negate deniers: those who try to disrupt others’ computers by launching Distributed Denial of Service (DDoS) attacks, which can be launched from a laptop and cripple a bank of computers or an entire multinational company. Xiao was recently awarded a fellowship by Ripple Labs, the US-based developer of the cryptocurrency platform, as part of a university research initiative.

In a DDoS attack, hackers direct large amounts of data traffic toward a victim, consuming the recipient’s computing bandwidth. Receiving or transmitting legitimate information becomes impossible for the victim.

Internet service providers such as AT&T and Comcast attempt to thwart these intrusions by operating “scrubbing centers”: data centers containing multiple computers programmed to detect and defeat intruders. Malicious traffic is filtered out at the scrubbing centers, and the rest is sent on to clients. These centers are located across the country, and it is up to each service provider to determine which one to use, which traffic flows to divert, and how many computers to allocate for each suspicious incident.

Xiao is developing “smart algorithms”, sets of instructions computers can follow to make these decisions. “My algorithms will automatically and efficiently tell Internet service providers what the best decisions are to deal with every attack,” he says, “so they don’t have to address each one manually.”

### cracking down on crypto-criminals

Ripple professor Yingjiu “Joe” Lee and PhD student Sanidh Arora focus on flash loan attacks on cryptocurrency exchanges.
Cryptocurrency, currency that exists only in digital form, is traded on decentralized platforms that do not rely on the oversight of institutions such as banks or governments.

“Cryptocurrencies are very convenient and cost-effective for users,” says Lee. “Since participants have complete control over their funds, they feel secure. Plus, anyone can interact with these financial services without being censored or blocked by a third party.”

The cryptocurrency market had a record year in 2021, briefly crossing $3 trillion in November. Recent research from the Pew Research Center found that 16 percent of Americans say they have invested in, traded or used cryptocurrency. “It’s a very fast-growing platform,” Lee says.

While cryptocurrencies reduce the hacking risks facing centralized exchanges such as the New York Stock Exchange, decentralized systems offer a lot of opportunities for cybercriminals.

Individual “coin” ownership is recorded in a digital blockchain database, with copies of the information shared across the entire network of users. “The practical operation of blockchain exchanges has outpaced security measures,” says Lee. “Increasing security is imperative to protect users from economic loss.” According to the Chainalysis 2022 Crypto Crime Report, in 2021 criminals took around $14 billion from digital currency exchanges, investors and users.

A flash loan attack occurs when someone borrows potentially millions or even billions of dollars’ worth of cryptocurrency assets, uses them to purchase currency, illegally manipulates the price through a vulnerability in the computer code, and then pays off the loan, making a huge profit in as little as 30 seconds. For example, in February, hackers took advantage of a vulnerability to steal over $320 million in cryptocurrency from Wormhole, a decentralized finance platform.
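As a toy numeric sketch of the economics of such an attack (all figures are hypothetical, and the protocol vulnerability itself is abstracted away):

```python
# Toy arithmetic sketch of a flash loan price-manipulation attack.
# All numbers are invented for illustration; real attacks exploit
# specific bugs in specific protocols.
loan = 100_000_000            # borrowed crypto assets (USD value)
price = 10.0                  # fair token price before the attack
tokens_bought = loan / price  # buy 10M tokens with the loan
manipulated_price = price * 1.5   # exploit pushes the price up 50%
proceeds = tokens_bought * manipulated_price
fee = loan * 0.0009           # assumed ~0.09% flash loan fee
profit = proceeds - loan - fee
print(f"profit: ${profit:,.0f}")  # all settled in one transaction
```

Because the borrow, manipulation, sale, and repayment all settle atomically in one blockchain transaction, the attacker needs no capital of their own, which is what makes the attack so attractive.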

Lee and Arora are studying how to increase the security of the protocols governing exchanges. Some existing defenses monitor systems and identify flash loan attacks after the fact, but they cannot recover the damages. Lee adds: “A better strategy is to improve the protocol design in these decentralized exchanges to prevent flash loan attacks, or to detect and block them before they cause any economic harm. This is a topic we are working on.”

### master of disaster

With the help of more than \$3 million in grants from the National Science Foundation and others, Ram Durairajan is making the network more innovative and more robust.

Durairajan, an assistant professor in the department, is working with PhD student Matthew Hall to counter denial-of-service threats by reconfiguring the paths of the wavelengths that transmit data.

He uses the idea of a museum thief as a metaphor for an attacker. “Imagine someone trying to steal a painting hanging in a museum,” says Durairajan. “The museum is the network. The painting is the service the attacker is trying to steal. We can change the floor plan of the museum, that is, the configuration of the wavelengths carrying the data, every time, so the thief won’t know where to go.”

Durairajan also studies how we can protect our ability to stay connected despite earthquakes, tsunamis and rising seas. The West Coast, specifically the Oregon Coast, is the landing point for many of the underwater fiber cables connecting our continent to Asia. It is also the site of the Cascadia Subduction Zone, a fault line that separates two major tectonic plates and is overdue for a devastating earthquake.

Durairajan, with the help of undergraduate Juno Meyer, developed an assessment tool called ShakeNet to analyze the risk that earthquakes and aftershocks pose to wired and wireless infrastructure in the Northwest. He collaborated with colleagues from the Department of Earth Sciences, who helped develop ShakeAlert, an earthquake early warning system. Durairajan combined a map of earthquake-prone areas with one of the fiber-optic infrastructure and found that about 65 percent of the fiber infrastructure and cell towers on the West Coast would be damaged in a violent earthquake.

Using ShakeNet’s route planner capability, data can be sent during an earthquake via longer but less vulnerable routes. For example, data transfer between Seattle and Portland can be routed via Kennewick and Boise, avoiding the I-5 corridor, which can be affected by strong tremors. “There’s this tension between what Internet service providers do and what Mother Nature does,” says Durairajan. “Our aim is to take that tension away, so that you won’t find the shortest path, but you will find a stronger path.”
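The route-planning idea can be sketched as a shortest-path search that minimizes cumulative seismic risk rather than distance. The topology, risk values, and function names below are hypothetical illustrations, not ShakeNet’s actual data:

```python
import heapq

# Hypothetical fiber topology: each edge carries (latency, seismic risk).
# The direct Seattle-Portland link follows the quake-prone I-5 corridor.
edges = {
    "Seattle":   [("Portland", 1, 0.9), ("Kennewick", 3, 0.2)],
    "Kennewick": [("Boise", 4, 0.1)],
    "Boise":     [("Portland", 5, 0.2)],
    "Portland":  [],
}

def safest_path(graph, src, dst):
    """Dijkstra's algorithm over cumulative risk instead of distance."""
    heap = [(0.0, src, [src])]
    settled = {}
    while heap:
        risk, node, path = heapq.heappop(heap)
        if node == dst:
            return risk, path            # first pop of dst is minimal
        if settled.get(node, float("inf")) <= risk:
            continue                     # already reached more safely
        settled[node] = risk
        for nxt, _latency, edge_risk in graph[node]:
            heapq.heappush(heap, (risk + edge_risk, nxt, path + [nxt]))
    return None

risk, path = safest_path(edges, "Seattle", "Portland")
print(risk, path)
# The detour via Kennewick and Boise (risk 0.5) beats the direct
# I-5 link (risk 0.9), at the cost of higher latency.
```

A production planner would trade off risk against latency rather than ignore latency entirely; minimizing risk alone keeps the sketch short.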

Durairajan has also studied the threats posed by climate change. He recently found that thousands of miles of fiber-optic cable in the US, mainly in the areas around New York, Miami and Seattle, will be severely affected by rising sea levels.

He acknowledges that his focus on unpleasant scenarios may lead some to tease him about having a gloomy outlook.

“I’m seriously not a fun person,” Durairajan says. “But as long as people are safe and the Internet works well, I’m happy to be a negative person.”

Source: University of Oregon

## Intel to set up quantum computing test bed for Q-NEXT

Partnerships with the world’s leading chip maker accelerate the development of quantum devices.

Argonne is soon to receive a special delivery.

This year, tech company Intel will deliver its first quantum computing test bed to the US Department of Energy’s (DOE) Argonne National Laboratory, the host laboratory for Q-NEXT, a DOE National Quantum Information Science Research Center.

Intel’s Janet Roberts adjusts a dilution refrigerator, which creates the ideal environment for qubit operation, at Intel Labs’ Hillsboro, Oregon, campus. (Image by Walden Kirsch/Intel Corporation.)

The machine will be the first major component installed at Argonne’s Quantum Foundry, which will serve as a factory for manufacturing and testing new quantum materials and devices. It is expected to be completed this year.

Q-NEXT scientists will use Intel’s machine to run quantum algorithms on a real, physical quantum computing test bed instead of in a simulated quantum environment. And Intel will get feedback from the scientists on the quality of the machine’s components and its overall operation.

“I love working on challenging, interesting problems. I think building a practical quantum computer is one of the most challenging problems I’ve been presented with.” – Janet Roberts, Intel

“Realizing quantum computing will take a lot of people working together. We need to leverage everyone’s expertise,” said Janet Roberts, who leads Intel’s quantum measurement team. “It is a kind of team sport. It’s a good area for collaboration in a pre-competitive space.”

The promise of quantum computing technology is widely publicized: a quantum computer would be able to solve problems that are impossible for today’s highest-performing computers. Its realization is expected to prove a boon not only for fundamental research but also for areas that touch our daily lives, including medicine, logistics and finance.

#### diving deep into the science

Roberts leads the effort. Working with Q-NEXT scientists, she is currently setting up the test bed’s hardware, software, and all the programming needed to bring it into operation.

The prospect of building a quantum computer was particularly attractive to Roberts, who has always sought to understand how the physical world works.

“I would often look things up and find phrases such as ‘further information is outside the scope of this book.’ I wanted to go deep enough into the science that I could understand the full scope,” said Roberts, who earned her Ph.D. in physics just before joining Intel in 1995.

Her curiosity about nature extends beyond the laboratory. An avid snow skier, hiker, rock climber and mountain biker, Roberts is also a certified master scuba diver, with certifications in deep and wreck diving. She has made more than 900 dives around the world, including in Australia, the Caribbean, Chuuk, Fiji, Indonesia, Malaysia, Palau and the North American Northwest, often carrying 100 pounds of gear.

“I am usually either scuba diving or planning to scuba dive,” she said. “It is a completely different world underwater, with animals and plants unlike those we see on land. It is like being on another planet. It offers opportunities to experience different cultures as well as underwater environments.”

As exciting as scuba diving has been for Roberts, the challenge of building a quantum computer may beat it.

“I love working on challenging, interesting problems,” she said. “I think building a practical quantum computer is one of the most challenging problems I’ve been presented with.”

She was presented with that challenge in 2015, when Intel entered the quantum tech industry with the launch of its quantum computing program. Partnering with the Delft University of Technology, Intel began the program with the goal of applying high-volume manufacturing techniques to the fabrication of quantum devices.

Roberts was one of the first two engineers to join the company’s quantum computing team, which helped develop Intel’s qubits, the quantum analog of binary computing bits.

#### From semiconductor chips to spin qubits

Different types of qubits process data in different ways. Intel focuses on a class called spin qubits. These devices store information in a material’s spin, a fundamental characteristic of atomic and subatomic particles.

“It turns out that spin qubits look like transistors, of which Intel ships 800 quadrillion each year. The similarity between the two technologies means we can take advantage of Intel’s expertise in semiconductor design and manufacturing for spin qubits,” said Roberts. “We are using Intel’s infrastructure to help make quantum computing a reality.”

Qubit development is only one part of Intel’s quantum R&D. The company also conducts R&D on quantum algorithms, control electronics for quantum devices, and quantum interconnects, components that enable quantum information to be transmitted between different media and platforms.

“Intel’s work in developing quantum devices resonates strongly with Q-NEXT’s mission, and the company’s partnership has been invaluable to the collaboration,” said David Awschalom, director of Q-NEXT, Argonne senior scientist, Liew Family Professor and vice dean for research and infrastructure at the University of Chicago’s Pritzker School of Molecular Engineering, and founding director of the Chicago Quantum Exchange. “The entire Q-NEXT-Intel team, including Janet, is committed to helping the center achieve its goals. Once the semiconductor test bed is up and running, it’s going to open up all kinds of possibilities for creating new quantum materials and devices.”

This work is supported by the US Department of Energy’s Office of Science’s National Quantum Information Science Research Centers.

Q-NEXT is a US Department of Energy National Quantum Information Science Research Center led by Argonne National Laboratory. Q-NEXT brings together world-class researchers from national laboratories, universities and US technology companies with the single goal of developing the science and technology to control and distribute quantum information. The Q-NEXT collaborators and institutions will create two national foundries for quantum materials and devices, develop networks of sensors and secure communications systems, establish simulation and network test beds, and train the next generation of the quantum-ready workforce to ensure continued US scientific and economic leadership in this rapidly growing field. For more information, visit https://www.q-next.org.

Source: ANL

## Dancing Under the Stars: Video Denoising in Starlight

To take pictures in dark settings, photographers use cameras that increase gain, effectively making each pixel more sensitive to light. However, higher gain also amplifies the noise present in each frame. A recent paper on arXiv.org proposes a new method for sub-millilux video denoising.

Examples of noisy images with different levels of noise. Image credits: MDF, CC-BY-SA-3.0 via Wikimedia

The researchers propose using a camera optimized for low-light imaging, set to its highest gain setting. The camera noise model is learned using physics-inspired noise generators and still noisy images easily obtainable from the camera. The noise model then generates synthetic clean/noisy video pairs to train a video denoiser.

The effectiveness of the denoising network is demonstrated on 5-10 fps video taken on a clear, moonless night. Several challenging scenes with sweeping motion are presented, such as dancing lit only by the light of the Milky Way as a meteor showers overhead.

Imaging in low light is extremely challenging due to low photon counts. Using sensitive CMOS cameras, it is currently possible to take video at night under moonlight (0.05–0.3 lux illumination). In this paper, we demonstrate photorealistic video under starlight (no moon present, <0.001 lux) for the first time. To enable this, we develop a GAN-tuned physics-based noise model to more accurately represent camera noise at the lowest light levels. Using this noise model, we train a video denoiser using a combination of simulated noisy video clips and real noisy still images. We capture a 5-10 fps video dataset with significant motion at approximately 0.6-0.7 millilux with no active illumination. Compared to alternative methods, we achieve improved video quality at the lowest light levels, demonstrating photorealistic video denoising in starlight for the first time.

Research Article: Monakhova, K., Richter, S. R., Waller, L., and Koltun, V., “Dancing Under the Stars: Video Denoising in Starlight”, 2022. Link of Paper: https://arxiv.org/abs/2204.04210
Project Page: https://kristinamonakhova.com/starlight_denoising/

## Do As I Can, Not As I Say: Grounding Language in Robotic Affordances

Large language models (LLMs) can perform tasks such as creating long text based on prompts, answering questions, or even engaging in a dialogue on a wide range of topics. This knowledge can be used to broaden the set of tasks that robots can plan and perform.

Therefore, a recent paper published on arXiv.org looks at the problem of how to extract knowledge from an LLM to enable an embodied agent, such as a robot, to follow high-level textual instructions.

Image credit: Fabrice Florin, CC BY-SA 2.0 via Flickr

The researchers’ goal is to make the LLM aware of the agent’s available and feasible repertoire of skills, which provides it with an awareness of both the agent’s capabilities and the current state of the environment. They propose an algorithm that extracts and leverages the knowledge within the LLM for physically grounded tasks.
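The grounding idea can be sketched in a few lines: rank each candidate skill by the product of the language model’s likelihood for that skill and a learned affordance value saying how feasible the skill is right now. The score tables below are made-up stand-ins for a real LLM and real value functions, and the skill names are invented for illustration.

```python
# Hypothetical LLM likelihoods p(skill | "clean up the spill").
llm_score = {
    "find a sponge": 0.40,
    "pick up the sponge": 0.35,
    "go to the table": 0.15,
    "pick up the apple": 0.10,
}

# Hypothetical affordance values: can the robot do this skill now?
affordance = {
    "find a sponge": 0.9,
    "pick up the sponge": 0.1,  # low: no sponge located yet
    "go to the table": 0.8,
    "pick up the apple": 0.7,
}

def select_skill(llm_score, affordance):
    # The combined score grounds the language model's preference in
    # what is physically feasible in the current state.
    return max(llm_score, key=lambda s: llm_score[s] * affordance[s])

best = select_skill(llm_score, affordance)
print(best)  # "find a sponge" scores 0.40 * 0.9 = 0.36, the maximum
```

Note how the combined score demotes “pick up the sponge” even though the language model rates it highly: the affordance value knows no sponge is within reach yet.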

Evaluation on real-world robotic tasks confirms that the algorithm can execute temporally extended, complex, and abstract instructions.

Large language models can encode a wealth of semantic knowledge about the world. Such knowledge could be extremely useful to robots aiming to act upon high-level, temporally extended instructions expressed in natural language. However, a significant weakness of language models is that they lack real-world experience, which makes it difficult to leverage them for decision making within a given embodiment. For example, asking a language model to describe how to clean up a spill might result in a reasonable narrative, but it may not be applicable to a particular agent, such as a robot, that needs to perform this task in a particular environment. We propose to provide real-world grounding by means of pretrained skills, which are used to constrain the model to propose natural language actions that are both feasible and contextually appropriate. The robot can act as the language model’s “hands and eyes,” while the language model supplies high-level semantic knowledge about the task. We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedures for performing complex and temporally extended instructions, while value functions associated with these skills provide the grounding necessary to connect this knowledge to a particular physical environment. We evaluate our method on a number of real-world robotic tasks, where we show the need for real-world grounding and that this approach is capable of completing long-horizon, abstract, natural language instructions on a mobile manipulator. The project website and video can be found at this https URL

Research Article: Ahn, M., et al., “Do As I Can, Not As I Say: Grounding Language in Robotic Affordances”, 2022. Link: https://arxiv.org/abs/2204.01691

## A ‘Cautionary Tale’ About Location Tracking

Data about our habits and activities is continuously collected through mobile phone apps, fitness trackers, credit card logs, websites visited and other means.

But if we turn off data tracking on our devices, can we avoid being traced?

No, according to a new study.

“Turning off your location data completely won’t help,” says Gourab Ghoshal, an associate professor of physics, mathematics and computer science and the Stephen Biggar ’92 and Elizabeth Asaro ’92 Fellow in Data Science at the University of Rochester.

Ghoshal, working with colleagues at the University of Exeter, the Federal University of Rio de Janeiro, Northeastern University and the University of Vermont, applied techniques from information theory and network science to determine how far-reaching an individual’s data can be. The researchers found that even when individual users turned off data tracking and did not share their own information, their mobility patterns could still be predicted with surprising accuracy from data collected from their acquaintances.

Even worse, says Ghoshal, “nearly as much hidden information can be extracted from complete strangers with whom a person co-locates.”

The researchers published their findings in Nature Communications.

### The Smoking Gun: Your ‘Colocation Network’

The researchers analyzed four datasets: three location-based social network datasets composed of millions of check-ins on apps such as Brightkite, Facebook and Foursquare, and one call detail record dataset containing more than 22 million calls by about 36,000 anonymous users.

They developed a “colocation” network to differentiate between the mobility patterns of two groups of people:

• people who are socially tied to a person, such as family members, friends, or coworkers
• people who are not socially tied to a person but who happen to be in the same place at the same time. These can include people who work in the same building but for different companies, parents whose children attend the same school but are strangers to each other, or people who shop at the same grocery store.
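A colocation network of this kind can be built directly from check-in records: link two users whenever they appear at the same place in the same time window. The sketch below is a toy illustration, not the paper’s pipeline, and the check-in data is invented.

```python
from collections import defaultdict
from itertools import combinations

# Toy check-ins: (user, place, hour). Entirely invented example data.
checkins = [
    ("alice", "cafe", 10), ("bob", "cafe", 10),   # same place, same hour
    ("bob", "gym", 12), ("carol", "gym", 12),
    ("alice", "park", 15),
]

def colocation_edges(checkins):
    """Link every pair of users seen at the same (place, hour)."""
    buckets = defaultdict(set)
    for user, place, hour in checkins:
        buckets[(place, hour)].add(user)
    edges = set()
    for users in buckets.values():
        for u, v in combinations(sorted(users), 2):
            edges.add((u, v))
    return edges

edges = colocation_edges(checkins)
print(edges)  # links alice-bob (cafe) and bob-carol (gym)
```

Note that alice and carol share no edge: they were never co-located, even though both are linked to bob. Edges in this network say nothing about whether the pair actually know each other, which is exactly the point of the two-group comparison above.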

By applying information theory and measures of entropy — the degree of randomness or structure in a sequence of place visits — the researchers learned that the data of people who are socially tied to a person contains up to 95 percent of the information needed to predict that person’s mobility pattern. Even more astonishingly, they found that strangers who are not socially tied to the person can also provide significant information, allowing up to 85 percent of a person’s movement to be predicted.
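The entropy measure mentioned above can be illustrated with a minimal sketch. This computes plain first-order Shannon entropy over a sequence of place visits — a simplification of the cross-entropy estimators used in mobility-predictability research — and the visit sequences are invented: a routine life (lower entropy) is more predictable than a varied one (higher entropy).

```python
import math
from collections import Counter

def shannon_entropy(visits):
    """First-order Shannon entropy (bits) of a visit sequence."""
    counts = Counter(visits)
    n = len(visits)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

regular = ["home", "work"] * 10                   # highly routine life
varied = ["home", "work", "gym", "cafe", "park"] * 4

# A routine sequence carries less uncertainty, so it is easier to predict.
assert shannon_entropy(regular) < shannon_entropy(varied)
print(round(shannon_entropy(regular), 2))  # 1.0 bit: only two places
```

The study’s insight is that this uncertainty about your movements can be reduced not only by your own data, but by entropy shared with the sequences of your contacts and co-located strangers.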

### ‘A cautionary tale’

The ability to predict the locations of individuals or groups could be beneficial in areas such as urban planning and epidemic control, where contact tracing based on mobility patterns is an important tool for preventing the spread of disease. In addition, many consumers appreciate the potential of data mining to offer tailored recommendations for restaurants, TV shows, and commercials.

However, Ghoshal says, data mining is a slippery slope, especially because, as research has shown, individuals sharing data through mobile apps may inadvertently provide information about others.

“We are presenting a cautionary tale that people should be aware of how far-reaching their data can be,” he says. “This research has a lot of implications for surveillance and privacy issues, especially with the rise of authoritarian impulses. We can’t just tell people to turn off their phones or go off the grid. Conversations need to happen to put in place guidelines that govern how companies collect and use your data.”

Source: University of Rochester

## Validating a cortisol-inspired framework for human-robot interaction

Unlike robots, people adapt naturally to others: they instinctively change their actions, intonations, and speech according to the perceived needs of their peers. Furthermore, people prefer to interact with partners who share the same desire for closeness and intimacy. Could this principle be used to improve interactions between humans and robots?

A humanoid robot. Image credits: Nicolas-Halodi, CC-BY-SA-4.0 via Wikimedia

A paper recently published on arXiv.org notes that these observations could be a turning point in human-robot interactions.

The researchers propose to endow the robot with an internal cortisol-inspired framework in which the robot’s cortisol level changes as a result of its own attachment style and the way its partner behaves. The robot infers the attachment style of a human partner from the effect the interaction has on its cortisol level, and then adapts its behavior accordingly.
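The dynamics just described can be caricatured as a simple update rule: an internal level that rises under stressful social stimuli and decays back toward a baseline, with the response scaled by an attachment-style parameter. This is a toy sketch, not the paper’s model; every constant and parameter name below is an invented assumption.

```python
def update_cortisol(level, stimulus, attachment_gain=1.0,
                    baseline=0.2, decay=0.1):
    """One update step of a toy internal-cortisol variable.

    stimulus is in [0, 1], higher = more stressful social signal.
    attachment_gain (hypothetical) scales how strongly this robot's
    attachment style reacts to the partner's behavior.
    """
    level += attachment_gain * stimulus   # reaction to the partner
    level -= decay * (level - baseline)   # homeostatic decay to baseline
    return max(0.0, level)

level = 0.2  # start at baseline
# A brief burst of stressful stimuli, like an unresponsive partner.
for stimulus in [0.0, 0.8, 0.8, 0.0, 0.0]:
    level = update_cortisol(level, stimulus)
print(round(level, 2))  # elevated above the 0.2 baseline
```

Comparing such trajectories under different stimulus patterns is, in spirit, how a robot could distinguish partners whose behavior stresses it more or less.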

The first findings confirm that the framework mimics the hormonal dynamics of human–human interaction when modulated by specific social stimuli.

When interacting with others in our daily lives, we prefer the company of people who share with us the same desire for closeness and intimacy (or lack thereof), as it determines whether our interactions will be more or less pleasant. This kind of compatibility can be inferred from our innate attachment style. The attachment style represents our characteristic way of thinking, feeling and behaving in close relationships, and in addition to our behavior, it can also affect us at a biological level through our hormonal dynamics. As we investigate how to enrich human-robot interaction (HRI), one possible solution could be enabling robots to understand the attachment style of their partners, which could improve their perception of their partners and help them behave adaptively during interactions. We propose to use the relationship between attachment style and the cortisol hormone to endow the humanoid robot iCub with an internal cortisol-inspired framework that allows it to infer a participant’s attachment style from the effect of the interaction on its cortisol level (referred to as R-cortisol). In this work, we present our cognitive framework and its validation during the replication of a well-known paradigm of hormonal modulation in human–human interaction (HHI): the Still Face paradigm.

Research Article: Mongile, S., Tanevska, A., Rea, F., and Sciutti, A., “Validating a Cortisol-Inspired Framework for Human-Robot Interaction with a Replication of the Still Face Paradigm”, 2022. Link: https://arxiv.org/abs/2204.03518