Scientists, students, and community members came together last month at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) to discuss the promise and pitfalls of artificial intelligence at the fourth TEDxMIT event.
Attendees were entertained and challenged as they explored “the good and bad of computing,” explained CSAIL Director Professor Daniela Rus, who organized the event with John Werner, an MIT fellow and managing director of Link Ventures; MIT sophomore Lucy Zhao; and graduate student Jessica Karaguesian. “As you listen to the talks today,” Rus told the audience, “consider how our world is made better by AI, and our ongoing responsibilities to ensure that the technology is deployed for the greater good.”
Rus mentioned a few new capabilities that could be enabled by AI: an automated personal assistant that monitors your sleep phases and wakes you at optimal times, as well as on-body sensors that monitor everything from your posture to your digestive system. “Intelligent assistance can help empower and augment our lives. But these intriguing possibilities should be pursued only if we can simultaneously address the challenges that these technologies bring,” Rus said.
The next speaker, Manolis Kellis, a CSAIL principal investigator and professor of electrical engineering and computer science, began by suggesting what seemed an audacious goal: using AI to “end evolution as we know it.” Viewed from a computer science perspective, he said, what we call evolution is basically brute-force search. “You’re just exploring all of the search space, creating billions of copies of every one of your programs, and just letting them fight against each other. This is just brutal. And it’s also completely slow. It took us billions of years to get here.” Might it be possible, he asked, to speed up evolution and make it less messy?
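Kellis’s description of evolution as brute-force search can be sketched in a few lines of code. The toy model below (the target string, population size, and mutation rate are all invented for illustration) copies the best candidate many times with random mutations and lets the copies compete, exactly the “billions of copies fighting each other” loop he describes, only at miniature scale:

```python
import random

random.seed(42)
TARGET = "EVOLUTION"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate):
    # Number of positions matching the target "environment".
    return sum(a == b for a, b in zip(candidate, TARGET))

def brute_force_search(generations=200, population=100, mutation_rate=0.1):
    """Evolution as brute-force search: copy the best candidate many
    times with random mutations, let the copies compete, repeat."""
    best = "".join(random.choice(ALPHABET) for _ in TARGET)
    for gen in range(generations):
        offspring = [
            "".join(
                random.choice(ALPHABET) if random.random() < mutation_rate else c
                for c in best
            )
            for _ in range(population)
        ]
        # Keep the fittest of the parent and all offspring.
        best = max(offspring + [best], key=fitness)
        if best == TARGET:
            return gen + 1, best
    return generations, best

gens, result = brute_force_search()
print(f"reached {result!r} after {gens} generations")
```

Even on this tiny nine-letter problem, thousands of candidates are evaluated and discarded, which is the wastefulness (and slowness) Kellis is pointing at.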
The answer, Kellis said, is that we can do better, and that we’re already doing better: “We’re not killing off people like Sparta did, throwing the weak off the mountain. We are actually preserving diversity.”
Knowledge, moreover, is now being widely shared, passed “horizontally” through accessible information sources, he noted, rather than “vertically,” from parent to offspring. “I’d like to argue that competition in the human species has been replaced by collaboration. Despite having a fixed cognitive hardware, we have software upgrades that are enabled by culture. Our children spend 20 years in school, filling their brains with everything that humanity has learned, regardless of the family they came from. This is the secret of our great acceleration”: the fact that human advancement in recent centuries has vastly outpaced the slow pace of evolution.
The next step, Kellis said, is to use our insights about evolution to combat an individual’s genetic susceptibility to disease. “Our current approach is simply insufficient,” he said. “We’re treating manifestations of the disease, not the causes of the disease.” A key element in his lab’s ambitious strategy to transform medicine is to identify “the causal pathways through which genetic predispositions manifest. It’s only by understanding these pathways that we can truly manipulate disease causes and reverse the disease circuitry.”
Kellis was followed by Aleksander Madry, an MIT professor of electrical engineering and computer science and CSAIL principal investigator, who told the crowd, “progress in AI is happening, and it’s happening fast.” Computer programs can now routinely beat humans in games such as chess, poker, and Go. So should we be worried about AI surpassing humans?
Madry, for one, isn’t scared, or at least not yet. And some of that reassurance stems from research that has led him to the following conclusion: despite its considerable success, AI, especially in the form of machine learning, is lazy. “Think about being lazy as this kind of smart student who doesn’t really want to study for an exam. Instead, what he does is just study all of the past years’ exams and just look for patterns. Instead of trying to actually learn, he just tries to pass the test. And this is exactly the same way in which current AI is lazy.”
For example, a machine-learning model might “recognize” grazing sheep simply by picking out images that contain green grass. If a model is trained to identify fish from photographs of anglers proudly displaying their catches, Madry explained, “the model figures out that if there’s a human in the picture, I will classify it as a fish.” The consequences can be more dire for an AI model intended to pick out malignant tumors: if the model is trained on images containing rulers that indicate the size of tumors, it may end up simply selecting the images that have rulers in them.
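The sheep-and-grass failure mode can be demonstrated with a minimal sketch. Everything below is invented for illustration: a synthetic dataset in which a spurious feature (grass) co-occurs with the label (sheep) 95 percent of the time at training, and a “lazy” classifier that keys on the spurious feature alone. It aces the training set and collapses to coin-flipping the moment the correlation is removed:

```python
import random

random.seed(0)

def make_dataset(n, spurious_corr):
    """Each example is (has_grass, is_sheep). With probability
    spurious_corr the shortcut feature agrees with the label."""
    data = []
    for _ in range(n):
        is_sheep = random.random() < 0.5
        has_grass = is_sheep if random.random() < spurious_corr else not is_sheep
        data.append((has_grass, is_sheep))
    return data

def lazy_classifier(has_grass):
    # The "lazy" model: predict sheep whenever there is grass.
    return has_grass

def accuracy(data):
    return sum(lazy_classifier(g) == y for g, y in data) / len(data)

train = make_dataset(10_000, spurious_corr=0.95)  # grass tracks sheep 95% of the time
test = make_dataset(10_000, spurious_corr=0.50)   # correlation removed

print(f"train accuracy: {accuracy(train):.2f}")  # high: the shortcut works here
print(f"test accuracy:  {accuracy(test):.2f}")   # near chance: the shortcut fails
```

The point of the sketch is that nothing in the training accuracy reveals the problem; only data where the shortcut breaks exposes that the model never learned about sheep at all.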
This leads to Madry’s biggest concerns about AI in its present form. “AI is beating us now,” he said. “But the way it beats us [involves] a little bit of cheating.” He worries that we will deploy AI “in some way in which this mismatch between what the model actually does versus what we think it does will have some catastrophic consequences.” People who rely on AI, especially in potentially life-or-death situations, need to be much more mindful of its current limitations, Madry cautioned.
There were 10 speakers in total, and the last to take the stage was Marzyeh Ghassemi, an MIT associate professor of electrical engineering and computer science and CSAIL principal investigator, who laid out her vision of how AI could best contribute to general health and well-being. But for that to happen, its models must be trained on accurate, diverse, and unbiased medical data.
It’s important to focus on the data, Ghassemi stressed, because these models are learning from us. “Since our data is human-generated… a neural network is learning how to practice from a doctor. But doctors are human, and humans make mistakes. And if a human makes a mistake, and we train an AI on that, the AI will make it, too. Garbage in, garbage out. But it’s not like the garbage is distributed equally.”
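Ghassemi’s point that the “garbage” is unevenly distributed can be made concrete with a toy simulation. The subgroups, error rates, and annotator model below are all invented for illustration: a simulated doctor mislabels cases from one subgroup five times as often as the other, so any model trained to imitate those labels inherits a disparity its aggregate accuracy would hide:

```python
import random

random.seed(1)

def doctor_label(true_label, group):
    """Simulated human annotator: mislabels group-B cases at a much
    higher rate than group-A cases (unequally distributed "garbage")."""
    error_rate = 0.05 if group == "A" else 0.25
    return true_label if random.random() > error_rate else not true_label

# Generate "training data" labeled by the simulated doctor.
records = []
for _ in range(20_000):
    group = random.choice(["A", "B"])
    truth = random.random() < 0.5
    records.append((group, truth, doctor_label(truth, group)))

# A model trained to imitate these labels inherits the annotator's
# mistakes, so measure how often the training target disagrees with
# ground truth, per subgroup.
error_by_group = {}
for g in ("A", "B"):
    rows = [(t, label) for grp, t, label in records if grp == g]
    error_by_group[g] = sum(t != label for t, label in rows) / len(rows)
    print(f"group {g}: training-label error rate ~ {error_by_group[g]:.2f}")
```

Averaged over the whole dataset the labels look only mildly noisy; split by subgroup, one population’s training signal is five times worse, which is exactly the kind of disparity Ghassemi argues must be fixed in the data before the models can be trusted.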
She pointed out that many subgroups receive worse care from medical practitioners, and members of these subgroups die from certain conditions at disproportionately high rates. This is an area, Ghassemi said, “that AI can actually help with. This is something we can fix.” Her group is developing machine-learning models that are robust, private, and fair. What’s holding them back is neither algorithms nor GPUs, she explained; it’s data. Once we collect reliable data from diverse sources, Ghassemi added, we can begin to reap the benefits that AI can bring to the realm of health care.
In addition to the speakers from CSAIL, there were also talks by members of MIT’s Institute for Data, Systems, and Society; the MIT Mobility Initiative; the MIT Media Lab; and the Senseable City Lab.
On that hopeful note, the proceedings ended. Rus and Werner then thanked everyone for coming. “Please continue to think about the good and bad of computing,” Rus urged. “And we look forward to seeing you back here in May for the next TEDxMIT event.”
The exact theme of the spring 2022 gathering will have something to do with “superpowers.” But, if December’s presentations were any indication, May’s offering is almost certain to give attendees plenty to think about. And maybe provide the inspiration for a startup or two.