Building an Artificial Brain

Paul Allen’s $500 million quest to dissect the mind and code a new one from scratch.

Paul Allen has been waiting for the emergence of intelligent machines for a very long time. As a young boy, Allen spent much of his time in the library reading science-fiction novels in which robots manage our homes, perform surgery and fly around saving lives like superheroes. In his imagination, these beings would live among us, serving as our advisers, companions and friends.

Now 62 and worth an estimated $17.7 billion, the Microsoft co-founder is using his wealth to back two separate philanthropic research efforts at the intersection of neuroscience and artificial intelligence that he hopes will hasten that future.

The first project is to build an artificial brain from scratch that can pass a high school science test. It sounds simple enough, but trying to teach a machine not only to respond but also to reason is one of the hardest software-engineering endeavors ever attempted — far more complex than building his former company’s breakthrough Windows operating system, said to have 50 million lines of code.

The second project aims to understand intelligence by coming at it from the opposite direction — by starting with nature and deconstructing and analyzing the pieces. It’s an attempt to reverse-engineer the human brain by slicing it up — literally — modeling it and running simulations.

“Imagine being able to take a clean sheet of paper and replicate all the amazing things the human brain does,” Allen said in an interview.

He persuaded University of Washington AI researcher Oren Etzioni to lead the brain-building team and Caltech neuroscientist Christof Koch to lead the brain-deconstruction team. For them and the small army of other PhD scientists working for Allen, the quest to understand the brain and human intelligence has parallels in the early 1900s when men first began to ponder how to build a machine that could fly.

There were those who believed the best way would be to simulate birds, while there were others, like the Wright brothers, who were building machines that looked very different from species that could fly in nature. And it wasn’t clear back then which approach would get humanity into the skies first.

Whether they create something reflected in nature or invent something entirely novel, the mission is the same: conquering the final frontier of the human body — the brain — to enable people to live longer, better lives and answer fundamental questions about humans’ place in the universe.

“We are starting with biology. But first you have to figure out how you represent that knowledge in a software database,” Allen said. “I wish I could say our understanding of the brain could inform that, but we’re probably a decade away from that. Our understanding of the brain is so elemental at this point that we don’t know how language works in the brain.”

In the Hollywood version of the approaching era of artificial intelligence, the machines will be so sleek and sophisticated and alluring that humans will fall in love with them. The 21st century reality is a little more boring.

At its most basic level, artificial intelligence is an area of computer science in which coders design programs to enable machines to act intelligently, in the ways that humans do. Today’s AI programs can adjust the temperature in your home or your driving route to work based on your patterns and traffic conditions. They can tell you someone stole your credit card to make a charge in a strange city or who has the best odds of winning tonight’s soccer match.

In medicine, artificial intelligence algorithms are already being used to do things such as predicting manic episodes in those suffering mental disease; pinpointing dangerous hot spots of asthma on maps; guessing which cancer treatments might give you a better chance at living longer based on your genetic makeup and medical history; and finding connections between things such as weather, traffic and your health.

But when it comes to general knowledge, scientists have struggled to create a technology that can do as well as a 4-year-old human on a standard IQ test. Although today’s computers are great at storing knowledge, retrieving it and finding patterns, they are often still stumped by a simple question: “Why?”

So while Apple’s Siri, Amazon’s Alexa and Microsoft’s Cortana — despite their maddening quirks — do a pretty good job of reminding you what’s on your calendar, you’d probably fire them in less than a week if you put them up against a real person.

That will almost certainly change in the coming years as billions of dollars in Silicon Valley investments lead to the development of more sophisticated algorithms and upgrades in memory storage and processing power.

The most exciting — and disconcerting — developments in the field may be in predictive analytics, which aims to make an informed guess about the future. Although it’s currently mostly being used in retail to figure out who is more likely to buy, say, a certain sweater, there are also test programs that attempt to figure out who might be more likely to get a certain disease or even commit a crime.

Google, which acquired AI company DeepMind in 2014 for an estimated $400 million, has been secretive about its plans in the field, but the company has said its goal is to “solve intelligence.” One of its first real-world applications could be to help self-driving cars become better aware of their environments. Facebook chief executive Mark Zuckerberg says his social network, which has opened three different AI labs, plans to build machines “that are better than humans at our primary senses: vision, listening, etc.”

All of this may one day be possible. But is it a good idea?

Advances in science often have made people uneasy, even angry, going back to Copernicus, who placed the sun — not the Earth — at the center of the universe. Artificial intelligence is particularly sensitive, because the brain and its ability to reason are what make us human.

In May 2014, cosmologist Stephen Hawking caused a stir when he warned that intelligent computers could be the downfall of humanity and “potentially our worst mistake in history.” Elon Musk — the billionaire philanthropist who helped found SpaceX, Tesla Motors and PayPal — in October 2014 lamented that a program whose function is to get rid of e-mail spam may determine “the best way of getting rid of spam is getting rid of humans.” He wasn’t joking.

Allen and Etzioni say that they also have thought a lot about how AI might change the world and that they respectfully disagree with the doomsayers. The technology will not exterminate but empower, they say, making humans more inventive and helping solve huge global problems such as climate change.

“There are people who say, ‘I don’t care about the ethics of it all. I’m a technologist.’ We are the opposite of that. We think about the impact of this kind of technology on society all the time,” said Etzioni, who is chief executive of the Allen Institute for Artificial Intelligence, “and what we see is a very positive impact.”

Koch is more hesitant.

“Runaway machine intelligence is something we need to think about more,” Koch, president and chief science officer of the Allen Institute for Brain Science, said. “Clearly, we can’t say let’s not develop any more AI. That’s never going to happen. But we need to figure out what are the imagined dangers and what are the real ones and how to minimize them.”

Allen envisions an AI machine that would be like a smart assistant rather than an independent being, “answering questions and clarifying things for you and so forth.” But he admits he has wondered whether it will one day be possible for that assistant or its descendants to evolve into something more.

“It’s a very deep question,” Allen said. “Nobody really knows what it would take to create something that is self-aware or has a personality. I guess I could imagine a day when perhaps, if we can understand how it works in the human brain, which is unbelievably complicated, it could be possible. But that is a long, long ways away.”

HUMAN BRAINS

Made up of 100 billion neurons, each one connected to as many as 10,000 others, the human brain is the most complex biological system in existence. When you see, hear, touch, taste or think, neurons fire with an electrochemical signal that travels across the synapses between neurons, where information is exchanged.

Somewhere within this snarl are patterns and connections that make people who they are — their memories, preferences, habits, skills and emotions.

Building on the work that Allen accelerated through his philanthropy, governments around the world have launched their own brain initiatives in recent years. The European Commission’s Human Brain Project, which began in 2013 with about $61 million in initial funding, aims to create an artificial model of the human brain within a decade. President Obama announced the United States’ own BRAIN (Brain Research through Advancing Innovative Neurotechnologies) effort in 2013 to great fanfare, comparing it to the Human Genome Project that led to the current genetic revolution. BRAIN was launched with initial funding of $110 million.

Some futurists even believe that the brain, not the body, may be the key to immortality — that at some point we’ll be able to download our brains to a computer or another body and live on long after the bodies we were born in have decayed.

Allen’s own interest in the brain began with his love of tinkering.

He always has been interested in how things were put together, from steam engines to phones, and as he grew older he became fascinated with the brain.

“Computers are really basically computing elements and a lot of memory,” he said. “They are pretty easy to understand, as compared to the brain, which was designed by evolution.”

But it wasn’t until his mother, Faye, a former elementary school teacher, became ill with Alzheimer’s that Allen’s brain philanthropy took shape.

Allen was very close to her and was devastated when she began to regularly exhibit symptoms in 2003.

“It deepened all my motivations to want to bring forward research about the functions of the brain so that we can create treatments for the different pathologies that can develop. . . . They are horrific to watch progress,” he said.

Within months, he had founded the Allen Institute for Brain Science and seeded it with $100 million. But he didn’t want to just replicate what was being done at university and government labs.

“He wanted to do a different brand of science, tackle bigger questions,” said Allan Jones, who was involved in the founding of the institute and is now its chief executive. Allen’s marching orders were simple: Figure out “how information is coded in the brain.”

Allen, who has committed a total of nearly $500 million to the institute, thought that gathering great minds under one roof, all focused on the same goal, could accelerate the process of discovery.

“Our whole approach is to do science on an industrial scale and trying to do things exhaustively and not just focus on one path,” Allen said.

Allen’s “big science” strategy has attracted, often with significantly increased salaries, some of the world’s top talent — including a number of tenured professors at the peak of their careers, such as R. Clay Reid, a neurobiologist who left Harvard Medical School in 2012 to continue his work on how vision works in the brain.

“The brain is the hardest puzzle I can think of, and never before has such a large group been directed to reverse-engineer how it works,” he said.

The Allen Institute also has pioneered a number of other approaches uncommon in biology research.

First, the brain institute started with data, not a hypothesis. Not just ordinary big data but exabytes of it — billions of gigabytes, the scale of global Internet traffic in a month — detailing the brain’s growth, white matter and connections, along with every gene expressed in the brain. Researchers spent their first few years painstakingly slicing donor brains into thousands of microthin anatomical cross sections that were then analyzed and mapped.

Then, it took a page from the open-source movement, which advocates making software code transparent and free, and it made all of its data publicly available, inviting anyone to scrutinize and build upon it.

By 2006, the institute’s scientists had created the most comprehensive three-dimensional map of gene expression in the mouse brain and released that atlas to the public, as promised. By 2010, they had mapped the human brain. Since then, researchers around the world have built on their work; the mouse brain paper alone has been cited by more than 1,800 peer-reviewed scientific articles.

Now many of the institute’s 265 employees are turning to more tangible problems, studying autism, schizophrenia, traumatic brain injury and glioblastoma, a rare but particularly aggressive type of brain tumor, as well as projects to understand the nature of vision.

ARTIFICIAL BRAINS

All along, Allen has been backing parallel projects in artificial brains.

He wondered whether it might be possible to encode books — especially textbooks — into a computer brain to create a foundation upon which a machine could be a digital Aristotle, using a higher level of knowledge to interact with humans.

“I wasn’t aiming to solve the mystery of human consciousness,” he explained in his 2011 memoir. “I simply wanted to advance the field of artificial intelligence so that computers could do what they do best (organize and analyze information) to help people do what they do best, those inspired leaps of intuition that fuel original ideas and breakthroughs.”

That idea grew into the Allen Institute for Artificial Intelligence (or AI2 as it is called by its employees), which opened its doors on Jan. 1, 2014, and currently has 43 employees — a number of them recruited from places like Google and Amazon. Allen hasn’t publicly announced the exact amount of his investment, but Etzioni said it is in the tens of millions of dollars and is growing.

Over the past year, Etzioni and his team have created Aristo, the institute’s first digital entity, which is now being trained to pass the New York State Regents high school biology exam.

Not only do the engineers have to figure out how to represent memory, but they have to give this entity the ability to parse natural language and make complex inferences.

It’s not as easy as it sounds.

“It’s paradoxical that things that are hard for people are easy for the computer, and things that are hard for the computer any child can understand,” Etzioni said. For example, he said, computers have a difficult time understanding simple sentences such as “People breathe air.” A computer might wonder: Does this apply to dead people? What about people holding their breath? All the time? Is air one thing? Is it made up of a single molecule? And so on. The data that Aristo possesses doesn’t add up to the wisdom an elementary school child has accumulated about breathing.

Another test question would require an AI program to interpret this narrative: “The ball crashed through the table. It was made of styrofoam.” A human might grumble about pronoun-antecedent ambiguity but still quickly conclude that the second sentence described the table. Now if the second sentence were changed to “It was made of steel,” the human would conclude it described the ball. But that type of logic requires a large amount of “common sense” background knowledge — about materials such as styrofoam, steel and wood, how they work, furniture, how balls roll and so forth — which has to be explicitly taught to computers.
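That last example, a classic Winograd-style test, can be made concrete in a few lines of code. The Python sketch below is a hypothetical toy, not how AI2’s Aristo actually works: it hand-codes a single piece of common-sense knowledge — the relative sturdiness of materials — and uses it to guess what “It” refers to. The point is how much background knowledge must be spelled out explicitly before even this trivial heuristic can run.

```python
# Toy illustration (not AI2's system): resolving the pronoun in
# "The ball crashed through the table. It was made of X."
# The heuristic needs explicit background knowledge about materials,
# here hand-coded as a rough sturdiness scale (hypothetical values).
STURDINESS = {"styrofoam": 1, "wood": 3, "steel": 9}

def resolve_pronoun(material: str) -> str:
    """Guess whether 'It' refers to the ball or the table.

    Heuristic: the object that crashed *through* the other must be the
    harder one, so a sturdy material suggests the ball and a flimsy
    material suggests the table it broke through.
    """
    if STURDINESS[material] >= STURDINESS["wood"]:
        return "ball"   # sturdy enough to smash through a table
    return "table"      # flimsy -- more likely the thing that broke

print(resolve_pronoun("styrofoam"))  # -> table
print(resolve_pronoun("steel"))      # -> ball
```

A real system cannot rely on a three-entry lookup table, of course; it would need comparable facts about thousands of materials, objects and physical interactions, which is exactly the mountain of “common sense” the article describes.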

So far, Aristo has passed the first-, second- and third-grade biology tests and is working his way through the fourth. The last time Aristo took this test, a few months ago, the grade was about a C. Or, more precisely, 73.5 percent.

Etzioni says that’s pretty good — for a computer. Sounding like a glowing parent, he said, “We’re very proud he has started to make measurable progress.”

But he estimates that Aristo needs at least one more year to get an A on fourth-grade biology, mostly because the team needs to figure out image recognition and visual processing so that the computer can interpret the diagrams.

Five more years to pass the eighth-grade test.

After that, who knows?

CONVERGENCE

The artificial intelligence researchers and their counterparts in brain science are in a kind of race, Allen says, and their work one day will converge — although to what end he’s not sure.

Koch, who leads the team that is reverse-engineering the brain, explained that for Allen, understanding the brain is about cracking a code.

“He’s fascinated by how codes work. What codes are used to process information in the cerebral cortex? Is the code different in a mouse versus a human? It’s the same for programming code. He wants to know, ‘Can you program intelligence in an artificial way?’ ” Koch said.

The implications of this work are in­cred­ibly complex, and Hawking and Musk — who in January announced he would donate $10 million to fund researchers who are “working to mitigate existential risks facing humanity” — are hardly the only ones calling for researchers to slow down and think about the consequences of superintelligent machines.

“There’s a huge debate right now about whether simulating the human brain is necessary to get the kind of AI we want or whether simulating the human brain would be the equivalent of reproducing the brain. Nobody knows exactly what this means,” said Jonathan Moreno, a bioethicist at the University of Pennsylvania.

Eric Horvitz, director of Microsoft Research’s main lab in Redmond, Wash., and a past president of the Association for the Advancement of Artificial Intelligence, stepped into the debate in December by announcing he would fund a major research project on the potential effects of AI on society.

Led by Stanford University historians, the study would run for 100 years. The first report is scheduled to be completed in 2015 and subsequent ones will be published every five years, containing updates on technological progress and recommendations and guidelines about the law, economics, privacy and other issues.

“A number of years back we were hearing complaints about AI as a failure. Now that we’re seeing more successes — a presence of machine intelligence in products and services — we’re hearing some anxieties coming out that maybe the progress has been too good,” said Horvitz, who sits on the board of AI2.

He said he hopes the study will help trigger thoughtful discussion, draft guidelines and help redirect the focus in the field back to the short-term where he believes the programs can do a lot of good. He cites being able to minimize hospital errors, help make sense of scientific publications and improve car safety as worthy and achievable goals. He also said it’s critically important to think about the implications of AI for democracy, freedom and other important values in the most basic blueprints for the machines.

“If we could design them from the ground up to be supporters of their creators, they could become very strong advocates of human beings and work on their behalf,” Horvitz said.

But could those beings ever become self-aware?

Koch, the expert on the subject, isn’t sure.

On the one hand, he believes consciousness is a property of natural systems: “The job of the stomach is digestion, the heart to pump blood. Is the job of the brain consciousness?”

“In principle, once I replicate this piece of highly organized matter I should be able to get all the properties associated with it,” he said. But he said scientists and philosophers aren’t in agreement about what is the right way to do this, under what circumstances and whether it should be done at all.

Two iconic works of science fiction of the 1950s address that question in an ominous way. In Isaac Asimov’s “The Last Question,” humans ask a supercomputer, over and over across the eons, how to reverse the decay of the universe. Only the machine is left when it comes up with the answer, and in the end it commands, “Let there be light …” In Fredric Brown’s “Answer,” a “supercalculator” made up of all the machines on 96 billion planets is asked: “Is there a God?” Its answer: “Yes, now there is a God.”

“I don’t think we’re building a god by any means,” Etzioni said. “We’re building something on science. The computer is an assistant — not someone you ask, ‘Solve cancer and get back to me.’

“I think it’s going to be something very sophisticated with vast amounts of information, but I still think of it very much as a tool.”

Ariana Eunjung Cha
Source : http://www.washingtonpost.com

One thought on “Building an Artificial Brain”

  1. There are two principal approaches to AI: start at the bottom with simple devices imitating neurons and let them self-assemble intelligence as an emergent property; or start at the top with complex interpersonal behaviors and deconstruct this into ever lower level components. Both approaches have value. I began at the bottom in 1973 with artificial neurons, building up networks and teaching them. For me this remains useful as “recognizers”, generalizing answers from incomplete datasets, like the Grandmother Cell. In 1999 I turned the telescope around and worked with behaviors described as scripts (action sentences). This led to better and better human-computer interfaces using language, and lower level behaviors easily added (like a CO2 sensor that feeds a fear of smothering). Putting the two together is my current interest. I hope the Allen Institute can explore both top-down and bottom-up approaches without getting lost in the weeds or succumbing to turf wars. Good luck to us all.
