Artificial Intelligence (AI)

AI is used in many stages of neural reconstruction and analysis. Insights gained from new understandings of the brain could in turn be used to improve AI.

Intelligence is “the ability to acquire and apply knowledge and skills.” From a human point of view, we usually associate intelligence with solving complex problems or coming up with particularly clever ideas. When it comes to intelligence for a machine, the bar is much lower. A computer might learn to recognize a face in an image or steer a car so it doesn’t run off the road. But from a human point of view, recognizing a friend or driving around town are far from signifiers of intelligence; they’re so basic we hardly even think about them. When you consider the trajectory of software over the past few decades, the leap to machine intelligence is impressive, yet as far as we’ve come, we’re still nowhere near anything remotely close to human cognition.


Why do we even want smarter machines? There is a legitimate concern that automation from both AI and robotics will impact the job market and economy; however, we expect that many of those changes will be positive. Advances in AI extend well beyond machines taking over some jobs that were formerly done by humans. They mean a dramatic expansion in the endeavors that we as a species can undertake. Innumerable industries, from biotech to manufacturing, are severely hindered by an inability to make sense of big data. AI means machines will be able to fill huge holes in R&D, opening up unimaginable opportunities for humans to exert creativity and strategy. The future of AI is not machines taking over everything humans do but rather humans working alongside machines, as partners. AI is a supplement for the human brain, not a replacement. And there will be some stellar technology in the years to come. Imagine…

Imagine a few years from now when everyone has a personal AI. Not only will it be able to personalize next to everything, but it will transform integral components of life, from healthcare to safety to production.


Artificial intelligence is the study of replicating human intelligence in computers.

Right now, AI describes a bunch of programming techniques that get computers to perform simple tasks that are difficult for traditional programming methods, like describing what’s in an image or understanding human speech.

Artificial intelligence is a tool that allows us to automate tasks, so they can be done reliably, repeatedly, and quickly without errors.

Without artificial intelligence, a search engine like Google would never be able to answer the 40,000 queries it receives every second.

In the near future, artificial intelligence will be used to sift through medical images around the clock to identify early-stage tumors that should be inspected further.

In the far future, artificial intelligence will be the perfect personal assistant, seamlessly coordinating your schedule so you and everyone else can be healthy, happy, and productive. (How’s that for dystopian?)

The current state-of-the-art AI is based on a technique called neural networks. These aren’t real neurons but digital ones: computational models based on neuroscience research from the 1960s that describes how the early stages of the visual system work. An input is given to the network, the network identifies a hierarchy of features, and based on the features detected, the network determines the proper output.
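
To make that “hierarchy of features” idea concrete, here is a minimal sketch in Python. The layer sizes, random weights, and input values are all made up for illustration; a real network would learn its weights from data.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy network: 4 input values -> 8 intermediate features -> 3 possible outputs.
# Real networks learn these weights during training; here they are random placeholders.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def relu(z):
    return np.maximum(0, z)            # keep only positive feature activations

def softmax(z):
    e = np.exp(z - z.max())            # subtract the max for numerical stability
    return e / e.sum()

def forward(x):
    features = relu(x @ W1 + b1)       # first layer: detect simple features in the input
    scores = features @ W2 + b2        # second layer: combine features into output scores
    return softmax(scores)             # turn scores into probabilities over the outputs

x = np.array([0.2, -1.0, 0.5, 0.8])    # one made-up input
print(forward(x))                      # three probabilities that sum to 1: the network's "proper output"
```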

As a result, neural networks are well-suited to tackle sensory processing tasks, like face recognition, language translation, and speech recognition (think Facebook image tag suggestions, Google Translate, and Siri or Alexa).


Despite much recent progress, neural networks still have many shortcomings that limit their effectiveness:

  • Neural networks require the programmer to provide a set of labeled examples that are used to train them.
  • The training process requires a large number of examples that roughly cover all the types of inputs the network will ever receive (networks can’t handle inputs that look nothing like the training examples).
  • It’s easy to fool a neural network with well-designed input examples, as sketched below.
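
Here is a toy illustration of that last point in Python. Everything here is made up (the “images” are random vectors, the classifier is a bare-bones logistic regression rather than a deep network, and the numbers are chosen so the demo works), but the idea is the same one behind the well-known “fast gradient sign” attack: nudge every input value a tiny amount in whichever direction increases the network’s error, and the tiny nudges add up to a wrong answer.

```python
import numpy as np

rng = np.random.default_rng(1)

d, n = 400, 200                                   # pretend each "image" has 400 pixels
# Two classes of toy images whose average pixel values differ only slightly.
X = np.vstack([rng.normal(-0.1, 1.0, size=(n, d)),
               rng.normal(+0.1, 1.0, size=(n, d))])
y = np.array([0] * n + [1] * n)

# Train the simplest possible classifier (logistic regression) by gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(200):
    z = np.clip(X @ w + b, -30, 30)               # clip for numerical safety
    p = 1 / (1 + np.exp(-z))                      # predicted probability of class 1
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

def predict(x):
    return int((x @ w + b) > 0)

# Take an (almost certainly correctly classified) class-0 image and nudge each
# pixel by 0.25 -- small compared with the natural pixel variation of about 1.0 --
# in the direction that increases the loss.
x = X[0]
p = 1 / (1 + np.exp(-(x @ w + b)))
grad_x = (p - 0) * w                              # gradient of the loss w.r.t. the input (true label 0)
x_adv = x + 0.25 * np.sign(grad_x)

print("clean prediction:      ", predict(x))      # expected: 0
print("adversarial prediction:", predict(x_adv))  # in this toy setup it almost always flips to 1
```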


Is AI dangerous?

Like all technology, there are costs and benefits, depending on how the technology is used. Unlike other technologies, where humans are the sole decision makers, AI introduces another decision-making entity into the loop.

Even now, we face ethical dilemmas about how to assign blame when a decision made by a non-thinking AI causes injury to someone (see the Trolley Problem).


What’s a neural network?

Neural networks are a tool that a programmer can use to solve a problem for which a human can easily identify a solution but isn’t able to easily explain all the steps they took to get there. For example: given a picture, indicate whether there’s a dog or cat in the photo.


But how? What’s going on in there? Is this the most unhelpful chart ever because there is a huge black box in the middle?

A human can easily do this kind of task. But writing a program that describes what a dog looks like, what a cat looks like, and how the two are different is very difficult. Both are furry, both have ears, noses, and mouths, both have four paws. You might try to make a program that says dogs are big and cats are small, but that’s not always true…

Dog or cat? It's easy for you to tell. But computers have a tougher time.
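
To see why hand-written rules fall apart, here’s a deliberately naive sketch in Python. The cutoff and the example weights are made up for illustration:

```python
# A hand-written rule: "dogs are big, cats are small." It breaks immediately.
def classify_by_size(weight_kg):
    return "dog" if weight_kg > 6 else "cat"   # arbitrary cutoff

print(classify_by_size(8))   # a large Maine Coon cat (~8 kg) -> wrongly labeled "dog"
print(classify_by_size(2))   # a Chihuahua (~2 kg)            -> wrongly labeled "cat"
```

Every extra rule you bolt on (fur length, ear shape, snout size) has its own exceptions, which is exactly why we let the network find the distinguishing features itself.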

Neural networks are great, because all a programmer needs to do is set up the network, train it on some examples, then use it in their application. During training, the programmer shows the network an image, the network produces an answer, and if the answer is wrong, the network updates itself so that it can do better the next time around. This process of correction is the core of a field known as machine learning, and neural networks are currently one of its most successful techniques.
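
That show-check-correct loop is simple enough to sketch end to end. The example below uses a classic perceptron-style update on made-up data (random vectors standing in for cat and dog pictures); modern networks use the same loop with more sophisticated update rules.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-ins for pictures: each "picture" is a vector of 10 numbers
# (real images would be pixel values). Labels: 0 = cat, 1 = dog.
cats = rng.normal(-1.0, 1.0, size=(50, 10))
dogs = rng.normal(+1.0, 1.0, size=(50, 10))
images = np.vstack([cats, dogs])
labels = np.array([0] * 50 + [1] * 50)

w = np.zeros(10)                       # the network's adjustable weights
b = 0.0

def predict(x):
    return 1 if x @ w + b > 0 else 0   # the network "produces an answer"

# Training: show each example; if the answer is wrong, nudge the weights
# so that the same mistake is less likely next time.
for _ in range(10):                    # several passes over the examples
    for x, y in zip(images, labels):
        error = y - predict(x)         # 0 if correct, +1 or -1 if wrong
        w += error * x                 # update only when the answer was wrong
        b += error

accuracy = np.mean([predict(x) == y for x, y in zip(images, labels)])
print(f"training accuracy: {accuracy:.2f}")
```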

For example, consider how Facebook can automatically recognize and tag your face. When you upload an image, the AI sequentially analyzes groups of pixels within the image to detect edges, which it then combines into features. It can detect thousands of features, many of which may not make sense to humans. Then, by recombining those features and cross-referencing them against a dataset of labeled example images, it can give probability estimates for who or what the image contains.

A machine learning system extracts feature maps from an image, allowing it to recombine those maps into an estimate for what the image represents.

Left: detecting edges from pixels
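
That first step, turning raw pixels into edges, is just a small filter slid across the image. Here is a minimal sketch with a hand-made 6×6 image and a hand-made vertical-edge filter; in a real network, the filters are learned rather than written by hand.

```python
import numpy as np

# A tiny 6x6 grayscale "image": dark on the left, bright on the right,
# so there is a vertical edge down the middle.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A 3x3 filter that responds to a dark-to-bright transition. The first layers
# of an image-recognition network learn many small filters like this one.
edge_filter = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]], dtype=float)

def convolve(img, kernel):
    """Slide the filter over every patch of pixels and record its response."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

feature_map = convolve(image, edge_filter)
print(feature_map)   # large values only in the columns where the patch covers the edge
```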

How does this relate to brain mapping?

Image recognition is but one of numerous applications of machine learning. The takeaway is that, given a dataset, machines can learn the features that are important for a given analysis. This technique can be employed for things like detecting the edges of neurons, finding synapses, and even detecting patterns in signal propagation.

Will this lead to human-level intelligence?

We don’t know. It’s science! But one of the more seductive hypotheses in neuroscience is that there is a basic circuit of intelligence that’s simply replicated all over the brain. If researchers could only figure out what that basic circuit is, how it’s wired, and how it wires together with other basic circuits, then we’d have cracked intelligence.

It’s unlikely that it will be that simple, but there are a few tantalizing observations:

  • Mammals all share a structure in the brain called the neocortex. It’s the outer sheet of neurons commonly called gray matter, which from an evolutionary standpoint is a relatively recent development.
  • As mammals show greater signs of intelligence, the ratio of neocortex to body mass increases. And of course, humans have the largest neocortex for our body size.
  • Under a microscope, the neocortex looks very similar across all mammals: (1) it has a column-like organization to it, with vertical lines of neurons, and (2) it’s organized into multiple layers, with certain layers hosting certain types of neurons.
  • When researchers measure the electrical responses of neurons to particular stimuli, neurons again seem to organize their responses along a column.

Could it be that these columns, or cortical columns, are a basic computational unit of intelligence? It’s possible. Regardless, cortical columns are definitely an important object to study.

Cortical columns have attracted a lot of research in neuroscience, including the famous Blue Brain Project in Europe, but no one has been able to definitively map all the connections between neurons in one column.

That’s the primary target for the teams in IARPA’s MICrONS project.