Research team sets new mark for “deep learning”


Scientists' ongoing efforts to understand the human brain have given rise to artificial intelligence technologies. In a collaborative project between Baylor College of Medicine and Rice University, Ankit Patel (lead researcher), Tan Nguyen (co-author) and Richard Baraniuk (co-author) have advanced that technology by creating an algorithm that allows computers to learn "visually," largely on their own, much as we do at our earliest ages.

The researchers set out to build a semisupervised learning system around a "convolutional neural network." The network is a "very simple visual cortex": layers of software-based artificial neurons, modeled loosely on biological neurons, that process a data set of 10,000 handwritten digits from zero to nine, layer by layer, to identify each number. In a supervised learning system, the computer is trained on thousands of examples of these digits, each labeled with its numerical value. Here, by contrast, the researchers programmed the software with only 10 labeled examples of each handwritten digit and then presented it with thousands of unlabeled ones. Convolutional networks, says Nguyen, a graduate student at Rice University, are the latest technology for visual analysis and are already used by self-driving cars.
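The few-labels setup described above can be sketched as a toy self-training loop. Everything below is illustrative: the data is synthetic two-dimensional points rather than handwritten digits, there are two classes rather than ten, and a nearest-centroid classifier stands in for the actual convolutional network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for the digit data: two well-separated clusters.
# Only a handful of points carry labels, as in the study's 10-per-digit setup.
labeled_X = np.array([[0.0, 0.0], [0.1, -0.1], [5.0, 5.0], [5.1, 4.9]])
labeled_y = np.array([0, 0, 1, 1])
unlabeled_X = np.vstack([
    rng.normal(0.0, 0.5, size=(50, 2)),   # unlabeled points near class 0
    rng.normal(5.0, 0.5, size=(50, 2)),   # unlabeled points near class 1
])

def nearest_centroid_predict(X, centroids):
    """Assign each row of X to the class with the closest centroid."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Step 1: fit centroids from the few labeled examples.
centroids = np.array([labeled_X[labeled_y == c].mean(axis=0) for c in (0, 1)])

# Step 2: pseudo-label the unlabeled pool and refit (one self-training round).
pseudo_y = nearest_centroid_predict(unlabeled_X, centroids)
all_X = np.vstack([labeled_X, unlabeled_X])
all_y = np.concatenate([labeled_y, pseudo_y])
centroids = np.array([all_X[all_y == c].mean(axis=0) for c in (0, 1)])

# Classify two new points, one near each cluster.
print(nearest_centroid_predict(np.array([[0.2, 0.1], [4.8, 5.2]]), centroids))
```

The point of the sketch is the data split, not the model: a small labeled seed bootstraps labels for a much larger unlabeled pool, which is the essence of semisupervised learning.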

The first layer of artificial neurons scanned the images for edges, color changes, and angles; the next layer examined that output to recognize patterns; and so on, searching for pattern upon pattern in a "nonlinear process." In this manner the computer taught itself, a method known as "deep learning." The "deep" refers to the number, or depth, of the layers that make up the "eye," or visual front end, of the image-analyzing software, and hence the depth of recognition it can achieve. Though the structure bears some similarity to the human visual cortex, exactly how humans understand and identify what they see is still not fully known. We learn mostly unsupervised, through simple exposure to and interaction with the world.
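The layer-by-layer processing described above can be sketched with a toy convolution. The image, the edge-detecting kernel, and the pooling step below are illustrative stand-ins chosen for this sketch, not the network from the study.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution: slide the kernel over the image."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image with a vertical edge: dark on the left, bright on the right.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# Layer 1: an edge-detecting filter, which responds where intensity changes.
vertical_edge = np.array([[-1.0, 1.0]])
edges = conv2d(image, vertical_edge)

# Layer 2: a nonlinearity (ReLU) followed by pooling, which condenses the
# first layer's output into a coarser pattern, as deeper layers do.
activated = np.maximum(edges, 0.0)
pooled = activated.max(axis=1)  # strongest edge response in each row

print(edges[0])   # nonzero only at the column where the edge sits
print(pooled)
```

Each stage consumes the previous stage's output rather than the raw pixels, which is the nonlinear, layered pattern-upon-pattern search the article describes.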

Baraniuk illustrates the need for this capability with the remarkable human ability to identify objects across different visual contexts, for instance in a moving video of three-dimensional space, where objects shift and every frame must be analyzed and identified, a task far too time-consuming for a person.

Patel, who for more than a decade has applied and learned from machine learning in fields such as high-volume commodity trading and strategic missile defense, believes these neural networks can also help neuroscientists learn more about how the human brain works.

Last Modified: 2017/02/20