October 26, 2016 Black Bear Solutions

Our progress in the quest for creating Artificial Intelligence

Machine learning has been mentioned so often lately that one might think something groundbreaking has just been discovered. In reality, it has been around almost as long as computers themselves, and no, nothing incredible has happened overnight. Long ago Alan Turing asked whether machines can think, and while we have certainly come a long way in the pursuit of artificial consciousness, we are still not quite there. Succeeding might help us crack the mystery of our own mind and perhaps even the eternal question “Why are we here?”. Philosophical implications aside, in this article we would like to shed some light on a few aspects of Artificial Intelligence.

If my data is enormous, can intelligence be created?

Initial attempts at creating AI consisted of loading machines with information, letting them run, and hoping for a positive outcome. Given our limited knowledge of the universe and our place in it, this does not seem far-fetched at all. We ourselves are a result of entropy: given billions of years, living matter emerged out of inanimate matter. The concept is somewhat similar, with two big restrictions: time is not unlimited, and these machines have finite memory (as opposed to a seemingly endless universe). Google might be the pinnacle of this endeavor, but our search engines won't evolve a consciousness of their own.

In broad terms, machine learning consists of reasoning and generalizing from an initial set of data and applying what was learned to new data. Neural networks, deep learning and reinforcement learning are all forms of machine learning, as they produce systems capable of analyzing information they have not seen before.

Some 60 years ago, processing power was a fraction of what we have now, big data was nonexistent and algorithms were primitive. In that setting, advancing machine learning was nearly impossible, but people kept going. In recent decades, neuroscience has helped advance neural networks. Machine learning tasks can broadly be split into classification and regression. Both work from previously provided data: the first assigns new items to categories, while the second fits trends that can then be used to make predictions about future values. A rough sketch of the two is given below.
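To make the distinction concrete, here is a minimal sketch in Python. The data, variable names and the "hours studied" scenario are made up purely for illustration: the classifier puts a new example into one of the known categories, while the regression fits a trend line and uses it to forecast a value.

import numpy as np

# --- Hypothetical toy data: hours studied vs. exam outcome ---
hours  = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
passed = np.array([0,   0,   0,   1,   1,   1])    # category labels (classification)
score  = np.array([42., 50., 57., 66., 71., 80.])  # numeric values (regression)

# Classification: assign a new point to the category of its nearest neighbour.
def classify(new_hours):
    nearest = np.argmin(np.abs(hours - new_hours))
    return passed[nearest]

# Regression: fit a straight line (least squares) and extrapolate the trend.
slope, intercept = np.polyfit(hours, score, deg=1)
def predict_score(new_hours):
    return slope * new_hours + intercept

print(classify(3.5))       # which category does a new student fall into?
print(predict_score(7.0))  # what score does the trend suggest?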

Frank Rosenblatt’s perceptron is an example of a linear classifier: its predictions are based on a linear function that splits the data into two regions. The perceptron takes measurable features (length, weight, color etc.), multiplies each by a weight and sums them up. It then keeps adjusting those weights on the training examples until its output fits within the predefined boundaries, that is, until the examples are classified correctly.
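The update rule itself takes only a few lines. Below is a minimal sketch of perceptron training in Python; the feature values and labels are invented for illustration, not taken from any real dataset.

import numpy as np

# Toy, linearly separable data (hypothetical): each row is [length, weight],
# each label marks which of two classes the object belongs to.
X = np.array([[2.0, 1.0], [3.0, 1.5], [1.0, 4.0], [0.5, 3.0]])
y = np.array([1, 1, 0, 0])

w = np.zeros(X.shape[1])   # one weight per feature
b = 0.0                    # bias term
lr = 0.1                   # learning rate

for _ in range(100):                        # repeat until the weights settle
    for xi, target in zip(X, y):
        prediction = 1 if xi @ w + b > 0 else 0
        error = target - prediction         # 0 when correct, +/-1 when wrong
        w += lr * error * xi                # nudge the boundary toward the mistake
        b += lr * error

print(1 if np.array([2.5, 1.2]) @ w + b > 0 else 0)  # classify a new object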

Even people working in this field find it confusing

Neural networks are many perceptron-like units working together, arranged in a structure loosely resembling the neurons in our brains. In more recent years, scientists have tried to create AI by mimicking how our consciousness works, or at least as far as we understand it.

Deep learning has been the next big thing in AI development. Deep networks are neural networks with more layers, and each layer adds another level of abstraction. It is important to remember that a computer does not compare objects by the traits a human would notice; machines need these layers of abstraction in order to fulfill the task at all. This difference in perception is perhaps the final frontier on the way to an AI capable of passing Turing’s test. The sketch below shows what stacking layers looks like in code.
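To give a feel for what "more layers" means, here is a minimal sketch of a forward pass through a small network in Python. The layer sizes are arbitrary and the weights are random placeholders; no training is involved, the point is only how each layer re-describes the previous one.

import numpy as np

def layer(inputs, weights, biases):
    """One layer of perceptron-like units with a non-linear activation."""
    return np.tanh(inputs @ weights + biases)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                                   # raw input features

# Three stacked layers: each one works on the output of the previous layer,
# building a higher level of abstraction at every step.
h1 = layer(x,  rng.normal(size=(4, 8)), np.zeros(8))     # low-level patterns
h2 = layer(h1, rng.normal(size=(8, 8)), np.zeros(8))     # combinations of patterns
out = layer(h2, rng.normal(size=(8, 2)), np.zeros(2))    # final decision scores

print(out)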

Despite our solid progress, there is a long way to go. The “black box” nature of machine learning, models whose decisions we cannot easily explain, is one issue we still can’t quite figure out. We could say exactly the same about the human mind. The good news is that scientists are working on both problems, and not knowing something has never stopped us from digging deeper and ultimately finding the answers we are looking for.