If one looks at the history of AI, the research field is divided into two camps, symbolic and non-symbolic AI, which followed different paths toward building an intelligent system. Symbolists firmly believed in developing intelligent systems based on rules and knowledge, whose actions were explainable, while the non-symbolic approach strove to build computational systems inspired by the human brain. Symbolic AI involves the explicit embedding of human knowledge and behavior rules into computer programs. The practice showed a lot of promise in the early decades of AI research.

Symbols are things we use to represent other things, and they play a vital role in the human thought and reasoning process. If I tell you that I saw a cat up in a tree, your mind will quickly conjure an image. We use symbols all the time to define things (cat, car, airplane, etc.) and people (teacher, police officer, salesperson). Symbols can represent abstract concepts (a bank transaction) or things that do not physically exist (a web page, a blog post, etc.). Symbols can be organized into hierarchies (a car is made of doors, windows, tires, seats, etc.). They can also be used to describe other symbols (a cat with fluffy ears, a red carpet, etc.).

An example of symbolic AI tooling is object-oriented programming. OOP languages allow you to define classes, specify their properties, and organize them in hierarchies. You can create instances of these classes (called objects) and manipulate their properties. Class instances can also perform actions, also known as functions, methods, or procedures. Each method executes a series of rule-based instructions that might read and change the properties of the current and other objects. Using OOP, you can create extensive and complex symbolic AI programs that perform various tasks.
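To make the OOP analogy concrete, here is a minimal sketch in Python. The class names and properties are purely illustrative: `Vehicle` and `Car` stand in for symbols organized into a hierarchy, and `color` is a symbol describing another symbol.

```python
class Vehicle:
    """A symbol for the abstract concept 'vehicle'."""
    def __init__(self, wheels):
        self.wheels = wheels

    def describe(self):
        return f"a vehicle with {self.wheels} wheels"

class Car(Vehicle):
    """'Car' sits under 'Vehicle' in the symbol hierarchy."""
    def __init__(self, color):
        super().__init__(wheels=4)
        self.color = color  # a symbol describing another symbol

    def describe(self):
        return f"a {self.color} car with {self.wheels} wheels"

my_car = Car("red")  # an instance: one concrete symbol
print(my_car.describe())  # a red car with 4 wheels
```

Every piece of knowledge here (a car is a vehicle, a car has four wheels) is explicitly hand-coded, which is exactly the symbolic style of knowledge representation.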
The ultimate challenge in computer science is to develop an effective AI system with a layer of reasoning, logic, and learning capabilities. But today's AI systems typically have either learning capabilities or reasoning capabilities; they rarely combine both. A symbolic approach offers good performance in reasoning, can give explanations, and can manipulate complex data structures, but it generally has serious difficulty anchoring its symbols in the perceptual world.
Methods of symbolic AI
Symbolic AI attempts to solve problems using a top-down approach (example: a chess computer). "Seek and you shall find": search is the core symbolic AI technique. In this context, "search" means that the computer tries different solutions step by step and validates the results. The classic example is a chess computer that "imagines" millions of different future moves and combinations and, based on the outcomes, "decides" which moves promise the highest probability of winning. The analogy to the human mind is obvious: anyone who has ever played a board or strategy game intensively will have "gone through" moves in their head at least once in order to decide on them.
When using search algorithms, an AI checks the possible solutions step by step. Only the part of the search tree that is currently being investigated is created in computer memory.
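The step-by-step search described above can be sketched in a few lines of Python. This is a toy problem invented for illustration, not chess: starting from 1, find a sequence of operations (+3 or *2) that reaches a target number. States are generated one at a time as the recursion unwinds, so only the path currently under investigation lives in memory.

```python
def search(state, target, path, depth):
    """Depth-first symbolic search over sequences of operations."""
    if state == target:
        return path                       # solution found
    if depth == 0 or state > target:
        return None                       # dead end: backtrack
    for name, op in [("+3", lambda x: x + 3), ("*2", lambda x: x * 2)]:
        result = search(op(state), target, path + [name], depth - 1)
        if result is not None:
            return result                 # return the first solution found
    return None

print(search(1, 11, [], depth=5))  # ['+3', '*2', '+3']
```

Each candidate is tried, validated, and abandoned if it fails, exactly the "seek and you shall find" pattern the text describes.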
Naturally, a program has the advantage of being able to check vastly more moves and scenarios thanks to its computing power. Even AlphaGo works with a variant of this technique at its core. However, there is one important difference from humans: a computer, equipped with the appropriate computing power, can and will execute all possible moves, including the senseless ones, in an incredibly structured way. Humans, however, can partially rely on their "gut". We usually decide early on, based on gut feeling, what makes sense, and thus limit the number of potential moves we think about.
Neural networks can enhance classic AI programs by adding a "human" gut feeling, thus reducing the number of moves to be calculated. Using this combined technology, AlphaGo was able to win a game as complex as Go against a human being. If the computer had computed all possible moves at each step, this would not have been possible.
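The idea of a learned "gut feeling" pruning the search can be sketched as follows. Here `policy` is a hand-written stand-in for what would, in a system like AlphaGo, be a trained neural network; the moves and target are invented for illustration.

```python
def policy(state, move):
    """Hypothetical learned prior: prefer moves that land closer to 11."""
    return -abs(11 - move(state))

def pruned_search(state, moves, depth, k=2):
    """Search, but only expand the k most promising moves per step."""
    if state == 11:
        return True
    if depth == 0:
        return False
    # Instead of expanding every move, rank them by the policy's score
    # and keep only the top k, discarding the "senseless" ones early.
    ranked = sorted(moves, key=lambda m: policy(state, m), reverse=True)
    return any(pruned_search(m(state), moves, depth - 1, k)
               for m in ranked[:k])

moves = [lambda x: x + 1, lambda x: x + 3, lambda x: x * 2, lambda x: x - 1]
print(pruned_search(1, moves, depth=6))  # True
```

The branching factor drops from four to two at every level, which is exactly how a policy network makes a game as large as Go tractable for tree search.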
One of the main stumbling blocks of symbolic AI, or GOFAI, was the difficulty of revising beliefs once they were encoded in a rules engine. Expert systems are monotonic: the more rules you add, the more knowledge is encoded in the system, but additional rules cannot undo old knowledge; monotonic essentially means knowledge only moves in one direction. Because machine learning algorithms can be retrained on new data and will revise their parameters accordingly, they are better at encoding tentative knowledge that can be retracted later if necessary, for example when the data is non-stationary and the system needs to learn something new.
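A minimal forward-chaining rules engine makes the monotonicity problem visible. The rules and fact names below are illustrative, not taken from a real expert system: each rule can only add facts, and nothing ever retracts what is already known.

```python
# Each rule is (set of premises, conclusion).
rules = [
    ({"bird"}, "has_wings"),
    ({"bird"}, "can_fly"),
    ({"penguin"}, "bird"),
]

def infer(facts, rules):
    """Forward chaining: apply rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)  # knowledge only ever grows
                changed = True
    return facts

print(infer({"penguin"}, rules))
```

Starting from `penguin`, the engine derives `bird`, `has_wings`, and, wrongly, `can_fly`. Adding a new rule such as `({"penguin"}, "cannot_fly")` would only add another fact; no rule can remove the earlier conclusion, which is precisely the belief-revision problem described above.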
The second flaw in symbolic reasoning is that the computer itself does not know what the symbols mean; they are not necessarily linked to any other representation of the world in a non-symbolic way. Again, this stands in contrast to neural nets, which can link symbols to vectorized representations of the data, which are in turn just translations of raw sensory data.
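What "linking symbols to vectorized representations" means can be sketched in a few lines. The embedding values below are made up for illustration, not learned; in a real system they would come from training on data.

```python
import math

# Toy embeddings: each symbol is grounded in a vector.
embeddings = {
    "cat":   [0.9, 0.8, 0.1],
    "tiger": [0.8, 0.9, 0.2],
    "car":   [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: math.sqrt(sum(a * a for a in w))
    return dot / (norm(u) * norm(v))

# To a pure symbol system, "cat", "tiger", and "car" are equally
# unrelated tokens; in vector space, relatedness becomes measurable.
print(cosine(embeddings["cat"], embeddings["tiger"]))
print(cosine(embeddings["cat"], embeddings["car"]))
```

Once symbols live in a vector space, similarity between them is a number the machine can compute, rather than something a human must encode rule by rule.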
One question that logically arises then is: who are the symbols for? Are they useful for machines at all? If symbols allow homo sapiens to share and handle information within fundamental physiological constraints, great, but why should machines use them? Why shouldn't machines simply talk to each other in vectors, or in some squeaky language of dolphins and fax machines? Let us hazard a bet: when machines do begin to speak to one another intelligibly, it will be in a language that humans cannot understand. Maybe words are too low in bandwidth for high-bandwidth machines. Maybe they need more dimensions to express themselves unambiguously. Language is just a keyhole in a door that machines have bypassed. At best, natural language could be an API that AI offers humans so they can ride on its coattails; at worst, it could be a distraction from what constitutes true machine intelligence. But we have confused it with the summit of achievement, because natural language is how we show that we are smart.
Combining the strengths of neural and symbolic AI methods
What is interesting is that, for the most part, the disadvantages of deep neural nets are strengths of symbolic systems (and vice versa): symbolic systems inherently possess compositionality and interpretability, and can exhibit true generalization. Neural net architectures are very powerful at certain types of learning, modeling, and action, but have limited capability for abstraction. That is why they have been compared to the Ptolemaic epicycle model of our solar system: they can become more and more precise, but they need more and more parameters and data to do so, and they cannot, by themselves, discover Kepler's laws, incorporate them into a knowledge base, and further infer Newton's laws from them.
Symbolic AI is powerful at manipulating and modeling abstractions but deals poorly with massive empirical data streams. This is why we believe that deep integration of neural and symbolic AI systems is an important path to human-level AGI on modern computer hardware. It is worth noting in this light that many recent "deep neural net" successes are actually hybrid architectures; for example, the AlphaGo architecture from Google DeepMind integrates two neural nets with a game-tree search.
- Symbolic AI means that every step is based on symbolic, human-readable representations of the problem, and that logic and search are used to solve it.
- A key advantage of symbolic AI is that the reasoning process can be easily understood: a symbolic AI program can readily explain why a certain conclusion was reached and what the reasoning steps were.
- A key disadvantage of non-symbolic AI is that it is difficult to understand how the system reached its conclusion. This matters greatly in critical applications such as self-driving cars and medical diagnosis, among others.
- A key disadvantage of symbolic AI is that the rules and knowledge must be hand-coded for the learning process, which is a hard problem.