ARTIFICIAL INTELLIGENCE – A TOOL TO UNDERSTAND MIND
The study of artificial intelligence is a fascinating one. Scientists are in hot pursuit of the "mind of matter", enjoying the "Game of God" by creating entities that mirror the mind's most precious possession: consciousness, and the ability to think!
Juergen Schmidhuber, a leading computer scientist working on artificial intelligence (AI), made many thought-provoking observations in an interview with Jacob Koshy (The Hindu, Chennai, 20 December 2017).
He said his goal was to build a general-purpose AI that can learn to do many things. It must learn the learning algorithm itself (one that can help it master chess as well as drive a car, for instance) – true meta-learning, as it is called.
He spoke of a concept called the "feedforward network": layers of artificial neurons arranged to mimic neurons in the brain, with information moving up through the layers. Curiously, this layered structure resembles the method of learning used for millennia in Indian culture to teach dance, music, mantras and Vedic chanting: the student works layer by layer, each new skill building on the previous one, which is then incorporated into the next.
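The layer-by-layer idea can be sketched in a few lines. This is a minimal illustration, not Schmidhuber's actual system: a stack of layers with random (untrained) weights, where each layer's output becomes the next layer's input. The layer sizes and the ReLU activation are illustrative choices, not details from the interview.

```python
import numpy as np

def relu(x):
    # a common neuron activation: pass positive signals, block negative ones
    return np.maximum(0.0, x)

def feedforward(x, layers):
    """Send an input vector up through successive layers; each layer
    builds on the output of the one below it."""
    for w, b in layers:
        x = relu(w @ x + b)
    return x

rng = np.random.default_rng(0)
# an illustrative stack: 4 inputs -> two hidden layers of 8 -> 2 outputs,
# with random connection strengths, as in an untrained network
sizes = [4, 8, 8, 2]
layers = [(rng.standard_normal((m, n)), np.zeros(m))
          for n, m in zip(sizes, sizes[1:])]

output = feedforward(rng.standard_normal(4), layers)
print(output.shape)  # (2,)
```

Information here flows strictly upward, which is what distinguishes a feedforward network from the recurrent networks discussed next.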
Another field of brain activity deals with long short-term memory (LSTM). The computer scientist described how the brain utilizes this phenomenon.
It’s a recurrent network, a little bit like in the brain. The brain has a hundred billion neurons and each is connected to 10,000 others. That’s a million billion connections, and each of them has a ‘strength’ that indicates how much one neuron influences another.
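The arithmetic in that estimate checks out:

```python
neurons = 100 * 10**9   # a hundred billion neurons
per_neuron = 10_000     # each connected to 10,000 others

connections = neurons * per_neuron
print(connections)      # 1000000000000000, i.e. 10**15: a million billion
```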
Then there are feedback connections that make it (the network) like a general-purpose computer, and you can feed in videos through the input neurons, acoustics through microphones, tactile information through sensors, while some are output neurons that control finger muscles. Initially, all connections are random and the network, perceiving all this, outputs rubbish. There’s a difference between the rubbish that comes out and the translated sentence that should have come out. We measure the difference and translate it into a change of all these connection strengths so that they become ‘better connections’, and the network learns through the Long Short-Term Memory algorithm to adjust its internal connections to understand the structure of, say, Polish, and learns to translate between languages.
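That training loop — measure the difference between what came out and what should have come out, and translate it into changes of connection strengths — can be illustrated with a deliberately simplified sketch. This is plain gradient descent on a single linear layer, not an LSTM; the toy data, learning rate, and step count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy task: 50 three-dimensional input patterns with known two-dimensional targets
inputs = rng.standard_normal((50, 3))
true_map = np.array([[1.0, 0.0], [0.5, -1.0], [0.0, 2.0]])
targets = inputs @ true_map

w = rng.standard_normal((3, 2))   # initially random: the output is "rubbish"
lr = 0.05
for _ in range(500):
    out = inputs @ w
    error = out - targets          # difference between output and desired output
    # translate the measured error into a change of connection strengths
    w -= lr * inputs.T @ error / len(inputs)

final_loss = float(np.mean((inputs @ w - targets) ** 2))
print(final_loss < 1e-4)  # True: the connections have become 'better connections'
```

The real LSTM algorithm propagates such error signals through time across recurrent connections, but the principle is the same: error drives the adjustment of connection strengths.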
An interesting question was posed, and Schmidhuber’s answer gives an equally interesting twist to Yama Niyama.
Anything that can be taught via demonstration can be taught to an AI in principle. How are we teaching our kids to be valuable members of society? We let them play around, be curious and explore the world. We punish them, for instance, when they take the lens and burn ants. And they learn to adopt our ethical and moral values. The more situations they are exposed to, the closer they come to understanding values. We cannot prove or predict that they are always going to do the right thing, especially if they are smarter than the parents. Einstein’s parents couldn’t predict what he would do, and some of the things he discovered can be used for evil purposes. But this is a known problem. In an artificial neural network, it’s easier to see, in hindsight, what went wrong. For instance, in a car crash, we can find which neuron influenced the other. If it’s a huge network, it will take some time, but it’s possible. With humans, you can’t do this. You can only ask them and very often, they will lie. Artificial systems, in that sense, are under control.
Another interesting query often raised is: what if artificial intelligence becomes smarter than human beings? The scientist responds in an interesting manner.
I would be very surprised if, within a few decades, there are no AIs smarter than ourselves. They won’t really rule us. They will be very interested in us — ‘artificial curiosity’ as the term goes. That’s among the areas I work on. As long as they don’t understand life and civilization, they will be super interested in us and their origins. In the long run, they will be much more interested in others of their kind and it will expand to wherever there are resources. There’s a billion times more sunlight in space than here. They will emigrate and be far away from humans. They will have very little, if anything, to do with humans.
The interviewer expounds an age-old fear and gets a suitable answer.
But can we go extinct or be exterminated like Neanderthals?
No. So, people are much smarter than, say, frogs, but there are lots of frogs out there, right? Just because you are smarter than them doesn’t mean you have any desire to exterminate them. As humans, we are responsible for the accidental extermination of a lot of species that we don’t even know exist. That is true, but at least you won’t have the silly conflict of Schwarzenegger movies, or like The Matrix, where bad AIs live off the energy of human brains. That, incidentally, is the stupidest plot ever.
A brain produces about thirty watts, while the power plant needed to keep the human alive consumes much more. When should you be afraid of anybody? When you share goals and have to fight for them. That’s why the worst enemy of a man is another man. However, the best friend of a man is also a man, or a woman. You can collaborate or compete. An extreme example of collaboration may be love — that is, shared goals towards having a family. The other extreme could be war. AI will be interested in other AI, like frogs are interested in other frogs.