What The Heck is AI?
My venture into artificial intelligence.
Two weeks ago I knew, well, pretty much nothing about artificial intelligence except that my Alexa used it, and that it’s in pretty much every sci-fi movie since the dawn of time. When it came to how it worked, it seemed like something so complex and intricate that it shouldn’t even be possible.
Nevertheless, my curiosity finally took over and I decided that I would learn a bit more about it. I started with the complete basics and slowly worked my way up to more complex ideas. So, in one week I had an introductory crash course in AI, and below is my understanding.
To start things off, I did a TKS Explore. An Explore is basically an introductory course on a subject, created by The Knowledge Society (TKS), a program that I attend.
First of all, I learned about artificial neural networks (ANNs). In machine learning, a neural network is an algorithm that is loosely based on how the human brain functions and learns. It has an input layer, an output layer, and usually one or more hidden layers of neurons, but sometimes it doesn't have any hidden layers (this is called a perceptron).
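To make that concrete, here's a toy sketch of my own (not from the Explore) of a perceptron, the no-hidden-layer case, learning the logical AND function. The weights, data, and learning rate are all made up for illustration:

```python
def step(x):
    """Step activation: the neuron fires (1) if the weighted sum clears zero."""
    return 1 if x >= 0 else 0

def predict(weights, bias, inputs):
    """Weighted sum of the inputs plus a bias, passed through the activation."""
    total = sum(w * i for w, i in zip(weights, inputs))
    return step(total + bias)

# Labeled examples of AND: the output is 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights, bias, lr = [0, 0], 0, 1
for _ in range(20):  # a few passes over the data
    for inputs, target in data:
        # Nudge the weights whenever the prediction misses the target.
        error = target - predict(weights, bias, inputs)
        weights = [w + lr * error * i for w, i in zip(weights, inputs)]
        bias += lr * error

print([predict(weights, bias, x) for x, _ in data])  # → [0, 0, 0, 1]
```

Even this tiny example shows the core loop: predict, compare against the known answer, and adjust.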
After learning about an ANN’s similarity to the human brain, some of the mystery around AI was gone. It made so much sense that a computer could replicate human intelligence by having a brain resembling that of a human.
In order to get artificial intelligence, you must first build an artificial brain(at least to a certain point).
Last summer, I studied the human nervous system, and I can definitely see the similarities. The principal functions of your nervous system are sensory input, integration, and motor output.
The peripheral nervous system detects stimuli and sends information about each stimulus to the central nervous system (comprised of our brain and spinal cord). The central nervous system integrates the information it was provided and, based on that information, decides how to act: its motor output.
In the case of an ANN, it is slightly different, but it has the same general idea. The inputs are like the sensory input. The hidden layers, with their neurons, are like the brain integrating information. Then, the neurons produce an output, just like the central nervous system produces a motor output.
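That three-stage flow can be sketched as a single forward pass through a tiny network. Everything here (the weights, the inputs, the layer sizes) is invented just to show data moving from input, through a hidden layer, to an output:

```python
import math

def sigmoid(x):
    """Squash a number into the range (0, 1), like a neuron's firing strength."""
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """Each neuron: a weighted sum of the previous layer's outputs, plus a
    bias, pushed through the activation function."""
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, 0.8]                                             # "sensory input"
hidden = layer(x, [[0.2, -0.4], [0.7, 0.1]], [0.0, -0.3])  # "integration"
output = layer(hidden, [[1.0, -1.0]], [0.2])               # "motor output"
print(output)
```

Training would adjust those weights, but even untrained, the structure mirrors the nervous system analogy: stimulus in, processing in the middle, decision out.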
Once I learned how it was even possible for a machine to learn, I learned the different ways that it can learn. There are many methods of teaching a machine, but I only learned a few of the main ones.
All machine learning models are trained with data. What happens with that data during that training, however, is unique for each method.
In supervised learning, the machine is provided with already labeled training data. Labeled training data is data where the output is already known.
The job of the machine is to learn and derive a function from analyzing how the input resulted in the given output. Once it has been trained, when you give the machine an input similar to one it has seen before, but not the same, it should be able to produce the correct output.
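As a hedged sketch of that idea (my own toy example, with made-up data), here is one of the simplest supervised methods there is: a nearest-neighbour classifier. It "learns" from labeled examples and then predicts the label of the most similar training input:

```python
# Labeled training data: each input comes with its already-known output.
labeled = [((1.0, 1.0), "small"), ((1.2, 0.9), "small"),
           ((8.0, 9.0), "large"), ((9.1, 8.5), "large")]

def classify(point):
    """Predict the label of the closest training example."""
    def dist(a, b):
        # Squared Euclidean distance between two points.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labeled, key=lambda pair: dist(pair[0], point))[1]

# A new input similar to, but not the same as, the training data:
print(classify((1.1, 1.3)))  # → small
```

Real supervised models derive a much richer function than "copy the nearest label," but the contract is the same: known inputs and outputs in, predictions for new inputs out.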
The problem with supervised learning is that if you over-train your model (a problem called overfitting), your machine becomes so specific that if you provide an input even slightly different from anything it has seen before, it won't know what to do and will produce useless results.
Then, there’s unsupervised learning. When training this model, the data is unlabeled. The job of the machine is to find similarities between the inputs and group them accordingly.
There isn’t a set number of groups; the machine must decide how many, depending on how it chooses to group the data. With unsupervised learning, the machine has much more freedom. No single method is the best; it depends on the purpose of your model.
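A classic example of this grouping idea is k-means clustering. (One caveat on my own sketch below: in k-means you do pick the number of groups up front; other clustering methods choose it automatically. The data and starting guesses here are made up.)

```python
from statistics import mean

points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]   # unlabeled data
centers = [points[0], points[3]]           # naive initial guesses for 2 groups

for _ in range(10):
    # Assign each point to its nearest centre...
    groups = [[], []]
    for p in points:
        nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
        groups[nearest].append(p)
    # ...then move each centre to the mean of its group, and repeat.
    centers = [mean(g) for g in groups]

print(sorted(groups[0]), sorted(groups[1]))  # → [0.8, 1.0, 1.2] [7.9, 8.0, 8.3]
```

Nobody told the machine what the groups mean; it found the low cluster and the high cluster purely from similarities in the data.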
After completing the Explore, I didn’t really know what steps to take next. I wanted to start making a neural network, just to see AI in action, but I wasn’t super confident in making one since I felt like I didn’t know what the parts of a neural network look like in code, and what function they have. Nevertheless, I decided to make one anyway.
While coding my first neural network, I realized I was learning about AI as I went. I got a hands-on experience and was able to understand the purpose of some parts of the code even better than I would have by reading about them.
To use an analogy Elon Musk often uses: if you want to teach someone how to build an engine, there are two ways to go about it. Either you can start by teaching them about the wrench, the screwdriver, and so on, or you can start by taking the engine apart. In the first scenario, when you are learning about, say, the wrench, you're thinking, "well, I know how to use a wrench, but what is its purpose?" If, on the other hand, you take apart the engine, the use of a wrench becomes obvious and you have a better understanding of it.
It seems like I have learned a lot this past week, but I have really only scratched the surface of the wonders of AI. I look forward to going deeper into artificial intelligence and all of its potential capabilities!