Hi y’all! Welcome back to my blog! Not much happened this week in terms of the internship, but I did make some significant progress in my data science course.
Last week, I met with Mr. Swain to discuss my findings and see how they related to the data. For my next assignment, he needs to discuss with Professor Fagan what data is available for me, so I am still waiting for his update.
In the meantime, I was able to finish the machine learning course, which marks the completion of the entire data science in R program that I enrolled in. In the end, they even gave me a certificate 🙂
I have now started a brand new course called Introduction to Artificial Intelligence with Python. It consists of 7 lessons, each with a keyword that explains the concepts listed and their relation to AI. This week, I was able to complete the first two lessons, which were on Search and Knowledge. In the first lesson, I learned about how AI takes a variety of steps to achieve a goal and resolve a problem. Starting from an initial state, it undergoes various actions from one node to the next until it passes a goal test with the smallest path cost, which means the goal state is reached through the most optimal solution. Although the general idea is the same, there are different types of search methods, two of which can be represented by the data structures of a stack (depth-first search) and a queue (breadth-first search). Depth-first Search (DFS) exhausts one direction before trying another. Nodes are added to the frontier, the mechanism that “manages” the nodes, and the first node to be removed and considered is the last one added. This results in a search algorithm that goes as deep as possible in the first direction while leaving all other directions for later. While this method can be fast if the algorithm lucks out and happens to choose the right path by chance, the solution it finds may not be optimal. At worst, it might take the longest time by exploring every possible path before finding the solution. Breadth-first Search (BFS), on the other hand, follows multiple directions at the same time, taking one step in each possible direction before taking the second step. This algorithm is guaranteed to find the optimal solution since it explores all paths, but it will almost always take longer to run. Both of these methods are uninformed search algorithms, which don’t use any knowledge about the problem beyond what they acquire through exploration.
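To make the stack-vs-queue idea concrete, here is a minimal sketch of uninformed search on a toy graph I made up for illustration (the node names and edges are not from the course). The only difference between DFS and BFS is which end of the frontier we remove from:

```python
from collections import deque

# Hypothetical toy graph: each node maps to its neighbors.
GRAPH = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": ["E"],
    "E": [],
}

def search(start, goal, method="bfs"):
    frontier = deque([[start]])  # the frontier holds whole paths
    explored = set()
    while frontier:
        # BFS removes the oldest path (queue); DFS removes the newest (stack).
        path = frontier.popleft() if method == "bfs" else frontier.pop()
        node = path[-1]
        if node == goal:         # goal test
            return path
        if node in explored:
            continue
        explored.add(node)
        for neighbor in GRAPH[node]:
            frontier.append(path + [neighbor])
    return None                  # no path exists

print(search("A", "E", "bfs"))   # → ['A', 'B', 'D', 'E']
print(search("A", "E", "dfs"))   # → ['A', 'C', 'D', 'E']
```

On this little graph both methods happen to find paths of the same length, but you can see DFS committing to one branch (through C) and following it all the way down, while BFS fans out level by level.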
Informed search algorithms, in contrast, consider additional knowledge to try to improve performance. For instance, Greedy Best-first Search expands the node that is closest to the goal, as determined by a heuristic function that estimates how close the next node is to the goal. A development of the greedy best-first algorithm is A* search, which considers not only the estimated cost from the current location to the goal, but also the cost accrued in reaching the current location. By combining these values, the algorithm has a more accurate way of determining the cost of a solution and optimizing its choices. It keeps track of both values for each node, and once a path’s combined cost exceeds the estimated cost of some previous option, it ditches the current path and returns to that previous option. Adversarial search is used when facing an opponent who tries to achieve the opposite goal in, say, a game of tic-tac-toe. One algorithm of this type is Minimax, which represents the winning conditions as -1 for one side and +1 for the other. Recursively, the algorithm simulates all possible games that can take place from the current state until a terminal state is reached, and each terminal state is valued as -1, 0, or +1. Given these values, a maximizing player chooses the highest one and follows that optimal action.
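The Minimax recursion is short enough to sketch directly. Here is a tiny version over a hand-made game tree (the tree and its values are illustrative, not from any real game): leaves hold terminal utilities of +1 for a max-player win, -1 for a loss, and 0 for a draw, and the two players alternate picking the best value for themselves.

```python
def minimax(node, maximizing):
    if isinstance(node, int):      # terminal state: return its utility
        return node
    # Recursively value every child, with the other player to move.
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Each inner list is a state whose elements are possible next states.
tree = [[+1, 0], [-1, +1], [0, -1]]

# The maximizer moves first; the minimizer then picks the worst outcome
# in each branch, so the best the maximizer can guarantee here is a draw.
print(minimax(tree, True))   # → 0
```

The nice part is that the same few lines work no matter how deep the tree is, which is exactly the “simulate all possible games” idea from the lesson.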
The second lesson was mostly focused on propositional logic, which is based on propositions: statements about the world that can be either true or false. It uses propositional symbols and logical connectives such as not, and, or, implication, and biconditional. By understanding inference rules like Modus Ponens, And Elimination, Double Negation Elimination, Implication Elimination, Biconditional Elimination, De Morgan’s Law, and the Distributive Property, people can use models, entailment, and a knowledge base to make inferences and come to a conclusion. There are a lot more specific terms, but I won’t go into the nitty-gritty since this post is already too long ha.
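One way to see how models, entailment, and a knowledge base fit together is to check entailment by brute force: enumerate every model (truth assignment) and see whether the query holds in all models where the knowledge base holds. This is a minimal sketch with a made-up two-symbol example, where the knowledge base says P is true and P implies Q, so by Modus Ponens it should entail Q:

```python
from itertools import product

symbols = ["P", "Q"]

def kb(model):
    # Knowledge base: P, and (P → Q), written as (¬P ∨ Q).
    return model["P"] and ((not model["P"]) or model["Q"])

def query(model):
    return model["Q"]

def entails(kb, query):
    # KB entails the query if the query is true in every model
    # where the KB is true.
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False
    return True

print(entails(kb, query))   # → True
```

With only two symbols there are just four models to check, but the same idea scales (exponentially, which is the catch) to any number of propositional symbols.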
For next week, I plan on starting my next assignment for the internship, continuing my independent research, and learning about uncertainty in AI. Thanks for tuning in!