Learning Gradient Descent!
- Kavan Mehta
- Aug 30, 2021
- 2 min read
Updated: Oct 25, 2022
In the past week, I have been researching and learning about machine learning, and specifically about one of its most important introductory concepts: gradient descent. Gradient descent is the technique used to fit a model (the predictor that recognizes specific patterns through programming and statistics) so that its predictions match the actual data points as closely as possible, by reducing losses (a model's inaccuracies against the actual data). I learned how we can use gradient descent to minimize losses, and I also gained perspective on how derivatives from calculus relate to machine learning.

I also learned about the different types of machine learning and their uses, such as reinforcement learning, supervised learning, and unsupervised learning. By learning how machine learning truly works, along with introductory concepts such as linear regression and gradient descent, I am even more intrigued by this topic and its practical implementations. This knowledge has also raised questions for me about how to actually implement gradient descent through programming, to minimize the mistakes of a particular model or increase its accuracy. I want to learn about neural networks (deep learning) too, as they are much more complicated and power technology such as computer vision.
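To make the idea concrete, here is a minimal sketch of gradient descent I put together for simple linear regression (written from scratch, not tied to any particular library): we fit y = w*x + b by repeatedly nudging both parameters in the direction opposite the gradient of the mean squared error loss. The function names and the learning rate are my own choices for illustration.

```python
def gradient_descent(xs, ys, lr=0.02, steps=5000):
    """Fit y = w*x + b by minimizing mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradient of the loss L = (1/n) * sum((w*x + b - y)^2)
        # with respect to w and b (this is where the calculus comes in).
        dw = (2 / n) * sum((w * x + b - y) * x for x, y in zip(xs, ys))
        db = (2 / n) * sum((w * x + b - y) for x, y in zip(xs, ys))
        # Step against the gradient; lr controls the step size.
        w -= lr * dw
        b -= lr * db
    return w, b

# Data generated from the line y = 2x + 1, so the fit should
# recover roughly w = 2 and b = 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]
w, b = gradient_descent(xs, ys)
print(round(w, 2), round(b, 2))
```

Each iteration asks, "which direction makes the loss smaller?" and takes a small step that way; the derivatives answer that question exactly.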
My goal for this week is to learn about TensorFlow, one of the leading machine learning frameworks, developed by Google, and to learn the process of training on data through programming to actually implement a model. I still have a lot to learn about this interesting topic, so see you next week, same place, same time.