
Grokking Deep Learning Book

    Continuing our Learn Python series, this article presents the book Grokking Deep Learning, by Andrew W. Trask.
    Introduction to the book Grokking Deep Learning

    Introducing deep learning: why you should learn it

    If you’ve got a Jupyter notebook in hand and feel comfortable with the basics of Python, you’re ready for the next chapter! As a heads-up, chapter 2 is the last chapter that will be mostly dialogue-based (without building something). It’s designed to give you an awareness of the high-level vocabulary, concepts, and fields in artificial intelligence, machine learning, and, most important, deep learning.

    Fundamental concepts: how do machines learn?

    In this chapter, we’ve gone a level deeper into the various flavors of machine learning. You learned that a machine learning algorithm is either supervised or unsupervised and either parametric or nonparametric. Furthermore, we explored exactly what makes these four groups of algorithms distinct. You learned that supervised machine learning is a class of algorithms that learn to predict one dataset given another and that unsupervised learning generally groups a single dataset into various kinds of clusters. You learned that parametric algorithms have a fixed number of parameters and that nonparametric algorithms adjust their number of parameters based on the dataset.
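    To make the taxonomy concrete, here is a minimal sketch of a supervised, parametric model: it learns to predict one dataset given another, using a fixed number of parameters (a single weight) no matter how many data points it sees. The data and names below are hypothetical illustrations, not from the book.

```python
# Supervised: we predict one dataset (y) given another (x).
# Parametric: the model has a fixed number of parameters (here, one weight),
# no matter how large the dataset grows.

x = [2.0, 4.0, 6.0]   # hypothetical input dataset
y = [4.1, 7.9, 12.2]  # hypothetical dataset we want to predict

weight = 2.0  # the model's single, fixed parameter

def predict(input, weight):
    return input * weight

for input, goal in zip(x, y):
    print(predict(input, weight), "vs goal:", goal)
```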

    Deep learning uses neural networks to perform both supervised and unsupervised prediction. Until now, we’ve stayed at a conceptual level as you got your bearings in the field as a whole and your place in it. In the next chapter, you’ll build your first neural network, and all subsequent chapters will be project based. So, pull out your Jupyter notebook, and let’s jump in!

    Introduction to neural prediction: forward propagation

    To predict, neural networks perform repeated weighted sums of the input. You’ve seen an increasingly complex variety of neural networks in this chapter. I hope it’s clear that a relatively small number of simple rules are used repeatedly to create larger, more advanced neural networks. The network’s intelligence depends on the weight values you give it. 
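    As a concrete illustration of “repeated weighted sums” (a minimal sketch in the spirit of the book’s examples; the specific numbers are hypothetical), a network with three inputs and one output is nothing more than a single weighted sum:

```python
weights = [0.1, 0.2, 0.0]  # hypothetical weights; they encode the network's "intelligence"

def w_sum(inputs, weights):
    # Multiply each input by its weight and sum the results: one weighted sum.
    assert len(inputs) == len(weights)
    return sum(i * w for i, w in zip(inputs, weights))

def neural_network(inputs, weights):
    # The prediction is just the weighted sum of the input.
    return w_sum(inputs, weights)

inputs = [8.5, 0.65, 1.2]  # hypothetical input data for one prediction
pred = neural_network(inputs, weights)
print(pred)  # 0.1*8.5 + 0.2*0.65 + 0.0*1.2 = 0.98
```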

    Everything we’ve done in this chapter is a form of what’s called forward propagation, wherein a neural network takes input data and makes a prediction. It’s called this because you’re propagating activations forward through the network. In these examples, activations are all the numbers that are not weights and are unique for every prediction.
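    To see the “forward” in forward propagation, here is a hedged sketch of a two-layer network: the hidden activations (the numbers that are not weights, unique for every prediction) are computed first and then propagated forward to produce the prediction. The layer sizes and weight values below are hypothetical.

```python
import numpy as np

np.random.seed(0)  # reproducible hypothetical weights

# Two sets of weights: input -> hidden, hidden -> output.
weights_0_1 = np.random.rand(3, 4)  # maps 3 inputs to 4 hidden activations
weights_1_2 = np.random.rand(4, 1)  # maps 4 hidden activations to 1 output

def forward_propagate(layer_0):
    # Activations flow forward; each layer is a weighted sum of the previous one.
    layer_1 = layer_0.dot(weights_0_1)  # hidden activations
    layer_2 = layer_1.dot(weights_1_2)  # output activation: the prediction
    return layer_2

inputs = np.array([8.5, 0.65, 1.2])  # hypothetical input data
print(forward_propagate(inputs))
```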

    In the next chapter, you’ll learn how to set weights so your neural networks make accurate predictions. Just as prediction is based on several simple techniques that are repeated and stacked on top of one another, weight learning is also a series of simple techniques that are combined many times across an architecture. See you there!
