Recently, while looking at the WaveNet code, I found traces of residual networks in the source. Since I had not studied them before, I decided to dig in and take a look.
The following blog post explains it well: https://blog.csdn.net/dulingtingzi/article/details/79870486 is clear from beginning to end and is enough to gain a solid understanding of residual networks.
In addition, I have read about GANs in this column: https://blog.csdn.net/column/details/14646.html. I note it here because I also want to try combining GANs with speech synthesis.
1. Custom Tag Functions Defined by a tld File:
These are mainly used to execute back-end code and fetch data for page display. For example, in development a custom tag can load a select drop-down box through its attributes. Another use case: sometimes the data read from the back end is an id or similar raw field, but the page should display the corresponding name; a tag defined in the tld file can perform this mapping.
Which of the following statements about heteroskedasticity is correct?
A. The linear regression error terms are different
B. The linear regression error terms are the same
C. The linear regression error term is zero
D. None of the above statements is true
Analysis: heteroskedasticity is defined relative to homoscedasticity. Homoscedasticity is assumed so that the regression parameter estimators have good statistical properties: an important assumption of the classical linear regression model is that the random error terms in the population regression function satisfy homoscedasticity, that is, they all have the same variance. Heteroskedasticity means the error terms do not share the same variance, so statement A is correct.
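In symbols, the two cases from the analysis above can be written as:

```latex
% Classical assumption (homoscedasticity): every error term has the same variance
\mathrm{Var}(\varepsilon_i \mid X) = \sigma^2 \quad \text{for all } i

% Heteroskedasticity: the variance differs across observations
\mathrm{Var}(\varepsilon_i \mid X) = \sigma_i^2, \qquad
\sigma_i^2 \neq \sigma_j^2 \ \text{for some } i \neq j
```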
This section mainly involves:
https://github.com/tensorflow/models/blob/master/research/slim/nets/resnet_v2.py
https://github.com/tensorflow/models/blob/master/research/slim/nets/resnet_utils.py
First, it is recommended to read the official TensorFlow and slim documentation to acquire basic knowledge of arg_scope, variable_scope, outputs_collections, etc. This article mainly records the general logic of the code as a memo. For resnet_utils.py, you need to understand collections.namedtuple, which is roughly equivalent to a named tuple class.
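To illustrate the namedtuple pattern that resnet_utils.py relies on, here is a minimal sketch; the `Block` fields match slim's usage, while the `bottleneck` stand-in and its argument tuples are simplified placeholders of my own.

```python
import collections

# A namedtuple behaves like a lightweight, immutable class whose fields
# are accessible by name. slim's resnet_utils.py uses this pattern to
# describe one ResNet block: a scope name, the unit function to apply,
# and a list of argument tuples, one per unit in the block.
Block = collections.namedtuple('Block', ['scope', 'unit_fn', 'args'])

def bottleneck(depth, depth_bottleneck, stride):
    # Stand-in for the real bottleneck unit; here it just echoes its config.
    return {'depth': depth, 'depth_bottleneck': depth_bottleneck, 'stride': stride}

# Three units: two stride-1 units followed by one stride-2 unit.
block1 = Block('block1', bottleneck, [(256, 64, 1)] * 2 + [(256, 64, 2)])

print(block1.scope)      # fields are readable by name, not index
print(len(block1.args))  # number of units in this block
```

Because the tuple is immutable, a block definition can be shared freely between the network-building functions without risk of mutation.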
Abstract: Pose machines provide a sequential prediction framework for learning rich implicit spatial models. We systematically design a convolutional network for pose estimation that learns both image features and image-dependent spatial models. Contribution of the paper: implicit modeling of long-range relationships between joints. We design a cascaded network structure in which each stage's convolutional network takes the belief maps produced by the previous stage as input. It refines the part estimates stage by stage without using an explicit graphical model.
CVPR2018 paper list
Abstract: Object tracking is the cornerstone of many visual analytics systems. Although considerable progress has been made in this area in recent years, tracking objects in real-world video robustly, efficiently, and accurately remains a challenge. In this paper, we propose a hybrid tracker that uses motion information from the compressed video stream together with a general semantic object detector acting on decoded frames to build a fast and effective tracking engine.
GBDT (Gradient Boosting Decision Tree): gradient boosting with decision trees
GBRT (Gradient Boosting Regression Tree): gradient boosting with regression trees
CART (Classification And Regression Tree)
In boosting algorithms, when the squared error loss function is used, the loss directly exposes the fitting residual of the current model, which makes optimization convenient. The exponential loss function is also convenient to work with. But for a general loss function, optimization is difficult. Therefore, a steepest-descent approximation is used: the negative gradient of the loss function at the current model serves as the approximation of the residual in the regression setting of boosting.
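The residual-fitting idea above can be made concrete: for squared loss L = ½(y − F(x))², the negative gradient ∂L/∂F is exactly y − F(x), the residual. Below is a minimal from-scratch sketch of that loop, assuming scikit-learn's `DecisionTreeRegressor` as the base learner; the function names `gbdt_fit`/`gbdt_predict` are mine, not from any library.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gbdt_fit(X, y, n_rounds=50, lr=0.1, max_depth=2):
    """Toy gradient boosting for squared loss: each round fits a small
    tree to the current residuals (the negative gradient) and adds its
    prediction scaled by a shrinkage factor lr."""
    f = np.full(len(y), y.mean())      # initial constant model
    trees = []
    for _ in range(n_rounds):
        residual = y - f               # negative gradient of 1/2*(y - f)^2
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residual)
        f += lr * tree.predict(X)      # move the model toward the target
        trees.append(tree)
    return y.mean(), trees

def gbdt_predict(X, init, trees, lr=0.1):
    f = np.full(X.shape[0], init)
    for tree in trees:
        f += lr * tree.predict(X)
    return f

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

init, trees = gbdt_fit(X, y)
pred = gbdt_predict(X, init, trees)
print(np.mean((y - pred) ** 2))  # training MSE shrinks as rounds accumulate
```

For a general loss one would replace the `y - f` line with the loss's actual negative gradient; that substitution is the whole point of the steepest-descent view.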
Since this article is interview-oriented, it will not dwell on formulas and derivations; a detailed treatment of the algorithms will follow later. RF, GBDT, and XGBoost all belong to ensemble learning. The purpose of ensemble learning is to improve the generalization ability and robustness of a single learner by combining the predictions of multiple base learners. According to how the individual learners are generated, current ensemble methods can be roughly divided into two categories: sequential methods, where the individual learners are strongly dependent on one another and must be generated serially, and parallel methods, where the individual learners have no strong dependency and can be generated simultaneously.
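The two categories can be seen side by side in scikit-learn; this is a hedged sketch on synthetic data, not a benchmark. `RandomForestClassifier` (bagging family) trains independent trees, so `n_jobs=-1` can parallelize them; `GradientBoostingClassifier` (boosting family) must train its trees one after another.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Parallel family: the trees are independent, so they can be fit concurrently.
rf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
rf.fit(X_tr, y_tr)

# Sequential family: each tree corrects the previous ones, so fitting is serial.
gbdt = GradientBoostingClassifier(n_estimators=100, random_state=0)
gbdt.fit(X_tr, y_tr)

print(rf.score(X_te, y_te), gbdt.score(X_te, y_te))
```

XGBoost follows the same sequential scheme as GBDT but with engineering and regularization refinements; it is omitted here to keep the sketch dependency-light.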