QA Official

CS231n Course Study Notes (3): Implementation of a Softmax Classifier

https://qaofficial.com/post/2019/03/29/24451-cs231n-course-study-notes-3-implementation-of-softmax-classifier.html 2019-03-29
Translation notes: https://zhuanlan.zhihu.com/p/21930884?refer=intelligentuni
Video address of the homework explanation: http://www.mooc.ai/course/364/learn#lesson/2118
Reference: http://blog.csdn.net/pjia_1008/article/details/66972060

Homework explanation: finding the loss. Linear model: $y = Wx$. Softmax: $S_i = \frac{e^{y_i}}{\sum_j e^{y_j}}$. Cross-entropy loss: $-\sum_i \mathrm{label}_i \log S_i$. Since $\mathrm{label}_i$ is one-hot, $\mathrm{loss} = -\log S_i = -y_i + \log \sum_j e^{y_j}$. scores = X.
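The derivation above can be sketched in NumPy (a minimal illustration, not the course's reference solution; the function and variable names are my own):

```python
import numpy as np

def softmax_cross_entropy(scores, label):
    """Cross-entropy loss for one example given raw class scores.

    scores: 1-D array of class scores y = Wx
    label:  index of the correct class (the one-hot vector collapses to this index)
    """
    # Shift by the max score for numerical stability; softmax is unchanged by this.
    shifted = scores - np.max(scores)
    probs = np.exp(shifted) / np.sum(np.exp(shifted))  # S_i = e^{y_i} / sum_j e^{y_j}
    # loss = -log S_label = -y_label + log sum_j e^{y_j}
    return -np.log(probs[label])

scores = np.array([2.0, 1.0, 0.1])
loss = softmax_cross_entropy(scores, 0)
```

The shift by the maximum score is the standard trick to avoid overflow in `np.exp`; it cancels out in the ratio, so the loss value matches the unshifted formula exactly.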

Data Imbalance in Classification Problem

https://qaofficial.com/post/2019/03/29/24115-data-imbalance-in-classification-problem.html 2019-03-29
Original address: http://blog.csdn.net/heyongluoyao8/article/details/49408131 (thanks to the author). In many machine learning tasks, the number of samples in one or more categories of the training set may be much larger than in the other categories; that is class imbalance. To achieve better learning results, the class-imbalance problem needs to be addressed. Jason Brownlee's answer (original title: 8 Tactics to Combat Imbalanced Classes in Your Machine Learning Dataset): when you classify a dataset with imbalanced classes, you can get 90% accuracy.
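One of the simplest of those tactics, cost-sensitive class weighting, can be sketched as follows (a hedged illustration; the heuristic mirrors the common "balanced" scheme, n_samples / (n_classes * count), and the function name is my own):

```python
from collections import Counter

def balanced_class_weights(labels):
    """Weights inversely proportional to class frequency,
    using the n_samples / (n_classes * count) heuristic."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# 90/10 imbalance: the minority class gets a 9x larger weight,
# so misclassifying it costs more during training.
labels = [0] * 90 + [1] * 10
weights = balanced_class_weights(labels)
```

Passing such weights to a classifier's loss is what makes 90% accuracy on a 90/10 split stop looking like a good result by default.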

First Multilayer Perceptron Example: Indian Diabetes Diagnosis

https://qaofficial.com/post/2019/03/29/24374-first-multilayer-perceptron-example-indian-diabetes-diagnosis.html 2019-03-29
The multilayer perceptron is the simplest neural network model for classification and regression problems in machine learning. First case: Pima Indians diabetes diagnosis. The pima-indians dataset is a standard machine learning dataset that can be downloaded free of charge from the UCI Machine Learning Repository: http://archive.ics.uci.edu/ml/datasets

# Import the required packages
import tensorflow
import keras
from keras.models import Sequential
from keras.layers import Dense
import numpy as np
# Initialize a random number generator with a fixed random seed
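Before reaching for Keras, it may help to see the computation a Dense layer performs. A minimal NumPy sketch of a one-hidden-layer forward pass (the weights here are random placeholders, not trained values; 8 inputs echoes the diabetes dataset's feature count):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer perceptron:
    each Dense layer computes activation(x @ W + b)."""
    h = sigmoid(x @ W1 + b1)      # hidden layer
    return sigmoid(h @ W2 + b2)   # output: a probability in (0, 1)

rng = np.random.default_rng(7)            # fixed seed, as the excerpt suggests
x = rng.normal(size=(1, 8))               # one sample with 8 input features
W1, b1 = rng.normal(size=(8, 12)), np.zeros(12)
W2, b2 = rng.normal(size=(12, 1)), np.zeros(1)
p = mlp_forward(x, W1, b1, W2, b2)
```

A Keras `Sequential` model with two `Dense` layers performs exactly this matrix arithmetic; training then fits `W1, b1, W2, b2` to the data.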

How to Convert a Keras-Trained Model into a TensorFlow .pb File and Call It in a TensorFlow Serving Environment

https://qaofficial.com/post/2019/03/29/24854-how-to-convert-keras-trained-model-into.-pb-file-of-tensorflow-and-call-it-in-tensorflow-serving-environment.html 2019-03-29
First, a model trained by Keras is saved in .h5 format via its own model.save(); the model is then loaded with my_model = keras.models.load_model(filepath). To convert this model into a TensorFlow model in .pb format, the code is as follows:

# -*- coding: utf-8 -*-
from keras.layers.core import Activation, Dense, Flatten
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import LSTM
from keras.

Label Classification Model Based on CNN Inception-v3 Network Structure

https://qaofficial.com/post/2019/03/29/24738-label-classification-model-based-on-cnn-inception-v3-network-structure.html 2019-03-29
CNN network label classification. 1. Corpus processing and model building: the training corpus follows the format label -> title -> content, where the title is optional. How can text be used as input to a convolutional neural network? The input of a convolutional neural network is a three-dimensional matrix (four-dimensional if the batch dimension is included); each three-dimensional matrix is analogous to the height, width, and color depth of an image.
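The usual answer to that question is word embeddings: each word maps to a vector, and stacking the vectors for a sentence yields a matrix that plays the role of an image. A toy NumPy sketch (the vocabulary and random embedding table are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"label": 0, "title": 1, "content": 2}
embed_dim = 4
# One embedding vector per vocabulary word (random here; learned in practice).
embeddings = rng.normal(size=(len(vocab), embed_dim))

sentence = ["title", "content"]
# Each row is one word's embedding: shape (seq_len, embed_dim).
matrix = np.stack([embeddings[vocab[w]] for w in sentence])
# Add channel and batch axes to match a CNN's 4-D input convention.
batch = matrix[np.newaxis, :, :, np.newaxis]
```

Sequence length and embedding dimension stand in for image height and width, with a single "color" channel.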

Multi-label Classification Principle and Code

https://qaofficial.com/post/2019/03/29/23667-multi-label-classification-principle-and-code.html 2019-03-29
Principle: multi-label classification. Code: https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/kernels

Multi-label Learning Based on Label-Specific Features, Including Python Code

https://qaofficial.com/post/2019/03/29/23620-multi-tag-learning-based-on-generic-attributes-including-python-code.html 2019-03-29
The content of this article is translated from LIFT: Multi-Label Learning with Label-Specific Features [1], together with some of my own understanding; a GitHub link to the Python code I reproduced is attached at the end. Label-specific features: so-called multi-label learning is a machine learning problem posed in contrast to single-label learning. As the name implies, the model returns predictions for multiple labels for an input feature vector. For example, given a picture, the program judges whether there are "

Random Forest (python Version)

https://qaofficial.com/post/2019/03/29/23634-random-forest-python-version.html 2019-03-29
Random forest, reprinted from http://www.zilhua.com/629.html. Although reprinted, the code here is written in Python; the original author used R. 1. Background of random forests. 1.1 Definition of random forest: the random forest is a relatively new machine learning model. The classic machine learning model is the neural network, which has a history of more than half a century; neural networks are accurate in prediction but computationally expensive.
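The two ingredients of a random forest, bootstrap resampling and majority voting, can be sketched without any library (a toy illustration with one-feature decision stumps as the base learners; a real forest grows full decision trees on random feature subsets):

```python
import random

def fit_stump(points):
    """One-feature decision stump: threshold at the midpoint
    between the two class means (toy base learner)."""
    xs0 = [x for x, y in points if y == 0]
    xs1 = [x for x, y in points if y == 1]
    return (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2

def bootstrap(data):
    """Resample with replacement; reject samples missing a class
    so every stump can be fitted."""
    while True:
        sample = [random.choice(data) for _ in data]
        if len({y for _, y in sample}) == 2:
            return sample

def forest_predict(x, thresholds):
    votes = sum(1 for t in thresholds if x > t)  # each stump votes for class 1
    return 1 if votes * 2 > len(thresholds) else 0

random.seed(0)
data = [(v, 0) for v in (1.0, 1.2, 0.8)] + [(v, 1) for v in (3.0, 3.2, 2.8)]
# Bagging: each stump is trained on its own bootstrap resample.
stumps = [fit_stump(bootstrap(data)) for _ in range(25)]
```

Averaging many weak, independently-resampled learners is what gives the forest its variance reduction relative to a single tree.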

Regular Items L1 and L2 Added to Loss Function in Linear Regression

https://qaofficial.com/post/2019/03/29/24485-regular-items-l1-and-l2-added-to-loss-function-in-linear-regression.html 2019-03-29
Regularization: in almost all of machine learning you can see an additional term added after the loss function. Two additional terms are commonly used, generally called the 1-norm and 2-norm in English (ℓ1-norm and ℓ2-norm), i.e. L1 regularization and L2 regularization. L1 and L2 regularization can be regarded as penalty terms on the loss function. L1 regularization is the sum of the absolute values of the elements of the weight vector w, usually written ||w||1. L2 regularization is the square root of the sum of squares of the elements of w (note that the L2 regularization term in ridge regression is the squared norm), usually written ||w||2.
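The two penalty terms can be written out directly (a small NumPy illustration; `lam` stands for the regularization strength, a name I chose for the sketch):

```python
import numpy as np

def l1_penalty(w):
    return np.sum(np.abs(w))           # ||w||_1: sum of absolute values

def l2_norm(w):
    return np.sqrt(np.sum(w ** 2))     # ||w||_2: root of sum of squares

def ridge_loss(mse, w, lam):
    # Ridge regression penalizes the *squared* L2 norm, as the text notes.
    return mse + lam * np.sum(w ** 2)

w = np.array([3.0, -4.0])
```

For `w = [3, -4]` the L1 penalty is 7 and the L2 norm is 5; ridge adds `lam * 25` to the data loss. The absolute-value penalty is what drives L1-regularized weights exactly to zero, while the squared penalty only shrinks them.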

Several Binary Encoding Functions in sklearn: OneHotEncoder(), LabelEncoder(), LabelBinarizer(), MultiLabelBinarizer()

https://qaofficial.com/post/2019/03/29/23656-several-binary-encoding-functions-of-sklearn-involved-onehotencoder-labelencoder-labelbinarizer-multilabelbinarizer.html 2019-03-29
Transferred from http://blog.csdn.net/haramshen/article/details/53169963

1. Code block:

import pandas as pd
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import LabelBinarizer
from sklearn.preprocessing import MultiLabelBinarizer

testdata = pd.DataFrame({'pet': ['cat', 'dog', 'dog', 'fish'],
                         'age': [4, 6, 3, 3],
                         'salary': [4, 5, 1, 1]})
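On that testdata frame, the four encoders behave roughly as follows (a sketch assuming a reasonably recent sklearn; exact defaults such as sparse output have shifted across versions):

```python
import pandas as pd
from sklearn.preprocessing import (LabelBinarizer, LabelEncoder,
                                   MultiLabelBinarizer, OneHotEncoder)

testdata = pd.DataFrame({'pet': ['cat', 'dog', 'dog', 'fish'],
                         'age': [4, 6, 3, 3],
                         'salary': [4, 5, 1, 1]})

# LabelEncoder: string categories -> integer codes, sorted alphabetically.
pet_codes = LabelEncoder().fit_transform(testdata['pet'])       # 0, 1, 1, 2

# LabelBinarizer: one-hot rows for a single label column.
pet_onehot = LabelBinarizer().fit_transform(testdata['pet'])    # (4, 3) matrix

# OneHotEncoder: same idea, but operates column-wise on 2-D input
# and returns a sparse matrix by default.
age_onehot = OneHotEncoder().fit_transform(testdata[['age']]).toarray()

# MultiLabelBinarizer: indicator matrix when each sample carries a SET of labels.
multi = MultiLabelBinarizer().fit_transform([('cat',), ('dog', 'fish')])
```

The practical distinction: LabelEncoder/LabelBinarizer are meant for the target column, OneHotEncoder for feature columns, and MultiLabelBinarizer for multi-label targets.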