QA Official

Keras custom loss [centerloss for fine-grained classification] 2019-05-02
Keras is a modular deep learning framework that makes it easy and intuitive to build common deep learning models. Before TensorFlow came out, Keras, running on a Theano backend, was almost the most popular deep learning framework of its time. Keras now supports four backends: Theano, TensorFlow, CNTK and MXNet (the first three are officially supported; MXNet has not yet been fully integrated)

Keras (2): Using Keras to build a neural network to classify MNIST handwritten digits and qualitatively analyze the influence of various hyperparameters 2019-05-02
Experimental report outline:
0. BaseNet (3 layers, sigmoid, 784-386-10)
1. Hidden_Net (784-112-10 and 784-543-10)
2. reluActivation_Net
3. DeepNet (4 and 5 layers)
4. DeepNet (4 and 5 layers; increased number of training epochs)
5. DeepNet (5 layers; Dropout)
7. DeepNet (5 layers; Dropout + relu)
8. AutoEncoder_Net (5 layers; AutoEncoder)
9. Conclusion
Abstract: The data set for this experiment is MNIST digits.
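As a sketch of what the BaseNet architecture above computes, here is a minimal NumPy forward pass for the 784-386-10 sigmoid network (the softmax output layer and the weight initialization are my assumptions for illustration; the article itself builds the model in Keras):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    # subtract the row max for numerical stability
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def basenet_forward(x, W1, b1, W2, b2):
    """Forward pass of a 784-386-10 network with a sigmoid hidden layer."""
    h = sigmoid(x @ W1 + b1)       # hidden activations, shape (batch, 386)
    return softmax(h @ W2 + b2)    # class probabilities, shape (batch, 10)

rng = np.random.default_rng(0)
x = rng.random((32, 784))                    # a batch of 32 flattened MNIST images
W1 = rng.normal(0, 0.01, (784, 386)); b1 = np.zeros(386)
W2 = rng.normal(0, 0.01, (386, 10));  b2 = np.zeros(10)
probs = basenet_forward(x, W1, b1, W2, b2)   # each row sums to 1
```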

RNN(LSTM) is used for classification 2019-05-02
import tensorflow as tf
import sys
import random
from sklearn.cross_validation import train_test_split
from sklearn.cross_validation import StratifiedKFold  # StratifiedKFold
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc
from scipy import interp
# from tensorflow.contrib.keras.python.keras.layers import BatchNormalization
%matplotlib inline

# hyperparameters
lr = 0.001
training_iters = 10000
batch_size = 200  # 3200/200 = 16 batches used for training; 800/20

Softmax regression C++ implementation 2019-05-02
Experimental environment: Visual Studio 2013. Data: the data set contains the 26 capital letters, with 20,000 samples of 16 dimensions each. Experimental purpose: classify the character samples in the data set. Experimental code: 1. Define a LogisticRegression class, header file LogisticRegression.h:

#include <iostream>
#include <math.h>
#include <algorithm>
#include <functional>
#include <string>
#include <cassert>
#include <vector>
using namespace std;

class LogisticRegression {
public:
    LogisticRegression(int inputSize, int k,
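The core of softmax regression, independent of the C++ class layout above, is the softmax probability map and the gradient of the cross-entropy loss. A minimal NumPy sketch of one gradient-descent step (the data here is random and the learning rate is my choice; only the shapes, 16 features and 26 classes, follow the article's data set):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # stable softmax, row-wise
    return e / e.sum(axis=1, keepdims=True)

def train_step(X, y_onehot, W, lr=0.1):
    """One batch gradient-descent step of softmax regression."""
    probs = softmax(X @ W)                     # (m, k) class probabilities
    grad = X.T @ (probs - y_onehot) / len(X)   # gradient of mean cross-entropy
    return W - lr * grad

rng = np.random.default_rng(1)
m, d, k = 100, 16, 26            # samples, features, classes (as in the letter data)
X = rng.random((m, d))
y = rng.integers(0, k, m)
Y = np.eye(k)[y]                 # one-hot labels
W = np.zeros((d, k))
for _ in range(50):
    W = train_step(X, Y, W)
loss = -np.mean(np.log(softmax(X @ W)[np.arange(m), y]))
```

Starting from W = 0 the mean cross-entropy is log(26); each step decreases it, since the objective is convex.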

TensorFlow in Practice (1): Definition of cross entropy 2019-05-02
For multi-class problems, cross entropy is usually used as the loss function. Cross entropy originated as a concept in information theory: it derives from information entropy and has since been applied in many fields, including communication, error-correcting codes, game theory and machine learning. For the relationship between cross entropy and information entropy, see: Machine Learning Foundations (6) - the cross-entropy cost function (cross-entropy error). When
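A minimal NumPy illustration of the definition: the cross entropy H(p, q) = -Σ p·log q between two distributions, which reduces to the information entropy H(p) when q = p and is strictly larger otherwise (the example distributions are arbitrary):

```python
import numpy as np

def cross_entropy(p, q):
    """H(p, q) = -sum_i p_i * log(q_i), for discrete distributions p and q."""
    return -np.sum(p * np.log(q))

p = np.array([0.7, 0.2, 0.1])   # "true" distribution
q = np.array([0.5, 0.3, 0.2])   # model's predicted distribution
h_pq = cross_entropy(p, q)      # cross entropy between p and q
h_p = cross_entropy(p, p)       # information entropy H(p)
```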

TensorFlow-RNN Recurrent Neural Network Example 2: Sentiment analysis 2019-05-02
TensorFlow-RNN sentiment analysis. I previously wrote a sentiment analysis using a fully connected neural network; now, use TensorFlow to build an RNN network for sentiment analysis of text, with complete code and a detailed walkthrough. Training data - Step 1, data processing:

import numpy as np
# read the data
with open('reviews.txt', 'r') as f:
    reviews = f.read()
with open('labels.txt', 'r') as f:
    labels = f.read()
# each \n

TensorFlow loss functions and custom loss functions (3) 2019-05-02
I have split the introduction to TensorFlow's loss functions into three articles; this one covers custom loss functions. (1) TensorFlow's four built-in loss functions (2) Other loss functions (3) Custom loss functions. The custom loss function closes out the chapter on loss functions. Learning to define your own loss function is very helpful for improving accuracy on classification, segmentation and similar problems; at the same time, exploring new loss functions can also make you
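A custom loss is just a function of the form loss(y_true, y_pred) returning a loss value. As a hedged sketch (the Huber-style loss and its threshold are my illustrative choice, not taken from the article), here is such a function written in NumPy; in actual Keras/TensorFlow code the np calls would be replaced with backend tensor ops so the loss stays differentiable:

```python
import numpy as np

def huber_loss(y_true, y_pred, delta=1.0):
    """Quadratic for small errors, linear for large ones (robust to outliers)."""
    err = np.abs(y_true - y_pred)
    quadratic = 0.5 * err ** 2                 # used where |error| <= delta
    linear = delta * err - 0.5 * delta ** 2    # used where |error| >  delta
    return np.mean(np.where(err <= delta, quadratic, linear))

y_true = np.array([0.0, 0.0, 0.0])
y_pred = np.array([0.5, 2.0, -3.0])
loss = huber_loss(y_true, y_pred)
```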

TensorFlow loss functions and custom loss functions (1) 2019-05-02
I have split the introduction to TensorFlow's loss functions into three articles; this one covers the four built-in loss functions. (1) TensorFlow's four built-in loss functions (2) Other loss functions (3) Custom loss functions. A loss function quantifies the difference between the results output by the classifier (the predicted values) and the results we expect (the labels), and is as important as the classifier architecture itself. Many researchers devote themselves to studying how to improve the loss function so as to improve the classifier's results.
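As one concrete example of a built-in loss, TensorFlow's tf.nn.sigmoid_cross_entropy_with_logits documents the numerically stable formula max(x, 0) - x·z + log(1 + exp(-|x|)) for logits x and labels z. A NumPy sketch checking that formula against the naive definition (the sample logits and labels are arbitrary):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def stable_sigmoid_xent(logits, labels):
    """max(x, 0) - x*z + log(1 + exp(-|x|)): avoids overflow for large |x|."""
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

def naive_sigmoid_xent(logits, labels):
    """Direct definition: -z*log(sigmoid(x)) - (1-z)*log(1 - sigmoid(x))."""
    p = sigmoid(logits)
    return -labels * np.log(p) - (1 - labels) * np.log(1 - p)

x = np.array([-3.0, -0.5, 0.0, 2.0])
z = np.array([0.0, 1.0, 1.0, 0.0])
stable = stable_sigmoid_xent(x, z)
naive = naive_sigmoid_xent(x, z)
```

The two agree for moderate logits; the stable form is what you want once |x| grows large enough for exp(x) to overflow.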

Keras custom loss functions, sample weighting, and metrics versus loss 2019-05-02
First, distinguish between two concepts: 1. The loss is the optimization objective of the whole network; it participates in the optimization and drives the updates of the weights W. 2. A metric is only used as an indicator to evaluate the network's performance, such as accuracy; it exists to give an intuitive view of how well the algorithm works and does not participate in the optimization process.
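A NumPy sketch of the distinction (the sample predictions are invented): cross-entropy, a typical loss, changes smoothly as predictions become more confident and can be differentiated for weight updates, while accuracy, a typical metric, only reports performance and does not change until a prediction crosses the decision threshold:

```python
import numpy as np

def binary_crossentropy(y_true, y_pred):
    """Mean binary cross-entropy: smooth in y_pred, usable as a loss."""
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def accuracy(y_true, y_pred):
    """Fraction of correct predictions at a 0.5 threshold: a metric, not a loss."""
    return np.mean((y_pred > 0.5) == y_true)

y_true = np.array([1.0, 1.0, 0.0, 0.0])
worse  = np.array([0.6, 0.70, 0.4, 0.30])  # correct but not confident
better = np.array([0.9, 0.95, 0.1, 0.05])  # correct and confident
```

Both prediction sets score 100% accuracy, yet the loss still distinguishes them, which is exactly why the loss, not the metric, drives training.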

Logistic regression explained (2): the cost function in detail 2019-05-02
Supervised learning: machine learning is divided into supervised learning, unsupervised learning, semi-supervised learning and reinforcement learning. Logistic regression is a typical supervised learning method. Since it is supervised, the training set can naturally be written as $\{(x^1,y^1),(x^2,y^2),\cdots,(x^m,y^m)\}$. For these $m$ training samples, each sample has $n$-dimensional features; adding a bias term $x_0$, each sample contains $n+1$ features: $x = [x_0, x_1, x_2, \cdots, x_n]^T$, where $x \in \mathbb{R}^{n+1}$, $x_0 = 1$, and $y \in \{0,1\}$.
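With this notation in place, the cost function the article goes on to analyze is the standard logistic regression one; written out for reference (standard textbook form, using the article's superscript convention for sample indices):

```latex
h_\theta(x) = \frac{1}{1 + e^{-\theta^T x}}

J(\theta) = -\frac{1}{m} \sum_{i=1}^{m}
  \left[ y^i \log h_\theta(x^i) + (1 - y^i) \log\bigl(1 - h_\theta(x^i)\bigr) \right]
```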