Keras is a modular, building-block deep learning framework that makes it easy and intuitive to build common deep learning models. Before TensorFlow appeared, Keras, with Theano as its backend, was arguably the most popular deep learning framework of its time. Keras now supports four backends: Theano, TensorFlow, CNTK, and MXNet (the first three are officially supported; MXNet integration is not yet complete).
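As a minimal sketch of how concise a Keras model definition can be, here is a small fully connected network whose layer sizes happen to match the 784-386-10 BaseNet described in the report below (this is an illustrative example, not the original experiment's code):

```python
# Minimal Keras example: a 784-386-10 fully connected network.
# Illustrative sketch only; sizes match the BaseNet described below.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(784,)),              # flattened 28x28 MNIST image
    layers.Dense(386, activation="sigmoid"),
    layers.Dense(10, activation="softmax"),  # one output per digit class
])
model.compile(optimizer="sgd",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The same model definition runs unchanged on any of the supported backends, which is the point of Keras's building-block design.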

Experimental Report
0. BaseNet (3 layers, sigmoid, 784-386-10)
1. Hidden_Net (784-112-10 and 784-543-10)
2. reluActivation_Net
3. DeepNet (4 and 5 layers)
4. DeepNet (4 and 5 layers; increased number of training rounds)
5. DeepNet (5 layers; Dropout)
7. DeepNet (5 layers; Dropout + ReLU)
8. AutoEncoder_Net (5 layers; AutoEncoder)
9. Conclusion
Abstract: The data set used in this experiment is the MNIST handwritten digits.

Experimental Environment: Visual Studio 2013

Data: The data come from http://archive.ics.uci.edu/ml/datasets/optical+recognition+of+handwritten+digits and cover the 26 capital letters. There are 20,000 samples, each with 16 dimensions.

Experimental Purpose: Complete the classification of the character samples in the data set.

Experimental Code:
1. Define a LogisticRegression class. Header file LogisticRegression.h:

#include <iostream>
#include <math.h>
#include <algorithm>
#include <functional>
#include <string>
#include <cassert>
#include <vector>
using namespace std;

class LogisticRegression {
public:
    LogisticRegression(int inputSize, int k,
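The header above is truncated in this copy. As a rough illustration of what a multi-class logistic (softmax) regression forward pass computes, here is a small NumPy sketch; the names and shapes are assumptions for illustration, not the original C++ code:

```python
import numpy as np

def softmax(z):
    # Subtract the row max for numerical stability before exponentiating.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def predict(X, W, b):
    """Forward pass of multi-class logistic regression.
    X: (n_samples, inputSize), W: (inputSize, k), b: (k,)"""
    return softmax(X @ W + b)

# Tiny usage example: 16-dimensional inputs and 26 classes,
# matching the data set described above.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))
W = np.zeros((16, 26))   # untrained weights, so predictions are uniform
b = np.zeros(26)
probs = predict(X, W, b)  # each row sums to 1
```

Each row of `probs` is a probability distribution over the 26 classes; training would adjust `W` and `b` to concentrate that distribution on the correct class.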

For multi-class problems, cross-entropy is usually used as the loss function. Cross entropy is originally a concept from information theory: it is derived from information entropy and has since been applied in many fields, including communication, error-correcting codes, game theory, and machine learning. For the relationship between cross entropy and information entropy, see: Machine Learning Foundations (6) - cross-entropy cost function (cross-entropy error), http://blog.csdn.net/lanchunhui/article/details/50970625. When
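As a concrete illustration of the cross-entropy loss for a multi-class problem, here is a hand-rolled NumPy version (written out for clarity; frameworks ship built-in equivalents):

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Cross-entropy between one-hot labels and predicted probabilities.
    H(p, q) = -sum_i p_i * log(q_i), averaged over the samples."""
    y_pred = np.clip(y_pred, eps, 1.0)  # avoid log(0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

# Example: the true class is class 1, predicted with probability 0.7,
# so the loss is -log(0.7) ≈ 0.357.
y_true = np.array([[0.0, 1.0, 0.0]])
y_pred = np.array([[0.2, 0.7, 0.1]])
loss = cross_entropy(y_true, y_pred)
```

Note that only the predicted probability of the true class enters the loss, which is why confident wrong predictions are punished so heavily.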

TensorFlow-RNN Sentiment Analysis

I previously wrote a sentiment-analysis post using fully connected neural networks: http://blog.csdn.net/weiwei9363/article/details/78357670. Now, let's use TensorFlow to build an RNN for sentiment analysis of text.

Complete code and detailed walkthrough: https://github.com/jiemojimo/deep-learning/tree/master/sentin-rnn
Training data: https://github.com/jiemojimo/deep-learning/tree/master/sentin-network

Step 1: Data Processing

import numpy as np

# Read the data
with open('reviews.txt', 'r') as f:
    reviews = f.read()
with open('labels.txt', 'r') as f:
    labels = f.read()

# Each \n
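The comment above is cut off in this copy. A common next step in this kind of preprocessing (an assumption based on the standard sentiment-RNN tutorial flow, not necessarily the author's exact code) is to strip punctuation and split the text on `\n`, since each line holds one review:

```python
from string import punctuation

# Toy stand-ins for the contents of reviews.txt / labels.txt:
# one review (or label) per line, separated by '\n'.
reviews = "great movie , loved it !\nterrible acting .\n"
labels = "positive\nnegative\n"

# Remove punctuation, then split into one string per review.
cleaned = ''.join(ch for ch in reviews if ch not in punctuation)
review_list = cleaned.split('\n')[:-1]  # drop the trailing empty string
label_list = labels.split('\n')[:-1]
```

After this step each review is a plain string of words, ready to be tokenized into integer IDs for the RNN's embedding layer.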

I have divided my introduction to TensorFlow's loss functions into three articles; this one covers custom loss functions. (1) TensorFlow's four built-in loss functions; (2) other loss functions; (3) custom loss functions. Custom loss functions close out the series on loss functions. Learning to define your own loss function is very helpful for improving accuracy on classification, segmentation, and other problems; at the same time, exploring new loss functions can also make you

I have divided my introduction to TensorFlow's loss functions into three articles; this one covers the four built-in loss functions.
(1) TensorFlow's four built-in loss functions; (2) other loss functions; (3) custom loss functions.
A loss function quantifies the difference between the classifier's output (the predicted values) and the results we expect (the labels), and it is as important as the classifier structure itself. Many researchers devote themselves to improving loss functions in order to improve the results of classifiers.
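As a sketch of what a custom loss can look like, here is an asymmetric loss that penalizes over-prediction twice as heavily as under-prediction. It is written in NumPy for clarity, and the weights 2.0 and 1.0 are purely illustrative; in TensorFlow the same logic is expressed with tf.where and tf.reduce_mean:

```python
import numpy as np

def asymmetric_loss(y_true, y_pred, over_weight=2.0, under_weight=1.0):
    """Penalize over-prediction more heavily than under-prediction.
    TensorFlow equivalent:
    tf.reduce_mean(tf.where(y_pred > y_true,
                            (y_pred - y_true) * over_weight,
                            (y_true - y_pred) * under_weight))"""
    diff = y_pred - y_true
    per_sample = np.where(diff > 0, diff * over_weight, -diff * under_weight)
    return per_sample.mean()

y_true = np.array([1.0, 2.0])
y_pred = np.array([2.0, 1.0])
loss = asymmetric_loss(y_true, y_pred)  # (1*2.0 + 1*1.0) / 2 = 1.5
```

This kind of asymmetry is useful when the two directions of error have different real-world costs, e.g. over-stocking versus under-stocking in a demand forecast.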

First, let's distinguish the two concepts: 1. The loss is the objective that the whole network optimizes; it participates in the optimization process and drives the updates of the weights W. 2. A metric is used only as an "indicator" to evaluate the network's performance, such as accuracy, so that we can intuitively see how well the algorithm works; it serves only as a view and does not participate in the optimization process.
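To make the distinction concrete, here is a small NumPy illustration: on the same predictions we compute both a loss (smooth, differentiable, used to update the weights) and a metric (accuracy, reported but never differentiated). The numbers are made up for illustration:

```python
import numpy as np

# Predicted class probabilities for 3 samples, and the true classes.
probs = np.array([[0.8, 0.2],
                  [0.4, 0.6],
                  [0.9, 0.1]])
y_true = np.array([0, 1, 1])  # the last sample is misclassified

# Loss: cross-entropy is smooth in the probabilities, so it can be
# differentiated and used to update the weights W.
loss = -np.mean(np.log(probs[np.arange(3), y_true]))

# Metric: accuracy is a step function of the predictions; it is only
# reported for humans and never enters the optimization.
accuracy = np.mean(probs.argmax(axis=1) == y_true)  # 2 of 3 correct
```

Note that the loss still "sees" the 0.1 probability assigned to the true class of the third sample, while accuracy only records that the sample was wrong.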

Supervised Learning

Machine learning is divided into supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. Logistic regression is a typical example of supervised learning. Since it is supervised, the training set can naturally be written as:
\{(x^1,y^1),(x^2,y^2),\cdots,(x^m,y^m)\}
For these m training samples, each sample has n-dimensional features. Adding an offset x_0, each sample contains n+1 features:
x = [x_0, x_1, x_2, \cdots, x_n]^T
where x \in \mathbb{R}^{n+1}, x_0 = 1, and y \in \{0, 1\}.
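To make the setup concrete, here is a small NumPy sketch that builds the (n+1)-dimensional feature vector with the offset x_0 = 1 and evaluates the standard logistic-regression hypothesis h(x) = 1 / (1 + e^(-theta^T x)); the particular numbers are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A sample with n = 3 raw features; prepend the offset x_0 = 1
# to get the (n+1)-dimensional feature vector from the text.
x_raw = np.array([0.5, -1.2, 3.0])
x = np.concatenate(([1.0], x_raw))   # x = [x_0, x_1, ..., x_n]^T

theta = np.zeros(4)                  # n+1 parameters, untrained
h = sigmoid(theta @ x)               # hypothesis: estimated P(y = 1 | x)
# With theta = 0, h is exactly 0.5: the model is maximally uncertain.
```

Training then chooses theta to push h toward 1 on samples with y = 1 and toward 0 on samples with y = 0.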