Preface: This article records the Keras-related parts used in my project. Since the project involves both multi-class and multi-label classification, and there are already many articles on multi-class classification networks, let's focus on the network-building part for multi-label classification. After that, time permitting, we'll cover cross-validation and how to handle some multi-label metric problems in the per-epoch callback functions.
Multi-label refers to multi-label supervised learning. (Personally, I prefer a different Chinese translation for "label", but I will just use the English term here.)
Keras is a building-block deep learning framework that makes it easy and intuitive to assemble common deep learning models. Before TensorFlow came out, Keras, running on Theano as its back end, was almost the most popular deep learning framework of its time. Keras now supports four back ends: Theano, TensorFlow, CNTK, and MXNet (the first three are officially supported; MXNet has not yet been merged into the official release).
source code link: https://www.kaggle.com/yasneghouzam/introduction-to-cnn-keras-0-997-top-6
sns.countplot(label): plots a bar chart counting the number of occurrences of each distinct value in label.
DataFrame.values.reshape(shape): rearranges the values in the DataFrame according to shape, e.g. shape=(-1,28,28,1), where -1 means that dimension is inferred automatically.
Note: after this conversion the DataFrame type becomes an ndarray.
keras.utils.np_utils.to_categorical(Y_train, num_classes=n): converts a Y_train of scalar class indices into one-hot vectors, i.e. turns a list into a matrix.
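A minimal sketch of the reshape and one-hot steps above, using only numpy and pandas (the 2x2 toy "images" and labels are made up for illustration; np.eye reproduces what keras's to_categorical does):

```python
import numpy as np
import pandas as pd

# Toy stand-in for the digit data: each row is a flattened 2x2 "image".
df = pd.DataFrame(np.arange(16).reshape(4, 4))
labels = np.array([0, 2, 1, 2])

# DataFrame.values returns an ndarray; reshape to (samples, height, width, channels).
# -1 lets numpy infer the sample dimension.
X = df.values.reshape(-1, 2, 2, 1)

# One-hot encode the scalar labels, equivalent to to_categorical(labels, num_classes=3):
# row i of the identity matrix is the one-hot vector for class i.
num_classes = 3
Y = np.eye(num_classes)[labels]
```

For the real Kaggle data the reshape would be `(-1, 28, 28, 1)` and `num_classes=10`.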
'''
This script loads pre-trained word embeddings (word2vec embeddings)
into a Keras Embedding layer, and uses it to train a text
classification model on a customized dataset.
'''
from __future__ import print_function
from collections import defaultdict
import os
import numpy as np
import pandas as pd
np.random.seed(1337)
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils.np_utils import to_categorical
from keras.layers import Dense, Input, Flatten
from keras.layers import Conv1D, MaxPooling1D, Embedding
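The central step those imports prepare for is building an embedding matrix whose rows line up with the Tokenizer's word indices. A hedged sketch (the toy vectors and word_index below are made up; in practice they come from the word2vec file and the fitted Tokenizer):

```python
import numpy as np

# Hypothetical pretrained vectors; in practice loaded from a word2vec file.
embeddings_index = {"cat": np.array([0.1, 0.2]), "dog": np.array([0.3, 0.4])}
# Hypothetical token indices, as produced by a fitted keras Tokenizer.
word_index = {"cat": 1, "dog": 2, "fish": 3}
EMBEDDING_DIM = 2

# Row i of the matrix is the vector for token index i; index 0 is reserved
# for padding, and words without a pretrained vector stay all-zero.
embedding_matrix = np.zeros((len(word_index) + 1, EMBEDDING_DIM))
for word, i in word_index.items():
    vec = embeddings_index.get(word)
    if vec is not None:
        embedding_matrix[i] = vec

# The matrix is then handed to the Keras layer, e.g.:
# embedding_layer = Embedding(len(word_index) + 1, EMBEDDING_DIM,
#                             weights=[embedding_matrix],
#                             trainable=False)
```

Setting `trainable=False` keeps the pretrained vectors frozen during training.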
A set of random binary data is generated, and the training set is used to train the Naive Bayes, SVC, and random forest algorithms respectively. For SVC and random forests, the optimal values of their parameters additionally need to be searched for. Finally, the three algorithms are each evaluated with the accuracy (ACC), F1, and ROC AUC metrics.
from sklearn import cross_validation
from sklearn import datasets
from sklearn import naive_bayes
from
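The described workflow can be sketched as follows. Note this uses the current sklearn.model_selection API (the sklearn.cross_validation module imported above has since been removed); the toy data, grids, and names are my own:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Random binary-classification data.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "nb": GaussianNB(),  # no hyperparameters to tune
    # For SVC and random forests, search a small parameter grid via CV.
    "svc": GridSearchCV(SVC(probability=True), {"C": [0.1, 1, 10]}, cv=3),
    "rf": GridSearchCV(RandomForestClassifier(random_state=0),
                       {"n_estimators": [10, 50]}, cv=3),
}

# Evaluate each fitted model with ACC, F1, and ROC AUC on the held-out set.
scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    proba = model.predict_proba(X_te)[:, 1]
    scores[name] = (accuracy_score(y_te, pred),
                    f1_score(y_te, pred),
                    roc_auc_score(y_te, proba))
```

ROC AUC needs class probabilities (or decision scores), hence `probability=True` on the SVC.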
"MATLAB Neural Network 30 Case Analysis" Learning Record (to be Updated):
1. Data classification: classification with multiple outputs, with a vector representing each class.
2. Linear system modeling: fitting parameters by training a neural network on a certain amount of input and output data.
3. Genetic algorithm optimization of a BP neural network for nonlinear function fitting. The neural network can be regarded as a prediction function, and the genetic algorithm then optimizes some parameters of that prediction function.
1. Motivation: Deep learning has achieved great success in images, speech, text, and other fields, driving the launch of a series of intelligent products. However, deep models have many parameters and require a large amount of computation for training and inference. At present, products based on deep learning are mostly driven by server-side computing power and depend heavily on a good network environment. In many cases, we
Data set introduction: Iris is a plant classification data set containing 150 samples. Each sample has four features: sepal length, sepal width, petal length, and petal width, all in cm. The data set has three classes: Iris setosa, Iris versicolor, and Iris virginica. The purpose of our classification here is to infer
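A minimal sketch of loading and classifying this data set with scikit-learn (the choice of logistic regression and the split parameters are my own, not from the original):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 150 samples, 4 features (sepal/petal length and width, in cm), 3 classes.
iris = load_iris()
X_tr, X_te, y_tr, y_te = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=0,
    stratify=iris.target)  # keep the three classes balanced in both splits

clf = LogisticRegression(max_iter=200).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)  # mean accuracy on the held-out samples
```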
MaxNorm: constrains the maximum norm of a hidden layer's incoming weights.
Reference: Dropout: A Simple Way to Prevent Neural Networks from Overfitting, Srivastava, Hinton, et al., 2014. NonNeg: ensures the weights remain non-negative during training (similar to NMF, or a pruning effect).
UnitNorm: constrains a hidden layer's weights to have unit norm.
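What these three constraints do to a weight matrix can be sketched in plain numpy (the toy matrix and function names are my own; Keras applies the equivalent operations after each gradient update, per column of incoming weights):

```python
import numpy as np

W = np.array([[3.0, -4.0],
              [0.0,  2.0]])  # toy weight matrix, one column per unit

def max_norm(w, m=2.0, eps=1e-7):
    """MaxNorm(m): rescale any column whose L2 norm exceeds m down to norm m."""
    norms = np.sqrt((w ** 2).sum(axis=0, keepdims=True))
    return w * np.clip(norms, 0, m) / (eps + norms)

def non_neg(w):
    """NonNeg: clip negative weights to zero."""
    return w * (w >= 0)

def unit_norm(w, eps=1e-7):
    """UnitNorm: rescale each column to unit L2 norm."""
    return w / (eps + np.sqrt((w ** 2).sum(axis=0, keepdims=True)))

# In Keras the constraint is attached to a layer, e.g.:
# Dense(64, kernel_constraint=keras.constraints.MaxNorm(2.0))
```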
Apologies for the rough writing; this is my term paper for the CS5312 deep learning course.
Section II: List and highlights of the papers I have studied. In this section, I separate the papers into three parts: NN architectures, algorithms, and hardware designs.
Gradient-Based Learning Applied to Document Recognition. Yann LeCun, Yoshua Bengio, et al. (1998). The neural networks used in this paper are called LeNet, and they perform well on the MNIST dataset. Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique.
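LeNet alternates convolution (C) and subsampling (S) layers before its fully connected layers. One such stage can be sketched in plain numpy (the function names, the toy image, and the edge kernel are my own; the original paper uses average rather than max pooling):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation: the core operation of LeNet's C-layers."""
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def max_pool2x2(x):
    """2x2 subsampling, playing the role of LeNet's S-layers."""
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:2 * h, :2 * w].reshape(h, 2, w, 2).max(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 "image"
edge = np.array([[1.0, -1.0]])                  # horizontal edge detector
feat = max_pool2x2(conv2d(img, edge))           # one conv + pool stage
```

Stacking several such stages and flattening into dense layers gives the LeNet architecture trained by back-propagation.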