QA Official

keras Practice (I): multi-label Neural Network

https://qaofficial.com/post/2019/05/06/23427-keras-practice-i-multi-label-neural-network.html 2019-05-06
Preface: this article records the Keras-related parts of my project. Since the project involves both multi-class and multi-label classification, and there are already many articles about multi-class classification networks, here I will talk about the network-building part for multi-label classification. After that, if there is time, I will cover cross validation and how to handle some multi-label metric problems in the per-epoch callback functions. Multi-label refers to multi-label supervised learning. (As an aside, I have my own preference for how the term "label" should be translated.)
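
As a taste of the network-building part the article discusses, here is a minimal sketch of a multi-label head in Keras. The layer sizes, num_features, and num_labels are illustrative assumptions, not taken from the project; the key points are the per-label sigmoid outputs and the binary cross-entropy loss.

from keras.models import Sequential
from keras.layers import Dense

num_features = 100   # assumed input dimension
num_labels = 5       # assumed number of labels

model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(num_features,)))
# Multi-label: one sigmoid per label (independent probabilities),
# instead of a single softmax over mutually exclusive classes.
model.add(Dense(num_labels, activation='sigmoid'))
# binary_crossentropy treats each label as an independent binary problem.
model.compile(optimizer='adam', loss='binary_crossentropy')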

keras custom loss function

https://qaofficial.com/post/2019/05/06/23429-keras-custom-loss-function.html 2019-05-06
Keras is a building-block deep learning framework that can be used to easily and intuitively build common deep learning models. Before TensorFlow came out, Keras was almost the most popular deep learning framework of its day, with Theano as the backend. Keras now supports four backends at the same time: Theano, TensorFlow, CNTK, and MXNet (the first three are officially supported; MXNet has not yet been integrated
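
For a sense of what a custom loss looks like: in Keras, a loss is just a function of (y_true, y_pred) built from backend ops that returns a per-sample value, passed to compile as a function object rather than a string. The asymmetric weighting below is an illustrative assumption, not the article's example.

from keras import backend as K

def weighted_mse(y_true, y_pred):
    # Mean squared error that penalizes under-prediction twice as much.
    diff = y_true - y_pred
    # weight 2.0 where the model under-predicts (diff > 0), 1.0 elsewhere
    weights = 1.0 + K.cast(diff > 0, K.floatx())
    return K.mean(weights * K.square(diff), axis=-1)

# Pass the function object (not a string) when compiling:
# model.compile(optimizer='adam', loss=weighted_mse)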

keras functions used to build a CNN - taking the MNIST data set as an example

https://qaofficial.com/post/2019/05/06/24996-keras-functions-used-to-build-cnn-taking-mnist-data-set-as-an-example.html 2019-05-06
Source code link: https://www.kaggle.com/yasneghouzam/introduction-to-cnn-keras-0-997-top-6
sns.countplot(label): plots and counts the number of occurrences of each distinct label in the label column.
DataFrame.values.reshape(shape): rearranges the values in the DataFrame according to shape, e.g. shape=(-1,28,28,1), where -1 means that dimension is inferred from the others. Note: after the conversion, the DataFrame type becomes the ndarray type.
keras.utils.np_utils.to_categorical(Y_train, num_classes=n): converts the scalar labels in Y_train into one-hot vectors, turning the list into a matrix.
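
A short sketch tying these calls together on MNIST-style data; the 'train.csv' path and the 'label' column name follow the Kaggle kernel's conventions and are assumptions here.

import pandas as pd
import seaborn as sns
from keras.utils.np_utils import to_categorical

train = pd.read_csv('train.csv')                   # assumed path
Y_train = train['label']
X_train = train.drop(labels=['label'], axis=1)

sns.countplot(Y_train)                             # bar plot of label frequencies
X_train = X_train.values.reshape(-1, 28, 28, 1)    # ndarray; -1 is inferred
Y_train = to_categorical(Y_train, num_classes=10)  # scalars -> one-hot rows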

keras sentence classification: keras_demo_for_sentence_classification (simplified version)

https://qaofficial.com/post/2019/05/06/24966-keras-sentence-classification-keras-_-demo-_-for-_-sentence-_-classification-simplified-version.html 2019-05-06
''' This script loads pre-trained word embeddings (word2vec embeddings) into a Keras Embedding layer, and uses it to train a text classification model on a customized dataset. '''
from __future__ import print_function
from collections import defaultdict
import os
import numpy as np
import pandas as pd
np.random.seed(1337)
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils.np_utils import to_categorical
from keras.layers import Dense, Input, Flatten
from keras.layers import Conv1D, MaxPooling1D, Embedding
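
The excerpt stops at the imports. A plausible continuation, sketched below under assumed constants (MAX_SEQUENCE_LENGTH, MAX_NB_WORDS, EMBEDDING_DIM, num_classes) and a stand-in random embedding_matrix, would wire the frozen pre-trained embeddings into a small Conv1D classifier:

from keras.models import Model

MAX_SEQUENCE_LENGTH = 1000   # assumed
MAX_NB_WORDS = 20000         # assumed
EMBEDDING_DIM = 100          # assumed
num_classes = 2              # assumed

# In the real script this matrix is filled from the word2vec vectors;
# random values stand in here so the sketch runs on its own.
embedding_matrix = np.random.rand(MAX_NB_WORDS, EMBEDDING_DIM)

sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
x = Embedding(MAX_NB_WORDS, EMBEDDING_DIM,
              weights=[embedding_matrix],
              input_length=MAX_SEQUENCE_LENGTH,
              trainable=False)(sequence_input)  # frozen pre-trained embeddings
x = Conv1D(128, 5, activation='relu')(x)
x = MaxPooling1D(5)(x)
x = Flatten()(x)
preds = Dense(num_classes, activation='softmax')(x)

model = Model(sequence_input, preds)
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')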

sklearn evaluation of three classification algorithms

https://qaofficial.com/post/2019/05/06/25012-sklearn-evaluation-of-three-classification-algorithms.html 2019-05-06
A set of random binary data is generated, and the training set is used to train the Naive Bayes, SVC, and random forest algorithms respectively. For the SVC and random forest algorithms, the optimal values of their parameters additionally need to be found. Finally, the ACC, F1, and ROC AUC metrics are used to evaluate the three algorithms.
from sklearn import cross_validation
from sklearn import datasets
from sklearn import naive_bayes
from
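
A sketch of the described evaluation, using sklearn.model_selection in place of the deprecated sklearn.cross_validation import from the excerpt; the parameter grids are illustrative assumptions.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    'NaiveBayes': GaussianNB(),
    # Grid-search the key hyperparameters of SVC and the random forest.
    'SVC': GridSearchCV(SVC(probability=True), {'C': [0.1, 1, 10]}),
    'RandomForest': GridSearchCV(RandomForestClassifier(),
                                 {'n_estimators': [10, 50, 100]}),
}

for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    proba = clf.predict_proba(X_te)[:, 1]
    print(name,
          'ACC=%.3f' % accuracy_score(y_te, pred),
          'F1=%.3f' % f1_score(y_te, pred),
          'ROC_AUC=%.3f' % roc_auc_score(y_te, proba))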

"MATLAB Neural Network 30 Case Analysis" Learning Notes

https://qaofficial.com/post/2019/05/06/25040-ampquotmatlab-neural-network-30-case-analysisampquot-learning-notes.html 2019-05-06
"MATLAB Neural Network 30 Case Analysis" Learning Record (to be Updated): 1. Data Classification, Classification-Multiple Outputs, Vector Representing [1000] [001] [001] 2. Linear system modeling, fitting parameters, training neural network with a certain amount of input and output data 3. Genetic algorithm optimizes BP neural network-nonlinear function fitting. Neural network can be regarded as a prediction function, while genetic algorithm optimizes BP neural network can be regarded as optimizing some parameters of the prediction function.

BinaryNet for Model Compression

https://qaofficial.com/post/2019/05/06/24992-binarynet-for-model-compression.html 2019-05-06
1. Motivation: deep learning has achieved great success in fields such as images, speech, and text, and has driven a series of intelligent products to market. However, deep models have many parameters and require a large amount of computation for both training and inference. At present, products based on deep learning are mostly driven by server-side computing power and depend heavily on a good network environment. In many cases, we

Classification of Iris Plant Data by OneR Algorithm

https://qaofficial.com/post/2019/05/06/25033-classification-of-iris-plant-data-by-oner-algorithm.html 2019-05-06
Data set introduction: Iris is a plant classification data set with 150 plant records. Each record gives four features: sepal length, sepal width, petal length, and petal width, all in cm. The data set has three categories: Iris Setosa, Iris Versicolour, and Iris Virginica. The purpose of our classification here is to infer
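
For reference, OneR discretizes each feature, builds a majority-class rule per bin, and keeps the single feature whose rule makes the fewest training errors. A compact sketch on Iris follows; the mean-value threshold used for discretization is an assumption, and the article's version may bin differently.

import numpy as np
from collections import Counter
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

best_feature, best_acc, best_rule = None, -1.0, None
for f in range(X.shape[1]):
    # Discretize the feature: above / below its mean.
    bins = (X[:, f] >= X[:, f].mean()).astype(int)
    # For each bin, predict the majority class of the samples in it.
    rule = {b: Counter(y[bins == b]).most_common(1)[0][0] for b in (0, 1)}
    acc = np.mean([rule[b] == t for b, t in zip(bins, y)])
    if acc > best_acc:
        best_feature, best_acc, best_rule = f, acc, rule

print('OneR picks feature %d, training accuracy %.2f' % (best_feature, best_acc))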

Common Constraints of Network Layer Weights

https://qaofficial.com/post/2019/05/06/24995-common-constraints-of-network-layer-weights.html 2019-05-06
MaxNorm: constrains the weights of a hidden layer, for a given input, to a maximum norm. Reference: Dropout: A Simple Way to Prevent Neural Networks from Overfitting, Srivastava, Hinton, et al., 2014.
NonNeg: ensures the weights remain non-negative during training (similar to NMF, or a pruning effect).
UnitNorm: constrains hidden layer weights to have unit norm.
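
A minimal sketch of how these constraints are attached to layers in Keras; the layer sizes and the max-norm value of 3 are illustrative.

from keras.models import Sequential
from keras.layers import Dense
from keras.constraints import max_norm, non_neg, unit_norm

model = Sequential()
# Cap the norm of each unit's incoming weight vector at 3 (MaxNorm).
model.add(Dense(64, activation='relu', input_shape=(20,),
                kernel_constraint=max_norm(3)))
# Keep all weights non-negative during training (NonNeg).
model.add(Dense(32, activation='relu', kernel_constraint=non_neg()))
# Rescale each unit's weight vector to unit norm after every update (UnitNorm).
model.add(Dense(1, activation='sigmoid', kernel_constraint=unit_norm()))
model.compile(optimizer='adam', loss='binary_crossentropy')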

Deep Learning Beginner's Thesis Collection

https://qaofficial.com/post/2019/05/06/24974-deep-learning-beginneramp#x27s-thesis-collection.html 2019-05-06
My writing is bad. This is my term paper for the CS5312 deep learning course. Section II: list and highlights of the papers I have studied. In this section, I separate the papers into 3 parts: NN networks, algorithms, and hardware designs. 1. NN networks: Gradient-Based Learning Applied to Document Recognition. Yann LeCun, Yoshua Bengio. (1998) The neural network used in this paper is called LeNet, which works well on the MNIST dataset. Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique.