QA Official

[Caffe]: A Beginner's Introduction to Caffe (Convolutional Architecture for Fast Feature Embedding)

https://qaofficial.com/post/2019/04/30/24277-caffe-convolutional-architecture-for-fast-feature-embedding-about-the-beginneramp#x27s-entrance-to-caffe-convolutional-architecture-for-fast-feature-embedding.html 2019-04-30
Several important files in Caffe. I have used Caffe for a long time, yet never wrote a post for beginners. Recently, at the request of a junior labmate, I decided to write a simple, quick introductory article for beginners. The first step in training a deep neural network with Caffe is to understand several important files: solver.prototxt
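For orientation, a minimal sketch of how the solver file is consumed from pycaffe; every field value in the comment is an illustrative assumption, not the post's actual configuration:

    import caffe

    # solver.prototxt holds the training hyperparameters, e.g. (illustrative values):
    #   net: "train_val.prototxt"   base_lr: 0.01   lr_policy: "step"
    #   momentum: 0.9   weight_decay: 0.0005   max_iter: 50000   solver_mode: GPU
    caffe.set_mode_gpu()
    solver = caffe.SGDSolver('solver.prototxt')  # parse the solver definition
    solver.solve()                               # run training as configured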

[Keras] Learning Notes on the Chinese Documentation: Getting Started Quickly with Keras

https://qaofficial.com/post/2019/04/30/24179-keras-chinese-document-learning-notes-get-started-quickly-keras.html 2019-04-30
Learning notes based on the official Chinese and English documentation, summarizing the learning process systematically. Keras is a high-level neural network API, written in pure Python and running on top of the TensorFlow, Theano, and CNTK backends. Keras was born to support rapid experimentation and lets you quickly turn your ideas into results. If you have the following requirements, choose Keras: simple and fast prototype design (Keras is highly modular,
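Since the backends are swappable, a small illustration of how to check which one is active; the environment-variable override in the comment is standard Keras behavior:

    # Inspect which backend Keras is using (TensorFlow by default)
    from keras import backend as K
    print(K.backend())  # e.g. 'tensorflow'

    # The backend can be changed in ~/.keras/keras.json, or per run via
    #   KERAS_BACKEND=theano python my_script.py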

Detailed Explanation of Official Website Example 4.38 (reuters_mlp.py): Keras Learning Notes 4

https://qaofficial.com/post/2019/04/30/24318-detailed-explanation-of-official-website-example-4.38reuters_mlp.py-keras-learning-note-4.html 2019-04-30
Training and evaluating a simple MLP (multi-layer perceptron) on the Reuters newswire topic-classification task. Keras example directory. Datasets used by Keras (downloads include the following): MNIST, cifar-10-batches-py, imdb.npz, imdb_word_index.json, nietzsche.txt, reuters.npz, fra-eng. Code comments: '''Trains
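The official reuters_mlp.py follows this pattern: vectorize each newswire into a binary bag-of-words vector, then train a small MLP. A condensed sketch (hyperparameters mirror the example but are not guaranteed to match the post's walkthrough):

    import keras
    from keras.datasets import reuters
    from keras.models import Sequential
    from keras.layers import Dense, Dropout, Activation
    from keras.preprocessing.text import Tokenizer

    max_words = 1000

    # Load the Reuters newswire dataset, keeping the 1000 most frequent words
    (x_train, y_train), (x_test, y_test) = reuters.load_data(num_words=max_words)
    num_classes = max(y_train) + 1

    # Turn each article into a binary bag-of-words vector; one-hot the labels
    tokenizer = Tokenizer(num_words=max_words)
    x_train = tokenizer.sequences_to_matrix(x_train, mode='binary')
    x_test = tokenizer.sequences_to_matrix(x_test, mode='binary')
    y_train = keras.utils.to_categorical(y_train, num_classes)
    y_test = keras.utils.to_categorical(y_test, num_classes)

    # A simple MLP: one hidden layer with dropout
    model = Sequential()
    model.add(Dense(512, input_shape=(max_words,)))
    model.add(Activation('relu'))
    model.add(Dropout(0.5))
    model.add(Dense(num_classes))
    model.add(Activation('softmax'))

    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.fit(x_train, y_train, batch_size=32, epochs=5, validation_split=0.1)
    score = model.evaluate(x_test, y_test, batch_size=32)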

Keras for Deep Learning

https://qaofficial.com/post/2019/04/30/24180-keras-for-deep-learning.html 2019-04-30
Note that before using Keras you must first install Theano or TensorFlow; Keras uses TensorFlow by default.

First create a model:

    from keras.models import Sequential
    model = Sequential()

Then add the neural layers and activation functions:

    from keras.layers import Dense, Activation
    model.add(Dense(units=64, input_dim=100))
    model.add(Activation('relu'))
    model.add(Dense(units=10))
    model.add(Activation('softmax'))

Compile with a loss (cost) function and an optimizer:

    model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])

You can also configure the loss function and optimizer with explicit parameters:

    model.compile(loss=keras.losses.categorical_crossentropy,
                  optimizer=keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True))
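The excerpt stops at compilation; to round out the workflow, a hedged sketch of training and evaluation on placeholder data (x_train and y_train below are random stand-ins, not from the post; assumes import keras and the model built above):

    import numpy as np
    import keras

    # Toy data matching the model above: 100 input features, 10 classes
    x_train = np.random.random((1000, 100))
    y_train = keras.utils.to_categorical(np.random.randint(10, size=(1000, 1)), num_classes=10)

    model.fit(x_train, y_train, epochs=5, batch_size=32)          # iterate on the data
    loss_and_metrics = model.evaluate(x_train, y_train, batch_size=128)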

Detailed Explanation of Official Website Example 4.40 (variational_autoencoder.py): Keras Learning Notes 4

https://qaofficial.com/post/2019/04/30/24207-official-website-example-details-4.40-variant-_-autoencoder.py-keras-learning-notes-4.html 2019-04-30
A demonstration script for building a variational autoencoder with Keras. Keras example directory. Code comments: '''This script demonstrates how to build a variational autoencoder with Keras. #Reference - Auto-Encoding Variational Bayes
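The heart of that example is the reparameterization trick, which keeps the sampling step differentiable. A condensed sketch of the encoder and sampling layer (dimensions are illustrative; the VAE loss with its KL term is omitted for brevity):

    from keras.layers import Input, Dense, Lambda
    from keras.models import Model
    from keras import backend as K

    original_dim, intermediate_dim, latent_dim = 784, 256, 2  # illustrative sizes

    x = Input(shape=(original_dim,))
    h = Dense(intermediate_dim, activation='relu')(x)
    z_mean = Dense(latent_dim)(h)       # mean of q(z|x)
    z_log_var = Dense(latent_dim)(h)    # log-variance of q(z|x)

    def sampling(args):
        # Reparameterization trick: z = mu + sigma * epsilon, epsilon ~ N(0, I)
        z_mean, z_log_var = args
        epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim))
        return z_mean + K.exp(z_log_var / 2) * epsilon

    z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])
    h_decoded = Dense(intermediate_dim, activation='relu')(z)
    x_decoded = Dense(original_dim, activation='sigmoid')(h_decoded)
    vae = Model(x, x_decoded)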

The Difference Between Conv1D and Conv2D in Keras

https://qaofficial.com/post/2019/04/30/24238-the-difference-between-conv1d-and-conv2d-in-keras.html 2019-04-30
Please correct me if anything here is wrong. My answer: when the Conv2D input has a single channel, there is no difference between the two, and they can be converted into each other. First of all, both ultimately call the same back-end code (taking TensorFlow as an example, found in tensorflow_backend.py):

    x = tf.nn.convolution(
        input=x,
        filter=kernel,
        dilation_rate=(dilation_rate,),
        strides=(strides,),
        padding=padding,
        data_format=tf_data_format)

The difference is that
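As a sanity check of that claim, a small sketch (the shapes are my own illustrative choices): a Conv1D over a length-100 sequence of 128-dimensional steps corresponds to a single-channel Conv2D whose kernel spans the full feature axis.

    from keras.models import Sequential
    from keras.layers import Conv1D, Conv2D

    # Conv1D: 32 filters of width 3 over (steps=100, features=128)
    m1 = Sequential([Conv1D(32, 3, input_shape=(100, 128))])
    print(m1.output_shape)   # (None, 98, 32)

    # Single-channel Conv2D with kernel (3, 128) over (100, 128, 1)
    m2 = Sequential([Conv2D(32, (3, 128), input_shape=(100, 128, 1))])
    print(m2.output_shape)   # (None, 98, 1, 32) -- same shape after squeezing
                             # the singleton axis; with identical weights the
                             # output values match as well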

Watermelon Book + Machine Learning in Action + Andrew Ng's Machine Learning (3): Machine Learning Foundations (Multi-Class Classification, Class Imbalance)

https://qaofficial.com/post/2019/04/30/24144-watermelon-book-actual-combat-andrew-ng-machine-learning-3-machine-learning-foundation-multi-classification-category-imbalance.html 2019-04-30
If this article helps you even a little, please follow and like it; that would make me very happy. 0. Preface: this article introduces multi-class classification and class imbalance in machine learning. 1. Multi-Class Learning: some algorithms can perform multi-class classification directly, while others cannot. The basic idea is to split the multi-class task into several binary classification tasks, as sketched below.
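The classic decomposition strategies are one-vs-one (OvO) and one-vs-rest (OvR). A brief scikit-learn illustration of OvR (the dataset and base classifier are my own choices for the demo):

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier

    X, y = load_iris(return_X_y=True)

    # OvR trains one binary classifier per class (3 here) and predicts
    # the class whose classifier is most confident
    clf = OneVsRestClassifier(LogisticRegression()).fit(X, y)
    print(clf.predict(X[:5]))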

2019 Fall Recruitment Preparation: Machine Learning Foundations

https://qaofficial.com/post/2019/04/29/24105-2019-fall-recruit-preparation-machine-learning-foundation.html 2019-04-29
Fundamentals of Machine Learning: common calculation formulas. Term frequency–inverse document frequency (TF-IDF):

$$\text{TF} = \frac{\text{number of times the word appears in the article}}{\text{total number of words in the article}}$$

$$\text{IDF} = \log\left(\frac{\text{total number of documents in the corpus}}{\text{number of documents containing the word} + 1}\right)$$

Common loss functions: log loss (cross-entropy loss, softmax loss) is mostly used for classification problems.
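These two formulas in code, on a toy corpus of my own (natural log assumed):

    import math

    docs = [["the", "cat", "sat"], ["the", "dog", "barked"], ["the", "cat", "ran"]]

    def tf(word, doc):
        # term frequency: occurrences of the word / total words in the document
        return doc.count(word) / len(doc)

    def idf(word, corpus):
        # inverse document frequency, with +1 smoothing in the denominator
        containing = sum(1 for d in corpus if word in d)
        return math.log(len(corpus) / (containing + 1))

    def tf_idf(word, doc, corpus):
        return tf(word, doc) * idf(word, corpus)

    print(tf_idf("cat", docs[0], docs))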

Using VGG16 in Keras to Identify Cats and Dogs

https://qaofficial.com/post/2019/04/29/23899-keras-use-vgg16-to-identify-cats-and-dogs.html 2019-04-29
Import modules:

    import os
    import random
    import numpy as np
    import tensorflow as tf
    import seaborn as sns
    import matplotlib.pyplot as plt
    from keras.models import Sequential, Model
    from keras.layers import Dense, Dropout, Activation, Flatten, Input
    from keras.layers.convolutional import Conv2D, MaxPooling2D
    from keras.optimizers import RMSprop, Adam, SGD
    from keras.preprocessing import image
    from keras.preprocessing.image import ImageDataGenerator
    from keras.applications.vgg16 import VGG16, preprocess_input
    from sklearn.model_selection import train_test_split

Function to read the pictures from the target
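The usual transfer-learning pattern with VGG16 looks like this; a sketch under my own assumptions about input size and classifier head, not necessarily the post's exact model:

    from keras.applications.vgg16 import VGG16
    from keras.models import Model
    from keras.layers import Flatten, Dense

    # Load the VGG16 convolutional base pre-trained on ImageNet, without its head
    base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
    for layer in base.layers:
        layer.trainable = False   # freeze the pre-trained weights

    # Attach a small binary head for cat-vs-dog
    x = Flatten()(base.output)
    x = Dense(256, activation='relu')(x)
    out = Dense(1, activation='sigmoid')(x)

    model = Model(inputs=base.input, outputs=out)
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])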

LSTM Text Classification Example

https://qaofficial.com/post/2019/04/29/24023-lstm-text-classification-example.html 2019-04-29
The data to be classified are text documents that have already been word-segmented; each line represents one article. The segmentation is fairly rough and no stop-word filtering has been applied; the results should improve noticeably once stop words are filtered out. 1. Load data

    # -*- coding: utf-8 -*-
    import sys
    reload(sys)                      # Python 2 idiom: re-expose setdefaultencoding
    sys.setdefaultencoding('utf-8')  # force UTF-8 as the default string encoding

    def loadData(fileN
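The excerpt breaks off mid-definition; rather than guess the rest of loadData, here is a separate, hedged sketch of how such a loader and LSTM classifier typically look in Keras (the load_data name, hyperparameters, and binary labels are all my own illustrative assumptions):

    from keras.models import Sequential
    from keras.layers import Embedding, LSTM, Dense

    # Hypothetical loader: one pre-segmented article per line, words space-separated
    def load_data(path):
        with open(path) as f:
            return [line.strip().split() for line in f]

    max_features = 20000   # assumed vocabulary size
    maxlen = 100           # assumed fixed sequence length after padding

    # Assumes articles have been mapped to integer word indices and padded to maxlen
    model = Sequential()
    model.add(Embedding(max_features, 128, input_length=maxlen))
    model.add(LSTM(64))
    model.add(Dense(1, activation='sigmoid'))   # binary label assumed
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])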