QA Official

keras Generates Text Using LSTM

https://qaofficial.com/post/2019/03/31/24040-keras-generates-text-using-lstm.html 2019-03-31
This article mainly introduces a character-level text-generation implementation using an LSTM. The following is the sample code:

    # coding: utf-8
    # Download the corpus and convert it to lowercase
    import keras
    import numpy as np
    path = keras.utils.get_file(
        'nietzsche.txt',
        origin='https://s3.amazonaws.com/text-datasets/nietzsche.txt')
    text = open(path).read().lower()
    print('Corpus length:', len(text))
    # Next, extract sequences of length "ma
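The excerpt cuts off before the generation step; a minimal sketch of the temperature-based character sampling used in the standard Keras LSTM text-generation example (function and variable names here are illustrative, not from the article):

```python
import numpy as np

def sample(preds, temperature=1.0):
    """Sample a character index from the model's softmax output,
    reweighted by temperature (lower = more conservative choices)."""
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds + 1e-12) / temperature   # small epsilon avoids log(0)
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)         # renormalize to a distribution
    return int(np.argmax(np.random.multinomial(1, preds, 1)))

probs = [0.1, 0.6, 0.3]                 # toy softmax output over 3 characters
idx = sample(probs, temperature=0.5)    # index of the sampled character
```

At low temperature the distribution sharpens toward the most likely character; at high temperature sampling becomes closer to uniform.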

keras ImageDataGenerator parameter detailed explanation and usage examples

https://qaofficial.com/post/2019/03/31/23705-keras-imagedatagenerator-parameter-detailed-explanation-and-usage-examples.html 2019-03-31
keras image generator ImageDataGenerator:

    keras.preprocessing.image.ImageDataGenerator(
        featurewise_center=False,
        samplewise_center=False,
        featurewise_std_normalization=False,
        samplewise_std_normalization=False,
        zca_whitening=False,
        zca_epsilon=1e-6,
        rotation_range=0.,
        width_shift_range=0.,
        height_shift_range=0.,
        shear_range=0.,
        zoom_range=0.,
        channel_shift_range=0.,
        fill_mode='nearest',
        cval=0.,
        horizontal_flip=False,
        vertical_flip=False,
        rescale=None,
        preprocessing_function=None,
        data_format=K.image_data_format())

Used to generate batches of image data, with support for real-time data augmentation. During training, the generator yields data indefinitely until the specified number of epochs is reached. Parameters: featurewise_center: boolean; centers the input dataset feature-wise (mean 0 over the dataset). samplewise_center: boolean value, making
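As a rough illustration of what two of these parameters do (a toy numpy sketch, not the keras implementation), here is rescale plus horizontal_flip applied to a batch:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_batch(images, rescale=1. / 255, horizontal_flip=True):
    """Toy version of two ImageDataGenerator options: multiply pixels
    by `rescale`, and randomly mirror each image left-right."""
    out = images.astype('float64') * rescale
    if horizontal_flip:
        for i in range(len(out)):
            if rng.random() < 0.5:          # each image flipped with prob 0.5
                out[i] = out[i][:, ::-1]    # reverse the width axis
    return out

batch = rng.integers(0, 256, size=(4, 8, 8, 3))  # 4 fake 8x8 RGB images
aug = augment_batch(batch)                       # values now in [0, 1]
```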

keras code analysis of fastercnn

https://qaofficial.com/post/2019/03/31/23857-keras-code-analysis-of-fastercnn.html 2019-03-31
The code analyzed in this article comes from https://github.com/yhenon/keras-frcnn (if this infringes, it will be removed). Keras is a Python package similar to TensorFlow; it can also be used to design convolutional neural networks. Compared with TensorFlow it is relatively easy to use, but it still requires TensorFlow or Theano as a backend. This article analyzes the Keras implementation on the TensorFlow backend, with keras==2.0.3. The dataset is Pascal VOC2007. After downloading it, you need to manually merge the train and test sets and place them in the train_path folder under the root directory (which you need to create yourself).

keras transfer learning: change the VGG16 output layer and retrain with ImageNet weights

https://qaofficial.com/post/2019/03/31/23911-keras-migration-learning-change-vgg16-output-layer-use-imagenet-weight-to-retrain..html 2019-03-31
Transfer learning: run your own data on an existing network. Retain the weights of every layer of the existing network except the output layer, change the output layer's number of classes, and train your own network starting from the existing network's weights. Take keras 2.1.5 / VGG16 as an example. Import the necessary libraries:

    from keras.preprocessing.image import ImageDataGenerator
    from keras import optimizers
    from keras.models import Sequential
    from keras.layers import Dropout, Flatten, Dense
    from keras import Model
    from keras
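A sketch of the approach the excerpt describes, assuming the keras VGG16 application is available; num_classes is a hypothetical value, and weights=None is used here only to avoid the ImageNet download (the article uses weights='imagenet'):

```python
from keras.applications import VGG16
from keras.layers import Dense, Flatten
from keras.models import Model

num_classes = 5  # hypothetical class count for your own data

# weights='imagenet' in the article; None here only to skip the download
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))

for layer in base.layers:        # freeze the existing layers
    layer.trainable = False

x = Flatten()(base.output)       # new head replacing the original top
x = Dense(256, activation='relu')(x)
out = Dense(num_classes, activation='softmax')(x)  # new output layer

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer='sgd', loss='categorical_crossentropy')
```

Only the new head is trainable, so training updates just the replacement output layers while the pretrained convolutional weights stay fixed.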

180411 Customize batch-generator Batch Data Generator with python

https://qaofficial.com/post/2019/03/31/23721-180411-customize-batch-generator-batch-data-generator-with-python.html 2019-03-31
Simple demonstration code of the basic idea (in practice, shuffling is required as in the code below, and the order differs after each shuffle):

    import numpy as np
    a = np.arange(100)
    def batch_gen(data):  # define the batch data generator
        idx = 0
        while True:
            if idx + 10 > 100:
                idx = 0
            start = idx
            idx += 10
            yield data[start:start+10]
    gen = batch_gen(a)
    for i in range(20):
        b = next(gen)
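The excerpt notes that shuffling is needed in practice; one way to sketch that (a variant of the generator above that reshuffles at the start of every pass over the data; names are illustrative):

```python
import numpy as np

def batch_gen_shuffled(data, batch_size=10, seed=0):
    """Yield batches forever, reshuffling the index order each epoch."""
    rng = np.random.default_rng(seed)
    n = len(data)
    while True:
        order = rng.permutation(n)                  # new order every epoch
        for start in range(0, n, batch_size):
            yield data[order[start:start + batch_size]]

a = np.arange(100)
gen = batch_gen_shuffled(a)
# one full epoch = 10 batches; together they cover all 100 elements
first_epoch = np.concatenate([next(gen) for _ in range(10)])
```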

CS231n Note 1 - Softmax Loss and Multiclass SVM Loss

https://qaofficial.com/post/2019/03/31/23758-cs231n-note-1-softmaxloss-and-multiclass-svm-loss.html 2019-03-31
Softmax Loss and Multiclass SVM Loss. Softmax Loss: given $(x_i, y_i)$, where $x_i$ is the image and $y_i$ is its (integer) class label, let $s = f(x_i, W)$, where $s$ is the output of the network. The loss is defined as:

$$P(Y = k \mid X = x_i) = \dfrac{e^{s_k}}{\sum_j e^{s_j}}, \qquad L_i = -\log P(Y = y_i \mid X = x_i)$$

For example, $s = [3.2, 5.1, 1.7]$
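Plugging the example scores into the formula above (taking the true class to be the first entry, purely for illustration):

```python
import numpy as np

s = np.array([3.2, 5.1, 1.7])    # network scores from the example
y = 0                            # assumed true class index (illustrative)

p = np.exp(s) / np.exp(s).sum()  # softmax: P(Y = k | X = x_i)
L = -np.log(p[y])                # softmax (cross-entropy) loss L_i, ~2.07 here
```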

DataCastle champion kuhung's thinking and code

https://qaofficial.com/post/2019/03/31/23931-datacastle-champion-ku-hung#39s-thinking-and-code.html 2019-03-31
I'm kuhung, a contestant in the DataCastle cat-vs-dog competition. In the evaluation, my submission scored 0.98639. The following is my preparation process and experience (complete code and more comprehensive comments are available at the end). In the cat-vs-dog competition, participants need to build a model from the training set to identify the dogs in the test set; whoever identifies the most gets the better score. Contestants

Evaluation Index for Multi-label Classification

https://qaofficial.com/post/2019/03/31/23751-evaluation-index-for-multi-label-classification.html 2019-03-31
Currently, there are a large number of evaluation metrics for multi-label classification. Generally speaking, they can be divided into two categories: (1) one is called document-pivoted (also called instance-based or example-based), which, as the name implies, predicts labels for each test document; (2) the second is called label-pivoted (also known as label-based), which focuses on predicting documents for each label. Each category can further include the following two types of prediction: (1) one is binary predictions.
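A small numpy sketch of the two pivots on binary predictions (toy data; the specific metric per pivot is chosen here only for illustration):

```python
import numpy as np

# rows = documents, columns = labels (toy binary truth / predictions)
Y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0]])
Y_pred = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [1, 0, 0]])

# document-pivoted (example-based): score each document, then average
example_acc = np.mean([
    (Y_true[i] & Y_pred[i]).sum() / (Y_true[i] | Y_pred[i]).sum()
    for i in range(Y_true.shape[0])
])  # Jaccard accuracy per document, averaged over documents

# label-pivoted (label-based): score each label column, then average
label_acc = np.mean([
    (Y_true[:, j] == Y_pred[:, j]).mean()
    for j in range(Y_true.shape[1])
])  # plain accuracy per label, averaged over labels
```

The two averages generally differ, which is why multi-label papers state which pivot (and which averaging) a reported number uses.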

Faster-RCNN (keras) Dog Identification - Data Interface

https://qaofficial.com/post/2019/03/31/23897-faster-rcnnkeras-realize-dog-identification-data-docking.html 2019-03-31
Preface: this part differs a lot in how the data is hooked up, because the code bases used are different, so a record is made here. The initial version of the data interface is this line: all_imgs, classes_count, class_mapping = get_data(options.train_path). In get_data.py, I found that all_imgs contains the width and height of each picture; therefore, when the annotation boxes are generated, the width and height return values are added. The data

How to Utilize Keras Extensibility

https://qaofficial.com/post/2019/03/31/23712-how-to-utilize-keras-extensibility.html 2019-03-31
Keras is a framework for building neural network models in Python. Its syntax is similar to Torch's. Personally, I think Keras is characterized by good encapsulation: metrics commonly reported during training, along with the usual optimizers and objective functions, are built in, which makes it well suited to writing course assignments. The philosophy of Keras is somewhat similar to Python's, that is, try not to build your own