QA Official

Keras Learning Note 3: BatchNormalization Layer and Merge Layer

https://qaofficial.com/post/2019/04/30/24218-keras-learning-note-3-batchnormalization-layer-and-merge-layer.html 2019-04-30
1. BatchNormalization layer: this layer re-normalizes the activations of the previous layer on each batch, so that the mean of its output is close to 0 and its standard deviation is close to 1.
keras.layers.normalization.BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones', beta_regularizer=None, gamma_regularizer=None, beta_constraint=None, gamma_constraint=None)
Parameters: axis: an integer specifying the axis to normalize, usually the feature axis. For example, axis=1 is generally set after a 2D convolution with data_format="
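As a rough illustration of what the layer computes, here is a minimal NumPy sketch of the normalization formula (not Keras's implementation; gamma and beta correspond to the scale/center parameters above):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, epsilon=0.001, axis=0):
    # Normalize over the batch axis so each feature has roughly zero mean
    # and unit standard deviation, then apply the learnable scale (gamma)
    # and shift (beta).
    mean = x.mean(axis=axis, keepdims=True)
    var = x.var(axis=axis, keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + epsilon)
    return gamma * x_hat + beta

x = np.random.randn(32, 4) * 5.0 + 10.0   # batch of 32 samples, 4 features
y = batch_norm(x)
print(y.mean(axis=0))  # each value close to 0
print(y.std(axis=0))   # each value close to 1
```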

Keras TensorFlow mix has invalid trainable=False setting

https://qaofficial.com/post/2019/04/30/24176-keras-tensorflow-mix-has-invalid-trainablefalse-setting.html 2019-04-30
This is a problem I ran into recently. First, a description of it: I have a trained model (for example, VGG16) and I want to make some changes to it, for example by adding a fully connected layer. For various reasons, I can only use TensorFlow to optimize the model, and a tf optimizer updates the weights of all tf.trainable_variables() by default. The problem lies in
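The usual workaround is to hand the optimizer an explicit var_list instead of letting it pick up everything in tf.trainable_variables(). A sketch of the filtering step, using plain-Python stand-ins for TF variables (the VGG16 scope names are illustrative, since the excerpt does not show the model):

```python
# With real TensorFlow 1.x this pattern would be:
#   train_vars = [v for v in tf.trainable_variables()
#                 if not v.name.startswith('vgg16/')]
#   train_op = optimizer.minimize(loss, var_list=train_vars)

class Var:
    """Stand-in for a tf.Variable; only the name matters here."""
    def __init__(self, name):
        self.name = name

all_trainable = [Var('vgg16/conv1/kernel:0'),
                 Var('vgg16/conv1/bias:0'),
                 Var('new_fc/kernel:0'),
                 Var('new_fc/bias:0')]

# Keep only the newly added fully connected layer's weights.
train_vars = [v for v in all_trainable if not v.name.startswith('vgg16/')]
print([v.name for v in train_vars])  # only the new_fc variables remain
```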

ProposalLayer source code analysis

https://qaofficial.com/post/2019/04/30/24331-proposallayer-source-code-analysis.html 2019-04-30
Tags (space-separated): faster-rcnn series RCNN object detection source code
# --------------------------------------------------------
# Faster R-CNN
# Copyright (c) 2015 Microsoft
# Licensed under The MIT License [see LICENSE for details]
# Written by Ross Girshick and Sean Bell
# --------------------------------------------------------
import caffe
import numpy as np
import yaml
from fast_rcnn.config import cfg
from generate_anchors import generate_anchors
from fast_rcnn.bbox_transform import bbox_transform_inv, clip_boxes
from fast_rcnn.nms_wrapper import nms
DEBUG

TensorFlow Framework --Keras Use

https://qaofficial.com/post/2019/04/30/24284-tensorflow-framework-keras-use.html 2019-04-30
Keras is a high-level Python neural network framework with detailed documentation. Keras has been added to TensorFlow as its default high-level API. If readers don't want to deal with the details of TensorFlow and only need modularity, Keras is a good choice. If TensorFlow is compared to Java or C++, then Keras is the Python of this world: as a high-level wrapper over TensorFlow, it can be used in conjunction with TensorFlow to quickly build a prototype.

keras Implements Unet for Character Location and Recognition Classification

https://qaofficial.com/post/2019/04/30/24348-keras-implements-unet-for-character-location-and-recognition-classification.html 2019-04-30
#coding=utf-8
import cv2
import numpy as np
from keras.utils import to_categorical
from model.augmentations import randomHueSaturationValue, randomShiftScaleRotate, randomHorizontalFlip
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint, TensorBoard
import matplotlib.pyplot as plt
from keras.preprocessing.image import img_to_array
from keras.utils.vis_utils import plot_model
from keras import backend as K
from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping
# Missing from the original excerpt but required by the model below:
from keras.models import Model
from keras.layers import Input, Conv2D, BatchNormalization, Activation, MaxPooling2D, UpSampling2D, concatenate

class LossHistory(Callback):
    def on_train_begin(self, logs={}):
        self.losses = []
    def on_batch_end(self, batch, logs={}):
        self.losses.append(logs.get('loss'))
    # def on_epoch_end(self, epoch, logs=None):

# unet model
def get_unet_128_muticlass(input_shape=(None, 128, 128, 3), num_classes=1):
    inputs = Input(batch_shape=input_shape)  # shape=input_shape)
    # 128
    down1 = Conv2D(64, (3, 3), padding='same')(inputs)
    down1 = BatchNormalization()(down1)
    down1 = Activation('relu')(down1)
    down1 = Conv2D(64, (3, 3), padding='same')(down1)
    down1 = BatchNormalization()(down1)
    down1 = Activation('relu')(down1)
    down1_pool = MaxPooling2D((2, 2), strides=(2, 2))(down1)
    # 64
    down2 = Conv2D(128, (3, 3), padding='same')(down1_pool)
    down2 = BatchNormalization()(down2)
    down2 = Activation('relu')(down2)
    down2 = Conv2D(128, (3, 3), padding='same')(down2)
    down2 = BatchNormalization()(down2)
    down2 = Activation('relu')(down2)
    down2_pool = MaxPooling2D((2, 2), strides=(2, 2))(down2)
    # 32
    down3 = Conv2D(256, (3, 3), padding='same')(down2_pool)
    down3 = BatchNormalization()(down3)
    down3 = Activation('relu')(down3)
    down3 = Conv2D(256, (3, 3), padding='same')(down3)
    down3 = BatchNormalization()(down3)
    down3 = Activation('relu')(down3)
    down3_pool = MaxPooling2D((2, 2), strides=(2, 2))(down3)
    # 16
    down4 = Conv2D(512, (3, 3), padding='same')(down3_pool)
    down4 = BatchNormalization()(down4)
    down4 = Activation('relu')(down4)
    down4 = Conv2D(512, (3, 3), padding='same')(down4)
    down4 = BatchNormalization()(down4)
    down4 = Activation('relu')(down4)
    down4_pool = MaxPooling2D((2, 2), strides=(2, 2))(down4)
    # 8
    center = Conv2D(1024, (3, 3), padding='same')(down4_pool)
    center = BatchNormalization()(center)
    center = Activation('relu')(center)
    center = Conv2D(1024, (3, 3), padding='same')(center)
    center = BatchNormalization()(center)
    center = Activation('relu')(center)
    # center
    up4 = UpSampling2D((2, 2))(center)
    up4 = concatenate([down4, up4], axis=3)
    up4 = Conv2D(512, (3, 3), padding='same')(up4)
    up4 = BatchNormalization()(up4)
    up4 = Activation('relu')(up4)
    up4 = Conv2D(512, (3, 3), padding='same')(up4)
    up4 = BatchNormalization()(up4)
    up4 = Activation('relu')(up4)
    up4 = Conv2D(512, (3, 3), padding='same')(up4)
    up4 = BatchNormalization()(up4)
    up4 = Activation('relu')(up4)
    # 16
    up3 = UpSampling2D((2, 2))(up4)
    up3 = concatenate([down3, up3], axis=3)
    up3 = Conv2D(256, (3, 3), padding='same')(up3)
    up3 = BatchNormalization()(up3)
    up3 = Activation('relu')(up3)
    up3 = Conv2D(256, (3, 3), padding='same')(up3)
    up3 = BatchNormalization()(up3)
    up3 = Activation('relu')(up3)
    up3 = Conv2D(256, (3, 3), padding='same')(up3)
    up3 = BatchNormalization()(up3)
    up3 = Activation('relu')(up3)
    # 32
    up2 = UpSampling2D((2, 2))(up3)
    up2 = concatenate([down2, up2], axis=3)
    up2 = Conv2D(128, (3, 3), padding='same')(up2)
    up2 = BatchNormalization()(up2)
    up2 = Activation('relu')(up2)
    up2 = Conv2D(128, (3, 3), padding='same')(up2)
    up2 = BatchNormalization()(up2)
    up2 = Activation('relu')(up2)
    up2 = Conv2D(128, (3, 3), padding='same')(up2)
    up2 = BatchNormalization()(up2)
    up2 = Activation('relu')(up2)
    # 64
    up1 = UpSampling2D((2, 2))(up2)
    up1 = concatenate([down1, up1], axis=3)
    up1 = Conv2D(64, (3, 3), padding='same')(up1)
    up1 = BatchNormalization()(up1)
    up1 = Activation('relu')(up1)
    up1 = Conv2D(64, (3, 3), padding='same')(up1)
    up1 = BatchNormalization()(up1)
    up1 = Activation('relu')(up1)
    up1 = Conv2D(64, (3, 3), padding='same')(up1)
    up1 = BatchNormalization()(up1)
    up1 = Activation('relu')(up1)
    # 128
    classify = Conv2D(num_classes, (1, 1), activation='softmax')(up1)
    model = Model(inputs=inputs, outputs=classify)
    model.
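The "# 128 … # 8" comments in the excerpt track the spatial resolution through the network; the arithmetic can be checked with a few lines of plain Python (assuming the 128x128 input that the function name suggests):

```python
size = 128
encoder_sizes = [size]
# Each MaxPooling2D((2, 2), strides=(2, 2)) halves the spatial resolution.
for _ in range(4):
    size //= 2
    encoder_sizes.append(size)
print(encoder_sizes)  # [128, 64, 32, 16, 8]

# Each UpSampling2D((2, 2)) doubles it again, so after four of them the
# decoder returns to the 128x128 input resolution, and each concatenate
# joins feature maps of matching size from the encoder (skip connections).
for _ in range(4):
    size *= 2
print(size)  # 128
```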

keras backend setting tensorflow,theano

https://qaofficial.com/post/2019/04/30/24214-keras-backend-setting-tensorflowtheano.html 2019-04-30
Installation steps for a Win7 system environment: 1. First install Python; installing Anaconda is recommended. 2. After installing Anaconda, open the Anaconda Prompt command line pr
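Once the environment is set up, the backend itself is selected either through the KERAS_BACKEND environment variable or in the ~/.keras/keras.json config file; a minimal sketch of both options:

```python
import json
import os

# Option 1: environment variable; must be set before importing keras.
os.environ['KERAS_BACKEND'] = 'theano'

# Option 2: the ~/.keras/keras.json config file; the "backend" field
# may be "tensorflow" or "theano".
config = {
    "image_data_format": "channels_last",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "theano",
}
print(json.dumps(config, indent=4))
```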

keras compilation has ImportError: cannot import name merge

https://qaofficial.com/post/2019/04/30/24219-keras-compilation-has-importerror-cannotimportnamemerge.html 2019-04-30
The source code comes from AlexNet-Experiments-Keras. Some problems occurred during compilation; they are recorded here. ========================== After consulting references, alexnet_base.py and customlayer.py were modified: from keras.layers import merge is used instead (the corresponding code segments also need to be changed to merge), and everything is imported from keras.layers instead of keras.layers.core. ============================ First of all, I used tensorflow as the backend. At first, I compiled directly in a Jupyter notebook and got: ValueError: You are trying
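In newer Keras versions the functional merge helpers live in keras.layers (e.g. from keras.layers import concatenate), and they join tensors along an axis the same way NumPy's concatenate does, which makes the axis semantics easy to illustrate without a Keras install:

```python
import numpy as np

# In newer Keras the old `from keras.layers import merge` pattern becomes:
#   from keras.layers import concatenate
#   merged = concatenate([branch_a, branch_b], axis=-1)
# The axis semantics mirror NumPy:
a = np.ones((2, 3))   # stand-in for one branch's output
b = np.zeros((2, 4))  # stand-in for the other branch's output
merged = np.concatenate([a, b], axis=-1)  # join along the feature axis
print(merged.shape)  # (2, 7)
```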

keras regular term constraint term activation function callback function pre-training model

https://qaofficial.com/post/2019/04/30/24191-keras-regular-term-constraint-term-activation-function-callback-function-pre-training-model.html 2019-04-30
Activation functions (Activations): an activation function can be applied either through a separate Activation layer or through the activation parameter when constructing a layer.
from keras.layers.core import Activation, Dense
model.add(Dense(64))
model.add(Activation('tanh'))
is equivalent to
model.add(Dense(64, activation='tanh'))
An element-wise Theano/TensorFlow function can also be passed as the activation function:
from keras import backend as K
def tanh(x):
    return K.tanh(x)
model.add(Dense(64, activation=tanh))
model.add(Activation(tanh))
Predefined activation functions: softmax: softmax is applied to the last dimension of the input data.
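The last claim, that softmax is applied over the last dimension of the input, is easy to verify with a small NumPy sketch (this mirrors, rather than reuses, the Keras implementation):

```python
import numpy as np

def softmax(x):
    # Subtract the per-row max for numerical stability, then normalize
    # along the last axis so each row sums to 1.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([[1.0, 2.0, 3.0],
                   [0.0, 0.0, 0.0]])
probs = softmax(logits)
print(probs.sum(axis=-1))  # [1. 1.] -> each row is a probability distribution
```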

TensorFlow Learning Notes

https://qaofficial.com/post/2019/04/30/24222-tensor-flow-learning-notes.html 2019-04-30
1. Tensor concept: in TensorFlow, all data are tensors, generalizations of vectors and matrices; for example, a vector is a 1D tensor and a matrix is a 2D tensor. tf.constant(value, dtype=None, shape=None, name='const') creates a constant tensor according to the value given. value can be either a number or a list. If it is a number, all
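The number-vs-list behavior being described (a scalar fills the whole shape, a list supplies explicit values) parallels NumPy, which makes it easy to illustrate without a TensorFlow session; the tf.constant calls in the comments are the corresponding TensorFlow forms:

```python
import numpy as np

# tf.constant(5, shape=[2, 3]) fills the whole shape with the scalar,
# much like np.full:
filled = np.full((2, 3), 5)
print(filled)

# tf.constant([1, 2, 3]) takes the values from the list, like np.array:
from_list = np.array([1, 2, 3])
print(from_list.ndim)  # 1 -> a vector is a 1D tensor
print(np.eye(2).ndim)  # 2 -> a matrix is a 2D tensor
```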

Beginners Keras (Build Model, Train Data)

https://qaofficial.com/post/2019/04/30/24344-beginners-keras-build-model-train-data.html 2019-04-30
Keras is an easy-to-use deep learning framework; building models and training on data with it is straightforward.
1 Training Data Transmission
def prepare_input_data(img_width, img_height):
    train_datagen = image.ImageDataGenerator(
        rescale=1./255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True)
    val_datagen = image.ImageDataGenerator(rescale=1./255)
    train_generator = train_datagen.flow_from_directory(
        config['Train_path'],
        target_size=(img_width, img_height),
        batch_size=int(config['Batch_size']),
        class_mode='categorical')
    validation_generator = val_datagen.flow_from_directory(
        config['Val_path'],
        target_size=(img_width, img_height),
        batch_size=int(config['Batch_size']),
        class_mode='categorical',
        shuffle=False)
    print(train_generator.class_indices)
    #print(train_generator.shape)
    #print(validation_generator.class_indices)
    return train_generator, validation_generator
2 Model Construction
import keras
from keras.preprocessing import image
from keras.layers import Conv2D, MaxPooling2D, Dense, Activation, Flatten, Dropout
from keras.layers.normalization import BatchNormalization
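The config dictionary used by prepare_input_data is not shown in the excerpt; assuming it holds the directory paths and a batch size, the one extra number usually needed to train with these generators is steps_per_epoch, which is just the sample count divided by the batch size, rounded up:

```python
import math

# Hypothetical config matching the keys used in prepare_input_data.
config = {
    'Train_path': 'data/train',
    'Val_path': 'data/val',
    'Batch_size': '32',
}

def steps_per_epoch(num_samples, batch_size):
    # One "step" consumes one batch; round up so the last partial
    # batch is not dropped.
    return math.ceil(num_samples / batch_size)

# e.g. model.fit_generator(train_generator,
#                          steps_per_epoch=steps_per_epoch(2000, 32), ...)
print(steps_per_epoch(2000, int(config['Batch_size'])))  # 63
```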