QA Official

Sigmoid and Softmax differences

https://qaofficial.com/post/2019/03/30/23752-sigmoid-and-softmax-differences.html 2019-03-30
sigmoid maps a single real value into the (0,1) interval and is used for binary classification, whereas softmax maps a k-dimensional vector of real values (a1, a2, a3, a4…
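A minimal NumPy sketch of the contrast described above (the function names and sample values are illustrative, not from the post): sigmoid squashes one score into (0,1), while softmax turns a k-dimensional score vector into a probability distribution.

import numpy as np

def sigmoid(z):
    # maps a single real value into (0, 1) -- the binary-classification case
    return 1.0 / (1.0 + np.exp(-z))

def softmax(a):
    # maps a k-dimensional real vector to k probabilities that sum to 1
    e = np.exp(a - np.max(a))   # subtract the max for numerical stability
    return e / e.sum()

print(sigmoid(0.8))                         # a single probability, about 0.69
print(softmax(np.array([1.0, 2.0, 0.5])))   # k probabilities that sum to 1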

Tensorflow actual combat learning (10) [ softmax classification ]

https://qaofficial.com/post/2019/03/30/23783-tensorflow-actual-combat-learning-10-softmax-classification.html 2019-03-30
To answer multi-class questions, use the softmax function, the generalization of logistic regression to multiple possible values. The function returns a probability vector with C components, each component giving the probability of one output category. Because each component is a probability and the C components always sum to 1, every sample must belong to exactly one output category and all possible samples are covered. If the component sum were less than 1, there
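A small TensorFlow sketch of the point above, assuming TensorFlow 2.x eager execution (the logits and class count C = 3 are illustrative): softmax turns C raw scores into C probabilities that always sum to 1.

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1]])   # one sample, C = 3 raw scores
probs = tf.nn.softmax(logits)             # C-component probability vector

print(probs.numpy())                      # e.g. [[0.659 0.242 0.099]]
print(float(tf.reduce_sum(probs)))        # 1.0 -- each sample falls into exactly one class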

UNet

https://qaofficial.com/post/2019/03/30/23825-unet.html 2019-03-30
import numpy as np
smooth = 1.
dropout_rate = 0.5
act = "relu"

######################################## 2D Standard ########################################
def standard_unit(input_tensor, stage, nb_filter, kernel_size=3):
    x = Conv2D(nb_filter, (kernel_size, kernel_size), activation=act, name='conv'+stage+'_1',
               kernel_initializer='he_normal', padding='same')(input_tensor)  # the snippet is cut off at padding=; 'same' and the call on input_tensor are assumed

keras Transfer Learning

https://qaofficial.com/post/2019/03/30/23697-keras-transfer-learning.html 2019-03-30
Load the structure and weights of the first 13 layers of VGG16, define the final fully connected layers yourself, then freeze the first 13 layers and train the model on a small data set. It seems simple, but I ran into many problems along the way, too many to recount, mostly because I was stuck on online examples and followed them too rigidly. CSDN has a very detailed post, but the example in it kept throwing errors when I ran it:
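A minimal Keras sketch of the workflow described above (the input size, head layout, and 10-class output are illustrative assumptions, not taken from the post): load the VGG16 convolutional base with pretrained weights, add your own fully connected layers, and freeze the pretrained layers before training.

from keras.applications import VGG16
from keras.models import Model
from keras.layers import Flatten, Dense

# load the VGG16 convolutional layers with ImageNet weights, without the top classifier
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

# define your own fully connected head (10 output classes assumed here)
x = Flatten()(base.output)
x = Dense(256, activation='relu')(x)
out = Dense(10, activation='softmax')(x)
model = Model(inputs=base.input, outputs=out)

# freeze the pretrained layers so only the new head is trained on the small data set
for layer in base.layers:
    layer.trainable = False

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(x_train, y_train, epochs=5, batch_size=32)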

keras Two Training Model Methods fit and fit_generator (Save Memory)

https://qaofficial.com/post/2019/03/30/23709-keras-two-training-model-methods-fit-and-fit_generator-save-memory.html 2019-03-30
1. fit
import keras
from keras.models import Sequential
from keras.layers import Dense
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split
# load the data
x_train = np.load("D:\\machineTest\\testmulPE_win7\\data_sprase.npy")[()]
y_train = np.load("D:\\machineTest\\testmulPE_win7\\lable_sprase.npy")
# get the total number of classes
classes = len(np.unique(y_train))
# one-hot encode the labels,
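The snippet above shows the plain fit path; below is a minimal sketch of the memory-saving fit_generator path the title refers to (the generator logic, toy data shapes, and batch size are illustrative assumptions): batches are produced on the fly, so the whole data set never has to sit in memory at once.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

def batch_generator(x, y, batch_size=32):
    # yield batches endlessly; fit_generator pulls one batch at a time
    while True:
        idx = np.random.randint(0, len(x), batch_size)
        yield x[idx], y[idx]

x = np.random.rand(1000, 20)                        # toy features (shape assumed)
y = to_categorical(np.random.randint(0, 3, 1000))   # 3 classes assumed

model = Sequential([Dense(64, activation='relu', input_dim=20),
                    Dense(3, activation='softmax')])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# steps_per_epoch = number of batches drawn from the generator per epoch
model.fit_generator(batch_generator(x, y, 32), steps_per_epoch=len(x) // 32, epochs=2)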

keras Two Training Model Methods fit and fit_generator (Save Memory)

https://qaofficial.com/post/2019/03/30/23724-keras-two-training-model-methods-fit-and-fit_generator-save-memory.html 2019-03-30
First, fit
import keras
from keras.models import Sequential
from keras.layers import Dense
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split
# load the data
x_train = np.load("D:\\machineTest\\testmulPE_win7\\data_sprase.npy")[()]
y_train = np.load("D:\\machineTest\\testmulPE_win7\\lable_sprase.npy")
# get the total number of classes
classes = len(np.unique(y_train))
# one-hot encode the labels,

Classification and Labeling

https://qaofficial.com/post/2019/03/30/23673-classification-and-labeling.html 2019-03-30
Classification and tagging are two features of a blog system. Classification means user-defined categories under which blog posts are filed; it is a natural way to organize articles, so it has become a basic function of blog systems. Later, articles on the internet gained another convenient and friendly feature, the label or tag. A tag is the equivalent of a traditional article's keywords or a book's index, indicating some content or characteristic contained in the article.

Decorator, Static Agent, Dynamic Agent

https://qaofficial.com/post/2019/03/30/25101-decorator-static-agent-dynamic-agent.html 2019-03-30
Decorator. Introduction: the decorator pattern is used to modify the methods of a class when the class already exists. The decorator and the decorated object implement the same interface, and the decorator declares a constructor that takes the decorated interface as its parameter. For the methods it wants to change, the decorator uses its own implementation; for the methods it does not want to change, it calls the decorated object's methods. When using it, there is already a decorated object; new a decorator, and pass
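A minimal sketch of that structure (class and method names are hypothetical), written in Python to stay consistent with the other code on this page: the decorator implements the same interface as the decorated object, takes it in its constructor, overrides the method it wants to change, and delegates the rest.

from abc import ABC, abstractmethod

class Service(ABC):
    # the interface that both the decorated object and the decorator implement
    @abstractmethod
    def handle(self, msg):
        ...

class BasicService(Service):
    def handle(self, msg):
        return "handled " + msg

class LoggingDecorator(Service):
    def __init__(self, wrapped):
        # the constructor takes the decorated interface as its parameter
        self.wrapped = wrapped

    def handle(self, msg):
        # the method the decorator wants to change: add behaviour, then delegate
        print("before handle")
        return self.wrapped.handle(msg)

# usage: there is already a decorated object; create a decorator and pass it in
svc = LoggingDecorator(BasicService())
print(svc.handle("request"))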

Deep Learning Keras Library Running Example

https://qaofficial.com/post/2019/03/30/23576-deep-learning-keras-library-running-example.html 2019-03-30
Run imdb_lstm.py. The LSTM example is run first because LSTM is what is needed. 1. After downloading from the official website, run imdb_lstm.py directly. It keeps prompting that the download fails; opening the program shows that the data is fetched through load_data, but this data cannot be downloaded online, so the script cannot run.
print("Loading data...")
(X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=max_features, test_split=0.2)
Solution: change the path in imdb.py as shown below and give the path directly.
# path = get_file(path, origin="https://s3.amazonaws.com/text-datasets/imdb.pkl")
path
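A sketch of that fix for the old Keras imdb.py loader (the local file location below is a hypothetical example): comment out the get_file download inside load_data and point path at a copy of imdb.pkl downloaded by hand, after which the example script runs offline.

# inside keras/datasets/imdb.py, in load_data():
# path = get_file(path, origin="https://s3.amazonaws.com/text-datasets/imdb.pkl")
path = "/home/user/data/imdb.pkl"   # hypothetical local path to the manually downloaded file

# the example's original call then works without network access:
from keras.datasets import imdb
max_features = 20000   # vocabulary size, as set in the example script
(X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=max_features, test_split=0.2)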

ImageCaption algorithm summary

https://qaofficial.com/post/2019/03/30/23869-imagecaption-algorithm-summary.html 2019-03-30
Overview: Recently I have been looking at material on Image Captioning. Image Captioning is, simply put, describing a picture in words: the input is an image and the output is a sentence. I mainly read two articles by Duke on Zhihu and two Google papers on the NIC model, set up the neuraltalk, neuraltalk2, and TensorFlow/im2txt (NIC) environments, ran inference with neuraltalk and neuraltalk2, and ran im2txt training. References: survey paper + open source projects: Image / Video Captioning