QA Official

keras fit_generator memory saving example

https://qaofficial.com/post/2019/03/30/23723-keras-fit_generator-memory-saving-example.html 2019-03-30
When I wrote Keras code before, I always called model.fit() directly. Later I found that this does not save memory: fit() needs the whole dataset in memory as one array, which hurts even when the raw input data is not large, in particular when samples have to be arranged and combined on the fly. Here I record the usage of fit_generator: fit_generator(self, generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0) The above is the official signature.
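As a minimal, hypothetical sketch of the pattern (the model and data here are made up, not taken from the post): the generator yields one batch at a time, so the full dataset never has to be materialized up front.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Toy data; in practice x/y would be read or combined lazily.
x_all = np.random.rand(1000, 20)
y_all = np.random.randint(0, 2, size=(1000, 1))

def batch_generator(x, y, batch_size=32):
    """Yield (inputs, targets) batches forever, as fit_generator expects."""
    n = len(x)
    while True:
        idx = np.random.permutation(n)
        for start in range(0, n - batch_size + 1, batch_size):
            batch = idx[start:start + batch_size]
            yield x[batch], y[batch]

model = Sequential([Dense(16, activation='relu', input_dim=20),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy')

# steps_per_epoch tells Keras how many generator batches make up one epoch.
model.fit_generator(batch_generator(x_all, y_all, batch_size=32),
                    steps_per_epoch=1000 // 32, epochs=5)
```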

loss Comparison for Dealing with Imbalance in Visual Classification Tasks

https://qaofficial.com/post/2019/03/30/23613-loss-comparison-for-dealing-with-imbalance-in-visual-classification-tasks.html 2019-03-30
Problem introduction: in computer vision (CV) tasks we often encounter class imbalance, for example: 1. In image classification, some classes have many images while others have few. 2. In detection, current methods such as SSD and the R-CNN series all use an anchor mechanism, and the ratio of positive to negative anchors during training is extremely skewed. 3. In segmentation, the number of background pixels is usually much larger than that of foreground pixels.
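One loss often compared in this setting is focal loss, which down-weights easy examples so the rare class is not drowned out. An illustrative NumPy sketch (my own, not necessarily the post's comparison code):

```python
import numpy as np

def binary_focal_loss(y_true, p_pred, alpha=0.25, gamma=2.0, eps=1e-7):
    """Focal loss for binary labels: -alpha_t * (1 - p_t)**gamma * log(p_t)."""
    p_pred = np.clip(p_pred, eps, 1.0 - eps)
    # p_t is the predicted probability assigned to the true class.
    p_t = np.where(y_true == 1, p_pred, 1.0 - p_pred)
    alpha_t = np.where(y_true == 1, alpha, 1.0 - alpha)
    return np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t))

y = np.array([1, 0, 0, 0, 0, 0])                 # rare positive class
p = np.array([0.3, 0.1, 0.2, 0.1, 0.05, 0.9])
print(binary_focal_loss(y, p))
```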

machine learning-Cross Entropy

https://qaofficial.com/post/2019/03/30/24962-machine-learning-cross-entropy-cross-entropy.html 2019-03-30
machine learning-Cross Entropy 1. Binary Cross Entropy. Suppose the training data is $D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$, where $x \in \mathbb{R}^n$ is a training sample, such as an image.
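For reference, binary cross entropy over $D$ averages $-[y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i)]$; a minimal NumPy sketch:

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Mean of -[y*log(p) + (1-y)*log(1-p)] over the dataset."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

print(binary_cross_entropy(np.array([1, 0, 1]), np.array([0.9, 0.2, 0.7])))
```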

macro-average and micro-average of Multi-label classification Performance Evaluation

https://qaofficial.com/post/2019/03/30/23646-macro-average-and-micro-average-of-multi-label-classification-performance-evaluation.html 2019-03-30
Generally, we use accuracy when evaluating the performance of a classifier. In the multi-class setting, accuracy = (number of samples correctly classified) / (total number of samples classified). This seems reasonable, but there can be a serious problem. For example, an opaque bag contains 1,000 mobile phones: 600 iPhone 6, 300 Galaxy S6, 50 Huawei Mate 7 and 50 MX4 (of course, all of this is unknown to the classifier).
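To see why plain accuracy (which equals micro-averaged F1 for single-label problems) can hide rare-class failure, here is a small sketch using the same class sizes as the bag example (the classifier's behaviour is invented for illustration):

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Imbalanced ground truth mirroring the bag: class sizes 600/300/50/50.
y_true = np.repeat([0, 1, 2, 3], [600, 300, 50, 50])
# A lazy classifier that predicts the majority class 90% of the time.
y_pred = np.where(rng.random(1000) < 0.9, 0, y_true)

print("micro F1:", f1_score(y_true, y_pred, average='micro'))  # dominated by class 0
print("macro F1:", f1_score(y_true, y_pred, average='macro'))  # exposes rare-class failure
```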

modify caffe source code to meet multi-label input-multi-label lmdb

https://qaofficial.com/post/2019/03/30/23678-modify-caffe-source-code-to-meet-multi-label-input-multi-label-lmdb.html 2019-03-30
Recently, due to project needs, I had to complete a multi-label regression task, and the hdf5 format is really inconvenient for image preprocessing. So I read several online articles about modifying the Caffe source code to accept multi-label input. In one, the modification affected many other modules and produced many compile errors; in another, the Caffe version was different and the principle behind the source changes was never explained, which wasted a lot of time.
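For reference, a common workaround that avoids modifying the Caffe source at all is the two-lmdb approach: keep images in one lmdb and store each sample's label vector in a second lmdb read by its own Data layer. This is a general technique, not necessarily the route this post takes; a minimal sketch of building the label lmdb (paths and label shapes are made up):

```python
import lmdb
import numpy as np
from caffe.proto import caffe_pb2

labels = np.random.rand(100, 4).astype(np.float32)  # 4 regression targets per image

env = lmdb.open('labels_lmdb', map_size=1 << 30)
with env.begin(write=True) as txn:
    for i, vec in enumerate(labels):
        datum = caffe_pb2.Datum()
        # Store the label vector as a Kx1x1 blob of float_data;
        # keys must match the ordering of the image lmdb.
        datum.channels, datum.height, datum.width = len(vec), 1, 1
        datum.float_data.extend(vec.tolist())
        txn.put('{:08d}'.format(i).encode('ascii'), datum.SerializeToString())
env.close()
```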

multi-label learning-RankSVM method

https://qaofficial.com/post/2019/03/30/23680-multi-label-learning-ranksvm-method.html 2019-03-30
A kernel method for multi-labelled classification, NIPS 2002, André Elisseeff and Jason Weston. Introduction: at the time of the work of Schapire and Singer on BoosTexter (R. E. Schapire and Y. Singer. BoosTexter: a boosting-based system for text categorization. Machine Learning, 39(2/3):135-168, 2000), BoosTexter was the only general-purpose multi-label ranking system. They observed over-fitting on relatively small learning sets and concluded that controlling the complexity of the whole learning system is an important research direction.

static and dynamic proxies for AOP

https://qaofficial.com/post/2019/03/30/25102-static-and-dynamic-proxies-for-aop.html 2019-03-30
Reprint source: http://listenzhangbin.com/post/2016/09/spring-aop-cglib/ AOP (Aspect Oriented Programming) serves as a complement to object-oriented programming, handling crosscutting concerns that are scattered across modules in a system, such as transaction management, logging, and caching. The key to an AOP implementation lies in the AOP proxy that the AOP framework creates automatically. AOP proxies are mainly divided into static proxies and dynamic proxies; the representative of static proxies is AspectJ, while dynamic proxies are represented by Spring AOP.
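The post's examples are Java (JDK dynamic proxies and CGLIB), but the dynamic-proxy idea is language-agnostic. As a loose Python analogy (entirely my own sketch, not code from the post), a proxy created at runtime intercepts every method call and wraps it with a crosscutting concern such as logging:

```python
import functools

class LoggingProxy:
    """Dynamic proxy: wraps any target object at runtime, no subclass needed."""
    def __init__(self, target):
        self._target = target

    def __getattr__(self, name):
        attr = getattr(self._target, name)
        if not callable(attr):
            return attr
        @functools.wraps(attr)
        def wrapper(*args, **kwargs):
            print('before', name)   # crosscutting concern, e.g. logging
            result = attr(*args, **kwargs)
            print('after', name)
            return result
        return wrapper

class AccountService:
    def transfer(self, amount):
        print('transferring', amount)

proxy = LoggingProxy(AccountService())
proxy.transfer(100)
```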

summary of inet_addr, inet_aton, inet_pton and other IPv4/IPv6 functions in Linux network programming

https://qaofficial.com/post/2019/03/30/23786-summary-of-inet_addr-inet_aton-inet_pton-and-other-functions-of-linux-network-programming-ipv4-and-ipv6.html 2019-03-30
Background knowledge: 210.25.132.181 is the ASCII (string) representation of an IP address, known in English as IPv4 numbers-and-dots notation. Converted to integer form, 210.25.132.181 becomes 3524887733, which is the IP address as binary data (a 32-bit integer). For a detailed introduction, please refer to the conversion between network byte order and host byte order. The problem: how do we convert between the string form and the integer form of an IP address?
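The post summarizes the C functions; as a quick way to verify the numbers above, Python's socket module wraps the same conversions (inet_aton, inet_ntoa, inet_pton):

```python
import socket
import struct

# String -> packed network-byte-order bytes -> host integer.
packed = socket.inet_aton('210.25.132.181')
as_int = struct.unpack('!I', packed)[0]
print(as_int)                                        # 3524887733

# Integer -> string, the reverse direction.
print(socket.inet_ntoa(struct.pack('!I', as_int)))   # 210.25.132.181

# inet_pton generalizes the conversion to IPv6 as well.
print(socket.inet_pton(socket.AF_INET6, '::1').hex())
```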

talk about sigmoid and softmax

https://qaofficial.com/post/2019/03/30/23762-talk-about-sigmod-and-softmax.html 2019-03-30
1. Common ground: (1) the output values lie in the interval 0 to 1, so they can be read as probabilities, whose natural range this is; (2) both can be used as the output-layer function of a classification task. 2. Differences: (1) sigmoid as the output-layer function solves binary classification, and the output is a single decimal value; besides that, it can also serve as the activation function of hidden layers, and the activation function is the core reason a neural network can represent nonlinearity.
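A minimal NumPy sketch of the two functions, showing the single-probability output of sigmoid versus the distribution output of softmax:

```python
import numpy as np

def sigmoid(z):
    """Squashes one logit into (0, 1): the positive-class probability."""
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    """Turns a logit vector into a distribution over mutually exclusive classes."""
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()

print(sigmoid(0.5))                          # one probability: binary classification
print(softmax(np.array([2.0, 1.0, 0.1])))    # probabilities that sum to 1
```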

Keras Function Introduction

https://qaofficial.com/post/2019/03/29/24814-keras-function-introduction.html 2019-03-29
PS: If weights need to be shared (i.e., not reinitialized), one way is to use a global variable for the layer (it is initialized only once and never reinitialized afterwards); another way is to avoid globals and manually pass the weights in on every initialization (too much trouble, not recommended). PS: https://blog.csdn.net/u011327333/article/details/78501054 (Understanding the LSTM parameters return_sequences and return_state in the Keras API)
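As a minimal sketch of the global-variable sharing idea (layer names and shapes are made up): the layer object is created once at module level, so reusing it applies the same weights at both call sites instead of reinitializing them.

```python
from keras.layers import Input, Dense
from keras.models import Model

# Created once at module level: every use below shares the same weights.
shared_dense = Dense(8, activation='relu', name='shared_dense')

in_a = Input(shape=(4,))
in_b = Input(shape=(4,))
out_a = shared_dense(in_a)   # same layer object ...
out_b = shared_dense(in_b)   # ... applied twice: one set of weights

model = Model(inputs=[in_a, in_b], outputs=[out_a, out_b])
print(len(model.get_weights()))  # 2 tensors (kernel + bias), not 4
```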