QA Official

Several Common Sorting Algorithms

https://qaofficial.com/post/2019/04/03/68595-several-common-sorting-algorithms.html 2019-04-03
1. Bubble sort

Non-recursive implementation:

void bubbleSort(int *array, int len) {
    int tmp;
    bool flag;
    for (int i = len - 1; i > 0; i--) {
        flag = false;
        for (int j = 0; j < i; j++) {
            if (array[j] > array[j + 1]) {
                flag = true;
                tmp = array[j];
                array[j] = array[j + 1];
                array[j + 1] = tmp;
            }
        }
        if (flag == false) {  // no swap in this pass, so the array is already sorted
            break;
        }
    }
}

Recursive implementation:

void bubbleSort(int *array, int len) { if

Understanding of BN Layer

https://qaofficial.com/post/2019/04/03/68638-understanding-of-bn-layer.html 2019-04-03
1. Why the BN layer can prevent vanishing gradients. Batchnorm is one of the most important techniques proposed since the rise of deep learning; it is now widely used in major networks and both speeds up convergence and improves training stability. Batchnorm essentially addresses the gradient problem in back-propagation. Its full name is batch normalization, BN for short.
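
As a reminder of what the layer actually computes, here is the standard batch-normalization transform (the usual textbook form, not quoted from the post); m is the mini-batch size, epsilon a small constant, and gamma, beta the learnable scale and shift:

\[
\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad
\sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m} (x_i - \mu_B)^2
\]
\[
\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad
y_i = \gamma \hat{x}_i + \beta
\]

Because every layer's inputs are re-centered and re-scaled to roughly unit variance, activations stay out of the saturated regions of the nonlinearity, so the gradients flowing backward are much less likely to vanish.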

[vgg16] vgg16 related documents

https://qaofficial.com/post/2019/04/03/68696-vgg16vgg16-related-documents.html 2019-04-03
solver.prototxt

net: "models/vgg16/train_val.prototxt"
test_iter: 1000
test_interval: 2500
base_lr: 0.001
lr_policy: "step"
gamma: 0.1
stepsize: 50000
display: 20
max_iter: 200000
momentum: 0.9
weight_decay: 0.0005
snapshot: 10000
snapshot_prefix: "models/vgg16/caffe_vgg16_train"
solver_mode: GPU

train.prototxt

name: "VGG_ILSVRC_16_layer"
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  transform_param {
    mirror: true
    crop_size: 224
    mean_file: "
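
A side note on the solver settings above: with lr_policy "step", Caffe multiplies the learning rate by gamma every stepsize iterations. A minimal Python sketch of that schedule (the helper name step_lr is only for illustration):

# Caffe "step" policy: lr = base_lr * gamma ^ floor(iter / stepsize)
def step_lr(iteration, base_lr=0.001, gamma=0.1, stepsize=50000):
    return base_lr * (gamma ** (iteration // stepsize))

# With the values from solver.prototxt the rate drops from 0.001 to 0.0001
# at iteration 50000, then to 0.00001 at 100000, and so on until max_iter.
for it in (0, 50000, 100000, 150000):
    print(it, step_lr(it))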

Adding a BN Layer in pytorch

https://qaofficial.com/post/2019/04/03/68633-addition-of-pytorch-bn.html 2019-04-03
Adding a BN layer in pytorch. Batch normalization: training a model is not easy, especially for very complex models that otherwise converge poorly. Preprocessing the data and using batch normalization together usually gives much better convergence, which is also an important reason why convolutional networks can be trained to very deep depths. Data preprocessing: at present, the most common methods of
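
A minimal sketch of what the excerpt describes, assuming a standard pytorch setup: a BatchNorm2d layer inserted between each convolution and its activation (the channel sizes here are made up for illustration):

import torch
import torch.nn as nn

# A small conv block with batch normalization placed between the
# convolution and the activation, the usual arrangement in conv nets.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.BatchNorm2d(16),          # normalizes each of the 16 channels over the batch
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.BatchNorm2d(32),
    nn.ReLU(),
)

x = torch.randn(8, 3, 32, 32)    # a batch of 8 RGB images, 32x32
y = model(x)
print(y.shape)                   # torch.Size([8, 32, 32, 32])

Remember to call model.train() during training and model.eval() at inference so that BatchNorm switches from batch statistics to its running estimates.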

Thoroughly Remembering the Function and Usage of lower_bound and upper_bound

https://qaofficial.com/post/2019/04/03/68628-remember-thoroughly-the-function-and-usage-of-lower_bound-and-upper_bound.html 2019-04-03
When I used these two functions before, I had only read a few other people's blogs and remembered the general idea, so I mixed them up almost every time, which was quite annoying. Today, by comparing the source code of the two functions with my own experiments, I found that they can only be used on an "ascending" sequence. Why the quotation marks? Because the comparison rules can be
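
The post is about the C++ STL functions, but their semantics are easy to check with Python's bisect module, which behaves the same way on an ascending list (bisect_left corresponds to lower_bound, bisect_right to upper_bound); a quick sketch:

from bisect import bisect_left, bisect_right

a = [1, 2, 4, 4, 4, 7, 9]        # must already be sorted in ascending order

# lower_bound: index of the first element >= 4
print(bisect_left(a, 4))          # 2

# upper_bound: index of the first element > 4
print(bisect_right(a, 4))         # 5

# together they bracket the run of elements equal to 4
print(a[bisect_left(a, 4):bisect_right(a, 4)])    # [4, 4, 4]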

unet Portrait Segmentation Model for Mobile Terminal-1

https://qaofficial.com/post/2019/04/03/68673-unet-portrait-segmentation-model-for-mobile-terminal-1.html 2019-04-03
I have always been interested in running neural networks on mobile devices, and I have been following Tencent's NCNN framework since it was open-sourced last year. Recently I successfully used other people's pre-trained mtcnn and mobilefacenet models to build an iOS face-recognition demo in Swift. I hope to port maskrcnn to ncnn to build some interesting applications on the phone; since the unet model is relatively simple, I am simply starting with it.

Ajax Transfer of JSON and XML Data

https://qaofficial.com/post/2019/04/02/68564-ajax-transfer-json-and-xml-data.html 2019-04-02
Ajax transmission of XML data: the transfer works as long as the data is wrapped in XML format. The front-end JavaScript receives the XML via responseXML, and the back end reads the request stream and parses it with dom4j. Front-end page:

<%@ page language="java" import="java.util.*" pageEncoding="UTF-8"%>
<%@taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c"%>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<title>Ajax XML数据处理演示&l

Hu Moment, Affine Moment and Normalized Fourier Descriptor (NFD) Invariant Feature Level Fusion

https://qaofficial.com/post/2019/04/02/68485-hu-moment-affine-moment-and-normalized-fourier-descriptor-nfd-invariant-feature-level-fusion.html 2019-04-02
Reference: "Aircraft Identification Based on Feature-Level Fusion and Support Vector Machine", Zhu Xufeng et al. For images of different aircraft models, Hu moments, affine moments and normalized Fourier descriptor (NFD) invariants are extracted and fused at the feature level. To deal with the large value range of the combined invariants, four normalization methods are proposed. 1. Image feature extraction. Hu moments are invariant to rotation, scale and translation; their drawback is that they are sensitive to external interference such as noise.
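
For reference, the first two Hu invariant moments, written with the standard definition of the normalized central moments (this is textbook material, not quoted from the cited paper):

\[
\phi_1 = \eta_{20} + \eta_{02}, \qquad
\phi_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2,
\qquad \text{where} \quad
\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\,1+(p+q)/2}}
\]

Centering the moments on the centroid gives translation invariance, dividing by the power of mu_00 gives scale invariance, and these particular combinations are unchanged under rotation.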

JSP/AJAX POST Submission: Solving the Garbled Chinese Characters Problem

https://qaofficial.com/post/2019/04/02/68543-jsp-ajax-uses-post-to-submit-chinese-garbled-code-to-solve-the-problem.html 2019-04-02
/* This is my original work; you are welcome to repost it, but please keep this attribution. */ By wallimn, E-mail: [email protected], Blog: http://blog.csdn.net/wallimn, Time: November 15, 2006

LaTeX

https://qaofficial.com/post/2019/04/02/68419-latex.html 2019-04-02
https://blog.csdn.net/xuwang777/article/details/79162037
First LaTeX summary, 1: IEEEtran format details. The first .tex file:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[journal,twocolumn]{IEEEtran}  % journal, two-column, IEEE Transactions format
\usepackage{lineno}      % line numbers
\usepackage{graphicx}    % figures
\usepackage{subfig}      % subfigures
\usepackage{amssymb}
\usepackage{epstopdf}    % figures, eps to pdf
\usepackage{booktabs}    % tables