QA Official

Data Structure-The Locksmith Sorting (Including All Codes)

https://qaofficial.com/post/2019/04/26/110552-data-structure-the-locksmith-sorting-including-all-codes.html 2019-04-26
Function analysis is as follows: SelectSort(SqList &L). Parameter: sequence list L. Purpose: sort the list (ascending by default). Space complexity: O(1). Time complexity: O(n²). Stability: unstable. Idea: assume the i-th element is the current minimum (positions 0 to i-1 are already in ascending order and are all less than or equal to the i-th element), set min = i, then compare from i+1 onward; whenever an element smaller than the current minimum is found, record its subscript (set min to that subscript).
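The post's own code targets a C-style SqList and is not reproduced here; as a rough illustration of the same idea, a minimal Python sketch of ascending selection sort (function and variable names are my own) might look like this:

```python
def select_sort(a):
    """In-place ascending selection sort: O(n^2) time, O(1) extra space."""
    n = len(a)
    for i in range(n - 1):
        min_idx = i                      # assume position i holds the current minimum
        for j in range(i + 1, n):        # scan the unsorted suffix
            if a[j] < a[min_idx]:
                min_idx = j              # remember the subscript of the smaller element
        if min_idx != i:
            a[i], a[min_idx] = a[min_idx], a[i]  # this long-distance swap is why the sort is unstable
    return a

print(select_sort([5, 2, 9, 2, 1]))  # [1, 2, 2, 5, 9]
```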

Database livelock and Deadlock

https://qaofficial.com/post/2019/04/26/70772-database-livelock-and-deadlock.html 2019-04-26
1. Livelock: Suppose transaction T1 has locked data item R, and transaction T2 then requests a lock on R, so T2 waits. T3 also requests a lock on R. When T1 releases its lock on R, the system grants T3's request first, and T2 keeps waiting. Then T4 requests a lock on R, and after T3 releases its lock on R, the system grants T4's request, and so on; T2 may end up waiting forever. This situation is a livelock, as shown in Figure 8.
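To make the starvation pattern concrete, here is a small Python sketch (my own, not from the post) in which the scheduler always grants the lock to the newest waiter, so T2 never obtains R; a first-come-first-served queue would avoid this:

```python
from collections import deque

# Transactions waiting for a lock on data item R. Each time the current holder
# releases R, this (unfair) scheduler grants the lock to the most recent requester.
waiting = deque(["T2"])          # T2 asked first and is waiting
holder = "T1"

for newcomer in ["T3", "T4", "T5"]:
    waiting.append(newcomer)     # another transaction requests a lock on R
    holder = waiting.pop()       # unfair policy: newest waiter wins -> T2 is livelocked
    print(f"{holder} now holds R; still waiting: {list(waiting)}")

# A fair policy would use waiting.popleft() (first come, first served),
# so T2 would be granted the lock as soon as T1 released it.
```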

Deep Learning [2] Batch Normalization Paper Translation

https://qaofficial.com/post/2019/04/26/70767-deep-learning-2-batch-normalization-paper-translation.html 2019-04-26
Note: This article translates the BN theory part (up to Section 3.1) of Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, with some of my own understanding added along the way. The aim is to deepen the understanding of the purpose and principle of BN. My English and knowledge level still need improvement, so please point out any deficiencies. http://blog.csdn.net/linmingan/article/details/50780761 Abstract: Changes in the parameters of the preceding layers cause the distribution of the inputs to each layer of the network to change during training, which complicates training deep neural networks.
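As a quick reminder of what the paper's transform does (this sketch is mine, not part of the translation), batch normalization standardizes each feature over a mini-batch and then applies a learned scale and shift:

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """x: (batch, features). Normalize each feature over the mini-batch, then scale and shift."""
    mu = x.mean(axis=0)                    # per-feature mini-batch mean
    var = x.var(axis=0)                    # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # standardized activations
    return gamma * x_hat + beta            # y = gamma * x_hat + beta (learned parameters)

x = np.random.randn(8, 4) * 3.0 + 5.0
y = batch_norm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0).round(3), y.std(axis=0).round(3))  # ~0 mean, ~1 std per feature
```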

Generalized Linear Model (GLMs) and Its Algorithm Introduction

https://qaofficial.com/post/2019/04/26/70794-generalized-linear-model-glms-and-its-algorithm-introduction.html 2019-04-26
The linear models we usually know assume a continuous response variable that follows a normal distribution, which is quite limiting in practice, because much of the data we encounter is discrete and not normally distributed. To handle this, the traditional linear model has been extended into today's generalized linear model. The generalized linear model extends the response variable from the normal distribution to the exponential family, and from continuous to discrete variables, which gives it wide practical applicability.
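As an illustration of the idea (not taken from the post), a Poisson GLM for discrete count data can be fitted in a few lines; this sketch assumes statsmodels is available:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 2, size=200)
# Count response drawn from a Poisson distribution with a log link:
y = rng.poisson(np.exp(0.5 + 1.2 * x))

X = sm.add_constant(x)                               # intercept + predictor
model = sm.GLM(y, X, family=sm.families.Poisson())   # exponential-family response
result = model.fit()
print(result.params)   # estimates should be close to [0.5, 1.2]
```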

Learn from the Past -1. Decision Tree, Pruning, RF, adaboost, GBDT, XGBOOST

https://qaofficial.com/post/2019/04/26/70863-learn-from-the-past-1.-decision-tree-pruning-rf-adaboost-gbdt-xgboost.html 2019-04-26
I started learning machine learning 4 years ago and chose the simplest model, the decision tree, to get started. Yet I often answered poorly about it in interviews; I had underestimated it. This time I'd like to make a summary. This article is a set of key notes and does not involve derivations. Decision tree construction principles: 1. Select the splitting attribute and value 2. Grow and stop 3. Prune. 1. Splitting criterion: Δ = I(parent) − Σ_j (N_j / N) · I(v_j); the split with the largest Δ (the largest reduction in impurity) is the best.
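For reference, here is a small Python helper (my own sketch) that evaluates the gain formula Δ = I(parent) − Σ_j (N_j/N) · I(v_j) with entropy as the impurity measure I:

```python
import math
from collections import Counter

def entropy(labels):
    """Impurity I(.) of a node: Shannon entropy of its class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(parent_labels, child_label_groups):
    """Delta = I(parent) - sum_j (N_j / N) * I(child_j); a larger value means a better split."""
    n = len(parent_labels)
    weighted = sum(len(g) / n * entropy(g) for g in child_label_groups)
    return entropy(parent_labels) - weighted

parent = ["yes"] * 5 + ["no"] * 5
print(info_gain(parent, [["yes"] * 5, ["no"] * 5]))                                   # 1.0, a perfect split
print(info_gain(parent, [["yes", "yes", "yes", "no"],
                         ["yes", "yes", "no", "no", "no", "no"]]))                    # ~0.12, a weak split
```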

Machine Learning Notes-Support Vector Machine (V) Support Vector Regression

https://qaofficial.com/post/2019/04/26/70790-machine-learning-notes-support-vector-machine-v-support-vector-regression.html 2019-04-26
Regression problem: given a sample set $D = \{(\mathbf{x}_1, y_1), (\mathbf{x}_2, y_2), \dots, (\mathbf{x}_m, y_m)\}$, $y_i \in \mathbb{R}$, we hope to learn a regression model of the form $f(\mathbf{x}) = \mathbf{w}^\top \mathbf{x} + b$ such that $f(\mathbf{x})$ and $y$ are as close as possible, where $\mathbf{w}$ and $b$ are the model parameters to be determined.
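As a practical counterpart (my own sketch, not part of the note), scikit-learn's SVR fits such a model; with a linear kernel the learned w and b play exactly the roles above:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(100, 1))
y = 2.0 * X.ravel() + 1.0 + rng.normal(scale=0.1, size=100)

# SVR uses an epsilon-insensitive loss: errors smaller than epsilon are not penalized.
model = SVR(kernel="linear", C=1.0, epsilon=0.1).fit(X, y)
print(model.coef_, model.intercept_)   # approximately w = 2.0, b = 1.0
```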

Object Detection Data Summary

https://qaofficial.com/post/2019/04/26/70734-object-detection-data-summary.html 2019-04-26
Object Detection - handong1587 (very complete and updated in real time): https://handong1587.github.io/deep_learning/2015/10/09/object-detection.html
GitHub - caocuong0306/awesome-object-proposals: A curated list of object proposals resources for object detection: https://github.com/caocuong0306/awesome-object-proposals
Object Detection series notes: http://mp.weixin.qq.com/s?__biz=MzA4NDEyMzc2Mw==&mid=2649676492&idx=2&sn=87244c72956b854346665fdd1afb5ea4&chksm=87f673d0b081fac6935b9bc1a5b377817e1e2fa35ce4eb91eb57787f9b05b09daf0d1adf8c34&scene=0#rd
Can CNN be trained with multi-label to achieve the effect of detection? - Zhihu: https://www.zhihu.com/question/52143412/answer/130037578
Face Detection (Object Detection) - shuzfan's Column - Blog Channel - CSDN.NET: http://blog.csdn.net/shuzfan/article/category/6429450
[Resources] Deep Learning Top 100: The Most Cited Papers of the Past 5 Years (Download): http://mp.weixin.qq.com/s?__biz=MzI3MTA0MTk1MA==&mid=2651993500&idx=2&sn=50deefc980e29cbab30419ee557b06a7&chksm=f121416dc656c87b8e4b10f543d81fa3c33f5740c12acf4b2c8d492108b245cd2fde844d8f4c&scene=0#rd

Summary of Knowledge Structure of Natural Language Processing (NLP)

https://qaofficial.com/post/2019/04/26/70897-summary-of-knowledge-structure-of-natural-language-processing-nlp.html 2019-04-26
Natural language processing covers a vast body of knowledge, and what is available online is scattered; for example, some models are hard to learn without context. So I have summarized a knowledge-system structure myself; corrections of any shortcomings are welcome. The content mainly draws on teacher Huang Zhihong's natural language processing course, and the main reference book is "Statistical Natural Language Processing" by Mr. Zong Chengqing.

Text Classification: Spam Classification

https://qaofficial.com/post/2019/04/26/70904-text-classification-spam-classification.html 2019-04-26
https://appliedmachinelearning.blog/2017/01/23/email-spam-filter-python-scikit-learn/ From: http://finance.jrj.com.cn/tech/2017/07/19111122769373.shtml Text mining (extracting information from text) is a fairly broad concept, and the technique has attracted more and more attention in an era when massive amounts of text data are generated every day. With the help of machine learning models, many text mining applications, including sentiment analysis, document classification, topic classification, text summarization, and machine translation, have been automated. Among these applications, spam filtering is a good starting point for beginners to practice document classification.
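A minimal version of such a spam filter with scikit-learn (my own sketch; the linked tutorial's exact code and dataset are not reproduced here) could look like this:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; real experiments would use a labelled email dataset.
emails = [
    "win money now, claim your free prize",
    "limited offer, cheap meds, buy now",
    "meeting moved to 3pm, see agenda attached",
    "lunch tomorrow? also sending the report draft",
]
labels = ["spam", "spam", "ham", "ham"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())  # bag-of-words + naive Bayes
clf.fit(emails, labels)
print(clf.predict(["free prize waiting, claim now", "please review the report"]))
# expected: ['spam' 'ham']
```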

Using CNN to Diagnose Diseases Based on Chest X-rays Based on CAFFE (Convolutional Architecture for Fast Feature Embedding)

https://qaofficial.com/post/2019/04/26/70836-using-cnn-to-diagnose-diseases-based-on-chest-x-rays-based-on-caffe-convolutional-architecture-for-fast-feature-embedding.html 2019-04-26
I hope this small example can help more people become familiar with the basic usage of Caffe (Convolutional Architecture for Fast Feature Embedding). In fact, writing a neural network with this framework is relatively easy: you describe it in train.prototxt, where the structure and parameters of each layer are specified very clearly. This article does not explain any details of CNNs; all explanations relate only to the program. Here we will use AlexNet, a classic convolutional network architecture.
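For orientation (this is my own sketch, not the post's code): once train.prototxt and a matching solver definition exist, training through Caffe's Python interface typically comes down to a few calls. The file names below are placeholders.

```python
import caffe

caffe.set_mode_gpu()                         # or caffe.set_mode_cpu() if no GPU is available
solver = caffe.SGDSolver("solver.prototxt")  # the solver file points at train.prototxt
solver.solve()                               # run the full training loop and save snapshots

# After training, load the network for inference on a chest X-ray:
net = caffe.Net("deploy.prototxt", "snapshot.caffemodel", caffe.TEST)
```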