QA Official

What Is the Future of Microservices? (2017-02-22, Daniel Bryant)

https://qaofficial.com/post/2019/04/18/72980-the-future-of-microservice-type-is-what2017-02-22-daniel-bryant.html 2019-04-18
Author: Daniel Bryant | Translator: Luo Yuanhang. In preparation for the microXchg conference, held in Berlin on February 16th and 17th, InfoQ spoke with Uwe Friedrichsen and Adrian Cole about functional service design, new challenges in monitoring distributed systems, and what microservices might look like in the future. Key takeaways from the conversations with Uwe Friedrichsen (CEO of CodeCentric) and Adrian Cole (software engineer of

What Are the Most Promising Development Directions for Python in the Future?

https://qaofficial.com/post/2019/04/18/72965-which-are-the-most-potential-development-directions-for-python-in-the-future.html 2019-04-18
In recent years, Python has become more and more popular: it is simple and quick to learn, which makes it the preferred first language for many novice programmers. Python is a scripting language, and it is also called a glue language because it can glue together modules written in various other programming languages. Its strong inclusiveness and wide range of uses have attracted more and more attention, and Python is also very popular in academia.

Reference Material on Image Dehazing Algorithms

https://qaofficial.com/post/2019/04/18/73012-relevant-data-of-defogging-algorithm.html 2019-04-18
Latest and important resources on image dehazing, deraining, deblurring, and denoising. PPT: https://blog.csdn.net/f290131665/article/details/79572012 Evaluation indices: https://blog.csdn.net/f290131665/article/details/79514410 AOD-Net: https://www.pytorchtutorial.com/pytorch-image-dehazing/ Benchmarking Single Image Dehazing and Beyond. Rain removal, dehazing, denoising, dust removal, deblurring, and similar tasks all fall under image restoration (low-level image processing/vision tasks). Using generative

2017.05.10 Review of numpy Learning; References to Other Python WOE/IV Implementations

https://qaofficial.com/post/2019/04/17/70481-2017.05.10-review-numpyamp#x27s-learning-from-other-python-woe-iv-implementation-references.html 2019-04-17
1. Yesterday morning it took me more than an hour to write the summary, finishing at 11:03. While summarizing the related usage of pandas, I learned a new pattern: building an empty DataFrame and then continuously appending to it. 2. After the summary, we monitored the bad debts and the running status of the model. 3. After the morning summary, we
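The empty-DataFrame-then-append pattern mentioned above can be sketched as follows; this is a minimal illustration with made-up bin names and WOE values (modern pandas removed `DataFrame.append`, so `pd.concat` is used instead):

```python
import pandas as pd

# Start from an empty DataFrame and append one row per iteration.
# The bin names and WOE values below are made up for illustration.
df = pd.DataFrame(columns=["bin", "woe"])
for name, value in [("bin_1", 0.25), ("bin_2", -0.40)]:
    row = pd.DataFrame([{"bin": name, "woe": value}])
    df = pd.concat([df, row], ignore_index=True)

print(df)
```

For large loops it is usually faster to collect plain dicts in a list and build the DataFrame once at the end.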

Datawhale Algorithm Practice Phase 2 Task 2-Model Evaluation

https://qaofficial.com/post/2019/04/17/70490-datawhale-algorithm-practice-phase-2-task-2-model-evaluation.html 2019-04-17
Task Description: Record the scoring tables of 7 models (logistic regression, SVM, decision tree, random forest, GBDT, XGBoost, and LightGBM) on accuracy, precision, recall, F1-score, and AUC, and draw ROC curves. 1. Code Optimization: Data Set Loading. Because train_test_split supports input in DataFrame format, it is suggested to amend the data-reading code as follows (relatively simple): data = pd.read_csv("data_all.csv") # trai
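A minimal sketch of the evaluation described above, shown for just one of the seven models (logistic regression). Since data_all.csv is not available here, synthetic data stands in for it, and the "status" label column is an assumption:

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, roc_auc_score)

# Synthetic stand-in for data_all.csv with a hypothetical "status" label column.
X, y = make_classification(n_samples=500, n_features=10, random_state=2018)
data = pd.DataFrame(X)
data["status"] = y

# train_test_split accepts DataFrames directly, as the text notes.
features = data.drop(columns=["status"])
labels = data["status"]
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=2018)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = clf.predict(X_test)
y_score = clf.predict_proba(X_test)[:, 1]  # probability of the positive class

print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("f1       :", f1_score(y_test, y_pred))
print("auc      :", roc_auc_score(y_test, y_score))
```

The same loop can be repeated over the other six estimators; only the `clf = ...` line changes.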

Multi-class Classification: OvR or OvO?

https://qaofficial.com/post/2019/04/17/70376-multi-class-ovr-or-ovo.html 2019-04-17
One-versus-rest (OvR) and one-versus-one (OvO) are different. The SVM algorithm was originally designed for binary classification problems; when dealing with multi-class problems, a suitable multi-class classifier must be constructed. At present there are two main approaches to building SVM multi-class classifiers. One is the direct method: modify the objective function so that the parameters of multiple classification surfaces are combined into a single optimization problem, realizing multi-class classification "at one time" by solving that problem.
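The other, indirect approach (combining binary classifiers) is what scikit-learn's SVC does: it trains one binary SVM per pair of classes and can expose its decision scores in either OvO or OvR shape. A sketch on synthetic 4-class data:

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic 4-class problem purely for illustration.
X, y = make_classification(n_samples=200, n_features=8, n_informative=6,
                           n_classes=4, random_state=0)

ovo = SVC(decision_function_shape="ovo").fit(X, y)
ovr = SVC(decision_function_shape="ovr").fit(X, y)

# OvO: one score per pair of classes -> 4*3/2 = 6 columns.
# OvR: one aggregated score per class -> 4 columns.
print(ovo.decision_function(X[:2]).shape)
print(ovr.decision_function(X[:2]).shape)
```

With k classes, OvO trains k(k-1)/2 pairwise classifiers while OvR needs only k, which is the usual trade-off between the two schemes.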

Understanding OneHotEncoder in Practical Applications

https://qaofficial.com/post/2019/04/17/70476-onehotencoderamp#x27s-understanding-in-practical-application.html 2019-04-17
One-hot encoding turns data into vectors such as (1,0,0,...,0), (0,1,0,...,0): a feature gets as many dimensions as it has categories. LabelEncoder turns data into consecutive numeric codes; for example, "American", "Japanese", and "Chinese" become (0,1,2). The two are usually combined: LabelEncoder sorts the values and encodes them by rank. Before using one-hot, you must combine the training and test sets (for example, a new label
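The LabelEncoder + OneHotEncoder combination, fitted on the combined train and test categories as recommended above, might look like this (the nationality strings are the example values from the text):

```python
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

train = ["American", "Japanese", "Chinese"]
test = ["Chinese", "American"]

# Fit on train + test so no unseen label appears at transform time.
le = LabelEncoder().fit(train + test)
codes = le.transform(train)   # categories are sorted, then ranked:
print(codes)                  # American=0, Chinese=1, Japanese=2 -> [0 2 1]

ohe = OneHotEncoder().fit(le.transform(train + test).reshape(-1, 1))
onehot = ohe.transform(codes.reshape(-1, 1)).toarray()
print(onehot)                 # one 0/1 column per category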

Region Split Policy

https://qaofficial.com/post/2019/04/17/70382-region-split-policy.html 2019-04-17
Region concept: a Region is the basic element of table availability and distribution, and consists of one Store per column family. The object-level diagram is as follows: Region size: choosing a Region size is a thorny problem that requires weighing the following factors. A Region is the smallest unit of distributed storage and load balancing in HBase. Different Regions are distributed across different region servers, but a Region is not the smallest unit of storage.
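As a sketch of how the split policy is chosen in practice, the region server's policy and the region size threshold are set in hbase-site.xml; the property names below are the standard HBase ones, and the values are illustrative:

```xml
<!-- hbase-site.xml (illustrative values) -->
<property>
  <!-- Which split policy the region server applies;
       IncreasingToUpperBoundRegionSplitPolicy is the default in recent HBase. -->
  <name>hbase.regionserver.region.split.policy</name>
  <value>org.apache.hadoop.hbase.regionserver.IncreasingToUpperBoundRegionSplitPolicy</value>
</property>
<property>
  <!-- Upper bound on region size before a split is triggered (10 GB here). -->
  <name>hbase.hregion.max.filesize</name>
  <value>10737418240</value>
</property>
```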

[TensorFlow Learning Notes -05] Batch Normalization (BN)

https://qaofficial.com/post/2019/04/17/70482-tensorflow-learning-notes-05-bacth-normalization-bn.html 2019-04-17
[Copyright notice] These TensorFlow learning notes draw on the following references: Li Jiaxuan, author of TensorFlow Technical Analysis and Practice; Huang Wenjian and Tang Yuan, authors of TensorFlow in Practice; Zheng Zeyu and Gu Siyu, authors of TensorFlow in Practice: Google's Deep Learning Framework; Yue Yi and Wang Bin, authors of Deep Learning: Detailed Explanation and Practice of Classic Caffe (Convolutional Architecture for Fast Feature Embedding) Models; the TensorFlow Chinese community (http://www.tensorfly.cn/), where the Geek Institute has translated the official TensorFlow documentation into Chinese; the official English TensorFlow documentation; and various experts' CSDN blogs, GitHub, etc.

scikit-learn

https://qaofficial.com/post/2019/04/17/70487-scikit-learn.html 2019-04-17
scikit-learn preprocessing: Data Preprocessing (Normalization/Standardization/Regularization). 1. Z-score, i.e. remove-mean-and-variance scaling. The formula is (X - mean)/std, computed separately for each attribute/column: subtract the mean from each attribute (by column) and divide by its standard deviation. The result is that, for each attribute/column, all data are clustered around 0 with a variance of 1. This is implemented in two different ways: using the sklearn.preprocessing.scale() function, the given data can be standardized directly.
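A minimal sketch of the scale() route on a small array, together with StandardScaler, the second route, which remembers the mean/std for reuse on new data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, scale

# Z-score standardization: (X - mean) / std, computed per column.
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

X_scaled = scale(X)
print(X_scaled.mean(axis=0))   # each column is now centered near 0
print(X_scaled.std(axis=0))    # each column now has unit variance

# StandardScaler fits once and can transform later batches with the same stats.
scaler = StandardScaler().fit(X)
assert np.allclose(scaler.transform(X), X_scaled)
```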