QA Official

Classification of Unbalanced Data Sets in Multi-classification Problems

https://qaofficial.com/post/2019/04/30/24172-classification-of-unbalance-data-sets-in-multi-classification-problems.html 2019-04-30
When the SVM method is used for image annotation experiments, some words in the vocabulary have very few images, sometimes even fewer than the feature dimension. In this case, neither logistic regression nor a non-linear SVM can produce a classifier with good performance. It is analogous to a system with fewer equations than unknowns: there is no exact solution. This is the
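
The post's own remedy is not shown in this excerpt; purely as an illustration, here is a minimal sketch (with synthetic data and assumed parameters, not code from the post) of one common mitigation, class weighting in scikit-learn:

# Minimal sketch of class weighting for imbalanced multi-class data.
# Synthetic data and parameters are assumptions, not from the post.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

# Synthetic 3-class data with a heavily under-represented third class.
X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=3, weights=[0.7, 0.25, 0.05],
                           random_state=0)

# class_weight='balanced' re-weights errors inversely to class frequency,
# which often helps when some labels have very few examples.
svm_clf = SVC(kernel='rbf', class_weight='balanced').fit(X, y)
log_clf = LogisticRegression(class_weight='balanced', max_iter=1000).fit(X, y)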

Dahua Text Classification

https://qaofficial.com/post/2019/04/30/24259-dahua-text-classification.html 2019-04-30
Overview Traditional Machine Learning Methods A classification problem generally proceeds through feature extraction, model construction, algorithm optimization, cross-validation, and so on. For text, how to extract features is an important and challenging problem: what are the features of a piece of text, and how can they be quantified into a mathematical representation? However, the term frequency–inverse document frequency (TF-IDF) document representation only captures word frequency information; it ignores the contextual structure of words and the topic information they imply.
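
As a small illustration of the TF-IDF representation mentioned above (not code from the post; the toy documents are made up):

# Toy TF-IDF example with scikit-learn; documents are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat on the mat",
        "the dog chased the cat",
        "dogs and cats are pets"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)      # sparse (n_docs, n_terms) matrix

# Each document becomes a weighted bag of words: term frequency scaled by
# inverse document frequency. Word order and topics are not captured,
# which is exactly the limitation the post points out.
print(sorted(vectorizer.vocabulary_))
print(X.toarray().round(2))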

Dynamic Programming

https://qaofficial.com/post/2019/04/30/24133-dynamic-programming.html 2019-04-30
Basic Ideas for Solving Problems with Dynamic Programming 1. If the problem asks for an optimal solution (usually a maximum or minimum value), can be decomposed into several subproblems, and those subproblems overlap through shared smaller subproblems, then dynamic programming is worth considering. Before applying dynamic programming, analyze whether the large problem can be decomposed into smaller problems and whether each of those smaller problems has an optimal solution after decomposition.
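
A minimal illustration of this idea (not from the post), using the classic minimum-coin-count problem, where overlapping subproblems are solved once and reused:

# Memoized dynamic programming: fewest coins needed to make an amount.
from functools import lru_cache

def min_coins(amount, coins=(1, 3, 5)):
    @lru_cache(maxsize=None)            # memoize overlapping subproblems
    def best(a):
        if a == 0:
            return 0
        # Optimal substructure: the best answer for amount `a` is built
        # from the best answers for the smaller amounts a - c.
        candidates = [best(a - c) for c in coins if c <= a]
        return 1 + min(candidates) if candidates else float('inf')
    return best(amount)

print(min_coins(11))   # -> 3  (5 + 5 + 1)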

Image Classification for Deep Learning Series

https://qaofficial.com/post/2019/04/30/24269-image-classification-for-deep-learning-series.html 2019-04-30
Recently I have been studying deep learning. To consolidate what I have learned, I plan to start with popular tasks such as text classification and image classification and write blog posts to record the process, and I encourage friends to follow along. This article walks through image classification (using the MNIST data set) with the Keras framework. The following sections introduce some background knowledge and the concrete steps. 1. Introduction to the Data Set and Framework Used
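
As a rough sketch of the kind of pipeline the post describes (not the post's exact code; the layer sizes and hyperparameters here are assumptions), a dense-network baseline on MNIST with the Keras Sequential API might look like this:

# Minimal MNIST classifier sketch with the Keras Sequential API.
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0   # flatten 28x28 images
x_test = x_test.reshape(-1, 784).astype('float32') / 255.0
y_train = keras.utils.to_categorical(y_train, 10)              # one-hot labels
y_test = keras.utils.to_categorical(y_test, 10)

model = Sequential()
model.add(Dense(128, activation='relu', input_dim=784))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, batch_size=128,
          validation_data=(x_test, y_test))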

Installation of Deep Learning Framework Keras

https://qaofficial.com/post/2019/04/30/24212-installation-of-deep-learning-framework-keras.html 2019-04-30
The latest updates of this blog have been moved to my personal website; you are welcome to visit SCP-173’s BLOG. Installation of Deep Learning Framework Keras Keras is a Python wrapper framework built on top of the deep learning frameworks TensorFlow or Theano. If you intend to use Keras, you must first install TensorFlow or Theano. Keras Chinese Document Address 0.
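
A quick way to confirm that the installation worked (an illustrative snippet, not taken from the post), assuming TensorFlow was chosen as the backend:

# Check that Keras and its backend import correctly after installation.
import keras                      # older Keras prints e.g. "Using TensorFlow backend."
import tensorflow as tf

print("Keras version:", keras.__version__)
print("TensorFlow version:", tf.__version__)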

Quick Start of keras Tutorial

https://qaofficial.com/post/2019/04/30/24204-quick-start-of-keras-tutorial.html 2019-04-30
Sequential Model Implement a simple AND-gate neural network. To help readers see how a multi-classification problem is handled, this code also treats the two-class problem as a multi-class one.

#coding:utf-8
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
import scipy.io as sio
import numpy as np
import keras

model = Sequential()
model.add(Dense(input_dim=2, units=2))        # two inputs, two output units (one per class)
model.add(Activation('sigmoid'))
model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['accuracy'])

trainDa = np.mat([[1,1],[1,0],[0,1],[0,0]])   # AND-gate inputs
trainBl = np.mat([[1],[0],[0],[0]])           # AND-gate labels
testDa
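
The excerpt cuts off at testDa. Purely as an illustration of how such an example might continue (the lines below are hypothetical, not the post's code), the single-column labels could be converted to one-hot vectors to match the two output units before training:

# Hypothetical continuation, assuming one-hot labels for the two output units.
trainBl_onehot = keras.utils.to_categorical(trainBl, num_classes=2)
model.fit(trainDa, trainBl_onehot, epochs=1000, verbose=0)

testDa = np.mat([[1,1],[0,1]])    # hypothetical test inputs; the post's actual testDa is not shown
print(model.predict(testDa))      # per-class scores for each test input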

Summary of Knowledge Structure of Natural Language Processing (NLP)

https://qaofficial.com/post/2019/04/30/24261-summary-of-knowledge-structure-of-natural-language-processing-nlp.html 2019-04-30
The body of knowledge in natural language processing is huge, and what you find online is mostly scattered fragments, for example explanations of individual models without the full context, which makes it hard to learn. So I summarized a knowledge structure of my own.

Text Classification Using Convolutional Neural Network (cnn)

https://qaofficial.com/post/2019/04/30/24237-text-classification-using-convolutional-neural-network-cnn.html 2019-04-30
Convolutional neural networks have achieved good results in sentiment analysis. Compared with earlier shallow machine learning methods such as Naive Bayes (NB) and SVM, they perform better, especially on large data sets, and a CNN does not require manual feature extraction. The traditional shallow ML pipeline requires text feature extraction, text feature representation, normalization, and text classification. Text feature extraction can be divided into four steps: (1) segmenting all
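
As a sketch of the CNN approach described above (not the post's code; the vocabulary size, sequence length, and layer sizes are assumptions), a minimal Keras text classifier might look like this:

# Minimal CNN text classifier sketch: embedding, 1-D convolution over word
# positions, max pooling, softmax output. Sizes are assumed values.
from keras.models import Sequential
from keras.layers import Embedding, Conv1D, GlobalMaxPooling1D, Dense

vocab_size, seq_len, num_classes = 10000, 100, 2   # hypothetical values

model = Sequential()
model.add(Embedding(vocab_size, 128, input_length=seq_len))
model.add(Conv1D(filters=128, kernel_size=5, activation='relu'))
model.add(GlobalMaxPooling1D())
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
# model.fit(x_train, y_train, ...)  # x_train: padded sequences of word indices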

Visual Comparison of Results of Various Algorithms in Machine Learning

https://qaofficial.com/post/2019/04/30/24157-visual-comparison-of-results-of-various-algorithms-in-machine-learning.html 2019-04-30
Before running the code, make sure all the imported modules are installed.

print(__doc__)

# Modified for documentation by Jaques Grobler
# License: BSD 3 clause

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.cross_validation import train_test_split   # sklearn.model_selection in newer versions
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.
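
The excerpt stops mid-import. Roughly, the example goes on to fit each classifier on the same data sets and compare their decision boundaries; a simplified, non-visual sketch of that loop (building on the imports above, not the original script) is:

# Simplified comparison loop: fit each classifier on one synthetic data set
# and print its test accuracy.
X, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
                           random_state=1, n_clusters_per_class=1)
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4,
                                                    random_state=42)

classifiers = [KNeighborsClassifier(3),
               SVC(kernel='rbf', gamma=2, C=1),
               DecisionTreeClassifier(max_depth=5),
               RandomForestClassifier(n_estimators=10, max_depth=5),
               AdaBoostClassifier()]

for clf in classifiers:
    clf.fit(X_train, y_train)
    print(type(clf).__name__, clf.score(X_test, y_test))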

What is CAFFE (Convolutional Architecture for Fast Feature Embedding)?

https://qaofficial.com/post/2019/04/30/24287-what-is-caffe-convolutional-architecture-for-fast-feature-embedding.html 2019-04-30
CAFFE stands for Convolutional Architecture for Fast Feature Embedding. It is a clean and efficient deep learning framework. It is open source, its core is written in C++, and it offers command-line, Python, and MATLAB interfaces. It can run on either the CPU or the GPU, and it is released under the BSD 2-Clause license. Deep learning is popular mainly because it can learn useful features from data autonomously.
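
As a small illustration of the Python interface and CPU/GPU modes mentioned above (a sketch of common pycaffe usage, not code from the post; the file names are placeholders):

# Choose CPU or GPU mode and load a trained network for inference.
import caffe

caffe.set_mode_cpu()                      # or caffe.set_mode_gpu()
net = caffe.Net('deploy.prototxt',        # network architecture (placeholder path)
                'model.caffemodel',       # trained weights (placeholder path)
                caffe.TEST)               # run in test (inference) phase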