QA Official

What is TensorFlow?

https://qaofficial.com/post/2019/04/30/24330-what-is-tensorflow.html 2019-04-30
What is TensorFlow? TensorFlow is an open-source software library for numerical computation using data-flow graphs. In other words, it is one of the best ways to build a deep learning model. This article compiles some excellent lists of TensorFlow practices, libraries, and projects. I. Tutorials: TensorFlow Tutorial 1 — from the basics to more interesting TensorFlow applications; TensorFlow Tutorial 2 — deep learning based on Google's TensorFlow framework

What is the difference between Caffe2 and Caffe (Convolutional Architecture for Fast Feature Embedding)?

https://qaofficial.com/post/2019/04/30/24288-what-is-the-difference-between-caffe-convolutional-architecture-for-fast-feature-embedding-2-and-caffe-convolutional-architecture-for-fast-feature-embedding.html 2019-04-30
Facebook open-sourced Caffe2 a few days ago, giving us another choice of deep learning framework. Caffe2 claims to be a lightweight, modular, and extensible framework: code once, run anywhere. As a long-time Caffe user, I naturally need to do some research. Dependency handling: dependency management in the first version of Caffe is a headache, especially when installing on a company's older servers, which takes a lot of time.

[Caffe]: A Beginner's Introduction to Caffe (Convolutional Architecture for Fast Feature Embedding)

https://qaofficial.com/post/2019/04/30/24277-caffe-convolutional-architecture-for-fast-feature-embedding-about-the-beginneramp#x27s-entrance-to-caffe-convolutional-architecture-for-fast-feature-embedding.html 2019-04-30
Several important files of Caffe. Having used Caffe for so long, I hadn't yet written a blog post for beginners. Recently, at the request of a junior labmate, I plan to write a simple, fast-paced introductory article for beginners. The first step in using Caffe to train a deep neural network is to understand several important files: solver.prototxt

[Keras] Chinese document learning notes-get started quickly keras

https://qaofficial.com/post/2019/04/30/24179-keras-chinese-document-learning-notes-get-started-quickly-keras.html 2019-04-30
These notes, based on the official Chinese and English documentation, summarize the learning process systematically. Keras is a high-level neural network API. Keras is written in pure Python and runs on the TensorFlow, Theano, and CNTK backends. Keras was born to support rapid experimentation and can quickly turn your idea into a result. If you have the following requirements, please choose Keras: simple and fast prototype design (Keras is highly modular,

detailed explanation of official website example 4.38 (reuters_mlp.py) - Keras learning note 4

https://qaofficial.com/post/2019/04/30/24318-detailed-explanation-of-official-website-example-4.38reuters_mlp.py-keras-learning-note-4.html 2019-04-30
Trains and evaluates a simple MLP (multi-layer perceptron) on the Reuters newswire topic-classification task. Keras example directory; datasets used by Keras (including the following): MNIST, cifar-10-batches-py, imdb.npz, imdb_word_index.json, nietzsche.txt, reuters.npz, fra-eng. Code comments: '''Trains

keras for Deep Learning

https://qaofficial.com/post/2019/04/30/24180-keras-for-deep-learning.html 2019-04-30
Note that Keras requires Theano or TensorFlow to be installed first; Keras uses the TensorFlow backend by default. First create a model:

from keras.models import Sequential
model = Sequential()

Then add neural layers and activation functions:

from keras.layers import Dense, Activation
model.add(Dense(units=64, input_dim=100))
model.add(Activation('relu'))
model.add(Dense(units=10))
model.add(Activation('softmax'))

Specify the loss function and optimizer:

model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])

You can also set the parameters of your loss function and optimizer:

model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True))

official website example details 4.40 (variational_autoencoder.py) - Keras learning notes 4

https://qaofficial.com/post/2019/04/30/24207-official-website-example-details-4.40-variant-_-autoencoder.py-keras-learning-notes-4.html 2019-04-30
A demonstration script for building a variational autoencoder with Keras. Keras example directory. Code comments: '''This script demonstrates how to build a variational autoencoder with Keras. #Reference - Auto-Encoding Variational Bayes

the difference between Conv1D and Conv2D in keras

https://qaofficial.com/post/2019/04/30/24238-the-difference-between-conv1d-and-conv2d-in-keras.html 2019-04-30
If there is any mistake, please correct me. My answer is that when the Conv2D input has a single channel, there is no difference between the two; each can be transformed into the other. First of all, both ultimately call the same back-end code (taking TensorFlow as an example; it can be found in tensorflow_backend.py):

x = tf.nn.convolution(
    input=x,
    filter=kernel,
    dilation_rate=(dilation_rate,),
    strides=(strides,),
    padding=padding,
    data_format=tf_data_format)

The difference is that
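The claimed equivalence can be checked with a small sketch in pure Python (no Keras; the "convolution" here is a plain valid-mode cross-correlation, an assumption for illustration): a length-N signal treated as a 1×N single-channel image, convolved with a 1×k kernel in 2D, gives the same output as a 1D convolution with the length-k kernel.

```python
# Minimal valid-mode cross-correlation in 1D and 2D (pure-Python sketch).

def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[r + i][c + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for c in range(out_w)]
            for r in range(out_h)]

signal = [1.0, 2.0, 3.0, 4.0, 5.0]
kernel = [1.0, 0.0, -1.0]

# A 1xN single-channel "image" with a 1xk kernel reduces to the 1D case.
assert conv2d([signal], [kernel])[0] == conv1d(signal, kernel)
print(conv1d(signal, kernel))  # [-2.0, -2.0, -2.0]
```

This mirrors the single-channel case discussed above; with more than one input channel the 2D kernel gains a channel dimension and the correspondence no longer holds directly.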

watermelon book + Machine Learning in Action + Andrew Ng's machine learning (3): machine learning fundamentals (multi-class classification, class imbalance)

https://qaofficial.com/post/2019/04/30/24144-watermelon-book-actual-combat-andrew-ng-machine-learning-3-machine-learning-foundation-multi-classification-category-imbalance.html 2019-04-30
If this article is of a little help to you, please follow and like it; I will be very happy. 0. Preface: this article introduces the problems of multi-class classification and class imbalance in machine learning. 1. Multi-class learning: some algorithms can perform multi-class classification directly, while others cannot. The basic idea is to split the multi-class task into several binary classification tasks and solve those.
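The split described above can be sketched in plain Python (hypothetical labels; One-vs-Rest is shown, and One-vs-One works analogously): each class gets its own binary task in which that class is positive and all others are negative.

```python
# One-vs-Rest (OvR) decomposition: turn one multi-class problem
# into K binary problems, one per class.

def one_vs_rest(labels):
    """Map a multi-class label list to {class: binary label list}."""
    classes = sorted(set(labels))
    return {c: [1 if y == c else 0 for y in labels] for c in classes}

labels = ["cat", "dog", "bird", "cat", "dog"]
tasks = one_vs_rest(labels)
# One binary classifier is trained per label vector; at prediction time
# the class whose classifier is most confident wins.
print(tasks["cat"])  # [1, 0, 0, 1, 0]
```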

2019 fall recruit preparation-machine learning foundation

https://qaofficial.com/post/2019/04/29/24105-2019-fall-recruit-preparation-machine-learning-foundation.html 2019-04-29
Fundamentals of Machine Learning. Common formulas: term frequency-inverse document frequency (TF-IDF).

\text{TF} = \frac{\text{number of times the word appears in the document}}{\text{total number of words in the document}}

\text{IDF} = \log\frac{\text{total number of documents in the corpus}}{\text{number of documents containing the word} + 1}

Common loss functions: log loss (cross-entropy loss, softmax loss) is mostly used for classification problems.
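The TF-IDF formulas above can be sketched in pure Python (the +1 smoothing in the IDF denominator follows the formula as quoted here; libraries smooth differently, so treat this as illustrative only):

```python
import math

def tf(word, doc):
    # Term frequency: occurrences of the word / total words in the document.
    return doc.count(word) / len(doc)

def idf(word, corpus):
    # Inverse document frequency:
    # log(total docs / (docs containing the word + 1)).
    containing = sum(1 for doc in corpus if word in doc)
    return math.log(len(corpus) / (containing + 1))

def tf_idf(word, doc, corpus):
    return tf(word, doc) * idf(word, corpus)

corpus = [["the", "cat", "sat"],
          ["the", "dog", "ran"],
          ["a", "cat", "and", "a", "dog"]]
print(tf("cat", corpus[0]))  # 0.3333333333333333
print(idf("the", corpus))    # log(3 / (2 + 1)) = 0.0
```

Note how a word appearing in every (or nearly every) document gets an IDF near zero, so frequent function words contribute little to the final TF-IDF score.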