MLP for multi-class classification (the Keras official example):

import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import SGD
import numpy as np

# Generate dummy data
x_train = np.random.random((1000, 20))
y_train = keras.utils.to_categorical(np.random.randint(10, size=(1000, 1)), num_classes=10)
x_test = np.random.random((100, 20))
y_test = keras.utils.to_categorical(np.random.randint(10, size=(100, 1)), num_classes=10)

model = Sequential()
# Dense(64) is a fully-connected layer with 64 hidden units.
model.add(Dense(64, activation='relu', input_dim=20))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))

sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
model.fit(x_train, y_train, epochs=20, batch_size=128)
score = model.evaluate(x_test, y_test, batch_size=128)
Dropout meaning: during training, a randomly chosen fraction of the units is dropped at each update, which helps avoid overfitting.
Dropout layer source code: the Dropout layer is defined in core.py under keras/layers.
class Dropout(Layer):
    '''Applies Dropout to the input.
    Dropout consists in randomly setting a fraction `p` of input units
    to 0 at each update during training time, which helps prevent
    overfitting.

    # Arguments
        p: float between 0 and 1. Fraction of the input units to drop.
    '''
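The mechanism can be sketched in a few lines of NumPy. This is an illustrative "inverted dropout" implementation (scaling at training time), not the actual Keras source:

```python
import numpy as np

def dropout(x, p, training=True, rng=np.random.default_rng(0)):
    """Inverted dropout: zero a fraction p of units and rescale the rest."""
    if not training or p == 0.0:
        return x  # at inference time the input passes through unchanged
    mask = rng.random(x.shape) >= p   # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)       # rescale so the expected activation is preserved

x = np.ones((4, 5))
y = dropout(x, p=0.5)
# Surviving units are scaled up to 2.0, dropped units are exactly 0.
```

The rescaling by 1/(1 - p) is what lets the network run unchanged at inference time.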
Topic: You are playing the following Nim Game with your friend: there is a heap of stones on the table, and you take turns removing 1 to 3 stones. The player who removes the last stone wins. You take the first turn.
Both of you are very clever and play optimally. Write a function to determine whether you can win the game, given the number of stones in the heap.
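The standard solution: you lose exactly when the heap size is a multiple of 4, because whatever k in {1, 2, 3} you remove, the opponent can remove 4 - k and hand you a multiple of 4 again. A minimal implementation:

```python
def can_win_nim(n):
    """Return True if the first player can force a win with n stones."""
    # Multiples of 4 are losing positions: any move of 1-3 stones lets
    # the opponent restore a multiple of 4 on your next turn.
    return n % 4 != 0
```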
Here we use CIFAR-10 as the experimental dataset. First download the TensorFlow Models repository to use the CIFAR-10 data classes it provides:

git clone https://github.com/tensorflow/models.git
cd models/tutorials/image/cifar10

Start building the CNN network:

import cifar10
import cifar10_input
import tensorflow as tf
import numpy as np
import time

max_steps = 3000  # number of training steps (one batch per step)
1. abstract Object findAttribute(String name)
Searches for the attribute named name in the page, request, session (if valid), and application scopes, in that order; returns the object if found, or null otherwise.
2. abstract Object getAttribute(String name)
Searches for the attribute named name in the page scope only; returns the object if found, or null otherwise.
The difference between the two is the search range.
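The lookup order can be sketched with a plain Python analogy (the scope dictionaries and values here are illustrative, not the actual JSP implementation):

```python
# Scopes searched narrowest-first, mimicking JSP's pageContext.
scopes = {
    "page": {"x": 1},
    "request": {"x": 2, "y": 20},
    "session": {"z": 30},
    "application": {"w": 40},
}

def find_attribute(name):
    """Like findAttribute: search each scope in order, return first hit or None."""
    for scope in ("page", "request", "session", "application"):
        if name in scopes[scope]:
            return scopes[scope][name]
    return None

def get_attribute(name):
    """Like getAttribute: search only the page scope."""
    return scopes["page"].get(name)
```

Note that find_attribute("x") returns the page-scope value even though request also defines "x", because page is searched first.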
In Python, use import or from ... import to bring in the corresponding module. A module is simply a file collecting related functions and classes that implement some functionality. When we need that functionality, we import the module into our program and use it directly. This is similar to including a header file in C.
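For example, with the standard library math module:

```python
import math                 # import the whole module; access members as math.xxx
from math import sqrt       # import a single name directly into this namespace

print(math.pi)              # 3.141592653589793
print(sqrt(16))             # 4.0
```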
1. Naive Bayes
1. The Naive Bayes model originates from classical probability theory and has stable classification performance.
2. It performs well on small-scale data, can handle multi-class tasks, and is suitable for incremental training: when the data does not fit in memory, it can be trained batch by batch.
3. It is not very sensitive to missing data, the algorithm is relatively simple, and it is often used for text classification.
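Its use for text classification can be illustrated with a minimal from-scratch multinomial Naive Bayes with add-one (Laplace) smoothing; the training sentences below are made up for the example:

```python
import math
from collections import Counter, defaultdict

# Toy labeled corpus (hypothetical examples).
docs = [
    ("cheap pills buy now", "spam"),
    ("limited offer buy cheap", "spam"),
    ("meeting schedule for tomorrow", "ham"),
    ("project meeting notes", "ham"),
]

# Per-class word counts and class priors.
word_counts = defaultdict(Counter)
class_counts = Counter()
for text, label in docs:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def predict(text):
    """Return the class with the highest log posterior."""
    best, best_score = None, float("-inf")
    for label in class_counts:
        score = math.log(class_counts[label] / len(docs))  # log prior
        total = sum(word_counts[label].values())
        for w in text.split():
            # log likelihood with add-one smoothing (handles unseen words)
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best
```

Incremental training here would just mean updating word_counts and class_counts with each new batch, which is why Naive Bayes copes well when data exceeds memory.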
Encoding: representing information as numbers so that computers can process it is called encoding.
Digitizing characters: character encoding.
Digitizing colors: color encoding.
Unicode is an international standard:
- it only defines a number (code point) for each character;
- it does not specify how a character should be stored.
Choosing a storage scheme is an encoding method for Unicode, i.e. the "implementation" just mentioned; UTF-8 is one such scheme.
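The distinction is easy to see in Python: the code point is the Unicode number, while the chosen encoding decides the bytes actually stored:

```python
ch = "中"                      # a CJK character
print(ord(ch))                 # Unicode code point: 20013 (U+4E2D)
print(ch.encode("utf-8"))      # UTF-8 storage: b'\xe4\xb8\xad' (3 bytes)
print(ch.encode("utf-32-be"))  # same code point, different scheme: 4 bytes
```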
Input normalization/standardization: the initialization schemes of AlexNet and Caffe assume inputs normalized by subtracting the mean; without normalization, training can easily fail. This is also why Caffe forces the computation of the image mean over the training samples.
After subtracting the mean, pixel values in [0, 255] are shifted to approximately [-128, 128]. Although image data already has a regular format, normalizing it is still quite useful.
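Mean subtraction is a one-liner in NumPy; the batch shape below is illustrative:

```python
import numpy as np

# Fake batch of 8-bit images: (batch, height, width, channels), values in [0, 255].
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(16, 32, 32, 3)).astype(np.float32)

mean_image = images.mean(axis=0)   # per-pixel mean over the batch, as Caffe computes it
normalized = images - mean_image   # values now lie roughly in [-128, 128]
```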
Face key-point networks:
1. FAN: about 5 s
2. DAN: about 1.5 s
3. OpenPose face: poor results
4. 3000FPS: works only on the face region; integrated into OpenCV and dlib; fast, about 10 ms
5. VanFace: https://github.com/lsy17096535/face-landmark, allegedly 5 ms (lightweight network)
6. Vanilla CNN: https://github.com/cunjian/face_alignment
7. Look At Boundary (LAB): https://github.com/wywu/LAB
8. face-landmark: https://github.com/jiangwqcooler/face-landmark-localization
Among them, 5 and 6 are very powerful. VanFace is quite good except on profile (side) faces.