QA Official

Python Implementation of Abnormal Jump Steps [Sword Offer]

https://qaofficial.com/post/2019/04/08/69507-python-implementation-of-abnormal-jump-step-sword-offer.html 2019-04-08
Topic description: a frog can jump up 1 step or 2 steps at a time ... it can also jump up n steps in a single jump. Find how many distinct ways the frog has to jump up an n-step staircase.

# -*- coding:utf-8 -*-
class Solution:
    def jumpFloorII(self, number):
        # ans[i] holds the number of ways to reach step i
        ans = [0, 1, 2]
        for i in range(3, number + 1):
            total = 1  # one way: a single jump of i steps
            for j in range(1, i):
                total += ans[j]
            ans.append(total)
        return ans[number]
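The recurrence above collapses to a closed form: each of the first n-1 steps can independently be a landing point or not, so f(n) = 2^(n-1). A minimal sketch of that shortcut (the function name jump_floor_2 is mine, not from the post):

def jump_floor_2(number):
    # each of the first number-1 steps is either a landing point or not
    return 2 ** (number - 1) if number > 0 else 0

print(jump_floor_2(5))  # 16, matching the dynamic-programming version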

C#4.0 New Features Dynamic Type

https://qaofficial.com/post/2019/04/07/69291-c#4.0-new-features-dynamic-type.html 2019-04-07
Reposted from http://www.cnblogs.com/ryanding/archive/2010/12/09/1900106.html

Google Guava Useful Collection Classes

https://qaofficial.com/post/2019/04/07/69355-google-guava-useful-collection-classes.html 2019-04-07
Preconditions: precondition checks. With no extra argument, the thrown exception carries no error message. With an Object as the extra argument, the thrown exception uses Object.toString() as its error message. With a String plus any number of additional Object arguments, this variant builds the exception message a bit like printf, but for GWT compatibility it only supports the %s specifier.

MNIST

https://qaofficial.com/post/2019/04/07/69390-mnist.html 2019-04-07
MNIST 1:

import tensorflow as tf
import tensorflow.examples.tutorials.mnist.input_data as input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
x = tf.placeholder("float", [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder("float", [None, 10])
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
# evaluate the model
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

MNIST dataset simple version

https://qaofficial.com/post/2019/04/07/69416-mnist-dataset-simple-version.html 2019-04-07
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# load the dataset
mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
# size of each batch
batch_size = 100
# compute how many batches there are in total
n_batch = mnist.train.num_examples // batch_size
# define two placeholders
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])

MNIST dataset and code for input_data.py

https://qaofficial.com/post/2019/04/07/69393-mnist-dataset-and-code-for-input_data.py.html 2019-04-07
MNIST is the introductory example for TensorFlow, but many people get stuck on the MNIST dataset itself, and some cannot find the input_data.py code. The code is therefore provided below, for study only, for anyone who cannot find input_data.py. The source comes from https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/examples/tutorials/mnist/input_data.py. The following is the code:

# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");

Theano(2) RNN training word vector

https://qaofficial.com/post/2019/04/07/69424-theano2-rnn-training-word-vector.html 2019-04-07
1. Brief introduction of the project. Project: Recurrent Neural Networks with Word Embeddings. Tutorial address: http://deeplearning.net/tutorial/rnnslu.html. Task: slot filling (spoken language understanding), which assigns a label to each word in a sentence and is therefore a classification problem. Dataset: a small DARPA dataset, ATIS (Airline Travel Information System), annotated in Inside Outside Beginning (IOB) format. The training set contains 4,978 sentences and 56,590 words; the test set contains 893 sentences and ...
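To make the IOB labeling concrete, here is a hypothetical ATIS-style sentence with one slot label per word (the sentence and tags are illustrative, not drawn from the actual corpus):

# one IOB slot label per word: B- begins a slot, I- continues it, O is outside any slot
sentence = ["show", "flights", "from", "boston", "to", "new", "york"]
labels = ["O", "O", "O", "B-fromloc", "O", "B-toloc", "I-toloc"]
for word, tag in zip(sentence, labels):
    print(word, tag)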

dropout Principle and Implementation

https://qaofficial.com/post/2019/04/07/69436-dropout-principle-and-implementation.html 2019-04-07
Reprinted from: http://blog.csdn.net/nini_coded/article/details/79302800. Dropout, a regularization method for keeping CNNs from over-fitting, was proposed by Hinton et al. in the classic 2012 paper ImageNet Classification with Deep Convolutional Neural Networks. The principle of dropout is very simple: in each training iteration, the neurons in each layer (n in total) are randomly dropped with probability p, and the data for that iteration (a batch of batchsize samples) is trained on the network consisting of the remaining neurons.
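A minimal NumPy sketch of the mechanism just described, using the common "inverted dropout" variant (an assumption on my part; the post's own code may differ). Here p is the drop probability, and surviving activations are scaled by 1/(1 - p) so the network needs no rescaling at test time:

import numpy as np

def dropout(x, p=0.5, training=True):
    # at test time the full network is used unchanged
    if not training:
        return x
    # keep each neuron with probability 1 - p, then rescale the survivors
    mask = (np.random.rand(*x.shape) >= p).astype(x.dtype) / (1.0 - p)
    return x * mask

h = np.random.randn(100, 256)        # batchsize x hidden units
h_train = dropout(h, p=0.5)          # thinned network during training
h_test = dropout(h, training=False)  # full network at test time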

java Common Collection Class Summary

https://qaofficial.com/post/2019/04/07/69383-java-common-collection-class-summary.html 2019-04-07
In everyday development, collection classes come up constantly: ArrayList for list caches, Map for key-value mappings, and so on. I recently took a closer look at the inheritance hierarchy and internal storage structure of the Java collection classes and wrote up this summary so I can refer back to it at any time. Java's List and Set collections inherit from the Collection interface (Map sits in a separate hierarchy of its own), and the Collection interface mainly defines some public methods.

java Foundation Talk about Collection Classes (Summary and Comparison of Collection Classes)

https://qaofficial.com/post/2019/04/07/69354-java-foundation-talk-about-collection-classes-summary-and-comparison-of-collection-classes.html 2019-04-07
Preface: the collection classes are a commonly used part of the base class library and are Java's way of holding objects. Precisely because they are used so frequently, summing them up and comparing them is well worth the effort. 1. Two ancestor interfaces. 1) Collection: a sequence of independent elements whose elements obey one or more rules; all single-column collection classes implement this interface, such as List, Set, and Queue.