A Fun FizzBuzz Interview Question

Copyright notice: this is an original post by the blogger; do not repost without permission. https://blog.csdn.net/lilywri823/article/details/79578371

So what on earth is Fizz Buzz?

Fizz Buzz is a game Western children often play while learning division. The rules: count from 1 to 100; when you hit a multiple of 3, say Fizz; a multiple of 5, say Buzz; and a multiple of both 3 and 5, say FizzBuzz.
It eventually evolved into a programming interview question: write a program that prints the numbers 1 to 100, except that for multiples of 3 it prints Fizz, for multiples of 5 it prints Buzz, and for multiples of both it prints FizzBuzz.
Original article: http://joelgrus.com/2016/05/23/fizz-buzz-in-tensorflow/
Chinese translation: http://blog.topspeedsnail.com/archives/11010
Below is a small Python program I wrote:

for i in range(1, 101):
    if i % 3 == 0 and i % 5 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)

Running it confirms the output is correct. (A screenshot of partial results appeared here in the original post.)
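Rather than eyeballing the console output, a small self-check can confirm the rules programmatically. The `fizzbuzz` helper below is my own wrapper around the same logic as the loop above, returning strings instead of printing:

```python
def fizzbuzz(i):
    # Same decision logic as the loop above, returned as a string.
    if i % 3 == 0 and i % 5 == 0:
        return "FizzBuzz"
    elif i % 3 == 0:
        return "Fizz"
    elif i % 5 == 0:
        return "Buzz"
    else:
        return str(i)

# Expected answers for 1..15, written out directly from the game's rules.
expected = ["1", "2", "Fizz", "4", "Buzz", "Fizz", "7", "8", "Fizz", "Buzz",
            "11", "Fizz", "13", "14", "FizzBuzz"]
assert [fizzbuzz(i) for i in range(1, 16)] == expected
print("first 15 outputs correct")
```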
But how would you do it in C++? For a C++ novice like me, the best approach is to borrow code from the experts online:

#include <iostream>
using namespace std;

int main() {
    for (int i = 1; i <= 100; i++) {
        if (i % 15 == 0)      cout << "FizzBuzz" << endl;  // multiple of both 3 and 5
        else if (i % 5 == 0)  cout << "Buzz" << endl;
        else if (i % 3 == 0)  cout << "Fizz" << endl;
        else                  cout << i << endl;
    }
    return 0;
}

The output is as expected. (A screenshot appeared here in the original post.)
Below is a program one expert wrote in TensorFlow: he had some fun solving it with a fully connected network with two hidden layers, and the accuracy is quite decent.

import numpy as np
import tensorflow as tf  # written against the TensorFlow 1.x API

def binary_encode(i, num_digits):
    # Encode integer i as a num_digits-bit binary vector, least significant bit first.
    return np.array([i >> d & 1 for d in range(num_digits)])

def pro_data():
    # Train on 101..10000 so the test range 1..100 is never seen during training.
    data_set = np.array([binary_encode(i, 14) for i in range(101, 10001)])
    data_label = []
    for i in range(101, 10001):
        if i % 15 == 0:
            data_label.append([1, 0, 0, 0])
        elif i % 5 == 0:
            data_label.append([0, 1, 0, 0])
        elif i % 3 == 0:
            data_label.append([0, 0, 1, 0])
        else:
            data_label.append([0, 0, 0, 1])
    return data_set, np.array(data_label)

def predict2word(num, prediction):
    # Map a predicted class index back to the word (or the number itself).
    return ['fizzbuzz', 'buzz', 'fizz', str(num)][prediction]

def train_model(epoch=10000):
    train_data, train_label = pro_data()
    X = tf.placeholder(tf.float32, [None, 14])
    Y = tf.placeholder(tf.float32, [None, 4])
    # Two hidden layers (14 -> 32 -> 64) plus a 4-way output layer.
    weights1 = tf.Variable(tf.random_normal([14, 32]))
    bias1 = tf.Variable(tf.random_normal([32]))
    weights2 = tf.Variable(tf.random_normal([32, 64]))
    bias2 = tf.Variable(tf.random_normal([64]))
    weights3 = tf.Variable(tf.random_normal([64, 4]))
    bias3 = tf.Variable(tf.random_normal([4]))
    fc1 = tf.nn.relu(tf.matmul(X, weights1) + bias1)
    fc2 = tf.nn.relu(tf.matmul(fc1, weights2) + bias2)
    out = tf.matmul(fc2, weights3) + bias3
    # softmax_cross_entropy_with_logits takes keyword arguments in TF 1.x.
    cost = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=out, labels=Y))
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(cost)
    predict_op = tf.argmax(out, 1)
    sess = tf.Session()
    # initialize_all_variables() is deprecated; use global_variables_initializer().
    sess.run(tf.global_variables_initializer())
    for i in range(epoch):
        batch_size = 256
        # Shuffle the training set each epoch, then train in mini-batches.
        rand_order = np.random.permutation(len(train_data))
        train_data, train_label = train_data[rand_order], train_label[rand_order]
        for j in range(0, len(train_data), batch_size):
            end = j + batch_size
            sess.run(train_op, feed_dict={X: train_data[j:end], Y: train_label[j:end]})
        print(i, np.mean(np.argmax(train_label, axis=1) ==
                         sess.run(predict_op, feed_dict={X: train_data})))
    # Evaluate on 1..100, numbers the network has never seen.
    numbers = np.arange(1, 101)
    test_data = np.transpose(binary_encode(numbers, 14))
    test_label = sess.run(predict_op, feed_dict={X: test_data})
    print(np.vectorize(predict2word)(numbers, test_label))

if __name__ == '__main__':
    train_model()
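To make the network's input/output scheme concrete, here is the encoding in isolation. `binary_encode` is copied from the program above; `label_index` is my own name for the labeling logic inside `pro_data` (it matches the word order in `predict2word`):

```python
import numpy as np

def binary_encode(i, num_digits):
    # Each integer becomes a num_digits-bit binary feature vector, LSB first.
    return np.array([i >> d & 1 for d in range(num_digits)])

def label_index(i):
    # Class indices: 0 = fizzbuzz, 1 = buzz, 2 = fizz, 3 = the number itself.
    if i % 15 == 0:
        return 0
    if i % 5 == 0:
        return 1
    if i % 3 == 0:
        return 2
    return 3

v = binary_encode(15, 14)  # 15 = 0b1111, so the first four bits are set
print(v)
print(label_index(15))     # 15 is a multiple of both 3 and 5
```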

Bonus links, the collected wisdom of various experts:

  1. http://blog.csdn.net/koon/article/details/1540780
  2. http://blog.csdn.net/dgh_dean/article/details/54575778 — a translation of the original article, solved with deep learning (that interviewee did not pass the interview).
  3. Solutions in many languages: http://coding.memory-forest.com/fizzbuzz%E6%9C%89%E4%BD%95%E8%A7%A3%EF%BC%9F.html
  4. Source of the problem — small programs one interviewee wrote across various interviews, click with caution: https://www.cnblogs.com/webary/p/6507413.html
