雷鋒網(wǎng) note: this article was written by 張慶恒 and originally published on the author's personal blog; 雷鋒網(wǎng) (public account: 雷鋒網(wǎng)) republishes it with permission.
This post documents a first round of optimizations to the neural-network text classifier built earlier, raising accuracy from 65% to 80%. The optimizations cover:
● Shuffling the training data
● Adding a hidden layer and a validation set
● Regularization
● Preprocessing the original data with PCA
● Tuning the training parameters (number of iterations, batch size, etc.)
Inspecting the training set shows that the samples are stored by class, and they remain in class order after being loaded into memory. Taking a contiguous slice as the validation set would therefore remove a large share of one class's training samples, and prediction accuracy for that class would drop. So the first step is to shuffle the training data.
The shuffle is done on the already vectorized training data: first merge the data and the labels, shuffle, then split them back into train.txt and train_labels.txt. This can be done directly with shell commands:
1. Prepend the labels as the first column of train.txt
paste -d" " train_labels.txt train.txt > train_to_shuf.txt
2. Randomly shuffle the lines of the merged file
shuf train_to_shuf.txt -o train.txt
3. Extract the first column of the shuffled file and save it to train_labels.txt
cat train.txt | awk '{print $1}' > train_labels.txt
4. Remove the label column (the first column); since awk cannot safely write back to the file it is reading, redirect to a temporary file first
awk '{$1="";print $0}' train.txt > train_tmp.txt && mv train_tmp.txt train.txt
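If the shell tools are not available, the same merge-shuffle-split can be done in Python. A minimal sketch, assuming one sample per line in train.txt and one label per line in train_labels.txt (the same layout the shell commands rely on):
#!/usr/bin/python
#-*-coding:utf-8-*-
# Shuffle train.txt and train_labels.txt together while keeping them aligned.
import random

with open("train_labels.txt") as f_label, open("train.txt") as f_data:
    pairs = list(zip(f_label.readlines(), f_data.readlines()))

random.shuffle(pairs)  # shuffle label/sample pairs, not the two files separately

with open("train_labels.txt", "w") as f_label, open("train.txt", "w") as f_data:
    for label, data in pairs:
        f_label.write(label)
        f_data.write(data)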
Training again in the same way after the shuffle, accuracy rose from 65% to 75%.
The previous network did softmax regression directly on the input. Here a hidden layer is added, together with a validation set to monitor how accuracy changes. The hidden layer has 500 nodes and uses the ReLU activation. With this structure replacing the original network, accuracy improves further.
The model now fits the training set at over 90%, while the accuracy measured in the previous step was only about 76%, which indicates some overfitting. To reduce it, an L2 regularization term is added to the original cost function. The code, including the hidden layer from the previous step, is:
#!/usr/bin/python
#-*-coding:utf-8-*-
import tensorflow as tf
from datasets import datasets

LAYER_NODE1 = 500   # number of nodes in the hidden layer
INPUT_NODE = 5000   # input dimension (size of the text vector)
OUTPUT_NODE = 10    # number of classes
REG_RATE = 0.01     # L2 regularization rate

def interface(inputs, w1, b1, w2, b2):
    """
    compute the forward propagation result
    """
    lay1 = tf.nn.relu(tf.matmul(inputs, w1) + b1)
    return tf.nn.softmax(tf.matmul(lay1, w2) + b2)  # need softmax??

data_sets = datasets()
data_sets.read_train_data(".", True)
sess = tf.InteractiveSession()

x = tf.placeholder(tf.float32, [None, INPUT_NODE], name="x-input")
y_ = tf.placeholder(tf.float32, [None, OUTPUT_NODE], name="y-input")
w1 = tf.Variable(tf.truncated_normal([INPUT_NODE, LAYER_NODE1], stddev=0.1))
b1 = tf.Variable(tf.constant(0.0, shape=[LAYER_NODE1]))
w2 = tf.Variable(tf.truncated_normal([LAYER_NODE1, OUTPUT_NODE], stddev=0.1))
b2 = tf.Variable(tf.constant(0.0, shape=[OUTPUT_NODE]))
y = interface(x, w1, b1, w2, b2)

# cross-entropy loss plus L2 regularization on both weight matrices
cross_entropy = -tf.reduce_sum(y_ * tf.log(y + 1e-10))
regularizer = tf.contrib.layers.l2_regularizer(REG_RATE)
regularization = regularizer(w1) + regularizer(w2)
loss = cross_entropy + regularization
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

# training
tf.global_variables_initializer().run()
saver = tf.train.Saver()
cv_feed = {x: data_sets.cv.text, y_: data_sets.cv.label}
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
for i in range(5000):
    if i % 200 == 0:
        cv_acc = sess.run(acc, feed_dict=cv_feed)
        print "train steps: %d, cv accuracy is %g " % (i, cv_acc)
    batch_xs, batch_ys = data_sets.train.next_batch(100)
    train_step.run({x: batch_xs, y_: batch_ys})
path = saver.save(sess, "./model4/model.md")
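The "# need softmax??" comment in interface points at a real concern: applying softmax inside the network and then taking tf.log can underflow when a predicted probability is near zero (hence the + 1e-10 guard). A common alternative, sketched below and reusing the placeholders and variables defined above, is to have the network output raw logits and let TensorFlow fuse softmax and cross-entropy into one numerically stable op:
# Sketch only: a logits-based variant of the loss above, not the code used
# for the reported results.
def interface_logits(inputs, w1, b1, w2, b2):
    lay1 = tf.nn.relu(tf.matmul(inputs, w1) + b1)
    return tf.matmul(lay1, w2) + b2  # raw logits, no softmax here

logits = interface_logits(x, w1, b1, w2, b2)
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))
loss = cross_entropy + regularization
# tf.argmax(logits, 1) equals tf.argmax(tf.nn.softmax(logits), 1), so the
# accuracy calculation works on logits unchanged. Note that reduce_mean gives
# a different loss scale than the reduce_sum version, so the learning rate
# may need retuning.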
The text vectors form a severely sparse matrix of high dimensionality, which both slows down training and consumes a lot of memory. So the data is preprocessed with PCA, retaining 99% of the variance to obtain the corresponding k, i.e. the reduced dimensionality.
#!/usr/bin/python
#-*-coding:utf-8-*-
"""
PCA for datasets
"""
import numpy
from datasets import datasets

ORIGIN_DIM = 5000

def pca(origin_mat):
    """
    Reduce dimensionality with PCA.
    Each row of origin_mat is one sample of the dataset,
    each column is one feature.
    Returns the projection matrix U[:, 0:k] and k.
    """
    # mean normalization
    avg = numpy.mean(origin_mat, axis=0)
    # covariance matrix
    cov = numpy.cov(origin_mat - avg, rowvar=0)
    # singular value decomposition
    U, s, V = numpy.linalg.svd(cov, full_matrices=True)
    k = 1
    sigma_s = numpy.sum(s)
    # choose the smallest k that retains 99% of the variance
    for k in range(1, ORIGIN_DIM + 1):
        variance = numpy.sum(s[0:k]) / sigma_s
        print "k = %d, variance is %f" % (k, variance)
        if variance >= 0.99:
            break
    if k == ORIGIN_DIM:
        print "something unexpected: k is the same as ORIGIN_DIM"
        exit(1)
    return U[:, 0:k], k

if __name__ == '__main__':
    # main: read train.txt, run PCA and save the result to train_pca.txt
    data_sets = datasets()
    train_text, _ = data_sets.read_from_disk(".", "train", one_hot=False)
    U, k = pca(train_text)
    print "U shape: ", U.shape
    print "k is : ", k
    # project the (uncentered) data onto the top-k principal directions
    text_pca = numpy.dot(train_text, U)
    text_num = text_pca.shape[0]
    print "text_num in pca is ", text_num
    with open("./train_pca.txt", "a+") as f:
        for i in range(0, text_num):
            f.write(" ".join(map(str, text_pca[i, :])) + "\n")
The result is k = 2583. This step improved accuracy slightly, but the effect was small.
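One practical point the script does not show: the same projection matrix U must also be applied to the validation and test vectors, and INPUT_NODE in the network then becomes k instead of 5000. A minimal sketch, where pca_U.npy and cv.txt are hypothetical file names:
import numpy

# after running pca(): persist the projection matrix U for reuse
numpy.save("pca_U.npy", U)

# later, before training or evaluating on another split
U = numpy.load("pca_U.npy")
cv_text = numpy.loadtxt("cv.txt")   # validation vectors in the original 5000-dim space
cv_pca = numpy.dot(cv_text, U)      # project into the same k-dimensional space
# the network must now be built with INPUT_NODE = k (2583 here)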
This step mainly consisted of tuning the network's parameters based on validation- and test-set performance, including the learning rate, number of layers, nodes per layer, regularization weight, number of iterations, and batch size. The final accuracy reached 80%.
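Rather than editing constants in the script for every trial, it can help to expose them as command-line arguments so different configurations can be compared run by run. A minimal sketch of such a layout (hypothetical, not the author's nn_train.py):
#!/usr/bin/python
#-*-coding:utf-8-*-
# Read the tuned hyperparameters from the command line, falling back to the
# defaults used above; e.g.  ./nn_train.py 0.1 200 10000 0.001
import sys

LEARNING_RATE = float(sys.argv[1]) if len(sys.argv) > 1 else 0.01
BATCH_SIZE = int(sys.argv[2]) if len(sys.argv) > 2 else 100
TRAIN_STEPS = int(sys.argv[3]) if len(sys.argv) > 3 else 5000
REG_RATE = float(sys.argv[4]) if len(sys.argv) > 4 else 0.01

# ...build and train the network as above, then compare the printed
# validation accuracy across runs and keep the best setting.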
These initial optimizations raised the network's accuracy from 65% to 80%. The main gains came from shuffling the training data and adjusting the network structure. To speed up training and reduce memory consumption, the data was also reduced in dimensionality.
The code has since been restructured (not covered here) into nn_interface.py and nn_train.py, which define the network structure and manage the training process, respectively.
A later post will use more of TensorFlow's features to optimize training further.