Solving the problem that imdb.load_data(num_words=10000) fails to download the dataset in Keras

When we follow the code examples in the book Deep Learning with Python, a failed dataset download is a common problem. For example, run the following piece of code:

from keras.datasets import imdb

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

Keras will then try to download the imdb.npz dataset from a remote server, and this download may fail. So what should we do?
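To see the failure explicitly, you can wrap the call in a try/except block. This is only a minimal sketch using the standard keras.datasets.imdb API; the exact exception you get depends on your network and Keras version.

from keras.datasets import imdb

try:
    # The first call triggers a download of imdb.npz; this is where it can fail offline
    (train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
except Exception as exc:  # typically a connection/URL error when the server is unreachable
    print("Download of the IMDB dataset failed:", exc)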

One option is to download the imdb.npz file yourself (for example, via a Baidu search for a mirror), store it in a folder of your choice, and then change the code to the following:

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(path="/home/cc/datasets/imdb.npz", num_words=10000)

Haha, wasn't that an easy fix?
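For completeness, here is a runnable sketch of that workaround. The path /home/cc/datasets/imdb.npz is just the example location used above; replace it with wherever you actually saved the file.

from keras.datasets import imdb

# Path to the manually downloaded file (example location; adjust to your own setup)
local_path = "/home/cc/datasets/imdb.npz"

# Passing an absolute path makes load_data read the local file instead of downloading it
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(
    path=local_path, num_words=10000)

# Quick sanity check: the IMDB dataset has 25,000 training and 25,000 test reviews
print(len(train_data), len(test_data))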

Alternatively, put the downloaded imdb.npz file into the .keras/datasets folder under your home directory. On Ubuntu, the .keras folder is hidden; press Ctrl+H in the file manager to show hidden folders. With this approach no code changes are needed, because ~/.keras/datasets is the default folder where Keras caches downloaded datasets.
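If you prefer to do the copy from Python rather than the file manager, a small sketch like the following should work; it assumes the file was downloaded to /home/cc/datasets/imdb.npz, as in the example above.

import os
import shutil
from keras.datasets import imdb

downloaded = "/home/cc/datasets/imdb.npz"  # where you saved the manual download

# ~/.keras/datasets is Keras's default cache folder for downloaded datasets
cache_dir = os.path.expanduser("~/.keras/datasets")
os.makedirs(cache_dir, exist_ok=True)
shutil.copy(downloaded, os.path.join(cache_dir, "imdb.npz"))

# The original call now finds the cached file and skips the download
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)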
