How to save and load files on Kaggle, and how to delete saved files from the output directory.

Let's start with the usual way of creating a Jupyter notebook on Kaggle.

After clicking New Notebook, you enter an empty notebook. Under Data you can create and upload the data to be processed.

You can upload data from your local machine, or click Add Data to search for open-source datasets.

Click ACCELERATOR to select a GPU or TPU. If you select nothing, the notebook runs on the CPU by default.
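Once the notebook is running, you can check which accelerator you actually got. A minimal sketch using PyTorch's device API (assuming torch is installed, as it is on Kaggle):

```python
import torch

# Pick the GPU if the accelerator was granted, otherwise fall back to CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
    print("GPU:", torch.cuda.get_device_name(0))
else:
    device = torch.device("cpu")
    print("Running on CPU")
```

Tensors and models can then be moved onto `device` with `.to(device)`.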

Save:

torch.save(data, path)

Here data is the object to save, and path is the directory path plus the file name, for example: torch.save(data, '/kaggle/working/data.pt')

Load:

torch.load('/kaggle/working/07.pt')

To load, simply pass the full path including the file name.
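The save-and-load round trip above can be sketched end to end. The file name `demo.pt` and the `/tmp` path are placeholders for this example; on Kaggle you would write to `/kaggle/working/` instead:

```python
import torch

# Save a tensor to disk, then load it back and verify the round trip.
data = torch.tensor([1.0, 2.0, 3.0])
path = "/tmp/demo.pt"   # on Kaggle: '/kaggle/working/demo.pt'

torch.save(data, path)   # serialize to disk
loaded = torch.load(path)  # deserialize

print(torch.equal(data, loaded))  # True
```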

Deleting existing .pt files from the output directory

import os
import shutil

if __name__ == '__main__':
    path = '/kaggle/working'
    if os.path.exists(path):
        shutil.rmtree(path)  # removes the directory and everything in it
        print('Deletion complete')
    else:
        print('Already empty')

Run this cell to delete everything under working. Note that shutil.rmtree removes the directory itself, not just its contents. To delete files in another location, just change path.
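If you want to empty the output directory while keeping the directory itself, you can delete its contents one entry at a time instead of calling rmtree on the directory. A minimal sketch with a hypothetical helper `clear_dir`, demonstrated on a throwaway temp directory rather than /kaggle/working:

```python
import os
import shutil
import tempfile

def clear_dir(path):
    """Delete every file and subdirectory inside `path`,
    but keep the directory itself."""
    for name in os.listdir(path):
        full = os.path.join(path, name)
        if os.path.isdir(full):
            shutil.rmtree(full)   # remove subdirectory and its contents
        else:
            os.remove(full)       # remove a single file

# Demo: create a temp directory with one stale file, then clear it.
demo = tempfile.mkdtemp()
open(os.path.join(demo, "old.pt"), "w").close()
clear_dir(demo)
print(os.path.exists(demo), os.listdir(demo))  # True []
```

On Kaggle you would call `clear_dir('/kaggle/working')`, leaving the directory in place for new output.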

A brief introduction to the GPU and TPU

GPU

Nvidia coined the term GPU in 1999. A GPU is the chip on a graphics card, just as a CPU is the chip on a motherboard. Were there no GPUs on graphics cards before 1999? Of course there were, but at the time nobody had named them, they attracted little attention, and their development was relatively slow.

Since Nvidia introduced the concept, the GPU has developed rapidly. In short, it went through the following stages:

1) At first it was used only for graphics rendering. This was the GPU's original purpose, as the name makes clear: Graphics Processing Unit.

2) Later, people realized that such powerful hardware was wasted on graphics alone and should be put to other work, such as floating-point arithmetic. But floating-point computations could not be handed to the GPU directly, because at that time it could only process graphics. The obvious workaround was to repackage floating-point computations as graphics-rendering tasks and hand those to the GPU. This was the idea behind GPGPU (General-Purpose GPU). The drawback was that you needed some graphics knowledge, or you would not know how to do the packaging.

3) To let people without graphics expertise tap the GPU's computing power, Nvidia introduced the concept of CUDA.

What is CUDA?

CUDA (Compute Unified Device Architecture) is a general-purpose parallel computing architecture and computing platform. It includes the CUDA instruction set architecture and the parallel compute engine inside the GPU. By writing CUDA C, a language similar to C, you can develop CUDA programs and use the GPU's computing power directly, instead of packaging computations as graphics-rendering tasks as before.

A TPU (Tensor Processing Unit) is a chip Google custom-built for machine learning. It is tuned for deep-learning workloads and delivers higher performance per watt.

Google built it to accelerate TensorFlow, its second-generation machine-learning system, and for that workload it is far more efficient than a GPU; Google's deep neural networks are driven by the TensorFlow engine. Because the TPU is tailor-made for machine learning, it needs fewer transistors per operation and is naturally more efficient.

Compared with CPUs and GPUs of the same period, the TPU provides a 15-30x performance improvement and a 30-80x efficiency (performance per watt) improvement.

Per watt, the TPU delivers orders of magnitude more machine-learning performance than commercial GPUs and FPGAs, roughly equivalent to technology seven years ahead. Because it is built specifically for machine learning, the chip tolerates reduced computational precision, so each operation needs fewer transistors. That lets it run more sophisticated and powerful machine-learning models and apply them quickly, giving users more accurate results.

The two accelerators have different strengths, so choose according to your needs.
 


Origin blog.csdn.net/weixin_51672245/article/details/128053661