Where is TensorFlow headed? PyTorch now accounts for 80% of papers in academia

Source: Heart of the Machine | Editor: Mayonnaise

At the top conferences of 2021, papers using PyTorch outnumbered those using TensorFlow by at least three to one, and the gap continues to widen.


From the early academic frameworks Caffe and Theano to the later PyTorch and TensorFlow, machine learning frameworks have become the favorites of researchers and industry practitioners ever since deep learning returned to the spotlight in 2012.

At the end of 2018, Google launched the brand-new JAX framework, and its popularity has been climbing steadily. Many researchers hope it can replace deep learning frameworks such as TensorFlow.

For now, however, PyTorch and TensorFlow remain the two dominant players in the ML framework field, and no newer framework can yet match their strength. The contest between PyTorch and TensorFlow is largely zero-sum: as one rises, the other falls, and the balance of power has been shifting quietly.

In October 2019, Horace He, then an undergraduate at Cornell University and an intern on the PyTorch team, compiled statistics on the use of PyTorch and TensorFlow in academia. The results showed that researchers were flocking to PyTorch in large numbers, though at the time the industry's first choice still appeared to be TensorFlow.

As the figure below shows, PyTorch overtook TensorFlow in usage at the surveyed top conferences around mid-2019.

Data collection time: October 2019.

At the time, the developer community debated heatedly over which framework would get its "moment in the spotlight" in the ML framework battle. Two years later, Horace He published updated statistics.

As of now, PyTorch's share at the three top conferences EMNLP, ACL, and ICLR exceeds 80%, and it remains above 70% at the other conferences. In just two years, TensorFlow's footprint has shrunk dramatically.



PyTorch's "overtaking" in academia

For each top conference, the author also presents detailed data in charts.

Taking CVPR as an example: before CVPR 2018, TensorFlow's usage rate was still higher than PyTorch's, but the very next year the situation reversed.

At CVPR 2019, PyTorch's usage rate was 22.72% (294 papers) while TensorFlow's was 11.44% (148 papers); by CVPR 2020, those figures had become 28.49% (418 papers) and 7.7% (113 papers), respectively.



At ICML, ICLR, and NeurIPS, the competitive picture is the same:


PyTorch leads the pack while TensorFlow continues to decline. At ICLR 2022, PyTorch's usage rate was 32.20% (1091 papers) while TensorFlow's fell to 6.14% (208 papers), a more than five-fold gap.




Does TensorFlow have a future in academia?

So how did TensorFlow, now sidelined in academia, end up where it is today?

On Hacker News, the topic sparked a lively discussion among developers:

"In academic publishing, being able to compare your work to SOTA is critical. If everyone else in your area is using a framework, then you Should do the same. Pytorch has been the framework I've been following the most over the past few years."

"But one of Tensorflow's bright spots is static graphs. As models become denser and require different parts to execute in parallel, our run in PyTorch Some challenges are seen in the model.”


In this developer's view, if you want to run many things in parallel, TensorFlow still has features that other frameworks cannot yet match; it all depends on what you are doing.
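To make the distinction concrete, here is a minimal pure-Python sketch of the idea behind the two execution styles. This is only a toy illustration of the concept, not TensorFlow's or PyTorch's actual API or internals: in eager mode each operation runs the moment it is written, while a static graph is described first, so a runtime that sees the whole graph can spot independent branches and execute them in parallel.

```python
# Toy illustration of eager vs. static-graph execution. The Node class
# and const helper are invented for this sketch; real frameworks do
# graph capture and scheduling very differently.
from concurrent.futures import ThreadPoolExecutor

class Node:
    """One op in a static graph: built now, executed later."""
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs

    def run(self):
        # Evaluate inputs, then apply this node's operation.
        return self.fn(*(n.run() for n in self.inputs))

def const(v):
    # A leaf node that simply yields a constant value.
    return Node(lambda: v)

# Eager style: each op executes immediately, one after another.
eager_result = (2 * 3) + (4 * 5)

# Graph style: first describe the computation without running it...
a = Node(lambda x, y: x * y, const(2), const(3))
b = Node(lambda x, y: x * y, const(4), const(5))

# ...then execute. Since a and b share no inputs, a scheduler that
# can see the whole graph is free to evaluate them in parallel.
with ThreadPoolExecutor() as pool:
    ra, rb = pool.map(lambda n: n.run(), [a, b])
graph_result = ra + rb

print(eager_result, graph_result)  # both print 26
```

The trade-off this sketch hints at matches the quote above: the eager style is easier to write and debug line by line, while the graph style hands the runtime enough global information to parallelize and optimize before anything executes.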

Others say TensorFlow's decline stemmed from a strategic error.

"I think Tensorflow made a bad move in academia because it was very difficult to use in earlier versions. Of course it always performs better than PyTorch, but when you're a PhD student with a heavy workload , you don’t care much about whether your code is efficient, but more about whether your code can work. Some people say that PyTorch is relatively easy to debug, so those early models were published in PyTorch, and many people came to PyTorch later.”


What do you think?


Origin: blog.csdn.net/lgzlgz3102/article/details/123564642