Speaking of the IJCAI review controversy: why is there always controversy at top conferences?

https://mp.weixin.qq.com/s/9nf0HCeFS65t4esoRlZN9A

By 超神经

Scenario: Recently, the IJCAI 2019 review results drew a wave of complaints. We take a deep look at whether IJCAI is really as bad as netizens say. In addition, to improve review efficiency, China is already trialing AI tools to assist with reviewing.

Keywords: top conferences, paper review

IJCAI 2019 paper acceptance results have recently set off a wave of complaints on the Internet.

On May 9, the acceptance results of IJCAI 2019, a top conference in the field of artificial intelligence, were announced: of 4,752 valid submissions this year, 850 were finally accepted, an acceptance rate of 17.9%, below the roughly 20% average of previous years.

After the results were announced, many contributors were dissatisfied. Authors of rejected manuscripts posted their "encounters" online and questioned the review, scoring, and rebuttal stages.

In response, Sarit Kraus, program chair of IJCAI 2019, replied in an open letter on May 10. She wrote:

I can understand everyone's doubts about the review process, its fairness and its randomness; I have received many rejection letters myself. For this round, I tried my best to reduce the randomness of the review process, reduce the number of papers assigned to each PC and SPC member, and increase the review time.

However, with 4,752 papers submitted to this conference, the reviewing task is extremely heavy. I think it is necessary to rethink and innovate how academic conferences conduct rigorous reviewing. IJCAI will also listen to everyone's ideas and build a better conference.

Some scholars found this response pertinent, understandable, and acceptable. After all, after so many years, there should be more innovation.

After the anger subsides, we need to analyze the situation calmly and objectively. What is IJCAI? Is it really becoming "more and more watered down," as some authors complained?

Be objective and listen to multiple voices

For any dispute, we should not listen to just one side. Overall, IJCAI was somewhat wronged this time.

1. Domestic complaints are overwhelming, but responses abroad are muted

First of all, the grievances are concentrated on the domestic Q&A community Zhihu, while no related complaints have appeared on foreign platforms such as Twitter, Reddit, and Quora.

Some users said on Weibo that the domestic academic circle has a clique culture: reviewers give high marks to authors they know, while authors with no connections may receive evaluations that are not objective and fair enough. In addition, this year, for the first time, AAAI and IJCAI reportedly checked for reviewer conflicts of interest and found that 90% of the highly suspect cases involved Chinese researchers.

Teacher Wan Xiaojun of Peking University also mentioned on Weibo: "It seems that some authors have used connections to influence the review results... I don't know when this bad ethos started."

Teacher Wan Xiaojun of Peking University researches natural language processing, text mining, and artificial intelligence

One might guess that it is precisely because of such bad practices that reviewers became biased against Chinese authors and failed to review objectively.
2. Good if accepted, bad if rejected?

Secondly, in the answers to "How do you view the IJCAI 2019 acceptance results," not everyone condemned IJCAI. In general, most of the complaints still came from authors of rejected papers.

Some users said:

"Before submitting to IJCAI, you should be prepared for randomness; this year's results are nothing new. Moreover, because IJCAI covers too many topics and is not focused enough, reviewers sometimes cannot understand a paper, so contributors should go to more specialized conferences in their own research fields. In today's paper inflation, it is better to work hard on writing good articles."

This comment received more than 80 upvotes, and other netizens responded:

"If a paper is rejected, people say the reviewing is poor. Why not consider whether your own work is good enough?"

An accepted author stated:

"It is still sad to see the results of my hard work devalued. Objectively speaking, the reviews I received were of good quality and very strict, and the reviewers I encountered were quite responsible."

Teacher Liu Zhiyuan from Tsinghua University replied to him in the comments:

"After all, unreliable reviewers are still rare. Don't worry too much."


Another accepted author also offered similar advice to the complainers (shared as a screenshot).

In addition, some netizens said that many people complain IJCAI and AAAI are too watered down, yet at the same time they submit articles that are clearly not at top A-level quality, and this is exactly what produces the current situation.

Looking at the data, in recent years both the number of IJCAI submissions and the number of accepted papers have trended upward. The number of submissions this year increased by 37% over last year, the highest in history. Yet, conversely, this year's acceptance rate is the lowest in recent years, falling below the historical average of 20% for the first time.

Reference data (IJCAI submissions and acceptance rates over the past four years):

In 2015, 1,996 submissions were received and 575 accepted, an acceptance rate of 28.8%.
In 2016, 2,294 submissions were received, with an acceptance rate under 25%.
In 2017, 2,540 submissions were received and 660 accepted, an acceptance rate of 25.9%. China contributed the most submissions, 37% of the total; Europe and the United States each contributed 18%.
In 2018, 3,470 submissions were received and 710 accepted, an acceptance rate of 20.5%. Among the accepted papers, 325 had authors from China, 46% of the total, far ahead of all other countries and regions.

Perhaps, as the netizens above suggested, too many researchers who were unsure about the quality of their papers submitted anyway with a "give it a shot" mentality, wasting IJCAI's review resources. This is also one reason review problems increased.

Therefore, regarding this year's acceptance results, rational criticism is reasonable, but blindly following the trend and complaining is not. Both reviewers and contributors should reflect.

A netizen comments that they have been rejected by IJCAI many times

No top conference escapes complaints

In fact, IJCAI is not the only one to be complained about. ICML, AAAI, and NeurIPS have all had the same "encounter." Being complained about after results are released seems to have become the norm.

Just today, ACL, the top NLP conference, released its results and attracted plenty of criticism. Not long ago, ICML 2019, whose results came out in early March, also saw netizens on Twitter mocking the review outcomes.

ACL, currently recognized as the top NLP conference, still cannot escape complaints

Therefore, given that the AI field as a whole cannot escape complaints, we had better spend more time finding our own shortcomings and writing better papers.

Teacher Zhou Zhihua has previously commented on various conferences in the AI field. Here is his evaluation of IJCAI, quoted for reference:

IJCAI: the best comprehensive AI conference, started in 1969 and held every two years in odd-numbered years. Because AI is so broad, although each edition accepts more than 100 papers (now more than 200), each subfield gets only a few; fields such as machine learning and computer vision get roughly 10 papers per edition, so acceptance is very difficult.

However, the acceptance rate is not too low, basically around 20%, because insiders weigh their chances and do not waste reviewers' time if a paper has no hope.

Recently, submissions from mainland China to international conferences have surged like a tide, and because few research groups in China can vet papers internally first, many conferences complain that low-quality articles from China have seriously hampered the efficiency of PC members. Under these circumstances, acceptance rates at international conferences are likely to drop in the coming years.

IJCAI can be described as the veteran of the AI field

In terms of seniority, IJCAI is the veteran among the many top conferences in the AI field.

Among the top international conferences in artificial intelligence, IJCAI is one of the oldest. First held in the United States in 1969, it took place every two years until becoming annual in 2015. This year happens to be its 50th anniversary.

IJCAI is a Class A conference on the China Computer Federation (CCF) recommended list of international academic conferences, and has long held an authoritative position in artificial intelligence. Its acceptance standards have always been relatively strict, with rates generally around 20%.

Seven Class A conferences in the AI field recognized by CCF

Looking at the list of winners of IJCAI's highest honor, the Award for Research Excellence, they are basically big names in the industry. For example:

In 1985: John McCarthy, the father of artificial intelligence and winner of the 1971 Turing Award. (Interestingly, IJCAI also has an award named after him, recognizing mid-career researchers.)

In 2005: Geoffrey E. Hinton, the father of neural networks and winner of the 2018 Turing Award.

In 2016: Michael I. Jordan, member of the US National Academy of Sciences and a leading figure in AI.

For the full list of past winners, search for it if you are interested; they are all revered AI luminaries.

In addition, a conference's sponsors reflect its level to some extent. Among this year's IJCAI sponsors are major technology companies, including Baidu, Huawei, Alibaba, and Tencent in China, and IBM Research AI, Sony, Bosch, Hitachi, and Microsoft abroad. The lineup is very strong.

IJCAI 2019 sponsor list (partial)

Solution: Use AI to review AI papers

In discussions of the IJCAI 2019 acceptance results, some people sighed: it seems we really are entering the era of AI for everyone.

In recent years, China has made rapid progress in AI, and a large share of the growing number of papers at AI conferences comes from China. Over the past 20 years, China has produced a total of 369,000 papers in artificial intelligence, ranking first in the world, ahead of the second-place United States and 3.8 times the output of the third-place United Kingdom.

It's no wonder that conferences like IJCAI complain that the flood of papers reduces the efficiency of the PC.

In response to this situation, researchers have launched AI tools intended to make funding review more efficient and unbiased.

Yesterday, an article in Nature reported that the National Natural Science Foundation of China (NSFC) is trialing an AI tool that selects researchers to review funding applications, making the process more effective, faster, and fairer.

Nature’s article published yesterday, address:
https://www.nature.com/articles/d41586-019-01517-8

The system uses natural language processing to crawl online scientific literature databases and scientists' personal web pages, collecting detailed information about potential reviewers' publications and research projects. It then applies semantic analysis to compare this text against funding applications and determine the best reviewer matches.
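The Nature report is high-level and NSFC's actual matching logic is not public, but the general idea can be sketched in a few lines: represent an application and each reviewer's profile text as word-count vectors and rank reviewers by cosine similarity. All names and profile strings below are made up for illustration.

```python
import math
from collections import Counter

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def match_reviewers(application: str, reviewer_profiles: dict) -> list:
    """Rank candidate reviewers by textual similarity to a funding application."""
    app_vec = Counter(application.lower().split())
    scores = {
        name: cosine_similarity(app_vec, Counter(profile.lower().split()))
        for name, profile in reviewer_profiles.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

profiles = {
    "Reviewer A": "deep learning neural networks image classification",
    "Reviewer B": "graph theory combinatorics discrete optimization",
}
application = "a neural network approach to image classification"
print(match_reviewers(application, profiles))  # Reviewer A ranks first
```

A production system would of course use richer semantic representations (embeddings rather than raw word counts) and handle conflicts of interest, but the ranking step has this basic shape.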

Some researchers say the method NSFC is using is world-leading, but they still doubt whether AI can really improve the process.

More top organizations are using AI

Last month, the Research Council of Norway began using natural language processing to group approximately 3,000 research proposals and match them with the best review panels.

Frontiers, an academic publisher based in Switzerland, assists reviewers and editors with its AI tool AIRA (Artificial Intelligence Review Assistant) to improve efficiency.

The AIRA system is built from in-house custom algorithms and industry-leading tools, such as Google's services, Crossref's iThenticate, and Editage's Ada.

AIRA currently performs two key peer-review tasks: quality control and reviewer identification. Its algorithms quickly and accurately evaluate submitted manuscripts against a set of quality indicators (including text overlap, language, the presence of human images, and other ethical considerations). Manuscripts that meet the established quality threshold are passed to the editor, while any potential problems are sent to the review team for further investigation.
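Frontiers has not published AIRA's internals, so purely as an illustration of the triage logic just described, here is a minimal rule-based sketch; the indicator names and thresholds are hypothetical.

```python
# Toy manuscript triage in the spirit of AIRA's quality-control step:
# check a few quality indicators, then route the manuscript either to
# the editor (clean) or to the review team (flagged problems).

def triage_manuscript(indicators: dict, max_overlap: float = 0.20) -> tuple:
    """Return ("editor", []) if all checks pass, else ("review_team", problems)."""
    problems = []
    if indicators.get("text_overlap", 0.0) > max_overlap:
        problems.append("text overlap above threshold")
    if not indicators.get("language_ok", True):
        problems.append("language quality below standard")
    if indicators.get("has_human_images", False) and \
            not indicators.get("ethics_approval", False):
        problems.append("human images without ethics approval")
    return ("editor", []) if not problems else ("review_team", problems)

# A clean manuscript goes straight to the editor.
print(triage_manuscript({"text_overlap": 0.05, "language_ok": True}))
# A high-overlap manuscript is flagged for the review team.
print(triage_manuscript({"text_overlap": 0.40}))
```

The real system presumably computes these indicators with NLP and image-analysis models; the routing decision at the end is the part sketched here.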

We hope that as AI tools join the reviewing workforce, reviewers will save time on other tasks, review papers carefully, and give every contributor a satisfactory answer.

IJCAI 2019 will be held in Macau, China, from August 10 to 16 this year, the second time the conference takes place in China after Beijing in 2013.

With three months still to go before the conference opens, it is already this lively; it should be quite interesting by then. So, do you want to attend?

HyperNeuropedia

Inductive bias

Inductive bias is a learning algorithm's preference among choices during induction; it corresponds to the algorithm's built-in assumption about "what kind of model is better."

Inductive bias can be seen as the learning algorithm's own heuristics or "values" for choosing hypotheses in a huge hypothesis space.

In practical problems, whether this assumption holds, that is, whether the algorithm's inductive bias matches the problem itself, usually directly determines whether the algorithm can achieve good performance.
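A toy example (our own, not from the entry above) makes this concrete: two learners trained on the same points but carrying different inductive biases generalize very differently outside the training data.

```python
# Two learners, two inductive biases, one tiny dataset of roughly y = x.

def fit_line(xs, ys):
    """Bias toward global linear structure: least-squares line y = a*x + b."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return lambda x: a * x + b

def fit_nearest(xs, ys):
    """Bias toward local similarity: predict the label of the nearest training x."""
    return lambda x: min(zip(xs, ys), key=lambda p: abs(p[0] - x))[1]

xs, ys = [0, 1, 2, 3], [0.0, 1.1, 1.9, 3.2]
linear, nearest = fit_line(xs, ys), fit_nearest(xs, ys)
print(round(linear(10), 1))   # extrapolates along the line, close to 10
print(nearest(10))            # sticks to the closest seen label: 3.2
```

If the true relationship really is linear, the first bias matches the problem and generalizes well; if the data were governed by local clusters instead, the second would win. Neither bias is "correct" in the abstract, which is exactly the point of the definition above.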
