LLM Review Policy Violations in ICML Papers

Piyush Kalsariya

Full-Stack Developer & AI Builder

March 19, 2026
6 min read

Introduction to ICML and LLM Review Policies

The International Conference on Machine Learning (ICML) is one of the premier conferences in the field of machine learning, attracting thousands of submissions every year. As a full-stack developer working with AI automation, I was intrigued by a recent blog post from ICML discussing violations of their Large Language Model (LLM) review policies.

What are LLM Review Policies?

LLM review policies are guidelines set by conferences like ICML to regulate the use of AI models, such as large language models, in the submission and review process. These policies restrict undisclosed AI-generated text in submissions and reviews, with the goal of preserving the integrity and originality of the research.

Reasons Behind the Violations

According to the ICML blog post, 2% of submitted papers were desk rejected due to violations of the LLM review policies. The main reasons behind these violations include:

  • Lack of awareness: Many authors were not aware of the LLM review policies or did not understand the guidelines correctly.
  • Misuse of AI tools: Some authors intentionally used AI-generated text in their submissions, violating the policies.
  • Inadequate disclosure: A few authors failed to disclose their use of AI tools in the submission process.

Consequences of LLM Review Policy Violations

The consequences of violating LLM review policies can be severe, including:

  • Desk rejection: Papers that violate the policies may be rejected without review.
  • Loss of credibility: Authors who violate the policies may damage their reputation and credibility in the research community.
  • Waste of resources: Violations can lead to a waste of resources, including time and effort spent on reviewing and processing submissions.

Best Practices for Using AI in Research

As a developer working with AI automation, I believe it is essential to use AI tools responsibly and ethically. Here are some best practices for using AI in research:

  • Understand the guidelines: Familiarize yourself with the LLM review policies and guidelines set by the conference or journal.
  • Disclose AI use: Clearly disclose the use of AI tools in your submission, including the specific tools and models used.
  • Use AI as a tool, not a substitute: Use AI tools to assist with tasks such as proofreading, formatting, and data analysis, but do not rely solely on AI-generated text.
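As a sketch of the disclosure point above — hedged, since every venue has its own template and requirements — one lightweight habit is to keep a structured log of AI tool usage during a project, which can then be summarized in the submission's disclosure statement. All of the names below (tools, model versions, purposes) are illustrative, not taken from any real conference policy:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AIUsageRecord:
    """One entry in an AI-use disclosure log (fields are illustrative)."""
    tool: str                                     # product name, e.g. a grammar checker
    model: str                                    # underlying model version, if known
    purpose: str                                  # what the tool was used for
    sections: list = field(default_factory=list)  # affected parts of the paper

# Hypothetical log for a single submission
log = [
    AIUsageRecord(tool="Grammar checker", model="example-model-v1",
                  purpose="proofreading", sections=["Introduction"]),
    AIUsageRecord(tool="Code assistant", model="example-model-v2",
                  purpose="refactoring analysis scripts", sections=["Experiments"]),
]

# Serialize the log so it can be attached to, or summarized in, a disclosure statement
print(json.dumps([asdict(r) for r in log], indent=2))
```

Keeping the log as data rather than prose makes it easy to answer a venue's disclosure questions accurately after the fact, instead of reconstructing usage from memory at submission time.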

Example Code for AI-Assisted Research

Here is an example of the kind of analysis script I draft with AI assistance and then review line by line — a simple Python script for training and evaluating a classifier:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Load the dataset and separate features from the target column
data = pd.read_csv('data.csv')
X = data.drop('target', axis=1)
y = data['target']

# Hold out 20% of the data for evaluation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train a random forest and report accuracy on the held-out set
model = RandomForestClassifier(n_estimators=100)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print('Accuracy:', accuracy_score(y_test, y_pred))
```

In this example, an AI assistant may help draft the boilerplate, but I review and test every line myself — the AI is a tool assisting the analysis, not a substitute for my own work or writing.

Conclusion

In conclusion, the violation of LLM review policies is a serious issue that can have significant consequences for authors and the research community. As a developer working with AI automation, I believe it is essential to use AI tools responsibly and ethically, following the guidelines and best practices set by conferences and journals. By doing so, we can ensure the integrity and originality of research, while also harnessing the power of AI to accelerate innovation and discovery.

Tags
#ICML #LLM #AI-Ethics