LLM Review Policy Violations in ICML

Piyush Kalsariya

Full-Stack Developer & AI Builder

March 19, 2026
6 min read

Introduction to ICML and LLM Review Policies

As a full-stack developer working with AI automation, I was intrigued by a recent blog post from the International Conference on Machine Learning (ICML) discussing violations of its Large Language Model (LLM) review policies. The post reported that roughly 2% of submitted papers were desk rejected because their authors had used LLMs during the review process, in violation of the conference's policies.

Understanding LLM Review Policies

The ICML review policies state that authors must not use LLMs to generate reviews or any part of their submissions. This policy exists to protect the integrity and originality of the research presented at the conference. As someone who works with LLMs in my development projects, I understand the temptation to use these powerful tools for tasks such as writing and reviewing. However, it's essential to respect the policies and guidelines set by academic and research institutions.

Why LLMs are Prohibited in Reviews

The use of LLMs in reviews can lead to several issues, including:

  • Lack of originality: LLMs can generate text that is similar to existing work, which can lead to accusations of plagiarism or lack of originality in the research.
  • Biased or inaccurate information: LLMs can perpetuate biases or inaccuracies present in the data they were trained on, which can compromise the validity of the research.
  • Over-reliance on automation: The use of LLMs in reviews can lead to over-reliance on automation, rather than human critical thinking and evaluation.

Implications for Researchers and Developers

The ICML's decision to desk reject papers that violate their LLM review policies has significant implications for researchers and developers. As someone who works with AI automation, I believe it's essential to be aware of these policies and ensure that we are using LLMs responsibly and ethically. This includes:

  • Transparently disclosing the use of LLMs: Researchers and developers should clearly disclose the use of LLMs in their work, including any assistance they received from these tools.
  • Ensuring human oversight and evaluation: It's crucial to have human oversight and evaluation in place to ensure that the use of LLMs does not compromise the integrity or validity of the research.
  • Developing guidelines and best practices: The development of guidelines and best practices for the use of LLMs in research and development is essential to ensure that these tools are used responsibly and ethically.
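The oversight and disclosure points above can be made concrete in code. Here is a minimal sketch (the `DisclosureLog` and `use_llm_output` names are hypothetical, not from any library) of gating LLM output behind explicit human approval while keeping an auditable log for later disclosure:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class DisclosureLog:
    """Auditable record of every LLM-assisted step, for later disclosure."""
    entries: list = field(default_factory=list)

    def record(self, tool: str, purpose: str) -> None:
        self.entries.append({"tool": tool, "purpose": purpose})

def use_llm_output(generated: str,
                   approve: Callable[[str], bool],
                   log: DisclosureLog,
                   tool: str = "t5-base") -> Optional[str]:
    # Log the assistance unconditionally, then only return the generated
    # text if a human reviewer explicitly approves it.
    log.record(tool, "draft text")
    return generated if approve(generated) else None

log = DisclosureLog()
# In practice `approve` would prompt a human; lambdas stand in here.
accepted = use_llm_output("Draft summary.", lambda text: True, log)
rejected = use_llm_output("Off-topic draft.", lambda text: False, log)
print(accepted, rejected, len(log.entries))
```

The design choice worth noting is that logging happens before the approval check, so the disclosure record captures every use of the tool, including drafts a human ultimately rejected.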

Example Code for Responsible LLM Use

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load a small open-source seq2seq model and its tokenizer
tokenizer = AutoTokenizer.from_pretrained('t5-base')
model = AutoModelForSeq2SeqLM.from_pretrained('t5-base')

# Define a function to generate text using the LLM
def generate_text(prompt):
    inputs = tokenizer(prompt, return_tensors='pt')
    outputs = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Use the function to generate a draft, keeping human oversight in place:
# a person reviews the output before it is used anywhere (T5 expects a
# task prefix such as 'summarize: ' in the prompt)
prompt = 'summarize: The paper discusses LLM review policies at ICML.'
generated_text = generate_text(prompt)
print(generated_text)
```

Conclusion

In conclusion, the ICML's decision to desk reject papers that violate their LLM review policies highlights the importance of responsible and ethical use of AI automation in research and development. As a full-stack developer working with LLMs, I believe it's essential to be aware of these policies and ensure that we are using these tools in a way that respects the integrity and validity of the research.

Tags
#ICML #LLM #AI Automation