This paper explores the use of Large Language Models (LLMs) for modeling real-world optimization problems. We formally define the task of translating natural language descriptions into optimization models (NL2OPT) and provide criteria for classifying optimization problems for this task. Our novel multi-agent modeling framework leverages relation-identifier agents and a multi-agent verification mechanism, eliminating the need for solver execution. We also introduce a simple, practical evaluation framework that offers a more effective assessment than traditional execution-based evaluations. In addition, we create a dataset tailored for optimization modeling, featuring Problem Specifications as a structured representation of optimization problems. Through comprehensive experiments, we compare our modeling framework with existing LLM reasoning strategies, highlighting their relative effectiveness on optimization modeling tasks, and we perform ablation studies to assess the contribution of each component of our framework. Experimental results demonstrate that our multi-agent framework outperforms many common LLM prompting strategies.