How Can IJDS Authors, Reviewers, and Editors Use (and Misuse) Generative AI?

Galit Shmueli, Bianca Maria Colosimo, David Martens, Rema Padman, Maytal Saar-Tsechansky, Olivia R. Liu Sheng, W. Nick Street, Kwok-Leung Tsui

INFORMS Journal on Data Science. Published Online: 19 May 2023. https://doi.org/10.1287/ijds.2023.0007

Introduction by Editor-in-Chief (GS)

We have all been subjected to the latest pattern: a conversation on any topic, from data science to travel or food, ends up (with high probability) mentioning the ChatGPT artificial intelligence (AI) chatbot. OpenAI's latest free "toy" has captured the attention and imagination of so many people within and far beyond the data science community. Communities and individuals are expressing excitement, fear, optimism, dread, and curiosity about AI's ability to generate new forms of what we used to think of as human expression. Generative AI creates text, images, audio, and video at a quality that surpasses many of our expectations, and future versions will inevitably surpass current abilities.

As a data science researcher with a background in statistics, my notion of "generating data" or "synthetic data" was monopolized by the idea of creating data for the purpose of evaluating or improving the properties of a method. One example is simulation, where we generate data from an underlying distribution and evaluate the performance of a method on these data. We can then modify the simulation parameters to assess sensitivity to different aspects of the data.
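As a minimal sketch of this simulation workflow (the normal data-generating process, the sample mean as the method under study, and all parameter values are illustrative assumptions, not taken from any particular IJDS paper):

```python
import random
import statistics

def simulate(n, mu=5.0, sigma=2.0, reps=1000, seed=7):
    """Draw `reps` samples of size n from N(mu, sigma^2) and evaluate
    the sample mean as an estimator of mu on each generated sample."""
    rng = random.Random(seed)
    estimates = [
        statistics.fmean(rng.gauss(mu, sigma) for _ in range(n))
        for _ in range(reps)
    ]
    # Performance of the method on the generated data:
    bias = statistics.fmean(estimates) - mu
    se = statistics.stdev(estimates)
    return bias, se

# Modify a simulation parameter (here, the sample size n) to
# assess the method's sensitivity to that aspect of the data:
for n in (25, 100, 400):
    bias, se = simulate(n)
    print(f"n={n:4d}  bias={bias:+.4f}  se={se:.4f}")
```

Varying `n` (or `sigma`, or the distribution itself) is exactly the "modify the simulation parameters" step: here the standard error of the estimator should shrink roughly as the square root of the sample size grows.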
Simulation is extremely common in IJDS papers for evaluating the performance of a proposed methodology and its sensitivity to different manipulable factors. A second example is data imputation, where missing data are "filled in" using different approaches. A third example is resampling methods, where multiple samples are randomly drawn from a data set. Popular resampling methods include the bootstrap and cross-validation: the bootstrap is useful for estimating the distribution of an estimator without making parametric assumptions, and cross-validation is useful for evaluating out-of-sample performance.

What is common to all these "data generation" methods is, first, the end goal: we want our data science method and its implications to generalize from a sample to a population, to new observations, or to other data contexts. A second commonality is the type of data generated: quantitative (mostly numerical) data. Third, we generate the data by writing computer code. In short, we create data that we know how to analyze, using coding skills, so that we can expand our understanding of a model or method.

Yet generative artificial intelligence seems different. First, it creates "unstructured data" such as text, images, audio, and video, and the input can be human language (e.g., text prompts). Second, current applications such as OpenAI's ChatGPT (based on a large language model (LLM)) and DALL-E (for creating images) are already used by the public for purposes other than methodological or applied research. Third, rather than generating a sample of observations, the tools are often used to generate a single "data point" (a response to a question, a single "conversation," or a single image).
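As an illustration of the resampling idea described above, the bootstrap can be sketched in a few lines of Python using only the standard library (the sample data and the choice of the median as the statistic are arbitrary assumptions made for this sketch):

```python
import random
import statistics

def bootstrap(data, stat, n_boot=2000, seed=0):
    """Approximate the sampling distribution of `stat` by repeatedly
    resampling `data` with replacement -- no parametric assumptions."""
    rng = random.Random(seed)
    return [stat(rng.choices(data, k=len(data))) for _ in range(n_boot)]

sample = [2.1, 3.4, 1.8, 4.2, 2.9, 3.7, 2.5, 3.1, 4.0, 2.2]
reps = bootstrap(sample, statistics.median)

# The spread of the bootstrap replicates estimates the standard error
# of the median without assuming any underlying distribution.
print("bootstrap SE of the median:", round(statistics.stdev(reps), 3))
```

The same resampling-without-replacement idea, applied to held-out partitions rather than redrawn samples, underlies cross-validation's estimate of out-of-sample performance.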
(Are you wondering why Monte Carlo simulation software is not so hot among the general public?)

Although we expect a burst of new data science research on generative AI, as well as on its impact on society, business, and industry, in this editorial we decided to explore how IJDS authors, reviewers, editorial members, and management might use (or abuse) these new applications. For this purpose, the entire group of IJDS senior editors put their heads together and proposed several uses, challenges, and general recommendations. Some of these uses are already possible with current tools, whereas others are speculative, awaiting the improved capabilities of future LLMs. We came up with the following.

Opportunities for Using Generative AI in IJDS Submissions

Authors

Authors generate, consume, and interact with text and image data of different forms. They write their manuscript in English and LaTeX, create and test computer code, read literature and other resources in written form, generate images for their manuscript, and perhaps even analyze text, images, or other nonnumerical information as their data.

Authors can use text-generation tools as a writing assistant while drafting their manuscript, much like a professional human editor. Such uses can include language editing, translation, customizing the readability level to the IJDS audience, suggesting paper structure, and helping create a first rough draft. The summarization capabilities of tools such as ChatGPT can be used for shortening the manuscript to meet length requirements (Dwivedi et al. 2023, p. 32). They can also generate ideas for a suitable title and keywords the authors can consider.
Summarization can also be useful for helping craft a succinct but sufficiently informative abstract within the journal's word limit.

As a creative assistant, image-generation tools can be used for creating visuals such as illustrative diagrams, replacing current uses of paid design services.

As most IJDS papers include code that implements the proposed methodology, generative AI tools can support authors as coding assistants, helping during code writing and in validating human-written code. Code assistance can be provided via prompts to an application such as ChatGPT, code editors, or integrated development environments. For example, ChatGPT can be used for debugging and writing LaTeX code: a sometimes daunting and frustrating task. Another example is GitHub Copilot by Microsoft (https://github.com/features/copilot), which can "suggest code and entire functions in real-time, right from your editor" and can use natural language prompts.

Authors can use LLM-based tools as a prereviewing assistant for reviewing their manuscript before submission. Such an assistant might help detect unclear arguments, suggest missing references, and who knows what else!

Authors proposing new methodologies suitable for nonnumerical data (e.g., text or images) could use generative AI tools for generating data, which can then be used, similar to numerical simulated data, for evaluating the methodology under different assumptions and conditions and for providing reproducible illustrations (especially when the real data cannot be disclosed or shared).

Reviewers

Reviewers provide invaluable feedback on submitted manuscripts.
Reviewers interact with and generate written text by reading the manuscript, related documents, and communications from the editors, writing a referee report, and sometimes examining submitted code.

Reviewing consumes significant time and energy, and we have therefore tried to make the reviewing process more efficient by creating a structured reviewer form with five questions (https://pubsonline.informs.org/page/ijds/reviewer-guidelines). Generative AI tools hold the promise of further improving the efficiency, rigor, and effectiveness of creating referee reports and communicating the feedback to authors. Like authors, reviewers can use the language and summarization capabilities of LLM-based tools for crafting their referee report: editing their answers to the five questions, clarifying and prioritizing their further comments and suggestions, and improving the report's communication style so that authors perceive it as professional and constructive.

Using text-generation tools as a writing assistant can also help reviewers better mask their identity (IJDS uses a single-blind system where reviewers and associate editors (AEs) are anonymous to authors).

Reviewers can use LLMs as a reading assistant by summarizing the main points of different parts of the paper. Many reviewers first skim a paper before reading it in depth, to understand its structure and overall approach. Section-by-section summaries by an LLM can help identify the main points during such a first browse.

Reviewers often point authors to key references that are missing, or to other resources that can strengthen the paper's contribution (e.g., a data set the authors should apply their method to). To do this, the reviewer must identify the exact reference or the resource's location.
Although tools such as Google Scholar are powerful for locating references to share with authors, LLMs can further help the reviewer identify additional relevant references or resources that they might have forgotten, or even suggest ones the reviewer was not aware of (in which case the reviewer should carefully check the proposed references or resources before recommending them to the authors!).

For submissions where authors use the output from LLMs as part of their data, analysis, or arguments, reviewers can use LLMs as a verification assistant to validate these outputs. To allow this kind of "reproducibility check," authors must provide the exact system specifications and the prompt used for generating the output.

When reviewing submitted code, reviewers can use LLMs as a coding assistant for understanding the purpose of some part of the code (if not clearly written or documented), and even for modifying the code to evaluate some aspect.

Finally, at the IJDS, authors and reviewers are asked to consider implications of the proposed methodology. Given that data science ethics is a relatively new and fast-changing area, LLMs can help reviewers identify potential ethical issues that were not considered by the authors (prompt: "what are possible ethical issues with a method that proposes…"). Reviewers using LLMs for such purposes should disclose to the authors their use of an LLM and the exact system specifications and prompt used.

Editors

The IJDS editorial board includes the editor-in-chief (EIC), senior editors (SEs), AEs, the managing editor, and the reproducibility editor and team. They all encounter and generate text of some form as part of the review process and other journal-related tasks.

AEs at the IJDS have the critical task of guiding authors through the expert review team's feedback.
The AE reads the manuscript and the referee reports (including the report from the reproducibility team) and creates an AE report that summarizes the key points, guiding the authors on the way forward (or explaining a rejection recommendation). It is critical that authors understand the AE report, so they can follow the guidance toward successful publication (or, if rejected, so they can understand the reasons and how they might move forward with another outlet). The AE can therefore benefit from the language and summarization capabilities of LLM tools for integrating the key points from the referee reports into the AE report (an integrating assistant). However, such a summary is by no means an appropriate AE report! (This is a critical point we share with IJDS AEs during onboarding and training.) The AE must use human expert judgment to carefully guide the authors by prioritizing which recommendations the authors should follow, and in what form. Hence, the AE should carefully scrutinize the "integration assistance" by the LLM, and substantially inject their expertise and active guidance to arrive at a useful AE report.

Like reviewers, AEs can use LLMs for improving their report in terms of language, readability, and professional communication style. Such editing would also better mask the AE's identity.

AEs perform another critical role: selecting and inviting reviewers for a manuscript. Identifying and recruiting a sufficient number of suitable reviewers is a difficult task. Although AEs have a good network of experts, and the journal's database includes potential reviewers who can be searched by expertise keywords, LLMs can further help AEs expand their network and identify new potential reviewers based on adequate text prompts.
This expansion is likely much broader than the limits of our current community and can also help further diversify our reviewers by specifically directing the search toward geographies, genders, or other groupings outside the AE's familiarity.

SEs at the IJDS guide the reviewing process and make decisions about manuscripts. They communicate with the EIC and AEs and, importantly, convey decisions, explanations, and guidance to authors. The SEs are not anonymous. SEs can also use LLMs as writing assistants for writing and personalizing their decision letters. Using appropriate prompts, they can make sure their report conveys sufficient information to the authors about the strengths and weaknesses of the manuscript and about the SE's expectations of a revision. SEs also read the submissions, and thus they can use LLMs as reading assistants, helping to more efficiently identify the contribution for the purpose of making decisions and selecting suitable AEs. Like AEs, the SE can benefit from the summarization capabilities of LLMs for summarizing the review team's reports.

The EIC encounters and generates text through a variety of roles: written communications with authors and editorial board members regarding submissions, and with external entities for initiating new efforts, special issues, collaborations with INFORMS and other societies and conference organizers, commissioning of papers, and so on. The EIC produces writing for a variety of avenues, including decision letters, editorials, journal processes and guidelines (which appear on the journal website), and presentations and agendas for editorial meetings and outreach activities. The EIC also consumes text by reading submissions, referee reports, and the AE and SE reports. The EIC can thus benefit from using an LLM as a writing assistant for communications and as a reading assistant, for example, for summarizing incoming submissions to determine suitability and further handling.
LLMs could help the EIC identify new relevant audiences (e.g., conferences) to introduce to IJDS. Summarization of relevant conference programs might help identify new topics and papers to pursue.

Finally, the EIC and SEs can also use LLMs as brainstorming assistants for identifying topics and titles for special issues, determining maturing domains that lack surveys, and identifying potential associate editors from a more diverse pool. Table 1 summarizes the different types of LLM-based assistants that can assist IJDS authors, reviewers, and editors.

Table 1. Types of LLM-Based Assistants for IJDS Authors, Reviewers, and Editors

IJDS role   Type of LLM assistance
Author      Writing assistant, coding assistant, creative assistant, prereviewing assistant
Reviewer    Reading assistant, writing assistant, verification assistant, coding assistant
Editor      Reading assistant, writing assistant, integrating assistant, brainstorming assistant

Other IJDS Task Forces: Reproducibility, Media, and Management

The IJDS reproducibility team runs a check on each code submission to make sure it is reproducible by reviewers and future readers. Although most submissions share their code via CodeOcean (https://codeocean.com), some software is not supported by CodeOcean. In such cases, the reproducibility team might use LLMs as coding assistants, helping identify potential reproducibility problems and providing suggestions to authors.

The IJDS media outreach team has been sharing information on social media, producing interviews with IJDS editors and authors, and posting about new IJDS articles that appear on the journal website. Generative AI can assist the team in creating visuals, customizing interview scripts, and summarizing the contribution of new articles to fit social media promotion formats.

The submission management system (ScholarOne) has many complex workflows and automated emails that need constant updating.
These automated emails can be improved using LLMs. Manual checks of incoming submissions could potentially also benefit from LLM check-in assistance that might point out undetected issues to the managing editor. Ideally, all these tasks will be significantly enhanced once such information systems have a ChatGPT-type interface.

What Else?

After the coauthors of this editorial shared their ideas, the EIC probed ChatGPT (using GPT-4) about how ChatGPT might be useful to an author, reviewer, and editor at a data science journal. The prompts and results are shown in Figures 1–3. We found that the answers covered many of our suggestions, but not all (we humans still had some original ideas!), and some items were irrelevant to IJDS-type submissions. The LLM provided a useful additional team member for our brainstorming!

Figure 1. (Color online) ChatGPT's Response for How It Might Be Useful for Authoring a Data Science Paper
Figure 2. (Color online) ChatGPT's Response for How It Might Be Useful for Reviewing a Submission to a Scientific Journal
Figure 3. (Color online) ChatGPT's Response for How It Might Be Useful to an Editor of a Data Science Journal

Dilemmas and Challenges

Since its free launch in November 2022, reports of ChatGPT's challenges have been appearing frequently in the general media and in the scientific literature. These include issues with privacy and data governance, transparency, accuracy ("AI hallucinations"), accountability, and absence of originality ("stochastic parrot"). In the context of scientific publications, journal editors have voiced a range of sentiments regarding authors' use of LLMs: from total restriction to strong embrace. Multiple large publishers have already listed policies regarding authors' use of LLMs and whether an LLM can be listed as a coauthor (see, e.g., table 4 in Dwivedi et al. 2023).

In the following, we focus on a few challenges and dilemmas that arise when authors, reviewers, or editors use LLMs in IJDS submissions.

When Does Writing Assistance Become a "Coauthor"?

Language editing has long been recognized as a standard presubmission task for most authors. Yet there are different views on the extent of writing improvement that would make the editing provider a coauthor. This ethical question quickly became a serious issue as early users of ChatGPT published papers with ChatGPT as a coauthor (e.g., King and ChatGPT 2023). To date, the consensus across many publishers is that an LLM cannot be listed as an author or coauthor, because authors are accountable for their papers. At the time of writing, INFORMS journals do not have a single code of practice regarding LLMs, yet at the IJDS, we currently take the position that an LLM should be considered an assistant and not a coauthor. Our reasoning is that LLMs create a real authorship challenge in fields where the writing itself is the contribution. At the IJDS, however, language is just an intermediary for conveying and communicating a data science methodology, data, and its decision-making context. For this reason, data science journals have typically encouraged authors to hire professional editing services or use tools such as the AI-based Grammarly.1 In fact, some journals even offer authors paid editing services.2

Will Improved Writing Cover up Methodological Flaws?

A possible concern relates to how reviewers and editors might be tempted to skip critical reading and deep thinking when a manuscript was made to "look" good by using an LLM. In other words, the packaging would hide the content from proper review. From our current experience at the IJDS, this is not likely to occur, because well-written manuscripts make it easier to identify important issues, thereby requiring fewer review rounds to a final decision than difficult-to-read manuscripts.
Moreover, the ability to produce well-written articles has until now given an advantage to native English speakers who are good writers and to authors with funding for writing assistance. The use of LLMs as writing assistants can democratize writing skills by making them more broadly accessible. Similar advantages arise in the generation of "nice looking" diagrams, which no longer requires artistic talent or funding for outsourcing the task.

LLM Availability

We note that a current practical challenge of using tools such as ChatGPT is their availability. ChatGPT is currently banned in some countries (Italy) and unavailable in others3 (e.g., China, Egypt, Iran, Russia, Syria), and the software will be unavailable to authors who are unable to create a free account. Already, OpenAI's freemium model for ChatGPT creates inequalities (as of this writing, the $20/month ChatGPT Plus version gives access to GPT-4, whereas the free version uses the less advanced GPT-3.5).

Will Use of LLMs by Reviewers Lead to Inferior Referee Reports?

Given the IJDS structured reviewer form, it is possible that reviewers will try to use LLMs to answer the five questions. Reviewers are expected to use LLMs to supplement their own knowledge and to trigger deeper investigations (as described in the previous section). Dealing with low-quality reviews is not new to the reviewing system, which is why the AE's role in identifying reviewers and evaluating their reports is so important. Low-quality reviews reflect on reviewers and affect their reputation in the community (although reviewers are anonymous to the authors, they are known to the editors). Hence, we do not foresee a decline in the quality of referee reports, although IJDS editors will need to be more vigilant.

A more likely challenge related to reviews is the following: human reviewers typically start their report with a brief summary of the paper and its contribution, so authors can determine whether their main message was perceived correctly by the reviewer.
However, if a reviewer uses an LLM tool to write this summary, this human-to-human signaling mechanism becomes moot. Moreover, an incorrect LLM summary can cause the reviewer to misunderstand the paper's contribution. We therefore strongly recommend that reviewers manually create the initial draft of this important summary and then use LLMs to polish the language (if needed).

Accuracy

Current LLMs have been shown to produce incorrect (and even "imaginary") output, stated with as much certainty as correct output (Ji et al. 2023). This applies to answers to questions, text summarization, software code, results of computations, and more. Although LLMs promise to be more efficient than search engines or Google Scholar for citing references and identifying missing literature, some LLMs such as ChatGPT currently suffer from "AI hallucinations," which include inventing nonexistent references.

These accuracy issues are a known problem, and several have already been mitigated in recent versions of ChatGPT (Bubeck et al. 2023). Although we anticipate these issues will be further mitigated in future versions, until then, we strongly recommend that authors and reviewers meticulously examine and double-check any output by LLMs.

Sensitive Information and Privacy

Authors who insert confidential ideas, methods, or code into a tool such as ChatGPT might effectively be giving away the information, as recently occurred in a case involving Samsung.4 Authors should take special care not to insert personal or sensitive data about data subjects without making sure their data are not used by the LLM.
For example, OpenAI's policy states that they use data entered by ChatGPT users but not data from application programming interface (API) users.5 Not only might entered information be available to the hosting company (e.g., OpenAI), but it can potentially also be probed by other users.

Gaming the System

We have assumed thus far that the humans involved are acting in good faith, using LLMs to improve their performance and output. However, malicious intent and abuse are certainly possible. One such possibility is authors who learn to strategically use LLMs to optimize their submission's chance of acceptance, not based on true contribution but rather by identifying language, structure, and other factors that mislead the review team. Although such tactics might work in some areas, we believe it is more difficult to do so for IJDS-type submissions, given the multiple sets of independent, diverse, and expert eyes on each submission, including a reproducibility check of submitted code.

Accountability

A major challenge with the use of LLMs is accountability: Is the LLM to be blamed for an incorrect summary of reviewers' feedback? Can we blame the LLM's training data for a discriminatory editorial letter? Can the authors blame OpenAI for fictitious references? In a recent article about AI liability in The Hill,6 Vasant Dhar from New York University wrote the following:

"There's a simple rule from the physical world that seems useful in controlling the risks of AI, namely, a credible threat. If you knowingly release a product that is risky for society, like a toxic effluent into the environment, you are liable for damages. The same must apply to AI."

Our approach at the IJDS is that the output of these tools, when included in an IJDS submission or evaluation, should be subject to careful human scrutiny. The author, reviewer, editor, and journal administrator all remain accountable for any materials and communications they produce.
In other words, we treat these new applications as decision-support systems (currently with many faults) while maintaining our role as human decision makers. All the uses mentioned in the previous section take this stance. This approach also means that acknowledging the use of, say, ChatGPT is not needed if an author uses it to improve grammar or if an editor uses it to polish the language of a report. Even without explicitly using a tool such as ChatGPT, LLMs are being integrated into many productivity applications such as Microsoft Word and Google Docs and are also being used to write software programs and produce models. These applications will be widely used by authors to write papers and conduct research, thereby becoming part of routine practice and unlikely to raise red flags. In conclusion, our position is that no attribution is needed as long as the human scrutinizes the LLM's result and as long as the LLM itself did not propose the new data science methodology (prompt: "create a new data science methodology that addresses an important decision making problem in business, evaluate its performance on real data, and assess its practical and ethical implications").

Conclusions

At the IJDS, we are enthusiastic about the possibilities of new LLMs for improving the quality of submissions and making the review process more effective and efficient. We mentioned uses of an LLM as a reading assistant, writing assistant, coding assistant, creative assistant, integrating assistant, prereviewing assistant, verification assistant, and brainstorming assistant. Other useful assistant roles might also appear. Yet, we are aware that "unknown unknowns" will likely surprise us (as they will other data science journals), and our editorial board is prepared to discuss and tackle issues that appear.

Although effective use of LLMs for journal submissions, reviewing, and management will take time to evolve, early adopters will likely already experiment with different uses.
Recent research has shown that "presenting humans with arguments for two competing answer options, where one is correct and the other is incorrect, allows human judges to perform more accurately, even when one of the arguments is unreliable and deceptive" (Parrish et al. 2022). Although we encourage experimentation, we remind authors, reviewers, and editors that current LLMs often fail without showing a trace of uncertainty. The choice of prompt has already been shown to distinguish between correct and incorrect answers, prompting the new field of "prompt engineering."

As mentioned, multiple publishers have already come up with restrictions for using and attributing LLMs, and at the time of writing, INFORMS journals do not yet have such a code of practice. We anticipate that such a code of practice will appear and evolve as our journals start experiencing and identifying relevant challenges. For IJDS papers, which focus on novel data science methodology for decision making, we expect the humans involved will use generative AI mostly for communication, evaluation, and dissemination of knowledge. This can help equalize submission barriers related to language proficiency; lower costs and speed up editing and the creation of visuals; enrich authors' and reviewers' perspectives; improve communications; make the reviewing process more effective and efficient; increase the visibility and readership of IJDS publications; and hopefully free up our most precious and limited resource: time. Whether and how LLMs will help directly accelerate the development of novel data science methodology for decision making is an open question.

Although our focus in this editorial has been on using LLMs for authoring and reviewing IJDS manuscripts, we of course anticipate a spike in LLM-based research. We encourage submissions that look into the design, development, and use of such tools, as well as their ethical considerations and implications.
From this research perspective, editors and reviewers will need to update their knowledge and their ability to handle submissions of LLM-based research artifacts, supported by IJDS-hosted workshops and training sessions. Submissions to IJDS may use LLMs as objects of study or as data-generating systems, examining the face validity, provenance, statistical properties, and so on, of the resulting data. LLMs may also be applied to study consequential sectors and domains such as healthcare, where researchers want to examine the efficiency, effectiveness, safety, and quality of healthcare systems and how workflows change when such solutions are deployed in real-life settings. Safe, responsible, and meaningful ways to use this technology are currently being debated, and IJDS policies will evolve with them.

We end this editorial by inviting readers to share their thoughts and feedback: Do you have further ideas of how authors, reviewers, editors, and the IJDS could benefit from using generative AI? What do you think about the suggested uses? Please feel free to comment on social media or email the EIC.

Endnotes

1 See https://www.grammarly.com.
2 Examples include Taylor & Francis and Wiley: https://authorservices.taylorandfrancis.com/publishing-your-research/writing-your-paper/editing-services-improve-your-manuscript/ and https://onlinelibrary.wiley.com/page/journal/15405915/homepage/forauthors.html (accessed April 21, 2023).
3 See https://platform.openai.com/docs/supported-countries (accessed April 21, 2023).
4 See https://mashable.com/article/samsung-chatgpt-leak-details (accessed April 21, 2023).
5 See https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance (accessed April 21, 2023).
6 See https://thehill.com/opinion/cybersecurity/3938554-why-its-time-to-safeguard-against-ai-liability/ (accessed April 21, 2023).

References

Bubeck S, Chandrasekaran V, Eldan R, Gehrke J, Horvitz E, Kamar E, Lee P, et al. (2023) Sparks of artificial general intelligence: Early experiments with GPT-4. Preprint, submitted March 22, https://arxiv.org/abs/2303.12712.
Dwivedi YK, Kshetri N, Hughes L, Slade EL, Jeyaraj A, Kar AK, Baabdullah AM, et al. (2023) "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Internat. J. Inform. Management 71:102642.
Ji Z, Lee N, Frieske R, Yu T, Su D, Xu Y, Ishii E, et al. (2023) Survey of hallucination in natural language generation. ACM Comput. Surveys 55(12):1–38.
King MR, ChatGPT (2023) A conversation on artificial intelligence, chatbots, and plagiarism in higher education. Cellular Molecular Bioengrg. 16(1):1–2.
Parrish A, Trivedi H, Nangia N, Phang J, Padmakumar V, Saimbhi AS, Bowman SR (2022) Single-turn debate does not help humans answer hard reading-comprehension questions. Preprint, submitted April 11, https://arxiv.org/abs/2210.10860.

Copyright © 2023, INFORMS.