Generative artificial intelligence (AI) is becoming an increasingly powerful tool in the workplace. At science organizations like national laboratories, its use has the potential to accelerate scientific discovery in critical areas.
A recent study by the University of Chicago and the U.S. Department of Energy’s Argonne National Laboratory provides one of the first real-world examinations of generative AI tools — specifically large language models (LLMs) — within a national lab setting.
Through surveys and interviews, the researchers studied how Argonne employees are already using LLMs and how they envision using them in the future. The study also tracked the early adoption of Argo, the lab’s internal LLM interface.
Based on their analysis, the researchers recommend ways organizations can support effective use of generative AI while addressing associated risks in areas such as privacy, security and transparency.
Argonne and Argo — A case study
Argonne’s workforce includes science and engineering staff as well as operations workers in areas like human resources, facilities and finance. Argonne employees also regularly work with sensitive data.
While the study focused on a national laboratory, many of its findings may extend to other organizations, such as universities, law firms and banks, that have similarly varied user needs and comparable cybersecurity challenges.
In 2024, the lab launched Argo, which gives employees secure access to LLMs from OpenAI through an internal interface. Argo doesn’t store or share user data, which makes it a more secure alternative to ChatGPT and other commercial tools.
Argo was the first internal generative AI interface to be deployed at a national laboratory. For several months after Argo’s launch, the researchers tracked its use across the lab. Analysis revealed a small but growing user base.
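Argo’s internal design has not been published, but the pattern the article describes, an organization-run gateway that brokers employee access to a commercial LLM without retaining user prompts, can be sketched in a few lines. The endpoint path, model name and environment variable below are illustrative assumptions, not details of Argo itself.

```python
# Minimal sketch of an internal LLM gateway in the style the article
# describes: employees call an internal endpoint, which forwards the
# request to an upstream provider without persisting prompt contents.
# Endpoint path, model name and env var are illustrative assumptions.
import os
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
UPSTREAM_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["UPSTREAM_API_KEY"]  # held server-side, never shown to users

@app.post("/v1/chat")
def chat():
    payload = {
        "model": "gpt-4o",  # assumed model; Argo's actual backing models may differ
        "messages": request.get_json()["messages"],
    }
    resp = requests.post(
        UPSTREAM_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=60,
    )
    # The key privacy property: the completion is returned to the user
    # without message contents ever being logged or stored by the gateway.
    return jsonify(resp.json())

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

In this arrangement, the provider’s credentials and the organization’s data-handling policy live in one controlled place, which is what makes such a gateway a more secure alternative to employees using commercial tools directly.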
Collaborating and automating with AI
The researchers found that employees used generative AI in two main ways: as a copilot and as a workflow agent. As a copilot, the AI works alongside the user, helping with tasks like writing code, structuring text or tweaking the tone of an email. For now, employees stick to tasks where they can easily check the AI’s work. Looking ahead, they envision using copilots to extract insights from large bodies of text, such as scientific literature or survey data.
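As a concrete illustration of the copilot pattern, here is a hedged sketch of one task the article mentions, adjusting the tone of an email, using the standard OpenAI Python client. The internal base URL, credential and model name are placeholders, not details from the study.

```python
# Sketch of "copilot" use: asking an LLM to adjust the tone of a short
# email, a task whose output a human can easily verify before acting on.
# The base_url pointing at an internal gateway is an assumption for
# illustration; the OpenAI client API itself is real.
from openai import OpenAI

client = OpenAI(
    base_url="https://argo.example.internal/v1",  # hypothetical internal endpoint
    api_key="internal-token",                     # placeholder credential
)

draft = "Send me the report by Friday. Don't be late again."
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {"role": "system", "content": "Rewrite emails in a polite, professional tone."},
        {"role": "user", "content": draft},
    ],
)
print(response.choices[0].message.content)  # the human reviews before sending
```

The final print is the point: in copilot use, the person stays in the loop and checks the model’s output before it goes anywhere.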
As a workflow agent, AI is used to automate complex tasks, which it performs mostly on its own. For example, operations workers reported using AI to automate processes like searching databases or tracking projects. Scientists reported automating workflows for processing, analyzing and visualizing data.
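To make the workflow-agent pattern concrete, the sketch below shows one of the automated tasks the article mentions, searching a database: the model translates a natural-language request into a SQL query, which the script then runs without further human input. The table schema, endpoint and model name are illustrative assumptions, not details from the study.

```python
# Hedged sketch of the "workflow agent" pattern: the LLM drafts a SQL
# query from a plain-English question, and the code executes it
# automatically. Schema, endpoint and model name are invented.
import sqlite3
from openai import OpenAI

client = OpenAI(
    base_url="https://argo.example.internal/v1",  # hypothetical internal endpoint
    api_key="internal-token",                     # placeholder credential
)

SCHEMA = "projects(id INTEGER, name TEXT, status TEXT, due_date TEXT)"

def ask_database(question: str, conn: sqlite3.Connection) -> list:
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[
            {"role": "system",
             "content": f"Write a single SQLite SELECT query for schema: {SCHEMA}. "
                        "Reply with SQL only."},
            {"role": "user", "content": question},
        ],
    )
    # Crude cleanup of possible code fences; production code would
    # validate the generated SQL before executing it.
    sql = reply.choices[0].message.content.strip().strip("`")
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE projects(id INTEGER, name TEXT, status TEXT, due_date TEXT)")
conn.execute("INSERT INTO projects VALUES (1, 'Argo rollout', 'open', '2024-12-01')")
print(ask_database("Which projects are still open?", conn))
```

Because the model’s output is executed without a human check, this mode carries exactly the reliability and security stakes the researchers flag in the next section.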
Expanding possibilities while mitigating risks
While generative AI presents exciting opportunities, the researchers also emphasize the importance of thoughtful integration of these tools to manage organizational risks and address employee concerns.
The study found that employees were particularly concerned about the reliability of generative AI, data privacy and security, overreliance, potential impacts on hiring and implications for scientific publishing and citation.
To promote the appropriate use of generative AI, the researchers recommend that organizations proactively manage security risks, set clear policies and offer employee training.
Contacts
Christopher J. Kramer
Head of Media Relations
Argonne National Laboratory
Office: 630.252.5580
Email: media@anl.gov