News Release

AI cuts wildlife tracking time from months to days

Peer-Reviewed Publication

Washington State University

Image: SpeciesNet's AI prediction can be seen on an image of a lynx.

Credit: Mammal Spatial Ecology and Conservation Lab

PULLMAN, Wash. — Artificial intelligence can dramatically speed up the painstaking work of tracking wildlife with remote cameras, cutting analysis time from months or even a year to just days while producing nearly the same scientific conclusions as humans.

That’s according to a new study led by researchers at Washington State University and Google, published in the Journal of Applied Ecology. The team tested whether a fully automated AI system could replace humans in processing hundreds of thousands to millions of camera trap images collected in Washington, Montana’s Glacier National Park, and Guatemala’s Maya Biosphere Reserve.

They found that, for most species, models built from AI-identified images closely matched those produced by human experts. Across key measures such as where animals occur and what environmental factors influence them, the results aligned in roughly 85–90% of cases, with limited divergence for rare or difficult-to-identify species.

The implications could be significant for conservation. Faster processing means researchers and wildlife managers can move more quickly from collecting data to making decisions, potentially enabling near real-time monitoring of species such as jaguars, wolves, and grizzly bears.

“We’re not trying to replace people,” said WSU wildlife ecologist Daniel Thornton, lead author of the study. “The goal is to help researchers get to answers faster so they can make better decisions about managing and conserving wildlife.”

Traditionally, that process has been slow and labor-intensive. Camera traps, which are motion-activated cameras placed in forests and other habitats, can generate enormous datasets. A single project may produce hundreds of thousands or even millions of images that must be reviewed to determine which species appear in each frame.

Even with a team of undergraduate assistants and a graduate student verifying identifications, Thornton said the process typically takes six to seven months, and sometimes up to a year, before analysis can begin.

Early AI tools offered some relief by filtering out blank images, often 60–70% of the total, but still required humans to review tens of thousands of photos containing animals. The new study tested whether that final human step could be eliminated.

Using a general AI model called SpeciesNet, developed by Google, the researchers ran images through a fully automated pipeline with no human review and compared the results to traditional, expert-labeled datasets.
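
To make the idea of a fully automated pipeline concrete, the short Python sketch below shows what a no-human-review labeling pass over a folder of camera trap images could look like. It is an illustration only: the classify_image stub, the confidence threshold, and the file layout are placeholder assumptions, not the study's actual code or SpeciesNet's interface.

# Illustrative sketch only: a fully automated labeling pass over a folder of
# camera trap images, with no human review step. The classifier call is a
# placeholder stub standing in for a real model such as SpeciesNet.
import csv
from pathlib import Path

def classify_image(image_path: Path) -> tuple[str, float]:
    # Placeholder: a real pipeline would call a trained species classifier here
    # and return its top predicted species and a confidence score for this image.
    return "unknown", 0.0

def run_pipeline(image_dir: str, output_csv: str, min_confidence: float = 0.5) -> None:
    # Label every image with the model's top prediction and write a flat CSV
    # that downstream ecological analyses (such as occupancy models) can consume.
    rows = []
    for path in sorted(Path(image_dir).rglob("*.jpg")):
        species, confidence = classify_image(path)
        if confidence < min_confidence:
            species = "blank_or_uncertain"  # low-confidence images are flagged, not sent to a person
        rows.append({"image": str(path), "species": species, "confidence": confidence})
    with open(output_csv, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["image", "species", "confidence"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    run_pipeline("camera_trap_images", "predictions.csv")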

“The key question wasn’t whether the AI got every image right,” said Dan Morris, a senior staff research scientist at Google who helped create SpeciesNet and is a co-author on the study. “It was whether the ecological conclusions you care about would end up being basically the same.”

For most species, they were. Even when the AI made mistakes, such as misidentifying animals or missing detections, the overall models remained robust because occupancy models rely on repeated observations over time.
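
A rough illustration of why repeated observations buffer classification errors: occupancy models take a site-by-occasion detection history as input, so many images are pooled into each occasion and a single misclassified or missed photo rarely changes whether a species registers at a site. The sketch below builds such a history; the record format and the seven-day survey window are illustrative assumptions, not details from the paper.

# Illustrative sketch: collapsing image-level labels into a site-by-occasion
# detection history, the standard input for an occupancy model. Because each
# occasion pools many images, one misclassified or missed photo rarely flips
# a site's detection record. Field names and the 7-day window are assumptions.
from collections import defaultdict
from datetime import date

def detection_history(records, species, survey_days=7):
    # records: iterable of (site, date, predicted_species) tuples.
    sites = {site for site, _, _ in records}
    start = min(d for _, d, _ in records)
    end = max(d for _, d, _ in records)
    n_occasions = (end - start).days // survey_days + 1
    hits = defaultdict(set)
    for site, day, predicted in records:
        if predicted == species:
            hits[site].add((day - start).days // survey_days)
    return {site: [1 if occ in hits[site] else 0 for occ in range(n_occasions)]
            for site in sorted(sites)}

records = [
    ("A", date(2024, 6, 1), "lynx"),
    ("A", date(2024, 6, 9), "deer"),   # suppose a lynx here was misidentified
    ("A", date(2024, 6, 16), "lynx"),
    ("B", date(2024, 6, 2), "deer"),
]
print(detection_history(records, "lynx"))  # {'A': [1, 0, 1], 'B': [0, 0, 0]}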

In practical terms, the time savings are dramatic: fully automated processing can now be completed in a matter of days, shrinking a bottleneck that once lasted months to roughly a week.

That efficiency could be transformative, particularly for smaller or underfunded conservation groups. It may also allow researchers to expand monitoring efforts without being limited by data processing capacity.

The project also contributed to the broader AI-for-conservation community by making part of its dataset publicly available, helping support tools like SpeciesNet that rely on shared data to improve.

Morris emphasized that the study takes a practical approach. Rather than developing new AI algorithms, the team focused on what current tools can already do.

“We weren’t trying to invent a new model,” he said. “We were asking whether, given where the technology is today, people can rely on it for the kinds of analyses they already do.”

The answer, at least for many common species and standard ecological models, appears to be yes.

There are still limitations. Human review is needed for many other applications of camera-trap data, and the study dealt with only a small subset of the species that may be caught on camera; very rare and easily confused species, for example, remain problematic for AI detection. But the findings suggest that, in some cases, image processing no longer needs to be a major constraint on large-scale camera-trapping studies.

“The big takeaway is that this doesn’t have to be a bottleneck anymore,” Thornton said. “If we can process data faster, we can respond faster, and that’s really what matters for conservation.”

Additional co-authors on the study include Travis King and Lucy Perera-Romero of Washington State University; Alissa Anderson of Washington State University and Montana Fish, Wildlife and Parks; Rony Garcia-Anleu of the Wildlife Conservation Society’s Guatemala Program; Scott Fitkin of the Washington Department of Fish and Wildlife; and Carly Vynne of RESOLVE, who contributed to data collection, analysis, and manuscript development across the project’s study sites in Washington, Montana, and Guatemala.

 

