This AI model will help you summarise a research paper in seconds

Computer science students, this nifty tool is about to ease your research reading forever. Source: RF Studio/Pexels

As any STEM student would attest, information overload is a major issue in the scientific community. What if we told you there was a tool to summarise every research paper, thus allowing you to find the right ones to read? No, we’re not talking about the abstract, but something even simpler.

Researchers at the Allen Institute for Artificial Intelligence have developed an AI-powered model that summarises scientific papers into a few sentences. In other words, it condenses a research paper into TLDR (Too Long; Didn’t Read) format so you can decide which papers are worth reading. It does this by extracting the most important parts from the abstract, introduction, and conclusion sections, creating a snippet to describe the paper.

How is this possible? Through GPT-3-style natural language processing (NLP) techniques, which use deep learning to produce human-like text. First, the researchers trained the model on the English language. Then, they built a dataset of 5,411 summaries of computer science papers and further trained the model on more than 20,000 additional research papers. The result is a nifty tool that can help you sift through hundreds of seemingly relevant papers to find the right ones for your purpose.

Allen Institute researchers have created an AI model to summarise the focus of any research paper. Source: Allen Institute

“Extreme” research paper summarisation

The model has been rolled out on Semantic Scholar, the Allen Institute’s search engine for scientific literature. Now, when you search for a research paper there, it automatically generates a single-sentence summary alongside your results. This summary focuses on the paper’s main contributions, leaving out methodological details of the kind typically covered in the abstract.

Johns Hopkins University PhD student Isabel Cachola believes it will help researchers quickly decide which papers to add to their reading list. “People often ask why are TLDRs better than abstracts, but the two serve completely different purposes. Since TLDRs are 20 words instead of 200, they are much faster to skim,” explains Daniel S. Weld, Head of the Semantic Scholar research group at the Allen Institute for AI and Professor of Computer Science at the University of Washington.

The TLDR feature is now available in beta for nearly 10 million papers on Semantic Scholar. It is limited to the computer science domain for now, but you can expect it to expand to other fields soon. Try it out for yourself here, then spread the word to your scientist friends!