Can AI help you write your next paper?


You know the text autocomplete function that’s so convenient, and occasionally annoying, to use on your smartphone? Well, tools based on the same idea have now evolved to the point where they help researchers analyze and write scientific papers, generate code and brainstorm ideas.

The tools come from natural language processing (NLP), an area of artificial intelligence that aims to help computers ‘understand’ and produce human-readable text. These tools, known as large language models (LLMs), have become not only objects of study but also assistants in research.

LLMs are neural networks that have been trained on massive amounts of text to process and, in particular, generate language. OpenAI, a research laboratory in San Francisco, California, created the most famous of these LLMs, GPT-3, in 2020, by training a network to predict the next piece of text based on what came before. On Twitter and elsewhere, researchers have expressed amazement at its eerily human-like writing. And anyone can now use it, through the OpenAI programming interface, to generate text based on a prompt. (Prices start at US$0.0004 per 750 words processed, a measure that combines reading the prompt and generating the response.)
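
For a sense of what that interface involves, here is a minimal sketch of a completion request using the openai Python package as it existed in 2022 (version 0.x); the model name, prompt and parameters are illustrative choices, not recommendations from the article.

```python
# Minimal sketch: one completion request to a 2022-era GPT-3 model.
# Assumes the pre-1.0 openai package (pip install "openai<1.0").
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: the reader supplies a key

response = openai.Completion.create(
    engine="text-davinci-002",  # an illustrative GPT-3 engine name
    prompt="Suggest three possible directions for a study on sleep and memory.",
    max_tokens=100,   # billing counts prompt tokens plus completion tokens
    temperature=0.7,  # higher values give more varied suggestions
)
print(response["choices"][0]["text"])
```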

“I think I use GPT-3 almost every day,” says Hafsteinn Einarsson, a computer scientist at the University of Iceland in Reykjavík. He uses it to generate feedback on the abstracts of his papers. In one example Einarsson shared at a conference in June, some of the algorithm’s suggestions were useless, advising him to add information that was already included in his text. But others, such as “state the research question more clearly at the beginning of the abstract”, were more helpful. It’s hard to see the errors in your own manuscript, says Einarsson. “You have to sleep on it for two weeks, or you can have someone else look at it. And that ‘someone’ could be GPT-3.”
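
A hypothetical version of that abstract-feedback loop might look like the sketch below; the prompt wording is invented, since the article does not describe Einarsson’s exact prompts.

```python
# Hypothetical abstract-feedback prompt; the wording is invented.
import openai

openai.api_key = "YOUR_API_KEY"

abstract = "..."  # paste the abstract to be critiqued here
prompt = (
    "Below is the abstract of a scientific paper. "
    "List specific suggestions for improving it.\n\n"
    f"Abstract: {abstract}\n\nSuggestions:"
)
reply = openai.Completion.create(
    engine="text-davinci-002", prompt=prompt, max_tokens=200
)
print(reply["choices"][0]["text"])
```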

Organized thinking

Some researchers use LLMs to create paper titles or to make text more readable. Mina Lee, a doctoral student in computer science at Stanford University in California, offers GPT-3 prompts such as “Using these keywords, generate a paper title.” To rewrite problematic sections, she uses WordTune, an AI-powered writing assistant by AI21 Labs in Tel Aviv, Israel. “I write a paragraph, and it’s basically like a brain dump,” she says. “I click ‘rewrite’ until I find a cleaner version I like.”
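
Lee’s title prompt translates almost directly into code. A sketch, with invented keywords and the same 2022-era openai package as above:

```python
# Keyword-to-title prompt in the style Lee describes; keywords invented.
import openai

openai.api_key = "YOUR_API_KEY"

keywords = ["mindfulness", "decision making", "meta-analysis"]
prompt = "Using these keywords, generate a paper title: " + ", ".join(keywords)
out = openai.Completion.create(
    engine="text-davinci-002", prompt=prompt, max_tokens=30
)
print(out["choices"][0]["text"].strip())
```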

Domenic Rosati, a computer scientist at Scite, a technology start-up in Brooklyn, New York, uses an LLM called Generate to organize his thinking. Developed by Cohere, an NLP firm in Toronto, Canada, Generate behaves much like GPT-3. “I put down notes, or just scribbles and ideas, and I say, ‘Abstract this’,” Rosati says. “It’s really useful for me as a synthesis tool.”
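
Cohere exposes Generate through a Python SDK. A rough sketch of Rosati’s notes-to-abstract use, assuming the 2022-era client; the model name “xlarge” reflects Cohere’s lineup at the time and may have changed since.

```python
# Sketch: turning rough notes into an abstract with Cohere's SDK.
import cohere

co = cohere.Client("YOUR_API_KEY")

notes = "LLMs as writing aids; feedback on abstracts; code completion"  # invented
response = co.generate(
    model="xlarge",  # assumption: 2022-era model name, may have changed
    prompt=f"Turn these notes into a short abstract:\n{notes}\n\nAbstract:",
    max_tokens=120,
)
print(response.generations[0].text)
```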

Language models also help with experimental design. For one project, Einarsson is using the game Pictionary as a way to collect language data from participants. Given a description of the game, GPT-3 suggested game variations he could try. In theory, researchers could also ask for fresh takes on experimental protocols. As for Lee, she asked GPT-3 to brainstorm things to do when introducing her boyfriend to her parents. It suggested going to a restaurant on the beach.

Coding

OpenAI researchers trained GPT-3 on a vast assortment of text, including books, news articles, Wikipedia entries, and software code. Later, the team noticed that GPT-3 could complete pieces of code just like any other text. The researchers created a fine-tuned version of the algorithm, called Codex, trained on more than 150 gigabytes of text from the code-sharing platform GitHub1. GitHub has now integrated Codex into a service called Copilot, which suggests code as people type.

Luca Soldaini, a computer scientist at the Allen Institute for AI (also known as AI2) in Seattle, Washington, says at least half of the people in his office use Copilot. It works best for repetitive programming, Soldaini says, citing a project that involved writing boilerplate code to process PDFs. The tool offers up a chunk of code that, you hope, is what you wanted; sometimes it isn’t. As a result, Soldaini says, they are careful to use Copilot only with languages and libraries they are familiar with, so that they can spot problems.
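
Copilot runs inside the editor rather than as a script, but the interaction looks roughly like this: the programmer types a comment and a function signature, and the assistant proposes the body. The completion below is a plausible, hand-written stand-in using the pypdf library, not actual Copilot output.

```python
# Illustration of comment-driven completion for PDF boilerplate.
# Requires: pip install pypdf
from pypdf import PdfReader

# Extract the text of every page of a PDF into a list of strings.
def extract_pages(path: str) -> list[str]:
    reader = PdfReader(path)
    return [page.extract_text() or "" for page in reader.pages]

if __name__ == "__main__":
    for i, text in enumerate(extract_pages("paper.pdf")):
        print(f"--- page {i + 1} ---")
        print(text[:200])  # preview the first 200 characters
```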

Literature searches

Perhaps the most established application of language models is in searching and summarizing the literature. AI2’s Semantic Scholar search engine — which covers nearly 200 million papers, mostly from biomedicine and computer science — provides tweet-length descriptions of papers using a language model called TLDR (short for ‘too long; didn’t read’). TLDR is derived from an earlier model called BART, developed by researchers at the social-media platform Facebook, which was fine-tuned on summaries written by humans. (By today’s standards, TLDR is not a large language model: it has only about 400 million parameters. The largest version of GPT-3 has 175 billion.)
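
As a self-contained stand-in (the checkpoint below is a generic news summarizer, not AI2’s TLDR model), this is how a BART summarizer runs through the Hugging Face transformers pipeline:

```python
# BART summarization via transformers; a stand-in for TLDR-style output.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

abstract = (
    "Large language models are neural networks trained on massive text "
    "corpora. They can draft prose, suggest code and summarize papers, "
    "but they sometimes fabricate plausible-sounding claims."
)
# Short max_length to mimic TLDR's tweet-length descriptions.
print(summarizer(abstract, max_length=30, min_length=5)[0]["summary_text"])
```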

TLDR also appears in AI2’s Semantic Reader, an application that augments scientific papers. When a user clicks on an in-text citation in Semantic Reader, a box pops up that includes a TLDR summary of the cited paper. “The idea is to take artificial intelligence and bring it into the reading experience,” says Dan Weld, chief scientist at Semantic Scholar.

When language models generate text summaries, there is often “a problem with hallucination”, says Weld, “where the language model is just making stuff up or lying”. TLDR does relatively well on tests of factuality2: authors of the papers it described rated its accuracy as 2.5 out of 3. Weld says this is partly because the summaries are only about 20 words long, and partly because the algorithm rejects summaries that introduce uncommon words that don’t appear in the full text.
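
That last safeguard can be approximated with a toy filter: reject any candidate summary whose content words never appear in the source text. A deliberately simplistic sketch (real systems are far more sophisticated):

```python
# Toy filter: flag summaries that introduce unsupported content words.
import re

def words(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))

def introduces_new_words(summary: str, source: str) -> bool:
    # Function words that any summary may legitimately add.
    stopwords = {"the", "a", "an", "of", "and", "in", "to", "is", "are"}
    return bool(words(summary) - words(source) - stopwords)

source = "We study mindfulness and decision making in 120 adults."
print(introduces_new_words("Mindfulness and decision making in adults.", source))  # False
print(introduces_new_words("Meditation cures anxiety in children.", source))       # True
```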

Among search tools, Elicit launched in 2021 from Ought, a machine-learning non-profit organization in San Francisco, California. Ask Elicit a question, such as “What are the effects of mindfulness on decision making?”, and it outputs a table of ten papers. Users can ask the software to populate columns with content such as abstract summaries and metadata, as well as information about study participants, methodology and results. Elicit uses tools including GPT-3 to extract or generate this information from the papers.

Joel Chan, who studies human-computer interaction at the University of Maryland in College Park, uses Elicit whenever he starts a project. “It works well when I don’t know the right language to search,” he says. Gustav Nilsonne, a neuroscientist at the Karolinska Institutet in Stockholm, uses Elicit to find papers with data he can add to pooled analyses. The tool has pointed him to papers he hadn’t found in other searches, he says.

Evolving models

Prototypes at AI2 hint at the future of LLMs. Sometimes researchers have questions after reading a scientific abstract but don’t have time to read the full paper. A team at AI2 developed a tool that can answer such questions, at least in the NLP domain. It began by asking researchers to read the abstracts of NLP papers and pose questions about them (such as “Which five dialogue qualities were analyzed?”), then asked other researchers to answer those questions after reading the full papers3. AI2 trained a version of its Longformer language model — which can take in an entire paper, not just the few hundred words that other models handle — on the resulting data set to generate answers to different questions about other papers4.
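
That particular system isn’t packaged for download in the article, but publicly released Longformer checkpoints show the long-context question-answering pattern. A sketch using a TriviaQA-fine-tuned Longformer from the Hugging Face hub and an invented paper snippet:

```python
# Long-context extractive QA with a Longformer checkpoint; the model
# shown is fine-tuned on TriviaQA, not on AI2's paper-QA data set.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="allenai/longformer-large-4096-finetuned-triviaqa",
)

paper_text = (  # invented stand-in for a full paper
    "We analyzed five dialogue qualities: coherence, fluency, "
    "informativeness, relevance and engagement, using 200 annotated "
    "chatbot transcripts."
)
result = qa(question="Which five dialogue qualities were analyzed?",
            context=paper_text)
print(result["answer"])
```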

A model called ACCoRD can generate definitions and analogies for 150 scientific concepts related to NLP, while MS^2, a data set of 470,000 medical documents and 20,000 multi-document summaries, was used to fine-tune BART so that researchers can take a query and a set of papers and produce a brief meta-analytic summary.

And then there are applications beyond text generation. In 2019, AI2 fine-tuned BERT, a language model that Google created in 2018, on Semantic Scholar papers to produce SciBERT, which has 110 million parameters. Scite, which used AI to build a scientific search engine, further fine-tuned SciBERT so that when its search engine lists papers citing a target paper, it categorizes them as supporting, contrasting or otherwise mentioning that paper. Rosati says that nuance helps people identify limitations or gaps in the literature.
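
Scite’s production classifier isn’t public, but the setup it implies, SciBERT with a three-way classification head, is straightforward to sketch with transformers. The head below is freshly initialized and untrained, so its outputs are meaningless until fine-tuned on labelled citations:

```python
# Sketch of a citation-intent setup: SciBERT plus a 3-way head.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "allenai/scibert_scivocab_uncased",
    num_labels=3,  # supporting, contrasting, mentioning
)

citation = "Our results contradict the mechanism proposed by Smith et al."
inputs = tokenizer(citation, return_tensors="pt")
logits = model(**inputs).logits  # untrained head: logits not yet meaningful
print(logits.shape)  # (1, 3): one score per citation category
```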

AI2’s SPECTER model, also based on SciBERT, reduces papers to compact mathematical representations. Conference organizers use SPECTER to match submitted papers with peer reviewers, Weld says, and Semantic Scholar uses it to recommend papers based on a user’s library.
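
The SPECTER checkpoint is publicly available, and its documented usage pattern is to encode a paper’s title and abstract and take the [CLS] vector. A sketch with two made-up papers, compared by cosine similarity:

```python
# SPECTER-style paper embeddings and a similarity comparison.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/specter")
model = AutoModel.from_pretrained("allenai/specter")

papers = [  # made-up title + abstract pairs, joined by the SEP token
    "Language models for scientific writing" + tokenizer.sep_token +
    "We evaluate LLM assistance for drafting paper abstracts.",
    "Protein folding with deep learning" + tokenizer.sep_token +
    "We predict 3D protein structure from amino-acid sequences.",
]
inputs = tokenizer(papers, padding=True, truncation=True,
                   max_length=512, return_tensors="pt")
with torch.no_grad():
    embeddings = model(**inputs).last_hidden_state[:, 0, :]  # [CLS] vectors

sim = torch.nn.functional.cosine_similarity(
    embeddings[0], embeddings[1], dim=0
)
print(float(sim))  # lower values indicate less related papers
```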

Tom Hope, a computer scientist at the Hebrew University of Jerusalem and AI2, says other research projects at AI2 have fine-tuned language models to identify effective drug combinations, connections between genes and disease, and scientific challenges and directions in COVID-19 research.

But can language models enable deeper insight or discovery? In May, Hope and Weld co-authored a review5 with Eric Horvitz, chief scientific officer at Microsoft, and others that lists the challenges in achieving this, including teaching models to “[infer] the result of combining two concepts”. “It’s one thing to generate an image of a cat flying into space,” Hope says, referring to OpenAI’s DALL·E 2 image-generation model. But “how do we go from that to abstract, highly complex scientific concepts?”

That is an open question. But LLMs are already having a tangible impact on research. “At some point, people will lose out if they don’t use these large language models,” says Einarsson.
