TAC 2019

Drug-Drug Interaction Extraction from Drug Labels

Background

The U.S. Food and Drug Administration (FDA) is responsible for protecting the public health by assuring the safety, efficacy, and security of all FDA-regulated products, including human and veterinary drugs, prescription and over-the-counter pharmaceuticals, vaccines, biopharmaceuticals, blood products, and other biological products.

FDA and the U.S. National Library of Medicine (NLM) have been working together to transform the content of Structured Product Labeling (SPL) documents for prescription drugs into discrete, coded, computer-readable data that will be made available to the public in individual SPL index documents. Transforming the narrative text into structured information encoded in national standard terminologies is a prerequisite to the effective deployment of drug safety information, and being able to electronically access, search, and sort labeling information is an important step toward a fully automated health information exchange system.

TAC 2017 addressed one important drug safety issue: automated extraction of adverse drug reactions (ADRs) reported in SPLs. An equally important and complex task is the automated extraction of drug-drug interaction information. Drug-drug interactions can lead to a variety of adverse events, and it has been suggested that preventable adverse events are the eighth leading cause of death in the United States.

The results of this TAC track will inform future FDA efforts at automating important safety processes, and could potentially lead to future FDA collaboration with interested researchers in this area.

Changes for TAC 2019

Based on feedback from last year's participants and recommendations by the FDA, several changes have been made for the 2019 evaluation; each is flagged as "New for 2019" in the task descriptions below.

Objective

The purpose of this TAC track is to test various natural language processing (NLP) approaches for their information extraction (IE) performance on drug-drug interactions in Structured Product Labeling (SPL) documents. SPL is a document markup standard approved by Health Level Seven (HL7) and adopted by the FDA as a mechanism for exchanging product and facility information about drugs. For more information about TAC, please visit https://tac.nist.gov/about/index.html.

Tasks

Participants may choose any single task described below, or approach the tasks as a pipeline in which each builds on the previous ones. Note that some tasks necessarily require the output of earlier tasks; e.g., Task 2 requires the output of Task 1. A minimal data-model sketch of the tasks' outputs follows the task descriptions.

Task 1
Entity recognition task. Extract mentions of interacting drugs/substances and specific interactions at the sentence level. This is similar to many NLP named entity recognition (NER) evaluations.
New for 2019: Mentions of Triggers are no longer evaluated.
Task 2
Relation identification task (sentence-level). Identify interactions at the sentence level, including the interacting drugs; the interaction type (pharmacokinetic, pharmacodynamic, or unspecified); and, for pharmacokinetic and pharmacodynamic interactions, their outcomes. This is similar to many NLP relation identification evaluations.
Task 3
Normalization task. Interacting substances should be normalized to UNII, and drug classes to MED-RT*. The consequence of an interaction should be normalized to SNOMED CT if it is a medical condition, and pharmacokinetic effects to National Cancer Institute Thesaurus codes.
New for 2019: Drug classes should be normalized to MED-RT.
New for 2019: Where applicable, multiple valid mappings will be considered for SpecificInteractions.
Task 4
Relation identification task (document-level). Generate a global list of distinct interactions for the label in normalized form.

Any resources, e.g., the UMLS© Terminology Services, may be used to aid with the normalization process.
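
To make the task outputs concrete, the following is a minimal Python sketch of the information each task asks a system to produce. The class names, field names, and mention-type labels are illustrative assumptions, not the official annotation schema or submission format.

    # Hypothetical data model for Tasks 1-4; all names are illustrative only.
    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class InteractionType(Enum):
        PHARMACOKINETIC = "pharmacokinetic"
        PHARMACODYNAMIC = "pharmacodynamic"
        UNSPECIFIED = "unspecified"

    @dataclass
    class Mention:
        """Task 1: a sentence-level mention of an interacting drug/substance
        or a specific interaction."""
        sentence_id: str
        start: int         # character offset where the mention begins
        end: int           # character offset where the mention ends
        text: str
        mention_type: str  # e.g., an interacting substance or a SpecificInteraction

    @dataclass
    class Interaction:
        """Task 2: a sentence-level relation between mentions."""
        precipitant: Mention                # the interacting drug/substance
        interaction_type: InteractionType
        outcome: Optional[Mention] = None   # stated effect, when present

    @dataclass
    class NormalizedInteraction:
        """Tasks 3-4: the same relation in normalized, document-level form."""
        substance_code: str                  # UNII, or MED-RT for drug classes
        interaction_type: InteractionType
        outcome_code: Optional[str] = None   # SNOMED CT or NCI Thesaurus code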

*The new Medication Reference Terminology (MED-RT) is the evolutionary successor to the Veterans Health Administration's National Drug File – Reference Terminology (VHA NDF-RT). Both are formal ontological representations of medication terminology, pharmacologic classifications, and asserted authoritative relationships between them. MED-RT is released on the same monthly schedule as its predecessor NDF-RT, in XML and other formats. MED-RT release files are available from the National Cancer Institute Enterprise Vocabulary Services (EVS) on the Federal Medication Terminologies webpage. MED-RT content is also integrated into NLM's Unified Medical Language System (UMLS©), whose biannual releases are made available to UMLS© licensees in May and November on the UMLS© webpage.
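
For drug-class normalization, one possible shortcut is to read MED-RT entries out of a local UMLS installation rather than parsing the MED-RT XML release directly. This sketch assumes the standard pipe-delimited MRCONSO.RRF layout (language in column 2, source abbreviation in column 12, code in column 14, string in column 15) and a source abbreviation of "MED-RT"; verify both against the UMLS release you are using.

    # Build a lower-cased term -> MED-RT code lookup from a UMLS MRCONSO.RRF file.
    def load_medrt_terms(mrconso_path: str) -> dict:
        term_to_code = {}
        with open(mrconso_path, encoding="utf-8") as f:
            for line in f:
                fields = line.rstrip("\n").split("|")
                # Keep English MED-RT rows; the first code seen for a term wins.
                if fields[11] == "MED-RT" and fields[1] == "ENG":
                    term_to_code.setdefault(fields[14].lower(), fields[13])
        return term_to_code

    # terms = load_medrt_terms("2019AA/META/MRCONSO.RRF")  # placeholder path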

Data

Participants are provided with a set of annotated drug labels for training.

These labels contain gold-standard annotations created by NLM and FDA. An additional set of at least 50 drug labels will be provided as the official test set in the same format. The annotations in the training set were generated semi-automatically and may be missing some interactions: the automatically extracted entities and relations were manually corrected by FDA experts and NLM volunteers using these guidelines (schematic presentation courtesy of Mark Sharp).

The ultimate aim is to know which interactions are in the labels, not the precise offsets or relations, so that the interactions may be linked to structured knowledge sources. Further, an interaction mentioned several times should not necessarily carry more weight than an interaction mentioned once. As such, the gold standard contains a list of unique interactions aggregated at the document level.
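
One way to produce such a list is to collapse sentence-level results on their normalized codes, keeping the first occurrence of each distinct interaction. The sketch below reuses the hypothetical NormalizedInteraction class from the task overview and treats the (substance, interaction type, outcome) triple as the identity of an interaction.

    # Collapse sentence-level normalized interactions (Tasks 2-3) into the
    # distinct document-level list that Task 4 asks for.
    def aggregate_document_level(normalized):
        seen = set()
        unique = []
        for itx in normalized:
            key = (itx.substance_code, itx.interaction_type, itx.outcome_code)
            if key not in seen:  # each distinct triple is reported once
                seen.add(key)
                unique.append(itx)
        return unique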

These interactions are mapped to the terminologies described in Task 3: UNII for substances, MED-RT for drug classes, SNOMED CT for medical conditions, and the NCI Thesaurus for pharmacokinetic effects.

Annotation Guidelines

The files are annotated according to these guidelines and using this decision tree.

Data Availability

The training datasets are available for immediate download.

The labeled and unlabeled test datasets are also now available.

Registration

To register for the TAC DDI task, please use the TAC registration form.

Evaluation

An annotated test set of 87 structured product labels in XML format will be used to evaluate performance. The XML schema is available in XSD and DTD formats. Participants will be asked to submit results for all test-set labels in XML format.

The evaluation measures are:

Task    Measures                                                                      Primary Metric
Task 1  Precision/Recall/F1 on entity-level annotations, partial and exact matching  Micro-averaged F1 on exact matches
Task 2  Precision/Recall/F1 on relations                                             Micro-averaged F1
Task 3  Precision/Recall/F1 on unique interactions                                   Macro-averaged F1 (by label)
Task 4  Precision/Recall/F1 on unique interactions                                   Macro-averaged F1 (by label)
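
To illustrate the difference between the two primary metrics: micro-averaging pools true positives, false positives, and false negatives across all drug labels before computing F1, so larger labels dominate the score; macro-averaging computes F1 for each label separately and then averages, so every label counts equally. A sketch, assuming one (TP, FP, FN) count triple per drug label:

    def f1(tp, fp, fn):
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        return 2 * p * r / (p + r) if p + r else 0.0

    def micro_f1(counts):
        # Pool counts over all labels, then score once (Tasks 1-2).
        tp = sum(c[0] for c in counts)
        fp = sum(c[1] for c in counts)
        fn = sum(c[2] for c in counts)
        return f1(tp, fp, fn)

    def macro_f1(counts):
        # Score each label separately, then average (Tasks 3-4).
        return sum(f1(*c) for c in counts) / len(counts)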

The official evaluation script will be used to calculate these scores and is available here.

Submission

Participants are allowed three separate submissions. Submissions that do not conform to the provided XML standards will be rejected without consideration.
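
Because malformed output is rejected outright, it is worth validating each output file against the published XSD before submitting. A minimal sketch using lxml; the file names are placeholders for the actual schema and your own output.

    from lxml import etree

    # Validate a submission file against the track's XML schema.
    schema = etree.XMLSchema(etree.parse("ddi_submission.xsd"))  # placeholder name
    doc = etree.parse("submission/label1.xml")                   # placeholder name
    if not schema.validate(doc):
        for error in schema.error_log:
            print(error)  # report each schema violation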

Important: The submission website is now open and is available here.
Note: You will need to log in with the username and password you received when registering for TAC.

Timeline

May 2019
Training set release.
July 2019
Registration deadline for participants.
September 2019
Test set release.
October 4, 2019 Extended
Participants' submissions due.
October 11, 2019
Individual results sent to participants.
October 15, 2019
Short (1-page) system descriptions and workshop presentation proposals due.
October 31, 2019
Notification of acceptance of workshop presentation proposals.
November 1, 2019
Participants' (draft) workshop notebook papers due.
November 12-13, 2019
TAC 2019 Workshop in Gaithersburg, MD, USA.
Early February 2020
Final proceedings papers due.

Mailing List

We continue to use the ADR mailing list for all drug-label-related evaluations:
tac-adr@googlegroups.com

Organizers

Dina Demner-Fushman (ddemner@mail.nih.gov)
Lister Hill National Center for Biomedical Communications, U.S. National Library of Medicine
Travis Goodwin (travis.goodwin@nih.gov)
Lister Hill National Center for Biomedical Communications, U.S. National Library of Medicine
Kin Wah Fung
Lister Hill National Center for Biomedical Communications, U.S. National Library of Medicine
Phong Do
Office of Health Informatics, U.S. Food and Drug Administration