Please use this identifier to cite or link to this item: https://ahro.austin.org.au/austinjspui/handle/1/20337
Full metadata record
DC Field | Value | Language
dc.contributor.author | McMaster, Christopher | -
dc.contributor.author | Liew, David F L | -
dc.contributor.author | Keith, Claire | -
dc.contributor.author | Aminian, Parnaz | -
dc.contributor.author | Frauman, Albert G | -
dc.date | 2019-02-06 | -
dc.date.accessioned | 2019-03-04T22:04:18Z | -
dc.date.available | 2019-03-04T22:04:18Z | -
dc.date.issued | 2019-06 | -
dc.identifier.citation | Drug safety 2019; 42(6): 721-725 | -
dc.identifier.uri | https://ahro.austin.org.au/austinjspui/handle/1/20337 | -
dc.description.abstract | Adverse drug reaction (ADR) detection in hospitals is heavily reliant on spontaneous reporting by clinical staff, with studies in the literature pointing to high rates of underreporting [1]. International Classification of Diseases, 10th Revision (ICD-10) codes have been used in epidemiological studies of ADRs and offer the potential for automated ADR detection systems. The aim of this study was to develop an automated ADR detection system based on ICD-10 codes, using machine-learning algorithms to improve accuracy and efficiency. For a 12-month period from December 2016 to November 2017, every inpatient episode receiving an ICD-10 code in the range Y40.0-Y59.9 (ADR code) was flagged for review as a potential ADR. Each flagged admission was assessed by an expert pharmacist and, if needed, reviewed at regular ADR committee meetings. For each report, a determination was made about ADR probability and severity. The dataset was randomly split into training and test sets. A machine-learning model using the random forest algorithm was developed on the training set to discriminate between true and false ADR reports. The model was then applied to the test set to assess accuracy using the area under the receiver operating characteristic curve (AUC). In the study period, 2917 Y40.0-Y59.9 codes were applied to admissions, resulting in 245 ADR reports after review. These 245 reports accounted for 44.5% of all ADR reporting in our hospital in the study period. A random forest model built on the training set was able to discriminate between true and false reports on the test set with an AUC of 0.803. Automated ADR detection using ICD-10 coding significantly improved ADR detection in the study period, with improved discrimination between true and false reports by applying a machine-learning model. | -
dc.language.iso | eng | -
dc.title | A Machine-Learning Algorithm to Optimise Automated Adverse Drug Reaction Detection from Clinical Coding. | -
dc.type | Journal Article | -
dc.identifier.journaltitle | Drug safety | -
dc.identifier.affiliation | Pharmacy Department, Austin Health, Heidelberg, Victoria, Australia | en
dc.identifier.affiliation | Department of Clinical Pharmacology, Austin Health, Heidelberg, Victoria, Australia | en
dc.identifier.affiliation | Department of Medicine, University of Melbourne, Parkville, VIC, Australia | en
dc.identifier.doi | 10.1007/s40264-018-00794-y | -
dc.identifier.orcid | 0000-0003-2432-5451 | -
dc.identifier.pubmedid | 30725336 | -
dc.type.austin | Journal Article | -
local.name.researcher | Aminian, Parnaz | -
item.languageiso639-1 | en | -
item.cerifentitytype | Publications | -
item.fulltext | No Fulltext | -
item.grantfulltext | none | -
item.openairetype | Journal Article | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
crisitem.author.dept | Clinical Pharmacology and Therapeutics | -
crisitem.author.dept | Rheumatology | -
crisitem.author.dept | Clinical Pharmacology and Therapeutics | -
crisitem.author.dept | Pharmacy | -
crisitem.author.dept | Pharmacy | -
crisitem.author.dept | Clinical Pharmacology and Therapeutics | -
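
The workflow described in the abstract above — flagging admissions coded Y40.0-Y59.9, adjudicating each flag, then training a random forest to separate true from false ADR reports and scoring it by AUC on a held-out test set — can be sketched as follows. This is a minimal illustration assuming a scikit-learn implementation; the input file, column names (icd10_code, true_adr, and the feature columns), and feature choices are hypothetical and are not taken from the paper.

# Illustrative sketch only: a random forest classifier that discriminates
# true from false ADR reports flagged via ICD-10 codes Y40.0-Y59.9,
# evaluated by AUC on a held-out test set. Column names, the input file,
# and feature choices are hypothetical; the study's actual features and
# preprocessing are not reproduced here.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical extract of coded inpatient episodes: one row per admission,
# with the assigned ICD-10 code, candidate features, and the expert
# pharmacist's adjudication (1 = true ADR, 0 = false report).
episodes = pd.read_csv("coded_admissions.csv")

# Flag potential ADRs: any admission coded in the Y40.0-Y59.9 range
# (assumes dotted ICD-10 codes, which sort lexicographically).
flagged = episodes[episodes["icd10_code"].between("Y40.0", "Y59.9")]

# Hypothetical feature columns and the adjudicated label.
feature_cols = ["age", "length_of_stay", "num_drugs", "icd10_block"]
X = pd.get_dummies(flagged[feature_cols], columns=["icd10_block"])
y = flagged["true_adr"]

# Random split into training and test sets, mirroring the study design.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

# Fit a random forest on the training set ...
model = RandomForestClassifier(n_estimators=500, random_state=42)
model.fit(X_train, y_train)

# ... and report discrimination on the test set as the area under the
# receiver operating characteristic curve (the paper reports AUC = 0.803).
test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Test-set AUC: {test_auc:.3f}")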
Appears in Collections: Journal articles
Items in AHRO are protected by copyright, with all rights reserved, unless otherwise indicated.