When it comes to submitting new drug applications to FDA, it’s safe to assume that the prospect of receiving a complete response letter keeps pharma executives up at night. The risks associated with regulatory submission can have a substantial impact on a business, and approval delays due to data issues can cost a company millions.
As a former FDA reviewer, I’ve seen just about everything when it comes to data quality issues that can derail drug approvals. Luckily, today these kinds of risks can be mitigated using sophisticated data analytics.
As you think about ways to de-risk your trial, below are five key data quality areas to focus on to protect your NDA submission. With the right combination of machine learning anomaly detection and centralized issue management, it’s possible to minimize exposure to data quality questions from regulators:
- Site inconsistency for unknown risks: Identify inconsistencies in how sites evaluate or measure endpoints, whether in subjective interpretation (e.g., pain levels) or in the calibration of objective diagnostics.
- Site inconsistency for known risks: Identify inconsistencies in how sites follow the protocol, such as enrolling patients who don't meet study criteria or missing a high fraction of scheduled visits.
- Differences in adverse event reporting: Identify differences in the actions sites take with regard to an adverse event (reduce dose or interrupt study drug) for each severity level, and check for sites that are underreporting the top 10 adverse events.
- Misconduct: Identify sites that fabricate data, whether deliberately or out of neglect or forgetfulness.
- Data inconsistency: Identify data that are impossible or highly unlikely due to data entry errors (vitals, visit dates, etc.)
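To make one of these checks concrete, consider adverse event underreporting from the third bullet. A minimal sketch of such a check, assuming site-level counts are available, is to compare each site's AE reporting rate against the pooled rate across all sites under a binomial model and flag sites that fall well below it. (The function and data names here are illustrative, not part of any actual Medidata product, whose methods are more sophisticated than this simple z-score test.)

```python
import math

def flag_underreporting_sites(site_counts, threshold=-2.0):
    """Flag sites whose adverse-event reporting rate is unusually low.

    site_counts: dict mapping site id -> (ae_count, patient_count).
    A site is flagged when the z-score of its AE count, relative to
    the pooled rate across all sites under a binomial model, falls
    below `threshold` (default: >2 standard deviations low).
    """
    total_aes = sum(ae for ae, _ in site_counts.values())
    total_patients = sum(n for _, n in site_counts.values())
    pooled_rate = total_aes / total_patients

    flagged = []
    for site, (ae, n) in site_counts.items():
        expected = pooled_rate * n
        std = math.sqrt(n * pooled_rate * (1 - pooled_rate))
        z = (ae - expected) / std
        if z < threshold:
            flagged.append(site)
    return sorted(flagged)

# Hypothetical site-level data: (adverse events reported, patients)
sites = {
    "site_A": (30, 100),
    "site_B": (28, 90),
    "site_C": (33, 110),
    "site_D": (5, 100),   # suspiciously low reporting rate
}
print(flag_underreporting_sites(sites))  # → ['site_D']
```

In practice a centralized monitoring system would run a check like this per adverse event term and severity level, and route any flagged sites into issue management for follow-up.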
By implementing a plan to tackle these common data inconsistencies, pharma sponsors can drastically reduce risk in their NDA submission; Medidata Edge Trial Assurance typically identifies at least 50 unique issues in a trial.
Contact us today to learn more.