JNCI Journal of the National Cancer Institute
Article · 2014 · Peer-reviewed
Data sources: Crossref

Validated or Not Validated? That Is the Question

Authors: Melanie Spears; John M. S. Bartlett; Kathleen I. Pritchard

Abstract

deviation estimates were … acceptable as they are <5% of the … reportable range” (page 24 of Supplementary Data). Although this suggests the assay has potential for technical validation, data on assay reproducibility, particularly with respect to diagnosis of patients with “borderline” results, are not presented, making it difficult to be confident that the assay robustly stratifies patients, either at central or peripheral hospital laboratories. Experience with single markers [eg, HER2 (6,7)] suggests these are critical components of technical validation, and in our opinion this is as important for multiparameter tests as it is for “simple” single-marker assays. Accuracy is more challenging to demonstrate, particularly when a broad phenotypic assay is applied, but evidence shows this challenge can be addressed with appropriately designed studies (8–10). EGAPP (5) guidelines state that “convincing” evidence for analytical validity requires “studies that provide confident estimates of analytic sensitivity and specificity” using representative samples, particularly addressing challenging cases. Such data may be available and, if so, should be published to demonstrate technical validity of the DNA-damage response deficiency (DDRD) assay.

The authors present several analyses supporting utility of the DDRD signature to predict outcome after chemotherapy in both neoadjuvant and adjuvant settings. They show convincingly that the DDRD signature is associated with pathological complete response or relapse after chemotherapy, but not with outcome in patients who did not receive chemotherapy. They used publicly available and institutional cohorts, including three neoadjuvant cohorts (n = 51, 66, and 86 patients, respectively) treated with fluorouracil, epirubicin, and cyclophosphamide (FEC)/fluorouracil, doxorubicin, and cyclophosphamide (FAC), and four adjuvant cohorts, of which three were untreated and one (n = 191 patients) was treated “historically” with FAC.
The data support a role for the DDRD signature in prediction of residual risk after chemotherapy, but do they clinically validate the formalin-fixed paraffin-embedded DDRD assay for anthracycline/cyclophosphamide chemotherapy response as claimed? We argue that, although these data are interesting and part of a pathway leading to clinical validation, they rep

Keywords

Fanconi Anemia, Antineoplastic Combined Chemotherapy Protocols, Humans, Breast Neoplasms, Female, DNA, Neoplasm, DNA Damage

  • BIP! impact indicators:
    citations: 1 (overall/total impact in the research community, based on the underlying citation network, diachronically)
    popularity: Average (“current” impact/attention of the article, based on the underlying citation network)
    influence: Average (overall/total impact of the article, based on the underlying citation network, diachronically)
    impulse: Average (initial momentum of the article directly after its publication, based on the underlying citation network)