MAJOR REVIEW
Year : 2021  |  Volume : 33  |  Issue : 2  |  Page : 126-131

Designing clinical trials


Smita Narayan
Additional Professor, Department of Ophthalmology, Regional Institute of Ophthalmology, Thiruvananthapuram, Kerala, India

Date of Submission: 25-Feb-2021
Date of Decision: 28-Feb-2021
Date of Acceptance: 01-Mar-2021
Date of Web Publication: 21-Aug-2021

Correspondence Address:
Smita Narayan
Regional Institute of Ophthalmology, Kunnukuzhy, Thiruvananthapuram, Kerala
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/kjo.kjo_51_21

  Abstract 


A clinical trial is the gold standard study design to assess the effectiveness of interventions in health care. Yet, paradoxically, clinicians often find that the benefits seen in a randomized controlled clinical trial cannot be replicated in actual clinical practice. A tradeoff between practicality and ideal trial requirements is needed to develop a pragmatic clinical trial, and the results of a pragmatically constituted trial are more likely to be acceptable to practicing clinicians. The major steps, requirements, and practices involved in designing a clinical trial are summarized in this article. The PICOT format is a helpful approach to summarize research questions that explore the effects of therapy. Next, depending on feasibility, the study population is selected using inclusion and exclusion criteria. The needed sample size is calculated from this representative study population by providing clinical data inputs to a statistician. It is essential that chance, all possible forms of bias, and confounding factors are eliminated or balanced between the study groups. This is ensured by randomization, proper allocation concealment, and masking. A clinical trial also needs to pass through the relevant review boards and must be ethical; the Good Clinical Practice guidelines throw light on this aspect. Finally, it is very important that a clinical trial is reported adequately, such that readers understand the trial's design, conduct, analysis, and interpretation, and can assess the validity of its results. The CONSORT guidelines are very useful in this regard.

Keywords: Bias, clinical trials, randomisation


How to cite this article:
Narayan S. Designing clinical trials. Kerala J Ophthalmol 2021;33:126-31

How to cite this URL:
Narayan S. Designing clinical trials. Kerala J Ophthalmol [serial online] 2021 [cited 2021 Dec 5];33:126-31. Available from: http://www.kjophthal.com/text.asp?2021/33/2/126/324197




  Introduction


A clinical trial is the gold standard study design to assess the effectiveness of interventions in health care.[1],[2] It is a planned experiment that determines the magnitude and direction of the difference between the application of an intervention and a comparator. The comparator is often the current standard of intervention, an acceptable standard of intervention, or a placebo.[3] The clinical trial design thus specifies the new intervention and the control intervention, as well as the group that receives the new intervention and the comparator group.[4] In this manuscript, we will look at the essential elements of the design of a clinical trial with a specific focus on its applicability in a clinical practice setting.


  When Should We Do a Clinical Trial?


A clinical trial must be considered when we want to assess the efficacy of a new intervention in comparison to an already existing intervention. Some modality of therapeutic or diagnostic intervention is available for most medical conditions; it is rare for a medical condition to have no existing diagnostic or therapeutic process involved in its management, although standards of care may differ between health-care settings and countries. The comparison of a new intervention must therefore be against the locally relevant standard for that medical condition.[5],[6] The use of a placebo, or comparing an intervention against no intervention, is generally not considered ethical and should be done only when necessary.[7]


  Phases of a Clinical Trial


A clinical trial proceeds through several phases. It is important to understand the different phases, although clinicians are mostly concerned with phase 3 trials.[6],[8],[9]

Pre-phase 1 trials

These trials are done on animals and are used to study the effect of the drug molecule at each level of organization in the body, from the cell to the organ systems, as well as the effect of the body on the drug molecule. In this phase, the dose of the drug that kills 50% of the animals, the median lethal dose (LD50), is determined. The drug undergoes further testing only if the LD50 is much higher than the dose required for the desired effect in the body and if the pharmacokinetic and pharmacodynamic effects are favorable.

Phase 1 trials

This is done on healthy volunteers or on persons with serious forms of disease who are not improving with any available treatment. The same pharmacokinetic and pharmacodynamic effects are evaluated as in the pre-phase 1 studies. The drug is given in increasing doses until the results obtained in animals are reproduced in humans.

Phase 2 trials

The drug is now given to a larger number of people, usually in the hundreds. The molecule is given to the people for whom it is intended rather than to healthy volunteers. The proportions of patients who achieve and who do not achieve the desired results at the set dose are determined. Any adverse reactions in this group are noted.

Phase 3 trials

This is a planned experiment on humans with sufficient sample size and power to prove or disprove the predicted benefits of the molecule. The results of these trials are submitted to regulatory authorities for approval of the interventions.

Phase 4 trials

Phase 4 trials are the postmarketing surveillance studies that look to identify rare or serious adverse effects and drug-drug or diet-drug interactions. Importantly, a wider spectrum of the population is studied in this phase. Hence, a drug or an intervention can be declared safe only after it has passed through enough years of postmarketing surveillance.[10]


  The Designs of a Trial


There are a few possibilities when we assess the effectiveness of an intervention. The new intervention may be superior to an existing intervention. The new intervention may perform exactly like an existing intervention, that is, be equivalent to it. The new intervention may be nearly similar to an existing intervention, within a margin of acceptability, that is, be noninferior to it. Finally, the new intervention may be inferior to the existing intervention. It is important to define the margins of superiority and noninferiority used to determine whether a new intervention is superior or noninferior. These margins are determined based on clinically relevant outcomes and the quantum of change that is considered clinically acceptable, and they may change with the clinical condition of interest. They may also be influenced by the prevalence of the condition, the severity of the condition, and the cost of the intervention. If the condition is mild, with milder sequelae and a milder natural progression, we could aim for a larger margin of superiority.[11] If the condition is more severe, with high fatality or severe adverse events, or a rapid natural progression, we may aim for a smaller margin of superiority or a larger margin of noninferiority so that any small improvement may impact favorably on the patient. We must also consider the cost of the newer intervention while determining the margins of superiority and noninferiority; we may consider a larger margin of superiority if the new intervention is much costlier than the existing intervention.

Common clinical trial designs are therefore either superiority designs, noninferiority designs, or equivalence trials. The superiority trial aims to establish that the new intervention is significantly better than an existing intervention based on a predefined margin of clinical relevance. As an example, let us consider that the current standard of care provides good outcomes in 35% of those who receive the intervention. We may predefine superiority as more than 45% good outcomes in those who receive the new intervention. In this instance, we will consider the new intervention useful only if it leads to good outcomes in more than 45% of those who receive it. The noninferiority trial aims to establish that the new intervention is not inferior to the existing intervention.[12] As an example, we can predefine noninferiority as a margin of ±10% and conclude that the new intervention is comparable to the existing intervention if its outcomes are within ±10% of those with the existing intervention. The outcomes with the new intervention may be slightly superior, equivalent, or slightly inferior compared to the existing intervention. In a superiority design, a finding of no difference does not allow us to conclude that the interventions are noninferior.[13] Similarly, in a noninferiority design, even if we find that one intervention is statistically significantly better than the other, we cannot conclude that the newer drug is superior to the existing drug.[14] We must conduct a larger study with an appropriate sample size to confirm this. A sketch of how such predefined margins drive the conclusion is given below.
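
To make the margin logic concrete, here is a small Python sketch (not from the article; the counts, the function name difference_ci, and the ±10% margins echoing the worked example above are illustrative assumptions). It compares a simple normal-approximation confidence interval for the difference in good-outcome proportions (new minus existing) against prespecified superiority and noninferiority margins.

```python
from math import sqrt

def difference_ci(good_new, n_new, good_old, n_old, z=1.96):
    """95% normal-approximation CI for the difference in proportions (new - existing)."""
    p_new, p_old = good_new / n_new, good_old / n_old
    diff = p_new - p_old
    se = sqrt(p_new * (1 - p_new) / n_new + p_old * (1 - p_old) / n_old)
    return diff - z * se, diff + z * se

# Invented counts: 190/400 good outcomes on the new drug vs. 140/400 on the existing one.
low, high = difference_ci(good_new=190, n_new=400, good_old=140, n_old=400)

superiority_margin = 0.10      # new must beat the existing drug by >10 percentage points
noninferiority_margin = -0.10  # new may be at most 10 percentage points worse

print("superior by margin" if low > superiority_margin else "superiority by margin not shown")
print("noninferior" if low > noninferiority_margin else "noninferiority not shown")
```

With these invented numbers, the lower confidence limit clears the noninferiority margin but not the 10% superiority margin, which mirrors the point that the two claims must be prespecified and judged separately.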


  Ideal Versus Pragmatic Trial


Clinicians often find that results obtained and reported from the strictly controlled environment of a clinical trial do not translate into equal results in an actual clinical practice setting. The clinical trial is a well-regulated, controlled environment, whereas clinical practice settings are often not strictly controlled. Interventions in a clinical trial are tested on subjects selected with strict inclusion and exclusion criteria and with strict regulation of follow-up. In a real-life clinical setting, patients may not always fall into the strict boxes of inclusion and exclusion set out in a trial. In a pragmatic setup, a clinician wants to know how the intervention will perform in a clinical setting that caters to a wide-ranging variety of patients.


  Description of Interventions


It is important to have clear, precise descriptions of the interventions and comparators. This helps ensure that there is no overlap between the groups that could lead to misclassification bias, and that the outcomes truly represent differences between the groups.

The PICOT format is a helpful approach to summarize research questions that explore the effect of therapy (a structured example follows the list):

  • (P) – Population refers to the sample of patients we wish to recruit for the study
  • (I) – Intervention refers to the treatment that will be provided to subjects enrolled in the study
  • (C) – Comparison is the treatment that we are comparing with the intervention (I). This is also called the control group. If an existing treatment is considered the “gold standard,” then it should be the comparison group
  • (O) – Outcome represents the variables we plan to measure to understand the effectiveness of the intervention
  • (T) – Time describes the duration of data collection.[15]
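
Purely as an illustration (the trial, the field names, and the class are invented, not taken from the article), a PICOT question can be written down as a small structured record so that none of the five elements is left implicit:

```python
from dataclasses import dataclass

@dataclass
class PICOTQuestion:
    population: str    # P - who will be recruited
    intervention: str  # I - the treatment under study
    comparison: str    # C - the control / existing standard
    outcome: str       # O - what will be measured
    time: str          # T - duration of data collection

question = PICOTQuestion(
    population="Adults with newly diagnosed primary open-angle glaucoma at a tertiary centre",
    intervention="Hypothetical new fixed-combination pressure-lowering eye drop",
    comparison="Existing standard single-agent eye drop",
    outcome="Proportion achieving target intraocular pressure",
    time="12 months of follow-up",
)
print(question)
```

Writing the question in this explicit form makes it easier to check later that the protocol, the sample size inputs, and the analysis plan all address the same population, comparison, and outcome.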



  Operational Definitions


We must describe in detail the interventions and comparators, along with their assessment and ascertainment, and define the outcomes of interest and how they will be assessed and ascertained. Standard definitions or standard methodologies should be used if they are available. The possible adverse outcomes, and the methods for their assessment and ascertainment, must also be defined.


  Selection of Study Population


The selection of the study population is an important aspect of the clinical trial. The study population must be of adequate size so that random error is minimized and any identified differences are true differences between the groups. This is ensured by sample size calculation, which I shall discuss shortly. The study population must also be representative so that the study results can be extrapolated to the general population.[16]


  So How Do We Select a Study Population?


We must define certain clinical and demographic criteria based on our research question. We also devise certain geographic strategies (e.g., selection of patients coming to the glaucoma clinic of a tertiary care center only) and time-bound strategies (e.g., selection of patients coming between January 2021 and December 2021). These four strategies determine our inclusion criteria. To this subset of the population, we apply certain exclusion criteria. Usually, we exclude patients whom we think might affect the quality of our data, the acceptability of our study, or the success of follow-up. The group that we finally obtain is the study population. Ideally, we exclude subjects (based on a clinical determination by the managing physician) who will definitely benefit from the new intervention or who will definitely not benefit from it. Subjects who state a preference for either the new intervention or the existing intervention are also excluded from a clinical trial to reduce bias.

The study population may be drawn from a contained geographic zone or from multiple geographic zones, and from a single center or from multiple centers. We must be pragmatic in deciding which scientific and which practical criteria for inclusion and exclusion are to be selected to obtain a representative sample. Considering only purely scientific criteria can provide good internal validity, but the results cannot be generalized to the population, i.e., there is minimal external validity. On the other hand, considering purely practical criteria may result in poor internal validity and poor reproducibility of results. Obviously, this balance depends not only on the methodology but also on the research question and the design of the study.[17]

Further, to maintain a representative sample, we must also periodically monitor the recruitment strategy and address loss to follow-up. These strategies must be incorporated right from the design of the study through its implementation. They may include incentives to patients and repeated, sustained contact with patients.[18]


  Sample Size Calculation


Sample size estimation is an important part of clinical research. It requires some logical thinking and some information which the investigator should know before starting the study. Sample size estimation begins with identifying the variable or outcome you want to study, including the primary and secondary outcomes. The expected frequency of the study variable and the proposed difference between study groups (effect size) of clinical relevance are predefined before the trial. Then specify the desired precision of the estimate, for example, the standard deviation. These figures are usually obtained from a review of the literature; if they are unknown, a pilot study may have to be done to derive estimates from your own population. Rarely, if the population is thought to be normally distributed, the standard deviation may be approximated as one-sixth of the range. The lower the precision needed, the smaller the sample size will be. Then indicate the degree of certainty you want about the desired precision. Adjustments must be made for the population size, the design effect, and the expected response rate. To summarize, sample size estimation requires the primary outcome measure, the alpha and beta errors, the power of the study, and the proposed difference between study groups. The anticipated margins (superiority and noninferiority margins) and the allocation ratio are other considerations that determine the sample size. There are several software programs (commercial and free) that help with sample size estimation for clinical trials, and a worked sketch is shown below. However, it is recommended to consult a biostatistician at the design stage of a clinical trial to make sure that the appropriate sample size is estimated after considering all necessary inputs. We must remember that sample size estimation is a statistical calculation based on clinical data inputs.[19]
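
As a rough sketch of what such software does, the following Python function implements the standard normal-approximation formula for comparing two proportions. The function name and the worked numbers (35% vs. 45% good outcomes, taken from the superiority example earlier) are assumptions for illustration, and the result should always be confirmed with a biostatistician.

```python
from math import ceil, sqrt
from scipy.stats import norm

def sample_size_two_proportions(p_control, p_new, alpha=0.05, power=0.80, ratio=1.0):
    """Approximate group sizes for comparing two proportions.

    p_control : expected good-outcome proportion with the existing intervention
    p_new     : good-outcome proportion that would make the new intervention worthwhile
    ratio     : allocation ratio (new arm size / control arm size), e.g. 1.0 for 1:1
    Returns (n_control, n_new).
    """
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided type 1 (alpha) error
    z_beta = norm.ppf(power)            # power = 1 - type 2 (beta) error
    p_bar = (p_control + ratio * p_new) / (1 + ratio)
    pooled = (1 + 1 / ratio) * p_bar * (1 - p_bar)
    unpooled = p_control * (1 - p_control) + p_new * (1 - p_new) / ratio
    n_control = (z_alpha * sqrt(pooled) + z_beta * sqrt(unpooled)) ** 2 / (p_control - p_new) ** 2
    return ceil(n_control), ceil(ratio * n_control)

# Worked example from the text: 35% good outcomes with standard care,
# 45% judged clinically worthwhile for the new intervention.
print(sample_size_two_proportions(0.35, 0.45))
```

For the 35% versus 45% example with a two-sided alpha of 0.05 and 80% power, this gives roughly 376 subjects per arm, before any inflation for expected dropouts.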


  Randomisation, Allocation Concealment and Masking


Randomization of the patients into discrete study groups is the cornerstone of a randomized controlled trial (RCT).[20] This is the process by which each patient has an equal chance of being assigned to any one of the study groups, which means that the groups are similar with reference to known and unknown prognostic factors. The first step is the generation of a random sequence. Simple allocation strategies such as tossing a coin or an odd-and-even number approach are possible but are not recommended in practice. A truly random number sequence can be obtained from computer-generated programs that include a random number table or from sites such as www.random.org. The randomization generation can start from a changing seed or a static seed, can be based on unequal or equal intervals or blocks, and can be with or without replacement. Besides randomization, we can choose to allocate subjects equally between the intervention arms, for example, in a 1:1:1 ratio. We can also opt for allocation ratios of 1:2 or 1:3 based on the prevalence of the condition. A sketch of one common approach, permuted-block randomization, follows.
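
The sketch below shows one common way of generating such a sequence, permuted-block randomization, in Python; the block size of 4, the arm labels, and the seed handling are illustrative assumptions rather than a recommendation.

```python
import random

def permuted_block_sequence(n_subjects, block_size=4, arms=("A", "B"), seed=None):
    """Allocation list of length n_subjects using randomly permuted blocks."""
    assert block_size % len(arms) == 0, "block size must be a multiple of the number of arms"
    rng = random.Random(seed)           # a fixed seed makes the sequence reproducible
    per_arm = block_size // len(arms)
    sequence = []
    while len(sequence) < n_subjects:
        block = list(arms) * per_arm    # equal numbers of each arm within every block
        rng.shuffle(block)              # randomise the order within the block
        sequence.extend(block)
    return sequence[:n_subjects]

# A = new intervention, B = comparator; 12 subjects, blocks of 4, 1:1 allocation.
print(permuted_block_sequence(12, seed=2021))
```

In a real trial, the sequence would be generated by someone independent of recruitment and held centrally, so that generating the list and concealing it remain separate tasks.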

The randomization sequence must be such that the investigator is unaware of the sequence of allocation of participants into any of the arms of the study before and until the assignment is complete. A plan of allocation concealment must be stated in the protocol. Modern modes of allocation concealment include centralized or remote telephone-based, computer-based, or web-based allocation systems. Sequentially numbered opaque sealed envelopes, though easy and cost-effective, have a greater risk of failure. In ophthalmology, since we have a pair of eyes, we can choose to study the effect of the intervention in one randomly selected eye and use the fellow eye as a control. The goal of allocation concealment is to prevent selection bias. Without allocation concealment, there is every chance that the effects of the intervention will be overestimated or underestimated.[21]

Masking (previously called blinding) is aimed at preventing ascertainment bias. This means that investigators, patients, or anyone who can directly or indirectly influence the conduct and outcome of the study are prevented from knowing the patient's allocation.[22] There is no standard definition of terms such as single blind, double blind, or triple blind; we must explicitly define who was masked. Double blind does not necessarily mean that the patient and investigator are masked; it could be the patient and the outcome assessor, who might have no direct contact with the patient. An open-label trial means that both the investigator and the patient know to which intervention the patient is assigned. It is important to remember that masking cannot always be incorporated into an RCT, especially when the intervention is a surgical procedure.[23]


  Random Error, Bias and Confounding


Before we talk of bias and chance, we must know the meaning of validity, both internal and external. Internal validity means that methods have been employed to minimize errors made by the patients, the investigators, or those who conceptualized or analyzed the clinical trial; in other words, every effort has been made to make the study accurate. External validity is the extent to which the results of our clinical trial can be generalized to the population.[23] Bias is error introduced into the study by the patient and/or the investigator such that the association between the independent and dependent variables is incorrectly estimated; it is a systematic error.[24] A type 1 or alpha error causes the null hypothesis to be rejected even though no true difference exists between the groups; the conclusion is a false positive. A type 2 or beta error causes the null hypothesis to be accepted even though a true difference exists between the groups; this is a false-negative conclusion. The null hypothesis is the opposite of what we expect from the study. A small simulation illustrating the type 1 error is sketched below.
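
A small simulation can make the alpha error tangible. In the sketch below (all numbers are invented), both arms have the same true outcome rate, so the null hypothesis is actually true; roughly 5% of the simulated trials are nevertheless "statistically significant" at alpha = 0.05.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
alpha, n_per_arm, true_rate = 0.05, 200, 0.35   # same true outcome rate in both arms
n_trials, false_positives = 2000, 0

for _ in range(n_trials):
    control = rng.binomial(n_per_arm, true_rate)   # good outcomes in the control arm
    new = rng.binomial(n_per_arm, true_rate)       # good outcomes in the "new drug" arm
    table = [[control, n_per_arm - control],
             [new, n_per_arm - new]]
    _, p_value, _, _ = chi2_contingency(table, correction=False)
    false_positives += p_value < alpha

print(false_positives / n_trials)   # close to 0.05, i.e. the chosen alpha
```

Increasing the sample size does not reduce this false-positive rate; only lowering alpha does, whereas the beta error is what shrinks as the sample size grows.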

The two most important types of bias are selection bias and information bias. Selection bias leads to a difference in the baseline demographic and clinical characteristics between the study groups. Information bias can be dealt with by providing precise operational definitions of variables and detailed measurement protocols; it also calls for repeated training of the personnel involved in the trial, along with data audits and data cleaning.

A confounder is a variable that is associated with both the intervention (the independent variable) and the primary outcome variable and thereby distorts the relationship between them. Confounding is the confusion of effects.[25] It can cause overestimation or underestimation of effects or, in the worst case, completely reverse the direction of an effect. Randomization, which aims to make the intervention and control groups as similar as possible, ensures that confounding factors are also evenly distributed between the groups. If we think that a given factor is likely to be a confounder, we can select groups that contain or do not contain that factor; this process is known as restriction. The only way to account for the influence of confounders at the analysis stage is by performing an analysis that adjusts for the effects of the confounders and their interactions, as sketched below. Finally, it must be remembered that unless we take the entire population into consideration, the chance of an unknown variable affecting the outcome cannot be eliminated.[26]
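
The sketch below illustrates the idea of an adjusted analysis on simulated data (the variable names, effect sizes, and the use of logistic regression are illustrative assumptions): a confounder, disease severity, influences both who receives the new drug and the outcome, so the crude treatment estimate is distorted until severity is adjusted for.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
severity = rng.binomial(1, 0.5, n)                 # hypothetical confounder: disease severity
treatment = rng.binomial(1, 0.3 + 0.4 * severity)  # severe patients more often get the new drug
# Severity also worsens the outcome, so it confounds the treatment-outcome relationship.
p_good = 1 / (1 + np.exp(-(0.5 + 0.8 * treatment - 1.5 * severity)))
outcome = rng.binomial(1, p_good)
df = pd.DataFrame({"good_outcome": outcome, "treatment": treatment, "severity": severity})

crude = smf.logit("good_outcome ~ treatment", data=df).fit(disp=False)
adjusted = smf.logit("good_outcome ~ treatment + severity", data=df).fit(disp=False)
print(round(crude.params["treatment"], 2), round(adjusted.params["treatment"], 2))
# The crude estimate is pulled toward zero; the adjusted one is close to the simulated 0.8.
```

In a properly randomized trial, severity would be balanced between the arms by design, which is exactly why randomization is preferred over relying on statistical adjustment alone.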


  Compliance


Compliance is ensured by maintaining treatment logs after enrollment.


  Manual


A precise standard operating procedure must always accompany the protocol, providing a detailed description of the path taken by the patient from presentation at the clinic. A CONSORT flow diagram showing in detail the steps of enrollment, intervention, and analysis is an extremely helpful adjunct to the manual.


  Ethics in Conducting a Clinical Trial


The Good Clinical Practice (GCP) guidelines published by the International Council for Harmonisation (ICH) help regulate clinical trials. GCP is a standard for the design, conduct, performance, monitoring, auditing, recording, analyses, and reporting of clinical trials that provides assurance that the data and reported results are credible and accurate, and that the rights, integrity, and confidentiality of the trial subjects are protected.[27]

Today, the ICH-GCP guidelines are used in clinical trials throughout the world with the main aim of protecting and preserving human rights.


  Review Boards


The clinical trial first passes through a scientific review by the Institutional Research Committee, which determines whether the research question, study design, and methodology are sound. Next, it undergoes an ethical review by the Institutional Ethics Committee to ensure the safety and welfare of research participants. The trial must be registered in the Clinical Trials Registry-India (http://www.ctri.in/) before enrollment of the first subject. Registration in the following trial registers is also acceptable: http://www.actr.org.au/; http://www.clinicaltrials.gov/; http://isrctn.org/; http://www.trialregister.nl/trialreg/index.asp; and http://www.umin.ac.jp/ctr.

Depending on the nature of intervention, the trial may have to pass through regulatory reviews conducted by the Drug Controller General of India, or Screening Committees instituted by the Ministry of Health, Government of India.


  CONSORT Checklist


This is a list of items that journal editors expect to be strictly reported by the authors of an RCT. The CONSORT (CONsolidated Standards of Reporting Trials) 2010 guideline is intended to improve the reporting of parallel-group RCTs, enabling readers to understand a trial's design, conduct, analysis, and interpretation, and to assess the validity of its results.[28] It contains a 25-item checklist and a flow diagram, freely available for viewing and downloading. An accompanying explanation and elaboration document, intended to enhance the use, understanding, and dissemination of the CONSORT statement, presents the meaning and rationale for each new and updated checklist item, providing examples of good reporting and, where possible, references to relevant empirical studies. Several examples of flow diagrams are also included.[29]


  Conclusion


The randomized controlled trial is considered the most powerful design in clinical research. If all the variables have been distributed equally between the groups by this process, then any difference in the outcome can only be due to the intervention. In the hierarchy of evidence for clinical interventions, randomized controlled trials constitute the best process for seeking the truth.[30] All the stakeholders in a clinical trial need to ensure that trial outcomes are developed with patients in mind, that unbiased methods are adhered to, and that results are reported in full and in line with those prespecified at the trial outset.[31]

Acknowledgment

I acknowledge the support provided by Dr. Praveen Nirmalan, Chief Research Mentor of the Research Methodology program organised by the Kerala Journal of Ophthalmology.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
  References

1. Freedland KE, King AC, Ambrosius WT, Mayo-Wilson E, Mohr DC, Czajkowski SM, et al. The selection of comparators for randomized controlled trials of health-related behavioral interventions: Recommendations of an NIH expert panel. J Clin Epidemiol 2019;110:74-81.
2. Hariton E, Locascio JJ. Randomised controlled trials – The gold standard for effectiveness research: Study design: Randomised controlled trials. BJOG 2018;125:1716.
3. Sullivan GM. Getting off the “gold standard”: Randomized controlled trials and education research. J Grad Med Educ 2011;3:285-9.
4. Bhide A, Shah PS, Acharya G. A simplified guide to randomized controlled trials. Acta Obstet Gynecol Scand 2018;97:380-7.
5. Schultz A, Saville BR, Marsh JA, Snelling TL. An introduction to clinical trial design. Paediatr Respir Rev 2019;32:30-5.
6. Umscheid CA, Margolis DJ, Grossman CE. Key concepts of clinical trials: A narrative review. Postgrad Med 2011;123:194-204.
7. Finniss DG. Placebo effects: Historical and modern evaluation. Int Rev Neurobiol 2018;139:1-27.
8. Kukreja JB, Thompson IM Jr., Chapin BF. Organizing a clinical trial for the new investigator. Urol Oncol 2019;37:336-9.
9. Yan F, Thall PF, Lu KH, Gilbert MR, Yuan Y. Phase I-II clinical trial design: A state-of-the-art paradigm for dose finding. Ann Oncol 2018;29:694-9.
10. World Health Organization. Chapter 4 – Experimental studies and clinical trials. In: Health Research Methodology: A Guide for Training in Research Methods. Manila: WHO Regional Office for the Western Pacific; 2001. p. 55-70. Available from: https://apps.who.int/iris/handle/10665/206929. [Last accessed on 2021 Feb 14].
11. Wang B, Wang H, Tu XM, Feng C. Comparisons of superiority, non-inferiority, and equivalence trials. Shanghai Arch Psychiatry 2017;29:385-8.
12. Honório HM, Wang L, Rios D. Non-inferiority clinical trials: Importance and applications in health sciences. Braz Oral Res 2020;34 Suppl 2:e072.
13. Dunn DT, Copas AJ, Brocklehurst P. Superiority and non-inferiority: Two sides of the same coin? Trials 2018;19:499.
14. Sormani MP. Why non-inferiority is more challenging than superiority? Mult Scler 2017;23:790-1.
15. Riva JJ, Malik KM, Burnie SJ, Endicott AR, Busse JW. What is your research question? An introduction to the PICOT format for clinicians. J Can Chiropr Assoc 2012;56:167-71.
16. Al-Baimani K, Jonker H, Zhang T, Goss GD, Laurie SA, Nicholas G, et al. Are clinical trial eligibility criteria an accurate reflection of a real-world population of advanced non-small-cell lung cancer patients? Curr Oncol 2018;25:e291-7.
17. Weijer C. Selecting subjects for participation in clinical research: One sphere of justice. J Med Ethics 1999;25:31-6.
18. Sedgwick P. What is an open label trial? BMJ 2014;348:g3434.
19. Hickey GL, Grant SW, Dunning J, Siepe M. Statistical primer: Sample size and power calculations – Why, when and how? Eur J Cardiothorac Surg 2018;54:4-9.
20. Ernest P, Jandrain B, Scheen AJ. Forces et faiblesses des essais cliniques. Evolution en fonction de l'essor de la médecine personnalisée [Strengths and weaknesses of clinical trials. Evolution with the rise of personalized medicine]. Rev Med Liege 2015;70:232-6.
21. Schulz KF, Grimes DA. Allocation concealment in randomised trials: Defending against deciphering. Lancet 2002;359:614-8.
22. Day SJ, Altman DG. Blinding in clinical trials and other studies. BMJ 2000;321:504.
23. Patino CM, Ferreira JC. Internal and external validity: Can you apply research study results to your patients? J Bras Pneumol 2018;44:183.
24. Delgado-Rodríguez M, Llorca J. Bias. J Epidemiol Community Health 2004;58:635-41.
25. Rothman KJ, Greenland S. Modern Epidemiology. 2nd ed. Philadelphia: Lippincott-Raven; 1998.
26. Zaccai JH. How to assess epidemiological studies. Postgrad Med J 2004;80:140-7.
27. ICH Good Clinical Practice (GCP) guidelines. Available from: https://ichgcp.net/. [Last accessed on 2021 Feb 14].
28. Schulz KF, Altman DG, Moher D; for the CONSORT Group. CONSORT 2010 statement: Updated guidelines for reporting parallel group randomised trials. BMJ 2010;340:c332.
29. Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ, et al. CONSORT 2010 explanation and elaboration: Updated guidelines for reporting parallel group randomised trials. BMJ 2010;340:c869.
30. Petrisor B, Bhandari M. The hierarchy of evidence: Levels and grades of recommendation. Indian J Orthop 2007;41:11-5.
31. Heneghan C, Goldacre B, Mahtani KR. Why clinical trial outcomes fail to translate into benefits for patients. Trials 2017;18:122.




 
