Reprinted from FDA News
This report is largely drawn from materials presented during a recent FDAnews webinar led by
Steven Walfish, president of Statistical Outsourcing Services. The consultancy provides statistical analysis
and training to various industries, including the medical device manufacturing industry. Previously,
Walfish was senior manager of biostatistics, nonclinical, at Human Genome Sciences and a senior associate
at PricewaterhouseCoopers, where he specialized in the pharmaceutical industry.
Statistical Techniques and Design Controls
When looking at design verification and validation, the FDA expects devicemakers to answer two
questions in a clear and detailed fashion:
• Did you make the product correctly?
• Did you make the right product?
Manufacturers may answer both questions through verification and validation testing, part of any
design control requirements, using sample sizes appropriate for both the types of testing done and the
types of product.
Design verification confirms that the design outputs meet the design input requirements. In other
words, verification confirms that “you made it right,” said Steven Walfish, president of Statistical Outsourcing
Services.
Design validation confirms that the product fulfills its intended purpose or use and meets the user’s
needs. Validation confirms that a company “made the right thing,” Walfish said, noting that a company
would start the validation with a set of user requirements.
The use and justification of appropriate sample sizes is one of the requirements for effective verification
and validation testing. This area receives a great deal of FDA attention during device approvals
and inspections. From a statistical perspective, companies should figure out the best sample size
to use for both types of testing. That helps “to get the level of comfort that we have, in fact, met the
design inputs from our design output process, and that the final medical device does meet the intended
use,” Walfish said.
Types of Applicable Problems
Statistical methods can be applied to solve a diverse set of problems related to device verification
and validation.
“There is a whole fleet of statistics that we can bring to bear during the validation and verification
process,” Walfish said. The correct sample size must be chosen for any of these activities, which include:
• Determining the usefulness of a limited number of test results of a product characteristic (hypothesis
testing);
• Predicting the expected range of values for a product characteristic based on a limited number
of test data (the confidence interval, or CI);
• Determining the number of tests required to provide adequate data to support a conclusion (the
sample size);
• Comparing test data between two alternate designs in a design change (hypothesis testing);
• Predicting the amount of product that will fall within specifications (tolerance interval);
• Predicting system performance variation (analysis of variance, or ANOVA);
• Planning experiments to discover factors that influence a product’s characteristic (design of
experiments); and
• Determining the quantitative relationship between two or more variables (regression).
Understanding which statistical techniques to apply to which problems begins with what Walfish
called the statistical building blocks. This requires a company to first answer a key question: What is the
level of confidence the company wants to have for a particular device? Stated differently: How much
risk is the company willing to live with as a result of its design and development process?
“We’re not going to meet perfectly every design requirement or every user requirement,” Walfish
said. “There’s going to be a certain risk that is going to occur.”
The second thing companies must understand is the relationship between risk management and
probabilistic models.
“A lot of people like to think of this — as some companies will call it — as a P1 value,” Walfish
said. This refers to the probability of a given failure occurring and, given that probability of occurrence,
a determination of the severity of the impact on the patient if the failure does occur.
“That becomes your harm,” he explained. “This is where we start to look at how you assess these
things and get the right sample size to ensure you are not only meeting your user requirements, but also
that you can assess the risk of putting the patient at any harm because of potential adverse consequences
of that design.”
Thus, the sample size not only helps device manufacturers meet their design verification and validation
needs but also plays a key role in driving how they look at risk management, by relating risk to
sample size.
With that in mind, companies must next examine the type of data they will be collecting and scrutinizing.
Statistical data are generally categorized as either discrete or continuous. The root difference
between the two is that discrete data are count data with a finite number of possible values, whereas
continuous data have infinite possible outcomes. Continuous data are preferred for statistical exercises,
such as developing sample sets for verification and validation testing, but are not always feasible.
With testing involving discrete data, a company will be doing simple pass/fail tests. Continuous data
typically measure something about the output of a device, such as cycle times, voltages or pressures.
“Typically, very early on in design, we tend to set our specification around pass/fail attribute requirements,”
Walfish said. “So we count 50 cycles, and if after those 50 cycles it doesn’t fail, we say that’s a
success and the device or component passed.”
Confidence Level
The next step is for a company to look at how many devices, or units, it must test to provide sufficient
confidence that zero failures in the sample can reasonably be interpreted to mean that the product
meets the user requirements, including safety factors.
Two key factors influence this confidence level, which must be asserted through the use of statistics.
Risk is the primary factor: A higher likelihood of harm to a user or patient requires a higher confidence
level. Related to sample size, this means that the sample size should be proportionate to the risk of the
feature undergoing verification or validation. In other words, a greater risk potential calls for a larger
sample size.
Conversely, if the potential for harm to a user or patient is less, the sample size necessary to achieve
the desired confidence level will be smaller.
The second factor is variability, which covers variation from unit to unit or from batch to batch, as
well as any variation in a company’s measurement system. In short, where more variability is likely, a
larger sample size is necessary to establish the desired confidence level. Conversely, where variability is
small, sample sizes are likewise smaller.
Risk and Sample Size
It is impossible to discuss design controls without talking about risk management, as well. The ultimate
goal of verification and validation testing is to show that there is a minimal risk that a device will not
perform as intended. To prove this to the FDA’s statistical satisfaction, companies must select sample
sizes that will accurately reflect the real risk. To do this, devicemakers need to consider several factors.
These include the product’s overall risk profile.
“When you get to the verification and validation sample consideration piece, the most important
thing is to tie it into your risk management,” Walfish advised. “So when you are doing design verification
and validation, your design history file and developing a risk plan, the sample size is ultimately
going to be tied into that risk plan.”
In a nutshell, a company can trade sample size for risk. The less risk a devicemaker is willing to take,
the larger the sample size it will need. If more risk is acceptable, a smaller sample size will suffice.
Figure 1 shows a procedure that a device manufacturer might apply to determine whether to accept or
reject a design based on the results of sample testing.
“So we take a sample and, based on that sample, we either decide to accept or reject the design,”
Walfish explained. “Acceptable means the design meets the user’s requirements. So if my sample says
it’s a good design, then I’ve made a correct design.”
While that remains the goal of all verification and validation testing, in reality a devicemaker may
find itself faced with a sample that shows an error. This can occur even if the design does, in fact, meet
user requirements, Walfish noted. This would be a Type I error, or the “producer risk” noted in Figure 1.
This is where the concept of the CI comes into play, Walfish said. “When we say that we’re 95 percent
confident, we’re saying that there’s a 5 percent chance that we’re going to reject the null hypothesis
when it’s true, or we’re going to reject the design when, in reality, it’s actually a good design.”
CI changes are proportional to product variation and risk. The same applies to a Type II error, or risk
to consumers; it is referred to as reliability.
“The most important situation is the Type II error, consumer risk,” Walfish said. “This occurs when
the sample size that we took — typically too small of a sample size — leads to a decision to accept the
design when more testing would have led to a decision to reject it, and in fact, the consumers — the
patients — are going to get a device that doesn’t meet their requirements.”
As with the CI, a 95 percent reliability figure indicates that there is a 5 percent chance of a Type II
error. Reliability is the more important of the two figures for device manufacturers. It also will receive
more FDA scrutiny during an inspection. That is because reliability has a more direct bearing on the
safety and efficacy of a device when it reaches the patient.
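The trade-off between the two error types can be made concrete with a short calculation. This is an illustrative sketch; the 28-test zero-failure plan and the 0.99 and 0.90 reliability values are assumed for the example, not taken from a specific device:

```python
def pass_probability(true_reliability: float, n_tests: int) -> float:
    """Chance that all n independent pass/fail tests succeed when each
    passes with probability `true_reliability`."""
    return true_reliability ** n_tests

# Producer (Type I) risk: a genuinely good design (true reliability
# 0.99) can still fail a 28-test zero-failure plan.
producer_risk = 1 - pass_probability(0.99, 28)   # about 0.25

# Consumer (Type II) risk: a design sitting exactly at the 0.90
# reliability floor can still pass all 28 tests.
consumer_risk = pass_probability(0.90, 28)       # about 0.05
```

Even a small zero-failure plan thus caps the consumer risk at roughly 5 percent, while leaving a noticeable chance that a good design is rejected and retested.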
Accurate determination of appropriate confidence and reliability levels is critical to determining the
correct sample size for the verification and validation testing of a specific device.
As a general rule of thumb, companies can look at the functionality of a particular device when
determining appropriate levels of confidence and reliability at different risk levels. For instance, Walfish
said, a confidence level of 95 percent and a reliability level of 90 percent would be appropriate for a
low-risk device, such as a radiation shield for an X-ray machine. Such a device would need to be validated
for its ability to shield an operator from extraneous radiation. This assumes that the failure mode
and effects analysis (FMEA) and the product risk file do, in fact, identify the shield as a low-risk feature,
Walfish emphasized.
If a product can be categorized as higher risk, those figures must change. In fact, devicemakers
must be diligent in assuring that they have applied the correct risk category to any such product. For
instance, for a medium-risk product, both the confidence and reliability figures would need to be at 95
percent, he suggested. Meanwhile, for a high-risk product, those figures would need to be 95 percent
and 99 percent, respectively.
These figures are not carved in stone, Walfish emphasized, adding, “Every company, every organization
is going to do this a little bit differently. I’m giving you a ballpark stick in the ground to work with.
Every company has a different risk tolerance, and every company manufactures different risk and different
classes of medical devices. If you’re in the high-risk device world — Class III devices — you probably
don’t want to have anything less than 99 percent confidence and probably no less than 99 percent
reliability, even for low-risk features.”
With the risk-dependent confidence and reliability figures in place, a device manufacturer can determine
the best sample size for its validation or verification testing by applying the following formula:
n = ln(1 − Confidence)/ln(Reliability).
Using that formula, the sample size is 28 for a low-risk product with a confidence level of 95 percent
and a reliability level of 90 percent. Walfish emphasized that that figure refers to the number of tests.
The company would not necessarily have to test 28 devices; in fact, the number of individual devices
tested could be as low as one, depending on variability among devices.
“If the variability of the device is small enough, I might be able to take a single device and test it
28 times and show that in all 28 times that I did that testing, my X-ray shield shielded the extraneous
radiation,” Walfish explained. “How do I do that? I take a Geiger counter, I put it by the shield, I shoot
a beam, and I look at my Geiger counter, and if it’s below a certain number, that’s a pass. I do that 28
times; if all 28 times pass, then of course my validation and verification would pass.”
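The zero-failure relationship behind the 28-test figure can be sketched directly. Note that the raw 95/90 quotient is about 28.4, which this report rounds to 28; some reliability references round up to 29 instead:

```python
import math

def success_run_tests(confidence: float, reliability: float) -> float:
    """Number of consecutive passing tests needed so that observing zero
    failures demonstrates `reliability` at `confidence`; solves
    reliability**n = 1 - confidence for n."""
    return math.log(1 - confidence) / math.log(reliability)

n_raw = success_run_tests(0.95, 0.90)   # about 28.4 tests
```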
To test using fewer devices, a company must be able to show there is minimal variability among
devices; that will serve as its justification for not testing on 28 separate devices. That does not mean that
a company must use only one device for the 28 tests. It could opt to use, for instance, seven devices,
testing each four times.
The other important thing to note when using this approach in determining sample sizes, Walfish
said, is that it assumes a straight pass/fail approach to the testing. This means that all 28 tests under the
above example must “pass.” If even just one test is a “fail,” that means the device has failed verification
or validation testing. As a result, the company must start over with a new series of 28 tests, after identifying
and correcting any problems that contributed to the failure.
When device manufacturers design verification and validation tests, particularly regarding choice
of sample size, they must fully understand the requirements for statistical techniques, including how
different techniques can affect the design control process. It is equally important that manufacturers
be able to determine what is, and what is not, statistical in nature. In that way, they may understand
which requirements lend themselves to statistics, and which are not statistical in nature, Walfish noted.
When applying statistics to these types of problems in the course of verification and validation testing,
companies need to focus on the specific questions they need to answer. For instance, when a devicemaker
undertakes hypothesis testing to see if a limited number of test results can accurately characterize
a product feature, what the company really wants to determine is whether the sample being collected
represents a good product characterization for the devices in question.
Likewise, when looking at the confidence level, the company is really trying to answer the question:
How confident are we — 90 percent, 95 percent, 99 percent — that the true value of the data set of the
sample population will fit inside this predetermined interval, which represents the likelihood that the
device will perform as required?
“And when you have a hypothesis test and a confidence interval together, then we can come back
and answer the question: ‘How many samples do we need?’” Walfish explained. “My hypothesis test and
my confidence interval are going to allow me to determine the appropriate sample size.”
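As an illustration of the interval in question, here is a minimal sketch using Python's standard library. The voltage readings are invented for the example, and the normal quantile is used where a t-quantile would be more exact for so few samples:

```python
from statistics import NormalDist, mean, stdev

# Hypothetical voltage readings from verification testing.
readings = [3.29, 3.31, 3.30, 3.28, 3.32, 3.30, 3.31, 3.29]

m, s, n = mean(readings), stdev(readings), len(readings)
z = NormalDist().inv_cdf(0.975)        # two-sided 95 percent confidence
half_width = z * s / n ** 0.5
ci = (m - half_width, m + half_width)  # interval expected to contain the true mean
```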
It is also important for devicemakers to remember that the key question may not necessarily be a
design validation or verification question, he added. Sometimes a company just needs to do a hypothesis
test to look at the differences between two designs, particularly in terms of whether certain design characteristics
deliver better or worse performance, as discussed earlier in this report. This is another area
where statistical analysis will prove useful.
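A comparison of two alternate designs along these lines might look like the following sketch. The cycle-time data are invented, and a full test would also compare the statistic to a t critical value for the chosen significance level:

```python
from statistics import mean, variance

# Hypothetical cycle times (seconds) from two alternate designs.
design_a = [12.1, 11.8, 12.3, 12.0, 11.9, 12.2]
design_b = [11.2, 11.0, 11.4, 11.1, 11.3, 11.2]

na, nb = len(design_a), len(design_b)
standard_error = (variance(design_a) / na + variance(design_b) / nb) ** 0.5
t_stat = (mean(design_a) - mean(design_b)) / standard_error  # Welch's t statistic
# A large |t_stat| suggests the two designs genuinely differ in cycle time.
```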
Tolerance intervals are important to devicemakers. These figures indicate the ability of the company
— and hence, of the user and the patient as well — to predict how much of a product will be within or
outside specifications. For instance, Walfish said, if a manufacturer is 95 percent confident that 1 percent
of its product could be out of spec, even though it doesn’t observe any out-of-spec units during verification
and validation, that company can still look at the variability and make a statement about what it
estimates the out-of-spec, or out-of-tolerance, percentage of product will be.
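A rough tolerance-interval calculation can be sketched as below. The seal-strength data are invented, and the k factor shown is only the large-sample normal quantile; exact k factors, which also account for confidence in the estimated mean and standard deviation, come from published tolerance-factor tables:

```python
from statistics import NormalDist, mean, stdev

# Hypothetical seal-strength readings.
data = [10.2, 9.8, 10.1, 10.0, 9.9, 10.3, 10.0, 9.7, 10.1, 9.9]

coverage = 0.99  # fraction of product the interval should contain
# Large-sample approximation to the two-sided tolerance factor.
k = NormalDist().inv_cdf((1 + coverage) / 2)
m, s = mean(data), stdev(data)
tolerance_interval = (m - k * s, m + k * s)
# If this interval sits inside the spec limits, roughly 99 percent of
# product is predicted to be in spec (under the normal assumption).
```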
Statistical techniques can be applied to predictions of system performance variation, as well. Here,
companies would look at multiple factors simultaneously to see how they might affect the device design
using ANOVA techniques.
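A one-way ANOVA of this kind can be computed directly from sums of squares; the pressure readings below are invented for illustration:

```python
from statistics import mean

# Hypothetical output pressures measured at three process settings.
groups = [
    [5.1, 5.0, 5.2, 5.1],
    [5.4, 5.5, 5.3, 5.4],
    [5.0, 4.9, 5.1, 5.0],
]

grand_mean = mean(x for g in groups for x in g)
k = len(groups)                      # number of groups
n = sum(len(g) for g in groups)      # total observations

ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
# A large F statistic indicates the setting means differ by more than
# within-setting noise alone would explain.
```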
Finally, statistics come into play during experimentation, particularly experimental design. Statistics
help companies discover the factors that influence a product’s characteristics, as well as the
quantitative relationships between two or more variables during regression-type experiments, Walfish
said.
Types of Data Examined
When devicemakers conduct validation and verification testing, they will look at two general types
of data, as previously discussed. Continuous — or typical — data essentially apply to performance requirements.
What companies must examine is part-to-part variability, a factor that is very important to a
device’s risk profile. In choosing a sample size for testing, companies must consider sources of variation,
which can increase a device’s risk.
For example, a tongue depressor represents a simple, low-risk medical device used frequently in
doctors’ offices. If a design requirement calls for the depressor to be flat, and not bowed, the company
making this product must determine how much variability exists as part of its determining a sample
size. Process variability and environmental impacts are among the sources of variability for this product,
Walfish noted.
“But ultimately the question is: ‘What is the risk to the patient if the physician was to get a slightly
bowed tongue depressor?’” Walfish said. “It’s probably a very low risk. So I’m not going to spend a lot
of time in sample size and in the risk for this product.”
However, a high-risk product, such as an implantable pacemaker, is another matter altogether, he
said. If voltage is the specification to be tested, for instance, part-to-part variation in batteries and wiring,
among other parts, can have an effect. The manufacturer has to take into account how those tolerances
will affect the pacemaker’s performance. And that in turn will affect the sample size for validation and
verification testing.
Both verification and validation may require physical samples across lots, batches and operators to
demonstrate that the specified requirements have been fulfilled. This must occur anytime a product has a
high lot-to-lot variability.
“I used to work in capital equipment in the medical device industry, so you’re making these
$200,000 pieces of equipment. You’re not making a lot of them during the validation and verification
process. The assumption in that case is that the unit-to-unit variability is much less than the within-unit
variability,” Walfish said. “But if I’m making widgets, a plastic injection-molded part, and I’m making
30,000 of these things, it’s probably going to be more important for me to be able to look at my verification
and validation requirements across not just lots and batches, but across mold cavities and across different
settings of my process. These are the things that are going to drive the variability of my process.”
The second type of data that devicemakers may need to consider is discrete data, which typically
involve functional requirements. As with continuous data, companies must consider sources of variation
and risk in discrete data when determining sample size. However, individual parts or a process generally
are not the sources of variation seen in discrete data. Rather, the variation stems from such things as
interactions, timing, initial conditions, workflow, prior events, configurations, options and accessories.
“So if I’m going to look at a physical, visual characteristic of my medical device,” Walfish said,
“I don’t know what causes that variability necessarily. I might need to go back and do some design
experiments to understand what causes that variability because I don’t have a variable output to be
able to look at.”
In some situations, Walfish explained, a company must test whether something works or not; he
compared it to flicking a light switch: “If I turn on the switch in a room and the light goes on, I don't
have to test that 100 times. If it works once, it’s always going to work. The design worked, in that it allowed
the switch to turn on the light.”
He also emphasized that validation and verification do not include process validation, but rather are
intended simply to show that the design meets user requirements. In the light switch example above,
the user wants the lights to come on when the switch is thrown. A comparable expectation in the device
world might be that when the user pushes a button, the device powers on. Another user requirement might be
that a system boots up within one minute. These are all examples of discrete data.
In terms of sample size, devicemakers may use smaller numbers of physical samples, with repeated
iterations to stress interactions across timing or users, for instance.
Going back again to the light switch example, Walfish said, “It’s not as though I need to do that 30
times, because the variability is almost nil between turning the switch on, or when I push a button today,
later the same day and tomorrow. So we only have to worry about doing discrete data and statistical
sample size for those characteristics that are discrete and have different sources of variability that we can
quantify in the process.”
Device companies also need to remember that, when dealing with discrete data, not all requirements
are statistical in nature. Walfish stressed that companies must be able to provide the FDA with a documented
rationale for when they consider a test nonstatistical versus statistical.
Nonstatistical Techniques
While the bulk of the work of developing sample sizes for verification and validation testing may center
around statistical data and techniques, there are times when a nonstatistical approach — a simple pass/
fail, it works or it does not work — is more appropriate. This type of testing can be done on a single unit
of a product.
When companies opt to use a nonstatistical method, they must be able to justify that decision. This requires
a clear understanding of when nonstatistical techniques are appropriate. Devicemakers must maintain
written criteria for when they will consider a test to be nonstatistical versus statistical. These criteria can
be as simple as a list of the types of tests that a company has deemed to be nonstatistical, Walfish said.
Companies should include the written criteria for statistical versus nonstatistical testing in the validation
and verification protocol. The nonstatistical tests may sometimes be referred to as “unit tests,”
Walfish noted, referring to the ability to get the necessary information by testing just a single unit.
That justification comes down to risk. When the risk to the patient is lower, a nonstatistical unit test
is more likely to be appropriate. Sometimes a nonstatistical test can be conducted simply to verify the
existence of a feature, for instance. As long as the design includes that feature, and the test indicates that
the feature is present, the user requirements have been met.
Sources of Error
All verification and validation testing must be concerned with variability, also known as error. There
are, essentially, two sources of error.
The first source, which Walfish characterized as the most common and the most focused on by devicemakers,
is the variability of the device or component itself, or the sample statistic around the unit population.
This is the standard deviation, and it represents the part-to-part variability in the process.
The second source is measurement variability, which refers to variability in the measurement of a
device characteristic or output due to the instrument used. The question asked here is: “How much variability
will I get when I take the exact same unit and measure it repeatedly with the same instrument?”
This source of error becomes particularly important when a company wants to look at variability within
a part versus variability part to part.
“If my measurement system variability is very high, then I’m going to need to test the same unit
more times than necessarily testing multiple units,” Walfish said. “It becomes very important when we
start to talk about how we’re going to partition out our sample size.”
Thus, companies must periodically perform a measurement system analysis (MSA). Walfish suggested
that devicemakers should do so before undertaking any data collection and decision-making. MSAs
also should be repeated periodically on a maintenance basis.
“Always, always, always get this measurement system under control,” Walfish said. “Understand the
variability prior to doing any data collection and, more importantly, before you do any decision-making
about the design.”
Some key activities that could warrant an MSA include:
• Making product design decisions;
• Performing process validation;
• The improve phase of a black belt or green belt project; and
• Conducting a final product inspection.
Walfish placed extra emphasis on the first bullet point, saying, “If we’re going to make a decision
about the product design, we want to make sure that our measurement system is adequate, and if we’ve
already done a measurement system analysis, that it’s still valid, before we go about making any decisions
about product design.”
Finally, Walfish cautioned against focusing only on Type I errors — the so-called producer risk —
when developing sampling plans.
“We focus on confidence levels. We focus in on doing hypothesis testing and getting 95 percent confidence.
We focus in on doing a P value that’s got to be less than 0.05,” he said. “We focus, focus, focus
on Type I errors, and it’s a bad way to go around doing this, because a Type I error does not tell me the
risk to a patient. All the Type I error tells me is what is the risk to us, the company.”
While company risk is important, devicemakers also need to consider the FDA’s priority, which is
risk to the patient, or Type II errors. Thus, the agency would like to see sampling plans that also control
for this type of risk.
Statistical Justification
Whenever a company comes up with a sample size for any validation or verification testing exercise,
it must be able to justify that size in a way that will be acceptable to the FDA. A company should, for
instance, have written criteria for determining when it considers a test to be statistical versus nonstatistical.
This can be as simple as a list of the types of tests that it considers to be nonstatistical.
“In some cases, that discussion comes around risk,” Walfish said. “You might make a statement that
the risk to the patient is zero, so you’re not going to do statistical justification. You’re going to do a unit
test and show that these are nonstatistical.”
Companies also need to have written criteria for statistical testing. In some cases, they may rely on
recognized standards for a particular statistical requirement, and these may specify a sample size and acceptance
criteria. If this is the case, a company’s statement that it is using a recognized standard may be
considered a valid justification.
“For example, there is a standard for human factors,” Walfish said. “So when we talk about sample
size for user or usability testing, those sample sizes can be found in that particular standard document,
which I think is HE75. A devicemaker can use that as its criteria.”
He added that a company could test more or less than the standard, and use independent statistical
justification as its rationale. The FDA allows a great deal of leeway for companies to use the approach
that works best for any given individual situation.
“But if you don’t want to come up with a statistical justification, it is sufficient to say that HE75,
Annex A, requires this sample size for this level of confidence and this level of reliability,” he said. “So
ultimately, at the end of the day, you want to document these things, but sometimes standards exist that
allow you to not have to document these particular tests.”
When compiling a statistical justification, devicemakers must remember that there are two parts to
every sample plan — the sample size and the acceptance criterion. Companies must justify both. The
FDA expects that companies will have a rationale, in writing, for the sampling plan selected. The plan
will be based on the risk to the patient. The confidence statements made about a product must be associated
with passing a sampling plan.
For example, if a company is going to do a variables test on an attribute, by testing on continuous
data, it might say that it is going to sample 60 units, with the mean value set between eight and 10. There
must be a rationale for that acceptance criterion. The company must explain how it came up with all acceptance
criteria as well as how it came up with the sample size.
“The rationale comes back to the risk to the patient,” Walfish said. “You can say I am 95 percent
confident with 90 percent reliability that I will meet the requirements. That says that I’m 95 percent
confident that the true failure rate is no worse than 10 percent. So this has to be associated with a
low-risk characteristic, because you’re allowing yourself up to a 10 percent failure rate.”
Question: How is the sample size determination formula, n = (Zα + Zβ)²S²/Δ², different from
n = (Z·s/E)², where E is the margin of error?
Answer: The Z times s over the margin of error, the whole thing squared, only takes into account
the Type I error. So the Z is only the Type I error. What the first formula does is increase the sample
size to effectively cover both the Type I and Type II errors. It will give a slightly larger sample size
than the Z times s over the meaningful difference.
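The difference between the two formulas can be sketched numerically. The α, β, S, and Δ values below are assumed for illustration (one-sided quantiles):

```python
import math
from statistics import NormalDist

z = NormalDist().inv_cdf

def n_type1_only(alpha: float, sd: float, delta: float) -> int:
    """n = (Z * s / delta)^2 -- accounts only for the Type I error."""
    return math.ceil((z(1 - alpha) * sd / delta) ** 2)

def n_both_errors(alpha: float, beta: float, sd: float, delta: float) -> int:
    """n = (Z_alpha + Z_beta)^2 * S^2 / delta^2 -- covers both error types."""
    return math.ceil((z(1 - alpha) + z(1 - beta)) ** 2 * sd ** 2 / delta ** 2)

n_small = n_type1_only(0.05, sd=1.0, delta=0.5)
n_large = n_both_errors(0.05, 0.10, sd=1.0, delta=0.5)
# n_large exceeds n_small, as the answer above describes.
```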
Q: In cases of process validation or relocation of manufacturing equipment, is n = (Zα + Zβ)²S²/Δ²
appropriate?
A: It’s appropriate for everything. The only difference is that process validation or relocation will
change the Type I and Type II errors, because the company has already been making this product —
relocation is a good example. The company is transferring production from one facility to another —
it’s been doing so for a while. It can probably run at a lower confidence and reliability level because it
has a certain level of experience that hopefully will bear itself out.
Q: When would a company use which sample size formula?
A: Three different sample size formulas have been discussed. The first — n = (Zα + Zβ)²S²/Δ² — is
for a mean value, which has to be within a certain spec. The second is the k-value, or tolerance interval,
approach — X̄ + k·S < U and X̄ − k·S > L — with the individual values being within certain upper and
lower specifications. The third is the natural log of one minus confidence over the natural log of
reliability, n = ln(1 − Confidence)/ln(Reliability). That’s the attribute go/no-go case, and that’s when
all of the tests must be found to be acceptable. So those are the three scenarios.
Q: Is ANSI sampling used for manufacturing and not development?
A: Don’t use ANSI Z1.4 or Z1.9 for the purposes of validation — design validation or verification
or process validation. Those standards are written for incoming inspection. There are switching rules
associated with them, and that is not something seen in design verification and validation activities.
Q: Can process capability (Cpk) be used to size a sample size or criteria?
A: Cpk can be related back to these things. Just multiply Cpk by 3; that would be the Z value. So
a company can say it has a Cpk of 1; that’s the same thing as having a Z of 3, which is the same thing
as having a nonconforming rate of about 0.27 percent (roughly 99.73 percent reliability). The company
can then use that information to come up with the sample size and the acceptance criteria that it needs
to have for Cpk.
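That Cpk-to-fallout conversion can be sketched as follows, assuming a centered normal process:

```python
from statistics import NormalDist

def nonconforming_rate(cpk: float) -> float:
    """Two-sided fallout implied by Cpk for a centered normal process:
    Z = 3 * Cpk, rate = 2 * (1 - Phi(Z))."""
    z_value = 3 * cpk
    return 2 * (1 - NormalDist().cdf(z_value))

rate = nonconforming_rate(1.0)   # about 0.0027, i.e., 0.27 percent
```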
Q: For calculating the sample size, which method is better, the first or the third?
A: The first is used for continuous data, where a company gets an actual reading, and the third for
attribute data, where the only thing a company knows is whether a unit is good or bad. So if a company
applies a force and the part doesn’t break, that’s Method 3. Or measure the force, and that’s Method 1.
Q: What is a source for the derivation of, for instance, the sample size for continuous data?
A: The easiest one is n = ln(1 − Confidence)/ln(Reliability) = ln(1 − 0.95)/ln(0.90) ≈ 28 at 95
percent confidence and 90 percent reliability; this is in almost every reliability textbook.
Q: And how would the derivation for that come about?
A: For the derivation, use the binomial distribution. It’s just the binomial with X set equal to zero.
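That derivation can be checked numerically: the binomial probability of observing zero failures in n trials reduces to R to the power n, and setting that equal to 1 − C recovers the success-run formula quoted in the previous answer. The 28-test, 0.90-reliability values are the report's 95/90 example:

```python
import math

def prob_zero_failures(n: int, reliability: float) -> float:
    """Binomial pmf at X = 0 failures: C(n, 0) * (1-R)^0 * R^n = R^n."""
    p_fail = 1 - reliability
    return math.comb(n, 0) * p_fail ** 0 * reliability ** n

p = prob_zero_failures(28, 0.90)   # close to the 0.05 = 1 - 0.95 target
```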
