This topic is essential for competitive exams such as SSC JE, RRB JE, GATE EE, and UPPCL JE. The following notes summarize key concepts in methods of measurement, classification of instruments, errors in measurement, and statistical analysis of measurement data.
🔹 Methods of Measurement
- Direct Measurement: The unknown quantity is compared directly with a standard. The result depends on human judgement, so this method is less accurate and less sensitive.
- Indirect Measurement: The unknown quantity is determined through a measuring system or computed from other measured quantities. More accurate and sensitive, and therefore preferred in practice.
🔹 Classification of Instruments
- Absolute and Secondary Instruments
  - Absolute: Indicate the quantity in terms of their own physical constants, so no calibration is needed (e.g., Tangent Galvanometer, Rayleigh's Current Balance)
  - Secondary: Must be calibrated against an absolute instrument (e.g., Ammeter, Voltmeter, Pressure Gauge)
- Deflection and Null Type Instruments
  - Deflection: Less accurate but fast response (PMMC, Moving Iron, Megger, Ohmmeter)
  - Null Type: High accuracy and sensitivity, but slower to balance (Potentiometer, Galvanometer)
- Recording and Integrating Instruments
  - Recording: Record the variation of a quantity over time (CRO, Substation Recorders, ECG)
  - Integrating: Sum the quantity over time (1-Φ and 3-Φ Energy Meters)
🔹 Static Characteristics of Instruments
| Characteristic | Description |
| --- | --- |
| Accuracy | Closeness of the measured value to the true value |
| Precision | Closeness among repeated measurements |
| Sensitivity | Δoutput / Δinput; high sensitivity is preferred |
| Repeatability | Consistent readings for repeated inputs under identical conditions |
| Reproducibility | Repeatability under changed conditions (operator, time, location) |
| Dead Time | Delay before the output begins to respond to an input change |
| Dead Zone | Range of input over which there is no output response |
| Linearity | Output directly proportional to input |
| Resolution | Smallest input change the instrument can detect |
| Drift | Gradual output variation while the input remains constant |
Measurement System Terminology Explained
In the field of instrumentation and measurement systems, several key terms define the performance and reliability of measuring instruments. Understanding these concepts is essential for electrical engineering students and professionals preparing for competitive exams like SSC JE, RRB JE, GATE, and others.
1. Accuracy
Definition: Accuracy is the closeness of a measured value to the true or actual value.
Explanation: If the actual voltage is 100V and the instrument shows 99.8V, the measurement is considered accurate because it is very close to the true value. High accuracy means low measurement error.
2. Precision
Definition: Precision is the degree of closeness between repeated measurements under the same conditions.
Explanation: If repeated measurements give values like 98.6V, 98.7V, 98.6V, then the instrument is precise, even if the actual value is 100V. Precision focuses on consistency, not correctness.
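A quick numeric sketch can make this distinction concrete. The Python snippet below uses the hypothetical readings from the example above, treating accuracy as the closeness of the mean reading to the true value and precision as the spread among repeated readings:

```python
# Illustrative sketch: accuracy vs. precision (hypothetical readings)
readings = [98.6, 98.7, 98.6, 98.7, 98.6]  # repeated measurements (V)
true_value = 100.0                          # known true voltage (V)

mean = sum(readings) / len(readings)
accuracy_error = mean - true_value          # closeness to the true value
spread = max(readings) - min(readings)      # consistency of repeated readings

print(f"Mean reading: {mean:.2f} V")
print(f"Error vs true value: {accuracy_error:+.2f} V (accuracy)")
print(f"Spread of readings: {spread:.2f} V (precision)")
```

Here the readings are tightly clustered (high precision) yet sit about 1.4 V below the true value (poor accuracy), exactly the situation described above.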
3. Sensitivity
Definition: Sensitivity is the ratio of change in output to the change in input.
Formula: Sensitivity = ΔOutput / ΔInput
Explanation: If a small change in input causes a significant change in output, the system is highly sensitive. High sensitivity is generally desirable in measuring instruments.
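As a minimal sketch (with hypothetical calibration points, not from any particular instrument), sensitivity can be estimated from two points on the input–output curve:

```python
# Sensitivity = ΔOutput / ΔInput, estimated from two hypothetical calibration points
input_1, output_1 = 0.0, 0.0     # input in volts, output in scale divisions
input_2, output_2 = 2.0, 50.0

sensitivity = (output_2 - output_1) / (input_2 - input_1)
deflection_factor = 1 / sensitivity   # inverse sensitivity

print(f"Sensitivity       = {sensitivity} div/V")        # 25.0
print(f"Deflection factor = {deflection_factor} V/div")  # 0.04
```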
4. Repeatability
Definition: The ability of an instrument to provide the same output for repeated inputs under identical conditions.
Explanation: If a voltage is measured 10 times and the result is 99.6V every time, the instrument has good repeatability. It reflects short-term consistency.
5. Reproducibility
Definition: The degree to which consistent results are obtained under changed conditions (e.g., different operator, time, location).
Explanation: If an instrument gives the same output despite changes in environment or user, it is reproducible. It indicates long-term reliability across different scenarios.
6. Dead Time
Definition: The time delay between the application of an input and the beginning of the output response.
Explanation: If the system takes 2 seconds to show output after a change in input, the dead time is 2 seconds. Shorter dead time is preferred in real-time systems.
7. Dead Zone
Definition: The range of input values over which there is no change in output.
Explanation: If the input varies between 0 and 1 V but the output remains zero, then 0–1 V is the dead zone. It is an undesirable characteristic in measurement systems, as the sketch below illustrates.
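The following minimal sketch (assuming a hypothetical 1 V dead zone and unit gain beyond it) models an instrument whose output stays at zero until the input leaves the dead zone:

```python
# Hypothetical instrument with a 0–1 V dead zone and unit gain beyond it
DEAD_ZONE = 1.0  # volts; inputs below this produce no output response

def instrument_output(v_in: float) -> float:
    if v_in <= DEAD_ZONE:
        return 0.0               # input lies inside the dead zone
    return v_in - DEAD_ZONE      # instrument responds only beyond the dead zone

for v in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(f"input {v:.1f} V -> output {instrument_output(v):.1f} V")
```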
8. Linearity
Definition: Linearity is the ability of the instrument to produce an output that is directly proportional to the input.
Explanation: A perfectly linear system will have a straight-line graph between input and output. Non-linearity introduces distortion or error.
9. Resolution
Definition: The smallest measurable change in input that the system can detect.
Explanation: If an instrument can detect a minimum change of 0.01V, then its resolution is 0.01V. Higher resolution means the instrument can detect smaller variations.
10. Drift
Definition: Drift refers to the slow change in output of a measurement system over time, even when the input remains constant.
Explanation: This can occur due to aging of components, environmental changes, or internal heating. Instruments with minimal drift are preferred for long-term accuracy.
🔹 Errors in Measurement
- Static Error: δA = Aₘ − Aₜ (measured value minus true value)
- Relative Limiting Error: εᵣ = ((Aₘ − Aₜ) / Aₜ) × 100%
- Static Correction: δC = Aₜ − Aₘ = −δA
- Non-Linearity: NL% = (Max deviation of output from the idealized straight line / Desired value) × 100
- Resolution: FSD / Number of scale divisions
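These formulas can be checked with a short sketch; the meter values below are hypothetical:

```python
# Hypothetical example: a voltmeter reads 99.8 V when the true value is 100 V
A_m, A_t = 99.8, 100.0                           # measured and true values (V)

static_error = A_m - A_t                         # δA = Aₘ − Aₜ
relative_error_pct = (A_m - A_t) / A_t * 100     # εᵣ in %
static_correction = A_t - A_m                    # δC = −δA

FSD, divisions = 150.0, 100                      # full-scale deflection, scale divisions
resolution = FSD / divisions                     # smallest readable change

print(f"Static error:      {static_error:+.2f} V")
print(f"Relative error:    {relative_error_pct:+.2f} %")
print(f"Static correction: {static_correction:+.2f} V")
print(f"Resolution:        {resolution:.1f} V per division")
```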
🔹 Combination of Errors
- Sum/Difference: absolute errors add → δX = ±(δX₁ + δX₂ + ...)
- Product/Quotient: relative errors add → εx = ±(ε₁ + ε₂ + ...)
- Composite Formula: x = aᵐ·bⁿ / cᵖ → εx = ±(m·εa + n·εb + p·εc)
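A brief sketch applies the composite rule under assumed limiting errors (the 1%, 0.5%, and 0.2% figures are illustrative):

```python
# Composite rule: for x = a^m * b^n / c^p, relative errors combine as
# εx = ±(m·εa + n·εb + p·εc) in the worst case
m, n, p = 2, 1, 1                      # hypothetical exponents
eps_a, eps_b, eps_c = 1.0, 0.5, 0.2    # assumed relative errors in %

eps_x = m * eps_a + n * eps_b + p * eps_c
print(f"εx = ±{eps_x} %")              # ±2.7 %
```

Note that an exponent in the denominator still adds its contribution; the ± sign reflects the worst-case combination.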
🔹 Statistical Analysis
- Mean Value: x̄ = (x₁ + x₂ + ... + xₙ)/n
- Deviation: d = xi - x̄
- Average Deviation: D = Σ|d| / n
- Standard Deviation (n > 20): σ = √(Σd² / n)
- Standard Deviation (n < 20): σ = √(Σd² / (n - 1))
- Variance: σ²
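All of these statistics can be computed directly from a set of readings; the values below are hypothetical:

```python
import math

readings = [99.6, 99.8, 100.1, 99.7, 100.0, 99.9]  # hypothetical repeated readings
n = len(readings)

mean = sum(readings) / n                       # x̄
deviations = [x - mean for x in readings]      # d = xi − x̄
avg_deviation = sum(abs(d) for d in deviations) / n

# n < 20, so use the (n − 1) form per the notes above
std_dev = math.sqrt(sum(d * d for d in deviations) / (n - 1))
variance = std_dev ** 2

print(f"Mean: {mean:.3f}  Avg deviation: {avg_deviation:.3f}")
print(f"Std deviation: {std_dev:.3f}  Variance: {variance:.4f}")
```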
🔹 Types of Errors in a Measurement System
- Systematic Error: Due to identifiable, consistent causes (e.g., calibration error)
- Random Error: Due to unpredictable variations
- Gross Error: Human mistakes in reading or recording
In any measurement system, errors are inevitable. They can arise from instruments, environment, or even the observer. These errors are broadly classified into three categories: Systematic Error, Random Error, and Gross Error. Understanding their origin and impact is crucial for improving measurement accuracy and reliability.
1. Systematic Error
- Definition: Systematic errors are consistent, repeatable errors caused by identifiable factors such as instrument calibration issues or environmental conditions.
- Explanation: These errors affect all measurements in the same way and can often be corrected once identified. For example, if a voltmeter consistently shows 1V more than the actual value, it’s likely due to calibration error. Other causes include instrumental errors, environmental influences (like temperature), and theoretical assumptions in measurement methods.
- Example: A thermometer that always reads 2°C higher than the actual temperature due to faulty calibration.
2. Random Error
- Definition: Random errors are caused by unpredictable or uncontrollable variations during measurement.
- Explanation: These errors occur irregularly and vary in magnitude and direction. They may be caused by slight fluctuations in the environment, operator handling, or noise in the system. Random errors can’t be completely eliminated but can be reduced using statistical methods like averaging multiple readings.
- Example: Minor variation in voltage readings every time the same battery is measured.
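A quick simulation (with synthetic Gaussian noise, purely illustrative) shows why averaging multiple readings suppresses random error:

```python
import random
import statistics

random.seed(42)
TRUE_VALUE = 100.0   # volts
NOISE_STD = 0.5      # assumed random error per reading (hypothetical)

def one_reading():
    return TRUE_VALUE + random.gauss(0, NOISE_STD)

# Compare single readings against averages of 25 readings
singles = [one_reading() for _ in range(1000)]
averages = [statistics.mean(one_reading() for _ in range(25)) for _ in range(1000)]

print(f"Spread of single readings:  {statistics.stdev(singles):.3f} V")
print(f"Spread of 25-reading means: {statistics.stdev(averages):.3f} V")  # ≈ 0.5/√25 = 0.1 V
```

The spread of the averaged result shrinks roughly as σ/√n, which is why taking several readings and averaging them is the standard defence against random error.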
3. Gross Error
- Definition: Gross errors are large, obvious mistakes made by humans during measurement, observation, or recording.
- Explanation: These are typically caused by carelessness, such as misreading an instrument, wrong unit conversion, or incorrect data entry. Gross errors can be avoided with proper training, focus, and use of automated systems where possible.
- Example: Writing 250V instead of 25V in a lab report.