A3 Lecture Notes: Hydrography & Measurement Principles#
Summary: Core Pedagogical Messages for A3#
Key takeaways from this lecture:
Accuracy ≠ Precision: Averaging improves precision (reduces random error) but doesn’t fix bias (systematic error). Climate studies need accuracy.
Signal-to-noise determines requirements: High-variability environments (shallow/coastal) can tolerate biases for process studies. Low-variability environments (deep ocean, climate signals) require high accuracy.
Calibration failures have real consequences: Concrete examples (Argo pressure drift, XBT fall rate) show where improper calibration led to wrong scientific conclusions.
TEOS-10 is the modern standard: Use it, be aware that it differs from the old practical salinity scale, and remember that the difference matters when comparing data across decades.
Report uncertainty appropriately: Significant figures must reflect instrument precision, not computer precision. Always report with uncertainty.
Cross-parameter contamination: One bad calibration (pressure) can corrupt other measurements (temperature). Everything is connected.
Key Concept: When Does Calibration Matter?#
Understanding when you need absolute accuracy versus relative precision is crucial for choosing the right measurement approach and interpreting your data correctly.
The Calibration Tension: Gold Standard vs. Process Study#
Ask yourself: What is the signal you’re trying to measure?
Scenario 1: Climate Studies - Gold Standard Required#
When to use: Deep water masses with slow, long-term changes
Example: Antarctic Bottom Water (AABW) warming
Signal: ~0.001°C warming per year
Challenge: Detecting tiny trends over decades
Solution: High-accuracy sensors with rigorous calibration to laboratory standards
Why accuracy matters: The climate signal IS the small trend. Any systematic bias could mask or create false trends.
Scenario 2: Process Studies - Relative Precision OK#
When to use: Surface waters and shallow seas, or regions where the high natural variability itself, rather than any climate tendency, is the focus
Example: Tidal mixing, river plume dynamics, coastal gradients
Signal: 5°C changes between tides, spatial gradients of several degrees
Challenge: Understanding spatial patterns and processes
Solution: Consistent relative measurements with same instruments
Why relative precision works: Day-to-day and seasonal variability >> long-term trends. Small biases don’t affect spatial comparisons on the same cruise.
Key Teaching Point: Signal-to-Noise Ratio#
The measurement approach depends on your signal-to-noise ratio:
Deep ocean: Small trends ARE the signal → accuracy critical
Shallow/surface waters: Large natural variability >> trends → relative precision sufficient
Decision Framework#
When you’re measuring something that:
Changes by 0.001°C per year over decades (deep ocean warming) → You MUST have high accuracy sensors and rigorous calibration
Changes by 5°C between tides → Small bias doesn’t matter for spatial comparisons with the same instruments
Bottom line: Know which problem you’re solving before choosing your measurement strategy.
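To make the framework concrete, here is a minimal Python sketch that checks whether a bias is large relative to the signal you care about. All numbers are illustrative, taken from the two scenarios above, and the 0.01°C offset is a made-up example of an uncorrected calibration bias:

```python
# Back-of-envelope check: does a given sensor bias matter for my signal?
# All numbers are illustrative, taken from the scenarios above.

def bias_matters(signal_degC, bias_degC, threshold=0.1):
    """Flag a bias as problematic if it exceeds `threshold` (10%) of the signal."""
    ratio = abs(bias_degC) / abs(signal_degC)
    return ratio > threshold, ratio

climate_signal = 0.001 * 30   # °C: 0.001 °C/yr of deep-ocean warming over 30 years
tidal_signal = 5.0            # °C: temperature swing between tides in a coastal sea
hypothetical_bias = 0.01      # °C: an assumed uncorrected calibration offset

for name, signal in [("climate trend", climate_signal), ("tidal mixing", tidal_signal)]:
    problematic, ratio = bias_matters(signal, hypothetical_bias)
    print(f"{name}: bias is {ratio:.1%} of the signal -> "
          f"{'accuracy critical' if problematic else 'relative precision OK'}")
```

With these made-up numbers, the same 0.01°C offset is a third of the 30-year climate signal but only 0.2% of the tidal signal, which is exactly the distinction the framework draws.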
Practical Application: Why No Water Samples on Seepraktikum?#
During the ship-based practical course, you'll notice that we won't run water samples through laboratory salinity analysis. Here's why:
Technical Reason#
Silty coastal water would contaminate the sensitive laboratory salinometer
Scientific Reason#
Your goal: Understanding spatial gradients and coastal processes
What matters: Consistent relative measurements across your survey area
What doesn’t matter: Small absolute biases that affect all measurements equally
Since you’re using the same CTD throughout your cruise, any systematic bias will be consistent across all your measurements. This allows you to:
Map spatial gradients accurately
Compare water masses relative to each other
Understand physical processes
For this type of process study, the relative precision of your CTD measurements is adequate for your scientific objectives.
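A quick numerical sketch of why that works (the station temperatures and the offset below are made up): a constant bias shifts every reading by the same amount, so it drops out of the station-to-station differences you actually interpret.

```python
import numpy as np

# Made-up transect temperatures and an assumed constant sensor offset.
true_temps = np.array([12.4, 13.1, 14.0, 15.2, 16.5])   # °C along a survey line
bias = 0.05                                              # °C, constant calibration offset
measured = true_temps + bias

print(np.diff(true_temps))    # true station-to-station gradients
print(np.diff(measured))      # same gradients: the constant bias cancels in differences
print(measured - true_temps)  # only the absolute values are shifted, all by 0.05 °C
```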
What Happens When Calibration Goes Wrong?#
Learning from real-world calibration failures helps you understand why quality control and instrument validation are essential parts of oceanographic research.
Case Study 1: Grey-Listed Argo Floats (Pressure Sensor Bias)#
What happened:
APEX floats with faulty pressure sensors had systematic biases around -2 dbar in 2003
Some older floats had errors exceeding 10 dbar
SOLO floats with FSI CTDs had profiles offset upward by one or more pressure levels
The cascade of errors:
Primary problem: Floats thought they were at different depths than reality
Secondary effect: Temperature measurements got assigned to wrong depths
Scientific consequence: Created artificial cooling signal in the data
What you need to know: This could have led to false conclusions about ocean cooling if scientists hadn’t caught and corrected the errors. This is why Argo maintains grey-listed and quality-flagged datasets.
Key lesson: Systematic errors in one parameter (pressure) corrupt measurements of others (temperature). Don't forget that your measured parameters can depend on one another.
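A back-of-envelope sketch of that cascade (the thermocline gradient below is an assumed value, not taken from the Argo documentation):

```python
# Illustrative sketch: how a pressure-sensor bias becomes a spurious temperature
# signal once measurements are assigned to the wrong depth.

dT_dp = -0.02    # °C per dbar, an assumed thermocline gradient (colder with depth)
bias = -2.0      # dbar, reported minus true pressure (as in the APEX example above)

# A sample labelled "p dbar" was really taken at p - bias, i.e. 2 dbar deeper and
# therefore colder, so the labelled level appears to have cooled by roughly:
spurious_signal = dT_dp * (-bias)
print(f"Apparent cooling for a {bias} dbar bias:  {spurious_signal:+.2f} °C")
print(f"Apparent cooling for a -10 dbar bias: {dT_dp * 10:+.2f} °C")
```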
Case Study 2: XBT Fall Rate Manufacturer Change#
What happened:
TSK T-5 XBTs systematically overestimated depth by about 5%
Sippican T-5 XBTs showed almost no bias
Both manufacturers claimed “identical specifications”
The cascade of errors:
Primary problem: Fall rate depends on subtle manufacturing details
Secondary effect: Wrong depth calculations from fall-rate equations
Scientific consequence: Temperature assigned to wrong depth → artificial warm bias in global ocean heat content estimates
What you need to know: This error required major correction efforts for climate change studies because it affected historical datasets used to calculate long-term trends.
Key lesson: Even “identical” instruments from different manufacturers can behave differently. This is why we need ongoing comparisons with reference standards (like CTD casts).
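As a sketch of the mechanism: an XBT carries no pressure sensor, so depth is inferred from elapsed time with a fall-rate equation of the form z(t) = a·t - b·t². The coefficients below are the widely quoted Hanawa et al. (1995) values for Sippican T-7/Deep Blue probes (illustrative only, not the T-5 probes in this case study), and the thermocline gradient is an assumption.

```python
# Depth from elapsed fall time: z(t) = a*t - b*t**2 (coefficients are illustrative,
# the commonly quoted Hanawa et al. 1995 values for Sippican T-7 / Deep Blue probes).
a, b = 6.691, 0.00225              # m/s and m/s^2

def depth(t, a, b):
    """Inferred depth (m) after t seconds of free fall."""
    return a * t - b * t**2

t = 60.0                           # seconds after the probe hits the water
z_computed = depth(t, a, b)        # depth the equation assigns, about 393 m
z_true = z_computed / 1.05         # probe is really ~5% shallower, as in the case study
depth_error = z_computed - z_true  # roughly 19 m too deep

# With an assumed gradient of -0.02 °C/m, warm shallow water labelled too deep
# appears as an artificial warm bias at the labelled depth:
warm_bias = 0.02 * depth_error
print(f"Depth overestimate: {depth_error:.0f} m -> apparent warm bias: +{warm_bias:.1f} °C")
```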
What This Means for You#
As a student/researcher:
Question your data - look for patterns that seem “too good” or unexpected
Understand the full measurement chain: how does each sensor affect the others?
Compare your measurements with independent methods when possible
Document everything: instrument models, serial numbers, calibration dates
Remember: These weren’t amateur mistakes - they were sophisticated measurement systems used by expert oceanographers. Systematic errors can fool anyone, which is why we have quality control procedures and data validation protocols.
TEOS-10: When and Why It Matters#
Why TEOS-10 Was Created#
The problem with old methods: PSS-78 Practical Salinity was based on conductivity measurements, but thermodynamic properties like density actually depend on the real mass of dissolved material in seawater, not just conductivity.
The solution: TEOS-10 uses Absolute Salinity (mass fraction of salt, g/kg) because:
Seawater composition varies spatially
Absolute Salinity can differ from Reference Salinity by up to 0.02 g/kg in the open ocean
In coastal areas, differences can reach 0.09 g/kg
What You Need to Know as a Student#
Essential facts:
TEOS-10 exists and modern papers use it - you’ll see it everywhere in recent literature
Use TEOS-10 for Seepraktikum - it’s the correct modern standard
Be aware when reading older literature - papers before ~2010 used different definitions
Reading papers: If you see salinity values around 34.82 in old papers vs slightly different values in new papers using TEOS-10, it might be the definition change, not actual ocean changes.
Simple Example: Same Water, Different Numbers#
Typical difference: ~0.15 g/kg between PSS-78 practical salinity and TEOS-10 absolute salinity for the same water sample.
Example:
Old method (PSS-78): 34.82 psu
New method (TEOS-10): 34.97 g/kg
Same water sample!
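If you want to do this conversion yourself, the GSW-Python toolbox (`gsw`) is the reference implementation of TEOS-10. A minimal sketch, with an assumed station position and pressure (Absolute Salinity depends on where the sample was taken, so the exact output will vary):

```python
import gsw  # GSW-Python, the reference TEOS-10 implementation

# Illustrative values: Practical Salinity 34.82 at 2000 dbar, with an assumed
# Southern Ocean station position.
SP, t, p = 34.82, 2.0, 2000.0   # practical salinity, in-situ temperature (°C), pressure (dbar)
lon, lat = -30.0, -55.0         # assumed station position

SA = gsw.SA_from_SP(SP, p, lon, lat)  # Absolute Salinity, g/kg
CT = gsw.CT_from_t(SA, t, p)          # Conservative Temperature, °C
rho = gsw.rho(SA, CT, p)              # in-situ density, kg/m^3

print(f"SP = {SP} (PSS-78)  ->  SA = {SA:.3f} g/kg (TEOS-10)")
print(f"Offset: {SA - SP:.3f} g/kg")  # roughly 0.15-0.17 g/kg, location-dependent
print(f"In-situ density: {rho:.3f} kg/m^3")
```

The printed offset is the definition change described above, not a change in the water itself.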
When TEOS-10 Really Matters#
Critical applications:
Comparing datasets across time (pre-2010 vs post-2010 studies)
Precise density calculations for oceanographic work
Water mass analysis requiring high accuracy
Bottom line for you: TEOS-10 exists, papers use it, you should too. The difference from old practical salinity is usually small, but when you’re comparing water mass properties across decades or doing precise density calculations, it matters.
Significant Figures and Reporting Uncertainty#
Understanding instrument limitations helps you report measurements correctly and avoid false precision.
Know Your Instrument Specifications#
Typical CTD specifications:
Temperature: Accuracy 0.001°C
Salinity (CTD): Accuracy ~0.003 PSU
Salinity (lab salinometer): Accuracy <0.002 PSU, Precision <0.0002 PSU
Pressure: 0.015% full scale (for 2000 dbar sensor = ±0.3 dbar)
How Many Significant Figures?#
Temperature#
Instrument precision: 0.001°C
Report to: 3 decimal places (e.g., 15.234°C)
Key principle: Averaging 100 measurements doesn’t give you more precision than your instrument
Salinity#
CTD accuracy: ~0.003 PSU
Report to: 3 decimal places (e.g., 35.127 PSU or g/kg for TEOS-10)
Lab salinometer: Can report to 4 decimals (0.0002 precision), but field CTD cannot
Pressure#
For 2000 dbar sensor: 0.015% = ±0.3 dbar
Report to: 1 decimal place (e.g., 1547.2 dbar) or integer dbar for most work
The Golden Rule#
Your reported precision cannot be better than your instrument precision.
When you average measurements, you get better statistical confidence in that average, but you don’t magically get more precision than your instrument can provide.
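A small simulation makes the point (the noise level and the bias below are made-up values): averaging more readings tightens the scatter of the mean, but the mean never converges to the true value, because the calibration bias is still there.

```python
import numpy as np

rng = np.random.default_rng(0)

true_temp = 15.234   # °C, the value we are trying to measure (made up)
noise_sd = 0.003     # °C, random sensor noise (assumed)
bias = 0.010         # °C, an uncorrected calibration offset (assumed)

for n in (1, 10, 100, 1000):
    readings = true_temp + bias + rng.normal(0.0, noise_sd, size=n)
    mean = readings.mean()
    # The random scatter of the mean shrinks like noise_sd/sqrt(n),
    # but the mean still sits about `bias` away from the true value.
    print(f"n={n:5d}  mean={mean:.4f} °C  error vs truth={mean - true_temp:+.4f} °C")
```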
Common Student Error#
Wrong approach:
Taking average of 10 temperature readings:
(15.234 + 15.236 + 15.233 + ...) / 10 = 15.23456789°C
Report: 15.23456789°C ❌
Correct approach:
Same calculation = 15.23456789°C
But instrument only measures to ±0.001°C
Report: 15.234 ± 0.001°C ✅
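In code, the fix is simply to round the computed mean to the instrument precision and attach the uncertainty (the readings and the 0.001°C specification below are illustrative):

```python
# Report a computed average at the instrument's precision, not the computer's.
readings = [15.234, 15.236, 15.233, 15.235, 15.234]   # °C, illustrative values
instrument_precision = 0.001                          # °C, from the spec sheet

mean = sum(readings) / len(readings)   # the computer carries far more digits
decimals = 3                           # matches a 0.001 °C precision

print(f"Raw computer output: {mean!r}")
print(f"Reported value: {round(mean, decimals)} ± {instrument_precision} °C")
```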
Practice Exercise#
Given CTD profile data with the instrument specifications above, what is the appropriate precision for reporting:
Temperature at 1000 m?
Answer: 3 decimal places (e.g., 4.234°C)
Salinity at 500 m?
Answer: 3 decimal places (e.g., 35.127 g/kg)
A computed pressure value of 1547.2498375 dbar?
Answer: 1 decimal place (1547.2 dbar) or integer (1547 dbar)
Remember: Show your uncertainty! The goal is honest reporting that reflects your measurement capabilities.