Mjolnir wrote:
ZenMonkeyNZ wrote:
We could just as easily say there were two stationary observers, one at M and one at M - vt (i.e. closer to A than B). Both observe the event timings differently, but that doesn't make the event timings dependent on the position of the observer!
I just wondered if perhaps you were typing too quickly there. Did you mean "speed" where you wrote "position"? Because unless I read your post totally wrong, that's what makes sense. To me, at least. I think. Maybe.
Hi, sorry for the late reply. I have been away from the forum for a bit – too much reading and writing to fit in around kids and life-in-general.
In that example I did mean "position" as I wrote it, and not "speed". It was in relation to Einstein discussing the time at which events were measured to occur. I was simply pointing out the problem with conflating the measurement of the time of an event with the actual time of the event. This kind of conflation is inconsistent with the law of identity. An event occurs when it occurs. If you measure some aspect of the occurrence at a later time, that just means some flow-on effect took place at the position and time of measurement. The event itself (which you can limit by whatever arbitrary definition you choose) occurs when it occurs, regardless of any measurement.
I have included below some writing on metrology, from a philosophy of science treatise I am working on, to clarify the idea of RELATIVE COMPENSATION OF MEASUREMENT that I was getting at in earlier posts. Excuse the length.
METROLOGY
Metrology is the study of measurement. From its root we derive the basic unit of measure for space – the metre. As with all forms of abstraction, it is vital to examine what measure entails, and understand its role in scientific methodology.
Measurement is the means by which we record quantitative comparisons of certain properties or relationships between physical bodies or systems. These properties or relationships form the basis for theory, but their measurement also provides the means by which theory can be tested. There are three main properties or relationships we measure: mass, length (or distance or displacement), and time. These relate to the fundamental physical notions of science: force, space and time.
In order to make a meaningful comparison between bodies or events, a baseline measurement must be created. This is known as a standard. How this is practically achieved is largely arbitrary. Ancient methods of measurement used commonly available objects or natural cycles as standards against which other spatial things or temporal happenings could be measured. The royal cubit in Egypt was defined as seven palms (shep or shesep) wide, each of four fingers' width. Likewise the palm (palmus minor) was one of the units of measure used by the Romans, and today the hand is still a common unit of measure for horse height – although it now has a precise standardised equivalent. In many cultures prior to the invention of mechanical clocks, an hour was defined as one-twelfth of the daylight hours of a day – so in summer an hour was longer than in winter, and the length of an hour also varied with latitude. Distance (length) was measured in yards, leagues, miles, furlongs, hands, chains, bolts, reeds, paces, and so on. The length of each of these was a set standard against which other lengths or distances could be compared.
After the French Revolution, the French National Assembly attempted to organise a standardised set of weights and measures. This led to the establishment and proliferation of the decimal metric system, which is the basis of metrology today. The standardisation of units was also a major concern for the growing industrialisation of manufacturing and the mass production that flourished in the nineteenth century.
<<SOME TEXT REMOVED HERE>>
Relative Compensation of Measurement
We must also be aware of the fact that all quantitative measuring devices – whether clocks that measure "ticks" from decaying isotopes, or physical measuring rods – are prone to environmental effects. Atomic clocks were originally thought to be accurate and steady enough to be considered constant (relative to the quantity of matter currently decaying), but even gross variations in the environment, such as the time of year, were found to affect them. Although nuclear decay is steady enough to be considered constant for most purposes, that assumption should have raised red flags when investigating areas that relied on great precision or extremely long periods of time. The reason is simple – since the mechanisms of nuclear decay are not fully understood, we are necessarily ignorant of every possible source of variation. Physical measuring standards are potentially affected by everything that affects the physical geometry of matter or the transfer of energy. We should assume that all forms of measurement are, or could be, affected. Absence of evidence of variation is not evidence of absence.
It is vital to understand this: standards of measure are relative, and physical effects on relative measure must be compensated for.
Failure to fully appreciate the implications of this has led to unreal theories in physics that have created paradoxes. There are no paradoxes in nature. Nature is what it is – this is the first and fundamental law of thought. A paradox is something that both is, and is not, in the same sense and at the same time. Paradoxes should serve as an indication that theory has strayed unacceptably far from reality, even if it has remained mathematically viable.
What do I mean by "physical effects on relative measure must be compensated for"?
A physical standard for the metre, one of our fundamental units of measure, was defined by a physical metal rod for well over a century. It was created to be equal to one ten-millionth of the distance from the North Pole to the Equator along the meridian passing through Paris, France. This rod (known as the mètre des Archives) was made from platinum, and later a platinum-iridium alloy, and was kept in the French National Archives and later at the Bureau International des Poids et Mesures. By 1890 several platinum-iridium standard copies had been created, with the official standard kept in France and the other bars distributed around the world. These were accurate to within ~0.2 micrometres.
So what were the physical effects that affected this standard? The obvious (noticeable) ones were temperature and pressure. At 30°C any given standard rod would be longer than at 0°C. This is because temperature is indicative of the average motion of bulk particles. The greater the amplitude of vibration or movement of particles within a physical object, the more the structure of the object expands. This expansion is limited largely by the structure (cohesiveness and elasticity) of the molecules making up the object, but is also slightly limited by the pressure of the surrounding environment. Thus, at high altitude (low pressure) a standard rod would also be slightly longer than at sea level (higher pressure). Hence, the standard for the metre was also defined by certain environmental attributes. For the platinum-iridium rod these were defined as a temperature of 0°C and standard atmospheric pressure (~ sea level). These were not the only environmental effects on the standard rod, but given the precision of measurement available (about 10^-7 m), they were the ones that mattered.
Physical effects such as these are what create the need for compensation of relative measure. The relative aspect refers to the measuring rod itself, not the object being measured. Physical considerations such as temperature and pressure will also affect any measured objects, but since every type of substance is affected differently, those differences are measured separately against the standard rod. Different types of materials are given expansion coefficients, for example, based on these environmental effects as given by the actual measured difference. These measured effects require us to incorporate the measuring-device compensation we are discussing here.
If we take a given object and measure its length using a standard measuring rod at standard pressure, but one that is heated to 30°C instead of 0°C (irrespective of what temperature the object is at), we will have a measured length (L_measure). However, we know that the measured length (L_measure) is not the object's actual length (L_actual), because our rod is not in the state defined as its standard. In this case L_measure < L_actual, because the standard rod is longer (expanded) than its defined standard value due to its temperature. If the measuring rod is longer than its standard value, then anything measured with it will come out shorter than the actual length as defined by the standard.
<<MISSING DIAGRAM>>
Since we can measure the length changes that take place within a standard rod using other standardised rods that are in their defined standard states, we can assign a function of change (F_change) for any given factor affecting our measuring standard, such as temperature. Then we can say L_measure . F_change = L_actual. In this way the physical effects on relative measure (i.e. those affecting our measuring standard) are compensated for. As an example, our function might show that at 30°C a metre standard is actually 1.00026 m in length. In this case we can say that for a measured object L_actual = L_measure x (1 + 0.00026).
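The compensation step can be sketched in a few lines of Python (my own illustration, not part of the treatise – the function name "compensate" is hypothetical):

```python
# A minimal sketch of measuring-device compensation: multiply the raw
# reading by the correction factor for the rod's departure from its
# defined standard state (L_measure . F_change = L_actual).

def compensate(l_measure: float, f_change: float) -> float:
    """Return the actual length given a length read off a standard rod
    and the correction factor for that rod's current state."""
    return l_measure * f_change

# Example: a metre rod at 30 C is ~1.00026 m long instead of 1 m, so
# every reading taken with it under-reports by that factor.
f_change = 1 + 0.00026
print(compensate(2.0, f_change))  # ~2.00052 m
```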
In the case above a function of change (F_change) can be given by the basic equation for the linear expansion of a material (ΔL), added to the standard length (1 metre). This gives a good approximation of the change in length of our standard rod due to temperature.

ΔL = L_0 . σ . (T_1 - T_0)

Where:
L_0 is the original length (or in our case, the standardised unit of measure – one metre),
σ is the expansion coefficient for the material – for platinum-iridium 90/10 at our stated temperatures and pressure it is ~8.7 x 10^-6 per °C, and
T_0 and T_1 are the starting temperature (T_0, our defined standard temperature) and the final temperature (T_1, the temperature our measuring rod is actually at).
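As a rough check of the figures above, the expansion formula can be evaluated directly (a sketch of my own, using the coefficient and temperatures stated in the text):

```python
# Linear expansion of the metre standard, DL = L0 * sigma * (T1 - T0).

L0 = 1.0        # original length: the one-metre standard
SIGMA = 8.7e-6  # expansion coefficient, Pt-Ir 90/10, per degree C
T0 = 0.0        # defined standard temperature (deg C)
T1 = 30.0       # temperature the rod is actually at (deg C)

delta_l = L0 * SIGMA * (T1 - T0)  # change in rod length, metres
# delta_l comes out at ~0.000261 m, i.e. the rod is ~1.00026 m long at
# 30 deg C -- matching the correction factor used in the text.
```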
<<MISSING CALCULATION GRAPHIC>>
The metre is now defined by a time relationship and the speed of light: it is taken as the distance light travels in 1/299,792,458 of a second in a vacuum. In other words, a light-second is defined as 299,792,458 metres. The reasoning behind this methodology relies on two premises – that time can be measured more accurately than length (which, for practical purposes, is currently correct), and that the speed of light is constant and isotropic (non-relative). Light speed is usually defined as an isotropic constant, so this works mathematically within the formalism of Special Relativity (SR). A problem arises when we question that formalism, however, for light is anisotropic when not analysed using the formalism of SR. This has led to some confusion in physics. We shall look at this in more detail in a later section of LIGHT . . .
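For what it's worth, the definitional arithmetic can be checked in a couple of lines (my own sketch; nothing here beyond the numbers already quoted above):

```python
# With c fixed exactly at 299,792,458 m/s, the distance light covers in
# 1/299,792,458 s is one metre by construction.

C = 299_792_458          # speed of light in vacuum, m/s (exact by definition)
t = 1 / C                # the defining time interval, seconds
metre = C * t            # distance travelled in that interval
print(round(metre, 12))  # 1.0
```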
Intro to this material can be found here:
http://danmesnage.wix.com/coming-soon-pos
DM