Polar satellites

+++
Nearly all the satellite images we see do not come from geostationary satellites (except GOES), or even stationary ones that are said to be orbiting in the heliocentric model. The images we see from Google Earth, Landsat etc. are weather images taken by polar satellites, which are said to orbit the convex Earth from pole to pole. These are a different breed altogether. We know that orbiting, at least as the space marketing people explain it, is false. Yet, when researching Landsat and Google Earth, some of these base images are very hard to explain without polar satellites. If we take a small leap of faith and treat the information on the web regarding the other technicalities of these satellites as true, how can these two supposedly false and true paradigms be reconciled in the concave Earth model?

Let’s look at the main imagers on-board some of these satellites and see if we can decipher how these satellites really work compared to the other stationary ones.

AVHRR on NOAAs
SAR on Sentinels
MODIS on Terra and Aqua
VIIRS on Suomi NPP
OLI and TIRS on Landsat 8
Polar satellite orbiting mechanism
Light and polar satellites
Iridium
Satellite images
Google Earth
Summary

+++

POES AVHRR

The geostationary GOES satellites have a good latitudinal range (+/-70°), but a weaker longitudinal one (less than half the Earth’s longitude – about 140°). A remote sensor at the poles, such as POES’s AVHRR, looking at the horizon could see all around it in a north-south direction, i.e. at least 180° around (and slightly behind it, if the downward viewing angle allows this). However, if polar satellites were stationary, sitting on the glass like GOES, they wouldn’t be able to see much further than +/-70° from the pole (20° from the equator). Yet they see either the full latitude to the north/south pole, or up to about +80/85°N.

There is another problem with polar satellites and the glass sky – there might not be any glass within 2 degrees of the south pole (I have no link for the 2-degree width, as I heard it on a television programme about possible gravity wave detection).

…a spot in the Southern Hemisphere called the Southern Hole, a “clean” patch in the night sky that has low emissions of polarized dust that could hamper measurements… Using a telescope at the Pole can reduce noise not only from Earth’s atmosphere, but from interstellar radiation and dust, too. That’s because a telescope at Amundsen-Scott has perfect access to what is known as the Southern Hole—a patch of unusually clean sky that allows for optimal viewing into very deep space and consequently to the very early universe.

+++

Could this unusually clean sky be a hole in the glass above the south pole? Quite possibly. All polar satellites sent from Vandenberg Air Force Base travel to the south pole, so it looks like there must be another mechanism for polar satellites besides sitting on the glass. Let’s look at these super-sensitive radiometers on polar satellites to see if they can offer any clues.

NOAA is said to operate two polar “orbiting” satellites, which “orbit the Earth in opposite directions”. A lot of polar satellite brands seem to operate in pairs, such as ESA’s Sentinel-1 (1A and 1B).

Complementing the geostationary satellites are two polar-orbiting satellites known as Advanced Television Infrared Observation Satellite (TIROS-N or ATN) constantly circling the Earth in an almost north-south orbit, passing close to both poles.

+++

NOAA’s polar satellites or “POES” also have a visible/infrared passive remote sensor called AVHRR (Advanced Very High Resolution Radiometer) which faces both down and to the side, judging by the in-factory photo of NOAA-N. It is looking at the horizon. The circular scanning mirror is said to rotate on its axis 360°, 6 times a second. However, the mirror in the plexiglas on NOAA-N looks far too big to fully rotate in any direction. The same applies to a much older version of the AVHRR.

noaa-n14
NOAA-N – The mirror in the plexiglas at the front right of the picture looks far too big to rotate fully on any axis. It looks able to receive light from below, possibly slightly behind itself, and 180° to the side facing the horizon. Note the mirror isn’t nearly as big as the GOES one.
NOAA POES
An old diagram showing where the sensors are on a NOAA POES. The AVHRR unit looks to be where the NOAA-N plexiglas is situated.
image_gallery
A diagram of an old version of the AVHRR (probably 1978).
old AHVRR without walls or panels
The scan mirror in the old-style AVHRR also looks far too big for a 360° rotation on any axis.

POES’s swath width is narrower at 2400 km. The narrower swath makes sense, as the telescope of the old AVHRR was about 20 cm (8 inches), which looks to be a good deal smaller than the GOES one. Quality-wise, AVHRR images look identical to GOES ones and are also black and white with the same resolution.

15hud03f
The infrared image also looks the same, with the identical 4 km resolution.
120309-11_poes_avhrr_vis_bering_sea_ice_anim
The AVHRR visible light image with 1 km resolution looks the same as the GOES imager’s, except that it is an outward-curving strip, as if it is looking across a curved horizon.

If the optical mirror rotates on only one axis as stated, then the satellite cannot be stationary; a stationary satellite’s mirror would have to move on both a north-south and an east-west axis, like GOES’s. This also strongly points to polar satellites not being stationary on the glass.

All the other sensing instruments are only on the bottom looking down, and these tend to be active sensors which transmit EM waves AND receive them back. This includes all the radar-frequency transmitting and receiving antennas (AMSU), which look to have a full 360° view of the horizon because they are looking down. (We will see later that it really only has a 180° view and that these sensors don’t stay looking down.) Even the Earth Sensor Assembly is active, said to detect altitude by emitting and receiving microwaves. Other sounders can be either passive or active, but advanced sounders are now both. (I couldn’t find the true definition of a “sounder”.)

SAR

However, the Sentinel-1 satellites that use SAR as their main imager have it positioned at the front of the satellite. Sitting on the glass in a concave Earth, the imager is looking at the horizon… just like NOAA’s AVHRR. This also gives it a potential 180° view.

Sentinel1_Auto49
The SAR instrument is at the front.
sentinel
The radar imager is looking at the horizon in front.

For Sentinel-1A and 1B to cover the entire longitude of the Earth, they are probably looking in opposite directions, positioned on either side of the south pole. Why the south pole and not the north one? Mostly because all polar satellites launched from Vandenberg Air Force Base in California are sent to the south pole.

SAR-position
The two SAR satellites should be able to cover a 360° horizon if positioned directly opposite each other.

But do these two satellites move? Not at 7600 m/s it seems. Sentinel-1A also links up with a geostationary satellite:

28 November 2014: Marking a first in space, Sentinel-1A and Alphasat have linked up by laser stretching almost 36,000 km across space to deliver images of Earth just moments after they were captured… Radar data over Asia were acquired and downlinked to Earth in near-real time.

+++

How is Sentinel-1A able to keep an unbroken laser link, which requires pinpoint accuracy, while moving at 7600 m/s? These polar satellites surely have to be moving a lot slower than that. Funnily enough, in a concave Earth a laser makes perfect sense, because it has a massive amplitude allowing for a much greater horizon without increasing the wattage of the power source. Lasers by definition are optically amplified directly, “without the need to first convert it to an electrical signal”. Some commercial laser pointers have power equivalents 2000 times that of typical sunlight strength.

MODIS

There are plenty of other polar orbiters with all kinds of different sensors. For instance, an improved version of the AVHRR is said to be Modis. This passive remote sensor is on several satellites, but its main ones seem to be on NASA’s pair of polar satellites EOS/AM-1 and EOS/PM-1 (Terra and Aqua). The optical mirror is said to be oblong-shaped, double-sided, and revolving 360° on one axis – like the AVHRR’s. The area where the light enters and reflects into the device is completely missing from both diagrams and photos. Only the aperture cover, scanning mirror, and primary mirror are labelled… and the two diagrams have a lot of the parts in seemingly different places. Very confusing. It looks like light enters through both the space view at the front and an unlabelled gap at the back left. The scan mirror is double-sided and able to rotate probably because 1. it sometimes needs to reflect the space view or the “back view” to the primary mirror; and 2. it needs to register different parts of the view for the best detection. It’s all guesswork, as the diagrams aren’t very clear.

Terra_AutoB
A diagram of the Modis instrument.
Terra_Auto9
Light looks to be hitting the scan mirror from below and in front of the Modis instrument.

The rotating mirror on one axis also suggests that both the Terra and Aqua satellites move, at the very least on the axis the mirror lacks, so it can scan the entire Earth. In fact, the satellite would need two rotational axes if the mirror’s rotation were due to it being part of a whiskbroom (cross-track) scanner, which they say it is. Although the NASA website says, “Images acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard Terra and Aqua exhibit strong detector striping. This artifact is common to most push-broom scanners” – a pushbroom scanner does not use rotating optical mirrors. How does a whiskbroom or cross-track scanner work?

One type of scanner is called a whiskbroom scanner, also referred to as a cross-track scanner. It uses rotating mirrors to scan the landscape below from side to side perpendicular to the direction of the sensor platform, like a whiskbroom.

+++

This just means that if the width of the scanning strip (swath) is 2,330 km, then the mirror just moves left to right and back again across this 2,330 km strip as the satellite moves both east to west and north to south. It must do this, as the Earth does not rotate under it to give it the east-west motion. How can it do this? Let’s first sketch the scanning principle, and then look at the satellite itself.
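As a minimal sketch (with made-up numbers and a dummy scene, purely to illustrate the principle), a whiskbroom scanner can be modelled as two nested loops: the mirror sweeps one brightness value at a time across the swath, while the platform’s motion supplies the next row.

```python
# Toy whiskbroom (cross-track) scanner: the mirror sweeps side to side
# across the swath while the platform advances, building the image row
# by row. All numbers and the "ground" scene are illustrative only.
import numpy as np

SWATH_KM = 2330   # cross-track swath width (the Modis figure above)
PIXEL_KM = 1      # nominal ground pixel size at nadir
ROWS = 10         # how many along-track rows to simulate

def scan_scene(scene_sampler):
    """Build an image by sampling the ground left to right, row by row."""
    cols = SWATH_KM // PIXEL_KM
    image = np.zeros((ROWS, cols))
    for row in range(ROWS):                       # platform advances one row
        for col in range(cols):                   # mirror sweeps across track
            x_km = col * PIXEL_KM - SWATH_KM / 2  # offset from nadir
            y_km = row * PIXEL_KM                 # along-track position
            image[row, col] = scene_sampler(x_km, y_km)
    return image

# Dummy "ground" radiance: a gentle gradient plus a bright central patch.
sampler = lambda x, y: 0.2 + 0.0001 * abs(x) + (1.0 if abs(x) < 50 else 0.0)
img = scan_scene(sampler)
print(img.shape)  # (10, 2330): one brightness number per ground pixel
```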

modis4
We can recognize the “space” view at the front right. Judging by the size of the man’s forearm, it looks to be about 30 cm tall and half as wide.
HighGain
The black “trough” with the white aperture cover slightly covering a part of it is situated at the front of the satellite and below. It looks to be pointing across the bottom of the Modis instrument, over its radiative cooler door, at an angle. Therefore light is coming in at an angle from somewhere below up to (in some places) an angle in front (the horizon view), and also at an angle from both sides.
VertInstrNE_MED
The edge of the black trough looks to have a viewing area sticking out the side and bent around. What is its viewing angle? It doesn’t look like it can see 180°.

The space view is looking at an unobstructed horizon, but how wide is its field of view? It doesn’t look like it is 180° either. Normally, only fish-eye lenses can see 180° around themselves; these have curved glass wrapped around the aperture. The space view doesn’t. Let’s look at the full Earth composite of these viewing strips to see if they can tell us what is really going on.

Image generated by Aladdin Ghostscript (device=pnmraw)
There are tight elliptic blind spots extending from about -21°S to +23.5°N latitude. The Modis world map itself is located between approx -70°/75° and +70°/75° latitude.

Are the elliptical blind spots that appear on world Modis image maps a scanning issue rather than an orbiting one? It is a guess, but the blind spots issue could be caused by a straight line sensor array detecting light from a curved surface (the Earth). In fact, the lack of such blind spots by the VIIRS instrument, which uses a rotating telescope, testifies to this. Now let’s map this onto a 3D sphere in Photoshop.

modis world map 3d
Taking the Modis world map, applying Photoshop’s 3D globe feature and brightening it, we can trace back the scanning lines to two distinct locations – not one! This proves that these satellites do not travel from pole to pole around the Earth.

The fact that the viewing lines do not run through the geographic pole when mapped on to a 3D globe proves that neither Terra nor Aqua orbit the Earth from pole to pole. The exact location of the two positions is not 100% accurate, as the viewing swath widths are not the same angle throughout; and the straight extrapolated lines in white (and purple) projected on to a curved surface aren’t a completely true representation either. However, the lines still don’t, and can’t, meet. If we draw a circle around the two satellite positions, the centre of that circle is north-east of the geographic south pole by what looks to be about 2 degrees. Could these two satellites be orbiting around this mysterious centre instead?

Another possible problem with the traditional pole-to-pole explanation is that the map stops above and below -75°/+75° latitude. Certainly, if the satellites were polar orbiting, all latitudes should be visible. Dr. Brooks Agnew claims that since 2006 no satellite images beyond +60°N/-60°S are allowed, but this map shows up to 75°. One possible reason given is that often there is not enough light at the poles. This sounds like nonsense. For half the year, one of the poles is in complete 24-hour daylight, and Antarctica is covered by snow, which is very highly reflective (high albedo). Even water at the north pole is said to have a very high reflectivity compared to the very low albedo of water at other locations. Not to mention, these radiometers are ultra super-sensitive.

Polar satellites aren’t always in pairs; there is sometimes just one of them, e.g. Suomi NPP (VIIRS).

VIIRS

The VIIRS instrument is said to be a technological improvement on Modis, and only needs one satellite to see all around itself (all longitudes) and to high latitudes. That one satellite is the Suomi NPP polar satellite, which created the 2012 “Blue Marble” composite from 4 instruments altogether, overlaid. VIIRS also faces down on the bottom of the Suomi NPP.

images
The large VIIRS instrument is at the end of the satellite (white square object) looking directly down with a 360° view of the horizon and everything below.
viirs
I couldn’t find any manufacturing photo with the VIIRS instrument’s aperture wide open except a low resolution one from meted.ucar.edu (you have to sign up to perhaps see more).

If this satellite were stationary, then a part of its instrument must rotate in two directions (axes), e.g. its optical mirror; or in one direction, but with a wide-angle 180° fish-eye-lens type of field of view. The telescope rotates 360° on one axis.

The VIIRS design incorporates a SeaWiFS-like rotating telescope assembly which protects the optical components from on-orbit contamination. This will result in greater on-orbit stability than other designs. VIIRS also has a solar diffuser assembly with a stability monitor similar to MODIS for tracking on-orbit performance in visible wavelengths, and a MODIS-like black body calibration target for the infrared bands… A rotating off-axis and afocal TMA (Three Mirror Anastigmatic) telescope assembly is employed [Note: The telescope rotates 360°, thus scanning the Earth scene, and then internal calibration targets.]. The aperture of the imaging optics is 19.1 cm in diameter, the focal length is 114 cm (f/5.97). The VIIRS optical train consists of the fore optics (TMA), the aft optics [an all-reflective FMA (Four Mirror Anastigmatic) imager], and the back-end optics

+++

So if the satellite is stationary, it must have a fish-eye lens field-of-view. Unfortunately, when looking at this “cut-and-paste-in-space” image of the Suomi NPP with its aperture open, the lens hood “jaws” prohibit this.

npp_small
The open aperture blocks the required 180° FOV if the satellite were to scan the Earth from a stationary position.

If the sensor were really scanning between straight down (nadir) and up to the horizon line (as I propose is the real “orbiting” mechanism inside a concave Earth), then its field of view (FOV) would get wider the further away from the nadir, causing an overlap in the 3040 km swath paths. If the satellite were merely orbiting the Earth from pole to pole, the swath path across the Earth would be a uniform width with no change in viewing angle. It seems the former is the case.

At large angles from nadir (looking straight down), the swath (strip) width in the track direction is sufficiently large that detectors at the edges of the swath overlap the preceding or succeeding swath.

+++

The track along the curved Earth spreads out further along the horizon.
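The growth of one detector’s footprint away from nadir is just trigonometry. Here is a rough sketch under a flat-ground approximation; the 824 km altitude and ~375 m nadir footprint are the published Suomi NPP figures, taken on trust from outside the text above.

```python
# How one detector's ground footprint grows away from nadir
# (flat-ground approximation): footprint ~ h * IFOV / cos^2(theta).
# This is textbook scan geometry, not a claim about what the satellite does.
import math

H_KM = 824            # published Suomi NPP altitude (taken on trust)
IFOV = 0.375 / H_KM   # radians, giving a ~375 m footprint at nadir

for theta_deg in (0, 20, 40, 56):   # scan angle away from nadir
    t = math.radians(theta_deg)
    footprint_m = 1000 * H_KM * IFOV / math.cos(t) ** 2
    print(f"{theta_deg:2d} deg from nadir -> {footprint_m:5.0f} m footprint")
```

At the edge of the scan the footprint is roughly three times its nadir size, which is exactly the kind of overlap the quote above describes.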

swath_5min_1
The detected strip, or swath, or track curves out away from the satellite.

Unlike Modis, VIIRS was able to detect EM waves over some of Antarctica, but only up to about +60°N, looking at its early strips. This is probably because they were taken in late November, when the Arctic Circle was in darkness.

global_vir_2011328_lrg
First composite image from the Suomi NPP satellite. The northern hemisphere is only visible up to about +55/+60°N. This is understandable, as it was taken on November 24th, when most of the Arctic Circle is very dark. The cloud cover obscures Antarctica, making it difficult to see how much of the south pole is visible.
globe_vir_2013251
Another image shows the same lacking northern latitudes, and covers nearly all of the southern hemisphere, bar about half of Antarctica perhaps.

The non-visible wavelengths can “see” the entire -90° to +90° latitude range, as shown by the 2012 “black marble” image. The black marble does look very cartoonish. That’s because it is a cartoon. A satellite image is not a photograph – it is literally “paint by numbers“. More on that later.

City Lights 2012 - Flat map
A flat map projection of the 2012 black marble composite image shows all latitudes from -90°S to +90°N.
black-marble-3D
The black marble flat map projection projected onto a sphere using Photoshop’s 3D sphere tool, brightened and turned around, shows the whole of Antarctica. (Admittedly, the few degrees around the south pole may have been “filled in”.)

When they take a few of the 3040 km strips and project them on to a globe, the image looks very stretched and out of proportion. This is because the Earth is not a globe; they are forcing the composite image onto one. The swaths have no perspective, as we will see later.

VIIRS_Earth_4.Jan.2012
Take a portion of a few strips and collate them together on a spherical projection… and voila, a “close up” of the “globe Earth”. I sense a fraud in our midst.

There are other lone polar satellites; at least one of which Google Earth uses, so let’s have a look at them and see if we can find the orbiting mechanism.

OLI and TIRS

OLI (Operational Land Imager) and TIRS (Thermal Infrared Sensor) are two passive radiometers on-board Landsat 8, which was launched in 2013. OLI detects light in 9 narrow bands of visible and infrared wavelengths. TIRS detects longer thermal infrared wavelengths. Unique to OLI, and unlike any previously mentioned instrument and all earlier Landsat satellites, its sensor detects the whole swath at one time and doesn’t need a rotating mirror or telescope. The sensor is said to “…view across the entire swath (185 km wide) at once, building strips of data like a pushbroom.” The length of the swath is stated as 180 km.

800px-Landsat_Data_Continuity_Mission_Operational_Land_Imager_Instrument_Design
A diagram of the Operational Land Imager shows a downward facing optical viewport and one at right angles (horizon).
landsat 8
Landsat 8 with the covered OLI sensor visible.

In order for light to enter the aperture from the horizon view, there must be some reflective apparatus or mirror to direct the light 90° inwards. There are said to be diffuser mechanisms to block light off from either the horizon view or the Earth (nadir, or downwards) one. The Spectralon diffuser (page 6 of the PDF) is said to reflect the light from the horizon to the telescope mirrors. This way of reflecting light does seem to reflect all the incoming light from the horizon viewport. However, I can’t see how this can be used as an Earth detector without focusing light with a mirror; a diffuser spreads light in all directions. The Earth viewport (looking down) with its specular reflecting mirrors must be used; but how?

oli mirrors
The optical viewports and mirrors of OLI.

OLI looks to have a fairly narrow field of view (stated as 15°). If Landsat 8 were stationary at the south pole, then how are these 180 by 185 km strips acquired with no vertical or horizontal rotation of the inner workings of OLI and a very limiting 15° FOV? Let’s look at the Worldwide Reference System, which shows the (orbiting) “path” of Landsats.

wrs2
The Worldwide Reference System flat Earth map shows the path it takes between the red lines. The black dots are the immediate path/row, which I take to be the 180 by 185 km swath.

Now, let’s put this flat map on a 3D Earth globe using Photoshop’s 3D sphere effect. In a convex rotating Earth, the spiral pattern is said to be due to the Earth’s rotation underneath the satellite, which works out. In a stationary concave Earth, the red paths instead show something entirely different, namely that Landsat satellites orbit around the south pole, not travel up and down the Earth from north to south.

wrs2---oli3D
When the WRS is put on a sphere with Antarctica brightened, the “orbiting” paths of Landsat satellites travel in a spiral around the south pole. In a stationary Earth, they are CIRCULATING AROUND THE SOUTH POLE AREA.
wrs2---oli3D-north-pole
The same polar spiral pattern appears for the north pole also.

The paths stop at the top of Antarctica (82/83°S latitude), but if we continue them until the width between two red lines is one black dot, then the Landsat satellites sit somewhere tight around the south pole. This explains the horizontal rotation, but not the vertical. To explain the vertical rotation, the satellite must slowly spin around (like a bullet) clockwise with the south pole on the inside of the satellite, so that the Earth viewport turns north to south on the outside of the satellite. This has the same effect as if the satellite were travelling from the north pole to the southern one. It also matches the data set given by this animation, where the Landsat satellite obtains thermal images of the whole Earth over 16 days – the same as the Modis sensor (Aqua/Terra satellites), funnily enough. It is hard to tell whether the strips appear at once, or start from the top down due to the sped-up time effect.

landsat 8 data animation
Animation of Landsat 8 vertical swath strips from north to south over 16 days. The satellite revolves around the south pole in about 24 hours, and spins on its axis like a bullet with 15 revolutions per 24 hours.
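As a back-of-envelope check of these figures, taking the 24-hour revolution, the 15 rolls per day and the 16-day cycle stated above as the only inputs (this is arithmetic on the proposed mechanism, not a measurement):

```python
# Back-of-envelope check of the proposed mechanism: one revolution around
# the south pole hole per day, with 15 axial rolls per day, each roll
# sweeping the viewport once from north to south.
ROLLS_PER_DAY = 15   # Landsat 8 axial spins per 24 h (as stated above)
CYCLE_DAYS = 16      # days for full-Earth coverage (as stated above)

strips_per_day = ROLLS_PER_DAY              # one north-south strip per roll
total_strips = strips_per_day * CYCLE_DAYS  # strips per 16-day cycle
lon_step = 360 / total_strips               # longitude step between strips

print(total_strips)        # 240 strips per cycle
print(round(lon_step, 2))  # 1.5 degrees of longitude between strips
```

240 narrow strips, each stepped about 1.5° of longitude from the last, would tile the whole Earth over the 16 days.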

What is the mechanism for the rotation around the south pole and axial spin? If you have read this blog already, then you have probably already guessed that it can only be the holes near the poles.

Polar satellite orbiting mechanism

North pole hole
The north pole hole is said to be somewhere over 2 to 3 degrees off the geographic pole towards the Russian Siberian side (north-east of Greenwich longitude), but who knows? It also acts as a magnetic south pole. This means that, looking at it from the side, the magnetic current should spin anti-clockwise due to magnetohydrodynamics. Therefore, upside down in a concave Earth looking at the north hole, it should rotate clockwise. Common school theory states that magnetic current moves out of the magnetic north pole into the southern one. Therefore, the magnetic current should be moving into the geographic north pole hole, as well as rotating clockwise when looked at upside-down. According to this infamous video, it does just that. Whether the video is fake or not is for you to decide.

north-pole-hole
(Click to animate). The air current is moving into the north pole, supposedly. This clockwise movement is really anti-clockwise when viewed side-on in a concave Earth, which corroborates the east-to-west anti-clockwise direction of the Sun.
North-pole-hole
The north pole hole is rotating anti-clockwise when viewed from the side in a concave Earth, which is the same as the Sun.

South pole hole
There is an indication of this south pole hole when inverting Google Earth at the south pole. They are certainly hiding something. The excuse that it is just the map of satellite strips being projected onto a globe, leading to a singular point, doesn’t hold water, as there is a distinct discolouration defining the circle.

1-161-south-pole
A large circle obscures the south pole. If the circle is only a product of composite strips, why the distinctive discolouration which defines the circle?
south pole hole google earth
This YouTube video shows how it can be done.

How big is the hole? No idea. Theories abound as to the size, from 60 to 90 miles across (north pole hole), to 10 miles across as this poster claims, or 50 miles in diameter as this “insider” states. Who knows? It probably isn’t massive.

Where is the hole? It isn’t at the exact geographic south pole, because that is where the Amundsen–Scott South Pole Station is based. Luckily, we’ve been given a possible clue with the extrapolated locations of the Aqua and Terra satellites. If we rotate the 3D image of the Modis scanner lines to align with the map of the estimated location of the north pole hole, both holes sit in a fairly similar location relative to their respective geographic poles.

modis-world-map-3d-south-pole-hole-location
The black circle under the white cross is the geographic south pole. The yellow circle is the middle of the circle connecting the two extrapolated satellite locations which carry the Modis instrument. This is the south pole hole and looks to be around 1 degree further north and possibly 3 degrees East of the geographic south pole.
north-pole-hole-other-location
Rodney M. Cluff’s extrapolated north pole hole at 84.4 N Lat, 141 E. It is supposed to be over half as far away – at 87.7 N Lat, 142.2 E Lon – denoted by the yellow cross.

The south pole hole looks to be about 2 or 3 degrees east-north-east of the geographic one. The north pole hole is said to be about 2 to 3 degrees north-east of the geographic one, if we take the “insider’s” word for it – 87.7 N Lat, 142.2 E Lon.

The above is not proof of a south pole hole, only interesting anomalies that possibly indicate one. I have taken the southern hole to exist in my concave Earth thesis, and the movement of the magnetic current around this hole would be the opposite of the north one, as the geographic south pole is magnetic north. This would mean that the magnetic current (and air) is travelling upwards and rotating in an anti-clockwise direction. This updraft is what carries the satellite, causing it to rotate anti-clockwise with the upward flow. It reaches an altitude limit due to its weight and a possible decline in upward magnetic force with altitude. What this altitude is, is unknown to me. It could be a lot higher than the height of the glass sky – say 400 km – or it could be lower. If air is the only thing keeping the satellite buoyant, then the altitude would be much lower than 100 km. If it is the magnetic current itself, then it could be much higher, giving those passive radiometers an even greater horizon range than if they were positioned on the glass like GOES. Also, satellites of different weights and/or magnetic drives may sit at slightly different altitudes. There is also the question of ice/water/glass layer transparency. GOES, sitting on the glass looking down and around itself, has an extremely poor resolution of 1 to 8 km per pixel. Polar satellites can have a resolution under 1 m per pixel, especially military ones. Would these layers interfere with the images? Perhaps, perhaps not. There is a lot of post-processing involved, as we will see near the end of the article. It is still something to think about.

north pole hole reversed
(Click to animate). This is just the same video as above but reversed to show how the magnetic flow would look at the south pole hole.

The typical factors which cap a satellite’s height at the glass altitude either don’t exist or are much weaker at the poles. The thermosphere is said to be caused by the Sun’s radiation, which is by far the weakest at the poles. The Sun is never higher than about 23.5° at the south pole, with twilight/darkness for the other 6 months of the year. The Van Allen Belts are also not an issue, as they are said not to exist over the Arctic/Antarctic Circles. Also, at least one (exploratory) satellite has been labelled as having an “ion engine”. This is very reminiscent of Lifters and the Biefeld-Brown effect.

Hayabusa2_Auto1A
The Japanese Hayabusa2 “exploratory” satellite has an “ion engine” labelled.

This doesn’t mean that an ion engine is responsible for keeping polar satellites above the south pole, but I postulate that some kind of repelling magnetic drive probably is. The direction of the axial rotation, or roll, of the satellite would always be naturally clockwise with the south pole hole on the inside. Why? Pressure differential. The outside of a plasma/liquid/gas/magnetic vortex rotates at a much slower speed than the centre – an irrotational vortex. Therefore, the speed of the air on the outside of the satellite would be very slightly slower than on the inside. The higher inside speed would turn the satellite on its axis so that it rolls from the inside out. The Earth viewports on these satellites look like they are not able to view through this magnetic “pillar” shooting out of the hole, because satellites are very rarely able to detect light from the night-time Earth (except Suomi NPP). The radiometer can only detect visible light from outside the magnetic pillar. Therefore, the south pole hole is a horizon (visible light) blind spot, which also explains why there is a black circle over the approximate south pole area in some satellite images. The official reason is the satellite’s inclination, which, funnily enough, also fits with this theory. Weight may also play a role in the speed of a satellite’s axial rotation. POES rotates 14 times on its axis per day, whereas Landsat 8 rotates 15 times per 24 hours. Most rotate 15 to 16 times a day.
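A crude numerical sketch of the roll argument follows; the vortex constant, the satellite’s distance and its body size are all placeholder guesses, and only the sign of the result matters.

```python
# Speed differential across a satellite body in an irrotational vortex,
# where tangential speed falls off as v = k / r. All numbers are
# placeholder guesses; the point is only the sign of the difference.
K = 1.0e6        # vortex strength constant (arbitrary units, assumed)
R_KM = 222.0     # assumed distance from the vortex centre (~2 degrees)
BODY_KM = 0.004  # assumed satellite body width (4 m)

def v(r_km):
    return K / r_km   # irrotational vortex: speed ~ 1/r

inner = v(R_KM - BODY_KM / 2)   # face nearer the hole
outer = v(R_KM + BODY_KM / 2)   # face further from the hole
print(inner > outer)            # True: the inner face always moves faster,
                                # so the body is rolled from the inside out
print(f"difference: {inner - outer:.4f} (tiny next to the ~{v(R_KM):.0f} flow)")
```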

south-pole-satellite-orbit
All American polar satellites not only revolve anti-clockwise around the south pole hole, but also spin clockwise (seen from behind) on their axes like a bullet due to their drive’s repelling magnetic field.

Most satellite paths seem to suggest the natural clockwise roll, but a few demonstrate an anti-clockwise one, such as those in the DMCii catalogue of satellite images.

catalogue
The DMCii catalogue of 1000 satellite images for sale shows a north-west orientation rather than the usual north-east. In my theory, this is because the satellite has been rotated anti-clockwise on its axis, so the viewport moves from south to north. The only way the mainstream can explain this is if polar satellites orbit in the opposite direction as well, i.e. are launched to the north pole. (I couldn’t find much information on the web about this.)

DMCii is a subsidiary of Surrey Satellite Technology, which uses the Soyuz rocket from Baikonur, Kazakhstan (as well as Plesetsk and Yasny for Earth Observation – EO). The “absolute majority of missions from Baikonur head east and northeast”, supposedly. So we are to assume that their polar rockets travel north-east. Why north-east and not north isn’t clear, except that they say it is to avoid China. If the rocket goes north to south around the dark side of the Earth, it will go south to north around the daylight side. Because a few images show a north-west slant rather than the usual north-east one, according to the mainstream theory there must be a few satellites travelling from the south pole to the northern one during the day, as the rotation of the Earth is always in one direction. A minor spanner in the works for this mainstream theory of south-to-north orbiting is that all satellite images are the correct way up. South is always at the bottom of the image, and north at the top. I have yet to find an exception, but one may still exist. Maybe every image is corrected so that it is the “correct way up” before being presented. Perhaps the sensors are the other way round in the south-to-north satellites.

Image 2 Landsat Map East Delta
A Landsat satellite image of the Nile Delta with the typical north-east slant of polar satellites.
north-west slant nile delta
Another image of the Nile Delta from DMCII with the north-west slant.

In a concave Earth, all American polar rockets are going to the south pole from southern California, but the British send theirs from central Asia to the north pole. The geographic north pole is magnetic south, so north polar satellites will also have a magnetic south drive to repel against the north pole hole. When looking at either pole from the side in a concave Earth, the magnetic field is spinning anti-clockwise, and so both types of satellite (north and south pole) will move anti-clockwise around the magnetic field. Both satellites are also repelled by the same field; therefore the south pole satellite spins clockwise on its axis, whereas the north pole one spins anti-clockwise. This gives us the north-east slant of the American satellite, and the north-west one of the British satellite, as seen in the above two images of the Nile Delta.

north-pole-satellite-orbit
All British polar satellites not only revolve anti-clockwise around the north pole hole, but also spin anti-clockwise (seen from behind) on their axes like a bullet due to their drive’s repelling magnetic field.

This is a strong indication that polar satellites have some kind of repelling magnetic drive rather than using air currents as a means of buoyancy, as the air and magnetic h-field move into the north pole hole. Without a repelling magnetic drive, the north pole satellite would be carried into the hole along with them.

Sun-synchronous orbits
Also, most polar satellites are said to be in sun-synchronous orbits. This means that “the orbits are designed so that the satellite’s orientation is fixed relative to the Sun throughout the year”. Wiki defines it as “each successive orbital pass occurs at the same local time of day”. Both these definitions fit the concave Earth polar orbiting mechanism perfectly. How?

What determines the time of day in the concave Earth? The revolution of the Sun around the centre. Specific to my thesis, what causes this revolution? The magnetic H-field revolving anti-clockwise from the geographic south pole into the northern one. Therefore, anything revolving with the h-field (magnetic current from the south pole) is sun-synchronous! It would also take 24 hours to make one complete pass of the Earth… and voilà, it takes a polar satellite 24 hours to scan the entire Earth.

Because the Sun is around 1° away from the centre on the horizontal axis, polar satellites are also about 1° from the centre of the pole hole, at least according to my thesis. (It could even be as close as 0.5°, depending on refraction.) The only difference is that the Sun is about 1° behind the central axis, whereas the satellite is about 1° in front of it. We can guess-culate the amount by taking the “insider’s” 50 mile (80 km) approximate diameter of the hole. The north-south radius of the WGS84 model of the Earth is 6356.752 km, giving a circumference of (6356.752 × 2 × 3.14) = 39,920.4 km. A quarter of that is the latitude distance from the equator to the south pole: 9980.1 km. Divided by ninety degrees, that is 110.89 km per degree. 80.4672 km (the approximate diameter of the hole) divided by 110.89 is 0.7256. This would mean that the Sun is 0.725° away from the centre if it is following the edge of the magnetic h-field in the middle of the cavity. Of course, 80 km is only an approximation. If the hole were 90 km wide, the Sun would be 0.81° away from the centre. These figures are extremely close to my approximate calculated distance of the Sun from the centre using completely different data, mentioned in another article. Coincidence? Probably not.
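For anyone who wants to check it, the guess-culation reduces to a few lines; the only inputs are the WGS84 polar radius and the two candidate hole diameters. (The paragraph above used π ≈ 3.14, which is why its 0.7256 differs from the 0.7253 below in the fourth decimal.)

```python
# The "guess-culation" above, step by step.
import math

R_POLAR_KM = 6356.752  # WGS84 north-south radius of the Earth
circumference = 2 * math.pi * R_POLAR_KM   # ~39,940 km with full pi
km_per_degree = (circumference / 4) / 90   # equator-to-pole quarter / 90 deg

for hole_km in (80.4672, 90.0):  # the ~50 mile "insider" figure, and 90 km
    offset = hole_km / km_per_degree
    print(f"{hole_km:.1f} km hole -> Sun about {offset:.3f} deg off centre")
```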

But wait. You are probably thinking that in my theory the Sun revolves around the central axis of the Earth, but the south pole hole has been estimated to be 2 or 3 degrees away from this axis. How can the two be in sync? Theoretically, the simple explanation would be that a vortex quickly moves to the area of least pressure inside a container, which is the exact centre of the sphere. Here are a few photos of water vortices which bend initially before straightening out. I believe the magnetic vortex from the south pole hole to the north pole hole behaves similarly.

Brcez
This water spout changes direction.
403px-Sog
An artificial vortex also changes direction to flow slightly to the side of the opening, but still vertical.

This means that at whatever altitude the satellites are located, as long as they are at the same distance from the centre of the vortex as the Sun, they will be sun-synchronous. Wiki also states,

“A polar orbit is one in which a satellite passes above or nearly above both poles of the body being orbited… Near-polar orbiting satellites commonly choose a Sun-synchronous orbit… To retain the Sun-synchronous orbit as the Earth revolves around the Sun during the year, the orbit of the satellite must precess at the same rate, which is not possible if the satellite were to pass directly over the pole… A satellite can hover over one polar area much of the time, albeit at a large distance…”

+++

So, to remain sun-synchronous, satellites have to be “near” polar orbiting. Of course they do. The Sun is also pushed around by this magnetic current exactly like the satellite, but in the centre of the cavity instead. Also, a satellite is said to be able to hover over the pole area for much of the time. In a concave Earth, the natural axial rotation (roll) of a south polar satellite is clockwise, but the difference in pressure between either side of the satellite is probably so small that, with enough supply of pressurized gas, there is no reason why the satellite could not be nudged to roll anti-clockwise, or even halt its roll entirely for a limited time period. At its altitude, there would be no air resistance.

Gas-sys-Medium
Pressurized gas cylinders used “with spacecraft pitch control, provides a wide range of manoeuvring capabilities.”

If a satellite revolves more tightly around the pole hole (nearer than the Sun’s distance to the central axis), it will revolve around the hole a lot faster, because an irrotational vortex (liquid, gas, plasma) spins faster at its centre than at its periphery. Therefore it will lap much faster than the Sun and cannot be sun-synchronous. Some remote sensing polar satellites take one to two days to view the entire surface of the Earth, such as Terra and Aqua. These satellites are obviously also not sun-synchronous, because they move slower than the Sun, and so must be further out from the south pole hole. Everything falls into place.
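To put a number on that: in an irrotational vortex the lap time grows with the square of the radius, so a small change in distance from the hole makes a large change in period. The constant and radii below are arbitrary; only the ratios matter.

```python
# In an irrotational vortex (v = k/r), one lap takes T = 2*pi*r / v
# = 2*pi*r^2 / k, so lap time grows with the SQUARE of the radius.
import math

K = 1.0e6  # vortex strength constant (arbitrary units, assumed)

def lap_time(r):
    return 2 * math.pi * r ** 2 / K

sun_r = 1.0                # the Sun's assumed offset (relative units)
for r in (0.5, 1.0, 2.0):  # tighter in, sun-synchronous, further out
    ratio = lap_time(r) / lap_time(sun_r)
    print(f"r = {r}: lap takes {ratio:.2f}x the Sun's lap time")
```

At half the Sun’s radius a lap takes a quarter of the Sun’s time; at twice the radius, four times as long, which is the kind of slow-down the one-to-two-day coverage of Terra and Aqua would suggest in this model.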

It isn’t the Earth that is moving under a polar satellite travelling at 90° to the equator, pole to pole. It is the magnetic current moving around the Earth which also rotates the satellite on its axis at 90° to the equator!

+++
This is why all polar satellites launched from Vandenberg Air Force Base are sent to the south pole. Don’t forget, the south pole is considerably further away from Vandenberg than the north one, yet there they go… and it isn’t because they like to look at penguins. In fact, certain other instruments may be able to detect certain electromagnetic wavelengths through the magnetic pillar, from both the night-time Earth side and from inside the hole. Normally polar satellites do not look at the dark Earth side, except for Suomi NPP with the new VIIRS sensor. It “detects light in a range of wavelengths from green to near-infrared and uses filtering techniques to observe dim signals such as city lights, gas flares, auroras, wildfires and reflected moonlight” to create the Black Marble 2012 composite image. “Filtering techniques” to observe dim signals? In other words, they can separate the dark-side EM waves from the interfering energy of the magnetic pillar.

There is also the very likely possibility that the military is looking down through the hole, possibly using this kind of sensor and other ones. It is said that a lot of UFOs come out of the holes near (or at) the poles, which may be a reason for observation. Also, there would probably be a limited view of a little flora and fauna inside the hole as well. However, in my opinion, the main reason for observation would be to detect the amount and direction of the magnetic current flowing upwards so they have an early warning system for any of the cyclical cataclysms that are said to involve the Earth.

Cyclical apocalypse theory
The Earth’s magnetic poles flip every so often (“when” is up for debate). In my theory, the reversal of the magnetic flow would cause massive problems for the Earth. Firstly, the slowing down of the magnetic h-field weakens it. If the Earth’s shape relies on this h-field, then when it goes, so too goes the force keeping the Earth together, and the tectonic plates can freely move around. The h-field dies first, then the Sun powering the Earth’s electric circuit. I will later theorize gravity to be the downward positive charge from Telluric currents, due to the Sun periodically charging the Earth with negative charge. The Earth is said to theoretically lose its negative charge after one hour if there is no input from the Sun. This means that the now freely moving tectonic plates are still being pushed by gravity without the h-field to hold them in place. The Earth now expands. With no h-field, the Sun switches off and, after an hour or perhaps several, Telluric currents stop and there is now no gravity. This means there is now no pressure. All the heavy liquids (water, especially semi-heavy and heavy water) kept below the surface by gravity could break out of the deep and flood the Earth. If the magnetic reversal weren’t quick, there would be a continued loss of heat because the Sun is still “switched off”, and the waters would freeze; although with the ventilation system of the holes near the poles not flowing, maybe less so. When the magnetic flow establishes itself in the other direction, everything returns to normal, but now the Sun rises in the west and sets in the east.

How long the Sun was “out” determines the re-making of the Earth. Each cycle could wipe out all civilisation, with the Earth becoming drastically larger each time. Perhaps this is the natural way of things. Mankind reproduces very quickly and soon gets out of hand, polluting his own bed in the greenhouse. When a crop becomes too overcrowded, it needs to be harvested, the bed cleaned, and then re-seeded… or perhaps that is too negative a perspective. Unfortunately, with all the myths around the world of global cataclysms (even small ones like the Navajo Indian tale), this constant restarting does look to be the way of things.

If those in the know want to re-seed themselves for the next cycle, they will have their own “arks” of whatever form they may be. But they need to be informed as soon as possible to give them a chance to man the lifeboats. This is where the tightly orbiting polar satellites come in. Both pole holes and the Sun will be monitored around the clock, especially the south pole hole, as that is the first to change at the end of a cycle (because the magnetic flow enters this hole from outside). It is also interesting that all but two types of the tightly orbiting polar satellites are military/government only… at least according to the UCS satellite database. The two non-military/government types are the Doves and Graces. However, Doves are still said to be sun-synchronous and therefore not true polar orbiting.

How does light work in a concave Earth to enable satellites at the south pole to see all the way up to the north one?

Light and polar satellites

A satellite imager (radiometer) looks to be similar-ish to a digital camera (also a radiometer), but with a huge Cassegrain telescope and, in the case of OLI, 7000 sensors instead of the one used in a camera. We’ve also seen that they are super-sensitive and have massive amplification (ISO) boosters on-board the satellites. Their altitude could even be higher than the glass layer height, i.e. above 100 km. These five factors – magnification, sensitivity, a massive sensor array, amplification, and very high altitude – may enable these imagers to detect light at super-vast distances, nearly fully around the world to the north pole. The Copernican horizon principle is irrelevant in a concave Earth. There are already many horizon-breaking real-life examples to consider, especially over water. Only with increased magnification and/or amplification (ISO) could the uber-distant invisible objects become visible again. The question, of course, is what is this horizon limit? Is there no limit if the detecting machine is sensitive enough, or can amplify the light signal enough for it to be detected? There doesn’t seem to be, if satellites remain at the south pole. This needs to be tested with cheap high-magnification cameras on high ISO settings, obviously to the limit of the camera. The intensity of light radiation is radial and decreases with the square of the distance from the centre – the inverse square law (like all other energy phenomena, funnily enough). Therefore, theoretically, in a concave Earth the entire Earth could be observed if the machine were able to detect such weak radiative waves. Also, light bends upwards towards the centre(ish) in a concave Earth, theorized to be due to the negatively charged Earth, so we need to throw that into the mix too.

point-of-light
A point of light reflected at the north pole becomes weaker radially. The higher the detector is at the south pole, the less sensitive it needs to be.
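The inverse square law part of the claim is easy to quantify: the required amplification grows with the square of the range. The ranges below are arbitrary illustrations, not measured figures.

```python
# Inverse-square falloff: the gain a detector needs in order to register
# a point source at increasing range. Pure textbook radiometry; the
# ranges are arbitrary illustrations.
REF_KM = 1.0  # range at which the source is comfortably detectable

for d_km in (10, 1000, 20000):
    gain_needed = (d_km / REF_KM) ** 2   # intensity falls off as 1/d^2
    print(f"{d_km:>6} km -> signal must be amplified about {gain_needed:.0e}x")
```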

Interestingly, in this model all light rays flatter than 45° would perhaps have to curve back and spiral into the centre. Could this be the beginnings of an orbiting model for a concave Earth? Also, the greater the lateral distance from the point source, the less the light ray bends. According to radio technicians, the lower the frequency, the further away it can be received. This doesn’t necessarily mean that radio waves bend less than visible light, just that they could be easier to receive (detect) at weaker energy levels than shorter wavelengths such as visible light. It may be as simple as that.

This explains the old argument against a concave Earth: “I live in the UK. When I look up at the sky, why don’t I see Japan on the other side of the world?” If your eyes were as high, as sensitive, and as amplified as a satellite’s radiometer, then you would; but only by looking at the horizon, not up. It also explains why the horizon is always at eye level no matter the altitude (although this topic is slightly controversial). And it demonstrates that the higher the altitude of either the observer or the object, the further the observer is able to see at the same bend, or strength, of light. It seems to fit well.

So the concave earthers are correct when they say light bends; and so are the flat earthers when they say that the eye isn’t sensitive enough to see beyond its own limited horizon. This also explains Steven’s video, where he shows the exact same variably curved horizon line which we usually see at increasing altitude. He couldn’t input bending light into his graphics programme, but mimicked the same effect by adding a “fog”, i.e. reducing visibility. In the theory I have just described, this is sometimes the same thing.

horizon fog
Adding fog, i.e. reducing the distance visibility, in a concave Earth produces the exact same variable horizon line we often see in high-altitude balloon videos. Coincidence? I don’t think so.

So, how do we conceptualize the bendy-light concept with weakening intensity away from the source (fog)? Not easily. The diagram would perhaps look a little like the magnetic fields on the side of a bar magnet, except all “parallel to the Earth” light rays would be travelling towards the centre. (This isn’t necessarily to point out a possible connection between magnetism and light, just that the pattern would be similar.) The further away from the source, the weaker the light and the straighter it would have travelled when running parallel to the Earth.

photo lines_of_force_2.gif
Magnetic lines of force.
bendy-light-fog
Light travelling parallel to the ground would travel in a similar shape to magnetic field lines travelling from pole to pole, but all heading towards the centre of the Earth. The further the distance, the straighter the light.

This super long horizon also works for other active sensors such as SAR (radar) and above all Iridium satellites.

Iridium

Iridium is a network of polar satellites providing a worldwide (but expensive) phone service. There are three main panels full of antenna arrays (MMAs) in a tripod shape, which link to the ground and fold out from the side of the satellite.

9_d593ada2464ef3a23ef9c8f00e2a7df02
An Iridium satellite diagram with the main antenna panels labelled.
800px-Iridium_Satellite_2
These 3 panels full of antenna arrays have a very wide coverage.

The flat panels look to be an array of patch antennas. According to Surrey Satellite Technology, these offer “Wide coverage… provides as wide a field of view as possible when looking from the satellite. It produces a cardioid pattern.” A cardioid pattern is a spherical (sometimes doughnut-shaped) lobe which offers a full 180° field of view from the patch antenna.

image_93932
The range of a patch antenna is very similar to the sound pick-up zone of a microphone (cardioid pattern).
text_image_para_half-0cb886bcd47c6147f9114348d544c914-e6735b3b-e6eb-45fb-9fff-9e259ac33e16
Two microphones back-to-back show how a flat patch antenna has a full 180° spherical field of view.
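The cardioid pattern the quote mentions has a standard formula: relative gain = (1 + cos θ) / 2, where θ is the angle off the direction the panel faces. A few sample values show why it amounts to a 180° field of view.

```python
# Cardioid pick-up pattern, as used for microphones and (per the quote
# above) patch antennas: gain = (1 + cos(theta)) / 2, where theta is the
# angle off the direction the panel faces (boresight).
import math

for theta_deg in (0, 45, 90, 135, 180):
    g = (1 + math.cos(math.radians(theta_deg))) / 2
    print(f"{theta_deg:3d} deg off boresight -> relative gain {g:.2f}")

# Gain only reaches zero directly behind (180 deg), so the usable field
# of view covers the full hemisphere in front of the panel.
```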

Instead of orbiting from pole to pole as in the heliocentric model, these satellites rotate around the south pole hole. Sunlight reflecting off their main antenna arrays is said to be responsible for the white-dot flares that have been seen moving across the sky (Iridium flares). This is not possible in a concave Earth, because they are moving around the south pole hole instead, making asteroids/meteors an even likelier candidate. There are said to be 66 of them, but the Globalstar (another satellite phone company) map looks to show half (22) of their stated number (44). Perhaps it is the same for Iridium, and they really only have 33 satellites moving around the south pole, not 66. This seems to be the case, as the next-gen Iridium satellite network is said to have 36 active and 30 spare satellites. They say there are 11 satellites in each of the 6 planes, with only 6 active ones per plane, for the current network. This means that in a concave Earth there are 6 active satellites in a hexagon shape around the south pole hole. Each plane would be separated by slightly different altitudes and angles in relation to the other planes, so that the maximum possible angles are covered.

The satellites don’t communicate with each other via the MMAs, but via other antennas. At least the next-gen Iridium satellites will. They’ll have “Four 23 GHz crosslinks to adjacent Iridium NEXT satellites for relay communications”. The above photo of one of the present (old) Iridium satellites shows the crosslink antennas at the front end of the sides. Notice, in both the labelled diagram and the photo, the crosslink antennas protrude away from the side slightly towards the front face. All these crosslink patch antennas must connect, which helps us determine the satellite orientation within the hexagon shape. The satellites can’t be facing outwards away from the centre of the south pole hole, as the antennas don’t look able to connect with each other this way, unless they only communicate vertically, which is possible. Also, more thruster fuel would be needed to stop the satellite constantly wanting to slowly tumble head over heels in the pressure differential of the vortex. Therefore, the satellites are probably slowly spinning on their axes with the south polar vortex, as seen below.

iridium
The 6 Iridium satellites in each plane would be in this orientation to be able to communicate with each other.

Some thrusters are needed to keep each satellite stable within the vortex, so that it maintains its position in relation to the others and the other planes, and spins around its axis rather than being pushed into a head-over-heels tumble.

About a dozen of the Iridium satellites are tumbling, or no longer in their intended position. There isn’t much chance they’ll crash into each other, those familiar with satellites say, but a tumbling satellite can’t easily be controlled from Earth. “The Iridium satellites have thrusters on board, which they normally use to maintain their orbits and keep them in the right position,” says Hoyt of Tethers Unlimited, whose company is testing a type of satellite tether.

+++

Mmmm, tethers eh? Is this so they will be able to winch them down after use? Because those crosslink antennas are patch ones, they have a spherical range, which also allows them to communicate with the satellites positioned on the other planes. Lastly, the satellites have to communicate with an earth station. This role looks to be done by the few gateway antennas on the front face of the satellite. The Earth station is stationary and the satellites are moving, so the front satellite antennas will have to move to maintain a link with the station. The photos do seem to show the gateway antennas on a rotation mechanism, at least on one axis, maybe two. Rotation mechanisms look to be quite common for satellite antennas (especially on polar satellites). Surrey Satellite Technology manufactures them for its horn antennas, which can rotate on two axes.

18dBi-APM
The rotating mechanism for a horn antenna can rotate up to 270° on one axis and 110° on another.
horn antennas
We can see the two rotating antennas attached to this optical satellite (probably a polar satellite rather than a geostationary one) to get a sense of size.

Next-generation Iridium satellites carry all their antennas on the bottom, rather than at the front. This means that the satellite is tipped on its rear, with the antenna panel looking out, facing the horizon. The satellite will then rotate on its axis like a windmill every 101 minutes, with the antenna panels always facing towards the Earth and the horizon.

IRDM_IridiumNEXT_satellite-configuration_FINAL
The next-gen Iridium satellites carry all their antennas on a side that will face the Earth and the horizon, very slowly spinning around like a windmill (101 minutes per revolution).
iridium next gen diagram
The four inter-satellite antennas face up and down at an angle, connecting the Iridium network vertically, not horizontally along one plane.

The crosslink antennas are also at an angle, with two looking up and two facing downwards. This makes communication along one plane unlikely; instead they communicate vertically from plane to plane rather than across it. Because of all this cross-linking, and the fact they are all around the south pole, this network will be slow compared to Globalstar or another geostationary phone network. This seems to be true, at least anecdotally:

Iridium is more expensive (see prices below) and has a much more extensive satellite network than Globalstar, yet surprisingly I found that the Globalstar gets me quicker connections in Colorado valleys where all satphones have a difficult time, and I get fewer dropped calls.

+++

You may now be thinking: hang on, what about satellite images? They look like they are taken from above, not from a long horizon shot. How come? Let’s take a look at satellite images and find out what they really are by first researching radiometers.

Satellite images

Each radiometer sensor detects light in multiple wavelength bands (e.g. seven for the Landsats), but each of those bands is very narrow, hence their name: multispectral radiometers. Also, each sensor is one pixel. A digital camera sensor, by contrast, can only detect one wavelength range per pixel (and all pixels sit on a single sensor), which is either red, green or blue (RGB), and sometimes cyan. In a digital camera, the missing colour values for each pixel are then estimated using an algorithm (this is CFA interpolation; see also page 4 of the PDF).
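
To make the camera comparison concrete, here is a minimal sketch of CFA interpolation for the green channel, assuming an RGGB Bayer layout and simple neighbour averaging; real demosaicing algorithms are considerably fancier:

```python
import numpy as np

# A minimal sketch of CFA interpolation (green channel only), assuming an
# RGGB Bayer layout: red samples at (even row, even column), blue at
# (odd, odd), green everywhere else. Missing green values are estimated
# by averaging the four neighbours, as basic bilinear demosaicing does.

def green_channel(mosaic):
    h, w = mosaic.shape
    green = mosaic.astype(float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if y % 2 == x % 2:  # red or blue site: no green sample here
                green[y, x] = (mosaic[y - 1, x] + mosaic[y + 1, x] +
                               mosaic[y, x - 1] + mosaic[y, x + 1]) / 4.0
    return green

# Example: a tiny 4x4 mosaic of raw sensor counts (border pixels left raw).
raw = np.array([[200,  50, 210,  55],
                [ 60,  30,  65,  28],
                [205,  52, 215,  58],
                [ 62,  31,  66,  29]])
print(green_channel(raw))
```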

landsat imagery
Each pixel (sensor) is assigned a number which is translated into a brightness. No matter the wavelength band, these pixels are always black and white. The variation of brightness is called radiometry.
multi band image
Each pixel assigns a brightness number for each wavelength band that it detects.

The images are only black and white and so colour is added afterwards by a computer algorithm. The colour depends on the wavelength band. So a reddish band in the visible light spectrum will be different shades of red. The same principle is applied to invisible bands like infra-red.

colour image
A supposed true colour image: the red band is defined as red.
false colour image
A false colour image where an infra-red band is coloured red.
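
The band-to-colour assignment itself is simple to sketch in code. Below is a hedged illustration of stacking three single-band brightness images into a colour composite; the band variable names are purely illustrative:

```python
import numpy as np

# Hedged sketch: a colour composite is just three single-band brightness
# images stacked as the red, green and blue channels. Assigning an
# infra-red band to the red channel gives the classic false-colour look
# described above.

def composite(band_for_red, band_for_green, band_for_blue):
    def stretch(band):                       # normalise each band to 0..1
        band = band.astype(float)
        lo, hi = band.min(), band.max()
        return (band - lo) / (hi - lo + 1e-9)
    return np.dstack([stretch(band_for_red),
                      stretch(band_for_green),
                      stretch(band_for_blue)])

# "True" colour:  rgb = composite(red_band, green_band, blue_band)
# False colour:   rgb = composite(near_ir_band, red_band, green_band)
```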

But all these images above and what we see on the internet are post-computer processing. What does this processing entail? First comes the visual interpretation, which means being able to recognise the terrain in the image whether a street, a town, trees, soils etc. Usually this is due to the different wavelengths reflected by the different terrain – vegetation is green, soil is brown, concrete is grey and so on. Aerial photos, maps and previous interpretations of other images are often used. Using known reference points increases accuracy. (Thanks ICfreely)

DigitalGlobe, provider of imagery for Google Earth, will be launching a new satellite dubbed WorldView I next Tuesday, that will boost the accuracy of its satellite images to half-meter resolution. With that type of accuracy the satellite will now be able to pinpoint objects on the Earth at three to 7.5 meters, or 10 to 25 feet. Using known reference points on the ground, the accuracy could rise to about two meters.

+++

They need markers outside the satellite image, such as a map, to even recognise what the image is actually showing and where each pixel should be. This is not a digital photo (like an aerial photograph) by any stretch of the imagination. After the image is visually interpreted, it is enhanced radiometrically and geometrically. Radiometrically adjusting an image means changing the brightness of the individual pixels. Basically, in most cases it seems to mean increasing the brightness overall.

radio-enhancement
A radiometrically altered image after and before.
brightened image
Another image brightened by a computer algorithm.
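
As a rough idea of what such brightening might involve, here is a sketch of a simple linear contrast stretch, one common radiometric enhancement; whether any particular published image was brightened this way is an assumption:

```python
import numpy as np

# Sketch of a linear contrast stretch: pixel brightness is remapped so
# the 2nd percentile becomes 0 and the 98th becomes 255 (clipping the
# outliers). This is one common radiometric enhancement method.

def linear_stretch(img):
    lo, hi = np.percentile(img, (2, 98))
    out = (img.astype(float) - lo) / (hi - lo) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)
```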

The image is also geometrically aligned for a variety of reasons, one of which is oblique viewing.

These procedures include radiometric correction to correct for uneven sensor response over the whole image and geometric correction to correct for geometric distortion due to Earth’s rotation and other imaging conditions (such as oblique viewing)… The geometric processing of SPOT of IRS data includes development of a methodology for implementation of three across-track geometric correction (i.e. the corrections due to oblique viewing, earth curvature and earth rotation).

+++

Oblique means at an angle, not from the top down (nadir). This is how our eye and a camera see the Earth from a high altitude when we look out to the horizon. Despite the satellite at the south pole being many, many times higher (possibly hundreds of kilometres) than the observed object, the north pole will be the most oblique (fairly near the eye-level horizon line); whereas the southern hemisphere will show much less obliqueness, i.e. the equator will be less than halfway between the nadir and the horizon (less than 45° because of altitude). Interestingly, some remote sensing images are “seriously oblique”. There are papers showing how the Piecewise Polynomial Model is the most accurate when “correcting” such images.

A paper from geospatialworldforum.org has an interesting paragraph on the subject:

Geometric distortions manifest themselves as errors in the position of a pixel relative to other pixels in the scene and with respect to their absolute position within some defined map projection. If left uncorrected, these geometric distortions render any data extracted from the image useless. This is particularly so if the information is to be compared to other data sets, is it from another image or a GIS data set. Distortions occur for many reasons. For instance distortions occur due to changes in platform attitude (roll, pitch and yaw), altitude, earth rotation, earth curvature, panoramic distortion and detector delay. Most of these distortions can be modelled mathematically and are removed before you buy an image. Changes in attitude however can be difficult to account for mathematically and so a procedure called image rectification is performed. Satellite systems are however geometrically quite stable and geometric rectification is a simple procedure based on a mapping transformation relating real ground coordinates, say in easting and northing, to image line and pixel coordinates.

+++

“Earth rotation” is really satellite rotation in a concave Earth. “Earth curvature” should not be visible (at least in visible light wavelengths). From a height, we all see the Earth as a straight plane; although 19th century balloonists a mile high saw it as concave when looking down and fully around themselves. If it is not the Earth’s concave curvature that is “corrected”, then perhaps it is the obliqueness instead.

IraqMideastAVHRR1990
An AVHRR image of Iraq region. Look how the top half (Turkey) looks like it is bending upwards. Is this because the bottom of the image is in a valley (bowl), or is this an indication of the “curvature” of the Earth that hasn’t been fully corrected?

More interesting is that there needs to be “altitude rectification”. Yes, in a concave Earth different satellites could have different south polar hole altitudes due to weight, but that variation would likely be small. If we are comparing images from the same satellite, then different locations will have different distances from the satellite. These will need to be rectified to make sure the “altitude” remains roughly the same, as if the satellite were looking down from the nadir (or near nadir) on a flat surface for every swath.

Rectification is a process of geometrically correcting an image so that it can be represented on a planar surface, conform to other images or conform to a map… Ground Control Points (GCP) are the specific pixels in the input image for which the output map coordinates are known. By using more points than necessary to solve the transformation equations a least squares solution may be found that minimises the sum of the squares of the errors.

+++
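
The least-squares fitting the quote mentions is easy to illustrate. The sketch below fits an affine mapping from image (pixel) coordinates to map coordinates using five invented Ground Control Points; all the numbers are made up for illustration:

```python
import numpy as np

# Sketch of the least-squares rectification described in the quote above:
# given more Ground Control Points (GCPs) than strictly necessary, fit an
# affine mapping from image (pixel) coordinates to map coordinates.

# GCPs: (column, row) positions in the raw image...
pix = np.array([[10, 12], [480, 15], [25, 500], [470, 495], [250, 250]], dtype=float)
# ...and their known map coordinates (easting, northing).
map_xy = np.array([[1000.0, 5000.0], [1940.0, 5010.0], [1030.0, 4030.0],
                   [1925.0, 4045.0], [1480.0, 4520.0]])

# Affine model: [easting, northing] = [col, row, 1] @ A, solved by least squares.
design = np.hstack([pix, np.ones((len(pix), 1))])
A, *_ = np.linalg.lstsq(design, map_xy, rcond=None)

residual = design @ A - map_xy
print("Affine coefficients:\n", A)
print("RMS error over the GCPs:", np.sqrt(np.mean(residual ** 2)))
```

Using more GCPs than unknowns, as the quote says, lets the errors average out rather than forcing the transform through any single (possibly misplaced) point.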

GCP for rectification
Ground Control Points are used to “rectify” an image of different altitudes.

Also, altitude calculations done by software such as Google Earth or Earth Explorer have nothing to do with the stated altitude of the satellite. When you zoom in or out in this imaging software, it is collating the graphic together from the base resolution image, e.g. Landsat uses 30 x 30 m per pixel. The true original “altitude” is the swath width, e.g. for OLI it is 180 by 185 km. Everything else is a calculation by the software. Even the original image does not show real altitude. IKONOS has an 11 km swath, OLI has 180 by 185 km, and MODIS has a 2,330 km area width. The variation in their altitudes is not that large, if it exists at all. The difference in swath widths lies in the magnification (for the increased resolution) and the field of view. We don’t know how high the satellites really are.
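
A quick worked example shows why: the same swath is consistent with many altitude and field-of-view combinations, so the image alone pins down neither. Only the 185 km swath below comes from the text; the altitudes tried are arbitrary:

```python
import math

# The same swath width can be produced from many altitude / field-of-view
# pairs, so the image alone cannot reveal altitude. The 185 km swath is
# the OLI figure quoted above; the altitudes tried are arbitrary.

swath_km = 185.0

for altitude_km in (300.0, 700.0, 1500.0):
    # Full field of view producing this swath from that altitude,
    # assuming simple flat geometry (an illustrative assumption).
    fov_deg = 2 * math.degrees(math.atan((swath_km / 2) / altitude_km))
    print(f"altitude {altitude_km:6.0f} km -> field of view {fov_deg:5.1f} deg")
```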

What all these visual interpretations, geometric corrections, and altitude rectifications mean is that a satellite image does not have true perspective. It is not an aerial photograph. These imagers are called either earth sensors or horizon sensors for a reason. Let’s compare an aerial photograph of St. Peter’s square with a high resolution post-processing satellite image of the same location to see the difference.

st-peters-square-comparison
The perspectives are wrong when comparing an aerial photo of St. Peter’s Square with the equivalent satellite image.
satellite-view-of-st-peters-basilica-and-square
The original IKONOS 1m resolution satellite image.

In both the aerial photo and the satellite image, the spire points north, showing that the image taker is positioned slightly to the south. However, only in the aerial photo do we see the south side of the dome as the most visible, as we would expect from a southern viewing angle. In the satellite image, despite the spire being in line with the dome, the north side of the dome is the most visible, as if the image taker were positioned slightly north. We don’t see the north side of any other buildings in the satellite image either. This is a false perspective and shows that satellite images do not show a satellite’s true position at all.

Because everyone associates Google Earth with satellite images, let’s analyse this software in detail separately, in light of what we have already found out.

Google Earth

Could Google Earth be merely aerial photography? There is a lot of truth in that:

Although Google uses the word satellite, most of the high-resolution imagery of cities is aerial photography taken from aircraft flying at 800 feet (240 m) to 1,500 feet (460 m); however, most of the other imagery is from satellites… (Horse’s mouth) – Google offers high resolution imagery for thousands of cities, and more are on the way. Most of this imagery is approximately one to three years old and provides an aerial view about 800-1500 feet from the ground… Satellite imagery does not necessarily appear in the same resolution — for example less populated areas often appear with less detail.

+++

This seems to be true if we compare the two images below: one of somewhere random in northern Canada, and the other of a road and some trees next to Lake Tahoe.

north canada low res
These are trees in some random wilderness in northern Canada at 1000 feet eye altitude above ground level. The resolution is terrible. Interestingly, this image looks slanted, not top-down.
Lake Tahoe trees high res
These are trees around a road to the left of Lake Tahoe which has a town nearby called Lake Valley. The difference in resolution compared to the image on the left is massive, despite both being at a hypothetical 1000 feet eye altitude above the ground. Note: This image also doesn’t look very top-down.

When using Google Earth, the sources always appear on the page, which is usually the Landsat satellites (occasionally NOAA, Spot, Astrium and others). Their resolutions and swath widths vary, but most of the close-ups are Landsat, which has a resolution of 30 m by 30 m. This is not to say that military optical satellite images (such as Corona) can’t afford a much higher resolution than commercial Google Earth, and also use different frequencies of the electromagnetic spectrum. After all, one British satellite manufacturer’s blog mentions face recognition research from the University of Surrey, which may be indicative of an optical satellite’s true capabilities. This alphabet agency’s wet dream may be a reality.

The Centre for Vision, Speech and Signal Processing (CVSSP) researches signal processing, pattern recognition and interpretation with applications such as automatic face recognition, 3D modelling and video technology, and secure speech communications and protocols.

+++

Aerial views will be why it takes anywhere from several days to three years to update the images on Google Earth (one to three years for most cities). Do Google charter thousands of small propeller planes over three years to take low-altitude shots of these cities? Unlikely. Supposedly, according to one poster, “most of their plane based imagery comes from statewide and county mapping projects that is funded by tax payers”.

Notice that the source of the wiki article doesn’t say aircraft flying at 800 to 1500 feet, but an “aerial view about 800-1500 feet from the ground”. Could it be that all commercial jets have cameras installed and the resolution allows for a maximum post production zoom to 800 – 1500 feet? If that were the case, I would expect a much quicker update than 1 to 3 years. Also, the problem with all Google Earth images coming from aircraft (even commercial airliners) is that we can zoom into any remotely habitable spot in the world and get accurate photos of these remote locations, which are not under any commercial flight paths – parts of Africa for example. Do Google use their own planes to fill in the gaps? Very unlikely.

openflights-routedb-2048
These are said to be the commercial flight paths around the world from openflights.org. The square red dots are airports.
Luderitz
Luderitz in Namibia is a port settlement, yet there are no flight paths over that entire area of coastline according to the flight map on the left.
tunduru
Tunduru and Masai in Mozambique do not look to be under any flight paths and yet the satellite image is fairly detailed.
Tshabong
Tshabong in South Africa (west of Johannesburg) also doesn’t look to be under any flight paths either.

I’m sure if you look hard enough you can find other examples of habitation not under a flight path. Of course, roads and natural formations, such as rock and shorelines, are also always fairly detailed and visible everywhere. And although the flight path lines on the map are very concentrated over Europe, there are still plenty of areas in the densest fly zones where no airplanes are seen overhead or even near the horizon, e.g. most of Scotland for starters.

We’ve already discussed how the altitude of a satellite is indeterminable. For laughs, if we zoom in and out of Google Earth (and Earth Explorer), colours change as the software either uses other sources or collates the 30 m by 30 m resolution images, to the point where the Earth quickly becomes a cartoon.

99-40-5
An actual photo of Lake Tahoe from 30,000 feet looking out of an airplane window.
Lake Tahoe GE 30,000ft
Lake Tahoe from around 30,000 feet (plus 6000 feet for the elevation above sea level) at a 75° angle in Google Earth and a slightly north-east view direction. This is the closest I could get to emulating the real thing on the left. In any event, it looks nothing like a real photo… because it isn’t.
lake tahoe 750mi
Lake Tahoe at 1200 km altitude on Google Earth already looks like a cartoon composite.
lake tahoe 22000 miles
The Earth at the supposed geostationary satellite altitude of 35,000 km is clearly a composite with some animated stars.

The replication of the 30,000 feet photo of Lake Tahoe in Google Earth is visual proof that satellite images are not photos, but you already know this. This is also evident when zooming in and out, as small squares of the image are continuously being rendered. Satellites don’t have a zoom function, nor do they sweep in from super high altitudes to low ones to capture all the altitudes and angles. They are fixed-power reflector telescopes.

In fact, there is proof of rendering when comparing an updated land area to one that isn’t. For example, take an area of land below Hudson Bay called Polar Bear National Park, with its surrounds. This large square area had not been updated along with the surrounding land and still had the white snow covering from winter. At about 18 miles (29 km) pretend altitude and above, the area has the exact same colour as its surrounds. Below this altitude, this square area begins to show pale shadings of white, which increase in whiteness until about 3000 feet eye height (hypothetical altitude). From 3000 feet to ground level, the whiteness intensity remains the same. Therefore, we can deduce that this is the resolution of the original image. Every other altitude above that is processed, i.e. not real.
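
This stepped appearance is exactly what tiled imagery produces. As a hedged illustration, here is the standard Web Mercator tile-resolution formula; whether Google Earth’s renderer uses precisely this scheme is an assumption on my part:

```python
import math

# Standard Web Mercator tile maths: ground resolution halves with each
# zoom level. The formula itself is the usual web-map one; applying it
# to Google Earth's renderer is an assumption.

M_PER_PX_AT_Z0 = 156543.03  # metres per pixel at zoom 0 on the equator

def metres_per_pixel(zoom, lat_deg=0.0):
    return M_PER_PX_AT_Z0 * math.cos(math.radians(lat_deg)) / (2 ** zoom)

for z in range(8, 18, 2):
    print(f"zoom {z:2d}: {metres_per_pixel(z):10.2f} m per pixel at the equator")

# Once the stored tiles bottom out (e.g. ~30 m Landsat pixels), zooming in
# further only rescales the same data, matching the 3000-foot cutoff above.
```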

north canada1 - 18 mile
You can see no whiteness of the winter snow covering at all at about 18 miles (29 km) and above.
north canada1 - 3000 ft
The same area as the 18 miles altitude image on the left shows the border between the updated area on the left side and the old snow covered area on the right side. The whiteness did not increase in intensity below this altitude, which was 3000 feet.
Africa equator 48 miles
Somewhere near the equator in Africa, looking at the Congo, there is no difference in colour between the updated and the old areas at 49 miles high.
africa equator 3000 feet
The same area as the image on the left, but at just over 3000 feet, shows the border between the two regions, one of which hasn’t been updated. The colour didn’t change below this altitude proving that this is the original satellite image.

Conclusion
It does seem that Google Earth uses close-up aerial photography for main habitation and satellite images outside populated areas. These are then rendered by the software at higher altitudes than the original resolution, whether satellite or aerial. High-resolution, highly inhabited areas cannot be updated frequently because they require aircraft flyovers. Other areas are usually only updated every 30 days due to variables such as weather (no clouds), a correct angle of the Sun, little haze or pollution etc., and then the data is processed before Google obtains it. Also, we have already seen that some satellites can take many days in sun-synchronous orbit before a satisfactory full Earth scan is made.

Overall reality verdict:

polar-satellite-industry
Polar satellite production is a part of the space industry.
polar-satelite-marketing
How the polar satellite is said to move in “space” is part of space marketing.

Summary

  • Polar satellites aren’t stationary sitting on the glass looking down because: 1. Unlike GOES, they can see all around the inside of the globe. 2. There is only one axial rotation for the optical mirror or telescope (two are needed for a stationary satellite). 3. The field of view is far too narrow to allow light to enter the viewport from either half the horizon (180°) or all of it (360°).
  • In a concave Earth, polar satellites have been theorized to orbit around each pole individually instead of pole to pole in a convex one.
  • Standalone evidence for polar satellites orbiting the (south) pole: 1. One of the names given to the remote sensors on satellites is “horizon sensors”, i.e. they look at the horizon. 2. The Terra and Aqua orbits mapped onto a 3D globe don’t sync when a straight line is extrapolated to the perceived centre of the polar orbits. Instead, they map a circular orbit somewhere near the geographic south pole. 3. Satellite images are always the correct way up, with south at the bottom and north at the top.
  • The only real counterargument to the concave Earth theory of polar satellites is the continual obliqueness of nadir-to-horizon images, which will sometimes have to be corrected.
  • When taking the proof from the previous article on the nonsense of orbiting, and that the Earth has been shown not to move, both observationally and experimentally, as well as other evidence on this blog, the magnetic pole orbiting mechanism is the only one left that works. Arthur Conan Doyle’s quote applies: “When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.”
  • I found only two pieces of evidence for the possibility of the existence of the south pole hole: 1. Google Earth’s discolouration of the area where it would be located. 2. The already shown Aqua/Terra orbital paths around the south pole.
  • The polar satellite pole-orbiting mechanism comes from the magnetic holes near the poles, very loosely estimated to be 2 to 3 degrees (300 km?) away from the geographic south pole.
  • The south polar satellite moves around the magnetic vortex once per day and rotates on its axis (roll), normally north to south (clockwise from the back), 14 to 16 times a day depending on weight, propellant gas, and lateral distance in the vortex. North polar satellites are the same except rolling south to north (anti-clockwise).
  • Since the Sun also revolves around this magnetic vortex to give us night and day, all polar satellites following the same distance from the vortex centre are Sun-synchronous.
  • The supposed insider’s approximate 50 km diameter for the south pole hole works out very close to the distance of the Sun from the north-south central axis of the Earth as calculated by different means.
  • A satellite detects light from the nadir (south pole) to the horizon (north pole) by five methods: magnification, sensitivity, massive sensor array, amplification, and altitude.
  • Iridium satellites also revolve around the south pole hole and rotate on their axis linking to each other and the phone user. This cross-linking and their south pole position makes their network anecdotally slower than other non-polar satellite phone networks.
  • Satellite imagers are only slightly similar to digital cameras with huge telescopes attached.
  • All satellite images are processed and have to be: 1. Visually interpreted, i.e. maps are used to recognise and align the image with the known location. 2. Brightened. 3. Geometrically aligned due to various factors, including obliqueness, even serious obliqueness. 4. Altitude-rectified using Ground Control Points, i.e. by comparing the image to real known co-ordinates.
  • Because of these corrections, a satellite image has no true perspective, as shown when compared to an aerial photo of the same location.
  • Google Earth uses aerial photography for populated areas and a combination of other polar satellites (mostly Landsat) for the rest.
  • Google Earth cannot be using commercial airliners for aerial photography as even the remotest areas have a unique image, whether populated or not.
  • Google Earth uses a variety of resolutions from various satellite sources, but uses its software to render the colour and graphics when zooming in and away from the original image.
  • It is impossible to tell the altitude of a satellite by looking at its image, as the original image area size depends entirely on both magnification and field of view.

Hopefully we now understand satellites in a lot more detail and how they too can be, and are, applied to a concave Earth. What is wrong with the mainstream theory though, that the Earth revolves around the Sun? Why have I eliminated this model from contention? Let’s find out.


15 Responses to Polar satellites

  1. 8Salomon8 says:

    And if it’s not a concave earth but a flat one, could that explain some things? Maybe they just cut it in slices to adapt it to a concave model, but they just don’t have the south pole centre because it just doesn’t exist, and they just don’t need the range of view that we think…

  2. Joel Harris says:

    I have seen many polar orbiting satellites since the 70’s. One passed through my eyepiece when looking at M13. The next night I was looking at M13 again, and it passed through the eyepiece again. Sounds like an 11:58 sat to me.

  3. Wild Heretic says:

    I’ve added a little bit halfway through, giving my concept of combining bendy light parallel to the earth and distance from the source (just speculation).

  4. Arturas says:

    Nice range of observation, but it lacks your pinpoint summary like the concave earth article has. Your summary is longer than the whole article in the sense of making a point. The U2 plane is still in service. Transferring analogue maps gathered into digital form is easy; a lot of software exists to make the zooming process smoother and easier. Fractal geometry (“How Long Is the Coast of Britain?” by Mandelbrot of IBM) is still speculation, but it is technologically easier than a satellite applied or attached to the glass sky. Weather stations also gather information around the world and transfer it via ground-based telephone or internet cables to computer centres, where these days, even without supercomputers, isobaric charts and computer simulations of weather flow are made. NASA’s weather balloons gather evidence that we live in a greenhouse-sky concave earth, not an open black-space nothingness universe.

  5. Hoi Polloi says:

    I enjoy your writings on optical sciences, though some of them have been a bit hard to follow. But this one is a new level of vagueness and leaps of faith that I am not willing to make. This article is poorly written compared to your Earth model articles, and makes them look worse off too.

    It reveals a bad habit of writing style that could be confused for deliberately dumbing down your audience by having them follow along using only base assumptions rather than good, solid, well-explained reasoning. If you do not go back and edit this article, you are bound to make Earth studies look like a propaganda attempt rather than an attempt to show the truth. I hope that is not your intent.

    The pattern I am picking up on here is your tendency to create a huge theoretical direction based on just a few images, while ignoring outright the need to identify and vet the sources. Line drawings about curvature of light are one thing. It makes sense to use diagrams when it’s about geometry and math and scientific models that anyone can confirm.

    But so-called “photos” from creepy NASA halls? It doesn’t compliment your technical intelligence when you are taking images of the PR/propaganda company like NASA for face value — or even more bizarrely — allowing yourself to be drawn into PR statements calculated to intrigue and deceive, while providing no solid information for the public to discern why or how their incredible claims have gone un-vetted — even by so-called skeptical thinkers like yourself.

    These are not simple functions of nature you are analyzing anymore. They are claimed to be very complex machines, and you are just forcing them to be in your theory which by necessity makes you endorse official explanations — for what purpose?

    Going back now and seeing your earlier articles is a bit tense now because you had been going slowly and methodically before. You explained each concept, leaving room for the theories to grow in the future. But this article abandons those.

    With this article, you have practically created a huge hole (no pun intended) in your readers’ potential cohesion of an already shaky theory. If people read this article first, they would conclude you don’t actually do any research at all.

    Why have you stopped doing research to this extent? Why?

    Each one of those so-called satellite images you showed is suspect, and the quotes you’ve drawn up are nothing less than PR from a huge time- and money-wasting organization. How can you switch from skeptical inquiry of pure observation to endorsing the simulation to this extent? How can you not cite thoroughly each presumption you make about what you have (seemingly arbitrarily) chosen to trust from the self-contradictory Disney-NASA fantasies?

    It doesn’t make sense when you take NASA images and start speculating and pondering about the functions of photoshopped contraptions that have no proof of full functionality on Earth, let alone in “environments” like “outer space”, which we couldn’t describe and you have only begun to try to replace (and NASA can’t even describe consistently with all their writers and spinners). It’s like in the 9/11 drama, Judy Wood trying to read science “out of the pixels” while failing to credit her “burnt out car photos” to New York scam artist George Marengo.

    Please, remember to build your theories on solid foundations.

    If they have *some* satellites that “work”, demonstrate in detail to us:

    1. How you determined which satellite data fronts are “real” (Starting with why you thought they might be real in the first place)

    2. Specific and clear diagram(s) of the model you are proposing (instead of slowly hinting of a possible hypothesis that we have to imagine)

    3. How the so-called satellite images presented, when there are any images at all to be gleaned from a front site, are by necessity made by an advanced computer “in space” or (“on the glass” as you require) rather than simply constructed in an office.

    4. Analysis of the launch and life cycle that must have been necessary for each

    5. Analysis of each and every “satellite” model and its so-called parts (without relying on trust of NASA’s descriptions) to indicate that the thing is actually going to function in a way that YOU predict rather than how NASA would have it. And, explain how it won’t be fried, irradiated, shredded by debris, etc. (Including in your description at least the moving parts you think it has, and means of conveyance! A computer in a chunk of material and moving parts does not make it more than expensive make-believe)

    There is so much missing, as I’ve explained above, if you wanted to just create the most basic form of the theory you are designing.

    In the face of the incredible amount of lies and propaganda being pumped out of that organization to invent fake technologies, I am disappointed in and suspicious of your direction to basically endorse a bunch of NASA images of their satellites, while failing to provide evidence that the images and articles connected to and about them produce anything of value besides obfuscation of the truth.

    Have you even been to the Modis web site and tried to make any sense of their so-called data? The place is a scientific-appearing front for the fact that they don’t produce anything.

    If you are going to pick some ideas from a mountain of lies and fantasy to just believe, or re-interpret or re-imagine, you’ve got to have a better explanation as to why you are allowing such make-believe. And even then, it needs to fit better with your earlier speculations — which, now, sadly, by comparison, look weak and undeveloped compared to these claims, because it hints that you could write another article altogether that is purely drawn from no evidence, and that’s the pattern we are facing.

    Please, just make a warning on the top of this page, saying something like: “This is a theory in development and in serious need of detailed reasoning, which I am working on” and leave this article as a “stub” for people to read while you collect data that helps a skeptical person understand why you of all people trust the highly suspicious data that you do.

    Otherwise, please explain your reasoning for crafting this article in such a loose and useless style compared to the measurable observations of previous articles. I hope for this site to be a consistent resource rather than a distraction. Please, though you are eager to have a Theory of Everything, stick to the slow and patient methods that are more convincing. Thank you!

    • Wild Heretic says:

      I understand that this would piss off those that think satellites are a hoax, but I don’t think so. It is the heliocentric model which I think is a lie.

      Of course, we could say that everything about polar satellites is bogus and that there is no polar satellite industry. But instead of throwing the baby out with the bathwater I have attempted a Siamese twin separation – the model from the technical.

      I can’t see how Landsat and Google Earth, for example, can get ALL their images from aerial flyovers.

      Delving into the optics of satellites has also led to a slight further understanding in optics and how light/horizon works, but it is on the cutting edge of my theoretical understanding too. The subject isn’t the easiest.

      I’ll revise this article a touch concerning the polar magnetic mechanism. It is all very speculative, but that is all we have. If the sats are at very high altitude at the south pole, then they must repel against the magnetism of the pole near the hole/magnetic pole. This means that it doesn’t matter if the h-field at the poles is shooting outwards or inwards; the repelling satellite will use a mag drive of opposite polarity to repel against it. This means that the north pole could also be used, which makes more sense for the Baikonur rocket site in central Asia and Surrey Satellite Technology.

      The only real evidence against this theory is that the oblique angles from nadir to horizon sometimes have to be corrected, I think.

      EDIT: article has been revised.

      • Hoi Polloi says:

        I just want to understand the physical world we are asked to understand before reading this, because it seems pretty advanced. Can you summarize at the beginning how this theory requires certain assumptions reached earlier?

        Such as: There is a hole at the poles? Is this like Eric Dubay’s talks about the vortex at the North Pole surrounding a great mountain rising up in its center (barring the Flat Earth model)? Is there glass in the sky? If there is not glass in the sky, there is a problem with the satellites you are imagining sitting on the glass and crawling on the glass or flying through a hole in it.

        Remember, they are prepared to make people develop better sciences, but they might not be telling the truth about what those sciences are for. A more efficient motor?

        What about the science of controlling people and brainwashing them and keeping them involved in supporting massive fibs?

        So there should be clear diagrams showing what you speculate. I could help you illustrate those, but you’d need to explain them much better.

        I don’t see the need to split the difference between utter garbage science of NASA and the good science you are doing. You should just stay on the good side instead of tip-toeing so deeply into the propaganda and balancing a new theory on the monstrous hybrid baby of good and bad science.

        • Wild Heretic says:

          Such as: There is a hole at the poles? Is this like Eric Dubay’s talks about the vortex at the North Pole surrounding a great mountain rising up in its center (barring the Flat Earth model)? Is there glass in the sky? If there is not glass in the sky, there is a problem with the satellites you are imagining sitting on the glass and crawling on the glass or flying through a hole in it.

          I don’t know about a mountain and Eric Dubay’s model. I assume he is taking the account of The Smokey God sailors for this info. I believe there are holes near the poles. However, it may not be absolutely necessary for there to be physical holes for the concave Earth and/or satellite model. The holes would help with the ventilation, but then the Earth may be pretty porous with cave systems on land and under water etc. The holes would also explain the Earth’s magnetic field and power source of the Sun, but this magnetic field is also possible sans holes.

          What about the science of controlling people and brainwashing them and keeping them involved in supporting massive fibs?

          Oh sure, I’m with that. Separating the bullshit from the truth isn’t easy. I don’t think that absolutely everything is a lie, but finding out what is really the truth isn’t easy. Every dissenter has their own ideas I think. The question I’m not so sure about is why all these lies. There are a lot of possible speculative reasons, most of them not pretty. I guess I will never know.

          So there should be clear diagrams showing what you speculate. I could help you illustrate those, but you’d need to explain them much better.

          Is this about light and optics? I’m still venturing into that area myself. I wrote on the concave Earth forum that I suspect light from one single angle on the horizon follows the same path as one half of the magnetic field lines at a magnetic pole. This would explain intensity/sensitivity and the increasing/decreasing bend of light. The problem I am having is merging the new info of light intensity and receiver sensitivity with bending light, altitude and perspective. This is difficult and I honestly haven’t got a finished logical diagram-type model myself.

          Here is a quick diagram on the magnetic principle.
          http://www.wildheretic.com/wp-content/uploads/2015/06/light-magnetic-field-lines.jpg

          It explains the relationship between intensity and bending light, but not altitude. The only way (at the moment) I can reconcile altitude with this model is that a light receiver of x sensitivity needs x bend of light hitting its sensor. The less sensitive the receiver, the steeper the bend of light has to be. Perhaps (here comes a theoretical long shot) light is in some way longitudinal, and the more perpendicularly light hits the sensor (like a longitudinal wave), the less sensitive the sensor has to be. I should also note that the variation of the bend of light is from one single angle or point in the light source. Putting this model together with angles and perspective is a whole new layer of understanding which I’ll have to have a think about.

          Optics is a work in progress and the above idea may not be correct. 🙂

          I don’t see the need to split the difference between utter garbage science of NASA and the good science you are doing. You should just stay on the good side instead of tip-toeing so deeply into the propaganda and balancing a new theory on the monstrous hybrid baby of good and bad science.

          Basically, the articles came about because, when revising the part on satellites, I initially looked into the effects of satellites such as GPS, phone, images etc. and came to the conclusion that the effects are generally legitimate. I looked at my previous evidence against orbiting NASA objects and realised that all the evidence was against the model being correct or that people were actually in space, e.g. the thermosphere, Newton’s “thought experiments” (lol), Van Allen belts, cut-and-paste satellites in animated space, composite Earth “globes”, scuba tanks and hairsprayed hair in space etc. Whatever their reasons for their bullshit marketing, the promotion was still evident. So I then had to reconcile the satellite effects with the correct model (no need for people in space either). This is where I took a leap of faith and took what they said technically about the satellites as correct, but not the model of deployment.

          The technicalities needn’t be correct, but I think they probably are for the most part, as engineers and scientists not in the know (glass sky/concave Earth) can’t be bullshitted on the technicalities they are working on directly. Also, the manufacturing photos mostly seem to gel with the technicalities too. The earth is likely scanned in strips presented to the public, but how they took the strips is a huge lie.

          Again, that is my take on it. I might be wrong, but it is how I see the space industry. Evidence for the glass sky is coming up in the next article on rockets and space shuttle and a separate revision of the original article. These two articles were meant to go before the satellite articles already published, but it didn’t work out that way when researching.

          Hope that helps a little.

          WH

  6. Steve says:

    That was another private note which does NOT need to be published, nor even acted upon. I’m just quietly pointing out 2 possible typos in this second new article, in case you feel like quietly correcting them. 🙂

    • Wild Heretic says:

      Cheers Steve. Those two articles are bound to contain a few typos. They evolved over time and were originally part of one article. A lot of toing and froing.

      I’ll make the corrections.

      WH

      EDIT: Corrected. I’m surprised there weren’t more. They were the most intensive subjects I have ever researched. An evolution of understanding. Rockets are next and then the glass sky.

  7. Steve says:

    Here’s another private note which does NOT need to be published, nor even acted upon. I’m just quietly pointing out 2 possible typos in this second new article, in case you feel like quietly correcting them.

    The same is principle is X
    The same principle is O

    This is not to say that military optical satellite images can afford …
    This is not to say that military optical satellite images can’t afford …
    (After the phrase “not to say”, I think you meant to type “can’t” not “can”.)
    (If you mean the military CAN afford better, then it should say “can’t”.)

    🙂
