Black Sky Metering

Metering is the process of sensing the scene's luminosity: after metering, the camera goes on to determine the exposure settings, as examined above.

While preparing the code for our launch, I was examining a few shots from the HAPS1 mission and wondering why so many shots from that mission are badly over-exposed.


Photo 6083 from HAPS1 Mission

By looking at the videos (for example,
video 6332) the reason is evident.

This happens mainly when the camera is pointing at the sky and then suddenly swings down towards the earth. In these cases the camera has metered for the dark sky, but by the time the actual shot is taken the brighter earth is in the scene instead of the sky. This problem is very frequent and makes about half of those photos unusable.

This means that the usual strategy (meter the luminosity, then shoot) doesn't work well in these highly unusual conditions, and we'll have to find another metering technique.

Suppose we are up there and take 100 manual meterings, pointing in 100 random directions. We will probably get something like this:

25% of the shots pointing at the dark sky
45% of the shots partly pointing at the sky, partly at the earth (in different proportions)
25% of the shots pointing at the earth
5% of the shots pointing at the sun

The numbers will vary a lot, but the general distribution will be the same, with some readings being darker and some being lighter.

My assumption was that if we selected the brightest meterings (excluding the very brightest ones, presumably taken pointing at the sun) we would identify the meterings taken while pointing at the earth. We could then use these meterings as a good value for our photos, and take all the photos in the next minutes with those settings.

This would give us a good exposure for the Earth: regardless of where the camera was pointing at the moment of the shot, the Earth would come out well exposed.

Thinking about it, you will realize that even if the camera isn't moving (i.e. ignoring the delay between metering and shooting), a shot that only catches a small portion of the Earth, with most of the frame being dark sky, will try to expose for the sky and badly over-expose the small but interesting part of the frame covering the Earth. See for example the problem here:

Photo 7247 from HAPS1 Mission

My solution would solve this problem too!

A risk factor could be that, even when we successfully select the meterings taken pointing downwards, there could still be strong differences in light (for example, meterings taken while looking towards the Earth but facing the Sun vs. turning our back to the Sun). In this case we could again have over- and under-exposed photos.

A study of the luminosity of high-altitude photos from other missions convinced me that this problem wasn't terrible. For example, from the HAPS1 flight I selected 22 well exposed photos taken at high altitude. This sample would presumably cover images taken in all directions (facing and opposing the Sun), and by studying the parameters that the camera had used for shooting I could get an idea of the luminosity (Bv) that the camera had metered. It turned out that all the readings were in a quite narrow range (+/- 0.6 steps Bv).
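
(For reference: in the APEX system the exposure quantities are related by Ev = Av + Tv = Bv + Sv, so the metered brightness can presumably be reconstructed from the recorded parameters as

 Bv = Av + Tv - Sv

with Av, Tv and Sv derived from the aperture, shutter time and ISO stored with each photo.)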

I then decided I would try to use the same exposure value (determined with this procedure) for all shots, regardless of where the camera was pointing.

In the end, the exact procedure used was the following (a rough code sketch of the main loop follows the list):

1) use a FIFO queue with 100 elements
2) start by filling the FIFO queue with 100 meterings
3) discard the 50% darkest meterings
4) discard the 20% brightest meterings
5) average the remaining meterings and obtain our calculated Bv value
6) shoot a photo with this value
7) take a metering and store the value in the FIFO queue
8) repeat steps 6 and 7 twenty times: we would thus have 20 shots taken with the same settings, while in the meantime collecting some fresh meterings
9) refresh the calculated Bv value: go to step 3 and re-calculate it with the refreshed meterings
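
Putting the steps together, the main loop looks roughly like the sketch below, written in terms of the functions shown further down (feed_bvtable and weighted_bv). The shoot_with_bv96() helper is hypothetical: it stands in for the part of the script, not reproduced in this excerpt, that converts the calculated Bv value into exposure settings and takes the shot.

 -- step 2: fill the queue with 100 meterings
 for i=1, bvtable_len do
  feed_bvtable()
 end

 while true do
  -- steps 3-5: discard the 50% darkest and the 20% brightest readings,
  -- average the rest to obtain the calculated Bv value
  calc_bv=weighted_bv(50, 80)
  -- steps 6-8: take 20 shots with this value, feeding one fresh metering
  -- into the queue after each shot
  for i=1, 20 do
   shoot_with_bv96(calc_bv)  -- hypothetical helper, see note above
   feed_bvtable()
  end
  -- step 9: loop back and recalculate Bv with the refreshed meterings
 end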

I was a bit scared taking such a radical step and fully taking responsibility for the exposure settings... after all, if there was a bug in my procedure, I risked ruining all the photos of the mission!

It turns out the results were excellent: of the over 1500 photos from the ICBNN mission, only very few were badly exposed. I know of no other amateur balloon mission that produced so many good photos!

These are the relevant bits of code (taken from metering.lua).

First, we define some variables:

 bvtable={}
 bvtable_sorted={}
 bvtable_len=100
 bvtable_current=0
 bvtable_ptr=0

bvtable is an array that will be filled with 100 BV readings.
bvtable_sorted is a working variable, a copy of bvtable sorted by increasing BV value.
bvtable_len is the length of the queue (100 readings).
bvtable_current is a counter of the BV readings we have taken.
bvtable_ptr is a pointer to the current reading.

This is the function that actually reads the BV value (i.e., meters the luminosity of the scene currently seen by the camera):

-- read current bv from scene
function read_bv96()
 press("shoot_half")
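 -- wait until the camera has processed the half-press and a fresh reading is available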
 while get_prop(115)==0 do
  sleep(100)
 end
 bv=get_bv96()
 release("shoot_half")
 return bv
end

This function feeds the BV table, i.e. it reads the BV value and inserts it into the table:

function feed_bvtable()
 dbg("Feed Bv Table")
 bvtable_current=bvtable_current+1
 bvtable_ptr = bvtable_current - bvtable_len*(bvtable_current/bvtable_len)
 bvtable[bvtable_ptr]=read_bv96()  
 writelog("MET", "Metered "..bvtable[bvtable_ptr])
end

The line
 bvtable_ptr = bvtable_current - bvtable_len*(bvtable_current/bvtable_len)
is a workaround for the lack of a modulo operator: since the division is an integer division, the expression gives the remainder of bvtable_current divided by bvtable_len (for example, with bvtable_current=137 it gives 137 - 100*1 = 37), so the table is reused as a circular buffer.

Finally, this is the tricky part, i.e. the function that analyzes the 100 BV values to determine the exposure.
It takes two parameters defining the range of readings we actually want to use, and averages the readings falling in that range.
For example, weighted_bv(50, 80) discards the 50% darkest readings and the 20% brightest readings and averages the remaining ones.

function weighted_bv(range_lo, range_hi)
 -- make a real copy of bvtable before sorting: a plain assignment would
 -- only copy a reference, and sorting would then scramble bvtable itself
 bvtable_sorted={}
 for i=1,table.getn(bvtable) do
  bvtable_sorted[i]=bvtable[i]
 end
 table.sort(bvtable_sorted)
 l=table.getn(bvtable_sorted)
 lo_sample=(l*range_lo)/100
 hi_sample=(l*range_hi)/100
 if hi_sample<lo_sample then
  hi_sample=lo_sample
 end
 tot=0
 samples=0    
 for n=lo_sample, hi_sample do
  samples=samples+1
  tot=tot+bvtable_sorted[n]
 end
 tot=tot/samples
 dbg("Weighted bv "..tot)
 writelog("MET", "Calculated "..tot)
 return tot
end

These lines:
 bvtable_sorted={}
 for i=1,table.getn(bvtable) do
  bvtable_sorted[i]=bvtable[i]
 end
 table.sort(bvtable_sorted)
make a sorted copy of bvtable. The copy has to be done element by element: assigning the table directly would only copy a reference, and sorting would then reorder bvtable itself.

The following lines:
 l=table.getn(bvtable_sorted)
 lo_sample=(l*range_lo)/100
 hi_sample=(l*range_hi)/100
determine the range of samples that we will need to average.

The remaining lines loop through the values from bvtable_sorted[lo_sample] to bvtable_sorted[hi_sample] and calculate their average.