Bayer Filter

One issue I thought might be a problem is that the pixels on a camera don’t each really measure all three colours. Instead, each measures one colour and the colours are then interpolated. This isn’t a problem if the object being photographed spans many pixels, but what if the object is a tiny bright dot, as in our situation?
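
To make this concrete, here’s a minimal sketch (assuming an RGGB mosaic, which may not match our actual sensor) of how a sub-pixel bright dot lands very unevenly on the four colour channels depending on exactly where it falls:

```python
import numpy as np

def bayer_channels(raw):
    """Split a raw mosaic (assumed RGGB) into its four colour sub-images."""
    return {"R":  raw[0::2, 0::2],
            "G1": raw[0::2, 1::2],
            "G2": raw[1::2, 0::2],
            "B":  raw[1::2, 1::2]}

def tiny_dot(shape=(8, 8), centre=(3.2, 4.7), sigma=0.5):
    """A bright dot much smaller than a 2x2 Bayer cell."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((y - centre[0])**2 + (x - centre[1])**2) / (2 * sigma**2))

for cy in np.linspace(3.0, 4.0, 5):          # slide the dot down by one pixel
    raw = tiny_dot(centre=(cy, 4.0))
    sums = {k: round(v.sum(), 2) for k, v in bayer_channels(raw).items()}
    print(sums)
# The per-channel totals swing wildly as the dot moves by a fraction of a
# Bayer cell, so the demosaiced 'colour' of the dot is unreliable.
```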

An example Bayer filter (from Wikipedia)

An example photo of “What the Ladybird Heard”:

Example photo of What the Ladybird Heard cover: The raw data from the camera. The yellow cover of the book doesn’t have much blue, so the blue pixels are darker.

Sadly the problem of the filter seems to be impacting our bee-orientation/ID experiment. Here I rotate a tag through 360 degrees:

The in-focus result (notice the dot colours don’t smoothly transition)

The result is less accurate predictions of orientation:

Points plotted on colour triangle (number = angle in degrees)

If we adjust the focus of the lens so the tag isn’t in focus, the colours are more reliable:

Progression of tag colour as it rotates (in frames 14-16 a non-tag was found by mistake)

This leads to a more reliable prediction:

Points plotted on colour triangle (number = angle in degrees)

I think the plan now is to:

  1. Collect more data but download it raw (without interpolation) – this also saves bandwidth from the camera.
  2. Look at fitting the PSF using this raw data (a rough sketch of the idea is below).
  3. Maybe leave the camera just a little out of focus, to ensure all the colours are detected.
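
As a starting point for item 2, here’s a minimal sketch of what a Bayer-aware PSF fit could look like: fit a 2D Gaussian to the raw frame, but only evaluate it at the pixel locations that actually carry each colour. This is just an illustration (the load_raw_crop helper is hypothetical), not the final fitting code:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_psf(coords, amp, cy, cx, sigma, offset):
    y, x = coords
    return amp * np.exp(-((y - cy)**2 + (x - cx)**2) / (2 * sigma**2)) + offset

def fit_channel(raw, mask):
    """Fit a 2D Gaussian using only the pixels selected by one Bayer mask."""
    y, x = np.nonzero(mask)
    values = raw[y, x]
    p0 = [values.max(), y.mean(), x.mean(), 1.0, values.min()]
    popt, _ = curve_fit(gaussian_psf, (y, x), values, p0=p0)
    return popt  # amp, centre_y, centre_x, sigma, offset

# Bayer masks for an (assumed) RGGB sensor
h, w = 16, 16
red = np.zeros((h, w), bool);   red[0::2, 0::2] = True
green = np.zeros((h, w), bool); green[0::2, 1::2] = True; green[1::2, 0::2] = True
blue = np.zeros((h, w), bool);  blue[1::2, 1::2] = True

# The per-channel amplitudes then give the dot's colour without demosaicing:
# raw = load_raw_crop(...)   # hypothetical: a small raw crop around the detected dot
# amps = [fit_channel(raw, m)[0] for m in (red, green, blue)]
```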

Orientation from Colour Tag (initial experiment)

This was an initial experiment I ran back in December, to see if this idea might work.

The problem of using polarising filters

So, one thing I’ve been thinking about is how to get the orientation from the polarising filters from a side view. From above it is easy (although the 180-degree symmetry needs resolving): one just uses two cameras (with 0 and 45 degree polarising filters on) and a flat polarising filter on the back of the bee. From the side it’s more awkward – with a ridge etc…

Using Colours

Anyway, I went back to my original idea of using colours. For this experiment I made a hexagonal ‘tube’ – it’s a little large in this case (about 5mm across, when I think 3mm is probably the limit – I made a smaller one yesterday, about 3mm across, that also worked). I put the glass-bead-style retroreflector inside and covered the ends of the tube (it maybe needs strengthening using superglue).

tag.jpg
The 6 colours of the retroreflective tag.

I then used a tracking system to take photos of the unit from 8m away (the longest straight line in my house :).

I think maybe this isn’t as bright as it used to be: the colour camera isn’t quite as sensitive, the filters absorb some light, the cylindrical shape rather than a ridge means it’s also a bit weaker [although it works from all angles], and I used one flash instead of four… but anyway, here are some of the photos to give an idea…

image.png
The titles are “angle [maxRed maxGreen maxBlue] [Max location]”. Ideally I should fit a PSF to the dots, taking into account the Bayer filter.

To build it I picked 6 filters using the spectra provided by LEE Filters, hoping I’d pick some that would lead to a path that doesn’t have overlaps. I also just picked filters that transmitted the most light. This could be improved I think – as you can see the dots aren’t in a neat circle…

image.png
The location of the 6 filters on the colour triangle

This is on the same sort of triangle as above (although flipped and rotated, so the two axes represent normalised colour)… the numbers are roughly (+/- 15 degrees) the angle of the tag. The tag was imaged in order (0, 15, 30, …, 345, 0, 15, …) and the lines join sequential measurements. Currently we are just using the average value for each colour in a square around the tag, but in future this could be improved.

image.png
The colours on the colour triangle (numbers are the angle of the tag)
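
For reference, the ‘normalised colour’ coordinates are just the average R, G, B in a square around the tag divided by their sum; a minimal sketch (the choice and orientation of the two axes is arbitrary, so this won’t exactly match the plot above):

```python
import numpy as np

def tag_chromaticity(image, cy, cx, half=5):
    """Mean colour in a square patch around (cy, cx), normalised to sum to 1."""
    patch = image[cy - half:cy + half + 1, cx - half:cx + half + 1, :3]
    mean_rgb = patch.reshape(-1, 3).mean(axis=0)
    r, g, b = mean_rgb / mean_rgb.sum()
    return r, g   # two free coordinates (b = 1 - r - g): a point on the colour triangle
```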

We can fit a Gaussian process (or other regressor) to this…

image.png
Contour numbers indicate the predicted angle of the tag. Dots are training data.
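
The regression itself is straightforward; a minimal sketch of the idea (using scikit-learn here, and handling the wrap-around at 0/360 degrees by regressing onto the sine and cosine of the angle):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_angle_regressor(chroma, angles_deg):
    """chroma: (N, 2) colour-triangle coords; angles_deg: (N,) tag angles."""
    rad = np.deg2rad(angles_deg)
    targets = np.column_stack([np.sin(rad), np.cos(rad)])  # avoids the 0/360 wrap
    kernel = 1.0 * RBF(length_scale=0.05) + WhiteKernel(noise_level=1e-3)
    gp = GaussianProcessRegressor(kernel=kernel)
    gp.fit(chroma, targets)
    return gp

def predict_angle(gp, chroma):
    s, c = gp.predict(np.atleast_2d(chroma)).T
    return np.degrees(np.arctan2(s, c)) % 360
```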

Cross Validation Results

image.png
Leave one out cross validation

MAE = 31 degrees
RMSE = 48 degrees
(chance level is: MAE 90, RMSE 104).
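
The errors are on the wrapped angular difference; the quoted chance levels match what a uniform error on ±180 degrees gives (MAE = 90, RMSE = 180/√3 ≈ 104). Something along these lines:

```python
import numpy as np

def angular_error_deg(pred, true):
    """Absolute difference between two angles, wrapped into [0, 180]."""
    d = np.abs(np.asarray(pred) - np.asarray(true)) % 360
    return np.minimum(d, 360 - d)

def mae_rmse(pred, true):
    e = angular_error_deg(pred, true)
    return e.mean(), np.sqrt((e**2).mean())

# Chance level: uniform error on [0, 180] -> MAE = 90, RMSE = 180/sqrt(3) ~= 104,
# matching the figures quoted above.
```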

You can see that there are two directions that look similar (-30 & 150 degrees) where it gets a bit confused. One can see why in the colour map plots (the dots around 330-ish and 150-ish are a bit jumbled together – you might even be able to tell by eye looking at the initial photos in the second figure).

Tweaking the choice of colours should help, as would taking more closely spaced training points rather than asking it to interpolate over 15-degree steps.
Note also the actual angle of the tag was only accurate to +/- 15 degrees.
Anyway – this colour-tag idea is another potential approach, instead of the polarising filters.

I only spent a couple of hours or so getting this together, so hopefully I can make a lot of improvements on this in the new year.

Orientation from Tag Colour

My first outdoor experiment with the ‘colour to get orientation’ project looks like it’s got promising results, but I need to auto-detect the ‘posts’ to get the ground-truth orientation. I might have to come up with a better idea and try again.

Collecting data, using a frame with some coded columns of paper attached. The idea is that I can get the ground-truth orientation from these.

The following figure shows that it does seem like the colour changes (these are zoomed in on each of the tags in the first 49 flash photos). The first lot are from 24m away, then the later dozen are from 16m away.

Reassuringly colourful. (Note: There are a couple that weren’t of the reflector). Most from 24m. The last few were from 16m.

The puzzle is to get the ground truth orientation from the black-and-white sticks and then see if the colours relate to the orientation.

Flash Delay Issues

For the bee-tracking system, we use a global shutter camera which means we can take extremely brief exposures, just capturing the time the flash is on.

Because there’s a delay between the trigger and the bulb actually firing (roughly 80 µs), I use the PreDelay option to trigger the flash slightly earlier.

So to get an idea of the best combination of predelay and exposure time, I ran a grid search and found the mean of 3 photos at each combination.
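
In code it’s nothing fancy; a minimal sketch, where capture_mean_brightness is a hypothetical helper that sets the exposure and flash pre-delay, takes a frame and returns its mean pixel value (the ranges below are placeholders):

```python
import itertools
import numpy as np

def grid_search(capture_mean_brightness,
                predelays_us=range(0, 201, 10),
                exposures_us=range(20, 201, 10),
                repeats=3):
    """Mean of `repeats` frames for every (pre-delay, exposure) combination."""
    results = np.zeros((len(exposures_us), len(predelays_us)))
    for (i, exp_us), (j, pre_us) in itertools.product(
            enumerate(exposures_us), enumerate(predelays_us)):
        results[i, j] = np.mean([capture_mean_brightness(pre_us, exp_us)
                                 for _ in range(repeats)])
    return results   # rows = exposure, columns = pre-delay (as in the plot below)
```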

Using just one Neewer TT560 flash (1/32 power).

It was quite noisy.

The average image value for different combinations of PreDelay (x-axis) and exposure (y-axis). On the far right the flash has been & gone before the shutter opens. On the left the flash hasn’t yet started when the shutter opens, but the longer exposures still capture the flash. An ideal spot will be towards the top (to minimise the ambient light from the sun): so maybe an exposure of 90 µs and a predelay of 65 µs.

The scattering of dots on the right is, I think, due to delayed firing of the flash (by a few hundred microseconds). Similarly, the dark dots are either when it didn’t fire, or these delayed events.

Given there are 4 flashes firing at once, I guess occasional failures like this don’t matter too much, but it would be nice to resolve.

Climate Thoughts

The IPCC’s SR15 report a couple of years ago is a useful summary of why we should really avoid going above a 1.5C increase, and gives ballpark budgets for how much CO2 we can still emit. I guess sea level rise, the Arctic melting and tipping points like the Amazon becoming savanna are all a worry, but the main ‘output’ I’ve worried about since I first heard about climate change is the impact on crop yield. I’m a complete non-expert, but I imagine this will typically appear not as a gradual reduction in yield but as one- or two-year large-scale crop failures. When this happens in more than one part of the world at once, the price of food will go up.

From the IPCC’s SR15 report, summary for policy makers, page 11. Red indicates severe and widespread impacts/risks. M = medium confidence level in transition of risk.

See the 2010-2012 world food price crisis. Or today: Madagascar.

From the FEWS.

So how much CO2 can we still produce and stand a good chance of avoiding going into the orangey-red bit of the crop-yield plot? Chapter 2 of the report has this table (I’ve cropped it to fit on the post – see p108 of chapter 2 of the SR15 report).

Part of the table showing carbon budgets for various temperatures and probabilities of reaching them. Notice that the climate lag means if we care about the future we need to be a little more conservative (i.e. do we care about anyone after 2100? – see below).

It looks like things get bad quite quickly after 1.5C. A 50/50 chance of avoiding going past that (and if we care about our grandchildren) gives us a budget of 480GtCO2, but that was nearly 4 years ago. We produced about 40GtCO2 in each of those years, so I guess we’re now at about 320GtCO2.

So we can continue to emit at this rate for 8 more years (then no emissions).

What are the sources in the UK?

To summarise the UK government’s report on this:

Motor vehicles: 27%
Electricity generation: 21%
Heating houses + cooking: 15%
Business: 17%
Agriculture: 10% [methane from cows is half of this + NOx]
Waste management (mostly methane from landfill): 4%
Industrial processes: 2%
Heating public sector buildings: 2%
Landuse: 1%
(rounding means this adds up to 99%).
(total: 5.4 tonnes/person)

(not included!):
intl aviation: 37MTCO2e (8.5%)
intl shipping: 7.5MTCO2e (1.7%)
(0.66 tonnes/person)

The UK’s electricity: Sources and Uses

From this report for 2020:

units are in TWh for 2020

So wind, wave and solar made 88.5TWh. In 2019 we used a total of 1651TWh (source). Obviously renewables can include burning plants and rubbish (113TWh in the diagram above, and 10% of petrol is now bioethanol, and some people have woodburners). But I guess my worry is that it feels like we’re a long way from powering our heating and our cars from renewables: only about 5% of our total energy was produced by wind, wave and solar.

Post 2100

My last thought, which I’ve run out of time to write up properly, is that I think we need to aim beyond 2100. My children’s children will be middle-aged in 2100.

Bee Tracking: A List of Improvements needed…

There were several issues that became apparent during and after the 2021 summer experiments! I’ve listed some of these issues and the planned solutions. This isn’t a very exciting post – skip down to the last few posts, which have 3d videos!

Detectable Posts: Although fairly easy to detect, sometimes non-posts get detected. I think adding a few more stripes would make finding them easier (at the cost of a slight reduction in the maximum distance), e.g. going from 11 stripes to 15.


Improved pot design: It turns out we can tag the bees in-field with modified marking pots, but need to add lids to stop them escaping!


Tag contamination: The wax on the retroreflectors degrades them – try using alternative materials.


Firing rate: This is a big challenge. There are several potential bottlenecks. I found the flash thermal-protection circuits kicked in. I did experiment with only firing half of the flashes each time – I’ve not looked at the results of this yet. I might be able to disable the protection circuit! The other problem was that I found many failed images, probably due to buffers failing to be serviced quickly enough, etc. I will need to experiment (it might be possible to downscale the image 2×2). A rate of 4Hz is currently possible for 10 seconds.


Tag shape / detectability – related to the above issue is that the tags aren’t always visible, possibly because of their shape. Redesigning to make them visible from all directions (e.g. a cone, cylinder or sphere) could help.


Timing alignment – a major issue I found was that the Raspberry Pi-generated time stamps often ‘jumped’ or ‘froze’; given the importance of precise timing, this is a major problem. The solution I’m building is a (GPS-time-synced) ‘clock’ that simply displays the time to the nearest 4ms in binary, in a cylinder (a sketch of the encoding is at the end of this list). LED driver, LED, GPS receiver, Arduino [e.g.].

PCB for one part of the ‘binary clock column’. This will give the GPS synchronised time to 1ms accuracy in LEDs.


Range – I should keep looking at how to improve the range – switching to glass corner-cube reflectors? A focused flash? If the frequency is increased, could it be traded for a tighter ‘high gain’ beam?
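
On the timing-alignment item above, the encoding for the clock column is simple. A minimal sketch, in Python for illustration (the real thing runs on the Arduino, and the 25-LED count here is just an assumption):

```python
def led_pattern(gps_seconds_of_day, tick_ms=4, n_leds=25):
    """Binary pattern for the clock column: the current 4 ms 'tick' count,
    least-significant LED first. 25 bits cover a full day of 4 ms ticks."""
    ticks = int(round(gps_seconds_of_day * 1000 / tick_ms))
    return [(ticks >> i) & 1 for i in range(n_leds)]

# e.g. 13:58:00.000 -> tick count 12_570_000
print(led_pattern(13 * 3600 + 58 * 60))
```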

First Learning Flight Reconstructed

I applied the Gaussian process approach to the first learning flight recorded from the new nest. Unfortunately two of the four cameras weren’t very useful (one pointed the wrong way! and the other had a battery failure), so the path is more uncertain than I would like.

Projected path reconstruction onto four camera images. Green line = posterior mean. Solid = less than 30cm standard deviation. Dashed = less than 60cm standard deviation. Missing = greater uncertainty. Yellow dots = locations (with times, in seconds) of the observations for that camera. The green numbers are the times along the trajectory.
Same data as above, but zoomed out to the entire image size. Removed observations to reduce clutter. Higher resolution version.


(First ten seconds of the trajectory. Small moving dots are samples from the posterior distribution, to give an idea of uncertainty. Similarly, the shade of the line indicates uncertainty, with white/hidden = a standard deviation of more than 50cm in the prediction. Big dots are the posts and nest. The triangles are the camera locations.)

Next I’ll look at other learning flights and start doing some cross validation on the predicted flight path trajectory.


Simulated 3d flight path reconstruction

Last week I was using a particle filter to model the bees, but this morning I tried out (with a very simple simulated dataset) using a multi-output Gaussian process. I think these results look better*:

The dots are samples to show uncertainty, set to 1/3 of the true standard deviation for clarity.

The straight lines are from the ‘photographs’ taken from two cameras. The green helix is the ‘true’ path of the simulated bee.

I can’t believe how quick it is to use stochastic variational inference! The slowest bit was having to write some code to do batch-compatible cross-products in TensorFlow. Notebook here.
I’ve already run the particle smoother on some learning flights, but will try this out on them instead…
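
For what it’s worth, a batch/broadcast-friendly cross product can be written straight from the components; a sketch of the sort of thing (not the actual notebook code):

```python
import tensorflow as tf

def batch_cross(a, b):
    """Cross product over the last axis (size 3), broadcasting over any batch dims."""
    ax, ay, az = tf.unstack(a, axis=-1)
    bx, by, bz = tf.unstack(b, axis=-1)
    return tf.stack([ay * bz - az * by,
                     az * bx - ax * bz,
                     ax * by - ay * bx], axis=-1)

# e.g. a batch of 100 vectors crossed with a single vector:
# batch_cross(tf.random.normal([100, 3]), tf.constant([[0., 0., 1.]]))
```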

Reconstructing Flight Paths

Progress so far:

1) Automatic detection of the markers.
To make life easier in the long run I made posts that are hopefully visible from up to 50m away using the tracking system’s optics and are machine readable (see here). The tool finds the location/orientation of the markers in the image. It then uses the top and bottom of each to help with alignment.

image.png
Finding the posts automatically. Numbers are the id of the post and a % confidence.

2) Automatic registration of the cameras and markers. This was quite hard.
a) I’ve provided the distance between lots of pairs of landmarks and cameras (measured on the day), and approximate x,y coordinates for all of them. The tool then does a rough optimisation step to work out roughly where everything is (i.e. the x,y coordinates of all the objects are adjusted to make the measured distances match):

image.png
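
A minimal sketch of this rough layout step, assuming the measurements are kept as (name_a, name_b, distance) tuples (the real tool differs in the details):

```python
import numpy as np
from scipy.optimize import least_squares

def rough_layout(names, approx_xy, measurements):
    """Adjust x,y positions so inter-object distances match the tape measurements.
    names: list of object names; approx_xy: dict name -> (x, y) initial guess;
    measurements: list of (name_a, name_b, measured_distance)."""
    index = {n: i for i, n in enumerate(names)}
    x0 = np.array([approx_xy[n] for n in names], float).ravel()

    def residuals(flat):
        xy = flat.reshape(-1, 2)
        return [np.linalg.norm(xy[index[a]] - xy[index[b]]) - d
                for a, b, d in measurements]

    # Note: distances alone only fix the layout up to rotation/reflection/translation;
    # the approximate initial coordinates anchor it near the right orientation.
    fit = least_squares(residuals, x0)
    return {n: fit.x.reshape(-1, 2)[i] for n, i in index.items()}
```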

b) Once all the landmarks are located in an image from each of the cameras (see step 1), the orientation (yaw, pitch, roll), fov and position of the cameras and the markers are optimised to make their 3d locations ‘match’ the photos.

image.png
The blue crosses mark the ‘rendered’ locations of the objects from their 3d estimated positions. The yellow crosses are the identified locations in the images. Note: ignore the ‘nest’ marker – the ‘nestboxleft/right’ markers on the right of the image show the location of the nest.

3) Detect Bee. We then run the standard tag detection algorithm for each camera, which produces image coordinates of possible detected bees. [edit: we then manually confirm each detection].

4) Compute 3d path. To get the 3d path, I’m using a particle filter/smoother [edit: later we use a Gaussian process, but the images here are from the PF] to model the path of the bee. This gives an estimated x,y,z and a distribution over space (which gives us an idea of its confidence). Using the camera configurations determined in step 2 and the image coordinates from step 3, each detection equates to a line the bee might lie on in 3d space… I’ll skip the details now… the upshot is you end up with a trajectory in 3d. I’ve rendered the trajectory back onto the 4 camera images.
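
The ‘line in 3d space’ for each detection comes from a simple pinhole model; a sketch, assuming the camera pose is given as a position plus a world-from-camera rotation matrix (not necessarily how the real code parameterises it):

```python
import numpy as np

def pixel_to_ray(u, v, cam_pos, R_world_from_cam, fov_deg, image_width, image_height):
    """Return (origin, unit direction) of the 3d ray through pixel (u, v)
    for a simple pinhole camera; fov_deg is assumed to be the horizontal field of view."""
    f = (image_width / 2) / np.tan(np.radians(fov_deg) / 2)   # focal length in pixels
    d_cam = np.array([u - image_width / 2, v - image_height / 2, f], float)
    d_world = R_world_from_cam @ d_cam
    return np.asarray(cam_pos, float), d_world / np.linalg.norm(d_world)

# The bee's position is then wherever the rays from the different cameras
# (nearly) intersect, which is what the particle filter / GP is estimating.
```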

image.png
Camera 1: The blue line is the trajectory (I’ve not smoothed between observation times yet) and the yellow circles indicate one standard error (i.e. gives us an idea how confident the model is to the location of the bee). The numbers are seconds

image.png
Camera 2: The blue line is the trajectory (I’ve not smoothed between observation times yet) and the yellow circles indicate one standard error (i.e. gives us an idea how confident the model is to the location of the bee). The numbers are seconds.

image.png
Camera 3: The blue line is the trajectory (I’ve not smoothed between observation times yet) and the yellow circles indicate one standard error (i.e. gives us an idea how confident the model is to the location of the bee). The numbers are seconds.

image.png
Camera 4: The blue line is the trajectory (I’ve not smoothed between observation times yet) and the yellow circles indicate one standard error (i.e. gives us an idea how confident the model is to the location of the bee). The numbers are seconds.

The trajectories above are 3d, projected back onto the photos.

The nest had been moved a few hours earlier, so we wouldn’t necessarily expect any learning flights really.

Looking at the photos carefully, I think the bee heads left along the hedge (in the images below (2d detection) the blue crosses are where the system thinks it’s found a bee, the rest are just debugging info – i.e. where it’s looked). The smoother had low confidence about where the bee was after ~7 seconds. If I went through and manually clicked on the ‘correct’ targets it would be able to reconstruct more of the path. Note the bee at the top of the hedge.

image.png
image.png
image.png

I’ve just run it on one flight so far (from the 20th at 13:58:00)!

After the first day at the field site, I got a bit nervous and moved the cameras a bit closer to the nest as I was worried the system wouldn’t see the bee. I think the bee is detected quite well, in retrospect, but the problem now is the cameras are slightly too close really! I should have had more faith in the system!