One issue I thought might be a problem is that the pixels on a camera don’t each measure all three colours. Instead, each pixel measures one colour and the colours are then interpolated. This isn’t a problem if the object being photographed spans many pixels, but what if the object is a tiny bright dot, as in our situation?
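To illustrate the problem (a hypothetical sketch, assuming a standard RGGB Bayer mosaic rather than our camera’s actual layout): a sub-pixel bright dot is only recorded by whichever single colour channel it happens to land on, so its apparent colour depends on its position before demosaicing interpolates.

```python
def bayer_channel(row, col):
    """Which colour an RGGB Bayer photosite at (row, col) actually measures."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# A white dot (equal energy in R, G and B) hitting different photosites is
# reported as a different colour, because only one channel records it.
for r, c in [(0, 0), (0, 1), (1, 1)]:
    print(f"dot at ({r},{c}) -> measured only in {bayer_channel(r, c)}")
```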
An example photo of “What the ladybird heard”:
Sadly the problem of the filter seems to be impacting our bee-orientation/id experiment. Here I rotate a tag through 360°:
The result is less accurate predictions of orientation:
If we adjust the focus of the lens so the tag isn’t in focus, the colours are more reliable:
This leads to a more reliable prediction:
I think the plan now is to:
– Collect more data, but download it raw (without interpolation) – this also saves bandwidth from the camera.
– Look at fitting the PSF using this raw data.
– Maybe leave the camera just a little out of focus, to ensure all the colours are detected.
This was an initial experiment I ran back in December, to see if this idea might work.
The problem of using polarising filters
So, one thing I’ve been thinking about is how to get the orientation from the polarising filters from a side view. From above it is easy (although the 180-degree symmetry needs resolving): one just uses two cameras (with 0° and 45° polarising filters on) and a flat polarising filter on the back of the bee. From the side it’s more awkward – with a ridge etc…
Anyway, I went back to my original idea of using colours. For this experiment I made a hexagonal ‘tube’ – it’s a little large in this case (about 5mm across, when I think 3mm is probably the limit – I made a smaller one yesterday, about 3mm across, that also worked). I put the glass-bead-style retroreflector inside and covered the ends of the tube (this maybe needs strengthening with superglue).
I then used a tracking system to take photos of the unit from 8m away (the longest straight line in my house :).
I think maybe this isn’t as bright as it used to be: the colour camera isn’t quite as sensitive, the filters absorb some light, and the cylindrical shape rather than a ridge means it’s also a bit weaker [although works from all angles], and I used one flash instead of four… but anyway, here’s some of the photos to give an idea…
To build it I picked 6 filters using their spectra provided by LEE filters, hoping I’d pick some that would lead to a path that doesn’t have overlaps. I also just picked filters that transmitted the most light. This could be improved I think – as you can see the dots aren’t in a neat circle…
This is on the same sort of triangle as above (although flipped and rotated, so the two axes represent normalised colour)… the numbers are roughly (+/- 15 degrees) the angle of the tag. The tag was imaged in order (0,15,30…345,0,15…) and the lines join sequential measurements. Currently we are just using the average value for each colour in a square around the tag, but in future this could be improved.
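The averaging-and-normalising step can be sketched like this (a minimal sketch with hypothetical names – the real code works on the tracking-system images):

```python
import numpy as np

def normalised_colour(image, cx, cy, half=5):
    """Average the RGB values in a (2*half+1)-pixel square around the tag at
    (cx, cy), then normalise so the channels sum to one. The first two
    components (r, g) give the 2d 'triangle' coordinate; b = 1 - r - g."""
    patch = image[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(float)
    mean = patch.reshape(-1, 3).mean(axis=0)
    frac = mean / mean.sum()
    return frac[0], frac[1]

# Toy example: a pure-red patch maps to the (1, 0) corner of the triangle.
img = np.zeros((20, 20, 3))
img[..., 0] = 200.0
r, g = normalised_colour(img, 10, 10)
```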
We can fit a Gaussian process (or other regressor) to this…
Cross Validation Results
MAE = 31 degrees, RMSE = 48 degrees (chance level: MAE 90, RMSE 104).
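A sketch of this kind of fit (not the original notebook – synthetic stand-in data, and the kernel and noise settings are my own assumptions): regressing (sin, cos) of the angle rather than the angle itself, so the 0/360 wrap-around doesn’t confuse the regressor, then recovering the angle with arctan2.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Synthetic stand-in for the colour measurements: 24 angles in 15-degree
# steps, each mapped to a noisy 2d 'normalised colour' coordinate.
rng = np.random.default_rng(0)
angles_deg = np.arange(0.0, 360.0, 15.0)
angles = np.deg2rad(angles_deg)
X = np.c_[np.cos(angles), np.sin(angles)] + rng.normal(0, 0.05, (len(angles), 2))

# Targets are (sin, cos) of the angle, avoiding the circular discontinuity.
Y = np.c_[np.sin(angles), np.cos(angles)]
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-2).fit(X, Y)

pred = gp.predict(X)
pred_deg = np.rad2deg(np.arctan2(pred[:, 0], pred[:, 1])) % 360
circ_err = (pred_deg - angles_deg + 180) % 360 - 180  # wrapped angular error
```

For a fair error estimate one would cross-validate (as in the results above) rather than score in-sample.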
You can see that there are two directions that seem to look similar (-30 & 150 degrees), where it gets a bit confused. One can see why in the colour map plots (where the dots around 330ish and 150ish are a bit jumbled together – you might even be able to tell by eye looking at the initial photos in the second figure).
Tweaking the choice of colours should help, as would taking more closely spaced training points, rather than asking it to interpolate over 15-degree steps. Note also that the actual angle of the tag was only accurate to +/- 15 degrees. Anyway – this colour-tag idea is another potential approach, instead of the polarising filters.
I only spent a couple of hours or so getting this together, so hopefully I can make a lot of improvements on this in the new year.
My first outdoor experiment with the ‘colour to get orientation’ project looks like it’s got promising results, but I need to auto-detect the ‘posts’ to get the ground-truth orientation. I might have to come up with a better idea and try again.
The following figure shows that it does seem like the colour changes (these are zoomed in on each of the tags in the first 49 flash photos). The first lot are from 24m away; the later dozen are from 16m away.
The puzzle is to get the ground truth orientation from the black-and-white sticks and then see if the colours relate to the orientation.
The IPCC’s SR15 report a couple of years ago is a useful summary of why we should really avoid going above a 1.5C increase, and gives ballpark budgets for how much CO2 we can still emit. I guess sea level rise, the Arctic melting and tipping points like the Amazon becoming savanna are all a worry, but the main ‘output’ that I’ve worried about from when I first heard about climate change was the impact on crop yield. I’m a complete non-expert, but I imagine this typically will appear not as a gradual reduction in yield but as one- or two-year large-scale crop failures. When this happens in more than one part of the world at once, the price of food will go up.
So how much CO2 can we still produce and stand a good chance of avoiding going into the orangey-red bit of the crop-yield plot? Chapter 2 of the report has this table (I’ve cropped it to fit on the post – see p108 of chapter 2 of the SR15 report).
It looks like things get bad quite quickly after 1.5C. A 50/50 chance of avoiding going past that (and if we care about our grandchildren) gives us a budget of 480GtCO2, but that was nearly 4 years ago. We produced about 40GtCO2 in each of those years, so I guess we’re now at 320GtCO2.
So we can continue to emit at this rate for 8 more years (then no emissions).
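The budget arithmetic above, as a quick check (figures are the approximations from the post, not precise inventory numbers):

```python
# Remaining carbon budget, using the rough figures above.
budget = 480       # GtCO2 budget for a ~50/50 chance of staying under 1.5C
annual = 40        # approximate global emissions, GtCO2 per year
years_elapsed = 4  # roughly, since the budget was published

remaining = budget - years_elapsed * annual  # GtCO2 left
years_left = remaining / annual              # years at the current rate
```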
– Motor vehicles: 27%
– Electricity generation: 21%
– Heating houses + cooking: 15%
– Business: 17%
– Agriculture: 10% [methane from cows is half of this, + NOx]
– Waste management (mostly methane from landfill): 4%
– Industrial processes: 2%
– Heating public sector buildings: 2%
– Landuse: 1%
(Rounding means this adds up to 99%. Total: 5.4 tonnes/person.)
So wind, wave and solar made 88.5TWh – only about 5% of the 1651TWh total we used in 2019 (source). Obviously renewables can also include burning plants and rubbish (113TWh in the diagram above; 10% of petrol is now bioethanol, and some people have wood burners). But I guess my worry is it feels like we’re a long way from powering our heating and our cars from renewables.
My last thought (I’ve run out of time to write more) is that I think we need to aim beyond 2100. My children’s children will be middle-aged in 2100.
There were several issues that became apparent during and after the 2021 Summer experiments! I’ve listed some of these issues and the planned solution. This isn’t a very exciting post, skip down for the last few posts which have 3d videos!
– Detectable Posts: Although fairly easy to detect, sometimes non-posts get detected. I think a few more stripes would make finding them easier (at the cost of a slight reduction in the maximum distance), e.g. going from 11 stripes to 15.
– Improved pot design: It turns out we can tag the bees in-field with modified marking pots, but need to add lids to stop them escaping!
– Tag contamination: The wax on the retroreflectors degrades them: Try using alternative materials.
– Firing rate: This is a big challenge, with several potential bottlenecks. I found the flashes’ thermal-protection circuits kicked in. I did experiment with firing only half of the flashes each time – I’ve not looked at the results of this yet. It might be possible to disable the protection circuit! The other problem was that many images failed, probably due to buffers not being serviced quickly enough etc. Will need to experiment (might be able to downscale the image 2×2). A rate of 4Hz is currently possible for 10 seconds.
– Tag shape / detectability: Related to the above issue, the tags aren’t always visible, possibly because of their shape. Redesigning them to be visible from all directions (e.g. a cone, cylinder or sphere) could help.
– Timing alignment: A major issue I found was that the Raspberry Pi-generated timestamps often ‘jumped’ or ‘froze’; given the importance of precise timing, this causes a major problem. The solution I’m building is a (GPS-time-synced) ‘clock’ that simply displays the time, to the nearest 4ms, in binary on a cylinder. LED driver, LEDs, GPS receiver, Arduino [e.g.].
– Range: Should keep looking at how to improve the range – switching to glass corner-cube reflectors? A focused flash? If the frequency is increased, could it be traded for a tighter ‘high-gain’ beam?
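The binary clock encoding from the timing-alignment item above could look something like this (a hypothetical sketch – the function name, bit count and ordering are my own assumptions, not the actual firmware):

```python
def binary_clock_bits(ms_since_epoch, n_bits=16):
    """Hypothetical encoding for the LED 'clock': the time in 4ms ticks,
    returned as a list of LED on/off bits, least-significant first."""
    ticks = (ms_since_epoch // 4) % (1 << n_bits)
    return [(ticks >> i) & 1 for i in range(n_bits)]

# Two camera frames 4ms apart show different LED patterns, so each photo
# can be stamped with a GPS-accurate time just by reading the LEDs.
a = binary_clock_bits(1_000_000)
b = binary_clock_bits(1_000_004)
```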
I applied the Gaussian process approach to the first learning flight recorded from the new nest. Unfortunately two of the four cameras weren’t very useful (one pointed the wrong way! and the other had a battery failure), so the path is more uncertain than I would like.
(First ten seconds of the trajectory. The small moving dots are samples from the posterior distribution, to give an idea of uncertainty. Similarly, the shade of the line indicates uncertainty, with white/hidden = a standard deviation of more than 50cm in the prediction.) The big dots are the posts and nest. The triangles are the camera locations.
Next I’ll look at other learning flights and start doing some cross validation on the predicted flight path trajectory.
Last week I was using a particle filter to model the bees, but this morning I tried out (with a very simple simulated dataset) using a multioutput Gaussian process. I think these results look better*:
The dots are samples to show uncertainty, set to 1/3 of the true standard deviation for clarity.
The straight lines are from the ‘photographs’ taken from two cameras. The green helix is the ‘true’ path of the simulated bee.
I can’t believe how quick it is to use stochastic variational inference! The slowest bit was having to write some code to do batch-compatible cross products in TensorFlow. Notebook here. I’ve already run the particle smoother on some learning flights, but will try this out on them instead…
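The batch-compatible cross product is just the component formula broadcast over leading batch dimensions – here is a NumPy sketch of the idea (not the actual notebook code; the same pattern carries over to TensorFlow ops):

```python
import numpy as np

def batch_cross(a, b):
    """Cross product over the last axis (size 3), broadcasting over any
    leading batch dimensions -- the same component formula one would write
    as a batch-friendly TensorFlow op."""
    ax, ay, az = a[..., 0], a[..., 1], a[..., 2]
    bx, by, bz = b[..., 0], b[..., 1], b[..., 2]
    return np.stack([ay * bz - az * by,
                     az * bx - ax * bz,
                     ax * by - ay * bx], axis=-1)

# A batch of vector pairs: the result matches NumPy's built-in np.cross.
a = np.random.default_rng(1).normal(size=(5, 4, 3))
b = np.random.default_rng(2).normal(size=(5, 4, 3))
out = batch_cross(a, b)
```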
1) Automatic detection of the markers. To make life easier in the long run I made posts that are hopefully visible from up to 50m away using the tracking system’s optics and are machine readable (see here). The tool finds the location/orientation of the markers in the image. It then uses the top and bottom of each to help with alignment.
2) Automatic registration of the cameras and markers. This was quite hard. a) I’ve provided the distance between lots of pairs of landmarks and cameras (measured on the day), and approximate x,y coordinates for all of them. The tool then does a rough optimisation step to work out roughly where everything is (i.e. the x,y coordinates of all the objects are adjusted to make the measured distances match):
b) Once all the landmarks are located in an image from each of the cameras (see step 1), the orientation (yaw, pitch, roll), fov and position of the cameras and the markers is optimised to make their 3d locations ‘match’ the photos.
3) Detect Bee. We then run the standard tag detection algorithm for each camera, which produces image coordinates of possible detected bees. [edit: we then manually confirm each detection].
4) Compute 3d path. To get the 3d path, I’m using a particle filter/smoother [edit: later we use a Gaussian process, but the images here are from the PF] to model the path of the bee. This gives an estimated x,y,z and a distribution over space (which gives us an idea of its confidence). Using the camera configurations determined in step 2, and the coordinates from step 3: Each coordinate in step 3 equates to a line the bee might lie on in 3d space… I’ll skip the details now… the upshot is you end up with a trajectory in 3d. I’ve rendered the trajectory back onto the 4 camera images.
The trajectories above are 3d, projected back onto the photos.
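The geometry at the heart of step 4 can be sketched as follows (a simplified stand-in, assuming each 2d detection has already been converted to a 3d ray through its camera; the real system fuses many rays with a filter/smoother rather than intersecting pairs):

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two 3d rays, each given by a
    camera centre p and viewing direction d. With detections from two
    cameras, the bee should lie near this point."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = p2 - p1
    c = d1 @ d2
    denom = 1.0 - c * c  # zero only if the two rays are parallel
    t1 = (b @ d1 - c * (b @ d2)) / denom
    t2 = (c * (b @ d1) - b @ d2) / denom
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Two cameras whose rays both pass through the point (1, 1, 1):
p = triangulate(np.zeros(3), np.array([1.0, 1.0, 1.0]),
                np.array([2.0, 0.0, 0.0]), np.array([-1.0, 1.0, 1.0]))
```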
The nest had been moved a few hours earlier, so we wouldn’t necessarily expect any learning flights really.
Looking at the photos carefully, I think the bee heads left along the hedge (in the images below (2d detection) the blue crosses are where the system thinks it’s found a bee, the rest are just debugging info – i.e. where it’s looked). The smoother had low confidence about where the bee was after ~7 seconds. If I went through and manually clicked on the ‘correct’ targets it would be able to reconstruct more of the path. Note the bee at the top of the hedge.
I’ve just run it on one flight so far (from the 20th at 13:58:00)!
After the first day at the field site, I got a bit nervous and moved the cameras a bit closer to the nest as I was worried the system wouldn’t see the bee. I think the bee is detected quite well, in retrospect, but the problem now is the cameras are slightly too close really! I should have had more faith in the system!