Let's face it, birdwatching takes time. A whole lot of time.
Annoyingly for us, birds have evolved not to make appearances when there's a big mammal eagerly waiting for them, so what better to do than automate the hell out of it? Raspberry Pis rarely move of their own accord, so they're perfect for setting up without spooking wildlife.
We are going to use a Raspberry Pi with an HQ Camera, paired with TensorFlow Lite machine learning to identify when there's a bird in front of the camera, and take a photo when it sees one. Optionally, we can use a touchscreen to make the process of using the camera much easier.
The Raspberry Pi HQ Camera lens is focused by hand and then stays fixed, so it should be focused on a set point, such as a birdbath, feeder, or table where birds are likely to visit when left alone. You can also just leave the birbcam on the (dry) grass and place some seed in front of it and you'll be sure to attract at least a pigeon.
We are going to use the new libcamera stack - so make sure you have a recent version of Raspberry Pi OS installed and everything is up to date.
There's no coding needed to get it up and running, but some proficiency with Raspberry Pi will come in handy.
Note: You don't need to use the HQ Camera if you're on a budget - a standard Camera Module will functionally work the same, just without the same photo quality!
This guide doesn't cover everything, so here are some good things to start with:
I chose a 3.5-inch display which plugs into the 40-pin GPIO header, since we won't be using the GPIO for anything else, and it means the entire camera and screen construction holds together nicely as one piece. We've essentially just made a really fancy digital camera!
Getting the touchscreen working relies on following the manufacturer's guidelines; for the one I'm using, I followed the instructions on their Wiki.
I found the screen was extremely dim, so I followed their advice on setting brightness to maximum using PWM. The screen kept resetting to the dim brightness on reboot, so I saved the command inside /etc/rc.local, which gets run on boot and will auto-set brightness every time:
Run sudo nano /etc/rc.local to edit the file, and add gpio -g pwm 18 100 before the line saying exit 0.
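For reference, the end of /etc/rc.local should end up looking something like this (the gpio command here is the one from my screen's instructions; use whatever your manufacturer's Wiki recommends):
# Set the touchscreen backlight back to full brightness on every boot
gpio -g pwm 18 100
exit 0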
Things will be tiny, so let's make them more usable!
The main bit of the project is the camera, so install the HQ camera module as per the guidance, and boot the Raspberry Pi.
If you don't have a touchscreen, plug in a monitor, mouse and keyboard to the Pi so we can test if the camera works.
To do this, use the pre-installed libcamera-hello by running libcamera-hello -t 0. Save a captured image by running libcamera-jpeg -o test.jpg. Practice taking photos and viewing the saved image on the Pi.
To really enjoy and share the photos you take with libcamera-jpeg, you'll want to copy them to your computer or device. rsync is a really useful tool to sync files from one place to another, and you can then use rclone to push photos to places like Google Drive or Dropbox.
Follow the guide here on getting rsync and rclone set up: https://raspberrypi-guide.github.io/filesharing/file-synchronisation-rsync-rclone#uploading-data-to-the-cloud
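Once rclone is set up with a cloud remote (here I'm assuming you called the remote gdrive when running rclone config, and that your photos live somewhere like /home/pi/birds, as we'll set up later in this guide), pushing them to the cloud looks something like:
rclone copy /home/pi/birds gdrive:birbcam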
You'll need the Pi and your other device connected to the same WiFi router, SSH enabled on the Pi, and an SSH/rsync client on your other device. Once installed, connect to the Pi with its username and password (the defaults are pi / raspberry). Practice copying over photos; it's a bit manual, but you're now up and running.
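As an example (the hostname, username and destination folder here are just assumptions, so adjust them to your setup), you could pull every JPEG from the Pi's home directory onto your computer with:
# -a preserves timestamps, -v is verbose, -z compresses over the network
rsync -avz 'pi@raspberrypi.local:/home/pi/*.jpg' ~/birb-photos/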
If you don't want to pull photos and instead want to push them, you can use scp to securely copy photos from the Pi to another device.
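For example, to push the test shot from the Pi to another machine (again, the username, hostname and path are assumptions, and your other device needs to be running an SSH server):
scp /home/pi/test.jpg you@your-computer.local:~/Pictures/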
The Raspberry Pi camera is more like a video camera than a still camera, so we need a way to trigger the script to take static photographs. Instead of using general motion detection, we will specifically detect birds so we don't need to filter through thousands of images of leaves moving.
The first step is to install TensorFlow Lite following the instructions.
TensorFlow Lite is a machine learning framework that can be used to identify when there's a bird in front of the camera. It doesn't know what a bird is yet, but later on we'll give it a model that describes the concept of a bird, and it'll use that model to say when it can see one.
As you'll have seen, libcamera-apps come pre-installed with Raspberry Pi OS, so we can just use libcamera-detect to open the camera up and plug into TensorFlow Lite to identify when a bird is on screen. While libcamera-apps comes with the OS, libcamera-detect does not come pre-built, so we will need to build it ourselves, folding in our now-installed TensorFlow Lite installation.
Rebuild libcamera-apps
Make sure you continue on to actually build libcamera-apps, not just install the dependencies. Install the dependencies:
sudo apt install -y libcamera-dev libepoxy-dev libjpeg-dev libtiff5-dev
sudo apt install -y cmake libboost-program-options-dev libdrm-dev libexif-dev
Then get ready to build by downloading the code for libcamera-apps:
cd
git clone https://github.com/raspberrypi/libcamera-apps.git
cd libcamera-apps
mkdir build
cd build
Build it real good
cmake .. -DENABLE_DRM=1 -DENABLE_X11=1 -DENABLE_QT=1 -DENABLE_OPENCV=0 -DENABLE_TFLITE=1
This enables TensorFlow Lite support, so libcamera-detect will be able to use it.
make -j4 # use -j1 on Pi 3 or earlier devices
sudo make install
sudo ldconfig
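If you want a quick sanity check that the freshly built apps are the ones on your path, recent builds of libcamera-apps accept a --version flag:
libcamera-detect --version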
Breathe - that's the tricky part done!
So far we have most of the pieces in place but the software still doesn't know what a bird is.
Therefore, we need a model which describes what a bird is. Fortunately the hard work has already been done for us: there's an online library of common objects with pre-trained models that we can download.
Common Objects In Context (COCO) is a machine learning dataset of common objects (including birds) that we can use without having to train up our own model of a bird. From exploring the dataset, it looks like it will work perfectly.
For example, this image has multiple items identified: https://cocodataset.org/#explore?id=12805
The context is nothing if not diverse either: https://cocodataset.org/#explore?id=43511
You'll want to download the model and labels file somewhere sensible; I've put them in /home/pi/models. Follow the instructions here.
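If it helps to see it end to end, the download boils down to something like this (the archive URL below is the standard TensorFlow-hosted copy of this model, so check it against the instructions you're following):
mkdir -p /home/pi/models
cd /home/pi/models
wget https://storage.googleapis.com/download.tensorflow.org/models/tflite/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip
unzip coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip -d coco_ssd_mobilenet_v1_1.0_quant_2018_06_29
That leaves detect.tflite and labelmap.txt exactly where the JSON file below expects to find them.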
Create a file called object_detect_tf.json and paste the following in:
{
"object_detect_tf":
{
"number_of_threads" : 2,
"refresh_rate" : 10,
"confidence_threshold" : 0.5,
"overlap_threshold" : 0.5,
"model_file" : "/home/pi/models/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29/detect.tflite",
"labels_file" : "/home/pi/models/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29/labelmap.txt",
"verbose" : 1
}
}
This will be used as a "post process file", which means that it's used when the camera is running. It has a model and labels file, so it can understand what it sees and match it up with a label. There are some parameters which you can play with, but these worked well for me.
So far we have a camera, a TensorFlow Lite model that describes a bird, and libcamera-detect, which connects to the camera and runs the output through TensorFlow Lite. We will now run the command that opens the camera, watches for birds, and saves a photo each time one is spotted. That's done by running the following command:
libcamera-detect -t 0 -o ~/birb%04d.jpg --lores-width 400 --lores-height 300 --post-process-file object_detect_tf.json --info-text "Detecting Birds!" --object bird
You can tweak the parameters by looking at the libcamera-detect docs and the common command line options:
-t 0 runs with no timeout, so the camera keeps watching until you stop it
-o birb%04d.jpg is the output file name, where %04d becomes an incrementing four-digit counter
--lores-width / --lores-height set the size of the low-resolution stream that TensorFlow Lite analyses
--post-process-file object_detect_tf.json points at the post process file we created above
--object bird tells the detection stage which COCO label should trigger a capture
--info-text "Detecting Birds!" sets the text shown on the preview window
If you have a touchscreen, you can use desktop shortcuts to run commands such as:
libcamera-hello -t 30 as a viewfinder and a way of setting focus when on site
libcamera-detect ... to run the bird spotting program!
To keep things separated, we will create a shell script in our home directory (/home/pi) that will run the libcamera-detect command. Create the file by running nano ~/detect.sh and paste into it the following:
#!/bin/bash
# Name a folder after the current date and time so each session's photos stay together
DATE=$(date +"%Y-%m-%d_%H%M")
mkdir -p birds/$DATE
# Run until stopped, saving a numbered photo into that folder whenever a bird is detected
libcamera-detect -t 0 --lores-width 400 --lores-height 300 --post-process-file object_detect_tf.json --object bird -o birds/$DATE/%04d.jpg
The script creates a dated folder and then runs the libcamera-detect application, so each session's photos are saved somewhere like birds/2020-04-14_1015/0001.jpg.
Now do the same for libcamera-hello: create a file called viewfinder.sh with the libcamera-hello -t 30 command inside.
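In case it helps, viewfinder.sh can be as small as this (note that -t is measured in milliseconds, so I'm assuming you actually want something like a 30-second preview and have used 30000):
#!/bin/bash
# Open a preview window for 30 seconds to frame the shot and check focus
libcamera-hello -t 30000
Make both scripts executable so the desktop shortcuts can launch them:
chmod +x ~/detect.sh ~/viewfinder.sh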
Next we will get these shell scripts to run when a desktop shortcut is pressed.
cd ~/Desktop
Anything saved here will show up on the Desktop. Run nano Detect.desktop and paste in:
[Desktop Entry]
Comment=Preview
Terminal=true
Name=Detect
Exec=lxterminal -e "/home/pi/detect.sh"
Type=Application
Icon=/usr/share/raspberrypi-artwork/raspberry-pi-logo-small.png
Now do the same for a shortcut called Viewfinder, which will run the viewfinder.sh file, which will in turn run libcamera-hello.
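For reference, Viewfinder.desktop ends up almost identical to Detect.desktop, with only the Name and Exec lines changed:
[Desktop Entry]
Comment=Preview
Terminal=true
Name=Viewfinder
Exec=lxterminal -e "/home/pi/viewfinder.sh"
Type=Application
Icon=/usr/share/raspberrypi-artwork/raspberry-pi-logo-small.png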
You now have two icons on the Desktop: one opens libcamera-hello and acts as a viewfinder, and the other opens libcamera-detect to start spotting birds.
This is just the start of a much bigger project, but what we have done here already works with really good results.
How about detecting people instead of birds (--object person)? There's also a load of other options on the utilities above to play around with.
And this is all without having to write any detection code of our own!
Happy birdwatching!