This is a general discussion about the Telraam sensors being deployed as part of the ‘Our Streets Chorlton’ project. They are Raspberry Pi based traffic counters that measure the volume, speed and types of traffic on roads in the Chorlton and Chorlton Park areas. The sensors use artificial intelligence to classify what they see, and the resulting data is accessible on the Telraam website.
Two of our sensors are currently live. You can view their data here:
The main Telraam map is quite fun to look at too - someone has one up in Chapeltown in Leeds and another in Lymm!
It would be great to see more sensors get installed, especially where there are rat runs.
There’s a school at the end of my street, and I’m looking forward to seeing how traffic patterns change when schools are back full time. My perception is that the number of parents dropping off children morning and afternoon is definitely a lot less than ten years ago, but I’ll be checking my Telraam stats with interest.
It would be interesting to see how traffic on your road changes. One thing that I haven’t quite worked out is whether the sun affects the readings. The window my camera is placed in is south facing and also triple glazed, so there is a chance of getting a lot of internal reflections.
I am not sure about the numbers I am getting out of our Telraam. The bicycle/pedestrian ratio seems a bit off.
Same here - south facing and triple glazed.
Yesterday, for the first time since it was put in place in January, Telraam started counting lorries (heavy vehicles), and it’s counting a lot of them. I wonder why?
I am wondering if it treats SUVs as lorries. I do know that there are a surprising number of supermarket delivery vehicles about these days.
I just realised what you said regarding the lorries and the delay in counting them. It happened here too. My thinking - and I have no evidence to back this up, but @Nathan might be able to verify - is that the Telraam doesn’t know what a car, a bike, a pedestrian or a heavy vehicle is (I think the ‘heavy vehicle’ terminology is telling). All it knows is how to identify movement, and it has to split what moves into four boxes. So it first identifies cars versus people/bikes, then splits people from bikes, and lastly it learns to split cars into two groups: heavy vehicles and cars. The question is where the boundary between car and heavy vehicle lies, and that is why I think it might be counting large cars as heavy vehicles. This is pure conjecture.
I finally got round to counting traffic passing my window, on Friday March 5th between 10am and 11am. This is what me and my Telraam saw -
If you look at the combined pedestrian+bike count and the cars+lorries count, it’s looking pretty close.
That is closer than I expected, to be honest.
My understanding is that the sensors can detect moving objects and draw boxes around them, from these boxes several variables are measured:
The area (height × width).
The axis ratio (height / width).
Fullness, which is a measure of how much of the box around the moving object is background (ie, was present before the object was in frame) and how much is the moving object itself.
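As a rough sketch of those measurements (the function name and mask representation are my assumptions for illustration - Telraam’s actual on-device feature extraction isn’t published in this form):

```python
import numpy as np

def box_features(fg_mask, box):
    """Compute the three shape features described above for one
    detected moving object.

    fg_mask : 2-D boolean array, True where a pixel belongs to the
              moving object (foreground), False where it is background.
    box     : (x, y, w, h) bounding box around the object, in pixels.
    """
    x, y, w, h = box
    area = w * h                       # box area (height * width)
    axis_ratio = h / w                 # height / width
    patch = fg_mask[y:y + h, x:x + w]
    fullness = float(patch.mean())     # fraction of the box that is
                                       # moving object, not background
    return area, axis_ratio, fullness

# A tall, thin, mostly-full box suggests a pedestrian; a wide, low,
# full box suggests a vehicle.
mask = np.zeros((120, 200), dtype=bool)
mask[10:110, 50:90] = True             # a 100-tall, 40-wide upright blob
print(box_features(mask, (50, 10, 40, 100)))   # prints: (4000, 2.5, 1.0)
```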
These measurements are then used as the basis for a classifier that sorts objects into pedestrians, cyclists, and ‘other’ (cars/vans/buses/lorries).
To break down the ‘other’ category into cars vs heavy vehicles (vans/buses/lorries), they assume that most objects in this category are cars, and on this basis infer that the average car size is equivalent to the most commonly seen box area (this must be done on a per-camera, per-direction basis, to account for differences in things like camera angle and distance from the road). They then define anything above a cutoff value, usually 1.33× the average car size, as a heavy vehicle.
This is a large source of uncertainty. For example, the size difference between a large car and a small van can be very small, and on roads with multiple lanes, a small van in the lane further from the camera is likely to have a similar-sized bounding box to a car in the closer lane. Another issue is shadows, which can make an object appear much larger around sunset and sunrise.
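A minimal sketch of that cutoff rule, under my own assumptions about the details (the histogram binning and function name are mine; only the “modal box area ≈ average car, cutoff at 1.33×” idea comes from the description above):

```python
import numpy as np

def split_cars_and_heavies(box_areas, cutoff_factor=1.33):
    """Split 'other' objects into cars vs heavy vehicles: assume the
    most commonly seen box area is an average car, then call anything
    larger than cutoff_factor times that size a heavy vehicle."""
    areas = np.asarray(box_areas)
    # Histogram the box areas; the centre of the modal bin
    # approximates the average car size for this camera/direction.
    counts, edges = np.histogram(areas, bins=20)
    modal = edges[np.argmax(counts)] + (edges[1] - edges[0]) / 2
    cutoff = cutoff_factor * modal
    heavy = areas > cutoff
    return areas[~heavy], areas[heavy]

# Mostly car-sized boxes (~4000 px²) plus a few much larger ones.
areas = [3900] * 30 + [4000] * 50 + [4100] * 30 + [9000] * 5
cars, heavies = split_cars_and_heavies(areas)
print(len(cars), len(heavies))   # prints: 110 5
```

This also illustrates the failure mode described above: a large car whose box area creeps past ~1.33× the modal size gets counted as a heavy vehicle.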
You can find more on the methodology, sources of potential inaccuracy, and steps that are being explored to improve accuracy here.
Most of the uncertainty is due to counts being calculated on the fly, on the unit itself; much better algorithms exist, but they would have to be run externally.
I’ve built one of the Telraam sensors after chatting with Professor Enda Hayes who’s heading up the we-count.net project at Bristol University. Most of the sensors he’s helping to deploy are around Cardiff where he lives.
Mine, like others, is in a south-facing window with double glazing. I live on a school rat run, so I’m expecting to see two spikes of traffic: one in the morning going to school and another in the afternoon leaving school.
We have a problem with speeding cars too.
I built a similar device this time last year using this code: GitHub - pageauc/speed-camera (a Raspberry Pi object speed camera using Python and OpenCV, with motion tracking, a web interface and reporting).
It worked quite well & gave me some evidence as to the speed.
I’m interested in the Telraam project as part of my day job as Hull City Council’s Smart City lead: can we use these devices as additional counters for our Urban Traffic Control (UTC) SCOOT system, as well as engage the people of Hull in both understanding traffic and the power of open data?
We use a similar setup in Hull on our current CCTV, but process the image streams on a GPU cluster using software provided by Transpix.
Telraam state their devices provide about 85% accuracy.
Interesting regarding the SCOOT data - this is something we are trying to do too. @Nathan has been trying to make sense of it; it is not easy data to use, especially when you don’t have all of it. We are also developing a cohort of community data gatherers with a focus on traffic surveying, to verify some of the data we’re getting from the Telraam and SCOOT systems.
Have a chat with Luke Smith (Newcastle Urban Observatory) on Twitter - he processes our SCOOT data (all available as open data from https://opendata.hullcc.gov.uk) and creates awesome info.
(Scroll down for Hull)
There’s also daily profiles:
He’s open-sourced all the notebooks he uses to create the information.
Our modal transport camera data is also available as open data on the same platform, but that’s real-time only, with no historic data.
Thanks Adam, that is really interesting. Although TfGM has an open API to access some SCOOT data, it is ‘as is’ and difficult to understand.
I’ve learnt a lot about SCOOT & how it’s measured… also the failings in how it measures vehicles.
Hi @Adam - this is really interesting, thank you for sharing! I have messaged Luke on Twitter too.
Does anyone have an idea of how Telraam sensors might handle being located on a junction like this?