Our platform allows you to donate to projects that are restoring nature, and follow the progress first-hand. Restoration practitioners around the world use our infrastructure to provide you with 1:1 feedback on the positive impact they've made with your donations. When it comes to the stream of pictures coming directly from the partner, our platform runs a ton of analyses to ensure that the proof you're receiving is real, unique and wouldn't have happened without your support!
To optimize for accountability and efficiency, our Ops team has developed its own AI models and a tech-driven verification dashboard.
We think it's pretty cool, and we'd love to tell you all about it!
Every image tells a story, not just visually but also through its data. Beyond the visible, images contain metadata with details like camera specs, size, orientation and, perhaps most importantly, where and when the photo was taken. Leveraging this data, we've developed several models to check and validate each image.
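To make that concrete, here's a simplified sketch (not our exact pipeline) of what pulling that metadata out of an image can look like in Python with the Pillow library; the EXIF tags used here, like DateTime, Orientation and GPSInfo, are standard fields:

```python
from PIL import Image, ExifTags

def read_metadata(path):
    """Extract the EXIF details we care about: size, orientation,
    and (via the GPS block) where and when the photo was taken."""
    image = Image.open(path)
    exif = image.getexif()
    # Map numeric tag ids (e.g. 274) to readable names (e.g. "Orientation")
    named = {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    gps_raw = exif.get_ifd(ExifTags.IFD.GPSInfo)  # latitude, longitude and altitude live here
    return {
        "size": image.size,
        "taken_at": named.get("DateTime"),
        "orientation": named.get("Orientation"),
        "gps": {ExifTags.GPSTAGS.get(tag, tag): value for tag, value in gps_raw.items()},
    }
```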
Our analysis starts with checking each image's metadata for the date, time, longitude, latitude and altitude at which it was taken, and cross-referencing that against the impact contract we have with the local project. Are the pictures all unique in terms of metadata, and were they taken after we placed the order? Are they in a place we know the partner has rightful access to? Are the intervals between the pictures consistent with the time it actually takes a human being to do the work? This is our first step to confirm a photo's authenticity and that it isn't missing crucial details like location data.
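As an illustration of that cross-referencing step (a sketch under assumptions: the photo and contract are plain dictionaries, and field names like site_radius_m are made up for this example), the core checks boil down to "taken after the order" and "taken inside the project area":

```python
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in metres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def passes_contract_checks(photo, contract):
    """Was the photo taken after we placed the order, and inside the agreed project site?"""
    taken_at = datetime.fromisoformat(photo["taken_at"])
    in_window = taken_at >= contract["order_placed_at"]
    distance = haversine_m(photo["lat"], photo["lon"],
                           contract["site_lat"], contract["site_lon"])
    on_site = distance <= contract["site_radius_m"]
    return in_window and on_site
```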
To prevent accidental re-uploads, we compare file names and image hash codes. A hash code is a unique digital fingerprint generated from the image's content, used to identify it and differentiate it from others. Comparing both names and hash codes ensures every image in our dataset is unique, so we avoid any double counting within our platform.
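A minimal sketch of that fingerprinting idea, using a plain SHA-256 hash of the file bytes (this catches exact re-uploads; near-duplicates are handled by the similarity checks described further on):

```python
import hashlib

def file_fingerprint(path):
    """SHA-256 digest of the raw file bytes: identical uploads give identical digests."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def find_reuploads(paths):
    """Group file paths that share a fingerprint, i.e. accidental re-uploads."""
    seen = {}
    for path in paths:
        seen.setdefault(file_fingerprint(path), []).append(path)
    return {digest: group for digest, group in seen.items() if len(group) > 1}
```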
Seeing your donation come to life is great, but if you have to tilt your head to see it in the right orientation it is a bit annoying. The other AI checks we have lined up also work better if the pictures all share the same orientation. So we run an algorithm to check each image's orientation; if an image turns out to be rotated, it is presented to our Ops team to adjust where needed.
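One common way to do this (a sketch, not necessarily the exact model we run) is to read the EXIF orientation tag and, where needed, rewrite the image with its pixels rotated upright, for example with Pillow:

```python
from PIL import Image, ImageOps, ExifTags

def needs_rotation(path):
    """The EXIF orientation tag is 1 when the pixels are already upright."""
    orientation = Image.open(path).getexif().get(ExifTags.Base.Orientation, 1)
    return orientation != 1

def normalise_orientation(path, out_path):
    """Physically rotate the pixels into the upright position and save the result."""
    ImageOps.exif_transpose(Image.open(path)).save(out_path)
```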
The next AI-powered verification checks the images for sharpness. A blurry or shaky image doesn't do justice to your contribution and skews the results of our further analysis, so it is rejected.
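A widely used recipe for this kind of blur check (shown here as an illustrative sketch; the threshold is arbitrary and would be tuned on real data) is the variance of the Laplacian, for example with OpenCV:

```python
import cv2

def is_sharp(path, threshold=100.0):
    """Variance of the Laplacian: a sharp image has many edges, a blurry one very few."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    score = cv2.Laplacian(gray, cv2.CV_64F).var()
    return score >= threshold
```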
What if one of our practitioners accidentally uploads an image that isn't proof of the implementation of your donation, like an accidental selfie instead of a picture of a tree being planted? Our outlier AI model identifies images that are markedly different from the rest we've received from that partner, ensuring we keep only genuine proofs of your donation.
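To give a feel for how such an outlier check can work (a toy sketch: in practice richer image embeddings would be used, here a simple colour histogram stands in for them), you can describe every photo in a partner's batch as a feature vector and let an anomaly detector point out the odd ones:

```python
import numpy as np
from PIL import Image
from sklearn.ensemble import IsolationForest

def simple_features(path, bins=16):
    """Tiny stand-in for a real image embedding: a normalised RGB colour histogram."""
    pixels = np.asarray(Image.open(path).convert("RGB").resize((128, 128)))
    hist = [np.histogram(pixels[..., channel], bins=bins, range=(0, 255))[0] for channel in range(3)]
    features = np.concatenate(hist).astype(float)
    return features / features.sum()

def flag_outliers(paths, contamination=0.05):
    """Fit an IsolationForest on one partner's batch and return the photos it finds anomalous."""
    X = np.stack([simple_features(p) for p in paths])
    labels = IsolationForest(contamination=contamination, random_state=0).fit_predict(X)
    return [path for path, label in zip(paths, labels) if label == -1]  # -1 marks an outlier
```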
Now, let's explore our most exciting model. Imagine having two images of the same tree, each taken from a different angle: counted naively, that single tree would show up twice. To address this, we've developed an AI model, trained on our own dataset, to detect such similarities and prevent double counting.
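Our actual model is trained on our own imagery, but the underlying idea can be sketched quite simply: represent each photo as an embedding vector from an image model and flag pairs whose cosine similarity exceeds a threshold (both the embeddings and the threshold here are placeholders):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (1.0 means pointing the same way)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_possible_duplicates(embeddings, threshold=0.9):
    """embeddings: {photo_id: vector}. Return the pairs that look like the same tree."""
    ids = list(embeddings)
    pairs = []
    for i, first in enumerate(ids):
        for second in ids[i + 1:]:
            if cosine_similarity(embeddings[first], embeddings[second]) >= threshold:
                pairs.append((first, second))
    return pairs
```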
Automation is cool and efficient, but it still takes some human judgement to make the final call. So, just to be sure, we keep a human eye on the outcomes of the models, evaluating the highest-risk imagery. With every proof uploaded, the platform learns from that human feedback and gets better!
Additionally, we conduct a bit of detective work, tracing the paths taken by photographers. This ensures that only genuinely duplicate photos are flagged as invalid, like when someone inadvertently captures the same tree while walking in a circle. Our models might not hit the bullseye every time, but with a human in the loop and with third-party sources like satellite and drone imagery, we're confident we're getting quite accurate results.
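As a rough sketch of that detective work (the field names and the five-metre radius are made up for illustration, and haversine_m is the distance helper from the metadata sketch above), you can order a photographer's photos by timestamp and check whether a later shot lands almost exactly where an earlier one was taken, which suggests they circled back to the same tree:

```python
def circled_back(photos, radius_m=5.0):
    """photos: list of dicts with 'id', 'taken_at', 'lat' and 'lon'.
    Return pairs of photo ids taken within radius_m of each other: likely the same spot."""
    ordered = sorted(photos, key=lambda photo: photo["taken_at"])
    suspects = []
    for i, later in enumerate(ordered):
        for earlier in ordered[:i]:
            # haversine_m as defined in the earlier metadata sketch
            if haversine_m(earlier["lat"], earlier["lon"], later["lat"], later["lon"]) <= radius_m:
                suspects.append((earlier["id"], later["id"]))
    return suspects
```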
Curious about the inner workings of these models or eager to help improve them? Send us an email! (info@sumthing.org).