How we made sure people were who they said they were
Nooklyn is a New York-based real estate marketplace. Some of its features have a social component, which means Nooklyn has a responsibility to protect its users from bad actors.
To foster a safe community for everyone, Nooklyn requires users to verify their identity before they're allowed to use certain features. For a while, there were 2 ways a user could verify their identity: connect their Facebook account to their Nooklyn account, or email us a photo of their passport or ID.
Neither method was ideal, and there were a few issues:
My managers had always wanted a way for users to verify their identity with a passport or ID without needing to email us. Luckily, they discovered a (now-defunct) service, Caisson, that built tools to let developers integrate identity verification into their own apps.
Caisson needed 3 photos to verify an ID: a selfie, the front of the ID, and the back of the ID. We could send these photos to Caisson's servers and they'd let us know whether the ID was legitimate and whether the face in the selfie matched the one on the ID. Requests could fail for a number of reasons, such as one of the photos being blurry or Caisson being unable to detect text on the ID.
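Caisson's actual API is defunct, but the outcomes described above can be modeled roughly like this. The enum cases, failure reasons, and the `nextStep` helper are assumptions for illustration, not Caisson's real interface:

```swift
// Hypothetical model of a Caisson verification outcome. The real API is gone;
// these cases only mirror the behavior described in the text.
enum VerificationResult {
    case verified              // ID legitimate, selfie matches the ID photo
    case rejected              // ID illegitimate or faces don't match
    case retryable(Reason)     // request failed for a recoverable reason

    enum Reason {
        case blurryPhoto       // one of the 3 photos was too blurry
        case noTextDetected    // Caisson couldn't read text on the ID
    }
}

// Map an outcome to what the app should ask the user to do next.
func nextStep(for result: VerificationResult) -> String {
    switch result {
    case .verified:
        return "done"
    case .rejected:
        return "contact support"
    case .retryable(.blurryPhoto):
        return "retake photos"
    case .retryable(.noTextDetected):
        return "retake ID photos"
    }
}
```

Treating blur and unreadable text as retryable rather than terminal is what makes the in-app flow viable: the user can just try again instead of falling back to email.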
It was a straightforward system but there were a few important details and constraints:
My managers wanted the user experience to be completely automated using Apple's Vision framework, a library that lets developers detect objects, like rectangles and faces, in a live camera feed.
The primary goal of the user experience became ensuring the user takes well-lit, non-blurry photos without using a capture button.
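One way to gate automatic capture on photo quality is a cheap sharpness heuristic over the camera frame. This is only a sketch of that idea, assuming grayscale luma samples from the feed; the scoring method and threshold are illustrative, not what Nooklyn actually shipped:

```swift
// Mean absolute horizontal gradient of a grayscale frame. Blurry frames have
// weak edges, so their average neighbor-to-neighbor difference is low.
func sharpnessScore(luma: [UInt8], width: Int, height: Int) -> Double {
    var total = 0.0
    for row in 0..<height {
        for col in 0..<(width - 1) {
            let i = row * width + col
            total += abs(Double(luma[i]) - Double(luma[i + 1]))
        }
    }
    return total / Double(height * (width - 1))
}

// Auto-capture only when the frame looks sharp enough. The threshold is an
// illustrative value; a real one would be tuned against test photos.
func shouldAutoCapture(luma: [UInt8], width: Int, height: Int,
                       threshold: Double = 10.0) -> Bool {
    sharpnessScore(luma: luma, width: width, height: height) >= threshold
}
```

In a real pipeline this check would run alongside Vision's rectangle and face detection, so a photo is only taken automatically when the ID (or face) is both framed and in focus.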
I worked with my team to sketch a few ideas to help users take good photos. We decided to do a couple things:
I came up with 2 versions of when to ask the user to confirm photos:
Ultimately we chose to ask the user to confirm the photos at the end, which kept the photo-taking process more hands-off.
We tested our automated experience and discovered a few issues:
After testing many prototypes to get the timing just right, I decided to show a capture button after about 7 seconds so the user could manually take a photo. I also decided to always show the capture button when the front-facing camera is active.
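The fallback described above boils down to a small piece of state. A minimal sketch, with the type and property names being my own and the 7-second timeout taken from the text:

```swift
import Foundation

// Decides when to show the manual capture button as a fallback
// to the automated, Vision-driven capture flow.
struct CaptureFallback {
    var timeout: TimeInterval = 7   // roughly the delay described in the text
    var startedAt: Date             // when auto-capture began for this step
    var isFrontCamera: Bool         // true during the selfie step

    func showsCaptureButton(at now: Date) -> Bool {
        // Selfie step: always offer manual capture.
        if isFrontCamera { return true }
        // ID steps: only surface the button once auto-capture
        // has been struggling for a while.
        return now.timeIntervalSince(startedAt) >= timeout
    }
}
```

Keeping the button hidden at first preserves the hands-off feel, while the timeout guarantees no one gets stuck waiting on detection that never fires.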
A Figma prototype showing the entire process: