Creating a safe community by verifying users' identities

How we made sure people were who they said they were

My role
Interaction design, visual design, prototyping, iOS programming
1 designer (me), 1 engineer (me), 1 design manager (Moiz Malik), 1 technical manager (Alex Dimitriyadi)
3 months, 2019
Key result
Eliminated dependence on Facebook


Nooklyn is a New York based real estate marketplace. Some of its features have a social component which means Nooklyn has a responsibility to protect its users from bad actors.


To foster a safe community for everyone, Nooklyn requires users to verify their identity before they're allowed to use certain features. For a while, there were 2 ways a user could verify their identity: connect their Facebook account to their Nooklyn account, or email us a photo of their passport or ID.

Neither method was ideal and there were a few issues:

  • Some people had nothing to show - Some people simply didn't have any of the required materials, which meant they were unable to use certain features.
  • Not everyone wanted to share - Some people had a Facebook account, passport, or ID but didn't want to share it.
  • Verifying manually was slow - Emailing a photo meant someone at Nooklyn had to manually process the request. It could take hours to get a response, and if you emailed on the weekend, it might not get processed until the following Monday.
  • Verifying manually was time-consuming - On the business side of things, it was a manual process that cost staff hours.
  • Reliance on Facebook - Facebook's SDK had failed more times than we were comfortable with, and we didn't want our app to rely so heavily on Facebook.



My managers had always wanted a way for users to verify their identity with a passport or ID without needing to email us. Luckily, they discovered a (now-defunct) service, Caisson, that built tools to let developers integrate identity verification into their own apps.

Caisson needed 3 photos to verify an ID: a selfie, the front of the ID, and the back of the ID. We could send these photos to Caisson's servers and they'd let us know whether the ID was legitimate and whether the face in the selfie matched the one on the ID. Requests could fail for a number of reasons, like if one of the photos was blurry or if they couldn't detect text on the ID.

It was a straightforward system but there were a few important details and constraints:

  • All or nothing - All 3 photos were required in a single request.
  • Late errors - We wouldn't find out if a single photo was blurry until we sent all 3 photos.
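The all-or-nothing constraint can be sketched as a small client-side model. This is a minimal illustration, not Caisson's real API: the type and field names are my own, and the point is simply that no payload can be built until all 3 photos exist.

```swift
import Foundation

// Hypothetical sketch of the client-side request model.
// Caisson required all 3 photos in a single request, so this model
// refuses to produce a payload until every slot is filled.
struct VerificationPhotos {
    var selfie: Data?
    var idFront: Data?
    var idBack: Data?

    var isComplete: Bool {
        selfie != nil && idFront != nil && idBack != nil
    }

    // Returns the request payload only when all 3 photos exist,
    // mirroring the "all or nothing" constraint above.
    func payload() -> [String: Data]? {
        guard let selfie = selfie,
              let idFront = idFront,
              let idBack = idBack else { return nil }
        return ["selfie": selfie, "id_front": idFront, "id_back": idBack]
    }
}
```

Modeling it this way makes the late-error problem visible in the type system: the app can't even attempt a request until the user has taken all 3 photos, which is why the in-app guidance for each photo mattered so much.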

Apple's Vision framework

My managers wanted the user experience to be completely automated using Apple's Vision framework, a library of tools that lets developers detect objects, like rectangles and faces, in a live camera feed.


The primary goal of the user experience became to ensure the user takes well-lit, sharp photos without ever tapping a capture button.

I worked with my team to sketch a few ideas to help users take good photos. We decided to do a few things:

  • Display a masked shape over the camera feed to show users where to place their face and ID
  • Display popovers to give timely, useful instructions
  • Use animation and haptic feedback to let the user know something was happening, since the process was hands-off
  • Ask the user to confirm that their photos are good
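The hands-off flow above can be sketched as a simple state machine. The step names here are my own; in the real implementation, the transitions would be driven by Vision's face and rectangle detection callbacks rather than called directly.

```swift
// A minimal sketch of the hands-off capture flow as a state machine.
// Each step advances once a well-lit, sharp photo has been captured,
// ending with a single confirmation screen for all 3 photos (option B).
enum CaptureStep: Equatable {
    case selfie
    case idFront
    case idBack
    case confirmAll

    func next() -> CaptureStep {
        switch self {
        case .selfie:     return .idFront
        case .idFront:    return .idBack
        case .idBack:     return .confirmAll
        case .confirmAll: return .confirmAll // terminal: review and submit
        }
    }
}
```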

I came up with 2 versions of when to ask the user to confirm photos:

A: Ask the user to confirm after each photo
B: Ask the user to confirm after taking all 3

Ultimately we chose to ask the user to confirm the photos at the end, which kept the photo-taking process more hands-off.


We tested our automated experience and discovered a few issues:

  • Sometimes the code didn't work - Our implementation occasionally wouldn't detect an ID even in seemingly perfect lighting conditions. When this happened, there was no way to proceed.
  • Having your picture taken automatically is awkward - Apple's Vision framework is powerful, so it can detect faces even in extreme conditions, like when your camera is barely pointed toward you. This resulted in photos being taken before the user had a chance to pose.

After testing many prototypes to get the timing just right, I decided to show a capture button after about 7 seconds so the user could manually take a photo. I also decided to always show the capture button when the front-facing camera is active.
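The fallback rule above boils down to one small decision function. The 7-second threshold is the one described in the text; the function signature itself is my own sketch, not the actual implementation.

```swift
// Sketch of the capture-button fallback rule:
// the front-facing (selfie) camera always shows the button, and
// the rear camera shows it once automatic detection has stalled
// for about 7 seconds.
func shouldShowCaptureButton(elapsedSeconds: Double,
                             isFrontCamera: Bool) -> Bool {
    isFrontCamera || elapsedSeconds >= 7
}
```

Keeping the rule in a pure function like this would make the timing trivial to tweak and test, which matters when the threshold is found through prototyping rather than picked up front.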


A Figma prototype showing the entire process:


  • Demo worthy - Our friends over at Caisson let us know that they started showing our app to demo their service.
  • Nearly eliminated manual requests - The new automated feature saved countless staff hours.
  • Broke free from Facebook - While existing users continued to use Facebook, the majority of new users chose to verify with an ID.