Simple Mirror – ICM – Final – Coding and Staging

Here I will document some technical details of my ICM final project. The project is quite simple:

A live webcam, a face tracking library, and P5js.

When it runs on a computer, it works like this:

But my goal is to create a mirror in the real world, so basically I need an external webcam, a projector, and a frame on the wall.

So I started with the external webcam. Using a USB webcam was unexpectedly stressful. I thought it would be easy: just select the external webcam in Chrome. However, it doesn't work that way! You cannot change your webcam in Chrome's settings! I googled again and again; it seems this door has been closed since Chrome version 52.0.

The "Choose camera" option is grayed out.

But there are always solutions: I can select the camera with JavaScript. The price is that I have to understand how WebRTC works and write some code to achieve my goal.

WebRTC Example

I've never read code as complicated as this, but I managed to harness it: scan all the available webcams and list them on the screen; then, when the user chooses one, call a function to use it as the input.

I modified it to suit my situation. The logic is:

  1. List the cameras.
  2. When the user selects one, call a function to change the source.
  3. Initialize the face tracking script with the selected source.
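The three steps above can be sketched roughly like this, assuming the standard WebRTC mediaDevices API in a browser. The helper names (videoInputs, listCameras, useCamera) are my own, not from the WebRTC sample:

```javascript
// Pure helper: keep only the video inputs from enumerateDevices() results.
function videoInputs(devices) {
  return devices.filter((d) => d.kind === "videoinput");
}

// 1. List the cameras.
async function listCameras() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  return videoInputs(devices);
}

// 2. When the user selects one, switch the stream to that exact device.
async function useCamera(deviceId, videoElement) {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { deviceId: { exact: deviceId } },
  });
  videoElement.srcObject = stream;
  return stream; // 3. hand this stream to the face tracking setup
}
```

Listing the devices first also matters because Chrome only reveals device labels after the page has camera permission, so a real page usually calls getUserMedia once before building the list.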

Source Code

<the outcome, img>

And finally, let’s draw the face on my canvas.

brfv4 provides a variable that holds the characteristic points of a face: 136 numbers representing 68 points.

Here I was using vertices because I was silly enough not to know that the library also provides 2D vector points directly. So I had to transform the 136 numbers into 68 arrays, each holding the two items of a point: x and y.
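The conversion itself is simple; a minimal sketch, with toPoints as my own helper name:

```javascript
// brfv4's face.vertices is a flat array [x0, y0, x1, y1, ...]
// (136 numbers = 68 points). Pair up consecutive values into [x, y] arrays.
function toPoints(vertices) {
  const points = [];
  for (let i = 0; i < vertices.length; i += 2) {
    points.push([vertices[i], vertices[i + 1]]);
  }
  return points;
}
```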

function DrawFace(face) {
  if (face.state === brfv4.BRFState.FACE_TRACKING_START ||
      face.state === brfv4.BRFState.FACE_TRACKING) {
    numberOfFaces += 1;
    // fDS: face dots shifted (vertices recentered on the face's midpoint)
    let fDS = [];
    // Face center: midway between jaw points 0 and 16 on x,
    // and between points 1 and 15 on y; everything is scaled by 2.
    let center = [
      (face.vertices[0 * 2] + (face.vertices[16 * 2] - face.vertices[0 * 2]) / 2) * 2,
      (face.vertices[1 * 2 + 1] + (face.vertices[15 * 2 + 1] - face.vertices[1 * 2 + 1]) / 2) * 2,
    ];
    // Pair the flat [x0, y0, x1, y1, ...] array into 68 [x, y] points,
    // shifted so the face center becomes the origin.
    for (var i = 0; i < face.vertices.length; i += 2) {
      fDS[i / 2] = [face.vertices[i] * 2 - center[0], face.vertices[i + 1] * 2 - center[1]];
    }
    // ... drawing code continues here ...
  }
}

Then I needed to select some points to draw my sketch. I tested a lot of schemes; in short, I chose the following:

let head = [
  0, 0,                          // center (the points are recentered on the face)
  fDS[16][0] - fDS[0][0],        // width: jaw point 16 minus jaw point 0
  (fDS[8][1] - fDS[29][1]) * 2   // height: chin (8) to nose bridge (29), doubled
];
let leftEye = [
  fDS[41][0] + (fDS[40][0] - fDS[41][0]) / 2, // x center, midway between 41 and 40
  fDS[40][0] - fDS[41][0],                    // width
  fDS[40][1] - fDS[38][1]                     // height: shrinks as the eye closes
];
let rightEye = [
  fDS[47][0] + (fDS[46][0] - fDS[47][0]) / 2, // x center, midway between 47 and 46
  fDS[46][0] - fDS[47][0],                    // width
  fDS[47][1] - fDS[43][1]                     // height
];

Use points 40, 41, and 38 to create an ellipse representing the left eye, and points 46, 47, and 43 to create an ellipse representing the right eye, so that when the user blinks, these eyes change size.

Use points 16, 8, and 29 to draw an ellipse for the head.

Now here is where the magic begins: use two bezier curves to represent the upper lip and lower lip, so the two curves can follow the user's facial expression.

// upper lip
bezier(
  fDS[48][0], fDS[48][1] * 0.95,
  fDS[67][0], fDS[67][1],
  fDS[65][0], fDS[65][1],
  fDS[54][0], fDS[54][1] * 0.95
);
// lower lip
bezier(
  fDS[48][0], fDS[48][1] * 0.95,
  fDS[61][0], fDS[61][1],
  fDS[63][0], fDS[63][1],
  fDS[54][0], fDS[54][1] * 0.95
);
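For reuse, the mouth control points can be collected into a small helper. This is my own packaging, not brfv4's API; fDS is assumed to hold the 68 recentered points:

```javascript
// Collect the eight bezier() arguments for each lip curve.
// Both curves share the mouth corners (48 and 54), nudged up slightly,
// and differ only in the inner-lip control points between them.
function lipCurves(fDS) {
  const upper = [
    fDS[48][0], fDS[48][1] * 0.95, // left mouth corner
    fDS[67][0], fDS[67][1],        // inner-lip control points
    fDS[65][0], fDS[65][1],
    fDS[54][0], fDS[54][1] * 0.95, // right mouth corner
  ];
  const lower = [
    fDS[48][0], fDS[48][1] * 0.95,
    fDS[61][0], fDS[61][1],        // inner-lip control points
    fDS[63][0], fDS[63][1],
    fDS[54][0], fDS[54][1] * 0.95,
  ];
  return { upper, lower };
}
```

In p5.js each eight-number array can then be spread directly into a call: bezier(...upper).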

Finally, this plan worked perfectly. When I gave it the ability to mimic facial expressions, it became extremely attractive, or rather, addictive. I can't stop playing with it, and neither can my testers.

The above is the fundamental programming work of this project.

And how to project it on the wall?

It should be simple: just connect the computer to a projector and shoot the canvas onto the wall. That would work if I had a short throw projector, which I could not get from the shop. Since I project from the side, the image is distorted. My solution is to use MadMapper to map the canvas onto the wall correctly.
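For intuition, what MadMapper does here is essentially a corner-pin warp: you drag the four corners of the output onto the wall, and every point in the canvas is remapped accordingly. This is my own illustration, not MadMapper's internals, and a real keystone fix uses a projective homography rather than the simpler bilinear blend sketched below:

```javascript
// Map a point (u, v) in the unit square onto an arbitrary quad,
// given as four corners [top-left, top-right, bottom-right, bottom-left].
// Bilinear interpolation: blend along the top and bottom edges, then
// blend between those two edge points vertically.
function cornerPin(u, v, quad) {
  const [tl, tr, br, bl] = quad;
  const topX = tl[0] + (tr[0] - tl[0]) * u;
  const topY = tl[1] + (tr[1] - tl[1]) * u;
  const botX = bl[0] + (br[0] - bl[0]) * u;
  const botY = bl[1] + (br[1] - bl[1]) * u;
  return [topX + (botX - topX) * v, topY + (botY - topY) * v];
}
```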

I will not cover the basic operation of MadMapper and Syphoner here, since that part is easy.

My trouble was that everything became extremely slow when I ran my sketch, MadMapper, and Syphoner together.

What could I do? I came up with an idea: make two computing devices work together, one running the sketch and the other controlling the mapper and projector.

And how could the first one send the sketch to the second one?

Hah, AirPlay.

Better still, I can run the sketch on my phone wirelessly and hide the phone entirely behind the screen. That means you cannot see any wires or devices on the wall, which is amazing!

I set up a small projector and a small screen for my in-class presentation. If I make it into the Winter Show, it will go bigger, which I believe will be more impressive, just like the first picture of this article.

That's the primary work here. My next steps will be optimizing my code to reduce the lag and building a better-looking setup. Stay tuned.
