Friday, November 15, 2013

sixpence 3D scanning kit

While I was spending my summer in London typing away at my thesis, one of my extracurricular activities was to pop over to the Institute of Making next door to make some interesting stuff. At one of the workshops, we did some 3D scanning using both open source tools, such as ReconstructMe + Kinect, and a proprietary solution, the NextEngine. Scanning a human subject using the Kinect in the absence of a scanning rig is really tiresome: holding the laptop, the Kinect, and the power supply while circling the subject in incremental steps is tedious. Nonetheless, here I present to you: yours truly in MeshLab.


Over the weekend, I had the idea of doing dirt cheap 3D scanning with existing items. By existing items, I mean items on my desk such as an Android mobile phone, an Arduino, and a servo. While researching cloud computing and its applications, I discovered a really cool website, http://apps.123dapp.com/catch/, that leverages cloud computing to generate a 3D model from multiple pictures of an object. Taking (at most 70) pictures of an object around 360 degrees manually, without a rig, is really tiring. So my weekend project, a 3D scanning kit that automatically takes pictures around 360 degrees of a subject without human intervention, can be decomposed into 4 sub-parts. Part 1: I need a turntable of some sort to rotate my subject through 360 degrees. Part 2: there must be some sort of communication channel between my turntable and the picture-taking apparatus. Part 3: the picture-taking apparatus must be capable of receiving commands. Part 4: upload the pictures to 123D Catch to generate the 3D model.

Part 1: turntable
turntable with subject container

taking pictures manually
in case you wonder what the pen is doing there


Parts needed: an Arduino, a full rotation servo, and code.
The full rotation servo (FRS) I had on hand was picked up from a rubbish dump. Upon testing, it was still functioning; how lucky. Here comes the interesting problem: using the standard Sweep example code from Arduino, the FRS behaves erratically. It does not stop exactly at 15 degrees and just continues to spin. The reason is that the servo has been modified; the "horn" on a gear inside the servo is broken off, so it no longer stops at a commanded angle and only responds to speed and direction. Tough luck using the standard code. So, I have to come up with a scheme to stop the FRS at every 15 degrees via code.
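The workaround is to treat the FRS as a plain motor: command a slow rotation for a fixed, hand-calibrated duration, then command the neutral "stop" pulse. A minimal sketch of that idea looks roughly like this; the pin number, pulse widths, and step duration are placeholders that have to be calibrated by trial and error for the actual servo and turntable load, not the exact values I ended up with.

#include <Servo.h>

Servo turntable;

// Placeholder values: calibrate these for your own servo and load.
const int STOP_US   = 1500;   // pulse width at which this FRS stays still
const int ROTATE_US = 1600;   // slightly off neutral gives a slow rotation
const int STEP_MS   = 120;    // run time that approximates a 15 degree turn

void setup() {
  turntable.attach(9);                   // servo signal wire on pin 9 (assumption)
  turntable.writeMicroseconds(STOP_US);  // make sure it starts stationary
}

// Advance the turntable by roughly 15 degrees, then stop.
void stepTurntable() {
  turntable.writeMicroseconds(ROTATE_US);
  delay(STEP_MS);
  turntable.writeMicroseconds(STOP_US);
}

void loop() {
  stepTurntable();
  delay(2000);  // pause between steps so each one is easy to eyeball while calibrating
}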

As for the container for the subject, I used newspaper to create the background, so that when the 3D-model-generating algorithm runs, the patterns on the newspaper can serve as reference points. That is as recommended by the 123D Catch guide.

Part 2: communication
Parts needed: an Android device (API level 17 onwards), an OTG cable
Quite reluctant I am to purchase a Bluetooth shield for the Arduino just for communication. Furthermore, I am using an Android phone running Android 4.3 (API level 18). This version supports a direct USB connection from, say, a keyboard or mouse to the phone over micro-USB via an OTG cable (USB Type-A female to micro-USB male). It is much more cost effective for me to use OTG than a Bluetooth shield.

A quick look around the open source community and I stumbled upon this GitHub repository, https://github.com/dtbaker/android-arduino-usb-serial, which I believe was forked from https://code.google.com/p/usb-serial-for-android/. Many thanks to the open source contributors for allowing me to quickly try out USB serial code between Android <--> Arduino. One point to note: the baud rate on the Android side is 115200, so the Arduino must set up its serial port at the same baud rate.

Combining parts 1 and 2, I devised a scheme for my 3D scanning kit: the Arduino turns the turntable every 15 degrees, then sends an ASCII character to the Android device to signal that a picture should be taken.
The code for the Arduino is here.
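In outline, the scheme boils down to something like the sketch below. This is a rough reconstruction rather than the linked code verbatim; it reuses stepTurntable() from the Part 1 sketch and assumes the letter 'c' as the trigger character, which the real sketch and app may not use.

const int STEPS_PER_REV = 360 / 15;   // 24 stops for a full revolution

void setup() {
  Serial.begin(115200);               // must match the 115200 used on the Android side
  // ... servo setup as in the Part 1 sketch ...
}

void loop() {
  for (int i = 0; i < STEPS_PER_REV; i++) {
    stepTurntable();                  // advance roughly 15 degrees and stop
    delay(500);                       // let the subject settle
    Serial.print('c');                // tell the phone to take a picture
    delay(3000);                      // give the phone time to capture and save
  }
  while (true) { }                    // one full revolution done, park here
}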
Another point to note: print out the serial data received on the Android side to verify the assumption that it is the same as what is received on a PC terminal (e.g. HyperTerminal). I learnt it the hard way.
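For the record, the debug loop on the Android side amounts to something like the sketch below, using the read() call from the usb-serial-for-android driver. The prober/open boilerplate follows the library's own example and is omitted here, and 'c' is again just the assumed trigger character.

import android.util.Log;
import com.hoho.android.usbserial.driver.UsbSerialDriver;
import java.io.IOException;

// Sketch: poll the serial driver, log every byte received, and fire the
// camera when the agreed trigger character arrives.
class SerialTrigger {
    void pollSerial(UsbSerialDriver driver) throws IOException {
        byte[] buffer = new byte[64];
        while (true) {
            int n = driver.read(buffer, 200);   // blocks for up to 200 ms
            for (int i = 0; i < n; i++) {
                // Dump exactly what arrived, so it can be compared against
                // what a PC terminal shows for the same Arduino sketch.
                Log.d("SerialTrigger", "byte " + buffer[i] + " = '" + (char) buffer[i] + "'");
                if (buffer[i] == 'c') {
                    takePictureNow();           // hypothetical hook into the camera code of part 3
                }
            }
        }
    }

    void takePictureNow() { /* hand over to the camera code in part 3 */ }
}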

Part 3: taking multiple pictures on the Android device without human intervention.

There are excellent tutorials such as this and this for writing code that uses the Android device's camera to take ONE picture. Having written my last Android app from scratch on my HTC Magic running Android 1.6 (API level 4), I assumed that I would not have any issues using the API on Android 4.3 (API level 18). Besides that, having used MIT App Inventor for mockups and proofs of concept, without writing code from the ground up and just using the standard features and standard methodology, left me rusty when it comes to developing Android apps.

The SOP for taking a picture on an Android device via the Camera API is quite straightforward: create an activity, add a button that listens for the event to take a picture, add a view to the frame layout for the camera preview, save the picture to the device's storage, and after the picture is taken, refresh the preview. I assumed I would spend at most 4 hours after office hours to write code that automatically takes multiple pictures without user intervention (nobody clicking the button to take a picture). Little did I foresee that I would stare at the code for a few nights, wrestling with the Android framework to find out where the crashes were coming from, due to the way PictureCallback, onPictureTaken(), and the preview refresh are supposed to be used. The experiments and the amount of code I wrote to challenge my assumptions about race conditions, critical sections, and multithreading, which I thought might be the root cause of the crashes, warrant a lengthy post of their own.
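To make that concrete, here is a stripped-down sketch of the capture path; it is not the actual app (that is linked below), but it shows the shape of the flow. The gotcha that cost me the nights is in the comments: takePicture() stops the preview, so onPictureTaken() has to save the JPEG and restart the preview before the next trigger arrives, otherwise the following takePicture() call crashes. The output folder is a made-up example.

import android.hardware.Camera;
import android.util.Log;
import android.view.SurfaceHolder;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// Sketch of the automatic capture path using the pre-Lollipop Camera API.
public class AutoCapture {
    private final Camera camera;
    private int shotCount = 0;

    public AutoCapture(SurfaceHolder previewHolder) throws IOException {
        camera = Camera.open();                   // back-facing camera
        camera.setPreviewDisplay(previewHolder);  // the preview surface must already exist
        camera.startPreview();                    // preview must be running before takePicture()
    }

    // Called whenever the trigger character arrives from the Arduino.
    public void capture() {
        camera.takePicture(null, null, jpegCallback);
        // takePicture() stops the preview; it is restarted in onPictureTaken().
    }

    private final Camera.PictureCallback jpegCallback = new Camera.PictureCallback() {
        @Override
        public void onPictureTaken(byte[] data, Camera cam) {
            File dir = new File("/sdcard/sixpence");   // hypothetical folder; needs WRITE_EXTERNAL_STORAGE
            dir.mkdirs();
            File out = new File(dir, "shot_" + (shotCount++) + ".jpg");
            FileOutputStream fos = null;
            try {
                fos = new FileOutputStream(out);
                fos.write(data);
            } catch (IOException e) {
                Log.e("AutoCapture", "failed to save " + out, e);
            } finally {
                if (fos != null) try { fos.close(); } catch (IOException ignored) {}
            }
            // Crucial: restart the preview here, or the next takePicture() crashes.
            cam.startPreview();
        }
    };
}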

Nonetheless, after staring and experimenting for a few nights straight, I present to you the code of this Android app, hosted on GitHub.

Part 4: upload pictures to 123D Catch

Combining parts 1, 2, and 3, I set up a stand for the Android device to take the pictures.

Copy the 31 images from the Android phone to be uploaded to 123D Catch

Generate a 3D model from the pictures uploaded

Note: no model was generated (I got a blank screen after the supposed completion of 123D Catch [online]), and I waited close to 30 minutes to save the project, without success.
Edit: I tried taking a few shots of the same subject manually and uploading them to 123D Catch, just to check my assumption that the pictures taken by my app were unusable. Surprisingly, no model was generated from those either. Really weird.

Some fine tuning is required: I noticed the pictures taken by my Android device were out of focus. Maybe that is why the model was not generated. Pictures generated by the sixpence 3D scanning kit do work, though; see the edits below.

Edit: I placed my subject too close to the lens, so the shallow depth of field caused the blurry images, akin to a blurred subject against a sharp background. I am still trying to find the API that allows for macro-mode autofocus.
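The likely candidate is FOCUS_MODE_MACRO in Camera.Parameters. Below is a sketch of how it would slot into the Part 3 AutoCapture code (same imports and fields), assuming the phone's camera driver actually honours the mode; I have not verified it on my device yet.

// Sketch: inside the AutoCapture class from part 3, switch the camera to
// macro focus (when supported) and run an autofocus pass before each shot.
private void enableMacroFocus() {
    Camera.Parameters params = camera.getParameters();
    if (params.getSupportedFocusModes().contains(Camera.Parameters.FOCUS_MODE_MACRO)) {
        params.setFocusMode(Camera.Parameters.FOCUS_MODE_MACRO);
        camera.setParameters(params);
    }
}

// Replace the plain capture() with a focus-then-shoot version.
public void captureWithFocus() {
    camera.autoFocus(new Camera.AutoFocusCallback() {
        @Override
        public void onAutoFocus(boolean success, Camera cam) {
            cam.takePicture(null, null, jpegCallback);  // shoot once focus has settled
        }
    });
}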

Edit: for some weird reason, the 123D Catch online version does not work on my laptop. I left it running overnight, and when I checked my computer the next day there was still no model generated. However, the 123D Catch offline version does work, using the pictures generated by the sixpence 3D scanning kit.
uploading pictures
processing the capture into a 3D model
sitting there looking pretty


THIS IS SPARTAAAAAAAAAAAAA!!!!


A quick view in MeshLab. Further manipulation is needed before 3D printing. For starters, the newspaper background has got to go.



Update:
The 3D model opened in Meshmixer. The newspaper portion is selected and then deleted by pressing the "x" key.


There are a few gaps that need to be fixed in the edited 3D model before it can be printed.

The 3D model is edited into a watertight model, ready for the 3D printer.
The 3D printed model







