Wednesday, December 4, 2013

[CC] resize hard disk by expanding the capacity of a CloudStack 4.2 virtual machine in XenServer 6.2

Often, for a given use case, the practitioner performs some capacity planning and provisions the computing resources at a slightly higher amount, to leave some buffer or leeway. However, we might still run into a situation where the computing resources are oversubscribed. In the setup of the management server for CloudStack 4.2, we might hit an event that requires the engineer to increase the hard disk capacity for both the primary and secondary storage. Assuming physical access to the physical blade servers, adding more hard disk capacity can be as simple as plugging in more hard disks. Provisioning gets complicated when storage is virtualized, and the virtual machine is created in a XenServer 6.2 hypervisor.

The following steps are necessary to resize (increase) the hard disk capacity of a virtual machine residing on XenServer 6.2 as the hypervisor.

First, power off the VM. I like to create a backup of my VM before I perform anything drastic; in any awkward situation that requires a restore, the backup comes to the rescue. To resize the VM's hard disk from XenCenter, select the corresponding VM, click Properties, and change the size of the hard disk accordingly. Here the size is increased to 28GB.

Then, restart the corresponding VM. From here, we will be using the "fdisk" commands on CentOS 6 to manipulate the VM's resources.

Take a look at the VM's hard disk with "fdisk -l" or "fdisk /dev/xvda".

Observe the total capacity of this newly resized hard disk, and also the location of the device created. Note that in CentOS 6, /dev/xvda2 is not automatically resized to the new capacity.
The trick here is to create a new primary partition, "/dev/xvda3", and allocate the remaining capacity, from cylinder 1046 onwards up to the new total of 30064771072 bytes (28GB). All this can be done with the fdisk suite of commands; use "m" for the fdisk help menu.

After creating the new partition, verify it. We are not done yet: the partition table still needs to be written out with the "w" command.

This is still not the end of resizing the hard disk for a VM on the XenServer 6.2 hypervisor.

A uniqueness of CentOS 6 compared to Ubuntu when it comes to disk management: CentOS 6 uses LVM, so the Linux volume group "VolGroup" needs to be extended with the new physical device "/dev/xvda3". Then the logical volume residing at "/dev/VolGroup/lv_root" needs to be extended with the newly acquired physical volume.

Note that the +20G parameter used with lvextend caused some errors, because of insufficient free disk extents.

That is still not the end of it. The filesystem is not yet aware of the new capacity, and needs to be resized with the "resize2fs" command.
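The LVM and filesystem steps can be sketched as one script. The device and volume names follow the post; these commands need root and the real disk, so the sketch only writes the script out and syntax-checks it rather than running it:

```shell
# Sketch only: pvcreate/vgextend/lvextend/resize2fs as described above.
cat > extend_lv_root.sh <<'EOF'
#!/bin/bash
set -e
pvcreate /dev/xvda3                         # new partition becomes an LVM physical volume
vgextend VolGroup /dev/xvda3                # grow the volume group with it
# Using all free extents sidesteps the "+20G" error noted above,
# which can occur when slightly fewer extents are actually free.
lvextend -l +100%FREE /dev/VolGroup/lv_root
resize2fs /dev/VolGroup/lv_root             # finally grow the filesystem itself
EOF

bash -n extend_lv_root.sh && echo "script parses"
```

On the real VM, a follow-up "df -h" should then show the enlarged filesystem.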

A final check ensures the newly added hard disk capacity is available.

Tuesday, December 3, 2013

[CC] your own private cloud with cloudstack4.2 xenserver6.2 on minimal hardware

There are scenarios that would benefit from the use of a private cloud.

In a hypothetical situation, I have a bunch of data that is deemed privileged, worthy only of a private cloud. The data needs to be sanitized, polished, processed, and finally visualized. The data is too hefty to be processed on my laughing stock of a machine (i7, 8GB RAM, 500GB HDD, on a 32-bit OS), and it requires some serious computing horsepower, leveraging a very popular open source, scalable, distributed computing framework such as Apache Hadoop. I could follow the "how" of my thesis project, which used Apache Storm running off many virtual machines (VMs) hosted on the public cloud Amazon Web Services (AWS) EC2; all in all risking data leakage (from human error).

I definitely need to push the boat out and set up a private cloud for such a scenario. However, the luxury of having a cloud computing centre, aka a data centre, at one's disposal (yours truly is indeed privy to this) is out of reach for many. Very much like the previous scenario, the need to experiment with the performance of software on many virtual machines requires cloud computing; privileged data just makes it a private cloud. The lack of access to a data centre, however, should not be the hampering factor against writing some interesting algorithms to be used on a distributed computing infrastructure.

In another hypothetical scenario, you may have only a single high performance blade, but the department staff require access to this computing power on an ad hoc basis. In the good old days, a staff member would block off the physical blade for a time frame, install whatever software was necessary to run the task, and wipe it after use. This methodology, however, comes with a few caveats. For starters, let's think through a few of them: 1. the full computing capability of the blade may not be harnessed by one staff member's software; 2. the blade is mutually exclusive, blocking other staff from accessing the computing power; 3. virtualizing the blade itself is not sufficient, since staff computing requirements are elastic, so there must be a way for staff to manage their own virtual machines. Stemming from these 3 points, a cloud computing infrastructure fits the bill. Whether to make it a public, private, or hybrid cloud is up to the department's plan for the lonely blade server.

The question comes begging: how to set up a private cloud?!
Use an open source stack for cloud computing, such as Apache CloudStack, OpenStack, or OpenNebula.

Two software components, at the minimum:
1. cloud infrastructure management; in this example, CloudStack 4.2
2. hypervisor, aka virtual machine monitor (VMM); in this example, XenServer 6.2

Two "hardware" components, at the minimum:
1. a "computer" to run CloudStack 4.2, for managing the VMs.
2. a "computer" to run XenServer 6.2, for provisioning the VMs.

I double-quoted "computer" because of the way this private cloud experiment is set up. I have adopted a virtualized hypervisor, the Xen-in-Xen feature courtesy of XenServer 6.2, and also a virtual machine to run CloudStack. So, I do not physically own any "hardware" for my private cloud setup. Cool, huh!

Each of the components, and its associated setup procedure, is worthy of a post by itself. There will be more posts in the future, detailing the setting up of CloudStack 4.2 and XenServer 6.2, administering the VMs, and also my attempt at virtualizing a hypervisor inside a virtualized hypervisor: the Xen-in-Xen-in-Xen approach. It sounds like a pretty "inception-ish" attempt!

Before we go into the gruesome details of setting up a private cloud, some dangling carrots (screenshots of my private cloud setup):

using CloudStack to launch a VM, followed by SSH-ing into the newly created VM

7 easy steps to launch a VM from cloudstack.

dashboard of cloudstack

Saturday, November 30, 2013

RGB fading for ATtiny85

code courtesy of the Internet/help pages

RGB colour cycle Arduino

cycle the colour wheel using an RGB LED with Arduino PWM pin Digital Write
cycle the colour wheel using an RGB LED with Arduino PWM pin Analog Write

Monday, November 18, 2013

Supposedly quick and easy install guide to 3D scanning with Kinect and Reconstructme

The library is hosting interesting workshops on 3D scanning, 3D printing, and ebook making next week. So I thought of bringing to the maker space event my sixpence 3D scanning kit, which uses an Arduino turntable, an Android phone for image acquisition, and 123D Catch to generate a 3D model, and also a "known" 3D scanning solution made up of an M$ Kinect and ReconstructMe. Back in London, I had used/tested/set up a rig for 3D scanning using the latter on my own laptop (Dell XPS). Installation (ReconstructMe console v0.6.0-405 + OpenNI + PrimeSense) was a breeze. So I assumed it would only take an hour or so to set up, but in reality............

Installation was supposed to be a breeze (as per the setup in London). Somehow, fate took the other turn, and I spent my weekend hiding in my office battling compatibility between Kinect drivers <--> graphics card drivers <--> OpenNI drivers <--> ReconstructMe versions.

The latest version of ReconstructMeQT is supposed to work out of the box in 2 steps: first, install the Kinect drivers on Windows; then install ReconstructMeQT. Because my installation did not work out of the box, I had an adventure over the weekend finding a possible solution.

Now, before you attempt to install ReconstructMe, please do this (step0):
2. Install the latest version of the NVidia graphics card driver (v3XX.YY), or the ATi Radeon equivalent.
3. If you have not installed any Kinect drivers on Windows, GOOD! Otherwise, uninstall the device and delete the drivers: Device Manager -> right click on the Kinect device that is installed under XBOX kinect or kinect for windows or NUI kinect or Libfree kinect -> uninstall (and check the box that says delete the driver files).

The very general steps after step0 are
1. Install Kinect Drivers 
2. Install OpenNI drivers 
3. Install ReconstructMe 

Step1: Install drivers for Kinect on Windows (32bit or 64bit).
Kinect drivers come in a few flavours; choose either one to work with ReconstructMe. The two major camps are PrimeSense (SensorKinect093-Bin-Win64-v5.1.2.1 or SensorKinect-unstable or Sensor-Win32-) and M$ (KinectSDK-v1.8-Setup or KinectRunTime v1.7). I have tried all of the varieties on 3 laptops, yielding different results with ReconstructMe.

Step2: Install OpenNI (OpenNI-Windows-x86- or OpenNI-Windows-x64-2.1.0 or OpenNI-Win32- or OpenNI-Win64-).
Note: Step2 not required if using ReconstructMeQT

Step3: Install ReconstructMeQT (ReconstructMe Setup-1.2.95). The supposed finishing step.
Somehow, on laptopA the GUI hangs at initialization; on laptopB, the GUI crashes.

Step3.1: Install the ReconstructMe console (ReconstructMe_Installer_NonCommercial_405).
Somehow, on laptopA the command window hangs at initialization; on laptopB, the graphics card chosen by ReconstructMe is the Intel HD4000 instead of the GT630M, and it crashes after capturing.

Sadly, nothing worked on my work laptopA. Then I took another laptopB to test, and then another laptopC. 
The laptops are defined as:
laptopA: intel i7, 8GB RAM, Windows 7 32bit (yes, I know this is dumb, due to some ****. please spare me the embarrassment), and NVS3100M. 

laptopB: intel i5, 8GB ram, windows7 64bit, GT630M (with intel HD4000). 

laptopC: intel C2D, 4GB ram, windows7 32bit, ATi Radeon HD3400 (not in compatibility matrix).  

Unsatisfied with the outcomes, I brute-forced the possible combinations of drivers (with some smart guessing on the combinations, of course) to get 3D scanning to work on my 3 laptops. Very tedious; I am really tired from the mundane installing and uninstalling regime.

My winning recipe, which works on laptopB:
1. Install KinectSDK-v1.8-Setup
2. Install OpenNI-Win32- [yes, a 32bit OpenNI driver on 64bit Windows 7. Weird, but it works. Installing the 64bit OpenNI drivers gave me loads of problems, such as OpenNI drivers not found]
3. Install ReconstructMe_Installer_NonCommercial_405 (v0.6.0-405)
4. Modify the scanner parameters with "ReconstructMe.exe --device 1 --scan --sensor mskinect,0 --config cfg/volume_1m_highres.txt" (without the quotes)

phewwwww.... now, let's do some 3D scanning!!!!!

demo at the SP library. Picture courtesy of Kylie the librarian.

I would love to find some time to get Skanect to work, as a counter example to ReconstructMe. Time is scarce, and I only have 24 hours per day.

Friday, November 15, 2013

sixpence 3D scanning kit

While I was spending my summer in London typing away on my thesis, one of my extracurricular activities was to pop over next door to the Institute of Making to make some interesting stuff. At one of the workshops, we did some 3D scanning using both an open source setup, ReconstructMe + Kinect, and also a proprietary solution, the NextEngine. Scanning a human subject using the Kinect in the absence of a scanning rig is really tiresome: holding the laptop, the Kinect, and the power supply while circling the subject in incremental steps is tedious. Nonetheless, here I present to you: yours truly in meshlab.

Over the weekend, I thought of an idea for making a dirt cheap 3D scanner with existing items. By existing items, I mean items on my desk, such as an Android mobile phone, an Arduino, and a servo. While researching cloud computing and its applications, I discovered a really cool website that leverages cloud computing to generate a 3D model based on multiple pictures of an object. Taking (at most 70) pictures of an object through 360 degrees manually, without a rig, is really tiring. So, my weekend project, a 3D scanning kit to automatically take pictures through 360 degrees of a subject without human intervention, can be decomposed into 4 sub parts. Part1: I need a turntable of some sort to rotate my subject through 360 degrees. Part2: there must be some sort of communication channel between my turntable and the picture taking apparatus. Part3: the picture taking apparatus must be capable of receiving commands. Part4: upload the pictures to 123D Catch to generate the 3D model.

part1: turntable
turntable with subject container

manually taking pictures
in case you wonder what the pen is doing there

Parts needed: an Arduino, a full rotation servo, code.
The full rotation servo (FRS) I have on hand was picked up from a rubbish dump. Upon testing, it was still functioning; how lucky. Here comes the interesting problem. With the example sweep code from Arduino, the FRS behaves erratically: it does not stop exactly at 15 degrees and continues to spin. The reason is that the servo is modified; the "horn" on a gear inside the servo is broken off. Tough luck using the standard code. So, I had to come up with a scheme to stop the FRS at every 15 degrees via code.

As for the container for the subject, I used newspaper to create the background, such that when the 3D model generating algorithm runs, the patterns on the newspaper can be used as reference points. That is according to the 123D Catch guide.

Part2: communication
Parts needed: an Android device (API level 17 onwards), an OTG cable.
Quite reluctant I am to purchase a Bluetooth shield for the Arduino just for communication. Furthermore, I am using an Android phone running Android 4.3 (API level 19). This particular version supports a direct USB connection from, say, a keyboard or mouse to the phone via a microUSB OTG cable (USB type A female to microUSB male). It is much more cost effective for me to use OTG than the Bluetooth shield.

A quick look at the open source community, and I stumbled upon a GitHub project (which I believe was forked from another). Many thanks to the open source contributors for allowing me to quickly try out code for USB serial between Android <--> Arduino. Just a point to note: the baud rate on the Android side is 115200, so the Arduino must set up its serial at the same baud rate. 

Combining part1 and part2, I devised a scheme for my 3D scanning kit: the Arduino turns the turntable every 15 degrees, then sends an ASCII character to the Android device to signal it to take a picture.
The code for the Arduino is here 
Another point to note: print out the serial data received on the Android device to prove the assumption that it is the same as what is received on a hyper terminal. I learnt that the hard way.

Part3: multiple picture taking on android device without human intervention.

There are excellent tutorials, such as this and this, for writing manual code to use the Android device's camera to take ONE picture. Having written my last Android app from scratch on my HTC Magic, Android 1.6 (API level 4), I assumed that I would not have any issues using the API for Android 4.3 (API level 19). Besides that, having used MIT App Inventor for mockups and POCs without writing code from the ground up, following the standard features and methodology, left me jaded when it comes to developing Android apps.

The SOP for taking a picture on an Android device via the camera API is quite straightforward. Create an activity. Add a button to listen for an event to take a picture. Add a view to the frame layout for the preview from the camera. Save the picture to the device's memory. After the picture is taken, refresh the preview. I assumed that I would only spend 4 hours max after office hours to write a piece of code that would automatically take multiple pictures without user intervention (nobody clicks a button to take the picture). Little did I foresee that I would stare at the code for a few nights, wrestling with the Android framework to find out where the crashes were, due to the way pictureCallBack(), onPictureTaken(), and the preview refresh are supposed to be used. The experiments and amount of code I tried while challenging my assumptions, such as race conditions, critical sections, and multithreading that I thought might be the root cause of the crashes, warrant a lengthy post by themselves.

Nonetheless, after staring and experimenting for a few nights straight, I present to you the code of this Android app, hosted on GitHub.

Part 4: upload pictures to 123D catch

Combining part1, 2 and 3, I set up a stand for the Android device to take pictures.

copy the 31 images from the Android phone to be uploaded to 123D Catch

Generate a 3D model from the pictures uploaded

Note: no model was generated (I got a blank screen after the supposed completion of 123D Catch [online]), and I waited close to 30 minutes to save the project, but without success.
Edit: I tried taking a few shots of the same subject manually and uploading them to 123D Catch, just to prove my assumption (that the pictures taken by my app are not usable) wrong. Surprisingly, no model was generated either. Really weird.

Some fine tuning is required: I noticed the pictures taken by my Android device were out of focus. Maybe that is the reason why the model was not generated. (Pictures generated by the sixpence 3D scanning kit do work!)

Edit: I placed my subject too close to the lens, hence the depth of field caused the blurry images, akin to a blurred subject with a clear background. I am still trying to find the API call that allows for macro mode auto focus.

Edit: For some weird reason, the 123D Catch online version does not work on my laptop. I left it running overnight, and the next day when I checked my computer, there was still no model generated. However, the 123D Catch offline version does work, using the pictures generated by the sixpence 3D scanning kit.
uploading pictures
processing the capture into 3D model
sitting there looking pretty


A quick view in meshlab. Further manipulations are needed before 3D printing. For starters, the newspaper background has got to go.

the 3D model opened in meshmixer. The newspaper portion is selected and then deleted by pressing the "x" key.

There are a few gaps that need to be fixed in the edited 3D model before it can be printed.

the 3D model is edited into a watertight model, ready for the 3D printer.
the 3D printed model

Monday, November 11, 2013

poor lad's quick and dirty hack on wireless headphones that uses bluetooth

For a long time, I have really wanted to be liberated from my desk. Liberated, free from the shackles that "wired" me down to the computer table.

The wired headphones are such an inconvenience on my desk. I have 1 phablet, 1 laptop, and 2 desktops that require some attention. Most of the time, I would plug my headphones into (and out of) the one that needed my attention. My smarty-pants friend suggested getting one pair of headphones for each of my toys. How cost effective.

Since all my toys came with Bluetooth, my plan was to stream audio via Bluetooth from whichever computer needed my attention, at a mouse click. A quick round of Sim Lim Square and, to my horror, wireless headphones come at a price range of S$79 to S$199. The lower end models use infra-red; the higher end models use the 2.4GHz band (not necessarily Bluetooth). I was tempted to get the cheapest wireless headphones, but a few questions kept me pondering. How much data can be carried over infra-red in this application? Can I achieve a minimum of 44.1kHz audio quality? Do I need to be in the LOS (line of sight) of the infra-red transmitter? What happens if I move away from the transmitting end (e.g. go behind the base); will I suffer from signal loss? Not much could be gleaned from the box, nor did the shopkeeper offer any help. The bugger was only interested in shoving the most expensive model down my throat. Hey, do I look like a rich kid to you?!
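As a back-of-envelope answer to the bandwidth question, the raw bit rate of uncompressed CD-quality stereo audio is easy to work out (Bluetooth A2DP actually compresses this with a codec, so the on-air rate is lower):

```shell
# 44.1 kHz sample rate x 16 bits per sample x 2 channels, uncompressed PCM.
bits_per_second=$((44100 * 16 * 2))
echo "${bits_per_second} bit/s"   # 1411200 bit/s, roughly 1.4 Mbit/s
```

Any infra-red link that cannot sustain roughly this rate (or a decent compressed fraction of it) cannot deliver 44.1kHz stereo.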

Being a cheapskate, I figured I could deconstruct the wireless headphones into smaller components: I would need a wireless receiver and a pair of headphones. The wireless receiver can be a Bluetooth receiver with audio output. This category of product is very popular and widely available on the Internet; a quick lookup on the popular website shows prices ranging from U$6.20 to U$12.98. Being the sucker that can't wait, I paid S$30 for the Wireless Bluetooth audio receiver Stereo HiFi A2DP Stereo Audio Dongle Adapter Connector 3.5mm Receiver at Sim Lim Square. Initially I saw one of the products pasted with an S$18 price tag; happily I went to the cashier. To my dismay, at the cashier the shopkeeper quoted me S$36 and insisted that there had been a mistake in pasting the price sticker. Tough luck.

I got my S$12 headphones from Cash Converters, the friendly second-hand shop with outlets all over Singapore. 

Setting up a Bluetooth device is a no brainer. First, turn on the Bluetooth device. On Windows 7, select the Bluetooth icon, then scan for and select the intended Bluetooth device. Follow the onscreen instructions to complete the installation.

Next, "combine" the Bluetooth receiver and the headphones. This is the tricky part, because it requires some hand skills and design skills to make it look BEAUTIFUL.
Nonetheless, the net effect of saving money (S$199 - S$30 - S$12 = S$157) is undeniable. Saved S$157 => like a BOSS!

Tuesday, November 5, 2013

safe arduino: top 10 ways

10 Ways NOT TO Kill An Arduino

method1: use a BJT switch (a transistor such as a 2N2222, or a ULN2003A driver, etc.) to switch "high" loads (e.g. loads exceeding the Arduino's rating). More here:
method2: use a pull-up resistor (or pull-down resistor, whichever is convenient) on input pins. More here: 
method3: always check the POLARITY of the supply.
method4: the Arduino is DC. If you want to control AC, or a higher DC load than the ones recommended for the Arduino, use an SSR (solid state relay) or a mechanical relay. More here:
method5: always check the input (and also output) ratings (e.g. voltage, current, polarity) of the electronic components to be used with the Arduino.
method6: check for common ground connectivity across all electronic components.
method7: always TURN OFF the supply to the Arduino when connecting electronic components to it.
method8: use a diode (e.g. 1N4148) to ensure electrical current only flows in one direction.
method9: use a logic level shifter when connecting devices with different logic levels, e.g. 3.3V, 5V and 15V.
method10: always check the circuitry, and check again, before turning on.

kill arduino: top 10 ways

10 Ways to KILL arduino

sjteo: I have loads of dead Arduinos in my office that accumulated while I was away. I think we should set up some best practices.

Quick Links:
  • Method #1: Shorting I/O Pins to Ground VERY COMMON PROBLEM!
  • Method #2: Shorting I/O Pins to Each Other VERY COMMON PROBLEM!
  • Method #3: Apply Overvoltage to I/O Pins 
  • Method #4: Apply External Vin Power Backwards
  • Method #5: Apply >5V to the 5V Connector Pin
  • Method #6: Apply >3.3V to the 3.3V Connector Pin
  • Method #7: Short Vin to GND
  • Method #8: Apply 5V External Power with Vin Load
  • Method #9: Apply >13V to Reset Pin
  • Method #10: Exceed Total Microcontroller Current VERY COMMON PROBLEM!

Sunday, November 3, 2013

Story of 2 Folders (Android Development)

In Eclipse: Window -> Android Virtual Device Manager -> Create new AVD.

Somehow the newly created AVD refused to start, with the error message: PANIC: cannot start android-virtual-device-name.

A quick hack to fix this problem of the android virtual device refusing to start with PANIC is included in the screenshot.

Tuesday, October 15, 2013

assemble an open source 3D printer: the RepRap Prusa Mendel

Collection of pictures taken while assembling an open source 3D printer, the RepRap Prusa Mendel.
the parts

the extruder
the frame partial

roller for the heated bed
extruder support bars
frame partial
frame without the electronics and wiring
wiring the extruder
final assembly
first print
second print