Category Archives: Research

Useful link for research skills

A few days ago, one of my former colleagues from Texas A&M shared an interesting video featuring Simon Peyton Jones from Microsoft Research. The video was about how to write a good paper, and it was very useful. So I googled him and found more materials on his web page.

The page has links about how to write a good research paper, give a good research talk, and write a good grant proposal.




Media coverage

My project, among others funded by an NSF grant, was featured in the Flint Journal, a local newspaper. The description of me and the project is not 100% accurate. One example: it was not 10 years ago that I studied at Texas A&M University.

Kettering University receives half a million dollars in National Science Foundation grants

This one is from the school news coverage. Again, the title is not accurate. It is NOT “MRI” research. “MRI” is part of the grant name and stands for Major Research Instrumentation.

NSF grant will support high resolution MRI research

ABC12 news

Kettering University receives grants from National Science Foundation

OpenCV for Android

Why another tutorial for OpenCV for Android? I followed a tutorial from the OpenCV official website, and it was disappointing: it turned out there were many problems during installation and while building the examples on my Mac OS X system.

My Android SDK version is 4.2.2 (API 17), with the Android Development Tools (ADT) as its IDE. I believe this tutorial also works with Android 4.3 (API 18) on Mac OS X.

Here is a list of steps to follow to get OpenCV for Android ready for you.

Download OpenCV for Android

  1. Download OpenCV for Android from the OpenCV website; you can find an Android version there. As I am writing this, the latest version is 2.4.6.
  2. Unzip the zip file and you will see a folder.
  3. Place the folder into a proper location. In my case, I created a folder in my Documents and named it work, then copied the OpenCV folder into it.
  4. This is it for now until we build the libraries and samples.

Download Android NDK

You may find an NDK Plugins item in the menu Help – Install New Software. I tried this, but it showed the following error message.

Cannot complete the request. See the error log for details.
“Android Native Development Tools” will be ignored because it is already installed.

So I went to the Android NDK webpage and found a package. It seems we need to install it manually.

  1. Download a package from the Android NDK webpage. You can find two packages for each platform; just select the first one, which is the current and recommended toolchain for the NDK.
  2. Unzip it.
  3. Find a folder named “android-ndk-<version>.”
  4. Place this folder into the “work” folder. See the ‘Download OpenCV for Android’ section for more detail about the “work” folder.

Build OpenCV Library and Samples

Now we are only partly ready to build the OpenCV samples, because we have to take a few more steps after importing the Android projects from the OpenCV folder into your ADT workspace.

There is an NDK item in the Preferences of ADT under the Android section. It looks like we have to put the NDK folder path into the blank, but this is useless, at least for the OpenCV samples, so you may ignore this NDK item in the Preferences.

  1. Select File – Import – Android – Existing Android Code into Workspace.
  2. You will see the ‘Import Project’ dialog.
  3. Choose the ‘Root Directory’ by clicking the ‘Browse…’ button. The directory is the top-level OpenCV folder that you placed in the section ‘Download OpenCV for Android.’
  4. Now you can see five samples, three tutorials, and one library. Select all the projects. Even if you want to use only one sample or tutorial, don’t forget to also select the project “sdk/java – OpenCV Library.” Then click the ‘Finish’ button.
  5. If you want to keep the original samples untouched in their folder, check the “Copy projects into workspace” checkbox; this will copy all source code into your workspace.
  6. Some projects will show compile errors. According to the OpenCV website tutorials, the errors appear only on non-Windows operating systems. I am using a Mac, so I see the errors. The remedy on the web didn’t work for me; here is what I did.
  7. First of all, NDKROOT is not defined in my system. Add NDKROOT with the value of the actual android-ndk folder path that you placed in the section ‘Download Android NDK.’ This should be done for every project that shows errors.
    1. Go to the Properties (⌘I) of each project. Note that this is NOT ‘Preferences.’ Select a project and right-click; the Properties item is at the bottom of the context menu.
    2. Find C/C++ Build – Environment.
    3. Add a variable. Name it NDKROOT and put a value with your android-ndk folder.
  8. Then go back to the C/C++ Build item in the Properties. You can find ‘Build command:’. Its text input box contains either ${NDKROOT}/ndk-build.cmd or “${NDKROOT}/ndk-build.cmd” (note the quotation marks!). Remove ‘.cmd’, and the quotation marks if present: you just want ${NDKROOT}/ndk-build in the text input box.

OpenCV Manager

If your Android phone does not have OpenCV Manager, you should download it from Google Play.

That’s it. Enjoy OpenCV for Android!!!


Not enough storage is available to process this command

I am going to utilize the available PCs in our department’s labs to process images. Generating intensity-attenuated images takes a long time (more than five minutes for a single forty-layer image), so it would be better to run the processing modules on many machines at the same time. Images on a NAS are accessible from all of them (see more details in my previous posting).

In my first few trials, the new system seemed to work. But after that, the network resource where all the executables are located was not accessible anymore, with a strange error message.

Not enough storage is available to process this command.

I came across a few interesting articles.

At first, it scared me a little bit because the articles say that the registry should be backed up before making changes. I just tried without backing it up, to test my luck. 😉

After adding the item to my registry, I restarted my computer and ran my image processing sub-modules. It seems to work for now. I will post again if I come across any other problem regarding this issue.

Using NAS to save time in generating intensity attenuated images

It takes a long time to compose large images into a single image. In my experiments, I am using forty images to generate one intensity-attenuated image. This is even more true when the images are huge, about 12,000 x 9,600: one image of this size has 115,200,000 pixels, and forty of them are processed per composition. It takes eight seconds to compose two images into one with a simulated transparency channel, so more than five minutes are needed to generate a single composed image from forty images. I have around 9,628 images, and from my rough calculation it will take my machine more than a month to process them all. This is not a big problem, since the process should be done only one time; but I still do not like it, because I have more tissue samples to process.
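The rough calculation can be written out explicitly; the numbers below are exactly the ones quoted above.

```python
# Back-of-the-envelope check of the figures quoted above.
pixels_per_image = 12000 * 9600        # 115,200,000 pixels

# Composing forty images pairwise means 39 two-image compositions
# at roughly eight seconds each.
seconds_per_composed = 39 * 8          # 312 s, a bit over five minutes

# Around 9,628 composed images in total, converted to days.
total_days = 9628 * seconds_per_composed / 3600 / 24

print(pixels_per_image)                # 115200000
print(round(total_days, 1))            # ~34.8 days, i.e. more than a month
```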

My idea is to use multiple computers in our department’s network at the same time, and a NAS (Network Attached Storage) can be a solution here. Using network storage seemed a good approach because I expected the bottleneck of this system to be processing images, not reading and writing the image data files. (In my experiments I realized that reading/writing image files from a NAS actually takes much more time than processing them, so this will not reduce the total time as much as I initially expected, but it is still worth using for this minimal parallelism.) So it is good to use as many computers as possible to reduce the whole processing time.

I purchased a NAS, a Synology DiskStation DS212j, along with two hard disk drives (2 TB and 3 TB). This will give me enough space, at least for a while.

I have one desktop PC and one Mac Pro behind a router in my office, and the new NAS is attached to that router. Four high-performance PCs in a lab of the ECE department are available five days a week, and eight Mac minis in my Mobile App Lab are also usable. I am going to utilize all those computers as much as possible.

Here is a rough system design for it. The tools in KESMSuite should be expanded to run on this configuration. Port 80 (the default port for HTTP) must be forwarded to the IP address of the NAS in the router settings. To my surprise, this port forwarding is all that is needed for the system to work.
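A minimal sketch of how the work could be split across the available machines, assuming each one simply takes a contiguous slice of the image indices. The machine names are hypothetical placeholders, not the actual KESMSuite configuration.

```python
# Sketch: static partitioning of image indices across worker machines.
# Machine names below are hypothetical placeholders.
machines = (["office-pc", "office-mac"]
            + [f"ece-lab-{i}" for i in range(4)]
            + [f"mac-mini-{i}" for i in range(8)])  # 14 workers in total

def partition(num_images, num_workers):
    """Assign image index ranges to workers as evenly as possible."""
    base, extra = divmod(num_images, num_workers)
    ranges, start = [], 0
    for w in range(num_workers):
        count = base + (1 if w < extra else 0)
        ranges.append((start, start + count))
        start += count
    return ranges

chunks = partition(9628, len(machines))
# Each worker then reads its slice of images from the NAS over HTTP
# (port 80, forwarded in the router) and writes the composed results back.
```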

Making intensity attenuated images

The KESM (Knife-Edge Scanning Microscope) can scan really thin (sub-micron) slices of animal tissue. Thin slices are critical for creating accurate volumetric 3D images, since the depth structure can be restored more accurately. However, a single image does not make much sense to us when the images are viewed one by one.


It shows a bunch of dots and short lines instead of a meaningful structure. The data would be easier to see if several images were overlapped, with depth attenuation applied to their intensity levels.

So I implemented a method to create intensity-attenuated images. Creating this kind of image is not new: we can use an image editing tool such as Photoshop to create multiple layers, in which each layer has an alpha channel to set its transparency level. My method does not use the alpha channel and does not need a specific image file format that supports one. The original JPEG image files can be used without converting them into a transparency-supporting file format. Note that JPEG does not support an alpha channel, so you cannot make it transparent with a standard tool.

Here is a sample image. One more good thing is that this image composition can be done automatically using my KESMSuite, which is actively being developed. The sample image was generated from 40 consecutive images. To make a realistic pseudo-3D image, the intensity attenuation factor is calculated from a quadratic function of the depth.
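The KESMSuite code itself is not shown here, but the idea can be sketched. The sketch below assumes 8-bit grayscale layers with dark structure on a light background, fades each layer toward the background by a quadratic function of its depth, and keeps the darkest value at each pixel; the exact blending used in KESMSuite may differ.

```python
def quadratic_weight(depth, num_layers):
    """Attenuation factor: 1.0 at the top layer, falling off quadratically."""
    return (1.0 - depth / num_layers) ** 2

def compose(layers, background=255):
    """Compose grayscale layers (flat lists of 0-255 values, dark structure
    on a light background) into one intensity-attenuated image.  Deeper
    layers are faded toward the background, then the darkest value wins."""
    num_layers = len(layers)
    out = [background] * len(layers[0])
    for d, layer in enumerate(layers):
        w = quadratic_weight(d, num_layers)
        for i, p in enumerate(layer):
            faded = background - (background - p) * w  # fade toward background
            out[i] = min(out[i], faded)
    return [int(round(v)) for v in out]

# Tiny example: three 4-pixel layers, each with one dark structure pixel.
layers = [[255, 0, 255, 255],
          [255, 255, 0, 255],
          [255, 255, 255, 0]]
print(compose(layers))  # deeper structures come out progressively fainter
```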


Make map tiles with GDAL2Tiles

GDAL and GDAL2Tiles

GDAL (Geospatial Data Abstraction Layer) includes GDAL2Tiles, which can generate map tiles for OpenLayers, Google Maps, Google Earth, and similar web maps. GDAL can be installed from OSGeo4W for Windows. Unfortunately, OSGeo4W only works for 32-bit Windows as of this writing.

OSGeo4W is a package from Open Source Geospatial Foundation for Win32 environments. According to the OSGeo website,

OSGeo4W is a binary distribution of a broad set of open source geospatial software for Win32 environments (Windows XP, Vista, etc). OSGeo4W includes GDAL/OGR, GRASS, MapServer, OpenEV, uDig, QGIS as well as many other packages (about 150 as of fall 2009).

OSGeo4W Setup for 32bit Windows

Caution: Do not follow the older GDAL2Tiles installation instructions found on the web. In particular, there is one specific instruction that you should not follow, because it only worked for GDAL 1.6 beta. Here is a new set of instructions for installing GDAL for use with GDAL2Tiles.

  1. Download the OSGeo4W installer from here.
  2. Run the installer.
  3. Select Advanced install.
  4. Select Libs, then select gdal and gdal-python in the Select packages step. Caution: do not select any other packages; some dependent packages will be selected automatically based on your two selections, gdal and gdal-python.
  5. Finish the installation.
  6. You will see an OSGeo4W icon on your desktop. It is a batch file that invokes the command line prompt.
  7. That’s it.

This only works on 32-bit Windows machines. For 64-bit Windows machines, we need to follow quite different instructions.

GDAL and GDAL2Tiles Setup for 64bit Windows

OSGeo4W cannot be used on 64-bit Windows machines, so we have to install GDAL and Python manually.

  1. Install Python from the x86-64 installer at python.org.
  2. Run python.exe. We have to find out the compiler version that built the Python. In my case, the Python version is 2.7.3, and it was compiled and built with MSC v.1500: Python 2.7.3 (default, Apr 10 2012, 23:24:47) [MSC v.1500 64 bit (AMD64)] on win32
  3. GDAL binary packages for 64-bit machines can be found on the GISInternals website. Select the corresponding version in the table. In my case, release-1500-x64-gdal-1-9-mapserver-6-0 is the right version, in the “MSVC2008 (Win64) -stable” row, because the Python was built with MSC v.1500.
  4. Download
    1. Generic installer for the GDAL core components – gdal-19-1500-x64-core.msi
    2. Installer for the GDAL python bindings (requires the GDAL core to be installed) – I chose this because 1.9.3 is the latest and my Python is 2.7.3.
  5. Install the GDAL core components. There is no option to choose the destination folder for the GDAL core; it will be installed into the “C:\Program Files\GDAL” folder.
  6. Install the GDAL python bindings.
  7. After installing the bindings, you may move the GDAL folder from C:\Program Files to wherever you want.
  8. Add two batch files, gdal.bat and gdal2tiles.bat, into the GDAL folder. You can find these two bat files below.


@echo off
rem ---
@echo Setting environment for using the GDAL Utilities.
set GDAL_PATH=<full path of your GDAL installation>
@echo GDAL path is %GDAL_PATH%.
set GDAL_DATA=%GDAL_PATH%\gdal-data
set GDAL_DRIVER_PATH=%GDAL_PATH%\gdalplugins
set PROJ_LIB=%GDAL_PATH%\projlib
rem ---
@echo Setting environment for using the Python.
set PYTHON_PATH=<full path of your Python installation>
@echo Python path is %PYTHON_PATH%.
@echo on
@if [%1]==[] (cmd.exe /k) else (cmd /c "%*")


python %*

Now, you are ready to use GDAL2Tiles.

  1. Just double click gdal.bat.
  2. Type gdal2tiles with proper options.

You may combine these two into a single command.

  1. Open a command prompt window.
  2. Type gdal gdal2tiles with proper options.
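What “proper options” look like in practice: the snippet below builds a minimal, hypothetical invocation (the input file name, zoom range, and output folder are placeholders, not from this post). In the environment set up by gdal.bat, the same line can be typed directly at the prompt.

```python
# Hypothetical example: tile input.tif for zoom levels 0-5 into tiles_output/.
# "input.tif", the zoom range, and "tiles_output" are placeholders.
zoom = "0-5"
cmd = ["gdal2tiles", "-z", zoom, "input.tif", "tiles_output"]
print(" ".join(cmd))  # gdal2tiles -z 0-5 input.tif tiles_output
```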

Good luck and have fun.

Intensity Normalization

Images from the KESM do not have consistent intensity levels. This prevents us from getting a clear 3D image by stacking the images in a row; the background should have a similar intensity level throughout the images.

One is the original 3D image. The other one is the processed 3D image.
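The normalization method is not spelled out above; one simple approach, sketched below under the assumption that the background is the most common intensity in each image, shifts every image so that its background level matches a reference image. This is an illustration of the idea, not the actual KESM processing code.

```python
def background_level(pixels):
    """Estimate the background as the most common intensity (the mode)."""
    counts = {}
    for p in pixels:
        counts[p] = counts.get(p, 0) + 1
    return max(counts, key=counts.get)

def normalize(pixels, reference_level):
    """Shift intensities so this image's background matches the reference,
    clamping the result to the valid 0-255 range."""
    offset = reference_level - background_level(pixels)
    return [min(255, max(0, p + offset)) for p in pixels]

# Two tiny "images" of the same tissue with different background levels.
img_a = [200, 200, 200, 50, 200]
img_b = [180, 180, 180, 30, 180]
ref = background_level(img_a)   # 200
print(normalize(img_b, ref))    # img_b's background shifted from 180 to 200
```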


Test Remote Sensor

Getting temperature readings from a remote sensor. The Amarino Toolkit is used so that my Android phone can communicate with an Arduino board.

Wireless Remote Sensor

Sensor data from the ultrasonic sensor can be sent to remote machines, and many choices can be made for the link. In this example, a Bluetooth module will be used to send the data. A Bluetooth module cannot be connected to the Arduino without extra parts; the IO Expansion Shield from DFRobot is one of the options.

The DF-BluetoothV3 Bluetooth module (SKU: TEL0026) is compatible with the IO Expansion Shield. This combination makes it easier to add a Bluetooth module: using it, we can simply add wireless capability to the Arduino. The next step is to attach the ultrasonic sensor to the IO expansion board.

On the expansion board, there are digital and analog pins that are connected to the corresponding pins on the Arduino board. Make sure that the TRIGGER and ECHO pins of the ultrasonic sensor are connected to the IO expansion board according to the pin assignments in the source code used in the previous posting. The picture below shows the assembled module.

Note that the Bluetooth module should be disconnected while a binary is being uploaded. In any case, the exact same code from the previous posting can be used, so you do not need to upload a new binary.

The next step is pairing the computer with the Bluetooth module. Once this is done, communicating with the Bluetooth module from the computer is just simple serial communication.

The detailed steps depend on the operating system; the following are for Mac OS X. Choose the Set Up Bluetooth Device menu item and select the Bluetooth_V3 item.

The default passcode of the Bluetooth module is ‘1234.’ When you are prompted, use this passcode.

When the pairing has completed successfully, the window below will be shown.

Practically, we are done. Open any terminal software for serial communication; I recommend CoolTerm, which can be downloaded from here.
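The lines shown in the terminal are plain comma-separated text. Assuming each line looks like “Dist,123.4,cm” (the exact field layout depends on the Arduino code from the previous posting, so treat it as an assumption), extracting the distance value is straightforward in any language. A small Python sketch:

```python
def parse_distance(line):
    """Parse one comma-separated sensor line such as "Dist,123.4,cm".
    The field layout is an assumption; the value is taken from the
    second field, as in the Processing sketch later in this post."""
    fields = line.strip().split(",")
    if len(fields) > 2:
        return float(fields[1].strip())
    return None  # malformed or incomplete line

print(parse_distance("Dist,123.4,cm"))  # 123.4
print(parse_distance("garbage"))        # None
```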

One extra, optional step is visualizing the sensor data. Processing (processing.org) is used to visualize the sensor data from the Bluetooth module. I made the visualization code as simple as possible.

import processing.serial.*;

// screen size
int maxWidth = 800;
int maxHeight = 100;
int lf = 10;  // ASCII linefeed

// The serial port
Serial myPort;
float gCurDist;
Graph gGraph;

void setup() {
  // List all the available serial ports
  println(Serial.list());

  // Open the port you are using at the rate you want:
  // You may change the index number accordingly.
  myPort = new Serial(this, Serial.list()[0], 9600);
  // myPort = new Serial(this, "/dev/tty.Bluetooth_V3-DevB", 9600);

  // Fire serialEvent() once per received line
  myPort.bufferUntil(lf);

  size(maxWidth, maxHeight);
  gGraph = new Graph(700, 80, 10, 10);
}

void serialEvent(Serial p) {
  String incomingString = p.readString();
  String[] incomingValues = split(incomingString, ',');
  if (incomingValues.length > 2) {
    float value = Float.parseFloat(incomingValues[1].trim());
    gCurDist = value;
  }
}

void draw() {
  background(255);
  gGraph.distance = gCurDist;
  gGraph.render();
}

class Graph {
  int sizeX, sizeY, posX, posY;
  int minDist = 0;
  int maxDist = 500;
  float distance;

  Graph(int _sizeX, int _sizeY, int _posX, int _posY) {
    sizeX = _sizeX;
    sizeY = _sizeY;
    posX = _posX;
    posY = _posY;
  }

  void render() {
    // Clamp the displayed distance to the [minDist, maxDist] range
    float dispDist = round(distance);
    if (distance > maxDist)
      dispDist = maxDist + 1;
    if ((int)distance < minDist)
      dispDist = minDist;

    // Bar length grows with the measured distance
    float distSize = 1 - ((dispDist - minDist) / (maxDist - minDist));
    fill(0);
    rect(posX, posY, sizeX - (sizeX * distSize), sizeY);
  }
}