Monday, December 14, 2015

Setting up ImageMagick for PHP on Azure

Recently I was trying to get a PHP + MySQL web application set up on Azure. I was excited to see how Azure would play with non-Microsoft platforms, and it was quite promising to see that Azure gave me a web application with PHP out of the box.

Now I wanted to stretch it further by setting up the ImageMagick extension. Installing ImageMagick has historically not been the cleanest process; with numerous libraries floating around, it has been a less than ideal experience. I did finally get it working though, thanks to a pointer from Brij Raj Singh on Stack Overflow and an article by Mangesh. I am outlining here the steps I followed, tweaking Mangesh's steps a bit to get ImageMagick working on the default PHP installation of Azure.

I am assuming in Azure you already have a web application with PHP enabled, and you are using the latest PHP 5.6 from Azure App Settings.

  1. Download the file (the php_imagick extension archive) and extract it into a temporary folder
  2. From the ImageMagick download page, scroll to the 'Windows Binary Release' section and download the latest file ending with Q16-x86-dll.exe (as of writing it was the 6.9.2-Q16 build)
  3. Install the downloaded exe file on your own Windows machine/laptop
  4. Now from within the FTP of your Azure site, create a folder named 'ext' in your root folder. So the path to this folder would be something like /site/ext
  5. Create a folder named 'ini' in your root folder. So the path to this folder would be something like /site/ini
  6. Create a folder named 'imagickwin' in your root folder. So the path to this folder would be something like /site/imagickwin
  7. In the ini folder created above, create a new file named extensions.ini
  8. The ini file created above should contain the following line: extension=d:\home\site\ext\php_imagick.dll
  9. In the Azure portal, go to Application Settings for your app resource, and change PHP to the latest version, 5.6
  10. In the Application Settings themselves, create a key named 'PHP_INI_SCAN_DIR', and set its value to d:\home\site\ini (this setting is described in the Azure documentation).

  11. Copy all the Core*.dll files from 'C:\Program Files (x86)\ImageMagick-6.9.2-Q16' to the /site/imagickwin folder.
  12. Copy all DLL files from 'C:\Program Files (x86)\ImageMagick-6.9.2-Q16\modules\coders' to the /site/imagickwin folder.
  13. Now go to the temporary folder where you extracted the archive from step 1. From it, copy all the DLL files to the /site/ext folder
  14. Next is the most important step - create a new file named applicationHost.xdt directly at the site folder level. This file should contain the following:

<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <runtime xdt:Transform="InsertIfMissing">
      <environmentVariables xdt:Transform="InsertIfMissing">
        <add name="PATH" value="%PATH%;d:\home\site\ext\" xdt:Locator="Match(name)" xdt:Transform="InsertIfMissing" />
        <add name="MAGICK_HOME" value="d:\home\site\ext\" xdt:Locator="Match(name)" xdt:Transform="InsertIfMissing" />
        <add name="MAGICK_CODER_MODULE_PATH" value="d:\home\site\imagickwin\" xdt:Locator="Match(name)" xdt:Transform="InsertIfMissing" />
      </environmentVariables>
    </runtime>
</configuration>

Now, restart your web application and load a phpinfo() page to verify that an imagick section now shows up.

And you are done!

Tuesday, September 22, 2015

Scanning barcodes, using Raspberry Pi

My objective for this proof of concept project was to use a regular webcam to scan for barcodes, parse them and transmit the same via a REST service, using Raspberry Pi as the hardware platform.

I believe a lot of people have already "been there, done that", but somehow I found no single source of information to help me through all the obstacles I came across while trying to scan barcode using Raspberry Pi. So here is my version of how I went about it.


  1. I used the Raspberry Pi B+ described here
  2. I used a Logitech webcam for better scan quality, and automatic focus
  3. Logitech wireless keyboard for ease of coding as listed here
  4. I powered the board via the USB port
  5. I did NOT use a commercial barcode scanner, though I had kept one as Plan B in case the code would not work via the webcam. Using one is actually trivial: once connected via USB, it acts as a regular keyboard, so any barcode it scans should appear to the Raspberry as keystrokes containing the barcode data. I cannot guarantee that, though, because I never needed to go down this path.
  6. Portable keyboard and optionally mouse for interacting with the Raspberry
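Since a keyboard-emulation scanner just "types" the barcode and presses Enter, the Plan B above could be sketched in a few lines of Python. This is untested (I never went down that path); the helper simply trims each incoming line:

```python
def read_scan(line):
    # A keyboard-mode scanner "types" the code and presses Enter, so each
    # scan arrives as one line; strip the surrounding whitespace/newline
    # (the scanner's simulated Enter keypress) to get the payload.
    return line.strip()

# e.g. reading scans as they arrive on standard input:
# import sys
# for raw in sys.stdin:
#     print('scanned: ' + read_scan(raw))
```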

Bake the cake time!

I spent almost till midnight with the Raspberry, trying to understand from scratch how it works and how it plays with Python as the base programming language. I was able to hook up my Logitech wireless keyboard + mouse to the board using the Logitech Unifying dongle. I did not expect the tiny Raspberry to work with Unifying, but it worked instantly after a reboot! Now I had keyboard and mouse access to my Raspberry.

Next, I connected ethernet and saw Raspberry connect to the Internet. I realized later that this was a critical step, as I would soon be downloading a lot of libraries/packages for this project.

I tried to write a hello world Python script. Shell scripts need to begin with a '#!' shebang line, but I realized that the keyboard was not letting me type '#'! Bummer. I had to dig into the Raspberry preferences and select keyboard style 101 for English (US). After a reboot, my keyboard gained the power to type '#' and I was happy. While in the settings, I also set the clock and changed the timezone to my preferred CST timezone.

Next I connected the USB external webcam. This was used so I could start off with better quality optics, and the capability of auto focus. To begin using the webcam, I had to make my Raspberry webcam aware. 

After a lot of digging I realized that I needed to install a number of libraries to make the Raspberry ready for action. I had to install 'pillow', which works as an alternative to the PIL image manipulation library. I also installed httplib2 for the other part of this project, which makes REST service requests.

I installed the libraries in the below order:

sudo apt-get install python-dev
sudo apt-get install python-pip
sudo pip install pillow
sudo apt-get install python-httplib2

Next,  I issued the below command to install USB webcam package. This allows for leveraging an external USB based webcam, with autofocus capabilities and user configurable resolution.

sudo apt-get install fswebcam

I could now take a screenshot using the command

fswebcam image.jpg

I explored whether I could change the image resolution at will. I found that indeed it could be done, using the command:

fswebcam -r 1280x720 --no-banner image2.jpg

At this stage, I was feeling confident that Raspberry had started to respond to my whims.

What if I could run a cron job to take screenshots at set intervals? I was able to do that by issuing:

crontab -e

This allowed me to set up cron to repeat, looping for continuous image shots.
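For reference, a crontab entry of the kind I used might look like the line below. The one-minute interval and the output path are illustrative, not my exact values; note that '%' has a special meaning inside crontab and must be escaped as '\%':

```
# capture a frame every minute with a timestamped filename
* * * * * fswebcam -r 1280x720 --no-banner /home/pi/webcam_cap/capture_$(date +\%Y\%m\%d_\%H\%M\%S).jpg
```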

All this was fun, but till now all that was achieved was to make the webcam work. Nothing done so far, had anything to do with the real parsing of barcodes.

Enter ZBar. This is an open source library which is excellent at what it advertises: it can scan and parse many different types of barcodes, and is very configurable. Unfortunately, the original ZBar library did not work for me, as it is no longer well maintained. I spent a lot of time trying to make it work but kept getting stuck at one place or the other; the original ZBar is quite old code and was not built for this version of the ARM processor. Then I discovered a fork of ZBar at GitHub, a Python wrapper for the original ZBar. This was available here, and had superb sample scripts too to get started.

I installed python Zbar dependencies by executing:

sudo apt-get install python-zbar
sudo apt-get install libzbar-dev

Next I put the script files from the above URL on Raspberry in a folder, and then from that folder executed:

python setup.py install --user

The above went successfully and Zbar was now completely ready to do its magic.

Now, together with ZBar, I had four capabilities ready to be used at will:
  • Take an image capture using fswebcam, and process later with zbar
  • Use video stream to read the first available barcode
  • Use video stream to read as many stable barcodes as possible
  • Use video stream to read continuous barcodes, till a mouse/keyboard activity happens
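The first option above — capture with fswebcam, decode later — could be sketched as below. I am assuming the python-zbar and Pillow packages installed earlier; the function name and the greyscale conversion are my own, not from the ZBar samples:

```python
def decode_file(path):
    # Imports kept local so this sketch loads even where zbar/Pillow are absent
    import zbar
    from PIL import Image

    scanner = zbar.ImageScanner()
    scanner.parse_config('enable')          # enable all supported symbologies

    grey = Image.open(path).convert('L')    # zbar wants 8-bit greyscale
    width, height = grey.size
    image = zbar.Image(width, height, 'Y800', grey.tobytes())

    scanner.scan(image)
    return [(str(symbol.type), symbol.data) for symbol in image]

# e.g. decode_file('/home/pi/webcam_cap/image.jpg')
```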

Here is an example of a simple script I wrote to take screenshots and store it with a unique name each time


#!/bin/bash
DATE=$(date +"%Y-%m-%d_%H%M%S")

fswebcam -r 1280x720 -i 0 --delay 1 --frames 10 --skip 5 --no-banner /home/pi/Documents/saurabh/webcam_cap/$DATE.jpg
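The same timestamped-name idea translates directly to Python. The capture call is commented out since it needs the webcam attached, and the output folder is illustrative:

```python
import datetime
import subprocess

def capture_filename(now=None):
    # Mirror the shell script's date format, e.g. 2015-09-22_103005.jpg
    now = now or datetime.datetime.now()
    return now.strftime('%Y-%m-%d_%H%M%S') + '.jpg'

# subprocess.call(['fswebcam', '-r', '1280x720', '--no-banner',
#                  '/home/pi/webcam_cap/' + capture_filename()])
```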

And here is the final script, provided mostly as a ZBar sample script, which initializes the webcam and then runs continually, waiting for a valid barcode to appear. As soon as a barcode is detected, it is scanned and parsed, and the script ends. Just what I needed for this part of the proof of concept project.

from sys import argv
import zbar

# create a Processor
proc = zbar.Processor()

# configure the Processor
proc.parse_config('enable')

# initialize the Processor
device = '/dev/video0'
if len(argv) > 1:
    device = argv[1]
proc.init(device)

# enable the preview window
proc.visible = True

# read at least one barcode (or until window closed)
proc.process_one()

# hide the preview window
proc.visible = False

# extract results
for symbol in proc.results:
    # do something useful with results
    print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data

Here is a screenshot of the code in action. Notice that the barcode got read even when scanned upside down.

Up, Up and Away!

Here is a bonus script. In case you need your little Raspberry to let the world know about the barcode it just read, here is how I called a REST service from another Python script:

import httplib2
import json
import time
import datetime

def sendHTTPRq(barcode):
    httplib2.debuglevel     = 0
    http                    = httplib2.Http()
    content_type_header     = "application/json"

    url = "http://www.YOURSERVER.COM/JSONSVC/" +str(barcode)

    data = {    'Id':              barcode,
                'CreatedDateTime': str(datetime.datetime.now()),
                'CreatedUser':     'saurabh',
                'UpdatedDateTime': str(datetime.datetime.now()) }

    headers = {'Content-Type': content_type_header}
    print ("Posting %s" % data)

    response, content = http.request(url, 'POST',
                                     body=json.dumps(data),
                                     headers=headers)
    print (response)
    print (content)

That's it. Do leave a comment if this helped you, or if you have suggestions for improvement or a better process flow.

Thursday, September 10, 2015

Using TortoiseGIT on Windows, with BitBucket

A very common requirement for a developer is to host code on BitBucket and then use it for version control on a Windows machine. The Git client chosen is usually TortoiseGIT.

It turns out that there are more than a few steps required to get this working correctly, and I could not find a single good search result to get it working. I had to jump across various articles before I could stitch my process together. Here I have consolidated the steps in the hope that they help others too.

This tutorial might also fix some errors you may be getting, such as the dreaded 'Permission denied (publickey)' error while committing:
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights

and the repository exists.

git did not exit cleanly (exit code 128)

I am writing this on Windows 8, but this would apply to Windows 7 too.

  1. Begin by installing PuTTY on your Windows machine. Remember to use the complete installer, provided under the heading "Windows installer for everything except PuTTYtel". You can get it from here
  2. This will usually install the tool at c:\Program Files (x86)\PuTTY if you have proceeded with default settings.
  3. Next, run PuTTYgen to generate the private and public SSH keys which we will need for this process. Select "SSH-2 RSA" and hit 'Generate'. Move the mouse in the blank area to generate the required keys.
  4. Once the tool has generated the keys, preferably provide a 'Key passphrase'. This is just a password to better protect your private key. Now use the buttons provided to save the Private and Public keys in some safe folder for future reference. Leave the PuTTYgen window open for now as we will need it a bit later again.
  5. Now run Pageant from Windows. Click on 'Add Keys', and provide it the private key file you had saved in the above step. Hit 'Close'
  6. Now, from the still-open PuTTYgen window, copy the text inside the box titled "Public key for pasting into OpenSSH authorized_keys file"

  7. Now, we need to tell BitBucket about these new keys too. So open your BitBucket account and click 'Manage Account' 
  8. Click on 'SSH Keys' on the left hand menu. Click on the 'Add Key'. Here, paste the key which you copied from the step 6. above. Click on 'Add Key' to save the key.
  9. Finally, note that when TortoiseGIT is installed, it uses ssh.exe by default. Change it to use plink.exe: in the folder on your computer which holds the repository, right-click and open the TortoiseGIT settings. Under 'Network', provide the path to your plink.exe, which was installed along with PuTTY.
  10. While you are in the settings, also check under 'Git' > 'Remote': click on 'origin' and verify that the URL provided is the one you got from BitBucket. Note that you need the SSH URL from BitBucket, not the HTTPS URL. I also provided my private key on this page, but I am not really sure that is mandatory.

*Phew!* That's quite a long list of steps to get this working, but once done, hopefully your TortoiseGIT will work super smoothly, not asking for authentication every now and then, and pushing your commits as intended.

Do leave a comment if this helped you too!