PHP Ecommerce in 2015

Something I realized recently is how old and cranky the PHP-based eCommerce tools have become. Tools such as OpenCart, ZenCart, WooCommerce, and Magento have been around for donkey's years; they've been hacked, patched and modularized. If you want to improve SEO on any of those platforms, chances are there's a huge marketplace of paid-for modules that claim to enhance your conversion and SEO. The problem is, when you start looking at these plugins they tend to focus on one thing, like meta and title tags; other SEO plugins might focus on analytics. So what eventually happens is you end up installing several plugins to cover one area of the shop. This then comes with its own problems, not least compatibility but also security.

While those platforms and their marketplaces have served the PHP world and their merchants well over the years, we're now in 2015 and those players haven't really stepped up their game all that much.

OpenCart

I've worked with OpenCart on and off since 2011 and it feels like it's still in 2011. The code is lightweight and has an MVC pattern behind it, but it's very simple and feels old and dusty under the hood. It's a good solution for merchants who might be selling t-shirts or mugs and doing low-volume sales, but if you're looking at low-volume sales, why wouldn't you consider using a hosted platform like Shopify? One thing merchants increasingly demand is a mobile presence. While the default theme for OpenCart is responsive, there aren't any APIs or anything else a mobile app can integrate with, and building any kind of integration with the shop is going to be tough without changing the core, unless you talk directly to the database (but that's cheating!)

WooCommerce

A popular choice. WooCommerce provides a toolkit for selling things in your WordPress environment. I'll admit I haven't used WooCommerce, but anything that plugs into something else, such as WordPress, to achieve its goal isn't really a good solution in my book. Sure, there are some good use-cases for bolting a shop onto a CMS or a blog; it might be convenient and it'll get your foot in the door to selling stuff, but if you're a serious merchant I suspect it just won't cut it.

Magento

Ohh, Magento. I've worked on this beast for the past year. It's fantastic from a merchant's point of view, but dreadfully slow to work on from a developer's point of view. The code base, although MVC and well documented, is huge and very complicated. I often find working with Magento quite difficult because its structure is quite rigid; it has some flexibility out of the box with attributes, but wanting to do anything different from what has been provided is quite tricky. And don't get me started on the EAV database design. In terms of scalability it's also quite tricky; it's based on some of the Zend Framework (1) components, so it has several caching backends available, including Redis and Memcache. But developing with those turned off is painful, and you need them turned off in development!

So the future… Magento 2 is around the corner and arrives at the end of this year, 2015. It's promised to have a test framework with good coverage, and it's meant to be based more on the Symfony components. It all sounds good, but who knows if the community will adopt it? More importantly, there is no upgrade path, so all the modules you have won't work, and you'll have to migrate all the data across too. It's quite risky if you've spent a long time building a stable system with finely tuned SEO and conversion. With that in mind, it could be a good opportunity to migrate to another system anyway.

With the historic players in mind, what else is available in 2015? In this day and age PHP developers tend to gravitate towards frameworks rather than fully built pieces of software, probably because they offer the most flexibility long-term as well as more options for reusable components.

Sylius – http://sylius.org/

This is the most promising choice in 2015: an eCommerce framework built on Symfony 2 with good test coverage at the unit, functional and behavioral levels. The test suite uses PHPSpec and Behat, which is a tell-tale sign these guys aren't messing around; they're taking testing seriously. Having looked into Sylius a couple of times before, it's a little frustrating to see there's a lot left on their roadmap before there's a viable solution for a lot of people. It's worth keeping an eye on their status on their roadmap page. For now, don't expect a feature-full admin panel, or reviews and ratings.

One thing that is very encouraging is a recent tweet – https://twitter.com/petewardreiss/status/614336914896912385 – which shows a big UK fashion retailer hiring Sylius developers. It's a good sign that the commercial world is keen to adopt Sylius early. You just have to google "Sylius" to find plenty of articles hyping it up.

Jiro – http://jiro.andrewmclagan.com/

Jiro is a Laravel-based eCommerce framework. On the face of it, it looks like it's trying to achieve what Sylius is doing but using Laravel under the hood. It's probably too early to really talk about this project, as there isn't any documentation yet, and if you check out their GitHub repository you'll note it only started in July 2015. However, knowing how fast Laravel became popular, it wouldn't surprise me to see this project get off the ground soon.

Thelia – http://thelia.net/

Unlike the others suggested here, Thelia is a full eCommerce shop rather than a framework. It's built on the Symfony2 framework, and looks pretty feature-full. Already on version 2.1, this shop is really worth considering if you need something fully featured right now. Having glanced through their demo and their feature list, I think Thelia will be a real contender when Magento 2 is released, as it seems to contain everything you'd need to run your shop and more. It's very SEO-capable, with meta tags, 301 redirects and more. It also contains a full REST API, along with other features you'd expect from Magento such as coupons, customer groups, abandoned carts, analytics and reporting.

Sonata Project – https://sonata-project.org/bundles/ecommerce/develop/doc/index.html

If you're a confident PHP programmer, or experienced with Symfony as a framework, one route you can go down is to use a pre-made bundle. It means you can get an eCommerce experience out of your Symfony app with little effort. It does mean you'll have to bootstrap and pull together all the bundles it includes, so although the features are there, you'll have to be quite involved. The Sonata project, from what I gather, has been around a while and provides various bundles for the Symfony framework which are quite well used, so the eCommerce bundle is probably very well supported across the Symfony community. It seems like a safe option to me if you want to create your own shop in Symfony and get a head-start on building the shop's backend.

Conclusion

For me, right now, I'd jump on Sylius. It's going to be tricky being an early adopter, but it'll be worth it, and as a developer it means you can contribute back quite quickly. Although it's at the pre-alpha stage, the test suite around it is very encouraging, so even though it's not officially released you'd expect the framework to be fairly stable. With retailers hiring for Sylius, and some noise on Twitter and in Google search results, I suspect this is a real contender.

Varnish + Apache and HTTPS

If you've been keeping up with web programming and web technology, you'll have heard people talk about something called Nginx. Nginx is a reverse proxy that also doubles up as a web server. It's pretty fast – so I've been told; I actually haven't used it, but the benchmarks look exciting. Unfortunately I'm not ready to make that switch. Apache has served me (pun intended?) well over the years; it's the grandfather of web servers. It's not the fastest by any means, but it's the most mature and feature-packed out there. Where Apache is specifically let down is serving static content alongside processing PHP pages: the PHP Apache module bogs down the whole runtime of Apache*, and makes it a bit inefficient when processing static requests. When I say static requests, I'm talking about content that doesn't change – images, stylesheets, JavaScript and icons.

So what can we do to speed up the serving of static content, and let Apache deal with generating the pages dynamically with PHP?

Enter Varnish. Varnish is, according to their website:

Varnish is a web application accelerator.

Technically speaking it is a reverse proxy cache. I’ll explain…

When you think of proxies you might think back to your office or school internet system, which had a proxy filtering out the naughty content from the web. Well, proxies also serve another purpose – to cache content for the network. This means every time someone goes to Google on the network it doesn't have to make a request through to the Google servers; it can use its cached version instead, speeding the whole process up. By definition, a reverse proxy works the other way around: rather than caching content from the web, it caches the content from your website and sends it back to your visitor's browser, rather than letting the web server handle the request.

Varnish does exactly this. What makes Varnish particularly exciting is the way it caches the data: it does it in memory. For those that don't know, memory is much faster to read from than your local hard disk. It's why other tools like Memcached are so popular too. So when you think about it, Varnish is quite a simple concept. I've particularly noticed a decrease in page load times on legacy websites where there are a lot of images and CSS sprites. When I say noticeable, I mean a couple of seconds – which in page load times is quite significant.


Installing for standard HTTP

To install Varnish you can look at the documentation here, but essentially it's a simple apt-get on an Ubuntu server.
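On a stock Ubuntu box, that boils down to something like:

sudo apt-get update && sudo apt-get install varnish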

The confusing part is getting your head around the port forwarding. You can't have two things bound to the same port; something has to give, and since we need to hit Varnish rather than Apache from our web browser, Varnish wins the coin toss for port 80. This means we need to run Apache on a different port number. The default configuration for Varnish is to serve content on port 8080 and talk to the Apache web server running on port 80. We need to reverse this situation.

If you’re running Ubuntu you need to look at your:

/etc/apache2/ports.conf

Change this to use 8080 instead of port 80.
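After the change, the Listen directive should read as follows (if your ports.conf also contains a NameVirtualHost *:80 line, update that to 8080 too):

Listen 8080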

Then you'll want to update all your VirtualHost configurations for Apache to use port 8080. The line should look like this:

<VirtualHost *:8080>

Now you’ll need to change Varnish to run on port 80. Varnish’s configuration is stored in:

/etc/varnish/default.vcl

Inside here we'll need to configure the backends. In Varnish's terminology, the backend means your Apache web server, which is now running on port 8080. You'll need to have something like this:

backend default {
   .host = "127.0.0.1";
   .port = "8080";
}

This only tells Varnish where to find Apache; we now need to bind Varnish to port 80 so our web browser can talk to Varnish. This is stored in:

/etc/default/varnish

Inside here is a line that begins with:

"DAEMON_OPTS="-a :8080 \"

This line contains the parameters that Varnish is launched with at system boot and when you call the service program. Here we need to change the :8080 to just :80.
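Once edited, the start of that line should read:

DAEMON_OPTS="-a :80 \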

This should be enough configuration to get going with Varnish in a production environment. You’ll now want to restart both Apache and Varnish via

sudo service apache2 restart && sudo service varnish restart

Using Varnish with HTTPS

One drawback with Varnish is that it doesn't understand SSL-encrypted requests. This means HTTPS is alien to it: it doesn't know what is inside each packet because it's encrypted, and therefore it can't figure out whether the content being passed back and forth is suitable for it to cache.

So to circumvent this limitation we can introduce Pound. Pound is a reverse proxy similar to Varnish, except its focus is more on load balancing – directing traffic to the right places. Pound, like Varnish, is pretty simple to set up on Ubuntu:

sudo apt-get install pound

Like I said, Pound is intended to route requests to the various places they need to go. Think of it like Apache rewrite rules that redirect requests to servers rather than pages/scripts. We'll need Pound to send requests to Varnish when it receives them, and Varnish to forward each request on to Apache or use its own cache. Again, for this to work we need to bind Pound to port 80 for our web browser, and since we're dealing with HTTPS/SSL we'll need to bind to 443 also.

Once installed Pound’s configuration lives in:

/etc/pound/pound.cfg

We’ll need to look for/create a block of configuration that looks like:

ListenHTTP
	Address 127.0.0.1 ## this needs to be the external IP address of the server.
	Port	80        ## this needs to be 80 for web browsers to connect to.

	## xHTTP 0 accepts only GET, POST and HEAD (raise it if you need PUT and DELETE too):
	xHTTP		0

	Service
		BackEnd
			Address	127.0.0.1
			Port	9080       ## this needs to be our Varnish port number for HTTP connections
		End
	End
End

Here we'll need to change the first Address directive from 127.0.0.1 to the external IP address of the server (the one your domain resolves to); this way Pound can handle requests coming to your domain(s). The port number then needs to be 80 so it's listening for HTTP traffic from browsers. The backend configuration block points to our Varnish installation; we'll configure that in just a second, but we can assume that port number is going to be 9080. Now, because we're going to be dealing with SSL, we need to include an extra configuration block to handle incoming HTTPS requests to Pound. The block should look something like this:

ListenHTTPS
	HeadRemove "X-Forwarded-Proto"
	AddHeader "X-Forwarded-Proto: https"
	Address 127.0.0.1 ## this needs to be the external IP address of the server.
	Port 443          ## this needs to be 443 to listen for HTTPS connections from browsers
	xHTTP		0
	Cert "/etc/apache2/ssl/pound.pem" ## this needs to be your SSL certificate
	Service
		Backend
			Address 127.0.0.1
			Port 9443 ## this needs to be the Varnish HTTPS port number
		End
	End
End

As you can see above, we've got some extra configuration there. We've added two new pieces of configuration at the top, which remove any X-Forwarded-Proto header from the request and then replace it with one telling Varnish this is an HTTPS request. This is particularly important: like I said earlier, Varnish can't understand SSL-encrypted connections, so the connection between Pound and Varnish will be over unencrypted HTTP. We can get away with this because the connection between the user's browser and the server is encrypted; it's only where traffic goes over the loopback interface of the server that it'll be unencrypted.

The other thing we've added is our SSL certificate. This is a pem file, which took me ages to figure out how to create. I'm not an SSL expert and don't fully understand certification. Basically, the pem file is a concatenation of your certificate (crt file) and your key. Doing this step wrong will result in Pound failing to start with the error message "SSL_CTX_use_PrivateKey_file Error". You need to concatenate the files in the right order for it to work, and make sure your key doesn't have a password on it. This thread over at the Pound mailing list has some information on this, but for reference you should have it in the order detailed here: http://www.project-open.org/en/howto_pound_https_configuration
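As a rough sketch, assuming the crt-then-key order described above and the certificate.crt/certificate.key file names used in the Apache configuration later in this post, building the pem file looks something like this (check the linked howto if your CA also requires an intermediate chain in the file):

openssl rsa -in certificate.key -out certificate-nopass.key
cat certificate.crt certificate-nopass.key > /etc/apache2/ssl/pound.pem

The openssl rsa step strips the passphrase from the key; you'll be prompted for it once.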

Again, the backend block needs to refer to the Varnish port that is going to handle/forward our HTTPS connections. Here we've assumed it's 9443.

Next we’ll update our Varnish configuration so it’s listening on a different port from Pound. Open:

/etc/default/varnish

You'll notice we've basically put both port 80 and 443 into the 9000 range; this is just a convention so we know the 9000-range ports are what Varnish is listening on. So we'll need to change the :80 or :8080 to :9080. We'll also want Varnish to handle (spoof) HTTPS requests, so we'll need it to listen on another port for our SSL traffic – this port is 9443. Adding a second port number can be done using a comma separator. Make sure you don't forget the colon in front of the port number – I spent an hour pulling my hair out trying to diagnose this earlier. The configuration should look like this:

"DAEMON_OPTS="-a :9080,:9443 \"

Next we’ll need to make sure Apache isn’t bound to port 443, as we now need this port to go to Pound and then to Varnish. Again open:

/etc/apache2/ports.conf

You'll need to change 443 to something else; it's worth checking port 80 isn't being used as well. We're going to use the 8000s here, so HTTP traffic should be 8080 and HTTPS should be 8443. Again, we'll need to change over all our VirtualHost directives to listen on the new HTTPS and HTTP ports.

<VirtualHost *:8443>

Now we need to make sure Varnish's backends reflect our new Apache port numbers, as before we only configured what Varnish is listening on, not where it's forwarding its requests. So let's open (again):

/etc/varnish/default.vcl

We've already got our HTTP backend defined; now we need to define a new backend for HTTPS requests. Ours will look like this:

backend default_ssl {
    .host = "127.0.0.1";
    .port = "8443";
}

Although we've got two backends, we need to make sure Varnish knows which one to use for each connection it receives. We need to add this small block of logic, which picks a backend based on the port number the connection arrives on:

sub vcl_recv {
  # Pick the backend based on the port the connection arrived on.
  if (server.port == 9443) {
    set req.backend = default_ssl;
  } else {
    set req.backend = default;
  }
}

We'll also need to make sure our default backend is up to date too; it should be forwarding to 8080.

At this point we should be done with configuring Pound, Varnish and Apache. The only thing left to do is tweak our Apache VirtualHost configuration so it's speaking plain unencrypted HTTP to Varnish. Remember we sent through that X-Forwarded-Proto header? Well, this is where that comes in handy! In our VirtualHost configuration we can comment out our SSL directives, as we don't want to encrypt anything going through Varnish – Pound will do that for us!

Next, however, we need to spoof the fact this is an HTTPS connection. This is important because although your browser is visiting the site over HTTPS, Apache won't know it's HTTPS and will serve all images and CSS as HTTP. This causes all kinds of warnings and sirens in the browser, and the result is it breaks the page. We therefore need to add the following directive outside the VirtualHost:

SetEnvIf X-Forwarded-Proto "^https$" HTTPS=on

Our final VirtualHost Configuration should look like:

SetEnvIf X-Forwarded-Proto "^https$" HTTPS=on
<VirtualHost *:8443>
    ServerName www.somesite.com
    DocumentRoot /var/www/site

#   SSLEngine On
#   SSLCertificateFile /etc/apache2/ssl/certificate.crt
#   SSLCertificateKeyFile /etc/apache2/ssl/certificate.key
#   SSLCertificateChainFile /etc/apache2/ssl/certificate.pem
#   SSLVerifyClient None

    <Directory /var/www/site>
        AllowOverride None ## Disable .htaccess
    </Directory>

</VirtualHost>

One last thing to do before starting Pound: you'll need to enable the daemon in its service configuration by setting startup=1 in:

/etc/default/pound

Now everything should be configured; you just need to restart all your services.

sudo service apache2 restart
sudo service varnish restart
sudo service pound restart

Problems and Troubleshooting

  • If you have a problem starting Pound because of SSL_CTX_use_PrivateKey_file, you'll need to check the certificate/pem file you've put in your Pound configuration.
  • If, when you visit your website, you see “Service is unavailable, please try again later”, this means Pound couldn't talk to Varnish; make sure your Pound configuration matches the ports Varnish is listening on. See /etc/pound/pound.cfg and /etc/default/varnish.
  • If you get a 503 error when visiting your site, this is coming from Varnish, and usually means that Varnish is talking to Apache but Apache is sending back encrypted content when Varnish is expecting plain old HTTP. Make sure you remove your SSL directives from the VirtualHost and put the SetEnvIf directive on the X-Forwarded-Proto header.
  • To diagnose problems, it might be useful to visit your website directly via Apache’s new ports to ensure Apache is working properly. Visit http://www.somesite.com:8443/.
  • Varnish comes with its own logging system that stores its data in memory; you just need to run
    varnishlog

    on the command line

  • Pound also logs stuff, but it goes to syslog. So you'll want to
    tail /var/log/syslog
  • Of course you always have your Apache logs at
    tail /var/log/apache2/access.log
  • Another tip is to use curl to see what headers are being sent in the request.
     curl -v http://www.somesite.com/

I hope the above article helps someone trying to get a LAMP setup working with HTTPS over Varnish and Pound. It took me a few hours of playing around before I got everything set up perfectly. It's definitely worth the effort, as it means your website is much more scalable and configurable.

Thanks
Adam.

*feel free to correct me on that statement.

Announcing: gigHUB

Announcing my project: gigHUB.

gigHUB is my new side project which collates all the local gigs in my area and puts them in one place that's easy to browse on a mobile. The project started in mid-June 2013 with three friends, some of whom I work with at Goram + Vincent. So far the project has had positive feedback and gained significant traction through social media. We have big plans in terms of adding new features and building upon our success so far, so if you live in the Bristol area it's worth adding gigHUB to your bookmarks!

Thanks

Adam.

Don’t be so harsh on PHP…

A recent post to a mailing list I’m on sparked the following reply (paraphrased)…

I can't say I've had a play with many of the server-side languages and frameworks kicking around, and to be honest every day sees a new framework/library/language I should apparently be trying.

PHP has a lot going for it – not in terms of implementation as such (yeah, it might be ugly!) but in other areas.

For one, PHP has been around since the mid-90s, so it's now got quite a bit of maturity; what I mean by this is it has a level of trust around it. PHP on paper was meant for the web – its acronym is/was Personal Home Page! Python and Perl were scripting languages back in the day, and while Perl had its opportunity with the web (loaded as an Apache CGI module) it didn't take off. Now correct me if I'm wrong, but hasn't Python on the web only become more popular recently because of Django? Similarly with Ruby because of Rails? You don't see many people writing vanilla, pure Ruby/Python web applications; a lot of people opt to use a framework. In the case of Rails it includes the kitchen sink and everything you'd ever need to develop on the web. PHP isn't a framework out of the box, but offers enough to get you going on the web without the need for a framework. So, back to the idea of maturity: with PHP's many years behind it, a lot of the earlier security flaws you'd expect from a young language have been rooted out. Compare this with the Rails stack, where there were some quite high-profile security flaws announced and patched earlier this year – I know a lot of fuss was kicked up about it on my Twitter feed…

I can partly back up some of the above statements because I worked recently for a large finance organisation; when I mentioned things like NodeJS and Rails they chuckled, because those technologies are considered "hipster" technologies in the eyes of the bigger boys. For blue chips and FTSE 100s, the obvious choices in languages are the traditional ones like .NET, PHP and Java. Their reasons are as I said above, but for a lot of these organisations there are other concerns – for example, can their system admins scale it up and manage the application easily, and can they bolt on add-ons such as caches and accelerators?

This is where PHP seems to succeed: scalability. The proven killer stack known as LAMP just works and does what you need it to out of the box. But if you want to scale, you've got APC, Memcache, HipHop, even Nginx for better performance. From what I've read, Rails/Ruby as a web stack has only caught up in recent years in terms of matching PHP's performance, let alone begun to outperform it. That said, I don't wanna big up PHP too much; if performance is your thing you'll want to be looking at building elastic Java web apps running in a JVM cloud type thing.

How easy is it for someone to download something like Django/Rails/NodeJS and get going with it? Not very – getting my head around RVM meant I had to use my head a little as a geek, something a complete freshman won't have. What PHP offers in this respect is a low entry barrier: literally anyone can download a WAMP/MAMP/LAMP package and, within a few clicks of a button, be using something like phpMyAdmin, working through some simple examples in Notepad++ and have a website. It's that easy. The flip side to this, however, is that those people will ultimately build lots of awful websites before they get better, but this is part of the learning process, and it keeps experienced guys in business doing the rewrites when it goes wrong. Of course, things change when you use a framework, as developers suddenly have to conform to a convention; initially that's a good thing, as it means the developer will start to gain some discipline. But I think there's another blog post to be had here about using a framework or not using a framework.

Going back to my point about the big blue chips: from an employment point of view, chances are universities aren't spewing out Rails or Django or even Zend programmers, but they will at least cover some ASP or PHP (I know I covered PHP at both the universities I went to – in fact I covered it at college too), which means any budding graduates with a taste for web development are going to be using one of those two languages as a starting point. That means there should be a big pool of development resource around for PHP/ASP sites, and it leaves things like Django/NodeJS as more niche; it means the big blue chips can open their doors to graduates and take that academic knowledge to the commercial level.

So let's not be too harsh on PHP: it has done a lot for our industry, and it will continue to be used because of its easy accessibility in academia and a vested technical interest from big organisations.

But from a personal perspective: I've played with a few PHP frameworks, but never done any serious development with any of them other than the original Zend. I liked Zend – it was my first framework in a commercial setting – however I have some reservations about how bloated it seemed. For me, CodeIgniter is a much cleaner framework; it's code that looks like I wrote it! Not engineered by a commercial organisation like Zend. But I found I didn't need some of the things in CodeIgniter, and it didn't provide anything major I couldn't have built myself. I've also had a little play with Slim and have to say I'll probably be using that more in personal projects, as it's light. Frameworks aside, I'm quite keen on using libraries: I'm quite a fan of Doctrine as a DBAL, and I'm looking towards Twig for templating and even Recess for API things.

But in recent years, one thing I've discovered without using frameworks in my various commercial settings is that I don't need them. No one is forcing you to use a library or a framework; yeah, you get some of the hard bits done for free, but you'll be at the mercy of the framework's publicly announced security exploits, or hindered by its performance overhead, or baffled by its ORM. Whatever that original itch was, it's guaranteed that the library or framework will go some way but won't satisfy that itch completely. There's no shame in writing a few classes and instantiating them when you need them yourself – what's wrong with building your own toolkits and libraries?

Thanks
Adam.

Spotify on the Raspberry Pi

EDIT 30/12/2013:

Hi Guys/Gals, it's been a while since I originally posted this quick guide, and it looks like people have been running into problems getting this working. I suspect that there have been some changes to the Spotify API, which means Hexxeh's original Python scripts no longer work properly. If you're still looking to get Spotify running on the Raspberry Pi, can I suggest you take a look at the Pi Music Box (it's an SD Card image that gives you Spotify and Google Play access). If you're determined to get the original code working, it might be best to file an issue against Hexxeh's Spotify WebSocket API here: https://github.com/Hexxeh/spotify-websocket-api/issues.

So I heard that it’s possible to run a Spotify client on the Raspberry Pi. So I had a google, and had a good go at getting it running…

Despotify

If you google Spotify for Raspberry Pi, you'll probably come across Despotify. This was a project to create a command-line Linux Spotify client; it was popular and had matured over time. However, due to some changes in the Spotify API and protocol it no longer works, and it looks like no one has decided to fix it… which is a shame.

Respotify

Bring on Respotify! Respotify is essentially a remake of Despotify; it's Python-based and hooks into the Music Player Daemon. The Respotify package is made by Hexxeh, a guy who's made a few awesome tools over the years – most notably he's maintained some experimental releases of the development version of Chrome OS.

So to get going you’ll want to:

  1. Install Git so you can pull down the source from GitHub:

     sudo apt-get update && sudo apt-get install git git-core

  2. Clone the Git repo:

     git clone https://github.com/Hexxeh/spotify-websocket-api.git

  3. Install the Music Player Daemon (MPD):

     sudo apt-get update && sudo apt-get install mpd ncmpc mpc

  4. Install Python and all its dev packages:

     sudo apt-get update && sudo apt-get install python python-dev

  5. Install Python pip:

     sudo apt-get install python-pip
     cd spotify-websocket-api && sudo ./install-deps.sh
     sudo easy_install

  6. Install the dependencies:
    • I had problems with lxml, so I used apt-get:

      sudo apt-get install python-lxml

    • The rest of the dependencies can be installed using the script in the git repo:

      cd spotify-websocket-api && sudo ./install-deps.sh

  7. This can't hurt: install some of the libraries required by Despotify:

     sudo apt-get update && sudo apt-get install libao-dev libtool libssl-dev libmpg123-dev libvorbis-dev libncursesw5-dev

  8. And finally…:

     cd spotify-websocket-api/clients/respotify && python respotify.py <SPOTIFY_USERNAME> <SPOTIFY_PASSWORD>

Hopefully this’ll help someone install Respotify as there doesn’t seem to be much documentation out there for getting this running on a Raspberry Pi.

ED: Added missing step to get easy_install working. Please leave a comment if you uncover other issues with getting everything running.


Thanks
Adam.

Howto get going with your Pi… the OS [part 2]

It's been a while since my last Raspberry Pi-orientated post, but tonight I've decided to swap out Ted's SD Card and try to create a Despotify client.

Background

Well, the Raspberry Pi is different from other computers, besides the obvious lack of a case and the small form factor. Its processor isn't your typical Intel chip that you find in your shiny desktop or fancy-pants laptop; it's an ARM processor. What does that mean? Well, the processor in the Raspberry Pi is part of a family of processor chips that you tend to find in small devices, like your tablet or smartphone. These types of processors are quite specialist: they're designed so that they don't need your typical cooling, such as heat-sinks and fans, and they're designed to be low power with low heat dissipation. Because of that, your normal desktop operating system, such as Windows or Mac OS, won't run on this type of processor. However, you can run Linux on it. If you don't know what Linux is, perhaps you've heard of or used Ubuntu? Without going into the details of what Linux is (and its politics and ethos), all we need to know is it's free, easily available, and able to run on literally anything with a few megahertz.

Introduction

In the beginning, during early Raspberry Pi development, the Linux flavour known as Fedora seemed to be the popular choice; however, now it seems the flavour known as Debian is taking over. I'm going to give a quick breakdown of how to get going on this Debian flavour, which has been bundled, repackaged and re-branded for the Raspberry Pi and is now known by the name of Raspbian (clever name, eh?).

Tools

The main aim of this whole process is writing the operating system image to the SD Card. For this we need to use a tool called Win32DiskImager (assuming you're on Windows!). What Win32DiskImager does is write every byte of the image to the SD Card at a low level that the Raspberry Pi can boot from.

You'll need to go grab the imaging tool from SourceForge, and then extract it (the Desktop will be fine).

Next we'll want to go grab the actual operating system image itself! Luckily, the Raspberry Pi Foundation provide all of this for us – just head over to the downloads page. At the time of writing you'll want the 2013-02-09-wheezy-raspbian.zip image.

Imaging

Next we'll want to pop the SD Card into the SD Card reader slot on your PC/laptop, open up the Win32DiskImager folder and run Win32DiskImager.exe.

Once your image has downloaded, you'll want to extract that too. Inside the zip file should be a .img file; this is the image file we're going to write to your SD Card. You'll then need to open your .img file from within Win32DiskImager, and select the drive letter for your SD Card. Then hit go. It should take a few minutes – remember SD Cards tend to read/write at about 4-10MB/s and the .img file is close to a couple of gigabytes.

Once it's written to the SD Card, you can pop it into your Raspberry Pi. By default the new Raspberry Pi image boots up to a desktop, which means you'll want to plug in a mouse, keyboard and a network cable, as well as your HDMI or RCA leads to your TV.

First Boot

So you may have noticed that the Raspberry Pi boots upon being plugged into the mains. You'll also notice on your TV that there's lots of scrolling text – don't worry about this for now! You'll have plenty of time to find out what it all means ;-)

On first boot, Raspbian loads up the raspi-config screen. This is your initial setup screen; here we can make some adjustments to your installation.

  1. One of the first things you'll want to do is extend the partition size so it fills your entire SD Card. When we wrote the .img file to the SD Card we wrote the exact bytes of the image file, so at this point your SD Card has some unused space. To do this, select the expand_rootfs option.
  2. The next thing to do is ensure your regional settings are correct. You'll want to use change_locale to change your language settings so your Raspberry Pi knows you're not writing in Chinese! Next you'll want to use change_timezone, which sets the date and time.
  3. If your TV's picture is a bit fuzzy or hazy, you can adjust it using the overscan option.
  4. Depending on what you want to use your Pi for, I highly recommend considering overclocking, and reducing the graphics memory.
    • Overclocking the Pi is supported, but I wouldn't get too adventurous, as a heavy overclock can kill your Pi over time. Select the overclock option; I would recommend no more than the 800MHz option.
    • Reducing the graphics memory means you get more system memory, which, depending on what you're doing, might be a good or a bad thing. Playing games and videos will require more graphics memory, so you may want to set it higher, say 64MB; or, if you intend to run your Pi 'headless', you can reduce it right down to 16MB. You can find this option in memory_split.
  5. Finally, you may want to consider how you wish to access your Pi.
    • I opted for 'headless', meaning I only go in via SSH or a command prompt, so I turned off the desktop so it doesn't load on boot. This saves me some more memory and processing power. You can turn SSH on at boot via the ssh option.
    • If you want the desktop when you boot you can enable it, or disable it if you don't, via boot_behaviour.

Once you're happy with your settings and options, you can press down or tab to get to the Finish link. This will ask you to reboot your Pi for the settings to take effect; I highly recommend you reboot at this early stage. Once rebooted, you're ready to go with your Pi.

ORM vs DBAL [explained]

I work for a large organisation with a large legacy code base, so picking and choosing frameworks and technologies isn't done lightly. We often bat ideas around and do a bit of digging on Stack Overflow and Google before we even begin to consider downloading a copy to play with.

Recently on the agenda there has been a lot of talk of database frameworks. There are a lot of them around now, and they all do the same thing but in slightly different ways. One of the key things to understand, however, is what all the buzzwords mean. Google will throw up words like DBAL and ORM. But what do they mean?

Firstly, let's think about how we generally connect to a database in PHP, and why you might need a framework. In PHP there are three main methods of connecting to a standard MySQL database – mysql_query(), mysqli_query(), and PDO. What they all have in common is that you give them a full SQL statement to evaluate and pass to the database (mysqli and PDO can also pre-compile these with placeholders, which is known as a Prepared Statement).
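For instance, a PDO prepared statement version looks something like this (the connection credentials are placeholders):

<?php
    $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
    //the statement is compiled once, then executed with the bound values
    $stmt = $pdo->prepare("SELECT * FROM `users` WHERE `username` = :username");
    $stmt->execute(array(':username' => 'bond007'));
    $rows = $stmt->fetchAll();
?>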

DBAL – Abstraction Layers
So, for example, using PHP's now-ancient mysql_query(), a query may look like this – notice how we've defined our full statement for the query:

<?php 
    $sql = "SELECT * FROM `users`"; 
    $query = mysql_query($sql); 
?>

Now let's say we're writing lots of these types of queries, and we don't want to worry about writing all the SQL for each one. A framework can help by providing a nice abstraction layer; in some places on the internet this is known as a DBAL – a DataBase Abstraction Layer. What makes these frameworks useful is that they are able to build queries on the fly from a given list of parameters. Abstraction layers are quite simple to build. For example, if we were to roll our own vague abstraction layer, it might have a method to perform an insert for us, like so:

<?php
    /**
     *  Inserts into database...
     *  @params string $table - table name
     *  @params array  $values - key/value pair to insert
     *  @return resource
     */
    function insert($table, array $values)
    {
        global $conn; //import connection

        $sql = "INSERT INTO {$table} (";
        //extract column names from the keys
        $sql .= implode(", ", array_keys($values));
        $sql .= ") VALUES('";
        //extract values
        $sql .= implode("', '", $values);
        $sql .= "')"; //close the final quote and the VALUES list

        return mysql_query($sql, $conn);
    }

    $result = insert('users', array('username' => 'bond007', 'password' => 'martini'));
?>

You can see already that this is a huge timesaver while coding! But what if, hypothetically, one day we decided we'd like to port our application across to Postgres or Oracle? Well, because we've abstracted them, all those database queries scattered throughout your code now funnel through our abstraction layer, which means we only have to update the underlying database interaction. So the above might look like this for Oracle:

<?php
    /**
     *  Inserts into database...
     *  @params string $table - table name
     *  @params array  $values - key/value pair to insert
     *  @return resource
     */
    function insert($table, array $values)
    {
        global $conn; //import connection

        $sql = "INSERT INTO {$table} (";
        //extract column names from the keys
        $sql .= implode(", ", array_keys($values));
        $sql .= ") VALUES('";
        //extract values
        $sql .= implode("', '", $values);
        $sql .= "')"; //close the final quote and the VALUES list

        $stmt = oci_parse($conn, $sql);
        return oci_execute($stmt);
    }

    $result = insert('users', array('username' => 'bond007', 'password' => 'martini'));
?>

What you'll have noticed in the above is that the query is the same, but now we're running it through PHP's Oracle interface rather than MySQL's. What we've done is begin to make our application database agnostic. The example above is of course pretty straightforward, but it's possible to think outside the box and use a NoSQL solution – so perhaps we're not inserting into the “users” table but instead into the “users” collection in MongoDB, or a keyspace in Cassandra. As you can see, the possibilities here are quite big.

One of the most popular database frameworks, and one of the big names around, is Doctrine. If you take a look at the different projects they have listed, you'll notice they provide ORMs/DBALs for relational databases like MySQL/Oracle, and also for NoSQL databases like MongoDB and CouchDB.

ORM – Object Relational Mapping
So what's all this ORM business then? Well, ORM stands for Object Relational Mapping, which sounds kind of scary – and it is a bit at first glance. Object Relational Mapping is the idea of taking database abstraction to the next level and binding it to our application code. It's a little hard to explain, but essentially you define a PHP object in your application that corresponds to your database schema. Let me try and explain via a code example:

<?php
    class User extends MyFirstORM
    {
        /**
         * User ID
         * @var int - defined as INT(11) in schema 
         *            auto-incremental primary key
         */
        public $id;

        /**
         * Username
         * @var string - defined as VARCHAR(32) in schema
         */
        public $username;

        /**
         * Password
         * @var string - defined as VARCHAR(16) in schema
         */
        public $password;

        /**
         * Class Constructor
         */
        public function __construct()
        {
            //some application logic here perhaps
            return;
        }
    }

    //Create a new User Object
    $user = new User();
    //Assign properties of the User Object
    $user->username = 'bond007';
    $user->password = 'martini';
    //Call save method in parent class (MyFirstORM)
    $user->save();
?>

I'm hoping the above makes sense and you can see what we're trying to achieve here. You'll notice you don't have to pass an array or specify a table like with our DBAL; instead, the ORM framework is intelligent enough to figure it all out for you. This means you can create numerous models in your application and, using your ORM, persist them straight to the database without ever writing the SQL interaction yourself. While it might seem clever, it comes at a cost: it means your application needs to be structured in a certain way, and it also means you may have to keep your database schema and the models defined in your code base synchronised.
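To demystify the "magic" a little, here's a purely illustrative sketch of what the save() method on our imaginary MyFirstORM base class might do under the hood. Real ORMs such as Doctrine are far more sophisticated (identity maps, relations, UPDATE vs INSERT detection); the naive table-name pluralisation below is an assumption made just for this example:

<?php
    abstract class MyFirstORM
    {
        /**
         * Persist the object's public properties as a new row.
         * Purely illustrative - no escaping, UPDATEs or relations.
         */
        public function save()
        {
            global $conn; //import connection

            //derive the table name from the class name, e.g. User => users
            //(a naive pluralisation, assumed just for this sketch)
            $table = strtolower(get_class($this)) . 's';

            //collect the object's properties, skipping any left unset
            $values = array_filter(get_object_vars($this), function ($v) {
                return $v !== null;
            });

            //build the same INSERT our DBAL example produced
            $sql  = "INSERT INTO {$table} (";
            $sql .= implode(", ", array_keys($values));
            $sql .= ") VALUES('";
            $sql .= implode("', '", $values);
            $sql .= "')";

            return mysql_query($sql, $conn);
        }
    }
?>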

So hopefully you've now got a vague idea of the benefits of Object Relational Mapping and Database Abstraction Layers for your application. It's important to remember, however, that my examples above are intended to be simple and straightforward; in reality things are a little more complicated. Whether you use an ORM or a DBAL, they'll require bootstrapping, you'll be introduced to the method-chaining syntax (see the sketch below), and more importantly you'll be encouraged (if not forced) into using PHP 5.3+, which can be a plus or a minus given consideration of legacy code!
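For anyone who hasn't met the method-chaining syntax, it tends to look something like this. This is a made-up toy query builder for illustration, not any particular library's API:

<?php
    //a toy query builder, just to show the chaining style
    class ToyQuery
    {
        private $parts = array();

        public function select($cols) { $this->parts['select'] = $cols;  return $this; }
        public function from($table)  { $this->parts['from']   = $table; return $this; }
        public function where($cond)  { $this->parts['where']  = $cond;  return $this; }

        public function toSql()
        {
            return "SELECT {$this->parts['select']} FROM {$this->parts['from']}"
                 . " WHERE {$this->parts['where']}";
        }
    }

    $db = new ToyQuery();
    //each method returns $this, so the calls chain together
    echo $db->select('*')->from('users')->where("username = 'bond007'")->toSql();
?>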

Summary
To summarize some of my experiences…

I've found ORMs to be complicated and time-consuming to set up, and they often come with their own syntax for indicating foreign keys and many-to-many relationships to the ORM. On the other hand, they often provide a very powerful interface for querying data, and given that we're dealing with objects, we can think about solving problems using Object Orientated Programming principles. Remember, one of the greatest things about an object-orientated approach is reusable code: we can re-create objects over and over again in many places in our application with little overhead. It also means we can manipulate objects many times before we finally commit them to the database.

In comparison, I've found DBALs to be the quickest solution, and they often have less of a learning curve. A DBAL also means you get more direct control over your database, and are able to build queries how you want, using more of the features of that database system. For example, using Stored Procedures is something which is almost impossible with an ORM but is doable in most DBALs. We're also able to harness the power of transactions in our application, along with safely quoting our input and protecting against SQL injection.

Those are my thoughts and experiences using ORMs and DBALs. Let me know if you've got anything to contribute in the comments.

Thanks
Adam.


Howto get going with your Pi… the hardware [part 1]

So you were curious and thought you'd join the crowd; you thought to yourself, "I'll get me one of those Raspberry Pi things"! Then boom! Weeks later, after many emails warning you of delays and much checking of the Raspberry Pi website, it arrives.

So now you have it… that credit-card-sized bit of circuit board. But what do you do with it?

Powering Your Pi

Firstly, if you were half asleep when you ordered your Raspberry Pi, you may have forgotten to order any of the accessories, including a power adaptor – as I did! I've been doing my research, and the Pi draws about 500mA/700mA depending on the model you got (A or B). This rating is a minimum, so ideally we're looking at using quite a bit more, somewhere about the 1,200mA mark. This means your PC or USB hub isn't going to provide quite enough power (unless you've thoroughly researched your hardware and know otherwise), so the best way to get this thing powered is to use a mobile phone charger. I've actually got a BlackBerry Bold 9900, and its charger works fine on this occasion. I've also got a Google Nexus 7, whose charger I've looked into: it provides 2A (2 amps, that's 2,000mA), which is ample. I've also been a bit cheeky and managed to run my Pi from the USB media play port on my 32″ Samsung TV, however I wouldn't recommend this as a permanent power source. I've also been looking around, and if you're stuck for a power adaptor the Nokia AC-10X is perfect and it's cheap – Amazon sell them for around £2-£3.50, so you really can't go wrong!

Storage Devices

Again, I was asleep, but I knew I had a spare SD card lying around somewhere that I wasn't using. Generally, when it comes to SD cards on the Raspberry Pi there are only a few things to consider: a) how much space do you need, b) is speed important, and c) do you care about reliability. When it comes to the first point, you need to consider what you might use the Pi for. If you plan on setting it up as a media centre or using it as a form of backup you might wanna splash out on a 32GB card, but as a bare minimum you should be looking at 4GB. When we've installed Raspbian and got a few tools, you'll have a little left over from 4GB, which will be enough to begin programming on (as intended). The second consideration is speed. SD cards come with a rating known as a "class"; this class is usually a good indication of how many megabytes a second can be read from the card. As a minimum I'd recommend a class 6 rated card, which means you'll get a healthy 6MB/s read speed (on average). If you plan on programming or running some really heavy applications or web services, you may want to consider a class 10 card. The final consideration really is how much you value the data that's going to be on the SD card. If you're coding something mission-critical I'd recommend backing up your code to a desktop PC as a best practice, but ultimately it's a judgement call which brand of SD card you want. I've got a Veho card that's years old; I know people who swear by SanDisk and others by Transcend – it's your call.

Interacting

On your very first boot you'll want to configure a few things, and you'll want to see the pretty coloured boot screen, so obviously we need to plug it into a monitor or TV. The Raspberry Pi supports HDMI and RCA (as stated on the, err… box). I have a PlayStation 3, so I 'borrowed' the HDMI cable, which worked fine on my Samsung TV. I later found an old RCA (yellow/red/white) cable from an ancient DVD player, which also does the trick (for those that have forgotten the RCA/SCART age of TVs, you're only really concerned with the yellow one for video). I have to say, on my TV I didn't really notice much difference between the HDMI and the RCA cable, probably because the Pi outputs at the same resolution on both outputs; however, it may differ between TVs (RCA might be prone to scanlines or wrong refresh rates/flickering). The other bits we need to get going are a mouse and keyboard. If you're used to a desktop and don't have a desire to embrace the command line, then you'll want the mouse; if not, brilliant, you can use a keyboard only. A problem I've read about and encountered myself is repeating keypresses and what appears to be unresponsiveness. I have a wireless Microsoft keyboard and mouse that runs off a small USB adaptor; this adaptor does BOTH mouse and keyboard on a single USB port. What I found is that it draws quite a bit of power from the USB port on the Raspberry Pi, and this resulted (when I was using my Samsung TV to power it) in repeating and unresponsiveness – I'd press a button once on the keyboard and get a whole row of that letter on screen. To solve this I dug out an old wired USB keyboard and wired USB mouse, and it worked fine with those.

Connectivity

Finally, the last bit of the jigsaw: network connectivity. It's not a mandatory thing to have set up on your Pi, however it's damned useful! The model B Raspberry Pi comes equipped with a 100Mbit/s Ethernet port, which is perfect for plugging straight into your router or PC (via ICS – Internet Connection Sharing). If you network your Raspberry Pi it means you can share files between your desktop PC and the Pi, as well as do other cool things like set up a web server, or remotely access the Pi over the internet using SSH. With a network connection the possibilities really open up to all the cool stuff you can do. What's more, you want to be able to share what you've done with your friends, right?

If you're close to your router I'd suggest you take advantage of the router's speed via the Ethernet connection; if you're some distance from the router, then ICS might be the way to go. I'm not going to cover ICS, but it's pretty simple: you need to set up a static IP on your desktop PC for your Raspberry Pi to connect to, and then tick the little box which tells Windows to share your wireless connection.

Another option which might not be apparent is using a powerline Ethernet adaptor. This allows you to route an Ethernet connection from your router into a wall socket in your house, then pick it up again elsewhere and connect it to the Ethernet port on the Pi. This may be more convenient if, say, you plan on using your Pi to play video streams in your living room and your router is elsewhere in the house.

The other option is to buy a supported USB wireless dongle. You may have to hunt around on the internet and do your research beforehand, as Linux and wireless drivers can be a real pain to set up. One adaptor I've seen around that seems reasonable is the Edimax Wireless-N150 Nano adaptor; as of writing, Ebuyer are selling these at a good price of £9.99.

Summary

That's it for the first part of this guide; I just wanted to cover some of the basic hardware pitfalls and recommendations for the Raspberry Pi. In the next part I'll give a quick look at how to get going with Raspbian and get it up and running with a desktop installed.

In the meantime, here are some useful part numbers for some reasonably priced accessories:

  • The “Xenta USB to Micro USB Cable” to power your Raspberry Pi from. Ebuyer have these down as 98p (yes pence) a go. Quickfind code: 24226
  • The “Transcend 16GB Secure Digital High Capacity Card” for storage on your Raspberry Pi. It's a class 10 so it's pretty nippy, and it's priced at £8 on Ebuyer. Quickfind code: 350691
  • The “Edimax Wireless-N150 Nano USB Adapter” is well supported and recommended, again Ebuyer have them for £9.99. Quickfind code: 220220

Thanks for reading

Adam.


SEO Techniques & Tips

I'm not an SEO expert, and I'm sure a lot of the points listed below have been around on the 'tinternet for some time, but it's good to put them into context… so here it goes:

Semantics
Generally, when it comes to trying to influence Google's ranking, SEO experts focus on a few keywords and try to make them prominent in the website's design. This relies on you being familiar with HTML, as it involves getting the semantics right: making sure you've got headers on pages, and that text is wrapped inside a paragraph tag. The Google robots (computer scripts which scan your website) are fairly stupid; they don't know the context of text or headers or images, so we have to tell them. This is even more important with images, as robots can't see images. To get around this we have to ensure that we include alternative text captions for images which describe what the image is. You might think images are useless things for Google to index, but if someone happens to be using Google Image search they might find your pictures and thus your website! When building a website yourself these things are easy to include, but if you've used an off-the-shelf solution (such as WordPress) it'll be more difficult to configure, as the software will take all the information you give it and generate the pages for you.
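For example, a descriptive alternative text caption on an image looks like this (the file name and caption are made up for illustration):

<img src="cake-box.jpg" alt="A white 10 inch cake box with matching cake board" />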

URLs
Search engines tend to like descriptive file and web page URLs, so try to make things descriptive. This is a good example of a website URL: http://www.myfirstwebsite.co.uk/cake-boxes-and-cake-boards and this one isn't so good: http://www.myfirstwebsite.co.uk/cms.php?id_cms=4 Again, if you're building the website yourself you can dig straight in and use Apache's mod_rewrite module to do regular-expression pattern matching and redirect the nice request to a more workable URL. However, if you've not built your own and have used a piece of software, it'll depend on what control the software gives you, but you should always aim to create a customised URL for pages. It's also important to ensure you have a canonical web address; search engines treat all subdomains and variations as completely separate sites. This basically means your website should be http://www.myfirstwebsite.co.uk and not just http://myfirstwebsite.co.uk, and this also applies if you decide to buy more domains later on. To get around this, and make sure all visitors eventually use the same URL to access your website, you can set up a “301 redirect” – for example redirecting www.coolkidscookware.com to www.coolkidscookware.co.uk. Again this can be done using Apache's mod_rewrite rules, as shown below. Extra domains are useful because they make your website more visible: the search engines' robots will eventually visit your site via each domain you buy and index it again, which is good.
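As a rough sketch, the mod_rewrite rules for such a 301 redirect (here sending the bare domain to the www version; swap in your own domain) look something like this:

RewriteEngine On
RewriteCond %{HTTP_HOST} ^myfirstwebsite\.co\.uk$ [NC]
RewriteRule ^(.*)$ http://www.myfirstwebsite.co.uk/$1 [R=301,L]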

Robots
The other way we can influence the search engine robots is by including instructions on the website telling them whether to index the page or not. This is done by including a robots.txt file on the website, which contains pretty much yes-or-no style rules to indicate whether the robot should index a page. Sometimes you don't have to include a robots file; instead you can embed the rule straight into your HTML markup, like so:

<meta name="robots" content="index,follow" />


Search Engines
Generally there are three big search engines in use: Google, Yahoo and Bing. Pretty much all other search engines use one of these three, or a combination, for their results. Getting yourself onto Google generally happens automatically; you shouldn't need to submit your website, however you can at https://www.google.com/webmasters/tools/submit-url?hl=en_uk&pli=1 For Bing or Yahoo you will probably want to submit your website: for Bing use https://ssl.bing.com/webmaster/SubmitSitePage.aspx and for Yahoo use http://search.yahoo.com/info/submit.html

Backlinking
This term has been lurking around on the net for a while now, but it basically means trying to get other people to link to your site. Search engines determine which sites are popular by how many times people have linked to them on other websites. Essentially you want to get any friends and family to link to it from their websites, and you'll also want to get out and about on the internet and get your web address out there – perhaps posting in relevant forums or commenting on blog posts?

Social Media
Social media tools like Facebook and Twitter can be good, in particular Twitter. However, search engines don't have a Facebook login and so can't really make much use of it. Don't get me wrong, though: Facebook is good for getting people to engage and for creating a community aspect around the website, which is valuable in its own right, but if your goal is to boost page rankings I'd encourage Twitter too. The beauty of Twitter is that, unlike Facebook, search engines can see it, index it and make use of it. What's more, lots of other websites interface with and mine data from Twitter, so your name, comments and web address will be available all over the place. To get the best out of Twitter you have to take a different mindset from Facebook: it's okay to post to it, but you need to have followers. To get followers you need to post relevant content about your website and your target market and audience, and you'll also want to go on a hunt and follow as many people as possible who you think will be interested in your products – more often than not they will follow you back! There are quite a few possibilities, but to engage your target audience perhaps think about running competitions; getting people to send in pictures of your products via Twitter could be a good start.

Merchant Services
There are lots of price comparison websites and product search databases out there. To use them you're going to have to create a feed of all your products; this is usually in the form of an XML document. When someone puts in, say, a product number or a description, Google will return a list of people who sell that product. Google used to call this service Froogle, however it's been rebranded as Google Base now; check it out at http://base.google.com/base/ – Google will hold your hand through this process, so don't be scared!

Sitemap
Like I said, the search engine robots are stupid and they sometimes don't find every page on your website. The best way to tell the robot where all your pages are is to create a sitemap. Sitemaps are usually an XML document (same as above); the XML file sits on your website and the robot will find it. You can use something like http://www.xml-sitemaps.com/ to create the sitemap; you'll need to upload it to wherever your index file is, so you can get to it via http://www.myfirstwebsite.co.uk/sitemap.xml Google again has a nice bit of documentation on this: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=156184&topic=8476&ctx=topic – follow what they say here and you'll be fine.
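For reference, a bare-bones sitemap.xml listing a single page looks something like this:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.myfirstwebsite.co.uk/</loc>
  </url>
</urlset>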

Google Analytics
Using Google Analytics? If not, you should be; it's like the ultimate place to keep an eye on how your website is doing, and who and what is visiting it. It also ties in with the useful "Webmaster Tools" to help you get the most out of Google.

Google Adwords
I'm not sure how valuable it really is, as most people tend to ignore the sponsored Google results (I do, anyway), but you can sign up and pay Google to have you as a sponsored link. This doesn't guarantee visitors, but it does guarantee you'll be somewhere in the top search results on the right-hand side. The position, though, is completely dependent on what the user searched for: Google always tries to provide relevant results to users, not just the highest bidder. You can have a look and sign up at https://adwords.google.co.uk. It tends to work on a threshold basis: you tell Google which keywords you want your website to be associated with (Google charges per word), and then you set how much you're willing to spend at a daily rate for clicks. Google will then charge every time someone clicks on your link in the results, until enough people have clicked it that it's reached the limit you set – this way the cost doesn't go spiralling out of control if your website overnight becomes the most searched-for thing!

Google Map Links
If your website represents a physical shop or local business, you can tell Google where you're at, and it'll put a marker on Google Maps to let people know where your business is. This is also good because Google make their map data available to everyone, so people who write apps for iPhones might also be privy to your business's location.

Search Terms
Finally, just some search terms and terminology that are worth looking up and reading about:

  • SEO – Search Engine Optimization
  • Backlinking
  • Conversion Rate
  • Product Feeds
  • Sitemaps
  • Analytics
  • Robots
  • Social Media

Aptana3: Smarty Highlighting

Aptana 3 doesn't have full Smarty highlighting support or file association, but you can set it up to at least do HTML highlighting.

You can associate .tpl files from within Explorer by right-clicking a .tpl file, going to Properties, then "Open with", and selecting/browsing to Aptana.

To get highlighting going in Aptana, you need to go to "Window" -> "Preferences" -> "General" -> "Content Types", select "Text" in the right-hand pane, browse down to "HTML", click the "Add" button and add "*.tpl" to the extensions list.

Now when you click a .tpl file it'll open in Aptana and have at least HTML highlighting!
