PHP Ecommerce in 2015

Something I realized recently is how old and cranky the PHP-based eCommerce tools have become. Tools such as OpenCart, ZenCart, WooCommerce, and Magento have been around for donkey’s years; they’ve been hacked, patched and modularized. If you want to improve SEO on any of those platforms, chances are there’s a huge marketplace of paid-for modules that claim to enhance your conversion and SEO. The problem is that when you start looking at these plugins they tend to focus on one thing, like meta and title tags. Other SEO plugins might focus on analytics. So what eventually happens is you end up installing several plugins to cover one area of the shop. This brings its own problems, not least compatibility but also security.

While those platforms and their marketplaces have served the PHP world and their merchants well over the years, we’re now in 2015 and those players haven’t really stepped up their game all that much.

OpenCart

I’ve worked with OpenCart on and off since 2011 and it feels like it’s still stuck in 2011. The code is lightweight and it has an MVC pattern behind it, but it’s very simple and feels old and dusty under the hood. That said, it’s a good solution for merchants who might be selling t-shirts or mugs and doing low-volume sales. But if you’re looking at low-volume sales, why wouldn’t you consider a hosted platform like Shopify? One thing merchants increasingly demand is a mobile presence. While the default theme for OpenCart is responsive, there aren’t any APIs or anything else a mobile app can integrate with, and building any kind of integration with the shop is going to be tough without changing the core, unless you talk directly to the database (but that’s cheating!)

WooCommerce

A popular choice. WooCommerce provides a toolkit for selling things in your WordPress environment. I’ll admit I haven’t used WooCommerce, but anything that plugs into something else, such as WordPress, to achieve its goal isn’t really a good solution in my book. Sure, there are some good use-cases for bolting a shop onto a CMS or a blog; it might be convenient and it’ll get your foot in the door to selling stuff, but if you’re a serious merchant I suspect it just won’t cut it.

Magento

Ohh Magento. I’ve worked on this beast for the past year. It’s fantastic from a merchant point of view, but dreadfully slow to work on from a developer point of view. The code base, although MVC and well documented, is huge and very complicated. I often find working with Magento quite difficult because its structure is quite rigid; it has some flexibility out of the box with attributes, but wanting to do anything different from what has been provided is quite tricky. Also, don’t get me started on the EAV database design. In terms of scalability it’s quite tricky too: it’s based on some of the Zend Framework (1) components, so it has several caching backends available, including Redis and Memcache. But developing with those turned off is painful, and you need them turned off in development!

So, the future… Magento 2 is around the corner and arrives at the end of this year, 2015. It’s promised to have a test framework with good coverage, and it’s meant to be based more on the Symfony components. It all sounds good, but who knows if the community will adopt it? More importantly there is no upgrade path, so all the modules you have won’t work, and you’ll have to migrate all the data across too. It’s quite risky if you’ve spent a long time building a stable system with finely tuned SEO and conversion. With that in mind, it could be a good opportunity to migrate to another system anyway.

With the historic players in mind, what else is available in 2015? In this day and age PHP developers tend to gravitate towards frameworks rather than fully built pieces of software, probably because they offer the most flexibility long-term as well as more options for reusable components.

Sylius – http://sylius.org/

This is the most promising choice in 2015: an eCommerce framework built on Symfony 2 with good test coverage at the unit, functional and behavioural levels. The test suite uses PHPSpec and Behat, which is a tell-tale sign these guys aren’t messing around; they’re taking testing seriously. Having looked into Sylius a couple of times before, it’s a little frustrating to see there’s a lot left on their roadmap before there’s a viable solution for a lot of people. It’s worth keeping an eye on the status on their roadmap page. For now, don’t expect a fully featured admin panel, or reviews and ratings.

One thing that is very encouraging is a recent tweet – https://twitter.com/petewardreiss/status/614336914896912385 – which shows a big UK fashion retailer hiring Sylius developers. It’s a good sign that the commercial world is keen to adopt Sylius early. You just have to Google “Sylius” to find plenty of articles hyping it up.

Jiro – http://jiro.andrewmclagan.com/

Jiro is a Laravel-based eCommerce framework. On the face of it, it looks like it’s trying to achieve what Sylius is doing but using Laravel under the hood. It’s probably too early to really talk about this project, as there isn’t any documentation yet, and if you check out the Github repository you’ll note it only started in July 2015. However, knowing how fast Laravel became popular, it wouldn’t surprise me to see this project get off the ground soon.

Thelia – http://thelia.net/

Unlike the others suggested here, Thelia is a full eCommerce shop rather than a framework. It’s built on the Symfony2 framework and looks pretty full-featured. Already on version 2.1, this shop is really worth considering if you need something feature-rich right now. Having glanced through the demo and the feature list, I think Thelia will be a real contender when Magento 2 is released, as it seems to contain everything you’d need to run your shop and more. It’s very SEO-capable, with meta tags, 301 redirects and more. It also contains a full REST API, along with other features you’d expect from Magento such as coupons, customer groups, abandoned carts, analytics and reporting.

Sonata Project – https://sonata-project.org/bundles/ecommerce/develop/doc/index.html

If you’re a confident PHP programmer or experienced with Symfony as a framework, one route you can go down is to use a pre-made bundle. It means you can get an eCommerce experience out of your Symfony app with little effort. It does mean you’ll have to bootstrap and pull together all the bundles it includes, so although the features are there you’ll have to be quite involved. The Sonata project, from what I gather, has been around a while and provides various bundles for the Symfony framework that are quite well used, so this bundle is probably very well supported across the Symfony community. It seems like a safe option to me if you want to create your own shop in Symfony and get a head-start on building the shop’s backend.

Conclusion

For me, right now, I’d jump on Sylius. It’s going to be tricky being an early adopter, but it’ll be worth it, and as a developer it means you can contribute back quite quickly. Although it’s at a pre-alpha stage, the test suite around it is very encouraging, so even though it’s not officially released you’d expect the framework to be fairly stable. With retailers hiring for Sylius and some noise on Twitter and in Google search results, I suspect this is a real contender.

Varnish + Apache and HTTPS

If you’ve been keeping up with web programming and web technology you’ll have heard people talk about something called Nginx. Nginx is a reverse proxy that also doubles up as a web server. It’s pretty fast – so I’ve been told; I actually haven’t used it, but the benchmarks look exciting. Unfortunately I’m not ready to make that switch. Apache has served me (pun intended?) well over the years; it’s the grandfather of web servers. It’s not the fastest by any means, but it’s the most mature and feature-packed out there. Where Apache specifically is let down is serving static content alongside processing PHP pages: the PHP Apache module bogs down the whole runtime of Apache*, and makes it a bit inefficient when processing static requests. When I say static requests, I’m talking about content that doesn’t change – images, stylesheets, JavaScript and icons.

So what can we do to speed up the serving of static content, while letting Apache deal with generating the pages dynamically with PHP?

Enter Varnish. According to their website:

Varnish is a web application accelerator.

Technically speaking it is a reverse proxy cache. I’ll explain…

When you think of proxies you might think back to your office or school internet system, which had a proxy filtering out the naughty content from the web. Well, proxies also served another purpose – to cache content for the network. This means every time someone goes to Google on the network it doesn’t have to make a request through to the Google servers; it can use its cached version instead, speeding the whole process up. So by definition a reverse proxy works the other way around. Rather than caching content from the web, it caches the content from your website and sends it back to your visitor’s browser rather than letting the web server handle the request.

Varnish does exactly this. What makes Varnish particularly exciting is the way it caches the data: it does it in memory. For those that don’t know, memory is much faster to read from than your local hard disk. It’s why other tools like Memcached are so popular too. So when you think about it, Varnish is quite a simple concept. I’ve particularly noticed a decrease in page load times on legacy websites where there are a lot of images and CSS sprites. When I say noticeable, I mean a couple of seconds – which in page load times is quite significant.

 

Installing for standard HTTP

To install Varnish you can look at the official documentation, but essentially it’s a simple apt-get on an Ubuntu server.
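
On a recent Ubuntu that boils down to:

sudo apt-get install varnish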

The confusing part is getting your head around the port forwarding. You can’t have two things bound to the same port. Something has to give, and since we need to hit Varnish rather than Apache from our web browser, Varnish wins the coin toss for port 80. This means we need to run Apache on a different port number. The default configuration for Varnish is to serve content from port 8080 and talk to the Apache web server running on port 80. We need to reverse this situation.

If you’re running Ubuntu you need to look at your:

/etc/apache2/ports.conf

Change this to use 8080 instead of port 80.
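
The Listen directive inside should end up looking like this:

Listen 8080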

Then you’ll want to update all your VirtualHost configurations for Apache to use port 8080. The line should look like this:

<VirtualHost *:8080>

Now you’ll need to change Varnish to run on port 80. Varnish’s configuration is stored in:

/etc/varnish/default.vcl

Inside here we’ll need to configure the backends. In Varnish’s terminology a backend means your Apache web server. Since we’ve just moved Apache to port 8080, you’ll need something like this:

backend default {
   .host = "127.0.0.1";
   .port = "8080";
}

This only tells Varnish where to find Apache; we now need to bind Varnish to port 80 so our web browser can talk to Varnish. That setting is stored in:

/etc/default/varnish

Inside here is a line that begins with:

DAEMON_OPTS="-a :8080 \

This line contains the parameters that Varnish is launched with at system boot and when you call the service program. Here we need to change the :8080 to just :80.
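
After the change the line should read:

DAEMON_OPTS="-a :80 \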

This should be enough configuration to get going with Varnish in a production environment. You’ll now want to restart both Apache and Varnish via:

sudo service apache2 restart && sudo service varnish restart

Using Varnish with HTTPS

One drawback with Varnish is that it doesn’t understand SSL-encrypted requests. This means HTTPS is alien to it: it doesn’t know what is inside each packet because it’s encrypted, and therefore it can’t figure out whether the content being passed back and forth is suitable for it to cache.

So to circumvent this limitation we can introduce Pound. Pound is a reverse proxy similar to Varnish, except its focus is more on load balancing – directing traffic to the right places. Pound, like Varnish, is pretty simple to set up on Ubuntu:

sudo apt-get install pound

Like I said, Pound is intended to route requests to the various places they need to go. Think of it like Apache rewrite rules that redirect requests to servers rather than pages/scripts. We’ll need Pound to send requests to Varnish when it receives them, and for Varnish to forward the request on to Apache or serve it from its own cache. Again, for this to work we need to bind Pound to port 80 for our web browser, and since we’re dealing with HTTPS/SSL we’ll need to bind to 443 also.

Once installed, Pound’s configuration lives in:

/etc/pound/pound.cfg

We’ll need to look for/create a block of configuration that looks like:

ListenHTTP
	Address 127.0.0.1 ## this needs to be the external IP address of the server.
	Port	80        ## this needs to be 80 for web browsers to connect to.

	## set to 1 to also allow PUT and DELETE (0 allows only GET, POST and HEAD):
	xHTTP		0

	Service
		BackEnd
			Address	127.0.0.1
			Port	9080       ## this needs to be our Varnish port number for HTTP connections
		End
	End
End

Here we’ll need to change the first Address directive from 127.0.0.1 to the external IP address of the server (the one your domain resolves to); this way Pound can handle requests coming to your domain(s). The port number then needs to be 80 so it’s listening for HTTP traffic. The BackEnd configuration block is going to point to our Varnish installation. We’ll configure that in just a second, but we can assume that port number is going to be 9080. Now, because we’re going to be dealing with SSL, we need to include an extra configuration block to handle incoming HTTPS requests to Pound. The block should look something like this:

ListenHTTPS
	HeadRemove "X-Forwarded-Proto"
	AddHeader "X-Forwarded-Proto: https"
	Address 127.0.0.1 ## this needs to be the external IP address of the server.
	Port 443          ## this needs to be 443 to listen for HTTPS connections from browsers
	xHTTP		0
	Cert "/etc/apache2/ssl/pound.pem" ## this needs to be your SSL certificate
	Service
		BackEnd
			Address 127.0.0.1
			Port 9443 ## this needs to be the Varnish HTTPS port number
		End
	End
End

As you can see above, we’ve got some extra configuration there. We’ve added two new directives at the top, which remove any existing X-Forwarded-Proto header from the request and then replace it with one telling Varnish this is an HTTPS request. This is particularly important: like I said earlier, Varnish can’t understand SSL-encrypted connections, so the connection between Pound and Varnish will be over plain unencrypted HTTP. We can get away with this as the connection between the user’s browser and the server is encrypted; it’s only where traffic is going over the loopback interface of the server that it’ll be unencrypted.

The other thing we’ve added is our SSL certificate. This is a pem file, which took me ages to figure out how to produce. I’m not an SSL expert and don’t fully understand certification. Basically the pem file is a concatenation of your certificate (crt file) and your key. Doing this step wrong will result in Pound failing to start with the error message “SSL_CTX_use_PrivateKey_file Error“. You need to concatenate the files in the right order for it to work, and make sure your key doesn’t have a password on it. This thread over at the Pound mailing list has some information on this, but for reference you should have it in the order detailed here: http://www.project-open.org/en/howto_pound_https_configuration
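
As a rough sketch – assuming your files are named certificate.crt and certificate.key, and deferring to the howto above for the exact order Pound expects – producing the pem file boils down to something like:

# strip the passphrase from the key first, if it has one
openssl rsa -in certificate.key -out certificate-nopass.key

# concatenate certificate and key into the pem file Pound reads
cat certificate.crt certificate-nopass.key > /etc/apache2/ssl/pound.pem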

Again, the BackEnd block needs to point at the Varnish port that is going to handle/forward our HTTPS connections. Here we’ve assumed it’s 9443.

Next we’ll update our Varnish configuration so it’s listening on different ports from Pound. Open:

/etc/default/varnish

You’ll notice we’ve basically put both port 80 and 443 in the 9000 range; this is just a convention so we know the 9000 ports are what Varnish is listening on. So we’ll need to change the :80 or :8080 to :9080. We’ll also want Varnish to handle (spoof) HTTPS requests, so we’ll need it to listen on another port for our SSL traffic – this port is 9443. Adding a second port number can be done using a comma separator. Make sure you don’t forget the colon in front of the port number – I spent an hour pulling my hair out trying to diagnose this earlier. The configuration should look like this:

DAEMON_OPTS="-a :9080,:9443 \

Next we’ll need to make sure Apache isn’t bound to port 443, as we now need this port to go to Pound and then on to Varnish. Again, open:

/etc/apache2/ports.conf

You’ll need to change 443 to something else, and it’s worth checking port 80 isn’t being used as well. We’re going to use the 8000s here, so HTTP traffic should be 8080 and HTTPS should be 8443. Again, we’ll need to change over all our VirtualHost directives to listen on the new HTTPS and HTTP ports.

<VirtualHost *:8443>

Now we need to make sure Varnish’s backends reflect our new Apache port numbers, as we only configured what Varnish is listening on before, not where it’s forwarding its requests. So let’s open (again):

/etc/varnish/default.vcl

We’ve already got our HTTP backend defined; now we need to define a new backend for HTTPS requests. Ours will look like this:

backend default_ssl {
    .host = "127.0.0.1";
    .port = "8443";
}

Although we’ve got two backends, we need to make sure Varnish knows which one to use for each connection it receives. We need to add this small block of logic, which picks a backend based on the port number the connection arrived on:

sub vcl_recv {
  # Pick the backend based on the port the connection arrived on.
  if (server.port == 9443) {
    set req.backend = default_ssl;
  }
  else {
    set req.backend = default;
  }
}

We’ll also need to make sure our default backend is up to date too; it should be forwarding to 8080.

At this point we should be done with configuring Pound, Varnish and Apache. The only thing left to do is tweak our Apache VirtualHost configuration so it’s speaking plain unencrypted HTTP to Varnish. Remember we sent through that X-Forwarded-Proto header? Well, this is where it comes in handy! In our VirtualHost configuration we can comment out our SSL directives, as we don’t want to encrypt anything going through Varnish – Pound will do that for us!

Next, however, we need to spoof the fact this is an HTTPS connection. This is important because although your browser is visiting the site over HTTPS, Apache won’t know it’s HTTPS and will serve all images and CSS as HTTP. This causes all kinds of warnings and sirens in the browser, and the result is a broken page. We therefore need to add the following directive outside the VirtualHost:

SetEnvIf X-Forwarded-Proto "^https$" HTTPS=on

Our final VirtualHost configuration should look like this:

SetEnvIf X-Forwarded-Proto "^https$" HTTPS=on
<VirtualHost *:8443>
    ServerName www.somesite.com
    DocumentRoot /var/www/site

#   SSLEngine On
#   SSLCertificateFile /etc/apache2/ssl/certificate.crt
#   SSLCertificateKeyFile /etc/apache2/ssl/certificate.key
#   SSLCertificateChainFile /etc/apache2/ssl/certificate.pem
#   SSLVerifyClient None

    <Directory /var/www/site>
        AllowOverride None ## Disable .htaccess
    </Directory>

</VirtualHost>

One last thing to do before starting Pound: you’ll need to enable the daemon in its service configuration. You need to set startup=1 in:

/etc/default/pound

Now everything should be configured; you just need to restart all your services.

sudo service apache2 restart
sudo service varnish restart
sudo service pound restart

Problems and Troubleshooting

  • If you have a problem starting Pound because of SSL_CTX_use_PrivateKey_file, you’ll need to check the certificate/pem file you’ve pointed to in your Pound configuration.
  • If when you visit your website you see “Service is unavailable, please try again later”, it means Pound couldn’t talk to Varnish; make sure your Pound configuration matches the ports Varnish is listening on. See /etc/pound/pound.cfg and /etc/default/varnish.
  • If you get a 503 error when visiting your site, this is coming from Varnish, and usually means that Varnish is talking to Apache but Apache is sending back encrypted traffic when Varnish is expecting plain old HTTP. Make sure you remove your SSL directives from the VirtualHost and put the SetEnvIf directive on the X-Forwarded-Proto header.
  • To diagnose problems, it might be useful to visit your website directly via Apache’s new ports to ensure Apache is working properly. Visit http://www.somesite.com:8443/.
  • Varnish comes with its own logging system that stores its logs in memory; you just need to run
    varnishlog

    on the command line.

  • Pound also logs, but it goes to syslog. So you’ll want to
    tail /var/log/syslog
  • Of course you always have your Apache logs at
    tail /var/log/apache2/access.log
  • Another tip is to use curl to see what headers are being sent in the request (there’s one more check below):
     curl -v http://www.somesite.com/
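
To check the SetEnvIf rule is doing its job, you can also talk to Apache directly on its new plain-HTTP port while spoofing the header Pound would normally add (www.somesite.com and 8443 being the example values used above):

curl -v -H "X-Forwarded-Proto: https" http://www.somesite.com:8443/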

I hope the above article helps someone trying to get a LAMP setup working with HTTPS over Varnish and Pound. It took me a few hours of playing around before I got everything set up perfectly. It’s definitely worth the effort, as it means your website is much more scalable and configurable.

Thanks
Adam.

*feel free to correct me on that statement.

Announcing: gigHUB

Announcing my project: gigHUB.

gigHUB is my new side project which collates all the local gigs in my area and puts them in one place that’s easy to browse on a mobile. The project started in mid-June 2013 with three friends, some of whom I work with at Goram + Vincent. So far the project has had positive feedback and gained significant traction through social media. We have big plans in terms of adding new features and building on our success so far, so if you live in the Bristol area it’s worth adding gigHUB to your bookmarks!

Thanks

Adam.

Don’t be so harsh on PHP…

A recent post to a mailing list I’m on sparked the following reply (paraphrased)…

I can’t say I’ve had a play with many of the server-side languages and frameworks kicking around, and to be honest every day sees a new framework/library/language I should apparently be trying.

PHP has a lot going for it, not in terms of implementation as such (yeah, it might be ugly!) but in other areas.

For one, PHP has been around since the mid 90s, so it’s now got quite a bit of maturity; what I mean by this is it has a level of trust around it. PHP on paper was meant for the Web – its acronym is/was Personal Home Page! Python and Perl were scripting languages back in the day, and while Perl had its opportunity with the Web (loaded as an Apache CGI module) it didn’t take off. Now correct me if I’m wrong, but Python on the web has only become more popular recently because of Django? Similarly with Ruby because of Rails? You don’t see many people writing vanilla Ruby/Python web applications; a lot of people opt to use a framework. In the case of Rails it includes the kitchen sink and everything you’d ever need to develop on the web. PHP isn’t a framework out of the box, but offers enough to get you going on the web without the need for a framework. So, back to the idea of maturity: with PHP’s many years behind it, a lot of the earlier security flaws you’d expect from a young language have been rooted out. Compare this with the Rails stack, where there were some quite high-profile security flaws announced and patches released earlier this year – I know a lot of fuss was kicked up about it on my Twitter feed…

I can partly back up some of the above statements because I recently worked for a large finance organisation; when I mentioned things like NodeJS and Rails they chuckled, because those technologies are considered “hipster” technologies in the eyes of the bigger boys. For blue chips and FTSE 100s the obvious choices in languages are the traditional ones like .NET, PHP and Java. Their reasons are as I said above, but for a lot of these organisations there are other concerns – for example, can their system admins scale it up and manage the application easily, and can they bolt on add-ons such as caches and accelerators?

This is where PHP seems to succeed: scalability. The proven killer stack known as LAMP just works and does what you need it to out of the box. But if you want to scale you’ve got APC, Memcache, HipHop, even Nginx for better performance. From what I’ve read, Rails/Ruby as a web stack has only caught up in recent years in terms of matching PHP’s performance, let alone begun to outperform it. That said, I don’t wanna big up PHP too much: if performance is your thing you’ll want to be looking at building elastic Java web apps running in a JVM cloud type thing.

How easy is it for someone to download something like Django/Rails/NodeJS and get going with it? Not very – getting my head around RVM meant I had to use my head a little as a geek, something a complete freshman won’t have. What PHP offers in this respect is a low entry barrier: literally anyone can download a WAMP/MAMP/LAMP package and, within a few clicks, be using something like phpMyAdmin, working through some simple examples in Notepad++, and have a website. It’s that easy. The flip side to this, however, is that those people will ultimately build lots of awful websites before they get better, but this is part of the learning process and keeps experienced guys in business doing the rewrites when it goes wrong. Of course things change when you use a framework, as developers suddenly have to conform to a convention; initially that’s a good thing, as it means the developer will start to gain some discipline. But I think there’s another blog post to be had here about using a framework versus not using one.

Going back to my point about the big blue chips: from an employment point of view, chances are universities aren’t spewing out Rails or Django or even Zend programmers, but they will at least cover some ASP or PHP (I covered PHP at both the universities I went to – in fact I covered PHP at college too), which means any budding graduates with a taste for web development are going to be using one of those two languages as a starting point. That means there should be a big pool of development resource around for PHP/ASP sites, leaving things like Django/NodeJS as more niche, and it means the big blue chips can open their doors to graduates and take that academic knowledge to the commercial level.

So let’s not be too harsh on PHP: it has done a lot for our industry, and it will continue to be used because of its easy accessibility in academia and a vested technical interest from big organisations.

But from a personal perspective, I’ve played with a few PHP frameworks but never done any serious development with any of them other than the original Zend. I liked Zend – it was my first framework in a commercial setting – however I have some reservations about how bloated it seemed. For me CodeIgniter is a much cleaner framework – it’s code that looks like I wrote it! Not engineered by a commercial organisation like Zend. But I found I didn’t need some of the things in CodeIgniter, and it didn’t provide anything major I couldn’t have built myself. I’ve also had a little play with Slim, and have to say I’ll probably be using that more in personal projects, as it’s light. Frameworks aside, I’m quite keen on using libraries: I’m quite a fan of Doctrine as a DBAL, and I’m looking towards Twig for templating and even Recess for API things.

But in recent years one thing I’ve discovered from working without frameworks in my various commercial settings is that I don’t need them. No one is forcing you to use a library or a framework; yes, you get some of the hard bits done for free, but you’ll be at the mercy of the framework’s publicly announced security exploits, or hindered by its performance overhead, or baffled by its ORM. Whatever that original itch was, that library or framework is guaranteed to go some way towards scratching it but won’t satisfy it completely. I don’t think there’s any shame in writing a few classes and instantiating them yourself when you need them – what’s wrong with building your own toolkits and libraries?

Thanks
Adam.

Howto get going with your Pi…the hardware [part 1]

So you were curious and thought you’d join the crowd; you thought to yourself, “I’ll get me one of those Raspberry Pi things”! Then boom! Weeks later, after many emails warning you of delays and much checking of the Raspberry Pi website, it arrives.

So now you have it… that credit card sized bit of circuit board, but now what do you do with it?

Powering Your Pi

Firstly, if you were half asleep when you ordered your Raspberry Pi you may have forgotten to order any of the accessories, including a power adaptor – as I did! I’ve been doing my research: the Pi draws about 500mA/700mA depending on the model you got (A or B). This rating is a minimum, so ideally we’re looking at a supply capable of quite a bit more, somewhere about the 1,200mA mark. This means your PC or USB hub isn’t going to provide quite enough power (unless you’ve researched your hardware thoroughly and know otherwise), so the best way to get this thing powered is to use a mobile phone charger. I’ve actually got a Blackberry Bold 9900, and the charger works fine on this occasion. I’ve also got a Google Nexus 7, whose charger I’ve looked into: it provides 2A (2 amps, that’s 2,000mA) which is ample. I’ve also been a bit cheeky and managed to run my Pi from the USB media play port on my 32″ Samsung TV; however, I wouldn’t recommend this as a permanent power source. I’ve also been looking around, and if you’re stuck for a power adaptor the Nokia AC-10X is perfect and it’s cheap – Amazon sell them for around £2-£3.50, so you really can’t go wrong!

Storage Devices

Again, I was asleep, and knew I had a spare SD card lying around somewhere that I wasn’t using. Generally when it comes to SD cards on the Raspberry Pi there are only a few things to consider: a) how much space do you need, b) is speed important, and c) do you care about reliability. When it comes to the first point, you need to consider what you might use the Pi for. If you plan on setting it up as a media centre or using it as a form of backup you might wanna splash out on a 32GB card, but as a bare minimum you should be looking at 4GB. Once we’ve installed Raspbian and got a few tools, you’ll have a little left over from 4GB, which will be enough to begin programming on (as intended). The second consideration is speed. SD cards come with a rating known as a “class”; this class is usually a good indication of how many megabytes a second can be read from the card. As a minimum I’d recommend a class 6 rated card; this means you’ll get a healthy 6MB/s read speed (on average). If you plan on programming or running some really heavy applications or web services you may want to consider a class 10 card. The final consideration is really how much you value the data that’s going to be on the SD card. If you’re coding something mission-critical I’d recommend backing up your code to a desktop PC as best practice, but ultimately it’s a judgement call which brand of SD card you want. I’ve got a Veho card that’s years old; I know people who swear by SanDisk and others by Transcend – it’s your call.

Interacting

On your very first boot you’ll want to configure a few things, and you’ll want to see the pretty coloured boot screen, so obviously we need to plug the Pi into a monitor or TV. The Raspberry Pi supports HDMI and RCA (as stated on the, err… box). I have a Playstation 3, so I ‘borrowed’ its HDMI cable, which worked fine on my Samsung TV. I later found an old RCA (yellow/red/white) cable from an ancient DVD player which also does the trick (for those that have forgotten the RCA/SCART age of TVs, you’re only really concerned with the yellow one for video). I have to say on my TV I didn’t really notice much difference between the HDMI and the RCA cable, probably because the Pi outputs the same resolution on both, however it may differ between TVs (RCA might be prone to scanlines or wrong refresh rates/flickering). The other bits we need to get going are a mouse and keyboard. If you’re used to a desktop and don’t have a desire to embrace the command line, then you’ll want the mouse – if not, brilliant, you can use a keyboard only. A problem I’ve read about and encountered myself is repeating keypresses and what appears to be unresponsiveness. I have a wireless Microsoft keyboard and mouse that run off a small USB adaptor; this adaptor does BOTH mouse and keyboard on a single USB port. What I found is that it draws quite a bit of power from the USB port on the Raspberry Pi; this resulted (when I was using my Samsung TV to power it) in repeating and unresponsiveness. I’d press a button once on the keyboard and get a whole row of that letter on screen. To solve this I dug out an old wired USB keyboard and wired USB mouse, and it worked fine with those.

Connectivity

Finally, the last bit of the jigsaw: network connectivity. It’s not mandatory to set up on your Pi, but it’s damned useful! The model B Raspberry Pi comes equipped with a 100Mbit/s Ethernet port; this is perfect for plugging straight into your router or PC (via ICS – Internet Connection Sharing). If you network your Raspberry Pi it means you can share files between your desktop PC and the Pi, as well as do other cool things like set up a web server, or remotely access the Pi over the internet using SSH. With a network connection the possibilities really open up. What’s more, you want to be able to share what you’ve done with your friends, right?

If you’re close to your router I’d suggest you take advantage of the router’s speed for the Ethernet connectivity; if you’re some distance from the router then ICS might be the way to go. I’m not going to cover ICS, but it’s pretty simple: you need to set up a static IP on your desktop PC for your Raspberry Pi to connect to, and then tick the little box which tells Windows to share your wireless connection.

Another option which might not be apparent is using a powerline Ethernet adaptor. This allows you to route an Ethernet connection from your router into a wall socket in your house, pick it up again elsewhere, and connect it to the Ethernet port on the Pi. This may be more convenient if, say, you plan on using your Pi to play video streams in your living room and your router is elsewhere in the house.

The other option is to buy a supported USB wireless dongle. You may have to hunt around on the internet and do your research beforehand, as Linux and wireless drivers can be a real pain to set up. One adaptor I’ve seen around that seems reasonable is the Edimax Wireless-N150 Nano adaptor; as of writing, Ebuyer are selling these at a good price of £9.99.

Summary

That’s it for the first part of this guide; I just wanted to cover some of the basic hardware pitfalls and recommendations for the Raspberry Pi. In the next part I’ll give a quick look at how to get going with Raspbian and get it up and running with a desktop installed.

In the meantime, here are some useful part numbers for some reasonably priced accessories:

  • The “Xenta USB to Micro USB Cable” to power your Raspberry Pi from. Ebuyer have these down at 98p (yes, pence) a go. Quickfind code: 24226
  • The “Transcend 16GB Secure Digital High Capacity Card” for storage on your Raspberry Pi. It’s a class 10 so it’s pretty nippy, and it’s priced at £8 on Ebuyer. Quickfind code: 350691
  • The “Edimax Wireless-N150 Nano USB Adapter” is well supported and recommended; again, Ebuyer have them for £9.99. Quickfind code: 220220

Thanks for reading

Adam.

 

 


SEO Techniques & Tips

I’m not an SEO expert, and I’m sure a lot of the points listed below have been around on the ‘tinternet for some time, but it’s good to put them into context… so here goes:

Semantics
Generally when it comes to trying to influence Google’s ranking, SEO experts try to focus on a few keywords and make them prominent in the website’s design. This relies on you being familiar with HTML, as it involves getting the semantics right: making sure you’ve got headers on pages, and that text is wrapped inside a paragraph tag. The Google robots (computer scripts which scan your website) are fairly stupid; they don’t know the context of text or headers or images, so we have to tell them. This is even more important with images, as robots can’t see images. To get around this we have to ensure we include alternative text captions for images which describe what the image is. You might think images are useless things for Google to index, but if someone happens to be using Google Image search they might find your pictures and thus your website! When building a website yourself these things are easy to include, but if you’ve used an off-the-shelf solution (such as WordPress) it’ll be more difficult to configure, as the software takes all the information you give it and generates the pages for you.
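
For example, a product photo might be marked up with alternative text like this (the filename and caption are made up for illustration):

<img src="cake-box-10-inch.jpg" alt="A white 10 inch cake box with a clear lid" />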

URLs
Search engines tend to like descriptive file and web page URLs, so try to make things descriptive. This is a good example of a website URL: http://www.myfirstwebsite.co.uk/cake-boxes-and-cake-boards and this one isn’t so good: http://www.myfirstwebsite.co.uk/cms.php?id_cms=4  Again, if you’re building the website yourself you can dig straight in and use Apache’s mod_rewrite module to do regular expression pattern matching and map the nice request onto a more workable URL. However, if you’ve not built your own and have used a piece of software, it’ll depend on what control the software gives you, but you should always aim to create a customised URL for pages. It’s also important to ensure you have a canonical web address; search engines treat all subdomains and variations as completely separate sites. This basically means your website should be http://www.myfirstwebsite.co.uk and not just http://myfirstwebsite.co.uk, and this also applies if you decide to buy more domains later on. To get around this, and make sure all visitors use the same URL to access your website, you can set up a “301 redirect” from myfirstwebsite.co.uk to www.myfirstwebsite.co.uk; again this can be done using Apache’s mod_rewrite rules (see the sketch below). Extra domains are useful because they make your website more visible: the search engine’s robots will eventually visit your site via each domain you buy and index it again, which is good.
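
Using the example URLs from above, a minimal sketch of those two mod_rewrite jobs (in a .htaccess file) might look something like this:

RewriteEngine On

## serve the friendly URL from the real script
RewriteRule ^cake-boxes-and-cake-boards$ /cms.php?id_cms=4 [L,QSA]

## 301 redirect the bare domain to the canonical www address
RewriteCond %{HTTP_HOST} ^myfirstwebsite\.co\.uk$ [NC]
RewriteRule ^(.*)$ http://www.myfirstwebsite.co.uk/$1 [R=301,L]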

Robots
The other way we can influence the search engine robots is by including instructions on the website telling them whether to index a page or not. This is done by including a robots.txt file on the website, which contains pretty much a yes/no style flag indicating whether the robot should index the page (see the example below). Sometimes you don’t have to include a robots file; instead you can embed the rule straight into your HTML markup like so:

<meta name="robots" content="index,follow" />
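
And for reference, a minimal robots.txt might look like this – here allowing everything except a hypothetical /admin/ area:

User-agent: *
Disallow: /admin/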


Search Engines
Generally there are three big search engines in use: Google, Yahoo and Bing. Pretty much all other search engines use one of these three, or a combination, for their results. Getting yourself onto Google generally happens automatically; you shouldn’t need to submit your website, although you can at https://www.google.com/webmasters/tools/submit-url?hl=en_uk&pli=1 However, for Bing or Yahoo you will probably want to submit your website: for Bing use https://ssl.bing.com/webmaster/SubmitSitePage.aspx and for Yahoo use http://search.yahoo.com/info/submit.html

Backlinking
This term has been lurking around on the net for a while now, and it basically means trying to get other people to link to your site. Search engines determine which sites are popular by how many times people have linked to them on other websites. Essentially you want to get any friends and family to link to it from their websites, and you’ll also want to get out and about on the internet and get your web address out there. Perhaps post in relevant forums or comment on blog posts?

Social Media
Social media tools like Facebook and Twitter can be good, in particular Twitter. However, search engines don’t have a Facebook login and so can’t really make much use of it. Don’t get me wrong: Facebook is good for getting people to engage and creating a community aspect around the website, which is valuable in its own right, but if your goal is to boost page rankings I’d encourage Twitter too. The beauty of Twitter is that, unlike Facebook, search engines can see it, index it and make use of it. What’s more, lots of other websites interface with and mine data from Twitter, so your name, comments and web address will be available all over the place. To get the best out of Twitter you have to take a different mindset from Facebook: it’s okay to post to it, but you need to have followers. To get followers you need to post relevant content about your website and your target market and audience, and you’ll also want to go on a hunt and follow as many people who you think will be interested in your products as possible – more often than not they will follow you back! There are quite a few possibilities, but to engage your target audience perhaps think about running competitions; getting people to send in pictures of your products via Twitter could be a good start.

Merchant Services
There are lots of price comparison websites and product search databases out there. To use them you’re going to have to create a feed of all your products; this is usually in the form of an XML document. When someone puts in, say, a product number or a description, Google will return a list of people who sell that product. Google used to call this service Froogle, however it’s been rebranded as Google Base now – check it out at http://base.google.com/base/ Google will hold your hand through this process, so don’t be scared!
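
As a rough sketch of what such a feed looks like (the product names and values here are made-up examples – Google Base documents the exact attributes it wants):

<?xml version="1.0"?>
<rss version="2.0" xmlns:g="http://base.google.com/ns/1.0">
  <channel>
    <title>My First Website</title>
    <link>http://www.myfirstwebsite.co.uk</link>
    <item>
      <title>10 inch Cake Box</title>
      <link>http://www.myfirstwebsite.co.uk/cake-boxes-and-cake-boards</link>
      <g:id>CB-10</g:id>
      <g:price>2.99 GBP</g:price>
    </item>
  </channel>
</rss>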

Sitemap
Like I said, the search engine robots are stupid and they sometimes don’t find every page on your website. The best way to tell the robot where all your pages are is to create a sitemap. Sitemaps are usually an XML document (same as above); the XML file sits on your website and the robot will find it. You can use something like http://www.xml-sitemaps.com/ to create the sitemap; you’ll need to upload it to wherever your index file is, so you can get to it via http://www.myfirstwebsite.co.uk/sitemap.xml Google again has a nice bit of documentation on this: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=156184&topic=8476&ctx=topic – follow what they say there and you’ll be fine.
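
A bare-bones sitemap.xml, using the example site from above, looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.myfirstwebsite.co.uk/</loc>
  </url>
  <url>
    <loc>http://www.myfirstwebsite.co.uk/cake-boxes-and-cake-boards</loc>
  </url>
</urlset>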

Google Analytics
Using Google Analytics? If not, you should be: it’s the ultimate place to keep an eye on how your website is doing, and who and what is visiting it. It also contains some useful “Webmaster Tools” to help you get the most out of Google.

Google Adwords
I’m not sure how valuable it really is, as most people tend to ignore the sponsored Google results (I do anyway), but you can sign up and pay Google to have you as a sponsored link. This doesn’t guarantee visitors, but it does guarantee you’ll be somewhere in the top search results on the right-hand side. The position, though, is completely dependent on what the user searched for; Google always tries to provide relevant results to users, not just the highest bidder. You can have a look and sign up at https://adwords.google.co.uk – it tends to work on a threshold basis: you tell Google which keywords you want your website to be associated with (Google charges per word) and then you set how much you’re willing to spend on a daily rate for clicks. Google will then charge you every time someone clicks on your link in the results, until enough people have clicked it that it reaches the limit you set – this way the cost doesn’t go spiralling out of control if your website overnight becomes the most searched-for thing!

Google Map Links
If your website represents a physical shop or business locally, you can tell Google where you’re at, and it’ll put a marker up on Google Maps to let people know where your business is. This is also good because Google make their map data available to everyone, so people who write apps for iPhones might also be privy to your business’s location.

Search Terms
Finally, here are some search terms and terminology that are worth looking up and reading about:

  • SEO – Search Engine Optimization
  • Backlinking
  • Conversion Rate
  • Product Feeds
  • Sitemaps
  • Analytics
  • Robots
  • Social Media

Aptana3: Smarty Highlighting

Aptana 3 doesn’t have full Smarty highlighting support or .tpl file association out of the box, but you can set it up to at least do HTML highlighting.

You can associate .tpl files from within Windows Explorer by right-clicking a .tpl file, going to Properties, then “Open with”, and selecting/browsing to Aptana.

To get highlighting going inside Aptana, go to “Window” -> “Preferences” -> “General” -> “Content Types”, select “Text” in the content type pane, browse down to “HTML”, click the “Add” button and add “*.tpl” to the extensions list.

Now when you click a .tpl file it’ll open in Aptana and have at least HTML highlighting!

