Vagrant Up

A somewhat smarter way of working for web developers: when building projects, work inside virtual boxes instead of directly on your workstation. Why? Each project has its own characteristics and software dependencies. Each programmer added to the project has to spend time configuring his or her workstation for your project.

Does the following sound familiar?

We work with OSX, so you'd better bring a Mac. No, you are running PHP 5.3; our project is built for 5.4. You need to set the following env variables. Allow .htaccess. Oh, why don't you have Git? And GD? Etc., etc.

You know what I mean, right? Meet Vagrant.

Interpreting the guide below and my brief experiment this morning: Vagrant allows quick starting (and sharing) of virtualization boxes on workstations. A new employee enters the project, receives a laptop with instructions on how to pull the right Vagrant/Chef setup, types 'vagrant up' and has a Linux distro of our choice with the right software installed, thanks to Chef firing up.
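In practice the whole workflow is a handful of commands. A minimal sketch, assuming a Chef-provisioned setup kept in a Git repository (the repository URL is a made-up placeholder; precise64 was the standard Ubuntu 12.04 base box at the time):

# fetch the project's Vagrantfile and Chef cookbooks (placeholder URL)
git clone https://github.com/example/our-vagrant-setup.git project
cd project
# download a base box once and register it under a name
vagrant box add precise64 http://files.vagrantup.com/precise64.box
# boot the VM; Chef installs PHP, Git, GD and friends
vagrant up
# log in to the fully configured machine
vagrant ssh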

If this triggers your interest, I followed the instructions by ShawnMcCool on Github this morning and already moved one project inside the box. I’m going to experiment a bit more before I move all projects.

Vagrant / Chef instructions by ShawnMcCool on Github

Quick Glance at I3 Window Manager

I have been using the i3 window manager over the weekend.

Now, what is i3? i3 is a tiling window manager created by Michael Stapelberg. Tiling means that instead of the floating windows of a conventional Windows or Mac desktop, windows are presented in tiles.

I just happened to pass by it and got intrigued. Working a lot in terminal emulators, I sometimes wondered what the desktop experience could be without all the clutter and the heavy use of mouse gestures. A tiled desktop essentially means that your applications are always visible and you have a clear overview. Applications will not hide behind each other but are displayed next to each other. Furthermore, there is no desktop: when you add a window, it's added and your screen size is reduced, unless you select a stacking mode.
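For reference, this behaviour maps onto a few keybindings from i3's default configuration (an excerpt of the stock defaults, not my personal config):

# excerpt of a default i3 config; $mod is typically Alt or the Windows key
bindsym $mod+h split h          # next window opens to the right
bindsym $mod+v split v          # next window opens below
bindsym $mod+s layout stacking  # stack windows instead of shrinking the tiles
bindsym $mod+w layout tabbed    # tabbed layout
bindsym $mod+e layout toggle split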

I watched a presentation by Michael in which he explained that the tile-based window manager isn't new. There was one before called 'window manager improved' (wmi), followed by 'window manager improved improved' (wmii), and thus the name of his version became 'i3'.

After playing with it for some time, I decided that I’ll keep it for some time longer. However, at work I still use OSX.

Thinking about all those OSes reminds me of a quote by David Field:

The thing about an Operating System is, you spend a huge amount of time invested with it, be it on your Mobile or Desktop it's a very personal experience. You put your choice of apps, and your data on it, and spend most of your day using one, in some cases you probably spend more time with your OS than the people you care about. It's a personal choice you've invested in, one which is your tool of choice.

In the end it's about picking the right tool for the job. Seeing a different approach to a method we've all become acquainted with is a breath of fresh air. I can't wait to work with it some more.

Octopress Update

Octopress is a highly customizable blog generator. Posts are written and stored in Markdown, and Octopress uses Jekyll to generate HTML that is easy to distribute through S3 or GitHub. If you use Octopress, I recommend following 'Making Octopress Fast', written by Eric Wendelin. The guide will help you set up a GZipped website on Amazon S3. It worked like a charm, but I lost two important elements: previewing and quick deploys.

Previewing

After following the guide, all HTML, CSS and JS files are stored as GZip-9 files. They are unreadable unless you add a 'Content-Encoding: gzip' header to each response and enable a deflate mechanism in your local webserver. On top of that, the WEBrick preview server that ships with Jekyll is rendered useless.

Solution: Preview from the public folder and add a second directory for compression.

I added a reference to a compressed directory. Then make sure that all directives for minifying and combining read from #{public_dir} and that all zipping is delayed until after the task :tocompressed is invoked.

desc "Copying public contents to compressed folder"
task :tocompressed do
   puts "## Copying to compressed directory"
   puts "\n## copying #{public_dir} to #{compressed_dir}"
          cp_r "#{public_dir}/.", "#{compressed_dir}"
                 cd "#{compressed_dir}"
    end

Adding an extra directory to the process results in a public folder that can be previewed. Deployment is done from compressed_dir.
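Previewing then works as before, using the stock Octopress tasks against the public folder:

rake generate   # build the uncompressed site into public/
rake preview    # serve it locally (port 4000 by default)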

Iterative deploys

Here is the problem I was experiencing: deploys became long and dreadful after adding GZip to the deployment process. The s3cmd tool allows incremental uploading, but since I started GZipping the files, s3cmd seemed to just upload everything. At first I thought this might be because I was adding 'Content-Encoding' headers to the files I was deploying. Then I wondered if I could get around it with the '--skip-existing' parameter to the s3cmd command.

Solution: The problem was caused by GZip leaving a timestamp inside the gzipped file, so the output changed on every build even when the content didn't. This was solved by adding the -n parameter to the gzip command.

desc "GZip HTML"
task :gzip_html do
puts "##GZipping HTML"
  system 'find compressed/ -type f -name \*.html -exec gzip -9 -n {} \;'
     Dir['**/*.html.gz'].each do |f| test(?f, f) and File.rename(f, f.gsub(/\.html\.gz/, '.html'))
        end
     end
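With the timestamp gone, unchanged pages produce byte-identical gzip files, so s3cmd's checksum comparison can skip them again. For completeness, a sketch of the deploy step as I run it (the bucket name is a placeholder); the added header is what makes the pre-gzipped files readable to browsers:

s3cmd sync --acl-public \
  --add-header="Content-Encoding: gzip" \
  compressed/ s3://my-bucket/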

I still find myself evaluating the Octopress environment, but it seems highly customizable, so what's not to love?

Final Week for Google’s RSS Reader

Google will kill its Reader on the first of July, as mentioned here. I have been trying alternatives since their announcement last March. If you haven't made up your mind, now is the time to check out this huge list of Reader alternatives. I'm still waiting for the Digg reader to appear, but after reading Macdrifter's Feedly review I'm pretty certain that, at the least, I've found a promising alternative.

Beautify Terminal With Oh My ZSH

Get a marvelous terminal in just a few steps. If you are already using ZSH as your shell, you might like to try this. If you are using bash, I'd recommend giving it a go and seeing which you like best.

I installed ZSH using brew:

brew install zsh

zsh

Then I installed Robby Russell’s Oh My ZSH using:

curl -L https://github.com/robbyrussell/oh-my-zsh/raw/master/tools/install.sh | sh

Now edit your profile like so:

vim .zshrc

And you can set up your theme. I've chosen 'af-magic' for my terminal.
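The theme is a single variable in ~/.zshrc; reload the file afterwards:

# in ~/.zshrc
ZSH_THEME="af-magic"

source ~/.zshrc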

Put That Cloud Outside the US

Some marketing advice by Trevor Pott for non-US cloud companies:

To effect change we are left with a boycott in everything but name. It means that non-US Western businesses need to start using “not subject to US law” as a marketing point. We need cloud providers and software vendors that don’t have a US presence, no US data centers, no US employees – no legal attack surface in that nation of any kind. Perhaps most critical of all, we need a non-American credit-card company.

JPG, PNG and WebP – Crunching Images for Performance

The other day we were scratching our heads about an e-commerce website with terrible load times and found out that 75% of the content was images… and those images weren't optimized. We went through the process of doing just that, and stumbled on an exciting new format, WebP, in the process.

Optimizing JPG and PNG

First, the website. Most images were product images, logos, etc. These had been created by the site admin, who used Photoshop before uploading them to the admin tool. She asked me why we need to optimize before uploading.

The original bitmap of an image is in fact too large to use online; two common encodings are PNG and JPG. PNG is lossless, compressed with DEFLATE (the same algorithm behind gzip), meaning the decompressed image looks exactly the same as before compression. JPG is lossy: the compression algorithm discards data by averaging neighboring pixels to make the image a lot smaller. I recommended she use JPG, unless she needs transparency, which isn't supported by JPG.

However, PNG and JPG image files are often needlessly large. This is due to extra data inside the file, such as comments, metadata, or unused palette entries, as well as suboptimal compression: an inefficient DEFLATE pass for PNG and unoptimized Huffman tables for JPG.

To optimize the user's experience, PNG images should be optimized, which can be done using free tools like pngcrush. For convenience, I introduced all the staff to ImageOptim – which applies a number of crushers to optimize an image – and asked them to optimize images before uploading. The same day, we saw a 300 kB reduction on the homepage, hooray! But a week later we realized that manual optimization wasn't sufficient; large images were once again popping up left and right and it was hard getting everyone to optimize by hand. So we built a simple cron job which is executed daily to take care of the process.

The commands we built into a bash script ($1 = the path given to the script):

# strip metadata and re-optimize JPGs in place
find "$1" -name '*.jpg' -print0 | xargs -0 jpegoptim --strip-all -f
# recompress PNGs at the highest optimization level
find "$1" -name '*.png' -print0 | xargs -0 optipng -o7
find "$1" -name '*.jpg' -exec /home/joop/bin/jpegrescan -s {} {} \;
find "$1" -name '*.jpg' -exec jpegtran -copy none -optimize -progressive -outfile {} {} \;
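We run the script from cron; the entry looks something like this (script name and image path are placeholders for our real ones):

# run the image optimizer every night at 03:00
0 3 * * * /home/joop/bin/optimize-images.sh /var/www/shop/media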

I let that run over the weekend and the results were impressive:

folder   size      reduction
before   5.17 GB   -
after    4.64 GB   10%

A 10% difference with just a few lines of code! I will now look at most online properties with this script in mind. Use this for optimizing your web property's image folders!

WebP and zopfli

After some digging into compression techniques, we stumbled on the 'new' image format called WebP.

WebP is a new image format developed by Google that provides lossless and lossy compression for images on the web. WebP lossless images are 26% smaller in size compared to PNGs. In approach, WebP images are a hybrid of PNG and JPG: the best of both worlds.
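Encoding is done with Google's cwebp tool; a quick sketch (file names are just examples), where -q sets the lossy quality:

# convert a JPG/PNG source to WebP at 80% quality
cwebp -q 80 gyeongbokgung.jpg -o gyeongbokgung.webp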

I ran a test:

[Photo: Picture I took at 경복궁. Left: original (6.2 MB). Right: WebP (0.5 MB)]

Scenario   Image size   % of original
Original   6.2 MB       100%
JPG        1.3 MB       21%
WebP       0.5 MB       8%

Google wasn't lying with their 26%; in my test I was able to reduce the image to 21% (due to a lossy setup at 80% quality). I would like to set this up for more websites, but unfortunately WebP isn't supported by many browsers yet. In fact, only Chrome was able to show me my image on my computer! So browser support is definitely a problem; however, a fallback method to JPG could be built to support all browsers.

Another discovery in compression: the WebP compression technique led Google developers to a side project called zopfli. It's supposed to compress files further than gzip or 7-Zip; interestingly enough, the compression is supported by all browsers that support deflate, including IE6.

I wonder if zopfli could be used to compress JPG and PNG files so these marvelous compression rates could be achieved on browsers other than Google Chrome. It would probably not be as efficient as WebP, but at least we could guarantee browser support with minimal resources.
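For text assets, zopfli is usable today, since its output is a plain gzip/deflate stream any browser can decode. A quick sketch with the zopfli command-line tool (file names are examples; --i15 sets the number of optimization iterations):

# writes styles.css.gz next to the input, ready to serve with Content-Encoding: gzip
zopfli --i15 styles.css
zopfli --i15 app.js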

Mod_pagespeed on Nginx

mod_pagespeed is an open source module that can optimize your site for speed. It was already available for Apache webservers, and now the module is also available for nginx. We had already deployed mod_pagespeed on some Apache2 production environments, so I installed the nginx version over the weekend to see how it compares.

Installed using the instructions here: github/pagespeed.
Note: I rebuilt nginx using the instructions and it was installed in a different directory, so I had to clean up and relocate some paths, including the init.d scripts.

Then I edited the nginx configuration:

vi /usr/local/nginx/sites-available/default 

With the following config:

server {
    listen 8080;
    server_name lab.joop.in;
    root /var/www/lab.joop.in;
    index index.html index.htm index.php;

    location ~ "\.pagespeed\.([a-z]\.)?[a-z]{2}\.[^.]{10}\.[^.]+" { }
    location ~ "^/ngx_pagespeed_static/" { }
    location ~ "^/ngx_pagespeed_beacon$" { }
}

And I tweaked the options a bit:

vi /usr/local/nginx/conf.d/ngx_pagespeed.conf

pagespeed on;
pagespeed ImageRecompressionQuality 80;

pagespeed EnableFilters combine_css,rewrite_css,sprite_images,combine_javascript,rewrite_images,inline_images,recompress_images,resize_images,collapse_whitespace,remove_comments,extend_cache,combine_heads,move_css_above_scripts,make_google_analytics_async,convert_png_to_jpeg,insert_image_dimensions,rewrite_javascript;

# needs to exist and be writable by nginx
pagespeed FileCachePath /var/ngx_pagespeed_cache;

Then restart nginx; when you open the page, you will notice an

X-Page-Speed: 1.5.27.3-3005

header in the HTTP response.
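A quick way to verify from the command line (hostname and port as configured above):

curl -s -D - -o /dev/null http://lab.joop.in:8080/ | grep X-Page-Speed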

With mod_pagespeed, I was able to inline images into the page content to reduce requests. Besides that, I was quickly able to combine all CSS files into one and defer the JS execution on the page.

Website 1: a snappy Magento website (homepage)

version          load time   first byte   start render   DOM elements   requests
no pagespeed     2.551s      0.111s       1.065s         2175           55
with pagespeed   2.363s      0.130s       n/a            2120           51

Website 2: a slow and bulky WordPress blog (homepage)

version          load time   first byte   start render   DOM elements   requests
no pagespeed     11.854s     0.107s       1.251s         10312          139
with pagespeed   11.277s     0.127s       1.190s         10313          134

The outcome was moderate; even though I used a lot of (experimental) filters, they seemed to reduce loading times only a little. Besides the moderate results, I'm not too excited about solving problems at the webserver level; problems are best fixed at their origin – like writing code that's fast to begin with. However, taking over chores like inline image generation and automatic sprite creation is useful. My biggest problem at the moment is that I can't seem to get Varnish to play nicely with it: it keeps caching the version of the site that isn't optimized by mod_pagespeed. We see this problem on our Apache servers as well. In general, this module will play out nicely as a quick speed injection for our smaller nginx projects which don't get the speed attention they deserve.

Note, from Davidbcalhoun.com:

- base64 encoding makes file sizes roughly 33% larger than their original binary representations, which means more data down the wire (this might be exceptionally painful on mobile networks)
- data URIs aren't supported on IE6 or IE7
- base64-encoded data may possibly take longer to process than binary data (anyone want to do a study on this?) (again, this might be exceptionally painful for mobile devices, which have more limited CPU and memory)
- side note: CSS background-images seem to actually be faster than img tags

Achieving 5GHz in Ubuntu With Airport Express and 802.11d

At home in The Netherlands, I still use an Apple Airport Express bought in South Korea. Last week, I was tinkering around in the Airport Utility settings and realized I could crank up the AE to 5 GHz; the Wi-Fi link speed went from 54 Mbit/s to 300 Mbit/s. I noticed the difference immediately. However, one day later I couldn't get back on my network, and I saw this message in my Snow Leopard console:

802.11d country code set to 'NL'.
Supported channels 1 2 3 4 5 6 7 8 9 10 11 12 13 36 40 44 48 52 56 60 64 100 104 108 112 116 120 124 128 132 136 140

Oh dear, the channel I was using (161) was blocked due to 802.11d regulation, which I wasn't able to override. I was back at the same old 54 Mbit/s, but it felt even slower than before. The sad part is, it worked for a little while, so I got a taste of what was taken from me. I was able to repeat the behaviour: set up with Airport Utility and get high speeds until reboot. It seemed like a software problem rather than a hardware one.

Starting with the 802.11d thing, I got tired of all the limitations within OSX and tried to find a solution outside the Apple operating system. Using rEFInd, I installed Ubuntu 13.04 Raring Ringtail on my MacBook Pro 9,1; installing is a process that has become easier over the years. Next up was installing Wi-Fi drivers and seeing if the 5 GHz network would be within my reach.

I'll save you some time: I found a few how-tos online recommending a generic Broadcom Wi-Fi Linux driver called b43-fwcutter and firmware-b43-installer. These aren't the optimal drivers, as they don't support 5 GHz, which is mentioned in the documentation. However, I'd recommend installing them anyway to build the same environment as I had. Here is how I did it.

I had to reboot. Wi-Fi was working and I was able to see the following channels:

sudo iwlist wlan0 channel
wlan0 14 channels in total; available frequencies :
    Channel 01 : 2.412 GHz
    Channel 02 : 2.417 GHz
    Channel 03 : 2.422 GHz
    Channel 04 : 2.427 GHz
    Channel 05 : 2.432 GHz
    Channel 06 : 2.437 GHz
    Channel 07 : 2.442 GHz
    Channel 08 : 2.447 GHz
    Channel 09 : 2.452 GHz
    Channel 10 : 2.457 GHz
    Channel 11 : 2.462 GHz
    Channel 12 : 2.467 GHz
    Channel 13 : 2.472 GHz
    Channel 14 : 2.484 GHz
    Current Frequency:2.412 GHz (Channel 1)

But like I said, we need the higher channels to access 5 GHz. So instead, I installed Broadcom's proprietary driver:

sudo apt-get install bcmwl-kernel-source

So after all that work, I was still looking at a limited channel range. Wicher pointed me in the right direction: there are files limiting us, so let's find them!

sudo find / -name regulatory.bin

Mine was in /lib/crda. First we copy the file to a safe place as a backup:

cd /lib/crda/
cp regulatory.bin ~/applications/db2bin/regulatory.old

Now we need to extract the data into an editable form:

regdbdump regulatory.old > regulatory.redb

Now we edit the file. I copied the Korean settings over the Dutch ones in an editor.

KOREA:

Band [MHz]            Max BW [MHz]   Flags   Max antenna gain [dBi]   Max EIRP [dBm (mW)]
2402.000 - 2482.000   20.000         -       N/A                      20.00 (100.00)
5170.000 - 5250.000   20.000         -       3.00                     20.00 (100.00)
5250.000 - 5330.000   20.000         DFS     3.00                     20.00 (100.00)
5490.000 - 5630.000   20.000         DFS     3.00                     30.00 (1000.00)
5735.000 - 5815.000   20.000         -       3.00                     30.00 (1000.00)

NETHERLANDS:

Band [MHz]              Max BW [MHz]   Flags             Max antenna gain [dBi]   Max EIRP [dBm (mW)]
2402.000 - 2482.000     40.000         -                 N/A                      20.00 (100.00)
5170.000 - 5250.000     40.000         NO-OUTDOOR        N/A                      20.00 (100.00)
5250.000 - 5330.000     40.000         NO-OUTDOOR, DFS   N/A                      20.00 (100.00)
5490.000 - 5710.000     40.000         DFS               N/A                      27.00 (501.19)
57240.000 - 65880.000   2160.000       NO-OUTDOOR        N/A                      40.00 (10000.00)

Then we compile the edited database back to binary; I used a Python script I pulled from github.com/zioproto/.

python ./db2bin.py  regulatory.output regulatory.redb

and now we overwrite:

sudo cp /home/<user>/applications/db2bin/regulatory.output /lib/crda/regulatory.bin

And after the reboot:

joop@joop:~$ sudo  iwlist eth1 channel
eth1      26 channels in total; available frequencies :
    Channel 01 : 2.412 GHz
    Channel 02 : 2.417 GHz
    Channel 03 : 2.422 GHz
    Channel 04 : 2.427 GHz
    Channel 05 : 2.432 GHz
    Channel 06 : 2.437 GHz
    Channel 07 : 2.442 GHz
    Channel 08 : 2.447 GHz
    Channel 09 : 2.452 GHz
    Channel 10 : 2.457 GHz
    Channel 11 : 2.462 GHz
    Channel 12 : 2.467 GHz
    Channel 13 : 2.472 GHz
    Channel 14 : 2.484 GHz
    Channel 36 : 5.18 GHz
    Channel 38 : 5.19 GHz
    Channel 40 : 5.2 GHz
    Channel 42 : 5.21 GHz
    Channel 44 : 5.22 GHz
    Channel 46 : 5.23 GHz
    Channel 48 : 5.24 GHz
    Channel 149 : 5.745 GHz
    Channel 153 : 5.765 GHz
    Channel 157 : 5.785 GHz
    Channel 161 : 5.805 GHz
    Channel 165 : 5.825 GHz
    Current Frequency:5.745 GHz (Channel 149)

As it turns out, I am able to access the faster Wi-Fi channels at my home. This brought me enormous joy, a sense of liberty from the software regulations. But it took my entire Friday evening to achieve. So now I'm in a pickle: high-speed Wi-Fi Linux, or back to OSX? Let's see…

Moving Away From Google, a Top 15.

Exactly two months ago, Google announced that it is ending Google Reader on July first. Google Reader is a service which aggregates content from various websites served by web feeds. For me, it's my news feed to stay up to date with the people back in Asia and with the IT industry. Could they be closing Reader because the free service is still driving more traffic than Google+?

Anyway, since that sudden decision, people seem to be taking stock of the company and have become more reserved about trusting its services, like Jeff Hunsberger:

When Google announced that they were shuttering Reader it made me take stock of how I felt about the company and how I interacted with them. I looked around and saw how heavily invested I had become. Google’s interests and mine were diverging. When they were innovating they always seemed to be pushing the boundaries of what could be done on the web and focused on making it better.

But somewhere during the rise of Facebook, things began to change. Google's focus was on ad revenue and how to monetize these great base technologies they had helped create and foster. Their focus shifted subtly at first and I was forced to ask the question more and more: am I willing to give up access to my personal information for this product? Is it really that good? In most cases, the answer was "yes".

Long story short, Jeff has been moving away from Google. I read this post a month after he wrote it, but I have been trying exactly the same, dropping Google services here and there. My thought on the matter is: you get exactly what you pay for. In the end, Google is a company that's in business to make a profit. So then I started to wonder: oh no! What if Google quits this or that service? So, without further ado, I proudly present: a top 15 list of Google services in the priority I need them. Google: please don't close anything in my top 5 anytime soon, OK? OK TNX Bye.

My grand ranking of Google services:

1. Search:

I tried Yahoo for a week. Didn't even try Bing. Seriously, Google is a mind reader that knows what I'm trying to find. Nothing to change there. However, for some specific searches I started moving to DuckDuckGo, WolframAlpha and NerdQuery.

Absolutely required for my life.

2. Maps:

I tried Apple, Bing, Yahoo and Naver maps but they all fail to get me on my way. For now, Google remains.

Keepers weepers! I need to get home sometimes…

3. Google Analytics:

From a professional standpoint, I can't practice my job without Google Analytics and Webmaster Tools. However, it was fun giving Piwik a go on this private blog for a week. It did show a lot more than GA, like IP addresses, but I abandoned this trial because I saw that pages loaded 20% slower compared to the GA embed code. Instead of Piwik, I'm thinking of Logstash + Redis + Elasticsearch + Kibana 3 for a future project. For now, GA remains.

Keeper! Until the world moves on…

4. YouTube

In our house, we watch a lot of video from Bloomberg, Tudou and Dailymotion. But for silly cat movies, there is no place like the youtubes. A cool thing about YouTube is its HTML5 player (no Flash); one annoying thing is that YouTube has been repeatedly suggesting/forcing a 'real username' on all its users recently. But most content is here… I guess I won't be blocking YouTube any time soon.

Tough one. I'd say close it and see what the rest will do, talking about that level playing field, but I can't do without cat videos. Keep it!

5. Scholar & Public data

Google Scholar is underappreciated. It's free and it's informative. However, my university still grants me access to a range of libraries, so I'm not dependent on Scholar for research anymore. Still, I'd like to keep making use of its vast contents and old library books. Also, Public Data presents OECD working-hour information better than the OECD itself does.

No, please keep Scholar & Public data alive.

6. Adwords and Adsense:

Have you tried a CPC campaign on Facebook? For now, Google is the standard. Also for banner income.

Keeper!

7. Android

A tough one… We have two Android phones in our house and they are old and painfully slow. My current device is an iPhone. Perhaps Ubuntu is an option by the time I want to replace it? Seriously, for now I'm burying my head in the sand and want to say get rid of it. But for the sake of a balanced world I'd say:

Ok fine, keep it… there needs to be more than iOS out there… For the sake of choice.

8. Google Docs

I have a dozen shared documents on Google Docs but use the service once a month at most. I noticed more people are using other cloud services, and personally I have been using Naver nDrive.

Gone! No tears would be shed here.

9. Google+ and Hangouts

I quit Google+ on 13 March 2013 and it felt good. I wasn't waiting for another social network. I'm still active on Twitter and Facebook, but less than before.

Google employees swear by Google+ Hangouts, which I shut down as well. I'm still using Skype, since everyone is still there at the moment.

Gone! I wouldn’t even realize if they closed it tomorrow.

10. Chrome:

On the desktop, I've always been a Chrome evangelist, converting many IE, FF and OP users to Google's browser. So I wondered how it was on the 'other' side. At first I gave the Maxthon browser a try, then Sleipnir. After a week I had enough and moved to Firefox, and I love it. It syncs, has addons and is fast. However, I am still running Chromium on my laptop with a logged-in Google account for work-related matters; on the other hand, I have been logged out of Google on my Firefox for a few weeks now. More on that later. On my phone I am a Mercury user.

Gone! I do believe in choice, but WebKit, yay! And forking WebKit wasn't a nice thing to do, Google.

11. Calendar & Contacts:

At the office we use Microsoft Exchange; at home I share a calendar with my wife on Google. Last month, I set up an ownCloud server and pointed iCal at it instead. ownCloud supports CalDAV and CardDAV, syncing all my devices with each other.

Hah, don’t need those anymore.

12. Gmail:

I started using Gmail in 2005 and have moved all my family members there as well. Now, eight years later, I have noticed that other mail services have evolved too. There are many out there, from free webmail to foreign services; I'm using a Korean one called Naver. The point is, I left my @gmail account completely for a month and had no problems sending private mail from my own @joop.in domain. Google's spam filtering is better, though. For the rest, a painless switch.

Gone! I could live without Gmail. So can you. Believe in yourself!

13. Translate

I do a bunch of translation from and to Korean every day. Bing and Naver are far superior to Google Translate.

In the trash! I could live without Translate.

14. News:

I use News to search for real-time events. There are other services for this. So I guess I'd rather have seen News go than my beloved Reader.

Yes, I could live without Google News.

15. FeedBurner:

Google bought FeedBurner in 2007, but sadly the product has hardly been developed since the transition. I was using FeedBurner for insights into RSS and email subscribers; I left it for MailChimp back in 2012 and haven't looked back since. I have a feeling FeedBurner might not live too long without Reader anyway, so I'd recommend moving as well!

Yes, I could live without FeedBurner. In fact, close it right away and see if I care. Ta-ta…

The verdict

My colleagues, seeing me use Yahoo, jokingly said that Google wouldn't notice my abandonment of its services. But that wasn't the point of this odd hobby I picked up over the last two months.

I wanted to know how dependent I was, and tinkering away from big Google seemed like a fun way of finding out. I have friends and colleagues who work at/with Google all the time, and it was fun teasing them as well. A month in, I know what I definitely need (Search, Analytics, Maps and YouTube) but wouldn't be sad to see other things go. In the end, I'm keeping Search and might perhaps go back to the rest. Alternatives are always a good thing.

The whole reason I started writing this silly blog post was Google closing down Reader. To this day I haven't decided on a replacement service. Luckily, I still have until July to make that choice. For now, all feeds are still maintained by Google.