Didi Dache



China is ahead of the curve in mobile payments and apps. Not wanting to stay behind, I installed Alipay on my phone and entered a world of convenience. One of the apps a lot of people use is Didi Dache, the Uber of China.

I haven’t been a big fan of the Uber concept, but I wanted to try mobile payments. While preparing to go to the airport with some time to spare, I thought I would give it a try. I entered the airport’s address, hit a few buttons, and before I knew it a driver was waiting for us downstairs. I had to hurry down! Had I known it was that easy, I would have opened the app only after stepping outside. I clearly didn’t know how the concept worked, as I was surprised when the driver picked up a second guest. Well, carpooling for the win! The ride was about 50% cheaper than a normal taxi, and it felt adventurous and new to talk to strangers on a long drive like this.

The taxi driver – sporting a shirt with “Bentley” written all over it – mentioned that he also drives supercars from time to time. Apparently, one can book a lift in the exclusive sports cars that drive around here. When we arrived, the driver said goodbye and drove off; I only had to confirm the trip and the payment on my phone with a few taps. That was almost too easy.


At work: launched DuckWorld



Really proud! After an adventurous development phase in cooperation with The Walt Disney Company, the Kids & Teens editorial and marketing teams have launched DuckWorld.

It was really fun (and special) to be able to cooperate directly with the editors of the Dutch Donald Duck magazine on these titles. Even more fun to see the game featured on TV and in the magazine, and to watch the traffic grow.

TV advertisement


Unity editor: from image mapping to an object in a 3D world



While I saw casual game development up close at Spilgames and 3D modelling at unitedstyles, I had never experienced the creation of an actual 3D world. What I learned is that it takes a lot of expertise from a broad range of (hard-working) experts to create such a game.

There is a game director in charge of the story, there are artwork creators and animators, and finally there are developers who build the game logic and movement and take care of the networking and database work.



Product demo (scrum) with the dev team

We ran the process using Scrum, iterating on the world until it was complete enough for release, and then moved on to performance testing and optimizing the systems. I’m glad we worked with Scrum, as the basic idea behind DuckWorld evolved along the way.

We developed the game in Unity3D, chosen for its portability to other devices and operating systems. Right now the game is available on desktop only.


Marketing has started, coverage in news, tv spots etc


Now that we have launched, the game is appearing everywhere. It’s great to see it on TV, read about it in the newspapers and see the activity on Twitter. Looking forward to expanding the world soon! Readers of this blog get a 10% discount during the month of September by using the code joopin at checkout on www.duckworld.com

YouTube review by ‘Lord Hudson’:

Creating psychedelic art with deep dream


A few days ago, Google released a Python notebook to let you play with their recent work on neural network visualization, turning buildings into acid trips and landscapes into Magic Eye pictures. They published the code publicly for everyone to play with.

After finding a Docker image, I was able to play with the notebook. Holy cow! I fed it a picture of me with my daughter, and within minutes it had transformed us into someone’s nightmare; eyes popped up everywhere and I started seeing people behind us. So I moved on to objects and buildings; here is a picture of my office (clickable).



On the left you see the original picture. Next to it is the image after seven inception passes through the neural network, which creates a lot of artifacts; just look at what is hanging on the roof. On the right is a more artistic approach, where a zoom is applied to the original image.

After that I played with pictures of clouds and natural scenes; I found that the engine really goes wild with trees. Check out this picture I took in Korea:


OK, that’s enough trippy pictures for one evening. If you are interested in how it works, read this Google blog post.




At Sanoma Kids & Teens: a Scrum update



I made a move within Sanoma from Comparison to the ‘Kids & Teens’ cluster at the start of the year, and I have been surrounded by Disney characters ever since! The cluster has a solid foundation in traditional publishing dating back to 1951 (the year the first Donald Duck magazine was published) and is currently expanding into digital business models.

We are finishing a title called ‘duckworld.com’ (more on this in a month or so) and a touch-typing course at ducktypen.nl (Polish version coming soon!), and we maintain various content and e-commerce sites like fashionista.nl, donaldduck.nl and duckstadshop.nl.

We set up a development team working with Scrum at the start of the year. Three months in, to raise awareness throughout the cluster, we organized a day-long Scrum training with Zilverline yesterday. We started with a basic explanation of Agile and Scrum, and Zilverline introduced a number of fun activities to get the message across. According to at least a few participants, the day was a success. I can highly recommend Zilverline!

Not only did we explain Scrum in practice, we also found new energy to improve our working processes further. I heard that the New York Times already creates its newspaper with Scrum; I’m curious what we can do for the cluster in the time to come!

Planning and scaling with scrum



After working with Scrum for some time, I finally became a certified ScrumMaster (SM) at the Zilverline course with Marco Mulder and Bas van der Hoek. Additionally, I met Jeff Sutherland, one of the inventors of the Scrum software development process, at our company yesterday!

Scrum is like a homing missile

The old-school and dreadful IT project style: the customer wants a new website and writes an extensive project plan beforehand. Better make it extra detailed to ensure the project goes right! The customer assumes the project is clear and believes the developers should be able to give a detailed and trustworthy scoping. Planning commences and the project starts. In no time, unforeseen events occur and the promises made are broken. Both parties gradually become more hostile towards each other, IT has to work overtime to finish, and the customer isn’t happy with the final result. Sound familiar?

The waterfall method described above is like a cannonball. The customer sets a target in the distance and the developers have one shot at hitting it. The shot is taken in vain: the target was moving, there was unforeseen wind, and the outcome only became clear when it was too late.

An alternative is the Scrum approach: an empirical process in which the customer stays closely aligned with the developers, and releasing software often means that requirements are discovered along the way. Customer happy, developers happy.

While it has its roots in software development, Scrum is also applied at banks, in healthcare and in government around the world; in the Netherlands there is even a high school applying Scrum in class, where students show increased collaboration!

Epics and roadmaps

I’ve experienced firsthand that trying to fit Scrum into a traditional waterfall/Gantt planning can create a lot of friction in an organization. It starts with the acknowledgement that Scrum is nice and all, but targets are set for an entire year, and the ‘business side’ (as opposed to the IT side) wants some commitments over the year.

This doesn’t mean that one should make a roadmap; we’ve learned that roadmaps are too messy a prediction. A roadmap feels a bit like a large project, and the chances of completing it are small.

Source: Standish group


But… when will the project be ready?

You haven’t even started the project, but usually the costs need to be made clear in advance and marketing activities planned. Expectations have to be set. “When will you finish?” is a pesky question without a roadmap, so this is where one needs to be clear. Scrum allows you to produce a realistic planning after some time of development. By reducing the size of projects and always focusing on the single most important thing, you deliver something quickly, and more can be said about the time needed for the rest. Tasks go on a backlog, with the most important items detailed at the top and abstract items for the future at the bottom.

Cone of Uncertainty


Finishing sprint after sprint, you will be able to estimate with more confidence, as you know how many story points you burn per sprint. As opposed to a long waterfall development, Scrum gives quicker insight into completion with a so-called ‘release burndown’, showing how many story points are left. Additional tasks are added underneath the chart.
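The estimate behind such a burndown is simple arithmetic. A minimal sketch with made-up numbers:

```shell
#!/bin/sh
# Made-up numbers: story points left on the release backlog and the
# team's average velocity (points finished per sprint).
remaining=120
velocity=30

# Round up: a partially filled final sprint still costs a full sprint.
sprints_left=$(( (remaining + velocity - 1) / velocity ))
echo "Estimated sprints to completion: $sprints_left"
```

With a fixed sprint length this translates directly into a calendar estimate; rerun it after every sprint, since velocity only stabilizes over time.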


Going wide, not deep

Instead of an ‘island’ culture where one team passes work to the next, Scrum teams are multidisciplinary and have full access to, responsibility for, and knowledge of the architecture. Only then are they able to understand the why, how and what of the most important item.

Besides that, only one thing can be the most important at any given time. People are terrible at multitasking; for a long time people assumed that if you start more, you’ll finish more. It’s the other way around: the more you finish, the more you’ll finish!

During the training we made a story map together: an overview of tasks and time. Our assignment was to prioritize features. In the image below, we were building the “bare necessity” row.

At first we added all the basic functions. The coach then challenged us, and soon we realized that the bare minimum could be stripped down a lot more. This was an eye-opener, as I’ve fallen into this trap often in the past, thinking “I’ve started working on this anyway, might as well extend this part a little as well.” No! Work wide, not deep. Only then can you deliver fast. What is the actual minimal working skeleton? We also sheepishly added cards to every column; another team even skipped entire columns for their first release. Why not?

Scaling scrum

Steady teams and performance

I joined the ScrumMaster training with some experience under my belt and a lot of questions about scaling and performance. Personally, I had to prove increased output of the team this year, and I was tempted to start micromanaging and expanding the team’s time logging. (Micro)managing isn’t going to get the best out of a team. Jeff Sutherland stressed that simply making a team’s work transparent to outsiders is enough to keep the team motivated and self-steering. When all seems lost, decrease the number of tickets in a sprint and focus on applying the practices correctly. Increasing performance means continually addressing the next biggest impediment in the team: dysfunctional team members, dependencies on other teams, and so on. What Scrum does is make the process quantifiable, so you can break dreadful processes instead of trying to satisfy them ever more rigorously. Also, don’t let the team get disturbed too often: “The door from the business to the IT team is closed, whilst the door from IT to the business is always open.”

A common mistake of management is to assume that adding members to the team will speed up the sprint. It’s usually the other way around; scaling should always be done slowly. While growing, together with the ScrumMaster (SM), a decision can be made to divide the Scrum team in two. Do not simply duplicate the disciplines with new hires in an all-new team; instead, split the existing team. Make sure the independent teams are still able to complete tickets autonomously. At first it’s still fine to do backlog refinements together; later, you can allocate a separate backlog and a separate PO when necessary.

Scrum of scrums

With different Scrum teams in place, a scrum-of-scrums meeting may be introduced, addressing coordination across teams in a similar fashion to the conventional standup. Given the nature of the content, it’s not uncommon to send the PO instead of the SM. As an alternative to the group standing in one room, walk past all the teams and their Scrum boards.


With a split team, the chief product owner can keep a team-per-lane overview with a so-called “epic board”, which helps to spot inefficiencies and to plan future sprints together. When there are multiple teams, it’s advisable to plan the demos back to back and together. Before the training, I read that Spotify implemented so-called component teams, in which teams are allocated to various GUI components: one for the music player, another for the login window, and so on. To facilitate this, the technology architecture needs to change. Component thinking can lead to duplicated work between teams, but that is fine; a certain isolated approach to similar problems should even be encouraged.

From the Q&A session:

Tips for increasing efficiency?

Let the team do a retrospective round on paper, so everybody can give their opinion.
During planning poker, a discussion may arise. Sometimes it helps to settle an argument with another round of planning poker instead of a guess from a few people.
‘Fist of five’ for commitment: count down, then everyone raises one to five fingers to show how strongly they support the commitment.

Is a ‘research’ ticket a valid approach?

No. A research ticket is not part of the deliverable work and should not be part of the sprint. A ticket should be completable within one running sprint.

What if you have dependence of a non scrum team?

When you know this in advance, the backlog item simply wasn’t ready; it should be marked accordingly, as an impediment, and this has to be made clear to the PO as quickly as possible. The item doesn’t belong at the top of the backlog, as it doesn’t have priority from the business. In this situation, the SM is responsible for resolving the impediment.

Do you count the points of an unfinished ticket?

When a ticket isn’t finished in a certain sprint, do not count points for that ticket in that sprint.

Does the UX’er work in a ‘sprint zero’?

Sprint 0 does not exist; it suggests there is a phase in which one doesn’t work, and that is not the case. The PO and the UX’er do work slightly ahead of the conventional Scrum flow: while the team is in the current sprint, they already work out the tickets for the coming sprints, in a sort of ‘secret’ sprint. A UX’er needs to find a balance between working within the team and working ahead on the UX foundations for templates before the sprint starts. If the UX’er does everything ahead of the sprint, the team misses out on optimal collaboration.

Should I create a second sprint to prioritize non-business things like IT?

No: instead, get business priority for IT projects. Remember to sell the problem, not the solution. Make sure the backlog item (BLI) isn’t called ‘Varnish v4 upgrade’ but “Get our website to load within 200ms and increase conversion.” Be clear about the results.

Tickets & backlog:

A nice way to avoid assumptions on the backlog is to create (abstract) user stories, and to keep asking why. No two items on the backlog should ever be equally important.

Scrum organization

I found that some organizations do Scrum throughout. At ING bank, they do standups at every level of the organization, from the bottom up. When there is an impediment at the bottom, it can be answered within two hours once top management has its standup.

Continuous deployment model


We’re moving to continuous deployment. What is it, how does it impact the organization, and how do we get there?

In early 2010, our DIY fashion company had a ‘heartbeat’: every Wednesday, just after lunch, we would release a new version of the website. Although scripted, this climax of a week’s work was always a stressful time for the developers. We had to get the entire staff to agree on the release window and run numerous tests before and afterwards. Due to the hassle of putting things live, some companies release even less often, monthly or more rarely. Why such a fuss over a release?

Continuous deployment (CD) minimizes the time between writing new code and putting it in front of live users, in production. This is done by automating each step up to deployment, avoiding human intervention where possible, which leads to less stress. Good :-). In recent years we’ve gone from the ‘waterfall’ method, with infrequent releases, to a bi-weekly release. The next logical step is to improve the site continuously, without the ‘overhead’ of releasing software. This is not only a mentality change for the IT department; it changes the entire organization, as ideas can be implemented swiftly (Scrum-style) and less ‘project planning’ is involved.

CD leads to a number of advantages:

  1. Quality improves, thanks to automated testing of code combined with reviews.
  2. New ideas are realized quicker, as you can deploy an addition the same day!

To achieve this, we had Xebia perform a scan that identifies the various ‘levels’ of deployment automation. I think it’s an insightful overview, so I wanted to share it here:

Level 5 (Complete)
- DevOps: operations and development are part of the same multidisciplinary delivery team and share responsibilities.
- Monitoring: monitoring of business-level quality metrics; predictive failure monitoring; monitoring data is used actively to improve the system.
- Testing: 100% fully automated tests all the way to production.
- Provisioning: self-service portal for requesting environments; new environments are created with each new release; network automatically configured.
- Deploying: continuous end-to-end deployments.
- Building: end-to-end automated gated builds.

Level 4 (Advanced)
- DevOps: an envoy of operations works along in the project; an envoy of development works along with operations.
- Monitoring: application health and build/deploy dashboards available to teams, providing continuous insight into quality, health and performance metrics.
- Testing: automated dynamic quality tests (security scans, functional and performance tests) guarantee the quality of the code.
- Provisioning: environments are created and torn down at the push of a button; supporting systems automatically configured.
- Deploying: test-gated deployments of end-to-end applications; deployments occur over multiple environments.
- Building: central build environment; teams actively reuse generic components in a secure and controlled manner.

Level 3
- DevOps: development and operations work together when required.
- Monitoring: monitoring of software quality and application performance; reports accessible through a dashboard.
- Testing: automated static code and security analysis after code check-in.
- Provisioning: environments are identical; the operating system is virtualized; several tools are used to provision and configure an environment.
- Deploying: environments are identical; roll-out of applications at the push of a button; auto-deployment to D, T, A and P.
- Building: build on commit; archived components are made available for reuse by other teams.

Level 2
- DevOps: code is accompanied by release notes with which operations should install and manage the application.
- Monitoring: monitoring of application log files for errors; reports generated on demand.
- Testing: automated tests are initiated as soon as code is checked in; tests focus on unit/component testing only.
- Provisioning: scripted installations per component for each environment; supporting systems manually configured.
- Deploying: self-service deployments to development and test.
- Building: automated builds are performed in a central area and activated manually.

Level 1
- DevOps: operations engaged at the end of the project.
- Monitoring: monitoring of system metrics (CPU, disk, memory, process); reports accessible to operations.
- Testing: all tests require manual activity; some tests are automated but have to be initiated by hand.
- Provisioning: manual installation and configuration of network, OS and software for middleware, databases, application servers, etc.
- Deploying: deployment through execution of separate deployment and database scripts; manual configurations and installs per environment.
- Building: builds are performed on a local workstation using one or more separate build scripts.



Our team scored level 3 in the scan some time ago. We’re working towards level 4 and, later, level 5. For one, we puppetized our servers this year, allowing central management of their configuration and easy deployment of new machines.

At an early stage of the project, we introduced a build server (Jenkins) and put a monitor on a large TV screen (photo) on the work floor. This had immediate effect: every time a developer commits a piece of code to the ‘default’ (shippable) branch in our code repository, hundreds of pre-written unit tests (on the code) and regression tests (on the frontend UI) are fired. This saved our quality engineer a lot of time. It also made the process more visual; the team could see who broke the build. Next, we scripted the deployment up to production. Here, human intervention is still required, but this is something we can let go of as we trust the system more and more.

Write code > commit > pull request & review (manual) > build > package > staging > production > post-deploy test

The final result will be a flow where a programmer works on a new feature or bug fix in their own environment (a so-called branch). Once the work is complete, the code is pushed towards the ‘default’ branch, which initiates a review moment where another developer has to approve the change. The code then lands on the default branch, which should at all times be ready to go live. At this point our build server performs numerous unit and regression tests, after which the code is deployed to production. On production, another test is run to ensure quality.
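As an illustration only (every command here is a placeholder, not our actual tooling), the fail-fast character of such a pipeline can be sketched in a few lines of shell:

```shell
#!/bin/sh
# Pipeline sketch: each stage is a placeholder; 'set -e' aborts the
# whole run as soon as any stage fails, so broken code never travels
# further down the chain towards production.
set -e

run_stage() {
  # A real pipeline would invoke the build/test/deploy tool here.
  echo "stage ok: $1"
}

run_stage "unit and regression tests"
run_stage "package"
run_stage "deploy to staging"
run_stage "deploy to production"
run_stage "post-deploy smoke test"
```

The point of the sketch is the ordering: no stage runs unless every stage before it succeeded, which is exactly what makes the default branch trustworthy.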

We still have some steps to go, but we already reap the advantages of this system today.

Create your own font!


Screen shot 2014-10-23 at 6.50.59 PM

The Internet is beautiful! Sometimes you find a gem that’s just worth sharing.

At myscriptfont.com, everyone can transform their handwriting into a computer font. The process is simple:

  1. Print out the template
  2. Fill it out
  3. Scan it
  4. Upload to myscriptfont.com

The result is a TTF, SVG or OTF font, which you can install on your computer or even use online.

I made one together with Suna, and the image above shows her reaction. I’m still thinking about the practicality of this, but it’s interesting nonetheless!

[Mac] Export all your iPhoto events to folders


I’ve been using a Mac for over a decade and thus moved my photos into iPhoto almost automatically. But I never really liked that program: it’s slow. Last week I decided I wanted out, and that was easier than I thought:

Download phoshare here.

And just run the program. You can select the folder to export to. According to the developer, Phoshare preserves both the original and the modified image, and I was able to export using a *year*-*month*-*day*-*title* format, which I really like! I counted the photos inside the library and inside the folder, just for verification. I’m now iPhoto-free!

Screen Shot 2013-11-03 at 23.46.39


Two podcast recommendations: China History & Dan Carlin


Two US-made history podcasts in this recommendation. I wish I had had teachers like Dan Carlin and Laszlo Montgomery in my history classes as a teenager, but listening to them while commuting makes up for a lot.

China History Podcast

From time to time I stumble upon a podcast that I get really excited about. This is one of them; I discovered it when Kaiser Kuo mentioned Laszlo on the Sinica podcast. Kaiser explained that Laszlo learned Chinese at a young age and has lived and worked in various places around Asia over the years. The China History Podcast moves through over 4,000 years of history in the Middle Kingdom, from the invention of gunpowder to the Opium Wars and Li Ka-shing. So start at episode 1! I wish I had listened to this before reading the Three Kingdoms.


Dan Carlin’s Hardcore history & common sense

Dan Carlin produces two podcasts: one about history called ‘Hardcore History’ and one about current affairs called ‘Common Sense’. They were recommended by a friend, and I didn’t like them at first; the first episode I listened to was about Catholicism in Germany and was a much longer listen than I was used to. However, the topics are so diverse, and Dan’s tinfoil-hat way of looking at the world has given me new insights into our present world. Give them a try!


From SVN to Gitlab on RHEL6


Edit: this how-to isn’t finished. I was able to import the SVN repository but didn’t get it to appear in GitLab.

There is a certain SVN project that I moved to GitLab over the weekend. It started with installing the RHEL6 environment, setting up GitLab, and then moving all the revisions over with svn2git.



Since it was the first time I was installing GitLab, I looked online for a how-to guide. I stumbled upon the automatic script by mattias-ohlsson, which didn’t work so well for me: it kept returning error messages about an outdated Ruby version, which my sudo user indeed had. I didn’t want to wrap my head around that misery and instead used these directions by Torey Maerz. I didn’t run the script automatically but triggered each command manually. Some notes:

At one point, while compiling the bundles for gitlabhq, I got into trouble because I had no pg_config; I installed postgresql-devel.x86_64 instead, after which the package compiled correctly.

Then Passenger came with these directives:

LoadModule passenger_module /usr/local/rvm/gems/ruby-1.9.3-p448/gems/passenger-4.0.10/buildout/apache2/mod_passenger.so
PassengerRoot /usr/local/rvm/gems/ruby-1.9.3-p448/gems/passenger-4.0.10
PassengerDefaultRuby /usr/local/rvm/wrappers/ruby-1.9.3-p448/ruby

So I put these in my httpd configuration, together with the directive to turn SELinux off. Yes, naughty:
setenforce 0

and then set up my virtual host as follows. Be sure to use a real domain name, as I had trouble getting GitLab to work on a bare IP.

<VirtualHost *:4000>
  ServerName <domainname.tld>
  # !!! Be sure to point DocumentRoot to 'public'!
  DocumentRoot /var/www/gitlabhq/public
  <Directory /var/www/gitlabhq/public>
     # This relaxes Apache security settings.
     AllowOverride all
     # MultiViews must be turned off.
     Options -MultiViews
  </Directory>
</VirtualHost>

Now adjust the default username and password; be sure to do so, as that ‘5iveL!fe’ password is floating all over the internet.


Log in to the RHEL6 machine and install svn2git:
yum install ruby rubygems
gem install svn2git

Next is the svn2git import. The following command rambled on for about six hours for a 10 GB repository, so I’d recommend executing svn2git prefixed with ‘time’. I also had to ‘nohup’ the command to execute it safely over my wifi connection.

time svn2git http://<domain><reponame>  -v --username joop

If you run into the error “the variable $u was not defined”, don’t worry; I solved mine with this fix. It seems like a harmless patch.

Then you can import your project into GitLab:
1. Copy the bare repositories to /home/git/repositories
2. Run bundle exec rake gitlab:import:repos RAILS_ENV=production