Friday, 17 October 2014

London Software Craftsmanship Community: Pair Programming

Yesterday I went to the 7th talk by the London Software Craftsmanship Community.
Really glad I could go and meet these great people:
http://www.meetup.com/london-software-craftsmanship/events/212202712/

The event consisted of two half-hour talks and one ten-minute live kata. Everything was recorded and will be published soon.

A new Model of Testing, by Paul Gerrard

I remember talking with some friends about the following:
We are programmers, and sometimes we don't know what kind of magic or charms testers perform. All we see is a smile coming over to our desk: "it doesn't work, I found a bug".
Testers, designers, developers: we live in the same ecosystem to create a common thing, so we should know each other!

It was great to attend this talk. Paul is a tester who used to be a programmer (he still programs, in fact).
According to Paul, the old way of testing won't work any more. He is writing a book about it; you can download the paper and give Paul feedback:
http://dev.sp.qa/download/newModel/NMIntro

Basically, the old way of testing won't work because our systems are changing: the Internet of Things is here, we have millions of devices and it will get humongous in a few years. We'll have internet everywhere, and testers won't be able to test the same way.

Key takeaways:

-Forget about testing logistics (using SQL or Oracle, Chrome or IE, agile or waterfall, ...): testing thinking should not rely on it.
-All testing is exploratory: we identify sources of knowledge to build test models, and those models inform our testing.
-All testing is based on models. Humans use models everywhere: maps, class diagrams, UML, mathematics, ... Basically all models are wrong (otherwise they would be reality), but they are useful!
-When we are testing we should actively think that we are using models.
-When a test fails, the first thing testers think is: has my testing model failed? That's when you ask the programmer "is this OK?" If not, then you report a bug: the testing model was good, and the programmer produced a model that doesn't fit it.
-We explore (create a model) and then test (using the model). Devs, we do the same!!
-Testers and programmers have the same capabilities.
-Goal: end manual testing and let automation do it. Human testers will produce models (programming) to feed automation frameworks.
-So testers do need to learn how to code.
-Testers don't own testing anymore.

Use simple constraints when creating an algorithm, by Sandro Mancuso

It was really impressive to see Sandro in action. He implemented an algorithm to calculate Roman numerals from a decimal input. He followed TDD systematically from the beginning; it has the kind of "wow" effect I always feel when somebody is doing TDD. He also used Sublime Text together with a plugin to run Jasmine tests (impressive to watch).

I'll try to reproduce the exercise he did, but this is the key takeaway:
when implementing an algorithm, enforce simple constraints from the beginning. Try to use simple constraints along the way (like an if), and only reach for more complex constructs if needed (like exception handling).
If you use something complex when something simple is feasible, the algorithm gets more complex than it needs to be.

When I saw that I remembered that, according to TDD, you write the minimum code that makes a test pass. Sandro's workshop fits perfectly with that: writing the minimum code = writing the simple constraints that make a test pass.
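
To give a flavour of the idea, here is a minimal sketch in C# (Sandro did the kata in JavaScript with Jasmine; the class and member names here are just illustrative). The whole algorithm grows out of simple constraints: a lookup table, a loop and a subtraction, with no need for anything more complex.

```csharp
using System.Text;

// Illustrative sketch: Arabic-to-Roman conversion grown test-first,
// using only simple constraints (a lookup table and subtraction).
public static class RomanNumerals
{
    private static readonly int[] Values =
        { 1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1 };
    private static readonly string[] Symbols =
        { "M", "CM", "D", "CD", "C", "XC", "L", "XL", "X", "IX", "V", "IV", "I" };

    public static string Convert(int number)
    {
        var result = new StringBuilder();
        for (var i = 0; i < Values.Length; i++)
        {
            // Simple constraint: while the value still fits, append and subtract.
            while (number >= Values[i])
            {
                result.Append(Symbols[i]);
                number -= Values[i];
            }
        }
        return result.ToString();
    }
}
```

Each new test (1, 2, 4, 9, ...) only forces another entry into the table, never a fancier construct.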

Be the best pair you can be! by David Morgantini

David is a software developer with a unique experience: he is married to a girl who did a PhD thesis about pair programming. When she was writing it, he had to read it, sometimes :)
He took the practical essence of it.

I wrote some time ago about the benefit of pair programming:
software-creation-using-collaboration II

When David explained it, I thought, "OK, this is common sense", but you know what happens with common sense: it is great when somebody reminds you of it.

Here are my notes and takeaways. The video will be available soon, so I'll let you know. It is worth watching.

Some definitions:

Driver, navigator
Disengagement
People: expert, novice. We can have different types of pairs (EE, EN, NN). If you were the novice in one session you can become the expert in another, because in that case you know more.

Patterns: 

One keyboard, two keyboards, a laptop, a mirrored laptop,
dual station (he prefers it: then everyone has their own space).

Benefits:

Researchers struggle to measure pair programming and its actual benefit. The issues are that people have different motivations and that the benefit is better seen in practice. But to summarize the benefits:

Economics: the number of defects is lower.
Quality: two heads looking at the same code.
Dev satisfaction: you write better code, and that makes you happier when you arrive home.
Learning / knowledge sharing.
Communication, team building.

Disengagement, the big problem!

If the pair is disengaged, it loses all of the benefits of pair programming!
We see it when...

1. The pair gets distracted. Solution: mutually agreed disengagement ("let's take a break").
Expert-novice: it is very easy for the novice to become disengaged. Solution: the expert checks attention frequently.

It is best if the pair knows the goal of the session (quality, learning, both are novices, ...). Then we can better achieve that goal.

2. Some work doesn't fit pair programming. We can try to make it work by:
-Splitting a user story into tiny tasks. Some of the tiny tasks can fit pair programming.
-Breaking the pair if it doesn't work.
-Identifying the tasks that need pairing.
-Expert: ensure the novice is driving!!

3. Uncomfortable dev environment. Solutions:
-swap pairs,
-try to make your pair feel at home (the same IDE they use, a proper monitor, no laptop for pair programming!! ...)

4. Interruptions: a team leader comes over and tells you important stuff... Rules:
-A pair shouldn't be disrupted.
-If the pair gets interrupted: "Please, wait a minute..." and complete the ongoing discussion.
-If the pair suffers a longer interruption, plan for it! Make sure one of the pair can follow up and carry on alone.
-Re-establish the pair after the interruption in order to go on: "where did we leave off?"

5. Time pressure:
-Plan novice pairing in the planning meeting.
-Expert: verbalize progress and ask for feedback.

6. Social pressure:
Novice: "I don't want to look stupid."
-Expert: leave some time for the novice to consider solutions alone, before pairing.
-Novice: stop if you don't know what's going on, ask questions!
-Expert: encourage the novice to drive!
Establish context before you start: the expert should explain the problem before starting!!

That's it for today. I'll let you know when the video is ready. Thanks for reading and have a great weekend!

Maybe find some time this weekend to...?

Keep coding!!


Wednesday, 15 October 2014

The Good, the Bad and the Ugly of the HTTP Archive: Performance of web sites using HTTP Archive

Hi again!

Yesterday I went to a workshop run by the London Web Performance Group, The Good, the Bad and the Ugly of the HTTP Archive.
I was really (pleasantly) surprised by the content of the talk; in addition, pizza and beer were provided at the end. Furthermore, a position to work at Google was announced there, and they even held a raffle with free tickets for the next Velocity conference in the USA.

http://www.meetup.com/London-Web-Performance-Group/events/209433702/

This same talk was given at the Velocity Conference in New York this year.
The speakers were Robin Osborne @rposbo and Dean Hume @deanohume, two developers who share a common interest in web performance.

So what is the HTTP Archive?

The HTTP Archive is a vast data store of web sites (http://httparchive.org/); it collects web content and records how it is served and constructed.

It runs once every month and collects a lot of information about performance, including load time, page size, HTTP requests and much more.
The information is stored in MySQL and can be downloaded. The problem is the size of the file: several hundred terabytes.
But wait a minute: we have Google BigQuery, which provides an endpoint to query this information:
https://cloud.google.com/bigquery/
BigQuery makes it really easy to query big data. We can run SQL queries really fast.

In addition, we have bigqueri.es, a community of people sharing queries to run against the HTTP Archive.
They provide the sample SQL query together with some results and discussion with other members. You can also post your own query and ask questions.

Checking performance of web sites

So Robin and Dean created a couple of benchmarking queries. Their aim was not only to find the best and worst web sites, but more importantly to check what those sites were doing well or badly in terms of performance. Yesterday they shared their main conclusions.

They explained how they measured it:
-They excluded the top 100 websites. Big companies have dedicated performance teams; they wanted to know what regular developers can do on a day-to-day basis.
-Measurements: fully loaded time, page download size, page speed (0-100), speed index (how quickly the different checkpoints in the page load are reached).
-Whether sites were usable and modern, whether they followed best practices, and a bonus if the site was responsive.

The Good:

Or what these web sites did well...

Filament Group:

A very fast, responsive page, and they blog about how they do it.
They scored 100.
Their trick is reducing the critical path, which means getting the main content in first, while the site is loading.
In addition they have made several tools available on GitHub: Grunt-CriticalCSS, LoadCSS, LoadJS.

Nature.com:

They scored 86.
They stick to the basics, the 14 rules for faster-loading web sites: http://stevesouders.com/hpws/rules.php
Monitoring is key for them: they monitor every new feature they release. They use ShowSlow.com, StatsD and Graphite.

Zomato.com:

They scored 88.
Their points are:
-Start small: a blank page is always going to be the fastest, so start from there and add only what is needed.
-Caching: heavy use of caching, which makes sense since they are a search provider. HTTP caching.

Envato marketplaces:

They scored 85.
They stick to the basics and aim for the low-hanging fruit.
They think about performance from the earliest design stage.
If users add their own images you need a performance strategy: consider user-generated content.

The Bad:

In this case they didn't contact the web sites directly, to avoid disappointing them.
Instead, Robin and Dean asked themselves: if they could do one single thing to improve each of these sites' performance, what would it be?

Welovefashion.it:

In this case a lot of data is downloaded to the client when the page loads.
Simple trick: enable compression.

GU-JAPAN.com

15.5 MB of images loaded at start-up.
717 HTTP requests.
And all of this because of an image carousel.
One simple trick: remove carousels; it has been studied that only 1% of users care about the carousel.
If you don't believe it, check ShouldIUseACarousel.com

GAMEPEDIA.com

They have user-generated content, such as images that aren't scaled. Trick: serve scaled images.

The Ugly:

Can it go even worse?

Sailboatlistings.com

They scored 44.
The reason is that they build 15,000 DOM elements up front, because of an infinite scroll on the home page.

colorsbycherry.com

Scored 14!!
It takes one minute to load completely, because of the large number of images.

CALLOFDUTY.com

It's a great page with reasonable measurements, but... they scored 11 out of 100.
The reason is that they have a video looping in the background; it is not streamed, the entire video is downloaded again and again.

Performance in the build process

This was a great part of the talk. Robin and Dean explained that performance can be measured and automated as part of the build process.
We can use PSI (PageSpeed Insights), which measures performance, and automate it in the build so that the build fails if the measured speed goes beyond some threshold.
The problem with PSI is that it needs a publicly exposed URL, and that means the functionality needs to be released. So what happens if we want to automate this for the branch we are currently developing?
We can use ngrok, which creates a secure tunnel to localhost, so that PSI can run against it.


So that's it for today.

Keep coding and being performant!

Monday, 13 October 2014

NCrunch, your friend with TDD

Today I wanted to tell you about NCrunch, a great tool I've discovered thanks to my colleagues at my new job. It allows us to automatically run our unit tests while we are writing the production code.


So we've been told about TDD, Test Driven Development. The methodology itself is easy to understand but, especially in the beginning, hard to adopt; I think that is because it may seem to go against common sense. TDD is based on three simple steps (see the sketch below):
1) Start with a failing test (don't fail to fail!). This test checks a small working piece of functionality.
2) Write the minimum amount of code that makes it pass. Minimum means minimum!
3) Refactor the code (tests and production code) to remove duplication and make improvements.
We iterate through these steps, adding more and more tests, until the whole user story is finished. The previous tests must pass at all times.
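
As a tiny illustration of the first two steps (PriceCalculator is a made-up example, using NUnit):

```csharp
using NUnit.Framework;

[TestFixture]
public class PriceCalculatorTests
{
    // Step 1: a failing test for a small piece of functionality.
    [Test]
    public void Total_Of_A_Single_Item_Is_Its_Price()
    {
        var calculator = new PriceCalculator();
        Assert.AreEqual(10m, calculator.Total(new[] { 10m }));
    }
}

// Step 2: the minimum code that makes the test pass.
public class PriceCalculator
{
    public decimal Total(decimal[] prices)
    {
        return prices[0];   // deliberately naive; the next test will force more
    }
}
```

Step 3 is then refactoring both sides while everything stays green.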

You've probably noticed that running the tests is a pretty important part. I'm so used to watching the test explorer (whatever flavour it is) and running the tests manually as I follow TDD.

So here comes NCrunch to give us a hand with this task. NCrunch runs in parallel, watching our changes and running all of the tests again as soon as it detects any modification.
We'll see a few things going on:


There's a circle in the corner on the right side. It will be green if every test passes; otherwise it goes red with a number, which indicates how many tests are failing or how many projects don't compile.

On the left side, along the code, we see some arrows. These mark the starting point of tests. An arrow is green if the test passes and red if the test breaks.
In addition we get code coverage in a very easy graphical fashion: there are coloured circles next to each line. A circle is green if that line of code is successfully reached by a passing test, and it goes red if a test covering it fails. The circle is black when the line is not covered at all.

If you browse the NCrunch page you can see a video of all of this in action. An image is worth a thousand words:
http://www.ncrunch.net/

You can imagine how this helps with TDD: we write a test and it goes red immediately. We implement the minimum code (Resharper can help us generate it from the test). It really will be the minimum; otherwise we'll see black circles appearing in the logic. The test goes green and we refactor, keeping that test green, together with all the previous tests.

It's just amazing how you forget about the Test Explorer. When I started in my new job I noticed my colleagues hardly ever debug the code, and the reason was easy: NCrunch was doing the hard work for them. They simply code, focusing on the programming part.

NCrunch can be used with MSpec, SpecFlow, MSTest, NUnit and probably many others.
NCrunch is just another productivity tool (yes, Resharper is the big boy). I know I have a few friends who don't like to pollute their wonderful Visual Studio environment, arguing mainly that it makes VS slower.
Well, while that is true, I'm a big fan of these tools, because with them I can simply be a more efficient programmer. Yes, the machine may go slower, but isn't increased productivity a good reason to ask your manager for more memory? :)

As a side note, I would like to tell you something very funny that happened to me regarding NCrunch. In order to get the position at my new company, I needed to do a programming exercise. I finished it and I was proud of the result. I had followed TDD, so my code coverage ought to have been great. A couple of days after I started the new job, I opened my exercise solution with my brand new Visual Studio and NCrunch installed. Suddenly I noticed an entire method full of black circles, not covered at all... Luckily my technical leader, Richard, was kind enough to let me start working with them anyway.

Cheers mates, keep coding!


Saturday, 4 October 2014

What's going on with C# 6.0!

Hello! This is Juan Antonio, back in the arena again.
I'm sorry, it's been a long time since I last wrote a post back in August.
Certainly there have been lots of things to take care of, but I don't want to stop sharing things with my friends.

A couple of days ago an interesting Pluralsight course fell into my hands, What's New in C# 6.0, by K. Scott Allen: http://www.pluralsight.com/courses/csharp-6-whats-new
I really recommend watching it if you can, not only because of the content but also because Scott is a great speaker. I was really amazed by what is coming to the C# world.
In this post I want to summarize the new features of the language.

Everything I'll describe is possible because of the new compiler, the .NET Compiler Platform "Roslyn". It is an open source project, so we can download it and read the source code (http://roslyn.codeplex.com/). It comes along with a new version of Visual Studio. At the time of writing, all of these things are still in beta, so you can download everything for free.

Auto-property initializers:

With C#6.0 we can set the initial value of properties easily:
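
Something along these lines (the Menu class and its members are just illustrative):

```csharp
using System.Collections.Generic;

public class Menu
{
    // An auto-property can now be given an initial value inline...
    public string Name { get; set; } = "Menu of the day";

    // ...and even a getter-only (read-only) auto-property can be initialized.
    public List<string> Dishes { get; } = new List<string>();
}
```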


See how we can initialize even a read-only property.

Primary Constructors:

Primary constructors allow us to define a class constructor and capture the constructor parameters to initialize class properties.

In C# we have a common pattern: we inject a component through a constructor and use this injected component throughout the rest of the class:
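
For instance, something like this (Menu and MenuPrinter are made-up types):

```csharp
public class Menu
{
    public string Name { get; set; }
}

public class MenuPrinter
{
    private readonly Menu _menu;

    // The classic pattern: the dependency comes in through the constructor
    // and is copied into a field for the rest of the class to use.
    public MenuPrinter(Menu menu)
    {
        _menu = menu;
    }

    public void Print()
    {
        System.Console.WriteLine(_menu.Name);
    }
}
```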



With a primary constructor we don't need to write the constructor explicitly; instead we can use the primary constructor together with auto-property initializers:
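
With the preview syntax described in the course it would look roughly like this (same illustrative MenuPrinter as above):

```csharp
// C# 6.0 preview syntax: the parameter list sits on the class itself,
// so there is no explicit constructor body to write.
public class MenuPrinter(Menu menu)
{
    // The primary constructor parameter can feed an auto-property initializer...
    public string Title { get; } = menu.Name;

    // ...and the menu parameter is also available in the rest of the class.
    public void Print()
    {
        System.Console.WriteLine(menu.Name);
    }
}
```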


The code becomes more elegant and concise, and the benefit is that the menu primary constructor parameter is available in the rest of the class as well.

Dictionary initializers:

Initializing a dictionary is now more concise with the new syntax:
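
For example (the menu prices are made up; this lives inside a method and needs System.Collections.Generic):

```csharp
// The new index-style initializer: each entry is written as ["key"] = value
// instead of the old { "key", value } pairs.
var menuPrices = new Dictionary<string, decimal>
{
    ["Soup"] = 4.50m,
    ["Paella"] = 9.95m,
    ["Flan"] = 3.25m
};
```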


Event initializers inside constructors:

Until C#6 the following piece of code is illegal:
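
If I remember the feature correctly, it refers to wiring up an event handler as part of an object initializer, something the compiler rejected before; roughly:

```csharp
using System;
using System.Timers;

public class EventInitializerSample
{
    public static Timer CreateTimer()
    {
        // Before C# 6 the += line inside the initializer did not compile;
        // events could only be subscribed to in a separate statement.
        return new Timer(1000)
        {
            AutoReset = true,
            Elapsed += (sender, args) => Console.WriteLine("tick")
        };
    }
}
```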


Params input parameters now with IEnumerable:

Now we can specify a variable number of arguments with IEnumerable and not only with an array, like this:
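
A sketch of what that could look like (Logger and Log are made-up names):

```csharp
using System;
using System.Collections.Generic;

public static class Logger
{
    // The params modifier applied to IEnumerable<T> instead of an array.
    public static void Log(params IEnumerable<string> messages)
    {
        foreach (var message in messages)
        {
            Console.WriteLine(message);
        }
    }
}

// Callers could still write Log("starting", "loading menu", "done"),
// or pass any IEnumerable<string> directly without calling ToArray().
```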

Declaration expressions:

Normally we declare a variable and then assign it the value of an expression. C# 6 will allow us to join the declaration and the expression. Let me show you an example:
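
Roughly like this, using the preview syntax (PrintTotal is a made-up method):

```csharp
using System;

public static class DeclarationExpressionSample
{
    public static void PrintTotal(string input)
    {
        // The out variable 'total' is declared right inside the TryParse call,
        // instead of on a separate line above it.
        if (int.TryParse(input, out var total))
        {
            Console.WriteLine("Total: " + total);
        }
        // 'total' lives only where it is actually needed.
    }
}
```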


See how we declare the out variable total right inside the TryParse call.
The benefit is that the code becomes more explicit and we declare the variable where it makes more sense, taking more control over its scope.

Using static:

A simple but cool feature. If we have a static method on a static class, we can now declare the static class in the usings and just invoke the static method anywhere in the file. Therefore we don't have to type the typical Assert.AreEqual, but instead:
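
A sketch of the idea (the exact form of the using changed between preview builds, and MenuTests is made up):

```csharp
using NUnit.Framework;
using static NUnit.Framework.Assert;   // early previews wrote this without the 'static' keyword

public class MenuTests
{
    [Test]
    public void Two_Plus_Two_Is_Four()
    {
        // No need to type Assert.AreEqual any more.
        AreEqual(4, 2 + 2);
    }
}
```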

Conditional access:

Now, instead of having lots of null checks before accessing the properties of an object, we can use ?., so we'll get the value of the property only if the object is not null, like this:
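
For example (Restaurant and Chef are illustrative types):

```csharp
public class Chef { public string Name { get; set; } }
public class Restaurant { public Chef Chef { get; set; } }

public static class ConditionalAccessSample
{
    public static string ChefName(Restaurant restaurant)
    {
        // Instead of: if (restaurant != null && restaurant.Chef != null) ...
        // the ?. operator returns null as soon as anything in the chain is null,
        // and combines nicely with ?? for a default value.
        return restaurant?.Chef?.Name ?? "Unknown chef";
    }
}
```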


Expression bodied members:

This feature allows us to define the value of a property using lambda syntax, like we have done here for NumberOfDishes:
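
Probably something along these lines (reusing the illustrative Menu class):

```csharp
using System.Collections.Generic;

public class Menu
{
    public List<string> Dishes { get; } = new List<string>();

    // An expression-bodied, read-only property: the body is a single
    // expression written with lambda-like syntax instead of a get block.
    public int NumberOfDishes => Dishes.Count;
}
```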


There are a few more features, but this post is just a taste of what we will soon have available. They are a bunch of small features, but altogether they will allow us to write simpler, more concise code.
Remember, great power involves great responsibility.
Looking forward to having these tools in our daily projects!

Keep coding!